\section{Motivation and Background} After three decades of continuous growth~\cite{woodall_measuring_2017}, the Web has become an integral part of our society. It was designed as an open and transparent system, but more recently, algorithmic systems that neglect these principles have come to populate the Web. Exemplary of these systems are recommender systems, whose impacts range from societal discourses (e.g., the British EU referendum or the U.S. presidential election of 2016) to mundane details of everyday life, such as choosing a product or service, picking a place to spend the holidays, or consuming personalized entertainment. Across this whole range of uses, recommender systems are often interpreted as algorithmic decision support systems, and discussions of them frequently turn to bias (e.g.,~\cite{oneil_weapons_2016}). One of the most frequently raised issues is that of \emph{algorithmic bias}, where specific groups of people, based on gender, ethnicity, class or ideology, are systematically discriminated against by algorithmic decisions~\cite{baeza-yates_bias_2018}. These discussions indicate a growing discomfort with the algorithmic advances both used in and facilitated by the Web. To this end, ongoing design discourses urge engineers to consider explaining the presence and function of algorithms to end users~\cite{kuang_next_2017}. Lawmakers, too, are increasingly called upon to respond to issues such as algorithmic bias. Exemplary of the latter is the General Data Protection Regulation (GDPR) of the European Union, which seeks to curtail violations of data privacy and the unregulated profitization of user data. Selbst and Powles go so far as to say that the GDPR effectively guarantees end users a ``right to explanation'' of algorithmic processes~\cite{selbst_meaningful_2018}. This right to explanation is understood as an explicit challenge to the discipline of Human-Computer Interaction (HCI), to be met with concrete means of representing and explaining algorithmic processes. In HCI, this challenge aligns with the discourse of \textit{algorithm awareness}. Hamilton and colleagues define algorithm awareness as the extent to which a user is aware of the existence and functioning of algorithms in a specific context of use~\cite{hamilton_path_2014}. However, the scope of the term algorithm awareness is not yet clearly defined, partially as a result of the lack of experimental results associated with the discourse. As a consequence, it is unresolved whether algorithm awareness is the result of unearthing new methods of interaction, novel forms of representation, finding means of explaining algorithmic processes, or all of these aspects taken together. Similarly, its methodological perspective is vaguely defined. Are algorithm-aware designs, for example, a result of a critical technical practice~\cite{sengers_reflective_2005}, or are they a new form of human-centered design? If algorithm awareness as a principle is to contribute to an understanding of web-based algorithmic systems today (and tomorrow), these methodological shortcomings need to be addressed. In our research, we focus on one specific aspect of the discourse on algorithm awareness: the aspect of algorithm representation. We first discuss related work from the areas of HCI, Computer-Supported Cooperative Work (CSCW), and Science and Technology Studies (STS). We argue that algorithm awareness should be understood within the context of human-technology relations, since algorithmic systems increasingly impact how we see the world.
We then introduce a use case whose open design allows for studying different representations for algorithm awareness. The use case is situated in the peer production system Wikidata, in which editors use the completeness recommender Recoin to receive recommendations for their next edits. As opposed to commercial web-based systems, the design principles of Wikidata give us access to all necessary information regarding Recoin's engineering and usage. We can thus reflect on the various decisions made during Recoin's development and can suggest different modes of representing the algorithmic system by considering the dimensions of explainability and interactivity. Our research makes the following contributions: We provide experimental results to be used in the continuing development of the discourse on algorithm awareness. This concerns insights on design measures, namely textual explanation of algorithmic logic and interactive affordances. Our results suggest that providing users with more interactive means for exploring the mechanism of the algorithm has significant potential, but that more research is needed in this area. As for conducting experiments in this context, we provide initial methodological insights, which suggest that the measures of comprehension, fairness, accuracy and trustworthiness employed in the field are not yet exhaustive for the concerns of algorithm awareness. Our qualitative insights provide a first indication of further measures. The study participants, for example, were less concerned with the details of understanding an algorithmic calculation than with who or what is judging the result of the algorithm. In the following section, we discuss existing research from three perspectives. First, we discuss the role of automation in peer production communities, which has led to an increased usage of algorithmic systems in this context. Second, we review existing approaches that attempt to make these algorithmic systems transparent. Third, we combine these insights to argue for the urgency of researching algorithm awareness. This theoretical section is followed by a detailed introduction to Recoin, our use case, including its technical design as well as how it connects specifically to the topic at hand. Subsequently, we showcase our experiment, detailing the setup, design, results and analyses involved in our experimental study. Then, we proceed to discuss our insights. Finally, we conclude by outlining future work, in which we seek to undertake qualitative studies into how the evaluated modes of representation affect the relations between humans and algorithmic systems. \section{Related Work} \label{sec:relatedwork} \paragraph{\textbf{Automation in Peer Production Communities}} In contrast to the predominant commercial platforms on the Web, peer production communities, such as Wikipedia, OpenStreetMap, or Linux, provide a valuable alternative for people to openly share their ideas, experiences, and collaboratively created knowledge~\cite{Benkler:2002vn,Benkler:2006pi}. In these communities, automation is an integral component for handling especially ``mindless, boring and often reoccurring tasks'' \cite{Mueller-Birn:2013uq}.
In Wikipedia, for example, various forms of algorithmic support exist: recommender systems, such as SuggestBot, help people find suitable tasks~\cite{Cosley:2007fk}; neural networks, such as those employed by ClueBot, help revert obvious vandalism~\cite{carter2008cluebot}; and semi-automated user interfaces, such as Snuggle, help editors socialize newcomers more efficiently~\cite{Halfaker:2014ky}. Wikidata, as Wikipedia's sister project, profited from the experience gained in these automation efforts; tools for vandalism detection, for instance, were highly sophisticated from the beginning~\cite{Sarabadani:2017ia}. However, depending on how this automation is used, the outcome can go in either direction. The unreflective use of automation can suppress the participation of good-faith newcomers~\cite{Halfaker:2012uq}, while, on the other hand, recommender systems on Wikipedia can significantly improve editor engagement and content creation~\cite{wulczyn2016growing}. Existing research shows how the openness of peer production systems, such as the various Wikimedia projects (Wikipedia, Wikidata, etc.), enables researchers to investigate the manifold facets of automation in a real-world setting, and simultaneously to support these projects in their goals of providing free, high-quality content. \paragraph{\textbf{Approaches to Algorithm Awareness}} Compared with related discourses such as Fairness, Accountability and Transparency (FAT)~\cite{kohli_translation_2018} or Explainable Artificial Intelligence (XAI)~\cite{gunning_explainable_nodate}, algorithm awareness is more closely aligned with the study of lay persons' experiences of algorithmic systems. As with FAT and XAI, the concerns of the discourse are illustrative of pressing socio-cultural, economic and political needs. However, and similarly to FAT and XAI, algorithm awareness so far suffers from the lack of a methodological definition. Both in terms of design and engineering, the implementation of algorithm-aware designs is challenged by two fundamental issues, which can be derived from Hamilton and colleagues' definition~\cite{hamilton_path_2014}: (1) the perceivability of an algorithm (e.g., results, logic, data) and (2) an actionable mode of representation that leads to informed usage. The contexts of the algorithm awareness studies conducted so far differ greatly. Studies have included, for example, both attempts at reverse-engineering web-based systems such as Facebook's newsfeed~\cite{eslami_feedvis:_2015} and manipulations of online peer grading systems~\cite{kizilcec_how_2016}. In the former, Eslami specifies that an algorithm-aware design should provide an actionable degree of transparency of algorithmic processes in order to promote a more informed and adaptive use of a specific system by its users~\cite{eslami_understanding_2017}. In her study, Eslami operationalizes the approach of \textit{seamful design}\footnote{The term seamful design was coined as the opposite of seamless design, in which the algorithmic system fades into the background of human perception.} to display results of the Facebook newsfeed algorithm that usually do not get displayed. In the latter, Kizilcec proposes another dimension of algorithm awareness~\cite{kizilcec_how_2016}: the question of how much transparency of an algorithmic system is actually desirable to ensure understandability and usage. For his study, Kizilcec exposes participants in a peer-graded massive open online course to three kinds of transparency when confronting them with their course grades.
For each kind of transparency, he asked participants to rate their comprehension of the user interface and the extent to which they evaluated the provided information as fair, accurate and trustworthy. These constitute a first set of measures for empirically studying how humans understand algorithmic systems. His results suggest that a medium degree of transparency (in this case, textually disclosing the result and logic) is most effective. High transparency (of the result, the underlying logic and the raw peer grading data), he finds, is in fact detrimental to trust in the algorithmic system -- whether or not the received grade was lower than expected. A particular focus in algorithm awareness, as well as in XAI and FAT, is the concrete means by which humans may become more informed about algorithmic systems. Across all of these discourses, a frequently deployed solution is the textual explanation of algorithmic processes or outputs, featured in contexts such as social media~\cite{rader_explanations_2018}, aviation assistants~\cite{lyons_engineering_2016}, online advertising~\cite{eslami_communicating_2018}, classification algorithms in machine learning~\cite{ribeiro_why_2016} and online peer grading, as discussed above~\cite{kizilcec_how_2016}. The prevalence of this solution may be interpreted as a clear indication that textual explanation is most suitable for establishing algorithm awareness. Within the aforementioned studies, various versions of textual explanations were studied comparatively. For example, even though Kizilcec questioned how much information a user may require, his various conditions of transparency all featured textual, explanatory solutions only~\cite{kizilcec_how_2016}. This may be considered a gap in the discourse. Returning to Hamilton and colleagues, the complexities of contemporary algorithmic systems pose not only the question of how much humans may need to understand, but also in what way~\cite{hamilton_path_2014}. This suggests, for example, that experimenters should also explore differences between textual explanations of algorithmic logic and interactive, non-declarative solutions in the same context. \paragraph{\textbf{Urgency for Algorithm Awareness}} Due to the increase of automation on the Web, finding means for a better understanding of algorithms, both by experts and lay users, is particularly urgent. Algorithms may substantially amplify existing biases. In the discourse on recommender systems, bias was recognized as a challenge early on, and a major line of recommender systems research investigates how to avoid popularity bias, i.e., providing recommendations that are already known to satisfy a large number of users~\cite{fleder2007recommender,fleder2009blockbuster}. More recently, several works have investigated the explainability of recommender systems~\cite{zhang2014explicit,he2015trirank}. Even open peer production systems such as Wikidata need to be seen in this context. That is, if there is a pre-existing bias in a knowledge base such as Wikidata, a recommender system may cause this bias to become self-perpetuating. Additionally, encoded bias may spread into the outputs of Wikidata APIs -- thereby opaquely influencing standards in domains that rely on Wikidata services. In his overview of bias on the Web, Baeza-Yates concludes that an awareness of bias (whether algorithmic or cultural) is the primary precondition for designers and engineers to mitigate potentially negative effects on users~\cite{baeza-yates_bias_2018}.
The developer perspective as advanced by Baeza-Yates suggests that an engineering solution may be found with the potential to eliminate bias, whether by analyzing biased tendencies in the data used by a Web platform or by running extensive A/B tests on subgroups~\cite{kohavi_controlled_2009}. However, as repeatedly noted by Wiltse and Redstr\"om, the complexity of algorithmic systems on the modern Web complicates this suggestion. In their words, the Web is populated not by clear developer-client relations, but by \textit{fluid assemblages}, i.e., socio-technical configurations that change in various contexts of use~\cite{redstrom_press_2015,wiltse_wicked_2015}. Bias, therefore, is not necessarily a definitive phenomenon for either human or machine. Accordingly, relying on purely technical solutions to eliminate bias needs to be up for debate. Instead, and as called for by various researchers in algorithm awareness, FAT and XAI, empirical studies are needed that provide insights into how algorithmic systems (and the biases encoded therein) may be made more transparent. In the next section, we introduce the context of the open peer production system Wikidata, in which our use case, Recoin -- a property recommender system -- is used. \section{Recoin: Property Recommender} Wikidata is an open peer production system~\cite{Vrandecic:2014hl}. Its structured data is organized into entities, of which two types exist: items and properties. Items represent real-world objects (individuals) or abstract concepts (classes); for example, the entity \texttt{Q1076962} represents the human \textit{Chris Hadfield}. Each item is described by statements that follow a \textit{subject-predicate-object} structure (e.g., \textit{Chris Hadfield} (\texttt{Q1076962}) ``is instance of'' (\texttt{P31}) ``human'' (\texttt{Q5})). Thus, a property, i.e., the predicate, describes the data value, i.e., the object, of a statement. As of October 2018, the community had more than 200k registered contributors, with 19k active on a monthly basis. They have created more than 570m statements on more than 50m entities. Even though Wikidata was founded to serve as a structured data hub across all Wikimedia projects, today it is utilized for many other purposes; for example, researchers use Wikidata as an authoritative resource for interlinking external datasets, such as gene data~\cite{burgstaller2016wikidata} or digital preservation data~\cite{thornton2017modeling}, and companies such as Google or Apple use Wikidata's knowledge graph to improve their search results. A significant issue for Wikidata's community is consequently the quality of the data. Data quality is a classical problem in data management; in peer production settings such as Wikidata, however, data quality assessment is further complicated by the continuous, incremental data insertions of its users, the distributed expertise and interests of the community, and the absence of a defined boundary in terms of scope. Over the past years, the community has introduced many tools that address this challenge, ranging from visualizations of constraint violations to de-duplication and translation tools. One of these tools is Recoin, which we present in more detail in the next section. \subsection{Technical Design} Recoin is a recommender system for understanding and improving the completeness of entities in Wikidata~\cite{ahmeti2017assessing,balaraman2018recoin}.
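Recoin operates on the statement model introduced above. As a minimal illustration of how such statements can be represented and compared -- our own sketch, not Recoin's actual implementation; the occupation identifiers are assumptions -- consider:

\begin{verbatim}
# Statements as subject-predicate-object triples, following the
# running example; Q11631 ("astronaut") is assumed for illustration.
statements = [
    ("Q1076962", "P31",  "Q5"),      # Chris Hadfield -> instance of -> human
    ("Q1076962", "P106", "Q11631"),  # Chris Hadfield -> occupation -> astronaut
]

def properties_of(item, stmts):
    """Return the set of properties (predicates) used on an item --
    the basic unit a completeness tool compares across similar items."""
    return {p for (s, p, o) in stmts if s == item}

print(properties_of("Q1076962", statements))  # {'P31', 'P106'}
\end{verbatim}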
A main motivation for implementing Recoin is Wikidata's openness, which allows anyone to add nearly any kind of entity -- items and properties alike. The latter has led to a huge space of possible properties (4,859 properties as of July 2nd, 2018), with many applying only to a very specific context (e.g., ``possessed by spirit'' (\texttt{P4292}) or ``doctoral advisor'' (\texttt{P184})). Consequently, even experienced Wikidata editors may lose track of which properties are relevant and important to a given item, which might hinder them from improving data quality in Wikidata~\cite{razniewski2017doctoral}. Recoin is a gadget -- an interface element -- on Wikidata\footnote{Further information is available at \url{https://www.wikidata.org/wiki/Wikidata:Recoin}.}. A visual indicator informs a person about the relative completeness of an item and, moreover, provides an editor with concrete recommendations about potentially missing properties on this item. Figure~\ref{fig:recoin} shows the gadget on an item page on Wikidata. The visual indicator (the icon on the top right) shows a color-coded progress bar with five levels, ranging from empty to complete. At the top of an item page, the recommendations are provided in an expandable list that shows up to ten of the most relevant missing properties. The idea of relative completeness is motivated by the fact that, in an open setting, measuring the completeness of an item in absolute terms is impossible. Relative completeness thus considers completeness in relation to other, similar items. Recoin's relatedness function considers two items similar if they share the same class\footnote{An exception are items that are an instance of the class \emph{human}; in this case, similarity is based on the shared ``occupation'' instead.}. The visual indicator of Recoin should therefore not be understood as an absolute statement -- level 5 (complete) does not mean that all possible statements are given on the item page -- but rather as a comparative measure: the statements on this item are more complete than those on similar items. The completeness levels in Recoin are based on manually determined thresholds over the average frequency of the five most frequent missing properties among the related items: an item is considered \emph{most complete} at 0\%-5\% average frequency, \emph{quite complete} at 5\%-10\%, and so on. Furthermore, each user is shown at most ten recommendations in order to avoid an overwhelming user experience. \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{recoin} \caption{Recoin for the astronaut Chris Hadfield.} \label{fig:recoin} \end{center} \end{figure} \subsection{Need for Algorithm Awareness} \label{sec:needalawarness} As of September 25, 2018, Recoin has been enabled by 220 editors on Wikidata\footnote{Further information is provided at \url{https://www.wikidata.org/wiki/Special:GadgetUsage}.}, who have created, based on Recoin's recommendations, 7,062 statements on 4,750 items. Even though Recoin is a straightforward approach to improving data quality on Wikidata, editors hesitate to adopt it. Moreover, those who have used Recoin have raised a number of concerns. Based on existing discussions on Recoin's community page\footnote{\url{https://www.wikidata.org/wiki/Wikidata_talk:Recoin}} and on the mailing list, we identified three typical issues, which we discuss below.
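Before turning to these issues, the level logic just described can be summarized in a short, hedged sketch; only the first two thresholds are documented above, so the remaining cut-offs and all numbers are illustrative assumptions, not Recoin's actual code:

\begin{verbatim}
# Hypothetical sketch of Recoin's completeness levels. Input: the
# relative frequency (0..1), among related items, of each property
# that is missing on the current item.
def completeness_level(missing_freqs):
    top5 = sorted(missing_freqs, reverse=True)[:5]
    avg = sum(top5) / len(top5) if top5 else 0.0
    # Rare missing properties -> relatively complete item.
    thresholds = [(0.05, 5), (0.10, 4), (0.25, 3), (0.50, 2)]  # assumed
    for cutoff, level in thresholds:
        if avg <= cutoff:
            return level
    return 1  # least complete

print(completeness_level([0.67, 0.40, 0.12, 0.03]))  # -> 2 under these assumptions
\end{verbatim}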
A first set of questions concerned the \textit{scope of the recommender}: \emph{``Not sure if Recoin looks at qualifiers \& their absence; if not, might this be something to think about?''}. The information provided by Recoin did not enable editors to understand which data were being used to compute the recommendations. In another case, an editor wondered about Recoin's underlying algorithm: \emph{``Something weird going on on Jan Jonker (Q47500903), stays on least complete.''}. Here, the unchanging visual indicator of Recoin caused the user to \textit{question the functionality} of Recoin. Another user was concerned about a provided recommendation and its suitability for specific items: \emph{``How is Property:P1853 "Blood type" on this list, is that relevant (or even desirable) information for most people?''}. The user was not able to incorporate their personal preferences -- their world view -- into Recoin's recommendations. The third typical issue raised by Wikidata's editors, however, is a more genuine concern over the \textit{impact of Recoin on an already biased knowledge base} (e.g., the predominance of the English language~\cite{kaffee_glimpse_2017}). One editor stated: \emph{``This tool has its attractions but it does entrench cultural dominance even further as it stamps "quality" on items. The items with the most statements are the ones that are most likely to get good grades. Items from Indonesia let alone countries like Cameroon or the Gambia are easily substandard.''}\footnote{The corresponding thread can be found in Wikidata's mailing list archive: \url{https://lists.wikimedia.org/pipermail/wikidata/2017-December/011576.html}.} On a surface reading, this quote further substantiates how misunderstood Recoin's function is: it is not intended as a unilateral, absolute grading of the completeness of a particular item, but rather as a comparative tool whose recommendations depend on the activities of editors on similar items. Much more significantly, however, the concern raised about cultural dominance is a very contemporary problem in algorithmic system design, and Recoin's current design and mediated function fail to address it. In other words, the cultural bias in the recommended properties, even if not intended, seems to affect the usage of Recoin. Based on these insights, we wanted to better understand how a redesign of Recoin that considers algorithm awareness, by focusing on explainability and interactivity, can address the aforementioned issues. As opposed to existing research in this context, for example that carried out by Eslami et al.~\cite{eslami_feedvis:_2015}, we do not require methods such as reverse engineering to understand the algorithmic system we are dealing with on a technical level. This knowledge is key to understanding the intricacies of today's Web platforms, as the ways in which an algorithm operates within a larger socio-technical context arguably also shape the extent to which humans can or should be aware of it. Therefore, with an openly available recommender system in an open peer production system, we can conduct experiments that are closely tied to the actual practice of Wikidata editing, i.e., we can reflect on the technical and the social system alike. In the following section, we introduce our experimental setup, which helps us examine the impact on humans of varying degrees of explainability and interactivity in the recommender system's UI.
Following the concept of Recoin, our experiment featured a data completion task. By measuring the interactions of participants with various designs during this task and by eliciting self-reports, we sought to understand which design measures increased task efficiency while at the same time being most effective in increasing understanding of the algorithmic system. \begin{figure}[t] \begin{center} \includegraphics[width=6cm]{recoin-x} \caption{Recoin with Explanation \textit{(RX)}.} \label{fig:recoin-x} \end{center} \end{figure} \section{Experimental Setup} Informed by previous research, we designed two alternative UIs, each of which represents a different degree of explainability and interactivity. Each user interface extends or replaces the previous version with specific design elements. In the following, we differentiate the original Recoin design (R1), the textual explanation design (RX), and the interactive redesign (RIX). The RX design is mainly inspired by previous work, from which we adapted the explanation design~\cite{kizilcec_how_2016}. The RIX design follows an interactive approach, in which the user can interact with the outcome of the algorithm and thus explore how this outcome changes under specific settings. The three UI designs are used in five experimental conditions: four conditions featuring Recoin (C1 to C4), supplemented by a baseline in which participants only used the regular Wikidata interface. We explain these conditions in more detail below. Based on these designs, participants solved the same task across all conditions: adding data to a Wikidata item. We recruited 105 participants via Amazon Mechanical Turk (MTurk)\footnote{For more information, see \url{https://www.mturk.com/}.}, whereby each participant had a minimum task approval rate of 95\% and a minimum of 1,000 approved HITs. Each participant received USD 3.50 (equivalent to an hourly wage of USD 14.00) for full participation. We recruited only U.S. participants to reduce cultural confounds. We randomly and evenly distributed participants over our five conditions (i.e., 21 participants each), and ensured via MTurk qualifications that no participant could redo the task or join another condition. Each participant was given 10 minutes for task completion. In each condition, participants went through the same general procedure. First, we provided a brief on-boarding, followed by a task briefing. After carrying out the task, each study participant had to fill out an explicative self-report covering the dimensions comprehension, fairness, accuracy and trust. Additionally, all participants obtained a task completion score, which we correlated with server activity to ensure that our final study corpus featured no invalid contributions. All data and results of our study will be made available under an open license\footnote{Omitted for review.}. In the following, we outline the design decisions that led to our three designs for Recoin in more detail. Then, we describe the task and the experimental design. \subsection{Design Rationales} In the following, we describe each UI approach of the recommender Recoin in more detail. For each user interface, we provide a corresponding visual representation. \subsubsection{Recoin User Interface R1} The original design of Recoin (cp. Figure~\ref{fig:recoin}) was primarily informed by existing UI design practices in Wikipedia.
The status indicator icon was chosen to mirror the article quality levels on Wikipedia, such as ``Good article'' or ``Featured article''\footnote{For more information, see \url{https://en.wikipedia.org/wiki/Wikipedia:Good_articles} and \url{https://en.wikipedia.org/wiki/Wikipedia:Featured_articles}.}. The progress bar was motivated by existing visualizations in Wikimedia projects.\footnote{Examples are \url{https://www.wikidata.org/wiki/Wikidata:WikiProject\_Q5/lists/riders\_and\_their\_horse} and \url{https://www.wikidata.org/wiki/Wikidata:Wikivoyage/Lists/Embassies}.} Some parameters for representing the results of the Recoin recommender, such as the thresholds for the five levels of completeness, were determined without further consideration. \begin{figure}[t] \begin{center} \includegraphics[width=8.2cm]{recoin-ix} \caption{Recoin Interactive Redesign \textit{(RIX)}.} \label{fig:recoin-ix} \end{center} \end{figure} \subsubsection{Recoin User Interface RX} Textual explanation of algorithmic logic is a widespread measure in the related work, and has been deployed in contexts such as social media~\cite{rader_explanations_2018}, aviation assistants~\cite{lyons_engineering_2016}, online advertising~\cite{eslami_communicating_2018} and classification algorithms in machine learning~\cite{ribeiro_why_2016}. For our design of RX, we drew inspiration from Kizilcec, who tested three states of transparency to understand algorithm awareness in online peer grading~\cite{kizilcec_how_2016}. Since that algorithm's function is comparable to Recoin's (i.e., a rating algorithm), we adapted the format of Kizilcec's best-performing solution and added a textual explanation to Recoin's user interface that describes the logic behind Recoin's calculation (cp. Figure~\ref{fig:recoin-x}). \subsubsection{Recoin User Interface RIX} Our interactive user interface (RIX, cp. Figure~\ref{fig:recoin-ix}) is based on insights gained from the feedback of Recoin's current users (cp. Section~\ref{sec:needalawarness}) and from the philosophy of technology as discussed by Verbeek~\cite{verbeek_what_2006}. Concerning the latter, we posit that Recoin \textit{actively} transforms the relationship an editor has with Wikidata and the entities therein. Through Recoin, Wikidata items that formerly were objects containing knowledge are now also objects that are rated. Technically, this rating is not an indication of absolute qualities, but one of community-driven standards, i.e., of how the Wikidata community currently views a specific class of items. However, as illustrated by the various responses to Recoin, this mediation is not adequately communicated by Recoin's current design. Furthermore, in reflecting on Recoin with the original developers, we found that the comparative parameter -- averaging the relevance of the top five properties -- was chosen arbitrarily. In line with Mager~\cite{mager_internet_2018}, we consider transparency about such developer decision-making essential. We operationalized these insights for RIX by considering how the community-driven aspect of Recoin could not only be displayed, but made interactively explorable. To this end, we (1) included a reference to the class of the displayed entity (e.g., ``astronaut'' in our running example) in the drop-down title. This was designed to convey that this particular item is rated based on its class. Next, we augmented the drop-down itself extensively.
We (2) substituted the relevance percentage with a numerical explanation for each suggested property (e.g., a relevance of 67.03\% for the property `time in space' means that 549 out of 819 astronauts have this property). In contrast to a percentage, it was our intuition that relating to the class would highlight the community-driven aspect of Recoin. To strengthen this aspect further, we (3) included a range slider that allows filtering properties based on their prominence in the class (i.e., comparing this entity only against properties that occur in a minimum/maximum of \textit{n} astronauts). Finally, we offered a way to interact directly with Recoin's calculation: we (4) allowed our participants to reconfigure the relevancy comparison by (de-)selecting individual properties. Thereby, we wished to show that relevancy can be a dynamic, community-driven attribute in this algorithmic system. \begin{figure}[t] \begin{center} \includegraphics[width=8cm]{anon_briefing} \caption{Briefing page, with material added and resources for manually carrying out Recoin's functions and tutorial.} \label{fig:briefing} \end{center} \end{figure} \subsection{Task} For the study participants, we defined a typical editing task on Wikidata. We presented each study participant with a copy of Wikidata's user interface to provide as realistic a task setting as possible. First, the participants received a brief on-boarding for Wikidata and, depending on the condition, for Recoin as well. Participants then proceeded to the task briefing page (cp. Figure~\ref{fig:briefing}). The participants were asked to add further properties and data to a Wikidata item. Additionally, we supplied participants with a short video tutorial that explained how properties can be added to an item on Wikidata. In each condition, the Wikidata item to be edited was \textit{Chris Hadfield}, a Canadian astronaut\footnote{The original page is provided here: \url{https://www.wikidata.org/wiki/Q1076962}.}. This item was chosen because, on the one hand, it has a number of missing statements that are easily retrievable and, on the other hand, it describes an astronaut who is probably well known to our U.S.-based study participants. Additionally, the occupation of astronaut was thought to be relatively neutral, as opposed to, for example, that of politicians or soccer players. We provided study participants with source material for the task, composed of comparatively relevant and irrelevant pieces of information about \textit{Hadfield}. We also supplied a link to a very detailed Wikidata item with the same occupation, the US-American astronaut \textit{Buzz Aldrin}\footnote{\url{https://www.wikidata.org/wiki/Q2252}.}, and a link to a Wikidata query for the occupation ``astronaut''\footnote{See \url{http://tinyurl.com/ycnh3q37}.}, both with the intention of allowing study participants to compare the given item with other items, i.e., we encouraged our participants to perform the functionality of Recoin manually. Following the task briefing, participants could choose to commence the task, which led to the reconstructed Wikidata page for \textit{Hadfield}. Within a 10-minute limit, participants could then add statements to the item. Once the 10 minutes had passed, participants were alerted that time was up and that they should proceed to the self-report.
Here, participants were confronted with a grade (from A to F) for their task. This grade was calculated from the difference in completeness before and after participants added information to the Wikidata item (e.g., when a participant's additions increased the relative completeness of \textit{Hadfield} by more than 20\% but less than 30\%, they received a ``B''). In correspondence with this grade, participants were asked to rate their comprehension (5-point Likert scale) and their perception of the accuracy, fairness and trustworthiness (7-point Likert scales) of the recommender system. Again, due to substantial methodological and contextual similarities to Kizilcec's online study~\cite{kizilcec_how_2016}, we adapted the aforementioned measures for our study. Participants were also asked to expand on their ratings using free text fields. Upon submitting their ratings, participants were returned to the MTurk platform. \begin{table}[t] \begin{tabularx}{\columnwidth}{XL{6cm}} \toprule \textbf{Relevance} & Difference of the completeness value of the item before and after task completion. \\ \hline \textbf{Usage} & Number of times the recommender Recoin was used during task completion. \\ \hline \textbf{Compre\-hension} & To what extent do you understand how your task has been graded? \textit{(1) No understanding at all to (5) Excellent understanding.} \\ \hline \textbf{Fairness} & How fair or unfair is your grade? \textit{(1) Definitely unfair to (7) Definitely fair.} \\ \hline \textbf{Accuracy} & How inaccurate or accurate is the grade? \textit{(1) Very inaccurate to (7) Very accurate.} \\ \hline \textbf{Trust} & How much do you trust or distrust Wikidata to fairly grade your task? \textit{(1) Definitely distrust to (7) Definitely trust.}\\ \bottomrule \end{tabularx} \caption{Overview of measures employed in our online experiment.} \label{measure-table} \end{table} \subsection{Study Design} We conducted a between-subjects study with five conditions. In the following, we define each condition and explain each measure we collected during the study. \subsubsection{Conditions} The first three conditions (baseline, C1, C2) were designed to test usage and understanding of the current version of Recoin, i.e., R1. We then tested against this baseline a textual explanation (C3 with RX), as found in related work \cite{kizilcec_how_2016,gunning_explainable_nodate,phillips_interpretable_2018}, and a redesign motivated by the shortcomings found therein (C4 with RIX). By comparing the results of the conditions, we aimed to gather insights on how design impacts human understanding of Recoin's function. All conditions are described in more detail next, followed by a description of the collected measures. \begin{itemize} \item \textit{Baseline}: Participants can add data to a Wikidata item \textit{without} Recoin being present in the user interface. \item \textit{Condition 1}: Participants can add data to a Wikidata item \textit{with} Recoin (R1) being present in the user interface. \item \textit{Condition 2}: As C1, but Recoin is mentioned during the on-boarding process. \item \textit{Condition 3}: As C2, but with the explanation interface of Recoin (RX). \item \textit{Condition 4}: As C2, but with the interactive interface of Recoin (RIX). \end{itemize} \subsubsection{Task Measures} \textbf{Relevance}: As the improvement of data quality is the primary goal of Recoin, we wanted to ensure that we understood how each condition affected the change in completeness, independent of the quantity of contributions.
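This change in completeness is the same quantity that underlies the grade shown to participants. As a hedged sketch of such a mapping -- only the ``B'' band is documented above; the other cut-offs are assumptions for illustration:

\begin{verbatim}
# Hypothetical mapping from the increase in relative completeness
# (in percentage points) to the displayed grade. Only the "B" band
# is taken from the text; the other bands are assumed.
def grade(relevance_increase):
    bands = [(30, "A"), (20, "B"), (10, "C"), (5, "D")]
    for cutoff, letter in bands:
        if relevance_increase > cutoff:
            return letter
    return "F"

print(grade(25))  # -> "B"
\end{verbatim}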
We thus defined the metric \textit{relevance} as our dependent variable: the difference of the completeness values of Recoin before and after a participant added properties to the item. \textbf{Recoin Usage:} Since Recoin is a recommender system, it is particularly important to understand how each condition (aside from the baseline) affected the number of times Recoin was used directly to add information to an item. This is expressed by the measure \textit{usage}, which serves as a second dependent variable. \textbf{Time:} We fixed the time in which participants could add properties to an item to ensure that our conditions are comparable. The measure \textit{time} serves as our control variable. \textbf{Demographics:} All study participants were recruited via MTurk. Beyond restricting participation to U.S.-based workers, we did not further specify our demographics; thus, as is typical, demographics were our covariates. \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{anon_questionnaire} \caption{Questionnaire after adding properties that led to an increase in relevance of 25\%.} \label{fig:questionnaire} \end{center} \end{figure} \subsubsection{Self-Report Measures} Upon completion of the task, participants were directed to a self-report page (cp. Figure~\ref{fig:questionnaire}). The page prominently featured a grading of the participant's task performance, which was calculated by normalizing the average comparative relevance, i.e., Recoin's assessment, of each contribution per participant. We graded the participants' task performance (A-F) in order to elicit a reaction to the task even if participants did not notice, use or understand Recoin. We purposefully designed this grading to encourage study participants to reflect on the task; for example, a participant might receive the grade F despite making many additions to the item. Furthermore, we included ratings for the four factors from previous research on algorithm awareness~\cite{kizilcec_how_2016}: comprehension (5-point Likert scale), accuracy, fairness and trustworthiness (7-point Likert scale) of the algorithmic system. According to the procedural justice theory in the related work, these measures should be strongly correlated; low ratings on all measures, for example, would stem from violated expectations regarding an outcome~\cite{kizilcec_how_2016}. In addition, we asked all participants to expand on their ratings via text fields, so as to also collect qualitative data. \begin{table}[t] \begin{tabularx}{\columnwidth}{XC{1.5cm}C{2cm}C{2cm}} \toprule \textbf{Condition} & \textbf{\# All Edits} & \textbf{\# Recoin Usage} & \textbf{SD Recoin Usage}\\ \midrule \textbf{Baseline} & 249 & - & - \\ \textbf{C1} & 319 & 61 & 3.10 \\ \textbf{C2} & 382 & 91 & 8.56 \\ \textbf{C3} & 301 & 55 & 3.50 \\ \textbf{C4} & 281 & 71 & 4.25 \\ \bottomrule \end{tabularx} \caption{Number of edits, i.e., contributions, in each condition, with the number and standard deviation of Recoin uses.} \label{edittable} \end{table} \subsection{Hypotheses} For our hypotheses, we were interested in testing the impact of Recoin on data completeness. Having provided participants with equal opportunities to add relevant data to the item \textit{Hadfield}, we examined whether or not Recoin improves the completeness of an item. Based on our analysis of the status quo (cp.
Section~\ref{sec:needalawarness}), we did not expect study participants to actively use Recoin, which led to the following hypothesis: \textbf{$H_{1}$:} Using Recoin does not lead to significantly higher relevance in terms of data completeness. Based on the discussed literature on algorithm awareness, we assumed that a user interface that conveys the explainability and interactivity of the underlying recommender system leads to higher usage rates: \textbf{$H_{2}$:} The interface design of Recoin impacts the number of times participants used Recoin. Furthermore, we assumed that the effectiveness of algorithm-aware designs would be captured most succinctly by the comprehension measure, which would accordingly allow us to distinguish the impact of the RX and RIX designs. Given the results of the textual explanations employed in related work (cp. Section~\ref{sec:relatedwork}), we therefore hypothesized: \textbf{$H_{3}$:} A textual explanation of the algorithmic logic leads to higher comprehension than the interactive redesign. Finally, to gain insights on methodological procedure, we sought to test the experimental self-report measures employed by Kizilcec~\cite{kizilcec_how_2016}. According to this research, the self-report measures should exhibit a high degree of correlation (Cronbach's $\alpha$ = 0.83). We therefore hypothesized: \textbf{$H_{4}$:} The correlation of self-report measures found for textual explanation solutions will equally hold for the interactive solution. \subsection{Results} \begin{table}[b] \begin{tabularx}{\columnwidth}{XC{0.9cm}C{0.7cm}C{0.95cm}C{0.7cm}C{0.7cm}C{0.8cm}} \toprule \textbf{Condition} & \textbf{Grade} & \textbf{Rel.} & \textbf{Comp.} & \textbf{Fair.} & \textbf{Acc.} & \textbf{Trust} \\ \midrule \textbf{Baseline} & C & 11 & 2 & 4 & 4 & 4 \\ \textbf{C1} & C & 15 & \textbf{3} & 4 & 5 & 4 \\ \textbf{C2} & C & 19 & \textbf{3} & 5 & 5 & \textbf{5} \\ \textbf{C3} & \textbf{B} & 20 & \textbf{3} & 4 & 4 & 4 \\ \textbf{C4} & \textbf{B} & \textbf{21} & \textbf{3} & \textbf{6} & \textbf{6} & \textbf{5} \\ \bottomrule \end{tabularx} \caption{Median values for (1) task performance: \textit{Grades} dependent on the increase in \textit{Relevance}; (2) self-report: \textit{Comprehension} (1-5), \textit{Fairness}, \textit{Accuracy} and \textit{Trust} (1-7).} \label{mediantable} \end{table} We recruited 21 participants for each condition ($n = 105$). Overall, we received 1,532 edits (cp. Table~\ref{edittable}), with participants in the \textit{C2} condition providing the most. In the \textit{C4} condition, our interactive redesign, participants used Recoin most frequently, with more than half (61.09\%) of participants adding data via the Recoin interface at least once. This condition also yielded the most relevant contributions, with a median increase in completeness for the Hadfield item of 21\%. The median values of task performance, i.e., received grade and average increase in completeness, as well as the ordinal Likert scales from the participant self-report, i.e., comprehension, fairness, accuracy and trust, can be seen in Table~\ref{mediantable}. We had expected only a small amount of qualitative data. However, we found that displaying a grade in the self-report provided a highly effective trigger: overall, 82 of our 105 participants chose to expand on their self-reported ratings via the provided text fields. This allowed us to probe participant statements for insights on specific subjective perspectives.
In the following, we present the results of our analysis for each hypothesis, using the Kruskal-Wallis test for ordinal data and ANOVA for numerical data. We report the results for the algorithm awareness measures with Spearman correlation tests. Finally, we provide findings from our qualitative analysis of participant statements. \begin{figure}[t] \begin{center} \includegraphics[width=7cm]{Rel_Rplot07} \caption{Boxplot of the mean increase of completeness per condition.} \label{fig:rel-con} \end{center} \end{figure} \subsubsection*{$H_{1}$: Using Recoin does not lead to significantly higher relevance in terms of data completeness.} We reject this hypothesis. An increase in comparative relevance for the Hadfield item is highly dependent on using Recoin at least once ($p_{rel,rec} < 0.001$). Additionally, when looking at the increase in comparative relevance per participant as a function of the number of additions made via Recoin, we see that the most significant difference occurs at around seven additions (\textit{p\textsubscript{rel,numUse}} = 0.02). This shows that Recoin is highly efficient, as adding a majority of the ten recommended properties should lead to the highest increase in relevance. \subsubsection*{$H_{2}$: The interface design of Recoin impacts Recoin usage.} Even though the redesign (C4) slightly outperformed the other conditions in terms of the goal of the set task, we could not find any significant difference in the number of additions made via Recoin between C1, C2, C3, and C4 ($p = 0.74$). Therefore, we cannot confirm this hypothesis with statistical significance. \subsubsection*{$H_{3}$: Textual explanation of the algorithmic logic leads to higher comprehension than the interactive redesign.} We could not find statistically significant differences in the ratings of comprehension between conditions C3 (RX) and C4 (RIX) (\textit{p\textsubscript{comp,con}} = 0.98). Therefore, this hypothesis is not confirmed. \begin{table}[b] \begin{tabularx}{\columnwidth}{XC{0.9cm}C{0.9cm}C{0.9cm}C{0.9cm}C{0.9cm}} \textbf{Condition} & \textbf{Baseline} & \textbf{C1} & \textbf{C2} & \textbf{C3} & \textbf{C4} \\ \midrule \textbf{Cronbach's $\alpha$} & 0.79 & 0.75 & 0.67 & 0.65 & 0.51 \\ \end{tabularx} \caption{Cronbach's $\alpha$ for questionnaire measures across all conditions.} \label{crotable} \end{table} \subsubsection*{$H_{4}$: The correlation of self-report measures for textual explanation solutions will equally hold for testing the interactive solution.} We had to reject this hypothesis as well. Reacting to the large variance in C4 (cp. Figure~\ref{fig:rel-con}), we tested the validity of the questionnaire measures. As opposed to previous research~\cite{kizilcec_how_2016}, the self-reported measures are not consistently correlated; instead, they differ significantly across conditions (cp. Table~\ref{crotable}). This especially concerns C4, our redesign (RIX), where variance was very high (Cronbach's $\alpha = 0.51$).
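As an illustration of the analysis pipeline just described, the following is a minimal sketch using SciPy with made-up example data; it is not our actual analysis script, and all values are assumptions:

\begin{verbatim}
import numpy as np
from scipy.stats import kruskal, spearmanr, f_oneway

# Example ordinal ratings per condition (made-up data).
comp_c1, comp_c2 = [3, 2, 4, 3], [3, 3, 5, 2]
comp_c3, comp_c4 = [4, 3, 3, 2], [3, 4, 2, 5]

# Kruskal-Wallis for ordinal measures, ANOVA for numerical ones.
h, p_kw = kruskal(comp_c1, comp_c2, comp_c3, comp_c4)
f, p_an = f_oneway(comp_c1, comp_c2, comp_c3, comp_c4)

# Spearman's rho for pairwise associations of self-report measures.
trust, fairness = [4, 5, 6, 4, 5], [4, 6, 6, 3, 5]
rho, p_rho = spearmanr(trust, fairness)

def cronbach_alpha(ratings):
    """ratings: participants x measures array of Likert responses."""
    r = np.asarray(ratings, dtype=float)
    k = r.shape[1]
    item_vars = r.var(axis=0, ddof=1).sum()
    total_var = r.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)
\end{verbatim}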
\begin{table}[t] \begin{tabularx}{\columnwidth}{XXXXX} \textbf{Factor} & \textbf{Comp.} & \textbf{Fair.} & \textbf{Acc.} & \textbf{Trust} \\ \hline \textbf{Comp.} & - & 0.19 & 0.15 & \textbf{0.33}$^*$ \\ \textbf{Fair.} & 0.19 & - & \textbf{0.40}$^*$ & \textbf{0.60}$^*$ \\ \textbf{Acc.} & 0.15 & \textbf{0.40}$^*$ & - & 0.16 \\ \textbf{Trust} & \textbf{0.33}$^*$ & \textbf{0.60}$^*$ & 0.16 & - \end{tabularx} \caption{Spearman correlation coefficients for C2-C4 for the self-report measures \textit{Comp.=Comprehension}, \textit{Fair.=Fairness}, \textit{Acc.=Accuracy} and \textit{Trust}, with $^*$ for $p < 0.05$.} \label{corrtable} \end{table} \begin{figure}[] \begin{center} \includegraphics[width=8cm]{C4-correlation-metrics} \caption{Spearman correlation coefficient matrix for C4 with $^*$ for $p < 0.05$; $^{**}$ for $p < 0.005$.} \label{fig:c4-corr} \end{center} \end{figure} In reaction to the high variance, we conducted Spearman correlation tests for the ordinal Likert scales used in the participant self-report measures. The most statistically significant correlative finding from our data is that trust and fairness share a medium to strong positive relationship in our experiment. This holds across all conditions (\textit{r\textsubscript{t,f} = 0.65, p < 0.01}), as well as across those in which Recoin was introduced during on-boarding (C2, C3, C4) (\textit{r\textsubscript{t,f} = 0.60, p < 0.01}) (cp. Table~\ref{corrtable}). The predominant relationship of trust and fairness was reaffirmed for C2 and C3, the original design and the additional textual explanation respectively, as the strongest and most significant relationship (C2: \textit{r\textsubscript{t,f} = 0.73, p < 0.01}; C3: \textit{r\textsubscript{t,f} = 0.63, p < 0.01}). A further relationship, between fairness and accuracy, was found for C2 and C3 as a medium positive relationship (C2: \textit{r\textsubscript{f,a} = 0.52, p = 0.01}; C3: \textit{r\textsubscript{f,a} = 0.53, p = 0.01}). When participants used the interactive design of Recoin (RIX in C4), two different relationships emerged (cp. Figure~\ref{fig:c4-corr}). We found that the relationship between comprehension and fairness was strongest (\textit{r\textsubscript{c,f} = 0.61, p < 0.01}), closely followed by the relationship between comprehension and trust (\textit{r\textsubscript{c,t} = 0.57, p = 0.01}). Surprisingly, the strong relationships found across all conditions were not present for the interactive design (cp. Figure~\ref{fig:c4-corr}). \subsubsection{Qualitative Analysis} Due to the high variance encountered in C4 and the unexpected lack of correlation between our self-report measures, we also expanded our analysis to participant statements. Accordingly, we sampled participant statements in order to probe specific subjective viewpoints. In this section, we showcase some preliminary insights. \subsubsection*{Base Trust in Open Knowledge Platforms} A recurring theme, when participants chose to expand on their rating of trust, was a certain \textit{base trust} in open knowledge platforms. This occurred even when no explanation element was provided, and also when participants received a poor grade in the condition that did not feature Recoin (Baseline): \begin{quotation} \emph{``Considering there was not a good definition of how we would be judged, it is tough to know if the judging was actually fair or unfair.
However, I tend to trust Wikipedia so Wikidata is probably trustworthy.''} Baseline-P18; graded \textit{D} \end{quotation} This \textit{base trust} was also extended to the algorithm specifically, as long as it abides by platform standards: \begin{quotation} \emph{``I assume that an algorithm is used to grade the task, in which case I assume that it's free of bias, which is why I do trust Wikidata a good deal when it comes to fairness. Provided, the algorithm itself works as it's supposed to.''} C2-P15; rated \textit{Trust} at \textit{6 (High)} \end{quotation} \subsubsection*{High task efficiency may not indicate algorithm awareness} The qualitative data also suggest that task efficiency in terms of the algorithm does not necessarily indicate algorithm awareness. On the contrary, the only participant who offered a fundamentally accurate account of Recoin received the second-lowest grade possible: \begin{quotation} \emph{``My only theory is that it's graded based on the relevance of entries made in regards to his occupation (astronaut) while most of my entries concerned his family, his awards and etc, rather than his activity as an astronaut.''} C2-P15; graded \textit{D} \end{quotation} The commentary of a well-performing participant (graded \textit{B}) furthermore suggests that there may be a difference between understanding algorithmic logic and understanding one's integration into the algorithmic system: \begin{quotation} \emph{``It seems odd that I would be the one putting in the data and it is grading me considering why couldn't it just put the data in itself if it is accurate enough to grade.''} C1-P17; rated \textit{Accuracy} at \textit{6 (Very accurate)} \end{quotation} Finally, and in a similar fashion, a participant formulated the key question they had about the algorithmic system as follows: \begin{quotation} \emph{``I understand that the relevance is graded, I'm not sure exactly how relevance is judged.''} C2-P2; rated \textit{Comprehension} at \textit{2 (Low Understanding)} \end{quotation} In summary, the unexpectedly high variance in the C4 condition, combined with the differences in correlative relationships across conditions, as well as our qualitative data, allows us to gather relevant insights for further research. In the next section, we discuss the limitations of our experiment, before concluding with our contributions as well as the implications for future work. \section{Discussion} First, we found no significant differences between the conditions in terms of average increases in completeness. However, this also suggests that the solution of textual explanation found in related work is not an inherently clear choice for algorithm awareness. This indicates that the design decisions for algorithm awareness are still methodologically unrefined. Additionally, we sought to understand whether our alternative to textual explanation, one taking an interactive and non-declarative approach, could be measured according to the existing self-report measures suggested by previous research~\cite{kizilcec_how_2016}. We found that the measures ``Comprehension'', ``Fairness'', ``Accuracy'' and ``Trust'' were not equally distributed across our experimental conditions. On the contrary, divergent correlative relationships emerged. The status quo design (R1) as well as the textual explanation design (RX) featured the same strong relationships of trust and fairness as well as of fairness and accuracy.
In contrast, our redesign (RIX) did not exhibit these relationships, but rather suggested that comprehension was most influential. This was shown in the medium to strong correlations between comprehension and fairness as well as between comprehension and trust. We therefore posit that expanding on these self-report measures for algorithm awareness is another distinct area requiring further research. Moreover, the qualitative data we gathered included insightful statements made by our participants. The phenomenon of \textit{base trust} that we encountered in participant statements is relevant for future algorithm awareness studies. If verified, it needs to be taken into account in cases where researchers may wish to abstract from platforms to look at specific problems. In a broader context, experiments on transparency in algorithmic systems, especially in recommender systems, are frequently undertaken in order to minimize or even eliminate bias. However, as also found by Ekstrand and Tian in experimenting with various recommendation algorithms~\cite{ekstrand_all_2018}, a complete solution to the problem of bias is improbable. That is to say: bias is inevitable, and is a result of humans and technology interacting. This position is echoed in the work of the philosopher of technology Verbeek, who argues that technology fundamentally \textit{mediates} human relations to a particular ``world'', i.e., groups of other humans, values, practices etc.~\cite{verbeek_what_2006}. Biases, especially those we are commonly not aware of, play an instrumental role here. The solution, then, may not be finding the best measure for eliminating bias, but rather finding the most actionable measure for making bias transparent. Our experimental results align with this assertion insofar as participants had issues with understanding the algorithmic system not on the basis of whether or not something was correctly calculated, but rather of who or what has the \textit{agency} to judge the result (e.g., the platform itself, the algorithm as a contained unit, peer review etc.). This, along with the lack of significant differences between conditions, indicates that our intuition to design for an interactive mediation of the community-driven basis of Recoin was useful. Therefore, we posit that promoting algorithm awareness through interactivity is a promising research area. Our study has a number of limitations that should be considered. As opposed to other work (e.g.,~\cite{gunning_explainable_nodate}), our research focuses on non-technical experts. Furthermore, by recruiting our study participants via MTurk, the demographics of the platform arguably predispose the experiment to cultural bias. Additionally, online experiments in general are limited in two ways. On the one hand, observing the subtleties of human-technology relations, such as the non-linguistic ways in which interaction expresses itself and decision-making occurs, is not possible. On the other hand, by using MTurk we did not study Wikidata editors, but novices who might never have come into contact with Wikidata before. This means that, while we certainly could infer insights on algorithm awareness and human-technology relations, studying the lived practice of Wikidata editors may reveal different or even contradictory results. \section{Conclusion} Our research was motivated by the wish to deepen our understanding of existing design parameters for algorithm awareness.
We used the recommender system Recoin, employed in the online peer production system Wikidata, as a use case for our online experiments. In five different conditions, we provided the study participants with varying degrees of explanation and interactivity while using the recommender system. We were able to gather experimental data on the effects of various algorithm-aware design measures, and to reflect on the validity of measures used in related work. However, our experiments alone are not yet exhaustive enough for us to reason more substantially about what human awareness means when algorithms are involved. This is partly due to the lack of longitudinal, qualitative data gathered from extensive and sustained use of Recoin. The participants of our experiments were predominantly unaware of Wikidata, and the task itself was both brief and controlled in terms of the knowledge that was provided. Wikidata, however, lives and breathes through enthusiasts and domain experts who contribute extensively in their areas of interest. Thus, in future work, we seek to conduct studies that complement these results by probing individual and subjective use over time. This will allow us to understand more deeply, for example, how algorithm-aware designs impact the relation between Wikidata editors and the platform. From such studies, we plan to expand our framework to other use cases. Thereby, we hope to contribute to the urgent need for understanding how increasingly ubiquitous algorithmic systems shape everyday life on the Web and beyond. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Materials made from constituents that use energy to move are called active. These inherently out-of-equilibrium systems have attractive physical properties: active materials can spontaneously form patterns \cite{bois2011pattern}, collectively move \cite{voituriez2005spontaneous, furthauer2012taylor, wioland2013confinement}, self-organize into structures \cite{salbreux2009hydrodynamics,brugues2014physical}, and do work \cite{marchetti2013hydrodynamics}. Biology, through evolution, has found ways to exploit this potential. The cytoskeleton, an active material made from biopolymer filaments and molecular scale motors, drives cellular functions with remarkable spatial and temporal coordination \cite{alberts_2002_book, howard2001mechanics}. The ability of cells to move, divide, and deform relies on this robust, dynamic and adaptive material. To understand the molecular underpinnings of cellular mechanics and design similarly useful active matter systems in the lab, a theory that predicts their behavior from the interactions between their constituents is needed. The aim of this paper is to address this challenge for highly crosslinked systems made from rigid rod-like filaments and molecular scale motors. The large-scale physics of active materials can be described by phenomenological theories, which are derived from symmetry considerations and conservation laws, without making assumptions on the detailed molecular scale interactions that give rise to the material's properties \cite{kruse2004asters, furthauer2012active, julicher2018hydrodynamic}. This has allowed exploring the exotic properties of active materials, and the quantitative description of subcellular structures, such as the spindle \cite{brugues2014physical, oriola2020active} (the structure that segregates chromosomes during cell division) and the cell cortex \cite{mayer2010anisotropies,salbreux2012actin,naganathan2014active,naganathan2018morphogenetic} (the structure that provides eukaryotic cells with the ability to control their shape), even though the microscale processes at work often remain opaque. In contrast, understanding how molecular perturbations affect cellular scale structures requires theories that explain how material properties depend on the underlying molecular behaviors. Designing active materials with desirable properties in the lab will also require the ability to predict how the emergent properties of materials result from their constituents \cite{foster2019cytoskeletal}. Until now, attempts to bridge this gap have relied heavily on computational methods \cite{belmonte2017theory, foster2017connecting, gao2015multiscalepolar}, or were restricted to sparsely crosslinked systems \cite{liverpool2003instabilities, liverpool2005bridging, liverpool2008hydrodynamics, aranson2005pattern, saintillan2008instabilitiesPRL}, one dimensional systems \cite{kruse2000actively, kruse2003self}, or systems with permanent crosslinks \cite{broedersz2014modeling}. Our interest here is in cytoskeletal networks, which are in general highly crosslinked, with tens to hundreds of transient crosslinks linking each filament into the network. In this regime, the forces generated by different crosslinks in the network balance against each other, and not against friction with the surrounding medium, as they would in a sparsely crosslinked regime \cite{furthauer2019self}.
We derive how the large scale properties of an actively crosslinked network of cytoskeletal filaments depend on the micro-scale interactions between its components. This theory generalizes our earlier work on one specific type of motor-filament mixture, XCTK2 and microtubules \cite{furthauer2019self, striebel2020mechanistic}, by introducing a generic phenomenological model to describe the forces that crosslink populations exert between filaments. The structure of this paper is as follows. In section \ref{sse:generic}, we discuss the force and torque balance for systems of interacting particles, and specialize to the case of interacting rod-like filaments. This will allow us to introduce key concepts of the continuum description, such as the network stress tensor. Next, in section \ref{sse:interactions}, we present a phenomenological model for crosslink interactions between filaments, which can describe the properties of many different types of crosslinks in terms of just a few parameters, which we call crosslink moments. In section \ref{sse:continuum}, we derive the continuum theory for highly crosslinked active networks and obtain the equations of motion for these systems. Finally, in section \ref{sse:macro} we give an overview of the main predictions of our theory and discuss the consequences of specific micro-scale properties for the mechanics of the resulting active material. We summarize and contextualize our findings in the discussion, section \ref{sse:discussion}. \section{Force and torque balance in systems of interacting rod-like particles}\label{sse:generic} We start by discussing the generic framework of our description. In this section we give equations for particle, momentum and angular momentum conservation and introduce the stress tensor, for generic systems of particles with short ranged interactions. We then specialize to the case of interacting rod-like filaments, which form the networks that we study here. \subsection{Particle Number Continuity} Consider a material that consists of a large number $N$ of particles, which are characterized by their center of mass positions $\mathbf x_i$ and their orientations $\mathbf p_i$, where $\mathbf p_i$ is a unit vector, $|\mathbf p_i| = 1$, and $i$ is the particle index. We define the particle number density \begin{equation} \psi(\mathbf x, \mathbf p) = \sum_i \delta(\mathbf x - \mathbf x_i)\delta(\mathbf p - \mathbf p_i). \label{eq:density_definition} \end{equation} Here and in the following $\delta(\mathbf x-\mathbf x_i)$ has dimensions of inverse volume, while $\delta(\mathbf p-\mathbf p_i)$ is dimensionless. Ultimately, our goal is to predict how $\psi$ changes over time. This is given by the Smoluchowski equation \begin{equation} \partial_t \psi(\mathbf x, \mathbf p) = -\nabla\cdot\left(\dot{\mathbf x} \psi \right) -\partial_\mathbf{p}\cdot\left(\dot{\mathbf p} \psi \right), \label{eq:smoluchowski} \end{equation} where \begin{equation} \dot{\mathbf x}\psi =\sum_i \dot{\mathbf x}_i\delta(\mathbf x - \mathbf x_i)\delta(\mathbf p - \mathbf p_i) \end{equation} and \begin{equation} \dot{\mathbf p}\psi =\sum_i \dot{\mathbf p}_i\delta(\mathbf x - \mathbf x_i)\delta(\mathbf p - \mathbf p_i) \end{equation} define $\dot{\mathbf x}$ and $\dot{\mathbf p}$, the fluxes of particle position and orientation. The aim of this paper is to derive $\dot{\mathbf x}$ and $\dot{\mathbf p}$ from the forces and torques that act on and between particles. \subsection{Force Balance} Each particle in the active network obeys Newton's laws of motion.
That is \begin{equation} \dot{\mathbf g}_i = \sum_j \mathbf F_{ij} + \mathbf F_i^\mathrm{(drag)}, \label{eq:Newton_single_particle} \end{equation} where $\mathbf g_i$ is the particle momentum, and $\mathbf F_{ij}$ is the force that particle $j$ exerts on particle $i$. Moreover, $\mathbf F_i^\mathrm{(drag)}$ is the drag force between the particle $i$ and the fluid in which it is immersed. Momentum conservation implies $\mathbf F_{ij} = - \mathbf F_{ji}$. We are interested in systems where the direct particle-particle interactions are short ranged. This means that $\mathbf F_{ij}\ne 0$ only if $|\mathbf x_i -\mathbf x_j|<d$, where $d$ is an interaction length that is small (relative to system size). The momentum density is defined by \begin{equation} \mathbf g = \sum_i \delta(\mathbf x - \mathbf x_i) \mathbf g_i, \end{equation} which, using Eq.~{(\ref{eq:Newton_single_particle})}, obeys \begin{eqnarray} \partial_t \mathbf g &+& \nabla\cdot\sum_i\delta(\mathbf x - \mathbf x_i)\mathbf v_i \mathbf g_i = \sum_{i,j}\delta(\mathbf x - \mathbf x_i) \mathbf F_{ij} + \sum_{i}\delta(\mathbf x - \mathbf x_i) \mathbf F^\mathrm{(drag)}_{i} , \nonumber\\ \label{eq:Fb_continuum} \end{eqnarray} where $\mathbf v_i = \dot{\mathbf x}_i$ is the velocity of the $i$-th particle. The terms on the left hand side of Eq.~{(\ref{eq:Fb_continuum})} are inertial, and in the overdamped limit, relevant to the systems studied here, they are vanishingly small. Interactions between particles are described by the first term on the right hand side of Eq.~{(\ref{eq:Fb_continuum})} and generate a momentum density flux $\mathbf\Sigma$ (the stress tensor) through the material. To wit, using that $d$ is small, so that particle-particle interactions are short-ranged, gives \begin{eqnarray} &~&\sum_{i,j}\delta(\mathbf x - \mathbf x_i)\mathbf F_{ij} =\frac{1}2\sum_{i,j}\left( \delta(\mathbf x - \mathbf x_i) - \delta(\mathbf x - \mathbf x_j)\right)\mathbf F_{ij} \nonumber\\ &=&-\nabla \cdot \sum_{i,j} \delta\left(\mathbf x - \mathbf x_i\right)\frac{\mathbf x_i -\mathbf x_j}2 \mathbf F_{ij} +\mathcal{O}(d^3) \nonumber \\ &=&\nabla\cdot\mathbf\Sigma, \end{eqnarray} where \begin{equation} \mathbf\Sigma = -\sum_{i,j} \delta\left(\mathbf x - \mathbf x_i \right)\frac{\mathbf x_{i} - \mathbf x_{j}}2\mathbf F_{ij} +\mathcal{O}(d^3). \label{eq:define_stress} \end{equation} Note that Eq.~{(\ref{eq:define_stress})} does not necessarily produce a symmetric stress tensor. Force couples for which $\mathbf F_{ij}$ and $\mathbf x_{i}-\mathbf x_{j}$ are not parallel generate antisymmetric stress contributions, since these couples are not torque free. We discuss how to reconcile this with angular momentum conservation in Appendix \ref{app:angular}. The drag force density is \begin{equation} \mathbf f = \sum_i\delta(\mathbf x-\mathbf x_i)\mathbf F_i^\mathrm{(drag)}, \label{eq:define_permaeation_force} \end{equation} and after dropping inertial terms, the force balance reads \begin{equation} \nabla\cdot\mathbf\Sigma + \mathbf f = \mathbf 0, \label{eq:gel_force_balance} \end{equation} and the total force on particle $i$ obeys \begin{equation} \sum_j \mathbf F_{ij} + \mathbf F_{i}^\mathrm{(drag)} = \mathbf 0. \label{eq:Force_balance_mti} \end{equation} This completes the discussion of the force balance of the system. We next discuss angular momentum conservation.
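Before doing so, we note as a practical aside that Eq.~{(\ref{eq:define_stress})} lends itself to direct numerical evaluation for a given particle configuration. The following Python sketch is illustrative only: the Gaussian smoothing of the delta function, the toy short-ranged pair force, and all parameter values are assumptions made for this example rather than part of the derivation.
\begin{verbatim}
import numpy as np

# Numerical sketch of the stress definition,
#   Sigma(x) ~ - sum_{i,j} delta(x - x_i) (x_i - x_j)/2 F_ij,
# with the delta function replaced by a normalized Gaussian of
# width w (an illustrative choice of mollifier).

def pair_force(xi, xj, d=0.1, k=1.0):
    """Toy short-ranged repulsion; vanishes beyond distance d.
    Antisymmetric under exchange of i and j, as required."""
    r = xi - xj
    dist = np.linalg.norm(r)
    if dist == 0.0 or dist > d:
        return np.zeros(3)
    return -k * (dist - d) * r / dist

def stress_at(x, positions, w=0.05, d=0.1):
    """Smoothed estimate of the stress tensor at probe point x."""
    sigma = np.zeros((3, 3))
    norm = (2.0 * np.pi * w ** 2) ** 1.5
    for i, xi in enumerate(positions):
        weight = np.exp(-np.sum((x - xi) ** 2) / (2 * w ** 2)) / norm
        if weight < 1e-12:
            continue  # particle i is too far from the probe point
        for j, xj in enumerate(positions):
            if j != i:
                sigma -= weight * 0.5 * np.outer(xi - xj,
                                                 pair_force(xi, xj, d))
    return sigma

rng = np.random.default_rng(0)
pos = rng.uniform(0.0, 1.0, size=(200, 3))  # 200 particles in a unit box
print(stress_at(np.array([0.5, 0.5, 0.5]), pos))
\end{verbatim}
Because $\mathbf F_{ij} = -\mathbf F_{ji}$ and the pair force is short ranged, distant pairs contribute nothing, and such estimates approach the continuum stress as the number of particles grows.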
\subsection{Torque Balance} \label{sse:torque_generic} The total angular momentum of particle $i$, \begin{equation} \mathbf \ell^\mathrm{(tot)}_i = \mathbf \ell_i +\mathbf x_i\times \mathbf g_i, \label{eq:total_angular_momentum_i} \end{equation} is conserved, where $\mathbf \ell_i$ is its spin angular momentum and $\mathbf x_i\times\mathbf g_i$ its orbital angular momentum. Newton's laws imply that \begin{equation} \dot{\mathbf\ell}_i = \sum_j\mathbf T_{ij} + \mathbf T_{i}^\mathrm{(drag)}, \end{equation} where $\mathbf T_{ij}$ is the torque exerted by particle $j$ on particle $i$, in the frame of reference moving with particle $i$, and $\mathbf T_i^\mathrm{(drag)}$ is the torque from interaction with the medium, in the same frame of reference. Importantly, since the total angular momentum is a conserved quantity, the total torque transmitted between particles $\mathbf T_{ij} + \mathbf x_i\times \mathbf F_{ij} = -\mathbf T_{ji} - \mathbf x_j\times \mathbf F_{ji}$ is odd upon exchange of the particle indices $i$ and $j$. Taking a time derivative of Eq.~{(\ref{eq:total_angular_momentum_i})} and using Eq.~{(\ref{eq:Newton_single_particle})} leads to the torque balance equation for particle $i$ \begin{equation} \sum_j \left( \mathbf T_{ij} + \mathbf x_i\times \mathbf F_{ij} \right) + \mathbf T_i^\mathrm{(drag)} +\mathbf x_i \times \mathbf F_{i}^\mathrm{(drag)} = \mathbf 0, \end{equation} and thus \begin{equation} \sum_j\mathbf T_{ij} + \mathbf T_i^\mathrm{(drag)} = \mathbf 0, \end{equation} where we ignored the inertial term $\mathbf v_i \times \mathbf g_i$ and used Eq.~{(\ref{eq:Force_balance_mti})}. The angular momentum fluxes associated with spin, orbital and total angular momentum are discussed in Appendix{~\ref{app:angular}} for completeness. \subsection{The special case of rod-like filaments} \begin{figure}[h] \centering \includegraphics[width=0.6\columnwidth]{FigCartoon2.pdf} \caption{ a/ Interaction between two cytoskeletal filaments $i$ and $j$ via a molecular motor. Filaments are characterized by their positions $\mathbf x_i, \mathbf x_j$ and their orientations $\mathbf p_i, \mathbf p_j$, and are connected by a motor between the arc-length positions $s_i, s_j$. A motor consists of two heads that can be different (circle, pentagon) and are connected by a linker (black zig-zag) of length $R$. b/ The total force on filament $i$ is given by the sum of the forces exerted by all $a$ (circle) and $b$ (pentagon) heads, which connect the filament into the network. The shaded area shows all geometrically accessible positions that can be crosslinked to the central (black) filament. } \label{fig:cartoons} \end{figure} We now specialize to rod-like particles, such as the microtubules and actin filaments that make up the cytoskeleton. In particular, we calculate the objects $\mathbf F_{ij}$, $\mathbf T_{ij}$, and $\mathbf \Sigma$ from prescribed interaction forces and torques along rod-like particles. \subsubsection{Forces} Again, filament $i$ is described by its center of mass $\mathbf{x}_i$ and orientation vector $\mathbf{p}_i$. All filaments are taken as having the same length $L$, and position along filament $i$ is given by $\mathbf{x}_i+s_i\mathbf{p}_i$, where $s_i\in[-L/2,L/2]$ is the signed arclength. We consider the vectorial momentum flux from arclength position $s_i$ on filament $i$ to arclength position $s_j$ on filament $j$, \begin{equation} \mathbf f_{ij} = \mathbf f_{ij}\left(s_i, s_j \right), \end{equation} where $\mathbf f_{ij} = - \mathbf f_{ji}$, having dimensions of force over area, i.e.
a stress. Here we focus on forces generated by crosslinks; see Fig.~{\ref{fig:cartoons}} (a). The total force between two particles is \begin{eqnarray} \mathbf F_{ij} &=& \left\lfloor \delta(\mathbf x - \mathbf x_j - s_j \mathbf p_j) \mathbf f_{ij}\right\rceil^{ij}_{\Omega(\mathbf x_i +s_i\mathbf p_i)}, \nonumber\\ \end{eqnarray} where the brackets $\lfloor\cdots\rceil^{ij}_{\Omega(\mathbf x_i)}$ denote the operation \begin{equation} \lfloor\phi\rceil_{\Omega(\mathbf x_i)}^{ij} = \int\limits_{-\frac{L}{2}}^{\frac{L}{2}} ds_i \int\limits_{-\frac{L}{2}}^{\frac{L}{2}} ds_j \int\limits_{\Omega(\mathbf x_i)}d\mathbf x^3 \phi, \label{eq:bracket_def} \end{equation} where $\phi$ is a dummy argument and $\Omega$ is a sphere whose radius is the size of a cross-linker (i.e., $d$, the interaction distance). With the definition Eq.~{(\ref{eq:bracket_def})}, the operation $\lfloor\cdots\rceil^{ij}_{\Omega(\mathbf x_i+s_i\mathbf p_i)}$ integrates its argument over all geometrically possible crosslink interactions between filaments $i$ and $j$; see Fig.~{\ref{fig:cartoons}} (b). By Taylor expanding and keeping terms up to second order in the filament arc lengths $(s_i, s_j)$, we find \begin{eqnarray} &&\mathbf F_{i}^\mathrm{(tot)} = \sum_j \left\lfloor\left\{ \begin{array}{c} 1\\ + (s_i \mathbf p_i- s_j \mathbf p_j) \cdot \nabla \\ +\frac{1}{2}(s_i^2\mathbf p_i \mathbf p_i + s_j^2\mathbf p_j \mathbf p_j) : \nabla \nabla \end{array} \right\}\delta(\mathbf x - \mathbf x_j) \mathbf f_{ij}\right\rceil_{\Omega(\mathbf x_i)}^{ij} + \mathbf F_i^\mathrm{(drag)} \nonumber\\ \label{eq:rod_force} \end{eqnarray} and the network stress \begin{eqnarray} &&\mathbf \Sigma = -\frac{1}2 \sum_{i,j} \left\lfloor \begin{array}{c} \delta(\mathbf x-\mathbf x_i)\delta(\mathbf x^\prime-\mathbf x_j) \\ \left(\mathbf x_i -\mathbf x_j +s_i \mathbf p_i - s_j \mathbf p_j\right)\mathbf f_{ij} \end{array} \right\rceil_{\Omega(\mathbf x_i)}^{ij}, \label{eq:rod_stress} \end{eqnarray} where we used that $\mathbf f_{ij}=-\mathbf f_{ji}$. \subsubsection{Torques} Similarly, the angular momentum flux that crosslinkers exert between filaments can be written as \begin{equation} \mathbf t_{ij} = \bar{ \mathbf t}_{ij}\left(s_i, s_j \right) + s_i\mathbf p_i\times \mathbf f_{ij}, \label{eq:torque_density} \end{equation} which dimensionally is a torque per unit area. Thus \begin{equation} \mathbf T_{ij} = \lfloor\delta(\mathbf x - \mathbf x_j - s_j \mathbf p_j)\mathbf t_{ij}\rceil^{ij}_{\Omega(\mathbf x_i + s_i \mathbf p_i)}, \end{equation} which leads to \begin{eqnarray} \mathbf T_{i}^\mathrm{(tot)} = \mathbf T_i^\mathrm{(drag)}+ \sum_j \left\lfloor \begin{array}{c} \delta(\mathbf x - \mathbf x_j)\left(\bar{\mathbf t}_{ij} + s_i\mathbf p_i\times\mathbf f_{ij}\right)\\ + (s_i\mathbf p_i -s_j \mathbf p_j)\cdot\nabla\delta(\mathbf x - \mathbf x_j)\left(\bar {\mathbf t}_{ij} + s_i\mathbf p_i\times\mathbf f_{ij}\right)\\ + \frac{1}{2}(s^2_i\mathbf p_i\mathbf p_i +s^2_j \mathbf p_j\mathbf p_j):\nabla\nabla\delta(\mathbf x - \mathbf x_j) \bar {\mathbf t}_{ij} \end{array} \right\rceil_{\Omega(\mathbf x_i)}^{ij} \nonumber\\ \label{eq:rod_torque} \end{eqnarray} In the following we will consider crosslinks for which $\bar{ \mathbf{t}}_{ij} = 0$, for simplicity. \section{Filament-Filament interactions by crosslinks and collisions} \label{sse:interactions} We next discuss how filaments in highly crosslinked networks exchange linear and angular momentum.
Two types of interactions are important here: interactions mediated by crosslinking molecules, which can be simple static linkers or active molecular motors, and steric interactions. We start by discussing the former. \subsection{Crosslinking interactions} To describe crosslinking interactions, we propose a phenomenological model for the stress $\mathbf f_{ij}$ that crosslinkers exert between the attachment positions $s_i$ and $s_j$ on filaments $i$ and $j$: \begin{eqnarray} \mathbf f_{ij} &=& K(s_i, s_j, t)\left( \mathbf x_i +s_i\mathbf p_i - \mathbf x_j -s_j\mathbf p_j\right) \nonumber\\ &+& \gamma (s_i, s_j, t)\left( \mathbf v_i +s_i\dot{\mathbf p}_i - \mathbf v_j -s_j\dot{\mathbf p}_j\right) \nonumber\\ &+& \left[ \sigma(s_i, s_j, t) \mathbf p_i - \sigma(s_j, s_i, t) \mathbf p_j \right ]. \label{eq:generic_force_density} \end{eqnarray} The first term in this model, with coefficient $K$, is proportional to the displacement between the attachment points, $\mathbf x_i+s_i\mathbf p_i - \mathbf x_j -s_j\mathbf p_j$, and captures the effects of crosslink elasticity and motor slow-down under force. The second term, with coefficient $\gamma$, is proportional to $\mathbf v_i +s_i\dot{\mathbf p}_i - \mathbf v_j -s_j\dot{\mathbf p}_j$, and captures friction-like effects arising from velocity differences between the attachment points. The last terms are motor forces that act along the filament orientations $\mathbf{p}_i$ and $\mathbf{p}_j$, with their coefficients $\sigma$ having dimensions of stress. Additional forces proportional to the relative rotation rate between filaments, $\dot { \mathbf p}_i - \dot{ \mathbf p}_j$, are allowed by symmetry, but are neglected here for simplicity. In general, the coefficients $K$, $\gamma$, and $\sigma$ are tensors that depend on time, the relative orientations of microtubules $i$ and $j$, and the attachment positions $s_i, s_j$ on both filaments. In this work, we take them to be scalar and independent of the relative orientation, for simplicity. Generalizing the calculations that follow to include the dependences of $K, \gamma$ and $\sigma$ on $\mathbf p_i$ and $\mathbf p_j$ is straightforward but laborious and will be discussed in a subsequent publication. We emphasize that Eq.~{(\ref{eq:generic_force_density})} is a statement about the expected average effect of crosslinks in a dense local environment and is not a description of individual crosslinking events. Inserting Eq.~{(\ref{eq:generic_force_density})} into Eqs.~{(\ref{eq:rod_force}, \ref{eq:rod_stress}, \ref{eq:rod_torque})} we find that the stresses and forces collectively generated by crosslinks depend on $s_{ij}$-moments of the form \begin{equation} X_{nm}(\mathbf x) = \lfloor X(s_i, s_j) s_i^n s_j^m\rceil_{\Omega(\mathbf x)}^{ij}, \label{eq:moment_definition} \end{equation} where $X=K$, $\gamma$, or $\sigma$. We refer to these as crosslink moments. In principle, given Eqs.~{(\ref{eq:rod_force}, \ref{eq:rod_stress}, \ref{eq:generic_force_density})}, only the moments $X_{00}, X_{01}, X_{10}, X_{11}, X_{20}, X_{02}, X_{21},$ and $X_{12}$ contribute to the stresses and forces in the filament network. We further note that $X_{11}$, $X_{21}$ and $X_{12}$ are $\mathcal{O}(L^4)$, and can thus be neglected without breaking asymptotic consistency. Moreover, $X_{20}$ and $X_{02}$ can be expressed in terms of lower order moments since $X_{20} = X_{02} + \mathcal {O}(L^4) = (L^2/12) X_{00} + \mathcal {O}(L^4)$.
Finally, by construction $K(s_i, s_j) = K(s_j, s_i)$ and $\gamma(s_i, s_j) = \gamma(s_j, s_i)$, and thus $\gamma_{01}=\gamma_{10}\equiv\gamma_1$ and $K_{01}=K_{10} \equiv K_1$. To further simplify our notation, we introduce $X_0 =X_{00}$. Explicit expressions for the seven crosslinking moments that contribute to the continuum theory are given in Appendix~\ref{app:coeffcients}. In summary, in the long wavelength limit all forces and stresses in the network can be expressed in terms of just a few moments, $K_0, K_1, \gamma_0, \gamma_1, \sigma_0, \sigma_{01}, \sigma_{10}$. How different crosslinker behaviors set these moments will be discussed in Section~\ref{sse:macro}. \subsection{Sterically mediated interactions} In addition to crosslinker mediated forces and torques, steric interactions between filaments generate momentum and angular momentum transfer in the system. We model steric interactions by a free energy $E = \int_{\mathcal V}e(\mathbf p_i, \cdots, \mathbf x_i,\cdots) d^3x$ which depends on all particle positions and orientations. The steric force is \begin{equation} \bar{\mathbf F}_i = -\frac{\delta E}{\delta \mathbf x_i}, \end{equation} and the torque acting on filament $i$ is \begin{equation} \bar{\mathbf T}_i = -\frac{\delta E}{\delta \mathbf p_i}. \end{equation} This approach is commonly used throughout soft matter physics \cite{martin1972unified,chaikin2000principles}. Common choices for the free energy density $e$ are the ones proposed by Maier and Saupe \cite{doi1988theory}, or by Landau and de Gennes \cite{de1993physics}. \section{Continuum Theory for highly crosslinked active networks} \label{sse:continuum} In the previous sections we derived a generic expression for the stresses and forces acting in a network of filaments interacting through local forces and torques, and proposed a phenomenological model for crosslink-driven interactions between filaments. We now combine these two and obtain expressions for the stresses, forces, and torques acting in a highly crosslinked filament network, and from there derive equations of motion for the material. We start by introducing the coarse-grained fields in terms of which our theory is phrased. \subsection{Continuous Fields} The coarse-grained fields of relevance are the number density, \begin{equation} \rho = \sum_i \delta(\mathbf x-\mathbf x_i), \label{eq:Density definition} \end{equation} the velocity $\mathbf v = \left<\mathbf v_i \right>$, the polarity $\mathbf P =\left<\mathbf p_i \right>$, the nematic-order tensor $\mathcal Q =\left<\mathbf p_i\mathbf p_i \right>$, and the third and fourth order tensors $\mathcal T = \left<\mathbf p_i\mathbf p_i\mathbf p_i\right>$ and $\mathcal S = \left<\mathbf p_i\mathbf p_i\mathbf p_i\mathbf p_i\right>$. Here the brackets $\left<\cdot\right>$ signify the averaging operation \begin{equation} \rho\left<\phi_i\right> = \sum_i \delta(\mathbf x -\mathbf x_i) \phi_i, \label{eq:averaging_definition} \end{equation} where $\phi_i$ is a dummy variable. Furthermore, we define the tensors $\mathbf j = \left<\mathbf p_i\left( \mathbf v_i - \mathbf v\right)\right>$, $\mathcal J = \left<\mathbf p_i\mathbf p_i\left( \mathbf v_i - \mathbf v\right)\right>$, $\mathcal H = \left<\mathbf p_i\dot{\mathbf p}_i\right>$, and the rotation rate $\mathbf \omega =\left<\dot{\mathbf p}_i\right>$. \subsection{Stresses} The presence of crosslinkers generates stresses in the material which, through Eq.~{(\ref{eq:rod_stress})}, depend on the crosslinking force density Eq.~{(\ref{eq:generic_force_density})}.
Following the nomenclature from Eq.~{(\ref{eq:generic_force_density})}, we write the material stress as \begin{equation} \mathbf \Sigma = \mathbf \Sigma^{(K)} + \mathbf \Sigma^{(\gamma)} + \mathbf \Sigma^{(V)} + \mathbf{\bar\Sigma}, \end{equation} where \begin{equation} \mathbf\Sigma^{(K)} = -\rho^2K_0\left(\alpha \mathcal{I} +\frac{L^2}{12}\mathcal{Q}\right), \label{eq:StressK} \end{equation} is the stress due to the crosslink elasticity, \begin{equation} \mathbf\Sigma^{(\gamma)} = -\rho^2 \left(\eta \nabla \mathbf v + \gamma_1 \mathbf j + \gamma_0\frac{L^2}{12}\mathcal H \right), \label{eq:StressGamma} \end{equation} is the viscous-like stress generated by crosslinkers, and \begin{equation} \mathbf\Sigma^{(V)} = -\rho^2 \left(\alpha \sigma_0 \nabla \mathbf P + \sigma_{10} \mathcal Q - \sigma_{01}\mathbf P \mathbf P \right) \label{eq:StressV} \end{equation} is the stress generated by motor stepping. Here, we defined the network viscosity $\eta = \alpha\gamma_0$ and $\alpha = \frac{3R^2}{10}$. Finally, the steric (or Ericksen) stress obeys the Gibbs-Duhem relation \begin{equation} \nabla\cdot\bar{\mathbf\Sigma} = \rho\nabla\mu + (\nabla \mathcal E):\mathcal Q, \end{equation} where $\mu = -\frac{\delta e}{\delta \rho}$ is the chemical potential, and $\mathcal E = -\frac{\delta e}{\delta \mathcal Q}$ is the steric distortion field. An explicit definition of $ \bar{\mathbf\Sigma}$ and the derivation of the Gibbs-Duhem relation are given in Appendix~\ref{app:steric}. Note that for simplicity, we chose the steric free energy density $e$ to depend only on nematic order and not on polarity. \subsection{Forces} We now calculate the forces acting on filament $i$. The total force $\mathbf F_i$ on filament $i$ is given by \begin{equation} \mathbf F_i = \mathbf F_i^{(K)} + \mathbf F_i^{(\gamma)} + \mathbf F_i^{(V)} + \bar{\mathbf F}_i + \mathbf F_i^\mathrm{(drag)}, \label{eq:ForceAll} \end{equation} where \begin{eqnarray} \mathbf F^{(K)}_i &=& (\nabla\rho)\cdot \frac{L^2}{12}K_0(\mathbf p_i\mathbf p_i -\mathcal{Q}) -\frac{1}{\rho}\nabla\cdot\mathbf\Sigma^{(K)}, \label{eq:ForceK} \end{eqnarray} is the elasticity-driven force, \begin{eqnarray} \mathbf F_i^{(\gamma)} &=& \gamma_0 \rho (\mathbf v_i -\mathbf v) + \gamma_1 \rho (\dot{\mathbf p}_i -\mathbf\omega) \nonumber\\ &+& \gamma_1 \left( \nabla \rho \right)\cdot\left[ \mathbf p_i\left( \mathbf v_i -\mathbf v \right) -\mathbf j - \mathbf P\left( \mathbf v_i -\mathbf v\right) \right] \nonumber\\ &+& \frac{L^2}{12}\gamma_0 \left( \nabla \rho \right)\cdot\left[\mathbf p_i \dot{\mathbf p}_i - \mathcal H \right] \nonumber\\ &+& \frac{L^2}{12}\gamma_0 \left( \nabla\nabla \rho \right):\left[ \mathbf p_i {\mathbf p}_i \left(\mathbf v_i-\mathbf v\right) - \mathcal J+\mathcal Q \left(\mathbf v_i-\mathbf v\right) \right] \nonumber\\ &-& \frac{1}{\rho}\nabla\cdot\mathbf\Sigma^{(\gamma)}, \label{eq:ForceGamma} \end{eqnarray} is the viscous-like force, and \begin{eqnarray} \mathbf F_i^{(V)} &=& \rho \sigma_0(\mathbf p_i - \mathbf P) \nonumber\\ &+& (\nabla \rho) \cdot \left[ \sigma_{10}\left(\mathbf p_i \mathbf p_i -\mathcal Q \right) - \sigma_{01}\left( \mathbf p_i \mathbf P + \mathbf P \mathbf p_i - 2\mathbf P\mathbf P\right)\right ] \nonumber\\ &+& \frac{L^2}{12} \sigma_0 (\nabla \nabla \rho):\left[ \mathbf p_i\mathbf p_i\mathbf p_i + \mathcal Q \mathbf p_i - \mathbf p_i \mathbf p_i \mathbf P - \mathcal T \right] \nonumber\\ &-&\frac{1}\rho \nabla\cdot \mathbf\Sigma^{(V)}, \label{eq:ForceV} \end{eqnarray} is the motor force.
Finally, \begin{equation} \bar {\mathbf{F}}_i = -\frac{\nabla\mathcal E}{\rho}:(\mathbf p_i \mathbf p_i -\mathcal Q) -\frac{1}\rho \nabla\cdot\bar{\mathbf\Sigma}, \end{equation} is the steric force on filament $i$, where we again chose $e$ to depend only on nematic order and not on polarity. \subsection{Crosslinker-induced Torque} We next calculate the torques acting on filament $i$. The total torque acting on filament $i$ is \begin{equation} \mathbf T_i = \mathbf T_i^{(\gamma)} + \mathbf T_i^{(V)} +\bar{\mathbf T}_i+ \mathbf T_i^{(\mathrm{drag})}. \label{eq:filament_total_torque} \end{equation} Note that crosslinker elasticity does not contribute. Here \begin{eqnarray} \mathbf T_i^{(\gamma)} &=& \gamma_1 \rho\mathbf p_i\times\left( \mathbf v_i -\mathbf v \right) + \frac{L^2}{12}\gamma_0 \rho\mathbf p_i\times\dot{\mathbf p}_i \nonumber\\ &+& \frac{L^2}{12}\gamma_0 \mathbf p_i\times\left(\mathbf p_i\cdot \nabla \rho \right)\left( \mathbf v_i - \mathbf v \right) \nonumber\\ &-&\frac{L^2}{12}\gamma_0\rho\mathbf p_i\times(\mathbf p_i\cdot\nabla\mathbf v) \label{eq:TorqueGamma} \end{eqnarray} and \begin{eqnarray} \mathbf T^{(V)}_i &=& -\rho\mathbf p_i\times\left(\sigma_{01}\mathbf P + \frac{L^2}{12}\sigma_0\mathbf p_i\cdot\nabla\mathbf P\right) -\frac{L^2}{12}\sigma_0\mathbf p_i\times (\mathbf p_i\cdot\nabla\rho)\mathbf P \end{eqnarray} are the viscous and motor torques, respectively. Steric interactions contribute to the torque \begin{equation} \bar{\mathbf{T}}_i = \mathbf p_i\times \frac{\mathcal E}{\rho}\cdot\mathbf p_i. \end{equation} \subsection{Equations of Motion} To find equations of motion for the highly crosslinked network, we use Eqs.~{(\ref{eq:ForceAll}, \ref{eq:ForceK}, \ref{eq:ForceGamma}, \ref{eq:ForceV})}, and obtain \begin{eqnarray} \mathbf v_i-\mathbf v &=& - \frac{\sigma_0}{\gamma_0} (\mathbf p_i-\mathbf P) - \frac{1}{\rho\gamma_0}\left(\mathbf F^\mathrm{(drag)}_i - f/\rho\right) + \mathcal{O}\left( L^2 \right), \label{eq:low_order_vi} \end{eqnarray} which will be a useful low-order approximation to $\mathbf v_i -\mathbf v$. Note too that we have dropped steric forces, since $\nabla \mathcal E /\rho$ scales with the inverse of the system size, which is much larger than $L$. Using Eq.~{(\ref{eq:low_order_vi})} in Eq.~{(\ref{eq:filament_total_torque})} we find the equation of motion for filament rotations, \begin{eqnarray} \dot{\mathbf p}_i &=& \left( \mathcal{I} -\mathbf p_i \mathbf p_i \right)\cdot \left\{ \begin{array}{c} \mathbf p_i\cdot\mathcal U \\+ \frac{12}{\gamma_0L^2\rho^2}\mathbf p_i\cdot \mathcal E \\+ \frac{12 }{\gamma_0L^2}A^{(\mathbf P)} \mathbf P \end{array} \right\}, \label{eq:kinetic_angular} \end{eqnarray} where we neglect drag mediated terms, which are subdominant at high density, for simplicity. A detailed calculation, and expressions which include drag terms, are given in Appendix~\ref{app:calculate}. Here, \begin{equation} \mathcal U = \nabla\mathbf v +\frac{\sigma_0}{\gamma_0}\nabla\mathbf P, \end{equation} is the active strain rate tensor, which consists of the strain rate and vorticity $\nabla\mathbf v$ and an active polar contribution $\nabla\mathbf P$. Moreover, \begin{equation} A^{(\mathbf P)} = \sigma_{01}-\sigma_0\frac{\gamma_1}{\gamma_0} \label{eq:App} \end{equation} is the polar activity coefficient.
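As an illustration, Eq.~{(\ref{eq:kinetic_angular})} can be integrated directly for a single filament. The Python sketch below keeps only the active strain-rate term $\mathbf p_i\cdot\mathcal U$ (dropping, for illustration only, the steric and polar terms) and uses an assumed, spatially uniform $\mathcal U$ together with simple forward-Euler stepping; it shows how the projector $(\mathcal{I} - \mathbf p_i\mathbf p_i)$ keeps $|\mathbf p_i| = 1$ up to integration error.
\begin{verbatim}
import numpy as np

# Illustrative integration of p_dot = (I - p p) . (p . U),
# the active strain-rate part of the orientation dynamics.
# U is an assumed, spatially uniform active strain-rate tensor.

U = np.array([[ 0.2,  0.5, 0.0],
              [ 0.0, -0.2, 0.0],
              [ 0.0,  0.0, 0.0]])

p = np.array([1.0, 0.2, 0.1])
p /= np.linalg.norm(p)

dt, steps = 1e-3, 5000
for _ in range(steps):
    proj = np.eye(3) - np.outer(p, p)  # projector (I - p p)
    p_dot = proj @ (p @ U)             # (p . U)_b = p_a U_{ab}
    p += dt * p_dot
    p /= np.linalg.norm(p)  # renormalize to control Euler drift

print(p)  # p tends to align with the extensional direction of U
\end{verbatim}
Since the strain part of this $\mathcal U$ dominates its vorticity, the filament relaxes towards the extensional axis, the behavior familiar from rod advection in a flow gradient.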
The filament velocities are given by \begin{eqnarray} \mathbf v_i -\mathbf v &=& -\frac{\sigma_0}{\gamma_0}\left( \mathbf p_i -\mathbf P\right) \nonumber\\ &-&\frac{\gamma_1}{\gamma_0}\left((\mathbf p_i-\mathbf P)\cdot\mathcal{U} -(\mathbf p_i\mathbf p_i\mathbf p_i-\mathcal T):\mathcal{U}\right) \nonumber\\ &-&\frac{12 \gamma_1}{ L^2 \rho^2\gamma_0^2}\left((\mathbf p_i-\mathbf P)\cdot\mathcal{E} -(\mathbf p_i\mathbf p_i\mathbf p_i-\mathcal T):\mathcal{E}\right) \nonumber\\ &+&\frac{12 \gamma_1}{L^2\gamma_0^2}A^{(\mathbf P)}\left( \mathbf p_i\mathbf p_i-\mathcal Q \right)\cdot \mathbf P, \label{eq:v_final} \end{eqnarray} where we used Eqs.~{(\ref{eq:low_order_vi}, \ref{eq:kinetic_angular})} in Eq.~{(\ref{eq:ForceAll})}. In Eq.~{(\ref{eq:v_final})}, we ignored terms proportional to density gradients, for simplicity. The full expression is given in Appendix~\ref{app:calculate}. After some further algebra (see Appendix~\ref{app:calculate}), we arrive at an expression for the material stress in terms of the current distribution of filaments, \begin{eqnarray} \mathbf\Sigma &=& -\rho^2\left(\chi:\mathcal U +\alpha K_0\mathcal{I} + A^{(\mathcal Q)}\mathcal Q -A^{(\mathbf P)}\mathcal T\cdot \mathbf P \right) + \mathbf\Sigma^\mathrm{(S)}, \label{eq:stress_final} \end{eqnarray} where \begin{equation} \chi_{\alpha\beta\gamma\mu} = \eta\delta_{\alpha\gamma}\delta_{\beta\mu} + \frac{L^2}{12}\gamma_0\left(\mathcal Q_{\alpha\gamma}\delta_{\beta\mu} - \mathcal S_{\alpha\beta\gamma\mu}\right), \label{eq:generalized_viscosity} \end{equation} is the anisotropic viscosity tensor, \begin{equation} A^{(\mathcal Q)} = \sigma_{10}-\sigma_0\frac{\gamma_1}{\gamma_0} +\frac{L^2}{12}K_0 \label{eq:AQ} \end{equation} is the nematic activity coefficient, and \begin{equation} \mathbf\Sigma^\mathrm{(S)}_{\alpha\beta} = \bar{\mathbf\Sigma}_{\alpha\beta} -\left( \mathcal Q_{\alpha\gamma}\delta_{\beta\mu} - \mathcal S_{\alpha\beta\gamma\mu} \right)\mathcal E_{\gamma\mu} \label{eq:steric_stress_final} \end{equation} is the steric stress tensor. Together Eqs.~{(\ref{eq:smoluchowski}, \ref{eq:kinetic_angular}, \ref{eq:v_final}, \ref{eq:stress_final})} define a full kinetic theory for the highly crosslinked active network. \section{Designing materials by choosing crosslinks} \label{sse:macro} Eqs.~{(\ref{eq:smoluchowski}, \ref{eq:kinetic_angular}, \ref{eq:stress_final}, \ref{eq:v_final})} define a full kinetic theory for highly crosslinked active networks. This theory has the same active stresses known from symmetry-based theories for active materials\cite{marchetti2013hydrodynamics, kruse2005generic, furthauer2012active} and thus can give rise to the same rich phenomenology. Since our framework derives these stresses from microscale properties of the constituents of the material, it enables us to make predictions on how the microscopic properties of the network constituents affect its large scale behavior. We first discuss how motor properties set the crosslink moments in Eq.~{(\ref{eq:generic_force_density})}. We then study how these crosslink properties impact the large scale properties of the material. \subsection{Tuning Crosslink-Moments} \begin{figure}[h] \centering \includegraphics[width=0.75\columnwidth]{MotorTypes.pdf} \caption{(a, b) Populations of crosslink heads are characterized by the density with which they bind a filament along its arclength $s$ and the speed at which they move when force free.
Two different head types, one with non-uniform speed but uniform density (a), another with uniform speed and non-uniform density (b), are shown. In (c) we list some possible crosslink heads. Red and blue lines illustrate the change of crosslink speed and density with $s$, respectively. In (d) we illustrate example crosslinks which consist of two heads and a linker.} \label{fig:motor_types} \end{figure} The coefficients in Eq.~{(\ref{eq:generic_force_density})} arise from a distribution of active and passive crosslinks that act between filaments. Consider an ensemble of crosslinking molecules, each consisting of two heads $a$ and $b$, joined by a spring-like linker; see Fig.~{\ref{fig:motor_types}}. For any small volume in an active network, we can count the number densities $\xi_a(s)$ and $\xi_b(s)$ of $a$ and $b$ heads of doubly-bound crosslinks that are attached to a filament at arc-length position $s$. In an idealized experiment $\xi_a(s)$ and $\xi_b(s)$ could be determined by recording the positions of motor heads on filaments. The number density $\xi_{ab}(s_i, s_j)$ of $a$ heads at position $s_i$ on microtubule $i$ connected to $b$ heads at position $s_j$ on microtubule $j$ is then given by \begin{eqnarray} \xi_{ab}(s_i, s_j) = \frac{\xi_a(s_i)\xi_b(s_j)} {N_b^{(i)}(s_i)}, \label{eq:binding_xiab} \end{eqnarray} where $N^{(i)}_b(s_i)$ counts the $b$ heads that an $a$-head attached at position $s_i$ on filament $i$ could be connected to, given the crosslink size. It obeys \begin{equation} N^{(i)}_b(s_i) = \sum\limits_{k\ne i} \int\limits_{-L/2}^{L/2} ds_k \int\limits_{\Omega(\mathbf x_i + s_i\mathbf p_i)}d\mathbf x^3 \xi_b(s_k)\delta(\mathbf x_k + s_k \mathbf p_k -\mathbf x). \end{equation} Analogous definitions for $\xi_{ba}(s_i, s_j)$ and $N_a^{(i)}(s_i)$ are implied. It follows naturally that $\xi(s_i, s_j) =\xi_{ab}(s_i, s_j) + \xi_{ba}(s_i, s_j)$ is the total number density of crosslinks acting between filaments $i$ and $j$ at the arclength positions $s_i$, $s_j$. Now let $V_a(s), V_b(s)$ be the load-free velocities of motor heads $a, b$ moving along filaments. Here, $V_a(s), V_b(s)$ are functions of the arc-length position $s$. Like $\xi_a$ and $\xi_b$, they are in principle measurable. With these definitions, the force per unit surface that attached motors exert is \begin{eqnarray} &&\mathbf f_{ij} = -\Gamma \xi(s_i, s_j) \left(\mathbf v_i +s_i\dot{\mathbf p}_i - \mathbf v_j -s_j\dot{\mathbf p}_j \right) \nonumber\\ &&-\kappa \xi(s_i, s_j)\left(\mathbf x_i +s_i\mathbf p_i - \mathbf x_j -s_j\mathbf p_j \right) \nonumber\\ &&-\Gamma \left(\left[\xi_{ab}(s_i, s_j)V_a(s_i) + \xi_{ba}(s_i, s_j)V_b(s_i)\right]\mathbf p_i \right) \nonumber\\ &&+\Gamma \left(\left[\xi_{ab}(s_j, s_i)V_a(s_j) + \xi_{ba}(s_j,s_i)V_b(s_j)\right]\mathbf p_j \right), \end{eqnarray} where $\Gamma$ is an effective linear friction coefficient between the two attachment points and $\kappa$ is an effective spring constant. They depend on the microscopic properties of motors and filaments, and on the concentrations of both and of their regulators. In general, $\Gamma$ and $\kappa$ are second rank tensors, which depend on the relative orientations of filaments. Here we take them to be scalar, for simplicity and consistency with earlier assumptions.
By comparing to Eq.~{(\ref{eq:generic_force_density})} we identify \begin{equation} \gamma(s_i, s_j) = -\Gamma \xi(s_i,s_j), \label{eq:gamma_def} \end{equation} \begin{equation} K(s_i, s_j) = -\kappa \xi(s_i,s_j), \label{eq:K_def} \end{equation} and \begin{eqnarray} \sigma(s_i, s_j) &=& -\Gamma \xi_{ab}(s_i, s_j)V_a(s_i) -\Gamma \xi_{ba}(s_i,s_j)V_b(s_i). \label{eq:sigma_def} \end{eqnarray} \begin{table} \begin{centering} \hskip-1.25cm \begin{tabular}{|c|c|c|c|c|c|} \hline &$\gamma_0$, $K_0$ & $\gamma_1$, $K_1$ & $\sigma_0$& $\sigma_{10}$& $\sigma_{01}$ \\ \hline \includegraphics[scale=.75]{csu.pdf} \begin{tabular}{c} symmetric \\ uniform \\ non-motile \end{tabular} & yes & no & no & no & no \\ \hline \includegraphics[scale=.75]{csnu.pdf} \begin{tabular}{c} non-symmetric \\ uniform \\ non-motile \end{tabular} & yes & yes & no & no & no \\ \hline \includegraphics[scale=.75]{msu.pdf} \begin{tabular}{c} symmetric \\ uniform \\ motor \end{tabular} & yes & no & yes & no & no \\ \hline \includegraphics[scale=.75]{msnu.pdf} \begin{tabular}{c} symmetric \\ non-uniform \\ motor \end{tabular} & yes & yes & yes & \begin{tabular}{c} yes \\ $\sigma_{10}=\sigma_1$ \end{tabular} & \begin{tabular}{c} yes \\ $\sigma_{01}=\sigma_1$ \end{tabular}\\ \hline \includegraphics[scale=.75]{mnsnu.pdf} \begin{tabular}{c} non-symmetric \\ non-uniform \\ motor \end{tabular} & yes & yes & yes & yes & yes \\ \hline \end{tabular} \end{centering} \caption{Table summarizing which crosslink moments different crosslink types generate.}\label{tab:motor_to_moments} \end{table} Using Eqs.~{(\ref{eq:gamma_def}, \ref{eq:K_def}, \ref{eq:sigma_def})}, we now discuss some important classes of crosslinking molecules. We consider crosslinks whose heads can be motile or non-motile, whose binding and walking properties can be uniform or non-uniform along filaments, and whose two heads can be the same (symmetric crosslink) or different (non-symmetric crosslink). Figure~{\ref{fig:motor_types}} maps how varying crosslink types can be constructed, while Table \ref{tab:motor_to_moments} lists the moments to which different classes of crosslinks contribute. {\it Non-motile crosslinks} are crosslinks that do not actively move, i.e. $V_a = V_b = 0$. Examples of non-motile crosslinks in cytoskeletal systems are actin bundlers such as fascin, or microtubule crosslinks such as Ase1p \cite{alberts_2002_book}. While these types of crosslinks are not necessarily passive, since the way they bind or unbind can break detailed balance, the fact that their attached heads do not walk along filaments implies that $\sigma_0 = \sigma_{10} = \sigma_{01} = 0$. Non-motile crosslinks change the properties of the material by contributing to the crosslink moments $\gamma_0, \gamma_1$ and $K_0, K_1$. Some non-motile crosslinks bind non-specifically along the filaments they interact with, giving uniform distributions. For these $\gamma_1 = K_1 = 0$. Others preferentially associate to filament ends, and thus bind non-uniformly. For these $\gamma_1$ and $K_1$ are positive. Note that the two heads of a non-motile crosslink can be identical (symmetric) or not (non-symmetric). Given the symmetric structure of Eqs.~{(\ref{eq:gamma_def}, \ref{eq:K_def})}, a non-symmetric non-motile crosslink mechanically behaves the same as a symmetric one. {\it Symmetric motor crosslinks} are motor molecules whose two heads have identical properties, i.e. $V_a = V_b = V$ and $\xi_a = \xi_b = \xi$.
Examples are the microtubule motor molecule Eg-5 kinesin, and the Kinesin-2 motor construct popularized by many in-vitro experiments \cite{sanchez2012spontaneous}. Symmetric motors contribute to the large-scale properties of the material by generating motor forces. In particular they contribute to the crosslink moments $\sigma_0$, $\sigma_{10}$, and $\sigma_{01}$. From Eq.~{(\ref{eq:sigma_def})} it is easy to see that $\sigma_0 = V_0 \gamma_0 +V_1 \gamma_1/L^2$, where we defined the moments of the motor velocity $V(s_i, s_j)$ using Eq.~{(\ref{eq:moment_definition})}. Some symmetric motor proteins preferentially associate to filament ends and display end-clustering behavior, or walk at a speed that depends on the position at which they are attached to filaments. Motors that do either of these also generate a contribution to $\sigma_{10}$ and $\sigma_{01}$. Since both motor heads are identical we have $\sigma_{10} = \sigma_{01} \equiv \sigma_1$ and from Eq.~{(\ref{eq:sigma_def})} we find that $\sigma_1 = \gamma_1 V_0 +V_1 \gamma_0$. {\it Non-Symmetric motor crosslinks} are motor molecules whose two heads have differing properties. An example is the microtubule-associated motor dynein, which consists of a non-motile end that clusters near microtubule minus-ends and a walking head that binds to nearby microtubules whenever they are within reach \cite{foster2015active, foster2017connecting}. A consequence of motors being non-symmetric is that $\sigma_{10} \ne \sigma_{01}$. Since non-symmetric motors can break the symmetry between the two heads in a variety of ways, we spell out the consequences for a few cases. Let us first consider a crosslinker with one head $a$ that acts as a passive crosslink ($V_a = 0$) and a second head $b$ that acts as a motor, moving with the stepping speed $V_b=V$. For such a crosslink $\sigma_0 = \gamma_0 V_0/2$. If both heads are distributed uniformly along filaments and their $V$ is position independent, then $\sigma_{01} = \sigma_{10} = 0$. If the walking $b$-head is distributed nonuniformly ($\xi_b =\xi_b(s), \xi_a=$ constant), then $\sigma_{10} = \gamma_1 V_0$ and $\sigma_{01} = 0$. Conversely, if the static $a$-head has a patterned distribution ($\xi_a =\xi_a(s), \xi_b=$ constant), then $\sigma_{01} = \gamma_1 V_0, \sigma_{10} = 0$. Finally, we note that if both heads are distributed uniformly along the filament ($\xi_a =\xi_b = $constant), but the walking $b$-head of the motor changes its speed as a function of position, then $\sigma_{10} = V_1 \gamma_0/2$ and $\sigma_{01} = 0$. Note that stresses and forces are additive. Thus it may be possible to engineer specific crosslink moments by using mixtures of different crosslinkers. For instance, adding a non-motile crosslink with specific binding preferences to a filament solution might allow one to change just $\gamma_0$ and $\gamma_1$ in a targeted way. We will elaborate on some of these possibilities in what follows. \subsection{Tuning viscosity} We now discuss how microscopic processes shape the overall magnitude of the viscosity tensor $\chi$. From Eq.~{(\ref{eq:generalized_viscosity})} and remembering that $\eta = (3R^2/10)\,\gamma_0$, it is apparent that the overall viscosity of the material is proportional to the number of crosslinking interactions and their resistance to the relative motion of filaments, quantified by the friction coefficient $\rho^2\gamma_0$.
Furthermore, $\gamma_0$ itself scales with the square of the filament length $L$ and the cube of the crosslink size $R$ (see the definition in Appendix~\ref{app:coeffcients}), which, with $\rho^2$, sets the overall scale of the viscosity as $\rho^2L^2R^3$. We next show how micro-scale properties of network constituents shape the anisotropy of $\chi$; see Eq.~{(\ref{eq:generalized_viscosity})}. To characterize this we define the anisotropy ratio $a$ as \begin{equation} a = \frac{L^2\gamma_0}{12\eta} = \frac{5}{18}\frac{L^2}{R^2}, \end{equation} which is the ratio of the magnitudes of the isotropic part of $\chi_{\alpha\beta\gamma\mu}$, that is $\eta \delta_{\alpha\gamma}\delta_{\beta\mu}$, and its anisotropic part $(\gamma_0 L^2/12)(\mathcal Q_{\alpha\gamma}\delta_{\beta\mu}-\mathcal{S}_{\alpha\beta\gamma\mu})$. Evidently, the anisotropy ratio will be large if the typical filament length $L$ is large compared to the motor interaction range $R$. This is typically the case in microtubule based systems, as microtubules are often microns long and interact via motor groups that are a few tens of nanometers in scale \cite{alberts_2002_book}. Conversely, in actomyosin systems filaments are often shorter (hundreds of nanometers) and motor clusters, called mini-filaments, can have sizes similar to the filament lengths \cite{alberts_2002_book}. The anisotropy of the viscous stress is not exclusive to active systems and has been described before in the context of similar passive systems, such as liquid crystals and liquid crystal polymers \cite{de1993physics,chaikin2000principles,doi1988theory}. \subsection{Tuning the active self-strain} \begin{table} \hskip-1.3cm \begin{centering} \begin{tabular}{|c|c|c|} \hline Mixture & Active Strain, $\sigma_0/\gamma_0$ & \begin{tabular}{c} Active Pressure, $\Pi^\mathrm{(A)}$ \\ Axial stress, $\bar{S}$ \end{tabular} \\ \hline \includegraphics[scale=.5]{msu.pdf}& $\frac{\sigma_0}{\gamma_0}=V_0$ & no \\ \hline \includegraphics[scale=.5]{msnu.pdf}& $\frac{\sigma_0}{\gamma_0}=V_0$ & no \\ \hline \includegraphics[scale=.5]{mnsnu.pdf}& $\frac{\sigma_0}{\gamma_0}=\frac{V^a_0+V^b_0}{2}$ & \includegraphics[scale=.4]{pimns.pdf} \\ \hline \includegraphics[scale=.5]{msu.pdf} + \includegraphics[scale=.5]{csu.pdf} & \quad \includegraphics[scale=.4]{slgr.pdf} \quad & no \\ \hline \begin{tabular}{c} \includegraphics[scale=.5]{msu.pdf} + \includegraphics[scale=.5]{csnu.pdf} \\ or \\ \includegraphics[scale=.5]{msnu.pdf} + \includegraphics[scale=.5]{csu.pdf} \end{tabular} & \includegraphics[scale=.4]{slgr.pdf} & \quad \includegraphics[scale=.4]{pims.pdf} \quad \\ \hline \end{tabular} \end{centering} \caption{Active pressure and strain generated by different crosslink types and mixtures. In the plots pertaining to the active strain rate, $\gamma_0 =\gamma_0^{(M)}+\gamma_0^{(X)}$, where $\gamma_0^{(M)}$ denotes the part of $\gamma_0$ induced by motile crosslinkers and $\gamma_0^{(X)}$ the contribution from non-motile crosslinkers. The filament sliding velocity expected in a stress-free system is $V_{slide} = \sigma_0/\gamma_0$; it is given in units of the force-free speed of the motile crosslinks and describes the expected speed of filament sliding in the material.
Moreover, $\bar S= |\Pi^\mathrm{(A)}/q|$ is the magnitude of the motor-stepping induced axial stress, i.e.\ of the axial stress in the limit $K_0\to0$.} \label{tab:motor_to_effects} \end{table} The viscous stress in highly crosslinked networks is given by $\chi:\mathcal U$, where $\mathcal U = \nabla \mathbf v + (\sigma_0/\gamma_0)\nabla\mathbf P$ takes the role of the strain-rate in passive materials, but with an active contribution $(\sigma_0/\gamma_0)\nabla\mathbf P$. Thus, internally driven materials can exhibit active self-straining. In particular, a material in which each filament moves with the velocity $\mathbf v_i = -\sigma_0/\gamma_0\mathbf p_i + \mathbf C$, where $\mathbf C$ is a constant vector that sets the net speed of the material in the frame of reference, has $\mathcal U =0$, and thus zero viscous stress. In such a material filaments can slide past each other at a speed $\sigma_0/\gamma_0$ without stressing the material. Notably, the sliding speed is independent of the local polarity and nematic order of the material \cite{furthauer2019self}. The crosslink moments that contribute to the active straining behavior are $\sigma_0$ and $\gamma_0$. In active filament networks with a single type of crosslink $\sigma_0/\gamma_0\simeq V_0$, regardless of crosslink concentration. Thus for single-crosslinker systems, the magnitude of self-straining is independent of the motor concentration \cite{furthauer2019self}. Self-straining can be tuned in mixtures of crosslinks. For instance, the addition of a non-motile crosslinker can increase $\gamma_0$, while leaving $\sigma_0$ unchanged. In this way self-straining can be relatively suppressed. In Table~\ref{tab:motor_to_effects} we plot the expected active strain-rate for materials actuated by mixtures of non-motile and motor crosslinks. In such a material $\gamma_0 = \gamma_0^{(M)}+\gamma_0^{(X)}$, where $\gamma_0^{(M)}$ denotes the part of $\gamma_0$ induced by motile crosslinkers and $\gamma_0^{(X)}$ denotes that from non-motile crosslinkers. The resulting velocity $V_{slide}$ with which a filament slides through the material will scale as $V_{slide}\simeq\gamma_0^{(M)}/( \gamma_0^{(M)}+\gamma_0^{(X)})$; see Table~\ref{tab:motor_to_effects}. \subsection{Tuning the Active Pressure} Many active networks spontaneously contract \cite{foster2015active} or expand \cite{sanchez2012spontaneous}. We now study the motor properties that enable these behaviors. An active material with stress-free boundary conditions can spontaneously contract if its self-pressure, \begin{equation} \Pi = \mathrm{Tr}\left( \mathbf\Sigma + \rho^2\chi:\mathcal U \right), \end{equation} is negative. Conversely, the material can spontaneously extend if $\Pi$ is positive. We can also write \begin{equation} \Pi = \Pi^\mathrm{(A)} + \Pi^\mathrm{(S)}, \end{equation} where $\Pi^\mathrm{(S)} = \mathrm{Tr}\left(\mathbf \Sigma^\mathrm{(S)} \right)$ is the sterically mediated pressure, and $\Pi^\mathrm{(A)}$ is the activity driven pressure (or active pressure) given by \begin{equation} \Pi^\mathrm{(A)} = -\rho^2\left(\alpha K_0 + A^{(\mathcal Q)} - A^{(\mathbf P)} |\mathbf P|^2 \right); \label{eq:mpress} \end{equation} see Eq.~{(\ref{eq:stress_final})}. Here and in the following we approximated $\mathrm{Tr}(\mathcal T\cdot\mathbf P)\simeq |\mathbf P|^2$ for simplicity. We ask which properties of crosslinks set the active pressure and how its sign can be chosen.
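As a numerical companion to the discussion that follows, the short sketch below simply evaluates Eq.~{(\ref{eq:mpress})} together with Eqs.~{(\ref{eq:AQ})} and {(\ref{eq:App})}. All numerical values are illustrative placeholders rather than measured parameters, and the example moments anticipate the non-symmetric motor discussed below.
\begin{verbatim}
# Evaluate the active pressure, Eq. (eq:mpress):
#   Pi_A = -rho^2 ( alpha*K0 + A_Q - A_P*|P|^2 ),
# with A_Q = sigma10 - sigma0*gamma1/gamma0 + (L^2/12)*K0  (eq:AQ)
# and  A_P = sigma01 - sigma0*gamma1/gamma0                (eq:App).
# All numbers below are illustrative placeholders.

def active_pressure(rho, alpha, L, K0, gamma0, gamma1,
                    sigma0, sigma10, sigma01, P):
    A_Q = sigma10 - sigma0 * gamma1 / gamma0 + (L ** 2 / 12.0) * K0
    A_P = sigma01 - sigma0 * gamma1 / gamma0
    return -rho ** 2 * (alpha * K0 + A_Q - A_P * P ** 2)

# Example: a non-symmetric motor with one static, end-binding head
# and one uniformly walking head, for which the text gives
# sigma0 = gamma0*V0/2, sigma10 = gamma1*V0, sigma01 = 0.
gamma0, gamma1, V0 = 1.0, 0.3, 1.0
for P in (0.0, 0.5, 1.0):
    print(P, active_pressure(rho=1.0, alpha=0.0, L=1.0, K0=0.0,
                             gamma0=gamma0, gamma1=gamma1,
                             sigma0=0.5 * gamma0 * V0,
                             sigma10=gamma1 * V0, sigma01=0.0, P=P))
# The magnitude doubles between P = 0 and P = 1, as discussed below.
\end{verbatim}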
We first discuss how interaction elasticity impacts the active pressure $\Pi^{(A)}$ in the absence of motile crosslinks, i.e. when $\sigma_0 = \sigma_{10} = \sigma_{01} = 0$. In this case, Eq.~{(\ref{eq:mpress})} simplifies to $\Pi^\mathrm{(A)} = -\rho^2(\alpha + L^2/12)K_0$, where we used Eq.~{(\ref{eq:AQ})}. Thus, even in the absence of motile crosslinks, active pressure can be generated. This can be tuned by changing the effective spring constant $K_0$. We note that $\Pi^\mathrm{(A)}+\Pi^\mathrm{(S)}=0$ when crosslink binding-unbinding obeys detailed balance and the system is in equilibrium. The moment $K_0$ can have either sign when detailed balance is broken. Microscopically this effect could be achieved, for instance, by a crosslinker in which active processes change the rest length of a spring-like linker between the two heads once they bind to filaments. We next discuss the contributions of motor motility to the active pressure. To start, we study a simplified apolar (i.e. $\mathbf P=0$) system where $K=0$. In such a system the active pressure is given by \begin{equation} \Pi^\mathrm{(A)} = -\rho^2\left(\sigma_{10} - \sigma_0\frac{\gamma_1}{\gamma_0} \right). \label{eq:pi_motors_only_nema} \end{equation} We ask how motor properties set the value and sign of this parameter combination. We first point out that generating active pressure by motor stepping requires that either $\sigma_{10}$ or $\gamma_1$ is non-zero. This means that generating active pressure requires breaking the uniformity of binding or walking properties along the filament. A crosslink which has two heads that act uniformly can thus not generate active pressure on its own. However, when operating in conjunction with a passive crosslink that preferentially binds either end of the filament, the same motor can generate an active pressure. This pressure will be contractile if the non-motile crosslinks couple the end that the motor walks towards ($\gamma_1$ and $\sigma_0$ have the same sign) and extensile if they couple the other ($\gamma_1$ and $\sigma_0$ have opposite signs). In summary, a motor crosslink that acts the same everywhere along the filaments it couples does not generate active pressure on its own. However, it can do so when mixed with a passive crosslink that acts non-uniformly. We next ask if a system with just one type of non-uniformly acting crosslink can generate active pressure. To start, consider {\it symmetric motor crosslinks}, i.e. a motor consisting of two heads with identical (but non-uniform) properties. We then have $\sigma_{01} = \sigma_{10} = \gamma_1 V_0 + \gamma_0 V_1$ and $\sigma_0 = V_0 \gamma_0 + V_1 \gamma_1 /L^2$. Using this in Eq.~{(\ref{eq:pi_motors_only_nema})} and dropping the term proportional to $\gamma_1^2$ (higher order in this case), we find that such symmetric motor crosslinks generate no contribution to the active pressure when operating alone. However, when operating in concert with a non-motile crosslink, even one that binds filaments uniformly, they can generate an active pressure. The sign of the active pressure is set by the particular asymmetry of motor binding and motion. The system is contractile if motors cluster or speed up near the end towards which they walk, and extensile if they cluster or accelerate near the end that they walk from.
Our prediction that many motor molecules can only generate active pressure in the presence of an additional crosslink might explain observations on acto-myosin gels, which have been shown to contract only when a passive crosslink operates in concert with the motor myosin \cite{ennomani2016architecture}. We next ask if {\it non-symmetric motor crosslinks} can generate active pressure. Consider a crosslink with one immobile and one walking head. For such a crosslink $\sigma_0 = \gamma_0 V_0/2$. If the immobile head preferentially binds near one filament end, while the walking head attaches everywhere uniformly, then $\sigma_{10} = \gamma_1 V_0$ and $\sigma_{01} = 0$. For such a motor we predict an active pressure proportional to $V_0/2$. The active pressure will be contractile if the static ends bind near the end that the motor head walks to, and extensile if the situation is reversed. The motor dynein has been suggested to consist of an immobile head that attaches near microtubule minus ends and a walking head that grabs other microtubules and walks towards their minus ends. Our theory suggests that this should lead to contractions, which is consistent with experimental findings \cite{ennomani2016architecture}. After having discussed the effects of motor stepping on the active pressure in systems with $\mathbf P = 0$, we ask how the situation changes in polar systems. In polar systems an additional contribution, $-(\sigma_{01} - \frac{\gamma_1}{\gamma_0}\sigma_0)|\mathbf P|^2$, exists. For symmetric motors, where $\sigma_{01}=\sigma_{10}$, this implies that the active pressure generated by a network of symmetric motors and passive crosslinks is strongest in apolar regions of the system and subsides in polar regions, since the polar and apolar contributions to the active stress appear in Eq.~{(\ref{eq:stress_final})} with opposite signs. We plot the magnitude of the active pressure, $\Pi^\mathrm{(A)} \simeq 1 -|\mathbf P|^2$, as a function of $|\mathbf P|$ in Table \ref{tab:motor_to_effects}. This is reminiscent of the behavior predicted in the framework of a sparsely crosslinked system in \cite{gao2015multiscalepolar}. In contrast, the effects of non-symmetric motors can be enhanced in polar regions. Consider again the example of a motor with one static head that preferentially binds near one of the filament ends and a mobile head that acts uniformly. For this motor $\sigma_{10} = \gamma_1 V_0$, $\sigma_{01} = 0$ and $\sigma_0 = \gamma_0 V_0/2$. It is thus predicted to generate twice the amount of active pressure in a polar network as in an apolar one, with $\Pi^\mathrm{(A)}\simeq (1+|\mathbf P|^2)/2$; see Table~\ref{tab:motor_to_effects} for a plot of the active pressure $\Pi^\mathrm{(A)}$ as a function of $|\mathbf P|$ (a short worked check of this prediction is given at the end of this subsection). This is reminiscent of the motor dynein in spindles, which is thought to generate the most prominent contractions near the spindle poles, which are polar \cite{brugues2012nucleation}. Finally, we ask how filament length affects the active pressure. Looking at the definitions of the nematic and polar activities, Eqs.~{(\ref{eq:AQ}, \ref{eq:App})}, and remembering the definition and scaling of the coefficients therein (see Appendix~\ref{app:coeffcients}), we notice that the active pressure scales as $L^4$. Since the viscosity scales with $L^2$, this predicts that systems with shorter filaments contract more slowly than systems with longer filaments. This effect has been observed for dynein-based contractions in vitro \cite{foster2017connecting}.
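Returning to the non-symmetric motor with one static, end-binding head and one uniformly walking head, the prediction above can be verified by direct substitution of $\sigma_0 = \gamma_0 V_0/2$, $\sigma_{10} = \gamma_1 V_0$ and $\sigma_{01} = 0$ into Eqs.~{(\ref{eq:AQ}, \ref{eq:App}, \ref{eq:mpress})}, setting $K_0 = 0$ (a short check using only relations already stated above): \begin{eqnarray} A^{(\mathcal Q)} &=& \gamma_1 V_0 - \frac{\gamma_0 V_0}{2}\frac{\gamma_1}{\gamma_0} = \frac{\gamma_1 V_0}{2}, \qquad A^{(\mathbf P)} = -\frac{\gamma_1 V_0}{2}, \nonumber\\ \Pi^\mathrm{(A)} &=& -\rho^2\left(A^{(\mathcal Q)} - A^{(\mathbf P)}|\mathbf P|^2\right) = -\rho^2\,\frac{\gamma_1 V_0}{2}\left(1 + |\mathbf P|^2\right), \end{eqnarray} which makes both the factor $V_0/2$ and the doubling of the active pressure between $|\mathbf P| = 0$ and $|\mathbf P| = 1$ explicit.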
\subsection{Tuning axial stresses, buckling and aster formation} Motors in active filament networks generate anisotropic (axial) contributions to the stress, which can lead to large scale instabilities in materials with nematic order \cite{simha2002hydrodynamic, kruse2005generic, saintillan2008instabilitiesPRL, furthauer2012taylor}. At larger active stresses, nematics are unstable to splay deformations in systems that are contractile along the nematic axis, and to bend deformations in systems that are extensile along the nematic axis \cite{kruse2005generic, marchetti2013hydrodynamics}. In both cases, the instabilities set in when the square root of the ratio of the elastic (bend or splay) modulus that opposes the deformation to the active stress (also called the Fr\'eedericksz length) becomes comparable to the system size. We now discuss which motor properties control the emergence of these instabilities, and how a system can be tuned to exhibit bend or splay deformations. For this, we ask how axial stresses, which are governed by the activity parameters $\mathcal A^{(\mathcal Q)}$ and $\mathcal{A}^{(\mathbf P)}$, are set in our system. The magnitude $S$ of the axial stress along the nematic axis is given by \begin{equation} S = -\rho^2 q\left( \mathcal A^{(\mathcal Q)} - \mathcal A^{(\mathbf P)}|\mathbf P|^2\right), \label{eq:axial_mag} \end{equation} where we define the nematic order parameter $q$ as the largest eigenvalue of ${\mathcal Q} - \mathrm{Tr}(\mathcal Q)\mathcal{I}/3$; see Eq.~{(\ref{eq:stress_final})}. The axial stress is contractile along the nematic axis if $S$ is positive and extensile if $S$ is negative. Comparing Eqs.~{(\ref{eq:axial_mag}, \ref{eq:mpress})} we find that $S = q(\Pi^\mathrm{(A)} + \rho^2\alpha K_0)$; in the limit $K_0\to 0$, where motor elasticity is negligible, $S = q\Pi^\mathrm{(A)}$. We discussed how $\Pi^\mathrm{(A)}$ is set for different types of crosslinks in the previous section; see Table \ref{tab:motor_to_effects}. The prototypical active nematic \cite{sanchez2012spontaneous} consists of apolar bundles of microtubules actuated by kinesin motors and is axially extensile. In our theory, an axially extensile stress (i.e. $S<0$) in an apolar system ($\mathbf P = 0$) implies that $\mathcal A^{(\mathcal Q)} = \sigma_{10} -\sigma_0\frac{\gamma_1}{\gamma_0} +\frac{L^2}{12}K_0 >0$. This can be achieved either by crosslinks that act uniformly (i.e. $\sigma_{10} -\sigma_0\frac{\gamma_1}{\gamma_0} = 0$) and generate a spring-like response that induces $K_0>0$, or by crosslinks with non-uniform motor stepping behavior, which generates $\sigma_{10} -\sigma_0\frac{\gamma_1}{\gamma_0}>0$. The latter implies either a non-symmetric motor crosslink or the presence of more than one kind of crosslink, as was discussed more extensively earlier in the context of active pressure. At high enough active stress we expect systems with negative $S$ to become unstable towards buckling. This has been observed in \cite{senoussi2019tunable,strubing2019wrinkling}. Conversely, axially contractile behavior can be achieved if either $K_0<0$ or $\sigma_{10} -\sigma_0\frac{\gamma_1}{\gamma_0}<0$. At high enough active stress, such systems can become unstable towards an aster-forming transition, as seen in \cite{foster2015active}. Note that $S = q(\Pi^\mathrm{(A)} + \rho^2\alpha K_0)$ implies that $S$ and $\Pi^\mathrm{(A)}$ need not even have the same sign if $K_0\neq 0$.
In particular, when $\Pi^\mathrm{(A)}$ and $K_0$ have opposite signs, systems can exist that are axially extensile while being bulk contractile, and vice versa. We finally note that the magnitude of the axial stresses changes when the system transitions from apolar to polar if the axial stresses originate from motor stepping, but not if they originate from the effective spring-like behavior of motors, since $\mathcal A^{(\mathcal Q)}$, but not $\mathcal A^{(\mathbf P)}$, depends on $K_0$; see Eqs.~{(\ref{eq:App}, \ref{eq:AQ})}. In systems in which the active stress is generated by the stepping of symmetric motor crosslinks, $|S|$ is highest in the nematic apolar phase ($|\mathbf P|=0$), while systems made from non-symmetric crosslinks generate the most stress when polar ($|\mathbf P|=1$); see Table \ref{tab:motor_to_effects}. This opens the possibility that a system can overcome the threshold towards an instability when its other dynamics drives it from nematic apolar to polar arrangements, or vice versa. We suggest that the buckling instabilities discussed in \cite{senoussi2019tunable,strubing2019wrinkling} should be interpreted in this light. \section{Discussion} \label{sse:discussion} In this paper, we asked how the properties of motorized crosslinkers that act between the filaments of a highly crosslinked polymer network set the large scale properties of the material. For this, we first developed a method for quantitatively stating what the properties of motorized crosslinks are. We introduced a generic phenomenological model for the forces that crosslink populations exert between the filaments which they connect; see Eq.~{(\ref{eq:generic_force_density})}. This model describes forces that are (i) proportional to the distance ($K$) and (ii) proportional to the relative rate of displacement ($\gamma$). Finally, (iii) it describes the active motor forces ($\sigma$) that crosslinks can exert. Importantly, the crosslinker forces ($K, \gamma, \sigma$) can depend on the positions at which the crosslink attaches to the two filaments it couples. This allows the description of a wide range of motor properties, such as end-binding affinity, end-dwelling, and even the description of non-symmetric crosslinks that consist of motors with two heads of different properties. We next derived the stresses and forces generated on large time and length scales, given our phenomenological crosslink model. We find that the emergent material stresses depend only on a small set of moments of the crosslink properties; see Eq.~{(\ref{eq:moment_definition})}. These moments are effectively descriptions of the expectation value of the force exerted between two filaments given their positions and relative orientations. The resulting stresses, forces, and filament reorientation rates (Eqs.~{(\ref{eq:stress_final}, \ref{eq:v_final}, \ref{eq:kinetic_angular})}) recover the symmetries and structure predicted by phenomenological theories for active materials, but beyond that provide a way of identifying how specific micro-scale processes set specific properties of the material. We discussed how four key aspects of the dynamics of highly crosslinked filament networks can be tuned by the micro-scale properties of motors and filaments. In particular, we discussed (i) how the highly anisotropic viscosity of the material is set; (ii) how active self-straining is regulated; (iii) how contractile or extensile active pressure can be generated; and (iv) which motor properties regulate the axial active nematic and polar stresses, which can lead to large scale instabilities.
Our theory makes specific predictions for the effects of distinct classes of crosslinkers on cytoskeletal networks. Intriguingly, these predictions suggest explanations for phenomena that have been observed experimentally but are currently poorly understood. Experiments have shown that mixtures of actin filaments and myosin molecular motors can spontaneously contract, but only in the presence of an additional passive crosslinker \cite{ennomani2016architecture}. Our theory allows us to speculate on explanations for this observation. In the crosslink classification that we introduced, myosin, which forms large mini-filaments, is a symmetric motor crosslink; see Fig.~{(\ref{fig:motor_types})}. We find that symmetric motor crosslinks, which have two heads that act the same, can generate contractions only in the presence of an additional crosslinker that helps break the balance between $(\gamma_1/\gamma_0)\,\sigma_0$ and $\sigma_{01}$ in the active pressure; see Eq.~{(\ref{eq:pi_motors_only_nema})} and Table~\ref{tab:motor_to_effects}. Further work will be needed to explore whether this connection can be made quantitative. A second observation that was poorly understood prior to this work is the sliding motion of microtubules in meiotic Xenopus spindles, the structures that segregate chromosomes during meiotic cell division. These spindles consist of inter-penetrating arrays of anti-parallel microtubules, which are nematic near the chromosomes, and highly polar near the spindle poles. In most of the spindle, the two anti-parallel populations of microtubules slide past each other at near-constant speed, driven by the molecular motor Eg-5 kinesin, regardless of the local network polarity. Our earlier work \cite{furthauer2019self} showed that active self-straining explains this polarity-independent motion. The theory that we develop here provides the tools to explore the behavior of different motors and motor mixtures, which will allow us to investigate the mechanisms by which different motors in the spindle shape its morphology. This will help to explain complex behaviors of spindles, such as the barreling instability \cite{oriola2020active} that gives spindles their characteristic shape, or the observation that spindles can fuse \cite{gatlin2009spindle}. Because our theory provides specific predictions on how changing motor properties changes the properties of the material which they constitute, it can enable the design of new active materials. We can predict the expected large scale properties of a material into which an experimentalist has introduced engineered crosslinks with controlled properties. With current technology, an experimentalist could engineer a motor that preferentially attaches one of its heads to a specified location on a filament, while its walking head reaches out into the network. Or, as has already been demonstrated in studies by the Surrey Lab \cite{roostalu2018determinants}, differences between the rates of filament growth and motor walking could be exploited to generate different dynamic motor distributions on filaments. This design space will provide ample room to experimentally test our predictions, and to use them to engineer systems with desirable properties. Finally, recent advances in optical control of motor systems \cite{ross2019controlling} could be used to provide spatial control. The theory presented here does, however, make some simplifications.
Importantly, we neglected the fact that the distribution of bound crosslinks along the filaments in general depends on the configuration of the network. This means that the crosslink moments can themselves be functions of the local network order parameters. Effects like this have been argued to be important, for instance, when explaining the transition from contractile to extensile stresses in ordering microtubule networks \cite{lenz2020reversal} and the physics of active bundles \cite{kruse2003self}. Such effects can be recovered by making the interactions $K, \gamma, \sigma$ in the phenomenological crosslink force model Eq.~{(\ref{eq:generic_force_density})} functions of $\mathbf p_i$ and $\mathbf p_j$. This will be the topic of a subsequent publication. In summary, in this paper we derived a continuum theory for systems made from cytoskeletal filaments and motors in the highly crosslinked regime. Our theory makes testable predictions on the behavior of the emerging system, provides a unifying framework in which dense cytoskeletal systems can be understood from the ground up, and provides design paradigms that will enable the creation of active matter systems with desirable properties in the lab. {\bf Acknowledgements} We thank Meredith Betterton and Adam Lamson for insightful discussions. We also thank Peter J. Foster and James F. Pelletier for feedback on the manuscript. DN acknowledges support by the National Science Foundation under awards DMR-2004380 and DMR-0820484. MJS acknowledges support by the National Science Foundation under awards DMR-1420073 (NYU MRSEC), DMS-1620331, and DMR-2004469.
\section{Introduction} Most hot subdwarf stars are compact, core He-burning objects of spectral types O (sdO) and B (sdB) with a thin hydrogen-rich envelope \citep[see][for reviews]{heb09,heb16}. The bulk of sdB stars form the hot end of the horizontal branch (HB), the so-called extended horizontal branch (EHB). Due to their lack of an extended hydrogen envelope, hot subdwarf stars are not able to sustain a hydrogen-burning shell \citep{Sweigart1987}. They are thought to have a He-burning lifetime of about 100\,Myr, after which they evolve directly towards the white dwarf (WD) cooling sequence \citep{dor93}. Despite their lack of a thick hydrogen envelope, the atmospheres of most sdBs are dominated by hydrogen as a result of atomic diffusion, that is, the balance between radiative levitation and gravitational settling, damped by turbulence and mass loss \citep{mich11, hu11}. In contrast, many sdO stars are extremely He-enhanced and show almost no hydrogen in their atmospheres \citep{Stroeer2007, Nemeth2012, Fontaine2014}. Helium-rich sdO stars are thought to be the result of either a delayed He-flash at the top of the red giant branch \citep[RGB,][]{bert08} or the merging of two low-mass stars: for example, two He-WDs \citep{zhang12}. Unlike the He-poor sdB stars, these He-sdOs do not seem to be influenced by diffusion processes \citep[due to convection caused by the ionisation of He\,\textsc{ii};][]{Groth1985}. Two questions arise: will most He-sdOs evolve to become He-poor sdBs, or do they represent a distinct population? And at which point in stellar evolution does atmospheric diffusion become important? Both \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ are part of the small population of intermediately He-rich sdOB (iHe-sdOB) stars that is of special interest when trying to address these questions \citep{2012ASPC..452...41J}. They share many physical properties, which makes them a unique pair, and not only among the iHe-sdOBs. \begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{paper-lc1a.pdf} \includegraphics[width=1\columnwidth]{paper-lc1b.pdf} \includegraphics[width=1\columnwidth]{paper-lsp1c.pdf} \par\end{centering} \caption{Photometry obtained for Feige\,46 (TIC\,371813244) with TESS. \emph{\emph{Top panel}:} light curve from Sector 22 (amplitude is in percent of the mean brightness of the star) spanning 26.59\,d (638.16\,h), sampled every 120\,s. Gaps in this time series are caused by the mid-sector interruption for data download and by measurements that were removed from the light curve because of a non-optimal quality warning. \emph{\emph{Middle panel}:} close-up view of the light curve covering the first nine days, where modulations are visible. \emph{\emph{Bottom panel}:} Lomb-Scargle Periodogram of the light curve up to the Nyquist frequency limit ($\sim4167$\,$\mu$Hz). A significant periodic signal is clearly detected in the $250-500$\,$\mu$Hz range. \label{fig:TESS-F46-LSP}} \end{figure} Kinematic analyses of \object{LS\,IV$-$14$\degr$116}\ \citep{ran15} and \object{Feige\,46}\ \citep{Latour2019a} have shown that both stars are likely to be members of the Galactic halo, unlike most of the helium-rich hot subdwarfs \citep{2017MNRAS.467...68M}. Both stars show light variations attributed to pulsations. Since its light variations were discovered by \cite{ahm05}, \object{LS\,IV$-$14$\degr$116}\ remained the sole member of its class of pulsating stars, now termed V366 Aqr variables, until \cite{Latour2019a} identified similar pulsations in \object{Feige\,46}.
\cite{ahm05} identified two periods of 1950\,s and 2900\,s in the light variations of \object{LS\,IV$-$14$\degr$116}. These pulsations were confirmed in follow-up observations by \cite{jeff11} and \cite{green11}, who identified four additional periods of up to 5084\,s. Pulsational light variations in sdB stars are well established. Both pressure (p-mode) and gravity (g-mode) oscillations have been observed in hot subdwarf stars -- the former have periods of a few minutes (short periods), whereas the periods of the latter range from 30 minutes to a few hours (long periods; for recent compilations, see \citealt{2017MNRAS.466.5020H} and \citealt{2018OAst...27..157R}). The pulsations observed in He-poor sdB stars are thought to be driven by an opacity ($\kappa$-) mechanism that is related to an iron and nickel opacity bump in the thin stellar envelope. This mechanism can produce both short-period oscillations \citep{cha96,cha97} at the temperature of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ ($\sim$35\,000\,K), and long-period oscillations \citep{green03,jeff06} at lower temperatures. The detection of long periods in \object{LS\,IV$-$14$\degr$116}\ is remarkable because the $\kappa$-mechanism predicts that short-period pulsations should be excited at the high effective temperature and surface gravity of \object{LS\,IV$-$14$\degr$116}; these, however, are not observed. How the observed long-period pulsations are excited in \object{LS\,IV$-$14$\degr$116}\ remains an open question. \cite{bat18} and \cite{bert11, Miller2020} showed that gravity modes stochastically excited by He-flash driven convection are able to produce long-period pulsations similar to those observed in \object{LS\,IV$-$14$\degr$116}. This would place \object{LS\,IV$-$14$\degr$116}\ in an evolutionary state immediately following one of the first He-core flashes, subsequent to either a late hot He-flash or the merging of two He-WDs. Alternatively, \cite{saio19} showed that the pulsation of \object{LS\,IV$-$14$\degr$116}\ could also be explained by carbon and oxygen opacity bumps, but this would require very substantial C/O enrichment at temperatures around $10^6$ K. Another striking peculiarity of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ is their chemical composition, characterised by extreme overabundances of heavy metals. \cite{nas11} found \object{LS\,IV$-$14$\degr$116}\ to be enriched in strontium, yttrium, and zirconium, of the order of 10\,000 times the solar values. A very similar abundance pattern was found in \object{Feige\,46}\ by \cite{Latour2019b}. Whether or not this atmospheric enrichment in heavy metals extends to the envelope, where it could influence the driving of pulsations via the $\kappa$-mechanism, is not known. Other recently discovered heavy-metal subdwarfs include the lead-rich iHe-sdOBs [CW83]\,0825+15 \citep{jeff17}, EC\,22536-4304 \citep{jeff19}, PG\,1559+048, and FBS\,1749+373 \citep{Naslim2020}. This extreme enrichment compared to solar values is thought to be the result of strong atmospheric diffusion processes. While the population of known heavy-metal subdwarfs continues to grow, it remains too small to relate the observed differences in enrichment to specific ranges in their atmospheric parameters. In addition, theoretical diffusion calculations for iHe-sdOB stars are still lacking. In this investigation, we focus on the determination and comparison of the detailed abundance patterns of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}.
We recently obtained high-resolution spectra for \object{Feige\,46}\ at the ESO VLT, while archival spectra were retrieved for \object{LS\,IV$-$14$\degr$116}. A cursory inspection of the spectra showed that they were strikingly similar. The same metal lines are detected in both stars at very similar strengths, indicating that the abundances are similar as well. It is therefore tempting to study both stars jointly. Before addressing the main aim of the study, we start with a short account of the recent TESS light curve of \object{Feige\,46}\ in Sect.~\ref{sect:tess}. Photometric measurements, \textit{Gaia} astrometry, and the spectroscopic surface gravity and effective temperature are combined to derive the mass, radius, and luminosity of each star in Sect.~\ref{sect:sed}. In Sect.~\ref{sect:obs}, we give an overview of the available spectra. Our spectral analysis is described in Sect.~\ref{sect:analysis}. We summarise our results in Sect.~\ref{sect:conclusions}. \begin{table*} \caption{List of modes detected in the TESS time series of Feige\,46.\label{tab:Frequencies}} \centering{} \begin{tabular}{lllllr} \hline \hline \noalign{\vskip5bp} Frequency & Freq. change$^{a}$ & Period & Period change & Amplitude$^{b}$ & S/N\tabularnewline ($\mu$Hz) & ($\mu$Hz) & (s) & (s) & (\%) & \tabularnewline[5bp] \hline \noalign{\vskip5bp} $435.948\pm0.130^{c}$ & $+0.156\pm0.130$ & $2293.85\pm0.68^{c}$ & $-0.82\pm0.68$ & $0.133\pm0.014$ & $9.4$\tabularnewline $435.573\pm0.050^{c}$ & $+0.045\pm0.050$ & $2295.83\pm0.26^{c}$ & $-0.23\pm0.26$ & $0.344\pm0.014$ & $24.1$\tabularnewline $363.606\pm0.035$ & & $2750.23\pm0.26$ & & $0.097\pm0.014$ & $6.8$\tabularnewline $362.625\pm0.019$ & $+0.009\pm0.019$ & $2757.67\pm0.14$ & $-0.07\pm0.14$ & $0.182\pm0.014$ & $12.8$\tabularnewline $333.358\pm0.029$ & $-0.066\pm0.029$ & $2999.78\pm0.26$ & $+0.60\pm0.27$ & $0.117\pm0.014$ & $8.2$\tabularnewline $294.017\pm0.016$ & $-0.039\pm0.016$ & $3401.17\pm0.18$ & $+0.46\pm0.19$ & $0.217\pm0.014$ & $15.3$\tabularnewline \hline \end{tabular} \tablefoot{ $^{a}$Relative to the measurement in \cite{Latour2019a}. $^{b}$Given amplitude values are uncorrected for contamination (see text). $^{c}$Formal fitting errors were increased by a factor of 5 to loosely account for the poorly resolved peaks. } \end{table*} \section{The TESS light curve of Feige\,46}\label{sect:tess} \object{Feige\,46}\ was observed with TESS in Sector 22, from February 19 to March 17, 2020. The light curve covers a time baseline of $26.59$ days sampled nearly continuously every 120 seconds, except for a four-day interruption mid-run, which is typical of TESS data. It is therefore shorter than the light curve used in \cite{Latour2019a}, which was taken between February 26 and May 25, 2018 (an 87.76-day time baseline) with the Mont4K CCD camera at the 1.55m Kuiper telescope of Steward Observatory on Mt Bigelow, resulting in a lower frequency resolution of $0.44$\,$\mu$Hz compared to $0.13$\,$\mu$Hz. However, the duty cycle is vastly improved with TESS, and daily frequency aliases in Fourier transforms are no longer present. Due to the large pixels of TESS ($\sim$21 arcsec), the light curve is likely affected by a slightly fainter visual companion located 13.3 arcsec northwest of Feige\,46 ($G_{{\rm RP}}=13.95$ compared to $13.55$ for Feige\,46). According to the TESS contamination indicator (crowdsap), only $61.4\%$ of the collected light is attributed to Feige\,46.
Assuming the contaminating star is not variable, this blend only affects the measured amplitudes of the Feige\,46 brightness variations, which have to be scaled up by a factor of $1.63$, the inverse of the crowdsap value ($1/0.614 \approx 1.63$). The pulsation frequencies remain unaffected. Figure \ref{fig:TESS-F46-LSP} illustrates the TESS observations obtained for Feige\,46. A close-up view of the light curve (middle panel) suggests the presence of periodic light modulations, which become clearly apparent in the Lomb-Scargle periodogram (LSP; bottom panel) covering the entire frequency range (in log scale) accessible to these data (i.~e.~up to the Nyquist frequency limit corresponding to the 120\,s sampling). Significant peaks are found in the $250-500$ $\mu$Hz frequency range, where the pulsations were indeed expected, while nothing above a 4-$\sigma$ detection threshold emerges elsewhere in the spectrum. We extracted the periodic modulations in Feige\,46 by following an approach similar to that of \cite{Latour2019a}, using a standard Fourier analysis and pre-whitening techniques (see, e.~g.~\citealt{Billeres2000}). This was accomplished efficiently with our dedicated time-series analysis software, FELIX (\citealt{charpinet2010}; \citealt{Zong2016}). The entire pre-whitening procedure is illustrated in Fig. \ref{fig:TESS-F46-Prewhitening}, and the extracted mode parameters are listed in Table \ref{tab:Frequencies}. \begin{figure} \begin{centering} \includegraphics[width=1\columnwidth]{paper-lsp2a.pdf} \par\end{centering} \begin{centering} \includegraphics[width=1\columnwidth]{paper-lsp2b.pdf} \par\end{centering} \begin{centering} \includegraphics[width=1\columnwidth]{paper-lsp2d.pdf} \par\end{centering} \caption{Pre-whitening sequence of detected pulsation modes. \emph{\emph{Top panel:}} Lomb-Scargle Periodogram of the TESS time series in the relevant $270-420$ $\mu$Hz frequency range. The horizontal dotted line indicates four times the median noise level and corresponds to the chosen detection limit. \emph{\emph{Middle panel:}} residual periodogram after pre-whitening the four dominant peaks identified in the top panel. Two additional low-amplitude peaks are identified. \emph{\emph{Bottom panel:}} from top to bottom are the original, reconstructed (from the frequencies, amplitudes, and phases of the six fitted sine waves; plotted upside down), and residual periodograms after completing the pre-whitening process. \label{fig:TESS-F46-Prewhitening}} \end{figure} Most peaks, except the largest-amplitude one, are easily reproduced by fitting a pure sinusoidal component to the time series, leaving no residual behind in Fourier space. For the largest peak, around 436\,$\mu$Hz, however, a single-frequency fit leaves a significant residual, and we find its structure to be better reproduced when a blend of two close, poorly resolved frequency components is instead assumed. We favour this solution, considering that this peak was clearly resolved into two independent components, interpreted as a possible rotational multiplet, in \cite{Latour2019a}. Due to this blend, the frequencies of these two components are less accurately measured in the TESS data (see Table \ref{tab:Frequencies}). Overall, we detect five of the six periods found by \cite{Latour2019a}. Their peak with the lowest amplitude, at 2586\,s, is not visible in the TESS run. However, we find a new period at 2750\,s that is close to the already known period at 2758\,s.
These modes are separated by $\sim$1\,$\mu$Hz and might be part of a rotation multiplet, but better data are required to reliably detect and identify rotational splitting in this star. Since the TESS light curve and that of \cite{Latour2019a} were obtained, on average, 698 days apart, it is possible to constrain a potential period decay, as predicted by \cite{bat18}. These authors propose that pulsators like Feige\,46 are fast-evolving stars experiencing helium sub-flashes before reaching the extreme horizontal branch. The pulsations would be driven by the $\epsilon$-mechanism associated with these sub-flashes. \cite{bat18} predicted period changes in the $10^{-5}-10^{-7}$ s/s range for late hot flasher models, with the fastest rates corresponding to pulsations driven by the first He-flash and the slowest rates corresponding to subsequent flashes (see their Table 3). The longest periods ($\sim$2000\,s) are only excited during the first one or two He-flashes, and would therefore be associated with the fastest period changes. In this context, the periods observed in Feige\,46, which are all longer than 2000\,s, would be expected to change rapidly, at a rate close to $\sim$10$^{-5}$\,s/s. We do find period differences between the two observing runs (see Table \ref{tab:Frequencies}), but these are difficult to attribute with certainty to secular variations because of frequency resolution limitations and the likely presence of poorly resolved rotational splittings. The important finding, however, is that these period variations are, at most, of the order of one second (to be conservative), which would correspond to a rate of period change of $\sim$10$^{-8}$\,s/s or less: a one-second change over the 698-day baseline corresponds to $1\,\mathrm{s}/(698\times86\,400\,\mathrm{s})\approx1.7\times10^{-8}$\,s/s. This is orders of magnitude slower than the rates predicted by \cite{bat18}. In other words, if the periods were to change at $10^{-5}$\,s/s, this effect would by far dominate the period differences between the two epochs of observation, but this is clearly not observed in Feige\,46. \\ It would be interesting to check for period decay in \object{LS\,IV$-$14$\degr$116}\ as well. We are aware of three photometric observation runs, performed in 2004 \citep{ahm05}, 2005 \citep{jeff11}, and 2010 \citep{green11}. Comparing the periods found by \cite{ahm05} and \cite{jeff11} with those stated in \cite{green11} suggests an upper limit on the rate of period change of $\sim$10$^{-6}$\,s/s, which is again less than predicted by \cite{bat18}. A future consistent analysis of all data sets might improve on this upper limit, especially since \object{LS\,IV$-$14$\degr$116}\ is scheduled to be observed with the CHEOPS satellite. \section{Parallax, spectral energy distribution, and stellar parameters}\label{sect:sed} The \textit{Gaia} mission recently provided parallaxes for a large number of hot subdwarf stars. This allows atmospheric parameters to be converted into the fundamental stellar parameters mass, radius, and luminosity, without relying on predictions from evolutionary models. The parallax measurements for \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ are of excellent quality, with uncertainties of less than 5\%. In addition, photometry is required to derive the angular diameters ($\theta$) of the stars. \begin{figure*} \sidecaption \includegraphics[width=11.7cm]{LSIV-14116_photometry_SED.pdf} \caption{Comparison of the smoothed final synthetic spectrum of \object{LS\,IV$-$14$\degr$116}\ (grey line) with photometric data.
The two black data points labelled `box' are binned fluxes from an IUE spectrum \citep[LWP10814LL, magenta line,][]{Wamsteker2000_INES}. Filter-averaged fluxes are shown as coloured data points that were converted from observed magnitudes (the dashed horizontal lines indicate filter widths). The residual panels at the bottom and on the right show, respectively, the differences between synthetic and observed magnitudes and colours. The following colour codes are used to identify the photometric systems: SDSS \cite[yellow,][]{Henden_2015,SDSS_2015}, SkyMapper \cite[dark yellow,][]{Wolf_2018}, Pan-STARRS1 \cite[red,][]{2016arXiv161205560C}, Johnson-Cousins \cite[blue,][]{Henden_2015,O'Donoghue_2013}, Str\"omgren \cite[green,][]{Hauck_1998}, \textit{Gaia} \cite[cyan,][]{Gaia2018_VizieR}, VISTA \cite[dark red,][]{McMahon_2013}, DENIS \cite[orange,][]{vizier:B/denis}, 2MASS \cite[bright red,][]{Cutri2003_2MASS}, and WISE \cite[magenta,][]{Schlafly_2019}.} \label{fig:sed_fit} \end{figure*} We combined apparent magnitudes from the ultraviolet to the infrared to construct the observed spectral energy distribution (SED) of \object{LS\,IV$-$14$\degr$116}\ (see Fig.~\ref{fig:sed_fit}). Our final synthetic spectrum of \object{LS\,IV$-$14$\degr$116}\ was then scaled to fit this SED using $\chi^2$ minimisation, based on the method described by \cite{2018OAst...27...35H}. Interstellar reddening is accounted for following \cite{Fitzpatrick2019}, assuming an extinction parameter $R\,(55)=3.02$. Fit parameters are the angular diameter $\theta$ and $E$(44$-$55), the monochromatic analogue of the colour excess $E$($B$--$V$). To derive the stellar radius $R$, the \textit{Gaia} parallax $\varpi$ is combined with the resulting angular diameter via $\theta = 2R\varpi$. The stellar mass $M$ is then derived using the spectroscopic surface gravity $g = GM/R^2$, where $G$ is the gravitational constant ($\log g = 5.85$ for \object{LS\,IV$-$14$\degr$116}). The stellar luminosity $L$ follows from the radius and the spectroscopic effective temperature ($T_\mathrm{eff} = 35500$\,K for \object{LS\,IV$-$14$\degr$116}) via the Stefan--Boltzmann law, $L = 4\pi R^2 \sigma_\mathrm{SB} T_\mathrm{eff}^4$. We repeated the SED fit for \object{Feige\,46} using $\log g = 5.93$ and $T_\mathrm{eff} = 36100$\,K (see Fig.~\ref{fig:sed_feige}). The atmospheric parameters used are the same as those used for the spectroscopic analysis and are described in Sect.~\ref{sect:methods}. For both stars, we assume systematic errors of 0.1\,dex in $\log g$ and 1000\,K in $T_{\rm eff}$. The results of this analysis are listed in Table \ref{tab:SED}. The derived stellar mass for \object{LS\,IV$-$14$\degr$116}\ ($0.38\pm 0.10$\,$M_\odot$) is somewhat lower than the value obtained for \object{Feige\,46}\ ($0.53\pm 0.14$\,$M_\odot$). Given the uncertainties, both masses are consistent with the canonical mass suggested by evolution models \citep[$\sim$0.46\,$M_\odot$,][]{dor93,2003MNRAS.341..669H}. \begin{table} \setstretch{1.2} \caption{Parallax and parameters derived from the SED fitting.
The atmospheric parameters $T_{\rm eff}$\ and $\log g$ are derived from spectroscopy and discussed in Sect.~\ref{sect:methods}.} \label{tab:SED} \vspace{-8pt} \begin{center} \begin{tabular}{lcc} \toprule \toprule & \object{LS\,IV$-$14$\degr$116} & \object{Feige\,46}\\ \midrule $\varpi$\,(mas) & $2.38 \pm 0.09$ & $1.86 \pm 0.07$ \\ $d$\,(pc) & $420 \pm 15$ & $538 \pm 19$ \\ $\theta\,(10^{-11}\,\mathrm{rad})$ & $1.310 \pm 0.007$ & $1.093 \pm 0.008$ \\ $E$(44$-$55) & $0.033 \pm 0.005$ & $0.011 \pm 0.005$ \\ $T_{\rm eff}$\,(K) & $35500 \pm 1000$ & $36100 \pm 1000$ \\ $\log g$ & $5.85 \pm 0.10$ & $5.93 \pm 0.10$ \\ $R/R_\odot$ & $0.122 \pm 0.005$ & $0.130\pm 0.006$ \\ $M/M_\odot$ & $0.38\pm 0.10$ & $0.53 \pm 0.14$\\ $L/L_\odot$ & $21\pm 3$ & $26\pm4$ \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Spectroscopic observations}\label{sect:obs} \begin{table} \setstretch{1.2} \captionof{table}{UVES spectra used in the present analysis. For \object{LS\,IV$-$14$\degr$116}, only spectra with sufficient S/N for cross-correlation were used. Total exposure times are given per wavelength range and resolution.} \label{tab:obs} \vspace{-11pt} \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{l c c c c c} \toprule \toprule Star & Range / \AA & R & $n_\mathrm{exp}$ & $\sum t_\mathrm{exp}$ / s & Run ID \\ \midrule \object{Feige\,46}\ & $3305-4525$ & 40970 & \phantom{0}4 & \phantom{0}5920 & 0104.D-0206(A) \\ & $4620-6645$ & 42310 & \phantom{0}4 & \phantom{0}5920 & \\ \object{LS\,IV$-$14$\degr$116}\ & $3290-4525$ & 40970 & 12 & \phantom{0}3600 & 087.D-0950(A) \\ & $4788-6835$ & 42310 & 15 & \phantom{0}4500 & \\ & $3290-4525$ & 49620 & 18 & \phantom{0}3600 & 095.D-0733(A) \\ & $4788-6835$ & 51690 & 18 & \phantom{0}3600 & \\ & $3290-4525$ & 58640 & 64 & 12800 & \\ & $4788-6835$ & 66320 & 71 & 14200 & \\ \bottomrule \end{tabular} } \end{center} \end{table} We obtained four VLT/UVES spectra of \object{Feige\,46}\ in February 2020 with a total exposure time of 5920\,s (ID 0104.D-0206(A)). These spectra have a resolution of $R$ $\approx$ 41000 and cover the spectral range from 3305\,\AA\ to 6645\,\AA,\ with gaps at 4525 -- 4620 \AA\ and 5599 -- 5678 \AA. The individual spectra were stacked after cross-correlation to obtain a single spectrum with an increased signal-to-noise ratio (S/N) of about 80. The radial velocity obtained, $v_\mathrm{rad}=89$ km\,s$^{-1}$, is fully consistent with the value found by \cite{dri87}: $90\pm 4$ km\,s$^{-1}$. For the spectral analysis, the observed spectrum was shifted to the stellar rest frame. We refer the reader to \cite{Latour2019b} for the description and analysis of older spectra of \object{Feige\,46}, including ultraviolet (UV) observations. \object{LS\,IV$-$14$\degr$116}\ has been observed extensively with the UVES spectrograph. A total of 788 spectra are available in the ESO archive (corresponding to 394 exposures). Spectra were taken as part of two programmes: on 7 September, 2011 (ID 087.D-0950(A)) and between 23 and 27 August, 2015 (ID 095.D-0733(A)). These programs used time-resolved spectroscopy in order to relate the observed photometric variability to radial velocity variations \citep{jeff15,Martin2017}. We combined spectra from both runs to create a high-S/N spectrum that is suitable for a detailed abundance analysis. For each resolution, spectra with the highest S/N (typically 16 to 25) were cross-correlated and stacked. 
These stacked spectra were convolved to the lowest common resolutions ($R=40970$ for the blue range, and $R=42310$ for the red range) and were then co-added. We then shifted the spectrum to the stellar rest frame, correcting for the high radial velocity of about $v_\mathrm{rad}=-154$ km\,s$^{-1}$. The final spectrum has a mean effective S/N of about 200, which is limited by small-scale artefacts. Details of the UVES spectra used in the present analysis are given in Table \ref{tab:obs}. \cite{ran15} carried out spectropolarimetry of \object{LS\,IV$-$14$\degr$116}\ with VLT/FORS2 to search for a magnetic field. While no polarisation could be detected, their observations produced a flux spectrum of excellent quality (spectral resolution $\Delta \lambda \approx 1.8$\,\AA, S/N $\approx$ 700). In contrast to the UVES spectra, this long-slit spectrum is not affected by the normalisation issues that frequently occur in the reduction procedure of Echelle spectra. The FORS2 spectrum is therefore useful for determining atmospheric parameters based on broad hydrogen and helium lines. \begin{table} \setstretch{1.2} \captionof{table}{Sources of oscillator strengths for detected lines of heavy metals in \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. } \label{tab:fsource} \vspace{-11pt} \begin{center} \begin{tabular}{lrr} \toprule \toprule Ion & $N_\mathrm{ident}$ & Reference \\ \midrule Ga\,\textsc{iii} & 9 & \cite{oreilly98} \\ Ge\,\textsc{iii} & 3 & \cite{nas11} \\ Ge\,\textsc{iv} & 6 & \cite{oreilly98} \\ Kr\,\textsc{iii} & 17 & \cite{Raineri1998} \\ Sr\,\textsc{ii} & 2 & \cite{Fernandez2020} \\ & 3 & Kurucz/Linelists \\ Sr\,\textsc{iii} & 35 & Kurucz/Atoms\\ Y\,\textsc{iii} & 2 & \cite{nas11} \\ & 3 & \cite{Fernandez2020} \\ Zr\,\textsc{iii} & 2 & Kurucz/Linelists \\ Zr\,\textsc{iv} & 16 &\cite{Rauch2017} \\ Sn\,\textsc{iv} & 2 & \cite{Kaur2020} \\ Pb\,\textsc{iv} & 1 & \cite{safronova04} \\ \bottomrule \end{tabular} \end{center} \end{table} \section{Spectroscopic analysis} \label{sect:analysis} The excellent UVES spectra enable a detailed abundance analysis, as well as a consistent comparison of abundances between \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}, which is described in the following section. \subsection{Methods}\label{sect:methods} To minimise systematic errors, we analysed the spectra of both \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ using the same fitting method and the same type of model atmospheres, following the procedure described in \cite{Latour2019b} and \cite{dorsch2019}. This analysis is based on model atmospheres and synthetic spectra computed using the hydrostatic, homogeneous, plane-parallel, non-local thermodynamic equilibrium (NLTE) codes \textsc{Tlusty} and \textsc{Synspec} \citep{hubeny88, Lanz2003, hub11}. We used the most recent public versions as described in \cite{hubeny17a,hubeny17b,hubeny17c}. Our line list is based on atomic data provided by R.~Kurucz.\footnote{\url{http://kurucz.harvard.edu/linelists/gfnew/gfall08oct17.dat}; see also \cite{Kurucz2018}.\label{note1}} We extended this line list to include lines from additional heavy ions. The atomic data previously collected are described in \cite{dorsch2019} and \cite{Latour2019b}. This list was further extended to model the rich spectrum of \object{Feige\,46}. The main sources for detected lines of heavy ions are listed in Table \ref{tab:fsource}. 
Heavy elements (here $Z>30$) in ionisation stages \textsc{i-iii} are included in LTE using the treatment of \cite{Proffitt2001}, who added ionisation energies and partition functions from R.~Kurucz's ATLAS9 code \citep{Kurucz1993} to \textsc{Synspec}. Partition functions for higher ionisation stages are calculated as described in \cite{Latour2019b}. As in our previous analysis of \object{Feige\,46}, all model atmospheres were calculated using the atmospheric parameters derived by \cite{Latour2019a} ($T_{\rm eff}$\ = 36\,100 K, $\log g$ = 5.93, and a helium abundance of $\log \epsilon_\mathrm{He} / \epsilon_\mathrm{H} = -0.32$). Atmospheric parameters for \object{LS\,IV$-$14$\degr$116}\ were derived by \cite{ran15} based on a high S/N FORS2 spectrum ($T_{\rm eff}$\ = 35\,150 K, $\log g$ = 5.88, $\log \epsilon_\mathrm{He} / \epsilon_\mathrm{H} = -0.62$). We used a grid of line-blanketed NLTE models to re-fit the same FORS2 spectrum, and we obtained $T_{\rm eff}$\ = 35\,500 K, $\log g$ = 5.85, $\log \epsilon_\mathrm{He} / \epsilon_\mathrm{H} = -0.60$, which is fully compatible with the results of \cite{ran15}. The model grid used for this fit includes H, He, C, N, O, Ne, Mg, Al, Si, and Fe in NLTE with abundances appropriate for \object{LS\,IV$-$14$\degr$116}. Using the atmospheric parameters reported above for each star, we then constructed a series of models, varying the abundance of one element at a time. These models also include nickel in NLTE. Based on these grids, we determined metal abundances using the $\chi^2$-fitting program SPAS developed by \cite{hirsch09}. Both \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ show slightly broadened lines that are best reproduced at a projected rotational velocity of $v_\mathrm{rot} \sin i = 9$\,km\,s$^{-1}$. This broadening might not be caused solely by rotation, but instead likely results from unresolved (high-order) pulsations. Indeed, \cite{jeff15} found that the principal pulsation mode in \object{LS\,IV$-$14$\degr$116}\ (1950\,s) leads to radial velocity variations with a semi-amplitude of about 5.5\,km\,s$^{-1}$. They also came to the conclusion that other pulsation periods lead to additional unresolved motion. Similar variability could be present in \object{Feige\,46}, which would explain the observed broadening given that the UVES exposure times (1480\,s) cover a significant fraction of the shortest period observed in \object{Feige\,46}\ (2295\,s). However, the exposure times of the UVES spectra of \object{LS\,IV$-$14$\degr$116}\ were much shorter (200\,s or 300\,s). The remaining broadening (despite cross-correlating individual exposures before co-adding) may be explained by a combination of uncertainties in the cross-correlation, high-order pulsations, unresolved motion due to multiple periods, and actual rotation. \cite{jeff15} also found evidence for differential pulsation: line strength and pulsation amplitude might be correlated. Therefore, cross-correlating single spectra using specific strong lines would not perfectly mitigate the broadening in the stacked spectrum for weak lines. However, differential pulsation was not confirmed in the radial velocity study of \cite{Martin2017}. Additional broadening may be caused by microturbulence ($v_\mathrm{tb}$). However, as shown by \cite{Latour2019b}, a microturbulence of 5\,km\,s$^{-1}$ is too high to simultaneously reproduce UV and optical lines in \object{Feige\,46}. We therefore adopted $v_\mathrm{tb}=2$\,km\,s$^{-1}$ for both stars, which leads to negligible broadening.
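For readers who wish to experiment with this kind of grid-based abundance determination, the following minimal Python sketch illustrates the principle behind a SPAS-like $\chi^2$ fit of a single line. It is an illustration only, not the SPAS code: the toy Gaussian line profile, the abundance-to-depth scaling, and all numerical values are hypothetical placeholders standing in for real \textsc{Tlusty}/\textsc{Synspec} model spectra.
\begin{verbatim}
import numpy as np

# Hypothetical set-up: an observed, normalised line profile and a small
# grid of synthetic spectra computed for different abundances of one
# element (placeholders for real model spectra).
wave = np.linspace(4379.0, 4384.0, 200)        # wavelength in Angstrom

def toy_model(abund):
    # Toy Gaussian line whose depth grows with abundance.
    depth = 0.05 * 10.0 ** (0.4 * (abund + 6.0))
    return 1.0 - depth * np.exp(-0.5 * ((wave - 4381.2) / 0.15) ** 2)

abund_grid = np.linspace(-6.5, -5.0, 7)        # grid in log(n_X/n_H)
grid = [toy_model(a) for a in abund_grid]

rng = np.random.default_rng(1)
observed = toy_model(-5.7) + rng.normal(0.0, 0.01, wave.size)
sigma = 0.01                                   # assumed flux uncertainty

# Chi-square at each grid point, then parabolic refinement around the
# minimum, mimicking the interpolation step of a grid-based fitter.
chi2 = np.array([np.sum(((observed - m) / sigma) ** 2) for m in grid])
i = int(np.clip(np.argmin(chi2), 1, len(abund_grid) - 2))
a, b, c = np.polyfit(abund_grid[i - 1:i + 2], chi2[i - 1:i + 2], 2)
best = -b / (2.0 * a)                          # vertex of the parabola
print(f"best-fit abundance: {best:.2f} (input was -5.70)")
\end{verbatim}
In a real application, the toy model would be replaced by interpolation within a grid of synthetic spectra, and many lines (or whole spectral ranges) would be fitted simultaneously.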
\begin{figure*} \includegraphics[width=1\textwidth]{f46_light_5ranges_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_light_6ranges_v2_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_zniii_8ranges_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_zniii_8ranges_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_gaiii_yiii_sniv_7lines_broad_ln_v2.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_gaiii_yiii_sniv_pbiv_7lines_broad_ln_v2.pdf}\vspace{-6pt} \caption{Representative regions in the UVES spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. The best fit models are shown in red.} \label{lines_1} \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{f46_seiii_unid_7lines_oslsiv_broad.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_unid_geiii_asiii_unid_oslsiv_broad.pdf}\vspace{-6pt} \caption{Additional regions in the UVES spectrum of \object{Feige\,46}\ showing newly identified lines that are lacking oscillator strengths and the strongest unidentified line observed. The UVES spectrum of \object{LS\,IV$-$14$\degr$116}\ is shown for comparison, offset by $-0.1$.} \label{lines_unid} \end{figure*} \begin{table} \centering \caption{Updated line positions. Observed positions are accurate to about 0.02\,\AA\ depending on the specific line strengths. } \label{table_lshift} \setstretch{1.1} \begin{tabular}{l c c c} \toprule \toprule Ion & $\lambda_\mathrm{lit}$ / \AA & $\lambda_\mathrm{obs}$ / \AA & $\Delta \lambda$ / \AA \\ \midrule \ion{Zn}{iii} & 5075.243 & 5075.330 & $+0.087$ \\ \ion{Zn}{iii} & 5157.431 & 5157.580 & $+0.149$ \\[2pt] \ion{Ge}{iii} & 4178.960 & 4179.078 & $+0.118$ \\[2pt] \ion{Ge}{iv} & 3320.410 & 3320.530 & $+0.120$ \\ \ion{Ge}{iv} & 3333.640 & 3333.785 & $+0.145$ \\ \ion{Ge}{iv} & 3554.190 & 3554.257 & $+0.067$ \\ \ion{Ge}{iv} & 3676.650 & 3676.735 & $+0.085$ \\ \ion{Ge}{iv} & 4979.190 & 4979.987 & $+0.797$ \\ \ion{Ge}{iv} & 5072.900 & 5073.330 & $+0.430$ \\[2pt] \ion{Kr}{iii} & 3311.540 & 3311.490 & $-0.050$ \\ \ion{Kr}{iii} & 3474.750 & 3474.650 & $-0.100$ \\[2pt] \ion{Sr}{iii} & 3976.706 & 3976.033 & $-0.673$ \\ \ion{Sr}{iii} & 3991.587 & 3992.272 & $+0.685$ \\[2pt] \ion{Y}{iii} & 4039.602 & 4039.576 & $-0.026$ \\[2pt] \ion{Zr}{iv} & 5462.333 & 5462.380 & $+0.047$ \\ \ion{Zr}{iv} & 5779.843 & 5779.880 & $+0.037$ \\[2pt] \ion{Sn}{iv} & 3862.051 & 3861.207 & $-0.844$ \\ \ion{Sn}{iv} & 4217.184 & 4216.192 & $-0.992$ \\ \bottomrule \end{tabular} \end{table} \begin{figure*} \includegraphics[width=1\textwidth]{f46_geiii_geiv_7lines_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_geiii_geiv_7lines_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_kriii_7lines_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_kriii_7lines_broad_ln.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_srii_sriii_7lines_broad_ln_v2.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_srii_sriii_7lines_broad_ln_v2.pdf}\vspace{-6pt} \caption{Strongest lines identified in the UVES spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ for elements Ge, Kr, and Sr.} \label{lines_2} \end{figure*} \begin{figure*} \includegraphics[width=1\textwidth]{f46_zriv_blue_broad_ln_v2.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{lsiv_zriv_blue_broad_ln_v2.pdf}\vspace{-15pt} \includegraphics[width=1\textwidth]{f46_zriv_red_broad_ln_v2.pdf}\vspace{-15pt} 
\includegraphics[width=1\textwidth]{lsiv_zriv_red_broad_ln_v2.pdf}\vspace{-6pt} \caption{\ion{Zr}{iv} lines and one \ion{Zr}{iii} line identified in UVES spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ at the best-fit abundances.} \label{lines_zr} \end{figure*} \subsection{Individual abundances} In the following section, we present in detail the results of our abundance analysis for each element. A summary of the abundances derived for \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ is given in Tables \ref{table_abund_feige} and \ref{table_abund_lsiv}, respectively. Abundances stated in the text are always relative to solar values. The full UVES spectra along with the final models for both stars are shown in Appendix \ref{sect:fullspec}. \subsubsection{Light elements and the iron group: Carbon to zinc} Examples of the strongest lines from light elements for both stars along with the final models are shown in the top panels of Fig.~\ref{lines_1}. The following paragraphs summarise the derivation of abundances for light metals and the iron group. \paragraph{Carbon, nitrogen, and oxygen:} plenty of carbon, nitrogen, and oxygen lines are available to determine abundances, including the lines shown in Fig.~\ref{lines_1}. Both stars have a carbon abundance close to the solar number fraction: slightly enhanced for \object{Feige\,46}\ ($+0.25$ dex) and somewhat depleted for \object{LS\,IV$-$14$\degr$116}\ ($-0.19$ dex). Nitrogen is overabundant in both stars by 0.46 dex and 0.28 dex, respectively, while oxygen is significantly underabundant, at $-1.03$ dex and $-1.23$ dex, respectively. On average, the CNO content of \object{LS\,IV$-$14$\degr$116}\ is lower than that of \object{Feige\,46}\ by about 0.2\,dex. Although the general fit for carbon lines is good, there is some discrepancy between the strongest \ion{C}{ii} and \ion{C}{iii} lines. We attribute this mostly to NLTE effects that are not perfectly modelled. For instance, the \ion{C}{ii} 4267.3\,\AA\ doublet is too strong in our synthetic spectra, while the C\,\textsc{iii} triplet 4152.5, 4156.5, 4162.9\,\AA\ is slightly too weak (see Figs.~\ref{fig:fspec_feige} and \ref{fig:fspec_lsiv}). \ion{C}{ii}\,5661.9\,\AA\ is predicted to be in emission although no line is observed at this position in the UVES spectrum of \object{LS\,IV$-$14$\degr$116}. Some nitrogen lines display similar behaviour: \ion{N}{ii} 4630.5, 4643.1, 4803.3, 5005.2, 5179.5, and 5710.8\,\AA\ are too weak in our models and were not considered for determining the nitrogen abundance. These lines also appeared in emission in the synthetic spectra of the iHe-sdO HD\,127493 analysed by \cite{dorsch2019}, who used the same model atoms. Resolving these issues is a complex task because almost all optical lines of \ion{C}{ii-iii} and \ion{N}{ii} originate from high-lying levels. The population of these levels is very sensitive to the photo-ionisation (radiative bound-free) cross-sections used. The development of new \textsc{Tlusty} model atoms would be required for at least C\,\textsc{ii-iii} and N\,\textsc{ii}, which is an elaborate process and beyond the scope of the present investigation. For the time being, the best fit to lines of C, N, and O can be considered satisfactory. The derived abundances of C, N, and O for \object{Feige\,46}\ do not differ significantly from the values given by \cite{Latour2019b} (see Fig.~\ref{comp:latour2019}).
\vspace{-12pt} \paragraph{Neon:} the slightly sub-solar neon abundance for both stars is based on several Ne\,\textsc{ii} lines in the blue range, for example \ion{Ne}{ii} 3334.8, 3664.1, and 3694.2\,\AA. \vspace{-12pt} \paragraph{Magnesium:} the Mg\,\textsc{ii} 4481\,\AA\ doublet is observed in both stars and best reproduced at abundances of $-0.79$ dex for \object{Feige\,46},\ and $-1.07$ dex for \object{LS\,IV$-$14$\degr$116}. \vspace{-12pt} \paragraph{Aluminium:} the strongest predicted aluminium lines, \ion{Al}{iii} 4479.9, 4512.6, and 5696.6\,\AA, are not detected in \object{Feige\,46}\ or \object{LS\,IV$-$14$\degr$116}. The upper limit derived from these lines is slightly sub-solar. \vspace{-12pt} \paragraph{Silicon:} sub-solar silicon abundances are based mainly on the \ion{Si}{iv} 4088.9, 4116.1\,\AA\ doublet. \vspace{-12pt} \paragraph{Phosphorus:} the only phosphorus line observed in \object{Feige\,46}, \ion{P}{iii} 4222.2\,\AA, is very weak but present in \object{LS\,IV$-$14$\degr$116}\ as well. The derived abundance based on this line is solar for \object{Feige\,46}\ and slightly sub-solar for \object{LS\,IV$-$14$\degr$116}. \vspace{-12pt} \paragraph{Sulphur:} no sulphur lines are detected in either star. The upper limit derived for \object{Feige\,46}\ is consistent with the value found by \cite{Latour2019b} from the UV spectrum. \vspace{-12pt} \paragraph{Argon:} the argon abundance for \object{Feige\,46}\ is based on the weak \ion{Ar}{iii} 3311.6 and 3503.6\,\AA\ lines. The same lines could not be used for \object{LS\,IV$-$14$\degr$116}, where the Ar abundance (about solar) is instead based on \ion{Ar}{iii} 3336.1 and 3511.2\,\AA. Significant uncertainty ($\sim$0.2\,dex) is introduced by the continuum placement since all Ar lines are very weak. \vspace{-12pt} \paragraph{Calcium:} the upper limits derived for calcium are based on the non-detection of the \ion{Ca}{ii} 3933.7\,\AA\ resonance line, which is well separated from interstellar lines in both stars. These upper limits indicate severe underabundances (by about 0.7\,dex) for both stars, which is consistent with the non-detection of the \ion{Ca}{iii} 4233.7 and 4240.7\,\AA\ lines that are usually observed in He-poor sdOB stars. \vspace{-12pt} \paragraph{Titanium:} weak titanium lines are observed in both stars. We used \ion{Ti}{iii} 3354.7, 4215.5, 4285.6\,\AA\ and \ion{Ti}{iv} 3541.4, 3576.5, 4971.2, and 5398.9\,\AA\ to derive super-solar abundances. \vspace{-12pt} \paragraph{Chromium, manganese, iron, and cobalt:} no lines from the iron-peak elements chromium, manganese, iron, and cobalt are observed in UVES spectra of either star. For completeness, we list the abundances derived from UV lines for \object{Feige\,46}\ by \cite{Latour2019b} in Table \ref{table_abund_feige}. The absence of high-resolution UV spectra of \object{LS\,IV$-$14$\degr$116}\ means that no information on the abundance of these elements can be obtained for that star, except for iron. The iron upper limit for \object{LS\,IV$-$14$\degr$116}\ (0.35 times solar) is based on the non-detection of \ion{Fe}{iii} 5243.3 and 5891.9\,\AA, which are too strong in the final model. \ion{Fe}{iii} 4137.8 and 4164.7\,\AA\ are well reproduced at this abundance. \vspace{-12pt} \paragraph{Nickel:} several weak nickel lines (\ion{Ni}{iii}) could be used to derive abundances for both stars, for example \ion{Ni}{iii} 5332.2, 5436.9, 5481.3 and 5482.3\,\AA. 
The Ni abundance derived from the optical lines for \object{Feige\,46}\ is the same as that obtained from the UV lines: overabundant by about 1 dex with respect to solar. \vspace{-12pt} \paragraph{Zinc:} the zinc abundances for \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ (about 300 times solar) are based on 13 and 16 strong lines, respectively (e.~g.~\ion{Zn}{iii} 3683.4, 4818.9, 4970.8, 5075.2, 5249.7, and 5563.7\,\AA; see Fig.~\ref{lines_1}). \subsubsection{Heavy metals} \begin{table} \centering \caption{Abundance results for \object{Feige\,46}\ by number relative to hydrogen ($\log \epsilon$/$\epsilon_{\mathrm{H}}$), by number fraction ($\log \epsilon$), and number fraction relative to solar ($\log \epsilon$/$\epsilon_{\odot}$). The number of resolved lines used per ionisation stage is given in the last column. }\label{table_abund_feige} \vspace{-10pt} \setstretch{1.1} \begin{center} \resizebox{\columnwidth}{!}{ \begin{tabular}{l@{\hspace{2pt}}rrrr} \toprule \toprule Element & \multicolumn{1}{c}{$\log \epsilon/\epsilon_{\mathrm{H}}$} & \multicolumn{1}{c}{$\log \epsilon$} & \multicolumn{1}{c}{$\log \epsilon/\epsilon_{\odot}$} & \multicolumn{1}{c}{$N_\mathrm{lines}$}\\ \midrule H & $0.00\pm0.00$ & $-0.17\pm0.02$ & $-0.13\pm0.02$ \\ He & $-0.32\pm0.05$ & $-0.49\pm0.03$ & $0.62\pm0.04$ \\ \ion{C}{ii-iv} & $-3.19\pm0.13$ & $-3.36\pm0.13$ & $0.25\pm0.14$ & 6/16/1\\ \ion{N}{ii-iii} & $-3.57\pm0.08$ & $-3.74\pm0.08$ & $0.46\pm0.10$ & 23/14\\ \ion{O}{ii-iii} & $-4.21\pm0.10$ & $-4.38\pm0.10$ & $-1.03\pm0.11$ & 12/1 \\ \ion{Ne}{ii} & $-4.31\pm0.07$ & $-4.48\pm0.07$ & $-0.38\pm0.12$ & 18 \\ \ion{Mg}{ii} & $-5.05\pm0.02$ & $-5.22\pm0.02$ & $-0.79\pm0.04$ & 1\\ \ion{Al}{iii} & <$-6.16^{+0.40}_{}$ & <$-6.33^{+0.40}_{}$ & <$-0.74^{+0.40}_{}$ & \\ \ion{Si}{iii-iv} & $-5.51\pm0.03$ & $-5.68\pm0.03$ & $-1.15\pm0.04$ & 1/3 \\ \ion{P}{iii} & $-6.44\pm0.05$ & $-6.61\pm0.05$ & $0.02\pm0.06$ & 1\\ S & <$-5.60^{+0.30}_{}$ & <$-5.77^{+0.30}_{}$ & <$-0.85^{+0.30}_{}$ \\ \ion{Ar}{iii} & $-5.75\pm0.14$ & $-5.92\pm0.14$ & $-0.28\pm0.20$ & 3 \\ Ca & <$-6.15^{+0.40}_{}$ & <$-6.32^{+0.40}_{}$ & <$-0.62^{+0.40}_{}$ \\ \ion{Ti}{iii-iv} & $-5.51\pm0.12$ & $-5.68\pm0.12$ & $1.41\pm0.13$ & 3/2\\ $^\ast$Cr & $-5.68\pm0.17$ & $-5.85\pm0.17$ & $0.55\pm0.18$ \\ $^\ast$Mn & <$-5.69^{+0.40}_{}$ & <$-5.86^{+0.40}_{}$ & <$0.75^{+0.40}_{}$ \\ $^\ast$Fe & $-4.64\pm0.14$ & $-4.81\pm0.14$ & $-0.27\pm0.15$ \\ $^\ast$Co & $-5.85\pm0.21$ & $-6.02\pm0.21$ & $1.03\pm0.23$ \\ \ion{Ni}{iii} & $-4.53\pm0.19$ & $-4.70\pm0.19$ & $1.12\pm0.19$ & 8 \\ \ion{Zn}{iii} & $-4.79\pm0.12$ & $-4.96\pm0.12$ & $2.51\pm0.13$ & 13 \\ \ion{Ga}{iii} & $-5.48\pm0.12$ & $-5.66\pm0.12$ & $3.34\pm0.15$ & 10 \\ \ion{Ge}{iii-iv} & $-4.89\pm0.15$ & $-5.06\pm0.15$ & $3.33\pm0.19$ & 3/3 \\ \ion{Kr}{iii} & $-4.90\pm0.07$ & $-5.07\pm0.07$ & $3.72\pm0.10$ & 11 \\ \ion{Sr}{ii-iii} & $-4.51\pm0.09$ & $-4.68\pm0.09$ & $4.49\pm0.12$ & 3/19 \\ \ion{Y}{iii} & $-5.23\pm0.02$ & $-5.40\pm0.02$ & $4.43\pm0.05$ & 2 \\ \ion{Zr}{iii-iv} & $-5.00\pm0.08$ & $-5.17\pm0.08$ & $4.29\pm0.09$ & 1/12 \\ \ion{Sn}{iv} & $-6.26\pm0.06$ & $-6.43\pm0.06$ & $3.57\pm0.12$ & 2 \\ $^\ast$\ion{Pb}{iv} & <$-7.29^{+0.60}_{}$ & <$-7.46^{+0.60}_{}$ & <$2.83^{+0.60}_{}$ \\ \bottomrule \end{tabular} } \end{center} \vspace{-10pt} \tablefoot{$^\ast$ results for Cr, Mn, Fe, Co, and Pb are from \cite{Latour2019b} and based on UV data.} \end{table} \begin{table} \centering \caption{Same as Table \ref{table_abund_feige}, but for \object{LS\,IV$-$14$\degr$116}.} \label{table_abund_lsiv} \setstretch{1.1} 
\resizebox{\columnwidth}{!}{
\begin{tabular}{l@{\hspace{2pt}}rrrr}
\toprule \toprule
Element & \multicolumn{1}{c}{$\log \epsilon/\epsilon_{\mathrm{H}}$} & \multicolumn{1}{c}{$\log \epsilon$} & \multicolumn{1}{c}{$\log \epsilon/\epsilon_{\odot}$} & \multicolumn{1}{c}{$N_\mathrm{lines}$}\\
\midrule
H & $0.00\pm0.00$ & $-0.10\pm0.02$ & $-0.06\pm0.02$ \\
He & $-0.60\pm0.10$ & $-0.70\pm0.08$ & $0.41\pm0.08$ \\
\ion{C}{ii-iii} & $-3.70\pm0.12$ & $-3.80\pm0.12$ & $-0.19\pm0.13$ & 6/9 \\
\ion{N}{ii-iii} & $-3.82\pm0.06$ & $-3.92\pm0.06$ & $0.28\pm0.08$ & 20/3 \\
\ion{O}{ii-iii} & $-4.48\pm0.10$ & $-4.57\pm0.10$ & $-1.23\pm0.11$ & 11/1 \\
\ion{Ne}{ii} & $-4.50\pm0.06$ & $-4.60\pm0.06$ & $-0.49\pm0.12$ & 13 \\
\ion{Mg}{ii} & $-5.40\pm0.02$ & $-5.50\pm0.02$ & $-1.07\pm0.05$ & 1 \\
\ion{Al}{iii} & <$-6.42^{+0.30}_{}$ & <$-6.52^{+0.30}_{}$ & <$-0.93^{+0.30}_{}$ & \\
\ion{Si}{iii-iv} & $-6.03\pm0.07$ & $-6.13\pm0.07$ & $-1.60\pm0.07$ & 1/2 \\
\ion{P}{iii} & $-6.76\pm0.06$ & $-6.85\pm0.05$ & $-0.23\pm0.06$ & 1 \\
S & <$-6.00^{+0.30}_{}$ & <$-6.10^{+0.30}_{}$ & <$-1.18^{+0.30}_{}$ \\
\ion{Ar}{iii} & $-5.55\pm0.04$ & $-5.64\pm0.04$ & $-0.01\pm0.14$ & 2 \\
Ca & <$-6.36^{+0.30}_{}$ & <$-6.46^{+0.30}_{}$ & <$-0.76^{+0.30}_{}$ \\
\ion{Ti}{iii-iv} & $-5.69\pm0.12$ & $-5.79\pm0.12$ & $1.30\pm0.13$ & 3/2 \\
Fe & <$-4.90^{+0.30}_{}$ & <$-5.00^{+0.30}_{}$ & <$-0.46^{+0.30}_{}$ \\
\ion{Ni}{iii} & $-4.62\pm0.13$ & $-4.72\pm0.13$ & $1.10\pm0.14$ & 14 \\
\ion{Zn}{iii} & $-4.92\pm0.08$ & $-5.02\pm0.08$ & $2.46\pm0.09$ & 15 \\
\ion{Ga}{iii} & $-5.62\pm0.06$ & $-5.72\pm0.06$ & $3.28\pm0.11$ & 7 \\
\ion{Ge}{iii-iv} & $-5.05\pm0.10$ & $-5.14\pm0.10$ & $3.24\pm0.14$ & 3/5 \\
\ion{Kr}{iii} & $-4.91\pm0.10$ & $-5.01\pm0.11$ & $3.77\pm0.12$ & 10 \\
\ion{Sr}{ii-iii} & $-4.44\pm0.09$ & $-4.54\pm0.09$ & $4.63\pm0.11$ & 4/21 \\
\ion{Y}{iii} & $-5.13\pm0.01$ & $-5.23\pm0.01$ & $4.60\pm0.05$ & 2 \\
\ion{Zr}{iii-iv} & $-4.76\pm0.09$ & $-4.85\pm0.09$ & $4.60\pm0.10$ & 1/13 \\
\ion{Sn}{iv} & $-5.56\pm0.04$ & $-5.65\pm0.04$ & $4.34\pm0.11$ & 2 \\
\ion{Pb}{iv} & $-6.75\pm0.40$ & $-6.84\pm0.40$ & $3.44\pm0.42$ & 1 \\
\bottomrule
\end{tabular}
}
\end{table}
From a spectroscopic perspective, the prevalence of strong lines of heavy elements (here $Z > 30$) is the most striking feature of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}. Nevertheless, many lines of heavy metals remained either undetected in the previous analyses \citep{nas11,Latour2019b}, owing to the limited S/N and wavelength coverage of the spectra available, or unidentified owing to the scarcity of atomic data. We therefore set out to identify the lines that are present in both \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}. Oscillator strengths are available for many ions that are expected to show spectral lines in the programme stars. However, several of these lines have remained unidentified so far because their rest wavelengths are not known with sufficient precision. The large wavelength coverage and good S/N of our spectra allowed us to identify lines of such ions from their predicted relative intensities by adjusting the theoretical wavelengths to match the positions of observed lines. These empirical wavelengths may also be useful in future atomic structure calculations. The 102 detected heavy-metal lines with available oscillator strengths are listed in Table \ref{table_idf}. They include the strong, previously unidentified lines noted by \cite{nas11} at 4007\,\AA\ and 4216\,\AA, which now appear to be \ion{Sr}{iii} and \ion{Sn}{iv}.
Lines that required significant shifts to match observed lines are additionally listed in Table \ref{table_lshift}. The 21 newly identified lines that lack oscillator strengths are listed in Table \ref{table_idpos}; some are shown in Fig.~\ref{lines_unid}. The 51 remaining unidentified lines are listed in Table \ref{table_unid}. In the following paragraphs, we briefly describe the analysis for each heavy element detected. The strongest modelled lines for each heavy element are shown in Fig.~\ref{lines_1} (Ga, Y, Sn, Pb), Fig.~\ref{lines_2} (Ge, Kr, Sr), and Fig.~\ref{lines_zr} (Zr) for both stars.
\vspace{-12pt}
\paragraph{Gallium:} we identified several \ion{Ga}{iii} lines in the spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. Oscillator strengths for optical \ion{Ga}{iii} lines were derived by \cite{oreilly98}. In particular, \ion{Ga}{iii} 3517.4, 3577.3, 3806.7, 4380.6, 4381.8, 4993.9, 5337.2, 5358.2, 5844.9, and 5993.9\,\AA\ could be used to derive an abundance of about 2000 times solar for both stars. To our knowledge, these lines have never before been observed in any star.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{lsiv_f46_reltosun_v3.pdf}
\caption{Abundance patterns of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ relative to that of the Sun (by number fraction). Only elements with an abundance measurement are shown. Upper limits are marked with an arrow and less saturated colours. For comparison, abundance measurements for He-poor sdOB stars (33000\,K < $T_{\rm eff}$\ < 36500\,K) are shown as grey open circles \citep[][based on optical data]{Geier2013}, diamonds \citep[][based on far-UV data]{chayer2006}, and squares \citep[][based on UV data]{otoole06}.}
\label{fig:pattern}
\end{figure*}
\vspace{-12pt}
\paragraph{Germanium:} \cite{nas11} identified and provide oscillator strengths for three \ion{Ge}{iii} lines in the optical spectrum of \object{LS\,IV$-$14$\degr$116}. Oscillator strengths for optical lines of \ion{Ge}{iv} were provided by \cite{oreilly98}. However, these \ion{Ge}{iv} lines have never been used to derive abundances, and their wavelengths had to be shifted to match the observed ones, as listed in Table \ref{table_lshift}. We used the three \ion{Ge}{iii} lines as well as four \ion{Ge}{iv} lines to derive a germanium abundance of 2000 times solar for both stars. There is a mismatch between the \ion{Ge}{iii} and \ion{Ge}{iv} lines, which systematically appear too weak in our synthetic spectra. This may be due to NLTE effects or to systematic differences between the atomic data used for \ion{Ge}{iii} and \ion{Ge}{iv}. An effective temperature of 35920\,K would be required for \object{LS\,IV$-$14$\degr$116}\ to simultaneously reproduce both ionisation stages. However, this temperature is about 400\,K too high to reproduce the ionisation balance of most other elements.
\vspace{-12pt}
\paragraph{Arsenic:} two weak, unidentified lines at 3922.5\,\AA\ and 4037\,\AA\ are observed close to the experimental wavelengths of the \ion{As}{iii} lines provided by \cite{Lang1928}, as listed in NIST.\footnote{National Institute of Standards and Technology, \url{https://physics.nist.gov/PhysRefData/ASD/lines_form.html}; see also \cite{NIST_ASD}.} We are not aware of oscillator strengths for optical \ion{As}{iii} lines and therefore cannot derive an arsenic abundance.
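For reference, the `times solar' factors quoted in these paragraphs follow directly from the logarithmic abundance ratios listed in Tables \ref{table_abund_feige} and \ref{table_abund_lsiv}. For gallium in \object{Feige\,46}, for instance,
\[
\epsilon_{\mathrm{Ga}}/\epsilon_{\mathrm{Ga},\odot} = 10^{\log \epsilon/\epsilon_{\odot}} = 10^{3.34} \approx 2200 ,
\]
that is, roughly 2000 times the solar value.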
\vspace{-12pt}
\paragraph{Selenium:} fifteen previously unidentified lines could be attributed to \ion{Se}{iii} using the experimental wavelengths provided by \cite{Badami1933} as listed in NIST (see Fig.~\ref{lines_unid}). This is the first time these lines have been observed in any star, and they are visible in both \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. A list of identifications is given in Table \ref{table_idpos}. Unfortunately, no oscillator strengths are available for optical \ion{Se}{iii} lines.
\vspace{-12pt}
\paragraph{Krypton:} \ion{Kr}{iii} shows many lines in the UVES spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ that, as far as we know, have never been identified in any star.\footnote{Around 2012, N.~Naslim reported the possible presence of krypton lines to one of us (CSJ); this could not be confirmed at the time.} Fortunately, oscillator strengths are provided by \cite{Raineri1998}, allowing us to determine the krypton abundance. Some lines had to be shifted to match the observed positions; they are listed in Table \ref{table_lshift}. We used \ion{Kr}{iii} 3325.76, 3342.48, 3351.94, 3474.65, 3488.55, 3564.24, 3641.35, 3690.66, and 4067.40\,\AA\ to derive an abundance of about 5500 times solar for both stars. The predicted \ion{Kr}{iii} 3308.22 and 3396.72\,\AA\ lines do not match observed lines. The alternative oscillator strengths for these two lines provided by \cite{Eser2018} are even larger. These lines might require large shifts or have inaccurate oscillator strengths.
\vspace{-12pt}
\paragraph{Strontium:} in total, 35 previously unidentified lines can clearly be attributed to \ion{Sr}{iii}, for example the strong 3430.8 and 3936.4\,\AA\ lines. To our knowledge, these lines have never before been reported in stellar spectra. Wavelengths and oscillator strengths for \ion{Sr}{ii-iii} were provided by R.~Kurucz, allowing us to determine the strontium abundance. The resonance lines \ion{Sr}{ii} 4077.7 and 4215.5\,\AA\ used by \cite{nas11} to derive the strontium abundance in \object{LS\,IV$-$14$\degr$116}\ are also observed in \object{Feige\,46}. To model these lines, we used oscillator strengths from \cite{Fernandez2020}, who recently investigated \ion{Sr}{ii} in detail (along with \ion{Y}{iii} and \ion{Zr}{iv}). Both stars also show \ion{Sr}{ii} lines at 3380.7, 3464.5, and 4305.4\,\AA. Fitting four \ion{Sr}{ii} lines (three for \object{Feige\,46}) as well as 21 \ion{Sr}{iii} lines (19 for \object{Feige\,46}) results in an abundance of 43\,000 times solar for \object{LS\,IV$-$14$\degr$116}\ and 31\,000 times solar for \object{Feige\,46}.
\vspace{-12pt}
\paragraph{Yttrium:} \cite{nas11} identified two strong yttrium lines in the spectrum of \object{LS\,IV$-$14$\degr$116}: \ion{Y}{iii} 4039.602 and 4040.112\,\AA. Fitting these lines (with \ion{Y}{iii} 4039.6\,\AA\ at a slightly revised position) results in abundances of 27\,000 times solar for \object{Feige\,46}\ and 40\,000 times solar for \object{LS\,IV$-$14$\degr$116}. Oscillator strengths for additional \ion{Y}{iii} lines observed at 5102.9, 5238.1, and 5602.2\,\AA\ are provided by \cite{Fernandez2020}. However, these lines are not consistent with \ion{Y}{iii} 4039.6 and 4040.1\,\AA\ and were therefore not considered for the abundance determination.
\begin{figure*}
\centering
\includegraphics[width=0.95\textwidth]{sun_lsiv_f46_nf_v3.pdf}
\caption{Atmospheric abundances for \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ by number fraction \citep[with solar values from][]{asplund09}.}
\label{fig:pattern_nf}
\end{figure*}
\vspace{-12pt}
\paragraph{Zirconium:} by far the strongest heavy-metal lines in the optical spectra of both stars originate from Zr\,\textsc{iv} transitions (see Fig.~\ref{lines_zr}). Oscillator strengths for four Zr\,\textsc{iv} lines were provided by \cite{nas11} and for two additional lines by \cite{nas13}. \cite{Rauch2017} also provide oscillator strengths for a large number of UV and optical Zr\,\textsc{iv} lines, while \cite{Fernandez2020} have recently computed oscillator strengths for eight Zr\,\textsc{iv} lines that are observed in the UVES spectra of both stars. We exclusively rely on the data from \citet{Rauch2017}, as they provide the most extensive list. A single strong Zr\,\textsc{iii} line is observed at 3497.9\,\AA\ and is somewhat too weak in our models. We used this line as well as several Zr\,\textsc{iv} lines, including the four Zr\,\textsc{iv} lines used by \cite{nas11}, to determine the abundance in both stars. The best-fit Zr abundance for \object{LS\,IV$-$14$\degr$116}, 40\,000 times solar, is significantly higher than that for \object{Feige\,46}\ (20\,000 times solar). As shown in Fig.~\ref{lines_zr}, the Zr\,\textsc{iv} lines are very well reproduced in both stars (with the exception of Zr\,\textsc{iv} 3919.3 and 5462.3\,\AA, which are too strong in our models). In addition, we slightly revised the positions of two \ion{Zr}{iv} lines: \ion{Zr}{iv} 5462.38 and 5779.88\,\AA.
\vspace{-12pt}
\paragraph{Tin:} strong spectral lines of \ion{Sn}{iv} at 3862.1 and 4217.2\,\AA\ are visible in the UVES spectra of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. These lines have not previously been identified in any star. To model them, we used oscillator strengths provided by \cite{Kaur2020}, but the rest wavelengths had to be adjusted (see Table \ref{table_lshift}). The tin abundance derived from the two newly identified lines turns out to be 22\,000 times solar for \object{LS\,IV$-$14$\degr$116}\ and 3700 times solar for \object{Feige\,46}; the latter is consistent with the value derived from UV lines by \cite{Latour2019b}.
\vspace{-12pt}
\paragraph{Lead:} the lead abundance of \object{LS\,IV$-$14$\degr$116}, 2800 times solar, is based on a very weak \ion{Pb}{iv} 4049.8\,\AA\ line. No other lead lines are detected; the abundance therefore has a large uncertainty. \cite{Latour2019b} determined an upper limit for lead in \object{Feige\,46}\ of 680 times solar. It is based on the Pb\,\textsc{iv}\,1313\,\AA\ resonance line and is likely close to the actual abundance. Although this upper limit is consistent with the non-detection of Pb lines in the optical spectrum of \object{Feige\,46}, it should be confirmed by UV observations of higher quality than the low-S/N IUE spectrum that is currently available.
\vspace{-12pt}
\paragraph{Undetected elements:} we searched unsuccessfully for lines of fluorine, sodium, chlorine, potassium, scandium, vanadium, rubidium, and xenon. Details can be found in Appendix \ref{appendix_undetected_lines}.
\subsection{The emerging abundance pattern}
Both stars show almost the same abundance pattern, as illustrated in Fig.~\ref{fig:pattern}. When compared to solar values, nitrogen is enhanced and oxygen depleted.
Carbon is slightly super-solar in \object{Feige\,46}\ and slightly sub-solar in \object{LS\,IV$-$14$\degr$116}. The light metals C, N, O, Ne, Mg, Si, and P are all slightly more abundant in \object{Feige\,46}, but otherwise follow the same pattern as in \object{LS\,IV$-$14$\degr$116}. The abundances of elements from argon to krypton (when known) are almost identical, and calcium is depleted in both stars. Heavy elements are enriched to very high values, from zinc at about 300 times solar to zirconium well above 20\,000 times solar. While highly enriched compared to solar values, the concentrations of Sr, Y, Zr, Sn, and Pb in the line-forming region of \object{Feige\,46}\ are progressively less extreme than those of \object{LS\,IV$-$14$\degr$116}. This enrichment is nevertheless notable when compared to He-poor sdOB stars, which have been observed to be enhanced in Zn, Ga, Ge, Zr, and Sn to about 10 to 200 times the solar value \citep{otoole06,chayer2006,bla08}.
Both the extreme enrichment in heavy metals and the abundances of lighter metals differ from those observed in He-poor sdOB stars. In particular, the argon and calcium abundances in \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ are significantly lower than the super-solar values \cite{Geier2013} obtained for He-poor sdOBs of similar temperatures. Such a strong deficiency compared to He-poor sdOB stars (2\,dex for calcium) cannot be explained by the lower initial metallicity that might be expected for \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ given their halo kinematics. It is worth mentioning that this calcium deficiency is not observed in lead-rich iHe-sdOB stars such as [CW83]\,0825+15 \citep{jeff17}, FBS\,1749+373, and PG\,1559+048 \citep{Naslim2020}. These stars show calcium abundances in line with those observed in He-poor sdOB stars. In contrast, the carbon and nitrogen abundances in \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ are higher than in the average He-poor sdOB star. Such enrichment could be caused by an excess of material processed by H-burning (CNO cycle) and He-burning (3$\alpha$) in the atmospheres of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ when compared to He-poor sdOB stars, as predicted by both hot-flasher \citep{bert08} and merger scenarios \citep{zhang12}. This is consistent with the positive correlation between the helium and carbon abundances of sdOB stars in the globular cluster $\omega$\,Cen found by \cite{Latour2014}. In addition, the abundances of C, N, and O might still be affected by diffusion processes to some degree (in both He-poor and iHe-sdOB stars).
\section{Discussion and conclusions}\label{sect:conclusions}
We performed a detailed spectral analysis of \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}. This consistent analysis of both stars enables an accurate and direct comparison of their abundance patterns, which would otherwise be hampered by the use of different analysis methods. The abundance patterns of both stars, as well as their differences, can likely be explained by atmospheric diffusion processes. In terms of diffusion, it is convenient to consider the abundance pattern as number fractions, without the comparison to solar values (Fig.~\ref{fig:pattern_nf}). It is readily apparent that, overall, the abundance of the light metals from carbon to phosphorus drops by three orders of magnitude, from $\log \epsilon \approx-3.5$ to about $-$6.5\,dex.
Unlike in the Sun, the abundances of the heavier elements (except calcium) do not continue to drop, but follow a more constant pattern. A comparison of detailed diffusion calculations with these observed abundance patterns is required to resolve the question of whether diffusion alone is enough to produce such a high enrichment of heavy metals. In addition, atmospheric models that consider atmospheric stratification are required to determine whether the observed heavy-metal enrichment can be explained by thin layers of enriched material in the line-forming region.
Thanks to the excellent quality and wavelength coverage of the UVES spectra, we were able to identify many previously unidentified lines in \object{Feige\,46}\ and \object{LS\,IV$-$14$\degr$116}\ with transitions of heavy ions. Strong lines with available oscillator strengths originate from ions such as \ion{Ge}{iv}, \ion{Kr}{iii}, \ion{Sr}{iii}, \ion{Zr}{iii}, and \ion{Sn}{iv}. Their identification will enable the determination of abundances in future analyses of other heavy-metal stars, even with spectra of lower quality. Atomic data are still lacking for some heavy elements in ionisation stages \textsc{iii-iv}, including several newly identified lines of \ion{Ge}{iii}, \ion{Se}{iii}, and \ion{Y}{iii}. We also provide observed wavelengths for these lines, which may be useful in future atomic structure calculations. About 50, mostly weak, lines detected in the spectra of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ remain unidentified and could belong to elements not yet identified in either star.
We also analysed the TESS light curve of \object{Feige\,46}\ and detected five of the six modes found by \cite{Latour2019a}. The period stability of these five pulsation modes ($\dot P \lesssim 10^{-8}$\,s/s) is not compatible with the stronger period decay predicted by \cite{bat18} for a star quickly evolving through a series of late helium flashes. This calls into question the idea that \object{Feige\,46}\ may be a pre-EHB object with pulsations driven by the $\epsilon$-mechanism generated by late helium flashes.
Stellar parameters (mass, radius, and luminosity) were derived from the high-quality \textit{Gaia} parallax by combining it with the atmospheric parameters and the spectral energy distribution. The results for both stars are limited by the uncertainty of the surface gravity, but are consistent with the canonical subdwarf mass predicted by hot-flasher models \citep[0.46\,M$_\odot$,][]{dor93,2003MNRAS.341..669H}.
The similarity of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ in terms of atmospheric parameters, abundances, pulsation, and kinematics remains puzzling. A larger sample of intermediately He-rich sdOB stars with detailed observed abundance patterns is required to draw conclusions regarding the causal relation between these features. Such a sample would also be required in order to answer the following questions:
\begin{itemize}
\item What makes the heavy-metal stars different from the normal sdOB stars? Other chemically peculiar stars, such as helium-rich main-sequence B stars and Ap stars, have strong magnetic fields, and so it has been suggested that the heavy-metal stars are magnetic too; however, no magnetic field has been detected in \object{LS\,IV$-$14$\degr$116}\ \citep[down to 300\,G,][]{ran15}.\vspace{2pt}
\item Are most iHe-sdOB stars an intermediate stage in the evolution of He-sdOs towards the He-poor sdBs? \vspace{2pt}
\item At which point in their evolution does atmospheric diffusion become important?
\end{itemize}
Fortunately, recent surveys such as the LAMOST survey \citep[e.g.][]{Lei_2020} and the SALT/HRS survey \citep[e.g.][]{2017OAst...26..202J} are discovering many new He-rich subdwarf stars. Future analyses of a larger sample of stars that share the atmospheric parameters of \object{LS\,IV$-$14$\degr$116}\ and \object{Feige\,46}\ (intermediate He-enrichment and $T_{\rm eff}$\ around 35\,000\,K), but also of their possible progenitors, the extreme He-sdOs, might give important clues regarding the evolution of He-rich subdwarf stars.
\begin{acknowledgements}
We thank Simon Kreuzer for the development of the photometry query tool and Ingrid Pelisoli for very helpful comments on TESS. We thank the referee, Suzanna Randall, for useful suggestions that improved this paper. M.~L.~acknowledges funding from the Deutsche Forschungsgemeinschaft (grant DR 281/35-1). S.~C.~acknowledges financial support from the Centre National d’Études Spatiales (CNES, France) and from the Agence Nationale de la Recherche (ANR, France) under grant ANR-17-CE31-0018. Based on observations collected at the European Southern Observatory under ESO programmes 0104.D-0206(A), 087.D-0950(A), and 095.D-0733(A). This paper includes data collected by the TESS mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the TESS mission is provided by NASA's Science Mission Directorate. Based on INES data from the IUE satellite. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. Based on observations obtained as part of the VISTA Hemisphere Survey, ESO Programme 179.A-2010 (PI: McMahon). The TOSS service (\url{http://dc.g-vo.org/TOSS}) used for this paper was constructed as part of the activities of the German Astrophysical Virtual Observatory. We acknowledge the use of the Atomic Line List (\url{http://www.pa.uky.edu/~peter/newpage/}). This research has made use of NASA's Astrophysics Data System.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Executive Summary} \subfile{sections/ExecSummary.tex} \pagebreak
\section{Introduction}\label{sect:Intro} \subfile{sections/Introduction.tex} \pagebreak
\section{The Features of Wargames} \label{sect:GameFeatures} \subfile{sections/WargameFeatures.tex} \pagebreak
\section{Recent Progress in Deep Reinforcement Learning}\label{sect:DeepRL} \subfile{sections/DeepRL.tex} \pagebreak
\section{Other Game AI techniques}\label{sect:OtherAlgos} \subfile{sections/OtherAlgorithms.tex} \pagebreak
\section{Algorithms for Wargames}\label{sect:algoSummary} \subfile{sections/AlgorithmsForWargames.tex} \pagebreak
\section{Platforms}\label{sect:Platforms} \subfile{sections/Platforms.tex} \pagebreak
\section{Recommendations/Next Steps} \label{sect:Recommendations} \subfile{sections/Recommendations.tex} \pagebreak
\small \bibliographystyle{abbrv}
\subsection{Forward Model}\label{sect:ForwardModel}
The forward model of a game takes the current state and the current set of actions, and transitions to the next state. Every computer game has such a model; without it the game would not function. However, in order to use the model in statistical forward planning (SFP) and model-based reinforcement learning algorithms, we need it to be both fast and easily copied. The model needs to be easily copied so that hypothetical actions can be applied to a copy of the game rather than the live game; this can be technically demanding, because great care is needed over which parts of the state must be deep-copied, although heavy use of immutable data types makes the process less bug-prone. Speed is important for SFP algorithms in order to run many simulations for each decision to be made in the live game. For model-based RL, speed is less important but still useful due to the high sample complexity of these methods.
With a copyable forward model a number of statistical planning-based algorithms are usable, such as Monte Carlo Tree Search (MCTS)~\cite{Browne_Powley_Whitehouse_Lucas_Cowling_Rohlfshagen_Tavener_Perez_Samothrakis_Colton_2012}, Combinatorial Multi-Armed Bandits (CMAB)~\cite{Ontanon_2017}, and Rolling Horizon Evolutionary Algorithms (RHEA)~\cite{Perez_Samothrakis_Lucas_Rohlfshagen_2013}. These are reviewed in Section~\ref{sect:SFP}, and are especially useful if there is room for `thinking' time between each move, anything from 40 milliseconds to a couple of minutes.
A fast non-copyable forward model is still useful for a subset of AI techniques, most notably Deep Reinforcement Learning. In this case the game can only be run end-to-end from a configured starting point, and planning algorithms from a mid-point cannot be used. But each of these end-to-end runs can be treated as a single iteration to generate data for RL algorithms. For example, OpenAI's state-of-the-art Dota2 player is trained using Dota2 games that run at approximately half normal speed (for local optimisation reasons) with no copyable forward model~\cite{berner2019dota}. If we do have a fast, copyable forward model, then Expert Iteration, the technique DeepMind used to great effect in AlphaZero, combines Deep RL with MCTS. This has also been helpful in other games such as Hearthstone~\cite{Swiechowski_Tajmajer_Janusz_2018} and Hanabi~\cite{Goodman_2019}, and is discussed in Section~\ref{sect:OtherAlgos}.
Linked to the existence of a forward model is the fact that in any game we have access to a semantic representation of the game state (unit positions, current capabilities, sighted enemies, terrain details, etc.).
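To make the copy requirement concrete, the following minimal Python sketch shows what a copyable forward-model interface over such a semantic state might look like; all names and the state contents are illustrative, not taken from any of the systems cited:
\begin{verbatim}
import copy
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class GameState:
    """Semantic game state: unit positions, sighted enemies, turn counter."""
    unit_positions: Dict[str, Tuple[int, int]] = field(default_factory=dict)
    sighted_enemies: List[str] = field(default_factory=list)
    turn: int = 0

class ForwardModel:
    """Maps (state, actions) to the next state without touching the live game."""

    def step(self, state: GameState, actions: Dict[str, str]) -> GameState:
        # Advance a *copy*, never the live state, so that planners such as
        # MCTS or RHEA can explore hypothetical futures safely.
        next_state = copy.deepcopy(state)
        next_state.turn += 1
        # ... movement and combat resolution would be applied here ...
        return next_state

model = ForwardModel()
live_state = GameState(unit_positions={"tank_1": (3, 4)})
hypothetical = model.step(live_state, {"tank_1": "move_north"})
assert live_state.turn == 0  # the live game is untouched
\end{verbatim}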
Where such a semantic representation exists, it is generally preferable to use it as the input features for any technique, including Deep RL, rather than pixel-based screenshots. This avoids the overhead of learning an implicit semantic representation from pixels and means that the AI technique can focus on learning an optimal plan directly. The prevalence of pixel-based inputs in the academic literature, even when a forward model is available, is due to the underlying goal of Artificial General Intelligence (AGI) that learns from raw sensory input. This is not relevant to wargames. Pixel input is also much used in the literature due to the focus on self-driving technology, for which there is no forward model and raw sensory input is all that is available. Again, this is not relevant to wargames.
\subsection{Unit-based techniques}\label{sect:unitTechniques}
There are two distinct aspects of the use of units in wargames that lead to slightly different groups of techniques. These are:
\begin{enumerate}
\item \textbf{Compositional/Factorisation} techniques. These consider that, rather than trying to come up with a single policy or plan across all available units, it is quicker to come up with a plan for each unit individually. A set of individual unit plans/policies will be less effective than a global plan, but is much less computationally demanding and may be more akin to the way that units operate in `reality'.
\begin{itemize}
\item{Combinatorial Multi-Armed Bandits (CMAB). Where multiple units can be given simultaneous orders, each underlying unit action can be selected independently (a unit-based `arm'), and these amalgamated into a global `arm'~\cite{Ontanon_2017, Barriga_Stanescu_Buro_2017}. The global level allows for dependencies between units to be learned. }
\item{Multi-Agent Reinforcement Learning (MARL), which learns a policy for each unit independently. The idea again is to benefit from the fact that the state space at least partially factorises into components for each unit, and learning these factorised policies will be faster than learning the joint policy~\cite{Tan_1993}.}
\item{Decentralised POMDPs~\cite{oliehoek2012decentralized, Amato_Oliehoek_2015}. This is an alternative way of phrasing the problem using a unit-based factorisation; however, the literature generally focuses on problem-solving in a non-adversarial environment, such as Fire Brigade units co-operating to extinguish a fire~\cite{Paquet_Bernier_Chaib-draa_2004}. Units are generally, but not always, homogeneous, with the benefit coming from learning an `easier' policy for a generic individual unit instead of a more complicated global policy.}
\item Search a game-tree for each unit individually, and then amalgamate these results~\cite{Lisy_Bosansky_Vaculin_Pechoucek_2010}. A similar way to reduce the combinatorial problems of many units is to group units together into smaller sub-groups, for example based on proximity or unit type, and only regard interactions within each sub-group when planning actions.
\end{itemize}
\item \textbf{Hierarchical} techniques. These consider decision-making at multiple levels, and are a natural fit for many aspects of wargaming. A higher-level strategic policy makes decisions about broad goals and transmits these goals to the lower-level units, which seek the best local way to implement them. This hierarchy can theoretically have many levels, but two is usually enough in the literature.
\begin{itemize}
\item Hierarchical approaches that set a high-level goal (e.g.
`Attack' or `Defend Position'), then assign groups of units to sub-goals, with those units planning/learning at the lowest level how to achieve their sub-goal~\cite{Stanescu_Barriga_Buro_2014, Churchill_Buro_2015}. Different algorithms can be used at each level in the hierarchy, varying between heuristic (expert) rules, Deep RL, tree-based search, and others. This decomposition of the overall task reduces the inherent combinatorial explosion, with lower levels using a reward function to achieve a goal specified by the previous level.
\item A similar approach can be used to decide which evaluation function to use in MCTS (or any search algorithm), for example by using Hierarchical Task Networks as the higher planning level to select which evaluation function to use in MCTS at the lower execution level~\cite{Neufeld_Mostaghim_Perez-Liebana_2019}.
\item In the work of \cite{Ontanon_Buro_2015} in an RTS game, 19 high-level `tasks' (e.g. harvest resources, or rush player) and 49 low-level `methods' are used in hierarchical planning. By using A* pathfinding and `methods' as the lowest level in planning rather than primitive actions (such as `move north one hex'), this technique leverages domain knowledge and performs well compared to MCTS or other methods.
\item FeUdal Networks use Deep Reinforcement Learning with a `Manager' network setting a high-level goal, and then `Worker' network(s) attempting to fulfil the high-level goal~\cite{Vezhnevets_Osindero_Schaul_Heess_Jaderberg_Silver_Kavukcuoglu_2017, ahilan2019feudal}. This could be adapted to a wargame environment with one Worker network per unit, or group of units.
\end{itemize}
\end{enumerate}
\subsection{Information Flow and Unit Autonomy}\label{sect:InfoFlow2}
There is relatively little academic work on restricted Information Flow within Game AI, outside of more abstract games with deliberately constrained information channels, such as Hanabi or Bridge. In these games Deep Learning approaches have been used to learn conventions from scratch~\cite{Foerster_Song_Hughes_Burch_Dunning_Whiteson_Botvinick_Bowling_2018, Yeh_Hsieh_Lin_2018}.
More relevant to wargames is work on Dec-POMDPs (Decentralised Partially Observable Markov Decision Processes), which have multiple autonomous agents on the same team working towards a single team-level goal~\cite{oliehoek2016concise, Paquet_Bernier_Chaib-draa_2004, Melo_Spaan_Witwicki_2012}. These were mentioned in Section~\ref{sect:unitTechniques} as a way of reducing the complexity of managing multiple different units by factorising a plan into semi-independent plans for each unit, and then amalgamating these to give a global plan. Within this framework there are several options that can be used to additionally model unit autonomy and information flow:
\begin{itemize}
\item \textbf{Global Information}. The default option is to use global information for each individual decision/plan. This provides the benefit of reducing the computation of the overall plan, but assumes that each individual unit has the same information as the overall commander.
\item \textbf{Local Information}. Each unit makes its decision using only locally available information. This models local autonomy more directly, is realistic in the case of a unit being cut off from communication networks, and is computationally cheaper.
\item \textbf{Information Cost}.
Communicating information can be treated as an action in its own right, and the commander can choose to pass on some global information to units, which can then incorporate it in local decisions. This can also work in the opposite direction. There is little work in this area, and the focus is on sideways communication between the individual agents without any central command unit~\cite{Melo_Spaan_Witwicki_2012, wu2011online}.
\end{itemize}
Consider a concrete example of how the framework might be adapted in a wargame environment to provide low-level control of units once given a target by the commander (either human or computer). If guided by a utility function that applies suitable weights to secondary goals, such as self-preservation and remaining unobserved, as well as reaching the target, then instead of following a centrally plotted path the units could make their own decisions at each point based on their local information only, adapting these as they become aware of enemy (or friendly) units.
There are no restrictions on the algorithms used for individual agent policies in a Dec-POMDP setting, and RL or search-based planning methods are both common. Co-evolution is also used~\cite{chades2002heuristic}, and this can be especially relevant with an absence of formal communication and heterogeneous units, where each unit type needs to adapt to the others.
Graph neural networks (GNNs) are an emerging area of research in Deep RL that may be a useful tool for modelling information flow in wargames~\cite{zhou2018graph}. The basic idea behind a GNN is to model the dependencies of graphs via learned message passing between the nodes of the graph. The same neural network is used for all nodes in the graph to learn to integrate information from incoming nodes and to iteratively come up with a solution. In the case of wargaming, nodes in the GNN could represent units, while the connections between them reflect communication channels. A GNN could be trained to integrate information in a decentralised way and learn by itself to resolve communication issues. GNNs are discussed in a little more detail in Section~\ref{sect:GNN}.
\subsection{Imperfect Information}\label{sect:POMDP}
In practice, algorithms that work in large-scale POMDPs usually sample from some estimate of the unknown State Space:
\begin{itemize}
\item Perfect Information methods can be used by sampling possible states, then solving each sample, and then taking the `average' of the best actions. These approaches have theoretical flaws, but can work well in practice in some domains. To understand the detail of the flaws (`strategy fusion' and `non-locality'), see~\cite{Frank_Basin_1998}.
\item Monte Carlo search methods sample from possible hidden states and take an expectation over these. For example, Information-Set MCTS samples one possible state of the world on each iteration~\cite{Cowling_Powley_Whitehouse_2012}, and Imperfect Information Monte Carlo Search (IIMC) does the same but with recursive opponent models that factor in what the opponent can observe, allowing bluffing~\cite{Furtak_Buro_2013}.
\item A distribution over hidden information (or `belief states') can be maintained with a particle filter and updated as the game unfolds to complement these methods~\cite{Silver_Veness_2010}. This approximates a full Bayesian update, which is usually intractable in a non-trivial game environment.
\item Counterfactual Regret Minimisation (CFR) has worked exceptionally well in dealing with hidden information and player bluffing in up to 6-player Texas Hold'em Poker~\cite{Zinkevich_Johanson_Bowling_Piccione_2008, Lanctot_Lisy_Bowling_2014, Brown_Sandholm_2019}. However, this required pre-calculation of a whole-game Nash Equilibrium strategy, which took 8 days on 64 cores for 6 players. Some work has compared MCTS to Monte Carlo versions of CFR, and found that while CFR can give better results, MCTS performs better for more limited computational budgets~\cite{Ponsen_2011}.
\item Deep Reinforcement Learning can be used directly on the Observation Space, relying on implicit models of the hidden state being built. This applies in Dota2 and also in other approaches to Poker~\cite{Deepstack_2017, berner2019dota}.
\item Smoothed versions of both MCTS and Deep Learning, which play a mixture of a best-response policy and an averaged fictitious-play policy, can help convergence in imperfect-information environments~\cite{Heinrich_Silver_2015, Heinrich_Silver_2016}.
\end{itemize}
\subsection{Adversarial Opponents}\label{sect:AdvOpponent}
The presence of adaptive, adversarial agents complicates any learning approach and introduces issues around the `Red Queen' effect, in which improving play will not necessarily improve the outcome if the opponent is also learning. This is a well-documented problem in co-evolution, where it manifests as non-progressive cycles. Consider a simple case of Rock-Paper-Scissors. If the opponent has a tendency to play Rock, then we will learn to play Paper, which will prompt the opponent to learn to play Scissors\ldots and we can keep cycling \textit{ad infinitum}.
It is important to use a variety of training opponents to avoid over-adapting the current policy to the current opponent. This is sometimes called a `Hall of Fame' and can be constructed from policies from earlier stages of the learning process~\cite{Cliff_Miller_1995, Moriarty_Mikkulainen_1996, Rosin_Belew_1997}. This ensures that we do not forget how to deal with a particular strategy once we have defeated it. In the Rock-Paper-Scissors example this would learn a strategy that can be effective against an opponent that plays any of Rock, Paper, or Scissors, and converges to the Nash Equilibrium strategy. Approaches of this sort are used in AlphaStar, with multiple agents learning in parallel and playing each other in a Tournament to bootstrap strategies against each other without over-fitting to a single opponent (see Section~\ref{sec:continual_learning}).
It is notable that master-level Go in AlphaZero did not require this opponent variety and only used self-play against the current policy. However, AlphaZero (unlike the pure RL of AlphaStar) used MCTS planning on top of RL to select each move, which would be able to detect longer-term problems in a highly recommended move at the top level of the tree. AlphaZero did use distributed asynchronous learning, as did OpenAI's Dota2 work. This involves running many thousands of games simultaneously, each using a local copy of the current learned policy and providing updates to a central hub. This hub learns the policy using the traces of all games and periodically sends out updates. This helps avoid overfitting of the policy by including information from many different games at different stages in each update~\cite{mnih2016asynchronous, berner2019dota}.
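The `Hall of Fame' idea is simple enough to sketch directly; in the following minimal Python example, all names and the sampling probability are illustrative rather than taken from any of the cited systems:
\begin{verbatim}
import random
from typing import Callable, List

Policy = Callable[[object], object]  # maps a game state to an action

class HallOfFame:
    """Keep snapshots of past policies and sample training opponents from
    them, so that the learner does not over-fit to its latest self."""

    def __init__(self, snapshot_prob: float = 0.2):
        self.snapshots: List[Policy] = []
        self.snapshot_prob = snapshot_prob  # chance of facing an old policy

    def add(self, policy: Policy) -> None:
        self.snapshots.append(policy)

    def sample_opponent(self, current: Policy) -> Policy:
        # Mostly play against the current policy (pure self-play), but
        # sometimes revisit an older snapshot so that strategies we have
        # already beaten (Rock, say) are never forgotten.
        if self.snapshots and random.random() < self.snapshot_prob:
            return random.choice(self.snapshots)
        return current
\end{verbatim}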
Another tool to consider in an adversarial environment where both sides are learning is the `Win or learn fast' principle~\cite{Bowling_Veloso_2002}. Roughly speaking, this ramps up the learning rate when we are losing training games compared to when we are winning. This gives better theoretical convergence guarantees in many cases.
Opponent Modelling is the general idea that we think about what our opponent will do when planning our moves~\cite{Albrecht_Stone_2018}. In the context of a 2-player, primarily zero-sum wargame, the best approach is to learn the opponent's strategy simultaneously (with Deep RL or co-evolution), or use the same planning technique (e.g. MCTS). More sophisticated `theory of mind' approaches are unnecessary, and are primarily of benefit in general-sum games where there is an incentive for players to co-operate in at least some parts of the game.
\subsection{Relative Reward Sparsity}\label{sect:RelativeRS}
It is a triumph of RL (and especially Deep RL) that it is able to cope so well with such sparse rewards, and learn policy and value networks able to play classic games to a superhuman standard (see AlphaZero~\cite{AlphaZero_2017} and MuZero~\cite{schrittwieser2019mastering}). The algorithms rely on the fact that although the rewards are sparse, there are patterns in the game state that can accurately predict the value, and these patterns can be learned. It is essential that during learning the learner gets to experience a range of outcomes; for example, if every playout ends in a loss, there is no learning signal.
In RL, if the inherent reward signal is too sparse then \emph{Reward Shaping} may be used to provide additional incentives and guide a learning agent towards successful behaviours. Related concepts are curriculum learning, where the agent learns tasks of increasing complexity on the path towards the full-scale problem, and intrinsic motivation, where the agent actively seeks out novel game states it has not seen before~\cite{bengio2009curriculum, sukhbaatar2017intrinsic, Justesen_Risi_2018}.
$\epsilon$-greedy or softmax approaches often do very poorly with sparse rewards because they explore independently at each action/time-step. Instead we want to use an exploration policy that is `consistent' over time, making similar deviations when exploring, for example by biasing towards moves that go up-hill, or that attack at worse odds than normal. There is a rich literature on how we can encourage consistent exploration in RL, including evolution of the evaluation function or policy and intrinsic motivation to seek out novel states~\cite{Bellemare_Srinivasan_Ostrovski_Schaul_Saxton_Munos_2016, Ostrovski_Bellemare_Oord_Munos_2017, Salimans_Ho_Chen_Sidor_Sutskever_2017, Osband_Aslanides_Cassirer_2018, Osband_Blundell_Pritzel_VanRoy_2016}.
Expert Iteration learns an interim reward signal that can be used in short-term planning towards a long-term goal that is beyond the planning horizon. This is a technique used in AlphaZero (amongst others) to learn a game evaluation function from RL, and then use this in MCTS search~\cite{Anthony_Tian_Barber_2017, Swiechowski_Tajmajer_Janusz_2018}. Over several iterations of this meta-process the local evaluation function is bootstrapped up to learn which local, short-term features are conducive to global, long-term success.
Model Predictive Control (MPC) is another option if we have a forward model.
MPC uses a planning horizon of $H$ steps to plan an optimal sequence of moves, and then re-plans after taking the first of these and seeing its effect in the environment~\cite{Lowrey_Rajeswaran_Kakade_Todorov_Mordatch_2018}. It is possible to learn an evaluation function (via RL, for example) that is then used to optimally plan up to $H$ steps ahead using MPC.
Learning an interim evaluation function can also provide useful information on what the agent is seeking to maximise in the short term in service of a long-term goal, as the weights it uses may be quite different to those a human expert would design. This is not always needed. For example, the world-championship-beating Dota2 work from OpenAI used a hand-designed short-term reward signal based on the team's domain knowledge. Where this domain knowledge exists, it is usually a better thing to try first, as it can give the desired level of play with less complexity.
\subsection{Strategy Exploration}\label{sect:Exploration}
In some use-cases the purpose of a wargame is not to find the single best course of action, but to explore a range of possible outcomes. This is especially true in the case of Concept or Force Development wargames. A number of techniques can be useful here to ensure that a diverse range of policies is found:
\begin{itemize}
\item \textbf{Multi-objective optimisation}. This starts from the premise that there may be multiple different objectives, and a policy is needed that can adapt at run-time to the actual objective. It hence seeks a range of different policies that are `optimal' in different ways~\cite{Deb_Agrawal_Pratap_Meyarivan_2000, Perez-Liebana_Mostaghim_Lucas_2016}. Given a set of different objectives, for example to a) minimise casualties, b) minimise fuel usage, c) maximise the enemy forces degraded, a multi-objective approach returns a set of policies on the `Pareto front' of these dimensions. A policy on the Pareto front is one which is better than all other policies for at least one linear combination of the different objectives; for example, with 80\% weighting on minimising casualties, 20\% on fuel usage, and 0\% on degrading the enemy. These can also be set as secondary targets to the main goal of the wargame, to give policies which achieve the core victory objective with the best result on the secondary target. When the actual objective of a game is known (usually at game start, but this could change at some mid-point, in line with Objective Variability discussed in Section~\ref{sect:ObjectiveVar}), a relevant policy can be selected from the trained library. See~\cite{mossalam2016multi} for a Deep Learning multi-objective approach.
\item \textbf{Niching}. In Evolutionary Algorithms that maintain a population of policies, an additional bonus can be specified to ensure that this population remains diverse and does not converge too quickly on a single solution. This helps ensure that more of the possible policy space is explored. Classic options here include a reward based on genome distance, if this can be specified, but this may not automatically correspond to phenotypic distance. An alternative approach is to match the population against a diverse set of opponents (most commonly from a `Hall of Fame' of past generations), and divide the reward for beating each opponent across all those population members that are able to beat it. This rewards a policy that can defeat one or two opponents the others cannot, even if it has a low overall score~\cite{Rosin_Belew_1997}.
For a more detailed review see~\cite{preuss2015multimodal}.
\item \textbf{MAP-Elites}. Similar to multi-objective optimisation, MAP-Elites is a recent algorithm that seeks to find diverse solutions to the same problem~\cite{Mouret_Clune_2015}. It requires the specification of one or more dimensions of `phenotype', which for example could be `movement', `fuel usage', or `casualties suffered'. These are combined in a grid, and an optimal policy is sought for each cell (e.g. high movement, low fuel usage, and low casualties).
\item \textbf{Nash Averaging}. This can be used in combination with any of the above approaches, as well as in its own right. The common intuition is that a set of different policies may not be very diverse in practice. Nash Averaging can use game-theoretic analysis of the results of tournaments between the policies (or their performance on a range of different scenarios) to extract the small subset that between them encapsulates the full diversity of the population~\cite{Balduzzi_Tuyls_Perolat_Graepel_2018}. Nash Averaging has the benefit of discounting duplicate opponents when assessing the strength of each player, and hence prevents a weak player from appearing strong by being especially good at beating a cluster of self-similar weak players.
\end{itemize}
\subsection{Map-based}\label{sect:maps}
One feature of many wargames, especially land-based ones, is that they are played out on a map of varying terrain features. There are a few approaches that are natural tools to use in this case, which we highlight here:
\begin{itemize}
\item \textbf{Convolutional Neural Networks (CNNs)}. While we strongly recommend against learning to play wargames from raw pixel visual inputs, CNNs are still a useful tool for processing a symbolic version of a map; this is, for example, what a Go or Chess board really is. CNNs are used, for example, in~\cite{Barriga_Stanescu_Buro_2017} to process a symbolic representation of an RTS map to pick one of four scripts to use for a given unit.
\item \textbf{Multi-unit path-finding}. A* is the classic algorithm in this case, but copes less well when multiple units are involved. In that setting, Conflict-Based Search is the current state of the art~\cite{sharon2015conflict, boyarski2015icbs}.
\end{itemize}
\subsection{Human-like control}\label{sect:imitation}
In some wargame use-cases (see Section~\ref{sect:wargameTypes}) there is a requirement for an AI to act in a `human-like' way, which may also involve following standard military doctrine for the force in question. Deep RL, which has allowed agents to beat the world champions in some games, is less relevant for these use-cases, in which agents should not follow arbitrary policies but ones that are simultaneously high-performing and human-like. To encourage more human-like play, data from humans can be incorporated at different points in the training loop.
\textbf{Imitation learning:} One typical approach is to collect human playtraces and then train a policy network in a supervised way to reduce the error between the predicted action for a certain state and the action that a human performed. This approach was part of the original AlphaGo, in which a policy was trained to predict the most likely actions humans would perform, using databases from master-level tournaments~\cite{AlphaGo_2016}. This policy was then used to guide the MCTS tree-search by seeding initial values for each leaf node.
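At its core, this kind of imitation learning is a supervised classification problem over recorded state-action pairs. The following minimal Python/PyTorch sketch illustrates one such `behaviour cloning' update; the network size, state encoding, and action count are illustrative assumptions, not details of AlphaGo:
\begin{verbatim}
import torch
import torch.nn as nn

STATE_DIM, N_ACTIONS = 128, 16  # illustrative dimensions

policy = nn.Sequential(
    nn.Linear(STATE_DIM, 256), nn.ReLU(),
    nn.Linear(256, N_ACTIONS),  # logits over the discrete action set
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def behaviour_cloning_step(states, human_actions):
    """One supervised update: push the policy's action distribution
    towards the actions humans actually took in these states."""
    loss = loss_fn(policy(states), human_actions)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Stand-in batch of 32 playtrace samples:
states = torch.randn(32, STATE_DIM)
human_actions = torch.randint(0, N_ACTIONS, (32,))
behaviour_cloning_step(states, human_actions)
\end{verbatim}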
In AlphaGo the method was used to obtain a stronger policy rather than a specifically `human-like' one, and in later work (AlphaZero) the authors discarded this approach~\cite{AlphaZero_2017}. Other examples have aimed at `human-like' behaviour via this approach, for example learning a player-specific driving style in a racing game~\cite{Munoz_Gutierrez_Sanchis_2013}.
Beyond very simple cases, `naive' imitation learning in this fashion can be fragile and lead to very poor performance when the agent moves even a small distance from the training data. To mitigate this, algorithms such as \textsc{DAgger} can be used to generate new expert-like training data in the situations where this deviation is most likely~\cite{Ross_Pineau_Chaib-draa_Kreitmann_2011}. This requires a policy that can give a reasonable `expert' move in any given situation. In a wargame context, human experts could indicate what the `correct' move is after the AI has taken one in a few trial games, and this could then be used to augment the training data.
\textbf{Reward shaping:} Another approach, which was deployed in AlphaStar, is to incorporate the distance to human playtraces into the reward function used for RL. For example, in AlphaStar, bots were rewarded for winning while at the same time producing strategies reminiscent of human strategies. The benefit of this approach is that it directly allows the agent to optimise for both metrics, without needing a dedicated, purely supervised pre-training phase. This approach requires a suitable metric to be defined between human play and the strategy followed, and this often requires expert domain knowledge to specify what classic human strategies look like in game terms.
\textbf{Inverse Reinforcement Learning:} This approach `inverts' the classic RL problem by seeking to learn the reward function an agent is using based on its behaviour~\cite{Ng_Russell_2000}. It is useful for tasks in which a reward function might be hard to define. Human playtraces are used to estimate the implicit reward function, and then this reward function is used in classic RL to learn a policy. This avoids some of the fragility issues of naive imitation learning, but is computationally more demanding. This approach has been used, for example, to construct a human-like AI in the game of Super Mario~\cite{Lee_Luo_Zambetta_Li_2014}.
\textbf{Preference Based Reinforcement Learning:} A reward function can also be learned from human preferences~\cite{wirth2017survey}. In the approach by~\cite{christiano2017deep}, humans are presented with pairs of short video clips of agents performing Atari or MuJoCo robot control tasks, starting from random policies. The user selects which of the two policies they prefer; by repeatedly asking a human for preferences, a model of the user's preferences is learned through supervised learning. Once this model is learned, it can be used to drive RL. This approach was successfully applied to learning policies for different simulated robots. For our wargaming purposes, we imagine units could be trained through RL via self-play and a first estimate of a reward function. Every once in a while a user is asked to compare selections of agent policies, which should allow the initial reward function to be refined to produce more and more human-like and realistic strategies over time.
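The preference-learning step itself can be sketched compactly. The following Python/PyTorch fragment follows the general Bradley-Terry formulation used by~\cite{christiano2017deep}, in which the probability that a human prefers one trajectory segment over another is modelled via the segments' predicted total rewards; all dimensions and names are illustrative:
\begin{verbatim}
import torch
import torch.nn as nn

STATE_DIM = 128  # illustrative size of a per-timestep state encoding

# Learned reward model: maps a state to a scalar reward.
reward_model = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                             nn.Linear(64, 1))
optimiser = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

def preference_update(seg_a, seg_b, human_prefers_a):
    """One update: the preferred segment should receive a higher total
    predicted reward (softmax / Bradley-Terry over the two sums)."""
    r_a = reward_model(seg_a).sum()  # total predicted reward of segment A
    r_b = reward_model(seg_b).sum()
    logits = torch.stack([r_a, r_b]).unsqueeze(0)
    target = torch.tensor([0 if human_prefers_a else 1])
    loss = nn.functional.cross_entropy(logits, target)
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()

# Example: two 10-step segments, and the human preferred the first.
seg_a, seg_b = torch.randn(10, STATE_DIM), torch.randn(10, STATE_DIM)
preference_update(seg_a, seg_b, human_prefers_a=True)
\end{verbatim}
The learned \texttt{reward\_model} then stands in for a hand-designed reward signal in the subsequent RL loop.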
\vfill
\subsection{From RL to Deep RL}
An agent in reinforcement learning tries to solve a sequential decision-making problem under uncertainty through a trial-and-error learning approach with respect to a reward signal \cite{sutton1998introduction}. The approach can often be described as follows: the agent receives a state $s_t$ at timestep $t$ from the environment and, based on this state, samples an action $a_t$ from its policy $\pi(a_t|s_t)$. After performing the chosen action, the environment transitions to the next state $s_{t+1}$ and the agent receives a scalar reward $r_{t+1}$. The main objective in an RL setting is to optimise the agent's policy in order to maximise the sum of rewards.
A particularly popular early RL method was Q-learning, which tries to learn an optimal state-action value function $Q(s, a)$, the expected return of taking action $a$ in a particular state $s$, from which the best action can be read off. However, a challenge with this original tabular version of Q-learning is that the size of the table quickly becomes infeasible when the number of different states in the environment grows. Additionally, learning in such a sparse table would not be very efficient. A breakthrough introduced in 2015 by DeepMind was a method called Deep Q-Network (DQN), in which $Q(s, a)$ is represented by a deep neural network that learns compact representations of high-dimensional states \cite{atari}. DQN was the first approach able to master many Atari games at a human level from pixels alone.
Since then, many different deep reinforcement learning approaches combining the representational power of neural networks with reinforcement learning have shown impressive results in a variety of different domains. Two of the most impressive applications of Deep RL in games are DeepMind's work on StarCraft II \cite{vinyals2019grandmaster} and OpenAI's bot for Dota2 \cite{berner2019dota}. In the following we will review these approaches in terms of their computational and engineering efforts. Both StarCraft II and Dota2 not only have a very large action space, but many aspects of their state and action spaces are also continuous (see Sections~\ref{sect:StateSpace} and~\ref{sect:ActionSpace}). Additionally, we will review other recent trends in RL that are particularly important for wargaming, such as being able to continually learn, to produce explainable actions, or hierarchical deep RL methods.
\subsection{Dota2}
One of the potentially most relevant applications of deep RL to games, with regard to the challenges faced in typical wargames, is OpenAI's OpenAI Five bot \cite{berner2019dota}. This was the first AI system to defeat world champions at an e-sport game. In contrast to DeepMind's work on AlphaStar, OpenAI Five does not work from visual-style input: it learns from pre-processed information on enemy and unit locations. This setup is likely more relevant for an AI that is tasked with playing wargames such as CMANO or Flashpoint.
Dota2 is a multi-player real-time strategy game with complex rules that have evolved over several years. The game is played on a square map with two teams of five players each competing against each other. Each player controls one of the five team members, which is a hero unit with special abilities. Players can gather resources through NPC units called ``creeps'' and use those to increase the power of their hero by purchasing new items and improving their abilities.
Dota 2 is challenging for RL systems since it includes imperfect information (not all parts of the map are visible at the same time; see Section~\ref{sect:Observability}), long time horizons, and continuous state and action spaces. Games last for approximately 45 minutes and the state of the world is only partially observable, limited to the areas near units and buildings.

\subsubsection{Neural Network architecture} The OpenAI Five neural network architecture that controls each agent in the game observes around 16,000 inputs each time step (Figure~\ref{fig:open_five}). The majority of these input observations are per-unit observations for each of the 189 units in the game (e.g.\ 5 heroes, 21 buildings, etc.). The action space is divided into primary actions that the agent can take (e.g.\ move, attack), which are then parameterised with additional neural network outputs. Availability of actions was checked through hand-designed scripts. In total, the action space had 1,837,080 dimensions. Each of the five agents is controlled by a replica of the same network. Observations are shared across all members of the team (the network receives as input an indicator of which unit it is controlling). The employed network is a recurrent neural network with approximately 159 million parameters, mainly consisting of a single-layer 4096-unit LSTM.

\begin{figure}[ht] \centering \includegraphics[width=0.9\textwidth]{open_five_network.png} \caption{Neural architecture for OpenAI Five \cite{berner2019dota}.} \label{fig:open_five} \end{figure}

As mentioned above, instead of working from the screen pixels directly, information about opponent positions and the like, which would also be available to a human player, is pre-processed and given to the neural network directly. This way the system is focused on high-level decision making and strategic planning instead of visual information processing. Such a system thus aligns well with the setup likely encountered in wargames. Some game mechanics were controlled by hand-designed scripts, such as the order in which heroes purchase items. Additionally, while humans need to click on certain parts of the map to view it and obtain information, this is instantaneously available to the network at each time-step. This means the AI arguably has an advantage over human players in reducing the number of clicks needed to get information; however, this `click-to-view' is an artefact of the real-time game genre, and the key point for wargames is that the AI does not have access to information unavailable to human players.

\subsubsection{Training Regimen} The network was trained with a hand-designed reward function that not only rewarded the agent for winning the game (which would be a very sparse reward) but included additional signals such as whether a hero died or which resources were collected. This reward function was constructed by people familiar with Dota 2 and, according to the paper, was not altered much during the project. Instead of manually designing reward functions, these can also be learned directly from human preferences, which we explain in more detail in Section~\ref{sect:imitation}. The particular RL training algorithm was Proximal Policy Optimization (PPO) \cite{schulman2017proximal}, a state-of-the-art policy optimization method. The system is trained through \emph{self-play}, in which the agent plays 80\% of games against itself and 20\% against an older version of its policy.
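This opponent-sampling scheme is simple enough to sketch directly; in the following minimal illustration the policy objects and snapshot list are hypothetical placeholders, with only the 80/20 split taken from the paper:

\begin{verbatim}
import random

def sample_opponent(current_policy, past_snapshots, p_self=0.8):
    # 80% of games are played against the current policy and 20%
    # against a randomly chosen older snapshot, which guards against
    # the latest policy becoming exploitable by strategies it no
    # longer encounters.
    if not past_snapshots or random.random() < p_self:
        return current_policy
    return random.choice(past_snapshots)
\end{verbatim}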
The system implements a complicated training architecture with both GPU and CPU pipelines that handle the game rollouts, to reach the large number of samples needed for training: ``After ten months of training using 770$\pm$50 PetaFlops/s days of compute, it defeated the Dota 2 world champions in a best-of-three match and 99.4\% of human players during a multi-day online showcase.'' \cite{berner2019dota}. In conclusion, the OpenAI Five agent uses a very computationally expensive approach that required extensive fine-tuning of its reward function. Thus, in its current form, it is likely not the best approach for wargaming.

\subsection{AlphaStar} Even more so than Dota 2, StarCraft II is incredibly complex. Instead of five units, players in StarCraft control hundreds of units. Additionally, players need to balance short- and long-term goals. Therefore, AlphaStar \cite{vinyals2019grandmaster} uses a more complicated neural architecture than the OpenAI Five Dota agent, combining multiple recent deep learning advancements, such as a transformer \cite{vaswani2017attention}, an LSTM core \cite{hochreiter1997long}, a pointer network \cite{vinyals2015pointer}, and a centralised value baseline \cite{foerster2018counterfactual}. While an RL approach by itself was able to learn to play the game at a high level, it is important to note that Grandmaster level in this game was only reached when starting training with human demonstrations. Trained in a purely supervised way on these demonstrations, the system already reached gold-level play. Training the agent further through RL allowed AlphaStar to rank above 99.8\% of officially ranked human players, reaching Grandmaster status. In Section~\ref{sect:imitation} we go into more detail on how human demonstrations and related methods can bias RL systems towards human-like play.

The previously mentioned AlphaStar tournament-like training setup (also called the AlphaStar league) points to the fact that it is incredibly difficult to make sure agents do not overfit to a particular playstyle or opponent, which would allow them to be easily exploited. Interestingly, AlphaStar did not employ any search-based planning method. Instead, the LSTM-based network learned to do some form of planning by itself. However, training this very complex system comes at a cost: ``The AlphaStar league was run for 14 days, using 16 TPUs for each agent. During training, each agent experienced up to 200 years of real-time StarCraft play.''

\subsection{Frameworks for scaling up training of deep RL agents} \label{sec:deep_rl_frameworks} While AlphaStar and OpenAI Five relied on their own engineered training systems, a few frameworks have since been released that have the potential to reduce the engineering effort required for large-scale distributed AI computing. These systems are therefore likely of relevance for improving capabilities in the area of AI and wargaming. We identified one particular framework that could significantly reduce such engineering effort. Fiber\footnote{https://uber.github.io/fiber/introduction/} is a framework recently released by Uber to more easily scale machine learning approaches such as reinforcement learning and evolutionary optimization to many different machines in a compute cluster~\cite{wang2019poet}.
Fiber tries to address common issues with scaling RL up to a large number of computers, such as (1) the time it takes to adapt code that runs locally to work on a cluster of computers, (2) dynamic scaling to the amount of available resources, (3) error handling when running jobs on a cluster, and (4) the high cost of learning a new API for distributed computation. Fiber seems easy to learn, since it uses the same API as Python multiprocessing (which many RL approaches are already based on), and applications can be run the same way they are normally run, without extra deployment steps. The framework is also supposed to work well with existing machine learning frameworks and algorithms. Other existing frameworks include the PyTorch-based Horizon \cite{gauci2018horizon}, which allows end-to-end RL training but seems less flexible than Fiber for quickly trying out new ideas. Dopamine is another, TensorFlow-based, system \cite{castro2018dopamine} that is flexible to use but does not directly support distributed training.

\subsection{Automatically Learning Forward Models for Planning} \label{sec:learning_fm} Wargames would benefit from agents that can do long-term strategic planning. While these abilities can in principle emerge in an LSTM-based neural network that learns to play (such as AlphaStar), this approach is in general computationally expensive and requires very long training times. An alternative approach that has gained significant attention recently is to learn a model of the world through machine learning \cite{ha2018world} and then use that model for a planning algorithm \cite{schrittwieser2019mastering,hafner2019dream,lucas2019local,ha2018world}. This approach tries to combine the benefits of high-performing planning algorithms \cite{Browne_Powley_Whitehouse_Lucas_Cowling_Rohlfshagen_Tavener_Perez_Samothrakis_Colton_2012} with breakthroughs in model-free RL.

For example, the MuZero approach developed by DeepMind~\cite{schrittwieser2019mastering} first learns a model of the environment's dynamics and then uses Monte Carlo Tree Search at each timestep in the learned forward model to find the next action to execute in the actual environment. The benefit of this approach is that planning can happen in a smaller latent space, in which the model is not trained to predict the next pixels but instead the reward, the action-selection policy, and the value function. These functions are trained through backpropagation based on trajectories sampled from a replay buffer. Using a small latent space for planning is computationally faster and only incorporates the information the system learns is needed to make effective decisions; for example by ignoring wall textures or pixel-animations of sprites, both of which take up a lot of `information' in the raw pixel input and do not need to be included in a useful forward model. The approach showed new state-of-the-art performance in both the visually rich domain of Atari games and the board games Go, Chess, and Shogi.

However, the computational requirements of this approach are rather heavy. For the three board games, 16 TPUs were used for training and 1,000 TPUs for self-play. Agents were trained for 1 million training steps using 800 simulation steps per move. For the Atari games, 8 TPUs were used for training and 32 TPUs for self-play. In 2019, the cost of a cloud TPU v3 device was \$8 US per hour.
The time for training the Atari agents was 12 hours (for the final model, not taking into account models trained during experimentation) and is likely much higher for Go. We estimate that one training run on Atari costs around 40 TPUs $\times$ 12 hours $\times$ \$8 = \$3,840 US, and that a 36-hour training run for the board games would cost around 1,000 TPUs $\times$ 36 hours $\times$ \$8 = \$288,000 US. One thing to note is that algorithms are also becoming more efficient to train, increasing in efficiency more than Moore's law would predict\footnote{https://openai.com/blog/ai-and-efficiency/}. For example, it now takes 44 times less compute to train a high-performing network for state-of-the-art image recognition than it took in 2012 (Moore's law would have predicted only an 11x cost improvement). Similar trends can be observed in other domains. For example, AlphaZero required 8x less compute than AlphaGo Zero. The increasing commoditisation of GPU/TPU facilities in the cloud has also contributed to decreasing the overall cost of computation over time.

\subsection{Explainable AI}\label{sect:ExplainableAI} To best inform decisions on future force structures based on the results from the wargaming experiments, it would be ideal if the algorithm could explain its decisions. For example, did the AI attack the ship because it believed it had a particular weapon on board, and would its decision have been different if that had not been the case? While methods based on deep neural networks are often considered to be black boxes that are difficult to analyse, recent advances have allowed more insight into the decision-making process of neural networks. These techniques can be roughly divided into approaches that aim to (a) visualize features, and (b) elucidate the relationships between neurons.

Visualizing the features of hidden layers can give insights into what network inputs would cause a certain reaction in an internal neuron or in the network output. In contrast to optimizing the network's weights, as normally done through gradient descent to train the network, one can use the same process to optimize the input to the network that would maximally activate a certain neuron \cite{nguyen2016multifaceted}. These techniques can help to identify what features certain neurons learned to pay attention to (Figure~\ref{fig:activation_max}). Instead of looking at only single features, more advanced methods look at combinations of features to determine which are important for the network to reach its decision \cite{olah2020zoom}. For example, to identify a car, these methods determined that the network learns to identify windows, car bodies, and wheels; only if all three are present does the network predict a car.

\begin{figure} \centering \includegraphics[width=1.0\textwidth]{activation_maximization.png} \caption{Activation maximization \cite{nguyen2016multifaceted}. Each image shows the input that would maximally activate a specific neuron, revealing what patterns the neural network is looking for.} \label{fig:activation_max} \end{figure}

Similarly, these approaches can create saliency maps (Figure~\ref{fig:saliency}) that indicate how important each input signal is for the network's outputs. Related occlusion-based approaches cover parts of the image or blur them out, and the resulting saliency maps can likewise give insights into what types of input are important for the network.
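The activation-maximization technique just described is compact enough to sketch. The following is a minimal, illustrative version (the model, input shape, and hyper-parameters are assumptions rather than details from the cited work):

\begin{verbatim}
import torch

def activation_maximization(model, neuron, steps=200, lr=0.1):
    # Start from random noise and adjust the *input* by gradient
    # ascent so that it maximally activates one chosen output neuron,
    # keeping the network weights fixed.
    image = torch.randn(1, 3, 224, 224, requires_grad=True)
    optimizer = torch.optim.Adam([image], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        activation = model(image)[0, neuron]
        (-activation).backward()  # minimising the negative maximises
        optimizer.step()
    return image.detach()
\end{verbatim}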
While these approaches are typically applied to image-based inputs, they can also work for other input representations~\cite{Gupta_Puri_Verma_Kayastha_Deshmukh_Krishnamurthy_Singh_2020}.

\begin{figure}[h] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{saliency_map.png} \label{fig:sfig1} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{ChessSaliency.png} \label{fig:sfig2} \end{subfigure} \caption{Saliency maps~\cite{greydanus2017visualizing, Gupta_Puri_Verma_Kayastha_Deshmukh_Krishnamurthy_Singh_2020}. The brighter pixels in (a) show the areas of the neural network input that affect the decision (i.e.\ changing them changes the decision). In (b) the highlighted pieces are the ones that affect the neural network's decision to move the bishop as shown on the right.} \label{fig:saliency} \end{figure}

Additionally, giving counterfactual examples to the network (e.g.\ presenting weapon type B as input instead of weapon type A) can give insights into which aspects of a situation caused the network to perform a certain action \cite{myers2020revealing}.

\subsection{Continual adaptation and fast learning} \label{sec:continual_learning} While applications of Deep RL have shown impressive results, the resulting agents only perform well in situations they have been trained for in advance and are not able to acquire new knowledge at execution time. If situations or circumstances change even slightly, current Deep RL-trained AI systems will likely fail to perform well. These issues clearly limit the usefulness of such systems, which have to be taken offline and re-trained, relying heavily on human expertise and programming. In the context of wargaming, a new unit type might be introduced or the strength of existing units could change. One approach would be to simply retrain the system from scratch but, depending on the frequency of changes to the game and the computational training demands, this could quickly become infeasible. Because the code and settings for Dota 2 changed over time, OpenAI developed an approach they called `surgery' that allowed them to resume training. A similar approach could be deployed for wargaming. However, ideally, an approach should be able to continually incorporate new knowledge and adapt to new circumstances during deployment, without having to be taken offline for training.

While current deep learning systems are good at learning a particular task, they still struggle to learn new tasks quickly; meta-learning tries to address this challenge. The idea of meta-learning or \emph{learning to learn} (i.e.\ learning the learning algorithms themselves) has been around since the late 1980s and early 1990s but has recently gathered wider interest. One family of meta-learning approaches trains a special type of recurrent network (e.g.\ an LSTM) through gradient-based optimization to learn how to update another network~\cite{ravi2016optimization, zoph2016neural}. In one RL-based approach, an agent learns how to schedule the learning rates of another network to accelerate its convergence \cite{fu2016deep}. Typically, such networks are trained on several different tasks and then tested on their ability to learn new tasks. A recent trend in meta-learning is to find, through gradient-based optimization, good initial weights from which adaptation to a new task can be performed in a few iterations.
This approach was first introduced by \cite{finn2017model} and is called Model-Agnostic Meta-Learning (MAML). A similar approach is used in Natural Language Processing (NLP), with pre-trained networks used as a starting point (having been trained on all of Wikipedia, for example, to gain a solid model of English syntax) and then specialised to some domain, for example by fine-tuning on Hansard to parse political speeches, or on medical journals for an application in medicine.

\subsection{Overfitting in RL agents}\label{sect:overfit} A challenge with employing RL agents for wargames is that pure RL algorithms often overfit to the particular scenario they are trained in, resulting in policies that do not generalize well to related problems or even to different instances of the same problem. Even small game modifications can often lead to dramatically reduced performance, demonstrating that these networks learn reactions to particular situations rather than general strategies~\cite{kansky2017schema,zhang2018study}. For example, to train OpenAI Five, several aspects of the game had to be randomized during training (e.g.\ the chosen heroes and the items they purchased); this was necessary for robust strategies to emerge that could handle the different situations arising when playing against human opponents.

This type of overfitting can also be exploited for adversarial attacks. For example, Gleave et al.~\cite{gleave2019adversarial} showed that an agent acting in an adversarial way (i.e.\ a way unexpected to another agent) can break the policy of humanoid robots trained through RL to perform a variety of different tasks. In one interesting case, a robot goalkeeper simply falling to the ground and twitching randomly can completely break the policy of an otherwise well-performing goal-shooting robot.

One idea to counter overfitting is to train agents on a set of progressively harder tasks. For example, it has been shown that while evolving neural networks to drive a simulated car around a particular race track works well, the resulting network has learned only to drive that particular track; by gradually including more difficult levels in the fitness evaluation, however, a network can be evolved to drive many tracks well, even hard tracks that could not be learned from scratch~\cite{togelius2006evolving}. Essentially the same idea was later independently invented as curriculum learning~\cite{bengio2009curriculum}. Similar ideas have been formulated within a coevolutionary framework~\cite{brant2017minimal}. Several machine learning algorithms also gradually scale the difficulty of the problem. Automated curriculum learning includes intelligent sampling of training samples to optimize the learning progress \cite{graves2017automated}. Intelligent task selection through asymmetric self-play with two agents can be used for unsupervised pre-training \cite{sukhbaatar2017intrinsic}. The \textsc{powerplay} algorithm continually searches for new tasks and new problem solvers concurrently \cite{schmidhuber2013powerplay}, and in Teacher-Student Curriculum Learning \cite{matiisen2017teacher} the teacher tries to select sub-tasks for which the slope of the student's learning curve is highest. Reverse curriculum generation automatically generates a curriculum of start states, further and further away from the goal, that adapts to the agent's performance \cite{florensa2017reverse}.
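The core difficulty-scheduling loop shared by many of these curriculum methods can be sketched minimally as follows (the environment factory and the agent's training and evaluation methods are hypothetical helpers):

\begin{verbatim}
def curriculum_train(agent, make_env, difficulties,
                     promote_at=0.8, eval_episodes=100):
    # Train at one difficulty level until the agent's success rate
    # passes a threshold, then promote it to the next level.
    # make_env(d), agent.train_episode(env) and
    # agent.success_rate(env, n) are assumed problem-specific helpers.
    for d in difficulties:
        env = make_env(d)
        while agent.success_rate(env, eval_episodes) < promote_at:
            agent.train_episode(env)
    return agent
\end{verbatim}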
A promising recent method to combat overfitting in deep RL is to train agents on a large number of different procedurally generated environments \cite{risi2019increasing, justesen2018illuminating}. In the \emph{progressive procedural content generation} approach \cite{justesen2018illuminating}, an agent is presented with a completely new generated level each episode during reinforcement learning. If the agent is able to solve the level, the difficulty of the generated environments is increased. This approach allows agents to solve levels they would normally not be able to solve, and also produces agents that overfit less and generalize better to other levels.

\subsection{Hierarchical RL}\label{sect:HRL} A particularly relevant form of RL for wargaming is hierarchical RL (HRL). HRL aims to scale up RL by training not just one policy but a hierarchy of policies. In this hierarchy, the lowest level operates on primitive actions, while higher levels can make use of these simpler behaviours to perform macro-actions. This type of abstraction greatly reduces the search space of the problem, but finding the right level of abstraction remains an important challenge in HRL.

Two of the most popular forms of HRL are the `options framework' \cite{sutton1999between} and `feudal RL'. In the options framework, actions can take a variable amount of time. More specifically, options consist of an option policy, an initiation set, and a termination condition. A particular option is available in a certain state $s$ if $s$ is part of that option's initiation set. Once selected, the option policy is executed until it terminates stochastically according to the termination condition. Early work in this area dealt with pre-defined options, but more recent work focuses on learning the appropriate options.

\begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{feudal_network.png} \caption{FeUdal Network Architecture \cite{ahilan2019feudal}.} \label{fig:feudal} \end{figure}

In feudal RL, the levels of the hierarchy within an agent communicate via explicit goals, but not via instructions for exactly how these goals should be achieved. More recent work extends the idea of feudal RL to deep neural networks \cite{ahilan2019feudal}. In this work (Figure~\ref{fig:feudal}), a feudal system is composed of two different neural networks. A manager network sets goals at a lower temporal resolution for a worker operating at a higher temporal resolution. The worker performs primitive actions conditioned on the goal given by the manager network. This approach has shown promise in hard exploration problems such as the Atari game Montezuma's Revenge. While typical RL algorithms only reach very low scores in this game, the FeUdal approach allowed the manager to learn meaningful sub-goals for the worker, such as picking up the key in the environment or reaching the ladder. For wargaming, the feudal approach is likely more relevant than the options framework, since it already reflects the typical hierarchical troop structure in combat scenarios.

\subsection{Deep Multi-agent Learning} OpenAI Five and AlphaStar present two different forms of deep multi-agent learning. OpenAI Five can be described as a distributed homogeneous multi-agent system, in which each agent is controlled by a clone of the same network but there is no overall manager network. AlphaStar uses a different approach, in which one centralised manager network controls all the available units.
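To make the homogeneous, parameter-shared setup concrete, the following is a minimal sketch (the architecture and dimensions are purely illustrative; OpenAI Five itself uses a large LSTM rather than this simple feed-forward network):

\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class SharedUnitPolicy(nn.Module):
    # One network controls every unit; it receives the observation
    # plus a one-hot indicator of which unit it is acting for, so a
    # single set of weights can produce unit-specific behaviour.
    def __init__(self, obs_dim, n_units, n_actions, hidden=128):
        super().__init__()
        self.n_units = n_units
        self.net = nn.Sequential(
            nn.Linear(obs_dim + n_units, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs, unit_id):
        # obs: (batch, obs_dim); unit_id: (batch,) long tensor.
        one_hot = F.one_hot(unit_id, self.n_units).float()
        return self.net(torch.cat([obs, one_hot], dim=-1))  # logits
\end{verbatim}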
Other approaches, such as distributed heterogeneous multi-agent systems, train a different neural network for each agent, based on approaches such as independent Q-learning (cite). However, it is difficult to scale these approaches to a large number of agents. Multi-agent learning and HRL methods have also been combined \cite{kumar2017federated}. An important factor to consider for wargaming is scalability. Ideally we do not want to retrain the whole system from scratch when a new unit type is introduced, which limits the use of centralised controllers. Heterogeneous multi-agent RL approaches can be further divided according to whether the environment is partially observable from the point of view of each agent, which might not have access to the whole state space. In the context of wargaming agents, we can assume that all agents have access to all information all the time.

\vfill \subsection{Types of wargames}\label{sect:wargameTypes} Wargames are used by defence and military organizations for several distinct purposes. \begin{enumerate} \item Planned Force Testing. These are large-scale wargames involving tens to hundreds of participants in multiple different and collaborative commands. They are concerned with end-to-end testing of holistic military capabilities across logistics, diplomacy, theatre access and scenario aftermath, as well as combat. Computer tools may be used for some of the detailed moderation, but these are primarily run as analogue exercises. \item Plan Variation Testing. This takes a single scenario plan, for example part of an outcome from Planned Force Testing, to consider in more detail. These are often run multiple times to analyse interesting branch-points that need human decision-making focus. \item Concept/Force Development. These wargames take a proposed concept or change to capability, such as a new rifle or autonomous land vehicle, and explore the associated strategy/tactics space to find viable concept combinations and potential doctrine suitable for the proposal. They will be at the minimum scale needed to test the proposal. \item Procurement. These wargames are the most constrained type. They are used in the context of a decision to purchase one type of equipment from a small set of options. The scenarios are tightly controlled and repeated multiple times to quantify as far as possible the differences between the options. Ideally these outcome differences would be analysed with new equipment but unchanged military doctrine; with changed doctrine but old equipment; and with both doctrine and equipment changed. \end{enumerate}

This study focuses on the potential use of AI techniques in computer-moderated wargames. These tend to be complex games with large state spaces and large action spaces, to name just two dimensions of complexity. Section~\ref{sect:GameFeatures} provides a comprehensive study of all the main aspects of a game that can make it difficult for an AI or human player to play it well. This focus on complex games is in contrast to abstract game-theory-type games, where the actions and rewards for each player form a payoff matrix and the games have no intrinsic state.

\vfill \subsection{Evolutionary Algorithms} Evolutionary Algorithms (EAs) of various types are derivative-free \textbf{search} algorithms that start with a population of possible (often random) solutions and make changes (mutation, cross-over) to the best of these solutions to come up with a new population of solutions, repeating this process until a sufficiently good solution is found.
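The basic evolutionary loop is easily sketched (here \texttt{fitness}, \texttt{mutate}, and \texttt{random\_genome} are problem-specific, hypothetical helpers, and cross-over is omitted for brevity):

\begin{verbatim}
import random

def evolve(fitness, mutate, random_genome,
           pop_size=50, n_elite=10, generations=200):
    # Minimal (mu + lambda)-style evolutionary loop: keep the best
    # solutions and refill the population with mutated copies of them.
    population = [random_genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)  # best first
        elite = population[:n_elite]
        offspring = [mutate(random.choice(elite))
                     for _ in range(pop_size - n_elite)]
        population = elite + offspring
    return max(population, key=fitness)
\end{verbatim}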
A major challenge is often finding a suitable representation (genome) of the solution that is amenable to the mutation and cross-over operators. EAs have some clear theoretical disadvantages over RL approaches: they are less sample-efficient and cannot use the information from a gradient. In derivative-based RL (e.g.\ back-propagation on neural networks), the policy or function being learned is updated with each individual action by gradient descent in the direction of (current) greatest benefit. In an EA the policy or function is only updated, at best, once after each full game, and in a random direction~\cite{Lucas_2008}. However, EAs can be much better at exploring large search spaces precisely because of their ability to take large random jumps, and if the fitness landscape is highly irregular then gradient-descent methods can get stuck in local optima (although there are various techniques, such as momentum, to reduce the risk of this happening). More advanced methods such as CMA-ES or NES also incorporate a form of pseudo-derivative to make bigger jumps in more promising directions~\cite{Hansen_Ostermeier_2001, Wierstra_Schaul_Glasmachers_Sun_Peters_Schmidhuber_2014}. CMA-ES in particular is supported by a range of software libraries across different platforms, making it accessible. Evolution strategies used to evolve deep neural networks have been found to be quite competitive with RL in Atari games \cite{Salimans_Ho_Chen_Sidor_Sutskever_2017}. Having a population of different solutions also enables niching techniques that reward a diverse set of solutions, and not just the best possible game score~\cite{preuss2015multimodal, Agapitos_Togelius_Lucas_Schmidhuber_Konstantinidis_2008, Rosin_Belew_1997}. This is a useful technique to explore the space of viable strategies, and not just generate a single `best' one.

\subsection{Random Forests} Random Forests are a tool used in supervised learning, especially with structured data~\cite{breiman2001random}. In Kaggle competitions they have been the most common tool used by winning competition entries, along with the related XGBoost~\cite{chen2016xgboost}, with Deep Neural Networks a close second and preferred when image data is involved~\cite{Usmani_2018}. They consist of multiple Decision Trees, with each tree classifying the data by branching at each node on one or more of the data attributes (each node forms a binary \textit{if}\ldots \textit{then}\ldots \textit{else} condition). Aggregating the decisions of all the trees in the forest (as an ensemble method) gives much better performance than a single individual tree. Due to the innate discontinuity at each branch in a tree, decision trees can be effective at modelling policies or functions that have similarly sharp discontinuities in them. Trees/Forests are not used as much in recent Game AI research, but they can be a useful tool at times. For example, decision trees have been used to provide an easily interpretable policy to teach players~\cite{Silva_Togelius_Lantz_Nealen_2018}, and also as a means of learning a forward model~\cite{Dockhorn_Lucas_Volz_Bravi_Gaina_Perez-Liebana_2019}.

\subsection{Statistical Forward Planning}\label{sect:SFP} Statistical Forward Planning (SFP) algorithms are descendants, in some respects, of the classical min-max search used in Chess and Checkers research in the 1980s. The combinatorial explosion of large action spaces and branching factors prevents exhaustive search to a certain depth in the game-tree, and SFP methods do not attempt this.
Instead they search stochastically, using statistics from previous iterations to decide where to search next. At each iteration a forward model simulates the effect of possible actions on the current game state in a `what if' scenario to obtain a value for the proposed plan. The crucial requirement is a forward model for these simulated rollouts. This is a notable difference to model-free RL methods, although model-based RL can use (or learn) a forward model to generate training data. Even so, in \textit{vanilla} RL the learned policy that makes decisions after training is complete does not use a forward model. \textit{Vanilla} SFP methods do not do any pre-training, but use the forward model to make a decision when an action is required. They need a computational budget to use for this and, everything else being equal, are slower in actual play. (See Section~\ref{sect:Hybrids} for notes on exceptions to the \textit{vanilla} case.)

Common SFP methods used in Game AI of possible relevance to wargames include: \begin{itemize} \item \textbf{MCTS}. Monte Carlo Tree Search expands the game tree by one node per iteration, and stores statistics at each node on the number of times each action has been taken from that node and the average value that resulted. These are then used to balance exploitation of the best action with exploration of actions that have been tried relatively few times, classically via the UCB1 rule, which selects the action maximising $\bar{X}_a + C\sqrt{\ln N / n_a}$, where $\bar{X}_a$ is the average value of action $a$, $n_a$ is the number of times $a$ has been tried, $N$ is the total number of visits to the node, and $C$ is an exploration constant. See~\cite{Browne_Powley_Whitehouse_Lucas_Cowling_Rohlfshagen_Tavener_Perez_Samothrakis_Colton_2012} for a survey. \item \textbf{RHEA/OEP}. Rolling Horizon Evolutionary Algorithms and Online Evolutionary Planning use EAs to generate forward plans at each iteration, based on previously successful ones~\cite{Perez_Samothrakis_Lucas_Rohlfshagen_2013, Justesen_Mahlmann_Togelius_2016}. These methods divide their budget more equally across all depths in the game-tree, which can be an advantage over the more short-term focus of MCTS in some domains. \item \textbf{CMAB}. Combinatorial Multi-Armed Bandits are very similar to MCTS, and have been successful in RTS games, including MicroRTS and StarCraft~\cite{Ontanon_2017}. The crucial difference is that a global action is factored into contributions from each unit under the player's control, and statistics are maintained for each unit as well as at the global level. As with any SFP method, the forward model is called over a number of iterations. On each of these an action is selected for each individual unit based on past statistics, to balance exploration of new actions with exploitation of previously tried good actions, exactly as in MCTS. These are then combined to give a single `global' action. At each future iteration during planning a CMAB algorithm either selects the previous best `global' action, or samples a new individual action for each unit to create a new `global' action. This is a good approach with a large number of units, each of which can take its own action (as in wargames). \end{itemize}

\subsection{Others} There are a number of older techniques used in commercial Game AI, such as hand-written Finite State Machines, Parameterised Scripts and Behaviour Trees~\cite{yannakakis2018artificial}. These are not learned (although RL and EA methods, amongst others, can be used to learn the best values in parameterised scripts, for example), but have advantages in ease of debugging and interpretability.
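As a minimal, self-contained illustration of a hand-written FSM of this kind (the states, predicates, and action names are all invented for the sketch):

\begin{verbatim}
def make_fsm_policy():
    # A three-mode finite state machine for a single unit. The mode
    # persists between calls and changes when a transition fires.
    state = {"mode": "patrol"}

    def policy(enemy_visible, health_fraction):
        if state["mode"] == "patrol" and enemy_visible:
            state["mode"] = "engage"
        elif state["mode"] == "engage" and health_fraction < 0.3:
            state["mode"] = "retreat"
        # Map the current mode to a concrete action.
        return {"patrol": "move_to_waypoint",
                "engage": "attack_nearest",
                "retreat": "move_to_base"}[state["mode"]]

    return policy

policy = make_fsm_policy()
policy(enemy_visible=False, health_fraction=1.0)  # "move_to_waypoint"
policy(enemy_visible=True, health_fraction=1.0)   # "attack_nearest"
\end{verbatim}

Such hand-written controllers are easy to inspect and debug, which is exactly the advantage noted above.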
The example of the OpenAI work on Dota 2, which used hand-crafted static heuristics (or scripts) in several decision areas, shows that in an application domain these can be a useful part of a full system, even if they are not emphasised in the resulting academic literature.

\subsection{Hybrids}\label{sect:Hybrids} None of these AI approaches is the answer to all games, and some will work better on some problems than others. It is better to think of them as tools that provide options to be combined with one another as needed; there is no single silver bullet that will solve each and every problem. Applying an AI technique to a game (or any application) first involves understanding the features and structure of the game, and then tuning the appropriate tools to this structure. Some illustrative examples of this hybridisation are listed below, and many more are covered in Section~\ref{sect:algoSummary}.

\begin{figure} \includegraphics[width=1.0\textwidth]{EIDiag.png} \caption{Expert Iteration. Iterations of a Statistical Forward Planning algorithm generate data that is used to learn a policy, $\Pi(s)$, or an evaluation function $Q(s, a)$. This is then used in the next iteration of SFP to ratchet up performance.} \label{fig:ExpertIteration} \end{figure}

\begin{itemize} \item \textbf{Expert Iteration}. One particularly powerful recent hybridisation, used by AlphaGo amongst others, is between Deep RL and SFP to give `Expert Iteration'~\cite{Anthony_Tian_Barber_2017, AlphaZero_2017, Tong_Liu_Li_2019}. This iterates between i) playing games using SFP to generate training data, and ii) using supervised learning on that data to learn policies or evaluation functions for use in the next round of SFP games (see Figure~\ref{fig:ExpertIteration}). In this way the quality of play ratchets up, and the overall cycle can be cast as Reinforcement Learning, with the SFP algorithm acting as both a policy improvement and a policy evaluation operator. This is discussed further in the context of wargames in Section~\ref{sect:RelativeRS}. \item \cite{Marino_Moraes_Toledo_Lelis_2019} use Evolutionary Algorithms to learn an Action Abstraction that provides the Actions used in SFP (specifically, CMAB is used). This provides slightly better results than previous work by the same authors that used an expert-specified Action Abstraction directly in CMAB. This is a good example of how solutions can often be built up over time, learning one aspect of a game while using rough heuristics for others. Trying to learn everything from scratch at once can make the learning task far more difficult; scaffolding that helps reach a good solution can always be discarded once it has served its purpose. \item \cite{Stanescu_Barriga_Hess_Buro_2016} use a Convolutional Neural Network to learn the value of a state from the map in an RTS, and then use the CNN as an evaluation function in MCTS to decide which of a small number of heuristic scripts to use at that point in the game. The CNN is pre-trained on a variety of different maps so that it will generalise reasonably well to unseen scenario maps. \item \cite{Horn_Volz_Perez-Liebana_Preuss_2016} investigate several hybrids between MCTS and RHEA in GVGAI play. The best overall algorithm uses RHEA for the first $X$ moves and then MCTS from that point. The precise hybrid that works best varies by game; for example, they find that MCTS is better at dealing with adversarial opponents in a game, but can have problems where path-finding is most important.
\item \cite{Lucas_Reynolds_2005} use Evolutionary Algorithms to learn Finite State Machines, and find that this is better at dealing with noisy data. \item A good example of constructing a system from disparate building blocks is~\cite{ha2018world}. They combine a Variational Autoencoder neural network to process a visual image, a Recurrent Neural Network as a memory component, and an evolutionary algorithm (CMA-ES) to evolve a neural network controller (policy). The first two components are trained on game-play data first, and the controller is then evolved using the resulting learned model. \end{itemize}

Algorithms are often constructed of commonly recurring components, and it can be helpful to consider how these interact in hybrids. \begin{itemize} \item Policy. The most obvious component is a policy that decides what action to take in any given situation. It is usually abbreviated to $\pi(s, a)$, which defines the probability of playing each action $a$ given the current state $s$. Classically this is learned directly in policy-based reinforcement learning. In SFP and search-based approaches, distinct policies can be used to decide which action to take in simulation using the forward model; these rollout policies tend to balance taking the best-known action with exploring new actions to find a better one. \item State Evaluation Function. This provides a value for a state, or for a specific action in a state. In a game this is usually the percentage chance of winning, or the estimated total score if that action is taken. It is usually abbreviated to $V(s)$ for a state evaluation function, or $Q(s, a)$ for an action-state evaluation function. Classically these are learned by reinforcement learning methods, such as Deep Q-learning. (Note that a learned $Q(s, a)$ can implicitly define a policy, for example by taking the action with the highest $Q$-value; with a forward model, a $V(s)$ function can be used in the same way.) \item Embedding. A common technique with high-dimensional inputs is to `embed' these into a lower-dimensional latent space. This latent space is then used to make the actual decisions. This is effectively what many deep neural networks do, for example converting a $10^8$-dimensional visual image into a 256-dimensional embedding. This embedding can then be used with a simple one-layer neural network, or even just a linear function, to generate a $V$, $Q$ or $\pi$ function. \end{itemize}

\vfill \subsection{OpenAI Gym} \url{https://gym.openai.com/} This is a widely used framework within reinforcement learning and includes a wide variety of environments, such as the Arcade Learning Environment (ALE) and MuJoCo tasks. However, there are significant limitations: support for planning algorithms is patchy (e.g.\ the \texttt{copy()} method, essential to statistical forward planning algorithms, works on the ALE tasks but not on others). The main limitation, which renders it unsuitable for wargaming, is the lack of support for multi-player games.

\subsection{OpenSpiel} \url{https://github.com/deepmind/open_spiel} \begin{itemize} \item Core implementation language: C++, with wrappers for Python (and possibly other languages) \end{itemize} OpenSpiel is led by Marc Lanctot from DeepMind. The main focus of OpenSpiel is on classic board and card games, though it can potentially be used by any game that meets the standard interface and has an SFP-ready game state (a forward model that can be easily copied and run fast). The standard interface assumes that actions in a game can be enumerated.
At each agent decision point, the set of available actions is passed to the agent and the agent then selects one of them. This is a standard interface that works well for classic games with small numbers (e.g.\ up to hundreds) of possible moves in each game state, but not for all games. For example, in Flashpoint Campaigns, move actions can contain up to three waypoints en route to the destination, each of which can be on any hex in the map. Hence, if the map has 100 hexes, then there would be $10^8$ possible movement paths for each unit, which would be infeasible to enumerate.

OpenSpiel has an elegant way of representing chance events, which are a central feature of games such as Backgammon and Poker. Chance is represented as another player in the game, and each chance event is modelled as a move made by the chance player. Each state in the game has a one-to-one mapping to the move history, which is made possible by the inclusion of the chance moves in the history. One of the main attractions of OpenSpiel is the significant number of implementations of recent Game AI algorithms, such as Monte Carlo Tree Search and a range of DRL agents such as DQN and A3C. OpenSpiel also comes with a wide range of classic games, which are parameterised to enable different versions to be run.

\subsection{Polygames} \url{https://github.com/facebookincubator/Polygames} \begin{itemize} \item Core implementation language: C++, with wrappers for Python (and possibly other languages) \end{itemize} Polygames is similar in scope to OpenSpiel, with the main focus on 2-player classic board games such as Go and Othello. Compared to OpenSpiel, it has fewer games and fewer AI agents. However, Polygames does provide a standard way to interface convolutional neural networks to board game environments. As with OpenSpiel, games in Polygames can be run with different parameters (such as board size). Although the challenge of learning from pixels (and more generally from visual input such as board-tiles) is less relevant to wargame AI, it may still be helpful to allow agents to learn from a spatial arrangement of map-tiles.

\subsection{ELF: Extensive, Lightweight and Flexible platform for game research} \url{https://github.com/facebookresearch/ELF} \begin{itemize} \item Core implementation language: C++, with wrappers for Python (and possibly other languages) \end{itemize} ELF comes with three sample environments: an ALE wrapper, Go, and, most relevant to wargames, a mini RTS. The mini-RTS game is similar in nature to microRTS. ELF offers a reasonably fast and well-designed implementation in C++, but does not offer any particular advantages for wargames.

\subsection{General Video Game AI (GVGAI)} \url{http://gvgai.net} GVGAI \cite{GVGAI-Multi} is a multi-faceted game AI framework that enables experimentation with many aspects of video games, each defined by its own competition track: \begin{itemize} \item The Single-player planning track. Agents use the forward model to play a wide range of games and puzzles. Leading agents include a range of tree search methods as well as rolling-horizon evolution. This track has had the most interest. \item Single-player learning track. This is most similar to the ALE framework: agents are given a short learning period on some training levels before attempting to play unseen test levels. The short learning time and limited advance information made this (in retrospect) too challenging and led to a limited number of entries.
Agents can learn from pixels or an object-based game state observation. \item Two-player planning track: agents have access to the forward model, but have to cope with an unknown agent on an unknown game. \item Level-generation track: the aim is to generate interesting (or at least playable) novel levels for unseen games; very challenging. \item Rule-generation track: generating new game rules for novel games; again very challenging. \end{itemize}

\begin{itemize} \item Original implementation in Python (Schaul); more recent implementation in Java (much faster, with more competition/evaluation tracks: 2-player, learning, and level generation). The Java version also has wrappers for OpenAI Gym. \end{itemize}

The learning track of GVGAI is similar in nature to ALE. ALE has the advantage of using commercial games (albeit from a bygone era) that are easy to relate to, though due to its finite set of games, the limited number of levels for each game, and its deterministic nature, it has been susceptible to significant over-fitting \cite{OpenAIOverfitting}. Although less widely used than ALE, GVGAI does have some distinct advantages that are relevant to wargame AI: \begin{itemize} \item An extensible set of games, game levels and game variations which are relatively easy to author in VGDL \item Support for 2-player games \item Stochastic games \end{itemize}

A particular feature of the GVGAI framework is the \textit{Video Game Description Language} (VGDL). VGDL is a Domain Specific Language (DSL) designed to make it easy to express the rules of typical arcade-oriented 2D video games. Although VGDL was designed with this in mind, the overall approach of having a high-level DSL is very relevant for wargame AI, and one of our recommendations is to consider developing a language to describe wargame scenarios, including maps, assets, objectives and observers (where an observer specifies how and when to convert game states into observations that can be fed to the agents); a minimal illustration of this idea is sketched at the end of this section.\footnote{With languages such as Kotlin, implementing DSLs is straightforward, and enables edit-time type-checking which greatly assists in the content authoring process: \url{https://kotlinlang.org/docs/reference/type-safe-builders.html}}

\subsection{Codingame} \url{https://www.codingame.com/start} Codingame is a bit of an outlier in this section, as it does not meet the criterion of having a standard interface between the agents and the games. Instead, each game/challenge can adopt the interfaces that best suit it. Games for the platform can be coded in any programming language, and communication between the game engine and the agents is done by exchanging text messages via console IO. However, making a game SFP-ready may mean porting it to the language of choice and ensuring it has a suitable \texttt{copy()} method. Codingame is a very popular platform for software developers wishing to practise and achieve recognition for their coding skills while solving game AI challenges. Codingame also offers a platform for recruiters wishing to hire software developers. Game AI competitions run on the platform often have thousands of entrants. In discussion with one of the competition organisers, it became clear that many entries simply port or apply standard algorithms, but the leading entries often show significant ingenuity and achieve high standards of game play. The platform is mentioned here as a possible forum to run wargame AI challenges, which would likely generate more entries than a typical academic competition.
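As the minimal illustration of the scenario-description idea promised above, the following sketch embeds a tiny scenario model in a general-purpose language (every class and field here is hypothetical; real requirements, such as doctrinal restrictions and observers, would be far richer):

\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Unit:
    name: str
    side: str
    kind: str
    start_hex: Tuple[int, int]

@dataclass
class Objective:
    side: str
    target_hex: Tuple[int, int]
    hold_for_turns: int

@dataclass
class Scenario:
    name: str
    map_file: str
    units: List[Unit] = field(default_factory=list)
    objectives: List[Objective] = field(default_factory=list)

scenario = Scenario(
    name="RiverCrossing",
    map_file="river_valley.map",
    units=[Unit("Blue-1", "blue", "mech_infantry", (3, 4)),
           Unit("Red-1", "red", "armour", (17, 9))],
    objectives=[Objective("blue", (10, 6), hold_for_turns=5)])
\end{verbatim}

A dedicated external DSL (as VGDL is for GVGAI) or a type-safe builder in a language such as Kotlin would serve the same purpose with better authoring support.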
\subsection{Flexibility} For future research on wargame AI there is a strong need for flexibility. This includes the ability to change rules, edit scenarios, and back-track to a point in a game to consider a new branch. There are also aspects needed to support wargames that are not often needed for commercial games. These include the ability to moderate manually, set doctrinal restrictions, analyse resulting data for narrative construction, and identify and replay critical branch-points.

\subsection{Summary of Platforms} Of the platforms mentioned above, the ones that are of particular interest for wargame AI are OpenSpiel and GVGAI. OpenSpiel is well designed, extensible, and built for SFP algorithms. It comes with an extensive set of classic games and a range of recent, well-performing game AI agents. However, it does not support structured actions, since each action is coded as an integer from a (usually small) finite set. This might sound like a minor software engineering point, but it is actually a manifestation of a deeper issue, as it affects the operation of most (perhaps all) of the agents developed for a platform. For example, many algorithms work by sampling actions from a relatively small set of possible actions; if this set is effectively infinite they will fail unless adapted to cope with this. GVGAI is of interest due to its use of VGDL, a relevant practical example of how a DSL can enable relatively rapid authoring of new games. Like OpenSpiel, though, it codes actions as integers, so for this and other reasons it does not support wargames. There is much to be gained by developing a new platform built from the ground up for wargames and wargame AI. This should offer support for scenario design and analysis, and enable easy interaction with a range of AI agents. It is discussed in more detail in Section~\ref{sect:Recommendations}.

\vfill \subsection{Recommendation 1: Develop a new Wargame AI Framework}\label{rec1} The review of existing platforms in Section~\ref{sect:Platforms} indicates that the existing game AI platforms do not provide adequate support for wargame AI, for a number of reasons. This is not surprising, as none of them were designed for that purpose. However, the existing platforms do clearly show the potential for a well-positioned system to supercharge research in a particular domain. A prime example of this is how the ALE framework (often used via OpenAI Gym) has captured the imagination of Deep RL researchers especially, leading to the publication of hundreds of papers. As a result, the domain of Atari 2600 games is now well understood.

\begin{figure}[h] \includegraphics[width=1.0\textwidth]{Interface.png} \caption{Generic proposed architecture with a standard interface to cleanly separate AI Algorithm implementations from wargames. Wargames could be based on a new platform, or on wrapped legacy/commercial environments. Each wargame would need to implement a minimum Core part of the interface, with support for Optional elements allowing the use of an increasing number of algorithms and analyses.} \label{fig:interface} \end{figure}

Previous attempts have shown that using the latest AI techniques on existing wargame environments can be problematic due to their relatively slow execution time, lack of a clean interface for AI interaction, and lack of a forward model that can be copied or branched easily.
The central recommendation of this report is that a framework be developed to support both the development of AI for wargames and of wargames that are amenable to AI. Figure~\ref{fig:interface} shows this high-level architecture, consisting of three conceptual components: \begin{enumerate} \item \textbf{Interface}. This is the core component, and is primarily a design specification rather than software. Like the OpenAI Gym or OpenSpiel interfaces, this defines the API methods that a wargame needs to support for AI to be used. This specification can have core methods that must be implemented, and others that are optional. We give some examples of this shortly. \item \textbf{Wargames}. Wargame implementations can be new ones, or potentially existing ones that are updated to support the standardised interface. \item \textbf{AI Algorithms}. AI development is cleanly separated from the underlying wargames, permitting algorithms to be ported across different environments and new techniques to be tried. This does not mean that AI development is agnostic to the target wargame, for, as this report has catalogued, successful AI techniques are tailored and optimised to the structure of the domain, and each domain will have distinctive patterns in its action and observation spaces. \end{enumerate}

It is possible that existing wargames, either commercial or legacy in-house systems, could be updated to provide an API that supports this new interface, or that a software wrapper could be developed to achieve the same end. However, this may require prohibitive work to support all parts of the interface specification, in particular a forward model for SFP algorithms. One particular recommendation is to develop a software platform for new wargames. This platform can design in, from the start, key elements useful for the AI interface, such as a fast forward model and hierarchically decomposable observation and action spaces. A single platform is also more cost-effective than developing separate wargames in each domain, with some elements, such as units, command structures or enemy visibility, common across all types of wargames. Using third-party suppliers to wrap/adapt an existing commercial game has some major advantages in terms of leveraging existing functionality and a pool of relevant software engineering expertise. The disadvantages are a potential loss of control over the process and content upgrades. The balance between these will obviously depend on the extent to which an existing wargame meets current requirements, and on an assessment of the relative depth of skill-sets and experience.

\begin{comment} \begin{table}[] \begin{tabular}[t]{|p{2cm}|p{3.2cm}|p{3.2cm}|p{3.2cm}|} \hline \textbf{Component} & \textbf{Dstl in-house} & \textbf{Third-party or Commercial Partner} & \textbf{Existing Legacy} \\ \hline Interface & Essential for Dstl to own the definition of this interface & N/A & N/A \\ \hline AI Algorithms & + Control & + Leverage expertise & N/A\\ & - Lack of Expertise & - Control & \\ \hline Individual & + Control & + Leverage expertise & + Re-use code\\ Wargames& - Lack of Expertise & - Control & - Cost, Constraints \\ \hline Wargame & + Control & + Leverage expertise & N/A\\ Platform & - Lack of Expertise & - Control & \\ \hline \end{tabular} \caption{Components of a future wargame framework from Figure~\ref{fig:interface}, with high-level pros and cons of development options.
Yes/No indicates } \label{table:components} \end{table} \end{comment}

Future defence wargaming environments should be developed to this platform/framework to enable AI techniques to be used uniformly across games. While the design framework and interface should be generic, it is advisable to start with an initial domain-specific implementation that meets a current requirement as a proof of concept. We consider this further in Recommendation 2.

\subsubsection{Interface/Wargame sub-components} Figure~\ref{fig:interface} suggests that a suitable interface will have core and optional parts, with not all wargames needing to support all of the latter. The list below provides a first sketch of interface sub-components, and of how these will benefit or impact supporting wargames. This is not an exhaustive list, and the work to detail the requirements beyond this high level is beyond the scope of this report.

\begin{itemize} \item \textbf{Core}. A clean design split between the core engine, graphical display, and AI. Support for both Deep RL and SFP algorithms requires many game executions, either from start to finish or for game-tree planning. Running in `headless' mode, with no graphical display or expensive database access, is essential to achieve the required speed of execution when there are no humans-in-the-loop. The OpenAI and OpenSpiel frameworks provide good model examples, and there is significant overlap, but adjustments are needed for wargame-specific features such as multiple units. Some core concepts to consider are: \begin{itemize} \item \texttt{State.step()} to move the time forward one step. In a wargame environment a continuous time argument may make more sense. \item \texttt{State.apply(actions)} to specify actions to take. Unlike the OpenAI/OpenSpiel interfaces, this should be extended to cover a list of actions for different units. \item \texttt{State.registerPolicy(unit, policy)} as an option to provide a policy to be used for one or more agents. An injected policy is then used when the model is rolled forward with \texttt{step()}. \item Each \texttt{State} should have, to use the OpenAI terms, an \texttt{action\_space} and an \texttt{observation\_space} defined. The \texttt{observation\_space} should be a structured semantic representation and not pixel-based. For a wargame, support for unit-specific spaces with a hierarchical structure will be needed. It is worth stressing that each wargame will otherwise have different domain-relevant conventions for these spaces, as do the OpenAI Gym environments or OpenSpiel games. \end{itemize} \item \textbf{Copyable Forward Model}. A \texttt{State.copy()} method to copy the current state to implement a forward model. This is required to support SFP and other planning algorithms, such as those used in AlphaGo. It is listed as `optional' here, as pure-RL methods do not require it as long as the core game engine is fast (as outlined in Sections~\ref{sect:DeepRL} and~\ref{sect:OtherAlgos}). For existing systems in particular this can be a software engineering challenge to implement, with a need for robust deep copies of the game state. \item \textbf{Observation Filters}. \texttt{State.observationFilter(unit)}-like methods that report only the state visible to a unit, or group of units. This can be enhanced to support different levels of filter, to emulate different levels of imperfect information for unit decision-making. \item \textbf{Belief Injection}.
\texttt{State.observe(unit, data)} or \texttt{State.assume(data)} methods that provide API access to the forward model, allowing AI algorithms to use the model/engine to evaluate hypothetical plans that incorporate estimates of what cannot currently be observed. \item \textbf{OpenSpiel support}. OpenSpiel provides a number of robust, state-of-the-art Game AI algorithms. An additional interface sub-component could provide conversion support to the OpenSpiel interface to allow these to be used in unit-based wargames. \item \textbf{Visuals.} \texttt{State.render(unit)} to allow the current state to be visually displayed, perhaps from the perspective of a specific unit or element of the command structure. \end{itemize}
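To make this concrete, the fragment below sketches in Python (the language of the OpenAI Gym and OpenSpiel reference implementations) how these sub-components might fit together. All class and method names are illustrative assumptions in the spirit of the list above, not a settled specification.
\begin{verbatim}
import copy

class WargameState:
    """Illustrative sketch of the proposed interface; not a real engine."""

    def __init__(self, units):
        self.time = 0.0
        self.units = list(units)
        self.positions = {u: (0, 0) for u in self.units}
        self.policies = {}      # optional per-unit injected policies
        self.pending = {}

    # --- Core ---
    def action_space(self, unit):
        """Unit-specific action space; hierarchical in a full design."""
        return ["hold", "north", "south", "east", "west"]

    def observation_space(self, unit):
        """Structured, semantic observation rather than pixels."""
        return {"self": self.positions[unit], "time": self.time}

    def apply(self, actions):
        """Assign one action per unit for the next step."""
        self.pending = dict(actions)

    def step(self, dt=1.0):
        """Advance time; units without explicit orders fall back to any
        policy registered via register_policy()."""
        for u in self.units:
            act = self.pending.get(u)
            if act is None and u in self.policies:
                act = self.policies[u](self.observation_space(u))
            # ... resolve movement, spotting and combat here ...
        self.time += dt
        self.pending = {}

    def register_policy(self, unit, policy):
        self.policies[unit] = policy

    # --- Optional: copyable forward model for SFP/planning ---
    def copy(self):
        return copy.deepcopy(self)

    # --- Optional: observation filter for fog of war ---
    def observation_filter(self, unit, level=0):
        # higher levels would strip progressively more information
        return self.observation_space(unit)
\end{verbatim}
A planner would call \texttt{copy()} and \texttt{step()} repeatedly to roll out candidate plans, while a pure-RL agent only needs \texttt{apply()} and \texttt{step()} against a fast headless engine.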
If the route to a full wargame software platform is taken, then some key aspects this should include on top of the list above are: \begin{itemize} \item Easy generation of features. A feature is a game-specific piece of information likely to be helpful to a decision-making agent. Instead of (or as well as) making the full raw game state available to agents, a mechanism to specify these easily for a particular wargame or scenario will speed development and learning. \item An easy way for users to author new scenarios. This, like the next two points, would be well supported by a Domain-Specific Language (DSL) tailored to wargame requirements. A model here is VGDL, used by GVGAI and discussed in Section~\ref{sect:Platforms}. \item Extensible: easy to add new unit types. \item Configurable objectives (including multi-objectives). \item Fully instrumented with configurable logging (this also applies to AI Algorithms, so that the data behind each decision is recorded for later analysis and insight mining). \item Record/Replay facility so that a previously played game can be watched later. \item Offer planning at multiple levels of abstraction. \end{itemize} \subsubsection{Next Steps} Specifically recommended next steps are: \begin{enumerate} \item Detailed work to define the interface component for wargame AI sketched above. This can, and probably should, be done in parallel with one or both of the following points in an iterative approach. \item Identify useful existing wargames that can be adapted without excessive development effort, either in-house or from third-parties. \item Scope possible work on building a more generic wargame platform. \end{enumerate} \subsection{Recommendation 2: Tractable Challenges}\label{sect:Challenges} This section reviews some of the challenges of incorporating current state-of-the-art AI techniques into wargames, using the framework typology of Section~\ref{sect:wargameTypes}. In all cases there is an assumption that the relevant sections of the wargame have been engineered in line with the recommendation in Section~\ref{rec1}. An underlying conclusion is that using AI to decide on communication with other agents or humans sits at the most challenging end. A general rule in all cases is that the smaller the scenario, and the less a strategic master-plan is required, the more suitable AI will be. For example a sub-hunt scenario that involves 2-3 destroyers is tractable, as is a company-level scenario to secure and hold an objective. A scenario that models an amphibious assault on an extensive archipelago with sea and air support is less suitable. This is particularly true for an initial proof of concept. From the review below, some potential scenario-agnostic goals for a proof of concept are: \begin{enumerate} \item Unit `micro-control' as assistance for human players in a computer-moderated wargame. This avoids the need for strategic level decision-making, which is the responsibility of the human player. An analogy is the in-game AI in Flashpoint Campaigns that controls the detailed moves of the player's units. The complexity can be reduced if each unit is autonomous, making decisions based primarily on local information as in DecPOMDP models. \item AI full control of all units in a small wargame, with a single clearly defined objective. This could be used either as an opponent for human players, or for analyst insight from scenario results with both sides under AI control. Adding realism to the AI through full imitation of human-like behaviours would add considerable complexity. More realistically, different reward functions could be tailored to obtain behavioural variations. \item AI control of one or both sides in a Concept Development wargame. In this context the open-ended goal of seeking a diverse range of strategies can be supported by the use of multi-objective approaches. The ability with AI to run hundreds or thousands of games with the same, or slightly different, setup is useful for statistical analysis of the results, as well as permitting a full exploration of the strategy space. \item As the previous point, but in a Procurement setting. This also benefits from the ability to run many games to feed into statistical analysis, but the requirement to have control experiments with doctrinal realism of behaviour adds challenge if this cannot be easily codified. \end{enumerate} \subsubsection{Planned Force Testing} Significant incorporation of AI into large-scale Planned Force tests involving large numbers of communicating groups and sub-teams is not a realistic goal in the immediate future. This requires breakthroughs in modelling restricted information flow and co-ordination between multiple agents, areas not seriously addressed by recent research (see Sections~\ref{sect:InfoFlow} and~\ref{sect:AdvOpponent}). The scope for including diplomatic and/or external logistic aspects is also not well studied. However, there may be some opportunities to incorporate AI in aspects of the moderation of parts of a large-scale wargame, which are covered in the following sections. \subsubsection{Plan Variation Testing}\label{sect:PlanVarTest} Given a specific scenario to be repeated a number of times, there are a few ways that AI techniques could be utilised, of varying challenge levels. Two aspects are specifically relevant here: the level of control the AI has, and the constraints under which it must operate (a code sketch of the constraint options follows the list). \begin{enumerate} \item{\textbf{AI control} \begin{itemize} \item Interactive control. The AI is expected to emulate the dynamic behaviour of a human player by responding to changing objectives and orders as the scenario unfolds. Two-way communication, responding to orders or providing an interpretative summary of the current situation, is not feasible. One-way communication with precise changes to strategic targets is more feasible, as long as these can be encoded clearly in a utility or state evaluation function, but still very challenging. Any adaptive requirement here would be much easier to support with SFP or other planning methods, or else known changes would need to be included in pre-training for purer RL approaches. \item Full control.
The AI has full control of one side in the game, but with a fixed objective at the start of the scenario that does not change (although clearly the AI must still respond to the actions of the opposing side). This is less challenging than interactive control, and less restrictive in the methods that can be applied, with pure RL now competitive. It is also most similar to the Dota2 and Starcraft II game set-ups, which set a rough upper bound on the challenge and cost level. \item Unit micro-management. A human player is responsible for strategic decisions, while the AI is responsible for moving units on a hex-by-hex basis to implement these. This would be most similar, for example, to the role of the AI in Flashpoint Campaigns, which decides for each unit exactly where to move, and when to attack visible enemies. The benefit sought here is to make best use of the human player's time, and avoid it being spent on low-value activities. The challenge in this case is reduced significantly by the lack of need to consider strategic moves. \end{itemize} } \item{\textbf{AI constraint} \begin{itemize} \item Unit independence. Each unit (or group of units) is controlled independently, so it acts based only on its local knowledge along with some initially specified goal. This contrasts with a default approach in which a single AI controls all units, taking into account what each of them can see. The local, independent approach is less computationally challenging, as it avoids some of the problems of large action and state spaces. It would require some additional features in the API discussed in Section~\ref{rec1} to support it. The default approach of all units being controlled by one AI is simpler architecturally, and closer to the theoretical optimum, but is not computationally tractable for large numbers of units. \item Fog of War. Fully intelligent understanding of imperfect information is challenging, although the support of it is not, provided this is factored up-front into the API design. By `intelligent' we mean taking into account the value of information-gathering moves, and bluffing/second-guessing the opponent (Section~\ref{sect:Observability}). However, a good simulation of an intelligent understanding can be obtained by crafting the reward function to include a bonus for visible terrain, and scouting activities. \item Military doctrine. Where an AI needs to act in line with specific tactical doctrine to provide useful results, the challenge is clearly defining what the military doctrine is. If this can be defined in game mechanic terms, then it can be implemented by filtering non-compliant actions from the action space. More strategic elements can be factored in by engineering the reward function or victory points awarded. Learning a doctrine implicitly from observations of human actions is much more difficult. \item Specific Plan. More challenging is where the AI needs to broadly follow a plan generated as an outcome from a previous physical wargame, minimising variations from this. This can potentially be supported by incorporating the target plan into the reward function, although with less weight than the actual victory conditions. This will incentivise the AI to keep as closely as possible to the plan, but not at the expense of winning. Deviations from the target plan can then be analysed for common breakpoints. \end{itemize} } \end{enumerate}
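To make the last two constraints concrete, the hypothetical Python fragment below sketches doctrine as a filter over the action space, and a target plan folded into the reward with a lower weight than victory. The predicate and weights are invented for illustration.
\begin{verbatim}
def doctrine_filter(state, unit, actions, complies):
    """Drop actions that violate doctrine, where doctrine is expressed
    as a game-mechanic predicate complies(state, unit, action)."""
    legal = [a for a in actions if complies(state, unit, a)]
    return legal or actions   # never leave a unit with no action at all

def shaped_reward(victory_points, plan_deviation,
                  vp_weight=1.0, plan_weight=0.2):
    """Penalise deviation from the target plan, but weight winning more
    highly, so the AI keeps to the plan except where it would cost the
    game. The 1.0/0.2 balance here is purely illustrative."""
    return vp_weight * victory_points - plan_weight * plan_deviation
\end{verbatim}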
\subsubsection{Concept/Force Development}\label{sect:ConceptDev} These wargames are particularly suitable for using AI due to their explicitly exploratory goals. This means less work is required to impose constraints based on tactical doctrine or \textit{a priori} assumptions on the best/correct way to act in a situation, and algorithms such as MAP-Elites and multi-objective training are especially useful here to encourage a diverse array of solutions (see Section~\ref{sect:Exploration}). The issues instead revolve around interpretability of the results and insight extraction. \begin{itemize} \item \textbf{Human Observation}. Analyst observation of games using the final learned policies can be a prime source of insight. \item \textbf{Statistical Analysis}. This requires flexible and in-depth instrumentation as discussed in Section~\ref{rec1}. It focuses on patterns in the course of a game, and does not provide direct information on why those patterns occur; this would still require analyst insight. \item \textbf{Plan interrogation}. Instrumentation can include all the actions an agent considers at each step, and the relative weights applied to each; for example the visit counts in MCTS or the Q-value of the action in Reinforcement Learning. In the case of SFP methods this query can be extended deeper into the futures simulated at that decision point, to understand what the agent believed would happen up to some time horizon. This can highlight critical decision points where different actions had very similar values, but led to very different future results. \item \textbf{Feature Perturbation}. Plan interrogation does not provide direct evidence as to why an agent took a particular decision. Feature Perturbation is especially helpful in Deep RL methods, in which the neural network that makes a decision is very difficult to interpret directly. This modifies different inputs to the decision and highlights those that were instrumental in making it~\cite{Gupta_Puri_Verma_Kayastha_Deshmukh_Krishnamurthy_Singh_2020}; a toy sketch follows this list. See Section~\ref{sect:ExplainableAI} for further methods. \end{itemize}
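As an illustration of Feature Perturbation, the toy sketch below scores the input features of one decision by how much masking each of them changes the policy output. The linear `policy' is a stand-in assumption for a trained network.
\begin{verbatim}
import numpy as np

def feature_saliency(policy, obs, baseline=0.0):
    """Rank observation features by how much zeroing each one changes
    the policy output; policy maps a feature vector to a decision value."""
    reference = policy(obs)
    scores = np.zeros(len(obs))
    for i in range(len(obs)):
        perturbed = obs.copy()
        perturbed[i] = baseline               # mask one feature
        scores[i] = abs(policy(perturbed) - reference)
    return np.argsort(-scores)                # most influential first

# toy check with a linear stand-in policy: feature 1 matters most
w = np.array([0.0, 2.0, -1.0])
print(feature_saliency(lambda x: float(x @ w), np.array([1.0, 1.0, 1.0])))
\end{verbatim}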
\subsubsection{Procurement Testing} The smaller scale of these wargames makes them amenable to AI techniques, and many of the points made in Sections~\ref{sect:ConceptDev} and~\ref{sect:PlanVarTest} apply here. Statistical analysis from multiple iterations (assuming AI versus AI games), comparing the outcomes when equipment/doctrine is varied between a small number of specific options, would provide benefits beyond what is currently achievable with human-in-the-loop games, where resource constraints limit the number of iterations. The main challenge in this domain is applying doctrinal and other constraints to AI behaviour. \subsection{Recommendation 3a: A Scalable Deep Learning Approach for Wargaming through Learned Unit Embeddings} This section presents one particular deep learning approach that we believe is promising for wargaming. Wargames come with their own particular requirements and challenges compared to other games that deep learning approaches have been applied to so far. We propose an approach that combines the most relevant aspects of the AlphaStar and OpenAI Five architectures, while paying particular attention to the computational and scaling demands of the approach. An overview of the proposed approach is shown in Figure~\ref{fig:approach1}. Similarly to AlphaStar or AlphaGo, we first train a value and policy network based on existing human playtraces in a supervised way. Training first on existing playtraces, instead of starting from a \emph{tabula rasa}, will significantly decrease the computational costs needed for learning a high-performing policy. In the second step, and given that a fast forward model of the game is available, the policy can be further improved through an expert iteration algorithm (see Figure~\ref{fig:ExpertIteration}). \begin{figure}[h] \includegraphics[width=1.0\textwidth]{approach_idea1.pdf} \caption{Training (A). A policy network that predicts a distribution of valid moves and a value network that predicts the expected game outcome are first trained based on human examples. Policies are further fine-tuned through expert iteration. The policy network (B) is a feudal network in which a manager controls lower-level units. The manager and unit networks take as input processed unit information such as distances, enemy types, etc.\ instead of working from raw pixels. (C) In order to support new unit types without having to retrain the whole system, unit-type embeddings are based on unit abilities. This way the feudal network should be able to generalise to new units, based on similar units it has already learned to control.} \label{fig:approach1} \end{figure} In terms of the employed policy network, we believe the most scalable version with the least computational demands would be a Feudal network approach \cite{ahilan2019feudal} (Figure~\ref{fig:approach1}b), in which one manager network controls each of the different units. One main disadvantage of the AlphaStar system for wargaming is that it would require retraining from scratch every time a new unit or weapon type is introduced. Therefore we propose a new Feudal policy manager approach that does not directly work with specific unit types, but with \emph{unit embeddings} instead. Embeddings are most often used in the subfield of deep learning dealing with natural language processing \cite{levy2014dependency}. The idea is that words that have a similar meaning should be represented by learned vectors that are close to each other. We imagine a similar embedding can be learned for different unit and weapon types. This way, the Feudal policy network would not have to be retrained once a new unit is introduced, and the network should be able to interpolate its policy to a new unit, whose embedding should be close to already existing unit types. Other approaches such as (1) automatically learning a forward model for faster planning in latent space (Section~\ref{sec:learning_fm}), or (2) adding functionality for continual learning (Section~\ref{sec:continual_learning}) could be added to this system in the future. To train such a system, we believe the Fiber framework (Section~\ref{sec:deep_rl_frameworks}) introduced by Uber could be a good fit and would require minimal engineering effort. Given a fast enough simulator and the fact that we should be able to kickstart training with human replays, we speculate that even a modest setup of $\sim$100 CPUs and $\sim$5 GPUs might allow the system to be trained in around 12 hours.
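The following minimal sketch illustrates the unit-embedding idea: unit types are described by ability features rather than by identity, so a new type can be mapped close to existing types with similar abilities and handled by the same policy without retraining. The feature set and values are invented for illustration.
\begin{verbatim}
import numpy as np

# Illustrative ability features: (speed, range, armour, recon)
UNIT_ABILITIES = {
    "tank":      np.array([0.5, 0.6, 0.9, 0.2]),
    "infantry":  np.array([0.2, 0.3, 0.3, 0.5]),
    "recon_car": np.array([0.8, 0.3, 0.2, 0.9]),
}

def embed(abilities):
    """Stand-in for a learned embedding: a normalised ability vector."""
    return abilities / np.linalg.norm(abilities)

def nearest_known_type(new_abilities):
    """A policy trained on embeddings can treat a new unit like its
    nearest neighbour in embedding space, rather than being retrained."""
    e = embed(new_abilities)
    return max(UNIT_ABILITIES,
               key=lambda name: float(embed(UNIT_ABILITIES[name]) @ e))

# a hypothetical new wheeled scout: fast, lightly armoured, good sensors
print(nearest_known_type(np.array([0.9, 0.2, 0.1, 0.8])))  # -> recon_car
\end{verbatim}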
\subsection{Recommendation 3b: Graph Neural Networks for Unit Information Integration}\label{sect:GNN} In addition to using mostly off-the-shelf components as described in the previous section, a less explored but potentially very relevant approach for wargaming is graph neural networks. Graph neural networks (GNNs) are becoming more and more popular in deep learning \cite{zhou2018graph}, and have shown promising results in domains such as physical systems, learning molecular fingerprints, modelling disease, and predicting protein interactions. The basic idea behind graph neural networks (Figure~\ref{fig:gnn}) is to model the dependencies of graphs via learned message passing between the nodes of the graph. For example, a Sudoku puzzle can be modelled as a graph \cite{palm2018recurrent}, in which each of the 81 cells in the 9$\times$9 Sudoku grid is a node in the graph. The same neural network is used for all nodes in the graph to learn to integrate information from incoming nodes and to iteratively come up with the Sudoku solution. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{graph_neural_network.png} \caption{Example of a Graph Neural Network modelling a social network \cite{zhou2018graph}. A similar structure could model the chain of command and communication in wargames.} \label{fig:gnn} \end{figure} In the case of wargaming, nodes in the GNN could represent units, while the connections between them reflect communication channels. A GNN could be trained to integrate information in a decentralised way, and learn by itself to resolve communication issues and misalignment (e.g.\ seeing the same unit twice, noisy communication channels) into a coherent whole. Additionally, a particularly relevant and recently proposed GNN architecture is recurrent independent mechanisms (RIMs) \cite{goyal2019recurrent}. In this system, units communicate only sporadically and keep a level of autonomy. RIMs might be particularly promising for modelling situations in which a lower-level wargaming unit loses contact with higher-level command. The unit would have to learn to act both when higher-level orders are available and autonomously when they are not.
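To ground the idea, the sketch below runs a few (untrained) rounds of message passing over a small unit graph using numpy: nodes hold per-unit state vectors, edges are communication links, and shared weight matrices stand in for the learned message function. All shapes and values are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_units, n_feat = 4, 8
features = rng.normal(size=(n_units, n_feat))   # per-unit state vectors

# adjacency of communication links (1 = units exchange messages)
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 1],
                [1, 0, 0, 0],
                [0, 1, 0, 0]], dtype=float)

W_self = rng.normal(size=(n_feat, n_feat))      # untrained stand-ins
W_msg = rng.normal(size=(n_feat, n_feat))       # for learned weights

def message_passing_round(h):
    """Each unit averages its neighbours' messages and combines them
    with its own state through a shared (learned) transformation."""
    degree = adj.sum(axis=1, keepdims=True).clip(min=1)
    incoming = (adj @ h) / degree               # mean over connected units
    return np.tanh(h @ W_self + incoming @ W_msg)

h = features
for _ in range(3):      # a few rounds spread information across the graph
    h = message_passing_round(h)
print(h.shape)          # (4, 8): updated per-unit representations
\end{verbatim}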
\subsection{Recommendation 3c: High Performance Statistical Forward Planning Agents} \label{HighPerfSFP} As an alternative/parallel approach to the Deep RL focus of the previous two recommendations, an SFP approach could productively be built on recent results in RTS and multi-unit game environments. This approach would leverage domain knowledge in the form of low-level action scripts to provide Action Abstraction. Domain knowledge can also be incorporated in a state evaluation function used to determine the value of an action choice after rolling out the forward model. Specific variants here are: \begin{enumerate} \item A Combinatorial Multi-Armed Bandit approach, such as NaiveMCTS~\cite{Ontanon_2017} or direct MCTS~\cite{Churchill_Buro_2015}. The expert scripts provide the set of actions for each unit. This is suitable for a relatively small number of units. \item A two-stage approach that uses a neural network trained by RL to make the initial decision for each unit, followed by low-level modification of this plan using MCTS or CMAB as in~\cite{Barriga_Stanescu_Buro_2017} to take account of unit (and enemy) interactions. This makes more effective use of the computational budget to scale up to larger scenarios. \item Online Evolutionary Planning (OEP/RHEA) as an evolutionary option for multiple units~\cite{Justesen_Mahlmann_Risi_Togelius_2018}. \item Stratified Strategy Selection~\cite{Lelis_2017}. All units are partitioned into a set of types, for example based on position, unit type, damage and so on. Each type is then given a single consistent order (i.e. script), and the algorithm searches in the space of type partitions and script combinations. This scales well to large numbers of units. \end{enumerate} In all cases the Action Abstraction scripts and/or the state evaluation functions can be learned through either Evolutionary methods (see~\cite{Marino_Moraes_Toledo_Lelis_2019, Neufeld_Mostaghim_Perez-Liebana_2019}), or Expert Iteration based on RL from expert games as described in Section~\ref{sect:OtherAlgos}. \subsection{Recommendation 4: Open areas of research} As this report has shown, much of the intensive research effort over the past few years into Deep RL and Game AI has generated techniques that are selectively very relevant to wargames. Below we summarise a number of areas relevant to wargames that have not benefited from this surge of recent work, and which are relatively under-developed. \begin{itemize} \item Restricted Information Flow between agents, and between low-level units and high-level commanders. \item Diplomacy and Negotiation in games with more than two players. \item Exploring Graph Neural Networks for modelling unit information integration. \item Combining Deep RL with learned unit embeddings as a scalable approach for wargaming AI. \item Integration of human-factors research on realistic behavioural patterns of military commanders into AI constraints. \end{itemize} \vfill \section{Conclusions} Great strides have been made in Game AI over the last 5 years in particular, though the headline results using Deep RL methods have often been achieved at the cost of great effort, expertise and expense. Wargames are in many cases not fundamentally harder than playing StarCraft to a high standard, but they are different to commercial games and come in a wide variety. In most cases it will not be possible to put vast amounts of effort into developing strong AI for any particular wargame, but there remain many interesting and open questions regarding the level of competence that could be achieved using existing recent methods, and the amount of investment needed to achieve satisfactory performance. We make a number of recommendations that offer viable ways forward, and we firmly believe that these should be explored in detail and taken forward as appropriate. There has never been a better time to invest in this area, and the potential benefits are significant. They include better staff training, better decision making, and an improved understanding of the nuances of problem domains that comes as an inevitable and desirable spinoff from building better models and gaming with them. \vfill \subsection{Action Space}\label{sect:ActionSpace} The Action Space of a game is the set of actions available to a player when they make a move. In the case of Go this is a discrete set of 361 board locations (on a 19 by 19 board), and in Atari games this is the set of possible joystick positions, plus the pressing of the button (a maximum of 18 for all combinations, but most games use far fewer). These numbers represent an upper bound, as not all of these actions will be valid in any given situation; for example in Go some of the board positions will already be occupied and hence unavailable. A key distinction is whether the action space of a game is \textbf{discrete}, as in the Go and Atari examples, or \textbf{continuous}. In MuJoCo for example an `action' is the setting of angles and torques for all limb rotations and movements, and each angle can be any value between 0 and 360 degrees.
Here the number of dimensions that must be specified is important. In games such as Starcraft II or CMANO, any specific unit can be ordered to move to a map position, which is a continuous two or three dimensional vector; although other actions, such as the target of an attack, are discrete. Any continuous action space can be turned into a discrete one by selecting specific points as selectable (`discretisation'). This is part of the design of MicroRTS, with the game working on a fixed grid, hence discretising a continuous 2-dimensional space into fixed squares; Flashpoint, like other wargames, does the same with hexes. Table~\ref{table:ActionSpace} summarises the features of the different environments. Starcraft II and MicroRTS are similar in style, with multiple units being controlled simultaneously and hence a full action space that is combinatorially very large. Chess, Go, Atari and GVGAI are a different type of environment with a smaller set of actions available at any one time. A player in Dota 2 is halfway between these, and controls a single unit which can take one action at a time (but up to 80,000 actions may be possible). The Starcraft II action space can be considered in two ways: \begin{enumerate} \item {Clicks on screen. This is the approach used by DeepMind with Deep Learning, and gives the rough $10^8$ set of actions each turn, as these are defined by the human point-and-click mouse interface~\cite{vinyals2017starcraft}.} \item{Unit orders. This uses an API, which provides a list of available units and the orders that can be given to each.} \end{enumerate} The second of these is most appropriate for wargames, for which there is no requirement to additionally learn the vagaries of a human interface. Classic AI approaches, including MinMax and Monte Carlo Tree Search, as well as the Q-Learning based approaches used in Go and Atari, require a discretised action space. Policy Gradient RL techniques have been developed to operate with a continuous action space (or policy)~\cite{policyGradient}. Another approach is to discretise the space; the same idea is also used to reduce large discrete spaces to a smaller number of options. This `Action Abstraction' reduces a large number of possible actions to a small set tractable for forward planning or other techniques~\cite{Churchill_Buro_2013, Moraes_Marino_Lelis_Nascimento_2018}. For example in some MicroRTS and Starcraft II work, a set of scripts can be used to define a particular tactic (`worker rush', `build economy', `defend against incoming attack'), and the AI focuses on learning which tactic to apply in a given situation without needing to learn the detailed instructions that make it up~\cite{Churchill_Buro_2013, Neufeld_Mostaghim_Perez-Liebana_2019}. Sub-goal MCTS does something similar with pre-specified sub-goals that are used to automatically prune the action space~\cite{Gabor_Peter_Phan_Meyer_Linnhoff-Popien_2019}. This is a key mechanism for introducing domain knowledge in wargames. It can leverage expert knowledge embedded in the scripts that define useful action sequences (such as `take cover', `seek high ground', `patrol with active sonar'), without being fully constrained, as the AI can learn which script to use in which situation, which may not always accord with expert intuition. The net result is to significantly speed up the learning or planning process (by orders of magnitude) at a cost in loss of flexibility.
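A minimal sketch of this kind of script-based Action Abstraction is shown below: the AI chooses among a handful of expert scripts per unit, and each chosen script expands into detailed low-level orders. The script names echo the examples above, but the logic and data are invented for illustration.
\begin{verbatim}
# Toy terrain: named locations an expert script can send a unit to.
TERRAIN = {"cover": (2, 3), "hill": (5, 1), "waypoint": (0, 4)}

# Each expert script expands into a short low-level action sequence.
def take_cover(unit):
    return [("move", TERRAIN["cover"])]

def seek_high_ground(unit):
    return [("move", TERRAIN["hill"])]

def patrol_active_sonar(unit):
    return [("sonar", "active"), ("move", TERRAIN["waypoint"])]

SCRIPTS = [take_cover, seek_high_ground, patrol_active_sonar]

def abstract_action_space(unit):
    """The learner/planner chooses among len(SCRIPTS) tactics per unit,
    instead of searching the full low-level action space."""
    return list(range(len(SCRIPTS)))

def execute(unit, script_index):
    """Expand the chosen tactic into detailed orders for the engine."""
    return SCRIPTS[script_index](unit)

print(execute("destroyer_1", 2))
# [('sonar', 'active'), ('move', (0, 4))]
\end{verbatim}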
Extensions of this idea relevant to wargames are to simultaneously learn the parameters of scripts (for example, an aggression parameter that specifies the odds required to attack); to evolve a sub-set of scripts to use from a larger population~\cite{Marino_Moraes_Toledo_Lelis_2019}; or to use the results of scripted recommendations for more detailed low-level planning~\cite{Barriga_Stanescu_Buro_2017}. This last work generates actions for units based on scripts, and then devotes some of the computational budget to refining proposed moves for units that are close to enemy units, for which a small modification of a high-level default move may have an especially large impact. (There is overlap here with the unit-based techniques discussed in Section~\ref{sect:unitTechniques}.) \begin{table}[] \begin{tabular}[t]{|l|l|c|c|c|} \hline \textbf{Game} & \textbf{Discrete/Cont.} & \textbf{Order Mode} & \textbf{Max Action Space} & \textbf{Decisions} \\ \hline \textbf{CMANO} & Continuous & $10^{0\mhyphen 3}$ units & $10^{10+}$ & $10^5$ \\ \textbf{Flashpoint} & Discrete & $10^{1\mhyphen 2}$ units & $10^{10+}$ & $10^0$ \\ \textbf{Chess/Go} & Discrete & 1 move & $10^2$ to $10^3$ & $10^0$ to $10^1$ \\ \textbf{Atari/GVGAI} & Discrete & 1 move & $10^1$ & $10^2$ to $10^4$ \\ \textbf{Starcraft II} & Continuous & $10^{0\mhyphen 3}$ units & $10^8$ or $10^{10+}$ & $10^{5+}$ \\ \textbf{Dota2} & Discrete & 1 move & $10^5$ & $10^{5+}$ \\ \textbf{MicroRTS} & Discrete & $10^{1\mhyphen 2}$ units & $10^6$ to $10^8$ & $10^2$ to $10^3$ \\ \textbf{MuJoCo} & Continuous & 3-17 dimensions & - & $10^{3+}$ \\ \hline \end{tabular} \caption{Action Space categorisation. `Order Mode' can be 1 move per turn, orders per unit, or a single multi-dimensional vector per time-step. `Decisions' is an approximation of the number of decisions a player makes during one game.} \label{table:ActionSpace} \end{table} \subsection{Branching Factor}\label{sect:Branching} The Branching Factor of a game is the number of new positions that can be reached after each move/action. In general the higher this is, the more challenging the game is to any AI technique. This is closely related to the Action Space. In a deterministic game, each distinct action will lead to a different game position and hence the branching factor is equal to the average action space size. In a stochastic game the branching factor can be much higher: for example, in a wargame a decision to attack a unit can lead to many different states depending on the random damage dealt to both sides, and the results of any morale checks. The impact of branching factor is hence covered in Sections~\ref{sect:ActionSpace} and~\ref{sect:Stochasticity} on Action Space and Stochasticity respectively. \subsection{Number of Decisions}\label{sect:NumberDecisions} This refers to the average number of decisions or actions made by a player during the game. Games with more decisions are potentially more complex, but this is also dependent on the size of the action space. For this reason the number of decisions a player makes is included alongside Action Space in Table~\ref{table:ActionSpace}, as the two can be multiplied together to give a rough gauge of complexity. The length of a game can be characterised by either the number of decisions made by each player or the number of game simulation `ticks'. Classic games such as chess have one `game tick' per decision.
Realtime wargames may play out for tens of thousands of game simulation ticks, but on most of those the players may be taking no actions (even if they could) due to limits on human perception, thinking and reaction times. In Starcraft, the AI agent may interact via a constrained interface in order to reduce the click rate and level the playing field when playing against human players. The very different numbers of decisions available for CMANO ($10^5$) and Flashpoint ($10^0$ to $10^1$) in Table~\ref{table:ActionSpace} draw attention to two quite different ways of looking at wargames. The figure for CMANO is estimated from a maximum of one action per second over a 2-3 hour realtime game, which is of the same order of magnitude as in Starcraft and Dota 2. In Flashpoint Campaigns the player has a `command cycle' that only allows them to enter orders (for all units) at a few specific points. Once orders have been given, 15-30 minutes of simulated time then unfold during which the player can only watch events. This is the difference between a low-level and a high-level control perspective, which can apply to \emph{either} an AI or a human player. Flashpoint seeks to model the constraints around human decision making in the field, in line with much of the purpose of wargaming outlined in Section~\ref{sect:wargameTypes}. The second-by-second view is more in line with low-level control of units to fulfil these goals, and in Flashpoint these are in effect executed by the in-game AI that makes decisions for each unit about moving, attacking and retreating based on the high-level goals provided by the player. This perspective is relevant for AI that supports a strategic human player by micromanaging units for them in pursuit of a specified high-level goal. Neither perspective is similar to the strict turn-based approach of classic games, or the Atari/GVGAI environment with a very constrained set of actions at each decision point. This distinction between low-level (or `tactical') and high-level (or `strategic') decisions is more important than the raw numerical differences in Table~\ref{table:ActionSpace}. It is addressed by a number of techniques covered in more detail elsewhere in this report, specifically: \begin{itemize} \item Hierarchical methods in Sections~\ref{sect:unitTechniques} and~\ref{sect:HRL}; \item Action Abstraction (for example each high-level strategy being a script that specifies the low-level tactics to follow) in Section~\ref{sect:ActionSpace}. \end{itemize} \subsection{State Space}\label{sect:StateSpace} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess/Go} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{MicroRTS} & \textbf{MuJoCo}\\ & & & \textbf{GVGAI} & \textbf{(+ Dota2)} & & \\ \hline LC & LC & $10^{47, 170}$ & $10^{6000}$ / LC & $10^{1685}$ / LC & LC & LC \\ Unit & Unit & Global & Global & Unit & Unit & Global\\ \hline \end{tabular} \caption{State Space categorisation. `LC' indicates a game is `locally continuous'. The state-space for Atari is based on the possible values for all pixels over four frames after downsampling~\cite{atari}. State-space estimates for other games are from~\cite{Ontanon_2017}. The second line indicates if the state is purely global, or if it can largely be factored to individual units.} \label{table:StateSpace} \end{table} We can categorise the State Space of a game in terms of the number of different states it can be in.
These are calculable in cases like Chess or Go ($10^{47}$ and $10^{170}$ respectively). In games with continuous state spaces, these are technically infinite. This categorisation is especially relevant to search-based techniques, as used classically for Chess. In Chess or Go, each board position is unique (ignoring symmetries), and it can make a huge difference to the outcome of a game if a pawn is one square to the left, or if a Go stone is moved by one space. A difference of one square for a pawn can, for example, open up an attack on a player's Queen or King. This is much less true in Starcraft II, many Atari games, and RTS games in general. These games are more `locally continuous', in the sense that small changes in unit position, health or strength usually lead to similarly small changes in the outcome. We should also distinguish between the state space and the observation space (Section~\ref{sect:ObservationSpace}). The observation space is the lens through which the agent views the underlying state. The difference between state and observation of state is emphasised for partially observable games, where the agent is forced to make assumptions about the unobservable parts of the state. However, it may also be that the observation is an expanded version of the underlying state. For example the full observation space for Atari 2600 games is $10^{70,000}$, based on the number of pixels and the possible colours of each pixel. This is reduced (downsampled) to $10^{6000}$ for processing by the deep network in the original (and later) DeepMind work~\cite{atari}. This downsampling by thousands of orders of magnitude `loses' all but an infinitesimal fraction of the original information without noticeably reducing performance, and is a clear indication that the game is locally continuous in this sense. In this regard they are wargame-like, and classic state-space complexity is less relevant. However, the Atari 2600 console only has 128 bytes of RAM (1024 bits), so the underlying state space is limited by this, and hence much smaller than even the compressed observation space described above (though some cartridges came with up to 4k of extra RAM). The main point is that after graphical rendering the observation space can be much larger than the underlying state space. Local continuity still allows for discontinuities; for example an infantry company may move only a few metres, but if this is from inside a dense wood onto an open plain in sight range of the enemy then it will have a major, discontinuous impact on the game. The key point is that the formal state space size is not a good measure of game complexity. It is better to think about this as a low-dimensional, non-linear manifold within the high-dimensional pixel (or other) space. This lower-dimensional manifold represents the true complexity of the state space, but is not directly observable. It is this low-dimensional manifold that machine learning techniques seek to identify, building a model that understands that the boundary between wood and plain is the important implicit variable. Deep RL does this using multiple convolutional network layers, random forests by constructing multiple decision trees, and so on. Hence in Table~\ref{table:StateSpace}, we categorise games by this idea of `local continuity'. In addition to the distinction between locally-continuous and discrete games, some games can be largely factored into the state of individual units. In these cases, we can consider the state space to be composed of a number of distinct components: \begin{enumerate} \item {The state of each unit: position, damage, equipment etc.} \item {The interactions between units: their relative positions are likely to be relevant in a wargame.} \item {Purely global state: developed technologies in Starcraft II, or the number of available air-strikes in Flashpoint.} \end{enumerate}
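Such a factored state might be sketched as follows; the field names and the support-range threshold are illustrative only.
\begin{verbatim}
from dataclasses import dataclass, field

@dataclass
class Unit:
    """1. Per-unit component of the state."""
    name: str
    position: tuple
    damage: float = 0.0
    equipment: list = field(default_factory=list)

@dataclass
class FactoredState:
    units: list                     # 1. the state of each unit
    air_strikes_available: int = 0  # 3. purely global state

    def in_support_range(self, a, b, r=5.0):
        """2. unit interactions, e.g. relative positions."""
        dx = a.position[0] - b.position[0]
        dy = a.position[1] - b.position[1]
        return (dx * dx + dy * dy) ** 0.5 <= r

state = FactoredState(units=[Unit("A Coy", (1, 2)), Unit("B Coy", (3, 3))],
                      air_strikes_available=2)
print(state.in_support_range(state.units[0], state.units[1]))  # True
\end{verbatim}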
Wargames are unit-based and locally continuous, and most similar to the RTS-style games in Table~\ref{table:StateSpace}. The number of units that a player controls is an important aspect of the complexity of RTS games and wargames. A large number of units in play limits the ability of a player to track them all, and to properly consider the best actions that each should take in relation to what the others are doing. In wargames such as CMANO, a player can even write Lua scripts to control how a set of units behave. As the number of units increases, the need for some type of structured system of control, such as standard military hierarchies, becomes apparent. Various approaches have been used in the academic literature looking at unit-oriented state spaces, and these are surveyed in detail in Section~\ref{sect:algoSummary}, given how important this aspect is to wargames. \subsection{Observability}\label{sect:Observability} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{Dota2} & \textbf{$\mu$RTS} & \textbf{MuJoCo}\\ & & \textbf{/Go} & \textbf{GVGAI} & & & & \\ \hline Imperfect & Imperfect & Perfect & Perfect & Imperfect & Imperfect & Perfect & Perfect \\ Fog of War & Fog of War & & & Fog of War & Fog of War & & \\ Unit & Unit & & & Camera/Unit & Unit & & \\ \hline \end{tabular} \caption{Observability categorisation. The first line shows if a game has Perfect or Imperfect information, the second line the type of Imperfect information, and the third line how Observability is restricted.} \label{table:Observability} \end{table} The Observability of a game defines which elements of the full game state are visible. At one extreme, a game with Perfect Information has the whole state space visible to all players. A game with Imperfect Information has some parts of this hidden; for example in most card games a player can see their own hand, but not that of the other players. In a wargame the most common form of Imperfect Information is `Fog of War' (FoW), in which a player can only see their own units and those of the enemy within line of sight and range. Other forms of imperfect information are possible, for example a player may not have full knowledge of the capabilities of enemy units at the start of a scenario, even when the units are visible. For the games under consideration the primary aspect of Observability that is relevant is FoW, as summarised in Table~\ref{table:Observability}. In this regard they split straightforwardly into Perfect Information games, and Imperfect Information ones with Fog of War. Hidden information through FoW fundamentally changes some of the challenges faced compared to perfect information games such as Go. We can now only observe part of the full State Space, and formally therefore have to track all possible states that the unobservable parts of the State Space \emph{could} be in.
This expands the complexity of the problem, and the most theoretically rigorous analysis of this expansion (in Partially Observable Markov Decision Processes, or POMDPs) is only tractable in small, toy scenarios~\cite{Gmytrasiewicz_Doshi_2005, Ross_Pineau_Chaib-draa_Kreitmann_2011}. More realistic algorithms that work in large-scale POMDPs usually sample from some estimate of the unknown State Space, and are reviewed in more detail in Section~\ref{sect:POMDP}.
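The sampling idea can be illustrated as follows: draw several determinised samples of the hidden part of the state from the current belief, evaluate each candidate action against each sample with the forward model, and average. The sampler, evaluator and action set below are all stand-in assumptions.
\begin{verbatim}
import random

def sample_hidden_state(belief, rng):
    """Draw one determinised guess of what cannot be observed,
    e.g. hidden enemy positions, from the current belief."""
    return {u: rng.choice(cells) for u, cells in belief.items()}

def plan_under_uncertainty(actions, belief, evaluate, n_samples=50, seed=1):
    """Score each action averaged over sampled completions of the state."""
    rng = random.Random(seed)
    samples = [sample_hidden_state(belief, rng) for _ in range(n_samples)]
    def avg_value(a):
        return sum(evaluate(a, s) for s in samples) / n_samples
    return max(actions, key=avg_value)

# toy example: one hidden enemy somewhere in cells 0-3
belief = {"enemy": [0, 1, 2, 3]}
value = lambda action, s: 1.0 if action == ("attack", s["enemy"]) else 0.0
actions = [("attack", c) for c in range(4)]
print(plan_under_uncertainty(actions, belief, value))
\end{verbatim}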
\subsection{Observation Space}\label{sect:ObservationSpace} In a Perfect Information game the Observation Space is by definition the same as the State Space, as this is fully observable. In an Imperfect Information game the Observation Space is a strict subset of the State Space. For example, where we have Fog of War we can only see some hexes and the full state of our own units. This should not be interpreted to mean that a smaller Observation Space makes a game easier; in general, the smaller the Observation Space compared to the State Space, the harder the game. Refusing to look at a Chess board when playing would simplify the Observation Space to a single state, and be unlikely to improve one's play. The essential categorisation of games is covered in Table~\ref{table:Observability}. The third line in this table emphasises that the mode of Observation can vary: \begin{itemize} \item {Unit-based visibility. This is most standard, with a player able to see anything that any of their units can see.} \item {Camera-based visibility. In Starcraft II this is an additional constraint important in a real-time game. A player must physically navigate the camera view to the area they wish to look at, and then they still only see what is visible to their units.} \end{itemize} The relevance of the additional camera-based limitation largely depends on the pace of the game. It is akin to a wargame that models the reality of a commander's restricted attention bandwidth when rapid decisions are needed. In a more relaxed turn-based game, or where the passing of time is slow, it is not relevant. In wargames the camera-based limitation is not relevant, as this approach to modelling attention bandwidth is primarily an artifact of the commercial RTS game genre. In DOTA 2 each player controls a single unit (hero), and sees the world from this unit's perspective. However the player also has access to a mini-map that shows the world with all units visible to any other player on the same team. This subtlety does not affect the data in Table~\ref{table:Observability}. \subsection{Information flow}\label{sect:InfoFlow} A feature of wargames that is missing from all of the example games here, including Flashpoint and CMANO, is the role of a restricted information flow. In the games of Table~\ref{SummaryTable}, information is presented to the player by visibility of units on a map. In the case of FoW the player (commander) can see anything that their units can see. In physical wargames, as in real combat situations, this is frequently not the case. Instead the commander is reliant on reports back from units on the ground, which have a restricted informational bandwidth in what they can transmit. A unit that comes under unexpected enemy fire would report this as a priority, with perhaps an estimate of the opposing force composition, and not the complete specification of all visible units. This feature is important in physically-moderated wargames, but is not currently common in computer-moderated ones. Computer-moderated games are already required to incorporate restrictions on Information Flow from commander to units, modelling command and control constraints; Information Flow restrictions from units back to the commander are anticipated to be a requirement for future wargames.
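One hedged sketch of how such a restriction might be modelled: units send small, fixed-format reports rather than exposing their full view, and the commander's picture is assembled only from the reports received. The report fields are invented for illustration.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class ContactReport:
    """A bandwidth-limited report, not a full list of visible units."""
    sender: str
    priority: str            # e.g. 'under_fire'
    est_enemy_strength: str  # a rough estimate, e.g. 'company(+)'
    location: tuple

def commander_picture(reports):
    """The commander's situational awareness is only the sum of reports
    received, in contrast to 'see everything your units see' FoW."""
    return {r.location: (r.priority, r.est_enemy_strength)
            for r in reports}

reports = [ContactReport("B Coy", "under_fire", "company(+)", (4, 7))]
print(commander_picture(reports))
\end{verbatim}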
\subsection{Stochasticity}\label{sect:Stochasticity} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess/Go} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{MicroRTS} & \textbf{MuJoCo}\\ & & & \textbf{GVGAI*} & \textbf{+ Dota2} & & \\ \hline Stochastic & Stochastic & Determin. & Determin. & Stochastic & Determin. & Stochastic \\ \hline \end{tabular} \caption{Stochasticity categorisation. Environments are either Stochastic (have random event outcomes) or Deterministic. GVGAI has a mixture of deterministic games and stochastic games.} \label{table:Stochasticity} \end{table} This refers to whether a game is deterministic or has random (stochastic) elements. In a deterministic game the same action in a given state always leads to the same next state. In a stochastic game it will lead to a distribution of possible next states. This is closely linked to Branching Factor (Section~\ref{sect:Branching}), which dramatically increases for stochastic games due to the increase in possible outcomes from a single action, and is a key reason that the minimax search techniques that achieved world championship level play in Chess are less suitable for wargames. These can be converted to an `Expectiminimax' algorithm, but at the cost of more computation due to the branching factor, and a reduced ability to prune the search tree~\cite{russell2016artificial}. A more common approach is to use Monte Carlo techniques to randomise the outcomes in `rollout' simulations, used both in Monte Carlo Tree Search and in the generation of data for Deep RL techniques. Deterministic single-player games are susceptible to over-fitting: agents may learn a successful sequence of actions, rather than learning how to play the game in a general way. An example of this is learning the precise route for a Pac-Man level given deterministic search patterns by the ghosts; this does not help the agent in the next level. Environments such as Atari/GVGAI\footnote{The deterministic single-player subset of GVGAI games} and MuJoCo are susceptible to this kind of over-fitting in RL techniques, though that has been mitigated by adding randomness in the interface between the game engine and the agent~\cite{Haarnoja_Zhou_Abbeel_Levine_2018, Fujimoto_vanHoof_Meger_2018}. Classic forms of stochasticity in wargames are the uncertainty of combat outcome, or the spotting probability of enemy units, minefields etc., which in both physical and computerised versions are decided by dice-rolling~\cite{peterson2012playing}. This stochasticity introduces the possibility of low-probability but high-impact events in any single run of a game. For example, spotting a well-camouflaged tank column at distance may enable it to be neutralised at low cost, giving a very different outcome in, say, 5\% of games. This is a problem for physical wargames, or ones with humans-in-the-loop, as this constrains the number of games that can be run to (usually) low single figures. For this reason identification of these events is key to determine whether to branch the game (see Section~\ref{sect:wargameTypes}) so that it is representative of the median expected outcome. This is less of an issue in fully computerised wargames with an AI on both sides, since if thousands of games are run these low-probability events are representatively sampled and averaged out. For wargames, there are other sources of uncertainty. Since they are inherently multiplayer, the actions of the opponent (whether human or AI) are normally unpredictable, and partial observability also has similar effects. This uncertainty of opponent action (especially with an adversarial opponent) is covered in Section~\ref{sect:Opponent}, and is not formally `random', but simply not known. Even if the policy of the opponent were known perfectly, it might still be random if the opponent deliberately plays a mixed strategy, for example in Rock-Paper-Scissors, in which a known opponent might play each option with equal probability. In practice many game engines may model an AI opponent using an element of randomness, representing the real-world unknown actions they would take. The explicit use of randomness in simulations can provide an efficient way to model a number of types of uncertainty. For the benchmark game environments, MicroRTS uses deterministic combat, with each unit doing a fixed amount of damage, while CMANO, Flashpoint, DOTA2 and Starcraft II have random damage. CMANO and Flashpoint additionally have random chances for spotting enemy units, affecting the Observation Space. An interesting way to model stochastic games is taken in the OpenSpiel framework (see Section~\ref{sect:Platforms}). For most OpenSpiel games any randomness is modelled as the actions of a special random player. The actions of the random player are recorded, just as they are for any other player. From the root state of a game, the sequence of actions taken then uniquely defines the resulting game state.
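This convention can be sketched as follows: stochastic outcomes are the recorded actions of a dedicated chance player, so replaying an action history from the root always reproduces the same state. This is a simplified illustration of the idea, not OpenSpiel's actual code.
\begin{verbatim}
import random

CHANCE = "chance"

def play(history):
    """Replaying the same action history always yields the same state."""
    state = {"blue_hits": 0}
    for player, action in history:
        if player == CHANCE:
            state["blue_hits"] += action  # a resolved die roll, now just data
        # ... apply ordinary player actions here ...
    return state

rng = random.Random(42)
history = [("blue", "attack"), (CHANCE, rng.randint(0, 3))]  # roll recorded once
assert play(history) == play(history)     # deterministic given the history
print(history, play(history))
\end{verbatim}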
\subsection{Win Conditions}\label{sect:WinConditions} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess/Go} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{MicroRTS} & \textbf{MuJoCo}\\ & & & \textbf{GVGAI} & \textbf{+ DOTA2} & & \\ \hline VP Scale & VP Scale & Win-Lose & VP Scale & Win-Lose & Win-Lose & VP Scale \\ \hline \end{tabular} \caption{Win Condition categorisation. Environments are either Win-Lose, or have a scale of victory points (VP).} \label{table:WinConditions} \end{table} Games range from simple, clearly defined and unchanging win conditions (e.g. Chess, Go) to more complex conditions, which in wargames may be asymmetric, and could potentially change during the course of a campaign. In a wargame the same scenario could be played with different win conditions for each player in order to provide different challenges. In most games to date, the win conditions are single-objective with binary (or ternary, if a game can end in a draw/stalemate) outcomes (e.g. in chess, the aim is to get the opponent in Checkmate), even though they can be achieved in a multitude of ways. In this case a player either wins or loses (or draws). This can be supplemented by a numeric score (or Victory Points) of some sort. Many Atari/GVGAI games have both a win-lose condition combined with a score; for example a treasure hunt game may be `won' when the player reaches the exit before being eaten by a spider, but their score is determined by the number of gold coins they pick up en route. In a similar way wargames can be multi-objective, where the aim is to achieve a particular outcome while minimising casualties and economic cost. These are often reformulated, as in Flashpoint, to a numeric scale by assigning victory points to every feature of the outcome, although this mapping may be unsatisfactory and can lead to unexpected (and implausible) AI behaviour if the balance of the reward is mis-specified. Biasing the AI in this way can also discourage exploration away from pre-conceived strategies embedded implicitly in the victory points scale. An alternative, multi-objective approach is not to weight the different parts of the `score', but to seek a range of different policies that are `optimal' in different ways~\cite{Deb_Agrawal_Pratap_Meyarivan_2000, Perez-Liebana_Mostaghim_Lucas_2016}. For example one policy might minimise casualties and fuel usage, while another optimises the number of enemy forces degraded with less concern for the associated cost. This can be a useful way of exploring the space of viable strategies. The multi-objective approach is covered in more detail in Section~\ref{sect:Exploration}. Of the benchmark games, the RTS and Dota2 games are Win-Lose, and differ in this respect from wargames, while some Atari games are closer given the presence of a running score as well as a win-lose condition. MuJoCo is rated as a victory point scale, as success is measured by how far and fast an agent learns to move. \subsection{Reward Sparsity}\label{sect:RewardSparsity} This refers to the inherent rewards given \emph{during} a game. For classic games such as chess, the only inherent reward is at the end of a game when a player wins, loses or draws. This is closely linked to the Win Conditions of a game in Table~\ref{table:WinConditions}. A game with a binary Win-Lose condition almost by definition also has a sparse reward, while one with a VP Scale of some type provides inherent rewards (VPs) as the game progresses. Most RTS games and wargames have natural features to provide anytime score updates, for example by penalising friendly casualties and loss of assets and incentivising territory gain. However, these natural heuristics can be misleading in the short term, with immediate losses required (such as sending units into combat to take casualties) as an investment for a longer-term payoff (taking a strategic location). In this sense they have elements of `deceptive' games, in which following the immediate reward signal is counter-productive to final victory~\cite{Anderson_Stephenson_Togelius_Salge_Levine_Renz_2018}. Methods to promote exploration, such as intrinsic motivation and curriculum learning, can be very relevant when a key desired outcome is exploration of possible tactics and doctrines, as in Concept Development wargames. Hence, while wargames, like RTS games, do have a short-term point score, many of the AI techniques developed to cope with sparse rewards are important. These are looked at in more detail in Section~\ref{sect:RelativeRS}. \subsection{Active/Adversarial Opponents}\label{sect:Opponent} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess/Go} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{MicroRTS} & \textbf{MuJoCo}\\ & & & \textbf{GVGAI} & \textbf{+ Dota2} & & \\ \hline Adaptive & Adaptive & Adaptive & Non-Adapt. & Adaptive & Adaptive & Non-Adapt. \\ Adversarial & Adversarial & Adversarial & & Adversarial & Adversarial & \\ \hline \end{tabular} \caption{Opponent categorisation. Opponents may be Adaptive to a player's actions and may or may not also be Adversarial to them.} \label{table:Opponent} \end{table}
An essential component of a wargame is one or more opponents. In this they contrast with many Atari games in which a single player fights against the environment, as in Space Invaders, Pac-Man or Asteroids. In these, the enemies are scripted units that do not change behaviour or adapt to player actions. Other Atari games have enemies that do adapt, for example the ghosts in Ms Pac-Man will change direction to move towards a visible player; however, these remain simple reactive scripts and are not `Adversarial' in terms of planning a strategy to win the game. This contrasts with Chess, Go, RTS games or wargames, in which there is a clearly defined opposing force that makes proactive decisions to achieve objectives and adapts to the player's actions. All the benchmark games considered are one or two player games. Some wargames can have more than two sides, for example a neutral Brown or White supranational force, or a Pink side that may, depending on what happens, become actively allied to Blue or Red. A large Planned Force Test will have multiple autonomous players on each side, with restricted communication between them even if they are collaborating on a joint goal. However, the computerised wargames directly relevant to AI control are Red vs Blue 2-player situations, and here the essential requirement to deal with large numbers of units controlled by a single player is covered in detail in Section~\ref{sect:StateSpace}. There is also at this stage relatively little academic work that addresses the unique aspects of wargames with more than two interacting players, especially where communication can occur. This can introduce `king-making', in which a third player can decide which of the others achieves a victory objective, and negotiation between players can become vital to win~\cite{elias2012characteristics}. \subsection{Scenario Variability}\label{sect:ScenarioVar} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{Dota2} & \textbf{$\mu$RTS} & \textbf{MuJoCo}\\ & & \textbf{/Go} & \textbf{GVGAI} & & & & \\ \hline Scenario & Scenario & Fixed & Scenario & Scenario & Scenario & Scenario & Fixed \\ \hline \end{tabular} \caption{Scenario categorisation.} \label{table:Scenario} \end{table} In most classic board games the start state is always the same, whereas in wargames the same underlying rules may be played out over a wide range of initial scenarios. In Table~\ref{table:Scenario} games can relatively cleanly be split into those that have some concept of `scenario' (for Atari games these are different `levels', which use the same underlying rules but with different maps), and those which have a single fixed set-up. In the case of DOTA2 there is little difference between maps compared to RTS games and wargames; however, a similar level of variability arises from the combinations of different heroes that form the two teams, and with a slight stretch we can consider these different opposing teams as different `scenarios'. The biggest risk of using techniques with a single fixed set-up is over-fitting to the map. This is not an issue in Chess or Go, in which we want to overfit; learning a policy able to generalise well to a 7$\times$7 board, or with half the number of starting pawns, would be wasteful.
In wargames, as in RTS games generally, we are only interested in AI techniques that can generalise across scenario variations. As a note of caution, some papers in the literature use games that support scenarios/levels, but only report results that are trained and tested on one specific set-up. Planning techniques are in general more robust to scenario variability than methods without a planning component, such as \emph{vanilla} RL. In particular, both the MicroRTS and GVGAI competitions put the trained agent through multiple maps, at least some of which are unseen prior to the competition. It is noticeable that the winning entries in both competitions (see~\cite{Perez-Liebana_Samothrakis_Togelius_Schaul_Lucas_Couetoux_Lee_Lim_Thompson_2016, Ontanon_Barriga_Silva_Moraes_Lelis_2018} for a summary of these) use planning and search-based methods with little direct RL, even though the decision time for each move is only in the range of 40-100 milliseconds on a single processor. The MicroRTS winners tend to use a two-level hierarchical approach to select amongst scripts to use for different units (these are discussed in more detail in Section~\ref{sect:unitTechniques}). The headline work from DeepMind and OpenAI on Starcraft II and Dota2 using Deep RL does support multiple `scenarios' in the sense of having different heroes, maps and opposing races. However, these are all available at the start and included in the training process. There is little work showing good performance with Deep RL on unseen environments, although this is an area of active research interest in terms of transfer learning. This does not mean that Deep RL is not useful in the context of wargames, but it is not suitable for completely unseen scenarios: any substantive change to a scenario will require some retraining of a Deep RL model for good performance. The issues around this overfitting in Deep RL are discussed in more detail, along with mitigation approaches, in Sections~\ref{sec:continual_learning} and~\ref{sect:overfit}. \subsection{Objective Variability}\label{sect:ObjectiveVar} In a wargame the objective that a player is striving for may change as the game progresses. For example, due to changes or setbacks on other fronts, a player attempting to take a strategic location may be ordered to fall back and reinforce elsewhere. This can cause the `reward' function or win condition to change drastically. Wargames such as CMANO come with a large number of scenarios, and these can potentially be played out with a number of different objectives for each player. This presents a significant generalisation problem for Deep RL algorithms, which are normally trained with a specific objective in mind; transferring what they learn during training on one objective so as to act in pursuit of a different objective is likely to remain a research challenge for some time. If this variability is incorporated during training, then it is less of a problem. This could be done via a multi-objective approach, or by incorporating the possible objectives into a VP-scale (see Section~\ref{sect:WinConditions}), as sketched below. Planning/search methods with a forward model will be better at adjusting to changing objectives where this is needed. None of the considered benchmarks really incorporates any variation of game objectives in this way.
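One lightweight way to incorporate objective variability at training time is to condition both the reward function and the agent's observation on the currently active objective, and to sample objective changes during training episodes. The following Python sketch illustrates the idea; the objective names, state fields and victory-point weights are purely hypothetical, not taken from any of the benchmarks or wargames discussed here.
\begin{verbatim}
import random
from dataclasses import dataclass

# Hypothetical objectives a scenario might issue mid-game.
OBJECTIVES = ["take_location", "preserve_force", "fall_back_and_reinforce"]

@dataclass
class StepOutcome:
    ground_gained: float      # territory taken this step
    friendly_losses: float    # friendly units lost this step
    distance_to_rally: float  # distance from the ordered rally point

def objective_reward(outcome: StepOutcome, objective: str) -> float:
    """Victory-point style reward conditioned on the active objective.
    The weights are illustrative; mis-specifying them re-introduces the
    reward-balance problems discussed under Win Conditions."""
    if objective == "take_location":
        return outcome.ground_gained - 0.5 * outcome.friendly_losses
    if objective == "preserve_force":
        return -2.0 * outcome.friendly_losses
    if objective == "fall_back_and_reinforce":
        return -outcome.distance_to_rally - 0.5 * outcome.friendly_losses
    raise ValueError(f"unknown objective: {objective}")

def episode_objectives(n_steps: int, switch_prob: float = 0.02):
    """Yield one objective per step, occasionally switching mid-episode,
    so the policy trains under objective variability, not a fixed goal."""
    objective = random.choice(OBJECTIVES)
    for _ in range(n_steps):
        if random.random() < switch_prob:
            objective = random.choice(OBJECTIVES)
        yield objective
\end{verbatim}
The key design point is that the active objective must also be visible to the agent (e.g., appended to its observation vector); otherwise the reward signal appears non-stationary from the agent's perspective.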
\subsection{Player Asymmetry}\label{sect:PlayerAsymmetry} \begin{table}[] \begin{tabular}[t]{|l|l|l|l|l|l|l|} \hline \textbf{CMANO} & \textbf{Flashpoint} & \textbf{Chess/Go} & \textbf{Atari/} & \textbf{Starcraft II} & \textbf{MicroRTS} & \textbf{MuJoCo}\\ & & & \textbf{GVGAI} & \textbf{+ Dota2} & & \\ \hline Asymmetric & Asymmetric & Symmetric & - & Asymmetric & Symmetric & - \\ \hline \end{tabular} \caption{Player asymmetry categorisation. Atari and MuJoCo are 1-player environments.} \label{table:Asymmetry} \end{table} This refers to asymmetry in player capabilities and roles. For example, although chess may have a slight first-player advantage, the action space is the same for each player, and the game is well balanced and largely symmetric. On the other hand, in the Ms Pac-Man versus ghosts competition, AI agents could play as Ms Pac-Man or as the entire team of ghosts; hence the objectives and the action spaces are both asymmetric. Wargames are in general highly asymmetric, with differences in capabilities, unit compositions, starting positions and ultimate objectives between the two sides. Even the RTS games in Table~\ref{table:Asymmetry} are relatively symmetric compared to wargames; in Starcraft II there are four possible factions with specific units, but each game starts with a similar set-up and both sides have the same objective of destroying the other. In MicroRTS the same units and technology are shared by both sides. Dota2 might have very different teams of 5 heroes on each side, but the map is essentially symmetric, as is the goal of each side to destroy the other's base. High levels of asymmetry prevent the easy use of some AI techniques, especially those that rely on self-play in RL. In this approach the AI bootstraps its playing ability by using itself, or a slightly older version of itself, as the opponent. However, the policy that the Blue side is learning is not \textit{a priori} going to work well if used to model the actions of an asymmetric Red side. \subsection{Features specific to wargames}\label{sect:wargameSpecific} This section discusses some requirements of practical wargames that are not directly linked to the game-play itself, but to the underlying purpose of running the wargame and the efficient generation of desired outcomes. \subsubsection{Instrumentation} A frequently desired outcome of a wargame is to understand key decision points in a game rather than just the win-lose result; the end-user here is the wargame analyst. For example, a rare event such as the early sinking of Blue's only aircraft carrier due to a lucky missile hit may mean that playing out the rest of the scenario is of limited value. What is important to know is that there is a risk of this happening, perhaps even quantified as occurring in 2-5\% of games. Finding such an insight can require excessive data mining, and it is easy to miss. Wargames hence require rich instrumentation of the trajectory that an individual game takes, to facilitate anomaly detection of this sort of situation and to reduce the work required to locate common branch points. \subsubsection{Moderation} Computer-moderated wargames always have options for human intervention to adjust the state of play at any point and/or back-track to a previous point. This is required when one side is being played by humans, to make efficient use of their time.
If a low-probability, high-impact event occurs, such as the aircraft carrier loss of the previous example, then the decision may be made to branch at that point, log the result, and then continue play in a counterfactual universe. This need for human moderation is less of an issue if all sides are AI-controlled. \subsubsection{Player Constraints} One issue with human players in wargames is a tendency to `play the game' rather than `play the problem'. An example is the `last turn effect', in which players may gamble everything on a chance of victory even if in the real world this would leave them heavily exposed to future enemy action~\cite{rubel2006epistemology}. This is an issue to which AI agents are particularly prone without careful specification of the reward signal they use, as outlined in Section~\ref{sect:WinConditions}. A related requirement is often for players to be `credible' in terms of following operational constraints, such as orders from further up the command hierarchy or standard operating procedures for the side in question. This only applies in some types of wargame; in Concept Development the objective is to deliberately explore diverse and unexpected strategies with new equipment or capabilities. We discuss useful techniques for this later in Section~\ref{sect:Exploration}. One issue with practical wargames is that the valuable time of participating human experts can be taken up with micro-management of units. This detracts from the central aim of the wargame: to analyse or train human decision-making. A helpful tool in some wargames would be an AI that controls units once they have been given a high-level order. It is essential that this behaviour is realistic, as naive use of A* pathfinding and simple scripted AI has shown problems with units moving directly from A to B without paying attention to available cover or the unexpected appearance of enemy forces; a sketch of a cover-aware path cost follows below. \vfill
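To illustrate the last point, the sketch below shows one simple way a movement order could be made cover-aware: the A* step cost blends the base movement cost with a per-tile exposure penalty. This is a minimal, hypothetical example rather than code from any wargame engine; a real implementation would derive the exposure estimate from line-of-sight to known or suspected enemy positions and would also need to react to new contacts while moving.
\begin{verbatim}
import heapq
from itertools import count

def cover_aware_astar(grid_cost, exposure, start, goal, exposure_weight=5.0):
    """A* on a grid where the step cost blends base movement cost with an
    exposure penalty, so units prefer covered routes over straight lines.

    grid_cost[y][x] -- base cost of entering tile (x, y), assumed >= 1 so
                       the Manhattan heuristic stays admissible
    exposure[y][x]  -- 0..1 estimate of how exposed tile (x, y) is to fire
    (Both inputs and the weighting are illustrative assumptions.)"""
    height, width = len(grid_cost), len(grid_cost[0])

    def h(p):  # Manhattan-distance heuristic
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    tie = count()  # tie-breaker so the heap never compares nodes directly
    frontier = [(h(start), next(tie), 0.0, start, None)]
    came_from, best_g = {}, {start: 0.0}
    while frontier:
        _, _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            break
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if not (0 <= nx < width and 0 <= ny < height):
                continue
            ng = g + grid_cost[ny][nx] + exposure_weight * exposure[ny][nx]
            if ng < best_g.get((nx, ny), float("inf")):
                best_g[(nx, ny)] = ng
                heapq.heappush(
                    frontier,
                    (ng + h((nx, ny)), next(tie), ng, (nx, ny), node))

    if goal not in came_from:
        return []  # goal unreachable
    path, node = [], goal
    while node is not None:  # walk parents back to the start
        path.append(node)
        node = came_from[node]
    return path[::-1]
\end{verbatim}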
\section{Appendix} \subsection{Notation Summary} \begin{table}[htp] \centering \caption{Notation}\label{tab:notation} \begin{tabular}{c|l} \hline Notation & Definition \\ \hline\hline $\mathcal{D}_s=\{(x_i^s, t_i^s)\}_{i=1}^{n_s}$ & Training set with $n_s$ examples in the central server \\ $\mathcal{D}_{test}$ & Test data set in the central server \\ $\mathcal{D}_k= \{(x_i^k, t_i^k)\}_{i=1}^{n_k}$ & Local training set with $n_k$ examples in the $k^{\mathrm{th}}$ client\\ $C$ & Number of label classes in the training set, i.e., $t\in \mathbb{R}^C$ \\ $K$ & Number of clients \\ $f(\cdot)$ & Learning algorithm \\ $L(\cdot,\cdot)$ & Loss function over prediction and target \\ $y_*, y_m$ & Optimal prediction and main prediction \\ $B(\cdot), V(\cdot), N(\cdot)$ & Expected bias, variance and irreducible noise \\ $\mathbb{E}[\cdot]$ & Expected value \\ $\mathcal{D}_p$ & Candidate training set for poisoning attack \\ $w_k$ & Local model parameters over $\mathcal{D}_k$ \\ $w_G$ & Server's global model parameters \\ $f_{\mathcal{D}_k}(x; w_k)$ ($f_{\mathcal{D}_k}$ for short) & Local model prediction on $x$ with parameters $w_k$ \\ $\mathrm{Aggregate}(w_1,w_2,\cdots,w_K)$ & Synchronous aggregation function over local parameters \\ $B(\cdot; w_1,\cdots,w_K)$ & Empirical estimated bias over training sets $\{\mathcal{D}_1, \cdots,\mathcal{D}_K\}$ \\ $V(\cdot; w_1,\cdots,w_K)$ & Empirical estimated variance over training sets $\{\mathcal{D}_1, \cdots,\mathcal{D}_K\}$ \\ $E$ & Number of local epochs \\ $F$ & Fraction of clients selected on each round \\ $B$ & Batch size of local client \\ $\eta$ & Learning rate \\ $\Omega(\cdot)$ & Perturbation constraint \\ $\epsilon$ & Perturbation magnitude \\ $\hat{x}$ & Adversarial counterpart of an example $x$, i.e., $\hat{x}\in \Omega(x)$ \\ $H(\cdot)$ & Entropy \\\hline \end{tabular} \end{table} \subsection{Proof of Theorems} {\bf Proof of Theorem~\ref{thm:empirical_estimation}}. Theorem~\ref{thm:empirical_estimation} states that, assuming $L(\cdot,\cdot)$ is the cross-entropy loss function, the empirical estimated main prediction $y_m$ for an input example $(x,t)$ has the following closed-form expression: \begin{equation*} y_m(x; w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) \end{equation*} Furthermore, the empirical bias and variance as well as their gradients over the input $x$ are estimated as: \begin{equation*} \begin{aligned} B(x; w_1,\cdots,w_K) = \frac{1}{K} \sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), t) &~~\mathrm{and}~~ V(x; w_1,\cdots,w_K) = L(y_m, y_m) = H(y_m) \\ \nabla_x B(x; w_1,\cdots,w_K) &= \frac{1}{K} \sum_{k=1}^K \nabla_x L(f_{\mathcal{D}_k}(x;w_k), t) \\ \nabla_x V(x; w_1,\cdots,w_K) &= - \frac{1}{K}\sum_{k=1}^K \sum_{j=1}^C (\log{y_m^{(j)}} + 1)\cdot \nabla_{x} f_{\mathcal{D}_k}^{(j)}(x;w_k) \end{aligned} \end{equation*} where $H(y_m) = -\sum_{j=1}^C y_m^{(j)}\log{y_m^{(j)}}$ is the entropy of the main prediction $y_m$ and $C$ is the number of classes. \begin{proof} We first calculate the main prediction.
Let \begin{equation*} \begin{aligned} \mathcal{M} &= \frac{1}{K}\sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), y') \\ &= -\frac{1}{K}\sum_{k=1}^K \big( f_{\mathcal{D}_k}(x;w_k) \cdot \log{y'} + (1- f_{\mathcal{D}_k}(x;w_k)) \cdot \log{(1-y')} \big) \end{aligned} \end{equation*} Taking the derivative of $\mathcal{M}$ with respect to $y'$, we have \begin{equation*} \begin{aligned} \frac{\partial \mathcal{M}}{\partial y'} &= -\frac{1}{K}\sum_{k=1}^K \Big( \frac{f_{\mathcal{D}_k}(x; w_k)}{y'} - \frac{1- f_{\mathcal{D}_k}(x;w_k)}{1-y'} \Big) = -\frac{1}{K}\sum_{k=1}^K \frac{f_{\mathcal{D}_k}(x;w_k) - y'}{y'(1-y')} \end{aligned} \end{equation*} By setting $\partial \mathcal{M} / \partial y' = 0$, we obtain \begin{equation*} y_m(x; w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) \end{equation*} Then for the bias and variance, we have \begin{equation*} \begin{aligned} B(x; w_1,\cdots,w_K) &= L \Big(\frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x;w_k), t \Big) \\ &= -\Big(\frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x;w_k) \cdot \log{t} + \Big( 1-\frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x;w_k) \Big) \cdot \log(1-t)\Big) \\ &= -\frac{1}{K}\sum_{k=1}^K \Big( f_{\mathcal{D}_k}(x;w_k) \cdot \log{t} + (1- f_{\mathcal{D}_k}(x;w_k)) \cdot \log(1-t) \Big) \\ &= \frac{1}{K} \sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), t) \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} V(x) &= \frac{1}{K} \sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), y_m) \\ &= -\frac{1}{K} \sum_{k=1}^K \Big( f_{\mathcal{D}_k}(x;w_k) \log{y_m} + (1- f_{\mathcal{D}_k}(x;w_k)) \log(1-y_m) \Big) \\ &= -y_m \log{y_m} - (1- y_m) \log(1-y_m) \\ &= L(y_m, y_m) \\ &= H(y_m) \end{aligned} \end{equation*} which completes the proof. \end{proof} \subsection{Additional Experiments} \subsubsection{MSE v.s. Cross-entropy}\label{sec:mse_ce} Both the cross-entropy and mean squared error (MSE) loss functions can be used for training a neural network model. In our paper, the loss function of the neural network determines the derivation of the bias and variance terms used for producing the adversarial examples.
More specifically, we show that when using the cross-entropy loss function, the empirical estimates of the bias and variance as well as their gradients over the input $x$ are as follows: \begin{equation*} \begin{aligned} B_{CE}(x; w_1,\cdots,w_K) = \frac{1}{K} \sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), t) &~~\mathrm{and}~~ V_{CE}(x; w_1,\cdots,w_K) = H(y_m) \\ \nabla_x B_{CE}(x; w_1,\cdots,w_K) &= \frac{1}{K} \sum_{k=1}^K \nabla_x L(f_{\mathcal{D}_k}(x;w_k), t) \\ \nabla_x V_{CE}(x; w_1,\cdots,w_K) &= - \frac{1}{K}\sum_{k=1}^K \sum_{j=1}^C (\log{y_m^{(j)}} + 1)\cdot \nabla_{x} f_{\mathcal{D}_k}^{(j)}(x;w_k) \end{aligned} \end{equation*} In a similar way, we show that when using the MSE loss function, the empirical estimates of the bias and variance as well as their gradients over the input $x$ are as follows: \begin{equation*} \begin{aligned} B_{MSE}(x; w_1,\cdots,w_K) &= ||\frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) - t||_2^2 \\ V_{MSE}(x; w_1,\cdots,w_K) &= \frac{1}{K-1} \sum_{k=1}^K ||f_{\mathcal{D}_k}(x;w_k) - \frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k)||_2^2 \end{aligned} \end{equation*} and \begin{equation*} \begin{aligned} &\nabla_x B_{MSE}(x; w_1,\cdots,w_K) = 2\left(\frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) - t \right) \left( \frac{1}{K} \sum_{k=1}^K \nabla_x f_{\mathcal{D}_k}(x; w_k) \right) \\ &\nabla_x V_{MSE}(x; w_1,\cdots,w_K)\\ &= \frac{2}{K-1} \sum_{k=1}^K \left(\left( f_{\mathcal{D}_k}(x;w_k) - \frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) \right) \left(\nabla_x f_{\mathcal{D}_k}(x;w_k) - \frac{1}{K} \sum_{k=1}^K \nabla_x f_{\mathcal{D}_k}(x; w_k)\right)\right) \end{aligned} \end{equation*} It can be seen that the empirical estimate of $\nabla_x B_{MSE}(x; w_1,\cdots,w_K)$ has a much higher computational complexity than $\nabla_x B_{CE}(x; w_1,\cdots,w_K)$, because it involves the gradient calculation of the prediction vector $f_{\mathcal{D}_k}(x; w_k)$ over the input tensor $x$. Likewise, the empirical estimate of $\nabla_x V_{MSE}(x; w_1,\cdots,w_K)$ is computationally expensive. We empirically compare the cross-entropy and MSE loss functions in our framework. Table~\ref{mse_ce} reports the adversarial robustness of our decentralized framework w.r.t. the FGSM attack ($\epsilon=0.3$) on MNIST with the IID setting. It can be observed that (1) our framework with the MSE loss function has a significantly larger running time; and (2) the robustness of our framework becomes slightly weaker when using the MSE loss function, which might be induced by the weakness of the MSE loss in the classification setting. \begin{table}[htp] \centering \begin{tabular}{ccccc} \hline \multirow{2}{*}{} & \multirow{2}{*}{Clean} & \multicolumn{3}{c}{{{Decent\_BVA}}{}} \\ \cline{3-5} & & BiasOnly & VarianceOnly & BVA \\ \hline Cross-entropy & 0.5875 (38.13s) & 0.7627 (47.58s) & 0.7594 (63.46s) & 0.7756 (63.67s) \\ MSE & 0.6011 (39.67s) & 0.7112 (65.03s) & 0.7108 (162.40s) & 0.7119 (179.60s) \\ \hline \end{tabular} \caption{Adversarial robustness with different loss functions and running time (second/epoch)}\label{mse_ce} \end{table} \subsubsection{BV-PGD v.s. BV-FGSM} Our bias-variance attack can be naturally generalized to any gradient-based adversarial attack algorithm for which the gradients of the bias $B(\cdot)$ and variance $V(\cdot)$ w.r.t. $x$ are tractable to estimate from finite training sets. Here we empirically compare the adversarial robustness of our framework with BV-PGD and with BV-FGSM. Table~\ref{fgsm_pgd} reports our results w.r.t.
FGSM and PGD attacks ($\epsilon=0.3$) on MNIST with the IID and non-IID settings. Compared to FedAvg, our framework {{Decent\_BVA}}{} with either BV-FGSM or BV-PGD can largely improve the model robustness against adversarial noise. \begin{table}[htp] \centering \setlength\tabcolsep{1pt} \begin{tabular}{ccccccccccc} \hline \multicolumn{2}{c}{\multirow{2}{*}{}} & \multicolumn{4}{c}{IID} & & \multicolumn{4}{c}{non-IID} \\ \cline{3-6} \cline{8-11} \multicolumn{2}{c}{} & Clean & FGSM & PGD-10 & PGD-20 & & Clean & FGSM & PGD-10 & PGD-20 \\ \hline FedAvg & - & 0.9863 & 0.5875 & 0.6203 & 0.2048 & & 0.9462 & 0.1472 & 0.5254 & 0.0894 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}{{Decent\_BVA}}{} \\ (BV-FGSM training)\end{tabular}} & BiasOnly & 0.9840 & 0.7627 & 0.7671 & 0.5154 & & 0.9597 & 0.6510 & 0.6226 & 0.3614 \\ & VarianceOnly & 0.9834 & 0.7594 & 0.7616 & 0.5253 & & 0.9577 & 0.5979 & 0.5990 & 0.3504 \\ & BVA & 0.9837 & 0.7756 & 0.7927 & 0.5699 & & 0.9671 & 0.6696 & 0.6953 & 0.4717 \\ \hline \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}{{Decent\_BVA}}{}\\ (BV-PGD training)\end{tabular}} & BiasOnly & 0.9817 & 0.7578 & 0.8455 & 0.6422 & & 0.9690 & 0.6585 & 0.7868 & 0.5749 \\ & VarianceOnly & 0.9850 & 0.7419 & 0.8226 & 0.6076 & & 0.9688 & 0.6293 & 0.7587 & 0.5299 \\ & BVA & 0.9849 & 0.7565 & 0.8399 & 0.6317 & & 0.9670 & 0.6592 & 0.7836 & 0.5752 \\ \hline \end{tabular} \caption{Adversarial robustness with BV-PGD and BV-FGSM}\label{fgsm_pgd} \end{table} \subsubsection{Parameter Analysis} In this part, we perform a parameter analysis of a few key hyper-parameters that have a high influence on model performance. Our target is to train a decentralized model whose robustness is high while its accuracy is retained; for that purpose, the following three sets of experimental plots guide our choice of the hyper-parameters used in the experiments. \begin{figure}[!h] \hspace{-3mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/numShare_plot1.png} \vspace{-6mm} \caption{\small Accuracy of clean training with varying $n_s$} \label{fig_num_share1} \end{minipage}% \hfill \hspace{1mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/numShare_plot2.png} \vspace{-6mm} \caption{\small Robustness under FGSM attack with varying $n_s$} \label{fig_num_share2} \end{minipage} \hfill \hspace{0mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/numShare_plot3.png} \vspace{-6mm} \caption{\small Robustness under PGD-20 attack with varying $n_s$} \label{fig_num_share3} \end{minipage} \vspace{-62mm} \end{figure} {\bfseries Number of shared perturbed samples $n_s$}. From Fig.~\ref{fig_num_share1}, we see that \textit{FedAvg} (i.e., $n_s=0$) has the best accuracy, as expected. For \textit{{{Decent\_BVA}}} with varying sizes of asymmetrically transmitted perturbed samples (i.e., $n_s=8, 16, 32, 64$), the performance drops slightly with increasing $n_s$ (an average drop of $0.05\%$ per plot). In comparison, the robustness on the test set $\mathcal{D}_{test}$ increases dramatically with increasing $n_s$ (the increase ranges from $18\%$ to $22\%$ under the FGSM attack and from $15\%$ to $60\%$ under the PGD-20 attack). However, a large $n_s$ yields high model robustness at the price of a high communication cost. In our experiments, we choose $n_s=64$ as the trade-off point.
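To make the cross-entropy estimators above concrete, the following is a minimal PyTorch-style sketch of the one-step bias-variance attack (BV-FGSM). It assumes each client model maps a batch of inputs to class probabilities and that inputs lie in $[0,1]$; these conventions and all variable names are our own illustrative assumptions, not the reference implementation.
\begin{verbatim}
import torch

def bv_fgsm(client_models, x, t, eps=0.3, lam=1.0):
    """One-step bias-variance attack (BV-FGSM) sketch.

    client_models -- list of K local models, each assumed to map a batch
                     x to class probabilities of shape (B, C)
    x -- clean inputs in [0, 1], shape (B, ...); t -- integer labels (B,)
    Bias: average cross-entropy of the K client predictions w.r.t. t.
    Variance: entropy H(y_m) of the main (average) prediction."""
    x_adv = x.clone().detach().requires_grad_(True)
    probs = torch.stack([m(x_adv) for m in client_models])  # (K, B, C)
    y_m = probs.mean(dim=0)                                 # main prediction
    logp = torch.log(probs.clamp_min(1e-12))
    index = t.view(1, -1, 1).expand(len(client_models), -1, 1)
    bias = -logp.gather(2, index).mean()        # (1/K) sum_k CE(f_k(x), t)
    variance = -(y_m * torch.log(y_m.clamp_min(1e-12))).sum(dim=1).mean()
    # Backprop through bias + lam * variance to get the input gradient.
    # (Client parameters also receive gradients here; harmless in a sketch.)
    (bias + lam * variance).backward()
    with torch.no_grad():
        x_out = (x + eps * x_adv.grad.sign()).clamp(0.0, 1.0)
    return x_out.detach()
\end{verbatim}
A BV-PGD variant simply iterates this step, projecting back onto $\Omega(x)$ after each update.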
\begin{figure}[!h] \hspace{-3mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/momentum_plot1.png} \vspace{-6mm} \caption{\small Accuracy of clean training with varying momentum} \label{fig_momentum1} \end{minipage}% \hfill \hspace{1mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/momentum_plot2.png} \vspace{-6mm} \caption{\small Robustness under FGSM attack with varying momentum} \label{fig_momentum2} \end{minipage} \hfill \hspace{0mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/momentum_plot3.png} \vspace{-6mm} \caption{\small Robustness under PGD-20 attack with varying momentum} \label{fig_momentum3} \end{minipage} \vspace{-55mm} \end{figure} {\bfseries Momentum}. We also care about the choice of options in the SGD optimizer's settings. As seen in Fig.~\ref{fig_momentum1}, the accuracy of clean training increases monotonically with momentum. Interestingly, we also observe that the decentralized model is less vulnerable on the test set $\mathcal{D}_{test}$ when the momentum is large, regardless of whether the adversarial attack is FGSM or PGD-20; see Fig.~\ref{fig_momentum2} and Fig.~\ref{fig_momentum3}. This trend holds monotonically as we vary the momentum from $0.1$ to $0.9$, which leads us to set the momentum to $0.9$ in all experiments. \begin{figure}[!h] \hspace{-3mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/local_epoch_plot1.png} \vspace{-6mm} \caption{\small Accuracy of clean training with varying local epoch} \label{fig_localEpoch1} \end{minipage}% \hfill \hspace{1mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/local_epoch_plot2.png} \vspace{-6mm} \caption{\small Robustness under FGSM attack with varying local epoch} \label{fig_localEpoch2} \end{minipage} \hfill \hspace{0mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4.5cm,height=4cm]{figures/local_epoch_plot3.png} \vspace{-6mm} \caption{\small Robustness under PGD-20 attack with varying local epoch} \label{fig_localEpoch3} \end{minipage} \vspace{-60mm} \end{figure} {\bfseries Local epochs $E$}. The third important factor in decentralized training is the number of local epochs $E$ to use. In Fig.~\ref{fig_localEpoch1}, we clearly see that more local epochs on each client lead to a more accurate aggregated server model. Similarly, robustness against both the FGSM and PGD-20 attacks on the test set $\mathcal{D}_{test}$ is also best when the number of local epochs is large. Hence, in our experiments, if the on-device computational cost is not very high (large example size, models with many layers), we choose $E=50$; otherwise, we reduce $E$ to a smaller number accordingly. \subsection{Network Architectures} For the MNIST and Fashion-MNIST data sets, the CNN network structure is shared, since the input images have the same dimensions. The details are shown in Table~\ref{arch_cnn_mnist}. Conv1 and Conv2 denote convolution blocks that may have one or more convolution layers; e.g., $[5 \times 5, 10] \times 1$ denotes one convolution layer with 10 filters of size $5 \times 5$. \begin{table}[h!]
\centering \begin{tabular}{|l|c|} \hline \textbf{Layers} & \textbf{CNN-MNIST} \\ \hline \hline Conv1 & {[}5 $\times$ 5, 10{]} $\times$ 1 \\ \hline Pool1 & 2 $\times$ 2 Max Pooling, Stride 2 \\ \hline Conv2 & {[}5 $\times$ 5, 20{]} $\times$ 1 \\ \hline Dropout & 0.5 \\ \hline Pool2 & 2 $\times$ 2 Max Pooling, Stride 2 \\ \hline FC1 & 50 \\ \hline FC2 & 10 \\ \hline \end{tabular} \vspace{2mm} \caption{CNN architecture for MNIST and Fashion-MNIST} \label{arch_cnn_mnist} \end{table} For the CIFAR data set, the CNN network structure is shown in Table~\ref{arch_cnn_cifar}. It should be noted that state-of-the-art approaches have achieved better test accuracy on CIFAR; nevertheless, this model is sufficient for our experimental needs, since our goal is to evaluate the robust decentralized model rather than to achieve the best possible test accuracy. \begin{table}[!t] \centering \begin{tabular}{|l|c|} \hline \textbf{Layers} & \textbf{CNN-CIFAR} \\ \hline \hline Conv1 & {[}3 $\times$ 3, 32{]} $\times$ 1 \\ \hline Conv2 & {[}3 $\times$ 3, 64{]} $\times$ 1 \\ \hline Pool1 & 2 $\times$ 2 Max Pooling, Stride 2 \\ \hline Conv3 & {[}3 $\times$ 3, 128{]} $\times$ 2 \\ \hline Pool2 & 2 $\times$ 2 Max Pooling, Stride 2 \\ \hline Dropout & 0.05 \\ \hline Conv4 & {[}3 $\times$ 3, 256{]} $\times$ 2 \\ \hline Pool3 & 2 $\times$ 2 Max Pooling, Stride 2 \\ \hline Dropout & 0.1 \\ \hline FC1 & 1024 \\ \hline FC2 & 512 \\ \hline Dropout & 0.1 \\ \hline FC3 & 10 \\ \hline \end{tabular} \vspace{2mm} \caption{CNN architecture for CIFAR-10} \label{arch_cnn_cifar} \end{table} \section{Introduction} \vspace{-2mm} The explosive amount of decentralized user data collected from the ever-growing usage of smart devices, e.g., smartphones, wearable devices, home sensors, etc., has led to a surge of interest in the field of decentralized learning. To protect the privacy-sensitive data of the clients, privacy-preserving decentralized learning~\cite{mcmahan2017communication, fl_yangqiang} has been proposed. It decentralizes model learning by allowing a large number of clients to train local models using their own data, and then collectively merges the clients' models on a central server via secure aggregation~\cite{homomorphic_encryption}. Privacy-preserving decentralized learning has attracted much attention in recent years with the prevalence of efficient light-weight deep models~\cite{mobilenet} and low-cost network communications~\cite{TernGrad}. In decentralized learning, the central server can only inspect the secure aggregation of the local models as a whole. Consequently, it is susceptible to corrupted updates from the clients (system failures, adversarial manipulations, etc.). Recently, multiple robust decentralized learning models~\cite{robust_fed_localPoisoning, robust_agg_fed, robust_fed_weightTruncate, robust_fed_hyperParameter} have been proposed. However, these works only focus on performing client-level model poisoning attacks or designing server-level aggregation variants with hyper-parameter tuning, and largely ignore the underlying cause of decentralized learning's vulnerability, which stems from the server's generalization error. Our work bridges this gap by investigating the loss incurred during the secure aggregation of decentralized learning from the perspective of bias-variance decomposition~\cite{pedro2000unified, valentini2004bias}.
Specifically, we show that untargeted adversarial attacks on the central server can be decomposed into a bias attack and a variance attack over multiple local clients, where the bias measures the loss triggered by the main prediction of these clients, and the variance measures the variation among the clients' model predictions. In this way, we can perform adversarial training on the local models by receiving a small amount of bias-variance perturbed data from the server via asymmetrical communication. The experiments are conducted on neural networks with the cross-entropy loss; however, other loss functions are also allowed as long as their bias and variance gradients are tractable to estimate from the local clients' models. Consequently, any gradient-based adversarial training strategy can be used by taking the bias-variance oriented adversarial examples into consideration, e.g., the bias-variance based FGSM and PGD proposed in this paper. Compared with previous work, our major contributions include: \begin{itemize}[leftmargin=*,noitemsep,nolistsep] \item We give the exact solution of the bias-variance analysis of the generalization error for neural-network-based decentralized learning. \item Without violating the clients' privacy, we show that providing a tiny amount of bias-variance perturbed data to the clients through asymmetrical communication can significantly improve the robustness of the server model. In contrast, the conventional decentralized learning framework is vulnerable to strong attacking methods as the number of communication rounds increases. \item We conduct extensive experiments to demonstrate the robustness of our proposed framework against adversarial attacks. \end{itemize} \vspace{-2mm} \section{Preliminaries} \vspace{-2mm} \subsection{Notation} \vspace{-2mm} In decentralized learning, there are a central server and $K$ different clients, each with access to a private training set $\mathcal{D}_k= \{(x_i^k, t_i^k)\}_{i=1}^{n_k}$, where $x_i^k$, $t_i^k$, and $n_k$ are the features, label, and number of training examples in the $k^{\mathrm{th}}$ client $(k=1,2,\cdots,K)$. The raw data $\mathcal{D}_k$ must not be shared with the server or with other clients. In addition, there is a small public training set $\mathcal{D}_s=\{(x_j^s, t_j^s)\}_{j=1}^{n_s}$ with $n_s$ training examples on the server, which can be shared with clients. The goal of decentralized learning is to train a global classifier $f(\cdot)$ using knowledge from all the clients such that it generalizes well over the test data $\mathcal{D}_{test}$. The notation used in this paper is summarized in the Appendix (see Table \ref{tab:notation}). \vspace{-2mm} \subsection{Problem Definition}\label{sec:problem_definition} \vspace{-2mm} Privacy-preserving decentralized learning trains machine learning models without directly accessing the clients' raw data. A large number of clients can collaborate in achieving the learning objective under the coordination of a central server, which aggregates the asynchronous local clients' parameters~\cite{kairouz2019advances}. In this paper, we study the adversarial robustness of neural networks\footnote{Our theoretical contribution mainly focuses on classification using neural networks with cross-entropy loss and mean squared loss. However, the proposed framework is generic to allow the use of other classification loss functions as well.} in the decentralized learning setting, and we formulate robust decentralized learning as follows.
\begin{definition} \textbf{(Robust Decentralized Learning)} \\ \indent\textbf{\em Input:} (1). A set of private training data $\{\mathcal{D}_k\}_{k=1}^K$ on $K$ different clients; (2). Limited training data $\mathcal{D}_s$ on the central server; (3). Learning algorithm $f(\cdot)$ and loss function $L(\cdot, \cdot)$. \indent\textbf{\em Output:} A trained model on the central server that is robust against adversarial perturbation. \end{definition} We would like to point out that our problem definition has the following properties: \begin{itemize}[leftmargin=*,noitemsep,nolistsep] \item {\bf Asymmetrical communication:} In our design, asymmetrical communication between each client and the central server is allowed: the server provides both the global model parameters and limited training data to the clients, while each client uploads only its local parameters back to the server. \item {\bf Data distribution:} In this paper, we assume that all training examples of the clients and the server follow the same data distribution. However, the experiments show that our proposed algorithm also achieves satisfactory performance in the non-IID setting, which is typical in real scenarios where personalized clients (e.g., users' mobile devices~\cite{hard2018federated}) produce non-IID local data sets. \item {\bf Shared learning algorithm:} We assume that all clients use an identical model $f(\cdot)$, including the model architecture as well as the hyper-parameters (e.g., learning rate, local epochs, local batch size), which can be assigned by the central server. \end{itemize} \vspace{-2mm} \subsection{Bias-Variance Trade-off} \vspace{-2mm} Following~\cite{pedro2000unified,valentini2004bias}, we define the optimal prediction, main prediction as well as the bias, variance and noise for any real-valued loss function $L(\cdot,\cdot)$ as follows: \begin{definition}({\bf Optimal Prediction and Main Prediction}) Given a loss function $L(\cdot, \cdot)$ and learning algorithm $f(\cdot)$, the optimal prediction $y_*$ and main prediction $y_m$ for an example are defined as follows: \begin{equation} y_*(x) = \arg\min_{y} \mathbb{E}_t[L(y,t)] \quad\mathrm{and}\quad y_m(x) = \arg\min_{y'} \mathbb{E}_{\mathcal{D}}[L(f_{\mathcal{D}}(x), y')] \end{equation} where $\mathcal{D}$ is the training set and $f_{\mathcal{D}}$ denotes the model trained on $\mathcal{D}$. In short, the main prediction is the prediction whose average loss, relative to all the predictions over training sets drawn from the data distribution, is minimal; e.g., the main prediction for the zero-one loss is the mode of the predictions. In this work, we show that the main prediction is the average prediction of the client models for the mean squared loss and the cross-entropy loss in Section~\ref{BVD_gradients}. \end{definition} \begin{definition}({\bf Bias, Variance and Noise}) Given a loss function $L(\cdot, \cdot)$ and learning algorithm $f(\cdot)$, the expected loss $\mathbb{E}_{\mathcal{D},t}[L(f_{\mathcal{D}}(x), t)]$ for an example $x$ can be decomposed\footnote{This decomposition is based on the weighted sum of bias, variance, and noise.} into bias, variance and noise as follows: \begin{equation}\label{eq:bias_variance_noise} B(x) = L(y_m, y_*) \quad\mathrm{and}\quad V(x) = \mathbb{E}_{\mathcal{D}}[L(f_{\mathcal{D}}(x), y_m)] \quad\mathrm{and}\quad N(x) = \mathbb{E}_{t}[L(y_*, t)] \end{equation} \end{definition} In short, bias is the loss incurred by the main prediction w.r.t. the optimal prediction, and variance is the average loss incurred by predictions w.r.t. the main prediction.
Noise is conventionally assumed to be irreducible and independent of $f(\cdot)$. \begin{remark} Our definitions of optimal prediction, main prediction, bias, variance and noise differ slightly from previous ones~\cite{pedro2000unified,valentini2004bias}. For example, the conventional optimal prediction was defined as $y_*(x) = \arg\min_{y} \mathbb{E}_t[L(t, y)]$, which is equivalent to our definition when the loss function is symmetric in its arguments, i.e., $L(y_1, y_2) = L(y_2, y_1)$; this equivalence does not hold for non-symmetric loss functions. \end{remark} \vspace{-2mm} \section{The Proposed Framework} \vspace{-2mm} A typical framework~\cite{kairouz2019advances} of privacy-preserving decentralized learning can be summarized as follows: (1) {\em Client Update:} Each client updates its local model parameters $w_k$ by minimizing the empirical loss over its own training set; (2) {\em Forward Communication:} Each client uploads its model updates to the central server; (3) {\em Server Update:} The server synchronously aggregates the received parameters; (4) {\em Backward Communication:} The global parameters are sent back to the clients. Our framework follows the same paradigm but with substantial modifications: {\bfseries Server Update}. The server has two components. The first component uses the FedAvg~\cite{mcmahan2017communication} algorithm to aggregate the local models' parameters, i.e., $w_G = \mathrm{Aggregate}(w_1,w_2,\cdots,w_K) = \sum_{k=1}^K \frac{n_k}{n}w_k$ where $n = \sum_{k=1}^K n_k$ and $w_k$ denotes the model parameters of the $k^{\mathrm{th}}$ client. The second component is designed to produce adversarially perturbed examples, generated by a poisoning attack algorithm, for use in robust adversarial training. It has been well studied~\cite{belkin2019reconciling,pedro2000unified,valentini2004bias} that in the classification setting, the generalization error of a learning algorithm on an example is determined by the irreducible noise, bias and variance terms as defined in Eq.~(\ref{eq:bias_variance_noise}). Similar to previous work, we also assume a noise-free learning scenario where the class label $t$ is a deterministic function of $x$ (i.e., if $x$ is sampled repeatedly, the same value of its class $t$ will be observed). This motivates us to generate adversarial examples by attacking the bias and variance terms induced by the clients' models: \begin{equation} \label{eq:maximization} \max_{\hat{x}\in\Omega(x)} B(\hat{x}; w_1,\cdots,w_K) + \lambda V(\hat{x}; w_1,\cdots,w_K) \quad \forall (x,t)\in \mathcal{D}_s \end{equation} where $B(\hat{x}; w_1,\cdots,w_K)$ and $V(\hat{x}; w_1,\cdots,w_K)$ can be empirically estimated from a finite number of local training sets $\{\mathcal{D}_1, \mathcal{D}_2, \cdots, \mathcal{D}_K\}$. Here $\lambda$ is a hyper-parameter that controls the trade-off between bias and variance, and $\Omega(x)$ is the perturbation constraint. Note that $\mathcal{D}_s$, which resides on the server, is the candidate subset of all available training examples from which the perturbed counterparts are generated. This is a more feasible setting compared to generating adversarial examples on the clients' devices, because in real scenarios the server usually has much more powerful computational capacity, which allows the use of flexible poisoning attack algorithms.
In this case, both the poisoned examples and the server model parameters are sent back to each client ({\em Backward Communication}), while only the clients' local parameters are uploaded to the server ({\em Forward Communication}), i.e., the {\em asymmetrical communication} between the clients and the server discussed in Section \ref{sec:problem_definition}. {\bfseries Client Update}. The robust training of one client's prediction model (i.e., $w_k$) can be formulated as the following minimization problem: \begin{equation} \label{eq:minimization} \min_{w_k} \left( \sum_{i=1}^{n_k} L(f_{\mathcal{D}_k}(x_i^k; w_k), t_i^k) + \sum_{j=1}^{n_s} L(f_{\mathcal{D}_k}(\hat{x}_j^s; w_k), t_j^s) \right) \end{equation} where $\hat{x}_j^s\in\Omega(x_j^s)$ is the perturbed counterpart of the clean example $x_j^s$ asymmetrically transmitted from the server. \begin{remark} Intuitively, in noise-free scenarios, bias measures the systematic loss of a learning algorithm, and variance measures the prediction consistency of the learner over different training sets. Therefore, our robust decentralized learning framework has the following advantages: (i) it encourages the clients to consistently produce the optimal prediction for perturbed examples, thereby leading to better generalization performance; (ii) local adversarial training on perturbed examples allows a robust local model to be learned, and thus a robust global model can be aggregated from the clients. \end{remark} Theoretically, there is another possible robust decentralized training strategy: \begin{equation}\label{eq:local_robsut} \min_{w_k} \sum_{i=1}^{n_k} \max_{\hat{x}_i^k\in\Omega(x_i^k)} L(f(\hat{x}_i^k; w_k), t_i^k) \quad \forall k \in \{1,2, \cdots, K\} \end{equation} where the candidate set of training examples that may be perturbed for each client $k$ is its own local set $\mathcal{D}_k$. Following~\cite{madry2018towards,tramer2018ensemble}, this framework can be intuitively interpreted as a composition of an inner maximization and an outer minimization problem. Specifically, the inner maximization problem aims to synthesize the adversarial counterparts of clean examples, while the outer minimization problem finds the optimal model parameters over the perturbed training examples. In this way, each local robust model is trained individually and the global server model is aggregated from them. One major limitation of this method is that poisoning attacks on the clients could largely increase the computational cost and memory usage. This might not be feasible given a client's limited storage and computational capacity (e.g., a mobile device~\cite{hard2018federated}) in real scenarios. \vspace{-2mm} \section{Algorithm} \vspace{-2mm} \subsection{Bias-Variance Attack} \label{BVD_gradients} \vspace{-2mm} We first consider the maximization problem in Eq.~(\ref{eq:maximization}) using bias-variance based adversarial attacks. It aims to find the adversarial example $\hat{x}$ (from the original example $x$) that produces large bias and variance values w.r.t. the clients' local models. Specifically, the perturbation constraint $\hat{x}\in \Omega(x)$ forces the adversarial example $\hat{x}$ to be visually indistinguishable from $x$. Here we consider the well-studied $l_{\infty}$-bounded adversaries~\cite{goodfellow2014explaining,madry2018towards,tramer2018ensemble} such that $\Omega(x) := \{\hat{x} \big| ||\hat{x} - x||_{\infty} \leq \epsilon \}$ for a perturbation magnitude $\epsilon$.
Furthermore, we propose the following two gradient-based algorithms to generate adversarial examples with bounded $l_{\infty}$ norm. {\bf Bias-variance based Fast Gradient Sign Method (BV-FGSM):} Following FGSM~\cite{goodfellow2014explaining}, it linearizes the maximization problem in Eq.~(\ref{eq:maximization}) with a one-step attack as follows: \begin{equation}\label{BVD-FGSM} \hat{x}_{\mathrm{BV-FGSM}} := x + \epsilon \cdot \mathrm{sign}\left( \nabla_x \left(B(x; w_1,\cdots,w_K) + \lambda V(x; w_1,\cdots,w_K) \right) \right) \end{equation} {\bf Bias-variance based Projected Gradient Descent (BV-PGD):} PGD can be considered a multi-step variant of FGSM~\cite{kurakin2016adversarial} and can generate more powerful adversarial examples. This motivates us to derive a BV-based PGD attack: \begin{equation}\label{BVD-PGD} \hat{x}_{\mathrm{BV-PGD}}^{l+1} := \mathrm{Proj}_{\Omega(x)} \left( \hat{x}^l + \epsilon \cdot \mathrm{sign}\left( \nabla_{\hat{x}^l} \left(B(\hat{x}^l; w_1,\cdots,w_K) + \lambda V(\hat{x}^l; w_1,\cdots,w_K) \right) \right) \right) \end{equation} where $\hat{x}^l$ is the adversarial example at the $l^{\mathrm{th}}$ step with the initialization $\hat{x}^0 = x$, and $\mathrm{Proj}_{\Omega(x)}(\cdot)$ projects each step back onto $\Omega(x)$. \begin{remark} The proposed framework can be naturally generalized to any gradient-based adversarial attack algorithm for which the gradients of the bias $B(\cdot)$ and variance $V(\cdot)$ w.r.t. $x$ are tractable when estimated from finite training sets. Compared with existing attack methods~\cite{carlini2017towards,goodfellow2014explaining,kurakin2016adversarial,moosavi2016deepfool}, the loss function our adversary aims to optimize is a linear combination of bias and variance, whereas existing work largely focuses on attacking the overall classification error, which considers bias only. \end{remark} The following theorem states that the bias $B(\cdot)$ and variance $V(\cdot)$, as well as their gradients over the input $x$, can be estimated using the clients' models. \begin{theorem}\label{thm:empirical_estimation} Assume that $L(\cdot,\cdot)$ is the cross-entropy loss function; then the empirical estimated main prediction $y_m$ for an input example $(x,t)$ has the following closed-form expression: \begin{equation*} y_m(x; w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) \end{equation*} Furthermore, the empirical bias and variance as well as their gradients over the input $x$ are estimated as: \begin{equation*} \begin{aligned} B(x; w_1,\cdots,w_K) = \frac{1}{K} \sum_{k=1}^K L(f_{\mathcal{D}_k}(x;w_k), t) &~~\mathrm{and}~~ V(x; w_1,\cdots,w_K) = L(y_m, y_m) = H(y_m) \\ \nabla_x B(x; w_1,\cdots,w_K) &= \frac{1}{K} \sum_{k=1}^K \nabla_x L(f_{\mathcal{D}_k}(x;w_k), t) \\ \nabla_x V(x; w_1,\cdots,w_K) &= - \frac{1}{K}\sum_{k=1}^K \sum_{j=1}^C (\log{y_m^{(j)}} + 1)\cdot \nabla_{x} f_{\mathcal{D}_k}^{(j)}(x;w_k) \end{aligned} \end{equation*} where $H(y_m) = -\sum_{j=1}^C y_m^{(j)}\log{y_m^{(j)}}$ is the entropy of the main prediction $y_m$ and $C$ is the number of classes. \end{theorem} In addition to the commonly used cross-entropy loss in neural networks, we also consider the case where $L(\cdot, \cdot)$ is the mean squared error (MSE) loss function.
Then, the empirical estimated main prediction $y_m$ for an input example $(x,t)$ has the following closed-form expression: \begin{equation*} y_m(x; w_1,\cdots,w_K) = \frac{1}{K}\sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) \end{equation*} Furthermore, the empirical bias and unbiased variance are estimated as: \begin{equation*} \begin{aligned} B(x; w_1,\cdots,w_K) &= ||\frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k) - t||_2^2 \\ V(x; w_1,\cdots,w_K) &= \frac{1}{K-1} \sum_{k=1}^K ||f_{\mathcal{D}_k}(x;w_k) - \frac{1}{K} \sum_{k=1}^K f_{\mathcal{D}_k}(x; w_k)||_2^2 \end{aligned} \end{equation*} \begin{remark} In this paper, we focus on the robust decentralized learning problem using neural network models with the cross-entropy loss function. Notice that in our framework, the decentralized training of neural networks with the MSE loss function might have weaker robustness and higher computational complexity compared to the cross-entropy loss function (see Appendix~\ref{sec:mse_ce}). \end{remark} \vspace{-2mm} \subsection{{{Decent\_BVA}}} \vspace{-2mm} We present a novel robust \underline{decent}ralized learning algorithm with our proposed \underline{b}ias-\underline{v}ariance \underline{a}ttacks, named \textit{{{Decent\_BVA}}}. Following the framework in Eq.~(\ref{eq:maximization}) and Eq.~(\ref{eq:minimization}), the key components of our \begin{minipage}{0.46\textwidth} \begin{algorithm}[H] \centering \caption{{{Decent\_BVA}}}\label{algorithm:RobustFed} \begin{algorithmic}[1] \STATE {\bfseries Input:} $K$ (number of clients, with local data sets $\{\mathcal{D}_k\}_{k=1}^K$ indexed by $k$); $f$ (learning model), $E$ (number of local epochs); $F$ (fraction of clients selected on each round); $B$ (batch size of local client); $\eta$ (learning rate); $\mathcal{D}_{s}$ (data set on server); $\epsilon$ (perturbation magnitude). \STATE {\bfseries Initialization:} Initialize $w_G^0$ and $\hat{\mathcal{D}}_s = \emptyset$ \FOR{each round $r=1,2,\cdots$} \STATE $m=\max(F\cdot K, 1)$ \STATE $S_r \leftarrow$ randomly sampled $m$ clients \FOR{each client $k\in S_r$ in parallel} \STATE $w^{r}_k \leftarrow$ {\bfseries ClientUpdate}($w_G^{r-1}, \hat{\mathcal{D}}_s, k$) \ENDFOR \STATE $\hat{\mathcal{D}}_s \leftarrow$ {\bfseries BVAttack}($\mathcal{D}_s, \{w_{k}^{r}\} | k\in S_r$) \STATE $w_G^r \leftarrow$ {\bfseries Aggregate}($w_{k}^{r} | k\in S_r$) \ENDFOR \RETURN $w_G$ \end{algorithmic} \end{algorithm} \end{minipage} \hfill \begin{minipage}{0.52\textwidth} \begin{algorithm}[H] \centering \caption{ClientUpdate({$w, \hat{\mathcal{D}}_s, k$})}\label{ClientUpdate} \begin{algorithmic}[1] \STATE Initialize $k^{\mathrm{th}}$ client's model with $w$ \STATE $\mathcal{B} \leftarrow$ split $\mathcal{D}_k \cup \hat{\mathcal{D}}_s$ into batches of size $B$ \FOR{each local epoch $i=1,2,\cdots,E$} \FOR{local batch $(x, t) \in \mathcal{B}$} \STATE $w \leftarrow w - \eta \nabla L(f_{\mathcal{D}_k}(x; w), t)$ \ENDFOR \ENDFOR \RETURN $w$ \end{algorithmic} \end{algorithm} \vspace{-6mm} \begin{algorithm}[H] \centering \caption{BVAttack({$\mathcal{D}_s, \{w_{k}^{r}\} | k\in S_r$})}\label{BVDAttack} \begin{algorithmic}[1] \STATE Initialize $\hat{\mathcal{D}}_s = \emptyset$ \FOR{$(x, t) \in \mathcal{D}_s$} \STATE Estimate the gradients $\nabla_x B(x)$ and $\nabla_x V(x)$ using Theorem \ref{thm:empirical_estimation} \STATE Calculate $\hat{x}$ using Eq.
(\ref{BVD-FGSM}) or (\ref{BVD-PGD}) and add it to $\hat{\mathcal{D}}_s$ \ENDFOR \RETURN $\hat{\mathcal{D}}_s$ \end{algorithmic} \end{algorithm} \end{minipage} algorithm are: bias-variance attacks for generating adversarial examples on the server, and adversarial training with poisoned server examples and clean local examples on each client. Therefore, a robust decentralized learning model can be optimized by iteratively updating the adversarial examples and the local model parameters. The proposed algorithm is summarized in Alg.~\ref{algorithm:RobustFed}. It takes the server's data $\mathcal{D}_{s}$ and the clients' training data $\{\mathcal{D}_k\}_{k=1}^K$ as input, and outputs a robust global model on the server. To begin with, it initializes the server's model parameters $w_G$ and the perturbed data $\hat{\mathcal{D}}_s$, and then distributes them to the randomly selected clients (Steps 4-5). Each client optimizes its own local model with the received parameters $w$ and $\hat{\mathcal{D}}_s$ (Steps 6-8), and then uploads the updated parameters back to the server. The server then updates the perturbed data $\hat{\mathcal{D}}_s$ (Step 9) using our proposed bias-variance attack algorithm (see Alg.~\ref{BVDAttack}) and the global model parameters (Step 10) using an aggregation method, e.g., FedAvg~\cite{mcmahan2017communication}. Specifically, for the client update (see Alg.~\ref{ClientUpdate}), each client initializes its local model $f_{\mathcal{D}}(\cdot)$ with the received global parameters $w_G$, and then optimizes the local model with its own clean data $\mathcal{D}_k$ and the received data $\hat{\mathcal{D}}_s$. Following~\cite{mcmahan2017communication,xie2019dba}, considering the neural network model $f(\cdot)$, we use Stochastic Gradient Descent (SGD) to train for $E$ epochs with local learning rate $\eta$ and batch size $B$ on each client. \vspace{-2mm} \section{Experiments} \vspace{-2mm} In this section, we evaluate the adversarial robustness of our proposed algorithm on three benchmark data sets: MNIST\footnote{\scriptsize \url{http://yann.lecun.com/exdb/mnist}}, Fashion-MNIST\footnote{\scriptsize \url{https://github.com/zalandoresearch/fashion-mnist}}, and CIFAR-10\footnote{\scriptsize \url{https://www.cs.toronto.edu/~kriz/cifar.html}}. \begin{figure}[!t] \begin{minipage}[c][11cm][t]{.49\textwidth} \centering \includegraphics[width=6cm,height=4cm]{figures/BVD_gradients_2.png} \vspace{-2mm} \caption{\small Visualizations of bias, variance, bias+variance, and perturbed images for MNIST.} \label{fig_visual_bvd} \end{minipage}% \hfill \begin{minipage}[c][11cm][t]{.49\textwidth} \centering \includegraphics[width=5.5cm,height=4cm]{figures/bias_variance_CNN_MNIST.png} \vspace{-2mm} \caption{\small Bias-variance curve w.r.t. the CNN model complexity on MNIST.} \label{curve_bvd_cnn} \end{minipage} \vspace{-66mm} \end{figure} \vspace{-2mm} \subsection{Baselines} \vspace{-2mm} The baseline models we use include: (1). \textit{FedAvg}: the classical federated averaging model~\cite{mcmahan2017communication}. (2). \textit{Decent\_Baseline}: a simplified version of our proposed method in which the local clients are robustly trained using~\cite{madry2018towards}, but the asymmetrically transmitted perturbed data are generated using the gradients from FedAvg's aggregation on the server. (3)-(5).
\textit{Decent\_Bias, Decent\_Variance, Decent\_BVA}: our proposed methods, similar to \textit{Decent\_Baseline}, but the asymmetrically transmitted perturbed data are generated using the gradients from the bias-only attack, variance-only attack, and bias-variance attack, respectively. (6). \textit{FedAvg\_Robust\_Local}: each client performs its own adversarial training using Eq.~(\ref{eq:local_robsut}); their model updates are then aggregated on the server using \textit{FedAvg}. (7). \textit{Decent\_BVA\_Local}: a combination of baselines (5) and (6). Note that the latter two baselines have high computational requirements on the client devices and may not be applicable in real scenarios. \begin{figure}[!t] \hspace{-2mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=5cm,height=4cm]{figures/fashionMNIST_plot1.png} \vspace{-6mm} \caption{Convergence on Fashion-MNIST} \label{fig_convergence_fashion} \end{minipage}% \hfill \hspace{3mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=5cm,height=4cm]{figures/fashionMNIST_plot2.png} \vspace{-6mm} \caption{Performance on Fashion-MNIST} \label{fig_performance_fashion} \end{minipage} \hfill \hspace{2mm} \begin{minipage}[c][11cm][t]{.3\textwidth} \centering \includegraphics[width=4cm,height=4cm]{figures/fashionMNIST_plot3.png} \vspace{-2.5mm} \caption{Efficiency on Fashion-MNIST} \label{fig_efficiency_fashion} \end{minipage} \vspace{-67mm} \end{figure} \vspace{-2mm} \subsection{Setting} \vspace{-2mm} Regarding the defense model architecture, we use a 4-layer CNN (2 convolutional layers and 2 fully connected layers) for MNIST and Fashion-MNIST. For CIFAR-10, a 9-layer CNN (6 convolutional layers and 3 fully connected layers) is used. The detailed designs are given in the Appendix. Training is performed using the SGD optimizer with a fixed learning rate of $0.01$ and momentum of $0.9$. The total number of clients is 100 and the fraction of clients sampled to perform training in each round is $F=0.1$; the asymmetrically transmitted data are randomly sampled with size $n_s=64$; the local batch size $B$ of each client is $64$, and the local training epochs $E$ of each client are $50, 50, 10$ for the three data sets, respectively. We empirically demonstrate that these hyper-parameter settings are preferable in terms of both training accuracy and robustness; see the details in Fig.~\ref{fig_num_share1} - Fig.~\ref{fig_localEpoch3} in the Appendix. For the adversarial training of the clients, the perturbed examples are generated by poisoning the training set using our proposed BV-FGSM attack algorithm. To evaluate the robustness of our decentralized learning algorithm against existing adversarial attacks, in addition to clean model training, we perform FGSM~\cite{goodfellow2014explaining}, 10-step PGD~\cite{kurakin2016adversarial}, and 20-step PGD~\cite{kurakin2016adversarial} attacks against the aggregated server model on the test set $\mathcal{D}_{test}$. Following~\cite{goodfellow2014explaining, defense_quanquangu}, the maximum perturbations allowed are $\epsilon=0.3$ for MNIST and Fashion-MNIST, and $\epsilon=\frac{8}{255}$ for CIFAR-10. For IID sampling, the data is first shuffled and then partitioned into 100 parts, one per client; for the non-IID setting, the data is divided into 200 shards based on sorted example labels, and each client is assigned 2 shards. In this case, each client holds data from at most two classes.
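For concreteness, the IID and non-IID partitioning just described can be sketched as follows (a NumPy sketch under our own naming assumptions):
\begin{verbatim}
import numpy as np

def partition_clients(labels, num_clients=100, iid=True,
                      shards_per_client=2, seed=0):
    """Split example indices among clients as described above: IID shuffles
    and splits evenly; non-IID sorts by label, cuts the sorted indices into
    num_clients * shards_per_client shards, and deals each client two shards,
    so each client sees (roughly) at most two classes."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(labels))
    if iid:
        rng.shuffle(idx)
        return np.array_split(idx, num_clients)
    idx = idx[np.argsort(labels, kind="stable")]   # sort indices by label
    shards = np.array_split(idx, num_clients * shards_per_client)
    order = rng.permutation(len(shards))           # random shard assignment
    return [np.concatenate([shards[order[c * shards_per_client + j]]
                            for j in range(shards_per_client)])
            for c in range(num_clients)]
\end{verbatim}
With MNIST's 60,000 training examples, this yields 200 shards of 300 label-sorted examples each, so a client holding two shards sees roughly two digit classes.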
\vspace{-2mm} \subsection{Result Analysis} \vspace{-2mm} To analyze the properties of our proposed \textit{{{Decent\_BVA}}} framework, in Fig.~\ref{fig_visual_bvd} we first visualize the extracted gradients of the bias attack, variance attack, and bias-variance attack. Notice that the gradients of bias and variance are similar but with subtle differences in local pixel areas. However, according to Theorem~\ref{thm:empirical_estimation}, the gradient calculations of the two are quite different: bias requires the target label as input, whereas variance only needs the model outputs and the main prediction. From another perspective, we also investigate the relationship between bias-variance magnitude and model complexity. As shown in Fig.~\ref{curve_bvd_cnn}, with increasing model complexity (more convolutional filters in the CNN), both bias and variance decrease. This result differs from the double-descent curve or bell-shaped variance curve reported in~\cite{belkin2019reconciling, yang2020rethinking}. The reasons are twofold: first, their bias-variance definitions take the MSE regression decomposition perspective, whereas our decomposition uses the concept of the main prediction and decomposes the generalization error from the classification perspective; second, their implementations only evaluate bias and variance using training batches on one central model, and thus differ from the definition, which requires the variance to be estimated from multiple sub-models (in our scenario, the client models). The convergence plot of all baseline methods is presented in Fig.~\ref{fig_convergence_fashion}. We observe that \textit{FedAvg} has the best convergence, and all robust training variants have a slightly higher loss upon convergence. This matches the observations in~\cite{madry2018towards}, which states that training performance may be sacrificed to provide robustness for small-capacity networks. As for the robustness performance shown in Fig.~\ref{fig_performance_fashion}, we observe that the aggregation of decentralized learning is vulnerable to adversarial attacks, since both \textit{FedAvg} and \textit{FedAvg\_Robust\_Local} show decreasing performance with an increasing number of server-client communication rounds. The other baselines, which utilize asymmetrical communication, show increasing robustness with more communication rounds, even though only a small number of perturbed examples ($n_s=64$) are transmitted. We also observe that once the number of communication rounds reaches 70, \textit{{{Decent\_BVA}}} starts to match the performance of \textit{FedAvg\_Robust\_Local}, while the latter is more resource-demanding, as shown in Fig.~\ref{fig_efficiency_fashion}, where the pie plot size represents running time. Overall, asymmetrical communication with perturbations increases model robustness, and bias-variance based adversarial training is more effective for decentralized learning. For the comprehensive experiments shown in Table~\ref{tableResults:MNIST-IID-NONIID}, it is easy to verify that our proposed model outperforms all other baselines regardless of the source of the perturbed examples (i.e., locally generated as in \textit{FedAvg\_Robust\_Local} or asymmetrically transmitted from the server as in \textit{{{Decent\_BVA}}}).
Compared with the standard robust decentralized training baseline \textit{Decent\_Baseline}, the performance of \textit{{{Decent\_BVA}}} against adversarial attacks still increases by $4\% - 15\%$ and $10\% - 16\%$ in the IID and non-IID settings respectively, even though \textit{{{Decent\_BVA}}} is theoretically suited to the case where clients have IID samples. In Table~\ref{tableResults:Fashion-CIFAR}, we observe a similar trend, where \textit{{{Decent\_BVA}}} outperforms \textit{Decent\_Baseline} on Fashion-MNIST (with $0.3\% - 5\%$ increases) and on CIFAR-10 (with $0.2\% - 0.4\%$ increases) when defending against different types of adversarial examples. For the baselines that use local adversarial training, we see only comparable results. Our conjecture is that local adversarial training is already able to generate high-quality perturbed examples, so adding similar ones from asymmetrical communication leaves little room for improving the model robustness. \begin{table}[!t] \centering \setlength\tabcolsep{2.9pt} \scalebox{0.9}{ \begin{tabular}{lccccccccc} \hline \multirow{2}{*}{} & \multicolumn{4}{c}{IID} & & \multicolumn{4}{c}{non-IID} \\ \cline{2-5} \cline{7-10} & Clean & FGSM & PGD-10 & PGD-20 & & Clean & FGSM & PGD-10 & PGD-20 \\ \hline FedAvg & \bf \underline{0.9863} & 0.5875 & 0.6203 & 0.2048 & & 0.9462 & 0.1472 & 0.5254 & 0.0894 \\ Decent\_Baseline & 0.9849 & 0.7374 & 0.7145 & 0.4158 & & 0.9596 & 0.5630 & 0.5948 & 0.3149 \\ Decent\_Bias & 0.9840 & 0.7627 & 0.7671 & 0.5154 & & 0.9597 & 0.6510 & 0.6226 & 0.3614 \\ Decent\_Variance & 0.9834 & 0.7594 & 0.7616 & 0.5253 & & 0.9577 & 0.5979 & 0.5990 & 0.3504 \\ Decent\_BVA & 0.9837 & \underline{0.7756} & \underline{0.7927} & \underline{0.5699} & & \bf \underline{0.9671} & \underline{0.6696} & \underline{0.6953} & \underline{0.4717} \\ \hline FedAvg\_Robust\_Local & 0.9747 & 0.9028 & 0.9268 & 0.8595 & & 0.9265 & 0.7695 & 0.8435 & 0.7422 \\ Decent\_BVA\_Local & 0.9739 & \bf 0.9185 & \bf 0.9329 & \bf 0.8874 & & 0.9543 & \bf 0.8059 & \bf0.8582 & \bf0.7447 \\ \hline \end{tabular}} \caption{Adversarial robustness on MNIST with IID and non-IID settings} \label{tableResults:MNIST-IID-NONIID} \vspace{-5mm} \end{table} \begin{table}[!t] \centering \setlength\tabcolsep{2.9pt} \scalebox{0.9}{ \begin{tabular}{lccccccccc} \hline \multirow{2}{*}{} & \multicolumn{4}{c}{Fashion-MNIST} & & \multicolumn{4}{c}{CIFAR-10} \\ \cline{2-5} \cline{7-10} & Clean & FGSM & PGD-10 & PGD-20 & & Clean & FGSM & PGD-10 & PGD-20 \\ \hline FedAvg & \bf \underline{0.8785} & 0.2971 & 0.0406 & 0.0188 & &\bf \underline{0.7887} & 0.1477 & 0.0337 & 0.0254 \\ Decent\_Baseline & 0.8628 & 0.5029 & 0.1977 & 0.1628 & & 0.7793 & 0.2079 & 0.0866 & 0.0753 \\ Decent\_Bias & 0.8655 & 0.5407 & 0.1934 & 0.1689 & & 0.7760 & 0.2043 & \underline{0.0894} & 0.0769 \\ Decent\_Variance & 0.8617 & 0.5334 & 0.1935 & 0.1628 & & 0.7810 & 0.1969 & 0.0799 & 0.0693 \\ Decent\_BVA & 0.8597 & \underline{0.5506} & \underline{0.2002} & \underline{0.1799} & & 0.7812 & \underline{0.2119} & 0.0890 & \underline{0.0771} \\ \hline FedAvg\_Robust\_Local & 0.8542 & \bf 0.7424 & 0.2880 & 0.1692 & & 0.7672 &0.2796 & 0.1929 & \bf 0.1892 \\ Decent\_BVA\_Local & 0.8358 & 0.7163 & \bf0.3973 & \bf0.2626 & & 0.7676 &\bf 0.2803 &\bf 0.1935 &\bf 0.1892 \\ \hline \end{tabular} } \caption{Adversarial robustness on Fashion-MNIST and CIFAR-10} \label{tableResults:Fashion-CIFAR} \vspace{-8mm} \end{table} \vspace{-2mm} \section{Related Work} \vspace{-2mm} {\bf Adversarial Machine Learning:} While machine learning models have achieved remarkable
performance on clean inputs, recent work~\cite{goodfellow2014explaining,szegedy2014intriguing} showed that trained models are vulnerable to adversarially chosen examples, obtained by adding imperceptible noise to the clean inputs. In general, the adversarial robustness of centralized machine learning models has been explored from the following aspects: adversarial attacks~\cite{biggio2012poisoning,mei2015using,carlini2017towards,shafahi2018poison,guo2019simple,athalye2018synthesizing,zhu2019transferable}, defense (or robust model training)~\cite{madry2018towards,tramer2018ensemble,wong2018provable,cohen2019certified,shafahi2019adversarial,tramer2019adversarial}, and interpretable adversarial robustness~\cite{schmidt2018adversarially,cullina2018pac,gilmer2019adversarial,yin2019rademacher,tsipras2018robustness}. {\bf Decentralized Learning:} Decentralized learning with preserved privacy~\cite{konevcny2016federated,mcmahan2017communication,hard2018federated} has become prevalent in recent years. Meanwhile, the vulnerability of decentralized learning to backdoor attacks has been explored in~\cite{bagdasaryan2018backdoor,bhagoji2019analyzing,xie2019dba}. Following this work, multiple robust decentralized learning models~\cite{robust_fed_localPoisoning, robust_agg_fed, robust_fed_weightTruncate, robust_fed_hyperParameter} have been proposed and studied. In this paper, we studied the underlying cause of decentralized learning's vulnerability from the perspective of bias-variance analysis, in sharp contrast to existing work focused on performing client-level model poisoning attacks or designing server-level aggregation variants with hyper-parameter tuning. {\bf Bias-Variance Decomposition:} Bias-Variance Decomposition (BVD)~\cite{geman1992neural} was originally introduced to analyze the generalization error of a learning algorithm. A generalized BVD \cite{pedro2000unified,valentini2004bias} was later studied in the classification setting, enabling flexible loss functions (e.g., squared loss, zero-one loss). More recently, the bias-variance trade-off was experimentally evaluated on modern neural network models~\cite{neal2018modern,belkin2019reconciling,yang2020rethinking}. \vspace{-2mm} \section{Conclusion} \vspace{-2mm} In this paper, we proposed a novel framework for robust decentralized learning, in which the loss incurred during the server's aggregation stage is dissected into a bias part and a variance part. Our approach improves model robustness through adversarial training, letting the server share a few perturbed samples with the clients via asymmetrical communications. Extensive experiments have been conducted to evaluate its performance from various aspects on several benchmark data sets. We believe that further exploration of this direction will lead to more findings on the robustness of decentralized learning. \newpage \section*{Broader Impact} In this paper, we introduced \textit{{{Decent\_BVA}}}, a decentralized learning approach that learns a shared prediction model robust against corrupted data collected from client devices. \textit{{{Decent\_BVA}}} could be applied to many types of real-life applications, including the internet of things, telecommunications, and medical diagnosis. In these domains, a growing trend of research is to collaboratively speed up model development with improved personalized features while preserving user privacy via decentralized learning.
Our research could be used to enhance the robustness of these applications, preventing the systematic collapse of the central model caused by a few compromised local client devices. We envision future research continuing along the direction of understanding the bias and variance incurred in the decentralized aggregation stage of model learning, especially their influence on user experience and privacy protection. \bibliographystyle{plain}
\section{Introduction}\label{sec:intro} The total mass density of low-mass galaxies flattens out at the center, showing what is called a {\em core}. This observational fact has been presented as a long-standing problem of the $\Lambda$CDM paradigm (e.g., see the recent reviews by \citeauthor{2015PNAS..11212249W}~\citeyear{2015PNAS..11212249W} and \citeauthor{2017Galax...5...17D}~\citeyear{2017Galax...5...17D}), since early numerical simulations including only dark matter (DM) predicted the existence of density cusps rather than cores in the inner regions of galaxies \cite[][]{1994Natur.370..629M}. A popular explanation of the so-called {\em core-cusp problem} relies on including baryon physics in the simulations, which, through gravity, couples baryon processes with DM. Explosive baryon-driven events at the centers of galaxies produce sudden changes of the gravitational potential which, integrated over time, turn the DM distribution from cusp to core \cite[][]{2010Natur.463..203G}. Alternatively, the core-cusp problem may also point to a failure of the cold DM hypothesis \citep[][]{2015PNAS..11212249W,2017Galax...5...17D}. Solutions include considering warm DM, so that its free-streaming velocities erase primordial fluctuations on small scales \citep{2000ApJ...542..622C}, or assuming self-interacting DM, so that the scattering between DM particles redistributes energy and momentum, generating inner cores \citep[][]{2000PhRvL..84.3760S}. Here we propose an alternative solution to the core-cusp problem based on the principle of maximum Tsallis entropy and the polytropes it leads to. For the theoretical reasons presented in Sect.~\ref{sec:polytropes}, polytropes may provide a good representation of the distribution of mass within galaxies, and they all have cores. Therefore, the question arises as to whether the cores of the polytropes reproduce the cores observed in the matter distribution of dwarf galaxies. Here we show that they do, without any free parameter (Sect.~\ref{sec:result}). Polytropes describe thermodynamic (or meta-stable) equilibrium configurations of self-gravitating systems under special conditions. Thus, our result suggests that these conditions are met in dwarf galaxies and may drive their internal structure (Sect.~\ref{sec:conclusions}). \section{Maximum-entropy self-gravitating systems and polytropes} \label{sec:polytropes} Galaxies are self-gravitating structures which, among all possible equilibrium configurations, choose only those consistent with a stellar mass surface density profile resembling a S\'ersic function \citep[e.g.,][]{2003ApJ...594..186B,2012ApJS..203...24V}\footnote{The S\'ersic functions include exponential disks, observed in dwarf galaxies \cite[e.g.,][]{1994A&AS..106..451D}, and {\em de Vaucouleurs} 1/4-profiles, characteristic of massive ellipticals \cite[e.g.,][]{1948AnAp...11..247D}.}. The settling into this particular configuration could be due either to some fundamental physical process (as happens with the velocities of the molecules in a gas) or to the initial conditions that gave rise to the system \citep{2008gady.book.....B}. The mass distribution in galaxies is currently explained as the outcome of initial conditions \citep{2014ApJ...790L..24C,2015ApJ...805L..16N,2017MNRAS.465L..84L,2020MNRAS.495.4994B}.
The option of a fundamental process determining the configuration is traditionally discredited because, following the principles of statistical physics, it should correspond to the most probable configuration of a self-gravitating system and, thus, it should result from maximizing the entropy. Using the classical Boltzmann-Gibbs entropy leads to a distribution with infinite mass and energy \citep{2008gady.book.....B,2008arXiv0812.2610P}, disfavoring this explanation. In the standard Boltzmann-Gibbs approach, however, the long-range forces that govern self-gravitating systems are not properly taken into account. Systems with long-range interactions admit long-lasting meta-stable states described by a maximum entropy formalism based on Tsallis ($S_q$) non-additive entropies \citep[][and references therein]{1988JSP....52..479T,2009insm.book.....T}. Observational evidence for the $S_q$ statistics has been found in connection with various astrophysical problems \citep{2013SSRv..175..183L,2013ApJ...777...20S}. In particular, the maximization under suitable constraints of the Tsallis entropy of a Newtonian self-gravitating N-body system leads to a polytropic distribution \citep{1993PhLA..174..384P,2005PhyA..350..303L}, which has finite mass and a shape closely resembling the DM distribution found in numerical simulations of galaxy formation \citep{2004MNRAS.349.1039N,2009PhyA..388.2321C}. In the current cosmological model, DM provides most of the gravitational pull needed for the ordinary matter to collapse, forming visible galaxies; thus, polytropes approximately describe the gravitational potential of galaxies. As shown below, the mass density associated with a polytrope always has a core. The question arises as to whether the cores of the polytropes reproduce the cores observed in the matter distribution of dwarf galaxies, thus providing an alternative view for solving the core-cusp problem (Sect.~\ref{sec:intro}). A polytrope of index $m$ is defined as the spherically-symmetric self-gravitating structure resulting from the solution of the Lane-Emden equation for the (normalized) gravitational potential $\psi$ \citep{1967aits.book.....C,2008gady.book.....B}, \begin{equation} \frac{1}{s^2}\frac{d}{ds}\Big(s^2\frac{d\psi}{ds}\Big)= % \begin{cases} -3\psi^m & \psi > 0,\\ 0 & \psi \le 0.\\ \end{cases} \label{eq:lane_emden} \end{equation} The symbol $s$ stands for the scaled radial distance in the 3D space, and the mass volume density is recovered from $\psi$ as \begin{equation} \rho(r) = \rho(0)\,\psi(s)^m, \label{eq:densityle} \end{equation} \begin{equation} r = b\, s, \label{eq:radius} \end{equation} % where $r$ stands for the physical radial distance and $\rho(0)$ and $b$ are two arbitrary constants. Equation~(\ref{eq:lane_emden}) is solved under the initial conditions $\psi(0)=1$ and $d\psi(0)/ds =0$\,\,\footnote{Equation~(\ref{eq:lane_emden}) also admits solutions with $d\psi(0)/ds \not= 0$, but those are discarded because they have infinite central density and total mass \citep[e.g.,][]{2008gady.book.....B}.}. Figure~\ref{fig:lane_emden} illustrates the variety of physically admissible polytropes, with the range of polytropic indexes \begin{equation} 3/2 \le m \le 5, \label{eq:nlimits} \end{equation} set because polytropes with $m\le 3/2$ are unstable or have infinite density and those with $m > 5$ have infinite mass \citep{1993PhLA..174..384P,2008gady.book.....B}.
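For reference, the numerical integration described in the next paragraph (and used to produce Fig.~\ref{fig:lane_emden}) can be sketched in a few lines of Python. This is an illustrative sketch of ours, assuming NumPy and SciPy, not the actual script behind the figure; the regularized value at $s=0$ uses $\psi''(0)=-\psi(0)^m$, which follows from Eq.~(\ref{eq:lane_emden}) by symmetry.
\begin{verbatim}
import numpy as np
from scipy.integrate import odeint   # odeint wraps the LSODA integrator

def rhs(y, s, m):
    # Lane-Emden equation as a first-order system y = (psi, dpsi/ds).
    psi, dpsi = y
    source = -3.0 * max(psi, 0.0) ** m        # zero once psi <= 0
    if s == 0.0:
        return [dpsi, source / 3.0]           # regularized center
    return [dpsi, source - 2.0 * dpsi / s]

s = np.linspace(0.0, 20.0, 4000)
profiles = {m: odeint(rhs, [1.0, 0.0], s, args=(m,))[:, 0]
            for m in (1.5, 2.0, 3.0, 4.0, 5.0)}
rho = {m: np.clip(psi, 0.0, None) ** m        # density via rho ~ psi**m
       for m, psi in profiles.items()}
\end{verbatim}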
\begin{figure} \centering \includegraphics[width=0.95\linewidth]{lane_emden2.pdf} \caption{ Volume mass density resulting from the numerical solutions of the Lane-Emden equation (polytropes). Polytropes are self-gravitating systems having maximum Tsallis entropy. The curves are normalized to the central density and to the half-mass radius ($r_{1/2}$). The examples show the range of physically plausible solutions, with the corresponding polytropic index given in the inset. } \label{fig:lane_emden} \end{figure} In order to compute the polytropes in Fig.~\ref{fig:lane_emden}, Eq.~(\ref{eq:lane_emden}) was split into a system of two first-order differential equations for $\psi$ and $d\psi/ds$, which were integrated from $s=0$ using {\tt LSODA} \citep{2019ascl.soft05021H} as implemented in {\em Python} ({\em scipy.odeint}). Note that all polytropes have cores, in the sense that $d\ln\rho/d\ln r\rightarrow 0$ when $r\rightarrow 0$. This property follows from the initial condition $d\psi(0)/ds=0$ and Eq.~(\ref{eq:densityle}), and it is shown by the density profiles displayed in Fig.~\ref{fig:lane_emden}. \section{Results}\label{sec:result} \begin{figure} \includegraphics[width=\linewidth]{core-casp1.pdf} \caption{Density profile observed in the inner regions of the 26 {\em Little Things} galaxies by \citet{2015AJ....149..180O} (the blue symbols and the blue region give the mean and the RMS dispersion among the different objects). To reduce scatter, the observed densities and radii are normalized to the density and radius where the logarithmic derivative of the circular velocity equals 0.3 ($d\log v_c/d\log r =0.3$), denoted as $\rho_{0.3}$ and $r_{0.3}$, respectively. Polytropes are parameter free in this representation (the solid lines, with the corresponding indexes given in the inset). The dashed line gives a best fit to the observed density using a NFW profile \citep{2015AJ....149..180O}, which does not follow the observed core. } \label{fig:core-casp1} \end{figure} Figure~\ref{fig:core-casp1} shows the state-of-the-art observation of galaxy cores in dwarf galaxies by \citet{2015AJ....149..180O}, based on 26 galaxies with stellar masses $6.5 \le \log(M_\star/M_\odot) \le 8.2$ (blue symbols, with the blue region giving the RMS dispersion among the different objects). The total density is inferred from the circular speed $v_c$ measured in the 21-cm hydrogen line; for axi-symmetric systems, the density is related to $v_c$ as \citep[e.g.,][]{2001ApJ...552L..23D} \begin{equation} \rho(r)= \frac{1}{ 4\pi G}\,\Big[\frac{v_c}{r}\Big]^2\,\Big[1+2\frac{d\log v_c}{d\log r}\Big], \label{eq:vc} \end{equation} where $G$ is the gravitational constant. The scatter of the 26 density profiles gets largely reduced when each individual profile is normalized to the radius and density where $d\log v_c/d\log r =0.3$, denoted as $r_{0.3}$ and $\rho_{0.3}$, respectively \citep{2015AJ....149..180O}. In addition to reducing the observational scatter, this normalization makes the comparison with polytropes parameter-free. The density $\rho(r)$ consistent with a polytrope of index $m$ (Eq.~[\ref{eq:densityle}]) depends on two parameters, $\rho(0)$ and $b$. Using Eqs.~(\ref{eq:radius}) and (\ref{eq:vc}), one can show that \begin{equation} \rho(x\,r_{0.3})\big/\rho_{0.3}=\psi^m(x\,s_{0.3})\big/\psi^m(s_{0.3}), \label{eq:scaling} \end{equation} where $x=r/r_{0.3}$ and $s_{0.3}$ is the value of $s$ corresponding to $r_{0.3}$, obtained from $\psi(s)$.
The right-hand side of Eq.~(\ref{eq:scaling}) does not depend on $\rho(0)$ or $b$, indicating that the same happens with the normalized density (the left-hand side of the equation), which, consequently, has no freedom in Fig.~\ref{fig:core-casp1}. Thus, the agreement between the observed and the predicted cores is particularly revealing, suggesting a true connection between polytropes and the inner structure of dwarf galaxies. \section{Conclusions}\label{sec:conclusions} We have shown that polytropes, resulting from the principle of maximum Tsallis entropy, reproduce without any tuning the cores observed in the matter distribution of dwarf galaxies. The genesis of these cores is currently interpreted as driven by the interplay between baryons and DM, so that repetitive baryon motions modify the overall gravitational potential and the associated matter distribution (Sect.~\ref{sec:intro}). We note that the two explanations are not in contradiction. They are consistent if the baryon-driven motions just shorten the time-scale needed to thermalize the global gravitational potential into a polytrope. Our study is focused on the central regions of the galaxies, but polytropes also work well in the outskirts. The outer parts are fully dominated by DM, and it has been repeatedly shown that polytropes can be fitted with Einasto profiles \citep[e.g.,][]{2006JCAP...06..008Z,2012MNRAS.423.2190S}, which fit well the outer parts of the DM profiles found in cosmological numerical simulations \citep[e.g.,][]{2004MNRAS.349.1039N,2005ApJ...624L..85M,2009PhyA..388.2321C}. In support of this, \citet{2015MNRAS.449.3645F} employ the maximum Tsallis entropy formalism to fit the radial dependence of $v_c$ in 24 galaxies with $8 \le \log(M_\star/M_\odot)\le 11$. $v_c(r)$ and $\rho(r)$ are interchangeable (Eq.~[\ref{eq:vc}]), so that the goodness of the fit at all radial distances also applies to $\rho(r)$ (even though \citeauthor{2015MNRAS.449.3645F} pay no specific attention to the cores studied here). The association between dwarf galaxies and maximum Tsallis entropy opens up the possibility of using the well-proven tool-kit of statistical mechanics to understand them \citep[][]{2008arXiv0812.2610P, 2013MNRAS.430..121P, 2013MNRAS.430.1578S}. Identifying galaxies with polytropes has a number of additional implications. Accurate mass profiles are needed to plan and interpret the astrophysical experiments aimed at disclosing the nature of DM. DM annihilation cross-sections depend on halo shape \cite[e.g.,][]{2018PhRvD..97f3013Z}, and precise DM profiles and their time evolution should help us distinguish between cold, warm, or self-interacting DM \citep[e.g.,][]{2015PNAS..11212249W,2016MNRAS.460.1214L}. The suite of mass models currently used in gravitational lensing studies does not include polytropes \cite[e.g.,][]{2001astro.ph..2341K}, but subtle details in the mass model are critically important when precise magnifications are needed, or when lensing is used to derive cosmological parameters \citep{2001AJ....122..103K,2007JCAP...07..006E}. The ability of polytropes to reproduce observed galaxy properties also has an impact on the statistical mechanics side. Comparison with the cosmic evolution of astronomical objects will shed new light on whether the $S_q$ entropies, besides providing H-functionals able to select particular steady-state solutions of the Vlasov equation \citep{2005PhyA..356..419C}, also have a deeper thermodynamical meaning for self-gravitating systems.
\begin{acknowledgements} Thanks are due to Jose Diego for discussions on the mass models used in gravitational lensing analysis, to Bruce Elmegreen for help and references on the stability of disk galaxies, and to Arianna Di Cintio for references on the {\em core-cusp problem}. JSA acknowledges support from the Spanish Ministry of Economy and Competitiveness (MINECO), project AYA2016-79724-C4-2-P (ESTALLIDOS). IT also acknowledges financial support from the European Union's Horizon 2020 research and innovation programme under Marie Sk\l odowska-Curie grant agreement No 721463 to the SUNDIAL ITN network, from the State Research Agency (AEI) of the Spanish Ministry of Science, Innovation and Universities (MCIU) and the European Regional Development Fund (FEDER) under the grant with reference AYA2016-77237-C3-1-P, from IAC projects P/300624 and P/300724, financed by the Ministry of Science, Innovation and Universities, through the State Budget and by the Canary Islands Department of Economy, Knowledge and Employment, through the Regional Budget of the Autonomous Community, and from the Fundaci\'on BBVA under its 2017 programme of assistance to scientific research groups, for the project {\em Using machine-learning techniques to drag galaxies from the noise in deep imaging}. \end{acknowledgements}
\section{Introduction} Berezin-Toeplitz quantization receives a lot of attention in the mathematical literature. Foundational ideas were introduced in papers by Berezin and in the works of Boutet de Monvel and Guillemin on Toeplitz operators. Substantial recent contributions were made by X. Ma, G. Marinescu, L. Polterovich, and many others. Typically, the setting involves Berezin-Toeplitz operators on a symplectic manifold, with a choice of compatible almost complex structure. There is a general question of how quantization depends on the choice of the almost complex structure, and there has been a fair amount of effort to investigate this issue. On a hyperk\"ahler manifold, one can ask a related but somewhat different question. Recall that a hyperk\"ahler manifold has a distinguished family of complex structures parametrized by $S^2$, and a corresponding family of K\"ahler forms. At most countably many of these K\"ahler structures are algebraic \cite[Proposition 2.2]{verbitsky:95}, but there exist compact hyperk\"ahler manifolds $(M,g,I_1, I_2,I_3)$ such that the K\"ahler manifold $(M,g,I_j)$ is complex projective for $j=1,2,3$. Examples of such $M$ are tori with linear complex structures and certain $K3$ surfaces. On such $M$, we have three Berezin-Toeplitz quantizations (each for a different symplectic form), and one can ask how to make one quantization out of these three. This was pursued in \cite{barron:17,castejon:16}, see also \cite[Ch.5]{barron:18}. For a smooth function $f$ on $M$ there are three Berezin-Toeplitz operators $T_f^{k;j}$, $k\in\ensuremath{{\mathbb N}}$, one on each of the K\"ahler manifolds $(M,g,I_j)$. The works \cite{barron:17, castejon:16} addressed various ways of constructing one linear operator out of these three, and the $k\to\infty$ properties of the resulting operators. Berezin-Toeplitz quantization on noncompact symplectic manifolds is certainly important, from both the mathematical and the physical points of view. The basic noncompact case of $\ensuremath{\mathbb C}^n$ is worked out in \cite{coburn:92}. Let us consider $\ensuremath{\mathbb R}^{4n}$ with the standard hyperk\"ahler structure $(g,I,J,K)$, where $g$ is the standard Euclidean metric, and $I$, $J$, $K$ are three linear complex structures on $\ensuremath{\mathbb R}^{4n}$ that satisfy the quaternion relations. Of course (as is the case in general for hypercomplex manifolds) there is a whole two-dimensional sphere of induced complex structures on $\ensuremath{\mathbb R}^{4n}$, \begin{equation} \label{2sphere} S^2 = \{ aI+bJ+cK \ | \ a,b,c\in\ensuremath{\mathbb R} , \ a^2+b^2+c^2=1\}. \end{equation} For each fixed $(a,b,c)$, an appropriate choice of complex coordinates identifies $(\ensuremath{\mathbb R}^{4n},aI+bJ+cK)$ with $\ensuremath{\mathbb C}^{2n}$; we then get Segal-Bargmann spaces with respect to these complex coordinates, and a Berezin-Toeplitz quantization (see the discussion after Theorem \ref{mainth}). Instead of following the approach of \cite{barron:17, castejon:16}, we pursue the objective of bringing together all these quantizations (not just three). We can consider the twistor space of $\mathbb{R}^{4n}$, which is a complex manifold $\Tw(\mathbb{R}^{4n})$ parametrizing the different complex structures $aI + bJ + cK$ at points of $\mathbb{R}^{4n}$. There is a Hermitian metric on $\Tw(\ensuremath{\mathbb R}^{4n})$ induced by the hyperk\"ahler structure on $\ensuremath{\mathbb R}^{4n}$. This Hermitian metric is not K\"ahler (although it is balanced).
The twistor space comes equipped with a natural holomorphic projection $\pi : \Tw(\mathbb{R}^{4n}) \to \mathbb{CP}^1$, where, identifying $\mathbb{CP}^1$ with $S^2$ in the usual way, the fibers of $\pi$ are just the complex manifolds $(\mathbb{R}^{4n}, aI + bJ + cK)$. In this way, the twistor space $\mathrm{Tw}(\mathbb{R}^{4n})$ is, intuitively speaking, the union of these $(\mathbb{R}^{4n}, aI + bJ + cK)$. The precise definition is in Section \ref{sec:twistor} below. Therefore it makes sense to attempt to replace a family of quantizations parametrized by points of $S^2$ by one quantization on $\Tw (\ensuremath{\mathbb R}^{4n})$. A general idea to use the twistor space has been in the air (see \cite{barron:17} for a discussion in the context of Berezin-Toeplitz quantization). For the proofs, two complex charts on $\ensuremath{\mathbb C}\Pr^1$ would typically be used, each obtained by deleting a point. For this reason we restrict our consideration to the twistor family obtained by removing a single fiber of the projection $\pi : \Tw(\mathbb{R}^{4n}) \to \mathbb{CP}^1$ from the twistor space $\Tw(\mathbb{R}^{4n})$. The resulting manifold has underlying topological structure $\ensuremath{\mathbb R}^{4n} \times \ensuremath{\mathbb C}$. We observe that its analytic structure is different from that of a degenerate twistor space \cite{verbitsky:15}, whose underlying topological structure is also the Cartesian product of a hyperk\"ahler manifold with $\ensuremath{\mathbb C}$. This is explained in section \ref{degtwistors}. The main result is in section \ref{sec:asymp}. In its essence, it is a statement about Toeplitz operators on relevant Bergman spaces. As often happens in this subject, there is a strong underlying argument why this technical statement about operators is useful for quantization. We explained the motivation above. To place the work in a broader physical context, recall that twistors were introduced in groundbreaking work by Sir Roger Penrose. Twistor quantization appeared, in particular, in the insightful papers \cite{penrose:68, penrose:99}. Let us recall some details. Penrose twistors are defined on Minkowski space or, more generally, in a curved spacetime. The Minkowski space is a $4$-dimensional real vector space, with a metric of signature $(1,3)$, and its twistor space is a $4$-dimensional complex vector space. Discussion of quantization involves commutation relations for operators that correspond to functions on the twistor space, and the correspondence principle. Penrose's twistor program ideas have been extended to the Riemannian setting and widely applied. Specifically, for $\ensuremath{\mathbb R}^4$, see the paper on the instanton moduli space by Atiyah, Hitchin and Singer \cite{atiyah:78}. \section{The twistor space} \label{sec:twistor} \subsection{The twistor space of a general hyperk\"ahler manifold $M$ } We begin by giving the definition of a hyperk\"ahler manifold and its twistor space, and describing the metric structure of the twistor space. \begin{defn} A \emph{hyperk\"ahler} manifold is a smooth manifold $M$ together with a triple of integrable almost complex structures $I, J, K : TM \to TM$ satisfying \[ I^2 = J^2 = K^2 = -\mathrm{Id}, \ IJ = -JI = K, \] and a Riemannian metric $g$, simultaneously Hermitian with respect to $I, J, K$, and such that the corresponding Hermitian forms $\omega_I, \omega_J, \omega_K$ are closed.
\end{defn} It's not hard to verify that with this definition, the form $\omega_J + i\omega_K$ is non-degenerate, has type $(2,0)$ with respect to the structure $I$ and is closed, thus making $(M, I)$ into a holomorphic symplectic manifold. In addition to $I, J, K$, a hyperk\"ahler manifold has many other complex structures. Any linear combination $A = aI + bJ + cK$ with $a, b, c \in \ensuremath{\mathbb R}$ satisfying $a^2 + b^2 + c^2 = 1$ is also an integrable almost complex structure. The metric $g$ is Hermitian with respect to $A$, and the corresponding Hermitian form $\omega_A$ is closed. In this way, we obtain a family of \emph{induced complex structures} on $M$ parametrized by a two-dimensional sphere: \[ S^2 = \left\{aI + bJ + cK \ |\ a,b,c\in\ensuremath{\mathbb R}, a^2 + b^2 + c^2 = 1 \right\}. \] \begin{defn} The \emph{twistor space} of a hyperk\"ahler manifold $M$ is the product manifold $\Tw(M) = M \times S^2$. \end{defn} Viewing $S^2$ as the set of induced complex structures on $M$ as above, the twistor space $\Tw(M)$ parametrizes these structures at points of $M$. It comes equipped with a natural almost complex structure $\mathcal{I} : T\Tw(M) \to T\Tw(M)$, defined as follows. Identifying $S^2 \cong \mathbb{CP}^1$ via the stereographic projection (see the next subsection), we let $I_{\mathbb{CP}^1} : T\mathbb{CP}^1 \to T\mathbb{CP}^1$ denote the corresponding almost complex structure. At any point $(x, A) \in M \times \mathbb{CP}^1 \cong \Tw(M)$, the tangent space decomposes as $T_{(x, A)}\Tw(M) = T_x M \oplus T_A \mathbb{CP}^1$; we will call vectors in $T_x M$ vertical and vectors in $T_A \mathbb{CP}^1$ horizontal, and similarly for differential forms. We define \begin{equation} \label{compstr} \begin{array}{ccccc} \mathcal{I} & : & T_x M \oplus T_A \mathbb{CP}^1 & \longrightarrow & T_x M \oplus T_A \mathbb{CP}^1 \\ && (X, V) & \longmapsto & \left(AX, I_{\mathbb{CP}^1}V\right). \end{array} \end{equation} It's not hard to verify that $\mathcal{I}^2 = -\mathrm{Id}$, so that $\mathcal{I}$ is an almost complex structure on the twistor space $\Tw(M)$. It actually turns out to be integrable (\cite{salamon:82}, see also \cite{kaledin:98}), making $\Tw(M)$ into a complex manifold. Its complex dimension is clearly one more than the complex dimension of $M$ (in any induced complex structure). For the rest of this subsection, the complex dimension of the twistor space $\Tw(M)$ will be denoted by $m$, so that the complex dimension of $M$ is $m-1$. The twistor space $\Tw(M) \cong M \times \mathbb{CP}^1$ has two natural projections: \[ \xymatrix{& \Tw(M) \ar[rd]^\pi \ar[dl]_\sigma & \\ M & & \mathbb{CP}^1,} \] the second of which is a holomorphic map. It's not hard to see that for any $A \in \mathbb{CP}^1$, the fiber $\pi^{-1}(A)$ is just the manifold $M$ with the induced complex structure $A$. One can thus think of the twistor space $\Tw(M)$ as the collection of K\"ahler manifolds $(M, A)$ lying above the points $A \in \mathbb{CP}^1$ via the map $\pi$. The sections of $\pi$ are called \emph{twistor lines}. There is a natural Hermitian metric on the twistor space $\Tw(M)$, namely the product of the hyperk\"ahler metric $g$ from $M$ and the Fubini-Study metric $g_{\mathbb{CP}^1}$ from $\mathbb{CP}^1$: \[ \sigma^*\left(g\right) + \pi^*\left(g_{\mathbb{CP}^1}\right). \] We let $\omega$ denote its Hermitian form and look at its decomposition into vertical and horizontal parts: \begin{equation} \label{omega_decomp} \omega = \omega_M + \omega_{\mathbb{CP}^1}.
\end{equation} Here both $\omega_M$ and $\omega_{\mathbb{CP}^1}$ are 2-forms on $\Tw(M)$, and at any point of $\Tw(M)$, $\omega_M$ is an element of $\Lambda^2 M$ while $\omega_{\mathbb{CP}^1}$ is an element of $\Lambda^2 \mathbb{CP}^1$. $\omega_{\mathbb{CP}^1}$ is just the pullback of the Fubini-Study form from $\mathbb{CP}^1$ via the projection $\pi$, while $\omega_M$ is not the pullback of any form from $M$; it has the property that its restriction to any fiber $\pi^{-1}(A) = (M, A)$ is just the K\"ahler form $\omega_A$ defined above. Furthermore, we have the following result. \begin{lemma} \label{volumeform} In the above notation, the form $\omega_M^{m-1}$ on the twistor space $\Tw(M)$ is the pullback of a volume form $\Omega$ from $M$ via the map $\sigma : \Tw(M) \to M$: \[ \omega_M^{m-1} = \sigma^*\left(\Omega\right). \] \end{lemma} \proof As noted above, the restriction of the form $\omega_M$ on $\Tw(M)$ to the fiber $\pi^{-1}(A)$ is $\omega_A$, in other words, \[ \left.\omega_M\right|_{\pi^{-1}(A)} = \left.\sigma^*\left(\omega_A\right)\right|_{\pi^{-1}(A)}, \] and similarly, \[ \left.\omega_M^{m-1}\right|_{\pi^{-1}(A)} = \left.\sigma^*\left(\omega_A^{m-1}\right)\right|_{\pi^{-1}(A)}. \] For any $A \in \mathbb{CP}^1$, $\omega_A^{m-1}$ is a volume form on $M$, since $\omega_A$ is non-degenerate, and the complex dimension of $M$ is $m-1$. Thus, if we show that the forms $\omega_A^{m-1}$ for different $A$ are all equal, the result will follow. Recall that on any orientable manifold, to each Riemannian metric one can canonically associate two volume forms, and choosing an orientation is equivalent to choosing one of these forms. Indeed, a metric on the tangent bundle induces a metric on the cotangent bundle and all its exterior powers, including the top one, which is a (real) line bundle. A Riemannian metric thus determines two unit vectors in each fiber of the bundle of differential forms of top degree, and in this context an orientation is simply a consistent choice of one of these unit vectors at each point of the manifold. In our setting of a hyperk\"ahler manifold $M$, the hyperk\"ahler metric $g$ thus gives rise to two volume forms. Each induced complex structure $A$ determines an orientation on $M$, which amounts to choosing one of these two volume forms. We denote this choice by $\mathrm{Vol}_{(M,A)}$. By basic Hermitian geometry, \[ \mathrm{Vol}_{(M,A)} = \frac{1}{(m-1)!}\ \omega_A^{m-1}. \] It only remains to observe that as $A \in \mathbb{CP}^1$ varies, the right hand side changes continuously. Since at each point of $M$ the form $\mathrm{Vol}_{(M,A)}$ is one of only two unit-norm choices, and $\mathbb{CP}^1$ is connected, a continuously varying choice must be constant. It follows that all the forms $\mathrm{Vol}_{(M,A)}$ are the same, and the same goes for the forms $\omega_A^{m-1}$. $\Box$ The natural product metric with Hermitian form $\omega$ on the twistor space $\Tw(M)$ described above is never K\"ahler. Indeed, the exterior differential $d$ on $\Tw(M)$ also decomposes into horizontal and vertical parts, $d = d_M + d_{\mathbb{CP}^1}$, and we have, using the decomposition \eqref{omega_decomp}, \[ d\omega = d_M \omega_M + d_{\mathbb{CP}^1} \omega_M + d_M \omega_{\mathbb{CP}^1} + d_{\mathbb{CP}^1} \omega_{\mathbb{CP}^1}. \] The first term is zero by the hyperk\"ahler condition on $M$, while the last two terms are zero because $\omega_{\mathbb{CP}^1}$ is a pullback of a closed form from $\mathbb{CP}^1$ to $\Tw(M)$. However, the second term will never be zero (Lemma 4.4 in \cite{kaled-verbit}, see also the proof of Theorem 1 in \cite{tomberg:15}), and thus $d\omega \ne 0$.
On the other hand, the metric on $\Tw(M)$ satisfies the weaker condition of being balanced: $d\left(\omega^{m-1}\right) = 0$. This was first shown by Kaledin and Verbitsky \cite{kaled-verbit}. Indeed, \[ d\left(\omega^{m-1}\right) = d\left((\omega_M + \omega_{\mathbb{CP}^1})^{m-1}\right) = d\left(\omega_M^{m-1}\right) + (m-1)d\left(\omega_M^{m-2} \wedge \omega_{\mathbb{CP}^1}\right) \] Here the first term is zero as a consequence of Lemma \ref{volumeform}. For the second term, we have \[ d\left(\omega_M^{m-2} \wedge \omega_{\mathbb{CP}^1}\right) = d\left(\omega_M^{m-2}\right) \wedge \omega_{\mathbb{CP}^1} = (m-2)\,d\omega_M \wedge \omega_M^{m-3} \wedge \omega_{\mathbb{CP}^1} = \] \[ = (m-2)\left(d_M\omega_M + d_{\mathbb{CP}^1}\omega_M\right) \wedge \omega_M^{m-3} \wedge \omega_{\mathbb{CP}^1} = (m-2)\,d_{\mathbb{CP}^1}\omega_M \wedge \omega_M^{m-3} \wedge \omega_{\mathbb{CP}^1}. \] In this last wedge product, $d_{\mathbb{CP}^1}\omega_M \in \Lambda^2 M \otimes \Lambda^1 \mathbb{CP}^1$ at any point of $\Tw(M)$, and since it's being wedged with $\omega_{\mathbb{CP}^1} \in \Lambda^2 \mathbb{CP}^1$, the product will be zero by the dimension of $\mathbb{CP}^1$, which shows that $\Tw(M)$ is balanced. \subsection{Complex structure and complex coordinates on the twistor space of $\ensuremath{\mathbb R}^{4n}$ } \label{subsec:twistscs} Let ${\mathbb{H}}$ denote the algebra of quaternions, with the standard basis $\{ 1,{\mathrm{i}}, {\mathrm j}, {\mathrm k}\}$ and the relations ${\mathrm i}^2={\mathrm j}^2={\mathrm k}^2={\mathrm i}{\mathrm j}{\mathrm k}=-1$. Let $n$ be a positive integer. We will use the isomorphisms \begin{equation} \label{isoms} \ensuremath{\mathbb R}^{4n}\cong \ensuremath{\mathbb R}^4\otimes \ensuremath{\mathbb R}^n\cong {\mathbb{H}}\otimes \ensuremath{\mathbb R}^n\cong \ensuremath{\mathbb C}^{2n}\oplus \ensuremath{\mathbb C}^{2n}{\mathrm {j}}. \end{equation} An element of $\ensuremath{\mathbb R}^{4n}$ will be represented by $$ {\mathbf {x_1}}+{\mathbf {x_2}}{\mathrm{i}}+{\mathbf {x_3}}{\mathrm{j}}+{\mathbf {x_4}}{\mathrm{k}}= {\mathbf {z}}+{\mathbf {w}}{\mathrm{j}} , $$ where ${\mathbf {x_m}}=\begin{pmatrix}x_m^{(1)}\\ ...\\ x_m^{(n)}\end{pmatrix}$ for $m\in\{ 1,2,3,4\}$, and $$ {\mathbf {z}}={\mathbf {x_1}}+{\mathbf {x_2}}{\mathrm{i}}, \ {\mathbf {w}}= {\mathbf {x_3}}+{\mathbf {x_4}}{\mathrm{i}}. $$ Let $I$, $J$, $K$ be the three standard linear complex structures on $\ensuremath{\mathbb R}^{4n}$ obtained by ${\mathrm i}$, ${\mathrm j}$, ${\mathrm k}$ acting by left multiplication on ${\mathbb{H}}$. Let $1_n$ denote the $n\times n$ identity matrix, and let $0_n$ denote the $n\times n$ zero matrix. Using the standard basis of $\ensuremath{\mathbb R}^4\otimes \ensuremath{\mathbb R}^n$ and the first of the isomorphisms (\ref{isoms}), we write a vector of $\ensuremath{\mathbb R}^{4n}$ as a column vector $$ {\mathbf{x}}=\begin{pmatrix} {\mathbf {x_1}} \\ {\mathbf {x_2}} \\ {\mathbf {x_3}} \\ {\mathbf {x_4}} \end{pmatrix} $$ and observe that the matrices of $I$, $J$, $K$ are, respectively, \begin{equation} \label{matIJK} \begin{pmatrix}0_n & -1_n & & \\ 1_n & 0_n & & \\ & & 0_n & -1_n \\ & & 1_n & 0_n \end{pmatrix}, \ \begin{pmatrix} & & -1_n & 0_n \\ & & 0_n & 1_n \\ 1_n & 0_n & & \\ 0_n & -1_n & & \end{pmatrix}, \ \begin{pmatrix}& & & -1_n \\ & & -1_n & \\ & 1_n & & \\ 1_n & & & \end{pmatrix} \end{equation} (an empty spot indicates that the corresponding matrix entries are zero). 
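As a quick sanity check on the matrix representations (\ref{matIJK}), the quaternion relations can be verified numerically. The following sketch is ours, for illustration only (assuming NumPy; shown with block size $n=1$, though any $n$ works):
\begin{verbatim}
import numpy as np

n = 1                                   # block size; any n >= 1 works
O, E = np.zeros((n, n)), np.eye(n)

I = np.block([[O, -E, O, O], [E, O, O, O], [O, O, O, -E], [O, O, E, O]])
J = np.block([[O, O, -E, O], [O, O, O, E], [E, O, O, O], [O, -E, O, O]])
K = np.block([[O, O, O, -E], [O, O, -E, O], [O, E, O, O], [E, O, O, O]])

Id = np.eye(4 * n)
assert np.allclose(I @ I, -Id) and np.allclose(J @ J, -Id) \
    and np.allclose(K @ K, -Id)
assert np.allclose(I @ J, K) and np.allclose(J @ I, -K)  # IJ = -JI = K

# any induced complex structure aI + bJ + cK also squares to -Id
a, b, c = 0.48, 0.6, 0.64               # a^2 + b^2 + c^2 = 1
A = a * I + b * J + c * K
assert np.allclose(A @ A, -Id)
\end{verbatim}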
Or, we can characterize $I$ as the endomorphism of the tangent bundle on $\ensuremath{\mathbb R}^{4n}$ such that \begin{equation} \label{Iaction} I: \ \frac{\partial}{\partial x_1^{(l)} }\mapsto \frac{\partial}{\partial x_2^{(l)} }, \ \frac{\partial}{\partial x_2^{(l)} }\mapsto -\frac{\partial}{\partial x_1^{(l)} }, \ \frac{\partial}{\partial x_3^{(l)} }\mapsto \frac{\partial}{\partial x_4^{(l)} }, \ \frac{\partial}{\partial x_4^{(l)} }\mapsto -\frac{\partial}{\partial x_3^{(l)} }, \ l\in \{1,...,n\} \end{equation} and \begin{equation} \label{Jaction} J: \ \frac{\partial}{\partial x_1^{(l)} }\mapsto \frac{\partial}{\partial x_3^{(l)} }, \ \frac{\partial}{\partial x_3^{(l)} }\mapsto -\frac{\partial}{\partial x_1^{(l)} }, \ \frac{\partial}{\partial x_4^{(l)} }\mapsto \frac{\partial}{\partial x_2^{(l)} }, \ \frac{\partial}{\partial x_2^{(l)} }\mapsto -\frac{\partial}{\partial x_4^{(l)} }, \ l\in \{1,...,n\} \end{equation} \begin{equation} \label{Kaction} K: \ \frac{\partial}{\partial x_1^{(l)} }\mapsto \frac{\partial}{\partial x_4^{(l)} }, \ \frac{\partial}{\partial x_4^{(l)} }\mapsto -\frac{\partial}{\partial x_1^{(l)} }, \ \frac{\partial}{\partial x_2^{(l)} }\mapsto \frac{\partial}{\partial x_3^{(l)} }, \ \frac{\partial}{\partial x_3^{(l)} }\mapsto -\frac{\partial}{\partial x_2^{(l)} }, \ l\in \{1,...,n\}. \end{equation} The vector space $\ensuremath{\mathbb R}^{4n}$, equipped with the three linear complex structures $I$, $J$, $K$ and the usual flat Euclidean metric, is a hyperk\"ahler manifold. The induced complex structure $aI + bJ + cK$, viewed as an endomorphism of $T(\ensuremath{\mathbb R}^{4n})$, takes $\frac{\partial}{\partial x_1^{(l)} }$ to $a\frac{\partial}{\partial x_2^{(l)} }+b\frac{\partial}{\partial x_3^{(l)} }+c\frac{\partial}{\partial x_4^{(l)} }$, for $1\le l\le n$, (see (\ref{Iaction}), (\ref{Jaction}), (\ref{Kaction})), and so on. \begin{remark} The space of linear complex structures on $\ensuremath{\mathbb R}^{4n}$ is isomorphic to $GL(4n,\ensuremath{\mathbb R})/GL(2n,\ensuremath{\mathbb C})$ \cite[I.2.2.5]{mcduff:98}. Here we denote by $GL(2n,\ensuremath{\mathbb C})$ the subgroup of $GL(4n,\ensuremath{\mathbb R})$ that consists of the matrices of $I$-linear transformations of $\ensuremath{\mathbb R}^{4n}$ or, equivalently, of the matrices in $GL(4n,\ensuremath{\mathbb R})$ that commute with $I$. Suppose $a_1I+b_1J+c_1K$ and $a_2I+b_2J+c_2K$ are two linear complex structures such that $(a_1,b_1,c_1)\ne (a_2,b_2,c_2)$. There is $A\in GL(4n,\ensuremath{\mathbb R})$ such that $a_1I+b_1J+c_1K=A(a_2I+b_2J+c_2K)A^{-1}$ \cite[Prop. 2.47]{mcduff:98}. The complex structures $a_1I+b_1J+c_1K$ and $a_2I+b_2J+c_2K$ represent the same point in $GL(4n,\ensuremath{\mathbb R})/GL(2n,\ensuremath{\mathbb C})$ (i.e. there is $A\in GL(2n,\ensuremath{\mathbb C})$ such that $a_1I+b_1J+c_1K=A(a_2I+b_2J+c_2K)A^{-1}$) if and only if $a_1=a_2$. It is straightforward to verify this using the matrix representations (\ref{matIJK}). For example, there is no matrix $A$ in $GL(2n,\ensuremath{\mathbb C})$ such that $AI=IA$ and $AI=JA$, and it is possible to explicitly find $A\in GL(2n,\ensuremath{\mathbb C})$ such that $AI=IA$ and $AJ=KA$. \end{remark} For the case of the hyperk\"ahler manifold $\ensuremath{\mathbb R}^{4n}$, its twistor space $\Tw(\ensuremath{\mathbb R}^{4n})$ has a coordinate description, which we now give. The sphere (\ref{2sphere}) is a $2$-dimensional sphere in ${\mathrm{End}} (T\ensuremath{\mathbb R}^{4n})$. 
It is diffeomorphic to the unit sphere $\mathbb{S}^2=\{ (a,b,c)| \ a,b,c\in\ensuremath{\mathbb R}, a^2+b^2+c^2=1\}$ in $\ensuremath{\mathbb R}^3$ via $$ s:\mathbb{S}^2\to S^2 $$ $$ (a,b,c)\mapsto aI+bJ+cK . $$ A standard way to cover $\mathbb{S}^2\cong \ensuremath{\mathbb C}\Pr^1$ by two complex charts is to introduce a complex coordinate $\zeta_1$ on $\mathbb{S}^2-\{ pt\}$, taking the point to be, say, $(0,0,1)$, using the stereographic projection from this point, and the complex coordinate $\zeta_2$ on $\mathbb{S}^2-\{ (0,0,-1)\}$, which is $\dfrac{1}{\zeta_1}$ on the intersection of the charts. The formulas for the stereographic projection from $(0,0,1)$ (see e.g. \cite[I.\S 6]{conway:78}) give $$ \zeta_1=\dfrac{a+ib}{1-c}, \ a=\frac{\zeta_1+\bar{\zeta_1}}{|\zeta_1|^2+1}, \ b=\frac{-i(\zeta_1-\bar{\zeta_1})}{|\zeta_1|^2+1}, \ c=\frac{|\zeta_1|^2-1}{|\zeta_1|^2+1}. $$ We will use a slightly different convention. Let \begin{equation} \label{zetaformula} \zeta=\dfrac{-c+ib}{a+1} \end{equation} be the complex coordinate on $\mathbb{S}^2-\{ (-1,0,0)\} \cong\ensuremath{\mathbb C}$. It is the complex coordinate $\dfrac{b+ic}{1+a}$, obtained from the stereographic projection from $(-1,0,0)$, times $i$. We note that $$ a=\frac{1-|\zeta|^2}{1+|\zeta|^2}, \ b=\frac{-i(\zeta-\bar{\zeta})}{|\zeta|^2+1}, \ c=-\frac{\zeta+\bar{\zeta}}{|\zeta|^2+1}. $$ On $\mathbb{S}^2-\{ (1,0,0)\}$ we will use the complex coordinate $$ \tilde{\zeta}=-\frac{c+ib}{1-a}. $$ Over $\{ a\ne \pm 1\}$ $$ {\tilde{\zeta}}=\frac{1}{\zeta}. $$ Let $z_0$ and $z_1$ be the homogeneous coordinates on $\ensuremath{\mathbb C}\Pr^1$. Then the diffeomorphisms $$ \ensuremath{\mathbb C}\Pr^1-\{ [1:0]\}\to \mathbb{S}^2-\{ (-1,0,0)\}, \ \ensuremath{\mathbb C}\Pr^1-\{ [0:1]\}\to \mathbb{S}^2-\{ (1,0,0)\} $$ are, respectively, obtained from the maps $$ \ensuremath{\mathbb C}\Pr^1-\{ [1:0]\}\to \ensuremath{\mathbb C} , \ [z_0:z_1]\mapsto \zeta=\frac {z_0}{z_1} $$ and $$ \ensuremath{\mathbb C}\Pr^1-\{ [0:1]\}\to \ensuremath{\mathbb C} , \ [z_0:z_1]\mapsto \tilde{\zeta}=\frac {z_1}{z_0}. $$ Recall that the matrix group $SU(2)$ consists of the matrices $\begin{pmatrix} \alpha & \beta \\ -\bar\beta & \bar \alpha \end{pmatrix}$ with $\alpha,\beta\in\ensuremath{\mathbb C}$ such that $|\alpha |^2+|\beta|^2=1$. It acts on $\ensuremath{\mathbb C}\Pr^1$ by $$ \begin{pmatrix} \alpha & \beta \\ -\bar\beta & \bar \alpha \end{pmatrix} \ : \ [z_0:z_1]\mapsto [\alpha z_0+\beta z_1:-\bar\beta z_0+\bar\alpha z_1]. $$ This action is transitive. The corresponding action on $\mathbb{S}^2$ is $$ (a,b,c)\mapsto (a',b',c'), $$ where $$ a'=(\alpha\bar\alpha-\beta\bar\beta)a+2Im(\alpha\bar\beta)b+2Re(\alpha\bar\beta)c $$ $$ b'=-i(\alpha\beta-\bar\alpha\bar\beta)a+Re(\alpha^2+\bar\beta^2)b-Im(\alpha^2+\bar\beta^2)c $$ $$ c'=-(\alpha\beta+\bar\alpha\bar\beta)a+Im(\alpha^2-\bar\beta^2)b+Re(\alpha^2-\bar\beta^2)c. $$ The twistor space $\Tw(\ensuremath{\mathbb R}^{4n})$ is covered by two charts $\ensuremath{\mathbb R}^{4n}\times (S^2-\{ -I\})$ and $\ensuremath{\mathbb R}^{4n}\times (S^2-\{ I\})$. 
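The coordinate conventions just introduced can likewise be checked numerically. The sketch below is ours, for illustration (assuming NumPy); it verifies the stated inverse formulas for $\zeta$ and the transition function $\tilde{\zeta}=1/\zeta$ at a sample point of $\mathbb{S}^2$ with $a\ne\pm 1$:
\begin{verbatim}
import numpy as np

a, b, c = 0.48, 0.6, 0.64               # a point of the unit sphere

zeta  = (-c + 1j * b) / (a + 1)         # coordinate away from (-1, 0, 0)
tzeta = -(c + 1j * b) / (1 - a)         # coordinate away from (1, 0, 0)

# the inverse formulas recover (a, b, c) ...
assert np.isclose(a, (1 - abs(zeta) ** 2) / (1 + abs(zeta) ** 2))
assert np.isclose(b, 2 * zeta.imag / (abs(zeta) ** 2 + 1))
assert np.isclose(c, -2 * zeta.real / (abs(zeta) ** 2 + 1))
# ... and the transition function holds where both charts are defined
assert np.isclose(tzeta, 1 / zeta)
\end{verbatim}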
As in \cite{hitchin:13}, we will use complex coordinates $v_1,\ldots,v_n$, $\xi_1,\ldots,\xi_n$, $\zeta$ on $\ensuremath{\mathbb R}^{4n}\times (S^2-\{ -I\})$, where \begin{equation} \label{coordv} {\mathbf {v}}=\begin{pmatrix}v_1\\ ...\\ v_n\end{pmatrix}={\mathbf {z}}+\zeta\bar{\mathbf {w}}={\mathbf {x_1}}+i{\mathbf {x_2}}+\zeta({\mathbf {x_3}}-i{\mathbf {x_4}}) \end{equation} and \begin{equation} \label{coordksi} {\bm {\xi}}=\begin{pmatrix}\xi_1\\ ...\\ \xi_n\end{pmatrix}={\mathbf {w}}-\zeta\bar{\mathbf {z}}= {\mathbf {x_3}}+i{\mathbf {x_4}}-\zeta({\mathbf {x_1}}-i{\mathbf {x_2}}). \end{equation} On the chart $\ensuremath{\mathbb R}^{4n}\times (S^2-\{ I\})$, the complex coordinates are \begin{equation} \label{tildecoords} {\tilde{{\mathbf{v}}}}={\tilde{\zeta}}{\mathbf {z}}+\bar{\mathbf {w}}, \ {\tilde{{\bm{\xi}}}}={\tilde{\zeta}}{\mathbf {w}}-\bar{\mathbf {z}}, \ {\tilde{\zeta}}. \end{equation} Over the intersection of the charts $$ {\mathbf {\tilde{v}}}=\frac{1}{\zeta}{\mathbf {v}}, \ {\tilde{{\bm {\xi}}}}=\frac{1}{\zeta}{\bm \xi}, \ \tilde{\zeta}=\frac{1}{\zeta}. $$ \begin{remark} The fact that $({\mathbf{v}},{\bm{\xi}},\zeta)$, $({\tilde{{\mathbf{v}}}}, {\tilde{{\bm{\xi}}}},{\tilde{\zeta}})$ are complex coordinates on $\Tw (\ensuremath{\mathbb R}^{4n})$, for the complex structure (\ref{compstr}), is well known. It is used in \cite{hitchin:13, hitchin:14}. If one wishes to verify this explicitly, one can check the equalities that involve the almost complex structures and the differentials of the maps. \end{remark} \subsection{Twistor space with one fiber removed} \label{subsec:twistsfiberrem} So, we have a diffeomorphism $$ \Tw(\ensuremath{\mathbb R}^{4n})-(\ensuremath{\mathbb R}^{4n}\times \{ -I\})\to \ensuremath{\mathbb C}^{2n+1} $$ $$ ({\mathbf{x}},aI+bJ+cK)\mapsto \begin{pmatrix} {{\mathbf{v}}}\\ {{\bm{\xi}}}\\ {\zeta} \end{pmatrix}. $$ In this paper, we will concentrate our attention on the twistor space of $\ensuremath{\mathbb R}^{4n}$ with one fiber of the projection $\pi : \Tw(\ensuremath{\mathbb R}^{4n}) \to \mathbb{CP}^1$ removed, rather than the whole twistor space. In choosing this fiber above, we took the point removed from $S^2$ to be $-I=s((-1,0,0))$ (which is $[1:0]$ in $\ensuremath{\mathbb C}\Pr^1$), and we defined complex coordinates ${\mathbf{v}}$, ${\bm{\xi}}$, $\zeta$ on $\Tw(\ensuremath{\mathbb R}^{4n})-(\ensuremath{\mathbb R}^{4n}\times \{ s((-1,0,0))\})$. Let $(a_0,b_0,c_0)$ be a point in $\mathbb{S}^2$ such that $(a_0,b_0,c_0)\ne (-1,0,0)$. We will now explain how to define complex coordinates $({\mathbf{v '}},{\bm{\xi '}},\zeta')$ on $\Tw(\ensuremath{\mathbb R}^{4n})-(\ensuremath{\mathbb R}^{4n}\times \{ s(a_0,b_0,c_0)\})$. The point $(a_0,b_0,c_0)$ corresponds to the point $[\zeta_0:1]\in \ensuremath{\mathbb C}\Pr^1$, where $\zeta_0=\dfrac{-c_0+ib_0}{a_0+1}$. The matrix \begin{equation} \label{su2matrix} \gamma= \frac{1}{\sqrt{1+|\zeta_0|^2}} \begin{pmatrix} -e^{-i\psi}\zeta_0 &e^{i\psi}\\ -e^{-i\psi}& -e^{i\psi}\bar\zeta_0 \end{pmatrix} \end{equation} where $\psi$ is an arbitrary real number, represents an element of $SU(2)$ that takes $[1:0]$ to $[\zeta_0:1]$. Also, $$ \gamma^{-1}= \frac{1}{\sqrt{1+|\zeta_0|^2}} \begin{pmatrix} -e^{i\psi}\bar\zeta_0& -e^{i\psi}\\ e^{-i\psi}&-e^{-i\psi}\zeta_0 \end{pmatrix} $$ takes $[\zeta_0:1]$ to $[1:0]$ and takes $(a_0,b_0,c_0)$ to $(-1,0,0)$.
Consider a point $P=({\mathbf{x_1}},{\mathbf{x_2}},{\mathbf{x_3}},{\mathbf{x_4}},s(a,b,c))$ in the twistor space, where ${\mathbf{x_1}}$, ${\mathbf{x_2}}$, ${\mathbf{x_3}}$, ${\mathbf{x_4}}$ are arbitrary and $(a,b,c)\ne (a_0,b_0,c_0)$. We set the complex coordinates $({\mathbf{v '}},{\bm{\xi '}},\zeta')$ of $P$ to be $$ \zeta'=-e^{-2i\psi}\frac{\bar\zeta_0 (-c+ib)+a+1}{-c+ib-\zeta_0(a+1)} $$ $$ {\mathbf{v '}}=(\zeta' +e^{2i\psi}\bar\zeta_0){\mathbf {z}}+(\zeta_0 \zeta'-e^{2i\psi})\bar{\mathbf {w}} $$ $$ {\bm{\xi '}}=(\zeta' +e^{2i\psi}\bar\zeta_0){\mathbf {w}}-(\zeta_0 \zeta'-e^{2i\psi})\bar{\mathbf {z}} $$ where, as before, $$ {\mathbf{z}}={\mathbf{x_1}}+{\mathbf{x_2}}i, \ {\mathbf{w}}={\mathbf{x_3}}+{\mathbf{x_4}}i. $$ To explain: the complex number $\zeta'$ is $\zeta'=\dfrac{-c'+ib'}{a'+1}$, where $(a',b',c')=\gamma^{-1}(a,b,c)$, and ${\mathbf{v '}}$ and ${\bm{\xi '}}$ are (\ref{coordv}) and (\ref{coordksi}) adjusted to the new choice of the complex coordinate on $\mathbb{S}^2-\{ pt\}$ (informally speaking, we should express the old coordinate $\zeta$ in terms of the new coordinate $\zeta'$). In particular, if we set $\zeta_0=0$ and take $\psi$ such that $e^{2i\psi}=-1$, then we get ${\mathbf{v '}}=\tilde{{\mathbf{v }}}$, ${\bm{\xi '}}=\tilde{{\bm{\xi }}}$, and the equalities above become (\ref{tildecoords}). Let us denote the map from the twistor space with one fiber removed to $\ensuremath{\mathbb C}^{2n+1}$ defined above by $f$, and denote the standard complex structure on $\ensuremath{\mathbb C}^{2n+1}$ by $J_0$. To verify that ${\mathbf{v '}}$, ${\bm{\xi '}}$, $\zeta'$ are indeed complex coordinates, one can explicitly check the equality $$ J_0\circ df=df\circ \Bigl ( (aI+bJ+cK)\oplus I_{\mathbb{CP}^1} \Bigr ). $$ \subsection{Further considerations} \label{degtwistors} In this subsection, we make some observations. Let $g$ be the standard flat Euclidean metric on $\ensuremath{\mathbb R}^{4n}$. On the hyperk\"ahler manifold $(\ensuremath{\mathbb R}^{4n}, g, I,J,K)$, the three K\"ahler forms $\omega_I$, $\omega_J$, $\omega_K$, defined by $$ \omega_I(X,X')=g(IX,X'), \ \omega_J(X,X')=g(JX,X'), \ \omega_K(X,X')=g(KX,X') $$ for all $X,X'\in T_x\ensuremath{\mathbb R}^{4n}$, $x\in \ensuremath{\mathbb R}^{4n}$, have the coordinate descriptions $$ \omega_I=\sum_{l=1}^n\Bigl ( dx_1^{(l)}\wedge dx_2^{(l)}+dx_3^{(l)}\wedge dx_4^{(l)}\Bigr ) $$ $$ \omega_J=\sum_{l=1}^n\Bigl ( dx_1^{(l)}\wedge dx_3^{(l)}+dx_4^{(l)}\wedge dx_2^{(l)}\Bigr ) $$ $$ \omega_K=\sum_{l=1}^n\Bigl ( dx_1^{(l)}\wedge dx_4^{(l)}+dx_2^{(l)}\wedge dx_3^{(l)}\Bigr ). $$ The holomorphic symplectic form $ \omega_J+i \omega_K$ is equal to $\sum_{l=1}^ndz_l\wedge dw_l$. The $2$-form $\omega_M$ (where $M=\ensuremath{\mathbb R}^{4n}$) on $\Tw (\ensuremath{\mathbb R}^{4n})$ that appears in the decomposition (\ref{omega_decomp}) is defined as follows: at a point $(x,A)$, $x\in \ensuremath{\mathbb R}^{4n}$, $A\in \ensuremath{\mathbb C}\Pr^1$, for $(X,V), (X',V')\in T_x\ensuremath{\mathbb R}^{4n}\oplus T_A\ensuremath{\mathbb C}\Pr^1$, $$ \omega_M ((X,V), (X',V'))=g(AX,X'). $$ For example, for $n=1$, in the coordinate conventions used above, $$ \omega_M=dx_1\wedge dx_2+b \ dx_1\wedge dx_3+c \ dx_1\wedge dx_4+c \ dx_2\wedge dx_3-b \ dx_2\wedge dx_4+a \ dx_3\wedge dx_4. $$ Also, let us discuss how the setup in this paper relates to the definitions in \cite{verbitsky:15}.
The twistor family $\ensuremath{\mathbb R}^{4n}\times \Bigl (S^2-\{ pt\}\Bigr )$, with its complex structure obtained from $\Tw (\ensuremath{\mathbb R}^{4n})$, is at a quick glance possibly reminiscent of the {\it degenerate twistor space} of \cite{verbitsky:15}. However, simply removing a point from $S^2$, and thereby obtaining a family of complex structures parametrized by $\ensuremath{\mathbb C}$ instead of $\ensuremath{\mathbb C}\Pr^1$, certainly does not have to lead to the situation described in \cite{verbitsky:15}. In \cite{verbitsky:15}, for a compact simple $4n$-dimensional hyperk\"ahler manifold $(M,g,I,J,K)$, and a choice of a $(1,1)$-form $\eta$ on $(M,I)$, which is a closed semipositive form of rank $2n$, the {\it degenerate twistor space} of $M$ is defined as $M\times \ensuremath{\mathbb C}$, with the (integrable) almost complex structure $I_{\eta}\oplus I_{\ensuremath{\mathbb C}}$, where $I_{\ensuremath{\mathbb C}}$ is the standard complex structure on $\ensuremath{\mathbb C}$, and, at $\zeta\in\ensuremath{\mathbb C}$, $I_{\eta}$ is the complex structure on $M$ for which $T^{0,1}$ is \begin{equation} \label{condt01} \{ v\in TM\otimes\ensuremath{\mathbb C} \ | v \lrcorner (\omega_J+i\omega_K+\zeta\eta)=0\} . \end{equation} Let us try to explain why our setting is different from \cite{verbitsky:15}. Compactness is not an issue, since linear complex structures descend to the torus $\ensuremath{\mathbb R}^{4n}/\ensuremath{{\mathbb Z}}^{4n}$. Recall that if $A$ is an endomorphism of an even-dimensional vector space $V$ such that $A^2=-1$, then $V_{\ensuremath{\mathbb C}}=V\oplus iV$ decomposes as $V_{\ensuremath{\mathbb C}}=T^{1,0}\oplus T^{0,1}$, where $T^{1,0}$ is the eigenspace for $i$, consisting of the vectors $v-iAv$, $v\in V$, and $T^{0,1}$ is the eigenspace for $-i$, consisting of the vectors $v+iAv$, $v\in V$. With that in mind, and applying (\ref{Iaction}), (\ref{Jaction}), (\ref{Kaction}), we find that the complex structure on $\ensuremath{\mathbb R}^{4n}\times \ensuremath{\mathbb C}$ is the one for which, at a fixed $\zeta\in\ensuremath{\mathbb C}$, $T^{0,1}(\ensuremath{\mathbb R}^{4n})$ is the span of $\dfrac{\partial}{\partial \bar z_l}+\zeta\dfrac{\partial}{\partial w_l}$ and $\dfrac{\partial}{\partial \bar w_l}-\zeta\dfrac{\partial}{\partial z_l}$, $l=1,...,n$. Equivalently, at $\zeta\in\ensuremath{\mathbb C}$, $T^{0,1}(\ensuremath{\mathbb R}^{4n})$ is $$ \{ v\in T(\ensuremath{\mathbb R}^{4n})\otimes \ensuremath{\mathbb C} \ | v\lrcorner \beta=0\} $$ where $$ \beta=\sum_{l=1}^n (dz_l+\zeta d\bar w_l)\wedge (dw_l-\zeta d\bar z_l). $$ Let us try a different choice of the form. Let $n=1$ and choose $$ \eta=\frac{i}{2}(dz\wedge d\bar z+dw\wedge d\bar w+dz\wedge d\bar w+dw\wedge d\bar z). $$ This is a $(1,1)$-form on $(\ensuremath{\mathbb R}^{4n},I)$, which is closed, semipositive in the sense of \cite[Def. 3.6]{verbitsky:15}, and of rank $2$ (cf.\ \cite[Def. 3.17]{verbitsky:15}). It defines the complex structure on $\ensuremath{\mathbb R}^4\times \ensuremath{\mathbb C}$ for which $T^{0,1}$, defined by (\ref{condt01}), is the span of $\dfrac{\partial}{\partial \bar z}+ \zeta(\dfrac{\partial}{\partial z}-\dfrac{\partial}{\partial w})$ and $\dfrac{\partial}{\partial \bar w}+ \zeta(\dfrac{\partial}{\partial z}-\dfrac{\partial}{\partial w})$. \section{Analysis on $\ensuremath{\mathbb C}^{2n+1}$} In this section, we collect various facts that are necessary to proceed with the proof of the main theorem in section \ref{sec:asymp}.
We will define several function spaces on $\ensuremath{\mathbb C}^{2n+1}$, $n\in\ensuremath{{\mathbb N}}$. The isomorphism between $Tw(\ensuremath{\mathbb R}^{4n})-(\ensuremath{\mathbb R}^{4n}\times \{ a_0I+b_0J+c_0K\})$ and $\ensuremath{\mathbb C}^{2n+1}$ described above in section \ref{sec:twistor} will not be used in this section. Denote by $v_1$,...,$v_n$,$\xi_1$,...,$\xi_n$,$\zeta$ the complex coordinates on $\ensuremath{\mathbb C}^{2n+1}$. Write, as before,
$$ {\mathbf {v}}=\begin{pmatrix}v_1\\ ...\\ v_n\end{pmatrix}, \ {\bm {\xi}}=\begin{pmatrix}\xi_1\\ ...\\ \xi_n\end{pmatrix}. $$
Let $d\mu=d\mu({\mathbf{v}},{\bm{\xi}},\zeta) $ be the Lebesgue measure on $\ensuremath{\mathbb C}^{2n+1}$. We will also write
$$ d\mu( {\mathbf{v}},{\bm{\xi}})=dRe(v_1)dIm(v_1)...dRe(v_n)dIm(v_n)dRe(\xi_1)dIm(\xi_1)...dRe(\xi_n)dIm(\xi_n) $$
and
$$ d\mu( \zeta)=dRe(\zeta)dIm(\zeta). $$
We will write, for two vectors ${\mathbf{a}},{\mathbf{b}}\in\ensuremath{\mathbb C}^n$:
$$ {\mathbf{a}}\cdot\bar{\mathbf{b}}=\sum_{l=1}^na_l\bar b_l. $$
\begin{remark}A comment on notation: for $z\in\ensuremath{\mathbb C}$, $f(z)$ denotes the value of the function $f$ at the point $z$. By writing $f(z)$ rather than $f(z,{\bar{z}} )$ we are not implying that the function $f$ is holomorphic (i.e. that $\frac{\partial f}{\partial \bar z}=0$); similarly for $\ensuremath{\mathbb C}^{l}$, $l\in\ensuremath{{\mathbb N}}$. It will be stated explicitly which functions are assumed to be holomorphic. \end{remark}
Consider the space $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$, where $k\in\ensuremath{{\mathbb N}}$ and
\begin{equation} \label{measuremuk} d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta)=(\frac{k}{\pi})^{2n}e^{-k({\mathbf{v}}\cdot\bar{\mathbf{v}}+ {\bm{\xi}}\cdot\bar{\bm{\xi}})} \frac{1}{\pi(1+|\zeta|^2)^2} \ d\mu({\mathbf{v}},{\bm{\xi}},\zeta). \end{equation}
The inner product on $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$ is
\begin{equation} \label{inprodu} \langle f,g\rangle = \int_{\ensuremath{\mathbb C}^{2n+1}} f({\mathbf{v}},{\bm{\xi}},\zeta)\overline{g({\mathbf{v}},{\bm{\xi}},\zeta)} d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta) \end{equation}
and we will write $||.||$ for the corresponding norm.
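Note that $d\mu_k$ is a probability measure on $\ensuremath{\mathbb C}^{2n+1}$; this is a quick computation in polar coordinates:
$$ \int_{\ensuremath{\mathbb C}}\frac{k}{\pi}e^{-k|v|^2}d\mu(v)=2k\int_0^{\infty}re^{-kr^2}dr=1, \ \ \ \int_{\ensuremath{\mathbb C}}\frac{d\mu(\zeta)}{\pi(1+|\zeta|^2)^2}=2\int_0^{\infty}\frac{r \, dr}{(1+r^2)^2}=1, $$
so $\int_{\ensuremath{\mathbb C}^{2n+1}}d\mu_k=1$. In particular, the $\zeta$-integration will repeatedly contribute a factor of $1$ below.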
Denote by $\ensuremath{\mathcal{A}}^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ the subspace of $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$ that consists of holomorphic functions. As we show below, these are precisely the holomorphic functions $F$ on $\ensuremath{\mathbb C}^{2n+1}$ that satisfy $\frac{\partial F}{\partial \zeta}\equiv 0$, i.e. depend only on the $2n$ complex variables $v_1$,...,$v_n$,$\xi_1$,...,$\xi_n$.
\begin{lemma} \label{lemsupest} Suppose $f\in\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$. Let $K$ be a compact subset of $\ensuremath{\mathbb C}^{2n+1}$. There is a constant $C_K>0$, depending on $K$, $n$, and $k$, such that
$$ \underset{z\in K}{\sup}|f(z)|\le C_K ||f||. $$
\end{lemma}
\proof The proof is an obvious modification of the proof of the same statement for the Bergman space of holomorphic functions in $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu)$ (e.g. Lemma 1.4.1 of \cite{krantz:92}). We use the fact that the weight $(\frac{k}{\pi})^{2n}e^{-k\sum_{l=1}^n(|v_l|^2+|\xi_l|^2)}\frac{1}{\pi (1+|\zeta|^2)^2}$ is a positive continuous function on $K$; therefore it attains its minimum value on $K$, and this value is positive. $\Box$
\begin{lemma} The space $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ is a closed subspace of $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$. \end{lemma}
\proof It is sufficient to show that the limit of a sequence of holomorphic functions that converges in $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$ is a holomorphic function. Suppose $\{ f_n\}$ is such a sequence; in particular $\{ f_n\}$ is Cauchy in norm. Pick a compact set $K$ in $\ensuremath{\mathbb C}^{2n+1}$. Restricted to $K$, $\{ f_n\}$ is uniformly Cauchy by Lemma \ref{lemsupest}, and therefore it converges uniformly to a (continuous) function $f_K$. Thus $\{ f_n\}$ converges uniformly on compact sets to a function $f:\ensuremath{\mathbb C}^{2n+1}\to\ensuremath{\mathbb C}$. By Theorem 1.9 of \cite{range:86} (compact convergence of a sequence of holomorphic functions), $f$ is holomorphic. Since a subsequence of $\{ f_n\}$ converges pointwise almost everywhere to the $L^2$ limit, the $L^2$ limit coincides almost everywhere with $f$, which proves the claim. $\Box$
Thus, the space $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ is a Hilbert space with the inner product (\ref{inprodu}) (since $L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$ is a Hilbert space, and a closed subspace of a complete metric space is complete). Let $l_1$,...,$l_n$,$m_1$,...,$m_n$ be nonnegative integers. Write $l=(l_1,...,l_n)$, $m=(m_1,...,m_n)$. Denote
$$ \psi_{l,m}({\mathbf{v}},{\bm{\xi}},\zeta)=\sqrt{\frac{k^{l_1+...+l_n+m_1+...+m_n}}{l_1!...l_n!m_1!...m_n!}}(v_1)^{l_1}...(v_n)^{l_n} (\xi_1)^{m_1}...(\xi_n)^{m_n}. $$
\begin{lemma} The functions $\psi_{l,m}$ form a Hilbert space basis (orthonormal basis) in $\ensuremath{\mathcal{A}}^{(k)}(\ensuremath{\mathbb C}^{2n+1})$. \end{lemma}
\proof A holomorphic function on $\ensuremath{\mathbb C}^{2n+1}$ has an expansion
\begin{equation} \label{taylorexp} \sum c_{l_1,...,l_n,m_1,...,m_n,q}(v_1)^{l_1}...(v_n)^{l_n} (\xi_1)^{m_1}...(\xi_n)^{m_n}\zeta^q, \end{equation}
where $l_j$, $m_j$, $q$ are nonnegative integers and $c_{l_1,...,l_n,m_1,...,m_n,q}$ are complex numbers (Theorem 1.18 of \cite{range:86}, Taylor series of a holomorphic function on a polydisc). A monomial in ${\mathbf{v}},{\bm{\xi}}$, times $\zeta^q$ with $q>0$, is not a square integrable function on $\ensuremath{\mathbb C}^{2n+1}$, since $\int_{\ensuremath{\mathbb C}}\frac{|\zeta|^{2q}}{\pi(1+|\zeta|^2)^2}d\mu(\zeta)=\infty$ for $q\ge 1$. More generally, a holomorphic function whose expansion (\ref{taylorexp}) involves a term with $\zeta^q$, $q>0$, is not square-integrable. It is straightforward to check that the functions $\psi_{l,m}$ form an orthonormal set. $\Box$
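The normalization of the $\psi_{l,m}$ comes down to the standard Gaussian moments, computed in polar coordinates (substitute $u=r^2$): for a nonnegative integer $l$,
$$ \int_{\ensuremath{\mathbb C}}|v|^{2l}\,\frac{k}{\pi}e^{-k|v|^2}d\mu(v)=k\int_0^{\infty}u^{l}e^{-ku}du=\frac{l!}{k^{l}}, $$
which, together with $\int_{\ensuremath{\mathbb C}}\frac{d\mu(\zeta)}{\pi(1+|\zeta|^2)^2}=1$, gives $||\psi_{l,m}||=1$; orthogonality of monomials with distinct exponents follows from the angular integration.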
\begin{lemma} For each fixed $p=({\mathbf{v}},{\bm{\xi}},\zeta)\in \ensuremath{\mathbb C}^{2n+1}$, the functional
$$ \Phi_{p}:f\mapsto f(p), \ f\in \ensuremath{\mathcal{A}}^{(k)}(\ensuremath{\mathbb C}^{2n+1}) $$
is a continuous linear functional on $\ensuremath{\mathcal{A}}^{(k)}(\ensuremath{\mathbb C}^{2n+1})$. \end{lemma}
\proof Take $K$ to be the one point set $\{ p\}$ and apply Lemma \ref{lemsupest}. The statement follows. $\Box$
Then, by the Riesz representation theorem there is an element $\kappa_{p}^{(k)}\in \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ such that the linear functional $\Phi_{p}$ is given by the inner product with $\kappa_{p}^{(k)}$:
$$ \langle f,\kappa_{p}^{(k)}\rangle=f(p) $$
for all $f\in \ensuremath{\mathcal{A}}^{(k)} (\ensuremath{\mathbb C}^{2n+1})$. The (weighted) Bergman kernel $\ensuremath{\mathcal{K}}^{(k)}$ is
$$ \ensuremath{\mathcal{K}}^{(k)} (p,q)=\overline{\kappa_{p}^{(k)}(q)}. $$
It has the reproducing property
\begin{equation} \label{reprop1} h(p)=\int_{\ensuremath{\mathbb C}^{2n+1}} \ensuremath{\mathcal{K}}^{(k)}(p,q)h(q)d\mu_k(q) \end{equation}
for all $h\in \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$. The map
$$ P^{(k)}:L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)\to \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1}) $$
$$ f\mapsto \int_{\ensuremath{\mathbb C}^{2n+1}} \ensuremath{\mathcal{K}}^{(k)}(p,q)f(q)d\mu_k(q) $$
is the orthogonal projection. The explicit expression for $\ensuremath{\mathcal{K}}^{(k)}$ is
$$ \ensuremath{\mathcal{K}}^{(k)}({\mathbf{u}},{\bm{\eta}},\tau;{\mathbf{v}},{\bm{\xi}},\zeta )= \sum\psi_{l,m}({\mathbf{u}},{\bm{\eta}})\overline{\psi_{l,m}({\mathbf{v}},{\bm{\xi}})}= e^{k({\mathbf{u}}\cdot\bar{\mathbf{v}}+ {\bm{\eta}}\cdot\bar{\bm{\xi}} )}. $$
Thus, the reproducing property (\ref{reprop1}) is
$$ h({\mathbf{u}},{\bm{\eta}},\tau)=\int_{\ensuremath{\mathbb C}^{2n+1}} h({\mathbf{v}},{\bm{\xi}},\zeta )e^{k({\mathbf{u}}\cdot\bar{\mathbf{v}}+ {\bm{\eta}}\cdot\bar{\bm{\xi}})}d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta) $$
or
\begin{equation} \label{reprop2} h({\mathbf{u}},{\bm{\eta}},\zeta)=(\frac{k}{\pi})^{2n} \int_{\ensuremath{\mathbb C}^{2n}} h({\mathbf{v}},{\bm{\xi}},\zeta )e^{k({\mathbf{u}}\cdot\bar{\mathbf{v}}+ {\bm{\eta}}\cdot\bar{\bm{\xi}})} e^{-k({\mathbf{v}}\cdot\bar{\mathbf{v}}+ {\bm{\xi}}\cdot\bar{\bm{\xi}})} d\mu({\mathbf{v}},{\bm{\xi}}). \end{equation}
Given a bounded function $f$ on $\ensuremath{\mathbb C}^{2n+1}$ (so that $fh\in L^2(\ensuremath{\mathbb C}^{2n+1},d\mu_k)$ for every $h\in \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$; only bounded symbols will occur below), the Toeplitz operator $T_f^{(k)}$, $k\in\ensuremath{{\mathbb N}}$, is the linear operator
\begin{equation} \label{toeplitzop} T_f^{(k)}:\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})\to \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1}) \end{equation}
$$ h\mapsto P^{(k)}(fh). $$
One usually associates to $f$ the sequence $\{ T_f^{(k)}\ | \ k=1,2,3,...\}$. So,
$$ (T_f^{(k)}h)(p)=\int_{\ensuremath{\mathbb C}^{2n+1}}f(q)\ensuremath{\mathcal{K}}^{(k)}(p,q)h(q)d\mu_k(q). $$
\begin{remark} What we have just described is standard for the complex Euclidean space with the Gaussian measure (see e.g. \cite{coburn:92}, \cite{berger:86}); here, however, we consider $\ensuremath{\mathbb C}^{2n+1}$ with the measure (\ref{measuremuk}). \end{remark}
The following observation will be useful. For ${\mathbf{a}}, {\mathbf{b}}\in\ensuremath{\mathbb C}^n$, the operator defined below is a unitary operator on $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$:
$$ U_{({\mathbf{a}},{\mathbf{b}})}^{(k)}:\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})\to \ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1}) $$
$$ (U_{({\mathbf{a}},{\mathbf{b}})}^{(k)}f)({\mathbf{v}},{\bm{\xi}},\zeta)= e^{k({\mathbf{v}}\cdot \bar{\mathbf{a}}+{\bm{\xi}}\cdot\bar{\mathbf{b}}) -\frac{k}{2}({\mathbf{a}}\cdot \bar{\mathbf{a}}+{\mathbf{b}}\cdot \bar{\mathbf{b}})} f({\mathbf{v-a}},{\bm{\xi}}{\mathbf{-b}},\zeta). $$
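Let us sketch the (routine) verification that $U_{({\mathbf{a}},{\mathbf{b}})}^{(k)}$ is indeed unitary. Expanding $({\mathbf{v}}-{\mathbf{a}})\cdot\overline{({\mathbf{v}}-{\mathbf{a}})}$ and $({\bm{\xi}}-{\mathbf{b}})\cdot\overline{({\bm{\xi}}-{\mathbf{b}})}$ gives the pointwise identity
$$ \Bigl | e^{k({\mathbf{v}}\cdot \bar{\mathbf{a}}+{\bm{\xi}}\cdot\bar{\mathbf{b}}) -\frac{k}{2}({\mathbf{a}}\cdot \bar{\mathbf{a}}+{\mathbf{b}}\cdot \bar{\mathbf{b}})}\Bigr |^2 e^{-k({\mathbf{v}}\cdot\bar{\mathbf{v}}+{\bm{\xi}}\cdot\bar{\bm{\xi}})} = e^{-k\bigl (({\mathbf{v}}-{\mathbf{a}})\cdot\overline{({\mathbf{v}}-{\mathbf{a}})}+({\bm{\xi}}-{\mathbf{b}})\cdot\overline{({\bm{\xi}}-{\mathbf{b}})}\bigr )}, $$
so the substitution ${\mathbf{v}}\mapsto {\mathbf{v}}+{\mathbf{a}}$, ${\bm{\xi}}\mapsto {\bm{\xi}}+{\mathbf{b}}$ in the integral shows $||U_{({\mathbf{a}},{\mathbf{b}})}^{(k)}f||=||f||$; a similar direct computation with the exponents shows that $U_{(-{\mathbf{a}},-{\mathbf{b}})}^{(k)}$ is a two-sided inverse.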
\section{Asymptotic estimates} \label{sec:asymp} In this section, we state and prove the main theorem. We recall that the twistor space $\Tw (\ensuremath{\mathbb R}^{4n})$ is equipped with two projections:
\[ \xymatrix{& \Tw(\ensuremath{\mathbb R}^{4n}) \ar[rd]^\pi \ar[dl]_\sigma & \\ \ensuremath{\mathbb R}^{4n} & & \mathbb{CP}^1.} \]
Choose and fix a point $\tau_0\in \mathbb{CP}^1$. We specified a diffeomorphism
$$ \iota_{\tau_0}:\Tw(\ensuremath{\mathbb R}^{4n})-\pi^{-1}(\tau_0)\to \ensuremath{\mathbb C}^{2n+1}. $$
Explicitly, if $\tau_0$ corresponds to $(a_0,b_0,c_0)\in\mathbb{S}^2\subset\ensuremath{\mathbb R}^3$ (see sections \ref{subsec:twistscs} and \ref{subsec:twistsfiberrem}), we have a map
\begin{equation} \label{diffeo} \ensuremath{\mathbb R}^{4n}\times (\mathbb{S}^2-\{ (a_0,b_0,c_0)\})\to \ensuremath{\mathbb C}^{2n+1}. \end{equation}
If $(a_0,b_0,c_0)=(-1,0,0)$, then this map is given by
$$ ( {\mathbf{x}}, (a,b,c))\mapsto ({\mathbf{v}},{\bm{\xi}},\zeta), $$
where
\begin{equation} \label{complexcoord1} \begin{split} \zeta=\frac{-c+ib}{a+1} \\ {\mathbf{v}}={\mathbf{z}}+\zeta\bar{\mathbf{w}}, \ {\bm{\xi}}={\mathbf{w}}-\zeta\bar{\mathbf{z}} \end{split} \end{equation}
with the notation
$$ {\mathbf{z}}={\mathbf{x_1}}+i{\mathbf{x_2}}, \ {\mathbf{w}}={\mathbf{x_3}}+i{\mathbf{x_4}}. $$
If $(a_0,b_0,c_0)\ne (-1,0,0)$, then let us set $\psi$ in the formulas of section \ref{subsec:twistsfiberrem} so that $e^{2i\psi}=-1$, and then
$$ ( {\mathbf{x}}, (a,b,c))\mapsto ({\mathbf{v}},{\bm{\xi}},\zeta), $$
where
\begin{equation} \label{complexcoord2} \begin{split} \zeta=\frac{\bar\zeta_0 (-c+ib)+a+1}{-c+ib-\zeta_0(a+1)} \\ {\mathbf{v }}=(\zeta -\bar\zeta_0){\mathbf {z}}+(\zeta_0 \zeta+1)\bar{\mathbf {w}} \\ {\bm{\xi }}=(\zeta -\bar\zeta_0){\mathbf {w}}-(\zeta_0 \zeta+1)\bar{\mathbf {z}} . \end{split} \end{equation}
There is a map
\begin{equation} \label{mapfunext} C^{\infty}(\ensuremath{\mathbb R}^{4n})\to C^{\infty}(\ensuremath{\mathbb C}^{2n+1}) \end{equation}
$$ h\mapsto h\circ \sigma\circ (\iota_{\tau_0})^{-1}=\tilde{h}. $$
Compose this map with the map
\begin{equation} \label{funmap2} C^{\infty}(\ensuremath{\mathbb C}^{2n+1})\to C^{\infty}(\ensuremath{\mathbb C}^{2n+1}) \end{equation}
$$ \tilde{h}\mapsto \tilde{h}_{red} $$
where $\tilde{h}_{red}$ is defined at $({\mathbf{v}},{\bm{\xi}},\eta)$ by
$$ \tilde{h}_{red}({\mathbf{v}},{\bm{\xi}},\eta)= \int_{\ensuremath{\mathbb C}}\tilde{h}({\mathbf{v}},{\bm{\xi}},\zeta)\frac{1}{\pi(1+|\zeta|^2)^2}d\mu(\zeta). $$
It is straightforward to verify that the composition of the two maps above is a well-defined linear map $C^{\infty}(\ensuremath{\mathbb R}^{4n})\to C^{\infty}(\ensuremath{\mathbb C}^{2n+1})$ (in particular, to conclude that the integral is finite for each ${\mathbf{v}},{\bm{\xi}}$, we can apply Taylor's Theorem (\cite[Th. 2.4.15]{abraham:83}) to the function $h$ and use the explicit formulas for ${\mathbf{v}},{\bm{\xi}}$ provided above). Let us illustrate (\ref{mapfunext}) and (\ref{funmap2}) by an example.
\begin{example} Suppose $n=1$ and the complex coordinates on $\Tw (\ensuremath{\mathbb R}^4)$ with one fiber removed are, as in (\ref{complexcoord1}) above,
$$ v=z+\zeta \bar w, \ \xi=w-\zeta\bar z. $$
Then
$$ x_1+ix_2=z= \frac{1}{1+\zeta\bar\zeta}(v-\zeta\bar \xi) $$
$$ x_3+ix_4= w= \frac{1}{1+\zeta\bar\zeta}(\xi+\zeta\bar v). $$
Let $h:\ensuremath{\mathbb R}^4\to\ensuremath{\mathbb C}$ be defined by $h(x)=2x_1=z+\bar z$. Then
$$ \tilde{h}(v,\xi,\zeta)= \frac{1}{1+\zeta\bar\zeta}(v+\bar v-\zeta\bar\xi-\bar\zeta\xi) $$
and
$$ \tilde{h}_{red}(v,\xi,\eta)=\int_{\ensuremath{\mathbb C}}\tilde{h}(v,\xi,\zeta)\frac{1}{\pi(1+|\zeta|^2)^2} \ dRe(\zeta)dIm(\zeta)=4 Re(v)\int_0^{\infty}\frac{rdr}{(1+r^2)^3} =Re(v). $$
More generally, for $h(x)=(2x_1)^{p}$, where $p$ is a positive integer, a similar calculation shows that the part of $\tilde{h}_{red}(v,\xi,\eta)$ of top degree in $Re(v)$ is $\frac{2^p(Re \ v)^p}{p+1}$. \end{example}
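As a further illustration, here is one more case that can be checked by the same direct computation.
\begin{example} For $h:\ensuremath{\mathbb R}^4\to\ensuremath{\mathbb C}$ defined by $h(x)=x_1^2+x_2^2+x_3^2+x_4^2=|z|^2+|w|^2$, the identity $|v|^2+|\xi|^2=(1+\zeta\bar\zeta)(|z|^2+|w|^2)$ gives
$$ \tilde{h}(v,\xi,\zeta)=\frac{|v|^2+|\xi|^2}{1+\zeta\bar\zeta}, \ \ \ \tilde{h}_{red}(v,\xi,\eta)=(|v|^2+|\xi|^2)\int_{\ensuremath{\mathbb C}}\frac{d\mu(\zeta)}{\pi(1+|\zeta|^2)^3}=\frac{|v|^2+|\xi|^2}{2}. $$
\end{example}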
For two complex-valued $C^2$ functions on $\ensuremath{\mathbb C}^{2n+1}$ define
$$ \{ f,g\} =-i\Bigl (\frac{\partial f}{\partial \zeta}\frac{\partial g}{\partial \bar \zeta}- \frac{\partial g}{\partial \zeta}\frac{\partial f}{\partial \bar \zeta}+ \sum_{l=1}^n\Bigl ( \frac{\partial f}{\partial v_l}\frac{\partial g}{\partial \bar v_l} - \frac{\partial g}{\partial v_l}\frac{\partial f}{\partial \bar v_l} + \frac{\partial f}{\partial \xi_l}\frac{\partial g}{\partial \bar \xi_l}- \frac{\partial g}{\partial \xi_l}\frac{\partial f}{\partial \bar \xi_l} \Bigr )\Bigr ). $$
This bracket is the Poisson bracket for the symplectic form on $\ensuremath{\mathbb C}^{2n+1}$
$$ \omega_0=i\Bigl ( d\zeta\wedge d\bar\zeta+\sum_{l=1}^n(dv_l\wedge d\bar v_l+d\xi_l\wedge d\bar\xi_l)\Bigr ) . $$
On the twistor space with a fiber removed, there is another $2$-form of interest, $\omega_M$: see (\ref{omega_decomp}) and the further discussion in section \ref{degtwistors}. The pull-backs of $\omega_M$ and $\omega_0$ to each fiber of $\pi$ are constant multiples of each other (the constant factor depends on the point of the $2$-sphere). Here is the main result. Assume $f_{\ensuremath{\mathbb R}^{4n}}$ and $g_{\ensuremath{\mathbb R}^{4n}}$ are smooth, compactly supported, complex-valued functions on $\ensuremath{\mathbb R}^{4n}$. Write $f$ and $g$, respectively, for the functions $\tilde{f}_{\ensuremath{\mathbb R}^{4n}}$ and $\tilde{g}_{\ensuremath{\mathbb R}^{4n}}$ on $\ensuremath{\mathbb C}^{2n+1}$ defined by (\ref{mapfunext}). Then, as discussed above, the corresponding functions $f_{red}$ and $g_{red}$ on $\ensuremath{\mathbb C}^{2n+1}$ given by (\ref{funmap2}) are well-defined. Furthermore,
\begin{equation} \label{toeq} T_{f}^{(k)}=T_{f_{red}}^{(k)}, \ T_{g}^{(k)}=T_{g_{red}}^{(k)} \end{equation}
(this follows by integrating in $\zeta$ first in $\langle T_f^{(k)}h,h'\rangle$, since the elements of $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ do not depend on $\zeta$).
\begin{theorem} \label{mainth} There is a constant $C=C(f,g)$ such that
$$ ||T_f^{(k)}T_g^{(k)}-T_{f_{red}g_{red}}^{(k)}+\frac{1}{k}T^{(k)}_ {\sum_{j=1}^n\Bigl ( \frac{\partial f_{red}}{\partial v_j}\frac{\partial g_{red}}{\partial \overline{ v _j}} + \frac{\partial f_{red}}{\partial \xi_j}\frac{\partial g_{red}}{\partial \overline{ \xi_j}} \Bigr )}|| \le \frac{C}{k^2}, $$
$$ ||T_{f_{red}}^{(k)}T_{g_{red}}^{(k)}-T_{f_{red}g_{red}}^{(k)}+\frac{1}{k}T^{(k)}_ {\sum_{j=1}^n\Bigl ( \frac{\partial f_{red}}{\partial v_j}\frac{\partial g_{red}}{\partial \overline{ v _j}} + \frac{\partial f_{red}}{\partial \xi_j}\frac{\partial g_{red}}{\partial \overline{ \xi_j}} \Bigr )}|| \le \frac{C}{k^2} $$
for all $k>0$.
\end{theorem}
\begin{corollary} There is a constant $C=C(f,g)$ such that
$$ ||ik[T_{f}^{(k)},T_{g}^{(k)}]-T_{\{ f_{red},g_{red}\} }^{(k)}||\le \frac{C}{k} $$
and
$$ ||ik[T_{{f_{red}}}^{(k)},T_{{g_{red}}}^{(k)}]-T_{\{ {f_{red}},{g_{red}}\} }^{(k)}||\le \frac{C}{k} $$
for all $k>0$.
\end{corollary}
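For the reader's convenience, let us indicate how the corollary follows from the theorem; this is the standard commutator argument and uses only the estimates just stated. Applying the first estimate twice, with the roles of $f$ and $g$ interchanged, and subtracting (the $T_{f_{red}g_{red}}^{(k)}$ terms cancel), we get
$$ ||[T_f^{(k)},T_g^{(k)}]+\frac{1}{k}T^{(k)}_ {\sum_{j=1}^n\Bigl ( \frac{\partial f_{red}}{\partial v_j}\frac{\partial g_{red}}{\partial \overline{ v_j}} -\frac{\partial g_{red}}{\partial v_j}\frac{\partial f_{red}}{\partial \overline{ v_j}} + \frac{\partial f_{red}}{\partial \xi_j}\frac{\partial g_{red}}{\partial \overline{ \xi_j}} -\frac{\partial g_{red}}{\partial \xi_j}\frac{\partial f_{red}}{\partial \overline{ \xi_j}} \Bigr )}|| \le \frac{2C}{k^2}. $$
Multiplying by $ik$ and noting that $f_{red}$ and $g_{red}$ do not depend on $\zeta$, so that the $\zeta$-derivatives in $\{ f_{red},g_{red}\}$ contribute nothing, we obtain the first inequality of the corollary; the second follows in the same way from the second estimate.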
\noindent {\bf Proof of Theorem \ref{mainth}.} The proof is a modification of the proof of Theorem 2 in \cite{coburn:92}; this way of obtaining asymptotics for Toeplitz operators is quite common, see e.g. \cite{klimek:92}. For $\varphi$, $\psi$ in $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$, and writing $\int$ for $\int_{\ensuremath{\mathbb C}^{2n+1}}$, we have:
$$ \langle T_f^{(k)}T_g^{(k)}\varphi,\psi\rangle= \int_{\ensuremath{\mathbb C}^{2n+1}} (T_f^{(k)}T_g^{(k)}\varphi ) ({\mathbf{v}},{\bm{\xi}},\zeta) \overline{\psi({\mathbf{v}},{\bm{\xi}},\zeta)}d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta)= $$
$$ \int \int f({\mathbf{u}},{\bm{\eta}},\tau) (T_g^{(k)}\varphi ) ({\mathbf{u}},{\bm{\eta}},\tau)e^{k({\mathbf{v}}\cdot\bar {\mathbf{u}}+ {\bm{\xi}}\cdot\bar{\bm{\eta}})}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau) \overline{\psi({\mathbf{v}},{\bm{\xi}},\zeta)}d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta). $$
Since, by the reproducing property,
$$ \int e^{k({\mathbf{u}}\cdot\bar {\mathbf{v}}+{\bm{\eta}}\cdot\bar{\bm{\xi}})} \psi({\mathbf{v}},{\bm{\xi}},\zeta)d\mu_k({\mathbf{v}},{\bm{\xi}},\zeta)= \psi({\mathbf{u}},{\bm{\eta}},\tau) $$
for each $\tau$, we get:
$$ \langle T_f^{(k)}T_g^{(k)}\varphi,\psi\rangle= \int f({\mathbf{u}},{\bm{\eta}},\tau) (T_g^{(k)}\varphi ) ({\mathbf{u}},{\bm{\eta}},\tau) \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau)= $$
$$ \int\int f({\mathbf{u}},{\bm{\eta}},\tau) g({\mathbf{e}},{\bm{\beta}},\chi) \varphi({\mathbf{e}},{\bm{\beta}},\chi) e^{k({\mathbf{u}}\cdot\bar {\mathbf{e}}+{\bm{\eta}}\cdot\bar{\bm{\beta}})}d\mu_k({\mathbf{e}},{\bm{\beta}},\chi) \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
Now let ${\mathbf{a}}={\mathbf{e}}-{\mathbf{u}}$ and ${\mathbf{b}}={\bm{\beta}}-{\bm{\eta}}$. The integral above becomes
$$ \int\int f({\mathbf{u}},{\bm{\eta}},\tau) g({\mathbf{u}}+{\mathbf{a}},{\bm{\eta}}+{\mathbf{b}},\chi) \varphi({\mathbf{u}}+{\mathbf{a}},{\bm{\eta}}+{\mathbf{b}},\chi) e^{k({\mathbf{u}}\cdot\overline{( {\mathbf{u}}+{\mathbf{a}})}+{\bm{\eta}}\cdot\overline{({\bm{\eta}}+{\mathbf{b}})})} $$
$$ d\mu_k({\mathbf{u}}+{\mathbf{a}},{\bm{\eta}}+{\mathbf{b}},\chi) \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau)= $$
\begin{equation} \label{mainintegral} \int\int f({\mathbf{u}},{\bm{\eta}},\tau) g({\mathbf{u}}+{\mathbf{a}},{\bm{\eta}}+{\mathbf{b}},\chi) (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) \end{equation}
$$ d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
Now, for $m=2n+3$, we write $g({\mathbf{u}}+{\mathbf{a}},{\bm{\eta}}+{\mathbf{b}},\chi)$ as the sum of the $m$-th Taylor polynomial at $({\mathbf{u}},{\bm{\eta}},\chi)$ plus the remainder $g_{m+1}$:
$$ g=g-g_{m+1}+g_{m+1}. $$
First, we establish that the part of the integral with $g_{m+1}$ is bounded by $\dfrac{1}{k^2}$ times a positive constant. We use Taylor's Theorem (\cite[Th. 2.4.15]{abraham:83}; see also Lemma 5 of \cite{coburn:92}).
We get:
$$ | \int\int f({\mathbf{u}},{\bm{\eta}},\tau) g_{m+1} (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau)|\le $$
$$ \sum_{p_1+...+p_{2n}+l_1+...+l_{2n}+j_1+j_2=m+1}|c(p_1,...,p_{2n},l_1,...,l_{2n},j_1,j_2)| |\partial_{v_1}^{p_1}...\partial_{v_n}^{p_n}\partial_{\xi_1}^{p_{n+1}}...\partial_{\xi_n}^{p_{2n}} \partial_{\bar v_1}^{l_1}...\partial_{\bar v_n}^{l_n}\partial_{\bar \xi_1}^{l_{n+1}}...\partial_{\bar \xi_n}^{l_{2n}} \partial_{\zeta}^{j_1}\partial_{\bar\zeta}^{j_2}g|_{\infty} $$
$$ \int(|a_1|^2+...+|a_n|^2+|b_1|^2+...+|b_n|^2)^{\frac{m+1}{2}} |(U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi)| d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) $$
$$ \int |f({\mathbf{u}},{\bm{\eta}},\tau)|e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} |\psi({\mathbf{u}},{\bm{\eta}},\tau)|d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
Using the Cauchy-Schwarz inequality on the first integral and then the fact that $U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)}$ is a unitary operator, we conclude that the expression above does not exceed
$$ \sum_{p_1+...+p_{2n}+l_1+...+l_{2n}+j_1+j_2=m+1}|c(p_1,...,p_{2n},l_1,...,l_{2n},j_1,j_2)| |\partial_{v_1}^{p_1}...\partial_{v_n}^{p_n}\partial_{\xi_1}^{p_{n+1}}...\partial_{\xi_n}^{p_{2n}} \partial_{\bar v_1}^{l_1}...\partial_{\bar v_n}^{l_n}\partial_{\bar \xi_1}^{l_{n+1}}...\partial_{\bar \xi_n}^{l_{2n}} \partial_{\zeta}^{j_1}\partial_{\bar\zeta}^{j_2}g|_{\infty} $$
$$ ||\varphi|| \ \Bigl (\int(|a_1|^2+...+|a_n|^2+|b_1|^2+...+|b_n|^2)^{m+1} d\mu_k({\mathbf{a}},{\mathbf{b}},\chi)\Bigr )^{\frac{1}{2}} $$
$$ \int |f({\mathbf{u}},{\bm{\eta}},\tau)|e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} |\psi({\mathbf{u}},{\bm{\eta}},\tau)|d\mu_k({\mathbf{u}},{\bm{\eta}},\tau)\le $$
$$ \sum_{p_1+...+p_{2n}+l_1+...+l_{2n}+j_1+j_2=m+1}|c(p_1,...,p_{2n},l_1,...,l_{2n},j_1,j_2)| |\partial_{v_1}^{p_1}...\partial_{v_n}^{p_n}\partial_{\xi_1}^{p_{n+1}}...\partial_{\xi_n}^{p_{2n}} \partial_{\bar v_1}^{l_1}...\partial_{\bar v_n}^{l_n}\partial_{\bar \xi_1}^{l_{n+1}}...\partial_{\bar \xi_n}^{l_{2n}} \partial_{\zeta}^{j_1}\partial_{\bar\zeta}^{j_2}g|_{\infty} $$
$$ \Bigl (\int(|a_1|^2+...+|a_n|^2+|b_1|^2+...+|b_n|^2)^{m+1}d\mu_k({\mathbf{a}},{\mathbf{b}},\chi)\Bigr )^{\frac{1}{2}} \ ||\varphi|| \ ||\psi|| $$
$$ \Bigl ( \int |f({\mathbf{u}},{\bm{\eta}},\tau)|^2 \Bigl (\frac{k}{\pi}\Bigr )^{2n}\frac{1}{\pi(1+|\tau|^2)^2} d\mu({\mathbf{u}},{\bm{\eta}},\tau)\Bigr )^{\frac{1}{2}}. $$
By Lemma 4 of \cite{coburn:92},
$$ \int(|a_1|^2+...+|a_n|^2+|b_1|^2+...+|b_n|^2)^{m+1}d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) =b(m+1,n)\frac{1}{k^{m+1}} $$
where the constant $b(m+1,n)$ does not depend on $k$. Thus we obtain an estimate of order $\dfrac{1}{k^{\frac{m+1}{2}-n}}$, which, for $m=2n+3$, is $\dfrac{1}{k^2}$. This concludes the part of the argument involving the Taylor remainder.
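The $k$-dependence in the last step can also be seen directly by scaling, which is all that is behind Lemma 4 of \cite{coburn:92}: substituting ${\mathbf{a}}\mapsto {\mathbf{a}}/\sqrt{k}$, ${\mathbf{b}}\mapsto {\mathbf{b}}/\sqrt{k}$ turns the weight $(\frac{k}{\pi})^{2n}e^{-k({\mathbf{a}}\cdot\bar{\mathbf{a}}+{\mathbf{b}}\cdot\bar{\mathbf{b}})}d\mu({\mathbf{a}},{\mathbf{b}})$ into $(\frac{1}{\pi})^{2n}e^{-({\mathbf{a}}\cdot\bar{\mathbf{a}}+{\mathbf{b}}\cdot\bar{\mathbf{b}})}d\mu({\mathbf{a}},{\mathbf{b}})$, while $(|a_1|^2+...+|b_n|^2)^{m+1}$ produces the factor $k^{-(m+1)}$; the $\chi$-integration contributes total mass $1$. Hence $b(m+1,n)$ is the corresponding moment of the standard Gaussian measure on $\ensuremath{\mathbb C}^{2n}$.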
Now we consider the part of (\ref{mainintegral}) that involves the $m$-th Taylor polynomial of $g$:
\begin{equation} \label{integralwithpoly} \int\int f({\mathbf{u}},{\bm{\eta}},\tau) (g-g_{m+1}) (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). \end{equation}
A typical term of the polynomial $g-g_{m+1}$ is of the form
$$ c(p_1,...,p_{2n},l_1,...,l_{2n},0,0) \partial_{u_1}^{p_1}...\partial_{u_n}^{p_n}\partial_{\eta_1}^{p_{n+1}}...\partial_{\eta_n}^{p_{2n}} \partial_{\bar u_1}^{l_1}...\partial_{\bar u_n}^{l_n}\partial_{\bar \eta_1}^{l_{n+1}}...\partial_{\bar \eta_n}^{l_{2n}} g({\mathbf{u}},{\bm{\eta}},\chi) $$
$$ a_1^{p_1}...a_n^{p_n}b_1^{p_{n+1}}...b_n^{p_{2n}} \bar a_1^{l_1}...\bar a_n^{l_n}\bar b_1^{l_{n+1}}...\bar b_n^{l_{2n}}, $$
where the $c(p_1,...,p_{2n},l_1,...,l_{2n},0,0)$ are constants. By Lemma 3 of \cite{coburn:92},
$$ \int a_1^{p_1}...a_n^{p_n}b_1^{p_{n+1}}...b_n^{p_{2n}} \bar a_1^{l_1}...\bar a_n^{l_n}\bar b_1^{l_{n+1}}...\bar b_n^{l_{2n}} (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi)= $$
\begin{equation} \label{intermintegral} \frac{1}{k^{l_1+...+l_{2n}}}\int_{\ensuremath{\mathbb C}} \partial_{a_1}^{l_1}...\partial_{a_n}^{l_n}\partial_{b_1}^{l_{n+1}}...\partial_{b_n}^{l_{2n}} \Bigl [ a_1^{p_1}...a_n^{p_n}b_1^{p_{n+1}}...b_n^{p_{2n}} (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) \Bigr ] \Bigr |_{{\mathbf{a}}={\mathbf{b}}={\mathbf{0}}} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. \end{equation}
If there is a $j$ such that $l_j<p_j$, then this expression is equal to zero. Assume $l_j\ge p_j$ for all $j\in \{ 1,...,2n\}$. Then (\ref{intermintegral}) is a sum of terms, with coefficients independent of $k$, of the form
$$ \frac{1}{k^{l_1+...+l_{2n}}}\int_{\ensuremath{\mathbb C}} \partial_{a_1}^{t_1}...\partial_{a_n}^{t_n}\partial_{b_1}^{t_{n+1}}...\partial_{b_n}^{t_{2n}} (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) \Bigr |_{{\mathbf{a}}={\mathbf{b}}={\mathbf{0}}} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} $$
where all $0\le t_j\le l_j$, and by Lemma 2 of \cite{coburn:92} this is
$$ \frac{1}{k^{l_1+...+l_{2n}}}\int_{\ensuremath{\mathbb C}} e^{\frac{k}{2}({\mathbf{u}}\cdot\bar {\mathbf{u}}+{\bm{\eta}}\cdot\bar {\bm{\eta}})} \partial_{u_1}^{t_1}...\partial_{u_n}^{t_n}\partial_{\eta_1}^{t_{n+1}}...\partial_{\eta_n}^{t_{2n}} \{ \varphi ({\mathbf{u}},{\bm{\eta}},\chi)e^{-k({\mathbf{u}}\cdot\bar {\mathbf{u}}+{\bm{\eta}}\cdot\bar {\bm{\eta}})}\} \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. $$
Putting this in (\ref{integralwithpoly}), we get terms (with coefficients that do not depend on $k$)
\begin{equation} \label{nextintegral} \frac{1}{k^{l_1+...+l_{2n}}}\int_{\ensuremath{\mathbb C}} \int f({\mathbf{u}},{\bm{\eta}},\tau) \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)} \partial_{u_1}^{p_1}...\partial_{u_n}^{p_n}\partial_{\eta_1}^{p_{n+1}}...\partial_{\eta_n}^{p_{2n}} \partial_{\bar u_1}^{l_1}...\partial_{\bar u_n}^{l_n}\partial_{\bar \eta_1}^{l_{n+1}}...\partial_{\bar \eta_n}^{l_{2n}} g({\mathbf{u}},{\bm{\eta}},\chi) \end{equation}
$$ \partial_{u_1}^{t_1}...\partial_{u_n}^{t_n}\partial_{\eta_1}^{t_{n+1}}...\partial_{\eta_n}^{t_{2n}} \{ \varphi ({\mathbf{u}},{\bm{\eta}},\chi)e^{-k({\mathbf{u}}\cdot\bar {\mathbf{u}}+{\bm{\eta}}\cdot\bar {\bm{\eta}})}\} \frac{1}{\pi(1+|\tau|^2)^2} (\frac{k}{\pi})^{2n} d\mu({\mathbf{u}},{\bm{\eta}},\tau) \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. $$
Recall that $l_j\ge p_j$ and $l_j\ge t_j$ for all $j\in \{ 1,...,2n\}$ and $p_1+...+p_{2n}+l_1+...+l_{2n}\le m$. After complex integration by parts (taking into account that $f$ and $g$ have compact support with respect to the variables $u_1$, ... ,$u_n$, $\eta_1$, ..., $\eta_n$), we conclude that (\ref{nextintegral}) becomes a sum of terms (with coefficients that do not depend on $k$)
$$ \frac{1}{k^{l_1+...+l_{2n}}}\int_{\ensuremath{\mathbb C}} \int \varphi ({\mathbf{u}},{\bm{\eta}},\chi)\overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)} \partial_{u_1}^{p_1+s_1}...\partial_{u_n}^{p_n+s_n}\partial_{\eta_1}^{p_{n+1}+s_{n+1}}...\partial_{\eta_n}^{p_{2n}+s_{2n}} \partial_{\bar u_1}^{l_1}...\partial_{\bar u_n}^{l_n}\partial_{\bar \eta_1}^{l_{n+1}}...\partial_{\bar \eta_n}^{l_{2n}} g({\mathbf{u}},{\bm{\eta}},\chi) $$
$$ \partial_{u_1}^{q_1}...\partial_{u_n}^{q_n}\partial_{\eta_1}^{q_{n+1}}...\partial_{\eta_n}^{q_{2n}} f({\mathbf{u}},{\bm{\eta}},\tau) d\mu_k({\mathbf{u}},{\bm{\eta}},\tau) \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. $$
Thus, the terms with $l_1+...+l_{2n}>1$ contribute expressions bounded by $\dfrac{const}{k^2}$. It remains to consider the cases $l_1+...+l_{2n}=0$ and $l_1+...+l_{2n}=1$. Looking at (\ref{intermintegral}), we conclude that the part of (\ref{integralwithpoly}) with $l_1+...+l_{2n}=0$ is
\begin{equation} \label{zeroterm} \int\int f({\mathbf{u}},{\bm{\eta}},\tau) g({\mathbf{u}},{\bm{\eta}},\chi) (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). \end{equation}
Setting ${\mathbf{u}}+{\mathbf{a}}={\mathbf{v}}$ and ${\bm{\eta}}+{\mathbf{b}}={\bm{\xi}}$, and using the reproducing property (\ref{reprop2}), we observe that for each ${\mathbf{u}}$, ${\bm{\eta}}$
$$ \int g({\mathbf{u}},{\bm{\eta}},\chi) (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} = \int_{\ensuremath{\mathbb C}} g({\mathbf{u}},{\bm{\eta}},\chi) \varphi({\mathbf{u}},{\bm{\eta}},\chi)\frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. $$
So, we rewrite (\ref{zeroterm}) as
$$ \int\int_{\ensuremath{\mathbb C}} f({\mathbf{u}},{\bm{\eta}},\tau) g({\mathbf{u}},{\bm{\eta}},\chi) \varphi({\mathbf{u}},{\bm{\eta}},\chi)\frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
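We note that, since $\varphi$ and $\psi$ belong to $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ and therefore do not depend on the last variable, the $\chi$- and $\tau$-integrations in the last expression can be carried out first, producing $g_{red}$ and $f_{red}$; one checks in this way that the $l_1+...+l_{2n}=0$ part equals
$$ \int_{\ensuremath{\mathbb C}^{2n}} f_{red}({\mathbf{u}},{\bm{\eta}})g_{red}({\mathbf{u}},{\bm{\eta}}) \varphi({\mathbf{u}},{\bm{\eta}})\overline{\psi({\mathbf{u}},{\bm{\eta}})} \Bigl (\frac{k}{\pi}\Bigr )^{2n} e^{-k({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} d\mu({\mathbf{u}},{\bm{\eta}}) =\langle T^{(k)}_{f_{red}g_{red}}\varphi,\psi\rangle , $$
which is the leading term in the statement of the theorem.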
Now suppose $l_1+...+l_{2n}=1$. Without loss of generality, $l_1=1$ and $l_2=...=l_{2n}=0$. Then $p_1\in\{ 0,1\}$ and $p_2=...=p_{2n}=0$. The corresponding integral is
\begin{equation} \label{firstorderterms} \int\int f({\mathbf{u}},{\bm{\eta}},\tau) \Bigl ( \partial _{\bar u_1} g({\mathbf{u}},{\bm{\eta}},\chi)\bar a_1+ \partial _{u_1} \partial _{\bar u_1}g({\mathbf{u}},{\bm{\eta}},\chi)a_1\bar a_1\Bigr ) (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) d\mu_k({\mathbf{a}},{\mathbf{b}},\chi) \end{equation}
$$ e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)}d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
By Lemma 3 of \cite{coburn:92},
$$ \int (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi)a_1\bar a_1 \ d\mu_k({\mathbf{a}},{\mathbf{b}},\chi)= \frac{1}{k}\int_{\ensuremath{\mathbb C}}\partial _{a_1}(a_1U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) \Bigr |_{{\mathbf{a}}={\mathbf{b}}={\mathbf{0}}} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}= $$
$$ \frac{1}{k}\int_{\ensuremath{\mathbb C}}(U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{0}},{\mathbf{0}},\chi) \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}= \frac{1}{k}\int_{\ensuremath{\mathbb C}} e^{-\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \varphi ({\mathbf{u}},{\bm{\eta}},\chi) \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} $$
and
$$ \int (U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi)\bar a_1 \ d\mu_k({\mathbf{a}},{\mathbf{b}},\chi)= \frac{1}{k}\int_{\ensuremath{\mathbb C}}\partial _{a_1}(U_{-({\mathbf{u}},{\bm{\eta}})}^{(k)} \varphi)({\mathbf{a}},{\mathbf{b}},\chi) \Bigr |_{{\mathbf{a}}={\mathbf{b}}={\mathbf{0}}} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} $$
which, by Lemma 2 of \cite{coburn:92}, becomes
$$ \frac{1}{k}\int_{\ensuremath{\mathbb C}} e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \partial _{u_1}\{ \varphi({\mathbf{u}},{\bm{\eta}},\chi) e^{-k({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})}\} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2}. $$
Thus, (\ref{firstorderterms}) is equal to
$$ \frac{1}{k} \int\int_{\ensuremath{\mathbb C}} f({\mathbf{u}},{\bm{\eta}},\tau) \Bigl ( \partial _{\bar u_1} g({\mathbf{u}},{\bm{\eta}},\chi) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \partial _{u_1}\{ \varphi({\mathbf{u}},{\bm{\eta}},\chi) e^{-k({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})}\} + $$
$$ \partial _{u_1} \partial _{\bar u_1}g({\mathbf{u}},{\bm{\eta}},\chi) e^{-\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \varphi ({\mathbf{u}},{\bm{\eta}},\chi) \Bigr ) e^{\frac{k}{2}({\mathbf{u}}\cdot\bar{\mathbf{u}}+{\bm{\eta}}\cdot\bar{\bm{\eta}})} \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
After a complex integration by parts on the first summand (taking into account that $f$ and $g$ are compactly supported on $\ensuremath{\mathbb C}^{2n}$), we get
$$ -\frac{1}{k} \int\int_{\ensuremath{\mathbb C}}\partial_{u_1} f({\mathbf{u}},{\bm{\eta}},\tau) \partial _{\bar u_1} g({\mathbf{u}},{\bm{\eta}},\chi) \varphi({\mathbf{u}},{\bm{\eta}},\chi) \overline{\psi({\mathbf{u}},{\bm{\eta}},\tau)} \ \frac{d\mu(\chi)}{\pi(1+|\chi|^2)^2} d\mu_k({\mathbf{u}},{\bm{\eta}},\tau). $$
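As in the $l_1+...+l_{2n}=0$ case, integrating first in $\chi$ and $\tau$, and using that differentiation in ${\mathbf{u}}$, ${\bm{\eta}}$ commutes with the reduction (\ref{funmap2}) (which involves only the last variable), one checks that this expression equals
$$ -\frac{1}{k}\Bigl \langle T^{(k)}_{\frac{\partial f_{red}}{\partial v_1}\frac{\partial g_{red}}{\partial \overline{ v_1}}}\varphi,\psi\Bigr \rangle ; $$
summing over the $2n$ possible positions of the index $j$ with $l_j=1$ yields exactly the $\frac{1}{k}$-correction term in the statement of the theorem.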
To summarize, we have shown that for arbitrary $\varphi$, $\psi$ in $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$
$$ |\langle \Bigl (T_f^{(k)}T_g^{(k)} - T_{f_{red}g_{red}}^{(k)}+\frac{1}{k}T^{(k)}_ {\sum_{j=1}^n\Bigl ( \frac{\partial f_{red}}{\partial v_j}\frac{\partial g_{red}}{\partial \overline{ v _j}} + \frac{\partial f_{red}}{\partial \xi_j}\frac{\partial g_{red}}{\partial \overline{ \xi_j}} \Bigr ) } \Bigr )\varphi ,\psi \rangle |\le \frac{const}{k^2}. $$
We recall that for a bounded selfadjoint operator $A$ on a Hilbert space $H$ we have $||A||=\underset{||x||=1}{\sup}|\langle Ax,x\rangle |$; this equality also holds if $A$ is bounded and $iA$ is selfadjoint. For an arbitrary bounded operator $B$ on $H$, we write $B=\frac{1}{2}(B+B^*)+\frac{1}{2}(B-B^*)$ (the first summand is selfadjoint, and the second becomes selfadjoint after multiplication by $i$) and conclude that $||B||\le 2\underset{||x||=1}{\sup}|\langle Bx,x\rangle |$. This proves the first statement of the theorem. The second statement follows from (\ref{toeq}). $\Box$ Let us now make some comments.
\begin{remark} In the proof, we only used that ${\mathbf{v}}$, ${\bm{\xi}}$, $\zeta$ are complex coordinates on $\Tw(\ensuremath{\mathbb R}^{4n})$ with one fiber removed. We did not use the explicit formulas (\ref{complexcoord1}) or (\ref{complexcoord2}). \end{remark}
\begin{remark} In the assumptions of the theorem, $f$ and $g$ are the functions obtained from $f_{\ensuremath{\mathbb R}^{4n}}$ and $g_{\ensuremath{\mathbb R}^{4n}}$, smooth compactly supported functions on $\ensuremath{\mathbb R}^{4n}$. In fact, for the proof it is only needed that $f_{\ensuremath{\mathbb R}^{4n}}\in C_c^{l}(\ensuremath{\mathbb R}^{4n})$ (the set of $l$ times continuously differentiable complex-valued functions on $\ensuremath{\mathbb R}^{4n}$ with compact support) and $g_{\ensuremath{\mathbb R}^{4n}}\in BC^{l}(\ensuremath{\mathbb R}^{4n})$ (the set of bounded continuous complex-valued functions on $\ensuremath{\mathbb R}^{4n}$ with all derivatives up to order $l$ bounded and continuous), with a sufficiently large value of $l$. \end{remark}
Finally, let us discuss what can be extracted directly from \cite{coburn:92} and how our theorem relates to it. Recall that a fiber of $\pi$ is $\ensuremath{\mathbb R}^{4n}$ with the complex structure $aI+bJ+cK$. In our considerations, $P=(a,b,c)$ is an arbitrary point in ${\mathbb{S}}^2-\{ (a_0,b_0,c_0)\}$, where $(a_0,b_0,c_0)$ is a fixed point in the sphere. There is a Berezin-Toeplitz quantization on each of these $(\ensuremath{\mathbb R}^{4n},aI+bJ+cK)$. Let us explain this in some detail. The map (\ref{diffeo}) induces an $\ensuremath{\mathbb R}$-linear isomorphism $S_P:\ensuremath{\mathbb R}^{4n}\to\ensuremath{\mathbb C}^{2n}$, ${\mathbf{x}}\mapsto ({\mathbf{v}},{\bm{\xi}})$, for each $(a,b,c)$ (these are different maps for different $(a,b,c)$). The space $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})$ is isomorphic to the space $\hat{\ensuremath{\mathcal{A}}} ^{(k)}(\ensuremath{\mathbb C}^{2n})$ of holomorphic functions $f$ on $\ensuremath{\mathbb C}^{2n}$ that satisfy
$$ \int_{\ensuremath{\mathbb C}^{2n}}|f({\mathbf{v}},{\bm{\xi}})|^2e^{-k({\mathbf{v}}\cdot\bar{\mathbf{v}}+ {\bm{\xi}}\cdot\bar{\bm{\xi}})} \ d\mu({\mathbf{v}},{\bm{\xi}})<\infty . $$
The isomorphism $\ensuremath{\mathcal{A}} ^{(k)}(\ensuremath{\mathbb C}^{2n+1})\to\hat{\ensuremath{\mathcal{A}} }^{(k)}(\ensuremath{\mathbb C}^{2n})$, $h\mapsto \hat{h}$, can be defined by
$$ \hat{h}({\mathbf{v}},{\bm{\xi}})=h({\mathbf{v}},{\bm{\xi}},\zeta) $$
for a fixed $\zeta\in\ensuremath{\mathbb C}$, or by
$$ \hat{h}({\mathbf{v}},{\bm{\xi}})=\int_{\ensuremath{\mathbb C}}h({\mathbf{v}},{\bm{\xi}},\zeta)\frac{1}{\pi(1+|\zeta|^2)^2}d\mu(\zeta). $$
The inverse map $\hat{h}\mapsto h$ is given by
$$ h({\mathbf{v}},{\bm{\xi}},\zeta)=\hat{h}({\mathbf{v}},{\bm{\xi}}) $$
for all $\zeta \in \ensuremath{\mathbb C}$. Let $k\in\ensuremath{{\mathbb N}}$. The Toeplitz operator with symbol $h\in L^2(\ensuremath{\mathbb C}^{2n},\Bigl (\frac{k}{\pi}\Bigr )^{2n}e^{-k({\mathbf{v}}\cdot \bar {\mathbf{v}}+{\bm{\xi}}\cdot \bar {\bm{\xi}})} d\mu( {\mathbf{v}},{\bm{\xi}}))$ (bounded symbols suffice for what follows) is
$$ (T_h^{(k)}f)({\mathbf{v}},{\bm{\xi}})=\Bigl (\frac{k}{\pi}\Bigr )^{2n}\int_{\ensuremath{\mathbb C}^{2n}}h({\mathbf{u}},{\bm{\eta}})f({\mathbf{u}},{\bm{\eta}})e^{k ( {\mathbf{v}}\cdot \bar{\mathbf{u}}+ {\bm{\xi}}\cdot\bar{\bm{\eta}})}e^{-k({\mathbf{u}}\cdot \bar {\mathbf{u}}+{\bm{\eta}}\cdot \bar {\bm{\eta}})} d\mu( {\mathbf{u}},{\bm{\eta}}). $$
Denote, for $l\in \ensuremath{{\mathbb N}}$, by $C_c^{l}$ the set of $l$ times continuously differentiable complex-valued functions on $\ensuremath{\mathbb C}^{2n}$ with compact support, and by $BC^{l}$ the set of bounded continuous complex-valued functions on $\ensuremath{\mathbb C}^{2n}$ with all derivatives up to order $l$ bounded and continuous. We obtain
\begin{theorem}\cite[Th. 2]{coburn:92} \label{prelimth} Suppose $f$ is in $C_c^{2n+3}$ and $g$ is in $BC^{4n+6}$. Then there is a constant $C=C(f,g)$ such that
$$ ||T_f^{(k)}T_g^{(k)}-T_{fg}^{(k)}+\frac{1}{k}T^{(k)}_ {\sum_{j=1}^n\Bigl ( \frac{\partial f}{\partial v_j}\frac{\partial g}{\partial \overline{ v _j}} + \frac{\partial f}{\partial \xi_j}\frac{\partial g}{\partial \overline{ \xi_j}} \Bigr )}|| \le \frac{C}{k^2} $$
for all $k>0$.
\end{theorem}
The following corollary is immediate.
\begin{corollary} Suppose $f$ and $g$ are in $C_c^{4n+6}$. Then there is a constant $C=C(f,g)$ such that
$$ ||ik[T_f^{(k)},T_g^{(k)}]-T^{(k)}_{ \{ f,g\} }|| \le \frac{C}{k} $$
for all $k>0$, where
$$ \{ f,g\} = -i\sum_{l=1}^n\Bigl ( \frac{\partial f}{\partial v_l}\frac{\partial g}{\partial \bar v_l} - \frac{\partial g}{\partial v_l}\frac{\partial f}{\partial \bar v_l} + \frac{\partial f}{\partial \xi_l}\frac{\partial g}{\partial \bar \xi_l}- \frac{\partial g}{\partial \xi_l}\frac{\partial f}{\partial \bar \xi_l} \Bigr ) $$
is the standard Poisson bracket on $\ensuremath{\mathbb C}^{2n}$.
\end{corollary}
The subtle issue is that the functions $f$ and $g$ that appear in Theorem \ref{prelimth} are functions on $\ensuremath{\mathbb C}^{2n}$ and not on $\ensuremath{\mathbb R}^{4n}$. These functions are $f=f_0\circ (S_P)^{-1}$ and $g=g_0\circ (S_P)^{-1}$, where $f_0$ and $g_0$ are functions on $\ensuremath{\mathbb R}^{4n}$. Starting with a smooth compactly supported function $f_0$ on $\ensuremath{\mathbb R}^{4n}$, we therefore get a family of functions $f$, parametrized by $P\in {\mathbb{S}}^2-\{ pt\}$, and the corresponding family of Toeplitz operators $T_f^{(k)}$.
\begin{example} Let $n=1$, $(a_0,b_0,c_0)=(-1,0,0)$, so that we use (\ref{complexcoord1}). Let $f_0:\ensuremath{\mathbb R}^4\to \ensuremath{\mathbb C}$ be defined by $f_0(x_1,x_2,x_3,x_4)=x_1$. Take $P=(1,0,0)$. Then $\zeta=0$, $v=z$, $\xi=w$, and the function $f=f_0\circ (S_P)^{-1}$ is given by $f(v,\xi)=Re(v)$.
Take, instead, $P=(0,0,-1)$. Then $\zeta=1$, $v=z+\bar w$, $\xi=w-\bar z$, and the function $f=f_0\circ (S_P)^{-1}$ is given by $f(v,\xi)=\frac{1}{2} Re(v-\bar\xi)$. \end{example} So, from \cite{coburn:92} we obtain a family of Berezin-Toeplitz quantizations, one on each fiber of the twistor projection (with one fiber removed at the beginning of our discussion); that is, a family of Berezin-Toeplitz quantizations parametrized by $\ensuremath{\mathbb C}$, one quantization for every complex structure corresponding to a point of $\ensuremath{\mathbb C}$. The objective of Theorem \ref{mainth} is to obtain, instead, a single quantization that incorporates all these complex structures at once.
\section{Introduction} Transitional millisecond pulsars (tMSPs) form a unique subclass of stellar compact binary systems containing a neutron star and a low-mass, non-degenerate companion. These binaries are identified by their observed transitions between a low-mass X-ray binary-like state and a rotation-powered radio millisecond pulsar state on timescales of weeks to years. To date, only three systems have been observed to undergo such transitions: PSR J1023+0038 \citep{Archibald09} and XSS J12270--4859 \citep{Bassa14,Roy15} in the Galactic field, and IGR J18245--2452 \citep{Papitto13} in the globular cluster M28. In addition to these class-defining transitions, tMSPs show other distinctive characteristics, especially in the disk state. Their typical 1--10 keV X-ray luminosities are $L_X\sim10^{33-34}$ erg s$^{-1}$ \citep{deMartino13,Papitto13,Patruno14}. This is much lower than the typical values of $L_X \gtrsim 10^{36}$ erg s$^{-1}$ observed for persistent or outbursting transient neutron star low-mass X-ray binaries \citep{Paradijs98}. Hence this $L_X\sim10^{33-34}$ erg s$^{-1}$ state is also called the sub-luminous disk state. A single tMSP (IGR J18245--2452; \citealt{Papitto13}) has been observed at outburst $L_X$ values of a few $\times \, 10^{36}$ erg s$^{-1}$, as traditionally seen for accreting millisecond X-ray pulsars \citep{Patruno12}. While it is uncertain what fraction of tMSPs show such outbursts, this case shows that outbursts are a possible alternative route to tMSP discovery. \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.45\linewidth]{findingfig-xray-v13.pdf} \hspace{5mm} \includegraphics[width=0.45\linewidth]{findingfig-opt-v12.pdf} \caption{Left: \textit{XMM-Newton}/EPIC X-ray image of the field of 4FGL J0407.7--5702. The 95\% 4FGL error ellipse is in red and the candidate X-ray/optical counterpart is circled in yellow. Right: Red Digitized Sky Survey image of the field with the X-ray source marked in yellow. The inset is a zoomed-in unfiltered optical image taken with the SOAR telescope, with the source again marked with a yellow circle.} \label{fig:finder_fig} \end{center} \end{figure*} The X-ray light curves show variability on a range of amplitudes and timescales. In the three confirmed tMSPs this variability exhibits remarkable ``mode switching": abrupt changes between distinct low and high modes \citep{deMartino13,Linares14,Bogdanov15} that differ by a factor of $\sim$5--10 in X-ray luminosity, with occasional more luminous flares. During the high mode there is evidence of X-ray and optical pulsations \citep{Archibald10,Archibald15,Papitto2015b,Ambrosino17}. In the low mode, X-ray pulsations are not seen, and instead radio continuum emission is observed, likely from synchrotron-emitting bubbles produced close to the neutron star \citep{Bogdanov18}. A self-consistent model of these observations has not yet been firmly established. Indeed, it is unclear whether (i) the disk state is accretion-powered, with some accreted material reaching the surface of the neutron star and the rest possibly ejected in a propeller \citep{Bogdanov15,Papitto2015a} or maintained in a ``trapped disk" \citep{Dangelo12}, or (ii) the disk state is instead rotation-powered, with the X-ray emission (including the X-ray pulsations) due to shocks from the pulsar wind occurring just outside the light cylinder \citep{Ambrosino17,Papitto19,Veledina19} or at a larger radius \citep{Takata14}.
Notably, timing observations of PSR J1023+0038 show that the pulsar spin-down rate increases by $\sim 27\%$ in the sub-luminous disk state compared to the radio pulsar state \citep{Jaodand16}. This suggests that the pulsar wind is enhanced during the disk state, rather than being suppressed, as might be expected if accretion were occurring. The detection of radio pulsations in the disk state could help settle the question, but these have not been detected in the sub-luminous disk state of any of the confirmed tMSPs despite extensive searches \citep{Hill11,Papitto13,Stappers14,Jaodand16}. In another departure from typical low-mass X-ray binary phenomenology, the two field tMSPs show enhanced GeV $\gamma$-ray emission in the sub-luminous disk state: they are brighter by a factor of a few compared to the millisecond pulsar state \citep{Stappers14,Johnson15}. Besides the known tMSPs (and the candidate tMSPs selected via $\gamma$-ray emission), no other low-mass X-ray binary at $L_X \lesssim 10^{35}$ erg s$^{-1}$ has shown persistent $\gamma$-ray emission, including a number of relatively nearby accreting millisecond X-ray pulsars \citep{Torres20}. Regardless of whether the disk state in tMSPs is actually powered by accretion, there is strong evidence from the optical that a disk is indeed present. Optical spectra in this state are dominated by a warm continuum and the double-peaked H and He emission lines characteristic of an accretion disk around a compact object \citep{Bond02,deMartino14}. In the MSP state, these optical signatures of a disk disappear \citep{Thorstensen05,deMartino15}. Unlike the apparently unique sub-luminous disk state, tMSPs in the pulsar state have multiwavelength properties comparable to those of the other compact binaries with non-degenerate companions. These systems are typically called ``redbacks" when the companion mass is $\gtrsim 0.1 M_{\odot}$ \citep{Roberts13}. The X-ray emission in redbacks is usually dominated by a hard intrabinary shock modulated on the orbital period \citep{Linares_solo14,Roberts14}. Typical pulsar searches strongly select against eclipsing pulsars such as redbacks, and targeted follow-up of new $\gamma$-ray sources discovered with the \emph{Fermi} Large Area Telescope has been the most important tool in substantially increasing the known sample of field redbacks \citep{Ray12,Roberts13}. Redbacks are not mere curiosities: as a population they have among the highest neutron star masses known for any subclass of millisecond pulsars \citep{Strader19}, and some are among the fastest spinning pulsars known \citep{Patruno17}. Since all tMSPs are redbacks, it is natural to wonder whether the converse holds, but despite intensive searches (e.g., \citealt{Torres17}) no other field redbacks have been observed to transition to or from the pulsar state. Instead, new candidate tMSPs have been identified using the distinguishing characteristics of the class in the sub-luminous disk state, including $\gamma$-ray emission, variable X-ray emission with $L_X \sim 10^{33}-10^{34}$ erg s$^{-1}$, and evidence for a disk. The three convincing candidates found this way are: 3FGL J1544.6--1125 \citep{BH15}, 3FGL J0427.9--6704 \citep{Strader16}, and CXOU J110926.4--650224 \citep{CotiZelati19}.
This tMSP discovery route, while very useful, is certainly incomplete: some other sources have X-ray luminosities and variability properties somewhat similar to tMSPs in the sub-luminous disk state, but perhaps have not been detected as $\gamma$-ray sources due to their distances or confused sky locations (e.g., \citealt{Degenaar14,Heinke15}). It is plausible that some of these sources are indeed tMSPs and could be identified as such with future multi-wavelength observations. Here we present the discovery and characterization of a compact binary within the error ellipse of the \emph{Fermi}-LAT $\gamma$-ray source 4FGL J0407.7--5702. We show this source has X-ray and optical properties similar to the known tMSPs in the sub-luminous disk state and hence is a strong tMSP candidate. \section{Observations} \subsection{The $\gamma$-ray Source \& Optical Discovery} \label{sec:gamma} The candidate X-ray/optical counterpart was discovered as part of our ongoing program to search for new compact binaries among previously unassociated \emph{Fermi}-LAT $\gamma$-ray sources. We focus in particular on possible counterparts that are both optical variables and X-ray sources, since this preferentially selects for compact binaries over unrelated contaminants. The focus of this paper is on an X-ray/optical source at the edge of the 68\% error ellipse (and hence well within the 95\% error ellipse) of the LAT 4FGL-DR2 (10-year) source 4FGL J0407.7--5702 \citep{4FGLDR2}. The 95\% error ellipse is nearly circular, with a mean radius of $\sim 4.1$\arcmin. The LAT source, while faint (0.1--100 GeV flux of 1.6$\pm$0.3 $\times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$), is detected at $5.7\sigma$, with a power law photon index of $\Gamma = 2.54\pm0.17$. There is no significant evidence for variability, either in the formal variability index or in an examination of the light curve in 1-yr bins \citep{4FGLDR2}. Using archival \textit{Swift} X-ray Telescope (XRT) data \citep{Stroh13} taken from the \emph{Swift}/XRT website \citep{Evans09}, we identified a single prominent X-ray source within the \emph{Fermi}-LAT error ellipse, with a J2000 (R.A., Dec.) of (04:07:31.78, --57:00:25.2) and a 90\% uncertainty of 4.5\arcsec. There is a single catalogued optical source that matches this X-ray source. The \emph{Gaia} DR2 ICRS position of this source in (R.A., Dec.) is (04:07:31.7195, --57:00:25.295), which we take as the best known position. This match is $< 1$\arcsec, well within the uncertainty of the X-ray position, and the follow-up data discussed below prove the X-ray and optical sources are associated with each other. Furthermore, the \emph{Gaia} DR2 photometry ($G = 20.176\pm0.011$ mag) and the presence of the optical source in a catalog of candidate variables identified by the Dark Energy Survey \citep{Stringer19} both suggested it might be variable, motivating follow-up. \subsection{Optical Spectroscopy} \label{sec:optobs} We obtained optical spectroscopy with the Goodman Spectrograph \citep{Clemens04} on the SOAR telescope on parts of six different nights from 2019 Nov 6 to 2020 Jan 17. In all cases we used a 400 l mm$^{-1}$ grating with a 0.95\arcsec\ slit, giving a resolution of about 5.6 \AA\ (full-width at half-maximum; FWHM). Some of the spectra were obtained with a wavelength range of $\sim 3820$--7850 \AA, while others used a central wavelength with coverage about 1000 \AA\ redder. Each spectrum had an exposure time of 1500 sec. The spectra were all reduced and optimally extracted in the standard manner.
We also obtained several spectra with Gemini/GMOS-S (Program ID: GS-2019B-FT-111) on the nights of 2019 Dec 30 and 31. On each night three 1200 sec exposures were taken. The R400 grating and a 1.0\arcsec\ slit together yielded a FWHM resolution of about 7.2 \AA\ over the wavelength range $\sim 4500$--9150 \AA. These data were reduced using the Gemini IRAF package \citep{2016ascl.soft08006G}. \subsection{SOAR Photometry} \label{sec:photometry} In an effort to detect periodic optical variability associated with the companion, we observed the source with SOAR/Goodman in imaging mode on 2019 Dec 16 and again on 2020 Jan 12. On 2019 Dec 16, which had seeing around 1\arcsec, we took a series of 180 s exposures, alternating between the SDSS $g'$ and $i'$ filters, while on the second, brighter night, with median seeing of 0.8\arcsec, we observed only in $i'$, with a frame time of 120 s. We performed differential aperture photometry with respect to a set of nearby non-varying stars, and calibrated to magnitudes from the Dark Energy Survey \citep{DES18}. \subsection{X-ray Observations} \label{sec:xrayobs} We observed 4FGL J0407.7--5702 on 2020 March 6 with the European Photon Imaging Camera (EPIC) on board the \textit{XMM-Newton} space telescope. A total live time of $\sim 22$~ksec was achieved. Data were reduced using the Science Analysis Software ({\sc sas}) data reduction package, version 18.0.0. We used a circular source extraction region of radius 30$^{\prime\prime}$ centered on J0407.7--5702 and a local background extraction region with an area three times larger. Time intervals of high particle background were excluded. Standard flagging criteria \verb|FLAG=0|, plus \verb|#XMMEA_EP| and \verb|#XMMEA_EM| (for the pn and MOS detectors, respectively), were applied. Additionally, we selected patterns 0--4 for the pn data and 0--12 for the MOS data. Individual background-subtracted spectra were extracted for pn, MOS1, and MOS2 using standard tasks in \textit{xmmselect}. A single combined EPIC spectrum was created using \textit{epicspeccombine} and grouped to at least 20 counts per bin so that Gaussian statistics could be used. For our timing analysis, we used the {\sc sas} tasks \textit{evselect} and \textit{epiclccorr} to produce a background-subtracted light curve. The X-ray source discussed in this paper is the closest source to the center of the \emph{Fermi}-LAT error ellipse, and its X-ray flux is higher than that of any other source in the error ellipse by a factor of $\sim 20$. \section{Results \& Analysis} \subsection{X-ray Spectrum and Mean X-ray Properties} \label{sec:Xrayspec} We began by fitting the \textit{XMM-Newton} EPIC spectrum with an absorbed power-law model, as shown in Figure~\ref{fig:xray_spectra}. At the Galactic latitude of the source ($b = -44^{\circ}$) the expected foreground extinction is very low ($E(B-V) = 0.013$; \citealt{Schlafly11}), with a correspondingly low line-of-sight column density of $N_H=1.08\times10^{20}$ cm$^{-2}$ \citep{HI4PI16}. We find no evidence of additional intrinsic absorption and so fix $N_H$ to this foreground value. This model provides a satisfactory fit ($\chi^2/$dof$=105.8/96$, $p=0.23$), with a best-fit photon index of $\Gamma=1.74\pm0.04$ and an unabsorbed 0.5--10 (1--10) keV flux of $4.16\pm0.17$ ($3.47\pm0.17$) $\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$. At a reference distance of 7 kpc (see Section~\ref{sec:distance}), this flux corresponds to a 0.5--10 keV X-ray luminosity of $2.4 \, (d/7 \, {\rm kpc})^2 \times 10^{33}$ erg s$^{-1}$.
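(Explicitly, assuming isotropic emission, $L_X=4\pi d^2 F_X$; with $d=7$ kpc $=2.16\times10^{22}$ cm and the unabsorbed 0.5--10 keV flux above, this gives $L_X \approx 4\pi \, (2.16\times10^{22} \, {\rm cm})^2 \times 4.16\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$ $\approx 2.4\times10^{33}$ erg s$^{-1}$.)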
We also experimented with adding a blackbody component to the model. However, we found only a marginal improvement ($\chi^2$/dof=101.6/94) in the fit, with an $F$-test probability of $p=0.15$. As expected, in this fit the added blackbody component ($kT=0.17\pm0.05$ keV) results in a slightly harder power law component ($\Gamma=1.63\pm0.10$), but the blackbody contributes only $\sim 5$\% to the unabsorbed 0.5--10 keV flux. For this two-component model, the unabsorbed 0.5--10 (1--10) keV flux is $4.3\pm0.2$ ($3.6\pm0.2$) $\times10^{-13}$ erg s$^{-1}$ cm$^{-2}$. Given the weak ($< 2\sigma$) evidence for a thermal component and its minimal effect on the total flux, we prefer the simpler power-law model, but note that some small thermal contribution could be present. The hard power law, with $\Gamma=1.74\pm0.04$, matches the X-ray spectrum of PSR J1023+0038 \citep{Bogdanov15} in the sub-luminous disk state as well as that of the tMSP candidates 3FGL J1544.6--1125 \citep{Bogdanov16}, 3FGL J0427.9--6704 \citep{Strader16,Li20}, and CXOU J110926.4--650224 \citep{CotiZelati19}. A similar hard power law has also been observed for quiescent accreting millisecond pulsars such as SAX J1808.4--3658 \citep{Heinke09}, though such sources do not show an optical disk nor the extreme variability observed for tMSPs in the sub-luminous disk state. \begin{figure}[t!] \includegraphics[width=\linewidth]{J0407_EPIC_spec.pdf} \caption{\emph{XMM-Newton}/EPIC spectrum of 4FGL J0407.7--5702, which is well-fit by an absorbed power law with $\Gamma=1.74\pm0.04$.} \label{fig:xray_spectra} \end{figure} \subsubsection{Relative X-ray and $\gamma$-ray flux} \label{sec:fxfb} \begin{figure}[ht!] \includegraphics[width=1.05\linewidth]{fxfg.pdf} \caption{Ratio of X-ray (0.5--10 keV) to $\gamma$-ray (0.1--100 GeV) flux vs. X-ray photon index for tMSPs or candidates in the sub-luminous disk state (filled blue circles) and redbacks or tMSPs in the pulsar state (open red circles). The location of 4FGL J0407.7--5702 (orange star) is consistent with a classification as a disk state tMSP. The $\gamma$-ray fluxes are from \citet{4FGLDR2} and the X-ray fluxes from the compilation of \citet{Strader19}, except for PSR J1023+0038 \citep{Bogdanov11,Bogdanov15,Stappers14}, XSS J12270--4859 \citep{Linares_solo14,Johnson15}, and the new redback candidate 4FGL J2333.1--5527 \citep{Swihart20}. The X-ray photon indices are from the compilations of \citet{Linares_solo14} and \citet{Lee18} or the literature \citep{Bogdanov11,Bogdanov15,Bogdanov16,Li16,Strader16,Halpern17,AlNoori18,Cho18,Gentile18,Li18,Linares18,Swihart18,deMartino20,Li20,Swihart20}.} \label{fig:fxfg} \end{figure} Since the distance to 4FGL J0407.7--5702 is not known, we cannot directly assess whether its X-ray luminosity supports an identification as a candidate tMSP. Nevertheless, here we show that the ratio of its X-ray and $\gamma$-ray fluxes, a distance-independent quantity, does indeed support this classification. The X-ray (0.5--10 keV) to $\gamma$-ray (0.1--100 GeV) flux ratio for 4FGL J0407.7--5702 is $F_X/F_{\gamma} = 0.26\pm0.06$.
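(This ratio follows directly from the fluxes quoted above: $F_X/F_{\gamma} = (4.16\times10^{-13})/(1.6\times10^{-12}) \approx 0.26$, with the uncertainty dominated by the $\sim$19\% fractional uncertainty on the $\gamma$-ray flux.)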
Using FL8Y, $F_X/F_{\gamma} = 0.67\pm0.27$.} (PSR J1023+0038, XSS J12270--4859, 3FGL J1544.6--1125, 3FGL J0427.9--6704) as well as all redbacks in the pulsar state that appear in the 4FGL catalog and which have well-measured photon indices (uncertainties $< 0.5$). Figure~\ref{fig:fxfg} shows that the four confirmed or candidate field tMSPs in the sub-luminous disk state have $F_X/F_{\gamma}$ in the range 0.29--0.43. This can be compared to a median value of 0.012 for redbacks in the pulsar state, and a maximum of $F_X/F_{\gamma} \sim 0.12$ (for 1FGL J1417.7--4407, which has an evolved companion and perhaps an unusually luminous intrabinary shock). 4FGL J0407.7--5702 has an $F_X/F_{\gamma}$ value consistent within the uncertainties with that observed for tMSPs, supporting its classification as such. By contrast, the recently discovered binary 4FGL J0935.3+0901, whose optical and X-ray data alone do not allow a clear classification \citep{Wang20}, has $F_X/F_{\gamma} \sim 0.02$, suggesting it is much more likely to be in the pulsar state than the sub-luminous disk state. The transitions of PSR J1023+0038 and XSS J12270--4859 show the proximate reason for this difference between the disk and pulsar states: in the disk state their 0.1--100 GeV $\gamma$-ray fluxes are higher by only a factor of $\sim 3$--6 compared to the pulsar state \citep{Stappers14,Johnson15}, but their 0.5--10 keV X-ray fluxes are higher by a factor of $\sim 25$--30 \citep{Bogdanov11,Bogdanov15,Linares14,deMartino20}. Whether this difference holds for other tMSPs awaits future observed transitions, but the location of candidate disk-state tMSPs in the same region as confirmed tMSPs is suggestive. If tMSPs do indeed all have relatively high values of $F_X/F_{\gamma}$, this has ramifications for the identification of new tMSPs among currently unassociated \emph{Fermi} $\gamma$-ray sources. New sources in the 10-year catalog have typical 0.1--100 GeV fluxes of 1--$3 \times 10^{-12}$ erg s$^{-1}$ cm$^{-2}$ \citep{4FGLDR2}. For typical tMSP-like values of $F_X/F_{\gamma}$, the corresponding 0.5--10 keV fluxes are $F_X \sim 3 \times 10^{-13}$ to $10^{-12}$ erg s$^{-1}$ cm$^{-2}$, detectable with \emph{Swift}/XRT in short exposures of 1--2 ksec even in the presence of moderate foreground extinction ($N_H \lesssim 10^{22}$ cm$^{-2}$). The expected all-sky X-ray sensitivity of \emph{eROSITA} is similar \citep{Merloni12}. The implication is that for essentially any tMSP in the sub-luminous disk state detected as a \emph{Fermi} GeV source, an X-ray counterpart should be readily identifiable even in shallow data. This is unlike the case for redbacks or black widows, which can have much fainter X-ray counterparts. Since typical \emph{Fermi} error ellipses can contain a number of unrelated X-ray sources, the mere existence of a candidate X-ray counterpart cannot be used to provide a definitive classification, but the absence of such a counterpart would disfavor the identification of the source as a tMSP. Figure~\ref{fig:fxfg} also shows that, consistent with previous work (e.g., \citealt{Linares_solo14}), the tMSPs or candidates in the disk state have $\Gamma \sim 1.7$, but that redbacks show a wide range of photon indices, in part depending on the strength of the intrabinary shock. Therefore, the X-ray photon index can give only indicative, not conclusive, information about the classification of a candidate tMSP.
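As a rough illustration of how this diagnostic might be applied to survey follow-up, the sketch below encodes the empirical flux-ratio ranges from this section as a simple screen. The thresholds are descriptive values taken from the discussion above, not hard boundaries, and the $\gamma$-ray flux in the example is inferred from the quoted ratio rather than taken from a catalog.
\begin{verbatim}
# Sketch: screen a source by its distance-independent F_X/F_gamma ratio,
# using the empirical ranges discussed in the text (illustrative only).
def fx_fg_screen(f_x, f_gamma):
    ratio = f_x / f_gamma
    if ratio > 0.2:        # disk-state tMSPs cluster at ~0.26--0.43
        label = "consistent with a sub-luminous disk-state tMSP"
    elif ratio > 0.12:     # above the largest pulsar-state redback value
        label = "ambiguous"
    else:                  # pulsar-state redbacks: median ~0.012
        label = "more consistent with the pulsar state"
    return ratio, label

# 4FGL J0407.7--5702: F_X = 4.16e-13; F_gamma inferred from the ratio 0.26
print(fx_fg_screen(4.16e-13, 1.6e-12))
\end{verbatim}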
\subsubsection{ROSAT} There is a faint ($0.020\pm0.008$ ct s$^{-1}$) \emph{ROSAT} source, 2RXS J040730.2--570024 \citep{Boller16}, with a catalog position $11\arcsec$ from that of the optical/X-ray counterpart to 4FGL J0407.7--5702. Assuming the best-fit \emph{XMM} spectral model, this \emph{ROSAT} count rate is equivalent to a 0.5--10 keV flux of $(3.6\pm1.4) \times10^{-13}$ erg s$^{-1}$ cm$^{-2}$, identical to the \emph{XMM} flux within the uncertainties. Given how similar these fluxes are, the lack of another \emph{XMM} X-ray source within $\sim 45\arcsec$ of the target, and the poor astrometry expected for faint \emph{ROSAT} sources, we think it is likely that 2RXS J040730.2--570024 is the same as the \emph{XMM} source despite the astrometric offset. If so, this implies that 4FGL J0407.7--5702 was in a similar spectral state in 1990--1991 as in 2019--2020. While the $\gamma$-ray source is faint, there is no evidence for significant $\gamma$-ray variability since 2008 \citep{4FGLDR2}, and hence no reason to believe a transition occurred in this time interval. The constraints on a possible ``full'' X-ray outburst are weak: at our inferred range of likely distances, an outburst to $L_X \sim 10^{36}$ erg s$^{-1}$ would have reached only a few milliCrab, so its discovery by all-sky X-ray monitors would have been borderline. \subsection{X-ray Light Curve} \label{sec:XrayLC} The background-subtracted and barycentric-corrected \textit{XMM-Newton} EPIC 0.2--10 keV light curve, shown separately in 100 s bins (Figure~\ref{fig:xmm_bigLC}) and 50 s bins (Figure~\ref{fig:xmm_smallLC}), displays clear short-term variability over the 21.7 ksec exposure. A histogram of the finer 50 s binned light curve shows a bimodal distribution (Figure~\ref{fig:xmm_hist}). For exploratory analysis in this section, we use this figure to guide a preliminary division of the light curve into three separate flux levels: low (0.0--0.3 ct s$^{-1}$), medium (0.3--0.6 ct s$^{-1}$), and flare ($>$0.6 ct s$^{-1}$). The average 0.2--10 keV count rate is $0.204\pm0.003$ ct s$^{-1}$. 4FGL J0407.7--5702 spent a majority of the observation at the low ($\sim$60\% of the time) or medium ($\sim$37\%) flux levels, with occasional flares ($\sim 2$--3\%). Given that fast, frequent ``mode switching'' between the low and high modes is observed clearly in PSR J1023+0038, XSS J12270--4859, and 3FGL J1544.6--1125 \citep{Linares_solo14,Bogdanov15,BH15}, it is tempting to associate the low and medium count levels observed for 4FGL J0407.7--5702 with these well-studied modes. \begin{figure}[t!] \includegraphics[width=\linewidth]{paperfig_XraybigLC.pdf} \caption{Background-subtracted and barycentric-corrected \textit{XMM-Newton} EPIC light curve of 4FGL J0407.7--5702 in the 0.2--10 keV band, binned in 100 s bins.} \label{fig:xmm_bigLC} \end{figure} However, there are several reasons to think this simple interpretation is not correct. First, in these other systems the flux difference between the low and high modes is large (a factor of $\sim 5$--10), while in 4FGL J0407.7--5702 the difference is only a factor of $\sim 2$--2.5. Another difference is that in the other tMSPs and candidates the binary is in the high mode for the majority ($\gtrsim 75\%$) of the time, compared to $< 40\%$ here. A careful examination of Figure~\ref{fig:xmm_smallLC} shows that the system does indeed make excursions to a count rate much fainter than the broad, low flux level identified in Figure~\ref{fig:xmm_hist}.
The most extensive of these is around 0.17 d after the light curve start, where the binary has a flat-bottomed light curve with a mean count rate of $0.027\pm0.014$ ct s$^{-1}$. This is a factor of $\sim 7$--8 fainter than the average count rate over the whole dataset, and equivalent to a 0.5--10 keV X-ray luminosity of $(3.2\pm1.6) \, (d/7 \, {\rm kpc})^2 \times 10^{32}$ erg s$^{-1}$. We suggest it might be more accurate to view this rarer state as the true ``low mode'' and both the peaks at 0.15--0.2 ct s$^{-1}$ and 0.4 ct s$^{-1}$ as manifestations of the same ``high mode''. Indeed, from a phenomenological point of view the X-ray light curve most closely resembles that of the tMSP candidate CXOU J110926.4--650224, which shows occasional low modes but less pronounced bimodality over its entire light curve than some of the other tMSPs \citep{CotiZelati19}. It may also be the case that tMSPs show a broader set of behaviors than simple mode switching; for example, the tMSP candidate 3FGL J0427.9--6704 shows bright $\gamma$-ray and radio continuum emission as well as a disk, and appears to spend most of its time in an X-ray flare mode, with no consistent stable low or high modes \citep{Li20}. \begin{figure}[t] \includegraphics[width=\linewidth]{50sec_4panel_LC.pdf} \caption{The same data as in Figure~\ref{fig:xmm_bigLC}, but with finer 50 s bins.} \label{fig:xmm_smallLC} \end{figure} \subsection{Optical Spectroscopy} \label{sec:optspec} \begin{figure}[t!] \includegraphics[width=\linewidth]{paperfig_hist.pdf} \caption{Distribution of count rates from the 50 s binned background-subtracted \textit{XMM-Newton} EPIC light curve. We define the following regions according to their flux levels: low (0.0--0.3 ct s$^{-1}$), medium (0.3--0.6 ct s$^{-1}$), and flare ($>$0.6 ct s$^{-1}$).} \label{fig:xmm_hist} \end{figure} \begin{figure*} \includegraphics[width=1.0\linewidth]{j0407_spec_e.pdf} \caption{Rectified co-added SOAR optical spectra of 4FGL J0407.7--5702. The main Balmer and \ion{He}{1} and \ion{He}{2} lines are marked. The spectrum is that of a typical accretion disk, with the only significant absorption lines telluric.} \label{fig:opt_spectra} \end{figure*} All of the SOAR spectra show the same features: a blue continuum with superimposed emission lines from \ion{H}{1}, \ion{He}{1}, and the 4686 \AA\ line of \ion{He}{2}. The emission lines are resolved, with a mean resolution-corrected FWHM of $516\pm21$ km s$^{-1}$ for H$\alpha$ measured among the different epochs. A mean of the stronger \ion{He}{1} lines is broader still, at $\sim 670$ km s$^{-1}$, and the \ion{He}{2} 4686 \AA\ line is double-peaked with a FWHM of $\sim 830$ km s$^{-1}$. This trend of increasing FWHM is consistent with the idea that the H emission is primarily from the outer disk, with \ion{He}{1} and then \ion{He}{2} dominated by regions progressively closer to the compact object. There are no photospheric absorption features apparent either visually or in a cross-correlation with templates of the expected spectral types of the likely low-mass secondary. We obtained the Gemini spectra in the hope of uncovering faint absorption features in higher signal-to-noise spectra, but did not find any in these data either. We next attempted to constrain the orbital period through the motion of the emission lines. While the different epochs of data do show evidence for modest variations in the wavelengths of the emission lines (of order $\sim 20$--30 km s$^{-1}$), these did not phase on any readily identifiable orbital period.
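The resolution correction applied to the line widths quoted above is the usual quadrature subtraction of the instrumental width; a minimal sketch follows, in which the instrumental resolution is an assumed placeholder rather than the measured SOAR value.
\begin{verbatim}
# Sketch: correct an observed line FWHM for instrumental resolution,
# FWHM_true = sqrt(FWHM_obs^2 - FWHM_inst^2), converting the instrumental
# FWHM from Angstroms to km/s at the line center.
import math

C_KMS = 2.998e5   # speed of light (km/s)

def corrected_fwhm_kms(fwhm_obs_kms, inst_fwhm_aa, line_center_aa):
    inst_kms = C_KMS * inst_fwhm_aa / line_center_aa
    return math.sqrt(fwhm_obs_kms**2 - inst_kms**2)

# Example with an assumed 5 A instrumental FWHM at H-alpha:
print(f"{corrected_fwhm_kms(600.0, 5.0, 6562.8):.0f} km/s")
\end{verbatim}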
Qualitatively, in terms of the presence of emission lines and their relative strengths, the 4FGL J0407.7--5702 spectra are very similar to those of the confirmed tMSP PSR J1023+0038 in its disk state and to the candidate tMSP 3FGL J1544.6--1125, and consistent overall with the optical spectra expected for an accretion disk around a compact object. Under the assumption that the emission lines do arise in an accretion disk around a neutron star, their relatively narrow FWHM hints at a more face-on inclination. For example, the neutron star low-mass X-ray binary Cen X-4 has an H$\alpha$ FWHM of $678\pm48$ km s$^{-1}$ \citep{Casares15} and an inclination of $(35_{-4}^{+5})^{\circ}$ \citep{Hammerstein18}, while 3FGL J1544.6--1125 has a FWHM of $\sim 330$ km s$^{-1}$ and an inclination of 5--$8^{\circ}$ \citep{Britt17}. 4FGL J0407.7--5702, with an H$\alpha$ FWHM of $516\pm21$ km s$^{-1}$, likely has an inclination within this broad range of face-on values, assuming its orbital period and primary mass are not too dissimilar from typical neutron star low-mass X-ray binaries. \subsection{Optical Photometry} The SOAR optical ($g'$ and $i'$) light curves from our two epochs are shown in Figure~\ref{fig:opt_LCs}. In the Dec 2019 epoch, with $g'$ and $i'$ data taken over three hours, the light curves in both filters show aperiodic, seemingly stochastic variations. The short timescale variations in both filters are reminiscent of the ``flickering'' seen in the light curve of PSR J1023+0038 while in its sub-luminous disk state \citep{Kennedy18}. The Jan 2020 epoch, with data only in $i'$, extends over a longer timespan of nearly 6 hr. The variability is qualitatively different from that in the earlier epoch. By eye there is some evidence for periodic variability in the repeating distinct maxima (Figure~\ref{fig:opt_LCs}), but these do not repeat regularly: the second and third peaks are separated by $\sim 1.1$ hr, and the third and fourth by $\sim 1.7$ hr. A Lomb-Scargle periodogram analysis of this light curve does not show evidence for a significant peak (with only a weak, insignificant peak at 56 min). It is unlikely that any of these timescales represents the orbital period of the binary; of the known redbacks, the system with the shortest well-measured orbital period is PSR J1622--0315, at 3.9 hr \citep{Sanpa16}. It is unknown whether all the tMSP candidates are indeed redbacks, and hence whether less massive donors and shorter periods might be possible. Instead, the 2020 Jan 13 light curve more closely resembles the ``limit cycle'' behavior present most clearly in the tMSP candidate 3FGL J1544.6--1125, which shows short timescale variations confined between minimum and maximum values separated by $\sim$0.5 mag \citep{BH15}, as opposed to random flickering. Similar behavior is also observed in XSS J12270--4859 \citep{Pretorius2009,deMartino10} and PSR J1023+0038 \citep{Bond02,Bogdanov15,Kennedy18}, although in these systems modulation on the orbital period is also observed, superimposed on the shorter timescale variations. Such orbital variations, if present, might be harder to observe for 4FGL J0407.7--5702 given its likely face-on orientation (Sec.~\ref{sec:optspec}).
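The periodogram analysis mentioned above can be reproduced along the following lines. This is a generic sketch using astropy's Lomb--Scargle implementation, with placeholder arrays standing in for the actual SOAR photometry.
\begin{verbatim}
# Sketch: Lomb-Scargle search for periodicity in an unevenly sampled
# optical light curve. times/mags/errs below are synthetic placeholders.
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(0)
times = np.sort(rng.uniform(0.0, 0.25, 150))     # days (~6 hr baseline)
mags = 20.2 + 0.1 * rng.standard_normal(150)     # i' magnitudes
errs = np.full(150, 0.05)

freq, power = LombScargle(times, mags, errs).autopower(
    minimum_frequency=1 / 0.25,   # periods up to the ~6 hr baseline
    maximum_frequency=24 * 6)     # periods down to ~10 min
best_period_hr = 24.0 / freq[np.argmax(power)]
print(f"highest peak at P = {best_period_hr:.2f} hr")
\end{verbatim}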
\begin{figure}[htbp] \centering \subfloat{\includegraphics[angle=0,width=0.47\textwidth, trim={0 0.0cm 0 0},clip]{paperplot_DecLC.pdf}}\\ \subfloat{\includegraphics[angle=0,width=0.47\textwidth, trim={0 0.0cm 0 0},clip]{paperplot_JanLC.pdf}} \caption{Top: SOAR $g'$ and $i'$ optical photometry of 4FGL J0407.7--5702, taken on 2019 Dec 17. Bottom: SOAR $i'$ photometry from 2020 Jan 13, showing the limit cycle behavior discussed in the text.} \label{fig:opt_LCs} \end{figure} \subsection{Distance} \label{sec:distance} Since 4FGL J0407.7--5702 does not have a significant \emph{Gaia} DR2 parallax ($\varpi = -0.37\pm0.48$ mas, \citealt{GaiaDR2}), and modeling the flux from a Roche lobe-filling secondary is not possible, we must use alternative methods to constrain its distance. To estimate the most likely distance, we proceed under the assumption that it has intrinsic properties similar to the known tMSPs. First, we consider its $\gamma$-ray and X-ray flux in the context of the four known or candidate tMSPs that have reasonably well-constrained distances. From the compilation of \citet{Strader19}, the 0.1--100 GeV $\gamma$-ray luminosities of these four sources range from $6 \times 10^{33}$ erg s$^{-1}$ to $2.4 \times 10^{34}$ erg s$^{-1}$, which given the flux of 4FGL J0407.7--5702 would imply a distance in the range 5.6--11.2 kpc. Similarly, given mean 0.5--10 keV X-ray luminosities of $2.4 \times 10^{33}$ erg s$^{-1}$ to $7.7 \times 10^{33}$ erg s$^{-1}$ for these sources, the implied X-ray distance is in the range 6.9--12.5 kpc. As a third estimate we consider the \emph{Gaia} photometry for the three sources with well-constrained distances and which have been in the disk state for the \emph{Gaia} era: PSR J1023+0038, 3FGL J1544.6--1125, and 3FGL J0427.9--6704. We focus on the $G_{BP}$ photometry under the hypothesis that this region of the spectrum is more likely to be disk-dominated, and under the ansatz that, given the vaguely similar X-ray luminosities and periods of these sources, their blue disk luminosities might also be similar. Using \emph{Gaia} DR2 \citep{GaiaDR2}, accounting for foreground reddening \citep{Schlafly11,Marigo17}, and the distances from \citet{Strader19}, the range of absolute $M_{G_{BP}}$ for this small sample of objects is indeed small, from $M_{G_{BP}} \sim 5.6$ to 6.0. Given $G_{BP,0} = 20.22 \pm 0.09$ for 4FGL J0407.7--5702, the implied range of optical distances is 7.2--8.3 kpc. The systematic uncertainties associated with this optical distance estimate are likely much larger than even those for the X-ray and $\gamma$-ray distances, but nevertheless, these values fall within the range of the high-energy distances. Above and for the remainder of the paper we quote a reference distance of 7 kpc, but emphasize that this is not a best-fit distance estimate for the binary and that the uncertainty on the distance is substantial. We are more secure in saying that if 4FGL J0407.7--5702 has intrinsic properties similar to the other known and candidate tMSPs, then a distance of $\lesssim 5$ kpc is disfavored.
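These scalings amount to $d = \sqrt{L/4\pi F}$ for each assumed luminosity. The sketch below reproduces the $\gamma$-ray distance range quoted above; the luminosity bounds are the tMSP values from the text, while the $\gamma$-ray flux is inferred from the flux ratio in Section~\ref{sec:fxfb} and is an assumption for illustration.
\begin{verbatim}
# Sketch: distance range implied by assuming the source has a gamma-ray
# luminosity in the range spanned by the known/candidate tMSPs.
import math

KPC_CM = 3.086e21
f_gamma = 1.6e-12   # 0.1--100 GeV flux (erg/s/cm^2), inferred from
                    # F_X/F_gamma = 0.26 and the XMM flux (assumption)

def distance_kpc(luminosity):
    return math.sqrt(luminosity / (4.0 * math.pi * f_gamma)) / KPC_CM

for l_gamma in (6e33, 2.4e34):   # tMSP L_gamma range from the compilation
    print(f"L = {l_gamma:.1e} erg/s -> d = {distance_kpc(l_gamma):.1f} kpc")
# -> ~5.6 and ~11.2 kpc, matching the range quoted in the text
\end{verbatim}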
\section{Discussion and Conclusions} \label{sec:discussion} We have shown that the brightest X-ray source within the error ellipse of the \emph{Fermi}-LAT $\gamma$-ray source 4FGL J0407.7--5702 has an X-ray light curve and spectrum consistent with known and strong candidate tMSPs in the sub-luminous disk state. Photometry and spectroscopy of the optical counterpart to this X-ray source provide compelling support for this scenario, as does the high ratio of X-ray to $\gamma$-ray flux. A definitive classification as a tMSP would require an observed transition to the pulsar state, which could be caught through $\gamma$-ray, X-ray, and optical monitoring of the source. Nonetheless, our conclusion could be strengthened with additional data in the present state. The most straightforward of these would be longer X-ray observations with \emph{XMM-Newton} or \emph{Chandra}, which could reveal whether the hints of mode switching behavior discussed above are borne out with more data. X-ray timing observations with one of these telescopes, or perhaps with \emph{NICER}, could in principle reveal whether the system shows X-ray pulsations similar to PSR J1023+0038 in the sub-luminous disk state \citep{Jaodand16}, though in the absence of radio ephemerides such detections are challenging. Alternatively, it is possible that 4FGL J0407.7--5702 shows X-ray phenomenology beyond mode switching, not unlike 3FGL J0427.9--6704, supporting a wider range of behaviors for these systems. Another useful measurement would be the orbital period of the binary, which we could not determine using our optical photometry or spectroscopy. Rapid-cadence optical photometry, or photometry or spectroscopy in the near-infrared (where the contribution of the donor star relative to the disk might be more observable), are potential alternative approaches. Since PSR J1023+0038 \citep{Deller15}, XSS J12270--4859 \citep{Hill11}, 3FGL J1544.6--1125 \citep{Jaodand19}, and 3FGL J0427.9--6704 \citep{Li20} all show radio continuum emission in their sub-luminous disk states, characterizing the radio behavior of 4FGL J0407.7--5702 could strengthen its tentative classification as a candidate tMSP. The plausibility of a detection depends on its unknown distance and radio behavior: if at 7 kpc, it would be readily detectable if it has the 5 GHz radio luminosity of 3FGL J0427.9--6704 (predicted flux density of $\sim 31\,\mu$Jy), marginally detectable if it is as luminous as XSS J12270--4859 or 3FGL J1544.6--1125, and very difficult to detect if it is akin to PSR J1023+0038 (mean flux density $\sim 2$--6 $\mu$Jy, though with occasional brighter flares). Under the assumption that 4FGL J0407.7--5702 has a luminosity comparable to previously studied tMSPs in the X-ray, $\gamma$-ray, or optical, we found the most likely distance to lie in the range $\sim 6$--13 kpc, with a consensus value more toward the lower end of that range. While the uncertainty in these estimates is substantial, together they suggest that 4FGL J0407.7--5702 is the most distant known candidate field tMSP to date. It is possible, though not certain, that an end-of-mission \emph{Gaia} parallax for this source might be available, which would allow the crucial determination of its X-ray luminosity. Finally, we highlight the emerging evidence that the ratio of X-ray to $\gamma$-ray flux ($F_X/F_\gamma$) could help to identify candidate tMSPs in the sub-luminous disk state and separate them from the more common redbacks or black widows when good distance constraints are not available. Since the increasingly deep \emph{Fermi}-LAT catalogs are enabling the study of more distant millisecond pulsar binaries, such techniques may see increasing relevance in follow-up studies in the coming years. \section*{Acknowledgements} We acknowledge a helpful and thoughtful report from an anonymous referee.
We also acknowledge support from NSF grant AST-1714825 and the Packard Foundation. This work is based on observations obtained with \textit{XMM-Newton}, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (MCTIC) do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). Based on observations obtained at the international Gemini Observatory, a program of NSF's OIR Lab, which is managed by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation, on behalf of the Gemini Observatory partnership: the National Science Foundation (United States), National Research Council (Canada), Agencia Nacional de Investigaci\'{o}n y Desarrollo (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n (Argentina), Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (Brazil), and Korea Astronomy and Space Science Institute (Republic of Korea). \software {IRAF \citep{Tody86, 2016ascl.soft08006G}, SAS \citep[v18.0.0,][]{Gabriel04}, Astropy \citep{astropy13}}
\section{Introduction} \label{sec:intro} \input{sections/intro} \section{The SpiNNaker 2 Prototype Chip} \label{sec:hardware} \input{sections/hardware} \section{Benchmark Models} \label{sec:benchmarks} \input{sections/benchmark_models} \section{Implementation of the Benchmarks on the SpiNNaker 2 Prototype} \label{sec:software} \input{sections/software_implementation} \section{Results} \label{sec:results} \input{sections/results} \section{Discussion} \label{sec:discussion} \input{sections/discussion} \section{Conclusion} \input{sections/conclusion} \bibliographystyle{IEEEtran} \section*{Supplement} \setcounter{table}{0} \renewcommand{\thetable}{S\arabic{table}} \setcounter{figure}{0} \renewcommand{\thefigure}{S\arabic{figure}} \subsection{Comparison with SpiNNaker 1} We assume that the same benchmarks in this work could also be implemented on SpiNNaker 1.
However, since SpiNNaker 1 has no MAC array, the vector-matrix multiplication would be much slower and would therefore consume much more energy than on the SpiNNaker 2 prototype. Fig. \ref{fig:encoding2} indicates the speedup in terms of the number of clock cycles for the vector-matrix multiplication on the SpiNNaker 2 prototype compared to what it would be on SpiNNaker 1. The differences in fabrication technology, supply voltage, etc.\ further increase the difference between the SpiNNaker 2 prototype and SpiNNaker 1. \subsection{Comparison with other neuromorphic platforms} To ease the discussion, we group neuromorphic platforms into three categories: (1) neuromorphic platforms with static synapses, such as TrueNorth \cite{Merolla668}, NeuroGrid \cite{neurogrid}, Braindrop \cite{braindrop}, HiAER-IFAT \cite{park2017hierarchical}, DYNAPs \cite{dynaps18}, Tianjic \cite{tianjic}, NeuroSoC \cite{bionect20} and DeepSouth \cite{deepsouth18}; (2) neuromorphic platforms with configurable (but not programmable) plasticity, such as ROLLS \cite{rolls15}, ODIN \cite{odin19} and TITAN \cite{titan16}; and (3) neuromorphic platforms with programmable plasticity, such as (besides SpiNNaker 1/2 and Loihi) the BrainScales~1/2 systems \cite{PPU,hartmann2010highly}. We assume all three groups of neuromorphic platforms should be able to implement the keyword spotting benchmark in this work. However, DNNs cannot be directly implemented on these platforms since they only support SNNs (except Tianjic, which also supports DNNs). Solutions similar to the SNN version implemented on Loihi would be an option. For adaptive control, since learning is involved, we assume the neuromorphic platforms in group 1 cannot support this benchmark. It would still be possible to have an external host PC reprogram the synaptic weights, but that would not be suitable for embedded applications. Although the learning rule in adaptive control is relatively simple, it involves multiplying an external error signal with the activity of the presynaptic neuron in every time step, which is quite different from the learning rules normally supported in the neuromorphic community, like Spike-Timing Dependent Plasticity (STDP) \cite{Markram213} or Spike-Driven Synaptic Plasticity (SDSP) \cite{brader07}. Therefore we assume the neuromorphic platforms in group 2 could not implement the adaptive control benchmark. The BrainScales~2 system in group 3 comes with programmable plasticity, but since the neural network runs in accelerated time, it is unclear whether the neural activity of each time step can be used for the weight update. It is also unclear how to interface robotic applications, which require real-time responses, with a neural network running in accelerated time. \subsection{Keyword Spotting}\label{result_keyword_spotting} \subsubsection{Memory Footprint}\label{result_keyword_spotting_memory} For the keyword spotting benchmark, the required SRAM memory mainly consists of two parts: weight memory and neuron input memory. The weight memory is the memory for storing the weights and biases, which are quantized as 8-bit integers. The required memory in bytes is \begin{equation} M_{w} \;=\; (D + 1) N \label{eq:M_w} \end{equation} where \(D\) is the number of input dimensions and \(N\) is the number of neurons. The neuron input memory is the memory for storing the results from the MAC array after the vector-matrix multiplication is complete. Each input is a 32-bit integer.
The required memory in bytes is \begin{equation} M_{i} \;=\; 4N \label{eq:M_ic_kw} \end{equation} Since the ReLU unit, unlike the LIF neuron model, does not need to hold its output value between inferences, no neuron memory is needed. The total memory for a neural network on a PE is \begin{equation} M_{total} \;=\; M_{w}+M_{i} \label{eq:M_total_kw} \end{equation} Based on Eq. (\ref{eq:M_w}), (\ref{eq:M_ic_kw}), (\ref{eq:M_total_kw}) for the memory footprint, the first hidden layer of the keyword spotting network would require ca. 100 KBytes of memory. For each PE, in total 128 KBytes of SRAM memory is available, which is used for the program code as well as the program data. In this work, it is assumed that each PE has 90 KBytes of SRAM memory available for the data of the neural network. So the first hidden layer is split into two PEs. \subsubsection{Computation Time and Comparison with Loihi}\label{result_keyword_spotting_time} In the keyword spotting benchmark, the computation times for the vector-matrix multiplication (\(T_{mm}\)) and the ReLU update (\(T_{relu}\)) are measured. After the measurement, polynomial models are fitted by minimizing the mean-squared error. The number of clock cycles for the vector-matrix multiplication with the MAC array is found to be \begin{align} T_{mm} \;=\; 74.0 + 5.38 N \nonumber \\ + 0.13 N D + 24.0 D \label{eq:c_mm} \end{align} where \(N\) is the number of neurons and \(D\) is the number of input dimensions. The time for the vector-matrix multiplication itself is mostly reflected in \(0.13 N D\). Before the vector-matrix multiplication starts, the inputs to the network need to be prepared for the MAC array. This pre-processing step is mostly reflected in \(24.0 D\). After the vector-matrix multiplication, a post-processing step is necessary for the resulting neuron input current. Its computation time depends on both \(D\) and \(N\), and this is reflected in \(24.0 D\) and \(5.38 N\). For each of the computational steps, there is a constant overhead, which is reflected in the constant \(74.0\). The computation time for the ReLU update with the Arm core is found to be \begin{align} T_{relu} \;=\; 17.70 N + 117.5 \label{eq:c_relu_neuron_update} \end{align} The total time is \begin{align} T_{total} \;=\; T_{mm} + T_{relu} \label{eq:c_kw_total} \end{align} Based on Eq. (\ref{eq:c_mm}), (\ref{eq:c_relu_neuron_update}), (\ref{eq:c_kw_total}) for the computation time, with the keyword spotting network split into 3 PEs (Fig. \ref{fig:keywordspotting_impl}), the computation of one time step consumes less than 21k clock cycles. With a safety margin of 4k clock cycles, one time step would take less than 25k clock cycles. When the PE is running at 250 MHz, this means the duration of one time step can be reduced to 0.1 ms. Since 10 time steps are combined into one time window to form one inference, a time step duration of 0.1 ms would correspond to 1000 inferences per second. In \cite{keyword_spotting_loihi19}, 296 inferences per second were reported for Loihi. One reason for the reduced speed of Loihi might be that the inputs to the neural network come from an FPGA, which could add some latency, while the SpiNNaker 2 prototype uses inputs generated by one of the PEs of the same chip. \subsubsection{Energy Measurement and Comparison with Loihi}\label{result_keyword_spotting_power} Both QPEs are used for the measurement. In each QPE, 3 PEs are switched on to simulate a keyword spotting network.
The measured result is then divided by 2 to obtain the energy per network. The energy is measured incrementally, similar to previous measurements on SpiNNaker 1 \cite{spinn1power} and on the first SpiNNaker 2 prototype \cite{hoeppner19tcas}. The idle energy is measured when, in each time step, the timer tick interrupt is handled but nothing is processed. The result we present in this section is the active energy, which is obtained by subtracting the idle energy from the total energy. The resulting active energy per inference is 7.1 \(\mu\)J. The keyword spotting network is implemented as a normal DNN on the SpiNNaker 2 prototype. The MAC array is used for the computation of the connection matrix, and the Arm core is used for the computation of the ReLU activation function. Since Loihi only supports SNNs, the spiking version of the keyword spotting network is implemented on Loihi. This could be the reason that the SpiNNaker 2 prototype consumes less energy for each inference in the keyword spotting benchmark (Tab. \ref{tab:keyword_spotting}). Note that in \cite{keyword_spotting_loihi19}, the reported energy per inference on Loihi was 270 \(\mu\)J, including a 70 mW overhead presumably caused by the x86 processor on Loihi. In this work, the overhead has been removed, which results in 37 \(\mu\)J per inference. \begin{table}[htb] \begin{minipage}{0.47\textwidth} \centering \renewcommand{\arraystretch}{1.1} \caption{Comparison of the SpiNNaker 2 prototype (SpiNN) and Loihi for the keyword spotting task } \label{tab:keyword_spotting} \centering \footnotesize \begin{tabular}{lcc} \toprule Hardware & inference/sec & energy/inference (\(\mu\)J)\\ \midrule SpiNN & 1000 & 7.1 \\ Loihi & 296 & 37 \\ \bottomrule \end{tabular} \end{minipage} \end{table} \subsection{Adaptive Control}\label{result_adaptive_control} \subsubsection{Memory Footprint}\label{result_adaptive_control_memory} For an adaptive control network simulated on a PE, the required SRAM memory mainly consists of four parts: input weight matrix and bias memory, output weight matrix memory, neuron input current memory and neuron memory. The input weight matrix and bias memory is the memory for storing the input weight matrix and bias, which are quantized as 8-bit integers. The required memory in bytes is \begin{equation} M_{ib} \;=\; (D_{in} + 1) N \label{eq:M_e-b} \end{equation} where \(D_{in}\) is the number of input dimensions and \(N\) is the number of neurons. The output weight matrix memory is the memory for storing the output weight matrix, whose entries are 16-bit floating point numbers. The required memory in bytes is \begin{equation} M_{o} \;=\; 2D_{out} N \label{eq:M_d} \end{equation} where \(D_{out}\) is the number of output dimensions. The neuron input current memory is the memory for storing the results from the MAC array after the input processing is complete. Each input current is a 32-bit integer. The required memory in bytes is \begin{equation} M_{ic} \;=\; 4N \label{eq:M_ic} \end{equation} The neuron memory is the memory to hold the LIF neuron parameters, such as the membrane potential and the refractory time, each of which is 32 bits.
The required memory in bytes is \begin{equation} M_{n} \;=\; 8N \label{eq:M_n} \end{equation} The total memory for a neural network on a PE is \begin{equation} M_{total} \;=\; M_{ib} + M_{o} + M_{ic} + M_{n} \label{eq:M_total} \end{equation} Since it is assumed that each PE has 90 KBytes of SRAM memory available for the data of the neural network, the maximum number of output dimensions, given the number of input dimensions and the number of neurons in a neural network, can be derived with Eq. (\ref{eq:M_e-b}), (\ref{eq:M_d}), (\ref{eq:M_ic}), (\ref{eq:M_n}), (\ref{eq:M_total}). The result is shown in Fig. \ref{fig:adaptive_memory}. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/adaptive_control_memory.eps} \caption{Maximum number of output dimensions for each input dimension and number of neurons for a neural network simulated on a PE. }\label{fig:adaptive_memory} \end{figure} \subsubsection{Computation Time and Comparison with Loihi}\label{result_adaptive_control_time} \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/ratio_encoding_with_without_mlacc_paper_v2} \caption{Speedup of input processing time with the MAC array }\label{fig:encoding2} \end{figure} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{figs/loihi_jib1_adaptive_control_comparison_time} \caption{Duration of a time step of the SpiNNaker 2 prototype (with stripes) and Loihi (without stripes) for different numbers of neurons per core and different input and output dimensions for the adaptive control benchmark. No measurement result for the SpiNNaker 2 prototype is shown where the implementation is limited by memory. }\label{fig:loihi_jib1_comparison_time} \end{figure*} For adaptive control, the computation times for input processing (\(T_{i\_mac}\) / \(T_{i\_no\_mac}\)), neuron update (\(T_{n}\)), output processing (\(T_{o}\)) and weight update (\(T_{w}\)) are measured. After the measurement, polynomial models are fitted by minimizing the mean-squared error. For input processing with the MAC array, the number of clock cycles is \begin{align} T_{i\_mac} \;=\; 131.21 + 5.07 N \nonumber \\ + 0.13 N D_{in} + 35.79 D_{in} \label{eq:c_encoding_mac} \end{align} where \(N\) is the number of neurons and \(D_{in}\) is the number of input dimensions. Eq. (\ref{eq:c_encoding_mac}) is very similar to Eq. (\ref{eq:c_mm}), because the main computation is in both cases done by the MAC array. The difference is caused by the different data types. In keyword spotting, the inputs are assumed to be 8-bit integers, but in adaptive control, each input is assumed to be floating point. This is necessary because, in general, the same implementation can be used as a building block for the NEF implementation on SpiNNaker 2 to construct large-scale cognitive models, as mentioned in Section \ref{adaptive_control}, so that the input data type needs to be the same as the output data type. Since the output weights are floating point, and their values change dynamically due to learning, an extra range check is performed for each input value, and an extra data type conversion is performed. This is reflected in \(35.79 D_{in}\) and the constant \(131.21\). The number of clock cycles without the MAC array is \begin{align} T_{i\_no\_mac} \;=\; 102.52 + 22.54 N \nonumber \\ + 7.07 N D_{in} + 25.54 D_{in} \label{eq:c_encoding_no_mac} \end{align} The main benefit of the MAC array is reflected in the reduction of \(7.07 N D_{in}\) in Eq. (\ref{eq:c_encoding_no_mac}) to \(0.13 N D_{in}\) in Eq.
(\ref{eq:c_encoding_mac}), which is made possible by the SIMD operation of the MAC array. The speedup is higher for higher dimensions. Fig. \ref{fig:encoding2} shows the speedup of the computation time for input processing with the MAC array compared to without the MAC array. Unlike in keyword spotting, where the ReLU neuron model is used, in adaptive control the LIF neuron model is used, the same as in Loihi. The neuron update time in terms of the number of clock cycles is \begin{align} T_{n} \;=\; 28.19 N - 26.90 NP + 509.18 \label{eq:c_lif_neuron_update} \end{align} where \(P\) is the firing probability. The minus sign in \(-26.9 N P\) arises because during the refractory period the required computation is reduced; since this saving is event based, it depends on \(P\). The output processing time is \begin{equation} T_{o} \;=\; 5.8 ND_{out}P + 19.31 NP \label{eq:c_lif_decoding} \end{equation} where \(D_{out}\) is the number of output dimensions. The weight update time is \begin{equation} T_{w} \;=\; 8.28 ND_{out}P + 28.04N P \label{eq:c_lif_learning} \end{equation} The total time is \begin{align} T_{total} \;=\; T_{i\_mac} + T_{n} + T_{o} + T_{w} \label{eq:c_lif_total} \end{align} Since output processing and weight update are event based, the firing rate of 130 Hz (corresponding to a firing probability \(P\) of 0.13) used for comparing the SpiNNaker 2 prototype with Loihi reduces their computation time by 87\% compared to a non-event-based implementation. Typically, the SpiNNaker system runs in real time with a 1 ms time step. When the PE is running at 250 MHz, the available number of clock cycles for each time step is 250 000, which is the computational constraint. According to Eq. (\ref{eq:c_lif_total}), for the range of the parameters shown in Fig. \ref{fig:adaptive_memory}, the computation can be done within 1 ms. So the maximum implementable size of a network on a single PE in this benchmark is constrained by memory rather than computation. Although the SpiNNaker system runs with a constant timer tick, and within a timer tick the Arm core enters sleep mode after the computation is done, the study of the computation time provides information about how much the time step can potentially be reduced, e.g., when a 0.5 ms timer tick is required instead of a 1 ms one. For the adaptive control benchmark task with different numbers of input dimensions, output dimensions and numbers of neurons, the duration of a time step of the SpiNNaker 2 prototype and Loihi is compared in Fig. \ref{fig:loihi_jib1_comparison_time}, with the mean population firing rate kept at around 130 Hz for both platforms. Here the duration of a time step for the SpiNNaker 2 prototype refers to the time for the PE to complete the computation of a time step. From the comparison it is clear that for small numbers of input dimensions, Loihi is faster than the SpiNNaker 2 prototype, and for large numbers of input dimensions, the SpiNNaker 2 prototype is faster than Loihi. The maximum ratio of the duration of a time step between the two platforms is summarized in Tab. \ref{tab:adaptive_control_relative_time}. Because of the MAC array, the computation time of the SpiNNaker 2 prototype increases less rapidly with the number of input dimensions, so that the SpiNNaker 2 prototype catches up with and eventually overtakes Loihi for higher input dimensions.
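For convenience, the memory model of Section \ref{result_adaptive_control_memory} and the timing model above can be evaluated together with a short script. The sketch below checks one example configuration against the 90 KByte memory budget and the 250 000-cycle real-time budget; the coefficients are the fitted values quoted in the text, while the configuration itself is an arbitrary example.
\begin{verbatim}
# Sketch: evaluate the fitted memory and timing models for the adaptive
# control benchmark on one PE and check the memory (90 KBytes) and
# real-time (250 000 cycles = 1 ms at 250 MHz) budgets.
def memory_bytes(n, d_in, d_out):
    return (d_in + 1) * n + 2 * d_out * n + 4 * n + 8 * n

def cycles(n, d_in, d_out, p=0.13):
    t_in = 131.21 + 5.07 * n + 0.13 * n * d_in + 35.79 * d_in
    t_neuron = 28.19 * n - 26.90 * n * p + 509.18
    t_out = 5.8 * n * d_out * p + 19.31 * n * p
    t_weight = 8.28 * n * d_out * p + 28.04 * n * p
    return t_in + t_neuron + t_out + t_weight

n, d_in, d_out = 512, 100, 10          # example configuration (assumed)
print(memory_bytes(n, d_in, d_out) <= 90 * 1024)   # True: fits in SRAM
print(cycles(n, d_in, d_out) < 250_000)            # True: real-time capable
\end{verbatim}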
\begin{table}[htb] \begin{minipage}{0.47\textwidth} \centering \renewcommand{\arraystretch}{1.1} \caption{Maximum ratio of the duration of a time step between the SpiNNaker 2 prototype (SpiNN) and Loihi for the adaptive control task } \label{tab:adaptive_control_relative_time} \centering \footnotesize \begin{tabular}{lcc} \toprule Input Dimensions & 1 & 100 \\ Output Dimensions & 1 & 1 \\ Number of Neurons & 1024 & 512 \\ Duration of a Time Step SpiNN : Loihi & 1 : 0.37 & 0.49 : 1\\ \bottomrule \end{tabular} \end{minipage} \end{table} \subsubsection{Energy Measurement and Comparison with Loihi}\label{result_adaptive_control_power} \begin{figure*}[ht] \centering \includegraphics[width=1.0\textwidth]{figs/loihi_jib1_adaptive_control_comparison_energy.png} \caption{Active energy of the SpiNNaker 2 prototype (with stripes) and Loihi (without stripes) for different numbers of neurons per core and different input and output dimensions for the adaptive control benchmark. No measurement result for the SpiNNaker 2 prototype is shown where the implementation is limited by memory. }\label{fig:loihi_jib1_comparison} \end{figure*} \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/loihi_jib1_adaptive_control_comparison_breakdown.png} \caption{Breakdown of the energy consumption per core per time step of the SpiNNaker 2 prototype into 4 energy components: input processing, neuron update, output processing and weight update. }\label{fig:jib1_energy_breakdown} \end{figure} The energy consumption of the SpiNNaker 2 prototype and Loihi is measured with the same parameters as in the computation time comparison. The result is shown in Fig. \ref{fig:loihi_jib1_comparison}. Similar to Section \ref{result_keyword_spotting_power}, only the active energy is shown. For small numbers of input dimensions, Loihi is more energy efficient than the SpiNNaker 2 prototype, and for large numbers of input dimensions, the SpiNNaker 2 prototype is more energy efficient than Loihi. The maximum ratio of active energy consumption between the two platforms is summarized in Tab. \ref{tab:adaptive_control_relative_energy}. \begin{table}[htb] \begin{minipage}{0.47\textwidth} \centering \renewcommand{\arraystretch}{1.1} \caption{Maximum ratio of active energy consumption between the SpiNNaker 2 prototype (SpiNN) and Loihi for the adaptive control task } \label{tab:adaptive_control_relative_energy} \centering \footnotesize \begin{tabular}{lcc} \toprule Input Dimensions & 1 & 100 \\ Output Dimensions & 1 & 1 \\ Number of Neurons & 1024 & 512 \\ Active Energy SpiNN : Loihi & 1 : 0.81 & 0.36 : 1 \\ \bottomrule \end{tabular} \end{minipage} \end{table} Similar to the computation time comparison, we see the benefit of the MAC array especially for high input dimensions, when the MAC array is more extensively used. This is made clearer in the energy breakdown in Fig. \ref{fig:jib1_energy_breakdown}. Here, it is clear how the input processing energy increases with the input dimensions for the same number of neurons and output dimensions, how the neuron update energy increases with the number of neurons for the same input dimensions and output dimensions, and how the output processing and weight update energy increases with the number of output dimensions for the same input dimensions and number of neurons. \subsubsection{Robotic Demo}\label{result_adaptive_control_demo} The SpiNNaker 2 prototype running the adaptive control benchmark is connected to a robotic arm built with a Lego Mindstorms EV3 robot kit. The setup is based on \cite{terry15}.
The input to the neural network is the position and velocity of the motor and the output of the neural network is the motor control signal to be combined with the PD controller output, as described in Section \ref{adaptive_control}. In this demo we consider two situations: the normal case and the simulated aging case (Fig. \ref{fig:robotic_demo1}, upper part). In the case of simulated aging, an extra weight is added to the robotic arm to resemble an aging effect or an unknown disturbance. For each case the performance of the adaptive controller is compared with that of a normal PID controller. In the normal case, both controllers perform equally well, but in the simulated aging case, the PID controller cannot adapt itself to the new situation, while the adaptive controller can learn from the error feedback and adapt its parameters to improve the performance (Fig. \ref{fig:robotic_demo1}, lower part). \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figs/robotic_demo7-eps-converted-to.pdf} \caption{Robotic demo. Upper part: In the normal case (left), there is no extra weight attached to the robotic arm. In the simulated aging case (right), an extra weight is attached to resemble the aging effect. Lower part: Performance of the PID controller and the adaptive controller in the normal case and the simulated aging case. The red curve shows the actual position of the arm, and the green curve shows the target position. The shown position is the normalized angle position of the motor. In the normal case, both the PID controller and the adaptive controller perform well. But in the simulated aging case, the PID controller cannot adapt to the new situation, while the adaptive controller can adapt to the new situation by learning from error feedback, thus improving the performance, as seen in the improvement from the first to second motor commands. }\label{fig:robotic_demo1} \end{figure} \subsection{System Overview}\label{sys_ove} SpiNNaker \cite{furber14} is a digital neuromorphic hardware system based on low-power Arm processors, originally built for the real-time simulation of spiking neural networks (SNNs). In the second generation of SpiNNaker (SpiNNaker 2), which is currently being developed in the Human Brain Project \cite{amunts2016human}, several improvements are being made. The SpiNNaker 2 architecture is based on Processing Elements (PEs) which contain an Arm Cortex-M4F core, 128 KBytes of local SRAM, hardware accelerators for exponential functions \cite{partzsch17} and true- and pseudo-random numbers \cite{felix16,synsampling19}, and multiply-accumulate (MAC) accelerators. Additionally, the PEs include advanced dynamic voltage and frequency scaling (DVFS) features \cite{hoeppner17iscas,hoeppner19tcas}. The PEs are arranged in Quad-Processing Elements (QPEs) containing four PEs and a Network-on-Chip (NoC) router for packet-based on-chip communication. The QPEs can be placed in an array scheme without any additional flat top-level routing to form the SpiNNaker 2 many-core SoC. SpiNNaker 2 will be implemented in GLOBALFOUNDRIES 22FDX technology~\cite{glofo16}. This FDSOI technology allows the application of adaptive body biasing (ABB) for low-power operation at ultra-low supply voltages in both forward \cite{Hoeppner2019a} and reverse bias schemes \cite{Walter2020}. For maximum energy efficiency and reasonable clock frequencies, a 0.50V nominal supply voltage is chosen and ABB in a forward bias scheme is applied. The ABB-aware implementation methodology from \cite{Hoeppner2019b} has been used.
This allows achieving a clock frequency of \SI{>200}{\MHz} at the 0.50V nominal supply voltage (first DVFS performance level, PL1) and \SI{>400}{\MHz} from a 0.60V supply (second DVFS performance level, PL2). The second SpiNNaker 2 prototype chip has been implemented and manufactured in 22FDX \textcolor{red}{(cite Jib1 paper)}. It contains 2 QPEs with 8 PEs in total to allow the execution of neuromorphic applications. Fig.~\ref{fig:pe_schematic} shows the simplified block diagram of the testchip PE array. The chip photo is shown in Fig.~\ref{fig:chipphoto}. The testchip includes peripheral components for host communication, a prototype of the SpiNNaker router for chip-to-chip spike communication and some shared on-chip SRAM. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/qpe_schematic2.eps} \caption{Simplified schematic of the second SpiNNaker 2 prototype with 2 QPEs. Each QPE contains 4 PEs. Each PE contains a MAC array, an Arm core and a local SRAM. The NoC router is responsible for the communication. }\label{fig:pe_schematic} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.25\textwidth]{figs/jib1_chipphoto.eps} \caption{Chip photo of the SpiNNaker 2 prototype in 22FDX technology }\label{fig:chipphoto} \end{figure} \subsection{MAC Array}\label{mac_acc} The MAC array has 64 MAC units in a 4 x 16 layout. Fig. \ref{fig:mac_schematic} illustrates the MAC array. The data of operand A and operand B are arrays of 8-bit integer values. In each clock cycle, 16 values from the array of operand A and 4 values from the array of operand B are fed into the MAC array. Every MAC unit in the same column is fed with the same value from operand A, and every MAC unit in the same row is fed with the same value from operand B. The software running on the Arm core is responsible for arranging the data in the SRAM and notifying the MAC array of the address and length of the data to be processed. After the data is processed, the results are written back to predefined addresses in the memory. The result of each MAC unit is 29 bits wide. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{figs/mac.eps} \caption{Schematic of the MAC array. Each square in the 4 x 16 block represents one MAC unit. The squares around the block represent the data to be processed. In each clock cycle, 4 values from operand B and 16 values from operand A are fed into the MAC array simultaneously, as indicated by the arrows. }\label{fig:mac_schematic} \end{figure} When computing a matrix multiplication, a general purpose processor like the Arm core needs to (1) fetch operands A and B into the registers, (2) perform the multiply-accumulate, (3) write the result back, (4) check the loop condition, and (5) compute the addresses of the data for the next iteration. While the MAC array essentially does the same, it is more efficient due to its Single Instruction Multiple Data (SIMD) operation. In particular, the efficiency comes from the following: (1) 64 MAC operations can be done in parallel in one clock cycle; (2) 16 x 8 bits of operand A data and 4 x 8 bits of operand B data can be fetched in parallel in one clock cycle; and (3) control logic and data transfer run in parallel to the MAC operations, hiding the overhead of data transfer for the next iteration. \subsection{Keyword Spotting} The keyword spotting network consists of two computational steps: the vector-matrix multiplication, which is done with the MAC array, and the ReLU update, which is done with the Arm core.
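A minimal NumPy sketch of these two steps for a single layer is given below. It mimics the arithmetic only (8-bit operands with 32-bit accumulation, followed by a ReLU), not the hardware interface; the final rescaling shift is an assumption for illustration, since the actual requantization scheme is not detailed here.
\begin{verbatim}
# Sketch: one keyword-spotting layer -- 8-bit vector-matrix multiplication
# with 32-bit accumulation (MAC array) followed by a ReLU (Arm core).
import numpy as np

rng = np.random.default_rng(0)
D, N = 390, 256                                    # input dims, neurons
w = rng.integers(-128, 128, size=(D, N), dtype=np.int8)
b = rng.integers(-128, 128, size=N, dtype=np.int8)
x = rng.integers(0, 128, size=D, dtype=np.int8)    # 8-bit MFCC features

acc = x.astype(np.int32) @ w.astype(np.int32) + b  # 32-bit accumulators
y = np.clip(np.maximum(acc, 0) >> 8, 0, 127)       # ReLU; shift is assumed
print(y[:8])
\end{verbatim}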
Because of memory constraints (see Section \ref{result_keyword_spotting_memory}), layer 1 is split into two PEs. The weights in this network are the same as in \cite{keyword_spotting_loihi19}. The input to the network is a 390-dimensional vector of 8-bit integers. The ReLU activations of each layer are also 8-bit integers. The ReLU activations of layer 2 are directly sent back to the host PC, where the vector-matrix multiplication for the output layer with 29 dimensions is performed, the same as in \cite{keyword_spotting_loihi19}. Fig. \ref{fig:keywordspotting_impl} shows the implementation of the keyword spotting network on the SpiNNaker 2 prototype. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/keyword_spotting_impl3.eps} \caption{Implementation of the keyword spotting network on the SpiNNaker 2 prototype }\label{fig:keywordspotting_impl} \end{figure} \subsection{Adaptive Control} The implementation of adaptive control on the SpiNNaker 2 prototype is based on \cite{mundy15} and \cite{knight2016}. There are mainly four computational steps: input processing, neuron update, output processing and weight update. In input processing, the inputs to the network are multiplied with the input weight matrix to produce the input current for each neuron in the hidden layer. The weights are quantized to 8-bit integers with stochastic rounding. The vector-matrix multiplication with only the Arm core and without the MAC array is also implemented and serves as a reference. The rest of the computation is implemented on the Arm core, which allows event-based processing. In neuron update, the neuron dynamics are updated according to the input current. The Leaky-Integrate-and-Fire (LIF) neuron model is used in the hidden layer to allow for event-based processing of the spikes in the following steps. In output processing, the outputs of the neurons are multiplied with the output weight matrix. In the case of non-spiking neuron models like ReLU, this process is a vector-matrix multiplication. In the case of spiking neuron models, a connection is only activated when there is a spike, so this output processing step corresponds to adding the weights associated with each neuron that has spiked to the output of the network. In weight update, the output weight matrix is updated according to the neuron activity and the error signal. In order to perform the weight update in an event-based manner, the low-pass filter mentioned in Section \ref{adaptive_control} has been removed, similar to \cite{knight2016}. Because of the short time constant of the low-pass filter in this application, this modification does not affect the performance. Since the learning rate is normally very small, a floating point data type is chosen for the weights in the output weight matrix. In this work, we focus on the adaptive control network implemented on a single PE. The implementation is done with scalability in mind. In the case that the size of a neuron population exceeds the memory limit of a PE, it can be split across many PEs \cite{mundy15}. In this work, the PE additionally simulates the PD controller. The overhead is negligible. The computational steps and the hardware component used for each step are summarized in Fig. \ref{fig:nef_mlacc}. The PD controller is not shown since the computation is relatively simple.
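To make the event-based steps concrete, the following sketch shows output processing and the weight update driven by the indices of spiking neurons; the update follows the delta rule of Eq. (\ref{eq:adaptive_control}) with $a_i = 1$ for a neuron that spiked (the low-pass filter having been removed, as described above). The array layout and names are illustrative, not those of the actual firmware.
\begin{verbatim}
# Sketch: event-based output processing and delta-rule weight update.
# Only neurons that spiked this time step are touched, which is why the
# cost of these two steps scales with the firing probability P.
import numpy as np

def output_and_learn(spike_idx, w_out, error, alpha):
    # spike_idx: indices of neurons that spiked in this time step
    # w_out:     (N, D_out) output weight matrix (floating point)
    # error:     (D_out,) error signal derived from the PD controller
    output = w_out[spike_idx].sum(axis=0)   # output processing
    w_out[spike_idx] += alpha * error       # dw_ij = alpha * a_i * E_j
    return output
\end{verbatim}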
\begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/nef_mlacc2.eps} \caption{Main computational steps and the hardware component for each step in adaptive control }\label{fig:nef_mlacc} \end{figure} \subsection{Keyword Spotting}\label{keyword_spotting} Keyword spotting is a speech processing problem which deals with identifying keywords in utterances. A practical use case is the identification of wake words for virtual assistants (e.g., ``Alexa''). In this work, the keyword spotting network we implement on the SpiNNaker 2 prototype is the same as in \cite{keyword_spotting_loihi19}, which consists of one input layer with 390 input values, two dense layers each with 256 neurons, and one output layer with 29 output values (Fig. \ref{fig:keyword_spotting_algo}). As in \cite{keyword_spotting_loihi19}, no training is involved and only inference is considered. The 390-dimensional input to the network is the Mel-frequency cepstral coefficient (MFCC) features of an audio waveform in each time step. The 29-dimensional output of the network corresponds to the alphabetical characters, with additional special characters for, e.g., silence. One ``inference'' with this network involves passing 10 time steps of the MFCC features into the network. The outputs are then postprocessed to form a result for the inference. The difference from the implementation on Loihi is that on the SpiNNaker 2 prototype, we implement the network as a normal DNN with ReLU activations, whereas on Loihi, the SNN version was implemented since Loihi only supports SNNs. \begin{figure}[htb] \centering \includegraphics[width=0.4\textwidth]{figs/keyword_spotting_architecture2.eps} \caption{Keyword Spotting Network Architecture }\label{fig:keyword_spotting_algo} \end{figure} \subsection{Adaptive Control}\label{adaptive_control} For our second benchmark task, we use the adaptive control algorithm proposed as a benchmark in \cite{terry15} and further investigated in \cite{nengo_adaptive_loihi2020}. This benchmark consists of a single-hidden-layer neural network, where the input is the sensory state of the system to be controlled (such as a robot arm) and the output is the extra force that should be applied to compensate for the intrinsic dynamics and forces on the arm (gravity, friction, etc.). The only non-linearities are in the hidden layer (i.e. there is no non-linear operation directly on the input or output). The input weights are fixed and randomly chosen, and the output weights $\omega_{ij}$ are initialized to zero and then adjusted using a variant of the delta rule \cite{eliasmith2011} (Eq. \ref{eq:adaptive_control}), where $\alpha$ is a learning rate, $a_i$ is the current level of activity of the $i$th neuron, and $E_j$ is an error signal. \begin{align} \Delta \omega_{ij} = \alpha a_i E_j \label{eq:adaptive_control} \end{align} Crucially, if we use the output of a PD-controller as this error signal $E_j$, and if we take the output of this network and add it to the control signal produced by the PD-controller, then the resulting system will act as a stable adaptive controller \cite{dewolf16}. This is a variant of the adaptive control algorithm developed by Jean-Jacques Slotine \cite{slotine87}. One way to think of this is that the neural network is acting somewhat like the I term in a PID-controller, but since the I value is being produced by the neural network, it can be different for different parts of the sensory space.
It can thus learn to, for example, apply extra positive torque when a robot arm is leaning far to one side, and extra negative torque when the arm is leaning far to the other side. When used with spiking neurons, we also apply a low-pass filter to the $a_i$ term, producing a continuous value representative of the recent spiking activity of the neuron. While this benchmark was originally proposed for its simplicity and applicability across a wide range of neuromorphic hardware and controlled devices, there is a further important reason for us to choose it. The core network that it requires has a single hidden-layer non-linearity, and the inputs and outputs are generally of much lower dimensionality than the number of neurons in the hidden layer. This is exactly the sort of network that forms the core component of the Neural Engineering Framework (NEF) \cite{eliasmith2003a}. The NEF has been used to create large-scale biologically-based neural models \cite{eliasmith2012} by chaining these smaller networks together. By sending the output from one of these networks to the inputs of another network, we are effectively factoring the weight matrix between the hidden layers of the two networks. This has been shown to be a highly efficient method for implementing neural models on the original SpiNNaker 1 hardware \cite{mundy15}, and we expect the same to be the case on SpiNNaker 2. \begin{figure}[htb] \centering \includegraphics[width=0.48\textwidth]{figs/adaptive_control_algo2.eps} \caption{Adaptive Control Network Architecture }\label{fig:adaptive_algo} \end{figure}
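To see how the pieces fit together as a controller, the following is a schematic simulation loop under the same caveats as before: the one-dimensional plant with a constant unmodeled load is purely our invention for illustration, and a rate-based ReLU population stands in for the spiking neurons. The PD output serves both as part of the applied torque and as the error signal $E_j$ in Eq. \ref{eq:adaptive_control}.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_neurons = 500
enc = rng.standard_normal((n_neurons, 2))   # fixed random input weights
w = np.zeros((1, n_neurons))                # learned output weights
kp, kd, alpha, dt = 25.0, 10.0, 1e-4, 1e-3
q, dq, target = 0.0, 0.0, 1.0               # toy one-joint plant state

for _ in range(20000):
    u_pd = kp * (target - q) + kd * (0.0 - dq)    # PD control signal
    a = np.maximum(0.0, enc @ np.array([q, dq]))  # hidden activities
    u_adapt = float(w @ a)                        # adaptive network term
    w += alpha * np.outer([u_pd], a)              # delta rule, E = u_pd
    ddq = u_pd + u_adapt - 9.81   # plant with unmodeled constant load
    dq += dt * ddq
    q += dt * dq
\end{verbatim}

In this toy run the adaptive term converges to supplying the unmodeled load torque, so the PD error, and hence the learning signal, decays; this mirrors the role of the adaptive population in Fig. \ref{fig:adaptive_algo}.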
\section{Introduction} Since its inception, supersymmetry has been a formidable tool to understand the dynamics of gauge theories in various dimensions. More recently, the success of localization methods \cite{Pestun:2016zxk} has brought about a flurry of new results, often shedding light on hidden structures and symmetries in the strongly coupled regime. Among those, novel symmetries of gauge theories in four dimensions were uncovered by exhibiting non-perturbative Schwinger-Dyson type equations satisfied by certain correlators of the quantum field theory \cite{Nekrasov:2015wsu,Nekrasov:2016qym}. Concretely, a correlator is defined via a path integral in quantum field theory. Schwinger-Dyson equations can be understood as constraints that must be satisfied by such a correlator. This comes about from demanding that the path integral be invariant under an infinitesimal shift of the contour (under the condition that the integration measure is left invariant by such a shift). The particular question asked in \cite{Nekrasov:2015wsu} was to determine what type of constraints must be satisfied by correlators in Yang-Mills theory, when a contour gets shifted from a given topological sector to another distinct topological sector of the theory, related to the former by a large gauge transformation. Recall that the connected components of the space of gauge fields are labeled by an integer called the instanton charge: $\frac{-1}{8\pi^2}\int\text{Tr}F\wedge F$, where $F$ is the field strength and the domain of integration is the spacetime. Then, the problem can be recast in a simple way: what symmetries of the gauge theory are made manifest when the instanton charge varies? The question was answered in the context of supersymmetric $\cN=2$ Yang-Mills, on a regularized spacetime called the $\Omega$-background \cite{Nekrasov:2002qd,Losev:2003py,Nekrasov:2010ka} on $\mathbb{C}^2$. On this background, the instanton number can be changed by adding and removing point-like instantons in a controlled way, and the shift of contour in the definition of the path integral turns into the discrete operation of adding and removing boxes in a Young tableau \cite{Nekrasov:2003rj}. To mediate the change in instanton number of the theory, it is convenient to construct a local 1/2-BPS codimension-4 ``$Y$-operator," as a function of an auxiliary complex parameter $M\in\mathbb{C}$. Then, Schwinger-Dyson equations are understood as regularity conditions for the vev $\left\langle Y(M)\right\rangle$ in the parameter $M$. Put differently, the correlator typically has poles in the fugacity $M$, but the Schwinger-Dyson equations tell us that there exists a precise sum of $Y$-operator vevs which is pole-free in $M$, nicknamed the $qq$-character observable. Here, each ``$q$" stands for one of the two parameters of the $\Omega$-background on $\mathbb{C}^2$, and the term ``character" is used because the observable is a (deformed) character of a finite-dimensional representation of a Yangian algebra.
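To make the logic of such contour-shift arguments concrete, it may help to recall a standard zero-dimensional toy example (ours, not part of the construction of \cite{Nekrasov:2015wsu}): for a single variable $x$ with action $S(x)$ and a convergent integral, the vanishing of a total derivative yields
\beq
0=\int dx\, \frac{d}{dx}\left(x^n\, e^{-S(x)}\right) \quad\Longrightarrow\quad n\left\langle x^{n-1}\right\rangle = \left\langle x^n\, S'(x)\right\rangle\; ,
\eeq
a constraint relating correlators of different operators. The non-perturbative equations discussed above are the analogue of this statement when the shift of contour connects distinct topological sectors rather than infinitesimally close field configurations.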
The above construction can be generalized in many ways, for example by considering additional defects in the background \cite{Nekrasov:2017rqy,Jeong:2018qpc}, by studying different gauge groups \cite{Haouzi:2020yxy,Haouzi:2020zls}, or by going away from four dimensions: the case of a five-dimensional gauge theory compactified on a circle has been a particularly fruitful area of research \cite{Tong:2014cha,Kim:2016qqs,Kimura:2015rgi,Kimura:2017hez,Mironov:2016yue,Assel:2018rcw,Haouzi:2019jzk,Bourgine:2017jsi,Bourgine:2019phm,Chang:2016iji}, where the $qq$-character observable arises not as an object defined in the representation theory of Yangians, but instead in the representation theory of quantum affine algebras. Likewise, in the case of a six-dimensional gauge theory compactified on a 2-torus \cite{Kimura:2016dys,Agarwal:2018tso}, the $qq$-character observable becomes an object in the representation theory of quantum elliptic algebras. Remarkably, equivariant localization on the $\Omega$-background can be performed to yield exact expressions for the $qq$-character observables in all of the above cases.\\ Meanwhile, supersymmetric gauge theories in two fewer dimensions share many features with their higher-dimensional counterparts, but have not yet been studied in any systematic way. Most notably, there exist once again distinct topological sectors of the theory, this time labeled by an integer called the vortex charge: $\frac{-1}{2\pi}\int\text{Tr}F$, where $F$ is the field strength and the integration is over the two real dimensions transverse to the vortex. By the logic we reviewed above, one should then expect non-perturbative Schwinger-Dyson equations to exist also in dimensions 2, 3 (compactified on a circle) and 4 (compactified on a 2-torus). This time around, invariance under a slight shift of contour in the path integral should translate to a change in vortex number. To mediate such a shift, one could hope to construct as before a 1/2-BPS $Y$-operator, this time of codimension 2, as a function of (at least) one auxiliary parameter $M$. Then, Schwinger-Dyson equations would again be understood as regularity conditions that the vev $\left\langle Y(M)\right\rangle$ needs to satisfy in the parameter $M$. Indeed, the existence of such non-perturbative equations has been anticipated for two-dimensional gauged linear sigma models with $\cN=(4,4)$ supersymmetry: to exhibit the equations and their associated symmetries, a new vortex $qq$-character observable can be defined \cite{Nekrasov:2017rqy,Haouzi:2019jzk}, with the same Yangian symmetry as its four-dimensional counterpart, but involving different twists. The aim of this paper is to give a first-principles construction of low-dimensional non-perturbative Schwinger-Dyson equations, and to interpret them from various physical perspectives. We find it convenient to work in a K-theoretic framework, i.e. we study three-dimensional $\cN=4$ gauge theories $G^{3d}$ on the manifold $\mathbb{C}\times S^1$. Results for two-dimensional gauged linear sigma models with $\cN=(4,4)$ supersymmetry can be obtained by reducing the 3d theory on the circle $S^1$. The theories $G^{3d}$ we focus on will be of quiver-type, labeled by an $ADE$ Lie algebra, with unitary gauge groups and fundamental flavors. We require the number of flavors to be large enough for $G^{3d}$ to be Higgsable, and introduce non-abelian versions of Nielsen–Olesen vortex solutions at the Higgs vacua.
Our investigations lead us to define a vortex character observable with quantum affine symmetry\footnote{The literature regarding the representation theory of quantum affine algebras is rich. As a short guide, there are two popular presentations of finite-dimensional representations, one due to Jimbo \cite{jimbo}, and another due to Drinfeld \cite{Drinfeld:1987sy}. In our physical context, it is the latter presentation that is relevant; see also \cite{Chari:1994pf, Chari:1994pd}. Characters of finite-dimensional representations of quantum affine algebras, dubbed ``$q$-characters," were first constructed by Frenkel and Reshetikhin in the 90's \cite{Frenkel:qch}. They were later rediscovered in a physical context when discussing the quantum geometry of 5d supersymmetric quiver gauge theories \cite{Nekrasov:2013xda,Bullimore:2014awa}. A deformed character depending on two parameters was introduced in \cite{Shiraishi:1995rp,Awata:1995zk,Frenkel:1998} (for related work on $t$-analogues of $q$-characters, see also \cite{Nakajima:tanalog}). This ``$qq$-character" was again rediscovered in the study of 5d supersymmetric gauge theories \cite{Nekrasov:2015wsu}. In this paper, we study how it furthermore arises in the study of 3d supersymmetric quiver gauge theories.}.\\ In this three-dimensional setting, the codimension-2 operator mediating the change in vortex number is a 1/2-BPS Wilson loop wrapping the circle $S^1$. Recall that a Wilson loop is formulated as the trace of a holonomy matrix, where a quark is parallel transported along the circle, and the trace is evaluated in some representation of the gauge group. We are able to give four definitions of the vortex $qq$-character observable:\\ -- The vortex character is the Witten index of a one-dimensional $\cN=2$ gauged quantum mechanics living on the vortices of $G^{3d}$, interacting with the quark in the Wilson loop.\\ -- The vortex character is the half-index of the 3d $\cN=4$ gauge theory $G^{3d}$ in the presence of a defect, the Wilson loop.\\ -- The vortex character is a deformed $\cW_{q,t}$-algebra correlator on an infinite cylinder, with stress tensor and higher spin current insertions, including a distinguished set of ``fundamental" vertex operators.\\ -- The vortex character is the partition function of the six-dimensional $(2,0)$ little string theory compactified on the cylinder, in the presence of various codimension-4 defects. The defects are all realized as D3 branes of type IIB wrapping 2-cycles of a resolved $ADE$ singularity.\\ We will analyze each perspective in detail, and prove that all four definitions are in fact equivalent. Let us briefly comment on them. The most obvious perspective is perhaps the one-dimensional one. There, we describe microscopically the gauged quantum mechanics on the vortices in some Higgs vacuum of $G^{3d}$. In three dimensions, both the vortices and the quark of the Wilson loop are particles wrapping the circle $S^1$. In particular, the magnetically charged vortex will experience a Lorentz force in the presence of the electrically charged quark, and the quantum mechanics captures the corresponding dynamics. We count vortices in this background by computing the Witten index of the theory, with appropriate chemical potentials turned on. We show that this index is a deformed character of a finite-dimensional representation of a quantum affine algebra.
This Witten index can be reinterpreted directly from the perspective of $G^{3d}$ itself, as a half-index, or holomorphic block \cite{Beem:2012mb}, where the Wilson loop is treated as a codimension-2 line defect. In this picture, the half-index of the 3d theory is computed via Coulomb branch localization, and the line defect is coupled to the bulk theory via gauging of its flavor symmetries. We show that this coupled 3d/1d index is, up to overall normalization, precisely the vortex $qq$-character constructed from the vortex quantum mechanics. The 3d perspective is useful to make contact with certain vertex operator algebras called ${\cW}(\fg)$-algebras. These are labeled by a simple Lie algebra $\fg$, which in this work will be simply-laced, and the choice of a nilpotent orbit, which in this work will be the maximal one. They realize the symmetry of Toda theory, here defined on an infinite cylinder. The particular case $\fg=A_1$ is known as Liouville theory, which enjoys Virasoro symmetry. When $\fg\neq A_1$, the Virasoro stress tensor remains, but there are also higher spin currents. In the 90's, Frenkel and Reshetikhin introduced a two-parameter deformation of the ${\cW}$-algebras, denoted as ${\cW}_{q,t}(\fg)$ \cite{Frenkel:1998}, and sometimes referred to as deformed ${\cW}$-algebras. Crucially, while an ordinary ${\cW}$-algebra has conformal symmetry, its deformation does not: instead, it is the symmetry of the so-called $q$-Toda theory on the cylinder. Correlators are defined in the free field formalism, as integrals over the positions of some deformed screening currents on the cylinder. We show that the vortex $qq$-character of $G^{3d}$ is such a correlator: the 3d gauge content is realized as screening current insertions, the 3d flavor symmetry is realized as fundamental vertex operator insertions, and the Wilson loop is realized as the insertion of a generating current operator. This latter type of operator includes the deformed stress tensor, but also the ``higher spin" currents of the ${\cW}_{q,t}(\fg)$ algebra; they are all constructed in the free field formalism as the commutant of the screening currents. There are ${\text{rank}}({\fg})$ independent generators constructed in this way, with spin $s$ in the range $2\leq s \leq {\text{rank}}({\fg})+1$. The vortex Schwinger-Dyson equations of the gauge theory $G^{3d}$ are now interpreted as Ward identities satisfied by the correlator in the ${\cW}_{q,t}(\fg)$-algebra. Finally, the various operators appearing in the ${\cW}(\fg)$-algebra construction have a natural interpretation in $(2,0)$ little string theory compactified on the cylinder: they are all D3 brane defects at points on this cylinder. Some D3 branes realize the screening charges, other D3 branes realize the flavor vertex operators, and a last set of D3 branes realizes the stress tensor and higher spin currents of the $\cW$-algebra. In hindsight, some of the relations are not too surprising: for instance, given a three-dimensional supersymmetric gauge theory defined on an $S^1$-bundle over a 2-manifold, the partition function (with adequate twists) is expected to contain information about the vortex sector of the theory. This makes plausible the relation between the 3d half-index with line defect and the Witten index of the vortex quantum mechanics. Furthermore, the relation from gauge theory supersymmetric indices to $\cW$-algebra correlators is an illustration of the so-called BPS/CFT correspondence \cite{Losev:2003py,Alday:2009aq}.
Lastly, it is known that the effective theory on D3 branes in the $(2,0)$ little string is precisely the 3d $\cN=4$ theory under study \cite{Aganagic:2017gsx}. The goal of this paper is to make use of 1/2-BPS Wilson loops to flesh out these ideas in detail, and to exhibit new non-perturbative physics in the process.\\ As an application, we briefly analyze the action of 3d Seiberg duality on the vortex $qq$-character observable. This duality relates 3d gauge theories that are distinct as defined in the UV, but flow to the same theory in the IR \cite{Seiberg:1994pq}. Here, we construct Seiberg-dual characters directly from the vortex quantum mechanics, where the duality manifests itself as a wall-crossing phenomenon in the Witten index \cite{Hwang:2017kmk}. This perspective gives us complete control over the action of the duality in three dimensions.\\ The paper is organized as follows: in Section 2, we construct the vortex quantum mechanics of $G^{3d}$ in the presence of a Wilson loop, and show that its Witten index is a vortex character. We comment on how to interpret our result as a set of non-perturbative Schwinger-Dyson identities. In Section 3, we re-derive the vortex character directly from the 3d perspective, coupled to a loop defect. In Section 4, we make contact with Ward identities in the deformed $\cW(\fg)$-algebra picture. In Section 5, we define the vortex character straight from little string theory in the presence of codimension-4 D-brane defects. In Section 6, we discuss Seiberg duality and future directions. In Section 7, we showcase in full detail all the results of the paper for the case of 3d $\cN=4$ SQCD. \vspace{16mm} \section{Schwinger-Dyson Equations: the Vortex Quantum Mechanics Perspective} \label{sec:1dsection} We start with a lightning review of three-dimensional gauge theories with 8 supercharges, along with the various 1/2-BPS objects that enter our story. \subsection{3d $\cN=4$ Gauge Theory, Vortices and Wilson Loops} \label{ssec:review} We consider a 3d $\cN=4$ quiver gauge theory $G^{3d}$ on the manifold $\mathbb{C}\times S^1(\widehat{R})$, where the quiver is labeled by a simply-laced Lie algebra $\fg$ of rank $n$, whose shape is the Dynkin diagram of $A_n$, $D_n$ or $E_n$. The radius of the circle is denoted by $\widehat{R}$. For concreteness, the Lagrangian gauge group is a product of $n$ unitary groups, \beq\label{gaugegroup3d} G=\prod_{a=1}^n U(N^{(a)})\; . \eeq We introduce flavor symmetry through the group \beq\label{flavorgroup3d} G_F=\prod_{a=1}^n U(N^{(a)}_F)\; , \eeq whose associated gauge fields are frozen. This produces $N^{(a)}_F$ hypermultiplets on node $a$, in the bifundamental representation $(N^{(a)}, \overline{N^{(a)}_F})$ of the group $U(N^{(a)})\times U(N^{(a)}_F)$. Finally, we have hypermultiplets in the bifundamental representation $\oplus_{b>a}\, \Delta^{ab}\,(N^{(a)}, \overline{N^{(b)}})$ of the group $\prod_{a,b} U(N^{(a)})\times U(N^{(b)})$, where $\Delta^{ab}$ is the incidence matrix of $\fg$: $\Delta^{ab}$ is equal to 1 if there is a link connecting nodes $a$ and $b$ in the Dynkin diagram of $\fg$, and is 0 otherwise. Then, $G^{3d}$ contains a total of $n-1$ such bifundamental hypermultiplets.\\ The $a$-th gauge group in the quiver contains an abelian factor $U(1)\in U(N^{(a)})$, from which one can define a conserved current $j^{(a)}=\frac{1}{2\pi}* \text{Tr}\, F^{(a)}$; the associated global symmetry makes up the so-called topological symmetry of $G^{3d}$.
Coupling this current to a $U(1)$ factor from the gauge group results in a Fayet-Iliopoulos (FI) term for the $a$-th node of the quiver. The theory $G^{3d}$ has a moduli space of vacua with Coulomb and Higgs branches, and a corresponding $SU(2)_C \times SU(2)_H$ R-symmetry, where each $SU(2)$ acts on the two branches separately. In particular, each of the $n$ FI parameters is a triplet under $SU(2)_H$; we decompose each such triplet into a real FI parameter and a complex one. Under the R-symmetry, the 3d $\cN=4$ Poincar\'{e} supercharges transform in the representation $(\bf{2,2})$. They obey the anticommutator relation \beq \{Q^\alpha_{a,a'}, Q^\beta_{b,b'}\}=\epsilon_{a b}\, \epsilon_{a' b'}\,\left(\gamma_\mu\, C\right)^{\alpha \beta} P^\mu\; , \eeq where we introduced $SO(2,1)$ $\gamma$-matrices, the charge conjugation matrix $C$, and the three-momentum $P^\mu$. The upper index $\alpha$ is a spinor index for $SO(2,1)$, while the lower indices $a, a'$ are indices for $SU(2)_C$ and $SU(2)_H$, respectively. Additionally, the above supercharges obey a reality condition, which we omit writing explicitly here.\\ The aim of this work is to exhibit certain symmetries associated with finite-energy configurations of BPS vortices, which sit at Higgs vacua of $G^{3d}$. Therefore, from now on, we require that all theories under study possess a Higgs branch, and moreover that all vacua we study be Higgs vacua. In other words, the flavor symmetry group $G_F$ should have a large enough rank. The vortices then arise as semi-local non-abelian versions of Nielsen-Olesen solutions; they are codimension-2 particles, transverse to the $\mathbb{C}$-line and wrapping $S^1(\widehat{R})$. Then we tune the moduli to sit at such a Higgs vacuum, and the gauge group $G$ breaks to its $U(1)$ centers. We furthermore turn on the $n$ real FI parameters. The complex FI parameters are set to zero throughout this paper. The R-symmetry is broken to $SU(2)_C \times U(1)_H$, and 1/2-BPS vortex solutions appear in the moduli space. They can be described as a one-dimensional $\cN=4$ supersymmetric quantum mechanics\footnote{Here, we mean by 1d $\cN=4$ supersymmetry the reduction of 2d $\cN=(2,2)$ supersymmetry to one dimension.}, preserving the supercharges $Q^1_{a,1}$ and $Q^2_{b,2}$ of the 3d theory. Those four supercharges anticommute to the generator of translations along the vortices, which we denote as $H$: \beq \{Q^1_{a,1}, Q^2_{b,2}\}=\epsilon_{a b}\, H \eeq Independently of vortices, it is also possible to introduce 1/2-BPS Wilson lines for $G^{3d}$. Specifically, a Wilson line operator is labeled by a choice of path and a representation ${\cal R}$ of the gauge group. In this work, we choose a loop wrapping the circle $S^1(\widehat{R})$ and sitting at the origin of $\mathbb{C}$. The Wilson loop operator vev on node $a$ reads: \beq \label{Wilsonloop} \langle W^{(a)}_{\cal R}\rangle = \text{Tr}_{\cal R}\, P \, \text{exp}\oint d\tau\, i\left[A^{(a)}_\mu\, \dot{x}^\mu + \Phi^{(a)}\, \sqrt{- \dot{x}^2} \right] \eeq The second term in the exponent is required by supersymmetry, and $\Phi^{(a)}$ is the scalar belonging to the $a$-th 3d $\cN=2$ vector multiplet inside the $\cN=4$ vector multiplet. This operator clearly breaks the $SU(2)_C \times SU(2)_H$ R-symmetry to $U(1)_C \times SU(2)_H$, since $SU(2)_C$ used to act on the triplet of $\cN=4$ vector multiplet scalars.
One way to realize such a supersymmetric Wilson loop operator is to couple the 3d bulk to a one-dimensional $\cN=4$ supersymmetric quantum mechanics\footnote{Here, we mean by 1d $\cN=4$ supersymmetry the reduction of 2d $\cN=(0,4)$ supersymmetry to one dimension. Note that this is different from the 1d $\cN=4$ supersymmetry we described for the vortices, which was a reduction of 2d $\cN=(2,2)$ supersymmetry.}. The Wilson loop is then the theory of a 1d $\cN=4$ Fermi multiplet, meaning a complex chiral fermion. This can be achieved in a supersymmetric way by gauging the flavor symmetry of the Fermi multiplet, meaning we couple it to a 1d $\cN=4$ vector multiplet embedded inside the 3d $\cN=4$ vector multiplet. Then, integrating out the 1d Fermi multiplet has the effect of inserting the Wilson loop \eqref{Wilsonloop} in the path integral of the bulk theory $G^{3d}$. The 1d $\cN=4$ theory on the Wilson loop preserves the supercharges $Q^1_{1,a'}$ and $Q^2_{2,b'}$ of the 3d theory. Those four supercharges anticommute to the generator of translations along the loop, which we denote as $H$: \beq \{Q^1_{1,a'}, Q^2_{2,b'}\}=\epsilon_{a' b'}\, H \eeq The supersymmetric vortices and Wilson loops we described above both preserve 4 supercharges, but only $Q^1_{1,1}$ and $Q^2_{2,2}$ are preserved at the same time. Therefore, in the presence of a Wilson loop, the vortex quantum mechanics only has 1d $\cN=2$ supersymmetry. \subsection{The Vortex Quantum Mechanics} \label{ssec:QMvortex} Let us first consider $G^{3d}$ without any Wilson loop. In that case, the 1d $\cN=4$ quantum mechanics on its vortices is well-known \cite{Hanany:2003hp,Shifman:2006kd,Eto:2006uw,Eto:2007yv}\footnote{The description of the moduli space of vortices relies on a D-brane construction in the work \cite{Hanany:2003hp}. However, some of the dynamical degrees of freedom considered there turn out to be non-normalizable zero modes when performing a careful field-theoretic analysis \cite{Shifman:2006kd,Eto:2006uw,Eto:2007yv}; as a result, the K\"{a}hler potentials on the vortex moduli space are in general different in the brane and field theory approaches, with agreement only on certain BPS solutions \cite{Shifman:2011xc,Koroteev:2011rb}. In our context, the index of the vortex quantum mechanics we compute is insensitive to these discrepancies.}. Just like the bulk theory, it is a $\fg$-type quiver theory of rank $n$, which we call $T^{1d}_{pure}$, where the subscript ``pure" emphasizes the absence of a Wilson loop defect for now. The Higgs branch of this quiver theory is the moduli space of $(k^{(1)},k^{(2)},\ldots,k^{(n)})$ vortices of $G^{3d}$, where $k^{(a)}$ is a positive integer denoting the rank of the $a$-th gauge group in the quantum mechanics. Concretely, the gauge group of $T^{1d}_{pure}$ is \beq\label{gaugegroup1d} \widehat{G}=\prod_{a=1}^n U(k^{(a)})\; . \eeq For a 3d gauge group $U(N^{(a)})$ with field strength $F^{(a)}$, each 1d rank above is identified as the nontrivial first Chern class $k^{(a)}=\frac{-1}{2\pi}\int \text{Tr} F^{(a)}$, where the integral is taken over the $\mathbb{C}$-line transverse to the vortex.
There are chiral multiplets in the bifundamental representation $\oplus_{b>a}\, \Delta^{ab}\,(k^{(a)}, \overline{k^{(b)}})$ and in the bifundamental representation $\oplus_{b>a}\, \Delta^{ab}\,(\overline{k^{(a)}}, k^{(b)})$ of $\prod_{a,b} \left(U(k^{(a)})\times U(k^{(b)})\right)$, where $\Delta^{ab}$ again denotes the incidence matrix of $\fg$. Additionally, there is fundamental and antifundamental chiral matter, which manifests itself as additional ``teeth" in the 1d quiver. The precise determination of such matter requires specifying the gauge group $G=\prod_{a=1}^n U(N^{(a)})$ and flavor group $G_F=\prod_{a=1}^n U(N_F^{(a)})$ of the 3d bulk theory. We denote the resulting 1d flavor symmetry by $\widehat{G}_F$. When the rank of $G_F$ is large enough, fully Higgsing the 3d quiver theory is always possible, for any $ADE$ Lie algebra. The resulting 1d theory is then a generic handsaw quiver theory \cite{Nakajima:2011yq}, with 1d chiral matter on all $n$ nodes. Namely, on the $a$-th node, there are $P^{(a)}$ chirals in the representation $(k^{(a)},\overline{P^{(a)}})$ of $U(k^{(a)})\times U(P^{(a)})$, and $Q^{(a)}$ chirals in the representation $(\overline{k^{(a)}},Q^{(a)})$ of $U(k^{(a)})\times U(Q^{(a)})$. As we reviewed in the previous section, the R-symmetry group of $T^{1d}_{pure}$ is $SU(2)_C\times U(1)_H$, and the R-charge assignment of the various fields is constrained by the superpotential, readable from the ``closed loops" in the quiver diagram. The 3d FI parameter $\zeta_{3d}^{(a)}$ of the gauge group $U(N^{(a)})$ sets the BPS tension of the vortex on node $a$. It is related to the 1d gauge coupling $\xi_{1d}^{(a)}$ of the gauge group $U(k^{(a)})$, according to $\xi_{1d}^{(a)}\propto 1/(\zeta_{3d}^{(a)})^2$. Meanwhile, the 3d gauge coupling $\xi_{3d}^{(a)}$ of the gauge group $U(N^{(a)})$ is related to the 1d FI parameter $\zeta_{1d}^{(a)}$ of the gauge group $U(k^{(a)})$, according to $\zeta_{1d}^{(a)}\propto 1/(\xi_{3d}^{(a)})^2$.\\ \begin{figure}[h!] \emph{} \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{flag} \vspace{-12pt} \caption{Example of the $G^{3d}$ theory $T_\rho[SU(N^{(n+1)})]$, and its vortex quantum mechanics $T^{1d}_{pure}$. Note we use a 3d $\cN=4$ notation for the quiver on top, and a 1d $\cN=4$ notation for the quiver on the bottom.} \label{fig:flag} \end{figure} Let us pause and look at an example, the $\fg=A_n$ case, showcased in Figure \ref{fig:flag}. The particular quiver theory $G^{3d}$ on top is sometimes called $T_\rho[SU(N^{(n+1)})]$, labeled by a partition $\rho=[N^{(1)}, N^{(2)}-N^{(1)}, \ldots ,N^{(n+1)}-N^{(n)}]$ \cite{Gaiotto:2008ak}. The $n$ circles label the gauge group $G=\prod_{a=1}^n U(N^{(a)})$, the box on the right labels a flavor symmetry group $G_F=U(N_F^{(n)})\equiv U(N^{(n+1)})$. An arrow between two circles labels an $\cN=4$ bifundamental hypermultiplet, while the arrow between the $n$-th circle and the box labels $N^{(n+1)}$ hypermultiplets in the fundamental representation of $U(N^{(n)})$. The corresponding 1d $\cN=4$ $(k^{(1)},k^{(2)},\ldots,k^{(n)})$ vortex world-line theory $T^{1d}_{pure}$ is shown on the bottom.
The circles label the gauge group $\widehat{G}=\prod_{a=1}^n U(k^{(a)})$, the looping arrows label adjoint chiral multiplets, and the straight arrows label fundamental/antifundamental chiral multiplets, which make up the flavor symmetry $\widehat{G}_F=\prod_{a=1}^{n+1} U(N^{(a)}-N^{(a-1)})$ (with the convention $N^{(0)}\equiv 0$). Specifically, in our previous notation, the number of fundamental chirals at node $a$ is $P^{(a)} = N^{(a)} - N^{(a-1)}$, while the number of antifundamental chirals at node $a$ is $Q^{(a)} = N^{(a+1)} - N^{(a)}$. There are two types of cubic contributions to the superpotential: the first type of terms is due to the bifundamental/adjoint chiral multiplets, while the second type is due to the bifundamental/fundamental/antifundamental chirals, meaning the flavor teeth. These superpotential terms can simply be read off the various triplets of arrows making closed loops in the quiver diagram. Mathematically, the theory $T^{1d}_{pure}$ in this example is known as a handsaw quiver, whose Higgs branch is isomorphic to a parabolic Laumon space. This is the moduli space of (based) quasi-maps from $\mathbb{P}^1$ into the flag variety \cite{2010arXiv1009.0676F,Nakajima:2011yq,Aganagic:2014oia}.\\ We now come to the new physics, and consider the vortex quantum mechanics when a 1/2-BPS Wilson loop wraps $S^1(\widehat{R})$ in $G^{3d}$; we call the resulting theory $T^{1d}$. Introducing such a Wilson loop for the 3d gauge group $U(N^{(a)})$ ($a\in\{1,\ldots, n\}$) can be done with the use of a new (nondynamical) defect group $U(L^{(a)})$ for a one-dimensional complex fermion field $\chi^{(a)}$ localized on $S^1(\widehat{R})$, transforming in the representation $(N^{(a)},\overline{L^{(a)}})$ of $U(N^{(a)})\times U(L^{(a)})$. We thereby refer to the defect group for the entire quiver as: \beq \widehat{G}_{defect}=\prod_{a=1}^n U(L^{(a)}) \;. \eeq The fermions make up the dimensional reduction of a 2d $\cN=(0,4)$ Fermi multiplet to 1d, coupled to the 3d fields in the bulk as \cite{Gomis:2006sb,Assel:2015oxa}: \beq \label{1dfermion3d} S^{3d/1d}=\int dt\; {\chi_{i,\rho}^{(a)}}^\dagger\, \left( \delta_{\rho\sigma}(\delta_{ij}\,i\, \partial_t + A^{3d,(a)}_{t, ij} + \Phi^{3d,(a)}_{ij} ) - \delta_{ij}\,\widetilde{A}_{t,\rho \sigma}^{(a)} \right)\, \chi_{j,\sigma}^{(a)} \; . \eeq Above, $A^{3d,(a)}_t$ and $\Phi^{3d,(a)}$ are the pullback of the 3d gauge field and the adjoint scalar of the $U(N^{(a)})$-vector multiplet, respectively. $\widetilde{A}^{(a)}_{t}$ is the background $U(L^{(a)})$ gauge field the 1d fermions couple to. $i$ and $j$ are indices for the fundamental representation of $U(N^{(a)})$, while $\rho$ and $\sigma$ are indices for the fundamental representation of $U(L^{(a)})$. The variable $t$ is periodic, with period $\widehat{R}/(2\pi)$. In the rest of this paper, an important role will be played by the eigenvalues $\{M^{(a)}_\rho\}$ of the background gauge field $\widetilde{A}^{(a)}_{t}$, which are (large) masses for the fermions and set the energy scale for their excitation. One can integrate out the fermions exactly, in which case the path integral organizes itself as a generating function of Wilson loops in the $L^{(a)}$-fold tensor product of the fundamental representation of $SU(N^{(a)})$, with the exponentiated defect fermion masses $\{M^{(a)}_\rho\}$ as expansion parameters \cite{Gomis:2006sb}.
For example, if the defect group is $U(L^{(a)})=U(1)$, the path integral is a series in the parameter $\text{exp}(\widehat{R}\, M^{(a)})$ comprised of $N^{(a)}+1$ terms, where each coefficient is a Wilson loop vev valued in one of the fundamental representations of $SU(N^{(a)})$ (including the trivial one). An important point is that because we study $G^{3d}$ on its Higgs branch, the 3d theory is massive, the gauge group $G$ is broken, and the Coulomb moduli are frozen to flavor masses. It follows that the Wilson loop we consider technically becomes a collection of ``flavor" loops at that locus of the moduli space, determined by the Higgsing pattern. After turning on the FI parameters, the Wilson loop fermions make up the degrees of freedom of 1d $\cN=2$ Fermi multiplets.\\ What are the details of the quantum mechanics $T^{1d}$? First recall from the last section that only two supercharges are preserved by the vortex quantum mechanics after adding in the Wilson loop. In particular, the $\cN=4$ multiplets that previously defined $T^{1d}_{pure}$ are still present, but should now be understood as a collection of $\cN=2$ vector, chiral, and Fermi multiplets in $T^{1d}$. Moreover, the $SU(2)_C\subset SU(2)_C\times U(1)_H$ part of the $T^{1d}_{pure}$ R-symmetry is now broken to $U(1)_C$. From the paragraph above, it is furthermore clear that the Wilson loop contributes extra $\cN=2$ Fermi multiplets, coupling the defect fermions to the various flavors. The interactions are encoded in the superpotential, now written in terms of holomorphic functions called E- and J-terms for the Fermi multiplets, which constrain the R-charges of the various fields. More nontrivially, we claim there are additional multiplets present due to the coupling of the Wilson loop to the vortex. These are $\cN=(0,4)$ twisted hypermultiplets and Fermi multiplets in the bifundamental representation of $U(k^{(a)})\times U(L^{(a)})$; see Figure \ref{fig:flagdefect} for an illustration in our previous $T_\rho[SU(N^{(n+1)})]$ example. Those multiplets decompose into $\cN=2$ chiral and Fermi multiplets in $T^{1d}$. Heuristically, the existence of these extra multiplets can be justified as follows: note that on the manifold $\mathbb{C}\times S^1(\widehat{R})$, the Wilson loop is the world-line of an electrically charged quark, while the vortex is a magnetically charged particle. Then, when a vortex moves in the presence of a quark, it experiences a Lorentz force\footnote{An analogous Lorentz force was identified in a five-dimensional context, where the Wilson loop quark is moving instead in a nontrivial instanton background \cite{Tong:2014cha}.}, and correspondingly the Higgs branch of $T^{1d}$ should be understood as describing a generalized vortex moduli space. We leave the precise mathematical characterization of this modified moduli space to future work\footnote{This program was recently carried out successfully in instanton physics in five dimensions \cite{Nekrasov:2015wsu,Kim:2016qqs,Chang:2016iji,Haouzi:2019jzk,Haouzi:2020yxy,Haouzi:2020zls}: there, the problem of counting instantons in the presence of a Wilson loop is not solved by localizing the loop on the usual ADHM solutions \cite{Atiyah:1978ri}, but by defining instead a more general ``crossed instanton" moduli space from the outset. The fact that a similar problem arises in our context is not too surprising, since vortices are ultimately related to instantons on a codimension-2 locus.}.
The major takeaway is that it is not enough to simply localize the Wilson loop \eqref{Wilsonloop} at the solutions of the usual BPS vortex equations in the absence of the loop. This would just result in the Fermi multiplet contributions \eqref{1dfermion3d}. Such contributions should be understood as ``classical," or topologically trivial, in the sense that they exist already at zero vortex charge $k=0$, but they will undergo an infinite number of corrections due to the other sectors $k>0$. These corrections are represented by the double green line in Figure \ref{fig:flagdefect}, and they will be crucial in making the non-perturbative symmetries of $G^{3d}$ manifest. \begin{figure}[h!] \emph{} \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{flagdefect} \vspace{-12pt} \caption{On the top, a Wilson loop defect is placed in the 3d theory $T_\rho[SU(N^{(n+1)})]$, transforming in the $L^{(2)}$-fold tensor product of the fundamental representation of $SU(N^{(2)})$. We denote the loop by a green cross. On the bottom, the vortex quantum mechanics $T^{1d}$ is displayed. Black links and arrows denote 1d $\cN=4$ multiplets obtained by reduction of 2d $\cN=(2,2)$ supersymmetry, as before. Meanwhile, the double green link labels both a 1d $\cN=4$ twisted hypermultiplet and a Fermi multiplet, obtained by reduction of 2d $\cN=(0,4)$ supersymmetry. The green dashed arrows represent 1d $\cN=2$ Fermi multiplets, as reduced from 2d $\cN=(0,2)$; these are the topologically trivial contributions of flavor Wilson loops. Ultimately, all multiplets in the picture should be decomposed into appropriate 1d $\cN=2$ vector, chiral and Fermi multiplets, as this is the supersymmetry of $T^{1d}$. We refrained from doing so in the bottom quiver so as not to clutter the figure.} \label{fig:flagdefect} \end{figure} For now, we justify the existence of these extra multiplets inside $T^{1d}$ a posteriori, by showing that they correctly (and uniquely) account for the symmetries of $G^{3d}$ under a shift of vortex number; in other words, they describe the physics of a non-perturbative Schwinger-Dyson identity. We will give a direct proof of the existence of these multiplets in Section \ref{sec:littlestringsecion}, where we analyze an underlying string picture. Having described the quantum mechanics $T^{1d}$, we turn to the definition of its Witten index. As we will soon see, this index has remarkable properties in our context. \subsection{The Index of the Quantum Mechanics} \label{ssec:QMindex} Recall that the gauge theory $G^{3d}$ is defined on $\mathbb{C}\times S^1(\widehat{R})$. Let us denote by $U(1)_{\omega}$ the symmetry associated with rotating the $\mathbb{C}$-line. Then, the global symmetry of the vortex theory $T^{1d}$ is $(\widehat{G}_F/U(1)^n ) \times \widehat{G}_{defect}\times U(1)_C \times U(1)_H \times U(1)_{\omega}$, with $\widehat{G}_F$ the flavor symmetry of the chiral matter producing the teeth of the handsaw quiver, and all other groups as introduced previously. The diagonal combination of $U(1)_H \times U(1)_{\omega}$ commutes with the supersymmetry and acts as a flavor symmetry; we call it $U(1)_{\epsilon_1}$, with generator $J_-$. We further define $r$ as the generator of $U(1)_H$, and $J_3$ as the generator of $U(1)_C$.
Then, the refined Witten index of the $\cN=2$ gauged quantum mechanics $T^{1d}$ has the form \cite{Hori:2014tda,Cordova:2014oxa,Hwang:2014uwa}: \begin{align} \label{index} \left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d} =\text{Tr}\bigg[(-1)^F\, &e^{-\widehat{R}\{Q,\overline{Q}\}}\, e^{\widehat{R}\,\epsilon_2 (2 J_3 - r)}\,e^{2\widehat{R}\,\epsilon_1 J_-}\\ &\times\prod_{a=1}^n e^{\widehat{R}\, \zeta_{3d}^{(a)}\, k^{(a)}}\,e^{\widehat{R}\sum_{d} m^{(a)}_d\Pi^{(a)}_d}\,e^{\widehat{R}\sum_{\rho} M^{(a)}_{\rho}\Lambda^{(a)}_{\rho}}\,\bigg] \; .\nonumber \end{align} This index has a path integral interpretation as a twisted partition function on $S^1(\widehat{R})$. The trace is over the Hilbert space of the theory, and the index counts states in $Q$-cohomology, where we have redefined the supercharges as $Q\equiv Q^1_{1,1}$ and $\overline{Q}\equiv Q^2_{2,2}$ in the notation of Section \ref{ssec:review}. $F$ is the fermion number. $\{\Pi^{(a)}_d\}$ and $\{\Lambda^{(a)}_{\rho}\}$ are Cartan generators of the flavor group $\widehat{G}_F$ and the Wilson line defect group $\widehat{G}_{defect}$, respectively. We have also defined conjugate variables for these generators: the fundamental/antifundamental chiral multiplet masses $\{m^{(a)}_d\}$, and the Wilson loop fermion masses $\{M^{(a)}_{\rho}\}$. Furthermore, the integer $k^{(a)}= \frac{-1}{2\pi}\int \text{Tr} F^{(a)}$ is the topological $U(1)$ charge for the $a$-th gauge group, conjugate to the vortex counting fugacity $\zeta_{3d}^{(a)}$, which as we reviewed is the 3d FI parameter\footnote{Recall that throughout our analysis, we set the complex FI parameter to zero. $\zeta_{3d}^{(a)}$ is the real FI parameter, but because the 3d theory is compactified on $S^1(\widehat{R})$, the parameter is in fact complexified by the holonomy of the corresponding background gauge field around the circle.}. Finally, we have introduced the variables $\epsilon_1$ and $\epsilon_2$, respectively conjugate to $J_-$ and $2 J_3 - r$. The fugacity $e^{\widehat{R}\, \E_1}$ is well-known in the context of the 3d gauge theory on $\mathbb{C}\times S^1(\widehat{R})$, where it implements the $\Omega$-background \cite{Nekrasov:2002qd,Nekrasov:2003rj,Losev:2003py,Nekrasov:2010ka}. We will analyze it in detail when discussing the 3d gauge theory perspective. In the rest of this paper, the following redefined parameters will come in handy: \beq \label{epsilons} \epsilon_+\equiv \frac{\epsilon_1+\epsilon_2}{2}\; , ~~~~~ \epsilon_-\equiv \frac{\epsilon_1-\epsilon_2}{2}\; . \eeq The index is the grand canonical ensemble of vortex BPS states. The natural grading by the integers $k^{(a)}$ means that the index can be organized as a sum over vortex sectors $(k^{(1)}, k^{(2)}, \ldots, k^{(n)})$.\\ By standard arguments \cite{Witten:1982df,AlvarezGaume:1986nm}, the Witten index does not depend on the circle scale $\widehat{R}$. In particular, we can work in the limit $\widehat{R}\rightarrow 0$, where it reduces to Gaussian integrals around saddle points. These saddle points are parameterized by $\phi^{(a)}=\widehat{R}\, \varphi^{(a)}_{1d}+ i\, \widehat{R}\, A^{(a)}_{t, 1d}$, with $A^{(a)}_{t, 1d}$ the gauge field and $\varphi^{(a)}_{1d}$ the scalar in the $a$-th vector multiplet of the quantum mechanics. The (complexified) eigenvalues of $\phi^{(a)}$ are denoted as $\phi^{(a)}_1, \ldots, \phi^{(a)}_{k^{(a)}}$.
Performing the Gaussian integrals over massive fluctuations, the index reduces to a zero mode integral of various 1-loop determinants, which we write schematically as: {\allowdisplaybreaks \begin{align} \label{vortexintegral} &\left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d} =\sum_{k^{(1)},\ldots, k^{(n)}=0}^{\infty}\;\prod_{a=1}^{n}\frac{e^{\widehat{R}\,\zeta_{3d}^{(a)}\, k^{(a)}}}{k^{(a)}!} \prod_{\rho=1}^{L^{(a)}} Z^{(a)}_{defect, \varnothing}\\ &\qquad\qquad\qquad\qquad\;\;\times \oint \left[\frac{d\phi^{{(a)}}_I}{2\pi i}\right]Z^{(a)}_{pure, vec}\cdot Z^{(a)}_{pure, adj}\cdot Z^{(a)}_{pure, teeth}\cdot \prod_{b>a}^{n} Z^{(a,b)}_{pure, bif}\cdot Z^{(a)}_{defect, k}\; ,\nonumber \\ &Z^{(a)}_{pure, vec} = \frac{\prod_{\substack{I\neq J\\ I,J=1}}^{k^{(a)}}\sh\left(\phi^{(a)}_I-\phi^{(a)}_J\right)}{\prod_{I, J=1}^{k^{(a)}} \sh\left(\phi^{(a)}_{I}-\phi^{(a)}_{J}+\E_2 \right)}\nonumber\\ &Z^{(a)}_{pure, adj} = \prod_{I, J=1}^{k^{(a)}}\frac{\sh\left(\phi^{(a)}_I-\phi^{(a)}_J+\E_1+\E_2\right)}{\sh\left(\phi^{(a)}_{I}-\phi^{(a)}_{J}+\E_1 \right)}\nonumber\\ &Z^{(a)}_{pure, teeth} = \prod_{I=1}^{k^{(a)}}\prod_{i=1}^{P^{(a)}}\frac{\sh\left(\phi^{(a)}_I-\mu^{(a)}_i+(\E_1-\E_2)/2+\E_2\right)}{\sh\left(\phi^{(a)}_{I}-\mu^{(a)}_i+(\E_1-\E_2)/2\right)}\prod_{j=1}^{Q^{(a)}}\frac{\sh\left(-\phi^{(a)}_I+\widetilde{\mu}^{(a)}_j+(\E_1+\E_2)/2+\E_2\right)}{\sh\left(-\phi^{(a)}_{I}+ \widetilde{\mu}^{(a)}_j+(\E_1+\E_2)/2 \right)}\nonumber\\ &Z^{(a, b)}_{pure, bif} =\left[\prod_{I=1}^{k^{(a)}}\prod_{J=1}^{k^{(b)}} \frac{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} + \E_2 \right)}{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} \right)} \frac{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 \right)}{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 - \E_2 \right)}\right]^{\Delta^{ab}}\nonumber\\ &Z^{(a)}_{defect, \varnothing} = \prod_{\substack{b \\ U(N^{(a)})\rightarrow \prod_b U(P^{(b)})}}\prod_{i=1}^{P^{(b)}} \sh\left(\mu^{{(b)}}_i-M^{{(a)}}_\rho +\E_2 -\#^{(b)}_i\, (\E_1+\E_2)/2\right)\nonumber\\ &Z^{(a)}_{defect, k} = \prod_{I=1}^{k^{(a)}} \frac{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- (\E_1 - \E_2)/2 \right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho- (\E_1 - \E_2)/2\right)}{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- (\E_1 + \E_2)/2\right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho - (\E_1 + \E_2)/2 \right)}\, .\nonumber \end{align}} Some comments are in order:\\ We use the convenient notation $\sh(x)\equiv 2 \sinh(\widehat{R}\,x/2)$. $\mathcal{M}_k$ is the set of poles enclosed by the contours, which we will characterize below. The prefactor $\prod_{a=1}^{n} 1/k^{(a)}!$ is the Weyl group order of the 1d gauge group $\widehat{G}=\prod_{a=1}^n U(k^{(a)})$. The factor $Z^{(a)}_{pure, vec}(\{\phi^{(a)}_I\}, \epsilon_2)$ is the contribution of the (reduction from 2d $\cN=(2,2)$ to) 1d $\cN=4$ vector multiplet on node $a$, decomposed into $\cN=2$ Fermi and chiral multiplets. The factor $Z^{(a)}_{pure, adj}(\{\phi^{(a)}_I\}, \epsilon_1, \epsilon_2)$ is the contribution of the (reduction from 2d $\cN=(2,2)$ to) 1d $\cN=4$ adjoint chiral multiplet on node $a$, decomposed into $\cN=2$ Fermi and chiral multiplets. The factor $Z^{(a)}_{pure, teeth}(\{\phi^{(a)}_I\}, \{\mu^{(a)}_i\}, \{\widetilde{\mu}^{(a)}_i\}, \epsilon_1, \epsilon_2)$ is the contribution of the (reduction from 2d $\cN=(2,2)$ to) 1d $\cN=4$ flavors on node $a$. 
Note that the numbers $P^{(a)}$ of fundamental chirals (with corresponding masses $\{\mu^{(a)}_i\}$) and $Q^{(a)}$ of antifundamental chirals (with corresponding masses $\{\widetilde{\mu}^{(a)}_i\}$) are fully determined in terms of the ranks of the 3d gauge group $G$, the 3d flavor group $G_F$, and the choice of the 3d vacuum\footnote{This is the data that determines the Higgsing of the 3d theory, which is to say the fundamental mass each Coulomb modulus is frozen to. In our 1d notation, the masses $\{\mu^{(a)}_i\}$ and $\{\widetilde{\mu}^{(a)}_i\}$ should eventually be expressed in terms of the 3d masses $\{m^{(a)}_d\}$.}. For instance, consider the 3d theory $T_\rho[SU(N^{(n+1)})]$, where the fundamental matter ranks are $N_F^{(a)}=0$ for $a=1,\ldots,n-1$, and $N^{(n)}_F=N^{(n+1)}$ on the last node, with corresponding masses $\{m^{(n)}_d\}_{d=1,\ldots,N^{(n+1)}}$. Then, in the quantum mechanics, $P^{(a)} = N^{(a)} - N^{(a-1)}$, while $Q^{(a)} = N^{(a+1)} - N^{(a)}$, and the matter factor becomes: \beq Z^{(a)}_{pure, teeth} = \prod_{I=1}^{k^{(a)}}\prod_{i=N^{(a-1)}+1}^{N^{(a)}}\frac{\sh\left(\phi^{(a)}_I-m^{(n)}_i+\E_-+\E_2\right)}{\sh\left(\phi^{(a)}_{I}-m^{(n)}_i+\E_-\right)}\prod_{j=N^{(a)}+1}^{N^{(a+1)}}\frac{\sh\left(-\phi^{(a)}_I+m^{(n)}_j+\E_+ +\E_2\right)}{\sh\left(-\phi^{(a)}_{I}+ m^{(n)}_j+\E_+ \right)} \; . \eeq In particular, the chiral multiplet masses $\{\mu^{(a)}_i\}$ and $\{\widetilde{\mu}^{(a)}_i\}$ are now written exclusively in terms of the $N^{(n+1)}$ 3d masses $\{m^{(n)}_i\}$, as they should be. The generalization to $D_n$ and $E_n$ algebras is straightforward, even though the Higgsing pattern is more intricate to write down explicitly \cite{Aganagic:2015cta}. The factor $Z^{(a, b)}_{pure, bif}(\{\phi^{(a)}_I\}, \{\phi^{(b)}_I\}, \epsilon_1, \epsilon_2)$ is the contribution of the (reduction from 2d $\cN=(2,2)$ to) 1d $\cN=4$ bifundamental matter between nodes $a$ and $b$. It is only nontrivial when the incidence matrix $\Delta^{a b}$ is as well. Recall that the matrix $\Delta^{a b}$ equals 1 if there is a link connecting nodes $a$ and $b$, and equals 0 otherwise.\\ The above factors account for all the multiplets present in the vortex quantum mechanics $T^{1d}_{pure}$ of a 3d $\cN=4$ gauge theory in the absence of a Wilson loop. Now, recall that the loop is characterized by the defect group $\widehat{G}_{defect}=\prod_{a=1}^n U(L^{(a)})$ for additional 1d fermions. The superscript notation we use for the index, $\left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}$, makes the dependence on this defect group explicit. Those fermions are responsible for two universal contributions to the index. First, the factor $Z^{(a)}_{defect, \varnothing}(\{M^{(a)}_\rho\}, \{\mu^{(a)}_i\}, \epsilon_1, \epsilon_2)$ is the contribution of (the reduction from 2d $\cN=(0,2)$ to) 1d $\cN=2$ Fermi multiplets. We previously called this factor ``classical," in the sense that it exists even in the zero-vortex sector. Hence, it sits outside of the 1-loop determinant integrals and is denoted by the subscript $\varnothing$. The symbols $\#^{(b)}_i$ stand for positive integers, which are uniquely fixed by R-symmetry once the 3d Higgs vacuum is specified. As is the case for the factor $Z^{(a)}_{pure, teeth}$, each mass $\mu^{(b)}_i$ is equal to one of the 3d masses $\{m^{(b)}_d\}$, determined by the choice of the vacuum.
The product is over all $b$ such that $U(N^{(a)})\rightarrow \prod_b U(P^{(b)})$; this indicates the breaking of the gauge Wilson loop into a product of flavor Wilson loops on the Higgs branch. Second, there is the interaction between the defect fermions and the vortices: we denote it as the factor $Z^{(a)}_{defect, k}(\{\phi^{(a)}_I\}, \{M^{(a)}_\rho\}, \epsilon_1, \epsilon_2)$, which is the contribution of (the reduction from 2d $\cN=(0,4)$ to) 1d $\cN=4$ twisted hypermultiplets and Fermi multiplets, decomposed into $\cN=2$ chiral and Fermi multiplets.\\ Because the theory is defined on a circle of radius $\widehat{R}$, it is useful in what follows to introduce K-theoretic fugacities for each of the equivariant parameters: \begin{align}\label{fugacitiesKtheory} &\widetilde{\fq}^{(a)}\equiv e^{\widehat{R}\,\zeta_{3d}^{(a)}},\\ & q \equiv e^{\widehat{R}\,\epsilon_1},\qquad t\equiv e^{-\widehat{R}\,\epsilon_2},\qquad v\equiv e^{\widehat{R}\,\epsilon_+}=\sqrt{q/t},\qquad u\equiv e^{\widehat{R}\,\epsilon_-}=\sqrt{q\, t},\nonumber\\ &f^{(a)}_d \equiv e^{-\widehat{R}\,\mu^{{(a)}}_d},\qquad \widetilde{f}^{(a)}_d \equiv e^{-\widehat{R}\,\widetilde{\mu}^{{(a)}}_d},\qquad z^{(a)}_\rho=e^{-\widehat{R}\,M^{(a)}_\rho}.\nonumber \end{align} Crucially, the Witten index also depends implicitly on additional continuous parameters in a piecewise constant manner: the $n$ FI parameters $\zeta_{1d}^{(a)}$, which are themselves $k^{(a)}$-vectors, one for each abelian factor in $\widehat{G}$. Indeed, when such a parameter changes sign and crosses the value $\zeta_{1d}^{(a)}=0$, a non-compact Coulomb branch opens up, and some vacua may appear or disappear, resulting in wall crossing and a jump in the index. This dependence on the 1d FI parameters is in one-to-one correspondence with the choice of the index integration contours, to which we now turn. We adopt the so-called Jeffrey-Kirwan (JK) residue prescription \cite{Jeffrey:1993}. It was first popularized in a related two-dimensional setup \cite{Benini:2013xpa}, and was used in the quantum mechanical context in \cite{Hwang:2014uwa, Cordova:2014oxa, Hori:2014tda}. Let us briefly review its main features. First note that each $Z^{(a)}$-factor in the integrand has the following general form: \beq\label{integRand} \frac{\prod_{i=1}^{n_1}\sh(\vec\rho_i\cdot\vec\phi+\ldots)}{\prod_{j=1}^{n_2}\sh(\vec\rho_j\cdot\vec\phi+\ldots)}\; , \eeq where each $\vec\rho$ is a vector with $k$ entries, with $k=\sum_{a=1}^n k^{(a)}$. The entries of $\vec\rho$ are all in the set $\{0,\pm1\}$, and $n_1$ and $n_2$ are positive integers specified by the details of the vortex quantum mechanics. The dots ``$\ldots$" stand for a linear function of the spacetime parameters $\epsilon_1$, $\epsilon_2$, as well as all the other 1d flavor fugacities. Since $\sinh(0)=\sinh(i\pi)=0$, there can be many poles in \eqref{integRand}. We denote a pole locus as $\vec\phi=\vec\phi_*$. Now, we assemble the $n$ FI parameters $\zeta^{(a)}_{1d}$ into a vector $\zeta_{1d}$ of size $k=\sum_{a=1}^n k^{(a)}$. As we pointed out, the Witten index depends on the choice of a chamber for $\zeta_{1d}$. Apart from the FI parameter vector $\zeta_{1d}$, the JK prescription instructs us to define yet another auxiliary $k$-vector $\eta$, though the index ultimately does not depend on $\eta$. We are a priori free to work with any $k$-vector $\eta$ we want to carry out the JK residue prescription, but there exists a particularly convenient choice: $\eta=\zeta_{1d}$.
Indeed, on general grounds, the index integral \eqref{vortexintegral} can have $\phi$-poles at $\pm \infty$ with nonzero residues; one can show that the choice $\eta=\zeta_{1d}$, meaning $\eta$ generic but chosen in the same chamber as $\zeta_{1d}$, guarantees that the contributions of $\phi$-residues at $\pm \infty$ vanish. Unless specified otherwise, in this paper we work in a chamber where all components of $\zeta_{1d}$ are positive. We will work in different chambers when discussing 3d Seiberg duality later on. Having defined $\eta$, we are to choose $k$ hyperplanes from the arguments of the $\sinh$ functions in the denominator of \eqref{integRand}. Those hyperplanes take the following form: \beq \label{linearsystem} \vec\rho_j \cdot \vec\phi + \ldots =0\;,\;\;\;\text{where }j=1,\ldots,k. \eeq The contours of the index are then chosen to enclose poles which are solutions of this linear system of equations, but only if the vector $\eta$ also happens to lie in the cone spanned by the vectors $\vec\rho_j$. A practical way to test this condition is to construct a $k\times k$ matrix ${\bf{Q}}$ with entries $Q_{ji}=(\rho_j)_i$, where $\vec\rho_j=((\rho_j)_1,\ldots,(\rho_j)_k)$, and test if all the components of $\eta\, {\bf{Q}}^{-1}$ are positive. We collect the poles $\vec\phi_*$ satisfying the condition in a set $\mathcal{M}_k$. Summing over all the poles in $\mathcal{M}_k$, the Witten index takes the form \beq\label{JK} \left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}=\sum_{k^{(1)},\ldots, k^{(n)}=0}^{\infty}\prod_{a=1}^n\frac{(\widetilde{\fq}^{(a)}){}^{k^{(a)}}}{k^{(a)}!}\sum_{\vec\phi_*}\text{JK-res}_{\vec\phi_*}({\bf{Q}}_*, \eta)\, Z_{integrand} \; , \eeq where $Z_{integrand}$ is the integrand of \eqref{vortexintegral}, and the JK-residue is defined as \beq\label{JKrule} \text{JK-res}_{\vec\phi_*}({\bf{Q}}_*, \eta)\frac{d^k \vec\phi}{\prod_{j=1}^k(\vec\rho_j\cdot\vec\phi)}=\begin{cases} \frac{1}{\left|\text{det}\left({\bf{Q}}\right)\right|} \;\; \text{if}\;\; \eta\in\text{cone}\left({\bf{Q}}\right)\\ 0 \qquad\qquad\qquad\, \text{otherwise} \end{cases} \eeq The condition $\eta\in\text{cone}\left({\bf{Q}}\right)$ means that the vector $\eta$ should lie in the cone spanned by the rows of the matrix ${\bf{Q}}$. It can happen that a solution of the system of equations \eqref{linearsystem} yields additional zeroes in the denominator of \eqref{integRand}. This typically results in degenerate poles, which can be dealt with using a constructive definition of the JK residue and the so-called flag method \cite{2004InMat.158..453S,Benini:2013xpa}. This is an involved procedure to implement analytically, and we will refrain from doing so in this paper, treating potential degenerate poles on a case-by-case basis instead. We now come to our main object of study, the derivation of non-perturbative Schwinger-Dyson equations for the gauge theory $G^{3d}$. As we now show, they arise as a regularity condition of the quantum mechanics index $\left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}$ in the defect fermion masses $\{M^{(a)}_{\rho}\}$. \subsection{The Index is a Vortex $qq$-character} \label{ssec:1dqqcharacter} We evaluate the index, using the JK residue prescription above to define the contours. As a warmup, let us practice with the index of $T^{1d}_{pure}$, which is the vortex quantum mechanics of the ``pure" 3d $\cN=4$ theory, in the absence of Wilson loop defects.
We call the corresponding index: \begin{align}\label{partpure} &\left[\chi\right]^{(0,\ldots, 0)}_{1d} =\sum_{k^{(1)},\ldots, k^{(n)}=0}^{\infty}\;\prod_{a=1}^{n}\frac{(\widetilde{\fq}^{(a)}){}^{k^{(a)}}}{k^{(a)}!}\\ &\qquad\qquad\qquad\qquad\;\;\times \oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi^{{(a)}}_I}{2\pi i}\right]Z^{(a)}_{pure, vec}\cdot Z^{(a)}_{pure, adj}\cdot Z^{(a)}_{pure, teeth}\cdot \prod_{b>a}^{n} Z^{(a,b)}_{pure, bif}\; . \nonumber \end{align} Working in the $\zeta_{1d}>0$ chamber, the poles that end up contributing to the $T^{1d}_{pure}$ index make up the set $\mathcal{M}^{pure}_k$. The elements of this set satisfy: \begin{align} &\phi^{(a)}_I = \phi^{(a)}_J - \E_1 \; , \label{purepole1}\\ &\phi^{(a)}_I = \phi^{(a)}_J - \E_2 \; , \label{purepole2}\\ &\phi^{(a)}_I = \mu^{(a)}_i - \E_- \; ,\;\; \text{for some $i\in\{1,\ldots, P^{(a)}\}$}\; ,\label{purepole3}\\ &\phi^{(b)}_J=\phi^{(a)}_I \; ,\;\; \text{if there is a link between nodes $a$ and $b>a$}\; , \label{purepole4}\\ &\phi^{(b)}_J=\phi^{(a)}_I + 2\epsilon_+ \; ,\;\; \text{if there is a link between nodes $a$ and $b<a$}\; .\label{purepole5} \end{align} The poles \eqref{purepole1} arise from the $\cN=4$ adjoint chiral factor, \beq Z^{(a)}_{pure, adj} = \prod_{I, J=1}^{k^{(a)}}\frac{\sh\left(\phi^{(a)}_I-\phi^{(a)}_J+\E_1+\E_2\right)}{\sh\left(\phi^{(a)}_{I}-\phi^{(a)}_{J}+\E_1 \right)} \; . \eeq The poles \eqref{purepole2} arise from the $\cN=4$ vector multiplet, \beq Z^{(a)}_{pure, vec} = \frac{\prod_{\substack{I\neq J\\ I,J=1}}^{k^{(a)}}\sh\left(\phi^{(a)}_I-\phi^{(a)}_J\right)}{\prod_{I, J=1}^{k^{(a)}} \sh\left(\phi^{(a)}_{I}-\phi^{(a)}_{J}+\E_2 \right)} \; . \eeq The poles \eqref{purepole3} arise from the $\cN=4$ flavor factor, \beq \label{teethagain} Z^{(a)}_{pure, teeth} = \prod_{I=1}^{k^{(a)}}\prod_{i=1}^{P^{(a)}}\frac{\sh\left(\phi^{(a)}_I-\mu^{(a)}_i+\E_-+\E_2\right)}{\sh\left(\phi^{(a)}_{I}-\mu^{(a)}_i+\E_- \right)}\prod_{j=1}^{Q^{(a)}}\frac{\sh\left(-\phi^{(a)}_I+\widetilde{\mu}^{(a)}_j+\E_+ +\E_2\right)}{\sh\left(-\phi^{(a)}_{I}+ \widetilde{\mu}^{(a)}_j+\E_+ \right)} \; . \eeq Specifically, the JK contours enclose poles coming from the fundamental chirals only (the $P^{(a)}$-product), and none of the antifundamental chirals. We wrote the poles in terms of 1d flavor fugacities $\{\mu^{(a)}_i\}$ as a shorthand notation, which are really placeholders for the $\text{rank}(G_F)$ 3d fundamental masses $\{m^{(b)}_d\}$. The poles \eqref{purepole4} and \eqref{purepole5} are due to the bifundamental contributions, \beq Z^{(a, b)}_{pure, bif} =\left[\prod_{I=1}^{k^{(a)}}\prod_{J=1}^{k^{(b)}} \frac{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} + \E_2 \right)}{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} \right)} \frac{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 \right)}{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 - \E_2 \right)}\right]^{\Delta^{ab}}\, . \eeq Two important remarks are in order. First, even though the contours enclose the JK-poles \eqref{purepole2}, the resulting residues are always trivial, because the numerators in $Z^{(a)}_{pure, teeth}$ create a zero at this locus. Second, because of the bifundamental factor $Z^{(a, b)}_{pure, bif}$, some of the enclosed poles are non-simple for generic rank $k^{(a)}$. However, a careful application of the flag method to construct the JK-residue shows that the poles we enclose above make up an exhaustive list; this was checked numerically in \cite{Hwang:2017kmk}. 
Putting it all together, and writing the 1d fundamental chiral masses $\{\mu^{(a)}_i\}$ in terms of the 3d masses $\{m^{(b)}_i\}$, the various poles which end up contributing with nonzero residue are of the form \beq\label{polepure} \phi^{(a)}_I = m^{(b)}_i - \E_- - (s_i-1) \E_1 + 2\,\#^{(ab)} \E_+ \; , \;\;\;\; \text{with}\; s_i\in\{1,\ldots,k^{(a)}_i\},\;\; i\in\{1,\ldots,N^{(a)}\}\; , \eeq for some mass index $b\in\{1,\ldots,n\}$, and where $(k^{(a)}_1, \ldots, k^{(a)}_{N^{(a)}})$ is a partition of the vortex charge $k^{(a)}$ into $N^{(a)}$ non-negative integers. The pair of integers $(i, s_i)$ is assigned to one of the integers $I\in\{1,\ldots,k^{(a)}\}$ exactly once, and $\#^{(ab)}$ is a non-negative integer equal to the number of links between nodes $a$ and $b<a$ in the Dynkin diagram of $\fg$ (and $\#^{(ab)}=0$ if $b>a$).\\ As an explicit example, consider the $A_n$ theory $G^{3d}=T_\rho[SU(N^{(n+1)})]$ from figure \ref{fig:flag}. Then, the poles with nonzero residue are all of the form \beq \phi^{(a)}_I = m^{(n)}_i - \E_- - (s_i-1) \E_1 \; , \;\;\;\; \text{with}\; s_i\in\{1,\ldots,k^{(a)}_i\},\;\; i\in\{1,\ldots,N^{(a)}\}\; , \eeq where, as before, $(k^{(a)}_1, \ldots, k^{(a)}_{N^{(a)}})$ is a partition of $k^{(a)}$ into $N^{(a)}$ non-negative integers, and the pair of integers $(i, s_i)$ is assigned to one of the integers $I\in\{1,\ldots,k^{(a)}\}$ exactly once. Performing the residue integral, one finds the following closed-form expression, which is well known \cite{Hwang:2017kmk}: \begin{align}\label{examplepure1d} \left[\chi\right]^{(0,\ldots, 0)}_{1d} =\sum_{k^{(1)},\ldots, k^{(n)}=0}^{\infty}\;\prod_{a=1}^{n}(\widetilde{\fq}^{(a)}){}^{k^{(a)}}&\sum_{\substack{\sum_i k^{(a)}_i = k^{(a)} \\ k^{(a)}_i\geq 0}} \left[\prod_{i,j=1}^{N^{(a)}}\prod_{s=1}^{k^{(a)}_i-k^{(a)}_j}\frac{\sh\left(m_i-m_j+\E_2- (s-1)\, \E_1\right)}{\sh\left(m_i-m_j - (s-1)\, \E_1\right)}\right]\nonumber\\ &\qquad\times\left[\prod_{i=1}^{N^{(a+1)}}\prod_{j=1}^{N^{(a)}}\prod_{p=1}^{k^{(a)}_j-k^{(a+1)}_i}\frac{\sh\left(m_i-m_j+\E_2 + p\, \E_1\right)}{\sh\left(m_i-m_j + p\, \E_1\right)}\right]. \end{align} We now consider the vortex quantum mechanics of $G^{3d}$ in the presence of the Wilson loop, that is to say the index of $T^{1d}$ \eqref{vortexintegral}. For a given vortex number $k=\sum_{a=1}^n k^{(a)}$, the set of poles to be enclosed is denoted as $\mathcal{M}_k$. This set contains the set $\mathcal{M}^{pure}_k$ of poles we just reviewed for the theory $T^{1d}_{pure}$ (the index in the absence of defect). There are also additional poles depending on the fermion masses $M^{(a)}_\rho$, which belong to the set $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$. Specifically, the new poles are of the form: \begin{align} &\phi^{(a)}_I=M^{(a)}_\rho + \epsilon_+ \; , \label{newpole1}\\ &\phi^{(b)}_J=\phi^{(a)}_I \; ,\;\; \text{if there is a link between nodes $a$ and $b>a$}\; , \label{newpole2}\\ &\phi^{(b)}_J=\phi^{(a)}_I + 2\epsilon_+ \; ,\;\; \text{if there is a link between nodes $a$ and $b<a$}\; .\label{newpole3} \end{align} The poles \eqref{newpole1} arise because of the interactions between the vortices and the Wilson loop fermions, \beq \label{defect} Z^{(a)}_{defect, k} = \prod_{I=1}^{k^{(a)}} \frac{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- \E_- \right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho- \E_-\right)}{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- \E_+\right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho - \E_+ \right)} \, .
\eeq The remaining poles \eqref{newpole2} and \eqref{newpole3} are again due to the bifundamental contributions, \beq Z^{(a, b)}_{pure, bif} =\left[\prod_{I=1}^{k^{(a)}}\prod_{J=1}^{k^{(b)}} \frac{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} + \E_2 \right)}{\sh\left(\phi^{(b)}_{I}-\phi^{(a)}_{J} \right)} \frac{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 \right)}{\sh\left(-\phi^{(b)}_{I}+\phi^{(a)}_{J}- \E_1 - \E_2 \right)}\right]^{\Delta^{ab}}\, . \eeq For a given vortex number $k$, we now argue that the content of the set $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$ makes it possible to reinterpret the index as the character of a finite-dimensional representation of a quantum affine algebra. In order to prove this, we define a new quantity, the vacuum expectation value of a loop defect operator on node $a$, with corresponding fermion mass $z^{(a)}_\rho\equiv z$: \begin{align} \label{Yoperator1d} &\left\langle \left[Y^{(a)}_{1d}(z)\right]^{\pm 1} \right\rangle \equiv \sum_{k^{(1)},\ldots, k^{(n)}=0}^{\infty}\;\prod_{b=1}^{n} \frac{\left(\widetilde{\fq}^{(b)}\right)^{k^{(b)}}}{k^{(b)}!} \; \\ &\;\;\qquad\qquad\times\oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi^{{(b)}}_I}{2\pi i}\right]Z^{(b)}_{pure, vec}\cdot Z^{(b)}_{pure, adj}\cdot Z^{(b)}_{pure, teeth}\cdot \prod_{c>b}^{n} Z^{(b, c)}_{pure, bif} \cdot \left[Z^{(a)}_{defect, k}(z)\right]^{\pm 1} . \nonumber \end{align} Even though the defect factor $Z^{(a)}_{defect, k}(z)$ is present inside the integrand, the contour integral is defined to \emph{only} enclose poles in the set $\mathcal{M}^{pure}_k$, the same poles as in the pure index \eqref{partpure}. Remarkably, the index of $T^{1d}$ can be written as a finite Laurent series in such $Y$-operator vevs. In order to be as concise as possible, we find it convenient to normalize the index by the classical Wilson loop contribution and the index of the vortex quantum mechanics $T^{1d}_{pure}$, in the absence of Wilson loop: \beq \label{normalized} \left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d} (\{z^{(a)}_\rho\})\equiv \frac{\left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}(\{z^{(a)}_\rho\})}{\prod_{a=1}^{n}\prod_{\rho=1}^{L^{(a)}} Z^{(a)}_{defect, \varnothing}(z^{(a)}_\rho)\cdot \left[{\chi}\right]^{(0,\ldots,0)}_{1d}} \eeq As a function of the fermion masses $\{z^{(a)}_\rho\}$, our first main result is that the normalized index can be written as: \begin{align}\label{character1d} \boxed{\left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}(\{z^{(a)}_\rho\})=\frac{1}{\left[{\chi}\right]^{(0,\ldots,0)}_{1d}}\sum_{\omega\in V(\lambda)}\prod_{b=1}^n \left({\widetilde{\fq}^{(b)}}\right)^{d_b^\omega}\; c_{d_b^\omega}(q, t)\; \left(\mathcal{Q}_{d_b^\omega}^{(b)}(\{z_{\rho}^{(a)}\})\right)\, \left[{Y}_{1d}(\{z^{(a)}_\rho\})\right]_{\omega} \, .} \end{align} We will prove this statement momentarily. For now, let us unpack the notation. $\{z^{(a)}_\rho\}$ denotes collectively the $\sum_{a=1}^n L^{(a)}$ fermion masses $z^{(a)}_\rho\equiv e^{-\widehat{R} M^{(a)}_\rho}$. The sum runs over all the weights $\omega$ of the finite-dimensional irreducible representation $V(\lambda)$ of the quantum affine algebra $U_q(\widehat{\fg})$, with highest weight $\lambda=\sum_{a=1}^n L^{(a)}\, \lambda_a$. Here, $\fg$ is the simply-laced Lie algebra denoting the 3d quiver gauge theory (as well as its vortex quantum mechanics), and $\lambda_a$ the $a$-th fundamental weight of $\fg$. 
The label $d_b^\omega$ is a non-negative integer determined by solving \beq\label{sl2string} \omega=\lambda -\sum_{b=1}^n d_b^\omega\, \alpha_b\; . \eeq Namely, a given weight $\omega$ is reached by lowering the highest weight $\lambda$ a finite number of times, using the positive simple roots $\{\alpha_b\}_{b=1,\ldots, n}$. This procedure is referred to as building the weight $\omega$ out of $sl_2$ strings. The equivariant parameter $\widetilde{\fq}^{(b)}$ is the 3d FI parameter for the $b$-th gauge group. The factors $c_{d_b^\omega}(q, t)$ are coefficients depending only on $q$ and $t$. The function $\mathcal{Q}_{d_b^\omega}^{(b)}(\{z_{\rho}^{(a)}\})$ is the residue of $Z^{(b)}_{pure, teeth}$ at the poles \eqref{newpole1}, \eqref{newpole2}, and \eqref{newpole3}. The function is therefore made up of fundamental and antifundamental chiral multiplet contributions, such as: \beq \label{antifund} \prod_{a=1}^n\prod_{\rho=1}^{L^{(a)}}\prod_{i=1}^{P^{(b)}}\frac{\left(1- t\, v^{{\#'}^{(b)}_i} f^{(b)}_i/z_{\rho}^{(a)}\right)}{\left(1- v^{{\#'}^{(b)}_i}f^{(b)}_i/z_{\rho}^{(a)}\right)}\prod_{j=1}^{Q^{(b)}}\frac{\left(1- t\, v^{\widetilde{\#'}^{(b)}_j} \widetilde{f}^{(b)}_j/z_{\rho}^{(a)}\right)}{\left(1- v^{\widetilde{\#'}^{(b)}_j}\widetilde{f}^{(b)}_j/z_{\rho}^{(a)}\right)} \eeq As usual, $P^{(b)}$ stands for the number of fundamental chirals at node $b$, with masses $\{f^{(b)}_i\}$, while $Q^{(b)}$ stands for the number of antifundamental chirals at node $b$, with masses $\{\widetilde{f}^{(b)}_j\}$. The symbols ${\#'}^{(a)}_{i}$ and $\widetilde{\#'}^{(a)}_{j}$ stand for further non-negative integers, which are fixed by the choice of the 3d Higgs vacuum. Finally, the operator $\left[{Y}_{1d}(\{z^{(a)}_\rho\})\right]_{\omega}$, for a given weight $\omega$, is the expectation value of a rational function of $Y$-operators $\left\langle \prod_a \left[Y^{(a)}_{1d}\right]^{\pm 1} \right\rangle $ and derivatives thereof\footnote{An example where such a derivative term can appear is the index of the $D_4$ theory with a fundamental Wilson loop insertion on node 2, $\left[\chi^{\fg}\right]^{(0,1,0,0)}_{1d}(z^{(2)}_1)$. The partition function organizes itself as a Laurent series of 29 $Y$-operator terms, one of which involves derivatives of $Y^{(a)}_{1d}$-operators. Note that the second fundamental representation of $D_4$ is only 28-dimensional. However, finite-dimensional irreducible representations of quantum affine algebras are generically bigger than their non-affine counterparts. Indeed, the second fundamental representation $V(\lambda_2)$ of $U_q(\widehat{D_4})$ decomposes into irreducible representations of $U_q(D_4)$ as $V(\lambda_2) = \textbf{28}\oplus\textbf{1}$. Put differently, one necessarily has to add the trivial representation \textbf{1}, an extra null weight, to the \textbf{28} in order to obtain an irreducible representation of $U_q(\widehat{D_4})$.}, where each operator $\left[Y^{(a)}_{1d}\right]^{\pm 1}$ is a function of a fermion mass $z^{(a)}_\rho$. The arguments of the factors are shifted by powers of $q$ and $t$, determined uniquely from \eqref{sl2string}. All in all, the index is a twisted\footnote{The character is \emph{twisted} because of the presence of the 3d FI parameters $\widetilde{\fq}^{(b)}$ and the flavor matter factors $\mathcal{Q}^{(b)}$.} character of a finite-dimensional irreducible representation $V(\lambda)$ of $U_q(\widehat{\fg})$, with highest weight $\lambda=\sum_{a=1}^n L^{(a)}\, \lambda_a$.
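To make this notation concrete, consider the simplest case $\fg=A_1$ with a single fundamental Wilson loop, $L^{(1)}=1$. The representation $V(\lambda_1)$ of $U_q(\widehat{A_1})$ has two weights, $\omega=\lambda_1$ (with $d_1^\omega=0$) and $\omega=\lambda_1-\alpha_1$ (with $d_1^\omega=1$), so the character \eqref{character1d} collapses to a two-term Laurent series. The display below is only a sketch of this weight structure: we leave the coefficient $c_{1}(q,t)$ and the matter factor $\mathcal{Q}^{(1)}_{1}(z)$ unspecified, and the argument shift $v^{-2}$ is the one matching the deformed stress tensor \eqref{examplestress} encountered in section \ref{sec:deformedWsection}: \beq \left[\widetilde{\chi}\right]^{(1)}_{1d}(z)=\frac{1}{\left[{\chi}\right]^{(0)}_{1d}}\left[\left\langle Y^{(1)}_{1d}(z)\right\rangle + \widetilde{\fq}^{(1)}\, c_{1}(q,t)\,\mathcal{Q}^{(1)}_{1}(z)\,\left\langle \left[Y^{(1)}_{1d}(v^{-2}\,z)\right]^{-1}\right\rangle\right]\; . \eeq The first term is the highest weight contribution, while the second is obtained by lowering once with $\alpha_1$, at the cost of one unit of the FI parameter $\widetilde{\fq}^{(1)}$.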
Starting with the highest weight $\lambda$, each term in the character is then obtained by successive ``vortex-Weyl" reflections, which generalize the usual Weyl group action of the Lie algebra $\fg$. Because of the dependence on the two fugacities $q=e^{\widehat{R}\E_1}$ and $t=e^{-\widehat{R}\E_2}$, the vortex character is a $qq$-character, in the terminology of \cite{Nekrasov:2015wsu}.\\ Two remarks are in order. First, similar $qq$-characters have been constructed in the related context of counting instantons in the presence of a 1/2-BPS Wilson loop on the manifold $\mathbb{C}^2\times S^1(\widehat{R})$. There, the functional form of the character, meaning its dependence on the $Y$-operators $\left[{Y}_{1d}(\{z^{(a)}_\rho\})\right]_{\omega}$ and on the weights $\omega$, is identical to what we found here in the context of vortex counting. This is because the $Y$-operator dependence is entirely fixed by the choice of the algebra $\fg$ and the representation $(L^{(1)},\ldots, L^{(n)})$ in which the Wilson loop transforms. In particular, the functional form of the character does not depend on whether we study instanton or vortex counting, nor does it depend on the dimension of the manifold. Of course, there are still notable differences depending on which gauge theory setup we study: these are encoded in the expressions for the vevs $\langle \ldots \rangle$, and the functions $\mathcal{Q}^{(b)}(\{z_{\rho}^{(a)}\})$ in \eqref{character1d}. For instance, in the context of an instanton quantum mechanics, these functions are contributions of $\cN=2$ Fermi multiplets exclusively, while in the vortex context, we found here that the functions are made of both $\cN=2$ Fermi and chiral multiplets. Second, one can consider the limit where we shrink the circle size to zero. There are a priori many ways to take this limit, so we should be specific: here, we require that all flavor fugacities of the quantum mechanics remain fixed as we take $\widehat{R}\rightarrow 0$. In practice, all the trigonometric functions present in the 1-loop determinants of the quantum mechanics index then become rational functions of their arguments. The 3d gauge theory $G^{3d}$ turns into a 2d $\cN=(4,4)$ gauged sigma model on $\mathbb{C}$, and the Wilson loop wrapping $S^1(\widehat{R})$ becomes a 1/2-BPS point defect at the origin. Correspondingly, the index we computed becomes a vortex $qq$-character of the 2d theory, whose general form was first conjectured in \cite{Nekrasov:2017rqy} (section 7). Our work in this section can be seen as a microscopic derivation of the expression presented there.\\ It remains to prove that the index of $T^{1d}$ is indeed equal to the character expression \eqref{character1d}. Since a fully explicit proof would require the knowledge of a specific Higgs vacuum for the 3d theory, we find it more worthwhile to outline the universal features of the proof in the general case here, and to showcase it in detail when discussing an example in section \ref{sec:example}. The proof consists of two parts. First, recall that the contours of the index enclose the poles $\mathcal{M}_k$, for a given vortex number $k$. In contrast, the contours of the $Y$-operator \eqref{Yoperator1d} only enclose the poles $\mathcal{M}^{pure}_k$ of the index in the absence of a Wilson loop.
Because the set $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$ is non-empty for every $k$, it follows that the index has the following expansion: \beq\label{firstterm1d} \left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}(\{z^{(a)}_\rho\})= \prod_{a=1}^n \prod_{\rho=1}^{L^{(a)}} Z^{(a)}_{defect, \varnothing}(z^{(a)}_\rho) \left\langle \prod_{a=1}^n \prod_{\rho=1}^{L^{(a)}}{Y_{1d}^{(a)}(z^{(a)}_\rho)} \right\rangle +\ldots\; , \eeq where each term in the dots ``$...$" stands for a residue of $\left[\chi\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}$ at one of the poles \eqref{newpole1}, \eqref{newpole2}, or \eqref{newpole3}, in $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$. These extra poles making up the dotted terms need to be included, as dictated by the JK-prescription, and our first observation is that there is only a \emph{finite} number of them. This last point is highly nontrivial, and is derived from the explicit form of the integrand of \eqref{vortexintegral}. As an example, suppose that there exists a pole of the first kind \eqref{newpole1}, at the locus $\phi^{(a)}_I - M^{(a)}_{\rho, *} - \epsilon_+ =0$, for some $I\in\{1,\ldots,k^{(a)}\}$. Then, there exists no pole at the locus $\phi^{(a)}_J - M^{(a)}_{\rho, *} - \epsilon_+ =0$ for any $J\neq I$. This is because there is a zero at the locus $\phi^{(a)}_I-\phi^{(a)}_J=0$, due to the numerator in $Z^{(a)}_{pure, vec}$. Similarly, the JK-residue prescription predicts a pole at the locus $\phi^{(a)}_J-\phi^{(a)}_I+\E_1=0$, due to the denominator of $Z^{(a)}_{pure, adj}$, and a pole at the locus $\phi^{(a)}_J-\phi^{(a)}_I+\E_2=0$, due to the denominator of $Z^{(a)}_{pure, vec}$. However, there is a zero at both loci, due to the zeros at the numerators of $Z^{(a)}_{defect, k}$. All in all, the locus \eqref{newpole1} contributes a \emph{single} new $M^{(a)}_{\rho, *}$-pole to the index. One can similarly show that the only other $M^{(a)}_{\rho, *}$-poles are due to the loci \eqref{newpole2} and \eqref{newpole3}, and that the number of such poles is bounded for all $k$. Namely, for every vortex number $k$, the size of the set $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$ is at most some fixed integer $k'$. By carrying out the JK-residue procedure explicitly, we can determine $k'$ exactly: one finds that $k'+1$ is equal to the dimension of the finite-dimensional irreducible representation of the quantum affine algebra $U_q(\widehat{\fg})$ with highest weight $(L^{(1)},\ldots, L^{(n)})$. From this perspective, the first term in \eqref{firstterm1d} before the dots is nothing but the highest weight contribution of the representation. This ends the first part of the proof. It remains to show that each of the $k'$ terms in the dotted expansion \eqref{firstterm1d} is precisely of the form \eqref{character1d}. This follows from a remarkable fact, which again can be proved by direct computation: for a given vortex number $k$, each contour enclosing $j'$ of the $k'$ poles in $\mathcal{M}_k\setminus\mathcal{M}^{pure}_k$ can be traded for an integration contour which encloses $k-j'$ poles \textit{only}, where $j'=1,\ldots,k'$. The price to pay for such a trade of contours is the insertion of extra $Y$-operators in the integrand, along with the residues of the chiral matter factors $Z^{(a)}_{pure, teeth}$ at the $j'$ poles.
Performing this change of contours for all $k'$ dotted terms, and normalizing by the classical Wilson loop contribution $\prod_{a=1}^n \prod_{\rho=1}^{L^{(a)}} Z^{(a)}_{defect, \varnothing}(z^{(a)}_\rho)$, we arrive at the advertised expression for the vortex $qq$-character.\\ \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{contours} \vspace{-12pt} \caption{The black crosses denote poles in the set $\mathcal{M}^{pure}_k$, from the pure index, while the red dot denotes a pole in the set $\mathcal{M}_k\setminus \mathcal{M}^{pure}_k$. Such a pole is due to the factor $Z^{(a)}_{defect, k}$ in the integrand. On the left, we show a possible contour for the computation of the index at $k=1$. Note that by the JK prescription, we must in particular enclose the new pole in red. Remarkably, it is equivalent to trade this contour for the one on the right, which only encloses the poles in the set $\mathcal{M}^{pure}_1$, but with a modified integrand; in the latter contour, the integrand contains insertions of additional $Y$-operators, with a vortex charge shift of one unit to account for the missing pole.} \label{fig:contours} \end{figure} We emphasize that at no point in the discussion did we need to know the content of the set $\mathcal{M}^{pure}_k$, that is to say the poles of the quantum mechanics index in the absence of a Wilson loop. What is instead relevant to derive the $qq$-character is the set of poles $\mathcal{M}_k\setminus \mathcal{M}^{pure}_k$, due entirely to the insertion of the defect. We now explain why this vortex character can be understood as a non-perturbative Schwinger-Dyson equation for the 3d theory $G^{3d}$. \subsection{Physics of the Schwinger-Dyson Equations} \label{ssec:schwingerphysics} Let us focus on the case of a fundamental Wilson loop on the $a$-th node of $G^{3d}$: $L^{(a)}=1$ for some $a\in \{1,\ldots,n\}$ and $L^{(b)}=0$ for $b\neq a$. Correspondingly, the defect group is $\widehat{G}_{defect}=U(1)$, parameterized by the fermion mass $z^{(a)}_1\equiv z$. The $Y$-operator we constructed mediates the change in vortex charge $k$ of the theory $G^{3d}$. More precisely, the vev $\left\langle Y^{(a)}_{1d} (z)\right\rangle$ represents the insertion of a Wilson loop with fermion mass $z$ on node $a$, allowing vortex particles to be created and annihilated in the bulk. This changes the topological sector of $G^{3d}$, and the $qq$-character \eqref{character1d} encodes a corresponding quantum affine symmetry of the theory. Put differently, the Schwinger-Dyson equation for the $Y$-operator vev is the statement that even though $\left\langle Y^{(a)}_{1d} (z)\right\rangle$ is singular in $z$, as is obvious from the explicit expression \eqref{Yoperator1d}, a particular Laurent series of $Y$-operator vevs, the $qq$-character, has regularity properties in $z$. The precise statement of the Schwinger-Dyson equations is as follows: \beq \left[\widetilde{\chi}\right]^{(0,\ldots,0,1,0,\ldots,0)}_{1d}(z) \;\text{is regular in $z$ except for the poles of the function $\mathcal{Q}_{d_b^\omega}^{(b)}(z)$ in \eqref{character1d}.}\nonumber \eeq To prove this statement, one has to show that the residues at the various poles of the index do not develop singularities in the variable $z=e^{-\widehat{R} M}$, other than at the denominators of $\mathcal{Q}_{d_b^\omega}^{(b)}(z)$. A singularity in $z$ will arise if two poles of the integrand pinch one of the contours.
It turns out that most potential singularities are canceled in a subtle manner by zeroes in the integrand, resulting in an \emph{almost} regular structure in $z$. Indeed, a finite set of $z$-singularities is produced after integration, and makes up the various poles of the function $\mathcal{Q}_{d_b^\omega}^{(b)}(z)$. For instance, let $a\in\{1,\ldots,n\}$ and $I\in\{1,\ldots,k^{(a)}\}$, and consider the following two loci of poles in the integrand: \beq \phi^{(a)}_I - M - \E_+ = 0 \qquad \text{and}\qquad \phi^{(a)}_{I}- \widetilde{\mu}^{(a)}_j-\E_+ = 0 \; \text{for some antifundamental mass}\; \widetilde{\mu}^{(a)}_j\; . \eeq The first pole locus is due to the denominator of $Z^{(a)}_{defect, k}$, while the second pole locus is due to the denominator of $Z^{(a)}_{pure, teeth}$, the antifundamental chiral matter contribution. By the JK-residue prescription, the first locus is inside the integration contour, as we saw in \eqref{newpole1}. Furthermore, the JK-residue prescription instructs us to only enclose the $\{\mu^{(a)}_j\}$-poles coming from fundamental chiral multiplets \eqref{purepole3}, and none of the $\{\widetilde{\mu}^{(a)}_j\}$-poles coming from antifundamental chiral multiplets. It follows that the second locus is outside the integration contour. Then, the poles can freely coalesce and pinch the contour, resulting in the singular locus: \beq M=\widetilde{\mu}^{(a)}_j \; . \eeq This singularity manifests itself as a simple pole in the function $\mathcal{Q}_{d_b^\omega}^{(b)}(z)$. Given a generic theory, providing the comprehensive list of $z$-singularities in the index is a tedious exercise, though it presents no technical difficulties; one simply proceeds as above, analyzing the various sets of poles which can potentially pinch the contours. We will carry out this procedure in detail when presenting an example in section \ref{sec:example}. As a last remark, note that this discussion straightforwardly generalizes to a Wilson loop in an arbitrary irreducible representation of the 3d gauge group $G$. In that case, the Schwinger-Dyson equations are still regularity conditions on the associated $qq$-character, but involving correlation functions of a higher number of $Y$-operators. Typically, the index $\left[\widetilde{\chi}\right]^{(L^{(1)},\ldots,L^{(n)})}_{1d}(\{z^{(a)}_\rho\})$ will develop even more singularities in the defect fermion masses $\{z^{(a)}_\rho\}$.\\ We now give an alternate derivation of the non-perturbative Schwinger-Dyson equations obeyed by 3d $\cN=4$ gauge theories, directly from three dimensions and without resorting to the vortex quantum mechanics. \vspace{16mm} \section{Schwinger-Dyson Equations: the Three-Dimensional Perspective} \label{sec:3dsection} Consider a 3d supersymmetric gauge theory on a 3-manifold. There is by now overwhelming evidence that the partition function on such a space, with adequate twists, contains information about the vortex sector of the theory \cite{Pasquetti:2011fj,Beem:2012mb,Nekrasov:2008kza,Krattenthaler:2011da,Dimofte:2011ju,Taki:2013opa,Cecotti:2013mba,Fujitsuka:2013fga,Benini:2013yva,Hwang:2012jh,Yoshida:2014ssa,Benini:2015noa}\footnote{Similar results exist for 2d gauge theories on 2-manifolds; see for instance \cite{Doroud:2012xw,Benini:2012ui}.}. For instance, in the case where the 3-manifold is a $\mathbb{C}$-bundle over $S^1$, and the theory is the $\cN=4$ quiver $G^{3d}$ we previously considered, the vortex part of the 3d partition function is precisely the index of the quantum mechanics $T^{1d}_{pure}$ from the last section \cite{Hwang:2017kmk}.
In the spirit of this body of work, in this section we propose a half-index for $G^{3d}$ on $\mathbb{C}\times S^1(\widehat{R})$ in the presence of a 1/2-BPS Wilson loop wrapping $S^1(\widehat{R})$, and derive the vortex $qq$-character from it. The definition of the index is quite nontrivial in the 3d picture, but we will argue that it is sensible, since it correctly reproduces the results we derived in the vortex quantum mechanics picture for $T^{1d}$. \subsection{A Half-Index Presentation} \label{ssec:3dindex} Let us first review how to define a half-index for $G^{3d}$ on $\mathbb{C}\times S^1(\widehat{R})$ in the absence of a Wilson loop. This index is also referred to as a holomorphic block \cite{Beem:2012mb}\footnote{Formally, a holomorphic block is defined in the IR of the 3d $\cN=4$ theory: on the manifold $\mathbb{C}\times S^1(\widehat{R})$, one has to specify boundary conditions at infinity on $\mathbb{C}$ by choosing an IR vacuum. Alternatively, one can consider a ``half-index" on $D\times S^1(\widehat{R})$, where $D$ is a finite disk, with boundary conditions defined in the UV on the edge of the disk. The boundary conditions will flow to some boundary condition in the IR which may or may not agree with the one defined in the holomorphic block formalism. It would be important to explore these subtleties in our context.}. We first consider the 3-manifold in the $\Omega$-background, to regularize the non-compactness of $\mathbb{C}$; namely, if we let $z$ be a complex coordinate on the complex line, we can view the 3-manifold as a $\mathbb{C}$-bundle over $S^1(\widehat{R})$, where as we go around the circle, we make the identification \beq\label{omega} z \sim z\, e^{\widehat{R}\, \epsilon_1}\; ,\qquad \epsilon_1\in\mathbb{R} \; . \eeq From now on, we denote the $\mathbb{C}$-line in this background as $\mathbb{C}_q$, with $q=e^{\widehat{R}\, \epsilon_1}$. Then, the partition function of $G^{3d}$ is defined via the following half-index: \beq \label{index3d} \left[\widetilde{\chi}\right]^{(0,\ldots, 0)}_{3d} =\text{Tr}\left[(-1)^F\,e^{-\widehat{R}\{Q,\overline{Q}\}}\, q^{S_1 - S_R}\, t^{-S_2 + S_R}\prod_{a=1}^n (\widetilde{\fq}^{(a)}){}^{k^{(a)}}\, \prod_{d=1}^{N^{(a)}_F} (x^{(a)}_d)^{\Pi^{(a)}_d}\,\right] \; . \eeq The trace is taken over the Hilbert space of states on $\mathbb{C}_q$. The index counts states in $Q$-cohomology, where $Q= Q^1_{1,1}$ and $\overline{Q}=Q^2_{2,2}$ were defined in section \ref{ssec:review}. $F$ is the fermion number. $S_1$ is a rotation generator for $\mathbb{C}_q$, while $S_2$ is an R-symmetry generator; indeed, $S_2$ generates a $U(1)_C$ symmetry which is a subgroup of the $SU(2)_C$ R-symmetry acting on the vector multiplet scalars. Meanwhile, $S_R$ generates a $U(1)_H$ symmetry which is a subgroup of the $SU(2)_H$ R-symmetry acting on the hypermultiplet scalars. $\{\Pi^{(a)}_d\}$ are Cartan generators for the flavor group $G_F$, with conjugate fugacities $\{x^{(a)}_d\}$, the exponentiated fundamental masses $\{m^{(a)}_d\}$. The integer $k^{(a)}= -\frac{1}{2\pi}\int \text{Tr}\, F^{(a)}$ is the topological $U(1)$ charge for the $a$-th gauge group, and the conjugate fugacity $\widetilde{\fq}^{(a)}$ is the exponentiated real FI parameter on node $a$, complexified by the holonomy of the corresponding background gauge field around $S^1(\widehat{R})$. The field configurations which preserve the supersymmetries of the index are solutions to the vortex equations on $\mathbb{C}$; the integers $k^{(a)}$ then provide a natural grading on the moduli space of vortices.
So far we have not talked about the gauge symmetry group $G=\prod_{a=1}^{n} U(N^{(a)})$. We start by treating it as a global symmetry, which we make abelian by breaking it to its maximal torus. The associated equivariant parameters are denoted collectively as ``$y$". We then gauge the symmetry by projecting to $G$-invariant states, which amounts to integrating over those parameters. Namely, \beq\label{Haar3d} \oint d_{Haar}y=\oint \prod_{a=1}^{n}\prod_{i=1}^{N^{(a)}}\frac{dy^{(a)}_i}{y^{(a)}_i}\; . \eeq Above, the contour is chosen to project to states neutral under the $G$-symmetry. Because the parameters $\{y^{(a)}_i\}$ parameterize part of the Coulomb branch of $G^{3d}$, this presentation of the index is referred to as Coulomb branch localization: \begin{align}\label{indexintegral3d} \left[\widetilde{\chi}\right]^{(0,\ldots, 0)}_{3d} = \oint_{\mathcal{M}^{bulk}} dy\,\left[I^{3d}_{bulk}(y)\, \right] \; . \end{align} The choice of contours $\mathcal{M}^{bulk}$ determines a vacuum for $G^{3d}$. In this three-dimensional setup, the contours are once again fixed by the JK-residue prescription, where we choose to work with the auxiliary vector $\eta=(1,\ldots,1)$, as we did before. The integrand $I^{3d}_{bulk}(y)$ stands for the contribution of all the various multiplets to the index. These can be read off directly from the 3d $\cN=4$ quiver description of the theory. This bulk contribution has the form \cite{Beem:2012mb,Aganagic:2017smx}: \begin{align}\label{bulk3d} I^{3d}_{bulk}(y)=\prod_{a=1}^{n}\prod_{i=1}^{N^{(a)}}{y^{(a)}_i}^{\left(\zeta_{3d}^{(a)}-1\right)}\;I^{(a)}_{vec}(y)\cdot\prod_{b>a}I^{(a,b)}_{bif}(y) \cdot I^{(a)}_{flavor}(y,\{x^{(a)}_d\})\; . \end{align} The factor \begin{align}\label{FI3d} \prod_{a=1}^{n}\prod_{i=1}^{N^{(a)}}{y^{(a)}_i}^{\left(\zeta_{3d}^{(a)}\right)} \end{align} is the contribution of the 3d FI parameters. The factor \begin{align}\label{vec3d} I^{(a)}_{vec}(y)=\prod_{1\leq i\neq j\leq N^{(a)}}\frac{\left(y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}}{\left(t\, y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}}\;\prod_{1\leq i<j\leq N^{(a)}} \frac{\Theta\left(t\,y^{(a)}_{i}/y^{(a)}_{j};q\right)}{\Theta\left(y^{(a)}_{i}/y^{(a)}_{j};q\right)} \end{align} stands for the contribution of an ${\cN}=4$ vector multiplet for the gauge group $U(N^{(a)})$. Above, we use the following definitions of the $q$-Pochhammer symbol, \beq \left( x\, ; q\right)_{\infty}\equiv\prod_{l=0}^{\infty}\left(1-q^l\, x\right)\; , \eeq and of the theta function, \beq \Theta\left(x\,;q\right)\equiv \left(x \,;\, q\right)_\infty\,\left(q/x \,;\, q\right)_\infty \; . \eeq In particular, decomposing the $\cN=4$ vector multiplet as an $\cN=2$ vector multiplet and an $\cN=2$ adjoint chiral multiplet, the numerator factor $\left(y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}$ is the contribution of the W-bosons in the $\cN=2$ vector multiplet, while the denominator factor $\left(t\, y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}$ is the contribution of the $\cN=2$ adjoint chiral multiplet\footnote{The ratio of theta functions has a natural interpretation when the manifold is thought of as $D\times S^1(\widehat{R})$, with $D$ the disk. The boundary of that manifold is $S^1\times S^1=T^2$, and one needs to specify the 2d theory on this torus $T^2$. In principle, any choice of 2d $\cN=(0,2)$ boundary conditions will do, as long as the theory is anomaly-free. In our context, one should specify the 3d chiral multiplet boundary conditions, which are either Dirichlet or Neumann.
The gauge fields have Neumann boundary conditions, and the appearance of theta functions in the 3d vector multiplet is understood as the contribution of the 2d elliptic genus on the boundary torus. For details, see \cite{Gadde:2013wq,Yoshida:2014ssa,Dimofte:2017tpi}, and the related discussion in \cite{Aganagic:2017smx}.}. The factor \begin{align} \label{bif3d} I^{(a,b)}_{bif}(y)=\prod_{1\leq i \leq N^{(a)}}\prod_{1\leq j \leq N^{(b)}}\left [ \frac{(t\, v\, y^{(a)}_{i}/y^{(b)}_{j};q)_{\infty}}{(v \, y^{(a)}_{i}/y^{(b)}_{j};q)_{\infty}}\right]^{\Delta^{a b}} \end{align} is the contribution of $\cN=4$ bifundamental hypermultiplets. We use the same notations as introduced previously: $\Delta^{ab}$ is the incidence matrix of the Lie algebra $\fg$, and $v=\sqrt{q/t}$. The factor \begin{align}\label{matter3d} I^{(a)}_{flavor}(y, \{x^{(a)}_d\}) =\prod_{d=1}^{N_F^{(a)}}\prod_{i=1}^{N^{(a)}} \frac{\left(t\, v\, x^{(a)}_{d}/y^{(a)}_{i};q\right)_{\infty}}{\left(v\, x^{(a)}_{d}/y^{(a)}_{i};q\right)_{\infty}} \end{align} stands for the contribution of $\cN=4$ hypermultiplets in the fundamental representation of the $a$-th gauge group. The $\cN=4$ supersymmetry fixes the R-charge assignments of the various fugacities in the arguments of the $q$-Pochhammer symbols. In particular, note the presence of a cubic superpotential term due to the bifundamental/adjoint chiral multiplets in the $\cN=2$ language. Let us briefly discuss the contours. Following the JK-residue prescription, there are three distinct sets of poles in the set $\mathcal{M}^{bulk}$. The first is due to the denominators $\left(v\, x^{(a)}_{d}/y^{(a)}_{i};q\right)_{\infty}$ from the fundamental matter contribution \eqref{matter3d}, resulting in: \beq y^{(a)}_i =v\, x^{(a)}_d \, q^{s}\qquad\;\; ,\;\; s=0,1,2,\ldots\; , \;\;\; d\in\{1,\ldots,N^{(a)}_F\} \; . \eeq Second, there are the denominators $\left[(v \, y^{(a)}_{i}/y^{(b)}_{j};q)_{\infty}\right]^{\Delta^{a b}}$ from the bifundamentals \eqref{bif3d}, resulting in: \begin{align} &y^{(b)}_i =v\, y^{(a)}_j \, q^{s}\qquad ,\;\; s=0,1,2,\ldots,\; \text{if there is a link between nodes $a$ and $b>a$}\; . \end{align} Third, there are the denominators $\left(t\, y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}$ from the vector multiplets \eqref{vec3d}. However, such $t$-dependent poles turn out to have vanishing residue, as can be easily checked\footnote{The fact that poles of the third kind give a vanishing residue is characteristic of indices for 3d theories with $\cN=4$ supersymmetry. In particular, our argument does not apply to 3d theories with less supersymmetry. A similar phenomenon was noted in the 1d vortex quantum mechanics index.}. The above pole structure makes explicit the grading over the vortex charge, so we naturally denote the set of poles at total vortex charge $k=\sum_{a=1}^n k^{(a)}$ as $\mathcal{M}^{bulk}_k$.\\ We now want to introduce a 1/2-BPS Wilson loop wrapping $S^1(\widehat{R})$, which is a codimension-2 defect from the point of view of $G^{3d}$. As we reviewed, this is possible in the first place because such a loop preserves the supersymmetries $Q$ and $\overline{Q}$. We couple the one-dimensional $\cN=4$ theory on the loop to the bulk three-dimensional theory by considering the flavor symmetries of the 1d theory and gauging them with 3d $\cN=4$ vector multiplets. From the point of view of the index, this translates into gauging the 1d masses, turning them into the scalars of the corresponding 3d $\cN=4$ vector multiplets.
When the vector multiplet is dynamical, the scalar becomes an eigenvalue $y$ to be integrated over, while in the case of a background vector multiplet, the scalar becomes a mass from the 3d point of view. To achieve this, we start by defining a defect $Y$-operator vacuum expectation value, written as an integral over the Coulomb moduli of the 3d theory: \begin{align}\label{3ddefectexpression} \left\langle\left[{\widetilde{Y}^{(a)}_{3d}}(z)\right]^{\pm 1} \right\rangle \equiv \oint_{\mathcal{M}^{bulk}} d{y}\,\left[I^{3d}_{bulk}(y)\cdot\left[{\widetilde{Y}^{(a)}_{defect}}(y, z)\right]^{\pm 1}\right] \; . \end{align} For now, let us simply state that the integrand factor is defined as \beq\label{3dWilsonfactor} {\widetilde{Y}^{(a)}_{defect}}(y,z)=\prod_{i=1}^{N^{(a)}}\frac{1-t\, y^{(a)}_{i}/z}{1- y^{(a)}_{i}/z}\; . \eeq There is another piece of the defect $Y$-operator which is not integrated over, as it couples the loop to the flavor symmetry of $G^{3d}$; this contribution has the generic form \beq\label{3dWilsonfactor2} {\widetilde{Y}^{(a)}_{flavor}}(\{x^{(b)}_{d}\}, z)=\prod_{b=1}^{n}\prod_{d=1}^{N_F^{(b)}}\frac{1- v^{\#^{(ab)}+1}\, x^{(b)}_{d}/z}{1- t\, v^{\#^{(ab)}+1}\, x^{(b)}_{d}/z}\; , \eeq where $\#^{(ab)}$ is a non-negative integer equal to the number of links between nodes $a$ and $b$ in the Dynkin diagram of $\fg$. Note that the contour definition for the above $Y$-operator vev \eqref{3ddefectexpression} is the same as the contour definition for the 3d index \eqref{indexintegral3d} in the absence of defect. In particular, the contours are defined not to enclose the potential $z$-poles from the factor $\prod_{a=1}^n {\widetilde{Y}^{(a)}_{defect}}$.\\ Then, we \emph{define} the (normalized) half-index of $G^{3d}$ in the presence of a Wilson loop, or 3d/1d index for short, as the expansion: \begin{align}\label{NEWcharacter3d} \boxed{\left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{3d}(\{z^{(a)}_\rho\})=\frac{1}{\left[{\chi}\right]^{(0,\ldots,0)}_{3d}}\sum_{\omega\in V(\lambda)}\prod_{b=1}^n \left({\widetilde{\fq}^{(b)}}\right)^{d_b^\omega}\; c_{d_b^\omega}(q, t)\; \left(\widetilde{\mathcal{Q}}_{d_b^\omega}^{(b)}(\{z_{\rho}^{(a)}\})\right)\, \left[\widetilde{Y}_{3d}(\{z^{(a)}_\rho\})\right]_{\omega} \, .} \end{align} In the above, the factor $\left[\widetilde{Y}_{3d}(\{z^{(a)}_\rho\})\right]_{\omega}$ is defined as the vev of a rational function of the $Y$-operators $\left\langle \prod_a \left[\widetilde{Y}^{(a)}_{3d}\right]^{\pm 1} \right\rangle $ \eqref{3ddefectexpression}, and possible derivatives thereof. All the other functions and notations appearing above are the same as were introduced in section \ref{ssec:1dqqcharacter}\footnote{There is one subtle difference: in the quantum mechanics $T^{1d}$, the function $\mathcal{Q}_{d_b^\omega}^{(b)}(z_{\rho}^{(a)})$ was the residue of $Z^{(b)}_{pure, teeth}$ at the various poles of $\mathcal{M}_k\setminus \mathcal{M}^{pure}_k$. In the 3d setup used here, we have defined a function $\widetilde{\mathcal{Q}}_{d_b^\omega}^{(b)}(z_{\rho}^{(a)})$, which is still written as contributions of 1d $\cN=4$ chirals, but not quite the same expressions as in the quantum mechanics. Giving exact formulas would require specifying the theory $G^{3d}$; see section \ref{sec:example} for a detailed example.}. This implies that the index is once again a twisted $qq$-character of a finite-dimensional irreducible representation $V(\lambda)$ of the quantum affine algebra $U_q(\widehat{\fg})$, with highest weight $\lambda=\sum_{a=1}^n L^{(a)}\, \lambda_a$.
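All building blocks of the half-index, \eqref{vec3d}, \eqref{bif3d} and \eqref{matter3d}, as well as the defect factors \eqref{3dWilsonfactor} and \eqref{3dWilsonfactor2}, are assembled from the $q$-Pochhammer symbol and the theta function. When testing an expansion such as \eqref{NEWcharacter3d} numerically, one simply truncates the infinite products. The sketch below (in Python) is ours for illustration only: the function names and the cutoff are not from any standard library, and we assume $|q|<1$ so that the truncation converges.
\begin{verbatim}
from math import prod

def qpoch(x, q, lmax=200):
    # Truncation of the q-Pochhammer symbol
    # (x; q)_inf = prod_{l >= 0} (1 - q^l x), valid for |q| < 1.
    return prod(1 - q**l * x for l in range(lmax))

def theta(x, q, lmax=200):
    # Theta function Theta(x; q) = (x; q)_inf (q/x; q)_inf.
    return qpoch(x, q, lmax) * qpoch(q / x, q, lmax)

# Sanity check of the inversion property Theta(q/x; q) = Theta(x; q):
x, q = 0.3, 0.1
print(abs(theta(q / x, q) - theta(x, q)) < 1e-12)  # True
\end{verbatim}
Residue sums over the poles in $\mathcal{M}^{bulk}_k$ can then be assembled from these primitives, order by order in the FI parameters $\widetilde{\fq}^{(a)}$.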
The above definition of the 3d/1d half-index seems ad-hoc from the three-dimensional perspective, but we will see later that it is in fact very natural in the light of the BPS/CFT correspondence. We end this section by exhibiting the relation between the 3d/1d index \eqref{NEWcharacter3d} and the Witten index \eqref{character1d} of the 1d quantum mechanics: up to normalization, they turn out to be one and the same! \subsection{Relation between the 3d and 1d Expressions for the $qq$-character} \label{ssec:3dindexis1dindex} As we reviewed, the choice of contour for the 3d half-index fixes a vacuum for $G^{3d}$. Let $T^{1d}_{pure}$ be the vortex quantum mechanics defined on that vacuum, and let $T^{1d}$ be the vortex quantum mechanics in the presence of a Wilson loop. We now prove that the index of $T^{1d}$ is, up to a constant factor, the 3d/1d half-index introduced above. The proof rests on establishing a relation between the Wilson loop $Y$-operator vev $\left\langle{\widetilde{Y}^{(a)}_{3d}}(z)\right\rangle$ and its quantum mechanical counterpart $\left\langle{{Y}^{(a)}_{1d}}(z)\right\rangle$. First recall that in defining the 1d $Y$-operator vev \eqref{Yoperator1d}, one sums over the poles \eqref{polepure} in the set $\mathcal{M}^{pure}_k$ for each vortex charge $k=\sum_{a=1}^n k^{(a)}$, \beq\label{polepureagain} \phi^{(a)}_I = \mu^{(b)}_i - \E_- - (s_i-1) \E_1 + 2\,\#^{(ab)} \E_+ \; , \;\;\;\; \text{with}\; s_i\in\{1,\ldots,k^{(a)}_i\},\;\; i\in\{1,\ldots,N^{(a)}\}\; , \eeq for some mass index $b\in\{1,\ldots,n\}$. Recall that in the notation above, $(k^{(a)}_1, \ldots, k^{(a)}_{N^{(a)}})$ is a partition of the vortex charge $k^{(a)}$ into $N^{(a)}$ non-negative integers, and $\#^{(ab)}$ is a non-negative integer equal to the number of links between nodes $a$ and $b$ in the Dynkin diagram of $\fg$. We write the collection of all such $\phi$-loci as $\vec\phi_*$. After performing the residue sum, the $Y$-operator vev can be schematically written as: \beq\label{residuesum1d} \left\langle{{Y}^{(a)}_{1d}}(M)\right\rangle =\sum^{\infty}_{k=0}\sum_{\{\vec\phi_*\}\in\mathcal{M}^{pure}_k} Z^{1d}_{pure}(\vec\phi_*) \cdot Z^{(a)}_{defect, k}(\vec\phi_*,M) \; , \eeq where we collected all the contributions independent of the defect inside a factor $Z^{1d}_{pure}$, while the remaining factor is due to the interaction \eqref{defect} between the loop and the vortices, rewritten here for convenience: \beq \label{defectagain} Z^{(a)}_{defect, k}(\phi^{{(a)}}_I,M) = \prod_{I=1}^{k^{(a)}} \frac{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- \E_- \right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho- \E_-\right)}{\sh\left(\phi^{{(a)}}_I-M^{(a)}_\rho- \E_+\right)\, \sh\left(-\phi^{{(a)}}_I + M^{(a)}_\rho - \E_+ \right)} \, . \eeq Meanwhile, in the three-dimensional setup, we sum over the poles $\{\vec y_*\}\in\mathcal{M}^{bulk}$, and the $Y$-operator vev \eqref{3ddefectexpression} becomes: \begin{align}\label{3ddefectexpressionagain} \left\langle{\widetilde{Y}^{(a)}_{3d}}(z) \right\rangle \equiv {\widetilde{Y}^{(a)}_{flavor}}(\{x^{(b)}_d\}, z)\sum^{\infty}_{k=0}\sum_{\{\vec y_*\}\in\mathcal{M}^{bulk}_k} I^{3d}_{bulk}(\vec y_*)\cdot{\widetilde{Y}^{(a)}_{defect}}(\vec y_*, z) \; . 
\end{align} It is well known that in the absence of a loop defect, the 3d index $\sum^{\infty}_{k=0}\sum_{\{\vec y_*\}\in\mathcal{M}^{bulk}_k} I^{3d}_{bulk}(\vec y_*)=\left[\widetilde{\chi}\right]^{(0,\ldots, 0)}_{3d}$ is in fact equal to the quantum mechanical index $\sum^{\infty}_{k=0}\sum_{\{\vec\phi_*\}\in\mathcal{M}^{pure}_k} Z^{1d}_{pure}(\vec\phi_*)=\left[\widetilde{\chi}\right]^{(0,\ldots, 0)}_{1d}$, up to overall normalization. We refer the reader to the references \cite{Beem:2012mb,Bullimore:2014awa,Hwang:2017kmk,Aganagic:2017smx} for details. Because the contents of the sets $\mathcal{M}^{bulk}_k$ and $\mathcal{M}^{pure}_k$ are in one-to-one correspondence for all $k$, this implies in particular that the summands $I^{3d}_{bulk}(\vec y_*)$ and $Z^{1d}_{pure}(\vec\phi_*)$ are the same, up to normalization. We therefore only need to investigate the remaining factor $Z^{(a)}_{defect, k}(\vec\phi_*,M)$ in \eqref{residuesum1d}. We switch to K-theoretic variables $z=e^{-\widehat{R} M}$, $f^{(a)}_{i}=e^{-\widehat{R}\mu^{(a)}_i}$, and let \beq y^{(a)}_{i,*} = f^{(b)}_i\, q^{k^{(a)}_i+1} \, v^{-2\#^{(ab)}}\;,\;\; i\in\{1,\ldots,N^{(a)}\}\;, \;\; b\in\{1,\ldots,n\} \; . \eeq We further renormalize the masses to make contact with their 3d definitions: \beq f^{(b)}_{i} \equiv v^{3\#^{(ab)}+1}\,x^{(b)}_{i} \; . \eeq After a finite number of telescopic cancellations, the 1d $Y$-operator at the locus \eqref{polepureagain} becomes: \begin{align}\label{fromY1dtoY3d} Z^{(a)}_{defect, k}({y}^{(a)}_{i,*},z) &=\prod_{i=1}^{N^{(a)}}\frac{1-t\, y^{(a)}_{i,*}/z}{1-y^{(a)}_{i,*}/z}\cdot\prod_{i=1}^{N^{(a)}}\frac{1- v^{\#^{(ab)}+1}\, x^{(b)}_{i}/z}{1- t\, v^{\#^{(ab)}+1}\, x^{(b)}_{i}/z} \; . \end{align} We recognize the first product as the 3d/1d contribution \beq {\widetilde{Y}^{(a)}_{defect}}(\vec y_*, z)= \prod_{i=1}^{N^{(a)}}\frac{1-t\, y^{(a)}_{i,*}/z}{1- y^{(a)}_{i,*}/z} \; . \eeq Meanwhile, the second product is part of the 3d/1d contribution \beq \widetilde{Y}^{(a)}_{flavor}(\{x^{(b)}_d\}, z) = \prod_{i=1}^{N^{(a)}}\frac{1- v^{\#^{(ab)}+1}\, x^{(b)}_{i}/z}{1- t\, v^{\#^{(ab)}+1}\, x^{(b)}_{i}/z}\cdot c^{(a)}_{3d/1d}(\{x^{(b)}_d\}, z)\; , \eeq where the leftover factors have been collected in the expression $c^{(a)}_{3d/1d}(\{x^{(b)}_d\}, z)$; this factor can be determined exactly by simply comparing the above result to the contribution \eqref{3dWilsonfactor2}, if one wishes. Putting it all together, we have shown that the $Y$-operator vevs are proportional to each other: \beq\label{Yequality} \left\langle{\widetilde{Y}^{(a)}_{3d}(z)} \right\rangle = c^{(a)}_{3d/1d}(\{x^{(b)}_d\}, z) \, \left\langle {Y_{1d}^{(a)}(z)} \right\rangle \; . \eeq Now, the 3d/1d half-index is a Laurent series in $Y$-operator vevs, with the same functional form and number of terms as the index of $T^{1d}$. This does not yet guarantee the indices are the same, since $c^{(a)}_{3d/1d}$ appears as a relative factor between the various terms of the character. Remarkably, one can show, after computing each term in the character, that they all share the \emph{same} proportionality factor $c^{(a)}_{3d/1d}$, so it can be factored out entirely. Considering a general Wilson loop in the $(L^{(1)},\ldots, L^{(n)})$ representation of $G$, the normalized indices of $G^{3d}$ and its quantum mechanics $T^{1d}$ therefore satisfy: \begin{align}\label{CHIequality} \left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{3d}(\{z^{(a)}_\rho\})= c_{3d/1d}\cdot \left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{1d}(\{z^{(a)}_\rho\})\; .
\end{align} The proportionality constant $c_{3d/1d}$ is simply: \beq c_{3d/1d}=\prod_{a=1}^{n}\prod_{\rho=1}^{L^{(a)}} c^{(a)}_{3d/1d}(\{x^{(b)}_d\}, z^{(a)}_\rho)\; . \eeq \vspace{16mm} \section{Schwinger-Dyson Equations: the ${\cW}_{q,t}({\fg})$-Algebra Perspective} \label{sec:deformedWsection} The BPS/CFT correspondence predicts that Schwinger-Dyson equations for the gauge theory $G^{3d}$ should have a counterpart as a set of Ward identities for a conformal field theory, or a deformation thereof, on a Riemann surface. In this paper, we show that the vortex $qq$-character is a certain (chiral) correlator of a deformed ${\cW}({\fg})$-algebra on the cylinder. \subsection{The Deformed ${\cW}_{q,t}({\fg})$-Algebra} \label{ssec:qToda} Let $\fg$ be a simply-laced Lie algebra. In the work \cite{Frenkel:1998}, a deformation of ${\cW}({\fg})$-algebras was proposed, in the free field formalism, based on a certain canonical deformation of the screening currents; this is the symmetry algebra of the so-called $\fg$-type $q$-Toda theory on a cylinder. See also \cite{Shiraishi:1995rp} for the special case $\fg=A_1$, and \cite{Feigin:1995sf,Awata:1995zk} for the case $\fg=A_n$. The starting point is to define a $(q,t)$-deformed Cartan matrix\footnote{In what follows, we follow the conventions of \cite{Frenkel:1998}. Other definitions of the deformed Cartan matrix are possible, by introducing explicit ``bifundamental masses" in the off-diagonal entries \cite{Kimura:2015rgi}.}: \begin{align}\label{CartanToda} C_{ab}(q,t)= \left(q\, t^{-1} +q^{-1}\, t\right) \, \delta_{ab}- [\Delta_{ab}]_q\; . \end{align} Let us explain the notation: a number in square brackets is called a quantum number, defined as \begin{align}\label{quantumnumber} [n]_q = \frac{q^{n}-q^{-n}}{q-q^{-1}} \; , \end{align} so that, for instance, $[2]_q=q+q^{-1}$. The incidence matrix is $\Delta_{ab}= 2 \, \delta_{ab} - C_{ab}$, with $C_{ab}$ the ordinary Cartan matrix of $\fg$. Later, we will also need the inverse of the deformed Cartan matrix: \beq\label{Mmatrixdef} M(q,t)= C(q,t)^{-1}\; . \eeq One then constructs a deformed Heisenberg algebra, generated by the modes $\alpha_a[k]$ of the $n$ simple roots, satisfying: \begin{align}\label{commutatorgenerators} [\alpha_a[k], \alpha_b[m]] = {1\over k} (q^{k\over 2} - q^{-{k\over 2}})(t^{{k\over 2} }-t^{-{k\over 2} })C_{ab}(q^{k\over 2} , t^{k\over 2} ) \delta_{k, -m} \; . \end{align} In the above, it is understood that the zero modes commute with all other generators: $[\alpha_a[k], \alpha_b[0]]=0$, for $k$ an arbitrary integer. The Fock space representation of this algebra is given by acting on the vacuum state $|\psi\rangle$: \begin{align} \alpha_a[0] |\psi\rangle &= \langle\psi, \alpha_a\rangle |\psi\rangle\label{eigenvalue}\nonumber\\ \alpha_a[k] |\psi\rangle &= 0\, , \qquad\qquad\;\; \mbox{for} \; k>0\; . \end{align} Then, we define deformed screening operators as\footnote{The screening operators we write down are called ``magnetic" in \cite{Frenkel:1998}, and are defined with respect to the parameter $q$. Another set of ``electric" screening and vertex operators can be constructed using the parameter $t$ instead, but these will not enter our discussion. From the point of view of the 3d gauge theory, this amounts to having $G^{3d}$ defined on the manifold $\mathbb{C}_t\times S^1(\widehat{R})$ instead of the manifold $\mathbb{C}_q\times S^1(\widehat{R})$. We made the choice of working on the latter manifold in this paper, which is why we only make use of magnetic screenings here.}
\beq\label{screeningdef} S^{(a)}(y) = y^{-\alpha_a[0]}\,: \exp\left(\sum_{k\neq 0}{ \alpha_a[k] \over q^{k\over 2} - q^{-\,{k\over 2}}} \, y^k\right): \; . \eeq Note all operators in this section are written up to a center of mass zero mode, whose effect is simply to shift the momentum of the vacuum. Up to redefinition of the vacuum $|\psi\rangle$, we safely ignore such factors. The ${\cW}_{q,t}({\fg})$-algebra is defined as the associative algebra whose generators are Fourier modes of the operators commuting with the screening charges, \beq\label{screeningchargedef} Q^{(a)} =\int dy\, S^{(a)}(y) \; . \eeq We denote the generating currents as $W^{(s)}(z)$, labeled by their ``spin" $s$. We therefore have\footnote{Note that the generating currents must also commute with the set of electric screening charges we mentioned in the previous footnote.}: \beq\label{generatorsdef} [W^{(s)}(z),Q^{(a)}]=0\; , \qquad \text{for all}\;\; a=1, \ldots, n, \;\;\text{and}\;\; s=2, \ldots,n+1\; . \eeq In this way, one finds that every generating current can be written as a Laurent polynomial in certain vertex operators, which we call $\cY$-operators for reasons that will soon be clear: \beq\label{YoperatorToda} {\cY^{(a)}}(z)= e^{\lambda_a[0]}\,: \exp\left(-\sum_{k\neq 0} w_a[k]\, t^{-k/2} \, z^k\right): \; . \eeq Degenerate vertex operators are constructed out of $n$ fundamental weight generators, \begin{align}\label{commutator2} [\alpha_a[k], w_b[m]] ={1\over k} (q^{k \over 2} - q^{-{k \over 2} })(t^{{k \over 2}}-t^{-{k \over 2} })\,\delta_{ab}\,\delta_{k, -m} \, . \end{align} These are dual to the operators $\alpha_a[k]$. Put differently, \beq\label{etow} \alpha_a[k] = \sum_{b=1}^n C_{ab}(q^{k\over 2},t^{k \over 2})\, w_b[k]\; . \eeq For completeness, we also write the commutator of two coweight generators, \begin{align}\label{commutator3} [w_a[k], w_b[m]] ={1\over k} (q^{k\over 2} - q^{-{k \over 2} })(t^{{k \over 2}}-t^{-{k \over 2} })\,M_{ab}(q^{k\over 2} , t^{k\over 2})\,\delta_{k, -m} \, . \end{align} Among the vertex operators of the theory, a distinguished class which will enter our story is the set of so-called \emph{fundamental} vertex operators \cite{Frenkel:1998}: \beq\label{fundvertex} V^{(a)}(x) = x^{w_a[0]}\,: \exp\left(-\sum_{k\neq 0}{ w_a[k] \over q^{k\over 2} - q^{-\,{k \over 2}}} \, x^k\right): \; . \eeq Before we proceed, let us briefly comment on an important limit: if we rescale $q= \exp(\widehat{R} \epsilon_1),\, t=\exp(-\widehat{R} \epsilon_2)$ and take $\widehat{R}\rightarrow 0$, the deformed ${\cW}_{q,t}({\fg})$-algebra becomes the standard ${\cW}({\fg})$-algebra\footnote{Note this is \emph{not} the limit which produces 2d vortex characters in gauged sigma models by circle reduction of the 3d gauge theory on the circle. See the work \cite{Nieri:2019mdl} for that limit instead.}, which is the symmetry algebra of $\fg$-type Toda conformal field theory\footnote{We presented ${\cW}_{q,t}({\fg})$-algebras in the free field formalism since it is the only known way to deform ${\cW}({\fg})$-algebras. This is sometimes called the Coulomb gas formalism, or the Dotsenko-Fateev formalism \cite{Dotsenko:1984nm}. For a modern treatment of the topic, we refer the reader to \cite{Dijkgraaf:2009pc,Itoyama:2009sc,Mironov:2010zs,Morozov:2010cq,Maruyoshi:2014eja}.}; for an extensive review, see \cite{Bouwknegt:1992wg}. 
In particular, if we set $b\equiv -\epsilon_2/\epsilon_1$, the central charge of the theory is $c=n+12 \left\langle Q,Q\right\rangle$, where $Q=\rho\, ( b + 1/b)$ is the background charge, $\rho$ the Weyl vector of $\fg$, and the bracket is the Cartan-Killing form. For the Heisenberg algebra \eqref{commutatorgenerators} to remain well-defined, the root (and weight) generators should also be rescaled by $\widehat{R}$ and $\epsilon_1$ in this limit. The deformed screening currents \eqref{screeningdef} become \beq\label{screeningcurrent} S^{(a)}(y) = \;:e^{\langle\alpha_{a}, \varphi(y)\rangle/b}: \; , \eeq with $\alpha_a$ the $a$-th simple root of $\fg$, and $\varphi$ an $n$-dimensional boson. The deformed fundamental vertex operators \eqref{fundvertex} become vertex operators of unit momentum, \beq\label{primary} V^{(a)}(x) = \; :e^{\langle w_a, \varphi(x)\rangle/b}: \; , \eeq with $w_a$ the $a$-th fundamental weight of $\fg$. Furthermore, in the limit $\widehat{R}\rightarrow 0$, the deformed generators $W^{(s)}(z)$ become the stress tensor and higher spin currents of the ${\cW}({\fg})$-algebra. In the special case $\fg= A_1$, the Toda theory is the Liouville CFT, and ${\cW}({A_1})$ is more commonly called the Virasoro algebra, generated by the spin 2 stress-energy tensor $W^{(2)}(z)$. When $\fg$ is a higher rank algebra, the stress tensor $W^{(2)}(z)$ is still present, but there are also more currents $W^{(s)}(z)$ of higher spin $s>2$. As a concrete example, consider the deformed stress tensor of ${\cW}_{q,t}({A_1})$. It is a Laurent polynomial in the $\cY$ operators \eqref{YoperatorToda}: \beq\label{examplestress} W^{(2)}(z)= \cY(z) + \left[\cY(v^{-2}z)\right]^{-1} \; . \eeq This can be checked explicitly by computing the commutator $[W^{(s)}(z),Q^{(a)}]$, and finding that it indeed vanishes. In the limit $q= \exp(\widehat{R} \epsilon_1),\, t=\exp(-\widehat{R} \epsilon_2)$, after the further rescaling of the Heisenberg algebra generators, one finds \beq\label{examplestress2} W^{(2)}(z) \qquad \longrightarrow \qquad-\frac{1}{2}:\left(\partial_z\phi(z)\right)^2:+ Q\, :\partial_z^2\phi(z):\; . \eeq The reader will recognize the Liouville stress-energy tensor of the ${\cW}({A_1})$-algebra. \subsection{The Vortex $qq$-character is a Deformed ${\cW}_{q,t}({\fg})$-Algebra Correlator} We are interested in evaluating the following correlator: \beq\label{correlatordef} \left\langle \psi'\left|\prod_{a=1}^{n}\prod_{d=1}^{N^{(a)}_F}V^{(a)}(x^{(a)}_d)\; (Q^{(a)})^{N^{(a)}}\; \prod_{s=2}^{n+1}\prod_{\rho=1}^{L^{(s-1)}}W^{(s)}(z^{(s-1)}_\rho) \right| \psi \right\rangle \, . \eeq In what follows, we use the shorthand notation $\langle \mathellipsis\rangle$ for a vacuum expectation value. The incoming and outgoing states are written as $|\psi\rangle$ and $|\psi'\rangle$ respectively, instead of the trivial vacuum $|0\rangle$. Because the theory is defined in the free-field formalism, the above correlator can be evaluated using straightforward Wick contractions, as an integral over the positions $y$ of the $N^{(a)}$ screening currents. Namely, after taking into account the normal ordering of the various operators, the correlator \eqref{correlatordef} becomes the integral \beq\label{conf1def} \int d_{Haar}y \;I_{Toda}(y) \; , \eeq where the Haar measure is given by \beq\label{Haardef} d_{Haar}y=\prod_{a=1}^{n}\prod_{i=1}^{N^{(a)}}\frac{dy^{(a)}_i}{y^{(a)}_i} \; . \eeq The integrand $I_{Toda}(y)$ is made up of various factors.
First, we have \beq\label{TodaFI} \prod_{a=1}^{n}\prod_{i=1}^{N^{(a)}} \left(y^{(a)}_i\right)^{\langle\psi, \alpha_a\rangle}\; , \eeq where $\langle\psi, \alpha_a\rangle$ is the eigenvalue of the state $|\psi\rangle$, as we have defined it previously \eqref{eigenvalue}. In 3d gauge theory language, this is nothing but the FI term contribution \eqref{FI3d} to the index $\left[\widetilde{\chi}\right]_{3d}$. There are also various two-point functions: for a given $a\in \{1,\ldots, n\}$, we find by direct computation: \beq\label{screena} \prod_{1\leq i< j\leq N^{(a)}}\left\langle S^{(a)}(y^{(a)}_i)\, S^{(a)}(y^{(a)}_j) \right\rangle = \prod_{1\leq i\neq j\leq N^{(a)}}\frac{\left(y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}}{\left(t\, y^{(a)}_{i}/y^{(a)}_{j};q\right)_{\infty}}\;\prod_{1\leq i<j\leq N^{(a)}} \frac{\Theta\left(t\,y^{(a)}_{j}/y^{(a)}_{i};q\right)}{\Theta\left(y^{(a)}_{j}/y^{(a)}_{i};q\right)}\; . \eeq We recognize the vector multiplet contribution \eqref{vec3d} to the index $\left[\widetilde{\chi}\right]_{3d}$. For $a$ and $b$ two distinct nodes in the Dynkin diagram of $\fg$, we compute: \beq\label{screenab} \prod_{1\leq i \leq N^{(a)}}\prod_{1\leq j \leq N^{(b)}}\left\langle S^{(a)}(y^{(a)}_i)\; S^{(b)}(y^{(b)}_j) \right\rangle = \prod_{1\leq i \leq N^{(a)}}\prod_{1\leq j \leq N^{(b)}}\left [ \frac{(t\,v\, y^{(a)}_{i}/y^{(b)}_{j};q)_{\infty}}{(v\, y^{(a)}_{i}/y^{(b)}_{j};q)_{\infty}}\right]^{\Delta^{ab}}\; . \eeq We recognize the bifundamental contribution \eqref{bif3d} to the index $\left[\widetilde{\chi}\right]_{3d}$. The two-point function of a fundamental vertex operator with a screening current equals: \beq\label{vertexscreening} \prod_{i=1}^{N^{(a)}}\left\langle V^{(a)}(x^{(a)}_d)\; S^{(b)}(y^{(b)}_i)\right\rangle =\prod_{i=1}^{N^{(a)}} \left[\frac{\left(t\, v\, x^{(a)}_{d}/y^{(b)}_{i};q\right)_{\infty}}{\left(v\, x^{(a)}_{d}/y^{(b)}_{i};q\right)_{\infty}}\right]^{\delta^{ab}}\; . \eeq We recognize the flavor contributions \eqref{matter3d} to the index $\left[\widetilde{\chi}\right]_{3d}$. We come to the two-point function of a screening current with a $\cY$-operator, which evaluates to: \beq\label{Yopscreening} \prod_{i=1}^{N^{(b)}}\left\langle S^{(b)}(y^{(b)}_i)\, \cY^{(a)}(z) \right\rangle = \prod_{i=1}^{N^{(b)}} \left[\frac{1-t\, y^{(b)}_{i}/z}{1- y^{(b)}_{i}/z}\right]^{\delta_{ab}}\; . \eeq We recognize part of the Wilson loop contribution \eqref{3dWilsonfactor} to the index $\left[\widetilde{\chi}\right]_{3d}$. Note that the zero mode of the $\cY$-operator \eqref{YoperatorToda} acts nontrivially on the vacuum $|\psi\rangle$. As a result, the two-point function of a screening current with a $\cY$-operator generates a relative shift of one unit of 3d FI parameter between the various terms of the generating current $W^{(s)}(z)$. The last missing ingredient is the two-point function of a fundamental vertex operator with a $\cY$-operator, which at first sight takes a far less elegant form: \beq\label{Yopvertex} \left\langle V^{(b)}(x^{(b)}_d)\; \cY^{(a)}(z) \right\rangle = \exp\left(\sum_{k>0}\,M_{ba}(q^{k\over 2} , t^{k\over 2})\, \frac{t^k-1}{k} \, \left(\frac{x^{(a)}_d}{z}\right)^k\right)\; . \eeq Recall that $M_{ba}$ is the inverse of the deformed Cartan matrix. Fortunately, this two-point function can be rewritten as: \beq\label{Vopvertex} \left\langle V^{(b)}(x^{(b)}_d)\; \cY^{(a)}(z) \right\rangle = B(x^{(b)}_d, z)\; \frac{1- v^{\#^{(ab)}+1}\, x^{(b)}_{d}/z}{1- t\, v^{\#^{(ab)}+1}\, x^{(b)}_{d}/z}.
\eeq where $\#^{(ab)}$ is a non-negative integer equal to the number of links between nodes $a$ and $b$ in the Dynkin diagram of $\fg$, and $B(x^{(b)}_d, z)$ is \emph{defined} by the above two equations: it is literally the exponential in \eqref{Yopvertex} divided by the ratio on the right-hand side of \eqref{Vopvertex}. It may seem like we have not gained much by trading the exponential \eqref{Yopvertex} for a new, seemingly artificial prefactor $B(x^{(b)}_d,z)$, but this is not so, for two reasons. First, the flavor part of the defect $Y$-operator contribution \eqref{3dWilsonfactor2} in the 3d/1d half-index now appears explicitly. Second, a remarkable factorization comes into play when we consider not just the $\cY$-operator inside the correlator, but the full generating current $W^{(s)}(z)$, which is a Laurent polynomial in such $\cY$-operators. Namely, each term in this polynomial produces a two-point function of the form \eqref{Vopvertex}, with the \textit{same} prefactor $B(x^{(b)}_d,z)$. This implies that the prefactor $B(x^{(b)}_d,z)$ can be factorized out of the correlator integral altogether. Note that a related factorization was also noticed in the gauge theory picture; see the discussion under \eqref{Yequality}.\\ To fully specify the correlator integral, we also need to make a choice of contour. Here, the contours are simply chosen to be the ones we used in defining the 3d half-index, enclosing the poles in $\mathcal{M}^{bulk}$. In particular, the contours will avoid all poles depending on the generating current fugacity $z$.\\ For a general correlator, we can now claim: \begin{empheq}[box=\fbox]{align} &\frac{\left\langle \psi'\left|\prod_{a=1}^{n}\prod_{d=1}^{N^{(a)}_f}V^{(a)}(x^{(a)}_d)\; (Q^{(a)})^{N^{(a)}}\;\prod_{s=2}^{n+1}\prod_{\rho=1}^{L^{(s-1)}}W^{(s)}(z^{(s-1)}_\rho) \right| \psi \right\rangle}{\left\langle \psi'\left|\prod_{a=1}^{n}\prod_{d=1}^{N^{(a)}_f}V^{(a)}(x^{(a)}_d)\; (Q^{(a)})^{N^{(a)}} \right| \psi \right\rangle} \label{correlatoris3dindex} \\ &\qquad\qquad\qquad\qquad\qquad\qquad= B\left(\{x^{(b)}_d\},\{z^{(s-1)}_\rho\}\right)\, \left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{3d}(\{z^{(s-1)}_\rho\})\; ,\nonumber \end{empheq} where the overall prefactor is \beq B\left(\{x^{(b)}_d\},\{z^{(s-1)}_\rho\}\right) =\prod_{s=2}^{n+1}\prod_{\rho=1}^{L^{(s-1)}}\prod_{b=1}^{n}\prod_{d=1}^{N^{(b)}_f}B\left(x^{(b)}_d,z^{(s-1)}_\rho\right)\; . \eeq Naturally, $B(\{x_d\},\{z^{(s)}_\rho\})$ stands outside the correlator integrals, since it does not depend on the $y$-integration variables. In the end, we find that the 3d/1d index of the gauge theory $G^{3d}$ with a Wilson loop is a deformed ${\cW}_{q,t}({\fg})$-algebra correlator, up to the overall prefactor $B(\{x_d\},\{z^{(s)}_\rho\})$. Recall that this prefactor contains an exponential; we mention in passing that this ``phase'' has a natural interpretation when $\fg=A_n$, in the general framework of Ding-Iohara-Miki (DIM) algebras \cite{Ding:1996,Miki:2007}; for a detailed study, see \cite{Mironov:2016yue}. Roughly speaking, in the DIM formalism, a vertex operator ${\mathbb{V}}^{(a)}(x^{(a)}_d)$ is built using intertwiners, as the product of a $V^{(a)}(x^{(a)}_d)$ vertex operator from the ${\cW}_{q,t}({A_n})$-algebra, and another vertex operator coming from an additional Heisenberg algebra.
This extra Heisenberg algebra comes with its own Fock space, and contributes to the correlator in a way that precisely cancels the prefactor $B(\{x_d\},\{z^{(s)}_\rho\})$.\\ \vspace{16mm} \section{Schwinger-Dyson Equations: the Little String Perspective} \label{sec:littlestringsecion} We have showcased three different physical frameworks where we can make sense of a vortex character for the 3d $\cN=4$ gauge theory $G^{3d}$. Moreover, we saw explicitly how the regularity properties of this character imply non-perturbative Schwinger-Dyson identities for the theory. Ultimately, all the various perspectives are unified in a string theory picture. In the process of describing it, we will learn about the dynamics of new defects in $(2,0)$ little string theory. The literature on BPS-defects of the little string has been steadily growing in the last few years, with rich physical and mathematical implications: among them, we find codimension-2 defects \cite{Aganagic:2015cta,Haouzi:2016ohr,Haouzi:2016yyg,Haouzi:2017vec}, codimension-4 defects \cite{Aganagic:2016jmx,Aganagic:2017smx}, point and codimension-2 defects \cite{Haouzi:2019jzk}, and in the present work, new codimension-4 defects\footnote{Equivalently, these are all T-dual defects in the $(1,1)$ little string theory.}. \subsection{Little String Basics} \label{ssec:basics} We consider ten-dimensional type IIB string theory compactified on an $ADE$ surface $X$, meaning type IIB on $X\times M_6$. The six-manifold $M_6$ is the product of an infinite cylinder $\cC= \mathbb{R} \times S^1(R)$ of radius $R$ and two complex lines, which we distinguish using the subscript notation $\mathbb{C}_q$ and $\mathbb{C}_t$, so that $M_6=\cC\times\mathbb{C}_q\times\mathbb{C}_t$. $X$ is a resolution of a ${\mathbb C}^2/\Gamma$ singularity, where $\Gamma$ is one of the discrete subgroups of $SU(2)$. By the McKay correspondence, such a discrete subgroup is labeled by one of the simply-laced Lie algebras ${\fg=A, D, E}$; we call $n$ the rank of $\fg$. Explicitly, the singularity is resolved by blow-up: the exceptional divisor is a collection of 2-spheres $S_a$, $a=1,\ldots,n$, which organize themselves in the shape of the Dynkin diagram of $\fg$. We focus our attention on a sector of the theory which has far fewer degrees of freedom than the full IIB string. That is, we decouple gravity and focus only on the degrees of freedom supported near the origin of $X$ by sending the string coupling to zero, $g_s\rightarrow 0$. In this limit, the type IIB string on $X$ becomes a six-dimensional string theory on $M_6$, known as the $(2,0)$ little string of type $\fg=A, D, E$ \cite{Berkooz:1997cq,Seiberg:1997zk,Losev:1997hx}. It is not a local QFT \cite{Kapustin:1999ci}. It is instead a theory of strings proper (inherited from the ten-dimensional IIB strings), with finite tension $m_s^2$, the square of the string mass. There are a few good reviews in the literature, most notably \cite{Aharony:1999ks,Kutasov:2001uf}. The moduli space of the $(2,0)$ little string is \beq \left(\mathbb{R}^4\times S^1\right)^{n}/W(\fg)\; , \eeq where $W(\fg)$ is the Weyl group of $\fg$. The moduli come from periods of various 2-forms along the 2-cycles $S_a$ of the surface $X$: the $S^1$ modulus is the period of the R-R 2-form $C^{(2)}$ of the ten-dimensional type IIB string theory over $S_a$.
Meanwhile, the $\mathbb{R}^4$ moduli come from the NS-NS B-field $B^{(2)}$, and a triplet of self-dual 2-forms $\omega_{I,J,K}$, which exist because $X$ is a hyperk\"ahler manifold. To get the correct R-R and NS-NS normalizations, one needs to recall the low energy action of the type IIB superstring. In particular, the R-R field is not accompanied by any power of $g_s$. Moreover, the resulting six-dimensional scalars should have mass dimension 2. Then, in canonical normalization, we obtain: \beq\label{periods} \frac{m_s^4}{g_s}\int_{S_a}\omega_{I,J,K}\, ,\qquad \frac{m_s^2}{g_s}\int_{S_a}B^{(2)}\, , \qquad m_s^2\int_{S_a}C^{(2)}\, . \eeq The above periods remain fixed in the limit $g_s\rightarrow 0$.\\ As is, this background preserves 16 supercharges. Ultimately, we want to make contact with three-dimensional physics and produce nontrivial dynamics. We can achieve both goals at once by introducing various supersymmetric branes. Since our construction originates in type IIB, we naturally consider adding certain D-branes, whose tension should remain finite in the $g_s\rightarrow 0$ limit. As we will argue, the relevant branes to consider here are D3 branes wrapping 2-cycles of the surface $X$, which we now turn to. \subsection{The Effective Theory on D3-Branes} \label{ssec:simplylaced} To be more quantitative, we introduce some notation: according to the McKay correspondence, the second homology group $H_2(X, \mathbb{Z})$ of $X$ is identified with the root lattice $\Lambda$ of $\fg$. Then, $H_2(X, \mathbb{Z})$ is spanned by $n$ vanishing 2-cycles $S_a$, which we identify with the positive simple roots $\alpha_a$. The intersection pairing in homology is further identified with the Cartan-Killing metric of $\fg$; explicitly, \beq \# (S_a \cap S_b)=-C_{ab}\; , \eeq where $C_{ab}$ is the Cartan matrix of $\fg$. We also consider the second relative homology group $H_2(X, \partial X, \mathbb{Z})$. This group is spanned by non-compact 2-cycles $S_a^*$, $a=1,\ldots, n$, where each $S_a^*$ is constructed as the fiber of the cotangent bundle $T^*S_a$ over a generic point on $S_a$. The group $H_2(X, \partial X, \mathbb{Z})$ is identified with the weight lattice $ \Lambda_*$ of $\fg$; correspondingly, the 2-cycle $S^*_a$ is identified with the $a$-th fundamental weight $\lambda_a$ of $\fg$. In particular, the following orthonormality relation holds in homology: \beq \# (S_a \cap S^*_b)=\delta_{ab}\; . \eeq Note that $H_2(X, \mathbb{Z})\subset H_2(X, \partial X, \mathbb{Z})$, since compact 2-cycles can be understood as elements of $H_2(X, \partial X, \mathbb{Z})$ with trivial boundary at infinity. This is just the homological version of the familiar statement that the root lattice of $\fg$ is a sublattice of the weight lattice, $\Lambda\subset \Lambda_*$.\\ \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 3cm},clip,width=0.99\textwidth]{geometry} \vspace{-1pt} \caption{A vanishing 2-cycle of an $A_n$ singularity, labeled by $S_a$ (the black 2-sphere), and the dual non-compact 2-cycle $S_a^*$ (the black cigar).} \label{fig:geometry} \end{figure} Consider a total of $N$ D3$_{gauge}$ branes wrapping the compact 2-cycles of $X$ and one of the complex lines $\mathbb{C}_q$ in $M_6$, while sitting at the origin of the transverse complex line $\mathbb{C}_t$. This results in a net non-zero D3$_{gauge}$ brane charge, measured by a class $[S]\in H_2(X, \mathbb{Z})$.
We expand $[S]$ in terms of positive simple roots as \beq\label{compact} [S] = \sum_{a=1}^n \,N^{(a)}\,\alpha_a\;\; \in \,\Lambda \; , \eeq with $N^{(a)}$ non-negative integers. The $N$ D3$_{gauge}$ branes are points on the cylinder $\cC$, with coordinates $\{y^{(a)}_i\}$.\\ Next, we consider a total of $N_F$ D3$_{flavor}$ branes wrapping non-compact 2-cycles in $X$, along with the same complex line $\mathbb{C}_q$ in $M_6$, while also sitting at the origin of the transverse complex line $\mathbb{C}_t$. The charge for these branes is measured by a class $[S^*]\in H_2(X, \partial X, \mathbb{Z})$. We expand $[S^*]$ in terms of fundamental weights as \beq\label{noncompact} [S^*] = - \sum_{a=1}^n \, N^{(a)}_f \, \lambda_a \;\; \in\, \Lambda_*\; , \eeq where $N^{(a)}_f$ are non-negative integers commonly called Dynkin labels. The $N_F$ D3$_{flavor}$ branes are points on the cylinder $\cC$, with coordinates $\{x^{(a)}_d\}$.\\ Lastly, we introduce $L$ D3$_{defect}$ branes wrapping the non-compact 2-cycles in $X$ and the transverse complex line $\mathbb{C}_t$, while sitting at the origin of $\mathbb{C}_q$. The charge for these D3$_{defect}$ branes is measured by a class $[S^*_{defect}]\in H_2(X, \partial X, \mathbb{Z})$, expanded in terms of fundamental weights as: \begin{equation} [S^*_{defect}] = - \sum_{a=1}^n \, L^{(a)} \, \lambda_a \;\; \in\, \Lambda_*\; , \end{equation} where $L^{(a)}$ are non-negative integers, once again Dynkin labels. The $L$ D3$_{defect}$ branes are points on the cylinder $\cC$, with coordinates $\{z^{(a)}_\rho\}$.\\ \begin{figure}[h!] \centering \includegraphics[width=1.0\textwidth]{branes} \vspace{-12pt} \caption{The brane configuration in type IIB: there are $N$ D3$_{gauge}$ branes wrapping compact 2-cycles $S_{a}$ and $\mathbb{C}_q$ (yellow), and $N_{F}$ D3$_{flavor}$ branes wrapping non-compact 2-cycles $S_{a}^{*}$ and $\mathbb{C}_q$ (red). There are also $L$ D3$_{defect}$ branes wrapping the non-compact 2-cycles $S_{a}^{*}$ and $\mathbb{C}_t$ (green). All branes are points on the cylinder $\cC$. Later, we will also consider the quantum mechanics of $k$ D1$_{vortex}$ branes (not pictured) wrapping the compact 2-cycles $S_{a}$.} \label{fig:branes} \end{figure} Let us first ignore the D3$_{defect}$ branes. At energies $E$ well below the string scale, $E/m_s\ll 1$, the effective theory on the D3$_{gauge}$ branes is a three-dimensional gauge theory with $\cN=4$ supersymmetry\footnote{Recall we have defined the periods of a triplet $\vec{\omega}=(\omega_I, \omega_J, \omega_K)$ of self-dual 2-forms \eqref{periods}. The D3$_{flavor}$ branes wrapping the non-compact 2-cycles preserve the same supersymmetry only if the vectors $\int_{S^*_a} \vec{\omega}$ all point in the same direction, for all $a=1, \ldots, n$. Having made such a choice, we then have to worry about the supersymmetry preserved by the D3$_{gauge}$ branes wrapping the compact 2-cycles. This is determined by the periods of the 2-forms through the 2-cycles $S_a$, which is the choice of a metric on $X$. It is then always possible to choose D3 branes wrapping the compact and non-compact 2-cycles which break the same supersymmetry.}, on the manifold $\mathbb{C}_q\times S^1(R)$. At first sight, our brane setup may suggest that the gauge theory we obtain should only be two-dimensional, with $\cN=(4,4)$ supersymmetry.
However, this is not the case: the D3$_{gauge}$ branes are points on the cylinder $\cC=\mathbb{R}\times S^1(R)$, and strings wind around the circle, resulting in a tower of states which are Kaluza-Klein modes on the T-dual circle of radius $\widehat{R}=1/(m^2_s\, R)$; these modify the low-energy physics. Put differently, the $(2,0)$ little string compactified on $S^1(R)$ enjoys T-duality (inherited from type IIB), under which it becomes the $(1,1)$ little string theory compactified on $S^1(\widehat{R})$. Then, the D3$_{gauge}$ branes at points on the cylinder $\cC=\mathbb{R}\times S^1(R)$ in the $(2,0)$ little string are exactly the same as D4$_{gauge}$ branes wrapping the circle of the T-dual cylinder $\cC'=\mathbb{R}\times S^1(\widehat{R})$, in the $(1,1)$ little string. It is clear in the second description that the low energy theory really is three-dimensional, on $\mathbb{C}_q\times S^1(\widehat{R})$. We call this gauge theory $G^{3d}$. The choice of this denomination is not innocent, since we will now argue that the low energy theory on the branes is precisely the 3d theory we have studied in the rest of this paper.\\ The precise characterization of $G^{3d}$ was determined by Douglas and Moore \cite{Douglas:1996sw}: it is a quiver gauge theory whose quiver is the Dynkin diagram of $\fg=ADE$\footnote{The original analysis of \cite{Douglas:1996sw} was carried out in the full type IIB background, and the quiver gauge theory there was labeled by an affine Dynkin diagram. Here, we are working in the little string limit $g_s\rightarrow 0$, and the affine node is decoupled.}. The gauge group is \beq\label{gaugegroup} G=\prod_{a=1}^n U(N^{(a)})\; , \eeq where the ranks $N^{(a)}$ were defined in \eqref{compact} as the number of D3$_{gauge}$ branes wrapping the compact 2-cycle $S_a$, \[ [S] = \sum_{a=1}^n \,N^{(a)}\,\alpha_a\;\; \in \,\Lambda \; . \] The flavor symmetry group is \beq\label{flavorgroup} G_F=\prod_{a=1}^n U(N^{(a)}_f)\; , \eeq where the ranks $N^{(a)}_f$ were defined in \eqref{noncompact} as the number of D3$_{flavor}$ branes wrapping the non-compact 2-cycle $S^*_a$, \[ [S^*] = - \sum_{a=1}^n \, N^{(a)}_f \, \lambda_a \;\; \in\, \Lambda_*\; . \] Note that because the 2-cycles $S^*_a$ are non-compact, the associated gauge fields of $U(N^{(a)}_f)$ are frozen. This produces $N^{(a)}_f$ hypermultiplets on node $a$, in the bifundamental representation $(N^{(a)}, \overline{N^{(a)}_f})$ of the group $U(N^{(a)})\times U(N^{(a)}_f)$. Such multiplets come about from quantizing the strings stretching between the D3 branes wrapping the compact 2-cycle $S_a$ and the non-compact 2-cycle $S^*_a$. Finally, for $a\neq b$, we have hypermultiplets coming from the intersection of the 2-cycles $S_a$ and $S_b$ at a point. The intersection pairing $\#(S_a \cap S_b)$ in homology is identified with the incidence matrix of $\fg$. Open strings with one end on the $a$-th D3$_{gauge}$ brane and the other end on the $b$-th D3$_{gauge}$ brane result in a hypermultiplet in the bifundamental representation $(N^{(a)}, \overline{N^{(b)}})$ of $U(N^{(a)})\times U(N^{(b)})$. So far, the above stringy construction applies a priori to any configuration of D3$_{gauge}$ and D3$_{flavor}$ branes. The resulting effective 3d $\cN = 4$ gauge theory then inherits an arbitrary gauge and flavor content. The aim of this work is to exhibit certain symmetries associated with BPS vortices, which sit at Higgs vacua.
Therefore, from now on, we require the number $N_F$ of D3$_{flavor}$ branes to be large enough that $G^{3d}$ possesses a Higgs branch, and we take its vacua to be Higgs vacua.\\ Consider now adding the D3$_{defect}$ branes to the background. Recall that those branes wrap the non-compact 2-cycles of the geometry. As such, they are not dynamical. They are point defects on the $\mathbb{C}_q$-line, and they break the supersymmetry further by half. The number of such D3$_{defect}$ branes is $L=\sum_{a=1}^n L^{(a)}$. Correspondingly, the defects carry a background gauge group of their own, \beq\label{defectflavorgroup} \widehat{G}_{defect}=\prod_{a=1}^n U(L^{(a)})\;. \eeq \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{Tduality} \vspace{-40pt} \caption{T-duality tells us that the D3 branes at points on the cylinder $\cC$ in the $(2,0)$ little string are the same as D4 branes wrapping the T-dual cylinder $\cC'$ in the $(1,1)$ little string.} \label{fig:Tduality} \end{figure} The D3$_{defect}$ branes are points on $\cC$, or equivalently, following the same line of reasoning as before, they are D4$_{defect}$ branes wrapping the circle $S^1(\widehat{R})$ of the T-dual cylinder $\cC'$. Thus, from the point of view of the gauge theory $G^{3d}$, the D3$_{defect}$ branes really are 1/2-BPS defects wrapping $S^1(\widehat{R})$ and sitting at the origin of $\mathbb{C}_q$; in other words, they make up a Wilson loop of $G^{3d}$ \footnote{The realization of supersymmetric Wilson loops in string theory was first proposed in \cite{Maldacena:1998im,Rey:1998ik}, in the context of holography; namely, a loop in the first fundamental representation of $SU(N)$ is described as a fundamental string whose worldsheet ends at the loop, located at the boundary of $AdS$. Later, a description of the loops was given in terms of D-branes instead \cite{Drukker:2005kx,Gomis:2006sb,Yamaguchi:2006tq}, allowing for more general representations. This D-brane perspective is the one relevant to us here; in particular, the D3$_{defect}$ branes we study are T-dual to the D5' branes appearing in the work \cite{Assel:2015oxa}.}.\\ Translating the geometry to gauge theory data, the periods \eqref{periods} of the $(2,0)$ little string become parameters of $G^{3d}$. Namely, the modulus coming from the NS-NS $B^{(2)}$-field through the 2-cycle $S_a$ is identified with the gauge coupling on node $a$ of the quiver gauge theory. The periods of the triplet of self-dual 2-forms $\omega_{I,J,K}$ are the FI parameters. The positions of the $N$ D3$_{gauge}$ branes on the cylinder $\cC$ are (part of the) Coulomb moduli of $G^{3d}$. The positions of the $N_F$ D3$_{flavor}$ branes on $\cC$ are mass parameters for the fundamental hypermultiplets of $G^{3d}$. Finally, the positions of the $L$ D3$_{defect}$ branes on $\cC$ are the fermion masses \eqref{1dfermion3d} for the Wilson loop. All of the above moduli and parameters are complexified, due to the presence of the circle $S^1(\widehat{R})$ on which the 3d theory and the Wilson loop live.\\ We end this section with comments on the limit $m_s\rightarrow \infty$\footnote{As usual, there are a priori many ways to take this limit. The limit described here is the same one that turned deformed $\cW_{q,t}(\fg)$-algebras into the usual $\cW(\fg)$-algebras.}.
In that regime, we lose the only scale of the theory and flow to a $(2,0)$ SCFT on $M_6$, labeled by the same simply-laced Lie algebra $\fg$ as the $(2,0)$ little string\footnote{In the T-dual setup, the $(1,1)$ little string in the low energy limit becomes 6d maximally supersymmetric Yang-Mills theory, with gauge group of type $\fg$.}. The various moduli of the little string are kept fixed in the limit, and become moduli of the SCFT. We further insist on keeping fixed the Riemann surface on which the theory is compactified, along with the positions of the various D3 branes on it; that is, the cylinder $\cC=\mathbb{R}\times S^1(R)$ remains fixed. Recall however that $G^{3d}$ is naturally defined on the T-dual cylinder $\cC'=\mathbb{R}\times S^1(\widehat{R})$, where $\widehat{R}=1/(m^2_s R)$. Therefore, if $R$ is kept fixed, the dual radius $\widehat{R}$ vanishes in the SCFT limit and the theory on the branes becomes effectively two-dimensional, with $\cN=(4,4)$ supersymmetry. The gauge coupling on the D3 branes becomes infinite, meaning the theory cannot be described as a gauge theory anymore. Meanwhile, the Wilson loop wrapping $S^1(\widehat{R})$ becomes a 1/2-BPS point defect, and supersymmetry is broken to $\cN=(0,4)$. In the rest of this paper, we keep $m_s$ finite. \subsection{The Index of the Little String is a $qq$-character} Our goal is to compute the partition function of the $(2,0)$ little string theory on $M_6=\cC\times\mathbb{C}_q\times\mathbb{C}_t$. Note this is fully equivalent to computing the partition function of the $(1,1)$ little string on $M'_6=\cC'\times\mathbb{C}_q\times\mathbb{C}_t$, where $\cC'$ is the T-dual cylinder of radius $\widehat{R}$. In the latter setup, and in the absence of D-branes, the partition function is naturally expressed as a supersymmetric index: \beq \label{index6d} \text{Tr}\left[(-1)^F\, q^{S_1 - S_R}\, t^{-S_2 + S_R}\,\right] \; . \eeq The trace is taken over the Hilbert space of states on the circle $S^1(\widehat{R})$, and the insertion $q^{S_1 - S_R}\, t^{-S_2 + S_R}$ turns the manifold $M'_6$ into a twisted product. As we go around $S^1(\widehat{R})$, the line $\mathbb{C}_q$ is rotated by $q$, and the line $\mathbb{C}_t$ is rotated by $t^{-1}$, describing an $\Omega$-background. $S_1$ is the generator of the $\mathbb{C}_q$-rotations, while $S_2$ is the generator of the $\mathbb{C}_t$-rotations. $S_R$ is the generator of a $U(1)\subset SU(2)$ subgroup of the R-symmetry of the 6d theory. Without any branes, the index is trivial by pairwise cancellations of bosons and fermions, since the 6d theory has too much supersymmetry.\\ Working in the T-dual setup, we first add the D4$_{gauge}$ and D4$_{flavor}$ branes to the background. By a supersymmetric localization argument, the partition function of the bulk little string reduces to a partition function on the defects. Indeed, supersymmetry is only broken near the locus of the defect branes, while the supersymmetries of the full $(1,1)$ little string are preserved away from the defect. It follows that the partition function on the D4 branes is precisely the half-index \eqref{index3d} of $G^{3d}$. In particular, the 3d FI parameter contribution \eqref{FI3d} comes from turning on the periods of the self-dual 2-form $\omega_I$. The ${\cN}=4$ vector multiplet contribution on node $a$ \eqref{vec3d} comes about from quantizing the D4$_{gauge}$/D4$_{gauge}$ strings, with the D4 branes wrapping the $a$-th compact 2-cycle.
The ${\cN}=4$ bifundamental hypermultiplet contribution between nodes $a$ and $b$ \eqref{bif3d} comes about from quantizing the D4$_{gauge}$/D4$_{gauge}$ strings, with one set of D4 branes wrapping the $a$-th compact 2-cycle, and the other set of D4 branes wrapping the $b$-th compact 2-cycle. The ${\cN}=4$ fundamental hypermultiplet contribution on node $a$ \eqref{matter3d} comes about from quantizing the D4$_{gauge}$/D4$_{flavor}$ strings, with the D4$_{gauge}$ branes wrapping the $a$-th compact 2-cycle, and the D4$_{flavor}$ branes wrapping the dual non-compact 2-cycle.\\ We now introduce the D4$_{defect}$ branes. These branes are nondynamical, as they do not wrap $\mathbb{C}_q$, but they nonetheless modify the index. We conjecture here that the new string sectors realize the $Y$-operator defect in the gauge theory. Namely, \eqref{3dWilsonfactor} is the contribution of D4$_{gauge}$/D4$_{defect}$ strings at node $a$, while \eqref{3dWilsonfactor2} is the contribution of D4$_{flavor}$/D4$_{defect}$ strings at node $a$. All in all, this implies that the index of the little string in the presence of all three types of branes localizes to the vortex $qq$-character observable. We rewrite it here for convenience, normalized by the index in the absence of D4$_{defect}$ branes: \begin{align}\label{NEWcharacter3dstring} \boxed{\left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{\text{D4}_g/\text{D4}_f/\text{D4}_d}(\{z^{(a)}_\rho\})=\frac{1}{\left[{\chi}\right]^{(0,\ldots,0)}_{\text{D4}_g/\text{D4}_f}}\sum_{\omega\in V(\lambda)}\prod_{b=1}^n \left({\widetilde{\fq}^{(b)}}\right)^{d_b^\omega}\; c_{d_b^\omega}(q, t)\; \left(\widetilde{\mathcal{Q}}_{d_b^\omega}^{(b)}(\{z_{\rho}^{(a)}\})\right)\, \left[\widetilde{Y}_{3d}(\{z^{(a)}_\rho\})\right]_{\omega} \, .} \end{align} The superscripts in the index designate the D4$_{defect}$ charge, while the subscripts indicate which types of branes are present (D4$_g$ for D4$_{gauge}$, and so on).\\ The above identification makes the dictionary to deformed $\cW$-algebras explicit: the $N$ screening charges \eqref{screeningchargedef} are the $N$ D4$_{gauge}$ branes, the $N_F$ fundamental vertex operators \eqref{fundvertex} are the $N_F$ D4$_{flavor}$ branes, and the $L$ generating currents \eqref{generatorsdef} are the $L$ D4$_{defect}$ branes. The little string index can therefore be recast as the $q$-Toda correlator \eqref{correlatoris3dindex}.\\ To make contact with the vortex quantum mechanics $T^{1d}$, we need to do a little more work. Namely, we freeze the moduli of the D4$_{gauge}$ branes to be equal to the moduli of the D4$_{flavor}$ branes. This describes the root of the Higgs branch of $G^{3d}$. Geometrically, this means we can recombine the D4$_{gauge}$ branes with the D4$_{flavor}$ branes so that they exclusively make up a collection of $N_F$ D4'$_{flavor}$ branes wrapping the non-compact 2-cycles of $X$, and the theory is effectively massive. \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{Higgsing} \vspace{-40pt} \caption{Illustration of the Higgsing procedure in the string theory picture. On the right side, the gauge and flavor branes have recombined exclusively into flavor branes.} \label{fig:Higgsing} \end{figure} Now, we would like to introduce vortices for $G^{3d}$. First, note that a generic collection of vortices is BPS if the 3d FI parameters are aligned in the same direction.
For each $a=1,\ldots,n$, the triplet of FI terms $\int_{S_a}\omega_{I,J,K}$ transforms as a vector under the R-symmetry group $SU(2)_R$ rotating the hypermultiplet scalars. We identify $SU(2)_R$ as the $SU(2)$ R-symmetry of the little string. We then turn on the periods $\int_{S_a}\omega_{I}>0$, while setting the other periods to zero, $\int_{S_a}\omega_{J,K}=0$, for all $a$. Correspondingly, this turns on a real FI parameter on each node $a$ (complexified as usual due to the presence of the cylinder $\cC'$), while the complex FI parameters are set to zero. This describes a generic point on the Higgs branch of $G^{3d}$, and $SU(2)_R$ is broken to $U(1)_R$, which acts by rotating the periods of $\omega_J$ and $\omega_K$. This background indeed allows for 1/2-BPS vortex solutions: they are D2$_{vortex}$ branes wrapping the compact 2-cycles of $X$ and the circle of the cylinder $\cC'$ in the $(1,1)$ little string. Alternatively, they are D1$_{vortex}$ branes wrapping the compact 2-cycles of $X$ at a point on $\cC$ in the $(2,0)$ little string. With all branes present, only two supercharges are preserved in the background. The effective theory on $k$ D2$_{vortex}$ branes is precisely the quantum mechanics $T^{1d}$. It follows that the index of the $(1,1)$ little string in the presence of the D4'$_{flavor}$ branes, the D4$_{defect}$ branes, and the new D2$_{vortex}$ branes, is the 1d $\cN=2$ Witten index \eqref{index}. In its integral representation \eqref{vortexintegral}, the index consists of 1-loop determinants, all of which can be attributed to the various strings stretching between the branes. Let us explain the factors resulting in the Wilson loop physics: the classical Wilson loop contribution $Z^{(a)}_{defect, \varnothing}$ is attributed to D4'$_{flavor}$/D4$_{defect}$ strings, which provide exclusively fermions; for details, see the T-dual setup of D0/D8 branes studied in \cite{Banks:1997zs}, as well as \cite{Assel:2015oxa}. Meanwhile, the interaction $Z^{(a)}_{defect, k}$ of the vortices with the Wilson loop is attributed to D2$_{vortex}$/D4$_{defect}$ strings on node $a$. It provides the degrees of freedom for (the reduction from 2d $\cN=(0,4)$ to) 1d $\cN=4$ twisted hypermultiplets and Fermi multiplets; these 1-loop determinants were also worked out in T-dual setups, see \cite{Haouzi:2019jzk,Kim:2016qqs,Nekrasov:2016gud}. All in all, the Witten index of $T^{1d}$ is a little string index, which can be naturally expressed once again as a vortex $qq$-character: \begin{align}\label{character1dstring} \boxed{\left[\widetilde{\chi}\right]^{(L^{(1)},\ldots, L^{(n)})}_{\text{D2}_v/\text{D4'}_f/\text{D4}_d}(\{z^{(a)}_\rho\})=\frac{1}{\left[{\chi}\right]^{(0,\ldots,0)}_{\text{D2}_v/\text{D4'}_f}}\sum_{\omega\in V(\lambda)}\prod_{b=1}^n \left({\widetilde{\fq}^{(b)}}\right)^{d_b^\omega}\; c_{d_b^\omega}(q, t)\; \left(\mathcal{Q}_{d_b^\omega}^{(b)}(\{z_{\rho}^{(a)}\})\right)\, \left[{Y}_{1d}(\{z^{(a)}_\rho\})\right]_{\omega} \, .} \end{align} Again, the superscripts indicate the D4$_{defect}$ charge, while the subscripts indicate which types of branes are present in the background. Recall that this index depends on a choice of sign for the 1d FI parameter. In the little string context, this is the sign of the period of the NS-NS $B$-field, $\int_{S_a}B^{(2)}$. In particular, as we will argue next, changing the sign of this period gives a little string realization of 3d Seiberg duality.
\vspace{16mm} \section{Discussion} \label{sec:discussionsection} \subsection{Seiberg Duality of the Vortex Character} Our focus in this paper was on the UV physics in three dimensions, and in particular we have not paid attention to what the IR description of $G^{3d}$ may look like. Our only requirement was that the theory have Higgs vacua, and within that constraint, the number of hypermultiplets allows for many distinct behaviors in the IR \cite{Gaiotto:2008ak,Kapustin:2010mh}. It can happen that two distinct UV theories flow to the same IR point, a phenomenon called Seiberg duality \cite{Seiberg:1994pq}. We would like to ask what the action of Seiberg duality (if any) is on the $qq$-character observable we constructed. Luckily, we have the one-dimensional quantum mechanics description at our disposal, where Seiberg duality is understood microscopically as a change of sign in the 1d FI parameter $\zeta_{1d}$ \cite{Hwang:2017kmk}; in the absence of a Wilson loop, 3d $\cN=4$ gauge theories which are Seiberg-dual in the UV typically look very different from each other, but turn out to have identical partition functions. In the 1d quantum mechanics picture, Seiberg-dual theories happen to have one and the same gauge theory description $T^{1d}_{pure}$. What happens in the presence of a Wilson loop? For starters, the quantum mechanics $T^{1d}$ is no longer invariant: this is because the Wilson loop transforms in some representation of the 3d gauge group $G$, not the 3d flavor group $G_F$, a distinction which is not preserved by Seiberg duality. In particular, the Wilson loop is expected to map to a flavor Wilson loop in the dual theory, already in the topologically trivial sector. In our presentation of the Witten index, this is the contribution of the Fermi multiplets $Z^{(a)}_{defect, \varnothing}$ present at $k=0$. We also want to understand the duality in a background with arbitrary vortex charge $k$. A detailed construction of the Seiberg-dual character will be given for $\fg=A_1$ in section \ref{sec:example}. Here, we only sketch the main points. From the quantum mechanics picture, the $qq$-character of a Seiberg-dual theory can easily be obtained by changing the sign of some of the 1d FI parameters $\zeta^{(a)}_{1d}$ from positive to negative in the Witten index \eqref{vortexintegral}. In particular, a different set of poles from the one considered so far in this paper will be enclosed by the contours of the dual theory. This modification of the contours is perfectly tractable, and we can readily compute the index by the JK-residue prescription. As our end result, we find that up to normalization by the ``classical'' contribution $Z^{(a)}_{defect, \varnothing}$, the vortex character of a Seiberg-dual theory is obtained by switching the signs of the various flavor masses, defect fermion masses and 3d FI parameters in the quantum mechanics. There is one caveat, however: the FI parameters are continuous, so when changing their sign, wall crossing can happen at the value $\zeta^{(a)}_{1d}=0$. This typically results in new states from the Coulomb branch contributing to the Witten index, and these should be identified carefully by considering the residues at $\phi\rightarrow\pm \infty$. Another important subtlety is that the Seiberg-dual theory we identify in this formalism is only correct on the Higgs branch, where the vortex solutions are defined. In particular, at a more general point on the moduli space, the 3d Seiberg-dual theories we identify can happen to disagree globally.
Put differently, the duality we identify is only strictly true at a special point in the moduli space. For physical considerations on this point, see \cite{Assel:2017jgo,Dey:2017fqs,Bashkirov:2013dda,Gaiotto:2013bwa,Yaakov:2013fza}. For mathematical considerations, see \cite{Bullimore:2015lsa,Nakajima:2015txa,Braverman:2016wma}. \subsection{Future Directions} Starting from our construction of the vortex character, there are many important questions to investigate. Let us list a few pressing open problems:\\ One important question would be to understand what the D3$_{defect}$ branes of the little string mean in geometry, most notably in the language of quantum K-theory of Nakajima quiver varieties \cite{Aganagic:2017smx,Aganagic:2017gsx}.\\ Defining the vortex characters should be possible for classical gauge groups using orientifold arguments \cite{Haouzi:2020yxy}, and for more general quiver theories than the $ADE$ ones. For instance, so-called ``fractional'' quiver theories, which include the non simply-laced $BCFG$ algebras, should be obtained by folding \cite{Aganagic:2017smx,Haouzi:2019jzk,Kimura:2017hez}. Affine quiver theories could also be studied, modifying some of our arguments in a straightforward way.\\ The vortex characters we constructed are naturally defined for 3d theories on the manifold $\mathbb{C}\times S^1(\widehat{R})$. As we mentioned in the text, reducing the theory on the circle produces vortex characters for 2d $\cN=(4,4)$ gauged sigma models \cite{Nekrasov:2017rqy}. It should likewise be straightforward to study the uplift to 4d $\cN=2$ theories on the manifold $\mathbb{C}\times T^2$; such a lift is expected to produce elliptic vortex characters \cite{Kimura:2016dys}.\\ Recently, a vortex $qq$-character was defined for certain 3d $\cN=2$ gauge theories of handsaw-type \cite{Haouzi:2019jzk}, obtained from Higgsing a five-dimensional theory, using a construction similar to the one we presented for $\cN=4$ theories. In the $\cW_{q,t}(\fg)$-algebra formalism, the characters arise as correlators similar to those studied here, but with the insertion of deformed ``primary'' vertex operators at points on the cylinder, rather than the fundamental vertex operators \eqref{fundvertex} compatible with $\cN=4$ supersymmetry. It would be important to construct the characters for more generic 3d $\cN=2$ theories, and understand their realization in the language of $\cW(\fg)$-algebras and string theory, as well as the action of Seiberg-like duality \cite{Aharony:1997gp,Benini:2011mf}.\\ Mirror symmetry is known to exchange Wilson loops with so-called vortex loops in 3d $\cN=4$ gauge theories \cite{Assel:2015oxa,Dimofte:2019zzj}. By Wilson loop, we mean here the classical contribution $Z^{(a)}_{defect, \varnothing}$. In our work, we crucially made use of the idea that the presence of such a loop should generalize the vortex moduli space altogether and introduce new multiplets in the quantum mechanics (as opposed to localizing the loop on the vortex solutions in the absence of a loop). It would be interesting to study the action of mirror symmetry in our setup, and notably how the symmetry acts on the vortex character. \vspace{16mm} \section{A Case Study: 3d $\cN=4$ SQCD} \label{sec:example} In this section, we illustrate in detail all the statements made in the paper for the Lie algebra $\fg=A_1$.
Namely, consider the 3d $\cN=4$ gauge theory $G^{3d}$ with gauge group $G=U(N)$ and flavor group $G_F=U(N_F)$, on the manifold $\mathbb{C}\times S^1(\widehat{R})$, with a 1/2-BPS Wilson loop at the origin of $\mathbb{C}$ and wrapping $S^1(\widehat{R})$. \vspace{8mm} ------- \emph{The 1d Quantum Mechanics} -------\\ Let us first describe the theory in the absence of the Wilson loop. We freeze each equivariant parameter of $G$ to be one of the equivariant parameters of $G_F=U(N_F)$, describing the root of the Higgs branch of $G^{3d}$. We turn on the real FI parameter $\zeta_{3d}>0$, complexified because of the circle, and consider the moduli space of $k$ vortices. Let $T^{1d}_{pure}$ be the quantum mechanics on the vortices. It is a theory with (the reduction from 2d $\cN=(2,2)$ to) 1d $\cN=4$ supersymmetry on $S^1(\widehat{R})$, with gauge group $\widehat{G}=U(k)$. The flavor symmetry is $\widehat{G_F}=U(N)\times U(N_F - N)$, where the first group is the symmetry of $N$ fundamental chiral multiplets, while the second group is the symmetry of $N_F-N$ antifundamental chiral multiplets. \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{A1examplepure} \vspace{-12pt} \caption{The 3d gauge theory $G^{3d}$ and the vortex quantum mechanics $T^{1d}_{pure}$.} \label{fig:A1examplepure} \end{figure} The Witten index \eqref{index} of the quantum mechanics is expressed as the following integral: {\allowdisplaybreaks \begin{align} \label{vortexintegralA1pure} &\left[\chi\right]^{(0)}_{1d} =\sum_{k=0}^{\infty}\frac{e^{\zeta_{3d}\, k}}{k!} \oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\; , \\ &Z_{pure, vec} = \frac{\prod_{\substack{I\neq J\\ I,J=1}}^{k}\sh\left(\phi_I-\phi_J\right)}{\prod_{I, J=1}^{k} \sh\left(\phi_{I}-\phi_{J}+\E_2 \right)}\nonumber\\ &Z_{pure, adj} = \prod_{I, J=1}^{k}\frac{\sh\left(\phi_I-\phi_J+\E_1+\E_2\right)}{\sh\left(\phi_{I}-\phi_{J}+\E_1 \right)}\nonumber\\ &Z_{pure, teeth} = \prod_{I=1}^{k}\prod_{i=1}^{N}\frac{\sh\left(\phi_I-m_i+\E_-+\E_2\right)}{\sh\left(\phi_{I}-m_i+\E_-\right)}\prod_{j=1}^{N_F - N}\frac{\sh\left(-\phi_I+m_j+\E_++\E_2\right)}{\sh\left(-\phi_{I}+ m_j+\E_+ \right)}\nonumber\, . \end{align}} We made use of the notations $\sh(x)=2\, \sinh(\widehat{R}\, x/2)$, $\E_+=(\E_1+\E_2)/2$ and $\E_-=(\E_1-\E_2)/2$. Crucially, the index depends on the sign of the 1d FI parameter, which we take here to be $\zeta_{1d}>0$. After applying the JK-residue prescription in that FI-chamber, the poles that end up contributing at vortex charge $k$ to the $T^{1d}_{pure}$ index make up a set $\mathcal{M}^{pure}_k$. The elements of this set satisfy: \begin{align} &\phi_I = \phi_J - \E_1 \; , \label{purepole1A1}\\ &\phi_I = \phi_J - \E_2 \; , \label{purepole2A1}\\ &\phi_I = m_i - \E_- \; , \;\;\;\; i\in\{1,\ldots, N\} \; .\label{purepole3A1} \end{align} The poles \eqref{purepole1A1} arise from the adjoint chiral factor $Z_{pure, adj}$, the poles \eqref{purepole2A1} arise from the vector multiplet $Z_{pure, vec}$, and the poles \eqref{purepole3A1} arise from the flavor factor $Z_{pure, teeth}$. Most notably, the last set of contours only encloses poles originating from the fundamental chiral multiplets, and none from the antifundamental chiral multiplets. Furthermore, the residues at the locus \eqref{purepole2A1} are all zero, thanks to the numerator of $Z_{pure, teeth}$.
Putting it all together, the various poles which end up contributing with nonzero residue are of the form \beq\label{purepolesA1} \phi_I = m_i - \E_- - (s_i-1) \E_1 \; , \qquad \text{with}\; s_i\in\{1,\ldots,k_i\}\; ,\qquad i\in\{1,\ldots,N\}\; . \eeq In this notation, $(k_1, \ldots, k_N)$ is a partition of $k$ into $N$ non-negative integers, and each pair of integers $(i, s_i)$ is assigned to one of the integers $I\in\{1,\ldots,k\}$ exactly once. Performing the residue integral, one finds the following well-known expression \cite{Kim:2012uz}: \begin{align} \left[\chi\right]^{(0)}_{1d} =\sum_{k=0}^\infty e^{\zeta_{3d}\, k} &\sum_{\substack{\sum_i k_i=k \\ k_i\geq 0}} \; \left[\prod_{i,j=1}^{N}\prod_{s=1}^{k_i}\frac{\sh\left(m_i-m_j+\E_2- (s-k_j-1)\, \E_1\right)}{\sh\left(m_i-m_j - (s-k_j-1)\, \E_1\right)}\right]\nonumber\\ &\qquad\qquad\times\left[\prod_{i=N+1}^{N_F}\prod_{j=1}^{N}\prod_{p=1}^{k_j}\frac{\sh\left(m_i-m_j+\E_2 + p\, \E_1\right)}{\sh\left(m_i-m_j + p\, \E_1\right)}\right]\label{examplepure1dA1} \end{align} Let us now consider the inclusion of the Wilson loop, transforming in the fundamental representation of $G=U(N)$. This loop is a 1/2-BPS codimension-2 defect from the point of view of $G^{3d}$; we introduce a defect group $\widehat{G}_{defect}=U(1)$ for the 1d fermions on the loop, with associated mass fugacity $M$. The inclusion of the loop modifies the vortex quantum mechanics, which we now call $T^{1d}$. \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.99\textwidth]{A1exampledefect} \vspace{-12pt} \caption{The 3d gauge theory $G^{3d}$ with a Wilson loop defect and the vortex quantum mechanics $T^{1d}$. The notations are as in Figure \ref{fig:flagdefect}.} \label{fig:A1exampledefect} \end{figure} Its Witten index is given by: {\allowdisplaybreaks \begin{align} \label{vortexintegralA1} &\left[\chi\right]^{(1)}_{1d} =\sum_{k=0}^{\infty}\frac{e^{\zeta_{3d}\, k}}{k!} Z_{defect, \varnothing}\oint_{\mathcal{M}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\cdot Z_{defect, k} \; , \\ &Z_{defect, \varnothing} = \prod_{i=1}^{N} \sh\left(m_i -M +\E_2 \right)\nonumber\\ &Z_{defect, k} = \prod_{I=1}^{k} \frac{\sh\left(\phi_I-M- \E_- \right)\, \sh\left(-\phi_I + M-\E_-\right)}{\sh\left(\phi_I-M- \E_+\right)\, \sh\left(-\phi_I + M - \E_+ \right)}\, .\nonumber \end{align}} We once again work in the FI-chamber $\zeta_{1d}>0$. For a given vortex charge $k$, the set of poles to be enclosed by the contours is called $\mathcal{M}_k$. This set contains the set $\mathcal{M}^{pure}_k$ we specified in \eqref{purepolesA1} for the theory $T^{1d}_{pure}$, and an extra locus depending on the defect fermion mass $M$: \begin{align}\label{newpoleA1} \phi_I=M + \epsilon_+ \; ,\;\;\;\text{for some}\; I\in\{1,\ldots,k\} \; . \end{align} No other pole depending on $M$ exists, by the following argument: consider the pole at $\phi_I=M+\epsilon_+$ for $I$ fixed. The JK-prescription indicates there could be a pole at the locus $\phi_J-M-\epsilon_+=0$, with $J \neq I$. But the residue there is zero, because of the factor $\sh(\phi_I - \phi_J)$ in the numerator of $Z_{pure, vec}$. Similarly, the JK prescription requires us to include the hyperplanes $\phi_{J}-\phi_{I}+\epsilon_1=0$ and $\phi_{J}-\phi_{I}+\epsilon_2=0$. But the numerators of $Z_{defect, k}$ guarantee a vanishing residue at these loci. This shows that there is exactly one $M$-dependent pole in $\mathcal{M}_k$.
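This vanishing-residue mechanism is simple enough to check in a toy model. The following sympy sketch keeps only the vector multiplet numerator $\sh(\phi_1-\phi_2)$ and two copies of the defect pole; it is a minimal illustration of the argument above (with $\widehat{R}$ set to 1 and $\epsilon_+$ abbreviated by a single symbol), not the full index integrand.
\begin{verbatim}
import sympy as sp

phi1, phi2, M, eps = sp.symbols('phi1 phi2 M epsilon')
sh = lambda x: 2 * sp.sinh(x / 2)   # the sh(x) building block, with R-hat = 1

# Toy integrand: the vector multiplet numerator sh(phi1 - phi2), against two
# copies of the defect pole phi - M - eps = 0 (eps plays the role of eps_+).
f = sh(phi1 - phi2) / ((phi1 - M - eps) * (phi2 - M - eps))

r1 = sp.residue(f, phi1, M + eps)     # take the phi1 residue first...
print(sp.residue(r1, phi2, M + eps))  # 0: sh(phi1 - phi2) kills the second pole
\end{verbatim}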
The upshot is that $|\mathcal{M}_k|=|\mathcal{M}^{pure}_k|+1$ for all $k$. We conclude that for a given vortex charge $k$, we choose one of the contours to pick up the unique $M$-dependent pole in $\mathcal{M}_k$, namely \eqref{newpoleA1}, and the $k-1$ other poles are to be chosen in the set $\mathcal{M}^{pure}_{k-1}$, according to \eqref{purepolesA1}. We introduce a (renormalized) 1/2-BPS codimension-2 defect operator for the loop, with associated vev: \begin{align} \label{Yoperator1dA1} &\left\langle \left[Y_{1d}(M)\right]^{\pm 1} \right\rangle \equiv \sum_{k=0}^\infty\frac{e^{\zeta_{3d}\, k}}{k!}\oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\cdot \left[Z_{defect, k}(M)\right]^{\pm 1}\, . \end{align} Note that the contour is defined to exclusively enclose poles in the set $\mathcal{M}^{pure}_k$, thereby avoiding the pole at $\phi_I=M+\epsilon_+$. In what follows, we will freely make use of the K-theoretic notations \begin{align}\label{fugacitiesKtheoryA1} \widetilde{\fq}&= e^{\zeta_{3d}},\\ q &= e^{\epsilon_1},\qquad t= e^{-\epsilon_2},\qquad v= e^{\epsilon_+}=\sqrt{q/t},\qquad u= e^{\epsilon_-}=\sqrt{q\, t},\nonumber\\ f_d& = e^{-m_d},\qquad z=e^{-M}\; .\nonumber \end{align} Furthermore, for ease of presentation, we renormalize the index by the classical Wilson loop contribution and the index of the vortex quantum mechanics $T^{1d}_{pure}$ in the absence of the Wilson loop: \beq \label{normalizedA1} \left[\widetilde{\chi}\right]^{(1)}_{1d}(z) \equiv \frac{\left[\chi\right]^{(1)}_{1d}(z)}{Z_{defect, \varnothing}(z)\cdot \left[{\chi}\right]^{(0)}_{1d}}\; . \eeq Then, we find that the normalized index can be expressed in terms of the $Y$-operators, as a sum of exactly two terms: \begin{align} \label{A1pure} \left[\widetilde{\chi}\right]^{(1)}_{1d}(z) = \frac{1}{\left[{\chi}\right]^{(0)}_{1d}}\left[ \left\langle Y_{1d}(z) \right\rangle + \widetilde{\fq} \; \prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\left\langle\frac{1}{Y_{1d}(z\,v^{-2})}\right\rangle\right]\; . \end{align} This is a twisted $qq$-character of the fundamental representation of the quantum affine algebra $U_q(\widehat{A_1})$. The meaning of the above character is as follows: the first term on the right-hand side encloses almost all the ``correct'' poles in the index integrand, but it is missing exactly one: the extra pole at $\phi_I-M-\epsilon_+=0$. The second term on the right-hand side makes up for this missing pole, and relies on a key observation: we can trade a contour enclosing this extra pole for a contour which does not enclose it, at the expense of inserting the operator $Y(z\,v^{-2})^{-1}$ inside the vev. This result is derived at once from the integral expression \eqref{vortexintegralA1} and the $Y$-operator definition \eqref{Yoperator1dA1}. Finally, note the presence of the 3d FI parameter $\widetilde{\fq}$ in the second term; it counts exactly one vortex, to make up for the missing $M$-pole, consistent with the fact that $|\mathcal{M}_k|=|\mathcal{M}^{pure}_k|+1$.\\ It is instructive to recast the above result in terms of the general expression for the index presented in the main text: \begin{align}\label{character1dA1} \left[\widetilde{\chi}\right]^{(1)}_{1d}(z)=\frac{1}{\left[{\chi}\right]^{(0)}_{1d}}\sum_{\omega\in V(\lambda)} \left({\widetilde{\fq}}\right)^{d^\omega}\; c_{d^\omega}(q, t)\,\mathcal{Q}_{d^\omega}(z)\, \left[{Y}_{1d}(z)\right]_{\omega} \, .
\end{align} In that notation, the sum is over exactly two weights: the highest weight $\omega_1=[1]$ of the spin-1/2 representation of $A_1$, and the weight $\omega_2=[-1]$, obtained by lowering $\omega_1$ by the positive simple root $\alpha=[2]$ of $A_1$: $[1]-[2] = [-1]$. The coefficients $c_{d^\omega}(q, t)$ are all 1, while \beq \label{QfunctionA1} \mathcal{Q}_{d^{[1]}}(z)=1 \; ,\qquad\qquad \mathcal{Q}_{d^{[-1]}}(z)= \prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\; , \eeq and \beq \label{YfunctionA1} \left[{Y}_{1d}(z)\right]_{[1]}= \left\langle Y_{1d}(z) \right\rangle \; ,\qquad\qquad\left[{Y}_{1d}(z)\right]_{[-1]}= \left\langle \frac{1}{Y_{1d}(z\, v^{-2})} \right\rangle \; . \eeq Non-perturbative Schwinger-Dyson identities for $G^{3d}$ follow from the regularity properties of the vortex character in the defect fugacity $z=e^{-M}$. Namely, a $z$-singularity will arise if two poles of the integrand pinch one of the contours. Let us list such pairs of poles, where the poles on the left are inside the contours and the poles on the right are outside the contours, by the JK-prescription. We let $I\in\{1,\ldots,k\}$ and find: \begin{align} &\phi_I - M - \E_+ = 0 \qquad \text{and}\qquad \phi_{I}- m_j-\E_+ = 0 \;\;\;\;\; (j\in\{N+1, \ldots, N_F\}) \label{pinch1}\\ &\phi_I - M - \E_+ = 0 \qquad \text{and}\qquad \phi_{I}- \phi_{J}-\E_1 = 0 \;\;\;\;\; (J\neq I) \label{pinch2}\\ &\phi_I - M - \E_+ = 0 \qquad \text{and}\qquad \phi_{I}- \phi_{J}-\E_2 = 0 \;\;\;\;\; (J\neq I) \label{pinch3}\\ &\phi_{I}- m_i+\E_- = 0 \qquad \text{and}\qquad \phi_I - M + \E_+ = 0 \;\;\;\;\; (i\in\{1, \ldots, N\}) \label{pinch4}\\ &\phi_{I}- \phi_{J}+\E_1 = 0 \qquad \text{and}\qquad \phi_I - M + \E_+ = 0 \;\;\;\;\; (J\neq I) \label{pinch5}\\ &\phi_{I}- \phi_{J}+\E_2 = 0 \qquad \text{and}\qquad \phi_I - M + \E_+ = 0 \;\;\;\;\; (J\neq I) \label{pinch6} \end{align} The sets of poles \eqref{pinch2}, \eqref{pinch3}, \eqref{pinch5} and \eqref{pinch6} pinch the contour, but the corresponding singularity is canceled by a zero in the integrand. For instance, the set \eqref{pinch5} implies a singularity at the locus $\phi_J-M-\E_- =0$, but there is a zero there due to the numerator of $Z_{defect, k}$. The sets of poles \eqref{pinch1} and \eqref{pinch4} genuinely pinch the contours, and result in singularities at \begin{align} &M=m_j \;\;\;\;\;\;\;\;\;\;\;\; (j\in\{N+1, \ldots, N_F\})\; ,\label{locus1}\\ &M=m_i+\E_2 \;\;\;\;\; (i\in\{1, \ldots, N\})\; . \label{locus2} \end{align} The singularity \eqref{locus2} is formally canceled by the Fermi multiplet numerators of $Z_{defect, \varnothing}$ in the index \eqref{vortexintegralA1}, but we normalized by this classical contribution when defining the vortex character. The singularity is absent if we decide not to normalize the index by $Z_{defect, \varnothing}$, at the cost of slightly modifying the twist of the $qq$-character. Our choice to normalize by this classical contribution was purely cosmetic, and ultimately does not affect any result. On the other hand, the singularity \eqref{locus1} is an unavoidable feature of 3d $\cN=4$ theories. Removing this singularity can be done by inserting an additional flavor Wilson loop in the 3d theory, transforming in the fundamental representation of the flavor subgroup $U(N_F-N)\subset U(N_F)$; this would result in extra Fermi multiplet contributions to the index, of the form $Z_{defect, \varnothing}$ in \eqref{vortexintegralA1}. We decided against adding unnecessary Wilson loops in this work.
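As a minimal illustration of the pinching mechanism behind the flavor singularity \eqref{locus1}, consider the following sympy toy model; it keeps only the two relevant rational factors (one defect pole inside the contour, one antifundamental pole outside) and is a sketch, not the full index integrand.
\begin{verbatim}
import sympy as sp

phi, M, m, eps = sp.symbols('phi M m epsilon')

# Toy integrand: one pole inside the contour (phi = M + eps, from Z_defect)
# and one outside (phi = m + eps, from an antifundamental chiral multiplet).
f = 1 / ((phi - M - eps) * (phi - m - eps))

res = sp.residue(f, phi, M + eps)   # the contour picks up the inside pole only
print(sp.simplify(res))             # 1/(M - m): singular precisely at M = m
\end{verbatim}
As $M\rightarrow m$, the two poles collide and pinch the contour, and the residue blows up; this is the origin of the locus \eqref{locus1}.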
Ultimately, our presentation of the index simply means that we have to live with these flavor singularities. The Schwinger-Dyson identities are the statement that these are the only singularities present. \vspace{8mm} ------- \emph{The 3d Gauge Theory Perspective} -------\\ In the absence of the Wilson loop, the half-index of the 3d theory reads \begin{align}\label{indexintegral3dA1} \left[\widetilde{\chi}\right]^{(0)}_{3d} = \oint_{\mathcal{M}^{bulk}} dy\,\left[I^{3d}_{bulk}(y)\, \right] \; , \end{align} where the bulk contribution reads \begin{align}\label{bulk3dA1} I^{3d}_{bulk}(y)=\prod_{i=1}^{N}{y_i}^{\zeta_{3d}-1}\;I_{vec}(y) \cdot I_{flavor}(y,\{x_d\})\; . \end{align} The factor \begin{align}\label{FI3dA1} \prod_{i=1}^{N}{y_i}^{\zeta_{3d}} \end{align} is the contribution of the 3d FI parameter. The factor \begin{align}\label{vec3dA1} I_{vec}(y)=\prod_{1\leq i\neq j\leq N}\frac{\left(y_{i}/y_{j};q\right)_{\infty}}{\left(t\, y_{i}/y_{j};q\right)_{\infty}}\;\prod_{1\leq i<j\leq N} \frac{\Theta\left(t\,y_{i}/y_{j};q\right)}{\Theta\left(y_{i}/y_{j};q\right)} \end{align} stands for the contribution of the ${\cN}=4$ vector multiplet for the gauge group $G=U(N)$. The factor \begin{align}\label{matter3dA1} I_{flavor}(y, \{x_d\}) =\prod_{d=1}^{N_F}\prod_{i=1}^{N} \frac{\left(t\, v\, x_{d}/y_{i};q\right)_{\infty}}{\left(v\, x_{d}/y_{i};q\right)_{\infty}} \end{align} stands for the contribution of the $\cN=4$ hypermultiplets in the fundamental representation of the gauge group $G=U(N)$. The set of poles to be enclosed by the contours is denoted by $\mathcal{M}^{bulk}$. Following the JK-residue prescription, the poles which contribute with nonzero residue are located at \beq y_i =v\, x_d \, q^{s}\; , \qquad s=0,1,2,\ldots\; , \quad d\in\{1,\ldots,N_F\} \; , \eeq where each index $i$ is mapped uniquely to one of the indices $d$. Note that the $t$-dependent poles coming from the vector multiplet denominators $\left(t\, y_{i}/y_{j};q\right)_{\infty}$ are allowed by the JK-prescription, but they contribute with zero residue due to the fundamental hypermultiplet numerators $\left(t\, v\, x_{d}/y_{i};q\right)_{\infty}$. Having identified the poles, one can perform the residue integral to recover the Witten index \eqref{examplepure1dA1} of the quantum mechanics $T^{1d}_{pure}$ (up to normalization by irrelevant infinite quantum dilogarithm factors). We now introduce the 1/2-BPS Wilson loop wrapping $S^1(\widehat{R})$ by gauging its 1d degrees of freedom, as explained in the main text. The corresponding defect $Y$-operator vev is written as an integral over the Coulomb moduli of the 3d theory: \begin{align}\label{3ddefectexpressionA1} \left\langle\left[{\widetilde{Y}_{3d}}(z)\right]^{\pm 1} \right\rangle \equiv\oint_{\mathcal{M}^{bulk}} d{y}\,\left[I^{3d}_{bulk}(y)\cdot\left[{\widetilde{Y}_{defect}}(y, z)\right]^{\pm 1}\right] \; , \end{align} with \beq\label{3dWilsonfactorA1} {\widetilde{Y}_{defect}}(y,z)=\prod_{i=1}^{N}\frac{1-t\, y_{i}/z}{1- y_{i}/z}\; . \eeq There is also the flavor part of the defect which we have to include, \beq\label{3dWilsonfactor2A1} {\widetilde{Y}_{flavor}}(\{x_{d}\}, z)=\prod_{d=1}^{N_F}\frac{1- v\, x_{d}/z}{1- t\, v\, x_{d}/z}\; .
\eeq Then, the (normalized) index of $G^{3d}$ in the presence of a Wilson loop is \emph{defined} as the following vortex character: \begin{align}\label{character1dA1letsgo} \left[\widetilde{\chi}\right]^{(1)}_{3d}(z)=\frac{1}{\left[{\chi}\right]^{(0)}_{3d}}\left[{\widetilde{Y}_{flavor}}(\{x_{d}\}, z)\left\langle \widetilde{Y}_{3d}(z) \right\rangle+ \left\langle\frac{1}{\widetilde{Y}_{3d}(z\, v^{-2})}\right\rangle\right] \, . \end{align} This is once more a twisted $qq$-character of the fundamental representation of $U_q(\widehat{A_1})$. It is again instructive to recast the above result in terms of the general expression for the index presented in the main text: \begin{align}\label{character1dA1again} \left[\widetilde{\chi}\right]^{(1)}_{3d}(z)=\frac{1}{\left[{\chi}\right]^{(0)}_{3d}}\sum_{\omega\in V(\lambda)} \left({\widetilde{\fq}}\right)^{d^\omega}\; c_{d^\omega}(q, t)\,\widetilde{\mathcal{Q}}_{d^\omega}(z)\, \left[\widetilde{Y}_{3d}(z)\right]_{\omega} \, . \end{align} Just as in the quantum mechanics presentation, the sum is over exactly two weights: the highest weight $\omega_1=[1]$ of the spin-1/2 representation of $A_1$, and the weight $\omega_2=[-1]$, obtained by lowering $\omega_1$ by the positive simple root $\alpha=[2]$ of $A_1$: $[1]-[2] = [-1]$. The coefficients $c_{d^\omega}(q, t)$ are all 1, while \beq \label{QfunctionA13d} \widetilde{\mathcal{Q}}_{d^{[1]}}(z)={\widetilde{Y}_{flavor}}(\{x_{d}\}, z) \; ,\qquad\qquad \widetilde{\mathcal{Q}}_{d^{[-1]}}(z)= 1\; , \eeq and \beq \label{YfunctionA13d} \left[\widetilde{Y}_{3d}(z)\right]_{[1]}= \left\langle \widetilde{Y}_{3d}(z) \right\rangle \; ,\qquad\qquad\left[\widetilde{Y}_{3d}(z)\right]_{[-1]}= \left\langle \frac{1}{\widetilde{Y}_{3d}(z\, v^{-2})} \right\rangle \; . \eeq The correctness of this definition can be checked by comparing it to the Witten index of the vortex theory; recall that we previously derived: \begin{align} \label{A1pureagain} \left[\widetilde{\chi}\right]^{(1)}_{1d}(z) = \frac{1}{\left[{\chi}\right]^{(0)}_{1d}}\left[ \left\langle Y_{1d}(z) \right\rangle + \widetilde{\fq} \; \prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\left\langle\frac{1}{Y_{1d}(z\,v^{-2})}\right\rangle\right]\; . \end{align} We can perform the residue integral over the poles \eqref{purepolesA1} explicitly, and define ``3d variables'' as: \beq y_{i,*} = f_i\, q^{k_i+1} \;,\;\; i\in\{1,\ldots,N\} \; , \eeq along with the rescaling of the 1d masses to define them in terms of the 3d masses, \beq f_i=v\, x_i \;,\;\;\;\; i=1,\ldots,N_F \; . \eeq Evaluated on this pole locus, the defect factor in the quantum mechanics becomes: \begin{align}\label{fromY1dtoY3dA1} Z_{defect, k}({y}_{i,*},z) &=\prod_{i=1}^{N}\frac{1-t\, y_{i,*}/z}{1-y_{i,*}/z}\cdot\prod_{i=1}^{N}\frac{1- f_i/z}{1- t\, f_{i}/z}\nonumber\\ &=\prod_{i=1}^{N}\frac{1-t\, y_{i,*}/z}{1-y_{i,*}/z}\cdot\prod_{i=1}^{N}\frac{1- v\, x_i/z}{1- t\, v\, x_{i}/z}\nonumber\\ &=\left[{\widetilde{Y}_{defect}}(\vec y_*, z)\right]\cdot \left[\widetilde{Y}_{flavor}(\{x_d\}, z)\right]\cdot \prod_{i=N+1}^{N_F}\frac{1-t\, v\, x_i/z}{1- v\, x_{i}/z}\; . \end{align} In other words, the $Y$-operator vev as defined in the quantum mechanics can be written in terms of the $Y$-operator vev as defined in the 3d theory.
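The last rewriting in \eqref{fromY1dtoY3dA1} is elementary algebra and can be verified symbolically. Here is a minimal sympy sketch for $N=1$, $N_F=2$, where the symbol \texttt{ystar} plays the role of $y_{1,*}$; it is a sanity check of the identity, not part of the derivation.
\begin{verbatim}
import sympy as sp

t, v, z, ystar, x1, x2 = sp.symbols('t v z ystar x1 x2', positive=True)

# Left-hand side of the rewriting, for N=1, N_F=2, with f_1 = v*x1:
lhs = (1 - t*ystar/z)/(1 - ystar/z) * (1 - v*x1/z)/(1 - t*v*x1/z)

# Right-hand side: Y_defect * Y_flavor * leftover antifundamental factor.
Y_defect = (1 - t*ystar/z)/(1 - ystar/z)
Y_flavor = (1 - v*x1/z)/(1 - t*v*x1/z) * (1 - v*x2/z)/(1 - t*v*x2/z)
leftover = (1 - t*v*x2/z)/(1 - v*x2/z)

print(sp.simplify(lhs - Y_defect * Y_flavor * leftover))  # 0
\end{verbatim}
Analogous cancellations occur for the second term of the character, which we spell out next.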
Let us now look at the second term in the 1d vortex character: \begin{align}\label{fromY1dtoY3dA12} &\prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\;\frac{1}{Z_{defect, k}({y}_{i,*},z\, v^{-2})}\nonumber\\ &=\cancel{\prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z}} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\prod_{i=1}^{N}\frac{1-v^2\, y_{i,*}/z}{1-t\, v^2\, y_{i,*}/z}\cdot\cancel{\prod_{i=1}^{N}\frac{1- q\, f_{i}/z}{1- q\,t^{-1}\,f_i/z}}\nonumber\\ &=\prod_{i=1}^{N}\frac{1-v^2\, y_{i,*}/z}{1-t\, v^2\, y_{i,*}/z} \prod_{j=N+1}^{N_F} \frac{1-t\,v\,x_j/z}{1-v\, x_j/z}\nonumber\\ &=\frac{1}{\left[{\widetilde{Y}_{defect}}(\vec y_*, z\, v^{-2})\right]}\cdot \prod_{i=N+1}^{N_F}\frac{1-t\, v\, x_i/z}{1- v\, x_{i}/z} \end{align} After the above astonishing cancellations, we find that the characters are in fact proportional to each other! Denoting the proportionality factor as \beq c_{3d/1d}=\prod_{i=N+1}^{N_F}\frac{1-t\, v\, x_i/z}{1- v\, x_{i}/z} \; , \eeq we have proved that \begin{align}\label{CHIequalityA1} \left[\widetilde{\chi}\right]^{(1)}_{3d}(z)= c_{3d/1d}\cdot \left[\widetilde{\chi}\right]^{(1)}_{1d}(z)\, . \end{align} \vspace{8mm} ------- \emph{The ${\cW}_{q,t}(A_1)$-Algebra} -------\\ $q$-Liouville theory on the cylinder $\cC$ enjoys a ${\cW}_{q,t}(A_1)$-algebra symmetry, which is generated by the deformed stress tensor $W^{(2)}(z)$. This generating current is constructed as the commutant of the screening charge. We find: \beq\label{A1stress} W^{(2)}(z) = :\cY(z): + :\left[\cY(v^{-2}z)\right]^{-1}: \; , \eeq where $\cY$ is the operator defined in \eqref{YoperatorToda}. We consider the correlator: \beq\label{A1correlatordef} \left\langle \psi'\left|\prod_{d=1}^{N_F} V(x_d)\; Q^{N}\; W^{(2)}(z) \right| \psi \right\rangle\, . \eeq The contours are specified so as not to enclose any pole in the $z$ variable. The state $|\psi\rangle$ is defined such that: \begin{align} \alpha[0] |\psi\rangle &= \langle\psi, \alpha\rangle |\psi\rangle\label{eigenvalueA1}\\ \alpha[k] |\psi\rangle &= 0\, , \qquad\qquad\;\; \mbox{for} \; k>0,\nonumber \end{align} where the $\alpha[k]$ generate a $q$-deformed Heisenberg algebra: \beq\label{A1commutator} [\alpha[k], \alpha[m]] = {1\over k} (q^{k\over 2} - q^{-{k\over 2}})(t^{{k\over 2} }-t^{-{k\over 2} })(v^{k}+ v^{-k}) \delta_{k, -m} \; . \eeq We compute the various two-point functions making up the correlator; first, the bulk contributions \begin{align} &\prod_{1\leq i< j\leq N}\left\langle S(y_i)\, S(y_j) \right\rangle = \prod_{1\leq i\neq j\leq N} \frac{\left(y_{i}/y_{j};q\right)_{\infty}}{\left(t\, y_{i}/y_{j};q\right)_{\infty}}\;\prod_{1\leq i<j\leq N} \frac{\Theta\left(t\,y_{j}/y_{i};q\right)}{\Theta\left(y_{j}/y_{i};q\right)}\\ &\prod_{i=1}^{N}\left\langle V(x_d)\, S(y_i) \right\rangle = \prod_{i=1}^{N} \frac{\left(t\,v\, x_{d}/y_{i};q\right)_{\infty}}{\left(v\, x_{d}/y_{i};q\right)_{\infty}}\; . \end{align} The two-point functions of the fundamental vertex operators among themselves drop out after normalization, so we omit them. We now come to the contributions involving the Wilson loop.
First, the two-point function of the stress tensor with the screening currents reads \begin{align} &\prod_{i=1}^{N}\left\langle S(y_i)\, W^{(2)}(z) \right\rangle = \prod_{i=1}^{N} \frac{1-t\, y_{i}/z}{1- y_{i}/z}+\prod_{i=1}^{N} \frac{1-v^2\, y_{i}/z}{1- t\, v^2\, y_{i}/z}\; . \end{align} Note that in the actual correlator, the vacuum is labeled by $|\psi\rangle$ instead of $|0\rangle$, resulting in a relative factor of $\widetilde{\fq}$ between the two terms. A more involved computation is the two-point function of the fundamental vertex operator with the stress tensor: \begin{align} &\left\langle V(x_d)\, W^{(2)}(z) \right\rangle = \exp\left(\sum_{k>0}\, \frac{1}{k} \,\frac{t^k-1}{v^k+v^{-k}}\left(\frac{x_d}{z}\right)^k\right)\nonumber\\ &\qquad\qquad+ \exp\left(-\sum_{k>0}\, \frac{1}{k} \,\frac{t^k-1}{v^k+v^{-k}}\left(\frac{v^2\,x_d}{z}\right)^k\right)\nonumber\\ &\;\;\;= \exp\left(-\sum_{k>0}\, \frac{1}{k} \,\frac{t^k-1}{v^k+v^{-k}}\left(\frac{v^2\,x_d}{z}\right)^k\right)\cdot\left(\frac{1-v\, x_d/z}{1-t\,v\,x_d/z}+1\right)\nonumber\\ &\;\;\;= B\left(x_d, z\right)\cdot\left(\frac{1-v\, x_d/z}{1-t\,v\,x_d/z}+1\right) \end{align} In the first line, we used the commutator \begin{align} [w[k], w[n]] = {1\over k} (q^{k\over 2} - q^{-{k\over 2}})(t^{{k\over 2} }-t^{-{k\over 2} })\frac{1}{v^{k}+ v^{-k}}\delta_{k, -n} \; , \end{align} which is dual to the relation \eqref{A1commutator}. In the second line, we factored out the second exponential and resummed the ratio: one first combines the exponents using $(x_d/z)^k+(v^2\,x_d/z)^k = v^k\left(v^k+v^{-k}\right)(x_d/z)^k$, which cancels the $(v^k+v^{-k})$ denominator, and then applies the identity $\exp(-\sum_{k>0}\frac{x^k}{k})=(1-x)$. In the third line, we gave a name to the overall exponential factor, \beq\label{BdefinedA1} B\left(x_d, z\right)\equiv \exp\left(-\sum_{k>0}\, \frac{1}{k} \,\frac{t^k-1}{v^k+v^{-k}}\left(\frac{v^2\,x_d}{z}\right)^k\right) \; . \eeq All in all, the normalized ${\cW}_{q,t}(A_1)$ correlator comes out to be proportional to the 3d vortex character, \begin{align}\label{A1correlatoris3dindex} \frac{\left\langle \psi'\left|\prod_{d=1}^{N_F} V(x_d)\; Q^{N}\; W^{(2)}(z) \right| \psi \right\rangle}{\left\langle \psi'\left|\prod_{d=1}^{N_F} V(x_d)\; Q^{N} \right| \psi \right\rangle} = B(\{x_d\}, z)\, \left[\widetilde{\chi}\right]^{(1)}_{3d}(z)\; , \end{align} with \beq\label{BdefinedA1again} B\left(\{x_d\}, z\right)\equiv\prod_{d=1}^{N_F} \exp\left(-\sum_{k>0}\, \frac{1}{k} \,\frac{t^k-1}{v^k+v^{-k}}\left(\frac{v^2\,x_d}{z}\right)^k\right) \; . \eeq As explained in the main text, this factor can be naturally canceled out in the Ding-Iohara-Miki formalism, where it arises as an extra $U(1)$ due to an auxiliary Heisenberg algebra \cite{Mironov:2016yue}. The non-perturbative Schwinger-Dyson equation for $G^{3d}$ manifests itself here as a Ward identity. It is interpreted as a statement about the regularity in the fugacity $z$ of the correlator $\left\langle[\ldots]\, W^{(2)}(z)\right\rangle$.\\ It is straightforward to generalize this discussion to the case of a Wilson loop in a higher spin representation, by considering a defect group $\widehat{G}_{defect}=U(L)$. This corresponds to considering a Wilson loop valued in the representation $\textbf{N}\otimes\ldots\otimes\textbf{N}$ of $SU(N)$, where the fundamental representation $\textbf{N}$ is tensored $L$ times with itself. This is a flavor Wilson loop after Higgsing. The JK-residue prescription dictates that for each $\rho\in\{1, 2, \ldots, L\}$, the contours of the quantum mechanics should enclose a pole at \begin{equation} \phi_I-M_\rho-\epsilon_+=0 \; .
\end{equation} Once again, the partition function can be expressed as a $qq$-character of $U_q(\widehat{A_1})$, with highest weight $[L]$ (the spin $L/2$ representation). In the $q$-Liouville picture, one would simply consider a deformed $\cW_{q,t}(A_1)$-algebra correlator with $L$ insertions of the deformed stress tensor: \beq\label{A1correlatoris3dindexMORE} \frac{\left\langle \psi'\left|\prod_{d=1}^{N_F} V(x_d)\; Q^{N}\; \prod_{\rho=1}^{L} W^{(2)}(z_{\rho}) \right| \psi \right\rangle}{\left\langle \psi'\left|\prod_{d=1}^{N_F} V(x_d)\; Q^{N} \right| \psi \right\rangle} = B(\{x_d\}, \{z_\rho\})\, \left[\widetilde{\chi}\right]^{(L)}_{3d}(\{z_\rho\})\; . \eeq \vspace{15mm} ------- \emph{Defects of the $A_1$ $(2,0)$ Little String} -------\\ Let $X$ be a resolved $A_1$ singularity, and consider type IIB string theory on $X\times\cC\times\mathbb{C}_q\times\mathbb{C}_t$, with $\cC= \mathbb{R} \times S^1(R)$ an infinite cylinder of radius $R$, and $\mathbb{C}_q$ and $\mathbb{C}_t$ two complex lines. We introduce $N$ D3$_{gauge}$ branes wrapping the compact 2-cycle $S$ of $X$ and $\mathbb{C}_q$. We further introduce $N_F$ D3$_{flavor}$ branes wrapping the dual non-compact 2-cycle $S^*$ and $\mathbb{C}_q$. We also add to this background $L$ D3$_{defect}$ branes wrapping that same 2-cycle $S^*$ and $\mathbb{C}_t$. This background preserves 4 supercharges. We send the string coupling to $g_s\rightarrow 0$; the tensions of the various D3 branes survive in the limit. Then, this amounts to studying the $(2,0)$ $A_1$ little string on $\cC\times\mathbb{C}_q\times \mathbb{C}_t$ in the presence of various codimension-4 defects. At energies below the string scale, the dynamics are fully captured by the theory on the D3$_{gauge}$ branes: the effective theory on the branes is the 3d gauge theory $G^{3d}$, with gauge group $G=U(N)$, defined on the manifold $\mathbb{C}_q\times S^1(\widehat{R})$. Note that this is the T-dual circle to the original circle $S^1(R)$ of the cylinder, meaning $\widehat{R}=1/(m^2_s\, R)$. The D3$_{flavor}$ branes realize the fundamental matter content $G_F=U(N_F)$. From the 3d gauge theory point of view, the $L$ D3$_{defect}$ branes make up a 1/2-BPS Wilson loop wrapping $S^1(\widehat{R})$ and sitting at the origin of $\mathbb{C}_q$. Let us focus on the case $L=1$, which fixes the Wilson loop representation to be the fundamental one. The index of the $(2,0)$ little string in this background localizes to the 3d/1d half-index \eqref{character1dA1letsgo} of the 3d gauge theory: \begin{align}\label{character1dA1letsgoagain} \left[\widetilde{\chi}\right]^{(1)}_{\text{D3}_g, \text{D3}_f, \text{D3}_d}(z)=\frac{1}{\left[{\chi}\right]^{(0)}_{\text{D3}_g, \text{D3}_f}}\left[\prod_{d=1}^{N_F}\frac{1- v\, x_{d}/z}{1- t\, v\, x_{d}/z}\left\langle \widetilde{Y}_{3d}(z) \right\rangle+ \left\langle\frac{1}{\widetilde{Y}_{3d}(z\, v^{-2})}\right\rangle\right] \, . \end{align} Up to an overall normalization, this also happens to be the $q$-Liouville correlator \eqref{A1correlatoris3dindex} on the cylinder, see Figure \ref{fig:cylinderbranes}. \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 3cm},clip,width=0.99\textwidth]{cylinderbranes} \vspace{-40pt} \caption{Example of a correlator in $q$-Liouville, along with the corresponding D-branes at points on the cylinder.
The specific correlator pictured here is $\langle \psi' | \prod_{d=1}^2 V(x_d) \, (Q)^{3} \, W^{(2)}(z) | \psi \rangle$.} \label{fig:cylinderbranes} \end{figure} We break $G$ by freezing the D3$_{gauge}$ moduli, and reorganize the branes to make up a set of D3'$_{flavor}$ branes exclusively. We turn on the period $\int_{S}\omega_{I}>0$, which is the 3d FI parameter, and study the vortex solutions on the Higgs branch of $G^{3d}$. These are D1$_{vortex}$ branes wrapping the 2-cycle $S$ in the $(2,0)$ string. The partition function localizes to the Witten index \eqref{A1pure} of the quantum mechanics $T^{1d}$ on the D1$_{vortex}$ branes. Up to an overall normalization, this is again the same vortex character: \begin{align} \label{A1pureletsgoagain} \left[\widetilde{\chi}\right]^{(1)}_{\text{D3}'_f, \text{D3}_d, \text{D1}_v}(z) = \frac{1}{\left[{\chi}\right]^{(0)}_{\text{D3}'_f, \text{D1}_v}}\left[ \left\langle Y_{1d}(z) \right\rangle + \widetilde{\fq} \; \prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\left\langle\frac{1}{Y_{1d}(z\,v^{-2})}\right\rangle\right]. \end{align} \vspace{8mm} ------- \emph{Seiberg Duality of the Vortex Character} -------\\ Let us first go back to the index of the quantum mechanics $T^{1d}_{pure}$, in the absence of the Wilson loop, which we rewrite here for convenience: {\allowdisplaybreaks \begin{align} \label{vortexintegralA1pureSeiberg} &\left[\chi\right]^{(0)}_{1d} =\sum_{k=0}^{\infty}\frac{e^{\zeta_{3d}\, k}}{k!} \oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\; , \\ &Z_{pure, vec} = \frac{\prod_{\substack{I\neq J\\ I,J=1}}^{k}\sh\left(\phi_I-\phi_J\right)}{\prod_{I, J=1}^{k} \sh\left(\phi_{I}-\phi_{J}+\E_2 \right)}\nonumber\\ &Z_{pure, adj} = \prod_{I, J=1}^{k}\frac{\sh\left(\phi_I-\phi_J+\E_1+\E_2\right)}{\sh\left(\phi_{I}-\phi_{J}+\E_1 \right)}\nonumber\\ &Z_{pure, teeth} = \prod_{I=1}^{k}\prod_{i=1}^{N}\frac{\sh\left(\phi_I-m_i-\E_-+\E_2\right)}{\sh\left(\phi_{I}-m_i-\E_-\right)}\prod_{j=N+1}^{N_F}\frac{\sh\left(-\phi_I+m_j-\E_++\E_2\right)}{\sh\left(-\phi_{I}+ m_j-\E_+ \right)}\nonumber\, . \end{align}} Once again, we used the notations $\E_+=(\E_1+\E_2)/2$ and $\E_-=(\E_1-\E_2)/2$. We now study the Witten index in a chamber with a negative 1d FI parameter, $\zeta_{1d}<0$. We then apply the JK-residue prescription in that FI-chamber\footnote{Just as before, the JK-residue requires us to define an auxiliary vector $\eta$ of size $k$, and we once again choose $\eta=\zeta_{1d}$ to remove contributions from $\phi$-poles at $\pm \infty$. We find the choice $\eta=(-1,\ldots,-1)$ convenient here.}. For each vortex charge $k$, the poles that end up contributing make up the set $\mathcal{M}^{pure}_k$. The elements of this set satisfy: \begin{align} &\phi_I = \phi_J + \E_1 \; , \label{purepole1A1Seiberg}\\ &\phi_I = \phi_J + \E_2 \; , \label{purepole2A1Seiberg}\\ &\phi_I = m_j + \E_+ \; , \;\;\;\; j\in\{N+1,\ldots, N_F\} \; .\label{purepole3A1Seiberg} \end{align} The poles \eqref{purepole1A1Seiberg} arise from the adjoint chiral factor $Z_{pure, adj}$, the poles \eqref{purepole2A1Seiberg} arise from the vector multiplet factor $Z_{pure, vec}$, and the poles \eqref{purepole3A1Seiberg} arise from the flavor factor $Z_{pure, teeth}$. The last set of contours now encloses poles originating from the \emph{antifundamental} chiral multiplets, and none of the fundamental chiral multiplets. Furthermore, the residue at the locus \eqref{purepole2A1Seiberg} is zero.
Putting it all together, the various poles which end up contributing with nonzero residue are of the form: \beq\label{purepolesA1Seiberg} \phi_I = m_i + \E_+ + (s_i-1) \E_1 \; , \qquad \text{with}\;\; s_i\in\{1,\ldots,k_i\}\; ,\qquad i\in\{N+1,\ldots,N_F\}\; . \eeq In this notation, $(k_{N+1}, \ldots, k_{N_F})$ is a partition of $k$ into $N_F-N$ non-negative integers, and the pair of integers $(i, s_i)$ is assigned to one of the integers $I\in\{1,\ldots,k\}$ exactly once. Performing the residue integral, we get the closed-form expression: \begin{align}\label{examplepure1dA1Seiberg} \left[\chi\right]^{(0)}_{1d,\; \zeta_{1d}<0} =\sum_{k=0}^\infty \left(-e^{\zeta_{3d}}\right)^k &\sum_{\substack{\sum_i k_i=k \\ k_i\geq 0}} \; \left[\prod_{i,j=N+1}^{N_F}\prod_{s=1}^{k_i}\frac{\sh\left(m_i-m_j+\E_2- (s-k_j-1)\, \E_1\right)}{\sh\left(m_i-m_j - (s-k_j-1)\, \E_1\right)}\right]\nonumber\\ &\;\;\qquad\times\left[\prod_{i=N+1}^{N_F}\prod_{j=1}^{N}\prod_{p=1}^{k_i}\frac{\sh\left(m_i-m_j+\E_2 + p\, \E_1\right)}{\sh\left(m_i-m_j + p\, \E_1\right)}\right]\; . \end{align} After flipping the signs of the $N_F$ masses $\{m_d\}\rightarrow \{-m_d-\E_2\}$ (the shift by $-\E_2$ is inconsequential at this stage, but will matter later) and the sign of the 3d FI parameter $\widetilde{\fq}\rightarrow -\widetilde{\fq}$, we recognize the index of a 3d $U(N_F - N)$ gauge theory with $N_F$ fundamental flavors. As predicted, changing the sign of the 1d FI parameter in the quantum mechanics realizes 3d Seiberg duality \cite{Hwang:2017kmk}. For comparison, we rewrite the index of the $U(N)$ gauge theory with $N_F$ fundamental flavors we previously derived in the chamber $\zeta_{1d}>0$: \begin{align}\label{examplepure1dA1again} \left[\chi\right]^{(0)}_{1d, \; \zeta_{1d}>0} =\sum_{k=0}^\infty e^{\zeta_{3d}\, k}&\sum_{\substack{\sum_i k_i=k \\ k_i\geq 0}} \; \left[\prod_{i,j=1}^{N}\prod_{s=1}^{k_i}\frac{\sh\left(m_i-m_j+\E_2- (s-k_j-1)\, \E_1\right)}{\sh\left(m_i-m_j - (s-k_j-1)\, \E_1\right)}\right]\nonumber\\ &\;\;\qquad\times\left[\prod_{i=N+1}^{N_F}\prod_{j=1}^{N}\prod_{p=1}^{k_j}\frac{\sh\left(m_i-m_j+\E_2 + p\, \E_1\right)}{\sh\left(m_i-m_j + p\, \E_1\right)}\right]\; . \end{align} Having reviewed the pure case, let us now introduce the Wilson loop. Recall that the Witten index of the quantum mechanics $T^{1d}$ now reads: {\allowdisplaybreaks \begin{align} \label{vortexintegralA1Seiberg} &\left[\chi\right]^{(1)}_{1d} =\sum_{k=0}^{\infty}\frac{e^{\zeta_{3d}\, k}}{k!} Z_{defect, \varnothing}\;\;\nonumber\\ &\qquad\qquad\qquad\qquad\;\;\times \oint_{\mathcal{M}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\cdot Z_{defect, k} \; , \\ &Z_{defect, \varnothing} = \prod_{i=1}^{N} \sh\left(m_i -M +\E_2 \right)\, ,\nonumber\\ &Z_{defect, k} = \prod_{I=1}^{k} \frac{\sh\left(\phi_I-M- \E_- \right)\, \sh\left(-\phi_I + M- \E_-\right)}{\sh\left(\phi_I-M- \E_+\right)\, \sh\left(-\phi_I + M - \E_+ \right)}\, .\nonumber \end{align}} In the FI-chamber $\zeta_{1d}<0$, we have a new pole at the locus \begin{align}\label{newpoleA1Seiberg} \phi_I=M - \epsilon_+ \; ,\;\;\;\text{for some}\; I\in\{1,\ldots,k\} \; . \end{align} No other pole depending on $M$ exists, by the same arguments invoked in the case $\zeta_{1d}>0$. For each vortex charge $k$, the set of poles $\mathcal{M}_k$ is therefore the set $\mathcal{M}^{pure}_k$, augmented by the pole \eqref{newpoleA1Seiberg}.
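Returning briefly to the closed-form expression \eqref{examplepure1dA1Seiberg}: as a quick check of the combinatorics (a worked example, not part of the original derivation), the $k=1$ term is obtained by setting a single $k_i=1$ and all other vortex numbers to zero, which gives
\begin{align*}
-e^{\zeta_{3d}}\sum_{i=N+1}^{N_F}\; \frac{\sh\left(\E_1+\E_2\right)}{\sh\left(\E_1\right)}\, \prod_{\substack{j=N+1 \\ j\neq i}}^{N_F}\frac{\sh\left(m_i-m_j+\E_2\right)}{\sh\left(m_i-m_j\right)}\, \prod_{j=1}^{N}\frac{\sh\left(m_i-m_j+\E_1+\E_2\right)}{\sh\left(m_i-m_j+\E_1\right)}\; ,
\end{align*}
one term per antifundamental flavor, as expected from the one-vortex poles \eqref{purepolesA1Seiberg}. We now return to the Wilson loop.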
The (renormalized) codimension-2 $Y$-operator is defined as before, avoiding the new pole \eqref{newpoleA1Seiberg}: \begin{align} \label{Yoperator1dA1Seiberg} &\left\langle \left[Y_{1d}(M)\right]^{\pm 1}\right\rangle \equiv \sum_{k=0}^\infty\frac{e^{\zeta_{3d}\, k}}{k!}\oint_{\mathcal{M}^{pure}_k} \left[\frac{d\phi_I}{2\pi i}\right]Z_{pure, vec}\cdot Z_{pure, adj}\cdot Z_{pure, teeth}\cdot \left[Z_{defect, k}(M)\right]^{\pm 1}\, . \nonumber \end{align} We renormalize the index by the classical Wilson loop contribution and the index of the vortex quantum mechanics $T^{1d}_{pure}$ in the absence of the Wilson loop: \beq \label{normalizedA1Seiberg} \left[\widetilde{\chi}\right]^{(1)}_{1d, \;\zeta_{1d}<0}(z) \equiv \frac{\left[\chi\right]^{(1)}_{1d}(z)}{Z_{defect, \varnothing}(z)\cdot \left[{\chi}\right]^{(0)}_{1d,\; \zeta_{1d}<0}}\; , \eeq and derive at once the vortex character in the FI-chamber $\zeta_{1d}<0$, written in K-theoretic notation as: \begin{align} \label{A1pureSeiberg} \left[\widetilde{\chi}\right]^{(1)}_{1d, \;\zeta_{1d}<0}(z) = \frac{1}{\left[{\chi}\right]^{(0)}_{1d, \;\zeta_{1d}<0}}\left[ \left\langle Y_{1d}(z) \right\rangle - \widetilde{\fq} \; \prod_{i=1}^N\frac{1-f_i/z}{1-t\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t^2\,q^{-1}f_j/z}{1-t\,q^{-1}f_j/z}\left\langle\frac{1}{Y_{1d}(z\,v^{2})}\right\rangle\right]\; . \end{align} If we flip the sign of the $N_F$ masses $\{m_d\}\rightarrow \{-m_d-\E_2\}$ (or $\{f_d\}\rightarrow \{t^{-1}\,f^{-1}_d\}$ in the new variables), flip the defect fermion mass as $M\rightarrow -M$ (or $z\rightarrow z^{-1}$ in the new variables), and flip the 3d FI parameter as $\widetilde{\fq}\rightarrow -\widetilde{\fq}$, we recognize the vortex character of a 3d $U(N_F - N)$ gauge theory with $N_F$ fundamental flavors. Note the nontrivial rescaling of the $N_F$ flavor masses by $t^{-1}$. For comparison, we also rewrite the vortex character of the 3d $U(N)$ gauge theory with $N_F$ fundamental flavors \eqref{A1pure}: \begin{align} \label{A1pureagainagain} \left[\widetilde{\chi}\right]^{(1)}_{1d, \;\zeta_{1d}>0}(z) = \frac{1}{\left[{\chi}\right]^{(0)}_{1d, \;\zeta_{1d}>0}}\left[ \left\langle Y_{1d}(z) \right\rangle + \widetilde{\fq} \; \prod_{i=1}^N\frac{1-q\,t^{-1}\,f_i/z}{1-q\,f_i/z} \prod_{j=N+1}^{N_F} \frac{1-t\,f_j/z}{1-f_j/z}\left\langle\frac{1}{Y_{1d}(z\,v^{-2})}\right\rangle\right]\; . \end{align} As a last remark, note that the index $\left[\chi\right]^{(0)}_{1d, \; \zeta_{1d}>0}$ \eqref{examplepure1dA1again} in the positive FI chamber is not equal to the index $\left[\chi\right]^{(0)}_{1d, \; \zeta_{1d}<0}$ \eqref{examplepure1dA1Seiberg} in the negative FI chamber. This is because new states appear and contribute to the index at $\zeta_{1d}=0$, due to the opening of the Coulomb branch there. The vortex mechanics $T^{1d}_{pure}$ experiences wall-crossing, and the BPS index of the extra states can be computed explicitly by identifying the residues at asymptotic infinity, enclosing the $\phi$-poles of the integrand \eqref{vortexintegralA1pureSeiberg} at $\pm \infty$. A quick computation shows that such residues can be summed up exactly to give the contribution of $2N-N_F$ decoupled twisted hypermultiplets, which do exist on the Coulomb branch of $G^{3d}$ \cite{Gaiotto:2008ak,Kim:2012uz,Yaakov:2013fza,Gaiotto:2013bwa,Hwang:2017kmk}.
Explicitly, the wall-crossing contribution can be written as a plethystic exponential: \beq\label{extrahyper} \text{PE}\left[\frac{\sh(2\E_+)\; \sh((2N-N_F)\E_2)}{\sh(\E_1)\;\sh(\E_2)}\; \widetilde{\fq}\right] \; . \eeq Note that such a contribution vanishes when $N_F = 2 N$, in which case the indices agree: $\left[\chi\right]^{(0)}_{1d, \; \zeta_{1d}<0}= \left[\chi\right]^{(0)}_{1d, \; \zeta_{1d}>0}$. We can carry out the same computation for the index $\left[\chi\right]^{(1)}_{1d}$ in the presence of the Wilson loop \eqref{vortexintegralA1Seiberg}, to find that the extra contributions due to $\phi$-poles at $\pm \infty$ are the same as above: there are $2N-N_F$ extra decoupled twisted hypermultiplets, resulting in a decoupled factor \eqref{extrahyper}. Because the vortex character observable is the index $\left[\chi\right]^{(1)}_{1d}$ normalized by the pure index $\left[\chi\right]^{(0)}_{1d}$, the twisted hypermultiplet contributions in any case cancel out in our context. \section*{Acknowledgments} We thank Mina Aganagic for remarks made at a UC Berkeley String-Math Seminar, and Nikita Nekrasov for elucidating some aspects of his instanton $qq$-character construction. The research of N. H. is supported by the Simons Center for Geometry and Physics. \vspace{16mm} \begin{appendix} \section{Some Examples of Vortex Characters} We write explicit expressions for the vortex $qq$-character observables of a few 3d gauge theories, in the 3d/1d half-index formalism. It is straightforward to write the observables in the quantum mechanics or $q$-Toda variables instead, if one wishes. All characters should be normalized by the pure index $\left[{\chi}\right]^{(0,\ldots,0)}_{3d}$, which we omit here so as not to overburden the expressions. \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 0cm},clip,width=0.65\textwidth]{Anexamples} \vspace{-15pt} \caption{The $T_\rho[SU(N^{(n+1)})]$ theory, with a Wilson loop defect producing the first fundamental vortex character (top), and a loop defect producing the $n$-th fundamental vortex character (bottom).} \label{fig:Anexamples} \end{figure} For the $T_\rho[SU(N^{(n+1)})]$ theory at the top of Figure \ref{fig:Anexamples}, we compute: \begin{align} &\left[\widetilde{\chi}\right]^{(1,0,\ldots,0)}_{3d}(z) =\prod_{d=1}^{N^{(n+1)}}\frac{1- v^n\, x_{d}/z}{1- t\, v^n\, x_{d}/z}\left\langle\widetilde{Y}^{(1)}_{3d}(z) \right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)} \, \prod_{d=1}^{N^{(n+1)}}\frac{1- v^n\, x_{d}/z}{1- t\, v^n\, x_{d}/z} \left\langle\frac{\widetilde{Y}^{(2)}_{3d}(z\, v^{-1})}{\widetilde{Y}^{(1)}_{3d}(z\, v^{-2})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\widetilde{\fq}^{(2)} \,\prod_{d=1}^{N^{(n+1)}}\frac{1- v^n\, x_{d}/z}{1- t\, v^n\, x_{d}/z} \left\langle\frac{\widetilde{Y}^{(3)}_{3d}(z\, v^{-2})}{\widetilde{Y}^{(2)}_{3d}(z\, v^{-3})}\right\rangle\nonumber\\ &\qquad\qquad+\ldots\nonumber\\ &\qquad\qquad+\prod_{a=1}^n \widetilde{\fq}^{(a)}\, \left\langle\frac{1}{\widetilde{Y}^{(n)}_{3d}(z\, v^{-n-1})}\right\rangle\; .\label{3dpartitionfunctionAmSIMPLE1} \end{align} For the $T_\rho[SU(N^{(n+1)})]$ theory at the bottom of Figure \ref{fig:Anexamples}, we compute: \begin{align} &\left[\widetilde{\chi}\right]^{(0,\ldots,0,1)}_{3d}(z) =\prod_{d=1}^{N^{(n+1)}}\frac{1- v\, x_{d}/z}{1- t\, v\, x_{d}/z}\left\langle\widetilde{Y}^{(n)}_{3d}(z) \right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(n)} \,\left\langle\frac{\widetilde{Y}^{(n-1)}_{3d}(z\, v^{-1})}{\widetilde{Y}^{(n)}_{3d}(z\, v^{-2})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(n)}\widetilde{\fq}^{(n-1)} \, \left\langle\frac{\widetilde{Y}^{(n-2)}_{3d}(z\, v^{-2})}{\widetilde{Y}^{(n-1)}_{3d}(z\, v^{-3})}\right\rangle\nonumber\\ &\qquad\qquad+\ldots\nonumber\\ &\qquad\qquad+\prod_{a=1}^n \widetilde{\fq}^{(a)}\, \left\langle\frac{1}{\widetilde{Y}^{(1)}_{3d}(z\, v^{-n-1})}\right\rangle\; .\label{3dpartitionfunctionAmSIMPLE2} \end{align} \begin{figure}[h!] \centering \includegraphics[trim={0 0 0 3cm},clip,width=0.75\textwidth]{D4example} \vspace{-15pt} \caption{A $D_4$ theory with fundamental matter on node 3, with a Wilson loop defect producing the first fundamental vortex character.} \label{fig:D4example} \end{figure} For the $D_4$ theory in Figure \ref{fig:D4example}, we compute: \begin{align} &\left[\widetilde{\chi}\right]^{(1,0,0,0)}_{3d}(z) =\prod_{d=1}^{N^{(3)}_F}\frac{1- v^3\, x_{d}/z}{1- t\, v^3\, x_{d}/z}\left\langle\widetilde{Y}^{(1)}_{3d}(z) \right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)} \, \prod_{d=1}^{N^{(3)}_F}\frac{1- v^3\, x_{d}/z}{1- t\, v^3\, x_{d}/z} \left\langle\frac{\widetilde{Y}^{(2)}_{3d}(z\, v^{-1})}{\widetilde{Y}^{(1)}_{3d}(z\, v^{-2})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\widetilde{\fq}^{(2)} \,\prod_{d=1}^{N^{(3)}_F}\frac{1- v^3\, x_{d}/z}{1- t\, v^3\, x_{d}/z} \left\langle\frac{\widetilde{Y}^{(3)}_{3d}(z\, v^{-2})\, \widetilde{Y}^{(4)}_{3d}(z\, v^{-2})}{\widetilde{Y}^{(2)}_{3d}(z\, v^{-3})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\widetilde{\fq}^{(2)}\widetilde{\fq}^{(3)} \, \left\langle\frac{\widetilde{Y}^{(4)}_{3d}(z\, v^{-2})}{\widetilde{Y}^{(3)}_{3d}(z\, v^{-4})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\widetilde{\fq}^{(2)}\widetilde{\fq}^{(4)} \,\prod_{d=1}^{N^{(3)}_F}\frac{1- v^3\, x_{d}/z}{1- t\, v^3\, x_{d}/z} \left\langle\frac{\widetilde{Y}^{(3)}_{3d}(z\, v^{-2})}{\widetilde{Y}^{(4)}_{3d}(z\, v^{-4})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\widetilde{\fq}^{(2)}\widetilde{\fq}^{(3)}\widetilde{\fq}^{(4)} \, \left\langle\frac{\widetilde{Y}^{(2)}_{3d}(z\, v^{-3})}{\widetilde{Y}^{(3)}_{3d}(z\, v^{-4})\, \widetilde{Y}^{(4)}_{3d}(z\, v^{-4})}\right\rangle\nonumber\\ &\qquad\qquad + \widetilde{\fq}^{(1)}\left[\widetilde{\fq}^{(2)}\right]^2\widetilde{\fq}^{(3)}\widetilde{\fq}^{(4)} \, \left\langle\frac{\widetilde{Y}^{(1)}_{3d}(z\, v^{-4})}{\widetilde{Y}^{(2)}_{3d}(z\, v^{-5})}\right\rangle\nonumber\\ &\qquad\qquad + \left[\widetilde{\fq}^{(1)}\right]^2\left[\widetilde{\fq}^{(2)}\right]^2\widetilde{\fq}^{(3)}\widetilde{\fq}^{(4)} \, \left\langle\frac{1}{\widetilde{Y}^{(1)}_{3d}(z\, v^{-6})}\right\rangle\; . \label{3dpartitionfunctionD4} \end{align} \end{appendix}
\section{Introduction} Privacy-preserving machine learning is critical to the deployment of data-driven solutions in applications involving sensitive data. Differential privacy (DP) \cite{dwork2006calibrating} is a de facto standard for designing algorithms with strong privacy guarantees for individual data. Large-scale industrial deployments -- e.g.\ by Apple \cite{appledp}, Google \cite{erlingsson2014rappor} and the US Census Bureau \cite{abowd2018us} -- and general purpose DP tools for machine learning \cite{tfprivacy} and data analysis \cite{DBLP:journals/corr/abs-1907-02444,wilson2019differentially} exemplify that existing methods are well-suited for simple data analysis tasks (e.g.\ averages, histograms, frequent items) and batch learning problems where the training data is available beforehand. While these techniques cover a large number of applications in the central and (non-interactive) local models, they are often insufficient to tackle machine learning applications involving other threat models. This includes federated learning problems \cite{kairouz2019advances,li2019federated} where devices cooperate to learn a joint model while preserving their individual privacy, and, more generally, interactive learning in the spirit of the reinforcement learning (RL) framework \cite{sutton2018reinforcement}. In this paper we contribute to the study of reinforcement learning through the lens of differential privacy. We consider sequential decision-making tasks where users interact with an agent for the duration of a fixed-length episode. At each time-step the current user reveals a state to the agent, which responds with an appropriate action and receives a reward generated by the user. As in standard RL, the goal of the agent is to learn a policy that maximizes the rewards provided by the users. However, our focus is on situations where the states and rewards that users provide to the agent might contain sensitive information. While users might be ready to reveal such information to an agent in order to receive a service, we assume they want to prevent third parties from making unintended inferences about their personal data. This includes external parties who might have access to the policy learned by the agent, as well as malicious users who can probe the agent's behavior to trigger actions informed by its interactions with previous users. For example, \cite{DBLP:conf/atal/PanWZLYS19} recently showed how RL policies can be probed to reveal information about the environment where the agent was trained. The question we ask in this paper is: how should what an agent learns from an episode be balanced against the potential information leakage through agent behaviors informed by that learning? We answer the question by making two contributions to the analysis of the privacy-utility trade-off in reinforcement learning: (1) we provide the first privacy-preserving RL algorithm with formal accuracy guarantees, and (2) we provide lower bounds on the regret and number of sub-optimal episodes for any differentially private RL algorithm. To measure the privacy provided by episodic RL algorithms we introduce a notion of episodic joint differential privacy (JDP) under continual observation, a variant of joint differential privacy \cite{DBLP:conf/innovations/KearnsPRU14} that captures the potential information leakages discussed above.
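The interaction model just described can be made concrete with a short schematic. The following sketch (hypothetical interfaces; not the algorithm of this paper) only fixes the shape of the protocol: user $t$ supplies the states and rewards of one $H$-step episode, the agent replies with actions, and at the end a policy is released. The privacy-relevant outputs are the actions shown to the \emph{other} users together with the released policy, which is exactly what joint differential privacy constrains.
\begin{verbatim}
class EpisodicProtocol:
    """Schematic of the episodic RL interaction (interfaces hypothetical).

    User t supplies the states and rewards of one H-step episode; the
    agent replies with actions. The outputs visible beyond user t are
    the other users' action sequences and the final released policy."""

    def __init__(self, agent, H):
        self.agent, self.H = agent, H

    def run(self, users):
        traces = []
        for user in users:                  # one episode per user
            s = user.initial_state()
            actions = []
            for h in range(self.H):
                a = self.agent.act(s, h)    # action revealed to this user
                actions.append(a)
                s, r = user.respond(a, h)   # state/reward stay with the agent
                self.agent.update(s, a, r, h)
            traces.append(actions)
        return traces, self.agent.release_policy()
\end{verbatim}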
\paragraph{Overview of our results.} We study reinforcement learning in a fixed-horizon episodic Markov decision process with $S$ states, $A$ actions, and episodes of length $H$. We first provide a meaningful privacy formulation for this general learning problem based on a natural relaxation of differential privacy: joint differential privacy (JDP) under continual observation, controlled by a privacy parameter $\epsilon \geq 0$ (larger $\epsilon$ means less privacy). Under this formulation, we give the first known RL sample complexity and regret upper and lower bounds with formal privacy guarantees. First, we present a new algorithm, \texttt{PUCB}, which satisfies $\varepsilon$-JDP in addition to two utility guarantees: it finds an $\alpha$-optimal policy with a sample complexity of $$\tilde O\left( \frac{SAH^4}{\alpha^2} + \frac{S^2AH^4 }{\varepsilon\alpha}\right) \enspace,$$ and achieves a regret rate of $$\tilde O\left(H^2 \sqrt{SAT} + \frac{SAH^3 + S^2 A H^3}{\varepsilon} \right)$$ over $T$ episodes. In both of these bounds, the first terms $\frac{SAH^4}{\alpha^2}$ and $H^2 \sqrt{SAT}$ are the non-private sample complexity and regret rates, respectively. The privacy parameter $\varepsilon$ only affects the lower order terms -- for sufficiently small approximation $\alpha$ (concretely, once $\alpha \lesssim \varepsilon/S$) and sufficiently large $T$ (once $T \gtrsim S^3AH^2/\varepsilon^2$), the ``cost'' of privacy becomes negligible. We also provide new lower bounds for $\varepsilon$-JDP reinforcement learning. Specifically, by incorporating ideas from existing lower bounds for private learning into constructions of hard MDPs, we prove a sample complexity bound of $$\tilde \Omega\left(\frac{SAH^2}{\alpha^2} + \frac{SAH}{\varepsilon\alpha} \right)$$ and a regret bound of $$\tilde \Omega\left( \sqrt{HSAT} + \frac{SAH}{\varepsilon}\right) \enspace.$$ As expected, these lower bounds match our upper bounds in the dominant term (ignoring $H$ and polylogarithmic factors). We also see that the utility cost for privacy necessarily grows linearly with the size of the state space, although this does not match our upper bounds. Closing this gap is an important direction for future work. \subsection{Related Work} Most previous works on differentially private interactive learning with partial feedback concentrate on bandit-type problems, including on-line learning with bandit feedback \cite{thakurta2013nearly,agarwal2017price}, multi-armed bandits \cite{mishra2015nearly,tossou2016algorithms,tossou2017achieving,tossou2018differential}, and linear contextual bandits \cite{DBLP:conf/icml/NeelR18,shariff2018differentially}. These works generally differ in the assumed reward models under which utility is measured (e.g.\ stochastic, oblivious adversarial, adaptive adversarial) and the concrete privacy definition being used (e.g.\ privacy when observing individual actions or sequences of actions, and privacy of reward or reward and observation in the contextual setting). \cite{basu2019differential} provides a comprehensive account of different privacy definitions used in the bandit literature. Much less work has addressed DP for general RL. For policy evaluation in the batch case, \cite{DBLP:conf/icml/BalleGP16} propose regularized least-squares algorithms with output perturbation and bound the excess risk due to the privacy constraints. For the control problem with private rewards and public states, \cite{NIPS2019_9310} give a differentially private Q-learning algorithm with function approximation.
On the RL side, as we are initiating the study of RL with differential privacy, we focus on the well-studied tabular setting. While a number of algorithms with utility guarantees and lower bound constructions are known for this setting~\cite{Kakade2003,azar2017minimax,dann2017unifying}, we are not aware of any work addressing the privacy issues that are fundamental in high-stakes applications. \section{Lower Bounds}\label{sec:lower} In this section we prove the following lower bounds on the sample complexity and regret for any PAC RL agent providing joint differential privacy. \begin{theorem}[PAC Lower Bound]\label{thm:lowerbound} Let $\cM$ be an RL agent satisfying $\varepsilon$-JDP. Suppose that $\cM$ is $(\alpha, \beta)$-PAC for some $\beta \in (0,1/8)$. Then, there exists a fixed-horizon episodic MDP where the number of episodes until the algorithm's policy is $\alpha$-optimal with probability at least $1 - \beta$ satisfies \begin{align*} \Ex{n_\cM} \geq \Omega\left( \frac{SAH^2}{\alpha^2} + \frac{SAH}{\alpha\epsilon} \ln\left(\frac{1}{\beta}\right)\right) \enspace. \end{align*} \end{theorem} \begin{theorem}[Private Regret Lower Bound]\label{thm:regretlower} For any $\varepsilon$-JDP algorithm $\cM$ there exists an MDP $M$ with $S$ states and $A$ actions over $H$ time steps per episode such that for any initial state $s\in\cS$ the expected regret of $\cM$ after $T$ episodes is \begin{align*} \Ex{\mathrm{Regret}(T)} = {\Omega}\left (\sqrt{HSA T} + \frac{S A H\log(T)}{\varepsilon}\right) \end{align*} for any $T \geq S^{1.1}$. \end{theorem} Here we present the proof steps for the sample complexity lower bound in Theorem~\ref{thm:lowerbound}. The proof for the regret lower bound in Theorem~\ref{thm:regretlower} follows from a similar argument and is deferred to the appendix. To obtain Theorem~\ref{thm:lowerbound}, we go through two intermediate lower bounds: one for private best-arm identification in multi-armed bandits problems (Lemma~\ref{lem:privMAB}), and one for private RL in a relaxed scenario where the initial state of each episode is considered public information (Lemma~\ref{lem:lowerboundpublic}). At first glance our arguments look similar to other techniques that provide lower bounds for RL in the non-private setting by leveraging lower bounds for bandits problems, e.g.\ \cite{strehl2009reinforcement,dann2015sample}. However, getting this strategy to work in the private case is significantly more challenging because one needs to ensure the notions of privacy used in each of the lower bounds are compatible with each other. Since this is the main challenge in proving Theorem~\ref{thm:lowerbound}, we focus our presentation on the aspects that make the private lower bound argument different from the non-private one, and defer the rest of the details to the appendix. \subsection{Lower Bound for Best-Arm Identification}\label{sec:mablb} The first step is a lower bound for best-arm identification for differentially private multi-armed bandits algorithms. The protocol involves mechanisms $\cM$ interacting with users via the MAB protocol described in \cref{alg:mabprotocol}, where we assume arms $a^{(t)}$ come from some finite space $\cA$ and rewards are binary, $r^{(t)} \in \{0,1\}$. Recall that $T$ denotes the total number of users. Our lower bound applies to mechanisms for this protocol that satisfy standard DP in the sense that the adversary has access to all the outputs $\cM(U) = (a^{(1)}, \ldots, a^{(T)},\hat{a})$ produced by the mechanism.
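The lower bounds below apply to \emph{any} $\epsilon$-DP mechanism for this protocol. For intuition, here is one concrete (if naive) instance -- a sketch, not the construction analyzed in this section: round-robin exploration, whose arm choices are data-independent, followed by report-noisy-max over the per-arm reward sums. Since a single user changes exactly one sum by at most one, releasing the argmax of the Laplace$(1/\epsilon)$-noised sums is $\epsilon$-DP by the standard report-noisy-max argument, and the full output trace $(a^{(1)}, \ldots, a^{(T)},\hat{a})$ inherits the guarantee.
\begin{verbatim}
import numpy as np

def dp_best_arm(pull, k, T, eps, rng):
    """Toy eps-DP best-arm identification for the protocol above.
    `pull(a)` asks the next user for a 0/1 reward on arm a."""
    sums = np.zeros(k)
    for t in range(T):
        a = t % k                  # data-independent arm sequence
        sums[a] += pull(a)
    # report-noisy-max: one user moves one count by <= 1, so Laplace(1/eps)
    # noise on each count makes the released argmax eps-DP
    noisy = sums + rng.laplace(scale=1.0 / eps, size=k)
    return int(np.argmax(noisy))

# usage on a synthetic instance \bar{P} (hypothetical Bernoulli means)
rng = np.random.default_rng(1)
P = np.array([0.4, 0.5, 0.75])
a_hat = dp_best_arm(lambda a: rng.binomial(1, P[a]), k=3, T=6000,
                    eps=1.0, rng=rng)
\end{verbatim}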
\begin{definition} A MAB mechanism $\cM$ is $\epsilon$-DP if for any neighboring user sequences $U$ and $U'$ differing in a single user, and all events $E \subseteq \cA^{T+1}$ we have \begin{align*} \Pr[\cM(U) \in E] \leq e^{\epsilon} \Pr[\cM(U') \in E] \enspace. \end{align*} \end{definition} To measure the utility of a mechanism for performing \emph{best-arm identification} in MABs we consider a stochastic setting with independent arms. In this setting each arm $a \in \cA$ produces rewards following a Bernoulli distribution with expectation $\bar{P}_a$ and the goal is to identify with high probability an optimal arm $a^*$ with expected reward $\bar{P}_{a^*} = \max_{a \in \cA} \bar{P}_a$. A problem instance can be identified with the vector of expected rewards $\bar{P} = (\bar{P}_a)_{a \in \cA}$. \begin{algorithm}[h] \caption{MAB Protocol for Best-Arm Identification}\label{alg:mabprotocol} \KwIn{Agent $\cM$ and users $u_1, \ldots, u_T$} \For{$t \in [T]$}{ $\cM$ sends arm $a^{(t)}$ to $u_t$ \\ $u_t$ sends reward $r^{(t)}$ to $\cM$ } $\cM$ releases arm $\hat{a}$ \end{algorithm} The lower bound result relies on the following adaptation of the coupling lemma from \citep[Lemma 6.2]{karwa2017finite}. \begin{lemma}\label{lem:KarwaVadhanMAB} Fix any arm $a \in [k]$. Now consider any pair of MAB instances $\mu, \nu \in [0,1]^k$, both with $k$ arms and time horizon $T$, such that $\|\mu_a - \nu_a \|_{tv} < \alpha$ and $\|\mu_{a'} - \nu_{a'} \|_{tv} = 0$ for all $a' \neq a$. Let $R \sim B(\mu)^T$ and $Q \sim B(\nu)^T$ be the sequences of rewards over $T$ rounds sampled under $\mu$ and $\nu$ respectively, and let $\cM$ be any $\epsilon$-DP multi-armed bandit algorithm. Then, for any event $E$ such that under event $E$ arm $a$ is pulled fewer than $t$ times, \begin{align*} \prob{\cM,R}{E} \leq e^{6 \epsilon t \alpha}\prob{\cM,Q}{E} \end{align*} \end{lemma} \begin{lemma}[Private MAB Lower Bound]\label{lem:privMAB} Let $\cM$ be a MAB best-arm identification algorithm satisfying $\epsilon$-DP that succeeds with probability at least $1-\beta$, for some $\beta \in (0,1/4)$. For any MAB instance $\bar{P}$ and any $\alpha$-suboptimal arm $a$ with $\alpha > 0$ (i.e.\ $\bar{P}_a = \bar{P}_{a^*} - \alpha$), the number of times that $\cM$ pulls arm $a$ during the protocol satisfies \begin{align*} \Ex{n_a} > \frac{1}{24 \epsilon \alpha}\ln{\left(\frac{1}{4\beta}\right)} \enspace. \end{align*} \end{lemma} \begin{proof} Let $a^*$ be the optimal arm under $\bar{P}$ and $a$ an $\alpha$-suboptimal arm. We construct an alternative MAB instance $\bar{Q}$ by exchanging the rewards of $a$ and $a^*$: $\bar{Q}_a = \bar{P}_{a^*}$, $\bar{Q}_{a^*} = \bar{P}_a$, and the rest of the rewards are identical on both instances. Note that now $a^*$ is $\alpha$-suboptimal under $\bar{Q}$. Let $t_a = \frac{1}{24\epsilon \alpha}\ln{\left(\frac{1-2\beta}{2\beta}\right)}$ and let $n_a$ be the number of times the mechanism $\cM$ pulls arm $a$. We suppose that $\ex{\bar{P}}{n_a} \leq t_a$ and derive a contradiction. Define $A$ to be the event that arm $a$ is pulled fewer than $4 t_a$ times, that is $A := \{n_a \leq 4 t_a\}$. From Markov's inequality we have \begin{align} \label{eq:assump} t_a \geq \ex{\bar{P}}{n_a} &\geq 4 t_a \prob{\bar{P}}{n_a > 4 t_a} \\ \label{eq:A} & = 4 t_a\left(1- \prob{\bar{P}}{n_a \leq 4 t_a} \right) \enspace, \end{align} where the first inequality \eqref{eq:assump} comes from the assumption that $\ex{\bar{P}}{n_a} \leq t_a$. From \eqref{eq:A} it follows that $\prob{\bar{P}}{A} \geq 3/4$.
We also let $B$ be the event that arm $a^*$ is selected. Since arm $a^*$ is optimal under $\bar{P}$, our assumption on $\cM$ implies $\prob{\bar{P}}{B} \geq 1-\beta$. Now let $E$ be the event that both $A$ and $B$ occur, that is $E = A\cap B$. We combine the lower bounds on $\prob{\bar{P}}{A}$ and $\prob{\bar{P}}{B}$ to get a lower bound on $\prob{\bar{P}}{E}$. First we show that $\prob{\bar{P}}{B|A} \geq 3/4 -\beta$: \begin{align*} 1-\beta &\leq \prob{\bar{P}}{B|A}\prob{\bar{P}}{A} + \prob{\bar{P}}{B|A^c}\prob{\bar{P}}{A^c} \\ &\leq \prob{\bar{P}}{B|A} + \prob{\bar{P}}{A^c} \leq \prob{\bar{P}}{B|A} + 1/4 \enspace. \end{align*} By substituting the lower bounds for $\prob{\bar{P}}{A}$ and $\prob{\bar{P}}{B|A}$ we obtain: \begin{align*} \prob{\bar{P}}{E} &= \prob{\bar{P}}{A}\prob{\bar{P}}{B|A} \geq \frac{3}{4} \left(\frac{3}{4} - \beta \right) \enspace. \end{align*} On instance $\bar{Q}$ arm $a^*$ is suboptimal, hence we have that $\prob{\bar Q}{E} \leq \beta$. Now we apply the group privacy property (Lemma~\ref{lem:KarwaVadhanMAB}) where the number of observations is $4 t_a$ and $t_a = \frac{1}{24\varepsilon\alpha}\ln\left(\frac{1/2 -\beta}{\beta}\right)$ to obtain \begin{align} \frac{3}{4} \left(\frac{3}{4} - \beta \right) &\leq \prob{\bar{P}}{E} \leq e^{6\epsilon \alpha 4 t_a} \prob{\bar{Q}}{E} \notag \\ &\leq e^{6\epsilon \alpha 4 t_a} \beta = \frac{1}{2}-\beta \label{eq:grouplem} \enspace. \end{align} But $\frac{3}{4} \left(\frac{3}{4} - \beta \right)>\frac{1}{2}-\beta$ for $\beta \in (0,1/4)$, therefore \eqref{eq:grouplem} is a contradiction. \end{proof} \subsection{Lower Bound for RL with Public Initial State}\label{sec:rlpslb} To leverage the lower bound for private best-arm identification in the RL setting we first consider a simpler setting where the initial state of each episode is public information. This means that we consider agents $\cM$ interacting with a variant of the protocol in Algorithm~\ref{alg:rlprotocol} where each user $t$ releases their first state $s_1^{(t)}$ in addition to sending it to the agent. We model this scenario by considering agents whose inputs $(U,S_1)$ include the sequence of initial states $S_1 = (s_1^{(1)},\ldots,s_1^{(T)})$, and define the privacy requirements in terms of a different notion of neighboring inputs: two sequences of inputs $(U,S_1)$ and $(U',S_1')$ are $t$-neighboring if $u_{t'} = u'_{t'}$ for all $t' \neq t$ and $S_1 = S_1'$. That is, we do not expect to provide privacy in the case where the user that changes between $U$ and $U'$ also changes their initial state, since in this case making the initial state public already provides evidence that the user changed. Note, however, that $u_t$ and $u_t'$ can provide different rewards for actions taken by the agent on state $s_1^{(t)}$. \begin{definition}\label{def:jdppublic} A randomized RL agent $\cM$ is $\epsilon$-JDP under continual observation in the \emph{public initial state} setting if for all $t \in [T]$, all $t$-neighboring user-state sequences $(U,S_1)$, $(U',S_1')$, and all events $E \subseteq A^{H \times [T-1]} \times \Pi$ we have \begin{align*} \pr{\cM_{-t}(U,S_1) \in E } \leq e^\epsilon \pr{\cM_{-t}(U',S_1') \in E} \enspace.
\end{align*} \end{definition} \begin{figure}[h] \begin{equation*} \tikzfig{figures/hard_MDP2} \end{equation*} \caption{Class of hard MDP instances used in the lower bound.}\label{fig:hardMDP} \end{figure} We obtain a lower bound on the sample complexity of PAC RL agents that satisfy JDP in the public initial state setting by constructing the class of hard MDPs shown in Figure~\ref{fig:hardMDP}. An MDP in this class has state space $\cS \coloneqq [n] \cup \{+,-\}$ and action space $\cA \coloneqq \{0,\ldots,m\}$. On each episode, the agent starts on one of the initial states $\{1, \ldots, n\}$ chosen uniformly at random. On each of the initial states the agent has $m+1$ possible actions and transitions can only take it to one of two possible absorbing states $\{+,-\}$. Lastly, if the current state is either one of $\{ +, - \}$ then the only possible transition is a self loop, hence the agent remains in that state until the end of the episode. We assume that in these absorbing states the agent can only take a fixed action. Every action which transitions to state $+$ provides reward $1$, while actions transitioning to state $-$ provide reward $0$. In particular, in each episode the agent either receives reward $H$ or $0$. Such an MDP can be seen as consisting of $n$ parallel MAB problems. Each MAB problem determines the transition probabilities between the initial state $s \in\{1, \ldots, n\}$ and the absorbing states $\{+,-\}$. We index the possible MAB problems in each initial state by their optimal arm, which is always one of $\{0,\ldots,m\}$. We write $I_s \in \{0,\ldots,m\}$ to denote the MAB instance in initial state $s$, and define the transition probabilities such that $\pr{+|s,0} = 1/2+\alpha'/2$, $\pr{+|s,a'} = 1/2$ for all $a' \notin \{0, I_s\}$, and, for $I_s \neq 0$, $\pr{+|s,I_s} = 1/2 + \alpha'$. Here $\alpha'$ is a free parameter to be determined later. We succinctly represent an MDP in the class by identifying the optimal action (i.e.\ arm) in each initial state: $I \coloneqq (I_1,\ldots,I_n)$. To show that our MAB lower bounds imply lower bounds for an RL agent interacting with MDPs in this class, we prove that collecting the first action taken by the agent in all episodes $t$ with a fixed initial state $s_1^{(t)} = s \in [n]$ simulates the execution of an $\epsilon$-DP MAB algorithm. Let $\cM$ be an RL agent and $(U,S_1)$ a user-state input sequence with initial states from some set $\cS_1$. Let $\cM(U,S_1) = (\vec{a}^{(1)},\ldots,\vec{a}^{(T)},\pi) \in \cA^{H \times T} \times \Pi$ be the collection of all outputs produced by the agent on inputs $U$ and $S_1$. For every $s \in \cS_1$ we write $\cM_{1,s}(U,S_1)$ to denote the restriction of the previous trace to contain just the first action from all episodes starting with $s$, together with the action predicted by the policy at state $s$: \begin{align*} \cM_{1,s}(U,S_1) \coloneqq \left(a_1^{(t_{s,1})}, \ldots, a_1^{(t_{s,T_s})}, \pi(s)\right) \enspace, \end{align*} where $T_s$ is the number of occurrences of $s$ in $S_1$ and $t_{s,1}, \ldots, t_{s,T_s}$ are the indices of these occurrences. Furthermore, given $s \in \cS_1$ we write $U_s = (u_{t_{s,1}},\ldots,u_{t_{s,T_s}})$ to denote the set of users whose initial state equals $s$. \begin{lemma}\label{lem:jdptodp} Let $(U,S_1)$ be a user-state input sequence with initial states from some set $\cS_1$. Suppose $\mathcal{M}$ is an RL agent that satisfies $\varepsilon$-JDP in the public initial state setting.
Then, for any $s \in \cS_1$ the trace $\cM_{1,s}(U,S_1)$ is the output of an $\epsilon$-DP MAB mechanism on input $U_s$. \end{lemma} Using Lemmas~\ref{lem:privMAB} and~\ref{lem:jdptodp} together with a reduction from RL lower bounds to bandit lower bounds yields the second term in the following result. The first term follows directly from the non-private lower bound in \cite{dann2015sample}. \begin{lemma}\label{lem:lowerboundpublic} Let $\cM$ be an RL agent satisfying $\varepsilon$-JDP in the public initial state setting. Suppose that $\cM$ is $(\alpha, \beta)$-PAC for some $\beta \in (0,1/8)$. Then, there exists a fixed-horizon episodic MDP where the number of episodes until the algorithm's policy is $\alpha$-optimal with probability at least $1 - \beta$ satisfies \begin{equation*} \Ex{n_\cM} \geq \Omega\left( \frac{S A H^2}{\alpha^2} + \frac{S A H}{\alpha\epsilon}\ln\left(\frac{1}{\beta} \right)\right) \enspace. \end{equation*} \end{lemma} Finally, Theorem~\ref{thm:lowerbound} follows from Lemma~\ref{lem:lowerboundpublic} by observing that any RL agent $\cM$ satisfying $\varepsilon$-JDP also satisfies $\varepsilon$-JDP in the public initial state setting (see Lemma~\ref{lem:publicvsprivatestates}; the proof is in the appendix). \begin{lemma}\label{lem:publicvsprivatestates} Any RL agent $\cM$ satisfying $\varepsilon$-JDP also satisfies $\varepsilon$-JDP in the public initial state setting. \end{lemma} \section{Conclusion} In this paper, we initiate the study of differentially private algorithms for reinforcement learning. On the conceptual level, we formalize the privacy desiderata via the notion of joint differential privacy, under which the algorithm cannot strongly base future decisions on sensitive information from previous interactions. Under this formalism, we provide a JDP algorithm and establish both PAC and regret utility guarantees for episodic tabular MDPs. Our results show that the utility cost of privacy is asymptotically negligible in the high-accuracy regime. We also establish the first lower bounds for reinforcement learning with JDP. A natural direction for future work is to close the gap between our upper and lower bounds. A similar gap remains open for tabular RL \emph{without} privacy considerations, but the setting is more difficult with privacy, so it may be easier to establish a lower bound here. We look forward to pursuing this direction, and hope that progress will yield new insights into the non-private setting. Beyond the tabular setup considered in this paper, we believe that designing RL algorithms providing state and reward privacy in non-tabular settings is a promising direction for future work with considerable potential for real-world applications. \section{Upper Bound} For any fixed $s,a,h \in \cS\times\cA\times[H]$ we introduce the following notation to count events that occurred before episode $t$: $\widehat{n}_t(s,a,h)$ is the number of times the triple $(s,a,h)$ is visited, $\widehat{r}_t(s,a,h)$ is the cumulative reward sum, and $\widehat{m}_t(s,a,s',h)$ is the transition count from state $s$ to state $s'$ after taking action $a$. We use the following shorthand notation for the confidence intervals: \begin{align*} &\SamplingConfSign \coloneqq \SamplingConfVal \\ &\SamplingNoiseConfSign \coloneqq \SamplingNoiseConfVal \\ &\PrivacyConfSign \coloneqq \PrivacyConfVal \end{align*} where $E_{\eps} \coloneqq \frac{3}{\eps}H\log\left(\frac{2SAH+S^2AH}{\beta'}\right)\log\left(\Tmax\right)^{5/2}$.
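The $\log\left(\Tmax\right)^{5/2}$ overhead in $E_{\eps}$ is characteristic of binary-tree counters for continual release; presumably the private statistics $\widetilde{n}_t$, $\widetilde{r}_t$, $\widetilde{m}_t$ used below are maintained by one such counter per $(s,a,h)$ (respectively per $(s,a,s',h)$ for transitions). The following is a minimal sketch of such a counter; the exact bookkeeping and constants in the paper's mechanism may differ.
\begin{verbatim}
import numpy as np

class BinaryCounter:
    """Continual eps-DP counter (binary mechanism): after each of T updates
    it releases a running count with additive error O(log(T)^{3/2}/eps)
    w.h.p. Each increment is covered by at most L = O(log T) dyadic nodes,
    each noised with Laplace(L/eps), giving eps-DP for the whole stream."""

    def __init__(self, T, eps, rng):
        self.L = int(np.ceil(np.log2(max(T, 2)))) + 1
        self.scale = self.L / eps       # privacy budget split across levels
        self.alpha = np.zeros(self.L)   # exact partial sums per level
        self.noisy = np.zeros(self.L)   # their Laplace-noised versions
        self.t, self.rng = 0, rng

    def step(self, x):
        """Consume one increment x in [0, 1]; return the private count."""
        self.t += 1
        i = (self.t & -self.t).bit_length() - 1    # lowest set bit of t
        self.alpha[i] = self.alpha[:i].sum() + x   # close levels below i
        self.alpha[:i] = 0.0
        self.noisy[i] = self.alpha[i] + self.rng.laplace(scale=self.scale)
        self.noisy[:i] = 0.0
        # count at time t = sum of noisy nodes at the set bits of t
        return float(sum(self.noisy[j] for j in range(self.L)
                         if (self.t >> j) & 1))

# usage: one counter per (s,a,h); feed 0/1 visit indicators per episode
rng = np.random.default_rng(0)
counter = BinaryCounter(T=1024, eps=1.0, rng=rng)
for episode in range(1024):
    visited = int(rng.integers(0, 2))
    n_tilde = counter.step(visited)
\end{verbatim}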
We denote by $w_{t,h}(s,a)$ the probability that the state-action pair $(s,a)$ is visited at time step $h$ of episode $t$. \subsection{Error Bounds} \begin{lemma}\label{appex:lem:nfail} The event that there is an episode $t$ and $s,a,h\in \cS\times\cA\times[H]$ such that \begin{align*} \widehat{n}_t(s,a,h) < \frac{1}{2}\sum_{i<t} w_{i,h}(s,a) - \ln{\frac{5SAH}{\beta}} \end{align*} happens with probability at most $\beta/5$. \end{lemma} \begin{lemma}\label{appex:lem:rewardfail} Let $r(s,a,h)$ be the mean reward. Then the event that there is an episode $t$ and $s,a,h\in \cS\times\cA\times[H]$ such that \begin{align*} \left| \frac{\widehat{r}_t(s,a,h)}{\widehat{n}_t(s,a,h)} -r(s,a,h) \right| > \SamplingConfSign \end{align*} happens with probability at most $\beta/5$. \end{lemma} \begin{lemma}\label{appex:lem:Vfail} Let $\cP(s,a,h)$ be the true transition distribution for the state-action pair $s,a,h$, and let $V^*$ be the optimal value function. We use the shorthand notation $\widehat{\cP}_t$ to denote the empirical transition distribution given by $\widehat{\cP}_t(s'|s,a,h)\coloneqq\frac{\widehat{m}_t(s,a,s',h)}{\widehat{n}_t(s,a,h)}$. Then the event that there is an episode $t$ and $s,a,h\in \cS\times\cA\times[H]$ such that \begin{align*} \left| \left(\widehat{\cP}_t(s,a,h) - \cP(s,a,h)\right)V^* \right| > H \SamplingConfSign \end{align*} happens with probability at most $\beta/5$. \end{lemma} \begin{lemma} The event that there is an episode $t$ and $s,a,h\in \cS\times\cA\times[H]$ such that \begin{align*} \left| \widetilde{n}_t(s,a,h) - \widehat{n}_t(s,a,h) \right| > E_{\eps} \end{align*} happens with probability at most $\beta/5$. \end{lemma} \begin{definition} We define the fail event $F$ as the event that one of the events defined in Lemmas \ref{appex:lem:nfail}, \ref{appex:lem:rewardfail} and \ref{appex:lem:Vfail} is realized. A union bound shows that the fail event $F$ occurs with probability at most $\beta$. \end{definition} \begin{definition}\label{appex:def:failevents} Let $$E_{\eps} = \frac{3}{\eps}H\log\left(\frac{2SAH+S^2AH}{\beta'}\right)\log\left(\Tmax\right)^{5/2}$$ and define the following fail event: $$F = \bigcup_t \left[ F_t^N \cup F_t^R \cup F_t^V \cup F_t^{PR} \cup F_t^{PN} \cup F_t^{PM} \right]$$ \noindent where \begin{align} F_t^N &= \bigg\{ \exists s,a,h: \widehat{n}_t(s,a,h) < \frac{1}{2} \sum_{i<t} w_{i,h}(s,a) \\ &\quad\quad\quad \quad\quad-\ln\frac{SAH}{\beta'}\bigg\} \\ F_t^R &= \Bigg\{ \exists s,a,h : \left| \frac{\widehat{r}_t(s,a,h)}{\widehat{n}_t(s,a,h)} - r(s,a,h)\right| \geq \\ &\quad\quad \sqrt{\frac{\left( 2\ln\ln(\widehat{n}_t(s,a,h)) + \ln\frac{3SAH}{\beta'} \right)}{\widehat{n}_t(s,a,h)} } \Bigg\}\\ F_t^V &= \bigg\{ \exists{s,a,h} : |(\hat{\mathcal{P}}_t(s,a,h) - \mathcal{P}(s,a,h))^{\top} V_{h+1}^*| \geq \\ &\quad\quad\quad H\sqrt{ \frac{\left( 2\text{llnp}( \widehat{n}_t(s,a,h))+ \ln\frac{3SAH}{\beta'} \right)}{\widehat{n}_t(s,a,h)}} \bigg\}\\ F_t^{PR} &= \left\{ \exists{s,a,h} : |\widetilde{r}_t(s,a,h) - \widehat{r}_t(s,a,h)| \geq E \right\}\\ F_t^{PN} &= \left\{ \exists{s,a,h} : |\widetilde{n}_t(s,a,h) - \widehat{n}_t(s,a,h)| \geq E \right\}\\ F_t^{PM} &= \{ \exists{s,a,s',h} : \\ &\quad\quad\quad|\widetilde{m}_t(s,a,s',h) - \widehat{m}_t(s,a,s',h)| \geq E \} \end{align} \end{definition} \subsection{Nice episodes} \begin{definition}[Definition 2 in \cite{dann2017unifying}] Let $w_{t,h}(s,a)$ be the probability of visiting state $s$ and taking action $a$ at time-step $h$ of episode $t$ when following policy $\pi_t$.
An episode $t$ is nice if and only if for all $s\in \cS$, $a\in\cA$ and $h\in [H]$ at least one of the following two conditions holds: $$w_{t,h}(s,a) \leq w_{min} \quad \vee \quad \frac{1}{4}\sum_{i<t} w_{i,h}(s,a) \geq \ln\frac{SAH}{\beta'} + E_{\eps}$$ \end{definition} \begin{lemma}\label{lem:niceepisode} Let $\widetilde{n}_{t}(s,a, h)$ be the value of $\widetilde{n}(s, a, h)$ after planning in episode $t$. If an episode $t$ is nice, then on $F^c$, for all $s\in\cS$, $a\in\cA$ and $h\in[H]$ the following holds: $$w_{t,h}(s,a) \leq w_{min} \quad \vee \quad \widetilde{n}_t(s,a,h) \geq \frac{1}{4}\sum_{i<t} w_{i,h}(s,a)$$ \end{lemma} \begin{proof} Since we work on the events ${F_t^N}^c$ and ${F_t^{PN}}^c$, it holds for all triples $s,a,h$ with $w_{t,h}(s,a)>w_{min}$ that \begin{align*} \widetilde{n}_t(s,a,h)\geq \frac{1}{2}\sum_{i<t}w_{i,h}(s,a) -\ln\frac{SAH}{\beta'} - E_{\eps} \geq \frac{1}{4}\sum_{i<t} w_{i,h}(s,a) \end{align*} \end{proof} \begin{lemma}[Lemma E.1 in \cite{dann2017unifying}] \label{appex:lem:notnice} On the good event $F^c$, the number of episodes that are not nice is at most $$\frac{120S^2AH^4}{\alpha\varepsilon}\mathrm{polylog}(S,A,H, 1/\beta)$$ \end{lemma} \begin{proof} If an episode $t$ is not nice, then there is a triple $s,a,h$ with $w_{t,h}(s,a)>w_{min}$ and $\sum_{i<t} w_{i,h}(s,a) < 4\ln\frac{SAH}{\beta'} + 4E_{\eps}$. Since the left-hand side increases by at least $w_{min} \coloneqq \frac{\alpha}{4SH^2}$ each time this happens while the right-hand side stays constant, and $E_{\eps} = \frac{3H}{\varepsilon}\,\mathrm{polylog}(S,A,H)$, this situation can occur at most \begin{align*} \frac{4SAH}{w_{min}}\left( \ln\frac{SAH}{\beta'} + E_{\eps} \right) \leq \frac{120S^2AH^4}{\alpha\varepsilon} \mathrm{polylog}(S,A,H, 1/\beta) \end{align*} times. \end{proof} \begin{lemma}[Main rate lemma; Lemma E.2 in \cite{dann2017unifying}] \label{appex:lem:mainrate} Fix $r\geq 1$ and $\alpha'>0$, and let $C>0$ depend at most polynomially and $D\geq 1$ at most poly-logarithmically on the relevant quantities. Then $$\sum_{h=1}^H\sum_{s,a\in L_{t,k}}w_{t,h}(s,a)\left( \frac{C(\llnp{n_t(s,a,h)} + D)}{n_t(s,a,h)} \right)^{1/r}\leq \alpha'$$ on all but at most $$\frac{8CSAH^r}{(\alpha')^r}\text{polylog}(S,A,H,(\beta')^{-1}, (\alpha')^{-1})$$ nice episodes. \end{lemma} \begin{lemma}[Privacy rate; similar to Lemma E.2 in \cite{dann2017unifying}] \label{appex:lem:privacyrate} Fix $C>0$ and $\alpha'>0$. Let $E$ be the error bound of the counting mechanisms on all episodes, such that for any $s,a,h$ $$|\widetilde{n}(s,a,h)-\widehat{n}(s,a,h)| \leq E$$ Then $$\sum_{h=1}^H\sum_{s,a\in L_{t,k}}w_{t,h}(s,a) \frac{CE}{\widetilde{n}_t(s,a,h)} \leq \alpha'$$ on all but at most $$\frac{SAHCE}{\alpha'}\text{polylog}(S,A,H,(\beta')^{-1}, (\alpha')^{-1})$$ nice episodes.
\end{lemma} \begin{proof} Define $$\Delta_t = \sum_{h=1}^H\sum_{s,a\in L_{t}}w_{t,h}(s,a) \frac{CE_t}{\widetilde{n}_t(s,a,h)}$$ We use lemma \ref{lem:niceepisode}, the properties of nice episodes, and the fact that \begin{equation} \begin{aligned} \sum_{i<t} w_{i,h}(s,a) &\geq 4\ln \frac{SAH}{\beta'} \geq 2 \\ \end{aligned}\end{equation} which, since $w_{t,h}(s,a)\leq 1$, implies \begin{equation} \begin{aligned} 2\sum_{i<t} w_{i,h}(s,a) &\geq 2 + \sum_{i<t} w_{i,h}(s,a)\\ \implies 2\sum_{i<t} w_{i,h}(s,a) &\geq w_{t,h}(s,a)+ \sum_{i<t} w_{i,h}(s,a)\\ \implies \sum_{i<t} w_{i,h}(s,a) &\geq \frac{1}{2} \sum_{i\leq t} w_{i,h}(s,a)\\ \end{aligned}\end{equation} so that we have $$\widetilde{n}_t(s,a,h) \geq \frac{1}{4}\sum_{i<t} w_{i,h}(s,a) \geq \frac{1}{8} \sum_{i\leq t} w_{i,h}(s,a)$$ We can bound the gap as follows \begin{equation*} \Delta_t \leq 8\sum_{h=1}^H\sum_{s,a\in L_{t}}w_{t,h}(s,a) \frac{CE_t}{\sum_{i\leq t} w_{i,h}(s,a)} \end{equation*} Assume that the gap on episode $t$ is $\Delta_t > \alpha'$. In this case there exists at least one $(s,a,h)$ with $w_{t,h}(s,a)>w_{min}$ and \begin{equation}\begin{aligned} 8SAH\frac{CE_t}{\sum_{i\leq t} w_{i,h}(s,a)} > \alpha'\\ \implies 8CSAH^2\log(1/\beta')\frac{\log(t)^{5/2}}{\sum_{i\leq t} w_{i,h}(s,a)} > \alpha'\\ \implies \frac{8CSAH^2}{\alpha'} > \frac{\sum_{i\leq t} w_{i,h}(s,a)}{\log(t)^{5/2}} \end{aligned} \end{equation} Since $\sum_{i\leq t} w_{i,h}(s,a)$ grows by at least $w_{min}$ on every such episode, this can happen on at most $\frac{SAHCE}{\alpha'}\text{polylog}(S,A,H,(\beta')^{-1}, (\alpha')^{-1})$ nice episodes, which completes the proof. \end{proof} \subsection{Optimality gap} This section provides the main analysis of the PAC sample complexity. We proceed by using optimism of the estimated Q-function. \begin{lemma}[$Q$-optimism]\label{appex:lem:optimism} On the event $F^c$, the noisy Q-function $\widetilde{Q}$ is an optimistic estimate of the true Q-function $Q^*$. That is, with high probability, for all $s,a,h\in\cS\times\cA\times[H]$, we have $\widetilde{Q}(s, a, h) \geq Q^*(s,a, h)$ \end{lemma} First we provide a lemma and two claims that will be used to prove lemma \ref{appex:lem:optimism}. \begin{claim}\label{claim:fracbound} Given any $\alpha>0$, let $n,E >0$. If $n\geq \frac{(1+\alpha)E}{\alpha}$, then $$\frac{1}{n-E} \leq \frac{1+\alpha}{n}$$ \end{claim} \begin{claim}\label{claim:oneOverN} For any $n>0$ and $0\leq E< n$, we have that $$\frac{1}{\max(1,\,n-E)} \leq \frac{1}{n} + \frac{E + E^2}{n^2}$$ \end{claim} \begin{proof}(of claim \ref{claim:oneOverN}) Case 1: $n-E \leq 1$. Then the left-hand side equals $1$. If $n\leq 1$ the right-hand side is at least $\frac{1}{n}\geq 1$ and we are done. Otherwise $E\geq n-1>0$, and \begin{align*} \frac{1}{n} + \frac{E}{n^2} + \frac{E^2}{n^2} \geq \frac{1}{n} + \frac{n-1}{n^2} + \frac{(n-1)^2}{n^2} = \frac{n + (n-1) + (n-1)^2}{n^2} = 1 \end{align*} Case 2: $n-E>1$. We have that $\frac{1}{n-E} < 1$, and expanding the geometric series \begin{align*} \frac{1}{n-E} &= \frac{1}{n}+ \frac{E}{n^2} + \frac{E^2}{n^3}+\ldots \\ &= \frac{1}{n}+\frac{E}{n^2}+\frac{E^2}{n^2}\left(\frac{1}{n}+\frac{E}{n^2}+\frac{E^2}{n^3}+\ldots \right) \\ &= \frac{1}{n}+\frac{E}{n^2}+\frac{E^2}{n^2}\left(\frac{1}{n-E}\right) \\ &< \frac{1}{n}+\frac{E}{n^2}+\frac{E^2}{n^2} \quad\quad (\text{using } 1/(n-E) < 1 ) \end{align*} \end{proof} \begin{claim}\label{claim:qlowerbound} Fix a target accuracy $\alpha'$ and let $E$ be the error bound from $\BM{.}$ for round $t$.
Then on the event $F^c$, for all $a\in\cA$, $s,s' \in \cS$, $h \in [H]$, if $\widetilde{n}(s,a,h)>\frac{(1+\alpha')E}{\alpha'}$ we have \begin{align} \frac{\widetilde{r}(s,a,h) + E + \sum_{s' \in \cS} \widetilde{V}_{h+1}(s')(\widetilde{m}(s,a, s' , h) + E) }{\max(1, \widetilde{n}(s,a,h) - E)} \\ \leq \frac{\widetilde{r}(s,a,h) + \sum_{s'\in \cS}\widetilde{V}_{h+1}(s')\widetilde{m}(s,a,s',h)}{\widetilde{n}(s,a,h)}\\ \quad\quad+ \frac{(1+\alpha')(E + HSE)}{\widetilde{n}(s,a,h)} + (H+1)\alpha' \end{align} \end{claim} \begin{lemma}\label{appex:lem:Qhat} Let $\widehat{Q}$ be the optimistic empirical Q-value of algorithm \ref{alg:privateplanning} before adding noise, defined by \begin{align*} \widehat{Q}(s,a,h) = \frac{\widehat{r}(s,a,h) + \sum_{s'\in\cS} \widetilde{V}_{h+1}(s')\widehat{m}(s,a,s', h)}{\widehat{n}(s,a,h)} \\ + \phibound \\ + (H-h)\phibound \end{align*} Then on the event $F^c$, for all $s,a,h \in \cS\times\cA\times [H]$, \begin{align} \widetilde{Q}(s,a,h) \geq \widehat{Q}(s,a,h) \end{align} \end{lemma} \begin{proof} On the event $F^c$ we get from definition \ref{appex:def:failevents} that $|\widetilde{r}(s,a,h) - \widehat{r}(s,a,h)| \leq E$, $|\widetilde{n}(s,a,h) - \widehat{n}(s,a,h)| \leq E$, and $|\widetilde{m}(s,a,s', h) - \widehat{m}(s,a,s',h)| \leq E$. Therefore we can upper bound $\widehat{Q}$ by \begin{align*} \widehat{Q}(s,a,h) \leq \frac{\widetilde{r}(s,a,h) + E + \sum_{s'\in\cS} \widetilde{V}_{h+1}(s')(\widetilde{m}(s,a,s', h) + E) }{\max(1,\widetilde{n}(s,a,h) - E)} \\ + \phi + (H-h)\phi \end{align*} where $\phi = \phiboundWithE$. Then using claim \ref{claim:qlowerbound} with $\alpha' = \frac{\alpha}{5H(H+1)}$ we can upper bound the first term: \begin{align*} \widehat{Q}(s,a,h) \leq \frac{\widetilde{r}(s,a,h) + \sum_{s'\in \cS}\widetilde{V}_{h+1}(s')\widetilde{m}(s,a,s',h)}{\widetilde{n}(s,a,h)}\\ \quad\quad+ \frac{(1+\alpha')(E + HSE)}{\widetilde{n}(s,a,h)} + \frac{\alpha }{5H} + \phi + (H-h)\phi \\ \equiv \widetilde{Q}(s,a,h) \end{align*} \end{proof} \begin{lemma}[Value function optimism]\label{appex:lem:Voptimism} Fix any episode $t$. On the event $F^c$, the value function from algorithm \ref{alg:privateplanning} is optimistic, i.e., for all $s\in\cS$ and all $h\in[H]$ $$\widetilde{V}_h(s) \geq V^*_{h}(s)$$ \end{lemma} \begin{proof} The proof proceeds by induction. Fixing any state $s\in\cS$, we must show that $\widetilde{V}_h(s)\geq V^*_h(s)$ for all $h\in[H]$. For the base case, note that $\widetilde{V}_{H+1}(s)=V^*_{H+1}(s)=0$. Now fix $h\leq H$ and assume that $\widetilde{V}_{h+1}(s) \geq V_{h+1}^*(s)$ for all $s$; we must show that $\widetilde{V}_h(s) \geq V_h^*(s)$. First, write the Bellman equation for $V^*_h(s)$: \begin{align}\label{eq:vstar} V^*_h(s) = \max_{a} \left( r(s,a,h) + P(s,a,h)V^*_{h+1} \right) \end{align} Let $a^*$ be the action achieving the maximum in equation \eqref{eq:vstar}.
Next we write out the equation for $\widetilde{V}_h(s)$: \begin{align*} \widetilde{V}_h(s) &= \max_{a} \frac{\widetilde{r}(s,a,h)+ \sum_{s'} \widetilde{V}_{h+1}(s')\widetilde{m}(s,a,s',h)}{\widetilde{n}(s,a,h)} \\ &\quad\quad + \widetilde{\conf}(s,a,h) \\ &\geq \frac{\widetilde{r}(s,a^*,h)+ \sum_{s'} \widetilde{V}_{h+1}(s')\widetilde{m}(s,a^*,s',h)}{\widetilde{n}(s,a^*,h)} \\ &\quad\quad + \widetilde{\conf}(s,a^*,h) \\ & \coloneqq \widetilde{Q}(s,a^*,h) \end{align*} Next we use lemma \ref{appex:lem:Qhat}, which says that on the event $F^c$, $\widetilde{Q}(s,a^*,h)\geq \widehat{Q}(s,a^*,h)$, to get a lower bound for $\widetilde{V}_h$: \begin{align*} \widetilde{V}_h(s) \geq \frac{\widehat{r}(s,a^*,h) + \sum_{s'\in\cS} \widetilde{V}_{h+1}(s')\widehat{m}(s,a^*,s', h)}{\widehat{n}(s,a^*,h)} \\ + (H+1)\phihatconfL{s,a^*,h} \end{align*} Writing \begin{align*} \frac{\sum_{s'\in\cS} \widetilde{V}_{h+1}(s')\widehat{m}(s,a^*,s', h)}{\widehat{n}(s,a^*,h)} = \widehat{P}(s,a^*,h)\widetilde{V}_{h+1} \end{align*} and applying the inductive hypothesis (i.e. $\forall_s\; \widetilde{V}_{h+1}(s)\geq V^*_{h+1}(s)$) we get \begin{align*} &\widetilde{V}_h(s) \geq \frac{\widehat{r}(s,a^*,h)}{\widehat{n}(s,a^*,h)} + \widehat{P}(s,a^*,h)V^*_{h+1} + \\ &\quad\quad (H+1)\phihatconfL{s,a^*,h} \end{align*} Next we use the concentration bound from definition \ref{appex:def:failevents} \begin{align*} \widehat{P}(s,a^*,h)V^*_{h+1}\geq P(s,a^*,h)V^*_{h+1} - H\widehat{\phi}(s,a^*, h) \end{align*} and then we use the definition of $V^*_h(s)$ \begin{align*} P(s,a^*,h)V^*_{h+1} = - r(s,a^*, h) +V^*_{h}(s) \end{align*} to get \begin{align*} &\widetilde{V}_h(s) \\ &\geq \frac{\widehat{r}(s,a^*,h)}{\widehat{n}(s,a^*,h)} + P(s,a^*,h)V^*_{h+1} + \widehat{\phi}(s,a^*, h) \\ &= \frac{\widehat{r}(s,a^*,h)}{\widehat{n}(s,a^*,h)} - r(s,a^*, h) + \widehat{\phi}(s,a^*, h) + V^*_{h}(s) \end{align*} Next note that on the event $F^c$, we have \begin{align*} \frac{\widehat{r}(s,a^*,h)}{\widehat{n}(s,a^*,h)} - r(s,a^*, h) + \widehat{\phi}(s,a^*, h) \geq 0 \end{align*} Hence it follows that $\widetilde{V}_h(s) \geq V_h^*(s)$, completing the proof. \end{proof} \begin{proof}[Proof of lemma \ref{appex:lem:optimism}] Let $\widehat{Q}$ be the optimistic empirical Q-value before adding noise. In lemma \ref{appex:lem:Qhat} we showed that $\widetilde{Q}(s,a,h) \geq \widehat{Q}(s,a,h)$ for all $s,a,h$. It then only remains to show that $\widehat{Q} \geq Q^*$. Let the empirical mean reward be $\bar{r}(s,a,h) = \frac{\widehat{r}(s,a,h)}{\widehat{n}(s,a,h)}$ and let $\widehat{P} \widetilde{V}_{h+1} = \frac{\sum_{s'\in\cS}\widetilde{V}_{h+1}(s')\widehat{m}(s,a,s', h)}{\widehat{n}(s,a,h)}$. Now we can write $\widehat{Q}$ as $$\widehat{Q}(s,a,h) = \bar{r}(s,a,h) + \widehat{P}\widetilde{V}_{h+1} + (H-h+1)\phi$$ \noindent On the event $F^c$ we have that $|\bar{r}(s,a,h) - r(s,a,h)| \leq \phi$ and $|(\widehat{P} - P )V^*_{h+1}| \leq H \phi$. Furthermore, from the value function optimism lemma (\ref{appex:lem:Voptimism}) we have that for all $s \in \cS$ and all $h\in[H]$, $\widetilde{V}_h(s) \geq V_h^*(s)$. Putting it all together: \begin{align} &\widehat{Q}(s,a,h) - Q^*(s,a,h)\\ & = \bar{r}(s,a,h) - r(s,a,h) + \widehat{P}\widetilde{V}_{h+1} - P V^*_{h+1}\\ & \quad\quad+ (H-h+1)\phi \\ &\geq \bar{r}(s,a,h) - r(s,a,h) + (\widehat{P} - P )V^*_{h+1} \\ & \quad\quad+ \phi + (H-h)\phi \\ &\geq 0 \end{align} \end{proof} \begin{lemma} \label{appex:lem:optgap} Let $\pi_t$ be the policy played by algorithm \ref{alg:pucb} during episode $t$.
Let $w_{t,h}(s,a)$ be the probability of visiting $s$ and taking action $a$ on episode $t$ and time $h\in[H]$. Then the optimality gap is bounded by \begin{equation}\begin{aligned} V^\star - V^\pi_t &\leq \sum_{h=1}^H \mathbb{E}_{s_h \sim \pi_t} \widetilde{\conf}(s_h,\pi_t(s_h), h) \\ &\leq \sum_{h=1}^H \sum_{(s',a')\in\cS\times\cA}w_{t,h}(s',a') \widetilde{\conf}(s',a',h) \\ \end{aligned}\end{equation} \end{lemma} \begin{proof} \noindent Let $\mathbb{E}_{\pi_t}\widetilde{r}_h $ be the expected reward received during time step $h \in [H]$ given that we follow policy $\pi_t$. Let $s_1, \ldots, s_H$ be random variables, where each $s_h$ represents the state visited during time-step $h$ after following policy $\pi_t$. Next observe that \begin{align}\label{eq:qtilinequality} \left| \mathbb{E}_{\pi_t}\widetilde{Q}_1(s_1,\pi_t(s_1)) - \mathbb{E}_{\pi_t}\left( \widetilde{r}_1 + \widetilde{Q}_2(s_2,\pi_t(s_2)) \right) \right| \leq \mathbb{E}_{\pi_t}\widetilde{\conf}(s_1,\pi_t(s_1), 1) \end{align} \noindent Equation \eqref{eq:qtilinequality} can be verified by rewriting the two terms as $$\mathbb{E}_{\pi_t} \widetilde{Q}_1(s_1, \pi_t(s_1)) = \mathbb{E}_{\pi_t}\widetilde{r}_1 + \mathbb{E}_{\pi_t} \widetilde{V}_2(s_2) + \mathbb{E}_{\pi_t}\widetilde{\conf}(s_1, a_1, 1)$$ \noindent and $$\mathbb{E}_{\pi_t} \widetilde{Q}_2(s_2, \pi_t(s_2)) = \mathbb{E}_{\pi_t} \widetilde{V}_2(s_2)$$ \noindent Then the optimality gap on episode $t$ is \begin{equation}\label{eq:gapboundA} \begin{aligned} V^\star - V^\pi_t &= V^\star - \sum_{h=1}^H \mathbb{E}_{\pi_t}r_h \\ &= \mathbb{E}_{\pi_t} Q_1^\star(s_1,\pi^ \star(s_1)) - \sum_{h=1}^H \mathbb{E}_{\pi_t}r_h \\ & (\text{optimism}) \\ &\leq \mathbb{E}_{\pi_t} \widetilde{Q}_1(s_1,\pi^*(s_1)) - \sum_{h=1}^H \mathbb{E}_{\pi_t}r_h \\ & (\text{greediness of policy } \pi_t) \\ &\leq \mathbb{E}_{\pi_t} \widetilde{Q}_1(s_1,\pi_t(s_1)) - \sum_{h=1}^H \mathbb{E}_{\pi_t}r_h \\ &\leq \mathbb{E}_{\pi_t} \widetilde{Q}_1(s_1,\pi_t(s_1)) - \mathbb{E}_{\pi_t}r_1 - \sum_{h=2}^H \mathbb{E}_{\pi_t}r_h \\ & (\text{equation \eqref{eq:qtilinequality}}) \\ &\leq \mathbb{E}_{ \pi_t} \widetilde{\conf}(s_1,\pi_t(s_1), 1) + \mathbb{E}_{\pi_t} \widetilde{Q}_2(s_2,\pi_t(s_2)) \\ &\quad \quad - \sum_{h=2}^H \mathbb{E}_{\pi_t}r_h \end{aligned} \end{equation} \noindent Since $\widetilde{Q}_{H+1}(s, a) = 0$, applying equation \eqref{eq:qtilinequality} $H-1$ more times, we get \begin{equation}\begin{aligned} (\text{equation } \ref{eq:gapboundA}) &\leq \sum_{h=1}^H \mathbb{E}_{s_h \sim \pi_t} \widetilde{\conf}(s_h,\pi_t(s_h), h) \end{aligned}\end{equation} \end{proof} \section{Other proofs} We now prove claim \ref{claim:main:oneoverNbound}, restated below. \begin{claim}\label{appex:claim:oneoverNbound} Let $e \in \mathbb{R}$ be any positive real number. Then for all $x \in \mathbb{R}$ with $x\geq 2e$ it holds that \begin{align} \label{eq:oneoverNinequality} \frac{1}{x - e} \leq \frac{1}{x} + \frac{2e}{x^2} \end{align} \end{claim} \begin{proof} We proceed by induction over the real numbers, following the tools from \cite{clark2012instructor}. The base case $x = 2e$ is direct: \begin{align*} \frac{1}{x} + \frac{2e}{x^2} = \frac{1}{2e} + \frac{2e}{(2e)^2} = \frac{1}{2e} + \frac{1}{2e} = \frac{1}{e} = \frac{1}{x-e} \end{align*} where the last equality follows because $x=2e$. Thus inequality \eqref{eq:oneoverNinequality} holds when $x = 2e$.
For the inductive step we must show two statements. First, for any $z\geq 2e$, if $\frac{1}{x-e} \leq \frac{1}{x} + \frac{2e}{x^2}$ holds for all $x\in [2e, z]$, then $\frac{1}{y-e} \leq \frac{1}{y} + \frac{2e}{y^2}$ holds for all $y\in [z, w)$ for some $w>z$. Second, for any $z> 2e$, if $\frac{1}{x-e} \leq \frac{1}{x} + \frac{2e}{x^2}$ holds for all $x\in [2e, z)$, then $\frac{1}{z-e} \leq \frac{1}{z} + \frac{2e}{z^2}$ holds. We proceed by contradiction. Let $z \geq 2e$. For the first step let $x \in [2e, z]$ and $y\in [z, w)$ for some $w>z$. Assume that $\frac{1}{x-e} \leq \frac{1}{x} + \frac{2e}{x^2}$ holds and that $\frac{1}{y-e} > \frac{1}{y} + \frac{2e}{y^2}$. Then \begin{align} \notag &\frac{1}{y-e} > \frac{1}{y} + \frac{2e}{y^2} \\ \notag \Leftrightarrow & \frac{y^2}{y-e} > y + 2e && \text{Multiply by $y^2$}\\ \notag \Leftrightarrow & \frac{y^2}{y-e} + (x-y)> x + 2e && \text{Add $x-y$}\\ \notag \Leftrightarrow & \frac{y^2}{(y-e)x^2} + \frac{(x-y)}{x^2} > \frac{1}{x} + \frac{2e}{x^2} && \text{Divide by $x^2>0$}\\ \notag \implies & \frac{y^2}{(y-e)x^2} + \frac{(x-y)}{x^2} > \frac{1}{x-e} && \text{Apply inductive assumption} \\ \notag \Leftrightarrow & y^2 > \frac{(y-e)x^2}{x-e} -(y-e)(x-y) && \text{Multiply by $(y-e)x^2$} \\ \notag \Leftrightarrow & y^2 > \frac{(y-e)x^2}{x-e} + y^2 - ye - xy + xe && \text{Expand} \\ \notag \Leftrightarrow & y(x+e) > \frac{(y-e)x^2}{x-e} + xe && \text{Rearrange} \\ \notag \Leftrightarrow & y(x+e)(x-e) > (y-e)x^2 + xe(x-e) && \text{Multiply by $(x-e)$} \\ \notag \Leftrightarrow & yx^2 - ye^2 > yx^2 - xe^2 && \text{Expand} \\ \notag \Leftrightarrow & - ye^2 > -xe^2 && \text{Simplify} \\ \label{eq:contradiction} \Leftrightarrow & x > y && \text{Divide by $-e^2$ and flip} \end{align} Equation \eqref{eq:contradiction} contradicts the fact that $y\geq z \geq x$. Therefore we have shown that $\frac{1}{y-e} \leq \frac{1}{y} + \frac{2e}{y^2}$ holds. For the second step, let $x\in [2e, z)$ and assume $\frac{1}{x-e} \leq \frac{1}{x}+ \frac{2e}{x^2}$; we must show that $\frac{1}{z-e} \leq \frac{1}{z}+ \frac{2e}{z^2}$. Applying the same derivation as in the previous step yields the contradiction $x>z$, which completes the proof. \end{proof} \subsection{Nice episodes} The goal in this section is to bound the number of suboptimal episodes. We use a similar approach as in \cite{dann2017unifying}, based on the concept of nice episodes, but we modify their definition to account for the noise the algorithm adds in order to preserve privacy. We formally define nice episodes in definition \ref{def:nice}. The rest of the proof proceeds by bounding the number of episodes that are not nice, and then bounding the number of nice suboptimal episodes. Recall that $\widetilde{n}_t(s,a,h)$ denotes the private count of the number of times the state tuple $(s,a,h)$ has been visited right before episode $t$, and that $E_{\eps}$ is the error of the $\varepsilon$-differentially private counter; that is, on any episode $t$ \[ \left|\widetilde{n}_t(s,a,h) - \widehat{n}_t(s,a,h) \right| < E_{\eps} \] where $\widehat{n}_t(s,a,h)$ is the true count. \begin{definition}[Nice Episodes. Similar to definition 2 in \cite{dann2017unifying}] \label{def:nice} Let $w_t(s,a,h)$ be the probability of visiting state $s$ and taking action $a$ during episode $t$ and time-step $h$ after following policy $\pi_t$.
An episode $t$ is nice if and only if for all $s\in \cS$, $a\in\cA$, and $h\in [H]$ at least one of the following two conditions holds: \begin{align*} w_t(s,a, h) \leq w_{min} \quad \vee \quad \frac{1}{4}\sum_{i<t} w_{i}(s,a,h) \geq \ln\frac{SAH}{\beta'} + 2E_{\eps} \end{align*} \end{definition} \begin{lemma}\label{lem:niceproperties} If an episode $t$ is nice, then on $F^c$, for all $s,a,h$ the following statement holds: \begin{align*} w_t(s,a,h) < w_{min} \quad\vee\quad \widetilde{n}_t(s,a,h) > \frac{1}{4}\sum_{i<t} w_{i}(s,a,h) + E_{\eps} \end{align*} Moreover, it follows that if $w_t(s,a,h)>w_{min}$ then $\widetilde{n}_t(s,a,h)>2E_{\eps}$. \end{lemma} \begin{proof} Since we consider the event $F^c$, it holds for all $s,a,h$ triplets that \[ \widehat{n}_t(s,a,h) > \frac{1}{2}\sum_{i<t} w_{i}(s,a,h) - \ln\frac{SAH}{\beta'} \] and therefore \begin{align*} \widetilde{n}_t(s,a,h) > \widehat{n}_t(s,a,h) - E_{\eps} > \frac{1}{2}\sum_{i<t} w_{i}(s,a,h) - \ln\frac{SAH}{\beta'}- E_{\eps} \geq \frac{1}{4}\sum_{i<t} w_{i}(s,a,h) + E_{\eps} \end{align*} where the last inequality uses the second condition in definition \ref{def:nice}. The second statement follows since, by the same condition, $\frac{1}{4}\sum_{i<t} w_{i}(s,a,h) \geq 2E_{\eps}$. \end{proof} \begin{lemma}[Non-nice Episodes. \cite{dann2017unifying}] \label{lem:notnice} On the good event $F^c$, the number of episodes that are not nice is at most \[ \frac{120S^2AH^4}{\alpha\varepsilon}\mathrm{polylog}(S,A,H, 1/\beta) \] \end{lemma} \begin{lemma}[Nice Episodes rate. \cite{dann2017unifying}] \label{lem:mainrate} Let $r\geq 1$, fix $C>0$ which can depend polynomially on the relevant quantities, and let $D\geq 1$ which can depend poly-logarithmically on the relevant quantities. Finally let $\alpha'>0$ be the target accuracy and let $\widetilde{n}_t(s,a,h)$ be the private estimate of the count, with error $E_{\eps}$. Then \begin{align*} \sum_{(s,a,h)\notin L_{t}}w_t(s,a,h) \left( \frac{C(\log{(T)} + D)}{\widetilde{n}_t(s,a,h) - E_{\eps}} \right)^{1/r} \leq \alpha' \end{align*} on all but at most \begin{align*} \frac{8CSAH^r}{(\alpha')^r}\mathrm{polylog}(T,S,A,H,1/\varepsilon,1/\beta', 1/\alpha') \end{align*} nice episodes. \end{lemma} \begin{proof} The proof follows mostly from the argument in \cite[Lemma E.3]{dann2017unifying}. Let $\text{x}\coloneqq (s,a,h)$ denote a state tuple. Define the gap in episode $t$ by \begin{align*} \Delta_t =\sum_{\text{x}\notin L_t}w_t(\text{x})\left(\frac{C(\log(T)+D)}{\widetilde{n}_t(\text{x})-E_{\eps}}\right)^{\tfrac{1}{r}} =\sum_{\text{x}\notin L_t}w_t(\text{x})^{1-\tfrac{1}{r}} \left(w_t(\text{x})\frac{C(\log(T)+D)}{\widetilde{n}_t(\text{x})-E_{\eps}}\right)^{\tfrac{1}{r}} \end{align*} Using H\"{o}lder's inequality, \begin{align*} \Delta_t \leq \left(\sum_{\text{x}\notin L_t} H^{r-1}w_t(\text{x})\frac{C(\log(T)+D)}{\widetilde{n}_t(\text{x})-E_{\eps}}\right)^{\tfrac{1}{r}} \end{align*} Now we use the properties of nice episodes from lemma \ref{lem:niceproperties} and the fact that $\sum_{i<t}w_i(\text{x}) \geq 4\ln(SAH/\beta')\geq 2$. Then on the good event $F^c$ we have the following bound: \begin{align*} \widetilde{n}_t(\text{x}) \geq \frac{1}{4}\sum_{i<t}w_i(\text{x}) + E_{\eps} \geq \frac{1}{8}\sum_{i\leq t}w_i(\text{x}) + E_{\eps} \end{align*} The function $\frac{\ln(T) + D}{x}$ is monotonically decreasing for $x> 0$.
Then we bound \begin{align*} \Delta_t^r &\leq\sum_{\text{x}\notin L_t} H^{r-1}w_t(\text{x})\frac{C(\log(T)+D)}{\widetilde{n}_t(\text{x})-E_{\eps}}\\ &\leq 8C H^{r-1}\sum_{\text{x}\notin L_t}w_t(\text{x})\frac{(\log(T)+D)}{\sum_{i\leq t}w_i(\text{x})}\\ \end{align*} Let the set of nice episodes be $N = \left\{ t : w_t(\text{x}) < w_{min} \text{ or } \frac{1}{4}\sum_{i<t} w_i(\text{x}) \geq \ln\frac{SAH}{\beta'} + 2E_{\eps}\right\}$ and define the set $M = \{ t : \Delta_t > \alpha' \}\cap N$ of suboptimal nice episodes. We know that $|M|\leq T$. Finally we can bound the total number of suboptimal nice episodes as follows: \begin{align*} \sum_{t\in M} \Delta_t^r &\leq \sum_{t\in M}8C H^{r-1}(\log(T)+D)\sum_{\text{x}\notin L_t}\frac{w_t(\text{x})}{\sum_{i\leq t}w_i(\text{x})} \\ &\leq 8C H^{r-1}(\log(T)+D)\sum_{\text{x}\notin L_t}\sum_{t\in M}\frac{w_t(\text{x})}{\sum_{i\leq t}w_i(\text{x})\mathbbm{1}(w_i(\text{x})>w_{\text{min}})} \end{align*} For every $\text{x}=(s,a,h)$ consider the sequence $w_i(\text{x}) \in [w_{\text{min}}, 1]$ with $i \in I = \{ i : w_i(\text{x}) \geq w_{\text{min}}\}$ and apply lemma \ref{lem:lnbound} to get \begin{align*} \sum_{t\in M}\frac{w_t(\text{x})}{\sum_{i\leq t}w_i(\text{x})\mathbbm{1}(w_i(\text{x})>w_{\text{min}})} \leq \ln\left(\frac{Te}{w_{\text{min}}} \right) \end{align*} Therefore we have \begin{align*} \sum_{t\in M} \Delta_t^r \leq 8C S A H^r(\log(T)+D)\ln\left(\frac{Te}{w_{\text{min}}} \right) \end{align*} Since each episode in $M$ contributes at least $(\alpha')^r$ to this bound, we have \begin{align*} |M| \leq \frac{8C S A H^r(\log(T)+D)\ln\left(\frac{Te}{w_{\text{min}}} \right)} {(\alpha')^r} \end{align*} completing the proof. \end{proof} \begin{lemma}{\cite[Lemma E.5]{dann2017unifying}} \label{lem:lnbound} Let $a_k$ be a sequence taking values in $[a_{min}, 1]$ with $a_{min}>0$ and $m>0$. Then $\sum_{k=1}^m \frac{a_k}{\sum_{i=1}^k a_i}\leq \ln\left(\frac{me}{a_{min}}\right)$ \end{lemma} \section{PAC and Regret Analysis of $\texttt{PUCB}$} Now that we have established that $\texttt{PUCB}$ is JDP, we turn to utility guarantees. We establish two forms of utility guarantees, namely a PAC sample complexity bound and a regret bound. In both cases, comparing to \texttt{UBEV}, we show that the price for JDP is quite mild. In both bounds the privacy parameter interacts quite favorably with the ``error parameter.'' We first state the PAC guarantee. \begin{theorem}[PAC guarantee for $\texttt{PUCB}$] \label{thm:PUCBPAC} Let $T$ be the maximum number of episodes and $\varepsilon$ the JDP parameter. Then for any $\alpha \in (0,H]$ and $\beta \in (0,1)$, algorithm $\texttt{PUCB}$ with parameters $(\varepsilon,\beta)$ follows a policy that with probability at least $1-\beta$ is $\alpha$-optimal on all but \begin{align*} O\left(\left(\frac{SAH^4}{\alpha^2} + \frac{S^2AH^4}{\varepsilon\alpha}\right) \mathrm{polylog}\left(T,S,A,H,\tfrac{1}{\alpha},\tfrac{1}{\beta}, \tfrac{1}{\varepsilon}\right)\right) \end{align*} episodes. \end{theorem} The theorem states that if we run $\texttt{PUCB}$ for many episodes, it will act near-optimally in a large fraction of them. The number of episodes where the algorithm acts suboptimally scales polynomially with all the relevant parameters. In particular, notice that in terms of the utility parameter $\alpha$, the bound scales as $1/\alpha^2$. In fact, the first term here matches the guarantee for the non-private algorithm \texttt{UBEV} up to polylogarithmic factors.
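To get a rough sense of how the two terms compare, the following Python snippet (ours, purely illustrative: constants and polylogarithmic factors are dropped, and the instance sizes are made up) evaluates both terms for a small tabular instance:

\begin{verbatim}
# Back-of-the-envelope comparison of the two terms in the PAC bound;
# constants and polylog factors are dropped, so the absolute numbers
# are meaningless and only the relative sizes are of interest.

def pac_terms(S, A, H, alpha, eps):
    sampling = S * A * H**4 / alpha**2         # term shared with UBEV
    privacy = S**2 * A * H**4 / (eps * alpha)  # extra term due to JDP
    return sampling, privacy

for alpha in (0.5, 0.1, 0.01):
    s_term, p_term = pac_terms(S=10, A=5, H=10, alpha=alpha, eps=1.0)
    print("alpha=%5.2f  sampling ~ %.1e  privacy ~ %.1e"
          % (alpha, s_term, p_term))
\end{verbatim}

The ratio of the privacy term to the sampling term scales as $S\alpha/\varepsilon$, so the sampling term takes over precisely in the small-$\alpha$ regime discussed next.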
On the other hand, the privacy parameter $\varepsilon$ appears only in the term scaling as $1/\alpha$. In the common case where $\alpha$ is relatively small, this term is typically of a lower order, and so the price for privacy here is relatively low. Analogous to the PAC bound, we also have a regret guarantee. \begin{theorem}[Regret bound for \texttt{PUCB}] \label{thm:PUCBRegret} With probability at least $1-\beta$, the regret of $\texttt{PUCB}$ up to episode $T$ is at most \begin{align*} O\left(\left(H^2\sqrt{SAT} + \frac{SAH^3 + S^2AH^3}{\varepsilon}\right) \mathrm{polylog}\left(T, S, A, H, \tfrac{1}{\beta}, \tfrac{1}{\varepsilon}\right)\right) \enspace. \end{align*} \end{theorem} A similar remark to the PAC bound applies here: the privacy parameter only appears in the $\mathrm{polylog}(T)$ terms, while the leading order term scales as $\sqrt{T}$. In this guarantee it is clear that as $T$ gets large, the utility price for privacy is essentially negligible. We also remark that both bounds have ``lower order'' terms that scale with $S^2$. This is quite common for tabular reinforcement learning algorithms~\cite{dann2017unifying,azar2017minimax}. We find it quite interesting to observe that the privacy parameter $\varepsilon$ interacts with this term, but not with the so-called ``leading'' term in these guarantees. \paragraph{Proof Sketch.} The proofs for both results parallel the arguments in~\citep{dann2017unifying} for the analysis of \texttt{UBEV}. The main difference arises from the fact that we have adjusted the confidence interval $\widetilde{\conf}$ to account for the noise in the releases of $\widetilde{r},\widetilde{n},\widetilde{m}$. In~\citep{dann2017unifying} the bonus is crucially used to establish optimism, and the final guarantees are related to the over-estimation incurred by these bonuses. We focus on these two steps in this sketch, with a full proof deferred to the appendix. First we verify optimism. Fix episode $t$ and state tuple $(s,a,h)$, and let us abbreviate the latter simply by $\text{x}$. Assume that $\widetilde{V}_{h+1}$ is private and optimistic in the sense that $\widetilde{V}_{h+1}(s) \geq V_{h+1}^*(s)$ for all $s \in \cS$. First define the empirical Q-value \begin{align*} \widehat{Q}_t(\text{x}) = \frac{\widehat{r}_t(\text{x}) + \sum_{s'\in\cS}\widetilde{V}_{h+1}(s')\widehat{m}_t(\text{x},s')}{\widehat{n}_t(\text{x})} \enspace. \end{align*} The optimistic Q-function, which is similar to the one used by~\citep{dann2017unifying}, is given by \begin{align*} \widehat{Q}^{+}_t(\text{x}) = \widehat{Q}_t(\text{x}) + (H+1)\phihatconfL{\text{x}} \enspace, \end{align*} where $\phihatconfL{\text{x}} = \phihatconfR{\text{x}}$. A standard concentration argument shows that $\widehat{Q}^{+}_t\geq Q^\star$, assuming that $\widetilde{V}_{h+1} \geq V^\star_{h+1}$. Of course, both $\widehat{Q}_t$ and $\widehat{Q}^{+}_t$ involve the non-private counters $\widehat{r},\widehat{n},\widehat{m}$, so they are \emph{not} available to our algorithm. Instead, we construct a surrogate for the empirical Q-value using the private releases: \begin{align*} &\widetilde{Q}_t(\text{x})= \frac{\widetilde{r}_t(\text{x}) +\sum_{s'\in\cS}\widetilde{V}_{h+1}(s')\widetilde{m}_t(\text{x},s')}{\widetilde{n}_t(\text{x})} \enspace. \end{align*} Our analysis involves relating $\widetilde{Q}_t$, which the algorithm has access to, with $\widehat{Q}_t$, which is non-private.
To do this, note that by the guarantee for the counting mechanism, we have \begin{align} \widehat{Q}_t(\text{x}) \leq\frac{\widetilde{r}_t(\text{x})+E_{\eps} +\sum_{s'\in\cS}\widetilde{V}_{h+1}(s')(\widetilde{m}_t(\text{x},s')+E_{\eps})}{\widetilde{n}_t(\text{x})-E_{\eps}} \enspace.\label{eq:qhat_upper} \end{align} Next, we use the following elementary fact. \begin{claim} \label{claim:main:oneoverNbound} Let $y \in \mathbb{R}$ be any positive real number. Then for all $x \in \mathbbm{R}$ with $x\geq 2y$ it holds that $\frac{1}{x - y} \leq \frac{1}{x} + \frac{2y}{x^2}$. \end{claim} If $\widetilde{n}_t(\text{x}) \geq 2E_{\eps}$, then we can apply claim~\ref{claim:main:oneoverNbound} to equation~\eqref{eq:qhat_upper}, along with the facts that $\widetilde{V}_{h+1}(s') \leq H$ and $\widetilde{r}_t(\text{x}) \leq \widetilde{n}_t(\text{x}) + 2E_{\eps}\leq 2\widetilde{n}_t(\text{x})$, to upper bound $\widehat{Q}_t$ by $\widetilde{Q}_t$. This gives: \begin{align*} \widehat{Q}_t(\text{x}) &\leq \widetilde{Q}_t(\text{x}) + \left(\frac{1}{\widetilde{n}_t(\text{x})} + \frac{2E_{\eps}}{\widetilde{n}_t(\text{x})^2}\right)\cdot(1+SH)E_{\eps}\\ & = \widetilde{Q}_t(\text{x}) + \psitilconfL{\text{x}} \enspace. \end{align*} Therefore, we see that $\widetilde{Q}_t(\text{x})+\psitilconfL{\text{x}}$ dominates $\widehat{Q}_t(\text{x})$. Accordingly, if we inflate by $\phitilconfL{\text{x}}$ -- which is clearly an upper bound on $\phihatconfL{\text{x}}$ -- we account for the statistical fluctuations and can verify optimism. In the event that $\widetilde{n}_t(\text{x}) \leq 2E_{\eps}$, we simply upper bound $Q^* \leq H$. For the over-estimation, the bonus we have added is $\phitilconfL{\text{x}} + \psitilconfL{\text{x}}$, which is closely related to the original bonus $\phihatconfL{\text{x}}$. The essential property for our bonus is that it is not significantly larger than the original one $\phihatconfL{\text{x}}$. Indeed, $\phihatconfL{\text{x}}$ scales as $1/\sqrt{\widetilde{n}_t(\text{x})}$ while $\psitilconfL{\text{x}}$ scales roughly as $E_{\eps}/\widetilde{n}_t(\text{x}) + E_{\eps}^2/\widetilde{n}_t(\text{x})^2$, which is lower order in the dependence on $\widetilde{n}_t(\text{x})$. Similarly, the other sources of error here only have lower order effects on the over-estimation. In detail, there are three sources of error. First, $\phitilconfL{\text{x}}$ is within a constant factor of $\phihatconfL{\text{x}}$ since we are focusing on rounds where $\widetilde{n}_t(\text{x}) \geq 2E_{\eps}$. Second, as the policy suboptimality is related to the bonuses on the states and actions we are likely to visit, we cannot have many rounds where $\widetilde{n}_t(\text{x}) \leq 2E_{\eps}$, since all of the private counters are increasing. A similar argument applies for $\psitilconfL{\text{x}}$: we can ignore states that we visit infrequently, and the private counters $\widetilde{n}_t(\text{x})$ for states that we visit frequently increase rapidly enough to introduce minimal additional error. Importantly, in the latter two arguments, we have terms of the form $E_{\eps}/\widetilde{n}_t(\text{x})$, while $\phihatconfL{\text{x}}$ itself scales as $\sqrt{1/\widehat{n}_t(\text{x})}$, which dominates in terms of the accuracy parameter $\alpha$ or the number of episodes $T$. As such we obtain PAC and regret guarantees where the privacy parameter $\varepsilon$ does not appear in the dominant terms. 
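To make the last point concrete, here is a small numeric sanity check in Python (ours; the two bonus shapes below are the schematic forms from this sketch with all constants and logarithmic factors simplified, and $S$, $H$, and the counter error $E$ are arbitrary made-up values):

\begin{verbatim}
import math

# Illustrative check that the privacy bonus psi ~ E/n + E^2/n^2 is
# eventually dominated by the sampling bonus phi ~ sqrt(log(n)/n).
# S, H and the counter error E are made-up values; constants are
# simplified, so only the decay rates matter.

S, H, E = 5, 5, 10.0

def phi(n):   # schematic sampling-error bonus
    return math.sqrt(2.0 * math.log(max(math.e, n)) / n)

def psi(n):   # schematic privacy bonus, (1 + S*H) * E * (1/n + 2E/n^2)
    return (1 + S * H) * E * (1.0 / n + 2.0 * E / n**2)

for n in (10**2, 10**4, 10**6):
    print("n=%8d  phi ~ %.4f  psi ~ %.4f" % (n, phi(n), psi(n)))
\end{verbatim}

Past a burn-in whose length depends on $E_{\eps}$, the $1/n$ decay of the privacy bonus loses to the $1/\sqrt{n}$ decay of the sampling bonus, which is why $\varepsilon$ only enters the lower order terms of the guarantees.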
\section{The $\texttt{PUCB}$ Algorithm} In this section, we introduce the Private Upper Confidence Bound algorithm ($\texttt{PUCB}$), a JDP algorithm with both PAC and regret guarantees. The pseudo-code for $\texttt{PUCB}$ is in algorithm~\ref{alg:pucb}. At a high level, the algorithm is a private version of the \texttt{UBEV} algorithm~\citep{dann2017unifying}. \texttt{UBEV} keeps track of three types of statistics about the history, including (a) the cumulative empirical reward for taking action $a$ in state $s$ at time $h$, denoted $\widehat{r}_t(s,a,h)$, (b) the number of times the agent has taken action $a$ in state $s$ at time $h$, denoted $\widehat{n}_t(s,a,h)$, and (c) the number of times the agent has taken action $a$ in state $s$ at time $h$ and transitioned to $s'$, denoted $\widehat{m}_t(s,a,s',h)$. In each episode $t$, \texttt{UBEV} uses these statistics to compute a policy via dynamic programming, executes the policy, and updates the statistics with the observed trajectory.~\cite{dann2017unifying} compute the policy using an optimistic strategy and establish both PAC and regret guarantees for this algorithm. \input{pseudocode/PUCB} Of course, as the policy depends on the statistics from the previous episodes, $\texttt{UBEV}$ as is does not satisfy JDP. On the other hand, the executed policy depends on the previous episodes \emph{only} through the statistics $\widehat{r}_t,\widehat{n}_t,\widehat{m}_t$. If we maintain and use private versions of these statistics, and we set the privacy level appropriately, we can ensure JDP. To do so, $\texttt{PUCB}$ initializes one private counter mechanism for each of $\widehat{r}_t, \widehat{n}_t,\widehat{m}_t$ ($2SAH + S^2AH$ counters in total). At episode $t$, we compute the policy using optimism as in $\texttt{UBEV}$, but we use only the private counts $\widetilde{r}_t,\widetilde{n}_t,\widetilde{m}_t$ released from the counter mechanisms. We require that each set of counters is $(\varepsilon/3)$-JDP, and so with $$E_{\eps} = \frac{3}{\eps}H\log\left(\frac{2SAH+S^2AH}{\beta'}\right)\log\left(\Tmax\right)^{5/2},$$ we can ensure that with probability at least $1-\beta$: \begin{align*} \forall t \in [T]: \left|\widetilde{n}_t(s,a,h) - \widehat{n}_t(s,a,h)\right| < E_{\eps} \enspace, \end{align*} where $\widehat{n}_t,\widetilde{n}_t$ are the count and the release at the beginning of the $t$-th episode. The guarantee is uniform in $(s,a,h)$ and also holds simultaneously for $\widetilde{r}$ and $\widetilde{m}$. To compute the policy, we define a bonus function $\widetilde{\conf}(s,a,h)$ for each $(s,a,h)$ tuple, which can be decomposed into two parts $\phitilconfL{s,a,h}$ and $\psitilconfL{s,a,h}$, where \begin{align*} &\phitilconfL{s,a,h}=\phitilconfR{s,a,h} \enspace,\\ &\psitilconfL{s,a,h}=\psitilconfR{s,a,h} \enspace. \end{align*} The term $\phitilconfL{\cdot}$ roughly corresponds to the sampling error, while $\psitilconfL{\cdot}$ corresponds to the errors introduced by the private counters. Using this bonus function, we use dynamic programming to compute an optimistic private Q-function in Algorithm~\ref{alg:privateplanning}. The algorithm here is a standard batch Q-learning update, with $\widetilde{\conf}(\cdot)$ serving as an optimism bonus. The resulting Q-function, called $\widetilde{Q}^{+}$, encodes a greedy policy, which we use for the $t$-th episode. \section{Privacy Analysis of $\texttt{PUCB}$} We show that releasing the sequence of actions of algorithm $\texttt{PUCB}$ satisfies JDP with respect to any single user changing their data in one episode.
Formally, \begin{theorem}\label{thm:jdp} Algorithm (\ref{alg:pucb}) \texttt{PUCB}~is $\varepsilon$-JDP. \end{theorem} To prove theorem \ref{thm:jdp}, we use the \emph{billboard lemma} due to \cite{hsu2016private}, which says that an algorithm is JDP if the output sent to each user is a function of the user's private data and a common signal computed with standard differential privacy. We state the formal lemma: \begin{lemma}[Billboard lemma \cite{hsu2016private}]\label{lem:billboard} Suppose $\cM:U \rightarrow \mathcal{R}$ is $\varepsilon$-differentially private. Consider any set of functions $f_i : U_i\times \mathcal{R} \rightarrow \mathcal{R}'$, where $U_i$ is the portion of the database containing the $i$-th user's data. The composition $\{f_i(\Pi_i U, \cM(U))\}$ is $\varepsilon$-joint differentially private, where $\Pi_i:U\rightarrow U_i$ is the projection to $i$'s data. \end{lemma} Let $U_{<t}$ denote the data of all users before episode $t$ and $u_t$ denote the data of the user during episode $t$. Algorithm \texttt{PUCB} keeps track of all events of users $U_{<t}$ in a differentially-private way with the private counters $\widetilde{r}_t, \widetilde{n}_t, \widetilde{m}_t$. These counters are given to the procedure \texttt{PrivQ}, which computes a $Q$-function $\widetilde{Q}^{+}_t$ and induces the policy $\pi_t(s,h) \coloneqq \arg\max_a \widetilde{Q}^{+}_t(s,a,h)$ to be used by the agent during episode $t$. Then the output during episode $t$ is generated by the policy $\pi_t$ and the private data of the user $u_t$ according to the protocol in \cref{alg:rlprotocol}; the output on a single episode is $\left(\pi_t\left(s^{(t)}_1, 1\right), \ldots, \pi_t\left(s^{(t)}_H, H\right)\right)$. By the billboard lemma (\ref{lem:billboard}), the composition of the outputs of all $T$ episodes together with the final policy, $\left(\left\{\left( \pi_t(s^{(t)}_1, 1), \ldots, \pi_t(s^{(t)}_H, H)\right) \right\}_{t\in [T]}, \pi_T\right)$, satisfies $\varepsilon$-JDP if the policies $\{\pi_t\}_{t\in [T]}$ are computed with an $\varepsilon$-DP mechanism. It then only remains to show that the noisy counts satisfy $\varepsilon$-DP. First, consider the counters for the number of visited states. The algorithm $\texttt{PUCB}$ runs $SAH$ parallel private counters, one for each state tuple $(s,a,h)$. Each counter is instantiated with an $\varepsilon/(3H)$-differentially private mechanism which takes as input an event stream $\widehat{n}(s,a,h) \in \{0,1\}^T$, where the $i$-th bit is set to 1 if a user visited the state tuple $(s,a,h)$ during episode $i$ and to 0 otherwise. Hence each stream $\widehat{n}(s,a,h)$ is the data of a private counter. The next claim says that the total $\ell_1$ sensitivity over all streams is bounded by $H$: \begin{claim}\label{claim:sensitivity} Let $U, U'$ be two $t$-neighboring user sequences, in the sense that they only differ in the data for episode $t$. For each $(s,a,h)\in\cS\times\cA\times [H]$, let $\widehat{n}(s,a,h)$ be the event stream corresponding to the user sequence $U$ and $\widehat{n}'(s,a,h)$ be the event stream corresponding to $U'$. Then the total $\ell_1$ distance over all streams satisfies \begin{align*} \sum_{(s,a,h)\in \cS\times\cA\times [H]} \|\widehat{n}(s,a,h) -\widehat{n}'(s,a,h)\|_1\leq H \end{align*} \end{claim} \begin{proof} The claim follows from the fact that on any episode $t$ a user visits at most $H$ state tuples, one per time step.
\end{proof} Finally we use a result from \citep[Lemma 34]{hsu2016private} which states that the composition of the $SAH$ $(\varepsilon/3H)$-DP counters for $\widehat{n}(\cdot)$ satisfies $(\varepsilon/3)$-DP, since the total $\ell_1$ sensitivity of the counters is at most $H$, as shown in claim \ref{claim:sensitivity}. We can apply the same analysis to show that the counters corresponding to the empirical rewards $\widehat{r}(\cdot)$ and the transitions $\widehat{m}(\cdot)$ are both also $(\varepsilon/3)$-DP. Putting it all together, releasing the noisy counters is $\varepsilon$-differentially private. \subsection{Proofs from Section~\ref{sec:mablb}} The lower bound result relies on the following adaptation of the coupling lemma from \citep[Lemma 6.2]{karwa2017finite}. \begin{lemma}[\cite{karwa2017finite}]\label{lemma:KarwaVadhan} For every pair of distributions $\mathbb{D}_{\theta_0}$ and $\mathbb{D}_{\theta_1}$ and every $(\epsilon, \delta)$-differentially private mechanism $M(x_1, \ldots, x_n)$, if $\mathbb{M}_{\theta_0}$ and $\mathbb{M}_{\theta_1}$ are the two induced marginal distributions on the output of $M$ evaluated on an input dataset $X_1, \ldots, X_n$ sampled i.i.d.\ from $\mathbb{D}_{\theta_0}$ and $\mathbb{D}_{\theta_1}$ respectively, $\epsilon' = 6\epsilon n \|\mathbb{D}_{\theta_0}-\mathbb{D}_{\theta_1}\|_{tv}$ and $\delta' = 4\delta e^{\epsilon'}\|\mathbb{D}_{\theta_0}-\mathbb{D}_{\theta_1}\|_{tv}$, then, for every event $E$, \begin{align*} \mathbb{M}_{\theta_0}(E) \leq e^{\epsilon'}\mathbb{M}_{\theta_1}(E) + \delta' \end{align*} \end{lemma} \begin{lemma*}[Lemma \ref{lem:KarwaVadhanMAB}.] Fix any arm $a \in [k]$. Now consider any pair of MAB instances $\mu, \nu \in [0,1]^k$, both with $k$ arms and time horizon $T$, such that $\|\mu_a - \nu_a \|_{tv} < \alpha$ and $\|\mu_{a'} - \nu_{a'} \|_{tv} = 0$ for all $a' \neq a$. Let $R \sim B(\mu)^T$ and $Q \sim B(\nu)^T$ be the sequences of $T$ rounds of rewards sampled under $\mu$ and $\nu$ respectively, and let $\cM$ be any $\epsilon$-DP multi-armed bandit algorithm. Then, for any event $E$ such that under event $E$ arm $a$ is pulled fewer than $t$ times, \begin{align*} \prob{\cM,R}{E} \leq e^{6 \epsilon t \alpha}\prob{\cM,Q}{E} \end{align*} \end{lemma*} \begin{proof} We can think of algorithm $M(R)$ as taking as input a tape of $t$ pre-generated rewards for arm $a$; denote this tape by $R = (r_1, \ldots, r_t)$. If $M(R)$ is executed with input tape $R$, then when $M(R)$ pulls arm $a$ for the $j$-th time the $j$-th entry $R_j$ is revealed and removed from $R$. If $M(R)$ runs out of the tape $R$ then the reward is drawn from the real distribution of arm $a$ (i.e., $P_a$). Lastly, if $M(R)$ pulls some arm $a' \neq a$ then the reward is drawn from the real distribution $P_{a'}$. Note that if $R$ is sampled from the real distribution of arm $a$, i.e., $R \sim P_a^t$, then $M$ and $M(R)$ are equivalent. That is, for any event $E$, $$\prob{M,P}{E} = \prob{M(R), P}{E}$$ Under this construction, the event that $M(R)$ pulls arm $a$ fewer than $t$ times is the same as the event that $M(R)$ consumes fewer than $t$ entries of the tape $R$. By the assumption on the event $E$ under consideration, if $M(R)$ consumes at least $t$ entries of the tape then event $E$ fails to happen. Therefore, in order to evaluate the event $E$ we only need to initialize $M(R)$ with tapes of size $t$. Furthermore, we treat the input tape $R$ as the data of $M$ and we claim that $M(R)$ is $(\epsilon, \delta)$-differentially private on $R$.
Now we apply lemma \ref{lemma:KarwaVadhan} to bound the probability of $E$ under $M(R_p)$ and $M(R_q)$, where $R_p$ and $R_q$ are input tapes generated with $t$ i.i.d.\ samples from the distributions $P_a$ and $Q_a$ respectively: $$\prob{M(R_p),P}{E} \leq e^{6 \epsilon t \alpha} \prob{M(R_q),Q}{E}$$ This implies $\prob{M,P}{E} \leq e^{6 \epsilon t \alpha}\prob{M,Q}{E}$. \end{proof} \subsection{Proofs from Section~\ref{sec:rlpslb}} \begin{lemma*}[Lemma \ref{lem:jdptodp}.] Let $(U,S_1)$ be a user-state input sequence with initial states from some set $\cS_1$. Suppose $\mathcal{M}$ is an RL agent that satisfies $\varepsilon$-JDP in the public initial state setting. Then, for any $s \in \cS_1$ the trace $\cM_{1,s}(U,S_1)$ is the output of an $\epsilon$-DP MAB mechanism on input $U_s$. \end{lemma*} \begin{proof}[Proof of Lemma~\ref{lem:jdptodp}] Fix $(U,S_1)$, $s$, and $U_s$ as in the statement. Recall that $T_s$ is the number of times state $s$ appears in $S_1$. Observe that $\cM_{1,s}(U,S_1)$ has the output type expected from a MAB mechanism on input $U_s$. Fix an event $E \subseteq \cA^{T_s+1}$ on the first actions of all episodes starting with $s$, together with the action predicted by the final policy at state $s$. For any $\bar{a} = (\bar a^{(1)},\ldots,\bar a^{(T_s)},\hat{a}) \in E$ we define the event $E_{\bar{a}} \subseteq \cA^{H \times T} \times \Pi$ by \begin{align*} E_{\bar{a}} = \{ (a_h^t)_{h\in[H], t\in [T]}, \pi \mid a_1^{(t_1)} = \bar a^{(1)}, \ldots, a_1^{(t_{T_s})} = \bar a^{(T_s)}, \pi(s) = \hat{a}\} \enspace. \end{align*} where $a_1^{(t_i)}$ is the first action in the $i$-th episode whose first state is $s$. Then the event $\bar E$ is the union of all events $E_{\bar a}$, defined as \begin{align*} \bar{E} = \cup_{\bar{a} \in E} E_{\bar{a}} \subseteq \cA^{H \times T} \times \Pi \end{align*} Let $\bar{E}_{-t} \subseteq \cA^{H \times (T-1)}\times \Pi$ be the collection of outputs from $\bar{E}$ with the $t$-th episode removed and including the output policy. Furthermore, let $\bar{E}_{< t} \subseteq \cA^{H \times [t-1]}$ be the collection of outputs from $\bar{E}$ truncated to the first $t-1$ episodes, and similarly let $\bar{E}_{> t} \subseteq \cA^{H \times (T - t)}$ be the outputs truncated to the last $T-t$ episodes. For any $\vec{a} \in \cA^{H}$ we define the following notation \begin{align*} &\bar E_{-t}^{|\vec{a}} =\left\{e \in \bar{E}_{-t} : (\vec{a},e) \in \bar{E} \right\} \\ &\bar E_{>t}^{|\vec{a}} = \left\{e \in \bar{E}_{>t}: \exists_{ {b}^{(1)},\ldots,{b}^{(t-1)}\in \cA^{H}} \left( {b}^{(1)},\ldots,{b}^{(t-1)}, \vec{a},e\right) \in \bar{E} \right\} \\ &\bar E_{<t}^{|\vec{a}} = \left\{e \in \bar{E}_{<t}: \exists_{ {b}^{(1)},\ldots,{b}^{(T-t)}\in \cA^{H}} \left(e, \vec{a}, {b}^{(1)},\ldots,{b}^{(T-t)}\right) \in \bar{E} \right\} \enspace. \end{align*} For the remainder of the proof, denote by $\cM_t(U,S_1)$ the output during episode $t$, by $\cM_{<t}(U,S_1)$ all the outputs before episode $t$, by $\cM_{>t}(U,S_1)$ all the outputs after episode $t$, and by $\cM_{-t}(U,S_1)$ all the outputs except the one during episode $t$, including the final output policy.
For any $\vec{a}\in \cA^H$ it is easy to show that \begin{align}\label{eq:Eequiv} \cM_{-t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} \text { if and only if } \cM_{>t}(U,S_1) \in \bar E_{>t}^{|\vec{a}} \text{ and } \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} \end{align} Observe that since $\cM$ processes its inputs incrementally, we have that \begin{align} \label{eq:futureconditioned} \pr{\cM_{t}(U,S_1) = \vec{a} \land \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} ~\bigg| \cM_{>t}(U,S_1) \in \bar E_{>t}^{|\vec{a}} } = \pr{\cM_{t}(U,S_1) = \vec{a} \land \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} } \end{align} Equation \eqref{eq:futureconditioned} says that conditioning on future outputs does not affect the probability of the present event. Now take $(U',S_1)$ to be a $t$-neighboring user-state sequence and note that $U'_s$ is a neighboring sequence of $U_s$ in the sense used in the definition of DP for MAB mechanisms. The next equation says that the output of $\cM$ on episode $t$ is not distinguishable under the user-state sequences $(U, S_1)$ and $(U', S_1)$. This is because $U$ and $U'$ match on all episodes before episode $t$ and they share the same initial state on every episode. We have that \begin{align} \label{eq:neighborevent} \pr{\cM_t(U, S_1) = \vec{a}} = \pr{\cM_t(U', S_1) = \vec{a}} \end{align} We will also use the following simple application of Bayes' rule: \begin{align} \label{eq:bayes} \pr{A \mid B,C } = \frac{\pr{A \land B \mid C}}{\pr{B \mid C}} \end{align} Next we want to show that \begin{align}\label{eq:UtoUprime} \pr{\cM_{t}(U,S_1) = \vec{a} ~\bigg| \cM_{-t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} } = \pr{\cM_{t}(U',S_1) = \vec{a} ~\bigg| \cM_{-t}(U',S_1) \in \bar E_{-t}^{|\vec{a}} } \end{align} Fix one $\vec{a}$ for now. Then from \eqref{eq:Eequiv}, \eqref{eq:futureconditioned}, \eqref{eq:neighborevent}, and \eqref{eq:bayes} we have \begin{align*} &\pr{\cM_{t}(U,S_1) = \vec{a} ~\bigg| \cM_{-t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} } \\ &= \pr{\cM_{t}(U,S_1) = \vec{a} ~\bigg| \cM_{>t}(U,S_1) \in \bar E_{>t}^{|\vec{a}} \land \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} } \quad \text{equation } \eqref{eq:Eequiv}\\ &= \frac{ \pr{\cM_{t}(U,S_1) = \vec{a} \land \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} ~\bigg| \cM_{>t}(U,S_1) \in \bar E_{>t}^{|\vec{a}} }} {\pr{\cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} ~\bigg| \cM_{>t}(U,S_1) \in \bar E_{>t}^{|\vec{a}} }} \quad \text{equation } \eqref{eq:bayes} \\ &= \frac{ \pr{\cM_{t}(U,S_1) = \vec{a} \land \cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} }} {\pr{\cM_{<t}(U,S_1) \in \bar E_{<t}^{|\vec{a}} }} \quad \text{equation } \eqref{eq:futureconditioned} \\ &= \frac{ \pr{\cM_{t}(U',S_1) = \vec{a} \land \cM_{<t}(U',S_1) \in \bar E_{<t}^{|\vec{a}} }} {\pr{\cM_{<t}(U',S_1) \in \bar E_{<t}^{|\vec{a}} }} \quad \text{equation } \eqref{eq:neighborevent} \\ &= \frac{ \pr{\cM_{t}(U',S_1) = \vec{a} \land \cM_{<t}(U',S_1) \in \bar E_{<t}^{|\vec{a}} ~\bigg| \cM_{>t}(U',S_1) \in \bar E_{>t}^{|\vec{a}} }} {\pr{\cM_{<t}(U',S_1) \in \bar E_{<t}^{|\vec{a}} ~\bigg| \cM_{>t}(U',S_1) \in \bar E_{>t}^{|\vec{a}} }} \quad \text{equation } \eqref{eq:futureconditioned} \\ &= \pr{\cM_{t}(U',S_1) = \vec{a} ~\bigg| \cM_{>t}(U',S_1) \in \bar E_{>t}^{|\vec{a}} \land \cM_{<t}(U',S_1) \in \bar E_{<t}^{|\vec{a}} } \quad \text{equation } \eqref{eq:bayes}\\ &=\pr{\cM_{t}(U',S_1) = \vec{a} ~\bigg| \cM_{-t}(U',S_1) \in \bar E_{-t}^{|\vec{a}} } \end{align*} Combined with the $\epsilon$-JDP assumption on $\cM$ this implies that \begin{align*} & \pr{\cM(U,S_1) \in \bar{E}} \\ &= \sum_{\vec{a} \in \cA^H}
\pr{\cM_t(U,S_1) = \vec{a} \land \cM_{-t}(U, S_1) \in \bar E_{-t}^{|\vec{a}} } \\ &= \sum_{\vec{a} \in \cA^H} \pr{\cM_{t}(U,S_1) = \vec{a} ~\bigg| \cM_{- t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} } \pr{\cM_{- t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} } \\ &= \sum_{\vec{a} \in \cA^H} \pr{\cM_{t}(U',S_1) = \vec{a} ~\bigg| \cM_{- t}(U',S_1) \in \bar E_{-t}^{|\vec{a}} } \pr{\cM_{- t}(U,S_1) \in \bar E_{-t}^{|\vec{a}} } \quad\quad \hfill \text{ Equation } \eqref{eq:UtoUprime} \\ &\leq \sum_{\vec{a} \in \cA^H} \pr{\cM_{t}(U',S_1) = \vec{a} ~\bigg| \cM_{- t}(U',S_1) \in \bar E_{-t}^{|\vec{a}} } e^{\varepsilon}\pr{\cM_{-t}(U',S_1) \in \bar E_{-t}^{|\vec{a}} } \quad\quad \hfill \cM \text{ satisfies } \varepsilon\text{-JDP} \\ &= e^{\varepsilon} \pr{\cM(U',S_1) \in \bar{E}} \enspace. \end{align*} Finally, using the inequality above and the construction of $\cM_{1,s}$ and $\bar{E}$, we have \begin{align*} \pr{\cM_{1,s}(U,S_1) \in E} &= \pr{\cM(U,S_1) \in \bar{E}} \\ &\leq e^{\epsilon} \pr{\cM(U',S_1) \in \bar{E}} \\ &= e^{\epsilon}\pr{\cM_{1,s}(U',S_1) \in E} \enspace. \end{align*} \end{proof} To prove the lower bound we consider the class of MDPs shown in Figure~\ref{fig:hardMDP}. An MDP in this class has state space $\cS \coloneqq [n] \cup \{+,-\}$ and action space $\cA \coloneqq \{0,\ldots,m\}$. On each episode, the agent starts in one of the initial states $\{1, \ldots, n\}$ chosen uniformly at random. The state labelled $0$ in the figure is a dummy state which represents the initial transition to any state $s \in \{1, \ldots, n\}$ with uniform probability. In each of the initial states the agent has $m+1$ possible actions, and transitions can only take it to one of two possible absorbing states $\{+,-\}$. Lastly, if the current state is one of $\{ +, - \}$ then the only possible transition is a self loop, hence the agent stays in that state until the end of the episode. We assume that in these absorbing states the agent can only take a fixed action. Every action which transitions to state $+$ provides reward $1$, while actions transitioning to state $-$ provide reward $0$. In particular, in each episode the agent either receives reward $H$ or $0$. Such an MDP can be seen as consisting of $n$ parallel MAB problems. Each MAB problem determines the transition probabilities between the initial state $s \in\{1, \ldots, n\}$ and the absorbing states $\{+,-\}$. We index the possible MAB problems in each initial state by their optimal arm, which is always one of $\{0,\ldots,m\}$. We write $I_s \in \{0,\ldots,m\}$ to denote the MAB instance in initial state $s$, and define the transition probabilities such that $\pr{+|s,0} = 1/2+\alpha'/2$, $\pr{+|s,a'} = 1/2$ for all $a' \notin \{0, I_s\}$, and, for $I_s \neq 0$, $\pr{+|s,I_s} = 1/2 + \alpha'$. Here $\alpha'$ is a free parameter to be determined later. We succinctly represent an MDP in the class by identifying the optimal action (i.e.\ arm) in each initial state: $I \coloneqq (I_1,\ldots,I_n)$. \begin{figure}[h] \begin{equation*} \tikzfig{figures/hard_MDP2} \end{equation*} \caption{Hard MDP}\label{fig:hardMDP} \end{figure} \begin{proof}[Proof of Lemma~\ref{lem:lowerboundpublic}] We start by noting that the first term in the lower bound comes from the corresponding lower bound for the non-private episodic RL setting \citep[Theorem 2]{dann2015sample}, which also holds in our case. Now let $I = (I_1,\ldots,I_n)$ encode an MDP from the class above with $n+2$ states and $m+1$ actions.
The optimal policy on this MDP is given by $\pi^*(s) = I_s$ for $s \in [n]$, and we write $\rho_I^*$ to denote the total expected reward of the optimal policy on a single episode. Define $G_s$ to be the event that the policy $\pi$ produced by algorithm $\cM$ finds the optimal arm in state $s$, that is, $\pi(s)=I_s$. We denote by $\rho_I^{\pi}$ the total expected reward per episode of this policy. Then, for any episode, the difference $\rho_I^*-\rho_I^\pi$ between total rewards is at least $$ \rho_I^*-\rho_I^\pi \geq H \left(1 - \frac{1}{n}\sum_{s=1}^n \mathbbm{1}\{G_s\} \right) \alpha'/2 \enspace. $$ Thus, $\pi$ cannot be $\alpha$-optimal unless we have: \begin{align} &\alpha \geq H \left(1 - \frac{1}{n}\sum_{s=1}^n \mathbbm{1}\{G_s\} \right) \alpha'/2 \notag \\ \iff &\frac{2\alpha}{H \alpha'} \geq\left(1 - \frac{1}{n}\sum_{s=1}^n \mathbbm{1}\{G_s\} \right) \notag\\ \iff &\frac{1}{n}\sum_{s=1}^n\mathbbm{1}\{G_s\} \geq \left( 1 - \frac{2\alpha}{H \alpha'} \right)\coloneqq \phi \enspace. \label{eq:epsOptFrac} \end{align} Here we choose $\phi = 6/7$ and set $\alpha' = \frac{14 \alpha}{H}$. Equation \eqref{eq:epsOptFrac} says that in order to make $\pi$ an $\alpha$-optimal policy we must solve at least a $\phi$ fraction of the MAB instances. Hence, to get an $\alpha$-optimal policy with probability at least $1-\beta$ we require \begin{equation*} 1-\beta \leq \prob{I}{\rho_I^*-\rho_I^\pi \leq \alpha} \leq \prob{I}{\frac{1}{n}\sum_{s=1}^n\mathbbm{1}\{G_s\} \geq \phi} \enspace, \end{equation*} and by Markov's inequality we have \begin{align*} \prob{I}{\frac{1}{n}\sum_{s=1}^n\mathbbm{1}\{G_s\} \geq \phi} \leq \frac{1}{n \phi}\sum_{s=1}^n\prob{I}{G_s} \enspace. \end{align*} The indicators $\mathbbm{1}\{G_s\}$ are mutually independent by construction of the MDP. Now letting $\beta_s$ be an upper bound on the failure probability of each event $G_s$, the derivation above implies that $1-\beta \leq \frac{1}{n \phi}\sum_{s=1}^n (1 - \beta_s)$, or, equivalently, that $\sum_{s} \beta_s \leq n (1 - \phi (1 - \beta))$. Now note that Lemma~\ref{lem:jdptodp} implies that all interactions between $\cM$ and $I$ that start in state $s$ constitute the execution of an $\epsilon$-DP algorithm on the MAB instance at state $s$. Hence, by Lemma~\ref{lem:privMAB} we can only have $\prob{I}{G_s} \geq 1 - \beta_s$ for some $\beta_s < 1/4$ if the number of episodes starting at $s$ where $\cM$ chooses an $\alpha'$-suboptimal arm satisfies \begin{align*} \Ex{n_s} &> \frac{(A-1)}{24 \epsilon \alpha'}\ln{\left(\frac{1}{4\beta_s}\right)}\mathbbm{1}[\beta_s < 1/4] \\ &= \frac{H(A-1)}{336 \epsilon \alpha}\ln{\left(\frac{1}{4\beta_s}\right)}\mathbbm{1}[\beta_s < 1/4] \\ &\geq \frac{H(A-1)}{336 \epsilon \alpha} \ln{\left(\frac{1}{4\beta_s}\right)}\mathbbm{1}[\beta_s \leq 1 - \phi(1-\beta)] \enspace, \end{align*} where we used that $\phi = 6/7$ and $\beta < 1/8$ imply $1 - \phi(1-\beta) < 1/4$, and that each MAB instance has $A - 1$ arms which are $\alpha'$-suboptimal. Thus, we can find a lower bound $\Ex{n_{\cM}} \geq \sum_s \Ex{n_s}$ by minimizing the sum of the lower bounds on $\Ex{n_s}$ under the constraint that $\sum_{s} \beta_s \leq n (1 - \phi (1 - \beta))$. Here we can apply the argument from \citep[Lemma D.1]{dann2015sample} to see that the optimal choice of probabilities is given by $\beta_s = 1 - \phi(1-\beta)$ for all $s$. Plugging this choice into the lower bound leads to \begin{align*} \Ex{n_{\cM}} &\geq \frac{H S (A-1)}{336 \epsilon \alpha} \ln{\left(\frac{7}{4+24\beta}\right)} \enspace.
\end{align*} \end{proof} \begin{lemma*}[Lemma \ref{lem:publicvsprivatestates}] Any RL agent $\cM$ satisfying $\varepsilon$-JDP also satisfies $\varepsilon$-JDP in the public state setting. \end{lemma*} \begin{proof} Suppose that algorithm $\cM$ satisfies $\varepsilon$-JDP. Let $(U, S_1)$ and $(U', S_1')$ be two $t$-neighboring user-state sequences such that $S_1 = S_1'$. Then for all events $E \subseteq \cA^{H \times [T-1]} \times \Pi$ we have \begin{align*} \pr{\cM_{-t}(U,S_1) \in E } \leq e^\varepsilon \pr{\cM_{-t}(U', S_1') \in E} \end{align*} Therefore $\cM$ satisfies the condition for $\varepsilon$-JDP in the public state setting as in definition \ref{def:jdppublic}. \end{proof} \section{Preliminaries} \subsection{Markov Decision Processes} A fixed-horizon \emph{Markov decision process} (MDP) with time-dependent dynamics can be formalized as a tuple $M = \left( \cS, \cA, \cR, \cP, p_0, H \right)$. $\cS$ is the state space with cardinality $S$. $\cA$ is the action space with cardinality $A$. $\cR(s_h, a_h, h)$ is the reward distribution, supported on the interval $[0,1]$, with mean $r(s_h, a_h, h)$. $\cP$ is the transition kernel: given time step $h$, state $s_h$, and action $a_h$, the next state is sampled as $s_{h+1} \sim \cP(\cdot|s_h, a_h,h)$. Finally, $p_0$ is the initial state distribution at the start of each episode, and $H$ is the number of time steps in an episode. In our setting, an agent interacts with an MDP by following a (deterministic) policy $\pi \in \Pi$, which maps states $s$ and time steps $h$ to actions, i.e., $\pi(s,h) \in \cA$. The \emph{value function} at time step $h\in [H]$ for a policy $\pi$ is defined as: \begin{align*} V_h^\pi(s) &\;= \Ex{\sum_{i=h}^H r(s_i, a_i, i) \bigg| s_h = s, \pi} \\ &\;= r(s, \pi(s,h), h) + \sum_{s'\in \cS} V_{h+1}^\pi(s') \cP(s'|s,\pi(s,h), h) \enspace. \end{align*} The \emph{expected total reward} for policy $\pi$ during an entire episode is: $$\rho^\pi = \Ex{\sum_{i=1}^H r(s_i, a_i, i) \bigg| \pi} = p_0^\top V_1^\pi \enspace.$$ The \emph{optimal value function} is given by $V_h^*(s) = \max_{\pi \in \Pi} V_h^{\pi}(s)$. Any policy $\pi$ such that $V_h^{\pi}(s) = V_h^*(s)$ for all $s \in \cS$ and $h \in [H]$ is called optimal. It achieves the optimal expected total reward $\rho^* = \max_{\pi \in \Pi} \rho^{\pi}$. The goal of an RL agent is to learn a near-optimal policy after interacting with an MDP for a finite number of episodes $T$. During each episode $t \in [T]$ the agent follows a policy $\pi_t$ informed by previous interactions, and after the last episode it outputs a final policy $\pi$. \begin{definition} An agent is \emph{$(\alpha,\beta)$-probably approximately correct} (PAC) with sample complexity $f(S,A,H,\tfrac{1}{\alpha}, \log(\tfrac{1}{\beta}))$ if, with probability at least $1-\beta$, it follows an $\alpha$-optimal policy $\pi$ such that $\rho^* - \rho^\pi \leq \alpha$ except for at most $f(S,A,H,\tfrac{1}{\alpha}, \log(\tfrac{1}{\beta}))$ episodes. \end{definition} \begin{definition} The (expected cumulative) \emph{regret} of an agent after $T$ episodes is given by \begin{align*} \mathrm{Regret}(T) = \sum_{t=1}^T (\rho^* - \rho^{\pi_t}) \enspace, \end{align*} where $\pi_1, \ldots, \pi_T$ are the policies followed by the agent on each episode. \end{definition} \subsection{Privacy in RL} In some RL application domains, such as personalized medical treatments, the sequence of states and rewards received by a reinforcement learning agent may contain sensitive information.
For example, in such settings individual users may interact with an RL agent for the duration of an episode and reveal sensitive information in order to obtain a service from the agent. This information affects the final policy produced by the agent, as well as the actions taken by the agent in any subsequent interaction. Our goal is to prevent damaging inferences about a user's sensitive information in the context of the interactive protocol in \cref{alg:rlprotocol} summarizing the interactions between an RL agent $\cM$ and $T$ distinct users. \begin{algorithm}[h] \caption{Episodic RL Protocol}\label{alg:rlprotocol} \KwIn{ Agent $\cM$ and users $u_1, \ldots, u_T$} \For{$t \in [T]$}{ \For{$h \in [H]$}{ $u_t$ sends state $s_h^{(t)}$ to $\cM$ \\ $\cM$ sends action $a_h^{(t)}$ to $u_t$ \\ $u_t$ sends reward $r_h^{(t)}$ to $\cM$ \\ } } $\cM$ releases policy $\pi$ \end{algorithm} Throughout the execution of this protocol the agent observes a collection of $T$ state-reward trajectories of length $H$. Each user $u_t$ gets to observe the actions chosen by the agent during the $t$-th episode, as well as the final policy $\pi$. To preserve the privacy of individual users we enforce a (joint) differential privacy criterion: upon changing one of the users in the protocol, the information observed by the other $T-1$ participants will not change substantially. This criterion must hold even if the $T-1$ participants collude adversarially, e.g., by crafting their states and rewards to induce the agent to reveal information about the remaining user. Formally, we write $U = (u_1,\ldots,u_T)$ to denote a sequence of $T$ users participating in the RL protocol. Technically speaking, a user can be identified with a tree of depth $H$ encoding the state and reward responses they would give to all the $A^H$ possible sequences of actions the agent can choose. During the protocol the agent only gets to observe the information along a single root-to-leaf path in each user's tree. For any $t \in [T]$, we write $\cM_{-t}(U)$ to denote all the outputs excluding the output for episode $t$ during the interaction between $\cM$ and $U$. This captures all the outputs which might leak information about the $t$-th user in interactions after the $t$-th episode, as well as all the outputs from earlier episodes where other users could be submitting information to the agent adversarially to condition its interaction with the $t$-th user. We also say that two user sequences $U$ and $U'$ are $t$-neighbors if they only differ in their $t$-th user. \begin{definition}\label{def:jdp} A randomized RL agent $\cM$ is \emph{$\epsilon$-jointly differentially private under continual observation} (JDP) if for all $t \in [T]$, all $t$-neighboring user sequences $U$, $U'$, and all events $E \subseteq \cA^{H \times [T-1]} \times \Pi$ we have \begin{align*} \pr{\cM_{-t}(U) \in E }\leq e^\epsilon \pr{\cM_{-t}(U') \in E} \enspace. \end{align*} \end{definition} This definition extends to the RL setting the one used in \cite{shariff2018differentially} for designing privacy-preserving algorithms for linear contextual bandits. The key distinction is that in our definition each user interacts with the agent for $H$ time steps (in bandit problems one usually has $H=1$), and we also allow the agent to release the learned policy at the end of the learning process. Another distinction is that our definition holds for all past and future outputs; the sketch below makes explicit which outputs the view $\cM_{-t}(U)$ comprises.
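The following minimal skeleton (our own illustration; the agent and user stubs are hypothetical placeholders, not part of this work) runs the protocol of \cref{alg:rlprotocol} and assembles the joint view $\cM_{-t}(U)$ that Definition~\ref{def:jdp} constrains.

\begin{verbatim}
# Sketch only: stub agent/users make the protocol loop concrete.
import random

class StubAgent:
    """Placeholder; a real M would learn from (state, reward) feedback."""
    def __init__(self, n_actions, seed=0):
        self.n_actions, self.rng = n_actions, random.Random(seed)
    def act(self, state, h): return self.rng.randrange(self.n_actions)
    def observe(self, state, reward): pass
    def release_policy(self): return "pi"  # stand-in for the final policy

class StubUser:
    """Placeholder; in general the responses encode the user's private
    depth-H response tree."""
    def __init__(self, seed): self.rng = random.Random(seed)
    def initial_state(self): return 0
    def respond(self, action): return self.rng.randrange(3), self.rng.random()

def run_protocol(agent, users, H):
    views = []                               # views[t]: actions shown to u_t
    for user in users:
        state, episode_actions = user.initial_state(), []
        for h in range(H):
            a = agent.act(state, h)          # M sends action to u_t
            episode_actions.append(a)
            state, reward = user.respond(a)  # u_t sends state and reward
            agent.observe(state, reward)
        views.append(episode_actions)
    return views, agent.release_policy()

def view_excluding(views, policy, t):
    """M_{-t}(U): every episode's actions except episode t, plus the policy.
    JDP requires this joint view to be insensitive to user t's data."""
    return [v for s, v in enumerate(views) if s != t], policy

views, policy = run_protocol(StubAgent(2), [StubUser(s) for s in range(5)], H=3)
print(view_excluding(views, policy, t=2))
\end{verbatim}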
In contrast, the definition of JDP in \cite{shariff2018differentially} only captures future episodes; hence, it only protects against collusion from future users. To demonstrate that our definition gives a stronger privacy protection, we use a simple example. Consider an online process that takes as input a stream of binary bits $u=(u_1, \ldots, u_T)$, where $u_t\in\{0,1\}$ is the data of user $t$, and on each round $t$ the mechanism outputs the partial sum $m_t(u) = \sum_{i=1}^t u_i$. Then the following trivial mechanism satisfies JDP (in terms of future episodes, as in the JDP definition of \cite{shariff2018differentially}): First, sample once from the Laplace distribution $\xi \sim \text{Lap}(1/\varepsilon)$ before the rounds begin, and on each round output $\widetilde{m}_t(u) = m_t(u) + \xi$. Note that the view of any future user $t'>t$ is $\widetilde{m}_{t'}(u)$. Now let $u$ be a binary stream in which the bit of user $t$ is set, and let $w$ be identical to $u$ but with the bit of user $t$ unset. Then, by the differential-privacy guarantee of the Laplace mechanism, a user $t'>t$ cannot distinguish between $\widetilde{m}_{t'}(u)$ and $\widetilde{m}_{t'}(w)$. Furthermore, any coalition of future users cannot obtain more information about user $t$. Therefore this simple mechanism satisfies the JDP definition from \cite{shariff2018differentially}. However, the simple counting mechanism with one round of Laplace noise does not satisfy JDP for past and future outputs as in our JDP (\cref{def:jdp}). To see why, suppose that user $t-1$ and user $t+1$ collude in the following way: For input $u$, the view of user $t-1$ is $\widetilde{m}_{t-1}(u)$ and the view of user $t+1$ is $\widetilde{m}_{t+1}(u)$. They also know their own data $u_{t-1}$, $u_{t+1}$. Then they can recover the data of the $t$-th user as follows: \begin{align*} \widetilde{m}_{t+1}(u) - u_{t+1} - \widetilde{m}_{t-1}(u) = m_{t+1}(u) + \xi - u_{t+1} - m_{t-1}(u) - \xi = \sum_{i=1}^{t+1}u_i - u_{t+1} - \sum_{i=1}^{t-1}u_i =u_t \enspace. \end{align*} \begin{remark} Two natural questions arise here. First, would the algorithm leak more information about a returning user who participates in several episodes? Yes, but the additional leakage can be bounded using group privacy. Second, would other users be affected? No, because JDP protects against arbitrary collusion among the remaining users. \end{remark} \subsection{Counting Mechanism} The algorithm we describe in the next section maintains a set of counters to keep track of events that occur when interacting with the MDP. We denote by $\widehat{n}_t(s,a,h)$ the count of visits to state tuple $(s,a,h)$ right before episode $t$, where $a\in\cA$ is the action taken on state $s\in\cS$ and time step $h\in [H]$. Likewise, $\widehat{m}_t(s,a,s',h)$ is the count of transitions from state $s$ to $s'$ after taking action $a$ before episode $t$. Finally, we have the counter $\widehat{r}_t(s,a,h)$ for the total reward received by taking action $a$ on state $s$ and time $h$ before episode $t$. On each episode $t$, these counters suffice to build an estimate of the MDP dynamics and construct a policy for that episode. The challenge is that the counters depend on the sequence of states and actions, which is considered sensitive data in this work. Therefore the algorithm must release the counts in a privacy-preserving way, and we do this using the private counters proposed by \cite{chan2011private} and \cite{dwork2010differential}. A private counter mechanism takes as input a stream $\sigma = (\sigma_1, \ldots, \sigma_T)\in [0,1]^T$ and on any round $t$ releases an approximation of the prefix count $c(\sigma)(t) = \sum_{i=1}^t \sigma_i$.
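To make the collusion example above concrete, here is a minimal numerical sketch (our own illustration, not code from this work) of the naive counter that reuses a single Laplace draw; two colluding neighbors recover $u_t$ exactly, which is precisely the failure mode that fresh per-block noise in the binary mechanism below avoids.

\begin{verbatim}
# One shared Laplace draw protects against any single FUTURE viewer,
# but users t-1 and t+1 together recover user t's bit exactly.
import numpy as np

rng = np.random.default_rng(1)
eps = 0.5
u = np.array([1, 0, 1, 1, 0, 1])       # private bits, one per user
xi = rng.laplace(scale=1.0 / eps)      # single noise draw, reused each round
m_tilde = np.cumsum(u) + xi            # noisy partial sums released per round

t = 3                                  # target user (0-indexed)
# user t-1 sees m_tilde[t-1]; user t+1 sees m_tilde[t+1] and knows u[t+1]
recovered = m_tilde[t + 1] - u[t + 1] - m_tilde[t - 1]   # the noise cancels
print(int(round(recovered)) == u[t])   # True: u_t is recovered exactly
\end{verbatim}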
In this work we denote by $\text{PC}$ the binary mechanism of \cite{chan2011private} and \cite{dwork2010differential} with parameters $\epsilon$ and $T$. This mechanism produces a monotonically increasing count and satisfies the following accuracy guarantee: Let $\mathcal{M} \coloneqq \BM{T, \varepsilon}$ be a private counter and $c(\sigma)(t)$ be the true count on episode $t$. Then, given a stream $\sigma$, with probability at least $1-\beta$, simultaneously for all $1\leq t \leq T$, we have \begin{align*} \left| \mathcal{M}(\sigma)(t) - c(\sigma)(t) \right| \leq \frac{4}{\varepsilon}\log(1/\beta)\log^{5/2}(T) \enspace. \end{align*} While the stated bound holds for a single $\varepsilon$-DP counter, our algorithm needs to maintain more than $S^2 A H$ counters. A naive allocation of the privacy budget across all these counters would require noise whose scale grows polynomially in $S$, $A$, and $H$. However, we will leverage the fact that the total change a single user can induce across all counters scales with the length of the episode $H$, which allows us to add a much smaller amount of noise that scales only linearly in $H$. \section{Private Counters} We use the binary mechanism of \citep{chan2011private} and \cite{dwork2010differential} to keep track of important events in a differentially private way. \begin{algorithm}[h] \caption{Binary Mechanism $\mathcal{B}$} \label{alg:BM} \KwIn{ Time upper bound $T$, privacy parameter $\epsilon$, stream $\sigma \in \{0,1\}^T$} $\epsilon' \leftarrow \epsilon / \log T$\\ \For{$t\leftarrow 1$ \textbf{ to } $ T$}{ Express $t$ in binary form: $t = \sum_j \text{Bin}_j(t) \cdot 2^j$\\ Let $i := \min\{j: \text{Bin}_j(t)\neq0\}$\\ $\alpha_i \leftarrow \sum_{j<i}\alpha_j + \sigma(t)$\\ \For{ $j \leftarrow 0$ \textbf{to} $i-1$ }{ $\alpha_j\leftarrow 0, \hat{\alpha}_j \leftarrow 0$ } $\hat{\alpha}_i \leftarrow \alpha_i + \text{Lap}\left(\frac{1}{\epsilon'}\right)$\\ Output at time $t$: $\mathcal{B}(t) \leftarrow \sum_{j:\text{Bin}_j(t)=1} \hat{\alpha}_j$\\ } \end{algorithm} The error of the counter is given by the following theorem: \begin{theorem}[Theorem 4.1 in \cite{dwork2010differential}] \label{thm:counterAccuracy} Algorithm~\ref{alg:BM}, run with parameters $T$ and $\varepsilon$, yields a $T$-bounded counter with $\varepsilon$-differential privacy, such that with probability at least $1-\beta$ the error for all prefixes $1\leq t \leq T$ is at most $\tfrac{4}{\varepsilon}\log(1/\beta)\log^{2.5}\left(T\right)$. \end{theorem} \section{PAC and Regret Analysis of algorithm \texttt{PUCB}} In this section we provide the complete PAC and regret analysis of algorithm \texttt{PUCB}, corresponding to Theorems \ref{thm:PUCBPAC} and \ref{thm:PUCBRegret}, respectively. We begin by analyzing the PAC sample complexity. \subsection{PAC guarantee for \texttt{PUCB}. Proof of Theorem \ref{thm:PUCBPAC}} \input{docs/appendix_pac_upper} \subsection{Regret bound for \texttt{PUCB}. Proof of Theorem \ref{thm:PUCBRegret}} \input{docs/appendix_regret_upper} \subsection{Error bounds} \input{docs/fail_events} \subsection{$Q$-optimism}\label{sec:optimism} \input{docs/optimism} \subsection{Optimality gap}\label{subsec:optimalitygap} \input{docs/optimality_gap} \subsection{Nice Episodes}\label{subsec:niceepisodes} \input{docs/nice_episodes} \section{PAC and Regret Lower Bound Proofs} \subsection{PAC Lower Bound. Proof of Theorem \ref{thm:lowerbound}} \input{docs/appendix_lower_bound} \subsection{Regret Lower Bound.
Proof of Theorem \ref{thm:regretlower}} \input{docs/appendix_regret_lower} \section{Acknowledgements} Giuseppe Vietri has been supported by the GAANN fellowship from the U.S. Department of Education. We thank Matthew Joseph, whose comments improved our definition of joint differential privacy. \newpage \bibliographystyle{alpha}
\section{Introduction} The cosmic star-formation rate density is known to have been significantly higher in the past (see review by Madau \& Dickinson 2014). This galaxy formation will have been fed by larger molecular gas masses than are observed in present-day galaxies, as confirmed by observations of redshifted CO line emission (see review by Carilli \& Walter 2013). A significant contributor to the star-formation rate density of the Universe at the peak epoch of galaxy build-up ($z=1-3$) was the population of massive star-forming main-sequence galaxies (Brinchmann et al.\ 2004; Daddi et al.\ 2007; Elbaz et al.\ 2007; Rodighiero et al.\ 2011; Sargent et al.\ 2012). Several studies of molecular CO line emission have concluded that these galaxies have long depletion timescales, high molecular gas fractions and typically evolve secularly with redshift (e.g. Daddi et al.\ 2010a,b; Tacconi et al.\ 2010, 2013, 2018; Freundlich et al.\ 2019). In general, considerable advances have been made in our understanding of the molecular and atomic interstellar medium (ISM) properties of main-sequence galaxies at $z > 1$ (Dannerbauer et al.\ 2009; Aravena et al.\ 2010, 2019; Tacconi et al.\ 2013, 2018; Daddi et al.\ 2015; Genzel et al.\ 2015; Decarli et al.\ 2016; Valentino et al.\ 2018, 2020; Zanella et al.\ 2018; Brisbin et al.\ 2019). Owing to their brightness at rest-frame FIR wavelengths, the ionized and neutral species of carbon, nitrogen and oxygen provide powerful diagnostic lines for tracing the ISM of nearby and distant galaxies. When combined with photo-dissociation region (PDR) models (Tielens \& Hollenbach 1985), measurement of the emission from different lines provides a means to constrain quantities such as the ionization rate and metallicity of the ISM of galaxies. Multi-level FIR transition lines have now been widely surveyed in local galaxies, originally by \textit{ISO} (e.g. Luhman et al.\ 1998; Malhotra et al.\ 2001) and more recently using the PACS spectrometer (Poglitsch et al.\ 2010) on the ESA \textit{Herschel Space Observatory} (Pilbratt et al.\ 2010). The \hbox{[CII]}158\,$\mu$m line is typically the brightest in star-forming galaxies, arising from ionized, and even neutral, gas where it is the main coolant (Wolfire et al.\ 2003). Another commonly observed FIR line tracer of the ionized ISM is \hbox{[NII]}122\,$\mu$m. It has the advantage that it can be found associated with lower excitation gas, close to that observed in our own Galaxy (e.g. Goldsmith et al.\ 2015; Herrera-Camus et al.\ 2016). Another major coolant of the ISM is \hbox{[OI]}63\,$\mu$m (Wolfire et al.\ 2003). Owing to its high excitation temperature and critical density, it can dominate the cooling in regions of starburst activity (Kaufman et al.\ 1999, 2006; Brauher et al.\ 2008; Croxall et al.\ 2012; Narayanan \& Krumholz 2017). When combined with measurements of the \hbox{[CII]}158\,$\mu$m line intensity and FIR luminosity, the \hbox{[OI]}63\,$\mu$m line intensity can constrain the FUV field, $G$, and the gas density using PDR models. The luminosity in these FIR lines generally exhibits a deficit in the most FIR-luminous galaxies compared to the trend expected from lower luminosity galaxies (e.g. Malhotra et al.\ 2001; Graci{\'a}-Carpio et al.\ 2011; Diaz-Santos et al.\ 2017). This has made the emission from lines like \hbox{[OI]}63\,$\mu$m more challenging to detect at high redshift. Studies of the \hbox{[OI]}63\,$\mu$m line in the distant Universe have been further limited by the opacity of the atmosphere.
Space-based observations provide the most promising route to detecting this line in either emission or absorption during the $z \sim 1 - 2$ epoch of peak star-formation. The \textit{Herschel}-PACS spectrometer enabled observations of the \hbox{[OI]}63\,$\mu$m line emission in high-redshift, submm-selected starburst galaxies (Sturm et al.\ 2010; Coppin et al.\ 2012; Brisbin et al.\ 2015; Wardlow et al.\ 2017; Zhang et al.\ 2018), which confirm the FIR line deficit observed in nearby luminous and ultraluminous galaxies. Most recently, Rybak et al.\ (2020) have made a ground-based detection of \hbox{[OI]}63\,$\mu$m in G09.83808, a dusty $z \sim 6$ galaxy, and from these data they infer a gas density, $n = 10^4$~cm$^{-3}$, and FUV field strength, $G = 10^4$~G$_0$\footnote{Note that $G_0$ is the Habing field unit and is equal to $1.6 \times 10^{-3}$~erg~s$^{-1}$~cm$^{-2}$.}. To date, few observations of this line have been presented for lower redshift, main-sequence star-forming objects like the BzK galaxies. Here, we present \textit{Herschel}-PACS spectroscopy of four BzK-selected star-forming galaxies at $z \sim 1.5$. The paper is organized as follows: in Section~2 we describe the sample selection along with the \textit{Herschel}-PACS observations and data analysis. In Section 3 we present our results and discussion, including PDR modelling of the luminosity ratios. Finally, we conclude in Section 4. Throughout this work, we adopt a flat $\Lambda$CDM cosmology with parameters measured by Planck Collaboration et al.\ (2016). \section{Observations and Data Reduction} \subsection{Selection of BzK galaxy sample} Our targets were selected to be massive ($\rm log\, M_{stars}/M_{\odot} > 10.5$), disk galaxies at $z \sim 1.5$, detected in multiple CO line transitions (Daddi et al.\ 2008, 2010a,b, 2015; Dannerbauer et al.\ 2009; Aravena et al.\ 2010). There were four main-sequence galaxies with observations of multiple CO line transitions at the time of the proposal, and all benefitted from a wealth of multi-wavelength data covering UV-to-cm wavelengths (Capak et al.\ 2004; Wirth et al.\ 2004; Barger et al.\ 2008; Magdis et al.\ 2010, 2012; Morrison et al.\ 2010; Teplitz et al.\ 2011). The data have been used to measure infrared luminosities, L$_{\rm IR} \sim (1-2)\times 10^{12}$~L$_{\odot}$, and estimate star-formation rates of $SFR \sim 100 - 200$~M$_{\odot}$~yr$^{-1}$. With the exception of BzK-17999, which has not been detected in CO~\textit{J}=1-0 line emission, all of our sources have been observed in CO~\textit{J}=1-0, 2-1, 3-2 and 5-4 line emission. Some of the observational properties of our targets are provided in Table~\ref{tab:obs}. \begin{table*} \begin{minipage}{1.0\textwidth} \caption{Properties of the targets in our sample, along with the PACS observing times and resulting spectral measurements. The CO line redshifts are from Daddi et al.\ (2010a,b), while the 8-to-1000~$\mu$m infrared luminosities have been derived by fitting their infrared spectral energy distributions (Magdis et al.\ 2012).}\label{tab:obs} \scriptsize \hspace{-0.2in} \begin{tabular}{lrrrlrcccc} \hline \multicolumn{1}{c}{Source name} & \multicolumn{4}{c}{\underline{Source properties}} & \multicolumn{3}{c}{\underline{Observation Parameters}} & \multicolumn{2}{c}{\underline{ [OI]63\,$\mu$m Spectral Measurements}} \\ & \multicolumn{1}{c}{R.A.} & \multicolumn{1}{c}{Dec.} & \multicolumn{1}{c}{z$_\mathrm{CO}$} & \multicolumn{1}{c}{L$_{IR}$} & \multicolumn{1}{c}{OD} & \multicolumn{1}{c}{ObsID} & \multicolumn{1}{c}{Obs.
time} & \multicolumn{1}{c}{Integrated intensity} & rms per 80~km~s$^{-1}$ \\ & \multicolumn{2}{c}{(J2000)} & & $\times 10^{12}$~L$_{\odot}$ & & & \multicolumn{1}{c}{(hrs)} & \multicolumn{1}{c}{(Jy~km~s$^{-1}$)} & \multicolumn{1}{c}{(mJy)} \\ \hline BzK-4171 & 12:36:26.516 & 62:08:35.25 & 1.4652 & 1.0 & 1140 & 1342247456 & 9.0 & $< 2.2$ (3-$\sigma$) & 3.5 \\ BzK-21000 & 12:37:10.597 & 62:22:34.60 & 1.5213 & 2.1 & 1132 & 1342247133 & 7.5 & 15.3$\pm$2.7 & 9.1\\ BzK-16000 & 12:36:30.120 & 62:14:28.00 & 1.5250 & 0.7 & 1306 & 1342256932, 1342247133 & 9.6 & $<4.8$ (3-$\sigma$) & 12.9 \\ BzK-17999 & 12:37:51.819 & 62:15:20.16 & 1.4139 & 1.1 & 1133 & 1342247156 & 6.3 & $< 9.4$ (3-$\sigma$) & 16.5 \\ \hline \end{tabular} \normalsize \end{minipage} \end{table*} \subsection{Observing setup and pipeline processing} Observations were made with the PACS integral field unit (IFU) spectrometer on board the \textit{Herschel Space Observatory} (OT2\_maravena\_3, PI: M. Aravena) during June and December, 2012. At the redshifts of our targets, the \hbox{[OI]}63\,$\mu$m line is redshifted into the 103--190~$\mu$m R1 band, which we have used along with the high spectral sample density mode. The sky background subtraction was achieved using a chop-nod technique, and the total on-sky observing times are given in Table~\ref{tab:obs}. The PACS integral field spectrometer consists of 5$\times$5 spatial pixels, each connected to two arrays of 16 spectral pixels. Each spectrometer spatial pixel (or spaxel) has an approximate size of $9\farcs 4 \times 9\farcs 4$ at these wavelengths, and the line emission is unresolved over these angular scales (Daddi et al.\ 2010a,b). Data reduction was performed using the PACS data reduction and calibration pipeline. We follow a similar recipe to that of Coppin et al.\ (2012) in our data processing, using the \textit{Herschel} Interactive Processing Environment (HIPE v12.1.0; Ott 2010) and calibration tree version 58. This pipeline scales the continuum in each pixel to the median value in order to perform the flat-field correction, and then subsequently combines the nods for sky removal. As our sources are expected to be unresolved at the resolution of these observations, the spectra are extracted at the central pixel position. The spectra have a resolution of $\sim$40~km~s$^{-1}$ prior to resampling. No continuum emission is detected from the targets in our sample. The pipeline-processed spectra are all modelled with a third-order polynomial fit to the regions of the data which are expected to be line-free, as indicated by the width of the previously detected CO lines. This polynomial was deemed to be the lowest order which best represented the off-line spectral baseline structure while not introducing a spurious signal at the expected line wavelength. We also excluded channels at the outer edges of the band in our fitting, as these are known to be noisy in PACS spectra (e.g. Coppin et al.\ 2012). The total region included in the line fitting corresponds to $\sim$3000~km~s$^{-1}$. Figure~\ref{fig:spectra} shows plots of the spectra following baseline subtraction and resampling to $\sim$80~km~s$^{-1}$.
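As an illustration of this baseline treatment, the following sketch (our own construction on synthetic data; the actual reduction used the HIPE pipeline) fits a third-order polynomial to the line-free channels, subtracts it, and rebins to $\sim$80~km~s$^{-1}$ channels.

\begin{verbatim}
# Synthetic spectrum: cubic baseline + noise + a FWHM~550 km/s line.
import numpy as np

rng = np.random.default_rng(2)
v = np.arange(-1500, 1500, 40.0)             # velocity grid, 40 km/s channels
flux = 0.004 * (v / 1000) ** 3 + rng.normal(0, 0.01, v.size)
flux += 0.025 * np.exp(-0.5 * (v / (550 / 2.355)) ** 2)

line_free = np.abs(v) > 550                  # mask ~1 FWHM around the line
coeffs = np.polyfit(v[line_free], flux[line_free], deg=3)
residual = flux - np.polyval(coeffs, v)      # baseline-subtracted spectrum

# Rebin pairs of 40 km/s channels to ~80 km/s channels.
rebinned = residual[: residual.size // 2 * 2].reshape(-1, 2).mean(axis=1)
\end{verbatim}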
\section{Results and Discussion} \begin{figure*} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{BzK4171_rebin.pdf} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{BzK21000_rebin_wfit.pdf} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{BzK16000_rebin.pdf} \end{minipage} \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=\textwidth]{BzK17999_rebin.pdf} \end{minipage} \caption{ \textit{Herschel}-PACS spectra of the four BzK-selected star-forming galaxies in our sample, tuned to the observed wavelength of the \hbox{[OI]}63\,$\mu$m line. Each spectrum is plotted relative to the redshift of the previously detected CO~\textit{J}=2-1 lines (Daddi et al.\ 2010a), while the \textit{solid horizontal} lines indicate the FWHM of the CO. The channels have been resampled and the spectral resolution corresponds to $\sim$80~km~s$^{-1}$. The only source which exhibits significant \hbox{[OI]}63\,$\mu$m line emission is BzK-21000 at $z=1.5213$, and the \textit{dotted line} shows a Gaussian fit with a peak of 25~mJy, FWHM$=550$~km~s$^{-1}$ and a velocity offset of 26~km~s$^{-1}$ relative to the CO line.} \label{fig:spectra} \end{figure*} \subsection{Detection of \hbox{[OI]}63\,$\mu$m in BzK-21000} Of the four targets observed in our programme, we detect \hbox{[OI]}63\,$\mu$m line emission only in BzK-21000 at $z = 1.5213$. The integrated line intensity is detected at 5.7-$\sigma$ significance (see Table~\ref{tab:obs}). Although there is a hint of positive emission in the spectra of BzK-4171 and BzK-17999, the significance of the integrated intensity is formally less than 3-$\sigma$ in each case. We calculate a line luminosity for BzK-21000 of L$_{\rm [OI]63\,\mu m} = (3.9\pm 0.7)\times 10^9$~L$_{\odot}$. Among galaxies in the nearby Universe, a comparable line luminosity is observed only in the low-redshift ULIRG NGC6240 (Diaz-Santos et al.\ 2017), a prototypical dual AGN known to be undergoing a merger (e.g. de Vaucouleurs et al.\ 1964; Komossa et al.\ 2003; Wang et al.\ 2014). The line-to-infrared luminosity ratio in NGC6240 is nearly 70\% higher than that observed in BzK-21000, possibly due to an absence of AGN activity in the latter. We discuss this further below. The other three sources are individually undetected in these data, and we compute 3-$\sigma$ upper-limits to the integrated line intensities as $3 \sqrt{\Delta V_{\rm line} / \Delta V_{\rm chan}}\, \sigma_{\rm chan}\, \Delta V_{\rm chan}$ (Isaak et al.\ 2004; Wagg et al.\ 2007). Here $\Delta V_{\rm chan}$ and $\sigma_{\rm chan}$ are the channel width and rms per channel, respectively, while $\Delta V_{\rm line}$ is the assumed linewidth of the \hbox{[OI]}63\,$\mu$m line based on previous CO~\textit{J}=2-1 line measurements (Daddi et al.\ 2010a,b). We assume FWHM linewidths of 530, 194, and 440~km~s$^{-1}$ when calculating the \hbox{[OI]}63\,$\mu$m line intensity limits for BzK-4171, BzK-16000 and BzK-17999, respectively. Table~\ref{tab:obs} provides the calculated 3-$\sigma$ upper-limits on the integrated line intensity. \subsection{Spectral line stacking} Although the \hbox{[OI]}63\,$\mu$m line emission is not detected in three of our targets, we perform a stacking analysis of the spectra of the three undetected sources in order to determine if the line might be detectable with more sensitive observations.
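Before describing the stack, a short aside: the sketch below (our own illustration, not part of the original reduction) evaluates the 3-$\sigma$ upper-limit expression above using the BzK-4171 values from Table~\ref{tab:obs}, and reproduces the $<2.2$~Jy~km~s$^{-1}$ limit quoted there.

\begin{verbatim}
# 3-sigma limit: 3 * sqrt(dV_line/dV_chan) * sigma_chan * dV_chan
import math

def upper_limit_3sigma(sigma_chan_mjy, dv_chan, dv_line):
    """sigma_chan_mjy: rms per channel [mJy]; dv_chan, dv_line: [km/s].
    Returns the 3-sigma integrated-intensity limit in Jy km/s."""
    sigma_jy = sigma_chan_mjy / 1000.0
    return 3 * math.sqrt(dv_line / dv_chan) * sigma_jy * dv_chan

# BzK-4171: rms = 3.5 mJy per 80 km/s channel, assumed FWHM = 530 km/s.
print(f"{upper_limit_3sigma(3.5, 80.0, 530.0):.1f} Jy km/s")  # -> 2.2
\end{verbatim}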
Our approach is to calculate the weighted mean of the individual spectra, $S_i$, after first normalizing each spectrum by its source's FIR luminosity in units of $10^{12}$~L$_{\odot}$: \begin{equation} S_{stacked} = \frac{\sum_{i=1}^{n} w_i S_i}{\sum_{i=1}^{n} w_i} \end{equation} \noindent For the weighting we take the measured rms of each spectrum, $\sigma_i$, and consider both $w_i = 1/\sigma_i$ and $w_i = 1/\sigma_i^2$. Both weighting schemes give similar results, and in Figure~\ref{fig:oistack} we plot the results obtained assuming $w_i = 1/\sigma_i^2$. The average infrared luminosity of the three galaxies is $(9.3\pm 2.1)\times 10^{11}$~L$_{\odot}$. \begin{figure} \includegraphics[width=\columnwidth]{COline_stack_residuals_weight2_faint.pdf} \caption{The \textit{solid line} shows the stacked \hbox{[OI]}63\,$\mu$m spectrum for the three BzK-selected galaxies in our sample which are individually undetected in the data. A weighted average of the spectra is obtained using weights $w_i = 1/\sigma_i^2$, where $\sigma_i$ is the rms calculated for each spectrum. The stacked \hbox{[OI]}63\,$\mu$m line emission is detected at 5.5-$\sigma$ significance with an integrated intensity of $4.7\pm 0.9$~Jy~km~s$^{-1}$. For reference, the \textit{dashed line} shows the best-fitting Gaussian model for these data.} \label{fig:oistack} \end{figure} The results of our stacking analysis reveal a significant detection of the \hbox{[OI]}63\,$\mu$m line emission with an integrated intensity of $4.7 \pm 0.9$~Jy~km~s$^{-1}$. As all three galaxies are at a similar redshift, we can assume a common luminosity distance and therefore calculate a line luminosity from the integrated intensity of the stacked spectrum, L$_{\rm [OI]63\mu m} = (1.1 \pm 0.2)\times 10^9$~L$_{\odot}$. \subsection{Luminosity ratios and PDR modelling} Assuming that the line emission is cospatial with the thermal dust continuum emission, we calculate the ratio of the [OI]63$\mu$m line luminosity in BzK-21000 to its total 8-1000$\mu$m infrared luminosity measured by Magdis et al.\ (2012). These authors use photometry from \textit{Herschel} PACS and SPIRE to constrain the full infrared spectral energy distributions of the targets in our sample. The measured luminosities are presented in Table~\ref{tab:obs}. From this we calculate a luminosity ratio of $L_{\rm [OI]63\,\mu m}/L_{\rm IR} = (1.8 \pm 0.3)\times 10^{-3}$ for BzK-21000. \begin{figure} \includegraphics[width=1.1\columnwidth]{LOI63ratios.eps} \caption{We plot the luminosity ratio, $L_{\rm [OI]63\,\mu m }/L_{\rm IR}$, versus L$_{\rm IR}$ for the GOALS sample of nearby galaxies (Diaz-Santos et al.\ 2017), compared to those galaxies which have been detected at high-redshift. The \textit{stars} show the luminosity ratios of lensed and unlensed submm galaxies at $z \sim 1-3$ (Coppin et al.\ 2012; Brisbin et al.\ 2015; Zhang et al.\ 2018), while Rybak et al.\ (2020) detect line emission in G09.83808 at $z \sim 6$ using the APEX telescope. Where applicable, the luminosities have been corrected for gravitational lensing. For reference we also plot the $[OI]63\,\mu$m-to-IR relationship for local galaxies (de Looze et al.\ 2014). The error bars include an additional 20\% to account for instrumental calibration uncertainty on the line measurements. } \label{fig:loilfir} \end{figure} In Figure~\ref{fig:loilfir} we plot the [OI]63$\mu m$ line-to-infrared luminosity ratio of galaxies as a function of infrared luminosity.
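For concreteness, the weighted stack defined above amounts to the following sketch (synthetic spectra with assumed shapes; the rms and L$_{\rm IR}$ values are those of the three undetected sources in Table~\ref{tab:obs}).

\begin{verbatim}
# Inverse-variance-weighted, L_IR-normalized spectral stack.
import numpy as np

def stack(spectra, L_IR, sigma):
    """spectra: (n, n_chan) array [mJy]; L_IR in units of 1e12 L_sun;
    sigma: per-spectrum rms [mJy]. Returns the weighted mean spectrum."""
    S = spectra / L_IR[:, None]          # normalize to a common luminosity
    w = 1.0 / sigma ** 2                 # weights w_i = 1/sigma_i^2
    return (w[:, None] * S).sum(axis=0) / w.sum()

rng = np.random.default_rng(3)
spectra = rng.normal(0, [[3.5], [12.9], [16.5]], size=(3, 40))  # noise only
stacked = stack(spectra, L_IR=np.array([1.0, 0.7, 1.1]),
                sigma=np.array([3.5, 12.9, 16.5]))
print(f"stacked rms ~ {stacked.std():.1f} mJy")
\end{verbatim}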
The luminosity ratio for BzK-21000 is compared to the low-redshift GOALS sample of star-forming galaxies and AGN (Diaz-Santos et al.\ 2017), along with the average ratio derived by de Looze et al.\ (2014). Also plotted are the luminosity ratios of the submm galaxies at $z \sim 1-3$ detected using \textit{Herschel}-PACS (Coppin et al.\ 2012; Brisbin et al.\ 2015; Zhang et al.\ 2018), where three of the 8 to 1000~$\mu$m infrared luminosities are from Swinbank et al.\ (2014), and the APEX telescope detection of [OI]63$\mu$m emission in a gravitationally lensed, dusty galaxy at $z \sim 6$ (Rybak et al.\ 2020). All of our error bars include an additional 20\% to account for instrumental calibration uncertainty on the line measurements. The high-redshift galaxies detected in [OI]63$\mu m$ line emission tend to exhibit a higher line-to-infrared luminosity ratio than that of typical galaxies in the nearby Universe. One of the clear exceptions to this is NGC6240, exhibiting both a high infrared luminosity ($\sim9\times10^{11}$~L$_{\odot}$) and strong [OI]63$\mu$m line emission ($\sim 2.9\times 10^{9}$~L$_{\odot}$). This dual AGN is known to have a very warm and dense ISM, as revealed by studies of molecular CO line emission and the dense gas tracers HCN and HCO$^+$ (e.g. Greve et al.\ 2009; Meijerink et al.\ 2013; Scoville et al.\ 2015; Treister et al.\ 2020). It also appears to have a nuclear outflow traced by molecular gas (van der Werf et al.\ 1993; Iono et al.\ 2007; Feruglio et al.\ 2013; Cicone et al.\ 2018). It is possible that the extreme physical conditions of the ISM of NGC6240 may have some similarities with those of the high-redshift submm starburst galaxies and AGN detected in the [OI]63$\mu$m line; however, we note that both high-redshift submm galaxies and main-sequence galaxies generally exhibit lower CO line excitation. The presence of an AGN may be a contributing factor, as up to four of the $z \sim 1 - 3$ submm galaxies with strong [OI]63$\mu$m line emission are thought to contain an AGN. The optical and UV lines in SMMJ030227.73+000653.5 suggest that an AGN is present (Swinbank et al.\ 2004; Takata et al.\ 2006), while SDSS~J120602.09+514229.5 shows weak evidence in the form of a strong [S~IV] line and hot mid-infrared colours (Fadely et al.\ 2010). LESS66 has a \textit{Chandra} X-ray counterpart (Wang et al.\ 2013) and MIPS~22530 is tentatively believed to host an AGN based on an analysis of its radio emission (Sajina et al.\ 2008). Another possible interpretation of the high [OI]63$\mu m$ line luminosity observed in submm galaxies and BzK-21000 is that this emission arises from an extended reservoir of cool, low-density, neutral gas within these galaxies. Such a scenario would be consistent with the low molecular CO line excitation observed in main sequence galaxies (e.g. Daddi et al.\ 2008, 2010a, 2015; Dannerbauer et al.\ 2009; Aravena et al.\ 2010), and the extended reservoirs of cold molecular gas traced by CO~\textit{J}=1-0 in some submm galaxies (e.g. Ivison et al.\ 2010, 2011; Riechers et al.\ 2011). Blind mm and cm-wavelength surveys of CO line emission in high-redshift galaxies have found that galaxies selected via CO~\textit{J}=1-0 line emission have a lower excitation, on average, than those selected through the CO~\textit{J}=3-2 line (Riechers et al.\ 2020). These low excitation galaxies may also be strong [OI]63$\mu m$ line emitters, with some fraction of the neutral atomic gas arising from clumps of denser gas.
The results presented here are similar to what has been found by previous studies of far-infrared line emission in star-forming galaxies, where the line-to-infrared luminosity ratio shows a deficit that shifts to higher luminosities with redshift (e.g. Graci\'a-Carpio et al.\ 2011). Such a redshift trend can be removed by plotting the dependence of the luminosity ratio on the $\rm L_{FIR}/M_{H_2}$ ratio (related to the star-formation efficiency). This can be understood by considering that the majority of galaxies studied in far-infrared line emission at high-redshift are starburst galaxies, which exhibit a high star-formation efficiency compared to main sequence galaxies. Observing the [CII] line emission in main-sequence galaxies over a range in luminosities and redshifts, Zanella et al.\ (2018) showed that the luminosity in this line is strongly correlated with the total molecular gas mass. As such, the $\rm L_{[CII]}/ L_{FIR}$ ratio could be interpreted as a measure of the gas-depletion timescale for galaxies such as those in our sample. \begin{figure} \includegraphics[width=1.\columnwidth]{ratiomaps.pdf} \caption{PDR model parameter space constrained by our measured luminosity ratios for BzK-21000. The CO~\textit{J}=2-1 line luminosity is from Daddi et al.\ (2010a). Using the PDR Toolbox models (Pound \& Wolfire 2008; Kaufman et al.\ 2006) updated by the authors (Wolfire priv. communication), the UV radiation field within the [OI]63$\mu$m emission line region is estimated to be $G \sim 320 G_0$, at a gas density of $n \sim 1800$~cm$^{-3}$, if concomitant with the low-\textit{J} CO line emission.} \label{fig:PDRmodels} \end{figure} Finally, we consider the constraints that our luminosity ratios imply for the physical conditions within the ISM of BzK-21000 where the [OI]63$\mu$m emission arises. We assume that the [OI]63$\mu$m line and thermal dust continuum emission regions are cospatial with the CO~\textit{J}=2-1 line emission measured by Daddi et al.\ (2010a), acknowledging that high angular resolution imaging would be required to verify this assumption. To investigate the PDRs we adopt updated PDR models based on the online PDR Toolbox (Pound \& Wolfire 2008; Kaufman et al.\ 2006; Wolfire priv. communication). These models have recently been updated to reflect the chemistry and reaction rates noted in Hollenbach et al.\ (2012) and Neufeld \& Wolfire (2016), as well as photodissociation and ionization rates from Heays et al.\ (2017), and the collisional excitation of OI from Lique et al.\ (2018). These models assume a simple slab PDR geometry illuminated on one side. For an ensemble of externally illuminated PDR cloudlets, optically thin emission will be observed from both sides, while optically thick emission is only observed from the front side. Therefore, as recommended in Kaufman et al.\ (1999) and commonly practiced in PDR analysis (e.g. Hailey-Dunsheath et al.\ 2010; Stacey et al.\ 2010), we have doubled the observed line fluxes of [OI] and CO when fitting the data to account for their expected optical thickness. Based on these models, the L$_{\rm [OI]63\mu m}/$L$_{\rm IR}$ and L$_{\rm CO}/$L$_{\rm IR}$ luminosity ratios suggest a UV radiation field of $G \sim 320 G_0$ and a gas density of $n \sim 1800$~cm$^{-3}$ (Figure~\ref{fig:PDRmodels}). This gas density is broadly consistent with the Large Velocity Gradient (LVG) models fit to the observed CO~\textit{J}=2-1 and \textit{J}=3-2 line intensities (Dannerbauer et al.\ 2009; Daddi et al.\ 2015).
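The grid-based search underlying such a fit can be sketched as follows. We stress that the model surfaces below are made-up placeholders, not the actual PDR Toolbox output, and the CO ratio value is an assumed stand-in; only the search logic (doubling the optically thick line fluxes, then minimizing $\chi^2$ over $G$ and $n$) mirrors the procedure described above.

\begin{verbatim}
# Hedged sketch: chi^2 search over a (log G, log n) model-ratio grid.
import numpy as np

logG = np.linspace(1, 5, 81)      # log10(G/G_0) grid
logn = np.linspace(1, 6, 101)     # log10(n/cm^-3) grid
GG, NN = np.meshgrid(logG, logn, indexing="ij")
model_oi_ir = -2.0 - 0.3 * (GG - 2.5) + 0.1 * (NN - 3)  # placeholder surface
model_co_ir = -4.5 - 0.4 * (GG - 2.5) - 0.2 * (NN - 3)  # placeholder surface

# Factor of 2 applied to the observed line ratios for optical thickness;
# the CO/IR value here is a hypothetical stand-in.
obs_oi_ir = np.log10(2 * 1.8e-3)
obs_co_ir = np.log10(2 * 1.0e-5)
err = 0.15                        # assumed uncertainty in dex
chi2 = ((model_oi_ir - obs_oi_ir) / err) ** 2 \
     + ((model_co_ir - obs_co_ir) / err) ** 2
i, j = np.unravel_index(np.argmin(chi2), chi2.shape)
print(f"best fit: G ~ {10**logG[i]:.0f} G_0, n ~ {10**logn[j]:.0f} cm^-3")
\end{verbatim}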
Although these gas densities are low compared to what is typically inferred for submm-luminous galaxies observed in [CII] and [OI]63$\mu$m (Brisbin et al.\ 2015), the radiation field strengths are similar. Further constraints on the ISM conditions within BzK-21000 would be possible with detections of other FIR lines like [CII] or [NII]. \section{Conclusions} We present \textit{Herschel}-PACS spectroscopy of four BzK-selected star-forming galaxies at $z \sim 1.5$. One of our targets, BzK-21000 at $z = 1.5213$, is detected with an [OI]63$\mu$m line luminosity of L$_{\rm [OI]63\,\mu m} = (3.9\pm 0.7)\times 10^9$~L$_{\odot}$. A spectral stacking analysis of the data from the three non-detections reveals a significant signal, implying L$_{\rm [OI]63\,\mu m} = (1.1\pm 0.2)\times10^9$~L$_{\odot}$. The line-to-total infrared luminosity ratio in BzK-21000 is similar to that of a dusty $z\sim 6$ galaxy (Rybak et al.\ 2020), but lower than that typically observed in massive submm galaxies at $z \sim 1 - 3$. Combined with PDR models, the relative strengths of the [OI]63$\mu$m and CO~\textit{J}=2-1 lines compared to the infrared luminosity imply a UV field intensity of $G\sim 320G_0$ and a gas density of $n \sim 1800~$cm$^{-3}$. The gas density is low compared to the average determined for more massive submm galaxies observed in [OI]63$\mu$m at high-redshift (Brisbin et al.\ 2015). Given the observed intensity of the [OI]63$\mu$m line emission in the BzK-selected star-forming galaxies studied here, it is likely that ALMA would be a powerful instrument for studying this line in more distant, main sequence galaxies. Beyond redshifts $z \ga 4$, this line is redshifted into the ALMA band 10 receiver range. The $602 - 720$~GHz band 9 receivers can observe this line in galaxies during the Epoch of Reionization at $z \ga 6$, and have been used to study the [CII]158$\mu$m line in lower redshift galaxies (e.g. Schaerer et al.\ 2015; Zanella et al.\ 2018; Lamarche et al.\ 2018). \section*{Acknowledgements} We thank the anonymous referee for a thorough review of the original manuscript, and for useful feedback. In addition, we thank Kristen Coppin and Mark Swinbank for helpful discussions. MA and this work have been supported by grants ``CONICYT+PCI+REDES 19019'' and ``CONICYT + PCI + INSTITUTO MAX PLANCK DE ASTRONOMIA MPG190030''. D.R. acknowledges support from the National Science Foundation under grant numbers AST-1614213 and AST-1910107. D.R. also acknowledges support from the Alexander von Humboldt Foundation through a Humboldt Research Fellowship for Experienced Researchers. PACS has been developed by a consortium of institutes led by MPE (Germany) and including UVIE (Austria); KU Leuven, CSL, IMEC (Belgium); CEA, LAM (France); MPIA (Germany); INAF-IFSI/OAA/OAP/OAT, LENS, SISSA (Italy); IAC (Spain). This development has been supported by the funding agencies BMVIT (Austria), ESA-PRODEX (Belgium), CEA/CNES (France), DLR (Germany), ASI/INAF (Italy), and CICYT/MCYT (Spain). \section*{Data Availability} The data underlying this article can be accessed from the ESA \textit{Herschel} science archive: \textit{http://archives.esac.esa.int/hsa/whsa/}. The derived data generated in this research will be shared on reasonable request to the corresponding author.
\section{Background on Elliptic Curves} Before stating the objectives of this paper, I'd like to provide the reader with some background on elliptic curves. Let $K$ be a field. For the purposes of our work, an elliptic curve $E/K$ is the projective algebraic curve associated to a Weierstrass equation as in \eqref{WEprojective}, where $a,b \in K$. We assume $\text{char}(K) \neq 2, 3$, and so by affine transformations we may reduce the representation of an elliptic curve $E$ to \begin{equation} E_{aff}: y^2 = x^3 + ax + b \label{WEaffine} \end{equation} which yields the projective homogeneous representation \begin{equation} E_{proj}: Y^2Z = X^3 + aXZ^2 + bZ^3. \label{WEprojective} \end{equation} We assume $4a^3+27b^2 \neq 0$, which guarantees a non-singular elliptic curve. Finally, define $\mathcal{O} = [0, 1, 0]$ to be the ``point at infinity'' on $E$. \begin{defn} Denote by $E(K)$ the set of $K$-rational points on $E$. \end{defn} \subsection{The Group Law on an Elliptic Curve} We will now define the group law $\oplus$ on $E(K)$. Let $P, Q \in E(K)$, where $P$ and $Q$ are not necessarily distinct. By B\'ezout's Theorem, the line $\ell_{proj}$ through $P$ and $Q$ must intersect $E$ exactly three times, counting multiplicity. Therefore, $\ell_{proj}$ must intersect $E$ at precisely one more point. Denote this point as $R$. \begin{defn} Let $-R$ denote the third intersection point of the line through $R$ and $\mathcal{O}$ with $E$, and define $P \oplus Q = -R$. \end{defn} From the discussion so far, $E(K)$ is closed under $\oplus$, $\mathcal{O}$ is the identity element, and $P \oplus -P = \mathcal{O}$, so every element has an inverse. Commutativity follows from the fact that two points define a line. It remains to prove associativity, which follows from a more involved application of B\'ezout's Theorem (see \cite{IK}). Thus we have that $E(K)$ is an abelian group. The Mordell-Weil Theorem asserts that when $K$ is a number field, $E(K)$ is finitely generated and thus is isomorphic to $\mathbb{Z}^r \times E(K)_{tors}$ for some non-negative integer $r$ and finite subgroup $E(K)_{tors}$ (called the ``torsion subgroup''). This subgroup will be central to the counting problem of \S\S 2-4. Since $\mathbb{Q}[i]$ is a number field, we may apply all the theory developed in this section to an elliptic curve $E/\mathbb{Q}[i]$. We now proceed to count elliptic curves $E$ that have a fixed torsion subgroup. \section{Statement of the Main Theorem} The goal of this paper will be to extend the results of Harron and Snowden's paper, which gives the relative frequencies with which isomorphism classes of elliptic curves over $\mathbb{Q}$ with prescribed torsion subgroup $G$ appear up to a given height. We generalize these results to elliptic curves over $\mathbb{Q}[i]$. We begin with the following definitions and notation: \begin{itemize} \item $N(z)$ denotes the norm of $z \in \mathbb{Q}[i] $. \item $X \in \mathbb{R}^+$. \item $p$ denotes a prime of $\mathbb{Z}$, and $\textbf{p}$ denotes a prime of $\mathbb{Z}[i]$. \item A Weierstrass representation $E:y^2 = x^3 + Ax + B$ ($A, B \in \mathbb{Z}[i] $) of an elliptic curve $E/\mathbb{Q}[i]$ is \textit{minimal} if and only if $\gcd(A^3, B^2)$ is not divisible by any 12th power of a non-unit in $\mathbb{Z}[i]$. \item The \textit{height} of the representation $E:y^2 = x^3 + Ax + B$ is defined as $\max(N(A)^3, N(B)^2)$.
\end{itemize} Any elliptic curve $E/\mathbb{Q}[i]$ is isomorphic to a curve represented by a minimal Weierstrass equation \begin{equation} E : y^2 = x^3 + Ax + B \end{equation} because $\mathbb{Z}[i]$ is a unique factorization domain, and so if $\textbf{p}^{12} \mid \gcd(A^3, B^2)$, then $\textbf{p}^4 \mid A$ and $\textbf{p}^6 \mid B$, and by a change of variable $E$ is also isomorphic to $y^2 = x^3 + \textbf{p}^{-4}Ax+\textbf{p}^{-6}B$. Furthermore, note that one may make the change of variable $(x,y) \to (x/i^2, y/i^3)$ on $E$. This yields the elliptic curve $E_1:y^2=x^3+Ax - B$, with $E \simeq E_1$. Thus, for $B = 0$ there will be one minimal Weierstrass representation, and for $B \neq 0$ there will be two isomorphic minimal Weierstrass representations. \begin{defn} Let $N_G(X)$ be the number of isomorphism classes of elliptic curves of height less than $X$ with torsion subgroup isomorphic to $G$. \end{defn} \begin{remark} This quantity must be finite because there are only finitely many elements in $\mathbb{Z}[i]$ that have a fixed integral norm. \end{remark} In \cite{RHAS} it was shown that for each group $G$ in Mazur's theorem, there is an explicit constant $d(G)$ such that elliptic curves over $\mathbb{Q}$ with torsion subgroup $G$ satisfy the asymptotic \begin{equation} \frac{1}{d(G)} = \lim_{X \to \infty} \frac{\text{log }N_G(X)}{\text{log } X}. \end{equation} We first partially generalize the methods used in Harron and Snowden's paper to $\mathbb{Q}[i]$. The precise possibilities for new torsion subgroups over $\mathbb{Q}[i]$ are given by the following theorem: \begin{theorem} (Kenku, Momose, Kamienny) \cite{KM}\cite{KUMO} Let $E$ be an elliptic curve over $ \mathbb{Q}[i]$. Then $E(\mathbb{Q}[i])_{tors}$ must be isomorphic to one of the following groups: \begin{itemize} \item $\mathbb{Z}/M\mathbb{Z}, 1 \leq M \leq 16$ or $ M = 18$ \item $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2M\mathbb{Z}, 1 \leq M \leq 6 $ \item $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$. \end{itemize} \end{theorem} Finally, we note that for every group $G$ in Theorem 2.3 (except for $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, which is dealt with in \S 5), there exists a universal family of elliptic curves $\mathcal{E}$ equipped with a subgroup isomorphic to $G$ \cite{FPR}. The case of $G = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ is exceptional and will be dealt with in \S 5. We may parameterize the coefficients of the Weierstrass forms with two functions. For the groups in Theorem 2.2, these functions are polynomials in one variable or two variables. For a fixed $G$ we have the universal family $\mathcal{E}_t:y^2 = x^3+f(t)x+g(t)$, or, for the two-variable case, $\mathcal{E}_{s,t}:y^2 = x^3+f(s, t)x+g(s,t)$ (where the points $(s,t) \in \mathbb{Q}[i]^2$ lie on a curve $C \hookrightarrow \mathbb{P}^2$), such that $G \subseteq \mathcal{E}_t(\mathbb{Q}[i])_{tors}$ or $G \subseteq \mathcal{E}_{s,t}(\mathbb{Q}[i])_{tors}$. Any $E$ such that $G \subseteq E(\mathbb{Q}[i])_{tors}$ is isomorphic to some curve of these two forms. Summarizing the results of \cite{FPR}, the three cases that will be dealt with in \S 4 are: \begin{itemize} \item Case 1: $G = \mathbb{Z}/M\mathbb{Z}, \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2K\mathbb{Z}, \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$, $M \in [4,10]$, $M=12$, $2 \leq K \leq 4$, is parameterized by $\mathcal{E}_t$ and $t \in \mathbb{Q}[i]$.
\item Case 2: $G = \mathbb{Z}/M\mathbb{Z}$, $M = 13, 16, 18$, is parameterized by $\mathcal{E}_{s,t}$ with $(s,t) \in C(\mathbb{Q}[i])$, where $C$ is a plane curve of genus $>1$. \item Case 3: $G = \mathbb{Z}/M\mathbb{Z}, \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/10\mathbb{Z}, \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/12\mathbb{Z} $, $M = 11, 14, 15 $, is parameterized by $\mathcal{E}_{s, t}$ with $(s,t) \in C(\mathbb{Q}[i])$, where $C$ has genus 1. \end{itemize} Our main theorem is the following, generalizing \cite[Theorem~1.1]{RHAS}: \begin{theorem} For torsion subgroups $G \neq \mathbb{Z}/M\mathbb{Z}$ ($M = 1, 2, 3$), we obtain the values of $d(G)$ given by the table below. \begin{table}[H] \centering \begin{tabular}{ | l | l | l | l | l | l |} \hline $G$ & $ d(G) $ & $G$ & $d(G)$ & $G$ & $d(G)$ \\ \hline $\mathbb{Z}/4\mathbb{Z}$ & 4 & $\mathbb{Z}/11\mathbb{Z}$ & $+\infty$ & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z} $ & 3 \\ \hline $\mathbb{Z}/5\mathbb{Z} $ & 6 & $\mathbb{Z}/12\mathbb{Z} $ & 24 & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z} $ & 6 \\ \hline $\mathbb{Z}/6\mathbb{Z}$ & 6 & $\mathbb{Z}/13\mathbb{Z}$ & $+\infty$ & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/6\mathbb{Z} $ & 12 \\ \hline $\mathbb{Z}/7\mathbb{Z}$ & 12 & $\mathbb{Z}/14\mathbb{Z}$ & $+\infty$ & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/8\mathbb{Z} $ & 24 \\ \hline $\mathbb{Z}/8\mathbb{Z}$ & 12 & $\mathbb{Z}/15\mathbb{Z}$ & $+\infty$ & $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/10\mathbb{Z} $ & $+\infty$ \\ \hline $\mathbb{Z}/9\mathbb{Z}$ & 18 & $\mathbb{Z}/16\mathbb{Z}$ & $+\infty$ &$\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/12\mathbb{Z} $ & $+\infty$ \\ \hline $\mathbb{Z}/10\mathbb{Z}$ & 18 & $\mathbb{Z}/18\mathbb{Z} $ & $+\infty$ & $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z} $ & 12 \\ \hline \end{tabular} \end{table} \end{theorem} \section{Reduction of the Problem} We extend the proofs used in \S \S 1-2 of \cite{RHAS} to $\mathbb{Q}[i]$, providing slight modifications. The structure of this section will be to trace through these first two sections. \subsection{Reduction of the Problem} Motivated by the discussion in \S 2, we have the following definition: \begin{defn} $N^2_G(X) = \#\{(A,B) \in \mathbb{Z}[i]^2: y^2 = x^3+Ax+B \text{ is a minimal Weierstrass equation of height less than } X \text{ and } E(\mathbb{Q}[i])_{tors} \cong G \}$. \end{defn} We shall compute the quantity $N^2_G(X)$, which roughly counts the number of isomorphism classes of curves up to height $X$ with torsion subgroup isomorphic to $G$. For the groups treated in \S 4.1, we have that $N_G(X) = \frac{1}{2}N^2_G(X) + O(1)$; see \S 4.1 for a proof of this. In \S\S 4.2-4.3, we in fact do not use the reductions provided in this section to treat the groups in Case 2 and Case 3. Then, for the groups treated in \S 4.1, replacing $N_G(X)$ with $\frac{1}{2} N^2_G(X)+O(1)$ does not change the value of the limit: \begin{equation} \lim_{X \to \infty} \frac{\text{log }N_G(X)}{\text{log } X} = \lim_{X \to \infty} \frac{\text{log}\left(\frac{1}{2}N^2_G(X)+O(1)\right)}{\text{log } X} = \lim_{X \to \infty} \frac{\text{log }N^2_G(X)-\text{log }2 + O(1)}{\text{log } X} = \lim_{X \to \infty} \frac{\text{log }N^2_G(X)}{\text{log } X}. \end{equation} As in \cite{RHAS}, let $N'_G(X)$ be the number of elliptic curves up to height $X$ with a torsion subgroup containing $G$; a small computational illustration of the height and minimality conditions entering these counts appears below.
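The following brute-force sketch (purely illustrative; far too slow for the actual counts, and with the torsion condition omitted) enumerates pairs $(A,B) \in \mathbb{Z}[i]^2$ satisfying the height, nonsingularity, and minimality conditions above for a tiny height bound. All function names and the small search bound are our own assumptions.

\begin{verbatim}
# Gaussian integers as Python complex numbers with integer parts.
from itertools import product

def norm(z):                      # N(a+bi) = a^2 + b^2
    return int(z.real) ** 2 + int(z.imag) ** 2

def divides(d, z):                # exact divisibility in Z[i]
    q = z / d
    qr, qi = round(q.real), round(q.imag)
    return complex(qr, qi) * d == z

def is_minimal(A, B):
    """gcd(A^3,B^2) has a 12th-power divisor p^12 iff p^4|A and p^6|B;
    we test candidates d in a small box (adequate for tiny X)."""
    for x, y in product(range(-3, 4), repeat=2):
        d = complex(x, y)
        if norm(d) >= 2 and divides(d ** 4, A) and divides(d ** 6, B):
            return False
    return True

def count_pairs(X):
    """Pairs (A,B): height max(N(A)^3,N(B)^2) < X, nonsingular, minimal."""
    count = 0
    rA, rB = int(X ** (1 / 6)) + 1, int(X ** (1 / 4)) + 1
    for a, b in product(range(-rA, rA + 1), repeat=2):
        for c, d in product(range(-rB, rB + 1), repeat=2):
            A, B = complex(a, b), complex(c, d)
            if norm(A) ** 3 < X and norm(B) ** 2 < X:
                if 4 * A ** 3 + 27 * B ** 2 != 0 and is_minimal(A, B):
                    count += 1
    return count

print(count_pairs(100))
\end{verbatim}

Note that each curve with $B \neq 0$ is counted twice here, as $(A,B)$ and $(A,-B)$, matching the factor of $\frac{1}{2}$ in the reduction above.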
We similarly define the quantity $N^{2'}_G(X)$: \begin{defn} $N^{2'}_G(X) = \#\{(A,B) \in \mathbb{Z}[i]^2: y^2 = x^3+Ax+B \text{ is a minimal Weierstrass equation of height less than } X \text{ and } G \subseteq E(\mathbb{Q}[i])_{tors} \}$. \end{defn} We now prove the following theorem. \begin{theorem} For the groups $G$ and constants $d(G)$ listed in Theorem 2.3, there exist positive constants $K_1$ and $K_2$ such that $K_1X^{1/d(G)} \leq N'_G(X) \leq K_2X^{1/d(G)}$. \end{theorem} \begin{lemma} Theorem 3.3 implies Theorem 2.3. \end{lemma} \begin{proof} We have the following bounds: \begin{equation} N_G^{2'}(X) - \sum_{G \subsetneq H} N_H^{2'}(X) \leq N_G^2(X) \leq N_G^{2'}(X). \end{equation} We have that $d(G) < d(H)$ when $G \subsetneq H$, so $N_G^2(X)/N_G^{2'}(X) \to 1$ as $X \to \infty$. The error term $O(1)$ becomes negligible as $X \to \infty$, so we have $N_G(X)/N_G'(X) \to 1$ as $X \to \infty$. Theorem 3.3 then implies that $\frac{1}{d(G)} = \lim_{X \to \infty} \frac{\text{log }N_G(X)}{\text{log } X} $, proving Theorem 2.3. \end{proof} \section{Generalization of Harron and Snowden's Paper} By (3.1), to prove Theorem 2.3 we may instead analyze the quantity $N_G^2(X)$. \subsection{Case 1} See page 3 above Theorem 2.3 for the list of groups in this case. We begin by proving the assertions of \S 3.1: \begin{lemma} $N^{2'}_G(X) = N^2_G(X) + O(1)$, where the $O(1)$ term is at most $k \deg(g)$ for some constant $k$. \end{lemma} \begin{proof} We have $B = 0$ if and only if $g(t) = 0$. There are at most $\deg(g)$ such $t$. Finally, for each such $t$ there are only finitely many $u$ such that $y^2 = x^3+u^4f(t)x$ is minimal. \end{proof} Having reduced to computing $N^{2'}_G(X)$, we compute it in Proposition 4.2. We show that \cite[Proposition 2.1]{RHAS} continues to hold in $\mathbb{Z}[i]$. The following proposition implies Theorem 3.3, and so by Lemma 3.4 it implies Theorem 2.3 for the groups that fall into this case: \begin{prop} Let $ f, g \in \mathbb{Q}[t] $ be coprime polynomials of degrees $ r $ and $s$. Assume at least one of $r$ or $s$ is positive. Write \begin{center} $\max(\frac{r}{4}, \frac{s}{6}) = \frac{n}{m}$ \end{center} with $n$ and $m$ coprime. Assume $n=1$ or $m=1$. Let $S(X)$ be the set of pairs $(A,B) \in \mathbb{Z}[i]^2$ satisfying the following conditions: \begin{itemize} \item $4A^3+27B^2 \neq 0$. \item $\text{gcd}(A^3, B^2)$ is not divisible by any 12th power. \item $N(A) < X^{1/3}$ and $N(B) < X^{1/2}$. \item There exist $ u, t \in \mathbb{Q}[i]$ such that $A=u^4f(t)$ and $B=u^6g(t)$. \end{itemize} Then, there exist positive real constants $K_1$ and $K_2$ such that $K_1X^{(m+1)/12n} \leq \left| S(X) \right| \leq K_2X^{(m+1)/12n}$. Since $\left| S(X) \right| = N_G^{2'}(X)$, this proves Theorem 3.3. \end{prop} We will now proceed to prove Proposition 4.2, starting with the upper bound.
We have the following data for the groups in Case 1: \begin{table}[H] \centering \begin{tabular}{ | l | l | l | l | l | l | } \hline $G$ & $r$ & $s$ & $n$ & $m$ & $12n/(m+1)$ \\ \hline $\mathbb{Z}/4\mathbb{Z}$ & $2$ & $3$ & $1$ & $2$ & $4$ \\ \hline $\mathbb{Z}/5\mathbb{Z}$ & $4$ & $6$ & $1$ & $1$ & $6$ \\ \hline $\mathbb{Z}/6\mathbb{Z}$ & $4$ & $6$ & $1$ & $1$ & $6$ \\ \hline $\mathbb{Z}/7\mathbb{Z}$ & $8$ & $12$ & $2$ & $1$ & $12$ \\ \hline $\mathbb{Z}/8\mathbb{Z}$ & $8$ & $12$ & $2$ & $1$ & $12$ \\ \hline $\mathbb{Z}/9\mathbb{Z}$ & $12$ & $18$ & $3$ & $1$ & $18$ \\ \hline $\mathbb{Z}/10\mathbb{Z}$ & $12$ & $18$ & $3$ & $1$ & $18$ \\ \hline $\mathbb{Z}/12\mathbb{Z}$ & $16$ & $24$ & $4$ & $1$ & $24$ \\ \hline $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$ & $4$ & $6$ & $1$ & $1$ & $6$ \\ \hline $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/6\mathbb{Z}$ & $8$ & $12$ & $2$ & $1$ & $12$ \\ \hline $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/8\mathbb{Z}$ & $16$ & $24$ & $4$ & $1$ & $24$ \\ \hline $\mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$ & $8$ & $12$ & $2$ & $1$ & $12$ \\ \hline \end{tabular} \caption{Data for the universal elliptic curves.} \label{table: 1} \end{table} The data were computed using the table \cite[\S 1.4, Table 2]{RHAS} and the computations of the degrees seen in Appendix A, derived from \cite{RHAS}. \subsubsection{The Upper Bound} Continue to let $f, g \in \mathbb{Q} [t]$, where $f$ and $g$ are coprime. Let $\overline{\mathbb{Q}}$ be a fixed algebraic closure of $\mathbb{Q}$, and extend $\left|\cdot\right|_{\textbf{p}}$ to $\overline{\mathbb{Q}}$. Let $\{\alpha_i\}$ be the roots of $f$ and $\{\beta_j\}$ the roots of $g$ in $\overline{\mathbb{Q}}$. We now have the following definition: \begin{defn} Let $S_1(X)$ denote the set of pairs $(u,t) \in \mathbb{Q}[i]^2$ such that $(A,B) = (u^4f(t), u^6g(t)) \in S(X)$. \end{defn} \begin{lemma} Let $f, g \in \mathbb{Q}[t] $ be two coprime polynomials. For each prime $\textbf{p}$ of $\mathbb{Z}[i]$ (with $N(\textbf{p}) \leq \infty $), there exists a constant $c_{\textbf{p}}>0$ such that for all $t\in\mathbb{Q}[i]$, \begin{equation} \text{max}(\left|f(t)\right|_{\textbf{p}}, \left|g(t)\right|_{\textbf{p}}) \geq c_{\textbf{p}}. \end{equation} \noindent If $N(\textbf{p})$ is sufficiently large, one can take $c_{\textbf{p}}=1$. \end{lemma} \begin{remark} We require the case $N(\textbf{p}) = \infty$ for the proof of Lemma 4.7; it corresponds to the standard (Archimedean) absolute value. \end{remark} \begin{proof} We first analyze the roots of $f$ and $g$. Note that $\alpha_i \neq \beta_j$ for all $i$ and $j$, since $f$ and $g$ are coprime. Let $\delta = \min_{i,j}(\left|\alpha_i - \beta_j\right|_{\textbf{p}})$. Let $\epsilon>0$ be such that $\left|f(t)\right|_{\textbf{p}} < \epsilon$ implies $\left|t-\alpha_i\right|_{\textbf{p}} < \delta/2$ for some $i$, and $\left| g(t) \right|_{\textbf{p}} < \epsilon $ implies $\left|t-\beta_j\right|_{\textbf{p}} < \delta/2$ for some $j$. To see why such an $\epsilon$ exists, consider the factorizations $f(t) = a\prod_{i = 1}^{r}(t-\alpha_i)$ and $g(t) = b\prod_{j = 1}^{s}(t-\beta_j)$, where $a,b \in \mathbb{Q}$ are the leading coefficients. If $\left|f(t)\right|_{\textbf{p}}, \left| g(t) \right|_{\textbf{p}} < \epsilon$, there exist indices $i_0$ and $j_0$ such that $\left|t - \alpha_{i_0} \right|_{\textbf{p}} \leq (\epsilon/\left|a\right|_{\textbf{p}})^{1/r}$ and $\left| t - \beta_{j_0} \right|_{\textbf{p}} \leq (\epsilon/\left|b\right|_{\textbf{p}})^{1/s}$. We may choose $\epsilon$ such that $(\epsilon/\left|a\right|_{\textbf{p}})^{1/r}, (\epsilon/\left|b\right|_{\textbf{p}})^{1/s} < \delta/2$. We now argue by contradiction.
If $\left|f(t)\right|_{\textbf{p}} < \epsilon$ and $\left|g(t)\right|_{\textbf{p}} < \epsilon$, then $\left|t-\alpha_i\right|_{\textbf{p}} < \delta/2$ and $\left|t-\beta_j\right|_{\textbf{p}} < \delta/2$ for some $i$ and $j$. However, this implies that $\left|\alpha_i-\beta_j\right|_{\textbf{p}} < \delta$ by the triangle inequality, a contradiction, as $\delta $ is the minimal value of $\left|\alpha_i-\beta_j\right|_{\textbf{p}}$ over all $i$ and $j$. We must therefore have, for all $t$, that either $\left|f(t)\right|_{\textbf{p}} \geq \epsilon$ or $\left|g(t)\right|_{\textbf{p}} \geq \epsilon$, so we can take $c_{\textbf{p}}=\epsilon$. Now let $p$ be an integer prime with sufficiently large norm such that: (1) the coefficients of $f$ and $g$ are in $\mathbb{Z}_{p}$; (2) the leading coefficients of $f$ and $g$ are elements of $\mathbb{Z}_{p}^{\times}$; and (3) $\alpha_i-\beta_j$ are elements of $\overline{\mathbb{Z}_{p}}^{\times}$, for all $i$ and $j$. Condition 3 gives us $\delta = 1$, since each $\alpha_i-\beta_j$ must then satisfy $\left|\alpha_i-\beta_j\right|_{p} = p^0 = 1$. If $\left|f(t)\right|_{p} < 1$ and $\left|g(t)\right|_{p} < 1$, then $\left|t-\alpha_i\right|_{p} < 1$ for some $i$, and $\left|t-\beta_j\right|_{p} < 1$ for some $j$. For this pair $(i,j)$, this implies $\left|\alpha_i-\beta_j\right|_{p} < 1 $, another contradiction, this time by the non-Archimedean (ultrametric) triangle inequality satisfied by the $p$-adic norm, namely $ \left|x+y\right|_{p} \leq \text{max}(\left|x\right|_{p}, \left|y\right|_{p})$. Thus, we can take $c_{p} = 1$ for all primes $p$ with sufficiently large norm. \end{proof} We now prove a variant of \cite[Lemma 2.3]{RHAS}. \begin{lemma} For each Gaussian prime $\textbf{p}$ there exists a constant $C_{\textbf{p}}$ with the following property. Suppose $(u,t)\in S_1(X)$. Then \begin{center} $ \text{val}_{\textbf{p}}(u) = \epsilon + \begin{cases} \lceil - \frac{n}{m} \text{val}_{\textbf{p}}(t)\rceil & \text{$\text{val}_{\textbf{p}}(t) < 0$} \\ 0 & \text{$\text{val}_{\textbf{p}}(t) \geq 0$} \end{cases} $ \end{center} where $\left|\epsilon\right| \leq C_{\textbf{p}}$. Furthermore, one can take $C_{\textbf{p}}=0$ for $N(\textbf{p}) \gg 0$. \end{lemma} \begin{proof} Let $B(x,r)$ be the open ball of radius $r$ centered at $x$. Fix an arbitrarily small constant $\delta > 0$ such that $B(\alpha_i, \delta) \cap B(\beta_j, \delta)$ is empty for all $i$ and $j$, and such that each $B(\alpha_i, \delta)$ and $B(\beta_j, \delta)$ contains at most one root of $f$ and $g$, respectively. Furthermore, suppose this $\delta$ satisfies that $t \in B(\alpha_i, \delta)$ implies $\text{min}(\lfloor \frac{1}{4}\text{val}_{\textbf{p}}(f(t)) \rfloor, \lfloor \frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rfloor) = \lfloor \frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rfloor$, and similarly for $t \in B(\beta_j, \delta)$. Before splitting into cases we make some general deductions. Suppose $ (u,t) \in S_1(X)$, and let $\textbf{p}$ be a prime. Since $A$ and $B$ are integral, we have $4 \text{val}_{\textbf{p}}(u) + \text{val}_{\textbf{p}}(f(t)) \geq 0 $ and $ 6 \text{val}_{\textbf{p}}(u) + \text{val}_{\textbf{p}}(g(t)) \geq 0$. Furthermore, $\text{val}_{\textbf{p}}(u) $ must be minimal subject to these inequalities, or else $\textbf{p}^{12}$ would divide $ \text{gcd}(A^3, B^2) $, a contradiction.
We can thus write the following: \begin{equation} \text{val}_{\textbf{p}}(u) = \text{max}(\lceil -\frac{1}{4}\text{val}_{\textbf{p}}(f(t))\rceil, \lceil -\frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rceil). \end{equation} By taking the negative of both sides of (4.2) we have: \begin{equation} -\text{val}_{\textbf{p}}(u) = \text{min}(\lfloor \frac{1}{4}\text{val}_{\textbf{p}}(f(t)) \rfloor, \lfloor \frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rfloor). \end{equation} We split into the following cases: \begin{enumerate} \item \textbf{Case I:} $t \notin B(\alpha_i, \delta), B(\beta_j, \delta)$ for any $i$ and $j$. \item \textbf{Case II:} $t$ lies in exactly one of the balls. \end{enumerate} Assume now that $\text{val}_{\textbf{p}}(t) < 0$. \textbf{Case I:} We prove there exists a positive constant $K_1$ such that $\left|\text{val}_{\textbf{p}}(f(t)) - r \text{val}_{\textbf{p}}(t)\right| = \left| \text{val}_{\textbf{p}}(f(t)/t^r) \right| < K_1$ and $\left|\text{val}_{\textbf{p}}(g(t)) - s \text{val}_{\textbf{p}}(t)\right| = \left| \text{val}_{\textbf{p}}(g(t)/t^s) \right| < K_1$ for all such $t$. As stated in Proposition 4.2, the degrees of $f$ and $g$ are $r$ and $s$, respectively. Then, \begin{equation} \frac{f(t)}{t^r} = \sum_{k = 0}^{r} c_kt^{-k} \end{equation} and \begin{equation} \frac{g(t)}{t^s} = \sum_{j = 0}^{s} d_jt^{-j}. \end{equation} \noindent Note that because $t \notin B(\alpha_i, \delta), B(\beta_j, \delta)$ for any $i$ and $j$, for $\text{val}_{\textbf{p}}(t) \leq v$ with $v$ sufficiently negative, we have that $\text{val}_{\textbf{p}}(f(t)/t^r) = \text{val}_{\textbf{p}}(c_0) $ and $\text{val}_{\textbf{p}}(g(t)/t^s) = \text{val}_{\textbf{p}}(d_0)$. For the case where $v < \text{val}_{\textbf{p}}(t) < 0$, the set of such $t$ is compact, and since $\text{val}_{\textbf{p}}(f(t)/t^r)$ and $\text{val}_{\textbf{p}}(g(t)/t^s)$ are both continuous (because $f(t), g(t) \neq 0$ on $\{t \in \mathbb{Q}[i]_{\textbf{p}}: v < \text{val}_{\textbf{p}}(t) < 0 \}$), they must be bounded there. Choose $M$ such that $ \left| \text{val}_{\textbf{p}}(f(t)/t^r) \right|, \left| \text{val}_{\textbf{p}}(g(t)/t^s)\right| \leq M$. We may then take $K_1 = \max(M,\left|\text{val}_{\textbf{p}}(c_0)\right|,\left|\text{val}_{\textbf{p}}(d_0)\right|)$. For $\text{val}_{\textbf{p}}(t) < 0$, we then have \begin{center} $\text{val}_{\textbf{p}}(u) = \epsilon + \text{max}(\lceil -\frac{r}{4} \text{val}_{\textbf{p}}(t) \rceil, \lceil -\frac{s}{6} \text{val}_{\textbf{p}}(t) \rceil)$, \end{center} where $\left|\epsilon\right| \leq K_2 $ for some constant $K_2$ (e.g., $K_2 = 1+\frac{n}{m}K_1$). Since $\text{val}_{\textbf{p}}(t) < 0$, we have \begin{center} $\text{max}(\lceil -\frac{r}{4} \text{val}_{\textbf{p}}(t) \rceil, \lceil -\frac{s}{6} \text{val}_{\textbf{p}}(t) \rceil)=\lceil-\frac{n}{m}\text{val}_{\textbf{p}}(t)\rceil$. \end{center} \textbf{Case II:} Without loss of generality, assume $t \in B(\alpha_i, \delta)$. Here, $-\text{val}_{\textbf{p}}(u) = \lfloor \frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rfloor$. For all groups $G$ in Table 1, $\frac{r}{4} = \frac{s}{6}$, and so $-\text{val}_{\textbf{p}}(u) = \lfloor \frac{1}{6}\text{val}_{\textbf{p}}(g(t)) \rfloor = \epsilon + \lfloor \frac{n}{m}\text{val}_{\textbf{p}}(t) \rfloor$ where $\left| \epsilon \right| \leq C_2$ for some positive constant $C_2$. Now we consider $\text{val}_{\textbf{p}}(t) \geq 0 $.
Let $K_3$ be a constant such that $\text{min}(\text{val}_{\textbf{p}}(f(t)), \text{val}_{\textbf{p}}(g(t))) \leq K_3 $ for all such $t$, which exists by Lemma 4.4. We may use (4.3) to conclude that $-\text{val}_{\textbf{p}}(u) \leq K_4$ for an appropriate $K_4$, since (4.3) takes a minimum and $K_3$ bounds one of its arguments. Let $K_5$ be a constant such that $\text{val}_{\textbf{p}}(f(t)) \geq K_5 $ and $\text{val}_{\textbf{p}}(g(t)) \geq K_5 $ for all $t$ with $\text{val}_{\textbf{p}}(t) \geq 0 $. Using (4.3) again, $K_5$ yields a lower bound $-\text{val}_{\textbf{p}}(u) \geq K_6$ for an appropriate $K_6$. We conclude that $\left|\text{val}_{\textbf{p}}(u)\right| \leq K_7$ in this case, with $K_7 = \text{max}(\left|K_4\right|, \left|K_6\right|)$. Combined with the proof of the case $\text{val}_{\textbf{p}}(t) < 0$, we have proved the formula in the statement of the lemma, as we can take $C_{\textbf{p}}=\text{max}(C_2, K_2, K_7)$. Now suppose $\textbf{p}$ is a prime of sufficiently large norm so that the coefficients of $f$ and $g$ lie in $\mathbb{Z}[i]_{\textbf{p}}$, the leading coefficients of $f$ and $g$ are elements of $\mathbb{Z}[i]_{\textbf{p}}^{\times}$, and $c_{\textbf{p}}=1$ in Lemma 4.4. For $\text{val}_{\textbf{p}}(t) < 0$ we have $\text{val}_{\textbf{p}}(f(t)) = r\,\text{val}_{\textbf{p}}(t)$ and $\text{val}_{\textbf{p}}(g(t)) = s\,\text{val}_{\textbf{p}}(t)$, which shows that \begin{center} $\text{val}_{\textbf{p}}(u) = \text{max}(\lceil-\frac{r}{4} \text{val}_{\textbf{p}}(t) \rceil, \lceil-\frac{s}{6} \text{val}_{\textbf{p}}(t) \rceil) = \lceil-\frac{n}{m} \text{val}_{\textbf{p}}(t) \rceil. $ \end{center} For $\text{val}_{\textbf{p}}(t) \geq 0 $, we have $\text{val}_{\textbf{p}}(f(t)) \geq 0$ and $\text{val}_{\textbf{p}}(g(t)) \geq 0$, with at least one inequality being an equality (otherwise we would contradict Lemma 4.4 with $c_{\textbf{p}}=1$). Thus, $\text{val}_{\textbf{p}}(u) = 0$ and we can take $C_{\textbf{p}}=0$ in this case. \end{proof} Lemma 2.4 of \cite{RHAS}, together with its proof, carries over directly: like $\mathbb{Z}$, the ring $\mathbb{Z}[i]$ has finitely many units. \begin{lemma} There exists a finite set $Q$ of non-zero elements of $\mathbb{Q}[i]$ with the following property. Suppose $(u,t) \in S_1(X)$. Then we can write $t=a/b^m$, where $a$ and $b$ are Gaussian integers such that gcd$(a,b^m)$ is not divisible by any $m$th power of a non-unit, and $u=qb^n$, with $q \in Q $. \end{lemma} \begin{proof} Because $\mathbb{Z}[i]$ has unique factorization, we have a unique expression $t=a/b^m$ up to multiplication by a unit, with gcd$(a,b^m)$ not divisible by any $m$th power of a non-unit and $a,b \in \mathbb{Z}[i]$. We split into two cases. First, assume that $\textbf{p} \mid b$. Then $-\text{val}_{\textbf{p}}(t) = m\,\text{val}_{\textbf{p}}(b)-k$, where $k \in \mathbb{Z}$ and $0 \leq k < m $. If $m=1$ then $0 \leq k < 1 $, and since $k$ is integral, $k=0$. If $n=1$ then $ 0 \leq k < m $ gives $ 0 \leq nk < m$, so $0 \leq \frac{n}{m}k < 1 $. In both cases, $ \lceil - \frac{nk}{m}\rceil = 0$. From Lemma 4.5, we have that $\text{val}_{\textbf{p}}(u) = \epsilon + \lceil - \frac{n}{m} \text{val}_{\textbf{p}}(t) \rceil = \epsilon + n \text{val}_{\textbf{p}}(b) $, with $\left| \epsilon \right| \leq C_{\textbf{p}}$; hence $\left|\text{val}_{\textbf{p}}(u/b^n)\right| \leq C_{\textbf{p}}$. Second, if $ \textbf{p} \nmid b$ then $\text{val}_{\textbf{p}}(t) \geq 0 $, and so $ \left|\text{val}_{\textbf{p}}(u) \right| \leq C_{\textbf{p}}$. Since $\text{val}_{\textbf{p}}(b) = 0$ in this case, we again find $\left|\text{val}_{\textbf{p}}(u/b^n) \right| \leq C_{\textbf{p}}$. Note that from Lemma 4.5, in each case we can take $C_{\textbf{p}} = 0$ for all $N(\textbf{p}) \gg 0$.
This implies that for primes $\textbf{p}$ with large enough norm, $\text{val}_{\textbf{p}}(u/b^n) = \text{val}_{\textbf{p}}(q) = 0 $. Taking units into account, there are 4 representations for a given value of $q$, since there are 4 units in $\mathbb{Z}[i]$. This implies that there are finitely many possibilities for $u/b^n$. \end{proof} Lemma 2.5 of \cite{RHAS} may also be used directly; in its proof, the maximum over $q \in Q$ is taken with respect to the norm. \begin{lemma} Suppose $(u,t) \in S_1(X)$ and write $t=a/b^m$ and $u=qb^n$ as in Lemma 4.6. Then $N(a) \leq C_1X^{m/12n}$ and $N(b) \leq C_2X^{1/12n}$, for some constants $C_1$ and $C_2$. \end{lemma} \begin{proof} Write $t=a/b^m$ and $u=qb^n$ as above. The inequality max$(N(A)^3, N(B)^2) < X $ gives us \begin{center} $N(u) \cdot \text{max}(N(f(t))^{1/4}, N(g(t))^{1/6}) < X^{1/12}\text{.}$ \end{center} Let $K_1 > 0$ be a constant such that $\text{max}(N(f(t))^{1/4}, N(g(t))^{1/6}) \geq K_1$ for all $t$, which exists by Lemma 4.4; for instance, a crude bound in terms of $c_{\infty}$ suffices, where $c_{\textbf{p}}$ is as in Lemma 4.4. Then $N(u) \leq K_1^{-1}X^{1/12}$, and so \begin{center} $N(b) \leq K_2X^{1/12n}$, \end{center} with $K_2 = K_1^{-1/n} \max_{q \in Q} N(q)^{-1/n}$; the maximum exists because $Q$ is finite. Now suppose that $N(t) \geq 1 $. Let $ K_3 > 0 $ be a constant so that $K_3^4N(t)^r \leq N(f(t)) $ and $ K_3^6N(t)^s \leq N(g(t))$ hold for all such $t$. This exists since $N(f(t)/t^r)$ and $N(g(t)/t^s)$ are bounded below in terms of the leading coefficients of $f$ and $g$, respectively. By algebraic manipulation, we have \begin{center} $X^{1/12} > N(u)\cdot \text{max}(N(f(t))^{1/4}, N(g(t))^{1/6}) \geq K_3N(u) \cdot \text{max}(N(t)^{r/4}, N(t)^{s/6}) = K_3N(u)N(t)^{n/m} = K_3N(q)N(a)^{n/m}$. \end{center} We therefore find $N(a) < K_4X^{m/12n}$ with $K_4=\text{max}_{q \in Q}\left((K_3N(q))^{-m/n}\right)$. Now suppose $N(t) < 1$. Then $N(a) < N(b^m) \leq K_2^mX^{m/12n}$. Thus, in all cases, we have \begin{center} $N(a) \leq K_5X^{m/12n}$, \end{center} with $K_5=\text{max}(K_4, K_2^m)$; this proves the lemma with $C_1 = K_5$ and $C_2 = K_2$. \end{proof} \begin{corollary} $\left| N_G(X) \right| \leq K_1X^{(m+1)/12n}$ for some positive constant $K_1$. \end{corollary} \begin{proof} We have that $\left| S_1(X) \right| \leq K_1X^{(m+1)/12n}$ for a positive constant $K_1$; this follows from Lemma 4.7 together with the fact that the number of Gaussian integers of norm at most $Y$ is $O(Y)$. This implies the same bound for $S(X)$ (every pair $(A,B)$ has an associated pair $(u,t)$ where $A = u^4f(t)$ and $B = u^6g(t)$), and, by previous deductions, for $N_G(X)$. \end{proof} \subsubsection{The Lower Bound} For a pair $(a,b) \in \mathbb{Z}[i]^2$, set $u = b^n$, $t = a/b^m$, and further set $A=u^4f(t)$ and $B = u^6g(t)$. Let $S_2(X)$ be the set of pairs $(a,b) \in \mathbb{Z}[i]^2$ satisfying the following: \begin{itemize} \item $a$ and $b$ are coprime. \item $N(a) < \kappa X^{m/12n}$ and $N(b) < \kappa X^{1/12n}$. \item $4A^3+27B^2 \neq 0$. \end{itemize} Fix $\kappa > 0 $ such that if $(a,b) \in S_2(X)$, then $N(A) < X^{1/3} $ and $N(B) < X^{1/2}$. \begin{remark} Such a $\kappa$ exists. Since $4n \geq rm$ and $6n \geq sm$, the quantities $A = b^{4n}f(a/b^m)$ and $B = b^{6n}g(a/b^m)$ are polynomial expressions in $a$ and $b$; under the stated bounds on $N(a)$ and $N(b)$, each monomial of $A$ (resp.~$B$) has norm at most a constant multiple of $\kappa^{c}X^{1/3}$ (resp.~$\kappa^{c'}X^{1/2}$) for suitable exponents $c, c' > 0$.
Hence, we may choose $\kappa$ small enough that the above conditions are satisfied. \end{remark} The statement of Lemma 2.6 of \cite{RHAS} is identical over $\mathbb{Q}[i]$, and replacing the condition $p \gg 0 $ with $N(\textbf{p}) \gg 0$ yields an appropriate proof. \begin{lemma} There exists a non-zero Gaussian integer $D$ with the following property: if $(a,b) \in S_2(X)$, then gcd$(A^3, B^2)$ divides $D$. \end{lemma} \begin{proof} We find constants $e_{\textbf{p}}$ such that $\text{val}_{\textbf{p}}(\text{gcd}(A^3, B^2)) \leq e_{\textbf{p}}$ and $e_{\textbf{p}} = 0$ for $N(\textbf{p}) \gg 0 $. This would mean we can take $D= \prod_{\textbf{p}}\textbf{p}^{e_{\textbf{p}}}$, where the product is over all primes $\textbf{p}$ in $\mathbb{Z}[i]$. By construction, $D$ satisfies the desired conditions. We first consider the case where $t$ satisfies Case I of Lemma 4.5. Suppose $(a,b) \in S_2(X) $, and let $\textbf{p}$ be a prime. Let $K_1$ be a constant such that \\ $\left|3\text{val}_{\textbf{p}}(f(t)) - 3r \text{val}_{\textbf{p}}(t) \right| \leq K_1$ and $\left|2\text{val}_{\textbf{p}}(g(t)) - 2s \text{val}_{\textbf{p}}(t) \right| \leq K_1$ for $t \in \mathbb{Q}[i] $ with $\text{val}_{\textbf{p}}(t) < 0 $, which exists by the discussion in the proof of Lemma 4.5. Note that for $N(\textbf{p}) \gg 0$, we can take $K_1=0$. Suppose that $\text{val}_{\textbf{p}}(b) = k > 0$, so that $\text{val}_{\textbf{p}}(t)=-mk$ and $\text{val}_{\textbf{p}}(u) = nk $. Then, \begin{center} $\text{val}_{\textbf{p}}(A^3) = \text{val}_{\textbf{p}}(u^{12}f(t)^3) = 12nk-3rmk+\epsilon = 12m(\frac{n}{m}-\frac{r}{4})k+\epsilon$ \end{center} and \begin{center} $\text{val}_{\textbf{p}}(B^2) = \text{val}_{\textbf{p}}(u^{12}g(t)^2) = 12nk-2smk+\epsilon' = 12m(\frac{n}{m}-\frac{s}{6})k+\epsilon'$ \end{center} where $\left|\epsilon\right| \leq K_1$ and $\left|\epsilon'\right| \leq K_1$. However, $\frac{n}{m}=\text{max}(\frac{r}{4}, \frac{s}{6})$, so one of the two leading coefficients vanishes while the other is non-negative, and we have $\text{val}_{\textbf{p}}(\text{gcd}(A^3, B^2)) \leq K_1 $. We may take $K_1 = 0$ for $\textbf{p}$ of large enough norm. We now handle Case II where $\text{val}_{\textbf{p}}(t) < 0$. Without loss of generality, suppose $t \in B(\alpha_i, \delta)$ for some $i$ and $\delta > 0$. Because $A = u^4f(t)$ and $B = u^6g(t)$ are continuous in $t$, $A$ and $B$ are bounded on the closure of $B(\alpha_i, \delta)$. Hence, there exists $K_2$ such that $\text{val}_{\textbf{p}}(\gcd (A^3, B^2)) \leq K_2$. For $\textbf{p}$ of large enough norm, $\text{val}_{\textbf{p}}(g(t)) \leq 0$, since the coefficients of $g$ will be $\textbf{p}$-adic units. Hence, we may take $\text{val}_{\textbf{p}}(u) = 0$, and thus $\text{val}_{\textbf{p}}(B^2) = 0$. Thus, we may take $K_2 = 0$ for $\textbf{p}$ of large enough norm. Now suppose $\text{val}_{\textbf{p}}(t) \geq 0 $; since $a$ and $b$ are coprime, this forces $\textbf{p} \nmid b$. Let $K_3$ be a constant so that $\text{min}(3\text{val}_{\textbf{p}}(f(t)), 2\text{val}_{\textbf{p}}(g(t))) \leq K_3$ for all $t \in \mathbb{Q}[i] $ with $\text{val}_{\textbf{p}}(t) \geq 0 $. This constant exists by Lemma 4.4, and we can take $K_3=0$ for $\textbf{p}$ of sufficiently large norm. Since $ \textbf{p} \nmid b $, we have $\text{val}_{\textbf{p}}(u) = 0$, and this gives $\text{val}_{\textbf{p}}(\text{gcd}(A^3, B^2)) \leq K_3 $. Then we may take $e_{\textbf{p}} = \max (K_1, K_2, K_3)$, and this is 0 for $\textbf{p}$ of large enough norm. \end{proof} For the next two lemmas we introduce another set. Let $S_3(X)$ denote the set of pairs $(A,B) \in \mathbb{Z}[i]^2 $ arising from $S_2(X)$.
We have a map $S_3(X) \rightarrow S(X) $ which sends $(A,B)$ to $(A/d^4, B/d^6)$, where $d^{12}$ is the largest 12th power dividing $\text{gcd}(A^3, B^2)$. The statement of \cite[Lemma 2.7]{RHAS} then carries over unchanged, with $d$ now taken in $\mathbb{Z}[i]$. \begin{lemma} There exists a finite positive constant $N$ such that every fiber of the map $S_3(X) \rightarrow S(X)$ has cardinality at most $N$. \end{lemma} \begin{proof} Let $(A_0, B_0) \in S(X) $ be given. Suppose $(A,B) \in S_3(X) $ maps to $(A_0, B_0) $. That is, $A=d^4A_0$ and $B=d^6B_0$ for some $d \in \mathbb{Z}[i] $. By Lemma 4.8, $d^{12} \mid D$. Therefore, if $N$ is the number of 12th powers dividing $D$, the fibers have cardinality at most $N$. \end{proof} The statement and proof of \cite[Lemma 2.8]{RHAS} also hold over $\mathbb{Q}[i]$. \begin{lemma} There exists a positive finite constant $M$ such that every fiber of the map $S_2(X) \rightarrow S_3(X)$ has cardinality at most $M$. \end{lemma} \begin{proof} Let $(A,B) \in S_3(X)$ be given. An element $(a,b) \in S_2(X)$ of the fiber gives a solution to the equations $Ax^4 = f(y)$ and $Bx^6 = g(y)$ via $x=u^{-1}=b^{-n}$ and $y=t=a/b^m$. These two equations define algebraic plane curves of degrees $\text{max}(4,r)$ and $\text{max}(6, s) $, which have no common components since $f$ and $g$ are coprime. By B\'ezout's theorem, they have at most $M=\text{max}(4,r) \text{max}(6,s) $ points in common, which is finite and positive. This bounds the cardinality of the fibers. \end{proof} \begin{lemma} The natural density $P$ of pairs of coprime Gaussian integers exists and is positive. \end{lemma} \begin{proof} This density is equal to $1/\zeta_{\mathbb{Q}[i]}(2)$, where $\zeta_{\mathbb{Q}[i]}$ is the Dedekind zeta function of $\mathbb{Q}[i]$ \cite{GCJJ}. \end{proof} It remains to use the defining conditions of $S_2(X)$. \begin{corollary} $K_1X^{(m+1)/12n} \leq \left| N_G(X) \right|$ for some positive constant $K_1$. \end{corollary} \begin{proof} We first make use of the conditions $N(a) < \kappa X^{m/12n}$ and $N(b) < \kappa X^{1/12n}$. Since the number of Gaussian integers of norm less than $Y$ is $\pi Y + O(\sqrt{Y})$, the number of pairs $(a,b)$ satisfying these two bounds is $C_1X^{(m+1)/12n}$ up to lower-order terms, for some positive constant $C_1$ depending on $\kappa$. By Lemma 4.11, the proportion of such pairs with $a$ and $b$ coprime tends to $P > 0$; since $\lim_{X \to \infty} \frac{X^{\alpha}}{X^{(m+1)/12n}} = 0$ for $0 < \alpha < \frac{m+1}{12n}$, the lower-order terms are negligible, and the pairs satisfying the first two conditions number at least $C_2X^{(m+1)/12n}$ for some positive constant $C_2$ and all sufficiently large $X$. Finally, we use the condition on $A$ and $B$ that $4A^3 + 27B^2 \neq 0 $: we must exclude those $(a,b)$ for which $4A^3 + 27B^2 = 0 $. This equation defines an algebraic curve in the plane $\mathbb{Q}[i]^2$, which has strictly smaller dimension than the plane itself. Thus, as $X \to \infty$, the number of excluded pairs is of lower order. We may conclude that $C_4X^{(m+1)/12n} \leq \left| S_2(X) \right|$ for some positive constant $C_4$.
Since the composite map $ S_2(X) \rightarrow S_3(X) \rightarrow S(X) $ has fibers of cardinality at most $MN$ (by Lemmas 4.9 and 4.10), we obtain $\frac{C_4}{MN}X^{(m+1)/12n} \leq \left| S(X) \right|$. Thus, the lower bound has been established and Proposition 4.2 has been proved. Hence we have proven Theorem 2.3 for the groups listed in Case 1, namely $G = \mathbb{Z}/M\mathbb{Z}$ with $4 \leq M \leq 10$ or $M = 12$, $G = \mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2K\mathbb{Z}$ with $2 \leq K \leq 4$, and $G = \mathbb{Z}/4\mathbb{Z} \times \mathbb{Z}/4\mathbb{Z}$, with $d(G) = 12n/(m+1)$. \end{proof} \subsection{Case 2} Recall the groups that will be dealt with in this case: $G = \mathbb{Z}/M\mathbb{Z}$ for $M = 13, 16, 18$. Each such group is parameterized by $\mathcal{E}_{s,t}$ with $(s,t) \in C(\mathbb{Q}[i])$, where $C$ is a plane curve of genus $>1$. We have the following data for the parameter spaces of the universal equations of the groups $G$ in this case, following from \cite[\S 4]{FPR} and some manipulation: \begin{table}[H] \centering \begin{tabular}{ | l | l |} \hline $G$ & $ C(s,t) $ \\ \hline $\mathbb{Z}/13\mathbb{Z} $ & $s^2 = t^6-2t^5+t^4-2t^3+6t^2-4t+1$ \\ \hline $\mathbb{Z}/16\mathbb{Z} $ & $s^2 = t(t^2+1)(t^2+2t-1)$ \\ \hline $\mathbb{Z}/18\mathbb{Z} $ & $s^2 = t^6+2t^5+5t^4+10t^3+10t^2+4t+1$ \\ \hline \end{tabular} \end{table} The curves $C$ that form each parameter space are irreducible and have genus greater than 1. Thus, by Faltings' theorem \cite{Faltings1983}, there are only finitely many points $(s,t) \in C(\mathbb{Q}[i])$, and so for each group $G$ in this case there are only finitely many elliptic curves with torsion subgroup $G$. Therefore, we have that $d(G) = \infty$. \subsection{Case 3} Recall the groups that will be dealt with in this section: $G = \mathbb{Z}/M\mathbb{Z}$ for $M = 11, 14, 15$, together with $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/10\mathbb{Z}$ and $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/12\mathbb{Z}$. For the groups $G$ that fall into this case, the parameter spaces $C(s,t) = 0$ are elliptic curves. We have the following data for the Mordell-Weil groups of these curves $C$, following from \cite[\S 4]{FPR} and some manipulation: \begin{table}[H] \centering \begin{tabular}{ | l | l |} \hline $G$ & Mordell-Weil group for $C(\mathbb{Q}[i])$ \\ \hline $\mathbb{Z}/11\mathbb{Z} $ & $\mathbb{Z}/5\mathbb{Z}$ \\ \hline $\mathbb{Z}/14\mathbb{Z} $ & $\mathbb{Z}/6\mathbb{Z}$ \\ \hline $\mathbb{Z}/15\mathbb{Z} $ & $\mathbb{Z}/4\mathbb{Z}$ \\ \hline $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/10\mathbb{Z}$ & $\mathbb{Z}/6\mathbb{Z}$ \\ \hline $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/12\mathbb{Z}$ & $\mathbb{Z}/8\mathbb{Z}$ \\ \hline \end{tabular} \end{table} These data show that for each group $G$ in this case, the corresponding group $C(\mathbb{Q}[i])$ is finite, and thus there are only finitely many elliptic curves with torsion subgroup isomorphic to $G$. This establishes $d(G) = \infty$ for this case.
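To make the exponent in Case 1 concrete, we record the worked instance promised after Table 1; it is a direct unwinding of the definitions in Proposition 4.2 for $G = \mathbb{Z}/4\mathbb{Z}$, whose row in Table 1 reads $r = 2$, $s = 3$: \begin{equation*} \frac{n}{m} = \text{max}\Big(\frac{r}{4}, \frac{s}{6}\Big) = \text{max}\Big(\frac{2}{4}, \frac{3}{6}\Big) = \frac{1}{2}, \qquad \text{so} \qquad n = 1, \quad m = 2, \quad \frac{12n}{m+1} = \frac{12}{3} = 4. \end{equation*} Proposition 4.2 then gives bounds of the form $K_1X^{1/4} \leq |S(X)| \leq K_2X^{1/4}$, i.e.~$d(\mathbb{Z}/4\mathbb{Z}) = 4$, in agreement with Table 1.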
\section{The Group $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$}
For the group $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$, we may begin with an analogue of \cite[Proposition 4.2]{RHAS}: \begin{prop} Let $ f, g \in \mathbb{Q}[t] $ be coprime polynomials of degrees $ r $ and $s$. Assume at least one of $r$ or $s$ is positive. Write \begin{center} max$(\frac{r}{2}, \frac{s}{3}) = \frac{n}{m}$ \end{center} with $n$ and $m$ coprime. Assume $n=1$ or $m=1$. Let $S(X)$ be the set of pairs $(A,B) \in \mathbb{Z}[i]^2$ satisfying the following conditions: \begin{itemize} \item $4A^3+27B^2 \neq 0$. \item $\text{gcd}(A^3, B^2)$ is not divisible by any 12th power of a non-unit. \item $N(A) < X^{1/3}$ and $N(B) < X^{1/2}$. \item There exist $ u, t \in \mathbb{Q}[i]$ such that $A=u^2f(t)$ and $B=u^3g(t)$. \end{itemize} Assume $m+1>n$. Then $K_1X^{(m+1)/6n} \leq |S(X)| \leq K_2X^{(m+1)/6n}$ for some positive constants $K_1$ and $K_2$. \end{prop} \begin{proof} The proof is similar to the proof of Proposition 4.2. A version of Lemma 4.5 holds, but with $C_{\textbf{p}} = 1$ for $N(\textbf{p}) \gg 0$. A version of Lemma 4.6 holds where we can write $t = a/b^m$ with gcd$(a,b^m)$ not divisible by any $m$th power of a non-unit and $u = qcb^n$, where $q$ belongs to a finite set and $c$ is square-free. The analog of Lemma 4.7 yields $N(c)N(a)^{n/m} \leq K_3X^{1/6}$ and $N(c)N(b)^n \leq K_4X^{1/6}$ for some positive constants $K_3$ and $K_4$. For any given $c$, the number of possibilities for $a$ is $O(\frac{X^{m/6n}}{N(c)^{m/n}})$ and the number of possibilities for $b$ is $O(\frac{X^{1/6n}}{N(c)^{1/n}})$, yielding $O(\frac{X^{(m+1)/6n}}{N(c)^{(m+1)/n}})$ total possibilities. Summing over $c$ with $N(c)$ up to $X^{1/6}$ gives $|S(X)| \leq K_2X^{(m+1)/6n}$ for some positive constant $K_2$; the sum over $c$ converges since $(m+1)/n > 1$ by the assumption $m+1>n$. For the lower bound, fix $c = 1$. Let $\kappa > 0$ be a small constant, and consider the set $S = \{ (a,b) \in \mathbb{Z}[i]^2: N(a)^{n/m}<\kappa X^{1/6},\ N(b)^n < \kappa X^{1/6},\ \Delta \neq 0 \}$. Then $K_4'X^{(m+1)/6n} \leq |S|$ for some positive constant $K_4'$. The analog of Lemma 4.8 says that gcd$(A^3, B^2) = \alpha^6 \beta$, where $\beta \mid D$ for some fixed $D \in \mathbb{Z}[i]$, and furthermore $\alpha$ is square-free. We can then apply the analogs of Lemmas 4.9--4.11 to obtain the lower bound. \end{proof} We may deduce from the parameterizations given in \cite{FPR} that the polynomials of the universal family for $\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}$ are $f(t) = -\frac{1}{3}(t^2-t+1)$ and $g(t) = \frac{1}{27}(-2t^3+3t^2+3t-2)$. Then, $m = n = 1$, and so $(m+1)/6n = 1/3$. From this we obtain that $d(\mathbb{Z}/2\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z}) = 3$.
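As a sanity check on these polynomials (this verification is our own unwinding of the Legendre form and is not taken verbatim from \cite{FPR}), one may factor the cubic directly and exhibit the full $2$-torsion: \begin{equation*} x^3 + f(t)x + g(t) = \Big(x + \frac{t+1}{3}\Big)\Big(x - \frac{2-t}{3}\Big)\Big(x - \frac{2t-1}{3}\Big), \end{equation*} which is the depressed form of the Legendre curve $y^2 = x(x-1)(x-t)$ under the shift $x \mapsto x + \frac{t+1}{3}$. In particular, $r = \deg f = 2$ and $s = \deg g = 3$, so $\text{max}(\frac{r}{2}, \frac{s}{3}) = 1$, consistent with $m = n = 1$ above.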
\section{Introduction} This paper studies two circles of problems in general relativity, namely the problem of high-frequency limits and the problem of null dust shell solutions. We moreover show that there is a close relationship between the two problems. Here is a summary of what we achieve in this paper: \begin{enumerate} \item The first circle of problems (see Section~\ref{sec:problem.1}) concerns \textbf{high-frequency limits of vacuum solutions}, i.e.~we seek to understand the ``effective matter fields'' that arise in appropriately defined weak limits of solutions to the Einstein vacuum equations. For ``angularly regular'' spacetimes adapted to a double null foliation (but \emph{without any symmetry assumptions}), we give a complete characterization of possible high-frequency limits. Namely, we show that all high-frequency limits are isometric to solutions to the Einstein--null dust system; and conversely, all solutions to the Einstein--null dust system also arise locally as high-frequency limits of vacuum spacetimes. \item The second circle of problems (see Section~\ref{sec:problem.2}) concerns \textbf{null dust shell solutions}, i.e.~solutions to the Einstein--null dust system with a ``shell of null dust'' for which the stress-energy-momentum tensor is a delta measure on an embedded null hypersurface. We prove an existence and uniqueness result (again \emph{with no symmetry assumptions}) for the Einstein--null dust system which describes solutions featuring propagation and interaction of null dust shells (and also more general solutions where the null dust is measure-valued). \item We show that the problem of high-frequency limits and the problem of null dust shells are closely related. In fact, they can both be studied and understood from the point of view of \textbf{low-regularity problems for the Einstein equations}. In particular, in this paper we study these problems using the low-regularity local existence and uniqueness result in our previous papers \cite{LR, LR2} (which was originally developed to understand the propagation and interaction of impulsive gravitational waves, i.e.~solutions to the Einstein vacuum equations such that some curvature components admit a delta function singularity on an embedded null hypersurface). \item Moreover, our construction of null dust shell solutions is based on studying vacuum solutions and then taking appropriate high-frequency limits. Put differently, using the characterization of high-frequency limits, existence and uniqueness of \emph{solutions to the Einstein--null dust system} can be established by studying \emph{vacuum solutions}. See Section~\ref{sec:results.dust}. \item Conversely, we also illustrate through the example of \textbf{formation of trapped surfaces} how the study of the Einstein--null dust system illuminates our understanding of the Einstein vacuum equations. See Section~\ref{sec:intro.addendum}. \end{enumerate} We will explain this further in the remainder of the introduction and give a first description of the main results. In \textbf{Section~\ref{sec:problems}}, we will first introduce the two circles of problems regarding high-frequency limits and null dust shell solutions. In \textbf{Section~\ref{sec:intro.main.results}}, we then give a first description of the main results in this paper, and explain how they relate to \cite{LR, LR2}. We then discuss some related works in \textbf{Section~\ref{sec:related.works}} and the ideas of the proof in \textbf{Section~\ref{sec:proof.intro}}.
\subsection{The problems}\label{sec:problems} \subsubsection{High-frequency limits}\label{sec:problem.1} Consider a sequence of $(3+1)$-dimensional spacetimes $\{(\mathcal M, g_n)\}_{n=1}^{+\infty}$ which solve the Einstein vacuum equations: \begin{equation}\label{EVE} Ric_{\mu \nu}(g_n)=0. \end{equation} Suppose that $g_n \to g_\infty$ in $C^0_{loc}$ and the derivatives of $g_n$ converge \underline{weakly}\footnote{Remark that if the convergence of the derivatives of $g_n$ is \underline{strong} in $L^2_{loc}$, then the limit is necessarily vacuum.} in $L^2_{loc}$. Explicit examples are known (see for instance \cite{Burnett, GW2}) in symmetry classes such that the limit $(\mathcal M, g_\infty)$ may satisfy the Einstein equations \begin{equation}\label{EE} Ric_{\mu \nu}(g_\infty)-\frac 12 (g_\infty)_{\mu\nu} R(g_\infty) = T_{\mu\nu}, \end{equation} with a \emph{non-vanishing stress-energy-momentum tensor} $T_{\mu\nu}$, where $R(g_\infty)$ is the scalar curvature of the limit metric. Physically, $T_{\mu\nu}$ can be interpreted as an effective stress-energy-momentum tensor arising from limits of high-frequency gravitational waves. Mathematically, $T_{\mu\nu}$ can be thought of as a defect measure that arises because taking weak limits does not commute with taking products. This phenomenon raises the following question: \begin{problem}\label{prob:Burnett.general} Give a description of the non-vanishing stress-energy-momentum tensors that arise in the limiting process as described above. \end{problem} There are two guiding conjectures concerning Problem~\ref{prob:Burnett.general}, both of which were introduced by Burnett. In \cite{Burnett}, Burnett considered more restrictive assumptions on the convergence of $g_n$. Namely, he required that for some $C>0$ and $\lambda_n\to 0$, \begin{equation}\label{eq:Burnett.assumptions} |g_n - g_\infty|\leq \lambda_n,\quad |\partial g_n|\leq C,\quad |\partial^2 g_n|\leq C\lambda_n^{-1} \end{equation} in some local coordinate system. Under these assumptions, Burnett made the following conjectures \cite{Burnett}: \begin{conjecture}[Burnett's conjecture]\label{conj:Burnett} Any such limit $(\mathcal M, g_\infty)$ is isometric to a solution to the Einstein--massless Vlasov system for some appropriate choice of Vlasov field. \end{conjecture} \begin{conjecture}[Reverse Burnett's conjecture]\label{conj:reverse.Burnett} Any solution to the Einstein--massless Vlasov system arises as a limit of solutions to the Einstein vacuum equations in the sense described above. \end{conjecture} Here, ``Einstein--massless Vlasov system'' is to be interpreted in an appropriate generalized sense which allows the Vlasov field to be a measure on the cotangent bundle. In particular, it includes the known examples for which the limit is described by the Einstein--null dust system. Conjectures~\ref{conj:Burnett} and \ref{conj:reverse.Burnett} remain open in full generality, although there is some recent progress when $(\mathcal M, g_n)$ is assumed to be $\mathbb U(1)$ symmetric \cite{HL.HF,HL.Burnett}; see Section~\ref{sec:U(1)}. In between the full problem and the $\mathbb U(1)$ symmetric problem, it is of interest to study a setting which is not completely general, but nonetheless does not require any exact symmetries. \begin{problem}\label{prob:Burnett} Find a setting in $(3+1)$-dimensions \underline{without any exact symmetry} such that Conjectures~\ref{conj:Burnett} and \ref{conj:reverse.Burnett} can be studied.
\end{problem} Beyond the original conjectures of Burnett, one can try to understand weak limits of vacuum solutions without imposing \eqref{eq:Burnett.assumptions}, but requiring only the weaker convergence ($g_n\to g_\infty$ uniformly and $\partial g_n \to \partial g_\infty$ weakly in $L^2$) introduced at the beginning of Section~\ref{sec:problem.1}. Notice in particular that while \eqref{eq:Burnett.assumptions} allows for oscillations in $g_n$, it prohibits concentrations. On the other hand, concentrations can in principle occur if we only require $g_n\to g_\infty$ in $C^0$ and $\partial g_n \to \partial g_\infty$ weakly in $L^2$. This motivates \begin{problem}\label{prob:concentration} Study (appropriate analogues of) Conjectures~\ref{conj:Burnett} and \ref{conj:reverse.Burnett} when \underline{concentrations} are present (in addition to oscillations). \end{problem} \subsubsection{Null shell solutions}\label{sec:problem.2} In 1957, Synge \cite{Synge} discovered a solution to the Einstein equations describing a propagating null shell of dust. More precisely, he constructed a spacetime with a distinguished null hypersurface so that the metric is isometric to Schwarzschild on one side of that null hypersurface, and is isometric to Minkowski on the other side. Along the separating null hypersurface, the spacetime is not vacuum. Instead, a component of the Ricci curvature is a delta function supported on this null hypersurface, and the spacetime can be thought of as containing a null shell of dust. Since then, many other explicit solutions have been discovered; see Section~\ref{sec:null.dust.in.physics} for further discussion. In view of the explicit solutions, it is desirable to develop a local theory for null dust shells which does not impose any symmetry assumptions. Note that the difficulty in such an endeavor is that a null shell has much lower regularity than that allowed by standard local well-posedness results. \begin{problem}\label{prob:null.shell} Prove a local existence and uniqueness theorem for the Einstein--null dust system which incorporates the \underline{propagation} of null shells of dust. \end{problem} Once Problem~\ref{prob:null.shell} is understood, it is natural to further extend the class of initial data for which one can develop a local theory. Motivated by explicit solutions featuring the interaction of two null dust shells (see for instance \cite{tDgtH85, tDgtH86, iR85}), it is desirable to understand more generally the \emph{interaction} of two null shells, described by the transversal intersection of two null hypersurfaces which support the null shells of dust. \begin{problem}\label{prob:interaction} Prove a local existence and uniqueness theorem for the Einstein--null dust system which incorporates the \underline{interaction} of null shells of dust. \end{problem} From a PDE point of view, a spacetime containing a null dust shell is a solution to the Einstein--null dust system for which the stress-energy-momentum tensor of the null dust is merely a measure (which is not absolutely continuous with respect to the Lebesgue measure). From this perspective, it is of interest to study more general solutions to the Einstein--null dust system for which the stress-energy-momentum tensor is a measure with singular parts, but not necessarily a measure supported on a single null hypersurface. \begin{problem}\label{prob:general.shell} Construct more general solutions to the Einstein--null dust system where the null dust is \underline{measure-valued}.
\end{problem} Notice that while null dust shells are particular measure-valued solutions to the Einstein--null dust system, they are very special. Indeed, they are so special that often in the physics literature they are constructed (under symmetry assumptions) by considering a ``junction condition'' across the null hypersurface on which the dust is supported. This is no longer the case for more general measures. \subsection{Main results}\label{sec:intro.main.results} In this subsection, we give the informal statements of the main results concerning the problems discussed in Sections~\ref{sec:problem.1} and \ref{sec:problem.2}; see Sections~\ref{sec:results.HF} and \ref{sec:results.dust}. Before that, however, we first discuss our previous low-regularity well-posedness result in Section~\ref{sec:intro.LR2}, which as we will show is closely related to the problems at hand. \subsubsection{A local well-posedness result with $L^2$ Christoffel symbols}\label{sec:intro.LR2} We recall our earlier local well-posedness result in \cite{LR, LR2}. The setup of \cite{LR, LR2} is as follows. We seek a solution $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2, g)$ to the Einstein vacuum equations in double null coordinates: \begin{equation}\label{eq:metricform.intro} g = -2\Omega^2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u}\otimes \mathrm{d} u)+\gamma_{AB}(\mathrm{d}\theta^A-b^A\mathrm{d} u)\otimes (\mathrm{d}\theta^B-b^B\mathrm{d} u), \end{equation} where $\vartheta = (\theta^1, \theta^2)$ is a local coordinate system on $\mathbb S^2$, $\Omega$ is a strictly positive function, $b$ is a vector field tangent to $\mathbb S^2$, and, for every $(u,\underline{u})$, $\gamma$ is a Riemannian metric on $\mathbb S^2$. A characteristic initial value problem (for $(\Omega, b, \gamma)$) is considered in \cite{LR, LR2}, i.e.~characteristic data are prescribed on $\underline{H}_0:=[0, I ] \times \{0\}\times \mathbb S^2$ and $H_0:=\{0\}\times [0,\underline{I} ] \times \mathbb S^2$. The function $\Omega$ can be arbitrarily prescribed on $H_0$ and $\underline{H}_0$. The vector field $b^A$ can be prescribed arbitrarily on $\underline{H}_0$ (but not $H_0$), and additionally $\frac{\partial b^A}{\partial \underline{u}}$ can be prescribed arbitrarily on the sphere $S_{0,0} := \{0\}\times\{0\}\times \mathbb S^2$. Finally, the metric $\gamma$ can be prescribed on $H_0$ and $\underline{H}_0$ subject to some \emph{constraint} equations (see \eqref{eq:constraints.first.time} in Section~\ref{sec:reduced.data}). In \cite{LR2}, we consider data obeying the following estimates\footnote{In fact we only needed slightly weaker estimates, but \eqref{eq:intro.metric.bds} is slightly more concise to state.
We remark also that the constraint equations together with \eqref{eq:intro.metric.bds} imply bounds for the Ricci coefficients.} (where $\frac{\partial }{\partial\vartheta}$ denotes $\frac{\partial }{\partial\theta^1}$ or $\frac{\partial }{\partial\theta^2}$ derivatives): \begin{equation}\label{eq:intro.metric.bds} \begin{split} &\: \sum_{\mathfrak g \in \{\gamma, \log\det\gamma, \log\Omega, b\}} \sum_{i\leq 5} \|(\frac{\partial }{\partial\vartheta})^i \mathfrak g \restriction_{S_{0,0}}\|_{L^2(S)} + \sum_{i\leq 5} \|(\frac{\partial }{\partial\vartheta})^i \frac{\partial b}{\partial\underline{u}} \restriction_{S_{0,0}}\|_{L^2(S)} \\ &\: + \sum_{\mathfrak g \in \{\gamma, \log\det\gamma, \log\Omega, b\}}\sum_{i\leq 5}(\|(\frac{\partial}{\partial\vartheta})^i\frac{\partial \mathfrak g}{\partial\underline{u}}\restriction_{H_0}\|_{L^2_{\underline{u}} L^2(S)} + \|(\frac{\partial}{\partial\vartheta})^i\frac{\partial \mathfrak g}{\partial u}\restriction_{\underline{H}_0}\|_{L^2_{u} L^2(S)}) \leq C. \end{split} \end{equation} \begin{theorem}[L.--R.~\cite{LR2}]\label{thm:existence.intro} Given characteristic initial data satisfying the bounds \eqref{eq:intro.metric.bds}, there exists $\epsilon>0$ sufficiently small \textbf{depending only on $C$} such that for any $u_* \in (0,I]$ and $\underline{u}_* \in (0,\epsilon]$, there exists a unique solution to the Einstein vacuum equations in double null coordinates in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ which achieves the given data. The solution is $C^0\cap W^{1,2}$ with additional regularity in $\frac{\partial}{\partial\vartheta}$ directions, with estimates \textbf{depending only on $C$} in \eqref{eq:intro.metric.bds}. \end{theorem} A more precise version is given in Theorems~\ref{ext.thm} and \ref{thm:ext.est}. The key point here is that when measured in the worst directions (i.e.~the $\frac{\partial}{\partial u}$ and $\frac{\partial}{\partial \underline{u}}$ directions), the components of the metric are merely in $W^{1,2}$. \subsubsection{Results on high-frequency limits}\label{sec:results.HF} To study high-frequency limits, we consider exactly the setting of the results in Section~\ref{sec:intro.LR2}, where in spite of the very low regularity of the data we still have a well-posedness theory (see~Problem~\ref{prob:Burnett}). Our result on high-frequency limits is most easily formulated in the language of Theorem~\ref{thm:existence.intro}. As emphasized above, the assumptions of Theorem~\ref{thm:existence.intro} allow the components of the metric to be merely bounded in $W^{1,2}$. Therefore, given a sequence of characteristic initial data obeying uniformly the estimates in Theorem~\ref{thm:existence.intro}, the first derivatives of the metric components do not necessarily have strong limits. In particular, the limits, if they exist, can in principle have non-trivial stress-energy-momentum tensors as discussed in Section~\ref{sec:problem.1}. Our first main result shows that any non-trivial stress-energy-momentum tensor arising in this way must be that of null dust. It can be viewed as a resolution of Conjecture~\ref{conj:Burnett} in our particular setting where the metric is adapted to a double null foliation gauge\footnote{Note that in our setting, due to the angular regularity given by Theorem~\ref{thm:existence.intro}, we only obtain null dust in the limit, as opposed to a more general Vlasov field as in the case of Conjecture~\ref{conj:Burnett} in general.}.
\begin{theorem}\label{thm:limit.intro} Take a sequence of characteristic initial data which obey the bounds in Theorem~\ref{thm:existence.intro} uniformly. Then the following holds: \begin{enumerate} \item There exists a sequence of metrics $$g_n = -2\Omega_n^2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u}\otimes \mathrm{d} u)+(\gamma_n)_{AB}(\mathrm{d}\theta^A-b_n^A\mathrm{d} u)\otimes (\mathrm{d}\theta^B-b_n^B\mathrm{d} u)$$ in a uniform domain of existence $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. \item After passing to a subsequence $g_{n_k}$, there exists a metric $$g_\infty = -2\Omega_\infty^2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u}\otimes \mathrm{d} u)+(\gamma_\infty)_{AB}(\mathrm{d}\theta^A-b_\infty^A\mathrm{d} u)\otimes (\mathrm{d}\theta^B-b_\infty^B\mathrm{d} u)$$ so that $g_{n_k} \to g_\infty$ in $C^0$ and weakly in $W^{1,2}$ in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. \item Moreover $g_\infty$ satisfies (weakly) the Einstein--null dust system with two families of null dusts which are potentially measure-valued. \end{enumerate} \end{theorem} In establishing Theorem~\ref{thm:limit.intro}, the existence theorem (Theorem~\ref{thm:existence.intro}) plays two important roles: \begin{itemize} \item Theorem~\ref{thm:existence.intro} gives a \underline{uniform} region of existence of solutions, which allows us to study the limit in this setting. \item The regularity properties of the solutions proven in Theorem~\ref{thm:existence.intro} allow us to treat the limits of the nonlinear terms. In particular, the regularity properties dictate that only for specific nonlinear terms could the product of the weak limits be different from the weak limit of the product. This implies that nothing ``worse'' than two families of null dusts can arise in the limit. \end{itemize} Understanding the second point above in particular requires studying the effects of compensated compactness. Moreover, the setup in Theorem~\ref{thm:limit.intro} allows \emph{concentrations} (in addition to oscillations) to occur in the limiting process. These seem to be the first known examples of limiting effective stress-energy-momentum tensors being created by concentrations; see Problem~\ref{prob:concentration}. One consequence of Theorem~\ref{thm:limit.intro} is that we know when the limiting spacetime metric is vacuum: since the null dust satisfies a transport equation, it is vanishing if and only if it has vanishing data. \begin{corollary}\label{cor:vac.cond.intro} Let the sequence $g_n$, the subsequence $g_{n_k}$ and the limit $g_\infty$ be as in Theorem~\ref{thm:limit.intro}. Then the limiting spacetime metric $g_\infty$ is a (weak) solution to the Einstein vacuum equations if and only if the initial data sets for $g_{n_k}$ converge to a limiting initial data set to the Einstein vacuum equations. \end{corollary} In fact, slightly more can be said: the solution is determined by the limit of the initial data alone. In other words, we have the following uniqueness theorem: \begin{theorem}\label{thm:uniqueness.intro} Given two sequences of characteristic initial data satisfying the assumptions of Theorem~\ref{thm:limit.intro} which moreover have the same limit on the initial characteristic hypersurfaces, then in fact the limiting spacetime metrics given by Theorem~\ref{thm:limit.intro} also coincide.
\end{theorem} \subsubsection{Results on null shells}\label{sec:results.dust} Once we have obtained the existence and uniqueness of the limits as solutions to the Einstein--null dust system (see Theorems~\ref{thm:limit.intro} and \ref{thm:uniqueness.intro}), we can use this to obtain an existence and uniqueness theory for the Einstein--null dust system where the null dust is merely a measure (with sufficient angular regularity). More precisely, given an initial data set for the Einstein--null dust system with a potentially measure-valued null dust, we approximate it by a sequence of initial data sets for the Einstein vacuum equations, and then use Theorem~\ref{thm:limit.intro} (!) to construct a solution to the Einstein--null dust system. Uniqueness of solutions constructed in this manner is then given by Theorem~\ref{thm:uniqueness.intro}. Our main result on null shells is the following existence and uniqueness theorem for the characteristic initial value problem for the Einstein--null dust system with measure-valued null dust: \begin{theorem}\label{thm:null.shells.intro} Consider a characteristic initial value problem for the Einstein--null dust system with strongly angularly regular\footnote{We refer the reader to Definition~\ref{def:SARCID} for the precise regularity assumptions. For now we just emphasize that the angular regularity that we require for these characteristic initial data is stronger than the angular regularity for the solutions. We thus distinguish them with the terms ``angularly regular spacetimes'' and ``strongly angularly regular data''. This is related to a well-known loss of derivatives associated with the characteristic initial value problem for second order hyperbolic systems \cite{Hagen}.} initial data with a measure-valued null dust. Then, in an appropriate local double null domain, there exists a unique angularly regular weak solution to the Einstein--null dust system. \end{theorem} Theorem~\ref{thm:null.shells.intro} provides an existence and uniqueness result for a large class of data with measure-valued null dust. These include in particular data for which the solutions feature the propagation and interaction of null dust shells. In other words, it simultaneously addresses Problems~\ref{prob:null.shell}, \ref{prob:interaction} and \ref{prob:general.shell}. We emphasize that Theorem~\ref{thm:null.shells.intro} imposes \underline{no symmetry assumptions}. We remark that in Theorem~\ref{thm:null.shells.intro} one also gets a stability statement, which follows from the proof of Theorem~\ref{thm:uniqueness.intro}. We will however not formulate this precisely for the sake of brevity. Finally, as we explained above, the proof of Theorem~\ref{thm:null.shells.intro} not only gives existence and uniqueness of measure-valued solutions to the Einstein--null dust system, but it also shows that any such solution is an appropriate limit of vacuum solutions. As a result, we also resolve Conjecture~\ref{conj:reverse.Burnett} in our setting.
\begin{corollary}\label{cor:reverse.Burnett.intro} Let $(\mathcal M =[0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2 ,\, g_\infty,\, \{\mathrm{d} \nu_u\}_{u\in [0,u_*]}, \, \{\mathrm{d} \underline{\nu}_{\underline{u}}\}_{\underline{u}\in [0,\underline{u}_*]})$ be an angularly regular solution\footnote{Here, $\mathrm{d}\nu$ and $\mathrm{d}\underline{\nu}$ denote the measure-valued null dust; see Definition~\ref{def:ang.reg.null.dust}.} to the Einstein--null dust system (potentially with measure-valued dusts) with strongly angularly regular data. Then for any $p\in \mathcal M$, there exist $p\in \mathcal M'\subseteq \mathcal M$ and a sequence of smooth angularly regular vacuum solutions $(\mathcal M',g_n)$ such that $g_n \to g_\infty$ in $C^0$ and weakly in $W^{1,2}$ in $\mathcal M'$. \end{corollary} Combining Theorem~\ref{thm:limit.intro} and Corollary~\ref{cor:reverse.Burnett.intro}, we have thus answered Problem~\ref{prob:Burnett} in the class of angularly regular solutions with strongly angularly regular data. \subsection{Related works}\label{sec:related.works} \subsubsection{High-frequency limits in general relativity} The study of limits of high-frequency spacetimes has a long tradition in the physics literature, beginning with the pioneering works of Isaacson \cite{I1, I2} and Choquet-Bruhat \cite{CB.HF}, who already observed using some form of ``averaging'' or ``expansion'' that high-frequency limits of gravitational waves could lead to an effective stress-energy-momentum tensor mimicking that of null dust. (See also \cite{Penrose.massless}.) This was further discussed and explored by MacCallum--Taub \cite{MacCallumTaub}. More relevant to our paper is the work of Burnett \cite{Burnett}, in which he formulated high-frequency limits of gravitational waves in the language of weak limits. In the same paper, he introduced Conjectures~\ref{conj:Burnett} and \ref{conj:reverse.Burnett}. Within Burnett's framework of weak limits, various examples have been constructed; see for instance \cite{pHtF93, GW2, SGWK, SW, HL.HF, Lott2}. Finally, we note interesting connections between high-frequency limits and inhomogeneities in cosmology \cite{GW1, GW2, GW.FLRW, GW.simple}, as well as late-time asymptotics in cosmological spacetimes \cite{Lott1, Lott3, Lott2}. See also \cite{Penrose2018, SC.standing} for other applications. \subsubsection{Burnett's conjecture in $\mathbb U(1)$ symmetry}\label{sec:U(1)} As mentioned earlier, Burnett's conjecture (Conjecture~\ref{conj:Burnett}) in its full generality remains open. Nevertheless, imposing a $\mathbb U(1)$ symmetry and an elliptic gauge condition, Burnett's conjecture\footnote{We remark that the exact conditions in \cite{HL.Burnett} are slightly stronger than in Conjecture~\ref{conj:Burnett}; see \cite{HL.Burnett} for details.} has been solved recently in \cite{HL.Burnett}. In fact, under a slightly more restrictive symmetry and gauge assumption, there is a partial result for the reverse Burnett conjecture (Conjecture~\ref{conj:reverse.Burnett}) \cite{HL.HF}. It was shown that all generic, smooth, small data solutions to the Einstein--null dust system arise as suitable weak limits of solutions to the Einstein vacuum equations. \subsubsection{Null dust shell solutions in the physics literature}\label{sec:null.dust.in.physics} As described earlier, to the best of our knowledge the first null dust shell solution was constructed in \cite{Synge}. This was later generalized by \cite{tDgtH85}.
The interaction of two null dust shells under symmetry assumptions has been studied in \cite{tDgtH85, tDgtH86, iR85}. Due to their simplicity, null dust shell solutions are also used as a simplified model to study gravitational collapse; see for instance \cite{Penrose.shell, Hawking.shell, Tod.shell, Barrabes.shell, cBwIeP90, cBwIpL91}. For further references, see the book \cite{BH.book}. \subsubsection{Low-regularity solutions to the Einstein equations} Our result in this paper heavily relies on the low-regularity existence and uniqueness result in Theorem~\ref{thm:existence.intro}. Low-regularity results in general relativity are themselves of independent interest. Perhaps the most celebrated such result is the bounded $L^2$ curvature theorem: \begin{theorem}[Klainerman--R.--Szeftel \cite{L21}]\label{thm:L2curv} The time of existence (with respect to a maximal foliation) of a classical solution to the Einstein vacuum equations depends only on the $L^2$-norm of the curvature and a lower bound of the volume radius of the corresponding initial data set. \end{theorem} Theorem~\ref{thm:L2curv} handles a very general class of data. This is in contrast to Theorem~\ref{thm:existence.intro}, which, although it allows for lower regularity when measured in the worst directions, requires the data to be of a more specific form. Very recently, in an ongoing work, L.--Van de Moortel \cite{LVdM1, LVdM2} study the problem of transversal interaction of \emph{three} impulsive gravitational waves under polarized $\mathbb U(1)$ symmetry. While \cite{LVdM1, LVdM2} relies heavily on the symmetry assumptions, in view of the presence of three impulsive gravitational waves, the problem requires geometric constructions beyond the double null foliation used in \cite{LR, LR2}. \subsection{Brief discussion of the proof}\label{sec:proof.intro} \subsubsection{Examples in symmetries} Before we discuss the proof, it is illuminating to look at a few very simple examples in symmetry. The first example is given already in \cite{Burnett}, which shows in the plane wave setting how oscillations give rise to a null dust. This is the basic example of the phenomenon that we explore in our paper (where we consider the much more general case with no symmetry assumptions). \begin{example}[The Burnett example \cite{Burnett}]\label{ex.Burnett} Consider the following metric on $\mathbb R^4$: \begin{equation}\label{Burnett.form} g=-2 du d\underline{u}+H(\underline{u})^2(e^{G(\underline{u})} dX^2+ e^{-G(\underline{u})} dY^2), \end{equation} where $H(\underline{u})$ and $G(\underline{u})$ are real-valued functions of $\underline{u}$. This defines a Lorentzian metric as long as $H>0$. The Ricci curvature tensor is given by \begin{equation}\label{eq:Burnett.Ric} \mathrm{Ric}(g) = \{-\frac 12 \left(G'(\underline{u})\right)^2-\frac{2 H''(\underline{u})}{H(\underline{u})} \}\, \mathrm{d}\underline{u} \otimes \mathrm{d}\underline{u}. \end{equation} Burnett considered a one-parameter family of solutions to the vacuum Einstein equations which take the form \eqref{Burnett.form}. The family of solutions is parametrized by $\lambda$. For $\lambda>0$, define $G_\lambda$ by $$G_\lambda(\underline{u})=\lambda k(\underline{u})\sin (\frac{\underline{u}}{\lambda}),$$ where $k(\underline{u})$ is some fixed smooth function.
Also, define $H_\lambda$ by the following ordinary differential equation \begin{equation}\label{H.ODE} \begin{cases} -\frac 12 \left(G_{\lambda}'(\underline{u})\right)^2-\frac{2 H_{\lambda}''(\underline{u})}{H_{\lambda}(\underline{u})}=0,\\ H_{\lambda}(0)=1,\quad H_{\lambda}'(0)=0, \end{cases} \end{equation} so that by \eqref{eq:Burnett.Ric} the metric $g_\lambda = -2 du\, d\underline{u}+H_\lambda(\underline{u})^2(e^{G_\lambda(\underline{u})} dX^2+ e^{-G_\lambda(\underline{u})} dY^2)$ is vacuum. We now consider the limit $\lambda \to 0$. Clearly, \begin{equation}\label{Burnett.G.def} G_0(\underline{u}):=\lim_{\lambda\to 0} G_{\lambda}(\underline{u})=0. \end{equation} By the standard theory of ordinary differential equations, there exists $\epsilon>0$ such that \eqref{H.ODE} can be solved for $\underline{u} \in [0,\epsilon]$. Moreover, it is easy to show that $H_\lambda$ has a limit in $C^1([0,\epsilon])$, after taking $\epsilon$ to be smaller if necessary. We define $H_0(\underline{u}):=\lim_{\lambda\to 0} H_\lambda(\underline{u})$. For $\underline{u} \in [0,\epsilon]$, the spacetime metric given by $$g_0=-2 du\, d\underline{u}+H_0(\underline{u})^2(e^{G_0(\underline{u})} dX^2+ e^{-G_0(\underline{u})} dY^2)$$ satisfies (by \eqref{eq:Burnett.Ric}) $$\mathrm{Ric}(g_0)=\frac 14 \left(k(\underline{u})\right)^2\,\mathrm{d}\underline{u} \otimes \mathrm{d} \underline{u} = \frac 12 w\mbox{-}\lim_{\lambda\to 0} (G_\lambda'(\underline{u}))^2 \,\mathrm{d}\underline{u} \otimes \mathrm{d} \underline{u}.$$ In particular, if $k \not\equiv 0$, then $g_0$ is not a solution to the Einstein vacuum equations, but instead solves the Einstein--null dust system. \end{example} Still within the category of explicit examples in symmetry classes, one can also go beyond one family of null dust and obtain a limit with two families of null dust. We refer the reader to \cite{GW2} for details. \begin{example}[Green--Wald example in Gowdy symmetry \cite{GW2}] Green and Wald gave an example of a sequence of vacuum \underline{polarized Gowdy} spacetimes whose limit is non-vacuum and in fact can be thought of as having two families of null dust. The spacetimes they constructed have topology\footnote{We remark that as we are only interested in the local behavior of high-frequency limits, the topology plays no role here.} $\mathbb R\times \mathbb T^3$ so that, in a coordinate system $(\tau,\theta,\sigma,\delta)\in \mathbb R\times \mathbb T^3$, the sequence of metrics is given by $$g_n=e^{\frac{(\tau-\alpha_n)}{2}}(-e^{-2\tau}d\tau^2+d\theta^2)+e^{-\tau}\left(e^{P_n} d\sigma^2+e^{-P_n} d\delta^2\right),$$ where $P_n$ and $\alpha_n$ take the following form \begin{equation}\label{GW.P.alpha.def} \begin{split} P_n:=&\frac{A}{\sqrt{n}}J_0(n e^{-\tau})\sin(n\theta),\\ \alpha_n:=&-\frac{A^2e^{-\tau}}{2}J_1(n e^{-\tau})J_0(n e^{-\tau})\cos(2n\theta)\\ &-\frac{A^2 n e^{-2\tau}}{4}\left(\left(J_0(n e^{-\tau})\right)^2+2\left(J_1(n e^{-\tau})\right)^2-J_0(n e^{-\tau})J_2(n e^{-\tau})\right). \end{split} \end{equation} Here, $J_k$ denotes the standard Bessel function of the first kind, and $A$ is some fixed real-valued constant.
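To see where the limit below comes from, one can keep in mind the standard large-argument asymptotics of the Bessel functions, $$J_k(x)=\sqrt{\frac{2}{\pi x}}\left(\cos\left(x-\frac{k\pi}{2}-\frac{\pi}{4}\right)+O(x^{-1})\right)\quad\mbox{as } x\to+\infty.$$ These show that $P_n=O(n^{-1})$ uniformly on compact sets, while in $\alpha_n$ the leading oscillatory terms cancel: $$\left(J_0(n e^{-\tau})\right)^2+2\left(J_1(n e^{-\tau})\right)^2-J_0(n e^{-\tau})J_2(n e^{-\tau})=\frac{4}{\pi n e^{-\tau}}+O(n^{-2}),$$ so that $\alpha_n \to -\frac{A^2 e^{-\tau}}{\pi}$ uniformly on compact sets, which explains the form of the limiting metric below.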
As is shown in \cite{GW2}, these metrics are all vacuum and the sequence converges uniformly on compact subsets of $\mathbb R\times\mathbb T^3$ to the limiting spacetime $$g_{\infty}=e^{\frac{(\tau+\frac{A^2 e^{-\tau}}{\pi})}{2}}(-e^{-2\tau}d\tau^2+d\theta^2)+e^{-\tau}\left( d\sigma^2+ d\delta^2\right),$$ which has a non-trivial Einstein tensor with the following non-vanishing components $$G_{\tau\tau}=\frac{A^2e^{-\tau}}{4\pi},\quad G_{\theta\theta}=\frac{A^2e^\tau}{4\pi}.$$ This corresponds to a solution to the Einstein equations with two families of null dust\footnote{It can easily be checked that, after introducing the null variables $\underline{u}:=-e^{-\tau}+\theta$ and $u:=-e^{-\tau}-\theta$, the non-vanishing components of the Einstein tensor in the $(u,\underline{u},\sigma,\delta)$ coordinates are $$G_{uu}=G_{\underline{u}\,\underline{u}}=\frac{A^2}{8\pi H^2}.$$}. \end{example} While the explicit examples above feature only oscillations and have a limit which is smooth, it is not difficult to modify Example~\ref{ex.Burnett} so that the limiting stress-energy-momentum tensor still corresponds to null dust, but is now only a \emph{measure-valued} null dust shell. Our main result will in particular generalize this simple example to general measure-valued null dust without symmetry assumptions. \begin{example}[Null shell in plane symmetry]\label{example:intro.shell} Let $k(\underline{u})\in C^\infty_c$ with $\mathrm{supp}(k) \subseteq [-\frac 12, \frac 12]$ and $k\geq 0$. Moreover, assume \begin{equation}\label{shell.k.bd} \int_{-\infty}^{\infty} \left(k'(\underline{u})\right)^2 d\underline{u} = 1. \end{equation} Now, we construct a one-parameter family of solutions of the form \eqref{Burnett.form} to the Einstein vacuum equations by setting $$G_\lambda(\underline{u})=\lambda^{\f12} k(\frac{\underline{u}}{\lambda})$$ for $\lambda>0$; $H_\lambda$ is then defined to be the solution to \eqref{H.ODE}, so that we obtain a one-parameter family of vacuum solutions. Notice that $G_{\lambda}$ is much more singular than that in Example~\ref{ex.Burnett} as $\lambda\to 0$: indeed, $G_\lambda'(\underline{u})=\lambda^{-\f12}k'(\frac{\underline{u}}{\lambda})$, so $G'_{\lambda}$ is not uniformly bounded in $\lambda$, and, by \eqref{shell.k.bd}, $(G_\lambda'(\underline{u}))^2=\lambda^{-1}(k'(\frac{\underline{u}}{\lambda}))^2$ converges in the weak-* sense to the Dirac measure $\delta(\underline{u})$ as $\lambda\to 0$. It is easy to see that $G_\lambda(\underline{u})$ converges uniformly to $0$. We thus define \begin{equation}\label{shell.G.def} G_0(\underline{u})= \lim_{\lambda\to 0} G_\lambda(\underline{u}) = 0. \end{equation} An ODE argument shows that there is an interval $\underline{u} \in [-\epsilon ,\epsilon]$ on which $H_\lambda(\underline{u})$ admits a $C^0 \cap W^{1,p}$ limit (for\footnote{Note however that $H'_{\lambda}$ does not have a uniform limit.} $p \in [1,+\infty)$), which we denote by $H_0$. For $\underline{u} \in [-\epsilon,\epsilon]$, the spacetime metric given by $$g_0=-2 du\, d\underline{u}+H_0(\underline{u})^2(e^{G_0(\underline{u})} dX^2+ e^{-G_0(\underline{u})} dY^2)$$ satisfies (by \eqref{eq:Burnett.Ric}) $$\mathrm{Ric}(g_0)= \frac 12 \delta(\underline{u})\,\mathrm{d} \underline{u}\otimes \mathrm{d} \underline{u},$$ which corresponds exactly to a null dust shell. \end{example} \subsubsection{Characterization of the high-frequency limit} The above examples, while extremely specific, already illustrate the basic phenomenon. In the more general case of Theorem~\ref{thm:limit.intro}, we will prove that there are (at most) two non-trivial components of the Einstein tensor that can be generated in the weak limit. In particular, as asserted in Theorem~\ref{thm:limit.intro}, the limiting spacetime is isometric to a solution to the Einstein--null dust system.
Unlike in the above examples, however, no explicit computations will be available, and we will instead rely on compactness arguments, particularly compensated compactness. Our starting point is Theorem~\ref{thm:existence.intro}, which states that, given a sequence of initial data obeying the estimates in Theorem~\ref{thm:existence.intro} uniformly, there is a uniform region of existence. Moreover, it follows from the estimates established in the proof of Theorem~\ref{thm:existence.intro} that the sequence of spacetime metrics is uniformly bounded in $C^\alpha \cap W^{1,2}$. Using standard compactness results, this is sufficient to extract a subsequence $(\mathcal M, g_{n_k})$ and a limiting spacetime such that the metrics converge strongly in $C^0$ and weakly in $W^{1,2}$. To obtain more information about the limiting spacetime, and to show that it indeed satisfies the Einstein--null dust system, we need a more precise understanding of the convergence. Introducing the Ricci coefficients $\eta$, $\underline{\eta}$, $\slashed{\mathrm{tr}}\chi$, $\slashed{\mathrm{tr}}\underline{\chi}$, $\hat{\chi}$, $\hat{\underline{\chi}}$, $\omega$ and $\underline{\omega}$ with respect to a null frame adapted to a double null coordinate system (see Section~\ref{sec:Ricci.coeff}), understanding the Ricci curvature of the limit amounts to checking whether quadratic products of these Ricci coefficients converge weakly to the products of the weak limits. The following are the main observations: \begin{enumerate} \item The Ricci coefficients $\eta$, $\underline{\eta}$, $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ converge (up to a subsequence) \emph{strongly} in the (say, spacetime\footnote{In fact, stronger convergence holds (and the sense of convergence is different for different Ricci coefficients), but strong spacetime $L^2$ convergence is sufficient to ensure that they do not contribute to the convergence defect of the Ricci curvature.}) $L^2$ norm. In particular, in all the quadratic terms in which one of the factors is $\eta$, $\underline{\eta}$, $\slashed{\mathrm{tr}}\chi$ or $\slashed{\mathrm{tr}}\underline{\chi}$, the weak limit of the product coincides with the product of the weak limits. \item Notice that even though $\eta$, $\underline{\eta}$, $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ all have strong $L^2$ limits, the precise sense of the limit (and the proof) is different. The components $\eta$ and $\underline{\eta}$ admit (subsequential) uniform pointwise limits; this can be proven by an Arzel\`a--Ascoli type argument. However, $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ converge (up to a subsequence) only in $L^p$ for $p\neq +\infty$, and to prove this we rely on the compactness of BV functions and the Aubin--Lions lemma. \item The Ricci coefficients which only have weak $L^2$ limits, i.e.~$\hat{\chi}$, $\hat{\underline{\chi}}$, $\omega$ and $\underline{\omega}$, exhibit some \emph{compensated compactness}. For instance, since $\hat{\chi}$ is more regular along constant-$\underline{u}$ hypersurfaces and $\hat{\underline{\chi}}$ is more regular along constant-$u$ hypersurfaces, we have $w\mbox{-}\lim_{k\to +\infty} (\hat{\chi}_{n_k} \otimes \hat{\underline{\chi}}_{n_k}) = (w\mbox{-}\lim_{k\to +\infty} \hat{\chi}_{n_k}) \otimes (w\mbox{-}\lim_{k\to +\infty} \hat{\underline{\chi}}_{n_k})$, \emph{even though both $\hat{\chi}_{n_k}$ and $\hat{\underline{\chi}}_{n_k}$ only admit weak limits}.
\item Finally, the \emph{only} quadratic terms of the Ricci coefficients in the definition of the Ricci curvature for which the weak limit of the product differs from the product of the weak limits are $|\hat{\chi}|^2_\gamma$ and $|\hat{\underline{\chi}}|^2_\gamma$. In particular, $\mathrm{weak}\mbox{-*} \lim_{k\to +\infty} |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} - |\mathrm{weak}\mbox{-*} \lim_{k\to +\infty} \hat{\chi}_{n_k}|^2_{\gamma_{\infty}}$ and $\mathrm{weak}\mbox{-*} \lim_{k\to +\infty} |\hat{\underline{\chi}}_{n_k}|^2_{\gamma_{n_k}} - |\mathrm{weak}\mbox{-*} \lim_{k\to +\infty} \hat{\underline{\chi}}_{n_k}|^2_{\gamma_{\infty}}$ are in general non-trivial non-negative measures, corresponding to the two families of null dust. \end{enumerate} Using the above observations, it already follows that with respect to the null frame $\{e_1,\,e_2,\,e_3,\,e_4\}$, the only potentially non-vanishing components of the Ricci curvature of the limiting spacetime are $\mathrm{Ric}(e_3,e_3)$ and $\mathrm{Ric}(e_4, e_4)$. To prove that the limit solves the Einstein--null dust system, we also need to derive the propagation equations for the null dust\footnote{In fact we will also derive some higher order equations, such as transport equations for some first angular derivatives of the Ricci coefficients and a hyperbolic system for the renormalized curvature components. These are strictly speaking not necessary for Theorem~\ref{thm:limit.intro}, but are used to prove the uniqueness of the limit.}. For this we need to understand convergence properties of some higher derivative terms and also of cubic terms. These turn out to follow from strong convergence statements and compensated compactness statements which are similar to those above, but for higher order derivatives. \subsubsection{Proof of the uniqueness theorem} For the uniqueness theorem (Theorem~\ref{thm:uniqueness.intro}), first note that since the limiting spacetime metrics are obtained as limits of the metrics from Theorem~\ref{thm:existence.intro}, the metric components lie in function spaces similar to those in \cite{LR, LR2} (with the notable exceptions that $\slashed{\mathrm{tr}}\chi$, $\slashed{\mathrm{tr}}\underline{\chi}$ are only BV instead of $W^{1,1}$, and that there are in addition two families of null dust, which are in general only measure-valued; more on this later). This suggests that uniqueness should be proven in the function spaces used in \cite{LR2}. In order to obtain the uniqueness result, the renormalization introduced in \cite{LR, LR2} plays an important role. (In fact, in this paper, the presence of the measure-valued null dust makes the renormalization in \cite{LukWeakNull} more convenient to use than that in our original \cite{LR, LR2}.) The main point of the renormalization in \cite{LR, LR2} is to identify some quantities, which we call the renormalized curvature components, that are more regular than the spacetime curvature components themselves. Introducing the renormalization moreover allows us to obtain a closed system of equations which does not involve the spacetime curvature components $R(e_4, e_A, e_4, e_B)$ and $R(e_3, e_A, e_3, e_B)$, which are in general only distribution-valued. After introducing the renormalization, we can recast the Einstein--null dust system in double null coordinates as a coupled quasilinear system of hyperbolic, elliptic and transport equations for the metric components, the Ricci coefficients, the renormalized curvature components and the null dust.
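As a simple illustration of the renormalization (stated here in the conventions of \cite{LR, LR2}; we caution that sign conventions vary in the literature), the Gauss equation on the spheres $S_{u,\underline{u}}$ reads $$K = -\rho + \frac 12 \hat{\chi}\cdot\hat{\underline{\chi}} - \frac 14 \slashed{\mathrm{tr}}\chi\, \slashed{\mathrm{tr}}\underline{\chi}, \qquad \rho := \frac 14 R(e_4,e_3,e_4,e_3),$$ so that the Gauss curvature $K$, which enjoys better regularity properties than the curvature component $\rho$ itself, can be used in its place; this is the simplest instance of a renormalized curvature component entering the system above.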
We then use this system to prove estimates and deduce the uniqueness statement. One crucial difference between our uniqueness argument and the proof of a priori estimates in \cite{LR2} is that a priori we only know that the solutions obey the Einstein--null dust system in an appropriate weak sense. Another (perhaps more important) difference is the presence of the null dust, which is moreover only measure-valued. Proving our uniqueness result involves estimating differences of the null dust and of the null mean curvatures $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ using the transport equations they satisfy, and this needs to be carried out in appropriate function spaces which avoid a potential loss of derivatives; see Sections~\ref{sec:diff.trch} and \ref{sec:diff.null.dust}. \subsubsection{Approximating the initial data} The final result that we prove is Theorem~\ref{thm:null.shells.intro} (from which Corollary~\ref{cor:reverse.Burnett.intro} also follows), which asserts the local existence and uniqueness of solutions to the Einstein--null dust system with measure-valued null dust. Given that all limits satisfy the Einstein--null dust system (Theorem~\ref{thm:limit.intro}) and that the limiting spacetime depends only on the limiting data (Theorem~\ref{thm:uniqueness.intro}), the final necessary ingredient is a statement that for every given data set for the Einstein--null dust system (potentially with a measure-valued null dust), we can find a sequence of smooth vacuum data which \begin{enumerate} \item obey the estimates required in Theorem~\ref{thm:limit.intro} uniformly, and \item moreover converge in an appropriate weak sense to the given data. \end{enumerate} Once this approximation result is achieved, we prove the existence part of Theorem~\ref{thm:null.shells.intro} using Theorem~\ref{thm:limit.intro} to extract a limit which is a weak solution to the Einstein--null dust system. We then prove the uniqueness part of Theorem~\ref{thm:null.shells.intro} using Theorem~\ref{thm:uniqueness.intro}. Obtaining the approximation result requires solving the null constraint equations on the initial hypersurfaces. For this we rely on the fact, elucidated in \cite{Chr}, that the null constraint equations can be solved by first prescribing appropriate ``free data'', related to the conformal class of $\gamma$, and then solving transport equations. We carry out the proof of the approximation result in two steps\footnote{In particular, even to generate the null dust shell, we first regularize the initial data for the null dust, and then approximate it by high-frequency oscillations in the metric, in contrast to Example~\ref{example:intro.shell}.}, which we now describe. In the first step, we show that (up to technical assumptions\footnote{We need a technical assumption that requires the null dust to vanish near some angular direction; see Section~\ref{sec:approx.smooth.dust.by.vac}.}) any \emph{smooth} null dust data can be approximated by highly oscillatory but smooth vacuum data. We prescribe highly oscillatory data for $\frac{\partial \hat{\gamma}_n}{\partial \underline{u}}$, where $\hat{\gamma}_n$ is an appropriate representation of the conformal class of $\gamma_n$.
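While the precise choice of $\frac{\partial \hat{\gamma}_n}{\partial \underline{u}}$ is made in Section~\ref{sec.approx.thm}, the following caricature, modeled on Example~\ref{ex.Burnett}, may be kept in mind: writing $\hat{\gamma}^{(dust)}$ for the corresponding representative associated to the given null dust data, one may think of an ansatz of the schematic form $$\hat{\gamma}_n = \hat{\gamma}^{(dust)} + \frac 1n\, a \sin(n\underline{u}) + \mbox{lower order terms},$$ for a suitable tensor field $a$, so that $\frac{\partial \hat{\gamma}_n}{\partial \underline{u}} = \frac{\partial \hat{\gamma}^{(dust)}}{\partial \underline{u}} + a\cos(n\underline{u}) + O(n^{-1})$. The cross terms average out to $0$ in the weak limit, and the excess in $|\frac{\partial \hat{\gamma}_n}{\partial \underline{u}}|^2$ converges weakly to $\frac 12 |a|^2$, which can then be matched with the prescribed null dust.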
We choose the sequence $\frac{\partial \hat{\gamma}_n}{\partial \underline{u}}$ carefully so that $|\frac{\partial \hat{\gamma}_n}{\partial \underline{u}}|^2_{\hat{\gamma}_n} - |\frac{\partial \hat{\gamma}^{(dust)}}{\partial \underline{u}}|^2_{\hat{\gamma}^{(dust)}}$ converges weakly to the prescribed function corresponding to the null dust. Moreover, using the high-frequency parameter in $\gamma_n$ as a smallness parameter, we solve the vacuum null constraint equations, prove the necessary estimates, and show that the limit indeed solves the null constraint equations for the Einstein--null dust system. In the second step, we show that any \emph{measure-valued} null dust data (with additional angular regularity) can be approximated by \emph{smooth} null dust data (and obtain the desired result after combining with the first step). To obtain this approximation result, we smooth out the given data for the null dust and the metric, and prove a stability-type result for the null constraint equations in a low-regularity class consistent with the null dust only being a measure. We emphasize that such an approximation result is possible precisely because we only need uniform bounds for $\hat{\chi}$ and $\hat{\underline{\chi}}$ in $L^2$ (with additional angular regularity, but no regularity in the null directions). \subsection{Formation of trapped surfaces}\label{sec:intro.addendum} A main theme of this paper and the discussion thus far is the connection between the Einstein vacuum equations and the Einstein--null dust system via weak limits. In particular, our construction of solutions to the Einstein--null dust system with measure-valued null dust explicitly exploits this connection. The connection between the Einstein vacuum equations and the Einstein--null dust system can also shed light on the problem of the formation of trapped surfaces. Because of the complexity of the formation of trapped surfaces in vacuum, the problem was first studied in simplified settings, such as the collapse of a null dust shell \cite{Gibbons.shell, Penrose.shell, Synge}. We will show that the solutions in \cite{Gibbons.shell, Penrose.shell, Synge} in fact arise as suitable weak limits of \emph{vacuum} solutions. These vacuum solutions are moreover exactly those constructed by Christodoulou in his groundbreaking work \cite{Chr}. In other words, at least in a specific solution regime, the collapse of a null dust shell captures the dynamics of gravitational collapse in vacuum. See further discussions in Section~\ref{sec:trapped.surfaces}. \subsection{Outline of the paper} We end the introduction with an outline of the remainder of the paper. In \textbf{Section~\ref{sec:geo.prelim}}, we introduce the geometric setup of this paper. In particular, we will describe the double null foliation gauge. We will also define the precise notions of weak solutions that will be relevant in this paper. After these preliminary discussions, we recall the existence theorem of \cite{LR2} in \textbf{Section~\ref{sec.existence}}. We then state the precise versions of the main theorems (Theorems~\ref{thm:limit.intro}, \ref{thm:uniqueness.intro}, \ref{thm:null.shells.intro} and Corollary~\ref{cor:reverse.Burnett.intro}) in \textbf{Section~\ref{sec.main.thm}} (see Theorems~\ref{main.thm}, \ref{thm:uniqueness}, \ref{thm:main.local.dust} and \ref{thm:reverse.Burnett}). The remainder of the paper is then devoted to the proofs of these four theorems.
\begin{itemize} \item Sections~\ref{sec:gen.compactness}--\ref{sec:eqns.for.limit} are devoted to proving that a limit exists and solves the Einstein--null dust system (Theorem \ref{main.thm}). In \textbf{Section~\ref{sec:gen.compactness}}, we begin with some general compactness results. In \textbf{Section~\ref{sec:existence}}, the compactness results will be applied to extract a limiting spacetime with regularity properties. In \textbf{Section~\ref{sec:eqns.for.limit}}, we then show that the limit satisfies the Einstein--null dust system. \item \textbf{Section~\ref{sec:proof.uniqueness}} will be devoted to proving the uniqueness theorem (Theorem~\ref{thm:uniqueness}). \item \textbf{Section~\ref{sec.approx.thm}} will be devoted to proving an approximation theorem for the characteristic initial data, from which the local existence and uniqueness result for the Einstein--null dust system (Theorem~\ref{thm:main.local.dust}) and the result on approximation by vacuum spacetimes (Theorem~\ref{thm:reverse.Burnett}) follow. \end{itemize} In \textbf{Section~\ref{sec:trapped.surfaces}}, we end the paper with a discussion regarding the relation between null dust shell solutions and the formation of trapped surfaces result of Christodoulou \cite{Chr}. Finally, we include two appendices. In \textbf{Appendix~\ref{app:est}}, we derive from \cite{LR2} some additional estimates that are used in this paper. In \textbf{Appendix~\ref{app:CC}}, we prove our main compensated compactness lemma. \\ \\ {\bf Acknowledgements:} J. Luk thanks the Cambridge mathematical general relativity group for stimulating discussions. In particular, he thanks Jan Sbierski for raising some interesting questions. J. Luk is supported by NSF grants DMS-1709458, DMS-2005435 and a Terman Fellowship. I. Rodnianski is supported by NSF grant DMS-1709270 and a Simons Investigator Award. \section{Geometric preliminaries}\label{sec:geo.prelim} \subsection{Weak solutions to the Einstein equations}\label{sec.weak.sol} In this subsection, we give some definitions of weak solutions to the Einstein vacuum equations and the Einstein--null dust system which make sense under very weak regularity assumptions. We will very soon further restrict the class of solutions that we consider, but it is useful to keep these more general definitions in mind. \begin{definition}\label{def:vacuum} Let $(\mathcal M, g)$ be a $C^0\cap W^{1,2}_{\mathrm{loc}}$ time-oriented Lorentzian manifold. We say that $(\mathcal M, g)$ is a \textbf{weak solution to the Einstein vacuum equations} if for all smooth and compactly supported vector fields $X$ and $Y$, $$\int_{\mathcal M} \left((D_\mu X^\mu)(D_\nu Y^\nu)-D_\mu X^\nu D_\nu Y^\mu\right) \,\mathrm{dVol}_g = 0,$$ where $D$ and $\mathrm{dVol}_g$ respectively denote the Levi--Civita connection and the volume form associated to $g$. \end{definition} \begin{definition}\label{def:null.dust} Let $(\mathcal M, g)$ be a $C^0\cap W^{1,2}_{\mathrm{loc}}$ time-oriented Lorentzian manifold and $\mathrm{d}\nu$, $\mathrm{d}\underline{\nu}$ be non-negative Radon measures on $\mathcal M$. 
Let $u,\,\underline{u}:\mathcal M \to \mathbb R$ be two $C^1$ functions satisfying $$g^{-1}(\mathrm{d} u, \mathrm{d} u) = g^{-1}(\mathrm{d} \underline{u}, \mathrm{d} \underline{u}) = 0.$$ We say that $(\mathcal M, g, \mathrm{d} \nu, \mathrm{d} \underline{\nu})$ is a \textbf{weak solution to the Einstein--null dust system} if the following holds: \begin{enumerate} \item For all smooth and compactly supported vector fields $X$ and $Y$, $$\int_{\mathcal M} \left((D_\mu X^\mu)(D_\nu Y^\nu)-D_\mu X^\nu D_\nu Y^\mu\right) \,\mathrm{dVol}_g = \int_{\mathcal M} (Xu)(Yu) \,\mathrm{d}\nu + \int_{\mathcal M} (X\underline{u})(Y\underline{u}) \,\mathrm{d}\underline{\nu}.$$ \item For every smooth and compactly supported real-valued function $\varphi$, $$\int_{\mathcal M} g^{-1}(\mathrm{d} u, \mathrm{d} \varphi) \, \mathrm{d} \nu = 0 = \int_{\mathcal M} g^{-1}(\mathrm{d} \underline{u}, \mathrm{d} \varphi) \, \mathrm{d} \underline{\nu}.$$ \end{enumerate} \end{definition} \begin{remark}\label{rmk:null.dust.1} Consider the particular case where $g$ is $C^2$ and there exist $C^1$ functions $f_\nu$ and $f_{\underline{\nu}}$ such that $\mathrm{d}\nu = f^2_\nu \,\mathrm{dVol}_g$ and $\mathrm{d}\underline{\nu} = f^2_{\underline{\nu}} \,\mathrm{dVol}_g$. Then the two conditions in Definition~\ref{def:null.dust} are equivalent to \begin{enumerate} \item $$\mathrm{Ric}(g) = f^2_\nu\, \mathrm{d} u\otimes \mathrm{d} u + f^2_{\underline{\nu}} \, \mathrm{d} \underline{u} \otimes \mathrm{d} \underline{u}.$$ \item $$2 g^{-1}(\mathrm{d} u, \mathrm{d} f_\nu) + (\Box_g u) f_\nu =0 = 2 g^{-1}(\mathrm{d} \underline{u}, \mathrm{d} f_{\underline{\nu}}) + (\Box_g \underline{u}) f_{\underline{\nu}},$$ where $\Box_g$ denotes the Laplace--Beltrami operator associated to $g$. \end{enumerate} \end{remark} \begin{remark}\label{rmk:null.dust.2} Definition~\ref{def:null.dust} does \underline{not} give the most general form of the Einstein--null dust system. Even when the density of the null dust is given by an $L^1$ function, one in general allows the following system for null $1$-forms $\xi$ and $\underline{\xi}$, which are not necessarily exact: \begin{enumerate} \item $$\mathrm{Ric} = f^2_\xi \, \xi \otimes \xi + f^2_{\underline{\xi}} \, \underline{\xi} \otimes \underline{\xi},$$ \item $$2 g^{-1}(\xi, \mathrm{d} f_\xi) + (D^\alpha \xi_\alpha) f_\xi =0 = 2 g^{-1}(\underline{\xi}, \mathrm{d} f_{\underline{\xi}}) + (D^\alpha \underline{\xi}_\alpha) f_{\underline{\xi}},$$ \item $$D_\xi \xi = 0 = D_{\underline{\xi}} \underline{\xi}.$$ \end{enumerate} We restrict our attention to Definition~\ref{def:null.dust}, however, since this is precisely what arises in the limit in our setting. \end{remark} \subsection{Double null foliation and double null coordinates}\label{sec.dnf} In this subsection, we define spacetimes with double null foliation and double null coordinates. From now on, we take $\mathcal M$ to be a manifold with corners with\footnote{In fact, all of our results apply if we replace $\mathbb S^2$ by any compact $2$-surface $S$. The choice of $S=\mathbb S^2$ is so that we have consistent notation with \cite{LR, LR2}.} \begin{equation}\label{eq:M.topology} \mathcal M = [0,u_*] \times [0,\underline{u}_*] \times \mathbb S^2, \end{equation} where $u_*, \underline{u}_*>0$.
\begin{definition}\label{def:sets} We introduce the following notations for subsets of $\mathcal M$ satisfying \eqref{eq:M.topology}: \begin{enumerate} \item $$H_u:= \{ (u',\underline{u}', \vartheta) \in [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2: u' = u\},$$ \item $$\underline{H}_{\underline{u}}:= \{ (u',\underline{u}', \vartheta) \in [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2: \underline{u}' = \underline{u}\},$$ \item $$S_{u,\underline{u}} := H_u \cap \underline{H}_{\underline{u}} = \{ (u',\underline{u}', \vartheta) \in [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2: u' = u,\,\underline{u}' = \underline{u}\}.$$ \end{enumerate} \end{definition} \begin{definition} We say that a type $T^q_p$ tensor field $\xi$ on $\mathcal M$ is \textbf{$S$-tangent} if for all vectors $X_1, \dots, X_p\in T_x\mathcal M$, $\xi(X_1,\dots,X_p)$ lies in $\otimes^q T_x S_{u,\underline{u}}$, and $\xi(X_1,\dots,X_p)=0$ whenever one of $X_1, \dots, X_p$ equals $e_3$ or $e_4$. \end{definition} We introduce the following convention for indices for the remainder of the paper. {\bf We will use the convention that the lower case Greek indices run through the spacetime indices $\mu,\nu=1,2,3,4$, while the upper case Latin indices run through the indices on the $2$-surfaces $A,B=1,2$.} We will use Einstein's summation convention unless otherwise stated. \begin{definition}[$C^0\cap W^{1,2}_{loc}$ metrics in double null coordinates]\label{double.null.def} A Lorentzian metric $g$ on $\mathcal M$ satisfying \eqref{eq:M.topology} is said to be a \textbf{$C^0\cap W^{1,2}_{loc}$ metric in double null coordinates} if the following hold: \begin{itemize} \item There exists an atlas $\{U_i\}_{i=1}^N$ of $\mathbb S^2$ such that, in the coordinates $(\theta^1, \theta^2)$ on each chart $U_i$, the metric takes the form\footnote{Recall our convention that capital Latin indices are summed from $1$ to $2$.} \begin{equation}\label{double.null.coordinates} g=-2\Omega^2(du\otimes d\underline{u}+d\underline{u}\otimes du)+\gamma_{AB}(d\theta^A-b^Adu)\otimes (d\theta^B-b^Bdu), \end{equation} where \begin{itemize} \item $\Omega$ is a real-valued function in $C^0\cap W^{1,2}_{loc}$, \item $b = b^A\frac{\partial}{\partial\theta^A}$ is a $C^0\cap W^{1,2}_{loc}$ $S$-tangent vector field, \item $\gamma = \gamma_{AB}\,\mathrm{d}\theta^A\,\mathrm{d} \theta^B$ is a $C^0\cap W^{1,2}_{loc}$ $S$-tangent symmetric covariant $2$-tensor, which restricts to a positive definite metric on $T S$. \end{itemize} \end{itemize} \end{definition} \begin{remark}[Regularity assumptions] In Definition~\ref{double.null.def}, the metric components are required only to be in $C^0\cap W^{1,2}_{loc}$. This is the minimal assumption that we need for many of the definitions below. Nevertheless, the spacetime metrics that we actually construct will have higher regularity, at least when viewed in some directions (see already Definition~\ref{double.null.def.2}). \end{remark} \begin{remark}[Geometric significance of the double null coordinate system] At least when the metric is sufficiently regular, \eqref{double.null.coordinates} has the following geometric interpretation: \begin{enumerate} \item $u$ and $\underline{u}$ are null variables. In particular, $H_u$ and $\underline{H}_{\underline{u}}$ (recall Definition~\ref{def:sets}) are null hypersurfaces. \item $(\mathrm{d} u)^\sharp$ and $(\mathrm{d}\underline{u})^\sharp$ are null geodesic vector fields (where $\sharp$ denotes the metric dual).
\item The coordinate functions $\theta^1$ and $\theta^2$ are constant along the integral curves of $(\mathrm{d} u)^\sharp$. \end{enumerate} \end{remark} \subsection{Ricci coefficients and the Gauss curvature of the $2$-spheres}\label{sec:Ricci.coeff} We now define the Ricci coefficients for a metric satisfying Definition~\ref{double.null.def}. For the rest of this subsection, we fix a spacetime $(\mathcal M, g)$ satisfying Definition~\ref{double.null.def}. \begin{definition} The normalized null pair is defined as follows: $$e_3=\Omega^{-1}(\frac{\partial}{\partial u}+b^A\frac{\partial}{\partial\theta^A}),\quad e_4=\Omega^{-1}\frac{\partial}{\partial\underline{u}}.$$ We will also write $\{e_A\}_{A=1,2}$ to denote an arbitrary local frame tangent to $S_{u,\underline{u}}$. \end{definition} We now define the Ricci coefficients as the following $S$-tangent tensors, where $D$ is the Levi--Civita connection with respect to the spacetime metric $g$: \begin{definition}[Ricci coefficients]\label{def.RC} \begin{enumerate} \item Define the following Ricci coefficients\footnote{Note that these are well-defined with $C^0\cap W^{1,2}_{loc}$ regularity of the metric coefficients.} such that for vector fields $\slashed X$, $\slashed Y$ tangential to $S$: \begin{equation*} \begin{split} &\chi(\slashed X, \slashed Y)=g(D_{\slashed X} e_4, \slashed Y),\, \,\, \quad \underline{\chi}(\slashed X,\slashed Y)=g(D_{\slashed X} e_3,\slashed Y),\\ &\eta(\slashed X)=-\frac 12 g(D_3 \slashed X,e_4),\quad \underline{\eta}(\slashed X)=-\frac 12 g(D_4 \slashed X,e_3),\\ &\omega=-\frac 14 g(D_4 e_3,e_4),\quad\,\,\, \underline{\omega}=-\frac 14 g(D_3 e_4,e_3). \end{split} \end{equation*} \item Define also $$\hat{\chi}=\chi-\frac 12 \slashed{\mathrm{tr}}\chi \gamma, \quad \hat{\underline{\chi}}=\underline{\chi}-\frac 12 \slashed{\mathrm{tr}}\underline{\chi} \gamma,$$ where $\hat{\chi}$ (resp.~$\hat{\underline{\chi}}$) is the traceless part of $\chi$ (resp.~$\underline{\chi}$) (with the trace taken with respect to the metric $\gamma$ on $S_{u,\underline{u}}$), and $\slashed{\mathrm{tr}}\chi$ (resp.~$\slashed{\mathrm{tr}}\underline{\chi}$) is the trace of $\chi$ (resp.~$\underline{\chi}$). \item We will use $\chi_{AB}$, $\underline{\chi}_{AB}$, $\eta_A$, etc.~to denote the components of the Ricci coefficients with respect to a local coordinate system on $S$ in a double null coordinate system. \end{enumerate} \end{definition} We now define some quantities which depend only on the intrinsic geometry of a Riemannian $2$-manifold; note that in applications, $S = S_{u,\underline{u}}$ for some $u,\,\underline{u}$. \begin{definition}\label{def:isoperimetric} Let $(S,\gamma)$ be a closed Riemannian $2$-manifold. \begin{enumerate} \item Define $\mathrm{Area}(S,\gamma)$ to be the total area of $S$ with respect to the metric $\gamma$. \item Define the isoperimetric constant by \begin{equation}\label{eq:def.IPC} {\mathbf I}(S,\gamma) = \sup_{\substack{ U \\ \partial U \in C^1}} \frac{ \min\{\mathrm{Area}(U),\,\mathrm{Area}(U^c)\} }{(\mathrm{Perimeter}(\partial U))^2 } . 
\end{equation} \item Define the Gauss curvature $K$ by \begin{equation}\label{Gauss.def} \gamma_{BC} K=\frac{\partial}{\partial\theta^A}\slashed{\Gamma}^{A}_{BC}-\frac{\partial}{\partial\theta^C}\slashed{\Gamma}^A_{BA}+\slashed{\Gamma}^A_{AD}\slashed{\Gamma}^D_{BC}-\slashed{\Gamma}^A_{CD}\slashed{\Gamma}^D_{BA}, \end{equation} where \begin{equation}\label{Gamma.def} \slashed{\Gamma}_{BA}^C=\frac 12(\gamma^{-1})^{CD}\left(\frac{\partial}{\partial\theta^B}\gamma_{AD}+\frac{\partial}{\partial\theta^A}\gamma_{BD}-\frac{\partial}{\partial\theta^D}\gamma_{AB}\right). \end{equation} \end{enumerate} When there is no danger of confusion, we write $\mathrm{Area}(S) = \mathrm{Area}(S,\gamma)$ and ${\mathbf I}(S) = {\mathbf I}(S,\gamma)$. \end{definition} Note that our regularity assumptions are sufficient to make sense of $\mathrm{Area}(S_{u,\underline{u}},\gamma)$ and ${\mathbf I}(S_{u,\underline{u}},\gamma)$. However, for $(\mathcal M, g)$ satisfying Definition~\ref{double.null.def}, the Gauss curvature $K$ of the surfaces $S_{u,\underline{u}}$ is only to be understood as a distribution. \subsection{Differential operators on $S$-tangent tensor fields} \begin{definition}[Covariant derivatives for $S$-tangent tensor fields]\label{def:nabla} Define $\slashed{\nabla}_3$ and $\slashed{\nabla}_4$ to be the projections to $S_{u,\underline{u}}$ of the covariant derivatives $D_3= D_{e_3}$ and $D_4 = D_{e_4}$ respectively. Define $\slashed{\nabla}$ to be the Levi--Civita connection of the metric $\gamma$ on $S_{u,\underline{u}}$. \end{definition} The following identities hold for the connections $\slashed{\nabla}_3$, $\slashed{\nabla}_4$ and $\slashed{\nabla}$; the proofs are straightforward and omitted. \begin{proposition}\label{diff.formula} In the double null coordinate system, for every covariant tensor field $\phi$ of rank $r$ tangential to the spheres $S_{u,\underline{u}}$, we have \begin{equation}\label{nab3.def} \begin{split} (\slashed{\nabla}_3 \phi)_{A_1 A_2 ... A_r} =&\:\Omega^{-1}(\frac{\partial}{\partial u}+b^C\frac{\partial}{\partial\theta^C}) \phi_{A_1 A_2 ... A_r}-\sum_{i=1}^r(\underline{\chi}^B{ }_{A_i}-\Omega^{-1}\frac{\partial b^B}{\partial\theta^{A_i}})\phi_{A_1\dots\hat{A_i}B\dots A_r}, \end{split} \end{equation} where the notation $\hat{A_i}$ indicates that the index $A_i$ is removed (its slot being taken by the contracted index). Similarly, we have \begin{equation}\label{nab4.def} \begin{split} (\slashed{\nabla}_4 \phi)_{A_1 A_2 ... A_r} =&\:\Omega^{-1}\frac{\partial}{\partial \underline{u}} \phi_{A_1 A_2 ... A_r}-\sum_{i=1}^r \chi^B{ }_{A_i}\phi_{A_1\dots\hat{A_i}B\dots A_r}. \end{split} \end{equation} Finally, $\slashed{\nabla}$ is given by the Levi--Civita connection associated to the metric $\gamma$, i.e., \begin{equation}\label{nab.def} \slashed{\nabla}_B \phi_{A_1 A_2 \dots A_r}=\frac{\partial}{\partial \theta^B}\phi_{A_1 A_2 \dots A_r}-\sum_{i=1}^r \slashed{\Gamma}_{BA_i}^C \phi_{A_1 A_2\dots \hat{A_i} C\dots A_r},\end{equation} where $\slashed{\Gamma}$ is as in \eqref{Gamma.def}. 
\end{proposition} We introduce the following differential operators: \begin{definition}\label{def:slashedL} Define $\slashed{\mathcal L}$ to be the projection of the Lie derivative to the tangent space of $S_{u,\underline{u}}$. \end{definition} \begin{definition}\label{def:div.curl} For a totally symmetric tensor field $\phi$ of rank $r+1$, define $$(\div \phi)_{A_1\cdots A_r} := (\gamma^{-1})^{BC}\slashed{\nabla}_B \phi_{CA_1\cdots A_r},\quad (\slashed{\mathrm{curl}} \phi)_{A_1\cdots A_r} := \in^{BC}\slashed{\nabla}_B \phi_{CA_1\cdots A_r},$$ where $\in$ denotes the volume form with respect to $\gamma$. Finally, define the following operator on $S$-tangent $1$-forms: $$(\slashed{\nabla}\otimes \phi)_{AB} := \slashed{\nabla}_A \phi_B + \slashed{\nabla}_B \phi_A - \gamma_{AB} \div \phi.$$ \end{definition} \subsection{Some basic identities} \begin{proposition}\label{prop:metric.der} The Ricci coefficients can be expressed as derivatives of the metric components in the double null coordinate system (recall \eqref{double.null.coordinates}). More precisely, we have\footnote{In coordinates, the relations in \eqref{metric.derivative.invar} read as follows: \begin{equation}\label{metric.derivative} \begin{split} \frac{\partial}{\partial \underline{u}} \gamma_{AB} = &2\Omega \chi_{AB},\quad (\frac{\partial}{\partial u}+b^C\frac{\partial}{\partial\theta^C}) \gamma_{AB}+\frac{\partial b^C}{\partial \theta^A}\gamma_{BC}+\frac{\partial b^C}{\partial \theta^B}\gamma_{AC} = 2\Omega \underline{\chi}_{AB},\\ \frac{\partial}{\partial \underline{u}} b^A= &-2\Omega^2(\eta^A-\underline{\eta}^A). \end{split} \end{equation}} \begin{equation}\label{metric.derivative.invar} \begin{split} \slashed {\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \gamma = 2\Omega \chi,\quad \slashed{\mathcal L}_{(\frac{\partial}{\partial u}+b^A\frac{\partial}{\partial\theta^A})} \gamma = 2\Omega \underline{\chi},\quad \slashed {\mathcal L}_{\frac{\partial}{\partial \underline{u}}} b=-2\Omega^2(\eta^\sharp-\underline{\eta}^\sharp), \end{split} \end{equation} where $\slashed{\mathcal L}$ is as in Definition~\ref{def:slashedL} and ${ }^\sharp$ denotes the metric dual with respect to $\gamma$. \end{proposition} \begin{proposition} The following relations hold: \begin{equation}\label{Ricci.relation} \begin{split} \omega=-\f12 (e_4\log\Omega),\quad \underline{\omega}=-\f12(e_3\log\Omega), \quad \frac 12 (\eta_A+ \underline{\eta}_A)= \slashed{\nabla}_A\log\Omega. \end{split} \end{equation} \end{proposition} \subsection{Function spaces and angularly regular metrics} In this paper, we will mostly consider a slightly more restricted class of spacetimes in double null coordinates; see already Definition~\ref{double.null.def.2}. The main feature of this class is that the metric is more regular along the directions tangential to $S_{u,\underline{u}}$ (even though it does not lie in a better \emph{isotropic} Sobolev space than $W^{1,2}$). Moreover, the precise regularity is different for different Ricci coefficients, which can be seen as capturing the null structure of the Einstein equations in a double null coordinate system. The regularity that we impose is consistent with the regularity of the spacetimes obtained in \cite{LR2}. \begin{definition} Denote by $\mathrm{dA}_\gamma$ the standard volume form induced by the (Riemannian) metric $\gamma$ on $S_{u,\underline{u}}$, i.e.~in local coordinates, $\mathrm{dA}_\gamma:= \sqrt{\det\gamma}\,\mathrm{d}\theta^1\,\mathrm{d}\theta^2$. 
\end{definition} \begin{definition}[Definition of $L^p$ spaces]\label{def:Lp} In this definition, let $\phi$ be an $S$-tangent rank $r$ tensor field on $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. \begin{enumerate} \item For every $(u,\underline{u})$, define, for $p\in [1,+\infty)$, \begin{equation*} \begin{split} \|\phi\|_{L^p(S_{u,\underline{u}},\gamma)} := &\: (\int_{S_{u,\underline{u}}} |\phi|^p_{\gamma} \, \mathrm{dA}_\gamma)^{\frac 1p} \\ = &\: (\int_{S_{u,\underline{u}}} ( (\gamma^{-1})^{A_1 A_1'} \cdots (\gamma^{-1})^{A_r A_r'} \phi_{A_1\dots A_r} \phi_{A_1'\dots A_r'})^{\frac p2} \, \mathrm{dA}_\gamma)^{\frac 1p}; \end{split} \end{equation*} and for $p=+\infty$, define $$\|\phi\|_{L^\infty(S_{u,\underline{u}},\gamma)} := \mathrm{ess\,sup}_{S_{u,\underline{u}}} |\phi|_{\gamma} = \mathrm{ess\,sup}_{S_{u,\underline{u}}} ((\gamma^{-1})^{A_1 A_1'} \cdots (\gamma^{-1})^{A_r A_r'} \phi_{A_1\dots A_r} \phi_{A_1'\dots A_r'})^{\frac 12}.$$ We will often view $\|\phi\|_{L^p(S_{u,\underline{u}},\gamma)}$ as a function of $u$ and $\underline{u}$. \item For $q\in [1,+\infty)$, $p\in [1,+\infty]$, define $$\|\phi\|_{L^q_u L^p(S_{u,\underline{u}},\gamma)} := (\int_0^{u_*} \|\phi\|_{L^p(S_{u,\underline{u}},\gamma)}^q \, \mathrm{d} u)^{\frac 1q},$$ $$ \|\phi\|_{L^q_{\underline{u}} L^p(S_{u,\underline{u}},\gamma)} := (\int_0^{\underline{u}_*} \|\phi\|_{L^p(S_{u,\underline{u}},\gamma)}^q \, \mathrm{d} \underline{u})^{\frac 1q}.$$ These two quantities will be viewed as functions of $\underline{u}$ and $u$ respectively. Define also $L^\infty_u L^p(S_{u,\underline{u}},\gamma)$ and $L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\gamma)$ after the obvious modifications. \item For $s\in [1, +\infty)$, $p,\,q\in [1,+\infty]$, define $$\|\phi\|_{L^s_u L^q_{\underline{u}} L^p(S_{u,\underline{u}},\gamma)} := (\int_0^{u_*} \|\phi\|_{L^q_{\underline{u}} L^p(S_{u,\underline{u}},\gamma)}^s \,\mathrm{d} u)^{\frac 1s},$$ $$ \|\phi\|_{L^s_{\underline{u}} L^q_u L^p(S_{u,\underline{u}},\gamma)} := (\int_0^{\underline{u}_*} \|\phi\|_{L^q_{u} L^p(S_{u,\underline{u}},\gamma)}^s \,\mathrm{d} \underline{u})^{\frac 1s}.$$ In a similar manner as in (2), we also allow $s=+\infty$ after the obvious modifications. \end{enumerate} \end{definition} \begin{definition}[Definition of Sobolev spaces] Let $\phi$ be an $S$-tangent tensor field. \begin{enumerate} \item For every $m\in \mathbb N\cup \{0\}$ and $p\in [1,+\infty]$, define $$\|\phi\|_{W^{m,p}(S_{u,\underline{u}},\gamma)}:= \sum_{i=0}^m \|\slashed{\nabla}^i \phi\|_{L^p(S_{u,\underline{u}},\gamma)},$$ where $\slashed{\nabla}$ is the Levi--Civita connection associated to $\gamma$. \item Define also $L^q_{\underline{u}}W^{m,p}(S_{u,\underline{u}},\gamma)$, $L^r_u L^q_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\gamma)$, etc.~in a similar manner as in Definitions~\ref{def:Lp}.2 and \ref{def:Lp}.3, after replacing $L^p(S_{u,\underline{u}},\gamma)$ by $W^{m,p}(S_{u,\underline{u}},\gamma)$. \end{enumerate} \end{definition} \begin{definition}[Definition of the BV spaces]\label{def:BV} Let $\phi$ be an $S$-tangent rank $r$ tensor field on $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. 
Define \begin{equation*} \begin{split} \|\phi\|_{BV(H_u, \gamma)} := &\: \int_0^{\underline{u}_*} \|\phi\|_{L^1(S_{u,\underline{u}},\gamma)}\,\mathrm{d} \underline{u} + \sup \left\{| \int_0^{\underline{u}_*} \int_{S_{u,\underline{u}}} (\frac{\partial}{\partial \underline{u}}\varphi) \phi \, \mathrm{dA}_{\gamma} \,\mathrm{d} \underline{u}| : \varphi \in C^1_c,\,|\varphi|\leq 1\right\} \\ &\: + \sup \left\{| \int_0^{\underline{u}_*} \int_{S_{u,\underline{u}}} (\div\slashed{X}) \phi \, \mathrm{dA}_{\gamma} \,\mathrm{d} \underline{u}| : \slashed{X} \in C^1_c,\,\sup_{u,\,\underline{u}} \|\slashed{X} \|_{L^1(S_{u,\underline{u}},\gamma)}\leq 1\right\}. \end{split} \end{equation*} Similarly, we define \begin{equation*} \begin{split} \|\phi\|_{BV(\underline{H}_{\underline{u}}, \gamma)} := &\: \int_0^{u_*} \|\phi\|_{L^1(S_{u,\underline{u}},\gamma)}\,\mathrm{d} u + \sup \left\{| \int_0^{u_*} \int_{S_{u,\underline{u}}} (\frac{\partial}{\partial u}\varphi) \phi \, \mathrm{dA}_{\gamma} \,\mathrm{d} u| : \varphi \in C^1_c,\,|\varphi|\leq 1\right\} \\ &\: + \sup \left\{| \int_0^{u_*} \int_{S_{u,\underline{u}}} (\div\slashed{X}) \phi \, \mathrm{dA}_{\gamma} \,\mathrm{d} u| : \slashed{X} \in C^1_c,\,\sup_{u,\,\underline{u}} \|\slashed{X} \|_{L^1(S_{u,\underline{u}},\gamma)}\leq 1\right\}. \end{split} \end{equation*} \end{definition} \begin{definition}[Continuity in $u$ and/or $\underline{u}$] Define the space $C^0_u C^0_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\gamma)$ as the completion of smooth tensor fields under the $L^\infty_u L^\infty_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\gamma)$ norm. Define $C^0_u L^q_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\gamma)$, $C^0_{\underline{u}} L^q_{u} W^{m,p}(S_{u,\underline{u}},\gamma)$, etc.~in a similar manner. \end{definition} At this point, let us recall that BV functions, even though they are defined only a.e., have well-defined traces on Lipschitz hypersurfaces. We will in particular need the following statement (whose proof can be found for instance in \cite[Theorem~5.6]{Evans}): \begin{lemma}\label{lem:trace} Let $f:[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2 \to \mathbb R$ be such that $f \in C^0_u BV(H_u,\gamma)$. Then the following holds. \begin{enumerate} \item For every $(u,\underline{u})\in [0,u_*]\times (0,\underline{u}_*]$, there is an $L^1(S_{u,\underline{u}},\gamma)$ function $f^-(u,\underline{u},\vartheta)$ such that $$\lim_{\epsilon\to 0^+} \frac 1\epsilon\int_{\underline{u}-\epsilon}^{\underline{u}}\int_{S_{u,\underline{u}'}} |f^-(u,\underline{u},\vartheta) - f(u,\underline{u}',\vartheta)|\,\mathrm{dA}_\gamma\,\mathrm{d} \underline{u}'= 0.$$ \item For every $(u,\underline{u})\in [0,u_*]\times [0,\underline{u}_*)$, there is an $L^1(S_{u,\underline{u}},\gamma)$ function $f^+(u,\underline{u},\vartheta)$ such that $$\lim_{\epsilon\to 0^+} \frac 1\epsilon\int_{\underline{u}}^{\underline{u}+\epsilon}\int_{S_{u,\underline{u}'}} |f^+(u,\underline{u},\vartheta) - f(u,\underline{u}',\vartheta)|\,\mathrm{dA}_\gamma\,\mathrm{d} \underline{u}'= 0.$$ \end{enumerate} Similar statements hold for $f\in C^0_{\underline{u}} BV(\underline{H}_{\underline{u}},\gamma)$ after swapping $u$ and $\underline{u}$; we omit the details. \end{lemma} We need one more definition before we introduce the class of spacetimes we consider. We introduce an auxiliary metric on all of $\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ to measure the regularity of $\gamma$. We define this to be the Lie-transported $\gamma \restriction_{S_{0,0}}$. 
\begin{definition}\label{def:gamma00} Define $\gamma_{0,0}$ on $\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ so that on the initial $2$-sphere $S_{0,0}$, $\gamma_{0,0} = \gamma\restriction_{S_{0,0}}$, and that\footnote{Note that this is possible since $[\frac{\partial}{\partial u}, \frac{\partial}{\partial\underline{u}}] = 0$.} $$\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \gamma_{0,0} = 0,\quad \slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \gamma_{0,0} = 0$$ everywhere else in $\mathcal M$. \end{definition} We are now ready to define the class of spacetimes that we study for the remainder of the paper. \begin{definition}[Angularly regular double null metrics]\label{double.null.def.2} Let $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2, g)$ be a spacetime in double null coordinates (see Definition~\ref{double.null.def}). We say that $(\mathcal M, g)$ is \textbf{angularly regular} if \begin{enumerate} \item $\gamma - \gamma_{0,0},\,b,\,\log\Omega \in C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}}) \cap L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$, \item $\sup_{u,\underline{u}}({\bf I}(S_{u,\underline{u}},\gamma) + \mathrm{Area}(S_{u,\underline{u}},\gamma) + (\mathrm{Area}(S_{u,\underline{u}},\gamma))^{-1}) <+\infty$, $\log \frac{\det\gamma}{\det\gamma_{0,0}} \in C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})$, \item $K \in C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}}) \cap L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} L^2_{u} W^{2,2}(S_{u,\underline{u}})$, \item $\chi,\,\omega \in L^2_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}}) \cap C^0_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$, $\underline{\chi},\,\underline{\omega} \in L^2_{u} L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}}) \cap C^0_{\underline{u}} L^2_{u} W^{2,2}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} L^2_{u} W^{3,2}(S_{u,\underline{u}})$, \item $\eta,\,\underline{\eta} \in C^0_u C^0_{\underline{u}} W^{1,4}(S_{u,\underline{u}}) \cap L^\infty_u L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} L^2_u W^{3,2}(S_{u,\underline{u}})$, \item $\slashed{\mathrm{tr}}\chi \in C^0_u L^1_{\underline{u}} W^{2,1}(S_{u,\underline{u}}) \cap L^\infty_u BV(H_u) \cap L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$, $\slashed{\mathrm{tr}}\underline{\chi} \in C^0_{\underline{u}} L^1_u W^{2,1}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} BV(\underline{H}_{\underline{u}}) \cap L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$. \end{enumerate} In the above, we have written $S_{u,\underline{u}}$, $H_u$ and $\underline{H}_{\underline{u}}$ instead of $(S_{u,\underline{u}},\gamma)$, $(H_u,\gamma)$ and $(\underline{H}_{\underline{u}},\gamma)$ to simplify the notations. \end{definition} \begin{remark} Note that because of the presence of $C^0_u$ and $C^0_{\underline{u}}$ in the above spaces, we can talk about $\gamma$, $b$, $\log\Omega$, $\eta$ and $\underline{\eta}$ for every $u$ and $\underline{u}$ (and not just for almost every $u$ and $\underline{u}$). 
We can also regard $\chi$ and $\omega$ as $L^2_{\underline{u}}W^{2,2}(S_{u,\underline{u}})$ tensor fields and $\slashed{\mathrm{tr}}\chi$ as an $L^1_{\underline{u}} W^{2,1}(S_{u,\underline{u}})$ function for every $u$ (as opposed to just for almost every $u$). Similarly, we can regard $\underline{\chi}$ and $\underline{\omega}$ as $L^2_{u}W^{2,2}(S_{u,\underline{u}})$ tensor fields and $\slashed{\mathrm{tr}}\underline{\chi}$ as an $L^1_u W^{2,1}(S_{u,\underline{u}})$ function for every $\underline{u}$ (as opposed to just for almost every $\underline{u}$). \end{remark} \subsection{Null structure equations}\label{sec:null.structure.eqn} We introduce the \emph{null structure equations}, which are equations for the Ricci coefficients that hold for solutions of the Einstein vacuum equations. To state these equations, we need the following definitions (in addition to Definition~\ref{def:div.curl}): \begin{definition}\label{def:contractions} Define the following contractions $$\phi^{(1)}\cdot\phi^{(2)} := (\gamma^{-1})^{AC}(\gamma^{-1})^{BD}\phi^{(1)}_{AB}\phi^{(2)}_{CD} \quad\mbox{for symmetric $2$-tensors $\phi^{(1)}_{AB}$, $\phi^{(2)}_{AB}$,}$$ $$\phi^{(1)}\cdot\phi^{(2)} := (\gamma^{-1})^{AB}\phi^{(1)}_{A}\phi^{(2)}_{B} \quad\mbox{for $1$-forms $\phi^{(1)}_{A}$, $\phi^{(2)}_{A}$,}$$ $$(\phi^{(1)}\cdot\phi^{(2)})_A := (\gamma^{-1})^{BC}\phi^{(1)}_{AB}\phi^{(2)}_{C} \quad\mbox{for a symmetric $2$-tensor $\phi^{(1)}_{AB}$ and a $1$-form $\phi^{(2)}_{A}$,}$$ $$(\phi^{(1)}\widehat{\otimes}\phi^{(2)})_{AB} := \phi^{(1)}_A\phi^{(2)}_B+\phi^{(1)}_B\phi^{(2)}_A-\gamma_{AB}((\gamma^{-1})^{CD}\phi^{(1)}_C\phi^{(2)}_D) \quad\mbox{for $1$-forms $\phi^{(1)}_A$, $\phi^{(2)}_A$,}$$ $$\phi^{(1)}\wedge\phi^{(2)} := \in^{AB}(\gamma^{-1})^{CD}\phi^{(1)}_{AC}\phi^{(2)}_{BD}\quad\mbox{for symmetric $2$-tensors $\phi^{(1)}_{AB}$, $\phi^{(2)}_{AB}$}.$$ Define $^*$ of $1$-forms and symmetric $2$-tensors respectively as follows (note that on $1$-forms this is the Hodge dual on $S_{u,\underline{u}}$): \begin{align*} ^*\phi_A := & \gamma_{AC} \in^{CB} \phi_B, \quad ^*\phi_{AB} := \gamma_{BD} \in^{DC} \phi_{AC}. \end{align*} Define also the trace for totally symmetric tensors of rank $r$ to be \begin{equation}\label{tr.def} (\slashed{\mathrm{tr}}\phi)_{A_1...A_{r-1}}:= (\gamma^{-1})^{BC}\phi_{BCA_1...A_{r-1}}. \end{equation} \end{definition} We now give a list of the null structure equations. \begin{proposition}\label{prop:null.structure} Let $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be a $C^2$ spacetime in double null coordinates. 
If $(\mathcal M,g)$ solves the Einstein vacuum equations, then the following \textbf{null structure equations} hold: \begin{align} \slashed{\nabla}_4 \slashed{\mathrm{tr}}\chi+\frac 12 (\slashed{\mathrm{tr}}\chi)^2=&\: -|\hat{\chi}|_\gamma^2-2\omega \slashed{\mathrm{tr}}\chi, \label{Ric44}\\ \slashed{\nabla}_3 \slashed{\mathrm{tr}}\underline{\chi}+\frac 12 (\slashed{\mathrm{tr}}\underline{\chi})^2=&\: -|\hat{\underline{\chi}}|_\gamma^2-2\underline{\omega} \slashed{\mathrm{tr}}\underline{\chi}, \label{Ric33} \\ \slashed{\nabla}_4\eta + \frac 34 \slashed{\mathrm{tr}}\chi (\eta-\underline{\eta}) =&\: \div\hat{\chi} -\frac 12 \slashed{\nabla} \slashed{\mathrm{tr}}\chi - \frac 12(\eta - \underline{\eta})\cdot \hat{\chi}, \label{Ric4A} \\ \slashed{\nabla}_3\underline{\eta} +\frac 34 \slashed{\mathrm{tr}}\underline{\chi} (\underline{\eta}-\eta) = &\: \div\hat{\underline{\chi}} - \frac 12 \slashed{\nabla} \slashed{\mathrm{tr}}\underline{\chi} - \frac 12(\underline{\eta}-\eta) \cdot \hat{\underline{\chi}}, \label{Ric3A} \\ \slashed{\nabla}_4 \slashed{\mathrm{tr}}\underline{\chi}+\slashed{\mathrm{tr}}\chi \slashed{\mathrm{tr}}\underline{\chi} =&\: 2\omega \slashed{\mathrm{tr}}\underline{\chi} -2 K +2\div \underline{\eta} +2|\underline{\eta}|_\gamma^2,\label{trRicAB}\\ \slashed{\nabla}_3 \slashed{\mathrm{tr}}\chi+ \slashed{\mathrm{tr}}\underline{\chi} \slashed{\mathrm{tr}}\chi =&\: 2\underline{\omega} \slashed{\mathrm{tr}}\chi -2K +2\div \eta+2|\eta|_\gamma^2, \label{trRicAB.1}\\ \slashed{\nabla}_4\hat{\underline{\chi}} +\frac 12 \slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} =&\: \slashed{\nabla}\widehat{\otimes} \underline{\eta}+2\omega \hat{\underline{\chi}} -\frac 12\slashed{\mathrm{tr}}\underline{\chi} \hat{\chi} + \underline{\eta}\widehat{\otimes} \underline{\eta}, \label{RicAB} \\ \slashed{\nabla}_3\hat{\chi} +\frac 12 \slashed{\mathrm{tr}}\underline{\chi} \hat{\chi} =&\: \slashed{\nabla}\widehat{\otimes} \eta+2\underline{\omega} \hat{\chi} -\frac 12\slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} + \eta\widehat{\otimes} \eta, \label{RicAB.1} \\ \slashed{\nabla}_4\underline{\omega}-2\omega\underline{\omega}+ \eta\cdot\underline{\eta}-\frac 12|\eta|_\gamma^2 =&\: -\f12(K-\f12 \hat{\chi}\cdot\hat{\underline{\chi}}+\f14 \slashed{\mathrm{tr}}\chi\slashed{\mathrm{tr}}\underline{\chi}), \label{Ric34} \\ \slashed{\nabla}_3\omega-2\omega\underline{\omega}+\eta\cdot\underline{\eta}-\frac 12|\underline{\eta}|_\gamma^2 =&\:-\f12(K-\f12 \hat{\chi}\cdot\hat{\underline{\chi}}+\f14 \slashed{\mathrm{tr}}\chi\slashed{\mathrm{tr}}\underline{\chi}). \label{Ric34.1} \end{align} \end{proposition} \begin{proof} See the derivation for instance in \cite{Chr}. \qedhere \end{proof} \subsection{Weak formulation of transport equations}\label{sec:weak.transport} \begin{definition}\label{def:weak.transport} Let $(\mathcal M,g)$ be an angularly regular metric in double null coordinates. Consider the transport equations \begin{equation}\label{eq:transport.3} \slashed{\nabla}_3 \phi = F, \end{equation} \begin{equation}\label{eq:transport.4} \slashed{\nabla}_4 \psi = G, \end{equation} where $\phi$, $F$, $\psi$, $G$ are $S$-tangent covariant tensor fields of rank $r$ in $C^0_u C^0_{\underline{u}} L^p(S)$ for some $p\in [1,+\infty]$. 
We say that \eqref{eq:transport.3} (resp.~\eqref{eq:transport.4}) is satisfied in the \textbf{integrated sense} if for every $C^1$ contravariant $S$-tangent tensor $\varphi$ of rank $r$, the following holds $\forall 0\leq u_1< u_2\leq u_*,\,\forall \underline{u} \in [0,\underline{u}_*]$ \begin{equation*} \begin{split} \int_{S_{u_2,\underline{u}}} \langle\varphi, \phi\rangle \Omega \,\mathrm{d} A_{\gamma} &\: - \int_{S_{u_1,\underline{u}}} \langle\varphi, \phi\rangle \Omega \,\mathrm{d} A_{\gamma} - \int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} (\langle \varphi, F + (\slashed{\mathrm{tr}}\underline{\chi} - 2\underline{\omega})\phi \rangle + \langle \slashed{\nabla}_3\varphi, \phi\rangle) \Omega^2 \,\mathrm{d} A_{\gamma}\, \mathrm{d} u' =0, \end{split} \end{equation*} (resp.~$\forall 0\leq \underline{u}_1< \underline{u}_2\leq \underline{u}_*,\,\forall u \in [0,u_*]$ \begin{equation*} \begin{split} \int_{S_{u,\underline{u}_2}} \langle\varphi, \psi\rangle \Omega \,\mathrm{d} A_{\gamma} &\: - \int_{S_{u,\underline{u}_1}} \langle\varphi, \psi\rangle \Omega \,\mathrm{d} A_{\gamma} - \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} (\langle \varphi, G + (\slashed{\mathrm{tr}}\chi - 2\omega)\psi\rangle + \langle \slashed{\nabla}_4\varphi,\psi\rangle) \Omega^2 \,\mathrm{d} A_{\gamma}\, \mathrm{d} \underline{u}' = 0. ) \end{split} \end{equation*} \end{definition} \begin{definition}\label{def:weaker.transport} Let $(\mathcal M,g)$ be an angularly regular metric in double null coordinates. Consider the transport equations \eqref{eq:transport.3} and \eqref{eq:transport.4}, where now $\phi$, $F$ are $S$-tangent covariant tensor fields of rank $r$ in $C^0_u L^2_{\underline{u}} L^p(S)$ for some $p\in [1,+\infty]$, and $\psi$, $G$ are $S$-tangent covariant tensor fields of rank $r$ in $C^0_{\underline{u}} L^2_u L^p(S)$ for some $p \in [1,+\infty]$. We say that \eqref{eq:transport.3} (resp.~\eqref{eq:transport.4}) is satisfied in the \textbf{weak integrated sense} if for every $C^1$ contravariant $S$-tangent tensor $\varphi$ of rank $r$, the following holds $\forall 0\leq u_1< u_2\leq u_*$: \begin{equation*} \begin{split} \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \phi\rangle \Omega \,\mathrm{d} A_{\gamma}\, \mathrm{d} \underline{u} &\: - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \phi\rangle \Omega \,\mathrm{d} A_{\gamma}\,\mathrm{d} \underline{u} \\ &\: - \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} (\langle \varphi, F + (\slashed{\mathrm{tr}}\underline{\chi} - 2\underline{\omega})\phi \rangle + \langle \slashed{\nabla}_3\varphi, \phi\rangle) \Omega^2 \,\mathrm{d} A_{\gamma}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} =0, \end{split} \end{equation*} (resp.~$\forall 0\leq \underline{u}_1< \underline{u}_2\leq \underline{u}_*$: \begin{equation*} \begin{split} \int_0^{u_*} \int_{S_{u,\underline{u}_2}} \langle\varphi, \psi\rangle \Omega \,\mathrm{d} A_{\gamma} \,\mathrm{d} u &\: - \int_0^{u_*} \int_{S_{u,\underline{u}_1}} \langle\varphi, \psi\rangle \Omega \,\mathrm{d} A_{\gamma} \,\mathrm{d} u \\ &\: - \int_0^{u_*} \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} (\langle \varphi, G + (\slashed{\mathrm{tr}}\chi - 2\omega)\psi\rangle + \langle \slashed{\nabla}_4\varphi,\psi\rangle) \Omega^2 \,\mathrm{d} A_{\gamma}\, \mathrm{d} \underline{u}'\,\mathrm{d} u = 0. 
)
\end{split}
\end{equation*}
\end{definition}
\begin{remark}
It is easy to see by integration by parts, \eqref{metric.derivative.invar} and \eqref{Ricci.relation} that
$$\boxed{\mbox{classical sense}} \implies \boxed{\mbox{integrated sense}} \implies \boxed{\mbox{weak integrated sense}}.$$
\end{remark}
\subsection{Weak formulation of the Einstein vacuum equations in the double null gauge}\label{sec.weak.double.null}
In this subsection, we give a weak formulation of the Einstein vacuum equations in the double null gauge, which is slightly stronger than that in Definition~\ref{def:vacuum} and takes advantage of angular regularity. Our formulation relies on notions introduced in Sections~\ref{sec:null.structure.eqn} and \ref{sec:weak.transport}.
\begin{definition}\label{def:weak.sol.vac.ang.reg}
Let $(\mathcal M= [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be an angularly regular spacetime in double null coordinates. We say that $(\mathcal M,g)$ is an \textbf{angularly regular weak solution to the Einstein vacuum equations} if the following holds:
\begin{enumerate}
\item \eqref{Ric44}--\eqref{Ric3A} are satisfied in the integrated sense (Definition~\ref{def:weak.transport}).
\item \eqref{trRicAB}--\eqref{Ric34.1} are satisfied in the weak integrated sense (Definition~\ref{def:weaker.transport}).
\end{enumerate}
\end{definition}
\begin{remark}
We remark that in order to make sense of Definition~\ref{def:weak.sol.vac.ang.reg}, we do not need the full regularity assumptions in Definition~\ref{double.null.def.2}. We make the stronger assumptions in Definition~\ref{double.null.def.2} because that will be the relevant class of spacetimes in the later parts of the paper.
\end{remark}
The following proposition clarifies the relation between the notions of solutions in Definitions~\ref{def:vacuum} and \ref{def:weak.sol.vac.ang.reg}, as well as their relation to classical solutions. Part (1) is an immediate consequence of Proposition~\ref{prop:null.structure}; part (2) is a direct computation. We omit the details.
\begin{proposition}\label{prop:weak.sol.vac.ang.reg}
\begin{enumerate}
\item Suppose $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ is a $C^2$ (classical) solution to the Einstein vacuum equations in double null coordinates (see Definition~\ref{double.null.def}). Then $(\mathcal M, g)$ is an angularly regular weak solution to the Einstein vacuum equations in the sense of Definition~\ref{def:weak.sol.vac.ang.reg}.
\item Suppose $(\mathcal M= [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ is an angularly regular weak solution to the Einstein vacuum equations in the sense of Definition~\ref{def:weak.sol.vac.ang.reg}. Then $(\mathcal M, g)$ is a weak solution to the Einstein vacuum equations in the sense of Definition~\ref{def:vacuum}.
\end{enumerate}
\end{proposition}
\subsection{Weak formulation of the Einstein--null dust system in the double null gauge}\label{sec.weak.double.null.dust}
In analogy to Definition~\ref{def:weak.sol.vac.ang.reg}, we introduce in this subsection a weak formulation of the Einstein--null dust system in the double null gauge that uses angular regularity; see already Definition~\ref{def:weak.sol.ang.reg}. We begin by defining the class of measures (which will represent the null dusts) that we will consider.
\begin{definition}\label{def:ang.reg.null.dust}
Let $(\mathcal M= [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be an angularly regular spacetime in double null coordinates.
Let $\{\mathrm{d} \nu_u\}_{u\in [0,u_*]}$ be a $1$-parameter family of measures such that for every $u\in [0,u_*]$, $\mathrm{d}\nu_u$ is a non-negative Radon measure on $\{u\}\times (0,\underline{u}_*) \times \mathbb S^2$. Similarly, let $\{\mathrm{d} \underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*]}$ be a $1$-parameter family of measures such that for every $\underline{u} \in [0,\underline{u}_*]$, $\mathrm{d}\underline{\nu}_{\underline{u}}$ is a non-negative Radon measure on $(0,u_*)\times \{\underline{u}\}\times \mathbb S^2$.
We say that $\{\mathrm{d}\nu_u\}_{u \in [0,u_*]}$ and $\{\mathrm{d}\underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*] }$ are \textbf{angularly regular} if the following holds:
\begin{enumerate}
\item $\mathrm{d}\nu_u$ is continuous in $u$ and $\mathrm{d}\underline{\nu}_{\underline{u}}$ is continuous in $\underline{u}$ with respect to the weak-* topology, i.e.,
$$\lim_{u'\to u} \int_{\{u'\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\mathrm{d} \nu_{u'} = \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\mathrm{d} \nu_{u},\quad \forall \varphi\in C^0_c([0,u_*]\times (0,\underline{u}_*)\times \mathbb S^2),$$
$$\lim_{\underline{u}'\to \underline{u}} \int_{(0,u_*)\times \{\underline{u}'\}\times \mathbb S^2} \varphi \,\mathrm{d} \underline{\nu}_{\underline{u}'} = \int_{(0,u_*)\times \{\underline{u}\}\times \mathbb S^2} \varphi \,\mathrm{d} \underline{\nu}_{\underline{u}},\quad \forall \varphi\in C^0_c((0,u_*)\times [0,\underline{u}_*]\times \mathbb S^2).$$
\item Angular regularity holds in the following sense. For every $u$, let
\begin{equation*}
\begin{split}
\accentset{\scalebox{.7}{\mbox{\tiny (0)}}}{{\mathfrak X}}_u:= \{ \varphi \mbox{ real valued function} : &\: \varphi \in C^0_{\underline{u}}L^1(S_{u,\underline{u}}),\,\|\varphi \|_{L^\infty_{\underline{u}}L^1(S_{u,\underline{u}})} \leq 1\},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\accentset{\scalebox{.7}{\mbox{\tiny (1)}}}{{\mathfrak X}}_u:= \{ \slashed X \mbox{ $S$-tangent vector field} : &\: \slashed X,\,\div\slashed{X} \in C^0_{\underline{u}}L^1(S_{u,\underline{u}}),\,\|\slashed X\|_{L^\infty_{\underline{u}}L^1(S_{u,\underline{u}})} \leq 1\},
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\accentset{\scalebox{.7}{\mbox{\tiny (2)}}}{{\mathfrak X}}_u:= \{ (\slashed X, \slashed Y) : &\: \slashed X, \,\slashed Y \mbox{ $S$-tangent vector fields}, \slashed X,\,\slashed Y,\, \div (\slashed{X}\otimes \slashed Y),\\
&\: \div \div (\slashed{X}\otimes\slashed{Y}) \in C^0_{\underline{u}}L^1(S_{u,\underline{u}}),\, \|\slashed X\otimes \slashed Y\|_{L^\infty_{\underline{u}}L^{\frac 43}(S_{u,\underline{u}})} \leq 1 \}.
\end{split}
\end{equation*}
Similarly, for every $\underline{u}$, define
\begin{equation*}
\begin{split}
\accentset{\scalebox{.7}{\mbox{\tiny (0)}}}{{\mathcal X}}_{\underline{u}}:= \{ \varphi \mbox{ real valued function} : &\: \varphi \in C^0_u L^1(S_{u,\underline{u}}),\,\|\varphi \|_{L^\infty_u L^1(S_{u,\underline{u}})} \leq 1\},
\end{split}
\end{equation*}
$$\accentset{\scalebox{.7}{\mbox{\tiny (1)}}}{{\mathcal X}}_{\underline{u}}:= \{ \slashed X \mbox{ $S$-tangent vector field} : \slashed X,\,\div \slashed{X} \in C^0_{u}L^1(S_{u,\underline{u}}),\,\|\slashed X\|_{L^\infty_{u}L^1(S_{u,\underline{u}})} \leq 1\},$$
\begin{equation*}
\begin{split}
\accentset{\scalebox{.7}{\mbox{\tiny (2)}}}{{\mathcal X}}_{\underline{u}}:= \{ (\slashed X, \slashed Y) : &\: \slashed X, \,\slashed Y \mbox{ $S$-tangent vector fields}, \slashed X,\,\slashed{Y},\, \div (\slashed{X}\otimes \slashed Y),\\
&\: \div \div (\slashed{X}\otimes\slashed{Y}) \in C^0_{u}L^1(S_{u,\underline{u}}),\, \|\slashed X\otimes \slashed Y\|_{L^\infty_{u}L^{\frac 43}(S_{u,\underline{u}})} \leq 1 \}.
\end{split}
\end{equation*}
Then there exists $C>0$ such that
\begin{equation}\label{eq:nu.add.reg}
\begin{split}
&\: \sup_{u \in [0,u_*]} \left(\sup_{\varphi \in \accentset{\scalebox{.7}{\mbox{\tiny (0)}}}{{\mathfrak X}}_{u} } \left|\int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \,\mathrm{d}\nu_u\right| + \sup_{\slashed X\in \accentset{\scalebox{.7}{\mbox{\tiny (1)}}}{{\mathfrak X}}_{u} } \left|\int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \div \slashed{X}\,\mathrm{d}\nu_u\right| \right) \\
&\: \qquad + \sup_{u \in [0,u_*]} \sup_{(\slashed X,\,\slashed Y) \in \accentset{\scalebox{.7}{\mbox{\tiny (2)}}}{{\mathfrak X}}_{u} } \left|\int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \div\, \div (\slashed{X}\otimes\slashed{Y}) \,\mathrm{d}\nu_u\right| \leq C,
\end{split}
\end{equation}
and
\begin{equation}\label{eq:nub.add.reg}
\begin{split}
&\: \sup_{\underline{u} \in [0,\underline{u}_*]} \left(\sup_{\varphi \in \accentset{\scalebox{.7}{\mbox{\tiny (0)}}}{{\mathcal X}}_{\underline{u}}} \left|\int_{[0,u_*]\times \{\underline{u}\} \times \mathbb S^2} \varphi \,\mathrm{d}\underline{\nu}_{\underline{u}} \right| + \sup_{\slashed X\in \accentset{\scalebox{.7}{\mbox{\tiny (1)}}}{{\mathcal X}}_{\underline{u}} } \left|\int_{[0,u_*]\times \{\underline{u}\} \times \mathbb S^2} \div \slashed{X}\,\mathrm{d}\underline{\nu}_{\underline{u}} \right|\right) \\
&\: \qquad + \sup_{\underline{u} \in [0,\underline{u}_*]} \sup_{(\slashed X,\,\slashed Y) \in \accentset{\scalebox{.7}{\mbox{\tiny (2)}}}{{\mathcal X}}_{\underline{u}} } \left|\int_{[0,u_*]\times \{\underline{u}\}\times \mathbb S^2} \div\, \div (\slashed{X}\otimes\slashed{Y}) \, \mathrm{d}\underline{\nu}_{\underline{u}}\right| \leq C,
\end{split}
\end{equation}
where we used the convention $\div\, \div (\slashed{X}\otimes\slashed{Y}):= \slashed{\nabla}_A \slashed{\nabla}_B (X^A Y^B)$.
\end{enumerate}
\end{definition}
\begin{definition}\label{def:weak.sol.ang.reg}
Let $(\mathcal M= [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be an angularly regular spacetime in double null coordinates, and let $(\{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*]})$ be angularly regular non-negative Radon measures (see Definition~\ref{def:ang.reg.null.dust}).
We say that $(\mathcal M,g,\{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*]})$ is an \textbf{angularly regular weak solution to the Einstein--null dust system} if the following holds:
\begin{enumerate}
\item \eqref{Ric4A} and \eqref{Ric3A} are satisfied in the integrated sense (Definition~\ref{def:weak.transport}).
\item \eqref{trRicAB}--\eqref{Ric34.1} are satisfied in the weak integrated sense (Definition~\ref{def:weaker.transport}).
\item The following equations hold for $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ (instead of \eqref{Ric44} and \eqref{Ric33}). For all $0\leq u_1<u_2 \leq u_*$, $\underline{u}\in [0,\underline{u}_*]$ and $C^1$ function $\varphi: [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2\to \mathbb R$,
\begin{equation}\label{eq:trchb}
\begin{split}
&\: \int_{S_{u_2,\underline{u}}} \varphi \Omega \slashed{\mathrm{tr}}\underline{\chi}^- \,\mathrm{dA}_{\gamma} - \int_{S_{u_1,\underline{u}}} \varphi \Omega \slashed{\mathrm{tr}}\underline{\chi}^+ \,\mathrm{dA}_{\gamma} \\
= &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} ((e_3\varphi)\slashed{\mathrm{tr}}\underline{\chi} -4 \varphi \underline{\omega} \slashed{\mathrm{tr}}\underline{\chi}+ \f12 \varphi (\slashed{\mathrm{tr}}\underline{\chi})^2 - \varphi|\hat{\underline{\chi}}|_{\gamma}^2)\Omega^2\,\mathrm{dA}_{\gamma} \,\mathrm{d} u -\int_{(u_1, u_2)\times \{\underline{u}\}\times \mathbb S^2} \varphi\,\mathrm{d} \underline{\nu}_{\underline{u}}.
\end{split}
\end{equation}
For all $0\leq \underline{u}_1 < \underline{u}_2 \leq \underline{u}_*$, $u\in [0,u_*]$ and $C^1$ function $\varphi: [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2\to \mathbb R$,
\begin{equation}\label{eq:trch}
\begin{split}
&\: \int_{S_{u,\underline{u}_2}} \varphi \Omega \slashed{\mathrm{tr}}\chi^- \,\mathrm{dA}_{\gamma} - \int_{S_{u,\underline{u}_1}} \varphi \Omega \slashed{\mathrm{tr}}\chi^+ \,\mathrm{dA}_{\gamma} \\
= &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}}} ((e_4 \varphi)\slashed{\mathrm{tr}}\chi -4 \varphi \omega \slashed{\mathrm{tr}}\chi+ \f12 \varphi (\slashed{\mathrm{tr}}\chi)^2 - \varphi|\hat{\chi}|_{\gamma}^2)\Omega^2\,\mathrm{dA}_{\gamma} \,\mathrm{d} \underline{u} -\int_{\{u\}\times (\underline{u}_1, \underline{u}_2)\times \mathbb S^2} \varphi\,\mathrm{d} \nu_u.
\end{split}
\end{equation}
\item The following equations hold for $\mathrm{d} \nu_u$ and $\mathrm{d}\underline{\nu}_{\underline{u}}$. For every $0\leq u_1 <u_2 \leq u_*$, and every $C^1_c$ function $\varphi: [0,u_*]\times (0,\underline{u}_*) \times \mathbb S^2\to \mathbb R$,
\begin{equation}\label{eq:nu}
\begin{split}
&\: \int_{\{u_2\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi \,\mathrm{d} \nu_{u_2} \\
= &\: \int_{\{u_1\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi\,\mathrm{d}\nu_{u_1} + \int_{u_1}^{u_2} \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} (\frac{\partial\varphi}{\partial u} + \slashed{\nabla}_{b} \varphi ) \,\mathrm{d} \nu_{u}\,\mathrm{d} u.
\end{split}
\end{equation}
For every $0\leq \underline{u}_1 <\underline{u}_2 \leq \underline{u}_*$, and every $C^1_c$ function $\varphi: (0,u_*)\times [0,\underline{u}_*] \times \mathbb S^2\to \mathbb R$,
\begin{equation}\label{eq:nub}
\int_{(0,u_*) \times \{\underline{u}_2\} \times \mathbb S^2} \varphi \,\mathrm{d} \underline{\nu}_{\underline{u}_2} = \int_{(0,u_*) \times \{\underline{u}_1\} \times \mathbb S^2} \varphi\,\mathrm{d}\underline{\nu}_{\underline{u}_1} + \int_{\underline{u}_1}^{\underline{u}_2} \int_{(0,u_*) \times \{\underline{u}\} \times \mathbb S^2} \frac{\partial\varphi}{\partial \underline{u}} \,\mathrm{d} \underline{\nu}_{\underline{u}}\,\mathrm{d} \underline{u}.
\end{equation}
\end{enumerate}
\end{definition}
The following is easy to check (cf.~part (2) of Proposition~\ref{prop:weak.sol.vac.ang.reg} in the vacuum case); we omit the proof.
\begin{lemma}
Suppose $(\mathcal M,g,\{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*]})$ is an angularly regular weak solution to the Einstein--null dust system in the sense of Definition~\ref{def:weak.sol.ang.reg}. Then, for $\mathrm{d} \nu:= \Omega^2\, \mathrm{d}\nu_u \,\mathrm{d} u$ and $\mathrm{d}\underline{\nu}:= \Omega^2 \,\mathrm{d}\underline{\nu}_{\underline{u}}\,\mathrm{d}\underline{u}$, $(\mathcal M, g, \mathrm{d}\nu, \mathrm{d}\underline{\nu})$ is a weak solution to the Einstein--null dust system in the sense of Definition~\ref{def:null.dust}.
\end{lemma}
\subsection{Renormalized curvature components and the renormalized Bianchi equations}
Given an angularly regular spacetime $(\mathcal M,g)$ (see~Definition~\ref{double.null.def.2}), define the following $S$-tangent tensor fields:
\begin{definition}\label{def:curv}
Define
$$\beta :=-\div\hat{\chi} + \frac 12 \slashed{\nabla} \slashed{\mathrm{tr}}\chi - \frac 12(\eta-\underline{\eta}) \cdot (\chi - \slashed{\mathrm{tr}}\chi\gamma),\quad \underline{\beta} := \div\hat{\underline{\chi}} - \frac 12 \slashed{\nabla} \slashed{\mathrm{tr}}\underline{\chi} - \frac 12(\eta-\underline{\eta})\cdot (\underline{\chi} -\slashed{\mathrm{tr}}\underline{\chi}\gamma),\quad \check{\sigma} := \slashed{\mathrm{curl}}\eta.$$
\end{definition}
We will call $(\beta,\,\underline{\beta},\,\check{\sigma})$ and $K$ the \textbf{renormalized curvature components}. The relation of $(\beta,\,\underline{\beta},\,\check{\sigma})$ to the spacetime curvature components is given by the following:
\begin{lemma}
If $(\mathcal M,g)$ is a $C^2$ (classical) solution to the Einstein vacuum equations, then
$$\beta_A = \frac 1 2 R(e_A, e_4, e_3, e_4),\quad\check{\sigma} = \frac 1 4 \,^*R(e_4,e_3, e_4, e_3)+ \f12 \in^{AB}(\gamma^{-1})^{CD}\hat{\underline{\chi}}_{AC}\hat{\chi}_{BD},$$
$$\underline{\beta}_A = \frac 1 2 R(e_A, e_3, e_3, e_4),$$
where $R$ is the Riemann curvature tensor of $g$ and $\, ^*R$ denotes the Hodge dual of $R$.
\end{lemma}
For sufficiently regular spacetimes in double null coordinates, the renormalized curvature components obey the following \textbf{renormalized Bianchi equations} (recall definitions from Definitions~\ref{def:div.curl} and \ref{def:contractions}):
\begin{proposition}\label{prop:Bianchi}
If $(\mathcal M,g)$ is a $C^3$ (classical) solution to the Einstein vacuum equations, then the following system of equations holds:
\begin{align}
\slashed{\nabla}_3\beta+\slashed{\mathrm{tr}}\underline{\chi}\beta=&\: -\slashed{\nabla} K +^*\slashed{\nabla}\check{\sigma} + 2\underline{\omega} \beta+2\hat{\chi}\cdot\underline{\beta}-3(\eta K-^*\eta\check{\sigma})+\frac 1 2(\slashed{\nabla}(\hat{\chi}\cdot\hat{\underline{\chi}})+^*\slashed{\nabla}(\hat{\chi}\wedge\hat{\underline{\chi}})) \nonumber\\
&+\frac 32(\eta\hat{\chi}\cdot\hat{\underline{\chi}}+^*\eta\hat{\chi}\wedge\hat{\underline{\chi}})-\frac 14 (\slashed{\nabla}\slashed{\mathrm{tr}}\chi \slashed{\mathrm{tr}}\underline{\chi}+\slashed{\mathrm{tr}}\chi\slashed{\nabla}\slashed{\mathrm{tr}}\underline{\chi})-\frac 34 \eta\slashed{\mathrm{tr}}\chi\slashed{\mathrm{tr}}\underline{\chi},\label{eq:null.Bianchi.1}\\
\slashed{\nabla}_4\check{\sigma}+\frac 32\slashed{\mathrm{tr}}\chi\check{\sigma}=&\: -\div^*\beta-\frac 12(\eta - \underline{\eta})\wedge\beta-2\underline{\eta}\wedge \beta-\frac 12 \hat{\chi}\wedge(\slashed{\nabla}\widehat{\otimes}\underline{\eta})-\frac 12 \hat{\chi}\wedge(\underline{\eta}\widehat{\otimes}\underline{\eta}),\label{eq:null.Bianchi.2}\\
\slashed{\nabla}_4 K+\slashed{\mathrm{tr}}\chi K=&\: -\div\beta-\frac 12(\eta - \underline{\eta})\cdot\beta-2\underline{\eta}\cdot\beta+\frac 12 \hat{\chi}\cdot\slashed{\nabla}\widehat{\otimes}\underline{\eta}+\frac 12 \hat{\chi}\cdot(\underline{\eta}\widehat{\otimes}\underline{\eta})-\frac 12 \slashed{\mathrm{tr}}\chi\div\underline{\eta}-\frac 12\slashed{\mathrm{tr}}\chi |\underline{\eta}|^2,\label{eq:null.Bianchi.3}\\
\slashed{\nabla}_3\check{\sigma}+\frac 32\slashed{\mathrm{tr}}\underline{\chi}\check{\sigma}=&\: -\div ^*\underline{\beta}+\frac 12(\eta - \underline{\eta})\wedge\underline{\beta}-2\eta\wedge \underline{\beta}+\frac 12 \hat{\underline{\chi}}\wedge(\slashed{\nabla}\widehat{\otimes}\eta)+\frac 12 \hat{\underline{\chi}}\wedge(\eta\widehat{\otimes}\eta),\label{eq:null.Bianchi.4}\\
\slashed{\nabla}_3 K+\slashed{\mathrm{tr}}\underline{\chi} K=&\: \div\underline{\beta}-\frac 12(\eta - \underline{\eta})\cdot\underline{\beta}+2\eta\cdot\underline{\beta}+\frac 12 \hat{\underline{\chi}}\cdot\slashed{\nabla}\widehat{\otimes}\eta+\frac 12 \hat{\underline{\chi}}\cdot(\eta\widehat{\otimes}\eta)-\frac 12 \slashed{\mathrm{tr}}\underline{\chi}\div\eta-\frac 12 \slashed{\mathrm{tr}}\underline{\chi} |\eta|^2,\label{eq:null.Bianchi.5}\\
\slashed{\nabla}_4\underline{\beta}+\slashed{\mathrm{tr}}\chi\underline{\beta}=&\: \slashed{\nabla} K +^*\slashed{\nabla}\check{\sigma}+ 2\omega\underline{\beta} +2\hat{\underline{\chi}}\cdot\beta+3(\underline{\eta} K+^*\underline{\eta}\check{\sigma})-\frac 1 2(\slashed{\nabla}(\hat{\chi}\cdot\hat{\underline{\chi}})-^*\slashed{\nabla}(\hat{\chi}\wedge\hat{\underline{\chi}}))\nonumber\\
&+\frac 14 (\slashed{\nabla}\slashed{\mathrm{tr}}\chi \slashed{\mathrm{tr}}\underline{\chi}+\slashed{\mathrm{tr}}\chi\slashed{\nabla}\slashed{\mathrm{tr}}\underline{\chi})-\frac 32(\underline{\eta}\hat{\chi}\cdot\hat{\underline{\chi}}-^*\underline{\eta}\hat{\chi}\wedge\hat{\underline{\chi}})+\frac 34 \underline{\eta}\slashed{\mathrm{tr}}\chi\slashed{\mathrm{tr}}\underline{\chi}.
\label{eq:null.Bianchi.6}
\end{align}
\end{proposition}
\begin{proof}
These equations can be derived by decomposing the Bianchi equations $\nabla^{\alpha}R_{\alpha\beta\mu\nu} = 0$ with respect to the null frame and then performing algebraic manipulations; they can be found for instance in \cite{DL}. \qedhere
\end{proof}
\begin{definition}\label{def:Bianchi.integrated}
Let $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be an angularly regular spacetime in double null coordinates. We say that \textbf{the renormalized Bianchi equations are satisfied} if
\begin{enumerate}
\item the equations \eqref{eq:null.Bianchi.2}--\eqref{eq:null.Bianchi.5} are satisfied in the integrated sense (Definition~\ref{def:weak.transport});
\item the equations \eqref{eq:null.Bianchi.1} and \eqref{eq:null.Bianchi.6} are satisfied in the weak integrated sense (Definition~\ref{def:weaker.transport}).
\end{enumerate}
\end{definition}
\subsection{Auxiliary equations}
In this subsection, we discuss a few auxiliary equations. We will show that when the spacetime is sufficiently regular, they hold as a consequence of the null structure equations (recall Section~\ref{sec:null.structure.eqn}). We will then introduce an appropriate weak notion of solutions to these equations in the setting of angularly regular spacetimes; see already Definition~\ref{def:aux.integrated}. The equations that we will be interested in are those for the higher derivatives of the metric components, those for the mass aspect functions, and those for the derivatives of $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$. These are not necessary to make sense of the Einstein equations weakly, but will be useful for the proof of our uniqueness theorem (Theorem~\ref{thm:uniqueness.intro}).
\subsubsection{Higher order transport equations for the metric components}
\begin{proposition}\label{prop:higher.order.metric.C2}
The following holds for a $C^2$ metric in double null coordinates (see \eqref{double.null.coordinates}):
\begin{equation}\label{eq:nablagamma}
\slashed{\nabla}_4 \slashed{\nabla} \gamma = 0,
\end{equation}
\begin{equation}\label{eq:nablaOmega}
\slashed{\nabla}_4 \slashed{\nabla} \log\Omega = - 2\slashed{\nabla}\omega - (\eta + \underline{\eta}) \omega - \chi \cdot \slashed{\nabla} \log\Omega.
\end{equation}
Under the same assumptions, the following equation for $\slashed{\nabla} b$, which we write in index notation, also holds:
\begin{equation}\label{eq:nablab}
\begin{split}
\slashed{\nabla}_4 \slashed{\nabla}_B b^A = &\: \slashed{\nabla}_B \left( - 2\Omega (\eta - \underline{\eta})^A + \chi_{AC} b^C \right) - \Omega (\eta + \underline{\eta})_B (\eta - \underline{\eta})^A + \frac 12 (\eta +\underline{\eta})_B \chi_{AC} b^C \\
&\: - (\gamma^{-1})^{CD} \chi_{BD}\slashed{\nabla}_C b^{A} + (\chi_{B}{ }^A \underline{\eta}_C - \chi_{BC} \underline{\eta}^{A}+ \in_{AC} { }^*\beta_B ) b^C.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
By Lemma~7.3.3 in \cite{CK}, the following commutation formula holds:
\begin{equation}\label{eq:CK.Lemma733}
\begin{split}
[ \slashed{\nabla}_4,\slashed{\nabla}_B] \phi_{A_1\dots A_r} = &\: \frac 12(\eta_B + \underline{\eta}_B)\slashed{\nabla}_4\phi_{A_1\dots A_r} - (\gamma^{-1})^{CD} \chi_{BD}\slashed{\nabla}_C \phi_{A_1\dots A_r} \\
&\: +\sum_{i=1}^r ((\gamma^{-1})^{CD} \chi_{A_i B} \underline{\eta}_D - (\gamma^{-1})^{CD} \chi_{BD} \underline{\eta}_{A_i}+ \in_{A_i}{ }^C { }^*\beta_B )\phi_{A_1\dots \hat{A_i} C\dots A_r},
\end{split}
\end{equation}
where ${ }^*$ denotes the Hodge dual on the $2$-sphere, and $\hat{A_i}$ in the indices means that the original $A_i$ is removed. The conclusion then follows from applying the commutation formula to
\begin{equation}\label{eq:g.eqn.in.nab}
\slashed{\nabla}_4 \gamma = 0,\quad \slashed{\nabla}_4 b = - 2\Omega (\eta - \underline{\eta}) + \chi\cdot b,\quad \slashed{\nabla}_4 \log \Omega = -2 \omega,
\end{equation}
noting that the regularity assumptions are sufficient to justify the use of the commutation formula. \qedhere
\end{proof}
\subsubsection{Transport equations for the mass aspect functions}
Define the mass aspect functions $\mu$ and $\underline{\mu}$ by
\begin{equation}\label{eq:mu.def}
\mu:= - \div \eta + K,\quad \underline{\mu}:=- \div \underline{\eta} + K.
\end{equation}
We then have the following equations\footnote{Remark that the key point of introducing $\mu$ and $\underline{\mu}$ (instead of just considering $\div\eta$ and $\div\underline{\eta}$) is that no first derivatives of the curvature appear on the RHS of the transport equations they satisfy.} for $\mu$ and $\underline{\mu}$.
\begin{proposition}\label{prop:mu.background}
The following holds for a $C^3$ solution to the Einstein vacuum equations in double null coordinates (see \eqref{double.null.coordinates}):
\begin{equation}\label{eq:mu.0}
\begin{split}
\slashed{\nabla}_4 \mu = &\: \div \{\chi\cdot(\eta-\underline{\eta}) \} + \frac 12 (\eta + \underline{\eta}) \cdot \{\chi\cdot(\eta-\underline{\eta}) + \beta\} + \chi \cdot \slashed{\nabla} \eta - \slashed{\mathrm{tr}}\chi \underline{\eta} \cdot \eta + \chi\cdot \underline{\eta}\cdot \eta + \beta \cdot \eta \\
&\: -\slashed{\mathrm{tr}}\chi K -\frac 12(\eta - \underline{\eta})\cdot\beta-2\underline{\eta}\cdot\beta+\frac 12 \hat{\chi}\cdot\slashed{\nabla}\widehat{\otimes}\underline{\eta}+\frac 12 \hat{\chi}\cdot(\underline{\eta}\widehat{\otimes}\underline{\eta})-\frac 12 \slashed{\mathrm{tr}}\chi\div\underline{\eta}-\frac 12\slashed{\mathrm{tr}}\chi |\underline{\eta}|^2,
\end{split}
\end{equation}
and
\begin{equation}\label{eq:mub.0}
\begin{split}
\slashed{\nabla}_3 \underline{\mu} =&\: \div \{\underline{\chi}\cdot(\underline{\eta}-\eta) \} + \frac 12 (\eta + \underline{\eta}) \cdot \{\underline{\chi}\cdot(\underline{\eta}-\eta) - \underline{\beta}\} +\underline{\chi} \cdot \slashed{\nabla} \underline{\eta} - \slashed{\mathrm{tr}}\underline{\chi} \eta \cdot \underline{\eta} + \underline{\chi}\cdot \eta\cdot \underline{\eta} - \underline{\beta} \cdot \underline{\eta} \\
&\: -\slashed{\mathrm{tr}}\underline{\chi} K-\frac 12(\eta - \underline{\eta})\cdot\underline{\beta}+2\eta\cdot\underline{\beta}+\frac 12 \hat{\underline{\chi}}\cdot\slashed{\nabla}\widehat{\otimes}\eta+\frac 12 \hat{\underline{\chi}}\cdot(\eta\widehat{\otimes}\eta)-\frac 12 \slashed{\mathrm{tr}}\underline{\chi}\div\eta-\frac 12 \slashed{\mathrm{tr}}\underline{\chi} |\eta|^2.
\end{split}
\end{equation}
\end{proposition}
\begin{proof}
The following commutation formulae hold for any $C^2$ $S$-tangent $1$-form $\xi$ on a $C^2$ metric in a double null coordinate system (the equation \eqref{eq:commutation.4} can be derived from \eqref{eq:CK.Lemma733}; \eqref{eq:commutation.3} can be derived similarly starting from Lemma~7.3.3 in \cite{CK}):
\begin{align}
[\slashed{\nabla}_4,\div] \xi = &\: \frac 12 (\eta+ \underline{\eta})\cdot \slashed{\nabla}_4\xi - \chi\cdot \slashed{\nabla}\xi + \slashed{\mathrm{tr}}\chi \underline{\eta} \xi - \chi\cdot \underline{\eta}\cdot \xi - \beta \cdot \xi, \label{eq:commutation.4}\\
[\slashed{\nabla}_3,\div] \xi = &\: \frac 12 (\eta+ \underline{\eta})\cdot \slashed{\nabla}_3\xi - \underline{\chi}\cdot \slashed{\nabla}\xi + \slashed{\mathrm{tr}}\underline{\chi} \eta \xi - \underline{\chi}\cdot \eta\cdot \xi + \underline{\beta} \cdot \xi. \label{eq:commutation.3}
\end{align}
Now, by Definition~\ref{def:curv}, the equations \eqref{Ric4A} and \eqref{Ric3A} can be rephrased as
\begin{equation}\label{eq:eta.etab.rephrased}
\slashed{\nabla}_4 \eta = -\chi\cdot(\eta-\underline{\eta}) - \beta,\quad \slashed{\nabla}_3\underline{\eta} = -\underline{\chi}\cdot (\underline{\eta} - \eta) + \underline{\beta},
\end{equation}
i.e.~the top order derivatives can be grouped in terms of $\beta$ and $\underline{\beta}$. Applying \eqref{eq:commutation.4} and \eqref{eq:commutation.3} to \eqref{eq:eta.etab.rephrased}, we thus obtain
\begin{equation*}
\begin{split}
\slashed{\nabla}_4 \div \eta =&\: - \div \{\chi\cdot(\eta-\underline{\eta}) + \beta\} - \frac 12 (\eta + \underline{\eta}) \cdot \{\chi\cdot(\eta-\underline{\eta}) + \beta\} - \chi \cdot \slashed{\nabla} \eta + \slashed{\mathrm{tr}}\chi \underline{\eta} \cdot \eta - \chi\cdot \underline{\eta}\cdot \eta - \beta \cdot \eta, \\
\slashed{\nabla}_3 \div \underline{\eta} =&\: - \div \{\underline{\chi}\cdot(\underline{\eta}-\eta) - \underline{\beta}\} - \frac 12 (\eta + \underline{\eta}) \cdot \{\underline{\chi}\cdot(\underline{\eta}-\eta) - \underline{\beta}\} - \underline{\chi} \cdot \slashed{\nabla} \underline{\eta} + \slashed{\mathrm{tr}}\underline{\chi} \eta \cdot \underline{\eta} - \underline{\chi}\cdot \eta\cdot \underline{\eta} + \underline{\beta} \cdot \underline{\eta}.
\end{split}
\end{equation*}
Recalling the definition of $\mu$ and $\underline{\mu}$ in \eqref{eq:mu.def}, we can combine the above equations with \eqref{eq:null.Bianchi.3} and \eqref{eq:null.Bianchi.5} to obtain the desired conclusion. \qedhere
\end{proof}
\subsubsection{Weak formulation of transport equations for derivatives of $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$}
As in Sections~\ref{sec.weak.double.null} and \ref{sec.weak.double.null.dust}, the transport equations for $\slashed{\mathrm{tr}}\chi$ in the $e_4$ direction and for $\slashed{\mathrm{tr}}\underline{\chi}$ in the $e_3$ direction are different depending on whether the spacetime satisfies the Einstein vacuum equations or the Einstein--null dust system. We first consider the vacuum case.
\begin{proposition}\label{prop:Xtrch.vac}
Let $(\mathcal M, g)$ be a $C^3$ solution to the Einstein vacuum equations in double null coordinates. Assume $\slashed X$ is a $C^1$ $S$-tangent vector field.
Then the following holds: \begin{equation}\label{eq:Xtrch.0.vac} \frac{\partial}{\partial\underline{u}} \slashed X(\Omega^{-1} \slashed{\mathrm{tr}}\chi) + \frac 12 \slashed X (\slashed{\mathrm{tr}}\chi)^2 = - \slashed X|\hat{\chi}|_{\gamma}^2 + [\frac{\partial}{\partial\underline{u}}, \slashed X] (\Omega^{-1} \slashed{\mathrm{tr}}\chi). \end{equation} \begin{equation}\label{eq:Xtrchb.0.vac} (\frac{\partial}{\partial u} + \slashed{\nabla}_{b}) \slashed X(\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi}) +\frac 12 \slashed X(\slashed{\mathrm{tr}}\underline{\chi})^2= -\slashed X|\hat{\underline{\chi}}|_{\gamma}^2 + [\frac{\partial}{\partial u} + \slashed{\nabla}_{b}, \slashed X](\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi}). \end{equation} \end{proposition} \begin{proof} This follows from differentiating \eqref{Ric44} and \eqref{Ric33} by $\slashed{X}$ and using \eqref{Ricci.relation}. \qedhere \end{proof} In the presence of dust, \eqref{eq:Xtrch.0.vac} and \eqref{eq:Xtrchb.0.vac} do not hold. Instead, the dust term acts as a source in these transport equations (cf.~Section~\ref{sec.weak.double.null.dust}). We introduce a weak formulation of these equations, which can be considered as the higher derivative version of \eqref{eq:trchb} and \eqref{eq:trch}. Let $\slashed X$ be a $C^1$ $S$-tangent vector field. Consider the following equation for $\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\chi)$ (for all $u\in [0,u_*]$ and all $0\leq \underline{u}_1<\underline{u}_2 \leq \underline{u}_*$) \begin{equation}\label{eq:Xtrch.0} \begin{split} &\: \int_{S_{u,\underline{u}_2}} \Omega^2 (\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\chi))^- \,\mathrm{dA}_{\gamma} - \int_{S_{u,\underline{u}_1}} \Omega^2 (\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\chi))^+ \,\mathrm{dA}_{\gamma} \\ = &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}}} ([\frac{\partial}{\partial \underline{u}} , \slashed X](\Omega^{-1}\slashed{\mathrm{tr}}\chi) -4\omega \Omega\slashed X(\Omega^{-1}\slashed{\mathrm{tr}}\chi) )\Omega^2\,\mathrm{dA}_{\gamma} \,\mathrm{d} \underline{u} \\ &\: + \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}}} (2\slashed X(\log\Omega) + \div \slashed X)\Omega^2|\hat{\chi}|_{\gamma}^2 \,\mathrm{dA}_{\gamma} \,\mathrm{d} \underline{u} + \int_{\{u\}\times (\underline{u}_1,\underline{u}_2)\times \mathbb S^2} (2\slashed X(\log\Omega) + \div \slashed X)\,\mathrm{d}{\nu}_{u}, \end{split} \end{equation} and the following equation for $\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\underline{\chi})$ (for all $\underline{u}\in [0,\underline{u}_*]$ and all $0\leq u_1<u_2 \leq u_*$) \begin{equation}\label{eq:Xtrchb.0} \begin{split} &\: \int_{S_{u_2,\underline{u}}} \Omega^2 (\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\underline{\chi}))^- \,\mathrm{dA}_{\gamma} - \int_{S_{u_1,\underline{u}}} \Omega^2 (\slashed X (\Omega^{-1} \slashed{\mathrm{tr}}\underline{\chi}))^+ \,\mathrm{dA}_{\gamma} \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} ([\frac{\partial}{\partial u} + \slashed{\nabla}_{b}, \slashed X](\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi}) -4\underline{\omega} \Omega\slashed X(\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi}) )\Omega^2\,\mathrm{dA}_{\gamma} \,\mathrm{d} u \\ &\: + \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} (2\slashed X(\log\Omega) + \div \slashed X)\Omega^2|\hat{\underline{\chi}}|_{\gamma}^2 \,\mathrm{dA}_{\gamma} \,\mathrm{d} u + \int_{(u_1,u_2)\times \{\underline{u}\} \times \mathbb S^2} (2\slashed X(\log\Omega) + \div \slashed 
X)\,\mathrm{d}\underline{\nu}_{\underline{u}}.
\end{split}
\end{equation}
Recall here that ${ }^{\pm}$ denotes the trace (see Lemma~\ref{lem:trace}), which is well defined in an angularly regular double null spacetime (Definition~\ref{double.null.def.2}) since $\slashed X (\Omega^{-1}\slashed{\mathrm{tr}}\chi)$ is in $BV(H_u)$ for all $u$ and $\slashed X(\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi})$ is in $BV(\underline{H}_{\underline{u}})$ for all $\underline{u}$.
\subsubsection{Weak formulation of all the auxiliary equations}
We are now ready to define an appropriate weak formulation of the equations that we have considered in this section.
\begin{definition}\label{def:aux.integrated}
Let $(\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2,g)$ be an angularly regular spacetime in double null coordinates. We say that \textbf{the equations \eqref{eq:nablagamma}, \eqref{eq:nablaOmega}, \eqref{eq:nablab}, \eqref{eq:mu.0}, \eqref{eq:mub.0}, \eqref{eq:Xtrch.0} and \eqref{eq:Xtrchb.0} are satisfied} if
\begin{enumerate}
\item the equations \eqref{eq:nablagamma}--\eqref{eq:nablab} and \eqref{eq:mu.0}--\eqref{eq:mub.0} are satisfied in the integrated sense (Definition~\ref{def:weak.transport});
\item the equation \eqref{eq:Xtrch.0} is satisfied for every $C^1$ $S$-tangent vector field $\slashed X$, for all $u\in [0,u_*]$ and all $0\leq \underline{u}_1<\underline{u}_2 \leq \underline{u}_*$;
\item the equation \eqref{eq:Xtrchb.0} is satisfied for every $C^1$ $S$-tangent vector field $\slashed X$, for all $\underline{u}\in [0,\underline{u}_*]$ and all $0\leq u_1<u_2 \leq u_*$.
\end{enumerate}
\end{definition}
\section{Existence theorems}\label{sec.existence}
In this section, we recall the existence and uniqueness theorem in \cite{LR2}. We will in particular need to use the estimates derived in \cite{LR2} to prove our main theorems. To simplify the exposition\footnote{The original theorems in \cite{LR, LR2} indeed allow the initial data to be non-smooth. In particular, they allow the initial $\hat{\chi}$ and $\hat{\underline{\chi}}$ to be discontinuous, which corresponds to impulsive gravitational waves.}, we restrict our attention to smooth initial data. This will already be sufficient for our purposes. Even though the data are smooth, the key point here is that the result in \cite{LR2} guarantees that the region of existence and the estimates that the solutions obey depend only on low-regularity norms of the data.
In \textbf{Section~\ref{sec:existence.data}}, we first give some remarks regarding the characteristic initial value problem in the double null foliation gauge. We then give the statement of the main result of \cite{LR2} in \textbf{Section~\ref{sec:statement.ext.thm}}.
\subsection{The characteristic initial value problem}\label{sec:existence.data}
We will consider a characteristic initial value problem as follows. We impose characteristic initial data on two transversally intersecting null hypersurfaces $H_0 = [0, \underline{I}] \times \mathbb S^2$ and $\underline{H}_0 = [0, I] \times \mathbb S^2$, where $\{0\}\times \mathbb S^2 \subset H_0$ and $\{0\}\times \mathbb S^2 \subset \underline{H}_0$ are identified. See Figure~\ref{fig:CIVP}.
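The following remark recalls how the flat case fits into this setup; this is a standard example which we include only for orientation (it is our illustration and is not needed in the sequel).
\begin{remark}
For any constant $r_0 > I$, Minkowski space in the region $\{r = r_0 + \underline{u} - u\}$ (with $t = u+\underline{u}$) takes the double null form
\begin{equation*}
g = -2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u} \otimes \mathrm{d} u)+(r_0 + \underline{u} - u)^2 \mathring{\gamma},
\end{equation*}
where $\mathring{\gamma}$ is the round metric on the unit sphere, i.e.~$\Omega \equiv 1$, $b \equiv 0$ and $\gamma = (r_0+\underline{u}-u)^2\mathring{\gamma}$. In this case $\hat{\chi} = \hat{\underline{\chi}} = 0$, $\eta = \underline{\eta} = 0$ and $\omega = \underline{\omega} = 0$, while
$$\slashed{\mathrm{tr}}\chi = \frac{2}{r_0+\underline{u}-u},\quad \slashed{\mathrm{tr}}\underline{\chi} = -\frac{2}{r_0+\underline{u}-u},\quad K = \frac{1}{(r_0+\underline{u}-u)^2},$$
and one checks directly that the null structure equations of Proposition~\ref{prop:null.structure} hold.
\end{remark}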
\begin{figure}[htbp]
\begin{center}
\input{frame.pdf_t}
\caption{Setup for the characteristic initial value problem}\label{fig:CIVP}
\end{center}
\end{figure}
We will provide two ways of thinking about the characteristic initial data, which we call the full characteristic initial data and the reduced characteristic initial data; see Sections~\ref{sec:full.char.data} and \ref{sec:reduced.data} respectively.
\subsubsection{The full characteristic initial data}\label{sec:full.char.data}
The first way to prescribe characteristic initial data is to prescribe all of the metric components $(\gamma,\, \Omega,\, b)$ and all of the Ricci coefficients $(\chi,\,\underline{\chi},\,\eta,\,\underline{\eta},\,\omega,\,\underline{\omega})$ on both $H_0$ and $\underline{H}_0$. All of these objects are required to be smooth, $\gamma$ is required to be positive definite, and $\Omega$ is required to be positive. Moreover, the metric components and the Ricci coefficients are required to satisfy the following:
\begin{itemize}
\item The metric components and the Ricci coefficients relate to each other via \eqref{metric.derivative.invar} and \eqref{Ricci.relation}.
\item The null structure equations \eqref{Ric44}--\eqref{Ric34.1} all hold, where it is understood that the $\slashed{\nabla}_3$ equations are required to hold on $\underline{H}_0$ and the $\slashed{\nabla}_4$ equations are required to hold on $H_0$.
\end{itemize}
\subsubsection{The reduced characteristic initial data}\label{sec:reduced.data}
As is discussed in \cite{Chr}, there is another way to think about the characteristic initial data. Essentially, this allows one to identify some freely prescribable data and then impose the other constraints by solving appropriate transport equations.
From this point of view, the initial data consist of $(\Omega,\,\Phi,\,\hat{\gamma},\,b\restriction_{\underline{H}_0},\,\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}})$, where $\Phi$ and $\hat{\gamma}$ are such that $\gamma = \Phi^2 \hat{\gamma}$, and $\hat{\gamma}$ is normalized by the condition $\frac{\det\hat{\gamma}}{\det\mathring{\gamma}} = 1$ for some $\mathring{\gamma}$ satisfying $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \mathring{\gamma}\restriction_{H_0} = 0 = \slashed{\mathcal L}_{\frac{\partial}{\partial u} + b} \mathring{\gamma}\restriction_{\underline{H}_0}$. Here, $\Phi$ and $\hat{\gamma}$ are required to satisfy the \emph{constraint equations}:
\begin{equation}\label{eq:constraints.first.time}
\begin{split}
\frac{\partial^2 \Phi}{\partial\underline{u}^2}=2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi}{\partial\underline{u}}-\frac {1}8 |\frac{\partial\hat{\gamma}}{\partial \underline{u}}|_{\hat{\gamma}}^2 \Phi \quad \mbox{on $H_0$},\\
\frac{\partial^2 \Phi}{\partial u^2}=2\frac{\partial \log\Omega}{\partial u}\frac{\partial\Phi}{\partial u}-\frac {1}8 |\frac{\partial\hat{\gamma}}{\partial u}|_{\hat{\gamma}}^2 \Phi \quad \mbox{on $\underline{H}_0$},
\end{split}
\end{equation}
where
\begin{equation}\label{eq:def.|dubgamma|^2}
|\frac{\partial\hat{\gamma}}{\partial \underline{u}}|_{\hat{\gamma}}^2 := (\hat{\gamma}^{-1})^{AC}(\hat{\gamma}^{-1})^{BD}(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_{AB})(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_{CD}),\quad |\frac{\partial\hat{\gamma}}{\partial u}|_{\hat{\gamma}}^2 := (\hat{\gamma}^{-1})^{AC}(\hat{\gamma}^{-1})^{BD}(\frac{\partial}{\partial u}\hat{\gamma}_{AB})(\frac{\partial}{\partial u}\hat{\gamma}_{CD}).
\end{equation}
To simplify the exposition, \textbf{we assume from now on $b\restriction_{\underline{H}_0} \equiv 0$}.
Once $\Omega$, $\Phi$, $\hat{\gamma}$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}}$ are prescribed, we can derive the full characteristic initial data set as in Section~\ref{sec:full.char.data} by the following procedure (see \cite{Chr}):
\begin{itemize}
\item Given $\Omega$, we can obtain $\omega\restriction_{H_0}$ and $\underline{\omega}\restriction_{\underline{H}_0}$ by (see \eqref{Ricci.relation})
$$\omega \restriction_{H_0} = -\frac 12 \Omega^{-1} \frac{\partial}{\partial\underline{u}}\log\Omega,\quad \underline{\omega}\restriction_{\underline{H}_0} = -\frac 12 \Omega^{-1} \frac{\partial}{\partial u}\log\Omega.$$
\item Given $\Omega$, $\hat{\gamma}$ and $\Phi$, we can obtain $\chi\restriction_{H_0}$ and $\underline{\chi} \restriction_{\underline{H}_0}$ by
$$\hat{\chi}_{AB}\restriction_{H_0} = \frac 12 \Omega^{-1} \Phi^2 \frac{\partial}{\partial\underline{u}}\hat{\gamma}_{AB},\quad \slashed{\mathrm{tr}}\chi\restriction_{H_0} = \frac{2\Omega^{-1}}{\Phi} \frac{\partial\Phi}{\partial\underline{u}},$$
$$\hat{\underline{\chi}}_{AB}\restriction_{\underline{H}_0} = \frac 12 \Omega^{-1} \Phi^2 \frac{\partial}{\partial u}\hat{\gamma}_{AB},\quad \slashed{\mathrm{tr}}\underline{\chi}\restriction_{\underline{H}_0} = \frac{2\Omega^{-1}}{\Phi} \frac{\partial\Phi}{\partial u}.$$
\item $\eta$ and $\underline{\eta}$ can be obtained on $H_0$ by solving \eqref{Ric4A} together with the condition $\frac 12 (\eta + \underline{\eta}) = \slashed{\nabla}\log\Omega$ (see \eqref{Ricci.relation}); $\eta$ and $\underline{\eta}$ can be obtained on $\underline{H}_0$ by solving \eqref{Ric3A} together with the condition $\frac 12 (\eta + \underline{\eta}) = \slashed{\nabla}\log\Omega$. The initial conditions for both of these ODEs are given in view of $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b = -2\Omega^2 (\eta^\sharp -\underline{\eta}^\sharp)$ (see \eqref{metric.derivative}) and the fact that $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}}$ is prescribed.
\item $b$ on $H_0$ can be obtained from $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b = -2\Omega^2 (\eta^\sharp -\underline{\eta}^\sharp)$ and the condition that $b\restriction_{S_{0,0}}= 0$ (which follows from $b\restriction_{\underline{H}_0} = 0$).
\item $\omega$ on $\underline{H}_0$ can be obtained by solving the transport equation \eqref{Ric34.1}; $\underline{\omega}$ on $H_0$ can be obtained by solving the transport equation \eqref{Ric34}. Note that the initial data for $\omega$ for this transport equation are determined by the value of $\omega$ on $H_0$ (obtained above); similarly for $\underline{\omega}$.
\item $\chi$ on $\underline{H}_0$ can be obtained by solving the transport equations \eqref{trRicAB.1} and \eqref{RicAB.1}; $\underline{\chi}$ on $H_0$ can be obtained by solving the transport equations \eqref{trRicAB} and \eqref{RicAB}. Note that the initial data for $\chi$ for these transport equations are determined by the value of $\chi$ on $H_0$; similarly for $\underline{\chi}$.
\end{itemize}
The important point in the above procedure is that all the terms on the RHS of the transport equations are given by the previous steps.
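The following remark records a consistency check for the prescriptions above; we sketch the computation (it uses only \eqref{Ric44}, the relations just stated and $\slashed{\nabla}_4 = \Omega^{-1}\frac{\partial}{\partial\underline{u}}$ on scalars), since it also explains the role of the constraint \eqref{eq:constraints.first.time}.
\begin{remark}
With $\slashed{\mathrm{tr}}\chi\restriction_{H_0} = \frac{2\Omega^{-1}}{\Phi}\frac{\partial\Phi}{\partial\underline{u}}$, $|\hat{\chi}|_\gamma^2 = \frac 14 \Omega^{-2}|\frac{\partial\hat{\gamma}}{\partial\underline{u}}|_{\hat{\gamma}}^2$ (which follows from $\gamma = \Phi^2\hat{\gamma}$) and $\omega\restriction_{H_0} = -\frac 12 \Omega^{-1}\frac{\partial}{\partial\underline{u}}\log\Omega$, a direct computation gives
\begin{equation*}
\slashed{\nabla}_4 \slashed{\mathrm{tr}}\chi+\frac 12 (\slashed{\mathrm{tr}}\chi)^2 + |\hat{\chi}|_{\gamma}^2 + 2\omega \slashed{\mathrm{tr}}\chi = \frac{2}{\Omega^2\Phi}\left(\frac{\partial^2 \Phi}{\partial\underline{u}^2}-2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi}{\partial\underline{u}}+\frac {1}8 |\frac{\partial\hat{\gamma}}{\partial \underline{u}}|_{\hat{\gamma}}^2 \Phi\right) \quad \mbox{on $H_0$},
\end{equation*}
so that the Raychaudhuri equation \eqref{Ric44} holds on $H_0$ precisely when the first constraint in \eqref{eq:constraints.first.time} does. The computation on $\underline{H}_0$ with \eqref{Ric33} and the second constraint is analogous.
\end{remark}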
The transport equations can therefore all be solved (despite the fact that the procedure involves a loss of derivatives, in the sense that obtaining estimates for, say, three derivatives of all the initial Ricci coefficients requires prescribing more derivatives of $(\Omega,\,\Phi,\,\hat{\gamma},\,\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}})$). Tracing through the above procedure, we also obtain quantitative estimates for $(b, \, \chi,\, \hat{\underline{\chi}},\, \eta,\, \underline{\eta},\, \omega,\, \underline{\omega})$. We give the result in the following lemma. The proof is straightforward and omitted (see again \cite{Chr} for details).
\begin{lemma}\label{lem:suff.cond.on.data}
Suppose $(\Omega,\,\Phi,\,\hat{\gamma},\,\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}})$ is a reduced characteristic initial data set with the estimates
\begin{equation*}
\begin{split}
\|(\log\Omega,\,\hat{\gamma},\,\log\Phi,\,\frac{\partial\Phi}{\partial \underline{u}})\restriction_{H_0} \|_{C^0_{\underline{u}} W^{6,2}(S_{0,\underline{u}},\mathring{\gamma})} + \|(\log\Omega,\,\hat{\gamma},\,\log\Phi,\,\frac{\partial\Phi}{\partial u})\restriction_{\underline{H}_0} \|_{C^0_{u} W^{6,2}(S_{u,0},\mathring{\gamma})} \leq C_0,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\|(\frac{\partial\log\Omega}{\partial\underline{u}},\,\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}}\hat{\gamma})\restriction_{H_0} \|_{L^2_{\underline{u}} W^{5,2}(S_{0,\underline{u}},\mathring{\gamma})} + \|(\frac{\partial\log\Omega}{\partial u},\,\slashed{\mathcal L}_{\frac{\partial}{\partial u}}\hat{\gamma})\restriction_{\underline{H}_0} \|_{L^2_{u} W^{5,2}(S_{u,0},\mathring{\gamma})} \leq C_0,
\end{split}
\end{equation*}
\begin{equation*}
\begin{split}
\|\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}} \|_{W^{5,2}(S_{0,0},\mathring{\gamma})} \leq C_0.
\end{split}
\end{equation*}
Then, for the metric components and Ricci coefficients derived with the procedure outlined above, there exists $\widetilde{C}_0>0$ depending on $C_0$ such that
$$\| (\gamma, \,\log\frac{\det\gamma}{\det\mathring{\gamma}}) \|_{L^\infty_u W^{3,2}(S_{u,0},\mathring{\gamma})} + \| (\gamma, \,\log\frac{\det\gamma}{\det\mathring{\gamma}}) \|_{L^\infty_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\mathring{\gamma})} \leq \widetilde{C}_0,$$
\begin{equation*}
\begin{split}
\| (\eta,\,\underline{\eta},\,\slashed{\mathrm{tr}}\chi,\,\slashed{\mathrm{tr}}\underline{\chi}) \|_{L^\infty_u W^{3,2}(S_{u,0},\gamma)} + \| K \|_{L^\infty_u W^{2,2}(S_{u,0},\gamma)} \leq \widetilde{C}_0,
\end{split}
\end{equation*}
$$\| (\eta,\,\underline{\eta},\,\slashed{\mathrm{tr}}\chi,\,\slashed{\mathrm{tr}}\underline{\chi}) \|_{L^\infty_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\gamma)} + \| K \|_{L^\infty_{\underline{u}} W^{2,2}(S_{0,\underline{u}},\gamma)} \leq \widetilde{C}_0,$$
\begin{equation*}
\begin{split}
\| (\hat{\underline{\chi}},\,\underline{\omega}) \|_{L^2_u W^{3,2}(S_{u,0},\gamma)} + \| (\hat{\chi},\,\omega) \|_{L^2_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\gamma)} \leq \widetilde{C}_0.
\end{split}
\end{equation*}
\end{lemma}
\subsection{The statements of the existence theorems}\label{sec:statement.ext.thm}
We now state the main result in \cite{LR2}. Recall again that we only focus on the case of the result in \cite{LR2} where the initial data are smooth.
We will first give the existence statement (Theorem~\ref{ext.thm}) and then give the estimates (Theorem~\ref{thm:ext.est}).
As above, we consider a characteristic initial value problem for the Einstein vacuum equations with smooth initial data given on $H_0 = [0,\underline{I}]\times \mathbb S^2$ and $\underline{H}_0 = [0, I]\times \mathbb S^2$ where $\{0\}\times \mathbb S^2 \subset H_0$ and $\{0\}\times \mathbb S^2 \subset \underline{H}_0$ are identified. Suppose we are given a full characteristic initial data set as in Section~\ref{sec:full.char.data} with $b\restriction_{\underline{H}_0} = 0$. In addition, fix a smooth metric $\mathring{\gamma}$ on $\mathbb S^2$. Extend $\mathring{\gamma}$ to $H_0\cup \underline{H}_0$ by requiring $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \mathring{\gamma}\restriction_{H_0} = 0 = \slashed{\mathcal L}_{\frac{\partial}{\partial u}} \mathring{\gamma}\restriction_{\underline{H}_0}$.
The following is the main existence theorem\footnote{Note that though equivalent, Theorem~\ref{ext.thm} is worded slightly differently from \cite[Theorem~3]{LR2}. In particular, instead of \eqref{eq:LR.gamma.data}, the estimates in \cite{LR2} are stated in terms of local coordinates rather than with respect to a reference metric $\mathring{\gamma}$.} from \cite{LR2}:
\begin{theorem}[L.--R.,~Theorem~3 in \cite{LR2}]\label{ext.thm}
Consider the characteristic initial value problem for the Einstein vacuum equations with smooth full characteristic initial data as described above. Suppose there exists a constant $D_0>0$ such that the prescribed data satisfy
\begin{equation}\label{eq:LR.gamma.data}
\| (\gamma, \,\log\frac{\det\gamma}{\det\mathring{\gamma}}) \|_{L^\infty_u W^{3,2}(S_{u,0},\mathring{\gamma})} + \| (\gamma, \,\log\frac{\det\gamma}{\det\mathring{\gamma}}) \|_{L^\infty_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\mathring{\gamma})} \leq D_0,
\end{equation}
\begin{equation}
\begin{split}
\| (\eta,\,\underline{\eta},\,\slashed{\mathrm{tr}}\chi,\,\slashed{\mathrm{tr}}\underline{\chi}) \|_{L^\infty_u W^{3,2}(S_{u,0},\gamma)} + \| K \|_{L^\infty_u W^{2,2}(S_{u,0},\gamma)} \leq D_0,
\end{split}
\end{equation}
\begin{equation}
\| (\eta,\,\underline{\eta},\,\slashed{\mathrm{tr}}\chi,\,\slashed{\mathrm{tr}}\underline{\chi}) \|_{L^\infty_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\gamma)} + \| K \|_{L^\infty_{\underline{u}} W^{2,2}(S_{0,\underline{u}},\gamma)} \leq D_0,
\end{equation}
\begin{equation}\label{eq:LR.data.4}
\begin{split}
\| (\hat{\underline{\chi}},\,\underline{\omega}) \|_{L^2_u W^{3,2}(S_{u,0},\gamma)} + \| (\hat{\chi},\,\omega) \|_{L^2_{\underline{u}} W^{3,2}(S_{0,\underline{u}},\gamma)} \leq D_0.
\end{split}
\end{equation}
Then there exists $\epsilon>0$ (sufficiently small) depending only on $D_0$ and $I$ such that the following holds: Given $u_* \in (0,I]$ and $\underline{u}_* \in (0,\epsilon]$, there exists a unique smooth solution to the Einstein vacuum equations in double null coordinates in the domain $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ which achieves the given initial data.
\end{theorem}
We now turn to the estimates for the Ricci coefficients which are obtained in the proof of Theorem~\ref{ext.thm}. We record them in Theorem~\ref{thm:ext.est} below. Most of the following estimates can be directly read off from \cite{LR2}. For the convenience of the reader, we include in Appendix~\ref{app:est} a derivation of these estimates from \cite{LR2}.
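Before stating the estimates, let us spell out the convention for the mixed norms that appear below; we record it here for the reader's convenience (it is the same reading as for the mixed norms used earlier): the spaces are nested from left (outermost) to right (innermost), so that for instance
\begin{equation*}
\|\xi\|_{C^0_u L^2_{\underline{u}} L^2(S)} = \sup_{u\in [0,u_*]} \left(\int_0^{\underline{u}_*} \|\xi\|_{L^2(S_{u,\underline{u}},\gamma)}^2 \,\mathrm{d} \underline{u}\right)^{\frac 12},
\end{equation*}
with continuity in $u$ being part of the membership requirement.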
(For the statement of Theorem~\ref{thm:ext.est}, recall again the definition of $\gamma_{0,0}$ in Definition~\ref{def:gamma00}.)
\begin{theorem}\label{thm:ext.est}
Consider a characteristic initial value problem as in Theorem~\ref{ext.thm}, as well as a spacetime solution as given in the conclusion of Theorem~\ref{ext.thm}. After choosing $\epsilon>0$ sufficiently small if necessary, there exists $\widetilde{C}>0$ depending only on $D_0$ and $I$ (in Theorem~\ref{ext.thm}) such that the following estimates hold in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$:
\begin{align}
\mathbf{I}(S_{u,\underline{u}}, \gamma) \leq \widetilde{C},\quad \widetilde{C}^{-1} \leq \mathrm{Area}(S_{u,\underline{u}}, \gamma) \leq \widetilde{C}, \label{eq:bdd.isoperimetric} \\
\|\log \frac{\det\gamma}{\det\gamma_{0,0}} \|_{C^0_u C^0_{\underline{u}} C^0(S)} \leq \widetilde{C}, \label{eq:bdd.density}\\
\sum_{\slashed g \in \{\gamma - \gamma_{0,0},\, b,\,\log\Omega\}}(\|\slashed g\|_{C^0_u C^0_{\underline{u}} W^{3,2}(S)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \slashed g\|_{L^2_{\underline{u}} L^\infty_u W^{2,2}(S)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed g\|_{L^2_{u} L^\infty_{\underline{u}} W^{2,2}(S)} )\leq \widetilde{C}, \label{eq:bdd.metric}\\
\sum_{\psi\in \{\eta,\,\underline{\eta}\}} \|\psi \|_{C^0_u C^0_{\underline{u}} W^{2,2}(S)} + \sum_{\psi\in \{\slashed{\mathrm{tr}}\chi,\,\slashed{\mathrm{tr}}\underline{\chi}\}} \|\psi \|_{C^0_u C^0_{\underline{u}} W^{3,2}(S)} \leq \widetilde{C}, \label{eq:bdd.psi}\\
\sum_{\psi\in \{\eta,\,\underline{\eta}\}} (\|\slashed{\nabla}^3 \psi \|_{C^0_u L^2_{\underline{u}} L^2(S)} + \|\slashed{\nabla}^3 \psi \|_{C^0_{\underline{u}} L^2_{u} L^2(S)}) + \|\slashed{\nabla}^2 K \|_{C^0_u L^2_{\underline{u}} L^2(S)} + \|\slashed{\nabla}^2 K \|_{C^0_{\underline{u}} L^2_{u} L^2(S)} \leq \widetilde{C}, \label{eq:bdd.psi.top} \\
\sum_{\psi_H \in \{ \hat{\chi},\omega \}} (\|\psi_H \|_{L^2_{\underline{u}} C^0_u W^{2,2}(S)} + \|\slashed{\nabla}^3 \psi_H \|_{C^0_u L^2_{\underline{u}} L^2(S)}) \leq \widetilde{C}, \label{eq:bdd.psiH} \\
\sum_{\psi_{\underline{H}} \in \{ \hat{\underline{\chi}},\underline{\omega} \}} ( \|\psi_{\underline{H}} \|_{L^2_{u} C^0_{\underline{u}} W^{2,2}(S)} + \|\slashed{\nabla}^3 \psi_{\underline{H}} \|_{C^0_{\underline{u}} L^2_{u} L^2(S)}) \leq \widetilde{C}, \label{eq:bdd.psiHb} \\
\sum_{\psi\in \{\eta,\,\underline{\eta} \}} (\|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi\|_{C^0_u L^2_{\underline{u}} W^{2,2}(S)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi\|_{C^0_{\underline{u}} L^2_{u} W^{2,2}(S)}) \leq \widetilde{C}, \label{eq:bdd.psi.trans} \\
\|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \slashed{\mathrm{tr}}\chi \|_{C^0_u L^1_{\underline{u}} W^{3,2}(S)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed{\mathrm{tr}}\chi \|_{L^2_u C^0_{\underline{u}} W^{2,2}(S)} \leq \widetilde{C}, \label{eq:bdd.trch.trans} \\
\|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed{\mathrm{tr}}\underline{\chi} \|_{C^0_{\underline{u}} L^1_{u} W^{3,2}(S)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \slashed{\mathrm{tr}}\underline{\chi} \|_{L^2_{\underline{u}} C^0_{u} W^{2,2}(S)}\leq \widetilde{C}, \label{eq:bdd.trchb.trans} \\
\sum_{\psi_H \in \{ \hat{\chi},\omega \}} \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_H\|_{L^2_u L^2_{\underline{u}} W^{2,2}(S)} + \sum_{\psi_{\underline{H}} \in \{
\hat{\underline{\chi}},\underline{\omega} \}} \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_{\underline{H}}\|_{L^2_u L^2_{\underline{u}} W^{2,2}(S)} \leq \widetilde{C}, \label{eq:bdd.psiH.psiHb.trans}
\end{align}
where we have used the shorthand above that $W^{k,2}(S) = W^{k,2}(S_{u,\underline{u}},\gamma)$.
\end{theorem}
\begin{remark}
Notice that the conditions in Lemma~\ref{lem:suff.cond.on.data} guarantee that the assumptions \eqref{eq:LR.gamma.data}--\eqref{eq:LR.data.4} of Theorem~\ref{ext.thm} hold.
\end{remark}
\section{Main results}\label{sec.main.thm}
\subsection{Existence and characterization of the limiting spacetime}
\begin{theorem}\label{main.thm}
Consider a sequence of smooth characteristic initial data for the vacuum Einstein equations
$$Ric_{\mu\nu}=0.$$
Assume that all the geometric quantities obey the bounds \eqref{eq:LR.gamma.data}--\eqref{eq:LR.data.4} in Theorem~\ref{ext.thm} uniformly in $n$. Denoting $(\gamma_{0,0})_n = \gamma_n \restriction_{S_{0,0}}$, assume that there is a $C^3$ limit $(\gamma_{0,0})_\infty$, i.e.
\begin{equation}\label{eq:gamma.C3.conv}
(\gamma_{0,0})_n \to (\gamma_{0,0})_\infty \mbox{ in $C^3$}.
\end{equation}
Then, there exists $\epsilon>0$ sufficiently small (independent of $n$) such that the following hold for $\mathcal M:= [0,u_*]\times [0,\underline{u}_*] \times \mathbb S^2$ with $\underline{u}_* \in (0,\epsilon]$:
\begin{enumerate}
\item There exists a sequence of spacetimes $(\mathcal M, g_n)$ in double null coordinates
$$g_n=-2\Omega_n^2(du\otimes d\underline{u}+d\underline{u} \otimes du)+(\gamma_n)_{AB} (d\theta^A-b_n^A du)\otimes (d\theta^B-b_n^B du)$$
corresponding to the sequence of initial data, which solve the Einstein vacuum equations.
\item A subsequence of spacetime metrics $(\mathcal M, \,g_{n_k})$ converges in $C^0$ in the null coordinate system to a limiting spacetime $(\mathcal M,\,g_{\infty})$
$$g_\infty=-2\Omega_{\infty}^2(du\otimes d\underline{u}+d\underline{u} \otimes du)+( \gamma_{\infty} )_{AB} (d\theta^A-b_{\infty}^A du)\otimes (d\theta^B-b_{\infty}^B du).$$
In addition, the weak derivatives of the components of $g_{n_k}$ converge weakly in (spacetime) $L^2$ to the weak derivatives of the components of $g_\infty$; and the Ricci coefficients corresponding to $g_{n_k}$ converge weakly in (spacetime) $L^2$ to the Ricci coefficients corresponding to $g_\infty$.
\item After passing to a further subsequence (not relabeled), the following weak-* limits exist:
\begin{equation}\label{eq:dnu.def.thm}
\mathrm{d}\nu_u:= \mathrm{weak}\mbox{-*}\lim_{k\to +\infty} (\Omega^2_{n_k} |\hat{\chi}_{n_k}|_{\gamma_{n_k}}^2\, \mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} \underline{u} - \Omega^2_{\infty} |\hat{\chi}_{\infty}|_{\gamma_{\infty}}^2\, \mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} \underline{u}),
\end{equation}
\begin{equation}\label{eq:dnub.def.thm}
\mathrm{d}\underline{\nu}_{\underline{u}}:= \mathrm{weak}\mbox{-*}\lim_{k\to +\infty} (\Omega^2_{n_k} |\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2\, \mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} u - \Omega^2_{\infty} |\hat{\underline{\chi}}_{\infty}|_{\gamma_{\infty}}^2\, \mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} u).
\end{equation}
Moreover, $(\mathcal M, g_\infty, \{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}} \}_{\underline{u}\in [0,\underline{u}_*]})$ is an angularly regular weak solution to the Einstein--null dust system in the sense of Definition~\ref{def:weak.sol.ang.reg}.
\item For $(\mathcal M, g_\infty, \{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}} \}_{\underline{u}\in [0,\underline{u}_*]})$, the renormalized Bianchi equations are satisfied in the sense of Definition~\ref{def:Bianchi.integrated}.
\item For $(\mathcal M, g_\infty, \{\mathrm{d}\nu_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}_{\underline{u}} \}_{\underline{u}\in [0,\underline{u}_*]})$, the equations \eqref{eq:nablagamma}, \eqref{eq:nablaOmega}, \eqref{eq:nablab}, \eqref{eq:mu.0}, \eqref{eq:mub.0}, \eqref{eq:Xtrch.0} and \eqref{eq:Xtrchb.0} are satisfied in the sense of Definition~\ref{def:aux.integrated}.
\end{enumerate}
\end{theorem}
Theorem~\ref{main.thm} will be proven in \textbf{Sections~\ref{sec:existence} and \ref{sec:eqns.for.limit}}. See the conclusion of the proof in Section~\ref{sec:proof.of.main.thm}. Some remarks are in order.
\begin{remark}
To simplify the statements, we only asserted that the limit is achieved in the $C^0$ and the $W^{1,2}$-weak sense. In fact, various Ricci coefficients enjoy better convergence properties; see the more precise convergence statements in Section~\ref{sec:existence}.
\end{remark}
\begin{remark}
As we explained in the introduction, Theorem~\ref{main.thm} implies a fortiori that $(\mathcal M,\,g_{\infty})$ is vacuum if and only if $\mathrm{d}\nu_0= 0$ and $\mathrm{d}\underline{\nu}_0 =0$ (Corollary~\ref{cor:vac.cond.intro}).
\end{remark}
\subsection{Uniqueness of the limiting spacetime}
\begin{theorem}\label{thm:uniqueness}
Let $\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. Suppose $(\mathcal M,\, g^{(1)}, \,\{\mathrm{d}\nu_u^{(1)}\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M,\, g^{(2)}, \,\{\mathrm{d}\nu_u^{(2)}\}_{u\in [0,u_*]},\, \{\mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}\}_{\underline{u} \in [0,\underline{u}_*]})$ are such that the following holds:
\begin{enumerate}
\item $(\mathcal M,\, g^{(1)}, \,\{\mathrm{d}\nu_u^{(1)}\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M,\, g^{(2)}, \,\{\mathrm{d}\nu_u^{(2)}\}_{u\in [0,u_*]},\, \{\mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}\}_{\underline{u} \in [0,\underline{u}_*]})$ are both angularly regular weak solutions to the Einstein--null dust system in the sense of Definition~\ref{def:weak.sol.ang.reg}.
\item $(\mathcal M,\, g^{(1)}, \,\{\mathrm{d}\nu_u^{(1)}\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M,\, g^{(2)}, \,\{\mathrm{d}\nu_u^{(2)}\}_{u\in [0,u_*]},\, \{\mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}\}_{\underline{u} \in [0,\underline{u}_*]})$ both satisfy the renormalized Bianchi equations in the sense of Definition~\ref{def:Bianchi.integrated}.
\item $(\mathcal M,\, g^{(1)}, \,\{\mathrm{d}\nu_u^{(1)}\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M,\, g^{(2)}, \,\{\mathrm{d}\nu_u^{(2)}\}_{u\in [0,u_*]},\, \{\mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}\}_{\underline{u} \in [0,\underline{u}_*]})$ both satisfy the equations \eqref{eq:nablagamma}, \eqref{eq:nablaOmega}, \eqref{eq:nablab}, \eqref{eq:mu.0}, \eqref{eq:mub.0}, \eqref{eq:Xtrch.0} and \eqref{eq:Xtrchb.0} in the sense of Definition~\ref{def:aux.integrated}.
\item $(\mathcal M,\, g^{(1)}, \,\{\mathrm{d}\nu_u^{(1)}\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M,\, g^{(2)}, \,\{\mathrm{d}\nu_u^{(2)}\}_{u\in [0,u_*]},\, \{\mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}\}_{\underline{u} \in [0,\underline{u}_*]})$ have the same initial data in the sense that\footnote{Note that while we only explicitly assumed that the differences of very specific Ricci coefficients vanish on the initial hypersurfaces, in fact it holds that all Ricci coefficients coincide on the initial hypersurfaces. This is because the remaining Ricci coefficients can be written as tangential (along the initial hypersurfaces) derivatives of the metric components.} $$(\gamma^{(1)} - \gamma^{(2)},\, b^{(1)} - b^{(2)},\, \log\frac{\Omega^{(1)}}{\Omega^{(2)}})\restriction_{S_{0,\underline{u}}} = 0,\,\,\forall \underline{u} \in [0,\underline{u}_*],$$ $$(\gamma^{(1)} - \gamma^{(2)},\, b^{(1)} - b^{(2)},\, \log\frac{\Omega^{(1)}}{\Omega^{(2)}}) \restriction_{S_{u,0}} = 0,\,\,\forall u\in [0,u_*],$$ $$(\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)})^+ \restriction_{S_{0,\underline{u}}} = 0,\,\,\forall \underline{u} \in [0,\underline{u}_*],$$ $$((\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)})^+,\,\eta^{(1)}-\underline{\eta}^{(1)}-\eta^{(2)}+ \underline{\eta}^{(2)}) \restriction_{S_{u,0}} = 0,\,\,\forall u \in [0,u_*],$$ $$\mathrm{d} \nu_0^{(1)} - \mathrm{d} \nu_0^{(2)} = 0,\quad \mathrm{d} \underline{\nu}_0^{(1)} - \mathrm{d} \underline{\nu}_0^{(2)} = 0.$$ \end{enumerate} Then the following holds: \begin{enumerate} \item $\gamma^{(1)}= \gamma^{(2)}$, $b^{(1)} = b^{(2)}$ and $\Omega^{(1)} = \Omega^{(2)}$ everywhere on $\mathcal M$. \item $\mathrm{d} \nu_u^{(1)} = \mathrm{d}\nu_u^{(2)}$ for all $u\in [0,u_*]$. \item $\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)} = \mathrm{d}\underline{\nu}_{\underline{u}}^{(2)}$ for all $\underline{u}\in [0,\underline{u}_*]$. \end{enumerate} \end{theorem} The proof of Theorem~\ref{thm:uniqueness} will be carried out in \textbf{Section~\ref{sec:proof.uniqueness}}. \subsection{Characteristic initial value problem for the Einstein--null dust system with angularly regular measure-valued null dust} We now turn to our main results on the characteristic initial value problem for the Einstein--null dust system with angularly regular measure-valued null dust. We first introduce the class of initial data that we consider. For simplicity, we will only consider characteristic initial data for which $b^A \equiv 0$ on $\underline{H}_0$. \begin{definition}[Strongly angularly regular reduced characteristic initial data]\label{def:SARCID} Impose characteristic initial data on $H_0 = [0, \underline{I}] \times \mathbb S^2$ and $\underline{H}_0 = [0, I] \times \mathbb S^2$, where $\{0\}\times \mathbb S^2 \subset H_0$ and $\{0\}\times \mathbb S^2 \subset \underline{H}_0$ are identified (see Figure~\ref{fig:CIVP}). Let $\mathring{\gamma}$ be an (arbitrary) auxiliary smooth metric on $\mathbb S^2$.
Define $\mathring{\gamma}$ on $H_0\cup \underline{H}_0$ by imposing $$\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \mathring{\gamma} = 0 = \slashed{\mathcal L}_{\frac{\partial}{\partial u}} \mathring{\gamma}.$$ A \textbf{strongly angularly regular reduced characteristic initial data set} to the Einstein--null dust system consists of a hextuple $(\Omega,\,\Phi,\, \hat{\gamma},\, \slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}},\, \mathrm{d} \nu_{\mathrm{init}},\,\mathrm{d} \underline{\nu}_{\mathrm{init}})$ with the following properties: \begin{enumerate} \item $\Omega>0$ is a smooth\footnote{Smoothness of $\Omega$ can also be dropped and replaced by $$\| \log\Omega \|_{C^0_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} +\|\frac{\partial \log\Omega}{\partial \underline{u}}\|_{L^2_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} < +\infty,$$ $$\| \log\Omega \|_{C^0_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} +\|\frac{\partial \log\Omega}{\partial u}\|_{L^2_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} < +\infty.$$ Since this would lengthen some arguments in Section~\ref{sec.approx.thm}, we will simply work with the slightly stronger assumption on $\Omega$.} function on $H_0 \cup \underline{H}_0$. \item $\Phi>0$ is a Lipschitz function on $H_0\cup \underline{H}_0$ such that $\frac{\partial\Phi}{\partial\underline{u}}\restriction_{H_0} \in BV(H_0, \mathring{\gamma})$ and $\frac{\partial\Phi}{\partial u}\restriction_{\underline{H}_0} \in BV(\underline{H}_0, \mathring{\gamma})$ (see Definition~\ref{def:BV}). Moreover, $$\|\Phi\|_{C^0_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\Phi^{-1}\|_{C^0_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\frac{\partial\Phi}{\partial\underline{u}}\|_{L^\infty_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} <+\infty,$$ $$\|\Phi\|_{C^0_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} + \|\Phi^{-1}\|_{C^0_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} + \|\frac{\partial\Phi}{\partial u}\|_{L^\infty_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} <+\infty.$$ \item $\hat{\gamma}$ is a continuous covariant $S$-tangent $2$-tensor which restricts to a positive definite metric on each $S_{0,\underline{u}}$ or $S_{u,0}$. Moreover, $\hat{\gamma}$ satisfies the following properties: \begin{enumerate} \item $$\frac{\det{\hat{\gamma}}}{\det{\mathring{\gamma}}} = 1.$$ \item $$\| \hat{\gamma} \|_{C^0_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} +\|\frac{\partial \hat{\gamma}}{\partial \underline{u}}\|_{L^2_{\underline{u}} W^{6,\infty}(S_{0,\underline{u}},\mathring{\gamma})} < +\infty,$$ $$\| \hat{\gamma} \|_{C^0_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} +\|\frac{\partial \hat{\gamma}}{\partial u}\|_{L^2_{u} W^{6,\infty}(S_{u,0},\mathring{\gamma})} < +\infty.$$ \end{enumerate} \item $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}}$ is a $W^{5,2}(S_{0,0},\mathring{\gamma})$ vector field.
\item $\mathrm{d}\nu_{\mathrm{init}}$ is a non-negative Radon measure on $(0,\underline{I})\times \mathbb S^2$ satisfying the following regularity estimate: \begin{equation}\label{eq:dnu.init.bound.0} \sup \left\{ \sum_{0\leq k\leq 6}\left| \int_{(0,\underline{I})\times \mathbb S^2} (\mathring{\div}{}^k\varphi^{(k)})(\underline{u},\vartheta)\, \mathrm{d}\nu_{\mathrm{init}} \right| : \varphi^{(k)}\in C^\infty,\,\|\varphi^{(k)}\|_{L^\infty_{\underline{u}}L^1(S_{0,\underline{u}},\mathring{\gamma})} \leq 1 \right\} < +\infty, \end{equation} where $\varphi^{(k)}$ is a rank-$k$ tensor field. $\mathrm{d}\underline{\nu}_{\mathrm{init}}$ is a non-negative Radon measure on $(0,I)\times \mathbb S^2$ satisfying the following regularity estimate: \begin{equation}\label{eq:dnub.init.bound.0} \sup \left\{ \sum_{0\leq k\leq 6}\left| \int_{(0,I)\times \mathbb S^2} (\mathring{\div}{}^k\varphi^{(k)})(u,\vartheta)\, \mathrm{d}\underline{\nu}_{\mathrm{init}} \right| : \varphi^{(k)}\in C^\infty,\,\|\varphi^{(k)}\|_{L^\infty_{u}L^1(S_{u,0},\mathring{\gamma})} \leq 1 \right\} < +\infty, \end{equation} where $\varphi^{(k)}$ is a rank-$k$ tensor field. \item On $H_0$, $(\Omega,\,\Phi,\,\hat{\gamma},\,\mathrm{d}\nu_{\mathrm{init}})$ together satisfy the constraint equations \begin{equation}\label{eq:dnu.init} \begin{split} &\: - \int_0^{\underline{I}} \int_{\mathbb S^2} (\frac{\partial\varphi}{\partial\underline{u}} \Omega^{-2} \frac{\partial\Phi}{\partial \underline{u}})(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}' \\ = &\: -\frac 18 \int_0^{\underline{I}} \int_{\mathbb S^2} (\varphi \Omega^{-2} |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|^2_{\hat{\gamma}} \Phi)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}' - \frac 12 \int_{(0,\underline{I})\times \mathbb S^2} \Phi^{-1}\varphi (\underline{u}',\vartheta)\,\mathrm{d}\nu_{\mathrm{init}} \end{split} \end{equation} for any $\varphi \in C^\infty_c((0,\underline{I})\times \mathbb S^2)$. Similarly, on $\underline{H}_0$, $(\Omega,\,\Phi,\,\hat{\gamma},\,\mathrm{d}\underline{\nu}_{\mathrm{init}})$ together satisfy the constraint equations \begin{equation}\label{eq:dnub.init} \begin{split} &\:-\int_0^{I} \int_{\mathbb S^2} (\frac{\partial\varphi}{\partial u} \Omega^{-2} \frac{\partial\Phi}{\partial u})(u',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} u' \\ = &\: -\frac 18 \int_0^{I} \int_{\mathbb S^2} (\varphi \Omega^{-2} |\frac{\partial\hat{\gamma}}{\partial u}|^2_{\hat{\gamma}} \Phi)(u',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} u' - \frac 12 \int_{(0,I)\times \mathbb S^2} \Phi^{-1}\varphi (u',\vartheta)\,\mathrm{d}\underline{\nu}_{\mathrm{init}} \end{split} \end{equation} for any $\varphi \in C^\infty_c((0,I)\times \mathbb S^2)$. \end{enumerate} \end{definition} We emphasize that the initial data in Definition~\ref{def:SARCID} are consistent with the initial null dust being merely a measure, and the initial metric being merely $C^0 \cap W^{1,2}$.
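To illustrate the constraint \eqref{eq:dnu.init}, it may help to record the following model example (a sketch under simplifying assumptions; the parameters $m$ and $\underline{u}_0$ below are purely illustrative and are not used elsewhere). Take $\Omega \equiv 1$, $\hat{\gamma}$ independent of $\underline{u}$ (so that the shear term on the right-hand side vanishes), $\Phi = \Phi(\underline{u})$ depending only on $\underline{u}$, and an impulsive shell $\mathrm{d}\nu_{\mathrm{init}} = 2m\, \delta_{\underline{u}_0}\, \mathrm{dA}_{\mathring{\gamma}}$ concentrated on $\{\underline{u}_0\}\times \mathbb S^2$, with $m>0$ constant and $\underline{u}_0 \in (0,\underline{I})$. Integrating by parts in \eqref{eq:dnu.init} then forces $\Phi$ to be affine in $\underline{u}$ on either side of $\underline{u}_0$, with a jump of its transversal derivative given by
$$\lim_{\underline{u}\downarrow \underline{u}_0} \frac{\partial \Phi}{\partial \underline{u}} - \lim_{\underline{u}\uparrow \underline{u}_0} \frac{\partial \Phi}{\partial \underline{u}} = -\frac{m}{\Phi(\underline{u}_0)}.$$
This is the familiar distributional Raychaudhuri jump condition across an impulsive null shell; in particular, $\frac{\partial\Phi}{\partial\underline{u}}\restriction_{H_0}$ is of bounded variation but discontinuous, which is consistent with (and motivates) the $BV$ requirement in item~(2) of Definition~\ref{def:SARCID}.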
The following is our local existence and uniqueness theorem for the Einstein--null dust system with measure-valued null shells (cf.~Theorem~\ref{thm:null.shells.intro}): \begin{theorem}\label{thm:main.local.dust} Given a strongly angularly regular reduced characteristic initial data set $(\Omega,\,\Phi,\, \hat{\gamma},\, \slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} b\restriction_{S_{0,0}},\, \mathrm{d} \nu_{\mathrm{init}},\,\mathrm{d} \underline{\nu}_{\mathrm{init}})$ as in Definition~\ref{def:SARCID}, there exists $\epsilon>0$ sufficiently small such that whenever $u_* \in (0,I]$ and $\underline{u}_* \in (0,\epsilon]$, there exists a unique angularly regular weak solution to the Einstein--null dust system in the sense of Definition~\ref{def:weak.sol.ang.reg} in the domain $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ which achieves the prescribed initial data. \end{theorem} The proof of Theorem~\ref{thm:main.local.dust} will be completed in \textbf{Section~\ref{sec.approx.thm}}. \subsection{Approximating solutions to the Einstein--null dust system by solutions to the Einstein vacuum equations} Finally, we prove that (up to some technical assumptions) any angularly regular weak solution to the Einstein--null dust system can be weakly approximated by smooth solutions to the Einstein vacuum equations (cf.~Corollary~\ref{cor:reverse.Burnett.intro}): \begin{theorem}\label{thm:reverse.Burnett} Let $\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. Suppose $(\mathcal M,\, g, \,\{\mathrm{d}\nu_u\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}} \}_{\underline{u}\in [0,\underline{u}_*]})$ is an angularly regular weak solution (see Definition~\ref{def:weak.sol.ang.reg}) to the Einstein--null dust system with strongly angularly regular characteristic initial data (see Definition~\ref{def:SARCID}) and $b \restriction_{\underline{H}_0} = 0$. Assume moreover that the strongly angularly regular characteristic initial data set satisfies $$\mathrm{supp}(\mathrm{d}\nu_{\mathrm{init}}) \subset [0,\underline{I} ]\times U^c,\quad\mathrm{supp}(\mathrm{d}\underline{\nu}_{\mathrm{init}}) \subset [0,I]\times U^c$$ for some non-empty open set $U\subset \mathbb S^2$. Let $\widetilde{\mathcal M} = [0,u_*]\times [0,\underline{u}_{**}]\times \mathbb S^2 \subseteq \mathcal M$. Then, for $\underline{u}_{**}\in (0, \underline{u}_*]$ sufficiently small, there exists a sequence of smooth solutions to the Einstein vacuum equations $(\widetilde{\mathcal M},\, g_n)$ in double null coordinates such that $(\widetilde{\mathcal M},\,g_n)$ converges to $(\widetilde{\mathcal M},\,g\restriction_{\widetilde{\mathcal M}})$ in the sense described in Theorem~\ref{main.thm}. \end{theorem} The proof of Theorem~\ref{thm:reverse.Burnett} will be completed in \textbf{Section~\ref{sec.approx.thm}}. We explain some of the assumptions in the following remarks. \begin{remark} Theorem~\ref{thm:reverse.Burnett} requires the initial data to satisfy $b \restriction_{\underline{H}_0} = 0$. This is however not a serious restriction: given any $(\mathcal M,\, g, \,\{\mathrm{d}\nu_u\}_{u\in [0,u_*]},\,\{\mathrm{d} \underline{\nu}_{\underline{u}} \}_{\underline{u}\in [0,\underline{u}_*]})$, one can always change coordinates on the $2$-spheres to achieve the vanishing of $b$ on $\underline{H}_0$. \end{remark} \begin{remark} Theorem~\ref{thm:reverse.Burnett} requires the vanishing of the (initial) null dust in some angular directions.
This restriction is due to a hairy ball theorem-type obstruction in prescribing a symmetric traceless tensor on the $2$-sphere. A consequence of this assumption is that, if we consider solutions on $\widetilde{\mathcal M} = [0,u_*]\times [0,\underline{u}_{**}]\times \mathbb S^2$, Theorem~\ref{thm:reverse.Burnett} only concerns a restrictive class of solutions. Nevertheless, by the finite speed of propagation, it does imply that (sufficiently\footnote{Note that we still need assumptions beyond those in Definition~\ref{def:weak.sol.ang.reg}, as the data are required to be more regular.}) angularly regular weak solutions can always be \underline{locally} weakly approximated by vacuum solutions. \end{remark} \section{General compactness results}\label{sec:gen.compactness} \subsection{Preliminaries} The following Sobolev embedding result is standard (recall Definition~\ref{def:isoperimetric} for notations). \begin{proposition}[Sobolev embedding]\label{prop:Sobolev} \begin{enumerate} \item (\cite[Lemma 5.1]{Chr}) Let $2< p<\infty$ and $r\in \mathbb N\cup\{0\}$. There exists a constant $C_{p,r}>0$, depending only on $p$ and $r$, such that for any closed Riemannian $2$-manifold $(S,\gamma)$ with a $C^2$ metric\footnote{While the lemma is stated in \cite{Chr} for smooth metrics, the $C^2$ case follows from an easy approximation argument.}, $$(\mbox{Area}(S))^{-\f1p}\|\xi\|_{L^p(S)}\leq C_{p,r}\sqrt{\max\{{\bf I}(S),1\}}(\|\slashed{\nabla}\xi\|_{L^2(S)}+(\mbox{Area}(S))^{-\f12}\|\xi\|_{L^2(S)})$$ for any covariant tensor $\xi$ of rank $r$. \item (\cite[Lemma~5.2]{Chr}) Let $2< p<\infty$ and $r\in \mathbb N\cup\{0\}$. There exists a constant $C_{p,r}>0$, depending only on $p$ and $r$, such that for any closed Riemannian $2$-manifold $(S,\gamma)$ with a $C^2$ metric, $$\|\xi\|_{L^\infty(S)}\leq C_{p,r}\sqrt{\max\{{\bf I}(S),1\}}(\mbox{Area}(S))^{\f12-\f1p}(\|\slashed{\nabla}\xi\|_{L^p(S)}+(\mbox{Area}(S))^{-\f12}\|\xi\|_{L^p(S)})$$ for any covariant tensor $\xi$ of rank $r$. \end{enumerate} \end{proposition} \begin{proposition}\label{prop:transport.id} The following identities hold for any $C^1$ function $f$: $$\frac{\partial}{\partial u} \int_{S_{u,\underline{u}}} f \,\mathrm{dA}_{\gamma} = \int_{S_{u,\underline{u}}} \Omega \left(\slashed{\nabla}_3 f + \slashed{\mathrm{tr}}\underline{\chi} f \right) \,\mathrm{dA}_{\gamma} = \int_{S_{u,\underline{u}}} \left(\frac{\partial}{\partial u} f + \Omega \slashed{\mathrm{tr}}\underline{\chi} f \right) \,\mathrm{dA}_{\gamma},$$ $$\frac{\partial}{\partial \underline{u}} \int_{S_{u,\underline{u}}} f \,\mathrm{dA}_{\gamma} = \int_{S_{u,\underline{u}}} \Omega \left(\slashed{\nabla}_4 f + \slashed{\mathrm{tr}}\chi f \right) \,\mathrm{dA}_{\gamma}.$$ \end{proposition} \subsection{Compactness theorems}\label{sec:cpt.AA} Starting from this subsection, we prove various general compactness results. We will \textbf{consider the following setup for the remaining subsections in this section}: \begin{itemize} \item We will consider as our domain the manifold (with corners) $\mathcal M = [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. \item On $\mathcal M$, we fix a $C^3$ metric $\mathring{\gamma}$ (independent of $n$) such that \begin{equation}\label{eq:compact,HO.DT} \slashed{\mathcal L}_{\frac{\partial}{\partial u}} \mathring{\gamma} = \slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \mathring{\gamma} = 0. \end{equation} This will be used for defining the norms.
\end{itemize} Our first compactness result is the following simple variation of the Arzela--Ascoli theorem. \begin{proposition}\label{prop:AA.gen} Consider either the case $(p,q) = (+\infty, 4)$ or $(p,q) = (4,2)$. Let $\{\psi_n\}_{n=1}^{+\infty}$ be a sequence of covariant rank $r$ smooth $S$-tangent tensors satisfying the following uniform bounds: \begin{equation}\label{eq:assumption.for.AA} \sup_n (\|\psi_n\|_{L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_n \|_{L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_n \|_{L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})})< +\infty. \end{equation} Then, there exists a subsequence $\{\psi_{n_k}\}_{k=1}^{+\infty}$ and a $\psi_\infty\in C^0_u C^0_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})$ such that \begin{equation}\label{eq:AA.gen} \lim_{k\to +\infty} \|\psi_{n_k} - \psi_\infty\|_{L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})} = 0. \end{equation} Moreover, $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_\infty \in L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})$. \end{proposition} \begin{proof} \pfstep{Step~1: Existence of $\psi_\infty\in L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})$ and proof of \eqref{eq:AA.gen}} First, since the embedding $W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})\subseteq L^p(S_{u,\underline{u}}, \mathring{\gamma})$ is compact for all $(u,\underline{u})$, we know that for every $(u,\underline{u})$, there is a subsequence $n_k$ and a $\psi_\infty\in L^p(S_{u,\underline{u}},\mathring{\gamma})$ such that \begin{equation}\label{eq:easy.convergence} \lim_{k\to +\infty} \|\psi_{n_k} - \psi_\infty\|_{L^p(S_{u,\underline{u}},\mathring{\gamma})} = 0. \end{equation} By a standard argument extracting a diagonal subsequence, we thus obtain that for a \emph{fixed} subsequence $n_k$, \eqref{eq:easy.convergence} holds for all rational $(u,\underline{u})$. We next show that for this fixed subsequence, $\psi_{n_k}$ is uniformly (in $(u,\underline{u})$) Cauchy in $L^p(S_{u,\underline{u}}, \mathring{\gamma})$. Let $\epsilon>0$. For $(u_\mathbb Q, \underline{u}_\mathbb Q)\in ([0,u_*]\times [0,\underline{u}_*])\cap \mathbb Q^2$, let $\mathcal R(u_\mathbb Q, \underline{u}_\mathbb Q; \epsilon):= \{(u,\underline{u})\in [0,u_*]\times [0,\underline{u}_*]: |u-u_{\mathbb Q}|+ |\underline{u}-\underline{u}_{\mathbb Q}|\leq \epsilon^2\}$. Clearly, we can find a finite set of $\{(u_i,\underline{u}_i)\}_{i=1}^m \subset ([0,u_*]\times [0,\underline{u}_*])\cap \mathbb Q^2$ (depending on $\epsilon$) such that $\cup_{i=1}^m \mathcal R(u_i, \underline{u}_i; \epsilon) = [0,u_*]\times [0,\underline{u}_*]$.
Since \eqref{eq:easy.convergence} holds for $(u,\underline{u}) = (u_i,\underline{u}_i)$ for $i=1,\dots,m$, we can find $K$ sufficiently large such that whenever $1\leq i\leq m$ and $k,\,k'\geq K$, we have $$\|\psi_{n_k} - \psi_{n_{k'}} \|_{L^p(S_{u_i,\underline{u}_i}, \mathring{\gamma})} \leq \epsilon.$$ Now for every $(u,\underline{u}) \in [0,u_*]\times [0,\underline{u}_*]$, we can find the closest $(u_i,\underline{u}_i)$ so that we obtain \begin{equation}\label{eq:AA.Cauchy} \begin{split} &\: \|\psi_{n_k} - \psi_{n_{k'}}\|_{L^p(S_{u,\underline{u}}, \mathring{\gamma})} \\ \lesssim &\: \|\psi_{n_k} - \psi_{n_{k'}} \|_{L^p(S_{u_i,\underline{u}_i}, \mathring{\gamma})} + \left| \int_{u_i}^u (\|\slashed{\mathcal L}_{\frac{\partial}{\partial u}}\psi_{n_k}\|_{L^p(S_{u',\underline{u}_i},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}}\psi_{n_{k'}} \|_{L^p(S_{u',\underline{u}_i},\mathring{\gamma})} ) \,\mathrm{d} u' \right|\\ &\: + \left| \int_{\underline{u}_i}^{\underline{u}} (\|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}}\psi_{n_k}\|_{L^p(S_{u,\underline{u}'},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}}\psi_{n_{k'}} \|_{L^p(S_{u,\underline{u}'},\mathring{\gamma})} ) \,\mathrm{d} \underline{u}'\right| \lesssim \epsilon \end{split} \end{equation} whenever $k,\,k'\geq K$. This proves that there exists $\psi_\infty\in L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})$ and that \eqref{eq:AA.gen} holds. \pfstep{Step~2: $\psi_\infty\in C^0_u C^0_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})$ } Since $\psi_{n_k}$ are smooth and $\psi_\infty$ is their limit in $L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})$, it immediately follows that $\psi_\infty \in C^0_u C^0_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})$. Next, notice that for every fixed $(u,\underline{u})$, the given uniform $W^{1,q}(S_{u,\underline{u}},\mathring{\gamma})$ estimate in \eqref{eq:assumption.for.AA} and the Banach--Alaoglu theorem imply that there exists $\phi_\infty$ such that for a further subsequence, $\mathring{\slashed{\nabla}}\psi_{n_k} \rightharpoonup \phi_\infty$ \emph{weakly} in $L^q(S_{u,\underline{u}},\mathring{\gamma})$. Since the $W^{1,q}(S_{u,\underline{u}},\mathring{\gamma})$ estimate is uniform in $(u,\underline{u})$, it follows that $\phi_\infty \in L^\infty_u L^\infty_{\underline{u}} L^{q}(S_{u,\underline{u}}, \mathring{\gamma})$. It is easy to check that $\phi_\infty = \mathring{\slashed{\nabla}}\psi_\infty$, proving that $\psi_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})$. \pfstep{Step~3: $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_\infty \in L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})$} For every fixed $u$, the estimate \eqref{eq:assumption.for.AA} and the Banach--Alaoglu theorem imply that after passing to a further subsequence, $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_{n_{k_\ell}}$ has a \emph{weak} limit in $L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$.
It is easy to check that the limit coincides with $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_{\infty}$, and therefore $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_\infty \in L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$. The proof of $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})$ is similar; we omit the details. \qedhere \end{proof} \begin{proposition}\label{prop:compact.embeddings} Consider one of the following cases: (1) $m\in \{0,1\}$, $(p,q)= (+\infty, 4)$; (2) $m\in \{0,1,2\}$, $(p,q)= (4,2)$. Let $\{\psi_n\}_{n=1}^{+\infty}$ be a sequence of covariant rank $r$ smooth $S$-tangent tensors satisfying the following uniform bounds: $$\sup_n (\|\psi_n\|_{L^\infty_u L^\infty_{\underline{u}} W^{m+1,q}(S_{u,\underline{u}}, \mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_n \|_{L^\infty_u L^2_{\underline{u}}W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_n \|_{L^\infty_{\underline{u}} L^2_u W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})})< +\infty.$$ Then, there exists a subsequence $\{\psi_{n_k}\}_{k=1}^{+\infty}$ and a $\psi_\infty\in C^0_u C^0_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u L^\infty_{\underline{u}} W^{m+1,q}(S_{u,\underline{u}}, \mathring{\gamma})$ such that \begin{equation}\label{eq:AA.gen.2} \lim_{k\to +\infty} \|\psi_{n_k} - \psi_\infty\|_{L^\infty_u L^\infty_{\underline{u}} W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})} = 0. \end{equation} Moreover, $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_\infty \in L^\infty_u L^2_{\underline{u}}W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^\infty_{\underline{u}} L^2_u W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})$. \end{proposition} \begin{proof} This is straightforward from Proposition~\ref{prop:AA.gen}. For $m=0$, this is exactly Proposition~\ref{prop:AA.gen}. We consider the case $m=1$. By Proposition~\ref{prop:AA.gen}, there exists a subsequence $\psi_{n_k}$ and $\psi_\infty\in C^0_u C^0_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})$ such that $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \psi_\infty \in L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})$ and \begin{equation*} \lim_{k\to +\infty} \|\psi_{n_k} - \psi_\infty\|_{L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})} = 0. 
\end{equation*} Moreover, using Proposition~\ref{prop:AA.gen} for $\mathring{\slashed{\nabla}} \psi_{n_k}$, we see that after passing to a further subsequence (not relabeled), there exists $\phi_\infty\in C^0_u C^0_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u L^\infty_{\underline{u}} W^{1,q}(S_{u,\underline{u}}, \mathring{\gamma})$ such that $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \phi_\infty \in L^\infty_u L^2_{\underline{u}}L^p(S_{u,\underline{u}},\mathring{\gamma})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \phi_\infty \in L^\infty_{\underline{u}} L^2_u L^p(S_{u,\underline{u}},\mathring{\gamma})$ and \begin{equation*} \lim_{k\to +\infty} \|\mathring{\slashed{\nabla}}\psi_{n_k} - \phi_\infty\|_{L^\infty_u L^\infty_{\underline{u}} L^p(S_{u,\underline{u}},\mathring{\gamma})} = 0. \end{equation*} The conclusion of the proposition in the $m=1$ case then follows after checking that $\phi_\infty = \mathring{\slashed{\nabla}}\psi_{\infty}$ and using that $[\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}}, \mathring{\slashed{\nabla}}] = [\slashed{\mathcal L}_{\frac{\partial}{\partial u}}, \mathring{\slashed{\nabla}}] = 0$ (due to \eqref{eq:compact,HO.DT}). The $m=2$ case is similar to the $m=1$ case; we omit the details. \qedhere \end{proof} \subsection{Compactness in BV} \begin{lemma}[Aubin--Lions lemma]\label{lem:AL} Let $X_0\subseteq X \subseteq X_1$ be three Banach spaces such that $X_0\subseteq X$ is compact and $X \subseteq X_1$ is continuous. For $T>0$ and $q>1$, let $$W:= \{ v \in L^\infty([0,T]; X_0): \dot{v} \in L^q([0,T]; X_1) \},$$ where $\dot{}$ denotes the (weak) derivative in the variable on $[0,T]$. Then $W$ embeds compactly into $C^0([0,T]; X)$. \end{lemma} \begin{proposition}[Compactness of BV functions (Theorems~5.2 and 5.5 in \cite{Evans})]\label{prop:BV.general} Let $U\subset \mathbb R^k$ be open and bounded, with Lipschitz boundary $\partial U$. Assume $\{\psi_n\}_{n=1}^{+\infty}$ is a sequence in (Euclidean) $BV(U)$ satisfying $$\sup_n \|\psi_n\|_{BV(U)}:= \sup_n \left(\int_{U} |\psi_n|\,\mathrm{d} x+ \sup\{\int_U \psi_n \mathrm{div}_{\mathbb R^k} \phi \,\mathrm{d} x : \phi \in C^1_c(U;\mathbb R^k),\,\|\phi\|_{L^\infty(U)} \leq 1\}\right)<+\infty.$$ Then there exists a subsequence $\{\psi_{n_k}\}_{k=1}^{+\infty}$ and a function $\psi_\infty \in BV(U)$ such that $$\int_U |\psi_{n_k} - \psi_\infty|\,\mathrm{d} x\to 0$$ as $k\to +\infty$. Moreover, \begin{equation}\label{eq:BV.general.est} \|\psi_\infty\|_{BV(U)} \leq \liminf_{k\to +\infty} \|\psi_{n_k}\|_{BV(U)}. \end{equation} \end{proposition} We now consider the application of Proposition~\ref{prop:BV.general} to our setting. In particular, in the proposition below, we continue to work in the setting described in the beginning of Section~\ref{sec:cpt.AA}. 
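Before stating it, it may be helpful to keep in mind the following elementary one-dimensional illustration (not needed for any of the proofs): on $U = (-1,1)$, the jump function $\psi = \mathbf{1}_{[0,1)}$ satisfies
$$\|\mathbf{1}_{[0,1)}\|_{BV(U)} = \int_U |\mathbf{1}_{[0,1)}|\,\mathrm{d} x + \sup\Big\{ \int_0^1 \phi'\,\mathrm{d} x : \phi \in C^1_c(U),\, \|\phi\|_{L^\infty(U)}\leq 1 \Big\} = 1 + \sup_{\phi}\,(-\phi(0)) = 2,$$
so a unit jump contributes exactly one unit of total variation, and the translates $\mathbf{1}_{[x_j,1)}$ with $x_j \to 0$ converge in $L^1(U)$ to $\mathbf{1}_{[0,1)}$, as guaranteed by Proposition~\ref{prop:BV.general}. In our setting, it is $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$, which may jump across the null dust shells, that will be controlled in this topology (cf.~Section~\ref{sec:limit.trch}).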
\begin{proposition}\label{prop:BV} \begin{enumerate} \item Let $\{\psi_n\}_{n=1}^{+\infty}$ be a sequence of smooth functions such that $$\sup_n (\|\psi_n\|_{C^0_u BV(H_u,\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_n\|_{L^2_u L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})})<+\infty.$$ Then there exists a subsequence $\{\psi_{n_k}\}_{k=1}^{+\infty}$ and a $\psi_\infty \in C^0_uL^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma}) \cap L^\infty_u BV(H_u,\mathring{\gamma})$ such that \begin{equation}\label{eq:BV.L1.conv} \|\psi_{n_k} - \psi_\infty\|_{C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}}, \mathring{\gamma})} \to 0 \end{equation} as $k\to +\infty$. \item An entirely symmetric statement holds after swapping $u$ and $\underline{u}$. \end{enumerate} \end{proposition} \begin{proof} We will only consider case~(1); case (2) can be treated by swapping $u$ and $\underline{u}$. First, by Lemma~\ref{lem:AL} (with $X_0 = BV(H_u,\mathring{\gamma})$, $X = X_1 = L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})$, $T = u_*$, $q = 2$) and Proposition~\ref{prop:BV.general} (which gives the compactness of $X_0\subseteq X$), it follows that there exists $\psi_\infty \in C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}}, \mathring{\gamma})$ such that \eqref{eq:BV.L1.conv} holds. The fact that $\psi_\infty \in L^\infty_u BV(H_u)$ then follows from \eqref{eq:BV.general.est} in Proposition~\ref{prop:BV.general}. \qedhere \end{proof} \subsection{Weak compactness theorems}We continue to work in the setting described in the beginning of Section~\ref{sec:cpt.AA}. \begin{proposition}\label{prop:weak} \begin{enumerate} \item Let $\{\psi_n\}_{n=1}^{+\infty}$ be a sequence of rank-$r$ $S$-tangent covariant smooth tensor fields such that \begin{equation}\label{eq:prop.weak.assumption} \sup_n (\|\psi_n\|_{C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_n\|_{L^2_u L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})})<+\infty. \end{equation} Then there exists a subsequence $\{\psi_{n_k}\}_{k=1}^{+\infty}$ and a rank-$r$ $S$-tangent covariant tensor field-valued Radon measure $\{\mathrm{d} \mu_{\psi,u}\}_{u\in [0,u_*]}$ such that the following hold: \begin{enumerate} \item For every $u\in [0,u_*]$ and every rank-$r$ $S$-tangent bounded contravariant tensor field $\varphi \in C^0$ on $(0,\underline{u}_*)\times \mathbb S^2$, \begin{equation}\label{eq:weak.conv} \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} - \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi(\underline{u},\vartheta)\cdot \mathrm{d} \mu_{\psi,u} \to 0 \end{equation} as $k\to +\infty$. 
\item $\mathrm{d} \mu_{\psi,u}$ is continuous in $u$ in the following sense: \begin{equation}\label{eq:weak.conv.cont} \begin{split} &\: \left| \int_{\{u'\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi(\underline{u},\vartheta)\cdot \mathrm{d} \mu_{\psi,u'} - \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi(\underline{u},\vartheta)\cdot \mathrm{d} \mu_{\psi,u}\right| \\ \leq &\: \limsup_{k\to +\infty} |u-u'|^{\frac 12} \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}}\psi_{n_k}\|_{L^2_u L^1_{\underline{u}}L^1(S_{u,\underline{u}},\mathring{\gamma})} \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}. \end{split} \end{equation} \item $\mathrm{d} \mu_{\psi,u}$ is uniformly bounded as follows: \begin{equation}\label{eq:weak.conv.bdd} \begin{split} &\: \sup_{u\in [0,u_*]} \sup_{\varphi \in C^0: \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}\leq 1} \left|\int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi(\underline{u},\vartheta)\cdot \mathrm{d} \mu_{\psi,u}\right| \\ \leq &\: \limsup_{k\to +\infty}\|\psi_{n_k}\|_{C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})}. \end{split} \end{equation} \end{enumerate} \item An entirely symmetric statement holds after swapping $u$ and $\underline{u}$. \end{enumerate} \end{proposition} \begin{proof} We only consider case (1); case (2) is similar. \pfstep{Step~1: Proof of \eqref{eq:weak.conv} for \underline{rational} $u$} For every \emph{fixed} $u\in [0,u_*]$, by the Banach--Alaoglu theorem, there exists a subsequence $n_k$ and a rank-$r$ $S$-tangent covariant tensor field-valued Radon measure $\mathrm{d}\mu_{\psi,u}$ such that \eqref{eq:weak.conv} holds. By considering $u\in [0,u_*]\cap \mathbb Q$ and using a standard trick of picking a diagonal subsequence, we can therefore find a \emph{fixed} subsequence $n_k$ such that \eqref{eq:weak.conv} holds for all \emph{rational} $u\in [0,u_*]$. \textbf{We now fix the subsequence $n_k$.} \pfstep{Step~2: Proof of \eqref{eq:weak.conv} for \underline{all} $u\in [0,u_*]$} We now show that for the subsequence $n_k$ fixed in Step~1, \eqref{eq:weak.conv} in fact holds for \emph{all} $u\in [0,u_*]$. First, we note that for every fixed $u\in [0,u_*]$ (not necessarily rational), we can take a further subsequence $n_{k_\ell}$ so that \eqref{eq:weak.conv} holds (for some $\mathrm{d}\mu_{\psi,u}$). Thus, in order to obtain weak-* convergence of the full subsequence $n_k$, it suffices to show that for \emph{all} $u\in [0,u_*]$ and every bounded $C^0$ $u$-independent, rank-$r$ contravariant, $S$-tangent tensor field $\varphi$, $$\left\{ \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} \right\}_{k=1}^{+\infty}$$ is a Cauchy sequence. Fix $u\in [0,u_*]$ (not necessarily rational) and $\varphi \in C^0$ for the remainder of this step.
Since $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \mathring{\gamma} = 0$, we can use the fundamental theorem of calculus, H\"older's inequality and \eqref{eq:prop.weak.assumption} to obtain \begin{equation}\label{eq:prop.weak.1} \begin{split} &\: \left| \int_{\{u'\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} - \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}\right| \\ \leq &\: \left| \int_{u'}^u \int_{\{u''\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \slashed{\mathcal L}_{\frac{\partial}{\partial u}}\psi_{n_k}\rangle \,\mathrm{dA}_{\mathring{\gamma}} \,\mathrm{d}\underline{u} \,\mathrm{d} u'' \right| \\ \leq &\: |u-u'|^{\frac 12} \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}}\psi_{n_k}\|_{L^2_u L^1_{\underline{u}}L^1(S_{u,\underline{u}},\mathring{\gamma})} \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})} \lesssim |u-u'|^{\frac 12} \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}. \end{split} \end{equation} Let $\epsilon >0$. There exists a rational $u'$ such that \begin{equation}\label{eq:prop.weak.2} |u-u'|^{\frac 12} \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S)} < \epsilon. \end{equation} By Step~1, for this rational $u'$, there exists $K>0$ such that whenever $k,\,k'\geq K$, we have \begin{equation}\label{eq:prop.weak.3} \begin{split} \left|\int_{\{u'\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} - \int_{\{u'\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_{k'}}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}\right|<\epsilon. \end{split} \end{equation} Hence, by \eqref{eq:prop.weak.1} (for both $k$ and $k'$), \eqref{eq:prop.weak.2}, \eqref{eq:prop.weak.3}, and the triangle inequality, \begin{equation*} \begin{split} \left| \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_k}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} - \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \langle \varphi(\underline{u},\vartheta), \psi_{n_{k'}}\rangle \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} \right|\lesssim \epsilon \end{split} \end{equation*} for $k,k'\geq K$, which is what we wanted to prove. \pfstep{Step~3: Proof of \eqref{eq:weak.conv.cont} and \eqref{eq:weak.conv.bdd}} The estimate \eqref{eq:weak.conv.cont} follows from \eqref{eq:prop.weak.1} and \eqref{eq:weak.conv} after taking $\limsup_{k\to +\infty}$. Finally, \eqref{eq:weak.conv.bdd} follows from \eqref{eq:weak.conv} and H\"older's inequality. \qedhere \end{proof} \begin{proposition}\label{prop:weak.L2} Suppose, in addition to the assumptions of Proposition~\ref{prop:weak}, there exists $q\in [2,+\infty)$ such that \begin{equation}\label{eq:weak.L2.assumption} \sup_n (\|\psi_n\|_{L^{q}_{\underline{u}} L^\infty_u L^2(S_{u,\underline{u}},\mathring{\gamma})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_n\|_{L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})})<+\infty.
\end{equation} Then there is a rank-$r$ $S$-tangent covariant tensor field $\psi_\infty \in C^0_u L^{q}_{\underline{u}}L^2(S_{u,\underline{u}},\mathring{\gamma})\cap L^{q}_{\underline{u}}L^\infty_u L^2(S_{u,\underline{u}},\mathring{\gamma})$ such that $\mathrm{d} \mu_{\psi, u} = \psi_\infty\,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}$ (with $\mathrm{d} \mu_{\psi, u}$ as in Proposition~\ref{prop:weak}). Moreover, $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})$. A symmetric statement also holds after swapping $u$ and $\underline{u}$. \end{proposition} \begin{proof} \pfstep{Step~1: $\mathrm{d} \mu_{\psi, u}$ is absolutely continuous with respect to $\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}$} Fix $u\in [0,u_*]$. Suppose $U\subset [0,\underline{u}_*]\times \mathbb S^2$ is an open subset such that $\int_U \, \mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u} <\epsilon$. Then, using H\"older's inequality and \eqref{eq:weak.L2.assumption}, \begin{equation*} \begin{split} &\: \sup_{\varphi \in C^0: \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}\leq 1} \left|\int_U \varphi\cdot \mathrm{d} \mu_{\psi, u} \right| \\ \leq &\: \sup_{\varphi \in C^0: \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}\leq 1} \limsup_{k\to +\infty} \int_U |\langle \varphi, \psi_{n_k}\rangle| \,\mathrm{dA}_{\mathring{\gamma}} \, \mathrm{d} \underline{u} \\ \leq &\: (\sup_{\varphi \in C^0: \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}\leq 1} \|\varphi\|_{L^\infty_{\underline{u}}L^\infty(S_{u,\underline{u}},\mathring{\gamma})}) (\sup_{k} \|\psi_{n_k}\|_{L^q_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})}) (\int_0^{\underline{u}_*} \,\mathrm{d} \underline{u})^{\frac {q-2}{2q}} (\int_U\,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u})^{\frac 12} \\ \lesssim &\: \underline{u}_*^{\frac {q-2}{2q}}\epsilon^{\frac 12}. \end{split} \end{equation*} It therefore follows that $\mathrm{d} \mu_{\psi, u} = \psi_\infty\,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}$ for some rank-$r$ tensor field $\psi_\infty \in L^\infty_u L^1_{\underline{u}} L^1(S_{u,\underline{u}},\mathring{\gamma})$. \pfstep{Step~2: Regularity of $\psi_\infty$} In view of Step~1, it remains to prove the regularity statement for $\psi_\infty$. First, by duality, Fatou's lemma, H\"older's inequality and \eqref{eq:weak.L2.assumption}, \begin{equation}\label{eq:weak.conv.2i2} \begin{split} \|\psi_\infty \|_{L^{q}_{\underline{u}} L^\infty_u L^2(S_{u,\underline{u}},\mathring{\gamma})} = &\: \sup_{\varphi\in C^0: \|\varphi\|_{L^{\frac{q}{q-1}}_{\underline{u}} L^1_u L^{2}(S_{u,\underline{u}}, \mathring{\gamma})} = 1 } \int_{[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2} \langle \varphi, \psi_\infty \rangle \, \mathrm{dA}_{\mathring{\gamma}} \,\mathrm{d} u \,\mathrm{d} \underline{u} \\ \leq &\: \sup_{\varphi\in C^0: \|\varphi\|_{L^{\frac{q}{q-1}}_{\underline{u}} L^1_u L^{2}(S_{u,\underline{u}}, \mathring{\gamma})} = 1 } \liminf_{k\to \infty} \int_{[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2} \langle \varphi, \psi_{n_k} \rangle \, \mathrm{dA}_{\mathring{\gamma}} \,\mathrm{d} u \,\mathrm{d} \underline{u} <+\infty, \end{split} \end{equation} which proves that $\psi_\infty \in L^q_{\underline{u}} L^\infty_u L^2(S_{u,\underline{u}},\mathring{\gamma})$.
Next, we show that $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})$. Note that by \eqref{eq:weak.L2.assumption} and the Banach--Alaoglu theorem, after passing to a further subsequence (not relabeled), $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_{n_k}$ has a weak limit in $L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})$. This limit coincides with $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty$, proving that $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})$. Finally, we show that $\psi_\infty \in C^0_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}}, \mathring{\gamma})$. By \eqref{eq:weak.conv.2i2}, we already know $\psi_\infty \in L^\infty_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}}, \mathring{\gamma})$. To prove continuity in $u$, we use the fundamental theorem of calculus and the fact (established just above) that $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty \in L^2_u L^{q}_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})$: indeed, by Minkowski's integral inequality and the Cauchy--Schwarz inequality, $\|\psi_\infty(u_2,\cdot) - \psi_\infty(u_1,\cdot)\|_{L^q_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})} \leq |u_2 - u_1|^{\frac 12} \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty\|_{L^2_u L^q_{\underline{u}} L^2(S_{u,\underline{u}},\mathring{\gamma})}$ for any $u_1,\,u_2 \in [0,u_*]$. \qedhere \end{proof} \section{Existence of a limiting spacetime}\label{sec:existence} From now on until the end of Section~\ref{sec:eqns.for.limit}, we work under the assumptions of Theorem~\ref{main.thm}. In this section, we prove the existence of a limiting spacetime (recall part (2) of Theorem~\ref{main.thm}). We will use the convention that \textbf{all constants $C$ or implicit constants in $\lesssim$ will depend only on quantities in the assumptions of Theorem~\ref{main.thm}}. To prove convergence we will repeatedly extract subsequences from $(\mathcal M,g_n)$. Phrases such as ``extracting a further subsequence $n_k$'' will mean that we extract a subsequence from that in the previous lemma, proposition, etc. To simplify notations, we will never relabel the further subsequence. We begin in \textbf{Section~\ref{sec:existence.prelim}} with a preliminary step showing that the norms with respect to different metrics are comparable. We then proceed to show that the limit exists and to prove the corresponding regularity statements: \begin{enumerate} \item The existence of a uniform limit of the metric components will be proven in \textbf{Sections~\ref{sec:limit.gamma} and \ref{sec:limit.metric}}. \item The existence of uniform limits of $\eta$ and $\underline{\eta}$ will be proven in \textbf{Section~\ref{sec:limit.eta}}. \item The existence of weak limits of $\hat{\chi}$, $\hat{\underline{\chi}}$, $\slashed{\mathrm{tr}}\chi$, $\slashed{\mathrm{tr}}\underline{\chi}$, $\omega$ and $\underline{\omega}$ will be proven in \textbf{Section~\ref{sec:limit.chi}}. \item The existence of BV limits of $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ will be proven in \textbf{Section~\ref{sec:limit.trch}}. \item Finally, the existence of the limits in \eqref{eq:dnu.def.thm} and \eqref{eq:dnub.def.thm} will be proven in \textbf{Section~\ref{sec:limit.dust}}. \end{enumerate} It will also be important to prove a compensated compactness result concerning the convergence properties of those Ricci coefficients which converge in the weakest sense. This will be treated in \textbf{Section~\ref{sec:cc}}.
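Before turning to the details, the following model computation may clarify the mechanism behind the comparability estimates of the next subsection (a sketch in the simplest case of a scalar function; the tensorial version, together with the $u$-direction in which the shift $b_n$ also enters, is Proposition~\ref{prop:norms.compare} below). Suppose $\xi$ is a scalar function with $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \xi = 0$. Then Proposition~\ref{prop:transport.id} gives
$$\Big| \frac{\partial}{\partial \underline{u}} \int_{S_{u,\underline{u}}} |\xi|^p \,\mathrm{dA}_{\gamma_n} \Big| = \Big| \int_{S_{u,\underline{u}}} \Omega_n \slashed{\mathrm{tr}}\chi_n\, |\xi|^p \,\mathrm{dA}_{\gamma_n} \Big| \leq \|\Omega_n \slashed{\mathrm{tr}}\chi_n\|_{L^\infty(S_{u,\underline{u}},\gamma_n)} \int_{S_{u,\underline{u}}} |\xi|^p \,\mathrm{dA}_{\gamma_n},$$
so that Gr\"onwall's inequality, the Cauchy--Schwarz inequality in $\underline{u}$ and the uniform bound on $\|\Omega_n \slashed{\mathrm{tr}}\chi_n\|_{L^2_{\underline{u}} L^\infty_u L^\infty(S_{u,\underline{u}},\gamma_n)}$ coming from the assumptions of Theorem~\ref{ext.thm} yield $\|\xi\|_{L^p(S_{u,\underline{u}},\gamma_n)} \leq e^{C \underline{u}_*^{1/2}} \|\xi\|_{L^p(S_{u,0},\gamma_n)}$, uniformly in $n$.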
\subsection{Comparability of norms}\label{sec:existence.prelim} \begin{proposition}[Comparability of norms]\label{prop:norms.compare} For every $r\in \mathbb N\cup \{0\}$, there exists a constant $C>0$ (independent of $n$ and $(u,\underline{u})$) such that for any rank-$r$ $S$-tangent covariant tensor $\xi$, \begin{align} C^{-1}\|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq \|\xi\|_{L^p(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)},&\quad 1\leq p\leq +\infty,\label{eq:xi.compare}\\ C^{-1}\|\xi\|_{W^{1,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq \|\xi\|_{W^{1,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{1,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)},&\quad 1\leq p\leq +\infty,\label{eq:nab.xi.compare} \\ C^{-1}\|\xi\|_{W^{2,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq \|\xi\|_{W^{2,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{2,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)},& \quad 1\leq p\leq 4,\label{eq:nab.2.xi.compare} \\ C^{-1}\|\xi\|_{W^{3,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq \|\xi\|_{W^{3,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{3,p}(S_{u,\underline{u}}, (\gamma_{0,0})_n)},& \quad 1\leq p\leq 2.\label{eq:nab.3.xi.compare} \end{align} Using in addition that $(\gamma_{0,0})_n \to (\gamma_{0,0})_\infty$ in $C^3(S)$ (see~\eqref{eq:gamma.C3.conv}), it follows that \begin{align*} C^{-1}\|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \leq \|\xi\|_{L^p(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)},&\quad 1\leq p\leq +\infty, \\ C^{-1}\|\xi\|_{W^{1,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \leq \|\xi\|_{W^{1,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{1,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)},&\quad 1\leq p\leq +\infty, \\ C^{-1}\|\xi\|_{W^{2,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \leq \|\xi\|_{W^{2,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{2,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)}, &\quad 1\leq p\leq 4, \\ C^{-1}\|\xi\|_{W^{3,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \leq \|\xi\|_{W^{3,p}(S_{u,\underline{u}}, \gamma_n)} \leq C\|\xi\|_{W^{3,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)}, &\quad 1\leq p\leq 2. \end{align*} \end{proposition} \begin{proof} \pfstep{Step~1: Proof of \eqref{eq:xi.compare}} Given $\xi$ on $S_{u,\underline{u}}$, extend $\xi$ to an $S$-tangent tensor on $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ (still denoted by $\xi$) by stipulating that \begin{equation}\label{eq:u.and.ub.of.xi} \slashed {\mathcal L}_{\frac{\partial}{\partial u}}\xi = \slashed {\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \xi = 0 \end{equation} (which is possible since $[\frac{\partial}{\partial u}, \frac{\partial}{\partial \underline{u}}] = 0$). From this (and the fact that $\slashed {\mathcal L}_{\frac{\partial}{\partial u}}(\gamma_{0,0})_n = \slashed {\mathcal L}_{\frac{\partial}{\partial \underline{u}}} (\gamma_{0,0})_n = 0$) it follows that \begin{equation}\label{eq:comparability.1} \|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} = \|\xi\|_{L^p(S_{u',\underline{u}'}, (\gamma_{0,0})_n)},\quad \forall (u',\underline{u}').
\end{equation} On the other hand, by Proposition~\ref{prop:metric.der} we have $$\frac{\partial}{\partial \underline{u}} (\gamma_n)_{AB} = 2\Omega_n (\chi_n)_{AB},\quad \frac{\partial}{\partial u} (\gamma_n)_{AB} = 2\Omega_n (\underline{\chi}_n)_{AB} - (\gamma_n)_{CA}(\slashed{\nabla}_n)_{B} b_n^C - (\gamma_n)_{CB}(\slashed{\nabla}_n)_{A} b_n^C.$$ Therefore, using also \eqref{eq:u.and.ub.of.xi} and Proposition~\ref{prop:transport.id}, we have \begin{equation}\label{eq:transport.compare.norms.1} \begin{split} &\: \frac{\partial}{\partial \underline{u}} \| \xi\|_{L^p(S_{u,\underline{u}},\gamma_n)}^p = \frac{\partial}{\partial \underline{u}} \int_{S_{u,\underline{u}}} |\xi|_{\gamma_n}^p \,\mathrm{dA}_{\gamma_n} \\ =&\: - p\int_{S_{u,\underline{u}}} |\xi|_{\gamma_n}^{p-2} [\sum_{s=1}^r \frac{\Omega_n(\chi^{\sharp\sharp}_n)^{A_{s}B_{s}}\Pi_{t=1}^r (\gamma^{-1}_n)^{A_{t}B_{t}} }{(\gamma^{-1}_n)^{A_{s}B_{s}}} \xi_{A_1\dots A_r} \xi_{B_1\dots B_r}] \, \mathrm{dA}_{\gamma_n} + \int_{S_{u,\underline{u}}} \Omega_n \slashed{\mathrm{tr}}\chi_n |\xi|_{\gamma_n}^p \, \mathrm{dA}_{\gamma_n}, \end{split} \end{equation} and similarly, \begin{equation}\label{eq:transport.compare.norms.2} \begin{split} &\: \frac{\partial}{\partial u} \| \xi\|_{L^p(S_{u,\underline{u}},\gamma_n)}^p = \frac{\partial}{\partial u} \int_{S_{u,\underline{u}}} |\xi|_{\gamma_n}^p \,\mathrm{dA}_{\gamma_n} \\ =&\: - p\int_{S_{u,\underline{u}}} |\xi|_{\gamma_n}^{p-2} [\sum_{s=1}^r \frac{(\Omega_n (\underline{\chi}^{\sharp\sharp}_n)^{A_{s}B_{s}} - 2(\gamma_n^{-1})^{C (A_s}\slashed{\nabla}_C b_n^{B_s)})\Pi_{t=1}^r (\gamma^{-1}_n)^{A_{t}B_{t}} }{(\gamma^{-1}_n)^{A_{s}B_{s}}} \xi_{A_1\dots A_r} \xi_{B_1\dots B_r} ] \, \mathrm{dA}_{\gamma_n} \\ &\: + \int_{S_{u,\underline{u}}} (\Omega_n \slashed{\mathrm{tr}}\underline{\chi}_n -\div_n b_n) |\xi|_{\gamma_n}^p \, \mathrm{dA}_{\gamma_n}. \end{split} \end{equation} Now using the uniform boundedness for the terms $\|\Omega_n \chi_n\|_{L^2_{\underline{u}} L^\infty_u L^\infty(S_{u,\underline{u}},\gamma_n)}$, $\|\Omega_n \underline{\chi}_n\|_{L^2_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma_n)}$, $\|\slashed{\nabla}_n b_n\|_{L^\infty_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma_n)}$ (by \eqref{eq:bdd.metric}, \eqref{eq:bdd.psiH} and \eqref{eq:bdd.psiHb}) and Gr\"onwall's inequality, we obtain (for all $(u,\underline{u})\in [0,u_*]\times [0,\underline{u}_*]$) \begin{equation}\label{eq:comparability.2} C^{-1} \|\xi\|_{L^p(S_{0,0}, \gamma_n)} \leq \|\xi\|_{L^p(S_{u,\underline{u}}, \gamma_n)} \leq C \|\xi\|_{L^p(S_{0,0}, \gamma_n)}. \end{equation} The estimate \eqref{eq:xi.compare} therefore follows from \eqref{eq:comparability.1}, \eqref{eq:comparability.2} and the fact that $(\gamma_{0,0})_n\restriction_{S_{0,0}} = \gamma_n\restriction_{S_{0,0}}$. \pfstep{Step~2: Proof of \eqref{eq:nab.xi.compare}} By \eqref{eq:bdd.metric}, Sobolev embedding, \eqref{eq:xi.compare} and the computation \begin{equation} (\slashed{\Gamma}_n - (\slashed{\Gamma}_{0,0})_n)^{A}_{BC} = \frac 12 ((\gamma_{0,0})_{n}^{-1})^{AD}(2(\slashed{\nabla}_n)_{(B} (\gamma_n - (\gamma_{0,0})_n)_{C)D} - (\slashed{\nabla}_n)_D (\gamma_n - (\gamma_{0,0})_n)_{BC}), \end{equation} we have $\|\slashed{\Gamma}_n - (\slashed{\Gamma}_{0,0})_n\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}}, \gamma_n)} \lesssim 1$. 
Using again \eqref{eq:xi.compare}, we then obtain \begin{equation}\label{eq:nab.compare.with.00} \|(\slashed{\nabla})_n \xi -(\slashed{\nabla}_{0,0})_n \xi \|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \lesssim \|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)}. \end{equation} As a result, by the triangle inequality, for any $1\leq p\leq +\infty$, $$\|(\slashed{\nabla})_n\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq C( \|(\slashed{\nabla}_{0,0})_n \xi \|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)}+ \|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)}),$$ and $$\|(\slashed{\nabla}_{0,0})_n \xi \|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} \leq C( \|(\slashed{\nabla})_n\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)} + \|\xi\|_{L^p(S_{u,\underline{u}}, (\gamma_{0,0})_n)}).$$ The estimate \eqref{eq:nab.xi.compare} then follows from \eqref{eq:xi.compare}. \pfstep{Step~3: Proof of \eqref{eq:nab.2.xi.compare} and \eqref{eq:nab.3.xi.compare}} The proof of \eqref{eq:nab.2.xi.compare} and \eqref{eq:nab.3.xi.compare} is similar to that of \eqref{eq:nab.xi.compare}. The only difference is that by \eqref{eq:bdd.metric} (and Sobolev embedding using Proposition~\ref{prop:Sobolev} and \eqref{eq:bdd.isoperimetric}), we only have the estimates $\|\slashed{\nabla}_n (\slashed{\Gamma}_n - (\slashed{\Gamma}_{0,0})_n)\|_{C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}}, \gamma_n)} \lesssim 1$ and $\|\slashed{\nabla}_n^2(\slashed{\Gamma}_n - (\slashed{\Gamma}_{0,0})_n)\|_{C^0_u C^0_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma_n)} \lesssim 1$, which restrict the range of allowable $p$. \qedhere \end{proof} \subsection{Limit of $\gamma$ and its angular derivatives}\label{sec:limit.gamma} \begin{proposition}\label{prop:gamma} There exists a subsequence $n_k$ and a limiting metric $\gamma_\infty \in C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty) \cap L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$ such that \begin{equation}\label{eq:gamma.convergence.1} \|\gamma_{n_k} - \gamma_\infty\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} + \|(\slashed{\nabla}_{0,0})_\infty(\gamma_{n_k} - \gamma_\infty)\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \to 0 \end{equation} and \begin{equation}\label{eq:gamma.convergence.2} \|(\slashed{\nabla}_{0,0})_\infty^2(\gamma_{n_k} - \gamma_\infty)\|_{C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \to 0.\end{equation} Moreover, $\gamma_\infty$ also satisfies \begin{equation}\label{eq:gamma.bdd.2} \|\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \gamma_\infty\|_{L^\infty_u L^2_{\underline{u}} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \gamma_\infty\|_{L^\infty_{\underline{u}} L^2_{u} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \lesssim 1.
\end{equation} \end{proposition} \begin{proof} By \eqref{eq:bdd.metric}, Sobolev embedding (Proposition~\ref{prop:Sobolev}) and Proposition~\ref{prop:norms.compare}, we have \begin{equation*} \begin{split} &\: \sup_n \left( \|\gamma_n - (\gamma_{0,0})_n\|_{C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} + \|\gamma_n - (\gamma_{0,0})_n\|_{C^0_u C^0_{\underline{u}} W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \right) \\ &\: + \sup_n \left( \|\slashed {\mathcal L}_{\frac{\partial}{\partial\underline{u}}} (\gamma_n - (\gamma_{0,0})_n)\|_{C^0_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} + \|\slashed {\mathcal L}_{\frac{\partial}{\partial u}} (\gamma_n - (\gamma_{0,0})_n)\|_{C^0_{\underline{u}} L^2_u W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \right) \lesssim 1. \end{split} \end{equation*} Therefore, by Proposition~\ref{prop:compact.embeddings} (with $\mathring{\gamma} = (\gamma_{0,0})_\infty$), there exists $\gamma_\infty$ such that $$\gamma_{n_k} - (\gamma_{0,0})_{n_k} \to \gamma_\infty - (\gamma_{0,0})_\infty \mbox{ in $C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$}.$$ This implies \eqref{eq:gamma.convergence.1} and \eqref{eq:gamma.convergence.2} (using Sobolev embedding). Moreover, by Proposition~\ref{prop:compact.embeddings}, $\gamma_\infty - (\gamma_{0,0})_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$, which implies $\gamma_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$. Finally, Proposition~\ref{prop:compact.embeddings} gives that $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} ( \gamma_\infty - (\gamma_{0,0})_\infty ) \in L^\infty_u L^2_{\underline{u}} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} ( \gamma_\infty - (\gamma_{0,0})_\infty ) \in L^\infty_{\underline{u}} L^2_{u} W^{2,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$, which imply \eqref{eq:gamma.bdd.2} since $\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} (\gamma_{0,0})_\infty = \slashed{\mathcal L}_{\frac{\partial}{\partial u}} (\gamma_{0,0})_\infty =0$. 
\qedhere \end{proof} One immediate consequence of Proposition~\ref{prop:gamma} is the uniform boundedness of the isoperimetric constants and the areas of the $2$-spheres $S_{u,\underline{u}}$ with respect to the limiting metric $\gamma_\infty$: \begin{proposition}\label{prop:isoperimetric} $$\sup_{u,\underline{u}} {\bf I}(S_{u,\underline{u}},\gamma_\infty) \lesssim 1,$$ $$1\lesssim \inf_{u,\underline{u}} \mathrm{Area}(S_{u,\underline{u}},\gamma_\infty) \leq \sup_{u,\underline{u}} \mathrm{Area}(S_{u,\underline{u}},\gamma_\infty) \lesssim 1,$$ and $$\|\log \frac{\det\gamma_\infty}{\det(\gamma_{0,0})_\infty} \|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})} \lesssim 1.$$ \end{proposition} \begin{proof} By the $C^0$ convergence statement \eqref{eq:gamma.convergence.1} in Proposition~\ref{prop:gamma}, it follows that for every $(u,\underline{u})$, $${\bf I}(S_{u,\underline{u}},\gamma_\infty) \leq \liminf_{k\to+\infty} {\bf I}(S_{u,\underline{u}},\gamma_{n_k}),$$ $$\limsup_{k\to +\infty} \mathrm{Area}(S_{u,\underline{u}}, \gamma_{n_k}) \leq \mathrm{Area}(S_{u,\underline{u}}, \gamma_\infty) \leq \liminf_{k\to +\infty} \mathrm{Area}(S_{u,\underline{u}}, \gamma_{n_k}),$$ and (using also \eqref{eq:gamma.C3.conv}) $$\|\log \frac{\det\gamma_\infty}{\det(\gamma_{0,0})_\infty} \|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})} \leq \limsup_{k\to +\infty} \|\log \frac{\det\gamma_{n_k}}{\det(\gamma_{0,0})_{n_k}} \|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})}.$$ The desired conclusions then follow from \eqref{eq:bdd.isoperimetric}. \qedhere \end{proof} Another immediate consequence of Proposition~\ref{prop:gamma} consists of the following estimates for the angular connections: \begin{proposition}\label{prop:Christoffel} The following hold for $\slashed \Gamma_\infty$ being the Christoffel symbols associated to $\gamma_\infty$: \begin{equation}\label{eq:Christoffel.1} \|\slashed \Gamma_{n_k} - \slashed \Gamma_{\infty}\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} + \|\slashed \Gamma_{n_k} - \slashed \Gamma_{\infty}\|_{C^0_u C^0_{\underline{u}} W^{1,4}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)}\to 0, \end{equation} \begin{equation}\label{eq:Christoffel.2} \|\slashed \Gamma_\infty - (\slashed \Gamma_{0,0})_\infty\|_{L^\infty_u L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)}\lesssim 1. \end{equation} Moreover, $K_\infty$ (the Gauss curvature of $(S_{u,\underline{u}},\gamma_\infty)$) is a well-defined $C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$ function which satisfies \begin{equation}\label{eq:K.first.limit} \|K_{n_k} - K_\infty\|_{C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)} \to 0,\quad \|K_\infty\|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)}\lesssim 1. \end{equation} \end{proposition} \begin{proof} The estimates \eqref{eq:Christoffel.1} and \eqref{eq:Christoffel.2} follow from Proposition~\ref{prop:metric} and the fact that $$(\slashed\Gamma_{n_k} - \slashed\Gamma_\infty)^{A}_{BC} = \frac 12 (\gamma_{\infty}^{-1})^{AD} (2(\slashed{\nabla}_{n_k})_{(B} (\gamma_{n_k} - \gamma_\infty)_{C)D} - (\slashed{\nabla}_{n_k})_D (\gamma_{n_k} - \gamma_\infty)_{BC}).$$ The statements about the Gauss curvature in \eqref{eq:K.first.limit} follow immediately from \eqref{eq:Christoffel.1}, \eqref{eq:Christoffel.2} and \eqref{Gauss.def}.
\qedhere \end{proof} Given Propositions~\ref{prop:gamma} and \ref{prop:Christoffel}, we have the following equivalence of norms: \begin{lemma}\label{lem:equivalent} Let $0\leq m \leq 3$ be an integer and $p\in [1,p_m]$, where $p_m = \begin{cases} +\infty & m=0,1 \\ 4 & m=2 \\ 2 & m=3 \end{cases}$. Then all of the following norms are equivalent (with constants independent of $k$): $$W^{m,p}(S_{u,\underline{u}}, \gamma_{n_k}),\,W^{m,p}(S_{u,\underline{u}}, \gamma_\infty),\,W^{m,p}(S_{u,\underline{u}}, (\gamma_{0,0})_{n_k}),\,W^{m,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty).$$ \end{lemma} \begin{proof} The equivalence of $W^{m,p}(S_{u,\underline{u}}, \gamma_{n_k})$, $W^{m,p}(S_{u,\underline{u}}, (\gamma_{0,0})_{n_k})$ and $W^{m,p}(S_{u,\underline{u}}, (\gamma_{0,0})_\infty)$ has been proven in Proposition~\ref{prop:norms.compare}. That $W^{m,p}(S_{u,\underline{u}}, \gamma_{n_k})$ and $W^{m,p}(S_{u,\underline{u}}, \gamma_\infty)$ are equivalent is a consequence of Propositions~\ref{prop:gamma} and \ref{prop:Christoffel}. \qedhere \end{proof} In view of Lemma~\ref{lem:equivalent}, \textbf{from now on, we will write $L^p(S_{u,\underline{u}})$, $W^{1,p}(S_{u,\underline{u}})$, etc.~without specifying the metric with respect to which the norms are defined.} Recall that in the definition of angular regularity (Definition~\ref{double.null.def.2}), we need second derivative estimates for $K_\infty$, which do not follow from the estimates for $\gamma_\infty$. We derive them in the following proposition: \begin{proposition}\label{prop:K.imp} The limit $K_\infty$ from Proposition~\ref{prop:Christoffel} satisfies $$ K_\infty \in L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} L^2_{u} W^{2,2}(S_{u,\underline{u}}).$$ \end{proposition} \begin{proof} We will only prove the $L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$ estimate; the $L^\infty_{\underline{u}} L^2_{u} W^{2,2}(S_{u,\underline{u}})$ bound can be treated in a completely identical manner after switching $u$ and $\underline{u}$. By \eqref{eq:bdd.psi.top}, $$\|K_{n_k} \|_{L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})} \lesssim 1.$$ It follows from the Banach--Alaoglu theorem that for every $u \in [0,u_*]$, there exists a further subsequence of $\slashed{\nabla}_{n_k}^2 K_{n_k}$ which admits a weak $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ limit $\psi_\infty$ satisfying the estimate \begin{equation}\label{eq:K.imp.est} \|\psi_\infty \|_{L^2_{\underline{u}} L^2(S_{u,\underline{u}})} \lesssim 1. \end{equation} The weak convergence (together with Proposition~\ref{prop:Christoffel}) implies that $\psi_\infty = \slashed{\nabla}_\infty^2 K_\infty$. Since \eqref{eq:K.imp.est} moreover holds independently of $u$, this proves that $K_\infty \in L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. \qedhere \end{proof} Finally, before we end this subsection, it will be convenient to use the equivalence of norms that we have established above to rephrase the compactness theorems: \begin{lemma}\label{lem:easy.convergence} Propositions~\ref{prop:compact.embeddings}, \ref{prop:BV}, \ref{prop:weak} and \ref{prop:weak.L2} all apply in the setting of this section with $W^{m+1,q}(S_{u,\underline{u}},\mathring{\gamma})$, $W^{m,p}(S_{u,\underline{u}},\mathring{\gamma})$, etc.~replaced by $W^{m+1,q}(S_{u,\underline{u}})$, $W^{m,p}(S_{u,\underline{u}})$, etc.
\end{lemma} \begin{proof} This is an immediate consequence of Proposition~\ref{prop:compact.embeddings} and Lemma~\ref{lem:equivalent}. \qedhere \end{proof} \subsection{Limits of $b$ and $\log\Omega$}\label{sec:limit.metric} \begin{proposition}\label{prop:metric.limit} There exist $b_\infty$ and $\Omega_\infty$ such that the following hold after passing to a further subsequence $n_k$: $$\|b_{n_k} - b_\infty\|_{C^0_u C^0_{\underline{u}} C^1(S_{u,\underline{u}})} + \|b_{n_k} - b_\infty\|_{C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}})} \to 0,$$ $$\|\log \frac{\Omega_{n_k}}{\Omega_\infty}\|_{C^0_u C^0_{\underline{u}} C^1(S_{u,\underline{u}})} + \|\log \frac{\Omega_{n_k}}{\Omega_\infty}\|_{C^0_u C^0_{\underline{u}} W^{2,4}(S_{u,\underline{u}})} \to 0.$$ Moreover, for $\slashed g_\infty \in \{b_\infty,\,\log\Omega_\infty\}$, $$\|\slashed g_\infty\|_{L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \slashed g_\infty\|_{L^\infty_u L^2_{\underline{u}} W^{2,4}(S_{u,\underline{u}})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed g_\infty\|_{L^\infty_{\underline{u}} L^2_{u} W^{2,4}(S_{u,\underline{u}})} \lesssim 1.$$ \end{proposition} \begin{proof} This is an immediate consequence of the bounds \eqref{eq:bdd.metric} and Proposition~\ref{prop:compact.embeddings} (and Lemma~\ref{lem:easy.convergence}). \qedhere \end{proof} \begin{remark} In particular, combining Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}, it follows that $(\mathcal M, g_{n_k})$ has a $C^0$ limit $(\mathcal M, g_\infty)$ given by $$g_\infty = -2\Omega_\infty^2(du\otimes d\underline{u}+d\underline{u}\otimes du)+(\gamma_\infty)_{AB}(d\theta^A-b_\infty^Adu)\otimes (d\theta^B-b_\infty^B du).$$ \end{remark} \subsection{Limits of $\eta$ and $\protect\underline{\eta}$}\label{sec:limit.eta} We next consider the limits of $\eta$ and $\underline{\eta}$. They in particular have uniform limits. More precisely, \begin{proposition}\label{prop:eta.etab.limit} There exist $\eta_\infty$ and $\underline{\eta}_\infty$ such that the following hold after passing to a further subsequence $n_k$: $$\|\eta_{n_k} - \eta_\infty\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})} + \|\eta_{n_k} - \eta_\infty\|_{C^0_u C^0_{\underline{u}} W^{1,4}(S_{u,\underline{u}})} \to 0,$$ $$\|\underline{\eta}_{n_k} - \underline{\eta}_\infty\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})} + \|\underline{\eta}_{n_k} - \underline{\eta}_\infty\|_{C^0_u C^0_{\underline{u}} W^{1,4}(S_{u,\underline{u}})} \to 0.$$ Moreover, for $\psi_\infty \in \{\eta_\infty,\,\underline{\eta}_\infty\}$, $$\|\psi_\infty\|_{L^\infty_u L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \psi_\infty\|_{L^\infty_u L^2_{\underline{u}} W^{1,4}(S_{u,\underline{u}})} + \|\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \psi_\infty\|_{L^\infty_{\underline{u}} L^2_{u} W^{1,4}(S_{u,\underline{u}})} \lesssim 1.$$ \end{proposition} \begin{proof} This is an immediate consequence of the bounds \eqref{eq:bdd.psi}, Sobolev embedding (Proposition~\ref{prop:Sobolev}) and Proposition~\ref{prop:compact.embeddings} (and Lemma~\ref{lem:easy.convergence}).
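For the reader's convenience, we note that the Sobolev embeddings being used here are the two-dimensional embeddings $$W^{2,2}(S_{u,\underline{u}}) \hookrightarrow W^{1,4}(S_{u,\underline{u}}) \hookrightarrow C^0(S_{u,\underline{u}}),$$ which hold with constants controlled by the isoperimetric constants and the areas of the spheres $S_{u,\underline{u}}$ (cf.~Proposition~\ref{prop:Sobolev}).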
\qedhere \end{proof} \begin{proposition}\label{prop:eta.etab.imp} For $\psi_\infty \in \{\eta_\infty,\,\underline{\eta}_\infty\}$, where $\eta_\infty$ and $\underline{\eta}_\infty$ are as in Proposition~\ref{prop:eta.etab.limit}, it holds that $$\psi_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}}) \cap L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} L^2_{u} W^{3,2}(S_{u,\underline{u}}).$$ \end{proposition} \begin{proof} \pfstep{Step~1: The $W^{2,2}$ estimate} By \eqref{eq:bdd.psi}, $\psi_{n_k} \in W^{2,2}(S_{u,\underline{u}})$ uniformly in $k$, $u$ and $\underline{u}$ (for $\psi_{n_k} \in \{\eta_{n_k},\,\underline{\eta}_{n_k}\}$). It follows that for every $u$, $\underline{u}$, after passing to a further subsequence, $\psi_{n_k}$ converges weakly to some limit in $W^{2,2}(S_{u,\underline{u}})$. It is easy to check (using Proposition~\ref{prop:eta.etab.limit} and a density argument) that this limit coincides with $\psi_{\infty}$ almost everywhere. It follows that $\psi_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. \pfstep{Step~2: The $W^{3,2}$ estimate} The proof is similar to that in Proposition~\ref{prop:K.imp}; we omit the details. \qedhere \end{proof} \subsection{Weak limits of $\hat{\chi}$, $\omega$, $\protect\slashed{\mathrm{tr}}\chi$, $\protect\hat{\underline{\chi}}$, $\protect\underline{\omega}$ and $\protect\slashed{\mathrm{tr}}\underline{\chi}$}\label{sec:limit.chi} In this subsection, we discuss the \underline{weak} limits of the Ricci coefficients $\hat{\chi}$, $\omega$, $\slashed{\mathrm{tr}}\chi$, $\hat{\underline{\chi}}$, $\underline{\omega}$ and $\slashed{\mathrm{tr}}\underline{\chi}$. The Ricci coefficients $\hat{\chi}$, $\omega$, $\hat{\underline{\chi}}$ and $\underline{\omega}$ indeed \emph{only} admit weak limits. This is related to the fact that they have the weakest regularity estimates. It is also the reason why the limit spacetime $(\mathcal M, g_\infty)$ is not necessarily vacuum. On the other hand, $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ in fact have stronger convergence properties than those proven in Proposition~\ref{prop:trch.weak.limit}. We will return to this in Section~\ref{sec:limit.trch}.
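Before turning to the precise statements, let us recall an elementary (and purely illustrative) example of this phenomenon: on $[0,2\pi]$, the sequence $f_n(x) = \sin(nx)$ converges weakly to $0$ in $L^2([0,2\pi])$, while $$f_n^2 = \frac{1-\cos(2nx)}{2} \rightharpoonup \frac 12 \neq 0.$$ Thus quadratic expressions in weakly convergent sequences may fail to converge to the corresponding quadratic expressions in the limits; it is exactly this failure, for the quadratic quantities $\Omega^2|\hat{\chi}|^2_{\gamma}$ and $\Omega^2|\hat{\underline{\chi}}|^2_{\gamma}$, which is responsible for the null dust in the limit (see Section~\ref{sec:limit.dust}).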
\begin{proposition}\label{prop:chi.limit} There exist a further subsequence $n_k$ and $S$-tangent tensor fields $\hat{\chi}_\infty$, $\omega_\infty$, $\slashed{\mathrm{tr}}\chi_\infty$, $\hat{\underline{\chi}}_\infty$, $\underline{\omega}_\infty$ and $\slashed{\mathrm{tr}}\underline{\chi}_\infty$ such that for every $u\in [0,u_*]$, $$\slashed{\nabla}_{n_k}^i\hat{\chi}_{n_k} \rightharpoonup \slashed{\nabla}_\infty^i\hat{\chi}_{\infty},\quad \slashed{\nabla}_{n_k}^i\omega_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\omega_{\infty},\quad \slashed{\nabla}_{n_k}^i\slashed{\mathrm{tr}}\chi_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\slashed{\mathrm{tr}}\chi_{\infty}$$ \underline{weakly} in $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ for $i=0,1,2$, and for every $\underline{u}\in [0,\underline{u}_*]$, $$\slashed{\nabla}_{n_k}^i\hat{\underline{\chi}}_{n_k} \rightharpoonup \slashed{\nabla}_\infty^i\hat{\underline{\chi}}_{\infty},\quad \slashed{\nabla}_{n_k}^i\underline{\omega}_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\underline{\omega}_{\infty},\quad \slashed{\nabla}_{n_k}^i\slashed{\mathrm{tr}}\underline{\chi}_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\slashed{\mathrm{tr}}\underline{\chi}_{\infty}$$ \underline{weakly} in $L^2_{u} L^2(S_{u,\underline{u}})$ for $i=0,1,2$. Moreover, the limits satisfy $\hat{\chi}_\infty,\,\omega_\infty,\,\slashed{\mathrm{tr}}\chi_\infty \in L^2_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}})\cap C^0_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \hat{\chi}_\infty,\,\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \omega_\infty,\,\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed{\mathrm{tr}}\chi_\infty \in L^2_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$; and similarly $\hat{\underline{\chi}}_\infty,\,\underline{\omega}_\infty,\,\slashed{\mathrm{tr}}\underline{\chi}_\infty \in L^2_{u} L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap C^0_{\underline{u}} L^2_{u} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_{\underline{u}} L^2_{u} W^{3,2}(S_{u,\underline{u}})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \hat{\underline{\chi}}_\infty,\,\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \underline{\omega}_\infty,\,\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \slashed{\mathrm{tr}}\underline{\chi}_\infty \in L^2_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. \end{proposition} \begin{proof} We will only prove the statements for $\hat{\chi}$. It is easy to see that $\omega$ and $\slashed{\mathrm{tr}}\chi$ can be treated in exactly the same way (since $\omega$ satisfies similar estimates, and $\slashed{\mathrm{tr}}\chi$ satisfies even stronger bounds), while $\hat{\underline{\chi}}$, $\underline{\omega}$ and $\slashed{\mathrm{tr}}\underline{\chi}$ can be handled similarly after interchanging $u$ and $\underline{u}$.
\pfstep{Step~1: Existence of weak limit} By Proposition~\ref{prop:weak.L2} with $q=2$ (and Proposition~\ref{prop:weak}, Lemma~\ref{lem:easy.convergence}) and the estimates in \eqref{eq:bdd.psiH} and \eqref{eq:bdd.psiH.psiHb.trans}, for $i=0,1,2$, after passing to a subsequence $n_k$, there exists $\hat{\chi}_\infty^{(i)}\in L^2_{\underline{u}} L^\infty_u L^{2}(S_{u,\underline{u}})\cap C^0_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}})$ such that $\slashed{\nabla}_{n_k}^i \hat{\chi}_{n_k} \rightharpoonup \hat{\chi}_\infty^{(i)}$ weakly in $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ for every $u$. By Proposition~\ref{prop:Christoffel} and the uniqueness of distribution limits, it follows that $\hat{\chi}_{\infty}^{(i)} = \slashed{\nabla}_\infty^i \hat{\chi}_{\infty}$. This shows that $\hat{\chi}_\infty \in L^2_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}})\cap C^0_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. \pfstep{Step~2: Higher regularity of $\hat{\chi}_\infty$ and $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \hat{\chi}_\infty$} Step~1 in particular shows that $\hat{\chi}_\infty \in L^2_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}})\cap C^0_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. By Proposition~\ref{prop:weak.L2}, we also have $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \hat{\chi}_\infty \in L^2_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. It therefore remains to show that $\hat{\chi}_\infty \in L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$. To see this, note that for every $u\in [0,u_*]$, the estimate \eqref{eq:bdd.psiH} and the Banach--Alaoglu theorem imply that after passing to a further subsequence, $\slashed{\nabla}^3_{n_k}\hat{\chi}_{n_k}$ converges weakly in $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ to a limit $\hat{\chi}_\infty^{(3)}$ satisfying the bound (independently of $u$) \begin{equation}\label{eq:chi.limit.top} \|\hat{\chi}_\infty^{(3)}\|_{L^2_{\underline{u}} L^2(S_{u,\underline{u}})} \lesssim 1. \end{equation} The weak convergence implies that $\hat{\chi}_\infty^{(3)} = \slashed{\nabla}_\infty^3 \hat{\chi}_\infty$. Thus \eqref{eq:chi.limit.top} and the fact (established above) that $\hat{\chi}_\infty \in L^2_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}})\cap C^0_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$ imply $\hat{\chi}_\infty\in L^\infty_u L^2_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$. \qedhere \end{proof} \begin{proposition}\label{prop:trch.weak.limit} Let $q\in [2,+\infty)$. There exist a further subsequence $n_k$ and functions $\slashed{\mathrm{tr}}\chi_\infty$ and $\slashed{\mathrm{tr}}\underline{\chi}_\infty$ such that for every $u\in [0,u_*]$, $$\slashed{\nabla}_{n_k}^i\slashed{\mathrm{tr}}\chi_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\slashed{\mathrm{tr}}\chi_{\infty}$$ \underline{weakly} in $L^q_{\underline{u}} L^2(S_{u,\underline{u}})$ for $i=0,1,2$, and for every $\underline{u}\in [0,\underline{u}_*]$, $$\slashed{\nabla}_{n_k}^i\slashed{\mathrm{tr}}\underline{\chi}_{n_k}\rightharpoonup \slashed{\nabla}_\infty^i\slashed{\mathrm{tr}}\underline{\chi}_{\infty}$$ \underline{weakly} in $L^q_{u} L^2(S_{u,\underline{u}})$ for $i=0,1,2$.
Moreover, $\slashed{\mathrm{tr}}\chi_\infty \in L^q_{\underline{u}} L^\infty_u W^{2,2}(S_{u,\underline{u}})\cap C^0_u L^q_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_u L^q_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial u}} \slashed{\mathrm{tr}}\chi_\infty \in L^2_u L^q_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$; and similarly $\slashed{\mathrm{tr}}\underline{\chi}_\infty \in L^q_{u} L^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}})\cap C^0_{\underline{u}} L^q_{u} W^{2,2}(S_{u,\underline{u}})\cap L^\infty_{\underline{u}} L^q_{u} W^{3,2}(S_{u,\underline{u}})$, $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} \slashed{\mathrm{tr}}\underline{\chi}_\infty \in L^2_{\underline{u}} L^q_{u} W^{2,2}(S_{u,\underline{u}})$. \end{proposition} \begin{proof} This can be proven in exactly the same way as Proposition~\ref{prop:chi.limit}, except for using the better bounds that $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ obey (\eqref{eq:bdd.psi}, \eqref{eq:bdd.trch.trans} and \eqref{eq:bdd.trchb.trans}), and applying Proposition~\ref{prop:weak.L2} with a general $q$ (as opposed to only $q=2$). \qedhere \end{proof} \subsection{Strong limits of $\protect\slashed{\mathrm{tr}}\chi$ and $\protect\slashed{\mathrm{tr}}\underline{\chi}$}\label{sec:limit.trch} In this subsection, we further show \emph{strong} convergence for $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ (in addition to Proposition~\ref{prop:trch.weak.limit} in Section~\ref{sec:limit.chi}); see the main results in Proposition~\ref{prop:trch.imp}. For this, we rely on the compactness of BV functions. First, we need the following: \begin{proposition}[Comparability of the BV norms]\label{prop:BV.compare} There exists $C>0$ independent of $n$ such that the following holds for all continuous $\phi: [0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2\to \mathbb R$: $$C^{-1} \|\phi\|_{BV(H_u,(\gamma_{0,0})_\infty)} \leq \|\phi\|_{BV(H_u, \gamma_n)} \leq C \|\phi\|_{BV(H_u, (\gamma_{0,0})_\infty)},$$ and $$C^{-1} \|\phi\|_{BV(\underline{H}_{\underline{u}}, (\gamma_{0,0})_\infty)} \leq \|\phi\|_{BV(\underline{H}_{\underline{u}}, \gamma_n)} \leq C \|\phi\|_{BV(\underline{H}_{\underline{u}}, (\gamma_{0,0})_\infty)}.$$ \end{proposition} \begin{proof} After recalling Definition~\ref{def:BV}, this is immediate from Proposition~\ref{prop:norms.compare} and \eqref{eq:nab.compare.with.00}. \qedhere \end{proof} In view of Proposition~\ref{prop:BV.compare}, \textbf{from now on we will simply write $BV(H_u)$ and $BV(\underline{H}_{\underline{u}})$ for either of the equivalent norms.} \begin{proposition}\label{prop:trch.imp} The functions $\slashed{\mathrm{tr}}\chi_\infty$ and $\slashed{\mathrm{tr}}\underline{\chi}_\infty$ in Proposition~\ref{prop:trch.weak.limit} satisfy in addition \begin{equation}\label{eq:trch.imp} \begin{split} &\: \slashed{\mathrm{tr}}\chi_\infty \in C^0_u L^1_{\underline{u}} W^{2,1}(S_{u,\underline{u}}) \cap L^\infty_u BV(H_u) \cap L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}}),\\ &\: \slashed{\mathrm{tr}}\underline{\chi}_\infty \in C^0_{\underline{u}} L^1_{u} W^{2,1}(S_{u,\underline{u}}) \cap L^\infty_{\underline{u}} BV(\underline{H}_{\underline{u}}) \cap L^\infty_{\underline{u}} L^\infty_{u} W^{3,2}(S_{u,\underline{u}}).
\end{split} \end{equation} Moreover, after passing to a further subsequence $n_k$, \begin{equation}\label{eq:trch.W21.statement} \lim_{k\to +\infty} (\|\slashed{\mathrm{tr}}\chi_{n_k} -\slashed{\mathrm{tr}}\chi_\infty \|_{C^0_u L^1_{\underline{u}} W^{2,1}(S_{u,\underline{u}})} + \| \slashed{\mathrm{tr}}\underline{\chi}_{n_k} - \slashed{\mathrm{tr}}\underline{\chi}_\infty\|_{C^0_{\underline{u}} L^1_u W^{2,1}(S_{u,\underline{u}})}) = 0, \end{equation} and for every $p\in [1,+\infty)$, \begin{equation}\label{eq:trch.imp.conv} \lim_{k\to +\infty} (\|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^p_{\underline{u}} W^{1,p}(S_{u,\underline{u}})} + \|\slashed{\mathrm{tr}}\underline{\chi}_{n_k} - \slashed{\mathrm{tr}}\underline{\chi}_\infty\|_{C^0_{\underline{u}} L^p_{u} W^{1,p}(S_{u,\underline{u}})} ) = 0. \end{equation} \end{proposition} \begin{proof} In this proof, we are concerned with $\slashed{\mathrm{tr}}\chi$; the proofs for statements regarding $\slashed{\mathrm{tr}}\underline{\chi}$ are similar. \pfstep{Step~1: BV compactness} \pfstep{Step~1(a): $C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}})\cap L^\infty_u BV(H_u)$ estimates} The bounds \eqref{eq:bdd.psi}, \eqref{eq:bdd.trch.trans}, \eqref{eq:bdd.trchb.trans} give uniform estimates for $\slashed{\mathrm{tr}}\chi_{n_k}$ which are sufficient to apply Proposition~\ref{prop:BV}. Thus, by Proposition~\ref{prop:BV.compare}, the BV compactness theorem in Proposition~\ref{prop:BV} (and Lemma~\ref{lem:easy.convergence}) and the uniqueness of (distributional) limits, we have, after passing to a further subsequence, \begin{equation}\label{eq:trch.limit.low} \lim_{k\to +\infty} (\|\slashed{\mathrm{tr}}\chi_{n_k} -\slashed{\mathrm{tr}}\chi_\infty \|_{C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}})} + \| \slashed{\mathrm{tr}}\underline{\chi}_{n_k} - \slashed{\mathrm{tr}}\underline{\chi}_\infty\|_{C^0_{\underline{u}} L^1_u L^1(S_{u,\underline{u}})}) = 0, \end{equation} and that $\slashed{\mathrm{tr}}\chi_{\infty} \in C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}})\cap L^\infty_u BV(H_u)$. \pfstep{Step~1(b): $C^0_u L^1_{\underline{u}} W^{2,1}(S_{u,\underline{u}})$ estimates and proof of \eqref{eq:trch.W21.statement}} We now apply the same argument as in Step~1(a) but to higher derivatives. By \eqref{eq:bdd.psi}, \eqref{eq:bdd.trch.trans} and Lemma~\ref{lem:equivalent}, we have that $(\slashed{\nabla}_{0,0})_\infty \slashed{\mathrm{tr}}\chi_{n_k}$ and $(\slashed{\nabla}_{0,0})_\infty^2 \slashed{\mathrm{tr}}\chi_{n_k}$ are uniformly bounded in $BV(H_u)$ for all $u$. Using Propositions~\ref{prop:BV.compare} and \ref{prop:BV} (and Lemma~\ref{lem:easy.convergence}), it follows that after passing to a further subsequence, $(\slashed{\nabla}_{0,0})_\infty \slashed{\mathrm{tr}}\chi_{n_k}$ and $(\slashed{\nabla}_{0,0})_\infty^2 \slashed{\mathrm{tr}}\chi_{n_k}$ both converge in $C^0_u L^1_{\underline{u}} L^1(S_{u,\underline{u}})$ to some limits. It is easy to check that these limits coincide with $(\slashed{\nabla}_{0,0})_\infty \slashed{\mathrm{tr}}\chi_\infty$ and $(\slashed{\nabla}_{0,0})_\infty^2 \slashed{\mathrm{tr}}\chi_\infty$, which proves \eqref{eq:trch.W21.statement}. \pfstep{Step~2: Completion of the proof of \eqref{eq:trch.imp}} The only estimate not already established in Step~1 is that $\slashed{\mathrm{tr}}\chi_\infty \in L^\infty_u L^\infty_{\underline{u}} W^{3,2}(S_{u,\underline{u}})$.
This can be proven in a manner similar to Step~1 in the proof of Proposition~\ref{prop:eta.etab.imp}; we omit the details. \pfstep{Step~3: Proof of \eqref{eq:trch.imp.conv}} Let $q\in [2,+\infty)$. By Proposition~\ref{prop:trch.weak.limit}, $\slashed{\mathrm{tr}}\chi_\infty\in C^0_u L^q_{\underline{u}} W^{2,2}(S_{u,\underline{u}})$. Hence by Sobolev embedding (using Propositions~\ref{prop:Sobolev} and \ref{prop:isoperimetric}), $\slashed{\mathrm{tr}}\chi_\infty\in C^0_u L^q_{\underline{u}} W^{1,q}(S_{u,\underline{u}})$. Combining with the estimate \eqref{eq:bdd.psi}, it follows that \begin{equation}\label{eq:trch.W2q} \sup_k \|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^q_{\underline{u}} W^{1,q}(S_{u,\underline{u}})} \lesssim 1. \end{equation} By H\"older's inequality, for any $p \in [1,+\infty)$, $q \in [2,+\infty)$ with $p < q$, $$\|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^p_{\underline{u}} W^{1,p}(S_{u,\underline{u}})} \lesssim \|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^1_{\underline{u}} W^{1,1}(S_{u,\underline{u}})}^{(\frac 1p- \frac 1q) \frac{q}{q-1}} \|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^q_{\underline{u}} W^{1,q}(S_{u,\underline{u}})}^{1 - (\frac 1p- \frac 1q) \frac{q}{q-1}}.$$ Given any $p\in [1,+\infty)$, we can choose $q\in [2,+\infty)$ with $p<q$ so that \eqref{eq:trch.W21.statement} and \eqref{eq:trch.W2q} imply $\|\slashed{\mathrm{tr}}\chi_{n_k} - \slashed{\mathrm{tr}}\chi_\infty\|_{C^0_u L^p_{\underline{u}} W^{1,p}(S_{u,\underline{u}})}\to 0$. We have thus obtained \eqref{eq:trch.imp.conv}. \qedhere \end{proof} \begin{remark} Notice that while $\slashed{\mathrm{tr}}\chi_\infty$ and $\slashed{\mathrm{tr}}\underline{\chi}_{\infty}$ are a.e.~bounded (by \eqref{eq:trch.imp} and Sobolev embedding), they are not necessarily continuous. Nonetheless, they have well-defined traces in the sense of Lemma~\ref{lem:trace}. \end{remark} \subsection{Compensated compactness}\label{sec:cc} While $\hat{\chi}$, $\omega$, $\hat{\underline{\chi}}$ and $\underline{\omega}$ only admit weak limits (see Section~\ref{sec:limit.chi}), it is important that there is a \emph{compensated compactness} phenomenon for some quadratic products of them. The main result of this subsection is Proposition~\ref{prop:chihchibh}, which we prove after stating a more general compensated compactness lemma (Lemma~\ref{lem:compensated.compactness}). The proof of Lemma~\ref{lem:compensated.compactness} is relegated to Appendix~\ref{app:CC}. \begin{lemma}\label{lem:compensated.compactness} Let $B_{\mathbb R^2}(0,R)\subset \mathbb R^2$ be the ball of radius $R$ in $\mathbb R^2$. Suppose there are two sequences of functions $\{f_n\}_{n=1}^{+\infty},\,\{h_n\}_{n=1}^{+\infty}\subset L^2([0,u_*]\times [0,\underline{u}_*]\times B_{\mathbb R^2}(0,R); \mathbb R)$ such that the following hold: \begin{enumerate} \item There exist $L^2([0,u_*]\times [0,\underline{u}_*]\times B_{\mathbb R^2}(0,R); \mathbb R)$ functions $f_{\infty}$ and $h_{\infty}$ such that $f_n \rightharpoonup f_\infty$ and $h_n\rightharpoonup h_\infty$ weakly in $L^2([0,u_*]\times [0,\underline{u}_*]\times B_{\mathbb R^2}(0,R); \mathbb R)$.
\item There exists $C_0>0$ such that \begin{equation}\label{fn.bd.orig} \sup_n \sum_{i\leq 1,\,j\leq 1,\,k\leq 1} \int_0^{\underline{u}_*} \int_0^{u_*} \iint_{B_{\mathbb R^2}(0,R)} \left((\frac{\partial}{\partial u})^{i}(\frac{\partial}{\partial y^1})^{j}(\frac{\partial}{\partial y^2})^{k} f_n\right)^2\,\mathrm{d} y^1\, \mathrm{d} y^2\, \mathrm{d} u\, \mathrm{d} \underline{u} \leq C_0 \end{equation} and \begin{equation}\label{gn.bd.orig} \sup_n \sum_{i\leq 1,\,j\leq 1,\,k\leq 1} \int_0^{\underline{u}_*} \int_0^{u_*} \iint_{B_{\mathbb R^2}(0,R)} \left((\frac{\partial}{\partial \underline{u}})^{i}(\frac{\partial}{\partial y^1})^{j}(\frac{\partial}{\partial y^2})^{k} h_n\right)^2 \,\mathrm{d} y^1\, \mathrm{d} y^2\, \mathrm{d} u\, \mathrm{d} \underline{u} \leq C_0. \end{equation} \end{enumerate} Then, after passing to subsequences $\{f_{n_k}\}_{k=1}^{+\infty}$ and $\{h_{n_k}\}_{k=1}^{+\infty}$, $f_{n_k} h_{n_k} \rightharpoonup f_\infty h_\infty$ weakly in $L^2([0,u_*]\times [0,\underline{u}_*]\times B_{\mathbb R^2}(0,R); \mathbb R)$. \end{lemma} \begin{proposition}\label{prop:chihchibh} After passing to a subsequence $n_k$, $(\hat{\chi}_{n_k})_{AB}(\hat{\underline{\chi}}_{n_k})_{CD}$ converges weakly in $L^2_uL^2_{\underline{u}}L^2(S)$ to $(\hat{\chi}_\infty)_{AB}(\hat{\underline{\chi}}_\infty)_{CD}$, i.e.~for any contravariant $S$-tangent $4$-tensor $\varphi^{ABCD}\in L^2_uL^2_{\underline{u}}L^2(S)$, \begin{equation*} \begin{split} &\: \int_{[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2} \varphi^{ABCD} (\hat{\chi}_{n_k})_{AB}(\hat{\underline{\chi}}_{n_k})_{CD} \, \mathrm{dA}_{\gamma_\infty}\,\mathrm{d} u\,\mathrm{d} \underline{u} \\ \to &\: \int_{[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2} \varphi^{ABCD} (\hat{\chi}_{\infty})_{AB}(\hat{\underline{\chi}}_{\infty})_{CD} \, \mathrm{dA}_{\gamma_\infty}\,\mathrm{d} u\,\mathrm{d} \underline{u}. \end{split} \end{equation*} Similarly, after passing to a subsequence $n_k$, $(\hat{\underline{\chi}}_{n_k})_{AB}(\slashed{\nabla}_{n_k})_C(\hat{\chi}_{n_k})_{DE}$ and $(\hat{\chi}_{n_k})_{AB}(\slashed{\nabla}_{n_k})_C(\hat{\underline{\chi}}_{n_k})_{DE}$ also respectively converge weakly in $L^2_uL^2_{\underline{u}}L^2(S)$ to $(\hat{\underline{\chi}}_\infty)_{AB}(\slashed{\nabla}_\infty)_C(\hat{\chi}_\infty)_{DE}$ and $(\hat{\chi}_\infty)_{AB}(\slashed{\nabla}_\infty)_C(\hat{\underline{\chi}}_\infty)_{DE}$. \end{proposition} \begin{proof} It suffices to work componentwise and in a local coordinate chart $U$. The bounds \eqref{eq:bdd.psiH} and \eqref{eq:bdd.psiHb} (together with \eqref{eq:bdd.metric}) imply that in local coordinates $(\theta^1,\theta^2)$, the following estimates hold: \begin{equation*} \sup_n \sum_{i\leq 1,\,j+k\leq 2} \int_0^{\underline{u}_*} \int_0^{u_*} \iint_{U} \left((\frac{\partial}{\partial u})^{i}(\frac{\partial}{\partial \theta^1})^{j}(\frac{\partial}{\partial \theta^2})^{k} (\hat{\chi}_n)_{AB}\right)^2\,\mathrm{d} \theta^1 \,\mathrm{d} \theta^2\, \mathrm{d} u\, \mathrm{d} \underline{u} <+\infty \end{equation*} and \begin{equation*} \sup_n \sum_{i\leq 1,\,j + k\leq 2} \int_0^{\underline{u}_*} \int_0^{u_*} \iint_{U} \left((\frac{\partial}{\partial \underline{u}})^{i}(\frac{\partial}{\partial \theta^1})^{j}(\frac{\partial}{\partial \theta^2})^{k} (\hat{\underline{\chi}}_n)_{AB} \right)^2 \,\mathrm{d} \theta^1 \,\mathrm{d} \theta^2\, \mathrm{d} u\, \mathrm{d} \underline{u} < +\infty. \end{equation*} The assertion of the proposition therefore follows from Lemma~\ref{lem:compensated.compactness}.
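We remark that, in this application of Lemma~\ref{lem:compensated.compactness}, the components of $\hat{\chi}_n$ (whose good transversal derivative is $\frac{\partial}{\partial u}$, as in the first display) play the role of $f_n$, while the components of $\hat{\underline{\chi}}_n$ (whose good transversal derivative is $\frac{\partial}{\partial \underline{u}}$, as in the second display) play the role of $h_n$; it is precisely the transversality of these two derivative directions which makes the compensated compactness mechanism applicable, in analogy with the classical div--curl lemma. The products involving one angular derivative are handled in the same way, now with the components of $\slashed{\nabla}_n\hat{\chi}_n$ or $\slashed{\nabla}_n\hat{\underline{\chi}}_n$ in the role of $f_n$ or $h_n$.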
\qedhere \end{proof} \subsection{Weak limits of $\Omega^2|\protect\hat{\chi}|_{\gamma}^2$ and $\Omega^2|\protect\hat{\underline{\chi}}|_{\gamma}^2$}\label{sec:limit.dust} In this subsection, we discuss the weak limits of $\Omega_n^2|\hat{\chi}_n|_{\gamma_n}^2$ and $\Omega_n^2|\hat{\underline{\chi}}_n|_{\gamma_n}^2$. Notice that unlike $\hat{\chi}_n$ and $\hat{\underline{\chi}}_n$ themselves, these quadratic quantities are only bounded in $L^1$ (and not in $L^2$). We therefore can only hope to obtain weak limits as measures. For this purpose, our main tool will be Proposition~\ref{prop:weak}. \begin{proposition}\label{prop:nu.convergence} For every $u\in [0,u_*]$, there exists a non-negative Radon measure $\mathrm{d} \nu_u$ on $(0,\underline{u}_*)\times \mathbb S^2$, which is uniformly bounded and continuous in $u$ (see \eqref{eq:weak.conv.cont} and \eqref{eq:weak.conv.bdd}), such that after passing to a subsequence $n_k$, the following convergence holds for every bounded $\varphi \in C^0(\{u\} \times (0,\underline{u}_*)\times \mathbb S^2)$: \begin{equation}\label{eq:nu.convergence} \begin{split} &\: \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\mathrm{d}\nu_u \\ = &\: \lim_{k\to +\infty} (\int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\Omega_{n_k}^2 |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} \underline{u}) - \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d}\underline{u}. \end{split} \end{equation} Similarly, there exists a non-negative Radon measure $\mathrm{d} \underline{\nu}_{\underline{u}}$, which is uniformly bounded and continuous in $\underline{u}$, such that after passing to a further subsequence, the following convergence holds for every bounded $\varphi \in C^0((0,u_*) \times \{\underline{u}\} \times \mathbb S^2)$: \begin{equation}\label{eq:nub.convergence} \begin{split} &\: \int_{(0,u_*) \times \{\underline{u}\} \times \mathbb S^2} \varphi \,\mathrm{d}\underline{\nu}_{\underline{u}} \\ = &\: \lim_{k\to +\infty} (\int_{(0,u_*) \times \{\underline{u}\}\times \mathbb S^2} \varphi \,\Omega_{n_k}^2 |\hat{\underline{\chi}}_{n_k}|^2_{\gamma_{n_k}} \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} u) - \int_{(0,u_*) \times \{\underline{u}\}\times \mathbb S^2} \varphi \,\Omega_{\infty}^2 |\hat{\underline{\chi}}_{\infty}|^2_{\gamma_{\infty}} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d} u. \end{split} \end{equation} \end{proposition} \begin{proof} We will only prove the statements concerning $\mathrm{d}\nu_u$; $\mathrm{d}\underline{\nu}_{\underline{u}}$ can be handled similarly. To show \eqref{eq:nu.convergence}, we first use Proposition~\ref{prop:weak} (and Lemma~\ref{lem:easy.convergence}) with $\psi_{n_k}= \Omega_{n_k}^2|\hat{\chi}_{n_k} - \hat{\chi}_\infty|^2_{\gamma_{n_k}}\frac{\sqrt{\det \gamma_{n_k}}}{\sqrt{\det (\gamma_{0,0})_\infty}}$. Note that by \eqref{eq:bdd.metric} and \eqref{eq:bdd.psiH}, $\{\psi_{n_k}\}_{k=1}^{+\infty}$ indeed obeys the assumptions of Proposition~\ref{prop:weak}.
Hence we conclude that there is a further subsequence and a (scalar-valued) Radon measure $\mathrm{d} \nu_u$ which is uniformly bounded and continuous in $u$ such that for every bounded real-valued function $\varphi \in C^0(\{u\}\times (0,\underline{u}_*)\times \mathbb S^2)$, $$\int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi\, \Omega_{n_k}^2|\hat{\chi}_{n_k} -\hat{\chi}_\infty|^2_{\gamma_{n_k}} \frac{\sqrt{\det \gamma_{n_k}}}{\sqrt{\det (\gamma_{0,0})_\infty}}\,\mathrm{dA}_{(\gamma_{0,0})_\infty} \,\mathrm{d} \underline{u} \to \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \varphi \,\mathrm{d} \nu_u.$$ Note that $\mathrm{d}\nu_u$ as defined is manifestly non-negative. Then noticing that $\frac{\sqrt{\det \gamma_{n_k}}}{\sqrt{\det (\gamma_{0,0})_\infty}}\,\mathrm{dA}_{(\gamma_{0,0})_\infty} = \mathrm{dA}_{\gamma_{n_k}}$, expanding $$|\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} = |\hat{\chi}_{n_k} - \hat{\chi}_\infty|^2_{\gamma_{n_k}} + 2\langle \hat{\chi}_{n_k} - \hat{\chi}_\infty, \hat{\chi}_\infty \rangle_{\gamma_{n_k}} + |\hat{\chi}_\infty|^2_{\gamma_{n_k}},$$ and using the convergence statements in Propositions~\ref{prop:gamma}, \ref{prop:metric.limit} and \ref{prop:chi.limit} (which in particular imply that the contribution of the cross term vanishes in the limit, by the weak convergence of $\hat{\chi}_{n_k}$ and the uniform convergence of the metric components), it follows that $\mathrm{d} \nu_u$ indeed satisfies \eqref{eq:nu.convergence}. \qedhere \end{proof} \begin{proposition}\label{prop:nu.add.reg} $(\{\mathrm{d} \nu_u\}_{u\in [0,u_*]},\, \{ \mathrm{d}\underline{\nu}_{\underline{u}}\}_{\underline{u} \in [0,\underline{u}_*]})$ is angularly regular in the sense of Definition~\ref{def:ang.reg.null.dust}. \end{proposition} \begin{proof} As in the previous proposition, we consider only $\mathrm{d}\nu_u$; the case for $\mathrm{d}\underline{\nu}_{\underline{u}}$ is similar. That $\mathrm{d}\nu_u$ is continuous in $u$ has already been established in Proposition~\ref{prop:nu.convergence}. It thus remains to bound each of the terms in \eqref{eq:nu.add.reg}, which will be carried out in the three steps below. \pfstep{Step~1: Estimating the first term in \eqref{eq:nu.add.reg}} Let $\varphi \in \accentset{\scalebox{.7}{\mbox{\tiny (0)}}}{{\mathfrak X}}_{u}$. By density we can assume that $\varphi$ is smooth. Hence, by \eqref{eq:nu.convergence}, for every $u$, \begin{equation}\label{eq:dust.Li.bd} \begin{split} &\: \left| \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \,\mathrm{d}\nu_u \right| \\ \lesssim &\: \limsup_{k\to +\infty} \left| \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \,\Omega_{n_k}^2 |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} \underline{u} \right| + \left| \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \,\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d}\underline{u} \right| \\ \lesssim &\: \|\varphi\|_{C^0_{\underline{u}} L^1(S_{u,\underline{u}})} ( \limsup_{k\to +\infty}\|\Omega_{n_k}\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})} + \|\Omega_{\infty}\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})}) \\ &\: \quad \times ( \limsup_{k\to +\infty}\|\hat{\chi}_{n_k}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}}) } + \|\hat{\chi}_{\infty}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}}) } )\lesssim \|\varphi\|_{C^0_{\underline{u}} L^1(S_{u,\underline{u}})} \lesssim 1, \end{split} \end{equation} where in the last line we used the estimates in Propositions~\ref{prop:metric.limit} and \ref{prop:chi.limit}. \pfstep{Step~2: Estimating the second term in \eqref{eq:nu.add.reg}} Let $\slashed{X}\in \accentset{\scalebox{.7}{\mbox{\tiny (1)}}}{{\mathfrak X}}_{u}$. As in Step~1, we assume by density that $\slashed{X}$ is smooth.
The main difference with Step~1 is that we need to integrate by parts in the angular direction to handle the additional derivative. By \eqref{eq:nu.convergence}, for every $u$, \begin{equation*} \begin{split} &\: \left|\int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \div_\infty \slashed{X}\,\mathrm{d}\nu_u\right| \\ \leq &\: \limsup_{k\to +\infty} \left| \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} (\div_\infty \slashed{X}) \,\Omega_{n_k}^2 |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} \underline{u} \right| + \left| \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} (\div_\infty \slashed{X}) \,\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d}\underline{u} \right| \\ \lesssim &\: \limsup_{k\to +\infty} \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} |\{ (\slashed{\nabla}_\infty)_A - (\slashed{\nabla}_{n_k})_A \} \slashed{X}^A| \,\Omega_{n_k}^2 |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}} \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} \underline{u} \\ &\: + \limsup_{k\to +\infty} \left| \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \slashed{X}^A \, (\slashed{\nabla}_{n_k})_A (\Omega_{n_k}^2 |\hat{\chi}_{n_k}|^2_{\gamma_{n_k}}) \,\mathrm{dA}_{\gamma_{n_k}}\, \mathrm{d} \underline{u} \right|\\ &\: + \left| \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} \slashed{X}^A \,(\slashed{\nabla}_\infty)_A (\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}}) \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d}\underline{u} \right| \\ \lesssim &\: \|\slashed X\|_{L^\infty_{\underline{u}}L^1(S_{u,\underline{u}})} \times (1 + \|(\slashed\Gamma_\infty)_{AB}^A - (\slashed\Gamma_{n_k})_{AB}^A\|_{C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})})\\ &\: \quad \times ( \limsup_{k\to +\infty}\|\Omega_{n_k}\|_{C^0_u C^0_{\underline{u}} C^1(S_{u,\underline{u}})} + \|\Omega_{\infty}\|_{C^0_u C^0_{\underline{u}} C^1(S_{u,\underline{u}})}) \\ &\: \times ( \limsup_{k\to +\infty}\|\hat{\chi}_{n_k}\|_{L^\infty_u L^2_{\underline{u}} W^{1,\infty}(S_{u,\underline{u}}) } + \|\hat{\chi}_{\infty}\|_{L^\infty_u L^2_{\underline{u}} W^{1,\infty}(S_{u,\underline{u}}) } ) \\ \lesssim &\: \|\slashed X\|_{L^\infty_{\underline{u}}L^1(S_{u,\underline{u}})} \lesssim 1, \end{split} \end{equation*} where in the last line we have used the estimates established in Propositions~\ref{prop:Christoffel}, \ref{prop:metric.limit} and \ref{prop:chi.limit}. \pfstep{Step~3: Estimating the third term in \eqref{eq:nu.add.reg}} To estimate the third term, we carry out an integration by parts argument as in Step~2. The only difference is that we have higher derivatives, e.g.~a term $(\slashed{\nabla}_\infty)^2 \hat{\chi}_\infty$, which can only be controlled in $L^\infty_u L^2_{\underline{u}} L^4(S_{u,\underline{u}})$ by Proposition~\ref{prop:chi.limit} (but not $L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}})$). It is for this reason that we need to assume control of $\|\slashed X\otimes \slashed Y\|_{L^\infty_{\underline{u}}L^{\frac 43}(S_{u,\underline{u}})}$ in the definition of $\accentset{\scalebox{.7}{\mbox{\tiny (2)}}}{{\mathfrak X}}_{u}$. 
We omit the straightforward details, but will just demonstrate this with one of the most difficult terms: \begin{equation*} \begin{split} &\: \left| \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \slashed{X}^A \slashed{Y}^B \,(\slashed{\nabla}_\infty)^2_{BA} (\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}}) \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d}\underline{u} \right| \\ \lesssim &\: \| \slashed{X} \otimes \slashed{Y} \|_{L^\infty_{\underline{u}} L^{\frac 43}(S_{u,\underline{u}})} \| (\slashed{\nabla}_\infty)^2_{BA} (\Omega_{\infty}^2 |\hat{\chi}_{\infty}|^2_{\gamma_{\infty}}) \|_{L^\infty_u L^1_{\underline{u}} L^4(S_{u,\underline{u}})} \lesssim \| \slashed{X} \otimes \slashed{Y} \|_{L^\infty_{\underline{u}} L^{\frac 43}(S_{u,\underline{u}})} \lesssim 1, \end{split} \end{equation*} where we have used Propositions~\ref{prop:metric.limit} and \ref{prop:chi.limit}. \qedhere \end{proof} \textbf{At this point, we fix the subsequence $n_k$ such that Propositions~\ref{prop:gamma}, \ref{prop:Christoffel}, \ref{prop:metric.limit}, \ref{prop:eta.etab.limit}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit}, \ref{prop:trch.imp}, \ref{prop:chihchibh} and \ref{prop:nu.convergence} hold.} Along this subsequence, the spacetime metrics $g_{n_k}$ converge uniformly to the metric $g_\infty$ of a limiting spacetime $(\mathcal M, g_\infty)$, with additional refined convergence for the Ricci coefficients as described by the propositions above. Moreover, combining the above propositions with Propositions~\ref{prop:K.imp} and \ref{prop:eta.etab.imp}, it also follows that the limit $(\mathcal M,\,g_\infty)$ is angularly regular (see Definition~\ref{double.null.def.2}). \section{The equations satisfied by the limit spacetime and the proof of Theorem~\ref{main.thm}}\label{sec:eqns.for.limit} We continue to work under the assumptions of Theorem~\ref{main.thm}, and take $n_k$ as in the end of the last section. Using the properties of the limits that we showed in the previous section, we now derive the equations satisfied by the various limiting quantities. \begin{itemize} \item In \textbf{Section~\ref{sec:equations.for.metric.comp}}, we derive the transport equations for the metric components. \item In \textbf{Section~\ref{sec:eq.Ricci.trans.1}} and \textbf{Section~\ref{sec:eq.Ricci.trans.2}}, we derive the transport equations for the Ricci coefficients. These equations correspond exactly to a description of the (weak) Ricci curvature. \item In \textbf{Section~\ref{sec.prop.dust}}, we prove the propagation equation for the null dust. \item Finally, in \textbf{Section~\ref{sec:higher.Ricci}} and \textbf{Section~\ref{sec:renorm.Bianchi}}, we derive the higher order equations for the Ricci coefficients and renormalized Bianchi equations respectively. \end{itemize} Combined with the results in the previous section, we will then complete the proof of Theorem~\ref{main.thm} in \textbf{Section~\ref{sec:proof.of.main.thm}}. \subsection{Equations for the metric components}\label{sec:equations.for.metric.comp} In Section~\ref{sec:existence}, we defined \begin{itemize} \item $(\gamma_\infty, \,b_\infty,\, \Omega_\infty)$, which are understood as subsequential uniform limits of the metric components, and \item $(\chi_\infty,\,\underline{\chi}_\infty,\,\eta_\infty,\,\underline{\eta}_\infty,\,\omega_\infty,\,\underline{\omega}_\infty)$, which are understood as subsequential weak limits of the Ricci coefficients.
\end{itemize} It is easy to check that these are related in the expected manner, i.e.~$(\chi_\infty,\,\underline{\chi}_\infty,\,\eta_\infty,\,\underline{\eta}_\infty,\,\omega_\infty,\,\underline{\omega}_\infty)$ are indeed the Ricci coefficients associated to the limiting metric. More precisely: \begin{proposition}\label{prop:Ricci.is.metric.derivative} The equations \eqref{metric.derivative.invar} and \eqref{Ricci.relation} hold when we \begin{itemize} \item take the metric components to be $(\gamma_\infty, \,b_\infty,\, \Omega_\infty)$ (given by Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}), \item take the Ricci coefficients to be $(\chi_\infty,\,\underline{\chi}_\infty,\,\eta_\infty,\,\underline{\eta}_\infty,\,\omega_\infty,\,\underline{\omega}_\infty)$ (given by Propositions~\ref{prop:eta.etab.limit}, \ref{prop:trch.weak.limit} and \ref{prop:chi.limit}), and \item interpret all derivatives as weak derivatives. \end{itemize} \end{proposition} \begin{proof} This follows easily from the uniqueness of limits in the sense of distributions. \qedhere \end{proof} Proposition~\ref{prop:Ricci.is.metric.derivative} immediately implies the following transport equations for the metric components. \begin{proposition}\label{prop:equations.for.g} The equations $$\slashed{\nabla}_4 \gamma = 0,\quad \slashed{\nabla}_4 b = - 2\Omega (\eta - \underline{\eta}) + \chi\cdot b,\quad \slashed{\nabla}_4 \log \Omega = -2 \omega$$ hold in the integrated sense (see Definition~\ref{def:weak.transport}). \end{proposition} \begin{proof} The last equation here is just the first equation in \eqref{Ricci.relation}. The other two equations follow from \eqref{metric.derivative.invar} and the expression of $\slashed{\nabla}_4$ in \eqref{nab4.def}. Using Proposition~\ref{prop:Ricci.is.metric.derivative}, we have thus shown that all these equations hold when the derivatives are understood as weak derivatives. Together with the regularity of the metric components and the Ricci coefficients we have derived in the previous section, it follows that these equations are also satisfied in the integrated sense. \qedhere \end{proof} In fact, we can also derive transport equations for the derivatives of the metric components. \begin{proposition}\label{prop:equations.for.nabla.g} The equations \eqref{eq:nablagamma}--\eqref{eq:nablab} hold for $(\mathcal M, g_\infty)$ in the integrated sense (see Definition~\ref{def:weak.transport}). \end{proposition} \begin{proof} We start from the fact that, by Proposition~\ref{prop:higher.order.metric.C2}, the equations \eqref{eq:nablagamma}--\eqref{eq:nablab} hold classically, and hence also in the integrated sense, for $(\mathcal M, g_{n_k})$ for all $k \in \mathbb N$. Taking \eqref{eq:nablagamma} as an example, we have that for any rank-$3$ $C^1$ tensor $\varphi$, \begin{equation}\label{eq:d4gamma.nk} \begin{split} &\: \int_{S_{u,\underline{u}_1}} \langle\varphi, (\slashed{\nabla}\gamma)_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k} } - \int_{S_{u,\underline{u}_2}} \langle\varphi, (\slashed{\nabla}\gamma)_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k} } \\ =&\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} (\langle \varphi, (\slashed{\mathrm{tr}}\chi_{n_k} - 2\omega_{n_k} )(\slashed{\nabla}\gamma)_{n_k} \rangle + \langle \slashed{\nabla}_4\varphi,(\slashed{\nabla}\gamma)_{n_k} \rangle) \Omega_{n_k} ^2 \,\mathrm{d} A_{\gamma_{n_k} }\, \mathrm{d} \underline{u}' .
\end{split} \end{equation} Now we want to pass to the $k\to +\infty$ limit to obtain \eqref{eq:nablagamma} in the integrated sense. By Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}, $\gamma_{n_k}$, $(\slashed{\nabla}\gamma)_{n_k}$, $\Omega_{n_k}$ and $\mathrm{d} A_{\gamma_{n_k}}$ have uniform limits which allow us to take $k\to+\infty$. Similarly, $\slashed{\mathrm{tr}}\chi_{n_k}$ also has a strong limit in $C^0_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}})$ by \eqref{eq:trch.imp.conv}, allowing us to pass to $k\to +\infty$. The only term without a strong limit is therefore $\omega_{n_k}$, which, by Proposition~\ref{prop:chi.limit}, only has a weak $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ limit (for every fixed $u$). Nevertheless, $\omega_{n_k}$ is multiplied by $(\slashed{\nabla}\gamma)_{n_k} \Omega_{n_k}^2 \, \mathrm{d} A_{\gamma_{n_k}}$, which has a uniform limit, so that $$\int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \omega_{n_k} (\slashed{\nabla}\gamma)_{n_k} \rangle \,\mathrm{d} A_{\gamma_{n_k} }\, \mathrm{d} \underline{u}' \to \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \omega_{\infty} (\slashed{\nabla}\gamma)_{\infty} \rangle \,\mathrm{d} A_{\gamma_{\infty} }\, \mathrm{d} \underline{u}'.$$ Combining all these observations, we can pass \eqref{eq:d4gamma.nk} to the $k\to +\infty$ limit to obtain \eqref{eq:nablagamma} in the integrated sense. The equations \eqref{eq:nablaOmega} and \eqref{eq:nablab} can be proven similarly, noting that in both cases the only terms without a strong limit involve $\hat{\chi}_{n_k}$ or $\omega_{n_k}$, but they are multiplied by terms which have uniform limits. We omit the details. \qedhere \end{proof} \subsection{The vanishing (weak) Ricci curvature components of the limit spacetime}\label{sec:eq.Ricci.trans.1} \begin{proposition}\label{prop:Ricci.easy} The equations \eqref{Ric4A}--\eqref{Ric3A} hold for $(\mathcal M, g_\infty)$ in the integrated sense (see Definition~\ref{def:weak.transport}). \end{proposition} \begin{proof} Let us only consider \eqref{Ric4A}. The equation \eqref{Ric3A} can be treated in essentially the same manner. According to Definition~\ref{def:weak.transport}, we need to show, for every $S$-tangent vector field $\varphi \in C^1$, \begin{equation}\label{eq:Ricci4A.1} \begin{split} &\: \int_{S_{u,\underline{u}_1}} \langle\varphi, \eta_{\infty} \rangle \Omega_{\infty} \,\mathrm{d} A_{\gamma_{\infty}} - \int_{S_{u,\underline{u}_2}} \langle\varphi, \eta_{\infty} \rangle \Omega_{\infty} \,\mathrm{d} A_{\gamma_{\infty}} \\ = &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \div_{\infty}\hat{\chi}_{\infty} - \f12 \slashed{\nabla}_{\infty}\slashed{\mathrm{tr}}\chi_{\infty} - \frac 12(\eta-\underline{\eta})_{\infty}\cdot_{\infty}\hat{\chi}_{\infty} + \frac 14\slashed{\mathrm{tr}}\chi_{\infty}\eta_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'\\ &\: + \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \frac 34 \slashed{\mathrm{tr}}\chi_{\infty}\underline{\eta}_{\infty} - 2\omega_{\infty}\eta_{\infty}\rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}' +\int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{\infty}\varphi,\eta_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'.
\end{split} \end{equation} Since $(\mathcal M, g_{n_k})$ is a smooth solution to the Einstein vacuum equations, by Proposition~\ref{prop:null.structure}, \begin{equation}\label{eq:Ricci4A.2} \begin{split} &\: \int_{S_{u,\underline{u}_1}} \langle\varphi, \eta_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}} - \int_{S_{u,\underline{u}_2}} \langle\varphi, \eta_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}} \\ = &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \div_{n_k}\hat{\chi}_{n_k} - \f12 \slashed{\nabla}_{n_k}\slashed{\mathrm{tr}}\chi_{n_k} - \frac 12(\eta-\underline{\eta})_{n_k}\cdot_{n_k}\hat{\chi}_{n_k} + \frac 14\slashed{\mathrm{tr}}\chi_{n_k}\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}'\\ &\: + \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \frac 34 \slashed{\mathrm{tr}}\chi_{n_k}\underline{\eta}_{n_k} - 2\omega_{n_k}\eta_{n_k}\rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}' +\int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{n_k}\varphi,\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}'. \end{split} \end{equation} Our goal now is to pass to the $k\to+\infty$ limit in \eqref{eq:Ricci4A.2} to obtain \eqref{eq:Ricci4A.1}. The terms on the LHS of \eqref{eq:Ricci4A.2} are easy, since by Propositions~\ref{prop:gamma}, \ref{prop:metric.limit} and \ref{prop:eta.etab.limit}, all of $\gamma_{n_k}$, $\Omega_{n_k}$ and $\eta_{n_k}$ converge uniformly to their limits. We therefore have \begin{equation}\label{eq:Ricci4A.3} \begin{split} &\: \int_{S_{u,\underline{u}_1}} \langle\varphi, \eta_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}} - \int_{S_{u,\underline{u}_2}} \langle\varphi, \eta_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}} \\ \to &\: \int_{S_{u,\underline{u}_1}} \langle\varphi, \eta_{\infty} \rangle \Omega_{\infty} \,\mathrm{d} A_{\gamma_{\infty}} - \int_{S_{u,\underline{u}_2}} \langle\varphi, \eta_{\infty} \rangle \Omega_{\infty} \,\mathrm{d} A_{\gamma_{\infty}}. \end{split} \end{equation} The terms on the RHS of \eqref{eq:Ricci4A.2} are not much more difficult. To proceed, let us first recall from Propositions~\ref{prop:gamma} and \ref{prop:metric.limit} that the metric components converge uniformly to their limits, so that we can focus on taking the limits of the Ricci coefficients. There are now three types of terms to understand: \begin{enumerate} \item Terms in which $\varphi$ is contracted with $\slashed{\nabla}_{n_k} \chi_{n_k}$. For these terms, we use that $\slashed{\nabla}_{n_k}\chi_{n_k}$ converges weakly in $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ for all fixed $u$ (Propositions~\ref{prop:chi.limit} and \ref{prop:trch.weak.limit}). Hence, \begin{equation}\label{eq:Ricci4A.4} \begin{split} &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \div_{n_k}\hat{\chi}_{n_k} - \f12 \slashed{\nabla}_{n_k}\slashed{\mathrm{tr}}\chi_{n_k}\rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}' \\ \to &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, \div_{\infty}\hat{\chi}_{\infty} - \f12 \slashed{\nabla}_{\infty}\slashed{\mathrm{tr}}\chi_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'.
\end{split} \end{equation} \item Terms in which $\varphi$ is contracted with a \underline{product} of two Ricci coefficients. There are in turn two types of terms: (a) a product of two Ricci coefficients, each of which converges in the $C^0_u L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ norm, e.g.~$\slashed{\mathrm{tr}}\chi_{n_k} \eta_{n_k}$, $\slashed{\mathrm{tr}}\chi_{n_k} \underline{\eta}_{n_k}$ (by Propositions~\ref{prop:eta.etab.limit} and \ref{prop:trch.imp}); (b) a product of two Ricci coefficients, one of which converges weakly in $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ for every $u$ and the other converges in the $C^0_u L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ norm, e.g.~$\eta_{n_k}\cdot\hat{\chi}_{n_k}$, $\underline{\eta}_{n_k}\cdot\hat{\chi}_{n_k}$, $\omega_{n_k} \hat{\chi}_{n_k}$. In either case, we know that the product converges weakly in $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ to the product of the limits for every $u\in [0,u_*]$. This implies \begin{equation}\label{eq:Ricci4A.5} \begin{split} &\:\int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, - \frac 12(\eta-\underline{\eta})_{n_k}\cdot_{n_k}\hat{\chi}_{n_k} + \frac 14\slashed{\mathrm{tr}}\chi_{n_k}\eta_{n_k} +\frac 34 \slashed{\mathrm{tr}}\chi_{n_k}\underline{\eta}_{n_k} - 2\omega_{n_k}\eta_{n_k}\rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}' \\ \to &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle \varphi, - \frac 12(\eta-\underline{\eta})_{\infty}\cdot_{\infty}\hat{\chi}_{\infty} + \frac 14\slashed{\mathrm{tr}}\chi_{\infty}\eta_{\infty} +\frac 34 \slashed{\mathrm{tr}}\chi_{\infty}\underline{\eta}_{\infty} - 2\omega_{\infty}\eta_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'. \end{split} \end{equation} \item Terms in which $(\slashed{\nabla}_4)_{n_k}\varphi$ is contracted with $\eta_{n_k}$. Expanding $(\slashed{\nabla}_4)_{n_k}\varphi$ using \eqref{nab4.def}, we see from Propositions~\ref{prop:gamma}, \ref{prop:metric.limit}, \ref{prop:trch.weak.limit} and \ref{prop:chi.limit} that $(\slashed{\nabla}_4)_{n_k}\varphi \rightharpoonup (\slashed{\nabla}_4)_{\infty}\varphi$ weakly in $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ for every $u\in [0,u_*]$. Now this is contracted with $\eta_{n_k}$, which tends to $\eta_\infty$ in the $C^0_u C^0_{\underline{u}}L^2(S_{u,\underline{u}})$ norm by Proposition~\ref{prop:eta.etab.limit}. It follows that \begin{equation}\label{eq:Ricci4A.6} \begin{split} \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{n_k}\varphi,\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}' \to &\: \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{\infty}\varphi,\eta_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'. \end{split} \end{equation} \end{enumerate} Combining \eqref{eq:Ricci4A.3}--\eqref{eq:Ricci4A.6}, we therefore deduce \eqref{eq:Ricci4A.1} from \eqref{eq:Ricci4A.2} after taking $k\to +\infty$, as desired. \qedhere \end{proof} \begin{proposition}\label{prop:Ricci.harder} The equations \eqref{trRicAB}--\eqref{Ric34.1} hold for $(\mathcal M, g_\infty)$ in the weak integrated sense (see Definition~\ref{def:weaker.transport}). 
\end{proposition} \begin{proof} The proof is in fact quite similar to that of Proposition~\ref{prop:Ricci.easy}, with two differences: \begin{itemize} \item The equations are now in weak integrated (as opposed to integrated) form. \item We also use the compensated compactness property in Proposition~\ref{prop:chihchibh}. \end{itemize} Since the equations \eqref{trRicAB}--\eqref{Ric34.1} can in fact all be treated in a similar fashion, we will only discuss \eqref{RicAB.1} in detail. Recalling Definition~\ref{def:weaker.transport}, our goal will be to show that for every contravariant $S$-tangent $2$-tensor field $\varphi \in C^1$, \begin{equation}\label{eq:Ricci.harder.1} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \hat{\chi}_\infty\rangle \Omega_\infty\,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \hat{\chi}_\infty\rangle \Omega_\infty \,\mathrm{d} A_{\gamma_\infty}\,\mathrm{d} \underline{u} \\ = &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \slashed{\nabla}_\infty \widehat{\otimes}_\infty\eta_\infty +\frac 12\slashed{\mathrm{tr}}\underline{\chi}_\infty \hat{\chi}_\infty - \f12 \slashed{\mathrm{tr}}\chi_\infty \hat{\underline{\chi}}_\infty + \eta_\infty \widehat{\otimes}_\infty\eta_\infty \rangle \Omega_\infty^2 \,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle (\slashed{\nabla}_3)_\infty \varphi, \hat{\chi}_\infty \rangle \Omega_\infty^2 \,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}. \end{split} \end{equation} In a similar spirit as the proof of Proposition~\ref{prop:Ricci.easy}, we will derive \eqref{eq:Ricci.harder.1} by taking the $k\to +\infty$ limit of the following equation, which holds thanks to Proposition~\ref{prop:null.structure}: \begin{equation}\label{eq:Ricci.harder.2} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \hat{\chi}_{n_k}\rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \hat{\chi}_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}}\,\mathrm{d} \underline{u} \\ = &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \slashed{\nabla}_{n_k}\widehat{\otimes}_{n_k}\eta_{n_k}+\frac 12\slashed{\mathrm{tr}}\underline{\chi}_{n_k} \hat{\chi}_{n_k} - \f12 \slashed{\mathrm{tr}}\chi_{n_k} \hat{\underline{\chi}}_{n_k} + \eta_{n_k} \widehat{\otimes}_{n_k}\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle (\slashed{\nabla}_3)_{n_k}\varphi, \hat{\chi}_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}. \end{split} \end{equation} We proceed, again, as in the proof of Proposition~\ref{prop:Ricci.easy}.
First, by Propositions~\ref{prop:gamma}, \ref{prop:metric.limit} and \ref{prop:chi.limit}, \begin{equation}\label{eq:Ricci.harder.3} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \hat{\chi}_{n_k}\rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \hat{\chi}_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}}\,\mathrm{d} \underline{u} \\ \to &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \hat{\chi}_\infty\rangle \Omega_\infty\,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \hat{\chi}_\infty\rangle \Omega_\infty \,\mathrm{d} A_{\gamma_\infty}\,\mathrm{d} \underline{u}. \end{split} \end{equation} For the terms on the RHS of \eqref{eq:Ricci.harder.2}, we split into three types of terms as in Proposition~\ref{prop:Ricci.easy}. Again, we note that we can focus on the limits of the Ricci coefficients in view of Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}. \begin{enumerate} \item Terms in which $\varphi$ is contracted with $\slashed{\nabla}_{n_k} \eta_{n_k}$. Using Proposition~\ref{prop:eta.etab.limit} (in addition to Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}), \begin{equation}\label{eq:Ricci.harder.4} \begin{split} &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \slashed{\nabla}_{n_k}\widehat{\otimes}_{n_k}\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ \to &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \slashed{\nabla}_\infty \widehat{\otimes}_\infty\eta_\infty \rangle \Omega_\infty^2 \,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}. \end{split} \end{equation} \item Terms in which $\varphi$ is contracted with a \underline{product} of two Ricci coefficients. As in the proof of Proposition~\ref{prop:Ricci.easy}, it can be checked that in any such product, at least one factor has a \underline{strong} $L^2_uL^2_{\underline{u}}L^2(S_{u,\underline{u}})$ limit while the other factor has at least a weak $L^2_uL^2_{\underline{u}}L^2(S_{u,\underline{u}})$ limit. Thus, using Propositions~\ref{prop:gamma}, \ref{prop:metric.limit}, \ref{prop:eta.etab.limit}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit} and \ref{prop:trch.imp}, we obtain \begin{equation}\label{eq:Ricci.harder.5} \begin{split} &\:\int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \frac 12\slashed{\mathrm{tr}}\underline{\chi}_{n_k} \hat{\chi}_{n_k} - \f12 \slashed{\mathrm{tr}}\chi_{n_k} \hat{\underline{\chi}}_{n_k} + \eta_{n_k} \widehat{\otimes}_{n_k}\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}\\ \to &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle \varphi, \frac 12\slashed{\mathrm{tr}}\underline{\chi}_\infty \hat{\chi}_\infty - \f12 \slashed{\mathrm{tr}}\chi_\infty \hat{\underline{\chi}}_\infty + \eta_\infty \widehat{\otimes}_\infty\eta_\infty \rangle \Omega_\infty^2 \,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} . \end{split} \end{equation} \item Terms in which $(\slashed{\nabla}_3)_{n_k}\varphi$ is contracted with $\hat{\chi}_{n_k}$.
Expanding $(\slashed{\nabla}_3)_{n_k}\varphi$ using \eqref{nab3.def}, we see that \begin{equation*} \begin{split} &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle (\slashed{\nabla}_3)_{n_k}\varphi, \hat{\chi}_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ = &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} (\frac{\partial\varphi^{AB}}{\partial u} + b_{n_k}^C (\slashed{\nabla}_{n_k})_C \varphi^{AB} - 2(\slashed{\nabla}_{n_k})_C b_{n_k}^{(A} \varphi^{B)C}) (\hat{\chi}_{n_k})_{AB} \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} 2 (\gamma_{n_k}^{-1})^{C(A|}(\underline{\chi}_{n_k})_{CD} \varphi^{|B)D} (\hat{\chi}_{n_k})_{AB} \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}. \end{split} \end{equation*} For the terms on the first line of the RHS, we use Propositions~\ref{prop:gamma}, \ref{prop:metric.limit} and \ref{prop:chi.limit} to pass to the limit. For the second line, we note that after expressing $\underline{\chi}_{n_k} = \hat{\underline{\chi}}_{n_k} + \f12 \gamma_{n_k} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}$, we have a term involving $\hat{\chi}\otimes\hat{\underline{\chi}}$. We therefore pass to the limit using Proposition~\ref{prop:chihchibh}, in addition to Propositions~\ref{prop:gamma}, \ref{prop:metric.limit}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit} and \ref{prop:trch.imp}. In summary, we obtain \begin{equation}\label{eq:Ricci.harder.6} \begin{split} &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle (\slashed{\nabla}_3)_{n_k}\varphi, \hat{\chi}_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ \to &\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u',\underline{u}}} \langle (\slashed{\nabla}_3)_\infty \varphi, \hat{\chi}_\infty \rangle \Omega_\infty^2 \,\mathrm{d} A_{\gamma_\infty}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}. \end{split} \end{equation} \end{enumerate} Combining \eqref{eq:Ricci.harder.3}--\eqref{eq:Ricci.harder.6}, we obtain \eqref{eq:Ricci.harder.1} from \eqref{eq:Ricci.harder.2}. As we mentioned at the beginning of the proof, the other equations can all be treated similarly. The only additional thing to note is that with the quadratic terms in the Ricci coefficients (Case~(2) above), in general there are also products in which no factor has a strong limit, but all of these terms can be handled using compensated compactness in Proposition~\ref{prop:chihchibh}. We omit the details. \qedhere \end{proof} \subsection{The non-vanishing (weak) Ricci curvature components of the limit spacetime}\label{sec:eq.Ricci.trans.2} \begin{proposition}\label{prop:trch.limit.eqn} $\slashed{\mathrm{tr}}\underline{\chi}^{\pm}_{\infty}$ and $\slashed{\mathrm{tr}}\chi^{\pm}_{\infty}$ satisfy \eqref{eq:trchb} and \eqref{eq:trch} respectively. \end{proposition} \begin{proof} We will only derive equation \eqref{eq:trchb} for $\slashed{\mathrm{tr}}\underline{\chi}^{\pm}_\infty$; the equation \eqref{eq:trch} for $\slashed{\mathrm{tr}}\chi^{\pm}_\infty$ is similar (and simpler). Fix $0\leq u_1< u_2\leq u_*$. We begin with the fact that \eqref{Ric33} holds for all $(\mathcal M, g_{n_k})$. Take a $C^1$ function $\varphi:[0,u_*]\times \mathbb S^2\to \mathbb R$.
Multiply \eqref{Ric33} by $\varphi(u,\vartheta) \xi_\ell(u)$, where for every $\ell\in \mathbb N$ with $\ell^{-1}\leq \frac{u_2 - u_1}{2}$, $\xi_\ell:[0,u_*]\to \mathbb R$ is defined to be the following (Lipschitz) cutoff function: \begin{equation}\label{def:xi} \xi_\ell(u):= \begin{cases} 0 & \mbox{if $u\in [0, u_1)$}\\ \ell (u-u_1) & \mbox{if $u\in [u_1, u_1+\ell^{-1})$}\\ 1 & \mbox{if $u\in [u_1+\ell^{-1}, u_2-\ell^{-1})$}\\ - \ell (u - u_2) & \mbox{if $u\in [u_2-\ell^{-1}, u_2)$}\\ 0 & \mbox{if $u\in [u_2, u_*]$} \end{cases}. \end{equation} Integrating in $u$ and on $S_{u,\underline{u}}$, and integrating by parts in $u$, we obtain that the following holds for every $\underline{u}\in [0,\underline{u}_*]$: \begin{equation}\label{eq:trch.limit.eqn.1} \begin{split} &\: \ell \int_{u_2-\ell^{-1}}^{u_2} \int_{S_{u,\underline{u}}} \varphi \Omega_{n_k} \slashed{\mathrm{tr}}\underline{\chi}_{n_k} \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u - \ell \int^{u_1+\ell^{-1}}_{u_1} \int_{S_{u,\underline{u}}} \varphi \Omega_{n_k} \slashed{\mathrm{tr}}\underline{\chi}_{n_k} \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \xi_\ell ((e_3\varphi)\slashed{\mathrm{tr}}\underline{\chi}_{n_k} -4\varphi\underline{\omega}_{n_k} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}+ \f12 \varphi (\slashed{\mathrm{tr}}\underline{\chi}_{n_k})^2 - \varphi|\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2)\Omega_{n_k}^2\,\mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} u. \end{split} \end{equation} We now take $k\to +\infty$. Note that we can\underline{not} just replace the $n_k$'s in \eqref{eq:trch.limit.eqn.1} by $\infty$ because of the term $|\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2 \Omega_{n_k}^2$ (see~Proposition~\ref{prop:nu.convergence}). On the other hand, in all the \underline{other} terms which are quadratic in the Ricci coefficients, there must be at least one factor which has a strong $L^2_uL^2(S_{u,\underline{u}})$ limit (for every $\underline{u}\in [0,\underline{u}_*]$). Indeed, using Propositions~\ref{prop:gamma}, \ref{prop:metric.limit}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit}, \ref{prop:trch.imp} and \ref{prop:nu.convergence}, we obtain \begin{equation}\label{eq:trch.limit.eqn.2} \begin{split} &\: \ell \int_{u_2-\ell^{-1}}^{u_2} \int_{S_{u,\underline{u}}} \varphi \Omega_{\infty} \slashed{\mathrm{tr}}\underline{\chi}_{\infty} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d} u - \ell \int^{u_1+\ell^{-1}}_{u_1} \int_{S_{u,\underline{u}}} \varphi \Omega_{\infty} \slashed{\mathrm{tr}}\underline{\chi}_{\infty} \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d} u \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \xi_\ell (((e_3)_\infty \varphi)\slashed{\mathrm{tr}}\underline{\chi}_{\infty} -4 \varphi \underline{\omega}_{\infty} \slashed{\mathrm{tr}}\underline{\chi}_{\infty}+ \f12 \varphi (\slashed{\mathrm{tr}}\underline{\chi}_{\infty})^2 - \varphi|\hat{\underline{\chi}}_{\infty}|_{\gamma_{\infty}}^2)\Omega_{\infty}^2\,\mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} u \\ &\: -\int_{(u_1, u_2)\times \{\underline{u}\}\times \mathbb S^2} \xi_\ell \varphi\,\mathrm{d} \underline{\nu}_{\underline{u}}. \end{split} \end{equation} Finally, we take $\ell \to +\infty$ in \eqref{eq:trch.limit.eqn.2}. For the LHS, we use Lemma~\ref{lem:trace}; for the RHS, we use the dominated convergence theorem.
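(Schematically, Lemma~\ref{lem:trace} is applied in the following form: for the relevant quantities $h$, the one-sided traces exist and
\begin{equation*}
\ell \int_{u_2-\ell^{-1}}^{u_2} h(u)\,\mathrm{d} u \to h(u_2^-), \qquad \ell \int_{u_1}^{u_1+\ell^{-1}} h(u)\,\mathrm{d} u \to h(u_1^+) \quad \mbox{as $\ell \to +\infty$.}
\end{equation*}
This is the origin of the one-sided traces $\slashed{\mathrm{tr}}\underline{\chi}^{-}_{\infty}$ and $\slashed{\mathrm{tr}}\underline{\chi}^{+}_{\infty}$ appearing below; we do not restate the precise hypotheses of Lemma~\ref{lem:trace} here.)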
We then obtain \begin{equation*} \begin{split} &\: \int_{S_{u_2,\underline{u}}} \varphi \Omega_{\infty} \slashed{\mathrm{tr}}\underline{\chi}^-_{\infty} \,\mathrm{dA}_{\gamma_{\infty}} - \int_{S_{u_1,\underline{u}}} \varphi \Omega_{\infty} \slashed{\mathrm{tr}}\underline{\chi}_{\infty}^+ \,\mathrm{dA}_{\gamma_{\infty}} \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} (((e_3)_\infty \varphi)\slashed{\mathrm{tr}}\underline{\chi}_{\infty} -4 \varphi \underline{\omega}_{\infty} \slashed{\mathrm{tr}}\underline{\chi}_{\infty}+ \f12 \varphi (\slashed{\mathrm{tr}}\underline{\chi}_{\infty})^2 - \varphi|\hat{\underline{\chi}}_{\infty}|_{\gamma_{\infty}}^2)\Omega_{\infty}^2\,\mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} u \\ &\: -\int_{(u_1, u_2)\times \{\underline{u}\}\times \mathbb S^2} \varphi\,\mathrm{d} \underline{\nu}_{\underline{u}}, \end{split} \end{equation*} which is exactly the equation \eqref{eq:trchb}. \qedhere \end{proof} \subsection{Propagation equation for the null dust}\label{sec.prop.dust} In this subsection, we prove transport equations for $\mathrm{d}\nu_u$ and $\mathrm{d} \underline{\nu}_{\underline{u}}$ (recall Proposition~\ref{prop:nu.convergence}). The main result is the following proposition: \begin{proposition}\label{prop:nu.transport} For every $0\leq u_1 <u_2\leq u_*$, and every $C^1_c$ function $\varphi: [0,u_*]\times (0,\underline{u}_*) \times \mathbb S^2\to \mathbb R$, $$\int_{\{u_2\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi \,\mathrm{d} \nu_{u_2} = \int_{\{u_1\}\times (0,\underline{u}_*) \times \mathbb S^2} \varphi\,\mathrm{d}\nu_{u_1} + \int_{u_1}^{u_2} \int_{\{u\}\times (0,\underline{u}_*) \times \mathbb S^2} (\frac{\partial\varphi}{\partial u} + \slashed{\nabla}_{b_\infty} \varphi ) \,\mathrm{d} \nu_{u}\,\mathrm{d} u.$$ Similarly, for every $0\leq \underline{u}_1 <\underline{u}_2 \leq \underline{u}_*$, and every $C^1_c$ function $\varphi: (0,u_*) \times [0,\underline{u}_*] \times \mathbb S^2\to \mathbb R$, $$\int_{(0,u_*) \times \{\underline{u}_2\} \times \mathbb S^2} \varphi \,\mathrm{d} \underline{\nu}_{\underline{u}_2} = \int_{(0,u_*) \times \{\underline{u}_1\} \times \mathbb S^2} \varphi\,\mathrm{d}\underline{\nu}_{\underline{u}_1} + \int_{\underline{u}_1}^{\underline{u}_2} \int_{(0,u_*) \times \{\underline{u}\} \times \mathbb S^2} \frac{\partial\varphi}{\partial \underline{u}} \,\mathrm{d} \underline{\nu}_{\underline{u}}\,\mathrm{d} \underline{u}.$$ \end{proposition} The proof of Proposition~\ref{prop:nu.transport} relies on the next two propositions. We refer the reader to the end of this subsection for the conclusion of the proof of Proposition~\ref{prop:nu.transport}.
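Before stating these, let us recall the mechanism by which the null dust measures enter; see Proposition~\ref{prop:nu.convergence} for the precise formulation. Schematically, $\mathrm{d}\nu_u$ captures the defect in the convergence of $\Omega_{n_k}^2|\hat{\chi}_{n_k}|_{\gamma_{n_k}}^2$: along the subsequence $\{n_k\}$ chosen at the end of Section~\ref{sec:existence}, for every $u$ and every continuous function $\psi$,
\begin{equation*}
\int_0^{\underline{u}_*} \int_{S_{u,\underline{u}}} \psi\, \Omega_{n_k}^2 |\hat{\chi}_{n_k}|_{\gamma_{n_k}}^2 \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} \underline{u} \to \int_0^{\underline{u}_*} \int_{S_{u,\underline{u}}} \psi\, \Omega_{\infty}^2 |\hat{\chi}_{\infty}|_{\gamma_{\infty}}^2 \,\mathrm{dA}_{\gamma_{\infty}}\,\mathrm{d} \underline{u} + \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} \psi \,\mathrm{d} \nu_u,
\end{equation*}
and similarly for $\mathrm{d}\underline{\nu}_{\underline{u}}$ with $\hat{\underline{\chi}}$ in place of $\hat{\chi}$. It is precisely this convergence which will be used to pass to the limit in the identity \eqref{eq:main.transport.id} below.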
\begin{proposition} The following identity holds for $(\mathcal M, g_n)$ for all $n\in \mathbb N$ and for $(\mathcal M, g_\infty)$, for every $C^1$ function $\psi$: \begin{equation}\label{eq:main.transport.id} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \psi \Omega^2 |\hat{\chi}|_{\gamma}^2 \,\mathrm{dA}_{\gamma}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \psi \Omega^2 |\hat{\chi}|_{\gamma}^2 \,\mathrm{dA}_{\gamma}\,\mathrm{d} \underline{u} \\ =&\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \hat{\chi}\cdot (2\slashed{\nabla}\widehat{\otimes} \eta - \slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} + 2\eta\widehat{\otimes} \eta) \Omega^3 \,\mathrm{dA}_{\gamma}\, \mathrm{d} u\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*} \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} (\frac{\partial\psi}{\partial u} + \slashed{\nabla}_b\psi) (\Omega^2 |\hat{\chi}|_{\gamma}^2) \,\mathrm{dA}_\gamma\,\mathrm{d} u\,\mathrm{d}\underline{u}. \end{split} \end{equation} Similarly, the following holds for $(\mathcal M, g_n)$ for all $n\in \mathbb N$ and for $(\mathcal M, g_\infty)$: \begin{equation}\label{eq:main.transport.id.b} \begin{split} &\: \int_0^{u_*} \int_{S_{u,\underline{u}_2}} \psi \Omega^2 |\hat{\underline{\chi}}|_{\gamma}^2 \,\mathrm{dA}_{\gamma}\, \mathrm{d} u - \int_0^{u_*} \int_{S_{u,\underline{u}_1}} \psi \Omega^2 |\hat{\underline{\chi}}|_{\gamma}^2 \,\mathrm{dA}_{\gamma}\,\mathrm{d} u \\ =&\: \int_0^{u_*} \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}}} \hat{\underline{\chi}}\cdot (2\slashed{\nabla}\widehat{\otimes} \underline{\eta} - \slashed{\mathrm{tr}}\underline{\chi} \hat{\chi} + 2\underline{\eta}\widehat{\otimes} \underline{\eta}) \Omega^3 \,\mathrm{dA}_{\gamma}\, \mathrm{d} \underline{u} \, \mathrm{d} u + \int_0^{u_*} \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}}} \frac{\partial\psi}{\partial \underline{u}} (\Omega^2 |\hat{\underline{\chi}}|_{\gamma}^2) \,\mathrm{dA}_\gamma \, \mathrm{d} \underline{u} \, \mathrm{d} u. \end{split} \end{equation} \end{proposition} \begin{proof} We will only prove \eqref{eq:main.transport.id}; the proof of \eqref{eq:main.transport.id.b} is slightly simpler. Note that by Theorem~\ref{thm:ext.est} and the results in Section~\ref{sec:existence}, $(\mathcal M, g_n)$ and $(\mathcal M, g_\infty)$ are angularly regular. Moreover, Propositions~\ref{prop:null.structure} and \ref{prop:Ricci.harder} show that in these spacetimes, \begin{equation}\label{eq:nab3chih.in.proof} \slashed{\nabla}_3\hat{\chi}+\frac 1 2 \slashed{\mathrm{tr}}\underline{\chi} \hat{\chi} =\slashed{\nabla}\widehat{\otimes} \eta+2\underline{\omega} \hat{\chi}-\frac 12 \slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} +\eta\widehat{\otimes} \eta \end{equation} is satisfied in the weak integrated sense of Definition~\ref{def:weaker.transport}. It therefore suffices to show that if the equation \eqref{eq:nab3chih.in.proof} is satisfied in the weak integrated sense of Definition~\ref{def:weaker.transport} in an angularly regular spacetime, then in fact \eqref{eq:main.transport.id} holds.
According to Definition~\ref{def:weaker.transport}, that \eqref{eq:nab3chih.in.proof} is satisfied in the weak integrated sense means \begin{equation}\label{eq:weak.chih} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \langle\varphi, \hat{\chi}\rangle \Omega \,\mathrm{dA}_{\gamma}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \langle\varphi, \hat{\chi} \rangle \Omega \,\mathrm{dA}_{\gamma}\,\mathrm{d} \underline{u} \\ =&\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \langle \varphi, (\slashed{\nabla}\widehat{\otimes} \eta + \frac 12 \slashed{\mathrm{tr}}\underline{\chi} \hat{\chi} -\frac 12 \slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} +\eta\widehat{\otimes} \eta)\rangle \Omega^2 \,\mathrm{dA}_{\gamma}\, \mathrm{d} u\,\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*} \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \langle \slashed{\nabla}_3\varphi, \hat{\chi} \rangle \Omega^2 \,\mathrm{dA}_\gamma\,\mathrm{d} u\,\mathrm{d}\underline{u} \end{split} \end{equation} for every smooth and compactly supported $S$-tangent $2$-tensor $\varphi^{AB}$. By angular regularity and H\"older's inequality, one verifies that $$\|\slashed{\mathrm{tr}}\underline{\chi} \hat{\chi}\|_{L^2_u L^2_{\underline{u}} L^2(S)} + \| \slashed{\nabla}\widehat{\otimes} \eta - \frac 12 \slashed{\mathrm{tr}}\chi \hat{\underline{\chi}} +\eta\widehat{\otimes} \eta\|_{L^2_u L^2_{\underline{u}} L^2(S)} <+\infty,\quad \|\hat{\chi}\|_{L^2_{\underline{u}} L^\infty_u L^2(S)}<+\infty.$$ It therefore follows from a density argument that \eqref{eq:weak.chih} holds for all $\varphi$ such that $\varphi \in C^0_u L^2_{\underline{u}} L^2(S)$ and $\slashed{\nabla}_3\varphi\in L^2_{\underline{u}} L^1_u L^2(S)$. In particular, we can choose $\varphi^{AB} = \psi\Omega \hat{\chi}^{AB}$, where $\psi$ is a $C^1$ function, to obtain \eqref{eq:main.transport.id}. \qedhere \end{proof} \begin{proposition}\label{prop:terms.in.nu.eqn} Taking the subsequence $n_k$ as at the end of Section~\ref{sec:existence}, the following terms in \eqref{eq:main.transport.id} obey \begin{equation}\label{eq:terms.in meas.prop} \begin{split} &\: \int_{[0,u_*]\times [0,\underline{u}_*]\times\mathbb S^2} \psi (\underbrace{2 \hat{\chi}_{n_k}\cdot_{n_k} \slashed{\nabla}_{n_k} \widehat{\otimes} \eta_{n_k}}_{=:\mathrm{I}} \underbrace{- \slashed{\mathrm{tr}}\chi_{n_k} \hat{\chi}_{n_k} \cdot_{n_k} \hat{\underline{\chi}}_{n_k}}_{=:\mathrm{II}} + \underbrace{2 \hat{\chi}_{n_k}\cdot_{n_k} (\eta_{n_k}\widehat{\otimes} \eta_{n_k})}_{=:\mathrm{III}} ) \Omega_{n_k}^{3} \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u\,\mathrm{d}\underline{u} \\ \to &\: \int_{[0,u_*]\times [0,\underline{u}_*]\times\mathbb S^2} \psi (2 \hat{\chi}_\infty \cdot_\infty \slashed{\nabla}_\infty \widehat{\otimes} \eta_\infty - \slashed{\mathrm{tr}}\chi_\infty \hat{\chi}_\infty \cdot_\infty \hat{\underline{\chi}}_\infty + 2 \hat{\chi}_\infty\cdot_\infty (\eta_\infty\widehat{\otimes} \eta_\infty) ) \Omega_\infty^{3} \,\mathrm{dA}_{\gamma_\infty}\,\mathrm{d} u\,\mathrm{d}\underline{u}. \end{split} \end{equation} A similar convergence statement holds for the corresponding terms in \eqref{eq:main.transport.id.b}. \end{proposition} \begin{proof} We will only prove \eqref{eq:terms.in meas.prop}; the terms in \eqref{eq:main.transport.id.b} can be treated in the same way. We will use the following two facts.
First of all, $g_{n_k}\to g_\infty$ uniformly, and since the integrand in every term is at least in $L^1_u L^1_{\underline{u}} L^1(S_{u,\underline{u}})$, we can easily pass to the limit $k\to +\infty$ in all occurrences of $\cdot_{n_k}$, $\Omega_{n_k}$ and $\mathrm{dA}_{\gamma_{n_k}}$. Second, we use the standard fact that whenever there is a product of two quantities, say $f_{n_k}$ and $h_{n_k}$, such that $f_{n_k}$ converges weakly to $f_\infty$ in $L^2_u L^2_{\underline{u}} L^2(S)$ and $h_{n_k}$ converges to $h_\infty$ in the $L^2_u L^2_{\underline{u}} L^2(S)$ norm, then $f_{n_k} h_{n_k}$ converges to $f_\infty h_\infty$ in the sense of distributions. \pfstep{Step~1: Term $\mathrm{I}$} Note that $\slashed{\nabla}_{n_k} \widehat{\otimes} \eta_{n_k}$ converges in the $C^0_u C^0_{\underline{u}} L^4(S_{u,\underline{u}})$ norm (by Proposition~\ref{prop:eta.etab.limit}), and thus in particular converges in the $L^2_u L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ norm. On the other hand, by Proposition~\ref{prop:chi.limit}, $\hat{\chi}_{n_k}$ converges weakly in $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ for all $u$, and thus in particular converges weakly in $L^2_u L^2_{\underline{u}} L^2(S_{u,\underline{u}})$. By the remark at the beginning of the proof, we deduce that $\mathrm{I}$ converges to the limit indicated in \eqref{eq:terms.in meas.prop} as $k\to +\infty$. \pfstep{Step~2: Term $\mathrm{II}$} For this term we use compensated compactness. Indeed, by Proposition~\ref{prop:chihchibh}, $\hat{\chi}_{n_k} \cdot_{n_k} \hat{\underline{\chi}}_{n_k} \rightharpoonup \hat{\chi}_\infty \cdot_\infty \hat{\underline{\chi}}_\infty$ weakly in $L^2_u L^2_{\underline{u}} L^2(S)$. On the other hand, Proposition~\ref{prop:trch.imp} in particular implies that $\slashed{\mathrm{tr}}\chi_{n_k}\to \slashed{\mathrm{tr}}\chi_\infty$ in the $L^2_u L^2_{\underline{u}} L^2(S)$ norm. As in Step~1, we then conclude using the remark at the beginning of the proof. \pfstep{Step~3: Term $\mathrm{III}$} By Proposition~\ref{prop:eta.etab.limit}, $\eta_{n_k}\widehat{\otimes} \eta_{n_k}$ converges in the $C^0_u C^0_{\underline{u}} C^0(S_{u,\underline{u}})$ norm, and thus also converges in the $L^2_u L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ norm. On the other hand, as argued in Step~1, $\hat{\chi}_{n_k}$ converges weakly in $L^2_u L^2_{\underline{u}} L^2(S_{u,\underline{u}})$. As before, we then conclude using the remark at the beginning of the proof. \qedhere \end{proof} \begin{proof}[Proof of Proposition~\ref{prop:nu.transport}] We start with the identity \eqref{eq:main.transport.id} for $(\mathcal M, g_{n_k})$.
Now take $k\to+\infty$ and use Propositions~\ref{prop:nu.convergence} and \ref{prop:terms.in.nu.eqn} to obtain \begin{equation}\label{eq:main.transport.id.2} \begin{split} &\: \int_0^{\underline{u}_*} \int_{S_{u_2,\underline{u}}} \psi \Omega_\infty^2 |\hat{\chi}_\infty|_{\gamma_\infty}^2 \,\mathrm{dA}_{\gamma_\infty}\, \mathrm{d} \underline{u} - \int_0^{\underline{u}_*} \int_{S_{u_1,\underline{u}}} \psi \Omega_\infty^2 |\hat{\chi}_\infty|_{\gamma_\infty}^2 \,\mathrm{dA}_{\gamma_\infty}\,\mathrm{d} \underline{u} \\ &\: + \int_{\{u_2\}\times (0,\underline{u}_*) \times \mathbb S^2} \psi \,\mathrm{d}\nu_{u_2} - \int_{\{u_1\}\times (0,\underline{u}_*)\times \mathbb S^2} \psi \,\mathrm{d}\nu_{u_1} \\ =&\: \int_0^{\underline{u}_*}\int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \hat{\chi}_\infty\cdot_\infty (2\slashed{\nabla}_\infty\widehat{\otimes}_\infty \eta_\infty - \slashed{\mathrm{tr}}\chi_\infty \hat{\underline{\chi}}_\infty + 2\eta_\infty\widehat{\otimes}_\infty \eta_\infty) \Omega_\infty^3 \,\mathrm{dA}_{\gamma_\infty}\, \mathrm{d} u\,\, \mathrm{d} \underline{u} \\ &\: + \int_0^{\underline{u}_*} \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} (\frac{\partial\psi}{\partial u} + \slashed{\nabla}_{b_\infty}\psi) (\Omega_\infty^2 |\hat{\chi}_\infty|_{\gamma_\infty}^2) \,\mathrm{dA}_{\gamma_\infty} \,\mathrm{d} u\,\mathrm{d}\underline{u} \\ &\: + \int_{u_1}^{u_2} \int_{\{u\}\times (0,\underline{u}_*)\times \mathbb S^2} (\frac{\partial\psi}{\partial u} + \slashed{\nabla}_{b_\infty}\psi) \,\mathrm{d}\nu_{u}\,\mathrm{d} u. \end{split} \end{equation} Finally, subtract from \eqref{eq:main.transport.id.2} the equation \eqref{eq:main.transport.id} for $(\mathcal M, g_\infty)$. We then obtain the desired transport equation for $\mathrm{d}\nu_u$. The transport equation for $\mathrm{d}\underline{\nu}_{\underline{u}}$ can be derived analogously. \qedhere \end{proof} \subsection{Higher order transport equations for the Ricci coefficients}\label{sec:higher.Ricci} We next derive the equations \eqref{eq:mu.0}, \eqref{eq:mub.0}, \eqref{eq:Xtrch.0} and \eqref{eq:Xtrchb.0} for the derivatives of the Ricci coefficients. \begin{proposition}\label{prop:eq.div.eta} $\mu_\infty$ and $\underline{\mu}_\infty$ obey \eqref{eq:mu.0} and \eqref{eq:mub.0} respectively in the integrated sense (Definition~\ref{def:weak.transport}). \end{proposition} \begin{proof} By Proposition~\ref{prop:mu.background}, we know that \eqref{eq:mu.0} and \eqref{eq:mub.0} are satisfied for $(\mathcal M, g_{n_k})$ for all $k$. It therefore suffices to check that we can take the limit $k\to +\infty$. Except for the presence of some cubic terms, this is similar to the proof of Proposition~\ref{prop:Ricci.easy}.
Now, \eqref{eq:mu.0} and \eqref{eq:mub.0} can be schematically written as \begin{align} \slashed{\nabla}_4 \mu= &\: \slashed{\nabla}\chi \star (\eta,\underline{\eta}) + \slashed{\nabla} (\eta,\underline{\eta}) \star \chi + K\star \chi + \chi \star (\eta,\underline{\eta}) \star (\eta,\underline{\eta}), \label{eq:mu.1}\\ \slashed{\nabla}_3 \underline{\mu} = &\: \slashed{\nabla}\underline{\chi} \star (\eta,\underline{\eta}) + \slashed{\nabla} (\eta,\underline{\eta}) \star \underline{\chi} + K \star \underline{\chi} + \underline{\chi} \star (\eta,\underline{\eta}) \star (\eta,\underline{\eta}),\label{eq:mub.1} \end{align} where $\slashed{\nabla}\chi\star(\eta,\underline{\eta})$ denotes a linear combination of contractions (with respect to $\gamma$) of $\slashed{\nabla}\hat{\chi} \eta$, $\slashed{\nabla}\hat{\chi} \underline{\eta}$, $\slashed{\nabla}\slashed{\mathrm{tr}}\chi \eta$ and $\slashed{\nabla}\slashed{\mathrm{tr}}\chi\underline{\eta}$; and similarly for the other terms. We now consider \eqref{eq:mu.1}; the treatment of \eqref{eq:mub.1} is similar. For $i=1,2$, we have $$\int_{S_{u,\underline{u}_i}} \langle\varphi, \mu_{n_k} \rangle \Omega_{n_k} \,\mathrm{d} A_{\gamma_{n_k}} \to \int_{S_{u,\underline{u}_i}} \langle\varphi, \mu_{\infty} \rangle \Omega_{\infty} \,\mathrm{d} A_{\gamma_{\infty}}$$ due to \eqref{eq:mu.def} and Propositions~\ref{prop:metric.limit}, \ref{prop:Christoffel} and \ref{prop:eta.etab.limit}. It thus remains to pass to the $k\to +\infty$ limit for the terms which are integrated in $\underline{u}$: \begin{enumerate} \item Term with $\slashed{\nabla}_4\varphi$. In a completely analogous manner to the proof of \eqref{eq:Ricci4A.6}, we have $$\int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{n_k} \varphi,\eta_{n_k} \rangle \Omega_{n_k}^2 \,\mathrm{d} A_{\gamma_{n_k}}\, \mathrm{d} \underline{u}'\to \int_{\underline{u}_1}^{\underline{u}_2} \int_{S_{u,\underline{u}'}} \langle (\slashed{\nabla}_4)_{\infty}\varphi,\eta_{\infty} \rangle \Omega_{\infty}^2 \,\mathrm{d} A_{\gamma_{\infty}}\, \mathrm{d} \underline{u}'.$$ \item Quadratic terms\footnote{Note that in addition to the inhomogeneous terms in \eqref{eq:mu.1}, the quadratic terms include the terms $\slashed{\mathrm{tr}}\chi\mu$ and $\omega\mu$ from Definition~\ref{def:weak.transport}.}: $\eta \slashed{\nabla} \chi$, $\underline{\eta} \slashed{\nabla} \chi$, $\chi \slashed{\nabla}\eta$, $\chi\slashed{\nabla} \underline{\eta}$, $\chi K$, $\omega \slashed{\nabla} \eta$, $\omega K$. By Propositions~\ref{prop:chi.limit} and \ref{prop:trch.weak.limit}, all of $\hat{\chi}_{n_k}$, $\slashed{\mathrm{tr}}\chi_{n_k}$, $\omega_{n_k}$, $\slashed{\nabla}_{n_k}\hat{\chi}_{n_k}$, $\slashed{\nabla}_{n_k}\slashed{\mathrm{tr}}\chi_{n_k}$ converge weakly to their (weak) limits in $L^2_{\underline{u}}L^2(S_{u,\underline{u}})$ for all $u$. By Propositions~\ref{prop:Christoffel} and \ref{prop:eta.etab.limit}, $\eta_{n_k}$, $\underline{\eta}_{n_k}$, $\slashed{\nabla}\eta_{n_k}$, $\slashed{\nabla}\underline{\eta}_{n_k}$ all converge to their limits in the $L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}})$ norm. Hence, all the quadratic terms converge to the appropriate limits in the sense of distributions. \item Cubic terms: $\eta\star \eta\star \chi$, $\eta\star \underline{\eta}\star \chi$, $\underline{\eta}\star \underline{\eta}\star \chi$. By Proposition~\ref{prop:eta.etab.limit}, $\eta_{n_k}$ and $\underline{\eta}_{n_k}$ have pointwise uniform limits.
By Proposition~\ref{prop:chi.limit}, $\chi_{n_k}$ (i.e.~both $\slashed{\mathrm{tr}}\chi_{n_k}$ and $\hat{\chi}_{n_k}$) has a weak $L^2_{\underline{u}} L^2(S_{u,\underline{u}})$ limit for every $u$. It therefore follows that all the cubic terms have the desired limits. \qedhere \end{enumerate} \end{proof} \begin{proposition}\label{prop:trch.top.order} \begin{enumerate} \item The equation \eqref{eq:Xtrch.0} is satisfied for all $C^1$ $S$-tangent vector fields $\slashed X$, for all $u\in [0,u_*]$ and all $0\leq \underline{u}_1<\underline{u}_2 \leq \underline{u}_*$. \item The equation \eqref{eq:Xtrchb.0} is satisfied for all $C^1$ $S$-tangent vector fields $\slashed X$, for all $\underline{u}\in [0,\underline{u}_*]$ and all $0\leq u_1<u_2 \leq u_*$. \end{enumerate} \end{proposition} \begin{proof} In view of their similarities, we will only prove (the slightly harder) \eqref{eq:Xtrchb.0}. By \eqref{eq:Xtrchb.0.vac} in Proposition~\ref{prop:Xtrch.vac}, we have \begin{equation}\label{eq:Xtrchb} (\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{n_k}}) \slashed X(\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) +\frac 12 \slashed X(\slashed{\mathrm{tr}}\underline{\chi}_{n_k})^2= -\slashed X|\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2 + [\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{n_k}}, \slashed X](\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}). \end{equation} Fix $0\leq u_1<u_2 \leq u_*$ and let $\xi_\ell$ be a cutoff as in \eqref{def:xi} when $\ell^{-1} \leq \frac{u_2-u_1}{2}$. Multiplying \eqref{eq:Xtrchb} by $\xi_\ell$, integrating with respect to $\Omega_{n_k}^2 \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u$, and integrating by parts, we obtain that for every $\underline{u}\in [0,\underline{u}_*]$: \begin{equation*} \begin{split} &\: \ell \int_{u_2-\ell^{-1}}^{u_2} \int_{S_{u,\underline{u}}} \Omega_{n_k}^2 \slashed X (\Omega_{n_k}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}) \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u - \ell \int^{u_1+\ell^{-1}}_{u_1} \int_{S_{u,\underline{u}}} \Omega_{n_k}^2 \slashed X (\Omega_{n_k}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}) \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \xi_\ell ([\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{n_k}}, \slashed X](\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) -4\underline{\omega}_{n_k} \Omega_{n_k}\slashed X(\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) - \slashed X|\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2)\Omega_{n_k}^2\,\mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} u.
\end{split} \end{equation*} In order to be able to pass to the $k\to +\infty$ limit, we integrate the last term by parts to obtain \begin{equation}\label{eq:Xtrchb.after.ibp} \begin{split} &\: \ell \int_{u_2-\ell^{-1}}^{u_2} \int_{S_{u,\underline{u}}} \Omega_{n_k}^2 \slashed X (\Omega_{n_k}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}) \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u - \ell \int^{u_1+\ell^{-1}}_{u_1} \int_{S_{u,\underline{u}}} \Omega_{n_k}^2 \slashed X (\Omega_{n_k}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{n_k}) \,\mathrm{dA}_{\gamma_{n_k}}\,\mathrm{d} u \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \xi_\ell ([\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{n_k}}, \slashed X](\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) -4\underline{\omega}_{n_k} \Omega_{n_k}\slashed X(\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) )\Omega_{n_k}^2\,\mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} u \\ &\: + \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} \xi_\ell (2\slashed X(\log\Omega_{n_k}) + \div_{n_k} \slashed X)\Omega_{n_k}^2|\hat{\underline{\chi}}_{n_k}|_{\gamma_{n_k}}^2 \,\mathrm{dA}_{\gamma_{n_k}} \,\mathrm{d} u. \end{split} \end{equation} We now argue in a similar manner to the proof of Proposition~\ref{prop:trch.limit.eqn}. First we pass to the $k\to +\infty$ limit using Propositions~\ref{prop:gamma}, \ref{prop:Christoffel}, \ref{prop:metric.limit}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit}, \ref{prop:trch.imp} and \ref{prop:nu.convergence}. Note that except for the $|\hat{\underline{\chi}}_{n_k}|^2_{\gamma_{n_k}}$ terms in the last line of \eqref{eq:Xtrchb.after.ibp}, all the other terms have at most one factor which does not admit a strong limit, so that we can replace $n_k$ by $\infty$ in the limit. In particular, because $[\frac{\partial}{\partial u} + b_{n_k}, \slashed X]$ is an $S$-tangent vector field, $[\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{n_k}}, \slashed X](\Omega_{n_k}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{n_k}) \to [\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{\infty}}, \slashed X](\Omega_{\infty}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{\infty})$ in the $L^2_{u} L^2(S_{u,\underline{u}})$ norm for every $\underline{u} \in [0,\underline{u}_*]$. For the terms on the last line of \eqref{eq:Xtrchb.after.ibp}, we obtain an extra term involving $\mathrm{d}\underline{\nu}_{\underline{u}}$ in the limit.
After taking the $k\to +\infty$ limit, we take $\ell\to +\infty$ (using Lemma~\ref{lem:trace} on the LHS and the dominated convergence theorem on the RHS) to obtain \begin{equation*} \begin{split} &\: \int_{S_{u_2,\underline{u}}} \Omega_{\infty}^2 (\slashed X (\Omega_{\infty}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{\infty}))^- \,\mathrm{dA}_{\gamma_{\infty}} - \int_{S_{u_1,\underline{u}}} \Omega_{\infty}^2 (\slashed X (\Omega_{\infty}^{-1} \slashed{\mathrm{tr}}\underline{\chi}_{\infty}))^+ \,\mathrm{dA}_{\gamma_{\infty}} \\ = &\: \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} ([\frac{\partial}{\partial u} + \slashed{\nabla}_{b_{\infty}}, \slashed X](\Omega_{\infty}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{\infty}) -4\underline{\omega}_{\infty} \Omega_{\infty}\slashed X(\Omega_{\infty}^{-1}\slashed{\mathrm{tr}}\underline{\chi}_{\infty}) )\Omega_{\infty}^2\,\mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} u \\ &\: + \int_{u_1}^{u_2} \int_{S_{u,\underline{u}}} (2\slashed X(\log\Omega_{\infty}) + \div_\infty \slashed X)\Omega_{\infty}^2|\hat{\underline{\chi}}_{\infty}|_{\gamma_{\infty}}^2 \,\mathrm{dA}_{\gamma_{\infty}} \,\mathrm{d} u \\ &\: + \int_{(u_1,u_2)\times \{\underline{u}\} \times \mathbb S^2} (2\slashed X(\log\Omega_{\infty}) + \div_{\infty} \slashed X)\,\mathrm{d}\underline{\nu}_{\underline{u}}, \end{split} \end{equation*} as desired. \qedhere \end{proof} \subsection{Renormalized Bianchi equations}\label{sec:renorm.Bianchi} Recall the definition of the curvature components in Definition~\ref{def:curv}. The renormalized Bianchi equations of Proposition~\ref{prop:Bianchi} are satisfied by the limit spacetime in an appropriate sense: \begin{proposition}\label{prop:Bianichi.for.limit} In the limiting spacetime $(\mathcal M, g_\infty)$, the renormalized Bianchi equations are satisfied in the sense of Definition~\ref{def:Bianchi.integrated}. \end{proposition} \begin{proof} The proof is similar to that of various previous propositions; we will only indicate the main points. As in Propositions~\ref{prop:Ricci.easy}, \ref{prop:Ricci.harder} and \ref{prop:eq.div.eta}, the main goal will be to make use of the fact that all the equations are satisfied for $(\mathcal M, g_{n_k})$ according to Proposition~\ref{prop:Bianchi} and then take limits. Now taking the limit of (the integrated form of) \eqref{eq:null.Bianchi.2}--\eqref{eq:null.Bianchi.5} can be done in exactly the same way as in the proof of Proposition~\ref{prop:eq.div.eta}. Indeed, it can be easily checked that except for the top derivative terms $\div \beta$, $\div\underline{\beta}$, etc., \eqref{eq:null.Bianchi.2}--\eqref{eq:null.Bianchi.5} have schematically the same type of terms as \eqref{eq:mu.0} and \eqref{eq:mub.0}. The top derivative terms can be handled using Propositions~\ref{prop:chi.limit} and \ref{prop:trch.weak.limit} (which guarantee the weak convergence of up to second derivatives of $\hat{\chi}$, $\slashed{\mathrm{tr}}\chi$, $\hat{\underline{\chi}}$ and $\slashed{\mathrm{tr}}\underline{\chi}$) together with Propositions~\ref{prop:gamma}, \ref{prop:metric.limit} and \ref{prop:eta.etab.limit}.
Finally, in order to take the limit of (the weak integrated form of) \eqref{eq:null.Bianchi.1} and \eqref{eq:null.Bianchi.6}, we need in addition to use the compensated compactness result in Proposition~\ref{prop:chihchibh} to handle the terms $\chi\slashed{\nabla}\underline{\chi}$, $\hat{\underline{\chi}}\slashed{\nabla}\chi$, $\eta\chi\underline{\chi}$, etc.~(cf.~the difference between Propositions~\ref{prop:Ricci.easy} and \ref{prop:Ricci.harder}). We omit the details. \qedhere \end{proof} \subsection{Proof of Theorem~\ref{main.thm}}\label{sec:proof.of.main.thm} We now have all the ingredients to complete the proof of Theorem~\ref{main.thm}: \begin{proof}[Proof of Theorem~\ref{main.thm}] \begin{enumerate} \item This assertion follows directly from Theorem~\ref{ext.thm}. \item The existence of a $C^0$ limit of the metric components in double null coordinates is a consequence of Propositions~\ref{prop:gamma} and \ref{prop:metric.limit}. The same propositions also give the weak $L^2$ convergence statements for the first derivatives of the metric components. The weak $L^2$ convergence\footnote{Of course we have in fact proven much stronger convergence statements.} of the Ricci coefficients follows as a consequence of Propositions~\ref{prop:eta.etab.limit}, \ref{prop:chi.limit} and \ref{prop:trch.weak.limit}. Finally, Proposition~\ref{prop:Ricci.is.metric.derivative} shows that the weak limits of the Ricci coefficients coincide with the Ricci coefficients associated with the limit metric. \item The weak-* limits \eqref{eq:dnu.def.thm} and \eqref{eq:dnub.def.thm} exist as a consequence of Proposition~\ref{prop:nu.convergence}. The angular regularity of $(\mathcal M,\,g_\infty)$ (see Definition~\ref{double.null.def.2}) follows from Propositions~\ref{prop:gamma}, \ref{prop:isoperimetric}, \ref{prop:Christoffel}, \ref{prop:K.imp}, \ref{prop:metric.limit}, \ref{prop:eta.etab.limit}, \ref{prop:eta.etab.imp}, \ref{prop:chi.limit}, \ref{prop:trch.weak.limit} and \ref{prop:trch.imp}. The angular regularity of $(\{\mathrm{d}\nu\}_{u\in [0,u_*]},\,\{\mathrm{d}\underline{\nu}\}_{\underline{u} \in [0,\underline{u}_*]})$ (see Definition~\ref{def:ang.reg.null.dust}) follows from Propositions~\ref{prop:nu.convergence} and \ref{prop:nu.add.reg}. Finally, that $(\mathcal M,\,g_\infty,\,\{\mathrm{d}\nu\}_{u\in [0,u_*]},\,\{\mathrm{d}\underline{\nu}\}_{\underline{u} \in [0,\underline{u}_*]})$ is an angularly regular weak solution to the Einstein--null dust system (see Definition~\ref{def:weak.sol.ang.reg}) follows from Propositions~\ref{prop:Ricci.easy}, \ref{prop:Ricci.harder}, \ref{prop:trch.limit.eqn} and \ref{prop:nu.transport}. \item The renormalized Bianchi equations follow from Proposition~\ref{prop:Bianichi.for.limit}. \item The auxiliary equations hold because of Propositions~\ref{prop:equations.for.nabla.g}, \ref{prop:eq.div.eta} and \ref{prop:trch.top.order}. \qedhere \end{enumerate} \end{proof} \section{Uniqueness of the limit}\label{sec:proof.uniqueness} In this section, we prove the uniqueness theorem (Theorem~\ref{thm:uniqueness}). \textbf{For the whole section, we work under the assumptions of Theorem~\ref{thm:uniqueness}}.
In particular, we are given two angularly regular weak solutions $(\mathcal M, g^{(1)}, \{\mathrm{d}\nu^{(1)}_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}^{(1)}_{\underline{u}}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M, g^{(2)}, \{\mathrm{d}\nu^{(2)}_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}^{(2)}_{\underline{u}}\}_{\underline{u}\in [0,\underline{u}_*]})$ to the Einstein--null dust system. We will first define in \textbf{Section~\ref{sec:def.dist}} a distance function (see \eqref{def:dist}) that controls the difference of the two solutions. The remaining subsections are devoted to controlling this distance function. \begin{itemize} \item In \textbf{Section~\ref{sec:uniqueness.aux.est}}, we prove some easy preliminary estimates. \item In \textbf{Section~\ref{sec:diff.transport}}, we prove general estimates for differences of transport equations, and apply them to control the differences of metric components and the Ricci coefficients $\eta$, $\underline{\eta}$, $\hat{\chi}$, $\hat{\underline{\chi}}$, $\omega$ and $\underline{\omega}$. The transport equations for $\slashed{\mathrm{tr}}\chi$ and $\slashed{\mathrm{tr}}\underline{\chi}$ (and their angular derivatives) will be treated separately in \textbf{Section~\ref{sec:diff.trch}}, because they involve the measure-valued $\mathrm{d} \nu_u$ and $\mathrm{d}\underline{\nu}_{\underline{u}}$ on the RHSs. \item We then treat the top-order estimates. This is dealt with by a combination of energy estimates for the renormalized curvature components (\textbf{Section~\ref{sec:energy.est}}) and elliptic estimates to handle the top-order derivatives of the Ricci coefficients (\textbf{Section~\ref{sec:elliptic.est}}). \item In \textbf{Section~\ref{sec:diff.null.dust}}, we estimate the difference of the (measure-valued) null dust. \end{itemize} Putting all these together in \textbf{Section~\ref{sec:uniqueness.everything}}, we obtain Theorem~\ref{thm:uniqueness}. \subsection{Distance function}\label{sec:def.dist} To proceed, we first introduce a reduction so that the analysis is carried out in a small region. We partition the set $[0,u_*]\times [0,\underline{u}_*]$ into $N^2$ rectangles; namely we write $[0,u_*]\times [0,\underline{u}_*] = \cup_{i=0}^{N-1} \cup_{j=0}^{N-1} [u_i, u_{i+1}]\times [\underline{u}_j,\underline{u}_{j+1}]$ where $u_i = \frac{i\times u_*}{N}$ and $\underline{u}_j = \frac{j\times \underline{u}_*}{N}$. It suffices to show that for $N$ sufficiently large (depending on the size of $(\mathcal M, g^{(1)}, \{\mathrm{d}\nu^{(1)}_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}^{(1)}_{\underline{u}}\}_{\underline{u}\in [0,\underline{u}_*]})$ and $(\mathcal M, g^{(2)}, \{\mathrm{d}\nu^{(2)}_u\}_{u\in [0,u_*]}, \{\mathrm{d}\underline{\nu}^{(2)}_{\underline{u}}\}_{\underline{u}\in [0,\underline{u}_*]})$), if the two sets of data agree on $\{ u_i \} \times [ \underline{u}_j, \underline{u}_{j+1} ] \times \mathbb S^2$ and $[u_i, u_{i+1}] \times \{\underline{u}_j\}\times \mathbb S^2$, then in fact the two solutions agree in $[u_i, u_{i+1}] \times [ \underline{u}_j, \underline{u}_{j+1} ] \times \mathbb S^2$. (Uniqueness in the full region then follows by a finite induction on $i+j$: agreement of the two solutions on the preceding rectangles implies agreement of the data on the null boundary segments of the subsequent ones.) The parameter $N$ will be chosen later.
\textbf{For the remainder of the section, we fix some $0\leq i,\,j\leq N-1$, and will only concern ourselves with the region $[u_i, u_{i+1}] \times [ \underline{u}_j, \underline{u}_{j+1} ] \times \mathbb S^2$.} In particular, when applying the definitions or equations from the previous sections, we will replace $[0,u_*]$ by $[u_i, u_{i+1}]$ (and respectively $[0,\underline{u}_*]$ by $[\underline{u}_j, \underline{u}_{j+1}]$). In order to define a distance function between the two solutions, we first define a distance between the two measures $\mathrm{d} \nu^{(1)}$ and $\mathrm{d} \nu^{(2)}$: \begin{equation}\label{def:dist.nu} \begin{split} \mathrm{dist}_\nu(\mathrm{d}\nu^{(1)},\mathrm{d}\nu^{(2)}) := &\: \sup_{u'\in [u_i, u_{i+1}]} \sup_{\substack{\varphi(\underline{u},\vartheta)\in C^\infty_c \\ \|\varphi\|_{L^\infty_{\underline{u}} L^2(S_{u',\underline{u}},\gamma^{(1)})}\leq 1} } \left| \int_{H_{u'}} \varphi \,(\mathrm{d} \nu^{(1)}_{u'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \nu^{(2)}_{u'})\right| \\ &\: + \sup_{u'\in [u_i, u_{i+1}]} \sup_{ \substack{ \slashed{X}(\underline{u},\vartheta)\in C^\infty_c \\ \|\slashed X\|_{L^\infty_{\underline{u}} L^2(S_{u',\underline{u}},\gamma^{(1)})}\leq 1} }\left| \int_{H_{u'}} \slashed{\div}^{(1)} \slashed X \,(\mathrm{d} \nu^{(1)}_{u'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \nu^{(2)}_{u'})\right| . \end{split} \end{equation} Here $\varphi$ is a scalar-valued function and $\slashed{X}$ is an $S$-tangent vector field. Similarly, we define $\mathrm{dist}_{\underline{\nu}}(\mathrm{d}\underline{\nu}^{(1)},\mathrm{d}\underline{\nu}^{(2)})$ after flipping all $u$ and $\underline{u}$, i.e. \begin{equation}\label{def:dist.nub} \begin{split} \mathrm{dist}_{\underline{\nu}} (\mathrm{d}\underline{\nu}^{(1)},\mathrm{d}\underline{\nu}^{(2)}) := &\: \sup_{\underline{u}'\in [\underline{u}_j, \underline{u}_{j+1}]} \sup_{ \substack{ \varphi(u,\vartheta)\in C^\infty_c \\ \|\varphi\|_{L^\infty_u L^2(S_{u,\underline{u}'}, \gamma^{(1)})} \leq 1}} \left| \int_{\underline{H}_{\underline{u}'}} \varphi \,(\mathrm{d} \underline{\nu}^{(1)}_{\underline{u}'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \underline{\nu}^{(2)}_{\underline{u}'})\right| \\ &\: + \sup_{\underline{u}'\in [\underline{u}_j, \underline{u}_{j+1}]} \sup_{\substack{ \slashed{X}(u,\vartheta)\in C^\infty_c \\ \|\slashed X\|_{L^\infty_u L^2(S_{u,\underline{u}'},\gamma^{(1)})}\leq 1 }} \left| \int_{\underline{H}_{\underline{u}'}} \slashed{\div}^{(1)} \slashed X \,(\mathrm{d} \underline{\nu}^{(1)}_{\underline{u}'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \underline{\nu}^{(2)}_{\underline{u}'})\right| .
\end{split} \end{equation} We then define a distance function between two solutions to the Einstein--null dust system: \begin{equation}\label{def:dist} \begin{split} \mathrm{dist}:= &\: \sum_{\slashed g \in \{\gamma,\,\log\det\gamma,\,b,\,\log\Omega\}} \|\slashed g^{(1)} - \slashed g^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \sum_{\psi \in \{ \eta, \underline{\eta}\} } \|\psi^{(1)} - \psi^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \sum_{\psi \in \{ \eta, \underline{\eta}\} } \|\psi^{(1)} - \psi^{(2)}\|_{L^\infty_u L^2_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \sum_{\psi \in \{ \eta, \underline{\eta}\} } \|\psi^{(1)} - \psi^{(2)}\|_{L^\infty_{\underline{u}} L^2_u W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \sum_{\psi_H \in \{\slashed{\mathrm{tr}}\chi,\, \hat{\chi},\,\omega\}} \|\psi_H^{(1)} - \psi_H^{(2)} \|_{L^\infty_u L^2_{\underline{u}} W^{1,2} (S_{u,\underline{u}}, \gamma^{(1)})} + \sum_{\psi_{\underline{H}} \in \{\slashed{\mathrm{tr}}\underline{\chi},\,\hat{\underline{\chi}},\,\underline{\omega}\}} \|\psi_{\underline{H}}^{(1)} - \psi_{\underline{H}}^{(2)} \|_{L^\infty_{\underline{u}} L^2_{u} W^{1,2} (S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + \mathrm{dist}_{\nu}(\mathrm{d} \nu^{(1)},\mathrm{d} \nu^{(2)}) + \mathrm{dist}_{\underline{\nu}}(\mathrm{d} \underline{\nu}^{(1)},\mathrm{d} \underline{\nu}^{(2)}). \end{split} \end{equation} In the remaining subsections of this section, we will control each piece in \eqref{def:dist} and prove an estimate $\mathrm{dist} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}$, which forces $\mathrm{dist} = 0$ once $N$ is chosen sufficiently large. We will use the following convention for constants. The angular regularity assumption of Theorem~\ref{thm:uniqueness} (see Definitions~\ref{double.null.def.2} and \ref{def:ang.reg.null.dust}) gives control of the geometric quantities associated to $(\mathcal M, g^{(1)}, \mathrm{d}\nu^{(1)}, \mathrm{d}\underline{\nu}^{(1)})$ and $(\mathcal M, g^{(2)}, \mathrm{d}\nu^{(2)}, \mathrm{d}\underline{\nu}^{(2)})$ (in the full region $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$). \textbf{All implicit constants in $\lesssim$ in this section will depend only on the estimates for the geometric quantities given in Definitions~\ref{double.null.def.2} and \ref{def:ang.reg.null.dust}. Importantly, they are independent of $N$.} (Moreover, when we say, for instance, that we use Definition~\ref{double.null.def.2}, we mean that we use the corresponding quantitative estimates.) \subsection{Some auxiliary estimates}\label{sec:uniqueness.aux.est} \begin{proposition}\label{prop:diff.gamma.comp} For every $u$, $\underline{u}$, and every $S$-tangent tensorfield $\phi$, $$\|\phi\|_{L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|\phi\|_{L^2(S_{u,\underline{u}},\gamma^{(2)})} \lesssim \|\phi\|_{L^2(S_{u,\underline{u}},\gamma^{(1)})}.$$ \end{proposition} \begin{proof} As in the proof of Proposition~\ref{prop:norms.compare}, for $i=1,2$, $L^2(S_{u,\underline{u}},\gamma^{(i)})$ is comparable to $L^2(S_{u,\underline{u}},(\gamma^{(i)})_{0,0})$, where $(\gamma^{(i)})_{0,0}$ is the metric that agrees with $\gamma^{(i)}$ on $S_{0,0}$ and satisfies $\slashed{\mathcal L}_{\frac{\partial}{\partial \underline{u}}} (\gamma^{(i)})_{0,0} = \slashed{\mathcal L}_{\frac{\partial}{\partial u}} (\gamma^{(i)})_{0,0} = 0.$ Therefore, to establish the proposition, it suffices to show that $L^2(S_{0,0},\gamma^{(1)})$ and $L^2(S_{0,0},\gamma^{(2)})$ are comparable, which is obviously the case since by assumption $\gamma^{(1)} \equiv \gamma^{(2)}$ on $S_{0,0}$.
\qedhere \end{proof} \begin{proposition}\label{prop:gamma.inverse.diff} \begin{equation}\label{eq:gamma.inverse.diff.main} \| (\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)} ) } \lesssim \| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)} ) }, \end{equation} and \begin{equation}\label{eq:gamma.det.diff.main} \begin{split} &\: \|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} - 1 \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \|\frac{\sqrt{\det\gamma^{(2)}}}{\sqrt{\det\gamma^{(1)}}} - 1 \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \| \log \det \gamma^{(1)} - \log \det \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})}. \end{split} \end{equation} In particular, by \eqref{def:dist}, we also have $$\mbox{LHS of \eqref{eq:gamma.inverse.diff.main}} + \mbox{LHS of \eqref{eq:gamma.det.diff.main}} \lesssim \mathrm{dist}.$$ \end{proposition} \begin{proof} The estimate \eqref{eq:gamma.inverse.diff.main} is a standard statement regarding the continuity of inverses. We omit the details; see for instance \cite[Proposition~9.2]{DL} for the relevant calculations. To prove \eqref{eq:gamma.det.diff.main}, first note that for every $x\in (0,+\infty)$, we have the calculus inequality $ |x-1| \leq \max\{ x,\, \frac 1x\} |\log x|.$ (This follows from the elementary bound $\log t \geq 1 - t^{-1}$ for $t>0$, applied with $t = x$ when $x\geq 1$ and with $t = x^{-1}$ when $0<x<1$.) It follows (by setting $x = \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}$) that $$|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} -1| \leq \max\{|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}|, \, |\frac{\sqrt{\det\gamma^{(2)}}}{\sqrt{\det\gamma^{(1)}}}| \} |\log \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}|.$$ By Definition~\ref{double.null.def.2}, we have uniform $L^\infty$ bounds for $|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}|$ and $|\frac{\sqrt{\det\gamma^{(2)}}}{\sqrt{\det\gamma^{(1)}}}|$. Therefore, taking the $L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$ norm of the above inequality yields \begin{equation}\label{eq:gamma.diff.aux.1} \|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} -1 \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \| \log \det\gamma^{(1)} - \log\det\gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} For the first derivative, we compute $$\slashed{\nabla} (\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} -1) = \frac 12\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \slashed{\nabla}(\log\det\gamma^{(1)} - \log\det\gamma^{(2)}).$$ Using the bounds in Definition~\ref{double.null.def.2}, we obtain \begin{equation}\label{eq:gamma.diff.aux.2} \|\slashed{\nabla} (\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} -1)\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|\slashed{\nabla}(\log\det\gamma^{(1)} - \log\det\gamma^{(2)})\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} Combining \eqref{eq:gamma.diff.aux.1} and \eqref{eq:gamma.diff.aux.2} yields the bound for $\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} - 1$ in \eqref{eq:gamma.det.diff.main}; the bound for $\frac{\sqrt{\det\gamma^{(2)}}}{\sqrt{\det\gamma^{(1)}}} - 1$ can be proven in an entirely analogous manner.
\qedhere \end{proof} \begin{proposition}\label{prop:Gamma.diff} $$\| \slashed \Gamma^{(1)} - \slashed \Gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)} ) } \lesssim \| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)} ) },$$ and $$\| \slashed \Gamma^{(1)} - \slashed \Gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)} ) } \lesssim \mathrm{dist}.$$ \end{proposition} \begin{proof} The first statement is immediate from \eqref{Gamma.def} and Proposition~\ref{prop:gamma.inverse.diff}. The second statement then follows after applying also \eqref{def:dist}. \qedhere \end{proof} \begin{proposition}\label{prop:Omg.diff.aux} $$\|1 - \frac{\Omega^{(1)}}{\Omega^{(2)}} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \|1 - \frac{\Omega^{(2)}}{\Omega^{(1)}} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist}.$$ \end{proposition} \begin{proof} It suffices to control $1 - \frac{\Omega^{(1)}}{\Omega^{(2)}}$, $1 - \frac{\Omega^{(2)}}{\Omega^{(1)}}$ and their derivatives by $\log\frac{\Omega^{(1)}}{\Omega^{(2)}}$ and its derivatives. This computation is almost exactly the same as in the proof of \eqref{eq:gamma.det.diff.main}; we omit the details. \qedhere \end{proof} \subsection{Transport estimates for the metric coefficients and the Ricci coefficients}\label{sec:diff.transport} In this subsection, we prove some estimates which are derivable using transport equations. We first prove some general estimates for transport equations in Propositions~\ref{prop:transport} and \ref{prop:operator.diff}. These will then be applied in Propositions~\ref{prop:metric}--\ref{prop:chih} to control the differences of the metric coefficients and the Ricci coefficients. \begin{proposition}\label{prop:transport} Suppose\footnote{For all the statements in this proposition, we allow $\phi$ and $F$ to be of arbitrary (but the same) rank. The implicit constants in the estimates may depend on the rank.} $\slashed{\nabla}_3^{(1)} \phi = F$ holds in the integrated sense (Definition~\ref{def:weak.transport}) such that $\phi\restriction_{\{u = u_i\}} = 0$. Then \begin{equation}\label{eq:ii2} \|\phi\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|F \|_{L^\infty_{\underline{u}} L^1_u L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} Suppose $\slashed{\nabla}_4^{(1)} \phi = F$ holds in the integrated sense (Definition~\ref{def:weak.transport}) such that $\phi\restriction_{\{\underline{u} = \underline{u}_j\}} = 0$. Then \begin{equation}\label{eq:ii2.4} \|\phi\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|F \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} Suppose $\slashed{\nabla}_3^{(1)} \phi = F$ holds in the weak integrated sense (Definition~\ref{def:weaker.transport}) such that $\phi\restriction_{\{u = u_i\}} = 0$. Then \begin{equation}\label{eq:2i2} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \| F \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} Suppose $\slashed{\nabla}_4^{(1)} \phi = F$ holds in the weak integrated sense (Definition~\ref{def:weaker.transport}) such that $\phi\restriction_{\{\underline{u} = \underline{u}_j \}} = 0$.
Then \begin{equation}\label{eq:2i2.4} \|\phi\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \| F \|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{equation} \end{proposition} \begin{proof} We will prove \eqref{eq:ii2} and \eqref{eq:2i2}. The proofs for \eqref{eq:ii2.4} and \eqref{eq:2i2.4} are similar and omitted. \pfstep{Step~1: Proof of \eqref{eq:ii2}} Fix $(U,\underline{U}) \in [u_i,u_{i+1}] \times [\underline{u}_j,\underline{u}_{j+1}]$. Let $\varphi\in C^1$ satisfy \begin{equation}\label{eq:varphi.transported} \slashed{\nabla}_3^{(1)}\varphi = 0 \end{equation} and \begin{equation}\label{eq:phi.transported.leq1} \|\varphi\|_{L^2(S_{U,\underline{U}},\gamma^{(1)})} \leq 1. \end{equation} By Proposition~\ref{prop:transport.id} and \eqref{eq:varphi.transported}, \begin{equation}\label{eq:phi.transported.Gronwall} \frac{\partial}{\partial u} \int_{S_{u,\underline{U}}} |\varphi|_{\gamma^{(1)}}^2 \,\mathrm{dA}_{\gamma^{(1)}} = \int_{S_{u,\underline{U}}} \Omega^{(1)}\left(\slashed{\nabla}_3^{(1)} |\varphi|_{\gamma^{(1)}}^2 + \slashed{\mathrm{tr}}\underline{\chi}^{(1)} |\varphi|_{\gamma^{(1)}}^2 \right)\, \,\mathrm{dA}_{\gamma^{(1)}} = \int_{S_{u,\underline{U}}} \Omega^{(1)} \slashed{\mathrm{tr}}\underline{\chi}^{(1)} |\varphi|_{\gamma^{(1)}}^2 \, \,\mathrm{dA}_{\gamma^{(1)}}. \end{equation} A simple application of Gr\"onwall's inequality implies that $\|\varphi\|_{L^\infty_u L^2(S_{u,\underline{U}},\gamma^{(1)})} \lesssim 1$. By Definition~\ref{def:weak.transport}, \eqref{eq:varphi.transported} and the assumption on the initial data, the following holds for all $u \in [u_i,u_{i+1}]$: \begin{equation}\label{eq:weak.for.uniqueness} \begin{split} \int_{S_{u,\underline{U}}} \langle\varphi, \phi\rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}} + \int_{u_i}^{u} \int_{S_{u',\underline{U}}} \langle \varphi, F + (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)})\phi \rangle (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u' =0. \end{split} \end{equation} Applying H\"older's inequality and \eqref{eq:phi.transported.Gronwall} to \eqref{eq:weak.for.uniqueness}, and using the estimates in Definition~\ref{double.null.def.2}, we see that for every $u \in [u_i,u_{i+1}] $, \begin{equation}\label{eq:phi.dual.formulation} \begin{split} \left| \int_{S_{u,\underline{U}}} \langle\varphi, \phi\rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\right| \lesssim &\: \| F\|_{L^1_u L^2(S_{u,\underline{U}},\gamma^{(1)})} + \|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)}\|_{L^1_u L^2(S_{u,\underline{U}},\gamma^{(1)})} \| \phi\|_{L^\infty_u L^2(S_{u,\underline{U}},\gamma^{(1)})} \\ \lesssim &\: \| F\|_{L^1_u L^2(S_{u,\underline{U}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \| \phi\|_{L^\infty_u L^2(S_{u,\underline{U}},\gamma^{(1)})}. \end{split} \end{equation} In particular, it follows from \eqref{eq:phi.dual.formulation} by duality and the boundedness of $\log\Omega^{(1)}$ that \begin{equation} \begin{split} \|\phi\|_{L^2(S_{U,\underline{U}},\gamma^{(1)})}\lesssim &\: \sup_{\|\varphi\|_{L^2(S_{U,\underline{U}}, \gamma^{(1)})}\leq 1} \left| \int_{S_{U,\underline{U}}} \langle\varphi, \phi\rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\right| \\ \lesssim &\: \| F\|_{L^1_u L^2(S_{u,\underline{U}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \| \phi\|_{L^\infty_u L^2(S_{u,\underline{U}},\gamma^{(1)})}.
\end{split} \end{equation} In view of the arbitrariness of $(U,\underline{U})$, we then obtain $$\|\phi\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \| F\|_{L^\infty_{\underline{u}} L^1_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \|\phi\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})},$$ which, after choosing $N$ sufficiently large, implies \eqref{eq:ii2}. \pfstep{Step~2: Proof of \eqref{eq:2i2}} Fix $U\in [u_i,u_{i+1}]$. Pick $\varphi \in C^1$ satisfying \eqref{eq:varphi.transported}, but instead of \eqref{eq:phi.transported.leq1}, assume \begin{equation}\label{eq:phi.transported.L2.leq1} \|\varphi\|_{L^2_{\underline{u}} L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1. \end{equation} Integrating \eqref{eq:phi.transported.Gronwall} in $\underline{u}$ and applying Gr\"onwall's inequality, we obtain \begin{equation}\label{eq:phi.transported.L2.leq1.propagated} \|\varphi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim 1. \end{equation} By Definition~\ref{def:weaker.transport} and \eqref{eq:varphi.transported}, we have, for all $u \in [u_i,u_{i+1}]$, \begin{equation}\label{eq:transport.weaker.transport} \begin{split} \int_{\underline{u}_j}^{\underline{u}_{j+1}} \int_{S_{u,\underline{u}}} \langle\varphi, \phi\rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} \underline{u} + \int_{\underline{u}_j}^{\underline{u}_{j+1}}\int_{u_i}^{u} \int_{S_{u',\underline{u}}} \langle \varphi, F + (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)})\phi \rangle (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\, \mathrm{d} \underline{u} =0. \end{split} \end{equation} Applying \eqref{eq:transport.weaker.transport} with $u=U$ and using H\"older's inequality together with \eqref{eq:phi.transported.L2.leq1.propagated}, we obtain \begin{equation}\label{eq:transport.weaker.transport.2} \begin{split} \|\phi\|_{L^2_{\underline{u}} L^2(S_{U,\underline{u}},\gamma^{(1)})} \lesssim &\: \sup_{\|\varphi\|_{L^2_{\underline{u}} L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1} \Big|\int_{\underline{u}_j}^{\underline{u}_{j+1}} \int_{S_{U,\underline{u}}} \langle\varphi, \phi\rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} \underline{u}\Big| \\ \lesssim &\: \| F\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}}\|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)} \|_{L^2_u L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \| F\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{split} \end{equation} Since $U$ is arbitrary, it follows from \eqref{eq:transport.weaker.transport.2} that \begin{equation*} \begin{split} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim &\: \| F\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}, \end{split} \end{equation*} which implies \eqref{eq:2i2} after taking $N$ sufficiently large.
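We spell out the final absorption, since the same argument recurs throughout this section; this is a minimal sketch, where $C$ denotes the implicit constant in the previous inequality (a notational assumption only). Provided that the left-hand side is a priori finite, the inequality reads $$\|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \leq C \| F\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac{C}{N^{\frac 12}} \|\phi\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})},$$ so that for $N \geq 4C^2$ the last term can be absorbed into the left-hand side, yielding \eqref{eq:2i2} with constant $2C$.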
\qedhere \end{proof} \begin{proposition}\label{prop:operator.diff} The following estimates hold for every $S$-tangent tensorfield $\phi$ of arbitrary rank (with the implicit constant depending on the rank): \begin{equation}\label{eq:nab4.diff.1} \begin{split} \|( \slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \phi\|_{L^\infty_{u} L^1_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} (\|\phi \|_{L^\infty_{\underline{u}} L^\infty_u W^{1,4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\slashed{\nabla}_4^{(2)} \phi \|_{L^\infty_u L^\infty_{\underline{u}} L^{4}(S_{u,\underline{u}},\gamma^{(1)}) }), \end{split} \end{equation} \begin{equation}\label{eq:nab4.diff.2} \begin{split} &\: \|( \slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \phi\|_{ L^1_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} (\|\phi \|_{L^2_u L^\infty_{\underline{u}} L^{4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\phi \|_{L^\infty_{\underline{u}} L^2_u W^{1,4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\slashed{\nabla}_4^{(2)} \phi \|_{ L^\infty_{\underline{u}} L^2_u L^{4}(S_{u,\underline{u}},\gamma^{(1)}) }), \end{split} \end{equation} \begin{equation}\label{eq:nab3.diff.1} \begin{split} \|( \slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)}) \phi\|_{L^\infty_{\underline{u}} L^1_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} ( \|\phi \|_{L^\infty_{u} L^\infty_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\slashed{\nabla}_3^{(2)} \phi \|_{L^\infty_{u} L^\infty_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)}) }), \end{split} \end{equation} and \begin{equation}\label{eq:nab3.diff.2} \begin{split} &\: \|( \slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)}) \phi\|_{L^1_{u} L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} (\|\phi \|_{L^2_{\underline{u}} L^\infty_{u} L^{4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\phi \|_{L^\infty_{u} L^2_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\slashed{\nabla}_3^{(2)} \phi \|_{L^\infty_{u} L^2_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)}) }). \end{split} \end{equation} \end{proposition} \begin{proof} We will only prove \eqref{eq:nab3.diff.1} and \eqref{eq:nab3.diff.2}; the estimates \eqref{eq:nab4.diff.1} and \eqref{eq:nab4.diff.2} are slightly easier. Before we proceed, first note that by H\"older's inequality and Fubini's theorem, it suffices to\footnote{Note in particular that a smallness factor $N^{-\frac 12}$ arises from the difference between $L^1_u$ on the LHS of \eqref{eq:nab3.diff.1}, \eqref{eq:nab3.diff.2} and $L^2_u$ on the LHS of \eqref{eq:nab3.diff.3}.} prove that for $p \in \{2, +\infty\}$, \begin{equation}\label{eq:nab3.diff.3} \begin{split} &\: \|( \slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)}) \phi\|_{L^p_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \\ \lesssim &\: \mathrm{dist}\times (\|\phi \|_{L^p_{\underline{u}} L^\infty_{u} L^{4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\phi \|_{L^\infty_{u} L^p_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)}) } + \|\slashed{\nabla}_3^{(2)} \phi \|_{L^\infty_{u} L^p_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)}) }). \end{split} \end{equation} From now on, we take $p\in \{2, +\infty\}$ and our goal will be to prove \eqref{eq:nab3.diff.3}.
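Before turning to the main computation, we make the smallness mechanism of the footnote explicit; this is a minimal sketch, under the assumption (implicit in the $N^{-\frac 12}$ gains throughout this section) that the length of the $u$-interval satisfies $u_{i+1} - u_i \lesssim N^{-1}$. For any $G \in L^2_u$, the Cauchy--Schwarz inequality in $u$ gives $$\|G\|_{L^1_u} = \int_{u_i}^{u_{i+1}} |G| \,\mathrm{d} u \leq (u_{i+1} - u_i)^{\frac 12} \left( \int_{u_i}^{u_{i+1}} |G|^2 \,\mathrm{d} u \right)^{\frac 12} \lesssim \frac{1}{N^{\frac 12}} \|G\|_{L^2_u}.$$ Applying this with $G(u) = \|( \slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)}) \phi\|_{L^2(S_{u,\underline{u}},\gamma^{(1)})}$, the case $p = +\infty$ of \eqref{eq:nab3.diff.3} implies \eqref{eq:nab3.diff.1}, while the case $p = 2$ implies \eqref{eq:nab3.diff.2} after an application of Fubini's theorem.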
By \eqref{nab3.def}, \begin{align} &\: (( \slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)}) \phi)_{A_1 A_2 ... A_r} \notag\\ = &\: \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}} (\slashed{\nabla}_3^{(2)} \phi)_{A_1 A_2 ... A_r} +(\Omega^{(1)})^{-1} (b^{(1)}- b^{(2)})^C \slashed{\nabla}_C^{(1)} \phi_{A_1 A_2 ... A_r} \label{eq:nab3.diff.line1}\\ &\: + \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}} \sum_{i=1}^r ((\gamma^{-1})^{BC}\underline{\chi}_{CA_i})^{(2)} \phi_{A_1\dots\hat{A_i}B\dots A_r} \label{eq:nab3.diff.line2}\\ &\: -\sum_{i=1}^r( ((\gamma^{-1})^{BC}\underline{\chi}_{CA_i})^{(1)} - ((\gamma^{-1})^{BC}\underline{\chi}_{CA_i})^{(2)}) \phi_{A_1\dots\hat{A_i}B\dots A_r} \label{eq:nab3.diff.line3}\\ &\: + \sum_{i=1}^r (\Omega^{(1)})^{-1} \slashed{\nabla}_{A_i}^{(1)} ( b^{(1)} - b^{(2)})^B \phi_{A_1\dots\hat{A_i}B\dots A_r} \label{eq:nab3.diff.line4}. \end{align} For the first term in \eqref{eq:nab3.diff.line1}, we first note that by Proposition~\ref{prop:Omg.diff.aux} and Sobolev embedding (Proposition~\ref{prop:Sobolev}), \begin{equation}\label{eq:Om.diff.Sobolev} \| \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}}\|_{L^\infty_u L^\infty_{\underline{u}} L^4(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \| \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}}\|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \mathrm{dist}. \end{equation} Therefore, using the fact that $p\geq 2$ and H\"older's inequality, we obtain \begin{equation*} \begin{split} \| \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}} \slashed{\nabla}_3^{(2)} \phi \|_{L^p_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim &\: \| \frac{\Omega^{(2)} - \Omega^{(1)}}{\Omega^{(1)}} \slashed{\nabla}_3^{(2)} \phi \|_{ L^2_{u} L^p_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} \| \slashed{\nabla}_3^{(2)} \phi\|_{L^\infty_{u} L^p_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})}. \end{split} \end{equation*} The second term in \eqref{eq:nab3.diff.line1} can be treated in a similar manner. We note that by \eqref{def:dist} and Proposition~\ref{prop:Sobolev}, $\| b^{(1)} - b^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist}$ and so using also the bounds provided by Definition~\ref{double.null.def.2}, we have $$\| (\Omega^{(1)})^{-1} (b^{(1)}- b^{(2)})\cdot \slashed{\nabla}^{(1)} \phi \|_{L^p_{\underline{u}} L^1_{u} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N} \| \phi\|_{L^\infty_{u} L^p_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)})}.$$ For \eqref{eq:nab3.diff.line2}, we use \eqref{eq:Om.diff.Sobolev}, the bounds in Definition~\ref{double.null.def.2} and H\"older's inequality to obtain $$\|\mbox{\eqref{eq:nab3.diff.line2}}\|_{L^p_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \lesssim \frac{\mathrm{dist}}{N^{\frac 12}} \|\underline{\chi}^{(2)}\|_{L^2_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \|\phi\|_{L^\infty_{u} L^p_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist} \|\phi\|_{L^\infty_{u} L^p_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})}.$$ We next consider \eqref{eq:nab3.diff.line3}.
For this, we note that by \eqref{def:dist} and Sobolev embedding (Proposition~\ref{prop:Sobolev}), $$\| ((\gamma^{-1})^{BC}\underline{\chi}_{CA_i})^{(1)}- ((\gamma^{-1})^{BC}\underline{\chi}_{CA_i})^{(2)} \|_{ L^\infty_{\underline{u}} L^2_{u} L^4(S_{u,\underline{u}},\gamma^{(1)}) }\lesssim \mathrm{dist}.$$ Thus, H\"older's inequality implies that $$\|\mbox{\eqref{eq:nab3.diff.line3}} \|_{L^p_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \lesssim \mathrm{dist} \|\phi\|_{L^p_{\underline{u}} L^\infty_{u} L^4(S_{u,\underline{u}},\gamma^{(1)})}.$$ Finally, we consider \eqref{eq:nab3.diff.line4}. By \eqref{def:dist}, $$\| (\Omega^{(1)})^{-1} \slashed{\nabla}_{A_i}^{(1)} ( b^{(1)} - b^{(2)})^B \|_{ L^\infty_{\underline{u}} L^\infty_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) }\lesssim \mathrm{dist}.$$ Therefore, by H\"older's inequality, Sobolev embedding (Proposition~\ref{prop:Sobolev}) and the fact that $p\geq 2$, $$\|\mbox{\eqref{eq:nab3.diff.line4}}\|_{L^p_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}},\gamma^{(1)}) } \lesssim \mathrm{dist} \|\phi\|_{L^p_{\underline{u}} L^2_{u} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}} \|\phi\|_{ L^\infty_{u} L^p_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)})}.$$ Combining the above estimates, we have thus established \eqref{eq:nab3.diff.3}. This concludes the argument. \qedhere \end{proof} \begin{proposition}\label{prop:metric} \begin{equation*} \begin{split} &\: \|\gamma^{(1)} - \gamma^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \| \log\frac{\det\gamma^{(1)}}{\det\gamma^{(2)}} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \|b^{(1)} - b^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} + \| \log\frac{\Omega^{(1)}}{\Omega^{(2)}} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} \end{proposition} \begin{proof} \pfstep{Step~1: Proof of $L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}},\gamma^{(1)})$ estimates} By Propositions~\ref{prop:transport} and \ref{prop:operator.diff}, it suffices to bound $(\slashed{\nabla}_4\gamma)^{(1)} - (\slashed{\nabla}_4\gamma)^{(2)}$, $(\slashed{\nabla}_4\log\det\gamma)^{(1)} - (\slashed{\nabla}_4\log\det\gamma)^{(2)}$, $(\slashed{\nabla}_4 b)^{(1)} - (\slashed{\nabla}_4 b)^{(2)}$ and $(\slashed{\nabla}_4\log\Omega)^{(1)} - (\slashed{\nabla}_4\log\Omega)^{(2)}$ in $L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})$. By \eqref{metric.derivative.invar}, we have $$(\slashed{\nabla}_4\gamma)^{(1)} - (\slashed{\nabla}_4\gamma)^{(2)} = 0,$$ $$(\slashed{\nabla}_4\log\det\gamma)^{(1)} - (\slashed{\nabla}_4\log\det\gamma)^{(2)} = 2\slashed{\mathrm{tr}}\chi^{(1)} - 2\slashed{\mathrm{tr}}\chi^{(2)},$$ $$(\slashed{\nabla}_4 b)^{(1)} - (\slashed{\nabla}_4 b)^{(2)} = - 2(\Omega^{(1)} (\eta - \underline{\eta})^{(1)} - \Omega^{(2)} (\eta - \underline{\eta})^{(2)}) + (\chi\cdot b)^{(1)} - (\chi\cdot b)^{(2)},$$ $$(\slashed{\nabla}_4\log\Omega)^{(1)} - (\slashed{\nabla}_4\log\Omega)^{(2)} = - 2(\omega^{(1)} - \omega^{(2)}).$$ Now by the estimates in Definition~\ref{double.null.def.2} and \eqref{def:dist}, the RHS of each of these equations is bounded above in $L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})$ by $\mathrm{dist}$.
In particular, using the Cauchy--Schwarz inequality, we obtain \begin{equation*} \begin{split} &\: \|(\slashed{\nabla}_4\gamma)^{(1)} - (\slashed{\nabla}_4\gamma)^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} + \|(\slashed{\nabla}_4\log\det\gamma)^{(1)} - (\slashed{\nabla}_4\log\det\gamma)^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + \|(\slashed{\nabla}_4 b)^{(1)} - (\slashed{\nabla}_4 b)^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} +\| (\slashed{\nabla}_4\log\Omega)^{(1)} - (\slashed{\nabla}_4\log\Omega)^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} Therefore, we obtain the desired $L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}},\gamma^{(1)})$ estimates. \pfstep{Step~2: Proof of $L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})$ estimates} This is similar to Step~1, except that we use the equations \eqref{eq:nablagamma}--\eqref{eq:nablab} instead; we omit the details. \qedhere \end{proof} \begin{proposition}\label{prop:eta} $$\|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\underline{\eta}^{(1)} - \underline{\eta}^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}\lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ \end{proposition} \begin{proof} We will prove the estimate for $\eta^{(1)} - \eta^{(2)}$; the estimate for $\underline{\eta}^{(1)} - \underline{\eta}^{(2)}$ is similar. By Proposition~\ref{prop:transport}, we need to bound $\slashed{\nabla}_4^{(1)} (\eta^{(1)} - \eta^{(2)})$. We write $$\slashed{\nabla}_4^{(1)} (\eta^{(1)} - \eta^{(2)}) = - (\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \eta^{(2)} + (\slashed{\nabla}_4 \eta)^{(1)} - (\slashed{\nabla}_4 \eta)^{(2)}.$$ The term $(\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \eta^{(2)}$ can be estimated using \eqref{eq:nab4.diff.1} in Proposition~\ref{prop:operator.diff}, the bounds for $\eta^{(2)}$ given by Definition~\ref{double.null.def.2} and the equation \eqref{Ric4A} satisfied by $\eta^{(2)}$, together with Propositions~\ref{prop:diff.gamma.comp} and \ref{prop:Gamma.diff}; we thus have $$\|(\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \eta^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ Therefore, according to Proposition~\ref{prop:transport}, it suffices to bound the terms in $(\slashed{\nabla}_4 \eta)^{(1)} - (\slashed{\nabla}_4 \eta)^{(2)}$. By \eqref{Ric4A}, these terms are either of the form $(\div \hat{\chi})^{(1)} - (\div \hat{\chi})^{(2)}$ or $(\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(1)} - (\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(2)}$ or $(\chi\star \eta)^{(1)} - (\chi\star \eta)^{(2)}$ or $(\chi \star \underline{\eta})^{(1)} - (\chi \star \underline{\eta})^{(2)}$ (where $\star$ denotes some contraction with respect to $\gamma$). We first handle the term $(\div \hat{\chi})^{(1)} - (\div \hat{\chi})^{(2)}$.
A direct computation gives \begin{equation*} \begin{split} &\: (\div\hat{\chi})^{(1)} - (\div\hat{\chi})^{(2)} \\ = &\:\{ (\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1} \}^{AB} \slashed{\nabla}^{(1)}_A \hat{\chi}^{(1)}_{BC} + \{(\gamma^{(2)})^{-1} \}^{AB} \slashed{\nabla}^{(1)}_A (\hat{\chi}^{(1)}_{BC} - \hat{\chi}^{(2)}_{BC}) + \{(\gamma^{(2)})^{-1} \}^{AB} (\slashed{\nabla}^{(1)}_A - \slashed{\nabla}^{(2)}_A)\hat{\chi}^{(2)}_{BC}. \end{split} \end{equation*} To proceed, note that by Definition~\ref{double.null.def.2} and Sobolev embedding (Proposition~\ref{prop:Sobolev}), $\|\hat{\chi}^{(1)}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \lesssim 1$. Therefore, using Definition~\ref{double.null.def.2}, \eqref{def:dist} and Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Gamma.diff}, we obtain \begin{equation*} \begin{split} &\: \| (\div\hat{\chi})^{(1)} - (\div\hat{\chi})^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}} + N^{-\frac 12} \|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)})} + N^{-\frac 12}\|\slashed \Gamma^{(1)} - \slashed \Gamma^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} We next consider $(\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(1)} - (\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(2)}$. Again, we use Definition~\ref{double.null.def.2} and \eqref{def:dist} to obtain \begin{equation*} \begin{split} &\: \| (\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(1)} - (\slashed{\nabla}\slashed{\mathrm{tr}}\chi)^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)}\|_{L^\infty_u L^2_{\underline{u}} W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} Finally, we consider the terms not involving derivatives of $\chi$. We will look at $(\chi\star \eta)^{(1)} - (\chi\star \eta)^{(2)}$; the term $(\chi \star \underline{\eta})^{(1)} - (\chi \star \underline{\eta})^{(2)}$ is similar. By Definition~\ref{double.null.def.2} and \eqref{def:dist}, we have \begin{equation*} \begin{split} &\: \| (\chi\star \eta)^{(1)} - (\chi\star \eta)^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + N^{-\frac 12} (\|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} + \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})}) \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} Combining all the above estimates, we thus obtain \begin{equation*} \begin{split} \| (\slashed{\nabla}_4 \eta)^{(1)} - (\slashed{\nabla}_4 \eta)^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} As argued above, this estimate concludes the proof.
\qedhere \end{proof} \begin{proposition}\label{prop:mu} The following estimate holds: $$\|\mu^{(1)} - \mu^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\underline{\mu}^{(1)} - \underline{\mu}^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}\lesssim \mathrm{dist}.$$ \end{proposition} \begin{proof} We will only prove the estimate for $\mu^{(1)}-\mu^{(2)}$; the other term can be treated similarly. Arguing as in Proposition~\ref{prop:eta}, we first write $$\slashed{\nabla}_4^{(1)} (\mu^{(1)} - \mu^{(2)}) = - (\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \mu^{(2)} + (\slashed{\nabla}_4 \mu)^{(1)} - (\slashed{\nabla}_4 \mu)^{(2)}.$$ Again as in Proposition~\ref{prop:eta}, we use \eqref{eq:nab4.diff.1} in Proposition~\ref{prop:operator.diff} to obtain $$\|(\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)}) \mu^{(2)}\|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ It thus remains to bound $(\slashed{\nabla}_4 \mu)^{(1)} - (\slashed{\nabla}_4 \mu)^{(2)}$ in $L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. To this end, we consider the terms in the equation \eqref{eq:mu.0}. Note that schematically, we need to consider the following terms: $\slashed{\nabla}\chi\star (\eta,\underline{\eta})$, $\slashed{\nabla}(\eta,\underline{\eta})\star \chi$, $\chi\star (\eta,\underline{\eta})\star (\eta,\underline{\eta})$, where $(\eta, \underline{\eta})$ means we take either $\eta$ or $\underline{\eta}$, and $\star$ denotes some contraction with respect to $\gamma$. We now consider each of these types of terms. For simplicity of exposition, we will take $\eta$ as a representative of $(\eta, \underline{\eta})$. We begin with the term $\slashed{\nabla}\chi\star \eta$. This can be treated as the terms in Proposition~\ref{prop:eta}: \begin{equation*} \begin{split} &\: \| (\slashed{\nabla} \chi\star \eta)^{(1)} - (\slashed{\nabla} \chi\star \eta)^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\slashed{\nabla}^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})\|_{L^\infty_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} + N^{-\frac 12} \|\slashed\Gamma^{(1)} - \slashed\Gamma^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-\frac 12} (\|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} + \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})}) \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} We next consider $\chi\star \slashed{\nabla}\eta$. Note that there is a contribution $\chi^{(2)} \slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})$ for which we need to put $\chi^{(2)}$ in $L^2_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})$ and put $\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})$ in $L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. Therefore we will not be able to obtain a smallness factor of $N^{-\frac 12}$. We estimate:
\begin{equation*} \begin{split} &\: \| (\chi\star \slashed{\nabla}\eta)^{(1)} - (\chi\star \slashed{\nabla} \eta)^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + N^{-\frac 12} \|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)}) \|_{L^\infty_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \mathrm{dist}. \end{split} \end{equation*} Finally, we handle $\chi\star \eta \star \eta$. This can again be treated as the terms in Proposition~\ref{prop:eta}: \begin{equation*} \begin{split} &\: \| (\chi\star \eta \star \eta)^{(1)} - (\chi\star \eta \star \eta)^{(2)} \|_{L^\infty_u L^1_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + N^{-\frac 12} (\|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})} + \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^{2}(S_{u,\underline{u}}, \gamma^{(1)})}) \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} This concludes the proof. \qedhere \end{proof} \begin{proposition}\label{prop:chih} $$\|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\hat{\underline{\chi}}^{(1)} - \hat{\underline{\chi}}^{(2)}\|_{ L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}},$$ $$\|\omega^{(1)} - \omega^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\underline{\omega}^{(1)} - \underline{\omega}^{(2)}\|_{ L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})}\lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ \end{proposition} \begin{proof} All of the estimates can be obtained in a similar way; we will consider only $\hat{\chi}^{(1)} - \hat{\chi}^{(2)}$ in the proof. By \eqref{eq:2i2} in Proposition~\ref{prop:transport}, it suffices to bound $\slashed{\nabla}_3^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$ in $L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})$. We write $$\slashed{\nabla}_3^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) = - (\slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)})\hat{\chi}^{(2)} + (\slashed{\nabla}_3 \hat{\chi})^{(1)} - (\slashed{\nabla}_3 \hat{\chi})^{(2)}.$$ By Proposition~\ref{prop:operator.diff}, the estimates in Definition~\ref{double.null.def.2}, and Propositions~\ref{prop:diff.gamma.comp} and \ref{prop:Gamma.diff}, we have $$\|(\slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)})\hat{\chi}^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ It therefore suffices to bound $(\slashed{\nabla}_3 \hat{\chi})^{(1)} - (\slashed{\nabla}_3 \hat{\chi})^{(2)}$ in $L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. We now look at the corresponding terms in \eqref{RicAB.1}.
We begin with the $\slashed{\nabla}\otimes \eta$ term: \begin{equation*} \begin{split} &\: \|(\slashed{\nabla}\otimes \eta)^{(1)} - (\slashed{\nabla}\otimes \eta)^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \|\gamma^{(1)} - \gamma^{(2)} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed \Gamma^{(1)} - \slashed \Gamma^{(2)} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \| \eta^{(1)} - \eta^{(2)} \|_{L^1_u L^2_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 32}}. \end{split} \end{equation*} Next, we consider the term quadratic in $\eta$: \begin{equation*} \begin{split} &\: \|(\eta \otimes \eta)^{(1)} - (\eta \otimes \eta)^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \|\gamma^{(1)} - \gamma^{(2)} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \| \eta^{(1)} - \eta^{(2)} \|_{L^1_u L^2_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N}. \end{split} \end{equation*} Finally, we consider the terms $\slashed{\mathrm{tr}}\underline{\chi} \hat{\chi}$, $\underline{\omega} \hat{\chi}$ and $\slashed{\mathrm{tr}}\chi\hat{\underline{\chi}}$. We need to be more careful since some of the terms involved cannot be bounded in $L^\infty_u L^\infty_{\underline{u}}$-type norms. We consider $\underline{\omega} \hat{\chi}$ as an example (the other terms are similar). By H\"older's inequality, \begin{equation}\label{eq:omb.chih.diff} \begin{split} &\: \|(\underline{\omega} \hat{\chi})^{(1)} - (\underline{\omega} \hat{\chi})^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\underline{\omega}^{(1)} - \underline{\omega}^{(2)} \|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\hat{\chi}^{(1)} \|_{L^2_{\underline{u}} L^\infty_u L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-\frac 12} \| \underline{\omega}^{(2)} \|_{ L^2_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \|\hat{\chi}^{(1)} - \hat{\chi}^{(2)} \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation} We note explicitly that in the above estimate, while the difference $\underline{\omega}^{(1)} - \underline{\omega}^{(2)}$ has to be controlled by taking $L^2_u$ first before taking $L^\infty_{\underline{u}}$, it is important that according to Definition~\ref{double.null.def.2}, $\hat{\chi}^{(1)}$ can be bounded by taking $L^\infty_{\underline{u}}$ first before taking $L^2_u$. A similar comment applies to the product $\underline{\omega}^{(2)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$. Putting all these together and using \eqref{eq:2i2} in Proposition~\ref{prop:transport}, we obtain the desired estimate.
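To be explicit about this final step (a brief sketch, under the assumption — as in the earlier applications of Proposition~\ref{prop:transport} — that $\hat{\chi}^{(1)} - \hat{\chi}^{(2)}$ vanishes on the initial hypersurface because the data coincide there), we apply \eqref{eq:2i2} with $\phi = \hat{\chi}^{(1)} - \hat{\chi}^{(2)}$ and $F = \slashed{\nabla}_3^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$ to get $$\|\hat{\chi}^{(1)} - \hat{\chi}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|\slashed{\nabla}_3^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$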
\qedhere \end{proof} \subsection{Estimates for $\slashed{\mathrm{tr}}\chi$, $\protect\slashed{\mathrm{tr}}\underline{\chi}$ and their derivatives}\label{sec:diff.trch} \begin{proposition}\label{prop:trch} $$\|\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)} )}\lesssim \mathrm{dist}.$$ In particular, $$\|\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)} \|_{L^2_{\underline{u}} L^\infty_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)} \|_{L^2_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)} )}\lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ \end{proposition} \begin{proof} We will only prove the estimate for $\slashed{\mathrm{tr}}\underline{\chi}$; the estimate for $\slashed{\mathrm{tr}}\chi$ is similar. Fix $\underline{u} \in [\underline{u}_j, \underline{u}_{j+1}]$ and $U \in [u_i,u_{i+1}]$ for the remainder of the proof. Let $\varphi$ be a function on $S_{U,\underline{u}}$ satisfying \begin{equation}\label{eq:trchb.diff.duality} \|\varphi\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1. \end{equation} Extend $\varphi$ to $[u_i, u_{i+1}] \times \mathbb S^2$ by requiring $e_3^{(1)} \varphi = 0$. Proposition~\ref{prop:transport.id}, Gr\"onwall's inequality, and the estimates for $\slashed{\mathrm{tr}}\underline{\chi}^{(1)}$ and $\Omega^{(1)}$ together imply that \begin{equation}\label{eq:trchb.diff.duality.2} \|\varphi\|_{L^\infty_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim 1. \end{equation} Using $e_3^{(1)} \varphi = 0$, we also obtain \begin{equation}\label{eq:trchb.diff.e32.varphi} \begin{split} e_3^{(2)} \varphi = &\: -(e_3^{(1)} - e_3^{(2)}) \varphi = - (\Omega^{(2)})^{-1} \slashed{\nabla}_{b^{(1)} - b^{(2)}} \varphi. \end{split} \end{equation} To proceed, we use the equation \eqref{eq:trchb} for both $(\mathcal M, g^{(1)})$ and $(\mathcal M, g^{(2)})$. For $(\mathcal M, g^{(1)})$ we will use $\varphi$ as the test function, while for $(\mathcal M, g^{(2)})$ we will use $\varphi \frac{(\Omega^{(1)})^2}{(\Omega^{(2)})^2} \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}}$ (instead of $\varphi$) as the test function.
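We briefly indicate the purpose of this choice of test function; this remark is expository and relies only on the coordinate expression $\mathrm{dA}_{\gamma^{(k)}} = \sqrt{\det \gamma^{(k)}}\, \mathrm{d}\vartheta$ of the area elements ($k = 1, 2$). The weight converts the measure naturally associated with $(\mathcal M, g^{(2)})$ into the one associated with $(\mathcal M, g^{(1)})$: $$\varphi \frac{(\Omega^{(1)})^2}{(\Omega^{(2)})^2} \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \, (\Omega^{(2)})^2 \,\mathrm{dA}_{\gamma^{(2)}} = \varphi \, (\Omega^{(1)})^2 \,\mathrm{dA}_{\gamma^{(1)}},$$ so that the two resulting identities can be subtracted with respect to a common measure, as in the identity below.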
Taking the difference between the two identities, using the fact that the initial data coincide, and applying \eqref{eq:trchb.diff.e32.varphi}, we then obtain \begin{align} &\: \int_{S_{U,\underline{u}}} \varphi \Omega^{(1)}( (\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - \frac{\Omega^{(1)}}{\Omega^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-) \,\mathrm{dA}_{\gamma^{(1)}} \notag \\ = &\: - \int_{u_i}^{U} \int_{S_{u,\underline{u}}} (\slashed{\nabla}_{b^{(1)} - b^{(2)}} \varphi)\, \slashed{\mathrm{tr}}\underline{\chi}^{(2)} \frac{(\Omega^{(1)})^2}{\Omega^{(2)}}\,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:trchb.diff.main.1}\\ &\: + \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \varphi \, e_3^{(2)} [\frac{(\Omega^{(1)})^2}{(\Omega^{(2)})^2} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}] \, \mathrm{d} A_{\gamma^{(2)}}\, \mathrm{d} u \label{eq:trchb.diff.main.later.addition}\\ &\: + \int_{u_i}^{U} \int_{S_{u,\underline{u}}} (-4 \varphi (\underline{\omega}^{(1)} \slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \underline{\omega}^{(2)} \slashed{\mathrm{tr}}\underline{\chi}^{(2)}) + \f12 \varphi ((\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^2 -(\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^2) )(\Omega^{(1)})^2\,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:trchb.diff.main.2} \\ &\: - \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \varphi ( |\hat{\underline{\chi}}^{(1)}|_{\gamma^{(1)}}^2 - |\hat{\underline{\chi}}^{(2)}|_{\gamma^{(2)}}^2) (\Omega^{(1)})^2\,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:trchb.diff.main.3}\\ &\: - \int_{(u_i, U)\times \{\underline{u}\}\times \mathbb S^2} \varphi\,(\mathrm{d} \underline{\nu}_{\underline{u}}^{(1)} - \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \mathrm{d} \underline{\nu}_{\underline{u}}^{(2)}). \label{eq:trchb.diff.main.4} \end{align} We now estimate each of the terms \eqref{eq:trchb.diff.main.1}--\eqref{eq:trchb.diff.main.4}. For \eqref{eq:trchb.diff.main.1}, we integrate by parts and use H\"older's inequality together with the estimates in Definition~\ref{double.null.def.2}, \eqref{def:dist} and \eqref{eq:trchb.diff.duality.2} to obtain \begin{equation}\label{eq:trchb.diff.main.1.est} \begin{split} |\eqref{eq:trchb.diff.main.1}| \leq &\: \left| \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \varphi\, \slashed{\nabla}_{b^{(1)} - b^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)} \frac{(\Omega^{(1)})^2}{\Omega^{(2)}}) \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \right| \\ &\: + \left| \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \varphi\, [\div^{(1)}(b^{(1)} - b^{(2)})] (\slashed{\mathrm{tr}}\underline{\chi}^{(2)} \frac{(\Omega^{(1)})^2}{\Omega^{(2)}}) \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \right| \\ \lesssim &\: \|\varphi\|_{L^\infty_u L^2(S_{u,\underline{u}},\gamma^{(1)} ) } \|b^{(1)} - b^{(2)}\|_{L^\infty_{\underline{u}} L^\infty_u W^{1,2}(S_{u,\underline{u}}, \gamma^{(1)} ) } \|\slashed{\mathrm{tr}}\underline{\chi}^{(2)} \frac{(\Omega^{(1)})^2}{\Omega^{(2)}} \|_{L^\infty_{\underline{u}} L^2_u W^{1,\infty}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.
\end{split} \end{equation} For \eqref{eq:trchb.diff.main.later.addition}, we compute \begin{equation*} \begin{split} &\: e_3^{(2)} [\frac{(\Omega^{(1)})^2}{(\Omega^{(2)})^2} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}] \\ = &\: [\frac{(\Omega^{(1)})^2}{(\Omega^{(2)})^2} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}] \{ -2 \omega^{(1)} + 2\omega^{(2)} + (\Omega^{(2)})^{-1} b^{(2)} - (\Omega^{(1)})^{-1} b^{(1)} + \slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)} \\ &\: \qquad \qquad \qquad \qquad - (\Omega^{(1)})^{-1} \div^{(1)} b^{(1)} + (\Omega^{(2)})^{-1} \div^{(2)} b^{(2)} + (\Omega^{(2)})^{-1} \slashed{\nabla}_{b^{(2)}} \log \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \}. \end{split} \end{equation*} Therefore, using Definition~\ref{double.null.def.2}, \eqref{def:dist}, H\"older's inequality and \eqref{eq:trchb.diff.duality.2}, we obtain \begin{equation}\label{eq:trchb.diff.main.later.addition.est} |\eqref{eq:trchb.diff.main.later.addition}| \lesssim \mathrm{dist}. \end{equation} Similarly, using Definition~\ref{double.null.def.2}, \eqref{def:dist}, H\"older's inequality and \eqref{eq:trchb.diff.duality.2}, we obtain \begin{equation}\label{eq:trchb.diff.main.2.3.est} |\eqref{eq:trchb.diff.main.2}| + |\eqref{eq:trchb.diff.main.3}| \lesssim \mathrm{dist}. \end{equation} Finally, \eqref{eq:trchb.diff.main.4} can be directly estimated using the definition \eqref{def:dist.nu} and \eqref{eq:trchb.diff.duality.2}: \begin{equation}\label{eq:trchb.diff.main.4.est} | \eqref{eq:trchb.diff.main.4} | \lesssim \mathrm{dist}(\mathrm{d} \nu^{(1)},\,\mathrm{d} \nu^{(2)}) \lesssim \mathrm{dist}. \end{equation} Combining \eqref{eq:trchb.diff.main.1.est}, \eqref{eq:trchb.diff.main.later.addition.est}, \eqref{eq:trchb.diff.main.2.3.est} and \eqref{eq:trchb.diff.main.4.est} and plugging into the identity preceding these estimates, we obtain \begin{equation*} \left| \int_{S_{U,\underline{u}}} \varphi \Omega^{(1)}( (\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - \frac{\Omega^{(1)}}{\Omega^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-) \,\mathrm{dA}_{\gamma^{(1)}} \right| \lesssim \mathrm{dist}. \end{equation*} By duality and the bounds on $\Omega^{(1)}$ in Definition~\ref{double.null.def.2}, it follows that \begin{equation}\label{eq:trchb.diff.almost} \|(\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - \frac{\Omega^{(1)}}{\Omega^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^- \|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist}. \end{equation} Since $(U,\underline{u})$ is arbitrary, by \eqref{eq:trchb.diff.almost}, we have \begin{equation}\label{eq:trchb.diff.almost.2} \|(\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - \frac{\Omega^{(1)}}{\Omega^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^- \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist}. \end{equation} Finally, we compute $$(\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^- = (\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^- - \frac{\Omega^{(1)}}{\Omega^{(2)}} (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^- - (1- \frac{\Omega^{(1)}}{\Omega^{(2)}}) (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-$$ and observe that the desired conclusion follows from \eqref{eq:trchb.diff.almost.2}, the bounds for $(1- \frac{\Omega^{(1)}}{\Omega^{(2)}})$ in Proposition~\ref{prop:Omg.diff.aux} and the bounds for $(\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-$ in Definition~\ref{double.null.def.2}.
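For instance, the last term can be handled as follows (a sketch using only H\"older's inequality; the $L^4(S_{u,\underline{u}},\gamma^{(1)})$ bound for $(\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-$ is assumed to follow from Definition~\ref{double.null.def.2} via Sobolev embedding, Proposition~\ref{prop:Sobolev}): $$\|(1- \frac{\Omega^{(1)}}{\Omega^{(2)}}) (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^- \|_{L^2(S_{u,\underline{u}},\gamma^{(1)})} \leq \|1- \frac{\Omega^{(1)}}{\Omega^{(2)}}\|_{L^4(S_{u,\underline{u}},\gamma^{(1)})} \|(\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^-\|_{L^4(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \mathrm{dist},$$ where the first factor is controlled by Proposition~\ref{prop:Omg.diff.aux} together with the embedding $W^{1,2}(S_{u,\underline{u}},\gamma^{(1)}) \hookrightarrow L^4(S_{u,\underline{u}},\gamma^{(1)})$.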
\qedhere \end{proof} \begin{proposition}\label{prop:nabla.trch} \begin{equation}\label{eq:nabla.trch.main.1} \|\slashed{\nabla}^{(1)} (\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)})\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)})\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}\lesssim \mathrm{dist}. \end{equation} In particular, \begin{equation}\label{eq:nabla.trch.main.2} \|\slashed{\nabla}^{(1)} (\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\mathrm{tr}}\chi^{(2)})\|_{L^2_{\underline{u}} L^\infty_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)})\|_{L^2_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}\lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{equation} \end{proposition} \begin{proof} It clearly suffices to prove \eqref{eq:nabla.trch.main.1}, as \eqref{eq:nabla.trch.main.2} follows from \eqref{eq:nabla.trch.main.1} and H\"older's inequality. We will only prove the estimate for $\slashed{\mathrm{tr}}\underline{\chi}$; the estimate for $\slashed{\mathrm{tr}}\chi$ is similar. To this end, we will use the equation for $\slashed X(\Omega^{-1}\slashed{\mathrm{tr}}\underline{\chi})$ in \eqref{eq:Xtrchb.0}. Fix $\underline{u} \in [\underline{u}_j, \underline{u}_{j+1}]$ and $U \in [u_i,u_{i+1}]$ for the remainder of the proof. \pfstep{Step~1: Definition of $\slashed X$} Let $\mathring{\slashed X}$ be a smooth vector field on $S_{U, \underline{u}}$ satisfying \begin{equation}\label{eq:Xtrch.duality} \|\mathring{\slashed X}\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1. \end{equation} Extend $\mathring{\slashed X}$ to an $S$-tangent vector field $\slashed X$ on $[u_i, U]\times \mathbb S^2$ by stipulating it to be the unique solution to \begin{equation}\label{eq:Xtrch.Xtransport} \begin{cases} [\frac{\partial}{\partial u} + \slashed{\nabla}_{b^{(1)}}, \slashed X] = 0 \\ \slashed X(U,\vartheta) = \mathring{\slashed X}(\vartheta). \end{cases} \end{equation} It is easy to check using \eqref{eq:Xtrch.duality}, \eqref{eq:Xtrch.Xtransport} and the estimates in Definition~\ref{double.null.def.2} that for every $u\in [u_i,U]$, \begin{equation}\label{eq:Xtrch.X.est} \|\slashed X\|_{L^\infty_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim 1. \end{equation} \pfstep{Step~2: Preliminary computations} Before we proceed, we first carry out a couple of computations for terms that will arise later in Step~3 when we apply the equation \eqref{eq:Xtrchb.0} for $\slashed X(\Omega^{-1} \slashed{\mathrm{tr}}\underline{\chi})$. The main point of these computations is to rewrite the expression in a form so that any term involving derivatives of $\slashed X$ can be regrouped as a total divergence. First, by \eqref{eq:Xtrch.Xtransport}, for any $f\in L^\infty_uL^\infty_{\underline{u}} W^{2,2}(S_{u,\underline{u}},\gamma^{(1)})$, \begin{equation}\label{eq:computation.for.Xtrchb.1} \begin{split} &\: [\frac{\partial}{\partial u} + \slashed{\nabla}_{b^{(2)}}, \slashed X] f = \slashed{\nabla}_{[b^{(2)} - b^{(1)}, \slashed X]} f \\ = &\: \div^{(2)} ((\slashed{\nabla}_{\slashed X} f) (b^{(2)} - b^{(1)})) - (\slashed{\nabla}_{\slashed X} f) (\div^{(2)} (b^{(2)} - b^{(1)})) - \slashed{\nabla}_{\slashed{\nabla}_{\slashed X} (b^{(2)} - b^{(1)})} f - \slashed{\mathrm{Hess}}(f)(\slashed X, b^{(2)} - b^{(1)}).
\end{split} \end{equation} Notice that, after integrating by parts and using \eqref{Ricci.relation}, the first term also satisfies \begin{equation}\label{eq:computation.for.Xtrchb.2} \int_{S_{u,\underline{u}}} \div^{(2)} ((\slashed{\nabla}_{\slashed X} f) (b^{(2)} - b^{(1)})) (\Omega^{(2)})^2 \, \mathrm{d} A_{\gamma^{(2)}} = - 2\int_{S_{u,\underline{u}}} (\slashed{\nabla}_{\slashed X} f) (\slashed{\nabla}_{(b^{(2)} - b^{(1)})} \log \Omega^{(2)}) (\Omega^{(2)})^2 \, \mathrm{d} A_{\gamma^{(2)}}. \end{equation} On the other hand, we also have \begin{equation}\label{eq:computation.for.Xtrchb.3} \begin{split} &\: (\div^{(1)} \slashed X) \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - \div^{(2)} \slashed X \\ =&\: (\div^{(2)}\slashed X) (\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) + (\div^{(1)}\slashed X - \div^{(2)} \slashed X) \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}}\\ =&\: \div^{(2)} ((\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) \slashed X) - \slashed{\nabla}_{\slashed X} (\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) + \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \slashed{\nabla}_{\slashed X} \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}}. \end{split} \end{equation} \pfstep{Step~3: Application of \eqref{eq:Xtrchb.0}} We now apply \eqref{eq:Xtrchb.0} to both $(\mathcal M, g^{(1)})$ and $(\mathcal M, g^{(2)})$, using the fact that $\slashed X$ satisfies \eqref{eq:Xtrch.Xtransport} and that $\Omega^{(1)} = \Omega^{(2)}$, $\gamma^{(1)} = \gamma^{(2)}$ and $(\slashed{\mathrm{tr}}\underline{\chi}^{(1)})^+ = (\slashed{\mathrm{tr}}\underline{\chi}^{(2)})^+$ when $\underline{u} = 0$. Using also the computations \eqref{eq:computation.for.Xtrchb.1}--\eqref{eq:computation.for.Xtrchb.3}, we obtain \begin{align} &\: \int_{S_{U,\underline{u}}} \left( (\Omega^{(1)})^2 (\slashed X ((\Omega^{(1)})^{-1} \slashed{\mathrm{tr}}\underline{\chi}^{(1)}))^- - (\Omega^{(2)})^2 \frac{\sqrt{\det \gamma^{(2)}}}{\sqrt{\det \gamma^{(1)}}}(\slashed X ((\Omega^{(2)})^{-1} \slashed{\mathrm{tr}}\underline{\chi}^{(2)}))^- \right) \,\mathrm{dA}_{\gamma^{(1)}} \label{eq:Xtrchb.main.term.to.control} \\ =&\: \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \{ 2\slashed{\nabla}_{b^{(2)} - b^{(1)}}\log \Omega^{(2)} + \div^{(2)} (b^{(2)} - b^{(1)}) \} \slashed X((\Omega^{(2)})^{-1}\slashed{\mathrm{tr}}\underline{\chi}^{(2)}) (\Omega^{(2)})^2\,\mathrm{dA}_{\gamma^{(2)}} \,\mathrm{d} u \label{eq:Xtrchb.main.1} \\ &\: + \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \left( (\slashed{\nabla}_{\slashed{\nabla}_{\slashed X} (b^{(2)} - b^{(1)})}) \Omega^{(2)}\slashed{\mathrm{tr}}\underline{\chi}^{(2)} - (\Omega^{(2)})^2 \slashed{\mathrm{Hess}}((\Omega^{(2)})^{-1}\slashed{\mathrm{tr}}\underline{\chi}^{(2)})(\slashed X, b^{(2)} - b^{(1)}) \right) \,\mathrm{dA}_{\gamma^{(2)}} \,\mathrm{d} u \label{eq:Xtrchb.main.2}\\ &\: -4 \int_{u_i}^{U} \int_{S_{u,\underline{u}}} [ \underline{\omega}^{(1)} (\Omega^{(1)})^3\slashed X((\Omega^{(1)})^{-1}\slashed{\mathrm{tr}}\underline{\chi}^{(1)}) - \underline{\omega}^{(2)} (\Omega^{(2)})^3\slashed X((\Omega^{(2)})^{-1}\slashed{\mathrm{tr}}\underline{\chi}^{(2)})\frac{\sqrt{\det \gamma^{(2)}}}{\sqrt{\det \gamma^{(1)}}} ]\,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:Xtrchb.main.3}\\ &\: + \int_{u_i}^{U} \int_{S_{u,\underline{u}}} (2\slashed X(\log\Omega^{(1)}) (\Omega^{(1)})^2|\hat{\underline{\chi}}^{(1)}|_{\gamma^{(1)}}^2 - 2\slashed X(\log\Omega^{(2)}) 
(\Omega^{(2)})^2|\hat{\underline{\chi}}^{(2)}|_{\gamma^{(2)}}^2 \frac{\sqrt{\det \gamma^{(2)}}}{\sqrt{\det \gamma^{(1)}}}) \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:Xtrchb.main.4}\\ &\: + \int_{u_i}^{U} \int_{S_{u,\underline{u}}} ((\div^{(1)} \slashed X) (\Omega^{(1)})^2|\hat{\underline{\chi}}^{(1)}|_{\gamma^{(1)}}^2 - (\div^{(2)} \slashed X)(\Omega^{(2)})^2|\hat{\underline{\chi}}^{(2)}|_{\gamma^{(2)}}^2 \frac{\sqrt{\det \gamma^{(2)}}}{\sqrt{\det \gamma^{(1)}}}) \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u \label{eq:Xtrchb.main.5}\\ &\: + \int_{(u_i,U)\times \{\underline{u}\}\times \mathbb S^2} (2\slashed X(\log\Omega^{(1)}) \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 2 \slashed X(\log\Omega^{(2)}) )\,\mathrm{d}\underline{\nu}_{\underline{u}}^{(2)} \label{eq:Xtrchb.main.6} \\ &\: + \int_{(u_i,U)\times \{\underline{u}\}\times \mathbb S^2} (- \slashed{\nabla}_{\slashed X} (\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) + \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \slashed{\nabla}_{\slashed X} \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}})\,\mathrm{d}\underline{\nu}_{\underline{u}}^{(2)} \label{eq:Xtrchb.main.7}\\ &\: + \int_{(u_i,U)\times \{\underline{u}\}\times \mathbb S^2} \div^{(2)} ((\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) \slashed X) \,\mathrm{d}\underline{\nu}_{\underline{u}}^{(2)} \label{eq:Xtrchb.main.8}\\ &\: + \int_{(u_i,U)\times \{\underline{u}\}\times \mathbb S^2} (2 \slashed X(\log\Omega^{(1)}) + \div^{(1)} \slashed X ) \,(\mathrm{d}\underline{\nu}_{\underline{u}}^{(1)} - \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \mathrm{d}\underline{\nu}_{\underline{u}}^{(2)}). \label{eq:Xtrchb.main.9} \end{align} Now \eqref{eq:Xtrchb.main.1}--\eqref{eq:Xtrchb.main.4} do not involve derivatives of $\slashed X$ and we can directly estimate them using Definition~\ref{double.null.def.2}, \eqref{def:dist}, \eqref{eq:Xtrch.X.est}, Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Omg.diff.aux}, and H\"older's inequality to get \begin{equation}\label{eq:Xtrchb.main.est.1} |\eqref{eq:Xtrchb.main.1}| + |\eqref{eq:Xtrchb.main.2}| + |\eqref{eq:Xtrchb.main.3}| + |\eqref{eq:Xtrchb.main.4}|\lesssim \mathrm{dist}. \end{equation} For \eqref{eq:Xtrchb.main.5}, we first integrate by parts away the derivative on $\slashed X$ and then argue as above to obtain \begin{equation}\label{eq:Xtrchb.main.est.2} \begin{split} &\: |\eqref{eq:Xtrchb.main.5}| \\ = &\: \left| \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \slashed{\nabla}_{\slashed X} \{(\Omega^{(1)})^2|\hat{\underline{\chi}}^{(1)}|_{\gamma^{(1)}}^2\} \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u - \int_{u_i}^{U} \int_{S_{u,\underline{u}}} \slashed{\nabla}_{\slashed X} \{ (\Omega^{(2)})^2|\hat{\underline{\chi}}^{(2)}|_{\gamma^{(2)}}^2\} \frac{\sqrt{\det \gamma^{(2)}}}{\sqrt{\det \gamma^{(1)}}} \,\mathrm{dA}_{\gamma^{(1)}} \,\mathrm{d} u\right| \\ \lesssim &\: \mathrm{dist}. \end{split} \end{equation} We then consider the terms involving the measure $\mathrm{d}\nu^{(2)}_{\underline{u}}$.
To handle the terms \eqref{eq:Xtrchb.main.6}--\eqref{eq:Xtrchb.main.8}, we first use the regularity of $\mathrm{d}\nu^{(2)}_{\underline{u}}$ given in \eqref{eq:nub.add.reg}, and then argue as above (with Definition~\ref{double.null.def.2}, \eqref{def:dist}, \eqref{eq:gamma.det.diff.main}, \eqref{eq:Xtrch.X.est}, Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Omg.diff.aux}, and H\"older's inequality) to obtain \begin{equation}\label{eq:Xtrchb.main.est.3} \begin{split} &\: |\eqref{eq:Xtrchb.main.6}| + |\eqref{eq:Xtrchb.main.7}| + |\eqref{eq:Xtrchb.main.8}| \\ \lesssim &\: \|2\slashed X(\log\Omega^{(1)}) \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 2 \slashed X(\log\Omega^{(2)}) \|_{L^\infty_u L^\infty_{\underline{u}} L^1(S_{u,\underline{u}},\gamma^{(2)})} + \|(\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) \slashed X\|_{L^\infty_u L^\infty_{\underline{u}} L^1(S_{u,\underline{u}},\gamma^{(2)})}\\ &\: + \| (- \slashed{\nabla}_{\slashed X} (\frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} - 1) + \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} \slashed{\nabla}_{\slashed X} \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}})\|_{L^\infty_u L^\infty_{\underline{u}} L^1(S_{u,\underline{u}},\gamma^{(2)})} \\ \lesssim &\: \mathrm{dist}. \end{split} \end{equation} Finally, we bound the term \eqref{eq:Xtrchb.main.9}. Using \eqref{def:dist.nu}, \eqref{def:dist}, H\"older's inequality, \eqref{eq:Xtrch.X.est} and Definition~\ref{double.null.def.2}, we obtain \begin{equation}\label{eq:Xtrchb.main.est.4} \begin{split} |\eqref{eq:Xtrchb.main.9}| \lesssim &\: \mathrm{dist} (\|\slashed X (\log \Omega^{(1)}) \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} + \|\slashed X \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})}) \\ \lesssim &\: \mathrm{dist} (1 + \|\slashed{\nabla} \log \Omega^{(1)} \|_{L^\infty_u L^\infty_{\underline{u}} L^\infty (S_{u,\underline{u}}, \gamma^{(1)})}) \lesssim \mathrm{dist}. \end{split} \end{equation} \pfstep{Step~4: Conclusion} Plugging the estimates \eqref{eq:Xtrchb.main.est.1}--\eqref{eq:Xtrchb.main.est.4} into the identity relating \eqref{eq:Xtrchb.main.term.to.control} to \eqref{eq:Xtrchb.main.1}--\eqref{eq:Xtrchb.main.9}, we obtain \begin{equation}\label{eq:Xtrchb.prefinal} \begin{split} \sup_{\|\slashed X\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1}\eqref{eq:Xtrchb.main.term.to.control} \lesssim &\: \mathrm{dist}.
\end{split} \end{equation} By duality, Definition~\ref{double.null.def.2}, \eqref{def:dist}, Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Omg.diff.aux}, and \eqref{eq:Xtrchb.prefinal} that we have just established, \begin{equation}\label{eq:Xtrchb.final} \begin{split} &\: \|\slashed{\nabla} (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - \slashed{\mathrm{tr}}\underline{\chi}^{(2)})\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \\ = &\: \sup_{\|\slashed X\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1} \int_{S_{U,\underline{u}}} \left( (\slashed X (\slashed{\mathrm{tr}}\underline{\chi}^{(1)}))^- - (\slashed X ( \slashed{\mathrm{tr}}\underline{\chi}^{(2)}))^- \right) \,\mathrm{dA}_{\gamma^{(1)}} \\ \lesssim &\: \sup_{\|\slashed X\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1} \int_{S_{U,\underline{u}}} \Omega^{(1)} \left( (\slashed X (\slashed{\mathrm{tr}}\underline{\chi}^{(1)}))^- - (\slashed X ( \slashed{\mathrm{tr}}\underline{\chi}^{(2)}))^- \right) \,\mathrm{dA}_{\gamma^{(1)}} \\ = &\: \sup_{\|\slashed X\|_{L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1} \eqref{eq:Xtrchb.main.term.to.control} + \mathrm{dist} \lesssim \mathrm{dist}. \end{split} \end{equation} Since $U\in [u_i, u_{i+1}]$ and $\underline{u} \in [\underline{u}_j, \underline{u}_{j+1}]$ are both arbitrary, the desired conclusion follows from \eqref{eq:Xtrchb.final}. \qedhere \end{proof} \subsection{Energy estimates for the renormalized curvature components}\label{sec:energy.est} In this subsection, we control the differences of the renormalized curvature components using energy estimates. We first bound the terms appearing in the equations for the difference of the renormalized curvature components. \begin{proposition}\label{prop:bianchi.diff} The following equations, when viewed as transport equations, hold in the weak integrated sense\footnote{We remark that \eqref{eq:bianchi.diff.2}, \eqref{eq:bianchi.diff.3}, \eqref{eq:bianchi.diff.5} and \eqref{eq:bianchi.diff.6} in fact hold in the integrated sense, and therefore a fortiori hold in the weak integrated sense.} of Definition~\ref{def:weaker.transport}: \begin{align} \label{eq:bianchi.diff.1} (\slashed{\nabla}_3)^{(1)} (\beta^{(1)} - \beta^{(2)}) + \slashed{\nabla} (K^{(1)} -K^{(2)}) - (^*)^{(1)} \slashed{\nabla}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) =&\: \mathrm{error}_{\beta,\,K,\,\check{\sigma}},\\ \label{eq:bianchi.diff.2} (\slashed{\nabla}_4)^{(1)} (\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) +(\div^*)^{(1)}(\beta^{(1)} - \beta^{(2)})=&\:\mathrm{error}_{\check{\sigma},\,\beta},\\ \label{eq:bianchi.diff.3} (\slashed{\nabla}_4)^{(1)} (K^{(1)} - K^{(2)}) + \div^{(1)} (\beta^{(1)} - \beta^{(2)})=&\:\mathrm{error}_{K,\,\beta},\\ \label{eq:bianchi.diff.4} (\slashed{\nabla}_3)^{(1)}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)})+(\div ^*)^{(1)}(\underline{\beta}^{(1)} - \underline{\beta}^{(2)})=&\:\mathrm{error}_{\underline{\beta},\,K,\,\check{\sigma}},\\ \label{eq:bianchi.diff.5} (\slashed{\nabla}_3)^{(1)} (K^{(1)} - K^{(2)})-\div^{(1)} (\underline{\beta}^{(1)} - \underline{\beta}^{(2)}) =&\:\mathrm{error}_{K,\,\underline{\beta}},\\ \label{eq:bianchi.diff.6} (\slashed{\nabla}_4)^{(1)}(\underline{\beta}^{(1)} - \underline{\beta}^{(2)}) - \slashed{\nabla} (K^{(1)} -K^{(2)}) -(^*)^{(1)}\slashed{\nabla}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)})=&\: \mathrm{error}_{\check{\sigma},\,\underline{\beta}}, \end{align} where \begin{equation}\label{eq:null.Bianchi.error.1} \|\mathrm{error}_{\beta,\,K,\,\check{\sigma}}\|_{L^1_u L^2_{\underline{u}}
L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\mathrm{error}_{K,\,\underline{\beta}}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\mathrm{error}_{\check{\sigma},\,\underline{\beta}}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}, \end{equation} \begin{equation}\label{eq:null.Bianchi.error.2} \|\mathrm{error}_{\check{\sigma},\,\beta}\|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\mathrm{error}_{K,\,\beta}\|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\mathrm{error}_{\underline{\beta},\,K,\,\check{\sigma}}\|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{equation} \end{proposition} \begin{proof} We only consider a subset of the equations in view of their similarities. More precisely, we will consider \eqref{eq:bianchi.diff.1} and \eqref{eq:bianchi.diff.2}. The equation \eqref{eq:bianchi.diff.3} is similar to \eqref{eq:bianchi.diff.2}. On the other hand, the equations \eqref{eq:bianchi.diff.4}--\eqref{eq:bianchi.diff.6} are similar to \eqref{eq:bianchi.diff.1}--\eqref{eq:bianchi.diff.3} after interchanging $u$, $\underline{u}$, $e_3$, $e_4$ etc.~appropriately. \pfstep{Step~1: The equation \eqref{eq:bianchi.diff.1}} We compute \begin{equation*} \begin{split} \mbox{LHS of \eqref{eq:bianchi.diff.1}} =&\: \underbrace{(\slashed{\nabla}_3 \beta + \slashed{\nabla} K - ^* \slashed{\nabla}\check{\sigma})^{(1)} - (\slashed{\nabla}_3 \beta + \slashed{\nabla} K - ^* \slashed{\nabla}\check{\sigma})^{(2)} }_{=:\mathrm{I}}\\ &\: \underbrace{- (\slashed{\nabla}_3^{(1)} - \slashed{\nabla}_3^{(2)})\beta^{(2)}}_{=:\mathrm{II}} + \underbrace{\{ (^*)^{(1)} - (^*)^{(2)}\} \slashed{\nabla} \check{\sigma}^{(2)}}_{=:\mathrm{III}}. \end{split} \end{equation*} We now control the RHS in $L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. To bound the term $\mathrm{I}$, we use the equation \eqref{eq:null.Bianchi.1}. Schematically, there are three types of terms: \begin{enumerate} \item $\slashed{\nabla}\chi\star \underline{\chi}$, $\slashed{\nabla}\chi\star \underline{\omega}$, $\chi\star \slashed{\nabla}\underline{\chi}$, \item $\eta\star\chi\star\underline{\chi}$, $\eta\star\chi\star\underline{\omega}$, \item $\eta K$, $\eta \check{\sigma}$, \end{enumerate} where $\star$ denotes some arbitrary contraction with respect to the metric. We will take one example from each group of terms above; the other terms can be treated in exactly the same manner. For the first group, we consider $\slashed{\nabla}\chi\star \underline{\chi}$, which can be treated as \eqref{eq:omb.chih.diff}.
More precisely, we use H\"older's inequality, Definition~\ref{double.null.def.2}, \eqref{def:dist}, Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Gamma.diff} to obtain \begin{equation*} \begin{split} &\: \|(\slashed{\nabla}\chi\star \underline{\chi})^{(1)} - (\slashed{\nabla}\chi\star \underline{\chi})^{(2)} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} (\max_{i=1,2} \|\slashed{\nabla}\chi^{(i)}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})})(\max_{i=1,2} \|\underline{\chi}^{(i)}\|_{L^2_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})}) \\ &\: + N^{-\frac 12} \|\underline{\chi}^{(1)} - \underline{\chi}^{(2)} \|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \|(\slashed{\nabla} \chi)^{(1)} \|_{L^2_{\underline{u}} L^\infty_u L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-\frac 12} \| \underline{\chi}^{(2)} \|_{ L^2_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \|\slashed{\nabla}^{(1)}(\chi^{(1)} - \chi^{(2)}) \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-\frac 12} \| \underline{\chi}^{(2)} \|_{ L^2_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \|\slashed{\Gamma}^{(1)} - \slashed{\Gamma}^{(2)}\|_{ L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\chi^{(1)} \|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} For the second group, we consider $\eta\star\chi\star\underline{\omega}$. We use H\"older's inequality, Definition~\ref{double.null.def.2}, \eqref{def:dist}, Proposition~\ref{prop:gamma.inverse.diff} and \eqref{eq:omb.chih.diff} to obtain \begin{equation*} \begin{split} &\: \|(\eta\star\chi\star\underline{\omega})^{(1)} - (\eta\star\chi\star\underline{\omega})^{(2)} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-\frac 12} \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\chi^{(1)}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})} \|\underline{\omega}^{(1)}\|_{L^2_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + N^{-\frac 12} \|(\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\chi^{(1)}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})} \|\underline{\omega}^{(1)}\|_{L^2_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})} \\ &\: + (\|\eta^{(1)}\|_{L^\infty_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})} + \|\eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}}, \gamma^{(1)})}) \|(\chi \underline{\omega})^{(1)} - (\chi\underline{\omega})^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} For the third group, we consider $\eta K$. 
Using H\"older's inequality, Definition~\ref{double.null.def.2} and \eqref{def:dist}, we obtain \begin{equation*} \begin{split} &\: \|(\eta K)^{(1)} - (\eta K)^{(2)}\|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-1} \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \|K^{(1)}\|_{L^\infty_u L^2_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-1}\|\eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)}) } \|K^{(1)} - K^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N}. \end{split} \end{equation*} To handle $\mathrm{II}$, we use \eqref{eq:nab3.diff.2} in Proposition~\ref{prop:operator.diff}. Using the estimates for $\beta^{(2)}$ given by Definition~\ref{double.null.def.2} (and the estimate for $\slashed{\nabla}_3^{(2)}\beta^{(2)}$ obtained after using \eqref{eq:null.Bianchi.1}), we obtain $$\|\mathrm{II} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ Finally, the term $\mathrm{III}$ can be bounded by H\"older's inequality, Sobolev embedding (Proposition~\ref{prop:Sobolev}), Definition~\ref{double.null.def.2} and \eqref{def:dist}, \begin{equation*} \begin{split} \|\mathrm{III} \|_{L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim &\: N^{-1} \| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})} \|\check{\sigma}^{(2)}\|_{L^\infty_u L^2_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-1}\| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \|\check{\sigma}^{(2)} \|_{L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N}. \end{split} \end{equation*} Combining the above considerations, we thus obtain \eqref{eq:bianchi.diff.1}. \pfstep{Step~2: The equation \eqref{eq:bianchi.diff.2}} We compute as in Step~1 to obtain \begin{equation*} \begin{split} \mbox{LHS of \eqref{eq:bianchi.diff.2}} =&\: \underbrace{(\slashed{\nabla}_4 \check{\sigma} + \div ^* \beta)^{(1)} - (\slashed{\nabla}_4 \check{\sigma} + \div ^* \beta)^{(2)} }_{=:\mathrm{I}}\\ &\: \underbrace{- (\slashed{\nabla}_4^{(1)} - \slashed{\nabla}_4^{(2)})\check{\sigma}^{(2)}}_{=:\mathrm{II}} - \underbrace{\{ (\div^*)^{(1)} - (\div^*)^{(2)}\} \beta^{(2)}}_{=:\mathrm{III}}. \end{split} \end{equation*} According to \eqref{eq:null.Bianchi.2}, $(\slashed{\nabla}_4 \check{\sigma} + \div ^* \beta)^{(1)} - (\slashed{\nabla}_4 \check{\sigma} + \div ^* \beta)^{(2)}$ consists of terms similar to those in $(\slashed{\nabla}_4\mu)^{(1)} - (\slashed{\nabla}_4\mu)^{(2)}$ and therefore $\mathrm{I}$ can be treated in a similar manner as in Proposition~\ref{prop:mu}. We thus obtain $$\|\mathrm{I}\|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim N^{-1} \|\mathrm{I}\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N}.$$ We omit the details.
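Let us also record the elementary observation behind the repeated gains in powers of $N$ in this proof (and below): since the subintervals satisfy $|u_{i+1} - u_i| + |\underline{u}_{j+1} - \underline{u}_j| \lesssim N^{-1}$, H\"older's inequality in the null variables gives, for instance,
\begin{equation*}
\|F\|_{L^1_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \leq |\underline{u}_{j+1} - \underline{u}_j|\, \|F\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim N^{-1}\, \|F\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})},
\end{equation*}
while passing from $L^2_{\underline{u}}$ to $L^1_{\underline{u}}$ (or from $L^2_u$ to $L^1_u$) by the Cauchy--Schwarz inequality gains a factor of $N^{-\frac 12}$.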
The term $\mathrm{II}$ can be controlled using \eqref{eq:nab4.diff.2} in Proposition~\ref{prop:operator.diff} after using the estimates for $\check{\sigma}^{(2)}$ given by combining Definition~\ref{double.null.def.2} and the equation \eqref{eq:null.Bianchi.2}, so that $$\|\mathrm{II}\|_{L^1_{\underline{u}}L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ Finally, the term $\mathrm{III}$ can be bounded by H\"older's inequality, Sobolev embedding (Proposition~\ref{prop:Sobolev}), Definition~\ref{double.null.def.2}, \eqref{def:dist}, Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:Gamma.diff} so that \begin{equation*} \begin{split} \|\mathrm{III} \|_{L^1_{\underline{u}} L^2_{u} L^2(S_{u,\underline{u}}, \gamma^{(1)})} \lesssim &\: N^{-1} \| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})} \|\beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}} W^{1,4}(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + N^{-1} \| \slashed\Gamma^{(1)} - \slashed\Gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^{\infty}(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: N^{-1}\| \gamma^{(1)} - \gamma^{(2)} \|_{L^\infty_u L^\infty_{\underline{u}} W^{1,2}(S_{u,\underline{u}},\gamma^{(1)})} \|\beta^{(2)} \|_{L^\infty_u L^2_{\underline{u}} W^{2,2}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N}. \end{split} \end{equation*} This concludes the proof of \eqref{eq:bianchi.diff.2}. \qedhere \end{proof} \begin{proposition}\label{prop:energy.est} The following energy estimates hold: \begin{align} \|\beta^{(1)} - \beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|(K^{(1)} - K^{(2)}, \,\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) \|_{(L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)}))^2} \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 14}}, \label{eq:energy.est.1} \\ \|\underline{\beta}^{(1)} - \underline{\beta}^{(2)}\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|(K^{(1)} - K^{(2)},\,\check{\sigma}^{(1)} - \check{\sigma}^{(2)})\|_{(L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)}))^2} \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 14}}. \label{eq:energy.est.2} \end{align} \end{proposition} \begin{proof} The proofs of \eqref{eq:energy.est.1} and \eqref{eq:energy.est.2} are similar; we will only treat \eqref{eq:energy.est.1}. This will be achieved by considering \eqref{eq:bianchi.diff.1}--\eqref{eq:bianchi.diff.3}. \pfstep{Step~1: Derivation of the main energy identities} We begin with \eqref{eq:bianchi.diff.1}, which is satisfied in the weak integrated sense (see~Definition~\ref{def:weaker.transport}).
Using Definition~\ref{def:weaker.transport} and the fact that $\beta^{(1)} = \beta^{(2)}$ a.e.~on $\{u = u_i\}$, this means that \begin{equation*} \begin{split} &\: \int_{\underline{u}_j}^{\underline{u}_{j+1}} \int_{S_{u,\underline{u}}} \langle\varphi, \beta^{(1)} - \beta^{(2)} \rangle \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} \underline{u} \\ &\: + \int_{\underline{u}_j}^{\underline{u}_{j+1}} \int_{u_i}^{u} \int_{S_{u',\underline{u}}} \langle \varphi, -\slashed{\nabla} (K^{(1)} -K^{(2)}) + (^*)^{(1)} \slashed{\nabla}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) + \mathrm{error}_{\beta,\,K,\,\check{\sigma}} \rangle (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} \\ &\: + \int_{\underline{u}_j}^{\underline{u}_{j+1}} \int_{u_i}^{u} \int_{S_{u',\underline{u}}} (\langle \varphi,(\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)})(\beta^{(1)} - \beta^{(2)}) \rangle + \langle \slashed{\nabla}_3\varphi, (\beta^{(1)}- \beta^{(2)})\rangle) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u} =0. \end{split} \end{equation*} A priori this holds for $\varphi \in C^1$, but using the bounds in Definition~\ref{double.null.def.2} and \eqref{eq:null.Bianchi.error.1}, we can apply a density argument to show that in fact it suffices to have $\varphi \in C^0_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$ and $\slashed{\nabla}_3 \varphi \in L^1_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. Therefore, for every fixed $\underline{u}\in [\underline{u}_j,\underline{u}_{j+1}]$, we can choose $$\varphi^A(u,\underline{u}',\vartheta) := (((\gamma^{(1)})^{-1})^{AB} (\beta^{(1)} - \beta^{(2)})_B)(u,\underline{u}',\vartheta) \mathbbm 1_{\underline{u}'\in [\underline{u}_j, \underline{u})}(\underline{u}'),$$ where $\mathbbm 1$ denotes the indicator function. We then obtain \begin{equation}\label{eq:EE.1} \begin{split} &\: \int_{\underline{u}_j}^{\underline{u}} \int_{S_{u,\underline{u}'}} |\beta^{(1)} - \beta^{(2)}|_{\gamma^{(1)}}^2 \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} \underline{u}' \\ &\: + 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} \langle \beta^{(1)} - \beta^{(2)}, -\slashed{\nabla} (K^{(1)} -K^{(2)}) + (^*)^{(1)} \slashed{\nabla}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) \rangle_{\gamma^{(1)}} (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \\ &\: + \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2\langle \beta^{(1)} - \beta^{(2)}, \mathrm{error}_{\beta,\,K,\,\check{\sigma}} \rangle_{\gamma^{(1)}} + (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)}) |\beta^{(1)} - \beta^{(2)}|_{\gamma^{(1)}}^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' =0.
\end{split} \end{equation} In a completely similar manner, but using \eqref{eq:bianchi.diff.2} and \eqref{eq:bianchi.diff.3} instead of \eqref{eq:bianchi.diff.1}, we obtain \begin{equation}\label{eq:EE.2} \begin{split} &\: \int_{u_i}^{u} \int_{S_{u',\underline{u}}} |K^{(1)} - K^{(2)}|^2 \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u' \\ &\: - 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (K^{(1)} - K^{(2)})\, \div^{(1)} (\beta^{(1)} - \beta^{(2)}) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \\ &\: + \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2 (K^{(1)} - K^{(2)}) \mathrm{error}_{K,\,\beta} + (\slashed{\mathrm{tr}}\chi^{(1)} - 2\omega^{(1)}) |K^{(1)} - K^{(2)}|^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' =0, \end{split} \end{equation} and \begin{equation}\label{eq:EE.3} \begin{split} &\: \int_{u_i}^{u} \int_{S_{u',\underline{u}}} |\check{\sigma}^{(1)} - \check{\sigma}^{(2)}|^2 \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u' \\ &\: - 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (\check{\sigma}^{(1)} - \check{\sigma}^{(2)})\, (\div^*)^{(1)} (\beta^{(1)} - \beta^{(2)}) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \\ &\: + \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2 (\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) \mathrm{error}_{\check{\sigma},\,\beta} + (\slashed{\mathrm{tr}}\chi^{(1)} - 2\omega^{(1)}) |\check{\sigma}^{(1)} - \check{\sigma}^{(2)}|^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' =0. \end{split} \end{equation} Our goal will be to derive an estimate after summing \eqref{eq:EE.1}, \eqref{eq:EE.2} and \eqref{eq:EE.3}. In the next two steps, we will estimate terms in \eqref{eq:EE.1}--\eqref{eq:EE.3}. We will then return to summing \eqref{eq:EE.1}, \eqref{eq:EE.2} and \eqref{eq:EE.3} in Step~4. \pfstep{Step~2: Handling the main angular terms} We note that the highest order terms in \eqref{eq:EE.1}--\eqref{eq:EE.3} involving angular derivatives of $\beta^{(1)} - \beta^{(2)}$, $K^{(1)} - K^{(2)}$ and $\check{\sigma}^{(1)} - \check{\sigma}^{(2)}$ cannot be directly controlled by $\mathrm{dist}$. Instead, we need to integrate by parts. Using also \eqref{Ricci.relation} and H\"older's inequality, we obtain \begin{equation}\label{eq:EE.error.1} \begin{split} &\: \left| 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} \langle \beta^{(1)} - \beta^{(2)}, -\slashed{\nabla} (K^{(1)} -K^{(2)}) + (^*)^{(1)} \slashed{\nabla}(\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) \rangle_{\gamma^{(1)}} (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right. \\ &\: \left. - 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (K^{(1)} - K^{(2)})\, \div^{(1)} (\beta^{(1)} - \beta^{(2)}) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right.\\ &\:\left. 
- 2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (\check{\sigma}^{(1)} - \check{\sigma}^{(2)})\, (\div^*)^{(1)} (\beta^{(1)} - \beta^{(2)}) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right|\\ = &\: \left|2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (K^{(1)} -K^{(2)}) (\beta^{(1)} - \beta^{(2)}) \cdot (\eta+\underline{\eta})^{(1)} \, (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right.\\ &\: \left. +2 \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) { }^*(\beta^{(1)} - \beta^{(2)}) \cdot (\eta+\underline{\eta})^{(1)} \, (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}'\right| \\ \lesssim &\: \frac 1N \|\beta^{(1)}-\beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}}L^2(S_{u,\underline{u}},\gamma^{(1)})} \|(K^{(1)} -K^{(2)},\, \check{\sigma}^{(1)} - \check{\sigma}^{(2)})\|_{(L^\infty_{\underline{u}} L^2_{u}L^2(S_{u,\underline{u}},\gamma^{(1)}))^2} \lesssim \frac{\mathrm{dist}^2}{N}. \end{split} \end{equation} \pfstep{Step~3: Estimating the error terms} We now handle the remaining terms in \eqref{eq:EE.1}, \eqref{eq:EE.2} and \eqref{eq:EE.3}: \begin{equation}\label{eq:EE.error.2} \begin{split} &\: \left| \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2\langle \beta^{(1)} - \beta^{(2)}, \mathrm{error}_{\beta,\,K,\,\check{\sigma}} \rangle_{\gamma^{(1)}} + (\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)}) |\beta^{(1)} - \beta^{(2)}|_{\gamma^{(1)}}^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}'\right| \\ &\: + \left| \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2 (K^{(1)} - K^{(2)}) \mathrm{error}_{K,\,\beta} + (\slashed{\mathrm{tr}}\chi^{(1)} - 2\omega^{(1)}) |K^{(1)} - K^{(2)}|^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right| \\ &\: + \left| \int_{\underline{u}_j}^{\underline{u}} \int_{u_i}^{u} \int_{S_{u',\underline{u}'}} (2 (\check{\sigma}^{(1)} - \check{\sigma}^{(2)}) \mathrm{error}_{\check{\sigma},\,\beta} + (\slashed{\mathrm{tr}}\chi^{(1)} - 2\omega^{(1)}) |\check{\sigma}^{(1)} - \check{\sigma}^{(2)}|^2) (\Omega^{(1)})^2 \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u'\,\, \mathrm{d} \underline{u}' \right| \\ \lesssim &\: \|\beta^{(1)}-\beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}}L^2(S_{u,\underline{u}},\gamma^{(1)})} \|\mathrm{error}_{\beta,\,K,\,\check{\sigma}}\|_{L^1_u L^2_{\underline{u}}L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \|\beta^{(1)}-\beta^{(2)}\|_{L^\infty_u L^2_{\underline{u}}L^2(S_{u,\underline{u}},\gamma^{(1)})}^2 \|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)} \|_{L^1_uL^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: +\|(K^{(1)} -K^{(2)},\, \check{\sigma}^{(1)} - \check{\sigma}^{(2)})\|_{(L^\infty_{\underline{u}} L^2_{u}L^2(S_{u,\underline{u}},\gamma^{(1)}))^2} \|(\mathrm{error}_{K,\,\beta}, \mathrm{error}_{\check{\sigma},\,\beta})\|_{(L^1_{\underline{u}} L^2_{u}L^2(S_{u,\underline{u}},\gamma^{(1)}))^2} \\ &\: +\|(K^{(1)} -K^{(2)},\, \check{\sigma}^{(1)} - \check{\sigma}^{(2)})\|_{(L^\infty_{\underline{u}} L^2_{u}L^2(S_{u,\underline{u}},\gamma^{(1)}))^2}^2 \|\slashed{\mathrm{tr}}\chi^{(1)} -
2\omega^{(1)}\|_{L^1_{\underline{u}} L^\infty_{u} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}^2}{N^{\frac 12}}, \end{split} \end{equation} where we have used \eqref{eq:null.Bianchi.error.1} and \eqref{eq:null.Bianchi.error.2}, as well as the estimates $$\|\slashed{\mathrm{tr}}\underline{\chi}^{(1)} - 2\underline{\omega}^{(1)} \|_{L^1_uL^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\mathrm{tr}}\chi^{(1)} - 2\omega^{(1)}\|_{L^1_{\underline{u}} L^\infty_{u} L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \lesssim N^{-\frac 12},$$ which in turn follow from the bounds in Definition~\ref{double.null.def.2} and H\"older's inequality. \pfstep{Step~4: Putting everything together} Summing \eqref{eq:EE.1}, \eqref{eq:EE.2}, \eqref{eq:EE.3}, and using \eqref{eq:EE.error.1} and \eqref{eq:EE.error.2}, we obtain that for every $(u,\underline{u})\in [u_i,u_{i+1}]\times [\underline{u}_j,\underline{u}_{j+1}]$, \begin{equation*} \begin{split} \int_{u_i}^{u} \int_{S_{u',\underline{u}}} &\: (|K^{(1)} - K^{(2)}|^2 + |\check{\sigma}^{(1)} - \check{\sigma}^{(2)}|^2) \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} u' \\ &\:+ \int_{\underline{u}_j}^{\underline{u}} \int_{S_{u,\underline{u}'}} |\beta^{(1)} - \beta^{(2)}|_{\gamma^{(1)}}^2 \Omega^{(1)} \,\mathrm{d} A_{\gamma^{(1)}}\, \mathrm{d} \underline{u}' \lesssim \frac{\mathrm{dist}^2}{N^{\frac 12}}. \end{split} \end{equation*} Since $(u,\underline{u})$ is arbitrary, we obtain \eqref{eq:energy.est.1} after taking square roots. \qedhere \end{proof} \subsection{Elliptic estimates for the Ricci coefficients}\label{sec:elliptic.est} We recall the following standard elliptic estimate for div-curl systems: \begin{proposition}\label{prop:elliptic} Let $(\mathbb S^2, \gamma)$ be a Riemannian manifold such that $\gamma \in C^1$ and the Gauss curvature $K_\gamma \in L^2(\mathbb S^2,\gamma)$. Then the following holds for all covariant symmetric tensors $\xi$ of rank $(r+1)$ belonging to $W^{1,2}(\mathbb S^2,\gamma)$: \begin{equation}\label{eq:Bochner} \int_{\mathbb S^2} (|\slashed{\nabla}\xi|_\gamma^2 + (r+1) K_\gamma |\xi|_{\gamma}^2) \,\mathrm{dA}_{\gamma} = \int_{\mathbb S^2} (|\div_\gamma\xi|_\gamma^2 + |\slashed{\mathrm{curl}}_\gamma\xi|_\gamma^2 + rK_\gamma |\slashed{\mathrm{tr}}_\gamma \xi|_\gamma^2) \,\mathrm{dA}_{\gamma}. \end{equation} In particular, there exists a constant $C>0$ depending only on $r$, $\|K_\gamma\|_{L^2(\mathbb S^2,\gamma)}$, the area $\mathrm{Area}(\mathbb S^2,\gamma)$, and the isoperimetric constant ${\bf I}(\mathbb S^2,\gamma)$ such that \begin{equation}\label{eq:elliptic.est} \|\slashed{\nabla}\xi\|_{L^2(\mathbb S^2, \gamma)} \leq C (\|\div_\gamma\xi\|_{L^2(\mathbb S^2, \gamma)} + \|\slashed{\mathrm{curl}}_\gamma\xi\|_{L^2(\mathbb S^2, \gamma)} + \|\xi\|_{L^2(\mathbb S^2, \gamma)} + \|\slashed{\mathrm{tr}}_\gamma \xi\|_{L^{\frac 83}(\mathbb S^2, \gamma)}). \end{equation} \end{proposition} \begin{proof} A proof of \eqref{eq:Bochner} can be found in Lemma~7.1 in \cite{Chr}.
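For orientation, we note that in the simplest case $r = 0$, i.e.~when $\xi$ is a one-form, the trace terms drop out and \eqref{eq:Bochner} reduces to the classical div-curl identity
\begin{equation*}
\int_{\mathbb S^2} (|\slashed{\nabla}\xi|_\gamma^2 + K_\gamma |\xi|_{\gamma}^2) \,\mathrm{dA}_{\gamma} = \int_{\mathbb S^2} (|\div_\gamma\xi|_\gamma^2 + |\slashed{\mathrm{curl}}_\gamma\xi|_\gamma^2) \,\mathrm{dA}_{\gamma}.
\end{equation*}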
In order to prove \eqref{eq:elliptic.est}, we first note that by H\"older's inequality and the Sobolev inequality (Proposition~\ref{prop:Sobolev}), \begin{equation*} \begin{split} &\: \int_{\mathbb S^2} ((r+1) |K_\gamma| |\xi|_{\gamma}^2 + r |K_\gamma| |\slashed{\mathrm{tr}}_\gamma \xi|_\gamma^2) \mathrm{dA}_{\gamma} \\ \lesssim &\: \|K_\gamma\|_{L^4(\mathbb S^2,\gamma)} (\|\xi\|_{L^2(\mathbb S^2, \gamma)} \|\xi\|_{L^4(\mathbb S^2, \gamma)} + \|\slashed{\mathrm{tr}}_\gamma \xi\|_{L^{\frac 83}(\mathbb S^2, \gamma)}^2) \\ \lesssim &\: \|\xi\|_{L^2(\mathbb S^2, \gamma)}^2 + \|\xi\|_{L^2(\mathbb S^2, \gamma)} \|\slashed{\nabla}\xi\|_{L^2(\mathbb S^2, \gamma)} + \|\slashed{\mathrm{tr}}_\gamma \xi\|_{L^{\frac 83}(\mathbb S^2, \gamma)}^2. \end{split} \end{equation*} Plugging this into \eqref{eq:Bochner} and applying Young's inequality to absorb $\|\slashed{\nabla}\xi\|_{L^2(\mathbb S^2, \gamma)}$, we obtain \eqref{eq:elliptic.est}. \qedhere \end{proof} \begin{proposition}\label{prop:nabla.eta} $$\|\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}},$$ $$\|\slashed{\nabla}^{(1)} (\underline{\eta}^{(1)} - \underline{\eta}^{(2)})\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\underline{\eta}^{(1)} - \underline{\eta}^{(2)})\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}.$$ \end{proposition} \begin{proof} By Definition~\ref{def:curv} and \eqref{eq:mu.def}, $$\div^{(1)} (\eta^{(1)} - \eta^{(2)}) = -(\mu^{(1)} - \mu^{(2)}) + K^{(1)} - K^{(2)} - (\div^{(1)} - \div^{(2)})\eta^{(2)},$$ $$\slashed{\mathrm{curl}}^{(1)}(\eta^{(1)} - \eta^{(2)}) = \check{\sigma}^{(1)}-\check{\sigma}^{(2)} - (\slashed{\mathrm{curl}}^{(1)} - \slashed{\mathrm{curl}}^{(2)}) \eta^{(2)}.$$ Applying Proposition~\ref{prop:elliptic} with $(\mathbb S^2,\gamma) = (S_{u,\underline{u}}, \gamma^{(1)})$ for every $(u,\underline{u})$, and using H\"older's inequality, we then obtain \begin{equation}\label{eq:eta.top} \begin{split} &\: \|\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac 1{N^{\frac 12}} \|\eta^{(1)} - \eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \|\mu^{(1)} - \mu^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \|K^{(1)} - K^{(2)}\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|K^{(1)} - K^{(2)}\|_{L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \frac 1{N^{\frac 12}}\|(\div^{(1)} - \div^{(2)})\eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \frac 1{N^{\frac 12}} \|(\slashed{\mathrm{curl}}^{(1)} - \slashed{\mathrm{curl}}^{(2)})\eta^{(2)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}. \end{split} \end{equation} The first four terms in \eqref{eq:eta.top} can be bounded above by $\lesssim \frac{\mathrm{dist}}{N^{\frac 14}}$ using Propositions~\ref{prop:eta}, \ref{prop:mu} and \ref{prop:energy.est}.
For the last two terms in \eqref{eq:eta.top}, we use Proposition~\ref{prop:gamma.inverse.diff} to bound the difference of the inverse metrics and use Proposition~\ref{prop:Gamma.diff} to control the difference of the connections, and combine them with the estimate in Proposition~\ref{prop:metric}. Using also the bound for $\eta^{(2)}$ given by Definition~\ref{double.null.def.2}, H\"older's inequality and Sobolev embedding (Proposition~\ref{prop:Sobolev}), we obtain \begin{equation*} \begin{split} &\: \|(\div^{(1)} - \div^{(2)})\eta^{(2)} \|_{L^2(S_{u,\underline{u}}, \gamma^{(1)})} + \|(\slashed{\mathrm{curl}}^{(1)} - \slashed{\mathrm{curl}}^{(2)})\eta^{(2)} \|_{L^2(S_{u,\underline{u}}, \gamma^{(1)})}\\ \lesssim &\: \|\gamma^{(1)} - \gamma^{(2)} \|_{L^4(S_{u,\underline{u}}, \gamma^{(1)})} \|\slashed{\nabla} \eta^{(2)}\|_{L^4(S_{u,\underline{u}}, \gamma^{(1)})} + \|\slashed\Gamma^{(1)} - \slashed\Gamma^{(2)} \|_{L^2(S_{u,\underline{u}}, \gamma^{(1)})} \|\eta^{(2)}\|_{L^\infty(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} Combining all the above bounds gives the desired estimates for $\slashed{\nabla}^{(1)} (\eta^{(1)} - \eta^{(2)})$. The estimate for $\underline{\eta}^{(1)} - \underline{\eta}^{(2)}$ can be derived in a similar manner but using instead the following equations\footnote{To derive the equation for $\slashed{\mathrm{curl}}^{(1)}(\underline{\eta}^{(1)} - \underline{\eta}^{(2)})$, we use \eqref{Ricci.relation}, and $\slashed{\mathrm{curl}}^{(1)} \slashed{\nabla} \log\Omega^{(1)} = \slashed{\mathrm{curl}}^{(2)} \slashed{\nabla} \log\Omega^{(2)} = 0$, in addition to \eqref{eq:mu.def}.}: $$\div^{(1)} (\underline{\eta}^{(1)} - \underline{\eta}^{(2)}) = -(\underline{\mu}^{(1)} - \underline{\mu}^{(2)}) + K^{(1)} - K^{(2)} - (\div^{(1)} - \div^{(2)})\underline{\eta}^{(2)},$$ $$\slashed{\mathrm{curl}}^{(1)}(\underline{\eta}^{(1)} - \underline{\eta}^{(2)}) = -(\check{\sigma}^{(1)}-\check{\sigma}^{(2)}) - (\slashed{\mathrm{curl}}^{(1)} - \slashed{\mathrm{curl}}^{(2)}) \underline{\eta}^{(2)}.$$ We omit the details. \qedhere \end{proof} \begin{proposition}\label{prop:nabla.chih} \begin{equation*} \begin{split} \|\slashed{\nabla}^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})\|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|\slashed{\nabla}^{(1)}(\hat{\underline{\chi}}^{(1)} - \hat{\underline{\chi}}^{(2)})\|_{ L^\infty_{\underline{u}} L^2_u L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}. \end{split} \end{equation*} \end{proposition} \begin{proof} We will only handle the estimate for $\hat{\chi}^{(1)} - \hat{\chi}^{(2)}$; that for $\hat{\underline{\chi}}^{(1)} - \hat{\underline{\chi}}^{(2)}$ can be treated similarly. \pfstep{Step~1: Estimates for $\div^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$} By Definition~\ref{def:curv}, \begin{equation}\label{eq:div.chih.diff} \begin{split} \div^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) = &\: - (\div^{(1)} - \div^{(2)})\hat{\chi}^{(2)} - (\beta^{(1)} - \beta^{(2)}) + \frac 12 (\slashed{\nabla}^{(1)}\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\nabla}^{(2)}\slashed{\mathrm{tr}}\chi^{(2)})\\ &\: - \f12 \{ [(\eta - \underline{\eta})\cdot (\chi - \slashed{\mathrm{tr}}\chi\gamma)]^{(1)} - [(\eta - \underline{\eta})\cdot (\chi - \slashed{\mathrm{tr}}\chi\gamma)]^{(2)} \}. \end{split} \end{equation} We now estimate each term in \eqref{eq:div.chih.diff}. 
By Propositions~\ref{prop:gamma.inverse.diff}, \ref{prop:Gamma.diff} and \ref{prop:metric} \begin{equation*} \begin{split} \| (\div^{(1)} - \div^{(2)})\hat{\chi}^{(2)} \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} The $\beta^{(1)} - \beta^{(2)}$ term can be estimated using Proposition~\ref{prop:energy.est} by \begin{equation*} \begin{split} \| \beta^{(1)} - \beta^{(2)} \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}. \end{split} \end{equation*} The remaining terms can be controlled using the bounds in Definition~\ref{double.null.def.2}, the estimates obtained in Propositions~\ref{prop:eta}, \ref{prop:trch}, \ref{prop:nabla.trch} and H\"older's inequality as follows: \begin{equation*} \begin{split} &\: \| \frac 12 (\slashed{\nabla}^{(1)}\slashed{\mathrm{tr}}\chi^{(1)} - \slashed{\nabla}^{(2)}\slashed{\mathrm{tr}}\chi^{(2)}) - \f12 \{ [(\eta - \underline{\eta})\cdot (\chi - \slashed{\mathrm{tr}}\chi\gamma)]^{(1)} - [(\eta - \underline{\eta})\cdot (\chi - \slashed{\mathrm{tr}}\chi\gamma)]^{(2)} \} \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation*} Combining all the above estimates, we thus obtain \begin{equation}\label{eq:div.chih.est} \|\div^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}. \end{equation} \pfstep{Step~2: Estimates for $\slashed{\mathrm{tr}}_{\gamma^{(1)}} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$} Next, we compute, using $\slashed{\mathrm{tr}}_{\gamma^{(1)}} \hat{\chi}^{(1)} = \slashed{\mathrm{tr}}_{\gamma^{(2)}} \hat{\chi}^{(2)} = 0$, that \begin{equation}\label{eq:elliptic.trch.exp} \slashed{\mathrm{tr}}_{\gamma^{(1)}} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) = -((\gamma^{(1)})^{-1} - (\gamma^{(2)})^{-1})\cdot \hat{\chi}^{(2)}. \end{equation} Combining \eqref{eq:elliptic.trch.exp} with Propositions~\ref{prop:gamma.inverse.diff} and \ref{prop:metric}, we obtain \begin{equation}\label{eq:elliptic.trch.est} \|\slashed{\mathrm{tr}}_{\gamma^{(1)}} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{equation} \pfstep{Step~3: Estimates for $\slashed{\mathrm{curl}}^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)})$ and conclusion of argument} To proceed, note that for any symmetric rank-$2$ tensor $\xi$, we have (see for instance computations leading up to (7.63) in \cite{Chr}) $$\slashed{\mathrm{curl}}^{(1)} \xi = -(^*\slashed{\nabla})^{(1)} \mathrm{tr}_{\gamma^{(1)}} \xi + ({ }^*\div)^{(1)} \xi.$$ Using \eqref{eq:div.chih.est} and \eqref{eq:elliptic.trch.est}, this implies \begin{equation}\label{eq:curl.chih.est} \|\slashed{\mathrm{curl}}^{(1)} (\hat{\chi}^{(1)} - \hat{\chi}^{(2)}) \|_{L^\infty_u L^2_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}. \end{equation} Finally, combining \eqref{eq:div.chih.est}, \eqref{eq:elliptic.trch.est} and \eqref{eq:curl.chih.est}, and using Propositions~\ref{prop:elliptic} and \ref{prop:chih}, we obtain the desired conclusion. 
\qedhere \end{proof} \subsection{Estimates for the measure-valued null dusts}\label{sec:diff.null.dust} \begin{proposition}\label{prop:diff.nu.0} The following estimates hold: \begin{equation}\label{eq:diff.nu.0.statement.1} \sup_{u'\in [u_i, u_{i+1}]} \sup_{\substack{\varphi(\underline{u},\vartheta)\in C^\infty_c \\ \|\varphi\|_{L^\infty_{\underline{u}} L^2(S_{u',\underline{u}},\gamma^{(1)})}\leq 1} } \left| \int_{H_{u'}} \varphi \,(\mathrm{d} \nu^{(1)}_{u'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \nu^{(2)}_{u'})\right| \lesssim \frac{\mathrm{dist}}{N}, \end{equation} \begin{equation}\label{eq:diff.nu.0.statement.2} \sup_{\underline{u}'\in [\underline{u}_j, \underline{u}_{j+1}]} \sup_{ \substack{ \varphi(u,\vartheta)\in C^\infty_c \\ \|\varphi\|_{L^\infty_u L^2(S_{u,\underline{u}'}, \gamma^{(1)})} \leq 1}} \left| \int_{\underline{H}_{\underline{u}'}} \varphi \,(\mathrm{d} \underline{\nu}^{(1)}_{\underline{u}'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \underline{\nu}^{(2)}_{\underline{u}'})\right| \lesssim \frac{\mathrm{dist}}{N}. \end{equation} \end{proposition} \begin{proof} In view of their similarities, we will only prove \eqref{eq:diff.nu.0.statement.1}. For every fixed $U\in [u_i, u_{i+1}]$, let $\varphi(\underline{u},\vartheta)$ be a smooth function compactly supported in $(\underline{u}_j,\underline{u}_{j+1})\times \mathbb S^2$ which satisfies $\|\varphi\|_{L^\infty_{\underline{u}} L^2(S_{U,\underline{u}},\gamma^{(1)})} \leq 1$, and define $\varphi_U (u,\underline{u},\vartheta)$ to be the unique solution to the following initial value problem: \begin{equation}\label{eq:diff.nu.varphi.eq} \begin{cases} \partial_u \varphi_U + \slashed{\nabla}_{b^{(1)}} \varphi_U = 0 \\ \varphi_U(U,\underline{u},\vartheta) = \varphi(\underline{u},\vartheta) \end{cases}. \end{equation} It then follows from integrating \eqref{eq:diff.nu.varphi.eq} and using the estimates in Definition~\ref{double.null.def.2} that \begin{equation}\label{eq:varphi.U.global.est} \|\varphi_U\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \lesssim \|\varphi\|_{L^\infty_{\underline{u}} L^2(S_{U,\underline{u}},\gamma^{(1)})} \lesssim 1.
\end{equation} We compute \begin{equation}\label{eq:compute.e3.frac.area.density} \begin{split} &\: (\partial_u + \slashed{\nabla}_{b^{(2)}}) (\varphi_U \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \\ = &\: \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \{ \slashed{\nabla}_{b^{(2)} - b^{(1)}} \varphi_U + \varphi_U(\partial_u + \slashed{\nabla}_{b^{(2)}})(\log \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \} \\ =&\: \div^{(2)} [\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \varphi_U] - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \div^{(1)}[ (b^{(2)} - b^{(1)})] \varphi_U \\ &\: + \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \varphi_U(\partial_u + \slashed{\nabla}_{b^{(2)}})(\log \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \\ =&\: \div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \varphi_U] + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \varphi_U, \end{split} \end{equation} where in the last line we have used (by \eqref{metric.derivative}) \begin{equation*} \begin{split} &\: (\partial_u + \slashed{\nabla}_{b^{(2)}})(\log \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) - \div^{(1)} (b^{(2)} - b^{(1)}) = (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}. \end{split} \end{equation*} Therefore, using \eqref{eq:diff.nu.varphi.eq}, the transport equation \eqref{eq:nu} corresponding to $\mathrm{d}\nu^{(1)}$ and $\mathrm{d}\nu^{(2)}$, the fact that $\mathrm{d}\nu^{(1)}_{u_i} = \mathrm{d}\nu^{(2)}_{u_i}$, and the estimates in Definition~\ref{def:ang.reg.null.dust}, \begin{equation*} \begin{split} &\: \left| \int_{H_U} \varphi \,(\mathrm{d} \nu^{(1)}_{U} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\, \mathrm{d} \nu^{(2)}_{U}) \right| \\ = &\: \left| - \int_{u_i}^U \int_{H_u} (\partial_u + \slashed{\nabla}_{b^{(2)}}) (\varphi_U \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \, \mathrm{d} \nu^{(2)}_{u} \,\mathrm{d} u \right| \\ =&\: \left| - \int_{u_i}^U \int_{H_u} \left\{ \div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \varphi_U] + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \varphi_U\right\} \, \mathrm{d} \nu^{(2)}_{u} \,\mathrm{d} u \right| \\ \lesssim &\: \|\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \varphi_U\|_{L^1_u L^\infty_{\underline{u}} L^1(S_{u,\underline{u}}, \gamma^{(2)})} + \|[(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \varphi_U\|_{L^1_u L^\infty_{\underline{u}} L^1(S_{u,\underline{u}}, \gamma^{(2)})} \\ \lesssim &\: \frac 1N \|b^{(2)} - b^{(1)}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(2)})} + \frac 1{N^{\frac 12}}\|(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}\|_{L^2_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(2)})} \lesssim \frac {\mathrm{dist}}{N}, \end{split} \end{equation*} where in the second-to-last inequality we have used \eqref{eq:varphi.U.global.est} and the Cauchy--Schwarz inequality; and in the last inequality we have
used \eqref{def:dist} and Proposition~\ref{prop:trch}. \qedhere \end{proof} \begin{proposition}\label{prop:diff.nu.1} The following estimates hold: \begin{equation}\label{eq:diff.nu.statement.1} \sup_{u'\in [u_i, u_{i+1}]} \sup_{ \substack{ \slashed{X}(\underline{u},\vartheta)\in C^\infty_c \\ \|\slashed X\|_{L^\infty_{\underline{u}} L^2(S_{u',\underline{u}},\gamma^{(1)})}\leq 1} }\left| \int_{H_{u'}} \slashed{\div}^{(1)} \slashed X \,(\mathrm{d} \nu^{(1)}_{u'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \nu^{(2)}_{u'})\right| \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}, \end{equation} \begin{equation}\label{eq:diff.nu.statement.2} \sup_{\underline{u}'\in [\underline{u}_j, \underline{u}_{j+1}]} \sup_{\substack{ \slashed{X}(u,\vartheta)\in C^\infty_c \\ \|\slashed X\|_{L^\infty_u L^2(S_{u,\underline{u}'},\gamma^{(1)})}\leq 1 }} \left| \int_{\underline{H}_{\underline{u}'}} \slashed{\div}^{(1)} \slashed X \,(\mathrm{d} \underline{\nu}^{(1)}_{\underline{u}'} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\mathrm{d} \underline{\nu}^{(2)}_{\underline{u}'})\right| \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{equation} \end{proposition} \begin{proof} We will only prove \eqref{eq:diff.nu.statement.1}; the estimate \eqref{eq:diff.nu.statement.2} is similar. Fix $U\in [u_i,\, u_{i+1}]$. \pfstep{Step~1: Choice of $\slashed{X}$} Let $\mathring{\slashed{X}}(\underline{u},\vartheta)$ be a $C^\infty_c$ $S$-tangent vector field on $(\underline{u}_j, \underline{u}_{j+1})\times \mathbb S^2$. Extend $\mathring{\slashed{X}}$ to $\slashed{X} (u,\underline{u},\vartheta)$, which is defined to be the unique solution to the following initial value problem: $$\begin{cases} \slashed{\nabla}_3^{(1)} \slashed{X} = 0 \\ \slashed{X}(U,\underline{u},\vartheta) = \mathring{\slashed{X}}(\underline{u},\vartheta) \end{cases}$$ so that we in particular have \begin{equation}\label{eq:L2controlofX} \|\slashed{X}\|_{L^\infty_u L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})}\leq 1. \end{equation} We now compute (using\footnote{Notice that we have enough regularity to justify this commutation, where the derivatives on the LHS of \eqref{eq:du.1.divX} are understood as weak derivatives.} \eqref{eq:commutation.3}) that \begin{equation}\label{eq:du.1.divX} \begin{split} &\: (\partial_u + \slashed{\nabla}_{b^{(1)}}) \div^{(1)}\slashed{X} \\ = &\: \Omega^{(1)} \left(-\frac 12 \slashed{\mathrm{tr}}\underline{\chi}^{(1)} \div^{(1)} \slashed{X} - \hat{\underline{\chi}}^{(1)}\cdot^{(1)} \slashed{\nabla}^{(1)} \slashed{X} +\underline{\beta}^{(1)} \cdot^{(1)} \slashed{X} - \hat{\underline{\chi}}^{(1)} \cdot^{(1)} \eta^{(1)}\cdot^{(1)} \slashed{X} \right) =: F_{\slashed X}. \end{split} \end{equation} On the other hand, computing as in \eqref{eq:compute.e3.frac.area.density} and using \eqref{eq:du.1.divX}, we obtain \begin{equation}\label{eq:du.2.divX} \begin{split} &\: (\partial_u + \slashed{\nabla}_{b^{(2)}}) (\div^{(1)}\slashed{X} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \\ = &\: \div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) (\div^{(1)}\slashed{X}) ] + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (\div^{(1)}\slashed{X}) + F_{\slashed X} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\\ =: &\: G_{\slashed X} + F_{\slashed X} \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}, \end{split} \end{equation} where $F_{\slashed X}$ is as in \eqref{eq:du.1.divX}.
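For the reader's convenience, we also spell out the first step in the derivation of \eqref{eq:du.2.divX}: abbreviating $h := \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}$ and $\psi := \div^{(1)}\slashed{X}$, the Leibniz rule together with \eqref{eq:du.1.divX} gives
\begin{equation*}
(\partial_u + \slashed{\nabla}_{b^{(2)}}) (\psi h) = h\, \slashed{\nabla}_{b^{(2)} - b^{(1)}} \psi + \psi h\, (\partial_u + \slashed{\nabla}_{b^{(2)}}) \log h + F_{\slashed X}\, h,
\end{equation*}
and the first two terms on the right-hand side are then rewritten in divergence form using \eqref{metric.derivative}, exactly as in \eqref{eq:compute.e3.frac.area.density}.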
\pfstep{Step~2: Application of \eqref{eq:nu}} We now apply the transport equation\footnote{Note that a priori, to apply \eqref{eq:nu} requires the test function to be $C^1_c$, but it can easily be checked by an approximation argument that $\div^{(1)}\slashed X$ and $(\div^{(1)}\slashed X) \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}$ are also admissible test functions.} for $\mathrm{d} \nu_u$ (in \eqref{eq:nu}) with \eqref{eq:du.1.divX} and \eqref{eq:du.2.divX}, and the fact that $\mathrm{d}\nu_{u_i}^{(1)} = \mathrm{d}\nu_{u_i}^{(2)}$ to obtain \begin{align} &\: \left| \int_{H_U} (\div^{(1)}\slashed X)\,(\mathrm{d} \nu_U^{(1)} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \,\mathrm{d}\nu_U^{(2)}) \right| \nonumber\\ = &\: \left| - \int_{u_i}^U \int_{H_u} (\partial_u + \slashed{\nabla}_{b^{(1)}})(\div^{(1)}\slashed X) \, \mathrm{d} \nu^{(1)}_u \,\mathrm{d} u\right. \nonumber\\ &\qquad \left. + \int_{u_i}^U \int_{H_u} (\partial_u + \slashed{\nabla}_{b^{(2)}}) ((\div^{(1)}\slashed X)\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}) \, \mathrm{d} \nu^{(2)}_u \,\mathrm{d} u\right| \nonumber\\ \leq &\: \left|\int_{u_i}^U \int_{H_u} F_{\slashed X} \, (\mathrm{d} \nu_u^{(1)} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \,\mathrm{d}\nu_u^{(2)})\, \mathrm{d} u \right| \label{eq:nu.diff.error.1}\\ &\: + \left|\int_{u_i}^U \int_{H_u} G_{\slashed X}\, \mathrm{d} \nu^{(2)}_u \,\mathrm{d} u\right|. \label{eq:nu.diff.error.2} \end{align} For the remainder of the proof we will estimate \eqref{eq:nu.diff.error.1} and \eqref{eq:nu.diff.error.2}. \pfstep{Step~3: Estimating \eqref{eq:nu.diff.error.1}} For the term \eqref{eq:nu.diff.error.1}, we recall the definition of $F_{\slashed X}$ in \eqref{eq:du.1.divX} and compute \begin{equation} \begin{split} F_{\slashed X} = &\: -\frac 12 \div^{(1)} (\Omega^{(1)} \slashed{\mathrm{tr}}\underline{\chi}^{(1)} \slashed{X}) + \frac 12 (\slashed{\nabla}_{\slashed X} (\Omega^{(1)}\slashed{\mathrm{tr}}\underline{\chi}^{(1)})) - \div^{(1)}(\Omega^{(1)}\hat{\underline{\chi}}^{(1)}\cdot \slashed{X}) + \slashed{X}\cdot \div^{(1)}(\Omega^{(1)}\hat{\underline{\chi}}^{(1)}) \\ &\: +\Omega^{(1)} \underline{\beta}^{(1)} \cdot \slashed{X} - \Omega^{(1)} \hat{\underline{\chi}}^{(1)} \cdot^{(1)} \eta^{(1)}\cdot^{(1)} \slashed{X} \\ =&\: -\div^{(1)}(F_{\slashed X,1}) + F_{\slashed X,2}, \end{split} \end{equation} where $F_{\slashed X,1}$ and $F_{\slashed X,2}$, given by $$F_{\slashed X,1} := \frac 12\Omega^{(1)} \slashed{\mathrm{tr}}\underline{\chi}^{(1)} \slashed{X} + \Omega^{(1)}\hat{\underline{\chi}}^{(1)}\cdot \slashed{X},$$ $$F_{\slashed X,2} :=\frac 12 (\slashed{\nabla}_{\slashed X} (\Omega^{(1)}\slashed{\mathrm{tr}}\underline{\chi}^{(1)})) + \slashed{X}\cdot \div^{(1)}(\Omega^{(1)}\hat{\underline{\chi}}^{(1)}) +\Omega^{(1)} \underline{\beta}^{(1)} \cdot \slashed{X} - \Omega^{(1)} \hat{\underline{\chi}}^{(1)} \cdot^{(1)} \eta^{(1)}\cdot^{(1)} \slashed{X},$$ obey the estimates $$\|F_{\slashed X,1} \|_{L^2_u L^\infty_{\underline{u}} L^2(S)} + \| F_{\slashed X,2} \|_{L^2_u L^\infty_{\underline{u}} L^2(S)} \lesssim 1.$$ (To see that, we use Definition~\ref{double.null.def.2}, \eqref{eq:L2controlofX} and H\"older's inequality.) 
Recalling then the definition of $\mathrm{dist}_\nu(\mathrm{d} \nu^{(1)}, \mathrm{d}\nu^{(2)})$ (see \eqref{def:dist.nu}), we obtain \begin{equation}\label{eq:nu.diff.error.1.final} \begin{split} \mbox{\eqref{eq:nu.diff.error.1}} \lesssim &\: \mathrm{dist}_\nu(\mathrm{d} \nu^{(1)}, \mathrm{d}\nu^{(2)}) \int_{u_i}^U (\|F_{\slashed X,1} \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})} + \| F_{\slashed X,2} \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}}, \gamma^{(1)})}) \,\mathrm{d} u \\ \lesssim &\: \frac 1{N^{\frac 12}} \mathrm{dist}_\nu(\mathrm{d} \nu^{(1)}, \mathrm{d}\nu^{(2)}) \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation} \pfstep{Step~4: Estimating \eqref{eq:nu.diff.error.2}} We further compute the $G_{\slashed X}$ terms (recall \eqref{eq:du.2.divX}). Using $(\div^{(1)} -\div^{(2)}) \slashed Y = \slashed Y \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}}$, we obtain \begin{equation}\label{eq:du.2.divX.1} \begin{split} &\: \div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) (\div^{(1)}\slashed{X}) ] \\ = &\: \div^{(2)}\div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \otimes \slashed{X} ] - \div^{(2)} [ \slashed{X} \div^{(2)}(\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)})) ] \\ &\: + \div^{(2)} [ \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) ( \slashed X \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} ) ], \end{split} \end{equation} and \begin{equation}\label{eq:du.2.divX.2} \begin{split} &\: [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (\div^{(1)}\slashed{X}) \\ =&\: \div^{(2)}\{ [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \slashed{X}\} - \slashed{\nabla}_{\slashed X} \{ [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\} \\ &\: + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}( \slashed X \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} ). 
\end{split} \end{equation} Therefore, \eqref{eq:du.2.divX}, \eqref{eq:du.2.divX.1} and \eqref{eq:du.2.divX.2} together imply that we have the following decomposition\footnote{Note that $G_{\slashed X, r}$ is a tensor of rank $r$.} $$G_{\slashed X} = \div^{(2)}\div^{(2)}G_{\slashed X,2} + \div^{(2)} G_{\slashed X, 1} + G_{\slashed X, 0},$$ where $$G_{\slashed X,2} = \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) \otimes \slashed{X},$$ \begin{equation*} \begin{split} &\: G_{\slashed X,1} = -\slashed{X} \div^{(2)}(\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)})) + \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} (b^{(2)} - b^{(1)}) ( \slashed X \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} ) \\ &\: + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \slashed{X}, \end{split} \end{equation*} $$G_{\slashed X,0} = - \slashed{\nabla}_{\slashed X} \{ [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}\} + [(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}]\frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}}( \slashed X \log \frac{\sqrt{\det \gamma^{(1)}}}{\sqrt{\det \gamma^{(2)}}} ). $$ By Definition~\ref{double.null.def.2} (and the definition of $\slashed X$), $G_{\slashed X,2} \in C^0_{\underline{u}} L^{\frac 43}(S_{u,\underline{u}},\gamma^{(1)})$, $G_{\slashed X,1},\, G_{\slashed X,0} \in C^0_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})$. Moreover, the following estimates are satisfied (using H\"older's inequality, Definition~\ref{double.null.def.2}, \eqref{eq:L2controlofX} and Sobolev embedding (Proposition~\ref{prop:Sobolev})): \begin{align*} \|G_{\slashed X,2}\|_{L^\infty_{\underline{u}}L^{\frac 43}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim &\: \|b^{(1)} - b^{(2)} \|_{L^\infty_{\underline{u}} L^4(S_{u,\underline{u}},\gamma^{(1)})} \\ \lesssim &\: \|\slashed{\nabla}^{(1)} (b^{(1)} - b^{(2)}) \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|b^{(1)} - b^{(2)} \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}, \\ \|G_{\slashed X,1}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim &\: \|\slashed{\nabla}^{(1)} (b^{(1)} - b^{(2)}) \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} + \|b^{(1)} - b^{(2)} \|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: + \| (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}\|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})}, \\ \|G_{\slashed X,0}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} \lesssim &\: \|\slashed{\nabla} ((\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)})\|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} \\ &\: +\|(\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(2)} - (\Omega \slashed{\mathrm{tr}}\underline{\chi})^{(1)}\|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}},\gamma^{(1)})} . 
\end{align*} Using Definition~\ref{double.null.def.2}, \eqref{def:dist}, Proposition~\ref{prop:trch} and the Cauchy--Schwarz inequality, this implies that \begin{equation}\label{eq:integralofG} \int_{u_i}^{u_{i+1}} ( \|G_{\slashed X,2}\|_{L^\infty_{\underline{u}}L^{\frac 43}(S_{u,\underline{u}},\gamma^{(1)})} + \|G_{\slashed X,1}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} + \|G_{\slashed X,0}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} )\,\mathrm{d} u \lesssim \frac {\mathrm{dist}}{N^{\frac 12}}. \end{equation} Hence, using the regularity estimate for $\mathrm{d}\nu_u$ in Definition~\ref{def:ang.reg.null.dust} together with \eqref{eq:integralofG}, we obtain \begin{equation}\label{eq:nu.diff.error.2.final} \begin{split} \mbox{\eqref{eq:nu.diff.error.2}} \lesssim \int_{u_i}^U ( \|G_{\slashed X,2}\|_{L^\infty_{\underline{u}}L^{\frac 43}(S_{u,\underline{u}},\gamma^{(1)})} + \|G_{\slashed X,1}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} + \|G_{\slashed X,0}\|_{L^\infty_{\underline{u}}L^{1}(S_{u,\underline{u}},\gamma^{(1)})} )\,\mathrm{d} u \lesssim \frac {\mathrm{dist}}{N^{\frac 12}}. \end{split} \end{equation} \pfstep{Step~5: Putting everything together} Using the estimates \eqref{eq:nu.diff.error.1.final} and \eqref{eq:nu.diff.error.2.final} for \eqref{eq:nu.diff.error.1} and \eqref{eq:nu.diff.error.2}, and returning to Step~2, we obtain $$\left| \int_{H_U} (\div^{(1)}\slashed X)\,(\mathrm{d} \nu_U^{(1)} - \frac{\sqrt{\det\gamma^{(1)}}}{\sqrt{\det\gamma^{(2)}}} \,\mathrm{d}\nu_U^{(2)}) \right| \lesssim \frac{\mathrm{dist}}{N^{\frac 12}}.$$ In view of (1) the arbitrariness of the prescription of $\slashed X$ at $u=U$ (subject to $\|\slashed X\|_{L^\infty_{\underline{u}}L^2(S_{U,\underline{u}},\gamma^{(1)})}\leq 1$) and (2) the arbitrariness of $U \in [u_i, u_{i+1}]$, we obtain \eqref{eq:diff.nu.statement.1}. \qedhere \end{proof} \subsection{Putting everything together: Proof of Theorem~\ref{thm:uniqueness}}\label{sec:uniqueness.everything} \begin{proof}[Proof of Theorem~\ref{thm:uniqueness}] Recalling the definition of the distance function in \eqref{def:dist}, we see that by Propositions~\ref{prop:metric}, \ref{prop:eta}, \ref{prop:chih}, \ref{prop:trch}, \ref{prop:nabla.trch}, \ref{prop:energy.est}, \ref{prop:nabla.eta}, \ref{prop:nabla.chih}, \ref{prop:diff.nu.0} and \ref{prop:diff.nu.1}, $$\mathrm{dist} \lesssim \frac{\mathrm{dist}}{N^{\frac 14}}.$$ Choosing $N$ large enough gives $\mathrm{dist} = 0$, which implies the desired uniqueness result. \qedhere \end{proof} \section{Weak approximation theorem}\label{sec.approx.thm} In this section we prove our final two theorems, Theorem~\ref{thm:main.local.dust} and Theorem~\ref{thm:reverse.Burnett}. Given what we have obtained so far (Theorems~\ref{main.thm} and \ref{thm:uniqueness}), the key step towards both Theorem~\ref{thm:main.local.dust} and Theorem~\ref{thm:reverse.Burnett} is an approximation result which allows us to approximate any null dust initial data set (with merely measure-valued null dust) by a smooth vacuum initial data set; see already Proposition~\ref{prop:final.approx}. This approximation will be carried out in the following steps: \begin{enumerate} \item We first show that all \emph{smooth null dust initial data sets} with two families of null dusts in double null foliation can be approximated by \emph{smooth vacuum initial data sets} (Proposition~\ref{prop.data.approx} in \textbf{Section~\ref{sec:approx.smooth.dust.by.vac}}).
\item Our next step is to approximate \emph{measure-valued null dust initial data} by \emph{smooth null dust initial data} (Proposition~\ref{prop:f.approx} in \textbf{Section~\ref{sec:f.approx}}). \item Combining the previous steps, we then achieve an approximation of \emph{measure-valued null dust initial data} by \emph{smooth vacuum initial data sets} (Proposition~\ref{prop:final.approx} in \textbf{Section~\ref{sec:final.approx}}). \end{enumerate} Once we obtain the main approximation result, we conclude the proofs of Theorem~\ref{thm:main.local.dust} and Theorem~\ref{thm:reverse.Burnett}. This will be carried out in \textbf{Section~\ref{sec:approx.final}}. Since the approximation results on $H_0$ and $\underline{H}_0$ are analogous, in what follows we will focus on $H_0$; for the case of $\underline{H}_0$, see Proposition~\ref{prop:final.approx.1}. \subsection{Approximating smooth null dust data by vacuum data}\label{sec:approx.smooth.dust.by.vac} Before we proceed, recall that Definition~\ref{def:SARCID} only gives a notion of null dust data in which the null dust is a measure. We now introduce a convention for smooth null dust data in order to carry out the approximation procedure. For this, we stipulate that $\mathrm{d}\nu_{\mathrm{init}}$ is absolutely continuous with respect to $\mathrm{d} A_\gamma \, \mathrm{d} \underline{u}$ and that, for a smooth function $f$, $$\mathrm{d} \nu_{\mathrm{init}} = \Phi^{-2} \Omega^{-2} f\, \mathrm{d} A_\gamma \, \mathrm{d} \underline{u}.$$ In this smooth setting, the constraint equation \eqref{eq:constraints.first.time} is replaced by \begin{equation}\label{eq:constraint.ODE} \frac{\partial^2 \Phi}{\partial\underline{u}^2}=2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi}{\partial\underline{u}}-\frac {1}8 |\frac{\partial\hat{\gamma}}{\partial \underline{u}}|_{\hat{\gamma}}^2 \Phi-\frac 12\Phi^{-1} f \mbox{ on $H_0$}, \end{equation} where $f$ is non-negative and $|\frac{\partial\hat{\gamma}}{\partial \underline{u}}|_{\hat{\gamma}}^2$ is defined as in \eqref{eq:def.|dubgamma|^2} (with obvious modifications on $\underline{H}_0$). \begin{proposition}\label{prop.data.approx} Let $K\in \mathbb N$. Fix an arbitrary smooth metric $\mathring{\gamma}$ on $S_{0,0}$. Extend $\mathring{\gamma}$ to $[0,\underline{u}_*]\times \mathbb S^2$ by \begin{equation}\label{eq:data.approx.Lgamma} \slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \mathring{\gamma} = 0. \end{equation} Assume the following: \begin{itemize} \item Let $\Omega$ be a given smooth positive function on $\{0 \}\times [0,\underline{u}_*]\times \mathbb S^2$. \item Suppose $\hat{\gamma}^{(dust)}$ is a smooth $S$-tangent covariant $2$-tensor on $\{0\} \times [0,\underline{u}_*]\times \mathbb S^2$, which is a Riemannian metric on $S_{0,\underline{u}}$ for every $\underline{u}\in [0, \underline{u}_*]$. \item Suppose $\Phi^{(dust)}$ is a positive smooth function on $\{0\}\times [0,\underline{u}_*]\times \mathbb S^2$. \item Assume that the following constraint equation is satisfied \begin{equation}\label{eq:data.approx.dust.constraint} \frac{\partial^2 \Phi^{(dust)}}{\partial\underline{u}^2}=2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi^{(dust)}}{\partial\underline{u}}-\frac {1}8 |\frac{\partial\hat{\gamma}^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}^{(dust)}}^2 \Phi^{(dust)}-\frac 12 \frac{f}{\Phi^{(dust)}} \end{equation} for a non-negative smooth function $f(\underline{u},\vartheta)$.
\item Suppose moreover that there exists $\emptyset \neq U\subset \mathbb S^2$ such that for every $\vartheta\in U$, $f(\underline{u},\vartheta) = 0$ for all $\underline{u}\in [0,\underline{u}_*]$. \end{itemize} Then there exists a sequence of smooth $\{(\hat{\gamma}_n, \,\Phi_n) \}_{n=1}^{+\infty}$ on $\{0\}\times [0,\underline{u}_*]\times \mathbb S^2$ such that the following holds: \begin{enumerate} \item (Positivity of $\hat{\gamma}_n$ and $\Phi_n$) For every $n$ sufficiently large, $\hat{\gamma}_n$ is a smooth $S$-tangent tensor on $\{0\}\times [0,\underline{u}_*]\times \mathbb S^2$ which is a Riemannian metric on $S_{0,\underline{u}}$ satisfying $\frac{\det {\hat{\gamma}_n}}{\det\mathring{\gamma}} = 1$, and $\Phi_n$ is a positive smooth function on $\{0\}\times [0,\underline{u}_*]\times \mathbb S^2$. \item (Vacuum constraint) For every $n\geq 1$, \begin{equation}\label{eq:data.approx.vac.constraint} \begin{cases} \frac{\partial^2 \Phi_n}{\partial\underline{u}^2}=2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi_n}{\partial\underline{u}}-\frac {1}8 |\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma_n}}^2 \Phi_n,\\ \Phi_n(\underline{u}=0) = \Phi^{(dust)}(\underline{u} =0),\quad \frac{\partial\Phi_n}{\partial\underline{u}} (\underline{u} = 0) = \frac{\partial\Phi^{(dust)}}{\partial\underline{u}}(\underline{u} = 0). \end{cases} \end{equation} \item (Uniform estimates and convergence) The following hold for some implicit constants depending only on $f$, $\mathring{\gamma}$, $\hat{\gamma}^{(dust)}$, $\Phi^{(dust)}$ and $\Omega$ but \underline{independent} of $n$: \begin{equation}\label{eq:data.approx.gamma.easy} \|\hat{\gamma}_n - \hat{\gamma}^{(dust)}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim n^{-1},\quad \|\frac{\partial \hat{\gamma}_n}{\partial\underline{u}} \|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 1, \end{equation} \begin{equation}\label{eq:data.approx.Phi.final} \|\Phi_n - \Phi^{(dust)}\|_{L^\infty_{\underline{u}}W^{K,\infty}} \lesssim n^{-1}, \quad \|\frac{\partial (\Phi_n - \Phi^{(dust)})}{\partial \underline{u}}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim n^{-1}. \end{equation} \item (Refined\footnote{Note that Part~(3) already contains the statement $\|\frac{\partial \hat{\gamma}_n}{\partial\underline{u}} \|_{L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 1$ (after using H\"older's inequality). The key point here is that we further analyze the dependence of the implicit constant. 
In particular, the constant $C>0$ in the estimate can be chosen uniformly for all $\hat{\gamma}^{(dust)}$ satisfying \eqref{eq:data.approx.uniform.assumption}.} uniform estimates for $\frac{\partial \hat{\gamma}_n}{\partial\underline{u}}$) For any fixed $\mathring{\gamma}$ (satisfying \eqref{eq:data.approx.Lgamma}) and $\hat{\gamma}_0$ (satisfying $\frac{\det {\hat{\gamma}_0}}{\det\mathring{\gamma}} = 1$), there exist $C>0$ and $\epsilon>0$ (both depending only on $\mathring{\gamma}$ and $\hat{\gamma}_0$) such that if \begin{equation}\label{eq:data.approx.uniform.assumption} \|\hat{\gamma}^{(dust)} - \hat{\gamma}_0\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} <\epsilon, \end{equation} then for all $n$ sufficiently large \begin{equation}\label{eq:data.approx.uniform.consequence} \|\frac{\partial \hat{\gamma}_n}{\partial\underline{u}} \|_{L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \leq C (1 + \|\frac{\partial\hat{\gamma}^{(dust)}}{\partial\underline{u}}\|_{L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|f\|_{L^1_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}). \end{equation} \item (Oscillatory estimates) There exist smooth functions $\{F_n\}_{n=1}^{+\infty}$ with uniform in $n$ estimates \begin{equation}\label{eq:data.approx.Fn.est} \|F_n\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1 \end{equation} such that \begin{equation}\label{eq:data.approx.dgamma.weak} \| [|\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 - |\frac{\partial\hat{\gamma}^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}^{(dust)}}^2] (\Phi^{(dust)})^2 - 4 f -\frac 1n \frac{\partial F_n}{\partial \underline{u}} \|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim n^{-1}, \end{equation} In both \eqref{eq:data.approx.Fn.est} and \eqref{eq:data.approx.dgamma.weak}, the implicit constants depend only on $f$, $\mathring{\gamma}$, $\hat{\gamma}^{(dust)}$ and $\Phi^{(dust)}$ but are \underline{independent} of $n$. \end{enumerate} \end{proposition} \begin{proof} Since there exists $\emptyset \neq U\subset \mathbb S^2$ such that $f(\underline{u},\vartheta) = 0$ for $\vartheta\in U$ and $\underline{u} \in [0,\underline{u}_*]$, we can work with a local coordinate system on $\mathbb S^2$. \pfstep{Step~1: Construction of $\hat{\gamma}_n$} Suppose that in local coordinates $\hat{\gamma}$ be given by $$\hat{\gamma}^{(dust)}=\left( \begin{array}{cc} a & b \\ b & d \end{array} \right)$$ where $a$, $b$ and $d$ are smooth functions of $\underline{u}$ and $\vartheta$. By assumption $\hat{\gamma}^{(dust)} = \mathring{\gamma}$ and thus we have \begin{equation}\label{det.condition} ad-b^2=\det \mathring{\gamma}. \end{equation} Notice moreover that since $\hat{\gamma}$ is positive definite, we must have $a>0$ and $d>0$. We now define the sequence $\hat{\gamma}_n$ by \begin{equation}\label{eq:data.approx.gamma.def} \hat{\gamma}_n:=\left( \begin{array}{cc} a+\frac{(ad - b^2)}{d} \frac{2 f^{\frac 12}}{\Phi^{(dust)}}\frac{1}{k n}\sin(k n\underline{u}) & b \\ b & d - (ad - b^2)\frac{\frac{2 f^{\frac 12}}{\Phi^{(dust)}}\frac{1}{k n}\sin(k n\underline{u})}{a+\frac{(ad-b^2)}{d}\frac{2 f^{\frac 12}}{\Phi^{(dust)}}\frac{1}{k n}\sin(k n\underline{u})} \end{array} \right), \end{equation} where $k$ is some large but fixed real parameter chosen so that $\hat{\gamma}_n$ is positive definite for all $n\geq 1$. 
Note that \eqref{eq:data.approx.gamma.def} is well-defined since\footnote{It is for this step that we have used the condition of non-negativity of $\Phi^{(dust)}$ and $f$.} $a>0$, $d > 0$, $\Phi^{(dust)} \geq 0$ and $f\geq 0$. Furthermore, \eqref{eq:data.approx.gamma.def} is chosen so that indeed by \eqref{det.condition}, \begin{equation*} \begin{split} \det(\hat{\gamma}_n)=&\:(ad-b^2) \{ 1+ \frac{2 f^{\frac 12}}{\Phi^{(dust)}} \frac{1}{k n}\sin(k n\underline{u})-\frac{2 a f^{\frac 12}\frac{1}{k n}\sin(k n\underline{u})}{\Phi^{(dust)} [a+\frac{(ad-b^2)}{d}\frac{2 f^{\frac 12}}{\Phi^{(dust)}} \frac{1}{k n}\sin(k n\underline{u})]}\\ &\: \qquad\qquad -\frac{(ad-b^2)\frac{4 f}{ d (k n)^2 (\Phi^{(dust)})^2}\sin^2(k n\underline{u})}{a + \frac{(ad-b^2)}{d}\frac{2 f^{\frac 12}}{\Phi^{(dust)}}\frac{1}{k n}\sin(k n\underline{u})} \}=ad-b^2 = \det\hat{\gamma}^{(dust)} = \det\mathring{\gamma}. \end{split} \end{equation*} From the formula \eqref{eq:data.approx.gamma.def}, the smoothness of $(a,\,b,\,d,\,f)$, and the positivity of $a$ and $d$, it immediate follows that \eqref{eq:data.approx.gamma.easy} holds. To obtain \eqref{eq:data.approx.uniform.consequence} and \eqref{eq:data.approx.dgamma.weak}, we compute \begin{equation}\label{eq:data.approx.dgamma.prelim} \frac{\partial}{\partial\underline{u}}\hat\gamma_n=\left( \begin{array}{cc} \frac{\partial a}{\partial\underline{u}} + \frac{(ad-b^2)}{d} \frac 2{\Phi^{(dust)}} (\frac{f}{ d})^{\frac 12}\cos(k n\underline{u}) & \frac{\partial b}{\partial\underline{u}} \\ \frac{\partial b}{\partial\underline{u}} & \frac{\partial d}{\partial\underline{u}} - \frac{(ad-b^2)}{a}\frac{2 f^{\frac 12}}{ \Phi^{(dust)}} \cos(k n\underline{u}) \end{array} \right)+O_K(\frac 1n), \end{equation} where $O_K(\frac 1n)$ denotes terms whose $L^\infty_{\underline{u}} W^{K,\infty}(S_{u,\underline{u}})$ norm is bounded above up to a constant by $\frac 1n$ (where the constant depends on $f$, $\mathring{\gamma}$, $\hat{\gamma}^{(dust)}$ and $\Phi^{(dust)}$). From this formula we obtain \eqref{eq:data.approx.uniform.consequence}. Indeed, \begin{itemize} \item the $\frac{\partial a}{\partial\underline{u}}$, $\frac{\partial b}{\partial \underline{u}}$, $\frac{\partial d}{\partial \underline{u}}$ terms in the $L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})$ norm can be controlled by $\|\frac{\partial\hat{\gamma}^{(dust)}}{\partial\underline{u}}\|_{L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}$; \item the $O_K(\frac 1n)$ terms in the $L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})$ norm can be bounded by $1$ after choosing $n$ to be sufficient large; \item the $\frac{(ad-b^2)}{d} (\frac{4 f}{\Phi^{(dust)} d})^{\frac 12}\cos(k n\underline{u})$ and $\frac{(ad-b^2)}{a}(\frac{4 f}{ \Phi^{(dust)}})^{\frac 12}\cos(k n\underline{u})$ terms in the $L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})$ norm can be controlled by the $\| f\|_{L^1_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma}) }$ norm. Moreover, the constant of the estimate can be chosen to depend continuously on $\hat{\gamma}_n$. In particular, the constant can be chosen to be uniform under the assumption \eqref{eq:data.approx.uniform.assumption}. 
\end{itemize} To establish \eqref{eq:data.approx.dgamma.weak}, we compute using \eqref{eq:data.approx.gamma.def} and \eqref{eq:data.approx.dgamma.prelim} that \begin{equation}\label{eq:data.approx.dgamma.weak.2} \begin{split} |\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 =&\: \left((\hat{\gamma}_n^{-1})^{AC}(\hat{\gamma}_n^{-1})^{BD}(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_n)_{AB}(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_n)_{CD}\right)(\underline{u},\vartheta)\\ =&\: \left(((\hat{\gamma}^{(dust)})^{-1})^{AC} ((\hat{\gamma}^{(dust)})^{-1})^{BD}(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_n)_{AB}(\frac{\partial}{\partial\underline{u}}\hat{\gamma}_n)_{CD}\right)(\underline{u},\vartheta) +O_K(\frac 1n) \\ =&\: \frac{d^2}{(ad-b^2)^2} [\frac{\partial a}{\partial\underline{u}} + \frac{(ad-b^2)}{d} \frac{2 f^{\frac 12}}{ \Phi^{(dust)}} \cos(k n\underline{u})]^2 - \frac{2b^2}{(ad-b^2)^2}(\frac{\partial b}{\partial\underline{u}})^2 \\ &\: + \frac{a^2}{(ad-b^2)^2} [ \frac{\partial d}{\partial\underline{u}} - \frac{(ad-b^2)}{a} \frac{2 f^{\frac 12}}{ \Phi^{(dust)}} \cos(k n\underline{u}) ]^2 + O_K(\frac 1n) \\ = &\: \frac{1}{(ad-b^2)^2} [ (d\frac{\partial a}{\partial\underline{u}})^2 -2 (b \frac{\partial b}{\partial\underline{u}})^2 + a (\frac{\partial d}{\partial\underline{u}})^2] + \frac{4 f }{(\Phi^{(dust)})^2} (1+ \cos(2kn\underline{u})) \\ &\: + \frac{2 d}{ad-b^2} \frac{\partial a}{\partial\underline{u}} \frac{2 f^{\frac 12}}{ \Phi^{(dust)}}\cos(k n\underline{u}) - \frac{2 a}{ad-b^2}\frac{\partial d}{\partial\underline{u}}\frac{2 f^{\frac 12}}{ \Phi^{(dust)}}\cos(k n\underline{u}) + O_K(\frac 1n). \end{split} \end{equation} We now analyze these terms in \eqref{eq:data.approx.dgamma.weak.2}. First, $$\frac{1}{(ad-b^2)^2} [ (d\frac{\partial a}{\partial\underline{u}})^2 -2 (b \frac{\partial b}{\partial\underline{u}})^2 + a (\frac{\partial d}{\partial\underline{u}})^2] = |\frac{\partial\hat{\gamma}^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}^{(dust)}}^2.$$ Next, for the highly oscillatory terms, we can write \begin{equation*} \begin{split} &\: (\Phi^{(dust)})^2 \times \{ \frac{4 f }{(\Phi^{(dust)})^2} \cos(2kn\underline{u}) + \frac{2 }{ad-b^2} (d\frac{\partial a}{\partial\underline{u}} - a\frac{\partial d}{\partial\underline{u}}) \frac{2 f^{\frac 12}}{ \Phi^{(dust)}}\cos(k n\underline{u}) \}\\ =&\: \frac 1 n \frac{\partial}{\partial \underline{u}} [\frac{2 f}{k } \sin(2kn\underline{u}) + \frac{4 }{k(ad-b^2)} (d\frac{\partial a}{\partial\underline{u}} - a\frac{\partial d}{\partial\underline{u}}) f^{\frac 12} \Phi^{(dust)} \sin(k n\underline{u})] \\ &\: - \frac 1 n [\frac{\partial}{\partial \underline{u}} \frac{2 f}{k }] \sin(2kn\underline{u}) - \frac 1n [\frac{\partial}{\partial\underline{u}} (\frac{4 }{k(ad-b^2)} (d\frac{\partial a}{\partial\underline{u}} - a\frac{\partial d}{\partial\underline{u}}) f^{\frac 12}\Phi^{(dust)}) ]\sin(k n\underline{u}) \\ =&\: \frac 1 n \frac{\partial}{\partial \underline{u}} [\frac{2 f}{k } \sin(2kn\underline{u}) + \frac{4 }{k(ad-b^2)} (d\frac{\partial a}{\partial\underline{u}} - a\frac{\partial d}{\partial\underline{u}}) f^{\frac 12} \Phi^{(dust)} \sin(k n\underline{u})] + O_K(\frac 1n). 
\end{split} \end{equation*} Thus, defining $$F_n:= \frac{2 f}{k } \sin(2kn\underline{u}) + \frac{4 }{k(ad-b^2)} (d\frac{\partial a}{\partial\underline{u}} - a\frac{\partial d}{\partial\underline{u}}) f^{\frac 12} \Phi^{(dust)} \sin(k n\underline{u}),$$ which clearly satisfies \eqref{eq:data.approx.Fn.est}, it follows that \eqref{eq:data.approx.dgamma.weak} holds. \pfstep{Step~2: Estimates for $\Phi_n$} Define now $\Phi_n$ by the initial value problem \eqref{eq:data.approx.vac.constraint}. Since $\Omega$ and $\hat{\gamma}_n$ are smooth, this linear ODE has a unique solution. Our goal now is to prove \eqref{eq:data.approx.Phi.final}, from which the positivity of $\Phi_n$ (for $n$ large) would also follow. By \eqref{eq:data.approx.dust.constraint} and \eqref{eq:data.approx.vac.constraint}, $(\Phi_n-\Phi^{(dust)})$ satisfies the ODE \begin{equation}\label{eqn:Phi.diff} \begin{split} &\:\frac{\partial^2(\Phi_n-\Phi^{(dust)})}{\partial\underline{u}^2}\\ =&\: 2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial(\Phi_n-\Phi)}{\partial\underline{u}}-\frac 18(|\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 - |\frac{\partial\hat{\gamma}^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}^{(dust)}}^2)\Phi^{(dust)} \\ &\:- \frac 18 |\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 (\Phi_n-\Phi^{(dust)}) +\frac 12 \frac{f}{\Phi^{(dust)}}, \end{split} \end{equation} where the initial data for both $\Phi_n-\Phi^{(data)}$ and $\frac{\partial}{\partial\underline{u}}(\Phi_n-\Phi^{(data)})$ vanish. Using \eqref{eq:data.approx.dgamma.weak} and then integrating by parts and using \eqref{eq:data.approx.Fn.est}, \begin{equation}\label{eq:data.approx.Phi.est.osc} \begin{split} &\: \int_0^{\underline{u}} [-\frac 18(|\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 - |\frac{\partial\hat{\gamma}^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}^{(dust)}}^2)\Phi^{(dust)} +\frac 12 \frac{f}{\Phi^{(dust)}}](\underline{u}',\vartheta) \,\mathrm{d} \underline{u}' \\ =&\: - \frac 1{8n} \int_0^{\underline{u}} \frac{1}{\Phi^{(dust)}}\frac{\partial F_n}{\partial \underline{u}} (\underline{u}',\vartheta) \,\mathrm{d} \underline{u}' + O_K(\frac 1n) = O_K(\frac 1n). \end{split} \end{equation} Integrating \eqref{eqn:Phi.diff} and using \eqref{eq:data.approx.Phi.est.osc}, we obtain \begin{equation}\label{eqn:Phi.diff.int} \begin{split} &\: \frac{\partial(\Phi_n-\Phi^{(dust)})}{\partial\underline{u}} (\underline{u}, \vartheta) \\ = &\: \int_0^{\underline{u}} [2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial(\Phi_n-\Phi)}{\partial\underline{u}}(\underline{u}',\vartheta) - \frac 18 |\frac{\partial\hat{\gamma}_n}{\partial \underline{u}}|_{\hat{\gamma}_n}^2 (\Phi_n-\Phi^{(dust)})(\underline{u}',\vartheta)] \,\mathrm{d} \underline{u}' + O_K(\frac 1n). 
\end{split} \end{equation} Taking the $W^{K,\infty}$ norm along the $2$-spheres, \eqref{eqn:Phi.diff.int} implies \begin{equation}\label{eq:data.approx.Phi.almost} \begin{split} &\: \sup_{\underline{u}' \in [0,\underline{u}]} \|\frac{\partial(\Phi_n-\Phi^{(dust)})}{\partial\underline{u}} \|_{W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \\ \lesssim &\: \int_0^{\underline{u}} [\sup_{\underline{u}'' \in [0, \underline{u}'] }\|\frac{\partial(\Phi_n-\Phi^{(dust)})}{\partial\underline{u}} \|_{W^{K,\infty}(S_{0,\underline{u}''},\mathring{\gamma})} + \sup_{\underline{u}'' \in [0, \underline{u}'] } \|\Phi_n-\Phi^{(dust)} \|_{W^{K,\infty}(S_{0,\underline{u}''},\mathring{\gamma})}] \,\mathrm{d} \underline{u}' + n^{-1} \\ \lesssim &\: \int_0^{\underline{u}} \sup_{\underline{u}'' \in [0, \underline{u}'] }\|\frac{\partial(\Phi_n-\Phi^{(dust)})}{\partial\underline{u}} \|_{W^{K,\infty}(S_{0,\underline{u}''},\mathring{\gamma})} \,\mathrm{d} \underline{u}' + n^{-1}, \end{split} \end{equation} where in the last line we used the fundamental theorem of calculus and the initial vanishing of $\Phi_n - \Phi^{(data)}$ to control $\sup_{\underline{u}'' \in [0, \underline{u}'] } \|\Phi_n-\Phi^{(dust)} \|_{W^{K,\infty}(S_{0,\underline{u}''},\mathring{\gamma})}$. The estimate \eqref{eq:data.approx.Phi.final} therefore follows from first applying Gr\"onwall's inequality to \eqref{eq:data.approx.Phi.almost}, and then using the fundamental theorem of calculus to estimate $\|\Phi_n-\Phi^{(dust)} \|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}$. \qedhere \end{proof} \subsection{Approximating measure-valued null dust data by smooth null dust data}\label{sec:f.approx} \begin{proposition}\label{prop:f.approx} Let $K\in \mathbb N$. Assume that we are given $\mathring{\gamma}$, $\Omega$ and $\mathrm{d}\nu_{\mathrm{init}}$ on $[0,\underline{u}_*]\times \mathbb S^2$ as follows: \begin{enumerate} \item Let $\mathring{\gamma}$ be an arbitrary smooth metric on $S_{0,0}$. Extend $\mathring{\gamma}$ to $[0,\underline{u}_*]\times \mathbb S^2$ by $$\slashed{\mathcal L}_{\frac{\partial}{\partial\underline{u}}} \mathring{\gamma} = 0.$$ \item Let $\Omega$ be an arbitrary positive smooth function on $[0,\underline{u}_*]\times \mathbb S^2$. \item Let $\mathrm{d}\nu_{\mathrm{init}}$ be a non-negative Radon measure on $(0,\underline{u}_*) \times \mathbb S^2$ such that for $\varphi^{(k)}$ being a rank-$k$ tensor $k$-th differentiable along the $\mathbb S^2$ directions, \begin{equation}\label{eq:dnu.init.bound} \sup \left\{ \sum_{0\leq k\leq K}\left| \int_{[0,\underline{u}_*]\times \mathbb S^2} (\mathring{\div}{}^k\varphi^{(k)})(\underline{u},\vartheta)\, \mathrm{d}\nu_{\mathrm{init}} \right| : \|\varphi^{(k)}\|_{L^\infty_{\underline{u}}L^1(S_{0,\underline{u}},\mathring{\gamma})} \leq 1 \right\} =:\Lambda < +\infty, \end{equation} \end{enumerate} Then there exists a sequence of smooth functions $\{f_m\}_{m=1}^{+\infty}$, with $f_m:[0,\underline{u}_*]\times \mathbb S^2\to [0,\infty)$ such that \begin{enumerate} \item $\Omega^{-2} f_m\,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u}$ converges to $\mathrm{d}\nu_{\mathrm{init}}$ in the weak-* topology as $m\to +\infty$. \item For every $m\in \mathbb N$, $f_m$ satisfies the bound \begin{equation}\label{eq:fm.uniform} \|f_m\|_{L^1_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 1. 
\end{equation} \item For $0\leq k\leq K$, $f_m$ satisfies the quantitative convergence estimate \begin{equation}\label{eq:fm.quantitative} \begin{split} &\: \left| \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} (\mathring{\div}{}^k\varphi^{(k)}) \Omega^{-2} f_m \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u} - \int_{(0,\underline{u}_*) \times \mathbb S^2} \mathring{\div}{}^k\varphi^{(k)} \,\mathrm{d}\nu_{\mathrm{init}} \right|\\ \lesssim &\: 2^{-m}\|\frac{\partial\varphi^{(k)}}{\partial\underline{u}}\|_{L^2_{\underline{u}}L^1(S_{0,\underline{u}},\mathring{\gamma})} + 2^{-2m} \|\varphi^{(k)}\|_{L^\infty_{\underline{u}} L^1(S_{0,\underline{u}}, \mathring{\gamma})}, \end{split} \end{equation} for every continuous rank-$k$ $S$-tangent tensor field $\varphi^{(k)}$ such that $\frac{\partial\varphi^{(k)}}{\partial\underline{u}} \in L^2_{\underline{u}}L^1(S_{0,\underline{u}},\mathring{\gamma})$. \end{enumerate} Here, the implicit constants in \eqref{eq:fm.uniform} and \eqref{eq:fm.quantitative} depend only on $\underline{u}_*$, $\mathring{\gamma}$, $\Omega$ and $\Lambda$. Finally, if $\mathrm{d}\nu_{\mathrm{init}}$ is supported on $[0,\underline{u}_*]\times U^c$ for some $U\subset \mathbb S^2$, then for any open $V\subseteq U$ with $\overline{V} \subset U$, $f_m$ can be chosen so that $\mathrm{supp}(f_m)\subset [0,\underline{u}_*]\times V^c$. \end{proposition} \begin{proof} In this proof, we will suppress the explicit dependence on $\mathring{\gamma}$ in the norms if there is no risk of confusion. \pfstep{Step~1: Definition of $\{\widetilde{f}_\epsilon\}_{\epsilon\in (0,10^{-10}\underline{u}_*]}$} We first define an auxiliary one parameter family of (not necessarily smooth) functions $\{\widetilde{f}_\epsilon\}_{\epsilon\in (0,10^{-10}\underline{u}_*]}$. The functions $\widetilde{f}_\epsilon$ are obtained by mollifying $\mathrm{d}\nu_{\mathrm{init}}$ in the $\underline{u}$-direction and then taking the density with respect to $\Omega^{-2} \, \mathrm{d} A_{\mathring{\gamma}}\, \mathrm{d} \underline{u}$. Let \begin{itemize} \item $\varphi:[0,\underline{u}_*]\times \mathbb S^2\to \mathbb R$ be a smooth tensor field; and \item $\{\zeta_i\}_{i=1}^3$ be a smooth partition of unity of $[0,\underline{u}_*]$ such that $\mathrm{supp}(\zeta_1) \subset [0,\frac{\underline{u}_*}{3}]$, $\mathrm{supp}(\zeta_2) \subset [\frac{\underline{u}_*}{4}, \frac{3\underline{u}_*}{4}]$, $\mathrm{supp}(\zeta_3)\subset [\frac{2\underline{u}_*}3, \underline{u}_*]$ and $\sum_{i=1}^3 \zeta_i \equiv 1$; and \item $\varrho:\mathbb R\to \mathbb R$ be a non-negative smooth even cutoff function with $\mathrm{supp}\varrho \subseteq [-1,1]$ and $\int_{\mathbb R} \varrho = 1$. 
\end{itemize} Given $\epsilon\in (0,10^{-10}\underline{u}_*]$, define\footnote{Note that $\varphi_\epsilon$ is defined on all of $[0,\underline{u}_*]$ because we have shifted $\varphi$ on the support of $\zeta_1$ and $\zeta_3$ near the boundary.} $\varphi_\epsilon:[0,\underline{u}_*]\to \mathbb R$ by \begin{equation}\label{eq:varphi.ep.def.in.data} \begin{split} \varphi_\epsilon(\underline{u}',\vartheta):= &\: \int_{\mathbb R} (\zeta_1\varphi)(\underline{u}+\epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} + \int_{\mathbb R} (\zeta_2\varphi)(\underline{u},\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} \\ &\: + \int_{\mathbb R} (\zeta_3\varphi)(\underline{u}-\epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} \\ =&\: \sum_{i=1}^3 \int_{\mathbb R} (\zeta_i\varphi)(\underline{u} + \alpha_i\epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u}, \end{split} \end{equation} where $\alpha_1 = 1$, $\alpha_2 = 0$ and $\alpha_3 = -1$. It is easy to check that $\varphi_\epsilon \to \varphi$ uniformly. Note that (for an implicit constant independent of $\varphi$) $ \sup_{\underline{u}' \in [0, \underline{u}_*]} \| \varphi_\epsilon(\underline{u}', \cdot)\|_{L^1(\mathbb S^2)} \lesssim \epsilon^{-\frac 12}. $ Therefore, using \eqref{eq:dnu.init.bound}, the map $$\varphi \mapsto \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi_\epsilon(\underline{u}',\vartheta) \, \mathrm{d} \nu_{\mathrm{init}}(\underline{u}',\vartheta) $$ extends to a bounded linear map $:L^2_{\underline{u}}L^1(S_{0,\underline{u}}) \to \mathbb R$. It follows by duality that there exists $\widetilde{f}_{\epsilon}\in L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})$ such that \begin{equation}\label{eq:f.ep.def} \begin{split} \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi_\epsilon(\underline{u}',\vartheta) \, \mathrm{d} \nu_{\mathrm{init}}(\underline{u}',\vartheta) = \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} \varphi(\underline{u},\vartheta)\Omega^{-2} \widetilde{f}_{\epsilon}(\underline{u},\vartheta)\,\mathrm{d} A_{\mathring{\gamma}} \, \mathrm{d} \underline{u}. \end{split} \end{equation} By \eqref{eq:f.ep.def}, it follows that $\Omega^{-2} \widetilde{f}_\epsilon\,\mathrm{d} A_{\mathring{\gamma}}\,\mathrm{d}\underline{u}\rightharpoonup \mathrm{d} \nu_{\mathrm{init}}$ in the weak-* topology as $\epsilon \to 0$. 
\pfstep{Step~2: Uniform estimates for $\{\widetilde{f}_\epsilon\}_{\epsilon\in (0,10^{-10}\underline{u}_*]}$} Consider the class of rank-$k$ tensor fields $$\mathcal D^{(k)} = \{\varphi^{(k)}\in \Gamma(T^k ([0,\underline{u}_*]\times \mathbb S^2)): \varphi \in L^\infty_{\underline{u}} C^\infty(S_{0,\underline{u}}) \}.$$ Since $\mathcal D^{(k)}$ is dense in $L^\infty_{\underline{u}}L^1(S_{0,\underline{u}})$, we can compute using duality \eqref{eq:f.ep.def} and \eqref{eq:dnu.init.bound} tha \begin{equation}\label{eq:ftildeep.est} \begin{split} \|\widetilde{f}_{\epsilon}\|_{L^1_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}})} = &\: \sup_{\{\varphi^{(0)}\in \mathcal D^{(0)}: \|\varphi^{(0)}\|_{L^\infty_{\underline{u}}L^1(S_{0,\underline{u}})} = 1\}} \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} \varphi^{(0)}(\underline{u},\vartheta)\widetilde{f}_{\epsilon}(\underline{u},\vartheta)\,\mathrm{d} A_{\mathring{\gamma}} \, \mathrm{d} \underline{u} \\ &\: + \sup_{\{\varphi^{(K)}\in \mathcal D^{(K)}: \|\varphi^{(K)}\|_{L^\infty_{\underline{u}}L^1(S_{0,\underline{u}})} = 1\}} \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} (\mathring{\div}{}^K\varphi^{(K)})(\underline{u},\vartheta)\widetilde{f}_{\epsilon}(\underline{u},\vartheta)\,\mathrm{d} A_{\mathring{\gamma}} \, \mathrm{d} \underline{u} \\ \lesssim &\: 1. \end{split} \end{equation} \pfstep{Step~3: Speed of convergence} Using additional regularity of the test function $\varphi$, we show a quantitative speed for the weak-* convergence of $\Omega^{-2} \widetilde{f}_\epsilon\,\mathrm{d} A_{\mathring{\gamma}}\,\mathrm{d}\underline{u} \rightharpoonup \mathrm{d} \nu_{\mathrm{init}}$. We first compute, for $i = 1,2,3$, \begin{equation}\label{eq:convolutions.and.so.on} \begin{split} &\: \sup_{\underline{u}' \in [0,\underline{u}_*]} \| \int_{\mathbb R} (\zeta_i \varphi)(\underline{u}+\alpha_i \epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} - (\zeta_i\varphi)(\underline{u}',\vartheta)\|_{L^1(\mathbb S^2)} \\ \leq &\: \sup_{\underline{u}'\in [0,\underline{u}_*]} \int_{\mathbb R} \frac 1{\epsilon} \rho(\frac{\underline{u}-\underline{u}'}{\epsilon})\|(\zeta_i\varphi)(\underline{u}+\alpha_i \epsilon,\vartheta) - (\zeta_i\varphi)(\underline{u}',\vartheta)\|_{L^1(\mathbb S^2)} \,\mathrm{d} \underline{u} \\ \leq &\: \sup_{\substack{ |\underline{u}' - \underline{u}''| \leq 2\epsilon \\ \underline{u}',\,\underline{u}'' \in [0, \underline{u}_*]}} \|(\zeta_i\varphi)(\underline{u}'',\vartheta) - (\zeta_i\varphi)(\underline{u}',\vartheta)\|_{L^1(\mathbb S^2)} \\ = &\:\sup_{\substack{ |\underline{u}' - \underline{u}''| \leq 2\epsilon \\ \underline{u}',\,\underline{u}'' \in [0, \underline{u}_*]}} \|\int_{\underline{u}''}^{\underline{u}'} \frac{\partial(\zeta_i\varphi)}{\partial\underline{u}}(\underline{u}''',\vartheta) \,\mathrm{d}\underline{u}'''\|_{L^1(\mathbb S^2)} \lesssim \epsilon^{\frac 12}\|\frac{\partial\varphi}{\partial\underline{u}}\|_{L^2_{\underline{u}}L^1(\mathbb S^2)} + \epsilon \|\varphi\|_{L^\infty_{\underline{u}} L^1(\mathbb S^2)}. \end{split} \end{equation} Let $0\leq k \leq K$. 
Using \eqref{eq:convolutions.and.so.on}, \eqref{eq:dnu.init.bound} and the definition of $\widetilde{f}_\epsilon$ in \eqref{eq:varphi.ep.def.in.data} and \eqref{eq:f.ep.def}, we obtain \begin{equation}\label{eq:fm.quantitative.prelim} \begin{split} &\: \left| \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} (\mathring{\div}{}^k\varphi^{(k)}) \Omega^{-2} \widetilde{f}_\epsilon \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u} - \int_{(0,\underline{u}_*) \times \mathbb S^2} \mathring{\div}{}^k\varphi^{(k)}(\underline{u}', \vartheta) \,\mathrm{d}\nu_{\mathrm{init}}(\underline{u}',\vartheta) \right|\\ = &\: \left| \sum_{i=1}^3 \int_{(0,\underline{u}_*)\times \mathbb S^2} \{ \mathring{\div}{}^k [ \int_{\mathbb R} (\zeta_i \varphi^{(k)})(\underline{u} + \alpha_i \epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} - (\zeta_i \varphi^{(k)})(\underline{u}',\vartheta)] \} \,\mathrm{d}\nu_{\mathrm{init}}(\underline{u}',\vartheta) \right|\\ \lesssim &\: \max_{i=1,2,3} \sup_{\underline{u}' \in [0,\underline{u}_*]} \| \int_{\mathbb R} (\zeta_i \varphi)(\underline{u}+\alpha_i \epsilon,\vartheta) \frac 1{\epsilon} \varrho(\frac{\underline{u}-\underline{u}'}{\epsilon}) \,\mathrm{d} \underline{u} - (\zeta_i\varphi)(\underline{u}',\vartheta)\|_{L^1(\mathbb S^2)} \\ \lesssim &\: \epsilon^{\frac 12} \|\frac{\partial\varphi^{(k)}}{\partial\underline{u}}\|_{L^2_{\underline{u}}L^1(\mathbb S^2)} + \epsilon \|\varphi\|_{L^\infty_{\underline{u}} L^1(\mathbb S^2)}. \end{split} \end{equation} \pfstep{Step~4: Definition of $\{f_m\}_{m=1}^{+\infty}$ and conclusion of proof} Finally, let $\{f_m\}_{m=1}^{+\infty}$be smooth and such that \begin{equation}\label{eq:f.&.ft.very.close} \|f_m - \widetilde{f}_{2^{-2m}}\|_{L^1_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}})}\leq 2^{-2m}, \end{equation} (where by $\widetilde{f}_{2^{-2m}}$ we mean $\widetilde{f}_{\epsilon}$ with $\epsilon = 2^{-2m}$). Since $\widetilde{f}_\epsilon$ is non-negative by \eqref{eq:f.ep.def}, it is easy to see that $f_m$ can be arranged to be non-negative. In the case $\mathrm{d}\nu_{\mathrm{init}}$ is supported on $[0,\underline{u}_*]\times U^c$ for some $U\subset \mathbb S^2$, then for any open $V\subseteq U$, we can moreover impose $\mathrm{supp}(f_m)\subset [0,\underline{u}_*]\times V^c$. We need to check that the three properties asserted in the statement of the proposition hold. \begin{enumerate} \item Since $\Omega^{-2}\widetilde{f}_\epsilon\,\mathrm{d} A_{\mathring{\gamma}}\,\mathrm{d}\underline{u}\rightharpoonup \mathrm{d} \nu_{\mathrm{init}}$ in the weak-* topology, it follows from \eqref{eq:f.&.ft.very.close} that $\Omega^{-2} f_m\,\mathrm{d} A_{\mathring{\gamma}}\,\mathrm{d}\underline{u}\rightharpoonup \mathrm{d} \nu_{\mathrm{init}} $ in the weak-* topology. \item The estimate \eqref{eq:fm.uniform} follows from \eqref{eq:ftildeep.est}, \eqref{eq:f.&.ft.very.close} and the triangle inequality. \item Integrating by parts and using \eqref{eq:f.&.ft.very.close}, smoothness of $\Omega$ and H\"older's inequality, we obtain $$\left| \int_0^{\underline{u}_*}\int_{S_{0,\underline{u}}} (\mathring{\div}{}^k\varphi^{(k)}) \Omega^{-2} (f_m - \widetilde{f}_{2^{-2m}}) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u} \right| \lesssim 2^{-2m} \|\varphi^{(k)}\|_{L^\infty_{\underline{u}} L^1(S_{0,\underline{u}})}.$$ Combining this with \eqref{eq:fm.quantitative.prelim}, we then obtain \eqref{eq:fm.quantitative} using the triangle inequality. 
\qedhere \end{enumerate} \end{proof} \begin{proposition}\label{prop:f.approx.Phi.conv} Let $\mathrm{d}\nu_{\mathrm{init}}$, $\{f_m\}_{m=1}^{+\infty}$ and $\mathring{\gamma}$ be as in Proposition~\ref{prop:f.approx}. Suppose there exist continuous functions $(\Phi,\log\Omega)$ and a continuous $S$-tangent $2$-tensor $\hat{\gamma}$ such that the following holds for some $K\in \mathbb N$: \begin{itemize} \item $\Phi$ is positive, Lipschitz and $L^\infty_{\underline{u}}W^{K,\infty}$, $\frac{\partial\Phi}{\partial\underline{u}}$ is BV and $L^\infty_{\underline{u}} W^{K,\infty}$, and the following estimates hold: \begin{equation}\label{eq:f.approx.Phi.bkg.est} \|\Phi\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\Phi^{-1}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\frac{\partial\Phi}{\partial\underline{u}}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. \end{equation} \item $\Omega$ is smooth, positive and satisfy \begin{equation}\label{eq:f.approx.Omg.bkg.est} \|\log\Omega\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\frac{\partial}{\partial\underline{u}} \log\Omega\|_{L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma}) } \lesssim 1. \end{equation} \item $\hat{\gamma}$ satisfies $\frac{\det(\hat{\gamma})}{\det(\mathring{\gamma})} = 1$ and obeys the following bounds: \begin{equation}\label{eq:f.approx.gamma.bkg.est} \|\hat{\gamma}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})} + \|\frac{\partial}{\partial\underline{u}} \hat{\gamma}\|_{L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. \end{equation} \end{itemize} Assume also that $(\mathrm{d}\nu_{\mathrm{init}},\Phi,\log\Omega, \hat{\gamma})$ satisfies the equation \eqref{eq:dnu.init} for any $\varphi \in C^\infty_c((0,\underline{u}_*)\times \mathbb S^2)$. Consider the initial value problem for $\Phi_m$ with smooth data: \begin{equation}\label{Phim.ODE} \begin{cases} \frac{\partial^2\Phi_m}{\partial \underline{u}^2}-2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi_m}{\partial\underline{u}}+\frac 18 |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m}\Phi_m + \frac 12 f_m \Phi_m^{-1} =0, \\ \Phi_m(0,\vartheta) = \overline{\Phi}_m(\vartheta),\quad \frac{\partial\Phi_m}{\partial \underline{u}} (0,\vartheta) = \overline{\Psi}_m(\vartheta), \end{cases} \end{equation} where $(\hat{\gamma}_m,\, \overline{\Phi}_m, \, \overline{\Psi}_m)$ are such that the following holds for all $m\in \mathbb N$: \begin{itemize} \item The tensor $\hat{\gamma}_m$ is smooth and such that \begin{equation}\label{eq:f.approx.gamma.est} \frac{\det\hat{\gamma}_m}{\det \mathring{\gamma}} = 1,\quad \|\hat{\gamma}_m- \hat{\gamma}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\frac{\partial}{\partial\underline{u}}(\hat{\gamma}_m- \hat{\gamma})\|_{L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\leq 2^{-m}. 
\end{equation} \item The data $(\overline{\Phi}_m,\overline{\Psi}_m)$ of \eqref{Phim.ODE} are smooth and such that\footnote{Here ${}^-$ denote the trace of BV functions, see Lemma~\ref{lem:trace}.} \begin{equation}\label{eq:f.approx.data.est} \| \overline{\Phi}_m - \Phi(0,\cdot) \|_{W^{K,\infty}(S_{0,0},\mathring{\gamma})} + \|\overline{\Psi}_m - (\frac{\partial \Phi}{\partial\underline{u}})^-(0,\cdot)\|_{W^{K,\infty}(S_{0,0},\mathring{\gamma})} \leq 2^{-m}. \end{equation} \end{itemize} Then the following holds for $m\in \mathbb N$ sufficiently large: \begin{enumerate} \item The solution $\Phi_m$ to \eqref{Phim.ODE} is defined on all of $[0,\underline{u}_*]\times \mathbb S^2$. \item $\Phi_m$ converges to $\Phi$ in the following sense\footnote{We emphasize that while we have uniform $L^\infty_{\underline{u}}$ bounds for $\frac{\partial\Phi_m}{\partial\underline{u}}$ in \eqref{eq:duphi.uniform}, the convergence estimate in \eqref{eq:phi.diff.est} for $\frac{\partial\Phi_m}{\partial\underline{u}}$ is only in $L^2_{\underline{u}}$. In fact, it can be easily checked that the convergence in general does \underline{not} hold in $L^\infty_{\underline{u}}$.}: \begin{equation}\label{eq:phi.diff.est} \|\Phi_m - \Phi \|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \|\frac{\partial}{\partial\underline{u}}(\Phi_m - \Phi)\|_{L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 2^{-m}. \end{equation} \item $\frac{\partial\Phi_m}{\partial\underline{u}}$ satisfies uniformly the estimate \begin{equation}\label{eq:duphi.uniform} \sup_m \|\frac{\partial\Phi_m}{\partial\underline{u}}\|_{L^{\infty}_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. \end{equation} \end{enumerate} Here, the implicit constants depend only on $\underline{u}_*$, $\mathring{\gamma}$, $\Omega$ and $\Lambda$ in Proposition~\ref{prop:f.approx}. \end{proposition} \begin{proof} We will only prove the estimates which will then in particular imply existence of solution on all of $[0,\underline{u}_*]\times \mathbb S^2$. Since all norms on the $2$-spheres in this proof will be taken with respect to $\mathring{\gamma}$, when there is no risk of confusion we will suppress explicit references to $\mathring{\gamma}$. For Steps~1 and 2 of the argument, let us fix $\underline{U}\in [0,\underline{u}_*]$. We will first be deriving estimates for $\Phi_m - \Phi$ and $\frac{\partial}{\partial\underline{u}} (\Phi_m - \Phi)$ on the interval $[0,\underline{U}]$. Thus, \textbf{in Steps~1 and 2, $L^2_{\underline{u}}$ and $L^\infty_{\underline{u}}$ mean $L^2_{\underline{u}}([0,\underline{U}])$ and $L^\infty_{\underline{u}}([0,\underline{U}])$. Moreover, all the implicit constants in the estimates will be independent of $\underline{U}$.} \pfstep{Step~1: Equation for $\Phi_m - \Phi$} We first derive an equation for $\Phi_m - \Phi$ using \eqref{eq:dnu.init} and \eqref{Phim.ODE}. Let $k\in \{0,\dots,K\}$. We will consider smooth rank-$k$ covariant tensor $\widetilde{\varphi} = \widetilde{\varphi}_{A_1\dots A_k}$ such that \begin{equation}\label{eq:varphi.cond.constraints} \|e^{A\underline{u}} \frac{\partial\widetilde{\varphi}}{\partial\underline{u}}\|_{L^2_{\underline{u}}L^1(S_{0,\underline{u}})} = 1,\quad \widetilde{\varphi}\restriction_{\underline{u} = \underline{U}} = 0, \end{equation} for some large $A>1$ to be determined\footnote{The largeness of $A$ will be used to facilitate the proof of the estimates in Step~2. 
One could think of the weight $e^{A\underline{u}}$ as a device to prove a Gr\"onwall-like estimate in the setting involving the measure $\mathrm{d}\nu_{\mathrm{init}}$.}. For every $\ell \in \mathbb N$ with $\ell^{-1} \leq \frac{\underline{U}}{2}$, define $\widetilde{\xi}_\ell:[0,\underline{u}_*]\to \mathbb R_{\geq 0}$ by \begin{equation}\label{eq:def.xi.t} \widetilde{\xi}_\ell(\underline{u}):= \begin{cases} \ell \underline{u} & \mbox{if $\underline{u}\in [0, \ell^{-1})$}\\ 1 & \mbox{if $\underline{u}\in [\ell^{-1}, \underline{U}]$}\\ 0 & \mbox{if $\underline{u} \in (\underline{U},\underline{u}_*]$} \end{cases}. \end{equation} Note that given $\widetilde{\varphi}$ as above and $\widetilde{\xi}_\ell$ as in \eqref{eq:def.xi.t}, $\widetilde{\xi}_\ell\mathring{\div}{}^k\widetilde{\varphi} \in C^0_c((0,\underline{u}_*)\times \mathbb S^2)$ for all $\ell \in \mathbb N$. After an easy limiting argument, we can thus apply \eqref{eq:dnu.init} with $\varphi = \widetilde{\xi}_\ell \mathring{\div}{}^k\widetilde{\varphi}$. Together with \eqref{Phim.ODE}, we thus obtain that, for every $\ell \in \mathbb N$ with $\ell^{-1} \leq \frac{\underline{u}_*}{2}$, \begin{equation}\label{eq:diff.Phi.der.prelim} \begin{split} &\: \underbrace{-\int_0^{\underline{u}_*} \int_{\mathbb S^2} \xi_\ell(\underline{u}')((\mathring{\div}{}^k\frac{\partial\widetilde{\varphi}}{\partial\underline{u}}) \Omega^{-2} \frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}'}_{\mathrm{O}'} \\ &\: - \underbrace{\ell \int_0^{\ell^{-1}}\int_{\mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} \frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\, \mathrm{d} \underline{u}'}_{=:\mathrm{I}'} \\ = &\: -\underbrace{\frac 18 \int_0^{\underline{u}_*} \int_{\mathbb S^2} \xi_\ell(\underline{u}') (\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2}( |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|^2_{\hat{\gamma}} \Phi - |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m} \Phi_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}'}_{=:\mathrm{II}'} \\ &\: - \underbrace{ \frac 12 \int_{(0,\underline{u}_*)\times \mathbb S^2} \xi_\ell(\underline{u}')((\mathring{\div}{}^k\widetilde{\varphi})\Phi^{-1}) (\underline{u}',\vartheta)\,\mathrm{d}\nu_{\mathrm{init}}(\underline{u}',\vartheta)}_{=:\mathrm{III}_A'} \\ &\: + \underbrace{\frac 12 \int_0^{\underline{u}_*} \int_{\mathbb S^2} \xi_\ell(\underline{u}')((\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} \Phi_m^{-1} f_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u}'}_{=:\mathrm{III}_B'}. 
\end{split} \end{equation} Taking $\ell \to +\infty$, applying the dominated convergence theorem for $\mathrm{O}'$, $\mathrm{II}'$, $\mathrm{III}_A'$, $\mathrm{III}_B'$ and using the fact that $\frac{\partial\Phi_m}{\partial \underline{u}}$ is smooth and $\lim_{\underline{u}\to 0^+}\frac{\partial\Phi}{\partial \underline{u}}$ is well-defined in the trace sense for $\mathrm{I}'$, we obtain \begin{equation}\label{eq:diff.Phi.der} \begin{split} &\: -\int_0^{\underline{U}} \int_{\mathbb S^2} ((\mathring{\div}{}^k\frac{\partial\widetilde{\varphi}}{\partial\underline{u}}) \Omega^{-2} \frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}' - \underbrace{\int_{\mathbb S^2} (\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} [(\frac{\partial\Phi}{\partial \underline{u}})^- - \frac{\partial\Phi_m}{\partial \underline{u}}](0,\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}}_{=:\mathrm{I}} \\ = &\: -\underbrace{\frac 18 \int_0^{\underline{U}} \int_{\mathbb S^2} (\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2}( |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|^2_{\hat{\gamma}} \Phi - |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m} \Phi_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}'}_{=:\mathrm{II}} \\ &\: - \underbrace{\frac 12 \int_{(0,\underline{U})\times \mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi}) \Phi^{-1}) (\underline{u}',\vartheta)\,\mathrm{d}\nu_{\mathrm{init}}(\underline{u}',\vartheta) + \frac 12 \int_0^{\underline{U}} \int_{\mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} \Phi_m^{-1} f_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u}'}_{=:\mathrm{III}}. \end{split} \end{equation} \pfstep{Step~2: Estimating the terms in \eqref{eq:diff.Phi.der}} We now estimate the terms from \eqref{eq:diff.Phi.der}. Before we proceed, note that it follows easily from \eqref{eq:varphi.cond.constraints} that for every $\underline{u} \in [0, \underline{U}]$, \begin{equation}\label{eq:varphi.also.pointwise} \|\widetilde{\varphi}\|_{L^1(S_{0,\underline{u}})}(\underline{u}) \leq \int_{\underline{u}}^{\underline{U}} \|\frac{\partial\widetilde{\varphi}}{\partial\underline{u}}\|_{L^1(S_{0,\underline{u}})}(\underline{u}') \,\mathrm{d} \underline{u}'\leq \|e^{A\underline{u}} \frac{\partial\widetilde{\varphi}}{\partial\underline{u}}\|_{L^2_{\underline{u}}L^1(S_{0,\underline{u}})} (\int_{\underline{u}}^{\underline{U}*} e^{-2A\underline{u}'} \,\mathrm{d}\underline{u}')^{\frac 12}\leq \frac{1}{\sqrt{2A}}e^{-A\underline{u}}. \end{equation} Now for all of the terms in \eqref{eq:diff.Phi.der}, we first integrate by parts in the angular variables and then control the resulting terms with \eqref{eq:varphi.also.pointwise} and the bounds in the assumption of the proposition. By H\"older's inequality, \eqref{eq:varphi.also.pointwise}, \eqref{eq:f.approx.Omg.bkg.est}, \eqref{Phim.ODE} and \eqref{eq:f.approx.data.est}, \begin{equation}\label{eq:main.Phi.m.est.1} \begin{split} |\mathrm{I}| \lesssim &\: \sum_{k_1+k_2 = k} \|\widetilde{\varphi} \|_{L^1(S_{0,0})}\|\mathring{\slashed{\nabla}}{}^{k_1}\Omega^{-2}\|_{L^\infty(S_{0,0})}\|\mathring{\slashed{\nabla}}{}^{k_2}[(\frac{\partial\Phi}{\partial \underline{u}})^-(0,\vartheta) - \frac{\partial\Phi_m}{\partial \underline{u}}(\vartheta)]\|_{L^\infty(S_{0,0})} \lesssim \frac{2^{-m}}{\sqrt{A}}. 
\end{split} \end{equation} By H\"older's inequality, \eqref{eq:varphi.also.pointwise}, \eqref{eq:f.approx.Phi.bkg.est}, \eqref{eq:f.approx.Omg.bkg.est}, \eqref{eq:f.approx.gamma.bkg.est} and \eqref{eq:f.approx.gamma.est}, \begin{equation}\label{eq:main.Phi.m.est.2} \begin{split} &\: |\mathrm{II}| \\ \lesssim &\: \|e^{A\underline{u}}\widetilde{\varphi}\|_{L^\infty_{\underline{u}} L^1(S_{0,\underline{u}})} \times \sum_{k_1+k_2 =k} (\|\mathring{\slashed{\nabla}}{}^{k_1}(\Omega^{-2} (|\frac{\partial\hat{\gamma}}{\partial\underline{u}}|^2_{\hat{\gamma}} - |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}} |^2_{\hat{\gamma}_m} ) )\|_{L^1_{\underline{u}}L^\infty(S_{0,\underline{u}})} \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k_2}\Phi\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})} \\ &\: \qquad + \|\mathring{\slashed{\nabla}}{}^{k_1} (\Omega^{-2} |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m})\|_{L^1_{\underline{u}}L^\infty(S_{0,\underline{u}})} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k_2}(\Phi-\Phi_m)\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})}) \\ \lesssim &\: \frac{1}{\sqrt{A}} (2^{-m} + \sum_{0\leq k_1\leq k} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k_1}(\Phi-\Phi_m)\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})}). \end{split} \end{equation} We now turn to term $\mathrm{III}$, which we will split into two terms (see \eqref{eq:f.approx.III.1} and \eqref{eq:f.approx.III.2} below). First we observe that by \eqref{eq:f.approx.Phi.bkg.est}, \begin{equation}\label{eq:diff.Phi-1.Phim-1} \begin{split} \| \Phi_m^{-1} - \Phi^{-1} \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} =&\: \| \Phi^{-1} \Phi_m^{-1} (\Phi -\Phi_m) \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} \\ \lesssim &\: \|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} \|\Phi - \Phi_m \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})}. \end{split} \end{equation} Therefore, integrating by parts the $\mathring{\div}{ }^k$ and using the estimates \eqref{eq:fm.uniform}, \eqref{eq:f.approx.Omg.bkg.est}, \eqref{eq:diff.Phi-1.Phim-1} and \eqref{eq:varphi.also.pointwise}, we obtain \begin{equation}\label{eq:f.approx.III.1} \begin{split} &\: \frac 12 \left| \int_0^{\underline{u}_*} \int_{\mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} (\Phi_m^{-1} - \Phi^{-1}) f_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u}' \right| \\ \lesssim &\: \sum_{k_1+k_2+k_3 = k} \|\mathring{\slashed{\nabla}}^{k_1} \Phi_m^{-1}\|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}^{k_2}(\Phi - \Phi_m) \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} \| \mathring{\slashed{\nabla}}^{k_3} f_m \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} \|e^{A\underline{u}} \widetilde{\varphi} \|_{L^\infty_{\underline{u}} L^1(S_{0,\underline{u}})} \\ \lesssim &\: \frac{1}{\sqrt{A}}\|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{k,\infty}(S_{0,\underline{u}})} (\sum_{0\leq k'\leq k} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}^{k'} (\Phi - \Phi_m) \|_{L^\infty_{\underline{u}} L^{\infty}(S_{0,\underline{u}})}). \end{split} \end{equation} On the other hand, notice that we can write $(\mathring{\div}{}^k\widetilde{\varphi})\Phi^{-1}$ as a linear combination of terms of the form $$\mathring{\div}{}^{k_1} (\widetilde{\varphi} \mathring{\slashed{\nabla}}^{k_2}\Phi^{-1})$$ for $k_1 + k_2 = k$. 
Therefore, using \eqref{eq:fm.quantitative} in Proposition~\ref{prop:f.approx}, and then using \eqref{eq:varphi.cond.constraints}, \eqref{eq:varphi.also.pointwise} (and using $A\underline{u} \geq 0$) and \eqref{eq:f.approx.Phi.bkg.est}, we obtain \begin{equation}\label{eq:f.approx.III.2} \begin{split} &\: -\frac 12 \int_{(0,\underline{u}_*)\times \mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi})\Phi^{-1}) (\underline{u}',\vartheta)\,\mathrm{d}\nu_{\mathrm{init}}(\underline{u}',\vartheta) + \frac 12 \int_0^{\underline{u}_*} \int_{\mathbb S^2} ((\mathring{\div}{}^k\widetilde{\varphi}) \Omega^{-2} \Phi^{-1} f_m)(\underline{u}',\vartheta) \,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d}\underline{u}' \\ \lesssim &\: 2^{-m} \sum_{0\leq k'\leq k} \|\frac{\partial(\widetilde{\varphi}\mathring{\slashed{\nabla}}^{k'}\Phi^{-1})}{\partial\underline{u}} \|_{L^2_{\underline{u}} L^1(S_{0,\underline{u}})} + 2^{-2m} \sum_{0\leq k'\leq k} \|\varphi\mathring{\slashed{\nabla}}^{k'}\Phi^{-1} \|_{L^2_{\underline{u}} L^1(S_{0,\underline{u}})} \\ \lesssim &\: 2^{-m} \|\frac{\partial\widetilde{\varphi}}{\partial\underline{u}}\|_{L^2_{\underline{u}} L^1(S_{0,\underline{u}})} \|\Phi^{-1} \|_{L^\infty_{\underline{u}} W^{k,\infty}(S_{0,\underline{u}})} + 2^{-m} \|\widetilde{\varphi}\|_{L^\infty_{\underline{u}} L^1(S_{0,\underline{u}})} \|\Phi^{-2} \frac{\partial\Phi}{\partial\underline{u}} \|_{L^\infty_{\underline{u}} W^{k,\infty}(S_{0,\underline{u}})} \\ &\: + 2^{-2m} \| \widetilde{\varphi} \|_{L^2_{\underline{u}} L^1(S_{0,\underline{u}})} \|\Phi^{-1} \|_{L^\infty_{\underline{u}} W^{k,\infty}(S_{0,\underline{u}})} \\ \lesssim &\: 2^{-m}. \end{split} \end{equation} Combining \eqref{eq:f.approx.III.1} and \eqref{eq:f.approx.III.2} and using the triangle inequality, we thus obtain \begin{equation}\label{eq:main.Phi.m.est.3} \begin{split} |\mathrm{III}|\lesssim \frac{1}{\sqrt{A}}\|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{k,\infty}(S_{0,\underline{u}})} (\sum_{0\leq k'\leq k} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}^{k'} (\Phi - \Phi_m) \|_{L^\infty_{\underline{u}} L^{\infty}(S_{0,\underline{u}})}) + 2^{-m}. \end{split} \end{equation} Starting with duality and using \eqref{eq:diff.Phi.der}, \eqref{eq:main.Phi.m.est.1}, \eqref{eq:main.Phi.m.est.2} and \eqref{eq:main.Phi.m.est.3}, we obtain \begin{equation}\label{eq:main.Phi.m.est.5} \begin{split} &\: \sum_{0\leq k \leq K} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^k(\Omega^{-2}\frac{\partial(\Phi - \Phi_m)}{\partial\underline{u}}) \|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})} \\ =&\: \sum_{0\leq k \leq K} \sup_{\widetilde{\varphi} \,\mathrm{ satisfying \eqref{eq:varphi.cond.constraints}}} \int_0^{\underline{u}_*} \int_{S_{0,\underline{u}}} (\frac{\partial\widetilde{\varphi}}{\partial\underline{u}} \mathring{\slashed{\nabla}}{}^k(\Omega^{-2} \frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}}))(\underline{u}',\vartheta)\,\mathrm{dA}_{\mathring{\gamma}}\,\mathrm{d} \underline{u}' \\ \lesssim &\: 2^{-m} + (1+ \|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})})\sum_{0\leq k\leq K} \frac{1}{\sqrt{A}} \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k}(\Phi_m - \Phi)\|_{L^{\infty}_{\underline{u}}L^{\infty}(S_{0,\underline{u}})}. 
\end{split} \end{equation} We then complement the estimate \eqref{eq:main.Phi.m.est.5} of $\frac{\partial(\Phi - \Phi_m)}{\partial\underline{u}}$ with the following estimate on $\Phi - \Phi_m$ for $\underline{u} \in [0,\underline{U}]$, which is derived using the fundamental theorem of calculus, \eqref{Phim.ODE}, \eqref{eq:f.approx.data.est} and H\"older's inequality: \begin{equation}\label{eq:main.Phi.m.est.6} \begin{split} \|\mathring{\slashed{\nabla}}{}^{k}(\Phi - \Phi_m)\|_{L^\infty(S_{0,\underline{u}})}(\underline{u}) \leq &\: \|\mathring{\slashed{\nabla}}{}^{k}(\Phi\restriction_{\underline{u} = 0} - \overline{\Phi}_m)\|_{L^\infty(S_{0,\underline{u}})} + \int_0^{\underline{u}} \|\mathring{\slashed{\nabla}}{}^{k}\frac{\partial(\Phi - \Phi_m)}{\partial \underline{u}}\|_{L^\infty(S_{0,\underline{u}'})} \,\mathrm{d} \underline{u}' \\ \lesssim &\: 2^{-m} + (\int_0^{\underline{u}} e^{2A\underline{u}'}\,\mathrm{d}\underline{u}')^{\frac 12}\|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k}\frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}}\|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})} \\ \lesssim &\: 2^{-m} + \sum_{0\leq k_1\leq k}\frac{e^{A\underline{u}}}{\sqrt{A}} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k_1}(\Omega^{-2}\frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})\|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})}. \end{split} \end{equation} The estimate \eqref{eq:main.Phi.m.est.6} implies (using $A\underline{u}\geq 0$) that for every $\underline{u} \in [0,\underline{U}]$, \begin{equation}\label{eq:main.Phi.m.est.7} \sum_{0\leq k\leq K} \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k}(\Phi - \Phi_m)\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})}\lesssim 2^{-m} + \sum_{0\leq k\leq K} \frac{1}{\sqrt{A}} \|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k}(\Omega^{-2}\frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})\|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})}. \end{equation} Adding \eqref{eq:main.Phi.m.est.5} and \eqref{eq:main.Phi.m.est.7}, we obtain \begin{equation}\label{eq:main.Phi.m.est.8} \begin{split} &\: \sum_{0\leq k\leq K}(\|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k}(\Omega^{-2}\frac{\partial(\Phi - \Phi_m)}{\partial\underline{u}}) \|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})} + \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k}(\Phi - \Phi_m)\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})}) \\ \lesssim &\: 2^{-m} + \sum_{0\leq k\leq K} \frac{1}{\sqrt{A}} (\|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k}(\Omega^{-2}\frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}})\|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})})\\ &\: + (1+ \|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})}) \sum_{0\leq k\leq K} \frac{1}{\sqrt{A}}\|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k}(\Phi_m - \Phi)\|_{L^{\infty}_{\underline{u}}L^{\infty}(S_{0,\underline{u}})}) \end{split} \end{equation} \textbf{uniformly for all subintervals $[0,\underline{U}]\subseteq [0,\underline{u}_*]$ and for all $m\in \mathbb N$}. Using \eqref{eq:main.Phi.m.est.8}, we first claim that for $m\in \mathbb N$ sufficiently large, $\|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})} \leq 2 \|\Phi^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})}$. 
If not, then by continuity there exists $\widetilde{\underline{U}}$ such that \begin{equation}\label{eq:main.Phi.m.contradiction} \|\Phi_m^{-1}\|_{W^{K,\infty}(S_{0,\widetilde{\underline{U}}})} = 2\|\Phi^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})},\quad \sup_{\underline{u}\in [0,\widetilde{\underline{U}}]}\|\Phi_m^{-1}\|_{W^{K,\infty}(S_{0,\underline{u}})} \leq 2\|\Phi^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})}. \end{equation} However, for $A$ sufficiently large (independently of $\widetilde{U}$ or $m$), \eqref{eq:main.Phi.m.est.8} and \eqref{eq:main.Phi.m.contradiction} imply that on the interval $[0,\widetilde{\underline{U}}]$, \begin{equation}\label{eq:main.Phi.m.almost.the.end} \sum_{0\leq k\leq K}(\|e^{-A\underline{u}} \mathring{\slashed{\nabla}}{}^{k} \frac{\partial(\Phi-\Phi_m)}{\partial \underline{u}}\|_{L^2_{\underline{u}}L^\infty(S_{0,\underline{u}})} + \|e^{-A\underline{u}}\mathring{\slashed{\nabla}}{}^{k}(\Phi - \Phi_m)\|_{L^\infty_{\underline{u}}L^\infty(S_{0,\underline{u}})}) \lesssim 2^{-m}. \end{equation} For $m\in \mathbb N$ sufficiently large, \eqref{eq:main.Phi.m.almost.the.end} contradicts \eqref{eq:main.Phi.m.contradiction}. Hence \begin{equation}\label{eq:main.Phi.m.almost.the.end.2} \|\Phi_m^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})} \leq 2 \|\Phi^{-1}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})} \end{equation} Now using \eqref{eq:main.Phi.m.almost.the.end.2}, we can repeat the above argument to derive \eqref{eq:main.Phi.m.almost.the.end} from \eqref{eq:main.Phi.m.est.8} as long as $A$ and $m$ are chosen to be sufficiently large. Finally, we fix $A$ in \eqref{eq:main.Phi.m.almost.the.end} and absorb it into the implicit constants. Since $\underline{U}$ is arbitrary, we have thus obtained \eqref{eq:phi.diff.est}. \pfstep{Step~3: Proof of \eqref{eq:duphi.uniform}} Finally, the proof of \eqref{eq:duphi.uniform} is much more straightforward since we are only have to deal with the \emph{smooth} ODE \eqref{Phim.ODE}. Using the uniform estimates given by \eqref{eq:fm.uniform}, \eqref{eq:f.approx.Phi.bkg.est}, \eqref{eq:f.approx.gamma.bkg.est} and \eqref{eq:phi.diff.est}, we know that \begin{equation}\label{eq:uniform.est.for.Rm} \sup_m \|\frac 18 |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m}\Phi_m + \frac 12 f_m \Phi_m^{-1}\|_{L^1_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})}<+\infty. \end{equation} We now let $\Psi_m = \frac{\partial\Phi_m}{\partial\underline{u}}$, $H_m = 2 \frac{\partial \log\Omega}{\partial\underline{u}}$ and $R_m = \frac 18 |\frac{\partial\hat{\gamma}_m}{\partial\underline{u}}|^2_{\hat{\gamma}_m}\Phi_m + \frac 12 f_m \Phi_m^{-1}$. Then by \eqref{Phim.ODE}, \eqref{eq:f.approx.Omg.bkg.est} and \eqref{eq:uniform.est.for.Rm}, it suffices to show that for $\Psi_m$ solving the ODE \begin{equation}\label{Phim.ODE.revisit} \frac{\partial \Psi_m}{\partial \underline{u}} + H_m \Psi_m + R_m =0, \end{equation} with the estimates \begin{equation} \sup_m (\| \Psi_m(0,\vartheta)\|_{W^{K,\infty}(S_{0,0})} + \|H_m\|_{L^2_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})} + \|R_m \|_{L^1_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}})}) \leq D, \end{equation} we can obtain uniform (in $m$) estimates for the $L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}})$ norm of $\Psi_m$. 
Assume as a bootstrap assumption that $\| \Psi_m\|_{W^{K,\infty}(S_{0,\underline{u}})}\leq \sqrt{B} D e^{B\underline{u}}$ (which is satisfied initially as long as $B\geq 1$). Differentiating \eqref{Phim.ODE.revisit} by $\mathring{\slashed{\nabla}}{}^k$ (for $0\leq k\leq K$), integrating in $\underline{u}$, and then using the bootstrap assumption and the Cauchy--Schwarz inequality, we obtain \begin{equation*} \begin{split} \|\mathring{\slashed{\nabla}}{}^k \Psi_m\|_{L^\infty(S_{0,\underline{u}})} \lesssim &\: D + \sum_{k_1+k_2 = k} \|\mathring{\slashed{\nabla}}{}^{k_1} H_m\|_{L^2_{\underline{u}} L^\infty(S_{0,\underline{u}})} \|\mathring{\slashed{\nabla}}{}^{k_2}\Psi_m\|_{L^2_{\underline{u}} L^\infty(S_{0,\underline{u}})} + \| \mathring{\slashed{\nabla}}{}^k R_m\|_{L^1_{\underline{u}} L^\infty(S_{0,\underline{u}})} \\ \lesssim &\: D + D (\sqrt{B}D) (\int_0^{\underline{u}} e^{2 B\underline{u}'}\,\mathrm{d} \underline{u}')^{\frac 12} \lesssim D + D^2 e^{B\underline{u}} \lesssim D(D+1) e^{B\underline{u}}. \end{split} \end{equation*} In particular, we obtain $\| \Psi_m\|_{W^{K,\infty}(S_{0,\underline{u}})}\leq D(D+1) e^{B\underline{u}}$. If we choose $B$ such that $(D+1)\ll \sqrt{B}$, we have improved the bootstrap assumption. Finally, after fixing $B$ and allowing the implicit constant to depend on $B$, we obtain that $\|\Psi_m\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}})} \lesssim D(D+1)$. As argued above, this implies \eqref{eq:duphi.uniform}. \qedhere \end{proof} \subsection{Putting everything together: Approximating measure-valued null dust data by vacuum data}\label{sec:final.approx} \begin{proposition}\label{prop:final.approx} Let $K\in \mathbb N$. Suppose we are given $\mathrm{d}\nu_{\mathrm{init}}$ and $\mathring{\gamma}$ as in the assumptions of Proposition~\ref{prop:f.approx}, and $\Phi$, $\log\Omega$, $\hat{\gamma}$ as in Proposition~\ref{prop:f.approx.Phi.conv}. Assume moreover that $\mathrm{d}\nu_{\mathrm{init}}$ is supported on $[0,\underline{u}_*]\times U^c$ for some $U\subset \mathbb S^2$. Then there exist smooth $\{(\hat{\gamma}_m^{(vac)},\Phi^{(vac)}_m)\}_{m=m_0}^{+\infty}$ on $[0,\underline{u}_*]\times \mathbb S^2$ such that the following holds: \begin{enumerate} \item (Basic properties) $\Phi^{(vac)}_m$ is strictly positive. $\hat{\gamma}_m^{(vac)}$ is a positive definite metric on each $S_{0,\underline{u}}$. Moreover, \begin{equation}\label{eq:final.approx.det} \frac{\det\hat{\gamma}_m^{(vac)}}{\det \mathring{\gamma}} = 1. \end{equation} \item (Vacuum constraint) For every $m\in \mathbb N$ with $m\geq m_0$, $$\begin{cases} \frac{\partial^2 \Phi^{(vac)}_m}{\partial\underline{u}^2}=2\frac{\partial \log\Omega}{\partial\underline{u}}\frac{\partial\Phi^{(vac)}_m}{\partial\underline{u}}-\frac {1}8 |\frac{\partial\hat{\gamma}_m^{(vac)}}{\partial\underline{u}}|_{\hat{\gamma}_m^{(vac)}}^2 \Phi^{(vac)}_m,\\ \Phi^{(vac)}_m(\underline{u}=0) = \Phi(\underline{u} = 0),\quad \frac{\partial \Phi_m^{(vac)}}{\partial\underline{u}}(\underline{u} = 0) = \frac{\partial \Phi}{\partial\underline{u}}(\underline{u} = 0).
\end{cases}$$ \item (Uniform estimates) \begin{equation}\label{eq:final.approx.ue.1} \|\hat{\gamma}_m^{(vac)} \|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} + \| \frac{\partial\hat{\gamma}_m^{(vac)}}{\partial \underline{u}} \|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 1, \end{equation} \begin{equation}\label{eq:final.approx.ue.2} \|\Phi_m^{(vac)} \|_{L^\infty_{\underline{u}}W^{K,\infty}} + \|\frac{\partial \Phi_m^{(vac)}}{\partial \underline{u}}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 1, \end{equation} \begin{equation}\label{eq:final.approx.ue.3} \Phi_m^{(vac)} \gtrsim 1. \end{equation} \item (Convergence) The following convergences hold: \begin{equation}\label{eq:final.approx.convergence.easy} \hat{\gamma}_m^{(vac)}\to \hat{\gamma},\quad \Phi_m^{(vac)}\to \Phi \quad \mbox{uniformly}, \end{equation} and for every $\varphi \in C^0_c([0,\underline{u}_*]\times \mathbb S^2)$, \begin{equation}\label{eq:final.approx.convergence.hard} \begin{split} &\: \int_{(0,\underline{u}_*)\times \mathbb S^2} \varphi \,\mathrm{d}\nu_{\mathrm{init}} \\ = &\: \frac 14 \lim_{m\to +\infty} (\int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, |\frac{\partial\hat{\gamma}_m^{(vac)}}{\partial\underline{u}}|_{\hat{\gamma}_m^{(vac)}}^2 \,\mathrm{dA}_{\gamma_{m}^{(vac)}}\, \mathrm{d} \underline{u}) - \frac 14 \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2} \, |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|_{\hat{\gamma}}^2 \,\mathrm{dA}_{\gamma}\,\mathrm{d}\underline{u}. \end{split} \end{equation} \end{enumerate} \end{proposition} \begin{proof} \pfstep{Step~1: Approximate with a sequence of smooth null dust data} Let $\{f_m\}_{m=1}^{+\infty}$ be as in the conclusion of Proposition~\ref{prop:f.approx}. Since $\mathrm{d}\nu_{\mathrm{init}}$ is supported on $[0,\underline{u}_*]\times U^c$, by Proposition~\ref{prop:f.approx}, we choose $f_m$ so that $\mathrm{supp}(f_m)\subset [0,\underline{u}_*]\times V^c$ for some fixed non-empty open $V\subseteq U$ with $\overline{V} \subset U$. Define a \emph{smooth} sequence $\{(\hat{\gamma}_m^{(dust)},\, \overline{\Phi}_m^{(dust)}, \, \overline{\Psi}_m^{(dust)})\}_{m=1}^{+\infty}$ so that the estimates \eqref{eq:f.approx.gamma.est} and \eqref{eq:f.approx.data.est} hold (with $(\hat{\gamma}_m,\, \overline{\Phi}_m, \, \overline{\Psi}_m) = (\hat{\gamma}_m^{(dust)},\, \overline{\Phi}_m^{(dust)}, \, \overline{\Psi}_m^{(dust)})$). We in addition choose \begin{equation}\label{eq:final.approx.extra.small} \|\hat{\gamma}_m^{(dust)} - \hat{\gamma}\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \leq \min\{\epsilon, \, 2^{-m} \}, \end{equation} where $\epsilon>0$ is as in Proposition~\ref{prop.data.approx}.4 (see in particular \eqref{eq:data.approx.uniform.assumption}) with $\hat{\gamma}_0 = \hat{\gamma}$. We then apply Proposition~\ref{prop:f.approx.Phi.conv} so that by \eqref{eq:phi.diff.est} and \eqref{eq:duphi.uniform} we have \begin{equation}\label{eq:final.approx.Phi.dust} \|\Phi_m^{(dust)} - \Phi \|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 2^{-m},\quad \sup_m \|\frac{\partial\Phi_m^{(dust)}}{\partial\underline{u}}\|_{L^{\infty}_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. 
\end{equation} \pfstep{Step~2: Approximate with a sequence of vacuum data} From Step~1, we have obtained a sequence of smooth data $\{( f_m, \, \hat{\gamma}_m^{(dust)}, \, \Phi_m^{(dust)})\}_{m=1}^{+\infty}$ to the Einstein--null dust system. Now for each $m\in \mathbb N$, we apply Proposition~\ref{prop.data.approx}. We choose $n(m)$ sufficiently large depending on $m$ so that \begin{equation}\label{eq:n.m.compare} n(m)^{-1} \leq 2^{-m} \end{equation} and that all the $O(\frac 1{n(m)})$ error terms in Proposition~\ref{prop.data.approx} are made $\lesssim 2^{-m}$. Since we have already required \eqref{eq:final.approx.extra.small}, we thus obtain a sequence $\{ (\hat{\gamma}_m^{(vac)},\Phi_m^{(vac)})\}_{m=1}^{+\infty}$ so that \begin{equation}\label{eq:final.approx.gamma.vac} \|\hat{\gamma}_m^{(vac)} - \hat{\gamma}^{(dust)}_m\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 2^{-m}, \end{equation} \begin{equation}\label{eq:final.approx.gamma.weak.vac} \||\frac{\partial\hat{\gamma}_m^{(vac)}}{\partial \underline{u}}|_{\hat{\gamma}_m^{(vac)}}^2 (\Phi_m^{(dust)})^2 - |\frac{\partial\hat{\gamma}_m^{(dust)}}{\partial \underline{u}}|_{\hat{\gamma}_m^{(dust)}}^2 (\Phi_m^{(dust)})^2 - 4 f_m - \frac 1{n(m)} \frac{\partial F_m}{\partial\underline{u}}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 2^{-m}, \end{equation} \begin{equation}\label{eq:final.approx.Phi.vac} \|\Phi_m^{(vac)} - \Phi_m^{(dust)}\|_{L^\infty_{\underline{u}}W^{K,\infty}} \lesssim 2^{-m}, \quad \|\frac{\partial (\Phi_m^{(vac)} - \Phi_m^{(dust)})}{\partial \underline{u}}\|_{L^\infty_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})}\lesssim 2^{-m}, \end{equation} and \begin{equation}\label{eq:final.approx.uniform.dubgamma} \|\frac{\partial \hat{\gamma}_m^{(vac)}}{\partial\underline{u}} \|_{L^2_{\underline{u}}W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. \end{equation} Here, in \eqref{eq:final.approx.gamma.weak.vac}, $F_m$ are smooth functions which according to \eqref{eq:data.approx.Fn.est} satisfy \begin{equation}\label{eq:final.approx.Fm} \|F_m\|_{L^\infty_{\underline{u}} W^{K,\infty}(S_{0,\underline{u}},\mathring{\gamma})} \lesssim 1. \end{equation} Note that in deriving \eqref{eq:final.approx.uniform.dubgamma}, we have also used the estimate \eqref{eq:fm.uniform} for $f_m$. \pfstep{Step~3: Putting everything together} In this last step, we check that for sufficiently large $m_0$, the sequence $\{ (\hat{\gamma}_m^{(vac)},\Phi_m^{(vac)})\}_{m=m_0}^{+\infty}$ constructed in Step~2 indeed satisfies the necessary requirements. First, the construction in Proposition~\ref{prop.data.approx} guarantees \eqref{eq:final.approx.det}. After choosing $m_0$ to be sufficiently large, the positivity follows from \eqref{eq:final.approx.extra.small}, \eqref{eq:final.approx.Phi.dust}, \eqref{eq:final.approx.gamma.vac}, \eqref{eq:final.approx.Phi.vac} and the triangle inequality. Second, since $\{ (\hat{\gamma}_m^{(vac)},\Phi_m^{(vac)})\}_{m=1}^{+\infty}$ are constructed by Proposition~\ref{prop.data.approx}, the vacuum constraints are satisfied by definition. Third, the uniform estimates \eqref{eq:final.approx.ue.1} and \eqref{eq:final.approx.ue.2} follow from \eqref{eq:final.approx.extra.small}, \eqref{eq:final.approx.Phi.dust}, \eqref{eq:final.approx.gamma.vac}, \eqref{eq:final.approx.Phi.vac}, \eqref{eq:final.approx.uniform.dubgamma} and the triangle inequality.
The lower bound \eqref{eq:final.approx.ue.3} follows from the lower bound of $\Phi$, the estimates \eqref{eq:final.approx.Phi.dust}, \eqref{eq:final.approx.Phi.vac} and the triangle inequality. Fourth, the convergence statements in \eqref{eq:final.approx.convergence.easy} follow from \eqref{eq:final.approx.extra.small}, \eqref{eq:final.approx.Phi.dust}, \eqref{eq:final.approx.gamma.vac}, \eqref{eq:final.approx.Phi.vac} and the triangle inequality. Finally, we prove the convergence statements in \eqref{eq:final.approx.convergence.hard}. By density we assume that $\varphi$ is $C^1$. Note that $\mathrm{dA}_\gamma = \Phi^2 \,\mathrm{dA}_{\mathring{\gamma}}$. Also, by \eqref{eq:final.approx.det}, $\mathrm{dA}_{\gamma_m^{(vac)}} = (\Phi_m^{(vac)})^2 \,\mathrm{dA}_{\mathring{\gamma}}$. Then, using \eqref{eq:f.approx.Phi.bkg.est}, \eqref{eq:f.approx.gamma.bkg.est}, \eqref{eq:final.approx.Phi.dust}, \eqref{eq:final.approx.gamma.weak.vac}, \eqref{eq:final.approx.Phi.vac} and \eqref{eq:final.approx.uniform.dubgamma}, we obtain that for each fixed $m\in \mathbb N$ with $m\geq m_0$, \begin{equation}\label{eq:final.approx.final.1} \begin{split} &\: \frac 14 \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, |\frac{\partial\hat{\gamma}_m^{(vac)}}{\partial\underline{u}}|_{\hat{\gamma}_m^{(vac)}}^2 \,\mathrm{dA}_{\gamma_{m}^{(vac)}}\, \mathrm{d} \underline{u} - \frac 14 \int_{\{u\}\times [0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2} \, |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|_{\hat{\gamma}}^2 \,\mathrm{dA}_{\gamma}\,\mathrm{d}\underline{u} \\ = &\: \frac 14 \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, (|\frac{\partial\hat{\gamma}_m^{(vac)}}{\partial\underline{u}}|_{\hat{\gamma}_m^{(vac)}}^2 (\Phi_m^{(vac)})^2 - |\frac{\partial\hat{\gamma}}{\partial\underline{u}}|_{\hat{\gamma}}^2 \Phi^2) \,\mathrm{dA}_{\mathring{\gamma}}\, \mathrm{d} \underline{u} \\ = &\: \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, (f_m + \frac 1{4n(m)}\frac{\partial F_m}{\partial\underline{u}} ) \,\mathrm{dA}_{\mathring{\gamma}}\, \mathrm{d} \underline{u} + O(2^{-m}). \end{split} \end{equation} We then integrate by parts and use \eqref{eq:n.m.compare} and \eqref{eq:final.approx.Fm} to obtain \begin{equation}\label{eq:final.approx.final.2} \begin{split} \left| \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, \frac 1{4n(m)}\frac{\partial F_m}{\partial\underline{u}} \,\mathrm{dA}_{\mathring{\gamma}}\, \mathrm{d} \underline{u}\right| \lesssim 2^{-m}. \end{split} \end{equation} Plugging \eqref{eq:final.approx.final.2} into \eqref{eq:final.approx.final.1}, taking the $m\to +\infty$ limit, and using the first conclusion in Proposition~\ref{prop:f.approx}, we obtain \begin{equation*} \begin{split} \mbox{RHS of \eqref{eq:final.approx.convergence.hard}} = \lim_{m\to +\infty} \int_{[0,\underline{u}_*]\times \mathbb S^2} \varphi \Omega^{-2}\, f_m \,\mathrm{dA}_{\mathring{\gamma}}\, \mathrm{d} \underline{u} =&\: \mbox{LHS of \eqref{eq:final.approx.convergence.hard}}. \end{split} \end{equation*} \qedhere \end{proof} The same construction as Proposition~\ref{prop:final.approx} can be carried out on the $\underline{u}=0$ hypersurface in a completely analogous manner. We state this as a proposition: \begin{proposition}\label{prop:final.approx.1} Proposition~\ref{prop:final.approx} holds on $\underline{H}_0$ after replacing $\underline{u}\mapsto u$, $\mathrm{d}\nu_{\mathrm{init}} \mapsto \mathrm{d}\underline{\nu}_{\mathrm{init}}$. 
\end{proposition} \subsection{Proofs of Theorem~\ref{thm:main.local.dust} and Theorem~\ref{thm:reverse.Burnett}}\label{sec:approx.final} \begin{proof}[Proof of Theorem~\ref{thm:main.local.dust}] We first consider the case that the strongly angularly regular reduced characteristic initial data set satisfies the additional assumption that $\mathrm{supp}(\mathrm{d}\nu_{\mathrm{init}}) \subset [0,\underline{I} ]\times U^c$ and $\mathrm{supp}(\mathrm{d}\underline{\nu}_{\mathrm{init}}) \subset [0,I]\times U^c$ for some non-empty open $U\subset \mathbb S^2$. We first apply Propositions~\ref{prop:final.approx} and \ref{prop:final.approx.1} in the previous subsection so as to obtain a sequence of reduced characteristic initial data sets (see Section~\ref{sec:reduced.data}) for the Einstein vacuum equations. By Lemma~\ref{lem:suff.cond.on.data}, the sequence of vacuum characteristic initial data sets corresponding to this sequence of vacuum reduced characteristic initial data sets satisfies the assumption of Theorem~\ref{main.thm}. Therefore, Theorem~\ref{main.thm} shows that there exists $\epsilon>0$ so that the sequence of vacuum solutions arising from this sequence of data admits a subsequence that converges to an angularly regular weak solution to the Einstein--null dust system in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ whenever $u_*\in (0,I]$ and $\underline{u}_* \in (0,\epsilon]$. Because of the convergence statements \eqref{eq:final.approx.convergence.easy} and \eqref{eq:final.approx.convergence.hard}, it follows moreover that this solution indeed achieves the prescribed initial data. We have thus proven the existence part of the theorem. Uniqueness then follows from Theorem~\ref{thm:uniqueness}. Finally, in the general case where $\mathrm{d}\nu_{\mathrm{init}}$ or $\mathrm{d} \underline{\nu}_{\mathrm{init}}$ is not supported away from some angular direction, we can cut off and use the finite speed of propagation to reduce to the previous case. \qedhere \end{proof} \begin{proof}[Proof of Theorem~\ref{thm:reverse.Burnett}] This is similar to the proof of Theorem~\ref{thm:main.local.dust} so we will be brief. Given an angularly regular weak solution to the Einstein--null dust system as in Theorem~\ref{thm:main.local.dust}, we first approximate the data using Propositions~\ref{prop:final.approx} and \ref{prop:final.approx.1} and then use Lemma~\ref{lem:suff.cond.on.data} and Theorem~\ref{main.thm} to obtain a limiting angularly regular weak solution $(\widetilde{\mathcal M},g_\infty)$ (for $u_{**}$ sufficiently small) to the Einstein--null dust system. By definition, $(\widetilde{\mathcal M},\,g_\infty)$ is a limit of a sequence of smooth solutions $(\widetilde{\mathcal M},\,g_n)$ to the Einstein vacuum equations in the sense of Theorem~\ref{main.thm}. On the other hand, by Theorem~\ref{thm:uniqueness}, $(\widetilde{\mathcal M},\,g_\infty) = (\widetilde{\mathcal M},\,g\restriction_{\widetilde{\mathcal M}})$. Combining these two facts gives the conclusion of the theorem. \qedhere \end{proof} \section{Relation with the formation of trapped surfaces}\label{sec:trapped.surfaces} In this final section, we discuss a connection of high-frequency limits and null dust shells with Christodoulou's work \cite{Chr} on the formation of trapped surfaces in vacuum. As is well known, Penrose proved that a vacuum spacetime (or more generally a spacetime obeying the null energy condition) with a non-compact Cauchy hypersurface and a \emph{trapped surface} must be future causally geodesically incomplete.
In a monumental work \cite{Chr}, Christodoulou showed moreover that trapped surfaces could form dynamically in vacuum spacetimes from characteristic initial data which are arbitrarily dispersed. In particular, he found open sets of initial data such that there are no trapped surfaces initially, but a trapped surface is formed in the dynamical evolution. We will recall the results of \cite{Chr} in \textbf{Section~\ref{sec:Chr.trapped.result}}; see also \cite{An.AH, AnAthanasiou, AL, Jaffe, KLR, KlRo.scarred, KlRo.trapped, Le, Li.Schwarzschild, LiLiu, LiMei, LiYu.glue, LR2, Yu.Maxwell, Yu.CMP} and references therein for various extensions. Christodoulou's construction is based on what he called the \emph{short pulse method}, in which the large initial data are concentrated on a short length scale of size $\delta$. It is precisely the short length scale that allowed Christodoulou to propagate a hierarchy of $\delta$-dependent estimates for the geometric quantities so that the estimates can be closed despite this being a large data problem. We will show below (see \textbf{Section~\ref{sec:connection}}) that if we take the $\delta \to 0^+$ limit in Christodoulou's short pulse ansatz, then one obtains a limiting spacetime which is not vacuum, but solves the Einstein--null dust system with a null dust shell (in a manner similar to the main results of this paper). The limiting solution in fact coincides with the Synge--Gibbons--Penrose construction (see \textbf{Section~\ref{sec:Gibbons.Penrose}}) of collapsing null dust shell solutions in which trapped surfaces form dynamically. In particular, one could think of Christodoulou's solutions as ``approximating'' the spacetimes with collapsing null shells. \subsection{Christodoulou's trapped surface formation result}\label{sec:Chr.trapped.result} In \cite{Chr}, Christodoulou proved the dynamical formation of trapped surfaces by considering a characteristic initial value problem, where the initial data are prescribed on two intersecting null hypersurfaces. Before describing Christodoulou's data, first define $\mathcal M^- := \{ (u,\underline{u}, \vartheta): 0 < u \leq \underline{u}+1 <1 ,\,\vartheta\in \mathbb S^2\}$ to be the past light cone of a point in Minkowski spacetime\footnote{In polar coordinates, the Minkowski metric is given by $m= -dt^2 + dr^2 + r^2 \mathring{\gamma}_{\mathbb S^2(1)}$. Here, the null coordinates correspond to $u = \frac 12 (t-r) +1$, $\underline{u} = \frac 12 (t+r)$. Thus in the $(t,r,\vartheta)$ coordinate system, we have $\mathcal M^- = \{(t,r,\vartheta): -2< t+r < 0,\,-2<t-r<0\}$, which is a truncated subset of the past light cone of the origin in Minkowski spacetime.}, i.e.~consider the metric $g$ on $\mathcal M^-$ taking the form $$g = -2\Omega^2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u}\otimes \mathrm{d} u)+\gamma_{AB}(\mathrm{d}\theta^A-b^A\mathrm{d} u)\otimes (\mathrm{d}\theta^B-b^B\mathrm{d} u)$$ with $\Omega^2\restriction_{\mathcal M^-} = 1$, $b\restriction_{\mathcal M^-} = 0$ and $\gamma\restriction_{\mathcal M^-} = (\underline{u} - u + 1)^2 \mathring{\gamma}_{\mathbb S^2(1)}$, where $\mathring{\gamma}_{\mathbb S^2(1)}$ is the round metric on the $2$-sphere with radius $1$. For the characteristic initial value problem that Christodoulou considered, one initial characteristic hypersurface is $\underline{H}_0:= \{ (u,\underline{u}, \vartheta) \in \mathcal M^-: \underline{u} = 0\}$ with the induced geometry.
The other initial characteristic hypersurface is given by $H_{0} = \{u = 0\} \times [0,\delta]\times \mathbb S^2$, and the data consist of a ``short pulse'' mentioned above, where $\delta>0$ is a small parameter. The following is a version\footnote{The original theorem in fact applies when the data are posed on past null infinity to obtain a semi-global spacetime; see details in \cite{Chr}.} of the main theorem in \cite{Chr}, which gives a condition on the initial data on $H_{0}$ that guarantees the dynamical formation of trapped surfaces. \begin{theorem}[Christodoulou \cite{Chr}]\label{thm:Chr.FOTS} For every $B>0$ and $u_* < 1$, there exists $\delta = \delta(B,u_*) > 0$ sufficiently small such that if the initial $\hat{\chi}$ (denoted $\hat{\chi}_0$) prescribed on $H_{0}:=\{(u,\underline{u}, \vartheta): u = 0,\,\underline{u} \in [0,\delta],\,\vartheta \in \mathbb S^2 \}$ satisfies \begin{equation}\label{eq:Chr.upper.bd} \sum_{i\leq 5, \, j \leq 3} \delta^{\frac 12 + j} \|\slashed{\nabla}^i \slashed{\nabla}_4^j \hat{\chi}_0\|_{L^\infty_{\underline{u}} L^2(S_{u,\underline{u}})} \leq B, \end{equation} then there is a unique solution to the Einstein vacuum equations in double null coordinates in $\{(u,\underline{u}): u \in[0, u_*],\, \underline{u}\in [0,\delta]\} \times \mathbb S^2$ with the prescribed data. Moreover, if the initial data also verify the lower bound \begin{equation}\label{eq:Chr.lower.bd} \inf_{\vartheta \in \mathbb S^2} \int_0^\delta |\hat{\chi}_0|^2_\gamma (\underline{u}', \vartheta) \,\mathrm{d} \underline{u}'\geq M_* > 2 (1-u_*), \end{equation} then, after choosing $\delta$ to be smaller (depending on $B$, $u_*$ and $M_*$) if necessary, the sphere $S_{u_*,\delta}$ is a trapped surface. \end{theorem} We remark that by definition, the sphere $S_{u_*,\delta}$ is a trapped surface exactly when the inequalities $\slashed{\mathrm{tr}}\chi<0$ and $\slashed{\mathrm{tr}}\underline{\chi}<0$ hold pointwise on $S_{u_*,\delta}$. As already noted in \cite{LR2}, the scaling of the initial data in Theorem~\ref{thm:Chr.FOTS} is such that the $L^2(H_0)$ norms of the initial data for $\hat{\chi}$ and its angular derivatives are uniformly bounded as $\delta\to 0$. In particular, if we extend the data in a ``regular'' way to $\underline{u} \in [0,\underline{I}]$ while still requiring the data to concentrate in $\underline{u}\in [0,\delta]$, then we can study the $\delta \to 0^+$ limit in view of Theorem~\ref{ext.thm}. Before we discuss that, we consider in the next subsection the formation of trapped surfaces in the presence of a null dust shell. (This will turn out to be connected to the $\delta \to 0^+$ limit of Christodoulou's spacetimes; see Section~\ref{sec:connection}.) \subsection{The Synge--Gibbons--Penrose construction}\label{sec:Gibbons.Penrose} While showing that trapped surfaces form dynamically in \emph{vacuum} is a very difficult problem, showing that trapped surfaces can arise from the collapse of a null dust shell is significantly easier. In fact, the null dust shell example of Synge \cite{Synge} already demonstrates the dynamical formation of trapped surfaces. Other examples were later considered in the works of Gibbons \cite{Gibbons.shell} and Penrose \cite{Penrose.shell}. The setup of \cite{Gibbons.shell, Penrose.shell} is as follows. (The example in \cite{Synge} is a special case where the function $m: \mathbb S^2\to \mathbb R_{>0}$ below is a constant function.)
Consider a spacetime $\mathcal M$ with a null dust shell supported on $\mathcal N$, which is an ingoing null hypersurface. The spacetime is given as $\mathcal M = \mathcal M^+ \cup \mathcal M^- \cup \mathcal N$, where $\mathcal M^-$ is the spacetime to one side of $\mathcal N$ and $\mathcal M^+$ is the spacetime to the other side of $\mathcal N$. Assume moreover that $\mathcal M^-$ is isometric to a region in Minkowski spacetime in exactly the same way as in Section~\ref{sec:Chr.trapped.result}. Introduce now a double null coordinate system so that the metric is of the form \begin{equation}\label{eq:double.null.trapped.surface} g = -2\Omega^2(\mathrm{d} u\otimes \mathrm{d}\underline{u}+\mathrm{d}\underline{u}\otimes \mathrm{d} u)+\gamma_{AB}(\mathrm{d}\theta^A-b^A\mathrm{d} u)\otimes (\mathrm{d}\theta^B-b^B\mathrm{d} u). \end{equation} Define, in exactly the same manner as in Section~\ref{sec:Chr.trapped.result}, $\mathcal M^- := \{ (u,\underline{u}, \vartheta): 0 < u \leq \underline{u}+1 < 1 ,\,\vartheta\in \mathbb S^2\}$ and impose that $\Omega^2\restriction_{\mathcal M^-} = 1$, $b\restriction_{\mathcal M^-} = 0$ and $\gamma\restriction_{\mathcal M^-} = (\underline{u} - u+1)^2 \mathring{\gamma}_{\mathbb S^2(1)}$, where $\mathring{\gamma}_{\mathbb S^2(1)}$ is the round metric on the $2$-sphere with radius $1$. As a result, it follows that in $\mathcal M^-$, $\slashed{\mathrm{tr}}\chi = \frac 2{\underline{u} - u+1}$ and $\slashed{\mathrm{tr}}\underline{\chi} = -\frac 2{\underline{u} - u+1}$ and all the other Ricci coefficients vanish. Define $\mathcal M^+ := \{(u,\underline{u},\vartheta): 0<u<1,\, 0\leq \underline{u} \leq f(u)\}$ for some decreasing function $f:[0,1]\to \mathbb R$ with $\lim_{u\to 1^-} f(u) = 0$. The precise choice of the metric in $\mathcal M^+$ does not matter much; we simply assume that it is a vacuum metric in a double null coordinate system \eqref{eq:double.null.trapped.surface}. Assume also that the metric coefficients $\Omega$, $\gamma$ and $b$ are continuous up to and across $\mathcal N$ and that all the Ricci coefficients \emph{except} for $\slashed{\mathrm{tr}}\chi$, $\hat{\chi}$ and $\omega$ are also continuous up to and across $\mathcal N$. Impose that the Ricci coefficient\footnote{Notice that $\hat{\chi}$ and $\omega$ could have a jump discontinuity across $\{\underline{u} = 0\}$ in this construction. (A jump discontinuity of $\hat{\chi}$ corresponds to an impulsive gravitational wave.) Nevertheless, whether $\hat{\chi}$ and $\omega$ have a jump discontinuity does not affect conclusions regarding trapped surface formation. } $\slashed{\mathrm{tr}}\chi$ has a jump discontinuity across $\mathcal N$ so that\footnote{We note that the propagation equation \eqref{eq:nu} essentially forces the jump $\slashed{\mathrm{tr}}\chi^+ - \slashed{\mathrm{tr}}\chi^-$ to be of this form.} $\slashed{\mathrm{tr}}\chi^+ - \slashed{\mathrm{tr}}\chi^- = -\frac{ m(\vartheta) }{(1-u)^2}$, for some smooth function $m:\mathbb S^2 \to \mathbb R_{\geq 0}$. By \eqref{eq:trch} (and taking an appropriate limit), this means that the spacetime has a null dust shell given by $$\mathrm{d}\nu_u = m(\vartheta) \delta(\underline{u}),$$ where $\delta(\underline{u})$ denotes the delta measure at $\underline{u} = 0$ and $m(\vartheta)$ is as before. Note also that with this definition of $\mathrm{d}\nu_u$, it is easy to check that the propagation equation \eqref{eq:nu} holds.
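As a quick sanity check on the jump condition (purely illustrative, with arbitrary numbers), the following Python sketch evaluates the Minkowskian value $\slashed{\mathrm{tr}}\chi^{-} = \frac{2}{1-u}$ on $\{\underline{u} = 0\}$ together with the post-shell value $\slashed{\mathrm{tr}}\chi^{+} = \frac{2}{1-u} - \frac{m(\vartheta)}{(1-u)^2}$, confirming that $\slashed{\mathrm{tr}}\chi^{+} < 0$ precisely when $m(\vartheta) > 2(1-u)$; this is the lower bound imposed in the next paragraph.
\begin{verbatim}
# Sanity check (illustrative only): outgoing expansion before/after
# crossing a null dust shell of density m at (u, ubar = 0).
def trchi_minus(u):
    return 2.0 / (1.0 - u)                      # Minkowski value on ubar = 0

def trchi_plus(u, m):
    return trchi_minus(u) - m / (1.0 - u) ** 2  # jump condition above

u_star = 0.9                                    # arbitrary choice of u_*
threshold = 2.0 * (1.0 - u_star)                # the critical shell density
for m in (0.5 * threshold, threshold, 2.0 * threshold):
    print(m, trchi_plus(u_star, m))             # negative exactly when m > 2(1 - u_*)
\end{verbatim}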
Suppose now that there are $M_*>0$ and $u_* \in (0,1)$ such that \begin{equation}\label{eq:null.shell.m.lower.bd} \inf_{\vartheta \in \mathbb S^2} m(\vartheta) \geq M_* > 2(1-u_*) >0. \end{equation} We claim that in fact the sphere $S_{u_*,\epsilon}$ is \emph{trapped} for $\epsilon>0$ sufficiently small. In order to prove that, it suffices to show that $\slashed{\mathrm{tr}}\underline{\chi}(u = u_*, \,\underline{u} = 0,\, \vartheta)<0$ and that\footnote{Recall again that $\slashed{\mathrm{tr}}\chi$ is not continuous across the hypersurface $\{\underline{u} = 0\}$!} $\slashed{\mathrm{tr}}\chi^+(u=u_*,\, \underline{u} = 0, \,\vartheta) < 0$ for all $\vartheta \in \mathbb S^2$. These are very easy to check: since $\slashed{\mathrm{tr}}\underline{\chi}$ is continuous across the null dust shell, it takes the Minkowskian value $\slashed{\mathrm{tr}}\underline{\chi}(u=u_*,\,\underline{u} = 0,\,\vartheta) = -\frac 2{1 - u_*}$; on the other hand, since $\slashed{\mathrm{tr}}\chi^-(u = u_*,\,\underline{u} = 0,\,\vartheta) = \frac 2{1- u_*}$ (taking Minkowskian value), the jump condition above gives $$\slashed{\mathrm{tr}}\chi^+(u = u_*,\,\underline{u} = 0,\,\vartheta) = \slashed{\mathrm{tr}}\chi^-(u = u_*,\,\underline{u} = 0,\,\vartheta) - \frac{ m(\vartheta) }{(1-u_*)^2} = \frac 2{1- u_*} - \frac{ m(\vartheta) }{(1-u_*)^2}.$$ In particular, this means that whenever $m(\vartheta)$ obeys the lower bound \eqref{eq:null.shell.m.lower.bd}, we have $\slashed{\mathrm{tr}}\chi^+<0$ everywhere on $S_{u_*, 0}$. Now since the metric is smooth in $\mathcal M^+ \cap \{ \underline{u} >0\}$, it follows that $S_{u_*,\underline{u}}$ is trapped for some $\underline{u}>0$ sufficiently small. This simple construction shows that a trapped surface is formed dynamically from the collapse of a null dust shell. \subsection{Connection between the Synge--Gibbons--Penrose construction and trapped surface formation in vacuum}\label{sec:connection} We now observe that by taking the $\delta\to 0^+$ limit in Christodoulou's construction in Theorem~\ref{thm:Chr.FOTS}, one obtains a solution to the Einstein--null dust system with a null dust shell as in Section~\ref{sec:Gibbons.Penrose}. To make this precise, we need slightly stronger assumptions than those in Theorem~\ref{thm:Chr.FOTS}. In order to streamline the exposition, let us first state a propagation of regularity result, before turning to the precise setup relating Christodoulou's result to Section~\ref{sec:Gibbons.Penrose}. The following propagation of regularity result is a small modification of \cite[Proposition~52]{LR2}, and can be proven in exactly the same manner. \begin{lemma}\label{lem:prop.of.singularities} Consider a characteristic initial value problem with initial data satisfying the assumptions of Theorem~\ref{ext.thm}.
\begin{enumerate} \item (Propagation of angular regularity) If $\forall i \in \mathbb N \cup\{0\}$, $\exists C_i>0$ such that the initial data satisfy (in addition to the assumptions of Theorem~\ref{ext.thm}) \begin{equation}\label{eq:trapped.surface.hr.assumption.1} \begin{split} \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K\}}\| \slashed{\nabla}^i \psi \|_{L^\infty_{\underline{u}} L^\infty(S_{0,\underline{u}})} + \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K\}}\| \slashed{\nabla}^i \psi \|_{L^\infty_{u} L^\infty(S_{u,0})} & \\ + \sum_{\psi_H \in \{\hat{\chi},\omega\}}\|\slashed{\nabla}^i \psi_H\|_{L^2_{\underline{u}} L^\infty(S_{0,\underline{u}})} + \sum_{\psi_{\underline{H}} \in \{\hat{\underline{\chi}},\underline{\omega}\}}\|\slashed{\nabla}^i \psi_{\underline{H}}\|_{L^2_{u} L^\infty(S_{u,0})} &\: \leq C_i. \end{split} \end{equation} Then $\forall i'\in \mathbb N\cup \{0\}$, $\exists C'_{i'} > 0$ (where for each $i'$, $C'_{i'}$ depends only on the constants in Theorem~\ref{ext.thm} and finitely many $C_i$'s) such that the following bounds hold in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$: \begin{equation}\label{eq:trapped.surface.hr.conclusion.1} \begin{split} \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K\}}\| \slashed{\nabla}^{i'} \psi \|_{L^\infty_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}})} & \\ + \sum_{\psi_H \in \{\hat{\chi},\omega\}}\|\slashed{\nabla}^{i'} \psi_H\|_{L^2_{\underline{u}} L^\infty_u L^\infty(S_{u,\underline{u}})} + \sum_{\psi_{\underline{H}} \in \{\hat{\underline{\chi}},\underline{\omega}\}}\|\slashed{\nabla}^{i'} \psi_{\underline{H}}\|_{L^2_{u} L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}})} &\: \leq C'_{i'}. \end{split} \end{equation} \item (Propagation of higher regularity in the ``regular region'') Suppose the assumptions of part (1) hold, and \begin{itemize} \item $\exists u_1,\,u_2,\,\underline{u}_1,\,\underline{u}_2$ with $0 \leq u_1 < u_2 \leq u_*$ and $0 \leq \underline{u}_1 < \underline{u}_2 \leq \underline{u}_*$, \item $\exists J,\, L\in \mathbb N \cup \{0\}$, and \item $\forall i\in \mathbb N\cup \{0\}$, $\exists \widetilde{C}_{i}^{(J,L)}>0$ \end{itemize} such that the initial data satisfy \begin{equation}\label{eq:trapped.surface.hr.assumption.2} \begin{split} &\: \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K,\hat{\chi},\omega\}} \sum_{\ell \leq L} \| \slashed{\nabla}^i \slashed{\nabla}_4^\ell \psi \|_{L^\infty_{\underline{u}}([\underline{u}_1,\underline{u}_2 ]; L^\infty(S_{0,\underline{u}}))} \\ &\: + \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K,\hat{\underline{\chi}},\underline{\omega}\}} \sum_{j\leq J} \| \slashed{\nabla}^i \slashed{\nabla}_3^j\psi \|_{L^\infty_{u}([u_1,u_2]; L^\infty(S_{u,0}))} \leq \widetilde{C}_{i}^{(J,L)}.
\end{split} \end{equation} Then $\forall i'\in \mathbb N\cup \{0\}$, $\exists \widetilde{C'}_{i'}^{(J,L)} > 0$ (where for each $i'$, $\widetilde{C'}_{i'}^{(J,L)}$ depends only on the constants in Theorem~\ref{ext.thm} and finitely many $C_i$'s and $\widetilde{C}_{i}^{(J,L)}$'s) such that the following estimates hold: \begin{equation}\label{eq:trapped.surface.hr.conclusion.2} \begin{split} \sup_{(u,\underline{u}) \in [u_1,u_2]\times [\underline{u}_1,\underline{u}_2]} \sum_{\psi \in \{\eta,\underline{\eta},\slashed{\mathrm{tr}}\chi,\slashed{\mathrm{tr}}\underline{\chi},K,\hat{\chi},\omega,\hat{\underline{\chi}},\underline{\omega}\}} \sum_{\substack{ j\leq J \\ \ell\leq L}} \| \slashed{\nabla}^{i'} \slashed{\nabla}_3^j \slashed{\nabla}_4^\ell \psi \|_{L^\infty_u L^\infty_{\underline{u}} L^\infty(S_{u,\underline{u}})} \leq \widetilde{C'}_{i'}^{(J,L)}. \end{split} \end{equation} \end{enumerate} \end{lemma} We remark that an important point of part (2) of Lemma~\ref{lem:prop.of.singularities} is that the higher regularity estimates hold even when the data are singular for $u<u_1$ or $\underline{u} < \underline{u}_1$. We now return to the discussion of the relation between Theorem~\ref{thm:Chr.FOTS} and null dust shells. Consider a sequence of characteristic initial data posed on $\underline{H}_0$ and on $H_0 = \{0\}\times [0, \underline{I}]\times \mathbb S^2$, where $\underline{H}_0$ is the backward Minkowskian light cone as in Theorem~\ref{thm:Chr.FOTS}. Fix a decreasing sequence $\delta_n \to 0$. On $H_0$, require the (smooth) characteristic initial data to obey the following: \begin{enumerate} \item (Christodoulou's conditions) Fix $M_*,\,B>0$ and $u_*\in (0,1)$. When restricted to $\underline{u}\in [0, \delta_n]$, \eqref{eq:Chr.upper.bd} and \eqref{eq:Chr.lower.bd} both hold (with $\delta$ replaced by $\delta_n$). \item (Additional angular regularity) Assume pointwise estimates for \emph{all} higher angular derivatives, i.e.~assume \eqref{eq:trapped.surface.hr.assumption.1} holds. \item (Additional regularity beyond short pulse) Impose that when restricted to $\underline{u} \in (\delta_m, \underline{I}]$, the metric components $(\gamma_n, \log\Omega_n, b_n)$ are uniformly bounded in the $C^k$ norm (with respect to derivatives tangential to $H_0$) for every $n\geq m$ and for every $k\in \mathbb N \cup \{0\}$. \end{enumerate} By Theorem~\ref{main.thm}, there exists an $\epsilon>0$ such that if $\underline{u}_* \leq \epsilon$, then for every $n\in \mathbb N$ there is a solution to the Einstein vacuum equations in the region $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$ with the prescribed initial data. Moreover, there exists a limiting solution to the Einstein--null dust system in $[0,u_*]\times [0,\underline{u}_*]\times \mathbb S^2$. Given the conditions (1)--(3) that we imposed above for the sequence of vacuum initial data, we can in fact conclude that the limiting solution to the Einstein--null dust system has the following features: \begin{enumerate} \item (Propagation of angular regularity) By part (1) of Lemma~\ref{lem:prop.of.singularities}, the improved angular regularity estimates \eqref{eq:trapped.surface.hr.conclusion.1} hold for the full sequence of vacuum solutions, and hence also hold for the limiting solution. \item (Regularity in the $\slashed{\nabla}_3$ direction) Since the data on $\underline{H}_0$ are smooth, by part (2) of Lemma~\ref{lem:prop.of.singularities}, the estimates \eqref{eq:trapped.surface.hr.conclusion.2} hold for $L = 0$ and for all $J \geq 0$ for the full sequence of vacuum solutions.
As in (1), this implies that the same estimates hold for the limiting solution. \item (Improved regularity away from $\{\underline{u} = 0\}$) For every $\widetilde{\delta}>0$ (since $\delta_n \to 0^+$), there exists $N\in \mathbb N$ such that for every $n\geq N$, the assumption \eqref{eq:trapped.surface.hr.assumption.2} holds for $(u_1, u_2, \underline{u}_1,\underline{u}_2) := (0, u_*, \widetilde{\delta}, \underline{u}_*)$ and any $J,\,L\in \mathbb N\cup\{0\}$. It follows that for the limiting spacetime, the estimates \eqref{eq:trapped.surface.hr.conclusion.2} hold away from $\{\underline{u} = 0\}$. In particular, the limiting spacetime metric is smooth away from $\{\underline{u} = 0\}$. \item (Continuity of $\eta$, $\underline{\eta}$, $\slashed{\mathrm{tr}}\underline{\chi}$, $\hat{\underline{\chi}}$ and $\underline{\omega}$) By Theorem~\ref{main.thm} and the definition of angular regularity (Definition~\ref{double.null.def.2}), the limiting spacetime has continuous $\eta$, $\underline{\eta}$. Using angular regularity and the improved $\slashed{\nabla}_3$ regularity established above, it follows that $\slashed{\mathrm{tr}}\underline{\chi}$, $\hat{\underline{\chi}}$ and $\underline{\omega}$ are also continuous. \item (Presence of a null shell) The Christodoulou conditions \eqref{eq:Chr.upper.bd} and \eqref{eq:Chr.lower.bd} imply that in the limit there is a null dust shell --- supported exactly on the $\{\underline{u} = 0\}$-hypersurface --- given by the measure $\mathrm{d}\nu_u = m(\vartheta) \delta(\underline{u})$, where $m:\mathbb S^2\to \mathbb R_{>0}$ is bounded above and bounded below away from $0$. (Correspondingly, this gives a jump of $\slashed{\mathrm{tr}}\chi$ across the $\{\underline{u} = 0\}$ hypersurface.) \end{enumerate} With this it is easy to conclude that the limiting spacetime is exactly one as in Section~\ref{sec:Gibbons.Penrose}, i.e.~there is a propagating null dust shell which drives the dynamical formation of trapped surfaces. This demonstrates a connection between Christodoulou's construction in \cite{Chr} and collapsing null dust shells.
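The following purely illustrative Python sketch (the pulse profile is an arbitrary choice, not Christodoulou's data) makes the last point quantitative: rescaling a fixed profile so that its $L^2$ norm is independent of $\delta$ forces $|\hat{\chi}_\delta|^2 \, \mathrm{d}\underline{u}$ to converge weakly to a point mass at $\underline{u} = 0$ as $\delta \to 0^+$, which is how a null dust shell of the form $\mathrm{d}\nu_u = m(\vartheta)\delta(\underline{u})$ emerges in the limit.
\begin{verbatim}
import numpy as np

# Illustration only: a short pulse  chihat_delta(u) = delta^{-1/2} g(u/delta)
# has fixed L^2 norm, while |chihat_delta|^2 du concentrates at u = 0.
def g(s):
    """A fixed bump profile supported in [0, 1]."""
    return np.where((s >= 0.0) & (s <= 1.0), np.sin(np.pi * s) ** 2, 0.0)

ubar = np.linspace(0.0, 1.0, 200001)
du = ubar[1] - ubar[0]
phi = np.cos(3.0 * ubar)                      # an arbitrary continuous test function

for delta in (0.5, 0.1, 0.02, 0.004):
    chihat = g(ubar / delta) / np.sqrt(delta) # L^2 norm independent of delta
    print(delta, float(np.sum(phi * chihat ** 2) * du))
# The integrals approach phi(0) * int_0^1 g(s)^2 ds = 3/8: the measure
# |chihat_delta|^2 du converges weakly to a point mass at ubar = 0.
\end{verbatim}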
\section{Related Work} \input{sections/background} \input{sections/models} \section{Empirical Evaluation} \label{sec:experiments} \input{sections/experiments} \section{Conclusion and Future Work} \label{sec:conclusion} \input{sections/conclusion} \section{Acknowledgements} \label{sec:acknowledgements} \input{sections/acknowledgements} \section{Appendix} \label{sec:appendix} \subsection{Probabilistic Soft Logic} Probabilistic soft logic (\PSL) is a statistical relational learning (SRL) framework that uses arithmetic and first-order-like logical syntax to define a specific type of probabilistic graphical model called a hinge-loss Markov random field (HL-MRF) \cite{bach:jmlr17}. To do this, \PSL \, derives potential functions, which take the form of hinges, from the provided rules. Data is used to instantiate several potential functions in a process called grounding. The resulting potential functions are then used to define the HL-MRF. The formal definition of an HL-MRF is as follows: \begin{definition}{Hinge-loss Markov random field.} Let $\mathbf{y} = (y_{1}, \cdots, y_{n})$ be a vector of $n$ variables and $\mathbf{x} = (x_{1}, \cdots, x_{n'})$ a vector of $n'$ variables with joint domain $\mathbf{D} = [0, 1]^{n + n'}$. Let $\mathbf{\phi} = (\phi_{1}, \cdots , \phi_{m})$ be a vector of $m$ continuous potentials of the form $\phi_{i}(\mathbf{y}, \mathbf{x}) = (\max\{\ell_{i}(\mathbf{y}, \mathbf{x}), 0\})^{p_{i}}$, where $\ell_{i}$ is a linear function of $\mathbf{y}$ and $\mathbf{x}$ and $p_{i} \in \{1,2\}$. Let $\mathbf{c} = (c_{1}, \cdots, c_{r})$ be a vector of $r$ linear constraint functions associated with index sets denoting equality constraints $\mathcal{E}$ and inequality constraints $\mathcal{I}$, which define the feasible set \begin{equation*} \Tilde{\mathbf{D}} = \left\{ (\mathbf{y},\mathbf{x}) \in \mathbf{D} \, \Bigg \vert \begin{array}{lr} c_{k}(\mathbf{y}, \mathbf{x}) = 0,& \forall k \in \mathcal{E}\\ c_{k}(\mathbf{y}, \mathbf{x}) \leq 0,& \forall k \in \mathcal{I}\\ \end{array} \right\}. \end{equation*} Then, for $(\mathbf{y}, \mathbf{x}) \in \Tilde{\mathbf{D}}$, given a vector of $m$ nonnegative parameters, i.e., weights, $\mathbf{w} = (w_{1}, \cdots, w_{m})$, a \textbf{hinge-loss Markov random field} $\mathcal{P}$ over random variables $\mathbf{y}$ and conditioned on $\mathbf{x}$ is a probability density defined as: \begin{equation} P(\mathbf{y} \vert \mathbf{x}) = \begin{cases} \frac{1}{Z(\mathbf{w}, \mathbf{x})} \exp (-\sum_{j = 1}^{m} w_{j} \phi_{j}(\mathbf{y}, \mathbf{x})) & (\mathbf{y}, \mathbf{x}) \in \Tilde{\mathbf{D}} \\ 0 & o.w. \end{cases} \label{eq:HL-MRF_Dist} \end{equation} where $Z(\mathbf{w}, \mathbf{x}) = \int_{\{\mathbf{y} \,\vert\, (\mathbf{y}, \mathbf{x}) \in \Tilde{\mathbf{D}}\}} \exp(-f_{\mathbf{w}}(\mathbf{y}, \mathbf{x})) \,d\mathbf{y}$, with $f_{\mathbf{w}}(\mathbf{y}, \mathbf{x}) = \sum_{j = 1}^{m} w_{j} \phi_{j}(\mathbf{y}, \mathbf{x})$, is the partition function for the conditional distribution. \end{definition} Rules in a \PSL \, model capture interactions between variables in the domain and can be in the form of a first-order logical implication or a linear arithmetic relation. Each rule is made up of predicates with varying numbers of arguments. Substitution of the predicate arguments with constants present in the data generates ground atoms that take on continuous values in the range $[0, 1]$. A logical rule must have a conjunctive clause in the body and a disjunctive clause in the head, while an arithmetic rule must be an inequality or equality relating two linear combinations of predicates.
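To make the preceding definition concrete, the following minimal Python sketch (our own illustration; it is not the \PSL \, implementation) evaluates the unnormalized energy $\sum_{j} w_{j} \phi_{j}(\mathbf{y}, \mathbf{x})$ of a toy HL-MRF with a single hinge potential derived from a logical rule via the relaxation described in the next paragraph. MAP inference minimizes this energy over $\mathbf{y} \in [0,1]^{n}$.
\begin{verbatim}
# Toy HL-MRF energy (illustration only; not the PSL implementation).
# The single potential comes from the rule  SimUser & Rating(U1) -> Rating(U2),
# whose Lukasiewicz relaxation is  max(0, sim + r1 - 1 - r2)  (squared if p = 2).
def hinge_potential(ell, p=1):
    """Wrap a linear function ell(y, x) into phi(y, x) = max(ell(y, x), 0)**p."""
    return lambda y, x: max(ell(y, x), 0.0) ** p

ell = lambda y, x: x[0] + x[1] - 1.0 - y[0]   # linear in y and x
phi = hinge_potential(ell, p=1)

def energy(y, x, potentials, weights):
    """Unnormalized energy sum_j w_j * phi_j(y, x); MAP inference minimizes this."""
    return sum(w * pot(y, x) for pot, w in zip(potentials, weights))

x = (0.9, 0.8)                # observed: SimUser(Alice, Bob), Rating(Alice, Alien)
for r2 in (0.2, 0.5, 0.7):    # candidate values for Rating(Bob, Alien)
    print(r2, energy((r2,), x, [phi], [1.0]))   # energy 0.5, 0.2, 0.0
\end{verbatim}
The energy vanishes once the predicted rating reaches $0.7$, i.e., once the ground rule is satisfied.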
A logical rule is translated as a continuous relaxation of Boolean connectives using \textit{Lukasiewicz} logic. Specifically, $P \psland Q$ results in the potential $\max(0.0, P \pslsum Q - 1.0)$, $P \pslor Q$ results in the potential $\min(1.0, P \pslsum Q)$, and $\pslneg Q$ results in the potential $1.0 - Q$. Each grounding of an arithmetic rule is manipulated to $\ell(\mathbf{y}, \mathbf{x}) \leq 0$ and the resulting potential takes the form $\max \{\ell(\mathbf{y}, \mathbf{x}), 0\}$. We now illustrate the process of instantiating an HL-MRF using \PSL \, with an example in the context of recommender systems. \begin{center} \begin{tabular}{ |l|r| } \hline $\pslpred{SimUser}(\pslarg{U1},\pslarg{U2}) \psland \pslpred{Rating}(\pslarg{U1}, \pslarg{M}) \pslthen \pslpred{Rating}(\pslarg{U2}, \pslarg{M})$ & (1) \\ $\pslneg \pslpred{SimUser}(\pslarg{U1}, \pslarg{U2}) \pslor \pslneg \pslpred{Rating}(\pslarg{U1}, \pslarg{M}) \pslor \pslpred{Rating}(\pslarg{U2}, \pslarg{M})$ & (2) \\ $\min\{1.0, (1.0 - \pslpred{SimUser}(\pslarg{Alice}, \pslarg{Bob})) \pslsum (1.0 - \pslpred{Rating}(\pslarg{Alice}, \pslarg{Alien})) \pslsum \pslpred{Rating}(\pslarg{Bob}, \pslarg{Alien})\}$ & (3) \\ \hline \end{tabular} \end{center} \begin{exmp} Consider a movie recommendation setting where the task is to predict the ratings users will provide for movies that they have not yet rated. We can encode the idea that similar users are likely to enjoy the same movies using the logical statement (1). Here, \pslpred{SimUser} is an observed predicate that represents the similarity between two users, and \pslpred{Rating} is the predicate that we are trying to predict, i.e., the rating a user will give a movie. \PSL \, first converts the rule to its disjunctive normal form, (2). Then, all possible substitutions of constants for the variable arguments in the predicates of the rule are made to create ground atoms, and all combinations of ground atoms that can form a ground rule are generated. Finally, by utilizing the \textit{Lukasiewicz} relaxation, a hinge-loss function is created for each ground rule. For instance, if we have data with users $\pslarg{U} = \{Alice, Bob\}$ and movies $\pslarg{M} = \{Alien\}$, this results in the hinge-loss function (3). \end{exmp} We refer the reader to \cite{bach:jmlr17} for a more detailed description of \PSL. \section{Background} \label{sec:background} We begin by briefly reviewing related work upon which our approach builds. \subsection{Fairness in Recommender Systems} Methods for addressing fairness can occur at three stages of a recommender pipeline: pre-process, in-process, and post-process \cite{mehrabi2019survey}. Pre-processing techniques transform the data so that discrimination characteristics are removed prior to model training \cite{lahoti2019ifair, NIPS2017_6988}. In-processing techniques attempt to remove discrimination during the model training process by incorporating changes into the objective function. Post-processing techniques treat the learned model as a black box and modify the output to remove discrimination \cite{FairnessExposureInRankings:Singh, equalityOfOpportunity:Hardt, LinkedInReRanking:Geyik, MatchmakingFairness:Paraschakis, fair-Reranking:recsys:2019}. Post-processing techniques are particularly attractive to industry since their treatment of the predictor as a black box makes for a manageable integration into an existing pipeline \cite{AdressingAlgorithmicBias:Gathright, LinkedInReRanking:Geyik}. Recent work has shown the effectiveness of both adversarial learning \cite{AdversariallyLearningFairRepresentations:Beutel, Louizos:CoRR2016, Madras:CoRR2018} and regularization \cite{FairnessAwareRegularizer, DBLP:journals/corr/YaoH17, PairwiseFairness}. Our methods can be used as either an in-processing or post-processing method, and build upon the line of research that addresses fairness via regularization.
\subsection{Hybrid Recommenders using Probabilistic Soft Logic} \label{sec:Hyper} Probabilistic soft logic (\PSL) is a probabilistic programming language that has been shown to be effective for hybrid recommender systems \cite{kouki:recsys15}. \PSL's advantages include the ability to easily write interpretable, extendable, and explainable hybrid systems \cite{kouki:recsys17}. \PSL \, models specify probabilistic dependencies using logical and arithmetic rules; the rules, combined with data, are translated into a conditional random field referred to as a \emph{hinge-loss Markov random field (HL-MRF)} \cite{bach:jmlr17}. Given a set of evidence $\mathbf{x}$ and continuous unobserved variables $\mathbf{y}$, the inference objective is given by: \vspace{-1mm} \begin{equation} \min_{\mathbf{y} \in [0, 1]^n} \quad \sum_{i=1}^{k} w_{i} \phi_{i}(\mathbf{y}, \mathbf{x}) \label{eq:Rec_Obj} \end{equation} \noindent where $k$ is the number of unique hinge-loss potential functions, $\phi_i(\cdot)$, and $w_i$ are the corresponding scalar weights. In addition to expressivity, an important advantage of HL-MRFs is their scalability; inference is convex, and a variety of specialized optimizers have been proposed \cite{srinivasan:aaai20} (more details about \PSL \, in \secref{sec:appendix}). Following Kouki et al. \cite{kouki:recsys15}, the following collection of rules expresses a simple, intuitive hybrid recommender model: \paragraph{\bf Demographic and Content Similarity:} \, Demographic-based approaches are built upon the observation that users with similar demographic properties will tend to make similar ratings. Likewise, content-based approaches are built upon the observation that items with similar content will be rated similarly by users. This is different from the collaborative filtering approach, as the rating patterns of users and items are not considered in the similarity calculation. \vspace{-1mm} \begin{gather*} \pslpred{Rating}(\pslarg{U1}, \pslarg{I}) \psland \pslpred{SimUserDemo}(\pslarg{U1}, \pslarg{U2}) \pslthen \pslpred{Rating}(\pslarg{U2}, \pslarg{I}) \\ \pslpred{Rating}(\pslarg{U}, \pslarg{I1}) \psland \pslpred{SimItemContent}(\pslarg{I1}, \pslarg{I2}) \pslthen \pslpred{Rating}(\pslarg{U}, \pslarg{I2}) \end{gather*} \vspace{-1mm} The predicate $\pslpred{Rating}(\pslarg{U}, \pslarg{I})$ represents the normalized value of the rating that user $\pslarg{U}$ provided for item $\pslarg{I}$.\\ $\pslpred{SimUserDemo}(\pslarg{U1}, \pslarg{U2})$ and $\pslpred{SimItemContent}(\pslarg{I1}, \pslarg{I2})$ represent the similarity of users $\pslarg{U1}$ and $\pslarg{U2}$ and items $\pslarg{I1}$ and $\pslarg{I2}$. \paragraph{\bf Neighborhood-based Collaborative Filtering:} \, Neighborhood-based collaborative filtering methods capture the notion that users that have rated items similarly in the past will continue to rate new items similarly.
An analogous and transposed notion applies to items, i.e., items that have been rated similarly by many of the same users will continue to be rated similarly. Similarity in this context is based solely on rating patterns and can be measured using various metrics. \vspace{-1mm} \begin{gather*} \pslpred{Rating}(\pslarg{U1}, \pslarg{I}) \psland \pslpred{SimUsers}(\pslarg{U1}, \pslarg{U2}) \pslthen \pslpred{Rating}(\pslarg{U2}, \pslarg{I}) \\ \pslpred{Rating}(\pslarg{U}, \pslarg{I1}) \psland \pslpred{SimItems}(\pslarg{I1}, \pslarg{I2}) \pslthen \pslpred{Rating}(\pslarg{U}, \pslarg{I2}) \end{gather*} \vspace{-1mm} $\pslpred{SimUsers}(\pslarg{U1}, \pslarg{U2})$ and $\pslpred{SimItems}(\pslarg{I1}, \pslarg{I2})$ represent the similarity of users $\pslarg{U1}$ and $\pslarg{U2}$ and items $\pslarg{I1}$ and $\pslarg{I2}$, respectively. \paragraph{\bf Local Predictor Prior:} \, One of the advantages of the \Hyper \, system is its ability to combine multiple recommendation algorithms into a single model in a principled fashion. Recommender predictions are incorporated as non-uniform priors in the \PSL \, model using the pattern shown below. \begin{gather*} \pslpred{LocalPredictor}(\pslarg{U}, \pslarg{I}) = \pslpred{Rating}(\pslarg{U}, \pslarg{I}) \end{gather*} The predicate $\pslpred{LocalPredictor}(\pslarg{U}, \pslarg{I})$ represents the prediction made by the external recommendation algorithm. \paragraph{\bf Mean-Centering Priors:} \, Based on the above rules, ratings are propagated across similar users, and if two users have different average ratings, then these ratings may actually bias one another too much. To counter this effect, the following rules bias ratings towards the average user and item rating calculated from the observed ratings: \begin{gather*} \pslpred{AverageUserRating}(\pslarg{U}) = \pslpred{Rating}(\pslarg{U}, \pslarg{I}) \\ \pslpred{AverageItemRating}(\pslarg{I}) = \pslpred{Rating}(\pslarg{U}, \pslarg{I}) \end{gather*} The predicates $\pslpred{AverageUserRating}(\pslarg{U})$ and $\pslpred{AverageItemRating}(\pslarg{I})$ represent the average normalized value of the ratings associated with user $\pslarg{U}$ and item $\pslarg{I}$, respectively. \subsection{FairPSL} Farnadi et al. \cite{fairHyper:RecSys} introduced two collections of rules that can be added to a \PSL \, recommender system to address disparities derived from training data and observations. The authors use the same metrics we consider as proxies to measure this notion of disparity. Our work is different in that we give the modeler a finer level of control. Rather than attempting to capture multiple notions of fairness at once, we propose techniques that integrate specific fairness metrics into the \PSL\, inference objective as regularizers. In this way, the modeler is able to tune the degree of the specific metric to the domain they are working in simply by adjusting the weight of the additional rules. \subsection{In-Process Fairness Intervention} The baseline hybrid recommender system (\secref{sec:Hyper}) uses cosine similarity for the similarity predicates and three local predictors: non-negative matrix factorization (NMF) \cite{lee2001algorithms}, biased singular value decomposition (SVD) \cite{koren2009matrix}, and a content-based multinomial Naive Bayes multi-class classifier with Laplace smoothing that is trained using the demographic and content information of the user and item, respectively. Five versions of the \PSL \, recommender system, extending the baseline model, are implemented: \begin{itemize} \item \textbf{\PSL \, Base:} demographic and content similarity, collaborative filtering, local predictor, and mean-centering rules. \item \textbf{\HyperFair (NP):} The \textbf{\PSL \, Base} model with the non-parity fairness rules. \item \textbf{\HyperFair (Val):} The \textbf{\PSL \, Base} model with the value fairness rules. \item \textbf{\HyperFair (NP + Val):} The \textbf{\PSL \, Base} model with both the non-parity and value fairness rules. \item \textbf{Fair \PSL:} The \textbf{\PSL \, Base} model with rules introduced by Farnadi et al. \cite{fairHyper:RecSys}.
\end{itemize}
We also implemented three of the fair matrix factorization methods introduced by Yao and Huang \cite{DBLP:journals/corr/YaoH17}, using the hyperparameters and training methods chosen by the authors, i.e., an L2 regularization term $\lambda = 10^{-3}$ and a learning rate of $0.1$ for $500$ iterations of Adam optimization using the full gradient. The first is the baseline matrix factorization approach, which we refer to as \textbf{MF}. In the second and third methods, the matrix factorization objective function is augmented with the smoothed non-parity and value unfairness metrics; we refer to these as \textbf{MF NP} and \textbf{MF Val}, respectively. For both the \HyperFair \, models we introduced in this work and the matrix factorization methods, we set the fairness regularization parameter to $1.0$.
We measure the prediction performance of each of the models using the RMSE of the rating predictions. The unfairness of the predictions is measured using the metrics defined in \secref{sec:models}. The prediction performance and fairness metrics are measured across 5 folds of the \MovieLens \, dataset, and we report the mean and standard deviation for each model. We bold the best value, as well as any value whose difference from the best is not statistically significant under a paired-sample t-test at $p < 0.05$.
\begin{table*}[t]
\caption{Prediction performance (RMSE) and unfairness (Non-Parity and Value) of recommender systems on \MovieLens \, 1m}
\centering
\begin{tabular}{|l|c|c|c|}
\hline
\textbf{Model} & \textbf{RMSE (SD)} & \textbf{Non-Parity (SD)} & \textbf{Value (SD)}\\ \hline \hline
MF & $0.945 (1.0e\hbox{-}3)$ & $0.0371(1.6e\hbox{-}3)$ & $0.349 (6.0e\hbox{-}3)$ \\ \hline
MF NP & $0.945(1.0e\hbox{-}3)$ & $\mathbf{0.0106(2.0e\hbox{-}3)}$ & $0.351(6.3e\hbox{-}3)$ \\ \hline
MF Val & $0.950(6.4e\hbox{-}4)$ & $0.0446(3.0e\hbox{-}3)$ & $0.343(3.4e\hbox{-}3)$ \\ \hline \hline
Fair \PSL & $\mathbf{0.932(9.7e\hbox{-}4)}$ & $0.0274(9.6e\hbox{-}4)$ & $\mathbf{0.332(5.5e\hbox{-}3)}$ \\ \hline \hline
PSL Base & $\mathbf{0.931 (1.2e\hbox{-}3)}$ & $0.0270 (1.4e\hbox{-}3)$ & $\mathbf{0.330 (4.4e\hbox{-}3)}$ \\ \hline
\HyperFair (NP) & $0.945 (1.1e\hbox{-}2)$ & $0.0215 (5.0e\hbox{-}3)$ & $\mathbf{0.338(9.9e\hbox{-}3)}$ \\ \hline
\HyperFair (Val) & $\mathbf{0.932(1.1e\hbox{-}3)}$ & $0.0267(8.6e\hbox{-}4)$ & $\mathbf{0.333(6.9e\hbox{-}3)}$ \\ \hline
\HyperFair (NP + Val) & $\mathbf{0.932 (1.1e\hbox{-}3)}$ & $0.0274(1.4e\hbox{-}3)$ & $\mathbf{0.331(4.5e\hbox{-}3)}$ \\ \hline
\end{tabular}
\label{tab:fairness_results}
\end{table*}
We can see from \tabref{tab:fairness_results} that the \HyperFair \, models either improved on the targeted fairness metric over \textbf{\PSL \, Base} or achieved results that were not significantly different from the best \PSL \, model. Notably, \textbf{\HyperFair (NP)} achieved significantly lower non-parity unfairness than \textbf{\PSL \, Base}, and the anticipated degradation in the RMSE of the rating predictions was minimal. In fact, \textbf{\HyperFair (NP)} still achieved the same level of prediction accuracy as the highest performing matrix factorization method. Another interesting takeaway from \tabref{tab:fairness_results} is that when attempting to optimize for both non-parity and value unfairness simultaneously in \textbf{\HyperFair (NP + Val)}, the effectiveness of the non-parity rule decreases.
This effect can also be observed in the \textbf{Fair \PSL} model of \cite{farnadi2018fairness}, where a single set of rules attempts to address both fairness notions. A potential explanation is that the value unfairness and non-parity unfairness metrics oppose one another in this dataset: a set of ratings that performs well on value unfairness may perform poorly on non-parity unfairness, and vice versa. Fully understanding this behavior and controlling the tradeoff between the metrics is a direction for future work. When comparing the \PSL \, models to the matrix factorization models from \cite{DBLP:journals/corr/YaoH17}, we see that \PSL \, consistently achieves better RMSE and value unfairness, while matrix factorization achieves better non-parity unfairness values. It is important to note that \tabref{tab:fairness_results} only reflects metric values for the regularization parameter $w_f = 1.0$. In the next set of experiments, we show how tuning the non-parity unfairness regularization parameter can effectively yield predictions that fall within a desired non-parity unfairness threshold.
\subsection{Post-Process Fairness Intervention}
\label{post_process_exp_section}
Another way we employ our proposed methods is as an interpretable fair retrofitting procedure for predictions from an arbitrary black-box model. To show the effectiveness of our methods for this task, we create a simple \PSL \, model that contains only the NMF local predictor rule and the fairness rule. We refer to these models as \textbf{NMF + NP \PSL} and \textbf{NMF + Val \PSL} for the models with the non-parity and value unfairness rules, respectively. The weights of the fairness rules in the templates are varied to show the tradeoff between prediction performance and the fairness metric. For both models, we run experiments for all $w_f \in \{0.0, 0.01, 0.1, \cdots, 10000.0 \}$.
\begin{figure}[t]
\centering
\subfloat[]{\includegraphics[width=0.4\textwidth]{Figures/NPvsFairnessRegularizer.png}}\hfill
\subfloat[]{\includegraphics[width=0.4\textwidth]{Figures/ValuevsFairnessRegularizer.png}}
\caption{(a) Non-Parity unfairness and RMSE performance of \textbf{NMF + NP \PSL} vs the value of the fairness regularization parameter.\\ (b) Value unfairness and RMSE performance of \textbf{NMF + Val \PSL} vs the value of the fairness regularization parameter.}
\label{fig:FairnessvsRegularizer}
\end{figure}
\figref{fig:FairnessvsRegularizer} shows both the RMSE and the fairness of the ratings predicted by the NMF model. We see that \textbf{NMF + NP \PSL} begins to improve the fairness of the predictions without significantly degrading performance once the regularization parameter exceeds $10.0$; at $100.0$, there is a significant decrease in the non-parity unfairness, reaching nearly $0.0$, while the increase in RMSE is still not drastic. The \textbf{NMF + Val \PSL} model initially does not improve the value unfairness of the NMF rating predictions. We suspect that the estimator for the group average item rating initially biases the predictions in a detrimental way; improving the quality of this estimator is a direction for future investigation. There is a region where the value unfairness begins a downward trend with respect to the fairness regularizer, as desired. This is an encouraging result, showing that the weight of the fairness rule generally has the desired relationship with value unfairness.
\section{Introduction}
\label{sec:intro}
As the ubiquity of recommender systems continues to grow, concerns of bias and fairness are becoming increasingly urgent to address. An algorithm oblivious to any form of fairness has the potential to propagate, or even amplify, discrimination \cite{MenAlsoLikeShopping:Zhao, DiscriminationAds}. As a result, certain groups can be severely impacted by the recommendations provided. For instance, one study showed that an algorithm for targeted advertising of jobs in the STEM fields was delivering more advertisements to men than to women with similar professional backgrounds \cite{Lambrecht:biasedAdvertising}. The need to integrate fairness and ensure that different groups of users experience the same level of utility from recommender systems has been acknowledged by the artificial intelligence community \cite{mehrabi2019survey, pmlr-v81-ekstrand18b}.
We introduce techniques for integrating fairness metrics as regularizations of the joint inference objective function of a probabilistic graphical model. Our approach naturally leads to novel collections of rules that can be added to a probabilistic soft logic (\PSL) \cite{bach:jmlr17} model. Furthermore, the weights of the rules can be interpreted as regularization parameters, which can be tuned by the modeler or via weight learning \cite{bach:jmlr17}. This motivates a general framework for introducing multiple soft fairness constraints in a hybrid recommender system, which we refer to as \HyperFair.
\HyperFair \, builds upon the \Hyper \, recommender system introduced by Kouki et al. \cite{kouki:recsys15} by adding the ability to enforce multiple soft fairness constraints on the model predictions. This framework is general enough to capture previous work by Farnadi et al. \cite{fairHyper:RecSys}, who proposed \PSL \, modeling techniques for addressing disparities stemming from imbalanced training data and observation bias. We develop a generic technique, provide principled derivations of the soft constraints, and show how a set of fairness metrics can be precisely targeted. Our key contributions are as follows: 1) we introduce the \HyperFair \, framework for enforcing soft fairness constraints in a hybrid recommender system; 2) we show that non-parity and value unfairness can be written as linear combinations of hinge-loss potentials and can thus be integrated into the \PSL \, inference objective via template rules; 3) we perform an empirical analysis using the \MovieLens\, dataset and show how our fairness rules can be used either internally or as a method for retrofitting the output of a black-box algorithm to increase the fairness of predictions; and 4) we show our method improves fairness over baseline models and outperforms a state-of-the-art fair recommender \cite{DBLP:journals/corr/YaoH17} in terms of RMSE and value unfairness.
\newpage
\section{\HyperFair}
\label{sec:models}
In this section, we introduce \HyperFair, a framework for enforcing multiple soft fairness constraints in a hybrid recommender system. \HyperFair \, is a natural extension of \Hyper \cite{kouki:recsys15} that incorporates fairness metrics, $U$, via regularization of the HL-MRF MAP inference objective \eqref{eq:Rec_Obj}:
\begin{equation}
\min_{\mathbf{y} \in [0, 1]^n} \quad w_f U + \sum_{i}^{k} w_{i} \phi_{i}(\mathbf{y}, \mathbf{x})
\label{eq:regularized_rec_obj}
\end{equation}
where $w_f \in \mathbb{R}^+$ is the scalar regularization parameter. A fairness metric, $U$, in a \HyperFair \, model is expressed as a linear combination of hinge-loss potential functions and can then be written as a \PSL \, rule.
Defining fairness metrics is a particularly active and productive area of research, and there are many metrics that could be targeted by the proposed \HyperFair \, framework. Following the line of work in \cite{DBLP:journals/corr/YaoH17} and \cite{fairHyper:RecSys}, we focus on the unfairness metrics of non-parity and value unfairness defined in the following sections. These metrics were introduced in \cite{DBLP:journals/corr/YaoH17} specifically for addressing disparity stemming from biased training data in collaborative-filtering based recommender systems.
For the remainder of this paper, we will let $g$ represent the protected group of users, i.e., $g$ is the subset of all users present in the data that have been identified as possessing a protected attribute. Then, $\neg g$ represents the remaining subset of users that do not possess the protected attribute. We let $\textbf{R}$ be the set of ratings in the dataset, $m$ the number of predictions made by the model, $n$ the number of unique items in the dataset, and $v_{i,j}$ and $r_{i,j}$ the predicted and true rating user $i$ gives item $j$, respectively.
For the fairness metric definitions, we use $E_{g}[v]$ and $E_{\neg g}[v]$ to represent the average predicted ratings for $g$ and $\neg g$, respectively; $E_{g}[v]_{j}$ and $E_{\neg g}[v]_{j}$ to represent the average predicted ratings for item $j$ for $g$ and $\neg g$, respectively; and $E_{g}[r]_{j}$ and $E_{\neg g}[r]_{j}$ to represent the average true ratings for item $j$ for $g$ and $\neg g$, respectively.
\subsection{Non-parity Unfairness}
Non-parity unfairness, $U_{par}$, aims to minimize the disparity in the overall average predicted ratings of the protected and unprotected groups.
$$U_{par}(v) = |E_{g}[v] - E_{\neg g}[v]|$$
In this section, we motivate a collection of rules that can be added to the \PSL \, recommender system as an approach to minimize this metric. We start by introducing two new free variables, $y_{n + 1}$ and $y_{n + 2}$, to the inference problem \eqref{eq:Rec_Obj}, together with the following hard constraints, which do not break the convexity of \PSL \, inference:
\begin{gather*}
c_{1}(\mathbf{y}, \mathbf{x}) := y_{n + 1} - \frac{1}{\lvert \{(i,j) : ((i, j) \in \mathbf{R}) \land g_{i} \}\rvert} \sum_{(i,j) : ((i, j ) \in \mathbf{R}) \land g_{i}} v_{i,j} = 0 \\
c_{2}(\mathbf{y}, \mathbf{x}) := y_{n + 2} - \frac{1}{\lvert \{(i,j) : ((i, j ) \in \mathbf{R}) \land \neg g_{i} \}\rvert} \sum_{(i,j) : ((i, j ) \in \mathbf{R}) \land \neg g_{i}} v_{i,j} = 0
\end{gather*}
With the two additional hard constraints, the solution of the new optimization problem is a state such that $y^{*}_{n + 1} = E_{g}[v]$ and $y^{*}_{n + 2} = E_{\neg g}[v]$. The two hard constraints can be added to the \PSL \, model by introducing the following pair of rules:
\begin{gather*}
\pslpred{Rating}(+ \pslarg{U}, + \pslarg{I}) / m_{g} = \pslpred{ProtectedAvgRating}(\pslarg{c}) \, . \, \{\pslarg{U}: \pslpred{Protected}(\pslarg{U})\} \, \{\pslarg{I}: \pslpred{ProtectedItem}(\pslarg{I})\} \\
\pslpred{Rating}(+ \pslarg{U}, + \pslarg{I}) / m_{\neg g} = \pslpred{UnProtectedAvgRating}(\pslarg{c}) \, . \, \{\pslarg{U}: \pslpred{UnProtected}(\pslarg{U})\} \, \{\pslarg{I}: \pslpred{UnProtectedItem}(\pslarg{I})\}
\end{gather*}
where $m_{g}$ and $m_{\neg g}$ are the total numbers of ratings for the protected and unprotected groups, respectively, and are computed in a preprocessing step. The predicates $\pslpred{ProtectedAvgRating}(\pslarg{c})$ and $\pslpred{UnProtectedAvgRating}(\pslarg{c})$ hold the values of $y_{n+1}$ and $y_{n+2}$, respectively. We can now define the following two hinge-loss potentials:
\begin{align*}
\phi_{k+1} (\mathbf{y}, \mathbf{x}) = \max \Big \{1 - y_{n+1} - (1 - y_{n + 2}), 0 \Big\} && \phi_{k+2}(\mathbf{y}, \mathbf{x}) = \max \Big \{1 - y_{n+2} - (1 - y_{n + 1}), 0 \Big\}
\end{align*}
\noindent Observe that $U_{par} = \phi_{k+1} (\mathbf{y}^{*}, \mathbf{x}) + \phi_{k+2} (\mathbf{y}^{*}, \mathbf{x})$. This transformation allows us to push the regularizer in \eqref{eq:regularized_rec_obj} into the summation to create a valid \PSL \, objective function.
Formally, if we let $w_{k + 1} = w_{k + 2} = w_f$, then:
\begin{align}
\argmin_{\mathbf{y} \in [0, 1]^n} \quad w_f U_{par} + \sum_{i}^{k} w_{i} \phi_{i}(\mathbf{y}, \mathbf{x}) \quad \equiv \quad \argmin_{\mathbf{y'} \in [0, 1]^{n + 2}} \quad & \sum_{i}^{k + 2} w_{i} \phi_{i}(\mathbf{y'}, \mathbf{x}) \label{eq:NP_Rec_Intervention_Obj} \\
\textrm{s.t.} \quad & c_{1}(\mathbf{y'}, \mathbf{x}) = 0, \ c_{2}(\mathbf{y'}, \mathbf{x}) = 0 \nonumber
\end{align}
\noindent Furthermore, the right-hand side of \eqref{eq:NP_Rec_Intervention_Obj} is a valid HL-MRF that can be instantiated using \PSL. The following rule can be added to a \PSL \, model to obtain the two desired ground potentials $\phi_{k+1}$ and $\phi_{k+2}$:
\begin{gather*}
\pslpred{ProtectedAvgRating}(\pslarg{c}) = \pslpred{UnProtectedAvgRating}(\pslarg{c})
\end{gather*}
Altogether, this method for addressing non-parity unfairness results in a total of $4$ additional ground potentials and $3$ additional rules in the \PSL \, template and achieves precisely the desired semantics. Furthermore, the regularization term $w_f$ translates directly into a weight in \PSL \, that can be tuned by the modeler or via weight learning.
\subsection{Value Unfairness}
Next, we motivate our approach to addressing value unfairness. Value unfairness aims to minimize the expected inconsistency in the signed estimation error between the protected and unprotected user groups.
$$U_{val}(v) = \frac{1}{n} \sum_{j=1}^{n} |(E_{g}[v]_{j} - E_{g}[r]_{j}) - (E_{\neg g}[v]_{j} - E_{\neg g}[r]_{j})|$$
A key difference between this metric and non-parity is that the true values of the ratings are included in the definition of the metric and thus cannot be directly targeted during inference: in \PSL, the truth values of the target predicates are withheld until evaluation. Therefore, the approach we take is to estimate properties of the rating distribution prior to running the model in order to approximate the desired inference objective function \eqref{eq:regularized_rec_obj}.
We start the derivation of the fairness rules we will add to the \PSL \, model by augmenting the inference optimization problem \eqref{eq:Rec_Obj} with two hard constraints for every item in the dataset, that is, for all $j \in I = \{j : (i,j) \in \mathbf{R}\}$. Note that these constraints do not break the convexity properties of the original optimization problem.
\begin{gather*}
c_{1,j}(y_{n + j}, \mathbf{x}) := y_{n + j} - \frac{1}{\lvert \{i : ((i, j) \in \mathbf{R}) \land g_{i} \}\rvert} \sum_{i : ((i, j ) \in \mathbf{R}) \land g_{i}} v_{i,j} = 0 \\
c_{2,j}(y_{n + j + \lvert I \rvert}, \mathbf{x}) := y_{n + j + \lvert I \rvert} - \frac{1}{\lvert \{i : ((i, j) \in \mathbf{R}) \land \neg g_{i} \}\rvert} \sum_{i : ((i, j) \in \mathbf{R}) \land \neg g_{i}} v_{i,j} = 0
\end{gather*}
With these hard constraints, the setting of the free variables $y_{n + 1}, \cdots, y_{n + \lvert I \rvert}, y_{n + \lvert I \rvert + 1}, \cdots, y_{n + 2 \lvert I \rvert}$ in the optimal solution will be such that $(y^{*}_{n + 1} = E_g[v]_1), \cdots, (y^{*}_{n + \lvert I \rvert} = E_g[v]_{\lvert I \rvert}), (y^{*}_{n + 1 + \lvert I \rvert} = E_{\neg g}[v]_{1}), \cdots, (y^{*}_{n + 2 \lvert I \rvert} = E_{\neg g}[v]_{ \lvert I \rvert})$. These hard constraints are added to the \PSL \, inference objective with the following rule:
\begin{gather*}
\pslpred{Rating}(+ \pslarg{U}, \pslarg{I}) / @Max[1, \lvert \pslarg{U} \rvert] = \pslpred{PredGroupAvgItemRating}(\pslarg{G}, \pslarg{I}) \, .
\, \{\pslarg{U}: \pslpred{target}(\pslarg{U}, \pslarg{I}) \land \pslpred{Group}(\pslarg{G}, \pslarg{U})\}
\end{gather*}
$\pslpred{PredGroupAvgItemRating}(\pslarg{G}, \pslarg{I})$ represents the average of the predicted ratings that users in group $G$ gave item $I$. The term $G$ in this rule represents either the protected or the unprotected group. At the time of inference, we cannot calculate the average true values of the ratings for either the protected or unprotected group, $E_{g}[r]_{j}$ or $E_{\neg g}[r]_{j}$, since the true rating value information is withheld until evaluation. Instead, the group average item rating is estimated using the observed ratings, $\hat{E}_{g}[r]_{j} = \frac{1}{\lvert \{i : ((i, j) \in \mathbf{R}_{obs}) \land g_{i} \}\rvert} \sum_{i : ((i, j) \in \mathbf{R}_{obs}) \land g_{i}} r_{i,j}$ and, similarly, $\hat{E}_{\neg g}[r]_{j} = \frac{1}{\lvert \{i : ((i, j) \in \mathbf{R}_{obs}) \land \neg g_{i} \}\rvert} \sum_{i : ((i, j) \in \mathbf{R}_{obs}) \land \neg g_{i}} r_{i,j}$, where $\mathbf{R}_{obs}$ is the set of observed ratings. The observed group average item rating is calculated in a preprocessing step and is added to the model as an observed predicate, $\pslpred{ObsGroupAvgItemRating}(\pslarg{G}, \pslarg{I})$. We can now define the following set of hinge-loss potentials:
\begin{align*}
\phi_{k + j}(\mathbf{y}, \mathbf{x}) & = \max\Big \{(y_{n + j} - \hat{E}_{g}[r]_{j}) - (y_{n + j + \lvert I \rvert} - \hat{E}_{\neg g}[r]_{j}), 0 \Big \} \\
\phi_{k + j + \lvert I \rvert}(\mathbf{y}, \mathbf{x}) & = \max\Big \{(y_{n + j + \lvert I \rvert} - \hat{E}_{\neg g}[r]_{j}) - (y_{n + j} - \hat{E}_{g}[r]_{j}), 0 \Big \}
\end{align*}
Then, in the optimal state, $U_{val} \approx \frac{1}{n} \sum_{j = 1}^{2 \lvert I \rvert} \phi_{k + j} (\mathbf{y}^{*}, \mathbf{x}) =: \hat{U}_{val}$. This transformation allows us to push an approximation of the regularizer in \eqref{eq:regularized_rec_obj} into the summation of the HL-MRF MAP inference objective function. Formally, if we let $w_{k+j} = \frac{1}{n} w_f$ for all $1 \le j \le 2 \lvert I \rvert$, then:
\begin{align}
\argmin_{\mathbf{y} \in [0, 1]^n} \quad w_f \hat{U}_{val} + \sum_{i}^{k} w_{i} \phi_{i}(\mathbf{y}, \mathbf{x}) \quad \equiv \argmin_{\mathbf{y'} \in [0, 1]^{n + 2|I|}} \quad & \sum_{i}^{k + 2|I|} w_{i} \phi_{i}(\mathbf{y'}, \mathbf{x}) \label{eq:val_Rec_Intervention}\\
\textrm{s.t.} \quad & c_{1, j}(\mathbf{y'}, \mathbf{x}) = 0, \ c_{2, j}(\mathbf{y'}, \mathbf{x}) = 0, \ \forall j \in I \nonumber
\end{align}
Furthermore, the right-hand side of \eqref{eq:val_Rec_Intervention} is a valid HL-MRF that can be instantiated using \PSL. Specifically, the following rule in \PSL \, results in the desired potentials $\phi_{k + 1}, \cdots, \phi_{k + 2 \lvert I \rvert}$:
\begin{align*}
\pslpred{PredGroupAvgItemRating} (\pslarg{G1}, \pslarg{I}) & - \pslpred{ObsGroupAvgItemRating} (\pslarg{G1}, \pslarg{I}) \\
= \pslpred{PredGroupAvgItemRating} (\pslarg{G2}, \pslarg{I}) & - \pslpred{ObsGroupAvgItemRating} (\pslarg{G2}, \pslarg{I})
\end{align*}
This approach approximates the targeted fairness metric using statistics from the set of observations. The approximation is then transformed into a summation of hinge-loss potential functions that can be pushed into the \PSL \, inference objective function. Furthermore, the weight of the arithmetic rule in this intervention can be interpreted as a scaled version of the regularization parameter of the fairness metric in \eqref{eq:regularized_rec_obj}.
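For concreteness, both metrics can also be computed directly from a set of predictions. The following minimal R sketch (with hypothetical inputs; it assumes every item has ratings from both groups) evaluates $U_{par}$ and the per-item quantity inside $U_{val}$ exactly as defined above:
\begin{verbatim}
# Illustrative sketch: compute the two unfairness metrics from vectors
# of predicted ratings v, true ratings r, item indices, and a logical
# vector g marking protected-group users (one entry per rating).
non_parity <- function(v, g) {
  abs(mean(v[g]) - mean(v[!g]))
}
value_unfairness <- function(v, r, item, g) {
  diffs <- sapply(unique(item), function(j) {
    sel <- item == j
    (mean(v[sel & g]) - mean(r[sel & g])) -
      (mean(v[sel & !g]) - mean(r[sel & !g]))
  })
  mean(abs(diffs))   # average over the unique items
}
\end{verbatim}
In a \HyperFair \, model these quantities are not computed post hoc: they are encoded as the ground potentials derived above, so that MAP inference trades them off against the other rules according to $w_f$.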
\section{Related Works}
\label{sec:Related Works}
Generally, methods for addressing unfairness fall into three stages of a pipeline: pre-process, in-process, and post-process \cite{mehrabi2019survey}. Pre-processing techniques transform the data so that discriminatory characteristics are removed prior to model training \cite{lahoti2019ifair, NIPS2017_6988}. Our work does not fit into this category; instead, it can be viewed as either an in-processing or a post-processing method.
Post-processing techniques treat the learned model as a black box and modify its output to remove discrimination \cite{FairnessExposureInRankings:Singh, equalityOfOpportunity:Hardt, LinkedInReRanking:Geyik, MatchmakingFairness:Paraschakis}. This class of approaches offers a simple way to integrate fairness constraints into an existing pipeline, which is a common practical requirement for industry applications \cite{AdressingAlgorithmicBias:Gathright, LinkedInReRanking:Geyik}.
In-processing techniques attempt to remove discrimination during the model training process, either by incorporating changes into the objective function or by imposing a constraint. Recent work has shown the effectiveness of both adversarial learning \cite{AdversariallyLearningFairRepresentations:Beutel, Louizos:CoRR2016, Madras:CoRR2018} and regularization \cite{FairnessAwareRegularizer, DBLP:journals/corr/YaoH17, PairwiseFairness} as in-processing methods for addressing fairness. Our work specifically builds upon the line of research that addresses fairness via regularization.
\section{Introduction}
\label{sec:introduction}
Genome-wide association studies (GWASs) explore the associations between genetic variants, called single-nucleotide polymorphisms (SNPs), and traits \citep{visscher2012five}. They have been successfully applied to identify numerous genetic variants associated with complex human diseases \citep{buniello2019nhgri}. In GWASs, it is common to measure multiple traits underlying complex diseases because, due to pleiotropy, one genetic variant can influence multiple phenotypic traits \citep{solovieff2013pleiotropy}. For example, a number of genetic variants are associated with both fasting glucose and fasting insulin in type 2 diabetes \citep{Billings2010}, and GWASs found a variant in the gene SLC39A8 that influences the risk of both schizophrenia and Parkinson's disease \citep{pickrell2016detection}.
In the last decade, single-trait methods, in which the association between a genetic variant and each single trait is tested one at a time, have been widely adopted \citep{visscher201710}. However, this type of method suffers from several disadvantages. First, sometimes the association between a single SNP and a trait is too weak to be detected by itself. Second, it ignores the correlation structure among the traits, which leads to a loss of statistical power when the traits are truly correlated. Third, post-hoc combination of multiple tests without proper adjustment may lead to inflated type I error or compromised statistical power. As a result, there is an increasing need to develop powerful statistical methods that are capable of testing multiple traits simultaneously and properly.
Various statistical methods have been developed and applied to multiple-trait studies. Following an overview of multiple-trait methods by \citet{Yang2012}, we classify the existing methods into three categories. The first category combines test statistics or p-values from univariate tests. The O'Brien method \citep{o1984procedures, wei1985combining} combines the test statistics from the individual tests on each trait, weighted by inverse variance. The sum of powered score tests (SPU) \citep{Pan2014} and adaptive SPU (aSPU) \citep{Zhang2014} combine score test statistics derived from generalized estimating equations (GEE). The Trait-based Association Test that uses Extended Simes procedure (TATES) \citep{Sluis2013} exploits the correlation among the p-values from the univariate tests and generates a new test statistic. Fisher's method \citep{fisher1925statistical, yang2016efficient} and Cauchy's method \citep{liu2020cauchy} combine p-values of single-trait analyses and obtain the final p-values from known probability distributions. The second category reduces the dimensions of the multiple traits. Principal components of heritability (PCH) \citep{Klei2008} collapses the multiple traits to a linear combination of traits that maximizes the heritability and then tests the association based on the transformed traits. Canonical correlation analysis \citep{Ferreira2009} finds the linear combination of traits maximizing the correlation between a SNP and all traits; it is equivalent to multivariate analysis of variance (MANOVA) when there is only one SNP \citep{Sluis2013}. The third category relies on regression models. MultiPhen \citep{OReilly2012} regresses genotypes on phenotypes via a proportional odds logistic model. As mentioned, GEE has been utilized to generate score test statistics in aSPU \citep{Zhang2014}.
Besides linear models, kernel regression models (KMRs) also play a role in multiple-trait analysis, including multivariate kernel machine regression \citep{maity2012multivariate}, the multi-trait sequence kernel association test (MSKAT) \citep{Wu2016}, and Multi-SKAT \citep{dutta2019multi}. \citet{davenport2018powerful} extended KMRs to multiple binary outcomes.
Currently, several limitations still restrict the wide application of multiple-trait methods. First, many existing methods are unable to simultaneously analyze binary and continuous traits. For KMRs, handling multiple non-continuous traits can be both theoretically and computationally challenging, and it is unclear how to integrate multiple different data types (e.g., multi-omics) \citep{larson2019review}. MANOVA is not applicable to non-normal traits. Numerous GWASs are case-control studies, and the inability to process binary traits greatly limits a method's applicability. Second, methods may have inconsistent performance across scenarios depending on the number of traits, the strength of correlation, and the number of true associations, which are largely unknown in practice. Hence, there is a demand for methods with robust performance regardless of scenario. Third, although many methods claim the capability of handling covariates and confounders, few of them conduct the relevant simulations or real-data applications to demonstrate type I error control and performance. In fact, our simulation study indicates that the claims of some methods are inaccurate (see section \ref{sec:results}).
In this paper, we propose a multiple-trait adaptive Fisher's (MTAF) method based on the adaptive Fisher's (AF) method \citep{Song2016}. The remainder of this paper is organized as follows. In section \ref{sec:methods}, we describe the proposed method and introduce variations to tackle highly correlated traits. In section \ref{sec:results}, we evaluate the performance of MTAF using simulations and apply it to a real GWAS of substance addiction traits. In section \ref{sec:discussion}, we review the advantages and the limitations of the MTAF method and discuss our future work.
\section{Methods}
\label{sec:methods}
Suppose there are $n$ independent subjects. For each subject $i=1,\ldots,n$, there are $K$ traits $\bm{Y_i}=(y_{i1},\ldots,y_{iK})'$, and we write $\bm{Y_k}=(y_{1k},\ldots,y_{nk})'$ for the vector of the $k^{th}$ trait across subjects. Let $x_i \in \{0,1,2\}$ be the genotype of a SNP, coded as the number of minor alleles carried by subject $i$, and let $\bm{x} = (x_1,\ldots,x_n)$ be the vector of all subjects' genotypes for this SNP. To reflect real applications, we suppose each subject $i$ has $M$ covariates $z_{i1},\dots,z_{iM}$ and let $\bm{Z}=\{z_{im}\}_{n \times M}$. After adjusting for the covariates, we aim to test the association between the SNP and the $K$ traits under the null hypothesis $H_0$ that none of the $K$ traits is associated with the SNP. To construct the test statistic, the MTAF method combines the marginal p-values from the single-trait score tests in an adaptive way. Because the test statistic has no closed-form null distribution, we rely on permutation to obtain empirical p-values.
\subsection{Score Test}
The score test is one of the most popular tests in single-trait GWASs because it only requires fitting the null model and is thus computationally inexpensive.
For each SNP, a generalized linear model of the following form is assumed for the $k^{th}$ trait of subject $i$ with the $M$ covariates:
\begin{equation*}
g_{k}(E(Y_{ik})) = x_{i} \cdot \beta_{k} + \sum_{m=1}^{M} z_{im} \cdot \alpha_{mk},
\label{eq:glm}
\end{equation*}
where $\beta_k$ is the effect of the SNP and the $\alpha$'s are the effects of the $M$ covariates on the $k^{th}$ trait. $g_k(\cdot)$ is the link function, which is the identity function for continuous traits and the logit function for binary traits; allowing different link functions makes it possible to accommodate both continuous and binary traits. Under $H_0: \beta_{k} = 0$, the score test statistic for the SNP is:
\begin{equation*}
U_k = \sum_{i=1}^n (x_i-\hat{x}_i)(Y_{ik}-\hat{Y}_{ik}),
\label{eq:score}
\end{equation*}
where $\hat{x}_i$ is an estimate of $x_i$ and $\hat{Y}_{ik}$ is an estimate of $Y_{ik}$ under the null model. Denote by $\bm{V}$ the Fisher information matrix, which is the covariance matrix of $\bm{U}=(U_1,\ldots,U_K)$, so that $\mbox{Var}(U_k|H_0)=V_{kk}$, where $V_{kk}$ is the $k^{th}$ diagonal element of $\bm{V}$. Asymptotically, $U_k/\sqrt{V_{kk}} \sim \mathcal{N}(0,1)$, from which the p-values (either one-sided or two-sided) can be computed. We compute the p-values of the single-trait score tests using the R package \textbf{statmod} \citep{RJ-2016-024}.
\subsection{MTAF Method}
Denote by $p_1,\ldots,p_K$ the p-values of the score tests between the $K$ traits and the SNP. Let $S_k = - \sum_{i=1}^k{\log{p_{(i)}}}$, where $p_{(i)}$ is the $i^{th}$ smallest p-value, so that $S_k$ aggregates the evidence from the $k$ smallest p-values. The p-value of $s_k$ is then $p_{s_k} = P( S_k \ge s_k )$, where $s_k$ is the observed value of $S_k$; in practice, this p-value can be obtained by permutation. The proposed test statistic of the MTAF method is
\begin{equation*}
T_{MTAF} = \min_{1 \le k \le K} p_{s_k}.
\end{equation*}
\subsection{Permutation test}
Since it is intractable to obtain the p-value of the test statistic analytically, we apply a permutation procedure to obtain empirical p-values. We intend to test the conditional independence between the SNP and the traits given the covariates, i.e., $\bm{Y} \perp \bm{x} \mid \bm{Z}$. The permutation procedure should break the associations between $\bm{Y}$ and $\bm{x}$ while preserving the associations between $\bm{Y}$ and $\bm{Z}$ and between $\bm{x}$ and $\bm{Z}$. Simply permuting the genotype $\bm{x}$ leads to an inflated type I error rate because the correlation between the genotype and the covariates is destroyed. Following \citet{potter2005permutation} and \citet{werft2010glmperm}, we permute the residuals of the regression of $\bm{x}$ on $\bm{Z}$ for generalized regression models. In our method, we first regress the genotype on the covariates and then permute the residuals derived from the regression. We replace the original genotype with the permuted residuals to perform the score tests on the permuted data. Note that even when no covariate is explicitly included in the model, we still include an intercept as a covariate.
Specifically, we denote the vector of residuals from regressing $\bm{x}$ on $\bm{Z}$ by $\bm{e_{x}}$ and permute it $B$ times. In the $b^{th}$ permutation, we regress $\bm{Y_{k}}$ on $\bm{e_{x}^{(b)}}$ and obtain the score test p-value $p_{k}^{(b)}$ for the coefficient of $\bm{e_{x}^{(b)}}$. After $B$ permutations, we have a $(B+1) \times K$ matrix $\mathbbm{P}=\{p_{k}^{(b)} \}$.
Each element $p_{k}^{(b)}$ is the p-value for the $k^{th}$ trait in the $b^{th}$ permutation for $1 \leq b \leq B$, and $p_{k}^{(0)}$ is the observed p-value. Based on $\mathbbm{P}$, we can construct the MTAF test statistics for both the observed data and the permuted data. Given the $(B+1) \times K$ matrix of p-values $\mathbbm{P}$, we calculate the empirical p-values of the MTAF method for the observed and permuted data with the following steps:
\begin{enumerate}
\item For each row $b \in \{ 0,1,...,B \}$, we calculate $s_k^{(b)}$ and
\begin{equation*}
p_{s_k}^{(b)} = \frac{1}{B+1} \sum_{j=0}^B \mathbbm{1}{\{s_k^{(j)} \geq s_k^{(b)}\}},
\end{equation*}
where $\mathbbm{1}$ is the indicator function.
\item We then obtain a vector $\bm{t}=(t_{MTAF}^{(0)},t_{MTAF}^{(1)},\dots,t_{MTAF}^{(B)})$, where $t_{MTAF}^{(b)} = \min_{1 \le k \le K} p_{s_k}^{(b)}$.
\item The p-values of the MTAF test statistics are approximated by
\begin{equation*}
p_{MTAF}^{(b)} = \frac{1}{B+1} \sum_{j=0}^B \mathbbm{1}{\{ t_{MTAF}^{(j)} \leq t_{MTAF}^{(b)} \}},
\label{eq:approx_pval}
\end{equation*}
where $p_{MTAF}^{(b)}$ is the empirical p-value of the MTAF method for the $b^{th}$ permuted dataset for $1 \leq b \leq B$, and $p_{MTAF}^{(0)}$ is the empirical p-value for the observed data.
\end{enumerate}
To simplify the following discussion of the variations of the MTAF method, we define the steps above as an AF operator $AF\{\cdot\}$ mapping $\mathbbm{P}$ to $\bm{p} = (p_{MTAF}^{(0)},p_{MTAF}^{(1)},\dots,p_{MTAF}^{(B)})$.
\subsection{Combination of One-Sided P-values}
In practice, the traits of a complex disease tend to be intrinsically positively correlated; alternatively, based on prior knowledge, we can manually flip the direction of effects so that they point in the same direction. When the effects are in the same direction, combining one-sided p-values aggregates evidence for effects that tend to share the same sign and enjoys higher statistical power than combining two-sided p-values. Therefore, we recommend always combining one-sided p-values when it is appropriate. We separately combine the lower-tail p-values and the upper-tail p-values and then unify the two results using another round of the MTAF permutation. Specifically, we compute $\bm{p}_{lower} = AF\{\mathbbm{P}_{lower}\}$ and $\bm{p}_{upper} = AF\{\mathbbm{P}_{upper}\}$, and the empirical p-value of the observed data is the first element of $\bm{p}_{combo} = AF\{[\bm{p}_{lower} \ \bm{p}_{upper}] \}$.
\subsection{PCA of continuous traits}
Alternatively, a SNP may have strong associations not with the observed traits themselves but with hidden components underlying them, and such associations can be difficult to detect directly. PCA is widely used to reduce dimensions; it generates orthogonal linear combinations of variables that maximize variability. We introduce PCA into the MTAF method with the purpose of uncovering these hidden components. In the MTAF method, PCA generates $K$ independent principal components, and we detect the associations between the SNP and the principal components. Specifically, for continuous traits, we first regress $\bm{Y_k}$ on the covariates $\bm{Z}$ and denote the residuals by $\bm{e_{k}}$ for $1 \leq k \leq K$. PCA conducted on $\bm{e_{1}},\ldots,\bm{e_{K}}$ leads to $K$ principal components, which substitute for $\bm{Y_{1}},\ldots,\bm{Y_{K}}$. In the simulation study, the power of the MTAF method for correlated traits increases dramatically after applying PCA.
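To make the procedure concrete, the following minimal R sketch implements the AF operator and the residual-permutation scheme described above for continuous traits. It is an illustration under simplifying assumptions, not the released MTAF package: it uses one-sided t-test p-values from \texttt{lm} in place of the score tests computed with \textbf{statmod}, and it omits the lower-tail/upper-tail combination and the PCA variant.
\begin{verbatim}
af_operator <- function(P) {
  # P: (B+1) x K matrix of p-values; row 1 corresponds to the observed data
  B1 <- nrow(P)
  logp <- t(apply(-log(P), 1, sort, decreasing = TRUE))
  S <- t(apply(logp, 1, cumsum))               # S[b, k] = s_k^(b)
  P_s <- apply(S, 2, function(s) rank(-s, ties.method = "max") / B1)
  t_stat <- apply(P_s, 1, min)                 # T_MTAF for each row
  rank(t_stat, ties.method = "max") / B1       # empirical p-values
}

mtaf_test <- function(Y, x, Z, B = 1000) {
  # Y: n x K trait matrix; x: genotype vector; Z: covariate matrix
  e_x <- resid(lm(x ~ Z))                      # residualize genotype on Z
  one_sided_p <- function(g) apply(Y, 2, function(y) {
    fit <- summary(lm(y ~ g + Z))
    pt(coef(fit)["g", "t value"], df = fit$df[2], lower.tail = FALSE)
  })
  P <- rbind(one_sided_p(e_x),                 # row 1: observed data
             t(replicate(B, one_sided_p(sample(e_x)))))
  af_operator(P)[1]                            # empirical p-value
}

# Toy example under the null hypothesis:
set.seed(1)
n <- 200; K <- 5
Z <- cbind(rbinom(n, 1, 0.5), rbinom(n, 1, 0.5))
x <- rbinom(n, 2, 0.3)
Y <- matrix(rnorm(n * K), n, K)
mtaf_test(Y, x, Z, B = 200)                    # roughly Uniform(0, 1)
\end{verbatim}
Note that using the unpermuted residuals $\bm{e_x}$ for the observed row yields the same test for the genotype coefficient as using $\bm{x}$ itself, since $\bm{Z}$ is also in the model.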
Unlike the common practice of selecting only the top several principal components, it has been argued that using all principal components can yield greater power \citep{aschard2014maximizing}. Therefore, the MTAF method keeps all principal components, and this proves to be powerful. The MTAF method itself is powerful when signals are sparse, but usually the number of traits truly associated with the SNP and the underlying correlation structure are unknown. Hence, when analyzing continuous traits, we first apply the original MTAF method and the MTAF method with PCA separately and then combine the results from the two to obtain the final p-value. Specifically, we have $\bm{p}_{original}=AF(\mathbbm{P}_{original})$ and $\bm{p}_{pca}=AF(\mathbbm{P}_{pca})$. We then combine the two vectors $\bm{p}_{original}$ and $\bm{p}_{pca}$ into a matrix $\mathbbm{P}_{continuous}$. Finally, we have $\bm{p}_{continuous}=AF(\mathbbm{P}_{continuous})$, and the p-value is the first element of $\bm{p}_{continuous}$.
To process a mixture of binary and continuous traits, we first apply the MTAF method to the binary traits to obtain $\bm{p}_{binary}$ and then obtain $\bm{p}_{continuous}$ by the procedure above. We then combine the two p-value vectors to get $\mathbbm{P}_{mix} = [\bm{p}_{binary} \ \bm{p}_{continuous}]$, and the empirical p-value is the first element of $\bm{p}_{mix}=AF(\mathbbm{P}_{mix})$. In the MTAF method, PCA is used to handle continuous traits rather than binary traits. Although some literature considers generalized PCA \citep{landgraf2019generalized}, our method currently applies PCA only to continuous traits.
\subsection{Simulation Setup}
To evaluate the performance of the MTAF method, we conduct a simulation study. In each dataset, we simulate $1000$ subjects based on various parameters, such as the number and types of traits, the proportion of traits associated with the genotype, and the strength of the association. We consider $10$, $50$, and $100$ traits and simulate three scenarios: continuous traits only, binary traits only, and a mixture of the two. We assume a compound symmetry (CS) correlation structure underlying the traits with either weak correlation ($\rho=0.3$) or strong correlation ($\rho=0.6$). For the proportion of associated traits, we define the sparse scenarios as those in which $2\%$ of the traits are truly associated with the SNP and the dense scenarios as those in which $20\%$ are associated. However, when there are only $10$ traits, we set the number of associated traits to 1 and 4 for the sparse and dense scenarios, respectively. The detailed simulation steps are listed below.
First, we simulate the genotypes of the SNP. Since we focus on the association between single common SNPs and multiple traits, we simulate only one SNP genotype $x_i \in \{0,1,2\}$ for each subject $i$, such that $x_i \sim \textrm{Bin} (2, 0.3)$, where $0.3$ is the minor allele frequency (MAF) of the simulated SNP. Next, the traits for each subject $\bm{Y_i}=(Y_{i1},\ldots,Y_{iK})'$ are simulated via a linear model:
\begin{equation}
\bm{Y_i} = x_i\bm{\beta} + \bm{\epsilon}_i,
\label{eq:sim_1}
\end{equation}
where $\bm{\beta} = (\beta_1,\ldots,\beta_K )$ are the coefficients. The non-zero $\beta_k$'s are drawn from independent uniform distributions, and we select the parameters of these distributions so that the differences among methods are apparent. The $\bm{\epsilon}_i$'s are independently drawn from a multivariate Gaussian distribution $N(\mathbf{0},\bm{\Sigma})$.
To simulate correlated traits, $\bm{\Sigma}$ is generated such that the variances are sampled independently from an inverse gamma distribution $\textrm{Inv-Gamma}(4, 4)$ and the corresponding correlation matrix is CS with correlation $\rho$.
In addition, we consider simulation scenarios with two binary covariates $Z_{i1}$ and $Z_{i2}$ to investigate the performance of the MTAF method in the presence of confounders, such as gender and race in the real datasets. These covariates are simulated by dichotomizing underlying covariates that are linearly associated with the genotype. Specifically, $Z_{i1}$ is simulated by dichotomizing $x_i \eta_1 + \omega_{i1}$, and $Z_{i2}$ is simulated by dichotomizing $x_i \eta_2 + \omega_{i2}$, where $\eta_1$ and $\eta_2$ are randomly drawn from the uniform distribution $U(0.5,1)$, and $\omega_{i1}$ and $\omega_{i2}$ follow $\mathcal{N}(0,1)$. We then label values greater than their medians as ``1'' and the rest as ``0''. $\bm{Y_i}$ is simulated based on a linear model conditional on both the genotype and the covariates:
\begin{equation}
\bm{Y_i} = x_i \bm{\beta} + \bm{Z_i} \bm{\Gamma} + \bm{\epsilon}_i,
\label{eq:sim_2}
\end{equation}
where $\bm{Z_i} = (Z_{i1},Z_{i2})$ and $\bm{\Gamma}_{K \times 2}$ has coefficients drawn from an iid uniform distribution $U(0.5,1)$. To simulate binary traits, we first simulate the log-odds of $Y_{ik}=1$ by replacing the corresponding $Y_{ik}$ with $\textrm{logit}(E(Y_{ik}))$ in \eqref{eq:sim_1} and \eqref{eq:sim_2}, and we then draw the binary traits based on the corresponding probabilities. To evaluate the performance of the MTAF method, the competitor methods MSKAT, aSPU, TATES, MANOVA, MultiPhen, and minP are also applied to the simulated datasets for comparison.
\subsection*{Data availability}
The authors state that all data necessary for confirming the conclusions are represented fully within the article. The SAGE data was downloaded from dbGaP using accession number phs000092.v1.p1. The R software package for the MTAF method and our simulation code are available at \url{https://github.com/songbiostat/MTAF}.
\section{Results}
\label{sec:results}
\subsection{Simulation Results}
\subsubsection{Type I Error Rate}
First, to assess whether the MTAF method and the other methods can appropriately control type I error at the nominal level, we perform simulations under the null hypothesis, where $\beta_1=\dots=\beta_K=0$. The empirical p-values were calculated based on $1,000$ permutations, and the type I error rate was evaluated at the $0.05$ nominal level. In addition to aSPU with the default independence working structure, we evaluated aSPU equipped with an exchangeable correlation structure (aSPU-ex). Table \ref{tab:typeI_cont} shows that, when all traits were continuous, the empirical type I error of most methods was well controlled across different numbers of traits and strengths of correlation. We found that MultiPhen had inflated type I error, especially after adding covariates into the models; thus, we decided not to include MultiPhen in the corresponding simulation studies. A similar phenomenon was reported by \citet{konigorski2020powerful}, who found that MultiPhen led to inflated or highly inflated type I errors and did not include the method in their power study. Tables \ref{tab:typeI_bin} and \ref{tab:typeI_mix} show that type I error was well controlled for the compared methods when all traits were binary, or when the traits were half binary and half continuous.
Note that only methods that can be applied to binary or mixed-trait scenarios are included in Tables \ref{tab:typeI_bin} and \ref{tab:typeI_mix}.
\begin{table}[htbp]
\caption{\bf Type I error: continuous traits}
\centering
\scalebox{0.8}{
\begin{tabular}{c c c c ccccccc}
\hline
\# Covariates & Correlation & \# Traits & MTAF & MSKAT & aSPU &aSPU-ex &MultiPhen & TATES & MANOVA & minP \\ [0.5ex] \hline
0 & 0.3 & 10 &0.042 &0.048 &0.041 &0.049 &0.049 &0.045 &0.050 &0.049 \\[-0.5ex]
& & 50 &0.044 &0.038 &0.047 &0.039 &0.038 &0.042 &0.045 &0.049 \\[-0.5ex]
& & 100 &0.049 &0.040 &0.054 &0.051 &0.041 &0.050 &0.049 &0.052 \\[0.5ex]
& 0.6 & 10 &0.045 &0.048 &0.041 &0.049 &0.049 &0.035 &0.050 &0.039 \\[-0.5ex]
& & 50 &0.041 &0.038 &0.050 &0.039 &0.038 &0.026 &0.045 &0.043 \\[-0.5ex]
& & 100 &0.051 &0.040 &0.061 &0.047 &0.041 &0.044 &0.049 &0.054 \\[1ex]
2 & 0.3 & 10 &0.051 &0.049 &0.039 &0.042 &0.066 &0.046 &- &0.046 \\[-0.5ex]
& & 50 &0.049 &0.064 &0.040 &0.041 &0.097 &0.045 &- &0.046 \\[-0.5ex]
& & 100 &0.047 &0.042 &0.056 &0.058 &0.146 &0.043 &- &0.046 \\[0.5ex]
& 0.6 & 10 &0.048 &0.048 &0.043 &0.050 &0.059 &0.044 &- &0.045 \\[-0.5ex]
& & 50 &0.039 &0.064 &0.034 &0.038 &0.097 &0.027 &- &0.038 \\[-0.5ex]
& & 100 &0.043 &0.040 &0.033 &0.042 &0.138 &0.029 &- &0.049 \\[0ex]
\hline \hline
\end{tabular}
}
\label{tab:typeI_cont}
\end{table}
\begin{table}[htbp]
\caption{\bf Type I error: binary traits}
\centering
\scalebox{1}{
\begin{tabular}{c c c cccccc}
\hline
\# Covariates & Correlation & \# Traits & MTAF &aSPU &aSPU-ex & MultiPhen &TATES & minP \\ [0.5ex] \hline
0 & 0.3 & 10 &0.050 &0.055 &0.056 &0.066 &0.054 &0.055 \\[-0.5ex]
& & 50 &0.050 &0.051 &0.040 &0.072 &0.055 &0.052 \\[0.5ex]
& 0.6 & 10 &0.044 &0.048 &0.055 &0.059 &0.048 &0.046 \\[-0.5ex]
& & 50 &0.051 &0.046 &0.049 &0.077 &0.055 &0.054 \\[1ex]
2 & 0.3 & 10 &0.039 &0.042 &0.043 &0.526 &0.047 &0.049 \\[-0.5ex]
& & 50 &0.035 &0.034 &0.037 &0.710 &0.044 &0.043 \\[0.5ex]
& 0.6 & 10 &0.043 &0.042 &0.039 &0.429 &0.044 &0.040 \\[-0.5ex]
& & 50 &0.038 &0.038 &0.044 &0.606 &0.049 &0.052 \\[0ex]
\hline
\end{tabular}
}
\label{tab:typeI_bin}
\end{table}
\begin{table}[htbp]
\caption{\bf Type I error: mixed traits with covariates}
\centering
\scalebox{1}{
\begin{tabular}{c c c ccc}
\hline
Correlation & \# Traits & MTAF &MultiPhen & TATES & minP \\ [0.5ex] \hline
0.3 & 10 &0.054 &0.886 &0.059 &0.058 \\[-0.5ex]
& 50 &0.043 &0.982 &0.056 &0.055 \\[0.5ex]
0.6 & 10 &0.049 &0.878 &0.053 &0.051 \\[-0.5ex]
& 50 &0.040 &0.996 &0.041 &0.045\\[0ex]
\hline
\end{tabular}
}
\label{tab:typeI_mix}
\end{table}
\subsubsection{Statistical Power}
The power of the compared methods was evaluated under different scenarios at a significance level of $0.05$. The effect sizes for the associated traits were randomly drawn from uniform distributions. We include the original MTAF method ($\textrm{MTAF}_{\textrm{original}}$) and its PCA extension ($\textrm{MTAF}_{\textrm{PCA}}$). The results show that the statistical power under dense scenarios was greatly improved by introducing PCA.
Table \ref{tab:power_cont_noz} summarizes the power of the compared methods for continuous traits without covariates. We observe that when signals were sparse, the MTAF method was the most powerful method or performed similarly to the most powerful method. On the other hand, when the signals were dense, MSKAT, MANOVA, and MultiPhen were the most powerful methods. Although the MTAF method had slightly lower power, its performance was close to that of these methods.
Table \ref{tab:power_cont_z} shows the results for continuous traits with two covariates. When confounders were included, only the MTAF method and MSKAT managed to preserve their power in both sparse and dense scenarios, while the performance of the other methods deteriorated in the dense signal scenarios, especially when the number of traits grew large. Table \ref{tab:power_bin_noz} shows the results for binary traits without covariates. Under this scenario, aSPU and aSPU-ex outperformed the other methods. The MTAF method was slightly less powerful than the aSPU methods, while TATES and minP performed well with sparse signals but significantly underperformed with dense signals. In Table \ref{tab:power_bin_z}, we find that with two covariates, the performance difference between the MTAF method and the aSPU methods diminished, and their powers were close in most simulation settings.
Table \ref{tab:power_mix} shows the results for mixed traits with two covariates. It should be noted that only four methods allow for a mixture of binary and continuous traits: MultiPhen, the MTAF method, TATES, and minP. However, we did not include MultiPhen in this comparison because it fails to control type I error, as shown previously. According to the results, the MTAF method outperformed TATES and minP regardless of the number of traits, the strength of correlation, or the proportion of signals.
In summary, MTAF was robustly one of the most powerful methods in all the simulation settings with various numbers of traits, strengths of correlation, and proportions of signals, for both continuous traits and binary traits (or their mixture), with or without confounding covariates.
\begin{table*}[!htbp]
\caption{\bf Power: continuous traits without covariates}
\centering
\scalebox{0.6}{
\begin{tabular}{c c c c cccccccccc}
\hline
Sparsity & Correlation & \# Traits & Effect Size &$\textrm{MTAF}_{\textrm{PCA}}$ &$\textrm{MTAF}_{\textrm{original}}$ &MTAF &MSKAT &aSPU &aSPU-ex &MultiPhen &TATES &MANOVA &minP \\ [0.5ex] \hline
sparse & 0.3 & 10 &U(0.15,0.25) &0.762 &0.791 &0.793 &0.796 &0.563 &0.702 &0.796 &0.785 &0.798 &0.788 \\[-0.5ex]
& & 50 & U(0.2,0.4) &0.797 &0.922 &0.916 &0.825 &0.688 &0.807 &0.827 &0.904 &0.836 &0.906 \\[-0.5ex]
& & 100 & U(0.15,0.3) &0.770 &0.924 &0.919 &0.843 &0.400 &0.665 &0.845 &0.895 &0.852 &0.894 \\[0.5ex]
& 0.6 & 10 &U(0.15,0.25) &0.907 &0.858 &0.905 &0.918 &0.590 &0.859 &0.918 &0.799 &0.918 &0.907 \\[-0.5ex]
& & 50 & U(0.2,0.4) &0.916 &0.957 &0.949 &0.944 &0.730 &0.926 &0.944 &0.908 &0.950 &0.924 \\[-0.5ex]
& & 100 & U(0.15,0.3) &0.935 &0.948 &0.968 &0.960 &0.413 &0.833 &0.960 &0.886 &0.966 &0.914 \\[1ex]
dense & 0.3 & 10 &U(0.05,0.15) &0.769 &0.690 &0.765 &0.784 &0.390 &0.507 &0.784 &0.651 &0.786 &0.639 \\[-0.5ex]
& & 50 & U(0.05,0.12) &0.808 &0.629 &0.812 &0.865 &0.134 &0.422 &0.866 &0.534 &0.873 &0.551\\[-0.5ex]
& & 100 & U(0.02,0.1) &0.615 &0.418 &0.637 &0.716 &0.082 &0.266 &0.716 &0.334 &0.742 &0.343 \\[0.5ex]
& 0.6 & 10 &U(0.05,0.15) &0.920 &0.729 &0.907 &0.933 &0.301 &0.701 &0.933 &0.627 &0.933 &0.641 \\[-0.5ex]
& & 50 & U(0.05,0.12) &0.969 &0.654 &0.964 &0.987 &0.119 &0.640 &0.987 &0.457 &0.988 &0.526 \\[-0.5ex]
& & 100 & U(0.02,0.1) &0.929 &0.437 &0.916 &0.970 &0.074 &0.338 &0.970 &0.273 &0.974 &0.346 \\[0ex]
\hline
\end{tabular}
}
\label{tab:power_cont_noz}
\end{table*}
\begin{table}[htbp]
\caption{\bf Power: continuous traits with covariates}
\centering
\scalebox{0.7}{
\begin{tabular}{c c c c cccccccc}
\hline
Sparsity & Correlation & \# Traits & Effect Size
&$\textrm{MTAF}_{\textrm{PCA}}$ &$\textrm{MTAF}_{\textrm{original}}$ &MTAF &MSKAT &aSPU &aSPU-ex &TATES &minP \\ [0.5ex] \hline
sparse & 0.3 & 10 &U(0.15,0.3) &0.763 &0.787 &0.803 &0.795 &0.602 &0.715 &0.773 &0.779 \\[-0.5ex]
& & 50 & U(0.2,0.4) &0.684 &0.889 &0.871 &0.757 &0.578 &0.722 &0.871 &0.876 \\[-0.5ex]
& & 100 & U(0.15,0.3) &0.646 &0.872 &0.862 &0.754 &0.273 &0.543 &0.835 &0.840 \\[0.5ex]
& 0.6 & 10 &U(0.15,0.3) &0.908 &0.862 &0.900 &0.920 &0.620 &0.865 &0.778 &0.794 \\[-0.5ex]
& & 50 & U(0.2,0.4) &0.877 &0.935 &0.939 &0.917 &0.619 &0.878 &0.880 &0.891 \\[-0.5ex]
& & 100 & U(0.15,0.3) &0.889 &0.922 &0.948 &0.934 &0.307 &0.739 &0.828 &0.859 \\[1ex]
dense & 0.3 & 10 &U(0.05,0.2) &0.833 &0.792 &0.842 &0.875 &0.506 &0.649 &0.759 &0.748 \\[-0.5ex]
& & 50 &U(0.05,0.13) &0.685 &0.517 &0.695 &0.775 &0.123 &0.387 &0.468 &0.470\\[-0.5ex]
& & 100 &U(0.03,0.12) &0.460 &0.301 &0.478 &0.560 &0.064 &0.211 &0.258 &0.268 \\[0.5ex]
& 0.6 & 10 &U(0.05,0.2) &0.957 &0.833 &0.947 &0.963 &0.422 &0.821 &0.732 &0.737 \\[-0.5ex]
& & 50 &U(0.05,0.13) &0.934 &0.544 &0.928 &0.963 &0.105 &0.567 &0.400 &0.457\\[-0.5ex]
& & 100 &U(0.03,0.12) &0.779 &0.292 &0.756 &0.862 &0.057 &0.247 &0.192 &0.269 \\[0ex]
\hline
\end{tabular}
}
\label{tab:power_cont_z}
\end{table}
\begin{table}[htbp]
\caption{\bf Power: binary traits without covariates}
\centering
\scalebox{0.7}{
\begin{tabular}{c c c c ccccc}
\hline
Sparsity & Correlation & \# Traits & Effect Size &MTAF &aSPU &aSPU-ex &TATES &minP \\ [0.5ex] \hline
sparse & 0.3 & 10 &U(0.4,0.6) &0.747 &0.837 &0.839 &0.805 &0.805 \\[-0.5ex]
& & 50 &U(0.6,0.8) &0.891 &0.957 &0.959 &0.930 &0.934 \\[0.5ex]
& 0.6 & 10 &U(0.4,0.6) &0.773 &0.840 &0.859 &0.804 &0.801\\[-0.5ex]
& & 50 &U(0.6,0.8) &0.912 &0.960 &0.974 &0.934 &0.931 \\[1ex]
dense & 0.3 & 10 &U(0.2,0.3) &0.712 &0.738 &0.723 &0.547 &0.554 \\[-0.5ex]
& & 50 &U(0.15,0.3) &0.667 &0.734 &0.749 &0.473 &0.460 \\[0.5ex]
& 0.6 & 10 &U(0.2,0.3) &0.684 &0.701 &0.691 &0.569 &0.573 \\[-0.5ex]
& & 50 &U(0.15,0.3) &0.580 &0.619 &0.754 &0.459 &0.454 \\[0ex]
\hline
\end{tabular}
}
\label{tab:power_bin_noz}
\end{table}
\begin{table}[htbp]
\caption{\bf Power: binary traits with covariates}
\centering
\scalebox{0.7}{
\begin{tabular}{c c c c ccccc}
\hline
Sparsity & Correlation & \# Traits & Effect Size &MTAF &aSPU &aSPU-ex &TATES &minP \\ [0.5ex] \hline
sparse & 0.3 & 10 &U(0.4,0.6) &0.702 &0.704 &0.714 &0.745 &0.739 \\[-0.5ex]
& & 50 &U(0.5,0.7) &0.745 &0.725 &0.753 &0.800 &0.805 \\[0.5ex]
& 0.6 & 10 &U(0.4,0.6) &0.718 &0.712 &0.747 &0.737 &0.740 \\[-0.5ex]
& & 50 &U(0.5,0.7) &0.766 &0.738 &0.792 &0.791 &0.787 \\[1ex]
dense & 0.3 & 10 &U(0.2,0.3) &0.610 &0.589 &0.570 &0.509 &0.491 \\[-0.5ex]
& & 50 &U(0.2,0.35) &0.782 &0.783 &0.797 &0.637 &0.608 \\[0.5ex]
& 0.6 & 10 &U(0.2,0.3) &0.548 &0.534 &0.525 &0.490 &0.467 \\[-0.5ex]
& & 50 &U(0.2,0.35) &0.725 &0.683 &0.806 &0.608 &0.589 \\[0ex]
\hline
\end{tabular}
}
\label{tab:power_bin_z}
\end{table}
\begin{table}[htbp]
\caption{\bf Power: mixed traits with covariates and dense signals}
\centering
\scalebox{0.8}{
\begin{tabular}{c c c ccc}
\hline
Correlation & \# Traits & Effect Size &MTAF &TATES &minP \\ [0.5ex] \hline
0.3 & 10 &U(0.05,0.3) &0.844 &0.805 &0.804 \\[-0.5ex]
& 50 &U(0.05,0.25) &0.929 &0.846 &0.852 \\[0.5ex]
0.6 & 10 &U(0.05,0.3) &0.897 &0.798 &0.795 \\[-0.5ex]
& 50 &U(0.05,0.25) &0.986 &0.827 &0.835 \\[0ex]
\hline
\end{tabular}
}
\label{tab:power_mix}
\end{table}
\subsection{The Study of Addiction: Genetics and Environment (SAGE)}
To further demonstrate the usage of the proposed method
in real studies, we applied MTAF to The Study of Addiction: Genetics and Environment (SAGE) \citep{bierut2010genome} data from the database of Genotypes and Phenotypes (dbGaP) \citep{mailman2007ncbi}, \url{http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000092.v1.p1}. SAGE is a case-control GWAS of addiction with unrelated individuals, in which cases are defined as individuals with Diagnostic and Statistical Manual of Mental Disorders 4th Edition (DSM-IV) alcohol dependence (lifetime) and potentially other illicit drug dependence. The individuals were selected from three large studies: the Collaborative Study on the Genetics of Alcoholism (COGA), the Family Study of Cocaine Dependence (FSCD), and the Collaborative Genetic Study of Nicotine Dependence (COGEND). From dbGaP, we downloaded the data of $3,847$ individuals who consented to provide their data for health research. Quality control was performed using PLINK 1.9 \citep{purcell2007plink}. We filtered the data based on the genotyping rate (0.01), missingness (0.01), minor allele frequency (0.01), and Hardy-Weinberg equilibrium (p-value $\le$ 0.01). After the filtering, we ended up with $3,557$ individuals and $560,218$ SNPs.
In order to detect SNPs associated with addiction, we selected 18 traits (summarized in Table \ref{tab:sage_variables}) that account for addiction to alcohol, nicotine, marijuana, cocaine, opiates, and any other drug. Among these traits, 12 are binary and 6 are continuous. In addition, gender and race (black or white) are included in our analysis as confounding covariates.
\begin{table}[htbp]
\caption{\bf SAGE data variables summary}
\centering
\scalebox{0.75}{
\begin{tabular}{c c c}
\hline
& variable name & description \\ [0.5ex] \hline
& nic\_sx\_tot & number of nicotine symptoms endorsed \\ [0.5ex]
& nic\_sx1 & tolerance to nicotine \\ [0.5ex]
& nic\_sx2 & withdrawal from nicotine \\ [0.5ex]
& cig\_daily & Has participant ever smoked cigarettes daily for a month or more? \\ [0.5ex]
& mj\_sx\_tot & number of marijuana dependence symptoms endorsed \\ [0.5ex]
& mj\_sx1 & tolerance to marijuana \\ [0.5ex]
& mj\_sx2 & withdrawal from marijuana \\ [0.5ex]
& coc\_sx\_tot & number of cocaine dependence symptoms endorsed \\ [0.5ex]
& coc\_sx1 & tolerance to cocaine \\ [0.5ex]
& coc\_sx2 & withdrawal from cocaine \\ [0.5ex]
& op\_sx\_tot & number of opiates dependence symptoms endorsed \\ [0.5ex]
& op\_sx1 & tolerance to opiates \\ [0.5ex]
& op\_sx2 & withdrawal from opiates \\ [0.5ex]
& alc\_sx\_tot & number of alcohol dependence symptoms endorsed \\ [0.5ex]
& alc\_sx1 & tolerance to alcohol \\ [0.5ex]
& alc\_sx2 & withdrawal from alcohol \\ [0.5ex]
& max\_drinks & largest number of alcoholic drinks consumed in 24 hours \\ [0.5ex]
& ever\_oth & Has participant ever used drugs other than marijuana, cocaine or opiates?
\\ [0.5ex] \hline \end{tabular} } \label{tab:sage_variables} \end{table} \begin{table}[htbp] \caption{\bf Correlation of continuous traits in SAGE data} \centering \scalebox{0.75}{ \begin{tabular}{c c c c c c c} \hline & nicotine & marijuana & cocaine & alcohol & opiate & max drinks \\ [0.5ex] \hline nicotine &1 &0.390 &0.392 &0.534 &0.218 &0.345 \\ [0.5ex] marijuana &0.390 &1 &0.534 &0.472 &0.331 &0.287 \\ [0.5ex] cocaine &0.392 &0.534 &1 &0.519 &0.377 &0.346 \\ [0.5ex] alcohol &0.534 &0.472 &0.519 &1 &0.298 &0.574 \\ [0.5ex] opiate &0.218 &0.331 &0.377 &0.298 &1 &0.175 \\ [0.5ex] max drinks &0.345 &0.287 &0.346 &0.574 &0.175 &1 \\ [0.5ex] \hline \end{tabular} } \label{tab:sage_corr} \end{table} We inspected the Pearson correlations among the six continuous traits; as Table \ref{tab:sage_corr} shows, all six traits are positively correlated. We then applied the MTAF method to detect associations between the SNPs and the 18 traits. To obtain accurate p-values at extremely small significance levels (usually below $10^{-6}$) with limited computational resources, we performed the tests by adaptively increasing the number of permutations. We first ran $B$ permutations, filtered out insignificant SNPs with p-values greater than $5/B$, and then increased the number of permutations to $10 \times B$ for the remaining SNPs. We started with $B = 100$ and repeated this process until $B=10^7$. In this way, we avoided permuting $10^7$ times for most of the SNPs and reserved the heavy computation for only the most significant SNPs.
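For illustration, this adaptive scheme can be sketched in a few lines of Python. The test statistic \texttt{stat\_fn} and all parameter names below are placeholders of our own choosing, and the covariate adjustment used by MTAF (permuting the residuals of the genotypes regressed on the covariates) is omitted for brevity.

\begin{verbatim}
import numpy as np

def adaptive_perm_pvalue(stat_fn, geno, traits, b_start=100, b_max=10**7):
    # Adaptive permutation p-value for a single SNP (illustrative).
    # stat_fn(geno, traits) -> scalar statistic; larger = stronger signal.
    rng = np.random.default_rng()
    observed = stat_fn(geno, traits)
    b = b_start
    while True:
        perm = np.array([stat_fn(rng.permutation(geno), traits)
                         for _ in range(b)])
        pval = (1 + np.sum(perm >= observed)) / (1 + b)
        # Drop the SNP once it is clearly insignificant at this
        # resolution, or stop when the permutation budget is reached.
        if pval > 5 / b or b >= b_max:
            return pval
        b *= 10   # refine only the surviving SNPs
\end{verbatim}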
Figure \ref{fig:qq} shows the QQ-plot of the $-\log_{10}(\textrm{p-values})$ of all SNPs based on the MTAF method. As expected, the p-values of the majority of SNPs are approximately uniformly distributed, while only a small proportion of SNPs show strong associations with the phenotypes. Note that the inflections observed in the QQ-plot are an expected artifact of the adaptive permutation procedure. Figure \ref{fig:man} shows the p-values of all SNPs across the 22 chromosomes in a Manhattan plot. \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{qq2.pdf} \caption{QQ-plot of p-values of the MTAF method testing the association between SNPs and multiple traits of substance dependence. }% \label{fig:qq} \end{figure} At the significance level $5 \times 10^{-6}$, we identified 11 significant SNPs belonging to six genes, as shown in Table \ref{tab:sage_SNP}. Most of these genes are related to nervous system and brain function, which is plausible given that addiction is closely tied to mental health. Among these genes, EVI5 is a risk gene for multiple sclerosis, a disease that damages the nervous system \citep{hoppenbrouwers2008evi5}. \citet{gouveia2019genetics} found that ZNF385D is associated with cognitive function, and the gene has also been linked to language impairment in earlier literature. TPK1 produces thiamine pyrophosphate, and its mutations can cause neurological disorders \citep{banka2014expanding}. According to the GWAS Catalog \citep{buniello2019nhgri}, LINC02008, MIR4495 and CNTN1 are all linked to Alzheimer's disease, and CNTN1 is also reported to be associated with Parkinson's disease and schizophrenia. \begin{table}[htbp] \caption{\bf Significant SNPs at the $5 \times 10^{-6}$ level} \centering \scalebox{0.8}{ \begin{tabular}{c c c c c} \hline rsid & chromosome & position & p-value & gene \\ [0.5ex] \hline rs1556562 &1 & 92568466 &$4.0 \times 10^{-6}$ &EVI5 \\ [0.5ex] rs1408916 &1 & 92527070 &$3.1 \times 10^{-6}$ &EVI5 \\ [0.5ex] rs4847377 &1 & 92557593 &$2.8 \times 10^{-6}$ &EVI5 \\ [0.5ex] rs4970712 &1 & 92527990 &$3.8 \times 10^{-6}$ & EVI5 \\ [0.5ex] rs9310661 &3 & 21855648 &$1.3 \times 10^{-6}$ & ZNF385D \\ [0.5ex] rs7614064 &3 & 82239346 &$1.4 \times 10^{-6}$ &LINC02008 \\ [0.5ex] rs7645576 &3 & 82274160 &$3.0 \times 10^{-6}$ &LINC02008 \\ [0.5ex] rs9852219 &3 & 82327624 &$5.0 \times 10^{-6}$ &LINC02008 \\ [0.5ex] rs10224675 &7 & 144595669 &$3.4 \times 10^{-6}$ &TPK1 \\ [0.5ex] rs11178982 &12 &40907530 &$2.0 \times 10^{-6}$ &CNTN1 \\ [0.5ex] rs2020139 &12 &97953656 &$3.1 \times 10^{-6}$ &MIR4495 \\ [0.5ex] \hline \end{tabular} } \label{tab:sage_SNP} \end{table} \begin{figure}[htbp!] \centering \includegraphics[width=\linewidth]{man2.pdf} \caption{Manhattan plot of the MTAF method testing the association between SNPs and multiple traits of substance dependence at a significance level of $5 \times 10^{-6}$.}% \label{fig:man} \end{figure} \section{Discussion} \label{sec:discussion} Although single-trait analysis methods have been widely used in multiple-trait studies, they may be inefficient when the traits are correlated with each other. Multiple-trait association tests can therefore increase statistical power by incorporating the information shared among traits. In this paper, we propose the MTAF method, which tests multiple traits by adaptively combining p-values from the single-trait analyses. The MTAF method is versatile for multiple-trait association testing. First, because the MTAF method only requires p-values as inputs, it can process continuous and binary traits simultaneously whenever their p-values are provided. Second, we apply PCA to the continuous traits to uncover hidden components underlying the traits, which greatly improves the performance of MTAF when signals are weak and dense. Third, the MTAF method combines one-sided instead of two-sided p-values, which increases power when the trait effects are in the same direction; when some traits have effects in the opposite direction, we can flip those traits so that all or most traits are positively correlated. Finally, we paid special attention to the permutation test with covariates: by permuting the residuals of the genotypes regressed on the covariates, we control the type I error while adjusting for confounders. Relying on the permutation procedure for empirical p-values can be time-consuming when the validity of tiny p-values is required. Since most SNPs in a GWAS have no significant effect on the traits, permuting each SNP the same number of times is unnecessary and wastes computing time and resources. A more efficient way is to permute the data iteratively for each SNP, with the expectation that insignificant SNPs will be excluded after only a few permutations. As a result, most SNPs are removed in the first few rounds, and only significant SNPs require a large number of permutations. The following example shows the reduction in time complexity. Starting with $B=100$, a null SNP is removed in the first round with probability $0.95$, since its p-value exceeds the threshold $5/B = 0.05$.
If the SNP remains, we start the second round with $B=1000$; the chance that the SNP is removed in the second round is $0.045$, the difference between the chance of surviving the first round ($0.05$) and the chance of advancing to the third round ($0.005$). Each subsequent round therefore contributes about $45$ expected permutations, since $0.045 \cdot 1000 = 0.0045 \cdot 10^4 = \cdots = 45$. Following this procedure up to $B=10^7$, the expected number of permutations per SNP is $0.95 \cdot 100 + 0.045 \cdot 1000 + \ldots + 4.5 \cdot 10^{-6} \cdot 10^7 \approx 45 \cdot \log_{10}(10^7)$, which grows logarithmically in $B$. By contrast, fixing the number of permutations at $10^7$ costs $B$ permutations per SNP, i.e., linear time. The adaptive procedure thus reduces the time complexity from linear to logarithmic. A SNP may not always influence the traits directly; instead, it may affect correlated traits indirectly through an unobserved component, in which case uncovering the hidden component can enhance the statistical power. In the MTAF method, we use PCA to uncover such potentially hidden components. By detecting the associations between the SNP and the hidden components, we may increase the statistical power. This idea was supported by our simulation study, in which PCA largely improved the statistical power under the dense scenarios. Thus, depending on the correlation structure, PCA may increase the power of testing when traits are correlated. Despite the advantages of the MTAF method, several limitations can be addressed in future work. First, the MTAF method currently analyzes a single variant at a time, whereas set-based analyses are common in GWASs. \citet{cai2020adaptive} proposed a set-based AF method, and the MTAF method might be extended to set-based or pathway analyses in the future. Second, the MTAF method adaptively combines p-values but does not aim to select the traits most related to the identified SNPs; in future work, we may develop a method that provides, for each identified SNP, a list of the most related traits. Finally, because we need to permute the residuals of genotypes to control the type I error while adjusting for confounders or covariates, the MTAF method requires individual-level genotype data to perform the test. However, individual-level data are not always available and often require special permissions, since they are considered identifiable data. In future work, we will explore whether the MTAF method can be extended to use GWAS summary statistics without requiring individual-level data. The R software package for the MTAF method and our simulation code are available at \url{https://github.com/songbiostat/MTAF}. \section*{Acknowledgments} Funding support for the Study of Addiction: Genetics and Environment (SAGE) was provided through the NIH Genes, Environment and Health Initiative [GEI] (U01 HG004422). SAGE is one of the genome-wide association studies funded as part of the Gene Environment Association Studies (GENEVA) under GEI. Assistance with phenotype harmonization and genotype cleaning, as well as with general study coordination, was provided by the GENEVA Coordinating Center (U01 HG004446). Assistance with data cleaning was provided by the National Center for Biotechnology Information.
Support for collection of datasets and samples was provided by the Collaborative Study on the Genetics of Alcoholism (COGA; U10 AA008401), the Collaborative Genetic Study of Nicotine Dependence (COGEND; P01 CA089392), and the Family Study of Cocaine Dependence (FSCD; R01 DA013423). Funding support for genotyping, which was performed at the Johns Hopkins University Center for Inherited Disease Research, was provided by the NIH GEI (U01HG004438), the National Institute on Alcohol Abuse and Alcoholism, the National Institute on Drug Abuse, and the NIH contract ``High throughput genotyping for studying the genetic contributions to human disease'' (HHSN268200782096C). The datasets used for the analyses described in this manuscript were obtained from dbGaP at \url{http://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs000092.v1.p1} through dbGaP accession number phs000092.v1.p1. \bibliographystyle{plainnat}
\section{Introduction} Deep learning has shown outstanding performance in a wide range of computer vision tasks in the past years, especially image tasks. Meanwhile, many practical applications, such as autonomous vehicles (Figure \ref{fig:1} shows a point cloud collected by an autonomous vehicle), need more information than images alone to obtain a good sense of the environment. 3D data from lidar or RGB-D cameras are considered a good supplement here. These devices generate 3D geometric data in the form of point clouds. With the growing demand from industry, the utilization of point clouds in deep learning models has recently become a research hotspot. \par \begin{figure}[t] \begin{center} \includegraphics[width=1.5in]{pc1.png} \includegraphics[width=1.5in]{pc2.png} \end{center} \caption{Point cloud data collected from an outdoor scene, shown from two distinct angles.} \label{fig:1} \end{figure} In contrast to image data, point clouds do not come with a regular spatial structure, so deep models on point clouds must solve three main problems: (1) how to find a representation of high information density for a sparse point cloud, (2) how to build a network satisfying necessary restrictions such as invariance to point permutations and robustness to varying input sizes, and (3) how to process large volumes of data with lower time and computing resource consumption. PointNet \cite{qi2017pointnet} is one of the representative early attempts to design a novel deep network for the consumption of unordered 3D point sets by taking advantage of MLPs and a T-Net. PointNet, together with its improved version PointNet++ \cite{qi2017pointnet++}, inspired a lot of follow-up work. \par Fundamental tasks on images, such as classification, segmentation and object detection, also exist on point clouds. Most solutions to these problems benefit from research findings on the image side, although adequate adaptations are necessary to suit the characteristics of 3D data. In this paper, recent works on point clouds are divided into the following categories: classification, segmentation, detection, matching and registration, augmentation, completion and reconstruction. Detailed descriptions of each category are provided in the following sections. \par A growing number of datasets are available for different tasks on point clouds. ShapeNet \cite{shapenet2015} and ModelNet \cite{wu20153d} are two early datasets consisting of clean 3D models. These early datasets suffer from a lack of real-world variability: robust models must cope with disturbances such as noise and missing points. With that in mind, datasets such as ScanNet \cite{dai2017scannet} and KITTI \cite{Geiger2013IJRR} were created from scans of real environments. Datasets designed for autonomous vehicle tasks, like nuScenes \cite{caesar2019nuscenes} and Lyft \cite{lyft2019}, are further generalized by covering various environments at different times. Currently, ever more datasets are being proposed to meet the increasing demands of distinct niches. \par The structure of this paper is as follows. Section 2 introduces existing 3D datasets and corresponding metrics for different tasks. Section 3 includes a survey of 3D shape classification methods. Section 4 reviews methods for 3D semantic segmentation and instance segmentation. Section 5 presents a survey of methods for 3D object detection and its derivative tasks. Section 6 introduces recent progress in 3D point cloud matching and registration.
Section 7 provides a review of methods to improve data quality. Finally, Section 8 concludes the paper. \par \section{Datasets and metrics} Datasets are of great importance in deep learning methods for 3D point cloud data. First, well-designed datasets provide convincing evaluation and comparison among different algorithms. Second, datasets with richer content and metadata help define more complicated tasks and raise new research topics. In this section, we briefly introduce some of the most commonly used datasets and evaluation metrics. \begin{table*}[h] \centering \caption{Commonly used 3D point cloud datasets in recent works} \begin{tabular}{|p{2.2cm}|p{2cm}|p{1cm}|p{2.8cm}|p{5.5cm}|p{0.7cm}|} \hline Dataset & Task & Classes & Scale & Feature & Year \\ \hline ShapeNet \cite{shapenet2015} & Classification & 55 & 51300 models & The categories are selected according to WordNet \cite{miller1995wordnet} synsets. & 2015\\ \hline ModelNet40 \cite{wu20153d} & Classification & 40 & 12311 models & The models are collected with online search engines by querying for each established object category. & 2015 \\ \hline S3DIS \cite{armeni20163d} & Segmentation & 12 & 215 million points & Points are collected in 6 large-scale indoor scenes from 3 different buildings. & 2016 \\ \hline Semantic3D \cite{hackel2017isprs} & Segmentation & 8 & 4 billion points & Hand-labelled from a range of diverse urban scenes. & 2017 \\ \hline ScanNet \cite{dai2017scannet} & Segmentation & 20 & 2.5 million frames & Collected with a scalable RGB-D capture system with automated surface reconstruction and crowdsourced semantic annotation. & 2017 \\ \hline KITTI \cite{Geiger2013IJRR,Geiger2012CVPR,Fritsch2013ITSC,Menze2015CVPR} & Detection Tracking & 3 & 80256 objects & Captured by a standard station wagon equipped with cameras, a Velodyne laser scanner and a GPS localization system, driven through different outdoor scenes. & 2012\\ \hline nuScenes \cite{caesar2019nuscenes} & Detection Tracking & 23 & 1.4M objects & Captured with a full sensor suite (1x LIDAR, 5x RADAR, 6x camera, IMU, GPS); 1000 scenes of 20s each. & 2019\\ \hline Waymo Open Dataset \cite{sun2020scalability} & Detection Tracking & 4 & 12.6M objects with tracking ID & Captured with 1 mid-range lidar, 4 short-range lidars and 5 cameras (front and sides); 1,950 segments of 20s each, collected at 10Hz. & 2019 \\ \hline \end{tabular} \label{table:0} \end{table*} \subsection{Datasets} Table \ref{table:0} shows the most commonly used 3D point cloud datasets for three mature tasks (classification, segmentation and detection); these datasets will be mentioned often in the following sections, and we introduce each of them in more detail below. \par \textbf{ShapeNet} ShapeNet \cite{shapenet2015} is a richly annotated dataset with 51300 3D models in 55 categories. It consists of several subsets; ShapeNetSem, one of them, contains 12000 models spread over a broader set of 270 categories. This dataset, together with ModelNet40 \cite{wu20153d}, is relatively clean and small, so the two are usually used to evaluate the capacity of backbones before they are applied to more complicated tasks.\par \textbf{ModelNet40} The ModelNet \cite{wu20153d} project provides three benchmarks: ModelNet10, ModelNet40 and Aligned40. The ModelNet40 benchmark, where ``40'' indicates the number of classes, is the most widely used. To find the most common object categories, statistics obtained from the SUN database \cite{xiao2010sun} are utilized.
After establishing the vocabulary, 3D CAD models were collected with online search engines and verified by human workers. \par \textbf{S3DIS} The Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset is composed of six large-scale indoor scenes from three buildings that are diverse in architectural style and appearance. The point clouds are automatically generated without manual intervention. Twelve semantic elements, including structural elements (floor, wall, etc.) and common furniture, are annotated. \par \textbf{Semantic3D} Semantic3D \cite{hackel2017isprs} is the largest 3D point cloud dataset for outdoor scene segmentation so far. It contains over 4 billion points collected from an area of around $110,000$ $m^2$ with a static lidar. The nature of outdoor scenes, such as the uneven distribution of points and massive occlusions, makes the dataset challenging. \par \textbf{ScanNet} ScanNet \cite{dai2017scannet} is a video dataset consisting of 2.5 million frames from more than 1000 scans, annotated with camera poses, surface reconstructions and instance-level semantic segmentations. The dataset provides benchmarks for multiple 3D scene understanding tasks, such as classification, semantic voxel labeling and CAD model retrieval. \par \textbf{KITTI} The KITTI \cite{Geiger2013IJRR,Geiger2012CVPR,Fritsch2013ITSC,Menze2015CVPR} vision benchmark suite is among the most famous benchmarks with 3D data. It covers benchmarks for 3D object detection, tracking and scene flow estimation. The multi-view data are captured with an autonomous driving platform carrying high-resolution color and grayscale video cameras, a Velodyne laser scanner and a GPS localization system, driving through different outdoor scenes. Only three kinds of objects that are important to autonomous driving are labelled: cars, pedestrians and cyclists.\par \textbf{Other datasets} There are some other datasets of high quality that are less widely used, such as Oakland \cite{munoz2009contextual}, iQmulus \cite{vallet2015terramobilita} and Paris-Lille-3D \cite{roynard2017parisIJRR}. 3DMatch \cite{zeng20183dcontextnet} pushed forward research in 3D matching and registration, a less popular direction in the past period. Recently, the rising demand from the autonomous driving industry has spawned several large-scale road-based datasets, represented by nuScenes \cite{caesar2019nuscenes}, Lyft Level 5 \cite{lyft2019} and the Waymo Open Dataset \cite{sun2020scalability}. They pose complex challenges that require leveraging multi-view data and related metadata. The development of datasets is helping reduce the gap between research and practical applications. \subsection{Metrics} Comparing different algorithms requires appropriate metrics: well-designed metrics provide valid evaluation of different models, while unreasonable metrics might lead to incorrect conclusions. \par Table \ref{table:metrics} lists widely used metrics for the different tasks. For classification methods, overall accuracy and mean accuracy are most frequently used. Segmentation models can be analyzed by accuracy or (m)IoU. In detection tasks, the results are usually evaluated region-wise, so (m)IoU, accuracy, precision and recall all apply. MOTA and MOTP are specially designed for object tracking models, while EPE is used for scene flow estimation. ROC curves, which plot the trade-off between true positive and false positive rates, help evaluate the performance of 3D matching and registration models. In addition, visualization is always an effective supplement to numerical results.
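To make the classification and segmentation formulas in Table \ref{table:metrics} concrete, the short sketch below computes overall accuracy and mIoU from a confusion matrix. It is a minimal Python/numpy illustration with names of our own choosing, not code from any particular benchmark toolkit.

\begin{verbatim}
import numpy as np

def confusion_matrix(pred, gt, num_classes):
    # I[i, j] = number of samples with ground truth i predicted as j.
    idx = gt.astype(int) * num_classes + pred.astype(int)
    counts = np.bincount(idx, minlength=num_classes ** 2)
    return counts.reshape(num_classes, num_classes)

def overall_accuracy(conf):
    return np.diag(conf).sum() / conf.sum()

def mean_iou(conf):
    inter = np.diag(conf)                       # I[c, c]
    union = conf.sum(0) + conf.sum(1) - inter   # pred_c + gt_c - I[c, c]
    return np.mean(inter / np.maximum(union, 1))
\end{verbatim}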
\begin{table*}[htbp] \centering \caption{Commonly used metrics for different tasks. In this table, $C$ denotes the number of categories, $IDS$ denotes the number of identity switches, $I_{i,j}$ denotes the number of points that are from ground truth class/instance $i$ and labelled as $j$, and $TP/TN/FP/FN$ stand for the numbers of true positives, true negatives, false positives and false negatives respectively. Higher metrics indicate better results if not specified otherwise.} \begin{tabular}{|c|c|c|} \hline Metric & Formula & Explanation \\ \hline Accuracy & $Accuracy=\frac{TP+TN}{TP+TN+FP+FN}$ & \multicolumn{1}{|m{9cm}|}{Accuracy indicates how many predictions are correct over all predictions. ``Overall accuracy (OA)'' indicates the accuracy on the entire dataset.} \\ \hline mACC & $mACC=\frac{1}{C}\sum_{c=1}^C Accuracy_c$ & \multicolumn{1}{|m{9cm}|}{The mean of accuracy over the different categories, useful when the categories are imbalanced.} \\ \hline Precision & $Precision=\frac{TP}{TP+FP}$ & \multicolumn{1}{|m{9cm}|}{The ratio of correct predictions over all predictions.} \\ \hline Recall & $Recall=\frac{TP}{TP+FN}$ & \multicolumn{1}{|m{9cm}|}{The ratio of correct predictions over positive samples in the ground truth.} \\ \hline F1-Score & $F_1=2\times \frac{Precision\cdot Recall}{Precision + Recall}$ & \multicolumn{1}{|m{9cm}|}{The harmonic mean of precision and recall.} \\ \hline IoU & $IoU_i=\frac{I_{i,i}}{\sum_{c=1}^C (I_{i,c}+I_{c,i}) - I_{i,i}}$ & \multicolumn{1}{|m{9cm}|}{Intersection over Union (of class/instance $i$). The intersection and union are calculated between the prediction and the ground truth.}\\ \hline mIoU & $mIoU = \frac{1}{C}\sum_{c=1}^{C}IoU_c$ & \multicolumn{1}{|m{9cm}|}{The mean of IoU over all classes/instances.} \\ \hline MOTA & $MOTA=1-\frac{FN+FP+IDS}{TP+FN}$ & \multicolumn{1}{|m{9cm}|}{Multi-object tracking accuracy (MOTA) synthesizes 3 error sources: false positives, missed targets and identity switches; the number of ground truth objects (as $TP+FN$) is used for normalization.} \\ \hline MOTP & $MOTP=\frac{\sum_{i,t} e_{i,t}}{\sum_t d_t}$ & \multicolumn{1}{|m{9cm}|}{Multi-object tracking precision (MOTP) indicates the precision of localization. $d_t$ denotes the number of matches at time $t$, and $e_{i,t}$ denotes the error of the $i$-th pair at time $t$.} \\ \hline EPE & $EPE=||\hat{sf}-sf||_2$ & \multicolumn{1}{|m{9cm}|}{End point error (EPE) is used in scene flow estimation, also referred to as EPE2D/EPE3D for 2D/3D data respectively. $\hat{sf}$ denotes the predicted scene flow vector while $sf$ denotes the ground truth.} \\ \hline \end{tabular} \label{table:metrics} \end{table*} \section{Classification} \subsection{Overview} Classification on point clouds is commonly known as 3D shape classification. Similar to image classification models, models for 3D shape classification usually first generate a global embedding with an aggregation encoder, then pass the embedding through several fully connected layers to obtain the final result. Most 3D shape classification methods are tested on clean 3D models (as in Figure \ref{fig:2}). Based on how points are aggregated, classification models can be generally divided into two categories: projection-based methods and point-based methods. \begin{figure*}[h] \begin{center} \includegraphics[width=5in]{shapenet.png} \end{center} \caption{3D models from ShapeNet \cite{shapenet2015}.
ShapeNet contains large-scale 3D models with manually verified annotations.} \label{fig:2} \end{figure*} \subsection{Projection-based Methods} Projection-based methods project unstructured 3D point clouds into a specific presupposed modality (e.g., voxels, pillars) and extract features from the target format, which allows them to benefit from previous research findings on the corresponding modality. \par \subsubsection{Multi-view representation} MVCNN \cite{su15mvcnn} is a method based on a multi-view representation of point clouds. A 3D point cloud is represented by a group of 2D images obtained by rendering snapshots from different angles. Each image in the group is passed through a CNN to extract view-based features, which are pooled across views and passed through another CNN to build a compact descriptor. While MVCNN does not distinguish between different views, it is helpful to consider the relationships among views. GVCNN \cite{feng2018gvcnn} is a method that takes advantage of these relationships: by quantifying the discrimination of views, it divides the set of views into groups based on their discrimination scores. The view descriptors are then passed through intra-group pooling and cross-group fusion for prediction. Aside from the models mentioned above, \cite{yu2018multi} and \cite{yang2019learning} also improve recognition accuracy with multi-view representations. \subsubsection{Volumetric representation} VoxNet \cite{maturana2015voxnet} is an early method using the volumetric representation. In this method, each point $(x, y, z)$ is projected into a corresponding discrete voxel point $(i, j, k)$. Each point cloud is mapped into an occupancy grid of $32\times 32\times 32$ voxels, and the grid is then passed through two 3D convolutional layers to obtain the final representation. \par VoxNet simply uses an adaptation of CNN layers for the prediction head, which leads to potential loss of detailed spatial information. 3D ShapeNets \cite{wu20153d} proposed a convolutional deep belief network to learn the distribution of point clouds of different 3D shapes, where 3D shapes are represented by probability distributions of binary variables on voxel grids. \par While volumetric methods already achieve satisfactory performance, most suffer from the cubic growth of computational complexity and memory footprint with resolution, hence the resolution of the grid is strictly limited. OctNet \cite{riegler2017octnet} improved the efficiency by introducing a hybrid grid-octree structure to hierarchically partition point clouds. A point cloud is represented by several octrees along a regular grid, each octree is encoded as a bit string, and features are generated through simple arithmetic. Inspired by OctNet, O-CNN \cite{wang2017cnn} then proposed a method that introduces 3D CNNs to extract features from octrees. \par Methods based on volumetric representations as mentioned above are inherently coarse: only a small fraction of voxels are non-empty, and the detailed context inside each voxel is hardly captured. The balance between resolution and computation is difficult to achieve in practice. \par
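A minimal version of the occupancy-grid projection used by VoxNet-style methods can be sketched as follows; the normalization scheme and the $32^3$ resolution are illustrative, and density or intensity channels are omitted.

\begin{verbatim}
import numpy as np

def occupancy_grid(points, res=32):
    # Normalize a cloud into the unit cube and mark occupied voxels,
    # yielding a VoxNet-style binary res x res x res representation.
    p = points - points.min(axis=0)
    p = p / (p.max() + 1e-9)                     # fit into [0, 1]^3
    idx = np.minimum((p * res).astype(int), res - 1)
    grid = np.zeros((res, res, res), dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return grid
\end{verbatim}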
\subsubsection{Basis point set} BPS \cite{prokudin2019efficient} proposed a new approach that breaks the convention that point clouds, even of various sizes, are projected onto a grid of the same size. In BPS, input points are first normalized into a unit ball, and a group of points is then randomly sampled to make up a basis point set (BPS). The sampled BPS is constant for all point clouds in a dataset. For a given point cloud $X$, each point $x_i$ is represented by the Euclidean distance between itself and its nearest neighbor in the BPS. By passing this representation through the last two fully connected layers of PointNet, the model achieves performance similar to that of the original PointNet design. \subsection{Point-based Methods} Compared with projection-based methods that aggregate points from a spatial neighborhood, point-based methods attempt to learn features from individual points. Most recent work focuses on this direction. \subsubsection{MLP networks} PointNet \cite{qi2017pointnet} is a famous architecture that takes advantage of multi-layer perceptrons (MLPs). The input (an $n \times 3$ 2D tensor) is first multiplied by an affine transformation matrix predicted by a mini-network (T-Net) to maintain invariance under geometric transformations. The point set is then passed through a group of MLPs followed by another joint alignment network, and a max-pooling layer to obtain the final global feature. This backbone can be used for both classification and segmentation. For classification, the global feature is passed through an MLP to produce the output scores. For segmentation, the concatenation of the global feature and the intermediate per-point features at different levels is passed through an MLP to classify each point; a minimal sketch of this shared-MLP encoder is given at the end of this subsection. Conventional CNNs extract features at different scales through stacked convolutional layers; inspired by this, PointNet++ \cite{qi2017pointnet++} was proposed. In this work, the local region of a point $x$ is defined as the points within a sphere centered at $x$. One set abstraction level contains a sampling layer, a grouping layer to identify local regions, and a PointNet layer. Stacking such set abstraction levels allows features to be extracted hierarchically, as CNNs do for image tasks. \par The simple implementation and promising performance of PointNet \cite{qi2017pointnet} and PointNet++ \cite{qi2017pointnet++} inspired a lot of follow-up work. PointWeb \cite{zhao2019pointweb} is adapted from PointNet++ and improves feature quality by introducing Adaptive Feature Adjustment (AFA) to make use of contextual information from local neighborhoods. In addition, SRN \cite{duan2019structural} proposed the Structural Relation Network (SRN) module to equip PointNet++, and obtained better performance.
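The core of this family of models, a shared per-point MLP followed by a symmetric pooling function, can be sketched as a forward pass in a few lines; the layer sizes below are illustrative, and the T-Net alignment and the prediction heads are omitted.

\begin{verbatim}
import numpy as np

def shared_mlp(points, weights, biases):
    # Apply the same MLP to every point: (n, d_in) -> (n, d_out).
    h = points
    for W, b in zip(weights, biases):
        h = np.maximum(h @ W + b, 0.0)   # linear layer + ReLU
    return h

def global_feature(points, weights, biases):
    # Max pooling over the point dimension is a symmetric function,
    # so the output is invariant to the ordering of the input points.
    return shared_mlp(points, weights, biases).max(axis=0)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1024, 3))                    # a toy cloud
Ws = [0.1 * rng.normal(size=(3, 64)),
      0.1 * rng.normal(size=(64, 1024))]
bs = [np.zeros(64), np.zeros(1024)]
feat = global_feature(pts, Ws, bs)                  # shape (1024,)
\end{verbatim}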
\subsubsection{Convolutional networks} Convolution kernels on 2D data can be extended to work on 3D point cloud data. As mentioned before, VoxNet \cite{maturana2015voxnet} is an early work that directly takes advantage of 3D convolution. \par A-CNN \cite{komarichev2019cnn} proposed another way to apply convolution on point clouds. To prevent redundant information from overlapping local regions (the same group of neighboring points might be repeatedly included in regions at different scales), A-CNN proposed a ring-based scheme instead of spheres. To convolve points within a ring, the points are projected onto a tangent plane at a query point $q_i$, then ordered in the clockwise or counter-clockwise direction using cross and dot products, and finally a 1D convolution kernel is applied to the ordered sequence. The output feature can be used for both classification and segmentation, as in PointNet.\par RS-CNN \cite{liu2019relation} is another convolutional network, based on relation-shape convolution. An RS-Conv kernel takes a neighborhood around a certain point as its input, learns the mapping from simple relations (e.g., Euclidean distance, relative position) to high-level relations among points, and encodes the spatial structure within the neighborhood with the learned mapping.\par In PointConv \cite{wu2019pointconv}, the convolution operation is defined as a Monte Carlo estimation of a hidden continuous 3D convolution with respect to an importance sampling. The operation is composed of a weighting function and a density function, implemented by MLP layers and kernelized density estimation. Furthermore, the 3D convolution is reduced to matrix multiplication and 2D convolution for memory efficiency, computational efficiency and easy deployment. A similar idea is used in MCCNN \cite{hermosilla2018monte}, where convolution is replaced by a Monte Carlo estimation based on the density function of the sample. \par Geo-CNN \cite{lan2019modeling} proposed another way to model the geometric relationships among neighboring points. By taking six orthogonal bases, the space around a point is separated into eight octants, and every vector in a specific octant can be composed from three of the bases. Features are extracted independently along each direction with the corresponding direction-associated weight matrices, and aggregated based on the angles between the geometric vector and the bases. The feature of a specific point at the current layer is the sum of the feature of the given point and its neighboring edge features from the previous layer. \par In SFCNN \cite{rao2019spherical}, the input point cloud is projected onto regular icosahedral lattices with discrete sphere coordinates, so convolution can be implemented by max pooling and convolution on the concatenated features from the vertices of the spherical lattices and their neighbors. SFCNN is rotation-invariant and robust to perturbations. \subsubsection{Graph networks} Graph networks consider a point cloud as a graph whose vertices are the points, with edges generated based on the neighbors of each point. Features are then learned in the spatial or spectral domain. \par ECC \cite{simonovsky2017dynamic} first proposed the idea of considering each point as a vertex of the graph, with edges connecting pairs of points that are ``neighbors''. Edge-conditioned convolution (ECC) is then applied with a filter-generating network such as an MLP. Neighborhood information is aggregated by max pooling, and coarsened graphs are generated with the VoxelGrid \cite{rusu20113d} algorithm. After that, DGCNN \cite{wang2019dynamic} uses an MLP to implement EdgeConv (sketched below), followed by channel-wise symmetric aggregation of edge features from the neighborhood of each point, which allows the graph to be dynamically updated after each layer of the network. \par Inspired by DGCNN, Hassani and Haley \cite{hassani2019unsupervised} proposed an unsupervised multi-task approach to learn shape features. The approach consists of an encoder and a decoder, where the encoder is constructed from multi-scale graphs, and the decoder serves three unsupervised tasks (clustering, self-supervised classification and reconstruction) trained with a joint loss. \par
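A single EdgeConv-style layer of the kind DGCNN stacks can be sketched as follows; this is a simplified one-layer linear-plus-ReLU version with illustrative shapes, not the authors' implementation.

\begin{verbatim}
import numpy as np

def knn_indices(points, k):
    # Pairwise distances -> indices of the k nearest neighbors per point.
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    return np.argsort(d, axis=1)[:, 1:k + 1]   # skip the point itself

def edge_conv(feats, nbr, W):
    # Apply a shared map to [x_i, x_j - x_i] for every edge (i, j),
    # then max-pool over the neighbors of each point.
    n, k = nbr.shape
    center = np.repeat(feats[:, None, :], k, axis=1)        # (n, k, d)
    edge = np.concatenate([center, feats[nbr] - center], axis=-1)
    h = np.maximum(edge @ W, 0.0)                           # (n, k, d_out)
    return h.max(axis=1)                                    # (n, d_out)
\end{verbatim}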
ClusterNet \cite{chen2019clusternet} uses a rigorously rotation-invariant (RRI) module to generate rotation-invariant features for each point, and an unsupervised agglomerative hierarchical clustering method to construct hierarchical structures of a point cloud. Features of sub-clusters at each level are first learned with an EdgeConv block and then aggregated by max pooling. \par \subsubsection{Other networks} Aside from OctNet \cite{riegler2017octnet}, which uses octrees on voxel grids to hierarchically extract features from point clouds, Kd-Net \cite{klokov2017escape} makes use of k-d trees to build a bottom-up encoder. Leaf node representations are normalized 3D coordinates (setting the center of mass as the origin and rescaling to $[-1,1]^3$), and each non-leaf node representation is calculated from its children with an MLP. The parameters of the MLPs are shared within each level of the tree. Moreover, 3DContextNet \cite{zeng20183dcontextnet} proposed another method based on k-d trees. While non-leaf representations are still computed from the children with MLPs, the aggregation at each level is more elaborate, considering both local and global cues. The local cues concern points in the corresponding local region, and the global cues concern the relationship between the current position and all positions in the input feature map. The representation at the root is used for prediction. \par RCNet \cite{wu2019point} introduced RNNs to point cloud embedding. The ambient space is first partitioned into parallel beams, each beam is then fed into a shared RNN, and the output subregional features are treated as a 2D feature map and processed by a 2D CNN. \par SO-Net \cite{li2018so} is a method based on the self-organizing map (SOM). A SOM is a low-dimensional (two-dimensional in the paper) representation of the input point cloud, initialized by a proper guess (dispersing nodes uniformly in a unit ball) and trained with unsupervised competitive learning. A k-nearest-neighbor set is searched over the SOM for each point, and the normalized kNN set is then passed through a series of fully connected layers to generate individual point features. The point features are used to generate node features by max pooling according to the associations in the kNN search, and the node features are passed through another series of fully connected layers and aggregated into a global representation of the input point cloud. \subsection{Experiments} Different methods test their models on various datasets. To obtain a better comparison among methods, we select the dataset on which most methods are tested and list the experimental results in Table \ref{table:1}. \begin{table*}[htbp] \centering \caption{Experiment results on the ModelNet40 classification benchmark.
``OA'' stands for overall accuracy and ``mAcc'' stands for mean accuracy.} \begin{tabular}{|c|c|c|} \hline Methods & ModelNet40(OA) & ModelNet40(mAcc) \\ \hline PointNet \cite{qi2017pointnet} & 89.2\% & 86.2\% \\ \hline PointNet++ \cite{qi2017pointnet++} & 90.7\% & 90.7\% \\ \hline PointWeb \cite{zhao2019pointweb} & 92.3\% & 89.4\% \\ \hline SRN \cite{duan2019structural} & 91.5\% & - \\ \hline Pointwise-CNN \cite{hua2018pointwise} & 86.1\% & 81.4\% \\ \hline PointConv \cite{wu2019pointconv} & 92.5\% & - \\ \hline RS-CNN \cite{liu2019relation} & 92.6\% & - \\ \hline GeoCNN \cite{lan2019modeling} & 93.4\% & 91.1\% \\ \hline A-CNN \cite{komarichev2019cnn} & 92.6\% & 90.3\% \\ \hline Hassani and Haley \cite{hassani2019unsupervised} & 89.1\% & - \\ \hline ECC \cite{simonovsky2017dynamic} & 87.4\% & 83.2\% \\ \hline SFCNN \cite{rao2019spherical} & 91.4\% & - \\ \hline DGCNN \cite{wang2019dynamic} & 92.2\% & 90.2\% \\ \hline ClusterNet \cite{chen2019clusternet} & 87.1\% & - \\ \hline BPS \cite{prokudin2019efficient} & 91.6\% & - \\ \hline KD-Net \cite{klokov2017escape} & 91.8\% & 88.5\% \\ \hline 3DContextNet\cite{zeng20183dcontextnet} & 91.1\% & - \\ \hline RCNet \cite{wu2019point} & 91.6\% & - \\ \hline SO-Net \cite{li2018so} & 90.9\% & 87.3\% \\ \hline \end{tabular} \label{table:1} \end{table*} \section{Segmentation} \subsection{Overview} 3D segmentation aims to label each individual point, which requires the model to capture both the global context and detailed local information at each point. Figure \ref{fig:3} shows some examples from the S3DIS \cite{armeni20163d} dataset. There are two main tasks in 3D segmentation: semantic segmentation and instance segmentation.\par Since a large number of classification models achieve very high performance on popular benchmarks, their backbones are often also evaluated on segmentation datasets to demonstrate novel contributions and generalization ability. We do not reintroduce models that have already been mentioned above. There are also some models that benefit from joint training on multiple tasks; we discuss these methods in Section 4.4. \begin{figure}[htbp] \begin{center} \includegraphics[width=3.2in]{s3dis.png} \end{center} \caption{Stanford Large-Scale 3D Indoor Spaces Dataset \cite{armeni20163d} (S3DIS).} \label{fig:3} \end{figure} \subsection{Semantic Segmentation} Similar to 3D shape classification models, semantic segmentation methods can be generally divided, based on how the raw point cloud is organized, into projection-based methods and point-based methods. \subsubsection{Projection-based methods} Huang and You \cite{huang2016point} project the input point cloud into occupancy voxels, which are then fed into a 3D convolutional network to generate voxel-level labels. All points within a voxel are assigned the same semantic label as the voxel. ScanComplete \cite{dai2018scancomplete} utilizes fully convolutional networks to adapt to different input data sizes and deploys a coarse-to-fine strategy to hierarchically improve the resolution of predictions. VV-Net \cite{meng2019vv} also transforms unordered points into regular voxel grids as the first step. After that, the local geometry of each voxel is encoded with a kernel-based interpolated variational auto-encoder (VAE). In each voxel, a radial basis function (RBF) is computed to generate a local continuous representation that copes with sparse distributions of points. \par
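The voxel-label broadcasting shared by the voxel-based pipelines above reduces to a few lines; the sketch below assumes a hypothetical \texttt{predict\_fn} standing in for the 3D convolutional network that labels each occupied voxel.

\begin{verbatim}
import numpy as np

def voxel_labels_to_points(points, voxel_size, predict_fn):
    # Assign every point the label predicted for its voxel.
    idx = np.floor(points / voxel_size).astype(int)       # voxel coords
    uniq, inverse = np.unique(idx, axis=0, return_inverse=True)
    voxel_labels = predict_fn(uniq)     # one label per occupied voxel
    return voxel_labels[inverse]        # per-point labels
\end{verbatim}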
Jaremo-Lawin et al. \cite{lawin2017deep} proposed a multi-view method that first projects a 3D point cloud onto 2D planes from multiple camera views; pixel-wise scores are then predicted on the synthetic images with a multi-stream FCN, and the final labels are obtained by fusing the scores over the different views. PolarNet \cite{Zhang_polar_2020_CVPR}, however, proposed a polar BEV representation. By implicitly aligning attention with the long-tailed distribution of points, this representation reduces the imbalance of points across grid cells along the radial axis.\par Some other methods leverage scans in multiple modalities. 3DMV \cite{dai20183dmv} proposed a joint 3D-multi-view network that combines features from RGB images and point clouds. Features are extracted with a 3D CNN stream and a group of 2D streams respectively. MVPNet \cite{jaritz2019multi} proposed another aggregation scheme that fuses features from images and point clouds in a canonical 3D space with a point-based network. \par \subsubsection{Point-based methods} First of all, PointNet \cite{qi2017pointnet} and PointNet++ \cite{qi2017pointnet++} can predict semantic labels with corresponding prediction branches attached. Engelmann et al. \cite{engelmann2018know} proposed a method to define neighborhoods in both world space and feature space with k-means clustering and kNN. A pairwise distance loss and a centroid loss are introduced into feature learning, based on the assumption that points with the same semantic label should be closer in feature space. PointWeb \cite{zhao2019pointweb}, as mentioned for classification, can also be adapted to predict segmentation labels. PVCNN \cite{liu2019point} proposed a comprehensive method that leverages both point and voxel representations to obtain memory and computational efficiency simultaneously.\par Some extensions of the convolution operator have been introduced for feature extraction on point clouds. PCCN \cite{wang2018deep} introduces parametric continuous convolutional layers, which are parameterized by MLPs and span full continuous vector spaces. This generalization allows models to learn over any data structure where the support relationship is computable. Pointwise-CNN \cite{hua2018pointwise} introduced a point-wise convolution in which the neighboring points are projected into kernel cells and convolved with the corresponding kernel weights. Engelmann et al. \cite{engelmann2019dilated} proposed Dilated Point Convolution (DPC) to aggregate dilated neighbor features instead of the conventional k-nearest neighbors. \par Graph networks are also used in some segmentation models to capture the underlying geometric structures of the input point clouds. SPG \cite{landrieu2018large} introduced a structure called the superpoint graph (SPG) to capture the organization of point clouds. The idea is further extended in \cite{landrieu2019point}, which introduces an oversegmentation (into pure superpoints) of the input point cloud. Aside from that, Graph Attention Convolution (GAC) \cite{wang2019graph} is proposed to selectively learn relevant features from local neighborhoods. By dynamically assigning attention weights to different neighboring points and different feature channels based on their spatial positions and feature differences, the model is able to learn discriminative features from the most relevant parts of the neighboring point sets.\par Compared with projection-based methods, point-based methods usually require more computation and therefore have more trouble dealing with large-scale data. Tatarchenko et al.
\cite{tatarchenko2018tangent} introduced tangent convolutions to address this: a fully convolutional network designed around the tangent convolution successfully improves the performance on large-scale point clouds. RandLA-Net \cite{Hu_rand_2020_CVPR} attempted to reduce computation by replacing conventional complex point sampling approaches with random sampling. To keep random sampling from discarding crucial information, a novel feature aggregation module is introduced to enlarge the receptive field of each point. \par Because producing point-level labels is labor-intensive and time-consuming, some methods have explored weakly supervised segmentation. Xu and Lee \cite{xulee2020weakly} proposed a weakly supervised approach that only requires a small fraction of points to be labelled at the training stage. By learning gradient approximation and smoothness constraints in geometry and color, competitive results can be obtained with as few as 10\% of the points labelled. On the other hand, Wei et al. \cite{Wei_multi_2020_CVPR} introduced a multi-path region mining module, which provides pseudo point-level labels through a classification network trained on weak labels. The segmentation network is then trained with these pseudo labels in a fully supervised manner. \subsection{Instance Segmentation} Instance segmentation, compared with semantic segmentation, additionally requires distinguishing points that share the same semantic meaning, which makes the task more challenging. In this section, instance segmentation methods are further divided into two categories: proposal-based methods and proposal-free methods. \subsubsection{Proposal-based methods} Proposal-based instance segmentation methods can be considered as a combination of object detection and mask prediction. 3D-SIS \cite{hou20193d} is a fully convolutional network for 3D semantic instance segmentation in which geometry and color signals are fused. For each image, 2D features are extracted for each pixel by a series of 2D convolutional layers and then backprojected to the associated 3D voxel grid. The geometry and color features are passed through separate series of 3D convolutional layers and concatenated into a global semantic feature map. A 3D-RPN and a 3D-RoI layer are then applied to generate bounding boxes, instance masks and object labels. The Generative Shape Proposal Network (GSPN) \cite{yi2019gspn} generates proposals by reconstructing shapes from the scene instead of directly regressing bounding boxes. The generated proposals are refined with a region-based PointNet (R-PointNet), and the labels are determined with a point-wise binary mask prediction over all class labels. 3D-BoNet \cite{yang2019learning} is a single-stage method that adopts PointNet++ \cite{qi2017pointnet++} as its backbone network to extract a global feature and local features at each point. Two prediction branches follow to generate instance-level bounding boxes and point-level masks respectively. Zhang et al. \cite{zhang2020instance} proposed a method for large-scale outdoor point clouds. The point cloud is first encoded into a high-resolution BEV representation augmented by kNN, and features are then extracted by voxel feature encoding (VFE) layers and self-attention blocks. For each grid cell, a horizontal object center and its height limits are predicted, objects that are close enough are merged, and eventually these constraints are leveraged to generate instance predictions.
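Proposal-based pipelines (and several of the clustering pipelines below) typically prune overlapping candidates with non-maximum suppression over 3D boxes. A minimal sketch for axis-aligned boxes follows; the box encoding and the IoU threshold are purely illustrative, and rotated boxes would require a more involved overlap computation.

\begin{verbatim}
import numpy as np

def iou_3d(a, b):
    # IoU of axis-aligned 3D boxes encoded as (x1, y1, z1, x2, y2, z2).
    lo = np.maximum(a[:3], b[:3])
    hi = np.minimum(a[3:], b[3:])
    inter = np.prod(np.maximum(hi - lo, 0.0))
    vol_a = np.prod(a[3:] - a[:3])
    vol_b = np.prod(b[3:] - b[:3])
    return inter / (vol_a + vol_b - inter)

def nms_3d(boxes, scores, iou_thresh=0.25):
    # Greedy NMS: keep the highest-scoring box, drop overlapping ones.
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        ious = np.array([iou_3d(boxes[i], boxes[j]) for j in rest])
        order = rest[ious <= iou_thresh]
    return keep
\end{verbatim}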
\subsubsection{Proposal-free methods} Proposal-free methods tend to generate instance-level labels on top of semantic segmentation, using algorithms such as clustering. The Similarity Group Proposal Network (SGPN) \cite{wang2018sgpn} is a representative work that learns a feature and semantic map for each point, together with a similarity matrix estimating the similarity between pairs of features. A heuristic non-maximal suppression step follows to merge points into instances. Lahoud et al. \cite{lahoud20193d} adopted multi-task metric learning to (1) learn a feature embedding such that voxels with the same instance label are close and those with different labels are separated in the feature space, and (2) predict the shape of the instance at each voxel. Instance boundaries are estimated with mean-shift clustering and NMS. \par Zhang et al. \cite{zhang2019point} introduced a probabilistic embedding to encode point clouds. The embedding is implemented with a multivariate Gaussian distribution, and the Bhattacharyya kernel is adopted to estimate the similarity between points. Proposal-free methods do not suffer from the computational complexity of region-proposal layers; however, it is usually difficult for them to produce discriminative object boundaries from clustering. \par There are also several instance segmentation methods based on projection. SqueezeSeg \cite{wu2018squeezeseg} is one of the pioneering works in this direction. In this method, points are first projected onto a sphere for a grid-based representation. The transformed representation is of size $H\times W\times C$, where in practice $H=64$ is the number of vertical channels of the lidar, $W$ is manually set to 512, and $C=5$ (three coordinates, an intensity measurement and the range). The representation is then fed through a conventional 2D CNN and a conditional random field (CRF) for refined segmentation results. This method was later improved by SqueezeSegV2 \cite{wu2019squeezesegv2} with a context aggregation module and a domain adaptation pipeline. \par The idea of projection-based methods is further explored by Lyu et al. \cite{Lyu_Seg2D_2020_CVPR}. Inspired by graph drawing algorithms, they proposed a hierarchical approximate algorithm that projects point clouds into image representations while preserving abundant local geometric information. The segmentation is then generated by a multi-scale U-Net from the image representation. With this innovative projection algorithm, the method obtained significant improvements. \par
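The spherical projection underlying SqueezeSeg can be sketched as follows; this simplified version fills only range and occupancy channels (whereas SqueezeSeg uses five), and the grid geometry is illustrative.

\begin{verbatim}
import numpy as np

def spherical_projection(xyz, H=64, W=512):
    # Project lidar points onto an H x W spherical grid.
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.linalg.norm(xyz, axis=1) + 1e-9
    yaw = np.arctan2(y, x)                  # azimuth in [-pi, pi]
    pitch = np.arcsin(z / r)                # elevation
    u = ((yaw + np.pi) / (2 * np.pi) * W).astype(int) % W
    span = pitch.max() - pitch.min() + 1e-9
    v = ((pitch.max() - pitch) / span * (H - 1)).astype(int)
    grid = np.zeros((H, W, 2))              # channels: range, occupancy
    grid[v, u, 0] = r
    grid[v, u, 1] = 1.0
    return grid
\end{verbatim}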
PointGroup \cite{Jiang_pointgroup_2020_CVPR} proposed a bottom-up framework with two prediction branches: for each point, its semantic label and its relative offset to the respective instance centroid are predicted. The offset branch helps group points into objects and separate objects with the same semantic label. During the clustering stage, both the original positions and the shifted positions are considered, and combining the two results turns out to perform better. Together with NMS based on the newly designed ScoreNet, this method outperformed contemporary works by a large margin. \par \subsection{Joint Training} As mentioned above, some recent works jointly address more than one problem to better realize the power of their models. The unsupervised multi-task approach proposed by Hassani and Haley \cite{hassani2019unsupervised} is an example in which clustering, self-supervised classification and reconstruction are jointly trained. The two tasks under segmentation, semantic segmentation and instance segmentation, have also been shown to benefit from simultaneous training. \par There are two naive ways to solve semantic and instance segmentation at the same time: (1) solve semantic segmentation first and run instance segmentation on points of certain labels based on the semantic result, or (2) solve instance segmentation first and directly derive semantic labels from the instance labels. These two step-wise paradigms depend heavily on the output quality of the first step and cannot make full use of the information shared between the two tasks. \par JSIS3D \cite{pham2019jsis3d} develops a pointwise network that predicts the semantic label of each point and a high-dimensional embedding at the same time, such that points of the same instance have similar embeddings; a multi-value conditional random field model is then applied to synthesize the semantic and instance labels, formulating the problem as jointly optimizing the labels in the field model. ASIS \cite{wang2019associatively} is another method that makes the two tasks benefit from each other: instance segmentation benefits from semantic segmentation by learning semantic-aware instance embeddings at the point level, while semantic features of points from the same instance are fused together to generate accurate semantic predictions for every point. \subsection{Experiments} We select the benchmark on which most methods are tested, S3DIS \cite{armeni20163d}, to compare the performance of different methods. The results are summarized in Table \ref{table:2}. \begin{table*}[!htbp] \centering \caption{Experiment results on semantic segmentation in the S3DIS benchmark. Only results reported in the original papers are listed; results reported as references by other papers are excluded because they are sometimes conflicting.} \begin{tabular}{|c|c|c|c|c|} \hline Methods & Area5(mACC) & Area5(mIoU) & 6-fold(mACC) & 6-fold(mIoU) \\ \hline PointCNN \cite{li2018pointcnn} & 63.9 & 57.3 & 75.6 & 65.4 \\ \hline PointWeb \cite{zhao2019pointweb} & 66.6 & 60.3 & 76.2 & 66.7 \\ \hline A-CNN \cite{komarichev2019cnn} & - & - & - & 62.9 \\ \hline DGCNN \cite{wang2019dynamic} & - & - & - & 56.1 \\ \hline VV-Net \cite{meng2019vv} & - & - & 82.2 & 78.2 \\ \hline PCCN \cite{wang2018deep} & - & 58.3 & - & - \\ \hline GAC \cite{wang2019graph} & - & 62.9 & - & - \\ \hline DPC \cite{engelmann2019dilated} & 68.4 & 61.3 & - & - \\ \hline SSP+SPG \cite{landrieu2019point} & - & - & 78.3 & 68.4\\ \hline JSIS3D \cite{pham2019jsis3d} & - & - & 78.6 & - \\ \hline ASIS \cite{wang2019associatively} & 60.9 & 53.4 & 70.1 & 59.3 \\ \hline Xu and Lee \cite{xulee2020weakly} & - & 48.0 & - & - \\ \hline RandLA-Net \cite{Hu_rand_2020_CVPR} & - & - & 82.0 & 70.0 \\ \hline Tatarchenko et al. \cite{tatarchenko2018tangent} & 62.2 & 52.8 & - & - \\ \hline \end{tabular} \label{table:2} \end{table*} \section{Detection, Tracking and Flow Estimation} \subsection{Overview} Object detection, the basis of many practical applications, is a recent research hotspot. It aims to locate all the objects in a given scene. 3D object detection methods can be generally divided into three categories: projection-based methods, point-based methods and multi-view methods. Figure \ref{fig:4} shows an example of 3D object detection annotated in multiple (lidar and camera) views.
Compared with image object detection, the distinctive characteristics of point cloud data open additional opportunities for optimization. Since 3D object tracking and scene flow estimation are two derivative tasks that depend heavily on object detection, they are discussed together in this section. \begin{figure}[htbp] \begin{center} \includegraphics[width=3.2in]{nuscene.png} \end{center} \caption{An outdoor scene from nuScenes \cite{caesar2019nuscenes}, with annotations provided in multiple views (lidar/camera).} \label{fig:4} \end{figure} \subsection{Object Detection} \subsubsection{Projection-based methods} The success of convolutional neural networks in image object detection inspired attempts to apply 3D CNNs to projected point cloud data. VoxelNet \cite{zhou2018voxelnet} proposed an approach that applies random sampling to the point set within each voxel and passes it through a novel voxel feature encoding (VFE) layer, built on PointNet \cite{qi2017pointnet} and PointNet++ \cite{qi2017pointnet++}, to extract point-wise features. A region proposal network is used to produce the detection results. Like classification models with volumetric representations, VoxelNet runs at a relatively low speed due to the sparsity of voxels and the cost of 3D convolutions. SECOND \cite{yan2018second} then improved the inference efficiency by taking advantage of sparse convolution networks. \par PointPillars \cite{lang2019pointpillars} utilizes point cloud data in another way. Points are organized in vertical columns (called pillars), and the features of the pillars are extracted with PointNet to generate a pseudo image, which is then treated as the input of a 2D object detection pipeline to predict 3D bounding boxes. PointPillars is more accurate than previous fusion approaches, and with a running speed of 62 FPS it is capable of real-time applications. Wang et al. \cite{wang2020} further proposed an anchor-free bounding box prediction scheme based on a cylindrical projection into multi-view features. \par Projection-based methods inevitably suffer from spatial information loss. Aside from using point-based networks instead, He et al. \cite{He_SAS_2020_CVPR} proposed a structure-aware method to mitigate the problem: the convolutional layers are explicitly supervised by an auxiliary network to retain structural information. The auxiliary network converts the convolutional features from the backbone network to point-level representations and is jointly optimized. After training, the auxiliary network can be detached to speed up inference. \par \subsubsection{Point-based methods} Most point-based methods attempt to minimize information loss during feature extraction, and they are the group with the best performance so far. STD \cite{yang2019std} introduced the idea of using spherical anchors for proposal generation, which achieves a high recall with significantly less computation than previous methods. Each proposal is passed through a PointsPool layer that converts the proposal features from a sparse expression to a compact representation and is robust under transformation. In addition to the regular regression branch, STD has an extra IoU branch that replaces the role of the classification score in NMS.\par Some methods use foreground-background classification to improve the quality of proposals.
\subsubsection{Point-based methods} Most point-based methods attempt to minimize information loss during feature extraction, and they form the best-performing group so far. STD \cite{yang2019std} introduced sphere anchors for proposal generation, achieving high recall with significantly less computation than previous methods. Each proposal is passed through a PointsPool layer that converts proposal features from a sparse expression to a compact representation and is robust under transformation. In addition to the regular regression branch, STD has an IoU branch that replaces the classification score in NMS. \par Some methods use foreground-background classification to improve the quality of proposals. PointRCNN \cite{shi2019pointrcnn} is such a framework: points are directly segmented to screen out foreground points, and semantic and spatial features are then fused to produce high-quality 3D boxes. Compared with the multi-view methods discussed below, segmentation-based methods perform better on complicated scenes and occluded objects. \par Furthermore, Qi et al. proposed VoteNet \cite{qi2019vote}. A group of points is sampled as seeds, and each seed independently casts a vote for potential object center points with the help of PointNet++ \cite{qi2017pointnet++}. By taking advantage of voting, VoteNet outperforms previous approaches on two large indoor benchmarks. However, because predictions of virtual center points are less stable, the method performs less satisfactorily in wild scenes. As a follow-up, ImVoteNet \cite{Qi_ImVoteNet_2020_CVPR} inherited the idea of VoteNet and achieved prominent improvements by fusing 3D votes with 2D votes from images. \par There are also attempts to use domain knowledge as an auxiliary signal to enhance features. Associate-3Ddet \cite{Du_A3D_2020_CVPR} introduced the idea of perceptual-to-conceptual association: to enrich perception features that may be incomplete due to occlusion or sparsity, a perceptual-to-conceptual module generates class-wise conceptual models from the dataset, and the perception and conceptual features are associated for feature enhancement. \par Yang et al. \cite{Yang_3DSSD_2020_CVPR} proposed 3DSSD, a point-based anchor-free method. It reduces computation by abandoning the upsampling layers (e.g., the feature propagation layers in \cite{yang2019std}) and refinement stages widely used in previous point-based methods. Previous set abstraction layers for downsampling only use farthest-point sampling based on Euclidean distance (D-FPS), under which instances with few interior points are easily lost, so simply removing the upsampling layers would cause a huge performance drop. 3DSSD therefore proposed F-FPS, a sampling strategy based on feature distances, to preserve more foreground points per instance. The fusion of F-FPS and D-FPS, together with the candidate generation layer and 3D center-ness assignment in the prediction head, helps this method outperform previous single-stage methods by a considerable margin. \par Graph neural networks have also been introduced to 3D object detection for their ability to accommodate intrinsic characteristics of point clouds such as sparsity. PointRGCN \cite{zarzar2019pointrgcn} is an early work that introduces graph-based representations for 3D vehicle detection refinement. HGNet \cite{Chen_HGNet_2020_CVPR} then introduced a hierarchical graph network based on shape-attentive graph convolution (SA-GConv); by capturing object shapes with relative geometric information and reasoning on proposals, it obtained a significant improvement over previous results. Point-GNN \cite{Shi_PointGNN_2020_CVPR} proposed a single-shot method based on graph neural networks: it first builds a fixed-radius near-neighbors graph over the input point cloud, then predicts the category and bounding box of each object from the point graph, and finally applies a box merging and scoring operation to accurately combine detections from multiple vertices.
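Point-GNN's first step, building a fixed-radius near-neighbors graph, can be sketched with a k-d tree as below. This is an illustration under our own naming, not the authors' implementation:

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def fixed_radius_graph(points, radius):
    """Connect all point pairs within `radius` (undirected edge list)."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(radius)   # set of (i, j) index tuples, i < j
    return np.array(sorted(pairs))     # (E, 2) array of edges

pts = np.random.rand(1000, 3)
edges = fixed_radius_graph(pts, radius=0.1)
\end{verbatim}

The k-d tree keeps the construction near $O(n \log n)$ on typical inputs, which matters for lidar scans with hundreds of thousands of points.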
\par \subsubsection{Multi-view methods} MV3D \cite{chen2017multi} is a pioneering multi-view object detection method on point clouds. Candidate boxes are generated from the bird's-eye view (BEV) map and projected into the feature maps of multiple views (RGB images, lidar data, etc.); the region-wise features extracted from the different views are then combined to produce the final oriented 3D bounding boxes. While this approach achieves satisfactory performance, much like many other early multi-view methods, it runs too slowly for practical use. \par Attempts to improve multi-view methods generally take one of two directions. The first is to fuse information from different views more efficiently. Liang et al. \cite{liang2018deep} use continuous convolutions to effectively fuse feature maps from images and lidar at different resolutions: image features are projected into the BEV plane and bi-linearly interpolated to produce a dense BEV feature map for each point. Experiments show that dense BEV feature maps perform better than discrete image feature maps and sparse point cloud feature maps. The second is to design feature extraction approaches that yield more robust representations of the input data. SCANet \cite{lu2019scanet} introduced a Spatial Channel Attention (SCA) module to exploit multi-scale contextual information: the SCA module captures useful features from the global and multi-scale context of the given scene, while an Extension Spatial Upsample (ESU) module combines multi-scale low-level features to generate high-level features with rich spatial information, leading to accurate 3D object proposals. In RT3D \cite{zeng2018rt3d}, most convolution operations prior to the RoI pooling module are removed, so RoI convolutions need to be performed only once for all proposals; this accelerates the method to 11.1 FPS, five times faster than MV3D \cite{chen2017multi}. \par Another approach is to generate candidate regions on the 2D plane with 2D object detectors and then extract a 3D frustum proposal for each 2D candidate region. In F-PointNets \cite{qi2018frustum}, each 2D region generates a frustum proposal, and the features of each 3D frustum are learned with PointNet \cite{qi2017pointnet} or PointNet++ \cite{qi2017pointnet++} and used for 3D bounding box estimation. PointFusion \cite{xu2018pointfusion} uses both the 2D image region and the corresponding frustum points for more accurate 3D box regression: a fusion network directly predicts box corner locations by fusing image features with global features from the point cloud. \par \subsection{Object Tracking} Object tracking aims to estimate the location of a given object in subsequent frames given its state in the first frame. The success of Siamese networks \cite{bertinetto2016fully} in 2D image object tracking inspired 3D counterparts, and Giancola et al. \cite{giancola2019leveraging} extended Siamese networks to 3D. In their method, candidates are first generated by a Kalman filter, then passed through an encoding model to produce compact representations with shape regularization, and finally matched to detected objects by cosine similarity. Zarzar et al. \cite{zarzar2019pointrgcn} proposed another method that captures target objects more efficiently by leveraging a 2D Siamese network to detect coarse object candidates on the BEV representation.
The coarse candidates are then refined by cosine similarity in a 3D Siamese network. \par Chiu et al. \cite{chiu2020probabilistic} introduced a Kalman filter to encode the hidden states of objects. The state of an object is represented by a tuple of 11 variables, including position, orientation, size and speed. The Kalman filter predicts the object's state in the next frame based on previous information, and a greedy algorithm performs data association using the Mahalanobis distance. \par Qi et al. \cite{Qi_P2B_2020_CVPR} proposed P2B, a point-to-box method for 3D object tracking that divides the task into two parts. The first is target-specific feature augmentation: seeds from the template and the search area are generated with a PointNet++ backbone, and the search-area seeds are enriched with target clues from the template. The second is target proposal and verification: candidate target centers are regressed, and seed-wise targetness is evaluated for joint target proposal and verification. \par \subsection{Scene Flow Estimation} Similar to optical flow estimation on images, 3D scene flow estimation works on a sequence of point clouds. FlowNet3D \cite{liu2019flownet3d} is a representative work that directly estimates scene flow from pairs of consecutive point clouds, using a flow embedding layer to learn point-level features and motion features. Its experimental results show that it performs less than satisfactorily in non-static scenes, and that the angles of predicted motion vectors sometimes differ significantly from the ground truth. FlowNet3D++ \cite{wang2020flownet3d++} addresses these issues by introducing a cosine distance loss on angles and a point-to-plane distance loss to improve accuracy in dynamic scenes (a sketch of such losses follows at the end of this subsection). HPLFlowNet \cite{gu2019hplflownet}, on the other hand, proposed a series of bilateral convolutional layers to fuse information from two consecutive frames and restore structural information from unstructured point clouds. \par In addition, MeteorNet \cite{liu2019meteornet} introduced direct grouping and chained-flow grouping to group temporal neighbors, and aggregates information over neighboring points to generate representations of dynamic scenes. Derived from recurrent models on images, Fan and Yang \cite{fan2019pointrnn} proposed PointRNN, PointGRU and PointLSTM to encode dynamic point clouds by capturing both spatial and temporal information.
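Losses for scene flow typically combine an end-point (Euclidean) error with an angular term of the kind FlowNet3D++ motivates. The following is only a schematic sketch under our own naming; the actual FlowNet3D++ formulation differs in details (e.g., its point-to-plane term):

\begin{verbatim}
import numpy as np

def flow_losses(pred_flow, gt_flow, eps=1e-8):
    """End-point error and cosine (angle) loss for (N, 3) flow vectors."""
    epe = np.linalg.norm(pred_flow - gt_flow, axis=1).mean()
    cos = np.sum(pred_flow * gt_flow, axis=1) / (
        np.linalg.norm(pred_flow, axis=1)
        * np.linalg.norm(gt_flow, axis=1) + eps)
    cos_loss = (1.0 - cos).mean()  # penalizes angular deviation only
    return epe, cos_loss
\end{verbatim}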
\subsection{Experiments} KITTI \cite{Geiger2013IJRR,Geiger2012CVPR,Fritsch2013ITSC,Menze2015CVPR} is one of the most popular benchmarks for many computer vision tasks, covering images, point clouds, and multi-view data. Built on an autonomous driving platform, KITTI provides raw data of real-world scenes and allows evaluation on multiple tasks. Table \ref{table:3} shows experimental results of different methods on KITTI. Methods that do not report detailed KITTI results, such as VoteNet \cite{ding2019votenet}, are not listed. \begin{table*}[htbp] \centering \caption{Experiment results on the KITTI 3D detection benchmark; E/M/H stands for easy/medium/hard samples.} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}*{Method} & \multirow{2}*{Category} & \multirow{2}*{Speed (FPS)} & \multicolumn{3}{|c|}{Car} & \multicolumn{3}{|c|}{Pedestrians} & \multicolumn{3}{|c|}{Cyclists} \\ \cline{4-12} ~ & ~ & ~ & E & M & H & E & M & H & E & M & H\\ \hline MV3D \cite{chen2017multi} & multi-view & 2.8 & 74.8 & 63.6 & 54.0 & - & - & - & - & - & - \\ \hline AVOD \cite{ku2018joint} & multi-view & 12.5 & 89.8 & 85.0 & 78.3 & 42.6 & 33.6 & 30.1 & 64.1 & 48.1 & 42.4 \\ \hline SCANet \cite{lu2019scanet} & multi-view & 12.5 & 76.4 & 66.5 & 60.2 & - & - & - & - & - & - \\ \hline PIXOR \cite{yang2018pixor} & projection & 28.6 & 84.0 & 80.0 & 74.3 & - & - & - & - & - & - \\ \hline VoxelNet \cite{zhou2018voxelnet} & projection & 2.0 & 77.5 & 65.1 & 57.7 & 39.5 & 33.7 & 31.5 & 61.2 & 48.4 & 44.4 \\ \hline SECOND \cite{yan2018second} & projection & 26.3 & 83.3 & 72.6 & 65.8 & 49.0 & 38.8 & 34.9 & 71.3 & 52.1 & 45.8 \\ \hline PointPillars \cite{lang2019pointpillars} & projection & 62.0 & 82.6 & 74.3 & 69.0 & 54.5 & 41.2 & 38.9 & 77.1 & 85.7 & 52.0 \\ \hline PointRCNN \cite{shi2019pointrcnn} & point & 10.0 & 87.0 & 75.6 & 70.7 & 48.0 & 39.4 & 36.0 & 75.0 & 58.8 & 52.5 \\ \hline PointRGCN \cite{zarzar2019pointrgcn} & point & 3.8 & 86.0 & 95.6 & 70.7 & - & - & - & - & - & - \\ \hline STD \cite{yang2019std} & point & 12.5 & 88.0 & 79.7 & 75.1 & 53.3 & 42.5 & 38.3 & 78.7 & 61.6 & 55.3 \\ \hline Point-GNN \cite{Shi_PointGNN_2020_CVPR} & point & - & 88.3 & 79.5 & 72.3 & 52.0 & 43.8 & 40.1 & 78.6 & 63.5 & 57.0 \\ \hline PV-RCNN \cite{Shi_PVRCNN_2020_CVPR} & point & - & 90.2 & 81.4 & 76.8 & 52.1 & 43.3 & 40.3 & 78.6 & 63.7 & 57.7 \\ \hline 3DSSD \cite{Yang_3DSSD_2020_CVPR} & point & 26.3 & 88.4 & 79.6 & 74.6 & 54.6 & 44.3 & 40.2 & 82.5 & 64.1 & 56.9 \\ \hline \end{tabular} \label{table:3} \end{table*} \section{Registration} \subsection{Overview} In scenarios such as autonomous driving, it is of great value to relate point cloud data of the same scene collected in different ways, for example from different angles or at different times. 3D point cloud registration (sometimes also called matching) attempts to align two or more point clouds by estimating the transformation between them. It is a challenging problem affected by many factors, including noise, outliers and non-rigid spatial transformations. \subsection{Traditional Methods} The Iterative Closest Point (ICP) algorithm \cite{besl1992method} is a pioneering solution to 3D point set registration. The basic pipeline of ICP and its variants is as follows: (1) sample a point set $P$ from the source point cloud; (2) compute the closest point set $Q$ from the target point cloud; (3) calculate the registration (transformation) from $P$ and $Q$; (4) apply the registration, and if the error is above some threshold, go back to step (2); otherwise terminate. A global refinement step is usually required for better performance. The performance of ICP depends heavily on the quality of the initialization and on whether the input point clouds are clean. Generalized-ICP \cite{segal2009generalized} and Go-ICP \cite{yang2015go} are two representative follow-up works that mitigate these problems in different ways. \par
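The ICP pipeline above translates almost directly into code. Below is a minimal single-scale sketch (no sampling step or global refinement); the SVD-based Kabsch solver used in step (3) is the standard least-squares choice, not necessarily what every variant uses:

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(P, Q):
    """Least-squares rotation R and translation t mapping P onto Q (Kabsch)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=50, tol=1e-6):
    """Vanilla ICP: alternate correspondence search and rigid alignment."""
    src = source.copy()
    tree = cKDTree(target)
    prev_err = np.inf
    for _ in range(iters):
        dist, idx = tree.query(src)                    # step (2): closest points
        R, t = best_rigid_transform(src, target[idx])  # step (3): registration
        src = src @ R.T + t                            # step (4): apply it
        err = dist.mean()  # mean distance before this update
        if abs(prev_err - err) < tol:
            break
        prev_err = err
    return src
\end{verbatim}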
The Coherent Point Drift (CPD) algorithm \cite{myronenko2010point} casts alignment as a probability density estimation problem. Concretely, it treats the first point set as the centroids of a Gaussian mixture model and estimates the transformation by maximizing the likelihood of fitting them to the second point set; the movement of the centroids is forced to be coherent so as to preserve the topological structure. \par Robust Point Matching (RPM) \cite{gold1998new} is another influential point matching algorithm. It starts with soft assignments of point correspondences, which are gradually hardened through deterministic annealing. RPM is generally more robust than ICP, but remains sensitive to initialization and noise. \par Iglesias et al. \cite{Iglesias_2020_CVPR} focused on registering several point clouds to a global coordinate system: given an original set of $n$ points, the goal is to find correspondences between (subsets of) the original set and each of $m$ local coordinate systems. They formulate the problem as a semidefinite program (SDP) and analyze it through Lagrangian duality. \par \subsection{Learning-based Methods} DeepVCP \cite{lu2019deepvcp} is the first end-to-end learning-based framework for point cloud registration. Given the source and target point clouds, PointNet++ \cite{qi2017pointnet++} extracts local features. A point weighting layer then selects a set of $N$ keypoints, after which $N\times C$ candidates from the target point cloud are selected and passed through a deep feature embedding operation together with the source keypoints. Finally, a corresponding point generation layer takes the embeddings and produces the result. Two losses are combined: (1) the Euclidean distance between the estimated corresponding points and the ground truth under the ground-truth transformation, and (2) the distance between the target under the estimated transformation and the ground truth. Together these account for both global geometric information and local similarity. \par 3DSmoothNet \cite{gojcic2019perfect} performs 3D point cloud matching with a compact learned local feature descriptor. Given two raw point clouds as input, the model first computes the local reference frame (LRF) of the neighborhood around randomly sampled interest points. The neighborhoods are then transformed into canonical representations and voxelized with Gaussian smoothing, and the local feature of each point is generated by 3DSmoothNet. These features are then used by a RANSAC approach to produce registration results. The proposed smooth density value (SDV) voxelization outperforms traditional binary-occupancy grids by reducing the impact of boundary effects and noise, and it is more compact. Following 3DSmoothNet, Gojcic et al. \cite{gojcic2020learning} reformulated the conventional two-stage approach (pairwise alignment followed by globally consistent refinement) as an end-to-end structure that learns both parts jointly, outperforming previous methods with higher accuracy and lower computational complexity. \par RPM-Net \cite{yew2020rpm} inherits the idea of the RPM \cite{gold1998new} algorithm and uses deep learning to enhance robustness against noise, outliers and poor initialization. In this method, the initial assignments are generated from hybrid features produced by a network instead of from spatial distances between points.
The parameters of the annealing are predicted by a secondary network, and a modified Chamfer distance is introduced to evaluate the quality of registration. This method outperforms previous ones regardless of whether the input is clean, noisy, or only partially visible. \section{Augmentation and Completion} \subsection{Overview} Point clouds collected by lidar, especially those from outdoor scenes, suffer from quality issues such as noise, outliers, and missing points. Many attempts have been made to improve the quality of raw point clouds by completing missing points, removing outliers, and so on. Since the motivation and implementation vary widely across approaches, we divide them into two categories: discriminative models and generative models. \subsection{Discriminative Methods} Noise in point clouds collected from outdoor scenes is inevitable. To prevent noise from influencing the encoding of point clouds, denoising is applied in pre-processing. Conventional methods include local surface fitting, neighborhood averaging, and estimating the underlying noise model. PointCleanNet \cite{rakotosaona2020pointcleannet} proposed a data-driven method to remove outliers and reduce noise: with a deep neural network adapted from PCPNet \cite{guerrero2018pcpnet}, the model first classifies and discards outliers, then estimates a correction projection that projects noisy points back onto the original surfaces. \par Hermosilla et al. \cite{hermosilla2019total} proposed Total Denoising, which achieves unsupervised denoising of 3D point clouds without additional data. Unsupervised image denoisers are usually built on the assumption that the value of a noisy pixel follows a distribution around a clean pixel value; under this assumption, the clean value can be recovered by learning the parameters of that distribution. Such an idea cannot be directly extended to point clouds, however, because point clouds exhibit multiple forms of noise, such as global position deviations for which no reliable reference point exists. Total Denoising introduces a spatial prior term that finds the closest of all possible modes on a manifold, and it achieves performance competitive with supervised models. \par While many models benefit from the rich information in dense point clouds, others suffer from low efficiency when the number of points is large. Conventional downsampling approaches usually risk dropping critical points. Nezhadarya et al. \cite{Nezhadarya_2020_CVPR} proposed the critical points layer (CPL), which learns to reduce the number of points while preserving the important ones; the layer is deterministic, order-agnostic, and efficient because it avoids neighbor search. SampleNet \cite{Lang_sample_2020_CVPR} instead proposed a differentiable relaxation of point sampling that approximates sampled points as mixtures of the original points. It has been tested as a front end to networks on various tasks and obtains decent performance with only a small fraction of the raw input point cloud. \par \subsection{Generative Methods} Generative adversarial networks are widely studied for 2D images and CNNs, as they help locate potential defects of networks by generating false samples. Since typical applications of point cloud models, such as autonomous driving, treat safety as a critical concern, it is worth studying how current deep neural networks on point clouds are affected by false samples. \par
Xiang et al. \cite{xiang2019generating} proposed several algorithms to generate adversarial point clouds against PointNet. These adversarial algorithms work in two ways: point perturbation and point generation. Perturbation is implemented by shifting existing points negligibly, and generation by adding either independent, scattered points or a small number of point clusters with predefined shapes. Shu et al. \cite{shu20193d} proposed tree-GAN, a tree-structured graph convolution network: by performing graph convolution within a tree, the model uses ancestor information to enrich the capacity of its features. Alongside the development of adversarial networks, DUP-Net \cite{zhou2019dup} was proposed to defend against 3D adversarial attacks; the model contains a statistical outlier removal (SOR) module as a denoiser and a data-driven upsampling network as an upsampler. \par Aside from adversarial generation, generative models are also used for point cloud upsampling. There are generally two motivations for upsampling a point cloud: reducing the sparseness and irregularity of the data, and restoring points missing due to occlusion. \par For the first aim, PU-Net \cite{yu2018pu} proposed upsampling in feature space. For each point, multi-level features are extracted and expanded via a multi-branch convolution unit; the expanded feature is then split into multiple features and reconstructed to upsample the input set. Inspired by image super-resolution models, Wang et al. \cite{yifan2019patch} proposed a cascade of patch-based upsampling networks that learn different levels of detail at different steps, with each step focusing only on a local patch of the previous step's output. The architecture can upsample a sparse input point set into a dense set with rich details. Hui et al. \cite{hui2020progressive} also proposed a learning-based deconvolution network that generates multi-resolution point clouds from low-resolution input, with bilateral interpolation performed in both the spatial and feature spaces. \par Meanwhile, early completion methods such as \cite{dai2017shape} tend to voxelize the input point cloud at the very beginning. PCN \cite{yuan2018pcn} was the first framework to work on raw point clouds in a coarse-to-fine fashion, and Wang et al. \cite{Wang_Comp_2020_CVPR} improved its results with a two-step reconstruction design. Huang et al. \cite{Huang_PF_2020_CVPR} proposed PF-Net, which preserves the spatial structure of the original incomplete point cloud and predicts the missing points hierarchically with a multi-scale generating network. GRNet \cite{xie2020grnet}, on the other hand, proposed a gridding-based method that retrieves structural context by performing cubic feature sampling per grid and completes the output with ``Gridding Reverse'' layers and MLPs. \par Lan et al. \cite{lan2019robust} proposed a probabilistic approach that suppresses potential outliers by applying an EM algorithm with a Cauchy-Uniform mixture model. More generally, PU-GAN \cite{li2019pu} proposed a data-driven generative adversarial network that learns point distributions from the data and upsamples points over patches on object surfaces. \par
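Many of the upsampling and completion networks above are trained or evaluated with the Chamfer distance between point sets, and RPM-Net earlier used a modified variant. A minimal sketch of the plain symmetric form follows; note that conventions vary across papers (mean versus sum, squared versus unsquared distances):

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point sets (squared-distance form)."""
    d_ab, _ = cKDTree(B).query(A)  # each point of A to its nearest neighbor in B
    d_ba, _ = cKDTree(A).query(B)  # and vice versa
    return np.mean(d_ab ** 2) + np.mean(d_ba ** 2)
\end{verbatim}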
Furthermore, RL-GAN-Net \cite{sarmad2019rl} uses a reinforcement learning (RL) agent to provide fast and reliable control of a generative adversarial network. The GAN is first trained on a dimension-reduced latent-space representation; the RL agent then finds the input that generates the representation best fitting the current incomplete point cloud. This allows the framework to convert a noisy, partial point cloud into a completed shape in real time. \par \section{Conclusion} In this paper, we reviewed milestones and recent progress on various problems concerning 3D point clouds. With the expectation of practical applications such as autonomous driving, point cloud understanding has received increasing attention lately. In 3D shape classification, point-based models have achieved satisfactory performance on recognized benchmarks. Methods developed for image tasks, such as two-stage detectors and the Siamese architecture, have been widely introduced to 3D segmentation, object detection and their derivative tasks. Dedicated deep learning frameworks have been proposed to match point clouds of the same scene across multiple scans, and generative networks have been adapted to improve the quality of point cloud data afflicted by noise and missing points. Deep learning methods with proper adaptation have proven effective at overcoming the unique challenges of point cloud data. \par {\small \bibliographystyle{ieee_fullname}
\section{Introduction} \begin{figure}[ht] \centering \includegraphics[width=\columnwidth]{img.pdf} \caption{\label{figure:simexample} Examples of two generated similes, GenSimile1 and GenSimile2, from their literal inputs.} \vspace{-1em} \end{figure} Comparisons are inherent linguistic devices that express the likeness of two entities, concepts or ideas. When used in a figurative sense, these comparisons are called similes. They are a figure of speech that compares two different kinds of things, usually with the intent of making the description more emphatic or vivid, and are often used in literature and poetry to spark the reader's imagination~\cite{definition}. Take the following two examples: ``The city was \textit{like a painting}", and ``If it falls into the wrong hands it would be as catastrophic \textit{as a nuclear bomb}." In the first example, the comparison draws on the implicit ``beauty'' property shared by two very different entities, \textit{city} and \textit{painting}, while in the second the ``catastrophic'' property is shared by \textit{falling into the wrong hands} and \textit{nuclear bomb}. While most computational work has focused on simile detection \cite{simile1,simile2,simile3,simile4,simile5,simile6}, research on simile generation is under-explored. Generating similes could impact many downstream applications, such as creative writing assistance and literary or poetic content creation. To tackle the generation problem, we take advantage of the relatively simple structure of similes, which consists of five elements \cite{hanks2013lexical,simile1}: the {\tt TOPIC} (usually a noun phrase that acts as logical subject), the {\tt VEHICLE} (the logical object of the comparison, usually a noun phrase), the {\tt PROPERTY} (what the two things being compared have in common, usually an adjective), the {\tt EVENT} (eventuality or state, usually a verb), and the {\tt COMPARATOR} (the trigger word or phrase that marks the presence of a comparison, usually the preposition ``like" or ``as...as"). All elements of a simile are explicit, with the exception of the {\tt PROPERTY}, which can be either implicit or explicit. If we take the first example above, its structure is: ``[The city/{\tt TOPIC}] [was/{\tt EVENT}] [like/{\tt COMPARATOR}] [a painting/{\tt VEHICLE}]" (the {\tt PROPERTY} is implicit). Unlike metaphors, the semantic context of similes tends to be very shallow, transferring a single \textit{property} \cite{hanks2013lexical}. Moreover, the explicit syntactic structure of similes allows, in exchange, for more lexical creativity~\cite{simile1}. We focus on the task of generating a simile starting from a literal utterance that contains the {\tt TOPIC}, {\tt EVENT} and {\tt PROPERTY}. We frame this task as a style-transfer problem \cite{shen2017style,fu2017style,li2018delete}, where the author's intent is to make the description of the {\tt TOPIC} more emphatic by introducing a comparison with the {\tt VEHICLE} via a shared {\tt PROPERTY} (see Figure~\ref{figure:simexample} for examples of literal descriptive sentences and the generated similes). We call our approach \textbf{SCOPE} (\textbf{S}tyle transfer through \textbf{CO}mmonsense \textbf{P}rop\textbf{E}rty).
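As a toy illustration of this surface structure (not part of the method proposed below), the explicit elements of a simile built around the comparator \textit{like a} can be recovered with a simple string split; the helper below is purely illustrative:

\begin{verbatim}
def split_simile(sentence, comparator=" like a "):
    """Toy split around the comparator; returns (topic+event, vehicle)."""
    left, _, vehicle = sentence.rstrip(".").partition(comparator)
    return left, vehicle

split_simile("The city was like a painting.")
# -> ("The city was", "painting")
\end{verbatim}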
There are two main challenges we need to address: 1) the lack of training data consisting of pairs of literal utterances and their equivalent similes, needed to train a supervised model; and 2) ensuring that the generated simile makes a meaningful comparison between the {\tt TOPIC} and the {\tt VEHICLE} via the shared {\tt PROPERTY}, whether expressed explicitly or implicitly (e.g., Figure \ref{figure:simexample}, GenSimile1 and GenSimile2, respectively). To the best of our knowledge, this is the first work to attempt simile generation. By framing the task as a style-transfer problem, we make three contributions: \footnote{Code \& Data at \url{https://github.com/tuhinjubcse/SimileGeneration-EMNLP2020}} \textbf{Automatic creation of a parallel corpus of \textit{[literal sentence, simile]} pairs}. Our constructed corpus contains 87,843 such pairs. As a first step, we use distant supervision to automatically collect a set of \emph{self-labeled similes} using the phrase \textit{like a}. We then convert these similes into their literal versions by removing the {\tt COMPARATOR} and replacing the {\tt VEHICLE} with the associated {\tt PROPERTY}, leveraging the structured commonsense knowledge obtained from COMET \cite{comet}, a language model fine-tuned on ConceptNet \cite{conceptnet}. For example, for the simile ``Love is like a unicorn" our method will generate ``Love is rare" (Section \ref{section:data1}). \textbf{Transfer learning from a pre-trained model for generating high-quality similes.} Our system \textbf{SCOPE} \emph{fine-tunes} BART \cite{lewis2019bart}, a state-of-the-art pre-trained denoising autoencoder built as a sequence-to-sequence model, on our \emph{automatically collected parallel corpus} of \textit{[literal sentence, simile]} pairs (Section \ref{section:model}) to generate similes. Human evaluations show that this approach generates similes rated better than those of two literary experts 37\% of the time on average, better than two well-crafted baselines 82\% and 63\% of the time, and better than a state-of-the-art metaphor generation system \cite{metagen2} 68\% of the time (Section \ref{section:results}). \textbf{A task-based evaluation.} We show the effectiveness of the generated similes as a tool for enhancing creativity and evocativeness in machine-generated stories. Evaluation via Amazon Mechanical Turk shows that stories containing similes generated by \textbf{SCOPE} are preferred by Turkers 42\% of the time, compared to 25\% for stories without similes (Section \ref{section:story}). \section{Related Work} Simile generation is a relatively new task; most prior work has focused on simile detection. The closest NLP task to simile generation is metaphor generation. However, it should be noted that the overlap between the expressive ranges of similes and metaphors is only partial: some similes cannot be rephrased as metaphors, and vice versa~\cite{israel2004simile}. \subsection{Simile Detection and Analysis} \citet{simile1} proposed frameworks for annotating similes in product reviews, considering their semantic and syntactic characteristics as well as the challenges inherent to automatic simile detection. \citet{simile3,simile4} built computational models to recognize affective polarity and implicit properties in similes. Unlike these works, we focus on generating similes by transforming a literal sentence while remaining faithful to the property in context.
\subsection{Metaphor Generation} Earlier work in metaphor generation \cite{abe2006computational,terai2010computational} operated at the lexical or phrase level, using template- and heuristic-based methods. \citet{metaphoria} presented an interactive system for collaboratively writing metaphors with a computer, using an open-source knowledge graph and a modified Word Mover's Distance algorithm to find a large, ranked list of suggested metaphorical connections. Word embedding approaches \cite{gagliano2016intersecting} have also been used for metaphor generation. However, the metaphors generated by these methods do not take semantic context into consideration and lack the flexibility and creativity necessary to instantiate similes in a natural language sentence. \citet{metagen1} use neural models to generate metaphoric expressions from a literal input in an unsupervised manner. \citet{metagen2} develop a framework dubbed `metaphor masking', training a supervised seq2seq model whose input is the text with the metaphorical verb masked and whose output is the original text. Both of these works hinge on metaphoric verbs, whereas for similes we must not only replace the literal property with a vehicle but also keep it relevant to the context and the tenor. We additionally use \cite{metagen2} as a baseline and show that their approach may not be the best way to generate similes. \section{Conclusion} We establish a new NLG task: simile generation from literal sentences. We propose a novel way of creating parallel corpora and a transfer-learning approach for generating similes. Human and automatic evaluations show that our best model is successful at generating similes. Our experimental results further show that to truly generate similes based on actual metaphoric or conceptual mappings, it is important to incorporate commonsense knowledge about the topics and their properties. Future directions include exploring other knowledge bases to help the inference and applying our simile generation approach to other creative NLG tasks. \section{SCOPE: {S}tyle Transfer through \textbf{CO}mmonsense \textbf{P}rop\textbf{E}rty} \begin{figure*}[ht] \centering \includegraphics[width=\textwidth]{schematic.pdf} \vspace{-1.5em} \caption{\label{figure:sim} A schematic illustration of our system. The top block shows the \textbf{training} process, in which we use COMET to transform similes into literal sentences and use them to fine-tune BART. The block below shows the \textbf{inference} step, where fine-tuned BART generates novel similes conditioned on a literal sentence.} \vspace{-.5em} \end{figure*} Our style-transfer approach for simile generation from literal descriptive sentences has two steps: 1) convert self-labeled similes into literal sentences using structured commonsense knowledge (Section \ref{section:data1}); and 2) given the \textit{[literal sentence, simile]} pairs, fine-tune a seq2seq model on these pairs to generate a simile from a literal sentence (Section \ref{section:model}). This two-step approach is shown in the upper half of Figure \ref{figure:sim}. \subsection{Automatic Parallel Corpus Creation} \label{section:data1} One of the requirements for training a supervised generative model for text style transfer is a large-scale parallel corpus.
We use distant supervision to collect self-labeled similes using the phrase \textit{like a}\footnote{While there can be noisy sentences where the TOPIC is a PNP and typically short ($\leq 6$ tokens), such as \textit{I feel like a .., I would like a .., I don't like a..}, they are few in number (1.1\%), so we do not remove them. More details in Appendix A.2.} from Reddit (e.g., the rows labeled Simile in Table \ref{table:example2}). For fine-tuning, the similes form the ``target" side of our parallel data. For the ``source" side, we use commonsense knowledge to transform the similes into their literal versions (e.g., the rows labeled Best Literal in Table \ref{table:example2}). One possible way to collect similes would be to train a supervised model using existing data and methods for simile detection, but most such datasets are very small (on the order of a few hundred examples). The only large-scale dataset is that of \cite{simile1}; however, their data comes from the rather restricted domain of Amazon product reviews, which often lacks the variety, diversity and creativity needed for this task. \paragraph{Simile Dataset Collection.} We hypothesize that similes are used frequently in creative writing and humorous content on social media \cite{veale2013humorous}. Hence, we obtain training data by scraping the subreddits WRITINGPROMPTS\footnote{\url{https://www.reddit.com/r/WritingPrompts/}} and FUNNY\footnote{\url{https://www.reddit.com/r/funny/}} on the social media site Reddit for comments containing the phrase \textit{like a}, using the API provided by pushshift.io\footnote{\url{https://pushshift.io/}} to mine comments. Through this process we collect 87,843 self-labeled human-written similes, of which 82,697 are used for training and 5,146 for validation. \paragraph{Simile to Literal Transformation via Commonsense Property.} From a theoretical perspective, similes are created by comparing the {\tt TOPIC} and the {\tt VEHICLE} through a shared {\tt PROPERTY}. While this property is naturally available to humans through commonsense and connotative knowledge, computers still struggle on such tasks when the {\tt PROPERTY} is not expressed. Hence we use structured commonsense knowledge to derive properties and transform similes into their literal versions. \begin{table}[t] \centering \small \begin{tabular}{|@{ }l@{ }|@{ }p{5.5cm}@{ }|} \hline Simile & Love is like a \textit{unicorn.} \\ \hline Has property & very rare, rare, beautiful, beautiful and smart, color \\ \hline Best Literal & Love is \textit{\color{blue}rare.} \\ \hline\hline Simile & It was cool and quiet, and I stormed through like a \textit{charging bull.} \\ \hline Has property & big and strong, dangerous, big, fast, large \\ \hline Best Literal & It was cool and quiet, and I stormed through \textit{\color{blue}fast.} \\ \hline\hline Simile & Sir Francis's voice was calm and quiet, like a \textit{breeze through a forest.} \\ \hline Has property & very relax, soothe, cool, beautiful, relax \\ \hline Best Literal & Sir Francis's voice was calm and quiet, \textit{\color{blue}very relaxed.} \\ \hline \end{tabular} \caption{Examples of self-labeled similes collected from Reddit. For each example, we show the top five commonsense properties associated with the \textit{vehicle} obtained from COMET, and the best literal sentence constructed from these properties.
The blue italic text in the literal sentences represents the \textit{property} inferred from the \textit{vehicle} in the simile (denoted in black italics).} \vspace{-1em} \label{table:example2} \end{table} To generate the commonsense {\tt PROPERTY} implied by the {\tt VEHICLE} in the simile, we take advantage of the simile's simple syntactic structure: we extract the {\tt VEHICLE} as the phrase following \textit{like a} and feed it as input to COMET \cite{comet}, an adaptation framework for constructing commonsense knowledge on top of pre-trained language models. Our work leverages only the \textbf{HasProperty} relation from COMET\footnote{\url{https://mosaickg.apps.allenai.org/comet\_conceptnet}}. For the simile \textit{`Love is like a unicorn.'}, the {\tt TOPIC} \textit{Love} is compared to the {\tt VEHICLE} \textit{unicorn}. As shown in Table \ref{table:example2}, COMET tells us that the top 5 properties associated with the {\tt VEHICLE} are \textit{very rare, rare, beautiful, beautiful and smart, color}. COMET sorts the properties by probability in isolation, relying only on the {\tt VEHICLE}. While in most cases all of the properties are apt, we want the literal sentence to be as meaningful as possible. To achieve this, we append each commonsense property to the portion of the simile before \textit{`like a'}, which typically consists of the {\tt TOPIC}, the {\tt EVENT}, and a {\tt PROPERTY} if stated explicitly. We take the top 5 properties from COMET to form 5 possible literal versions of a given simile. To rank these literal versions and select the best one, we rely on perplexity scores obtained from the pre-trained language model GPT \cite{gpt}. Table \ref{table:example2} shows human-written similes collected from Reddit, the top 5 commonsense properties associated with the {\tt VEHICLE}, and the literal version created with the best {\tt PROPERTY}. To correct any grammatical errors introduced by this manipulation, we rely on a grammatical error correction model \cite{zhao2019improving}.
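To make the perplexity-ranking step concrete, the sketch below scores candidate literal sentences with a left-to-right language model. We use HuggingFace's GPT-2 purely as a stand-in (the paper uses GPT), and the exact scoring details here are our assumptions:

\begin{verbatim}
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(sentence):
    """Perplexity of a sentence under the language model."""
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean token cross-entropy
    return torch.exp(loss).item()

candidates = ["Love is rare.", "Love is color."]  # from COMET properties
best = min(candidates, key=perplexity)            # keep the most fluent version
\end{verbatim}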
\paragraph{Test Data Collection.} \label{section:evaldata} Our task is to generate a simile given a literal input. The automatically generated parallel data might contain stylistic biases, so to truly measure the effectiveness of our approach we evaluate on a dataset independent of our training and validation data. To this end, we again scrape the WRITINGPROMPTS subreddit, this time for sentences that are \emph{literal} in nature (without any comparators such as \textit{like, as}). Since a literal utterance describes the {\tt TOPIC} via a {\tt PROPERTY}, and the {\tt PROPERTY} is usually an adjective or adverb, we restrict the last word of our literal sentences to adverbs or adjectives. We crawl 500 such sentences and randomly sample 150 literal utterances. We engaged two literary experts, a student in creative writing and a student in comparative literature who is the author of a novel, to write corresponding similes for each of these 150 inputs for evaluation and comparison. \subsection{Seq2Seq Model for Simile Generation} \label{section:model} Our goal of generating similes can be broken down into two primary tasks: 1) identifying the words in the literal sentence that should be removed or replaced, and 2) generating appropriate substitutions that remain pertinent to the context. Sequence-to-sequence (seq2seq) neural network models \cite{sutskever2014sequence} have demonstrated great success in many text generation tasks, such as machine translation, dialogue systems and image captioning, given a considerable amount of parallel data; hence we use a seq2seq model for simile generation. \begin{figure}[t] \centering \includegraphics[scale=0.75]{bart.pdf} \caption{\label{figure:bart} The backbone of SCOPE: fine-tuning BART on literal-to-simile pairs.} \vspace{-1em} \end{figure} BART \cite{lewis2019bart} is a pre-trained model combining bidirectional and auto-regressive transformers. It is implemented as a sequence-to-sequence model with a bidirectional encoder over corrupted text and a left-to-right autoregressive decoder. In principle, pre-training has two stages: (1) text is corrupted with an arbitrary noising function, and (2) a transformer-to-transformer model is learned to reconstruct the original text. Because BART has an autoregressive decoder, it can be directly fine-tuned for most sequence generation tasks: the encoder input is a sequence of words, and the decoder generates outputs autoregressively, as shown in Figure \ref{figure:bart}. BART achieves state-of-the-art results on a number of text generation tasks, making it an ideal choice for generating similes; we refer the reader to \cite{lewis2019bart} for further details. For our task, we fine-tune BART by treating the literal input as the encoder source and the simile as the decoder target. After fine-tuning, at inference time we use a top-$k$ sampling strategy \cite{fan2018hierarchical} to generate similes conditioned on a test literal input. \paragraph{Implementation details.} Hyper-parameters and other details needed to satisfy reproducibility checklists are given in Appendix A.1. \section{Experimental Setup} To compare the quality of the generated similes, we benchmark the SCOPE model and the human generations (HUMAN1 \& HUMAN2) described in Section \ref{section:evaldata} against the three baseline systems described below. \subsection{Baseline Systems} Simile generation is a new task; the baselines outlined below have been used for other generation tasks, and we adapt them to generate similes. \begin{enumerate} \item \textbf{BART}: the pre-trained BART model. Since BART is a pre-trained sequence-to-sequence model, it can be used for conditional text generation without fine-tuning. We feed the literal sentence (for example, \textit{The city was beautiful}) to the encoder and force the decoder to begin with the same prefix, obtained by removing the adjective/adverb at the end and appending the comparator and the article (\textit{The city was like a}), and then generate a simile. \item \textbf{Retrieval (RTRVL)}: a retrieval approach where we retrieve from ConceptNet \cite{conceptnet} the {\tt VEHICLE} with the highest-weighted \textit{HasProperty} relation with respect to our input property (the adjective or adverb at the end of the literal sentence)\footnote{ConceptNet is a weighted graph with multiple relations, as can be viewed at http://conceptnet.io/. We use the `HasProperty' relation for our work; among the multiple edges linking objects to their properties, we choose the edge with the highest weight.}. For the input \textit{The city was beautiful}, we query ConceptNet with \textit{beautiful} and it returns \textit{sunset} as the {\tt VEHICLE} with the highest weight for \textit{HasProperty beautiful}.
We take this retrieved {\tt VEHICLE} and append it to the prefix ending in \textit{like a}. If the word is not in ConceptNet, we fall back to its synonyms obtained from WordNet \cite{miller1995wordnet}. \item \textbf{Metaphor Masking (META\_M)}: the metaphor generation model of \newcite{metagen2} applied to literal sentences. Following their approach, we fine-tune BART with the adjective or adverb at the end of the literal sentence masked. The input is the masked text with the hidden adjective or adverb (\textit{The city was \textbf{\textless MASK \textgreater}}), and the output is the original simile (\textit{The city was like a painting}). Through this learning paradigm, the model learns to generate a simile when it encounters the mask token. At test time, we provide the model with the literal input, mask the adjective/adverb, and the model produces an output conditioned on this masking. \end{enumerate} \subsection{Evaluation Criteria} \paragraph{Automatic evaluation.} \textit{BLEU}~\cite{BLEU} is one of the most widely used automatic evaluation metrics for generation tasks such as machine translation. For creative text generation, however, it is not reasonable to expect significant n-gram overlap between machine-generated and gold-standard sentences. We nevertheless report BLEU scores for the generated {\tt VEHICLE} after discarding the common prefix with the gold simile. \textit{BERTScore}~\cite{zhang2019bertscore} has recently been used for evaluating text generation with contextualized embeddings and is said to ameliorate some of the problems with BLEU. It computes a similarity score, using contextual embeddings, between each token in the candidate (here, the {\tt VEHICLE} in the generated simile) and each token in the reference (the {\tt VEHICLE} in the human-written simile). To compute the F1 score, it uses recall (matching each token in the reference to a token in the candidate) and precision (matching each token in the candidate to a token in the reference). We report the F1 score of \textit{BERTScore}. \textit{Novelty.} To measure the model's generalization ability, we also test how well our models generate novel content: we measure the proportion of generated {\tt VEHICLE}s, conditioned on an adverb/adjective literal {\tt PROPERTY}, that do not appear in the training set. \begin{table}[] \small \centering \begin{tabular}{|l|l|l|l|l|} \hline & \bf B-1 & \bf B-2 & \bf BERT-S & \bf NOVELTY \\ \hline RTRVL & 0.0 & 0.0 & 0.13 & 92.6 \\ \hline BART & 3.25 & 0.32 & 0.12 & 92.6 \\ \hline META\_M & 3.73 & 0.96 & 0.15 & \textbf{93.3} \\ \hline SCOPE & \textbf{8.03} & \textbf{3.59} & \textbf{0.18} & 88.6 \\ \hline \end{tabular} \caption{Results using automatic metrics: BLEU-1 (B-1), BLEU-2 (B-2), BERTScore (BERT-S) and Novelty.
Boldface denotes the best results.} \label{table:autoeval} \end{table} \begin{table}[] \small \centering \begin{tabular}{|@{ }l@{ }|@{ }l@{}|@{ }l@{}|@{ }l@{}|@{ }l@{ }|} \hline \bf System & \bf C & \bf R1 & \bf R2 & \bf OQ \\ \hline HUMAN1 & \textbf{3.61} (0.34) & \textbf{3.74} (0.43) & 3.90 (0.51) & \textbf{3.54} (0.40) \\ \hline HUMAN2 & 3.46 (0.31) & 3.72 (0.43) & \textbf{3.97} (0.47) & 3.44 (0.39) \\ \hline\hline RTRVL & 1.90 (0.39) & 1.85 (0.44) & 1.73 (0.50) & 1.85 (0.42) \\ \hline BART & 2.68 (0.39) & 2.78 (0.45) & 2.75 (0.51) & 2.61 (0.41) \\ \hline META\_M & 2.68 (0.42) & 2.72 (0.46) & 2.77 (0.47) & 2.59 (0.41) \\ \hline SCOPE & \underline{3.16} (0.35) & \underline{3.50} (0.43) & \underline{3.78} (0.52) & \underline{3.32} (0.43) \\ \hline \end{tabular} \caption{Human evaluation of several criteria of simile quality for different systems' outputs and human-written similes. We show average scores on a 1-5 scale, where 1 is the worst and 5 the best; the corresponding inter-annotator agreement (IAA) is in parentheses. Boldface denotes the best results and underline the second best.} \label{table:example3} \end{table} \begin{table}[t] \small \centering \begin{tabular}{|p{0.3cm}|l|l|l|l|l|l|} \hline \multirow{2}{*}{} & \multicolumn{2}{l|}{\bf SCOPE/H1} & \multicolumn{2}{l|}{\bf SCOPE/H2} & \multicolumn{2}{l|}{\bf SCOPE/META\_M} \\ \cline{2-7} & w\% & l\% & w\% & l\% & w\% & l\% \\ \hline C & 28.0 & \textbf{58.6} & 26.6 & \textbf{57.3} & \textbf{58.6} & 31.3 \\ \hline R1 & 37.3 & \textbf{51.3} & 33.3 & \textbf{50.0} & \textbf{63.3} & 18.0 \\ \hline R2 & 42.6 & \textbf{45.3} & 37.3 & \textbf{44.6} & \textbf{69.3} & 17.3 \\ \hline OQ & 32.6 & \textbf{54.6} & 41.3 & \textbf{50.0} & \textbf{68.6} & 18.6 \\ \hline \end{tabular} \caption{Pairwise comparison between SCOPE and HUMAN1 (H1), HUMAN2 (H2), and META\_M. Win [w]\% (lose [l]\%) is the percentage of cases in which SCOPE gets a higher (lower) average score than HUMAN1, HUMAN2 or META\_M. The rest are ties.} \label{table:example4} \end{table} \paragraph{Human evaluation.} Automated metrics are not adequate on their own for evaluating creative text generation, so we also present a human evaluation. We evaluate a total of 900 utterances: 600 generated by the 4 systems and 300 written by humans. We propose four criteria for evaluating the generated output: (1) \textit{Creativity (C)} (``How creative are the utterances?''), (2) \textit{Overall Quality (OQ)} (``How good is the simile overall?''\footnote{Turker guidelines were to score based on how creative, well-formed, meaningful and relevant the simile is with respect to the literal utterance.}), (3) \textit{Relevance1 (R1)} (``How relevant is the generated {\tt VEHICLE} in portraying the {\tt PROPERTY}?''), and (4) \textit{Relevance2 (R2)} (``How relevant is the {\tt VEHICLE} to the {\tt TOPIC} in the generation?''). Evaluating 900 utterances on 4 separate dimensions yields 3,600 evaluations in total. We hired Turkers on MTurk to rate the outputs of the 4 systems and 2 humans. Each Turker was given the literal utterance as well as the 6 generated similes (randomly shuffled), and each criterion was rated on a scale from 1 (not at all) to 5 (very). Each utterance was rated by three separate Turkers. We hired 86, 48, 42, and 46 Turkers for Creativity, Overall Quality, Relevance1, and Relevance2 respectively. Further details are in Appendix A.4.
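For intuition about the BERTScore-style matching described above, the following schematic computes greedy-matching precision, recall and F1 from pre-computed, unit-normalized token embeddings. Actual BERTScore additionally applies idf weighting and baseline rescaling, so this is only an approximation of the idea:

\begin{verbatim}
import numpy as np

def greedy_match_f1(cand_emb, ref_emb):
    """Greedy-matching F1 from unit-normalized token embeddings (rows)."""
    sim = cand_emb @ ref_emb.T          # pairwise cosine similarities
    precision = sim.max(axis=1).mean()  # each candidate token -> best reference
    recall = sim.max(axis=0).mean()     # each reference token -> best candidate
    return 2 * precision * recall / (precision + recall)
\end{verbatim}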
\section{Experimental Results} \label{section:results} \subsection{Automatic Evaluation} Table \ref{table:autoeval} shows BLEU-1, BLEU-2 and BERTScore for our system compared to the three baselines. The low scores can be attributed to the nature of creative NLG tasks. To validate this, we also compute the BLEU-1 and BLEU-2 scores between the two literary experts, treating one as reference and the other as candidate, and obtain scores of $4.12$ and $0.52$ respectively. BERTScore is often a better metric because it uses contextualized embeddings: for example, for the candidate [\textbf{desert}] with multi-reference [[\textbf{sandy death trap}], [\textbf{wasteland}]], we get a BERTScore of 0.99 while the BLEU score is 0.0. Our best model, SCOPE, emerges as the winner on both BLEU and BERTScore. For novelty, SCOPE still generates novel content 88\% of the time, showing that it generalizes to unseen test data. Furthermore, there are 5,558 unique {\tt PROPERTY} values in the training data, and 41\% of the {\tt PROPERTY} values in the test data do not appear in training, showing that our model also generalizes to unseen {\tt PROPERTY} values. \subsection{Human Evaluation Scores} Table~\ref{table:example3} presents the scores on the aforementioned evaluation criteria for our model and the baselines on the test set. The results show that SCOPE is significantly ($p<.001$ according to an approximate randomization test) better than the baselines on all four criteria, and on all metrics our best system is comparable to humans. We also computed Pearson's correlation between OQ and the other metrics: R1 and R2 had moderate correlations of 0.54 and 0.52 with OQ, while C was fairly correlated (0.31) with OQ, suggesting that relevance matters when judging the quality of a simile. \paragraph{Pairwise Comparison between systems.} Table \ref{table:example4} shows pairwise comparisons of SCOPE against the human-generated similes (HUMAN1 and HUMAN2) and META\_M \cite{metagen2}. Given a pair of outputs, we decide win/lose/tie by comparing the average scores (over three Turkers) of both outputs. SCOPE outperforms META\_M on all metrics. For overall quality, although literary experts are unsurprisingly better, the SCOPE model still has winning rates of 32.6\% and 41.3\% against them respectively. \begin{figure} \centering \includegraphics[scale=0.17]{bar-chart.png} \caption{\label{figure:cn} Bar chart showing the percentage of times each individual system won in terms of Overall Quality.} \end{figure} \section{Qualitative Analysis} Table \ref{table:example5} demonstrates several generation outputs from different systems along with the human judgments on the individual criteria. We observe that our model is often better than at least one human on a given criterion while outperforming the baselines by a large margin. \subsection{Role of Relevance} While conditioning on the context of literal sentences might yield grammatically correct similes, they are often not meaningful or relevant to the {\tt PROPERTY} in question. META\_M generates similes by fine-tuning BART on literal sentences with the commonsense {\tt PROPERTY} masked; the lack of a relevance mapping during fine-tuning often leads to improper generations. For instance, referring to Table \ref{table:example5}, the context of `falling into the wrong hands' is likely to lead to something bad, so `gift' is not appropriate here while `nuclear bomb' is. One possible way of incorporating relevance is through commonsense knowledge.
\subsection{Role of Context} Context is essential for simile generation. For example, given the literal input \textit{`But times are hard, and silver bullets are \textbf{expensive}'}, even though ConceptNet tells us that \textbf{diamonds} have the \textit{HasProperty} relation with expensive, the simile generated by the RTRVL model, \textit{`But times are hard, and silver bullets are like a \textbf{diamond}'}, seems inappropriate, suggesting that context leads to better generations. Our SCOPE model instead generates \textit{`But times are hard, and silver bullets are like a \textbf{luxury item}'}. \section{Task-based Evaluation: Simile for Story Generation} \label{section:story} Similes are often used to evoke imagery, and generating or transforming text to be evocative can be useful for journalism, poetry and story writing. Table \ref{table:example7} shows how our simile generation module can be used as a post-processing step to replace literal sentences, leading to more expressive and creative stories. To test this hypothesis, we conduct the experiment outlined below. \begin{table}[] \small \centering \begin{tabular}{|c|c|c|} \hline GPT2 & GPT2+META\_M & GPT2+SCOPE \\ \hline 23\% & 25\% & \textbf{42\%} \\ \hline \end{tabular} \caption{\label{tab:analysis}Win\% (in terms of average score over three annotators) of stories generated with GPT2 alone versus GPT2 with META\_M or SCOPE simile post-processing. The rest are ties.} \end{table} \subsection{Story Generation} We use the ROCStories \cite{mostafazadeh2016corpus} dataset to generate stories with the \textit{Plan and Write} approach of \citet{yao2019plan}. We introduce a two-step pipeline: we fine-tune a pre-trained GPT2 \cite{gpt} model on titles and storylines from the training set to generate a storyline given a title (row 1 of Table \ref{table:story}), and in parallel fine-tune GPT2 on storylines and stories from the training set to generate a story given a storyline (row 2 of Table \ref{table:story}). At test time, we first generate a storyline from an input title and then generate a story from that storyline. \subsection{Post Processing} A story can contain multiple sentences ending with an adjective or adverb, and replacing each of them with a simile might lead to over-embellishment. We therefore feed only one randomly selected sentence to the SCOPE or META\_M module and replace that sentence in the GPT2-generated story with the output of SCOPE or META\_M, respectively. \subsection{Human evaluation.} We randomly select 50 titles from the ROCStories dataset and generate stories as described above, post-processing each with SCOPE and META\_M separately. Thus for each title we have 3 stories: 1) the original GPT2 story, 2) the GPT2 story post-processed with SCOPE, and 3) the GPT2 story post-processed with META\_M. For each title, we present the three stories to workers on AMT and ask them to score each from 1 (poor) to 5 (excellent) based on creativity and evocativeness. The results in Table \ref{tab:analysis} show that effective use of similes can improve the evocativeness and reception of machine-generated stories. \section*{Acknowledgments} This work was supported by the CwC program under Contract W911NF-15-1-0543 with the US Defense Advanced Research Projects Agency (DARPA). The views expressed are those of the authors and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
The authors would like to thank Kai-Wei Chang, Christopher Hidey, Christopher Robert Kedzie, Anusha Bala, and Liunian Harold Li for useful discussions. The authors also thank the members of the PLUSLab at the University of California, Los Angeles and the University of Southern California, as well as the anonymous reviewers, for helpful comments.
\section{Introduction} \label{section.intro} Intense periods of star formation can lead to multi-phase, galaxy-scale outflows driven by the energy and momentum of stellar winds and supernovae. These galactic winds can have speeds of hundreds to thousands of km~s$^{-1}$ and are the most extreme instance of stellar feedback \citep{Veilleux2005,Heckman2017,Rupke2018}. Winds have the potential to regulate or even quench star formation in their host galaxies, and are perhaps the main source of metals in the intergalactic medium. It is generally accepted that star formation and AGN are the sources of galactic winds, but how these determine the morphology, kinematics, and composition, and how much (if any) material completely escapes the galaxy, are unsettled questions. Plausible theoretical models vary widely in their answers to these questions \citep[for a recent theoretical review, see][]{Zhang2018}. Addressing these uncertainties requires measuring the evolution of wind properties with distance from the galaxy, such as the temperature, density, and velocity phase diagrams. This is challenging because winds expand and decrease in surface brightness, so beyond a few kpc from the galaxy winds have primarily been studied in absorption. For example, using the MAGIICAT dataset of Mg~{\sc ii} absorbers, \citet{Kacprzak2012} found that absorption occurs most frequently within $\sim 20^{\circ}$ of the minor axis (outflows) and major axis (accretion) of the host galaxies \citep[although this is not generally true beyond 40~kpc;][]{Bordoloi2014}, while \citet{Heckman2017b} showed with the COS-Burst survey that Ly$\alpha$ absorbers around starburst galaxies have roughly twice the virial velocity, suggesting rapid outflows. Nevertheless, searching for emission at large radii is important because it enables detailed study of winds from individual objects, whereas pencil-beam absorption studies must build up samples of absorption systems around similar galaxies or focus on well-placed sightlines. This is limiting because the wind properties are sensitive to the characteristics of the generative starburst or AGN, which may differ substantially even in similar galaxies. In addition to emission, scattered Lyman-$\alpha$ can probe winds in individual galaxies \citep[e.g.,][]{Duval2016}, but as it does not directly probe ionized gas and can have complex line-of-sight radiative transfer even after scattering, it is not a primary tool. One of the best ways to search for emission from winds, which is expected to be primarily line emission, is to use integral field spectrographs on large telescopes \citep[e.g.,][]{Finley2017}. Recently, \citet{Rupke2019} reported the discovery of a 100~kpc wind in SDSS~J211824.06+001729.4 (``Makani'') at $z=0.459$ using the integral field unit at the Keck Observatory. However, it remains worthwhile to search for extended emission from nearby winds because they can be observed with current instruments in nearly every waveband. Here we present evidence for emission from shocked gas in a biconical outflow extending at least 60~kpc from NGC~3079, using far ultraviolet (FUV) and X-ray data. NGC~3079 is a well-known starburst and AGN galaxy with an extensive radio halo \citep{Irwin2003}, a nuclear H$\alpha$ and X-ray bubble \citep{Cecil2001}, and diffuse nuclear hard X-rays \citep{JiangtaoLi2019}. A biconical outflow at lower latitudes has already been established from previous H$\alpha$, X-ray, and UV studies \citep[e.g.,][]{Fabbiano1992,Strickland2004}.
There is also a suggestion that the extensive \ion{H}{1} tail seen behind the companion NGC~3073 is due to stripping by the wind from NGC~3079 \citep{Irwin1987}: the ambient halo density required to explain it via stripping by the hot halo of NGC~3079, and the tidal forces needed to strip the gas, are both inconsistent with existing data \citep{Shafi2015}. If so, this would indicate a wind extending significantly beyond the radio halo, although the prospect of NGC~3073 itself driving an outflow has not been thoroughly explored. The starburst in NGC~3079 is fueled by $3\times 10^8 M_{\odot}$ of molecular gas in the nucleus \citep{Sofue2001} with a nuclear star-formation rate (SFR) of $2.6 M_{\odot}$~yr$^{-1}$ \citep{Yamagishi2010}. In the following sections we describe the data sources (Section~\ref{section.data}), the search for extended diffuse emission and structure (Section~\ref{section.filaments}), and our interpretation of this structure based on the X-ray properties (Sections~\ref{section.xrays} and \ref{section.interpretation}). We close with a summary in Section~\ref{section.summary}. We adopt a distance of $d=19$~Mpc for NGC~3079 \citep{Springob2009}, which corresponds to a scale of 5.5~kpc~arcmin$^{-1}$ and is the median redshift-independent distance from the NASA/IPAC Extragalactic Database\footnote{http://ned.ipac.caltech.edu}. \section{Observations and Data} \label{section.data} \begin{figure*}[htp] \centering \includegraphics[width=0.95\textwidth]{Images.pdf} \caption{\textit{Left}: \textit{GALEX}\ FUV image of NGC~3079, corrected for the galaxy light scattered into the wings of the PSF (see text for an explanation). The galaxy itself is clipped out near $R_{25}$ and bright source masks are indicated in black (many more small point sources were excised). Several filament candidates form an ``X'' shape. The color table is in units of $10^{-4}$~counts~s$^{-1}$. \textit{Right}: 0.3-2~keV \textit{XMM-Newton}\ image of the same field with source masks shown in black and the optical galaxy ($R_{25}$) shown as a blue ellipse. The same ``X'' shape seen in previous \textit{Chandra}\ images is evident. The color table is in units of $10^{-5}$~counts~s$^{-1}$. Both images have been smoothed with a Gaussian kernel and clipped at the mean background. } \label{fig:images} \end{figure*} The primary data sets used in this paper are a 16.1~ks \textit{GALEX}\ (Galaxy Evolution Explorer) observation of NGC~3079 and 300~ks of new and archival \textit{XMM-Newton}\ observations. Of these, 231~ks are new observations obtained under \textit{XMM-Newton}\ project 080271 (PI: Hodges-Kluck). We also used archival \textit{Chandra}, \textit{Neil Gehrels Swift Observatory} (\textit{Swift}), \textit{Infrared Astronomical Satellite} (IRAS), and \textit{Herschel}\ observations as complementary data sets.
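As a quick check of the adopted scale, the conversion can be written in a few lines (a sketch assuming {\tt astropy}, which is not otherwise part of this analysis):

\begin{verbatim}
# Sketch: angular-to-physical scale at the adopted distance.
import astropy.units as u

d = 19 * u.Mpc
scale = (d * (1 * u.arcmin).to(u.rad).value).to(u.kpc)
print(scale)  # ~5.5 kpc per arcmin, as quoted in the text
\end{verbatim}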
\begin{deluxetable}{lccccc} \tablenum{1} \tabletypesize{\scriptsize} \tablecaption{\label{table.obs} UV and X-ray Observations in this Paper} \tablewidth{0pt} \tablehead{ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} \\ \colhead{Telescope} & \colhead{Instrument} & \colhead{Date} & \colhead{ObsID} & \colhead{Exposure} & \colhead{GTI} \\ & & & & \colhead{($10^3$~s)} & \colhead{($10^3$~s)} } \startdata GALEX & FUV & 2005-02-24 & NGA\_NGC3079 & 16.1 & 16.1 \\ GALEX & NUV & 2005-02-24 & NGA\_NGC3079 & 16.1 & 16.1 \\ Swift & UVW1 & 2013-11-12 & 80030001 & 0.3 & 0.3 \\ Swift & UVM2 & 2008-02-26 & 37245001 & 8.5 & 8.5 \\ Swift & UVW2 & 2009-02-27 & 37245002 & 1.2 & 1.2 \\ Swift & UVW2 & 2013-11-12 & 80030001 & 0.6 & 0.6 \\ Swift & UVW2 & 2014-04-04 & 91912001 & 0.7 & 0.7 \\ Swift & UVW2 & 2014-04-06 & 91912002 & 4.9 & 4.9 \\ XMM & EPIC & 2001-04-13 & 110930201 & 25.3 & 3.1 \\ XMM & EPIC & 2003-10-14 & 147760101 & 44.4 & 6.6 \\ XMM & EPIC & 2017-11-01 & 802710101 & 22.8 & 17.0 \\ XMM & EPIC & 2017-11-03 & 802710201 & 22.4 & 7.1 \\ XMM & EPIC & 2017-11-05 & 802710301 & 22.4 & 9.1 \\ XMM & EPIC & 2017-11-09 & 802710401 & 22.4 & 3.8 \\ XMM & EPIC & 2017-11-15 & 802710501 & 22.4 & 11.3 \\ XMM & EPIC & 2017-11-23 & 802710601 & 22.4 & 14.1 \\ XMM & EPIC & 2017-11-27 & 802710701 & 22.4 & 12.5 \\ XMM & EPIC & 2018-04-17 & 802710801 & 26.0 & 22.0 \\ XMM & EPIC & 2018-04-21 & 802710901 & 22.4 & 18.6 \\ XMM & EPIC & 2018-04-23 & 802711001 & 25.3 & 21.0 \\ Chandra & ACIS-S & 2001-03-07 & 2038 & 26.6 & 26.6 \\ Chandra & ACIS-S & 2018-01-30 & 19307 & 53.2 & 53.2 \\ Chandra & ACIS-S & 2018-02-01 & 20947 & 44.4 & 44.4 \\ \enddata \end{deluxetable} \subsection{GALEX} We used the FUV and NUV pipeline-processed \textit{GALEX}\ images. The FUV filter has an effective wavelength of 1542\AA\ and a width (FWHM) of 228\AA. The NUV filter has an effective wavelength of 2274\AA\ and a width of 796\AA. The on-axis angular resolution of the 50~cm telescope is about 4.2\arcsec\ in the FUV and 5.3\arcsec\ in the NUV channel. To prepare the images for analysis, we clipped the image to a 25\arcmin$\times$25\arcmin\ square around the galaxy, identified and masked point sources at least 3$\sigma$ above background, and masked extended sources through visual inspection. The size of the region of interest is limited by the significantly increased noise and image reconstruction artifacts near the edge of the \textit{GALEX}\ field, and no overlapping \textit{GALEX}\ image is nearly as deep. Since we are interested in diffuse light near the bright galaxy, we corrected for galactic light scattered into the halo region by the wings of the point-spread function (PSF). We followed the procedure described in \citet{Hodges-Kluck2016} (HK16), which involves subtracting a scaled convolution of the galaxy image (i.e., within the optical $R_{25}$) with the PSF from the raw image. The scale factor depends on the PSF and accounts for the fact that the galaxy image has already been convolved with the PSF. We used a PSF constructed in \citetalias{Hodges-Kluck2016} that is 15\arcmin\ in radius. There are no other extended sources in the field bright enough to require this procedure. The resultant image is shown in Figure~\ref{fig:images}. \subsection{XMM} We processed each \textit{XMM-Newton}\ dataset using Science Analysis Software (SAS) v17.0.0 and applied standard procedures to extract events files for each camera (two MOS and one pn). 
First, we used the {\tt emchain} and {\tt epchain} scripts to filter and calibrate events and produce analysis-ready event lists for the MOS and pn cameras (for the pn we also extracted out-of-time events). We then used the XMM Extended SAS (E-SAS) software \citep{Snowden2004} to further filter the data, including removing background flares (through {\tt mos\_filter} and {\tt pn\_filter}), masking point sources (through modified {\tt cheese} masks), and extracting spectra with the quiescent particle background and an estimate of the solar wind charge exchange subtracted (through {\tt mos-spectra}, {\tt mos\_back}, {\tt pn-spectra}, and {\tt pn\_back}). After filtering, about 150~ks of good time remained (Table~\ref{table.obs}). We used the background estimates to create filtered images from each detector, and combined these images across all exposures for the image analysis. The combined X-ray image is several times more sensitive to faint point source emission than any individual exposure and so we identified and masked additional sources with {\tt edetect\_chain} to highlight the diffuse emission. In general, the source masks extend to where the light from the wings of the PSF is well below the background rather than to a fixed encircled energy fraction. \begin{figure*} \centering \includegraphics[height=3in]{FUV_filaments.pdf} \includegraphics[height=3in]{azmap.pdf} \caption{\textit{Left}: Four candidate filaments identified in the \textit{GALEX}\ FUV image. The maximum apparent extent is about 66~kpc from the galaxy nucleus on the east side. The existence and extent of the northwest filament is unclear because it coincides with the companion galaxy, MCG 9-17-9, located 33~kpc from NGC~3079. \textit{Right}: An azimuthal significance map between $R=$0--1.25~arcmin, 1.25--5.0~arcmin, and 5.0--11.25~arcmin. Each segment is 15$^{\circ}$ wide. In each ring the color represents $(X-\mu)/\sigma$, where $X$ is the mean value in each segment, $\mu$ is the mean for the ring, and $\sigma$ is the standard deviation of values in the ring. The colors have been biased towards positive deviations. } \label{fig:fuv_filaments} \end{figure*} For the image analysis we adopt the $0.3-2$~keV bandpass to maximize the signal from hot gas. The combined $0.3-2$~keV image is shown in Figure~\ref{fig:images}. The sensitivity is not uniform across the field, and this image has been clipped to the region where the effective exposure exceeds about 80\% of the total. However, there is clearly a ring of increased noise that limits the search for extended diffuse emission. The primary astrophysical backgrounds are the hot gas in and around the Milky Way and solar wind charge exchange, whereas the instrumental backgrounds include residual soft proton flaring (below the flare detection threshold) and a strong Al~K$\alpha$ line (1.49~keV) from the detector. The astrophysical backgrounds are vignetted, the protons are centrally concentrated but not optically vignetted, and the detector background is not vignetted. This means that simple exposure correction can be misleading. We follow \citet{Anderson2014} in accounting for backgrounds (but not in fitting a radial profile) in the $0.3-2$~keV bandpass, which involves fitting the soft X-ray background spectrum at large radii to obtain its physical normalization, using the unexposed chip corners (through the ESAS software) to make non-vignetted background maps, and ``exposure correcting'' using a factor based on the vignetting function as a function of radius. 
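For reference, the PSF-wing correction applied to the \textit{GALEX}\ images above amounts to the following schematic operation ({\tt image}, {\tt psf}, {\tt galaxy\_mask}, and {\tt scale} are assumed inputs; the actual 15\arcmin\ PSF and scale factor of \citetalias{Hodges-Kluck2016} are not reproduced here):

\begin{verbatim}
# Schematic PSF-wing correction (cf. HK16): subtract a scaled
# convolution of the masked galaxy image with the PSF.
import numpy as np
from scipy.signal import fftconvolve

def subtract_psf_wings(image, psf, galaxy_mask, scale):
    # psf is normalized to unit sum; galaxy_mask is True
    # inside the optical R25 ellipse.
    galaxy_only = np.where(galaxy_mask, image, 0.0)
    wings = fftconvolve(galaxy_only, psf, mode="same")
    # 'scale' accounts for the galaxy image having already
    # been convolved with the PSF.
    return image - scale * wings
\end{verbatim}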
\subsection{Auxiliary Data} We used the \textit{uvw1} ($\lambda$2600\AA), \textit{uvm2} ($\lambda$2200\AA), and \textit{uvw2} ($\lambda$2000\AA) \textit{Swift}\ Ultraviolet and Optical Telescope (UVOT) images as a check on the deeper \textit{GALEX}\ data. We previously processed these data in \citetalias{Hodges-Kluck2016}, including the removal of persistent, diffuse scattered light artifacts and the correction of PSF scattering from the galaxy. Multiple short exposures were combined to form single images for each filter. The 124~ks of archival \textit{Chandra}\ Advanced CCD Imaging Spectrometer (ACIS) data were used to identify X-ray point sources to remove from the \textit{XMM-Newton}\ data and to search for diffuse emission. Both data sets were centered on the ACIS-S3 chip. Unfortunately, the longer \textit{Chandra}\ data sets were obtained after the detector lost most of its soft sensitivity ($E<1$~keV) due to molecular contaminant buildup on the detector window. Thus, the \textit{Chandra}\ sensitivity to hot gas is much worse than with \textit{XMM-Newton}. We reprocessed the data using the \textit{Chandra}\ Interactive Analysis of Observations (CIAO) v4.11 software\footnote{http://cxc.harvard.edu/ciao/}. We used the standard {\tt chandra\_repro} script to filter and grade the events. We then searched for background flares in the light curve for the whole ACIS-S3 chip and excluded periods where the count rate exceeded 3$\sigma$ above background but found no significant flaring. For the imaging analysis, we reprojected and combined the data sets using the {\tt merge\_obs} script. We then identified and removed point sources, and restricted the bandpass to $0.3-2$~keV. We also used the 100~$\mu$m IRAS and pipeline-processed \textit{Herschel} data in several bands from 70-160~$\mu$m without further processing. \section{An Extended Wind Cone} \label{section.filaments} We searched for extended emission in the \textit{GALEX}\ and X-ray images (Figure~\ref{fig:images}). Both the FUV and X-ray images show the characteristic ``X'' shape of a biconical wind known to exist around NGC~3079 from previous H$\alpha$ and \textit{Chandra}\ observations, but the filaments along the wind edges (identified in Figure~\ref{fig:fuv_filaments}) appear to extend significantly farther than previously known, especially in the FUV image, where the eastern filaments extend to at least 60~kpc from the galactic center. The western filaments appear more truncated but extend to at least 30-40~kpc from the nucleus. Meanwhile, the \textit{XMM-Newton}\ image shows more extended X-ray emission on the west side with a maximum extent of about 40~kpc, and to 30-35~kpc on the east. The emission may extend farther, but the sensitivity declines significantly outside $R \sim 40$~kpc. \subsection{Filaments} The filaments are resolved, with widths exceeding several kpc, although it is difficult to measure their widths owing to the low surface brightness. This also makes it difficult to define the filaments objectively. Here we assumed that they simply follow the direction of the brighter emission seen at lower latitudes and defined rectangular boxes of 38$\times$13~kpc that start at the edge of the brighter reflection nebula reported at lower latitudes in \citet{Hodges-Kluck2016}. The 7~arcmin length (38~kpc) appears well matched to the data, but was chosen by eye. We measured the fluxes in these boxes, which are given in Table~\ref{table.filaments} and overlaid on Figure~\ref{fig:fuv_filaments}.
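The conversion from \textit{GALEX}\ count rate to AB magnitude and luminosity used for these measurements can be sketched as below, assuming the standard \textit{GALEX}\ zero points (quoted in the next paragraph) and neglecting the PSF-wing and dust corrections applied to the actual measurements:

\begin{verbatim}
# Sketch: GALEX count rate -> AB magnitude -> luminosity.
# FUV/NUV zero points are the standard GALEX values.
import numpy as np

D_CM = 19e3 * 3.086e21                   # 19 Mpc in cm
ZP = {"FUV": 18.82, "NUV": 20.08}
LAM = {"FUV": 1542.0, "NUV": 2274.0}     # Angstrom

def ab_mag(rate, band):
    return ZP[band] - 2.5 * np.log10(rate)

def lam_L_lam(rate, band):
    m = ab_mag(rate, band)
    f_nu = 10 ** (-0.4 * (m + 48.6))     # erg/s/cm^2/Hz
    f_lam = f_nu * 3e18 / LAM[band] ** 2 # erg/s/cm^2/A
    return 4 * np.pi * D_CM**2 * LAM[band] * f_lam

rate_ne = 10 ** (-0.4 * (19.8 - ZP["FUV"]))  # rate for m=19.8
print(lam_L_lam(rate_ne, "FUV"))  # ~3.7e40 erg/s (Table 2, NE)
\end{verbatim}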
Detecting low surface brightness emission requires accounting for true variation in the background across the field of view. This variation, rather than Poisson noise, limits the sensitivity. We defined multiple source-free regions around the galaxy at the outskirts of the clipped field of view and measured the mean background in each, as well as the standard deviation among the regions. We find $B_{\text{NUV}} = (2.17\pm0.01) \times 10^{-3}$~counts~s$^{-1}$~pix$^{-1}$ and $B_{\text{FUV}} = (1.44\pm0.02) \times 10^{-4}$~counts~s$^{-1}$~pix$^{-1}$, which are consistent with typical GALEX values. The fluxes in each filament region were measured and converted to magnitudes using the AB zero points of 20.08 and 18.82~mag for the NUV and FUV, respectively. The sensitivities imply 3$\sigma$ upper limits for these boxes of around 20.4 and 21.0~mag for the NUV and FUV, respectively (surface brightness of 23.5 and 24.1~mag~arcmin$^{-2}$). Each of the FUV filaments is clearly detected (at a 3$\sigma$ threshold). The luminosities range from 2-5$\times 10^{40}$~erg~s$^{-1}$ and their total luminosity is about 1\% of the galaxy FUV luminosity. In contrast, no filaments are detected in the NUV image. The limits imply FUV/NUV luminosity ratios greater than 2 in the NE, NW, and SE filaments, corresponding to FUV$-$NUV colors of less than $-0.4$~mag. The SW filament is the weakest detection in the FUV, but it is also not present in the NUV and Figure~\ref{fig:fuv_filaments} suggests that it may not extend the full length of the (uniform) 38~kpc box. Although there are no extended ($R>15$~kpc) NUV filaments, at lower latitude diffuse UV continuum emission is detected in the FUV, NUV, and the UVOT images. The morphology in each filter agrees well and the measured fluxes are similar. This emission was ascribed to scattering by circumgalactic dust in \citetalias{Hodges-Kluck2016}, indicating that the FUV filaments are something else. However, since we defined the filaments based on the presumption that they continue from spurs seen at lower latitude, it is likely that not all of the UV light seen at lower latitude comes from dust. The measured fluxes show that there is some extended emission in the boxes that we defined, but this does not prove that the filaments are actually coherent structures that trace the ``X'' shape seen clearly at lower latitudes. The average surface brightness of the NE box (without subtracting background) is 20.4~mag~arcmin$^{-2}$, which is about equal to the 1$\sigma$ contour above background. This precludes definitively characterizing the emission as a coherent filament. However, there are two reasons to believe that they do continue the ``X'' shape. First, the X-ray image has similar structures that are clearly connected to the lower latitude filaments. This also makes it unlikely that they are filter artifacts. There are some \textit{GALEX}\ FUV images where either a ghost image or stray light from a bright source exterior to the field causes similar-looking patterns, but there are no large-scale structures like this seen in the wider FUV or NUV images. Second, an azimuthal map gridded in 15$^{\circ}$ angular segments at several radii (based on the drop-off of the average radial profile) suggests that the ``X''-shaped structure continues beyond the inner regions except to the southwest.
The right panel in Figure~\ref{fig:fuv_filaments} shows the quantity $(X-\mu)/\sigma$, where $X$ is the mean value of the FUV image in each segment, $\mu$ is the mean of the $X$ values in that ring, and $\sigma$ is their standard deviation (\textit{not} the background). There are no filament candidates exceeding 3$\sigma$ from the mean at any radius, but note that if the ``X'' shape is real then at least four of the 24 segments will have positive deviations. For instance, if the background annulus has a uniform brightness and there are four equally enhanced segments, the limiting $(X-\mu)/\sigma \approx 2.2$. Therefore, the purpose is to show where positive deviations occur, and the map shows that they form an ``X''-shaped structure in the second ring ($R=1.25$ to 5~arcmin), and this continues to the northwest, northeast, and southeast in the outer ring ($R=5$ to 11.25~arcmin). \begin{deluxetable*}{cccccccccc} \tablenum{2} \tabletypesize{\scriptsize} \tablecaption{\label{table.filaments} UV Filament Luminosities} \tablewidth{0pt} \tablehead{ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} \\ \colhead{Filament} & \colhead{R.A.} & \colhead{Dec} & \colhead{P.A.} & \colhead{Length} & \colhead{Width} & \colhead{Filter} & \colhead{$m$} & \colhead{FUV$-$NUV} & \colhead{$\lambda L_{\lambda}$} \\ & \colhead{(J2000)} & \colhead{(J2000)} & \colhead{(deg.)} & \colhead{(arcmin)} & \colhead{(arcmin)} & & \colhead{(mag)} & \colhead{(mag)} & \colhead{($10^{40}$~erg~s$^{-1}$)} } \startdata NE & 10:02:32.16 & $+$55:47:15.8 & 38 & 7 & 2.4 & FUV & 19.8$\pm$0.1 & $< -0.4$ & 3.7$\pm$0.4 \\ & & & & & & NUV & $>$20.2 & & $<1.8$ \\ SE & 10:02:38.20 & $+$55:37:37.1 & 110 & 7 & 2.4 & FUV & 19.6$\pm$0.1 & $< -0.7$ & 4.6$\pm$0.4 \\ & & & & & & NUV & $>$20.3 & & $<$1.6 \\ NW & 10:01:15.92 & $+$55:43:11.2 & 110 & 7 & 2.4 & FUV & 19.8$\pm$0.1 & $<-0.6$ & 3.7$\pm$0.3 \\ & & & & & & NUV & $>$20.4 & & $<$1.5\\ SW & 10:01:24.17 & $+$55:35:57.9 & 60 & 7 & 2.4 & FUV & 20.4$\pm$0.2 & $< 0.1$ & 2.1$\pm$0.4 \\ & & & & & & NUV & $>$20.3 & & $<$1.3 \\ \enddata \tablecomments{Cols. (1) Filament ID from Figure~\ref{fig:fuv_filaments} (2-3) Central coordinates (4-6) Region (7) GALEX filter (8) AB magnitude calculated from the GALEX count rate after PSF wing and dust correction (9) FUV-NUV color (10) Luminosity} \end{deluxetable*} \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Cirrus.pdf} \caption{Neither the full \textit{GALEX}\ FUV image (\textit{left}) nor the 100~$\mu$m IRAS image (\textit{right}) support the hypothesis that the filaments are Galactic cirrus. The cirrus in the region of NGC~3079 is neither bright nor strongly spatially variable. Since the galaxy itself is a bright point source at 100~$\mu$m (89~Jy), it is not possible to search for FIR counterparts in the IRAS data. The box shows the region shown in Figure~\ref{fig:images}, while the circle shows the \textit{GALEX}\ field of view. The FUV color scale is in units of $10^{-4}$~counts~s$^{-1}$, while the IRAS color scale is in units of MJy~sr$^{-1}$. Sources detected with greater than 3$\sigma$ significance have been masked in the FUV image.} \label{fig:cirrus} \end{figure} \subsection{Galactic Cirrus} Extensive Galactic cirrus filaments are common in \textit{GALEX}\ images, so we investigated whether the filaments around NGC~3079 can be explained by cirrus.
First, we searched for larger structures in the full 2$^{\circ}$ FUV image (as well as in the NUV image). There is no clear structure in this wider field in either the FUV (Figure~\ref{fig:cirrus}) or the NUV. The FUV$-$NUV colors of less than $-0.4$~mag in the filament boxes shown in Figure~\ref{fig:fuv_filaments} also disfavor cirrus, which tends to have FUV$-$NUV$ \sim 0$. If the filaments were cirrus, we should have easily detected them in the NUV image. We then searched for far-infrared (FIR) counterparts, or evidence for filamentary or highly variable FIR emission from cirrus in the region. To investigate the larger-scale emission, we used the IRAS 100~$\mu$m maps\footnote{retrieved from https://irsa.ipac.caltech.edu/data/IRIS/}, as reprocessed using the IRIS software \citep{Miville-Deschenes2005}. Figure~\ref{fig:cirrus} shows the wider field in the FUV and in the FIR; the cirrus in the region appears somewhat patchy, but with variation on larger scales than the filaments. NGC~3079 itself is a FIR source, and the 4\arcmin\ resolution of the IRAS map prevents directly searching for FIR counterparts to the FUV filaments. However, no such counterparts are seen in the higher resolution, but smaller field of view, \textit{Herschel}\ images of the region (covering 70-200~$\mu$m). Since diffuse FIR emission from dust above NGC~3079 itself (``cirrus'' in that galaxy) is detected at lower latitudes in the same maps, we estimate that, given the FUV fluxes, FIR emission from any Galactic cirrus producing the filaments would be detectable. Finally, we examined the fields around NGC~3079 that have shallow \textit{GALEX}\ FUV coverage. We do not find such sharply defined structure within a few degrees of NGC~3079. Thus, we argue that it is very unlikely that the large FUV filaments are due to Galactic cirrus. They are also not due to scattered light from dust above NGC~3079 itself. At lower latitudes, dust scattering is evident through UV continuum emission in the FUV, NUV, and \textit{Swift}\ UVOT bands \citepalias{Hodges-Kluck2016}. The FUV$-$NUV color of the diffuse, low latitude scattered light is close to 0~mag, whereas FUV$-$NUV$ < -0.4$~mag for three of the four filaments. If the FUV filaments are not produced by dust in either the Galaxy or NGC~3079, the most likely light source is line emission within the filter bandpass. Any other continuum source would have to explain the absence of emission in redder bands. There are several possible lines covered by the FUV filter, including \ion{C}{4} $\lambda\lambda$1548, 1550\AA, \ion{He}{2} $\lambda$1640\AA, and [\ion{O}{3}] $\lambda\lambda$1661, 1666\AA. These lines could be produced by cooling wind fluid, photoionization, or shock heating. Without spectroscopic data, it is not possible to use the FUV fluxes alone to distinguish between these possibilities; with a spectrum, models such as {\sc cloudy} \citep{Ferland2017} would be able to do so. For example, the \ion{He}{2} line requires hard ionizing photons (at least 54~eV), so a strong \ion{He}{2} line would favor shocks or AGN photoionization \citep[cf.][]{Jaskot2016}. As far as we know, there are no existing models that could use only the FUV flux to determine the origin, but we note that \citet{Borthakur2013} found \ion{C}{4} \textit{absorption} around four starburst galaxies at impact parameters between 100-200~kpc and concluded from {\sc cloudy} modeling that photoionization from either a starburst or the metagalactic radiation field cannot explain the \ion{C}{4} lines.
They instead find that shock heating can do so, provided that much of the wind energy is expended in shock-heating circumgalactic gas. The emission around NGC~3079 is not as extended, but if it connects to unseen ionized gas at larger radii then the same logic would apply. \subsection{Quadrant Stacks} \begin{figure*} \centering \hspace{-0.55cm}\includegraphics[height=2.2in,trim=60 340 -10 160,clip]{quad_plot.pdf} \includegraphics[height=2.2in]{fuv_xray_combined.pdf} \caption{FUV (\textit{left}) and X-ray (\textit{center}) images rotated and stacked into a single quadrant to improve the signal, with the galaxy nucleus at the origin and the galactocentric radius along the $x$-axis. The images have been clipped at the mean background, smoothed, and overlaid with 2, 4, 8, and 12$\sigma$ contours. The X-ray image is clipped at a radius of 8$^{\prime}$ because of the lower signal at the image edges. Both images show an extended, conical, limb-brightened wind profile, with the FUV wind extending farther than the X-ray wind in both the horizontal and vertical directions. \textit{Right}: The X-ray (red) and FUV (cyan) quadrant images are clipped at the mean background + 1$\sigma$ and overlaid. The combination highlights that the X-rays are interior to the FUV emission.} \label{fig:quad_plot} \end{figure*} \begin{figure} \centering \includegraphics[trim=60 380 10 130,clip,width=0.5\textwidth]{quad_prof.pdf} \includegraphics[trim=60 380 10 130,clip,width=0.5\textwidth]{quad_prof2.pdf} \caption{Emission profiles measured from the quadrant stacks (Figure~\ref{fig:quad_plot}), with the best-fit scale heights listed. Scale heights for both components are listed when a double-exponential model is a significantly better fit than a single-exponential model. The data are shown in black (errors in grey) and the best-fit model plotted in blue (FUV) or red (X-rays), with the components plotted as dotted lines. Here we show the FUV vertical ($z$) profile averaged over $R<22$~kpc (\textit{top left}), the X-ray vertical profile averaged over $R<16$~kpc (\textit{top right}), the FUV profile along the wind cone edge as shown in Figure~\ref{fig:quad_plot} (\textit{bottom left}), and the FUV radial profile (\textit{bottom right}). The large scale heights indicate that the wind is indeed very extended. } \label{fig:quad_prof} \end{figure} The structure of the FUV and X-ray emission becomes clearer when the images are stacked into quadrants (Figure~\ref{fig:quad_plot}). We take the galactocentric radius $R$ as the abscissa and the height $z$ above the midplane as the ordinate. Despite the uncertainty about the filaments, the FUV and X-ray emission clearly trace limb-brightened cones. Notably, the edge of the X-ray cone is interior to that of the FUV cone, and the X-ray edge coincides with the brightest part of the FUV limb. This morphology is consistent with the hot wind model from \citet{Heckman1990}, in which the hot wind shocks surrounding gas, and this shocked gas is surrounded by a thin layer of rapidly cooling gas visible in optical and UV emission lines. On the other hand, the morphology disfavors strong radiative cooling of the hot wind fluid itself \citep{Thompson2016}, in which case we would expect strongly peaked X-ray emission below the strongest FUV line emission. We measured scale heights for several types of profiles. These include vertical profiles, radial profiles, and profiles extracted along the edge of the cone (Figure~\ref{fig:quad_prof}).
The vertical profiles are averaged within $R<22$~kpc for the FUV image and $R<16$~kpc for the X-ray image, based on the extent of the base of the wind. A single exponential profile fits the X-ray data well but not the FUV, whereas the sum of two exponential profiles is a good fit to the FUV data. The scale height in the X-rays is $h_X = 5.1\pm0.2$~kpc, which is consistent with one component of the FUV, $h_{\text{FUV},1} = 4.6\pm0.5$~kpc. However, the second component of the FUV, which accounts for 50\% of the flux, has a much larger scale height of $h_{\text{FUV},2} = 18\pm2$~kpc. Meanwhile, the FUV profile along the edge of the wind cone (where the $S/N$ is highest) is even more extended, with scale heights $h_{\text{FUV},1} = 7.5\pm0.6$~kpc and $h_{\text{FUV},2} = 31\pm18$~kpc, while the azimuthally averaged radial profile has $h_{\text{FUV}} = 10.6\pm0.6$~kpc. \begin{deluxetable*}{cccccc|ccccc} \tablenum{3} \tabletypesize{\scriptsize} \tablecaption{X-ray Wind Properties} \tablewidth{0pt} \tablehead{ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} \\ \colhead{Height} & \colhead{Height} & \colhead{Area} & \colhead{$kT$} & \colhead{{\sc apec} norm} & \colhead{0.3-10 keV $L_{X}$} & \colhead{$R_2$} & \colhead{$R_1$} & \colhead{$h$} & \colhead{$\bar{n}(Z/Z_{\odot})^{1/2}$} & \colhead{Hot Mass}\\ \colhead{(arcmin)} & \colhead{(kpc)} & \colhead{(arcmin$^2$)} & \colhead{(keV)} & \colhead{($10^{-5}$)} & \colhead{($10^{39}$ erg~s$^{-1}$)} & \colhead{(arcmin)} & \colhead{(arcmin)} & \colhead{(arcmin)} & \colhead{($10^{-3}$ cm$^{-3}$)} & \colhead{($10^6 M_{\odot}$)} } \startdata -3.9 & -21.5 & 12.5 & 0.28$\pm$0.12 & $<$0.2 & $<$0.2 & 3.1$\pm$0.3 & 2.6$\pm$0.8 & 1.6 & $<$0.15 & $<$5\\ -2.4 & -13.2 & 9.3 & 0.28$\pm$0.02 & 0.80$^{+0.06}_{-0.09}$ & 0.7$\pm$0.1 & 1.9$\pm$0.2 & 1.5$\pm$0.4 & 1.2 & 0.6$\pm$0.2 & 8$\pm$3 \\ -1.2 & -6.6 & 9.5 & 0.36$\pm$0.02 & 2.4$\pm$0.1 & 2.4$\pm$0.1 & 1.4$\pm$0.2 & 1.1$\pm$0.4 & 1.2 & 1.4$\pm$0.3 & 10$\pm$3 \\ 0 & 0 & 8.0 & 0.40$\pm$0.01 & 6.8$\pm$0.4 & 4.8$\pm$0.3 & 1.1$\pm$0.3 & 0.5$\pm$0.2 & 1.2 & 2.0$\pm$0.6 & 18$\pm$7 \\ 1.2 & 6.6 & 9.5 & 0.33$\pm$0.03 & 2.2$^{+0.2}_{-0.1}$ & 2.1$^{+0.2}_{-0.1}$ & 1.4$\pm$0.2 & 1.2$\pm$0.4 & 1.2 & 1.3$\pm$0.3 & 9$\pm$2 \\ 2.4 & 13.2 & 9.7 & 0.29$\pm$0.02 & 0.9$\pm$0.1 & 0.8$\pm$0.1 & 2.0$\pm$0.2 & 1.5$\pm$0.3 & 1.2 & 0.6$\pm$0.2 & 7$\pm$2 \\ 3.9 & 21.5 & 13.8 & 0.27$\pm$0.02 & 0.5$\pm$0.1 & 0.4$\pm$0.1 & 3.5$\pm$0.3 & 2.1$\pm$0.6 & 1.6 & 0.3$\pm$0.1 & 11$\pm$5 \enddata \tablecomments{\label{table:xrayprop} Cols. (1-2) Average box height above midplane (3) Unmasked area for spectral extraction (4-6) Best parameters for an isothermal model (7-9) Best parameters for the average cylindrical shell at each height (10) Mean density in each shell. The errors assume a 30\% error in $R_1$, except for the central aperture where we assume a 50\% error because source masking makes it more difficult to determine ($R_2$ is well defined and $h$ is fixed). The error bar for $kT$ in the case of the density upper limit corresponds to the allowed temperatures if the source is present. } \end{deluxetable*} \section{X-ray Properties} \label{section.xrays} The soft X-rays provide further insight into the wind through the resolved temperature and density measurements. We measured these from spectra extracted in apertures parallel to the midplane (Figure~\ref{fig:tprof}).
The vertical sizes of the apertures are based on the need to obtain at least several hundred source counts to measure precise temperatures through most of the wind. There is more than enough signal at lower heights, where both soft and hard X-rays have been previously studied \citep{Strickland2004,JiangtaoLi2019}, to perform finer gridding, but we defer a more complete analysis of the temperature structure to a future paper (Yukita et al., in prep.). \begin{figure} \centering \vspace{0.25cm} \includegraphics[width=0.47\textwidth]{temp_profile_rgions.pdf} \includegraphics[trim=50 380 10 100,clip,width=0.47\textwidth]{denprof.pdf} \caption{\textit{Top Left}: The temperature and density were inferred from model fits to spectra extracted from the blue rectangular regions, shown overlaid on the soft X-ray image with point source masks marked in black. \textit{Top Right}: The bright superbubble is unresolved with XMM and its mask (dashed circle) covers much of the central region, so we measured the flux and emission measure from the \textit{Chandra}\ image where the bubble (solid circle) is resolved (see text). \textit{Bottom Left}: The temperature profile measured from these spectra, based on fitting isothermal plasma models to the data. The error bars are the 90\% credible interval. \textit{Bottom Right}: Mean density inferred from the emission measure and a cylindrical wind model (see text). The red and blue lines are the best-fitting power-law and exponential disk models, respectively, while the dotted line represents a shock compression of the Milky Way halo model \citep{Miller2015} by a factor of two. } \label{fig:tprof} \end{figure} \subsection{Spectral Extraction and Fitting} For each aperture (Figure~\ref{fig:tprof}), we masked point sources and extracted spectra from each XMM observation using the XMM-ESAS software \citep{Snowden2004}. The bright superbubble at the center \citep{Cecil2002} is point-like at the XMM resolution and is also masked. An inspection of the Chandra image shows that the diffuse spectrum from the central aperture predominantly contains flux from the base of the wind, so we include that aperture in the profiles. We also extracted a background spectrum from an annular aperture for each exposure. All spectral fitting was performed with \textit{Xspec} v12.10.1, and the spectra from a given region were fitted jointly rather than using a co-added spectrum. We used the {\sc apec} thermal plasma model with the \citet{Asplund2009} solar abundance table to model hot gas. If the X-rays come from shocked gas, {\sc apec} may not be appropriate as it is based on collisional ionization equilibrium. However, the equilibrium timescales are short relative to both the radiative lifetime and the time for the wind to expand at least to 50~kpc, so the hot gas at the wind edge is likely close to equilibrium. Instead of subtracting a scaled background spectrum, we fitted the background spectrum and used the best-fit model as a fixed component when fitting the source spectra. We adopted a model for the soft X-ray background that includes a thermal component for the Local Hot Bubble and the Galactic hot halo, and a power law with photon index $\Gamma = 1.46$ for the cosmic X-ray background ({\sc phabs(apec+apec+pow)}), in addition to instrumental lines and continua\footnote{described in the ESAS manual and particular to each observation; see https://heasarc.gsfc.nasa.gov/docs/xmm/esas/cookbook/xmm-esas.html}. 
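For illustration, the source-model setup can be written with PyXspec as below; this is a schematic sketch (the file name is a placeholder, the ESAS instrumental components are omitted, and the choice of fit statistic here is ours):

\begin{verbatim}
# Schematic PyXspec setup for the absorbed isothermal source
# model. File name is a placeholder; ESAS instrumental lines
# and continua are omitted for brevity.
from xspec import Spectrum, Model, Fit, Xset

Xset.abund = "aspl"               # Asplund et al. (2009)

spec = Spectrum("region_mos1.pi") # hypothetical spectrum

m = Model("phabs*apec")
m.phabs.nH = 0.009                # 9e19 cm^-2 (units of 1e22)
m.phabs.nH.frozen = True
m.apec.Abundanc = 1.0             # metallicity fixed at solar
m.apec.Abundanc.frozen = True

Fit.statMethod = "cstat"          # our choice for this sketch
Fit.perform()
print(m.apec.kT.values[0], m.apec.norm.values[0])
\end{verbatim}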
We obtained a good fit with typical values ($kT_1 = 0.1$~keV and $kT_2 \approx 0.25$~keV). The source model is an absorbed, isothermal plasma ({\sc phabs*apec}), where the column density of the absorbing material is fixed at the Galactic value $N_{\text{H}} = 9\times 10^{19}$~cm$^{-2}$. We fix the metallicity ($Z/Z_{\odot}$) at the solar value for these fits, but the signal is sufficient to measure the O/Fe ratio (although not the abundances of individual elements), as explored below. \begin{figure} \centering \includegraphics[trim=70 380 50 70,clip,width=0.47\textwidth]{mihoko_abund.eps} \caption{ O/Fe number ratio as a function of height above the midplane. The O and Fe values are measured with an isothermal {\sc vapec} model, adopting \citet{Asplund2009} solar abundances. The outermost bins are not shown because the signal in the spectrum is too low to measure abundances. The expected ratios for Type~II SNe from \citet{Nomoto2006} and the Sun \citep{Anders1989, Asplund2009} are shown for reference. The outflowing material is more consistent with enrichment from Type~II SNe, in agreement with a prior Suzaku study by \citet{Konami2012}. } \label{fig:abundprof} \end{figure} \subsection{Temperature, Density, and Abundance Profiles} The best-fit temperatures, {\sc apec} normalizations, and luminosities are given in Table~\ref{table:xrayprop}. The X-ray surface brightness is asymmetric from east to west across the disk, possibly because of the interaction of the wind with companion galaxies on the west side. There is insufficient signal in the easternmost bin to robustly measure the temperature, although the wind is formally detected. The {\sc apec} normalization encodes the emission measure, $\text{EM} \propto \int n_e n_{\text{H}} dV$. To obtain $\bar{n}$ we assume an emitting volume and that $n_e = n_{\text{H}}$. Based on the limb-brightened morphology, we fitted the surface brightness in each aperture except at the midplane with a simple cylindrical shell model of constant density to obtain the emitting volume. This model yields the average inner and outer radii $R_1$ and $R_2$, while the height $h$ is that of the aperture. Since we have fixed the metallicity at Solar, we derive the quantity $\bar{n} (Z/Z_{\odot})^{1/2}$. A summary of the measured properties is given in Table~\ref{table:xrayprop} and the temperature and density profiles are shown in Figure~\ref{fig:tprof}. The central aperture is a special case because the XMM superbubble mask covers much of the emission of interest (Figure~\ref{fig:tprof}). Thus, to estimate the density we first obtain $R_1$, $R_2$, and the temperature from the XMM data. Then, we measure the $0.3-2$~keV Chandra count rate in the region using a more appropriately sized mask for the superbubble, whose emission is associated with radio lobes and is not part of the larger X-shaped structure \citep{Irwin2003}. The Chandra response files are used to convert this count rate into an emission measure, assuming the best-fit XMM {\sc apec} temperature, and the emission measure is converted to density using the same assumptions as for the other apertures. The profiles indicate a decline in temperature as well as density. The temperature appears to flatten near $kT \sim 0.28$~keV, but in the outer regions the signal is low and the fit may be biased by the Galactic background, which has a similar $kT \sim 0.25$~keV. The decline of the density is more certain.
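For concreteness, the step from {\sc apec} normalization to mean density can be sketched as follows (schematic only: the masking and geometric corrections applied to the values in Table~\ref{table:xrayprop} are not reproduced here):

\begin{verbatim}
# Sketch: mean density from an APEC norm and a cylindrical-
# shell volume, assuming n_e = n_H and the APEC convention
#   norm = 1e-14 / (4 pi D^2) * Int n_e n_H dV   (z ~ 0).
import numpy as np

KPC = 3.086e21                  # cm
D = 19e3 * KPC                  # 19 Mpc in cm

def mean_density(norm, r1_kpc, r2_kpc, h_kpc):
    vol = np.pi * (r2_kpc**2 - r1_kpc**2) * h_kpc * KPC**3
    em = norm * 4.0 * np.pi * D**2 / 1e-14  # Int n_e n_H dV
    # Returns nbar*(Z/Zsun)^(1/2), since Z was fixed at solar.
    return np.sqrt(em / vol)

# Illustrative call for one aperture; the tabulated densities
# include corrections not sketched here.
print(mean_density(2.2e-5, 6.6, 7.7, 6.6))
\end{verbatim}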
The inferred density is weakly sensitive to the volume assumed in the shell model, but $R_2$ is tightly constrained to within 1.5~kpc. $R_1$ is less constrained, but a filled cylindrical model is strongly ruled out by the limb brightening, and the uncertainty in the quantity $R_2^2-R_1^2$ is dominated by that in $R_2$. The density profile cannot be described by a power-law model, where $n(z) \propto z^{-\alpha}$ (Figure~\ref{fig:tprof}). The best-fit $\alpha = 0.26\pm0.05$ is clearly a poor fit to the data. An exponential profile, where $n(z) \propto e^{-z/h}$, is a good match with a scale height $h=10\pm3$~kpc. This is consistent with the $\approx$5~kpc scale height from the surface brightness profile, which scales as $n^2$. For a symmetric biconical wind with an opening angle of 30$^{\circ}$, the best-fit profiles imply a total mass within 30~kpc of $\approx 7\times 10^7 M_{\odot}$ for the X-ray-emitting gas. Finally, Figure~\ref{fig:abundprof} shows the O/Fe ratio as a function of height above the midplane. This ratio was determined by fitting the spectra in each box with an isothermal {\sc vapec} model in which the abundances of the $\alpha$ elements and Fe-group elements were allowed to float relative to the solar values. The number ratio was then calculated using the solar abundance table, for which we adopted that of \citet{Asplund2009}. We then compared the values to the predictions for Type~II SNe enrichment \citep{Nomoto2006} and the solar abundance. The measured values are a better match to the Type~II SNe, indicating that the outflow is enriched and powered by these SNe. There are several caveats. First, for the temperature of NGC~3079 and the energy resolution of the MOS and pn, the oxygen abundance is the best constrained, and the difference between \citet{Asplund2009} and other commonly used tables is significant for oxygen. However, using another table with {\sc vapec} \citep[such as][]{Anders1989} would tend to increase the O/Fe ratio rather than bring it closer to the Solar value. Second, the signal in the spectra is too low to rule out solar abundances for regions well above the disk. Indeed, the outermost bins lack the signal to meaningfully constrain O/Fe and are not shown. Third, if the wind has a complex temperature structure then the abundance values are likely biased. We will explore these issues in a forthcoming, detailed analysis of the X-ray spectra (Yukita et al., in prep.). \begin{figure*} \centering \includegraphics[height=5.7cm]{Ha_1-5Myr_small.png} \includegraphics[height=5.7cm]{Sxray_1-5Myr_small.png} \includegraphics[height=5.7cm]{Ha_Sxray_1-5Myr_small.png} \caption{Projected emission from a simulated galactic wind from \citet{Tanner2016}. The three panels show H$\alpha$ emission (left), which has the same morphology as FUV line emission, soft X-ray emission (center) from the shocked wind, and the superposition of X-rays on H$\alpha$ (right). Note that the X-rays are somewhat edge-brightened, mostly ``fill'' the wind cone at lower latitudes, and are interior to the strongly edge-brightened H$\alpha$ emission, similar to what is seen in NGC~3079. Each box is 1~kpc on a side, but see text for discussion.} \label{fig:simulations} \vspace{0.25cm} \end{figure*} \section{The Extended Wind} \label{section.interpretation} NGC~3079 is a well-known superwind galaxy, and the features described above are new only in their extent, as they continue the X-shaped structure previously reported \citep{Heckman1990,Cecil2001}.
Thus, we conclude that the X-ray, and especially FUV, emission are part of an extended galactic wind on a scale similar to that seen in the galaxy Makani by \citet{Rupke2019}. This extended emission provides clues to the formation of the wind, for which there are multiple theories \citep[reviewed by][]{Zhang2018}. In particular, the line emission (primarily H$\alpha$) was interpreted by \citet{Heckman1990} in the context of a hot wind. In this case, superheated gas ($T>10^8$~K) in the starburst nucleus adiabatically expands into the ambient medium \citep{Chevalier1985,Strickland2000}. This drives a wind with velocity $v_{\text{wind}} > 1000$~km~s$^{-1}$, with some gas at lower and higher velocities. For a truly adiabatic wind, the asymptotic $v_{\text{wind}} > 3000$~km~s$^{-1}$. The actual velocity will depend on the wind evolution, which remains the subject of active research \citep[see][]{Zhang2018}. At first, the wind inflates a bubble bounded by a forward shock; because the bubble expands more slowly than the free wind fluid, the wind passes through a termination (wind) shock. The adiabatically cooled wind fluid is reheated at this shock to $T \sim 10^6-10^7$~K, so the soft X-rays come from the sheath of shocked wind fluid around the ``free'' wind. Outside of the wind is the shocked CGM, which emits primarily optical and UV lines. Mixing of the shocked wind and shocked ambient medium through Rayleigh-Taylor instabilities also heats the cooler gas and powers the optical and UV emission. The morphology of the extended wind in NGC~3079 is in qualitative agreement with this structure, with both the X-rays and FUV forming edge-brightened cones and the X-rays interior to the FUV. We compared the FUV and X-ray morphology to more recent numerical simulations of a hot wind from \citet{Tanner2016}, which follows and agrees with the earlier work of \citet{cooper2008} that pioneered fractal gas distributions. One such model is shown in Figure~\ref{fig:simulations}, and the X-ray and H$\alpha$ (a proxy for FUV) emission look very similar to what we find in NGC~3079, as shown in Figure~\ref{fig:quad_plot}. The authors used the magnetohydrodynamic Athena code \citep{Stone2008} with radiative cooling and photoelectric heating prescriptions. The model seen in Figure~\ref{fig:simulations} shows a projection from 1.5~Myr after the start of the starburst with an SFR of 1.7~$M_{\odot}$~yr$^{-1}$ and an energy injection rate of $7.5\times 10^{41}$~erg~s$^{-1}$. This corresponds to a central mass-loading of $\beta \approx 2$, while the SNe thermalization efficiency is fixed at $\epsilon \equiv 1$. While this model uses parameters similar to M82 and not NGC~3079, the similarity of the structure to what we see in NGC~3079 is encouraging. We note that the \citet{Tanner2016} simulations include cool, molecular gas, which could, in principle, affect the partitioning of wind energy relative to the \citet{Heckman1990} hot wind model. However, an inspection of the simulations presented here shows that the hot gas contains the bulk of the energy and that the inclusion of molecular gas represents a small perturbation to the energy balance, even though the cold phase contains a substantial fraction of the mass. Thus, we investigate the wind using the X-ray and FUV data \textit{as if it were produced by a hot wind}. This enables us both to assess whether the data are indeed consistent with a hot wind (i.e., whether interpreting the data in this model leads to sensible results) and to gauge the impact of this wind on the galaxy and its environs.
The values we can infer in this framework are the FUV/X-ray luminosity ratio, the shock velocity (a lower limit on the wind velocity), and the mass-loading factor. We defer a more detailed study of the thermal properties of the wind to a separate paper (Yukita et al., in prep.). First, we note that the same basic wind structure has also been proposed for AGN-driven winds \citep{Faucher2012}, and NGC~3079 hosts a Compton-thick AGN \citep{Iyomoto2001} with an absorption-corrected $2-10$~keV $L_X \approx 10^{42}$~erg~s$^{-1}$ \citep{LaCaria2019}. Assuming a bolometric correction of 15-25 from \citet{Vasudevan2007}, the true luminosity is a few$\times 10^{43}$~erg~s$^{-1}$, equivalent to the mechanical power of about one SN ($10^{51}$~erg) per year. This is sufficient to drive a galactic wind. The XMM and \textit{GALEX}\ data alone are insufficient to distinguish between a starburst or AGN origin \citep[see][for a discussion of the wind at smaller radii]{Sebastian2019}. Hereafter, we refer to the hot wind model without regard to the initial power source, except where noted. \subsection{Luminosity Ratio} The FUV/X-ray luminosity ratio is consistent with the hot wind model. The 0.3--10~keV X-ray luminosity associated with the wind is $1.1\times 10^{40}$~erg~s$^{-1}$ (Table~\ref{table:xrayprop}), but about half of this comes from the disk region, where we cannot isolate the wind FUV emission due to extinction and scattered light. The extraplanar X-ray luminosity is about $6\times 10^{39}$~erg~s$^{-1}$, while the FUV filament luminosity is 23 times higher at $1.4\times 10^{41}$~erg~s$^{-1}$, after correcting for the PSF wings of the galaxy. The disparity increases with height. The luminosity ratio should be high in the hot wind model \citep[up to 100;][]{Tanner2016}, as shock-heating and mixing can heat entrained clouds and the CGM to $T \sim 10^5$~K, where the C~{\sc iv} ion fraction is highest, without significantly depleting the thermal energy of the shocked wind. This is also consistent with the conclusion from \citet{Borthakur2013} that FUV absorption around starburst galaxies is from collisionally ionized gas. Without spectroscopic information we cannot strongly rule out ionization from the AGN or starburst, but both are disfavored here. A quasar or more modest AGN outburst could easily ionize the extended wind. For example, \citet{Kreimeyer2013} report giant ionization cones around a quasar, and \citet{Bland-Hawthorn2019} show that activity from Sgr A* likely ionized material in the Magellanic Stream, 75~kpc away from the Galaxy. However, measurements in the prominent superbubble close to the disk \citep{Cecil2001} find an [\ion{O}{3}]/H$\alpha$ ratio of less than 0.4, which is smaller than expected for an AGN \citep{Robitaille2007} and more consistent with a starburst. The AGN is also low-luminosity and possibly obscured, so ionization of the extended filaments appears to require a more luminous episode within the past 0.2~Myr, based on the light-travel time to the edge of the nebula. More data are needed to confirm or rule out this possibility. Despite the soft ionizing spectrum implied by the superbubble line ratios, the starburst is a poor candidate for ionizing the extended nebula. A starburst relies on SNe to power the wind, but the ionizing flux from a given star cluster is highest at very early times, before many SNe have exploded. Thus, the starburst would need to continually generate massive, young star clusters and sustain a nuclear wind.
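Before turning to the velocities, a quick numerical check of the luminosities used in this section (all values as quoted above):

\begin{verbatim}
# Back-of-envelope check of the quoted luminosity ratios.
SEC_PER_YR = 3.156e7

L_x_extraplanar = 6e39    # erg/s, extraplanar 0.3-10 keV
L_fuv = 1.4e41            # erg/s, FUV filaments
print(L_fuv / L_x_extraplanar)      # ~23

L_x_agn = 1e42            # absorption-corrected 2-10 keV
print(20 * L_x_agn)                 # a few x 1e43 erg/s
print(1e51 / SEC_PER_YR)            # one 1e51-erg SN per yr
                                    # ~ 3e43 erg/s
\end{verbatim}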
\subsection{Shock Velocities} In the hot wind paradigm, the wind speed can be estimated from the temperature profile. If the wind rams into adiabatically cooled wind fluid at the wind head or into entrained cool gas, then the shock will be strong, with $kT \approx \tfrac{3}{16}\mu m_p v_{\text{wind}}^2$. The temperature profile in NGC~3079 then implies that $v_{\text{wind}}$ decreases from about 570~km~s$^{-1}$ near the disk to 475~km~s$^{-1}$ at 20~kpc above the disk. The outermost points hint that the temperature profile flattens. Alternatively, the X-ray emission could be a mixture of shocked wind fluid and shocked ambient CGM. Galaxies like NGC~3079 are believed to have an extensive hot CGM with a temperature near or above the virial temperature \citep{Bregman2018}, $kT_{\text{vir}} \approx \tfrac{5}{3} \mu m_p G M_{\text{vir}}/r_{\text{vir}}$, where $M_{\text{vir}} \sim 10^{12} M_{\odot}$ is the mass enclosed in the virial radius and $r_{\text{vir}} \sim 250$~kpc for an $L_*$ galaxy like NGC~3079. This leads to $kT_{\text{vir}} \sim 0.18$~keV, or $2\times 10^6$~K. In this case, the strong shock limit for the Rankine-Hugoniot shock jump conditions may not apply, since even a 1000~km~s$^{-1}$ wind (near the predicted maximum for the adiabatic case) would only shock the CGM at Mach~4. Adopting $0.2$~keV as a rough estimate for a putative hot halo, the sound speed is $c_s \approx 230$~km~s$^{-1}$ and the temperature profile (Figure~\ref{fig:tprof}) implies Mach numbers of 2.1 to 1.5, or $v_{\text{shock}} \sim 350-480$~km~s$^{-1}$. This is about 20\% smaller than for the strongly shocked case. The density profile is independent of the temperature and provides a constraint if we again assume a form for the hot halo. NGC~3079 has a mass similar to that of the Milky Way ($v_{\text{circ}} \approx 208$~km~s$^{-1}$ compared to 240~km~s$^{-1}$ for the Galaxy), so we consider the hot CGM model for the Milky Way from \citet{Miller2015}, which is a modified $\beta$ model with $n(r) = n_0 (r/r_c)^{-3\beta_{\text{CGM}}}$ for $r\gg r_c$, where $n_0 r_c^{3\beta_{\text{CGM}}} = 0.0135$~cm$^{-3}$~kpc$^{3\beta_{\text{CGM}}}$ and $\beta_{\text{CGM}} \approx 0.5$. The core radius is not independently determined because of confusion towards the Galactic center, so we fixed $\beta_{\text{CGM}} \equiv 0.5$ and determined the $r_c$ and compression factor (Mach number) that best match the NGC~3079 density profile to the Milky Way density profile. We find a good fit (Figure~\ref{fig:tprof}) for $r_c \sim 6$~kpc and a compression factor of $\approx 2.0$, leading to an average Mach~2.4 shock. Of course, both estimates are speculative given the absence of measurements of a hot halo around NGC~3079. Nevertheless, should such a hot halo exist, one could still infer the wind velocity from $kT$ as though the X-rays in the wind come from shocked gas. This puts a lower limit on the wind velocity of about 350~km~s$^{-1}$. Finally, we note that while the \citet{Miller2015} model provides a good match to X-ray emission- and absorption-line data from around the Galaxy, there are several competing models that can qualitatively change the expected wind morphology and shock speed. For example, \citet{Faerman2017} use a combination of absorption and emission data to infer a much more massive halo, while \citet{Gupta2012} and \citet{Nicastro2016} use absorption data sets to describe hot halos that contain upwards of $10^{11} M_{\odot}$.
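A hedged numerical sketch of the velocity estimates above, assuming a mean molecular weight $\mu \approx 0.61$ (our choice; the text does not quote one):

\begin{verbatim}
# Sketch: wind/shock speeds from the measured temperatures.
import numpy as np

KEV = 1.602e-9        # erg
MP = 1.673e-24        # g
MU = 0.61             # assumed mean molecular weight

def v_strong_shock(kT_keV):
    # Strong-shock jump: kT = (3/16) mu m_p v^2
    return np.sqrt(16 * kT_keV * KEV / (3 * MU * MP)) / 1e5

def sound_speed(kT_keV, gamma=5.0 / 3.0):
    return np.sqrt(gamma * kT_keV * KEV / (MU * MP)) / 1e5

print(v_strong_shock(0.40))  # ~570-580 km/s near the disk
print(v_strong_shock(0.27))  # ~475 km/s at ~20 kpc
print(sound_speed(0.20))     # ~230 km/s for a 0.2 keV halo
\end{verbatim}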
If we perform the same exercise as above for the \citet{Faerman2017} model, which has the advantage of being analytical, we find that it overpredicts the observed density above about 10~kpc while having a similar temperature. If such a massive halo existed around NGC~3079, the high density would likely crush the wind, perhaps turning it into a bubbling plume. However, we note that this does not constitute evidence for or against any Milky Way halo model \citep[for a discussion of those models, see][]{Bregman2018}. \subsection{Mass Loading and Energetics} We can estimate the mass-loading factor for gas that reaches high latitudes (here defined as $z>2$~kpc) by comparing the estimated $5\times 10^7 M_{\odot}$ in X-ray emitting gas above this height (Table~\ref{table:xrayprop}) and the size of the wind to the SFR, assuming a starburst origin. The 60~kpc size of the wind and the inferred wind velocity of $\sim$500~km~s$^{-1}$ place a lower bound on the lifetime of 120~Myr (if the X-rays trace shocked gas, the wind cone expands more slowly). Since the average density of hot gas implies a cooling time of hundreds of Myr \citep[assuming Solar metallicity and the collisional equilibrium cooling functions from][]{Gnat2007}, the X-rays should trace the energy deposited by the shocked wind over its history. The 120~Myr limit implies an outflow rate to high latitudes of $\dot{M} < 0.4 M_{\odot}$~yr$^{-1}$. If we further assume that the current SFR$\approx$2.6~$M_{\odot}$~yr$^{-1}$ \citep{Yamagishi2010} is the average over that period and that the central mass-loading factor is $\beta = 1$, then the high-latitude $\beta_{\text{hl}} < 0.2$. Note that even if we assume a younger wind (60~Myr at a terminal velocity of 1000~km~s$^{-1}$), $\beta_{\text{hl}} < 0.4$. However, $\beta_{\text{hl}}$ will be increased by the mass in the swept-up shell that is accelerated outwards. We do not know the photoionization fraction, so we have a poor constraint on the mass represented by the FUV emission. If all of the FUV-emitting material is shock-heated and accelerated outwards, then it is possible for $\beta_{\text{hl}}$ to approach unity. The mass and inferred shock speed lead to a kinetic energy of about $1\times 10^{56}$~erg. For a 120~Myr lifetime, the kinetic luminosity of the high-latitude wind is less than $2.7\times 10^{40}$~erg~s$^{-1}$. If we further assume that one SN ($10^{51}$~erg) occurs for every 100~$M_{\odot}$ of stars formed and that the thermalization efficiency is unity, then the average kinetic luminosity of the starburst is $8\times 10^{41}$~erg~s$^{-1}$. This is consistent with a Starburst99 \citep{Leitherer1999} population synthesis model for the same input SFR at solar metallicity. The average wind kinetic luminosity is only 3.4\% of this value. We have not considered the UV contribution, but it will be small: strongly shocked gas transforms roughly half of its kinetic energy into thermal energy, so even with zero photoionization and ten times more mass in warm gas, the contribution will be smaller than that from the X-ray emitting gas. The 120~Myr estimate from assuming a terminal wind velocity of 500~km~s$^{-1}$ is in tension with other estimates of the starburst age. \citet{Yamagishi2010} found that the gas-to-dust ratio is rather high and concluded that NGC~3079 is early in its starburst phase.
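The age, outflow rate, and energy budget above follow from the quoted numbers, within rounding; a minimal sketch (the SN accounting of one $10^{51}$~erg event per 100~$M_{\odot}$ of stars formed is as stated in the text):

\begin{verbatim}
# Back-of-envelope wind age, outflow rate, and energy budget.
KPC = 3.086e21
MSUN = 1.989e33
YR = 3.156e7

age_s = 60 * KPC / 5e7           # 60 kpc at 500 km/s
print(age_s / (1e6 * YR))        # ~120 Myr

print(5e7 / (age_s / YR))        # Mdot ~ 0.4 Msun/yr

e_kin = 0.5 * 5e7 * MSUN * 5e7**2
print(e_kin)                     # ~1e56 erg
print(1e56 / age_s)              # ~2.7e40 erg/s (rounded E)

l_sn = (2.6 / 100) * 1e51 / YR   # one SN per 100 Msun formed
print(l_sn, (1e56 / age_s) / l_sn)  # ~8e41 erg/s; ratio ~3%
\end{verbatim}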
An independent age estimate comes from \citet{Konami2012}, who defined a region with $\alpha$-enhanced abundances (an annulus centered at 4.5~kpc from the nucleus) and used the hydrodynamic model of \citet{Tomisaka1993} to estimate the expansion velocity, and thus the age. They arrived at a velocity of about 450~km~s$^{-1}$, which is consistent with our shock-inferred velocity but leads to a starburst age of 10~Myr. This is consistent with the age of the well-known superbubble \citep{Cecil2001}. There are a few ways to reconcile the two ages. The 120~Myr value assumes that the overall wind terminal velocity is 500~km~s$^{-1}$, which is substantially below the 3000~km~s$^{-1}$ adiabatic solution or the 1000~km~s$^{-1}$ from \citet{Strickland2000}. Gas that remains this fast to large radii will be hard to see and must not strongly interact with the surrounding medium, or it would dissipate its kinetic energy. Thus, there may be a very fast spine to the wind with shocks at the edges from lower velocity components. However, for a maximal expansion velocity of 3000~km~s$^{-1}$, the starburst or AGN outburst would still need to be at least 20~Myr old. Alternatively, the extended structure may be a relic from a prior outburst. If it were a jetted AGN outburst, we would expect to see low-frequency radio synchrotron emission from aging cosmic rays. The 326~MHz radio continuum maps in \citet{Irwin2003} reveal a large radio halo but no structures close to the scale of the wind described here. Instead, the well-known radio lobes occur on much smaller scales near the disk, and larger extensions that may indicate a prior outburst are about 10~kpc in size. Finally, the shocked wind interpretation that we applied above may be wrong. A different source for the X-rays and FUV would lead to different constraints on the wind speed. Reconciling the young starburst with the large filaments is a puzzle that remains to be solved. \subsection{A Hot Wind?} The FUV and X-ray morphology, FUV/X-ray luminosity ratio, and small $\beta$ at high latitudes are consistent with a fast, non-radiative wind powered by superheated gas. On the other hand, the inferred velocity and kinetic energy are smaller than expected for an adiabatic wind. In such a wind, $v_{\text{wind}} \sim 1000 \epsilon^{1/2} \beta^{-1/2}$~km~s$^{-1}$, where $\epsilon \in [0,1]$ is the SNe thermalization efficiency and $\beta$ is the mass-loading factor. To reconcile this velocity with the $\sim$500~km~s$^{-1}$ implied by the X-ray temperature requires $\epsilon \approx 0.25$ or a large mass-loading factor, $\beta = 4$. For a powerful wind, a large $\beta$ is more likely than a small $\epsilon$, and the high-latitude $\beta_{\text{hl}} < 0.2$ would mean that very little of the mass makes it to high latitudes and the wind ceases to be adiabatic at lower latitudes. This scenario is supported by the density profile, which is inconsistent with the $R^{-2}$ profile (or any power-law model) expected from the adiabatic model. Since the shocked wind is produced by the wind overrunning itself, the density profile from the X-rays should be (to first order) a compressed version of the underlying profile. It would not be surprising for a wind that begins with adiabatic free expansion to leave the adiabatic regime after several kpc, so our main concern is whether inferring $v_{\text{shock}}$ from $kT$ is still valid in this case, i.e., whether the high-latitude wind is still fast and non-radiative.
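Parenthetically, the $\epsilon$ and $\beta$ values quoted above follow from inverting this scaling (a one-line check with the fiducial 1000~km~s$^{-1}$ normalization):
\begin{verbatim}
v_obs, v_fid = 500.0, 1000.0   # km/s; observed vs. fiducial adiabatic speed

beta_needed = (v_fid / v_obs)**2   # required beta if eps = 1  -> 4.0
eps_needed  = (v_obs / v_fid)**2   # required eps if beta = 1  -> 0.25
print(beta_needed, eps_needed)
\end{verbatim}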
\citet{Strickland2000} found that an initially adiabatic wind will not remain so but that X-rays still predominantly come from shocked gas, with some reasonable parameter sets (based on M82) yielding an average wind speed of $v_{\text{wind}} \sim 500$~km~s$^{-1}$. This suggests that $kT$ should roughly map to $v_{\text{wind}}$ even when the adiabatic approximation breaks down. However, there are two more problems for the hot wind. First, the density profile suggests that the FUV and X-ray emission are coming from shocked CGM rather than shocked wind, or that the shocked wind contributes little mass and mixes efficiently with the CGM. Secondly, the kinetic energy implied by the velocity and density places an upper bound on the kinetic luminosity of $\dot{E}<2.7\times 10^{40}$~erg~s$^{-1}$, which is less than 3.4\% of the total starburst kinetic luminosity. In contrast, hot wind models hold that most of the wind energy is in hot gas. In summary, the morphology and luminosities are consistent with a fast, non-radiative wind that shock heats the surroundings, but that carries only a small fraction of the starburst or AGN energy, likely due to carrying a small amount of mass. In this case, the remainder of the energy would accelerate cooler gas at lower latitudes in a manner akin to a galactic fountain. Alternatively, if a collimated wind propagates far beyond the 60~kpc limit to the FUV emission, then the forward shock may effectively cease to exist in the tenuous CGM and the X-rays could represent only a small fraction of the energy carried by the wind. This would be the case if there is a 3000~km~s$^{-1}$ component that persists to large radii. \subsection{Impact} Regardless of the nature of the wind, we can draw several conclusions about its impact on the galaxy. First, the $5\times 10^7 M_{\odot}$ in hot, high-latitude gas implies that the wind is not effective at removing mass from the galaxy. If the starburst is self-limiting due to feedback, the heating and recycling occur at radii of at most a few kpc. The masses and limits from above lead to a rate of $<$1~$M_{\odot}$~yr$^{-1}$ for removing gas to $\gtrsim$2~kpc \citep[deep radio observations place limits on the neutral hydrogen in the wind;][]{Shafi2015}. The SFR is $\approx 2.6 M_{\odot}$~yr$^{-1}$, so much more gas will be locked in stars than completely removed from the galaxy by the end of the starburst. Secondly, the wind may indeed heat the intergalactic medium, assuming that $kT$ maps to $v_{\text{shock}}$. At 500~km~s$^{-1}$ between 1 and 30~kpc from the disk, the wind very likely exceeds the escape velocity. A conservative estimate for the escape velocity within the disk, $v_{\text{esc}} \sim 3 v_{\text{circ}} = 625$~km~s$^{-1}$, implies a lower value, $v_{\text{esc}} < 550$~km~s$^{-1}$, at a height of 1~kpc for an exponential disk model. Most of the X-rays from the central bin come from at least this height (in large part due to absorption in the edge-on disk). At larger heights, a disk model may not be appropriate. NGC~3079 has a similar stellar mass and $v_{\text{circ}}$ to the Milky Way, so if we adopt the RAVE Milky Way mass \citep{Piffl2014} and an NFW profile, the escape velocity at 20~kpc is $250-300$~km~s$^{-1}$. Hence, the $<$1~$M_{\odot}$~yr$^{-1}$ expelled to $>$2~kpc from the disk is also the limit on gas unbound from the galaxy.
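The $250-300$~km~s$^{-1}$ figure can be checked to order of magnitude with a point-mass approximation (a sketch only; the enclosed mass of $2\times10^{11}\,M_\odot$ within 20~kpc is an assumed Milky-Way-like value standing in for the RAVE-calibrated NFW profile used in the text):
\begin{verbatim}
import numpy as np

G_N, M_SUN, KPC = 6.674e-8, 1.989e33, 3.086e21   # cgs

def v_esc(M_enc_msun, r_kpc):
    # Point-mass approximation: v_esc = sqrt(2 G M(<r) / r); km/s
    return np.sqrt(2 * G_N * M_enc_msun * M_SUN / (r_kpc * KPC)) / 1e5

# ~2e11 Msun within 20 kpc is an assumed, Milky-Way-like enclosed mass
print(f"v_esc(20 kpc) ~ {v_esc(2e11, 20):.0f} km/s")   # ~290 km/s
\end{verbatim}
A $\sim$500~km~s$^{-1}$ wind at this height therefore comfortably exceeds the escape speed under these assumptions.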
Thirdly, the wind has a strong impact on the CGM. The density profile is incompatible with the adiabatic approximation ($n(r) \propto R^{-2}$), and this suggests that many of the X-rays come from shock-compressed CGM or entrained clouds rather than strictly shocked wind. The 10~kpc scale height of the best-fit exponential profile suggests that it is mostly CGM, since most cold ISM is not expelled to high latitudes. A scale height of 10~kpc is rather large for a relaxed, disk-like atmosphere around a galaxy, but the extended CGM may follow a spherical distribution like the $\beta$ model, in which $n(r) \propto (1+(r/r_c)^2)^{-3\beta_{\text{CGM}}/2}$. This is a good fit to the data for $r_c \sim 6$~kpc when $\beta_{\text{CGM}} \equiv 0.5$ (Figure~\ref{fig:tprof}) regardless of whether the working surface is warm or hot CGM. If the working surface is warm CGM, then only a small fraction of the mass is heated, but all of the CGM traced by the FUV is compressed, which may trigger condensation and infall of clouds. On the other hand, if the working surface is hot CGM, then the kinetic luminosity of $\dot{E}<2.7\times 10^{40}$~erg~s$^{-1}$ implied by the hot gas is likely substantially higher than the few$\times 10^{39}$~erg~s$^{-1}$ radiative luminosity of a normal hot halo in an $L_*$ galaxy \citep{JiangtaoLi2013} and can prevent cooling. Even if the leading edge of the wind expands 10~times more slowly than its internal speed, the high-latitude mechanical luminosity is more than enough to balance radiative cooling. The other major impact on the hot CGM is through displacement. Although the opening angle is only about 30$^{\circ}$, the hot mass in the wind cone is similar to that expected from the \citet{Miller2015} Milky Way model within a radius of about 8~kpc. Thus, as the wind evolves and eventually dissipates, the hot and expanding cone will continue to stir up the CGM for a long time. The wind also likely affects the companion galaxy, NGC~3073. \citet{Shafi2015} argued that an ambient hot density of $1.3\times 10^{-2}$~cm$^{-3}$ is needed to explain the cometary \ion{H}{1} tail of NGC~3073 through ram-pressure stripping due to infall. The hot density profile shows that the hot density around NGC~3073 (beyond 25~kpc from NGC~3079) is at least an order of magnitude too small. In contrast, the ram pressure from $v_{\text{wind}} \gtrsim 500$~km~s$^{-1}$ and a density of $\sim 5\times 10^{-4}$~cm$^{-3}$ is likely, but barely, sufficient to produce the tail based on the arguments in \citet{Irwin1987}. If NGC~3073 is in the middle of the wind, then $v_{\text{wind}}$ could be substantially higher, which would strengthen the case for wind stripping. \section{Summary and Conclusions} \label{section.summary} We report the discovery of FUV emission around NGC~3079 out to at least 60~kpc from the nucleus, with X-rays detected to at least 30~kpc. The FUV and X-ray emission is biconical and edge-brightened, with the X-rays interior to the FUV. We rule out dust scattering as the source of the FUV light, which makes line emission the most likely candidate. Meanwhile, the X-ray spectrum is consistent with line-dominated thermal emission from a plasma near collisional ionization equilibrium. We measured the temperature and density of the hot gas. Both decline with height, with a possible flattening in the temperature beyond 20~kpc at $kT \approx 0.27$~keV. The total X-ray emitting mass is about $5\times 10^7 M_{\odot}$. The extended FUV and X-ray emission connects smoothly to emission already well known at lower latitudes, and is part of a galactic superwind.
The morphology of the FUV and X-ray emission suggests a hot, non-radiative wind in which the emission is produced by shock heating (rather than radiative cooling of upstream wind fluid). Assuming this to be the case, the X-ray temperatures imply wind velocities of $\sim$500~km~s$^{-1}$, which is sufficient to escape the galaxy. However, the density profile and the low mass and kinetic energy in hot gas are inconsistent with a hot wind model, so the nature of the wind must be further explored (Yukita et al., in prep.). The wind carries little mass and less kinetic energy than expected for a hot wind, but will nonetheless significantly heat the CGM and perhaps the IGM. It remains an open question whether winds can escape the galaxy's potential, and how much of their mass and energy does so. If the extended emission in NGC~3079 traces shock-heated gas, then the wind is able to maintain high velocities at least to 20\% of the virial radius, and will encounter less resistance the farther it goes. The extended wind reported here is one of just a handful of highly extended winds, with others including NGC~6240 \citep{Yoshida2016} and Makani \citep{Rupke2019}. NGC~3079 is by far the closest, and presents a good target for deep H$\alpha$ mapping. The serendipitous discovery of the extended filaments in the deep \textit{GALEX}\ data suggests searching for extended emission around other nearby, well-developed starbursts, such as NGC~253 or M82. These galaxies are even closer, so a very wide field is required. Further work is also needed to model the evolution of winds in realistic CGM environments and over long times. This is difficult because the high resolution required to resolve instabilities and mixing makes a large box computationally expensive. However, adaptive mesh refinement and similar techniques for concentrating resolution where it is needed may make these models feasible. \acknowledgments We thank the anonymous referee for a careful and helpful report which improved this paper. M.~Y. gratefully acknowledges support through NASA grant 80NSSC18K0609. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. \bibliographystyle{aasjournal}
\section{Introduction} Fix a prime $p$, and let $G$ be a smooth affine group scheme over $\zz_p$ whose generic fiber is reductive. This paper contributes to the search for what it means to endow a $p$-divisible group with $G$-structure. When $G$ is a classical group coming from a local Shimura datum of EL- or PEL-type, to equip a $p$-divisible group with $G$-structure is to decorate it with additional structures coming from the data which cuts out $G$ inside of some general linear group, such as a polarization or an action by a semisimple algebra. Moduli spaces of $p$-divisible groups with additional structure define Rapoport-Zink formal schemes, whose rigid analytic generic fibers determine local analogs of Shimura varieties. Recently, Scholze and Weinstein \cite{SW2020} have developed a general theory of local Shimura varieties. Unlike in the EL- and PEL-type cases, however, the general theory takes place entirely in the generic fiber and leaves open the question of whether there exist formal schemes which act as integral models. One would expect moduli spaces of $p$-divisible groups with $G$-structure to define integral models in all cases, as they do in the EL- and PEL-type cases. However, due to the lack of a natural tensor product on the category of $p$-divisible groups, the traditional methods for defining $G$-structure do not simply carry over to this case. In particular, any kind of Tannakian approach is not straightforward. In this paper we restrict our focus to the case where the pair $(G,\mu)$ is of Hodge type, i.e., where there is a closed embedding $\eta: G \hookrightarrow \textup{GL}(\Lambda)$ such that the cocharacter $\mu$ remains minuscule after composition with $\eta$. In this case, there are two approaches to endowing $p$-divisible groups with $G$-structure which have enjoyed some success in providing functor-of-points descriptions of Rapoport-Zink formal schemes. In the first approach, one uses the embedding $G \hookrightarrow \textup{GL}(\Lambda)$ to define additional structures on tensor powers of the Dieudonn\'e crystal of the given $p$-divisible group. In the second, one replaces $p$-divisible groups with Zink's displays, which are linear-algebraic objects and therefore more readily equipped with $G$-structure. The main result of this paper is that, at least under certain restrictions on the base ring (see Theorem \ref{thm-1} below), these two approaches are equivalent. Let us describe the two approaches in more detail. If $(G,\mu)$ is of Hodge type, then $G$ is the element-wise stabilizer in $\textup{GL}(\Lambda)$ of a finite collection $\underline{s} = (s_1, \dots, s_r)$ of elements of the total tensor algebra of $\Lambda \oplus \Lambda^\vee$, which we denote by $\Lambda^\otimes$. We call the tuple $\underline{G} = (G,\mu,\Lambda,\eta, \underline{s})$ a local Hodge embedding datum. If $X$ is a $p$-divisible group over a $p$-nilpotent $\zz_p$-algebra $R$, then a crystalline Tate tensor is a morphism of crystals $t: \mathbbm{1} \to \mathbb{D}(X)^\otimes$ over $\Spec R$ which preserves the Hodge filtrations and which is equivariant for the action of the Frobenius, up to isogeny. Here $\mathbbm{1}$ denotes the unit object in the tensor category of crystals of finite locally free $\mathcal{O}_{\Spec{R}/\zz_p}$-modules, and $\mathbb{D}(X)$ denotes the covariant Dieudonn\'e crystal of $X$ as in \cite{BBM1982}.
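For orientation, here is a standard illustration of such a datum (a symplectic sketch; it is not a construction used later in the paper): take $\Lambda = \zz_p^{2g}$ with a perfect alternating form $\psi$, viewed as a tensor $s_1 = \psi \in \Lambda^\vee \otimes \Lambda^\vee \subset \Lambda^\otimes$. Its element-wise stabilizer is
\begin{align*}
\textup{Sp}(\Lambda,\psi) = \{ g \in \textup{GL}(\Lambda) \mid \psi(gx,gy) = \psi(x,y) \text{ for all } x,y \in \Lambda \},
\end{align*}
so $(\textup{Sp}(\Lambda,\psi), \mu, \Lambda, \eta, (\psi))$ is a local Hodge embedding datum for any cocharacter $\mu$ of $\textup{Sp}(\Lambda,\psi)$ whose composition with the inclusion $\eta$ is minuscule, and the corresponding crystalline Tate tensor plays the role of a polarization, as in the PEL-type setting.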
A $p$-divisible group with $(\underline{s},\mu)$-structure over $R$ is a pair $(X,\underline{t})$ consisting of a $p$-divisible group $X$ over $\Spec R$ whose Hodge filtration is \'etale-locally determined by $\mu$, and a collection $\underline{t} = (t_1, \dots, t_r)$ of crystalline Tate tensors which are fppf-locally identified with $\underline{s}$ (see Definition \ref{def-smu}). The main theorem of \cite{Kim2018} (see also \cite{HP2017}) states that, in this situation, a Rapoport-Zink formal scheme can be defined which is roughly a moduli space of $p$-divisible groups with $(\underline{s},\mu)$-structure. On the other hand, the idea of using group-theoretic analogs of Zink's displays to define Rapoport-Zink spaces originally appears in \cite{BP2017}. There, a theory of $(G,\mu)$-displays is developed for pairs $(G,\mu)$ such that $G$ is reductive and $\mu$ is minuscule, and the theory is used to give a purely group-theoretic definition of Rapoport-Zink formal schemes of Hodge type. Subsequently, Lau generalized the theory of $(G,\mu)$-displays \cite{Lau2018}, and an equivalent Tannakian framework was developed in the author's previous paper \cite{Daniels2019}. Denote by $\textup{Disp}(\underline{W}(R))$ the category of higher displays over the Witt frame for $R$ as in \cite{Lau2018}. In the Tannakian framework, we say a $(G,\mu)$-display over $\underline{W}(R)$ is an exact tensor functor $\textup{Rep}_{\zz_p}G \to \textup{Disp}(\underline{W}(R))$ such that, for every representation, the structure of the corresponding display is fpqc-locally governed by the cocharacter $\mu$ (see Definitions \ref{def-typemu} and \ref{def-gmu}). This formulation is essential to the results of this paper, as it allows for a close connection with Zink's original theory of displays \cite{Zink2002}, and therefore also with $p$-divisible groups. When $G = \textup{GL}_h$ and $\mu = \mu_{d,h}$ is the cocharacter $t \mapsto (1^{(d)},t^{(h-d)})$, $(G,\mu)$-displays are nothing but Zink displays of height $h$ and dimension $d$. When $(G,\mu)$ is of Hodge type, the embedding $\eta: G \hookrightarrow \textup{GL}(\Lambda)$ induces a functor from $(G,\mu)$-displays to Zink displays, and in this case we say a $(G,\mu)$-display is nilpotent with respect to $\eta$ if the corresponding Zink display is nilpotent in the sense of \cite{Zink2002}. The following is the main result of this paper, see Proposition \ref{prop-fullyfaithful} and Theorem \ref{mainthm}. \begin{introthm}\label{thm-1} Let $R$ be a $p$-nilpotent $\zz_p$-algebra. There is a fully faithful functor \begin{align*} \textup{BT}_{\underline{G},R}: \left( \begin{array}{c} \textup{$(G,\mu)$-displays over $\underline{W}(R)$} \\ \textup{which are nilpotent with respect to $\eta$} \end{array} \right) \to \left( \begin{array}{c} \textup{formal $p$-divisible groups over $R$} \\ \textup{with $(\underline{s},\mu)$-structure} \end{array} \right). \end{align*} If $R/pR$ has a $p$-basis \'etale locally, then $\textup{BT}_{\underline{G},R}$ is an equivalence of categories. \end{introthm} In particular, Theorem \ref{thm-1} applies when $R$ is a field of characteristic $p$, or when $R/pR$ is regular, Noetherian, and $F$-finite (the latter by \cite[Lem. 2.1]{Lau2018b}; see Definition \ref{def-pbasis} for the definition of a $p$-basis). When $G = \textup{GL}_h$ and $\mu = \mu_{d,h}$, the equivalence in question holds for arbitrary $p$-nilpotent rings $R$ by a theorem of Zink and Lau (see \cite{Zink2002} and \cite{Lau2010}).
Hence the main result of this paper can be seen as a group-theoretic generalization of the theorem of Zink and Lau (but note that the theorem of Zink and Lau is an invaluable input in the proof). Let us also mention that a similar result is proven in the case where $G$ and $\mu$ come from an EL-type local Shimura datum in \cite[\textsection 5.4]{Daniels2019}. Given a $(G,\mu)$-display $\mathscr{P}$ which is nilpotent with respect to $\eta$, it is straightforward to obtain a formal $p$-divisible group: one takes the $p$-divisible group $X$ associated to the Zink display induced by the embedding $\eta:G \hookrightarrow \textup{GL}(\Lambda)$. The primary difficulty lies in determining an $(\underline{s},\mu)$-structure on $X$. This is resolved by the main innovation of this paper, which is the association of a $G$-crystal to the $(G,\mu)$-display $\mathscr{P}$. We summarize the properties of this $G$-crystal in the following theorem, which is an amalgamation of the results in \textsection \ref{sub-crystalGdisp} and \textsection \ref{sub-hodge}. Let $\textup{LFCrys}(\Spec R/\zz_p)$ denote the category of crystals in locally free $\mathcal{O}_{\Spec R/\zz_p}$-modules as in \cite{BBM1982}. \begin{introthm}\label{thm-Gcrystal} Let $R$ be a $p$-nilpotent $\zz_p$-algebra. Suppose $\mathscr{P}$ is a $(G,\mu)$-display over $\underline{W}(R)$ which is nilpotent with respect to $\eta$. Then there exists an exact tensor functor \begin{align*} \mathbb{D}(\mathscr{P}): \textup{Rep}_{\zz_p} G \to \textup{LFCrys}(\Spec R/\zz_p), \ (V,\pi) \mapsto \mathbb{D}(\mathscr{P})^\pi \end{align*} such that the following properties hold: \begin{enumerate}[$($i$)$] \item The association $\mathscr{P} \mapsto \mathbb{D}(\mathscr{P})$ is functorial in $\mathscr{P}$ and compatible with base change. \item If $\Z_{\eta}(\mathscr{P})$ is the nilpotent Zink display associated to $\mathscr{P}$ via the embedding $\eta$, then there is a natural isomorphism of crystals \begin{align*} \mathbb{D}(\mathscr{P})^\eta \cong \mathbb{D}(\Z_\eta(\mathscr{P})), \end{align*} where $\mathbb{D}(\Z_\eta(\mathscr{P}))$ denotes the crystal associated to $\Z_\eta(\mathscr{P})$ as in \textup{\cite{Zink2002}}. \end{enumerate} \end{introthm} Once such a crystal is constructed, it is not difficult to obtain an $(\underline{s},\mu)$-structure on the $p$-divisible group $X$ associated to $\mathscr{P}$. Indeed, by viewing the tensors $s_i$ as morphisms of representations from the trivial representation to $\Lambda^\otimes$, we can use functoriality of the $G$-crystal in representations and its compatibility with the tensor product to obtain morphisms $t_i:\mathbbm{1} \to (\mathbb{D}(\mathscr{P})^\eta)^\otimes$. By Theorem \ref{thm-Gcrystal}, we can replace $(\mathbb{D}(\mathscr{P})^\eta)^\otimes$ with $\mathbb{D}(\Z_\eta(\mathscr{P}))^\otimes$, which is in turn isomorphic to $\mathbb{D}(X)^\otimes$ by the theory of Zink and Lau (Lemma \ref{lem-zinkdieudonne}). With some work (see Proposition \ref{prop-frobeq}) one can show that the resulting morphisms of crystals $t_i: \mathbbm{1} \to \mathbb{D}(X)^\otimes$ are crystalline Tate tensors. The construction of the crystal in Theorem \ref{thm-Gcrystal} requires a technical result about $(G,\mu)$-displays which may be of independent interest. As a starting point we recall that by \cite[Thm. 
3.16]{Daniels2019}, if $R$ is a $p$-nilpotent $\zz_p$-algebra, $(G,\mu)$-displays over the Witt frame $\underline{W}(R)$ in the Tannakian framework are equivalent to $G$-displays of type $\mu$ over $\underline{W}(R)$ defined using the torsor-theoretic framework of \cite{Lau2018}. More generally, if $\underline{S}$ is an \'etale sheaf of frames over $\Spec R$, we can define a category of $G$-displays of type $\mu$ over $\underline{S}(R)$ as in \cite{Lau2018}, and a category of $(G,\mu)$-displays over $\underline{S}(R)$ following the Tannakian formulation of \cite{Daniels2019}. We say $\underline{S}$ satisfies descent for modules if finite projective graded modules over the graded ring $S$ form an \'etale stack over $\Spec{R}$. The following theorem (Theorem \ref{thm-equiv} below), which is essentially a generalization of \cite[Thm. 3.16]{Daniels2019}, is critical to our construction of a $G$-crystal for a $(G,\mu)$-display. \begin{introthm}\label{thm-2} If $\underline{S}$ is an \'etale sheaf of frames on $\Spec{R}$ which satisfies descent for modules, then there is an equivalence of categories \begin{align*} \big(\textup{$(G,\mu)$-displays over $\underline{S}(R)$}\big) \xrightarrow{\sim} \big(\textup{$G$-displays of type $\mu$ over $\underline{S}(R)$}\big). \end{align*} \end{introthm} In particular, in Appendix \ref{section-appendix}, we prove that the sheaf on $\Spec A$ associated to the relative Witt frame $\underline{W}(B/A)$ for a $p$-adic PD-thickening $B \to A$ satisfies descent for modules. Other sheaves of frames which satisfy descent for modules are those associated to $p$-adic frames as in \cite[Def. 4.2.1]{Lau2018}. Examples of $p$-adic frames include the Zink frame $\mathbb{W}(R)$ for an admissible local ring $R$ \cite[Ex. 2.1.13]{Lau2018} and its relative analog associated to a PD-thickening $B \to R$, as well as the truncated Witt frames $\underline{W}_n(R)$ over an $\mathbb{F}_p$-algebra $R$ \cite[Ex. 2.1.6]{Lau2018}. Let us briefly sketch the construction of the $G$-crystal. Given a $(G,\mu)$-display $\mathscr{P}$ over $R$, we obtain a corresponding $G$-display of type $\mu$ over $\underline{W}(R)$ using \cite[Thm. 3.16]{Daniels2019}. Moreover, if $B \to R$ is a PD-thickening, then the work of Lau (see Proposition \ref{prop-lifting}) allows us to lift the $G$-display of type $\mu$ over $\underline{W}(R)$ to a $G$-display of type $\mu$ over the relative Witt frame $\underline{W}(B/R)$. Since the relative Witt frame satisfies descent for modules (see Proposition \ref{prop-descent}), the above theorem applies, and we obtain a $(G,\mu)$-display over $\underline{W}(B/R)$, which is, in particular, an exact tensor functor from $\textup{Rep}_{\zz_p} G$ to the category of displays over $\underline{W}(B/R)$. Given any representation $(V,\pi)$, we can obtain from such an object a $B$-module, which we denote $\mathbb{D}(\mathscr{P})^\pi_{B/R}$. The functor which assigns to $(V,\pi)$ the crystal $(B \to R) \mapsto \mathbb{D}(\mathscr{P})^\pi_{B/R}$ is the desired $G$-crystal. As a consequence of Theorem \ref{thm-1} we obtain an explicit relationship between the Rapoport-Zink functors of Hodge type defined in \cite{Kim2018} and in \cite{BP2017}. To be more specific, let $k$ be an algebraic closure of $\mathbb{F}_p$, suppose $(G,\{\mu\},[b])$ is an integral local Shimura datum which is unramified of Hodge type, and let $\underline{G} = (G,\mu, \Lambda, \eta, \underline{s})$ be a local Hodge embedding datum.
Given a good choice of $\mu$ and $b$, we can define a $(G,\mu)$-display $\mathscr{P}_0$ which is nilpotent with respect to $\eta$, and we denote by $(X_0, \underline{t_0})$ its associated formal $p$-divisible group with $(\underline{s},\mu)$-structure. Denote by $\textup{Nilp}_{W(k)}^\textup{fsm}$ the category of $p$-nilpotent $W(k)$-algebras which are formally smooth and formally finitely generated over $W(k)/p^mW(k)$ for some $m$. To the datum $(\underline{G},b)$ we can associate two Rapoport-Zink functors on $\textup{Nilp}_{W(k)}^\textup{fsm}$. The first, denoted $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$, assigns to a $p$-nilpotent $W(k)$-algebra $A$ the set of isomorphism classes of triples $(X,\underline{t},\iota)$, where $(X,\underline{t})$ is a $p$-divisible group with $(\underline{s},\mu)$-structure over $A$, and $\iota$ is a quasi-isogeny over $\Spf A/pA$ between $X$ and $X_0$ which respects the tensors modulo an ideal of definition. The second, denoted $\textup{RZ}_{G,\mu,b}^{\textup{disp,fsm}}$, assigns to such rings the set of isomorphism classes of pairs $(\mathscr{P},\rho)$ consisting of a $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(A)$ and a $G$-quasi-isogeny $\rho$ between $\mathscr{P}$ and $\mathscr{P}_0$ which is defined over $\Spf A/pA$ (see \textsection \ref{sub-rzspaces} for details). By \cite[Lem. 2.1]{Lau2018}, if $A$ is an object in $\textup{Nilp}_{W(k)}^\textup{fsm}$, then $A/pA$ has a $p$-basis \'etale locally, so the equivalence of Theorem \ref{thm-1} holds. As a result, we obtain the following corollary (see Theorem \ref{thm-RZfunctors}). \begin{introcor}\label{introcor1} The functors $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$ and $\textup{RZ}_{G,\mu,b}^{\textup{disp,fsm}}$ on $\textup{Nilp}_{W(k)}^\textup{fsm}$ are naturally isomorphic. \end{introcor} It follows from Corollary \ref{introcor1} that the formal schemes defined by Kim \cite{Kim2018} and B\"ultel and Pappas \cite{BP2017} which represent these functors are isomorphic. This was already known by \cite[Remark 5.2.7]{BP2017}, but the geometric method of proof offered in \textit{loc. cit.} differs from the explicit comparison of functors given here. If $X_0$ is a formal $p$-divisible group associated to $\mu$ and $b$ as above, we derive a further corollary regarding deformations of $X_0$ with crystalline Tate tensors. By definition of $X_0$, we have an isomorphism between the Dieudonn\'e module of $X_0$ and the $W(k)$-module $\Lambda \otimes_{\zz_p} W(k)$. Using this isomorphism, the collection of tensors $\underline{s}$ naturally determines a collection of crystalline Tate tensors $\underline{t_0}$ on $X_0$. This data determines a subfunctor $\mathfrak{Def}_G(X_0)$ of the functor of deformations of $X_0$ whose points consist of lifts $X$ of $X_0$ for which there exist lifts of $\underline{t_0}$ to crystalline Tate tensors for $X$. If we define $R_G$ such that $\textup{Spf }R_G$ is the formal completion of the opposite unipotent subgroup of $G$ associated to $\mu$ at the origin, then a theorem of Faltings \cite[Thm. 3.6]{Kim2018} states that $R_G$ pro-represents the restriction of the functor $\mathfrak{Def}_G(X_0)$ to power series algebras over $W(k)$. Using a characterization of the essential image of the functor $\textup{BT}_{\underline{G},R}$ defined in Theorem \ref{thm-1}, we obtain the following corollary, which extends the theorem of Faltings.
Let $\textup{Art}_{W(k)}$ denote the category of local artinian $W(k)$-algebras $(R,\mathfrak{m})$ along with an isomorphism $R/\mathfrak{m} \xrightarrow{\sim} k$. \begin{introcor} Let $\mathfrak{Def}(X_0,\underline{t_0})$ denote the subfunctor of $\mathfrak{Def}_G(X_0)$ whose points in $R \in \textup{Art}_{W(k)}$ consist of lifts $X$ of $X_0$ over $R$ such that the following conditions hold: \begin{enumerate}[$($i$)$] \item There is an isomorphism of crystals in locally free $\mathcal{O}_{\Spec R / W(k_0)}$-modules \begin{align*} \lambda: \Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R/ W(k_0)} \xrightarrow{\sim} \mathbb{D}(X) \end{align*} such that the Hodge filtration $\textup{Fil}^1(\mathbb{D}(X)) \subset \mathbb{D}(X)_{R/R} \xrightarrow{\sim} \Lambda \otimes_{\zz_p} R$ is induced by $\mu$. \item There exists a collection $\underline{t} = (t_1, \dots, t_r)$ of crystalline Tate tensors on $\mathbb{D}(X)$ over $\Spec{R}$ which are identified with $\underline{s} \otimes 1$ via $\lambda.$ \end{enumerate} Then $R_G$ pro-represents the functor $\mathfrak{Def}(X_0,\underline{t_0})$. \end{introcor} This corollary is Theorem \ref{thm-def} in the text below. To see that this implies the theorem of Faltings, one needs to observe that if $R$ is a power series algebra over $W(k)$, then $\mathfrak{Def}_G(X_0)(R) = \mathfrak{Def}(X_0, \underline{t_0})(R)$, which is done in Remark \ref{rmk-Faltings}. Let us give a brief outline of the paper. In the first section we review the definitions of displays and frames as in \cite{Lau2018}, and the crystalline theory of $p$-divisible groups and displays, following especially \cite{BBM1982}, \cite{Zink2002}, and \cite{Lau2013}. In \textsection \ref{section-Gdisplays} we recall basic notions about $G$-displays of type $\mu$ and $(G,\mu)$-displays, and we prove Theorem \ref{thm-2}. By results in Appendix \ref{section-appendix}, the theorem applies in particular in the case of relative Witt frames, which is in turn crucial for \textsection \ref{section-crystals}, where we construct the $G$-crystal associated to a $(G,\mu)$-display over the Witt frame and prove the collection of results which comprise Theorem \ref{thm-Gcrystal}. In \textsection \ref{section-main} we prove Theorem \ref{thm-1}, and derive consequences for the study of Rapoport-Zink spaces of Hodge type and for the deformation theory of $p$-divisible groups with crystalline Tate tensors. \subsection{Notation} \begin{itemize}[leftmargin=*] \item Throughout the paper, fix a prime $p$ and a finite field $k_0$ of characteristic $p$ and cardinality $q = p^\ell$. \item A ring or abelian group will be called $p$-adic if it is complete and separated with respect to the $p$-adic topology. \item If $f: A \to B$ is a ring homomorphism and $M$ is an $A$-module, we write $f^\ast M$ for $M \otimes_{A,f} B$. If $f$ is understood, we write $M_B = M\otimes_A B$ as well. If $X$ is a $p$-divisible group over $\Spec A$, we often write $X \otimes_A B$ for the base change of $X$ to $B$. \item If $f: A \to B$ is a ring homomorphism, $M$ is an $A$-module, and $N$ is a $B$-module, we say a map $\alpha: M \to N$ is $f$-linear if $\alpha(a\cdot m) = f(a) \cdot \alpha(m)$ for $a \in A$, $m \in M$. In this case we write $\alpha^\sharp$ for the linearization $f^\ast M \to N$ given by $m \otimes b \mapsto \alpha(m)\cdot b$. We say $\alpha$ is an $f$-linear bijection if $\alpha^\sharp$ is a $B$-module isomorphism. \item If $R$ is a commutative ring, denote by $\textup{Mod}(R)$ the category of $R$-modules.
\item For a $\zz_p$-algebra $\mathcal{O}$, denote by \text{Nilp}$_{\mathcal{O}}$ the site consisting of the category of $\mathcal{O}$-algebras in which $p$ is nilpotent, endowed with the fpqc topology. We will refer to such an $\mathcal{O}$-algebra as a $p$-nilpotent $\mathcal{O}$-algebra. \item Let $\bigoplus S_n$ be a $\zz$-graded ring. For a ring homomorphism $\varphi: \bigoplus S_n \to R$, we write $\varphi_n$ for the restriction of $\varphi$ to $S_n$. \item Let $R$ be a ring. Denote by $\text{\'Et}_R$ the category of affine \'etale $R$-schemes. We endow this category with a topology by defining a covering of $\Spec{A} \in \text{\'Et}_R$ to be an \'etale covering $\{U_i \to \Spec{A}\}$ such that each $U_i$ is affine. \item If $G$ is a sheaf of groups in a topos, denote by $\text{Tors}_G$ the stack of $G$-torsors. \item If $S$ is any $\zz$-graded ring, denote by $\text{GrMod}(S)$ the category of graded $S$-modules, and by $\text{PGrMod}(S)$ the full subcategory of finite projective graded $S$-modules. By \cite[Lemma 3.0.1]{Lau2018}, this latter category is equivalent to the full subcategory of finitely generated graded $S$-modules which are projective over $S$. \item If $R$ is a $p$-adic ring, denote by \text{pdiv}$(R)$ the category of $p$-divisible groups over $R$, and denote by $\text{fpdiv}(R)$ the full subcategory of formal $p$-divisible groups over $R$. \item If $M$ is a module over a ring $R$, denote by $M^\vee$ its linear dual. \item If $X$ is a $p$-divisible group over a ring $R$, denote by $X^D$ its Serre dual. \item Let $R$ be a $p$-nilpotent $W(k_0)$-algebra. If $A$ is an $R$-algebra, a PD-thickening of $A$ over $R$ is a surjective ring homomorphism $B \to A$ such that $B$ is a $p$-nilpotent $W(k_0)$-algebra and such that the kernel $J$ of $B \to A$ is equipped with divided powers $\delta$ which are compatible with the canonical divided powers on $pW(k_0)$. \item Let $R$ be a $p$-adic $W(k_0)$-algebra. If $A$ is an $R$-algebra, a $p$-adic PD-thickening of $A$ over $R$ is a surjective ring homomorphism $B \to A$ such that $B$ is a $p$-adic $W(k_0)$-algebra and such that the kernel $J$ of $B \to A$ is equipped with divided powers $\delta$ which are compatible with the canonical divided powers on $pW(k_0)$. \item A PD-morphism between PD-thickenings $B\to A$ and $B'\to A'$ with divided powers $\delta$ and $\delta'$ on their respective kernels is a pair of homomorphisms $\varphi: B \to B'$ and $\psi: A \to A'$ such that the obvious diagram commutes, and such that $\delta_n'(\varphi(x)) = \varphi(\delta_n(x))$ for all $x \in J$ and all $n$. \end{itemize} \section{Preliminaries} \subsection{Frames, graded modules, and displays}\label{sub-frames} We review the basic definitions and properties of (higher) frames and displays following \cite{Lau2018} and \cite[\textsection 2]{Daniels2019}. In particular, we recall the definition of the Witt frame over a $p$-nilpotent ring $R$ (Example \ref{ex-wittframe}) and the relative Witt frame associated to a $p$-adic PD-thickening $B \to A$ (Example \ref{ex-relwittframe}). Moreover, we make explicit the connection between the theory of displays presented here and the theory of windows (see Lemma \ref{lem-windows}). \begin{Def}\label{def-frame} A \textit{frame} $\underline{S} = (S,\sigma,\tau)$ is a triple consisting of a $\zz$-graded ring $S$ and two ring homomorphisms $\sigma, \tau: S \to S_0$ satisfying the following properties. 
\begin{enumerate}[(i)] \item The endomorphism $\tau_0$ of $S_0$ is the identity, and $\tau_{-n}: S_{-n} \to S_0$ is a bijection for all $n \ge 1$. \item The endomorphism $\sigma_0$ of $S_0$ induces the $p$-power Frobenius $s \mapsto s^p$ on $S_0 / pS_0$, and if $t$ is the unique element in $S_{-1}$ such that $\tau_{-1}(t) = 1,$ then $\sigma_{-1}(t) = p$. \item We have $p \in \textup{Rad}(S_0)$. \end{enumerate} \end{Def} We say that $\underline{S}$ is a frame for $R = S_0 / \tau(S_1)$. The conditions in the definition imply that $\tau$ acts on $S_1$ as multiplication by $t$, so we will usually write $\tau(S_1) = tS_1$. \begin{Def} Let $\underline{S} = (S,\sigma,\tau)$ be a frame for $R$. A \textit{display} over $\underline{S}$ is a pair $\underline{M} = (M,F)$ consisting of a finite projective graded $S$-module $M$ and a $\sigma$-linear bijection $F:M \to \tau^\ast M$. \end{Def} \begin{Def} A \textit{standard datum} for a display is a pair $(L, \Phi)$ consisting of a finite projective graded $S_0$-module $L$ and a $\sigma$-linear automorphism $\Phi: L \to L$. \end{Def} From a standard datum $(L, \Phi)$ we define a display $(M,F)$ where $M = L \otimes_{S_0} S$ and $F(x \otimes s) = \sigma(s) \Phi(x)$. If every $M$ in $\textup{PGrMod}(S)$ is of the form $M = L \otimes_{S_0} S$ for a finite projective graded $S_0$-module $L$, then every display is isomorphic to one defined by a standard datum, see \cite[\textsection 3.4]{Lau2018}. In particular, by \cite[Lem. 3.1.4]{Lau2018}, this occurs if every finite projective $R$-module lifts to $S_0$. Let us denote by $\textup{Disp}(\uS)$ the category of displays over $\uS$. If $\uS \to \uS'$ is a frame homomorphism, then we obtain a base change functor \begin{align*} \textup{Disp}(\uS) \to \textup{Disp}(\uS'), \ \underline{M} \mapsto \underline{M} \otimes_{\uS} \uS'. \end{align*} The category $\textup{Disp}(\uS)$ has a tensor product given by $(M,F) \otimes(M',F') = (M\otimes_S M', F \otimes F')$, which makes it into an exact rigid tensor category with unit object $\underline{S} = (S,\sigma)$. For any display $\underline{M}$ over a frame $\underline{S}$, there exists a canonical descending filtration on $\overline{M}:=\tau^\ast M\otimes_{S_0} R$, called the Hodge filtration of $\underline{M}$, see \cite[\textsection 5.2]{Lau2018}. Let us recall the definition. Denote by $\theta_n: M_n \to \tau^\ast M$ the composition $M_n \hookrightarrow M \to \tau^\ast M$, and by $\bar{\theta}_n$ the composition \begin{align*} M_n \xrightarrow{\theta_n} \tau^\ast M \to \tau^\ast M \otimes_{S_0} R. \end{align*} Define \begin{align}\label{eq-hodgedisp} \textup{Fil}^n(\underline{M}) := \textup{im}(\bar{\theta}_n) \subset \overline{M}. \end{align} Since $\bar{\theta}_n$ factors through $t: M_n \to M_{n-1}$, we have $\textup{Fil}^n(\underline{M}) \subset \textup{Fil}^{n-1}(\underline{M})$, so $(\textup{Fil}^n(\underline{M}))_{n \in \zz}$ defines a descending filtration on $\overline{M}$. If $(L,\Phi)$ is a standard datum for $\underline{M}$, with $L = \bigoplus_i L_i$, then \begin{align*} \textup{Fil}^n(\underline{M}) = \bigoplus_{i \ge n} L_i \otimes_{S_0} R. \end{align*} \begin{rmk}\label{rmk-hodge} The Hodge filtration is functorial in $\underline{M}$, meaning that any morphism of displays $\varphi: \underline{M} \to \underline{M'}$ induces a morphism of $R$-modules $\bar{\varphi}:\overline{M} \to \overline{M'}$ which sends $\textup{Fil}^n(\underline{M})$ into $\textup{Fil}^n(\underline{M'})$.
Moreover, the Hodge filtration is compatible with tensor products of displays, i.e., we have \begin{align}\label{eq-hodgetensor} \textup{Fil}^n(\underline{M} \otimes \underline{M'}) = \sum_{j+k = n} \textup{Fil}^j(\underline{M}) \otimes \textup{Fil}^k(\underline{M'}). \end{align} \end{rmk} Let us briefly review the connection between (higher) displays over frames and windows over 1-frames. Recall the following definition (see \cite[Def. 2.2.1]{Lau2018}). \begin{Def} A $1$-frame $\mathcal{S} = (S_0 \supset I, \sigma_0, \dot{\sigma})$ consists of a ring $S_0$, an ideal $I \subset S_0$, a ring endomorphism $\sigma_0$ of $S_0$, and a $\sigma_0$-linear map $\dot{\sigma}: I \to S_0$ such that \begin{enumerate}[(i)] \item $\sigma_0: S_0 \to S_0$ is a lift of the Frobenius on $S_0/pS_0$, \item $\sigma_0(a) = p\dot{\sigma}(a)$ for $a \in I$, \item $p \in \textup{Rad}(S_0)$. \end{enumerate} We say that $\mathcal{S}$ is a 1-frame for $R = S_0/I$. \end{Def} A frame $\underline{S}$ is said to extend the 1-frame $\mathcal{S}$ if $t:S_1 \to S_0$ is injective, $I = tS_1$, and $\dot{\sigma}(ta) = \sigma_1(a)$ for $a \in S_1$. \begin{Def} Let $\mathcal{S}$ be a $1$-frame. A window over $\mathcal{S}$ is a quadruple $\underline{P} = (P,\Fil P,F_0,F_1)$ consisting of a finitely generated projective $S_0$-module $P$, an $S_0$-submodule $\Fil P \subseteq P$, and two $\sigma_0$-linear maps $F_0:P \to P$ and $F_1: \Fil P \to P$ such that \begin{enumerate}[(i)] \item there is a decomposition $P = L_0 \oplus L_1$ with $\Fil P = IL_0 \oplus L_1$, \item $F_1(ax) = \dot{\sigma}(a)F_0(x)$ for $a \in I$ and $x \in P$, \item $F_0(x) = pF_1(x)$ for $x \in \Fil P$, \item $F_0(P)+F_1(\Fil P)$ generates $P$ as an $S_0$-module. \end{enumerate} \end{Def} Because there is no surjectivity condition on $\dot{\sigma}$ in the definition of a 1-frame, the definition of windows that appears here differs slightly from others in the literature, see \cite[Rmk. 2.11]{Lau2010}. By \cite[Lem. 2.6]{Lau2010}, if $P = L_0 \oplus L_1$ is a finite projective $S_0$-module and $\Fil P = IL_0 \oplus L_1$, then the set of $\mathcal{S}$-window structures on $P$ and $\Fil P$ is in bijection with the set of $\sigma_0$-linear isomorphisms \begin{align*} \Psi: L_0 \oplus L_1 \to P. \end{align*} The bijection is determined by $\Psi = F_0 \res_{L_0} \oplus F_1 \res_{L_1}$, and the triple $(L_0, L_1, \Psi)$ is called a \textit{normal representation} for $(P, \Fil P, F_0, F_1)$. We remark for later use that, in terms of $\Psi$, the linearization of $F_0$ can be expressed as \begin{align}\label{eq-F0sharp} F_0^\sharp = \Psi^\sharp \circ (\id_{L_0} \oplus p\cdot \id_{L_1}). \end{align} To any window $\underline{P} = (P, \Fil P, F_0, F_1)$ over $\mathcal{S}$ we can associate an $S_0$-module homomorphism $V^\sharp: P \to \sigma_0^\ast P$ which is uniquely determined by the identities \begin{align*} V^\sharp(\xi \cdot F_0(x)) = p\xi \otimes x \text{ and } V^\sharp(\xi \cdot F_1(y)) = \xi \otimes y \end{align*} for $\xi \in S_0, x \in P, y \in \Fil P$. If $(L_0, L_1, \Psi)$ is a normal representation for $\underline{P}$, then \begin{align}\label{eq-Vsharp} V^\sharp = (p\cdot \id_{L_0} \oplus \id_{L_1}) \circ (\Psi^\sharp)^{-1}. \end{align} From (\ref{eq-F0sharp}) and (\ref{eq-Vsharp}) we see that $F_0^\sharp \circ V^\sharp = p\cdot \id_{P}$ and $V^\sharp \circ F_0^\sharp = p\cdot \id_{\sigma_0^\ast P}$.
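Both identities can be verified directly from (\ref{eq-F0sharp}) and (\ref{eq-Vsharp}) (a routine check, writing $\id_{L_i}$ loosely for the identity of $\sigma_0^\ast L_i$, as above):
\begin{align*}
F_0^\sharp \circ V^\sharp &= \Psi^\sharp \circ (\id_{L_0} \oplus p\cdot \id_{L_1}) \circ (p\cdot \id_{L_0} \oplus \id_{L_1}) \circ (\Psi^\sharp)^{-1} = \Psi^\sharp \circ (p\cdot\id) \circ (\Psi^\sharp)^{-1} = p\cdot \id_{P}, \\
V^\sharp \circ F_0^\sharp &= (p\cdot \id_{L_0} \oplus \id_{L_1}) \circ (\Psi^\sharp)^{-1} \circ \Psi^\sharp \circ (\id_{L_0} \oplus p\cdot \id_{L_1}) = p\cdot \id_{\sigma_0^\ast P}.
\end{align*}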
If $\underline{S}$ is a frame over $R$, denote by $\nu$ the ring homomorphism $S \to R$ which extends the projection $S_0 \to R$ by zero on $S_n$ for $n \ne 0$. If $M$ is a finite projective graded module over $S$, then $M \otimes_{S,\nu} R$ is a finite projective graded $R$-module with graded pieces that we denote by $\overline{L}_i$. Recall the following definition, cf. \cite[Def. 2.16]{Daniels2019}. \begin{Def} We say $\underline{M}$ is a \textit{1-display} over $\underline{S}$ if $\overline{L}_i = 0$ for all $i < 0$ and $i > 1$. \end{Def} In the language of \cite{Daniels2019}, $\underline{M}$ is a 1-display if the depth of $M$ is nonnegative and the altitude of $M$ is at most 1. If $(L, \Phi)$ is a standard datum for $\underline{M}$, then $\underline{M}$ is a 1-display if and only if $L_i = 0$ for all $i < 0$ and $i > 1$, see \cite[Lem. 2.7]{Daniels2019}. \begin{lemma}\label{lem-windows} Suppose $\underline{S}$ is a frame extending the 1-frame $\mathcal{S}$, and suppose that all finite projective $R$-modules lift to $S_0$. Then the category of windows over $\mathcal{S}$ is equivalent to the category of 1-displays over $\underline{S}$. \end{lemma} \begin{proof} The proof follows from a straightforward adaptation of the arguments in \cite[Lem. 2.25]{Daniels2019}. We record for later use the definition of the functor from $1$-displays over $\underline{S}$ to windows over $\mathcal{S}$. If $\underline{M} = (M,F)$ is a 1-display, then we can define a window $(P, \Fil P, F_0, F_1)$ over $\mathcal{S}$ by $P:= \tau^\ast M$, $\Fil P = \theta_1(M_1)$, and $F_i = F\res_{M_i} \circ \theta_i^{-1}: P_i \to P$, where $P_0 := P$ and $P_1 := \Fil P$. \end{proof} One can also prove the lemma using normal representations: if $(L, \Phi)$ is a standard datum for a 1-display over $\underline{S}$, then $L = L_0 \oplus L_1$, so $(L_0, L_1, \Phi)$ is a normal representation for the associated window over $\mathcal{S}$. For use later let us denote the functor from $1$-displays to windows by \begin{align}\label{eq-displaytowindow} P_{\underline{S}}: (1\text{-displays over }\underline{S}) \to (\text{Windows over }\mathcal{S}), \end{align} and let us denote the quasi-inverse functor by \begin{align}\label{eq-windowtodisplay} M_{\underline{S}}: (\text{Windows over }\mathcal{S}) \to (1\text{-displays over }\underline{S}). \end{align} We close this section by discussing a few examples of frames and 1-frames that will be of particular importance in what follows. Recall that to give a frame it suffices to specify a triple $(S_{\ge 0}, \sigma, (t_n)_{n \ge 0})$, see \cite[\textsection 2.1]{Daniels2019} and \cite[Rmk. 2.0.2]{Lau2018}. If $R$ is a $\zz_p$-algebra, let $W(R)$ denote the ring of infinite-length Witt vectors over $R$. The ring $W(R)$ comes equipped with a ring endomorphism $f_R$, called the Frobenius, and an additive self-map $v_R$, called the Verschiebung. When the ring $R$ is clear from context we will write simply $f$ and $v$ for these maps. Denote by $I(R)$ the kernel of the canonical map $w_0: W(R) \to R$, so $I(R) = v(W(R))$. \begin{ex}[The Witt frame]\label{ex-wittframe} Let $R$ be a $p$-adic ring. Define a frame $\underline{W}(R)$ from the Witt ring over $R$ as follows. For $S_{\ge 0}$ we take the $\zz_{\ge 0}$-graded ring with $S_0 = W(R)$, $S_n = I(R)$ for $n \ge 1$, and with multiplication $S_n \times S_m \to S_{n +m}$ determined by $(v(a), v(b)) \mapsto v(ab)$ for $n, m \ge 1$. The map $t_0: S_1 \to S_0$ is given by inclusion $I(R) \hookrightarrow W(R)$, and $t_n$ is multiplication by $p$ for $n \ge 1$. We let $\sigma_0 = f_R$, and for $n \ge 1$ we define $\sigma_n(v(s)) = s$ for all $v(s) \in S_n = I(R)$. We will write $S = W(R)^\oplus$ for the resulting $\zz$-graded ring. The corresponding frame $\underline{W}(R) = (W(R)^\oplus, \sigma, \tau)$ is the \textit{Witt frame} for $R$.
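As a consistency check (a direct verification; in the triple presentation the frame condition $\sigma_{-1}(t) = p$ amounts to $\sigma_n \circ t_n = p \cdot \sigma_{n+1}$ for all $n \ge 0$), the standard Witt vector identities $f(v(a)) = pa$ and $\xi \cdot v(a) = v(f(\xi)a)$ for $\xi \in W(R)$ give
\begin{align*}
\sigma_0(t_0(v(a))) = f(v(a)) = pa = p\cdot \sigma_1(v(a)), \qquad \sigma_n(t_n(v(a))) = \sigma_n(v(pa)) = pa = p\cdot \sigma_{n+1}(v(a)) \quad (n \ge 1),
\end{align*}
where for $n \ge 1$ we used $p \cdot v(a) = v(f(p)a) = v(pa)$.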
The Witt frame extends the Witt $1$-frame $\mathcal{W}(R) = (W(R) \supset I(R), f, v^{-1})$. Windows over the Witt 1-frame are equivalent to Zink displays, so by Lemma \ref{lem-windows}, 1-displays over $\underline{W}(R)$ are equivalent to Zink displays. \end{ex} Let $B \to A$ be a $p$-adic PD-thickening with kernel $J$. Using the divided powers on $J$, Zink defines an isomorphism of $W(B)$-modules \begin{align*} \log: W(J) \xrightarrow{\sim} \prod_{i \in \mathbb{N}} J, \end{align*} see \cite[\textsection 1.4]{Zink2002} for details. Denote the image of $\xi \in W(J)$ by $\log(\xi) = [\xi_0, \xi_1, \dots]$. \begin{ex}[The relative Witt frame]\label{ex-relwittframe} Let $B \to A$ be a $p$-adic PD-thickening. Define a frame $\underline{W}(B/A)$ associated to $B/A$ as follows. For $S_{\ge 0}$ take the $\zz_{\ge 0}$-graded ring with $S_0 = W(B)$, $S_n = I(B) \oplus J$ with $J$ viewed as a $W(B)$-module by restriction of scalars, and multiplication $S_n \times S_m \to S_{n+m}$ for $n, m \ge 1$ defined by $(v(a),x)\cdot (v(b),y) = (v(ab), xy)$ for $a, b \in W(B)$, $x,y \in J$. The map $t_0: S_1 = I(B) \oplus J \to W(B) = S_0$ is given by $(v(a), x) \mapsto v(a) + \log^{-1}[x,0,0,\dots]$, and $t_n$ for $n \ge 1$ is given by multiplication by $p$ on the first factor. Finally, let $\sigma_0 = f_B$ and for $n \ge 1$ define $\sigma_n(v(a),x) = a$. Denote the resulting $\zz$-graded ring by $W(B/A)^\oplus$. The corresponding frame $\underline{W}(B/A) = (W(B/A)^\oplus, \sigma, \tau)$ is the \textit{relative Witt frame} for $B/A$. Let $I(B/A)$ denote the kernel of $W(B) \to A$, and denote by $\tilde{v}^{-1}$ the unique extension of $v^{-1}$ to $I(B/A)$ whose restriction to $W(J) = \ker(W(B) \to W(A))$ is given by $[\xi_0, \xi_1, \dots] \mapsto [\xi_1, \xi_2, \dots]$ in logarithmic coordinates. Then $\mathcal{W}(B/A) = (W(B) \supset I(B/A), f, \tilde{v}^{-1})$ is a $1$-frame (see \cite[\textsection 2.2]{Lau2014}), and $\underline{W}(B/A)$ extends $\mathcal{W}(B/A)$. \end{ex} \subsection{Recollections on crystals}\label{sub-reviewcrystals} We review the definitions of crystals and the crystalline site as in \cite{BBM1982}, and we sketch proofs of some standard lemmas which will be useful in \textsection \ref{sub-hodge} when we are checking Frobenius equivariance of certain morphisms of crystals. For a $W(k_0)$-scheme $X$ in which $p$ is locally nilpotent, denote by $\textup{CRIS}(X/W(k_0))$ the big fppf crystalline site as in \cite{BBM1982}. This is the site whose underlying category is the category of triples $(U,T,\delta)$ where $U \hookrightarrow T$ is a closed immersion of an $X$-scheme $U$ into a $p$-nilpotent $W(k_0)$-scheme $T$ such that the ideal $\mathscr{I}$ of $\mathcal{O}_T$ defining the embedding is equipped with divided powers compatible with the natural divided powers on $pW(k_0)$. If $X = \Spec R$ is affine, we will write $\textup{CRIS}(R/W(k_0))$ to mean $\textup{CRIS}(\Spec R/W(k_0))$. Recall that to give a sheaf on $\textup{CRIS}(X/W(k_0))$ is equivalent to giving, for every triple $(U,T,\delta)$ in $\textup{CRIS}(X/W(k_0))$, an fppf sheaf $\mathscr{F}_T$ on $T$, and for every morphism $(u,v): (U',T',\delta') \to (U,T,\delta)$, a morphism of sheaves $v^{-1}\mathscr{F}_T \to \mathscr{F}_{T'}$ which satisfies a cocycle condition (see \cite[1.1.3]{BBM1982}).
The crystalline structure sheaf for $X$ over $W(k_0)$, denoted by $\mathcal{O}_{X/W(k_0)}$, is defined by the rule $\Gamma((U,T,\delta),\mathcal{O}_{X/W(k_0)}) = \Gamma(T,\mathcal{O}_T).$ If $\mathscr{F}$ is a sheaf of $\mathcal{O}_{X/W(k_0)}$-modules, then for a morphism $(u,v)$ as above, the transition morphism $v^{-1}\mathscr{F}_T\to \mathscr{F}_{T'}$ induces a morphism $v^\ast\mathscr{F}_T \to \mathscr{F}_{T'}$. \begin{Def} A \textit{crystal of finite locally free (resp. locally free, resp. quasi-coherent) $\mathcal{O}_{X/W(k_0)}$-modules} is an $\mathcal{O}_{X/W(k_0)}$-module $\mathscr{F}$ such that for every $(U,T,\delta)$ in $\textup{CRIS}(X/W(k_0))$ the $\mathcal{O}_T$-module $\mathscr{F}_T$ is finite locally free (resp. locally free, resp. quasi-coherent) and for every morphism $(u,v):(U',T',\delta') \to (U,T,\delta)$, the transition morphism $v^\ast\mathscr{F}_T \to \mathscr{F}_{T'}$ is an isomorphism. \end{Def} We will denote by $\textup{LFCrys}(X/W(k_0))$ the category of crystals of locally free $\mathcal{O}_{X/W(k_0)}$-modules. The full subcategory of crystals of finite locally free $\mathcal{O}_{X/W(k_0)}$-modules is a rigid exact tensor category which is a full tensor subcategory of the category of crystals in quasi-coherent $\mathcal{O}_{X/W(k_0)}$-modules. The unit object is the crystal $\mathbbm{1}$ which assigns to any $(U,T,\delta)$ the finite locally free $\mathcal{O}_T$-module $\mathcal{O}_T$. If $X = \Spec R$ is affine, we will write $\textup{LFCrys}(R/W(k_0))$ instead of $\textup{LFCrys}(\Spec R/W(k_0))$. \begin{rmk}\label{rmk-crystals} We will often write just $B \to A$ to denote the PD-thickening $(\Spec A, \Spec B, \delta)$. Because fppf sheaves on a scheme $T$ are uniquely determined by their evaluations on affine $T$-schemes, to give a crystal in quasi-coherent $\mathcal{O}_{X/W(k_0)}$-modules, it is enough to give, for every PD-thickening $B\to A$ of $p$-nilpotent $W(k_0)$-algebras over $X$, a $B$-module $M_{B/A}$, and for every morphism $(B'\to A') \to (B\to A)$ of PD-thickenings, an isomorphism \begin{align}\label{eq-crystalproperty} M_{B/A} \otimes_B B'\xrightarrow{\sim} M_{B'/A'}. \end{align} These isomorphisms should satisfy the obvious cocycle condition for compositions. The associated crystal is (finite) locally free if each $B$-module $M_{B/A}$ is (finite) projective. \end{rmk} Let $X = \Spec R$. We can associate to any $W(k_0)$-module $M$ a crystal in quasi-coherent $\mathcal{O}_{X/W(k_0)}$-modules, which we denote by $M \otimes \mathcal{O}_{X/W(k_0)}$. Its sections in a PD-thickening $B \to A$ over $R$ are given by \begin{align}\label{eq-pullback} (M \otimes \mathcal{O}_{X/W(k_0)})_{B/A} := M \otimes_{W(k_0)} B. \end{align} To be precise, $M \otimes \mathcal{O}_{X/W(k_0)}$ is the pullback to the big crystalline site of the quasi-coherent $\mathcal{O}_{\Spec W(k_0)}$-module associated to $M$ (cf. \cite[1.1.16]{BBM1982}). \begin{Def} The category of \textit{isocrystals over $X$}, denoted $\textup{Isoc}(X)$, is the category whose objects are crystals $\mathbb{D}$ in locally free $\mathcal{O}_{X/W(k_0)}$-modules, and whose morphisms are global sections of the Zariski sheaf $\underline{\textup{Hom}}(\mathbb{D},\mathbb{D}')[1/p]$. We will write $\mathbb{D}[1/p]$ for the object $\mathbb{D}$ viewed as an object in $\textup{Isoc}(X)$. When $X = \Spec R$ is affine, we write $\textup{Isoc}(\Spec R) = \textup{Isoc}(R)$.
\end{Def} \begin{rmk} If $X$ is quasi-compact, then $\underline{\textup{Hom}}(\mathbb{D},\mathbb{D}')[1/p]$ can be identified with $\textup{Hom}(\mathbb{D},\mathbb{D}')[1/p]$, i.e. a morphism $\mathbb{D}[1/p] \to \mathbb{D}'[1/p]$ in $\textup{Isoc}(X)$ is an equivalence class of diagrams \begin{align*} \mathbb{D} \xleftarrow{p^n} \mathbb{D} \xrightarrow{s} \mathbb{D}', \end{align*} where $s$ is a morphism of crystals of $\mathcal{O}_{X/W(k_0)}$-modules. \end{rmk} If $B \to A$ is a $p$-adic PD-thickening, then $B \to A$ can be written as the projective limit of divided power extensions $B_n = B/p^nB \to A/p^nA = A_n$. If $\mathbb{D}$ is a crystal in $\mathcal{O}_{\Spec R/W(k_0)}$-modules and $B \to A$ is a $p$-adic PD-thickening over $R$, we write $\mathbb{D}_{B/A} := \varprojlim \mathbb{D}_{B_n/A_n}$. This defines an evaluation functor \begin{align}\label{eq-evalBA} (-)_{B/A}: \textup{LFCrys}(R/W(k_0)) \to \textup{Mod}(B), \ \mathbb{D}\mapsto \mathbb{D}_{B/A}. \end{align} If $B[1/p]$ is nonzero, the functor $(-)_{B/A}$ extends naturally to a functor \begin{align}\label{eq-evalBAiso} (-)_{B/A}[1/p]:\textup{Isoc}(R) \to \textup{Mod}(B[1/p]), \ \mathbb{D}[1/p] \mapsto (\mathbb{D}_{B/A})[1/p] \end{align} which on morphisms is given by the composition \begin{align*} \textup{Hom}(\mathbb{D}_1, \mathbb{D}_2) \to \textup{Hom}_B((\mathbb{D}_1)_{B/A},(\mathbb{D}_2)_{B/A})[1/p] \to \textup{Hom}_{B[1/p]}((\mathbb{D}_1)_{B/A}[1/p],(\mathbb{D}_2)_{B/A}[1/p]). \end{align*} \begin{rmk} \label{rmk-lastarrow} If $\mathbb{D}_1$ is a crystal in finite locally free $\mathcal{O}_{\Spec R/ W(k_0)}$-modules, then $(\mathbb{D}_1)_{B/A}$ is a finite projective $B$-module, and the last arrow is an isomorphism. \end{rmk} \begin{rmk}\label{rmk-perfectequiv} If $B \to A$ is a PD-thickening over $R$, then $W(B)$ is $p$-adic by \cite[Prop. 3]{Zink2002}, and $W(B) \to A$ is a $p$-adic PD-thickening, see e.g. \cite[\textsection 1G]{Lau2014}. When $pR=0$ and $R$ is perfect, the evaluation functors $(-)_{W(R)/R}$ and $(-)_{W(R)/R}[1/p]$ are equivalences (see e.g., \cite[Prop. 4.5]{Grothendieck1974}). \end{rmk} The following lemmas are no doubt well-known to experts, but we could not find a reference, so we sketch proofs for the sake of completeness. Suppose $R$ is an $\mathbb{F}_p$-algebra, and choose a polynomial algebra $W(k_0)[x_\alpha]_{\alpha \in \mathcal{A}}$ surjecting onto $R$. Let $\gamma$ denote the canonical divided powers on $pW(k_0)$, and denote by $D$ the PD-envelope of $W(k_0)[x_\alpha]$ with respect to $K = \ker(W(k_0)[x_\alpha] \to R)$ relative to $(W(k_0), pW(k_0), \gamma)$. Then the kernel $\bar{J}$ of $D \to R$ is equipped with divided powers compatible with those on $pW(k_0)$, and $D_n:=D/p^nD \to R$ is a PD-thickening over $R$ for every $n$. If we denote by $D^\wedge$ the $p$-adic completion of $D$, then $D^\wedge \to R$ defines a $p$-adic PD-thickening, and we can define functors $(-)_{D^\wedge / R}$ and $(-)_{D^\wedge / R} [1/p]$ as in (\ref{eq-evalBA}) and (\ref{eq-evalBAiso}). \begin{lemma} \label{lem-faithful} The functor $(-)_{D^\wedge / R}$ is faithful. Moreover, if $\mathbb{D}_1$ is a crystal in finite locally free $\mathcal{O}_{\Spec R/W(k_0)}$-modules, then the map \begin{align*} \textup{Hom}(\mathbb{D}_1,\mathbb{D}_2)[1/p] \to \textup{Hom}_{D^\wedge[1/p]}((\mathbb{D}_1)_{D^\wedge/R}[1/p], (\mathbb{D}_2)_{D^\wedge/R}[1/p]) \end{align*} induced by $(-)_{D^\wedge / R}[1/p]$ is injective. 
\end{lemma} \begin{proof} The first statement follows from the fact that for any PD-thickening $B \to A$ we can find a lift $W(k_0)[x_\alpha] \to B$ of the composition $W(k_0)[x_\alpha] \to R \to A$, so by the universal properties of $D$ and $D^\wedge$ we obtain a PD-morphism $(D^\wedge \to R) \to (B \to A)$. The second statement follows from the first using Remark \ref{rmk-lastarrow} and exactness of localization. \end{proof} As $R$ varies in $\textup{Nilp}_{W(k_0)}$ we obtain fibered categories $\textup{LFCrys}$ and $\textup{Isoc}$ whose fibers over $R$ in $\textup{Nilp}_{W(k_0)}$ are the categories $\textup{LFCrys}(R/W(k_0))$ and $\textup{Isoc}(R)$, respectively. \begin{lemma}\label{lem-etalelocal} The fibered categories $\textup{LFCrys}$ and $\textup{Isoc}$ form stacks for the \'etale topology on $\textup{Nilp}_{W(k_0)}$. \end{lemma} \begin{proof} It is enough to show the result for $\textup{LFCrys}$, where the key point is that if $B \to A$ is a PD-thickening over $R$ and $R \to R'$ is a faithfully flat \'etale morphism, then the homomorphism $A \to A' := A\otimes_R R'$ is \'etale and faithfully flat, so there exists a unique \'etale faithfully flat lift $B \to B'$ with $B' \to A'$ a PD-thickening over $R'$. Then the result follows from \'etale descent for modules over rings along with the crystal property (\ref{eq-crystalproperty}). \end{proof} Suppose $R$ is a $p$-nilpotent $W(k_0)$-algebra, and let $R_0 = R/pR$. Then the closed embedding $i: \Spec R_0 \hookrightarrow \Spec R$ induces a morphism of topoi $\icris = (\icris_\ast, i_{\text{CRIS}}^\ast)$ between sheaves on $\textup{CRIS}(R_0/W(k_0))$ and sheaves on $\textup{CRIS}(R /W(k_0))$. By \cite[IV, Thm. 1.4.1]{Berthelot1974}, the functors $\icris_\ast$ and $i_{\text{CRIS}}^\ast$ are quasi-inverse to one another, and induce an equivalence of categories \begin{align}\label{eq-crystalequiv} \textup{LFCrys}(R_0/W(k_0)) \xrightarrow{\sim} \textup{LFCrys}(R/W(k_0)). \end{align} This equivalence extends to an equivalence $\textup{Isoc}(R_0) \xrightarrow{\sim} \textup{Isoc}(R)$. Let $R_0$ be an $\mathbb{F}_p$-algebra, and let $\phi_0$ denote the $p$-power Frobenius $r \mapsto r^p$ of $R_0$. If $B \to A$ is a PD-thickening over $R_0$, we write ${\phi_0}_!(B/A)$ for the PD-thickening $B \to A$ where $A$ is viewed as an $R_0$-algebra via restriction of scalars along $\phi_0$. For any crystal $\mathbb{D}$ in $\mathcal{O}_{\Spec R_0/W(k_0)}$-modules, we define the value of the Frobenius pullback $\phi_0^\ast \mathbb{D}$ on a $p$-adic PD-thickening $B \to A$ over $R_0$ by \begin{align*} (\phi_0^\ast \mathbb{D})_{B/A} := \mathbb{D}_{{\phi_0}_!(B/A)}. \end{align*} If $\sigma: B \to B$ is a lift of the Frobenius of $A$ which preserves the divided powers, then $\sigma$ induces a PD-morphism $(B \to A) \to {\phi_0}_!(B \to A)$, so by the crystal property we obtain \begin{align}\label{eq-frobcrystal} (\phi_0^\ast \mathbb{D})_{B/A} \xrightarrow{\sim} \sigma^\ast( \mathbb{D}_{B/A}). \end{align} More generally, if $R$ is a $p$-nilpotent $W(k_0)$-algebra and $\mathbb{D}$ is a crystal in locally free $\mathcal{O}_{\Spec R/W(k_0)}$-modules, then we can use the equivalence (\ref{eq-crystalequiv}) to define the Frobenius pullback $\phi^\ast\mathbb{D}$ of $\mathbb{D}$. Explicitly, \begin{align*} \phi^\ast \mathbb{D} := \icris_\ast(\phi_0^\ast i_{\text{CRIS}}^\ast \mathbb{D}). \end{align*} \subsection{The crystals associated to $p$-divisible groups and displays}\label{sub-zinkcrystals} We recall the crystals associated to $p$-divisible groups and to nilpotent Zink displays, and we discuss the connection between the two.
Our main reference for the crystals associated to $p$-divisible groups is \cite{BBM1982}. For more information on the crystals associated to nilpotent Zink displays, we refer the reader to \cite[\textsection 2.2]{Zink2002} and \cite[\textsection 2.4]{Lau2018}. If $X$ is a $p$-divisible group over a $p$-nilpotent $W(k_0)$-algebra $R$, denote by $\mathbb{D}(X)$ the \textit{covariant} Dieudonn\'e crystal of $X$ as in \cite{BBM1982}. In fact, the crystal associated to $X$ as defined in \textit{loc. cit.} is contravariant, so to obtain a covariant crystal we define $\mathbb{D}(X)$ to be the contravariant Dieudonn\'e crystal associated to $X^D$. Equivalently, by the crystalline duality theorem \cite[\textsection 5.3]{BBM1982}, $\mathbb{D}(X)$ is the dual of the contravariant Dieudonn\'e crystal associated to $X$. The Dieudonn\'e crystal $\mathbb{D}(X)$ is a crystal in finite locally free $\mathcal{O}_{\Spec R/W(k_0)}$-modules, and the sections of $\mathbb{D}(X)$ over the trivial PD-thickening $\id_R: R \to R$ are equipped with a filtration by finite projective $R$-modules \begin{align}\label{eq-hodgeX} \text{Fil}^0(\mathbb{D}(X)) = \mathbb{D}(X)_{R/R}\supset \text{Fil}^1(\mathbb{D}(X)) = \text{Lie}(X^D)^\vee \supset \textup{Fil}^2(\mathbb{D}(X)) = 0, \end{align} called the Hodge filtration of $X$, which makes the following sequence exact \begin{align}\label{eq-hodgeseqX} 0 \to \textup{Fil}^1(\mathbb{D}(X)) \to \mathbb{D}(X)_{R/R} \to \textup{Lie}(X) \to 0. \end{align} \begin{Def} A \textit{Dieudonn\'e crystal} over $R$ is a triple $(\mathbb{D},\mathbb{F},\mathbb{V})$, where $\mathbb{D}$ is a crystal in finite locally free $\mathcal{O}_{\Spec R/W(k_0)}$-modules, and \begin{align*} \mathbb{F}: \phi^\ast \mathbb{D} \to \mathbb{D} \text{ and } \mathbb{V}: \mathbb{D} \to \phi^\ast \mathbb{D} \end{align*} are morphisms of crystals such that $\mathbb{F} \circ \mathbb{V} = p\cdot \id_{\mathbb{D}}$ and $\mathbb{V} \circ \mathbb{F} = p \cdot \id_{\phi^\ast \mathbb{D}}$. \end{Def} If $X$ is a $p$-divisible group over an $\mathbb{F}_p$-algebra $R_0$, denote by $X^{(p)}$ the $p$-divisible group $X \otimes_{R_0, \phi_0} R_0$ obtained by base change along $\phi_0$. We obtain a Dieudonn\'e crystal structure on $\mathbb{D}(X)$ by taking $\mathbb{F}$ and $\mathbb{V}$ to be induced from the Verschiebung and Frobenius \begin{align*} V_{X}: X^{(p)} \to X, \ F_X: X \to X^{(p)}, \end{align*} respectively. Let us emphasize that since we are using the covariant Dieudonn\'e crystal, $X \mapsto \mathbb{D}(X)$ sends the Frobenius of $X$ to the Verschiebung of $\mathbb{D}(X)$ and the Verschiebung of $X$ to the Frobenius of $\mathbb{D}(X)$. More generally, if $R$ is a $p$-nilpotent $W(k_0)$-algebra, then we obtain $\mathbb{F}$ and $\mathbb{V}$ on $\mathbb{D}(X)$ by taking the unique maps lifting the Frobenius and Verschiebung for $i_{\textup{CRIS}}^\ast \mathbb{D}(X)$ along the equivalence (\ref{eq-crystalequiv}). The unit object $\mathbbm{1}$ in the rigid tensor category of finite locally free crystals in $\mathcal{O}_{\Spec R/ W(k_0)}$-modules is canonically isomorphic to the crystal $\mathbb{D}(\mu_{p^\infty})$ associated to the multiplicative $p$-divisible group $\mu_{p^\infty}$ over $R$. It follows that $\mathbbm{1}$ is endowed with the structure of a Dieudonn\'e crystal. Explicitly, we have a canonical isomorphism $\phi^\ast\mathbbm{1} \cong \mathbbm{1}$, and with respect to this isomorphism we take $\mathbb{F} = \id_{\mathbbm{1}}$ and $\mathbb{V} = p \cdot \id_{\mathbbm{1}}$. 
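As an illustration of these conventions (included only for orientation), consider the \'etale $p$-divisible group $\qq_p/\zz_p$ over $R$, whose dual is $\mu_{p^\infty}$. By the same reasoning as for $\mu_{p^\infty}$, the crystal $\mathbb{D}(\qq_p/\zz_p)$ is canonically isomorphic to $\mathbbm{1}$, but with respect to this identification we have \begin{align*} \mathbb{F} = p \cdot \id_{\mathbbm{1}} \text{ and } \mathbb{V} = \id_{\mathbbm{1}}, \end{align*} since $X \mapsto \mathbb{D}(X)$ exchanges Frobenius and Verschiebung. The two rank one Dieudonn\'e crystals are also distinguished by their Hodge filtrations: as $\textup{Lie}(\qq_p/\zz_p) = 0$, the sequence (\ref{eq-hodgeseqX}) gives $\textup{Fil}^1(\mathbb{D}(\qq_p/\zz_p)) = \mathbb{D}(\qq_p/\zz_p)_{R/R}$, whereas $\textup{Fil}^1(\mathbb{D}(\mu_{p^\infty})) = 0$.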
We will also endow the sections of $\mathbbm{1}$ over $\id_R: R \to R$ with the filtration \begin{align}\label{eq-hodge1} \textup{Fil}^0(\mathbbm{1}) = R \supset \textup{Fil}^1(\mathbbm{1}) = 0. \end{align} We will refer to this as the Hodge filtration for $\mathbbm{1}$. Let $R$ be a $p$-nilpotent $\zz_p$-algebra, and denote by $\textup{Zink}(R)$ the category of Zink displays over $R$, which is equivalent to the category of windows over $\mathcal{W}(R)$ and to the category of 1-displays over $\underline{W}(R)$ by Lemma \ref{lem-windows}. Denote by $\textup{nZink}(R)$ the full subcategory of nilpotent Zink displays (see \cite[Def. 11]{Zink2002}). If a Zink display $\underline{P}$ over $R$ is nilpotent, then we can associate to $\underline{P}$ a formal $p$-divisible group $\textup{BT}_R(\underline{P})$. By the main theorems of \cite{Zink2002} and \cite{Lau2008}, $\textup{BT}_R$ defines an equivalence of categories between nilpotent Zink displays and formal $p$-divisible groups over $R$. When the ring $R$ is clear from context, we will sometimes omit the subscript from $\textup{BT}_R$. An explicit quasi-inverse functor $\Phi_R$ for $\textup{BT}_R$ is defined in \cite[Prop. 2.1]{Lau2013}. Let us briefly review its definition. As a first step one defines a functor from $p$-divisible groups over $R$ to the category of filtered $F$-$V$-modules over $R$. Here a \textit{filtered $F$-$V$-module} over $R$ is a quadruple $(P, \Fil P, F^\sharp, V^\sharp)$, where $P$ is a finite projective $W(R)$-module with a filtration $I(R)P \subseteq \Fil P \subseteq P$ such that $P / \Fil P$ is projective over $R$, and where $F^\sharp: f^\ast P \to P \text{ and } V^\sharp: P \to f^\ast P $ are $W(R)$-module homomorphisms such that $F^\sharp \circ V^\sharp = p \cdot\id_{P}$ and $V^\sharp \circ F^\sharp = p \cdot \id_{f^\ast P}$. Let $R_0 = R/pR$. If $X$ is a $p$-divisible group over $R$, then the kernel of $W(R) \to R_0$ is naturally equipped with divided powers, making $W(R) \to R_0$ into a $p$-adic PD-thickening over $R_0$ (see Remark \ref{rmk-perfectequiv}). If we let $X_0 = X \otimes_R R_0$, then since the Frobenius for $W(R)$ is compatible with the PD-structure on the kernel of $W(R) \to R_0$ (see \cite[\textsection 1G]{Lau2014}), by (\ref{eq-frobcrystal}) we have \begin{align*} (\phi_0^\ast \mathbb{D}(X_0))_{W(R)/R_0} \cong f^\ast(\mathbb{D}(X_0)_{W(R)/R_0}). \end{align*} Hence if we take $P = \mathbb{D}(X_0)_{W(R)/R_0}$, the evaluations of $\mathbb{F}$ and $\mathbb{V}$ for $\mathbb{D}(X_0)$ on $W(R) \to R_0$ induce homomorphisms $F^\sharp$ and $V^\sharp$ as in the above definition. Moreover, the natural identification $\mathbb{D}(X)_{W(R)/R} \cong \mathbb{D}(X_0)_{W(R)/R_0}$ provides us with a map $P \to \textup{Lie}(X)$ via the crystal property for $\mathbb{D}(X)$. It follows that we can define a filtered $F$-$V$-module associated to $X$ by $(P, \Fil P, F^\sharp, V^\sharp)$, with $\Fil P = \ker(P \to \textup{Lie}(X))$. As in \cite{Lau2013} we write $\Theta_R$ for the functor which assigns a filtered $F$-$V$-module to a $p$-divisible group. If $\underline{P}= (P, \Fil P, F_0, F_1)$ is a Zink display over $R$, then $(P, \Fil P, F_0^\sharp, V^\sharp)$ is a filtered $F$-$V$-module, where $F_0^\sharp: f^\ast P \to P$ is the linearization of $F_0$ and $V^\sharp$ is the homomorphism $P \to f^\ast P$ associated to $\underline{P}$ by \cite[Lem. 10]{Zink2002} (see \textsection \ref{sub-frames}). Denote the functor assigning a filtered $F$-$V$-module to a Zink display by $\Upsilon_R$. By \cite[Prop.
2.1]{Lau2013}, there is a unique functor \begin{align}\label{eq-laufunctor} \Phi_R : \textup{pdiv}(R) \to \textup{Zink}(R) \end{align} which is compatible with base change and for which there is a natural isomorphism of functors $\Theta_R \cong \Upsilon_R \circ \Phi_R$. The restriction of $\Phi_R$ to formal $p$-divisible groups is an equivalence by \cite[Thm. 5.1]{Lau2013}, and $\Phi_R$ provides a quasi-inverse to $\textup{BT}_R$ by \cite[Lem. 8.1]{Lau2013}. If $\underline{P}$ is a Zink display over an $\mathbb{F}_p$-algebra $R_0$, define $\underline{P}^{(p)} = (P^{(p)}, \Fil P^{(p)}, F_0^{(p)}, F_1^{(p)})$ to be the base change of $\underline{P}$ along the $p$-power Frobenius $\phi_0: R_0 \to R_0$. By definition of base change for displays, we have $P^{(p)} = f^\ast P$. By \cite[Ex. 23]{Zink2002}, $F_0^\sharp$ and $V^\sharp$ induce functorial morphisms of Zink displays \begin{align}\label{eq-frver} \textup{Ver}_{\underline{P}}: \underline{P}^{(p)} \to \underline{P} \text{ and } \textup{Fr}_{\underline{P}}: \underline{P} \to \underline{P}^{(p)}, \end{align} respectively. If $X$ is a $p$-divisible group over $R_0$, then one sees from the definition of $\Phi_{R_0}$ that \begin{align}\label{eq-fv} \Phi_{R_0}(F_{X}) = \textup{Ver}_{\Phi_{R_0}(X)} \text{ and }\Phi_{R_0}(V_{X}) = \textup{Fr}_{\Phi_{R_0}(X)}. \end{align} Let us now recall the definition of the crystal associated to a nilpotent Zink display. Let $B \to A$ be a $p$-adic PD-thickening over $R$. Then the natural morphism of frames $\mathcal{W}(B/A) \to \mathcal{W}(A)$ induces an equivalence of categories between nilpotent windows over $\mathcal{W}(B/A)$ and nilpotent Zink displays over $A$ by \cite[Thm. 44]{Zink2002} (see also \cite[Prop. 10.4]{Lau2010}, and for the definition of nilpotence in this generality see \cite[\textsection 10.3]{Lau2010}). It follows that if $\underline{P}$ is a nilpotent Zink display over $R$, and $\underline{P}_A$ is the base change of $\underline{P}$ to $\mathcal{W}(A)$, then for any $p$-adic PD-thickening $B \to A$ over $R$ there is a unique (up to unique isomorphism which lifts the identity) lift of $\underline{P}_A$ to $\mathcal{W}(B/A)$. Denote this lift by $\underline{\tilde{P}} = (\tilde{P}, \Fil \tilde{P}, \tilde{F}_0, \tilde{F}_1)$. The evaluation of the Dieudonn\'e crystal $\mathbb{D}(\underline{P})$ associated to $\underline{P}$ on $B\to A$ is \begin{align*} \mathbb{D}(\underline{P})_{B/A} := \tilde{P} / I(B) \tilde{P}. \end{align*} In particular, if $\underline{P} = (P, \Fil P, F_0, F_1)$, then $\mathbb{D}(\underline{P})_{R/R} = P / I(R) P$. We refer to the filtration \begin{align*} \textup{Fil}^0(\mathbb{D}(\underline{P})) = P / I(R) P \supset \textup{Fil}^1(\mathbb{D}(\underline{P})) = \Fil P / I(R) P \supset \textup{Fil}^2(\mathbb{D}(\underline{P})) = 0 \end{align*} as the Hodge filtration of $\mathbb{D}(\underline{P})$ (or of $\underline{P}$), and we observe that the following sequence is exact \begin{align*} 0 \to \textup{Fil}^1(\mathbb{D}(\underline{P})) \to \mathbb{D}(\underline{P})_{R/R} \to P / \Fil P \to 0. \end{align*} We will sometimes denote $P / \Fil P$ by $\textup{Lie}(\underline{P})$. If $\underline{P} = \Phi_R(X)$ for a formal $p$-divisible group $X$ over $R$, then by definition of $\Phi_R$ we have $\textup{Lie}(\underline{P}) = P / \Fil P \cong \textup{Lie}(X)$.
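As a concrete illustration (included only for orientation, and using the standard fact that $\textup{BT}_R$ carries the multiplicative display to $\mu_{p^\infty}$), consider the multiplicative Zink display $\underline{P}_m = (W(R), I(R), f, f_1)$, where $f$ is the Frobenius of $W(R)$ and $f_1: I(R) \to W(R)$ is the inverse of the Verschiebung, i.e. $f_1(v(\xi)) = \xi$. For a $p$-adic PD-thickening $B \to A$ over $R$, one checks that the unique lift of $(\underline{P}_m)_A$ to $\mathcal{W}(B/A)$ has underlying module $\tilde{P} = W(B)$, so that \begin{align*} \mathbb{D}(\underline{P}_m)_{B/A} = W(B)/I(B)W(B) \cong B, \end{align*} i.e. $\mathbb{D}(\underline{P}_m) \cong \mathbbm{1}$. Moreover $\textup{Fil}^1(\mathbb{D}(\underline{P}_m)) = I(R)/I(R) = 0$ and $\textup{Lie}(\underline{P}_m) = W(R)/I(R) = R \cong \textup{Lie}(\mu_{p^\infty})$, in agreement with (\ref{eq-hodge1}) and with Lemma \ref{lem-zinkdieudonne} below.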
If $\underline{P}$ is the Zink display associated to a higher display $\underline{M}$ by Lemma \ref{lem-windows}, then \begin{align}\label{eq-samehodge} \textup{Fil}^i(\underline{M}) = \textup{Fil}^i(\mathbb{D}(\underline{P})), \end{align} where $\textup{Fil}^i(\underline{M})$ is the Hodge filtration of $\underline{M}$ (see (\ref{eq-hodgedisp})). The assignment $\underline{P} \mapsto \mathbb{D}(\underline{P})$ is functorial in $\underline{P}$, so if $\underline{P}$ is a Zink display over an $\mathbb{F}_p$-algebra $R_0$, then the maps (\ref{eq-frver}) induce morphisms of crystals \begin{align*} \mathbb{F}: \mathbb{D}(\underline{P}^{(p)}) \to \mathbb{D}(\underline{P}) \text{ and } \mathbb{V}: \mathbb{D}(\underline{P}) \to \mathbb{D}(\underline{P}^{(p)}). \end{align*} Moreover, as a consequence of the definition of $\mathbb{D}(\underline{P})$ we obtain a canonical isomorphism \begin{align}\label{eq-crystalbc} \phi_0^\ast \mathbb{D}(\underline{P}) \cong \mathbb{D}(\underline{P}^{(p)}). \end{align} Hence $\mathbb{D}(\underline{P})$ is canonically endowed with the structure of a Dieudonn\'e crystal. As in the case of $p$-divisible groups, we can use (\ref{eq-crystalequiv}) to lift this structure in the case of $p$-nilpotent $\zz_p$-algebras $R$. \begin{lemma}\label{lem-zinkdieudonne} The functors $\underline{P} \mapsto \mathbb{D}(\underline{P})$ and $\underline{P} \mapsto \mathbb{D}(\textup{BT}_R(\underline{P}))$ from nilpotent Zink displays to crystals in finite locally free $\mathcal{O}_{\Spec R / \zz_p}$-modules are naturally isomorphic. Moreover, the isomorphism is compatible with the Frobenius and Verschiebung maps, and it preserves the Hodge filtration. \end{lemma} \begin{proof} The first statement is proven in \cite[Thm. 94]{Zink2002} for the restriction of these crystals to the nilpotent crystalline site, and in \cite[Cor. 97]{Zink2002} for the restriction to PD-thickenings $B \to A$ which have nilpotent kernel. In general, it follows from the results of \cite{Lau2013}. Indeed, it is enough to show that the functors $X \mapsto \mathbb{D}(X)$ and $X \mapsto \mathbb{D}(\Phi_R(X))$ from infinitesimal $p$-divisible groups to $\textup{LFCrys}(\Spec R/\zz_p)$ are naturally isomorphic. For any given PD-thickening $B \to A$ over $R$, there is an isomorphism of $B$-modules \begin{align}\label{eq-crystalisom} \mathbb{D}(X)_{B/A} \cong \mathbb{D}(\Phi_R(X))_{B/A} \end{align} by \cite[Cor. 2.7]{Lau2013}. Explicitly, by the results of \cite{Lau2013}, if $\underline{\tilde{P}}$ is the unique lift of $\Phi_R(X)$ to $\mathcal{W}(B/A)$, then we can identify $\tilde{P} = \mathbb{D}(X)_{W(B)/A}$, so (\ref{eq-crystalisom}) is obtained from the crystal property applied to the morphism of PD-thickenings $(W(B) \to A) \to (B \to A)$. That (\ref{eq-crystalisom}) is compatible with the transition isomorphisms follows from the cocycle condition for $\mathbb{D}(X)$ and uniqueness of liftings along $\mathcal{W}(B/A) \to \mathcal{W}(A)$. Functoriality in $X$ follows from functoriality of $X \mapsto \mathbb{D}(X)$, and if $\Phi_R(X) = (P, \Fil P, F_0, F_1)$, then $\Fil P = \ker(\mathbb{D}(X)_{W(R)/R} \to \textup{Lie}(X))$, so it follows from (\ref{eq-hodgeseqX}) that (\ref{eq-crystalisom}) preserves the Hodge filtrations. Finally, to prove compatibility with the Frobenius and Verschiebung one reduces to the case where $R$ is an $\mathbb{F}_p$-algebra, in which case we have $\mathbb{F}_{\mathbb{D}(X)} = \mathbb{D}(V_X)$ and $\mathbb{V}_{\mathbb{D}(X)} = \mathbb{D}(F_X)$.
Then the result follows from functoriality of the isomorphism $\mathbb{D}(X) \cong \mathbb{D}(\Phi_R(X))$ along with (\ref{eq-fv}) and compatibility of $\Phi_R$ with base change. \end{proof} \section{$G$-displays} \label{section-Gdisplays} Let $G = \Spec \mathcal{O}_G$ be a flat affine group scheme of finite type over $\zz_p$, and let $\mu: \mathbb{G}_{m,W(k_0)} \to G_{W(k_0)}$ be a cocharacter of $G_{W(k_0)}$. In \textsection \ref{sub-gdisplau} we define the stack of $G$-displays of type $\mu$ over an \'etale sheaf of frames, following \cite{Lau2018}, and in \textsection \ref{sub-gdispdaniels} we develop Tannakian analogs of these objects. If $R$ is in $\textup{Nilp}_{\zz_p}$, and $\underline{S}$ is an \'etale sheaf of frames on $\Spec R$ which satisfies descent for displays (see Definition \ref{def-descent}), we prove (Theorem \ref{thm-equiv}) that our Tannakian framework is equivalent to Lau's torsor-theoretic framework. This is closely analogous to \cite[Thm. 3.16]{Daniels2019}, and throughout we provide references to \cite{Daniels2019} in lieu of proofs whenever the arguments mimic those in \textit{loc. cit.}. In \textsection \ref{sub-lifting}, under the additional assumptions that $G$ is smooth and $\mu$ is minuscule, we recall Lau's unique lifting lemma (Proposition \ref{prop-lifting}) for adjoint nilpotent $(G,\mu)$-displays. The unique lifting lemma is a crucial component of the construction of the crystal associated to an adjoint nilpotent $(G,\mu)$-display in \textsection \ref{sub-crystalGdisp}. \subsection{$G$-displays of type $\mu$}\label{sub-gdisplau} Recall from \cite[\textsection 5]{Lau2018} that a frame $\underline{S}$ is a frame over $W(k_0)$ if $S$ is a graded $W(k_0)$-algebra and $\sigma: S \to S_0$ extends the Frobenius of $W(k_0)$. In particular, if $R$ is in $\textup{Nilp}_{W(k_0)}$ and $B \to A$ is a PD-thickening over $R$, then the frames $\underline{W}(R)$ and $\underline{W}(B/A)$ are $W(k_0)$-frames. See \cite[Ex. 5.0.2]{Lau2018} for details. If $X = \text{Spec }A$ is an affine $W(k_0)$-scheme, then an action of $\mathbb{G}_m$ on $X$ is equivalent to a $\zz$-grading on $A$ (see \cite[\textsection 5.1]{Lau2018} and \cite[\textsection 3.1]{Daniels2019} for details). If $\mathbb{G}_m$ acts on $X$, and $S$ is a $\zz$-graded $W(k_0)$-algebra, denote by $X(S)^0 \subseteq X(S)$ the set of $\mathbb{G}_m$-equivariant sections $\text{Spec }S \to X$ over $W(k_0)$. In other words, $X(S)^0$ is the set $\textup{Hom}_{W(k_0)}^0(A,S)$ of homomorphisms $A \to S$ of graded $W(k_0)$-algebras. Suppose $\underline{S}$ is an \'etale sheaf of frames on $\text{Spec }R$. If $R \to R'$ is \'etale, write \begin{align*} \underline{S}(R') = (S(R'), \sigma(R'), \tau(R')), \end{align*} so $S(R')$ is a $\zz$-graded ring, and $\sigma(R')$ and $\tau(R')$ are ring homomorphisms $S(R') \to S(R')_0$ as in Definition \ref{def-frame}. To $X$ and $\underline{S}$ we associate two functors on \'etale $R$-algebras: \begin{align*} X(\underline{S})^0: R' \mapsto X(S(R'))^0, \text{ and } X(\underline{S}_0): R' \mapsto X(S(R')_0). \end{align*} \begin{lemma}\label{lem-sheaves} Let $\underline{S}$ be an \'etale sheaf of frames on $\textup{Spec }R$, and let $X$ be an affine scheme of finite type over $W(k_0)$. Then the functors $X(\underline{S})^0$ and $X(\underline{S}_0)$ are \'etale sheaves on $\textup{Spec }R$. \end{lemma} \begin{proof} The proof is formally the same as that of \cite[Lem. 5.3.1]{Lau2018}.
\end{proof} Let us recall the definition of the display group associated to $G$ and $\mu$ with values in a $\zz$-graded ring $S$. For details we refer the reader to \cite[\textsection 3.1]{Daniels2019} and \cite[\textsection 5.1]{Lau2018}. The cocharacter $\mu$ defines a right action of $\mathbb{G}_{m,W(k_0)}$ on $G_{W(k_0)}$ by \begin{align*} g \cdot \lambda := \mu(\lambda)^{-1} g \mu(\lambda) \end{align*} for any $W(k_0)$-algebra $R$, $g \in G_{W(k_0)}(R)$ and $\lambda \in \mathbb{G}_{m,W(k_0)}(R)$. If $S$ is a $\zz$-graded ring, define \begin{align*} G(S)_\mu := G(S)^0, \end{align*} i.e., $G(S)_\mu$ is the subset of $G_{W(k_0)}(S) = \text{Hom}_{W(k_0)}(\mathcal{O}_G,S)$ consisting of $W(k_0)$-algebra homomorphisms which preserve the respective gradings. Similarly, if $\underline{S}$ is an \'etale sheaf of frames on $\text{Spec }R$, define \begin{align*} G(\underline{S})_\mu := G(\underline{S})^0, \end{align*} so $G(\underline{S})_\mu$ is an \'etale sheaf of groups on $\Spec{R}$. Suppose $\underline{S} = (S,\sigma,\tau)$ is a $W(k_0)$-frame. Then the $\zz_p$-algebra homomorphisms $\sigma, \tau: S \to S_0$ induce group homomorphisms \begin{align*} \sigma, \tau: G(S)_\mu \to G(S_0) \end{align*} as follows: if $g \in G(S)_\mu$, then $\sigma(g)$ (resp. $\tau(g)$) is defined by post-composing $g \in \text{Hom}_{W(k_0)}(\mathcal{O}_G,S)$ with $\sigma: S \to S_0$ (resp. $\tau:S \to S_0$). Using $\sigma$ and $\tau$, we define an action of $G(S)_\mu$ on $G(S_0)$: \begin{align}\label{eq-action} G(S_0) \times G(S)_\mu \to G(S_0), \ (x, g) \mapsto \tau(g)^{-1} x \sigma(g). \end{align} If $\underline{S}$ is an \'etale sheaf of $W(k_0)$-frames on $\text{Spec }R$, this action sheafifies to provide an action of $G(\underline{S})_\mu$ on $G(\underline{S}_0)$. \begin{Def}\label{def-gdisp} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra, and suppose $\underline{S}$ is an \'etale sheaf of $W(k_0)$-frames on $\text{Spec }R$. The \textit{stack of $G$-displays of type $\mu$ over $\underline{S}$} is the \'etale quotient stack \begin{align*} G\text{-}\textup{Disp}_{{\underline{S}},\mu} := [G(\underline{S}_0) / G(\underline{S})_{\mu}] \end{align*} over $\textup{\'Et}_R$, where $G(\underline{S})_{\mu}$ acts on $G(\underline{S}_0)$ via the action (\ref{eq-action}). \end{Def} Explicitly, for an \'etale $R$-algebra $R'$, $G\text{-}\textup{Disp}_{\underline{S},\mu}(R')$ is the groupoid of pairs $(Q,\alpha)$, where $Q$ is an \'etale locally trivial $G(\uS)_\mu$-torsor over $\text{Spec }R'$ and $\alpha: Q \to G(\uS_0)$ is a $G(\uS)_\mu$-equivariant morphism for the action (\ref{eq-action}). Let us point out the case which will be of particular interest to us. Suppose $B \to A$ is a PD-thickening of $p$-nilpotent $W(k_0)$-algebras. If $A \to A'$ is \'etale, let $B(A')$ be the unique \'etale $B$-algebra with $B(A') \otimes_B A = A'$ (see e.g., \cite[\href{https://stacks.math.columbia.edu/tag/039R}{Tag 039R}]{stacks-project}). Denote by $\underline{W}_{B/A}$ the \'etale sheaf of frames $A'\mapsto \underline{W}(B(A')/A')$ (see Lemma \ref{lem-sheafqf}). By taking $\uS = \underline{W}_{B/A}$ in Definition \ref{def-gdisp} we obtain the stack of $G$-displays of type $\mu$ for $B\to A$ \begin{align*} \GdispBA. \end{align*} We close this section by recalling the stack of $G$-displays of type $\mu$ over the Witt frame.
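Before doing so, let us illustrate the display group in the most basic case; the following computation is a routine unwinding of the definitions, included only for orientation, and uses the convention that $f \in \mathcal{O}_G$ is homogeneous of weight $n$ if $f(g \cdot \lambda) = \lambda^n f(g)$ (the opposite sign convention transposes the block shape below). Take $G = \textup{GL}_h$ and let $\mu$ be the cocharacter $\lambda \mapsto \textup{diag}(\lambda^{i_1}, \dots, \lambda^{i_h})$ for integers $i_1 \le \dots \le i_h$. For the coordinate functions $x_{jk}$ on $\textup{GL}_h$ we compute \begin{align*} x_{jk}(g \cdot \lambda) = x_{jk}(\mu(\lambda)^{-1} g \mu(\lambda)) = \lambda^{i_k - i_j} x_{jk}(g), \end{align*} so $x_{jk}$ is homogeneous of weight $i_k - i_j$, and a graded homomorphism $\mathcal{O}_{\textup{GL}_h} \to S$ must send $x_{jk}$ into $S_{i_k - i_j}$. Hence $\textup{GL}_h(S)_\mu$ consists of the matrices $(a_{jk})$ with $a_{jk} \in S_{i_k - i_j}$ whose determinant, which automatically lies in $S_0$, is a unit in $S_0$. In particular, for $(i_1, \dots, i_h) = (0^{(d)}, 1^{(h-d)})$ these are the invertible block matrices $\left(\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right)$ with the entries of $A$ and $D$ in $S_0$, those of $B$ in $S_1$, and those of $C$ in $S_{-1}$.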
Let $\underline{W}$ be the fpqc sheaf of frames on $\textup{Nilp}_{\zz_p}$ given by $R\mapsto \underline{W}(R).$ Associated to $G$, $\mu$, and $\underline{W}$ we have two group-valued functors on $\textup{Nilp}_{W(k_0)}$: \begin{align*} L^+G := G(\underline{W}_0), \text{ and } L^+_\mu G := G(\underline{W})^0. \end{align*} By \cite[Lem. 5.4.1]{Lau2018} these are representable functors. \begin{Def} The stack of \textit{$G$-displays of type $\mu$ over $\underline{W}$} is the fpqc quotient stack \begin{align*} \Gdisp := [L^+G / L^+_\mu G] \end{align*} over $\textup{Nilp}_{W(k_0)}$, where $L^+_\mu G$ acts on $L^+G$ via the action (\ref{eq-action}). \end{Def} \subsection{Tannakian $G$-displays}\label{sub-gdispdaniels} Continuing the notation of the previous section, let $G$ be a flat affine group scheme of finite type over $\zz_p$, and let $\mu: \mathbb{G}_{m,W(k_0)} \to G_{W(k_0)}$ be a cocharacter for $G_{W(k_0)}$. Let us recall some definitions from \cite{Daniels2019}. If $(V,\pi)$ is any representation of $G$, then $V_{W(k_0)} = V\otimes_{\zz_p} W(k_0)$ is graded by the action of the cocharacter $\mu$, and for any $W(k_0)$-algebra $R$ we obtain an exact tensor functor $\mathscr{C}(\underline{W})_{\mu,R}$ (denoted $\mathscr{C}_{\mu,R}$ in \cite{Daniels2019}), given by \begin{align*} \mathscr{C}(\underline{W})_{\mu,R}: \textup{Rep}_{\zz_p}G \to \textup{PGrMod}(W(R)^\oplus), \ (V,\pi) \mapsto V_{W(k_0)} \otimes_{W(k_0)} W(R)^\oplus. \end{align*} We refer to an exact tensor functor $\mathscr{F}: \textup{Rep}_{\zz_p}G \to \textup{PGrMod}(W(R)^\oplus)$ as a \textit{graded fiber functor} over $W(R)^\oplus$, and we say $\mathscr{F}$ is \textit{of type $\mu$} if $\mathscr{F}$ is fpqc-locally isomorphic to $\mathscr{C}(\underline{W})_{\mu,R}$. Let $\upsilon_R$ denote the forgetful functor $\textup{Disp}(\underline{W}(R)) \to \textup{PGrMod}(W(R)^\oplus)$. Recall the following definition (see \cite[Def. 3.14]{Daniels2019}). \begin{Def} A \textit{$(G,\mu)$-display over $\underline{W}(R)$} is an exact tensor functor \begin{align*} \mathscr{P}: \textup{Rep}_{\zz_p}G \to \textup{Disp}(\underline{W}(R)) \end{align*} such that $\upsilon_R \circ \mathscr{P}$ is a graded fiber functor of type $\mu$. \end{Def} In \cite{Daniels2019} we referred to these objects as ``Tannakian $(G,\mu)$-displays''. By \cite[Cor. 3.17]{Daniels2019}, the category of Tannakian $(G,\mu)$-displays over $R$ coincides with the category of $(G,\mu)$-displays as in \cite{BP2017} when both categories are defined, so we feel there is no risk of confusion in adopting the more streamlined terminology. The following is the main theorem of \cite[\textsection 3]{Daniels2019}: \begin{thm} The stacks of $(G,\mu)$-displays and $G$-displays of type $\mu$ over $\textup{Nilp}_{W(k_0)}$ are equivalent. \end{thm} In this section we prove an analogous theorem for $G$-displays of type $\mu$ over \'etale sheaves of frames with good descent properties. Let $R$ be a ring, and let $S$ be an \'etale sheaf of $\zz$-graded rings over $\Spec{R}$. We will denote by $\textup{PGrMod}_{S}$ the fibered category over $\textup{\'Et}_R$ whose fiber over an \'etale $R$-algebra $R'$ is $\textup{PGrMod}(S(R'))$. Further, if $\uS$ is a sheaf of frames, let $\textup{Disp}_{\uS}$ denote the fibered category of displays over $\uS$. \begin{Def}\label{def-descent} We say: \begin{itemize} \item An \'etale sheaf of $\zz$-graded rings $S$ on $\text{Spec }R$ \textit{satisfies descent for modules} if $\textup{PGrMod}_{S}$ is an \'etale stack over $\textup{\'Et}_R$.
\item An \'etale sheaf of frames $\underline{S}$ on $\Spec{R}$ \textit{satisfies descent for displays} if $\textup{Disp}_{\uS}$ is an \'etale stack over $\textup{\'Et}_R$. \end{itemize} \end{Def} \begin{lemma}\label{lem-descentmoddisp} Let $\underline{S}$ be an \'etale sheaf of frames on $\Spec{R}$ such that $S$ satisfies descent for modules. Then $\underline{S}$ satisfies descent for displays. \end{lemma} \begin{proof} That morphisms descend follows from Lemma \ref{lem-localproperties} \ref{lem-dispmorphism} and the fact that $S$ satisfies descent for modules. To prove that objects descend we need only show that isomorphisms $\sigma^\ast M \xrightarrow{\sim} \tau^\ast M$ form an \'etale sheaf. But since $\underline{S}$ is an \'etale sheaf of frames, the functor $S_0: R'\mapsto S(R')_0$ is an \'etale sheaf of rings on $\Spec R$, and so for any finite projective $S(R)_0$-module $N$ and any faithfully flat \'etale $R$-algebra $R'$ the following sequence is exact: \begin{align*} 0 \to N \to N \otimes_{S(R)_0} S(R')_0 \rightrightarrows N \otimes_{S(R)_0} S(R'\otimes_R R')_0, \end{align*} and the result follows. \end{proof} \begin{rmk} The frame of interest for the purposes of this paper is the relative Witt frame $\underline{W}(B/A)$ associated to a $p$-adic PD-thickening $B \to A$ (Example \ref{ex-relwittframe}). The \'etale sheaf of frames $A' \mapsto \underline{W}(B'/A')$ associated to $\underline{W}(B/A)$ (see \textsection \ref{sub-descent}) satisfies descent for modules (hence for displays as well, by Lemma \ref{lem-descentmoddisp}) by Proposition \ref{prop-descent}. The other primary example of a sheaf of frames which satisfies descent for modules is the \'etale sheaf of frames on $\Spec R$ associated to a $p$-adic frame $\underline{S}$ over $R$ (see \cite[Lem. 4.3.1]{Lau2018}). The Zink frame $\mathbb{W}(R)$ over an admissible ring $R$ \cite[Ex. 2.1.13]{Lau2018} and its relative analog \cite[Ex. 2.1.14]{Lau2018} for a PD-thickening $B \to A$ of admissible rings, as well as the truncated Witt frames over $\mathbb{F}_p$-algebras \cite[Ex. 2.1.6]{Lau2018} and their relative analogs are all examples of $p$-adic frames. The relative Witt frame $\underline{W}(B/A)$ for $B \to A$ is also a $p$-adic frame, but the \'etale sheaf of frames associated to it by \cite[Lem. 4.2.3]{Lau2018} using the $p$-adic topology differs from the one we consider here, which uses the natural topology for the Witt vectors (see \cite[Ex. 4.2.7]{Lau2018}). \end{rmk} \begin{Def} Let $S$ be a $\zz$-graded $W(k_0)$-algebra. A \textit{graded fiber functor} over $S$ is an exact tensor functor \begin{align*} \mathscr{F}: \textup{Rep}_{\zz_p}G \to \textup{PGrMod}(S). \end{align*} \end{Def} Denote by $\textup{GFF}(S)$ the category of graded fiber functors over $S$. Suppose $S$ is an \'etale sheaf of $\zz$-graded rings on $\text{Spec }R$. If $R' \to R''$ is a morphism of \'etale $R$-algebras, the natural base change $M \mapsto M\otimes_{S(R')} S(R'')$ induces a base change functor $\textup{GFF}(S(R')) \to \textup{GFF}(S(R''))$. In this way we obtain a fibered category $\textup{GFF}_{S}$ over $\textup{\'Et}_R$. \begin{lemma}\label{lem-gffstack} Let $\uS$ be an \'etale sheaf of frames such that the underlying sheaf of graded rings $S$ satisfies descent for modules. Then the fibered category $\textup{GFF}_{S}$ is an \'etale stack over $\textup{\'Et}_R$. \end{lemma} \begin{proof} The proof is the same as that of \cite[Lem. 3.7]{Daniels2019}, with Lemma \ref{lem-localproperties} \ref{lem-descentseq} replacing \cite[Lem. 2.15]{Daniels2019}.
\end{proof} Suppose $R$ is a $W(k_0)$-algebra, and that $\uS$ is an \'etale sheaf of $W(k_0)$-frames over $\Spec{R}$ which satisfies descent for modules. For any cocharacter $\mu$ of $G$ defined over $W(k_0)$ and any \'etale $R$-algebra $R'$, we define a distinguished graded fiber functor over $S(R')$. Given a representation $(V,\pi)$ in $\textup{Rep}_{\zz_p}G$, let \begin{align*} V_{W(k_0)}^i = \{v \in V_{W(k_0)} \mid (\pi \circ \mu)(z) \cdot v = z^i v \text{ for all } z \in \mathbb{G}_m(W(k_0))\}. \end{align*} Then $\mu$ induces a canonical weight decomposition \begin{align} V_{W(k_0)} = \bigoplus_{i \in \zz} V_{W(k_0)}^i. \end{align} Since any morphism of representations preserves the grading induced by $\mu$, we obtain an exact tensor functor \begin{align}\label{eq-fiberfunctor} \mathscr{C}(\uS)_{\mu,R'}: \textup{Rep}_{\zz_p}G \to \textup{PGrMod}_{S}(R'), \ V \mapsto V_{W(k_0)} \otimes_{W(k_0)} S(R'). \end{align} If $R'$ is an \'etale $R$-algebra, then $\mathscr{C}(\uS)_{\mu,R'}$ is given by the composition of functors \begin{align*} \textup{Rep}_{\zz_p}G \xrightarrow{\mathscr{C}(\uS)_{\mu,R}} \textup{PGrMod}_{S}(R) \to \textup{PGrMod}_{S}(R'), \end{align*} where the second functor is the canonical base change. If $R$ is understood, we will suppress it in the notation and write $\mathscr{C}(\uS)_\mu$ for $\mathscr{C}(\uS)_{\mu,R}$. \begin{Def}\label{def-typemu} A graded fiber functor $\mathscr{F}$ over $S(R)$ is \textit{of type $\mu$} if for some faithfully flat \'etale extension $R \to R'$ there is an isomorphism $\mathscr{F}_{R'}\cong \mathscr{C}(\uS)_{\mu,R'}$. \end{Def} Let $\textup{GFF}_{S,\mu}$ denote the fibered category of graded fiber functors of type $\mu$. Since the property of being of type $\mu$ is \'etale-local, $\textup{GFF}_{S,\mu}$ forms a substack of $\textup{GFF}_{S}$. If $\mathscr{F}_1$ and $\mathscr{F}_2$ are two graded fiber functors over $\uS$, denote by $\textup{\underline{Isom}}^\otimes(\mathscr{F}_1, \mathscr{F}_2)$ the \'etale sheaf of isomorphisms of tensor functors $\mathscr{F}_1 \xrightarrow{\sim} \mathscr{F}_2$. Let $\textup{\underline{Aut}}^\otimes(\mathscr{F}) = \textup{\underline{Isom}}^\otimes(\mathscr{F},\mathscr{F})$. The following is the analog of the main theorems of \cite[\textsection 3.2]{Daniels2019}. \begin{thm}\label{thm-isom} Let $\uS$ be an \'etale sheaf of $W(k_0)$-frames which satisfies descent for modules. The assignment $g \mapsto (\pi(g))_{(V,\pi)}$ defines an isomorphism of \'etale sheaves on $\Spec{R}$ \begin{align*} G(\uS)_\mu \xrightarrow{\sim} \textup{\underline{Aut}}^\otimes(\mathscr{C}(\uS)_{\mu}), \end{align*} which, in turn, induces an equivalence of stacks \begin{align*} \textup{GFF}_{S,\mu} \xrightarrow{\sim} \textup{Tors}_{G(\uS)_\mu}, \ \mathscr{F} \mapsto \textup{\underline{Isom}}^\otimes(\mathscr{C}(\uS)_\mu, \mathscr{F}). \end{align*} \end{thm} \begin{proof} The arguments of \cite[\textsection 3.2]{Daniels2019} go through nearly verbatim, after replacing the Witt frame with $\uS$, and the fpqc topology with the \'etale topology. \end{proof} For any \'etale $R$-algebra $R'$ we have a forgetful functor \begin{align}\label{eq-forgetful} \upsilon_{S(R')}: \textup{Disp}_{\uS}(R') \to \textup{PGrMod}_{S}(R'), \ (M,F) \mapsto M. \end{align} \begin{Def}\label{def-gmu} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra. \begin{itemize} \item A \textit{$G$-display over $\uS(R)$} is an exact tensor functor \begin{align*} \mathscr{P}: \textup{Rep}_{\zz_p}G \to \textup{Disp}_{\uS}(R).
\end{align*} \item A \textit{$(G,\mu)$-display} over $\uS(R)$ is a $G$-display $\mathscr{P}$ over $\uS(R)$ such that $\upsilon_{S(R)} \circ \mathscr{P}$ is a graded fiber functor of type $\mu$. \end{itemize} \end{Def} If $R \to R'$ is \'etale, denote by $G$-\textup{Disp}$^\otimes(\uS(R'))$, resp. $G$-\textup{Disp}$_\mu^\otimes(\uS(R'))$ the category of $G$-displays, resp. the full subcategory of $(G,\mu)$-displays over $\uS(R')$. By an analog of Lemma \ref{lem-gffstack} we see that $G$-displays form an \'etale stack $G$-\textup{Disp}$_{\uS}^\otimes$ over $\textup{\'Et}_R$, and $(G,\mu)$-displays define a substack $G$-\textup{Disp}$_{\uS,\mu}^\otimes$. There are a number of useful functorialities between categories of $G$-displays. If $\mathscr{P}$ is a $G$-display over $\uS(R)$ and $\psi: R \to R'$ is a homomorphism of $p$-nilpotent $W(k_0)$-algebras, we denote by $\psi^\ast \mathscr{P}$ or $\mathscr{P}_{\underline{S}(R')}$ the base change of $\mathscr{P}$, which is given by \begin{align*} \textup{Rep}_{\zz_p}G \xrightarrow{\mathscr{P}} \textup{Disp}_{\uS}(R) \to \textup{Disp}_{\uS}(R'). \end{align*} Similarly, if $\alpha: \uS \to \underline{S'}$ is a morphism of \'etale sheaves of frames, we obtain a base change functor \begin{align}\label{eq-changeofframe} \alpha^\ast: G\textup{-Disp}_{\uS}^\otimes \to G\textup{-Disp}_{\underline{S'}}^\otimes \end{align} given by post-composition with $\textup{Disp}(\underline{S}(R)) \to \textup{Disp}(\underline{S'}(R))$. Finally, if $\gamma:G \to G'$ is a homomorphism of $\zz_p$-group schemes, and $\mathscr{P}$ is a $G$-display over $\underline{S}(R)$, we denote by $\gamma(\mathscr{P})$ the $G'$-display \begin{align*} \textup{Rep}_{\zz_p}G' \xrightarrow{\text{res}} \textup{Rep}_{\zz_p}G \xrightarrow{\mathscr{P}} \textup{Disp}_{\underline{S}}(R). \end{align*} If $\mathscr{P}$ is a $(G,\mu)$-display, then $\gamma(\mathscr{P})$ is a $(G', \gamma\circ\mu)$-display. To any $(G,\mu)$-display we can associate a $G$-display of type $\mu$. Let us summarize the construction (see \cite[Constr. 3.15]{Daniels2019} for details). Let $\mathscr{P}$ be a $(G,\mu)$-display over $\uS(R)$. By Theorem \ref{thm-isom}, \begin{align*} Q_{\mathscr{P}}:= \underline{\textup{Isom}}^\otimes(\mathscr{C}(\uS)_{\mu,R},\upsilon_{S(R)}\circ\mathscr{P}) \end{align*} is a $G(\uS)_\mu$-torsor over $R$. If $R'$ is an \'etale $R$-algebra, write $\mathscr{P}_{R'}(V,\pi) = (M(\pi)',F(\pi)')$ for any $(V,\pi)$ in $\textup{Rep}_{\zz_p}G$. Given an isomorphism of tensor functors $\lambda: \mathscr{C}(\uS)_{\mu,R'} \xrightarrow{\sim} \upsilon_{S(R')} \circ \mathscr{P}_{R'}$, we obtain an automorphism \begin{align*} \alpha_{\mathscr{P}}(\lambda)^\pi := \tau^\ast(\lambda^\pi)^{-1} \circ (F(\pi)')^\sharp \circ \sigma^\ast(\lambda^{\pi}) \end{align*} of $V \otimes_{\zz_p} S(R')_0$ for every $(V,\pi)$ in $\textup{Rep}_{\zz_p}G$. If $\omega_{S(R')_0}$ denotes the canonical fiber functor $(V,\pi) \mapsto V \otimes_{\zz_p} S(R')_0$, then the collection $(\alpha_{\mathscr{P}}(\lambda)^\pi)_{(V,\pi)}$ constitutes an element of $\text{Aut}^\otimes(\omega_{S(R')_0})$. By Tannakian duality \cite[Thm. 44]{Cornut2014}, the map $g \mapsto (\pi(g))_{(V,\pi)}$ determines an isomorphism \begin{align*} G(S(R')_0) \cong \text{Aut}^\otimes(\omega_{S(R')_0}), \end{align*} so there is some $\alpha_{\mathscr{P}}(\lambda) \in G(S(R')_0) = G(\uS_0)(R')$ such that $\pi(\alpha_{\mathscr{P}}(\lambda)) = \alpha_{\mathscr{P}}(\lambda)^\pi$ for every $(V,\pi)$.
Altogether, the assignment $\lambda \mapsto \alpha_{\mathscr{P}}(\lambda)$ defines a morphism of \'etale sheaves \begin{align}\label{eq-alpha} \alpha_{\mathscr{P}}: Q_{\mathscr{P}} \to G(\uS_0). \end{align} As in \cite[Constr. 3.15]{Daniels2019} one checks that the association $\mathscr{P} \mapsto (Q_{\mathscr{P}},\alpha_{\mathscr{P}})$ is functorial in $\mathscr{P}$ and compatible with base change, so we obtain a morphism of stacks \begin{align}\label{eq-morphism} G\text{-}\textup{Disp}_{\uS,\mu}^\otimes \to G\text{-}\textup{Disp}_{\uS,\mu}, \ \mathscr{P} \mapsto (Q_{\mathscr{P}}, \alpha_{\mathscr{P}}). \end{align} The following is the analog of \cite[Thm. 3.16]{Daniels2019}. \begin{thm}\label{thm-equiv} If $\uS$ satisfies descent for modules, the morphism $($\ref{eq-morphism}$)$ is an equivalence of \'etale stacks over $\textup{\'Et}_R$. \end{thm} \begin{proof} The proof of \cite[Thm. 3.16]{Daniels2019} goes through here as well, after replacing the Witt frame by the frame $\uS$, and the fpqc topology by the \'etale topology. Let us sketch the argument. By the first part of Theorem \ref{thm-isom}, the functor is faithful. If $\mathscr{P}_1$ and $\mathscr{P}_2$ are $(G,\mu)$-displays over $R$, and $\eta: (Q_{\mathscr{P}_1},\alpha_{\mathscr{P}_1}) \to (Q_{\mathscr{P}_2},\alpha_{\mathscr{P}_2})$ is a morphism, then the second part of Theorem \ref{thm-isom} provides us with a morphism $\psi: \upsilon_{S(R)} \circ \mathscr{P}_1 \to \upsilon_{S(R)} \circ \mathscr{P}_2$ which induces $Q_{\mathscr{P}_1} \to Q_{\mathscr{P}_2}$. It remains only to check this morphism is compatible with the respective Frobenius morphisms, but by Lemma \ref{lem-localproperties} \ref{lem-dispmorphism} it is enough to check this after some faithfully flat \'etale extension $R \to R'$. By choosing an extension such that $Q_{\mathscr{P}_1}(R')$ is nonempty, the result follows from the definitions of the $\alpha_{\mathscr{P}_i}$. Finally, to complete the proof it is enough to show that every $G$-display of type $\mu$ over $\uS(R)$ is \'etale locally in the essential image of (\ref{eq-morphism}), which is done using Theorem \ref{thm-isom}. \end{proof} \begin{cor} Let $B\to A$ be a PD-thickening of $p$-nilpotent $W(k_0)$-algebras. Then $(\ref{eq-morphism})$ induces an equivalence \begin{align*} G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}^\otimes \xrightarrow{\sim} G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}. \end{align*} \end{cor} \begin{proof} Combine Theorem \ref{thm-equiv} with Proposition \ref{prop-descent}. \end{proof} \begin{rmk}\label{rmk-GLn} Let $\Lambda$ be a finite free $\zz_p$-module, and let $\mu$ be a cocharacter of $\textup{GL}(\Lambda)$. Let us say a display $\underline{M}=(M,F)$ over $\underline{S}(R)$ is of type $\mu$ if, \'etale locally, there is an isomorphism $M \cong \Lambda \otimes_{\zz_p} S(R)$ of graded $S(R)$-modules, where $\Lambda$ is graded by the weight space decomposition of the cocharacter $\mu$. Denote by $\textup{Disp}_\mu(\underline{S}(R))$ the category of displays over $\underline{S}(R)$ which are of type $\mu$. Then one checks (as in \cite[Thm. 5.15]{Daniels2019}, for example) that the functor \begin{align*} \textup{GL}(\Lambda)\text{-}\textup{Disp}^\otimes_{\underline{S},\mu}(R) \to \textup{Disp}_\mu(\underline{S}(R)) \end{align*} induced by evaluation on the standard representation is an equivalence of categories.
If $I = (i_1, i_2, \dots, i_n) \in \zz^n$ with $i_1 \le i_2 \le \cdots \le i_n$, and $\mu_I$ is the cocharacter $t \mapsto \text{diag}(t^{i_1}, t^{i_2}, \dots, t^{i_n})$ for some choice of basis of $\Lambda$, then this is compatible with the equivalence between $\text{GL}_n\text{-}\textup{Disp}_{\underline{S},\mu_I}$ and the stack of displays of type $I$ over $\underline{S}$ described in \cite[Ex. 5.3.5]{Lau2018} (see also \cite[Rmk. 3.2]{Daniels2019}). Suppose now $\underline{S}(R)$ extends some $1$-frame $\mathcal{S}$. We say that a window over $\mathcal{S}$ is of type $\mu$ if the corresponding $1$-display is of type $\mu$. If $\mu$ is minuscule, the functor described above is valued in $1$-displays, and therefore induces an equivalence between $(\textup{GL}(\Lambda),\mu)$-displays over $\underline{S}(R)$ and windows over $\mathcal{S}$ of type $\mu$. In particular, if $I = (0^{(d)}, 1^{(h-d)})$ for some $d$, and $\mu = \mu_I$, then $\textup{GL}_h\text{-}\textup{Disp}^\otimes_{\underline{S},\mu_I}(R)$ is equivalent to the category of windows $(P_0, P_1, F_0, F_1)$ over $\mathcal{S}$ with $\text{rk}_{S_0} P_0 =h$ and $\text{rk}_{R}(P_0/P_1) = d$. \end{rmk} We close this section by summarizing the local description of the stack $G$-\textup{Disp}$_{\uS,\mu}^\otimes$. Let us again assume that $R$ is a $W(k_0)$-algebra, and that $\uS$ is an \'etale sheaf of $W(k_0)$-frames over $\Spec R$ which satisfies descent for modules. \begin{Def}\label{def-banal} A $(G,\mu)$-display $\mathscr{P}$ over $\uS(R)$ is \textit{banal} if there is an isomorphism $\upsilon_{S(R)} \circ \mathscr{P} \cong \mathscr{C}(\uS)_{\mu,R}$. \end{Def} If $\mathscr{P}$ is a $(G,\mu)$-display over $R$, then $\mathscr{P}$ is banal locally for the \'etale topology on $R$. Given any $U \in G(S(R)_0)$ we can define a banal $(G,\mu)$-display $\mathscr{P}_U$ on $\uS(R)$ as follows: to the representation $(V,\pi)$ we associate the display over $\uS(R)$ defined from the standard datum \begin{align*} (V \otimes_{\zz_p} S(R)_0, \pi(U) \circ (\id \otimes \sigma_0)), \end{align*} where $V \otimes_{\zz_p} S(R)_0 = V_{W(k_0)} \otimes_{W(k_0)} S(R)_0$ is endowed with the grading induced by the cocharacter $\mu$. \begin{prop}\label{prop-banal} \hspace{2cm} \begin{enumerate}[\textup{(}i\textup{)}] \item Every banal $(G,\mu)$-display $\mathscr{P}$ over $R$ is isomorphic to $\mathscr{P}_U$ for some $U \in G(S(R)_0)$. \item The category of banal $(G,\mu)$-displays over $R$ is equivalent to the category whose objects are $U \in G(S(R)_0)$ and whose morphisms are given by \begin{align*} \textup{Hom}(U,U') = \{h \in G(\uS)_\mu(R) \mid \tau(h)^{-1} U' \sigma(h) = U\}. \end{align*} \end{enumerate} \end{prop} \begin{proof} The proof follows from the arguments at the end of \cite[\textsection 3.3]{Daniels2019}. \end{proof} \subsection{Adjoint nilpotence and liftings}\label{sub-lifting} In this section we assume that $G$ is a smooth affine group scheme over $\zz_p$ and that $\mu: \mathbb{G}_{m,W(k_0)} \to G_{W(k_0)}$ is a minuscule cocharacter of $G_{W(k_0)}$. We fit the adjoint nilpotence condition of \cite[\textsection 3.4]{BP2017} into the present context, and state Lau's unique lifting lemma (Proposition \ref{prop-lifting}). Recall that $G$-\textup{Disp}$_{\underline{W},\mu}^\otimes$ (equiv. $G$-\textup{Disp}$_{\underline{W},\mu}$) is a stack for the fpqc topology. 
For a $p$-nilpotent $W(k_0)$-algebra $A$, the natural inclusion functor $\textup{\'Et}_A \to \textup{Nilp}_{W(k_0)}$ induces a morphism of sites, and we can pull back the stack $G$-\textup{Disp}$_{\underline{W},\mu}^\otimes$ (resp. $G$-\textup{Disp}$_{\underline{W},\mu}$) to obtain an \'etale stack on $\Spec{A}$, which we will denote by $G$-\textup{Disp}$_{\underline{W}(A),\mu}^\otimes$ (resp. $G$-\textup{Disp}$_{\underline{W}(A),\mu}$). Let $k$ be a perfect field of characteristic $p$, and let $K = W(k)[1/p]$. The Frobenius $\sigma$ of $W(k)$ naturally extends to $K$. Denote by $\textup{$F$-Isoc}(k)$ the category of $F$-isocrystals over $k$, i.e., the category of pairs $(M,\varphi)$ consisting of a finite-dimensional $K$-vector space $M$ and an isomorphism of $K$-vector spaces $\varphi: \sigma^\ast M \xrightarrow{\sim} M$. When $k$ is algebraically closed, $\textup{$F$-Isoc}(k)$ is a semi-simple category with simple objects parametrized by $\lambda \in \qq$ (see e.g., \cite{Demazure1972}). For $\lambda \in \qq$, write $M_\lambda$ for the $\lambda$-isotypic component of $M$. If $M_\lambda$ is nonzero, we say $\lambda$ is a \textit{slope} of $(M,\varphi)$. Let $R$ be a $k_0$-algebra, and let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(R)$. For every point $x \in \Spec{R}$, choose an algebraic closure $k(x)$ of the residue field of $x$. The base change $\mathscr{P}_{k(x)}$ of $\mathscr{P}$ to $k(x)$ is banal, since the $L^+_\mu G$-torsor $\underline{\textup{Isom}}^\otimes(\mathscr{C}(\underline{W})_{\mu,k(x)},\upsilon_{k(x)} \circ \mathscr{P}_{k(x)})$ over $k(x)$ is trivial. Hence by Proposition \ref{prop-banal} there is some $u(x) \in L^+G(k(x)) = G(W(k(x)))$ such that $u(x)$ determines $\mathscr{P}_{k(x)}$. Let $K(x) = W(k(x))[1/p]$, and define $b(x) = u(x)\mu^\sigma(p) \in G(K(x))$. To $b(x)$ we can associate an exact tensor functor \begin{align*} N_{b(x)}: \textup{Rep}_{\qq_p}(G) \to \textup{$F$-Isoc}(k(x)), \ (V,\pi) \mapsto (V\otimes_{\qq_p} K(x),\pi(b(x)) \circ (\id_V \otimes \sigma)). \end{align*} Let us denote by $(\mathfrak{g},\text{Ad}^G)$ the adjoint representation of $G$. \begin{Def}\label{def-adjnilp} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra. A $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(R)$ is \textit{adjoint nilpotent} if for all $x \in \Spec R/pR$ all slopes of the isocrystal $N_{b(x)}(\mathfrak{g},\textup{Ad}^G)$ are greater than $-1$. \end{Def} We will likewise say that $U \in L^+G(R)$ is adjoint nilpotent over $\underline{W}(R)$ if the associated $(G,\mu)$-display $\mathscr{P}_U$ is adjoint nilpotent. See \cite[\textsection 3.4]{BP2017} for a discussion of this condition. Let $\Lambda$ be a finite free $\zz_p$-module. Let us briefly recall the relationship between adjoint nilpotence and Zink's nilpotence condition in the case where $G = \textup{GL}(\Lambda)$ (cf. \cite[Rmk. 3.4.5]{BP2017}). If $R$ is a $p$-nilpotent $\zz_p$-algebra, then by Remark \ref{rmk-GLn}, evaluation on the standard representation $(\Lambda, \iota)$ defines an equivalence of categories between $(\textup{GL}(\Lambda),\mu)$-displays and 1-displays of type $\mu$ over $\underline{W}(R)$. We will say that a 1-display is nilpotent if its corresponding Zink display (under the equivalence in Lemma \ref{lem-windows}) satisfies Zink's nilpotence condition (see \cite[Def. 11]{Zink2002}). \begin{lemma}\label{lem-nilpotent} Suppose $\mathscr{P}$ is a $(\textup{GL}(\Lambda), \mu)$-display over $\underline{W}(R)$ such that $\mathscr{P}(\Lambda, \iota)$ is a nilpotent 1-display.
Then $\mathscr{P}$ is adjoint nilpotent over $\underline{W}(R)$. \end{lemma} \begin{proof} This follows from the arguments in \cite[Rmk. 3.4.5]{BP2017}. \end{proof} We extend this definition to the relative Witt frame as follows. Let $B \to A$ be a PD-thickening of $p$-nilpotent $W(k_0)$-algebras. The $W(k_0)$-algebra homomorphism $W(B) \to W(A)$ induces a morphism of frames $\alpha: \underline{W}(B/A) \to \underline{W}(A)$, and base change along $\alpha$ (see (\ref{eq-changeofframe})) determines a morphism \begin{align}\label{eq-reduction} G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}^\otimes \to G\text{-}\textup{Disp}_{\underline{W}(A),\mu}^\otimes \end{align} of \'etale stacks on $\Spec A$. If $\mathscr{P}$ is a $(G,\mu)$-display over $\underline{W}(B/A)$, we denote its base change to $\underline{W}(A)$ by $\alpha^\ast\mathscr{P}$ or $\mathscr{P}_{\underline{W}(A)}$. \begin{Def} Let $B \to A$ be a PD-thickening of $p$-nilpotent $W(k_0)$-algebras. A $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(B/A)$ is \textit{adjoint nilpotent} if $\mathscr{P}_{\underline{W}(A)}$ is adjoint nilpotent in the sense of Definition \ref{def-adjnilp}. \end{Def} Likewise, an element $U \in G(W(B))$ is said to be adjoint nilpotent over $\underline{W}(B/A)$ if the associated $(G,\mu)$-display $\mathscr{P}_U$ over $\underline{W}(B/A)$ is adjoint nilpotent. \begin{rmk}\label{rmk-adjnilp} If $U \in G(W(B))$, we obtain banal $(G,\mu)$-displays $\mathscr{P}_U$ over $\underline{W}(B)$ and $\mathscr{P}_U'$ over $\underline{W}(B/A)$ corresponding to $U$. Since $B\to A$ induces a homeomorphism $\Spec{A} \to \Spec{B}$, $\mathscr{P}_U$ is adjoint nilpotent over $\underline{W}(B)$ if and only if $\mathscr{P}_U'$ is adjoint nilpotent over $\underline{W}(B/A)$. Hence there is no ambiguity in the statement ``$U \in G(W(B))$ is adjoint nilpotent''. \end{rmk} We will denote by $G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}^{\otimes,\text{ad}}$, resp. $G\text{-}\textup{Disp}_{\underline{W}(A),\mu}^{\otimes,\text{ad}}$ the substack of adjoint nilpotent objects in $G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}^\otimes$, resp. $G\text{-}\textup{Disp}_{\underline{W}(A),\mu}^\otimes$. The morphism (\ref{eq-reduction}) induces a morphism \begin{align}\label{eq-lifting} G\text{-}\textup{Disp}_{\underline{W}(B/A),\mu}^{\otimes,\text{ad}} \to G\text{-}\textup{Disp}_{\underline{W}(A),\mu}^{\otimes,\text{ad}}. \end{align} \begin{prop}\label{prop-lifting} The morphism $($\ref{eq-lifting}$)$ is an equivalence of \'etale stacks on $\Spec{A}$. \end{prop} \begin{proof} By Theorem \ref{thm-equiv} and \cite[Thm. 3.16]{Daniels2019}, it is enough to show the result for the respective stacks of $G$-displays of type $\mu$. Hence the proposition follows from \cite[Rmk. 7.1.8]{Lau2018}. \end{proof} \begin{rmk} In the case where $J = \ker(B\to A)$ is a nilpotent ideal and $G$ is reductive, the proposition follows from \cite[Thm. 3.5.4]{BP2017}. \end{rmk} \section{Crystals and $G$-displays}\label{section-crystals} Let $R$ be a $p$-nilpotent $\zz_p$-algebra, let $G$ be a smooth affine group scheme over $\zz_p$, and let $\mu$ be a minuscule cocharacter for $G_{W(k_0)}$. In \textsection \ref{sub-crystalGdisp}, we construct and study the functorial properties of a $G$-crystal associated to any adjoint nilpotent $(G,\mu)$-display over $\underline{W}(R)$.
If $\Lambda$ is a finite free $\zz_p$-module, and $G = \textup{GL}(\Lambda)$, this construction recovers the crystal associated to a nilpotent Zink display as in \textsection \ref{sub-zinkcrystals} (see Lemma \ref{lem-comparecrystals}). In \textsection \ref{sub-hodge} we narrow our focus to the case where $(G,\mu)$ is a Hodge type pair (see Definition \ref{def-hodgetype}). \subsection{The crystals associated to $G$-displays} \label{sub-crystalGdisp} Let $G$ be a smooth affine group scheme over $\zz_p$, and let $\mu: \mathbb{G}_{m,W(k_0)} \to G_{W(k_0)}$ be a minuscule cocharacter of $G_{W(k_0)}$. Let $R$ be a $p$-nilpotent $W(k_0)$-algebra, and suppose $\mathscr{P}$ is an adjoint nilpotent $(G,\mu)$-display over $\underline{W}(R)$. If $A$ is an $R$-algebra, denote by $\mathscr{P}_{\underline{W}(A)}$ the base change of $\mathscr{P}$ to $\underline{W}(A)$. Let $B \to A$ be a PD-thickening over $R$. By Proposition \ref{prop-lifting}, there exists a lift $\mathscr{P}_{B/A}$ of $\mathscr{P}_{\underline{W}(A)}$ to a $(G,\mu)$-display over $\underline{W}(B/A)$, and $\mathscr{P}_{B/A}$ is unique up to a unique isomorphism which lifts $\id_{\mathscr{P}_{\underline{W}(A)}}$. For every representation $(V,\pi)$ of $G$, write \begin{align}\label{eq-liftnotation} \underline{M}_{B/A}^\pi = (M_{B/A}^\pi, F_{B/A}^\pi) \end{align} for the evaluation of $\mathscr{P}_{B/A}$ on $(V,\pi)$. By base change along the composition $W(B/A)^\oplus \xrightarrow{\tau} W(B) \xrightarrow{w_0} B$, we obtain a finite projective $B$-module \begin{align*} \mathbb{D}(\mathscr{P})_{B/A}^\pi:= (\tau^\ast M_{B/A}^\pi) \otimes_{W(B)} B. \end{align*} We claim that the assignment $(B\to A) \mapsto \mathbb{D}(\mathscr{P})_{B/A}^\pi$ defines a crystal in finite locally free $\mathcal{O}_{\Spec{R}/W(k_0)}$-modules for every representation $(V,\pi)$. Indeed, we need to show that if $(B' \to A') \to (B \to A)$ is a morphism of PD-thickenings, then there is an isomorphism of $B'$-modules \begin{align}\label{eq-transition} \mathbb{D}(\mathscr{P})^\pi_{B/A} \otimes_B B' \xrightarrow{\sim} \mathbb{D}(\mathscr{P})^\pi_{B'/A'}, \end{align} and that these isomorphisms satisfy the cocycle condition with respect to compositions. But to obtain an isomorphism (\ref{eq-transition}) it is enough to exhibit an isomorphism \begin{align*} (\mathscr{P}_{B/A})_{\underline{W}(B'/A')} \xrightarrow{\sim} \mathscr{P}_{B'/A'} \end{align*} of $(G,\mu)$-displays over $\underline{W}(B'/A')$. Such an isomorphism is readily found using uniqueness of lifts, since both $(\mathscr{P}_{B/A})_{\underline{W}(B'/A')}$ and $\mathscr{P}_{B'/A'}$ lift $\mathscr{P}_{\underline{W}(A')}$. It is straightforward to check that compositions of the transition isomorphisms obtained in this way satisfy the cocycle condition, so by Remark \ref{rmk-crystals} we obtain a crystal in finite locally free $\mathcal{O}_{\Spec{R}/W(k_0)}$-modules $\mathbb{D}(\mathscr{P})^\pi$ for every $(V,\pi)$. \begin{lemma}\label{lem-crystal} The association \begin{align*} \mathbb{D}(\mathscr{P}): \textup{Rep}_{\zz_p}(G) \to \textup{LFCrys}({R}/W(k_0)), \ (V,\pi) \mapsto \mathbb{D}(\mathscr{P})^\pi, \end{align*} defines an exact tensor functor. \end{lemma} \begin{proof} A $G$-equivariant morphism $(V_1,\pi_1) \to (V_2,\pi_2)$ induces a morphism of finite projective graded $W(B/A)^\oplus$-modules $M_{B/A}^{\pi_1} \to M_{B/A}^{\pi_2}$, and by base change to $B$ we obtain $\mathbb{D}(\mathscr{P})^{\pi_1}_{B/A} \to \mathbb{D}(\mathscr{P})^{\pi_2}_{B/A}$.
If $(B'\to A') \to (B \to A)$ is a PD-morphism, then the transition map $\mathbb{D}(\mathscr{P})^\pi_{B/A} \otimes_B B'\xrightarrow{\sim} \mathbb{D}(\mathscr{P})^\pi_{B'/A'}$ is induced from the natural transformation of functors $(\mathscr{P}_{B/A})_{\underline{W}(B'/A')} \xrightarrow{\sim} \mathscr{P}_{B'/A'}$, which is compatible with the induced morphisms of representations. It follows that $\mathbb{D}(\mathscr{P})^{\pi_1} \to \mathbb{D}(\mathscr{P})^{\pi_2}$ is a morphism of crystals. Compatibility with tensor products follows from the definition of $\mathbb{D}(\mathscr{P})$ and the compatibility of $\mathscr{P}$ with tensor products. Exactness follows similarly, using that all modules are projective and hence all exact sequences in question split. \end{proof} \begin{Def} If $\mathscr{P}$ is an adjoint nilpotent $(G,\mu)$-display over $\underline{W}(R)$ for some $p$-nilpotent $W(k_0)$-algebra $R$, then the functor $\mathbb{D}(\mathscr{P})$ defined in Lemma \ref{lem-crystal} is the \textit{$G$-crystal associated to $\mathscr{P}$}. \end{Def} \begin{lemma}\label{lem-crystalfunctorial} The assignment $\mathscr{P} \mapsto \mathbb{D}(\mathscr{P})$ is functorial in $\mathscr{P}$ and compatible with base change. \end{lemma} \begin{proof} Suppose $\psi: \mathscr{P} \to \mathscr{P}'$ is a morphism of $(G,\mu)$-displays. If $B\to A$ is a PD-thickening over $R$, denote by $\mathscr{P}_{B/A}$ the lift of $\mathscr{P}_{\underline{W}(A)}$ to $\underline{W}(B/A)$, and by $\mathscr{P}'_{B/A}$ the lift of $\mathscr{P}'_{\underline{W}(A)}$. By Proposition \ref{prop-lifting}, $\psi_{\underline{W}(A)}$ lifts uniquely to a morphism of $(G,\mu)$-displays $\mathscr{P}_{B/A} \to \mathscr{P}'_{B/A}$ over $\underline{W}(B/A)$. In particular, for every $(V,\pi)$ we have a morphism \begin{align*} \psi_{B/A}^\pi: M_{B/A}^\pi \to (M_{B/A}')^\pi, \end{align*} where we use the notation of (\ref{eq-liftnotation}). Tensoring this along $W(B/A)^\oplus \xrightarrow{\tau} W(B) \to B$ gives us a morphism \begin{align*} \mathbb{D}(\psi)^\pi_{B/A} : \mathbb{D}(\mathscr{P})^\pi_{B/A} \to \mathbb{D}(\mathscr{P}')^\pi_{B/A} \end{align*} for every $B\to A$ and every $(V,\pi)$. That this determines a morphism of crystals $\mathbb{D}(\psi)^\pi: \mathbb{D}(\mathscr{P})^\pi \to \mathbb{D}(\mathscr{P}')^\pi$ follows from the definition of the transition morphisms and Proposition \ref{prop-lifting}. Moreover, that the resulting morphism $\mathbb{D}(\psi): \mathbb{D}(\mathscr{P}) \to \mathbb{D}(\mathscr{P}')$ is a natural transformation and is compatible with tensor products both follow from the corresponding properties of the morphism $\psi_{B/A}$. If $\alpha: R \to R'$ is a $W(k_0)$-algebra homomorphism, write $\alpha^\ast \mathbb{D}(\mathscr{P})$ for the base change of $\mathbb{D}(\mathscr{P})$ to $R'$. Explicitly, for any PD-thickening $B \to A$ over $R'$ and representation $(V,\pi)$, \begin{align*} \alpha^\ast \mathbb{D}(\mathscr{P})^\pi_{B/A} = \mathbb{D}(\mathscr{P})^\pi_{\alpha_!(B/A)}, \end{align*} where we write $\alpha_!(B/A)$ for the PD-thickening $B \to A$ over $R$ given by viewing $A$ as an $R$-algebra via restriction of scalars. Compatibility with base change follows, since by definition $\mathbb{D}(\mathscr{P}_{\underline{W}(R')})_{B/A}^\pi$ is also given by $\mathbb{D}(\mathscr{P})^\pi_{\alpha_!(B/A)}$.
\end{proof} \begin{rmk}\label{lem-banalcrystal} Lemma \ref{lem-crystalfunctorial} allows us to give an explicit description of $\mathbb{D}(\mathscr{P})$ in the case where $\mathscr{P}$ is banal. Denote by $\mathbb{D}_G$ the tensor functor \begin{align*} \mathbb{D}_G: \textup{Rep}_{\zz_p}(G) \to \textup{LFCrys}(R/W(k_0)), \ (V,\pi) \mapsto V_{W(k_0)} \otimes \mathcal{O}_{\Spec R/W(k_0)}. \end{align*} Here $V_{W(k_0)} \otimes \mathcal{O}_{\Spec R/W(k_0)}$ is defined as in (\ref{eq-pullback}). Let $U \in L^+G(R)$ and let $\mathscr{P}_U$ be the $(G,\mu)$-display over $R$ associated to $U$ as in Proposition \ref{prop-banal}. By the construction of $\mathbb{D}(\mathscr{P}_U)$, we have $\mathbb{D}(\mathscr{P}_U) = \mathbb{D}_G$. If $\mathscr{P}$ is any banal $(G,\mu)$-display over $R$, then by Proposition \ref{prop-banal} there is an isomorphism $\mathscr{P}_U \xrightarrow{\sim} \mathscr{P}$ for some $U \in L^+G(R)$, so Lemma \ref{lem-crystalfunctorial} provides us with an isomorphism of tensor functors $\mathbb{D}_G \xrightarrow{\sim} \mathbb{D}(\mathscr{P})$. \end{rmk} Suppose $\Lambda$ is a finite free $\zz_p$-module, $G = \textup{GL}(\Lambda)$ and $\mu$ is a minuscule cocharacter for $G$. Then by Remark \ref{rmk-GLn}, the category of $(\text{GL}(\Lambda), \mu)$-displays over a $p$-adic $W(k_0)$-algebra $R$ is equivalent to the category of Zink displays of type $\mu$ over $R$. In \textsection \ref{sub-zinkcrystals} we recalled the definition of the crystal $\mathbb{D}(\underline{P})$ associated to a nilpotent Zink display $\underline{P}$. Denote by $\Z_R$ the functor which gives the equivalence between $(\text{GL}(\Lambda), \mu)$-displays over $\underline{W}(R)$ and Zink displays of type $\mu$ over $R$. By Lemma \ref{lem-nilpotent}, if $\mathscr{P}$ is a $(\textup{GL}(\Lambda), \mu)$-display over $\underline{W}(R)$ such that $\Z_R(\mathscr{P})$ is nilpotent, then $\mathscr{P}$ is adjoint nilpotent. The following lemma describes the relationship between the $G$-crystal associated to $\mathscr{P}$ and the crystal associated to $\Z_R(\mathscr{P})$. \begin{lemma}\label{lem-comparecrystals} Let $\mathscr{P}$ be a $(\textup{GL}(\Lambda), \mu)$-display over $\underline{W}(R)$ such that the associated Zink display $\Z_R(\mathscr{P})$ is nilpotent, and denote by $(\Lambda, \iota)$ the standard representation of $\textup{GL}(\Lambda)$. Then there is a natural isomorphism of crystals \begin{align*} \mathbb{D}(\mathscr{P})^{\iota} \cong \mathbb{D}(\Z_R(\mathscr{P})). \end{align*} \end{lemma} \begin{proof} Let $B \to A$ be a PD-thickening over $R$, and let $\mathscr{P}_{B/A}$ be the unique lift of $\mathscr{P}_{\underline{W}(A)}$ to a $(\textup{GL}(\Lambda),\mu)$-display over $\underline{W}(B/A)$. Then $\mathscr{P}_{B/A}(\Lambda,\iota)$ corresponds to a window over $\mathcal{W}(B/A)$ which lifts the Zink display corresponding to $\mathscr{P}_{\underline{W}(A)}(\Lambda,\iota)$. But the crystal $\mathbb{D}(\Z_R(\mathscr{P}))_{B/A}$ is computed from the unique window $\underline{\tilde{P}}$ over $\mathcal{W}(B/A)$ with this lifting property (see \textsection \ref{sub-zinkcrystals}), so $\underline{\tilde{P}}$ is isomorphic to the window associated to $\mathscr{P}_{B/A}(\Lambda,\iota)$. In particular, we obtain an isomorphism $\tau^\ast M^\iota_{B/A} \cong \tilde{P}_0$. The result follows. \end{proof} \subsection{$G$-displays of Hodge type}\label{sub-hodge} Let us continue to assume that $G$ is a smooth affine group scheme over $\zz_p$ and that $\mu:\mathbb{G}_{m,W(k_0)} \to G_{W(k_0)}$ is a minuscule cocharacter for $G_{W(k_0)}$.
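For orientation, the basic example to keep in mind (included purely as an illustration) is $G = \textup{GL}_h$ together with the cocharacter \begin{align*} \mu_d: \mathbb{G}_{m,W(k_0)} \to \textup{GL}_{h,W(k_0)}, \ a \mapsto \begin{pmatrix} 1_d & 0 \\ 0 & a \cdot 1_{h-d} \end{pmatrix}, \end{align*} for some $0 \le d \le h$; this cocharacter is minuscule, and in the notation below it is written $a \mapsto \textup{diag}(1^{(d)}, a^{(h-d)})$. The following definition singles out the pairs $(G,\mu)$ which embed compatibly into a pair of this form.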
\begin{Def}\label{def-hodgetype} We say the pair $(G,\mu)$ is \textit{of Hodge type} if there exists a closed embedding of $\zz_p$-group schemes $\eta: G \hookrightarrow \textup{GL}(\Lambda)$ for a finite free $\zz_p$-module $\Lambda$, such that after a choice of basis $\Lambda_{W(k_0)} \xrightarrow{\sim} W(k_0)^h$, the composition $\eta \circ \mu$ is the minuscule cocharacter $a \mapsto \textup{diag}(1^{(d)}, a^{(h-d)})$ of $\textup{GL}_h$ for some $d$. In this case, the representation $(\Lambda, \eta)$ is called a \textit{Hodge embedding} for $(G,\mu)$. \end{Def} If $(G,\mu)$ is of Hodge type, and $\mathscr{P}$ is a $(G,\mu)$-display over $\underline{W}(R)$, then $\mathscr{P}(\Lambda, \eta)$ is a 1-display over $\underline{W}(R)$. Let $\textup{Z}_{\eta,R}(\mathscr{P})$ denote the Zink display associated to this 1-display via Lemma \ref{lem-windows}. If the ring $R$ is clear from context, we will write simply $\Z_\eta(\mathscr{P})$. \begin{Def} We say a $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(R)$ is \textit{nilpotent with respect to $\eta$} if $\Z_\eta(\mathscr{P})$ is a nilpotent Zink display. \end{Def} This condition is local for the fpqc topology, and we denote by $G\textup{-}\textup{Disp}_{\underline{W},\mu}^{\otimes,\eta}$ the stack of $(G,\mu)$-displays which are nilpotent with respect to $\eta$. \begin{lemma}\label{lem-wrteta} Suppose $(G,\mu)$ is of Hodge type, and let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(R)$. If $\mathscr{P}$ is nilpotent with respect to $\eta$, then it is adjoint nilpotent. \end{lemma} \begin{proof} Notice $(\eta_\ast\mathscr{P})(\Lambda,\iota) = \mathscr{P}(\Lambda,\eta)$, so since $\Z_\eta(\mathscr{P})$ is nilpotent, it follows from Lemma \ref{lem-nilpotent} that $\eta_\ast\mathscr{P}$ is an adjoint nilpotent $(\textup{GL}(\Lambda), \eta\circ\mu)$-display over $\underline{W}(R)$. Then $\mathscr{P}$ is an adjoint nilpotent $(G,\mu)$-display over $\underline{W}(R)$ (cf. \cite[3.7.1]{BP2017}). \end{proof} In the remainder of this section, we assume $(G,\mu)$ is of Hodge type with Hodge embedding $(\Lambda, \eta)$. Let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(R)$ which is nilpotent with respect to $\eta$, so in particular $\mathscr{P}$ is adjoint nilpotent by Lemma \ref{lem-wrteta}, and we can associate a $G$-crystal $\mathbb{D}(\mathscr{P})$ to $\mathscr{P}$ as in the previous section. It is easy to see $\mathbb{D}(\mathscr{P})^\eta \cong \mathbb{D}(\eta_\ast \mathscr{P})^\iota$, so by Lemma \ref{lem-comparecrystals} we have a canonical isomorphism \begin{align}\label{eq-eta} \mathbb{D}(\mathscr{P})^\eta \cong \mathbb{D}(\Z_\eta(\mathscr{P})). \end{align} As a result we can endow $\mathbb{D}(\mathscr{P})^\eta$ with the structure of a Dieudonn\'e crystal using the Dieudonn\'e crystal structure on $\mathbb{D}(\Z_\eta(\mathscr{P}))$ as in Section \ref{sub-zinkcrystals}. Denote by \begin{align*} \mathbb{F}: \phi^\ast\mathbb{D}(\mathscr{P})^\eta \to \mathbb{D}(\mathscr{P})^\eta \text{ and } \mathbb{V}:\mathbb{D}(\mathscr{P})^\eta \to \phi^\ast \mathbb{D}(\mathscr{P})^\eta \end{align*} the Frobenius and Verschiebung for $\mathbb{D}(\mathscr{P})^\eta$. Suppose $pR=0$, and that $\mathscr{P}$ is banal (see Definition \ref{def-banal}), so there exists an isomorphism $\psi: \mathscr{P}_U \xrightarrow{\sim} \mathscr{P}$ for $U \in L^+G(R)$ by Proposition \ref{prop-banal}. In this case we can make explicit the Frobenius and Verschiebung for $\mathbb{D}(\mathscr{P})^\eta$.
From $\psi$ we obtain isomorphisms \begin{align}\label{eq-etacrystal} \mathbb{D}(\mathscr{P})^\eta_{B/A} \cong \Lambda \otimes_{\zz_p} B \text{ and } \phi^\ast \mathbb{D}(\mathscr{P})^{\eta}_{B/A} \cong \Lambda \otimes_{\zz_p} B \end{align} for every PD-thickening $B \to A$ over $R$. Indeed, the first isomorphism follows immediately from Remark \ref{lem-banalcrystal}. Let us explain the second. Essentially by definition of $\phi^\ast \mathbb{D}(\mathscr{P})$, we have a canonical isomorphism $\phi^\ast \mathbb{D}(\mathscr{P}) \cong \mathbb{D}(\mathscr{P}^{(p)})$, where $\mathscr{P}^{(p)}$ is the base change of $\mathscr{P}$ along $\phi: R \to R$. We claim that $\mathscr{P}^{(p)}$ is banal if $\mathscr{P}$ is, so the second isomorphism in (\ref{eq-etacrystal}) follows from another application of Remark \ref{lem-banalcrystal}. If $f(U)$ is the image of $U$ in $L^+G(R)$ under the Witt vector Frobenius $f$, then we have an isomorphism $\varepsilon: \mathscr{P}_{f(U)} \xrightarrow{\sim} (\mathscr{P}_U)^{(p)}$ given by \begin{align}\label{eq-epsilon} V \otimes_{\zz_p} W(R)^\oplus \xrightarrow{\sim} V \otimes_{\zz_p} W(R)^\oplus \otimes_{W(R)^\oplus, W(\phi)^\oplus} W(R)^\oplus, \ x \otimes \xi \mapsto x \otimes 1 \otimes \xi \end{align} for every representation $(V,\pi)$. Hence $\mathscr{P}^{(p)}$ is banal, since it is isomorphic to $\mathscr{P}_{f(U)}$ along the composition of $\varepsilon$ with $\psi^{(p)}: (\mathscr{P}_U)^{(p)} \xrightarrow{\sim} \mathscr{P}^{(p)}$. For any PD-thickening $B \to A$, denote by $U_A$ the image of $U$ under $G(W(R)) \to G(W(A))$, and by $U_{B/A}$ a lift of $U_A$ to $G(W(B))$. We will write $\bar{U}_{B/A}$ for the image of $U_{B/A}$ in $G(B)$ under $w_0: W(B) \to B$. \begin{lemma}\label{lem-explicitfrob} With respect to $($\ref{eq-etacrystal}$)$, the Frobenius and Verschiebung for $\mathbb{D}(\mathscr{P})^\eta_{B/A}$ are given by \begin{align*} \eta(\bar{U}_{B/A}) \circ (\id_{\Lambda^0_{W(B)}} \oplus p\cdot \id_{\Lambda^1_{W(B)}}) \text{ and } (p\cdot \id_{\Lambda^0_{W(B)}} \oplus \id_{\Lambda^1_{W(B)}}) \circ \eta(\bar{U}_{B/A})^{-1}, \end{align*} respectively, for every PD-thickening $B \to A$ over $R$. \end{lemma} \begin{proof} If $\mathscr{P}$ is a $(G,\mu)$-display over $\underline{W}(R)$ (resp. over $\underline{W}(B/A)$ for some PD-thickening $B \to A$) which is nilpotent with respect to $\eta$, then we will denote by $\Z_\eta(\mathscr{P})$ the associated Zink display (resp. window over $\mathcal{W}(B/A)$). If $\underline{P} = \Z_\eta(\mathscr{P})$ is the Zink display associated to $\mathscr{P}$, then by compatibility of $\Z_\eta$ with base change we have $\Z_\eta(\mathscr{P}^{(p)}) \cong \underline{P}^{(p)}$. For any Zink display $\underline{P}$ over $R$ and any PD-thickening $B \to A$, denote by $\underline{P}_{B/A}$ the unique lift of $\underline{P}_A$ to a window over $\mathcal{W}(B/A)$. Recall from the proof of Lemma \ref{lem-comparecrystals} that if $\underline{P} = \Z_\eta(\mathscr{P})$, then $\underline{P}_{B/A} = \mathscr{P}_{B/A}(\Lambda, \eta)$, where $\mathscr{P}_{B/A}$ is the unique lift of $\mathscr{P}_{\underline{W}(A)}$ to $\underline{W}(B/A)$. Let $\mathscr{P}$ be a banal $(G,\mu)$-display over $R$ with trivialization isomorphism $\psi: \mathscr{P}_U \xrightarrow{\sim} \mathscr{P}$. By replacing $\mathscr{P}$ by $\mathscr{P}_{\underline{W}(A)}$, we may assume $A = R$. Let us start by proving the lemma for the Frobenius.
We want an explicit description of the map \begin{align}\label{eq-lemmafrob} \Lambda \otimes_{\zz_p} B \xrightarrow{\sim} \phi^\ast\mathbb{D}(\mathscr{P})^\eta_{B/R} \xrightarrow{\mathbb{F}_{B/R}} \mathbb{D}(\mathscr{P})^\eta_{B/R} \xrightarrow{\sim} \Lambda \otimes_{\zz_p} B. \end{align} The isomorphism $\psi: \mathscr{P}_U \xrightarrow{\sim} \mathscr{P}$ lifts uniquely to an isomorphism $\psi_{B/R}$ of $(G,\mu)$-displays over $\underline{W}(B/R)$, which in turn induces an isomorphism of $\mathcal{W}(B/R)$-windows $\Z_\eta(\mathscr{P}_U)_{B/R} \xrightarrow{\sim} \Z_\eta(\mathscr{P})_{B/R}.$ Similarly, the isomorphism $\varepsilon$ (see (\ref{eq-epsilon})) induces an isomorphism $\Z_\eta(\mathscr{P}_{f(U)})_{B/R} \xrightarrow{\sim} \Z_\eta(\mathscr{P}_U^{(p)})_{B/R}$. Denote by $\tilde{F}_0^\sharp$ the unique lift of the display Verschiebung $\textup{Ver}_{\Z_\eta(\mathscr{P})}:\Z_\eta(\mathscr{P}^{(p)}) \to \Z_\eta(\mathscr{P})$ to a morphism of $\mathcal{W}(B/R)$-windows. Then (\ref{eq-lemmafrob}) is the reduction modulo $I_B$ of the following composition: \begin{align}\label{eq-frobcomp1} \Z_\eta(\mathscr{P}_{f(U)})_{B/R} \xrightarrow{\sim} \Z_\eta(\mathscr{P}_U^{(p)})_{B/R} \xrightarrow{\sim} \Z_\eta(\mathscr{P}^{(p)})_{B/R} \xrightarrow{\tilde{F}_0^\sharp} \Z_\eta(\mathscr{P})_{B/R} \xleftarrow{\sim} \Z_\eta(\mathscr{P}_U)_{B/R}. \end{align} In turn, (\ref{eq-frobcomp1}) is the unique lift of the composition \begin{align}\label{eq-frobcomp2} \Z_\eta(\mathscr{P}_{f(U)}) \xrightarrow{\Z_\eta(\varepsilon)} \Z_\eta(\mathscr{P}_U)^{(p)} \xrightarrow{\Z_\eta(\psi)^{(p)}} \Z_\eta(\mathscr{P})^{(p)} \xrightarrow{\textup{Ver}_{\Z_\eta(\mathscr{P})}} \Z_\eta(\mathscr{P}) \xrightarrow{\Z_\eta(\psi^{-1})} \Z_\eta(\mathscr{P}_U). \end{align} Hence to prove the lemma for the Frobenius it is enough to show (\ref{eq-frobcomp2}) is given by $\eta(U) \circ (\id \oplus p \cdot \id)$. By functoriality of $\textup{Ver}$, we can rewrite (\ref{eq-frobcomp2}) as the composition $\textup{Ver}_{\Z_\eta(\mathscr{P}_U)} \circ \Z_\eta(\varepsilon)$, and we have an explicit description of $\textup{Ver}_{\Z_\eta(\mathscr{P}_U)}$ (see (\ref{eq-F0sharp})): \begin{align*} \textup{Ver}_{\Z_\eta(\mathscr{P}_U)} = (\eta(U)\circ (\id_{\Lambda} \otimes f))^\sharp \circ (\id_{f^\ast \Lambda^0_{W(R)}} \oplus p \cdot \id_{f^\ast \Lambda^1_{W(R)}}). \end{align*} The result for the Frobenius follows because $(\eta(U)\circ (\id_\Lambda \otimes f))^\sharp = \eta(U) \circ \Z_\eta(\varepsilon)^{-1}$, and $\Z_\eta(\varepsilon)^{-1}$ commutes with $\id \oplus p\cdot \id$. Finally we note that the computation is nearly identical for the Verschiebung, using the explicit description (\ref{eq-Vsharp}) of the Frobenius for $\Z_\eta(\mathscr{P}_U)$. \end{proof} For the remainder of this section let us furthermore assume that the generic fiber $G_{\qq_p}$ of $G$ is reductive. For any finite free $\zz_p$-module $\Lambda$, let $\Lambda^\otimes = \bigoplus_{m,n} \Lambda^{\otimes m} \otimes_{\zz_p} (\Lambda^\vee)^{\otimes n}$ denote the total tensor algebra of $\Lambda \oplus \Lambda^\vee$. If the pair $(G,\mu)$ is of Hodge type, then by \cite[Prop. 1.3.2]{Kisin2010} and \cite{Deligne2011}, there exists a finite collection of tensors $\underline{s} = (s_1, \dots, s_r)$ with $s_i \in \Lambda^\otimes$ such that, for all $\zz_p$-algebras $R$, \begin{align*} G(R) = \{ g \in \textup{GL}(\Lambda\otimes_{\zz_p} R) \mid g (s_i \otimes 1) = (s_i \otimes 1) \text{ for all }i\}.
\end{align*} We say the collection of tensors $\underline{s}$ defines the group $G$ inside $\textup{GL}(\Lambda)$. Without loss of generality, we may assume that, for each $i$, we have \begin{align*} s_i \in \Lambda^{\otimes m_i} \otimes (\Lambda^\vee)^{\otimes n_i} \end{align*} for some $m_i$ and $n_i$. Let $\Lambda(i) = \Lambda^{\otimes m_i} \otimes (\Lambda^\vee)^{\otimes n_i}$. This is a $G$-stable submodule of $\Lambda^\otimes$, and we will denote by $(\Lambda(i), \eta(i))$ the corresponding representation. For every $i$, $s_i$ defines a morphism of representations \begin{align}\label{eq-si} s_i: \zz_p \to \Lambda(i), \ 1 \mapsto s_i, \end{align} where $\zz_p$ denotes the trivial representation. Each $\Lambda(i)$ is canonically graded by the action of the cocharacter $\mu$, and since $s_i$ is $G$-invariant, we see that $s_i \in (\Lambda(i))^0$. \begin{Def}\label{def-lochodge} A \textit{local Hodge embedding datum} is a tuple $\underline{G} = (G,\mu,\Lambda, \eta, \underline{s})$, where \begin{itemize} \item $(G,\mu)$ is a pair consisting of a smooth affine $\zz_p$-group scheme with reductive generic fiber and a minuscule cocharacter $\mu$ of $G_{W(k_0)}$ such that $(G,\mu)$ is of Hodge type, \item $\eta: G \hookrightarrow \textup{GL}(\Lambda)$ is a Hodge embedding for $(G,\mu)$, and \item $\underline{s} = (s_1, \dots, s_r)$ is a collection of tensors which define $G$ inside of $\textup{GL}(\Lambda)$. \end{itemize} \end{Def} If $\mathscr{P}$ is an adjoint nilpotent $(G,\mu)$-display over $\underline{W}(R)$, we may apply $\mathbb{D}(\mathscr{P})$ to $s_i$ to obtain a morphism of crystals \begin{align*} t_i: \mathbbm{1} \to \mathbb{D}(\mathscr{P})^{\eta(i)}. \end{align*} Notice that $\phi^\ast \mathbbm{1}$ is canonically identified with $\mathbbm{1}$, so we likewise obtain a morphism $t_i: \mathbbm{1} \to \phi^\ast \mathbb{D}(\mathscr{P})^{\eta(i)}$. If $\mathscr{P}$ is nilpotent with respect to $\eta$, we have an identification $\mathbb{D}(\mathscr{P})^\eta \cong \mathbb{D}(\underline{P})$, where $\underline{P} = \Z_\eta(\mathscr{P})$. Since $\mathbb{D}(\mathscr{P})$ is compatible with tensor products, we see \begin{align*} \mathbb{D}(\mathscr{P})^{\eta(i)} = \mathbb{D}(\underline{P})^{\otimes m_i} \otimes (\mathbb{D}(\underline{P})^\vee)^{\otimes n_i}. \end{align*} The Frobenius $\mathbb{F}$ on $\mathbb{D}(\Z_\eta(\mathscr{P}))$ extends to tensor products, and it extends to (linear) duals after we pass to the associated isocrystal. By the relation $\mathbb{FV} = p$, we see that the resulting extension of $\mathbb{F}$ to $\mathbb{D}(\underline{P})^\vee[1/p]$ is given by $\mathbb{F} = p^{-1} \mathbb{V}^t$. Hence $\mathbb{F}$ extends to a morphism of isocrystals \begin{align*} \mathbb{F}_i: \phi^\ast\mathbb{D}(\mathscr{P})^{\eta(i)}[1/p] \to \mathbb{D}(\mathscr{P})^{\eta(i)}[1/p]. \end{align*} \begin{prop}\label{prop-frobeq} For each $i$, $t_i$ is Frobenius equivariant, i.e., $\mathbb{F}_i \circ t_i = t_i$. \end{prop} \begin{proof} By the equivalence (\ref{eq-crystalequiv}) between $\textup{Isoc}(R)$ and $\textup{Isoc}(R/pR)$, we may assume $pR = 0$. Further, by Lemma \ref{lem-etalelocal} we may replace $R$ by an \'etale faithfully flat extension, and since every $(G,\mu)$-display is \'etale locally banal (see the proof of \cite[Lem. 5.4.2]{Lau2018}), we may assume $\mathscr{P}$ is banal.
Hence by Remark \ref{lem-banalcrystal} we have an isomorphism $\mathbb{D}(\mathscr{P}) \cong \mathbb{D}_G$, and for each PD-thickening $B\to A$ over $R$, the morphism of $B$-modules $(t_i)_{B/A}$ is given by \begin{align*} (t_i)_{B/A}: B \to \Lambda(i) \otimes_{\zz_p} B, \ 1 \mapsto s_i \otimes 1. \end{align*} By Lemma \ref{lem-faithful}, it is enough to show the result after applying the functor $(-)_{D^\wedge/R}[1/p]: \textup{Isoc}(R) \to \textup{Mod}(D^\wedge[1/p])$, so it suffices to show that $\mathbb{F}_i$ fixes the image of $s_i \otimes 1$ in \begin{align*} \mathbb{D}(\mathscr{P})^{\eta(i)}_{D^\wedge/R}[1/p] = \left( \varprojlim \Lambda(i) \otimes_{\zz_p} D_n\right) [1/p]. \end{align*} Here as in \textsection \ref{sub-reviewcrystals} we write $D_n = D^\wedge / p^n D^\wedge$. Notice $\mathbb{F}_i$ is given by $p^{-n_i}\mathbb{F}_i'$, where \begin{align*} \mathbb{F}_i' = \mathbb{F}^{\otimes m_i} \otimes (\mathbb{V}^t)^{\otimes n_i} \end{align*} is a morphism of crystals $\phi^\ast \mathbb{D}(\mathscr{P})^{\eta(i)} \to \mathbb{D}(\mathscr{P})^{\eta(i)}$. If we denote by $\mathbb{F}'_{i,n}$ the evaluation of $\mathbb{F}_i'$ on $\phi^\ast\mathbb{D}(\mathscr{P})^{\eta(i)}_{D_n/R}$, then \begin{align*} \mathbb{F}_i (s_i \otimes 1) = p^{-n_i}\cdot(\mathbb{F}'_{i,n}(s_i \otimes 1))_{n\in \zz_{>0}} \in \left( \varprojlim \Lambda(i) \otimes_{\zz_p} D_n\right) [1/p]. \end{align*} Using the decomposition $\Lambda_{W(k_0)} = \Lambda^0 \oplus \Lambda^1$, we see that $\Lambda(i)\otimes_{\zz_p} D_n$ can be written as a direct sum of terms of the form \begin{align*} (\Lambda^0_{D_n})^{\otimes j} \otimes_{D_n} (\Lambda^1_{D_n})^{\otimes m_i - j} \otimes_{D_n} ((\Lambda^0_{D_n})^\vee)^{\otimes k} \otimes_{D_n} ((\Lambda^1_{D_n})^\vee)^{\otimes n_i - k}. \end{align*} Since $s_i \in (\Lambda(i))^0$, each $s_i \otimes 1$ is contained in a direct sum of terms which satisfy $m_i-j = n_i - k$. By Lemma \ref{lem-explicitfrob}, $\mathbb{F}'_{i,n}$ acts on such a term by \begin{align}\label{eq-etai} \eta(\bar{U}_{D_n/R})^{\otimes j} \otimes p^{m_i - j} \eta(\bar{U}_{D_n/R})^{\otimes m_i - j} \otimes p^k \eta^\vee(\bar{U}_{D_n/R})^{\otimes k} \otimes \eta^\vee(\bar{U}_{D_n/R})^{\otimes n_i - k}, \end{align} where $\eta^\vee$ denotes the contragredient representation. Since $m_i-j = n_i - k$, (\ref{eq-etai}) is equal to $p^{n_i}\cdot\eta(i)(\bar{U}_{D_n/R})$, so \begin{align*} \mathbb{F}_i(s_i \otimes 1) = p^{-n_i} \cdot (p^{n_i} \cdot \eta(i)(\bar{U}_{D_n/R})(s_i \otimes 1))_{n\in \zz_{>0}}. \end{align*} The result follows because $\eta(i)(\bar{U}_{D_n/R})$ fixes $s_i\otimes 1$ for every $n$. \end{proof} \section{$G$-displays and formal $p$-divisible groups}\label{section-main} Let $G$ be a smooth affine $\zz_p$-group scheme with reductive generic fiber and let $\mu$ be a minuscule cocharacter for $G_{W(k_0)}$. Moreover, assume that the pair $(G,\mu)$ is of Hodge type, and that $\underline{G} = (G,\mu,\Lambda, \eta, \underline{s})$ is a local Hodge embedding datum. In \textsection \ref{sub-tatetensors} we define a notion of $p$-divisible groups with $(\underline{s},\mu)$-structure (Definition \ref{def-smu}) and prove that these objects form an \'etale stack on $\textup{Nilp}_{W(k_0)}$ (Lemma \ref{lem-smudescent}). The heart of the paper is \textsection \ref{sub-mainthm}, wherein we establish Theorem \ref{thm-1}.
In particular, we define a functor from $(G,\mu)$-displays which are nilpotent with respect to $\eta$ to formal $p$-divisible groups with $(\underline{s},\mu)$-structure over any $R$ in $\textup{Nilp}_{W(k_0)}$ (Lemma \ref{lem-tensors}), and we prove that this functor is fully faithful (Proposition \ref{prop-fullyfaithful}). In addition, we characterize the essential image of this functor (Proposition \ref{prop-essimg}), and we prove that the functor is an equivalence if $R/pR$ has a $p$-basis \'etale locally (Theorem \ref{mainthm}). In \textsection \ref{sub-rzspaces} and \textsection \ref{sub-deformations} we establish corollaries of the main theorem under the additional assumption that $G$ is itself reductive. In particular, in \textsection \ref{sub-rzspaces}, using Theorem \ref{mainthm}, we prove that the RZ-functors of Hodge type defined in \cite{Kim2018} and in \cite{BP2017} are naturally equivalent, and in \textsection \ref{sub-deformations} we study the deformation theory of $p$-divisible groups with $(\underline{s},\mu)$-structure. \subsection{Crystalline Tate tensors}\label{sub-tatetensors} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra, and let $\underline{\mathbb{D}} = (\mathbb{D},\mathbb{F},\mathbb{V})$ be a Dieudonn\'e crystal on $\Spec R$. Suppose $\mathbb{D}_{R/R}$ is equipped with a filtration by finite projective $R$-modules \begin{align}\label{eq-filt} \textup{Fil}^0(\mathbb{D}) = \mathbb{D}_{R/R} \supset \textup{Fil}^1(\mathbb{D}) \supset \textup{Fil}^2(\mathbb{D}) = 0. \end{align} Extending the notation of the previous section, let us denote by $\mathbb{D}^\otimes$ the total tensor algebra of $\mathbb{D} \oplus \mathbb{D}^\vee$. This is a crystal in finite locally free $\mathcal{O}_{\Spec R/W(k_0)}$-modules, and the filtration (\ref{eq-filt}) naturally extends to a filtration for $\mathbb{D}^\otimes_{R/R}$. Further, the Frobenius for $\mathbb{D}$ endows the associated isocrystal $\mathbb{D}^\otimes[1/p]$ with the structure of an $F$-isocrystal. \begin{Def} A \textit{crystalline Tate tensor} for $\underline{\mathbb{D}}$ over $\Spec{R}$ is a morphism $t: \mathbbm{1} \to \mathbb{D}^\otimes$ of locally free crystals of $\mathcal{O}_{\Spec R/W(k_0)}$-modules such that $t_{R/R}(R) \subset \textup{Fil}^0(\mathbb{D}^\otimes)$ and such that the induced morphism of isocrystals $\mathbbm{1} \to \mathbb{D}^\otimes[1/p]$ is Frobenius equivariant. \end{Def} Let $\underline{G} = (G,\mu,\Lambda, \eta, \underline{s})$ be a local Hodge embedding datum in the sense of Definition \ref{def-lochodge}. As in the previous section, we have $s_i \in \Lambda^{\otimes m_i} \otimes (\Lambda^\vee)^{\otimes n_i} = \Lambda(i)$ for every $i$. More generally, throughout this section, we fix the pair $(m_i, n_i)$ associated to each $i$, and for any object $N$ in a rigid tensor category we define $N(i) := N^{\otimes m_i} \otimes (N^\vee)^{\otimes n_i}$. If $\psi$ is a morphism $N \to N'$, write $\psi(i)$ for the induced morphism $N(i) \to N'(i)$. \begin{Def}\label{def-smu} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra, and let $\underline{\mathbb{D}}$ be a Dieudonn\'e crystal over $R$ whose $R$-sections are equipped with a filtration (\ref{eq-filt}).
An \textit{$(\underline{s},\mu)$-structure on $\underline{\mathbb{D}}$} over $\Spec R$ is a finite collection of crystalline Tate tensors $\underline{t} = (t_1, \dots, t_r)$ satisfying the following conditions: \begin{enumerate}[(i)] \item For every PD-thickening $B \to A$ over $R$, there is an extension $B \to B'$ which is faithfully flat and of finite presentation such that there is an isomorphism \begin{align*} (\Lambda \otimes_{\zz_p} B', \underline{s} \otimes 1) \xrightarrow{\sim} (\mathbb{D}_{B'/A'}, \underline{t}), \end{align*} where $A' = A \otimes_B B'$. \item For some faithfully flat \'etale extension $R \to R'$, there is an isomorphism \begin{align*} (\Lambda \otimes_{\zz_p} R', \underline{s} \otimes 1) \xrightarrow{\sim} (\mathbb{D}_{R/R} \otimes_R R', \underline{t}) \end{align*} respecting the tensors, such that the filtration $\textup{Fil}^1(\mathbb{D})\otimes_R R' \subset \mathbb{D}_{R/R} \otimes_R R' \xrightarrow{\sim} \Lambda \otimes_{\zz_p} R'$ is induced by $\mu$. \end{enumerate} \end{Def} In particular, we can equip the Dieudonn\'e crystal of a $p$-divisible group or a Zink display with $(\underline{s},\mu)$-structure. \begin{Def} Let $R$ be a $p$-nilpotent $W(k_0)$-algebra. \begin{enumerate}[(i)] \item A \textit{ $p$-divisible group with $(\underline{s},\mu)$-structure} over $R$ is a pair $(X,\underline{t})$ consisting of a $p$-divisible group $X$ over $R$ and an $(\underline{s},\mu)$-structure $\underline{t}$ on $\mathbb{D}(X)$ over $\Spec R$. \item A \textit{nilpotent Zink display with $(\underline{s},\mu)$-structure} over $R$ is a pair $(\underline{P}, \underline{t})$ consisting of a nilpotent Zink display $\underline{P}$ over $R$ and an $(\underline{s},\mu)$-structure $\underline{t}$ on $\mathbb{D}(\underline{P})$ over $\Spec R$. \end{enumerate} \end{Def} Denote by $\textup{fpdiv}_{\underline{s},\mu}(R)$ the category whose objects are formal $p$-divisible groups with $(\underline{s},\mu)$-structure and whose morphisms $(X,\underline{t}) \to (X',\underline{t'})$ are isomorphisms of $p$-divisible groups $X \to X'$ such that the composition of the tensor $t_i$ with the induced morphism $\mathbb{D}(X)^\otimes \to \mathbb{D}(X')^\otimes$ is the tensor $t_i'$ for every $i$. Similarly, let $\textup{nZink}_{\underline{s},\mu}(R)$ denote the category of nilpotent Zink displays with $(\underline{s},\mu)$-structure over $R$. As $R$ varies in $\textup{Nilp}_{W(k_0)}$, these determine fibered categories $\textup{fpdiv}_{\underline{s},\mu}$ and $\textup{nZink}_{\underline{s},\mu}$. \begin{lemma}\label{lem-smudescent} The fibered categories $\textup{\textup{fpdiv}}_{\underline{s},\mu}$ and $\textup{\textup{nZink}}_{\underline{s},\mu}$ form stacks for the \'etale topology on $\textup{\textup{Nilp}}_{W(k_0)}$. \end{lemma} \begin{proof} It is well known that $p$-divisible groups form an fpqc stack on $\textup{Nilp}_{W(k_0)}$ (see e.g., \cite[Rmk. 2.4.2]{Messing1972}), and formal $p$-divisible groups form a substack because the property of being a formal $p$-divisible group is fpqc local on the base. Further, nilpotent Zink displays form an fpqc stack by \cite[Thm. 37]{Zink2002}. For the remainder of the proof, the same arguments work for both $\textup{fpdiv}_{\underline{s},\mu}$ and $\textup{nZink}_{\underline{s},\mu}$, so we give the proof only for the former. Let $R \to R'$ be a faithfully flat \'etale homomorphism of $p$-nilpotent $W(k_0)$-algebras. 
Denote by $\textup{fpdiv}_{\underline{s},\mu}(R'/R)$ the category of formal $p$-divisible groups with $(\underline{s},\mu)$-structure equipped with descent data from $\Spec R'$ down to $\Spec R$. We want to show the natural functor $\textup{fpdiv}_{\underline{s},\mu}(R) \to \textup{fpdiv}_{\underline{s},\mu}(R'/R)$ is an equivalence. That the functor is faithful is immediate from the corresponding property for $p$-divisible groups. Moreover, morphisms in $\textup{fpdiv}_{\underline{s},\mu}(R'/R)$ automatically descend to isomorphisms of $p$-divisible groups over $R$, and these isomorphisms must be compatible with the tensors by Lemma \ref{lem-etalelocal}. It remains to prove that objects descend. Let $(X',\underline{t'})$ be a formal $p$-divisible group with $(\underline{s},\mu)$-structure over $R'$, equipped with a descent datum. We obtain an object $(X,\underline{t})$ over $R$ by descent for $p$-divisible groups and Lemma \ref{lem-etalelocal}. Frobenius equivariance of each $t_i$ follows from another application of Lemma \ref{lem-etalelocal}, and \'etale descent for $R$-modules implies that each $t_i$ preserves the filtrations. Condition (ii) of Definition \ref{def-smu} holds for $(X,\underline{t})$ because \'etale covers are stable under composition. To finish the proof we need only check that the first condition of Definition \ref{def-smu} holds for $(X,\underline{t})$. If $B \to A$ is a PD-thickening over $R$ then $A' = A \otimes_R R'$ is faithfully flat \'etale over $A$, and we can lift $B \to A$ to $B' \to A'$ with $B'$ faithfully flat \'etale over $B$. By the flatness of $B \to B'$, the divided powers extend to divided powers on the kernel of $B' \to A'$. Hence $B' \to A'$ is a PD-thickening over $\Spec R'$, so by condition (i) for $(X', \underline{t'})$, there is an fppf cover $\Spec B'' \to \Spec B'$ trivializing $(\mathbb{D}(X')_{B'/A'}, \underline{t'})$. Then the composition $\Spec B'' \to \Spec B' \to \Spec B$ provides an fppf cover which trivializes $(\mathbb{D}(X)_{B/A}, \underline{t})$. \end{proof} \begin{rmk} \label{rmk-equiv} It is a consequence of the theorem of Zink and Lau (see \cite[Thm. 1.1]{Lau2008}) and the compatibility of crystals (see Lemma \ref{lem-zinkdieudonne}) that the natural functor $(\underline{P},\underline{t}) \mapsto (\textup{BT}_R(\underline{P}), \underline{t})$ defines an equivalence between the stacks $\textup{nZink}_{\underline{s},\mu}$ and $\textup{fpdiv}_{\underline{s},\mu}$. \end{rmk} \subsection{From $G$-displays to $p$-divisible groups}\label{sub-mainthm} Let $\underline{G} = (G,\mu,\Lambda,\eta,\underline{s})$ be a local Hodge embedding datum in the sense of Definition \ref{def-lochodge}. Let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(R)$ which is nilpotent with respect to $\eta$, let $\underline{P} = \Z_{\eta,R}(\mathscr{P})$ be the associated Zink display, and let $X = \textup{BT}_R(\underline{P})$ be the associated formal $p$-divisible group. As in the previous section, the tensors $s_i$, viewed as morphisms $\zz_p \to \Lambda(i)$, induce morphisms of crystals \begin{align*} t_i := \mathbb{D}(\mathscr{P})(s_i): \mathbbm{1} \to \mathbb{D}(\mathscr{P})^{\eta(i)}. \end{align*} Following the notation of the previous section, we write $\mathbb{D}(X)(i)= \mathbb{D}(X)^{\otimes m_i} \otimes (\mathbb{D}(X)^\vee)^{\otimes n_i}$. If $B \to A$ is a $p$-adic PD-thickening, then $\mathbb{D}(X)_{B/A}$ is $p$-adically complete and separated, since the same holds for any finite projective $B$-module.
The same is true of $\left(\mathbb{D}(X)_{B/A}\right)(i)$, and hence the natural map \begin{align}\label{eq-mini} \left(\mathbb{D}(X)_{B/A}\right)(i) \to \left(\mathbb{D}(X)(i)\right)_{B/A} \end{align} is an isomorphism. By combining (\ref{eq-eta}) with Lemma \ref{lem-zinkdieudonne} and applying the compatibility of $\mathbb{D}(\mathscr{P})$ with tensor products, we have $\mathbb{D}(\mathscr{P})^{\eta(i)} \cong \mathbb{D}(X)(i)$, and hence we obtain morphisms of crystals \begin{align}\label{eq-tensors} t_i: \mathbbm{1} \to \mathbb{D}(X)^\otimes \end{align} for each $i$. By Lemma \ref{lem-zinkdieudonne}, it is equivalent to view $t_i$ as a morphism $\mathbbm{1} \to \mathbb{D}(\underline{P})^\otimes$. \begin{lemma}\label{lem-tensors} The pair $(X,\underline{t})$ $($resp. $(\underline{P}, \underline{t})$$)$ defines a formal $p$-divisible group $($resp. nilpotent Zink display$)$ with $(\underline{s},\mu)$-structure. \end{lemma} \begin{proof} It is enough to prove that $(X, \underline{t})$ is a $p$-divisible group with $(\underline{s},\mu)$-structure. Let us write $\underline{M}^\pi$ for the evaluation of $\mathscr{P}$ on a representation $(V,\pi)$. We have isomorphisms $\mathbb{D}(X) \cong \mathbb{D}(\Z_{\eta,R}(\mathscr{P})) \cong \mathbb{D}(\mathscr{P})^\eta$, which all preserve the respective filtrations (see (\ref{eq-samehodge}) and Lemma \ref{lem-zinkdieudonne}), and since the Hodge filtrations of displays are compatible with tensor products (see Remark \ref{rmk-hodge}), we can conclude that the filtration on $\mathbb{D}(X)(i)_{R/R}$ induced from the filtration on $\mathbb{D}(X)_{R/R}$ agrees with the Hodge filtration of $\underline{M}^{\eta(i)}$. Similarly, the filtration on $\mathbbm{1}$ agrees with the one on the unit display $\underline{S} = (S,\sigma)$, so it is enough to show the map \begin{align*} (t_i)_{R/R}: R \to \tau^\ast M^{\eta(i)} \otimes_{W(R)} R \end{align*} preserves the filtrations of the corresponding displays. But the map $(t_i)_{R/R}$ is defined as the reduction of the map $\underline{S} \to \underline{M}^{\eta(i)}$ induced by $s_i$, so this is automatic (see again Remark \ref{rmk-hodge}). Frobenius equivariance follows from Proposition \ref{prop-frobeq} and the comparison of crystals, so we can conclude $\underline{t}$ is a collection of crystalline Tate tensors on $\underline{\mathbb{D}(X)}$ over $\Spec R$. Condition (i) of Definition \ref{def-smu} follows from Remark \ref{lem-banalcrystal} since the lift $\mathscr{P}_{B/A}$ of $\mathscr{P}_{\underline{W}(A)}$ is \'etale-locally banal for any PD-thickening $B \to A$ over $R$. For condition (ii), choose a faithfully flat \'etale extension $R \to R'$ such that $\mathscr{P}_{\underline{W}(R')}$ is banal, given by some $U \in L^+G(R')$. Then we have an isomorphism \begin{align*} \mathbb{D}(X_{R'})_{R'/R'} \cong \mathbb{D}(\mathscr{P}_{U})^\eta_{R'/R'} \cong \Lambda \otimes_{\zz_p} R' \end{align*} which preserves the filtrations, so it is enough to show $\textup{Fil}^1(\mathbb{D}(\mathscr{P}_U)^\eta) = \Lambda^1 \otimes_{W(k_0)} R'$. But if $\mathscr{P}_U(\Lambda, \eta) = \underline{M}^\eta$, then \begin{align*} \textup{Fil}^1(\mathbb{D}(\mathscr{P}_U)^\eta) = \textup{im}(\bar{\theta_1}), \end{align*} where $\bar{\theta}_1$ is the map $M_1^\eta \to \tau^\ast M^\eta \to \tau^\ast M^\eta \otimes_{W(R')} R'$. 
Since $\mathscr{P}_U$ is banal, we have \begin{align*} M_1^\eta = (\Lambda^0 \otimes_{W(k_0)} I_{R'}) \oplus (\Lambda^1 \otimes_{W(k_0)} W(R')), \end{align*} and $\bar{\theta}_1$ is reduction modulo $I_{R'}$, so the result follows. \end{proof} If $\mathscr{P} \to \mathscr{P}'$ is a morphism of $(G,\mu)$-displays over $\underline{W}(R)$ which are nilpotent with respect to $\eta$, then it follows from the natural transformation property that the resulting morphisms $\underline{P} \to \underline{P}'$ and $X \to X'$ are compatible with the $(\underline{s},\mu)$-structure. Hence we obtain functors \begin{align}\label{eq-BTG} \textup{BT}_{\underline{G},R}: G\textup{-}\textup{Disp}_{\underline{W},\mu}^{\otimes, \eta}(R) \to \textup{fpdiv}_{\underline{s},\mu}(R), \ \mathscr{P} \mapsto (\textup{BT}_R(\Z_{\eta,R}(\mathscr{P})), \underline{t}), \end{align} and \begin{align}\label{eq-ZG} \Z_{\underline{G},R}: G\textup{-}\textup{Disp}_{\underline{W},\mu}^{\otimes, \eta}(R) \to \textup{nZink}_{\underline{s},\mu}(R), \ \mathscr{P} \mapsto (\Z_{\eta,R}(\mathscr{P}), \underline{t}). \end{align} Let $M$ be a finite projective graded $W(R)^\oplus$-module, and suppose we are given a collection $\underline{u} = (u_1, \dots, u_r)$ of $W(R)$-module homomorphisms $u_i: W(R) \to (\tau^\ast M)(i)$. Define \begin{align*} Q_{M,\underline{u}} = \underline{\textup{Isom}}^0((\Lambda_{W(R)^\oplus},\underline{s} \otimes 1), (M, \underline{u})) \end{align*} to be the fpqc sheaf on $\Spec R$ of isomorphisms of graded $W(R)^\oplus$-modules $\psi: \Lambda_{W(R)^\oplus} \xrightarrow{\sim} M$ which respect the tensors after pulling back by $\tau$, in the sense that $(\tau^\ast \psi)(i) \circ (s_i \otimes 1) = u_i$ for every $i$. We will denote such an isomorphism by $(\Lambda_{W(R)^\oplus}, \underline{s}\otimes 1) \xrightarrow{\sim} (M,\underline{u})$. We write $\underline{\textup{Aut}}^0(\Lambda_{W(R)^\oplus},\underline{s} \otimes 1)$ for the sheaf $Q_{\Lambda_{W(R)^\oplus},\underline{s}\otimes 1}$. It follows from Lemma \ref{lem-checkhodge} below that we have an identification \begin{align}\label{eq-aut} \underline{\textup{Aut}}^0(\Lambda_{W(R)^\oplus},\underline{s}\otimes 1) = L^+_\mu G. \end{align} \begin{lemma}\label{lem-checkhodge} Let $g \in L^+_{\eta\circ\mu}\textup{GL}(\Lambda)(R)$. Then $g \in L^+_\mu G(R)$ if and only if $\tau(g) \in L^+G(R)$. \end{lemma} \begin{proof} If $g \in L^+_\mu G(R)$, then $\tau(g) \in L^+G(R)$ by naturality of $\tau$. Conversely suppose $\tau(g) \in L^+G(R)$. Let $\mathcal{O}_{\textup{GL}(\Lambda)}$ and $\mathcal{O}_{G}$ be the coordinate rings of $\textup{GL}(\Lambda)$ and $G$, respectively, so $\eta: G \hookrightarrow \textup{GL}(\Lambda)$ induces a surjection $\mathcal{O}_{\textup{GL}(\Lambda)} \twoheadrightarrow \mathcal{O}_G$ which necessarily preserves the gradings induced by the cocharacters. Then $g$ corresponds to a map of graded rings $g: \mathcal{O}_{\textup{GL}(\Lambda)} \to W(R)^\oplus$, and to show $g \in L^+G(R)$, we need to show $g$ factors through $\mathcal{O}_G$, i.e. that $g$ vanishes on $K = \ker(\mathcal{O}_{\textup{GL}(\Lambda)} \to \mathcal{O}_G)$. Since $\tau(g) \in L^+G(R)$, we know $\tau \circ g$ factors through $\mathcal{O}_G$. Let $\mathcal{O}_{\textup{GL}(\Lambda),n}$ denote the $n$th graded piece of $\mathcal{O}_{\textup{GL}(\Lambda)}$ with respect to the $\mathbb{G}_m$-action induced by $\eta\circ \mu$, and write $K_n = K \cap \mathcal{O}_{\textup{GL}(\Lambda),n}$. For $n \le 1$, $\tau_n$ is injective, so $g$ necessarily vanishes on $K_n$.
For $n > 1$, $\mathcal{O}_{\textup{GL}(\Lambda),n}$ is generated by products of elements in $\mathcal{O}_{\textup{GL}(\Lambda),1}$ because $\eta \circ \mu$ is minuscule. Since $\tau$ is multiplicative and $\tau_1$ is the natural inclusion $I_R \hookrightarrow W(R)$, the vanishing of $\tau \circ g$ on $K_n$ implies that $g$ vanishes on $K_n$ for $n > 1$ as well. \end{proof} Suppose $(\underline{P}, \underline{t})$ is a nilpotent Zink display with $(\underline{s},\mu)$-structure over $R$, and let $\underline{M} = M_{\underline{W}(R)}(\underline{P})$ be the $1$-display associated to $\underline{P}$ as in Lemma \ref{lem-windows} (here we use notation as in (\ref{eq-displaytowindow})). Then by \cite[Prop. 53]{Zink2002}, we have \begin{align*} \mathbb{D}(\underline{P})_{W(R)/R} = P = \tau^\ast M. \end{align*} It follows that we can identify $(\mathbb{D}(\underline{P})_{W(R)/R})(i) = (\tau^\ast M)(i)$, so for each $i$ the evaluation of $t_i$ on $W(R) \to R$ determines a $W(R)$-module homomorphism \begin{align*} (t_i)_{W(R)}: W(R) \to (\tau^\ast M)(i). \end{align*} Notice here that we are using the natural isomorphism (\ref{eq-mini}) to identify $\left(\mathbb{D}(\underline{P})_{W(R)/R}\right)(i)$ with $\left(\mathbb{D}(\underline{P})(i)\right)_{W(R)/R}$. \begin{lemma}\label{lem-torsor} Let $(\underline{P},\underline{t})$ be a nilpotent Zink display with $(\underline{s},\mu)$-structure, and let $\underline{M} = M_{\underline{W}(R)}(\underline{P})$ be the 1-display associated to $\underline{P}$. Then the fpqc sheaf \begin{align*} Q_{M,\underline{t}_{W(R)}} = \underline{\textup{Isom}}^0((\Lambda_{W(R)^\oplus}, \underline{s}\otimes 1), (M, \underline{t}_{W(R)})) \end{align*} is an \'etale-locally trivial $L^+_\mu G$-torsor over $\Spec R$. \end{lemma} \begin{proof} By (\ref{eq-aut}) it is enough to show that, \'etale locally, there is an isomorphism $\psi: (\Lambda_{W(R)^\oplus},\underline{s} \otimes 1) \xrightarrow{\sim} (M,\underline{t}_{W(R)})$. Moreover, letting \begin{align*} \Fil \Lambda_{W(R)} := I(R) (\Lambda^0 \otimes_{W(k_0)} W(R)) \oplus (\Lambda^1 \otimes_{W(k_0)} W(R)), \end{align*} we see that it is enough to show that, \'etale locally, there is an isomorphism $\overline{\psi}: \Lambda_{W(R)} \xrightarrow{\sim} P$ which sends $\Fil \Lambda_{W(R)}$ into $\Fil P$ and which respects the tensors. Condition (ii) in Definition \ref{def-smu} implies that, after replacing $R$ by some faithfully flat \'etale extension, we have an isomorphism $\Lambda_R \xrightarrow{\sim} \mathbb{D}(\underline{P})_{R/R}$ which sends $\Lambda^1_R$ into $\textup{Fil}^1(\mathbb{D}(\underline{P}))$ and which respects the tensors. Recalling the identifications \begin{align*} \mathbb{D}(\underline{P})_{R/R} = P / I(R) P \text{ and } \textup{Fil}^1(\mathbb{D}(\underline{P})) = \Fil P / I(R) P, \end{align*} we reduce the proof to showing that any such isomorphism lifts to an isomorphism $\Lambda_{W(R)} \xrightarrow{\sim} P$ which respects the tensors, since any lift will automatically preserve the filtrations. Define $Y$ to be the $W(R)$-scheme whose points in a $W(R)$-algebra $R'$ are isomorphisms $\Lambda_{R'} \xrightarrow{\sim} \mathbb{D}(\underline{P})_{W(R)/R} \otimes_{W(R)} R'$ which respect the tensors, i.e. \begin{align*} Y(R') = \textup{Isom}((\Lambda_{R'}, \underline{s}\otimes 1), (\mathbb{D}(\underline{P})_{W(R)/R} \otimes_{W(R)} R', \underline{t}_{W(R)} \otimes \id_{R'})). \end{align*} We need to show that the natural map $Y(W(R)) \to Y(R)$ is surjective.
For every $n$, define the analogous $W_n(R)$-scheme $Y_n$, so for any $W_n(R)$-algebra $R'$ we have \begin{align*} Y_n(R') = \textup{Isom}((\Lambda_{R'}, \underline{s}\otimes 1), (\mathbb{D}(\underline{P})_{W_n(R)/R} \otimes_{W_n(R)} R', \underline{t}_{W_n(R)} \otimes \id_{R'})). \end{align*} Then, in particular, $Y_n(R') = Y(R')$ for all $W_n(R)$-algebras $R'$, and condition (i) of Definition \ref{def-smu} implies that $Y_n$ is an fppf locally trivial $G_{W_n(R)}$-torsor. In particular, $Y_n$ is formally smooth over $W_n(R)$. Since $W_n(R) \to W_{n-1}(R)$ has nilpotent kernel for all $p$-nilpotent $W(k_0)$-algebras $R$, it follows that the natural map $Y_n(W_n(R)) \to Y_n(W_{n-1}(R))$ is surjective for all $n$. Hence $Y(W_n(R)) \to Y(W_{n-1}(R))$ is surjective for all $n$, and therefore so too is $Y(W(R)) \to Y(R)$. \end{proof} Continuing with the notation of Lemma \ref{lem-torsor}, suppose $\beta \in Q_{M,\underline{t}_{W(R)}}(R)$. Then $\tau^\ast \beta$ defines an isomorphism $(\Lambda_{W(R)}, \Fil \Lambda_{W(R)}, \underline{s}) \xrightarrow{\sim} (P, \Fil P, \underline{t}_{W(R)})$, where $P = \tau^\ast M$. The composition \begin{align}\label{eq-Ubeta} \Lambda_{W(R)} \xrightarrow{\sim} \sigma^\ast \Lambda_{W(R)^\oplus} \xrightarrow{\sigma^\ast \beta} \sigma^\ast M \xrightarrow{F^\sharp} \tau^\ast M \xrightarrow{\tau^\ast \beta^{-1}} \Lambda_{W(R)} \end{align} determines an element $U_\beta \in \textup{GL}(\Lambda_{W(R)})$. Denote by $L_0$ and $L_1$ the images of $\Lambda_{W(R)}^0$ and $\Lambda_{W(R)}^1$ under $\tau^\ast \beta$, respectively, and let $\Psi = F_0 \res_{L_0} \oplus F_1 \res_{L_1}$. Then $\Psi$ is an $f$-linear automorphism, and $(L_0, L_1, \Psi)$ is a normal representation for $\underline{P}$. Because $\tau^\ast M = P$, we can identify $\Psi^\sharp$ with $F^\sharp$, and (\ref{eq-Ubeta}) becomes the composition \begin{align*} \Lambda_{W(R)} \xrightarrow{\sim} f^\ast \Lambda_{W(R)} \xrightarrow{f^\ast \tau^\ast \beta} f^\ast P \xrightarrow{\Psi^\sharp} P \xrightarrow{\tau^\ast\beta^{-1}} \Lambda_{W(R)}. \end{align*} \begin{lemma}\label{lem-Ubeta} In the situation above, $\eta(i)(U_\beta)(s_i \otimes 1) = s_i \otimes 1$ for all $i$, that is, $U_\beta \in G(W(R))$. \end{lemma} \begin{proof} Let $R_0 = R/pR$, and let $\underline{P}_0$ be the base change of $\underline{P}$ to $R_0$. Then $W(R)\to R_0$ is a $p$-adic PD-thickening, and we have a natural identification $\mathbb{D}(\underline{P})_{W(R)/R} = \mathbb{D}(\underline{P}_0)_{W(R)/R_0}$. By \cite[Prop. 53]{Zink2002}, we have $\mathbb{D}(\underline{P}_0)_{W(R)/R_0} \cong P$, and by (\ref{eq-frobcrystal}) there is an isomorphism \begin{align}\label{eq-frobtwist} \left(\phi^\ast \mathbb{D}(\underline{P}_0)\right)_{W(R)/R_0} \xrightarrow{\sim} f^\ast P. \end{align} This identification is compatible with the identifications $\phi^\ast \mathbb{D}(\underline{P}_0) \cong \mathbb{D}(\underline{P}_0^{(p)})$ and $\mathbb{D}(\underline{P}_0^{(p)}) \cong f^\ast P$ given earlier, so by \cite[Prop. 57]{Zink2002}, we can identify $\mathbb{F}_{W(R)/R_0}$ with the map $F_0^\sharp: f^\ast P \to P$. Since $W(R)\to R_0$ is a $p$-adic PD-thickening, we have a natural isomorphism \begin{align*} \left(\mathbb{D}(\underline{P}_0)_{W(R)/R_0}\right)(i) \xrightarrow{\sim} \mathbb{D}(\underline{P}_0)(i)_{W(R)/R_0} \end{align*} as in (\ref{eq-mini}). In turn, we have an identification of $W(R)[1/p]$-modules \begin{align}\label{eq-identify} \left(\mathbb{D}(\underline{P}_0)_{W(R)/R_0}\right)[1/p](i) \xrightarrow{\sim} \mathbb{D}(\underline{P}_0)(i)_{W(R)/R_0}[1/p].
\end{align} Denote by $F_{0,i}^\sharp$ the extension of $F_0^\sharp[1/p]$ to $f^\ast P(i)[1/p]$ (note that we must first invert $p$ in order to make $F_0^\sharp$ invertible). Using (\ref{eq-identify}), we can identify $F_{0,i}^\sharp$ with the evaluation of $\mathbb{F}$ on $\mathbb{D}(\underline{P}_0)(i)_{W(R)/R_0}[1/p]$. Using the identities $F_0^\sharp \circ V^\sharp = p \cdot \id_{P}$ and $V^\sharp \circ F_0^\sharp = p \cdot \id_{f^\ast P}$, we can rewrite $F_{0,i}^\sharp$ as \begin{align*} F_{0,i}^\sharp = (F_0^\sharp[1/p])^{\otimes m_i} \otimes p^{-n_i} ((V^\sharp)^t[1/p])^{\otimes n_i}. \end{align*} By a computation as in Lemma \ref{lem-explicitfrob} (see also \cite[Lem. 3.27]{Daniels2019}), we obtain \begin{align*} \beta^{-1} \circ (F_{0,i}^\sharp) \circ f^\ast \beta = \eta(i)\big (U_\beta\cdot\sigma(\mu)(p)\big)\circ(\id_\Lambda \otimes f)^\sharp. \end{align*} Abusing notation, denote by $(\phi^\ast t_i)_{W(R)}$ the composition \begin{align*} W(R) \xrightarrow{\sim} f^\ast W(R) \xrightarrow{\sim} \phi^\ast \mathbbm{1}_{W(R)/R_0} \xrightarrow{(\phi^\ast t_i)_{W(R)}} \phi^\ast \mathbb{D}(\underline{P}_0)_{W(R)/R_0}(i) \xrightarrow{\sim} f^\ast P(i). \end{align*} Then we have a commutative diagram \begin{center} \begin{tikzcd}[column sep = huge] W(R) \arrow[r, "(\phi^\ast t_i)_{W(R)}"] \arrow[dr, "(t_i)_{W(R)}"'] & f^\ast P(i)[1/p] \arrow[d, "F_{0,i}^\sharp"] \arrow[r, "f^\ast \tau^\ast\beta^{-1}"] & \Lambda_{W(R)}(i)[1/p] \arrow[d, "\eta(i)(U_\beta\sigma(\mu)(p))"] \\ & P(i)[1/p] \arrow[r, "\tau^\ast\beta^{-1}"] & \Lambda_{W(R)}(i)[1/p] \end{tikzcd} \end{center} Here the left-hand triangle commutes because $\underline{t}$ is a collection of crystalline Tate tensors (so each $t_i$ is Frobenius equivariant). We claim that the composition across the top and bottom of this diagram is given by $s_i\otimes 1$. For the bottom, this is because $\beta$ respects the tensors. For the top, we use the fact that $t_i$ is a morphism of crystals, so the isomorphism (\ref{eq-frobtwist}) identifies $(\phi^\ast t_i)_{W(R)}$ with $f^\ast (t_i)_{W(R)}$. The lemma follows because $s_i \otimes 1 \in \Lambda_{W(R)}^0(i)$, so $\eta(i)(U_\beta\cdot\sigma(\mu)(p))(s_i \otimes 1) = \eta(i)(U_\beta)(s_i \otimes 1)$. \end{proof} In the following lemma, we associate a $G$-display of type $\mu$ over $\underline{W}(R)$ to any nilpotent Zink display with $(\underline{s},\mu)$-structure $(\underline{P},\underline{t})$. Continue the notation of Lemma \ref{lem-torsor}, and denote by $\alpha_{\underline{M},\underline{t}_{W(R)}}$ the map $Q_{M,\underline{t}_{W(R)}} \to L^+G, \ \beta \mapsto U_\beta$ defined by Lemma \ref{lem-Ubeta}. \begin{lemma}\label{lem-Gdispmatching} The pair $(Q_{M,\underline{t}_{W(R)}}, \alpha_{\underline{M},\underline{t}_{W(R)}})$ determines a $G$-display of type $\mu$ over $\underline{W}(R)$. Moreover, if $\underline{M} = \mathscr{P}(\Lambda, \eta)$ for some $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(R)$, then evaluation on $(\Lambda, \eta)$ induces an isomorphism of $G$-displays of type $\mu$ \begin{align*} (Q_{\mathscr{P}},\alpha_{\mathscr{P}}) \xrightarrow{\sim} (Q_{M,\underline{t}_{W(R)}}, \alpha_{\underline{M},\underline{t}_{W(R)}}). \end{align*} \end{lemma} \begin{rmk} Here $(Q_{\mathscr{P}},\alpha_{\mathscr{P}})$ is the $G$-display of type $\mu$ over $\underline{W}(R)$ associated to $\mathscr{P}$ as in (\ref{eq-morphism}) (see also \cite[Constr. 3.15]{Daniels2019}).
\end{rmk} \begin{proof} For the first assertion, we note that if $h \in L^+_\mu G(R)$, then $U_{\beta \cdot h}$ is the composition $\tau^\ast h^{-1} \circ \tau^\ast \beta^{-1} \circ F^\sharp \circ \sigma^\ast h \circ \sigma^\ast \beta$, which is equal to $\tau(h)^{-1} \cdot U_{\beta} \cdot \sigma(h)$. The second assertion follows from the proof of \cite[Lem. 5.14]{Daniels2019}. \end{proof} We can now prove the first part of Theorem \ref{thm-1}. \begin{prop}\label{prop-fullyfaithful} If $R$ is in \textup{Nilp}$_{W(k_0)}$, then the functor $\textup{BT}_{\underline{G},R}$ $($see $($\ref{eq-BTG}$))$ is fully faithful. \end{prop} \begin{proof} By Remark \ref{rmk-equiv} it is enough to show the functor $\Z_{\underline{G},R}$ is fully faithful. The proof of this is formally very similar to the proof of \cite[Thm. 5.15]{Daniels2019}. Indeed, faithfulness of $\Z_{\underline{G},R}$ follows exactly as in \textit{loc. cit.}. Namely, the problem reduces by descent to faithfulness of the representation $\eta$. For fullness, if $\mathscr{P}$ and $\mathscr{P}'$ are $(G,\mu)$-displays over $\underline{W}(R)$ which are nilpotent with respect to $\eta$, and $\varphi: \Z_{\underline{G},R}(\mathscr{P}) \to \Z_{\underline{G},R}(\mathscr{P}')$ is a morphism of nilpotent Zink displays with $(\underline{s},\mu)$-structure, one uses Lemma \ref{lem-Gdispmatching} to obtain a morphism $(Q_{\mathscr{P}}, \alpha_{\mathscr{P}}) \to (Q_{\mathscr{P}'},\alpha_{\mathscr{P}'})$ of $G$-displays of type $\mu$ over $\underline{W}(R)$, which is induced from a unique morphism $\xi: \mathscr{P} \to \mathscr{P}'$. As in the proof of \cite[Thm. 5.15]{Daniels2019}, we have $\Z_{\eta,R}(\xi) = \xi^\eta = \varphi$. \end{proof} For the convenience of the reader, we recall the following definition. \begin{Def}\label{def-pbasis} Let $R$ be an $\mathbb{F}_p$-algebra. A \textit{$p$-basis} for $R$ is a subset $\{x_\alpha\}$ of $R$ such that the set of monomials $x^J$ for $J$ running over the multi-indices $J = (i_\alpha), 0 \le i_\alpha < p$, provides a basis for $R$ viewed as an $R$-module over itself via the Frobenius. \end{Def} For example, any field of characteristic $p$ or any regular local ring which is essentially of finite type over a field of characteristic $p$ has a $p$-basis (see \cite[Ex. 1.1.2]{BM1990}). We say that an $\mathbb{F}_p$-algebra $R$ has a $p$-basis \'etale locally if there is some faithfully flat \'etale ring homomorphism $R \to R'$ where $R'$ has a $p$-basis. The following lemma is critical in the remainder of the proof of Theorem \ref{thm-1}. \begin{lemma}\label{lem-tensorsagree} Suppose $R$ is in $\textup{Nilp}_{W(k_0)}$, and that $R_0 = R/pR$ has a $p$-basis \'etale locally. Let $\mathbb{D}$ and $\mathbb{D}'$ be crystals in locally free $\mathcal{O}_{\Spec R/ W(k_0)}$-modules, and let \begin{align*} t_1, t_2: \mathbb{D} \to \mathbb{D}' \end{align*} be two morphisms of crystals. Then $t_1 = t_2$ if and only if their evaluations on $W(R) \to R$ agree. \end{lemma} \begin{proof} One direction holds by definition, so we only prove that $t_1 = t_2$ if their evaluations on $W(R) \to R$ agree. The property of agreeing on $W(R) \to R$ is stable under base change, so by the equivalence (\ref{eq-crystalequiv}), we can replace $R$ by $R_0$. Moreover, by Lemma \ref{lem-etalelocal}, we may assume that $R$ has a $p$-basis. In this case, passage to the perfect closure is faithful for locally free crystals (see \cite[Ex.
1.3.5 (ii)]{BM1990}), so we may in turn assume $R$ is perfect, where the result follows because evaluation on $W(R) \to R$ is faithful for perfect rings, see Remark \ref{rmk-perfectequiv}. \end{proof} \begin{thm}\label{mainthm} Let $R$ be in $\textup{Nilp}_{W(k_0)}$, and suppose $R_0 = R/pR$ has a $p$-basis \'etale locally. Then the functor $\textup{BT}_{\underline{G},R}$ is an equivalence. \end{thm} \begin{proof} Again, it is enough to show $\Z_{\underline{G},R}$ is an equivalence. Let $(\underline{P},\underline{t})$ be a nilpotent Zink display with $(\underline{s},\mu)$-structure over $R$. By Proposition \ref{prop-fullyfaithful} and descent it is enough to show that $(\underline{P}, \underline{t})$ is \'etale locally in the essential image of $\Z_{\underline{G},R}$. Let $\underline{M} = M_{\underline{W}(R)}(\underline{P})$ be the 1-display corresponding to $\underline{P}$. By Lemma \ref{lem-torsor}, $Q_{M,\underline{t}_{W(R)}}(R')$ has a section $\beta$ for some \'etale faithfully flat extension $R'$ of $R$. The composition $U_\beta = \tau^\ast\beta^{-1} \circ F_{R'}^\sharp \circ \sigma^\ast \beta$ is an element of $L^+G(R')$ by Lemma \ref{lem-Ubeta}, and it follows that $\beta$ determines an isomorphism between $\underline{P}_{R'}$ and $\Z_{\eta,R'}(\mathscr{P}_{U_\beta})$. It remains to show the induced isomorphism on Dieudonn\'e crystals preserves the tensors. By construction the tensors agree after evaluation on $W(R') \to R'$, and since $R_0$ has a $p$-basis \'etale locally, the same holds for $R_0'= R'/pR'$, so the result follows from Lemma \ref{lem-tensorsagree}. \end{proof} We close this section by characterizing the essential image of $\textup{BT}_{\underline{G},R}$ in general. \begin{prop}\label{prop-essimg} Let $R$ be in $\textup{Nilp}_{W(k_0)}$. Then a formal $p$-divisible group with $(\underline{s},\mu)$-structure $(X, \underline{t})$ is in the essential image of $\textup{BT}_{\underline{G},R}$ if and only if there is an \'etale faithfully flat extension $R \to R'$ and an isomorphism \begin{align*} \lambda: \Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R'/W(k_0)} \xrightarrow{\sim} \mathbb{D}(X)_{R'} \end{align*} such that the Hodge filtration $\textup{Fil}^1(\mathbb{D}(X)_{R'}) \subset \mathbb{D}(X)_{R'/R'} \xrightarrow{\sim} \Lambda \otimes_{\zz_p} R'$ is induced by $\mu$, and such that $\lambda$ identifies $\underline{s} \otimes 1$ with $\underline{t}$. \end{prop} \begin{proof} Let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(R)$ which is nilpotent with respect to $\eta$, and let $(X,\underline{t}) = \textup{BT}_{\underline{G},R}(\mathscr{P})$. By Remark \ref{lem-banalcrystal}, for any faithfully flat \'etale extension $R \to R'$ such that $\mathscr{P}_{R'}$ is banal, we have an isomorphism of tensor functors $\mathbb{D}_G \xrightarrow{\sim} \mathbb{D}(\mathscr{P}_{R'})$. Evaluating on $(\Lambda, \eta)$, we have $\lambda: \Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R'/W(k_0)} \xrightarrow{\sim} \mathbb{D}(X)_{R'}$, and $\lambda$ identifies $\underline{t} = \mathbb{D}(\mathscr{P}_{R'})(\underline{s})$ with $\mathbb{D}_G(\underline{s}) = \underline{s} \otimes 1$. On the other hand, let $(X,\underline{t})$ and $R'$ satisfy the hypotheses of the proposition. By \'etale descent we may replace $R$ with $R'$ and assume that we have an isomorphism \begin{align*} \lambda: \Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R/ W(k_0)} \xrightarrow{\sim} \mathbb{D}(X) \end{align*} under which the Hodge filtration is induced by $\mu$, and which identifies $\underline{s} \otimes 1$ with $\underline{t}$.
If $\underline{M}$ is the 1-display corresponding to $X$, then $\beta = \lambda_{W(R)/R}$ determines a section of $Q_{M,\underline{t}_{W(R)}}(R)$, and as in the proof of Theorem \ref{mainthm}, $\beta$ defines an isomorphism $\textup{BT}_{\underline{G},R}(\mathscr{P}_{U_\beta}) \xrightarrow{\sim} (X,\underline{t})$. \end{proof} \subsection{RZ spaces of Hodge type}\label{sub-rzspaces} In this section we give an explicit isomorphism between the Rapoport-Zink functor of Hodge type defined using $(G,\mu)$-displays as in \cite{BP2017} and \cite{Daniels2019}, and the one defined using crystalline Tate tensors as in \cite{Kim2018} and \cite{HP2017}. We begin by recalling the definition of $G$-quasi-isogenies as in \cite{Daniels2019}, which are used to define the Rapoport-Zink functor in terms of $(G,\mu)$-displays. If $R$ is a $\zz_p$-algebra, the Frobenius for $W(R)$ naturally extends to $W(R)[1/p]$. An \textit{isodisplay} over $R$ is a pair $(N,\varphi)$ where $N$ is a finitely generated projective $W(R)[1/p]$-module and $\varphi:N \to N$ is an $f$-linear isomorphism. If $\underline{M}$ is a display over $\underline{W}(R)$ of depth $d$, then $(\tau^\ast M, \varphi)$ is an isodisplay, where $\varphi = p^d \cdot F_d \circ \theta_d^{-1}$ (here $\theta_d: M_d \to \tau^\ast M$ is the natural map, see \textsection \ref{sub-frames}). The assignment $\underline{M} \mapsto (\tau^\ast M, \varphi)$ determines an exact tensor functor from displays over $\underline{W}(R)$ to isodisplays over $R$. A \textit{quasi-isogeny} of displays over $\underline{W}(R)$ is an isomorphism of their corresponding isodisplays, and a quasi-isogeny is an isogeny if it is induced from a morphism of displays. These notions naturally extend to $G$-displays. Indeed, a \textit{$G$-isodisplay} over $R$ is an exact tensor functor $\textup{Rep}_{\zz_p}(G) \to \textup{Isodisp}(R)$. Any $G$-display $\mathscr{P}$ naturally determines a $G$-isodisplay $\mathscr{P}[1/p]$ by composition of functors, and a \textit{$G$-quasi-isogeny} between two $G$-displays is an isomorphism of their corresponding $G$-isodisplays. See \cite[\textsection 3.4]{Daniels2019} for more details. Let us now recall the definition of local Shimura data of Hodge type as in \cite{HP2017} and \cite{BP2017} (see also \cite{Daniels2019}). Let $k$ be an algebraic closure of $\mathbb{F}_p$, and let $W(k)$ be the Witt vectors over $k$. Write $K = W(k)[1/p]$, and let $\bar{K}$ be an algebraic closure of $K$. We will write $\sigma$ for the extension of the Frobenius of $W(k)$ to $K$ (hopefully this causes no confusion with the previous definition of $\sigma$). Assume that $G$ is a connected reductive group scheme over $\zz_p$, and let $(\{\mu\},[b])$ be a pair such that \begin{itemize} \item $\{\mu\}$ is a $G(\bar{K})$-conjugacy class of cocharacters ${\mathbb{G}_m}_{\bar{K}} \to G_{\bar{K}}$; \item $[b]$ is a $\sigma$-conjugacy class of elements $b \in G(K)$. \end{itemize} The local reflex field is the field of definition $E$ of the conjugacy class $\{\mu\}$. Because $G$ is reductive over $\zz_p$, $E$ is a subfield of $K$ (a priori, $E \subset \bar{K}$), and by \cite[Lem. 1.1.3]{Kottwitz1984}, there is a cocharacter $\mu \in \{\mu\}$ which is defined over $E$. Moreover, we can find a representative $\mu$ which extends to an integral cocharacter defined over the valuation ring $\mathcal{O}_E$ of $E$. Note that if $k_E$ is the residue field of $\mathcal{O}_E$, then $k_E$ is finite, $\mathcal{O}_E = W(k_E)$, and $E = W(k_E)[1/p]$.
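To illustrate in the simplest case (we will not use this example elsewhere): if $G = \textup{GL}_h$ and $\{\mu\}$ is the conjugacy class of the cocharacter \begin{align*} a \mapsto \textup{diag}(1^{(d)}, a^{(h-d)}), \end{align*} then $\{\mu\}$ is already defined over $\qq_p$, so $E = \qq_p$, $\mathcal{O}_E = \zz_p$, and $k_E = \mathbb{F}_p$.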
We say the triple $(G,\{\mu\},[b])$ is a \textit{local unramified Shimura datum} if $\{\mu\}$ is minuscule and for some (or equivalently, any) integral representative $\mu$ of $\{\mu\}$, the $\sigma$-conjugacy class $[b]$ has a representative \begin{align*} b \in G(W(k))\sigma(\mu)(p)G(W(k)). \end{align*} If these assumptions are satisfied, then we can find an integral representative $\mu$ of $\{\mu\}$ defined over $\mathcal{O}_E$ and a representative $b$ of $[b]$ such that $b = u \sigma(\mu)(p)$ for some $u \in L^+G(k)$. Such a pair $(\mu,b)$ will be called a \textit{framing pair}. If $(\mu,b)$ is a framing pair, then we associate to $(\mu,b)$ the \textit{framing object} $\mathscr{P}_0 := \mathscr{P}_u$ where $u \in L^+G(k)$ is the unique element such that $b = u\sigma(\mu)(p)$, and $\mathscr{P}_u$ is defined as in Proposition \ref{prop-banal}. \begin{Def} Fix a framing pair $(\mu,b)$ for $(G,\{\mu\}, [b])$, and let $\mathscr{P}_0$ be the associated framing object. The \textit{display RZ-functor} associated to $(G,\mu,b)$ is the functor on \textup{Nilp}$_{W(k)}$ which assigns to a $p$-nilpotent $W(k)$-algebra $R$ the set of isomorphism classes of pairs $(\mathscr{P}, \rho)$, where \begin{itemize} \item $\mathscr{P}$ is a $(G,\mu)$-display over $R$, \item $\rho: \mathscr{P}_{R/pR} \dashrightarrow (\mathscr{P}_0)_{R/pR}$ is a $G$-quasi-isogeny. \end{itemize} Denote the display RZ-functor associated to $(G,\mu,b)$ by \textup{RZ}$^{\text{disp}}_{G,\mu,b}$. \end{Def} Let $\textup{Nilp}^\text{fsm}_{W(k)}$ denote the category of adic $W(k)$-algebras in which $p$ is nilpotent, and which are formally finitely generated and formally smooth over $W(k)/p^nW(k)$ for some $n \ge 1$. We extend $\textup{RZ}_{G,\mu,b}^\textup{disp}$ to a functor $\textup{RZ}_{G,\mu,b}^\textup{disp,fsm}$ on $\textup{Nilp}^\text{fsm}_{W(k)}$ by defining \begin{align*} \textup{RZ}_{G,\mu,b}^\textup{disp,fsm}(A) := \varprojlim_n \textup{RZ}_{G,\mu,b}^\textup{disp}(A /I^n), \end{align*} where $I$ is an ideal of definition of $A$. \begin{rmk}\label{rmk-RZdispfsm} Let $A \in \textup{Nilp}_{W(k)}^\textup{fsm}$, and suppose $I$ is an ideal of definition for $A$. Define a $(G,\mu)$-display over $\Spf A$ to be a compatible system $(\mathscr{P}_n)_n$ of $(G,\mu)$-displays $\mathscr{P}_n$ over $\underline{W}(A/I^n)$. Likewise, a $G$-quasi-isogeny over $\Spf A$ is a compatible system $(\rho_n)_n$ of $G$-quasi-isogenies over $A/I^n$. With these definitions, we see that $\textup{RZ}_{G,\mu,b}^\textup{disp,fsm}(A)$ is the set of isomorphism classes of pairs $((\mathscr{P}_n)_n,(\rho_n)_n)$, with $(\mathscr{P}_n)_n$ a $(G,\mu)$-display over $\Spf A$, and $(\rho_n)_n$ a $G$-quasi-isogeny $\mathscr{P}_{A/pA} \dashrightarrow (\mathscr{P}_0)_{A/pA}$ defined over $\Spf A/pA$. In fact, by \cite[Prop. 3.2.11]{BP2017}, the categories of $(G,\mu)$-displays over $\underline{W}(A)$ and over $\Spf A$ are equivalent, so it is equivalent to consider pairs $(\mathscr{P},(\rho_n)_n)$, where $\mathscr{P}$ is a $(G,\mu)$-display over $\underline{W}(A)$ and $(\rho_n)_n$ is a $G$-quasi-isogeny over $\Spf A/pA$. \end{rmk} Let $(G,\mu)$ be of Hodge type as in Definition \ref{def-hodgetype}, with Hodge embedding $\eta: G \hookrightarrow \textup{GL}(\Lambda)$. Suppose $(\mu,b)$ is a framing pair, and let $\mathscr{P}_0$ be the framing object given by $u \in L^+G(k)$, so $b = u\sigma(\mu)(p)$. Then $G$ is cut out by some collection of tensors $\underline{s}$, and $\underline{G} = (G,\mu,\Lambda,\eta,\underline{s})$ is a local Hodge embedding datum.
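\begin{rmk} For orientation, it may help to keep in mind the basic example, in which all of the structure above is classical: take $G = \textup{GL}(\Lambda)$ with $\eta = \id$ and an empty collection of tensors $\underline{s}$. A $(G,\mu)$-display is then just a display, a $G$-quasi-isogeny is a quasi-isogeny of displays, the $(\underline{s},\mu)$-structure carries no extra information, and one expects $\textup{RZ}^{\textup{disp}}_{G,\mu,b}$ to recover the classical Rapoport-Zink functor parametrizing deformations of the framing object up to quasi-isogeny. The constructions below may be viewed as tensor-decorated refinements of this case; compare the specialization to $G = \textup{GL}_h$ in \textsection \ref{sub-deformations}. \end{rmk}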
For the remainder of this section we will assume $\mathscr{P}_0$ is nilpotent with respect to $\eta$. Then by \cite[Thm. 5.1.3]{BP2017}, the restriction of RZ$^{\textup{disp}}_{G,\mu,b}$ to Noetherian algebras in $\textup{Nilp}_{W(k)}$ is representable by a formal scheme $\textup{RZ}^\textup{BP}_{G,\mu,b}$ which is formally smooth and formally locally of finite type over $W(k)$. Applying $\textup{BT}_{\underline{G},k}$ to $\mathscr{P}_0$, we obtain a formal $p$-divisible group with $(\underline{s},\mu)$-structure \begin{align*} (X_0, \underline{t_0}) = (\textup{BT}_k(\Z_k(\mathscr{P}_0)), \mathbb{D}(\mathscr{P}_0)(\underline{s})). \end{align*} \begin{Def}\label{def-RZpdiv} Let $\underline{G} = (G,\mu,\Lambda,\eta,\underline{s})$, $b$ and $(X_0,\underline{t_0})$ be as above. The \textit{$p$-divisible group RZ-functor} associated to the data $(\underline{G},b)$ is the functor on \textup{Nilp}$_{W(k)}$ which assigns to a $p$-nilpotent $W(k)$-algebra $R$ the set of isomorphism classes of triples $(X,\underline{t},\iota)$, where \begin{itemize} \item $(X,\underline{t})$ is a $p$-divisible group with $(\underline{s},\mu)$-structure over $R$, \item $\iota: X\otimes_R R/pR \dashrightarrow X_0 \otimes_k R/pR$ is a quasi-isogeny such that, for some nilpotent ideal $J \subset R$ with $p \in J$, the composition of $t_i$ with \begin{align*} \mathbb{D}(\iota_{R/J}): \mathbb{D}(X\otimes_R R/J)^\otimes[1/p] \xrightarrow{\sim} \mathbb{D}(X_0 \otimes_k R/J)^\otimes[1/p] \end{align*} is equal to $t_{0,i}$ for every $i$. \end{itemize} Denote the $p$-divisible group RZ-functor associated to $(\underline{G},b)$ by $\textup{RZ}^{p\text{-div}}_{\underline{G},b}$. \end{Def} We also extend $\textup{RZ}_{\underline{G},b}^{p\textup{-div}}$ to a functor $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$ on $\textup{Nilp}^\text{fsm}_{W(k)}$ by defining \begin{align*} \textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}(A) := \varprojlim_n \textup{RZ}_{\underline{G},b}^{p\textup{-div}}(A /I^n), \end{align*} where once again $I$ is an ideal of definition of $A$. \begin{rmk} As in the case of the display RZ-functor, the extension to $\textup{Nilp}_{W(k)}^\textup{fsm}$ can be thought of as classifying objects over $\Spf A$. More precisely, $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}(A)$ is the set of isomorphism classes of triples $(X,\underline{t},\iota)$, with $(X,\underline{t})$ a $p$-divisible group with $(\underline{s},\mu)$-structure over $\Spf A$, and $\iota = (\iota_n)_n$ a quasi-isogeny over $\Spf A$ such that for every $n$, $\mathbb{D}((\iota_n)_{A/I}) \circ t_i = t_{0,i}$, for all $i$. Since $(\iota_n)_n$ is a compatible system, it is equivalent to assume $\mathbb{D}(\iota_1) \circ t_i = t_{0,i}$ for all $i$. If $I$ is chosen with $p \in I$, then by rigidity of quasi-isogenies along with \cite[Lem. 2.4.4]{deJong1996} and the proof of \cite[Prop. 2.4.8]{deJong1996}, elements of $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}(A)$ correspond to triples $(X, \underline{t}, \iota)$, with $(X,\underline{t})$ a $p$-divisible group with $(\underline{s},\mu)$-structure over $\Spec A$ and \begin{align*} \iota: X \otimes_A A/I \dashrightarrow X_0 \otimes_k A/I \end{align*} a quasi-isogeny such that $\mathbb{D}(\iota) \circ t_i = t_{0,i}$ for all $i$ (see \cite[\textsection 2.3.6]{HP2017} for details). \end{rmk} Suppose $(\mathscr{P},\rho) \in \textup{RZ}^{\text{disp}}_{G,\mu,b}(R)$ for $R \in \textup{Nilp}_{W(k)}$.
Let us write \begin{align*} \mathscr{P}_{R/pR}(\Lambda, \eta) = \underline{M}^\eta \text{ and } (\mathscr{P}_0)_{R/pR}(\Lambda, \eta) = \underline{M_0}^\eta. \end{align*} By evaluating $\rho$ on $(\Lambda, \eta)$, we obtain a quasi-isogeny of 1-displays \begin{align*} \underline{M}^\eta \dashrightarrow \underline{M_0}^\eta. \end{align*} By \cite[Prop. 66]{Zink2002}, such a quasi-isogeny is equivalent to an invertible section of \begin{align*} \textup{Hom}_{\text{Disp}(\underline{W}(R))}(\underline{M}^\eta,\underline{M_0}^\eta)[1/p], \end{align*} so the functor $\textup{BT}$ induces a quasi-isogeny of $p$-divisible groups \begin{align*} \iota_\rho: \textup{BT}(\Z_{\eta}(\mathscr{P}_{R/pR})) \dashrightarrow \textup{BT}(\Z_{\eta}((\mathscr{P}_0)_{R/pR})). \end{align*} If $(\rho_n)_n$ is a $G$-quasi-isogeny defined over $\Spf A/pA$ for $A \in \textup{Nilp}_{W(k)}^\textup{fsm}$ with ideal of definition $I$ containing $p$, then by taking $R = A/I^n$ as $n$ varies we obtain a quasi-isogeny of $p$-divisible groups $(\iota_{\rho_n})_n$ defined over $\Spf A/pA$. \begin{lemma}\label{lem-RZnattrans} The assignment \begin{align*} (\mathscr{P}, (\rho_n)_n) \mapsto (\textup{BT}_{\underline{G},R}(\mathscr{P}), (\iota_{\rho_n})_n) \end{align*} determines a natural transformation $\Psi:\textup{RZ}^{\textup{disp,fsm}}_{G,\mu,b} \to \textup{RZ}^{p\textup{-div,fsm}}_{\underline{G},b}$ of functors on \textup{Nilp}$_{W(k)}^\textup{fsm}$. \end{lemma} \begin{proof} Let $\mathscr{P}$ be a $(G,\mu)$-display over $\underline{W}(A)$, and let $(\rho_n)_n$ be a $G$-quasi-isogeny over $\Spf A$ (cf. Remark \ref{rmk-RZdispfsm}). Suppose $I$ is an ideal of definition for $A$ with $p \in I$, and let $R = A/I$. Write $\rho = \rho_1$, and $\iota = \iota_{\rho}$, so $\iota = (\iota_{\rho_n})_{R}$ for every $n$. We need to show $\mathbb{D}(\iota) \circ t_i = t_{0,i}$ for every $i$. We claim it is enough to show this after evaluation on $W(R) \to R$. Indeed, by \cite[Lem. 3.2.8 and its proof]{HP2017}, since $R$ is finitely generated over $k$, it is enough to check the identity holds at a closed point in each connected component of $\Spec R$ (see also \cite[Rmk. 2.3.5 (d)]{HP2017}). But any field $k'$ of characteristic $p$ has a $p$-basis, so if the identity holds after evaluation on $W(k') \to k'$, then it holds over $\Spec k'$ by Lemma \ref{lem-tensorsagree}. Since $\rho$ is a natural transformation of functors $\rho: \mathscr{P}[1/p] \to (\mathscr{P}_0)_{R}[1/p]$, we have an identification \begin{align}\label{eq-identity0} \rho^{\eta(i)} \circ \mathscr{P}[1/p](s_i) = (\mathscr{P}_0)_{R}[1/p](s_i). \end{align} Moreover, by definition, we have $\mathscr{P}[1/p](s_i) = \tau^\ast \mathscr{P}(s_i)$, so (\ref{eq-identity0}) can be rewritten as \begin{align}\label{eq-identity} \rho^{\eta(i)} \circ \tau^\ast \mathscr{P}(s_i) = \tau^\ast(\mathscr{P}_0)_{R}(s_i). \end{align} By compatibility of tensor products and \cite[Prop. 53]{Zink2002}, we have \begin{align*} \mathbb{D}(\mathscr{P})^{\eta(i)}_{W(R)/R} \cong \mathbb{D}(\mathscr{P})^{\eta}_{W(R)/R}(i) \cong \tau^\ast M(i) = \tau^\ast M^{\eta(i)}. \end{align*} We claim that under this isomorphism $\tau^\ast \mathscr{P}(s_i)$ (resp. $\tau^\ast (\mathscr{P}_0)_R(s_i)$) is identified with $(t_i)_{W(R)}$ (resp. $(t_{0,i})_{W(R)}$) and $\rho^{\eta(i)}$ is identified with $\mathbb{D}(\iota)_{W(R)/R}(i)$. Once we have shown these claims, the result will follow from (\ref{eq-identity}). For the first claim, by Zink's Witt vector descent \cite[Prop. 
33]{Zink2002} it is enough to check the desired equality fpqc-locally on $\Spec R$, so we may assume $\mathscr{P}$ is banal. But in that case $\tau^\ast \mathscr{P}(s_i)$ is equal to $(t_i)_{W(R)}$ since both morphisms are identified with $s_i \otimes \id_{W(R)}: W(R) \to \Lambda(i) \otimes_{\zz_p} W(R)$. Likewise we can identify $\tau^\ast (\mathscr{P}_0)_R(s_i)$ with $(t_{0,i})_{W(R)}$. Let $X = \textup{BT}(\Z_\eta(\mathscr{P}))$, and $X_0 = \textup{BT}(\Z_\eta((\mathscr{P}_0)_R))$. By \cite[Prop. 66]{Zink2002}, in order to prove the second claim it is enough to assume $\rho^\eta$ is an isogeny of 1-displays. Then we need to check that the following diagram commutes: \begin{center} \begin{tikzcd} \mathbb{D}(X)_{W(R)/R} \arrow[r,"\sim"] \arrow[d, "\mathbb{D}(\iota)_{W(R)/R}"] & \tau^\ast M^\eta \arrow[d,"\rho^\eta"] \\ \mathbb{D}(X_0)_{W(R)/R} \arrow[r,"\sim"] & \tau^\ast M_0^\eta. \end{tikzcd} \end{center} Under the assumption that $\rho^\eta$ is an isogeny, we have $\iota = \textup{BT}(\rho^\eta)$. Moreover, $\mathbb{D}(\iota)_{W(R)/R} = \Phi(\iota)$ where $\Phi$ is Lau's functor from $p$-divisible groups to Zink displays (see \textsection \ref{sub-zinkcrystals}). The identification $\mathbb{D}(X)_{W(R)/R} \xrightarrow{\sim} \tau^\ast M^\eta$ extends to the identification $\Phi(\textup{BT}(\Z_\eta(\mathscr{P}))) \xrightarrow{\sim} \Z_\eta(\mathscr{P})$, so the result follows because $\Phi\circ\textup{BT}$ is naturally isomorphic to the identity (see \cite[Lem. 8.1]{Lau2013}). \end{proof} \begin{thm}\label{thm-RZfunctors} The natural transformation $\Psi:\textup{RZ}_{G,\mu,b}^{\textup{disp,fsm}} \to \textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$ defined in Lemma \ref{lem-RZnattrans} is an isomorphism of functors on $\textup{Nilp}_{W(k)}^{\textup{fsm}}$. \end{thm} \begin{proof} This is formally similar to \cite[Thm. 5.15]{Daniels2019}. For any $A$ in $\textup{Nilp}_{W(k)}^\textup{fsm}$, $\Psi_A$ is injective by full-faithfulness of $\textup{BT}_{\underline{G},A}$. For surjectivity, suppose $(X, \underline{t}, (\iota_n)_n) \in \textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}(A)$ for $A \in \textup{Nilp}^{\textup{fsm}}_{W(k)}$. The $\mathbb{F}_p$-algebra $A/pA$ satisfies condition (1.3.1.1) of \cite{deJong1996}, so in particular it is Noetherian, $F$-finite, and formally smooth over $\mathbb{F}_p$, and hence by \cite[Lem. 2.1]{Lau2018b} $A/pA$ has a $p$-basis (Zariski) locally. Then by Theorem \ref{mainthm} there exists a $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(A)$ such that $\textup{BT}_{\underline{G},A}(\mathscr{P}) \cong (X,\underline{t})$. It remains to define a $G$-quasi-isogeny $(\rho_n)_n$ over $\Spf A$. Choose an ideal of definition $I$ with $p \in I$, fix $n$, and consider the quasi-isogeny \begin{align*} \iota_n: X\otimes_A A/(p+I^n) \dashrightarrow X_0 \otimes_k A/(p+I^n). \end{align*} By the second condition in Definition \ref{def-RZpdiv}, we have $\mathbb{D}((\iota_n)_{A/I}) \circ t_i = t_{0,i}$. Moreover, by \cite[Lem. 4.6.3]{Kim2018}, this condition is independent of the chosen nilpotent ideal, so we may replace $I$ by $(p)$, and we obtain the identity \begin{align}\label{eq-RZthm} \mathbb{D}(\iota_n)\circ t_i = t_{0,i}. \end{align} By descent it is enough to define the $G$-quasi-isogeny \'etale locally. After an \'etale faithfully flat extension, we have an isomorphism $\Lambda_{W(A/I^n)[1/p]} \xrightarrow{\sim} \tau^\ast M^\eta [1/p]$ which is compatible with the tensors.
This isomorphism identifies $\mathbb{D}(\iota_n)$ with an element $g_n \in \textup{GL}(\Lambda_{W(A/I^n)[1/p]})$, and the identity (\ref{eq-RZthm}) implies that $g_n \in G(W(A/I^n)[1/p])$. This $g_n$ induces a $G$-quasi-isogeny $\rho_n: \mathscr{P}_{A/(p+I^n)} \dashrightarrow (\mathscr{P}_{0})_{A/(p+I^n)}$, and it is straightforward to check that $\Psi_A(\mathscr{P},(\rho_n)_n) \cong (X,\underline{t},(\iota_n)_n)$. \end{proof} \begin{rmk}\label{rmk-contravariant} The functor in Definition \ref{def-RZpdiv} is formulated using covariant Dieudonn\'e theory, hence it differs slightly from those of \cite{Kim2018} and \cite{HP2017} which are formulated using contravariant Dieudonn\'e theory. In fact, the difference is purely aesthetic, and the functors are isomorphic. Indeed, if $(G,\mu,\Lambda,\eta,\underline{s})$ is a local Hodge embedding datum in our sense, then the embedding $\eta^\vee: G \hookrightarrow \textup{GL}(\Lambda^\vee)$ determines a local Hodge embedding datum for $(G,\{\mu\},[b])$ in the sense of \cite[Def. 2.2.3]{HP2017}. It follows that $(G,b,\mu,\Lambda^\vee)$ is a local unramified Shimura-Hodge datum in the sense of \textit{loc. cit.}, and $X_0^D$ is the unique $p$-divisible group over $k$ associated to this datum by \cite[Lem. 2.2.5]{HP2017}. Moreover, the contravariant Dieudonn\'e crystal of a $p$-divisible group $X$ is given by the covariant Dieudonn\'e crystal of the Serre dual $X^D$ of $X$, and under this relationship the respective Hodge filtrations are identified. Hence the assignment $(X,\iota) \mapsto (X^D, \iota^D)$ provides the isomorphism between our $p$-divisible group RZ-functor and that of \cite{Kim2018} and \cite{HP2017}. \end{rmk} The main theorem of \cite{Kim2018} states that there is a formal scheme $\textup{RZ}_{\underline{G},b}$ over $\textup{Spf }W(k)$ which is formally smooth and formally locally of finite type, and which represents $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$ in the sense that \begin{align}\label{eq-rep} \textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}(A) = \textup{Hom}_{\textup{Spf }W(k)}(\textup{Spf }A, \textup{RZ}_{\underline{G},b}) \end{align} for $A \in \textup{Nilp}^\text{fsm}_{W(k)}$. \begin{cor}\label{cor} The formal schemes $\textup{RZ}^{\textup{BP}}_{G,\mu,b}$ and $\textup{RZ}_{\underline{G},b}$ are isomorphic. \end{cor} \begin{proof} By the results of \cite{Kim2018}, $\textup{RZ}_{\underline{G},b}$ is the unique formally smooth and formally locally of finite type formal scheme over $\textup{Spf }W(k)$ representing the functor $\textup{RZ}_{\underline{G},b}^{p\textup{-div,fsm}}$ on $\textup{Nilp}^\text{fsm}_{W(k)}$ in the sense of (\ref{eq-rep}). But by Theorem \ref{thm-RZfunctors} the same is true of $\textup{RZ}^{\textup{BP}}_{G,\mu,b}$. \end{proof} \begin{rmk} Corollary \ref{cor} is also known by \cite[Rmk. 5.2.7]{BP2017}. However, in \textit{loc. cit.} no explicit isomorphism is given between the respective RZ-functors. \end{rmk} \subsection{Deformations}\label{sub-deformations} Let $G$ be a reductive group scheme over $\zz_p$, and let $\mu$ be a minuscule cocharacter of $G$ defined over $W(k_0)$. In this section we want to study the infinitesimal deformation theory of $p$-divisible groups with $G$-structure over $k$. We begin by reviewing the deformation theory of adjoint nilpotent $(G,\mu)$-displays as in \cite[\textsection 3.5]{BP2017}, so fix a $(G,\mu)$-display $\mathscr{P}_0$ which is adjoint nilpotent over $k$.
Let $\textup{Art}_{W(k)}$ denote the category of augmented local Artin $W(k)$-algebras, i.e., the category of local Artin $W(k)$-algebras $(R,\mathfrak{m})$ together with a fixed isomorphism $R/\mathfrak{m} \xrightarrow{\sim} k$. Such a ring is necessarily a $p$-nilpotent $W(k)$-algebra. Let $\mathfrak{Def}(\mathscr{P}_0)$ denote the functor on $\textup{Art}_{W(k)}$ which assigns to $R \in \textup{Art}_{W(k)}$ the set of isomorphism classes of pairs $(\mathscr{P}, \delta)$ where $\mathscr{P}$ is a $(G,\mu)$-display over $R$ and $\delta: \mathscr{P}_k \xrightarrow{\sim} \mathscr{P}_0$ is an isomorphism of $(G,\mu)$-displays over $\underline{W}(k)$. An isomorphism between pairs $(\mathscr{P},\delta)$ and $(\mathscr{P}', \delta')$ is an isomorphism $\psi: \mathscr{P} \xrightarrow{\sim} \mathscr{P}'$ such that $\delta' \circ \psi_k = \delta$. We will usually omit the fixed isomorphism $\delta$ and refer to the pair $(\mathscr{P},\delta)$ simply as $\mathscr{P}$. By \cite[3.5.9]{BP2017}, $\mathfrak{Def}(\mathscr{P}_0)$ is prorepresentable by a power series ring over $W(k)$. Let us summarize the theory and describe the universal deformation. Denote by $U_G^\circ$ the opposite unipotent subgroup of $G$ defined by $\mu$. By \cite[Lem. 6.3.2]{Lau2018} (see also \cite[Lem. A.0.5]{BP2017}), there exists a unique $\mathbb{G}_m$-equivariant isomorphism of schemes \begin{align*} \textup{log}: U_G^\circ \xrightarrow{\sim} V(\textup{Lie }U_G^\circ) \end{align*} which induces the identity on Lie algebras. Moreover, log is an isomorphism of $W(k_0)$-group schemes. Since $U_G^\circ$ is smooth (see e.g. \cite[Thm. 4.1.17]{Conrad2014}), $\textup{Lie }U_G^\circ$ is finite and free as a $W(k_0)$-module, so after a choice of basis log induces an isomorphism of $W(k_0)$-group schemes \begin{align*} \textup{log}: U_G^\circ \xrightarrow{\sim} \mathbb{G}_a^\ell, \end{align*} where $\ell$ is the dimension of $U_G^\circ$. Let $R_G$ be the formal completion of $U_G^\circ \otimes_{W(k_0)} W(k)$ at the origin, and note that we have a (non-canonical) isomorphism $R_G \cong W(k)[[t_1, \dots, t_\ell]]$. If $w \in R_G$, denote by $[w]$ the Teichm\"uller lift of $w$ in $W(R_G)$, so $[w] = (w,0,\dots)$ in the usual Witt vector coordinates. Define the element $h_G^\text{univ} \in U_G^\circ(W(R_G))$ to be the unique element such that \begin{align*} \textup{log}(h_G^{\text{univ}}) = ([t_1], \dots, [t_\ell]) \in \mathbb{G}_a^\ell(W(R_G)). \end{align*} Since $k$ is algebraically closed, $\mathscr{P}_0$ is banal, given by some $u_0 \in L^+G(k)$, and the inclusion $W(k) \hookrightarrow R_G$ allows us to view $u_0$ as an element of $L^+G(R_G)$. Define \begin{align*} u_G^\textup{univ} := (h_G^{\textup{univ}})^{-1} u_0 \in L^+G(R_G), \end{align*} and let $\mathscr{P}^\textup{univ}$ denote the $(G,\mu)$-display over $\underline{W}(R_G)$ defined by $u_G^\textup{univ}$. By the results of \cite[3.5.9]{BP2017}, the ring $R_G$ prorepresents $\mathfrak{Def}(\mathscr{P}_0)$, and $\mathscr{P}^\textup{univ}$ defines the universal deformation of $\mathscr{P}_0$ over $R_G$. If $G = \textup{GL}_h$, $\mu = \mu_{d,h}$, and $\mathscr{P}_0$ corresponds to a nilpotent Zink display $\underline{P_0}$, then this recovers the deformation theory of \cite[\textsection 2.2]{Zink2002} because any lift of $\underline{P_0}$ to a Zink display over a local Artin $W(k)$-algebra is nilpotent by \cite[Lem. 21]{Zink2002}.
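\begin{rmk} For a concrete picture of the map log, consider $G = \textup{GL}_h$ and $\mu = \mu_{d,h}$ as just mentioned; the computation here is only meant as an illustration. Then $U_G^\circ$ consists of unipotent block matrices $1+N$ with $N$ supported on a single off-diagonal $d \times (h-d)$ block (or its transpose, depending on conventions). Since $\mu$ is minuscule, $N^2 = 0$ and $U_G^\circ$ is abelian, so $\textup{log}(1+N) = N$ and $\ell = d(h-d)$. Writing $E_{ij}$ for the elementary matrices of the distinguished block, the universal element becomes \begin{align*} h_{\textup{GL}}^{\textup{univ}} = 1 + \sum_{i,j}[t_{ij}]E_{ij}, \end{align*} a unipotent matrix of Teichm\"uller coordinates, which is the familiar shape of the universal deformation in Zink's theory. \end{rmk}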
Suppose now $\underline{G} = (G,\mu,\Lambda,\eta,\underline{s})$ is a local Hodge embedding datum, and suppose that we can choose a basis for $\Lambda_{W(k_0)}$ such that $\eta \circ \mu = \mu_{d,h}$. Then $R_{\textup{GL}} := R_{\textup{GL}(\Lambda)}$ is non-canonically isomorphic to the power series ring $W(k)[[t_1, \dots, t_{d(h-d)}]]$. Choose coordinates for $R_G$ so that $R_G \cong R_{\textup{GL}}/(t_{\ell+1}, \dots, t_{d(h-d)})$, and denote by $\pi$ the natural quotient $R_{\textup{GL}} \to R_G$. Let $\mathscr{P}_0$ be a $(G,\mu)$-display over $k$ which is nilpotent with respect to $\eta$, and write $\underline{P_0} = \mathscr{P}_0(\Lambda,\eta)$ for the associated Zink display over $k$. Write $\mathfrak{Def}(\underline{P_0})$ for the deformation functor of $(\textup{GL}(\Lambda),\eta \circ \mu)$-displays for $\underline{P}_0$. Then by the above paragraph $\mathfrak{Def}(\underline{P_0})$ is prorepresentable by $R_\textup{GL}$, with universal deformation $\underline{P}^\textup{univ}$ having standard representation $(\Lambda^0\otimes_{W(k_0)} W(R_\textup{GL}), \Lambda^1\otimes_{W(k_0)} W(R_\textup{GL}), \Phi^\textup{univ}_\textup{GL})$, where \begin{align*} \Phi^\textup{univ}_{\textup{GL}} = (h^\textup{univ}_\textup{GL})^{-1} \eta(u_0) \circ (\id_\Lambda \otimes f). \end{align*} \begin{lemma}\label{lem-univcompat} If $\mathscr{P}^\textup{univ}$ is the universal deformation of $\mathscr{P}_0$ as a $(G,\mu)$-display over $\underline{W}(R_G)$, then \begin{align*} \mathscr{P}^\textup{univ}(\Lambda, \eta) = (\underline{P}^\textup{univ})_{R_G}. \end{align*} \end{lemma} \begin{proof} Given the explicit descriptions of the universal deformations above, it is enough to show \begin{align*} W(\pi)(h_\textup{GL}^\textup{univ}) = \eta(h_G^\textup{univ}). \end{align*} The embedding $\eta$ induces a map $U_G^\circ \to U_{\textup{GL}}^\circ$, which we also denote by $\eta$, as well as a map $d\eta:\textup{Lie }U_G^\circ \to \textup{Lie }U_\textup{GL}^\circ$. With the above choice of coordinates, \begin{align*} W(\pi)([t_1],\dots,[t_{d(h-d)}]) = ([t_1], \dots, [t_\ell], 0, \dots, 0) = d\eta([t_1], \dots, [t_\ell]). \end{align*} From the explicit description of the log map given in \cite[Lem. 6.3.2]{Lau2018} we see that $\textup{log} \circ \eta = d\eta \circ \textup{log}$ as maps $U_G^\circ \to V(\textup{Lie }U_{\textup{GL}}^\circ)$. Hence $W(\pi)(h_\textup{GL}^\textup{univ})$ and $\eta(h_G^\textup{univ})$ agree after applying log, so the result follows because log is an isomorphism. \end{proof} Let $(X_0, \underline{t_0})$ be a formal $p$-divisible group with $(\underline{s},\mu)$-structure over $k$. We will apply our results to the deformation theory of $(X_0, \underline{t_0})$. Denote by $\mathfrak{Def}(X_0)$ the functor of deformations of the $p$-divisible group $X_0$, so for $R \in \textup{Art}_{W(k)}$, $\mathfrak{Def}(X_0)(R)$ is the set of isomorphism classes of $p$-divisible groups $X$ over $R$ together with an isomorphism $X \otimes_R k \cong X_0$. If $\underline{P_0}$ is the nilpotent Zink display corresponding to $X_0$, then by the equivalence of Zink and Lau (or by \cite[Cor. 4.8(i)]{Illusie1985}) it follows that $R_{\textup{GL}}$ prorepresents $\mathfrak{Def}(X_0)$ with universal deformation given by $\textup{BT}_{R_\textup{GL}}(\underline{P}^\textup{univ})$ over $R_\textup{GL}$. \begin{thm}\label{thm-def} Let $R \in \textup{Art}_{W(k)}$, and choose a $p$-divisible group $X$ over $R$ which lifts $X_0$. Let $\varpi: R_{\textup{GL}} \to R$ be the homomorphism induced by $X$.
Then $\varpi$ factors through $R_G$ if and only if the following conditions hold: \begin{enumerate}[$($i$)$] \item There is an isomorphism of crystals in locally free $\mathcal{O}_{\Spec R / W(k_0)}$-modules \begin{align*} \lambda: \Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R/ W(k_0)} \xrightarrow{\sim} \mathbb{D}(X) \end{align*} such that the Hodge filtration $\textup{Fil}^1(\mathbb{D}(X)) \subset \mathbb{D}(X)_{R/R} \xrightarrow{\sim} \Lambda \otimes_{\zz_p} R$ is induced by $\mu$. \item There exists a collection $\underline{t} = (t_1, \dots, t_r)$ of crystalline Tate tensors on $\mathbb{D}(X)$ over $\Spec{R}$ which are identified with $\underline{s} \otimes 1$ via $\lambda.$ \end{enumerate} \end{thm} \begin{proof} First note that $X$ is infinitesimal since the same is true for $X_0$, and the property can be checked at geometric points in characteristic $p$ (see \cite[II Prop. 4.4]{Messing1972}). Let $\underline{P}$ be the nilpotent Zink display associated to $X$, so $\underline{P} = \varpi^\ast \underline{P}^\textup{univ}$. We claim $\varpi$ factors through $R_G$ if and only if $\underline{P} \cong \mathscr{P}(\Lambda, \eta)$ for some banal $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(R)$. Indeed, if $\varpi$ factors as $\zeta \circ \pi$ for some $\zeta: R_G \to R$, then $\underline{P} = \varpi^\ast \underline{P}^\textup{univ} \cong \zeta^\ast (\pi^\ast \underline{P}^\textup{univ})$. But then by Lemma \ref{lem-univcompat} we have $\underline{P}\cong(\zeta^\ast \mathscr{P}^\textup{univ})(\Lambda, \eta)$, and $\zeta^\ast \mathscr{P}^\textup{univ}$ is banal since the same is true of $\mathscr{P}^\textup{univ}$. Conversely, if $\underline{P} \cong \mathscr{P}(\Lambda, \eta)$ for some $\mathscr{P}$, then $\mathscr{P}$ is a deformation of $\mathscr{P}_0$, so there is some $\zeta: R_G \to R$ such that $\mathscr{P} = \zeta^\ast \mathscr{P}^\textup{univ}$. Then again Lemma \ref{lem-univcompat} implies that $\underline{P} \cong \zeta^\ast\pi^\ast \underline{P}^\textup{univ}$, so $\zeta \circ \pi = \varpi$ by prorepresentability of $R_\textup{GL}$ and universality of $\underline{P}^\textup{univ}$. Now it remains to show that $\underline{P} \cong \mathscr{P}(\Lambda, \eta)$ for a banal $(G,\mu)$-display $\mathscr{P}$ over $\underline{W}(R)$ if and only if conditions (i) and (ii) hold. If $\underline{P} \cong \mathscr{P}(\Lambda, \eta)$, then Remark \ref{lem-banalcrystal} and Lemma \ref{lem-tensors} imply conditions (i) and (ii). On the other hand, if (i) and (ii) hold, then $(\underline{P},\underline{t})$ is a nilpotent Zink display with $(\underline{s},\mu)$-structure, and $\underline{P} \cong \mathscr{P}(\Lambda, \eta)$ for some $(G,\mu)$-display over $\underline{W}(R)$ by Proposition \ref{prop-essimg}. Moreover, condition (i) implies that such a $\mathscr{P}$ must be banal (by the proof of Lemma \ref{lem-torsor}, for example). \end{proof} \begin{rmk}\label{rmk-Faltings} This implies a theorem of Faltings regarding deformations of $X_0$ with its $(\underline{s},\mu)$-structure over power series rings (see \cite[\textsection 7]{Faltings1999} and \cite[Prop. 4.9]{Moonen1998}). The statement of this theorem given in \cite[Thm. 3.6]{Kim2018} is the following: if $R = W(k)[[u_1, \dots, u_N]]$ for some $N$, then the map $\varpi: R_{\textup{GL}} \to R$ corresponding to a $p$-divisible group $X$ over $R$ factors through $R_G$ if and only if the tensors $\underline{t_0}$ on $X_0$ lift to crystalline Tate tensors over $\textup{Spf }(R,(p))$.
Let us explain how Theorem \ref{thm-def} implies this statement. Let $R = W(k)[[u_1, \dots, u_N]]$, and write $R_{m,n} = (W(k)/p^m)[[u_1, \dots, u_N]]/(u_1, \dots, u_N)^n$, which is an object of $\textup{Art}_{W(k)}$. We claim that if $X$ is a $p$-divisible group over $R_{m,n}$, then conditions (i) and (ii) in the statement of Theorem \ref{thm-def} are equivalent to the statement that for each $i$ there exists a lift of $t_{0,i}$ to a crystalline Tate tensor $t_i: \mathbbm{1} \to \mathbb{D}(X)^\otimes$ over $\Spec R_{m,n}$. It is obvious that this condition is implied by conditions (i) and (ii), so it is enough to show the converse. If the condition holds, we have an isomorphism $\lambda:\Lambda_{W(k_0)} \otimes \mathcal{O}_{\Spec R_{m,n}/W(k_0)} \xrightarrow{\sim} \mathbb{D}(X)$ which is induced by $\varpi^\ast \underline{P}^\textup{univ} \xrightarrow{\sim} \underline{P}$, and by construction of $\underline{P}^\textup{univ}$, the Hodge filtration on $\mathbb{D}(X)_{R_{m,n}/R_{m,n}}$ is induced by $\mu$ under $\lambda$. Then condition (i) holds and it remains to show condition (ii). By \cite[Lem. 4.6.4]{Kim2018}, it is enough to show that $t_i$ and $s_i \otimes 1$ agree after base change to $k$, which holds because both $t_i$ and $s_i \otimes 1$ are lifts of $t_{0,i}$ for each $i$. Now to say that the tensors $\underline{t_0}$ lift to crystalline Tate tensors over $\textup{Spf }(R,(p))$ is to say that they lift to crystalline Tate tensors over $\Spec{R_m}$ for every $m$, with $R_m = (W(k)/p^m)[[u_1, \dots, u_N]]$. In turn, this implies that the $t_{0,i}$ lift to tensors over each $R_{m,n}$, so by the above paragraph and Theorem \ref{thm-def} we see that $\varpi_{m,n}$, which is given by the composition of $\varpi$ with the natural quotient $R \to R_{m,n}$, factors through $R_G$ for every $m$ and $n$. It follows that $\varpi$ factors through $R_G$. Conversely, if $\varpi$ factors through $R_G$, then $\varpi_{m,n}$ factors through $R_G$ for every $m$ and $n$, so by Theorem \ref{thm-def} and our remarks above the tensors $\underline{t_0}$ lift to tensors defined over $\Spec R_{m,n}$ for every $m$ and $n$. It follows that we obtain tensors defined over $\textup{Spf }R_m$ for every $m$. By the arguments in the proof of \cite[Prop. 2.4.8]{deJong1996} (see also \cite[Rmk. 2.3.5 (c)]{HP2017}), there exists a unique crystalline Tate tensor over $\Spec R_m$ inducing each crystalline Tate tensor over $\textup{Spf }R_m$. In turn, for each $i$ the system of crystalline Tate tensors over $\Spec R_m$ constitutes a crystalline Tate tensor over $\Spf (R,(p))$ lifting $t_{0,i}$, as desired. \end{rmk}
\section{Introduction} Dynamic programming is the fundamental technique for solving sequential decision problems. The key object of analysis is the Bellman optimality equation. As the dimension of the state space increases, the computational burden of solving the Bellman equation becomes prohibitive. Approximate dynamic programming (ADP) is a family of algorithms developed to address this computational challenge by reducing---through various mechanisms---the dimensionality of the problem. A central ADP theme is that of value-function approximation. One a priori imposes a lower dimensional structure on the value function---assuming, for example, that it is an affine combination of pre-specified basis functions---and optimizes the combination parameters. One expects computational gains if the number of basis functions is small relative to the size of the state space. Even when these {\em function-approximation} algorithms converge, the performance---in terms of optimality gaps---depends on the choice of the basis functions; these are often chosen based on ad-hoc knowledge of the problem's structure. What we propose here is a method that, instead of imposing a lower dimensional structure on the value function, approximates directly the Markov chain by a lower-rank one. The Bellman equation for this lower-rank chain is itself lower dimensional and hence more tractable. The lower-rank {\em sister chain} is a ``non-identical twin'' of the original chain. The two are coupled through their local-transition first and second moments. Specifically, these moments are given by the vector $\mu (x)$ and the matrix $\sigma^2(x)$: $$\mu(x)=\mathbb{E}_x[X_1-x],~~ \sigma^2(x)=\mathbb{E}_x [(X_1-x)(X_1-x)^\intercal].$$ These are collapsed ``statistics'' of the full transition matrix. Implicit in these definitions is our focus on chains where there is a natural notion of physical distance, and it is most useful to fix attention to state spaces of the form $\mathbb{Z}^d \cap \times_{i=1}^d [\ell_i,u_i]$. The premise that coupling two chains via their moments should produce small approximation gaps is grounded in recent work \cite{braverman2018taylor} that connects the Taylor expansion of value functions to nearly optimal policies.\footnote{That work itself is closely related to the vast literature on diffusion approximations; see \S \ref{sec:lit}.} While the math that supports this statement is non-trivial, the intuition is rather simple. Fix a chain $(X_t,~t=0,1,2,...)$ on $\mathbb{Z}^d$ with transition probability $P$, and consider the infinite horizon $\alpha$-discounted reward $$V(x)=\mathbb{E}_x\left[ \sum_{t=0}^{\infty}\alpha^t c(X_t)\right].$$ The value $V$ solves the fixed point equation $V(x)= c(x)+ \alpha PV(x),$ which we re-write as $$0=c(x)+\alpha (PV(x)-V(x)) -(1-\alpha)V(x).$$ If we pretend that $V$ has a thrice continuously differentiable extension to the reals $\mathbb{R}^d$, then $$PV(x)-V(x)=\mu(x)'DV(x) + \frac{1}{2}trace(\sigma^2(x)'D^2V(x)) + \mbox{ Remainder}, $$ where the remainder depends on the third derivative of the continuous extension. At least intuitively, the fixed-point equation translates to the solution of a partial differential equation. We {\em do not} advocate using this PDE as a computational alternative but, rather, as a link (a ``coupling'') between the chain $P$ and a more tractable one. Put simply, if we construct a chain $\widetilde{P}$ on $\mathbb{Z}^d$ with the same local moment functions $\mu(\cdot)$ and $\sigma^2(\cdot)$, it, too, would induce the same PDE.
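The local moments are also cheap to extract from a given transition matrix. As a point of reference for what follows, here is a minimal sketch (in Python with NumPy; the function name is ours and purely illustrative) that computes $\mu(\cdot)$ and $\sigma^2(\cdot)$ for a chain on a one-dimensional grid:
\begin{verbatim}
import numpy as np

def local_moments(P, states):
    # First two local transition moments for a chain on a
    # one-dimensional state space (d = 1):
    #   mu(x)     = E_x[X_1 - x]
    #   sigma2(x) = E_x[(X_1 - x)^2]
    mu = np.array([P[i] @ (states - x) for i, x in enumerate(states)])
    sigma2 = np.array([P[i] @ (states - x) ** 2
                       for i, x in enumerate(states)])
    return mu, sigma2
\end{verbatim}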
To the extent that the quality of the PDE as an approximation depends only on those moments (as functions over the state space), we have a mechanism to bound the gap between the value of the two chains. Among all sister chains, we want one that is tractable in terms of value-function computation. At this point our work plugs into, and connects naturally with, the known {\em aggregation} method in ADP. Aggregation reduces the dimensionality of the Bellman equation by solving it for a small (in relative terms) number of ``meta-states'', denoted by $L$. The extent of the reduction in computational effort depends on how $L$ compares to the number of ``detailed'' states $N$; the fewer the meta states, the less demanding the computation of the value function. In a finite-state-space setting, evaluating the performance of a given control then requires inverting an $L\times L$ matrix instead of the $N\times N$ matrix. The design parameters of aggregation---the so-called aggregation and disaggregation matrices---are typically chosen in an ad-hoc manner. {\em Moment matching offers a principled way to choose these that is grounded in approximation/optimality gap bounds.} Constructing the sister chain via moment matching does, however, add computational effort that could compromise the gains from aggregation. The classical moment problem in probability has a long history; see \cite{prekopa1990discrete} and the references therein. Our challenges here deviate from the classical moment problem. Most fundamentally, we are facing a {\em simultaneous} problem as we are trying to match the first two moments of all of $N=|\mathcal{S}|$ random variables --- one for each state, where the random variable for state $x$ has the distribution $P_{x\cdot}$---via convex combinations of the {\em same} (small) set of random variables. This, as will be made evident through simple examples, is generally impossible. What we do, instead, is prioritize the first moment over the second. We use as our meta states a grid of spaced out states in the state space. These ``representative states'' are the effective support of the chain $\widetilde{P}$. We match perfectly the first local moment $\mu(\cdot)$---a feasible and simple task---while maintaining, through {\em non-constant} spacing, a handle on the second-moment mismatch. The coarseness of the reduced state space is directly informed by the mathematical analysis. Interestingly, once the grid is (carefully) set, our mechanism for matching the first moment is equivalent to approximating the value at a state $x$ by a {\em distance-weighted interpolation} of the values at the corners of the grid box to which $x$ belongs. In particular, the more ``locally linear'' the value is, the more accurate the approximation. Such an interpolation is rather intuitive --- moment matching gives it mathematical support. We prove that, with $L=\mathcal{O}(N^{\frac{1-\varepsilon}{2}})$ meta states, $\widetilde{V}$, the value of the sister chain, is a good approximation to the true value $V$ in the following sense: \be |V(x)-\widetilde{V}(x)| =\mathcal{O}\left(\mathbb{E}_x\left[\sum_{t=0}^{\infty}\alpha^t\frac{c(X_t)}{(1+\|X_t\|)^{\varepsilon}}\right]\right)=o\left(V(x)\right),\label{eq:guaranteeintro}\ee where $\varepsilon\in (0,1)$ is a design variable. The closer it is to $0$, the fewer meta states (computation is easier) but the larger the gap bound.
This result means that the gap is proportional to the infinite-horizon discounted value with a scaled-down cost function $c(x)/(1+\|x\|)^{\varepsilon}$; with $\varepsilon=0$ computation is easier but the gap is of the order of the value itself. These guarantees are not fully general; to use PDE theory, we require that the first and second transition moments satisfy some smoothness properties. These make mathematically precise the intuitive connection to the central limit theorem. Most useful, however, is the way in which the mathematical analysis informs the design of the aggregation scheme. When we embed moment matching in an approximate policy iteration algorithm, the computational gains are further magnified. Savings are realized in both the policy evaluation and update steps. Importantly, our moment-based design of the aggregation and disaggregation matrices is {\em policy independent}. In turn, they are computed once and do not have to be updated on each iteration. Setting up the algorithmic framework for moment-matching-based MDP approximation is the first contribution of our paper. The second is to provide approximation guarantees. These (and the uncovering of their dependence on the coarseness) inform the algorithm design. We illustrate the computational value through several numerical examples. \vspace*{0.2cm} \noindent {\bf Notation.} Unless stated otherwise, $\|\cdot\|$ corresponds to the Euclidean norm on $\mathbb{R}^d$ ($d$ will be clear from the context). We write $y=x\pm \epsilon$ to denote $\|y-x\|\leq \epsilon$. We use $\mathbb{R}_+^d$ and $\mathbb{Z}_+^d$ to denote the non-negative reals in $\mathbb{R}^d$ and integers in $\mathbb{Z}^d$, and use $\mathbb{R}_{++}^d$ and $\mathbb{Z}_{++}^d$ when they are strictly positive. For a function $f:\mathcal{A}\to \mathbb{R}^d$ and a set $\mathcal{B}\subseteq \mathcal{A}$, $|f|_{\mathcal{B}}^*=\sup_{x \in\mathcal{B}}\|f(x)\|$. We use $\Gamma$ to denote a universal constant whose value might change from one line to the next but that does not depend on the state $x$ or the discount factor $\alpha$. Where useful we will point out its dependencies. We write $f(x)\lesssim \gridfn(x)$ to mean $f(x)\leq \Gamma \gridfn(x)$ and $f(x)\cong \gridfn(x)$ if both $f(x)\lesssim \gridfn(x)$ and $\gridfn(x)\lesssim f(x)$. \vspace*{0.2cm} \noindent {\bf A comment on organization.} We focus for much of the manuscript on the value approximation for a {\em given} policy, namely on the study of a so-called {\em Markov reward process}. This is then made a (central) module in an approximate policy iteration algorithm in \S \ref{sec:optimization}. All lemmas that are stated in the main body of the paper are proved in the appendix. \section{Literature}\label{sec:lit} ADP is concerned with approximating solutions to complex control problems where the size of the state space prohibits exact computation of the value function and/or the optimal control policy. The literature on ADP is vast. Key gains in computation are achieved by restricting the search for value functions to an {\em architecture}---a pre-specified family of functions. In linear architectures, for example, value functions are restricted to linear combinations of pre-specified features. More recent methods use neural networks as the underlying architecture; see e.g. \cite{bertsekas2018feature} and \cite{vanvuchelen2020use,gijsbrechts2019can} for recent applications to operations management problems.
Two questions must be posed to any architecture-based ADP algorithm: (1) Does the algorithm converge to the best choice of parameters {\em within the given architecture}? In the case of a linear architecture, for example, does the algorithm produce the best feature coefficients? (2) Such convergence may not mean much if the architecture is inadequate for the problem at hand, so we must also ask how well the ``best'' choice within the given architecture approximates the original problem of interest. The first question was answered affirmatively for linear architectures; see \cite{tsitsiklis1996feature,tsitsiklis1997analysis}. This was followed by improvements to convergence rates; see e.g. \cite{devraj2017fastest}. There are, however, few approximation algorithms with theoretical guarantees on the optimality gaps---that is, on how well the prescribed (approximate) control performs in the original system. Furthermore, while approximate dynamic programming has seen significant practical success, the choice of the architecture often builds on ad-hoc intuition about the problem at hand, rather than on a principled approach to its construction. Our focus is not on convergence rates for a given approximation architecture but, rather, on a new architecture with optimality-gap guarantees. Our architecture does not rely on a value function approximation. Instead it approximates the controlled Markov chain by matching its local moments. The approach we put forth piggybacks on state aggregation methods to produce an algorithm that relates the approximation error to a {\em Markov chain moment problem}. Specifically, given the original (controlled) chain we build a new chain that matches it in local {\em transition} moments, through the choice of aggregation mechanism. In other words, what we propose is a principled approach to tune the {\em aggregation design variables}. State aggregation has a long history; see \cite{whitt1978approximations, bean1987aggregation, tsitsiklis1996feature} to name a few. We primarily follow the exposition in \cite{bertsekas2017dynamic}. Much of the literature focuses on hard aggregation, either fixing cluster memberships a priori or updating them based on value estimates, e.g. \cite{bertsekas1989adaptive, baras2000learning}. Soft aggregation has also been identified in \cite{singh1995reinforcement} as a useful approximation infrastructure for reinforcement learning, owing to its flexibility. Our construction of the ``low rank'' sister chain is based on a relatively simple matching of the first moment. If instead the transition matrix $P$ is itself low rank, matrix factorization techniques can be used to identify the aggregation parameters, see \cite{duan2019state, ghasemi2020identifying}. Our algorithm offers a principled method for selecting the aggregation parameters backed by performance guarantees. We construct a mapping from detailed states to meta-states based on local moments of the controlled chain, without requiring structural assumptions or knowledge of value function estimates. Moment-based approximations---inspired by the central limit theorem and functional versions thereof---have been extremely successful in queueing theory, facilitating the analysis and optimization of highly complex queueing networks. Some of the ``import'' of the mathematical theory from the control of queues to general dynamic programs has been achieved in \cite{braverman2018taylor}, where the connections to queueing theory are thoroughly discussed.
We use the mathematical constructs in \cite{braverman2018taylor} as a starting point for an algorithmic framework. What we adopt is the view that matching local moments---a collapsed ``statistic'' of the full transition matrix---has the potential to produce small optimality gaps. How to do so algorithmically --- how to construct the sister chain $\widetilde{P}$ for computational gains --- is the question we address in the current paper. In the process of developing our algorithm, we expand on \cite{braverman2018taylor} to allow, in our bounds, for some mismatch in the second moment between the focal chain and its sister. Finally, our work is indirectly related to sensitivity analysis for MDP (and POMDP); see e.g. \cite{ross2009sensitivity, mastin2012loss}, which study, among other things, sensitivity to changes in the transition distribution. We bound the value-differences between two chains in terms of their local transition moments, a ``collapsed'' statistic of the transition matrix. \section{The model\label{sec:themodel}} We consider the infinite-horizon discounted reward for a discrete-time Markov chain on a finite state space $\mathcal{S}\subseteq \mathbb{Z}^d \cap \times_{i=1}^d[\ell_i, u_i]$. Let $N=|\mathcal{S}|$ be the size of the state space. $P$ is the transition matrix with $p_{xy}$ equal to the probability of transitioning from $x$ to $y$ in one step; $c:\mathcal{S}\to \mathbb{R}^+$ is the cost function. We assume that the function $c$ is norm-like: there are $k\in \mathbb{Z}_+$ and a point $x_0\in\mathcal{S}$ such that $$\frac{1}{\Gamma} \|x-x_0\|^k\leq |c(x)|\leq \Gamma \left(1+\|x-x_0\|\right)^k.$$ Since one can shift the state space, we will assume w.l.o.g. that $x_0=0$. Finally, $\alpha \in (0,1)$ is the discount factor. This so-called ``Markov reward'' process is characterized by the tuple $\mathcal{C}=<\mathcal{S},P,c,\alpha>$. The value function is then given by $$V(x) =\mathbb{E}_x\left[ \sum_{t=0} ^{\infty} \alpha ^ t c(X_t)\right],~x\in \mathcal{S},$$ where $\mathbb{E}_x[\cdot]$ is the expectation with respect to the law $P_{x\cdot}$. For a function $f:\mathcal{S}\to \mathbb{R}$ we use the operator notation $Pf(x):=(Pf)(x)=\sum_{y}p_{xy}f(y)=\mathbb{E}_x[f(X_1)]$. As is standard, the function $V:\mathcal{S}\to \mathbb{R}$ is the unique solution to the equation $T V = V$, where \[ T V(x) = c(x) +\alpha P V(x). \] We will refer to this as the Bellman equation despite the absence of a control decision here. This allows for continuity of language with optimization in \S \ref{sec:optimization}. Since the state space is finite, $V$ can be computed via the matrix inversion formula $V=(I-\alpha P)^{-1}c.$ In our analysis we will sometimes refer to the maximal jump size of $P$ from $x$ \be \Delta_x:=\sup_{y: p_{xy}>0}\|y-x\|. \tag{maximal jump}\label{eq:maxjump}\ee \section{Tayloring reconsidered\label{sec:tayloring}} Consider two Markov Reward Processes. The first, $\mathcal{C} = <\mathcal{S}, P, c, \alpha>$, is driven by the {\em focal} chain $P$. The other, $\widetilde{\mathcal{C}} = <\mathcal{S}, \widetilde{P}, c, \alpha>$, is driven by the {\em sister} chain $\widetilde{P}$; $\widetilde{\mathcal{C}}$ differs from $\mathcal{C}$ only in terms of the transition probability matrix. A ``replacement'' of a chain with a proxy is useful only insofar as it yields computational benefits by, say, being of lower rank. It seems ambitious to require $P$ and a lower rank $\widetilde{P}$ to be close in some reasonable matrix norm unless $P$ is itself low rank.
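(For reference, exact evaluation of either chain is the matrix-inversion formula of \S \ref{sec:themodel}; the following minimal sketch, in the same illustrative Python as above, is the baseline against which all approximations in this paper are measured:
\begin{verbatim}
import numpy as np

def value(P, c, alpha):
    # Solve (I - alpha * P) V = c exactly;
    # O(N^3) in the number of detailed states N.
    N = len(c)
    return np.linalg.solve(np.eye(N) - alpha * P, c)

# Value gap between a focal chain P and a candidate sister chain:
#   gap = np.max(np.abs(value(P, c, alpha) - value(P_tilde, c, alpha)))
\end{verbatim}
)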
Instead, it makes sense to measure the distance between transition matrices in terms of their impact on the value function. \begin{definition} Given a function $f:\mathcal{S}\to \mathbb{R}$, and two transition probability matrices $P,\widetilde{P}$ on $\mathcal{S}$, let $$ \delta_f[P,\widetilde{P}] := |\delta_f[P,\widetilde{P}] (\cdot)|_{\mathcal{S}}^*,$$ where $$\delta_f[P,\widetilde{P}](x):= \lvert \widetilde \mathbb{E}_x[f(X_1)]-\mathbb{E}_x[f(X_1)]\rvert = \lvert \widetilde{P}{f}(x) - Pf(x)\rvert.$$ \end{definition} \vspace*{0.5cm} \begin{lemma} $$|V- \widetilde{V}|_{\mathcal{S}}^*\leq \frac{\alpha}{1-\alpha}\left(\delta_{V}[P,\widetilde{P}]+\delta_{\widetilde{V}}[P,\widetilde{P}]\right).$$ \label{lem:tilde_operator_gap1} \end{lemma} The bound in Lemma \ref{lem:tilde_operator_gap1} seems problematic, as it requires information about $V$, the very construct whose computation we seek to avoid. It is valuable, however, in that it identifies $|\mathbb{E}_x[V(X_1)]-\wtilde{\Ex}_x[V(X_1)]|$ as a central object of study. It makes clear that, in comparing two chains, what matters is the {\em local behavior}: how the one step change in value under $P$ (i.e. $\mathbb{E}_x[V(X_1)]-V(x)$) compares to that under $\widetilde{P}$ (i.e. $\widetilde \mathbb{E}_x[V(X_1)]-V(x)$). This localization makes Taylor expansion (initially heuristically) a natural lens through which to study approximation gaps. We make the following observation. If $V$ has a thrice continuously differentiable extension to $\mathbb{R}^d$, then $$\mathbb{E}_x[V(X_1)]= V(x) + \mu(x)'DV(x)+\frac{1}{2}trace(\sigma^2(x)'D^2V(x))\pm \frac{1}{6}\|D^3V\|\Delta_x^3,$$ where, recall, $\Delta_x:=\sup_{y: p_{xy}>0}\|y-x\|$ is the maximal jump of the chain from state $x$, and $$\mu(x)=\mathbb{E}_x[X_1-x],~~ \sigma^2(x)=\mathbb{E}_x [(X_1-x)(X_1-x)^\intercal].$$ The expectation $\wtilde{\Ex}_x[V(X_1)]$ for the sister chain can be expanded analogously. If $\widetilde{P}$ shares the first two local moments with $P$, i.e. $$\widetilde{\mu}(x):=\wtilde{\Ex}_x[X_1-x]\approx \mathbb{E}_x[X_1-x] \mbox{ and } \widetilde{\sigma}^2(x) :=\wtilde{\Ex}[(X_1-x)(X_1-x)^{\intercal}]\approx \mathbb{E}_x[(X_1-x)(X_1-x)^{\intercal}],$$ then \begin{align*} \wtilde{\Ex}_x[V(X_1)]& \approx V(x) + \widetilde{\mu}(x)'DV(x)+\frac{1}{2}trace(\widetilde{\sigma}^2(x)'D^2V(x))\\ &\approx V(x) + \mu(x)'DV(x)+\frac{1}{2}trace(\sigma^2(x)'D^2V(x)) \approx \mathbb{E}_x[V(X_1)], \end{align*} in turn, $$\delta_V[P,\widetilde{P}](x)= \lvert \mathbb{E}_x[V(X_1)]- \wtilde{\Ex}_x[V(X_1)]\rvert \approx0.$$ Here $\approx 0$ should be interpreted as ``$\delta_V$ being small relative to the value function $V$''; the precise mathematical meaning of $\approx 0$ is exposed further below. This informal derivation makes clear that (1) if a low-rank sister chain has the same moments as the focal chain, its value may provide a good approximation to that of the focal chain. To be low rank, this chain might have larger jumps, so that (2) in designing this sister chain we must keep its jumps small, at least in regions of the state space where the third derivative is substantial. \begin{example}[The simple random walk] {\em Consider the simple absorbing random walk on the integers: $P_{x,x+1}=P_{x,x-1}=1/2$ for all $x=1,...,n-1$ and $P_{00}=P_{nn}=1$. It is easy to see that $\mathbb{E}_x[X_t]=x$ for all $t\geq 0$ so that $\mathbb{E}_x[\sum_{t=0}^{\infty} \alpha^t X_t]=\frac{x}{1-\alpha}$.
The same conclusion holds for the ``simpler'' chain that jumps in one step to one of the end points: $\widetilde{P}_{xn}=1-\widetilde{P}_{x0}=x/n$. Observe that $\mu(x)=\widetilde{\mu}(x)=0$ for all $x$. Thus, $P$ shares the local first moments, as well as the value function, with a sister chain $\widetilde{P}$ that is supported on only two states. } \hfill \vrule height .9ex width .8ex depth -.1ex \label{example:simple} \end{example} Example \ref{example:simple} is rather unique. One should not expect a perfect value-function match in general, certainly not with such a coarse state-space. Our bounds in \S \ref{sec:guarantees} will capture the dependence of the approximation's accuracy on the ``density'' of the meta-states. The informal derivation through Taylor expansion is useful for developing intuition but does not provide a basis for algorithm design. The value $V$ is not a priori known, so it is impossible to ``refer'' to its continuous extension. To circumvent this, \cite{braverman2018taylor} develops a framework for obtaining {\em indirectly} an approximate continuous solution. A short summary of this earlier work is useful. Consider a chain on $\mathbb{Z}^d$. The value $V$ solves the Bellman equation $V(x)=c(x)+\alpha PV(x)$ which we find useful to re-write as $$0 = c(x) + \alpha (PV(x)-V(x))-(1-\alpha)V(x).$$ Pretending that the function $V$ is twice continuously differentiable, second-order Taylor expansion yields the partial differential equation (PDE) $$ 0= c(x)+\alpha\left[\mu(x)'DV(x)+\frac{1}{2}trace(\sigma^2(x)'D^2V(x))\right]-(1-\alpha)V(x),$$ defined now over $\mathbb{R}^d$. While this equation has been arrived at purely formally, the following is a valid mathematical question: what is the relationship between a solution $\widehat{V}$ (if it exists) to this equation on $\mathbb{R}^d$, and the $V$ that solves the original discrete-state-space Bellman equation? Two chains $<\mathcal{S},P,c,\alpha>$ and $<\mathcal{S},\widetilde{P},c,\alpha>$ with the same local moment functions $\mu(\cdot)$ and $\sigma^2(\cdot)$ induce the same reduction to a continuous-state space PDE, so that bounds on $|V-\widehat{V}|$ and $|\widetilde{V}-\widehat{V}|$ produce, as a corollary, a bound on $|V-\widetilde{V}|$. This is the path we take. \section{Sister-chain construction via aggregation} \label{sec:aggregation} Aggregation effectively creates a new Markov chain on a smaller state space. The tuning of the aggregation parameters is tantamount to selecting for this chain a transition matrix from a restricted family of such matrices. The flexibility this offers makes it an ideal vehicle for our moment-matching algorithm. \subsection{Aggregation Preliminaries} Recall that $N = |\mathcal{S}|$ denotes the number of states in the original MDP, and let $\mathcal{M}=\{1,\ldots,L\}$ be a set of meta-states; obviously $L\leq N$. We refer to \cite[Chapter 6]{bertsekas2012approximate} for a full discussion and include below the minimal ingredients for a self-contained exposition. Two weight matrices govern the mapping between $\mathcal{S}$ and $\mathcal{M}$: \begin{itemize} \item {\em Aggregation probabilities: } For each detailed state $x\in \mathcal{S}$, the probability that $x$ aggregates (or ``groups'') into $k$, $g_{xk}\geq 0$, represents the degree of membership of detailed state $x$ in meta-state $k\in\mathcal{M}$. The $N \times L$ matrix $G=\{g_{xk}\}$ is non-negative and row-stochastic.
{\em Hard aggregation} is the special case where the meta-states form a partition of the state-space, and each state $x\in\mathcal{S}$ ``belongs'' to a single cluster: $g_{xk}=1$ for one and only one $k\in\mathcal{M}$. The more general case is referred to as {\em soft aggregation}. \item {\em Disaggregation probabilities:} For each meta-state $l\in\mathcal{M}$, the probability that $l$ disaggregates (or ``un-groups'') into $x$, $u_{lx}\geq 0$, is the degree to which meta-state $l$ is represented by detailed state $x \in \mathcal{S}$. The $L\times N$ matrix $U=\{u_{lx}\}$ is also non-negative and row-stochastic. If some meta-state $l$ is represented by a single state $x_l$, i.e. $u_{lx_l} = 1$, we refer to this $x_l$ as the {\em representative state} of meta-state $l$. \end{itemize} Having fixed the matrices $G$ and $U$, one solves an {\em aggregated} Bellman equation on the meta-states: \begin{align}\label{eq:aggregateBellman} R(l)& =\sum_{x\in\mathcal{S}}u_{lx} (c(x)+ \alpha \sum_{y\in \mathcal{S}}p_{xy} \sum_{k\in \mathcal{M}}g_{yk}R(k)), ~l\in \mathcal{M}, \end{align} whose matrix form $R = U c+\alpha U P\bg R$ reduces to \be R = (I-\alpha U P\bg)^{-1}U c.\label{eq:agginverse}\ee The function $R$ is the value function of the aggregate problem. In the case of ``hard aggregation'' the true value function is assumed to be constant over each subset in the partition. We say that $x\in S_k$ (or in ``cluster'' $k$) if $g_{xk}=1$, and approximate its value with the aggregate value $R(k)$. The following is known. \begin{proposition}[hard aggregation bound, Proposition 4.2 \cite{bertsekas2018feature}] The unique solution $R$ satisfies $$ |R(k) -V(x)| \leq \frac{|\epsilon|_{\mathcal{M}}^*}{1- \alpha}, ~ k\in \mathcal{M},~ x\in S_k,$$ where \begin{equation} \label{eq:epsdefin} \epsilon(k) = \max_{x,y\in S_k} | V(x) - V(y) |. \end{equation} \label{prop:hardagg} \end{proposition} This bound makes explicit that we want to ``group'' together states that are similar in their value. Since one does not want to (or cannot) compute the exact value, one must have insight into the problem to identify this grouping. The bound we obtain here for soft aggregation with our choice of $U,G$ has a similar flavor; see Theorem \ref{thm:backtodelta} further below. In the case where each meta-state $l$ has a representative state $x_l$ (i.e. $U$ is binary), it is convenient to think of $\mathcal{M}$ as the set of representative states $\mathcal{S}^0:=\{x\in\mathcal{S}: x_l=x \mbox{ for some } l\in\mathcal{M}\}$. A soft aggregation (non-binary $\bg$) then interpolates detailed states from the representative ones. This {\em coarse grid} scheme is what we use in our algorithm. \subsection{A low rank chain on $\mathcal{S}$\label{sec:aggregation_connection}} It is clear that aggregation produces dynamics on the space $\mathcal{M}$ of meta-states with transition law $U P\bg$. However, our aim, recall, is to build a sister chain on $\mathcal{S}$. Theorem \ref{thm:lifted} provides a simple but powerful starting point, relating aggregation to a sister chain with transition matrix $\widetilde{P}$ that is composed of $P$ and $\bg U$: \begin{theorem} [aggregation as sister chain] Consider the value $\widetilde{V}$ of a Markov chain on the original detailed state space $\mathcal{S}$ with the transition matrix $$\widetilde{P}=P\bg U~~~(\widetilde{P}_{xy} = \sum_{z\in\mathcal{S},l\in\mathcal{M}}p_{xz}g_{zl}u_{ly}).$$ The aggregate value $R$ in \eqref{eq:aggregateBellman} equals $U\widetilde{V}$, and $\widetilde{V}=c+\alpha P\bg R$.
\label{thm:lifted} \end{theorem} \noindent\textbf{Proof: } From the Bellman equation for this chain, we have that the value $\widetilde{V}$ satisfies \begin{align*} \widetilde{V}(x)&=c(x) + \alpha\widetilde{P}\widetilde{V}(x)= c(x) +\alpha P\bg U \widetilde{V}(x). \end{align*} Define $\widetilde R:=U \widetilde V$. By the above, we also have $\widetilde{V}=c+\alpha P\bg \widetilde R$. Moreover, multiplying both sides by $U$ gives $$\widetilde R = U c + \alpha U P\bg\widetilde R. $$ This $\widetilde R$ is in fact the unique solution to the aggregate Bellman equation. \hfill \Halmos\vspace*{0.4cm} In this way, aggregation gives rise to a family of lower-rank chains with law $\widetilde P[\bg,U] := P \bg U$ on the detailed state space, which we call the $(\bg,U)$-\emph{lifted chain}. The lifted chain's value is $c+\alpha P \bg R$ and, as such, is obtained from the lower-dimensional $R$, reducing the computational complexity. In the control context, this will allow us to avoid full policy optimization on $\mathcal{S}$ during policy iteration; see \S \ref{sec:optimization}. {\em The sister chain will thus be a lifted chain whose parameters are tuned for moment matching}. In an architecture with representative states, $R=U\widetilde{V}$ simplifies to $R(l)=\widetilde{V}(x_l)$. In that case the degrees of freedom are in (a) the choice of the representative states and (b) the design of the aggregation matrix $\bg$. \section{Producing the sister chain\label{sec:matching}} The moments of the transition law $\widetilde{P}[\bg,U]=P\bg U$ are given by \[ \widetilde \mu[\bg,U](x)=\sum_{y} \widetilde{P}_{xy}[\bg,U] (y-x)\mbox{, } ~ \widetilde \sigma^2[\bg,U](x)=\sum_{y}\widetilde{P}_{xy}[\bg,U](y-x)(y-x)^\intercal.\] In our framework, aggregation is likely to work well if $\widetilde{\mu} [\bg,U] \approx \mu$ and $\widetilde{\sigma}^2[\bg,U]\approx \sigma^2$. To make this formal, it is useful to introduce the functions $W_1: \mathbb{R}^d\to \mathbb{R}^d$ and $W_2:\mathbb{R}^{d}\to \mathbb{R}^{d^2}$, given by $W_1(x)=x \mbox{~(the identity operator) and } W_2(x)=xx^{\intercal},$ so that $$PW_1(x) = \mathbb{E}_x[X_1], \mbox{ and } PW_2(x)=\mathbb{E}_x[X_1X_1^{\intercal}].$$ Since $\mu(x) = PW_1(x)-x \mbox{ and } \sigma^2(x) = PW_2(x)-x\mu(x)^{\intercal}-\mu(x)x^{\intercal}-xx^{\intercal},$ \begin{align} \mu(x) - \widetilde{\mu}[\bg,U](x) =& PW_1 (x)-\widetilde{P}W_1(x) \\ \sigma^2(x) - \widetilde{\sigma}^2[\bg,U](x) = & PW_2(x)-\widetilde{P}W_2(x) - x [\mu(x) - \widetilde{\mu}(x)] ^{\intercal} - [\mu(x) - \widetilde{\mu}(x)] x ^{\intercal}. \label{eq:second-gap} \end{align} Furthermore, if $\bg,U$ are chosen such that $\widetilde{\mu}[\bg,U](x)=\mu(x)$, equation (\ref{eq:second-gap}) becomes \begin{align} \sigma^2(x) - \widetilde{\sigma}^2[\bg,U](x) = & PW_2(x)-\widetilde{P}W_2(x).\label{eq:quadratic-gap} \end{align} We observe that to be able to match both moments for every state $x$, utilizing as support a subset $\mathcal{Y}\subseteq \mathcal{S}$, necessitates the existence of a solution $\alpha_{xy}$, $x\in\mathcal{S}, y\in \mathcal{Y}$, to the family of equations (simultaneous in $x$) $$\sum_{y\in\mathcal{Y}}\alpha_{xy}W_1(y) =PW_1(x),~ \sum_{y\in\mathcal{Y}}\alpha_{xy}W_2(y) =PW_2(x),~ x\in\mathcal{S}.$$ In other words, every point in the $d+d^2$-dimensional scatter $\{(PW_1(x),PW_2(x)),x\in\mathcal{S}\}$ must be expressible as a convex combination of the points $\{(W_1(y),W_2(y)),y\in\mathcal{Y}\}$. Note that this is necessary but not sufficient for aggregation-based moment matching. 
We would further need the convex combination $\alpha$ to be decomposable as $P\bg U$, where $\bg,U$ are valid aggregation and disaggregation matrices. \begin{remark}[The (im)possibility of 2nd moment matching] {\em The existence of a small strict subset $\mathcal{Y}$ of $\mathcal{S}$ (and coefficients $\alpha_{xy}$) with the above property is not guaranteed. A simple example makes this abundantly clear and also captures the subtlety of (simultaneous) moment matching. \begin{figure}[h]\centering \includegraphics[scale=0.35]{RW_combined} \caption{The moment scatter plot for two different random walks on $[0,1, \ldots,20]$. Circles: Simple random walk with absorbing end points, where simultaneous matching of both moments is impossible --- one cannot express a point as a convex combination of other points. Squares: a random walk where each point in the moment scatter can be written as a convex combination of the two end points.\label{fig:simpleRW}} \end{figure} Consider the simple absorbing random walk on $[0,1,\ldots,n]$, with $P_{x,x+1}=P_{x,x-1}=1/2$ for all $x\in\{1,\ldots,n-1\}$, where $\{ 0, n \}$ are absorbing states. Here we have $\mathbb{E}_x[X_1]=x$ for all $x$ and $\mathbb{E}_x[X_1^2]=x^2+\mathbbm{1}\{x\notin \{0,n\}\}$. Given the scatter $\{(\mathbb{E}_x[X_1],\mathbb{E}_x[X_1^2]),x\in\mathcal{S}\}$, one cannot express all points as convex combinations of a (common) small number of points; see the round markers in Figure \ref{fig:simpleRW}. A piecewise-linear approximation allows for matching the first moment while controlling, through the number of breakpoints, the quality of the second-moment match. For contrast, consider a chain that has $P_{x0}=1-x/n$ and $P_{xn}=x/n$ (absorbing in one step at the boundary). Here $\mathbb{E}_x[X_1]=x$ for all $x$ and $\mathbb{E}_x[X_1^2]=nx$, and both moments can be matched using only two representative states corresponding to the end/corner points of the state-space; see the square markers in Figure \ref{fig:simpleRW}. } \hfill \vrule height .9ex width .8ex depth -.1ex \end{remark} Because of this general impossibility it seems natural to prioritize the first-order match. The PDE view is informative here: errors in matching $\mu$ translate into approximation errors that are proportional to the first derivative of $\widehat{V}$, whereas errors in the matching of $\sigma^2$ are only multiplied by the second derivative. 
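The contrast in the remark is easy to reproduce numerically. Below is a minimal sketch (Python; the variable names are ours, and $n=20$ matches Figure \ref{fig:simpleRW}) that builds both moment scatters and verifies that matching the first moment with the two end points reproduces the second moment exactly for the one-step chain but not for the simple random walk.
\begin{verbatim}
import numpy as np

n = 20
xs = np.arange(n + 1)

# Moment scatter (E_x[X_1], E_x[X_1^2]) for the simple absorbing random walk:
m1_rw = xs.astype(float)                   # E_x[X_1] = x
m2_rw = xs**2 + ((xs != 0) & (xs != n))    # E_x[X_1^2] = x^2 + 1 in the interior

# ... and for the one-step chain P_{x0} = 1 - x/n, P_{xn} = x/n:
m2_os = n * xs.astype(float)               # E_x[X_1^2] = n x  (E_x[X_1] = x as well)

# First-moment matching with the end points (0,0) and (n,n^2) forces the
# weights 1 - x/n on (0,0) and x/n on (n,n^2); check the implied 2nd moment.
lam = 1 - xs / n
m2_from_ends = lam * 0 + (1 - lam) * n**2

print(np.allclose(m2_from_ends, m2_os))    # True: both moments matched
print(np.allclose(m2_from_ends, m2_rw))    # False: mismatch n*x - x^2 - 1
\end{verbatim}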
We will insist, then, on matching the first moment exactly while controlling the second-moment mismatch. Thus we use a coarse grid scheme where each meta-state $l$ maps to a representative state $x_l$ (i.e. $\mathcal{M} \subseteq \mathcal{S}$), and each row of $\bg$ has weights over $\mathcal{M}$. Then $U W_1 (l) = x_l, U W_2 (l)= x_l x_l^{\intercal}$ and we are looking for $\bg$ such that for all $x \in \mathcal{S}$ \begin{align*} \sum_{l} [P_x\bg]_{l} [x_l] &= \mathbb{E}_x[X_1] , \mbox{ and } \sum_{l} [P_x\bg]_{l} [x_l x_l^{\intercal} ] \approx \mathbb{E}_x[X_1X_1^{\intercal}]. \end{align*} We show that for a state $x$, $\bg_{x\cdot}$ is straightforward to compute explicitly---it assigns to each nearby representative state a weight that decreases with that state's distance from $x$. Our choice of the coarse grid (hence the representative states) provides us control over the second-moment mismatch. No optimization problem needs to be solved. \subsection{A coarse grid of representative states\label{sec:grid}} Define the \textit{spacing function} $\gridfn(z)=z^{\frak{s}}$ for a {\em spacing exponent} $\frak{s} \in (0,1)$. The choice of $\frak{s}$ is a trade-off between accuracy and computation: a smaller value of $\frak{s}$ produces a finer grid, which implies greater accuracy but a heavier computational burden. The analysis in \S \ref{sec:guarantees} shows that $\frak{s} < \frac{1}{2}$ is required for accurate approximations, while the complexity analysis reveals that $\frak{s}\geq \frac{1}{3}$ guarantees computational gains even under conservative estimates.\footnote{In the mathematical guarantees we use the spacing function $q^{\alpha}(z)=(1-\alpha)^{\frac{1}{4}}q(z)$. For $\alpha=0.99$, for example, $(1-\alpha)^{\frac{1}{4}}\geq 0.3$. We simplify the algorithm exposition by dropping this multiplicative constant.} The formal construction of the grid is tedious but straightforward. Recall that $\mathcal{S}=\mathbb{Z}^d\cap \times_{i=1}^d [\ell_i,u_i]$. The grid is constructed symmetrically about the origin, so consider the positive portion of an axis $i$. Index the grid with $\{ k \}$ and let $f(k)$ be the axis value at index $k$, which is given recursively by \be f(k+1) = \lceil f(k)+ \gridfn(f(k)) \rceil + 1.\label{eq:f_recursion} \ee Take $f(0) = \max \{ 0, \ell_i \}$ and set $f(\bar{n}_i)=u_i$ for $\bar{n}_i:=\min\{k:f(k) \geq u_i\}$. Each point on the grid is thus characterized by an index set $\vec{k} = [k_1, ..., k_d]$, where $k_i\in [-\underline{n}_i,\bar{n}_i]$, $i=1, ..., d$. These grid-points are the representative states $ \mathcal{S}^0 = \{ x({\vec{k}}): x({\vec{k}})_i = f(k_i) \}.$ We construct the matrix $U$ so that for every $\vec{k} \in \mathcal{M}$ \be u_{\vec{k}x({\vec{k}})}=1, ~ u_{\vec{k}y}=0 \mbox{ for all } y \neq x({\vec{k}}).\tag{Disaggregation matrix} \ee For notational simplicity, when the explicit value of $\vec{k}$ is immaterial we will revert to using $l$ and $x_l$ for a meta-state and its representative state. Figure \ref{fig:grid} (LEFT) illustrates the general pattern over $\mathbb{Z}^2$, with the red lines highlighting how the spacing along each axis scales with the distance to the origin on that axis.
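The recursion \eqref{eq:f_recursion} is only a few lines of code. Below is a sketch of one axis of the grid (Python; the function name is ours, and the clamping of the last grid-point at the boundary follows the construction above).
\begin{verbatim}
import math

def grid_points(s, upper, lower=0):
    """One axis of the coarse grid: f(k+1) = ceil(f(k) + f(k)**s) + 1,
    starting at f(0) = max(0, lower) and clamped at the axis boundary."""
    pts = [max(0, lower)]
    while pts[-1] < upper:
        f = pts[-1]
        pts.append(math.ceil(f + f ** s) + 1)
    pts[-1] = upper   # set f(n_bar) = u, per the construction
    return pts

# With s = 0.45 (the value used in our experiments) on [0, 40]:
print(grid_points(s=0.45, upper=40))
# -> [0, 1, 3, 6, 10, 14, 19, 24, 30, 36, 40]
\end{verbatim}
The negative portion of the axis, when present, is obtained by mirroring the construction about the origin.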
\begin{figure}[h!]\centering \includegraphics[width=0.4\textwidth]{gridnB.png}\hspace*{1cm} \includegraphics[width=0.4\textwidth]{grid_G.png} \caption{(LEFT) The grid with spacing exponent $\frak{s}=0.5$ and an encasing box $\mathcal{B}$. (RIGHT) The aggregation matrix $\bg$ is used to express each state $y$ as a convex combination of the meta-states on its encasing box.\label{fig:grid}} \end{figure} \begin{lemma}[number of meta-states] With spacing exponent $\frak{s}$, the number $L=|\mathcal{S}^0|=|\mathcal{M}|$ of representative (and hence meta-) states satisfies \[L\leq \left( \frac{\sqrt{2}}{1-\frak{s}} \right) ^d |\mathcal{S}|^{1- \frak{s}}.\] \label{lem:grid_growth} \end{lemma} The spacing exponent $\frak{s}$ is related to $\varepsilon$ in our accuracy bound (see \eqref{eq:guaranteeintro} and \S \ref{sec:guarantees} below) by $\varepsilon=1-2\frak{s}$ ($\frak{s}<1/2$); in our numerical experiments we use $\frak{s}=0.45$, which already results in high accuracy. By Lemma \ref{lem:grid_growth} the number of meta-states is bounded by $(2\sqrt{2})^d N^{\frac{1+\varepsilon}{2}}$ for $\frak{s}\in [0,1/2]$. In the special case where $\mathcal{S}=[0,r]^d$, $$\frac{L}{N} = \frac{|\mathcal{M}|}{|\mathcal{S}|} \leq \left(\frac{2\sqrt{2}}{r^{\frak{s}}}\right)^d,$$ implying that the larger $r$ is relative to $(2\sqrt{2})^{\frac{1}{\frak{s}}}$, the more substantial the dimensionality reduction. \subsection{Aggregation matrix $\bg$} With the construction of representative states in the previous section, it is always feasible to find an $N\times L$ stochastic matrix $\bg$ that achieves perfect first-moment matching: $$y+\widetilde{\mu}(y)=\sum_{l} [P_y\bg]_{l} [x_l] = \mathbb{E}_y[X_1]=y+\mu(y) \mbox{, } ~ \forall y \in \mathcal{S}.$$ There may be multiple matrices $\bg$ that satisfy this moment matching. We construct ours as follows: for each state $y\in\mathcal{S}$, let $g_{y,\cdot}$ be a distribution over $\mathcal{M}$ such that $$\sum_{l}g_{yl}x_l=y.$$ The matrix $\bg$ with rows $\{g_{y,\cdot},y\in\mathcal{S}\}$ immediately satisfies first-order moment matching because $$\widetilde{\mathbb{E}}_x[X_1]=\sum_{y}p_{xy}\sum_{l}g_{yl}x_l = \sum_{y}p_{xy} y=\mathbb{E}_x[X_1], \mbox{ for all } x \in \mathcal{S}.$$\footnote{This is an instance of a more general fact. For perfect first-moment matching it suffices that $\bg,U$ are such that, for each $y$, $(\bg U)_{y,\cdot}$ is the distribution of $y+Z$ where $\mathbb{E}[Z]=0$. In that case, $(P\bg U)_{x\cdot}$ is the convolution of $P_{x\cdot}$ and a zero-mean jump and, consequently, has the same mean as $P_{x\cdot}$: $\bg U W_1(y) = y \implies P\bg U W_1(x) = x+\mu(x).$} \noindent {\bf Computing $g_{y,\cdot}$.} For each $y\in\mathcal{S}$ we identify the smallest enclosing box and write $y$ as a convex combination of its corners, as illustrated in Figure \ref{fig:grid} (RIGHT). For a given $y$, let $\mathcal{B} = \{ \vec{k}_1, ..., \vec{k}_{2^d} \} \subseteq \mathcal{M}$ be the set of $2^d$ meta- (i.e. representative) states that form the box, restrict $g_{yl'} = 0$ for $l' \notin \mathcal{B}$, and solve for $$ \sum_{l \in \mathcal{B}}g_{yl}x_l=y, \mbox{ where } \sum_{l \in \mathcal{B}}g_{yl}=1 \mbox{ and } g_{yl} \geq 0 \mbox{ for } l \in \mathcal{B}. $$ This set of linear constraints has an explicit solution. Given a box $\mathcal{B}$, let $\bar{s}_i=\max_{l\in\mathcal{B}}(x_l)_i$ and $\underline{s}_i=\min_{l\in\mathcal{B}}(x_l)_i$ for $i \in [d]$. Then, given $y$ and its enclosing box $\mathcal{B}$, we write \be g_{yl} = \Pi_{i=1}^d \left[ \mathbbm{1}\{(x_l)_i = \bar{s}_i \} \cdot \frac{y_i - \underline{s}_i}{\bar{s}_i-\underline{s}_i} + \mathbbm{1}\{ (x_l)_i = \underline{s}_i\} \cdot \frac{\bar{s}_i - y_i}{\bar{s}_i-\underline{s}_i} \right] \label{eq:phiconstruction}.\ee These are the standard multilinear-interpolation weights: $g_{yl}$ puts more weight on a representative state $x_l \in \mathcal{B}$ the closer it is to the state $y$. 
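For concreteness, here is a small sketch of the weights \eqref{eq:phiconstruction} (Python; the function and the worked example are ours), together with a numerical check of the two properties recorded in the lemma that follows.
\begin{verbatim}
import itertools
import numpy as np

def box_weights(y, lo, hi):
    """Multilinear-interpolation weights of point y over the 2^d corners of
    its enclosing box [lo, hi] (elementwise lo < hi): the weight on a corner
    is the product over axes of the normalized distance from y to the
    *opposite* face, as in the explicit construction of g_{yl}."""
    y, lo, hi = map(np.asarray, (y, lo, hi))
    corners, weights = [], []
    for choice in itertools.product((0, 1), repeat=len(y)):
        c = np.array(choice)
        corners.append(np.where(c == 1, hi, lo))
        weights.append(np.prod(np.where(c == 1,
                                        (y - lo) / (hi - lo),
                                        (hi - y) / (hi - lo))))
    return np.array(corners), np.array(weights)

# Check: the weights form a distribution and reproduce y exactly,
# i.e. sum_l g_{yl} = 1 and sum_l g_{yl} x_l = y (perfect first moment).
corners, w = box_weights(y=[7.0, 2.0], lo=[6, 1], hi=[10, 3])
assert np.isclose(w.sum(), 1.0)
assert np.allclose(w @ corners, [7.0, 2.0])
\end{verbatim}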
The following summarizes the properties of $\bg$. \begin{lemma} \label{lem:phi_construct} The construction of $\bg$ in \eqref{eq:phiconstruction} satisfies $\sum_{l \in \mathcal{B}} g_{yl} = 1$ and $\sum_{l \in \mathcal{B}}g_{yl}x_l=y$. Also, $g_{yl} = 1$ when $y = x_l$ for $l \in \mathcal{M}$. The choice of $\mathcal{S}^0$ (hence $U$) and $\bg$ guarantees that $\widetilde{P}=P\bg U$ induces perfect first-moment matching. \end{lemma} To bound the second-moment mismatch, let $\Sigma(x)=\mathbb{E}_x[(X_1-\mathbb{E}_x[X_1])(X_1-\mathbb{E}_x[X_1])^{\intercal}]$ be the covariance matrix of $X_1$ starting at $x$. If $\Gamma$ is such that $|\Delta|_{\mathcal{S}}^*\leq \Gamma$, then $\|\Sigma(x)\|\leq \Gamma$ after a suitable re-definition of the constant. \begin{lemma}[second moment mismatch]\label{lem:secondapprox} Consider a Markov chain on $\mathcal{S}=\times_{i=1}^d[\ell_i,u_i]\cap \mathbb{Z}^d$. With $\bg,U$ produced by Algorithm \ref{alg:moma}, there is a constant $\Gamma$ such that $$\|P\bg U W_2(x)-PW_2(x)\|\leq \Gamma(\|x\|+ \Delta_x)^{\frak{s}}.$$ \end{lemma} The mathematical bounds in \S \ref{sec:guarantees} inform the choice of the coarseness parameter $\frak{s}$ for the sister chain $\widetilde{P}$. Put simply, they reveal that we ``can afford'' state-dependent spacing $\|x\|^{\frak{s}}$ for $\frak{s}<1/2$ between meta-states, and they capture how the accuracy gap shrinks as $\frak{s}$ decreases further from $1/2$. \section{Algorithm and Complexity \label{sec:complexity}} The \emph{moment-matching $($\textsc{MoMa}$)$ aggregation} algorithm, Algorithm \ref{alg:moma}, summarizes our construction---via representative states and distance-proportional aggregation---of the design matrices $U, \bg$. \begin{algorithm} \floatname{algorithm}{Algorithm}\caption{Moment-Matching (\textsc{MoMa}) aggregation with coarse grid} \renewcommand{\thealgorithm}{} \label{alg:moma} \begin{algorithmic}[1] \Require State space $\mathcal{S} = \times_{i=1}^d[\ell_i, u_i] \cap \mathbb{Z}^d$, spacing exponent $\frak{s}$. \Ensure Aggregation structure with parameters $U, \bg$. \State {\em Construct $\frak{s}$-spaced grid}: Create grid-points $z_i^k, z_i^{-k}$ as appropriate for each axis $i$. \State {\em Construct $U$}: For each meta-state $\vec{k}= [k_1, ..., k_d]$ on the grid, assign $u_{\vec{k}x_{\vec{k}}}=1$ where $[x_{\vec{k}}]_i = z_i^{k_i}$. \State {\em Construct $\bg$}: For each state $y$ and the meta-states in its enclosing box $l \in \mathcal{B}$, compute the distribution $g_{y\cdot}$ such that $\sum_{l \in \mathcal{B}}g_{yl}x_l=y$ and $g_{yl'} = 0 $ for $l' \notin \mathcal{B}$. \end{algorithmic} \end{algorithm} \begin{algorithm} \floatname{algorithm}{Algorithm}\caption{Policy evaluation with \textsc{MoMa}~aggregation} \renewcommand{\thealgorithm}{} \label{alg:eval} \begin{algorithmic}[1] \Require Markov reward process $\mathcal{C}= <\mathcal{S}, P, c, \alpha>$, spacing exponent $\frak{s} \in [\frac{1}{3}, \frac{1}{2})$. \Ensure Approximate value $\widetilde{V}$. \State {\em \textsc{MoMa}~aggregation}: Obtain $U, \bg$ using Algorithm \ref{alg:moma}. \State Solve $R=(I-\alpha U P\bg)^{-1}U c$. \State Compute approximation $\widetilde{V}=c+\alpha P\bg R$. \end{algorithmic} \end{algorithm}
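In code, the evaluation scheme of Algorithm \ref{alg:eval} amounts to a few lines of linear algebra. A minimal numpy sketch, assuming $P$, $U$ and $\bg$ are available as dense arrays (in practice $P$ is sparse and $U P \bg$ should be assembled accordingly):
\begin{verbatim}
import numpy as np

def moma_evaluate(P, c, alpha, U, G):
    """Approximate policy evaluation: solve the L-dimensional aggregate
    system instead of the N-dimensional one.
    P: (N,N) transition matrix, c: (N,) costs,
    U: (L,N) disaggregation, G: (N,L) aggregation."""
    L = U.shape[0]
    # Aggregate value: R = (I - alpha * U P G)^{-1} U c  -- an L x L solve.
    R = np.linalg.solve(np.eye(L) - alpha * U @ P @ G, U @ c)
    # Lift back to the detailed state space: V~ = c + alpha * P G R.
    return c + alpha * P @ (G @ R)
\end{verbatim}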
The following theorem, a corollary of Lemma \ref{lem:tilde_operator_gap1}, shows that the quality of the approximation depends on the local linearity of the value function $V$ and its approximation $\widetilde{V}$. We abbreviate here the notation $$G\widetilde{V}(y) = \sum_{l}g_{yl}\widetilde{V}(x_l).$$ \begin{theorem} With the coarse grid scheme, \begin{align*} |V-\widetilde{V}|_{\mathcal{S}}^*&\leq \frac{1}{1-\alpha}\left(|V-GV|_{\mathcal{S}}^* +|\widetilde{V}-G\widetilde{V}|_{\mathcal{S}}^*\right). \end{align*} \label{thm:backtodelta} \end{theorem} The gap depends, then, on how well the convex combination of the values $V(x_l)$ at neighboring representative states $x_l \in \mathcal{B}$ approximates the value $V(y)$ for $y$ in box $\mathcal{B}$; similarly for $\widetilde{V}$. Our construction effectively interpolates the value at a point $y$ from those at the nearest grid-points, with weights corresponding to the relative distance from those grid-points. This distance-based interpolation is the one that arises from perfect first-moment matching. In particular, $\sum_{l}g_{yl}(x_l-y)=0$, so that, pretending that $V$ has a smooth extension, $$V(y)-\sum_{l}g_{yl}V(x_l)\approx -\sum_{l}g_{yl}DV(y)'(x_l-y) + \mathcal{O}(\Delta_y^2 D^2V(y))= \mathcal{O}(\Delta_y^2 D^2V(y)).$$ The guarantees in \S \ref{sec:guarantees} formalize this. \noindent {\bf Computational complexity}. The computational complexity of Markov Decision Problems (MDPs) is well studied; for a detailed exposition of these issues see \cite{littman2013complexity,blondel2000survey}. The discussion in this section focuses on the evaluation step. We embed this discussion in the context of policy optimization in \S \ref{sec:optimization}. Recall that $N=|\mathcal{S}|$ is the number of states and $L=|\mathcal{M}|$ is the number of meta-states. Per Lemma \ref{lem:grid_growth}, we know $L=\mathcal{O}(N^{1- \frak{s}})$. Values of $\frak{s}$ closer to $1$ are less expensive but less accurate. Letting the range $r$ be the smallest integer such that $u_i - \ell_i \leq r \mbox{ for all } i\in [d]$, we also have that $N \leq (r+1)^d$ and, in turn, that $L=\mathcal{O}(r^{d(1-\frak{s})})$. Two ingredients determine the computational value of our approach. The first is the {\em gain} from matrix inversion. This gain is embedded in aggregation and is independent of moment matching. The second is the {\em loss} inherent to our moment-based computation of the aggregation matrices $\bg, U$. We treat these two ingredients separately. \paragraph{\bf Matrix inversion.} Computationally speaking, the key step in solving for $V=(I-\alpha P)^{-1}c$ is the inversion of the $N\times N$ matrix $(I-\alpha P)$. The complexity of matrix inversion is $\Omega(N^2\log N)$ (see \cite{tveit2003complexity}), while $\mathcal{O}(N^3)$ is achieved by standard Gaussian elimination. 
Solving for the aggregate value $R \in \mathbb{R}^L$, on the other hand, requires the inversion of the smaller $L\times L$ matrix $(I-\alpha U P\bg)$, so that $$\mbox{Gain} = \Omega(N^2\log N)- \mathcal{O}(L^3).$$ When $\frak{s} \geq \frac{1}{3}$, $L^3=\mathcal{O}(N^{3(1- \frak{s})})= \mathcal{O}(N^2)$, so we have a gain of $$\Omega(N^2 \log N-N^{2})=\Omega(N^2\log N).$$ This is a conservative estimate of the gain. On the one hand, no known algorithm achieves the $N^2\log N$ lower bound; on the other, various algorithms are faster than Gaussian elimination and require less than $\mathcal{O}(L^3)$ for the aggregate problem. If we fix the inversion algorithm (to, say, Gaussian elimination), the aggregate matrix inversion takes $\mathcal{O} (N^{3- 3\frak{s}})$ compared to $\mathcal{O} (N^3)$ for the full one---approximately the square root of the full time complexity when $\frak{s}$ is close to $\frac{1}{2}$. \paragraph{\bf Moment matching.} The matrix $\bg$ can be constructed as a linear program.\footnote{It would have, per state $y$, $2^d$ variables (the number of box corners) and $d+1$ constraints (one constraint for each dimension $i\in[d]$ and an additional stochasticity constraint).} Leveraging our coarse grid scheme, we instead construct $\bg$ explicitly in \eqref{eq:phiconstruction}. These operations take $\mathcal{O}(Nd2^d)$ time. This is compared against the gain of $\Omega(N^2 \log N)$ when $\frak{s} \geq \frac{1}{3}$. The total gain is then $$\Omega(N^2\log N-Nd2^d)=\Omega(N^2\log N).$$
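The inversion gain is already visible in a naive benchmark. The throwaway sketch below (Python; the Dirichlet rows serve only as a stand-in for a row-stochastic $P$, and the constants are arbitrary) times the full and the aggregate-sized linear solves.
\begin{verbatim}
import time
import numpy as np

rng = np.random.default_rng(0)
N, s = 4000, 0.45
L = int(N ** (1 - s))   # number of meta-states, per Lemma (grid growth)

for n in (N, L):
    A = np.eye(n) - 0.99 * rng.dirichlet(np.ones(n), size=n)  # I - alpha*P
    b = rng.random(n)
    t0 = time.perf_counter()
    np.linalg.solve(A, b)
    print(f"n={n}: {time.perf_counter() - t0:.3f}s")
\end{verbatim}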
\section{Policy optimization\label{sec:optimization} } Some control notation to start. We let $\mathrm{A}(x)$ be the set of feasible controls in state $x\in\mathcal{S}$. We use the notation $\pi$ for a stationary policy; it is the function from $\mathcal{S}$ to $\mathrm{A}:= \cup_{x \in \mathcal{S}}\mathrm{A}(x)$ such that $\pi(x)$ is the action the policy takes in state $x$. Let $p_{xy}^a$ denote the probability of transitioning from $x$ to $y$ under the action $a \in \mathrm{A}(x)$, and $P^{\pi}$ the transition matrix under policy $\pi$; $\mathbb{E}_x^a$ (respectively $\mathbb{E}^{\pi}$) is the corresponding expectation. The Bellman operator for a fixed policy $\pi$ is given by $$T^{\pi} V(x)=c(x,\pi(x))+\alpha [P^{\pi}V](x),$$ so that the value under $\pi$ is the solution to the fixed-point equation $V^{\pi}=T^{\pi}V^{\pi}$, which is solved by matrix inversion; recall \S \ref{sec:themodel}. The optimization Bellman operator $T$ is given by $$T V(x)=\min_{a \in \mathrm{A}(x)}\{c(x,a)+\alpha [P^aV](x)\},$$ and the optimal value $V^*$ is the unique solution of the Bellman optimality equation $V^* = T V^*$. Denote the minimizing policy by $\pi^*$; if there are multiple optimal policies, we arbitrarily pick one. The first and second local moments depend on the state and the action taken in that state. We write $$\mu_a(x)=\mathbb{E}_x^a[X_1-x],\mbox{ and } \sigma_a^2(x) = \mathbb{E}_x^a[(X_1-x)(X_1-x)^{\intercal}],~x\in\mathcal{S},$$ and denote with $\widetilde{\cdot}$ all analogous definitions for a sister chain. 
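In code, the two operators are one-liners. A sketch (numpy), under the simplifying assumption of a common finite action set stacked along the leading axis, so that the transition tensor has shape $(|\mathrm{A}|, N, N)$ and the cost array shape $(|\mathrm{A}|, N)$:
\begin{verbatim}
import numpy as np

def T_pi(V, P_pi, c_pi, alpha):
    """Fixed-policy Bellman operator: T^pi V = c^pi + alpha * P^pi V."""
    return c_pi + alpha * P_pi @ V

def T(V, P, c, alpha):
    """Optimality Bellman operator: minimize over actions.
    P: (A,N,N) transitions per action, c: (A,N) costs per action."""
    return np.min(c + alpha * P @ V, axis=0)
\end{verbatim}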
The optimal aggregate value function is the fixed point of $$R(k)=\sum_{x\in\mathcal{S}}u_{kx} {\min_{a \in\mathrm{A}(x)}} \sum_{y\in \mathcal{S}}p_{xy}^a [c(x,a)+\alpha \sum_{l\in \mathcal{M}}g_{yl}R(l)].$$ Although the value is defined only for $k\in \mathcal{M}$, note that the minimizing policy is defined on the full state space $\mathcal{S}$. We measure the performance of the approximate policy $\pi'$ by comparing its value $V^{\pi'}$ to the optimal value $V^*$: $| V^*(x) - V^{\pi'}(x) | $ is the \emph{optimality gap}. \subsection{Approximate PI with \textsc{MoMa}~aggregation} The bound for a fixed policy in Lemma \ref{lem:tilde_operator_gap1} extends to the optimality gap: \begin{lemma} [optimality gap] Consider the focal chain $\mathcal{C}$ with optimal value and policy $V^*,\pi^*$, and the sister chain $\widetilde{\mathcal{C}}$ with optimal $\widetilde{V}^*,\widetilde{\pi}^*$. Then, $$| V^*(x) - \widetilde{V}^*(x) | \leq \frac{ \alpha}{1-\alpha} (\delta_{V^*}[P^{\pi^*}, \widetilde{P}^{\pi^*}] + \delta_{\widetilde{V}^*}[P^{\widetilde{\pi}^*}, \widetilde{P}^{\widetilde{\pi}^*}]). $$ \label{lem:opt_delta} \end{lemma} The following is then proved similarly to Theorem \ref{thm:backtodelta}. \begin{theorem} $$| V^* - \widetilde{V}^* |_{\mathcal{S}}^* \leq \frac{ 1}{1-\alpha}\left( |V^*-GV^*|_{\mathcal{S}}^* +|\widetilde{V}^*-G\widetilde{V}^*|_{\mathcal{S}}^*\right).$$ \label{thm:backtodelta2} \end{theorem} Thus the gap depends, similarly, on how well the optimal values under the focal chain $P$ and the sister chain $\widetilde{P}$ are approximated by interpolating values at their nearest grid-points, weighted by the appropriate probabilities. Once the policies $\pi^*$ and $\widetilde{\pi}^*$ are fixed, moment matching supports---as seen in earlier sections---a small approximation gap with non-negligible computational gains. In embedding this within policy iteration it is important (indeed central) that our construction of the aggregation matrices $\bg,U$ does not depend on the policy and, hence, does not have to be updated in each iteration. The base algorithm is the aggregate analogue of standard policy iteration (PI) and alternates between evaluation and updating steps. Exact evaluation and/or update are replaced by approximate computations in approximate policy iteration (API); using aggregation, the $k^{th}$ iteration proceeds as follows: \begin{itemize} \item[(i)] Evaluation: for the current policy $\pi^k$ and the induced $P^{\pi ^k}$, compute $R^k=(I-\alpha U P^{\pi ^k}\bg)^{-1}U c^{\pi^k}$ \\(see \eqref{eq:aggregateBellman},\eqref{eq:agginverse}). \item[(ii)] Update: find a policy $\pi ^{k+1}$ that satisfies $$\pi^{k+1}(x)\in \argmin_{a \in \mathrm{A}(x)}\left\{c(x,a)+\alpha \sum_{y\in\mathcal{S}}p_{xy}^a\sum_{l\in\mathcal{M}}g_{yl}R^k(l)\right\}.$$ \end{itemize} This is nothing but policy iteration for a chain on $\mathcal{M}$ with transition matrix $U P\bg$, so that, as follows from general theory, it is guaranteed to converge; see for example \cite[Proposition 6.4.2]{puterman1994markov}. \begin{algorithm} \floatname{algorithm}{Algorithm}\caption{(\textsc{MoMa}~API)\label{alg:onestep}} \renewcommand{\thealgorithm}{} \begin{algorithmic}[1] \Require Spacing exponent $\frak{s} \in [\frac{1}{3}, \frac{1}{2}).$ \Ensure Policy $\widetilde{\pi}$. \State {\em \textsc{MoMa}~aggregation}: Obtain $U, \bg$ using Algorithm \ref{alg:moma}. \State Set initial control on representative states $\bar \pi^0: \mathcal{S}^0 \rightarrow \mathrm{A}$. 
\State Compute the induced transition $\bar P^{\bar{\pi}^0}$ ($L\times N$, rows indexed by $\mathcal{S}^0$). Set $\bar{P}^0 \leftarrow \bar P^{\bar{\pi}^0}$. \While{convergence criterion is not met} \State {\em Policy Evaluation}: Compute $R^k=(I-\alpha \bar P^k \bg )^{-1}U c^{\bar{\pi}^k}$. \State {\em Policy Update}: \begin{align*} &\bar \pi^{k+1}(x_l)\leftarrow \argmin_{a \in \mathrm{A}(x_l)}\{c(x_l,a) + \alpha [\bar P^a \bg R^k](x_l)\},\mbox{ for } x_l\in\mathcal{S}^0,\\& \bar P^{k+1}\leftarrow \bar P^{\bar{\pi}^{k+1}}.\end{align*} \EndWhile \State {\em Full Update}: $$\widetilde \pi(x)\leftarrow \argmin_{a}\{c(x,a) + \alpha [P^a \bg R](x)\}, ~ \forall x \in \mathcal{S}.$$ \end{algorithmic} \end{algorithm} Into this general schema we add two ingredients: \begin{itemize} \item[1.] {\bf \textsc{MoMa}~preprocess.} As detailed in \S \ref{sec:matching}: we build the $\frak{s}$-spaced \texttt{Grid}, and create the binary disaggregation matrix $U(\texttt{Grid})$ that has $u_{lx_l}=1$ for all grid-points $x_l$. Next we compute the non-negative row-stochastic matrix $\bg(\texttt{Grid})$, so that the $y^{th}$ row is a $y$-mean distribution over the representative states. This construction does not depend on the transition matrix and, in turn, not on the control. It is computed once and requires no update during the PI iterations. \item[2.] {\bf Reduction to PI on representative states.} To further reduce the computational burden---especially in the updating step---we leverage a useful implication of the coarse grid scheme. In (aggregate) evaluation we solve $R=U c^{\pi}+\alpha U P^{\pi}\bg R$ by the inversion $(I-\alpha U P^{\pi}\bg)^{-1}U c^{\pi}$. Because $u_{lx_l}=1$ (and $u_{ly}=0$ otherwise) we have $(U P^{\pi})_{ly}=p^{\pi}_{x_ly}$, so that the only rows of $P^{\pi}$ used are those corresponding to the representative states $\mathcal{S}^0=\{x_1,\ldots,x_L\}$. Defining $\bar{P}^{\pi}$ to be the $L\times N$ matrix with $\bar{p}^{\pi}_{x,y}=p^{\pi}_{xy}$ for $x\in\mathcal{S}^0, y\in\mathcal{S}$, we re-write $R=(I-\alpha \bar{P}^{\pi} \bg)^{-1}U c^{\pi}.$ The policy update can be similarly limited to $\mathcal{S}^0$, as these are the only states for which we need to compute the induced $\bar{P}$. Define $\bar{\pi}: \mathcal{S}^0 \rightarrow \mathrm{A}$. We have at iteration $k$ \begin{align*} \bar \pi^{k+1}(x_l)\leftarrow \argmin_{a \in \mathrm{A}(x_l)}\{c(x_l,a) + \alpha [P^a \bg R^k](x_l)\}= \argmin_{a \in \mathrm{A}(x_l)}\{c(x_l,a) + \alpha [\bar{P}^a\bg R^k](x_l) \},\end{align*} where the equality follows from the fact that the $1 \times N$ vector of probabilities $p_{x_l, \cdot}$ can be accessed from $\bar{P}$ instead of $P$. Thus, we can first run a complete policy iteration on $\mathcal{S}^0$ and do a single update for the states $x\in \mathcal{S}\backslash \mathcal{S}^0$ after convergence; this is the final ({\em Full Update}) step of the algorithm. \end{itemize} With the reduction to representative states, convergence of Algorithm \ref{alg:onestep} also follows immediately from that of standard PI, applied here to the controlled chain $\bar{P}^{\bar{\pi}}$ on $\mathcal{S}^0$. The value and policy to which this PI converges inherit an optimality-gap bound from the approximation-gap bound in Theorem \ref{thm:guaranteemain}.
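For illustration, the following compact sketch mirrors the structure of Algorithm \ref{alg:onestep} (Python/numpy; the tensor layout and all names are ours, dense arrays and a common finite action set are assumed, and action $0$ is taken to be feasible everywhere as the initial control).
\begin{verbatim}
import numpy as np

def moma_api(P_bar, c_bar, P_full, c_full, G, U, alpha, max_iter=200):
    """Reduced policy iteration on the representative states.
    P_bar: (A,L,N) rows of P^a at representative states, c_bar: (A,L) their
    costs; P_full: (A,N,N), c_full: (A,N) are used once, for the full update;
    G: (N,L) aggregation, U: (L,N) disaggregation."""
    A, L, N = P_bar.shape
    pi_bar = np.zeros(L, dtype=int)          # initial control on S^0
    for _ in range(max_iter):
        # Evaluation: R = (I - alpha * Pbar^pi G)^{-1} (costs under pi).
        PG = np.stack([P_bar[pi_bar[l], l] @ G for l in range(L)])  # (L,L)
        cost = np.array([c_bar[pi_bar[l], l] for l in range(L)])
        R = np.linalg.solve(np.eye(L) - alpha * PG, cost)
        # Update, only at the representative states.
        Q = c_bar + alpha * np.einsum('aln,nk,k->al', P_bar, G, R)
        new_pi = Q.argmin(axis=0)
        if np.array_equal(new_pi, pi_bar):
            break
        pi_bar = new_pi
    # One full update over all of S after convergence.
    Q_full = c_full + alpha * np.einsum('axn,nk,k->ax', P_full, G, R)
    return Q_full.argmin(axis=0), R
\end{verbatim}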
\subsection{Complexity of optimization} We expand the discussion of evaluation complexity in \S \ref{sec:complexity} to the policy iteration algorithm in its totality. A potential difficulty in calculations such as these is that while the approximation algorithm is more efficient {\em per iteration}, it might require more iterations to converge compared to the exact one, thus erasing any possible gains. Fortunately, the upper bounds on the number of iterations are much smaller for the aggregation PI compared to the exact PI because, recall, we perform updates only for states $x\in \mathcal{S}^0$; see \cite{ye2011simplex,singhmansour,hollanders2016improved,scherrer2013improved}. In \S \ref{sec:complexity} we showed that the time used for the moment-matching construction is made up for by the time saved from evaluating policies for $L=|\mathcal{M}|= \mathcal{O} ( N^{1 - \frak{s}})$ instead of $N$ states. The time savings are {\em not}, however, limited to the evaluation step. Computation is reduced also because we perform the iterations, up to convergence, only on the representative states $x\in\mathcal{S}^0$. Specifically, suppose the cost of a policy update for a single state is $m$; in the worst case $m$ corresponds to comparing all feasible actions $a \in \mathrm{A}(x)$. In full PI, this is done for every state $x \in \mathcal{S}$, so the complexity is $\mathcal{O}(Nm)$. In Algorithm \ref{alg:onestep}, on the other hand, we update only $x \in \mathcal{S}^0$, giving $\mathcal{O}(Lm) = \mathcal{O}(N^{1-\frak{s}}m)$. Moreover, while implicit in the algorithm, obtaining the control-induced transition matrix $P^{\pi}$ at each iteration has a non-negligible computational expense: $\mathcal{O}(N^2)$ for full PI, reduced to $\mathcal{O}(LN) = \mathcal{O}(N^{2-\frak{s}})$ in Algorithm \ref{alg:onestep}. Except for cases where the action space far exceeds the state space in magnitude, specifically $m > N\log N$, the time complexity of matrix inversion in the evaluation step dominates; thus the gain in each iteration is still $\Omega(N^2\log N)$. These gains are multiplied by the number of iterations it takes for the aggregate values to converge, though one must also account for the time to perform one full policy update after convergence. With $T$ iterations we have \[ \mbox{Gain} = \Omega(T N^2\log N - Nd2^d - Nm) = \Omega(N^2\log N) . \] \section{Numerical experiments\label{sec:numerical}} We consider two operations-management problems that pose a computational challenge for exact methods. In both cases there is a natural way to scale up the complexity, starting from small instances where we can visualize the outcomes and proceeding to larger instances that test the computational benefits of \textsc{MoMa}. Importantly, both were studied using alternative approximation methods, providing a benchmark for our own. All experiments reported in this section were run on a machine with an Intel(R) Core(TM) i7-6700 CPU @ 3.40GHz and 16.0GB of RAM, using 64-bit Python. \subsection{\textsc{MoMa}~pre-process} Common to both examples is the \textsc{MoMa}~pre-processing step as detailed in Algorithm \ref{alg:moma}. Figure \ref{fig:moma} visualizes the grid, the representative states and the aggregating probabilities $\bg$ for the case of $d=2$ and state space $[0, 40]^2 \cap \mathbb{Z}^2$. The right-hand side of the figure confirms that the pre-processing scales linearly in the number of states $N$.
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{aug_plots/phi_g.png} \includegraphics[width=0.5\textwidth]{aug_plots/moma_runtime.png} \caption{(LEFT) Highlighted with red ticks are the representative states that form the grid. In green we plot the aggregating probabilities $g_{yl}$ of each state $y$ into one fixed meta-state $l$; only states no further than the nearest neighboring meta-states aggregate into $l$ with positive probability, and $g_{yl}$ decreases with the distance from state $y$ to the representative state $x_l$. (RIGHT) We scale up the (two-dimensional) state space $[0,u]^2$ by raising the value of $u$; pre-processing takes less than 25 minutes even for state spaces with more than a million states. } \label{fig:moma} \end{figure} \subsection{Joint replenishment problem} A retailer carries two types of products. Demand for the products is independent (across products and time periods). There are two types of {\em fixed} ordering costs: (i) a \textit{minor} ordering cost for placing an order for product $i$; and (ii) orders of both products can arrive in the same truck, and a \textit{major} ordering cost is incurred for each {\em truckload}. The number of truckloads then depends on the total amount ordered (of both products). We follow the standard setup as clearly laid out in \cite{vanvuchelen2020use}. For simplicity, only full truckloads are considered. At time $t$, the order amount $q_{i,t}$ for each item type $i=1,2$ is determined based on the inventory level $I_{i,t}$ at the end of the previous period. Lead time is assumed to be $0$ and orders arrive before the demand $d_{i,t}$ is realized. The system dynamics are given by $$I_{i,t} = I_{i, t-1}+q_{i,t} - d_{i,t}.$$ A per-item holding cost $H_i$ is incurred for product-$i$ inventory per unit of time, and a per-item backorder cost $B_i$ is incurred for unmet demand. The minor ordering cost for product $i$ is $k_i$, and $K$ is the cost per truckload. The immediate cost function at period $t$ is then \[ c(I_t, q_t) = \sum_i ( H_i [I_{i, t}]^+ + B_i [I_{i,t}]^- + k_i \mathbbm{1}_{q_{i,t}>0}) + K \left\lceil \frac{\sum_i q_{i,t}}{TC} \right\rceil, \] where $TC$ is the truck capacity. \subsubsection{Small instance} Demand is $d_1 = \mathcal{U}\{0,5\}, d_2 = \mathcal{U}\{0,3\}$ and each truck can carry 6 items. The parameters are as in Table \ref{table:params_JRP_small}. The only difference between the two products is the minor ordering cost. The discount factor is $\alpha = 0.99$. \begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c|c| c| } \hline \textbf{item type} & d & H & B & k & K & $\ell_i$ & $u_i$ \\ \hline i=1 & $\mathcal{U}$\{0,5\} & 1 & 19 & 40 & 75 & -30 & 40 \\ \hline i=2 & $\mathcal{U}$\{0,3\} & 1 & 19 & 10 & 75 & -30 & 40 \\ \hline \end{tabular} \caption{Demand parameters for the small instance of the joint replenishment problem.} \label{table:params_JRP_small} \end{table} We truncate the inventory for each item at 40 and cap the backorder at 30 units for each item type; this means the order quantity must satisfy $q_{i,t}\leq 40-I_{i,t}$. The total number of states is $N=5041$. We take the \textsc{MoMa}~spacing exponent to be $\frak{s} = 0.45$, resulting in $L=400$ meta-states. 
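For concreteness, the one-period cost can be coded directly from the displayed formula; the sketch below (Python) uses the small-instance parameters of Table \ref{table:params_JRP_small} as defaults, and the worked example is ours.
\begin{verbatim}
import math

def jrp_cost(I_post, q, H=(1, 1), B=(19, 19), k=(40, 10), K=75, TC=6):
    """One-period cost of the joint replenishment problem; I_post are the
    end-of-period inventory levels I_{i,t}, q the order quantities q_{i,t}."""
    cost = K * math.ceil(sum(q) / TC)        # major cost per truckload
    for i in range(2):
        cost += H[i] * max(I_post[i], 0)     # holding
        cost += B[i] * max(-I_post[i], 0)    # backorder
        cost += k[i] * (q[i] > 0)            # minor ordering cost
    return cost

print(jrp_cost(I_post=(5, -2), q=(7, 0)))    # 2*75 + 1*5 + 19*2 + 40 = 233
\end{verbatim}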
First, we test the evaluation performance of \textsc{MoMa}; see Algorithm \ref{alg:eval}. The instance is small enough that we can compute the exact optimal policy $\pi^*$. We take $P = P^{\pi^*}$ as the transition function for the focal chain and compute the approximate value $\widetilde{V}(x) = c(x) +\alpha P\bg R(x)$, where $R=(I-\alpha U P\bg)^{-1}U c$ is the aggregate value. This is displayed against the exact value $V = (I-\alpha P)^{-1}c$ in Figure \ref{fig:JRP-small}; the mean and max (over the state space) of the evaluation gap, as percentages of the exact value, are 0.51\% and 0.92\%, respectively. \begin{figure} \centering \includegraphics[width=0.43\textwidth]{JRP/evalV_JRP.png} \includegraphics[width=0.5\textwidth]{JRP/pol_nAgg=400_1.png} \caption{(TOP LEFT) Comparison of the approximately evaluated value function vs the exact one, against the state space. (BOTTOM LEFT) Their ratio against the Euclidean distance to the origin. (RIGHT) Comparison of the performance of the approximate policy $V^{\widetilde{\pi}}$ vs the optimal value $V^*$ (TOP), and their ratio (BOTTOM), both against the Euclidean distance to the origin. \label{fig:JRP-small}} \end{figure} We consider optimization next. We obtain the candidate policy $\widetilde{\pi}$ using Algorithm \ref{alg:onestep}, and compare the value $V^{\widetilde{\pi}}$ of this policy against the optimal value $V^*$; see Figure \ref{fig:JRP-small} (RIGHT). The relative optimality gap $| V^{\widetilde{\pi}} - V^* | / V^*$ has a mean of 1.38\% and a max of 2.73\%. Based on simulation, \cite{vanvuchelen2020use} reports an optimality gap with mean 0.46\% and max 0.91\% for their neural-network method. Computation times are not reported in \cite{vanvuchelen2020use}. Theorem \ref{thm:backtodelta} relates our approximation's quality to the ``local linearity'' of the values $V$ and $\widetilde{V}$, i.e., to how well the value at a state is a distance-proportional interpolation of the values at the grid-points. Figure \ref{fig:remark_JRP} shows that such local linearity holds for the joint replenishment problem and explains the high accuracy we observe. \begin{figure} \centering \includegraphics[width=0.95\textwidth]{JRP/V_div_GV_nAgg=400_hor.png} \caption{Ratio of $V, \widetilde{V}$ against their interpolated values for the small instance of the joint replenishment problem.} \label{fig:remark_JRP} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{JRP/policy1.png} \includegraphics[width=0.47\textwidth]{JRP/diff1.png} \includegraphics[width=0.47\textwidth]{JRP/policy2.png} \includegraphics[width=0.47\textwidth]{JRP/diff2.png} \caption{(LEFT) Optimal order policies for item type 1 (TOP) and item type 2 (BOTTOM), in color gradient. 
(RIGHT) Difference from the approximate policies for item type 1 (TOP) and item type 2 (BOTTOM); differences are marked in shades of red depending on magnitude.} \label{fig:pol_JRP} \end{figure} \subsubsection{Large instance} Next we consider a larger instance studied in \cite{vanvuchelen2020use}. The parameters are as reported in Table \ref{table:params_JRP_large}. In addition, each truck can carry 33 items and the discount factor is $\alpha = 0.99$. We cap the inventory at 120 for each item type\footnote{That is, we restrict the order quantity to satisfy $q_{i,t}\leq 120-I_{i,t} + \text{minimum demand}$.} and the backorder at 50 units for each item type. The state space is then $\mathcal{S} = [-50,120]^2\cap\mathbb{Z}^2$ and the total number of states is $N=|\mathcal{S}|=29241$; using $\frak{s}=0.45$ we have $L = 1089$ meta-states. \begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c|c| c| } \hline \textbf{item type} & d & H & B & k & K & $\ell_i$ & $u_i$ \\ \hline i=1 & $\mathcal{U}$\{15,25\} & 7 & 19 & 40 & 400 & -50 & 120 \\ \hline i=2 & $\mathcal{U}$\{5,15\} & 1 & 19 & 10 & 400 & -50 & 120 \\ \hline \end{tabular} \caption{Demand parameters for the larger instance of the joint replenishment problem.} \label{table:params_JRP_large} \end{table} \begin{figure} \centering \includegraphics[width=0.43\textwidth]{JRP/evalV_large.png} \includegraphics[width=0.5\textwidth]{JRP/pol_nAgg=1089_0.png} \caption{(TOP LEFT) Comparison of the approximately evaluated value function vs the exact one, against the state space. (BOTTOM LEFT) Their ratio against the Euclidean distance to the origin. (RIGHT) Comparison of the performance of the approximate policy $V^{\widetilde{\pi}}$ vs the optimal value $V^*$ (TOP), and their ratio (BOTTOM), both against the Euclidean distance to the origin. \label{fig:JRP-large}} \end{figure} The performance of \textsc{MoMa}~is captured in Figure \ref{fig:JRP-large}. The evaluation gap as a percent of the exact value function has mean 0.11\% and max 0.13\%. The optimality gap as a percent of the optimal value function has mean 0.32\% and max 1.29\%. In terms of computation time, \textsc{MoMa}~API took less than 6 minutes to converge, whereas exact PI took more than 2 hours; see the detailed breakdown of the runtime in Table \ref{table:JRP_time}. Note that the time used for each step is averaged across iterations and quoted in seconds per iteration, whereas the time cost of the \textsc{MoMa}~preprocess is incurred only once and quoted in seconds. Exact runtime is not reported in \cite{vanvuchelen2020use}. The last step of the algorithm---to obtain the optimal actions for all states---always requires one full update. That cost is unavoidable unless one interpolates the control obtained for the representative states. \begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c|c| } \hline \textbf{algorithm} & \textbf{update} & \textbf{compute $P$} & \textbf{evaluation} & \textbf{\# iter} & \textbf{\textsc{MoMa}~preprocess} & \textbf{total} \\ \hline Exact PI & 696.90 /iter & 12.76 /iter & 119.79 /iter & 9 & / & 7597.68 \\ \hline \textsc{MoMa}~API & 35.64 /iter & 0.51 /iter & 0.10 /iter & 9 & 30.28 & 357.15 \\ \hline \end{tabular} \caption{Runtime breakdown (in seconds) for the larger instance of the joint replenishment problem.} \label{table:JRP_time} \end{table} \subsection{Inpatient-flow optimization} This second example follows \cite{dai2017two}, which was already re-considered in \cite{braverman2018taylor}. 
It is a hospital routing problem with multiple patient types and dedicated hospital wards; patients waiting in queue for beds in their specialized wards can be routed (or ``overflowed'') to a different ward in the hospital at a cost. The $J$ internal wards are the server pools in this discrete-time queueing model, and the $N_j$ beds in each constitute the servers. Arrivals of type $j$ in a period $t$ follow a Poisson distribution with mean $\lambda_j$, and arrivals are independent across types and time periods. Once admitted to ward $j$, a patient's length of stay in pool $j$ (the time occupying a bed) is geometrically distributed with mean $1/p_j$. An arriving type-$j$ patient is immediately assigned a bed in pool $j$ when available, and otherwise waits for service in an (infinite) type-$j$ queue; the queue is truncated for the numerical experiments. We let $X_j (t)$ be the number of patients either in service in pool $j$ or waiting in queue $j$; we use $X(t)$ for the vector process. While waiting, a patient in queue $i$ incurs a holding cost $H_i$ per period of delay. A waiting type-$i$ patient can be re-routed to (an unsaturated) pool $j \neq i$ at a cost of $B_{ij}$ and served immediately. A re-routing from $i$ to $j$ can happen only if there are available servers in pool $j$; we do not re-route a waiting customer to another queue. This \emph{overflow decision} is made at the start of the time period, before departures and arrivals are realized. Let $U_{ij}(t) = U_{ij}(X(t)) $ be the number of customers overflowed from buffer $i$ to pool $j$ at time period $t$. The action space in state $x$ is then $$ \mathrm{A}(x) = \bigg\{ u \in \mathbb{Z}^{J \times J} \mid \sum_{i \neq j} u_{ij} \leq (N_j - x_j) ^+, \sum_{j \neq i } u_{ij} \leq (x_i - N_i) ^+ \bigg\}. $$ The number of type-$i$ patients routed to other pools cannot exceed the number waiting in buffer $i$, $(x_i-N_i)^+$, and the number routed to pool $j$ cannot exceed the number of available servers there, $(N_j-x_j)^+$. The discrete-time dynamics are given by $$ X_i(t) = X^P_i (t-1) + A_i (t-1) - D_i(X^P_i (t-1)),$$ where $A_i(t) \sim $ Poisson($\lambda_i$) is the number of type-$i$ arrivals, $D_i (x) \sim \mathrm{Binomial}(x \wedge N_i, p_i)$ is the number of type-$i$ departures, and where $$ X^P_i (t-1) = X_i (t-1) + \sum_{j \neq i} U_{ji} (X(t-1)) - \sum_{j\neq i} U_{ij}(X(t-1))$$ is the post-action state. The cost is incurred immediately after the re-routing but before arrivals and service completions (i.e., before the realization of randomness). It is given by $$c(x,a) = \sum_{i} \sum_{j \neq i} B_{ij} u_{ij} + \sum_i H_i \cdot (x_i - \sum_{j \neq i} u_{ij} - N_i) ^+.$$ \subsubsection{Small instance} Here we consider an instance with 2 wards, so again $\mathcal{S} \subseteq \mathbb{Z}_+^2$. Parameters are listed in Table \ref{table:params_hospital_small}. They are chosen so that $\lambda_1/p_1 = 14>12$ for ward 1 and $\lambda_2/p_2= 8<12$ for ward 2, resulting in pressure to overflow patients from the overloaded ward 1 to ward 2; $B_{12}=5> B_{21} = 1$, so that such overflow is costly. Each queue is truncated at $30$, and the resulting state space is $\mathcal{S}=[0,42]^2\cap \mathbb{Z}_+^2$ with $N=|\mathcal{S}|=1849$. We take the \textsc{MoMa}~spacing exponent to be $\frak{s} = 0.45$, resulting in $L=196$ meta-states.
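Before turning to the results, note that one transition of this chain is easy to simulate. A sketch (Python; names and the worked numbers are ours) implementing the overflow, then departures and arrivals, in the order described above:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def step(x, u, lam, p, Nbeds):
    """One transition of the inpatient-flow chain: u[i][j] is the number of
    patients overflowed from queue i to pool j; then departures (Binomial)
    and arrivals (Poisson) are realized."""
    x = np.asarray(x, dtype=int)
    x_post = x + u.sum(axis=0) - u.sum(axis=1)   # inflow minus outflow
    dep = rng.binomial(np.minimum(x_post, Nbeds), p)
    arr = rng.poisson(lam)
    return x_post + arr - dep

# Two patients from the overloaded queue 1 are overflowed to pool 2:
x_next = step(x=[14, 6], u=np.array([[0, 2], [0, 0]]),
              lam=[3.5, 2.8], p=[0.25, 0.35], Nbeds=[12, 12])
\end{verbatim}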
\begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c| c| c| c|} \hline & $\lambda_i $ & $p_i$ & $H_i $ & $B_{i1}$ & $B_{i2}$ & $N_i$ & $\ell_i$ & $u_i$\\ \hline i= 1 & 3.5 & 0.25 & 5 & / & 5 & 12 & 0 & 42 \\ \hline i= 2 & 2.8 & 0.35 & 5 & 1 & / & 12 & 0 & 42 \\ \hline \end{tabular} \caption{Parameter setting for the small instance of the hospital routing problem. } \label{table:params_hospital_small} \end{table} We first use \textsc{MoMa}~for evaluation using the exact optimal policy, and compare the value of the focal chain $V=V^*$ to $\widetilde{V}$. Figure \ref{fig:Vs-eval} (LEFT) shows the gap. The approximation is visibly inaccurate, and the accuracy is worst for ``small'' states, namely near the origin. This is consistent with our accuracy guarantees, where the gap-bound is smaller the farther one is from the origin. Figure \ref{fig:Vs-eval} explains this in terms of Theorem \ref{thm:backtodelta}. We see that $V$ does not have the clear ``local linearity'' we observed in the replenishment problem, at least not near the origin. \begin{figure} [t!] \centering \includegraphics[width= 0.55 \textwidth]{aug_plots/Vs_eval.png} \includegraphics[width= 0.42 \textwidth]{aug_plots/remark_components.png} \caption{(LEFT TOP) Exact vs approximate values against the state space. (LEFT BOTTOM) Their ratio against the Euclidean distance to the origin. (RIGHT) Ratio of $V, \widetilde{V}$ against the interpolated value. \label{fig:Vs-eval}} \end{figure} It is important to note, however, that the ``shapes'' of $V$ and $\widetilde{V}$ are very similar. This matters for optimization: an optimal control $\pi^*$ satisfies $$\pi^*(x) \in \argmin_{a\in\mathrm{A}(x)}\{c(x,a)+\alpha(\mathbb{E}_x^a[V(X_1)]-V(x))\},$$ so the main influence of the value $V$ on the prescribed action is through the increment $\mathbb{E}_x^a[V(X_1)]-V(x)$. The control computed using $\widetilde{V}$ will be influenced by the approximate increment \begin{align*} \widetilde{\mathbb{E}}_x[\widetilde{V}(X_1)]-\widetilde{V}(x)&= P\bg R(x)-\widetilde{V}(x). \end{align*} Figure \ref{fig:inc-c} (LEFT) plots these increments and captures how close they are. It is then less surprising that---where it matters most, i.e., in the context of optimization---\textsc{MoMa}~performs exceedingly well here. This is confirmed in Figure \ref{fig:inc-c} (RIGHT), where $\widetilde{\pi}$ is computed using \textsc{MoMa}~API and its performance $V^{\widetilde{\pi}}$ is compared to the optimal $V$. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{aug_plots/inc_change_nAgg=121_0.png} \includegraphics[width=0.45\textwidth]{aug_plots/polVs.png} \caption{(LEFT) The incremental changes against the Euclidean distance to the origin. (RIGHT) Their difference against the state space. \label{fig:inc-c}} \end{figure} \subsubsection{Three and four wards} We replicate an instance studied in \cite{braverman2018taylor} with 3 specialty wards, with parameters as listed in Table \ref{table:params_hospital_3ward}. This instance has a load level of $0.7$, i.e., $\lambda_i = 0.7 N_i p_i$. We later consider also a higher load of $0.8$. The discount factor is set to $\alpha = 0.99$. The total number of states is $N=15625$, and with $\frak{s}=0.45$ there are $L=1000$ meta-states.
\begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c| c| c| c| c| } \hline & $\lambda_i $ & $p_i$ & $H_i $ & $B_{i1}$ & $B_{i2}$ & $B_{i3}$ & $N_i$ & $\ell_i$ & $u_i$\\ \hline i= 1 & 2.8 & 0.4 & 10 & / & 5 & 2 & 10 & 0 & 24 \\ \hline i= 2 & 4.2 & 0.6 & 2 & 3 & / & 7 & 10 & 0 & 24 \\ \hline i= 3 & 0.7 & 0.1 & 6 & 7 & 9 & / & 10 & 0 & 24 \\ \hline \end{tabular} \caption{Parameter setting for the 3-ward instance of the hospital routing problem with load level 0.7. } \label{table:params_hospital_3ward} \end{table} For this instance, exact PI converged in 8.4 hours, while \textsc{MoMa}~API converged in 46 minutes---a time saving of over 90\%; see Table \ref{table:3ward_time} for a detailed runtime breakdown. A runtime of ``below 10 minutes'' is reported in \cite{braverman2018taylor}, but without specifying whether it was for the setting of grid size 1728, 216 or 27. \begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c|c| } \hline \textbf{algorithm} & \textbf{update} & \textbf{compute $P$} & \textbf{evaluation} & \textbf{\# iter} & \textbf{\textsc{MoMa}~preprocess} & \textbf{total} \\ \hline Exact PI & 5530.81 /iter & 423.30 /iter & 22.23 /iter & 5 & / & 30327.26 \\ \hline \textsc{MoMa}~API & 628.02 /iter & 46.58 /iter & 0.15 /iter & 4 & 23.20 & 2768.91 \\ \hline \end{tabular} \caption{Runtime breakdown (in seconds) for the hospital routing instance with 3 wards.} \label{table:3ward_time} \end{table} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{3-ward/3ward_load7_nAgg=512_R.png} \includegraphics[width=0.45\textwidth]{3-ward/3ward_load7_nAgg=512_L.png} \caption{(LEFT) Ratio of approximate values compared to the convex combination of aggregate values for the constructed grid. (RIGHT) Bellman residual of the function approximation, as a percentage of the approximate value function. \label{fig:agg-3ward}} \end{figure} The optimality gap $\frac{|V^{\widetilde{\pi}}-V^*|}{V^*}$ of \textsc{MoMa}~is reported in Figure \ref{fig:agg-3ward} (LEFT). To put this in context, the mean optimality gap of 0.92\% from \textsc{MoMa}~API is comparable to the mean of 1.1\% in \cite{braverman2018taylor}, while the max of 2.97\% compares quite favorably to the max of 20.6\% quoted there. We repeat this experiment with a load of 0.8 and observe similar performance; see Table \ref{table:3ward_opt}. Useful for larger instances, where exact PI is no longer possible, is to consider the Bellman residual---a common proxy for optimality. Given a candidate policy $\widetilde \pi$ and its approximate value $\widetilde V$, the Bellman residual is $\lVert T^{\widetilde \pi} \widetilde{V} - \widetilde{V} \rVert$. Precise bounds---that relate the Bellman residual to the optimality gap---are developed in \cite{antos2008learning,farahmand2010error}. In this instance the Bellman residuals, as a percentage of the approximate value, have a mean of 0.80\% and a max of 1.91\%; see Figure \ref{fig:agg-3ward} (RIGHT) and Table \ref{table:3ward_opt}.
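Computing the residual is a one-liner; a sketch (numpy), with the max-norm as the choice of $\lVert\cdot\rVert$:
\begin{verbatim}
import numpy as np

def bellman_residual(V_tilde, P_pi, c_pi, alpha):
    """Max-norm Bellman residual || T^pi V~ - V~ || of an approximate value;
    P_pi: (N,N) transitions under the candidate policy, c_pi: (N,) costs."""
    return np.max(np.abs(c_pi + alpha * P_pi @ V_tilde - V_tilde))
\end{verbatim}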
\begin{table}[h!] \centering \begin{tabular}{ |c| c| c| c| c| c| } \hline Load level & TAPI mean error & TAPI max error & \textsc{MoMa}~mean error & \textsc{MoMa}~max error & max BR \\ \hline 0.7 & 1.1\% & 20.6\% & 0.92\% & 2.97\% & 1.91\%\\ \hline 0.8 & 0.5\% & 9.6\% & 0.91\% & 3.58\% & 1.04\% \\ \hline \end{tabular} \caption{TAPI and \textsc{MoMa}~API error (percent optimality gap) comparison with two load levels, and maximum Bellman residual (BR) as a percent of the approximate value function. } \label{table:3ward_opt} \end{table}

We use the Bellman residual to experiment with a 4-ward instance where our memory resources no longer allow for the computation of the exact policy. Here we consider $N_1 = 2, N_2 = 3, N_3 = 1, N_4 = 2$ and a queue capacity of 12 each. Note that the dimensions are no longer equal in size. Here the state space size is $N=50400$, which translates to the inversion of a matrix with 2.5 billion elements in full evaluation. Using $\frak{s} = 0.45$ we have $L=1512$.

\begin{table}[h!] \centering \begin{tabular}{| c |c| c| c| c| c| c| c| c| c| c| } \hline & $\lambda_i $ & $p_i$ & $H_i $ & $B_{i1}$ & $B_{i2}$ & $B_{i3}$ & $B_{i4}$ & $N_i$ & $\ell_i$ & $u_i$\\ \hline i= 1 & 0.32 & 0.2 & 10 & / & 5 & 2 & 1 & 2 & 0 & 14 \\ \hline i= 2 & 1.68 & 0.7 & 2 & 7 & / & 1 & 2 & 3 & 0 & 15 \\ \hline i= 3 & 0.4 & 0.5 & 6 & 7 & 9 & / & 3 & 1 & 0 & 13 \\ \hline i= 4 & 0.48 & 0.3 & 6 & 1 & 2 & 3 & / & 2 & 0 & 14 \\ \hline \end{tabular} \caption{Parameter setting for the 4-ward instance of the hospital routing problem. } \label{table:params_hospital_4ward} \end{table}

\textsc{MoMa}~API converged in 5 iterations, taking a total of 5 hours. Even with sparse representations, our memory resources do not allow for exact policy evaluation. However, to get a sense of the time scale (and hence a benchmark for comparison), we point out that a single full update took 14.6 hours, 17 times as much as the aggregate policy update in \textsc{MoMa}. The Bellman residual of the resulting policy is shown in Figure \ref{fig:agg-4ward}, with a mean of 0.32\% and a max of 1.13\% relative to the approximate value function, smaller even than in the 3-ward instance, a positive indication of the quality of the approximate policy.

\begin{figure} \centering \includegraphics[width=0.495\textwidth]{3-ward/4ward_BE.png} \caption{ Bellman residual as a percentage of the approximate value function in the 4-ward instance. \label{fig:agg-4ward}} \end{figure}

\section{Accuracy guarantees\label{sec:guarantees} }

We build on, and expand upon, the results of \cite{braverman2018taylor}. There, the moments $\mu$ and $\sigma^2$ are derived directly from $P$ and the focus is on the accuracy and optimality gaps that arise from using the continuous state-space PDE to approximate the Bellman equation. Inherent to this is that the error terms depend on (informally speaking) the third derivative of the solution $\widehat{V}$ of the differential equation. Here we must adjust the bounds to account for the mismatch of the second moment between $\widetilde{P}$ and $P$. When comparing $\widetilde{V}$ to $\widehat{V}$, the accuracy will depend, as well, on bounds on the second derivative, which multiplies this mismatch.

Because the guarantees depend on a PDE, some of the notation and language of that literature is unavoidable. The final result in Theorem \ref{thm:guaranteemain} can, however, be read without familiarity with PDE language and key results. To simplify the exposition we assume that the state space of $P$ is unbounded and equal to all of $\mathbb{Z}^d$. In various applications the states cannot go negative. Such ``reflection'' at the boundary causes issues that we will ignore for the sake of exposition. These gaps, however, are easily filled by reference to \cite{braverman2018taylor}. Also, for computation the state space is often truncated, but this is of secondary importance. We will assume that truncation is done at large enough values to have only a minimal effect. This is formalized further below.
The PDE ``induced'' by the equation $0=TV-V$ is given, recall, by \be 0 = c(x) + \alpha \mu(x)'DV(x)+\alpha\frac{1}{2}trace(\sigma^2(x)'D^2V(x))-(1-\alpha)V(x).\label{eq:PDE}\ee The state space for this PDE is all of $\mathbb{R}^d$. The moment functions $\mu(x)=\mathbb{E}_x[X_1-x]$ and $\sigma^2(x)=\mathbb{E}_x[(X_1-x)(X_1-x)^{\intercal}]$, and the cost $c(x)$, are defined only for $x\in\mathbb{Z}^d$. With some abuse of notation, $c(x)$, $\mu(x)$ and $\sigma^2(x)$ in the PDE are the extensions of these to $\mathbb{R}^d$. Assumption \ref{asum:primitives} below imposes conditions on these extensions.

{\em Any chain} on $\mathbb{Z}^d$ whose local first and second moments are given by the functions $\mu$ and $\sigma^2$ induces the same PDE. This PDE is defined relative to $\mu,\sigma^2$ and {\em not} relative to $\widetilde{\mu},\widetilde{\sigma}^2$. Under our construction in Algorithm \ref{alg:moma}, $\widetilde{\mu}=\mu$, but $\sigma^2\neq \widetilde{\sigma}^2$. We denote the PDE solution, when it exists, by $\widehat{V}$. The accuracy with which $\widehat{V}$ approximates the true value $V$ depends on smoothness properties of $\mu,\sigma^2$ as well as on the maximal jumps $|\Delta|_{\mathcal{S}}^*$.

For a function $f:\mathbb{R}^d\to \mathbb{R}$, a constant $\theta \in (0,1]$ and a set $\mathcal{B}\subseteq \mathbb{R}^d$, we write $$[f]_{\theta,\mathcal{B}}=\max_{x,y\in \mathcal{B}}\frac{|f(y)-f(x)|}{\|y-x\|^{\theta}}.$$ When $\theta=1$, this is the (local) Lipschitz constant over $\mathcal{B}$ and we drop the subscript $\theta$. We also remind the reader that $x\pm z$ is the set of points $\{y\in\mathbb{R}^d: \|y-x\|\leq z\}$.

\begin{assumption}[primitives] The primitives $\mu,\sigma^2$ and $c$ satisfy the following assumptions \begin{enumerate} \item $\mu$ is globally bounded and Lipschitz and $$(1-\alpha)^{-1/2}|\mu|_{\mathcal{B}_r}^*+(1-\alpha)^{-1}[\mu]_{\mathcal{B}_r}^*\leq \Gamma $$ \item $\sigma^2$ is globally bounded and Lipschitz with $$|\sigma^2|_{\mathcal{B}_r}^*+(1-\alpha)^{-1/2}[\sigma^2]_{\mathcal{B}_r}^*\leq \Gamma, $$ and satisfies the ellipticity condition: there exists $\lambda>0$ such that $$\lambda^{-1}\|\xi\|^2\geq \sum_{i,j}\xi_i\xi_j\sigma_{ij}(x)\geq \lambda \|\xi\|^2, \mbox{ for all } \xi,x \in\mathbb{R}^d.$$ \item The cost function $c$ is norm-like and three times differentiable with $$|D^ic|_{\mathcal{B}_r}^* \leq \Gamma(1+r^{k-i}),\mbox{ for } i=0,1,2.$$ \end{enumerate} \label{asum:primitives} \end{assumption}

The requirement on $c$ (specifically on $[c]$) is satisfied, for example, if $c(x)=\sum_{i=1}^d c_i(x_i)$ where $c_i(\cdot):\mathbb{R}\to\mathbb{R}$ is a polynomial of degree less than $k$. More importantly, the requirements---particularly that on $\mu$---specify a relationship between the drift and the discount factor. This is the relationship that introduces a ``central-limit-theorem-like'' behavior. In the most basic CLT setting, one considers $n$ random variables (a ``horizon'' of length $n$) and scales space by $\sqrt{n}$. Interpreting discounting as a random, exponentially distributed horizon, we observe on average $1/(1-\alpha)$ transitions. The requirement on $\mu$ means that the natural scale of the process fluctuations is $(1-\alpha)^{-1/2}$.
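For intuition, on the lattice the moment functions can be read directly off the kernel, before any extension is chosen; a minimal Python sketch (the dense kernel \texttt{P} and the state enumeration \texttt{states} are hypothetical inputs):
\begin{verbatim}
import numpy as np

def local_moments(P, states):
    """Local drift and covariance of a lattice chain.

    P: (N, N) transition matrix; states: (N, d) integer coordinates.
    Returns mu[x] = E_x[X_1 - x] and
    sigma2[x] = E_x[(X_1 - x)(X_1 - x)^T].
    """
    mu = P @ states - states                        # (N, d)
    diff = states[None, :, :] - states[:, None, :]  # diff[x, y] = y - x
    sigma2 = np.einsum('xy,xyi,xyj->xij', P, diff, diff)
    return mu, sigma2
\end{verbatim}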
To state the main result in this section, define $$V_{-\varepsilon}(x) = \mathbb{E}_x\left[ \sum_{t=0}^\infty \alpha^t \frac{|c(X_t)|}{(1+\|X_t\|)^{\varepsilon}}\right].$$

\begin{theorem}[approximation gap] \label{thm:guaranteemain} Suppose that $\widetilde{\Delta}_x\lesssim (1+(1-\alpha)^{\frac{1}{4}}\|x\|^{\frac{1-\varepsilon}{2}})$ and that Assumption \ref{asum:primitives} holds. Then, $$|V(x)-\widehat{V}(x)|\lesssim \left(\frac{1}{1-\alpha}\right)^{\frac{k+3}{2}} + \frac{1}{\sqrt{1-\alpha}}V_{-1}(x)+V_{-\varepsilon}(x),$$ and the same holds with $V$ replaced by $\widetilde{V}$ everywhere. Consequently, for any $\kappa\geq \frac{1}{k}(\frac{1}{2}+\varepsilon)-\frac{1}{2}$ and all $x:\|x\|\geq (1-\alpha)^{-(1+\kappa)}$, \be |\widetilde{V}(x)-V(x)|\lesssim \frac{1}{\sqrt{1-\alpha}}(V_{-1}(x)+\widetilde{V}_{-1}(x)+ V_{-\varepsilon}(x)+\widetilde{V}_{-\varepsilon}(x)).\ee The constants implicit in $\lesssim$ do not depend on $x$ or $\alpha$. \end{theorem}
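To illustrate the reach of the second bound, consider a worked instance (our reading of the theorem, for illustration only): take quadratic costs, $k=2$, and bounded jumps, so that $\varepsilon=1$ is admissible. Then the condition on $\kappa$ reads $$\kappa\geq \frac{1}{k}\Big(\frac{1}{2}+\varepsilon\Big)-\frac{1}{2}=\frac{1}{2}\cdot\frac{3}{2}-\frac{1}{2}=\frac{1}{4},$$ so the gap bound applies to all $x$ with $\|x\|\geq (1-\alpha)^{-5/4}$: informally, outside a ball whose radius grows as $\alpha\uparrow 1$, the approximation error is controlled by the discounted, norm-deflated values $V_{-1}$ and $V_{-\varepsilon}$.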
To prove this result we must study the PDE \eqref{eq:PDE} and how well its solution approximates $V$ and $\widetilde{V}$. The existence and uniqueness of the PDE solution is typically considered on a smooth bounded domain, and one must specify values (or derivative conditions) on the boundaries. To this end, let $$\mathcal{B}_r:=\{x\in\mathbb{R}^d: \|x\|<r\},~~ \tau(r) = \inf\{t\geq 0: X_t\notin \mathcal{B}_r\}. $$ We will effectively consider a family of PDEs with growing $r$ and establish bounds that do not depend on $r$; taking $r\uparrow \infty$ will produce Theorem \ref{thm:guaranteemain}. Given a radius $r$, $\mathcal{C}^{2,\theta}(\overline{\mathcal{B}_r})$ is the space of twice continuously differentiable functions $f:\overline{\mathcal{B}_r}\to \mathbb{R}$ whose second derivative is H\"{o}lder continuous with parameter $\theta$, i.e., $[D^2f]_{\theta,\overline{\mathcal{B}_r}}<\infty$. The next lemma follows from standard PDE results; see \cite[Theorem 6.14]{gilbarg2015elliptic}. Define $$\varrho:= \frac{1}{\sqrt{1-\alpha}}, ~~~\mathcal{B}_{\varrho}(x):=x\pm \varrho.$$

\begin{lemma}[PDE derivative estimates] Fix a radius $r$ and suppose that Assumption \ref{asum:primitives} holds. Then, for any $\theta\in (0,1)$ the PDE \eqref{eq:PDE}, with the boundary condition $\widehat{V}(x)=0,x\in \partial \mathcal{B}_{r}$, has a unique solution $u\in \mathcal{C}^{2,\theta}(\overline{\mathcal{B}_{r}})$. Furthermore, for all $x: x\pm \varrho \in \mathcal{B}_r$ \be \tag{2nd derivative} |D^2u|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^* \leq \Gamma \left( \frac{\|x\|^{k-1}}{\sqrt{1-\alpha}}+ \left(\frac{1}{1-\alpha}\right)^{\frac{k+1}{2}}\right).\label{eq:2ndDerBound}\ee The constant $k$ is as in Assumption \ref{asum:primitives} and $\Gamma$ does not depend on $\alpha,r,x$ but may depend on $\theta$. \label{lem:interior} \end{lemma} \vspace*{0.2cm}

The error in the approximation of the value is bounded by the ``integrated'' second derivative up to the stopping time plus the ``tail'' of the value. When one considers only initial states $x\in\mathcal{B}_r\subset\subset \mathcal{B}_{r^2}$, the latter is small.

\begin{lemma} Suppose that $\Delta_x,\widetilde{\Delta}_x\leq \Gamma(1+\sqrt{\|x\|})$ and that Assumption \ref{asum:primitives} holds. Then, fixing $r$ and letting $\widehat{V}$ be the solution over $\mathcal{B}_{r^2}$ and $\tau=\tau(r^2)$, we have $$|V(x)-\widehat{V}(x)|\lesssim \mathbb{E}_x\left[\sum_{t=0}^{\tau(r^2)}\alpha^t |D^2\widehat{V}|_{X_t\pm \Delta_{X_t}}^{\ast}\Delta_{X_t}^2\right]+\mathbb{E}_x\left[\sum_{t=\tau(r^2)+1}^{\infty}\alpha^t |c(X_t)|\right],~x\in\mathcal{B}_r.$$ Further, given $\epsilon>0$, we can choose $r_0$ sufficiently large such that $$\mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t |c(X_t)|\right]\leq \epsilon,~ x\in \mathcal{B}_{r_0} $$ and the same holds for $| \widetilde{V}(x)-\widehat{V}(x)|$ and $\widetilde{V}$ with $\mathbb{E},\Delta$ replaced by $\wtilde{\Ex},\widetilde{\Delta}$. \label{lem:intergratedDer} \end{lemma}

\noindent {\bf Proof of Theorem \ref{thm:guaranteemain}.} Since we can make $\epsilon$ arbitrarily small, we will simplify the exposition by dropping it from further calculations below. Plugging \eqref{eq:2ndDerBound} into Lemma \ref{lem:intergratedDer} we obtain \begin{align}\nonumber |V(x)-\widetilde{V}(x)| &\leq |V(x)-\widehat{V}(x)|+|\widetilde{V}(x)-\widehat{V}(x)|\\ &\lesssim \frac{1}{\sqrt{1-\alpha}}\mathbb{E}_x\left[\sum_{t=0}^{\tau}\alpha^t\|X_t\|^{k-1}\Delta_{X_t}^2\right]+ \frac{1}{\sqrt{1-\alpha}}\wtilde{\Ex}_x\left[\sum_{t=0}^{\tau}\alpha^t\|X_t\|^{k-1}\widetilde{\Delta}_{X_t}^2\right]\label{eq:decomp}\\ &\nonumber \quad +\left(\frac{1}{1-\alpha}\right)^{\frac{k+3}{2}}.\end{align} Because $\Delta_x\lesssim 1+ (1-\alpha)^{\frac{1}{4}}\|x\|^{\frac{1-\varepsilon}{2}}$, $\Delta_x^2\lesssim 1+ \sqrt{1-\alpha}\|x\|^{1-\varepsilon}$ so that \begin{align*} \frac{1}{\sqrt{1-\alpha}}\mathbb{E}_x\left[\sum_{t=0}^{\tau}\alpha^t\|X_t\|^{k-1}\Delta_{X_t}^2\right]&\leq \frac{1}{\sqrt{1-\alpha}}\mathbb{E}_x\left[\sum_{t=0}^{\tau}\alpha^t\|X_t\|^{k-1}\right]+ \mathbb{E}_x\left[\sum_{t=0}^{\tau}\alpha^t\|X_t\|^{k-\varepsilon}\right]\\&\lesssim \frac{1}{1-\alpha}+ \frac{1}{\sqrt{1-\alpha}}V_{-1}(x)+V_{-\varepsilon}(x),\end{align*} and the same holds for the sister chain.\footnote{Notice that for the focal chain, since jumps are bounded, we can take $\varepsilon=1$.} This proves the first assertion of the theorem.

To establish the second assertion, we will show that for any $\kappa\geq 0$ with $k(\frac{1}{2}+\kappa)-\frac{1}{2}-\varepsilon\geq 0$ and all $x:\|x\|\geq (1-\alpha)^{-(1+\kappa)}$, $$\left(\frac{1}{1-\alpha}\right)^{\frac{k+3}{2}}\lesssim V_{-\varepsilon}(x).$$ To see this, let $\bar{\Delta}=\sup_{x}\Delta_x$. Then, for all $t\leq t_0(\alpha)=\frac{1}{2\bar{\Delta} (1-\alpha)}$ and $x:\|x\|\geq (1-\alpha)^{-(1+\kappa)}$ $$\|X_t\|\geq \|x\|-\bar{\Delta}t_0\geq \frac{1/2}{(1-\alpha)^{1+\kappa}}.$$ In turn, with probability 1, $$\sum_{t=0}^{t_0}\alpha^t \frac{1+|c(X_t)|}{(1+\|X_t\|)^{\varepsilon}}\geq \sum_{t=0}^{t_0}\alpha^t \left(\frac{1/2}{1-\alpha}\right)^{k(1+\kappa)-\varepsilon}\geq \gamma \left(\frac{1/2}{1-\alpha}\right)^{k(1+\kappa)+1-\varepsilon}$$ for some $\gamma<1$ (that does not depend on $\alpha,x$). The last inequality follows from the fact that, as $\alpha\uparrow 1$, $$\sum_{t=t_0(\alpha)}^{\infty}\alpha^t = \alpha^{t_0(\alpha)}\sum_{t=0}^{\infty}\alpha^t = \frac{1}{1-\alpha}\alpha^{\frac{1}{2\bar{\Delta}(1-\alpha)}},$$ and noting that $\alpha^{\frac{1}{2\bar{\Delta}(1-\alpha)}}\rightarrow e^{-\frac{1}{2\bar{\Delta}}}<1$. Finally, $$\left(\frac{1}{1-\alpha}\right)^{\frac{k+3}{2}}\lesssim \left(\frac{1}{1-\alpha}\right)^{k(1+\kappa)+1-\varepsilon},$$ if $k(\frac{1}{2}+\kappa)-\frac{1}{2}-\varepsilon\geq 0$.
\hfill \Halmos\vspace*{0.4cm}

We conclude this section with a reference back to the \textsc{MoMa}~algorithm.

\begin{theorem}[optimality gap of \textsc{MoMa}~API] Consider a controlled chain on $\mathbb{Z}^d$. Let $\pi^*$ be the optimal policy and $\widetilde \pi$ be the \textsc{MoMa}~policy (produced by Algorithm \ref{alg:onestep}). Suppose that Assumption \ref{asum:primitives} holds for $c(x,\pi(x)),\mu_{\pi(x)}(x),\sigma^2_{\pi(x)}(x)$ for both $\pi \in \{ \widetilde{\pi}, \pi^*\}$. Then, $$V^{ \widetilde \pi}(x) - V^*(x) \lesssim V^*_{-\varepsilon}(x)+ V^{\widetilde{\pi}}_{-\varepsilon}(x).$$ \label{thm:aggOptGap} \end{theorem}

\noindent\textbf{Proof: } By Theorem \ref{thm:lifted}, the policy $\widetilde{\pi}$ produced by \textsc{MoMa}~API is optimal for the lifted chain. In particular, $\widetilde{V}^*=\widetilde{V}^{\widetilde{\pi}}\leq \widetilde{V}^{\pi^*}$. Then, by Theorem \ref{thm:guaranteemain}, \begin{align*} V^{\widetilde{\pi}}(x)-V^{\pi^*}(x)&\leq V^{\widetilde{\pi}}(x)-\widetilde{V}^{\widetilde{\pi}}(x)+\widetilde{V}^{\widetilde{\pi}}(x)-\widetilde{V}^{\pi^*}(x)+\widetilde{V}^{\pi^*}(x)-V^{\pi^*}(x)\\ & \leq V^{\widetilde{\pi}}(x)-\widetilde{V}^{\widetilde{\pi}}(x)+\widetilde{V}^{\pi^*}(x)-V^{\pi^*}(x)\\& \lesssim V^{\pi^*}_{-\varepsilon}(x)+ V^{\widetilde{\pi}}_{-\varepsilon}(x). \end{align*} In the second inequality we used $\widetilde{V}^{\widetilde{\pi}}(x)\leq \widetilde{V}^{\pi^*}(x)$ and in the third we applied (twice) the bounds from Theorem \ref{thm:guaranteemain}. \hfill \Halmos\vspace*{0.4cm}

\section{Concluding remarks\label{sec:concluding}}

What we offer here is an approach to ADP that achieves a synergy between a central-limit-theorem view of control (the matching of moments) and a well-established algorithmic building block (aggregation). Our paper brings algorithmic relevance to some theoretical ideas introduced in \cite{braverman2018taylor}, which itself builds on a long list of papers on CLT-based approximations.
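In code, the evaluation step that this synergy produces is compact. The following Python sketch is schematic (dense matrices and hypothetical inputs; it mirrors the construction described in this paper with $m=1$, but it is not the experimental code). The linear solve is $L\times L$ rather than $N\times N$, which is the source of the runtime savings reported in \S \ref{sec:numerical}.
\begin{verbatim}
import numpy as np

def moma_evaluate(P, c, alpha, U, G):
    """Approximate value via moment-matched aggregation.

    P: (N, N) kernel; c: (N,) costs; U: (L, N) binary aggregation;
    G: (N, L) row-stochastic disaggregation matching first moments.
    """
    L = U.shape[0]
    # aggregate value: R = (I - alpha U P G)^{-1} U c  (an L x L solve)
    R = np.linalg.solve(np.eye(L) - alpha * (U @ P @ G), U @ c)
    # lift back to the full state space
    return c + alpha * (P @ (G @ R))
\end{verbatim}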
The key here is the identification of aggregation as a stepping stone on which to build implementable algorithms that can be ``matched'' with the theory that approximates a discrete problem with a continuous one. The re-interpretation of aggregation as creating a new Markov chain on the original state space gives rise to a flexible infrastructure on which to superimpose moment matching. The approximation is grounded in math that informs a choice of the aggregation parameters that is consistent with optimality guarantees.

\noindent {\bf Next steps.} \begin{enumerate} \item {\bf m-step moment matching.} Our choice of $\bg U$, recall, is such that $$\sum_{z}(\bg U)_{yz}\, z = y = \mathbb{E}_y[X_0].$$ The second equality is trivial---the chain at time $t=0$ is at its initial state $y$---and serves to re-interpret what we did here. By choosing $\bg U$ to match the first moment at time ${\bf t=0}$, we guarantee that $\widetilde{P} = P\bg U$ matches the moment at time ${\bf t=1}$: $$\sum_{z} (P\bg U)_{yz}\, z = \mathbb{E}_y[X_1] = y+\mu(y).$$ More generally, if one chooses $\bg U$ to match the first moment at time $t=m-1$, i.e., $\sum_{z}(\bg U)_{yz}\, z = \mathbb{E}_y[X_{m-1}]$, then $\widetilde{P}$ matches the $m$-step moment: $$\sum_{z} (P\bg U)_{yz}\, z = \mathbb{E}_y[X_m].$$ Why might this be valuable? In the coarse grid approximation it is inevitable that the chain $\widetilde{P}$ has large jumps relative to $P$ itself. The coarser the grid, the larger the jumps of $\widetilde{P}$. This affects the second-moment mismatch which, in turn, affects the quality of the approximation. Because it lumps multiple transitions together, the $m$-step chain $P^m$ has larger jumps than $P$ that are possibly better aligned with those of $\widetilde{P}$. This, in some settings, may improve performance. To make this point more concrete, some initial developments are offered in \S \ref{sec:mstep} of the appendix.

\item {\bf Approximation-Estimation tradeoff and a hierarchy of models:} Consider a setting where $P$ is not known in advance but rather estimated based on observations. As transitions are performed, the estimate of the matrix $P$ improves. It seems intuitively reasonable that the approximate model, based as it is on a Taylor approximation, will induce smaller variance over finite samples but, because of the approximation, will have a larger bias. Given the uncertainty early in the horizon it may, nevertheless, make sense to use a coarser and computationally cheaper model and gradually transition---as more samples are collected---to a more accurate model. Within our framework, such a gradual transition is enabled naturally by the coarseness (spacing) exponent $\mathfrak{s}$.
The bias-variance view then offers a lens through which to explore the interaction of ADP approximation and parameter estimation, one that is natural within the Tayloring/moment-matching framework we put forth here.

\item {\bf Aggregation based RL.} Aggregation has already been espoused as an aid in simulation-based policy iteration in \cite{bertsekas2018feature}. Useful in our algorithm is the fact that the matrix $\bg$ does not depend on the transition probability matrix $P$ so that, in contrast to other architectures, it requires no updating within iterations. This may have useful implications for sample complexity. From an analysis perspective, the coarse grid approximation is appealing. Because the construction lends itself to a Markov chain (a lower-rank one) on a subset of states, one can build on existing studies; see e.g. \cite{haskell2016empirical}. \end{enumerate}

\paragraph{Acknowledgements} This work is supported by NSF grant CMMI-1662294. We thank Anton Braverman and Nathalie Vanvuchelen for generously sharing their dynamic programming codes for the examples in our \S \ref{sec:numerical}.

\input{main.bbl} \newpage \renewcommand*{\thesection}{A.\arabic{section}} \setcounter{section}{0} \section*{Appendix} \section{Proofs of Auxiliary Lemmas} \label{sec:proofs}

\noindent {\bf Proof of Lemma \ref{lem:tilde_operator_gap1}.} This is a special case of Lemma \ref{lem:opt_delta} obtained, trivially, by assuming that the action space $\mathcal{A}(x)$ contains a single action (so that $\pi^*=\widetilde{\pi}^*$). We refer the reader to that proof later in this appendix. \hfill \Halmos\vspace*{0.4cm}
\noindent {\bf Proof of Lemma \ref{lem:grid_growth}.} Recall that $\mathcal{S} = \times_{i=1}^d [\ell_i,u_i]$. Consider first the case that $\ell_i\geq 0$. Fix $\frak{s}\in [\frac{1}{3},\frac{1}{2})$ and define for $k\in\mathbb{Z}_+$ the recursion $f(0)=\ell_i$ and \be f(k+1) = \lceil f(k)+f^\frak{s}(k) \rceil + 1,\label{eq:frecursion}\ee that defines, recall \eqref{eq:f_recursion}, the grid points on the $i^{th}$ axis. The number $k_f^* = \inf \{ k: f(k) \geq u_i \}$ is then the number of grid points on this axis. We will bound this number by considering a continuous lower bound on $f$. Define for $x\geq 0$ the function $h(x) = ((1-\frak{s})x)^{\frac{1}{1-\frak{s}}} + \ell_i$. Its Taylor expansion has the form \[ h(k+1) = h(k) + ((1 -\frak{s}) k)^{\frac{\frak{s}}{1-\frak{s}}} \pm \frac{1}{2}\frak{s} ((1-\frak{s})k)^{\frac{2\frak{s}-1}{1-\frak{s}}}=h(k)+h^{\frak{s}}(k)\pm \frac{1}{2}\frak{s} ((1-\frak{s})k)^{\frac{2\frak{s}-1}{1-\frak{s}}}.\] Note that $2\frak{s}-1 < 0$ for $\frak{s} < \frac{1}{2}$, so we have $((1-\frak{s})k)^{\frac{2\frak{s}-1}{1-\frak{s}}} \leq 1$, and thus the last term is upper bounded by $\frac{1}{4}$. Combined with \eqref{eq:frecursion}, we then know that if there is a $k \in \mathbb{Z}_+$ such that $h(k) \leq f(k)$, we have $h(k') \leq f(k')$ for all $k' \in \mathbb{Z}_+$ with $k' \geq k$. Since $h(0)=f(0)=\ell_i$, it is clear that $h(k)\leq f(k)\mbox{ for all }k\in\mathbb{Z}_+.$ We argue the slightly stronger claim \be h(x) \leq f(\lfloor x \rfloor ) \mbox{ for $x \geq 2$.} \label{eq:hfclaim}\ee Let us use \eqref{eq:hfclaim} to complete the proof of the lemma. Defining now $k_h^* := \inf \{ k: h(k) = u_i \}$, we have $f(\lfloor k_h^* \rfloor) \geq h(k_h^*) = u_i $, thus $k_f^* \leq k_h^*$.
Since $h$ is a continuous increasing function, $h(k_h^*) = \ell_i+((1-\frak{s}) k_h^*)^{\frac{1}{1-\frak{s}}} = u_i$ so that $ k_h^* = (u_i - \ell_i)^{1- \frak{s}}/(1-\frak{s}),$ and, in turn, $$k_f^* \leq \frac{(u_i - \ell_i)^{1- \frak{s}}}{1-\frak{s}}.$$ Consider next the case that the $i^{th}$ axis has $u_i\leq 0$. Define $f_{-}(0)=u_i$ and recursively $f_{-}(-k) = \lfloor f_{-}(-(k-1))-(-f_{-}(-(k-1)))^\frak{s} \rfloor - 1$. It follows identically that $$k_{f_{-}}^* \leq \frac{(u_i - \ell_i)^{1- \frak{s}}}{1-\frak{s}}.$$ If $\ell_i\leq 0$ and $u_i\geq 0$, we treat the negative portion $[\ell_i, 0]$ and the positive portion $[0, u_i]$ separately to obtain that the number of grid points on the $i^{th}$ axis satisfies $k^*\leq \frac{u_i^{1- \frak{s}} + (-\ell_i)^{1- \frak{s}}}{1-\frak{s}}.$ In all cases then, the total number of grid points is upper bounded by \begin{align*} \prod_i \{ \mathrm{1}_{\{u_i > 0\}} \frac{u_i^{1 - \frak{s}}}{1-\frak{s}} + \mathrm{1}_{\{\ell_i < 0\}} \frac{(-\ell_i)^{1 - \frak{s}}}{1-\frak{s}} \} \leq \prod_i \frac{2(\frac{u_i-\ell_i+1}{2})^{1-\frak{s}}}{1- \frak{s}} \leq \prod_i \frac{\sqrt{2} r_i^{1-\frak{s}}}{1- \frak{s}} = \left(\frac{\sqrt{2}}{1- \frak{s}}\right)^d N^{1-\frak{s}}, \end{align*} where we used $N=\Pi_{i}r_i$ with $r_i=u_i-\ell_i+1$.

It remains only to prove \eqref{eq:hfclaim}. Because $h$ is decreasing in $\frak{s}$, the maximum value over $\frak{s}\in [\frac{1}{3},\frac{1}{2})$ is achieved at $\frak{s} = \frac{1}{3}$. For the basis of the induction, notice that if $\ell_i =0$, $f(1) = 1$ and $f(2) = 3$, then for $x\in [2,3)$, \[ h(x) = ( (1-\frak{s})x )^ {\frac{1}{1-\frak{s}}} \leq 2^{\frac{3}{2}} < 3 = f(\lfloor x \rfloor). \] If $\ell_i \geq 1$, $f(1) \geq \lceil \ell_i + \ell_i^\frak{s} \rceil +1\geq \ell_i + 2$ and $f(2)\geq f(1)+2\geq \ell_i+4$, so that, again, \[ h(x) = ( (1-\frak{s})x )^ {\frac{1}{1-\frak{s}}} + \ell_i \leq 2^{\frac{3}{2}} + \ell_i < 4 + \ell_i \leq f(\lfloor x \rfloor). \] Suppose now that \eqref{eq:hfclaim} holds up to $k_1 \in \mathbb{Z}_+$, and for $x\in [k_1, k_1+1)$. Then, for such $x$, we have by the induction assumption that $h(x)\leq f(k_1)$ and, in turn, for $y=x+1\in [k_1+1,k_1+2)$ \[ h(y) = h(x) + h^{\frak{s}}(x) \pm \frac{1}{4} \leq f(k_1) + f^{\frak{s}}(k_1) \pm \frac{1}{4} < f(k_1+1)=f(\lfloor y\rfloor), \] and this completes the induction. \hfill \Halmos\vspace*{0.4cm}

\noindent {\bf Proof of Lemma \ref{lem:phi_construct}. } For fixed $y \in \mathcal{S}$ and $x_l$ in its enclosing box $\mathcal{B}^d$, we defined \[ g_{yl} = \Pi_{i=1}^d \left( \mathrm{1}\{[x_l]_i = \bar{s}_i \} \frac{y_i - \underline{s}_i}{\bar{s}_i-\underline{s}_i} + \mathrm{1}\{ [x_l]_i = \underline{s}_i\} \frac{\bar{s}_i - y_i}{\bar{s}_i-\underline{s}_i} \right) \] where $\bar{s}_i, \underline{s}_i$ are the upper and lower values along axis $i \in [d]$ for corners of the hyperbox. There are $2^d$ such corners. To simplify notation, let $$h_i(l)=\left\{\begin{array}{ll} 1 &\mbox{ if } [x_l]_i = \bar{s}_i,\\ 0&\mbox{ if } [x_l]_i = \underline{s}_i.\end{array}\right.$$ Also, set $w_i(y) = \frac{y_i - \underline{s}_i}{\bar{s}_i-\underline{s}_i}$, so that $0 \leq w_i(y) \leq 1$ and $w_i (y) \bar{s}_i + (1-w_i(y)) \underline{s}_i = y_i $. Finally, define $F_i(l) = h_i(l) w_i(y) + (1 - h_i(l)) (1-w_i(y)),$ so that $$g_{yl} = \Pi_{i=1}^d F_i(l).$$ Since $F_i(l) \in [0,1]$, also $g_{yl} \in [0,1]$. We will argue by induction that $\sum_{l\in \mathcal{B}^d}g_{yl}=1$. For $d=1$, $\sum_{l \in \mathcal{B}^d}g_{yl} = w_1(y) + (1-w_1(y)) = 1$.
Now suppose that the claim holds for $d'-1$ axes, i.e., $\sum_{l \in \mathcal{B}^{d'-1}} \Pi_{i=1}^{d'-1} F_i(l) = 1$. Suppose also, w.l.o.g., that the indices are ordered such that for $k=1, .., 2^{d'-1}$, $l_{2k-1}, l_{2k} \in \mathcal{B}^{d'}$ differ only on axis $d'$, specifically $h_{d'}(l_{2k-1})=1, h_{d'}(l_{2k})=0$. Then we have \begin{align*} \sum_{l \in \mathcal{B}^{d'}} g_{yl} = & \sum_{l \in \mathcal{B}^{d'}} \Pi_{i=1}^{d'} F_i(l)\\ = & w_{d'}(y) \Pi_{i=1}^{d'-1} F_i(l_1) + (1-w_{d'}(y)) \Pi_{i=1}^{d'-1} F_i(l_2) + ... + w_{d'}(y) \Pi_{i=1}^{d'-1} F_i(l_{2^{d'-1}}) + (1-w_{d'}(y)) \Pi_{i=1}^{d'-1} F_i(l_{2^{d'}})\\ = & w_{d'}(y) \sum_{k =1}^{2^{d'-1}} \Pi_{i=1}^{d'-1} F_i(l_{2k-1}) + (1-w_{d'}(y)) \sum_{k =1}^{2^{d'-1}} \Pi_{i=1}^{d'-1} F_i(l_{2k}) \\ =& \left( w_{d'}(y)+1-w_{d'}(y) \right) \sum_{l' \in \mathcal{B}^{d'-1}} \Pi_{i=1}^{d'-1} F_i(l') \\ = & 1 \end{align*} where the last equality is due to the inductive assumption, completing the induction.

Now we want to show $\sum_{l \in \mathcal{B}^d} g_{yl} x_l = y$, i.e., \[ \sum_{l \in \mathcal{B}^d} [x_l]_j \Pi_{i=1}^d \left( \mathrm{1}\{[x_l]_i = \bar{s}_i \} \frac{y_i - \underline{s}_i}{\bar{s}_i-\underline{s}_i} + \mathrm{1}\{ [x_l]_i = \underline{s}_i\} \frac{\bar{s}_i - y_i}{\bar{s}_i-\underline{s}_i} \right) = [y]_j\] for $j = 1, ..., d$. Notice that $[x_l]_j = h_j(l)\bar{s}_j + (1- h_j(l)) \underline{s}_j$, so when $d=1$, the quantity on the left-hand side simply evaluates to $LHS = \bar{s}_1 w_1(y) + \underline{s}_1 (1-w_1(y))$, satisfying the equality. For $d \geq 2$, \begin{align*} LHS = & \bar{s}_j w_j(y) \sum_{\mathclap{\substack{l \in \mathcal{B}^d\\ h_j(l) =1}}} \Pi_{i \neq j} F_i(l) + \underline{s}_j (1-w_j(y)) \sum_{\mathclap{\substack{l \in \mathcal{B}^d\\ h_j(l) =0}}} \Pi_{i \neq j} F_i(l) \\ = & \left( \bar{s}_j w_j(y) + \underline{s}_j (1-w_j(y)) \right) \sum_{l \in \mathcal{B}^{d-1} } g_{yl} \\ = & y_j \end{align*} using the first part. Since this is true for any $j = 1, ..., d$, this concludes the proof. \hfill \Halmos\vspace*{0.4cm}

\noindent {\bf Proof of Lemma \ref{lem:secondapprox}.} Fix $y\in\mathcal{S}$ and let $\hat{x}^1,\ldots,\hat{x}^{2^d}$ be the corners of the box $\mathcal{B}$ that contains $y$. The second order Taylor expansion of the function $W_2(z)=zz^{\intercal}$ has \begin{align*} W_2(\hat{x}^l) & = W_2(y)+ DW_2(y)'(\hat{x}^l-y)\pm \frac{1}{2}\Gamma \|\hat{x}^l-y\|^2, \end{align*} where we use the fact that $\|D^2 W_2\|\leq \Gamma(d)$. Taking the coordinate-wise smallest corner of the box (say this is the point $l_0$), every point $y$ in the box satisfies $y_i\in [\hat{x}^{l_0}_i,\hat{x}^{l_0}_i+\gridfn(|\hat{x}^{l_0}_i|)]$, so that $\|\hat{x}^l-y\|^2\leq \max_{i}\gridfn^2(|\hat{x}^{l_0}_i|) \leq \gridfn^2(\|y\|)$. Since, by construction, $\sum_{l}g_{yl}\hat{x}^l=y$, we have that $\sum_{l}g_{yl}(\hat{x}^l-y)=0$ and, in turn, that $$\bg W_2(y)=\sum_{l}g_{yl}W_2(\hat{x}^l) =W_2(y)\pm \Gamma \gridfn^2(\|y\|).$$ Finally, since $\max_{y:p_{xy}>0}\gridfn^2(\|y\|)\leq \Gamma \gridfn^2(\|x\|+\Delta_x)$, we conclude that $$P \bg W_2(x) = PW_2(x)\pm \Gamma \gridfn^2(\|x\|+\Delta_x),$$ as required. \hfill \Halmos\vspace*{0.4cm}
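The weights $g_{yl}$ analyzed in the last two proofs are straightforward to realize in code; a minimal Python sketch of the multilinear-interpolation weights over an enclosing box (a hypothetical helper, for illustration):
\begin{verbatim}
import numpy as np
from itertools import product

def box_weights(y, lo, hi):
    """Multilinear weights g_{yl} of y over its enclosing box.

    y, lo, hi: (d,) arrays with lo <= y <= hi componentwise.
    Returns the 2^d corners and nonnegative weights summing to one,
    with sum_l weights[l] * corners[l] == y.
    """
    w = (y - lo) / (hi - lo)                  # w_i(y) in [0, 1]
    corners, weights = [], []
    for h in product((0, 1), repeat=len(y)):  # h encodes a corner
        h = np.asarray(h)
        corners.append(np.where(h == 1, hi, lo))
        weights.append(np.prod(h * w + (1 - h) * (1 - w)))
    return np.asarray(corners), np.asarray(weights)
\end{verbatim}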
\noindent {\bf Proof of Theorem \ref{thm:backtodelta}.} Recall from \S \ref{sec:tayloring} that $\delta_f[P,\widetilde{P}] := |\delta_f[P,\widetilde{P}] (\cdot)|_{\mathcal{S}}^*$ where $$\delta_f[P,\widetilde{P}](x):= \lvert \wtilde{\Ex}_x[f(X_1)]-\mathbb{E}_x[f(X_1)]\rvert = \lvert \widetilde{P}{f}(x) - Pf(x)\rvert.$$ With the coarse grid scheme, it takes on the form \begin{align*} \delta_V[P,\widetilde{P}](x)&=|PV(x)-\widetilde{P} V(x)| =|\sum_{y}p_{xy}V(y)-\sum_{y}p_{xy}\sum_{l}g_{yl}V(x_l)|\\ & \leq \sum_{y}p_{xy}|V(y)-\sum_{l}g_{yl}V(x_l)|\end{align*} so that $\delta_V[P,\widetilde{P}]\leq \max_{x \in \mathcal{S}} \sum_{y}p_{xy}|V(y)-\sum_{l}g_{yl}V(x_l)|.$ Then, by Lemma \ref{lem:tilde_operator_gap1}, we have \begin{align*} |V-\widetilde{V}|_{\mathcal{S}}^*\leq & \max_{x \in \mathcal{S}} \frac{1}{1-\alpha}\sum_{y}p_{xy}\Big(|V(y)-\sum_{l}g_{yl}V(x_l)|+|\widetilde{V}(y)-\sum_{l}g_{yl}\widetilde{V}(x_l)|\Big) \\ \leq & \frac{1}{1-\alpha}\left(|V-GV|_{\mathcal{S}}^* +|\widetilde{V}-G\widetilde{V}|_{\mathcal{S}}^*\right). \end{align*} \hfill \Halmos\vspace*{0.4cm}

\noindent {\bf Proof of Lemma \ref{lem:interior}.} The existence and uniqueness of a solution $\widehat{V}\in \mathcal{C}^{2,\theta}(\overline{\mathcal{B}_r})$ to the Dirichlet problem follows from Assumption \ref{asum:primitives} and \cite[Theorem 6.14]{gilbarg2015elliptic}. Indeed, the assumed Lipschitz continuity of $\mu,\sigma^2$ implies, in particular, that they are both H\"{o}lder continuous for any $\theta\in(0,1)$ on any bounded set and, in particular, on $\mathcal{B}_r$. Let $u$ be this solution and let us write $$u(x) =\frac{c(x)}{1-\alpha} - \frac{f(x)}{1-\alpha},\mbox{ where } f:=c-(1-\alpha)u.$$ Since $c$ is twice continuously differentiable, $f$ inherits its smoothness from that of $u$ as established above. In particular, $$D^2 u = \frac{1}{1-\alpha}(D^2c - D^2 f).$$ Then, $$|D^2 u |_{\mathcal{B}_{\varrho}(x)}^* \leq \frac{1}{1-\alpha}\left(|D^2c|_{\mathcal{B}_{\varrho}(x)}^* + |D^2 f|_{\mathcal{B}_{\varrho}(x)}^*\right) \leq \frac{1}{1-\alpha}\Gamma \left( \|x\|^{k-2}+\left(\frac{1}{1-\alpha}\right)^{\frac{k-2}{2}}+ |D^2 f|_{\mathcal{B}_{\varrho}(x)}^*\right).$$ To complete the bounds we must bound $|D^2 f|_{\mathcal{B}_{\varrho}(x)}^*$. The function $f$, notice, solves the equation \be \frac{1}{2}trace(\sigma^2(y)D^2 f(y))+\mu(y)'Df(y)-(1-\alpha)f(y) = \frac{1}{2}trace(\sigma^2(y)D^2 c(y)) + \mu(y)' D c(y).\label{eq:fequation}\ee Bounds on the derivative of $f$ will now follow from general derivative estimates for PDEs. To be self-contained, we quote here \cite[Theorem 6.2]{gilbarg2015elliptic}. Also, following \cite[Page 61]{gilbarg2015elliptic}, for $\theta\in (0,1)$, a set $\mathcal{B}_r\subset \mathbb{R}^d$ and $f\in\mathcal{C}^{2,\theta}(\mathcal{B}_r)$, define $$|f|_{2,\theta,\mathcal{B}_r}^*:= \sup_{x\in\mathcal{B}_r}|f(x)|+\sup_{x\in\mathcal{B}_r} d_x\|Df(x)\| +\sup_{x\in\mathcal{B}_r} d_x^2\|D^2 f(x)\| + \sup_{x,y\in\mathcal{B}_r}d_{x,y}^{2+\theta}\frac{\|D^2f(x)-D^2f(y)\|}{\|y-x\|^{\theta}}, $$ where $d_x= dist(x,\partial \mathcal{B}_r)\leq r$ is the distance from $x$ to the boundary $\partial\mathcal{B}_r$ and $d_{x,y}=\min\{d_x,d_y\}$.
With some abuse of notation, let $$\mathcal{B}_{\varrho}(x):=\{y:\|y-x\|\leq \varrho\}=x\pm \varrho,\mbox{ and } \mathcal{B}_{\frac{\varrho}{2}}(x):=\{y:\|y-x\|\leq \varrho/2\}=x\pm \varrho/2.$$ Then, $|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\geq \sup_{y\in\mathcal{B}_{\varrho}(x)} d_y^2\|D^2 f(y)\|\geq \sup_{y\in\mathcal{B}_{\frac{\varrho}{2}}(x)} d_y^2\|D^2 f(y)\|.$ For all $y\in \mathcal{B}_{\frac{\varrho}{2}}(x)$, notice, $d_y\geq \varrho/2$ (the distance to the boundary is at least $\varrho/2$), so that $$|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\geq \sup_{y\in \mathcal{B}_{\frac{\varrho}{2}}(x)}d_y^2\|D^2 f(y)\|\geq \frac{\varrho^2}{4}\|D^2 f\|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^*,$$ and, thus, that $$|D^2 f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^*\leq \frac{4}{\varrho^2} |f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*.$$ In this way a bound on $|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*$ will produce the bound stated in Lemma \ref{lem:interior}.

\begin{theorem}[Theorem 6.2 in \cite{gilbarg2015elliptic}] Let $\Omega$ be an open subset of $\mathbb{R}^d$, and let $u\in\mathcal{C}^{2,\theta}(\Omega)$ be a bounded solution in $\Omega$ of the equation $$\frac{1}{2} trace(\sigma^2(y)'D^2u(y)) + \mu(y)'Du(y) -\beta(y)u(y) = g(y)$$ where $g$ is in $\mathcal{C}^{\theta}(\Omega)$ and there are positive constants $\lambda,\Lambda$ such that the coefficients satisfy $$ \lambda^{-1}\|\xi\|^2\geq \sum_{i,j}\xi_i\xi_j\sigma_{ij}(x)\geq \lambda \|\xi\|^2, \mbox{ for all } \xi,x \in\mathbb{R}^d, $$ and $$ |\sigma^2|^{(0)}_{0,\theta,\Omega},|\mu|^{(1)}_{0,\theta,\Omega}, |\beta|^{(2)}_{0,\theta,\Omega}\leq \Lambda. $$ Then, $$|u|^*_{2,\theta,\Omega}\leq C\left(|u|_{\Omega}^* + |g|^{(2)}_{0,\theta,\Omega}\right), $$ where $C=C(d,\theta,\lambda,\Lambda)$. \end{theorem}

In this theorem we take $$g(y):= \frac{1}{2}trace(\sigma^2(y)D^2c(y))+ \mu(y)'D c(y)\mbox{ and } \beta = 1-\alpha.$$ Per our observations, with $\varrho(x)\equiv (1-\alpha)^{-1/2}$, $|\sigma^2|^{(0)}_{0,\theta,\mathcal{B}_{\varrho}(x)},|\mu|^{(1)}_{0,\theta,\mathcal{B}_{\varrho}(x)},|\beta|^{(2)}_{0,\theta,\mathcal{B}_{\varrho}(x)}\leq \Gamma$ so we can take $\Lambda=\Gamma$ to conclude that\footnote{Here, we use the following simple fact: for a Lipschitz continuous function $f$, $$\sup_{x,y}d_{x,y}^{2+\theta}\frac{|f(y)-f(x)|}{\|y-x\|^{\theta}}\leq \varrho^{2+\theta}[f]_{\mathcal{B}_{\varrho}(x)}^*\varrho^{1-\theta}\leq \varrho[f]_{\mathcal{B}_{\varrho}(x)}.$$} \begin{align} |D^2 f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^* & \leq 4(1-\alpha) |f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\leq \Gamma (1-\alpha) \left(|f |_{\mathcal{B}_{\varrho}(x)}^* + |g|_{0,\theta,\mathcal{B}_{\varrho}(x)}^{(2)}\right) \nonumber \\ & = \Gamma\left((1-\alpha)|f |_{\mathcal{B}_{\varrho}(x)}^* + \sqrt{1-\alpha}|g|_{\mathcal{B}_{\varrho}(x)}^{*}+[g]_{\mathcal{B}_{\varrho}(x)}^{*}\right).
\label{eq:intermediatebound}\end{align} By our assumptions on $c$ and $\mu$, $$ |g|_{\mathcal{B}_{\varrho}(x)}^*\leq |\mu|_{\mathcal{B}_{\varrho}(x)}^*|D c|_{\mathcal{B}_{\varrho}(x)}^*+|\sigma^2|_{\mathcal{B}_{\varrho}(x)}^*|D^2 c|_{\mathcal{B}_{\varrho}(x)}^*\leq \Gamma\left(\sqrt{1-\alpha}\|x\|^{k-1}+\|x\|^{k-2}\right),$$ and \begin{align*} [g]_{\mathcal{B}_{\varrho}(x)}^*& \leq |\sigma^2|_{\mathcal{B}_{\varrho}(x)}^*[D^2c]_{\mathcal{B}_{\varrho}(x)}^*+[\sigma^2]_{\mathcal{B}_{\varrho}(x)}^*|D^2c|_{\mathcal{B}_{\varrho}(x)}^* +|\mu|_{\mathcal{B}_{\varrho}(x)}^*[D c]_{\mathcal{B}_{\varrho}(x)}^*+[\mu]_{\mathcal{B}_{\varrho}(x)}^*|D c|_{\mathcal{B}_{\varrho}(x)}^*\\ & \leq \Gamma \left((1-\alpha)\|x\|^{k-1} + \sqrt{1-\alpha} \|x\|^{k-2}+\|x\|^{k-3}\right),\end{align*} so that \be \sqrt{1-\alpha}|g|_{0,\mathcal{B}_{\varrho}(x)}^{*}+[g]_{0,\mathcal{B}_{\varrho}(x)}^{*} \leq \Gamma\left((1-\alpha)\|x\|^{k-1}+\sqrt{1-\alpha}\|x\|^{k-2}+\|x\| ^{k-3}\right). \label{eq:gbound} \ee It remains to bound $|f|_{\mathcal{B}_{\varrho}(x)}^*$ where, recall, $f=c-(1-\alpha)u$. We will do so directly by studying a related diffusion process. Specifically, consider the process \be \widehat{X}_i(t) = x_i+ \int_0^t \alpha \mu_i(\widehat{X}_s)ds + \sum_{j=1}^d\int_0^t \alpha \sigma_{ij}(\widehat{X}_s)dB_j(s),\label{eq:diffeq}\ee where $B_j(\cdot)$ is a standard Brownian motion. Our requirements in Assumption \ref{asum:primitives} guarantee the existence of this process as a strong solution of this stochastic differential equation; see e.g. \cite[Theorem 5.4]{klebaner2005introduction}. It is then a standard argument that the function \be u(x) = \mathbb{E}_x\left[\int_0^{\tau} e^{-(1-\alpha)s} c(\widehat{X}_s)ds\right],\label{eq:udiff}\ee where $\tau=\inf\{t\geq 0:\widehat{X}_t\in \partial \mathcal{B}_{r_0^2}\}$, is the solution to the PDE \eqref{eq:PDE} with the boundary condition $u(x)=0$ when $x\in\partial\mathcal{B}_{r_0^2}$. By Ito's formula \cite[Chapter 4]{klebaner2005introduction}, \begin{align*} c(\widehat{X}_s)&= c(x) + \sum_{i} \alpha\int_0^s\mu_i(\widehat{X}_u)c_i(\widehat{X}_u)du+ \alpha\frac{1}{2}\sum_{ij} \int_0^s\sigma_{ij}(\widehat{X}_u)c_{ij}(\widehat{X}_u)du + \sum_{ij}\int_0^s c_{i}(\widehat{X}_u)\sigma_{ij}(\widehat{X}_u)dB_j(u).\end{align*} Using the boundedness of $\mu$ and $\sigma^2$, it is easily proved that, for all $x\in\mathcal{B}_{r_0}$ (recall that \eqref{eq:udiff} is defined on the larger ball $\mathcal{B}_{r_0^2}$), $c(x)\mathbb{E}_x\left[\int_{\tau}^{\infty}e^{-(1-\alpha)s}ds\right]\leq \epsilon$.
Thus, we conclude that $$\left|u(x)-\frac{c(x)}{1-\alpha}\right| = \left|\mathbb{E}_x\left[ \int_0^{\tau} e^{-(1-\alpha)s} c(\widehat{X}_s)ds\right] -\frac{c(x)}{1-\alpha} \right|\leq \epsilon + \int_0^{\infty}e^{-(1-\alpha)s}\left( \mathbb{A}(s)+\mathbb{B}(s)+\mathbb{C}(s)\right)ds,$$ where \begin{align*} \mathbb{A}(s)&:=\mathbb{E}_x\left[\sum_{i}\left|\int_0^s \mu_i(\widehat{X}_u)c_i(\widehat{X}_u)du\right|\right],\\ \mathbb{B}(s) & := \mathbb{E}_x\left[ \sum_{i,j} \left|\int_0^s\sigma_{ij}^2(\widehat{X}_u)c_{ij}(\widehat{X}_u)du\right|\right], \\ \mathbb{C}(s)& :=\mathbb{E}_x\left[\sum_{ij}\left|\int_0^s c_{i}(\widehat{X}_u)\sigma_{ij}(\widehat{X}_u)dB_j(u)\right|\right].\end{align*} Since $|D^ic(x)|\leq \Gamma(1+\|x\|^{k-i})$ for $i=0,1,2$ and since $|\mu|,\sigma^2$ are globally bounded, we have \begin{align*} \mathbb{A}(s)\leq \Gamma \left(s+ \sum_{i} \mathbb{E}_x\left[\int_0^s|\mu|_{\mathcal{B}_{r_0^2}}^*\|\widehat{X}_u\|^{k-1}du\right]\right),~ \mathbb{B}(s)\leq \Gamma\left(s+\sum_{ij} \mathbb{E}_x\left[ \int_0^s|\sigma^2|_{\mathcal{B}_{r_0^2}}^*\|\widehat{X}_u\|^{k-2}du\right] \right), \end{align*} and $$ \mathbb{C}(s) \leq \Gamma\left(s+ \sqrt{\mathbb{E}_x\left[\int_0^s(|\sigma|_{\mathcal{B}_{r_0^2}}^*)^2\|\widehat{X}_u\|^{2(k-1)}du\right]}\right).$$ This last bound follows, again, from a standard result on Brownian integrals \cite[Theorem 4.3]{klebaner2005introduction}. From \eqref{eq:diffeq} and the global boundedness of $\mu$ and $\sigma^2$ we have, for any $l\in\mathbb{Z}_+$ and $t$, that \begin{align*} \mathbb{E}_x[\|\widehat{X}(t)\|^l]&\leq \Gamma\left( \|x\|^l+\sum_{i}\mathbb{E}_x\left[\int_0^t |\mu_i(\widehat{X}_{u})|du\right]^l +\sum_{ij}\mathbb{E}_x\left[\Big|\int_0^t \sigma_{ij}(\widehat{X}_u)dB_j(u)\Big|^l\right]\right)\\ & \leq \Gamma(\|x\|^{l} + (1-\alpha)^{\frac{l}{2}}t^l + t^{l/2}),\end{align*} where we use our assumption that $|\mu|_{\mathcal{B}_{r_0^2}}^*\leq \Gamma \sqrt{1-\alpha}$. Thus, $$\mathbb{A}(s)\leq \Gamma\int_0^s (1+\sqrt{1-\alpha}\,\mathbb{E}_x[\|\widehat{X}_u\|^{k-1}])du\leq \Gamma(1+ \sqrt{1-\alpha}(s\|x\|^{k-1} + (1-\alpha)^{\frac{k-1}{2}} s^{k} + s^{\frac{k+1}{2}})).$$ We can repeat the same for $\mathbb{B}(s),\mathbb{C}(s)$. Multiplying by $(1-\alpha)$ we conclude \begin{align} |c-(1-\alpha)u|&=(1-\alpha)\left|\frac{c}{1-\alpha}-u\right|\nonumber\\&\leq (1-\alpha)\epsilon + (1-\alpha)\int_0^{\infty}e^{-(1-\alpha)s}(\mathbb{A}(s)+\mathbb{B}(s)+\mathbb{C}(s))ds\nonumber \\& \leq \Gamma \left (\frac{\|x\|^{k-1}}{\sqrt{1-\alpha}} + \left(\frac{1}{1-\alpha}\right)^{\frac{k+1}{2}}\right).\label{eq:ubound}\end{align} We then have that \be (1-\alpha)|f|_{\mathcal{B}_{\varrho}(x)}^* \leq \Gamma\left(\sqrt{1-\alpha}\|x\|^{k-1}+ \left(\frac{1}{1-\alpha}\right)^{\frac{k-1}{2}}\right).\label{eq:fbound} \ee Combining this with \eqref{eq:gbound} we have that $$ \frac{|D^2f|_{\mathcal{B}_{\varrho}(x)}^*}{1-\alpha} \leq \Gamma\left(\frac{\|x\|^{k-1}}{\sqrt{1-\alpha}}+\left(\frac{1}{1-\alpha}\right)^{\frac{k+1}{2}}\right).$$ Finally, recalling $\|D^2c\|\leq \Gamma(1+\|x\|^{k-2}),$ we also have \begin{align*} |D^2u|_{\mathcal{B}_{\varrho}(x)}^*& \leq \frac{|D^2 c|_{\mathcal{B}_{\varrho}(x)}^*}{1-\alpha}+ \frac{|D^2 f|_{\mathcal{B}_{\varrho}(x)}^*}{1-\alpha} \\& \leq \Gamma\left(\frac{\|x\|^{k-1}}{\sqrt{1-\alpha}}+\frac{\|x\|^{k-2}}{1-\alpha}+\left(\frac{1}{1-\alpha}\right)^{\frac{k+1}{2}}\right)\\ & \leq \Gamma\left(\frac{\|x\|^{k-1}}{\sqrt{1-\alpha}}+\left(\frac{1}{1-\alpha}\right)^{\frac{k+1}{2}}\right),\end{align*} as stated.
\hfill \Halmos\vspace*{0.4cm}

\noindent {\bf Proof of Lemma \ref{lem:intergratedDer}.} The proof of the first part is a simpler version of that of \cite[Theorem 1]{braverman2018taylor} and we refer the reader there. We turn to the second part. By the definition of $\Delta_x$, $\|X_{t+1}\|\leq \|X_t\|+\Delta_{X_t}\leq \Gamma(1+\|X_t\|+\sqrt{\|X_t\|}).$ In particular, given $\kappa$, there exists $m(\kappa)$ such that if $\|X_t\|\geq m(\kappa)$, then $\|X_{t+1}\|\leq (1+\kappa)\|X_t\|.$ Overall, \begin{equation} \|X_{t+1}\|\leq \max\{(1+\kappa)\|X_t\|\mathbbm{1}_{\{\|X_t\| \geq m(\kappa)\}},2m(\kappa)\}.\label{eq:jumps} \end{equation} By Assumption \ref{asum:primitives}, $|c(x)|\leq \Gamma (1+\|x\|^k)$ so that $|c(X_{t+1})| \lesssim 1+\max\{(1+\kappa)\|X_t\|\mathbbm{1}_{\{\|X_t\| \geq m(\kappa)\}},2m(\kappa)\}^k$. Thus, $$\mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t c(X_{t})\right] \lesssim \frac{\mathbb{E}_x[\alpha^{\tau(r_0^2)}]}{1-\alpha}+ \mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t ((1+\kappa)^t\|x\|)^k\right] .$$ Choosing $\kappa$ such that $\beta = \alpha (1+\kappa)^{k}<1$ we then have (notice that $\alpha<\beta$) $$\mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t c(X_{t})\right]\lesssim \frac{\mathbb{E}_x[\beta^{\tau(r_0^2)}]}{1-\alpha}.$$ Equation \eqref{eq:jumps} implies that, for $x\in \mathcal{B}_{r_0}$, $\|X_t\|\leq (1+\kappa)^t \|x\|\leq (1+\kappa)^t r_0$ with probability $1$. In turn, $\tau(r_0^2)=\inf\{t\geq 0:X_t\notin \mathcal{B}_{r_0^2}\}\geq \frac{\log(r_0)}{\log(1+\kappa)}$ with probability $1$, so that $\mathbb{E}_x[\beta^{\tau(r_0^2)}]\downarrow 0$ as $r_0\uparrow \infty$. Choosing $r_0$ large enough then concludes the proof. \hfill \Halmos\vspace*{0.4cm}

\noindent {\bf Proof of Lemma \ref{lem:opt_delta}. } The proof follows a standard argument; see e.g. \cite[Proposition 6.2]{bertsekasneuro}. Let $J_1(x) = V^{ \ast }(x) + \frac{\alpha}{1-\alpha}\delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}].$ \begin{align*} \widetilde T J_1(x) &= \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \alpha \sum_{y \in S} \widetilde p_{xy}^a J_1 (y) \right] \\ &\leq \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \alpha \sum_{y \in S} \widetilde p_{xy}^a \{ V^{ \ast }(y) + \frac{\alpha}{1-\alpha}\delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}] \} \right]\\ &= \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \frac{\alpha ^2}{1-\alpha}\delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}] +\alpha \sum_{y \in S} [p_{xy}^aV^{ \ast }(y) + \widetilde p_{xy}^aV^{ \ast }(y) - p_{xy}^aV^{ \ast }(y) ] \right] \\ &\leq c(x,\pi ^ \ast(x)) +\alpha P^{\pi ^ \ast}V^{ \ast }(x) + \alpha \mid \widetilde P^{ \pi ^ \ast}V^{ \ast }(x) - P^{\pi ^ \ast}V^{ \ast }(x) \mid + \frac{\alpha ^2}{1-\alpha} \delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}]\\ &\leq V^{ \ast }(x) + \frac{\alpha}{1-\alpha}\delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}] = J_1(x) \end{align*} The above shows that $ J_1(x) \geq \widetilde T J_1(x) $. Since $\widetilde T^m J_1(x) \to \widetilde V^*(x)$ as $m\uparrow\infty$, we have $J_1(x) \geq \widetilde V^*(x)$ by monotonicity.
Now repeat for the other side and let $J_2(x) = \widetilde V ^{\ast}(x) + \frac{\alpha}{1-\alpha}\delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}].$ \begin{align*} T J_2(x) &= \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \alpha \sum_{y \in S} p_{xy}^{a}J_2 (y) \right] \\ &= \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \alpha \sum_{y \in S} p_{xy}^a [ \widetilde V ^{\ast}(y) + \frac{\alpha }{1-\alpha}\delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}] ] \right] \\ &= \min_{a \in \mathrm{A}(x)} \left[ c(x,a) + \frac{\alpha ^2 }{1-\alpha}\delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}] +\alpha \sum_{y \in S} [ \widetilde p_{xy}^a\widetilde V ^{\ast}(y) + p_{xy}^a\widetilde V ^{\ast}(y) - \widetilde p_{xy}^a\widetilde V ^{\ast}(y) ] \right] \\ &\leq c(x, \widetilde \pi ^ \ast(x)) +\alpha \widetilde P^{\widetilde \pi ^ \ast}\widetilde V ^{\ast}(x) + \alpha \mid P^{\widetilde \pi ^ \ast} \widetilde V ^{\ast}(x) - \widetilde P^{\widetilde \pi ^ \ast}\widetilde V ^{\ast}(x) \mid + \frac{\alpha ^2 }{1-\alpha}\delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}]\\ & \leq \widetilde V ^{\ast}(x) + \frac{\alpha}{1-\alpha}\delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}]= J_2(x) \end{align*} So we have $ J_2(x) \geq T J_2(x)$. Since ${T}^m J_2(x) \to V ^{\ast}(x)$ as $m\uparrow\infty$, by monotonicity we have $ J_2(x) \geq V ^{\ast}(x)$. We conclude that $$\mid V^*(x) - \widetilde{V}^*(x) \mid \leq \frac{ \alpha}{1-\alpha} \left(\delta_{V^{\pi^*}}[P^{\pi^*}, \widetilde{P}^{\pi^*}] + \delta_{\widetilde{V}^{\widetilde{\pi}}}[P^{\widetilde{\pi}}, \widetilde{P}^{\widetilde{\pi}}]\right), $$ as stated. \hfill \Halmos\vspace*{0.4cm}
\section{m-step moment coupling \label{sec:mstep}}

This section is an informal complement to the first comment in the concluding remarks \S \ref{sec:concluding}. There, we argued that by choosing $\bg U$ to match the $(m-1)$-step first moment, i.e., $\sum_{z}(\bg U)_{yz}\, z = \mathbb{E}_y[X_{m-1}]$, the chain $\widetilde{P}$ matches the $m$-step moment: $$\sum_{z} (P\bg U)_{yz}\, z = \mathbb{E}_y[X_m].$$ A similar property holds for the second moment: if $\bg U$ matches the second moment $\mathbb{E}_y[X_{m-1}X_{m-1}^{\intercal}]$, then $\widetilde{P}$ matches the second moment at time $m$. We refer to this generalization as $m$-step coupling.
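Numerically, the first-moment property is easy to sanity-check: if some routine (hypothetical here) produces $\bg$ with $\sum_z(\bg U)_{yz}\, z=\mathbb{E}_y[X_{m-1}]$, then the first moment of the sister chain $P\bg U$ must agree with the $m$-step moment of $P$. A minimal Python sketch:
\begin{verbatim}
import numpy as np

def mstep_moment_gap(P, G, U, states, m):
    """Max first-moment gap between P G U and the m-step chain.

    P: (N, N); G: (N, L); U: (L, N); states: (N, d) coordinates.
    Assumes G was built so that (G U) @ states = E_y[X_{m-1}];
    returns ~0 (up to floating point) when the matching holds.
    """
    sister = P @ (G @ (U @ states))                # sister first moment
    focal = np.linalg.matrix_power(P, m) @ states  # E_y[X_m]
    return np.max(np.abs(sister - focal))
\end{verbatim}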
As an extension of \textsc{MoMa}, one expects that a sister chain based on $m$-step coupling approximates the value of the focal chain ``sampled'' every $m$ steps, namely that, given $\beta\in (0,1)$: $$V^m(x):=\mathbb{E}_x\left[\sum_{t=0}^{\infty}\beta^t c(X_{mt})\right]\approx\wtilde{\Ex}_x\left[\sum_{t=0}^{\infty}\beta^t c(X_{t})\right] = (I-\beta \widetilde{P})^{-1}c.$$ Since what we want to eventually approximate is the value $V(x)=\mathbb{E}_x[\sum_{t=0}^{\infty}\alpha^t c(X_t)]$ of the focal chain, the following simple relationship is useful.

\begin{lemma}\label{lem:secondstep} Consider a Markov reward process $(\mathcal{S},P,c,\alpha)$. Let $V^m(x):=\mathbb{E}_x\left[ \sum_{t=0}^{\infty}\alpha^{mt} c(X_{mt})\right]$ and $V(x)=\mathbb{E}_x\left[ \sum_{t=0}^{\infty}\alpha^{t} c(X_{t})\right]$. Then, $$V^m(x)=\frac{V(x)}{1+\sum_{k=1}^{m-1}\alpha^k}-\frac{1}{1+\sum_{k=1}^{m-1}\alpha^k}\left(\sum_{k=1}^{m-1}\alpha^k (\mathbb{E}_x[V^m(X_k)]-V^m(x))\right). $$ \end{lemma}

\noindent\textbf{Proof: } Notice that \begin{align*} V(x) & = \mathbb{E}\left[ \sum_{t=0}^{\infty}\alpha^t c(X_t)\right] \\ & = \mathbb{E}\left[\sum_{t=0}^{\infty}\alpha^{mt} c(X_{mt})\right] + \sum_{k=1}^{m-1} \alpha^k\mathbb{E}\left[ \mathbb{E}_{X_k}\left[\sum_{t=0}^{\infty} \alpha^{mt}c(X_{mt})\right]\rsb \\& = V^m(x)(1+\sum_{k=1}^{m-1}\alpha^k) + \sum_{k=1}^{m-1}\alpha^k (\mathbb{E}_x[V^m(X_k)]-V^m(x)). \end{align*} \hfill \Halmos\vspace*{0.4cm}

\begin{figure}[h] \includegraphics[scale=0.3]{Vbase2alpha08.pdf} \includegraphics[scale=0.3]{Vbase2alpha099.pdf} \caption{A numerical illustration of Lemma \ref{lem:secondstep} (LEFT) $\alpha=0.8$ (RIGHT) $\alpha=0.99$. The relative error is below 5\% in the former and below 1\% in the latter. \label{fig:secondstep}} \end{figure}

Supposing that the chain is ergodic, we would have that the first term approaches $(1-\alpha)V(x)$ as $m\uparrow \infty$ and then $\alpha\uparrow 1$, while the second shrinks to $0$. We heuristically then take the approximation $$V^m(x) \approx \frac{V(x)}{1+\sum_{k=1}^{m-1}\alpha^k}.$$ Since one expects, via moment matching, that the chain $\widetilde{P}$ that matches the $m$-step moment has $\widetilde{V}\approx V^m$ (notice that the discount factor for $\widetilde{V}$ is $\beta=\alpha^m$), we arrive at the approximation $$(1+\sum_{k=1}^{m-1}\alpha^k)\widetilde{V}_{\beta}(x) \approx (1+\sum_{k=1}^{m-1}\alpha^k)V^m(x)\approx V(x).$$

A computational implementation of $m$-step coupling is not straightforward. With $1$-step coupling, each $\mathbb{E}_y[X_0]=y$ can be expressed as a convex combination of its enclosing box corners. It is no longer clear that $\mathbb{E}_y[X_1]$ can be expressed as a convex combination of the values $\mathbb{E}_{x_l}[X_1]$ at the corner points $x_1,\ldots$. The grid has to be designed more carefully. Despite this difficulty, the following numerical example suggests that $m$-step coupling is a direction worth exploring.
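Lemma \ref{lem:secondstep} itself is easy to verify numerically on any small chain; a minimal Python sketch (on a random, hypothetical instance, and not the experiment behind Figure \ref{fig:secondstep}):
\begin{verbatim}
import numpy as np

def secondstep_identity_gap(P, c, alpha, m):
    """Check the lemma: V^m = (V - sum_k alpha^k (P^k V^m - V^m))/s."""
    N = len(c)
    I = np.eye(N)
    V = np.linalg.solve(I - alpha * P, c)       # value, all steps
    Pm = np.linalg.matrix_power(P, m)
    Vm = np.linalg.solve(I - alpha**m * Pm, c)  # value, every m-th step
    s = 1 + sum(alpha**k for k in range(1, m))
    corr = sum(alpha**k * (np.linalg.matrix_power(P, k) @ Vm - Vm)
               for k in range(1, m))
    return np.max(np.abs(Vm - (V - corr) / s))

# Example: a random 50-state chain with m = 2; the identity holds up
# to floating-point error.
rng = np.random.default_rng(0)
P = rng.random((50, 50)); P /= P.sum(axis=1, keepdims=True)
print(secondstep_identity_gap(P, rng.random(50), 0.9, 2))
\end{verbatim}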
Perform aggregation using this $\bg$ and discount factor $\alpha^m$ to get \be R = (I-\alpha^m U P\bg)^{-1}U c.\label{eq:Rapp}\ee We can then get the value of the sister chain $$\widetilde{V}_{\alpha^m}(x)=c(x)+\alpha^m P\bg R(x),$$ which is approximately $\frac{1}{1+\sum_{k=1}^{m-1}\alpha^k}$ of the true value $V$. The total procedure is an immediate analog of Algorithm \ref{alg:moma}. \begin{algorithm} \floatname{algorithm}{Algorithm}\caption{Value evaluation with \textsc{MoMa}~aggregation ($m$-step variant)} \renewcommand{\thealgorithm}{} \label{alg:eval2} \begin{algorithmic}[1] \Require Markov reward process $\mathcal{C}= \langle\mathcal{S}, P, c, \alpha\rangle$, spacing exponent $\frak{s}$. \Ensure Approximate value $\widetilde{V}$. \State Construct $\frak{s}$-spaced grid and binary $U$. \State {\em $m$-step moment matching}: Compute row-stochastic $\bg\geq0$ such that $\sum_{l \in \mathcal{B}} g_{xl} x_l = \mathbb{E}_x[X_{m-1}]$. \State Solve $R=(I-\alpha^m U P\bg)^{-1}U c$. \State Approximate value: $(1+\sum_{k=1}^{m-1}\alpha^k)\widetilde{V}(x)=(1+\sum_{k=1}^{m-1}\alpha^k) \{ c(x)+\alpha^m [P \bg R](x) \}$. \end{algorithmic} \end{algorithm} \fi \iffalse \begin{remark} {\em Consider a simple random walk on the lattice $([1,20]\cap \mathbb{Z})^2$ with zero mean at all interior points; on the boundary, the drift is $+1$ in each coordinate that sits on the lower boundary and $-1$ in each coordinate that sits on the upper boundary. For example, $\mu(1,1)=(1,1)$ (so that $x+\mu(x)=(2,2)$), $\mu(1,3)= (1,0)$, but $\mu(20,3)=(-1,0)$. The resulting scatter is in Figure \ref{fig:simplescatter} below, where it is evident that we can write all values $x+\mu(x)$ as convex combinations of the four corner values. \begin{figure}[h]\begin{center} \includegraphics[scale=0.4]{scattersimple.pdf} \end{center} \caption{The scatter $(W(x),x\in\mathcal{S})$ for a Markov chain on $\mathbb{Z}_+^2$. Here, we can write, for all $x$ in the state space, $PW_1(x)=x+\mu(x)$ as a convex combination of the four corner values, namely, those for $x=(1,1),(1,20),(20,1),(20,20)$ ($x+\mu(x)=(2,2),(2,19),(19,2),(19,19)$). \label{fig:simplescatter}} \end{figure} } \hfill \vrule height .9ex width .8ex depth -.1ex \end{remark} \fi \begin{example} {\em Consider a (non-absorbing) random walk on $\{1,\ldots,N\}$ with two-step coupling. We generate $\bg$ and $\tilde{P}$ for 2-step coupling, i.e., so that $\wtilde{\Ex}_x[X_1]=\mathbb{E}_x[X_2].$ The random walk is a simple one (i.e., it jumps up or down by one) with ``reflecting boundaries'' $P_{12}=P_{N,N-1}=1$. Otherwise $P_{i,i+1}\in [0.4,0.5]$; the actual value was chosen as a random number $P_{i,i+1}=0.5-0.1\cdot \mathrm{rand}()$. This chain then has a downward drift. We construct the grid based on spacing exponent $\frak{s}\in \{0.35,0.45\}$. Figure \ref{fig:RW2spacing} shows the growth of the second-moment (mis)match between $\widetilde{P}$ and $P^2$. Figure \ref{fig:RW2spacingvalue} displays the value comparison $(\tilde{V}-V_s)/V_s$, where $\tilde{V}=(I-\alpha \widetilde{P})^{-1}c$ and the scaled value $V_s=\frac{1}{1+\sqrt{\alpha}}(I-\sqrt{\alpha}P)^{-1}c$, for $c(i)=i^2$ and discount $\alpha=0.95$. It also displays the first moment matching (confirming that it is $0$, by design). The one-step construction shows inferior performance for small states $x$. This may be because the variance gap between $\tilde{P}$ and $P$ (under the one-step construction) is larger than that between $\tilde{P}$ and $P^2$ under the two-step construction; see Figure \ref{fig:variance}.
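The comparison reported in this example can be reproduced with a short script along the following lines (for illustration only: the $\frak{s}$-spaced grid below is a plausible reconstruction in which the gap near level $v$ grows roughly like $v^{\frak{s}}$, and it need not coincide with the construction used for the figures).
\begin{verbatim}
import numpy as np
from numpy.linalg import solve

rng = np.random.default_rng(1)
N, alpha, s = 200, 0.95, 0.35
P = np.zeros((N, N)); P[0, 1] = P[N-1, N-2] = 1.0
for i in range(1, N-1):
    up = 0.5 - 0.1*rng.random()            # downward drift, as in the example
    P[i, i+1], P[i, i-1] = up, 1.0 - up
x = np.arange(N, dtype=float)
c = (x + 1.0)**2                           # cost c(i) = i^2 on states 1..N

# a guessed s-spaced grid: the gap near level v grows like v^s
reps = [0]
while reps[-1] < N - 1:
    reps.append(min(N - 1, reps[-1] + max(1, round((reps[-1] + 1)**s))))
reps = np.array(reps); L = len(reps)
U = np.zeros((L, N)); U[np.arange(L), reps] = 1.0

target = P @ x                             # 2-step coupling: match E_y[X_1]
G = np.zeros((N, L))
for y in range(N):
    t = np.clip(target[y], reps[0], reps[-1])
    j = min(np.searchsorted(reps, t, side="right"), L - 1)
    w = (t - reps[j-1]) / (reps[j] - reps[j-1])
    G[y, j-1], G[y, j] = 1.0 - w, w
Pt = P @ G @ U                             # sister chain

Vt = solve(np.eye(N) - alpha*Pt, c)        # sister value, discount alpha
Vs = solve(np.eye(N) - np.sqrt(alpha)*P, c) / (1 + np.sqrt(alpha))
print(np.abs((Vt - Vs)/Vs).max())          # relative value error
\end{verbatim}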
Since $P^2$ has larger jumps than $P$, the two-step approximation ``suffers less'' from the coarse grid. Generally, we conjecture that the two-step approximation will be beneficial where the local variance is small, so that the gains from the better matching of the second moment overwhelm the approximation error in Lemma \ref{lem:secondstep}.} \label{example:RWspacing}\hfill \vrule height .9ex width .8ex depth -.1ex \end{example} \begin{figure} \includegraphics[scale=0.32]{RWtheta035.pdf}\includegraphics[scale=0.32]{RWtheta045.pdf} \caption{Second moment matching, $\widetilde{P}$ vs $P^2$, with state-dependent grid (LEFT) $\frak{s}=0.35$ (RIGHT) $\frak{s}=0.45$. The growth in the latter is larger but still sub-linear. The accuracy gap is less than $20\%$ near the origin but shrinks rapidly thereafter. \label{fig:RW2spacing}} \end{figure} \begin{figure} \includegraphics[scale=0.3]{RWtheta035value.pdf}\includegraphics[scale=0.3]{RWtheta045value.pdf}\\ \includegraphics[scale=0.3]{RWtheta035value0mean.pdf}\includegraphics[scale=0.3]{RWtheta035value1mean.pdf} \caption{Value comparison for Example \ref{example:RWspacing} (LEFT) $\frak{s}=0.35$ (RIGHT) $\frak{s}=0.45$ (BOTTOM) For $\frak{s}=0.35$ and $\mathcal{S} = \{1,\ldots,100\}$, with (BOTTOM LEFT) the zero-mean construction of $\bg U$ using 1-step coupling and (BOTTOM RIGHT) the 2-step construction. The latter shows better performance for small states. \label{fig:RW2spacingvalue}} \end{figure} \begin{figure} \includegraphics[scale=0.3]{RWtheta035variance1step.pdf}\includegraphics[scale=0.3]{RWtheta035variance2step.pdf}\caption{The difference in variance matching. The dashed line depicts the difference (mismatch) between $\tilde{P}W_2$ and, respectively, $PW_2$ (on the left) and $P^2W_2$ (on the right): (LEFT) 1-step coupling (RIGHT) 2-step coupling. \label{fig:variance}}\end{figure} \iffalse In the same spirit as Remark \ref{thm:backtodelta}, we make the following observation about the explicit form of $\delta$ in this setting: \begin{remark}[re-visiting $\delta$ for two-step coupling]\label{thm:backtodelta2} In two-step coupling we are taking $P^2$ against $\widetilde{P}$ and hence computing (here $V$ is the value of the two-step chain): $$\delta_V[P^2,\widetilde{P}](x) =|P^2V(x)-\widetilde{P} V(x)| \leq \sum_{y}p_{xy}|\sum_{z}p_{yz}V(z)-\sum_{l}g_{yl}V(x_l)|.$$ A similar bound in terms of the second derivative multiplied by the jump size follows here as well, using the fact that $\sum_{l}g_{yl}(x_l-y)=\mu(y)$. Pretending once again that $V$ has a twice differentiable extension, we have $\sum_{z}p_{yz}V(z)=V(y)+\mu(y)'DV(y)+\mathcal{O}(\Delta_y^2\|D^2V(y)\|)$. Since, under two-step coupling, $\sum_{l}g_{yl}(x_l-y)=\mu(y)$, we also have $\sum_{l}g_{yl}V(x_l)=V(y) + \mu(y)'DV(y)+\mathcal{O}(\Delta_y^2\|D^2V(y)\|)$. Combined, we have that $$\delta_V[P^2,\widetilde{P}](x)\lesssim \sum_{y}p_{xy}\Delta_y^2\|D^2V(y)\|.$$ \hfill \vrule height .9ex width .8ex depth -.1ex \end{remark} \fi \iffalse \subsubsection{Numerical results from hospital routing} \label{sec:numerics2} We examine the performance of two-step value evaluation on the same hospital routing example as described before. We use the same set of parameters as the $J=2$ instance: $N_1 = N_2 = 12$, $\lambda_1 = \lambda_2 = 3.5$, $p_1 = 0.25, p_2 = 0.35$, $H_1 = H_2 = 5$, $B_{12}=1, B_{21} = 10$, $\alpha = 0.99$.
Obtain $H$ using two-step moment coupling and compute $R_2=(I-\alpha^2 U P H)^{-1}U c$; the approximate value $(1+\alpha)\widetilde{V}[\alpha^2](x)=(1+\alpha)\{c(x)+\alpha^2PH R_2(x)\}$ is then compared to the exact value $V$ in Figure \ref{fig:Vs-same-lambda-2}. \begin{figure} [t!] \centering \includegraphics[width= 0.6 \textwidth]{2step_plots/Vs_2step.png} \includegraphics[width= 0.35 \textwidth]{2step_plots/ratioV_2step.png} \caption{Exact vs approximate values: against the state space in (LEFT TOP) and against the Euclidean distance to the origin in (LEFT BOTTOM). Ratio of approximate values compared to the exact values in (RIGHT). \label{fig:Vs-same-lambda-2}} \end{figure} Already, we can see that there is a more notable gap between the approximate and the exact values, confirmed by the maximal difference of more than 25\% from the exact value, compared to 16\% with one-step coupling. Recall that, to think towards control, we are interested in the increment $\mathbb{E}_x^a[V(X_1)]-V(x)$ for its role in $\pi^*(x) = \argmax_{a\in\mathrm{A}(x)}\{c(x,a)+\alpha(\mathbb{E}_x^a[V(X_1)]-V(x))\}$. Here we compare the incremental change $\mathbb{E}_x[V(X_1)]-V(x)$ against that of the two-step chain \begin{align} (1+\alpha) (\wtilde{\Ex}_x[\widetilde{V}[\alpha^2](X_1)]-\widetilde{V}[\alpha^2](x))&= (1+\alpha)(PH R_2(x)-\widetilde{V}[\alpha^2](x)). \tag{two-step} \end{align} This comparison and their differences are shown in Figure \ref{fig:inc-c-2}, which again displays more divergence compared to the one-step coupling results. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{2step_plots/inc_2d_2step.png} \includegraphics[width=0.48\textwidth]{2step_plots/inc_diff_2step.png} \caption{(LEFT) The incremental changes against the Euclidean distance to the origin, (RIGHT) Their difference against the state space. \label{fig:inc-c-2}} \end{figure} The second moment mismatch for the created sister chain compared to the two-step focal chain is displayed in Figure \ref{fig:gap2-2}. Figure \ref{fig:phi-col-2} provides the visualization for $H[:, 137]$. \begin{figure} \centering \includegraphics[width=0.53\textwidth]{2step_plots/gap2_2step.png} \caption{Second moment gaps between the sister and focal chain, plotted dimension-wise against the state space. \label{fig:gap2-2}} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{2step_plots/phi_2step.png} \caption{Plot of the aggregation probabilities from each state into one particular meta-state (with index 137), $H[:, 137]$. The set of representative states is highlighted in red. \label{fig:phi-col-2}} \end{figure} \subsection{Complexity} \paragraph{\bf Moment matching.} Now we have to solve for $\bg$ with a linear program; the complexity depends on the LP method used. We have $2^d$ variables and $d+1$ constraints per state. Some LP algorithms have performance guarantees and substantial empirical evidence. For the sake of concreteness, we fix our attention on Karmarkar's projective algorithm \cite{karmarkar1984new}. The algorithm requires $O((\#\mathrm{var})^{3.5} B)$ arithmetic operations, where $\#\mathrm{var}$ is the number of variables and $B = \log( 1 + |D_{\max}| ) + \log (1 + \zeta)$. In the expression, $\zeta = \max\{ |c_i|, |b_i| : i = 1, \ldots, \#\mathrm{var}\}$ is the maximal coefficient that appears in the objective or the constraints, and it is bounded by $\|W_1\|_{\infty}\leq r$. $D_{\max}$, meanwhile, is the maximum determinant of any square submatrix of the constraint matrix.
Each of our $n$ (one per state) LPs has a constraint matrix of dimension $(d+1)\times 2^d$. The determinant of a square ($(d+1)\times (d+1)$) matrix whose entries are bounded in absolute value by $r$ is at most $r^{d+1}(d+1)^\frac{d+1}{2}$; see \cite{rozanski2017more}. In turn, $$B=\log(1+|D_{\max}|)+\log(1+\zeta)=\mathcal{O}\left((d+1)\log((d+1)r)\right).$$ Multiplying the per-state cost by the number of states, the total complexity is $$\mathcal{O}\left(n 2^{3.5d}[(d+1)\log(d+1)+\log (r)]\right)=\mathcal{O}(r^d2^{3.5d}d\log(rd)).$$ Recall that this should be compared against the inversion gain of $\Omega(n^2\log n)=\Omega(r^{2d}d\log(r))$. Therefore, there are computational gains to the evaluation algorithm outlined in Algorithm \ref{alg:eval} for $\frak{s}\in [1/3,1/2)$ as long as $r\gg 12$. \subsubsection{Optimization} The natural approach one might attempt for transitioning into the control setting is analogous to the one-step method: use two-step moment coupling to obtain the aggregation mechanism, then perform aggregate PI. Recall however that, given the grid and $U$, $H$ is obtained from solving $\sum_{l \in \mathcal{M}} h_{xl} x_l = \mathbb{E}_x^{\pi}[X_1]$, where the expectation is now induced by some policy $\pi$ which changes in each iteration. We present in Algorithm \ref{alg:twostep} the version where $H$ is computed on the fly for controls that arise in the PI iteration. However, in contrast to Algorithm \ref{alg:onestep}, we have no guarantee at this point for the convergence of this algorithm. \begin{algorithm} \floatname{algorithm}{Algorithm}\caption{\textsc{MoMa}~API (two-step variant)\label{alg:twostep}} \renewcommand{\thealgorithm}{} \label{alg:resolve} \begin{algorithmic}[1] \Require $\mathcal{C}= \langle\mathcal{S}, \mathrm{A}, P, c, \alpha\rangle$, $\frak{s}$-spaced grid and associated $U$, initial policy $\pi^0$. \Ensure Approximate policy $\pi^*$. \State Set $\pi^0$ and $P^0:=P^{\pi^0}$ as the given initial control and the transition matrix it induces. \While{convergence criterion is not met} \State {\em Moment matching}: Compute $H^k=H^k(P^{\pi^k})$ with two-step coupling. \State {\em Policy Evaluation}: Compute $R^k=(I-\alpha^2 U P^kH^k)^{-1}U c$. \State {\em Policy Update}: \begin{align*} &\pi^{k+1}(x)\leftarrow \max_a\{c(x,a) + \alpha^2 ((P^k)^a H^k R^k)(x)\},\mbox{ for all } x\in\mathcal{S},\\& P^{k+1}\leftarrow P^{\pi^{k+1}}.\end{align*} \EndWhile \end{algorithmic} \end{algorithm} Alternatively we can adopt a {\bf ``pre-compute''} view. Suppose that for each state $x\in\mathcal{S}$ and action $a\in\mathrm{A}(x)$ we compute the vector $H^a_{x\cdot}$ that matches the moment $x+\mu_a(x)$. We can then speak of a controlled chain with the (family of) transition matrices $\tilde{P}^a=P^aH^aU$. Then, with some subtlety, the convergence of this variant of the algorithm follows mostly from standard PI. \begin{lemma} The pre-compute version of Algorithm \ref{alg:twostep} converges in a finite number of iterations. \label{lem:twostepconv}\end{lemma} In practice, it seems inefficient to pre-compute the vector $H^a_{x\cdot}$ for each possible action $a\in \mathrm{A}(x)$. This leads to the conclusion that the one-step approach is superior for its simplicity and ease of incorporation into optimization algorithms. As we have seen and argued, there are still certain settings where the two-step matching may produce better results.
In such cases, a reasonable {\em hybrid} solution is to run Algorithm \ref{alg:onestep} to convergence and then use the resulting policy as the initial policy for Algorithm \ref{alg:twostep}. We can stop running Algorithm \ref{alg:twostep} if it diverges from the estimate at the conclusion of Algorithm \ref{alg:onestep}. \subsubsection{Alternatives: non-binary $U$} As an added note, in combination with the two-step coupling method, one can alternatively construct $U$ to be non-binary. In particular, given the same $\frak{s}$-spaced grid and the representative states $\mathcal{M} = \{ x_l \}$ it induces, we can populate the $l^{th}$ row of $U$ with the row of $P$ corresponding to $x_l$. In other words, $U$ is the submatrix $\bar P$ of $P$ consisting of the rows indexed by the representative states. This structure allows for an interesting intuitive interpretation: the representative states themselves are treated as the meta-states, which then ``transition'' to the full state space as in a standard transition from these representative states. It further embeds the additional interpretation of looking for a convex combination of distributions via the weights of $\bg$. However, this scheme has clear disadvantages. The complexity and the additional dependence on the policy make analysis challenging. From a practical point of view, with a non-binary $U$, the aggregate PI can no longer utilize the computational advantage of the coarse-grid scheme from computing on $\mathcal{M}$ alone. Nevertheless, this construction combines well with the two-step method to produce favorable numerical results in the hospital routing example. See Figure \ref{fig:Vs-nonbinary}. \begin{figure} \centering \includegraphics[width= 0.495 \textwidth]{PinD/Vs_dist_grid=2_theta=0.35_rowP_0.png} \includegraphics[width= 0.495 \textwidth]{PinD/ratio_dist_grid=2_theta=0.35_rowP_0.png} \caption{(LEFT) Exact vs approximate values. (RIGHT) Ratio of approximate values compared to the exact values. \label{fig:Vs-nonbinary}} \end{figure} \fi \iffalse \noindent {\bf Proof of Lemmas \ref{lem:existence} and \ref{lem:interior}.} The existence and uniqueness of a solution $\widehat{V}\in \mathcal{C}^{2,1-\vartheta}(\overline{\mathcal{B}_r})$ to the Dirichlet problem follows from Assumption \ref{asum:primitives} and \cite[Theorem 6.14]{gilbarg2015elliptic}. The assumed Lipschitz continuity of $\mu,\sigma^2$ implies, in particular, that they are both H\"{o}lder continuous for any $\vartheta\in(0,1)$ on any bounded set and, in particular, on $\mathcal{B}$. Let $u$ be this solution and let us write $$u(x) =\frac{c(x)}{1-\alpha} - \frac{f}{1-\alpha},\mbox{ where } f:=c-(1-\alpha)u.$$ Then, since $c$ is assumed to be twice continuously differentiable, $f$ inherits its smoothness from that of $u$ as established above. In particular, $$D^2 u = \frac{1}{1-\alpha}(D^2c - D^2 f).$$ Then, $$|D^2 u |_{\mathcal{B}_r}^* \leq \frac{1}{1-\alpha}\left(|D^2c|_{\mathcal{B}_r}^* + |D^2 f|_{\mathcal{B}_r}^*\right) \leq \frac{1}{1-\alpha}\Gamma \left(1+r^{k-2}+ |D^2 f|_{\mathcal{B}_r}^*\right).$$ To complete the bounds we must bound $|D^2 f|_{\mathcal{B}_r}^*$. The function $f$, notice, solves the equation $$\frac{1}{2}trace(\sigma^2(y)D^2 f(y))+\mu(y)'Df(y)+(1-\alpha)f(y) = \frac{1}{2}trace(\sigma^2(y)D^2 c(y)) + \mu(y)'Dc(y).$$ Bounds on the derivative of $f$ will now follow from general derivative estimates for PDEs. To be self-contained, we quote here \cite[Theorem 6.2]{gilbarg2015elliptic}.
We draw the following implications of Assumption \ref{asum:primitives}: \textcolor{blue}{the following is a strengthening of these assumptions} \begin{enumerate} \item $\mu$ is globally bounded and Lipschitz and $$\sup_{r\geq 0}r|\mu|_{\mathcal{B}_r}^*+\sup_{r\geq 0}r^2[\mu]_{\mathcal{B}_r}^*\leq \Gamma. $$ \item $\sigma^2$ is globally bounded and Lipschitz and $$\sup_{r\geq 0}|\sigma^2|_{\mathcal{B}_r}^*+\sup_{r\geq 0}r[\sigma^2]_{\mathcal{B}_r}^*\leq \Gamma. $$ \item The cost function $c$ is norm-like and three times differentiable with $$|Dc|_{\mathcal{B}_r}^* \leq \Gamma(1+r^{k-1})\mbox{ and } |D^2c|_{\mathcal{B}_r}^* \leq \Gamma(1+r^{k-2}).$$ \end{enumerate} We draw the following immediate conclusions from these assumptions: \begin{itemize} \item[1)] $\mu,\sigma^2$ satisfy the following bounds for any $\theta\in (0,1)$ and any $r$: \begin{align*} |\sigma^2|^{(0)}_{0,\theta,\mathcal{B}_r} & := |\sigma^2|_{\mathcal{B}_r}^* + \sup_{x,y\in\mathcal{B}_r}d_{x,y}^{\theta}\frac{|\sigma^2(y)-\sigma^2(x)|}{\|y-x\|^{\theta}}\leq |\sigma^2|_{\mathcal{B}_r}^* + r^{\theta}[\sigma^2]_{\mathcal{B}_r}^*r^{1-\theta}\leq \Gamma,\\ |\mu|^{(1)}_{0,\theta,\mathcal{B}_r} &:= \sup_{x\in \mathcal{B}_r}d_{x}|\mu(x)| + \sup_{x,y\in\mathcal{B}_r}d_{x,y}^{1+\theta}\frac{|\mu(y)-\mu(x)|}{\|y-x\|}\leq \Gamma, \end{align*} where $d_x= \mathrm{dist}(x,\partial \mathcal{B}_r)\leq r$ is the distance from $x$ to the boundary of $\mathcal{B}_r$ and $d_{x,y}=\min\{d_x,d_y\}$. \end{itemize} Also \cite[Page 61]{gilbarg2015elliptic}, for $\theta\in (0,1)$, a set $\Omega\subset \mathbb{R}^d$, and $f\in\mathcal{C}^{2,\theta}(\Omega)$, $$|f|_{2,\theta,\Omega}^*:= \sup_{x\in\Omega}|f(x)|+\sup_{x\in\Omega} d_x\|Df(x)\| +\sup_{x\in\Omega} d_x^2\|D^2 f(x)\| + \sup_{x,y\in\Omega}d_{x,y}^{2+\theta}\frac{\|D^2f(x)-D^2f(y)\|}{\|y-x\|^{\theta}}. $$ Recalling $\mathcal{B}_{\varrho}(x):=\{y:\|y-x\|\leq \varrho\}=x\pm \varrho$, we have $|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\geq \sup_{y\in\mathcal{B}_{\varrho}(x)} d_y^2\|D^2 f(y)\|.$ For all $y\in \mathcal{B}_{\frac{\varrho}{2}}(x)= x\pm \varrho(x)/2$ we have that $d_y\geq \varrho(x)/2$, so that $$|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\geq \sup_{y\in \mathcal{B}_{\frac{\varrho}{2}}(x)}d_y^2\|D^2 f(y)\|\geq \frac{\varrho^2(x)}{4}|D^2 f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^*,$$ and, thus, that $$|D^2 f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^*\leq \frac{4}{\varrho^2(x)} |f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*.$$ In this way a bound on $|f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*$ will produce the bound stated in Lemma \ref{lem:interior}. \begin{theorem}[Theorem 6.2 in \cite{gilbarg2015elliptic}] Let $\Omega$ be an open subset of $\mathbb{R}^d$, and let $u\in\mathcal{C}^{2,\theta}(\Omega)$ be a bounded solution in $\Omega$ of the equation $$\frac{1}{2} trace(\sigma^2(y)'D^2u(y)) + \mu(y)'Du(y) -\beta(y)u(y) = g(y),$$ where $g$ is in $\mathcal{C}^{\theta}(\Omega)$ and there are positive constants $\lambda,\Lambda$ such that the coefficients satisfy \be \lambda^{-1}\|\xi\|^2\geq \sum_{i,j}\xi_i\xi_j\sigma_{ij}(x)\geq \lambda \|\xi\|^2, \mbox{ for all } \xi,x \in\mathbb{R}^d, \label{eq:appelip}\ee and \be |\sigma^2|^{(0)}_{0,\theta,\Omega},|\mu|^{(1)}_{0,\theta,\Omega}, |\beta|^{(2)}_{0,\theta,\Omega}\leq \Lambda. \label{eq:appcoeff}\ee Then, $$|u|^*_{2,\theta,\Omega}\leq C\left(|u|_{\Omega}^* + |g|^{(2)}_{0,\theta,\Omega}\right), $$ where $C=C(d,\theta,\lambda,\Lambda)$.
\end{theorem} In this theorem we take $$g(y):= \frac{1}{2}trace(\sigma^2(y)D^2c(y))+ \mu(y)'Dc(y)\mbox{ and } \beta = -(1-\alpha).$$ Per our observations, $|\sigma^2|^{(0)}_{0,\theta,\mathcal{B}_{\varrho}(x)},|\mu|^{(1)}_{0,\theta,\mathcal{B}_{\varrho}(x)}\leq \Gamma$ and $|\beta|^{(2)}_{0,\theta,\mathcal{B}_{\varrho}(x)}=(1-\alpha)\varrho^2(x).$ So we can take $$\Lambda=\Gamma(1+(1-\alpha)\varrho^2(x)),$$ to conclude that \begin{align} |D^2 f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^* & \leq \frac{4}{\varrho^2(x)} |f|_{2,\theta,\mathcal{B}_{\varrho}(x)}^*\leq \Gamma(1+(1-\alpha)\varrho^2(x)) \left(\frac{|f |_{\mathcal{B}_{\varrho}(x)}^*}{\varrho^2(x)} + \frac{1}{\varrho^2(x)}|g|_{0,\theta,\mathcal{B}_{\varrho}(x)}^{(2)}\right) \nonumber \\ & \leq \Gamma(1+(1-\alpha)\varrho^2(x))\left(\frac{|f |_{\mathcal{B}_{\varrho}(x)}^*}{\varrho^2(x)} + \frac{1}{\varrho(x)}|g|_{0,\mathcal{B}_{\varrho}(x)}^{*}+[g]_{0,\mathcal{B}_{\varrho}(x)}^{*}\right). \label{eq:intermediatebound}\end{align} It is useful here (as is clear from \cite[Theorem 6.2]{gilbarg2015elliptic}) that the bound depends linearly on $\Lambda$. By our assumptions on $c$ and $\mu$, $|\mu|_{\mathcal{B}_{\varrho}(x)}^*|Dc|_{\mathcal{B}_{\varrho}(x)}^*+|\sigma^2|_{\mathcal{B}_{\varrho}(x)}^*|D^2 c|_{\mathcal{B}_{\varrho}(x)}^*\leq \Gamma \|x\|^{k-2}$, so that $$ |g|_{\mathcal{B}_{\varrho}(x)}^*\leq \Gamma \|x\|^{k-2},~ [g]_{\mathcal{B}_{\varrho}(x)}^*\leq \Gamma \|x\|^{k-3},$$ and \be \left( 1+ (1-\alpha)\varrho^2(x)\right) \left(\frac{1}{\varrho(x)}|g|_{\mathcal{B}_{\varrho}(x)}^* + [g]_{\mathcal{B}_{\varrho}(x)}^*\right)\leq \Gamma(1-\alpha)\|x\|^{k-1}+ \Gamma \|x\|^{k-3}. \label{eq:gbound} \ee It remains to bound $|f|_{\mathcal{B}_{\varrho}(x)}^*$. Recall that $f=c-(1-\alpha)u$. We will do so directly by studying a related diffusion process. Specifically, consider the process \be \widehat{X}_i(t) = x_i+ \int_0^t \alpha \mu_i(\widehat{X}_s)ds + \sum_{j=1}^d\int_0^t \alpha \sigma_{ij}(\widehat{X}_s)dB_j(s),\label{eq:diffeq}\ee where $B_j(\cdot)$ is a standard Brownian motion. Our requirements in Assumption \ref{asum:primitives} guarantee the existence of this process as a strong solution of this stochastic differential equation. Then, the cost \be u(x) = \mathbb{E}_x\left[\int_0^{\infty} e^{-(1-\alpha)s} c(\widehat{X}_s)ds\right]=\int_0^{\infty} e^{-(1-\alpha)s} \mathbb{E}_x[c(\widehat{X}_s)]ds\label{eq:udiff}\ee solves the PDE with the boundary condition $u(x)=0$ when $x\in\partial\mathcal{B}_{r_0}$.
By Ito's formula, \begin{align*} \mathbb{E}_x[c(\widehat{X}_s)]&= c(x) + \sum_{i} \alpha\mathbb{E}_x\left[\int_0^s\mu_i(\widehat{X}_u)c_i(\widehat{X}_u)du\right]+ \alpha\frac{1}{2}\sum_{ij}\mathbb{E}_x\left[ \int_0^s\sigma_{ij}^2(\widehat{X}_u)c_{ij}(\widehat{X}_u)du\right],\end{align*} so that, since $|D^ic(x)|\leq \Gamma(1+\|x\|^{k-i})$ for $i=0,1,2$ and since $|\mu|,\sigma^2$ are globally bounded, we have \begin{align*} \mathbb{E}_x[c(\widehat{X}_s)]& = c(x) \pm \Gamma \left(s+ \sum_{i} \mathbb{E}_x\left[\int_0^s\|\mu(\widehat{X}_u)\|\|\widehat{X}_u\|^{k-1}du\right]+ \frac{1}{2}\sum_{ij}\mathbb{E}_x\left[ \int_0^s\|\sigma^2(\widehat{X}_u)\|\|\widehat{X}_u\|^{k-2}du\right]\right)\\ & =c(x) \pm \Gamma \int_0^s (1+\mathbb{E}_x[\|\widehat{X}_u\|^{k-2}])du.\end{align*} \textcolor{blue}{need to argue why Ito's formula applies here and why expectation and integration may be interchanged} A crude standard bound follows from \eqref{eq:diffeq} and the global boundedness of $\mu$ and $\sigma^2$: \begin{align*} \mathbb{E}[\|\widehat{X}_u\|^{k-2}]\leq \Gamma (\|x\|^{k-2}+ u^{k-2} + u^{\frac{k-2}{2}}),\end{align*} so that $$\int_0^s (1+\mathbb{E}_x[\|\widehat{X}_u\|^{k-2}])du\leq \Gamma(s\|x\|^{k-2} + s + s^{k-1} + s^{k/2}).$$ Plugging this back into \eqref{eq:udiff} and multiplying by $(1-\alpha)$ we conclude \be |c-(1-\alpha)u|\leq \Gamma \left (\frac{\|x\|^{k-2}}{1-\alpha} + \left(\frac{1}{1-\alpha}\right)^{k-1} + \left(\frac{1}{1-\alpha}\right)^{k/2}\right)\leq \Gamma\left (\frac{\|x\|^{k-2}}{1-\alpha} + \left(\frac{1}{1-\alpha}\right)^{k-1}\right),\label{eq:fbound}\ee where the last inequality assumes $k\geq 2$. We treat the case $k=1$ at the end of this proof. We then have that \be \Gamma\left(\frac{1}{\varrho^2(x)}+(1-\alpha)\right)|f|_{\mathcal{B}_{\varrho}(x)}^* \leq \Gamma\left(\|x\|^{k-2}+\left(\frac{1}{1-\alpha}\right)^{ k-2}+\frac{\|x\|^{k-4}}{1-\alpha}+ \frac{1}{\varrho^2(x)}\left(\frac{1}{1-\alpha}\right)^{ k-1}\right).\label{eq:fbound2} \ee Combining this with \eqref{eq:gbound} we have that $$ \frac{|D^2f|_{\mathcal{B}_{\frac{\varrho}{2}}(x)}^*}{1-\alpha} \leq \Gamma\left(\|x\|^{k-1}+\frac{\|x\|^{k-2}}{1-\alpha}+\frac{1}{\varrho^2(x)}\left(\frac{1}{1-\alpha}\right)^{k}+\left(\frac{1}{1-\alpha}\right)^{k-1}\right),$$ where we used the fact that $\|x\|^{k-4}/(1-\alpha)^2\leq \|x\|^{k-2}/(1-\alpha)+(1-\alpha)^{-(k-1)}.$ Recalling $\|D^2c\|\leq \Gamma(1+\|x\|^{k-2}),$ we also have \begin{align*} |D^2u|_{\mathcal{B}_{\varrho}(x)}^*& \leq \Gamma\left(1+\|x\|^{k-1}+\frac{\|x\|^{k-2}}{1-\alpha}+\frac{1}{\varrho^2(x)}\left(\frac{1}{1-\alpha}\right)^{k}+\left(\frac{1}{1-\alpha}\right)^{k-1}\right) \\& \leq \Gamma\left(\|x\|^{k-1}+\frac{1}{\varrho^2(x)}\left(\frac{1}{1-\alpha}\right)^{k}+\left(\frac{1}{1-\alpha}\right)^{k-1}\right),\end{align*} as stated. \textcolor{blue}{still need to complete the argument below. Need to use the assumption on $\mu$ and use the term that is the integral of the reciprocal. Then the error will be a constant.
That is, the process does not get far enough away for the cost to differ much from $c(x)$.} Finally, for $k=1$ we have that $$|f|= |c-(1-\alpha)u|\leq \Gamma \eta(x),$$ where $$\eta(x) = (1-\alpha)\mathbb{E}_x\left[\int_0^{\infty}e^{-(1-\alpha)s}\int_0^s \frac{1}{1+\|\widehat{X}_u\|}\,du\,ds\right].$$ We will prove that $\eta(x)$ is bounded by a constant for all $x$ with $\|x\|\geq (1-\alpha)^{-1}$ and that it is globally bounded by $1/(1-\alpha)$, so that $$\Gamma\left(\frac{1}{\varrho^2(x)}+(1-\alpha)\right)|f|_{\mathcal{B}_{\varrho}(x)}^* \leq \Gamma\eta(x)\left(1-\alpha+ \frac{1}{\varrho^2(x)}\right).$$ \textcolor{blue}{assuming that in this case the second derivative decreases with $x$ like $1/\|x\|$ (without the $1$ in the constant)} we have that \begin{align*} |D^2u|_{\mathcal{B}_{\varrho}(x)}^*& \leq \Gamma\eta(x)\left(1+\frac{1}{\varrho(x)}\frac{1}{1-\alpha}\right).\end{align*} Notice that for $\|x\|\geq (1-\alpha)^{-1}$ this is consistent with the general bound. \hfill \Halmos\vspace*{0.4cm} \noindent {\bf Proof of Lemma \ref{lem:truncation}. } By the definition of $\Delta_x$, $\|X_{t+1}\|\leq \|X_t\|+\Delta_x\leq \Gamma(1+\|X_t\|+\sqrt{\|X_t\|}).$ In particular, given $\kappa$, there exists $m(\kappa)$ such that if $\|X_t\|\geq m(\kappa)$, then $\|X_{t+1}\|\leq (1+\kappa)\|X_t\|.$ Overall, \begin{equation} \|X_{t+1}\|\leq \max\{(1+\kappa)\|X_t\|\mathbbm{1}_{\{\|X_t\|\geq m(\kappa)\}},2m(\kappa)\}.\label{eq:jumps} \end{equation} By Assumption \ref{asum:primitives}, $|c(x)|\leq \Gamma (1+\|x\|^k)$, so that $|c(X_{t+1})| \lesssim 1 + (\|X_t\|+\sqrt{\|X_{t}\|})^k$. Thus, $$\mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t c(X_{t})\right] \lesssim \frac{\mathbb{E}_x[\alpha^{\tau(r_0^2)}]}{1-\alpha}+ \mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t ((1+\kappa)^t\|x\|)^k\right] .$$ Choosing $\kappa$ such that $\beta = \alpha (1+\kappa)^{k}<1$ we then have (notice that $\alpha<\beta$) $$\mathbb{E}_x\left[\sum_{t=\tau(r_0^2)+1}^{\infty}\alpha^t c(X_{t})\right]\lesssim \frac{\mathbb{E}_x[\beta^{\tau(r_0^2)}]}{1-\alpha}.$$ Equation \eqref{eq:jumps} implies that, for $x\in \mathcal{B}_{r_0}$, $\|X_t\|\leq (1+\kappa)^t \|x\|\leq (1+\kappa)^t r_0$ with probability $1$. In turn, $\tau(r_0^2)=\inf\{t\geq 0:X_t\notin \mathcal{B}_{r_0^2}\}\geq \frac{\log(r_0)}{\log(1+\kappa)}$ with probability $1$, so that $\mathbb{E}_x[\beta^{\tau(r_0^2)}]\downarrow 0$ as $r_0\uparrow \infty$. Choosing $r_0$ large enough then concludes the proof. \hfill \Halmos\vspace*{0.4cm} \fi
\iffalse \FloatBarrier \newpage \subsection{Unused plots} One more comparison of interest is between the second moment of the induced sister chain and that of the focal chain. Recall that we only match the first moment explicitly, while providing a guarantee of bounded second-moment mismatch. This is confirmed in Figure \ref{fig:gap2}, where the maximum absolute mismatch is less than 4.0 even though the values go up to more than 1500. \begin{figure} \centering \includegraphics[width=0.53\textwidth]{unused/gap2_sub_n1849_nAgg196_2.png} \includegraphics[width=0.17\textwidth]{unused/heatmaps.png} \caption{Second moment gaps between the sister and focal chain, plotted dimension-wise against the state space in (LEFT), and as heat-maps in (RIGHT). We can see that the maximum absolute mismatch is less than 4.0, even as the value range goes above 1500. \label{fig:gap2}} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{aug_plots/remark_bound_nAgg=121_0.png} \caption{How well Remark 1 bounds the gap, dimension 2.} \label{fig:remark-bound} \end{figure} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{aug_plots/moment2_nAgg=121_1.png} \includegraphics[width=0.45\textwidth]{aug_plots/moment2_2d.png} \caption{Sister chain second moment and comparison on dimension 1, in 3d and 2d.} \label{fig:moment2-dim1} \end{figure} \fi \end{document}
\section{Introduction} Throughout this article, all rings are commutative, Noetherian, and with multiplicative identity. For rings containing a field of characteristic $p>0$, the seminal work of Hochster and Huneke on tight closure, and subsequent works of many others, has led to a systematic study of the so-called F-singularities. Roughly speaking, these are singularities that can be defined using the Frobenius endomorphism $F$: $R \to R$, which is the map that raises every element of $R$ to its $p$-th power. One of the most studied F-singularities is F-injectivity, which is defined in terms of injectivity of the natural Frobenius actions on the local cohomology modules $H^i_\mathfrak{m}(R)$. It was first introduced and studied by Fedder in \cite{Fedder83}. We say that a property $\mathcal{P}$ of local rings deforms if, whenever $(R,\mathfrak{m})$ is a local ring and $x \in \mathfrak{m}$ is a nonzerodivisor such that $R/(x)$ satisfies $\mathcal{P}$, then $R$ satisfies $\mathcal{P}$. While this deformation problem for other classical F-singularities has been settled \cite{Fedder83, HoHu, SinghFReg, SinghFPureFReg}, whether F-injectivity deforms or not in general is still an open question. Fedder proved that F-injectivity deforms when $R$ is Cohen--Macaulay \cite[Theorem 3.4]{Fedder83}, and Horiuchi, Miller, Shimomoto proved that F-injectivity deforms either if $R/(x)$ is F-split \cite[Theorem 4.13]{HMS}, or if $H^i_\mathfrak{m}(R/(x))$ has finite length for all $i \ne \dim(R)$ and $R/\mathfrak{m}$ is perfect \cite[Theorem 4.7]{HMS}. More recently, the second author and Pham \cite{MaQuy} extended some of these results by further relaxing the assumptions on $R/(x)$. In this paper, we consider secondary representations of the local cohomology modules $H^i_\mathfrak{m}(R)$ (see subsection 2.2 for definitions and basic properties of secondary representations of Artinian modules). It seems natural to ask how the Frobenius action on $H^i_\mathfrak{m}(R)$ interacts with a given secondary representation. Our first main result is that F-injectivity deforms when each local cohomology module $H^i_\mathfrak{m}(R)$ admits a secondary representation which is stable under the natural Frobenius action (see Definition \ref{Defn F-stable} for details). \begin{Theoremx}[Theorem \ref{mainThm1}] \label{THMX A} Let $(R,\mathfrak{m})$ be a $d$-dimensional local ring of characteristic $p>0$ and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. Suppose for each $i \ne d$, $H^i_\mathfrak{m}(R)$ has an F-stable secondary representation. If $R/(x)$ is F-injective, then $R$ is F-injective. \end{Theoremx} We prove that secondary components that correspond to minimal attached primes of $H^i_\mathfrak{m}(R)$ are always F-stable, see Lemma \ref{Lem.FrobArtinian}. As a consequence, F-injectivity deforms when the attached primes of $H_\mathfrak{m}^i(R)$ are all minimal, see Corollary \ref{Corollary minimal} for a slightly stronger statement. In particular, we obtain the following: \begin{Corollaryx}[Corollary \ref{CorSeqCM}] \label{CORX B} Let $(R,\mathfrak{m})$ be a $d$-dimensional sequentially Cohen--Macaulay local ring of characteristic $p>0$ and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. If $R/(x)$ is F-injective, then $R$ is F-injective. 
\end{Corollaryx} We can further relax our assumptions if either the residue field of $R$ is perfect, or if $R$ is $\mathbb{N}$-graded over a field, by only putting conditions on those secondary components of $H^i_\mathfrak{m}(R)$ whose attached primes are not equal to $\mathfrak{m}$. We refer to Definition \ref{Defn Fnot-stable} for the precise meaning of $\operatorname{F^\circ}$-stable secondary representations. \begin{Theoremx}[Theorem \ref{mainThm2} and Theorem \ref{mainThmGraded}] \label{THMX B} Let $(R,\mathfrak{m}, k)$ be a $d$-dimensional ring of characteristic $p>0$ that is either local with perfect residue field $k$ or $\mathbb{N}$-graded over a field $k$, and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$ (homogeneous in the graded case). Suppose for each $i\ne d$, $H^i_\mathfrak{m}(R)$ has an $\operatorname{F^\circ}$-stable secondary representation. If $R/(x)$ is F-injective, then $R$ is F-injective. \end{Theoremx} \subsection*{Acknowledgments} We thank Pham Hung Quy and Ilya Smirnov for several useful discussions on the topics of this article. \section{Preliminaries} \label{Section preliminaries} \subsection{Frobenius actions on local cohomology and F-injectivity} Let $R$ be a ring of characteristic $p>0$. A Frobenius action on an $R$-module $W$ is an additive map $F$: $W \to W$ such that $F(r\eta) = r^pF(\eta)$ for all $r \in R$ and $\eta \in W$. Let $I=(f_1,\dots, f_n)$ be an ideal of $R$. Then we have the \v{C}ech complex: $$ C^\bullet(f_1,\dots,f_n; R):= 0\to R\to \oplus_i R_{f_i}\to \cdots\to R_{f_1f_2\cdots f_n}\to 0. $$ Since the Frobenius endomorphism on $R$ induces the Frobenius endomorphism on all localizations of $R$, it induces a natural Frobenius action on $C^\bullet(f_1,\dots,f_n; R)$, and hence on each $H_I^i(R)$. In particular, there is a natural Frobenius action $F$: $H^i_\mathfrak{m}(R) \to H^i_\mathfrak{m}(R)$ on each local cohomology module of $R$ supported at a maximal ideal $\mathfrak{m}$. A local ring $(R,\mathfrak{m})$ is called F-injective if $F$: $H^i_\mathfrak{m}(R) \to H^i_\mathfrak{m}(R)$ is injective for all $i$. \subsection{Secondary representations} We recall some well-known facts on secondary representations that we will use throughout this article. For unexplained facts, or further details, we refer the reader to \cite[Section 7.2]{BrodmannSharp}. \begin{Definition} Let $R$ be a ring. An $R$-module $W$ is called secondary if $W \neq 0$ and for each $x \in R$ the multiplication by $x$ map on $W$ is either surjective or nilpotent. \end{Definition} One can easily check that, if $W$ is a secondary $R$-module, then $\mathfrak{p} = \sqrt{\ann_R(W)}$ is a prime ideal, and $\ann_R(W)$ is $\mathfrak{p}$-primary. \begin{Definition} Let $R$ be a ring and $W$ be an $R$-module. A secondary representation of $W$ is an expression of $W$ as a sum of secondary submodules, $W = \sum_{i=1}^t W_i$, where each $W_i$ is called a secondary component of this representation. A secondary representation of $W$ is called irredundant if the prime ideals $\mathfrak{p}_i = \sqrt{\ann_R(W_i)}$ are all distinct and none of the summands $W_i$ can be removed from the sum. The set $\{\mathfrak{p}_1,\ldots,\mathfrak{p}_t\}$ is independent of the irredundant secondary representation and is called the set of attached primes of $W$, denoted by $\operatorname{Att}_R(W)$. \end{Definition} Clearly a secondary module has a unique attached prime.
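To fix ideas, we include a standard example. \begin{Example} Let $R=k\ps{x}$ for a field $k$ of characteristic $p>0$, and let $W=H^1_\mathfrak{m}(R)\cong R_x/R$. Multiplication by $x$, and hence by any nonzero element of $R$, is surjective on $W$, and $\ann_R(W)=0$, so $W$ is secondary with $\operatorname{Att}_R(W)=\{(0)\}$. On the other hand, $W\oplus k$ is not secondary, since multiplication by $x$ on it is neither surjective nor nilpotent, and $W\oplus k=(W\oplus 0)+(0\oplus k)$ is an irredundant secondary representation with $\operatorname{Att}_R(W\oplus k)=\{(0),\mathfrak{m}\}$. \end{Example}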
Moreover, over a local ring $(R,\mathfrak{m})$, if a nonzero module $W$ has finite length, then $W$ is secondary with $\operatorname{Att}_R(W) = \{\mathfrak{m}\}$. A key fact is that every Artinian $R$-module admits an irredundant secondary representation. In particular, all local cohomology modules $H^i_\mathfrak{m}(R)$ have an irredundant secondary representation. \begin{Remark} \label{Remark Ass-Att} When $(R,\mathfrak{m})$ is a complete local ring, Matlis duality induces a correspondence between (irredundant) secondary representations of Artinian modules and (irredundant) primary decompositions of Noetherian modules. In particular, if $(R,\mathfrak{m})$ is complete, and $S$ is an $n$-dimensional regular local ring mapping onto $R$, then $\operatorname{Att}_R(H^i_\mathfrak{m}(R)) = \Ass_R(\Ext^{n-i}_S(R,S))$, as the Matlis dual of $H^i_\mathfrak{m}(R)$ is isomorphic to $\Ext^{n-i}_S(R,S)$. \end{Remark} We conclude this section by recalling the definitions of surjective elements and strictly filter regular elements. \begin{Definition} \label{Defn surjective} Let $(R,\mathfrak{m})$ be a local ring of dimension $d$. An element $x \in \mathfrak{m}$ is called a surjective element if $x \notin \mathfrak{p}$ for all $\mathfrak{p}\in \bigcup_{i=0}^d \operatorname{Att}_R(H^i_\mathfrak{m}(R))$, and $x$ is called a strictly filter regular element if $x \notin \mathfrak{p}$ for all $\mathfrak{p}\in \left(\bigcup_{i=0}^d \operatorname{Att}_R(H^i_\mathfrak{m}(R))\right) \smallsetminus \{\mathfrak{m}\}$. \end{Definition} \begin{Remark} The definition of surjective element we give is not the original one introduced in \cite{HMS}. However, note that $\Ass_R(R)\subseteq \cup_{i=0}^{\dim(R)}\operatorname{Att}_R(H_\mathfrak{m}^i(R))$ by \cite[11.3.9]{BrodmannSharp}, and thus surjective elements are always nonzerodivisors. Moreover, it follows from the definition that $x$ is a surjective element if and only if $H_\mathfrak{m}^i(R)\xrightarrow{\cdot x}H_\mathfrak{m}^i(R)$ is surjective for each $i$. Therefore our definition is equivalent to the original definition of surjective element by \cite[Proposition 3.3]{MaQuy}. Similarly, it is easy to see that $x$ is a strictly filter regular element if and only if $\coker(H_\mathfrak{m}^i(R)\xrightarrow{x}H_\mathfrak{m}^i(R))$ has finite length for each $i$. \end{Remark} Surjective elements are important in the study of the deformation problem for F-injectivity. For instance, it was first proved in \cite[Theorem 3.7]{HMS} that if $R/(x)$ is F-injective and $x$ is a surjective element, then $R$ is F-injective (see also \cite[Corollary 3.8]{MaQuy} or the proof of Theorem \ref{mainThm1} in the next section). In fact, we do not know any example where $R/(x)$ is F-injective but $x$ is not a surjective element; see Question \ref{q3}. \section{F-stable secondary representation} We introduce the key concept of this article. \begin{Definition} \label{Defn F-stable} Let $R$ be a ring of characteristic $p>0$, and let $W$ be an $R$-module with a Frobenius action $F$. We say that $W$ admits an F-stable secondary representation if there exists a secondary representation $W = \sum_{i=1}^t W_i$ such that each $W_i$ is F-stable, i.e., $F(W_i) \subseteq W_i$ for all $i$. \end{Definition} Observe that, even though we are not explicitly asking that an F-stable secondary representation be irredundant, this can always be arranged whenever such a representation exists; we record the short argument in the following remark.
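\begin{Remark} Indeed, suppose $W=\sum_{i=1}^t W_i$ is a secondary representation with every $W_i$ F-stable. We may first replace the components sharing a common attached prime $\mathfrak{p}$ by their sum: a finite sum of F-stable $\mathfrak{p}$-secondary submodules is again F-stable and $\mathfrak{p}$-secondary. We may then discard, one at a time, any summand contained in the sum of the remaining ones. Both operations preserve the F-stability of the surviving components, and the resulting representation is irredundant. \end{Remark}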
It seems natural to ask when a secondary component of an Artinian module is F-stable. We show that this is always the case for secondary components whose attached primes are minimal among the attached primes. \begin{Lemma} \label{Lem.FrobArtinian} Let $R$ be a ring of characteristic $p>0$, and let $W$ be an Artinian $R$-module with a Frobenius action $F$. Let $W=\sum_{i=1}^t W_i$ be an irredundant secondary representation, with $\mathfrak{p}_i=\sqrt{\ann_R(W_i)}$. If $\mathfrak{p}_i\in\operatorname{MinAtt}_R(W)$, then $W_i$ is F-stable. \end{Lemma} \begin{proof} Since $\mathfrak{p}_i\in\operatorname{MinAtt}_R(W)$, we can pick $y\in \bigcap_{j\neq i}\mathfrak{p}_j$ with $y\notin \mathfrak{p}_i$. Then $yW_i=W_i$ and $y^NW_j=0$ for all $j\neq i$ and $N\gg0$. Therefore we have $y^NW=W_i$ for all $N\gg0$, and thus $F(W_i)=F(y^NW_i)\subseteq F(y^NW)=y^{pN}F(W)\subseteq y^{pN}W=W_i$. \end{proof} For secondary components whose attached primes are not necessarily minimal, the corresponding secondary components may not be F-stable. However, we do not know whether this can happen when $W$ is a local cohomology module with its natural Frobenius action; see Question \ref{q1}. \begin{Example} Let $R=\mathbb{F}_p\ps{x,y}$ and let $W=\mathbb{F}_p\oplus H_\mathfrak{m}^2(R)$. Consider the Frobenius action $F$ on $W$ that sends $(1,0)$ to $(1,x^{-p}y^{-1})$ and agrees with the natural Frobenius action on $0\oplus H_\mathfrak{m}^2(R)$. Then $F$ is injective on $W$, but we claim that $0\oplus H^2_\mathfrak{m}(R)$ is the only proper nontrivial F-stable submodule of $W$. Indeed, let $0 \neq W'$ be an F-stable submodule of $W$; it is enough to show that $0 \oplus H^2_\mathfrak{m}(R) \subseteq W'$. Choose $a = (b, c) \neq 0$ inside $W'$. If $c = 0$, then $b \neq 0$. By replacing $a$ with $F(a)$, we can assume that $c \neq 0$. Note that $yF(a) = yF(b, 0) + (0, yF(c)) = (0, yF(c)) \neq 0$ (indeed, $yF(b,0)=y\cdot(b,bx^{-p}y^{-1})=(0,bx^{-p})=0$ in $W$), since the action $yF$: $H^2_\mathfrak{m}(R) \to H^2_\mathfrak{m}(R)$ is injective. Moreover, $H^2_\mathfrak{m}(R)$ is simple as an $R$-module with a Frobenius action, so $0 \oplus H^2_\mathfrak{m}(R) \subseteq W'$. Since $W$ is not secondary, this implies that there is no secondary representation of $W$ which is stable with respect to the given Frobenius action (any secondary component with attached prime $\mathfrak{m}$ is not F-stable). \end{Example} We let $\mathbb{V}(x)$ denote the set of primes of $R$ which contain $x$. Our first main result is the following. \begin{Theorem} \label{mainThm1} Let $(R,\mathfrak{m})$ be a $d$-dimensional local ring of characteristic $p>0$ and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. Suppose for each $i \ne d$, $H^i_\mathfrak{m}(R)$ admits a secondary representation in which the secondary components whose attached primes belong to $\mathbb{V}(x)$ are F-stable (e.g., $H_\mathfrak{m}^i(R)$ has an F-stable secondary representation). If $R/(x)$ is F-injective, then $x$ is a surjective element and $R$ is F-injective. \end{Theorem} \begin{proof} We prove by induction on $i \geq -1$ that multiplication by $x$ is surjective on $H^i_\mathfrak{m}(R)$ and that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)$ for all $e>0$. This will conclude the proof, since the first assertion implies that $x$ is a surjective element and the second assertion implies that $F$ is injective on $H^i_\mathfrak{m}(R)$ for all $i$. The base case $i=-1$ is trivial. Suppose both assertions hold for $i-1$; we show them for $i$.
Consider the following commutative diagram: \[\xymatrix{ 0 \ar[r] & H_\mathfrak{m}^{i-1}(R/(x)) \ar[d]^{F^e} \ar[r] & H_\mathfrak{m}^i(R) \ar[r]^{\cdot x} \ar[d]^{x^{p^e-1}F^e} & H_\mathfrak{m}^i(R) \ar[r] \ar[d]^{F^e} & H_\mathfrak{m}^i(R/(x)) \ar[d]^{F^e} \ar[r] & \ldots \\ 0 \ar[r] & H_\mathfrak{m}^{i-1}(R/(x)) \ar[r] & H_\mathfrak{m}^i(R) \ar[r]^{\cdot x} & H_\mathfrak{m}^i(R) \ar[r] & H_\mathfrak{m}^i(R/(x)) \ar[r] & \ldots }\] where the exactness on the left of each row follows from the inductive hypothesis that multiplication by $x$ is surjective on $H^{i-1}_\mathfrak{m}(R)$. Let $u \in \operatorname{soc}(H^i_\mathfrak{m}(R)) \cap \ker(x^{p^e-1}F^e)$. Then $xu=0$, and thus $u$ is the image of an element $v \in H^{i-1}_\mathfrak{m}(R/(x))$. Chasing the diagram shows that $F^e(v)=0$. But since $R/(x)$ is F-injective, $F^e$ is injective on $H^{i-1}_\mathfrak{m}(R/(x))$ for all $e>0$, so $v=0$ and thus $u=0$. Since $\ker(x^{p^e-1}F^e)$ is an $R$-submodule of the Artinian module $H^i_\mathfrak{m}(R)$ and we have just seen that its socle is zero, the kernel itself is zero. This shows that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)$ for all $e>0$. It remains to show that multiplication by $x$ is surjective on $H^i_\mathfrak{m}(R)$. We distinguish two cases. If $i=d$ then this is clear because $H^{d}_\mathfrak{m}(R/(x)) = 0$. Therefore we assume $i<d$. Let $H_\mathfrak{m}^i(R)=\sum_j W_j$ be a secondary representation that satisfies the conditions of the theorem. If there exists $W_j\neq 0$ whose attached prime $\mathfrak{p}_j \in \mathbb{V}(x)$, then it follows from the assumptions that $W_j$ is F-stable. Thus $x^{p^e-1}F^e(W_j)\subseteq x^{p^e-1}W_j = 0$ for all $e \gg 0$ (since $x\in \mathfrak{p}_j = \sqrt{\ann_R(W_j)}$). However, we have proved that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)$ for all $e>0$, so $W_j=0$ and we arrive at a contradiction. Therefore $x\notin\mathfrak{p}$ for all $\mathfrak{p}\in \operatorname{Att}_R(H^i_\mathfrak{m}(R))$, i.e., multiplication by $x$ is surjective on $H^i_\mathfrak{m}(R)$. \end{proof} \begin{Corollary} \label{Corollary minimal} Let $(R,\mathfrak{m})$ be a $d$-dimensional local ring of characteristic $p>0$ and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. Suppose that $\operatorname{Att}_R(H_\mathfrak{m}^i(R)) \cap \mathbb{V}(x) \subseteq \operatorname{MinAtt}_R(H_\mathfrak{m}^i(R))$ for all $i \ne d$ (e.g., when each $H_\mathfrak{m}^i(R)$ has no embedded attached primes). If $R/(x)$ is F-injective, then $x$ is a surjective element and $R$ is F-injective. \end{Corollary} \begin{proof} By Lemma \ref{Lem.FrobArtinian}, every irredundant secondary representation of $H_\mathfrak{m}^i(R)$ satisfies the assumptions of Theorem \ref{mainThm1}, so the conclusion follows. \end{proof} We next exhibit an explicit new class of rings for which deformation of F-injectivity holds. Recall that a finitely generated $R$-module $M$ is called sequentially Cohen--Macaulay if there exists a finite filtration $0=M_0\subseteq M_1\subseteq M_2\subseteq \cdots \subseteq M_n=M$ such that each $M_{i+1}/M_i$ is Cohen--Macaulay and $\dim(M_i/M_{i-1})<\dim(M_{i+1}/M_i)$. A local ring $(R,\mathfrak{m})$ is called sequentially Cohen--Macaulay if $R$ is sequentially Cohen--Macaulay as an $R$-module. \begin{Corollary} \label{CorSeqCM} Let $(R,\mathfrak{m})$ be a $d$-dimensional sequentially Cohen--Macaulay local ring of characteristic $p>0$ and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. If $R/(x)$ is F-injective, then $x$ is a surjective element and $R$ is F-injective.
\end{Corollary} \begin{proof} First we observe that if $R$ is sequentially Cohen--Macaulay then so is $\widehat{R}$, and that whether $R$ is F-injective (and whether $x$ is a surjective element) is unaffected by passing to the completion. Therefore we may assume $R$ is complete, and thus $R$ is a homomorphic image of a regular local ring $S$. By \cite[Theorem 1.4]{HerzogSbarraSequentiallCM}, $R$ being sequentially Cohen--Macaulay is equivalent to saying that, for each $0\leq i \leq d$, $\Ext^{\dim(S)-i} _S(R,S)$ is either zero or Cohen--Macaulay of dimension $i$. In particular, $\Ext^{\dim(S)-i} _S(R,S)$ has no embedded associated primes, and hence by Remark \ref{Remark Ass-Att}, $H_\mathfrak{m}^i(R)$ has no embedded attached primes for each $0\leq i \leq d$, that is, $\operatorname{Att}_R(H^i_\mathfrak{m}(R)) = \operatorname{MinAtt}_R(H^i_\mathfrak{m}(R))$. The conclusion now follows from Corollary~\ref{Corollary minimal}. \end{proof} \subsection{Results on local rings with perfect residue field} If we assume the residue field of $(R,\mathfrak{m})$ is perfect, then we can prove some slightly stronger results. The arguments are based on appropriate modifications of the proof of Theorem~\ref{mainThm1}, together with some ideas employed in \cite[Section 5]{MaQuy}. First, we modify the definition of an F-stable secondary representation. \begin{Definition} \label{Defn Fnot-stable} Let $R$ be a ring of characteristic $p>0$ and $\mathfrak{m}$ be a maximal ideal of $R$. Let $W$ be an $R$-module with a Frobenius action $F$. We say that $W$ admits an $\operatorname{F^\circ}$-stable secondary representation if there exists a secondary representation $W = \sum_{i=1}^t W_i$ such that $W_i$ is F-stable for all $i$ with $\operatorname{Att}_R(W_i)\neq \{\mathfrak{m}\}$. \end{Definition} \begin{Theorem} \label{mainThm2} Let $(R,\mathfrak{m})$ be a $d$-dimensional local ring of characteristic $p>0$ with perfect residue field, and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. Suppose that for each $i \ne d$ with $H^i_\mathfrak{m}(R) \ne 0$, $H^i_\mathfrak{m}(R)$ admits a secondary representation in which the secondary components whose attached primes belong to $\mathbb{V}(x)\smallsetminus \{\mathfrak{m}\}$ are F-stable (e.g., $H_\mathfrak{m}^i(R)$ has an $\operatorname{F^\circ}$-stable secondary representation). If $R/(x)$ is F-injective, then $x$ is a strictly filter regular element and $R$ is F-injective. \end{Theorem} \begin{proof} For every $i$, we let $L_i=\coker(H_\mathfrak{m}^i(R)\xrightarrow{x}H_\mathfrak{m}^i(R))$. We prove by induction on $i \geq -1$ that $L_i$ has finite length and that the Frobenius action $x^{p^e-1}F^e$ on $H_\mathfrak{m}^i(R)$ is injective for all $e>0$. This will conclude the proof, since the first assertion implies that $x$ is a strictly filter regular element and the second assertion implies that $F$ is injective on $H^i_\mathfrak{m}(R)$ for all $i$. The initial case $i=-1$ is trivial. Suppose both assertions hold for $i-1$; we show them for $i$.
Consider the following commutative diagram: \[\xymatrix{ 0 \ar[r] & H_\mathfrak{m}^{i-1}(R/(x))/L_{i-1} \ar[d]^{F^e} \ar[r] & H_\mathfrak{m}^i(R) \ar[r]^{\cdot x} \ar[d]^{x^{p^e-1}F^e} & H_\mathfrak{m}^i(R) \ar[r] \ar[d]^{F^e} & H_\mathfrak{m}^i(R/(x)) \ar[d]^{F^e} \ar[r] & \ldots \\ 0 \ar[r] & H_\mathfrak{m}^{i-1}(R/(x))/L_{i-1} \ar[r] & H_\mathfrak{m}^i(R) \ar[r]^{\cdot x} & H_\mathfrak{m}^i(R) \ar[r] & H_\mathfrak{m}^i(R/(x)) \ar[r] & \ldots }\] Since $L_{i-1}$ has finite length, $F^e$ is injective on $H_\mathfrak{m}^{i-1}(R/(x))$ by assumption, and $R/\mathfrak{m}$ is perfect, we have that $F^e$ induces a bijection on $L_{i-1}\subseteq H_\mathfrak{m}^{i-1}(R/(x))$. Thus, $F^e$ induces an injection on $H_\mathfrak{m}^{i-1}(R/(x))/L_{i-1}$ for all $e>0$. Therefore, chasing the diagram above as in the proof of Theorem \ref{mainThm1}, we know that $x^{p^e-1}F^e$ is injective on $H_\mathfrak{m}^i(R)$ for all $e>0$. It remains to show that $L_i$ has finite length. If $i=d$ then this is clear because $L_d\subseteq H^{d}_\mathfrak{m}(R/(x)) = 0$. Therefore we assume $i<d$. Let $H_\mathfrak{m}^i(R)=\sum_j W_j$ be a secondary representation that satisfies the conditions of the theorem. If there exists $W_j\neq 0$ whose attached prime $\mathfrak{p}_j \in \mathbb{V}(x)\smallsetminus \{\mathfrak{m}\}$, then it follows from the assumptions that $W_j$ is F-stable. Thus $x^{p^e-1}F^e(W_j)\subseteq x^{p^e-1}W_j = 0$ for all $e \gg 0$ (since $x\in \mathfrak{p}_j=\sqrt{\ann_R(W_j)}$). However, we have proved that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)$ for all $e>0$, so $W_j=0$ and we arrive at a contradiction. Therefore $x\notin \mathfrak{p}$ for all $\mathfrak{p}\in \operatorname{Att}_R(H^i_\mathfrak{m}(R))\smallsetminus \{\mathfrak{m}\}$, i.e., $L_i=\coker(H_\mathfrak{m}^i(R)\xrightarrow{x}H_\mathfrak{m}^i(R))$ has finite length. \end{proof} \begin{Corollary} \label{Corollary minimal perfect residue} Let $(R,\mathfrak{m})$ be a $d$-dimensional local ring of characteristic $p>0$ with perfect residue field, and let $x\in \mathfrak{m}$ be a nonzerodivisor on $R$. Suppose that $\operatorname{Att}_R(H_\mathfrak{m}^i(R)) \cap \mathbb{V}(x) \subseteq \operatorname{MinAtt}_R(H_\mathfrak{m}^i(R)) \cup \{\mathfrak{m}\}$ for all $i \ne d$. If $R/(x)$ is F-injective, then $x$ is a strictly filter regular element and $R$ is F-injective. In particular, F-injectivity deforms if $\dim(R/\ann_R(H^i_\mathfrak{m}(R))) \leq 1$ for all $i \ne d$ and $R/\mathfrak{m}$ is perfect. \end{Corollary} \begin{proof} By Lemma \ref{Lem.FrobArtinian}, every irredundant secondary representation of $H_\mathfrak{m}^i(R)$ satisfies the assumptions of Theorem \ref{mainThm2}, so the first conclusion follows. To see the second conclusion, it is enough to observe that when $\dim(R/\ann_R(H^i_\mathfrak{m}(R))) \leq 1$, we have $\operatorname{Att}_R(H_\mathfrak{m}^i(R))\subseteq \operatorname{MinAtt}_R(H_\mathfrak{m}^i(R)) \cup \{\mathfrak{m}\}$. \end{proof} \subsection{Results on $\mathbb{N}$-graded rings} For the rest of this section, we assume that $(R,\mathfrak{m}, k)$ is an $\mathbb{N}$-graded algebra over a field $k$ of characteristic $p>0$ ($k$ is not necessarily perfect). Given a graded module $W = \bigoplus_j W_j$ and $a \in \mathbb{Z}$, we denote by $W(a)$ the shift of $W$ by $a$, that is, the graded $R$-module such that $W(a)_j = W_{a+j}$. In this context, when talking about a Frobenius action $F$ on a graded module $W$, we insist that $\deg(F(\eta))=p\cdot\deg(\eta)$ for all homogeneous $\eta\in W$.
This is the case for the natural Frobenius action $F$ on the local cohomology modules $H_\mathfrak{m}^i(R)$. The goal of this subsection is to extend Theorem \ref{mainThm2} in this $\mathbb{N}$-graded setting, by removing the assumption that the residue field $k$ is perfect and by strengthening the conclusion to the statement that $x$ is actually a surjective element. \begin{Theorem} \label{mainThmGraded} Let $(R,\mathfrak{m},k)$ be a $d$-dimensional $\mathbb{N}$-graded $k$-algebra of characteristic $p>0$ and let $x \in \mathfrak{m}$ be a homogeneous nonzerodivisor on $R$. Suppose for each $i \ne d$, $H^i_\mathfrak{m}(R)$ admits a secondary representation in which the secondary components whose attached primes belong to $\mathbb{V}(x)\smallsetminus \{\mathfrak{m}\}$ are F-stable (e.g., $H_\mathfrak{m}^i(R)$ has an $\operatorname{F^\circ}$-stable secondary representation). If $R/(x)$ is F-injective, then $x$ is a surjective element and $R$ is F-injective. \end{Theorem} \begin{proof} Let $\deg(x)=t>0$. We have a graded long exact sequence of local cohomology, induced by the short exact sequence $0 \to R(-t) \xrightarrow{x} R \to R/(x) \to 0$. Moreover, this exact sequence fits in the commutative diagram: \[\xymatrix{ \ldots \ar[r] & H_\mathfrak{m}^{i-1}(R/(x)) \ar[d]^{F^e} \ar[r] & H_\mathfrak{m}^i(R)(-t) \ar[r]^{\cdot x} \ar[d]^{x^{p^e-1}F^e} & H_\mathfrak{m}^i(R) \ar[r] \ar[d]^{F^e} & H_\mathfrak{m}^i(R/(x)) \ar[d]^{F^e} \ar[r] & \ldots \\ \ldots \ar[r] & H_\mathfrak{m}^{i-1}(R/(x)) \ar[r] & H_\mathfrak{m}^i(R)(-t) \ar[r]^{\cdot x} & H_\mathfrak{m}^i(R) \ar[r] & H_\mathfrak{m}^i(R/(x)) \ar[r] & \ldots }\] Observe that all the Frobenius actions are compatible with the grading. We show by induction on $i \geq -1$ that the map $H^i_\mathfrak{m}(R)(-t) \xrightarrow{x} H^i_\mathfrak{m}(R)$ is surjective and that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)(-t)$ for all $e>0$. This will conclude the proof, since the first assertion implies that $x$ is a surjective element and the second assertion implies that $F$ is injective on $H_\mathfrak{m}^i(R)$ for all $i$. The base case $i=-1$ is trivial. Suppose both assertions hold for $i-1$; we show them for $i$. By the same argument as in the proof of Theorem~\ref{mainThm1}, we have that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)(-t)$ for all $e>0$. It remains to show that the multiplication by $x$ map $H^i_\mathfrak{m}(R)(-t) \xrightarrow{x} H^i_\mathfrak{m}(R)$ is surjective. If $i=d$ then this is clear because $H^{d}_\mathfrak{m}(R/(x)) = 0$. Therefore we assume $i<d$. Now by the same argument as in the proof of Theorem \ref{mainThm2}, we know that $L_i=\coker(H_\mathfrak{m}^i(R)(-t)\xrightarrow{x}H_\mathfrak{m}^i(R))$ has finite length (note that we can ignore the graded structure here). Finally, consider the following commutative diagram: \[\xymatrix{ 0 \ar[r] & L_i \ar[d]^-{F^e} \ar[r] & H^i_\mathfrak{m}(R/(x)) \ar[d]^-{F^e} \ar[r] & H^{i+1}_\mathfrak{m}(R)(-t) \ar[d]^-{x^{p^e-1}F^e} \ar[r]^-x & \ldots \\ 0 \ar[r] & L_i \ar[r] & H^i_\mathfrak{m}(R/(x)) \ar[r] & H^{i+1}_\mathfrak{m}(R)(-t) \ar[r]^-x & \ldots }\] Since $F^e$ is injective on $H^i_\mathfrak{m}(R/(x))$ by assumption, it is also injective on $L_i$. But since the finite length module $L_i$ is graded and the Frobenius action is compatible with the grading (as the action is induced from $H^i_\mathfrak{m}(R/(x))$), this forces $L_i$ to be concentrated in degree zero: an element of nonzero degree $j$ would have $F^e$-images in degrees $p^ej$, which eventually fall outside the finitely many degrees in which $L_i$ is nonzero, contradicting the injectivity of $F^e$ on $L_i$. If $L_i \ne 0$, then $[L_i]_0 \cong [H^i_\mathfrak{m}(R)/xH^i_\mathfrak{m}(R)(-t)]_0 \ne 0$, in particular $[H^i_\mathfrak{m}(R)]_0 \ne 0$.
However, this implies the existence of a nonzero element $u \in [H^i_\mathfrak{m}(R)(-t)]_t$. Since we have proved that $x^{p^e-1}F^e$ is injective on $H^i_\mathfrak{m}(R)(-t)$, this gives a nonzero element $x^{p^e-1}F^e(u)$ in degree $p^et>0$ for all $e > 0$, which is a contradiction because $[H^i_\mathfrak{m}(R)(-t)]_{\gg 0}=0$ (here we are using that the Frobenius action $x^{p^e-1}F^e$ is compatible with the grading on $H_\mathfrak{m}^i(R)(-t)$, that is, $\deg(x^{p^e-1}F^e(\eta))=p^e\deg(\eta)$ for all homogeneous $\eta\in H_\mathfrak{m}^i(R)(-t)$; indeed, an element of degree $j$ in the shifted grading has degree $j-t$ in $H_\mathfrak{m}^i(R)$, so $x^{p^e-1}F^e(\eta)$ has degree $p^e(j-t)+(p^e-1)t=p^ej-t$ there, i.e., degree $p^ej$ in the shifted grading). Therefore $L_i=0$, i.e., the multiplication by $x$ map $H^i_\mathfrak{m}(R)(-t) \xrightarrow{x} H^i_\mathfrak{m}(R)$ is surjective. \end{proof}

\begin{Corollary} \label{Corollary minimal graded} Let $(R,\mathfrak{m},k)$ be a $d$-dimensional $\mathbb{N}$-graded $k$-algebra of characteristic $p>0$ and let $x \in \mathfrak{m}$ be a homogeneous nonzerodivisor on $R$. Suppose that $\operatorname{Att}_R(H_\mathfrak{m}^i(R)) \cap \mathbb{V}(x) \subseteq \operatorname{MinAtt}_R(H_\mathfrak{m}^i(R)) \cup \{\mathfrak{m}\}$ for all $i \ne d$. If $R/(x)$ is F-injective, then $x$ is a surjective element and $R$ is F-injective. \end{Corollary} \begin{proof} By Lemma \ref{Lem.FrobArtinian}, every irredundant secondary representation of $H_\mathfrak{m}^i(R)$ satisfies the assumptions of Theorem \ref{mainThmGraded}, so the conclusion follows. \end{proof}

\section{Ending Questions and Remarks} We end by collecting some questions that arise from the results in this article. Motivated by Definition \ref{Defn F-stable} and Theorem \ref{mainThm1}, it is natural to ask the following. \begin{Question} \label{q1} Let $(R,\mathfrak{m})$ be a local ring of characteristic $p>0$. If $H^i_\mathfrak{m}(R) \ne 0$, does it admit an F-stable secondary representation? \end{Question} By Theorem \ref{mainThm1}, a positive answer to Question \ref{q1} implies that F-injectivity deforms. \begin{Question} \label{q1maximal} Let $(R,\mathfrak{m})$ be a local ring of characteristic $p>0$. If $H^i_\mathfrak{m}(R)\ne 0$, does it admit a secondary representation such that the secondary component with attached prime $\mathfrak{m}$, if not zero, is F-stable? \end{Question} This is weaker than Question \ref{q1}, but an affirmative answer also implies that F-injectivity deforms. To see this, suppose that $R/(x)$ is F-injective; we will show that $x$ is a surjective element and thus that $R$ is F-injective by \cite[Theorem 3.7]{HMS} (or use the same argument as in Theorem \ref{mainThm1}). In fact, if $x\in\mathfrak{p}$ for some $\mathfrak{p}\in \operatorname{Att}_R(H^i_\mathfrak{m}(R))$, then $\mathfrak{p} R_\mathfrak{p}\in\operatorname{Att}_{R_\mathfrak{p}}(H^j_{\mathfrak{p} R_\mathfrak{p}}(R_\mathfrak{p}))$ for some $j$ with $x\in \mathfrak{p} R_\mathfrak{p}$, and $R_\mathfrak{p}/xR_\mathfrak{p}$ is still F-injective. Now an affirmative answer to Question \ref{q1maximal} applied to $(R_\mathfrak{p}, \mathfrak{p} R_\mathfrak{p})$ implies that there exists a nonzero secondary component of $H^j_{\mathfrak{p} R_\mathfrak{p}}(R_\mathfrak{p})$ with attached prime $\mathfrak{p} R_\mathfrak{p}$ that is F-stable, and we can argue as in the proof of Theorem \ref{mainThm1} to arrive at a contradiction. \begin{Question} \label{q3} Let $(R,\mathfrak{m})$ be a local ring of characteristic $p>0$, and let $x \in \mathfrak{m}$ be a nonzerodivisor on $R$. If $R/(x)$ is F-injective, is it true that $\mathfrak{m} \notin \operatorname{Att}(H^i_\mathfrak{m}(R))$ for all $i$?
\end{Question} As in the discussion above, we point out that an affirmative answer to Question \ref{q3} also implies that $x$ is a surjective element (and hence implies that F-injectivity deforms): if not, then $x\in\mathfrak{p}$ for some $\mathfrak{p}\in \operatorname{Att}_R(H^i_\mathfrak{m}(R))$, but then $R_\mathfrak{p}/xR_\mathfrak{p}$ is still F-injective and $\mathfrak{p} R_\mathfrak{p}\in\operatorname{Att}_{R_\mathfrak{p}}(H^j_{\mathfrak{p} R_\mathfrak{p}}(R_\mathfrak{p}))$ for some $j$, which contradicts an affirmative answer to Question \ref{q3} applied to $(R_\mathfrak{p}, \mathfrak{p} R_\mathfrak{p})$. \bibliographystyle{alpha}
\section{Introduction} While the $\Lambda$CDM model of cosmology is remarkably successful at explaining a wide range of cosmological observations, it currently fails to reconcile distance-redshift measurements when anchored at low redshift through the distance ladder and at high redshift by cosmic microwave background (CMB) anisotropies. Specifically under $\Lambda$CDM, the Planck 2018 measurement of $H_0=67.4 \pm 0.5$ (in units of km s$^{-1}$ Mpc$^{-1}$ from here on)~\cite{Aghanim:2018eyx} is in $4.4\sigma$ tension with the latest SH0ES estimate $H_0=74.03 \pm 1.42$~\cite{Riess:2019cxk}. The significance of this discrepancy makes it unlikely to be a statistical fluctuation and hence requires an explanation. A resolution of this tension may lie in unknown systematic effects in the local distance ladder, and an array of alternative measurement methods is being pursued to address this possibility. For example, calibrations based on the tip of the red giant branch~\cite{Freedman:2019jwv,Freedman:2020dne}, Mira variables~\cite{Huang:2019yhh}, megamasers~\cite{Pesce:2020xfe}, and lensing time delays~\cite{Wong:2019kwg,Birrer:2020tax} all give broadly consistent results but differ in the significance of the discrepancy. On the other hand, different CMB measurements from Planck~\cite{Aghanim:2018eyx}, the South Pole Telescope (SPT)~\cite{Aylor:2017haa} and the Atacama Cosmology Telescope (ACT)~\cite{Aiola:2020azj,Choi:2020ccd} give compatible distance calibrations under $\Lambda$CDM. While Planck and previous measurements weight the calibration mainly toward the sound horizon, recent ground-based experiments such as SPT and ACT provide precise measurements of the damping scale as well~\cite{Aiola:2020azj}. Cosmological solutions now generally require a consistent change in the calibration of both the CMB sound horizon and damping scale, which anchor the high-redshift end of the distance scale, preventing high-redshift solutions that substantially change their ratio~\cite{Wyman:2013lza,Dvorkin:2014lea,Leistedt:2014sia,Ade:2015rim,Lesgourgues:2015wza,Adhikari:2016bei,DiValentino:2016hlg,Canac:2016smv,Feng:2017nss,Oldengott:2017fhy,Lancaster:2017ksf,Kreisch:2019yzn,Park:2019ibn}. Once anchored there, the rungs on the distance ladder through baryon acoustic oscillations (BAO) to Type Ia supernovae (SN) leave little room for missing cosmological physics in between (see e.g.~\cite{Bernal:2016gxb, Aylor:2018drw, Raveri:2019mxg, Knox:2019rjx, Benevento:2020fev,Zhao:2017cud,Lin:2018nxe,Benevento:2019,Ghosh:2019tab,Liu:2019awo,Dai:2020rfo,Zumalacarregui:2020cjh,Alestas:2020mvb,Jedamzik:2020krr,Braglia:2020iik,Gonzalez:2020fdy,DiValentino:2020kha}). For this reason, a class of models that posit a new form of energy density whose relative contribution peaks near matter-radiation equality~\cite{Poulin:2018cxd,Lin:2019qug} has received much interest~\cite{Agrawal:2019lmo,Alexander:2019rsc,Smith:2019ihp,Berghaus:2019cls,Niedermann:2019olb,Sakstein:2019fmf,Ye:2020btb,Braglia:2020bym,Niedermann:2020dwg,Ye:2020oix}. In these models, the extra energy density changes the expansion rate before recombination, and hence the sound horizon, while tuning the timing of these contributions simultaneously adjusts the damping scale as well. These models can therefore successfully raise $H_0$ by changing the distance ladder calibration and are limited mainly by the compensating parameter changes needed to offset the driving of the acoustic oscillations by the Jeans-stable additional component.
These changes cause testable effects on CMB polarization, for modes that cross the horizon near matter-radiation equality~\cite{Lin:2019qug}, and on the clustering of cosmological structure, changing the amplitude and shape of the power spectrum~\cite{Ivanov:2020ril,Hill:2020osr,DAmico:2020ods,Niedermann:2020qbw}. Models within this class can also be distinguished from one another by these effects. Given the recent improvements in their measurement, we focus on the CMB polarization effects here and their implications for the canonical acoustic dark energy (cADE) model~\cite{Lin:2019qug}, where a scalar field with a canonical kinetic term suddenly converts its potential energy to kinetic energy by being released from Hubble drag on a sufficiently steep potential. With only two additional parameters, this model provides the most efficient and generic realization of the extra energy density scenarios. This paper is organized as follows. In \S~\ref{Sec:ADErecap} we briefly review the cADE model and its relationship to other models in the literature. In \S~\ref{sec:data} we introduce the data sets that we use to obtain the constraints presented in \S~\ref{Sec:Results}. We highlight the role of ACT in \S~\ref{Sec:MinusACT}, Planck polarization in \S~\ref{Sec:Planck}, SH0ES in \S~\ref{Sec:MinusH0} and discuss differences with other models where extra dark energy alleviates the Hubble tension in \S~\ref{Sec:Models}. We conclude in \S~\ref{Sec:Conclusions}.

\section{Acoustic Dark Energy} \label{Sec:ADErecap} In this section we review the model parameterization of acoustic dark energy (ADE) and its relationship to early dark energy (EDE) \cite{Poulin:2018dzj} following Ref.~\cite{Lin:2019qug}. For the purposes of this work, acoustic dark energy can be viewed either as a dark fluid component described by an equation of state $w_{\rm ADE}$ and rest frame sound speed $c_s^2$~\cite{Hu:1998kj} that becomes transiently important around matter-radiation equality or as a scalar field that suddenly converts its potential energy to kinetic energy by being released from Hubble drag at that time. Adopting the former description, we model the ADE equation of state as \begin{equation} \label{eqn:eos} 1+w_{{\rm ADE}}(a) = \frac{1+w_{\rm f} }{[1+(a_c/a)^{3(1+w_{\rm f} )/p}]^{p}} \,. \end{equation} In addition we parameterize the fractional energy density contribution by its value at $a_c$ \begin{equation} f_c = \frac{\rho_{\rm ADE}(a_c)}{\rho_{\rm tot}(a_c)} \,. \end{equation} The ADE component therefore has a transition in its equation of state around a scale factor $a=a_c$ from $w_{{\rm ADE}}=-1$ to $w_{\rm f}$, which causes its fractional energy density to peak near $a_c$. The rapidity of the transition is determined by $p$, which we set to $p=1/2$ throughout, as its specific value does not affect our qualitative results \cite{Lin:2019qug}. The connection to the scalar field picture comes from these asymptotic behaviors. Given a constant sound speed, $w_{\rm f}=c_s^2$ for a potential-to-kinetic energy conversion. We call the case of a canonical scalar field where $c_s^2=1$ ``cADE''. In \S \ref{Sec:Models}, we widen the description to allow $w_{\rm f}$ and $c_s^2$ to be free parameters and call this superset ``ADE''. In summary, cADE is described by two parameters $\{ a_c, f_c \}$ whereas ADE is described by four parameters $\{ w_{\rm f}, a_c, f_c, c_s^2 \}$.
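As an illustration of Eq.~(\ref{eqn:eos}), the following minimal Python sketch (not the CAMB implementation used for our actual results) evaluates $w_{\rm ADE}(a)$ and makes the limiting behavior explicit, $w_{\rm ADE}\to-1$ for $a\ll a_c$ and $w_{\rm ADE}\to w_{\rm f}$ for $a\gg a_c$; the parameter values below are placeholders:
\begin{verbatim}
import numpy as np

def w_ade(a, a_c, w_f, p=0.5):
    # Eq. (1): smooth transition from w = -1 (a << a_c) to w = w_f (a >> a_c)
    return (1.0 + w_f) / (1.0 + (a_c / a)**(3.0 * (1.0 + w_f) / p))**p - 1.0

a = np.logspace(-5.0, 0.0, 200)
w = w_ade(a, a_c=10**-3.5, w_f=1.0)  # cADE-like case, w_f = c_s^2 = 1
\end{verbatim}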
When varying these parameters we impose flat, range-bound priors: $-4.5\leq \log_{10}a_c \leq -3.0$, $0\leq f_c \leq 0.2$, $0\leq w_{\rm f} \leq 3.6$, and $0\leq c_s^2 \leq 1.5$. These ADE models can be contrasted with the EDE model in its fluid description \cite{Poulin:2018dzj}. In the EDE case, the fluid behavior is modeled on a scalar field that oscillates around the minimum of its potential, whose equation of state can likewise be parameterized by Eq.~(\ref{eqn:eos}). In this case, $p=1$ and $w_{\rm f}=(n-1)/(n+1)$ is a free parameter associated with raising an axion- or cosine-like potential to the $n$th power, where $w_{\rm f} \approx 1/2$ was found to best relieve the Hubble tension \cite{Poulin:2018cxd}. An additional parameter $\Theta_i$ models the initial position of the field in the potential and controls an effective, scale-dependent sound speed (see \cite{Poulin:2018dzj,Lin:2019qug}). The EDE model is therefore parameterized by $\{ w_{\rm f}, a_c, f_c, \Theta_i \}$. When varying these parameters we use the same priors as ADE for $\{ a_c , f_c\}$ but fix $w_{\rm f}=1/2$ and impose a flat prior on $\Theta_i$ in its range $0 \le \Theta_i \le \pi$. In addition to these parameters, the full cosmological model that we fit to data also includes the six $\Lambda$CDM parameters: the angular size of the CMB sound horizon $\theta_{s}$, the cold dark matter density $\Omega_c h^2$, the baryon density $\Omega_b h^2$, the optical depth to reionization $\tau$, the initial curvature spectrum normalization $A_s$ at $k=0.05$ Mpc$^{-1}$, and its tilt $n_s$. All these parameters have the usual non-informative priors \cite{Aghanim:2019ame}. We fix the sum of neutrino masses to the minimal value (e.g.~\cite{Long:2017dru}). We use the EDE and ADE implementations in the CAMB~\cite{Lewis:1999bs} and CosmoMC~\cite{Lewis:2002ah} codes, following~\citep{Lin:2019qug}. We sample the posterior parameter distribution until the Gelman-Rubin convergence statistic~\cite{gelman1992} satisfies $R-1<0.01$ or better, unless otherwise stated.

\section{Datasets} \label{sec:data} In this paper, we combine several data sets relevant to the Hubble tension. We use the publicly available Planck 2018 likelihoods for the CMB temperature and polarization power spectra at small (Planck 18 TTTEEE) and large angular scales (lowl+lowE) and the CMB lensing potential power spectrum in the multipole range $40 \leq \ell \leq 400$~\citep{Aghanim:2018eyx, Aghanim:2019ame, Aghanim:2018oex}. We then compare the results to the 2015 version of the same data set~\citep{Ade:2015xua,Aghanim:2015xee,Ade:2015zua} and examine the impact of the improved high-$\ell$ polarization data, which we sometimes refer to as ``acoustic polarization'' to distinguish it from the low-$\ell$ reionization signature. We combine Planck data with ACT data, which measure the CMB temperature and polarization spectra out to higher multipoles~\citep{Aiola:2020azj}. We exclude the lowest temperature multipoles for ACT that would otherwise be correlated with Planck, following~\citep{Aiola:2020azj}. To expose the Hubble tension, we consider the SH0ES measurement of the Hubble constant, $H_0=74.03\pm1.42$ (in units of km\,s$^{-1}$\,Mpc$^{-1}$ here and throughout)~\citep{Riess:2019cxk}. To these data sets we add BAO measurements from BOSS DR12~\cite{Alam:2016hwk}, the SDSS Main Galaxy Sample~\cite{Ross:2014qpa}, and 6dFGS~\cite{Beutler:2011hx}, as well as the Pantheon Supernovae (SN) sample~\cite{Scolnic:2017caz}.
These data sets prevent resolving the Hubble tension by modifying the dark sector only between recombination and the very low redshift universe~\cite{Aylor:2018drw}. Our baseline configuration which we call ``All" contains: CMB temperature, polarization and lensing, BAO, SN and $H_0$ measurements. We then proceed to examine the impact of key pieces of this combination by removing or replacing various data sets. Specifically we consider the following cases: \begin{itemize} \item All = Planck+ACT+SH0ES+BAO+Pantheon \item -ACT = All$-$ACT \item -P18Pol = All, but Planck 18 TTTEEE $\rightarrow$ Planck 18 TT \item -H0 = All$-$SH0ES \item P18$\rightarrow$15,-ACT = -ACT, but Planck 2018$\rightarrow$2015. This is the default combination used in \cite{Lin:2019qug}. \end{itemize} When highlighting the impact of a specific data component $i$ below, we quote $\Delta\chi^2_i \equiv -2\Delta \ln {\cal L}_i$, relative to the appropriate maximum total likelihood (${\cal L}$) model under $\Lambda$CDM. For example $\Delta\chi^2_{\rm P}$ denotes the contribution from Planck CMB power spectra and includes Planck TTTEEE+lowl+lowE, except for the -P18Pol configuration where it includes Planck TT+lowl+lowE. \section{Results} \label{Sec:Results} In this section we discuss all results. In \S~\ref{Sec:All}, we present results for cADE and the All data combination. In \S~\ref{Sec:MinusACT} and \S~\ref{Sec:Planck} we explore the impact of the ACT and 2018 improvements to the Planck data, highlighting the crucial role of polarization. In \S~\ref{Sec:MinusH0}, we show that the ability to raise $H_0$ in cADE is not exclusively driven by the SH0ES measurement. We discuss how polarization measurements distinguish between cADE and the wider class of ADE and EDE models in \S~\ref{Sec:Models}. \begin{table*} \centering \begin{tabular}{c|ccccc} \hline cADE & All & -ACT & -P18Pol & -H0 & P$18\rightarrow15$,-ACT \\ \hline $f_c$ & 0.072(0.068$^{+0.025}_{-0.022}$) & 0.081(0.070$^{+0.027}_{-0.024}$) & 0.105(0.110$\pm$0.030) & 0.050(0.027$^{+0.008}_{-0.027}$) & 0.086(0.082$^{+0.026}_{-0.023}$) \\ $\log_{10}a_c$ & -3.42(-3.43$^{+0.05}_{-0.07}$) & -3.50(-3.50$^{+0.07}_{-0.06}$) & -3.41(-3.39$^{+0.03}_{-0.10}$) & -3.42(-3.47$^{+0.24}_{-0.11}$) & -3.45(-3.46$^{+0.05}_{-0.06}$) \\ \hline $H_0$ & 70.25(70.14$\pm$0.82) & 70.60(70.19$\pm$0.86) & 71.38(71.54$\pm$1.07) & 69.19(68.50$^{+0.55}_{-0.93}$) & 70.57(70.60$\pm$0.85) \\ $S_8$ & 0.841(0.839$\pm$0.013) & 0.841(0.839$\pm$0.013) & 0.846(0.845$^{+0.018}_{-0.015}$) & 0.842(0.833$^{+0.011}_{-0.012}$) & 0.843(0.842$\pm$0.013) \\ \hline $\Delta\chi^2_{\rm P}$ & -0.2 & -1.5 & -4.3 & -1.7 & -4.7 \\ $\Delta\chi^2_{\rm ACT}$ & -1.8 & -- & -4.3 & -1.0 & -- \\ $\Delta\chi^2_{\rm tot}$ & -11.5 & -10.7 & -19.4 & -1.6 & -12.7 \\ \hline \hline $H_0^{{\rm \Lambda CDM}}$ & 68.23(68.17$\pm$0.38) & 68.29(68.22$\pm$0.40) & 68.30(68.32$\pm$0.42) & 67.80(67.73$\pm$0.39) & 68.58(68.35$\pm$0.42) \\ $S_8^{{\rm \Lambda CDM}}$ & 0.815(0.818$\pm$0.010) & 0.812(0.814$\pm$0.010) & 0.814(0.813$\pm$0.011) & 0.826(0.827$\pm$0.010) & 0.819(0.819$\pm$0.010)\\ \hline \end{tabular} \caption{\label{T:cADE} Maximum likelihood (ML) parameters and constraints (mean and the 68\% C.L. lower and upper limits) for the cADE model with different data sets. $\Delta\chi^2$ values for ML cADE model are quoted relative to the ML $\Lambda$CDM model for the same data set. 
$\Delta\chi^2_{P}$ reflects the contribution of the Planck CMB datasets involved in each case: for the -P18Pol case this includes the TT, lowl, and lowE likelihoods, while for P$18\rightarrow15$ this employs the Planck 15 versions of all likelihoods (see \S~\ref{sec:data}). For comparison, the $H_0$ and $S_8 \equiv \sigma_8(\Omega_m/0.3)^{1/2}$ values for the $\Lambda$CDM model are also presented. } \end{table*}

\begin{figure} \centering \includegraphics[width=\columnwidth]{paper_22_cADE+default+-18ACT.pdf} \caption{ \label{fig:cADE-triangle1} The marginalized joint posterior of parameters of the cADE model for data sets that highlight the impact of ACT and the 2018 update to the Planck data. The darker and lighter shades correspond respectively to the 68\% C.L. and the 95\% C.L. } \end{figure}

\begin{figure} \centering \includegraphics[width=\columnwidth]{figure_P18+ACT_ADE18+ACT.pdf} \caption{\label{fig:P18+ACTresiduals+ACT} CMB residuals of Planck 2018 (orange points) and ACT (blue points) data and the ML cADE model (red curve) with respect to the ML $\Lambda$CDM model (black line), both optimized to All data. The gray vertical lines indicate the positions of the acoustic peaks in the ML $\Lambda$CDM:All model. } \end{figure}

\subsection{All data} \label{Sec:All} We begin with results for the All data combination and the cADE model. In Fig.~\ref{fig:cADE-triangle1}, we show the constraints on the additional cADE parameters $f_c$ and $a_c$ as well as their impact on $H_0$. The mean value for $f_c$ is 2.8 standard deviations from zero, which we will refer to as a $2.8\sigma$ detection, and its distribution is strongly correlated with that of $H_0$. In Tab.~\ref{T:cADE} we also show the maximum likelihood (ML) parameters, notably $H_0=70.25$ in cADE vs.~68.23 in $\Lambda$CDM, as well as the improvement in fit over $\Lambda$CDM, a total of $\Delta \chi^2_{\rm tot} = -11.5$ for 2 additional parameters. The portions that come from the Planck CMB power spectra, $\Delta \chi^2_{\rm P} = -0.2$, and from ACT, $\Delta \chi^2_{\rm ACT}=-1.8$, reflect a slightly better fit to the CMB power spectra than $\Lambda$CDM. Note that the ML value for $a_c$ is near matter-radiation equality. Since the ML of a class of models depends on the dataset it is optimized to, from this point forward we refer to such models as, e.g., ML cADE:All and ML $\Lambda$CDM:All. In Fig.~\ref{fig:P18+ACTresiduals+ACT}, we show the model and data residuals, both for Planck and ACT, of the ML cADE:All model relative to the ML $\Lambda$CDM:All model. The residuals are shown in units of $\sigma_{\rm CV}$, the cosmic variance error per multipole moment for the ML $\Lambda$CDM:All model \begin{equation} \sigma_{\rm CV} = \begin{cases} \sqrt{\frac{2}{2\ell+1}} C_\ell^{TT}, & {\rm TT} \,;\\ \sqrt{\frac{1}{2\ell+1}} \sqrt{ C_\ell^{TT} C_\ell^{EE} + (C_\ell^{TE})^2}, & {\rm TE} \,;\\ \sqrt{\frac{2}{2\ell+1}} C_\ell^{EE}, & {\rm EE} \,. \\ \end{cases} \end{equation} In spite of the higher $H_0$, the ML cADE:All model closely matches the ML $\Lambda$CDM:All model for all spectra and relevant multipoles. Along the $f_c-H_0$ degeneracy, $f_c$ adjusts the CMB sound horizon scale to match the acoustic peak positions, while $a_c$ near matter-radiation equality allows the damping tail of the CMB to match as well. Note that the ACT data provide a new test for this class of solution, which it successfully passes, by supplying more sensitive polarization constraints than Planck in the damping tail $\ell \gtrsim 10^3$, as we shall discuss in the next section.
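For reference, the normalization above can be evaluated in a few lines of code; this is a minimal sketch in which the $C_\ell$ arrays stand in for the ML $\Lambda$CDM:All theory spectra:
\begin{verbatim}
import numpy as np

def sigma_cv(ell, cl_tt, cl_te, cl_ee):
    # per-multipole cosmic variance of the TT, TE and EE spectra
    nu = 2.0 * ell + 1.0
    s_tt = np.sqrt(2.0 / nu) * cl_tt
    s_te = np.sqrt(1.0 / nu) * np.sqrt(cl_tt * cl_ee + cl_te**2)
    s_ee = np.sqrt(2.0 / nu) * cl_ee
    return s_tt, s_te, s_ee

# residuals as plotted: (cl_model - cl_lcdm) / sigma_cv(...)
\end{verbatim}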
Since adding an extra Jeans-stable energy density component drives CMB acoustic oscillations and changes the heights of the peaks, small variations in $\Omega_c h^2, \Omega_b h^2, n_s$ are required as well. We shall see below that a crucial test that distinguishes cADE and related explanations of the Hubble tension is the imperfect compensation in the polarization, especially at intermediate multipoles that correspond to modes that cross the horizon near $a_c$ \cite{Lin:2019qug}. Relatedly, as shown in Tab.~\ref{T:cADE}, the higher $\Omega_c h^2$ and $H_0$ values exacerbate the high $S_8=\sigma_8 (\Omega_m/0.3)^{1/2}$ values of $\Lambda$CDM, so that accurate measurements of local structure test these scenarios as well~\cite{Ivanov:2020ril,Hill:2020osr,DAmico:2020ods,Niedermann:2020qbw}.

\begin{figure} \centering \includegraphics[width=\columnwidth]{figure_residual_ADE18+ACTvv.pdf} \caption{\label{fig:CMBresiduals+ACT} Planck CMB data residuals and the ML cADE, ADE and EDE models relative to the ML $\Lambda$CDM model, all optimized to All data. The gray vertical lines indicate the positions of the acoustic peaks in the ML $\Lambda$CDM:All model. } \end{figure}

\begin{figure} \centering \includegraphics[width=\columnwidth]{chi2cum_TT_ALL+-POL.pdf} \caption{\label{fig:chi2cum_TT_ALL+-POL} Cumulative $\Delta\chi^2_{\rm P}$ for the Planck 18 TT+lowl+lowE likelihoods between ML cADE and ML $\Lambda$CDM models, optimized to either All or -P18Pol data. Note that All includes the Planck 18 TTTEEE likelihood whereas -P18Pol does not. } \end{figure}

\subsection{ACT impact} \label{Sec:MinusACT} The ACT data provide better constraints than Planck on the CMB EE polarization spectrum at $\ell \gtrsim 10^3$, as well as competitive TE and corroborating TT constraints in this range. The former provides new tests of the cADE model, as shown in Fig.~\ref{fig:P18+ACTresiduals+ACT}. On the data side, it is notable that the Planck TT residuals relative to $\Lambda$CDM, which oscillate with the acoustic peaks (gray lines) at $\ell \gtrsim 10^3$, are echoed in the ACT data, albeit at lower significance. We shall see below that, were it not for Planck polarization constraints at lower multipoles, the cADE fit to these oscillatory TT residuals would drive $H_0$ even higher. The additional constraining power of ACT polarization at high $\ell$ reduces the model freedom there and slightly shifts the compensation in acoustic driving toward higher $a_c$ and lower $\Omega_b h^2$. This change in $a_c$ can be seen in Fig.~\ref{fig:cADE-triangle1}, where we also show the impact of removing ACT data. On the other hand, the ability to raise $H_0$ is nearly unchanged. Interestingly, the ACT TE data are not in good agreement with the Planck data, as noted in \cite{Aiola:2020azj} and attributed to a $\sim 5\%$ calibration difference, leading to $\Lambda$CDM parameter discrepancies at the $2.7\sigma$ level. In Fig.~\ref{fig:P18+ACTresiduals+ACT}, we see that the Planck TE data have residuals that oscillate with the acoustic frequency when compared with $\Lambda$CDM, whereas the ACT TE data do not. The ML cADE:All model attempts to compensate but must then compromise on the fit to the high-$\ell$ power spectra. This tradeoff has important implications for the comparison of cADE and $\Lambda$CDM, as well as cADE and alternate models that add extra energy density near matter-radiation equality. This data discrepancy also motivates the study of the impact of Planck polarization data below.
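Cumulative $\Delta\chi^2$ curves such as Fig.~\ref{fig:chi2cum_TT_ALL+-POL} are assembled by accumulating the likelihood contributions up to a given multipole. A schematic sketch follows; it assumes statistically independent bandpowers, whereas the actual Planck likelihood includes bin correlations and nuisance parameters, so this is illustrative only:
\begin{verbatim}
import numpy as np

def cumulative_delta_chi2(resid_model, resid_lcdm, sigma):
    # running sum of per-bandpower chi^2 differences (model minus LCDM);
    # resid_* are model-minus-data bandpowers, sigma the bandpower errors
    chi2_model = (resid_model / sigma)**2
    chi2_lcdm = (resid_lcdm / sigma)**2
    return np.cumsum(chi2_model - chi2_lcdm)
\end{verbatim}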
\subsection{Planck impact} \label{Sec:Planck} \subsubsection{Planck 2015 vs.~2018 data} We start with the impact of the Planck 2018 data relative to the older 2015 release studied in \cite{Lin:2019qug} by reverting to the 2015 data and removing the ACT data in the P18$\rightarrow$15,-ACT combination. The main difference in the updated Planck data is the better polarization data and control over systematics, which makes both the TE and EE data important tests of the cADE model. In Fig.~\ref{fig:cADE-triangle1} and Tab.~\ref{T:cADE}, we see that the main impact on cADE is a slight reduction of its ability to raise $H_0$ and a shift to lower $a_c$ that is countered by the ACT data in the All combination. This mild tension reflects the competition between fitting the high multipole spectra of both Planck and ACT and the intermediate multipole ($\ell \sim 500$) range of the Planck TEEE data. The latter is a critical test of the cADE scenario since the perturbation scales associated with these multipoles cross the horizon near matter-radiation equality and are highly sensitive to changes in the manner in which the acoustic oscillations are driven. Polarization data represent a cleaner test than temperature data since they lack the smoothing effects of the Doppler and integrated Sachs-Wolfe contributions. On the other hand, as we have seen, Planck and ACT disagree somewhat on the TE spectrum in this range. In Fig.~\ref{fig:CMBresiduals+ACT} we highlight the Planck 2018 data residuals and ML cADE:All model residuals, both relative to the ML $\Lambda$CDM:All model. Notice again the oscillatory residuals in TE and the features in cADE that respond to these residuals, as well as the features in EE at $\ell \lesssim 600$. Furthermore, because of the ability to adjust Planck foregrounds, the overall amplitude of the TT data residuals, which have foregrounds fixed to the best fit to Planck 18 alone for visualization in Fig.~\ref{fig:CMBresiduals+ACT}, is low compared with the models. To better isolate the regions of the data that impact the models the most, we also show the cumulative $\Delta \chi^2_{\rm P}$ contributed by the Planck TT+lowl+lowE data in Fig.~\ref{fig:chi2cum_TT_ALL+-POL} for the ML cADE:All model relative to the ML $\Lambda$CDM:All model. While the ML cADE:All model successfully minimizes differences with $\Lambda$CDM, there are notable regions where the $\Delta\chi^2_P$ changes rapidly: $\ell \sim 500$, $800$, $1400$. Note that the latter two regions are near the 3rd and 5th TT acoustic peaks and are related to the oscillatory TT residuals. We shall next see that these areas reflect the trade-off between fitting the high $\ell$ power spectra of Planck and ACT and the intermediate scale polarization spectra of Planck.

\begin{figure} \centering \includegraphics[width=\columnwidth]{paper_22_cADE+ALL+-.pdf} \caption{\label{fig:show_22_cADE+ALL+-} The marginalized joint posterior of parameters of the cADE model for data sets that highlight the role of Planck 2018 acoustic polarization data and SH0ES $H_0$ measurements. The darker and lighter shades correspond respectively to the 68\% C.L. and the 95\% C.L. } \end{figure}

\begin{figure} \centering \includegraphics[width=\columnwidth]{figure_residual_ADE18-POL+ACT.pdf} \caption{\label{fig:CMBresiduals-POL+ACT} Planck CMB data residuals and the ML cADE, ADE and EDE models relative to the ML $\Lambda$CDM model, all optimized to the -P18Pol data. The gray vertical lines indicate the positions of the acoustic peaks in the ML $\Lambda$CDM:-P18Pol model.
} \end{figure}

\subsubsection{Planck polarization impact} \label{Sec:MinusPlanckPol} The crucial role of intermediate scale TE and EE data in distinguishing models and the discrepancy between the TE calibrations of Planck and ACT motivate a more direct examination of the impact of Planck polarization data. In Fig.~\ref{fig:show_22_cADE+ALL+-} and Tab.~\ref{T:cADE}, we show the cADE parameters and $H_0$ constraints without the Planck 2018 acoustic polarization data but including acoustic polarization data from ACT, in the -P18Pol case. Notice that the ability to raise $H_0$ increases to $H_0=71.38$ for the ML cADE:-P18Pol model, and the total $\Delta \chi^2_{\rm tot}$ improvement over $\Lambda$CDM rises to $-19.4$. Correspondingly, a finite $f_c$ is preferred at $\sim 3.7\sigma$ and its ML value increases to $f_c=0.105$. The fits to the remaining Planck CMB power spectra and to the ACT temperature and polarization data correspondingly improve by $-4.3$ each. The transition scale factor $a_c$ can also further increase in value, especially at lower $f_c$. In Fig.~\ref{fig:CMBresiduals-POL+ACT}, we show how the ML cADE:-P18Pol model fits residuals in the Planck TT data relative to the ML $\Lambda$CDM:-P18Pol model. Notice that the cADE model now responds to the oscillatory TT residuals. In Fig.~\ref{fig:chi2cum_TT_ALL+-POL}, we see that the main cumulative TT improvement comes from $\ell \gtrsim 1400$. On the other hand, Planck polarization data at intermediate scales $(\ell \sim 500)$ strongly disfavor this solution. In Fig.~\ref{fig:chi2cum_TTTEEE_ALL+-POL}, we compare the cumulative Planck TTTEEE $\Delta\chi^2$ for the ML cADE:All model vs.~the ML cADE:-P18Pol model, both relative to their respective ML $\Lambda$CDM models.\footnote{The value of the cumulative $\Delta \chi^2_{\rm P}({\rm data})$ at the highest $\ell$ matches the values in Tab.~\ref{T:cADE} only for the cases where the optimization matches the data, i.e.~ML cADE:-P18Pol in Fig.~\ref{fig:chi2cum_TT_ALL+-POL} and ML cADE:All in Fig.~\ref{fig:chi2cum_TTTEEE_ALL+-POL}.} While the former remains flat, reflecting an equally good fit for cADE, the latter encounters a sharp degradation in the fit just below $\ell \sim 500$ and a more gradual degradation between $\ell \sim 500$ and $1000$. The first degradation is associated with features in the EE spectrum, and the second receives contributions from the uniformly low TE spectrum in Fig.~\ref{fig:CMBresiduals-POL+ACT}. Since the Planck polarization data are far from cosmic variance limited, even just statistically, future data in this region can provide a sharp test of cADE and distinguish it from alternatives.

\begin{figure} \centering \includegraphics[width=\columnwidth]{chi2cum_TTTEEE_ALL+-POL.pdf} \caption{\label{fig:chi2cum_TTTEEE_ALL+-POL} Cumulative $\Delta\chi^2_{\rm P}$ for the Planck 18 TTTEEE +lowl+lowE likelihoods between ML cADE and ML $\Lambda$CDM models, optimized to either All or -P18Pol data. Note that -P18Pol replaces the TTTEEE with the TT likelihood. } \end{figure}

\subsection{SH0ES impact} \label{Sec:MinusH0} Given the highly significant $H_0$ tension in $\Lambda$CDM, it is interesting to ask whether the preference for a higher $H_0$ in cADE simply reflects the SH0ES $H_0$ data. In Fig.~\ref{fig:show_22_cADE+ALL+-} and Tab.~\ref{T:cADE}, we show the impact of removing these data. Notice that although the ML value of $H_0$ drops to 69.19, the cADE constraints still allow a non-Gaussian tail toward the higher $H_0$ values that are compatible with the All data.
The ML cADE:-H0 model remains a better fit to both the Planck and ACT temperature and polarization data than the ML $\Lambda$CDM:-H0 model, which has a lower $H_0=67.8$. In this case, finite values of the cADE parameter $f_c$ are no longer significantly preferred. Since all cADE models become indistinguishable from $\Lambda$CDM in the limit $f_c\rightarrow 0$, there is a large prior parameter volume associated with the poorly constrained $a_c$ that favors $\Lambda$CDM, pulling the posterior probability of $H_0$ to lower values and skewing the distribution.

\begin{table*} \centering \begin{tabular}{c|cccccc} \hline & $\Delta\chi^2_{\rm tot}$ & $H_0$ & $f_c$ & $\log_{10}a_c$ & $w_{\rm f}$ & $c_s^2$ or $\Theta_i/\pi$ \\ \hline {\rm ADE}\ (ALL) & -14.0 & 70.25(69.67$^{+0.93}_{-0.97}$) & 0.061(0.055$^{+0.028}_{-0.030}$) & -3.60(-3.57$^{+0.20}_{-0.12}$) & 0.55(1.37$^{+0.37}_{-1.09}$) & 0.70(0.87$\pm$0.29) \\ EDE (ALL) & -16.6 & 71.03(71.14$^{+0.98}_{-0.99}$) & 0.056(0.061$^{+0.018}_{-0.017}$) & -3.71(-3.68$^{+0.09}_{-0.07}$) & 0.5(fixed) & 0.94($>$0.84) \\ \hline {\rm ADE}\ (-ACT) & -11.9 & 70.55 & 0.074 & -3.61 & 0.68 & 0.80 \\ EDE (-ACT) & -13.7 & 71.61 & 0.068 & -3.80 & 0.5(fixed) & 0.92 \\ \hline {\rm ADE}\ (-P18Pol) & -23.7 & 72.11 & 0.103 & -3.51 & 0.57 & 0.85 \\ EDE (-P18Pol) & -26.1 & 73.07 & 0.100 & -3.65 & 0.5(fixed) & 0.90 \\ \hline {\rm ADE}\ (-H0) & -3.9 & 69.18 & 0.049 & -3.58 & 0.81 & 0.71 \\ EDE (-H0) & -4.0 & 70.11 & 0.044 & -3.69 & 0.5(fixed) & 0.94 \\ \hline {\rm ADE}\ (P18$\rightarrow$15,-ACT) & -14.1 & 70.81(70.20$^{+0.87}_{-0.88}$) & 0.086(0.079$\pm$0.033) & -3.52(-3.50$^{+0.14}_{-0.08}$) & 0.87(1.89$^{+0.85}_{-1.07}$) & 0.86(1.07$^{+0.30}_{-0.20}$) \\ EDE (P18$\rightarrow$15,-ACT) & -16.6 & 71.92(71.40$^{+1.07}_{-1.05}$) & 0.074(0.064$^{+0.020}_{-0.018}$) & -3.72(-3.72$^{+0.10}_{-0.07}$) & 0.5(fixed) & 0.90($>$0.82)\\ \hline \end{tabular} \caption{\label{T:ADEs} ML parameters and constraints (mean and the 68\% C.L. lower and upper limits) for the cADE, {\rm ADE}, and EDE models with different data sets. $\Delta\chi^2_{\rm tot}$ values are quoted relative to the ML $\Lambda$CDM model for the same data set. The column labeled ``$c_s^2$ or $\Theta_i/\pi$'' indicates $c_s^2$ for {\rm ADE}\ and $\Theta_i/\pi$ for EDE. Since the boundary $\Theta_i/\pi=1$ is consistent with the data, we have quoted the 1-sided 68\% C.L. lower interval from this boundary. Both {\rm ADE}\ and EDE have four parameters in addition to $\Lambda$CDM, but the $w_{\rm f}$ value of EDE is crudely optimized by setting it to the value that best resolves the $H_0$ tension, following \cite{Poulin:2018dzj}. } \end{table*}

\subsection{Distinguishing model alternatives} \label{Sec:Models} As we have seen in the previous sections, intermediate scale polarization data are crucial for limiting the ability of the cADE model to raise $H_0$, as well as for distinguishing it from $\Lambda$CDM. This is because differences in acoustic driving are most manifest for modes that cross the horizon while the additional energy density is important, and the signatures are clearer in the polarization than in the temperature spectra due to the lack of other contaminating effects. Intermediate scale polarization is equally important for distinguishing cADE from the wider class of ADE models or EDE models. In Tab.~\ref{T:ADEs} we show results for these wider classes; the joint posteriors of the common parameters, along with $H_0$, are shown in Fig.~\ref{fig:show_22_ADE+default18+ACT}.
The ADE chains are converged at the $R-1<0.04$ level, reflecting degeneracies among poorly constrained parameters. Note that both models possess 4 additional parameters, but for EDE we have followed \cite{Poulin:2018dzj} in crudely optimizing $w_{\rm f}$ by setting it to $w_{\rm f} =0.5$. In the ADE case, the ML $H_0$ value remains at $H_0=70.25$, as in cADE, while in the EDE case it rises to $71.03$. The total $\Delta \chi_{\rm tot}^2$ for the All dataset also improves from $-11.5$ to $-14.0$ (ADE) and $-16.6$ (EDE) for 2 additional parameters. The All dataset therefore does not strongly favor either increase in model complexity. Note that because of the large parameter volume in {\rm ADE}\ near $f_c\rightarrow 0$, the posterior of $H_0$ in that case is strongly pulled by the prior to lower values than the ML value. For EDE, notice that in the scalar field interpretation $\Theta_i/\pi=1$ corresponds to a field whose initial value is at the top of the potential, and the data require a moderate tuning toward this boundary value \cite{Lin:2019qug}. Tab.~\ref{T:ADEs} also displays the ML models for the various other data combinations discussed above. The trends are similar to those discussed for cADE. In addition, for {\rm ADE}, the All data favor a lower value of $w_{\rm f}$, due mainly to the ACT data, as compared with the previous P15-based results from Ref.~\cite{Lin:2019qug}. More interestingly for the future, Figs.~\ref{fig:P18+ACTresiduals+ACT} and \ref{fig:CMBresiduals-POL+ACT} show that the current compromises between fitting the high $\ell$ power spectra of Planck and ACT vs.~the intermediate scale Planck polarization data are model dependent, especially in the polarization spectra at $\ell \lesssim 500$. Since the Planck data are far from cosmic variance limited in TEEE, better measurements in this regime can distinguish between the various alternatives for adding extra energy density around matter-radiation equality to alleviate the Hubble tension.

\begin{figure} \centering \includegraphics[width=\columnwidth]{paper_22_ADE+default18+ACT.pdf} \caption{\label{fig:show_22_ADE+default18+ACT} The marginalized joint posterior of parameters of different models for the All data. The darker and lighter shades correspond respectively to the 68\% C.L. and the 95\% C.L. The 1-D $H_0$ distribution of the $\Lambda$CDM:All chain is also shown as the light blue curve. } \end{figure}

\section{Discussion} \label{Sec:Conclusions} The acoustic dark energy model, which is based on a canonical kinetic term for a scalar field that rapidly converts potential to kinetic energy around matter-radiation equality, alleviates the Hubble tension in $\Lambda$CDM and successfully passes new consistency tests in the CMB damping tail provided by the ACT data, while being increasingly constrained, and distinguished from alternate mechanisms, by the better intermediate scale polarization data from Planck. The best fit cADE model has $H_0=70.25$, compared with $68.23$ in $\Lambda$CDM, and a finite cADE component is preferred at the $2.8\sigma$ level. While this preference is driven by the SH0ES measurement of $H_0$ itself, even without these data the cADE model prefers a higher $H_0$ than $\Lambda$CDM. Intermediate scale $(\ell \sim 500)$ polarization data play a critical role in testing these and other scenarios where an extra component of energy density alters the sound horizon and damping scale of the CMB.
Such components also drive CMB acoustic oscillations, leaving particularly clear imprints on the polarization of modes that cross the horizon around matter-radiation equality. Were it not for the Planck 2018 polarization data, the ML cADE model would have $H_0=71.38$ and more fully resolve the Hubble tension. Intriguingly, the ACT TE data do not agree with the Planck TE data in their normalization \cite{Aiola:2020azj}, and in cADE the two data sets drive moderately different parameter preferences, especially for the epoch $a_c$ at which the relative ADE energy density peaks. In the wider class of non-canonical acoustic dark energy (ADE) or early dark energy (EDE) models, which differ in the manner in which acoustic oscillations are driven, polarization data at these scales are critical for distinguishing models, with the current freedom allowing even larger ML values of $H_0 \sim 70$--$71$ with, and $71$--$73$ without, Planck polarization, albeit with two additional parameters. Given the current statistical and systematic errors in measurements, future intermediate scale polarization data can provide even more incisive tests of the cADE model and its alternatives for resolving the Hubble tension. \acknowledgments MXL and WH are supported by U.S.~Dept.~of Energy contract DE-FG02-13ER41958 and the Simons Foundation. MR is supported in part by NASA ATP Grant No. NNH17ZDA001N, and by funds provided by the Center for Particle Cosmology. Computing resources are provided by the University of Chicago Research Computing Center through the Kavli Institute for Cosmological Physics at the University of Chicago. \vfill
\section{Introduction} The emergence of nonfullerene acceptors~(NFAs) has pushed bulk-heterojunction organic photovoltaics~(OPVs) to record efficiencies close to 20\%.\cite{Cui2020,Luo2020} However, despite the wide variety of materials that are now available, there are still only a few systems that maintain their full performance at junction thicknesses of~\unit[300]{nm} and more.\cite{Armin2018,Sun2018,Jin2017} Compatibility with such thick active layers, as well as a general tolerance to thickness variations, is considered an important prerequisite for OPVs to become commercial reality.\cite{Meredith2018,Clarke2014} The main problem with increasing thickness is the slowdown of charge collection, which makes photogenerated carriers more vulnerable to recombination.\cite{Bartesaghi2015,Kaienburg2016,Neher2016} This is particularly true for NFA-based systems, which typically have low carrier mobilities of~$10^{-9}$ to $\unit[10^{-8}]{m^2\,V^{-1}s^{-1}}$. Hence, in order to compensate for the limitations in transport, it has become extremely important to find strategies by which charge recombination can be suppressed. Despite considerable efforts, there is still no agreement on the key factors that determine the recombination strength.\cite{Goehler2018,Lakhwani2014} Conceptually, free charge recombination is a bimolecular process and can be described by the rate equation~$R = k_2 n^2$, where $k_2$ is the rate constant and $n$ the carrier density. The parameter~$k_2$ is often compared with the homogeneous Langevin model, \begin{equation} k_L = \frac{q}{\varepsilon\varepsilon_0} (\mu_n + \mu_p), \label{eq:Langevin} \end{equation} where $q$ is the elementary charge, $\varepsilon\varepsilon_0$ the dielectric permittivity, $\mu_n$ the electron mobility and $\mu_p$ the hole mobility. Although there are a number of systems in which~$k_2$ is significantly reduced compared to Langevin recombination~($k_2 = \zeta k_L$, where $\zeta < 1$), the reduction is usually not great enough to ensure thickness-insensitive device performance. It is generally accepted that the reduction factor~$\zeta$ is affected by the blend morphology, but the details remain controversial. For example, while some authors relate~$\zeta$ to the phase separation between the donor and acceptor,\cite{Heiber2015,Coropceanu2017,Groves2008,Koster2006,Pivrikas2005} others highlight the importance of phase purity and molecular order.\cite{Cha2019,Wilken2020b,Burke2015,Nyman2015} The difficulties of manipulating the morphology in a controlled way make experimental clarification a complex task. Among the benchmark systems for reduced recombination are blends based on the classical polymer poly(3-hexyl\-thiophene)~(P3HT). Both with fullerenes and NFAs, reduction factors as low as~$10^{-4}$ have been reported.\cite{Ferguson2011,Pivrikas2005,Gasparini2017} In particular, the availability of NFAs that complement the absorption of P3HT and minimize voltage losses due to a fine-tuned energy level alignment has led to a remarkable renaissance of P3HT in OPV~research.\cite{Khan2019,Holliday2016,Gasparini2017,Baran2017,Xiao2017,Wu2015,Wu2019} One special feature of P3HT-based OPVs is that they develop their full performance only after the application of post-deposition treatments such as thermal and solvent annealing.
The annealing induces phase separation and crystallization, thereby transforming the active layer into an ``optimized'' morphology in terms of charge generation and transport.\cite{Kniepert2014,Deibel2008,Li2005} However, a closer look at the literature reveals that the connection between the morphological changes and the recombination is much less clear. In the Supporting Information~(Table~S1 and Figure~S1) we collected 16 studies on annealed blends of P3HT and phenyl-\ce{C61}-butyric acid methyl ester~(PCBM), which is the most studied system to date. Even for nominally equally processed devices, reported reduction factors~$\zeta$ span 3~orders of magnitude. Although the variation may be partly explained by different measurement methods, it points towards an underlying structure--property relationship that is yet to be revealed. Understanding this relation, or simply answering the question ``What makes P3HT perform so well?'', will help in designing new materials for commercially relevant OPVs. In this article, we revisit the P3HT:PCBM~blend to determine the key morphological features that suppress charge recombination in OPVs. Specifically, we consider two model morphologies that behave similarly with respect to generation and transport, but show clear differences in recombination strength. By combining electron tomography, optical spectroscopy and electrical measurements, we draw clear connections between morphology and the thickness-dependent competition between charge collection and recombination. We find that the presence of aggregates of high crystalline quality and purity is crucial for OPVs to function as Shockley-type devices without recombination losses even at high thickness. In contrast, the experiments show that the size of the phase-separated domains is of secondary importance. Our results suggest that delocalization along conjugated chain segments is key to allowing charge-transfer pairs formed by the encounter of free electrons and holes to re-dissociate rather than to recombine into the ground state. This provides clear design rules for future OPV~materials.

\section{Results} \subsection{Sample Systems and Device Performance} \label{sec:systems}

\begin{figure*}[t] \includegraphics[width=\textwidth]{figure01.pdf} \caption{Thickness-dependent performance of CB and DCB~devices. (a,b)~$j$--$V$ curves under simulated sunlight for different active-layer thicknesses. See Table~S2 in the Supporting Information for the full data set. (c)~Measured $j_\text{sc}$~(symbols) together with the result of transfer-matrix calculations~(dashed lines) assuming a constant IQE of~$70\%$. (d)~White-light bias-dependent EQE for 300-nm thick devices.} \label{fig:figure01} \end{figure*}

Throughout the following, we compare P3HT:PCBM films in 1:1~blend ratio that were spin-coated from either chlorobenzene~(CB) or 1,2-dichlorobenzene~(DCB) and subsequently thermally annealed. The main effect of the solvent is on the drying rate. While films from CB dried rapidly during spin coating, those from DCB were still wet after deposition and were left to dry slowly before the thermal anneal was applied. We implemented the differently prepared blends into inverted solar cells~(substrate\slash{}cathode\slash{}active layer\slash{}anode) and varied the thickness~$L$ of the active layer. Figure~\ref{fig:figure01}a,b and Table~S2 in the Supporting Information summarize the thickness-dependent current--voltage~($j$--$V$) characteristics under simulated solar illumination.
Clearly, only the slowly grown DCB~devices maintain their performance as $L$ is increased from around~50 to over~\unit[300]{nm}. In contrast, the photocurrent of the CB~devices becomes increasingly voltage-dependent with increasing thickness, which leads to a clear drop in the short-circuit current~($j_\text{sc}$) and fill factor~(FF). Figure~\ref{fig:figure01}c shows that the photocurrent of the DCB~devices is well described by an optical transfer-matrix model assuming an internal quantum efficiency~(IQE) independent of thickness. We determined the IQE experimentally using the external quantum efficiency~(EQE) measured with a lock-in technique~(see Figure~S3 in the Supporting Information). In the case of the CB~devices, this method fails to explain the drop of $j_\text{sc}$ for~$L > \unit[150]{nm}$. The reason is that the EQE decreases strongly with increasing light intensity, which can be seen by adding bias illumination to the low-intensity probe in the EQE measurement~(Figure~\ref{fig:figure01}d). Hence, the poor performance is not an inherent property of the thick CB~devices, but develops gradually with increasing carrier density. This aspect is further elaborated in Figures~S4 and~S5 in the Supporting Information, where light-intensity-dependent $j$--$V$~curves and EQE~spectra are shown. Such behavior is usually attributed to bimolecular recombination, often accompanied by space-charge effects.\cite{Wilken2020} Importantly, at low intensity, all CB and DCB devices display a rather high~IQE of about 70\%. This illustrates that free charge carriers are efficiently generated in both systems, while the differences lie in how the collection of those carriers competes with recombination.\cite{Bartesaghi2015,Neher2016} Having shown that seemingly small details in the blend preparation have a drastic effect on the thickness-dependent device performance, we now want to establish relationships with the morphology. For this purpose, we will first present a detailed characterization of the relevant structural features, that is, phase separation, aggregation and phase purity. We then use mobility measurements, as well as transient photocurrent and photovoltage studies, to relate these properties to the kinetics of charge collection and recombination.

\subsection{Phase Separation} \label{sec:tem} To investigate the phase separation in real space, we used high-resolution transmission electron microscopy~(TEM) and tomography~(Figure~\ref{fig:figure02}). Blend films of $\unit[{\sim}200]{nm}$ thickness were selected for this analysis because they are representative of the thickness series and still exhibit sufficiently high electron transparency.\cite{Oosterhout2009,vanBavel2009b} Figures~\ref{fig:figure02}a and \ref{fig:figure02}c show regular bright-field TEM images, which reveal significant differences at various length scales. While the CB~sample exhibits no distinct structural features on the micrometer scale, alternating regions of bright and dark contrast can be seen for the DCB~sample. Atomic force microscopy~(Figure~S6, Supporting Information) confirms that these alternations are due to the surface topography~(i.e., height variations) rather than the blend composition.
Independent of the substrate used, much smoother films with a root-mean-square roughness of $\unit[{\sim}1]{nm}$ were obtained by rapid drying~(CB), as compared to $\unit[{\sim}10]{nm}$ in the case of slow drying~(DCB), in agreement with previous works.\cite{Li2005,Turner2011}

\begin{figure*}[t] \includegraphics[width=\linewidth]{figure02.pdf} \caption{Morphology of \unit[200]{nm} thick P3HT:PCBM films processed from CB~(panels a,b and e) and DCB~(panels c,d and f), respectively. (a,c)~Regular bright-field TEM images of free-standing films. Scale bar: \unit[300]{nm}. (b,d)~Exemplary slices through electron tomographic reconstructions parallel~(top) and perpendicular~(bottom) to the film plane. The color coding represents the brightness value of a certain pixel, which decreases from red through green to blue. Scale bar:~\unit[100]{nm}. (e,f)~Representative volume elements, reassembled from the $xy$~slices after binarization. See also Movie~S1 in the Supporting Information.} \label{fig:figure02} \end{figure*}

Instead, image contrast on the nanometer scale contains information about the phase separation. Because of the lower density of P3HT~($\unit[{\sim}1.1]{g\,cm^{-3}}$) compared to PCBM~(\unit[1.3]{g\,cm$^{-3}$}), bright areas can be assigned to polymer domains and dark areas to fullerene domains.\cite{Ma2007,Moon2009,vanBavel2009,Dutta2011} The regular TEM~images already suggest a coarser phase separation~(i.e., larger domains) for the CB~cast blend films. However, as these are two-dimensional~(2D) projections of a three-dimensional~(3D) network, they are insensitive to possible vertical gradients\cite{vanBavel2009,Campoy-Quiles2008}. To overcome this limitation, we used electron tomography, which yields a volumetric reconstruction of the blend film. Figures~\ref{fig:figure02}b and~\ref{fig:figure02}d show exemplary slices through the tomograms parallel~($xy$~direction) and perpendicular~($xz$~direction) to the film plane. The tomographic data confirm the trend from the regular TEM imaging. While the CB~sample exhibits well-separated and homogeneously distributed domains on the order of \unit[10]{nm}, the DCB~sample shows a much finer phase separation with a higher degree of interpenetration between the P3HT and PCBM domains. From the $xz$~slices, a columnar structure with transport paths towards the bottom and top of the film can be seen in both cases. Figures~\ref{fig:figure02}e and \ref{fig:figure02}f give an impression of the 3D phase-separated morphology. For this representation, the slices were binarized by attributing each pixel to either the P3HT or the PCBM phase~(see also Movie S1 in the Supporting Information).

\begin{figure}[t] \includegraphics{figure03.pdf} \caption{Morphological features derived from the electron tomograms. Upper panel: Domain size~$d$ estimated from the width of the self-correlation peak of the radially averaged 2D~autocorrelation function~(see the Supporting Information). Lower panel: Interfacial-area-to-volume ratio of the P3HT phase calculated using Minkowski functionals. Dashed lines indicate the average values.} \label{fig:figure03} \end{figure}

For a quantitative statistical analysis of the structural features, we computed the 2D~autocorrelation using the fast Fourier transform~(FFT) algorithm, as detailed in the Supporting Information; a minimal code sketch of this estimate is given below. The upper panel of Figure~\ref{fig:figure03} shows the domain size~$d$, estimated from the width of the autocorrelation function, versus the vertical position.
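The sketch below illustrates the radially averaged autocorrelation estimate; our full procedure, including the binarization and the conversion of the self-correlation width into $d$, is detailed in the Supporting Information, and the array names here are placeholders:
\begin{verbatim}
import numpy as np

def radial_acf(slice2d):
    # radially averaged 2D autocorrelation of one tomogram slice (FFT-based)
    img = slice2d - slice2d.mean()
    acf = np.fft.ifft2(np.abs(np.fft.fft2(img))**2).real
    acf = np.fft.fftshift(acf) / acf.max()  # unit self-correlation at zero lag
    ny, nx = acf.shape
    y, x = np.indices(acf.shape)
    r = np.hypot(x - nx // 2, y - ny // 2).astype(int)
    return np.bincount(r.ravel(), weights=acf.ravel()) / np.bincount(r.ravel())

# domain size d: lag at which the central peak has decayed, e.g. to 1/e
\end{verbatim}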
In the case of the rapidly dried sample~(CB), the domain size was found to be nearly constant throughout the film, with an average value of $d = \unit[9.6 \pm 0.2]{nm}$. In the case of the slowly grown sample~(DCB), the domain size increased slightly (from 5.3 to \unit[6.5]{nm}) towards the upper boundary, with an average of $d = \unit[5.9 \pm 0.4]{nm}$. From the binarized slices, we were further able to compute Minkowski functionals such as perimeter and area.\cite{Legland2007} We used these measures to estimate the interfacial-area-to-volume ratio~(Figure~\ref{fig:figure03}, lower panel). As one would expect, the finer phase separation of the DCB~sample is accompanied by a larger amount of interfacial area. We note that due to the 1:1 blend ratio by weight, the P3HT and PCBM phases are not equal in volume. From the ratio of bright to dark pixels, we estimate the P3HT volume fraction to be 0.57~(CB) and 0.54~(DCB), respectively, which fits well with the density ratio mentioned above. Furthermore, the slightly lower P3HT volume for the DCB sample hints at a higher degree of aggregation, as will be discussed in the next section.

\subsection{Aggregation and Phase Purity} \label{sec:absorption} The casting solvent influences not only the size of the phase-separated domains, but also the molecular order within them.\cite{Chu2008,Campoy-Quiles2008,Xie2012} To complement our TEM~studies with information about the internal domain structure, we used optical absorption spectroscopy~(Figure \ref{fig:figure04}). Generally, blend films cast from DCB showed more pronounced 0--0 and 0--1~vibronic features, suggesting a higher degree of ordering in the polymer phase.\cite{Brown2003,Tremel2014} It is known that the P3HT~phase in P3HT:PCBM~blend films consists of a mixture of amorphous and aggregated material, the latter formed by lamellar crystallites of 2D~conjugated sheets.\cite{Sirringhaus1999,Moule2008,Tremel2014} For a spectral decomposition of the absorption bands, we fitted the P3HT absorbance component to the model of weakly interacting H-aggregates by Spano and coworkers.\cite{Spano2005,Clark2007,Clark2009} The model describes the absorption in the ordered regions by a series of Gaussian peaks, determined by three parameters: the energy~$E_{0-0}$ of the 0--0 transition, the Gaussian bandwidth~$\sigma$, and the intramolecular free exciton bandwidth~$W$~(see Experimental Section for details).

\begin{figure}[t] \centering \includegraphics[width=\textwidth]{figure04.pdf} \caption{Decomposition of the absorption spectra of CB~(left) and DCB~(right) cast blends after thermal annealing. Solid lines are the P3HT component of the P3HT:PCBM absorbance. Dashed lines are fits of the Spano model to the aggregate region~(1.95 to \unit[2.25]{eV}), and shaded areas are the single Gaussian contributions (see Experimental Section for fitting details). Dotted lines represent the residual absorption attributed to amorphous P3HT.} \label{fig:figure04} \end{figure}

\begin{table} \caption{Results of the Spano analysis for CB and DCB cast blend films.} \begin{tabular}{cccc} \toprule Solvent & $E_{0-0}$ (eV) & $\sigma$ (meV) & $W$ (meV)\\ \midrule CB & 2.048 & 75 & 132 \\ DCB & 2.040 & 73 & 87\\ \bottomrule \end{tabular} \label{tab:Spano} \end{table}

Table~\ref{tab:Spano} and the dashed lines in Figure~\ref{fig:figure04} show the results of the Spano analysis for the spectral region dominated by aggregate absorption.
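For illustration, a minimal sketch of a commonly used form of the aggregate line shape is given below. The Huang--Rhys factor $S=1$ and the phonon energy $E_p \approx \unit[0.179]{eV}$ are typical literature values assumed here for simplicity, the small $W$-dependent shifts of the peak positions used in some formulations are neglected, and our exact fitting protocol is given in the Experimental Section:
\begin{verbatim}
import numpy as np
from math import exp, factorial

def spano_aggregate(E, E00, W, sigma, S=1.0, Ep=0.179, mmax=6):
    # aggregate absorbance as a W-modified Franck-Condon progression of
    # Gaussians centered at E00 + m*Ep (all energies in eV)
    A = np.zeros_like(E)
    for m in range(mmax):
        G = sum(S**n / (factorial(n) * (n - m))
                for n in range(mmax + 1) if n != m)
        c = (S**m / factorial(m)) * (1.0 - W * exp(-S) / (2.0 * Ep) * G)**2
        A += c * np.exp(-(E - E00 - m * Ep)**2 / (2.0 * sigma**2))
    return A

E = np.linspace(1.9, 2.6, 500)
fit = spano_aggregate(E, E00=2.040, W=0.087, sigma=0.073)  # DCB-like values
\end{verbatim}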
The main difference between the CB and DCB cast blends lies in the exciton bandwidth~$W$, which is known to scale inversely with the intramolecular order in the P3HT aggregates.\cite{Beljonne2000,Gierschner2009} In particular, $W$ has been correlated with the conjugation length, that is, the number of interacting thiophene repeat units. We can estimate the conjugation length using previous approaches,\cite{Gierschner2009,Turner2011,Pingel2010} which gives about 27~repeat units for the CB~blends and 40~repeat units for the DCB~blends. Since both systems were subjected to the same thermal annealing, the apparent differences can only be explained by the solvent evaporation rate. We hypothesize that the fast evaporation of CB does not give the P3HT enough time to organize into larger aggregates, and that thermal annealing cannot fully convert the disordered structure into a highly ordered one with an extended conjugation. This is different from the DCB~blends, where the slow drying already leads to the formation of aggregates of high crystalline quality.\cite{Turner2011} In contrast, $E_{0-0}$ and $\sigma$ are largely unaffected by the solvent. This means that the aggregates in the CB and DCB blends have very similar energetic properties, despite their significantly different extension along the conjugation direction. Interestingly, very similar trends with the solvent have recently been reported for a P3HT:NFA blend.\cite{Patel2019} Therefore, we assume that our results are generally valid for P3HT and similar semicrystalline systems. From the difference between measurement and Spano fit, we can also calculate the amorphous absorption component (Figure~\ref{fig:figure04}, dotted lines). Using the ratio between the two absorption fractions, and the extinction coefficients of P3HT in the aggregated and amorphous state,\cite{Clark2009} we estimate the aggregate percentage in the P3HT phase to be 45\%~(CB) and 51\%~(DCB), respectively. Even though we cannot resolve the aggregation in our tomography data, there is a direct link to these numbers: Because of the closer packing, aggregated P3HT has a higher density than amorphous P3HT.\cite{Kohn2013} Hence, at a given weight, a slightly smaller P3HT volume is expected in the DCB blends with the higher aggregate percentage. This is exactly the trend we derived from the tomography analysis. When blended with an acceptor, the P3HT aggregation is directly correlated with the phase purity. In particular, PCBM is known to be intermiscible with P3HT in the amorphous state, but not with P3HT in the aggregated state.\cite{Treat2011} If we assume that all amorphous P3HT is molecularly mixed with PCBM, we get a refined picture of the morphology consisting of roughly one third pure P3HT, one third pure PCBM and one third mixed phase. For a more accurate estimate, we can combine the P3HT volume from the tomography measurements with the amorphous P3HT fraction from the Spano analysis~(e.g., $0.57 \times (1 - 0.45) \approx 0.31$ for the CB~blends). This way we arrive at about 31\%~mixed phase in the CB~blends and 26\% in the DCB~blends, which lies in the range that has been reported for annealed P3HT:PCBM blends using high-resolution spectroscopic imaging.\cite{Pfannmoeller2011,Masters2015} Hence, we can conclude that in addition to the higher intramolecular order in the P3HT~aggregates, the DCB~blends also exhibit a higher overall phase purity. A minimal numerical sketch of this composition estimate is given below.
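The sketch combines the P3HT volume fractions from tomography with the aggregate fractions from the Spano analysis; all numbers are those quoted above, and the PCBM entry lumps pure and intermixed PCBM together:
\begin{verbatim}
# three-phase composition estimate from the numbers quoted in the text
for name, v_p3ht, f_agg in [("CB", 0.57, 0.45), ("DCB", 0.54, 0.51)]:
    v_pure_p3ht = v_p3ht * f_agg     # aggregated (pure) P3HT
    v_mixed = v_p3ht * (1 - f_agg)   # amorphous P3HT, assumed fully mixed
    v_pcbm = 1.0 - v_p3ht            # PCBM, pure and intermixed together
    print(f"{name}: pure P3HT {v_pure_p3ht:.2f}, "
          f"mixed {v_mixed:.2f}, PCBM {v_pcbm:.2f}")
\end{verbatim}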
\subsection{Charge Transport} Having shown that the CB and DCB~blends exhibit clear differences in their morphological features, we now turn to the electrical properties. To estimate the carrier mobilities, we used space-charge limited current~(SCLC) experiments. Figure~S8 in the Supporting Information shows $j$--$V$~curves of electron-only and hole-only devices whose active layers were prepared in the same way as for the solar cells. As the simplest and most robust model, we fitted the data in the SCLC~regime to the Mott--Gurney law, and Table~\ref{tab:SCLC} lists the resulting electron and hole mobilities. As the most important result, the magnitude of both $\mu_n$ and $\mu_p$ appears to be fairly independent of the solvent used. We further analyzed the data in terms of the Gaussian disorder model,\cite{Pasveer2005,Felekidis2018} which explicitly takes into account the hopping nature of transport. The derived hopping parameters given in Table~\ref{tab:SCLC}, that is, the Gaussian disorder~$\sigma$ and the attempt-to-hop frequency~$\nu_0$, also point to very similar transport properties of the CB and DCB~blends. Notably, while there is virtually no difference in the Gaussian disorder for electrons, the disorder for hole transport is $\unit[{\sim}15]{meV}$ smaller in the case of the DCB~samples. This shows once again that it is mainly the properties of the P3HT phase that are influenced by the solvent. Considering that the apparent transport characteristics are an average over aggregated and amorphous regions, the result is also in line with our hypothesis that the DCB~blends consist of slightly less disordered material. However, these favorable local transport properties apparently do not affect the \textit{macroscopic} mobility, which is known to be largely dominated by the transport through amorphous regions rather than within the aggregates.\cite{Pingel2010,Turner2011,Noriega2013} \begin{table} \caption{Charge transport parameters derived from SCLC measurements.} \begin{tabular}{lcccc} \toprule & \multicolumn{2}{c}{Electrons} & \multicolumn{2}{c}{Holes} \\ & CB & DCB & CB & DCB \\ \midrule Mobility, $\mu$ [$\unit{m^2\,V^{-1}s^{-1}}$] & $1.5 \times 10^{-7}$ & $8.9 \times 10^{-8}$ & $1.6 \times 10^{-8}$ & $1.3 \times 10^{-8}$\\ \addlinespace Gaussian disorder, $\sigma$ [meV] & 107 & 112 & 79 & 63 \\ Attempt-to-hop frequency, $\nu_0$ [$\unit{s^{-1}}$] & $3 \times 10^{12}$ & $2 \times 10^{12}$ & $9 \times 10^9$ & $7 \times 10^9$\\ \bottomrule \end{tabular} \label{tab:SCLC} \end{table} It is important to note that SCLC~diodes probe the transport characteristics of charges injected from the contacts. To check whether the results are also relevant for photogenerated charges, we performed resistance-dependent photovoltage~(RPV) measurements on operational solar cell devices. The RPV method is a transient technique with ns to ms resolution, that is, the time scale relevant for charge collection and recombination.\cite{Roland2019} From the RPV~transients shown in Figure~S9 in the Supporting Information, electron mobilities very similar to those from the SCLC~measurements can be derived, along with hole mobilities that are about a factor of two lower. As RPV is carried out at much lower carrier densities, the latter might point to a slight carrier density dependence of~$\mu_p$, but it is within the typical range when comparing different mobility measurement methods.\cite{Dahlstrom2020} However, the important fact is that there is still no significant difference between the CB and DCB~blends. This shows that the transport of photogenerated charges, too, is only slightly influenced by the differences in morphology. A minimal sketch of the Mott--Gurney analysis is given below.
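The sketch assumes $j$--$V$ data in SI units and a relative permittivity of 3.5, a typical literature value for P3HT:PCBM that was not determined in this work:
\begin{verbatim}
import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity (F/m)

def mott_gurney_mobility(V, j, L, eps_r=3.5):
    """Mobility from j = 9*eps_r*EPS0*mu*V^2/(8*L^3), obtained by a
    least-squares fit of j against V^2 (V in V, j in A/m^2, L in m)."""
    prefactor = np.sum(j * V**2) / np.sum(V**4)
    return 8 * L**3 * prefactor / (9 * eps_r * EPS0)

# self-test on synthetic data (electron mobility of the CB blend)
L, mu = 200e-9, 1.5e-7
V = np.linspace(0.5, 3.0, 20)
j = 9 * 3.5 * EPS0 * mu * V**2 / (8 * L**3)
print(mott_gurney_mobility(V, j, L))  # recovers 1.5e-7 m^2/(V s)
\end{verbatim}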
In particular, both the CB and DCB blends show the typical mobility imbalance of about one order of magnitude.\cite{Mihailetchi2006,Bartelt2015} Consequently, the thick devices are expected to be affected by space-charge effects, as will be discussed below. \subsection{Charge Collection versus Recombination} \label{sec:tpc} We now focus in more detail on the thickness-dependent competition between charge collection and recombination. To study the dynamics of collection, we measured the transient photocurrent~(TPC) due to a small optical perturbation while the device is held at a constant bias voltage and background illumination. We used a relatively long pulse length of $\unit[100]{\mu s}$ to guarantee that a steady state is reached. Because of the finite carrier mobility, a transient current~$j(t)$ is observed after switching off the light pulse at time $t = 0$, which reflects the carrier sweep-out.\cite{Cowan2011,Li2011} Figure~\ref{fig:figure05}a,b illustrates the TPC~behavior of 300-nm devices by plotting the voltage dependence of the extracted charge, \begin{equation} \Delta Q = \int_0^{t_f} j(t)\,\text{d} t, \end{equation} where $t_f$ is a time at which charge collection is completed. Data for the whole thickness series can be found in the Supporting Information~(Figures~S10 and S11). \begin{figure} \centering \includegraphics[width=\textwidth]{figure05.pdf} \caption{Dynamics of charge collection derived from TPC~measurements. (a,b)~Voltage dependence of the extracted charge~$\Delta Q$ for 300-nm thick devices at different background light intensities. (c,d)~Device FF versus the ratio $t_\text{ex}/t_\text{rec}$ between the carrier response time in the extraction regime~($V = \unit[-1]{V}$) and the recombination regime~($V \rightarrow V_\text{oc}$). Data points correspond to 8 samples of different thickness~$L$ and background light intensities ranging from 0.01 to \unit[1]{sun}. The fill factor is normalized to its low-intensity value to exclude other influences such as the quality of the contacts. Dashed lines are a guide to the eye.} \label{fig:figure05} \end{figure} At low background illumination, all devices exhibit a similar behavior, which can be understood as follows. Reducing the internal voltage, $V_\text{int} = V_\text{oc} - V$, by going from reverse to forward bias slows down the current decay, and~$\Delta Q$ becomes larger. However, as $V \rightarrow V_\text{oc}$, the internal field is close to zero and the extracted charge is reduced due to recombination. Altogether, this leads to a characteristic maximum of~$\Delta Q$, indicating the point at which recombination starts to compete with collection. Such a voltage dependence proves that the TPC~signals represent the free charge carrier dynamics and are not limited by RC~effects.\cite{Li2011} The situation changes with increasing background illumination. In the thicker CB devices~($L > \unit[150]{nm}$), the extracted charge is drastically reduced and the maximum of~$\Delta Q$ shifts gradually towards higher internal fields. Hence, recombination clearly competes with collection over a large range of voltages. This is in contrast to the DCB~devices, where~$\Delta Q$ is nearly invariant to thickness and light intensity. Even in the thick DCB~devices, recombination is only significant close to~$V_\text{oc}$, where photogenerated carriers mostly recombine with carriers injected from the contacts.\cite{Wurfel2019}
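As a practical note, $\Delta Q$ is obtained by numerical integration of the measured transients; a minimal sketch with a hypothetical mono-exponential sweep-out transient in place of measured data:
\begin{verbatim}
import numpy as np

def extracted_charge(t, j, t0=0.0):
    """Extracted charge per unit area: integral of j(t) dt for t >= t0,
    the moment the light pulse is switched off (trapezoidal rule)."""
    m = t >= t0
    tm, jm = t[m], j[m]
    return np.sum(0.5 * (jm[1:] + jm[:-1]) * np.diff(tm))

t = np.linspace(0.0, 50e-6, 1000)   # 50-us acquisition window (s)
j = 20.0 * np.exp(-t / 5e-6)        # hypothetical transient in A/m^2
print(extracted_charge(t, j))       # ~ 20 * 5e-6 = 1e-4 C/m^2
\end{verbatim}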
Figure~\ref{fig:figure05}c,d illustrates the relevance of the TPC~dynamics for the solar cell performance. It shows the device~FF versus the ratio between the TPC decay time~$t_\text{ex}$ in the extraction regime~(reverse bias) and $t_\text{rec}$ in the recombination regime~(close to $V_\text{oc}$) for a range of thicknesses and light intensities. The ratio~$t_\text{ex}/t_\text{rec}$ serves as a figure of merit for the competition between collection and recombination.\cite{Bartesaghi2015,Neher2016} Notably, all data points collapse onto a universal curve. For the CB devices, the fill factor drops when~$t_\text{ex}$ and~$t_\text{rec}$ are of the same order of magnitude, which is the case for~$L > \unit[150]{nm}$ at high light intensities. In contrast, for the DCB~devices, the absence of data points in this region indicates that collection is always faster than recombination. Given the very similar mobilities, it is not likely that carriers are collected at a higher rate. Instead, the striking differences between the two systems must be related to the charge recombination. In order to characterize the recombination mechanism, we determined the reaction order~$\delta$ and the ideality factor~$n_\text{id}$ using transient and steady-state photovoltage measurements as described in the Supporting Information. For the CB~devices, we find $\delta$ close to~2 and $n_\text{id}$ close to~1, which indicates that bimolecular recombination between free electrons and holes is the dominant loss mechanism.\cite{Foertig2012,Kirchartz2012} Hence, we can employ the rate equation~$R = k_2 n^2$ and estimate the rate constant to be~$k_2 \approx \unit[2\times10^{-18}]{m^3\,s^{-1}}$~(Supporting Information, Figure~S13). This value confirms recent charge extraction measurements on the same system,\cite{Wilken2020} which also show only a weak dependence on the carrier density~(Figure~\ref{fig:figure06}a). For the DCB~devices, the apparent recombination behavior is more complex. We find~$\delta$ significantly exceeding~2 and $n_\text{id}$ ranging between~1 and~2, which suggests that recombination involves carriers trapped in exponential tails of the density of states.\cite{Kirchartz2011} Given that the DCB~blends actually consist of \textit{less} disordered material than the CB~blends, it does not seem likely that they have more or deeper tail states. Instead, we assume the free carrier recombination~(as given by $k_2$) to be much more strongly reduced, so that the trap-assisted regime becomes more apparent. Under these conditions, the transient techniques used herein to determine~$k_2$ do not lead to meaningful results.\cite{Kiermasch2018,Sandberg2019} However, to explain the differences in device performance, we estimate that~$k_2$ must be at least one order of magnitude smaller than in the CB~blends. This is supported by a recent study in which the newly developed impedance-photocurrent device analysis technique showed a rate constant~$k_2$ of about~$\unit[10^{-19}]{m^3\,s^{-1}}$ for comparably processed DCB~devices.\cite{Heiber2018} \begin{figure}[t] \centering \includegraphics{figure06.pdf} \caption{Characterization of the recombination kinetics. (a)~Rate constants~$k_2$ for the CB and DCB blends. The data points for the CB~system were taken from a recent literature study\cite{Wilken2020} and confirmed by transient photovoltage measurements~(Supporting Information, Figure~S13). For the DCB system, $k_2$ was estimated as detailed in the text. (b)~Device~FF versus the parameter~$\alpha$ as given in Equation~(\ref{eq:Neher2}) for CB and DCB~solar cells of various thickness and measured at different light intensities.
Dashed lines are the expectations according to the modified Shockley model by Neher~et al.\cite{Neher2016}~for recombination rate constants of $k_2 = \unit[2 \times 10^{-18}]{m^3s^{-1}}$~(CB) and $k_2 = \unit[1 \times 10^{-19}]{m^3s^{-1}}$~(DCB), respectively. The degradation of the~FF in the CB~devices can be explained by the larger $k_2$ alone.} \label{fig:figure06} \end{figure} To test whether a contrast in $k_2$ alone can explain the striking differences between CB and DCB~devices, we applied the modified Shockley equation by Neher~et~al.,\cite{Neher2016} \begin{equation} j = qGL \left\{\exp\left[\frac{q}{(1 + \alpha) k_\text{B} T} (V-V_\text{oc}) \right] - 1\right\}, \label{eq:Neher1} \end{equation} where $G$ is the generation rate calculated with our transfer-matrix model and $\alpha$ is a factor that relates charge generation, transport and recombination to each other, \begin{equation} \alpha = \frac{q (k_2 G)^{1/2} L^2}{2 \mu_p k_\text{B} T}. \label{eq:Neher2} \end{equation} In the denominator, we used only the mobility of the slower carrier~(here: holes), which dominates the photocurrent if transport is significantly imbalanced.\cite{Neher2016,Wilken2020} Figure~\ref{fig:figure06}b shows the analytical relationship between the FF and $\alpha$~(dashed lines) together with ca.~200 data points each for the CB and DCB~system, corresponding to different samples of variable thickness~$L$ measured at varying light intensity~(and thus~$G$). There is reasonable agreement between the Neher model and the experiments, which confirms the validity of the~$k_2$ values. We note that the deviations at low and high $\alpha$~values are due to electrical imperfections~(finite shunt resistance, surface recombination) in the thin devices and space-charge effects in the thick devices, respectively. An important finding from Figure~\ref{fig:figure06}b is that under nearly all conditions tested, the DCB~devices operate as Shockley-type solar cells~($\alpha < 1$). This is an outstanding result for OPVs, especially with thick active layers.\cite{Armin2018} In contrast, most CB~devices are in the transport-limited regime~($\alpha > 1$). Figure~S14 in the Supporting Information shows that the Neher model can also correctly describe full $j$--$V$ curves at different light intensities. The notable exceptions are the thick CB~devices at high intensities. The reason is that under these conditions the photocurrent becomes space-charge limited, which is not considered in Equation~(\ref{eq:Neher1}).
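To make the connection between $\alpha$ and the~FF concrete, the following minimal Python sketch evaluates Equations~(\ref{eq:Neher1}) and~(\ref{eq:Neher2}) for a 300-nm device; the generation rate of $\unit[3\times10^{26}]{m^{-3}s^{-1}}$ (roughly a third of a sun) and $V_\text{oc} = \unit[0.62]{V}$ are illustrative assumptions, and the hole mobility is taken from Table~\ref{tab:SCLC}:
\begin{verbatim}
import numpy as np

q, kB, T = 1.602e-19, 1.381e-23, 300.0

def alpha(k2, G, L, mu_slow):
    """Competition factor between recombination and extraction."""
    return q * np.sqrt(k2 * G) * L**2 / (2 * mu_slow * kB * T)

def fill_factor(a, Voc=0.62):
    """FF of the modified Shockley curve, evaluated numerically."""
    V = np.linspace(0.0, Voc, 2000)
    jn = np.expm1(q * (V - Voc) / ((1 + a) * kB * T))  # j/(qGL) < 0
    return (jn * V).min() / (jn[0] * Voc)              # Pmax/(jsc*Voc)

for name, k2 in [("CB", 2e-18), ("DCB", 1e-19)]:
    a = alpha(k2, G=3e26, L=300e-9, mu_slow=1.3e-8)
    print(f"{name}: alpha = {a:.2f}, FF = {fill_factor(a):.2f}")
\end{verbatim}
With these illustrative inputs, the sketch yields $\alpha \approx 3.3$ and FF~$\approx 0.57$ for the CB~parameters, but $\alpha \approx 0.7$ and FF~$\approx 0.75$ for the DCB~parameters, mirroring the trend in Figure~\ref{fig:figure06}b.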
It is important to note that due to the similar mobility imbalance, space charge will also build up in the thick DCB~devices, as can be seen from modeled energy band diagrams~(Figure S15, Supporting Information). We have recently shown that collection is limited to the width~$w$ of the space-charge region plus the diffusion length~$L_D$ of the slower charge carrier.\cite{Wilken2020} For the thick devices, we find $w \approx \unit[160]{nm}$ under 1-sun illumination. This gives an alternative way to determine an upper limit for $k_2$ in the DCB blends: The fact that charges are collected from a 300-nm device without noticeable loss means that the hole diffusion length must be at least around~\unit[140]{nm}. This is an exceptionally long diffusion length for~OPVs, outperforming for instance the highly efficient PM6:Y6 system.\cite{Tokmoldin2020} Using the relationship \begin{equation} L_D = \sqrt{\frac{\mu_p \tau k_\text{B} T}{q}}, \end{equation} where $\tau = (k_2 G)^{-1/2}$ is the bimolecular carrier lifetime, we can then estimate that $k_2$ must be about~$\unit[1 \times 10^{-19}]{m^3\,s^{-1}}$ or less. This is in excellent agreement with our above assumption and confirms that the key difference between the CB and DCB~devices lies in the free charge recombination, which is suppressed about 20~times more strongly in the DCB~blends. \section{Discussion} We now want to discuss our findings in the light of recent recombination models and derive design rules for commercially relevant OPV~materials. Figure~\ref{fig:figure07} summarizes the current understanding of charge generation and recombination in organic solar cells.\cite{Shoaee2019,Goehler2018,Burke2015,Murthy2013,Deibel2010b} Both processes involve bound excitons~(either in the singlet or triplet spin state), less strongly bound charge transfer~(CT) pairs, and free carriers. Nongeminate charge recombination, on which we will focus in the following, is a two-step process. The first step is the encounter of a free electron and a free hole originating from different photoexcitations~(rate constant~$k_\text{enc}$); the resulting encounter complex has been identified as a CT~state with similar properties to the one involved in charge generation.\cite{Murthy2013,Tvingstedt2009} The second step is the decay of the CT~state into the ground state~(rate constant~$k_f$), that is, the actual recombination event. However, instead of decaying into the ground state, the CT~state can also dissociate again~(rate constant~$k_d$). Following this rationale, the experimentally observable bimolecular rate constant~$k_2$ can be written as \begin{equation} k_2 = \zeta_\text{CT} k_\text{enc}, \label{eq:reduction} \end{equation} where $\zeta_\text{CT}$ is a reduction factor related to the CT~kinetics. \begin{figure}[t] \centering \includegraphics{figure07.pdf} \caption{Illustration of the relevant energy levels and transitions for charge generation and recombination in organic bulk-heterojunction solar cells. The nongeminate recombination of electrons and holes from the charge separated~(CS) state to the electronic ground state involves the formation of a charge transfer~(CT) pair as intermediate.} \label{fig:figure07} \end{figure} When the decay of the CT~state is much faster than its dissociation~($k_f \gg k_d$), the recombination is encounter-limited and $\zeta_\text{CT} \rightarrow 1$. Hence, in this case, the recombination rate is solely given by~$k_\text{enc}$. While in a homogeneous medium $k_\text{enc}$ should be given by the Langevin prefactor~$k_L$, it may become reduced in a blend with phase separation. This is because electrons and holes are confined to different material phases and can only meet at the heterointerface.\cite{Koster2006,Pivrikas2005,Groves2008,Gorenflot2014} Heiber et al.\cite{Heiber2015}~provided a semi-analytical model for the encounter rate in the presence of phase separation, \begin{equation} k_\text{enc} = \frac{q}{\varepsilon\varepsilon_0} 2 f_1 \left(\frac{\mu_n^g + \mu_p^g}{2}\right)^{1/g}, \label{eq:Heiber} \end{equation} where $f_1$ and $g$ are domain-size dependent factors derived from Monte Carlo simulations on artificial blend morphologies.
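Before comparing with experiment, it is instructive to check the magnitudes involved. In the limit $f_1 = g = 1$, Equation~(\ref{eq:Heiber}) reduces algebraically to the homogeneous Langevin rate $k_L = q(\mu_n + \mu_p)/\varepsilon\varepsilon_0$, which the following minimal sketch uses as an upper bound; actual $(f_1, g)$ pairs must be taken from Heiber~et~al., and the relative permittivity of 3.5 is an assumed value:
\begin{verbatim}
import numpy as np

q, EPS0 = 1.602e-19, 8.854e-12

def k_enc(mu_n, mu_p, f1=1.0, g=1.0, eps_r=3.5):
    """Encounter rate constant in the presence of phase separation;
    f1 = g = 1 recovers the homogeneous Langevin limit."""
    return q / (eps_r * EPS0) * 2 * f1 * ((mu_n**g + mu_p**g) / 2)**(1 / g)

kL = k_enc(1.5e-7, 1.6e-8)   # Langevin limit with the SCLC mobilities
for name, k2 in [("CB", 2e-18), ("DCB", 1e-19)]:
    print(f"{name}: kL = {kL:.1e} m^3/s, k2/kL = {k2/kL:.1e}")
\end{verbatim}
The measured rate constants thus lie three to four orders of magnitude below the Langevin limit, consistent with the reduction factors discussed below.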
Figure~\ref{fig:figure08}a illustrates the possible reduction through Equation~(\ref{eq:Heiber}) for a range of mobilities and domain sizes. Comparing the numbers with the measured reduction factors yields two conclusions: First, phase separation cannot explain the differences between the CB and DCB~blends. With the given mobilities and domain sizes, the Heiber model would predict recombination to be (slightly) weaker in the CB~blends, which is the opposite of the trend in our experimental result. Second, the calculated encounter rates exceed the measured values of~$k_2$ by orders of magnitude, which proves that neither the CB nor the DCB blends are in the encounter-limited regime. Even in extreme cases, only relatively mild reductions~($>10^{-2}$) are predicted, which shows that tuning the domain size is not a promising strategy to strongly suppress charge recombination. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{figure08.pdf} \caption{Effect of morphology on charge recombination. (a)~Measured reductions~$k_2/k_L$ for the CB and DCB~blends~(data points) in comparison with the model by Heiber~et~al.\cite{Heiber2015} assuming encounter-limited recombination with a reduction factor~$k_\text{enc}/k_L$~(lines). Different traces belong to different domain sizes. (b)~Schematic illustration of the morphologies studied herein (top: CB, bottom: DCB). Our data suggests that the more extended delocalization of holes in the DCB blends assists CT states to dissociate.} \label{fig:figure08} \end{figure} By inserting the calculated encounter rates into Equation~(\ref{eq:reduction}), we can deduce CT~reduction factors of $\zeta_\text{CT} = 10^{-2}$ and $10^{-3}$ for the CB and DCB blends, respectively. In other words, only 1 in 100 or 1000 encounter events will lead to an actual loss, which implies that the probability for the CT~state to separate again must be much higher than the probability to relax to the ground state~($k_d \gg k_f$). As shown by Burke~et~al.,\cite{Burke2015} this leads to an equilibrium between CT~states and free carriers, reducing the recombination rate by \begin{equation} \zeta_\text{CT} = \frac{k_f}{k_f + k_d}. \label{eq:Burke} \end{equation} Several studies have indicated that the balance between~$k_d$ and~$k_f$ is shifted towards separation by the presence of aggregates.\cite{Jamieson2012,Sweetnam2014,Burke2015,Cha2019,Wilken2020b} Energetically speaking, aggregation shifts the molecular orbitals such that the electronic gap is reduced compared to the amorphous state.\cite{Osterbacka2000} In a three-phase morphology, consisting of molecularly mixed amorphous regions and pure aggregates, this creates an energy cascade pushing carriers from mixed to pure regions. The additional driving force has been shown not only to improve the split-up of CT~pairs during photogeneration, but also to reduce the loss due to charge recombination.\cite{Burke2015,Wilken2020b} Given the well-documented energy level offset between P3HT aggregates and amorphous P3HT mixed with PCBM,\cite{Sweetnam2014} the significant P3HT aggregation in the CB and DCB~blends reasonably explains why both systems are non-Langevin systems. However, the relatively small difference in the P3HT aggregate percentage of~45 to 51\% alone cannot explain why the reduction is one order of magnitude stronger in the DCB~blends. We also find no evidence that the properties of the PCBM are substantially altered by the solvent.
Hence, it is reasonable to suggest that the crystalline quality of the P3HT~aggregates, expressed by the exciton bandwidth~$W$, is the decisive factor. Although the parameter~$W$ primarily characterizes excitons, a direct connection to the behavior of charges has been demonstrated.\cite{Clark2009} Here, the significantly smaller value of~$W$ for the DCB blends translates into a larger number of interacting repeat units in the polymer. As a result, the holes are expected to be more delocalized in the DCB blends. In other words, they have a higher \textit{local} mobility~(as opposed to the macroscopic mobility addressed in the transport measurements), which leads to a higher probability for carriers to overcome the energetic barrier between the interfacial CT~state and the charge-separated state. This in turn increases~$k_d$ and thereby the denominator in Equation~(\ref{eq:Burke}). For a schematic illustration of this scenario, see Figure~\ref{fig:figure08}b. It should be noted that the CT~dissociation rate will also affect the way free charges are generated following photoexcitation. Hence, one might argue that the contrast in~$\zeta_\text{CT}$ is at odds with the similar generation efficiencies we found for the CB and DCB~blends. However, as pointed out by Shoaee~et~al.,\cite{Shoaee2019} equilibrium between CT~states and free charges implies a reverse relationship of the form~$\zeta_\text{CT} = 1 - \eta_\text{CT,diss}$, where~$\eta_\text{CT,diss}$ is the yield of CT~dissociation during charge generation. Varying~$\zeta_\text{CT}$ from~$10^{-3}$ to~$10^{-2}$, which has drastic consequences for the recombination behavior as shown in this work, corresponds only to a change in~$\eta_\text{CT,diss}$ from~$0.999$ to~$0.99$. Such small differences in generation efficiency are not distinguishable with the methods used herein. In other words, recombination is much more sensitive to the CT~dissociation rate than generation is. Another aspect to consider is the role of spin in the recombination. In general, the encounter of two independent charges should form CT~states of singlet~($^1$CT) and triplet~($^3$CT) character in a 1:3~ratio.\cite{Rao2013,Chow2014,Shoaee2019} The direct transition from $^3$CT to the ground state is spin-forbidden, but the triplet CT~state may undergo back electron transfer to triplet excitons~(T$_1$). Hence, there would in principle be two different decay channels with different relaxation kinetics, which would make the relation between generation and recombination less straightforward.\cite{Shoaee2019} For back electron transfer to be relevant, the T$_1$ level in either the donor or acceptor must be at lower energy than the CT state. Even though such a configuration is not typical for P3HT:PCBM, it cannot be completely ruled out on the basis of this work. However, it has been shown that in other materials the loss channel through triplets is switched off upon aggregation.\cite{Rao2013,Chow2014} Thus, if triplets were relevant, it is likely that they would lead to additional losses in the CB rather than in the DCB system, which reinforces our view that aggregation is key to suppressing charge recombination. Clarification of this aspect would be an interesting direction for future research. In summary, we find that P3HT:PCBM blends processed via the DCB~route display an optimal morphology in terms of reduced recombination and thickness-insensitive device performance. The optimal morphology consists of both amorphous and aggregated regions.
In this respect, our work confirms earlier suggestions that a three-phase morphology balances best between efficient generation~(occurring mainly in the amorphous phase) and reduced recombination~(carriers are pushed away from the interface towards aggregated regions), and outperforms both purely amorphous and highly ordered blends.\cite{Nyman2015,Schwarz2020} However, the crucial point here is that the mere existence of aggregates in an amorphous matrix is not sufficient to suppress recombination to such an extent that efficient thick-film devices are possible. To achieve this, the aggregates must have a high crystalline quality and purity, which in the present case is only realized in the carefully equilibrated DCB blends. We believe that our findings derived from P3HT:PCBM are transferable to other blend systems and, therefore, the ability to form high-quality aggregates should serve as a guiding principle in the development of novel OPV~materials. \section{Conclusions} In this paper, we revisited the classical P3HT:PCBM blend to establish connections between the nanoscale morphology and the device physics of organic bulk-heterojunction solar cells. By exploiting a structure--property relationship that has so far received little attention, we could show that aggregation is the key feature to reduce recombination losses. However, in order to reduce the recombination rate to such an extent that the solar cells operate as Shockley-type devices even in thick junctions, the mere presence of aggregates is not sufficient. For this to be the case, the aggregates must be of high crystalline quality and purity, so that the charge carriers are delocalized over larger areas. The delocalization boosts the dissociation of charge-transfer states that are formed by the encounter of free electrons and holes. In the case of P3HT:PCBM, such a situation is realized in carefully equilibrated blend films that are slowly dried after spin-coating. The optimized blends show extraordinarily long hole diffusion lengths exceeding~\unit[100]{nm} and can also tolerate the build-up of space charge due to imbalanced transport. In contrast, we find that phase separation plays only a minor role in recombination. Although charge carriers are confined to donor and acceptor domains, so that a coarse phase separation would seem preferable from a geometric point of view, this effect is far outweighed by that of aggregation. Therefore, optimization of the domain size, for example through nanostructuring, is not a promising approach to significantly suppress recombination. Instead, the focus should lie on molecular order, crystalline quality and phase purity. This is an especially important design rule for nonfullerene acceptors, in which reducing the recombination is particularly important due to their typically low mobilities. \section{Experimental Section} \paragraph{Materials} Regioregular P3HT was purchased from Rieke Metals (4002-E, molecular weight 50--70 kDa, regioregularity 91--94\%). PCBM was purchased from Solenne BV (purity 99.5\%). Chlorobenzene (CB), 1,2-dichlorobenzene (DCB) and poly\-ethylen\-imine (PEIE) were purchased from Sigma-Aldrich. Poly(3,4\-ethylene\-di\-oxy\-thio\-phene) poly\-styrene sulfonate (PEDOT:PSS) was purchased from Heraeus (Clevios P VP AI 4083). Indium tin oxide~(ITO) covered glass substrates were purchased from Pr\"{a}zisions Glas \& Optik.
\paragraph{Blend-Film Preparation} Blend solutions were prepared by dissolving P3HT and PCBM in a 1:1 weight ratio in either CB or DCB and stirring at \unit[60]{$^\circ$C} for \unit[12]{h} prior to further processing. Blend films were produced by dynamic spin coating. The thickness was controlled by the concentration of the solution~(30 to \unit[60]{mg/ml}) and the spin-coating speed (500 to \unit[1500]{rpm}). After complete drying in a closed vessel, all samples were thermally annealed at \unit[150]{$^\circ$C} for \unit[10]{min}. All preparation was carried out under a dry nitrogen atmosphere. \paragraph{Electron Tomography} Specimens for TEM were prepared by depositing blend films on glass substrates covered with a sacrificial layer of~PEDOT:PSS. After immersion in deionized water, the PEDOT:PSS layer dissolved, and the free-floating blend films were transferred to 300-mesh copper grids. Bright-field TEM images were acquired with a 200-kV field emission electron microscope~(Jeol JEM-2100F) at an underfocus of \unit[10]{$\mu$m}.\cite{Ma2007,Moon2009,Dutta2011} Electron tomography was performed by recording a series of TEM~images under different viewing angles, tilting the specimen over a range of~$\pm65^\circ$ in nonequidistant increments according to Saxton~et~al.\cite{Saxton1984} The tilt series was assembled into a volumetric reconstruction~(voxel size \unit[0.43]{nm}) using the simultaneous iterative reconstruction technique~(SIRT) with 25~iterations. Acquisition, alignment, and reconstruction of the tomographic data were performed with the software package Temography~(System in Frontier, Inc.). For the FFT analysis, the reconstructed volumes were cut into series of horizontal slices. Binarization of the grayscale slices was done by applying a median filter~($\unit[9 \times 9]{pixels}$) and thresholding using Otsu's method. \paragraph{Device Fabrication} Inverted solar cells were fabricated by spin coating a 50-nm~layer of ZnO nanoparticles~(diameter \unit[5]{nm}, see Ref.~\citenum{Wilken2014} for details) on cleaned and patterned ITO~substrates. Subsequently, the active layer was deposited either from CB or DCB as described above. After thermal annealing, a \ce{MoO3}~(\unit[12]{nm})\slash{}\ce{Ag}~(\unit[150]{nm}) electrode was evaporated under high vacuum~(\unit[$10^{-6}$]{mbar}). The active area was \unit[0.3]{cm$^2$}. Solar cells were encapsulated with glass slides and a UV-cured optical adhesive. Single-carrier diodes were fabricated with the device architectures ITO\slash{}PEIE\slash{}P3HT:PCBM\slash{}Ca\slash{}Al (electrons) and ITO\slash{}PEDOT:PSS\slash{}P3HT:PCBM\slash{}\ce{MoO3}\slash{}Ag (holes), respectively. \paragraph{Characterizations} Current--voltage curves were recorded with a parameter analyzer~(Keithley 4200). A class AAA solar simulator (Photo Emission Tech) was used to provide simulated AM1.5G~illumination at~\unit[100]{mW cm$^{-2}$}. The EQE was measured with a custom-built setup~(Bentham PVE300), equipped with a 75-W~Xe arc lamp and a monochromator. Photocurrent signals were modulated at~\unit[780]{Hz} and monitored with a lock-in amplifier~(Stanford Research Systems SR830). White-light bias illumination was provided by a 50-W~halogen lamp. The intensity of all light sources used was calibrated with a KG5-filtered reference solar cell. UV--vis absorption spectra were recorded from optically thin films on glass with a spectrophotometer~(Varian Cary 100) and corrected for the transmission of the substrate.
The P3HT absorption component was fitted to the Spano model,\cite{Spano2005,Clark2007,Clark2009} which treats the absorption spectrum as a series of Gaussian bands, \begin{equation} A \propto \sum_{m = 0}^{\infty} \left(\frac{S^m}{m!}\right) \left(1 - \frac{W \text{e}^{-S}}{2 E_p} \sum_{n \neq m} \frac{S^n}{n! (n-m)}\right)^2 \exp \left( - \frac{\left(\hbar\omega - E_{0-0} - mE_p - \frac{1}{2}\frac{W S^m \text{e}^{-S}}{m!}\right)^2}{2\sigma^2} \right), \label{eq:Spano} \end{equation} where $\hbar\omega$ is the photon energy, $S$ the Huang--Rhys factor, $E_p$ the intramolecular vibrational energy, $E_{0-0}$ the energy of the 0--0 transition, $\sigma$ the Gaussian bandwidth, and $W$ the intramolecular free exciton bandwidth. Assuming $S = 1$ and $E_p = \unit[0.179]{eV}$ for the C=C symmetric stretch,\cite{Clark2007} the only free fit parameters were $E_{0-0}$, $\sigma$ and $W$. Film thicknesses were measured with a stylus profiler~(Veeco Dektak 6M). \paragraph{Transient Measurements} For TPC and TPV~measurements, a 4-W~white-light LED (Seoul P4) was used to provide constant background illumination. Another LED~(wavelength \unit[525]{nm}, \unit[250]{ns} rise/fall time), driven by a double pulse generator~(Agilent 81150A), was used to apply a small optical perturbation to the sample. The second channel of the pulse generator served as the bias-voltage source. Current and voltage transients were recorded with a 1-GHz~digital storage oscilloscope~(Tektronix DPO7104) at an input impedance of~\unit[50]{$\Omega$} and~\unit[1]{M$\Omega$}, respectively. The TPC~transients were routinely corrected for RC~effects as described elsewhere.\cite{Kettlitz2013} A biased silicon detector~(Thorlabs DET36A) was used to monitor the switching dynamics and background light intensity. The latter was pre-adjusted for each device by matching the current--voltage response under simulated sunlight. Light sources were attenuated with neutral density filters. Experiments were done at room temperature and ambient pressure. \paragraph{Transfer-Matrix Model} One-dimensional transfer-matrix calculations were performed with a customized MATLAB code.\cite{Burkhard2010} The optical constants of all materials involved were determined by spectroscopic ellipsometry. The validity of the model was checked by comparing simulated and experimental reflectance spectra of complete OPV devices. Further details are given in the Supporting Information. \begin{acknowledgement} We thank Michael Koopmeiners for preliminary experiments and AFM~measurements, and Edit Kieselhorst, Erhard Rhiel, Matthias Macke and Dirk Otteken for technical support. We are also thankful to Carsten Deibel, Oskar J.~Sandberg, Manuela Schiek, Konstantin Kloppstech, Nils K\"onne, Nora Wilson, Christian Ahl\"ang and Christoph Norrenbrock for helpful discussions during various stages of this work. S.W. acknowledges funding through the Research Mobility Programme of \AA{}bo Akademi University. D.S. is grateful for financial support from the Magnus Ehrnrooth Foundation. S.D. acknowledges funding from the Society of Swedish Literature in Finland. M.N. acknowledges funding from the Jane and Aatos Erkko Foundation through the ASPIRE project. This project has received funding through the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l{}odowska-Curie grant agreement No~799801~(``ReMorphOPV'').
\end{acknowledgement} \section{Summary of Literature Studies} \begin{table} \small \begin{flushleft} \caption{Recombination studies on annealed P3HT:PCBM solar cells.} \label{tab:tableS1} \begin{tabular}{lllcl} \toprule Study & Solvent & Annealing & Reduction factor~$\zeta$ & Experiment\\ \midrule Pivrikas~et~al.~\cite{Pivrikas2005} (2005) & & & $\sim10^{-4}$ & TOF \\ Ferguson~et~al.~\cite{Ferguson2011} (2011) & CF & solvent-vapor & $3 \times 10^{-4}$ & TRMC\\ Ju\v{s}ka~et~al.~\cite{Juska2005} (2005) & & & $5\times10^{-4}$ & DI\\ Bartelt~et~al.~\cite{Bartelt2015} (2015) & CF & thermal & $7\times10^{-4}$ & IV\\ Deibel~et~al.~\cite{Deibel2008} (2008) & CB & thermal & $\sim10^{-3}$ & CELIV \\ Kniepert~et~al.~\cite{Kniepert2014} (2014) & CF & thermal & $1\times10^{-3}$ & BACE \\ Heiber~et~al.~\cite{Heiber2018} (2018) & DCB & solvent-vapor & $1\times10^{-3}$ & IPDA \\ Shuttle~et~al.~\cite{Shuttle2010} (2010) & Xylene & thermal & $2\times10^{-3}$ & CE\\ Shoaee~et~al.~\cite{Shoaee2019} (2019) & DCB & solvent-vapor & $2\times10^{-3}$ & BACE\\ Wetzelaer~et~al.~\cite{Wetzelaer2013} (2013) & CF & thermal & $3\times10^{-3}$ & IV\\ Kniepert~et~al.~\cite{Kniepert2011} (2011) & DCB & solvent-vapor & $6\times10^{-3}$ & TDCF\\ Shuttle~et~al.~\cite{Shuttle2008c} (2008) & CB & thermal & $10^{-2}$--$10^{-3}$ & TA\\ Garcia-B.~et~al.~\cite{Garcia-Belmonte2010} (2010) & DCB & & $\sim10^{-2}$ & EIS \\ Guo~et~al.~\cite{Guo2010} (2010) & CB & thermal & $1\times10^{-2}$ & TA\\ Mingebach~et~al.~\cite{Mingebach2012} (2012) & CB & thermal & $2\times10^{-2}$ & TDCF\\ Mauer~et~al.~\cite{Mauer2011} (2011) & CB & thermal & $10^{-1}$--$10^{-3}$ & DP\\ \bottomrule \end{tabular} \begin{tabular}{lll} & CB & Chlorobenzene\\ & CF & Chloroform\\ & DCB & 1,2-Dichlorobenzene\\ \addlinespace & BACE & Bias-assisted charge extraction\\ & CE & Charge extraction\\ & CELIV & Charge extraction by linearly increasing voltage\\ & DI & Double injection\\ & DP & Double-pulse technique with variable delay\\ & EIS & Electrical impedance spectroscopy\\ & IPDA & Impedance-photocurrent device analysis\\ & IV & Current-voltage analysis\\ & TA & Transient absorption\\ & TDCF & Time-delayed collection field\\ & TOF & Time of flight\\ & TRMC & Time-resolved microwave conductivity\\ \end{tabular} \end{flushleft} \end{table} \clearpage \begin{figure} \centering \includegraphics[width=0.8\textwidth]{figure_S1.pdf} \caption{Reported values of the reduction factor~$\zeta$ for annealed P3HT:PCBM solar cells. See Table~\ref{tab:tableS1} for details on the literature studies. 
For studies that specify a range, the upper bound is given in the diagram.} \label{fig:figureS1} \end{figure} \clearpage \section{Solar Cell Device Performance} \begin{table} \caption{Photovoltaic performance of the CB and DCB thickness series.$^\dagger$ } \begin{tabular}{cccccc} \toprule Solvent & Thickness & $V_\text{oc}$ & $j_\text{sc}$ & FF & PCE \\ & (nm) & (\unit{mV}) & ($\unit{mA/cm^2}$) & & (\%) \\ \midrule CB & 65 & 635 & 7.5 & 0.62 & 3.0 \\ & 100 & 621 & 6.9 & 0.64 & 2.7 \\ & 140 & 618 & 7.4 & 0.60 & 2.7 \\ & 150 & 616 & 8.1 & 0.50 & 2.5 \\ & 190 & 606 & 8.6 & 0.44 & 2.3 \\ & 220 & 601 & 7.8 & 0.43 & 2.0 \\ & 260 & 595 & 6.0 & 0.42 & 1.5 \\ & 300 & 589 & 6.0 & 0.42 & 1.5 \\ & 350 & 588 & 4.5 & 0.43 & 1.1 \\ \addlinespace DCB & 45 & 646 & 7.4 & 0.60 & 2.9 \\ & 70 & 640 & 8.2 & 0.61 & 3.3 \\ & 105 & 616 & 7.4 & 0.56 & 2.7 \\ & 140 & 630 & 7.6 & 0.58 & 2.8 \\ & 180 & 629 & 8.9 & 0.61 & 3.4 \\ & 200 & 622 & 9.3 & 0.58 & 3.4 \\ & 250 & 625 & 8.8 & 0.60 & 3.3 \\ & 300 & 624 & 9.1 & 0.60 & 3.4 \\ & 320 & 620 & 9.3 & 0.60 & 3.5 \\ \bottomrule \end{tabular} $^\dagger$Data were obtained under simulated AM1.5G solar irradiation of $\unit[100]{mW/cm^2}$ with a spectral mismatch factor of 1.015--1.032~(depending on sample). The reported values for each thickness represent an average over 4~devices. \label{tab:IVparams} \end{table} \clearpage \section{Transfer Matrix Model} Optical characteristics of the solar cells were modeled using a one-dimensional transfer-matrix approach.\cite{Pettersson1999,Burkhard2010} Each material layer involved in the device stack was treated as a homogeneous medium, characterized by its film thickness and complex index of refraction~$\tilde{n} = n' + \text{i}n''$, where $n'(\lambda)$ and $n''(\lambda)$ are the refractive index and extinction coefficient, respectively. The latter were experimentally determined by means of variable-angle spectroscopic ellipsometry. To this end, the ellipsometric parameters $\Psi(\lambda)$ and $\Delta(\lambda)$ were recorded under different angles of incidence with a rotating-analyzer instrument~(Woollam VASE) and fitted to appropriate dispersion models. Further details regarding the analysis can be found in previous publications.\cite{Scheunemann2015,Wilken2015} Figure~\ref{fig:R_exp_vs_sim} illustrates the quality of the optical model by comparing measured and simulated reflection spectra of complete solar cell devices. \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figure_S2.pdf} \caption{Comparison of the experimental~(data points) and simulated~(solid lines) reflectance of complete solar cell devices with variable active-layer thickness.} \label{fig:R_exp_vs_sim} \end{figure} \clearpage \section{Internal Quantum Efficiency} \begin{figure} \centering \includegraphics[width=\textwidth]{figure_S3.pdf} \caption{Internal quantum efficiency~(IQE) for devices of different thickness as calculated from the measured EQE without white-light bias, the measured reflectance~($R$) and the modeled parasitic absorption~(PA) using the relation $\text{IQE} = \text{EQE}/(1 - R - \text{PA})$.
The gradients seen in the IQE at short wavelengths are due to incomplete exciton harvesting in the PCBM phase.\cite{Burkhard2009} Horizontal dashed lines indicate the IQE of 0.7 that was assumed in the photocurrent simulations in the main text.} \label{fig:IQE} \end{figure} \clearpage \section{Light-Intensity Dependent Measurements} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figure_S4.pdf} \caption{Light-intensity dependent $j$--$V$ curves~(upper row) and corresponding white-light biased EQE spectra~(lower row) for CB devices of variable active-layer thickness as indicated in the figure. Arrows indicate increasing~(bias) light intensity.} \label{fig:IV_EQE_CB} \end{figure} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figure_S5.pdf} \caption{Same representation as in Fig.~\ref{fig:IV_EQE_CB} for DCB devices of variable active-layer thickness. The slight improvement of the EQE with increasing light intensity seen in the thicker devices may be a consequence of trap-filling.} \label{fig:IV_EQE_DCB} \end{figure} \clearpage \section{Atomic Force Microscopy} \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figure_S6.pdf} \caption{Surface topography obtained from atomic force microscopy on CB and DCB blend films~(thickness: \unit[250]{nm}) coated on different types of substrates. Independent of the substrate used, processing from DCB results in much rougher films on the $\unit{\mu m}$~scale. The values of the root-mean-square roughness calculated from the $\unit[10 \times 10]{\mu m}$ scans are \unit[0.9]{nm}~(CB) and \unit[9.2]{nm}~(DCB) for the samples on ITO/PEDOT:PSS, and \unit[1.5]{nm}~(CB) and \unit[14]{nm}~(DCB) for the samples on ITO/ZnO.} \label{fig:AFM} \end{figure} \clearpage \section{Fourier-Transform Analysis} For a quantitative statistical analysis of the morphological features revealed by electron tomography, we calculated the 2D autocorrelation function~(ACF) for each slice parallel to the film plane. For an $M \times N$ image, the ACF is given by \begin{equation} \Psi_{gg}(m,n) = \sum_{k=1}^{M-m}\sum_{\ell=1}^{N-n} g_{k,\ell}\,g_{k+m,\ell+n}, \label{eq:ACF} \end{equation} where $g_{k,\ell}$ denotes the brightness value at pixel position~$(k,\ell)$ and $(m,n)$ is the lag vector. The sums in Equation~(\ref{eq:ACF}) were evaluated in reciprocal space using the fast Fourier transform algorithm. Assuming that the phase-separated domains are randomly oriented in the lateral plane, it is useful to introduce a radially averaged ACF by transformation to polar coordinates~$(r,\varphi)$ and integration over the polar angle, \begin{equation} \left\langle\Psi_{gg}\right\rangle_\varphi = \int_0^{2\pi} \Psi_{gg}(r\cos\varphi,r\sin\varphi)\,\text{d}\varphi. \label{eq:rACF} \end{equation} Figure~\ref{fig:ACF}a shows the results of Equation~(\ref{eq:rACF}) for an exemplary slice from the middle of the tomograms. The data is normalized to the interval $[-1,1]$, where 1 indicates perfect correlation and $-1$ perfect anti-correlation. The shape of the ACFs is typical of a periodic two-phase system distorted by domain-size fluctuations, long-spacing variations, and diffuse phase boundaries.\cite{Strobl1980} While the first side maximum is related to the period of the pseudo-periodic structure, the width of the self-correlation peak centered at~$r = 0$ contains information about the average domain size~$d$. Following Strobl and Schneider,\cite{Strobl1980} the domain size was estimated by extrapolating the linear region to the baseline intercept~(Figure~\ref{fig:ACF}b).
\begin{figure} \centering \includegraphics[width=\textwidth]{figure_S7.pdf} \caption{Statistical analysis of the tomography data. (a)~Radially averaged ACF for an exemplary $xy$~slice through the reconstructed volumes. (b)~Linear part of the self-correlation peak with the ordinate shifted by the first minimum value of the ACF. The domain size was derived from the zero intercept of a linear fit to the data~(straight lines).} \label{fig:ACF} \end{figure} \clearpage \section{Charge Transport Measurements} \begin{figure} \centering \includegraphics[width=\textwidth]{figure_S8.pdf} \caption{Charge transport in CB~(left) and DCB~(right) cast blends. Shown are typical SCLC~current--voltage characteristics of electron-only~(closed symbols) and hole-only~(open symbols) devices. Straight lines are fits to the Mott--Gurney law, $j = 9\varepsilon\varepsilon_0 \mu_{n,p} V^2/(8L^3)$. Dashed lines are the result of drift--diffusion simulations using the extended Gaussian disorder model~(GDM, see Ref.~\citenum{Felekidis2018} for details).} \label{fig:SCLC} \end{figure} \begin{figure} \centering \includegraphics[width=\textwidth]{figure_S9.pdf} \caption{Resistance-dependent photovoltage~(RPV) transients for complete solar cell devices of about \unit[180]{nm} thickness. The RPV~experiments were carried out with a pulsed Nd:YAG laser~(wavelength \unit[532]{nm}) and a variable load resistance~$R_L$ as indicated in the figure. From the photovoltage shoulders, which correspond to the transit times of electrons and holes, respectively, very similar carrier mobilities of $\mu_n \approx \unit[10^{-3}]{cm^2\,V^{-1}s^{-1}}$ and $\mu_p \approx \unit[5 \times 10^{-5}]{cm^2\,V^{-1}s^{-1}}$ can be estimated for the CB and DCB samples. While the electron mobility agrees well with the SCLC experiments, the hole mobility is about two times smaller. This might indicate a slight carrier density dependence of the hole transport, as RPV is performed at much lower carrier densities than SCLC, but it is within the typical range when comparing different mobility measurement methods.\cite{Dahlstrom2020}} \label{fig:RPV} \end{figure} \clearpage \section{Transient Photocurrent} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figure_S10.pdf} \caption{Extracted charge~$\Delta Q$ versus applied voltage for CB devices of variable thickness, ranging from \unit[65]{nm}~(top left corner) to \unit[350]{nm}~(bottom right corner), for different background illumination intensities. The voltage axis was corrected for the voltage drop due to the current offset generated by the background light.} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{figure_S11.pdf} \caption{Extracted charge~$\Delta Q$ versus applied voltage for DCB devices of variable thickness, ranging from \unit[45]{nm}~(top left corner) to \unit[320]{nm}~(bottom right corner), for different background illumination intensities. The voltage axis was corrected for the voltage drop due to the current offset generated by the background light.} \end{figure} \clearpage \section{Reaction Order and Light Ideality Factor} Two parameters that serve as fingerprints for the recombination mechanism are the reaction order~$\delta$ and the ideality factor~$n_\text{id}$. The reaction order describes how the recombination rate~$R$ scales with the charge carrier density, $R \propto n^\delta$. For a purely bimolecular process, $\delta = 2$ applies.
Conceptually, the ideality factor describes the slope of the exponential dependence of the recombination rate on voltage, \begin{equation} R = R_0 \exp\left(\frac{qV_\text{oc}}{n_\text{id} kT}\right), \label{eq:R_oc} \end{equation} where $R_0$ is the recombination rate without photogeneration. Table~\ref{tab:rec} lists typical values of~$\delta$ and~$n_\text{id}$ for some relevant recombination processes, namely direct or band-to-band recombination of free carriers, Shockley--Read--Hall recombination via deep traps, and recombination via exponential tails of the density of states. \begin{table} \caption{Typical values of the reaction order~$\delta$ and ideality factor~$n_\text{id}$ for different recombination mechanisms. Note that the values only apply to the case of balanced electron and hole densities~($n = p$). Also given is the parameter~$\xi$ as it is defined in the text.} \begin{tabular}{lccc} \toprule Recombination mechanism & $\delta$ & $n_\text{id}$ & $\xi$\\ \midrule Free-carrier & 2 & 1 & 0.5 \\ Shockley--Read--Hall & 1 & 2 & 0 \\ Tail states & $>2$ & 1--2 & 0.5--1 \\ \bottomrule \end{tabular} \label{tab:rec} \end{table} An alternative representation of the ideality factor is given by \begin{equation} n_\text{id} = \frac{q}{kT} \frac{dV_\text{oc}}{d\ln(j_\text{gen})}, \label{eq:nid} \end{equation} where $j_\text{gen} = qGL$ is the photogenerated current with $G$ being the spatially averaged generation rate. Equation~(\ref{eq:nid}) is based on the assumption that $R = G$ prevails at $V = V_\text{oc}$. Given that $G$ is normally proportional to the light intensity, $n_\text{id}$ is relatively straightforward to determine from the light-intensity dependence of~$V_\text{oc}$. This representation of the ideality factor is also called the light ideality factor and is more reliable than the determination via dark current--voltage curves, which are influenced by the series resistance.\cite{Kirchartz2013} Compared to the ideality factor, the reaction order is much more difficult to determine. Probably the most common method is to measure pairs of values for the carrier lifetime~$\tau$ and the charge density~$n$ by means of transient photovoltage~(TPV) and charge extraction~(CE) measurements, respectively, and to apply the relation~$R = n/\tau$. However, this method is based on the assumption that during the CE experiment all charge carriers are extracted and that there are no recombination losses. To be independent of this assumption, we used an alternative way to determine~$\delta$, which is based solely on photovoltage measurements and is outlined in the following. Several studies have demonstrated that both the carrier density and the carrier lifetime follow an exponential dependence on $V_\text{oc}$, \begin{eqnarray} \label{eq:n} n = n_0 \exp\left(\frac{qV_\text{oc}}{m_n kT}\right),\\ \label{eq:tau_n} \tau = \tau_0 \exp\left(-\frac{qV_\text{oc}}{m_\tau kT}\right), \end{eqnarray} where $n_0$ and $\tau_0$ are proportionality constants, while $m_n$ and $m_\tau$ determine the slopes in a semi-logarithmic representation.\cite{Maurano2011,Foertig2012,Shuttle2008a} The reaction order can be estimated either by plotting $\tau$ against $n$ and fitting the data to $\tau \propto n^{1-\delta}$, or from the slopes of Eqs.~(\ref{eq:n}) and (\ref{eq:tau_n}), \begin{equation} \delta = \frac{m_n}{m_\tau} + 1. \label{eq:delta1} \end{equation} We now want to find an alternative for Eq.~(\ref{eq:delta1}) that does not depend on~$m_n$, owing to the experimental difficulties mentioned above.
We start by dividing Eq.~(\ref{eq:n}) by Eq.~(\ref{eq:tau_n}), which yields the following expression for the recombination rate: \begin{equation} R = \frac{n}{\tau}= \frac{n_0}{\tau_0} \exp\left[\left(\frac{1}{m_n} + \frac{1}{m_\tau}\right)\frac{qV_\text{oc}}{kT}\right]. \label{eq:R_oc_2} \end{equation} Comparison with Eq.~(\ref{eq:R_oc}) yields $R_0 = n_0/\tau_0$ and gives the following reciprocal expression for the ideality factor: \begin{equation} n_\text{id}^{-1} = m_n^{-1} + m_\tau^{-1}. \label{eq:ideality_factors} \end{equation} Foertig~et~al.\cite{Foertig2012} demonstrated that Eqs.~(\ref{eq:R_oc}), (\ref{eq:nid}) and (\ref{eq:ideality_factors}) are equivalent representations of the ideality factor and that static and transient approaches can provide a consistent picture of the recombination in OPVs. Thus, by plugging Eq.~(\ref{eq:ideality_factors}) into Eq.~(\ref{eq:delta1}), the reaction order~$\delta$ can be determined from any two of the three parameters $m_n$, $m_\tau$, and $n_\text{id}$. The corresponding relationship that is relevant for this work is \begin{equation} \delta = \left(1 - \frac{n_\text{id}}{m_\tau}\right)^{-1} = (1-\xi)^{-1}, \label{eq:delta2} \end{equation} where we have introduced the ratio~$\xi = n_\text{id}/m_\tau$. Using Eqs.~(\ref{eq:nid}) and (\ref{eq:tau_n}), the parameter~$\xi$ can be written as \begin{equation} \xi = \frac{n_\text{id}}{m_\tau} = -\frac{d V_\text{oc}}{d\ln(j_\text{gen})}\frac{d\ln(\tau)}{d V_\text{oc}} = -\frac{d\ln(\tau)}{d\ln(j_\text{gen})}, \label{eq:xi} \end{equation} where the minus sign stems from the definition of~$m_\tau$ in Eq.~(\ref{eq:tau_n}). This implies that the lifetime follows a power law of the form~$\tau \propto G^{-\xi}$. Hence, it is instructive to determine $\xi$ using TPV~measurements, which probe the small-perturbation lifetime~$\tau_{\Delta n}$ at various background photogeneration rates~$G$. Using the relationship~$\tau = \delta\tau_{\Delta n}$ as demonstrated in Ref.~\citenum{Maurano2011}, we finally arrive at: \begin{equation} \tau_{\Delta n} \propto (1-\xi)\,G^{-\xi}. \label{eq:taudn} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{figure_S12_a.pdf} \hfill \includegraphics[width=0.49\textwidth]{figure_S12_b.pdf} \caption{(a)~TPV~lifetime~$\tau_{\Delta n}$ and open-circuit voltage~$V_\text{oc}$ as a function of the photogeneration rate for \unit[300]{nm} thick devices. Straight lines are fits to Eq.~(\ref{eq:taudn}), from which the parameter~$\xi$ was derived. The generation rate was estimated from transfer-matrix calculations. (b)~Exponent~$\xi$~(triangles), light ideality factor~$n_\text{id}$~(diamonds), and apparent reaction order~$\delta = (1-\xi)^{-1}$~(circles) as a function of the active-layer thickness.} \label{fig:Rec_1_2} \end{figure} Figure~\ref{fig:Rec_1_2}a validates this relationship for both thick CB and DCB devices. Note that at large thickness, the geometrical capacitance is low, so that the measured lifetimes are supposed to represent the recombination dynamics of photogenerated charges rather than the capacitive discharging of the device.\cite{Kiermasch2018} Hence, fitting the experimental data to Eq.~(\ref{eq:taudn}) and using~$\delta = (1 - \xi)^{-1}$ gives an estimate of the reaction order as long as~$\xi$ is not too close to unity. A minimal numerical sketch of this extraction is given below.
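The sketch assumes arrays of generation rates, TPV lifetimes and open-circuit voltages; synthetic bimolecular test data are used in place of measurements:
\begin{verbatim}
import numpy as np

q, kB, T = 1.602e-19, 1.381e-23, 300.0

def recombination_fingerprints(G, tau_dn, Voc):
    """xi from the power law tau_dn ~ G^(-xi), the apparent reaction
    order delta = 1/(1 - xi), and the light ideality factor n_id."""
    xi = -np.polyfit(np.log(G), np.log(tau_dn), 1)[0]
    delta = 1.0 / (1.0 - xi)          # meaningful while xi is not ~1
    n_id = (q / (kB * T)) * np.polyfit(np.log(G), Voc, 1)[0]
    return xi, delta, n_id

# synthetic bimolecular test case: expect xi = 0.5, delta = 2, n_id = 1
G = np.logspace(25, 27, 10)           # generation rates (m^-3 s^-1)
tau_dn = 0.5 / np.sqrt(2e-18 * G)     # tau = (k2*G)^(-1/2), tau_dn = tau/2
Voc = 0.50 + (kB * T / q) * np.log(G / G[0])
print(recombination_fingerprints(G, tau_dn, Voc))
\end{verbatim}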
While for the CB~device the whole experimental range is characterized by a single slope, the DCB~device shows a transition towards lower ideality factors at high generation rates~(${\sim}\unit[10^{27}]{m^{-3}s^{-1}}$), roughly corresponding to 1-sun illumination. Hence, the following analysis is limited to moderate photogeneration rates, where the slopes for both kinds of devices are well defined. Figure~\ref{fig:Rec_1_2}b shows the extracted values of~$\xi$ and~$\delta$ for the complete thickness series, together with the light ideality factor according to Eq.~(\ref{eq:nid}). For the CB~devices, $\delta$ and~$n_\text{id}$ exhibit an initial decrease with thickness, but reach fairly constant values of~$\delta = 2.27 \pm 0.08$ and~$n_\text{id} = 1.07 \pm 0.02$ between~$L = 150$ and~\unit[350]{nm}. The strongly increased reaction order and ideality factor for the 65-nm device may be due to the importance of spatial carrier gradients\cite{Kirchartz2012} or capacitive discharging effects.\cite{Kiermasch2018} In contrast, for the DCB~devices the trend with thickness is less clear and the data generally show a larger scatter; however, the reaction order and the ideality factor are clearly larger than for the CB devices, ranging from $\delta = 2.7$ to~3.9 and from $n_\text{id} = 1.4$ to slightly above~2. Thus, the DCB~devices clearly show the fingerprints of recombination via exponential tails, while the behavior of the CB~devices is much closer to the expectation for a direct bimolecular recombination mechanism. However, as we discuss in the main text, it is not likely that the DCB~devices have more or deeper tail states. Instead, we assume that direct bimolecular recombination is more strongly suppressed than in the CB~devices. This assumption is also supported by the fact that the transition of the ideality factor to the bimolecular regime~(see Figure~\ref{fig:Rec_1_2}a) only occurs at much higher generation rates. \clearpage \section{Estimation of Recombination Rate Constant} \begin{figure}[h] \centering \includegraphics[width=0.6\linewidth]{figure_S13.pdf} \caption{Estimation of the recombination rate constant~$k_2$ according to $\tau = (k_2 G)^{-1/2}$~(see main text) for thick CB devices with $L > \unit[150]{nm}$. The total charge carrier lifetime was approximated from the small-perturbation lifetime via $\tau = 2\tau_{\Delta n}$ under the assumption of a strictly bimolecular process. As can be seen, the data points for high generation rates~(i.e., low values of $G^{-1/2}$) collapse onto one straight line, and a linear fit yields a rate constant of~$k_2 = \unit[2 \times 10^{-18}]{m^3s^{-1}}$.} \label{fig:tau_vs_genRate} \end{figure} \clearpage \section{Neher Model} \begin{figure}[h] \centering \includegraphics[width=\linewidth]{figure_S14.pdf} \caption{Light-intensity dependent $j$--$V$ curves of 300-nm thick devices~(data points) in comparison to the modified Shockley model by Neher~et~al.\cite{Neher2016}~(dashed lines) for recombination rate constants of $k_2 = \unit[2 \times 10^{-12}]{cm^3\,s^{-1}}$~(CB) and $k_2 = \unit[1 \times 10^{-13}]{cm^3\,s^{-1}}$~(DCB).
The deviations for the CB~device at high light intensities are due to the build-up of space charge, which is further discussed in the main text.} \end{figure} \clearpage \section{Energy-Level Diagrams} \begin{figure} \centering \includegraphics[width=0.9\textwidth]{figure_S15.pdf} \caption{Energy-level diagrams under short-circuit conditions for \unit[300]{nm} thick solar cells calculated with a drift--diffusion model.\cite{Burgelman2000,Wilken2020} In both the CB and DCB~device, space charge builds up with increasing light intensity due to the imbalanced charge transport. The width of the space-charge region is about \unit[160]{nm} at 1-sun illumination. Note that the energy levels are independent of the recombination rate constant~$k_2$.} \label{fig:figureS1} \end{figure}
\section{Introduction} \label{sec:introduction} The landmark 2017 multimessenger detection of a binary neutron star merger \cite{Abbott2017} confirmed an evolutionary channel of compact objects that had been theoretically speculated for decades \cite{Lattimer1974,Li1998a,Rosswog2003}. In addition, a handful of other events with less significance but consistent with binary neutron star mergers have been observed without gravitational waves \cite{Tanvir2013a,Yang2015}, without an electromagnetic counterpart \cite{Abbott2020}, or as a possible neutron star--black hole merger \cite{Abbott2020a}. The promise of future multimessenger detections \cite{Pol2019} provides the possibility of constraining the nuclear equation of state \cite{Margalit2019}, confirming the origin of heavy elements in the universe \cite{Cote2018}, and understanding the nature of the engine that drives gamma-ray bursts \cite{Abbott2019}. After years of purely theoretical work and a flurry of papers following GW170817, a standard model of the merger dynamics has emerged (see \cite{Shibata2019,Radice2020,Metzger2020} for recent reviews), though this model is incomplete and numerical simulations still exhibit a good deal of uncertainty \cite{Foucart2016a,Foucart2018a,Radice2018a,Foucart2019}. During the inspiral and merger of a pair of neutron stars, tidal ejecta are launched in the equatorial regions with a very low electron fraction, which leads to a red transient due to efficient production of heavy elements via the rapid neutron capture process. There remains a central compact object that can be a stable neutron star, a temporarily stable hypermassive neutron star, or a black hole, depending on the details of the merging system and currently uncertain details of the nuclear equation of state (e.g., \cite{Radice2018b}). Around the central object is a hot and dense accretion disk, the mass of which depends on all of the same factors as well \cite{Radice2018a}. Emission of neutrinos allows the disk to cool and raises the electron fraction toward 0.5. The accretion disk itself also launches a significant amount of matter due to a combination of viscous or MHD stresses and neutrino heating~\cite{Miller2019b,Fernandez2020}, and may launch a polar jet via MHD stresses \cite{Blandford1977,Ruiz2020} or neutrino pair annihilation \cite{Eichler1989,Just2016,Perego2017,Fujibayashi2017}. In any case, neutrinos play a significant role in transporting energy and lepton number, which can determine the amount and composition of the ejecta and thereby the electromagnetic observables like the brightness, color, and duration of a kilonova \cite{Metzger2014,Radice2018a}. The ejected matter enshrouds the inner mass ejection engine, obscuring our view of the details of the accretion disk dynamics. Because of this, we resort to theoretical and numerical models to interpret and predict observables, but the combination of turbulent hydrodynamics, strong spacetime curvature, an uncertain nuclear equation of state, and strong asymmetries makes understanding the dynamics of the radiation a challenging problem. The general relativistic quantum kinetic equations must be solved to fully understand the neutrino dynamics \cite{Vlasenko2014,Volpe2015,Blaschke2016}. Though efforts are being made to understand the role of neutrino flavor changes in neutron star mergers (e.g., \cite{Wu2017c}), state-of-the-art dynamical models generally ignore this wrinkle and solve or approximate the Boltzmann equation.
The most exact methods currently employed are Monte Carlo methods (e.g., \cite{Miller2019b}). Such methods are prohibitively expensive in extremely dense regions like the interior of a hypermassive neutron star and for carrying out simulations beyond a couple hundred milliseconds. Other exact neutrino transport methods have been used in the context of multidimensional core-collapse supernova simulations, but have not yet been tested in an environment as relativistic as a neutron star merger (e.g., \cite{Sumiyoshi2012,Nagakura2014,Nagakura2017a}). The most efficient approximate treatments of neutrinos in merger disks employ the leakage scheme \cite{Metzger2014,Siegel2018,Fernandez2020}, the advanced spectral leakage scheme \cite{Gizzi2019}, or a combination of leakage and moment treatments \cite{Sekiguchi2015,Radice2016a,Fujibayashi2017}. Two-moment methods \cite{Thorne1981,Shibata2011,Cardall2013} are a more sophisticated and very popular approximation to the Boltzmann equation in all types of astrophysical accretion disks, including protoplanetary disks \cite{Fuksman2020}, active galactic nuclei \cite{Gonzalez2007,Sadowski2013,Skinner2013,Fragile2014,Jiang2014,Stone2020}, and compact object merger remnants \cite{Foucart2018a,Fujibayashi2020}. However, these methods still require some scheme for estimating higher-rank moments in order to complete the system of equations. While methods exist to evaluate these higher moments using the method of short characteristics (e.g., \cite{Davis2012,Jiang2019,Weih2020}) or potentially Monte Carlo transport \cite{Foucart2018}, it is much more efficient to use an analytic closure method to determine the higher moments as a function only of local quantities \cite{Thorne1981,Shibata2011,Cardall2013}. The analytic closure is implemented by expressing the pressure tensor as a function of the energy density and flux, most often according to the Levermore closure \cite{Levermore1984}. One of the exciting features of moment-based methods is that, if the closure is exact, the evolution equations for the evolved moments are exact. There has been a great deal of effort to find a closure relation that approaches this ideal, though largely restricted to one dimension (see \cite{Cernohorsky1994,Murchikova2017} for summaries). \cite{Foucart2018a} analyzed the difference between radiation fields evolved using a dynamical gray moment method and a Monte Carlo method evolved in parallel but without feedback to the fluid. They showed that moment methods fail to accurately reproduce neutrino average energies in the equatorial region, neutrino densities in the polar region, and neutrino pair annihilation rates, likely affecting mass outflow from the disk. The question naturally remains, however, of whether it is possible to invent a local closure that is adequately realistic. \cite{Iwakami2020} demonstrated that in multidimensional core-collapse supernova simulations, the pressure tensor can be significantly misaligned with the flux vector, violating a fundamental assumption that goes into the closure approximation. In addition, although the closure for the rank-2 moment (pressure tensor) is often discussed, there has not yet been an analysis of the quality of the rank-3 moment closure needed for spectral moment schemes.
In this paper we extend the maximum entropy Fermi-Dirac (MEFD) closure to include the rank-3 moments, investigate how a realistic radiation field breaks the fundamental assumptions used by any local analytic closure, compare several closures suggested in the literature, and identify where the choice of closure has the largest impact. The paper is organized as follows. In Section~\ref{sec:SedonuGR} we introduce our upgraded general-relativistic Monte Carlo radiation transport code. We discuss the analytic closure approximation in Section~\ref{sec:closure} and proceed to derive the maximum-entropy Fermi-Dirac closure for the rank-3 moment tensor. We calculate a steady-state neutrino radiation field on a single snapshot of a neutron star merger simulation in Section~\ref{sec:results}, carefully inspect the validity of the assumptions that go into the analytic closures, quantify errors from several closures in the literature, and assess the impact that the closure choice has on the neutrino pair annihilation rate. Finally, we conclude in Section~\ref{sec:conclusions} with a discussion of the possibility of improving the closure in a general way. We focus our attention on the neutron star merger environment, but many of the conclusions in this paper are relevant to any system where moment-based radiation transport algorithms are used. \section{Discrete General Relativistic Monte Carlo Transport} \label{sec:SedonuGR} \texttt{SedonuGR} is a time-independent general-relativistic (GR) neutrino radiation transport code that operates in zero- (spatially homogeneous) to three-dimensional systems. It is a heavily modified and upgraded version of the special-relativistic \texttt{Sedonu} neutrino transport code \cite{Richers2015, Richers2017a} and is publicly available \cite{SedonuGR}\footnote{\url{github.com/srichers/SedonuGR}}. In this section we describe the code in detail; several code tests are presented in Appendix~\ref{app:code_tests}. The general relativistic Boltzmann equation describes the evolution of the neutrino distribution $f_\epsilon$ as \cite{Thorne1981} \begin{equation} \frac{d x^\alpha}{d\tau}\frac{\partial f_\epsilon}{\partial x^\alpha} + \frac{d k^i}{d\tau}\frac{\partial f_\epsilon}{\partial k^i} = -k^\alpha u_\alpha S\,\,, \end{equation} where $d\tau$ is an interval of time in the rest frame of the background fluid, $x^\alpha$ is the neutrino position, $k^\alpha$ is the neutrino four-momentum (in units of energy), $u^\alpha$ is the four-velocity, and $S$ is a source term that accounts for collisions. The current goal of \texttt{SedonuGR} is to solve the time-independent version of this equation, namely under the assumption that $\partial_t f_\epsilon=0$. We solve this equation via Monte Carlo transport \cite{Haghighat2015} by discretizing the distribution function into a finite number of neutrino packets, each of which undergoes random emission, propagation, and scattering just as real individual neutrinos would. This section is an exposition of the details of the method; the reader interested in the results of the calculations can jump to Section~\ref{sec:results}.
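Conceptually, each packet carries a small amount of state. As a schematic only (the actual implementation is in C++, and the field names below are illustrative rather than \texttt{SedonuGR}'s real data layout), the state of a packet can be pictured as:
\begin{verbatim}
from dataclasses import dataclass
import numpy as np

@dataclass
class Packet:
    x: np.ndarray    # lab-frame position x^alpha (4-vector)
    k: np.ndarray    # lab-frame wavevector k^alpha (null 4-vector)
    N: float         # packet weight: number of neutrinos represented
    species: int     # e.g., 0: nu_e, 1: anti-nu_e, 2: nu_x
\end{verbatim}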
\subsection{Coordinates} \label{sec:coordinates} The Monte Carlo neutrino packets use the standard 3+1 Cartesian metric with the (-+++) sign convention describing coordinates in the \textit{lab frame}: \begin{equation} \begin{aligned} g_{\alpha\beta} &= \begin{bmatrix} -\alpha^2 + \beta_\alpha\beta^\alpha & \beta_i \\ \beta_j & \gamma_{ij} \end{bmatrix}\,\,,\\ g^{\alpha\beta} &= \begin{bmatrix} -\alpha^{-2} & \alpha^{-2}\beta^i \\ \alpha^{-2}\beta^j & \gamma^{ij} - \alpha^{-2}\beta^i\beta^j \end{bmatrix}\,\,,\\ \end{aligned} \end{equation} where $\alpha$ is the lapse, $\beta^\alpha$ is the shift vector, and $\gamma_{ij}$ is the three-metric. A location is specified with a four-coordinate $x^\mu$ and a neutrino momentum is specified with a wavevector $k^\alpha$. All particle motion is done in the lab frame in 3D Cartesian coordinates. The shift vector can be chosen freely \cite{Baumgarte2013}, affecting how the spacetime is evolved. Since we are not allowing the spacetime to evolve, we must choose the shift vector in a way that is consistent with the volume of a spatial cell not changing in time. That is, we must choose $\beta^i$ such that the extrinsic curvature vanishes, which is most simply done by choosing $\beta^i=0$. With the metric quantities and the three-velocity in these coordinates given by a simulation snapshot, we reconstruct the dimensionless four-velocity as \begin{equation} \begin{aligned} u^t &= \frac{W}{\alpha}\,\,, \\ u^i &= W\left(\frac{v^i}{c} - \frac{\beta^i}{\alpha}\right)\,\,, \end{aligned} \end{equation} where $W = 1/\sqrt{1-\gamma_{ij}v^i v^j/c^2}$. In addition to the \textit{lab} frame, we also require the ability to define a local comoving orthonormal \textit{tetrad} defined by orthonormal basis vectors $e_{(\mu)}^\alpha$. This frame is used when performing neutrino-fluid interactions or aggregating the radiation field. When constructing a tetrad at a particular location, the timelike basis vector is the fluid four-velocity at that location ($e^\alpha_{(t)} = u^\alpha$), yielding a comoving coordinate system. Following \cite{Dolence2009}, we provide a trial vector, subtract off the components along each of the previously determined basis vectors, and renormalize the vector to unit magnitude. To make a trial vector $e_\mathrm{trial}^\alpha$ orthogonal to another vector $l^\alpha$, we set $e_\mathrm{trial}^\alpha \gets e_\mathrm{trial}^\alpha - \left(e_\mathrm{trial}^\beta l_\beta / l^\beta l_\beta\right) l^\alpha$. To normalize, we set $e_\mathrm{trial}^\alpha \gets e_\mathrm{trial}^\alpha / \sqrt{e_\mathrm{trial}^\beta e_{\mathrm{trial},\beta}}$. Once the basis vectors are established, we can transform a four-vector into the comoving tetrad basis and back out via \begin{equation} \label{eq:tet_transform} \begin{aligned} k^\alpha_\mathrm{tet} &= k^\mu e_{(\alpha)}^{\nu}g_{\mu\nu}\,\,,\\ k^\alpha &= k^\mu_\mathrm{tet} e_{(\mu)}^{\alpha}\,\,. \end{aligned} \end{equation} All Monte Carlo packets use these coordinates independent of the structure of the underlying data. The background fluid and metric data are stored on a grid that the neutrino packets do not reference directly, and a grid class specific to each type of geometry (homogeneous, 1D spherical, 2D spherical, 3D Cartesian) interpolates all quantities and derivatives (Section~\ref{sec:interpolation}) to the MC particle position $x^\alpha$ and momentum $k^\alpha$ in these Cartesian coordinates.
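A minimal numerical sketch of this Gram--Schmidt construction (Python with \texttt{numpy} for illustration; this is not the C++ implementation). Dividing by $l^\beta l_\beta$ handles the sign of the timelike vector automatically:
\begin{verbatim}
import numpy as np

def g_dot(g, a, b):
    """Inner product a^mu g_{mu nu} b^nu for a 4x4 metric g."""
    return a @ g @ b

def build_tetrad(g, u, trials):
    """Orthonormal comoving tetrad: e_(t) = u, then subtract projections
    onto previously determined basis vectors and normalize each trial."""
    basis = [np.asarray(u, float)]
    for t in trials:
        e = np.asarray(t, float)
        for l in basis:
            e = e - (g_dot(g, e, l) / g_dot(g, l, l)) * l
        basis.append(e / np.sqrt(g_dot(g, e, e)))
    return basis

# quick check with a boosted observer in Minkowski space
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
v = 0.5
W = 1.0 / np.sqrt(1.0 - v * v)
u = np.array([W, W * v, 0.0, 0.0])
tet = build_tetrad(eta, u, [[0, 0, 0, 1], [0, 0, 1, 0], [0, 1, 0, 0]])
# e_(a) . e_(b) should be diag(-1, 1, 1, 1)
print(np.round([[g_dot(eta, a, b) for b in tet] for a in tet], 12))
\end{verbatim}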
Below, we describe how the neutrino packets interface with background data stored in three-dimensional Cartesian and one-dimensional spherical-polar grids. In addition to the spatial grid, we also use a discrete energy grid with $N_G$ bins, centered at energy $\omega_i$ and with grid boundaries at $\omega_{i-1/2}$ and $\omega_{i+1/2}$, where $i$ ranges from 1 to $N_G$. The neutrino energies are defined as $\omega=-k^\alpha u_\alpha$, or simply the energy in the comoving frame. \subsubsection{1D Spherical-Polar Background} We use the 1D spherical geometry in the code tests in Appendix~\ref{app:code_tests}. In spherical symmetry, the metric in general is $ds^2 = -\alpha^2 dt^2 + X^2 dr^2 + r^2d\Omega^2$, where the spherical coordinates can be expressed in terms of the Cartesian coordinates as $r=\sqrt{x^i x^i}$, $\theta=\cos^{-1}(x^z/r)$, and $\phi=\tan^{-1}(x^y/x^x)$. We store and interpolate $\alpha$, $X$, and $v^r$ to a given neutrino position in the spherical grid. We can then reconstruct the three-dimensional three-velocity components and metric as \begin{equation} \begin{aligned} v^i &= v^r \frac{x^i}{r}\,\,,\\ \gamma_{ij} &= \frac{x^i x^j}{r^2} (X^2-1) + \delta_{ij}\,\,. \end{aligned} \end{equation} Given the radial derivatives of $\alpha$ and $X$, we can also reconstruct the Christoffel symbols as \begin{equation} \begin{aligned} &\Gamma^t_{tt} = \Gamma^t_{ij} = 0\,\,, \\ &\Gamma^t_{ti} = \frac{x^i}{r} \frac{\partial_r \alpha}{\alpha} \,\,,\\ &\Gamma^a_{tt} = \frac{x^a}{r} \frac{\alpha \partial_r \alpha}{X^2}\,\,,\\ &\begin{aligned} \Gamma^a_{ij} = \frac{x^a}{(Xr)^2} &\left[\frac{x^i x^j}{r^2}(1-X^2 + rX \partial_r X)\right. \\ &\left.- \delta_{ij} (1-X^2)\right]\,\,. \end{aligned} \end{aligned} \end{equation} In the above, radial derivatives are computed via finite differencing between the nearest neighbors, and time derivatives are assumed to be zero. When constructing a comoving orthonormal tetrad, our trial vectors are $\{0, xz, yz, -\widetilde{r}^2\}$, $\{0, -y, x, 0\}$, and $\{0, x, y, z\}$, where $\widetilde{r}^2 = x^2+y^2$. These correspond to the $\theta$, $\phi$, and $r$ directions, respectively. \subsubsection{3D Cartesian Background} \label{sec:background} The 3D Cartesian grid class directly stores and interpolates $\alpha$, $\beta^i$, $v^i$, the six independent components of $\gamma_{ij}$, and the 40 independent components of the connection coefficients $\Gamma^\alpha_{\mu\nu}$ in the same Cartesian coordinates described above. The connection coefficients are computed using $\Gamma^\alpha_{\mu\nu} = \frac{1}{2}g^{\alpha\eta}\left(g_{\mu\eta,\nu}+g_{\eta\nu,\mu}-g_{\mu\nu,\eta}\right)$, where spatial derivatives are computed via finite differencing with nearest neighbors in each direction and then interpolated; time derivatives are assumed to be zero. When constructing a comoving orthonormal tetrad, the three spacelike trial vectors are chosen to be $e_{(z,\mathrm{trial})}^\alpha=\{0,0,0,1\}$, $e_{(y,\mathrm{trial})}^\alpha=\{0,0,1,0\}$, and $e_{(x,\mathrm{trial})}^\alpha=\{0,1,0,0\}$. \subsection{Interpolation} \label{sec:interpolation} We perform multi-dimensional interpolation for the values and derivatives of metric quantities and neutrino-fluid interaction rates at a neutrino's position $x^\mu$ and momentum $k^\mu$. For a position in a general $n$-dimensional grid, there are $2^n$ grid points that define the hyper-cube enclosing the position. We denote the left and right coordinates of those points along each coordinate direction $k$ by $x_0^k$ and $x_1^k$, respectively.
We denote the value of the function at the corners by $f_{i_1 i_2 \ldots i_n}$, where each $i_k$ can be either 0 (left) or 1 (right). The value of the function at $x$, linearly interpolated in each dimension, is then a sum over the function values at the corners multiplied by appropriate weights: \begin{equation} f(\mathbf{x}) = \sum_{i_1=0}^1 \sum_{i_2=0}^1 \ldots \sum_{i_n=0}^1 W_{i_1 i_2 \ldots i_n} f_{i_1 i_2 \ldots i_n}\,\,, \end{equation} where the weights are \begin{equation} \begin{aligned} W_{i_1 i_2 \ldots i_n} &= \frac{1}{V}\prod_k \delta x^k(i_k) \,\,,\\ \delta x^k(i_k) &= \begin{cases} x_1^k - x^k & i_k=0\\ x^k - x_0^k & i_k=1 \end{cases}\,\,,\\ V &= \prod_k (x_1^k - x_0^k)\,\,. \end{aligned} \end{equation} Similarly, the derivative of the function along direction $d$ is \begin{equation} \partial_d f(\mathbf{x}) = \sum_{i_1=0}^1 \sum_{i_2=0}^1 \ldots \sum_{i_n=0}^1 (S_d)_{i_1 i_2 \ldots i_n} f_{i_1 i_2 \ldots i_n}\,\,, \end{equation} where the weights are \begin{equation} (S_d)_{i_1 i_2 \ldots i_n} = \frac{2i_d-1}{V} \prod_{k \neq d} \delta x^k(i_k)\,\,. \end{equation} The weights can be computed once for each position and used to interpolate all quantities. As a side note, we also explored a discontinuous, cell-local linear interpolation of variables. In this method, the values and derivatives in each direction at the cell center are stored. This has the advantage that interpolation is much faster, but it also requires more storage. Interpolating in $N$ dimensions is simply $f(\mathbf{x}) = f(\mathbf{x}_0) + \sum_{i} (x^i-x^i_0)\partial_i f(\mathbf{x}_0)$, where $\mathbf{x}_0$ is the position of the grid point nearest to $\mathbf{x}$, and $f(\mathbf{x}_0)$ and $\partial_i f(\mathbf{x}_0)$ are stored. However, we found that the discontinuities in metric quantities across cell boundaries were problematic for the integration of the neutrino momenta. In general, once the neutrino moves across a cell boundary, the jump in the metric causes the stored neutrino four-momentum to no longer be null. We tried several methods of null-normalizing the momenta of neutrinos that cross the boundaries, as well as special integration steps for neutrinos that cross cell boundaries, but the induced errors always led to unrealistic momenta in neutrino packets that cross many boundaries, as in scattering-dominated regions. \subsection{Emission} \label{sec:emission} A specified number of neutrino packets $n_{\mathrm{emit},ig}$ is emitted from each grid cell labeled by index $i$, in each energy bin labeled by index $g$, and for each species. Each of these packets is given uniform random coordinates within the grid zone (uniform values of $r^3$, $\cos\theta$, and $\phi$ for 1D spherical coordinates) and a random comoving-frame frequency uniform in $\nu^3$ within the energy bin $g$. The metric and fluid quantities are interpolated to the packet's position and momentum, the local orthonormal tetrad is constructed, and an isotropic random direction is given to the packet in the tetrad frame. The packet weight is then set to \begin{equation} \begin{aligned} N_0 = & \frac{1}{n_{\mathrm{emit},ig}}c^2 B_{s} \kappa_{{\mathrm{abs}},s}4\pi \Delta\left(\frac{\nu^3}{3}\right)_g \mathcal{V}_i \,\,, \end{aligned} \end{equation} where $s$ is the species index and $i$ is the spatial grid cell index.
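A sketch of the position and frequency sampling just described (illustrative Python; uniform-in-volume radial sampling is achieved by drawing $r^3$ uniformly, and frequencies are drawn uniformly in $\nu^3$):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def sample_position_spherical(r_in, r_out):
    """Uniform-in-volume position in a spherical shell: draw r^3,
    cos(theta), and phi uniformly, then convert to Cartesian."""
    r = rng.uniform(r_in**3, r_out**3) ** (1.0 / 3.0)
    mu = rng.uniform(-1.0, 1.0)            # cos(theta)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    s = np.sqrt(1.0 - mu * mu)
    return r * np.array([s * np.cos(phi), s * np.sin(phi), mu])

def sample_frequency(nu_lo, nu_hi):
    """Comoving-frame frequency uniform in nu^3 within an energy bin."""
    return rng.uniform(nu_lo**3, nu_hi**3) ** (1.0 / 3.0)
\end{verbatim}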
The effective zone four-volume (i.e., that using interpolated metric quantities) is $\mathcal{V}_i = V_{\mathrm{coord},i} \sqrt{\det(\mathbf{\gamma})} (-n^\alpha u_\alpha) \Delta t$, where $V_{\mathrm{coord},i}$ is the coordinate volume of the grid cell $i$ and $n^\alpha$ is the unit vector normal to the time slice (equivalently, the four-velocity of an Eulerian observer). We multiply this by the square root of the determinant of the three-metric $\mathbf{\gamma}$, the Lorentz factor $-n^\alpha u_\alpha$, and an arbitrary coordinate time interval $\Delta t=1\,\mathrm{s}$ to get the comoving four-volume of the grid cell. The $\Delta t$ is arbitrary because all quantities are divided by $\Delta t$ to yield rates, so it always cancels out. $\kappa_{\mathrm{abs},s}(\nu,\mathbf{x})$ is the absorption opacity of species $s$. In order to maintain consistency, instead of interpolating the emissivity, we interpolate the fluid temperature $T$ and the electron neutrino equilibrium chemical potential $\mu_{\nu_e}=\mu_e + \mu_p - \mu_n$ given by the equation of state. We then compute the emissivity as the product of the absorption opacity with the blackbody function \begin{equation} B_s(\nu,T,\mu_s) = \frac{1}{1+\exp[(h\nu-\mu_s)/k_B T]}\,\,. \end{equation} The chemical potentials of the different species are determined by $\mu_{\bar{\nu}_e}=-\mu_{\nu_e}$ and $\mu_{\nu_x}=0$. We record the contribution to the volume-specific four-force exerted by the neutrino radiation on the fluid via emission $\mathcal{F}_{\mathrm{emit},i}^\alpha$ and the rate of change of lepton number $\mathcal{L}_{\mathrm{emit},i}$ in spatial grid zone $i$ in the comoving tetrad frame as \begin{equation} \label{eq:fourforce_emission} \begin{aligned} \delta \mathcal{F}_{\mathrm{emit},i}^\alpha &= -\frac{1}{\mathcal{V}_i} N_0 k^\alpha_{\mathrm{tet},q}\,\,,\\ \delta \mathcal{L}_{\mathrm{emit},i} &= -\frac{1}{\mathcal{V}_i} N_0 l_s\,\,, \end{aligned} \end{equation} where $l_s$ is the lepton number of species $s$ (1 for $\nu_e$, -1 for $\bar{\nu}_e$, and 0 for $\nu_x$). \subsection{Standard Transport} \label{sec:propagation} Here we describe the standard method, used where the transport is not scattering-dominated; the scattering-dominated method is described in Section~\ref{sec:randomwalk}. Each neutrino packet takes a series of small steps with lengths determined by the neutrino-fluid interaction rates and the distance from fluid grid cell boundaries. We express the path length of the neutrino packet in terms of a distance in the comoving tetrad $ds_\mathrm{move}$, which is a proxy for the interval in the affine parameter along the trajectory $d\lambda = ds_\mathrm{move}/k_\mathrm{tet}^t$. Each time the particle moves, the distance it moves is the smaller of the grid distance and an interaction distance. That is, \begin{equation} \label{eq:ds_move} ds_\mathrm{move} = \min(ds_\mathrm{grid}, ds_\mathrm{interact})\,\,. \end{equation} The tetrad-frame distance to the next scattering event is randomly sampled as \begin{equation} ds_\mathrm{interact}=-(\ln U)/\kappa_\mathrm{scat}\,\,, \end{equation} where $U$ is a uniform random number between 0 and 1. In order to keep the step size from being too large (to appropriately sample each grid cell) or too small (to prevent spending computer time on particles close to the boundary), we set the grid distance to \begin{equation} \label{eq:ds_grid} ds_\mathrm{grid} = \min(\max(ds_\mathrm{boundary},ds_{\min}), ds_{\max})\,\,.
\end{equation} We estimate the comoving tetrad-frame distance to the next grid cell boundary $ds_\mathrm{boundary}$ as \begin{equation} ds_\mathrm{boundary} \approx k_\mathrm{tet}^t \min_i\left[ (x^i - x^i_\pm) / (g_{\mu\nu}e_{(i)}^\mu k^\nu)\right]\,\,, \end{equation} where $i$ ranges from 1 to the number of dimensions in the background fluid profile. $x^i_\pm$ is the grid cell boundary coordinate to the right/left of $x^i$ if $g_{\mu\nu}e_{(i)}^\mu k^\nu$ is larger/smaller than 0, respectively. In spherical symmetry, for example, this becomes $ds_\mathrm{boundary} = k_\mathrm{tet}^t (r-r_\pm) / (g_{\mu\nu}e_{(r)}^\mu k^\nu)$. We use $ds_{\min}=0.05 ds_\mathrm{zone}$ and $ds_{\max}=0.5 ds_\mathrm{zone}$, where \begin{equation} ds_\mathrm{zone} = k_\mathrm{tet}^t \min_i\left[ (x^i_+ - x^i_-) / (g_{\mu\nu}e_{(i)}^\mu k^\nu)\right]\,\,. \end{equation} We integrate the particle position and momentum using a kick-drift-kick method. That is, to find the particle position and momentum at step $q+1$ based on the position and momentum at step $q$, we use \begin{equation} \begin{aligned} k^\alpha_{q+1/2} &= k^\alpha_q - \frac{d\lambda}{2}\Gamma^\alpha_{\mu\nu}(\mathbf{x}_q)k^\mu_q k^\nu_q \,\,, \\ x^\alpha_{q+1} &= x^\alpha_q + d\lambda\, k^\alpha_{q+1/2}\,\,,\\ k^\alpha_{q+1} &= k^\alpha_{q+1/2} - \frac{d\lambda}{2}\Gamma^\alpha_{\mu\nu}(\mathbf{x}_{q+1})k^\mu_{q+1/2} k^\nu_{q+1/2}\,\,. \end{aligned} \end{equation} We then renormalize $k^\alpha$ by scaling each spatial component of $k^\alpha$ by the same factor to ensure that $k^\alpha k_\alpha=0$. In principle, not all four components of the four-momentum are independent, as they are constrained by the requirement that the vector remain null. One could, for example, just integrate the spatial components and set the time component to ensure the vector is null. However, this can cause the truncation error to preferentially go to the time component. We scale the spatial components, since we find that the errors introduced by scaling the time component can lead some neutrino packets to have unrealistically large momenta. In addition, for a static spacetime one can in principle leverage the fact that $k_t$ must remain constant, resulting in only two independent quantities. However, \cite{Dolence2009} note that straightforward geodesic integration tends to be simpler to implement and modify, and also faster. In the 3D Cartesian calculations in Section~\ref{sec:results}, there is a reflection symmetry boundary on the $z=0$ plane. The other boundaries are outflow boundaries. If the packet passes through the reflection boundary, the $z$-components of $k_\alpha$ and $x^\alpha$ are negated. The neutrino packet is immediately destroyed if $g_{tt}(\mathbf{x}_\mathrm{new})\geq 0$ (i.e., the packet passed a coordinate singularity). Finally, the packet is destroyed when it passes through an outflow boundary. \subsubsection{Absorption} \label{sec:absorption} Following propagation, the packet weight is reduced to $N_{q+1} = N_q e^{-\eta}$ to account for absorption. We calculate the absorption optical depth as $\eta = \kappa_{\mathrm{abs},q} ds_\mathrm{move}$.
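A compact sketch of the kick-drift-kick update and the null renormalization described above (Python with \texttt{numpy} for illustration; \texttt{christoffel} stands in for the interpolation machinery of Section~\ref{sec:interpolation} and is an assumed callable, not part of the actual code):
\begin{verbatim}
import numpy as np

def geodesic_step(x, k, dlam, christoffel):
    """One kick-drift-kick geodesic step. 'christoffel(x)' is assumed to
    return Gamma^alpha_{mu nu} interpolated to x as a 4x4x4 array."""
    G = christoffel(x)
    k_half = k - 0.5 * dlam * np.einsum('amn,m,n->a', G, k, k)  # kick
    x_new = x + dlam * k_half                                   # drift
    G = christoffel(x_new)
    k_new = k_half - 0.5 * dlam * np.einsum('amn,m,n->a',
                                            G, k_half, k_half)  # kick
    return x_new, k_new

def renormalize_null(g, k):
    """Rescale the spatial components of k by a common factor s so that
    k.k = 0, solving the quadratic a s^2 + b s + c = 0 (positive root)."""
    a = k[1:] @ g[1:, 1:] @ k[1:]
    b = 2.0 * k[0] * (g[0, 1:] @ k[1:])
    c = g[0, 0] * k[0] ** 2
    s = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return np.concatenate(([k[0]], s * k[1:]))
\end{verbatim}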
We record the contribution to the volume-specific four-force exerted by the neutrino radiation on the fluid via absorption $\mathcal{F}_{\mathrm{abs},i}^\alpha$ and rate of change of lepton number $\mathcal{L}_{\mathrm{abs},i}$ in spatial grid zone $i$ in the comoving tetrad frame as \begin{equation} \label{eq:absorption} \begin{aligned} \delta \mathcal{F}_{\mathrm{abs},i}^\alpha &= \frac{1}{\mathcal{V}_i} (N_q-N_{q+1}) k^\alpha_{\mathrm{tet},q}\,\,,\\ \delta \mathcal{L}_{\mathrm{abs},i} &= \frac{1}{\mathcal{V}_i} (N_q-N_{q+1}) l_s\,\,, \end{aligned} \end{equation} where $l_s$ is the lepton number of species $s$ (1 for $\nu_e$, -1 for $\bar{\nu}_e$, and 0 for $\nu_x$). Since absorption continuously decreases the packet weight, we roulette the packet if its weight becomes too low. That is, if a packet's weight decreases below $10^{-3}N_0$, a uniform random number $U$ between 0 and 1 is sampled. If $U<0.5$ the packet is destroyed and if $U>0.5$ the packet weight doubles. This preserves all statistical averages and prevents unimportant packets from using computer time. \subsubsection{Neutrino Radiation Field} \label{sec:sedonu_radiation_field} As the packet moves, it contributes to the local radiation field. We account for this in the comoving tetrad frame where the neutrino distribution is binned into the same comoving-frame frequency bins used to discretize and store the interaction rates. The angular structure of the distribution function $f_\epsilon$, where $\epsilon=-k^\alpha u_\alpha$ is the neutrino comoving-frame energy, is decomposed into comoving tetrad-frame moments as \begin{equation} \label{eq:moments} \begin{aligned} E &= \frac{1}{(hc)^3} \int \frac{d\epsilon^3}{3} \epsilon\int d\Omega f_\epsilon\,\,, \\ F^a &= \frac{1}{(hc)^3} \int \frac{d\epsilon^3}{3} \epsilon\int d\Omega f_\epsilon l^a \,\,, \\ P^{ab} &= \frac{1}{(hc)^3} \int \frac{d\epsilon^3}{3} \epsilon\int d\Omega f_\epsilon l^a l^b \,\,, \\ L^{abc} &= \frac{1}{(hc)^3} \int \frac{d\epsilon^3}{3} \epsilon\int d\Omega f_\epsilon l^a l^b l^c \,\,, \\ ... \end{aligned} \end{equation} where $l^i$ are the spatial basis vectors in the tetrad coordinates {and the differential $d\epsilon^3/3$ is equivalent to $\epsilon^2 d\epsilon$ used to integrate spherical volumes}. Of course, by construction, the basis vectors in these coordinates are simply $\{1,0,0\}$, $\{0,1,0\}$, and $\{0,0,1\}$. 
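Stepping back briefly to the roulette procedure of Section~\ref{sec:absorption}, it is simple enough to sketch directly (illustrative Python; the weight floor of $10^{-3}N_0$ follows the text):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def roulette(N, N0, floor=1e-3):
    """Russian roulette on a packet of weight N (emitted weight N0):
    below the floor, destroy with probability 1/2 or else double the
    weight, preserving the statistical mean."""
    if N < floor * N0:
        return 0.0 if rng.random() < 0.5 else 2.0 * N
    return N
\end{verbatim}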
During a propagation step $q$, the contribution to the radiation field moments in the currently occupied spatial grid zone $i$ is given by \begin{equation} \begin{aligned} \langle N \rangle &= \frac{N_q}{\eta}\int_0^\eta e^{-\eta'}d\eta' \\ &\approx \begin{cases} (N_q+N_{q+1})/2 & \eta \ll 1 \\ (N_q-N_{q+1})/\eta & \mathrm{otherwise} \end{cases}\\ \delta E_{i,q} &\approx \frac{\langle N\rangle k^t_{\mathrm{tet},q} ds_\mathrm{move}}{c\mathcal{V}_i} \\ \delta F_{i,q}^a &= \delta E_{i,q} \frac{k^a_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\,\,,\\ \delta P_{i,q}^{ab} &= \delta E_{i,q} \frac{k^a_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\frac{k^b_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\,\,,\\ \delta L_{i,q}^{abc} &= \delta E_{i,q} \frac{k^a_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\frac{k^b_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\frac{k^c_{\mathrm{tet},q}}{k^t_{\mathrm{tet},q}}\,\,.\\ \end{aligned} \end{equation} The neutrino momentum is evaluated at the beginning of the step rather than the end because, in the event of near-complete absorption immediately following emission, all of the energy is emitted and deposited at the same location, helping avoid some noise from stiff source terms. \subsubsection{Scattering} \label{sec:scattering} If the shorter of the distances in Equation~\ref{eq:ds_move} was the interaction distance, the particle will undergo an elastic scattering event. The particle is given an isotropic random direction in the comoving tetrad frame, keeping the same energy in that frame, and the new lab-frame momentum is determined by transforming back to the lab frame (Equation~\ref{eq:tet_transform}). The resulting four-force on the fluid due to scattering is then accumulated as \begin{equation} \label{eq:scattering} \delta \mathcal{F}_{\mathrm{abs},i}^\alpha = \frac{1}{\mathcal{V}_i} (N_{q+1} k^\alpha_{\mathrm{tet},q+1}-N'_{q+1}{k'}^\alpha_{\mathrm{tet},q+1}) \\ \end{equation} where the primed and unprimed variables refer to the states before and after the scattering event, respectively. As a side note, we have also experimented with biasing procedures to change the probability of scattering (e.g., to help escape scattering-dominated regions), but this inevitably caused some particles to randomly undergo scattering events several times in such a way that their weights increased to excessively large values, increasing the overall variance of the solution. \subsection{Random Walk Monte Carlo} \label{sec:randomwalk} It is immensely computationally inefficient to simulate a particle directly if it is in a location with a large scattering opacity, since the distance moved between scatterings is extremely small. Following \cite{Fleck1984, Richers2017a,Foucart2018}, we approximate a large number of scatterings with a single large step in a random direction and an appropriate random sampling of the time it took to get there. Although the salient features of this method were already outlined in \cite{Richers2017a,Foucart2018}, there are some subtle differences, and so for completeness we describe the full method. It was shown in \cite{Richers2017a} that if we define a sphere of radius $R$ centered around a particle's position in a homogeneous isotropic medium, the probability of escaping the sphere before time $\tau$ is \begin{equation} \begin{aligned} P_\mathrm{escape}(R,\tau) &= 1-2\sum_{n=1}^\infty (-1)^{n-1}\\ &\times \exp\left[-(n\pi)^2\zeta\right] \end{aligned} \end{equation} Here, $\zeta=D\tau/R^2$ and $D=c/3\kappa_\mathrm{scat}$ is the diffusion constant.
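This distribution can be tabulated once and inverted to sample escape times, as described in the next paragraph. A Python sketch of that procedure (the grid choices mirror those stated below; truncating the series at $n=100$ is an illustrative choice):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng()

def P_escape(zeta, nmax=100):
    """Escape probability as a function of zeta = D*tau/R^2;
    the alternating series converges quickly for zeta > 0."""
    n = np.arange(1, nmax + 1)
    return 1.0 - 2.0 * np.sum((-1.0) ** (n - 1)
                              * np.exp(-(n * np.pi) ** 2 * zeta))

# tabulate on a grid in zeta (the text uses 100 points up to zeta_max = 2)
zeta_grid = np.linspace(1e-3, 2.0, 100)
P_grid = np.array([P_escape(z) for z in zeta_grid])

def sample_escape_time(R, kappa_scat, c=2.99792458e10):
    """Sample the comoving-frame time to escape a sphere of radius R [cm]
    for a scattering opacity kappa_scat [1/cm]."""
    D = c / (3.0 * kappa_scat)               # diffusion constant
    U = rng.random()                         # substitute for P_escape
    zeta = np.interp(U, P_grid, zeta_grid)   # numerical inversion
    return zeta * R ** 2 / D                 # tau_esc
\end{verbatim}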
We tabulate this function using 100 evenly spaced points in $\zeta$ up to $\zeta_\mathrm{max}=2$. To sample the time $\tau_\mathrm{esc}$ required to escape the sphere in the comoving tetrad frame, we sample a uniform random number between 0 and 1 to substitute in for $P_\mathrm{escape}$ and invert the function numerically via linear interpolation. Following \cite{Foucart2018}, we assume that the particle spends a period of time $\tau_\mathrm{trap} = \tau_\mathrm{esc}-R/c$ trapped at the center of the sphere and spends the remaining time $\tau_\mathrm{free}=R/c$ free-streaming to the edge of the sphere. This is not formally correct and will not account for adiabatic losses or other effects that depend on derivatives of the fluid quantities, but relativistic effects (e.g., redshift) will be approximately correct. We must first select an appropriate random walk sphere size $R$ before actually performing the random walk. We do this by ensuring that the coordinate-frame displacement is at most approximately the distance to the cell boundaries in each of several directions. The coordinate displacement from the random walk can be estimated as \begin{equation} \Delta x^\alpha \approx \left.\frac{dx^\alpha}{d\tau}\right|_\mathrm{free} \left(\frac{R}{c}\right)+\left.\frac{dx^\alpha}{d\tau}\right|_\mathrm{trap}\left(\tau_\mathrm{esc}-\frac{R}{c}\right) \label{eq:dx_randomwalk_approx} \end{equation} where $dx^\alpha/d\tau|_\mathrm{free}=k^\alpha_{(a,\pm)}/k_{(a,\pm),\mathrm{tet}}^t$ and $dx^i/d\tau|_\mathrm{trap}=u^i$. In order to make a general algorithm that accounts for multiple possible displacement directions and the possibility of large $\zeta$, we solve Equation~\ref{eq:dx_randomwalk_approx} for $R$ with $\tau_\mathrm{esc}=R^2 \zeta_\mathrm{max}/D$, a test $k^\alpha_{(a,\pm)}$ in each coordinate direction $a$, and $\Delta x^a=x^a_\pm-x^a$, where $x^a_\pm$ is the coordinate of the right(+) or left(-) cell boundary. That is, we select the spatial components of each trial $k^\alpha_{(a,\pm)}$ to be $\pm1$ in the direction of increasing(+) or decreasing(-) coordinate $a$ and choose a time component that makes the test momentum a null vector. For a 3D Cartesian grid, $k^i_{(x,+)}=\{1,0,0\}$, $k_{(x,-)}^i=\{-1,0,0\}$, etc. For a 1D spherical grid, $k_{(r,+)}^i=\{x,y,z\}$ and $k_{(r,-)}^i=\{-x,-y,-z\}$. We then select $R$ to be the smallest of all of these trials and propagate the packet in a direction selected uniformly at random in the tetrad frame over a comoving-frame distance $c\,\tau_\mathrm{free}$ (Section~\ref{sec:propagation}). Here we denote the four-momentum following the free-streaming step by $k^{\alpha\prime}_\mathrm{free}$. We then interpolate all fluid and metric quantities at the new position and assign a new random direction at the end of the step $k^\alpha_{q+1}$ facing outward from the surface of the sphere, so as to make the surface of the sphere isotropically bright. In order to do this, we first transform $k'_\mathrm{free}$ to the tetrad frame. We then uniformly sample a new $k_{\mathrm{tet},q+1}$ and calculate the angle between the wavevectors via $\cos\Theta=k_{\mathrm{tet},q+1}^a k'_{\mathrm{free,tet},a}/(k_{\mathrm{tet},q+1}^t k'^{t}_{\mathrm{free,tet}})$ (the denominator is valid because the calculation is done in the tetrad frame).
If $\cos\Theta<U$ for a uniform random number $U$ between $0$ and $1$, we reject the new wavevector and sample again until we get a wavevector that is not rejected, thereby making the surface intensity of the random walk sphere proportional to $\cos\Theta$. We must properly account for the particle contribution to the radiation field and the four-force on the fluid. We do this separately for the two stages (trapped and free-streaming). In the trapped stage, we assume that the neutrino contributes to the radiation field isotropically in the tetrad frame. The total energy density contributed during the trapped phase is \begin{equation} \begin{aligned} \delta E_{i,q,\mathrm{trap}} &= \frac{c\tau_\mathrm{esc}-R}{c\mathcal{V}_i}\langle N\rangle k^t_{\mathrm{tet},q}\,\,,\\ \delta F^a_{i,q,\mathrm{trap}} &= 0\,\,,\\ \delta P^{ab}_{i,q,\mathrm{trap}} &= \frac{\delta E_{i,q,\mathrm{trap}}}{3} \delta^{ab}\,\,,\\ \delta L^{abc}_{i,q,\mathrm{trap}} &= 0\,\,,\\ \end{aligned} \end{equation} where $\langle N\rangle$ is computed the same way as in Equation~\ref{eq:absorption} over the trapped part of the random walk step. The energy density contribution from the free-streaming step is accounted for in the same way as in Section~\ref{sec:propagation} over the free-streaming part of the random walk step. Each time the packet changes direction (between the trapped and free-streaming steps, and following the free-streaming step), the force exerted on the fluid is accounted for in the same way as in Section~\ref{sec:scattering}. \subsection{Neutrino Pair Annihilation} \label{sec:sedonu_annihilation} The simplest reconstruction of the comoving tetrad-frame distribution function from its moments (Equation~\ref{eq:moments}) is \begin{equation} \label{eq:distribution_expansion} \begin{aligned} f_\epsilon(\theta,&\phi) = f_0 + 3 f_1^i l^i \\ &+\frac{5}{2}\left(3 f_2^{ij}l^i l^j - f_0\right)\\ &+ \frac{7}{2}\left(5f_3^{ijk}l^i l^j l^k - 3 f_1^i l^i\right) +...\,\,, \end{aligned} \end{equation} where $f_0=\int d\Omega f_\epsilon/4\pi$, $f_1^i=\int d\Omega f_\epsilon l^i/4\pi$, etc. Here $l^i$ are again the values of the components of the tetrad basis vectors in the tetrad coordinates. This is effectively a multi-dimensional extension of an expansion in Legendre polynomials, and was derived by requiring that each term be orthogonal to every other term and that each moment can be recovered by the appropriate angular integral (the angular part of Equation~\ref{eq:moments}). This expansion does not enforce that the distribution remain between 0 and 1, so many terms are required to realistically represent sharply peaked distributions. However, we use this representation simply as a tool to be able to carry out angular integrals in what follows, so the particular angular representation is not important and the ability to recover moments from angular integrals is the key property. The four-force on the fluid due to neutrino pair annihilation is an integral over both the neutrino and anti-neutrino distribution functions \cite{Bruenn1985}: \begin{equation} \begin{aligned} \mathcal{F}^\mu_{(s)} &= \int \widetilde{dp}\int \widetilde{d\bar{p}} \int d\Omega\int d\bar{\Omega} p^\mu\\ &\left(f_\epsilon\bar{f}_\epsilon \Phi^{(a)}(\cos\theta)-(1-f_\epsilon)(1-\bar{f}_\epsilon) \Phi^{(p)}(\cos\theta)\right)\,\,, \end{aligned} \end{equation} where $\widetilde{dp} = d(\epsilon^3/3)/(hc)^3$. This is most easily evaluated in the local comoving tetrad.
The annihilation kernel can be decomposed into Legendre polynomials: \begin{equation} \Phi(\mu) = \frac{1}{2}\Phi_0 + \frac{3}{2}\Phi_1 \mu + \frac{5}{2}\Phi_2 \frac{1}{2}\left(3\mu^2-1\right)+... \end{equation} (units of cm$^3$/s). If we integrate the annihilation four-force {using Equation~\ref{eq:distribution_expansion}} over the directions of both the neutrino and anti-neutrino distribution functions up to second order, {after a great expansion and contraction of terms} we get \begin{equation} \begin{aligned} \mathcal{F}^t_{(s)} &= (4\pi)^2\int \widetilde{dp}\int \,\,\widetilde{d\bar{p}} \,\,\epsilon \\ & \times\left[\frac{1}{2}(\Phi_0^{(a)}-\Phi_0^{(p)}) f_0\bar{f}_0\right. \\ &- \frac{1}{2}\Phi_0^{(p)} \left(1-(f_0+\bar{f}_0)\right)\\ &+ \frac{3}{2}(\Phi_1^{(a)}-\Phi_1^{(p)}) f_1^b\bar{f}_1^b \\ &+ \left.\frac{5}{4}(\Phi_2^{(a)}-\Phi_2^{(p)}) \left(3 f_2^{bc}\bar{f}_2^{bc}-f_0 \bar{f}_0\right)\right]\\ \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \mathcal{F}^a_{(s)} &= (4\pi)^2\int \widetilde{dp}\int \widetilde{d\bar{p}} \,\,\epsilon \\ &\times \left[\frac{1}{2}(\Phi_0^{(a)}-\Phi_0^{(p)}) f_1^a\bar{f}_0 +\frac{1}{2}\Phi_0^{(p)} f_1^a \right.\\ &+ \frac{3}{2}(\Phi_1^{(a)}-\Phi_1^{(p)}) f_2^{ai}\bar{f}_1^i +\frac{3}{2}\Phi_1^{(p)}\frac{1}{3}\bar{f}_1^a \\ &\left.+ \frac{5}{4}(\Phi_2^{(a)}-\Phi_2^{(p)})\left( 3f_3^{aij}\bar{f}_2^{ij}-f_1^a\bar{f}_0\right)\right]\\ \end{aligned} \end{equation} Integrating over energy, these become \begin{equation} \begin{aligned} \mathcal{F}^t_{(s)} &\approx \sum_{ij} \frac{1}{\bar{\epsilon}_j}\\ & \times\left[\frac{1}{2}(\Phi_0^{(a)}-\Phi_0^{(p)}) E_i \bar{E}_j\right. \\ &- \frac{1}{2}\Phi_0^{(p)} \left(\mathcal{E}_i \bar{\mathcal{E}}_j-\mathcal{E}_i\bar{E}_j-E_i\bar{\mathcal{E}}_j\right)\\ &+ \frac{3}{2}(\Phi_1^{(a)}-\Phi_1^{(p)}) F^b_i \bar{F}^b_j \\ &+ \left.\frac{5}{4}(\Phi_2^{(a)}-\Phi_2^{(p)}) \left(3 P^{bc}_i\bar{P}^{bc}_j-E_i \bar{E}_j\right)\right]\\ \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \mathcal{F}^a_{(s)} &\approx \sum_{ij} \frac{1}{\bar{\epsilon}_j}\\ &\times \left[\frac{1}{2}(\Phi_0^{(a)}-\Phi_0^{(p)}) F^a_i\bar{E}_j +\frac{1}{2}\Phi_0^{(p)}F_i^a \bar{\mathcal{E}}_j\right.\\ &+ \frac{3}{2}(\Phi_1^{(a)}-\Phi_1^{(p)}) P_i^{ab}\bar{F}_j^b +\frac{3}{2}\Phi_1^{(p)}\frac{1}{3}\mathcal{E}_i\bar{F}_j^a\\ &\left.+ \frac{5}{4}(\Phi_2^{(a)}-\Phi_2^{(p)})(3 L_i^{abc}\bar{P}_j^{bc}-F_i^a\bar{E}_j)\right]\,\,,\\ \end{aligned} \label{eq:annihil_fourforce} \end{equation} where $\mathcal{E}_i=\int \widetilde{dp} \epsilon\int d\Omega \,1=\pi(\epsilon_{i+1/2}^4-\epsilon_{i-1/2}^4)/(hc)^3$ is the maximum possible neutrino energy density contribution from energy bin $i$. We use NuLib \cite{OConnor2015} to generate neutrino pair annihilation kernels $\Phi$, but only the first two moments of the kernel are given. We can guess a second moment by requiring that the annihilation rate for comoving neutrinos is zero. That is, requiring that \begin{equation} \Phi(\mu=1) = \frac{1}{2}\Phi_0 + \frac{3}{2}\Phi_1 + \frac{5}{2}\Phi_2 = 0 \end{equation} and that moments of the annihilation kernel higher than the second are zero implies that \begin{equation} \Phi_2 = -\frac{1}{5}\left(\Phi_0 + 3\Phi_1\right) \label{eq:phiannihil2} \end{equation} The angular dependence of the annihilation rate in vacuum is proportional to $(1-\cos\Theta)^2$, where $\Theta$ is the angle between the directions of the annihilating neutrinos, so in vacuum this approximation becomes exact. 
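Since the kernel moments higher than $\Phi_1$ are not tabulated, the reconstruction of $\Phi_2$ in Equation~\ref{eq:phiannihil2} is worth making concrete. A small illustrative check in Python confirms that the truncated kernel vanishes for parallel-propagating neutrinos ($\mu=1$), as intended:
\begin{verbatim}
def phi2(phi0, phi1):
    """Second Legendre moment implied by Phi(mu=1) = 0 (Eq. phiannihil2)."""
    return -(phi0 + 3.0 * phi1) / 5.0

def kernel(mu, phi0, phi1):
    """Annihilation kernel truncated at second order in Legendre moments."""
    return (0.5 * phi0 + 1.5 * phi1 * mu
            + 2.5 * phi2(phi0, phi1) * 0.5 * (3.0 * mu * mu - 1.0))

print(kernel(1.0, 1.0, -0.4))  # 0.0 (up to round-off) for any (phi0, phi1)
\end{verbatim}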
\section{Analytic Moment Closures} \label{sec:closure} The general relativistic transport equations for the angular moments $\widetilde{M}$ of the radiation field are described in detail in \cite{Shibata2011,Cardall2013}. Ignoring details and suppressing indices, the structure of these equations follows \begin{equation} \begin{aligned} \partial_t(\sqrt{\gamma}\widetilde{M}_{(1)}) &+ \partial_j A(g,\widetilde{M}_{(2)}) + \partial_\epsilon B(g,\widetilde{M}_{(3)},\nabla u) \\ &= \mathcal{S}(g,\nabla g,\widetilde{M}_{(2)})\,\,.\\ \end{aligned} \label{eq:moment_transport} \end{equation} The subscript in parentheses indicates the rank of the moment. That is, $\widetilde{M}_{(0)}$ is the energy density, $\widetilde{M}_{(1)}$ contains the energy density and flux vector, $\widetilde{M}_{(2)}$ contains the energy density, flux vector, and pressure tensor, etc. These will be defined more carefully in the comoving orthonormal tetrad below. $A$, $B$, and $\mathcal{S}$ are functions whose details are not important for our purposes, except for the dependencies indicated in the function arguments. Most importantly, the evolution equation for the rank-1 tensor depends on the rank-2 and rank-3 tensors, which are not independently evolved and must be estimated by some other means. The lab-frame moment tensors $\widetilde{M}$ can be constructed from the moments in the comoving orthonormal tetrad defined in Equation~\ref{eq:moments} using the tetrad basis vectors $e^{(\alpha)}_\mu$: \begin{equation} \begin{aligned} \widetilde{M}_{(3)}^{\alpha\beta\gamma}&=M^{\mu\nu\eta}e_\mu^{(\alpha)}e_\nu^{(\beta)}e_\eta^{(\gamma)}\,\,,\\ \widetilde{M}_{(2)}^{\alpha\beta}&=M^{\mu\nu}e_\mu^{(\alpha)}e_\nu^{(\beta)}\,\,,\\ \widetilde{M}_{(1)}^{\alpha}&=M^{\mu}e_\mu^{(\alpha)}\,\,,\\ \end{aligned} \end{equation} where $e^{(\alpha)}$ denotes the set of four tetrad basis vectors. The moments in the comoving orthonormal tetrad take the form \begin{equation} \begin{aligned} M^{ijk} &= L^{ijk}\,\,,\\ M^{tij}&=M^{ij}=P^{ij}\,\,,\\ M^{tti}&=M^{ti}=M^i=F^i\,\,,\\ M^{ttt}&=M^{tt}=M^t=E\,\,. \end{aligned} \end{equation} Note that $M^{\alpha\beta}_{\phantom{\alpha\beta}\beta}=0$. The closure to Equation~\ref{eq:moment_transport} (i.e., determining the rank-2 and rank-3 tensors) is usually implemented in the comoving orthonormal tetrad, so to have a well-defined set of evolution equations we need a prescription for the unknown comoving tetrad moments $P^{ij}$ and $L^{ijk}$ in terms of known quantities. All of the closures used in the literature rely on a few basic assumptions, which we will assess in Section~\ref{sec:results}. The pressure and rank-3 tensors at each neutrino energy $\epsilon$ are assumed to take the form \begin{equation} \begin{aligned} P^{ij} &= \frac{3\chi_p -1}{2}P^{ij}_\mathrm{free} + \frac{3(1-\chi_p)}{2}P^{ij}_\mathrm{diff}\,\,, \\ L^{ijk} &= \frac{3\chi_l -1}{2}L^{ijk}_\mathrm{free} + \frac{3(1-\chi_l)}{2}L^{ijk}_\mathrm{diff}\,\,, \\ \end{aligned} \label{eq:closure_interpolation} \end{equation} where, under the usual assumption that the radiation field is symmetric about the flux direction, \begin{equation} \begin{aligned} P^{ij}_\mathrm{free} &= E_\epsilon \frac{F_\epsilon^i F_\epsilon^j}{\mathbf{F}_\epsilon \cdot \mathbf{F}_\epsilon}\,\,, \\ P^{ij}_\mathrm{diff} &= \frac{E_\epsilon}{3}I^{ij}\,\,,\\ L_{iii,\mathrm{diff}} &= \frac{3 F_i}{5}\,\,,\\ L_{ijj,\mathrm{diff}} &= \frac{F_i}{5}\,\,,\\ L_{ijk,\mathrm{diff}} &= 0\,\,,\\ L_{ijk,\mathrm{free}} &= \frac{F_i F_j F_k}{F_l F^l}\,\,.
\end{aligned} \label{eq:PL_diff_free} \end{equation} Different analytic closures differ in how they interpolate between the diffusive and free-streaming limits based on the flux factor $\xi_\epsilon = \sqrt{\mathbf{F}_\epsilon \cdot \mathbf{F}_\epsilon / E_\epsilon^2}$. With these quantities defined in an orthonormal tetrad moving with the fluid, they can be transformed out using the tetrad basis vectors. \subsection{Extending the MEFD Closure} \label{sec:analytic_MEFD} It is often assumed that the interpolating function between the diffusive and free-streaming limits for the third moment is the same as for the second moment. \cite{Banach2013} also used the maximum entropy condition to generate a closure for the third moment. However, that closure was expressed as a power series, so it is only accurate near the diffusion limit. In addition, \cite{Banach2017} write down limiting cases of the closure, but not its general form. In both cases, the closures for the third moment are designed for so-called nine-moment systems, in which the energy density, three fluxes, and five independent components of the pressure tensor are evolved variables (they would call our case a four-moment system, since we try to evolve the energy density and three fluxes). As such, the closure is derived using a functional for the distribution function that has more free variables, resulting in a closure that depends on all three of the energy density, flux, and pressure. However, we wish to use a closure for a two-moment system that has only two independent variables. To do this, we follow \cite{Cernohorsky1994} and derive an approximation to a maximum-entropy closure for the third moment based on only the energy density and flux. The MEFD closure maximizes the entropy for an angular distribution with the functional form \begin{equation} f_\epsilon(\mu) = \frac{N}{e^{\eta-a\mu}+k}\,\,, \label{eq:f_MEFD} \end{equation} where $\eta$ and $a$ are parameters that determine the angular distribution, and $k=-1$ is used for Bose-Einstein statistics while $k=1$ is used for Fermi-Dirac statistics. The angular moment integrals are then \begin{equation} \begin{aligned} f &= \frac{1}{e}\frac{1}{4\pi}\int_0^{2\pi} d\phi \int_{-1}^1 d\mu \mu f(\mu)\\ p &= \frac{1}{e}\frac{1}{4\pi}\int_0^{2\pi} d\phi \int_{-1}^1 d\mu \mu^2 f(\mu)\\ l &= \frac{1}{e}\frac{1}{4\pi}\int_0^{2\pi} d\phi \int_{-1}^1 d\mu \mu^3 f(\mu)\,\,,\\ \end{aligned} \label{eq:MEFD_moment_integrals} \end{equation} where we use the shorthand $e=E/E_\mathrm{max}$ (the occupation probability), $f=|F|/E$ (the flux factor, distinct from the distribution function $f_\epsilon$), $p=P_{ff}/E$, and $l=L_{fff}/E$. $N$ is a normalization factor that cancels out everywhere in this analysis. Finding a closure amounts to solving for $\eta(e,f)$ and $a(e,f)$. In the classical limit ($e\rightarrow 0$ or $k=0$), these integrals are analytic, yielding $f= \coth(a)-1/a$, $p= 1-2f/a$, and $l= \left[(a^2+6) f - 2a\right]/a^2$. However, for the general Fermi-Dirac case we must first look at limiting cases. We can then evaluate the integrals under the assumption of maximum packing, that is, assuming the occupation is unity between $\mu=1$ and $\mu=\mu_0$ and zero outside of that range. Under these assumptions, the same moments become \begin{equation} \begin{aligned} f_\mathrm{max}(e) &= 1-e\,\,, \\ p_\mathrm{max}(e) &= \frac{2(1-e)(1-2e)}{3} + \frac{1}{3}\,\,, \\ l_\mathrm{max}(e) &= (1-e)(1-2e+2e^2)\,\,.
\end{aligned} \label{eq:MEFD_max} \end{equation} For the functional form of the distribution function in Equation~\ref{eq:f_MEFD}, we can in general express the pressure and third moment in terms of the flux saturation $x=f/f_\mathrm{max}$ as \begin{equation} \begin{aligned} p(e,x) &=\left[p_\mathrm{max}(e)-p_\mathrm{diff}(e,1)\right]\zeta_p(e,x)+p_\mathrm{diff}(e,x)\,\,,\\ l(e,x) &= \left[l_\mathrm{max}(e)-l_\mathrm{diff}(e,1)\right] \zeta_l(e,x) + l_\mathrm{diff}(e,x)\,\,, \end{aligned} \label{eq:MEFD_closure} \end{equation} where the diffusive solution is $p_\mathrm{diff}(e,x)=1/3$ and $l_\mathrm{diff}(e,x)=3 x f_\mathrm{max}(e)/5$. The functions $\zeta_p(e,x)$ and $\zeta_l(e,x)$ are not representable analytically, requiring numerical root finding to obtain $a(e,f)$. However, we can analytically express both in the isotropic ($x\rightarrow 0$) and high-packing ($x\rightarrow 1$) limits. Following \cite{Cernohorsky1994}, we can approximate $f(\mu,\eta,a)$ using the first two terms of a Sommerfeld expansion to get the high-packing limit. We arrive at \begin{equation} \begin{aligned} x_{x\rightarrow1}&\approx1-\frac{A}{a^2}\,\,,\\ \zeta_{p,x\rightarrow1} &\approx 1-\frac{3A}{a^2}\,\,,\\ \zeta_{l,x\rightarrow1} &\approx 1-\frac{3A(1-2e)^2+3a^2 x/5}{l_\mathrm{max}(e)/f_\mathrm{max}(e)-3/5}\,\,,\\ \end{aligned} \end{equation} where $A(e)=\pi^2/[12e(1-e)]$. After eliminating $a$, this becomes \begin{equation} \begin{aligned} \zeta_{p,x\rightarrow1} &\approx 3x-2\,\,,\\ \zeta_{l,x\rightarrow1} &\approx 6x-5\,\,.\\ \end{aligned} \label{eq:zeta_0} \end{equation} Again following \cite{Cernohorsky1994}, in the isotropic limit $a\ll1$ and $f(\mu,\eta,a)$ can be Taylor-expanded around $a=0$ keeping only the first two terms. Including three terms gives the same result, and four or more yields intractable expressions. This leads to \begin{equation} \begin{aligned} x_{x\rightarrow0} &\approx \frac{a}{3}\,\,,\\ \zeta_{p,x\rightarrow0} &\approx \frac{a^2}{15}\,\,, \\ \zeta_{l,x\rightarrow0} &\approx \frac{a/5 - 3x/5}{l_\mathrm{max}(e)/f_\mathrm{max}(e)-3/5}\,\,. \end{aligned} \end{equation} Again eliminating $a$, this becomes \begin{equation} \begin{aligned} \zeta_{p,x\rightarrow0} &\approx \frac{3x^2}{5} \,\,,\\ \zeta_{l,x\rightarrow0} &\approx 0\,\,. \end{aligned} \label{eq:zeta_1} \end{equation} \begin{figure} \centering \includegraphics[width=\linewidth]{zeta_MEFD.pdf} \caption{MEFD rank-3 radiation saturation curve $\zeta_l$ as a function of the flux saturation $x=f(x,e)/f_\mathrm{max}(e)$ (implicitly defined in Equation~\ref{eq:MEFD_closure}). Different colors correspond to different values of the energy saturation $e=E/E_\mathrm{max}$ (effectively the direction-averaged occupation number). All of the curves have the same value and derivative in the limits of $x\rightarrow0$ and $x\rightarrow1$. The dashed white curve shows the approximant in Equation~\ref{eq:MEFD_approximate}. The lower dotted curve shows an approximation for $\zeta_l$ as derived for the Minerbo closure (equivalent to the classical limit of the MEFD closure) by \cite{Just2015} using a different approximation for the Langevin function. The middle dotted curve shows the corresponding curve for the M1 closure as derived by \cite{Vaytet2011}. The upper dotted curve shows a suggestion for the rank-3 Kershaw closure from taking $\chi_l = \chi_p$.} \label{fig:zeta_l} \end{figure} Both $\zeta_p$ and $\zeta_l$ must be invariant under $e\leftrightarrow (1-e)$.
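The exact curves referenced here must be obtained numerically. As an illustration of the procedure (a Python sketch assuming \texttt{scipy} is available; convergence of the root finder depends on the initial guess, and $(\eta,a)=(0,3f)$ is an assumed starting point that works over a useful range), one can solve the moment integrals in Equation~\ref{eq:MEFD_moment_integrals} for $(\eta, a)$ at a given $(e,f)$ and then evaluate $l$:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import fsolve

def moment(eta, a, power):
    """(1/2) * integral of mu^power / (exp(eta - a*mu) + 1), mu in [-1, 1]."""
    val, _ = quad(lambda mu: mu**power / (np.exp(eta - a * mu) + 1.0),
                  -1.0, 1.0)
    return 0.5 * val

def mefd_l(e, f):
    """Solve Eq. (MEFD_moment_integrals) for (eta, a) at a given occupation
    e and flux factor f (requires f < f_max = 1 - e), then return l."""
    def residuals(params):
        eta, a = params
        return [moment(eta, a, 0) - e,       # angle-averaged occupation
                moment(eta, a, 1) - e * f]   # first moment equals e*f
    eta, a = fsolve(residuals, x0=[0.0, 3.0 * f])
    return moment(eta, a, 3) / e

print(mefd_l(0.3, 0.4))  # one point on the solid curves of Fig. (MEFD_l)
\end{verbatim}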
We see from Equations~\ref{eq:zeta_1} and \ref{eq:zeta_0} that in the isotropic and high-packing limits both are in fact independent of $e$, and \cite{Cernohorsky1994} showed that $\zeta_p(e,x)=\zeta_p(x)$ is actually completely independent of $e$. The solid curves in Figure~\ref{fig:zeta_l} show the variation of $\zeta_l(e,x)$ over different values of $e$. \begin{figure} \centering \includegraphics[width=\linewidth]{l_closure.pdf} \caption{MEFD closure for the rank-3 radiation field tensor. $l=L_{fff}/E$ is the amount of energy in the component of the tensor aligned with the flux as a function of the flux factor $f$. Each solid line shows the result of solving for $a$ and integrating Equations~\ref{eq:MEFD_moment_integrals} for a chosen value of the energy saturation $e$. $e$ ranges from 0.9 (curve ending at $f=0.1$) to 0.0 (curve ending at $f=1.0$) in increments of 0.1. The dashed lines show the results from evaluating Equation~\ref{eq:MEFD_closure} with the approximants in Equation~\ref{eq:MEFD_approximate}. The dotted curve is the maximum packing curve (Equation~\ref{eq:MEFD_max}) that traces the endpoints of each of the curves. Note that the seemingly large deviations of the approximant in Equation~\ref{eq:MEFD_approximate} do not cause the solid and dashed lines to be far separated.} \label{fig:MEFD_l} \end{figure} $\zeta_l(e,x)$ can be approximated using the lowest-order polynomial that satisfies the values and derivatives of the functions in the high-packing and isotropic limits, along with the requirement that $0\leq \zeta(e,x) \leq 1$ (based on observation of the numerical solution): \begin{equation} \begin{aligned} \zeta_p(e,x) &\approx x^2 (3-x+3x^2)/5\,\,, \\ \zeta_l(e,x) &\approx x^6\,\,. \end{aligned} \label{eq:MEFD_approximate} \end{equation} \cite{Cernohorsky1994} showed that the approximation for $\zeta_p(x)$ is accurate everywhere to within $2\%$. The dashed curve in Figure~\ref{fig:zeta_l} shows this approximation for $\zeta_l(e,x)$. While the error appears to be quite large near $e=0.3$, the prefactor in Equation~\ref{eq:MEFD_closure} is quite small there. Figure~\ref{fig:MEFD_l} shows the closure curves of $l(e,f)$ (solid curves) for several values of $e$ and the corresponding approximate curves (dashed curves). The approximation produces values of $l$ that are accurate to within $3.5\%$ for any value of $f$ and $e$; the errors are largest near the $x=0$ and $x=1$ limits and at $e=0.5$. One can relate $\zeta$ back to $\chi$ based on Equation~\ref{eq:closure_interpolation} via \begin{equation} \chi_p(e,x) = \frac{2}{3}\frac{p_\mathrm{max}(e)-p_\mathrm{diff}(e,1)}{p_\mathrm{free}(e,x)-p_\mathrm{diff}(e,x)}\zeta_p + \frac{1}{3} \label{eq:chi_from_zeta} \end{equation} and similarly for $\chi_l$, where $p_\mathrm{free}(e,x)=1$ and $l_\mathrm{free}(e,x)=x f_\mathrm{max}(e)$. The classical limit of the MEFD closure is obtained by taking $e\rightarrow0$ so that $f_\mathrm{max}=p_\mathrm{max}=l_\mathrm{max}=1$, leading to \begin{equation} \begin{aligned} \chi_{p,\mathrm{classical}} &\approx \frac{2}{15}f^2\left(3-f+3f^2\right)+\frac{1}{3}\,\,,\\ \chi_{l,\mathrm{classical}} &\approx \frac{2}{3}f^5 + \frac{1}{3} \,\,.\\ \end{aligned} \end{equation} The maximum packing limit is obtained by taking $e\rightarrow 1-f$ so that $x=\zeta=1$. In this limit, \begin{equation} \begin{aligned} \chi_{p,\mathrm{maxpack}} &\approx \frac{1}{3}(1-2f+4f^2)\,\,,\\ \chi_{l,\mathrm{maxpack}} &\approx \frac{1}{3}(3-10f+10f^2)\,\,.
\end{aligned} \end{equation} Note that Equation~\ref{eq:closure_interpolation} disagrees with the choice of \cite{Shibata2011}. Our choice is made in order to always preserve the identity that $L_{ij}^j=F_i$, though an appropriate modification to the closure can ensure this indirectly. If one chooses the free-streaming limit of the third moment to be \begin{equation} L_{ijk,\mathrm{free}}=\frac{J F_i F_j F_k}{(F_i F^i)^{3/2}} \label{eq:Lfree_BAD} \end{equation} then Equation~\ref{eq:chi_from_zeta} must be applied with $l_\mathrm{free}(e,x)=1$, since the interpolation is then between the diffusive solution and one where all \textit{energy} (rather than flux) is moving in one direction. This leads to \begin{equation} \begin{aligned} \chi_{l,\mathrm{classical}} &\approx \frac{2}{3}\frac{2f^6}{5-3f} + \frac{1}{3} \,\,,\\ \chi_{l,\mathrm{maxpack}} &\approx \frac{2}{3}f\left(\frac{2-10f + 10f^2}{5-3f}\right) + \frac{1}{3}\,\,, \end{aligned} \label{eq:chil_BAD} \end{equation} and results in the curves in Figure~\ref{fig:MEFD_l} remaining unchanged. We will demonstrate how the choice of free-streaming limit impacts other closures in Section~\ref{sec:results_L}. We also show $\zeta_l$ from the Minerbo closure as derived by \cite{Just2015} (converting to $\zeta_l$ from their Equations 32-33) as the lower black dotted curve in Figure~\ref{fig:zeta_l}. Their result differs considerably, although the Minerbo closure is identical to the classical limit of the MEFD closure. Although the limiting values of $\zeta_l$ at $x=\{0,1\}$ are correct, the limiting behavior differs from that derived in Equations~\ref{eq:zeta_0} and \ref{eq:zeta_1} due to a choice in how they approximate the inversion. \cite{Cernohorsky1994} choose a simple polynomial to approximate the Langevin function such that the limiting behaviors of $\zeta_p$ are correct. \cite{Just2015} use this same function to approximate the Langevin function to determine $\zeta_l$. By contrast, we follow the same process used in \cite{Cernohorsky1994} to choose a different approximation that causes the limiting behaviors of $\zeta_l$ to be correct. Both versions are valid as closures, but our classical limit exhibits smaller errors from the exact classical solution under the MEFD assumptions. For reference, we also show the equivalent curve from the M1 rank-3 closure as derived by \cite{Vaytet2011} as the middle black dotted curve in Figure~\ref{fig:zeta_l}, converted from their Equations 12-14. There is no reason this curve should match any of the others, as it was derived under different assumptions and is shown only for comparison. We only plot values for $x>0.15$ because below this value the results of evaluating the expressions are dominated by round-off errors. Finally, the Kershaw closure \cite{Kershaw1976} can in principle be extended to the third moment in a way that obeys realizability constraints (e.g., \cite{Schneider2016}). If one assumes that $\chi_l=\chi_p$, the result fits nicely into the realizable moment space, and this can be considered the three-moment extension of the Kershaw closure. This is plotted as the upper dotted curve in Figure~\ref{fig:zeta_l} for comparison, and it also has no reason to follow any of the other curves. \begin{figure} \centering \includegraphics[width=\linewidth]{uniform_sphere.pdf} \caption{Uniform sphere test. There is a constant absorption opacity of $\kappa_\mathrm{abs}=4\,\mathrm{cm}^{-1}$ below $r=1\,\mathrm{cm}$ and vacuum above.
The top panel shows the resulting radial flux factor $f$, the middle panel shows the radial component of the pressure tensor $p$, and the bottom panel shows the radial component of the rank-3 tensor $l$. The analytic result (e.g., \cite{Smit1997}) is shown as a black dashed curve, and the Monte Carlo results computed by Sedonu are shown with a thick blue curve. The bottom two panels also show the results from several approximate moment closures described in more detail in Section~\ref{sec:specific_closures}. No closure reproduces the physical results everywhere, though the MEFDmp closure is well suited to this test problem for $r>1\,\mathrm{cm}$.} \label{fig:uniform_sphere} \end{figure} As an appetizer, we present the performance of the MEFD and other popular closures (described in more detail in Section~\ref{sec:specific_closures}) in a simple test problem. As in \cite{Smit1997}, we create a homogeneous sphere with radius $R=1\,\mathrm{cm}$, a constant absorption opacity of $\kappa_\mathrm{abs}=4\,\mathrm{cm}^{-1}$, and no scattering opacity. There is an analytic solution for the radiation field (also outlined in, e.g., \cite{Smit1997}), which we plot in Figure~\ref{fig:uniform_sphere} as a black dashed curve. The thick blue curve immediately under the dashed curve is the result from Sedonu directly and shows excellent agreement. Outside of the sphere, where the opacity is zero, the maximum packing limit of the MEFD closure also matches well, though it performs poorly inside the sphere. The opposite is true of most of the other closures, and already it is apparent that none of these closures performs well everywhere, as concluded by \cite{Smit2000,Murchikova2017}. The goal of this paper is to perform a similar assessment of these closures, for up to the rank-3 moments and in the multidimensional and relativistic environment of a neutron star merger. \subsection{Tensor Invariants} \label{sec:tensor_invariants} The pressure tensor in the comoving orthonormal tetrad is diagonalizable, meaning that with the proper rotation of coordinates one can express the pressure tensor with only three diagonal elements. These correspond to the neutrino pressure in three directions. \begin{equation} P^{ij} = \begin{bmatrix} P^{xx} & P^{xy} & P^{xz} \\ P^{yx} & P^{yy} & P^{yz} \\ P^{zx} & P^{zy} & P^{zz} \\ \end{bmatrix} = R \begin{bmatrix} \lambda_0 & 0 & 0 \\ 0 & \lambda_1 & 0 \\ 0 & 0 & \lambda_2 \end{bmatrix} R^T\,\,. \end{equation} Following \cite{Kindlmann2006a}, the eigenvalues can be expressed as \begin{equation} 0=|\lambda I^{ij} - P^{ij}| = \lambda^3 - J_1 \lambda^2 + J_2 \lambda - J_3\,\,, \end{equation} where \begin{equation} \begin{aligned} J_1 &= \mathrm{Tr}(P^{ij})=E \,\,,\\ J_2 &= \frac{1}{2}\left[\mathrm{Tr}(P^{ij})^2-\mathrm{Tr}(P^{ik}P^{kj})\right]\,\,,\\ J_3 &= |P^{ij}|\,\,. \end{aligned} \end{equation} $\lambda$, $J_1$, $J_2$, and $J_3$ are all invariant under rotation. Furthermore, we can write the eigenvalues as a function of the $J$ invariants as \begin{equation} \begin{aligned} \lambda_k &= \frac{1}{3}J_1 + 2\sqrt{Q} \cos\left(\theta + \frac{2\pi}{3}k\right) \,\,,\\ \end{aligned} \end{equation} where $Q = (J_1^2-3J_2)/9$, $\theta = \left[\cos^{-1}\left(R Q^{-3/2}\right)\right]/3$, and $R = \left(27 J_3-9J_1 J_2+2J_1^3\right)/54$. Thus, the three eigenvalues can be visualized as projections onto the $x$-axis of three equally-spaced points on a circle centered at $(E/3,0)$.
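As a quick numerical sanity check of the invariant expressions above (including the factor of 27 in $R$), the following Python snippet compares the trigonometric eigenvalue formula against direct diagonalization; the random symmetric matrix is only a stand-in for $P^{ij}$.
\begin{verbatim}
# Sketch: verify the invariant-based eigenvalue formula against a
# direct eigensolver for a random symmetric stand-in for P^{ij}.
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
P = A @ A.T                                  # symmetric positive definite

J1 = np.trace(P)
J2 = 0.5 * (np.trace(P)**2 - np.trace(P @ P))
J3 = np.linalg.det(P)

Q = (J1**2 - 3 * J2) / 9.0
R = (2 * J1**3 - 9 * J1 * J2 + 27 * J3) / 54.0
theta = np.arccos(np.clip(R / Q**1.5, -1.0, 1.0)) / 3.0

k = np.arange(3)
lam = J1 / 3.0 + 2.0 * np.sqrt(Q) * np.cos(theta + 2.0 * np.pi * k / 3.0)
assert np.allclose(np.sort(lam), np.linalg.eigvalsh(P))
\end{verbatim}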
The magnitude of the differences between the eigenvalues is determined by $Q$, and the configuration of the eigenvalues is determined by $\theta$. In our analysis, we will refer to the dimensionless quantities \begin{equation} \begin{aligned} \mathrm{anisotropy} &= \frac{3\sqrt{Q}}{J_1} \in [0,1]\,\,,\\ \mathrm{oblateness} &= \frac{3\theta}{\pi} \in [0,1]\,\,. \end{aligned} \label{eq:anisotropy_oblateness} \end{equation} The pressure tensor can be visualized as a triaxial ellipsoid, where the size of each axis represents the size of an eigenvalue, i.e., the magnitude of the pressure in the direction of the corresponding eigenvector. An anisotropy of 0 indicates that the pressure in all directions is equal, while an anisotropy of 1 indicates that the pressure in all but one direction is zero. An oblateness of 0 means the ellipsoid is prolate, i.e., that two eigenvalues are equal and one is larger. An oblateness of 1 means the ellipsoid is oblate, i.e., that two eigenvalues are equal and one is smaller. An oblateness between 0 and 1 means that no two of the eigenvalues are equal. These rotation-independent quantities are useful for understanding the limitations imposed by moment closures. Similarly, there are 11 invariants for the rank-3 tensor $L^{ijk}$ \cite{Ahmad2011}. However, several of these invariants are degenerate or represent the known relationship between the trace of $L$ and the flux. After ignoring all of the known or degenerate invariants, only one remains, which we call $L_4$ (using the subscript from \cite{Ahmad2011}): \begin{equation} L_4 = L^{ijk}L^{ijk}\,\,. \label{eq:L4} \end{equation} We will use this as a scalar quantity representing $L$ so we can compute differences between closures without needing to refer to particular directions. \section{Results} \label{sec:results} We perform time-independent Monte Carlo neutrino radiation transport on a simulation snapshot from the LS220\_M135135\_M0\_L25 simulation of \cite{Radice2018}. The snapshot is at $t=31.3\,\mathrm{ms}$ after merger, after the formation of a black hole. The neutrino interaction rates and comoving-frame radiation field are binned into 24 energy bins spaced logarithmically from 1 MeV (bin width of 2 MeV) to 270 MeV (bin width of 37 MeV). We performed the transport on a single refinement level with a Cartesian grid spacing of $0.74\,\mathrm{km}$, a domain of $-66\,\mathrm{km}$ to $66\,\mathrm{km}$ in each coordinate direction, and a reflecting boundary condition at $z=0$. This refinement level was chosen to be the smallest level that contained the regions of the accretion disk where transport is relevant. \begin{figure*} \centering \includegraphics[width=\linewidth]{profile.png} \caption{Background fluid profile from the LS220\_M135135\_M0\_L25 simulation of \cite{Radice2018} on top of which we calculate steady-state neutrino radiation fields using Sedonu. \textit{First panel:} baryon rest density. \textit{Second panel:} magnitude of the three-velocity. \textit{Third panel:} fluid rest temperature. \textit{Fourth panel:} electron fraction.} \label{fig:profile} \end{figure*} An $x$-$z$ slice of the fluid data is shown in Figure~\ref{fig:profile}. The complicated matter, velocity, and spacetime structure pose a significant challenge for radiation transport algorithms. There is a dense emitting disk and a sparse polar region.
Though the second panel in Figure~\ref{fig:profile} only shows the magnitude of the three-velocity, the large velocity in the disk is in the azimuthal direction, while the velocity in the polar region is in the positive $z$-direction. There is also a large $x$-velocity in the boundary between the two within $10$--$15\,\mathrm{km}$ of the black hole. The large temperature in the inner regions of the disk (third panel) indicates where most of the neutrinos are being produced, and the disk still has a low $Y_e$ (fourth panel) as antineutrino losses continue to drive up the $Y_e$. \begin{figure} \centering \includegraphics[width=\linewidth]{e_fluxfac.png} \caption{\textit{Top panel:} neutrino occupation number, maximized over neutrino species and neutrino energy. \textit{Bottom panel:} comoving-frame neutrino flux factor, averaged over species and neutrino energy, weighted by the energy density in the corresponding species-energy bin. Neutrinos are mildly degenerate and trapped in the disk and free-streaming in the polar regions. The goal of an analytic closure is to use just this information to predict all higher-rank moments.} \label{fig:e_fluxfac} \end{figure} We simulate $2\times10^9$ neutrino packets to generate a steady-state radiation field according to Section~\ref{sec:SedonuGR}. As in the original dynamical calculation, we use the LS220 equation of state \cite{Lattimer1991} and use NuLib \cite{OConnor2015} to calculate neutrino absorption and elastic scattering rates. The resulting comoving-frame maximum occupation number (top panel) and energy-density-averaged flux factor (bottom panel) computed by {\tt SedonuGR} are shown in Figure~\ref{fig:e_fluxfac}. The maximum occupation number of a given energy bin is computed by dividing the energy density in an energy bin by the energy density that would be present in that bin if the occupation number were 1. Though neutrinos can become very degenerate in a proto- or hypermassive neutron star, in this disk they are only mildly degenerate. The flux factor plot shows that the disk is on average optically deep (resulting in a flux factor close to 0) and the polar regions are optically thin (resulting in a flux factor approaching unity). Even with only two moments, it is apparent that there is interesting structure in the radiation field, especially at the interface between the disk and polar regions. This will prove to be a very difficult region for analytic closures. The simulation in \cite{Radice2018} was performed with the $M0$ scheme, which combines a leakage method for radiation losses with a diffusion method for neutrino reheating. The goal of the rest of this section is not to analyze the differences between the two methods (as in, e.g., \cite{Richers2015,Foucart2018a}), but to quantify the errors induced by the assumptions that go into closure relations. To do this we will compare the rank-2 and rank-3 moments against those predicted by each closure from the energy density and flux, all computed in the same calculation by {\tt SedonuGR}. \subsection{Assessing Closure Assumptions} \label{sec:assessing_assumptions} The analytic closure method described in Section~\ref{sec:closure} attempts to capture the dominant structure of the radiation field present in the rank-2 and rank-3 moments. In this section, we use {\tt SedonuGR} to assess the ability of such a closure to represent the real second and third angular moments of the radiation field. While other authors have compared the results of simulations performed using Monte Carlo and moment methods (e.g.,
\cite{Foucart2018a}), here we focus instead on how well the Monte Carlo radiation field respects the fundamental assumptions that go into forming such a closure. These assumptions are (1) that the pressure tensor depends only on the flux factor and perhaps the energy density, (2) that the pressure tensor is prolate, (3) that the pressure in the direction of the flux matches the largest eigenvalue of the pressure tensor, and (4) that the third moment can be closed using the same functional form as the pressure tensor. \subsubsection{The pressure tensor depends only on the flux} \begin{figure} \includegraphics[width=\linewidth]{xz89_Pff.png} \includegraphics[width=\linewidth]{hist_f_Pff.png} \caption{Eddington factor. \textit{Top panel:} neutrino pressure in the flux direction normalized by the energy density on an $x$-$z$ slice. As expected, there is a correlation with the flux factor in Figure~\ref{fig:e_fluxfac}. \textit{Bottom panel:} histogram of the number of spatial-species-energy grid cells that have each combination of flux factor (x-axis) and Eddington factor (y-axis). The white curves show the closure relations listed in Table~\ref{tab:closure_error}. The blue curves are the MEFD maximal packing (lower) and MEFD classical (upper) closures. No simple closure can describe all grid cells.} \label{fig:pressure} \end{figure} The most well-known feature of moment closures is the parameterization of the shape of higher moments based on a single quantity: the flux factor. It is also well known that there is no single functional form of the closure that works well in all test cases (e.g., \cite{Murchikova2017}). Comparing the Eddington factor in the top panel of Figure~\ref{fig:pressure} to the flux factor in the bottom panel of Figure~\ref{fig:e_fluxfac}, there indeed appears to be a correlation between the flux factor and the magnitude of the pressure in the direction of the flux. The bottom panel of Figure~\ref{fig:pressure} shows the functional form of each of the closures in Table~\ref{tab:closure_error} on top of a histogram of the corresponding relationship between fluxes and pressures in the simulation domain. The color indicates the number of grid cells that have the flux factor and $P_{ff}/E$ indicated by the location on the plot. The sharp boundary on the lower right side of the colored region is a geometric limitation: it is not possible to simultaneously have all energy moving in one direction (flux factor of 1) and no pressure in that direction. Although most closures lie close to the dark ridge, the distribution is too broad to be described by a single curve. In fact, there appears to be a second dark ridge at flux factors $\gtrsim 0.5$ near the MEFDmp curve (bottom blue curve). The bottom ridge becomes more prevalent at higher latitudes, while the main ridge is more prevalent near the equator. The majority of the closures (white and blue curves) trace the main ridge, since the equatorial regions have a smoother transition from trapped to free-streaming regimes that is more typical of spherical problems. The match between the boundary of this region and the MEFDmp curve hints that the full MEFD closure may be able to account for both ridges, but we will see in Section~\ref{sec:MEFD_results} that the extra information provided by the MEFD closure does not account for the spread.
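For orientation, several of the white curves have simple closed forms. The snippet below evaluates the commonly quoted Levermore and Kershaw Eddington factors together with the classical MEFD limit derived above; this is an illustrative sketch, not a transcription of our plotting code.
\begin{verbatim}
# Sketch: closed-form Eddington factors chi_p(f) for a few closures.
import numpy as np

f = np.linspace(0.0, 1.0, 201)                  # flux factor |F|/E
chi_levermore = (3 + 4 * f**2) / (5 + 2 * np.sqrt(4 - 3 * f**2))
chi_kershaw   = (1 + 2 * f**2) / 3.0
chi_mefd_cl   = 2 * f**2 * (3 - f + 3 * f**2) / 15 + 1.0 / 3.0

# all interpolate between the diffusive (1/3) and free-streaming (1) limits
curves = (chi_levermore, chi_kershaw, chi_mefd_cl)
assert np.allclose([c[0] for c in curves], 1.0 / 3.0)
assert np.allclose([c[-1] for c in curves], 1.0)
\end{verbatim}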
\subsubsection{$P_{ff}$ is the largest eigenvalue} \label{sec:eigenvalue} \noindent \begin{figure} \centering \includegraphics[width=\linewidth]{eigenvals.png} \caption{Misalignment between the pressure tensor and the eigenvectors. \textit{Top panel:} the energy- and species-averaged difference between the Eddington factor and the largest eigenvalue $\lambda_0$ of the pressure tensor. \textit{Middle panel:} similar to the above, showing the energy- and species-averaged minimum of the difference between the Eddington factor and either the largest or smallest eigenvalue of the pressure tensor ($\lambda_0$ or $\lambda_1$). \textit{Bottom panel:} similar to the above, but minimizing over the difference between the Eddington factor and any of the three eigenvalues. In most regions the largest axis of the pressure tensor is parallel to the flux. Where the largest deviations from this occur, the smallest axis of the pressure tensor is largely parallel to the flux. In the interface between the disk and polar region, the flux is not well aligned with any of the pressure tensor axes.} \label{fig:eigenvalue_error} \end{figure} The form of Equation~\ref{eq:closure_interpolation} indicates that the flux is always an eigenvector of the pressure tensor under the analytic closure approximation, since it is an eigenvector of both the diffusive and free-streaming limits. Furthermore, $P_{ff}$ must correspond to the largest eigenvalue for all but the Wilson and MEFD closures. For these closures, the Eddington factor is allowed to drop below $1/3$ at low flux factors, so $P_{ff}$ then corresponds to either the largest or smallest eigenvalue. The top panel of Figure~\ref{fig:eigenvalue_error} shows the energy-averaged difference between the pressure in the direction of the flux and the largest eigenvalue. For all but the Wilson and MEFD closures, the darkness of the color effectively indicates the amount of deviation from the closure approximation. The middle panel shows the average difference between $P_{ff}$ and either the largest or the smallest eigenvalue, so this panel shows the magnitude of the deviation from fundamental properties of the MEFD and Wilson closures. Finally, the bottom plot shows the average minimum difference from any of the three eigenvalues, and so a dark color indicates that the direction of the flux is misaligned with all of the eigenvectors, or that the orientation of the pressure tensor is weakly tied to the flux direction. One can also compare the pressures in other directions with the eigenvalues, though we do not show the results here. The only other local vector quantity to compare to is the three-velocity, so we can define the direction $w$ as the direction in the $F$-$v$ plane orthogonal to $F$, and the direction $q$ as the direction orthogonal to both (a sketch of this triad follows below). Near the equator beyond a radius of $\sim 50\,\mathrm{km}$, $P_{ff}$ matches the largest eigenvalue, $P_{qq}$ matches the smallest, and $P_{ww}$ matches the middle one. The radiation there is moving predominantly radially, hence the pressure is largest in the direction of the flux. The disk is larger in the azimuthal direction than in the polar direction, and the larger solid angle of emitting surface presented in one direction results in a larger pressure in that direction. The regions above the disk are more difficult to understand.
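The triad referenced above can be constructed with a single Gram-Schmidt step; the following sketch (our own illustrative helper, not code from the paper's pipeline) makes the construction concrete.
\begin{verbatim}
# Sketch: build the local orthonormal triad {fhat, what, qhat} from the
# flux F and the fluid three-velocity v (illustrative helper).
import numpy as np

def flux_triad(F, v):
    fhat = F / np.linalg.norm(F)
    w = v - fhat * (v @ fhat)        # component of v orthogonal to F
    what = w / np.linalg.norm(w)
    qhat = np.cross(fhat, what)      # orthogonal to both, unit length
    return fhat, what, qhat

fhat, what, qhat = flux_triad(np.array([1.0, 0.2, 0.0]),
                              np.array([0.1, 0.5, 0.3]))
\end{verbatim}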
In the diagonal regions above the disk but below the curling flow, the largest eigenvalue is best represented by $P_{qq}$, and in the curling flow $P_{ww}$ and $P_{qq}$ are both close to the largest eigenvalue, resulting in the dark regions in the top panel of Figure~\ref{fig:eigenvalue_error}. In some regions $P_{ff}$ matches the smallest eigenvalue (seen in the removal of dark regions between the top and middle panels) or the middle eigenvalue (seen in the removal of dark regions between the middle and bottom panels). We saw no clear trend in local variables that could account for all of this behavior. \subsubsection{The pressure tensor is prolate} \begin{figure} \centering \includegraphics[width=\linewidth]{xz89_oblateness.png} \includegraphics[width=\linewidth]{fluxfac_oblateness_m.png} \caption{Oblateness of the pressure tensor as defined in Equation~\ref{eq:anisotropy_oblateness}. The analytic closures in Table~\ref{tab:closure_error} only allow for an oblateness of 0 or 1. \textit{Top panel:} $x$-$z$ slice showing the species- and energy-averaged oblateness. The interface between the disk and polar region and the region just above the black hole show large deviations from analytic closure assumptions. The noise in the disk arises because at small anisotropies, small changes in the radiation field map to radically different oblateness values. \textit{Bottom panel:} histogram showing the number of species-energy-spatial grid cells with each combination of oblateness and flux factor. There is an inverse correlation between flux factor and oblateness, and few zones have an oblateness of zero. The dark region on the left is due to the optically-deep regions of the disk, where the flux factor is small and any oblateness can be realized through statistical Monte Carlo noise.} \label{fig:oblateness} \end{figure} Once again, for all but the MEFD and Wilson closures, the assumed pressure tensor is prolate (oblateness of zero), while for the MEFD and Wilson closures the pressure tensor can only be prolate (oblateness of zero) or oblate (oblateness of 1). The top panel of Figure~\ref{fig:oblateness} shows an $x$-$z$ slice of the species- and energy-averaged oblateness of the pressure tensor. Far from the disk, the oblateness indeed tends to zero as expected. In the optically deep regions of the disk, there is significant noise, since the distribution is nearly isotropic in the fluid rest frame and small Monte Carlo statistical fluctuations correspond to large changes in oblateness when the anisotropy is so small. The first interesting note is that very close to the black hole and in the interface between the disk and polar region, the oblateness can take on the full range of values. This is another indicator that the analytic closure approximation is poor in these regions. Interestingly, the oblateness is also nonzero in the equatorial optically-thin regions at radii larger than $50\,\mathrm{km}$. This is a result of the aspect ratio of the disk. As described in Section~\ref{sec:eigenvalue}, the pressure in the azimuthal direction is larger than in the $z$-direction because the disk is larger in that direction. As such, the radial pressure is largest, followed by the azimuthal pressure, followed by the $z$ pressure. The triaxial nature of the pressure tensor yields oblateness values of $\lesssim 0.5$. The bottom panel of Figure~\ref{fig:oblateness} shows a histogram of oblateness and flux factor.
The boundary at the right side of the plot is a geometric limit: in the limit of the flux factor approaching 1, all of the energy must be moving in one direction, and so the pressures in directions orthogonal to the flux direction tend to zero. The histogram shows that there is an inverse correlation between the flux factor and the oblateness, but once again it does not follow a simple functional form. In addition, this trend varies with polar angle. The dark region on the left side of the plot comes from equatorial regions at $\cos\theta\lesssim0.5$, since this comes from the low-flux-factor region in the optically-deep part of the disk with nearly random oblateness. On the right side of the plot, the slope of the dark ridge is steeper at low latitudes ($\cos\theta\lesssim0.2$), indicating that some of this trend comes from the distal equatorial regions. If we only include grid zones at high latitudes ($\cos\theta\gtrsim0.6$), the slope of the dark region at $|F|/E\gtrsim 0.6$ becomes shallower than in Figure~\ref{fig:oblateness}, but then quickly rises to an oblateness of 1 at a flux factor of 0.5. This is also a geometric effect. The flux factor is small and the oblateness large at small radii despite a very low optical depth, since the radiation is crossing largely in the equatorial direction with little upward component. This trend is potentially useful information for designing a closure that uses the coordinate position as extra information, though we were unsuccessful in finding a means to do so (see Section~\ref{sec:closure_comparison}). \subsubsection{$L$ can be closed with $\chi_p$} \label{sec:results_L} \begin{figure} \includegraphics[width=\linewidth]{xz89_Lfff.png} \includegraphics[width=\linewidth]{hist_f_Lfff.png} \caption{\textit{Top panel:} $x$-$z$ slice of the species- and energy-averaged component of the rank-3 moment tensor in the direction of the flux. Similar to the other moments, there is a correlation with the flux factor in Figure~\ref{fig:e_fluxfac}. \textit{Bottom panel:} histogram showing the number of cells with each combination of $L_{fff}$ and flux factor. The flux factor is subtracted from the y-axis values to show more detail in the plot. The white curves show the closures listed in Table~\ref{tab:closure_error}, while the blue curves show the MEFD maximal packing (lower) and MEFD classical (upper) limits derived in Section~\ref{sec:analytic_MEFD}. The green curves are equivalent to the white curves, but interpolate using Equation~\ref{eq:Lfree_BAD}. Most of the closures do a decent job of tracking the most dense region of the plot, but the MEFD closure covers a larger portion of the high-density region.} \label{fig:Lfff} \end{figure} The top panel of Figure~\ref{fig:Lfff} shows a slice of the component of the rank-3 tensor in the direction of the flux, $L_{fff}$. Once again, there is a correlation between this and the flux factor in Figure~\ref{fig:e_fluxfac}. Since both the rank-2 and rank-3 tensors are being interpolated between the optically thin and thick limits, the same closure relation is often used for both. That is, $\chi_l$ is assumed to be equal to $\chi_p$. The only closure with a self-consistent third-moment interpolator is the MEFD closure, which we derive in Section~\ref{sec:analytic_MEFD}. The bottom panel of Figure~\ref{fig:Lfff} shows a histogram of the flux factors and values of $L_{fff}$, with the flux factor subtracted from the $y$-axis value to show more detail in the plot.
Similar to the case of the pressure histogram in Figure~\ref{fig:pressure}, there are an upper and a lower dark ridge, and the distribution is too broad to be described by a single curve. The white curves show values of $L_{fff}$ inferred by using the $\chi_p$ from the bottom set of closures in Table~\ref{tab:closure_error} in lieu of an appropriately derived $\chi_l$. The blue curves show the results from the $\chi_l$ derived in Section~\ref{sec:analytic_MEFD} for the MEFD maximal packing (lower) and classical (upper) limits. The white curves do generally follow the upper ridge, but are unable to account for the lower ridge. The MEFD maximum packing curve, however, once again nicely encompasses this region, leading to a hope that the MEFD closure may be able to account for both regions. However, we will show in Section~\ref{sec:MEFD_results} that in this snapshot the MEFD closure closely resembles its classical limit and the extra information from the degeneracy cannot explain the spread. The green curves in Figure~\ref{fig:Lfff} show the results when using Equation~\ref{eq:Lfree_BAD} for the free-streaming limit. This yields unequivocally poor results. Although one can construct an interpolator for this flavor of interpolation (e.g., Equation~\ref{eq:chil_BAD} for the MEFD closure), it is more straightforward to use Equation~\ref{eq:PL_diff_free}. Doing so ensures that the trace of the rank-3 tensor is the flux and makes $\chi_p$ a reasonable approximation for $\chi_l$. \subsection{Specific Closures} \label{sec:specific_closures} The particular closures we compare to are listed in Table~\ref{tab:closure_error} and shown in the bottom panels of Figures~\ref{fig:pressure} and \ref{fig:Lfff}. The Thick and Thin closures simply take the corresponding limit in Equation~\ref{eq:closure_interpolation} irrespective of the flux factor, providing a sense of scale for the errors the other, more reasonable closures make. The MEFD closure, along with its classical (MEFDc) and maximum-packing (MEFDmp) limits, was described in detail in Section~\ref{sec:analytic_MEFD} and guarantees that the set of moments is realizable with a fermionic radiation field. It is also the only closure with a self-consistent closure for the third moment. The Levermore closure \cite{Levermore1984} is derived by assuming that the radiation field is isotropic in the frame where the net radiation flux is zero. The Kershaw closure \cite{Kershaw1976} is a simple, non-unique quadratic function interpolating between the optically thick and thin limits in a way that is always realizable for a Bose-Einstein gas. The Wilson closure \cite{Wilson1975} is the harmonic mean of the diffusive and free-streaming limits \cite{Smit2000}. Finally, the Janka closures \cite{Janka1991} were determined from fits to Monte Carlo neutrino transport data in one-dimensional simulations of core-collapse supernovae. \subsubsection{The MEFD Closure} \label{sec:MEFD_results} \begin{figure} \centering \includegraphics[width=\linewidth]{zeta.png} \caption{Evaluation of the maximum entropy closure assumptions. \textit{Top panel:} 2-D histogram of the flux saturation $x$ and the pressure saturation $\zeta_p$, together with the universal maximum-entropy pressure-closure curve (Equation~\ref{eq:MEFD_approximate}). \textit{Bottom panel:} similar, but for the semi-universal rank-3 moment closure curve (Equation~\ref{eq:MEFD_approximate}).
If the Monte Carlo-derived distributions looked like the maximum-entropy distributions (Equation~\ref{eq:f_MEFD}), all points would lie along the dotted white curve in the top panel. In the bottom panel the curve is not universal away from the $x=\{0,1\}$ limits, but the approximate curve still lies neatly within the most dense regions of the plot.} \label{fig:MEFD_closure_fit} \end{figure} The MEFD maximum packing curve neatly outlines the bulk of the dark regions in Figures~\ref{fig:pressure} and \ref{fig:Lfff}. Unlike any of the other closures, the MEFD closure also uses the occupation number as input, so one might be tempted to guess that the spread is neatly accounted for by this extra information. The salient feature of the MEFD closure for the pressure tensor is that there is a single universal curve of $\zeta_p(x)$ (Equation~\ref{eq:zeta_0}) for all values of the occupation number, and the effects of the occupation number come in only through Equation~\ref{eq:MEFD_closure} and the definition of the flux saturation $x$. The top panel of Figure~\ref{fig:MEFD_closure_fit} is similar to the bottom panel of Figure~\ref{fig:pressure}, though the x-axis is the flux saturation $x$ instead of the flux factor, and the y-axis is the pressure saturation $\zeta_p$ instead of the Eddington factor. The white curve shows the approximate universal function from Equation~\ref{eq:MEFD_approximate}. If the information from the occupation number were able to account for the spread of the dark regions in Figure~\ref{fig:pressure}, we would expect the dark regions to collapse to the white line in this figure. Unfortunately, that is not the case: since the occupation numbers only reach at most $0.5$, $\{|F|/E,P_{ff}\}$ and $\{x,\zeta_p\}$ are nearly identical. The same applies to the bottom panel, which shows the rank-3 saturation $\zeta_l$ and the semi-universal curve in Equation~\ref{eq:MEFD_approximate}. Since $\zeta_l$ is not universal except in the limits of $x\rightarrow\{0,1\}$, we do not expect the distribution to collapse to a single line, but this does reassuringly demonstrate that the approximate curve lies in the most dense regions of the plot. \subsubsection{Other Closures} \label{sec:closure_comparison} \begin{table} \begin{tabular}{lcccccc} Closure & $P_{ff}/E$ & $\Theta$ & $A$ & $L_{fff}/E$ & $L_4/E^2$ & $\alpha\mathcal{F}^\mu_\mathrm{ann}u_\mu$\\ & $\times100$ & $\times100$ & $\times100$ & $\times100$ & $\times100$ & $\times100$ \\\hline Thick & 12.6 & -- & 29.8 & 2.8 & 1.5 & -16.6 \\ Thin & 17.7 & 6.9 & 35.6 & 2.15 & 1.93 & 87.4 \\\hline MEFD & 0.297 & 6.9 & 1.20 & 0.316 & 0.33 & 1.01 \\ MEFDc & 0.297 & 6.9 & 1.20 & 0.316 & 0.33 & 1.10 \\ MEFDmp & 0.939 & 13.6 & 2.26 & 0.705 & 0.42 & -4.76 \\\hline Levermore & 0.233 & 6.9 & 0.93 & 0.231 & 0.22 & 3.98 \\ Kershaw & 0.32 & 6.9 & 0.90 & 0.339 & 0.28 & 11.1 \\ Wilson & 0.33 & 10 & 1.27 & 0.237 & 0.23 & 2.17 \\ Janka1 & 0.442 & 6.9 & 1.59 & 0.198 & 0.22 & -1.76 \\ Janka2 & 0.274 & 6.9 & 0.96 & 0.213 & 0.21 & 2.48 \\ \end{tabular} \caption{Integrated difference between the Monte Carlo and the indicated closure solution for representative components of the rank-2 and rank-3 moment tensors. Numbers are displayed to the first digit that changes in a similar calculation with 0.4 times as many Monte Carlo particles and are multiplied by 100 to remove leading zeros for display. $P_{ff}$ and $L_{fff}$ are the components of the neutrino pressure tensor and rank-3 tensor along the direction of the flux.
$\Theta$ is the oblateness and $A$ is the anisotropy (Equation~\ref{eq:anisotropy_oblateness}). $L_4$ is the scalar invariant of the rank-3 tensor (Equation~\ref{eq:L4}). $\alpha\mathcal{F}^\mu_\mathrm{ann}u_\mu$ is the rate of increase of the thermal energy by neutrino pair processes as measured by an observer at infinity. The pair-process error is an actual relative error rather than a $\chi^2$-like value, comparing to the Monte Carlo result of $4.78\times10^{50}\,\mathrm{erg\,s}^{-1}$.} \label{tab:closure_error} \end{table} Table~\ref{tab:closure_error} shows the errors in integrated values of various quantities relative to the Monte Carlo results. Specifically, the numbers for a quantity $q$ are computed as \begin{equation} \frac{1}{n_x n_y n_z n_\epsilon} \sum_{i,j,k,l} (q_{\mathrm{closure},i,j,k,l} - q_{\mathrm{MC},i,j,k,l})^2\,\,, \end{equation} where the prefactor contains the number of grid zones in $x$, $y$, $z$, and energy, and the sum is over the corresponding grid zones labeled by $\{i,j,k,l\}$. The exception is the farthest right column, which we discuss in Section~\ref{sec:annihilation_results}. The Thick and Thin closures are obviously poor choices, but are shown for a sense of scale. Many of the closures are similarly accurate, since they all generally lie within the rather broad distribution of flux factor and pressure/rank-3 tensor in Figures~\ref{fig:pressure} and \ref{fig:Lfff}. Even so, the Levermore closure shows the smallest error for $P_{ff}/E$, the Janka 2 closure for $L_4/E^2$, and the Janka 1 closure for $L_{fff}/E$. These are followed closely by the MEFD closure, which performs reasonably well in the pressure categories (columns 2-4), although somewhat poorly in the rank-3 categories (columns 5-6). Ironically, using the rank-2 closure for the rank-3 moments produces smaller errors than the rank-3 closure freshly derived in Section~\ref{sec:analytic_MEFD}. Many of the closures yield similar errors for the oblateness $\Theta$ because the Thin, MEFDc, Levermore, Kershaw, and Janka closures all assume an oblateness of exactly 0. The MEFD, MEFDmp, and Wilson closures do allow for an oblateness of exactly 1 at low flux factors (as indicated by the curves dipping below 1/3 in Figure~\ref{fig:pressure}), resulting in a larger error. The only way to drive this error smaller is to create a closure that allows for triaxial pressure tensors. We attempted to create such a closure by assuming the oblateness follows $\Theta=(1-|F|/E)^2$ (estimated from Figure~\ref{fig:oblateness}), setting the eigenvector with the largest eigenvalue along $F/E$, that with the smallest eigenvalue along the component of the three-velocity orthogonal to $F/E$, and that with the middle eigenvalue along the direction perpendicular to both. This results in a smaller oblateness error of $0.0442$, though at the cost of a marginal increase of the error in $P_{ff}/E$ to 0.00304. We were unable to find a good way to set the oblateness and orientation of the pressure tensor using only local variables, since the trends differ in different regions of the system (see Section~\ref{sec:assessing_assumptions}). In addition, although the MEFD and Levermore closures guarantee a realizable distribution (i.e., they never require occupation numbers larger than 1 or smaller than 0), once we break the assumptions on the symmetry directions used in the derivation, it is not clear how to ensure that the triaxial closure is realizable.
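For concreteness, the triaxial construction described above can be written down compactly. The sketch below is a hedged illustration of our attempt (assuming $\chi_p\geq1/3$ so that the prescribed largest eigenvalue is consistent); it rebuilds a pressure tensor from the prescribed oblateness and orientation using the trigonometric parameterization of Section~\ref{sec:tensor_invariants}.
\begin{verbatim}
# Sketch of the triaxial-closure experiment: prescribe the oblateness
# Theta = (1 - |F|/E)^2, orient the eigenbasis with F and v, and
# rebuild P^{ij}. Assumes chi_p >= 1/3 (largest eigenvalue along F).
import numpy as np

def triaxial_pressure(E, F, v, chi_p):
    fhat = F / np.linalg.norm(F)
    w = v - fhat * (v @ fhat)
    what = w / np.linalg.norm(w)         # smallest-eigenvalue direction
    qhat = np.cross(fhat, what)          # middle-eigenvalue direction
    theta = (np.pi / 3.0) * (1.0 - np.linalg.norm(F) / E)**2
    lam0 = chi_p * E                     # largest eigenvalue, along fhat
    sqrtQ = (lam0 - E / 3.0) / (2.0 * np.cos(theta))
    k = np.arange(3)
    lam = E / 3.0 + 2.0 * sqrtQ * np.cos(theta + 2.0 * np.pi * k / 3.0)
    # lam[0] is largest, lam[1] smallest, lam[2] middle; trace(P) = E
    basis = np.column_stack([fhat, what, qhat])
    return basis @ np.diag([lam[0], lam[1], lam[2]]) @ basis.T
\end{verbatim}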
It is worth noting that the MEFD and MEFDc closures yield nearly identical results, indicating that the neutrino field in this snapshot is not very degenerate (see Figure~\ref{fig:e_fluxfac}). The ability of the MEFD closure to yield realizable moments in highly degenerate scenarios is thus little used here. For the same reason, the MEFDmp closure yields comparatively large errors, since the closure assumes that the distribution is maximally degenerate for a given flux factor. \subsection{Annihilation} \label{sec:annihilation_results} \begin{figure} \centering \includegraphics[width=\linewidth]{Qannihil.png} \caption{Neutrino pair annihilation rate normalized by the mass-energy density. The rates of deposition of $x$-, $y$-, and $z$-momentum are shown in the first, second, and third panels, respectively. The rate of thermal energy deposition is shown in the bottom panel. Neutrino pair annihilation is not dynamically important in the disk, but in the polar regions it rapidly heats the low-density matter and drives it inward, around in the direction of the disk's orbit, and upward.} \label{fig:annihilation} \end{figure} Figure~\ref{fig:annihilation} shows the momentum (top three panels) and thermal energy (bottom panel) deposition rates in a slice of the domain, normalized by the mass-energy density. Since we include both absorption and emission, the net effect on the dense optically-thick regions of the disk is minimal. The plotted value then indicates the rate at which the deposited thermal energy density or momentum density becomes comparable to the rest-mass energy density. The top panel shows that the right side of the polar region is being pushed left, and the left side right, possibly helping to collimate the flow very close to the black hole. The second panel shows the azimuthal momentum deposition, which is in the direction of the disk orbit. The third panel shows the $z$-component of the momentum deposition, indicating that the annihilation can directly provide a great deal of upward momentum. Finally, the bottom panel shows the rate of deposition of thermal energy. Even without direct momentum deposition, this thermal energy will drive a polar outflow. For an order-of-magnitude estimate of the power available from neutrino annihilation for powering polar outflows, we compute the net deposition of thermal energy in the region within 45 degrees of the polar axis. The fiducial Monte Carlo annihilation calculation yields $4.78\times10^{50}\,\mathrm{erg\,s}^{-1}$ of deposited thermal energy. The specific heating rate, annihilation power, and density in the polar region are comparable to those seen in dynamical calculations including neutrino annihilation \cite{Just2016,Fujibayashi2017}. Because of this, we echo the conclusions of these works that neutrino pair annihilation will modify the dynamics and leptonization of the polar ejecta, but the mass in this region will likely preclude a neutrino-driven jet. The time component of the four-force due to neutrino pair annihilation in Equation~\ref{eq:annihil_fourforce} depends on only as many neutrino field moments as there are terms used in the Legendre expansion of the kernel $\Phi$. Using this, we can demonstrate the importance of each of these terms. If we only include the $\Phi_0$ term, the annihilation power comes out to $1.22\times10^{51}\,\mathrm{erg\,s}^{-1}$ (larger by a factor of 2.5 than the fiducial result above using three terms). Including only the $\Phi_0$ and $\Phi_1$ terms yields $3.98\times10^{50}\,\mathrm{erg\,s}^{-1}$, or 0.83 times the fiducial result.
Thus, including the estimate for the third moment of the annihilation kernel in Equation~\ref{eq:phiannihil2} has roughly a $17\%$ effect on the available annihilation power. Since the term in Equation~\ref{eq:annihil_fourforce} with $\Phi_2$ depends on the pressure tensor, the choice of closure can affect the calculated annihilation rate in a two-moment radiation transport scheme. The farthest right column of Table~\ref{tab:closure_error} shows the relative error of the integrated annihilation power due to the choice of closure. The MEFD and MEFDc closures exhibit the smallest error, while the Kershaw closure yields a large error of 11\%. The other closures yield errors of only a few percent. Thus, despite the complexity of the radiation field in the polar regions, the choice of closure is important only for obtaining percent-level accuracy in the integrated annihilation power. Finally, we briefly note the impact of two other assumptions. If we do not subtract off the mass energy of the electron-positron pairs, we get a polar annihilation power of $5.85\times10^{50}\,\mathrm{erg\,s}^{-1}$ (a difference of 22\%). Second, although we include the $\Phi^{(p)}$ terms in Equation~\ref{eq:annihil_fourforce}, they do not actually affect the polar annihilation power to the presented accuracy, since the vast majority of the neutrino pair production occurs in the dense disk. \section{Conclusions} \label{sec:conclusions} We use a newly developed steady-state Monte Carlo radiation transport code to evaluate assumptions used by analytic closures to the moment equations for radiation transport in neutron star merger disks. We first extend the MEFD closure to include the rank-3 moment for use in spectral M1 simulations (Figure~\ref{fig:MEFD_l}). The proposed approximation to the closure (Equation~\ref{eq:MEFD_approximate}) represents the full solution under the assumptions that go into the closure to within $3.5\%$ for all flux factors and neutrino degeneracies. This is the only closure with a self-consistent treatment of the third moment, though we do not expect the impact to be large. In order to test this and other closures in the context of neutron star merger simulations, we developed the Monte Carlo neutrino transport code {\tt SedonuGR} to compute the steady-state neutrino radiation field on a static discretized fluid and spacetime background from a three-dimensional simulation snapshot. We calculate the full steady-state radiation field including moments up to rank 3 and compare these moments against the fundamental assumptions used in all analytic closure relations (Section~\ref{sec:assessing_assumptions}). We demonstrate the expected result that a single analytic closure is unable to reproduce the Eddington factor everywhere (Figure~\ref{fig:pressure}). Furthermore, the largest axis of the pressure tensor ellipsoid is not aligned with the flux direction just above the black hole and in the interface between the disk and the evacuated polar regions (Figure~\ref{fig:eigenvalue_error}), contrary to the assumptions commonly used in generating analytic closures. In these same regions, the pressure tensor ellipsoid is largely triaxial (Figure~\ref{fig:oblateness}), unlike the prolate ellipsoid assumed by analytic closures. This is also true near the equator outside of the dense part of the disk due to the aspect ratio of the disk.
Finally, we demonstrate that the analytic relations used to determine the pressure tensor are also reasonable closures for the rank-3 moment (Figure~\ref{fig:Lfff}), though they once again cannot explain the full spread seen in various parts of the disk. None of the closures listed in Table~\ref{tab:closure_error} stands out as an obvious best choice, including the MEFD closure that we so carefully extend, though the MEFD, MEFDc, Levermore, and Janka 2 closures were as accurate as can be expected of such a closure. Although we tried to use additional information like the oblateness of the pressure tensor to improve the analytic closures, the lack of clear trends made such estimate-based improvements of little benefit. Finally, we briefly touch on the impact of the moment expansion in calculating the rate of deposition of energy and momentum in the polar regions (Figure~\ref{fig:annihilation}). Assuming the annihilation kernels are expanded in terms of Legendre polynomials, keeping up to the third term in the expansion (which involves the pressure tensor) changes the net annihilation power by roughly $17\%$ relative to keeping only the first two terms. The choice of closure among those listed in Table~\ref{tab:closure_error} has only a few-percent impact on the annihilation power, with the exception of the Kershaw closure ($11\%$) and the bracketing Thick and Thin closures. We intentionally avoid investigating non-local closures (e.g., closures that depend on the coordinate position or on derivatives of the radiation field) because they fundamentally change the nature of the transport equation. For instance, if the pressure tensor is evaluated based on the gradient of the flux factor, the flux of the neutrino flux would then depend on the second derivative of itself, adding an elliptic character to an otherwise hyperbolic equation. It may yet be possible to construct a closure specific to neutron star mergers using, for example, a neural network and exact transport data from a large number of snapshots of many models evolved using exact methods like Monte Carlo \cite{Foucart2018,Miller2019a}, discrete ordinates \cite{Nagakura2017a}, or characteristics \cite{Davis2012,Weih2020}. In addition, it is possible to extend moment methods to dynamically evolve the pressure tensor as well (e.g., \cite{Banach2005,Banach2013}), requiring a closure for the rank-3 and rank-4 moments. However, the problem certainly appears to be complex enough to warrant using full transport methods directly. \section{Acknowledgements} We are grateful to David Radice for providing the 3D model of a neutron star merger accretion disk. We also thank Sanjana Curtis, Carla Fr\"ohlich, Dan Kasen, and Hiroki Nagakura for useful discussions, and Jim Kneller and Carla Fr\"ohlich for exclusive access to computing resources. SR is supported by the N3AS Fellowship under National Science Foundation grant PHY-1630782 and Heising-Simons Foundation grant 2017-228. This material is based upon work supported by the National Science Foundation under Award No. AST-2001760. Software used: NumPy \cite{VanDerWalt2011}, Matplotlib \cite{Hunter2007}, and Mathematica \cite{Mathematica}.
\section{Materials and Methods} \subsection{Sample preparation} We study {Cr$_2$O$_3$ \/} domain walls on three bulk single crystals. Samples A and B (Refs.~\onlinecite{Fiebig1995,Fiebig1996,Fiebig1996thesis}) are grown by the Verneuil method and oriented using a single-crystal X-ray diffractometer. The samples are cut perpendicular to the $z$-axis, i.e., with (001) orientation. Subsequently, the samples are thinned down to $70\unit{\,\mu{\rm m}}$. Both samples are lapped and polished, each of them following a different process. Sample A is lapped using SiC powder with $3\unit{\,\mu{\rm m}}$ grain size on a cast-iron lapping plate. Subsequently, the sample is polished following a two-step process. In the first step, the lapped surface is polished with a soft metal plate using diamond powder with $1\unit{\,\mu{\rm m}}$ grain size. In the second step, a refining polish follows, using a polyurethane polishing plate together with colloidal silicate. Here, scratches from the previous mechanical treatments are removed. The sample surface is polished until it reveals a root-mean-square (rms) roughness below 1\,nm. Sample B is lapped using Al$_{2}$O$_{3}$ powder and H$_{2}$O solution. Next, the lapped surface is diamond polished until it reveals a surface with an rms roughness below 3\,nm. Sample C (Ref.~\onlinecite{Fiebig1996thesis}) is a flux-grown (001) {Cr$_2$O$_3$ \/} platelet of $30\unit{\,\mu{\rm m}}$ thickness. The flat as-grown surface presents an rms roughness below 0.5\,nm. SHG images of all crystals are shown in Figs. S1 and S2. We create antiferromagnetic domains by cooling the samples through the transition temperature $T_\mr{N}$. For samples A and B, domains are induced by magnetoelectric poling~\cite{Krichevtsov1988}. In sample C, different domain patterns spontaneously form when the sample is cooled through $T_\mr{N}$. \subsection{Second harmonic generation (SHG) measurements} SHG microscopy exploits an interference contrast of frequency-doubled optical photons in domains of opposite magnetic polarization to reveal the domain pattern~\cite{Fiebig1995}. A magnetic contribution to the frequency-doubled light wave coupling linearly to the antiferromagnetic order parameter $\pm L$ interferes with a frequency-doubled crystallographic background contribution, which identifies the two antiferromagnetic domain states by their different brightness~\cite{Fiebig1994}. We acquire the SHG images with a transmission SHG setup based on a Coherent Elite Duo laser system, which emits 120\,fs pulses at a repetition rate of 1\,kHz. An optical parametric amplifier tunes the wavelength to excite the bulk {Cr$_2$O$_3$ \/} samples with a photon energy of 1.033\,eV and a pulse energy of 80\,$\mu$J. The crystals are excited in transmission and at normal incidence by an unfocused circularly-polarized laser beam. Right-handed circularly-polarized light denotes a clockwise rotation of the electric-field vector of light with respect to its propagation direction; the opposite holds for left-handed circular polarization. A camera lens is used to collect the SHG signal. Optical filters are added to select the SHG spectral wavelength, suppressing the fundamental beam and higher-harmonic contributions. SHG light is detected at room temperature with a Jobin-Yvon back-illuminated, deep-depletion digital camera with a near-infrared detector chip of 1024$\times$256 pixels. The camera is cooled with liquid nitrogen to reduce thermal noise.
\subsection{Nanoscale scanning diamond magnetometry (NSDM) measurements} Scanning NV magnetometry measurements are carried out on a user-facility instrument built in-house~\cite{ccmx}, under ambient conditions. The instrument uses $520\unit{nm}$ laser light and $2.76\unit{GHz}$ to $2.91\unit{GHz}$ microwave pulses to detect the NV center spin resonances. Laser illumination is kept below $90\unit{\,\mu{\rm W}}$ to avoid laser-induced heating of the sample. The spin resonance frequency is determined by sweeping the microwave frequency and fitting a Lorentzian function to the optically-detected magnetic resonance spectrum. Four different diamond probes (QZabre LLC) with $\sim 22\%$ CW ODMR contrast at a measurement count rate of $\sim 200\unit{kC/s}$ are used. The sensitivity of these probes (as determined from the average least-squares variance of the center frequency) is $1.7\unit{\,\mu{\rm T}}$ for an integration time of 6.4 seconds per pixel. All scans are performed on the {Cr$_2$O$_3$ \/} surface pointing towards the camera in the SHG experiment. We use both continuous and pulsed ODMR protocols~\cite{Dreau2011,Schirhagl2014} on either transition ($m_S=0$ to $m_S=\pm1$) of the NV center. A small external bias field of $\sim 4\unit{mT}$ is applied to split the spin resonances; this small bias field is not expected to influence the {Cr$_2$O$_3$ \/} physics. To convert the measured spin resonance frequency $f$ to units of magnetic field, we compute \begin{align} B_\mr{NV} = \frac{f_0 - f}{28.02\unit{MHz/mT}} \end{align} where $f_0$ is the mean frequency taken over the entire scan, which is approximately the frequency far from the sample surface. We recall that NV magnetometry provides one vector component of the magnetic field, $B_\mr{NV} = \vec{e}\cdot\vec{B}$, which is the projection of $\vec{B}$ onto the anisotropy axis $\vec{e} = (e_x,e_y,e_z)$ of the spin. The unit vector $\vec{e} = (\sin\theta_\mr{NV}\cos\phi_\mr{NV},\sin\theta_\mr{NV}\sin\phi_\mr{NV},\cos\theta_\mr{NV})$ corresponds to the symmetry axis (N-V axis) of the NV center, as expressed by the laboratory-frame angles $\theta_\mr{NV}$ and $\phi_\mr{NV}$. The sensor vector orientation is pre-determined for each tip using an external field sweep. The stand-off distance $z$ between the NV center and the sample surface is measured by independent calibration scans over a magnetized Co stripe before and after the {Cr$_2$O$_3$ \/} scans~\cite{Tetienne2015,Velez2019}. For our probes, ($\theta_\mr{NV},\phi_\mr{NV},z$) is ($55^\circ$, $270^\circ$, $73\pm7\unit{nm}$) for tip A, ($55^\circ$, $180^\circ$, $64\pm4\unit{nm}$) for tip B, ($55^\circ$, $176^\circ$, $65\pm3\unit{nm}$) for tip C, and ($55^\circ$, $176^\circ$, $68\pm8\unit{nm}$) for tip D. \subsection{Definition of surface magnetization $\sigma_z^0$} Antiferromagnetic order in the form of vertically alternating layers of oppositely polarized ions leads to an effective surface layer magnetization on the top and bottom surfaces of the crystal, in analogy to the bound surface charge appearing for a polarized dielectric~\cite{Belashchenko2010,Andreev1996,He2010}. To calculate the surface magnetization, we assign the alternating layers of opposite polarization to two oppositely magnetized volumes, each with magnetization $M_s = n m/V$, vertically shifted with respect to each other by $s$. Here, $n$ is the number of ions per unit cell and polarization direction, $m$ is the magnetic moment per ion, and $V$ is the volume of the unit cell.
Within the bulk, the magnetization of the two volumes is exactly compensated, except in two thin layers of thickness $s$ at the top and bottom of the body. Thus, the bulk antiferromagnetic order appears as a magnetized surface layer at the top and bottom of the crystal, with an effective layer magnetization of \begin{align} \sigma_z^0 = s M_s = \frac{n m s}{V} \ . \end{align} For thick crystals, a local magnetic probe only detects the stray field of the top layer. {Cr$_2$O$_3$ \/} has a hexagonal unit cell with a side length of $a=4.961\unit{\AA}$, a height of $c=13.6\unit{\AA}$, a hexagonal surface area of $A=\frac{3\sqrt{3}}{2}a^2=63.9\unit{\AA^2}$, and a volume of $V=Ac=869.6\unit{\AA^3}$ (Refs.~\onlinecite{Fiebig1996thesis,He2010}). The hexagonal unit cell is constructed from six vertically stacked {O$^{2-}$ \/} planes. Each {O$^{2-}$ \/} plane has two nearest {Cr$^{3+}$ \/} ions of opposite magnetic polarization located $0.941\unit{\AA}$ above or below the plane, respectively, therefore $s=1.882\unit{\AA}$ (see Fig.~1a and Ref.~\onlinecite{Fiebig1996thesis}). Of the 12 {Cr$^{3+}$ \/} ions per unit cell, $n=6$ belong to each polarization direction. Assuming a moment of $m=2.8\unit{\mu_{\rm B}}$ per {Cr$^{3+}$ \/} ion~\cite{Madelung2000}, we calculate a surface magnetization of \begin{align} \sigma_z^0 = \frac{6 \times 2.8\unit{\mu_{\rm B}} \times 0.188\unit{nm}}{0.870\unit{nm^3}} = 3.64\unit{\mu_\mr{B}/\mr{nm}^2} \ . \end{align} This is slightly less than what one would expect from one monolayer of {Cr$^{3+}$ \/} ions, which has a magnetization of $m/A=4.38\unit{\mu_\mr{B}/\mr{nm}^2}$. \subsection{Transformations between surface magnetization and magnetic field} Using the relations between magnetization and magnetic stray field for two-dimensional thin films~\cite{Beardsley1989}, we can reconstruct the surface magnetization $\sigma_z(x,y)$ and vector magnetic field $\vec{B}(x,y)$ from the measured stray field component $B_\mr{NV}(x,y)$. We perform the transformations in Fourier space. The magnetic vector field $\vec{B}$ associated with the magnetization $\vec\sigma$ is given by \begin{align} \rowvec{\hat{B}_x,\hat{B}_y,\hat{B}_z} = \frac12 \mu_0 e^{-kz} \rowvec{-k_x\hat{\sigma}_k-ik_x\hat{\sigma}_z, -k_y\hat{\sigma}_k-ik_y\hat{\sigma}_z, -ik\hat{\sigma}_k+k\hat{\sigma}_z} \label{eq:BfromM} \end{align} where $k_x$, $k_y$ are the in-plane $k$-vectors, $k=(k_x^2+k_y^2)^{1/2}$, $\hat{\sigma}_k = (k_x\hat{\sigma}_x+k_y\hat{\sigma}_y)/k$, and hat symbols denote Fourier transforms in $x$ and $y$; $z$ is the stand-off distance of the sensor and $\mu_0=4\pi\ee{-7}\unit{Tm/A}$. For a line scan in the $x$ direction, scanned across a domain wall extending in the $y$ direction, the magnetic field is \begin{align} \rowvec{\hat{B}_x,\hat{B}_y,\hat{B}_z} = \frac12 \mu_0 e^{-kz} \rowvec{ -k_x\hat{\sigma}_x-ik_x\hat{\sigma}_z, 0, -ik\hat{\sigma}_x + k\hat{\sigma}_z } \ , \label{eq:BfromMline} \end{align} where now $k = |k_x|$. Likewise, we can recover the magnetic vector field $\vec{B}$ from the measured projection $B_\mr{NV}$ as \begin{align} \rowvec{\hat{B}_x,\hat{B}_y,\hat{B}_z} = \frac{1}{k_\mr{NV}} \rowvec{ ik_x, ik_y, -k } \hat{B}_\mr{NV} \label{eq:vecBfromB} \end{align} where $k_\mr{NV} = (ie_xk_x+ie_yk_y-e_z k)$ and $(e_x,e_y,e_z)$ is the vector orientation of the sensor.
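As an illustration of Eq.~(\ref{eq:vecBfromB}), the following minimal numerical sketch recovers $B_x$ and $B_z$ from a measured $B_\mr{NV}$ line scan (array names are hypothetical and this is not our analysis code itself; the $k=0$ mean component is dropped since the field vanishes far from the sample):
\begin{verbatim}
import numpy as np

def bnv_to_vector_field(bnv, dx, e):
    """Recover (Bx, By, Bz) from a 1D line scan of B_NV = e . B.
    bnv: measured projection (T); dx: pixel size (m); e: NV unit vector."""
    n = len(bnv)
    kx = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    k = np.abs(kx)
    bnv_k = np.fft.fft(bnv)
    # Eq. (vecBfromB) with k_y = 0 for a line scan
    k_nv = 1j * e[0] * kx - e[2] * k
    with np.errstate(divide="ignore", invalid="ignore"):
        bx_k = np.where(k_nv != 0, 1j * kx / k_nv * bnv_k, 0.0)
        bz_k = np.where(k_nv != 0, -k / k_nv * bnv_k, 0.0)
    bx = np.fft.ifft(bx_k).real
    bz = np.fft.ifft(bz_k).real
    by = np.zeros_like(bx)  # B_y vanishes in this geometry (k_y = 0)
    return bx, by, bz
\end{verbatim}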
Finally, under the assumption that the magnetization is fully out-of-plane ($\sigma_x=\sigma_y=0$), we can reconstruct $\sigma_z$ from the stray field $B_\mr{NV}$, \begin{align} \hat{\sigma}_z = - \frac{2W \hat{B}_\mr{NV}}{\mu_0 e^{-kz}k_\mr{NV}} \label{eq:MfromB} \end{align} where $W=W(k)$ is a suitable window function (here a Hann function) that provides a high-frequency cutoff. Although our {Cr$_2$O$_3$ \/} crystals do have an in-plane component in the vicinity of the domain wall, the reconstructed $\sigma_z$ still accurately reproduces the domain pattern and surface magnetization $\sigma_z^0$. \subsection{Magnetic field from surface roughness} Surface roughness leads to small stray fields at topographic steps, as sketched in Fig.~1b. The magnetic field produced at a step of height $h$ corresponds to the differential field of two magnetized layers located at $z$ and $z+h$. According to Eq.~\ref{eq:BfromMline}, the $B_z$ field of the step is given by \begin{align} \hat{B}_z = \frac12 \mu_0 e^{-kz} k h (k\hat{\sigma}_z) \ . \end{align} For a simple order-of-magnitude estimate of the stray field, we look at the Fourier component of $\sigma_z$ that produces the strongest $B_z$. This occurs for $k = 2/z$. For this Fourier component, the amplitude of $B_z$ is \begin{align} B_z = \frac{\mu_0 h 2e^{-2}\sigma_z^0}{z^2} \approx \frac{0.2707 \mu_0 h\sigma_z^0}{z^2} \end{align} For our {Cr$_2$O$_3$ \/} crystals, where $\sigma_z^0\approx 2\unit{\mu_\mr{B}/\mr{nm}^2}$, and using $z=68\unit{nm}$, we find $B_z/h \approx 1.4\unit{\,\mu{\rm T}/nm}$. For an rms surface roughness of $3\unit{nm}$ we therefore expect stray field fluctuations of $\sim 5\unit{\,\mu{\rm T}}$, in good agreement with the experimental value of $7\unit{\,\mu{\rm T}}$ (rms) (Fig.~2d). \subsection{Fitting of line scans} We model the domain wall as presented in Eq. 1 in the main text. The stray field is then computed via Eq.\ \ref{eq:BfromMline}. The resulting model features seven parameters: the effective surface magnetization $\sigma_z^0$, the position of the domain wall $x_0$, its width parameter $\Delta$ and twist angle $\chi$, and the sensor geometry $(z, \theta_\mr{NV}, \phi_\mr{NV})$. Since $z$, $\theta_\mr{NV}$, $\phi_\mr{NV}$ have been determined separately at this point, they are kept fixed in the following least-squares optimization, leaving only $\sigma_z^0$, $x_0$, $\Delta$ and $\chi$ as free parameters. The initial value of $\sigma_z^0$ is determined by estimating the surface magnetization using the two complementary methods (step height, integration of $B_x$) described below. The initial values of the width and twist angle are set to $\Delta = 40 \unit{nm}$ and $\chi = 90^\circ$. We checked that other starting values did not significantly alter the fit results. The fitting procedure is repeated for each individual line scan. \subsection{Complementary methods for estimating $\sigma_z^0$} We use two complementary methods for estimating the {Cr$_2$O$_3$ \/} surface magnetization $\sigma_z^0$ from stray-field scans across domain walls: \textit{Step height in reconstructed $\sigma_z$ map:} We reconstruct the surface magnetization $\sigma_z(x)$ using Eqs.~(\ref{eq:BfromMline}) and (\ref{eq:MfromB}). The step height at the domain wall is $2\sigma_z^0$. \textit{Integration of $B_x$:} We assume a domain wall extending along the $y$ direction. We compute the $B_x(x)$ component of the stray field from $B_\mr{NV}(x)$, using the known orientation of the sensor $(\theta_\mr{NV},\phi_\mr{NV})$ and Eq.~(\ref{eq:vecBfromB}).
The integrated $B_x(x)$ is then equal to $\mu_0\sigma_z^0$, irrespective of the stand-off $z$ and of the domain-wall profile and chirality. To explain this, assume an out-of-plane magnetized film with magnetization $\vec\sigma(x')$ and a domain wall centered at $x=0$ and extending along the $y$-direction. The wall region can additionally carry a $\sigma_x$ or $\sigma_y$ component. The magnetic field $B_x$ produced by the magnetization element $\mathrm{d}x' \vec\sigma(x')$ is \begin{align} \mathrm{d}B_x(x) = \frac{\mu_0 j_y(x') t \mathrm{d} x' z}{2\pi[(x-x')^2+z^2]} \end{align} where $j_y(x') t = [\vec\nabla\times\vec\sigma]_y(x') = -[\partial_x \sigma_z](x')$ is the bound current element associated with $\vec\sigma(x')$ and $t$ is the film thickness ($t\ll z$). The total magnetic field at position $x$ is \begin{align} B_x(x) = \int_{-\infty}^{\infty} \mathrm{d}x' \frac{\mu_0 j_y(x') t z}{2\pi[(x-x')^2+z^2]} = -\int_{-\infty}^{\infty} \mathrm{d}x' \frac{\mu_0 z}{2\pi[(x-x')^2+z^2]} \, [\partial_x \sigma_z](x') \end{align} and the integrated $B_x(x)$ is \begin{align} \int_{-\infty}^{\infty} \mathrm{d}x\, B_x(x) &= - \left( \int_{-\infty}^{\infty} \mathrm{d}x'' \frac{\mu_0 z}{2\pi[(x'')^2+z^2]} \right) \, \left( \int_{-\infty}^{\infty} \mathrm{d}x' [\partial_x \sigma_z](x') \right) \\ &= -\frac{\mu_0}{2} \, [\sigma_z(+\infty)-\sigma_z(-\infty)] = \mu_0 \sigma_z^0 \end{align} where we have used Fubini's theorem; the last equality holds for a domain wall with $[\sigma_z(+\infty)-\sigma_z(-\infty)] = -2\sigma_z^0$. \subsection{Complementary method for estimating $\Delta$ and $\chi$} For a fixed pair ($\chi$,$\Delta$), we only fit $x_0$ to the data and record the residual sum of squares (RSS). The surface magnetization is determined for each line scan by the complementary methods introduced above. The RSS is a direct measure of the likelihood: assuming Gaussian errors, the log-likelihood is given by \begin{align} \ln\mathcal{L} = \frac{n}{2}\ln\left(\frac{1}{2\pi\sigma^2}\right) - \frac{1}{2\sigma^2}\mathrm{RSS} \end{align} Here, $\sigma$ is the standard deviation describing the error of a single data point, and $n$ is the number of data points. We can compare the relative likelihood of two models 1 and 2 ({\it i.e. \/} two pairs of $\Delta$ and $\chi$) by estimating $\sigma^2_i = \mathrm{RSS}_i/n$, $i\in\{1,2\}$, giving \begin{align} \ln\mathcal{L}_1 - \ln\mathcal{L}_2 = -\frac{n}{2}\ln\frac{\mathrm{RSS}_1}{\mathrm{RSS}_2} \label{eq:relativeLL} \end{align} We choose model 2 as the best model ({\it i.e. \/} the least-squares solution), so that Eq.\ \ref{eq:relativeLL} is normalized to 0. To consider the data from all scans, we sum the RSS of each line and scan, and set $n$ to be the total number of data points. \section{Model of antiferromagnetic domain wall} We focus on orientational $180^{\circ}$ domain walls, as they exist in {Cr$_2$O$_3$}, and compare their properties between ferromagnetic (FM) and collinear antiferromagnetic (AFM) systems. The domain wall properties, \textit{i.e.}, the domain wall profile, domain wall width $\Delta$ and twist angle $\chi$, are governed by the interplay between exchange, anisotropy and Zeeman energies, and, for FM systems, the demagnetizing field~\cite{Malozemoff2016}. As a first step, we use a 1D model of a domain wall and only include exchange and anisotropy energies. We show that the domain wall properties are the same for FM and AFM systems and that the chirality is undetermined.
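As a brief aside, the model comparison of Eq.~(\ref{eq:relativeLL}) reduces to a few lines of code; the sketch below uses hypothetical inputs and is meant only to illustrate the bookkeeping:
\begin{verbatim}
import numpy as np

def relative_log_likelihood(rss, n):
    """ln L_i - ln L_best = -(n/2) ln(RSS_i / RSS_best),
    assuming Gaussian errors with sigma_i^2 = RSS_i / n."""
    rss = np.asarray(rss, dtype=float)
    return -0.5 * n * np.log(rss / rss.min())

# toy usage: three candidate (chi, Delta) pairs, 500 data points
print(relative_log_likelihood([1.02, 1.00, 1.30], n=500))
\end{verbatim}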
The exchange energy arises from the Coulomb interaction and the Pauli principle and favors parallel (antiparallel) alignment of neighboring spins in the FM (AFM) case. The exchange energy is given by: \begin{align} e_\mr{ex} = -J m_i m_j \cos(\Delta\phi), \end{align} where $J$ is the exchange constant and $m_{i,j}$ are neighboring magnetic moments. $\Delta\phi$ describes the angle between the $i$'th and $j$'th moment~\cite{Mitsumata2011}. In the FM (AFM) case the exchange constant $J$ is positive (negative). Note that a deviation from the parallel (antiparallel) alignment increases the energy equally for the FM and AFM case. The magnetocrystalline anisotropy favors moments that lie along specific lattice directions, the so-called magnetic easy axes. In our model, we use a system with uniaxial anisotropy, which can be described as \begin{align} e_\mr{ani} = K\sin^2\left(\theta\right), \end{align} where $K$ is the uniaxial anisotropy constant and $\theta$ the angle between the easy axis and the moments~\cite{Mitsumata2011}. As with the exchange energy, the energy cost of a misalignment from the easy axis is the same in the FM and AFM cases. Based on the fact that exchange and anisotropy act in similar ways on FM and AFM systems, we conclude that both systems have similar domain wall properties in the absence of a demagnetizing energy. This is also observed in more elaborate models~\cite{Tveten2013,Tveten2016,Shiino2016,Swaving2011} and explicitly mentioned in Refs.~\onlinecite{Chen2019,Mitsumata2011,Papanicolaou1995}. In the following we derive an analytical expression for a 1D domain wall extending along the $x$-direction, with its profile varying along $y$, following Ref.~\onlinecite{Malozemoff2016}, pp. 79--82. The easy axis of the uniaxial anisotropy points along the $z$-direction. In order to solve the problem analytically, we begin by writing the energy density in polar coordinates $(\theta,\phi)$: \begin{align} e = e_\mr{ex}+e_\mr{ani} = A \left[\left(\partial\theta/\partial y\right)^2 + \left(\sin\theta\ \partial\phi/\partial y\right)^2\right] + K\sin^2\theta. \end{align} The exchange term is written for the one-dimensional case in terms of the exchange stiffness $A$, which is related to the exchange constant $J$ (the same formula, without the azimuthal dependence, is also provided in Ref.~\onlinecite{Mitsumata2011}, Eq.~(9), specifically for the AFM case). The static equilibrium is reached when all torques acting on the moments are zero. The solution, satisfying the boundary conditions $\theta\left(\pm\infty\right)=\left(0,\pi\right)$, is given by: \begin{align} \phi\left(y\right) &= \chi = \mathrm{const.} \\ \theta\left(y\right) &= \pm 2 \arctan\left[\exp\left(y/\Delta_0\right)\right], \end{align} with the domain wall width $\Delta_0 = \sqrt{A/K}$. The total energy per unit area of the wall is given by: \begin{align} \sigma_0 =4\sqrt{AK}. \end{align} At this point it is worth mentioning that the angle $\phi(y)=\chi$ is arbitrary; in other words, N\'eel and Bloch domain walls, or any combination of the two, are equal in energy for both the FM and AFM case. As a next step, we consider the demagnetizing field arising due to magnetic charges at the domain wall when $\phi=\chi\neq0$. We further add an in-plane anisotropy $K_p$ in the $xy$-plane with the easy axis pointing at an angle $\psi_p$ with respect to the wall plane.
We find that the domain wall width and energy change from $\Delta_0$ and $\sigma_0$ to: \begin{align} \Delta &= \Delta_0\left[1-\frac{\mu_0M^2}{4K}\sin^2\chi-\left(K_p/2K\right)\sin^2\left(\chi-\psi_p\right)\right]\label{eq:dwMagnetostatics} , \\ \sigma &= \sigma_0 + \mu_0M^2 \Delta_0\sin^2\chi + 2K_p\Delta_0\sin^2\left(\chi-\psi_p\right). \end{align} Let us first consider the demagnetizing energy term $\mu_0M^2 \Delta_0\sin^2\chi$. We notice that in the FM case, the domain wall energy of a Bloch wall, $\chi \in \{0,\pi\}$, is lower than that of a N\'eel wall, $\chi \in \{\pm\pi/2\}$. Therefore, in the FM case and taking into account the demagnetizing field, Bloch walls are favored over N\'eel walls. For a generic AFM, if the volume magnetization $M$ is zero, no energy contribution is expected. However, a residual demagnetizing field still persists due to the small but finite magnetic moment of the wall~\cite{Papanicolaou1995,Tveten2016}. This field promotes the formation of a Bloch wall. Adding an in-plane anisotropy $K_p$ forces the moments to cant along the in-plane easy axis when $K_p>\mu_0 M^2$. For sufficiently large in-plane anisotropy we observe a N\'eel-type domain wall if the in-plane easy axis is at $\psi_p \approx \pm \pi/2$, {\it i.e. \/}, when the domain wall runs perpendicular to the in-plane easy axis. For $\psi_p \approx 0$ and $\psi_p \approx \pi$, a Bloch-type domain wall is expected. We note that the domain wall width of a N\'eel wall is reduced with respect to that of a Bloch domain wall according to Eq.~(\ref{eq:dwMagnetostatics}). To estimate the reduction in wall width for {Cr$_2$O$_3$ \/}, we obtain an effective volume magnetization $M$ by dividing the magnetization of the polarized surface layer, $\sigma_z^0=1.6 \mu_\mr{B}/\mr{nm}^2$ (Sample C), by the layer thickness $t=0.188$~nm. We use the anisotropy constant $K=13$~$\text{kJ}/\text{m}^3$ given in Ref.~\onlinecite{Kota2016}. The ratio $r$ of the width of a N\'eel to a Bloch domain wall is then \begin{align} r = \frac{\Delta_{\text{N\'eel}}}{\Delta_{\text{Bloch}}} = \frac{\Delta_0\left[1-\frac{\mu_0M^2}{4K}\right]}{\Delta_0} = 0.85, \label{eq:reductiondw} \end{align} which is in reasonable agreement with the experimental ratio of $r=0.65\pm 0.10$ (Fig.~4, caption) given the simplicity of our assumptions. \clearpage \section{Supplementary Figures} \begin{figure}[h!] \includegraphics[width=0.8\textwidth]{figS1.png} \caption{SHG microscopy images of sample A (left panels) and sample B (right panels). Upper panels used left-handed circular polarization, lower panels used right-handed circular polarization. \label{fig:figS1} } \end{figure} \clearpage \begin{figure}[h!] \includegraphics[width=0.8\textwidth]{figS2.png} \caption{SHG microscopy images of sample C. Upper panel used left-handed circular polarization, lower panel used right-handed circular polarization. \label{fig:figS2} } \end{figure} \clearpage \begin{figure}[h!] \includegraphics[width=0.8\textwidth]{figS3.pdf} \caption{Maximum likelihood estimates for domain wall width $\Delta$ and twist angle $\chi$, as explained in the Methods section. Gray dots are fit results from individual line scans. Colored contours are maximum likelihood isolines containing 75\%, 50\%, and 25\% of datapoints. The most likely ($\chi,\Delta$) pair is indicated by a central cross. For sample C, datasets with $\alpha>9^\circ$ (blue) and $\alpha<9^\circ$ (red) are analyzed separately. \label{fig:figS3} } \end{figure} \clearpage \begin{figure}[h!]
\includegraphics[width=0.8\textwidth]{figS4.pdf} \caption{Maximum likelihood estimates for domain wall width $\Delta$ and twist angle $\chi$ for upper and lower bound stand-off distances $z\pm 10\unit{nm}$, where $z$ is the calibrated stand-off distance. {\bf a-c} Maximum likelihood estimates for the lower bound $z-10\unit{nm}$. {\bf d-f} Maximum likelihood estimates for the upper bound $z+10\unit{nm}$. Data points, contours and the central cross are as in Fig.~S3. We note that our observation -- the presence of Bloch-like walls in samples A and B, and mixed Bloch and N\'eel walls in sample C -- is valid within the uncertainty of the sample-sensor distance. The $z\pm 10\unit{nm}$ bounds are a conservative estimate, as all probes showed a calibration error of $\leq 8\unit{nm}$ (see Methods). \label{fig:figS4} } \end{figure} \clearpage \input{"references_suppl.bbl"} \end{document}
\section{Analysis}\label{sec:simulation} To study the implications of a PGPP deployment, we create a simulation to model users, mobility, and cell infrastructure. We study the impact of PGPP's design on various cellular attacks that occur today. We then analyze the inherent tradeoffs from the PGPP operator's perspective, as improved privacy comes at the price of increased control traffic. Lastly, we examine PGPP in a lab testbed on real devices. \subsection{Simulation configuration} \textit{\acrshort{enb} dataset.} We select Los Angeles County, California as the region for our simulation, which provides a mix of highly urban and rural areas. For \acrshort{enb} location information, we use OpenCellID~\cite{opencellid}, an open database that includes tower locations and carrier information. To simplify the simulation, we select \acrshort{enb}s from the database that are listed as providing LTE from AT\&T, the provider with the most \acrshort{enb}s (22,437) in the region. Given their geographic coordinates, we estimate coverage areas for every \acrshort{enb} using a Voronoi diagram. During the simulation, a \acrshort{ue} is assigned to the \acrshort{enb} that corresponds to the region the \acrshort{ue} is located within. While such discretization does not match reality, where \acrshort{ue}s remain associated with an \acrshort{enb} based on received signal strength, this technique provides us with a tractable mobility simulation. A partial map of the simulation region is shown in Figure~\ref{fig:map}. ENodeB regions are shaded based on the tracking area value in the OpenCellID database. \begin{figure}[t] \centering \includegraphics[width=.85\columnwidth]{images/LAMap5.pdf} \caption{Partial simulation map. Cells are shaded by AT\&T LTE tracking area.} \label{fig:map} \end{figure} \begin{figure}[t] \centering \includegraphics[width=.75\columnwidth]{images/userseNBCDF_box.pdf} \caption{ENodeBs visited by simulated mobile users.} \label{fig:usereNodeBs} \end{figure} \textit{Mobility traces.} To simulate realistic mobility patterns ({\it i.e.}, users must follow available paths), we generate mobility traces using the Google Places~\cite{placesapi} and Directions~\cite{directionsapi} APIs. First, we use the Places API to find locations in the simulation region that are available when searching for ``post office.'' Each place is associated with latitude and longitude coordinates. We then generate mobility traces by randomly selecting start and end points, and use the Directions API to obtain a polyline with coordinates along with estimated times to reach points along the line. We generate 50,000 mobility traces: 25,000 cars and 25,000 pedestrians. We then use ns-3 to process the mobility traces and generate coordinates for each trace at 5-second intervals, in a method similar to~\cite{routesmobility}. We use this output, along with the \acrshort{enb} Voronoi diagram, to assign each simulated \acrshort{ue} to an \acrshort{enb} for every 5-second interval in the mobility trace; a sketch of this assignment follows below. Figure~\ref{fig:usereNodeBs} shows the distribution of the number of \acrshort{enb}s visited by \acrshort{ue}s in the simulation. As expected, car trips result in a significantly higher number of \acrshort{enb}s for a \acrshort{ue} compared with pedestrian trips.
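Since a point lies within an \acrshort{enb}'s Voronoi region exactly when that \acrshort{enb} is its nearest site, the per-interval assignment reduces to a nearest-neighbor lookup. The sketch below illustrates this step; the array names are hypothetical and projected (planar) coordinates are assumed:
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def assign_ues_to_enbs(enb_xy, ue_xy):
    """Voronoi-region assignment via nearest-site lookup.
    enb_xy: (num_enbs, 2) eNB coordinates;
    ue_xy: (num_samples, 2) UE positions at 5-second intervals."""
    tree = cKDTree(enb_xy)
    _, enb_idx = tree.query(ue_xy)  # nearest eNB == containing Voronoi cell
    return enb_idx
\end{verbatim}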
\textit{Synthetic traffic.}\label{sec:traffic} We simulate one hour of network activity. To create control traffic, at every 5-second interval we randomly select 5\% of the user population to receive a ``call.'' A call results in a paging message that is sent to all \acrshort{enb}s in the \acrshort{ue}'s tracking area. Each paged user enters a 3-minute ``call'' if it is not already in one, at which point further paging messages are suppressed for that user until the call is complete. We run the simulation with PGPP enabled as well as with the conventional infrastructure setup. \textit{Custom \acrshort{ta}s.}\label{sec:customtas} As we detail further in~\cref{sec:control}, large \acrshort{tal}s increase the control traffic load, which lowers the network's user capacity. Therefore, we generate new tracking areas in the underlying network in order to mitigate the control traffic burden. As tracking areas normally consist of groups of adjacent \acrshort{enb}s, we need a method by which we can cluster nearby \acrshort{enb}s into logical groupings. To do so, we apply k-means clustering to the \acrshort{enb} geographic coordinates, using the Euclidean distance between \acrshort{enb}s. We generate several underlying tracking area maps, with the number of \acrshort{ta}s ({\it i.e.}, k-means centers) ranging from 25 to 1,000. For comparison, the AT\&T LTE network in the simulation is composed of 113 \acrshort{ta}s. \subsection{Cellular privacy attack analysis}\label{sec:attackanalysis} Given the taxonomy we presented in~\cref{sec:attacks}, we analyze the identity and location privacy benefits of PGPP in the simulated environment. \noindent\textbf{Global-bulk attacks.}\label{sec:globalbulk} By nullifying the value of \acrshort{imsi}s, separating authentication from connectivity, and increasing the broadcast domain for users, we increase user identity privacy even with an adversary that is capable of bulk surveillance over an entire network ({\it e.g.}, operators, governments). \textit{Anonymity analysis.} We measure the anonymity of a user under bulk attacks using the \textit{degree of anonymity}~\cite{10.5555/1765299.1765304}. The degree of anonymity ranges from zero to one, with ideal anonymity being one, meaning the user could be any member of the population with equal probability. In this case, we consider the \acrshort{imsi} value to be the target identity. For a population of $ N $ users, the maximum entropy is: \begin{equation} H_{M}= log_{2}(N) \end{equation} The degree of anonymity is determined by the size $ S $ of the subset of user identities that an attacker could plausibly believe the victim to be among: \begin{equation} d = \frac{H(X)}{H_{M}} = \frac{log_{2}(S)}{log_{2}(N)} \end{equation} \begin{figure}[t] \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/degree_anon.pdf} \caption{TALs.} \label{fig:talanon} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/degree_anon_custom.pdf} \caption{Custom TAs.} \label{fig:customanon} \end{subfigure} \caption{Degree of anonymity using TALs and custom TAs.} \vspace{-3mm} \end{figure} Given global visibility into the network, we can reason about the anonymity set using the number of \acrshort{enb}s that a victim could possibly be connected to. This is because a cellular carrier can know the exact base station that a user is connected to once the \acrshort{ue} enters an active state.
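The metric itself is a one-liner; the sketch below illustrates it with the values used in this analysis:
\begin{verbatim}
import numpy as np

def degree_of_anonymity(s, n):
    """d = log2(S) / log2(N): S plausible identities, N population."""
    return np.log2(s) / np.log2(n)

# a globally unique IMSI: the victim can only be one identity
print(degree_of_anonymity(1, 22_437))    # 0.0
# ~223 candidates among 50,000 PGPP users (local-bulk case below)
print(degree_of_anonymity(223, 50_000))  # ~0.50
\end{verbatim}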
As a baseline, the degree of anonymity for traditional cellular is $ \frac{log_{2}(1)}{log_{2}(22,437)} = 0 $, as each \acrshort{imsi} is a unique value. With PGPP, \acrshort{imsi}s are identical, so from the perspective of the carrier, the victim could be connected to any \acrshort{enb} that has at least one PGPP client connected to it. Using our simulated environment we collect, for each paging message, the number of \acrshort{enb}s that had users within their range and use the median value to calculate the degree of anonymity. Figures~\ref{fig:talanon} and~\ref{fig:customanon} show the degree of anonymity using different configurations of \acrshort{tal}s and custom \acrshort{ta}s, respectively. We see that high degrees of anonymity are attainable despite an attacker's global visibility. For instance, with \acrshort{tal}s of length 8, the degree of anonymity is 0.748. \noindent\textbf{Local-bulk attacks.}\label{sec:localbulk} PGPP's use of identical \acrshort{imsi}s reduces the importance of \acrshort{imsi}s, and by extension the usefulness of local bulk attacks on user identity. An attacker that can view traffic at the \acrshort{enb}(s) can gain insight into nearby \acrshort{imsi}s. In traditional cell networks, each user has a globally unique \acrshort{imsi} ($S = 1$), resulting in a degree of anonymity of zero as the victim could only be one user. In our measurement study (\cref{sec:zaatarimeasurement}), we showed that \acrshort{imsi}s are routinely broadcast over cell networks, making an \acrshort{imsi} catcher or SDR attack powerful. The subset $ S $ in PGPP, on the other hand, is the size of the population of PGPP users in a given location, as all \acrshort{imsi} values are identical and a local bulk attacker cannot know the true identity of a single user. To estimate $ S $, we calculate the number of PGPP users connected to each \acrshort{enb} in the simulation. Over the course of the simulation, we find a mean of 223.09 users connected to each \acrshort{enb} that serves at least one user, which results in a degree of anonymity $ \frac{log_{2}(223.09)}{log_{2}(50,000)} = 0.50 $. While this value is somewhat low compared to the ideal value of $ 1 $, it is a drastic improvement over the conventional cellular architecture, and it depends on the overall user population in the network: as more PGPP users exist, the degree of anonymity increases. \noindent\textbf{Local-targeted attacks.}\label{sec:localtargeted} In PGPP, local-targeted attacks to discover a user's location are diminished in two ways: first, \acrshort{imsi}s are no longer a useful ID, so identifying an individual among all users is challenging; and second, we use \acrshort{tal}s to increase the paging broadcast domain for a given \acrshort{ue}. From an attacker's point of view, this broadens the scope of where the target \acrshort{ue} may be located. \begin{figure}[t] \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/areasCDF_new.pdf} \caption{TALs.} \label{fig:areas} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/areasCustomTAsCDF_new.pdf} \caption{Custom TAs.} \label{fig:areacustom} \end{subfigure} \caption{Area anonymity using TALs and custom TAs.} \vspace{-3mm} \end{figure} In Figure~\ref{fig:areas}, we plot the CDF of geographic areas in which pages are broadcast as we increase \acrshort{tal} lengths using the base map consisting of 113 tracking areas.
We calculate the area by generating a bounding box around all \acrshort{enb}s that are included in the broadcast domain. As shown, large \acrshort{tal}s result in drastically higher area anonymity than with \acrshort{tal}s disabled, particularly considering the number of \acrshort{ue}s that could potentially be located in the larger geographic areas. For instance, the median area for the conventional simulation is 378.09 km\textsuperscript{2}, whereas \acrshort{tal} lengths of 8 and 16 result in median areas of 5,876.96 and 9,585.17 km\textsuperscript{2}, respectively. We analyze anonymity with \acrshort{tal}s of length 16 while the underlying map is varied using custom \acrshort{ta}s. Figure~\ref{fig:areacustom} shows our results. We observe that as the number of tracking areas increases, resulting in smaller tracking areas, the area anonymity decreases. However, despite the decrease, the area anonymity remains considerably larger than with \acrshort{tal}s disabled, as \acrshort{tal}s include additional tracking areas. For instance, the median area for the conventional case is 378.09 km\textsuperscript{2}, whereas the median area for a base map of 500 tracking areas with \acrshort{tal} 16 is 4891.08 km\textsuperscript{2}, a nearly 13-fold increase from the perspective of a local targeted attacker. \subsection{Impact of PGPP on network capacity}\label{sec:control} From an operational perspective, the privacy benefits delivered by PGPP must coincide with feasibility in terms of control overhead in order for it to be deployable. Control traffic determines network capacity in terms of the number of users that are serviceable in a given area. In this section, we explore the control traffic load when using \acrshort{tal}s. \subsubsection{Control overhead with PGPP TALs} We first seek to quantify control message overhead while we leverage tracking area lists to provide location anonymity against local-targeted attacks. Recall from~\cref{sec:tals} that we randomly select additional tracking areas from the simulated coverage area to create \acrshort{tal}s, which increases the broadcast domain for a page. Increased control traffic impacts both \acrshort{enb}s and \acrshort{mme}s; however, in our experience with real cellular networks, the control traffic capacity at \acrshort{enb}s is the bottleneck, as \acrshort{mme}s have much higher capacity. Thus, we focus on \acrshort{enb} control load. Figure~\ref{fig:control} shows a cumulative distribution function (CDF) for the number of pages broadcast by the simulated \acrshort{enb}s. In the figure, ``Conventional'' corresponds to disabling \acrshort{tal} functionality. As expected, larger \acrshort{tal} lengths result in increased control traffic for \acrshort{enb}s as they are more likely to be included in the paging broadcast domain for a given \acrshort{ue}.
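As an aside, the bounding-box area metric above can be sketched in a few lines; we assume an equirectangular approximation for the latitude/longitude-to-kilometer conversion, and the coordinate arrays are hypothetical:
\begin{verbatim}
import numpy as np

def broadcast_area_km2(enb_lat, enb_lon):
    """Bounding-box area (km^2) around the eNBs in a broadcast domain."""
    lat0 = np.radians(np.mean(enb_lat))
    km_per_deg_lat = 111.32
    km_per_deg_lon = 111.32 * np.cos(lat0)
    dy = (np.max(enb_lat) - np.min(enb_lat)) * km_per_deg_lat
    dx = (np.max(enb_lon) - np.min(enb_lon)) * km_per_deg_lon
    return dx * dy
\end{verbatim}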
\begin{figure} \centering \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/controlCDF_new.pdf} \caption{Control traffic with TALs.} \label{fig:control} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/userCapacityCDF_new.pdf} \caption{Capacity with TALs.} \label{fig:users} \end{subfigure} \caption{Control traffic and system capacities leveraging PGPP TALs in the simulated environment.} \end{figure} To gain insight into the control limitations of real \acrshort{enb}s, we consider the capabilities of a Huawei BTS3202E \acrshort{enb}~\cite{huawei-enodeb}, which is limited to 750 pages per second. When capacity planning, it is commonplace to budget paging traffic headroom; accordingly, we estimate the maximum paging capacity for an \acrshort{enb} to be 525 pages per second (70\% of the BTS3202E capacity). This value is depicted by the vertical red line in the figure (525 pages $\times$ 3600 seconds = 1,890,000 pages/hour). The simulation allows us to illustrate the user population that could be supported by the network, given a population with mobility and traffic profiles similar to those defined in~\cref{sec:traffic}. Recall that we simulate 50,000 users, both pedestrians and cars. We consider the paging load for the network and select the \acrshort{enb}s with the maximum paging load, the 95th percentile, and the median to estimate the number of users each could theoretically support, taking into account the maximum page limitation of the BTS3202E. Figure~\ref{fig:users} shows the user capacity as \acrshort{tal} lengths are increased. A \acrshort{tal} length of one shows the conventional network, as the \acrshort{tal} is composed of a single tracking area. As expected, larger \acrshort{tal}s result in a reduction in the number of users the \acrshort{enb}s can handle compared with performance when \acrshort{tal}s are disabled, due to increased paging load. \begin{figure} \centering \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/controlCustomTAsCDF_new.pdf} \caption{Custom TAs: Control traffic.} \label{fig:controlcustom} \end{subfigure} \hfill \begin{subfigure}[b]{0.485\columnwidth} \centering \includegraphics[width=\columnwidth]{images/userCapacityCustomTAsCDF_new.pdf} \caption{Custom TAs: Capacity.} \label{fig:userscustom} \end{subfigure} \caption{Control traffic and system capacities with custom tracking areas in the simulated environment.} \end{figure} \subsubsection{Control overhead with custom tracking areas} As we've demonstrated, large \acrshort{tal}s result in \acrshort{enb}s with higher control traffic load, effectively reducing the user capacity of the network. To explore whether we can regain control traffic headroom, we again consider new custom tracking area maps that are generated using k-means, where we vary the number of unique tracking areas in the simulated network. We run the simulation with various custom tracking area maps, with all \acrshort{ue}s using \acrshort{tal} lengths of 16. The results are shown in Figures~\ref{fig:controlcustom} and~\ref{fig:userscustom}. We observe that a base map consisting of 25 tracking areas leads to even higher control traffic compared with the conventional ({\it i.e.}, AT\&T) tracking area map. A map consisting of more tracking areas results in \acrshort{ta}s with fewer \acrshort{enb}s, thus reducing the paging load.
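For reference, the paging-budget arithmetic behind these capacity estimates can be sketched as follows; the linear scaling of paging load with population size is our simplifying assumption:
\begin{verbatim}
MAX_PAGES_PER_SEC = 750                    # Huawei BTS3202E limit
BUDGET_PER_SEC = 0.7 * MAX_PAGES_PER_SEC   # 525 pages/s with headroom
BUDGET_PER_HOUR = BUDGET_PER_SEC * 3600    # 1,890,000 pages/hour

def supported_users(pages_per_hour_at_enb, simulated_users=50_000):
    """Scale the simulated population to fit the eNB paging budget,
    assuming paging load grows linearly with the user population."""
    return simulated_users * BUDGET_PER_HOUR / pages_per_hour_at_enb
\end{verbatim}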
We see that a map of 500 \acrshort{ta}s, even with a \acrshort{tal} of length 16, results in a paging load similar to that of the conventional map with \acrshort{tal}s disabled. Correspondingly, the user capacity of the network with a higher number of tracking areas nears the conventional capacity from Figure~\ref{fig:users}. \subsection{Testbed analysis} We study our PGPP design on a lab testbed in order to understand potential drawbacks. We implement a software-based \acrshort{EPC} and connect commodity phones to the software-defined radio-based \acrshort{enb}. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/pgppgear2.jpg} \caption{PGPP prototype test hardware.} \label{f:pgppgear} \vspace{-2mm} \end{figure} \noindent\textbf{Prototype.}\label{sec:prototype} We build our prototype on srsLTE~\cite{srsLTE}, an open-source platform that implements LTE-compliant \acrshort{enb} and \acrshort{EPC} functionality and can be run using software-defined radios. Our \acrshort{EPC} / \acrshort{enb} testbed, shown in Figure~\ref{f:pgppgear}, consists of an Intel Core i7 machine running Linux and a USRP B210 radio. We use off-the-shelf commodity phones (Moto X4, Samsung Galaxy S6, and two OnePlus 5s) with programmable \acrshort{sim} cards installed to allow the phones to connect to the PGPP LTE network. The srsLTE \acrshort{mme} maintains EPS mobility management (\acrshort{emm}) and EPS connection management (\acrshort{ecm}) contexts for connected \acrshort{ue}s. The contexts are stored as structs that include the \acrshort{ue} \acrshort{imsi} in a simple key-value store, with the \acrshort{imsi} serving as the key. When the \acrshort{mme} receives S1 application protocol (\acrshort{s1ap}) messages ({\it e.g.}, due to mobility), it looks up the appropriate \acrshort{ecm} or \acrshort{emm} contexts to handle the requests. We add an additional value, a PGPPIMSI, into the \acrshort{ecm} and \acrshort{emm} structs. The PGPPIMSI is generated by combining the \acrshort{imsi} with a temporary value that is unique to the individual \acrshort{ue}-\acrshort{enb}-\acrshort{mme} connection. Accordingly, each \acrshort{ue} has a unique PGPPIMSI, which then allows us to look up the correct context when managing state. \noindent\textbf{Identical IMSIs and Shared Keys.} Given identical \acrshort{imsi} values for all users, the PGPP attach procedure can result in additional steps compared with the traditional attach. This is caused by sequence number synchronization checks during the authentication and key agreement (\acrshort{aka}) procedure, which is designed to allow the \acrshort{ue} and the network to authenticate each other. The fundamental issue is that the \acrshort{hss} and the \acrshort{sim} maintain a sequence number (\acrshort{sqn}) value that both entities increment with each successful attach. As multiple devices use the same \acrshort{imsi}, the sequence numbers held at the \acrshort{hss} and on individual devices will no longer match, causing an authentication failure (known as a sync\_failure). At that point the \acrshort{ue} re-synchronizes with the \acrshort{hss}. We include an overview figure and details of the procedure in Appendix~\ref{sec:sqn}. We explore the delay introduced by sync\_failures using our testbed. Figure~\ref{f:sqnconnect} shows a PDF of the delays to connection completion for \acrshort{ue}s that hold identical \acrshort{imsi}s and attempt to authenticate simultaneously.
In order to trigger many simultaneous authentication requests, we use openairinterface5G~\cite{nikaein2014openairinterface} to create 100 simulated \acrshort{ue}s. We observe that the first successful \acrshort{ue} usually takes roughly 200 ms to connect, while subsequent \acrshort{ue}s that experienced sync\_failures experience additional delays. In our relatively small experiment the \acrshort{ue}s all successfully connect to the network within 1.1 seconds. In a large-scale production network the number of UEs that simultaneously attempt to connect would be larger. PGPP-based networks can mitigate the issue by using more \acrshort{hss}es, which would reduce the number of \acrshort{ue}s that each \acrshort{hss} is responsible for. Fortunately, the push for 5G will lend itself to many \acrshort{hss}es as the core network entities are being redesigned to be virtualized and located nearer to \acrshort{ue}s. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth, trim={0 10 0 13},clip]{images/connectdelayPDF.pdf} \caption{Connection delays due to sync\_failure.} \label{f:sqnconnect} \end{figure} \section{Background} Here we provide a brief overview of the cellular architecture and describe its inherent privacy challenges. For simplicity we focus on 4G LTE, though the fundamental challenges exist in 5G (discussed in~\cref{sec:5g}) as well as in legacy standards. \subsection{Cellular architecture overview}\label{sec:background} The 4G LTE architecture can be divided into two areas: the Evolved UMTS Terrestrial Radio Access Network (\acrshort{eutran}), which is responsible for radio access; and the Evolved Packet Core (\acrshort{EPC}), which includes the entities responsible for authentication and connectivity to the network core. Figure~\ref{fig:lte_arch} shows a simplified architecture for both conventional cellular and PGPP. PGPP moves authentication and billing to a new entity, the PGPP-GW, that is external to the \acrshort{EPC}. We detail PGPP's specific changes in~\cref{sec:pgpp}. We include a glossary of cellular terms in Appendix~\ref{sec:glossary}. \textit{\acrshort{eutran}.} The \acrshort{eutran} is the network that facilitates connectivity between user devices (\acrshort{ue}s)---commonly a cell phone with a \acrshort{sim} card installed---and the serving base station (\acrshort{enb}). The \acrshort{eutran} is responsible for providing \acrshort{ue}s a means of connecting to the \acrshort{EPC} via \acrshort{enb}s. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{images/CombinedPGPP.pdf} \caption{Simplified LTE architecture with and without PGPP. PGPP decouples authentication and connectivity credentials and shifts authentication to a new, external entity, the PGPP-GW. Details of the PGPP-GW are found in~\cref{sec:auth}.} \label{fig:lte_arch} \vspace{-3mm} \end{figure} \textit{\acrshort{EPC}.} The \acrshort{EPC} is the core of the cellular network and includes entities that provide authentication, billing, voice, SMS, and data connectivity. The \acrshort{EPC} entities relevant to our discussion are the Mobility Management Entity (\acrshort{mme}), the Home Subscriber Server (\acrshort{hss}), and the Serving and Packet Data Network Gateways (\acrshort{sgw} and \acrshort{pgw}, respectively). The \acrshort{mme} is the main point of contact for a \acrshort{ue} and is responsible for orchestrating mobility and connectivity.
\acrshort{ue}s authenticate to the network by sending an identifier that is stored in the \acrshort{sim} to the \acrshort{mme}. The \acrshort{hss} is then queried to verify that the \acrshort{ue} is a valid subscriber. Once the \acrshort{ue} is authenticated, the \acrshort{mme} assigns the \acrshort{ue} to an \acrshort{sgw} and \acrshort{pgw}, which offer an IP address and connectivity to the Internet. Note that LTE networks can include many copies of these entities, and contain many more entity types; however, for the purposes of our discussion this simplified model suffices. \textit{\acrshort{mvno}s.} We design our solution to be implemented by a Mobile Virtual Network Operator (\acrshort{mvno}). \acrshort{mvno}s are virtual in that they offer cellular service without owning the infrastructure itself. Rather, \acrshort{mvno}s pay to share capacity on the infrastructure that an underlying carrier operates. \acrshort{mvno}s can choose whether they wish to operate their own LTE core entities such as the \acrshort{mme}, \acrshort{hss}, and \acrshort{pgw}, which is the type of operation we propose. \acrshort{mvno}s that run their own core network are often called ``full'' \acrshort{mvno}s. Critically, our architecture is now feasible as the industry moves toward ``whitebox'' \acrshort{enb}s that connect to a central office---essentially a datacenter with virtualized \acrshort{EPC} services---as in the Open Networking Foundation's M-CORD project~\cite{m-cord} and in the upcoming 5G standard. Recent work has shown that dramatic performance gains are possible using such newer architectures~\cite{PEPC,klein}. \subsection{Identity in the cellular architecture}\label{sec:privbackground} Maintaining user privacy has long been challenging in cellular networks as it is not a primary goal of the architecture. In order to authenticate users for access and billing purposes, networks use globally unique identifiers. Likewise, the infrastructure itself must always know the location of a user in order to minimize latency when providing connectivity. In this section we briefly discuss cellular identifiers as well as the location information available from the perspective of the network. We use acronyms from the 4G LTE architecture; however, similar entities exist in all generations (2G, 3G, 5G). \textit{User identifiers.}\label{sec:ids} There are multiple identifiers that can be used to associate network usage with a given subscriber. The International Mobile Subscriber Identity (\acrshort{imsi}) is the identifier used to gain access to the network when a phone (\acrshort{ue}) performs initial attachment. The \acrshort{imsi} is globally unique, permanent, and is stored on the \acrshort{sim} card. Carriers maintain an \acrshort{hss} database containing the list of \acrshort{imsi}s that are provisioned for use on the network and subscription details for each. Given the \acrshort{imsi}'s importance and sensitivity, temporary identifiers are often used instead. The Globally Unique Temporary Identifier (\acrshort{guti}) can be thought of as a temporary replacement for an \acrshort{imsi}. Once a phone attaches to the network, the Mobility Management Entity (\acrshort{mme}) generates a \acrshort{guti} value that is sent to the \acrshort{ue}, which stores the value. The \acrshort{ue} uses the \acrshort{guti} rather than the \acrshort{imsi} when it attaches to the network in the future. The \acrshort{guti} can be changed by the \acrshort{mme} periodically.
Prior work recently found that \acrshort{guti}s are often predictable with consistent patterns, thus offering little privacy~\cite{hong2018guti}, but this can be remedied with a lightweight fix that we expect will be used going forward. \textit{User location information.}\label{sec:location} Cellular networks maintain knowledge of the physical location of each \acrshort{ue}. Location information is necessary to support mobility and to quickly find the \acrshort{ue} when there is an incoming call, SMS, or data for a user. The mechanism used to locate a \acrshort{ue} is known as ``paging'' and it relies on logical groupings of similarly located \acrshort{enb}s known as ``tracking areas'' (\acrshort{ta}s). Each \acrshort{enb} is assigned to a single \acrshort{ta}. \acrshort{ta}s can be thought of as broadcast domains for paging traffic. If there is incoming data for an idle \acrshort{ue}, the paging procedure is used, where the network broadcasts a paging message to all \acrshort{enb}s in the user's last-known \acrshort{ta}. Prior work has shown that the paging mechanism can be leveraged by attackers that know an identifier of the victim ({\it e.g.}, phone number, WhatsApp ID) to generate paging messages intended for the victim, which enables an unprivileged attacker to identify a specific user's location~\cite{kune2012location}. Cellular operators also often store location metadata for subscribers, giving them the ability to trace user movement and location history. \section{Related Work} Prior work on anonymous communications often traded off latency and anonymity~\cite{vandenHooff:2015:VSP:2815400.2815417,199303,corrigan2010dissent,Corrigan-Gibbs:2015:RAM:2867539.2867658}. Likewise, Tor~\cite{dingledine2004tor} and Mixnets~\cite{chaum1981untraceable} also result in increased latency while improving anonymity. However, such solutions are inappropriate for cellular systems as, apart from SMS, cellular use cases require low latency. Additionally, the architecture continues to utilize identifiers ({\it e.g.}, \acrshort{imsi}) that can expose the user to \acrshort{imsi} catcher attacks or allow for location tracking by the operator. There has been extensive prior work on finding security and privacy issues in cellular networks~\cite{4215735,8835335,practicalattackslte,lteinspector,kune2012location}. We decouple the \acrshort{imsi} from the subscriber by setting it to a single value for all users of the network. Altering the \acrshort{imsi} to specifically thwart \acrshort{imsi} catchers and similar passive attacks has been previously proposed~\cite{vandenBroek:2015:DIC:2810103.2813615, Khan:2017:TIC:3098243.3098248,sung2014location,Arapinisnew}. These techniques use pseudo-\acrshort{imsi}s (PMSIs), which are kept synchronized between the \acrshort{sim} and the \acrshort{hss}, or hypothetical virtual \acrshort{sim}s, allowing for user identification. We aim to go beyond thwarting \acrshort{imsi} catchers, and do so while considering active attacks, without requiring fundamental changes on the \acrshort{ue}; we protect users from the operator itself. Hussain {\it et al.}{} introduce the TORPEDO attack~\cite{hussain2019privacy}, which allows attackers to identify the page frame index and, using that, the presence or absence of a victim in a paging broadcast area ({\it i.e.}, a tracking area).
However, our use of tracking area lists to provide additional paging anonymity (\cref{sec:locationprivacy}) increases the area in which a victim could potentially be located, reducing the effectiveness of third-party paging-related localization attacks. The authors also define the PIERCER attack, which enables the attacker to reveal a victim's \acrshort{imsi} with only their phone number. PGPP nullifies this attack by making all \acrshort{imsi}s identical. Cellular signaling protocols have been demonstrated by multiple works to leave users' privacy vulnerable to attack~\cite{lorenz2001securing,sengar2006ss7,engel2008locating,holtmans2016detach,sonar}. Our initial design avoids signaling protocol vulnerabilities by providing data-only service rather than voice/SMS, and roaming to other networks can be enabled by requiring home-routing rather than local breakout. Hussain {\it et al.}{} identify a 5G vulnerability that allows an attacker to neutralize \acrshort{guti} refreshment in~\cite{Hussain:2019:PSP:3319535.3354263}. However, this requires a MiTM attack ({\it e.g.}, an \acrshort{imsi} catcher), which necessarily means the attacker knows the victim's location. Additionally, the \acrshort{guti} is a temporary identifier, and is not associated with a specific user. Choudhury and K\o ien alter \acrshort{imsi} values; however, both require substantial changes to network entities~\cite{Choudhury:2012:EUI:2360018.2360115,6673421}. We argue that a privacy-preserving architecture must be fully compatible with existing infrastructure, as the global telecom infrastructure is truly a network of networks, comprised of multiple operators that connect via well-known APIs. \section{Concluding Remarks} User privacy is a hotly contested topic today, especially as law enforcement organizations, particularly in authoritarian states, insist upon increasingly ubiquitous surveillance. In addition, law enforcement has long demanded backdoor access to private user devices and user data~\cite{savage2018lawful}. We do not believe that users of PGPP, in its current form, would be capable of withstanding targeted legal or extra-legal attacks by nation-state organizations ({\it e.g.}, the FBI or NSA), though PGPP would likely limit the ability of such organizations to continue to operate a regime of mass surveillance of user mobility. In addition, a more common and problematic form of privacy loss today is due to the surreptitious sale of user data by network providers; this is a matter PGPP addresses in a manner that aligns with user autonomy. Our aim is to improve privacy in line with prior societal norms and user expectations, and to present an approach in which privacy-enhanced service can be seamlessly deployed. \section{Introduction} Cellular phone and data networks are an essential part of global communications infrastructure. In the United States, there are 129 cellular subscriptions for every 100 people and the total number of cellular subscriptions worldwide now stands at over 7.9 billion~\cite{worldbank}. Unfortunately, today's cellular architecture embeds privacy assumptions of a bygone era. In decades past, providers were highly regulated and centralized, few users had mobile devices, and data broker ecosystems were undeveloped. As a result, except for law enforcement access to phone records, user privacy was generally preserved.
Protocols for cell communication embed an assumption of trusted hardware and infrastructure~\cite{3gpp.23.401}, and specifications for cellular backend infrastructure contain few formal prescriptions for preserving user data privacy. The result is that the locations of all users are constantly tracked as they simply carry a phone in their pocket, \textit{without even using it}.\\[0.5ex] \noindent \textbf{Privacy violations by carriers.} In the last two years it has been extensively reported that mobile carriers have been selling and leaking mobile location data and call metadata of hundreds of millions of users~\cite{zdnet,nytimes-cell,adage,vice,vice2}. This behavior appears to have been legal and has left mobile users without a means of recourse due to the confluence of a deregulated industry, high mobile use, and the proliferation of data brokers in the landscape. As a result, in many countries every mobile user can be physically located by anyone with a few dollars to spend. This privacy loss is ongoing and is \emph{independent} of leakage by apps that users choose to install on their phones (which is a related but orthogonal issue). While this major privacy issue has long been present in the architecture, the practical severity of the problem and the lack of technical countermeasures against bulk surveillance exceed what was previously understood. However, there is a fundamental technical challenge at the root of this problem: even if steps were taken to limit the sale or disclosure of user data, such as by passing legislation, the cellular architecture generally and operators specifically would still seemingly need to know where users are located in order to provide connectivity. Thus users must trust that network operators will do the right thing with respect to privacy despite not having done so to date.\\[0.5ex] \noindent \textbf{Architectural, deployable solution.} We aim to remedy this state of affairs by identifying and leveraging points of decoupling in the architecture. Our solution is designed to be deployed by Mobile Virtual Network Operators (\acrshort{mvno}s), where the \acrshort{mvno} operates the evolved packet core (\acrshort{EPC}) while the base stations (\acrshort{enb}s) are operated by a Mobile Network Operator (\acrshort{mno}). This presents us with architectural independence as the \acrshort{mvno} can alter its core functionality, so long as the \acrshort{EPC} conforms to LTE/5G standards. Our approach is made feasible by the industry-wide shift toward software-based \acrshort{EPC}s. In our approach, users are protected even against tracking by their own carrier (the \acrshort{mvno}). We decouple network connectivity from authentication and billing, which allows the carrier to run \acrshort{EPC} services that are unaware of the identity or location of their users while still authenticating them for network use. We shift authentication and billing functionality outside of the cellular core and separate traditional cellular credentials from the credentials used to gain global connectivity. Since it will take time for infrastructure and legislation to change, our work is explicitly \emph{not} clean slate. In addition, we assume that existing industry players are unlikely to adopt new technologies or have an interest in preserving user privacy unless legal remedies are instituted.
As a result, we consider how privacy can be added on top of today's mobile infrastructure solely by new industry entrants ({\it i.e.}, \acrshort{mvno}s). \noindent \textbf{Contributions.} We describe our prototype implementation, Pretty Good Phone Privacy (PGPP). In doing so, we examine several key challenges in achieving privacy in today's cell architecture. In particular, we consider: 1) which personal identifiers are stored and transmitted within the cellular infrastructure; 2) which core network entities have visibility into them (and how this can be mitigated); 3) which entities have the ability to provide privacy and with what guarantees; and 4) how we can provide privacy while maintaining compatibility with today's infrastructure and without requiring the cooperation of established providers. We show PGPP's impact on control traffic and on user anonymity. We show that by altering the network coverage map we are able to gain control traffic headroom compared with today's networks; we then consume that headroom in exchange for improved anonymity. We analyze the privacy improvements against a variety of common cellular attacks, including those based on bulk surveillance as well as targeted attacks. We find that PGPP significantly increases anonymity where there is none today. We find that an example PGPP network is able to increase the geographic area that an attacker could believe a victim to be within by \textasciitilde 1,200\% with little change in control load.\\[1ex] \noindent Our contributions are as follows: \begin{itemize}[nolistsep,noitemsep] \item We conduct a measurement study to demonstrate privacy leakage that exists in today's mobile networks (\cref{sec:zaatarimeasurement}). \item We design a new architecture that decouples connectivity from authentication and billing functionality, allowing us to alter the identifiers used to gain connectivity (\cref{sec:imsis}) and enable PGPP-based operators to continue to authenticate and bill users (\cref{sec:auth}) without identifying them. \item We adapt existing mechanisms to grow control traffic broadcast domains, thus enhancing user location privacy while maintaining backwards compatibility (\cref{sec:locationprivacy}). \item We quantify the impacts of PGPP on both user privacy and network control traffic through simulation (\cref{sec:simulation}) and demonstrate PGPP's feasibility in a lab testbed. \end{itemize} \section{Measurement study}\label{sec:zaatarimeasurement} In this section we demonstrate the privacy leakage that exists in today's cellular architecture by conducting a measurement study while acting as a relatively weak attacker in a real-world environment. Recall from~\cref{sec:privbackground} that the \acrshort{imsi} is a globally unique, permanent identifier. Unfortunately for user privacy, the traditional cellular architecture uses \acrshort{imsi}s for authentication and billing, as well as for providing connectivity, causing the \acrshort{imsi} to be transmitted for multiple reasons. Because of its importance and permanence, the \acrshort{imsi} is seen as a high-value target for those who wish to surveil cellular users. For example, in recent years there has been a proliferation of cell-site simulators, also known as \acrshort{imsi} catchers. These devices offer what appears to be a legitimate base station (\acrshort{enb}) signal. Since \acrshort{ue} baseband radios are na\"ive and automatically connect to the strongest signal, they attempt to attach to the \acrshort{imsi} catcher and offer their \acrshort{imsi}.
\acrshort{imsi} catchers have been used extensively by law enforcement and state-level surveillance agencies, with and without warrants, to identify, track, and eavesdrop on cellular users~\cite{paget2010practical}.\\[0.5ex] \noindent \textbf{Dataset.}\label{sec:zaatari} We analyze a dataset of cellular traces that our team gathered previously in a large refugee camp over a period of three days. Details of the trace collection can be found in~\cite{hybridcell, zaatari}. The traces include messages that were sent on broadcast channels in plaintext for three cellular providers that offer service in the area. Traces were captured using software-defined radios and mobile phones. The trace dataset provides a vantage point that is akin to an \acrshort{imsi} catcher.\footnote{Trace collection methodology and analysis received IRB approval.}\\[0.5ex] \noindent \textbf{\acrshort{imsi}s are often broadcast in the clear.} We discover that, while the architecture is designed to largely use temporary \acrshort{guti}s once \acrshort{ue}s are connected, \acrshort{imsi}s are often present in paging messages. Overall we see 588,921 total paging messages, with 38,917 containing \acrshort{imsi}s (6.6\% of all pages). Among those messages we see 11,873 unique \acrshort{imsi}s. We track the number of times each individual \acrshort{imsi} was paged and plot a CDF in Figure~\ref{fig:zaataripages}. As shown, more than 60\% of \acrshort{imsi}s were paged more than once in the traces. Note that we count multiple pages seen within one second as a single page. Given this network behavior, even a passive eavesdropper can learn the permanent identifiers of nearby users.\\[0.5ex] \noindent \textbf{\acrshort{imsi}s can be tracked over time.} Given that \acrshort{imsi}s are regularly broadcast, an eavesdropper can track the presence or absence of users over time. We investigate the intervals between pages containing individual \acrshort{imsi}s. In Figure~\ref{fig:zaatariintervals} we plot a CDF of intervals (greater than one second) between subsequent pages of individual \acrshort{imsi}s. Overall, we see that \acrshort{imsi}s are repeatedly broadcast over time, even though the architecture dictates that \acrshort{imsi}s be used sparingly in favor of temporary \acrshort{guti}s.\\[0.5ex] \noindent \textbf{Individuals can be tracked over time.} Given that we can track \acrshort{imsi}s over time, a passive attacker can track individuals' movements. Figure~\ref{fig:zaatarimap} shows locations of base stations that broadcast the \acrshort{imsi} of a single user in the traces. As shown, we saw the user in multiple locations over the course of two days. Location A was recorded at 10am on a Monday; location B was thirty minutes later. The user connected to a base station at location C at noon that same day. Locations D and E were recorded the following day at noon and 1:30pm, respectively. From this we see that a passive observer unaffiliated with a cellular carrier can, over time, record the presence and location of nearby users. This attacker is weak, with a relatively small vantage point. In reality, carriers \textit{can and do} maintain this information for \textit{all} of their users.
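The analysis above reduces to straightforward trace processing. The following Python sketch illustrates the per-\acrshort{imsi} page counting and inter-page interval computation used in this section; the record layout and field names are hypothetical stand-ins for parsed broadcast-channel captures, and the coalescing of pages within one second mirrors the methodology described above.
\begin{verbatim}
from collections import Counter, defaultdict

def analyze_paging(records):
    """records: iterable of (timestamp_sec, id_type, identifier) tuples.

    The tuple layout is an assumed, simplified stand-in for parsed
    SDR captures of the broadcast paging channel."""
    pages = defaultdict(list)  # IMSI -> timestamps of (coalesced) pages
    for ts, id_type, ident in sorted(records):
        if id_type != "IMSI":
            continue  # ignore GUTI-based pages
        # Count multiple pages within one second as a single page.
        if not pages[ident] or ts - pages[ident][-1] >= 1.0:
            pages[ident].append(ts)
    counts = Counter({imsi: len(ts) for imsi, ts in pages.items()})
    intervals = [b - a for ts in pages.values()
                 for a, b in zip(ts, ts[1:])]
    repaged = sum(1 for c in counts.values() if c > 1) / max(len(counts), 1)
    return counts, intervals, repaged

records = [(0.0, "IMSI", "208930000000001"),
           (0.4, "IMSI", "208930000000001"),  # coalesced with previous page
           (75.0, "IMSI", "208930000000001"),
           (10.0, "GUTI", "0xdeadbeef")]
counts, intervals, repaged_frac = analyze_paging(records)
\end{verbatim}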
\section{Design}\label{sec:pgpp} In this section we describe the mechanisms PGPP employs to increase user identity and location privacy. Ultimately, PGPP's design choices appear obvious in retrospect. We believe its simplicity is an asset, as PGPP is compatible with existing networks and immediately deployable. In order to provide identity privacy against bulk attacks, we nullify the value of the \acrshort{imsi}, as it is the most common target identifier for attackers. In our design, we choose to set all PGPP user \acrshort{imsi}s to an identical value to break the link between \acrshort{imsi} and individual users. This change requires a fundamental shift in the architecture, as \acrshort{imsi}s are currently used for connectivity as well as authentication, billing, and voice/SMS routing. We design a new cellular entity for billing and authentication that preserves identity privacy. Fortunately, the industry push for software-based \acrshort{EPC}s makes our architecture feasible. We describe the architecture in~\cref{sec:imsis}. To provide location privacy against targeted attacks, PGPP leverages an existing mechanism (\acrshort{tal}s) in the cellular specification in order to grow the broadcast domain for control traffic (\cref{sec:locationprivacy}). By changing the broadcast domain for every user, we broaden the potential location of a victim from the attacker's vantage point. \subsection{User identity privacy}\label{sec:imsis} As discussed in~\cref{sec:ids}, \acrshort{imsi}s are globally unique, permanent identifiers. As such, they are routinely targeted by attackers, both legal and illegal. In this section we re-architect the network in order to thwart the {\em bulk} attacks introduced in~\cref{sec:attacks} that are based on identifying individuals via \acrshort{imsi}. We decouple back-end connectivity from the authentication procedure that normally occurs at the \acrshort{hss} when a \acrshort{ue} attaches to the network. Instead, the PGPP operator issues \acrshort{sim} cards with \textit{identical} \acrshort{imsi}s to all of its subscribers. In this model, the \acrshort{imsi} is used only to prove that a user has a valid \acrshort{sim} card to use the infrastructure; in turn, the PGPP network can provide an IP address and connectivity, and offer the client a \acrshort{guti}, giving the user the temporary unique identity necessary for basic connectivity. \label{sec:auth} LTE authentication is normally accomplished using \acrshort{imsi}s at the \acrshort{hss}; however, all PGPP users share a single \acrshort{imsi}. Thus, to authenticate a user, we design a post-attach oblivious authentication scheme that ensures the PGPP operator is able to account for the user without knowing who they are.
\textit{PGPP Gateway.} In order to perform this authentication we create a new logical LTE entity called a PGPP Gateway (\acrshort{pgppgw}), which sits between the \acrshort{pgw} and the public Internet. The \acrshort{pgw} is configured to have a fixed tunnel to a \acrshort{pgppgw}, which can be located outside of the PGPP operator's network. Using this mechanism, the \acrshort{pgppgw} sees only an IP address, which is typically NATed by the \acrshort{pgw}, and whether that IP address corresponds to a valid user. Notably, it does not have any information about the user's \acrshort{imsi}. The \acrshort{pgppgw} design also allows for many different architectures. For instance, multiple \acrshort{pgppgw}s could be placed in multiple datacenters or even behind a privacy service such as Tor.\footnote{We leave exploration of such scenarios to future work.} \textit{Authentication properties.} From the perspective of the \acrshort{pgppgw}, there are multiple properties an authentication scheme must guarantee: (1) the gateway can authenticate that a user is indeed a valid customer\footnote{Due to ``Know Your Customer'' rules in some jurisdictions, the provider may need to have a customer list, necessitating that the user authentication scheme be compatible with periodic explicit customer billing.}; (2) the gateway and/or any other entities cannot determine the user's identity, and thus cannot link the user's credentials/authentication data with a user identity; and (3) the gateway can determine whether a user is unique or whether two users are sharing credentials. \begin{table}[t] \centering \small \scalebox{1.0}{ \begin{tabular}{|l||c|c|c|} \hline \textbf{Scheme} & \textbf{Customer?} & \textbf{Anonymous?} & \textbf{Unique?}\\\hline\hline Standard auth & $\bullet$ & & \\\hline Group/ring sig & $\bullet$ & $\bullet$ & \\\hline Linkable ring sig & $\bullet$ & & $\bullet$ \\\hline\hline Cryptocurrency & & $\bullet$ & $\bullet$ \\\hline PGPP tokens & $\bullet$ & $\bullet$ & $\bullet$ \\\hline \end{tabular}} \caption{Three properties needed for user authentication in a privacy-preserving cell network and schemes to achieve them.} \label{t:gateway-auth} \vspace{-3.5mm} \end{table} As we show in Table~\ref{t:gateway-auth}, the challenge is that standard approaches for authentication provide only one of the three required properties, and widely-studied cryptographic mechanisms provide only two of the three. For example, an ordinary authentication protocol (of which there are many~\cite{bellare1993entity, jakobsson2001mutual}) can provide property 1) but not 2) and 3). A cryptographic mechanism such as group signatures~\cite{chaum1991group,boneh2004short} or ring signatures~\cite{cramer1994proofs,rivest2001leak} can protect the user's identity upon authentication, providing properties 1) and 2) but not 3), as providing the last property would violate the security of the signature scheme. Similarly, traitor tracing schemes~\cite{chor1994tracing} (such as for broadcast encryption~\cite{fiat1993broadcast}) could in principle provide all three properties, but in practice fail to provide property 3), as traitor tracing would require physical confiscation of the ``traitor'' phone by the \acrshort{mvno}, which is infeasible. A variation on ring signatures known as linkable ring signatures~\cite{liu2004linkable} provides the ability for a user's identity to be revealed if the user signs multiple messages with the same key.
While this is useful in establishing that the user is unique and hasn't shared their credentials, it also partially violates the user's anonymity, as each such key can safely be used only once. \textit{Effective authentication.} There are two approaches that we view as viable, depending on the circumstances; the choice of authentication scheme is deployment-context specific. An anonymity-preserving cryptocurrency can provide properties 2) and 3), but not 1), as a cryptocurrency would combine billing and authentication at the \acrshort{pgppgw}. For \acrshort{mvno}s that are not required to know their customers, an anonymity-preserving cryptocurrency may be the ideal solution for both user authentication and payment, though even the best coins provide imperfect anonymity guarantees~\cite{kappos2018empirical}. To provide all three properties, we develop a simple scheme called \textbf{PGPP tokens} that helps us sidestep the issues with alternative approaches. With PGPP tokens, when paying a monthly bill a user retrieves authentication tokens that are blind-signed by the billing system using Chaum's classic scheme~\cite{chaum1983blind,bellare2003one}. Later, when authenticating to the service, the user presents a token and the service (the \acrshort{pgppgw}) verifies its signature before allowing the user to use the network. The token scheme ensures that the service can check the validity of tokens without identifying the user requesting access. The user then presents the next token in advance to ensure seamless service. Note that PGPP tokens disallow the post-pay model for cellular billing, as the network would be required to know the identity of users in order to accurately charge them for usage. Therefore, PGPP is pre-pay only, though this can be adjusted to emulate post-payment ({\it e.g.}, users pre-pay for tokens on an ongoing basis rather than only monthly, and tokens are valid for a longer time period, such as a year, rather than for only one billing period). Each token represents a unit of access, as is appropriate for the service provider. Some providers may choose to offer flat-rate unlimited-data service, in which case each token represents a fixed period of time; this is the default approach that we use to describe the scheme below. Other providers may choose to offer metered service, in which case each token represents a fixed unit of data, such as 100 MB or 1 GB, rather than a period of time. Still others may choose to provide two-tiered service priority by marking each token with a priority bit, in addition to either unlimited-data or metered-data service; such prioritization does come with slight privacy loss, as the \acrshort{mvno} and \acrshort{mno} alike would be able to differentiate which priority level was in use. The privacy loss of two-tiered data priority can be partially mitigated by offering all users some amount of time or data volume of high-priority service, after which they must fall back to low-priority service; such a service plan structure is fairly standard in the industry today. In such a setting, each user would have both high-priority and low-priority tokens and thus would not be clearly stratified into two identifiable groups of users. At the beginning of a billing period, the billing system defines $s$ time slices ({\it e.g.}, corresponding to hours) or another unit of access ({\it e.g.}, a unit of data) and generates $s$ RSA keypairs for performing blind signatures using Chaum's scheme.
It then appends the public keys for this time period to a well-known public repository that is externally maintained ({\it e.g.}, on GitHub), and these are fetched by users. The user generates $s$ tokens, where each token takes the form $i\|r$: here $i$ is the time slice index, a 256-bit unsigned value zero-indexed from the beginning of the billing period, and $r$ is a 256-bit random value chosen by the user. The user then blinds these tokens. The user pays the bill using a conventional means of payment ({\it e.g.}, credit card) and presents the blinded tokens to the billing system to be signed; the system signs each token with the corresponding time slice key and returns these values to the user. The user unblinds the response values and verifies the signatures for each. Upon later authentication to the service, the user presents its signed token for the current time slice to the \acrshort{pgppgw}, which verifies the signature and, if valid, begins forwarding the user's traffic onto the Internet. Since the token signature was generated using Chaum's scheme, the service cannot determine which human user corresponds to which signed token. If the same token is used by two different users during the same time period, then the service can conclude that a user has shared their credentials and is attempting to cheat. The costs of this scheme to both the PGPP operator and the user are low. The operator stores the list of used tokens in a standard consistent and replicated cloud database, so the service can operate multiple \acrshort{pgppgw}s, though it is likely that a small number of \acrshort{pgppgw}s can serve a large number of users: we benchmarked the 2048-bit RSA signature verification used here at 31$\mu$s per call using Crypto++~\cite{cryptopp} on a single core of a 2.6GHz Intel Xeon E5-2640 CPU, and thus with a single CPU core the \acrshort{pgppgw} can handle token verification for tens of millions of users. The tokens themselves are small, and the storage cost to the provider is about 1.5 MB per user per time period; this is a small amount for any user's phone to store, and even hundreds of millions of tokens amount to mere GBs of data in cloud storage for the provider. \textit{User device agent.} To automate the process of authenticating with the \acrshort{pgppgw}, we create a simple agent that runs as a background job on the user device. This agent leverages the Android JobScheduler API: in the event of cellular connectivity, the JobScheduler triggers PGPP-token-based authentication with the \acrshort{pgppgw}. The agent establishes a TLS connection to the \acrshort{pgppgw} and then sends the token for the current time slice. Once the user presents a valid token, the \acrshort{pgppgw} begins forwarding traffic for that user; this behavior is thus akin to a captive portal, though the authentication is automatic and unseen by the user.
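To make the token lifecycle concrete, the following Python sketch walks through blinding, signing, unblinding, and verification with Chaum's RSA blind-signature scheme. It is purely illustrative and not PGPP's implementation: the key size is toy-sized for readability (the deployment described above uses 2048-bit RSA), and the hash-to-integer encoding is an assumption.
\begin{verbatim}
import hashlib, secrets
from math import gcd

# Toy RSA keypair for a single time slice. NOT secure: real deployments
# would use 2048-bit keys, as benchmarked in the text.
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def h(token: bytes) -> int:
    # Hash the token into Z_n (full-domain hashing is assumed here).
    return int.from_bytes(hashlib.sha256(token).digest(), "big") % n

# --- User: build token i || r for the current time slice and blind it ---
i = (0).to_bytes(32, "big")            # 256-bit time-slice index
r_tok = secrets.token_bytes(32)        # 256-bit random value
m = h(i + r_tok)
while True:                            # pick a blinding factor coprime to n
    b = secrets.randbelow(n - 2) + 2
    if gcd(b, n) == 1:
        break
blinded = (m * pow(b, e, n)) % n

# --- Billing system: blind-sign, learning nothing about m ---
blind_sig = pow(blinded, d, n)

# --- User: unblind to recover an ordinary RSA signature on m ---
sig = (blind_sig * pow(b, -1, n)) % n

# --- PGPP-GW: verify the token without linking it to its issuance ---
assert pow(sig, e, n) == m
\end{verbatim}
Because the billing system only ever sees $m \cdot b^e \bmod n$ for a random $b$, the signed token later presented to the \acrshort{pgppgw} is unlinkable to the billing interaction.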
\subsection{Location privacy}\label{sec:locationprivacy} As described in~\cref{sec:location}, cellular operators track user location in the form of tracking areas for \acrshort{ue}s in order to quickly find users when there is incoming content. PGPP leverages an existing mechanism in the cellular standard to reduce the effectiveness of the {\em local-targeted} attacks described in~\cref{sec:attacks}. Paging has been exploited by adversaries in the past to discover user location. However, tracking areas are useful for the cellular provider in that they confine the signaling message load ({\it i.e.}, paging messages) to a relatively small subset of the infrastructure. Tracking areas reduce mobility signaling from \acrshort{ue}s as they move through the coverage zone of a single tracking area. Note that emergency calling represents a special case in cellular networks. When a device dials 911, the phone and network attempt to estimate accurate location information. In this work we do not alter this functionality, as we anticipate that users dialing 911 are willing to reveal their location. \textit{TALs.}\label{sec:tals} In PGPP, we exploit the tracking area list (\acrshort{tal}) concept, introduced in 3GPP Release 8~\cite{3gpp.23.401}. Using \acrshort{tal}s, a \acrshort{ue} no longer belongs to a single tracking area, but rather is given a list of up to 16 tracking areas that it can freely move through without triggering a tracking area update, essentially creating larger tracking areas. Whereas prior work has focused on using \acrshort{tal}s to pre-compute optimal tracking area combinations for users~\cite{razavi,razavi2,razavi3}, in PGPP we use \acrshort{tal}s to provide improved location anonymity. Typically, \acrshort{tal}s consist of groups of adjacent tracking areas that are pre-computed, essentially growing the tracking area for a \acrshort{ue} to the union of all tracking areas in the \acrshort{tal}. We do not use \acrshort{tal}s in this way. Instead, we generate \acrshort{tal}s on the fly, uniquely for each \acrshort{ue}. When a \acrshort{ue} attaches or issues a tracking area update message, the \acrshort{mme} learns the \acrshort{enb} and tracking area the \acrshort{ue} is currently attached to. The \acrshort{mme} then generates a unique \acrshort{tal} by iteratively selecting at random some number (up to the \acrshort{tal} limit of 16) of additional, adjacent tracking areas, as sketched below. By generating unique \acrshort{tal}s for each user, attackers are unable to know a priori which set of tracking areas (or \acrshort{enb}s) a victim is within. We explore tradeoffs in terms of \acrshort{tal} length, control traffic overhead, and location anonymity in the next section.
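A minimal sketch of this per-\acrshort{ue} \acrshort{tal} generation is shown below in Python. The grid topology and adjacency map are hypothetical; a real \acrshort{mme} would derive adjacency from its actual tracking area configuration, and the list length could itself be randomized per \acrshort{ue}.
\begin{verbatim}
import random

def generate_tal(current_ta, adjacency, max_len=16):
    """Per-UE tracking area list: start from the UE's current tracking
    area and iteratively add randomly chosen adjacent tracking areas."""
    tal = [current_ta]
    frontier = set(adjacency[current_ta])
    while frontier and len(tal) < max_len:
        ta = random.choice(sorted(frontier))
        tal.append(ta)
        frontier |= adjacency[ta]   # grow the candidate set outward
        frontier -= set(tal)        # never re-add selected tracking areas
    return tal

# Hypothetical 3x3 grid of tracking areas; adjacency is 4-neighborhood.
adj = {ta: set() for ta in range(9)}
for ta in range(9):
    r, c = divmod(ta, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= r + dr < 3 and 0 <= c + dc < 3:
            adj[ta].add((r + dr) * 3 + (c + dc))

print(generate_tal(4, adj))  # fresh, UE-specific TAL, e.g. [4, 1, 2, 5, ...]
\end{verbatim}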
\section{Scope} We believe that many designs are possible to increase privacy in mobile networks, and no architecture, today or in the future, is likely to provide perfect privacy. Nevertheless, below we discuss various properties that PGPP strives to achieve. Prior work examined the security vulnerabilities in modern cell networks~\cite{lteinspector,practicalattackslte,kune2012location} and revealed a number of flaws in the architecture itself. In addition, data brokers and major operators alike have taken advantage of the cellular architecture's vulnerabilities to profit off of revealing sensitive user data. We believe mobile networks should aim to, at a minimum, provide one or both of the following privacy properties.\\[0.5ex] \noindent \textbf{Identity privacy.} A network can aim to protect users' identities. Networks---as well as third-party attackers---identify users through \acrshort{imsi}s, which are intended to be uniquely identifying.\\[0.5ex] \noindent \textbf{Location privacy.} A network can aim to protect information about the whereabouts of a phone. Naturally, these privacy properties do not exist in isolation; they intersect in critical ways. For example, attackers often aim to learn not only who a user is but where a specific user is currently located, or where a user was when a specific call was made. Also, the definition of an attacker or adversary is a complex one, and depending on context may include individuals aiming to steal user data, mobile carriers and data brokers looking to profit off of user data, governments seeking to perform bulk surveillance, law enforcement seeking to monitor a user with or without due process, and many others. Due to this context dependence, we do not expect all privacy-focused mobile networks to make the same choice of tradeoffs. \subsection{Cellular privacy threat model}\label{sec:attacks} Given the above discussion, we distinguish between bulk and targeted data collection. We define bulk collection to be the collection of information from existing cellular architecture traffic without the introduction of attack traffic; thus, bulk collection is passive. Bulk attacks commonly target user identities ({\it e.g.}, \acrshort{imsi}s). PGPP's core aim is to protect against bulk attacks. Targeted attacks are active and require injection of traffic to attack specific targets. Targeted attacks are often aimed at discovering a victim's location. We also delineate attacks by the adversary's capabilities: an attacker may have visibility into an entire network (global) or, as an unprivileged attacker, into only some smaller subset of a network's infrastructure (local). Table~\ref{tab:attacks} gives the taxonomy of attacks. Carriers and governments are the most common \textbf{global-bulk} attackers. Such bulk surveillance is commonplace in cellular networks, and has been at the center of recent lawsuits and privacy concerns. Attacks that employ \acrshort{imsi} catchers or passively listen to broadcasts using software-defined radios are considered \textbf{local-bulk}. Here, an \acrshort{imsi} catcher is only able to monitor phones that connect directly to it, so its visibility is limited to its radio range. Similarly, SDR-based passive snooping (as in the example in~\cref{sec:zaatarimeasurement}) is only able to monitor nearby base stations and will miss portions of the network. We design PGPP with a primary focus on thwarting bulk attacks by nullifying the value of \acrshort{imsi}s (\cref{sec:imsis}). \begin{table}[t] \centering \small \resizebox{\columnwidth}{!}{ \begin{tabular}{ll|l|l|} \cline{3-4} & & \multicolumn{2}{c|}{\textbf{Attack type}} \\ \cline{3-4} \multicolumn{1}{l}{} & & \textbf{Bulk} & \textbf{Targeted} \\ \hline \multicolumn{1}{|l|}{\multirow{2}{*}{\rotatebox[origin=c]{90}{\textbf{Visibility\hspace{-0.3em}}}}} & \textbf{Global} & \begin{tabular}[c]{@{}l@{}}Carrier logs~\cite{zdnet,adage,vice,vice2} / \\ Government Surveillance~\cite{carpenter_v_united_states_2018}\end{tabular} & Carrier Paging \\ \cline{2-4} \multicolumn{1}{|l|}{} & \textbf{Local} & \begin{tabular}[c]{@{}l@{}}SDR~\cite{7146071, van2016effectiveness, mjolsnes2017easy} / \\ IMSI Catcher~\cite{paget2010practical, joachim2003method}\end{tabular} & Paging attack~\cite{hussain2019privacy, kune2012location}\\ \hline \end{tabular}} \caption{Common cellular attacks.} \label{tab:attacks} \vspace{-3mm} \end{table} \textbf{Local-targeted} attacks can be carried out by ordinary users by generating traffic that causes a network to page a victim ({\it e.g.}, a phone call to the victim).
As local-targeted attackers do not have visibility into the entire network, they must rely upon knowledge of the geographic area that is encompassed by a tracking area. Due to the prevalence of such attacks, as an enhancement, an operator can provide functionality, in cooperation with the user, that reduces the efficacy of local-targeted attacks through the use of \acrshort{tal}s~(\cref{sec:locationprivacy}). \textbf{Global-targeted} attacks represent a very powerful attacker who can actively probe a victim while having global visibility of the network. We envision that defenses against such attacks would require fundamental changes to communication models. PGPP does not mitigate global-targeted attacks, as we focus on immediately deployable solutions; we leave this to future work. \subsection{Aims} Next we discuss the aims of PGPP by considering several common questions that arise. \textit{\textbf{What sort of privacy does PGPP provide?}} As its name suggests, PGPP aims to provide ``pretty good'' privacy; we don't believe there is a solution that provides perfect privacy, causes no service changes ({\it i.e.}, does not increase latency), and is incrementally deployable on today's cellular networks. The main focus is to offer privacy against global-bulk surveillance of mobility and location, a practice by carriers that is widespread and pernicious. We thwart this by eliminating the \acrshort{imsi} as an individual identifier and decoupling the authentication and connectivity mechanisms in the cellular architecture. \textit{\textbf{Isn't 5G more secure than legacy generations?}} \label{sec:5g} We are currently on the brink of a new generation of cellular connectivity: 5G. While the ITU requirements for what can be called 5G have not been fully ratified (they are scheduled for this year), many preliminary components of the standard have achieved widespread agreement. \textit{Encrypted \acrshort{imsi}s.} 5G includes the addition of encrypted \acrshort{imsi}s, where public key cryptography, along with ephemeral keys generated on the \acrshort{sim}, is used to encrypt the \acrshort{imsi} when sending it to the network. This protects user \acrshort{imsi}s from eavesdroppers. However, encrypted \acrshort{imsi}s do not prevent the cellular provider \textit{itself} from knowing the user's identity. An analogy for encrypted \acrshort{imsi}s can be found in TLS for web traffic: eavesdroppers cannot see unencrypted traffic, yet the endpoints (the web server for TLS, the cellular core in 5G) can. The goal of this work is not only to thwart local-bulk attacks, but also to protect user privacy from mobile operators that would otherwise violate it ({\it i.e.}, global-bulk attacks). \textit{Small cell location privacy.} The 5G standard strives for reduced latencies as well as much higher data throughputs. This necessitates the use of cells that cover smaller areas in higher-frequency spectrum in order to overcome interference, compared with previous cellular generations that used macrocells to provide coverage to large areas. A (likely unintended) byproduct of 5G's use of smaller cells is a dramatic {\em reduction} in location privacy for users. As the 5G network provider maintains state pertaining to a given user's location in the network for the purposes of paging, smaller cells result in the operator, or an attacker, knowing user locations at much higher precision compared with previous generations.
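As a rough back-of-the-envelope illustration (the radii here are assumptions chosen for the example, not values from the standard), the area within which paging state can place a user shrinks quadratically with cell radius:
\begin{align*}
A_{\text{macro}} &= \pi r^2 = \pi\,(1\,\text{km})^2 \approx 3.1\ \text{km}^2,\\
A_{\text{small}} &= \pi\,(100\,\text{m})^2 \approx 0.031\ \text{km}^2,
\end{align*}
so under these assumed radii, paging state alone would localize a user roughly $100\times$ more precisely.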
\textit{\textbf{What about active / traffic analysis / signaling attacks?}} While active, targeted attacks aren't our main focus, we improve privacy in the face of them by leveraging \acrshort{tal}s to increase and randomize the broadcast domain for paging traffic, making it more difficult for attackers to know where a victim is located (analyzed in~\cref{sec:localtargeted}). Further, the goal of many active attacks is to learn users' \acrshort{imsi}s, and our nullification of \acrshort{imsi}s renders such attacks meaningless. An attacker with a tap at the network edge could use traffic analysis attacks to reduce user privacy. We largely view this as out of scope, as users can tunnel traffic and use other means to hide their data usage patterns. Cellular networks rely on signaling protocols such as Signaling System 7 (\acrshort{ss7}) and \acrshort{diameter} when managing mobility as well as voice and SMS setup and teardown. These protocols enable the interoperability between carriers needed for roaming and cross-carrier connectivity. Unfortunately, these protocols were designed with inherent trust in the network players, and have thus been used to reduce user privacy and disrupt connectivity~\cite{lorenz2001securing,sengar2006ss7,engel2008locating,holtmans2016detach,sonar}. We design PGPP for 4G/5G data only, which renders legacy \acrshort{ss7} compatibility moot. Our PGPP design expects users to use outside messaging services rather than an in-EPC \acrshort{ims} system. \textit{\textbf{Can PGPP support roaming?}} Yes. While we envision that many PGPP users would explicitly not wish to roam, as roaming partners may not provide privacy guarantees, roaming is possible using a \acrshort{diameter} edge agent that allows only home-routed roaming: traffic is forced to route from the visited network's \acrshort{sgw} back to the PGPP operator's \acrshort{pgw} rather than using local breakout, as required by our authentication mechanism (\cref{sec:auth}). Roaming, and international roaming in particular, adds billing complexities for the PGPP operator. Typically, the visited network collects call data records for each roaming user on its network and calculates the wholesale charges payable by the home network. The visited network then sends a Transferred Account Procedure (\acrshort{tap}) file to the home network via a data clearing house. The home network then pays the visited network. In PGPP, the individual identity of the user that roamed is not known, yet the PGPP operator remains able to pay the appropriate fees to visited networks. \textit{\textbf{How does PGPP protect user privacy for voice or text service?}} Out of the box, PGPP doesn't provide protection for such services. Instead, PGPP aims to provide privacy \textit{from the cellular architecture itself}; users are then free to use a third-party VoIP provider (in which case the phone will operate identically to a normal phone for telephony service from the user's perspective), to use recent systems by Lazar et al.~\cite{yodel, karaoke} that provide strong metadata privacy guarantees for communications, or to use similar systems such as~\cite{199303, vandenHooff:2015:VSP:2815400.2815417,corrigan2010dissent,Corrigan-Gibbs:2015:RAM:2867539.2867658}. We view PGPP as complementary to such systems. \textit{\textbf{How does PGPP protect users against leaky apps?}} PGPP doesn't, as it is about providing protection in the cellular infrastructure. Even without leaky apps, users can always intentionally or inadvertently reveal their identity and location.
Leaky apps make this worse, as they collect and sometimes divulge sensitive user information. We see PGPP as complementary to work that has targeted privacy in mobile app ecosystems. Further, apps are not as fundamental as connectivity---users can choose whether to install and run a leaky app, and can constrain app permissions. However, phones are, by their nature, always connected to carrier networks, and those very networks have been selling user data to third parties. \textit{\textbf{If users can't be identified by carriers, how can carriers still make money?}} We introduce PGPP tokens in~\cref{sec:auth} as a mechanism for a PGPP operator to charge customers while protecting user anonymity. \textit{\textbf{Can't phone hardware be tracked as well?}} Phones have an International Mobile Equipment Identity (\acrshort{imei}). The \acrshort{imei} is assigned to the hardware by the manufacturer and identifies the manufacturer, model, and serial number of a given device. Some operators keep an \acrshort{imei} database, known as an equipment identity register (\acrshort{eir}), to check whether a device has been reported as stolen; \acrshort{imei}s in the database are blacklisted. For many devices, the \acrshort{imei} can be changed through software, often without root access. We envision that a PGPP \acrshort{mvno} would allow subscribers to present their unchanged device \acrshort{imei}, giving the PGPP operator the opportunity to check it against an \acrshort{eir} to verify that the phone has not been reported as stolen. At that point, the \acrshort{imei} could be reprogrammed to a single value, similar to our changes to the \acrshort{imsi}. Note that different jurisdictions have different rules about whether, how, and by whom an \acrshort{imei} can be changed, so \acrshort{imei} changes require cooperation with the \acrshort{mvno} only in some cases. \textit{\textbf{Is PGPP legal?}} Legality varies by jurisdiction. For example, U.S. law (CALEA~\cite{calea}) requires providers to offer lawful interception of voice and SMS traffic. A PGPP-based MVNO is data-only, with voice and messaging provided by third parties. CALEA requires the provider to offer the content of communication data at the \acrshort{pgw}, {\it e.g.}, raw (likely encrypted) network traffic; this is supported by PGPP. \section{Sequence number check}\label{sec:sqn} When multiple \acrshort{ue}s attach to the network using identical \acrshort{imsi}s, the sequence number check will fail in the network, triggering \texttt{sync\_failure} messages and introducing delay before successful attachment. Figure~\ref{fig:aka} shows the message sequence of the LTE authentication procedure, used today on most cellular networks, and notes what occurs if duplicate \acrshort{imsi}s are present. When a \acrshort{ue} sends an attach request, the \acrshort{hss} feeds the following to its \acrshort{aka} function: a new nonce value \acrshort{rand}, the shared key K that is stored for the \acrshort{imsi}, and the locally-held sequence number SQN\textsubscript{HSS} for that \acrshort{imsi}. The \acrshort{aka} function generates an authentication token (AUTN\textsubscript{HSS}) that includes the SQN\textsubscript{HSS}, an expected response (\acrshort{xres}), and a key (K\textsubscript{ASME}) that is ultimately used for \acrshort{ue} and \acrshort{mme} authentication. The \acrshort{mme} forwards the \acrshort{rand} and the AUTN\textsubscript{HSS} to the \acrshort{ue}.
The \acrshort{ue} then uses its own SQN\textsubscript{UE}, its shared key K, and the \acrshort{rand} to generate its own AUTN\textsubscript{UE}, a response (RES), and the key (K\textsubscript{ASME}). The \acrshort{ue} checks whether AUTN\textsubscript{UE} and AUTN\textsubscript{HSS} match and whether its SQN\textsubscript{UE} matches the SQN\textsubscript{HSS} in the AUTN\textsubscript{HSS}. In PGPP, this check will fail for most attach procedures, as individual \acrshort{sim}s will not have the same \acrshort{sqn} value as the \acrshort{hss} for the shared \acrshort{imsi}. When the \acrshort{sqn} check fails, the \acrshort{ue} sends an authentication failure message with the cause \texttt{sync\_failure}, along with the \acrshort{ue}'s current \acrshort{sqn}, which allows the \acrshort{hss} to re-synchronize its \acrshort{sqn} value. The \acrshort{ue} then begins the attach sequence again, which will succeed, as the \acrshort{hss} and \acrshort{ue} now begin the attach procedure with the same \acrshort{sqn} values.
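A minimal simulation of this resynchronization flow is sketched below in Python. It is illustrative only: real \acrshort{aka} derives AUTN, \acrshort{xres}, and K\textsubscript{ASME} with the MILENAGE function set on the \acrshort{sim}, whereas here a single HMAC stands in for those functions and the freshness check is simplified.
\begin{verbatim}
import hashlib, hmac, os

def f_autn(k: bytes, rand: bytes, sqn: int) -> bytes:
    # Simplified stand-in for the AKA f1 MAC function (real AKA uses
    # MILENAGE); MACs the SQN and RAND under the shared key K.
    return hmac.new(k, rand + sqn.to_bytes(6, "big"), hashlib.sha256).digest()

class HSS:
    def __init__(self, k: bytes, sqn: int):
        self.k, self.sqn = k, sqn      # HSS-side counter for the shared IMSI
    def auth_vector(self):
        rand = os.urandom(16)
        self.sqn += 1
        return rand, self.sqn, f_autn(self.k, rand, self.sqn)
    def resync(self, sqn_ue: int):     # handle sync_failure from the UE
        self.sqn = sqn_ue

class UE:
    def __init__(self, k: bytes, sqn: int):
        self.k, self.sqn = k, sqn      # per-SIM counter; differs across SIMs
    def check(self, rand: bytes, sqn_hss: int, autn: bytes) -> bool:
        assert f_autn(self.k, rand, sqn_hss) == autn   # MAC check
        if sqn_hss <= self.sqn:        # simplified freshness check fails:
            return False               # UE reports sync_failure with its SQN
        self.sqn = sqn_hss
        return True

k = os.urandom(32)
hss, ue = HSS(k, sqn=100), UE(k, sqn=500)  # counters out of sync (shared IMSI)
rand, sqn, autn = hss.auth_vector()
if not ue.check(rand, sqn, autn):          # first attach: sync_failure
    hss.resync(ue.sqn)                     # HSS adopts the UE's SQN
    rand, sqn, autn = hss.auth_vector()    # re-attach now succeeds
    assert ue.check(rand, sqn, autn)
\end{verbatim}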
\section{Introduction} \label{sec:intro} As machine learning algorithms are increasingly used in real-world decision making scenarios, there has been growing concern that these methods may produce decisions that discriminate against particular groups of people. The relevant applications include online advertising, hiring, loan approvals, and criminal risk assessment~\cite{datta2015automated,barocas2016big,chouldechova2017fair,berk2018fairness}. To address these concerns, various methods have been proposed to quantify and ensure fairness in automated decision making systems~\cite{chouldechova2017fair,dwork2012fairness,feldman2015certifying,kusner2017counterfactual,kamishima2012fairness,zemel2013learning}. A widely used notion of fairness is demographic parity, which states that sensitive attributes such as gender or race must be statistically independent of the class predictions. In this paper, we study the problem of enforcing demographic parity in probabilistic classifiers. In particular, we observe that class labels in the data are often biased, and propose a latent variable approach that treats the observed labels as biased proxies of hidden, fair labels that satisfy demographic parity. The process that generates bias is modeled by a probability distribution over the fair label, the observed label, and the other features including the sensitive attributes. Moreover, we show that group fairness guarantees for a probabilistic model hold in the real world only if the model accurately captures the real-world data. Therefore, the goal of learning a fair probabilistic classifier also entails learning a distribution that achieves high likelihood. Our first contribution is to systematically derive the assumptions of a fair probabilistic model in terms of independence constraints. Each constraint serves the purpose of explaining how the observed, biased labels arise from hidden fair labels and/or of ensuring that the model closely represents the data distribution. Secondly, we propose an algorithm to learn probabilistic circuits (PCs)~\cite{pcTutorialUAI}, a type of tractable probabilistic model, so that the fairness constraints are satisfied. Specifically, this involves encoding independence assumptions into the circuits and developing an algorithm to learn PCs from incomplete data, as our model includes a latent variable. Finally, we evaluate our approach empirically on synthetic and real-world datasets, comparing against existing fair learning methods as well as a baseline we propose that does not include a latent variable. The experiments demonstrate that our method achieves high likelihoods, which indeed translate to more trustworthy fairness guarantees. It also achieves high accuracy in predicting the true fair labels in the synthetic data, and on real-world data its fair predictions remain close to the observed (potentially unfair) labels. \section{Related Work} \label{sec:related} Several frameworks have been proposed to design fairness-aware systems. We discuss a few of them here and refer to \citet{romei2014multidisciplinary,barocas-hardt-narayanan} for a more comprehensive review. Some of the most prominent fairness frameworks include individual fairness and group fairness. Individual fairness~\citep{dwork2012fairness} is based on the idea that similar individuals should receive similar treatments, although defining similarity between individuals can be challenging. On the other hand, group fairness aims to equalize some statistics across groups defined by sensitive attributes.
These include equality of opportunity~\citep{hardt2016equality} and demographic (statistical) parity~\citep{calders2010three,kamiran2009classifying}, as well as its relaxed notion of disparate impact~\citep{feldman2015certifying,zafar2017fairness}. There are several approaches to achieving group fairness, which can be broadly categorized into (1) pre-processing data to remove bias~\citep{zemel2013learning,kamiran2009classifying,calmon2017optimized}, (2) post-processing of model outputs, such as calibration and threshold selection~\citep{hardt2016equality,pleiss2017fairness}, and (3) in-processing, which incorporates fairness constraints directly in learning/optimization~\citep{corbett2017algorithmic,agarwal2018reductions,kearns2018preventing}. Some recent works on group fairness also consider bias in the observed labels, both for evaluation and learning~\cite{Fogliato2020FairnessEI,blum2020recovering,jiang2020identifying}. For instance, \citet{blum2020recovering} studied empirical risk minimization (ERM) with various group fairness constraints and showed that ERM constrained by demographic parity does not recover the Bayes optimal classifier under one-sided, single-group label noise (a setting subsumed by ours). In addition, \citet{jiang2020identifying} developed a pre-processing method to learn fair classifiers under noisy labels, by reweighting according to an unknown, fair labeling function. There, the observed labels are assumed to come from the biased labeling function that is ``closest'' to the fair one; whereas we aim to find the bias mechanism that best explains the observed data. We would like to point out that while pre-processing methods have the advantage of allowing any model to be learned on top of the processed data, it is also known that certain modeling assumptions can result in bias even when learning from fair data~\citep{ChoiAAAI20}. Moreover, certain post-processing methods for achieving group fairness are known to be suboptimal under some conditions~\citep{woodworth2017learning}. Instead, we take the in-processing approach to explicitly optimize the model's performance while enforcing fairness. Many fair learning methods make use of probabilistic models such as Bayesian networks~\citep{calders2010three,mancuhan2014combating}. Among those, perhaps the most closely related to our approach is the latent variable naive Bayes model by \citet{calders2010three}, which also uses a latent decision variable to make fair predictions. However, it makes a naive Bayes assumption among the features. We relax this assumption and will later demonstrate how doing so helps in more closely modeling the data distribution, as well as in providing better fairness guarantees. \section{Latent Fair Decisions} \label{sec:model} We use uppercase letters (e.g., $X$) for discrete random variables (RVs) and lowercase letters ($x$) for their assignments. Negation of a binary assignment $x$ is denoted by $\bar{x}$. Sets of RVs are denoted by bold uppercase letters ($\rvars{X}$), and their joint assignments by bold lowercase ($\jstate{x}$). Let $S$ denote a \textit{sensitive attribute}, such as gender or race, and let $\rvars{X}$ be the \textit{non-sensitive attributes} or features. In this paper, we assume $S$ is a binary variable for simplicity, but our method generalizes easily to multiple multi-valued sensitive attributes. We have a dataset $\ensuremath{\mathcal{D}}$ in which each individual is characterized by variables $S$ and $\rvars{X}$ and labeled with a binary decision/class variable $D$.
One of the most popular and yet simple fairness notions is demographic (or statistical) parity. It requires that the classification is independent of the sensitive attributes; i.e., the rate of positive classification is the same across groups defined by the sensitive attributes. Since we focus on probabilistic classifiers, we consider a generalized version introduced by \citet{pleiss2017fairness}, sometimes also called \emph{strong demographic parity}~\citep{jiang2019wasserstein}: \begin{defn}[Generalized demographic parity] Suppose $f$ is a probabilistic classifier and $p$ is a distribution over variables $\rvars{X}$ and $S$. Then $f$ satisfies demographic parity w.r.t.\ $p$ if: \begin{linenomath} \begin{equation*} \Ex_p[f(\rvars{X},S) \mid S=1] = \Ex_p[f(\rvars{X},S) \mid S=0]. \end{equation*} \end{linenomath} \end{defn} Probabilistic classifiers are often obtained from joint distributions $\Pr(.)$ over $D,\rvars{X},S$ by computing $\Pr(D \vert \rvars{X},S)$. Then we say the distribution satisfies demographic parity if $\Pr(D \vert S\!=\!1) = \Pr(D \vert S\!=\!0)$, i.e., $D$ is independent of $S$. \subsection{Motivation} A common fairness concern when learning decision making systems is that the dataset used is often biased. In particular, observed labels may not be the true target variable but only a proxy for it. For example, re-arrest is generally used as a label for recidivism prediction, but it is not equivalent to recidivism and may be biased. We will later show how the relationship between the observed label and the true target can be modeled probabilistically using a latent variable. Moreover, probabilistic group fairness guarantees hold in the real world only if the model accurately captures the real-world distribution. In other words, with a model that achieves only low likelihood w.r.t.\ the data, it is easy to give false guarantees. For instance, consider a probabilistic classifier $f(X,S)$ over a binary sensitive attribute $S$ and non-sensitive attribute $X$ shown below. \begin{center} \scalebox{0.76}{ \begin{tabular}{lc|cc|cc} \toprule $S,X$ & $f(X,S)$ & $P_{\text{data}}(X\vert S)$ & $\Ex_{P_{\text{data}}}[f\vert S]$ & $Q(X\vert S)$ & $\Ex_{Q}[f\vert S]$ \\ \midrule 1,1 & 0.8 & 0.7 & \multirow{2}{*}{0.65} & 0.5 & \multirow{2}{*}{0.55} \\ 1,0 & 0.3 & 0.3 & & 0.5 & \\ \midrule 0,1 & 0.7 & 0.4 & \multirow{2}{*}{0.52} & 0.5 & \multirow{2}{*}{0.55}\\ 0,0 & 0.4 & 0.6 & & 0.5 & \\ \bottomrule \end{tabular} } \end{center} Suppose that in the data, the probability of $X=1$ given $S=1$ (resp.\ $S=0$) is 0.7 (resp.\ 0.4). Then this classifier does not satisfy demographic parity, as the expected prediction for group $S=1$ is $0.8\cdot0.7+0.3\cdot0.3=0.65$ while for group $S=0$ it is 0.52. On the other hand, suppose you have a distribution $Q$ that incorrectly assumes the feature $X$ to be uniform and independent of $S$. Then you would conclude, incorrectly, that the prediction is indeed fair, with the average prediction for both protected groups being 0.55. Therefore, to provide meaningful fairness guarantees, we need to model the data distribution closely, i.e., with high likelihood.
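This failure mode is easy to check numerically. The short Python sketch below reproduces the arithmetic of the example: under the data distribution the classifier violates parity (0.65 vs.\ 0.52), while under the misspecified uniform model $Q$ it falsely appears fair (0.55 vs.\ 0.55).
\begin{verbatim}
# f(x, s): the example classifier above, keyed by (x, s).
f = {(1, 1): 0.8, (0, 1): 0.3, (1, 0): 0.7, (0, 0): 0.4}

def expected_prediction(p_x1_given_s, s):
    """E[f(X,S) | S=s] for binary X, given P(X=1 | S=s)."""
    p1 = p_x1_given_s[s]
    return f[(1, s)] * p1 + f[(0, s)] * (1 - p1)

p_data = {1: 0.7, 0: 0.4}   # P_data(X=1 | S)
q = {1: 0.5, 0: 0.5}        # misspecified Q(X=1 | S)

# Under the data distribution, parity is violated: 0.65 vs 0.52.
print(expected_prediction(p_data, 1), expected_prediction(p_data, 0))
# Under Q, the same classifier falsely appears fair: 0.55 vs 0.55.
print(expected_prediction(q, 1), expected_prediction(q, 0))
\end{verbatim}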
\subsection{Modeling with a latent fair decision} We now describe our proposed latent variable approach to address the aforementioned issues. We suppose there is a hidden variable that represents the true label without discrimination. This latent variable is denoted by $D_f$ and used for prediction instead of $D$; i.e., decisions for future instances can be made by inferring the conditional probability $\Pr(D_f \vert \jstate{e})$ given some feature observations $\jstate{e}$ for $\rvars{E} \subseteq \rvars{X} \cup \{S\}$. We assume that the latent variable $D_f$ is independent of $S$, thereby satisfying demographic parity. Moreover, the observed label $D$ is modeled as being generated from the fair label by altering its values with different probabilities depending on the sensitive attribute. In other words, the probability of $D$ being positive depends on both $D_f$ and $S$. \begin{figure}[tb] \centering \begin{subfigure}[b]{0.49\columnwidth} \centering \scalebox{0.9}{ \begin{tikzpicture}[bayesnet] \node (S) at (-20bp,0bp) [bnnode] {$S$}; \node (Df) at (20bp,0bp) [bnnode] {$D_f$}; \node (X) at (-20bp,-40bp) [bnnode] {$\rvars{X}$}; \node (D) at (20bp,-40bp) [bnnode] {$D$}; \begin{scope}[on background layer] \draw [bnarrow] (S) -- (X); \draw [bnarrow] (S) -- (D); \draw [bnarrow] (Df) -- (X); \draw [bnarrow] (Df) -- (D); \end{scope} \end{tikzpicture}} \caption{} \label{fig:fair-BN} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} \centering \scalebox{0.9}{ \begin{tikzpicture}[bayesnet] \node (S) at (-20bp,0bp) [bnnode] {$S$}; \node (D) at (20bp,0bp) [bnnode] {$D$}; \node (X) at (-20bp,-40bp) [bnnode] {$\rvars{X}$}; \begin{scope}[on background layer] \draw [bnarrow] (S) -- (X); \draw [bnarrow] (D) -- (X); \end{scope} \end{tikzpicture}} \caption{} \label{fig:alt-BN} \end{subfigure} \caption{Bayesian network structures that represent the proposed fair latent variable approach (left) and the model without a latent variable (right). Abusing notation, the set of features $\rvars{X}$ is represented as a single node, but refers to some local Bayesian network over $\rvars{X}$.} \end{figure} In addition, our model also assumes that the observed label $D$ and the non-sensitive features $\rvars{X}$ are conditionally independent given the latent fair decision and the sensitive attribute, i.e., $D \independent \rvars{X} \vert D_f,S$. This is a crucial assumption for learning the model from data where $D_f$ is hidden. To illustrate why, suppose there is no such independence. Then the induced model allows variables $S, \rvars{X}, D$ to depend on one another freely. Thus, such a model can represent any marginal distribution over these variables, regardless of the parameters for $D_f$. We can quickly see this from the fact that for all $s,\jstate{x},d$, \begin{linenomath}\begin{align*} \Pr(s\jstate{x} d) &= \Pr(s) \Pr(\jstate{x} d \vert s) \\ &= \Pr(s) \big( \Pr(\jstate{x} d \vert s,D_f\!=\!1) \Pr(D_f\!=\!1) \\ &\phantom{=\Pr(s) \big(}+ \Pr(\jstate{x} d \vert s, D_f\!=\!0) \Pr(D_f\!=\!0) \big). \end{align*}\end{linenomath} That is, multiple conditional distributions involving the latent fair decision $D_f$ will result in the same marginal distribution over $S,\rvars{X},D$, and thus the real joint distribution is not identifiable when learning from data where $D_f$ is completely hidden. For instance, the learner will not be incentivized to learn the relationship between $D_f$ and the other features, and may assume the latent decision variable to be completely independent of the observed variables. This is clearly undesirable because we want to use the latent variable to make decisions based on feature observations.
The independence assumptions of our proposed model are summarized as a Bayesian network structure in Figure~\ref{fig:fair-BN}. Note that the set of features $\rvars{X}$ is represented as a single node, as we do not make any independence assumptions among the features. In practice, we learn the statistical relationships between these variables from data. This is in contrast to the latent variable model by \citet{calders2010three}, which makes a naive Bayes assumption among the non-sensitive features; i.e., variables in $\rvars{X}$ are conditionally independent given the sensitive attribute $S$ and $D_f$. As we will later show empirically, such a strong assumption not only affects the prediction quality but also limits the fairness guarantee, as the guarantee will hold only if the naive Bayes assumption is indeed true in the data distribution. The latent variable not only encodes the intuition that observed labels may be biased, but also has advantages in achieving high likelihood with respect to the data. Consider an alternative way to satisfy statistical parity: directly enforcing independence between the observed decision variable $D$ and the sensitive attribute $S$; see Figure~\ref{fig:alt-BN}. We will show that, on the same data, our proposed model can always achieve marginal likelihood at least as high as the model without a latent decision variable. We can enforce the independence of $D$ and $S$ by setting the latent variable $D_f$ to always be equal to $D$, which results in a marginal distribution over $S,\rvars{X},D$ with the same independencies as in Figure~\ref{fig:alt-BN}: \begin{linenomath}\begin{align*} &\Pr(s\jstate{x} d) \\ &= \Pr(\jstate{x} \mid s,D_f\!=\!1) \Pr(d \mid s,D_f\!=\!1)\Pr(s)\Pr(D_f\!=\!1) \\ &\quad + \Pr(\jstate{x} \mid s,D_f\!=\!0)\Pr(d \mid s, D_f\!=\!0)\Pr(s)\Pr(D_f\!=\!0) \\ &= \Pr(\jstate{x} \mid s d) \Pr(s) \Pr(d). \end{align*}\end{linenomath} Thus, any fair distribution without the latent decision can also be represented by our latent variable approach. In addition, our approach will achieve strictly better likelihood if the observed data does not satisfy demographic parity, because our model can also represent distributions where $D$ and $S$ are dependent. Lastly, we emphasize that Bayesian network structures were used in this section only to illustrate the independence assumptions of our model. In practice, other probabilistic models can be used to represent the distribution as long as they satisfy our independence assumptions; we use probabilistic circuits, as discussed in the next section. \section{Learning Fair Probabilistic Circuits} \label{sec:pc} There are several challenges in modeling a fair probability distribution. First, as shown previously, fairness guarantees hold with respect to the modeled distribution, and thus we want to closely model the data distribution. A possible approach is to learn a deep generative model such as a generative adversarial network (GAN)~\citep{goodfellow2014generative}. However, we must then resort to approximate inference, or deal with models that have no explicit likelihood, and the fairness guarantees no longer hold. An alternative is to use models that allow exact inference, such as Bayesian networks. Unfortunately, marginal inference, which is needed to make predictions $\Pr(D_f \vert \jstate{e})$, is \#P-hard for general BNs~\cite{Roth1996}. Tree-like BNs such as naive Bayes allow polytime inference, but they are not expressive enough to accurately capture the real-world distribution.
Hence, the second challenge is to also support tractable exact inference without sacrificing expressiveness. Lastly, the probabilistic modeling method we choose must be able to encode the independencies outlined in the previous section, to satisfy demographic parity and to learn a meaningful relationship between the latent fair decision and other variables. In the following, we give some background on \emph{probabilistic circuits (PCs)} and show how they satisfy each of the above criteria. Then we will describe our proposed algorithm to learn fair probabilistic circuits from data. \subsection{Probabilistic Circuits} \paragraph{Representation} \emph{Probabilistic circuits (PCs)}~\cite{PCTuto20} refer to a family of tractable probabilistic models including arithmetic circuits~\cite{darwicheKR02,darwicheJACM-POLY}, sum-product networks~\cite{poon2011sum}, and cutset networks~\cite{rahman2014cutset}. A probabilistic circuit $\ensuremath{\mathcal{C}} = (\ensuremath{\mathcal{G}},\mathcal{\boldsymbol{\theta}})$ over RVs $\rvars{X}$ is characterized by its structure $\ensuremath{\mathcal{G}}$ and parameters $\mathcal{\boldsymbol{\theta}}$. The circuit structure $\ensuremath{\mathcal{G}}$ is a directed acyclic graph (DAG) such that each inner node is either a \textit{sum} node or a \textit{product} node, and each \textit{leaf} (input) node is associated with a univariate input distribution. We denote the distribution associated with leaf $n$ by $f_n(.)$. This may be any probability mass function, a special case being an indicator function such as $[X=1]$. Parameters $\mathcal{\boldsymbol{\theta}}$ are each associated with an input edge to a sum node. Note that a subcircuit rooted at an inner node of a PC is itself a valid PC. Figure~\ref{fig:fair-pc} depicts an example probabilistic circuit.\footnote{The features $\rvars{X}$ and $D$ are shown as leaf nodes for graphical conciseness, but refer to sub-circuits over the respective variables.} \begin{figure}[tb] \centering \scalebox{0.8}{ \begin{tikzpicture} \sumnode[line width=1.0pt]{s1}; \prodnode[line width=1.0pt, below=17pt of s1, xshift=-2*17pt]{p2}; \prodnode[line width=1.0pt, below=17pt of s1, xshift=2*17pt]{p3}; \prodnode[line width=1.0pt, left=2*17pt of p2]{p4}; \prodnode[line width=1.0pt, right=2*17pt of p3]{p5}; \bernode[line width=1.0pt, below=17pt of p4, xshift=13pt]{v1}{$S\!=\!1$}; \bernode[line width=1.0pt, below=17pt of p2, xshift=-13pt]{v2}{$D_f\!=\!1$}; \bernode[line width=1.0pt, below=17pt of p3, xshift=13pt]{v3}{$D_f\!=\!0$}; \bernode[line width=1.0pt, below=17pt of p5, xshift=-13pt]{v4}{$S\!=\!0$}; \prodnode[line width=1.0pt, below=17pt+13pt of p4, xshift=-13pt]{p6}; \prodnode[line width=1.0pt, below=17pt+13pt of p2, xshift=13pt]{p7}; \prodnode[line width=1.0pt, below=17pt+13pt of p3, xshift=-13pt]{p8}; \prodnode[line width=1.0pt, below=17pt+13pt of p5, xshift=13pt]{p9}; \contnode[line width=1.0pt, below=17pt of p6, xshift=-13pt]{v5}{$D$}; \contnode[line width=1.0pt, below=17pt of p6, xshift=13pt]{v6}{$\rvars{X}$}; \contnode[line width=1.0pt, below=17pt+13pt of v2]{v7}{$D$}; \contnode[line width=1.0pt, right=13pt of v7]{v8}{$\rvars{X}$}; \contnode[line width=1.0pt, below=17pt+13pt of v3]{v9}{$\rvars{X}$}; \contnode[line width=1.0pt, left=13pt of v9]{v10}{$D$}; \contnode[line width=1.0pt, below=17pt of p9, xshift=-13pt]{v11}{$D$}; \contnode[line width=1.0pt, below=17pt of p9, xshift=13pt]{v12}{$\rvars{X}$}; \weigedge[line width=1.0pt,above,pos=0.45,color=red] {s1} {p4} {\footnotesize $\color{red}\theta_1$}; \weigedge[line 
width=1.0pt,left,pos=0.37,color=ForestGreen] {s1} {p2} {\footnotesize $\color{ForestGreen}\theta_2$}; \weigedge[line width=1.0pt,right,pos=0.37] {s1} {p3} {\footnotesize $\theta_3$}; \weigedge[line width=1.0pt,above,pos=0.45] {s1} {p5} {\footnotesize $\theta_4$}; \edge[line width=1.0pt,left,>=stealth,draw=red] {p4} {v1}; \edge[line width=1.0pt,right,>=stealth,draw=red] {p4} {v2}; \edge[line width=1.0pt,left,>=stealth,color=ForestGreen] {p2} {v1}; \edge[line width=1.0pt,right,>=stealth,color=ForestGreen] {p2} {v3}; \edge[line width=1.0pt,left,>=stealth] {p3} {v2}; \edge[line width=1.0pt,right,>=stealth] {p3} {v4}; \edge[line width=1.0pt,left,>=stealth] {p5} {v3}; \edge[line width=1.0pt,right,>=stealth] {p5} {v4}; \edge[line width=1.0pt,left,>=stealth,draw=red] {p4} {p6}; \edge[line width=1.0pt,left,>=stealth,color=ForestGreen] {p2} {p7}; \edge[line width=1.0pt,left,>=stealth] {p3} {p8}; \edge[line width=1.0pt,left,>=stealth] {p5} {p9}; \edge[line width=1.0pt,left,>=stealth,draw=red] {p6} {v5}; \edge[line width=1.0pt,left,>=stealth,draw=red] {p6} {v6}; \edge[line width=1.0pt,left,>=stealth,color=ForestGreen] {p7} {v7}; \edge[line width=1.0pt,left,>=stealth,color=ForestGreen] {p7} {v8}; \edge[line width=1.0pt,left,>=stealth] {p8} {v9}; \edge[line width=1.0pt,left,>=stealth] {p8} {v10}; \edge[line width=1.0pt,left,>=stealth] {p9} {v11}; \edge[line width=1.0pt,left,>=stealth] {p9} {v12}; \end{tikzpicture} } \caption{ A probabilistic circuit over variables $S,\rvars{X},D,D_f$} \label{fig:fair-pc} \end{figure} Let $\ensuremath{\mathsf{ch}}(n)$ be the set of children nodes of an inner node $n$. Then a probabilistic circuit $\ensuremath{\mathcal{C}}$ over RVs $\rvars{X}$ defines a joint distribution $\Pr_{\ensuremath{\mathcal{C}}}(\rvars{X})$ in a recursive way as follows: \begin{linenomath}\begin{equation*} {\Pr}_n(\jstate{x})= \begin{cases} f_n(\jstate{x}) &\text{if $n$ is a leaf node} \\ \prod_{c\in\ensuremath{\mathsf{ch}}(n)} \Pr_c(\jstate{x}) &\text{if $n$ is a product node} \\ \sum_{c\in\ensuremath{\mathsf{ch}}(n)} \theta_{n,c} \Pr_c(\jstate{x}) &\text{if $n$ is a sum node} \end{cases} \label{eq:EVI} \end{equation*}\end{linenomath} Intuitively, a product node $n$ defines a factorized distribution, and a sum node $n$ defines a mixture model parameterized by weights $\{\theta_{n,c}\}_{c\in\ensuremath{\mathsf{ch}}(n)}$. $\Pr_n$ is also called the \textit{output} of $n$. \paragraph{Properties and inference} A strength of probabilistic circuits is that (1) they are expressive, achieving high likelihoods on density estimation tasks~\cite{rahman2016merging,LiangUAI17,peharz2020random}, and (2) they support tractable probabilistic inference, enabled by certain structural properties. In particular, PCs support efficient marginal inference if they are smooth and decomposable. A circuit is said to be \emph{smooth} if for every sum node all of its children depend on the same set of variables; it is \emph{decomposable} if for every product node its children depend on disjoint sets of variables~\citep{darwiche2002knowledge}. Given a smooth and decomposable probabilistic circuit, computing the marginal probability for any partial evidence is reduced to simply evaluating the circuit bottom-up. This also implies tractable computation of conditional probabilities, which are ratios of marginals. Thus, we can make predictions in time linear in the size of the circuit. 
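To illustrate the recursive semantics, and why smoothness and decomposability make marginals easy, the following Python sketch evaluates a toy smooth and decomposable circuit bottom-up; an unobserved indicator leaf simply outputs 1, which implements marginalization. The structure and parameters are illustrative toy values (loosely mirroring the top layer of Figure~\ref{fig:fair-pc}), not the learned circuits used in our experiments.
\begin{verbatim}
from dataclasses import dataclass

@dataclass
class Leaf:
    var: str
    value: int
    def eval(self, x: dict) -> float:
        # Indicator leaf [var = value]; an unobserved variable is
        # marginalized out, so the indicator sums to 1.
        if self.var not in x:
            return 1.0
        return 1.0 if x[self.var] == self.value else 0.0

@dataclass
class Product:
    children: list
    def eval(self, x: dict) -> float:
        out = 1.0
        for c in self.children:  # decomposability: disjoint variable scopes
            out *= c.eval(x)
        return out

@dataclass
class Sum:
    children: list               # list of (weight, child) pairs
    def eval(self, x: dict) -> float:
        # Smoothness: all children share the same variable scope.
        return sum(w * c.eval(x) for w, c in self.children)

# Toy circuit over S and Df with P(S=1)=0.6 and P(Df=1)=0.3 (independent).
pc = Sum([(0.18, Product([Leaf("S", 1), Leaf("Df", 1)])),
          (0.42, Product([Leaf("S", 1), Leaf("Df", 0)])),
          (0.12, Product([Leaf("S", 0), Leaf("Df", 1)])),
          (0.28, Product([Leaf("S", 0), Leaf("Df", 0)]))])

joint = pc.eval({"S": 1, "Df": 1})  # 0.18
marg = pc.eval({"S": 1})            # 0.18 + 0.42 = 0.60, one bottom-up pass
cond = joint / marg                 # P(Df=1 | S=1) = 0.3
\end{verbatim}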
Another useful structural property is \emph{determinism}; a circuit is deterministic if for every complete input $\jstate{x}$, at most one child of every sum node has a non-zero output. In addition to enabling tractable inference for more queries~\citep{ChoiDarwiche17}, it leads to closed-form parameter estimation of probabilistic circuits given complete data. We also exploit this property for learning PCs with latent variables, which we will later describe in detail. \paragraph{Encoding independence assumptions} Next, we demonstrate how we encode the independence assumptions of a fair distribution as in Figure~\ref{fig:fair-BN} in a probabilistic circuit. Recall the example PC in Figure~\ref{fig:fair-pc}: regardless of parameterization, this circuit structure always encodes a distribution where $D$ is independent of $\rvars{X}$ given $S$ and $D_f$. To prove this, we first observe that the four product nodes in the second layer correspond to the four possible assignments of $S$ and $D_f$. For instance, the left-most product node returns a non-zero output only if the input sets both $S=1$ and $D_f=1$. Effectively, the sub-circuits rooted at these nodes represent conditional distributions $\Pr(D,\rvars{X} \vert s, d_f)$ for assignments $s,d_f$. Because the distributions for $D$ and $\rvars{X}$ factorize, we have $\Pr(D,\rvars{X} \vert s,d_f) = \Pr(D \vert s,d_f) \cdot \Pr(\rvars{X} \vert s,d_f)$, thereby satisfying the conditional independence $D \independent \rvars{X} \vert D_f,S$. We also need to encode the independence between $D_f$ and $S$. In the example circuit, each edge parameter $\theta_i$ corresponds to $\Pr(s,d_f)$ for a joint assignment to $S,D_f$. With no restriction on these parameters, the circuit structure does not necessarily imply $D_f\!\independent\!S$. Thus, we introduce auxiliary parameters $\phi_s$ and $\phi_{d_f}$ representing $\Pr(S\!=\!1)$ and $\Pr(D_f\!=\!1)$, respectively, and enforce that: \begin{linenomath}\begin{gather*} \phi_s = \theta_1 + \theta_2, \quad \phi_{d_f} = \theta_1 + \theta_3, \\ \theta_1 = \phi_s \cdot \phi_{d_f},\; \theta_2 = \phi_s \cdot (1-\phi_{d_f}), \\ \theta_3 = (1-\phi_s) \cdot \phi_{d_f},\; \theta_4 = (1-\phi_s) \cdot (1-\phi_{d_f}). \end{gather*}\end{linenomath} Hence, when learning these parameters, we limit the degrees of freedom such that the four edge parameters are given by the two free variables $\phi_s$ and $\phi_{d_f}$ instead of the four $\theta_i$ variables. Next, we discuss how to learn a fair probabilistic circuit with a latent variable from data. This consists of two parts: learning the circuit structure and estimating the parameters of a given structure. We first study parameter learning in the next section, then structure learning in Section~\ref{sec:str-learning}. \subsection{Parameter Learning} \label{sec:param-learn} Given a complete data set, maximum-likelihood parameters of a smooth, decomposable, and deterministic PC can be computed in closed form~\citep{KisaVCD14}. For an edge between a sum node $n$ and its child $c$, the associated maximum-likelihood parameter for a complete dataset $\ensuremath{\mathcal{D}}$ is given by: \begin{equation} \theta_{n,c} = F_{\ensuremath{\mathcal{D}}}(n,c) / \sum_{c'\in\ensuremath{\mathsf{ch}}(n)} F_{\ensuremath{\mathcal{D}}}(n,c') \label{eq:param} \end{equation} Here, $F_{\ensuremath{\mathcal{D}}}(n,c)$ is called the \emph{circuit flow} of edge $(n,c)$ given $\ensuremath{\mathcal{D}}$, and it counts the number of data samples in $\ensuremath{\mathcal{D}}$ that ``activate'' this edge.
For example, in Figure \ref{fig:fair-pc}, the edges activated by sample $\{D_f\!=\!1,S\!=\!1,d,\jstate{x}\}$, for any assignments $d,\jstate{x}$, are colored red.\footnote{See Appendix~\ref{sec:app-EF} for a formal definition of circuit flows and a proof of Equation~\ref{eq:param}.} However, our proposed fair distribution includes a latent variable, and thus must be learned from incomplete data. One of the most common methods to learn the parameters of a probabilistic model from incomplete data is the Expectation Maximization (EM) algorithm~\citep{PGM,darwiche2009}. EM iteratively completes the data by computing the probability of unobserved values (E-step) and estimates the maximum-likelihood parameters from the expected dataset (M-step). We now propose an EM parameter learning algorithm for PCs that does not explicitly complete the data, but rather utilizes circuit flows. In particular, we introduce the notion of \emph{expected flows}, defined as follows for a given circuit $\ensuremath{\mathcal{C}}=(\ensuremath{\mathcal{G}},\mathcal{\boldsymbol{\theta}})$ over RVs $\rvars{Z}$ and an incomplete dataset $\ensuremath{\mathcal{D}}$: \begin{linenomath}\begin{align*} \EF_{\ensuremath{\mathcal{D}},\mathcal{\boldsymbol{\theta}}}(n,c) :=& \mathbb{E}_{\pr_{\ensuremath{\mathcal{C}}}}\left[F_{\ensuremath{\mathcal{D}}}(n,c)\right] \\ =& \sum_{\ensuremath{\mathcal{D}}_i \in \ensuremath{\mathcal{D}}} \sum_{\jstate{z} \models \ensuremath{\mathcal{D}}_i} \pr_{\ensuremath{\mathcal{C}}}(\jstate{z} \vert \ensuremath{\mathcal{D}}_i) \cdot F_{\jstate{z}}(n,c). \end{align*}\end{linenomath} Here, $\ensuremath{\mathcal{D}}_i$ denotes the $i$-th sample in the dataset, and $\jstate{z} \models \ensuremath{\mathcal{D}}_i$ ranges over the possible completions of sample $\ensuremath{\mathcal{D}}_i$. For example, in Figure~\ref{fig:fair-pc}, the expected flows of the edges highlighted in red and green, given a sample $\{S\!=\!1,d,\jstate{x}\}$, are $\Pr_{\ensuremath{\mathcal{C}}}(D_f\!=\!1 \mid S\!=\!1,d,\jstate{x})$ and $\Pr_{\ensuremath{\mathcal{C}}}(D_f\!=\!0 \mid S\!=\!1,d,\jstate{x})$, respectively. Similar to circuit flows, the expected flows of all edges can be computed with a single bottom-up and top-down evaluation of the circuit. Then, using expected flows, we can perform both the E- and M-step in the following closed form. \begin{prop}\label{prop:em} Given a smooth, decomposable, and deterministic circuit with parameters $\mathcal{\boldsymbol{\theta}}$ and an incomplete dataset $\ensuremath{\mathcal{D}}$, the parameters for the next EM iteration are given by: \begin{linenomath}\begin{equation*} \theta_{n,c}^{\text{(new)}} = \EF_{\ensuremath{\mathcal{D}},\mathcal{\boldsymbol{\theta}}}(n,c) / \sum_{c'\in\ensuremath{\mathsf{ch}}(n)} \EF_{\ensuremath{\mathcal{D}},\mathcal{\boldsymbol{\theta}}}(n,c'). \end{equation*}\end{linenomath} \end{prop} Note that this is very similar to the ML estimate from complete data in Equation~\ref{eq:param}, except that expected flows are used instead of circuit flows. Furthermore, expected flows can be computed even if each data sample has different variables missing; thus, the EM method naturally handles missing values for other features as well. We refer to Appendix~\ref{sec:app-EF} for details on computing the expected flows and for the proof of the above proposition.
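To illustrate Proposition~\ref{prop:em}, the following is a minimal sketch of one EM iteration for the case where only the latent $D_f$ is unobserved. The routines \texttt{posterior} and \texttt{flow\_edges} stand in for the bottom-up/top-down circuit passes mentioned above; they, and all other names, are our own rather than an existing API.
\begin{verbatim}
from collections import defaultdict

def em_iteration(circuit, data):
    # E-step: accumulate expected flows by weighting each completion
    # of D_f with its posterior under the current parameters.
    expected_flow = defaultdict(float)
    for sample in data:                  # dict of observed values, D_f missing
        for df in (0, 1):
            weight = circuit.posterior(sample, df)   # Pr(D_f=df | sample)
            for edge in circuit.flow_edges({**sample, 'Df': df}):
                expected_flow[edge] += weight
    # M-step: Proposition 1, i.e. the flow ratio with expected flows.
    theta = {}
    for n in circuit.sum_nodes:
        total = sum(expected_flow[(n, c)] for c in n.children)
        for c in n.children:
            theta[(n, c)] = (expected_flow[(n, c)] / total
                             if total > 0 else 1.0 / len(n.children))
    return theta
\end{verbatim}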
\paragraph{Initial parameters using prior knowledge} Typically, the EM algorithm is run starting from randomly initialized parameters. While the algorithm is guaranteed to improve the likelihood at each iteration until convergence, it still suffers from multiple local maxima and from identifiability issues, especially when a latent variable is involved~\citep{PGM}. Namely, we may converge to different learned models with similar likelihoods but different parameters for the latent fair variable, resulting in different behaviors in the prediction task. For example, for a given fair distribution, we can flip the value of $D_f$ and adjust the parameters accordingly such that the marginal distribution over $S,\rvars{X},D$, as well as the likelihood on the dataset, is unchanged; the resulting predictions, however, are completely opposite. Therefore, instead of random initialization, we encode prior knowledge in the initial parameters that determine $\Pr(D \vert S, D_f)$. In particular, $D_f$ should be equal to $D$ if the observed labels are already fair. Furthermore, for individual predictions, we would want $D_f$ to stay as close to $D$ as possible while ensuring fairness. Thus, we start the EM algorithm from the conditional probability $\Pr(d \vert s,d_f) = [d=d_f]$. \subsection{Structure Learning} \label{sec:str-learning} Lastly, we describe how a fair probabilistic circuit structure is learned from data. As described previously, the top layers of the circuit are fixed in order to encode the independence assumptions of our latent variable approach. On the other hand, the sub-circuits over features $\rvars{X}$ can be learned to best fit the data. We adopt the \textsc{Strudel}\ algorithm to learn these structures~\citep{DangPGM20}.\footnote{PCs learned this way also satisfy properties such as structured decomposability that are not necessary for our use case.} Starting from a Chow-Liu tree initial distribution~\citep{ChowLiu}, \textsc{Strudel}\ performs a heuristic-based greedy search over candidate structures. At each iteration, it first selects the edge with the highest circuit flow and the variable with the strongest dependencies on other variables, estimated by the sum of pairwise mutual information. Then it applies the \textit{split} operation -- a simple structural transformation that ``splits'' the selected edge by introducing new sub-circuits conditioned on the selected variable. Intuitively, this operation aims to model the data more closely by capturing the dependence among variables (variable heuristic) appearing in many data samples (edge heuristic). After learning the structure, we update the parameters of the learned circuit using EM as described previously. \section{Experiments} \label{sec:exp} We now empirically evaluate our proposed model \textsc{FairPC}\ on real-world benchmark datasets as well as synthetic data. \paragraph{Baselines} We first compare \textsc{FairPC}\ to three other probabilistic methods: the fair naive Bayes models (\textsc{2NB}\ and \textsc{LatNB}) of \citet{calders2010three} and PCs without a latent variable (\textsc{NLatPC}) as described in Sec~\ref{sec:model}.
We also compare against existing methods that learn discriminative classifiers satisfying group fairness: (1) \textsc{FairLR}~\citep{zafar2017fairness}, which learns a classifier subject to co-variance constraints; (2) \textsc{Reduction}~\citep{agarwal2018reductions}, which reduces the fair learning problem to cost-sensitive classification problems and learns a randomized classifier subject to fairness constraints; and (3) \textsc{Reweight}~\citep{jiang2020identifying}, which corrects bias by re-weighting the data points. All three methods learn logistic regression classifiers, either with constraints or using modified objective functions. \paragraph{Evaluation criteria} For predictive performance, we use accuracy and F1 score. Note that models with latent variables use the latent fair decision $D_f$ to make predictions, while the other models directly use $D$. Moreover, in the real-world datasets, we do not have access to the fair labels and instead evaluate using the observed labels, which may be ``noisy'' and biased. We emphasize that accuracy w.r.t.\ unfair labels is not the goal of our method, as we want to predict the true target, not its biased proxy. Rather, it measures how similar the latent variable is to the observed labels, thereby justifying its use as a fair decision. To address this, we also evaluate on synthetic data where fair labels can be generated. For fairness performance, we define the discrimination score as the difference in average prediction probability between the majority and minority groups, i.e., $\Pr(D_f\!=\!1\vert S\!=\!0)-\Pr(D_f\!=\!1\vert S\!=\!1)$ estimated on the test set. \begin{figure}[t] \centering \begin{tabular}{ccc} \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/compas_Log-likelihood.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/compas_F1.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/compas_Discrimination.pdf}\\ \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/adult_Log-likelihood.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/adult_F1.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/adult_Discrimination.pdf}\\ \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/german_Log-likelihood.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/german_F1.pdf}& \includegraphics[width=0.28\columnwidth,trim=0cm 1.5cm 3.5cm 0.5cm]{figs/histograms/german_Discrimination.pdf}\\ \end{tabular} \caption{Comparison of fair probability distributions. \textbf{Columns:} log-likelihood, F1-score, discrimination score (higher is better for the first two; lower is better for the last). \textbf{Rows:} COMPAS, Adult, German datasets. The four bars in each graph, from left to right, are: 1) \textsc{2NB}, 2) \textsc{LatNB}, 3) \textsc{NLatPC}, 4) \textsc{FairPC}. \label{fig:exp-realworld-4base}} \end{figure} \subsection{Real-World Data} \label{sec:exp-realworld} \paragraph{Data} We use three datasets: COMPAS~\cite{compas}, Adult, and German~\cite{Dua2019}, which are commonly studied benchmarks for fair ML. They contain both numerical and categorical features and are used for predicting recidivism, income level, and credit risk, respectively.
We wish to make predictions without discrimination with respect to a protected attribute: ``sex'' for Adult and German, and ``ethnicity'' for COMPAS. As pre-processing, we discretize numerical features (e.g.\ age), remove unique or duplicate features (e.g.\ names of individuals), and remove low-frequency counts. \paragraph{Probabilistic methods} We first compare against probabilistic methods to illustrate the effects of using latent variables and of learning more expressive distributions. Figure~\ref{fig:exp-realworld-4base} summarizes the results. The bars, from left to right, correspond to \textsc{2NB}, \textsc{LatNB}, \textsc{NLatPC}, and \textsc{FairPC}; the first and last two bars in each graph thus correspond to NB and PC models, respectively. Blue bars denote the non-latent models, and yellow/orange bars the latent-variable approaches. In terms of log-likelihoods, both PC-based methods outperform the NB models, which aligns with our motivation for relaxing the naive Bayes assumption: to better fit the data distribution. Furthermore, models with latent variables outperform their corresponding non-latent models, i.e., \textsc{LatNB}\ outperforms \textsc{2NB}\ and \textsc{FairPC}\ outperforms \textsc{NLatPC}. This validates our argument made in Section~\ref{sec:model} that the latent variable approach can achieve higher likelihood than enforcing fairness directly on the observed label. Next, we compare the methods using the F1-score, as there is class imbalance in these datasets. Although it is measured with respect to possibly biased labels, \textsc{FairPC}\ achieves competitive performance, demonstrating that the latent fair decision variable still exhibits high similarity with the observed labels. Lastly, \textsc{FairPC}\ achieves the lowest discrimination scores on the COMPAS and Adult datasets by a significant margin. Moreover, as expected, PCs achieve lower discrimination scores than their counterpart NB models, as they fit the data distribution better. \begin{figure} \centering \begin{tabular}{ll} \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/compas_test_Acc.pdf}& \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/compas_test_F1.pdf}\\ \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/adult_test_Acc.pdf}& \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/adult_test_F1.pdf}\\ \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/german_test_Acc.pdf}& \includegraphics[width=0.49\columnwidth]{figs/rw-benchmark/german_test_F1.pdf} \end{tabular} \caption{Predictive performance (y-axis) vs.\ discrimination score (x-axis) for \textsc{FairPC}\ and fair classification methods (\textsc{FairLR}, \textsc{Reduction}, \textsc{Reweight}), along with two trivial baselines (\textsc{Rand}\ and \textsc{LR}). \textbf{Columns:} accuracy, F1-score. \textbf{Rows:} COMPAS, Adult, German datasets. \label{fig:exp-rw-benchmark}} \end{figure} \paragraph{Discriminative classifiers} Next we compare \textsc{FairPC}\ to existing fair classification methods. Figure~\ref{fig:exp-rw-benchmark} shows the trade-off between predictive performance and fairness. We add two other baselines to the plot: \textsc{Rand}, which makes random predictions, and \textsc{LR}, which is an unconstrained logistic regression classifier. They represent the two ends of the fairness-accuracy tradeoff: \textsc{Rand}\ has no predictive power but low discrimination, while \textsc{LR}\ has high accuracy but is unfair. Informally, the further a method lies above the line between these baselines, the better it optimizes this tradeoff.
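For reference, the discrimination score on the $x$-axes of these plots can be estimated from test-set predictions as in the following sketch; the array names are ours.
\begin{verbatim}
import numpy as np

def discrimination_score(pred_prob, sensitive):
    """Difference in average predicted Pr(D_f=1) between groups.

    pred_prob: model probabilities Pr(D_f=1 | x) on the test set.
    sensitive: binary array, 1 for the minority group S=1.
    A value near zero indicates demographic parity.
    """
    pred_prob = np.asarray(pred_prob)
    sensitive = np.asarray(sensitive)
    return pred_prob[sensitive == 0].mean() - pred_prob[sensitive == 1].mean()

# Example: positive scores indicate bias in favor of the majority group S=0.
print(discrimination_score([0.9, 0.8, 0.2, 0.1], [0, 0, 1, 1]))  # 0.7
\end{verbatim}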
On the COMPAS and Adult datasets, our approach achieves a good balance between predictive performance and fairness guarantees. In fact, it achieves the best or close to the best accuracy and F1-score, again showing that the latent decision variable is highly similar to the observed labels even though the explicit objective is not to predict the unfair labels. However, on the German dataset, while \textsc{FairLR}\ and \textsc{Reweight}\ achieve the best performance on average, the estimates for all models, including the trivial baselines, are too noisy to draw a statistically significant conclusion. This may be explained by the fact that the dataset is relatively small, with only 1000 samples. \subsection{Synthetic Data} \label{sec:exp-synthetic} As discussed previously, ideally we want to evaluate against the true target labels, but they are generally unknown in real-world data. Therefore, we also evaluate on synthetic data with fair ground-truth labels in order to test whether our model indeed captures the hidden process of bias and makes accurate predictions. \paragraph{Generating Data} We generate data by constructing a fair PC $\ensuremath{\mathcal{C}}_{true}$ to represent the ``true distribution'' and sampling from it. The process that generates biased labels $d$ is represented by the following (conditional) probability table: \begin{center} \scalebox{0.76}{ \footnotesize \begin{tabular}{c|cc||c|cccc} \toprule $\cdot$ & $D_f$ & $S$ & $d_f, s$ & 1,1 & 1,0 & 0,1 &0,0\\ \midrule \normalsize $\Pr(\cdot\!=\!1)$ & 0.5 & 0.3 & $\Pr(D\!=\!1\mid D_f\!=\!d_f,S\!=\!s)$& 0.8 & 0.9 & 0.1 & 0.4 \\ \bottomrule \end{tabular} } \end{center} Here, $S=1$ is the minority group, and the unfair label $D$ is in favor of the majority group: $D$ is more likely to be positive for the majority group $S\!=\!0$ than for $S\!=\!1$, for both values of the fair label $D_f$ but at different rates. The sub-circuits of $\ensuremath{\mathcal{C}}_{true}$ over features $\rvars{X}$ are randomly generated tree distributions, and their parameters are randomly initialized with Laplace smoothing. We generated different synthetic datasets with the number of non-sensitive features ranging from 10 to 30, using 10-fold CV for each. \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=0.45\columnwidth]{figs/synthetic-basemodel.pdf}& \includegraphics[width=0.45\columnwidth]{figs/synthetic-benchmark.pdf}\\ \end{tabular} \captionof{figure}{\label{fig:exp-sync}Accuracy (y-axis) vs.\ discrimination score (x-axis) on synthetic datasets. We compare \textsc{FairPC}\ with \textsc{2NB}, \textsc{LatNB}, \textsc{NLatPC}\ (left) and with \textsc{Reduction}, \textsc{Reweight}, \textsc{FairLR}\ (right). Each dot is a single run on a generated dataset using the method indicated by its color.} \end{figure} \paragraph{Results} We first test \textsc{FairPC}, \textsc{2NB}, \textsc{LatNB}\ and \textsc{NLatPC}\ on the generated datasets. Figure~\ref{fig:exp-sync} (left) illustrates the accuracy and discrimination scores on separate test sets with fair decision labels. In terms of accuracy, PCs outperform NBs, and latent variable approaches outperform non-latent ones, which shows that adopting density estimation to fit the data and introducing a latent variable indeed help improve performance.
When comparing the average discrimination score for each method, \textsc{2NB}\ and \textsc{NLatPC}\ always have negative scores, showing that the non-latent methods are more biased towards the majority group, while \textsc{LatNB}\ and \textsc{FairPC}\ are more evenly distributed around zero on the x-axis, demonstrating that a latent fair decision variable helps to correct this bias. While both latent variable approaches achieve reasonably low discrimination on average, \textsc{FairPC}\ is more stable and has an even lower average discrimination score than \textsc{LatNB}. Moreover, \textsc{FairPC}\ also outperforms the other probabilistic methods in terms of likelihood; see Appendix~\ref{sec:app-exp}. We also compare \textsc{FairPC}\ to \textsc{FairLR}, \textsc{Reduction}, and \textsc{Reweight}, with the results visualized in Figure~\ref{fig:exp-sync} (right). Our method achieves a much higher accuracy w.r.t.\ the generated fair labels; for instance, the average accuracy of \textsc{FairPC}\ is around 0.17 higher than that of \textsc{FairLR}. At the same time, it remains comparable in terms of discrimination score, illustrating the benefits of explicitly modeling the latent fair decision. \subsection{Additional experiments} Appendix~\ref{sec:app-exp} includes learning curves, statistical tests, and detailed performance numbers for our real-world data experiments, as well as the following additional experiments. We empirically validated that initializing parameters using prior knowledge as described in Section~\ref{sec:param-learn} indeed converges closer to the true distribution $\Pr(D \vert S,D_f)$ than random initialization. In addition, as mentioned in Section~\ref{sec:param-learn}, our method can be applied even to datasets with missing values, with no change to the algorithm. We demonstrate this empirically and show that our approach still achieves comparably good performance for density estimation. \section{Conclusion} In this paper, we proposed a latent variable approach to learning fair distributions that satisfy demographic parity, and developed an algorithm to learn fair probabilistic circuits from incomplete data. Experimental evaluation on simulated data showed that our method consistently achieves the highest log-likelihoods and a low discrimination score, and that it accurately predicts the true fair decisions. Even on real-world data where fair labels are not available, our predictions remain close to the observed (possibly unfair) labels while reducing discrimination. \subsubsection*{Acknowledgments} This work is partially supported by NSF grants \#IIS-1943641, \#IIS-1633857, \#CCF-1837129, DARPA grant \#N66001-17-2-4032, a Sloan Fellowship, Intel, and Facebook.
\section{Introduction} Ultracold atoms give a unique perspective on strongly correlated matter \cite{Bloch2005,lewenstein2012ultracold} as they allow one, for example, to study quantum states with single-atom resolution or to explore higher-order correlations and entanglement \cite{Kaufman2016, Schweigler2017}. Moreover, ultracold atoms have several features which make them particularly well suited for the study of strongly correlated matter: their isolation from the environment is excellent, and the microscopic system parameters are highly tunable. This tunability allows for preparing a variety of strongly correlated states by adiabatically ramping the system parameters, starting from a well-defined state such as a trapped Bose-Einstein condensate. Strongly correlated states of particular interest are fractional quantum Hall states, especially because of their prospects for topological quantum computation \cite{Kitaev2003}. Although fractional quantum Hall physics was experimentally discovered four decades ago \cite{tsui82}, and has readily been explained in terms of Laughlin's trial wave function \cite{laughlin83}, the fractional quantum Hall effect continues to be a challenging subject of research: one of the most striking predictions of fractional quantum Hall physics is the existence of quasiparticles with fractional statistics \cite{leinaas77,wilczek82}, so-called anyons. The existence of these quasiparticles has yet to be conclusively confirmed, despite strong efforts and much experimental progress made towards anyon detection \cite{saminadayar97,camino05,camino07,bartolomei20}. A new direction for approaching these challenges is quantum simulators, which prepare fractional quantum Hall states in highly controlled experimental settings. Many advances towards such synthetic fractional quantum Hall systems have been made in both atomic \cite{miyake13,aidelsburger13,aidelsburger15,flaeschner16,asteria19,tarnowski19} and photonic \cite{hafezi13,rechtsman13,mittal14,mittal16,bandres16,baboux17} quantum simulators. These advances include the generation of artificial magnetic fields, which are responsible for the flat band structure, and the detection of their topological properties, such as chiral edge states \cite{hafezi13,rechtsman13}, topological quantum numbers \cite{aidelsburger15,mittal16,baboux17,asteria19,tarnowski19}, and topological transport \cite{mittal14,bandres16}. Through light-matter coupling, it has also been possible to create interactions between two photons in a synthetic gauge field, yielding a Laughlin-type quantum state \cite{clark20}. Although interactions arise more naturally in atomic systems, the evidence for atomic Laughlin states has remained limited until now \cite{gemelke10}. Various difficulties in reaching synthetic Laughlin states are known: in the strongly correlated regime, the centrifugal forces generating the artificial gauge field almost compensate the trap \cite{dagnino09,julia-diaz11}, which reduces the stability of the system. Adding steeper potentials to the harmonic trap, such as a confining quartic potential or a weak hard-wall confinement, has been found to be very harmful to bosonic Laughlin states \cite{roussou19, Macaluso2017}. The generation of synthetic gauge fields may heat the system, especially if periodic driving is involved \cite{dalessio14}.
In this context, it is particularly important to note that various intermediate phases separate the uncorrelated system from the strongly correlated liquid phase \cite{viefers00,viefers08,dagnino09,julia-diaz11}. Thus, the phase diagram exhibits different regions of small energy gaps above the ground state. Nevertheless, an adiabatic path to the Laughlin state has been proposed for a system of bosonic cold atoms in a harmonic elliptic trap with tunable rotation frequency and tunable ellipticity \cite{popp04}. Similar considerations for the adiabatic preparation also apply to fermionic systems \cite{Palm2020}. An alternative route to synthetic Laughlin states is based on ``growing'' the state via variable particle numbers \cite{grusdt14}. In the present paper, we revisit the adiabatic preparation scheme for bosonic Laughlin states in rotating traps \cite{popp04}. The idea is to increase the angular momentum $L$ of $N$ atoms in a rotating trap from the non-rotating state $L=0$ to the angular momentum of the 1/2-Laughlin state, $L=N(N-1)$, by a ramp of the rotation frequency of the trap, while simultaneously breaking rotational symmetry by an anisotropic deformation of the trap. In Ref. \cite{popp04}, a preparation time of $6450$ trapping periods was reported, in which the Laughlin state of four atoms was reached with a fidelity of 0.97. This implies that even for a trapping frequency as large as $(2\pi)\times30$~kHz, the preparation time exceeds 200~ms. However, we show that such an adiabatic preparation can be dramatically improved. Specifically, our numerics reach a four-atom Laughlin state with a fidelity of 0.99 within $605$ trapping periods, or 20~ms for a frequency of $(2\pi)\times30$~kHz. This result significantly improves the prospects of preparing atomic Laughlin states using an adiabatic scheme. The main ingredients that distinguish our scheme from earlier work are: \begin{itemize} \item larger anisotropies of the trap: During the preparation, the atoms acquire large values of angular momentum, exceeding the Laughlin value, well before reaching the strongly correlated regime. Thus, the accumulation of angular momentum occurs in regimes which are characterized by relatively large energy gaps, and in the final stage of the protocol, the Laughlin state is approached by \textit{reducing} the angular momentum of the system. \item varying ramp speeds: Relatively large energy gaps allow for quick ramps at an early stage of the preparation scheme, shortening the total evolution time. \end{itemize} Our work is organized as follows: In Sec. \ref{System}, we describe the system and its behavior at different rotation frequencies. In Sec. \ref{Results}, we present our main result, that is, how rotation frequency and trap anisotropy have to be tuned to reach the Laughlin state with high fidelity. Conclusions are drawn in Sec. \ref{Conclusions}. \section{Theoretical Model} \label{System} We consider a microscopic model of $N$ bosonic atoms confined to two dimensions and trapped in a harmonic potential. These microtraps can be realized either via a tightly focused optical tweezer or via an optical lattice acting as a decoupled array of individual microtraps, as in Ref. \cite{gemelke10}. Tight harmonic confinement along the third dimension ($z$-direction) freezes all excitations along that direction, and each microtrap becomes effectively two-dimensional.
We denote the harmonic oscillator frequency by $\omega_z$, and the associated length scale is given by $\lambda_z = (\hbar/M\omega_z)^{1/2}$, with $M$ the mass of the atoms. The bosonic atoms interact via contact interaction, which we parametrize with the dimensionless coupling constant $g$. In the considered experimental setups, the dimensionless coupling is given by $g = \sqrt{8\pi} (a_S/\lambda_z)$, with $a_S$ being the three-dimensional scattering length. The artificial gauge field is created by rotation around the $z$-direction with frequency $\Omega$. For a review on artificial gauge fields with atoms in a rotating trap, we refer the reader to Refs. \cite{Cooper2008, Fetter2009}. The total Hamiltonian $H = H_0 + H_I$ describing $N$ atoms consists of the non-interacting part \begin{align} H_0 = \sum_{j=1}^N \left[ \frac{\mathbf{p}_{j}^{\,2}}{2M} +\frac{1}{2}M\omega^2 \mathbf{r}_{ j}^{\:2}-\Omega L_{z,j} \right] \, , \end{align} and the interacting part \begin{align} H_I = \frac{\hbar^2g}{M}\sum_{j=1}^N\sum_{k>j}\delta(\mathbf{r}_{j}-\mathbf{r}_{k}) \, , \end{align} where $\mathbf{r}_{j} = x_j \mathbf{e}_x + y_j \mathbf{e}_y$ is the position operator in the $xy$-plane, and $L_{z,j}$ is the angular momentum operator in the $z$-direction of the $j$th atom. Moreover, $\omega$ is the frequency of the harmonic trapping in the $xy$-plane. The single-particle Hamiltonian can be written as \begin{equation} H_0 = \sum_{j=1}^N \left[ \frac{|\mathbf{p}_{j}-M \boldsymbol{\Omega} \times \mathbf{r}_{j}|^{2}}{2 M}+\frac{1}{2} M\left(\omega^{2}-\boldsymbol{\Omega}^{2}\right)\mathbf{r}_{ j}^{\:2} \right]\!, \label{eq:ParticleInGaugeField} \end{equation} where we introduced the rotation vector $\boldsymbol{\Omega}=\Omega \hat{z}$ along the $z$-axis. Eq.~\eqref{eq:ParticleInGaugeField} describes non-interacting particles with charge $q$ in a magnetic field $q \mathbf{B} = 2 M \boldsymbol{\Omega}$. The single-particle eigenstates of $H_0$ are the Fock-Darwin states, cf. Ref. \cite{dagnino09}, which are organized in different Landau levels, separated by a ``cyclotron'' energy $\hbar(\omega+\Omega)$. Different states within a Landau level are distinguished by an angular momentum quantum number $m$, which contributes the term $m\hbar(\omega-\Omega)$ to the single-particle energy. Assuming that $\omega+\Omega \gg \omega-\Omega$, and that the cyclotron energy also sufficiently exceeds the interaction energy of the system, the effective Hilbert space can be reduced to the lowest Landau level. The Fock-Darwin wave functions of the lowest Landau level are given by \begin{align} \phi_m(x,y)=\frac{1}{\lambda^{m+1}\sqrt{\pi m!}}\:(x+iy)^me^{-(x^2+y^2)/2\lambda^2}, \end{align} where $\lambda=\sqrt{\frac{\hbar}{M\omega}}$ is the harmonic oscillator length scale. We use these eigenstates as our computational basis. The second-quantized operator $a^\dagger_m$ ($a_m$) creates (annihilates) a particle described by $\phi_m(x,y)$. In units of $\hbar\omega$ and using second quantization, the Hamiltonian can be written as \begin{align} H = H_0 + H_I = {N} + [1-\Omega]{L} + {U}, \end{align} where ${N}=\sum_{m}a^\dagger_ma_m$ is the number operator, ${L}= \sum_{m} m\, a^\dagger_ma_m$ is the total angular momentum operator (in units of $\hbar$), and ${U}= H_I/(\hbar \omega)$ is the interaction operator \begin{align} {U}&=\sum_{m,n,p,q}U_{m,n,p,q}\:a^\dagger_ma^\dagger_na_pa_q \,, \end{align} where the matrix element is given by \begin{align} U_{m,n,p,q}&=\frac{g}{\pi}\frac{\delta_{m+n,p+q}}{\sqrt{m!n!p!q!}}\frac{(m+n)!}{2^{m+n+1}}.
\end{align} All terms in the Hamiltonian commute with ${L}$, and hence the angular momentum is a conserved quantity at this point. We are interested in preparing the ground state of a bosonic fractional quantum Hall system at Landau filling fraction $\nu=1/2$, i.e. the lowest Landau level shall be half-filled. For particles which interact with short-range interactions, such a phase is exactly described by the $1/2$-Laughlin wavefunction \begin{align} \psi_{L}(z_1,\ldots, z_N)=\prod_{i<j}^{N}\left(z_{i}-z_{j}\right)^{2} \prod_{k=1}^{N} e^{-\left|z_{k}\right|^{2} / 2}. \label{laughlin} \end{align} Here, we have used complex numbers $z_j$ to represent the position of the $j$th particle, $z_j = (x_j + iy_j)/\lambda$. This wave function is zero whenever two particles are at the same position, and thus, it is a zero-energy eigenstate of the contact potential $H_I$. The 1/2-Laughlin state has total angular momentum $L=N(N-1)$ (in units $\hbar$), as can be inferred from the degree of the polynomial part of Eq.~\eqref{laughlin}. On the other hand, the total angular momentum of the ground state of $H$ is the result of a competition between $H_0$ and $H_I$: the single-particle part $H_0$ yields an energy which is proportional to $L$, while larger values of $L$ allow the particles to avoid each other, reducing the interaction energy. In particular, there are no zero-energy eigenstates of $H_I$ for $L<N(N-1)$. We can control this competition of $H_0$ and $H_I$ by the rotation frequency in $H_0$, which in the following will therefore be chosen to be time-dependent, i.e. $\Omega(t)$. Throughout the paper, we will express $\Omega(t)$ in units of $\omega$. \begin{figure}[t] \includegraphics[scale=1.15]{A=0.pdf} \caption{\textbf{(a)} Energy of the ground state and first excited state in an isotropic system of four atoms (with $g=1$) as a function of rotation frequency. True level crossings occur at $\Omega=0.841,0.947$ and $0.974$. \textbf{(b)} Angular momentum expectation value as a function of rotation frequency. The Laughlin state is the ground state after the third crossing, when $L=N(N-1)=12$.} \label{A=0} \end{figure} This competition is illustrated in Fig.~\ref{A=0}, where we have plotted the energies of the ground state and the first excited state in Fig.~\ref{A=0}a, and the total angular momentum of the ground state in Fig.~\ref{A=0}b, as functions of the rotation frequency $\Omega$. At discrete values of $\Omega$, the energy gap above the ground state vanishes, and the ground state angular momentum changes abruptly. In the system of four particles, we obtain ground states at $\expval*{{L}}=0,4,8$ and $12$. It will be the goal of our adiabatic protocol to bring a rotating system from the condensate phase ($L=0$) to the Laughlin state ($L=N(N-1)$) by a ramp of the rotation frequency. In this work, we consider the experimentally relevant case of $N=4$ atoms, implying an angular momentum of $L=12$ for the Laughlin state. We fix the interaction parameter to $g=1$, noting that in practice $g$ can be tuned via Feshbach resonances and/or confinement-induced resonances. The transitions in Fig.~\ref{A=0} are true level crossings, as allowed by the rotational symmetry of the system. Thus, in order to adiabatically connect the different ground states, we have to turn these true crossings (corresponding to first-order phase transitions) into avoided crossings (corresponding to second-order phase transitions). This can be achieved by removing the rotational symmetry, e.g.
by introducing an anisotropic potential to the Hamiltonian \begin{align} {V}(t) = A(t) M\omega^2 \sum_i ({x}_i^2-{y}_i^2) \end{align} or, in terms of creation and annihilation operators and in units of $\hbar \omega$, \begin{align} {V}(t) = \frac{A(t)}{2}\sum_{m=2}^{\infty} \left[ \sqrt{m(m-1)}{a}_m^\dagger {a}_{m-2}+ {\rm h.c.} \right]. \end{align} With this, the new Hamiltonian for the system is \begin{align} H(t) = {N} + [1-\Omega(t)] {L} + {U} + {V}(t). \end{align} These expressions for $V(t)$ implicitly define an ``anisotropy'' parameter $A(t)$, which together with the rotation frequency $\Omega(t)$ shall be controllable as a function of time. Our goal is to fix the temporal behavior of these parameters such that the system evolves into the Laughlin state. We note that the anisotropy in $V(t)$ is due to an increase of the trapping frequency along the $x$-direction and a decrease of the trapping frequency along the $y$-direction. Concretely, the trapping frequency along the $y$-direction is proportional to $\sqrt{1-2A}$, which sets the centrifugal limit to $\Omega\leq\sqrt{1-2A}$. For larger rotation frequencies, the experiment is expected to become unstable, as the atoms can be expelled from the trap. We will avoid this region in our protocol. The anisotropy also introduces additional complexity from the computational point of view: since the new Hamiltonian does not conserve the total angular momentum, we must truncate the Hilbert space at some $L=L_{\rm max}$. The choice of $L_{\rm max}$ depends on the protocol, and to have a good truncation we need to ensure that at all times the sectors of large $L$ (i.e. the many-body states with total angular momentum close to or equal to $L_{\rm max}$) contribute a negligible part to the many-body wavefunction. In Fig. \ref{colorplots}, we plot the energy gap above the ground state as a function of the anisotropy parameter $A$ and the rotation frequency $\Omega$ for different choices of $L_{\rm max}$. This comparison illustrates that truncation at fairly small values ($L_{\rm max}=12$) is possible only for small values of $A$ or $\Omega$. On the other hand, the plots for $L_{\rm max}=26$ and $L_{\rm max}=40$ agree very well in the whole parameter region, suggesting that good convergence of the numerics has been reached. For our simulation of the adiabatic state preparation, presented in the next section, we have chosen $L_{\rm max}=40$. This truncation provides good convergence in the protocol we propose for the Laughlin state preparation. \begin{figure}[t] \includegraphics[scale=0.90]{colorplots.pdf} \caption{\textbf{Energy gap} as a function of rotation frequency and anisotropy parameter for different angular momentum truncations: \textbf{(a)} $L_{\rm max}=40$ \textbf{(b)} $L_{\rm max}=26$ \textbf{(c)} $L_{\rm max}=12$ All plots share the same color scale as (a); the energy gap $\Delta E$ is given in units of $\hbar\omega$.} \label{colorplots} \end{figure} \section{Adiabatic State Preparation \label{Results}} We now aim at a protocol for $A(t)$ and $\Omega(t)$ which adiabatically moves the system from the condensate ($L=0$) into the Laughlin state ($L=12$). In order to ensure adiabaticity, regions with a small energy gap should be avoided, and the velocity of parameter changes should be adjusted to the size of the energy gap. With this in mind, we have considered the protocol illustrated by the red line in Fig.~\ref{protocol}(a): First, the anisotropy is ramped up to a relatively large value ($A=0.08$) at slow rotation ($\Omega=0.8$).
Next, the rotation frequency is increased almost up to the centrifugal limit (marked by the black line in Fig.~\ref{protocol}). Finally, we simultaneously decrease $A$ and increase $\Omega$ along the centrifugal limit, until isotropy is restored and the Laughlin state is reached. From the contour plot of the energy gap, it is obvious that this path avoids regions of small gaps. Furthermore, we allocate different amounts of time for the evolution along different segments of the path. To this end, we have marked different points $P_i=(\Omega_i,A_i)$ along the path, which shall be reached at given times $t_i$. Between adjacent points, the parameters $A(t)$ and $\Omega(t)$ are changed linearly in time. Thus, the protocol is fully determined by $P_i$ and $t_i$, as given by Table~\ref{points}. In this table, we have parametrized time $t$ by dimensionless values $\tau=\frac{\omega\:t}{2\pi}$, which measure time in units of the trapping period. An illustration of the protocol defined by Table~\ref{points} is provided in Fig. \ref{protocol}(c). \begin{table}[h!] \centering \begin{tabular}{| c | c | c | c |c|} \hline & $\Omega_i$ & $A_i$ & $\tau_i$ & $\Delta \tau_i$ \\ \hline $P_1$ & $0.8$ & $0$ & $0$ & - \\ $P_2$ & $0.8$ & $0.08$ & $48$ & 48\\ $P_3$ & $0.88$ & $0.08$ & $80$ & 32\\ $P_4$ & $0.912$ & $0.08$ & $160$ & 80\\ $P_5$ & $0.977$ & $0.014$ & $366$ & 206\\ $P_6$ & $0.985$ & $0$ & $605$ & 239\\ \hline \end{tabular} \caption{Coordinates $(\Omega_i,A_i)$ of the points $P_i$ along the protocol in Fig. \ref{protocol}(a), and the dimensionless time $\tau_i$ at which the given configuration is reached within the protocol. The difference $\Delta \tau_i=\tau_i-\tau_{i-1}$ measures the amount of time spent to evolve between adjacent points.} \label{points} \end{table} \begin{figure*} \includegraphics[scale=1]{protocol.pdf} \caption{Characteristics of the adiabatic Laughlin state preparation. \textbf{(a)} Path in the parameter space for truncation $L_{\rm max}=40$. The black line is defined by $\Omega=\sqrt{1-2A}$; it bounds the region where the experiment becomes unstable. \textbf{(b)} Energy gap along the protocol. \textbf{(c)} Rotation frequency and anisotropy parameter as a function of time. \textbf{(d)} Angular momentum expectation value as a function of time. The precise coordinates of the points and time marks are given in Table \ref{points}; in (b), (c) and (d) we omit the labels of intermediate points for better visualization.} \label{protocol} \end{figure*} \begin{figure}[h!] \includegraphics[scale=0.55]{gap_equal.pdf} \caption{Gap along the red line in Fig. \ref{protocol}(a) with homogeneous time distribution. In this case, the time spent going from $P_i$ to $P_{i+1}$ is a fraction of the total time $T=605$ proportional to the geometric distance between these points. The total time is still $T$, and the parameters are changed linearly in time on each segment.} \label{gapequal} \end{figure} We stress that our protocol is significantly slowed down in the regions of small gap (between $P_3$ and $P_4$, and between $P_5$ and $P_6$), while it quickly passes through the other regions. This can also be seen from Fig. \ref{protocol}(b), which plots the energy gap as a function of $\tau$. This figure shall be compared with Fig.~\ref{gapequal}: there, we have employed a different protocol of the same total duration.
In this case, the time between two points, $\Delta \tau_i = \tau_i-\tau_{i-1}$, is chosen proportional to the geometric distance between the points, $\Delta\tau_i \propto [(A_i-A_{i-1})^2+ (\Omega_{i}-\Omega_{i-1})^2]^{1/2}$, which corresponds to homogeneous ramp speeds. It is seen that, with this choice, more than half of the preparation time is spent on the evolution through relatively strongly gapped regions, i.e. from $P_1$ to $P_3$, whereas in our protocol with adjusted ramp speeds (i.e. the protocol defined by Table~\ref{points}) the same evolution takes less than 15\% of the total evolution time. The advantage of adjusted ramp speeds is reflected by the fidelity $F$ with which the Laughlin state is reached. Defining the fidelity $F(\tau)$ as the squared overlap between the evolved state at time $\tau$ and the instantaneous ground state of the Hamiltonian $H(\tau)$, and fixing in both protocols the total evolution time at $T=605$ (in units $2\pi/\omega$), the protocol with adjusted ramp speed reaches the Laughlin state with fidelity $F(T)=0.99$, whereas the protocol with homogeneous ramp speed achieves only a fidelity of $F(T)=0.94$. During the evolution, the ``instantaneous'' fidelity $F(\tau)$ always remains above $0.98$ in the protocol with adjusted ramp speeds, whereas in the protocol with homogeneous ramp speed it drops below $0.92$ at some moments. These numbers indicate that the protocol with adjusted ramp speed operates to a good approximation in an adiabatic regime, whereas non-negligible excitations are produced if the ramp speed is homogeneous. The chosen evolution time, $T=605$, corresponds to 20~ms, 60~ms and 200~ms for trapping frequencies of $\omega$ = $(2\pi)\times$ 30 kHz, 10 kHz and 3 kHz, respectively. The total time for the Laughlin state preparation thus appears to be in an experimentally accessible regime. However, these frequencies only correspond to the in-plane trap, whereas the trapping frequency along $z$ must be chosen much larger than $\omega$, which sets experimental limitations. Strikingly, we may further shorten preparation times without a dramatic loss of fidelity. For instance, if the times are decreased by $10\%$ in all segments as compared to the protocol of Table~\ref{points}, the fidelity with the Laughlin state still reaches $F(T)=0.98$, and $F>0.97$ at any time during the evolution. In all cases, the angular momentum reached at the end of the protocol is very close to the desired value, $L=12\pm 0.02$. However, it is noteworthy that this value is not reached by a monotonic increase of $L$. In Fig.~\ref{protocol}(d), we see that significantly larger values, $\langle L \rangle >20$, are reached when the system is closest to the centrifugal limit, i.e. between $P_4$ and $P_5$. Only at the very end, between $P_5$ and $P_6$, does our protocol converge to the correct value of $12$. Therefore, even though at $P_5$ we already have the right rotation frequency for the Laughlin state, $\Omega>0.974$ as in Fig. \ref{A=0}, we still have to decrease the ellipticity to obtain the right angular momentum. We note that, despite the high angular momentum values reached in our protocol, for any ground state along the red line in Fig.~\ref{protocol}(a), the Hilbert space sectors with $L>34$ are barely populated: the weights of the many-body wave function in the $L=36$, $38$, and $40$ sectors are $0.012$, $0.005$, and $0.001$, respectively. This justifies the chosen Hilbert space truncation at $L_{\rm max}=40$.
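This convergence check is straightforward to express in code; below is a minimal sketch, assuming the many-body state is stored as a vector of amplitudes together with the total angular momentum of each basis state (all names are our own).
\begin{verbatim}
import numpy as np

def sector_weights(amplitudes, total_L):
    """Weight of the state in each total-angular-momentum sector."""
    amplitudes = np.asarray(amplitudes)
    total_L = np.asarray(total_L)
    return {int(L): float(np.sum(np.abs(amplitudes[total_L == L]) ** 2))
            for L in np.unique(total_L)}

# A truncation L_max is adequate if the weights of the highest sectors
# are negligible, as for 0.012, 0.005, 0.001 at L = 36, 38, 40 above.
\end{verbatim}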
In particular, truncations at lower angular momentum have to be restricted to lower anisotropies. However, it is obvious from the contour plot of the energy gap, see Fig.~\ref{colorplots}, that smaller anisotropy values would also decrease the size of the smallest gaps along the path. Therefore, the protocol would lose fidelity very quickly if we wanted to keep the same total time of $T=605$ trapping periods. \section{Conclusions} \label{Conclusions} In this work, we have proposed a time-efficient adiabatic protocol to prepare the $\nu=1/2$ fractional quantum Hall ground state of four bosonic atoms. Starting from a condensate in the lowest Landau level, we reach the Laughlin state within $T=605$ trapping periods and with a fidelity of $0.99$. At the end of our protocol, the expectation value of the angular momentum is $12$. Our total time of $T=605$ trapping periods represents an improvement by a factor of $10$ when compared to the 6450 trapping periods in Ref. \cite{popp04}. For a trapping frequency of $(2\pi)\times$30~kHz, our protocol would take only $20~$ms. However, the experimental work of Ref. \cite{gemelke10} considers a trapping frequency of only $(2\pi)\times$2~kHz, for which our protocol would take $300~$ms. This is a typical time scale for the adiabatic preparation of correlated states with cold atoms. The results presented here will be valuable in guiding experiments with cold atoms. An important feature of our protocol is the use of large anisotropies.\bibnote[]{In this context, we stress the different definition of our parameter $A$ as compared to the anisotropy parameter $\epsilon$ used in Ref. \cite{popp04}.} This leads to ellipticities which in our protocol are twice as large as in Ref. \cite{popp04}. The correct description in the regime of large deformation is numerically expensive, but our study shows that strong anisotropy is important for achieving fast adiabatic ramps. Large rotating quadrupolar deformations are experimentally feasible, be it in optical traps \cite{gemelke10}, in a time-orbiting potential trap \cite{fletcher2019}, or with a rotating pair of repulsive optical traps \cite{Abo-Shaeer2001}. For an accurate description in the vicinity of the centrifugal limit, we had to carefully choose the angular momentum truncation $L_{\rm max}$: the contour plots of the energy gap, Fig. \ref{colorplots}, depend considerably on this truncation. In particular, by truncating at $L_{\rm max}=12$, the Laughlin region (lower right corner of the contour plot) is fully separated from the other regions by a valley of very small energy gap. This hinders the fast preparation of the Laughlin state. Luckily, allowing for larger angular momentum changes this picture, and the Laughlin state can then be reached without crossing such a valley of small gaps, provided the anisotropy parameter is chosen sufficiently large. In this work, we have assumed an interaction parameter of $g=1$. This is slightly larger than the value $g=0.6$ assumed in Ref.~\cite{popp04}, or $g=0.41$ in Ref.~\cite{gemelke10}. Sufficiently strong interactions are important because the many-body gap above the Laughlin state scales as $\sim0.1g\:\hbar\omega$ \cite{regnault2003,julia-diaz12}. While many experiments operate in the weakly-interacting regime with $g\approx0.1$, strong interactions of $g\approx3$ have been realized using a Feshbach resonance \cite{Ha2013}. In principle, it is also possible to tune $g$ as a function of time.
This would provide another knob in the state preparation scheme, an opportunity which is left for future work. We expect that, if experimentally required, the preparation time can be further reduced. An adiabatic scheme could, for instance, benefit from exploring even larger anisotropies, or from introducing more points $P_i$ at which the ramps are changed. In this context, optimal control strategies for many-body systems \cite{doria11} might be used to find the best path; in practice, however, this possibility is limited by the fact that simulating systems with large ellipticities is numerically expensive. Such optimization protocols might also move beyond adiabatic paths, and it would be interesting to investigate whether counter-diabatic preparation schemes can achieve better results. \begin{acknowledgements} The authors would like to thank Klaus Sengstock, Leticia Tarruell and Fabian Grusdt for fruitful discussions. B.A. acknowledges financial support from the Maria Yzuel Fellowship and from the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. T.G. acknowledges financial support from a fellowship granted by “la Caixa” Foundation (ID 100010434, fellowship code LCF/BQ/PI19/11690013). B.A., V.K., M.L., and T.G. acknowledge funding from ERC AdG NOQIA, Spanish Ministry MINECO and State Research Agency AEI (FISICATEAMO and FIDEUA PID2019-106901GB-I00/10.13039 / 501100011033, SEVERO OCHOA No. SEV-2015-0522 and CEX2019-000910-S, FPI), European Social Fund, Fundaci\'o Cellex, Fundaci\'o Mir-Puig, Generalitat de Catalunya (AGAUR Grant No. 2017 SGR 1341, CERCA program, QuantumCAT U16-011424, co-funded by ERDF Operational Program of Catalonia 2014-2020), MINECO-EU QUANTERA MAQS (funded by State Research Agency (AEI) PCI2019-111828-2/10.13039/501100011033), EU Horizon 2020 FET-OPEN OPTOLogic (Grant No 899794), and the National Science Centre, Poland- Symfonia Grant No. 2016/20/W/ST4/00314. C.W. acknowledges funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No. 802701. \end{acknowledgements}
\section{Introduction} Linear TV programs play a crucial role in our daily lives. With the quickly increasing number of TV channels and programs, it is important to develop effective recommender systems for TV users. Although the development of recommender systems has been stimulated by the rapid growth of information on the Internet, and many algorithms have been successfully applied to various online services (e.g., music and video streaming services)~\cite{Gomez-Uribe16, Yang2018,Chen2019}, little has been done for personalized TV recommendation (TV Rec) in the literature. Most well-developed recommendation algorithms are not applicable to this recommendation problem due to the following two key challenges of TV Rec: (1) Complete-item cold start: unlike video on demand (VOD), new TV programs are released on a daily basis (although some dramas or movies are replayed, they usually have different titles or descriptions);\footnote{Another practical challenge is that programs which share common content do not share an identical ID, which rules out directly adopting collaborative filtering or matrix factorization in real-world scenarios.} (2) Context awareness: user viewing behavior for TV programs strongly depends on context (e.g., time and mood); for instance, a user may watch news during dinner but prefer sports in the morning. To address the first challenge, some studies adopt content-based approaches combined with collaborative filtering (CF) for TV Rec~\cite{ali2004tivo,cotter2000ptv,fernandez2006avatar,Smyth1999,david2012}. However, these approaches do not consider the second key characteristic of TV Rec, context awareness, for which another line of work focuses mainly on characterizing users' time-aware preferences~\cite{ardissono2004user,turrin2014time,david2012,Yu17,Kim2018ATR}. Although these studies model users' time-aware preferences regarding channels and program genres, they do not precisely reflect users' viewing preferences regarding program content. This is because users' access to channels also depends on their viewing habits or location, and because genres are merely coarse-grained information about programs and thus provide little information about program content. Moreover, \cite{Hsu2007} further accounts for user moods, but such user data is difficult to obtain and even harder to measure. To address the above two challenges within a unified framework, we propose a two-stage ranking approach for TV Rec which consists of two components: one to model viewing behavior and the other to model viewing preferences. Specifically, viewing behavior refers to users' viewing patterns regarding time and TV channels, whereas viewing preferences refer to preferences regarding the content of TV programs. For the former, we adopt a finer granularity in terms of time than previous work (e.g., days$\times$hours in \cite{ardissono2004user,turrin2014time}), whereas for the latter, we leverage textual information about programs to better model user viewing preferences. Moreover, inspired by the capabilities and limitations of the two components, we propose fusing them with a simple yet effective two-stage ranking algorithm that locates potential candidates based on the first component and then further ranks them based on the second component. Note also that this is the first work in the literature to formally define the problem of TV Rec and to provide a unified approach capturing both user viewing behavior and preferences.
Empirical results on a real-world TV dataset demonstrate the effectiveness of the proposed approach in recommendation; at the same time, the approach is advantageous and practical for real-world applications due to its time-efficient and parameter-free design. \section{Methodology} \label{sec:methodology} \subsection{Problem Formulation} Personalized TV recommendation (TV Rec) is the task of recommending yet-to-be-released TV programs to a group of users. To properly formulate the problem and our proposed method, we first define three terms: 1) the weekly time slot, 2) the interaction tensor, and 3) the program meta information required for TV Rec. With these definitions, we formalize personalized TV Rec as a top-$k$ recommendation problem given user-implicit feedback. \begin{definition}[{\bf Weekly time slot}] A weekly time interval can be equally divided into $n$ weekly time slots, each of which is denoted as $w_i=[t_i,t_{i+1})$, where $t_i$ ($t_{i+1}$) denotes the beginning time (the end time, respectively) of the $i$-th time slot. Together, all of the time slots compose the set $W=\{ w_i|1\leq i \leq n\}$. Thus, any given timestamp $\mathbf{s}\in S$ can be projected onto a weekly time slot $w_{\mathcal{T}(\mathbf{s})} \in W$ by the function $\mathcal{T}(\cdot):S\rightarrow\{1,\cdots,n\}$, where $S$ denotes a set of arbitrary timestamps. \end{definition} For example, when we divide a week into 168 time slots (i.e., one hour for each time slot), we have $W=\{w_1=[\text{Mon 00:00}, \text{Mon 01:00}),\cdots, w_{168}=[\text{Sun 23:00}, \text{Mon 00:00}) \}$, in which the specific timestamp ``May 11, 2020, 05:30 (Mon)'' belongs to the 6th time slot, $w_6$. Note that a given time span $[\mathbf{s},\mathbf{e}]$ can also be projected onto a set of time slots $\{ w_j |\mathcal{T}(\mathbf{s})\leq j\leq \mathcal{T}(\mathbf{e}) \}$. Also note that in our later empirical studies, we adopt a finer time granularity (i.e.,~15-minute time slots) than prior art. \begin{definition} [{\bf Interaction tensor}] Let $U$, $I$, and $C$ denote the sets of users, TV programs, and TV channels, respectively. An interaction tensor, denoted as $\mathcal{A}=(a_{u,i,w,c})\in \mathbb{R}^{|U|\times |I|\times |W|\times |C|}$, represents user-item associations through a certain channel within a certain weekly time slot, where $a_{u,i,w,c}$ denotes the weight of the association. Note that the tensor is binary for implicit feedback; that is, if user $u\in U$ views program $i\in I$ played on channel $c\in C$ within time slot $w\in W$, then $a_{u,i,w,c}=1$; otherwise, $a_{u,i,w,c}=0$. \end{definition} \begin{definition}[{\bf Program meta information}] \label{def:meta} Given a set of TV programs $I$, the meta information for each $i\in I$ records that program $i$ is broadcast by channel $\mathrm{CH}({i}) \in C$ in the time interval $[\mathbf{s}_i,\mathbf{e}_{i}]$ with content information $\mathrm{CNT}(i)$, where $\mathrm{CH}(\cdot)$ and $\mathrm{CNT}(\cdot)$ are the projection functions respectively mapping program $i$ to its channel and its textual information (e.g., title, artists, and abstract). \end{definition} \begin{problem} \label{def:problem setting} \textbf{Top-$k$ TV Recommendation from Implicit Feedback.} Let $I_{\rm train}$ and $I_{\rm test}$ denote the sets of TV programs broadcast in the past (training data) and in the future (test data), respectively; note that for the problem of TV Rec, $I_{\rm train}\bigcap I_{\rm test}= \emptyset$.
Given a historical interaction tensor $\mathcal{A}_{\rm train}=(a_{u,i,w,c}) \in \mathbb{R}^{|U|\times |I_{\rm train}|\times|W| \times |C|}$, for each user $u \in U$, we identify the top-$k$ programs from the set of yet-to-be-released (new) programs $I_{\rm test}$ by leveraging the information from $\mathcal{A}_{\rm train}$ and the meta information of $I_{\rm train}\bigcup I_{\rm test}$. \end{problem} \subsection{Proposed Method} With a TV recommender system, we seek to leverage historical viewing logs and content information of programs to infer two user characteristics: (1) behavior and (2) preferences, which are addressed in Sections~\ref{sec:behavior} and~\ref{sec:preference}, respectively. We then propose a simple yet effective two-stage ranking method in Section~\ref{sec:algorithm} that takes into account both user characteristics, thereby fusing user viewing habits and preferences into the modeling process. \subsubsection{Viewing Behavior}\label{sec:behavior} Here, we define the so-called \emph{viewing behavior} of users based on the following observations. As suggested by \cite{turrin2014time}, most TV users exhibit predictable viewing behavior strongly connected to weekly time slots and TV channels. Intuitively, users prefer to watch TV during their leisure time, which heavily depends on their work and lifestyle. In addition, users tend to switch between a limited number of channels even though they have a large number to choose from. Thus, a user's TV viewing behavior can be defined as the probability distribution of watching TV on a given channel at a given time. Given a historical user-item interaction tensor $\mathcal{A}_{\rm train}=(a_{u,i,w,c}) \in \mathbb{R}^{|U|\times |I_{\rm train}|\times|W| \times |C|}$, we extract each user $\mathbf{u}$'s viewing behavior by computing his or her viewing probability distribution over weekly time slots $W$ and TV channels $C$. Formally speaking, we represent each $\mathbf{u}$'s viewing behavior as a probability distribution matrix, $\mathcal{B}^{\mathbf{u}}=(b^{\mathbf{u}}_{\mathbf{w},\mathbf{c}}) \in \mathbb{R}^{\left| W \right| \times \left| C \right|} $, where each element $b^{\mathbf{u}}_{\mathbf{w},\mathbf{c}}$ is defined as \begin{equation} \label{eq:watching_behavior} b^{\mathbf{u}}_{\mathbf{w},\mathbf{c}}= \left( \sum\limits_{i,w,c}a_{\mathbf{u},i,w,c} \mathbbm{1}_{ \{ w=\mathbf{w} \} } \mathbbm{1}_{ \{ c=\mathbf{c}\} } \right)\left/ \left(\sum\limits_{i, w, c }a_{\mathbf{u},i, w, c } \right)\right. . \end{equation} Additionally, in order to recommend yet-to-be-released TV programs for users based on their viewing behavior, we construct the matrix $\mathcal{B}^{\mathbf{i}'}=(b^{\mathbf{i}'}_{\mathbf{w},\mathbf{c}}) \in \mathbb{R}^{\left| W \right| \times \left| C \right|} $ for each new item $\mathbf{i}' \in I_{\rm test}$ using the meta information defined in Definition~\ref{def:meta}, where $b^{\mathbf{i}'}_{\mathbf{w},\mathbf{c}}= \mathbbm{1}_{ \{ \mathbf{w} \in \{ w_j |\mathcal{T}(\mathbf{s}_{\mathbf{i}'})\leq j\leq \mathcal{T}(\mathbf{e}_{\mathbf{i}'}) \} \} } \cdot \mathbbm{1}_{ \{ \mathrm{CH}(\mathbf{i}')=\mathbf{c}\} } $. Recall that $[\mathbf{s}_{\mathbf{i}'},\mathbf{e}_{\mathbf{i}'}]$ denotes the time interval during which program $\mathbf{i}'$ is broadcast.
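To make this construction concrete, below is a minimal Python sketch of the slot projection $\mathcal{T}(\cdot)$ from Definition~1, the user matrix $\mathcal{B}^{\mathbf{u}}$ of Eq.~(\ref{eq:watching_behavior}), and the item indicator matrix $\mathcal{B}^{\mathbf{i}'}$; the tuple-based log format and the channel count are illustrative assumptions, not the implementation used in our experiments.
\begin{verbatim}
import numpy as np

N_SLOTS, N_CHANNELS = 672, 50  # 15-minute weekly slots; channel count is illustrative

def slot_of(ts, n=N_SLOTS):
    """Project a datetime onto its weekly time slot index (the map T(.) in Def. 1)."""
    minutes = ts.weekday() * 1440 + ts.hour * 60 + ts.minute
    return minutes * n // 10080          # 0-based index; 10080 minutes per week

def behavior_matrix(logs, user):
    """Eq. (1): a user's viewing probability distribution over (slot, channel)."""
    B = np.zeros((N_SLOTS, N_CHANNELS))
    for u, i, w, c in logs:              # implicit-feedback tuples with a_{u,i,w,c} = 1
        if u == user:
            B[w, c] += 1.0
    total = B.sum()
    return B / total if total > 0 else B

def item_matrix(first_slot, last_slot, channel):
    """Indicator matrix for a new item: ones on its broadcast slots and channel."""
    B = np.zeros((N_SLOTS, N_CHANNELS))
    B[first_slot:last_slot + 1, channel] = 1.0
    return B
\end{verbatim}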
Finally, we compute the matching score between $\mathbf{u}$ and $\mathbf{i}^{'}$ given viewing behavior as \begin{equation} \label{eq:watching_behavior_score} \begin{aligned} s_{\mathbf{u},\mathbf{i'}}^{b} =\mathrm{MAX}\left(\mathcal{B}^{\mathbf{u}} \odot \mathcal{B}^{\mathbf{i'}}\right) \text{ and } (w,c) = {\rm IdxMax}\left(\mathcal{B}^{\mathbf{u}} \odot \mathcal{B}^{\mathbf{i'}}\right), \end{aligned} \end{equation} where $\odot$ denotes element-wise multiplication between two matrices, $\mathrm{MAX}(\cdot)$ is the function to extract the maximum element in a matrix, and ${\rm IdxMax}(\cdot)$ locates the indices of the maximum element.\footnote{In practice, there is no need to conduct the element-wise multiplication to get $s^{b}_{\mathbf{u},\mathbf{i}'}$; instead, for each $\mathbf{i}'$, $s^{b}_{\mathbf{u},\mathbf{i}'}$ is the maximum in the set $\{b^{\mathbf{u}}_{\mathbf{w},\mathbf{c}}| \mathbf{w} \in \{ w_j |\mathcal{T}(\mathbf{s}_{\mathbf{i}'})\leq j\leq \mathcal{T}(\mathbf{e}_{\mathbf{i}'})\}\wedge \mathbf{c}={\rm CH}(\mathbf{i}') \}$.} Note that $s_{\mathbf{u},\mathbf{i'}}^{b}$ is the estimated probability that user $\mathbf{u}$ views item $\mathbf{i'}$ given his or her historical viewing behavior. \subsubsection{Viewing Preferences}\label{sec:preference} In contrast to the aforementioned user behavior, a user's preferences are usually associated with the content of his or her preferred items. We formally define a user's \emph{viewing preferences} as the program contents he or she prefers to watch, which we represent in the proposed method using the textual information of programs. Note that, as in a typical TV Rec scenario, all candidate items in $I_{\rm test}$ for recommendation are new, which is the same as the complete cold-start problem in typical recommender systems. Such a problem is commonly addressed using content-based approaches~\cite{david2012, Chou2016}; likewise, we here use textual item information to locate new items for recommendation. For each program $\mathbf{i} \in I_{\rm train}$, we map its content information to a $d$-dimensional embedding~$h_{\mathbf{i}}$ using a text encoder~$\mathcal{E}$: \begin{equation} \label{eq:text_enc} {h}_{\mathbf{i}}=\mathcal{E}\left(\mathrm{CNT}(\mathbf{i})\right) \in \mathbb{R}^{d}. \end{equation} In order to map user~$\mathbf{u}$'s preferences to the same embedding space, we gather all the programs associated with $\mathbf{u}$ in the training data, after which we compute the average pooling over their embeddings to obtain $\mathbf{u}$'s viewing preferences~$h_{\mathbf{u}}$ as \begin{equation} h_{\mathbf{u}}= \frac{\sum_{i\in I^{\mathbf{u}}_{\rm train}} h_i}{|I_{\rm train}^{\mathbf{u}}|} \in \mathbb{R}^{d}, \label{eq:gpreference} \end{equation} where $I^{\mathbf{u}}_{\rm train}= \{ i\,|\,i \in I_{\rm train} \wedge \exists\, w\in W, c\in C:\ a_{\mathbf{u},i,w,c}=1\}$. Similarly, for each item $\mathbf{i}' \in I_{\rm test}$, we obtain its embedding $h_{\mathbf{i}'}$ using the same text encoder~$\mathcal{E}$ as in Eq.~(\ref{eq:text_enc}). Finally, the matching score for $\mathbf{u}$ and $\mathbf{i}^{'}$ in terms of viewing preferences is computed as \begin{equation} \label{eq:preference_score} s^{p}_{\mathbf{u},\mathbf{i}'}=\langle h_{\mathbf{u}}, {h_{\mathbf{i}'}}\rangle, \end{equation} where $\langle\cdot,\cdot\rangle$ denotes the dot product of two vectors.
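As an illustration of Eqs.~(\ref{eq:text_enc})--(\ref{eq:preference_score}), the following minimal sketch uses a tf-idf vectorizer as the text encoder $\mathcal{E}$ together with toy program texts; it is a sketch under these assumptions rather than our experimental pipeline.
\begin{verbatim}
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy stand-ins for CNT(i) from Definition 3.
train_texts = {"i1": "evening news and politics", "i2": "live soccer match"}
test_texts  = {"j1": "morning news headlines"}

enc = TfidfVectorizer()                            # plays the role of E(.)
enc.fit(list(train_texts.values()) + list(test_texts.values()))
h = {key: enc.transform([txt]).toarray()[0]        # h_i = E(CNT(i)), Eq. (3)
     for key, txt in {**train_texts, **test_texts}.items()}

h_u = np.mean([h[i] for i in ["i1", "i2"]], axis=0)  # Eq. (4): average pooling
s_p = float(np.dot(h_u, h["j1"]))                    # Eq. (5): dot-product score
\end{verbatim}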
In addition, for TV Rec, it is common that multiple users (i.e., family members) share the same account, under which these users may have different viewing preferences and watch TV at different weekly time slots. For example, whereas children enjoy watching cartoons after school, parents prefer to watch news or dramas after work. We address this by further tailoring the viewing preferences of an ``account'' to time-aware preferences; that is, for each account~$\mathbf{u}\in U$ and each time slot~$\mathbf{w}\in W$, we have \begin{equation} h_{\mathbf{u}, \mathbf{w}}= \frac{\sum_{i \in I^{\mathbf{u}, \mathbf{w}}_{\rm train}}h_i}{|I_{\rm train}^{\mathbf{u}, \mathbf{w}}|} \in \mathbb{R}^{d}, \label{eq:tpreference} \end{equation} where $I^{\mathbf{u},\mathbf{w}}_{\rm train}= \{ i\,|\,i \in I_{\rm train}\wedge \exists\,c\in C:\ a_{\mathbf{u},i,\mathbf{w},c}=1\}$. With these fine-grained viewing preferences, the score of user~$\mathbf{u}$ for item $\mathbf{i}'$ becomes \begin{equation} \label{eq:time_preference_score} s^{p}_{\mathbf{u},\mathbf{i}'}=\langle h_{\mathbf{u},{w}_j}, {h_{\mathbf{i}'}}\rangle, \end{equation} where ${w}_{j}\in W$ denotes the time slot in which item $\mathbf{i}'$ begins playing; i.e., $j=\mathcal{T}(\mathbf{s}_{\mathbf{i}'})$. \subsubsection{Two-stage Ranking}\label{sec:algorithm} In this section, we propose a two-stage ranking approach that leverages the above two features---user viewing behavior and user viewing preferences---for TV Rec. Before describing the proposed approach, we make observations and lay out the motivation of our design based on the limitations of each feature as follows. \begin{itemize}[leftmargin=*] \item \textbf{Viewing behavior}: In practice, there are usually multiple programs broadcast on the same channel in the same time slot; in this case, these programs are given the same matching score for a user in terms of his or her viewing behavior. Thus, recommendation based solely on user viewing behavior selects all the programs from a given channel and time slot together.\footnote{When multiple programs have the same score, we assign a higher rank to programs with earlier starting times.} However, in a real-world scenario, it is unlikely that a user at a given time slot watches more than one TV program, especially for short time slots;\footnote{In the experiments, we adopted 15 minutes as our time slot interval, an optimal setting for using only viewing behavior for recommendation; even in this case, each time slot nevertheless contains 1.5 programs on average.} in this case, recommending multiple programs from the same channel at a given time slot could lead to poor recommendation quality. \item \textbf{Viewing preferences}: Although user preferences are useful for recommendation, recommending linear TV programs based solely thereon usually results in low accuracy. For example, if an office worker enjoys watching action movies during the weekend, it is unreasonable to recommend action movies at midnight during weekdays. \end{itemize} Based on the above characteristics and limitations, we propose two-stage ranking to leverage the two features for TV Rec, as detailed in Algorithm~\ref{alg:ts_ranking}.
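As a complement to the formal procedure in Algorithm~\ref{alg:ts_ranking} below, the following minimal sketch captures the two-stage selection logic; here \texttt{score\_b} and \texttt{score\_p} are hypothetical callables wrapping Eqs.~(\ref{eq:watching_behavior_score}) and~(\ref{eq:time_preference_score}), with \texttt{score\_b} returning a (score, (slot, channel)) pair.
\begin{verbatim}
def two_stage_rank(items, score_b, score_p, k):
    """Stage 1: order candidates by the behavior score s^b; stage 2: within
    each run of programs sharing one (slot, channel), keep the best s^p."""
    ranked = sorted(items, key=lambda i: score_b(i)[0], reverse=True)
    picked, idx = [], 0
    while idx < len(ranked) and len(picked) < k:
        wc = score_b(ranked[idx])[1]          # (slot, channel) of this run
        group = [ranked[idx]]
        idx += 1
        while idx < len(ranked) and score_b(ranked[idx])[1] == wc:
            group.append(ranked[idx])
            idx += 1
        picked.append(max(group, key=score_p))  # preference-based tie-break
    return picked
\end{verbatim}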
Briefly speaking, for each user~$\mathbf{u}$, we propose first ranking the program set~$I_{\rm test}$ according to viewing behavior~($s^{b}_{\mathbf{u},\mathbf{i}'}$) (lines 2--6); then, at the second stage (lines 7--15), for those programs broadcast on the same channel at the same weekly time slot, we choose only one program among them according to the user's viewing preferences~($s^p_{\mathbf{u},\mathbf{i}'}$). Note that we put the model for viewing behavior at the first stage as previous studies indicate that viewing behavior usually dominates the recommendation performance~\cite{turrin2014time}, which is also consistent with the findings in our later experiments. This approach boasts two advantages: 1) it is parameter-free, and 2) it is computationally efficient, as only a limited number of preference matching scores~$s^{p}_{\mathbf{u},\mathbf{i}'}$ are computed at the second stage. Thus, the computational cost of the proposed two-stage ranking method is only slightly higher than for recommendation based solely on viewing behavior; this is also discussed in later experiments. \begin{algorithm} \small \caption{Two-stage Ranking} \label{alg:ts_ranking} \KwIn{ $\mathcal{A}_{\rm train}$, $I_{\rm train}$, $I_{\rm test}$, $k$, $\mathbf{u}$} \KwOut{${\hat{I}}^{\mathbf{u}}_{\rm test}$ (set consisting of recommended programs in $I_{\rm test}$ for user $\mathbf{u}$)} $\mathcal{S}^{b} \leftarrow [ ]; \mathcal{S}^{p} \leftarrow []; {\hat{I}}^{\mathbf{u}}_{\rm test} \leftarrow []$\\ Construct $\mathcal{B^{\mathbf{u}}}$ with Eq.~(\ref{eq:watching_behavior}) \\ \For {each $\mathbf{i}'$ in $I_{\rm test}$}{ Compute $s^{b}_{\mathbf{u},\mathbf{i'}}$ and $(w, c)$ with Eq.~(\ref{eq:watching_behavior_score})\\ $\mathcal{S}^{b}$.append$\left( \left(\mathbf{i}', (w, c),s^{b}_{\mathbf{u},\mathbf{i}'}\right) \right)$\\ } Sort $\mathcal{S}^{b}$ in ascending order according to $s^{b}_{\mathbf{u},\mathbf{i}'}$\\ \While {$\left(\left|{\hat{I}}^{\mathbf{u}}_{\rm test}\right|< k\right)$}{ $(\mathbf{i}', (w, c),s^{b}_{\mathbf{u},\mathbf{i}'}) \leftarrow\mathcal{S}^{b}$.pop() \\ Compute $h_{\mathbf{u},{w}_j}$ (or $h_{\mathbf{u}}$), $h_{\mathbf{i}'}$ and $s^p_{\mathbf{u},\mathbf{i}'}$ with Eqs.~(\ref{eq:text_enc})--(\ref{eq:time_preference_score})\\ \If{$\mathcal{S}^p \neq \emptyset$ {\bf and} $(w,c) \neq (w_{0}, c_{0})$}{ Sort $\mathcal{S}^{p}$ in ascending order according to $s^{p}_{\mathbf{u},\mathbf{i}'}$ \\ ${\hat{I}}^{\mathbf{u}}_{\rm test}$.append$\left( \mathcal{S}^{p}.{\rm pop()} \right)$ \\ $\mathcal{S}^{p} \leftarrow []$ \\ } $(w_{0}, c_{0}) \leftarrow (w,c)$ \\ $\mathcal{S}^p$.append$\left( \left(\mathbf{i}', s^{p}_{\mathbf{u},\mathbf{i}'}\right) \right)$ } \KwRet{${\hat{I}}^{\mathbf{u}}_{\rm test}$} \end{algorithm} \vspace{-0.6cm} \section{Experiment} \label{sec:experiment} \subsection{Dataset and Preprocessing} We collected user viewing logs, denoted as $D_{\rm raw}$, from a set of set-top boxes providing linear television service to end users in Japan from Jan 1, 2019 to June 1, 2019. This period comprises a total of 42,301 unique users and 875,550 distinct programs (denoted as $I_{\rm raw}$), where each user was anonymized using a hashed ID. Each log records a channel-switching event for a user, denoted as $d=(u, i, c, t, \Delta t)$, indicating that user~$u$ switched to channel~$c$ broadcasting program~$i$ at UTC timestamp~$t$. Above, $\Delta t$ is the interval between channel-switching events, which can be considered the duration of the user's viewing of the program.
Note that each program was broadcast only once on a channel in the linear TV system. In addition, each program~$i \in I_{\rm raw}$ was associated with its meta information (see Definition~\ref{def:meta}). Given these data logs~$D_{\rm raw}$ and TV programs~$I_{\rm raw}$, we first removed viewing logs whose duration was less than $\Delta t_{\theta}$ (e.g., 15 minutes in the experiments) to filter out logs where users were just flipping channels rather than watching a program. Formally, we constructed the preprocessed data logs~$D=\{d=(u,i,c,t,\Delta t)| d\in D_{\rm raw} \wedge \Delta t \geq \Delta t_{\theta}\}$. We then generated training and testing sets by splitting the processed data logs~$D$ based on a timestamp~$t_{\rm split}$ and extracting the logs of period~$T_{\rm train}=[t_{\rm split}-\Delta{t_{\rm train}},t_{\rm split})$ for training (denoted as $D_{\rm train}$) and $T_{\rm test}=[t_{\rm split},t_{\rm split}+\Delta{t_{\rm test}})$ for testing ($D_{\rm test}$); thus $I_{\rm train}=\{i\,|\,i\in I_{\rm raw}, \mathbf{s}_i \in T_{\rm train}\}$ and $I_{\rm test}=\{i\,|\,i\in I_{\rm raw}, \mathbf{s}_i \in T_{\rm test}\}$. In our experiments, we constructed four datasets with different values for $t_{\rm split}$ and set $\Delta t_{\rm train}$ and $\Delta t_{\rm test}$ to 90 and 7 days, respectively. Table~\ref{table: data stat} contains the dataset statistics. With user logs in $D_{\rm train}$, the interaction tensor is $\mathcal{A}_{\rm train}=(a_{\mathbf{u},\mathbf{i},\mathbf{w},\mathbf{c}}) \in \mathbb{R}^{|U|\times |I_{\rm train}|\times|W| \times |C|}$, where $a_{\mathbf{u},\mathbf{i},\mathbf{w},\mathbf{c}}= \sum_{ (u,i,c,t,\Delta t)\in D_{\rm train}} \mathbbm{1}_{\left\{\left(u,i,w_{\mathcal{T}(t)},c\right)=\left(\mathbf{u},\mathbf{i},\mathbf{w},\mathbf{c}\right)\right\}}.$ Here the user set $U$ comprises only users appearing at least once in both $D_{\rm train}$ and $D_{\rm test}$. The length of each weekly time slot~$w_i \in W$ was set to 15 minutes by setting $n$ to $672$. For validation, we adopted user-implicit feedback extracted from $I_{\rm test}$; that is, for each user~$\mathbf{u} \in U$, we constructed the program set~${I}^{\mathbf{u}}_{\rm test}=\{i\,|\,i\in I_{\rm test} \wedge (\mathbf{u},i,c,t,\Delta t)\in D_{\rm test} \}$ as our ground truth. \input{data_stat} \subsection{Baselines and Experimental Setup} We first built two baselines based on viewing behavior and viewing preferences, the user characteristics introduced in Sections~\ref{sec:behavior} and~\ref{sec:preference}, respectively. Note that for viewing preferences, we tokenized the textual information of each program using MeCab,\footnote{\url{https://taku910.github.io/mecab/}} after which we used the term frequency-inverse document frequency (tf-idf) vectorizer as the text encoder (see $\mathcal{E}(\cdot)$ in Eq.~(\ref{eq:text_enc})) to represent items in $I_{\rm train}\bigcup I_{\rm test}$. In addition, we compared the proposed two-stage ranking approach with a ranking fusion method that combines the recommendations from the above two baselines using reciprocal rank fusion (RRF)~\cite{Cormack09}. In information retrieval (IR), RRF is a simple but effective method for combining document rankings from multiple IR systems.
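For reference, a minimal sketch of RRF (formalized next) is given below; the input rankings are assumed to be item lists ordered from best to worst, and $\eta$ is a tunable hyperparameter (60 is merely a common illustrative default).
\begin{verbatim}
def rrf(rankings, eta=60.0):
    """Reciprocal rank fusion: s(i) = sum over systems of 1 / (rank(i) + eta)."""
    scores = {}
    for ranking in rankings:                 # e.g., [behavior_rank, preference_rank]
        for rank, item in enumerate(ranking, start=1):
            scores[item] = scores.get(item, 0.0) + 1.0 / (rank + eta)
    return sorted(scores, key=scores.get, reverse=True)

fused = rrf([["a", "b", "c"], ["b", "c", "a"]])  # -> fused item ordering
\end{verbatim}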
Formally speaking, given a set of items~$I_{\rm test}$ and a set of ranking functions~$\mathcal{K}$, where each $\kappa\in \mathcal{K}$ is a function mapping item~$i \in I_{\rm test}$ to its ranking~$\kappa(i)$, the fusion score for each item~$i$ is computed as $s_{\rm RRF}(i)=\sum_{\kappa\in \mathcal{K}} \frac{1}{\kappa(i)+\eta}$, where $\eta$ is a hyperparameter to reduce the impact of high-ranking items from any of the systems. With the two ranking functions based on viewing behavior and preferences (denoted as $\kappa_{b}$ and $\kappa_{p}$, respectively), we have $s_{\rm RRF}(i)= \frac{1}{\kappa_{b}(i)+\eta}+\frac{1}{\kappa_{p}(i)+\eta}$. Another baseline is an RRF variant with an additional hyperparameter $\xi$ to control the impact of the two ranking systems: $s_{\rm RRF}^{\xi}(i)=\frac{\xi}{\kappa_{b}(i)+\eta}+\frac{1-\xi}{\kappa_{p}(i)+\eta}$. We use the following metrics to evaluate our models: (1) nDCG, (2) precision, and (3) recall. For each user~$\mathbf{u} \in U$, we recommend $k=30$ programs among $I_{\rm test}$ and evaluate model performance with cut-offs $N\in\{10,20,30\}$. To fine-tune the hyperparameters for the RRF fusion methods (denoted as RRF and RRF$^\xi$), we randomly selected 10\% of the users in Dataset~1 as the development set and searched $\eta$ and $\xi$ in the ranges of $\{ 1,2,\cdots,100 \}$ and $\{ 0,0.1,\cdots,1 \}$, respectively, for the best performance in terms of Recall@30. Additionally, to examine the efficiency of each model, we evaluated each model's CPU time cost for inference (seconds/user).\footnote{As the inference time is measured on a per-user basis, the number of threads does not impact the measurement.} For models using viewing preferences (including fusion methods), we computed and indexed $h_{\mathbf{u},\mathbf{w}}$ (or $h_{\mathbf{u}}$) and $h_{\mathbf{i'}}$ in advance; thus, for each user at the inference stage, the computation cost is mainly associated with the dot product between $h_{\mathbf{u},\mathbf{w}}$ (or $h_{\mathbf{u}}$) and $h_{\mathbf{i'}}$ (for all programs $\mathbf{i}' \in I_{\rm test}$). In modeling the viewing behavior, the time cost is primarily due to the construction of the matrix $\mathcal{B}^{\mathbf{u}}$ and the calculation of $s^{b}_{\mathbf{u},\mathbf{i}'}$. \subsection{Quantitative Results} \begin{table*}[t!] \centering \resizebox{1\linewidth}{!}{ \setlength{\tabcolsep}{2.5pt} \begin{tabular}{clcrrrrrrrrrrr} \toprule & &&\multicolumn{3}{c}{$N=10$}&\multicolumn{3}{c}{$N=20$}&\multicolumn{3}{c}{$N=30$}& \\ \cmidrule(lr){4-6} \cmidrule(lr){7-9} \cmidrule(lr){10-12} & &Time-aware& nDCG & Prec. & Recall & nDCG & Prec. & Recall & nDCG & Prec.
& Recall & Time\\ \midrule \multicolumn{2}{c}{Behavior}&&35.25&33.79&12.26&34.68&30.42&18.39&34.25&27.97&22.91& $\dagger$\textbf{0.27}\\ \multicolumn{2}{c}{Preferences} & \checkmark&13.79&12.91&4.61&13.96&12.13&7.63&14.11&11.45&9.98&1.45\\ \cdashline{1-13}\\[-0.3cm] \multirow{6}{*}{Fusion}&\multirow{2}{*}{RRF}& & 43.82&38.27&12.89&38.87&30.30&17.97&36.41&26.01&21.59&1.45\\ & &{\checkmark} & 45.99&40.64&13.15&41.02&32.58&18.72&38.25&27.82&22.43&1.46\\ \cdashline{2-13}\\[-0.3cm] &\multirow{2}{*}{RRF$^{\xi}$}& &45.44&39.93&13.69&41.78&33.53&19.66&39.80&29.60&23.83&1.45\\ & &{\checkmark}&47.79&41.90&13.93&43.35&34.65&19.86&41.13&30.53&24.11&1.46\\ \cdashline{2-13}\\[-0.3cm] &\multirow{2}{*}{Two-stage}&&46.32&40.92&13.61&42.61&34.45&19.23&40.54&30.44&23.27&0.30\\ &&{\checkmark}&$\dagger$\textbf{48.92}&$\dagger$\textbf{43.28}&\textbf{14.12}&$\dagger$\textbf{44.90}&$\dagger$\textbf{36.41}&\textbf{19.98}&$\dagger$\textbf{42.64}&$\dagger$\textbf{32.13}&\textbf{24.20}&0.31\\ \bottomrule \end{tabular}% } \caption{Recommendation performance. $\checkmark$ denotes methods using time-aware user preferences, and $\dagger$ denotes statistical significance at $p < 0.05$. } \label{table:main results} \vspace{-1cm} \end{table*} Table~\ref{table:main results} compares model performance in terms of the aforementioned metrics and inference time. In the table, the best result for each column is in boldface; $\dagger$ denotes statistical significance at $p < 0.05$ (paired $t$-test over four datasets) with respect to all other methods, and $\checkmark$ indicates methods using time-aware user preferences (i.e., $h_{\textbf{u},\textbf{w}}$ in Eq.~(\ref{eq:tpreference})) as opposed to global preferences (i.e., $h_{\textbf{u}}$ in Eq.~(\ref{eq:gpreference})). First, the comparison between the methods using only behavior or preferences (denoted as Behavior and Preferences, respectively, in the table and hereafter) is strong evidence that in the TV Rec scenario, user viewing behavior dominates recommendation performance, which underscores the importance of putting the model for viewing behavior at the first stage of the proposed two-stage ranking approach. In addition, note that the inference time cost of Behavior is roughly one-fifth that of Preferences. On the other hand, as demonstrated in the table, fusing the two user characteristics significantly boosts ranking performance. Specifically, RRF outperforms Behavior in terms of nDCG and precision by over 7\% in the low cut-off regions (i.e., $N=\{10, 20\}$). Tuning the impact of Behavior and Preferences (the second row of RRF$^\xi$ with $\xi=0.6$) further improves overall ranking performance in terms of nDCG and precision by over 10\% and recall by over 5\%. However, both RRF and RRF$^\xi$ involve exhaustive dot product computation over all programs in $I_{\rm test}$, resulting in a time cost per user approximately equal to that of Preferences. Table~\ref{table:main results} shows that the proposed two-stage ranking consistently outperforms the other (fusion) methods in terms of efficiency and the three evaluation metrics. Specifically, the method significantly surpasses the strongest baseline RRF$^{\xi}$ by over 2\% in terms of nDCG and precision when modeling user preferences both globally and in a time-dependent fashion; also note that time-aware preferences better capture user viewing preferences and thus yield better performance.
Most importantly, from an efficiency perspective, the time cost of the two-stage ranking shown in the table is much lower than that of the two fusion methods and is close to that of Behavior, because in our method only a limited number of preference matching scores involving the dot product operation are computed at the second stage. Combining this efficiency with the fact that our method is parameter-free, we conclude that the proposed method is much more practical than RRF-based methods. \section{Conclusion} We propose a two-stage ranking approach to fuse two user characteristics---viewing behavior and viewing preferences---in a unified manner for TV Rec. The empirical results on a real-world TV dataset show that our proposed approach consistently outperforms the baseline methods; more importantly, our two-stage ranking approach is parameter-free and efficient at inference, making it applicable and practical for real-world TV Rec scenarios. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The AB$_2$X$_4$ transition-metal spinels, with B cations forming a geometrically frustrated pyrochlore lattice~\cite{SickafusStructureSpinel1999}, exhibit a wealth of interesting electronic and magnetic properties, such as spin dimerization~\cite{RadaelliFormationisomorphicIr32002b,SchmidtSpinSingletFormation2004}, spin-lattice instability~\cite{MatsudaSpinlatticeinstability2007}, orbital ordering~\cite{RadaelliOrbitalorderingtransitionmetal2005a}, charge ordering~\cite{WrightLongRangeCharge2001}, and metal-insulator transitions~\cite{NagataMetalinsulatortransitionspineltype1998b,IsobeObservationPhaseTransition2002,ItoPressureInducedSuperconductorInsulatorTransition2003}. For instance, the CuIr$_2$S$_4$\xspace thiospinel system exhibits a transition from paramagnetic metal to diamagnetic insulator on cooling~\cite{FurubayashiStructuralMagneticStudies1994c,MatsunoPhotoemissionstudymetalinsulator1997b,MatsumotoMetalinsulatortransitionsuperconductivity1999a}, the formation of structurally isomorphic octamers~\cite{IshibashiXraydiffractionstudy2001a,RadaelliFormationisomorphicIr32002b}, and anomalous electrical properties~\cite{BurkovAnomalousresistivitythermopower2000b,TakuboIngapstateeffect2008b}. In particular, CuIr$_2$S$_4$\xspace has broken symmetries in its ground state accompanied by the formation of charge order, orbital order, and the creation of magnetic spin singlets on dimerized Ir$^{4+}$-Ir$^{4+}$ pairs~\cite{RadaelliFormationisomorphicIr32002b}. These dimers were shown to disappear on warming through the transition, but preformed local symmetry-broken states were seen at temperatures well above the global long-range structural ordering transition~\cite{BozinLocalorbitaldegeneracy2019c}. Thus, on cooling, a precursor state exists that has broken local symmetry but is high symmetry over long range. This behavior was shown to be driven by breaking of $d$-electron orbital degeneracies, resulting in a local fluctuating orbital-degeneracy-lifted (ODL) state distinct from the spin singlet dimer. This local state results from direct $t_{2g}$\xspace Ir orbital overlap promoted by the topology of the crystal structure. In the regime of partial filling and high crystal symmetry that imposes the degeneracy of the orbital manifold, a molecular-orbital-like state is formed, accompanied by local structure distortion. Many of the interesting physical properties of CuIr$_2$S$_4$\xspace could be explained by the short-range-ordered ODL mechanism, such as the non-conventional conduction in the high-temperature metallic phase~\cite{BurkovAnomalousresistivitythermopower2000b}, and the apparently contradictory observation of the destruction of the dimers at the phase transition but the persistence of poor metallic response above the transition~\cite{BozinDetailedMappingLocal2011c}. Similar local symmetry-broken ODL states preformed at high temperature have been observed in other non-spinel $d$-electron systems such as the FeSe superconductor~\cite{KochRoomtemperaturelocal2019d,FrandsenQuantitativecharacterizationshortrange2019a}. Even the well-studied physics of the perovskite lanthanum manganites (LaMnO$_{3}$) could be interpreted in the same way~\cite{qiu;prl05}, suggesting that the ODL phenomenon may be widespread among the myriad of materials with partially filled $d$-electron manifolds.
To explore this hypothesis it is important to seek it out systematically and characterize other materials that have the potential of revealing both commonalities and novel aspects that give more insights into the general ODL phenomenon. For instance, there is no fundamental reason for the ODL states to be exclusively comprised of one orbital per transition metal, yet multi-orbital ODL states have so far not been identified. Here we show that such a multi-orbital ODL state exists in a related spinel, MgTi$_2$O$_4$\xspace, and that this is a consequence of the $d^{1}$ electron configuration decorating the pyrochlore lattice. MgTi$_2$O$_4$\xspace\ shares many features with CuIr$_2$S$_4$\xspace~\cite{KhomskiiOrbitallyInducedPeierls2005a}. They are both cubic spinels at high temperature with a transition metal ion pyrochlore sublattice of corner-shared tetrahedra. In both materials the crystal field splits the $d$-orbitals into a $t_{2g}$ triplet and an $e_g$ doublet, with the $t_{2g}$ orbitals partially occupied and the $e_g$ orbitals empty. They both exhibit a temperature-dependent metal-insulator transition (MIT)~\cite{NagataMetalinsulatortransitionthiospinel1994a,IsobeObservationPhaseTransition2002}, the origin of which has been attributed to an orbital-selective Peierls mechanism~\cite{KhomskiiOrbitallyInducedPeierls2005a}. They both exhibit a global symmetry lowering, from cubic to tetragonal, at the MIT on cooling. They both have anomalous electrical resistivity behavior in the high-temperature metallic phase~\cite{ZhouOpticalstudyMgTi2O42006,BurkovAnomalousresistivitythermopower2000b}. The symmetry lowering at the MIT is accompanied by a dimerization of transition metal ions that results in alternating short and long metal-metal bonds and a consequent tetramerization~\cite{CroftMetalinsulatortransitionmathrmCuIr2003a} along linear chains of ions on the pyrochlore sublattice~\cite{KhomskiiOrbitallyInducedPeierls2005a}. The short bonds are associated with spin singlet dimer formation~\cite{RadaelliFormationisomorphicIr32002b,SchmidtSpinSingletFormation2004}. The charge filling is also electron-hole symmetric between the systems, with Ti$^{3+}$ having one electron in the $t_{2g}$ manifold whilst Ir$^{4+}$ has one hole, although the nominal charge of Ir in CuIr$_2$S$_4$\xspace is 3.5+, which would place half a hole per Ir in the $t_{2g}$ anti-bonding band on average in this compound. Despite the similarities, there are also notable differences. The Ti valence electrons reside in 3$d$ orbitals whereas for Ir they are 5$d$, which are more extended and should result in a larger bandwidth. Indeed, the average separation between the transition metals on the undistorted tetrahedral pyrochlore sublattices of their cubic structures follows this expectation: the Ti-Ti separation is shorter ($\sim$3.0~\AA) than the Ir-Ir separation ($\sim$3.5~\AA). Also, experimentally, in MgTi$_2$O$_4$\xspace the tetragonal distortion is shown to be compressive ($c < a$) below the MIT~\cite{SchmidtSpinSingletFormation2004} while it is tensile ($c > a$)~\cite{FurubayashiStructuralMagneticStudies1994c} in CuIr$_2$S$_4$\xspace. Dimers form helical superstructures in MgTi$_2$O$_4$\xspace, whereas they form octameric ``molecules'' in CuIr$_2$S$_4$\xspace, thereby lowering the symmetry further to triclinic.
Both materials tetramerize below the MIT; however, CuIr$_2$S$_4$\xspace has a 3+-3+-4+-4+ charge ordering (CO) that accompanies an orbital ordering (OO)~\cite{BozinLocalorbitaldegeneracy2019c}, whereas a uniform 3+ charge on Ti rules out CO in MgTi$_2$O$_4$\xspace. With both similarities to CuIr$_2$S$_4$\xspace ($t_{2g}$ orbitals on the pyrochlore lattice, formation of dimerized singlets) and differences from it (lack of charge order, electron rather than hole states), MgTi$_2$O$_4$\xspace provides a natural next step in a deeper, broader mapping of ODL phenomena. As we show below, this includes the emergence of a multi-orbital ODL state. Neutron PDF (nPDF) analysis on MgTi$_2$O$_4$\xspace has been performed previously~\cite{TorigoeNanoscaleicetypestructural2018} to study the spin singlet dimers, suggesting that they do persist to high temperature. However, due to the weak and negative neutron scattering length of Ti, and the appreciable overlap with the substantially stronger oxygen signal, nPDF alone cannot fully reveal how the local structure behaves with temperature. Here we have applied a combined x-ray and neutron analysis to understand the full picture. We find unambiguously that the Ti-Ti dimers do disassemble on warming through the MIT. However, the local structure does not agree with the average structure even at high temperature. In analogy with CuIr$_2$S$_4$\xspace, the partially filled $t_{2g}$\xspace transition metal orbital manifolds of MgTi$_2$O$_4$\xspace, which are triply degenerate in the average cubic symmetry, utilize their favorable overlaps fostered by the pyrochlore sublattice topology to form an ODL state. Its structural signatures are observed up to at least 500~K ($\sim 2T_{s}$), the highest temperature measured. The spatial extent of the local structural response is found to be greater than that observed in CuIr$_2$S$_4$\xspace, consistent with the proposed two-orbital character of the ODL state in MgTi$_2$O$_4$\xspace. \section{Methods} \label{sec;methods} {\it Sample preparation \& characterization.\textemdash} TiO$_{2}$, Ti metal, and an excess of MgO were mixed and reacted using a spark plasma sintering technique in a graphite crucible. Synthesis at 1100~$^{\circ}$C was complete in $\approx$15 minutes. The sample was reground and fired a second time under similar conditions. Powder X-ray diffraction analysis of the product showed well-crystallized MgTi$_{2}$O$_{4}$ spinel accompanied by an extremely small concentration of Ti$_{2}$O$_{3}$ as a second phase. Magnetization measurements were conducted using a SQUID magnetometer on a specimen with a mass of 6.2~mg. The data show a pronounced low-temperature Curie tail, which was subtracted. The Curie-Weiss fit to the low-temperature data yielded a Weiss temperature of $-0.45$~K, consistent with isolated spins, and a Curie constant of 0.023 emu$\cdot$K/mol, which corresponds to $\approx 3$~\%\ by mole of putative Ti$^{3+}$ spin-1/2 impurities. {\it The PDF method.\textemdash} The local structure was studied using the atomic pair distribution function (PDF) technique~\cite{egami;b;utbp12,billi;b;itoch18}. The PDF analysis of x-ray and neutron powder diffraction datasets has been demonstrated to be an excellent tool for revealing local-structural distortions in many systems~\cite{BozinLocalorbitaldegeneracy2019c,KochRoomtemperaturelocal2019d,billi;prl96,qiu;prl05,YoungApplicationspairdistribution2011a,Keencrystallographycorrelateddisorder2015a,LavedaStructurepropertyinsights2018}.
The PDF gives the scaled probability of finding two atoms in a material a distance $r$ apart and is related to the density of atom pairs in the material. It does not presume periodicity and thus applies well beyond well-ordered crystals~\cite{egami;b;utbp12,billi;b;itoch18}. The experimental PDF, denoted $G(r)$, is the truncated Fourier transform of the reduced total scattering structure function~\cite{farro;aca09}, $F(Q)=Q[S(Q)-1]$: \begin{equation} \label{eq:FTofSQtoGr} G(r) = \frac{2}{\pi} \int_{\ensuremath{Q_{\mathrm{min}}}\xspace}^{\ensuremath{Q_{\mathrm{max}}}\xspace} F(Q)\sin(Qr) \: \mathrm{d} Q, \end{equation} where $Q$ is the magnitude of the scattering momentum transfer. The total scattering structure function, $S(Q)$, is extracted from the Bragg and diffuse components of x-ray, neutron or electron powder diffraction intensity. For elastic scattering, $Q = 4 \pi \sin(\theta) / \lambda$, where $\lambda$ is the wavelength of the probe and $2\theta$ is the scattering angle. In practice, values of $\ensuremath{Q_{\mathrm{min}}}\xspace$ and $\ensuremath{Q_{\mathrm{max}}}\xspace$ are determined by the experimental setup, and $\ensuremath{Q_{\mathrm{max}}}\xspace$ is often reduced below the experimental maximum to eliminate noisy data from the PDF since the signal-to-noise ratio becomes unfavorable in the high-$Q$ region~\cite{egami;b;utbp12}. {\it X-ray PDF experiment.\textemdash} The synchrotron x-ray total scattering measurements were carried out at the PDF beamline (28-ID-1) at the National Synchrotron Light Source II (NSLS-II) at Brookhaven National Laboratory (BNL) using the rapid acquisition PDF method (RAPDF)~\cite{chupa;jac03}. The MgTi$_2$O$_4$\xspace powder sample was loaded in a 1~mm diameter polyimide capillary and measured from 90~K to 500~K on warming using a flowing nitrogen cryostream provided by an Oxford Cryosystems 700 Series Cryocooler. The experimental setup was calibrated by measuring crystalline Ni as a standard material. A two-dimensional (2D) PerkinElmer area detector was mounted behind the sample perpendicular to the primary beam path with a sample-to-detector distance of 227.7466~mm. The incident x-ray wavelength was 0.1668~\AA. The PDF instrument resolution effects are accounted for by two parameters in modeling, $Q_{damp}$ and $Q_{broad}$~\cite{proff;jac99,farro;jpcm07}. For the x-ray PDF measurement, these were determined as $Q_{damp} = 0.039$~\AA$^{-1}$ and $Q_{broad} = 0.010$~\AA$^{-1}$ by fitting the x-ray PDF from a well-crystallized sample of Ni collected under the same experimental conditions. To verify data reproducibility, an additional set of x-ray data (in the 90~K to 300~K temperature range on warming) was collected at the XPD beamline (28-ID-2) at the NSLS-II at BNL using a similar RAPDF setup but with an x-ray wavelength of 0.1901~\AA\ and a sample-to-detector distance of 251.1493~mm. The corresponding instrument resolution parameters were determined to be $Q_{damp} = 0.032$~\AA$^{-1}$ and $Q_{broad} = 0.010$~\AA$^{-1}$, implying similar instrument resolution effects across the two sets of measurements. The collected x-ray data frames were summed, corrected for detector and polarization effects, and masked to remove outlier pixels before being integrated along arcs of constant momentum transfer $Q$, to produce 1D powder diffraction patterns using the \textsc{pyFAI}\xspace program~\cite{Ashiotisfastazimuthalintegration2015a}.
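For illustration, the truncated sine transform of Eq.~(\ref{eq:FTofSQtoGr}) can be discretized as in the minimal sketch below, which assumes $F(Q)$ on a uniform grid and a toy input; the actual corrections and transforms in this work are performed by the dedicated software named in the text.
\begin{verbatim}
import numpy as np

def g_of_r(q, fq, r):
    """Discretized Eq. (1): G(r) = (2/pi) * int F(Q) sin(Q r) dQ over [Qmin, Qmax]."""
    kernel = np.sin(np.outer(r, q))            # shape (len(r), len(q))
    return (2.0 / np.pi) * np.trapz(fq * kernel, q, axis=1)

q = np.linspace(0.5, 25.0, 2000)               # Qmin to Qmax (1/Angstrom)
fq = np.sin(3.0 * q) * np.exp(-1e-3 * q**2)    # toy F(Q); real data come from S(Q)
r = np.linspace(0.01, 20.0, 500)
G = g_of_r(q, fq, r)                           # toy G(r), peaked near r ~ 3 Angstrom
\end{verbatim}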
Standardized corrections and normalizations were applied to the data to obtain the reduced total scattering structure function, $F(Q)$, which was Fourier transformed to obtain the PDF, using \textsc{PDFgetX3}\xspace~\cite{juhas;jac13} within \textsc{xPDFsuite}\xspace~\cite{YangxPDFsuiteendtoendsoftware2015}. The maximum range of data used in the Fourier transform was chosen to be $Q_{max} = 25.0$~\AA$^{-1}$\xspace, so as to give the best trade-off between statistical noise and real-space resolution. {\it Neutron PDF experiment.\textemdash} The time-of-flight (TOF) neutron total scattering measurements were conducted at the NOMAD beamline (BL-1B)~\cite{NeuefeindNanoscaleOrderedMAterials2012} at the Spallation Neutron Source (SNS) at Oak Ridge National Laboratory. The MgTi$_2$O$_4$\xspace powder sample was loaded into a 3~mm diameter silica capillary mounted on a sample shifter, and data were collected from 100~K to 500~K on warming using an Ar flow cryostream. The neutron PDF instrument resolution parameters were determined as $Q_{damp} = 0.0232$~\AA$^{-1}$ and $Q_{broad} = 0.0175$~\AA$^{-1}$ by fitting the neutron PDF of a NIST standard 640d crystalline silicon sample. The neutron data were reduced and transformed to the PDF with $Q_{max} = 25.0$~\AA$^{-1}$\xspace\ using the automated data reduction scripts available at the NOMAD beamline. {\it Structural modeling.\textemdash} The PDF modeling programs \textsc{PDFgui}\xspace and \textsc{DiffPy-CMI}\xspace were used for PDF structure refinements~\cite{farro;jpcm07,juhas;aca15}. In these refinements $U_{iso}$~(\AA$^2$) is the isotropic atomic displacement parameter (ADP), and the ADPs of the same type of atoms are constrained to be the same; $\delta_2$~(\AA$^2$) is a parameter that describes correlated atomic motions~\cite{jeong;jpc99}. The PDF instrument parameters $Q_{damp}$ and $Q_{broad}$, determined by fitting the PDF from the well-crystallized standard sample under the same experimental conditions, are fixed in the structural refinements on the MgTi$_2$O$_4$\xspace dataset. To estimate the lengthscale of local structural correlations, an $r_{min}$-dependent fit is performed on select high-temperature x-ray data (300~K, 400~K, and 500~K). During the fit, the average high-temperature cubic MgTi$_2$O$_4$\xspace model is used, fixing the upper limit of the fit range at $r_{max} = 50$~\AA\ and varying the lower limit over $1 \le r_{min} \le 36$~\AA\ in 0.2~\AA\ steps. Each fit range uses the same initial parameter values, and the $\delta_2$ term is not applied since the data fitting range does not consistently include the low-$r$ region in all these refinements. The Rietveld refinement on the neutron TOF Bragg data was implemented in the GSAS-II software~\cite{TobyGSASIIgenesismodern2013c}. The instrument parameters were refined against the standard silicon data collected under the same experimental conditions and then fixed in the MgTi$_2$O$_4$\xspace Rietveld refinements. The sequential refinement option was used to refine the temperature series dataset collected at the $2\theta=120^{\circ}$ detector bank from 100~K to 500~K in a systematic manner. {\it Structural models.\textemdash} Two candidate MgTi$_2$O$_4$\xspace models were fit against the experimental data. In the cubic MgTi$_2$O$_4$\xspace model (space group: $Fd\overline{3}m$), the atoms sit at the following Wyckoff positions: Mg at 8a (0.125, 0.125, 0.125), Ti at 16d (0.5, 0.5, 0.5), and O at 32e ($x$, $x$, $x$).
The initial lattice parameters and atomic positions are $a=8.509027$~\AA\ and O at (0.25920, 0.25920, 0.25920)~\cite{SchmidtSpinSingletFormation2004}. The Ti pyrochlore sublattice is shown in Fig.~\ref{fig;stru_mt_char}(a), indicating that all the Ti-Ti bonds are of equal length, reflecting regular Ti$_{4}$ tetrahedra. \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{fig1}\\ \end{centering} \caption{ (a,b) The Ti pyrochlore sublattice of corner-shared Ti$_4$ tetrahedra for (a) undistorted (cubic) and (b) distorted (tetragonal) MgTi$_2$O$_4$\xspace structures, respectively, highlighting both short (red) and long (yellow) Ti-Ti bonds in the distorted structure. (c) The temperature dependence of magnetization with the Curie-Weiss behavior contribution subtracted. The inset shows a TiO$_6$ octahedron and the t$_{2g}$ orbitals that point towards O-O edges. Ti atoms are shown in blue and O atoms in red. The vertical grey dashed line at 250~K marks the MIT temperature. } \label{fig;stru_mt_char} \end{figure} In the tetragonal MgTi$_2$O$_4$\xspace model (space group: $P4_12_12$), the atoms sit at the following Wyckoff positions: Mg at 4a ($x$,$x$,0), Ti at 8b ($x$, $y$, $z$), O1 at 8b ($x$, $y$, $z$), and O2 at 8b ($x$, $y$, $z$). The initial lattice parameters and atomic positions are $a=6.02201$~\AA, $c=8.48482$~\AA, Mg at (0.7448, 0.7448, 0), Ti at (-0.0089, 0.2499, -0.1332), O1 at (0.4824, 0.2468, 0.1212), and O2 at (0.2405, 0.0257, 0.8824)~\cite{SchmidtSpinSingletFormation2004}. The corresponding distorted Ti sublattice is presented in Fig.~\ref{fig;stru_mt_char}(b), showing that one Ti-Ti bond gets shorter (indicated in red) and one gets longer (in yellow) out of the six Ti-Ti bonds of each Ti$_{4}$ tetrahedron. The lattice parameters of these two models have the relationship $a_{c} \sim c_{t} \sim \sqrt{2}a_{t}$, where the subscripts $c$ and $t$ refer to the cubic and tetragonal models, respectively. \section{Vanishing spin singlet dimers} \subsection{Canonical behavior} The magnetization, $M(T)$, of the sample up to 300~K is shown in Fig.~\ref{fig;stru_mt_char}(c). A low-temperature Curie-Weiss-like component has been subtracted to account for the effect of magnetic impurities contributing to the signal at low temperature. The onset of a broad dimerization transition is observed at around 250~K, which is close to the literature-reported MIT temperature of 260~K~\cite{IsobeObservationPhaseTransition2002,VasilievSpecificheatmagnetic2006} and implies canonical behavior of our sample. We next establish that the average structural behavior of our sample agrees with other observations in the literature~\cite{IsobeObservationPhaseTransition2002,SchmidtSpinSingletFormation2004,TalanovTheorystructuralphase2015} by carrying out Rietveld refinements fitting tetragonal and cubic MgTi$_2$O$_4$\xspace models to the neutron time-of-flight (TOF) Bragg data measured from 100~K to 500~K. The refinement results from select fits are reproduced in Table~\ref{tab;fit_rietveld} presented in Appendix~\ref{sec_supplementalResults}, and a representative fit, of the tetragonal model to the 100~K dataset, is shown in Fig.~\ref{fig;bank4_4panel_fits_rw_versus_pdf}(a). \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{fig2.pdf}\\ \end{centering} \caption{ (a) The 100~K neutron powder diffraction pattern (blue) collected using the $2\theta=120^{\circ}$ detector bank fit by the tetragonal MgTi$_2$O$_4$\xspace model (red) with the difference curve (green) offset below.
(b) The resulting neutron Rietveld refinement weighted profile agreement factor $R_{wp}$ values versus temperature on neutron TOF Bragg data using tetragonal (blue) and cubic (red) MgTi$_2$O$_4$\xspace models from 100~K to 500~K (120$^{\circ}$ detector bank data). (c) The x-ray PDF of 100~K data (blue) fit by the tetragonal model (red) over the range of $1.5<r<10$~\AA. The difference curve (green) is shown offset below. (d) The resulting x-ray PDF refinement goodness-of-fit parameter $R_w$ values versus temperature on x-ray data using tetragonal (blue) and cubic (red) MgTi$_2$O$_4$\xspace models from 100~K to 500~K over the range of $1.5<r<10$~\AA. The vertical grey dashed line at 250~K indicates the MIT temperature. } \label{fig;bank4_4panel_fits_rw_versus_pdf} \end{figure} The tetragonal distortion, given by $c/\sqrt{2}a$, is small at all temperatures, and the data are not of high enough resolution to directly observe distinct peaks that would indicate any tetragonal distortion. However, the temperature dependence of the weighted profile agreement factor, $R_{wp}$, of the two models is shown in Fig.~\ref{fig;bank4_4panel_fits_rw_versus_pdf}(b), which clearly implicates the tetragonal model as preferred at low temperature, but not at high temperature, above around 230--250~K, close to the literature-reported MIT temperature of 260~K~\cite{IsobeObservationPhaseTransition2002}. \subsection{Local structure behavior} We investigate the behavior of the local structure by performing a PDF structural refinement of the x-ray total scattering PDF data from 90~K to 500~K. Fits were carried out over the data ranges of $1.5<r<4$~\AA\ and $1.5<r<10$~\AA, and the results are qualitatively the same. A representative fit of the tetragonal model to the 100~K dataset is shown in Fig.~\ref{fig;bank4_4panel_fits_rw_versus_pdf}(c). This time, the temperature dependence of the goodness-of-fit parameter, $R_w$, of the tetragonal and cubic models (Fig.~\ref{fig;bank4_4panel_fits_rw_versus_pdf}(d)) indicates that the tetragonal model fits the local structure better than the cubic model at all temperatures. In the local structure the tetragonal distortion is always evident. To see visually in the PDF how the cubic model fails, we consider the low- and high-temperature PDFs and fits with the cubic and tetragonal models, shown in Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}. \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{fig3.pdf}\\ \end{centering} \caption{(a-d) The x-ray PDF data (blue) collected at the PDF beamline at (a,b) 90~K and (c,d) 300~K fit by (a,c) tetragonal and (b,d) cubic models (red) over the range of $1.5<r<4$~\AA. The difference curves (green) are shown offset below. } \label{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may} \end{figure} The low-temperature PDFs (below the MIT) are shown in the top row ((a) and (b)) and the high-temperature PDFs (above the MIT) in the lower row ((c) and (d)). The tetragonal model fit is shown in the first column ((a) and (c)) and the cubic fit is shown in the second column ((b) and (d)). At low temperature, as expected, the tetragonal model (Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(a)) fits much better than the cubic model (Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(b)).
At 300~K we have already established that the global structure is cubic; however, again we see that in these local structural fits the tetragonal model (Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(c)) is superior to the cubic model (Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(d)), with smaller oscillations in the difference curve. To emphasize our argument, we note that the difference curves are similar when data collected at distinct temperatures are fit using identical symmetry constraints. Specifically, careful inspection of the difference curves in Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(b) and (d) reveals that the positions of residual maxima and minima are nearly identical, although their amplitudes are smaller in Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may}(d). Since this difference signal represents the inability of the cubic model to fit the data, it further supports the idea that a similar tetragonally distorted structure is present in the low-$r$ region at 90~K and at 300~K, but smaller in amplitude at 300~K. To validate that the results are reproducible, note that Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit_may} shows x-ray PDF data collected at the PDF beamline, while data collected at the XPD beamline, shown in Fig.~\ref{fig;xray_tetra_cubic_90K_300K_0_4A_fit} in Appendix~\ref{sec_supplementalResults}, reproduce the same result as discussed above. In the cubic MgTi$_2$O$_4$\xspace unit cell, all six Ti-Ti bonds have the same length, 3.008~\AA, whereas in the tetragonal MgTi$_2$O$_4$\xspace model, one of the six Ti-Ti distances is shortened (2.853~\AA), forming dimers, and one Ti-Ti bond is longer (3.157~\AA)~\cite{TalanovTheorystructuralphase2015}. The dimer contact is considerably shorter than the 2.917~\AA\ found in titanium metal~\cite{Hullcrystalstructurescommon1922}, indicating a strong covalent interaction in MgTi$_2$O$_4$\xspace. From the analysis of the average structure, it was understood that the MIT was accompanied by the formation of Ti-Ti structural dimers. Above, we presented the PDF observation of local tetragonality at high temperature, which might indicate that local dimers survive above the MIT. This would seem to be in qualitative agreement with a prior observation from a neutron PDF study that reported the persistence of Ti-Ti dimers up to high temperature~\cite{TorigoeNanoscaleicetypestructural2018}. In this picture the dimers survive locally but become disordered at the transition where the structure becomes globally cubic on warming. However, our PDF analysis clearly shows that the large-amplitude structural dimers actually do disappear at the MIT, as described below, and that the local tetragonality that we observe at high temperature in the PDF has a more subtle origin, as we develop in greater detail below. \subsection{Orbital degeneracy lifting or spin dimerization?} To establish the disappearance of the structural dimers at the phase transition, we simulated x-ray PDFs of the cubic (no dimers) and tetragonal (dimers) models. These are plotted as the red and blue curves, respectively, in Fig.~\ref{fig;xray_neutron_100_300K_calc}(a). \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{fig4.pdf}\\ \end{centering} \caption{(a,b): The simulated (a) x-ray and (b) neutron PDFs of tetragonal (blue) and cubic (red) MgTi$_2$O$_4$\xspace models in the low-$r$ region.
All atoms use the same isotropic atomic displacement parameter $U_{iso} = 0.005$~\AA$^2$. $Q_{max}$, $Q_{damp}$ and $Q_{broad}$ are set to the same values as in the experimental PDFs. The difference curves (tetragonal-cubic) are shown offset below. The nearest Ti-Ti bonds are highlighted in the blue vertical span region. (c,d): The experimental (c) x-ray and (d) neutron PDFs from MgTi$_2$O$_4$\xspace collected at 100~K (blue) and 300~K (red). The difference curves (100~K - 300~K) are shown offset below in green. The vertical dashed lines at $r=3.01$~\AA\ mark the undistorted Ti-Ti bond length in the average cubic model. } \label{fig;xray_neutron_100_300K_calc} \end{figure} A number of the PDF peaks are affected on crossing the MIT, with the largest change observed for the peak at around 3.0~\AA, which contains the shortest Ti-Ti distances. Based on the change in crystal structure, the expected change in the PDF results in the characteristic M-shaped signature in the difference curve seen in Fig.~\ref{fig;xray_neutron_100_300K_calc}(a), which comes from the disappearance of the long and short Ti-Ti bonds associated with the dimer, leaving all the Ti-Ti distances the same. This can be compared to the difference in the measured PDF data as MgTi$_2$O$_4$\xspace crosses the MIT, shown in Fig.~\ref{fig;xray_neutron_100_300K_calc}(c) at 300~K (red) and 100~K (blue). First, we see a significant change in the relevant 3.0~\AA\ peak when moving through the MIT. This change is inconsistent with the conclusion from the previous neutron PDF study that the dimers are retained, unaltered, to high temperature in the local structure~\cite{TorigoeNanoscaleicetypestructural2018}, necessitating an order-disorder scenario in which the local structure across the transition is unchanged, with small or no change in the PDF. Instead we see a significant change in the x-ray PDF at the phase transition, with PDF intensity moving from the short-$r$ position towards that of the average Ti-Ti distance, as indicated by the grey circle in Fig.~\ref{fig;xray_neutron_100_300K_calc}(c). It is clear from careful inspection that the observed and simulated difference curves across the transition (Fig.~\ref{fig;xray_neutron_100_300K_calc}(c) and (a), respectively) are significantly different. The PDF peak intensity redistribution associated with the dimer removal is seen in the differential curve as a transfer of intensity from 2.71~\AA\ to 2.89~\AA, i.e., on the low-$r$ side of the average Ti-Ti distance at 3.01~\AA\ marked by the vertical dashed line in the figure.
This implies that the very short Ti-Ti ion dimers disappear at the phase transition, but the associated PDF intensity is shifted to a position that is still shorter than the average Ti-Ti distance: shortened Ti-Ti contacts in fact do exist at high temperature above the MIT, but they are not the original spin singlet dimers. Our results are consistent with the disappearance of dimers at the phase transition on warming, but the persistence of a local tetragonality, smaller in magnitude than that associated with the dimer phase. This behavior is shown on an expanded scale in Fig.~\ref{fig;xray_90_300K_temp_dimer_1_5A_4panel}(b), where PDFs measured at multiple temperatures are compared, focusing on the 3~\AA\ peak. \begin{figure}[tb] \begin{centering} \includegraphics[width=1\columnwidth]{fig5.pdf} \end{centering} \caption{ (a) PDFs measured as a function of temperature from 90~K (blue) to 300~K (red) in the low-$r$ region. The intermediate data-sets are plotted in grey. The vertical green arrow at $r=2.71$~\AA\ and the purple arrow at 2.89~\AA\ represent how the position of the Ti-Ti short bond changes on warming. The difference curve shown offset below is obtained by subtracting the 300~K dataset from the 90~K dataset. The positive/negative feature between $2.6<r<3.0$~\AA\ indicates that intensity in the PDF at 2.71~\AA\ at low temperature is shifting to the position 2.89~\AA\ at high temperature. (b) Re-plot of (a) over an expanded $r$-range. The vertical dashed line at $r=3.01$~\AA\ marks the undistorted Ti-Ti bond length in the cubic average structure. (c) The temperature dependence of the integrated area in the difference curve shown shaded green. The vertical grey dashed line at 250~K indicates the MIT temperature. (d) The x-ray PDF refinement goodness-of-fit $R_w$ values versus a variable $r_{min}$ (for an $r_{max}$ fixed to 50~\AA) when fitting a cubic MgTi$_2$O$_4$\xspace model to 300~K (blue), 400~K (green), and 500~K (red) data. In each refinement $r_{min}$ was allowed to vary from $1 \le r_{min} \le 36$~\AA\ in 0.2~\AA\ $r$-steps. The fits with high $r_{min}$ yield the behavior of the average structure, while the fits with low $r_{min}$ are weighted by the local structural signal. (e, f) The temperature dependence of the short (blue) and long (red) Ti-Ti bond lengths, and their differences (green) from PDF structural modeling using the tetragonal structure over the range of $1.5<r<10$~\AA. The horizontal dashed line marks the undistorted Ti-Ti bond length in the cubic average structure. The vertical grey dashed line at 250~K indicates the MIT temperature. } \label{fig;xray_90_300K_temp_dimer_1_5A_4panel} \end{figure} The feature in the difference curve that shows the shift of the intensity associated with the Ti-Ti short bonds from 2.71~\AA\ to the longer position (2.89~\AA) is shown below shaded green (for the loss of short bonds) and purple (for the gain in longer bonds). This result is qualitatively supported by the PDF structural modeling over the range of $1.5<r<10$~\AA, shown in Fig.~\ref{fig;xray_90_300K_temp_dimer_1_5A_4panel}(e) and (f), where the short Ti-Ti bond shifts from 2.83~\AA\ to 2.88~\AA\ over the same temperature range, but never converges to the average cubic value of 3.01~\AA\ at 300~K. 
The full temperature dependence of the dimer disappearance may then be extracted by integrating the shaded regions in the difference curve, and is shown in Fig.~\ref{fig;xray_90_300K_temp_dimer_1_5A_4panel}(c), where we plot the integrated area shown shaded in green in the difference curve vs. temperature. In this case the difference is always taken with respect to the 500~K dataset. The intensity decreases gradually until around 200~K, where it rapidly falls off. The rate of fall-off in this intensity then slows again above 250~K. There are two principal contributions to the differential signal below the transition: the changes in the local structure and the thermal broadening effects. Thermal broadening effects in the differential are gradual and typically small, particularly from one data point to the next, corresponding to the trends observed below $\sim$200~K and above $\sim$250~K. On the other hand, the signal in the differential coming from the dimers is considerably larger, roughly proportional to the large step seen at the transition, and dissipates rapidly as the temperature passes through the MIT range. This is a model-independent way of observing how the local dimer disappears on warming. \subsection{X-ray PDF versus neutron PDF} We may seek an explanation for why we see the dimers disappear at the transition in our x-ray PDF measurements, whereas this was not evident in the earlier neutron PDF study~\cite{TorigoeNanoscaleicetypestructural2018}. Observing the local Ti dimer is complicated in the neutron case by the relatively weak and negative neutron scattering length of titanium, and an appreciable overlap of titanium contributions with strong oxygen contributions. To show this, we perform the same comparison that we just made for the x-ray PDFs, but on neutron data. The neutron simulations are shown in Fig.~\ref{fig;xray_neutron_100_300K_calc}(b) and the neutron experimental PDFs in Fig.~\ref{fig;xray_neutron_100_300K_calc}(d). It is clear that the signals in the difference curve, both for the simulations based on the average structure and the data PDFs themselves, are much smaller than for the x-ray case. This is because the most important signal in MgTi$_2$O$_4$\xspace comes from the Ti ions, which are relatively strong scatterers in the x-ray case but not in the neutron measurement. Specifically, in the x-ray case, Ti scatters 2.75 times more strongly than O and 1.8 times more strongly than Mg, whereas for neutrons the scattering of Ti is 1.7 times weaker than that of O and 1.6 times weaker than that of Mg. In addition, the neutron PDF intensity in the range of interest is dominated by the contributions from O-O pairs constituting TiO$_{6}$ octahedra, evidenced in the data as the additional shoulder intensity features around the 3~\AA\ peak (Fig.~\ref{fig;xray_neutron_100_300K_calc}(d)), as compared to the x-ray case where such features are largely absent (Fig.~\ref{fig;xray_neutron_100_300K_calc}(c)). This may explain the different interpretation in the earlier neutron PDF study~\cite{TorigoeNanoscaleicetypestructural2018}. However, the x-ray data unambiguously show the disappearance of the full-amplitude spin singlet Ti-Ti dimers. \subsection{Detection of the ODL state} This behavior, where sizeable distortions associated with ordered spin singlet dimers evolve into smaller local distortions with spin singlets disassembled, is reminiscent of that observed in the CuIr$_2$S$_4$\xspace system. 
In that case a local symmetry broken orbital-degeneracy-lifted (ODL) state was observed up to the highest temperatures studied~\cite{BozinLocalorbitaldegeneracy2019c}. On cooling, these local distortions of broken symmetry ordered into a long-range orbitally ordered (LROO) state. Only below the LROO transition did charges disproportionate, forming a charge density wave accompanied by a Peierls distortion and the formation of Ir-Ir dimers with very short Ir-Ir bonds. We believe our observations in MgTi$_2$O$_4$\xspace suggest a similar kind of ODL behavior, which we explore to a greater extent below, though different in detail because of the different charge filling. As discussed above, we can rule out that the local structure is changing in the same way as the average structure. The {\it expected} changes in the PDF at low-$r$ due to average structure changes at the MIT make all the low-$r$ peaks sharper for $T>T_{MIT}$, as shown in Fig.~\ref{fig;xray_neutron_100_300K_calc}(a). However, the data do not show this (Fig.~\ref{fig;xray_neutron_100_300K_calc}(c)). There is a small change in the local structure, evidenced by the feature in the difference curve around 2.9~\AA, indicated by the grey circle in Fig.~\ref{fig;xray_neutron_100_300K_calc}(c), but it is smaller than the average structure change at the MIT. As discussed above, the short Ti-Ti dimers ($r=2.71$~\AA\ shoulder) go away on warming by a shift to a longer bond (around $r=2.89$~\AA), but this ``longer'' bond is still shorter than the average 3.01~\AA\ Ti-Ti bond distance expected from the cubic average model. These two behaviors exactly mimic the ODL state found in CuIr$_2$S$_4$\xspace~\cite{BozinLocalorbitaldegeneracy2019c}. As is evident in Fig.~\ref{fig;bank4_4panel_fits_rw_versus_pdf}(d), the tetragonal model fits the local structure better than the cubic model at all temperatures up to 500~K, the highest measurement temperature. As in CuIr$_2$S$_4$\xspace~\cite{BozinLocalorbitaldegeneracy2019c}, the dimers disappear at the MIT transition on warming, but the local symmetry broken state with, presumably, fluctuating short Ti-Ti bonds is present to high temperature. \subsection{Spatial extent of the ODL state} It is of interest to explore whether the fluctuating short Ti-Ti bonds at temperatures above the MIT correlate with each other, and how this varies with temperature. To extract the correlation length of the local fluctuating symmetry broken states, $r_{min}$ dependent PDF fits were performed on selected high-temperature datasets (300~K, 400~K, and 500~K). In these models the high-temperature average cubic model was fit over a range from $r_{min}$ to a fixed $r_{max}=50$~\AA, where $r_{min}$ was allowed to vary from $1 \le r_{min} \le 36$~\AA. When $r_{min}$ is large, the fit covers just the high-$r$ region of the PDF and retrieves the average structure, so the cubic fit is good. As $r_{min}$ extends to lower values, progressively more of the local structure that has a tetragonal distortion is included in the fit and the agreement of the cubic model degrades. The resulting $R_w(r_{min})$ is shown in Fig.~\ref{fig;xray_90_300K_temp_dimer_1_5A_4panel}(d). The cubic fits are good over the entire region $r_{min}>10$~\AA, with very little variation in $R_w$, indicated by the horizontal grey band in the figure, but rapidly degrade below this length-scale. 
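For illustration, this box-car style $r_{min}$ scan amounts to a simple loop over refinements. The sketch below is schematic: \texttt{refine\_cubic\_model} is a hypothetical placeholder for the PDF refinement engine actually used, not a function from any specific library.
\begin{verbatim}
import numpy as np

def refine_cubic_model(r_min, r_max=50.0):
    # Hypothetical placeholder: fit the cubic model to the
    # measured PDF over [r_min, r_max] and return Rw.
    raise NotImplementedError

# r_min from 1 to 36 Angstrom in 0.2 Angstrom steps, r_max fixed at 50
r_mins = np.arange(1.0, 36.2, 0.2)
rw = [refine_cubic_model(r) for r in r_mins]
# Expected outcome: Rw is flat for r_min > ~10 Angstrom (average
# structure) and grows rapidly once the local region is included.
\end{verbatim}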
The rapid degradation of the cubic fits below this length-scale suggests that the symmetry broken local distortions have a correlation length of around 1~nm, but that this correlation length does not vary significantly in the temperature range above 300~K. This agrees well with the previously reported length-scale of local tetragonality based on neutron PDF analysis~\cite{TorigoeNanoscaleicetypestructural2018}. \section{Two-orbital ODL state} \subsection{The ODL regime} The driving force behind the high temperature local symmetry breaking in CuIr$_2$S$_4$\xspace was shown to be of electronic origin, involving orbital degeneracy lifting~\cite{BozinLocalorbitaldegeneracy2019c}. Significantly, the isostructural and isoelectronic sister compound, CuIr$_2$Se$_4$\xspace, remains metallic down to the lowest temperature and does not show local symmetry breaking ODL effects. This intriguingly suggests that the ODL state could be {\it a prerequisite} for the MIT~\cite{BozinLocalorbitaldegeneracy2019c} in these spinels. Here we argue that local ODL effects are also present in MgTi$_2$O$_4$\xspace and produce the local symmetry breaking in the high temperature region that we report here. By analogy with CuIr$_2$S$_4$\xspace, the ODL states are precursor states to the spin singlet dimerization and MIT. \begin{figure}[tb] \begin{centering} \includegraphics[width=1.0\columnwidth]{fig6.png} \end{centering} \caption{Recapitulation of the ODL mechanism in CuIr$_2$S$_4$\xspace thiospinel: (a) The energy diagram of atomic and molecular orbitals for nearest Ir atom pairs in various situations: degenerate/nonbonding (DEG/NONBOND, top), orbital degeneracy lifted (ODL, middle), and spin singlet dimer (DIMER, bottom). Horizontal arrows indicate the ODL displacements. (b) The ODL state in CuIr$_2$S$_4$\xspace in $t_{2g}$\xspace orbital manifold representation on a segment of the pyrochlore sublattice. The ODL active $t_{2g}$\xspace orbitals are shown in yellow, passive orbitals are shown in grey. Each Ir atom participates in exactly one ODL state. The ODL states are randomly placed following principles discussed in the text. The arrows denote the displacements of Ir associated with the ODL states and provide mapping onto the ``two in -- two out'' ice rules. The white arrows represent in-plane displacements, whereas the blue arrows represent displacements with an out-of-plane component. The dotted arrows mark the displacements associated with the ODL states occurring in neighboring tetrahedra that are not represented in the displayed structural segment. } \label{fig;cis_odl_review} \end{figure} \begin{figure*}[tb] \begin{centering} \includegraphics[width=1.8\columnwidth]{fig7.png} \end{centering} \caption{The Ti $t_{2g}$\xspace orbital manifold on a pyrochlore sublattice of MgTi$_2$O$_4$\xspace spinel: (a) Spin-singlet dimerized lattice within the tetragonal model; (b) ODL lattice within the tetragonal model; (c) Lattice of degenerate $t_{2g}$\xspace orbitals as portrayed by the cubic model. The charge state of all Ti in all models is +3. Electron density in $t_{2g}$\xspace orbitals is indicated by color (grey is empty) and its distribution is proportional to the blue color intensity. The Ti dimers in (a) are indicated by the double blue lines, while a pair of antiferromagnetically coupled block arrows on an exemplar dimer denotes its spin singlet character. Note that all Ti are involved in dimerization but only dimers contained in a section of one structural slab are indicated in (a). 
Of all the Ti-Ti contacts, the dimerized contacts have a nominal bond charge of 2$e^{-}$, whereas the other contacts carry no net charge in this picture. The ODL states in (b) are indicated by the single blue lines. Arc arrows in (a) denote bond charge transfer as the spin singlet dimers convert to the ODL states, according to the model discussed in the text. The insets between the panels provide the energy diagram of atomic and molecular orbitals for nearest Ti atom pairs in various situations: degenerate/nonbonding (DEG/NONBOND, bottom right), spin singlet dimer (DIMER, top left), one-orbital ODL (1O-ODL, top right) and two-orbital ODL (2O-ODL, bottom left). Note: the 2O-ODL state refers to two orbitals of the same Ti (e.g. one labelled ``2'') that are engaged in two 1O-ODL states with two {\it different} Ti neighbors (e.g. those labelled ``1'' and ``3''). Differently colored spins and lines denoting ODL contacts indicate this two-component aspect in the schematics for 2O-ODL. The local spin arrangement shown is for illustration purposes only and does not represent an experimentally established alignment. } \label{fig;mto_mechanism} \end{figure*} The long range pattern of ordered spin singlet dimers differs in detail in the two systems, being octamers in CuIr$_2$S$_4$\xspace~\cite{RadaelliFormationisomorphicIr32002b} and helices in MgTi$_2$O$_4$\xspace~\cite{SchmidtSpinSingletFormation2004}, and is presumably dictated by delicate energetics~\cite{DiMatteoValencebondcrystallattice2005,RahamanTetramerorbitalordering2019}. Although intriguing, these intricate aspects of long range order are not of concern here. Our interest lies with the internal dimer structure, the relationship between the dimer state and the ODL state, and the electron-hole symmetry that links the two systems~\cite{KhomskiiOrbitallyInducedPeierls2005a}. Prior to pursuing the analogy between CuIr$_2$S$_4$\xspace\ and MgTi$_2$O$_4$\xspace, we appraise the two systems from the perspective of the regime under which the ODL state can form in a transition metal system~\cite{BozinLocalorbitaldegeneracy2019c}. Three criteria need to be fulfilled: (i) {\it filling} -- the transition metal in the system possesses partially filled $d$ orbitals, (ii) {\it symmetry} -- the high crystallographic point symmetry of the system imposes the orbital degeneracy, and (iii) {\it topology} -- the structural topology promotes adequate highly directional orbital overlaps. In practice, materialization of the ODL state could further be impacted by the existence of other competing degeneracy lifting channels that may be available to the system, such as relativistic spin-orbit coupling, crystal field effects, etc. Eventually, formation of the ODL local broken symmetry state is accompanied by an associated local structural distortion. Both the CuIr$_2$S$_4$\xspace\ and MgTi$_2$O$_4$\xspace\ systems meet the criteria: partial filling (Ir is $5d^{5.5}$, Ti is $3d^{1}$), a degeneracy promoting high symmetry structure (the cubic spinel structure imposes 3-fold degeneracy of the $t_{2g}$\xspace manifolds in both materials), and favorable structural topology (edge shared IrS$_{6}$ and TiO$_{6}$ octahedra fostering direct $t_{2g}$\xspace overlap). In both systems, short nearest neighbor transition metal contacts are observed in the high temperature metallic regime that are distinct from the very short spin singlet dimer bonds observed in the insulating state at low temperature. 
In what follows we first review the ODL state and the dimerization mechanism within the local picture of CuIr$_2$S$_4$\xspace, summarized in Fig.~\ref{fig;cis_odl_review}. We then utilize the analogies between the two systems to put forward a scenario portraying the high temperature state and the dimerization mechanism in MgTi$_2$O$_4$\xspace, illustrated in Fig.~\ref{fig;mto_mechanism}. \subsection{Single-orbital ODL state} In CuIr$_2$S$_4$\xspace the dimers involve two Ir$^{4+}$ ions in $5d^{5}$ configuration with hole character~\cite{BozinLocalorbitaldegeneracy2019c}. In the localized electron picture, the ODL state is based on molecular orbital (MO) concepts. In this picture, two neighboring transition metal ions with partially filled degenerate orbitals form a MO state with shared electrons (holes) that lifts the degeneracy and lowers the system energy. In the high temperature regime of CuIr$_2$S$_4$\xspace, iridium nominally has a half-integer valence (3.5+)~\cite{MatsunoPhotoemissionstudymetalinsulator1997b,YagasakiHoppingConductivityCuIr2S42006b}. This means that each Ir$^{3.5+}$ on the pyrochlore sublattice is in a nominal $5d^{5.5}$ state corresponding to half a hole per three degenerate $t_{2g}$\xspace orbitals (Fig.~\ref{fig;cis_odl_review}(a), top). The sublattice geometry fosters direct overlaps of orbitals from the $t_{2g}$\xspace manifolds (Fig.~\ref{fig;cis_odl_review}(b)), which, in turn, allows neighboring Ir pairs to form a bound state sharing a single hole in the antibonding MO~\cite{BozinLocalorbitaldegeneracy2019c} (middle panel of Fig.~\ref{fig;cis_odl_review}(a)). Importantly, for any choice of two neighboring Ir on a pyrochlore lattice, only one member of the three $t_{2g}$\xspace orbitals overlaps (e.g. {\em xy} with {\em xy}, etc.) along the Ir$_{4}$ tetrahedral edges. Due to the specifics of filling (0.5 holes/Ir), each Ir participates in exactly one such paired state at a time. We call this a {\it one-orbital ODL (1O-ODL)} state. The ODL state is hence comprised of two atomic orbitals, one from each Ir in the pair, with on average 1.5 electrons (0.5 holes) per Ir, resulting in a MO with three electrons and one hole, as shown in Fig.~\ref{fig;cis_odl_review}(a), with a net spin of 1/2. This configuration results in the observed contraction of the Ir-Ir separation in the local structure vis-{\`a}-vis that expected if orbital degeneracy is retained. Since each iridium has six Ir neighbors to pair with, the ODL state fluctuates spatially and, presumably, temporally among ({\em xy, xy}), ({\em yz, yz}), and ({\em zx, zx}) variants, which results in an undistorted cubic structure on average~\cite{BozinLocalorbitaldegeneracy2019c}. One such configuration is illustrated in Fig.~\ref{fig;cis_odl_review}(b), with strong short-range correlations governed by the ``one ODL state per Ir'' and the Coulomb bond charge repulsion principles, resulting in a single ODL state per Ir$_{4}$ tetrahedron. Pursuant to this, the ODL state in CuIr$_2$S$_4$\xspace follows the ``two in -- two out'' ice rules~\cite{ander;pr56}, which will be addressed later. In this localized picture, the ODL state is also a precursor for the spin singlet dimer. The dimer state is attained by removing an excess electron from the antibonding MO of the ODL state, thus stabilizing the bond, as shown in the bottom panel of Fig.~\ref{fig;cis_odl_review}(a). 
The process involves charge transfer between two ODL Ir$^{3.5+}$-Ir$^{3.5+}$ pairs, one of which becomes a dimerized Ir$^{4+}$-Ir$^{4+}$ pair by losing an electron (or gaining a hole, hence a hole dimer) and the other becomes a non-ODL (and non-dimer) Ir$^{3+}$-Ir$^{3+}$ pair by gaining an electron ($d^{6}$-$d^{6}$ configuration). \subsection{Two-orbital ODL state} We now turn to MgTi$_2$O$_4$\xspace, starting from the local view of spin singlet dimers. Dimerization in MgTi$_2$O$_4$\xspace, depicted using the Ti $t_{2g}$\xspace manifold representation, is shown in Fig.~\ref{fig;mto_mechanism}(a), overlaying a fragment of the pyrochlore sublattice as seen in the $P4_12_12$ model. In MgTi$_2$O$_4$\xspace the dimers involve two Ti$^{3+}$ in $3d^{1}$ configuration with a single electron in the $t_{2g}$\xspace manifold, hence Ti$^{3+}$-Ti$^{3+}$ dimers inevitably have electron character by construction. These result in the short Ti-Ti dimerized contacts observed in the tetragonal structure at low temperature. Each dimer carries 2$e^{-}$ of net charge. Since each Ti participates in a dimer, and since there is exactly one dimer per Ti$_{4}$ tetrahedron~\cite{DiMatteoValencebondcrystallattice2005,TorigoeNanoscaleicetypestructural2018}, the nominal charge count results in 1$e^{-}$/Ti site. This is consistent with no charge order (CO) being observed experimentally in MgTi$_2$O$_4$\xspace, in contrast to CuIr$_2$S$_4$\xspace, implying that all Ti sites are equivalent in this regard~\cite{SchmidtSpinSingletFormation2004}. From the average structure perspective and in the localized picture, supported by the experimentally observed paramagnetism and poor metallic conduction, above the MIT the dimers could be seen to disassemble in such a way as to statistically distribute 1$e^{-}$ of charge evenly across the three degenerate $t_{2g}$\xspace orbitals, resulting in a cubic structure with all Ti-Ti nearest neighbor contacts equivalent, as schematically presented in Fig.~\ref{fig;mto_mechanism}(c). This implies that, despite the nominal charge equivalence of all Ti sites and no site CO observed, some charge transfer, presumably involving bond charge, still has to take place at the transition. This then inevitably implies that the ground state has to involve bond charge order, which coincides with, and is hence indistinguishable from, the observed dimer order in the diffraction measurements. The implication of charge being equally distributed across the triply degenerate $t_{2g}$\xspace manifold of Ti in the manner depicted in Fig.~\ref{fig;mto_mechanism}(c), presented also in the associated energy diagram (bottom right inset to the Figure), would be that the pyrochlore sublattice is composed of regular Ti$_{4}$ tetrahedra with equidistant Ti-Ti contacts corresponding to equal bond charge, as described within the cubic spinel model. However, this is not what is observed experimentally. The observations based on the PDF analyses clearly demonstrate that the pyrochlore sublattice is composed of locally distorted Ti$_{4}$ tetrahedra with a distribution of distances, suggesting that the bond charge remains inequivalent above the MIT. One possibility is that the dimers indeed persist in the metallic regime, as hinted at in the earlier neutron study~\cite{TorigoeNanoscaleicetypestructural2018}. However, such an interpretation would be inconsistent with magnetization measurements of MgTi$_2$O$_4$\xspace that establish the disappearance of spin singlets above the MIT. 
It would further be inconsistent with the elongation of the short Ti-Ti dimer contacts evidenced in our x-ray PDF analysis, which implies the disappearance of spin singlet dimers at the MIT even locally. In analogy with CuIr$_2$S$_4$\xspace, it is plausible that the dimer state gets replaced locally by an ODL-type state in the high temperature metallic phase. The observation of heterogeneous local Ti-Ti contacts corresponding to inequivalent bond charge is consistent with an ODL-like state in MgTi$_2$O$_4$\xspace. Since the local structures at high temperature and in the ground state are distinct, the MIT in MgTi$_2$O$_4$\xspace cannot be assigned to a trivial order-disorder type. Importantly, as careful assessment shows, the ODL state in MgTi$_2$O$_4$\xspace cannot be exactly mapped onto the ODL state seen in CuIr$_2$S$_4$\xspace, since the charge filling is different in the two systems (1 $e^{-}$/Ti in MgTi$_2$O$_4$\xspace, 0.5 holes/Ir in CuIr$_2$S$_4$\xspace). In the latter case two Ir neighbors can reduce the energy of the system by forming a 1O-ODL state accommodating a common hole, as described in the previous section. This results in the 3-electron state shown in the middle panel of Fig.~\ref{fig;cis_odl_review}(a). Given that the destruction of the spin singlet dimer in MgTi$_2$O$_4$\xspace involves removal of one of the electrons from the dimer state sketched in the energy diagram in the top left inset to Fig.~\ref{fig;mto_mechanism}, consistent with destabilization and elongation of the short Ti-Ti bond, in the ODL picture the one electron left behind would indeed result in an electron-hole antipode of the 1O-ODL state seen in CuIr$_2$S$_4$\xspace, as shown in the top right inset to Fig.~\ref{fig;mto_mechanism}. The issue arises with the placement of the extra electron from the dimer. In the case of CuIr$_2$S$_4$\xspace, and due to filling, the removal of one of the two holes constituting a dimer results in the generation of two 1O-ODL states on two different pairs of Ir. There, since only 50\% of Ir participates in dimerization, in terms of filling one should think of this process as the replacement of a --4+--4+--3+--3+-- charge tetramer with a pair of --3.5+--3.5+-- ODL states achieved by hole redistribution. Since there is no site charge disproportionation in MgTi$_2$O$_4$\xspace and since each Ti is involved in dimerization, the dimer density per formula unit is twice as large in MgTi$_2$O$_4$\xspace as in CuIr$_2$S$_4$\xspace, and each Ti has to participate in two independent 1O-ODL states simultaneously. Notably, the geometry of $t_{2g}$\xspace orbital overlaps imposes a constraint that the two 1O-ODL states for each Ti have to be assembled with two {\it different} Ti neighbors, as illustrated in Fig.~\ref{fig;mto_mechanism}(a),(b). There, for example, the dimer in Fig.~\ref{fig;mto_mechanism}(a) involving the Ti labeled ``2'' in Fig.~\ref{fig;mto_mechanism}(b) disassembles at the MIT to make two 1O-ODL states, one with the Ti neighbor labeled ``1'' and another with the neighbor labeled ``3''. This results in two short Ti-Ti contacts that are both longer than the dimer distance, but shorter than the average Ti-Ti separation in the cubic structure. We call this a two-orbital ODL state (2O-ODL), given that two atomic orbitals of a single Ti ion are utilized. Such a 2O-ODL state is hence a superposition of two 1O-ODL states of different variety (e.g. ({\em xy, xy}) and ({\em yz, yz}), etc.) that point in different directions and lie along different edges of the pyrochlore sublattice. 
These ODL distances are marked as thick lines color coded as red and blue in Fig.~\ref{fig;mto_mechanism}(b), and the corresponding state is schematically shown on the energy diagram in the bottom left inset to the Figure, where the matching color coding of the electron spins signifies that they belong to different 1O-ODL states. The existence of the 2O-ODL state in MgTi$_2$O$_4$\xspace is corroborated by another experimentally observed difference between the two systems: the spatial extent of the local order associated with the metallic regime of MgTi$_2$O$_4$\xspace is observably larger than that seen in CuIr$_2$S$_4$\xspace~\cite{BozinLocalorbitaldegeneracy2019c}. The extended character of the local structural distortions associated with the 2O-ODL state in MgTi$_2$O$_4$\xspace is expected for the following reasons. First, due to filling, the two ingredient 1O-ODL prong states in the 2O-ODL super-state cannot both be of the same type (e.g. if one is ({\em xy, xy}), the other can only be ({\em yz, yz}) or ({\em zx, zx})), which would tend to increase local structural correlations. Second, the bond charge on different 1O-ODL bonds is of the same sign, resulting in a Coulomb repulsion that would also maximize the span of the 2O-ODL state itself. The observed local distortions over the length scale of $\sim$1~nm, spanning approximately three Ti$_{4}$ tetrahedra~\cite{TorigoeNanoscaleicetypestructural2018}, are therefore consistent with the presence of the 2O-ODL state in MgTi$_2$O$_4$\xspace. The energetic benefit of the 2O-ODL state over the local spin singlet dimer state remains elusive. The local 2O-ODL and disordered dimer states would both increase the system entropy and, in turn, the entropic contribution to the energy would stabilize the corresponding short range ordered state at elevated temperature. On the other hand, some vestigial magnetic correlations could be expected in the high temperature ODL regime. Another possibility then is that the principal stabilization of the 2O-ODL state stems from its presumed magnetism. The character of local spin correlations within the 2O-ODL state cannot be established by the analysis carried out in this work. In the 2O-ODL diagram shown in Fig.~\ref{fig;mto_mechanism} the two spins are arbitrarily drawn as parallel to avoid any confusion with the dimer state, but their relationship in fact has not been established experimentally. In fact, the magnetic response of MgTi$_2$O$_4$\xspace above the MIT, Fig.~\ref{fig;stru_mt_char}(c), is neither Pauli-like nor Curie-Weiss-like. Rather, it resembles that of charge density wave systems at high temperature~\cite{BenchimolAnisotropymagneticsusceptibility1978}, the regime associated with a pseudogap in the electronic density of states~\cite{BorisenkoPseudogapChargeDensity2008b}. In these systems the magnetic susceptibility behavior at T$>$T$_{s}$ was attributed to fluctuations of the charge density wave amplitude~\cite{JohnstonThermodynamicsChargeDensityWaves1984,ZhangIntertwineddensitywaves2020}. It is thus tempting to speculate that in MgTi$_2$O$_4$\xspace\ the observed magnetic response may similarly be due to ODL fluctuations. It would therefore be of appreciable interest to explore this aspect of the 2O-ODL state by techniques sensitive to local magnetism, such as $\mu$SR~\cite{McKenziepositivemuonmSR2013} and magnetic PDF~\cite{FrandsenMagneticpairdistribution2014}. 
While the PDF probe used here provides information on the instantaneous atomic structure, and as such does not differentiate between static and dynamic disorder, the ODL state in these systems is expected to be dynamic. Spatiotemporal fluctuations then average out to the undistorted cubic average structure observed crystallographically. Notably, the resistivity just above the MIT in CuIr$_2$S$_4$\xspace\ is about 2~m$\Omega$cm and increases linearly with temperature~\cite{BurkovAnomalousresistivitythermopower2000b}, ascribed to a bipolaronic hopping mechanism~\cite{YagasakiHoppingConductivityCuIr2S42006b,TakuboIngapstateeffect2008b}, whereas in the metallic regime of MgTi$_2$O$_4$\xspace\ the electrical resistivity is not only substantially higher, but decreases with increasing temperature in an insulator-like manner~\cite{IsobeObservationPhaseTransition2002}. This stark difference in the observed electronic transport could be considered an important indicator, albeit indirect, of the underlying difference reflecting the 1O-ODL and 2O-ODL characters of the high temperature states in these two systems, respectively. Although the Ir dimers in CuIr$_2$S$_4$\xspace are, strictly speaking, equivalent to the Ti dimers in MgTi$_2$O$_4$\xspace, the mechanism of their local formation from the ODL state is electron-hole symmetric, and in that sense the dimers in these two systems could be considered as having a different flavor derived from their origin. Formation of dimers in CuIr$_2$S$_4$\xspace requires transfer of holes from one half of the available population of ODL states to the other, and the ODL states that receive a hole become dimers, accounting for only 50\% of Ir being dimerized. In contrast, the dimers in MgTi$_2$O$_4$\xspace assemble from the ODL states by virtue of electron transfer, where 1O-ODL states that receive electrons become dimers, with all Ti participating in dimerization. While in both systems the process involves bond charge disproportionation, in CuIr$_2$S$_4$\xspace this consequently results in the observed site charge disproportionation and subsequent charge order, which is presumably imposed by the specifics of the filling and reflected in the dimer density per formula unit. \subsection{Consequences for ice-type nanoscale fluctuations} The presence of the 2O-ODL state has another important consequence for MgTi$_2$O$_4$\xspace. Each Ti$_{4}$ tetrahedron inevitably hosts two 1O-ODL states. Due to the Coulomb repulsion of the bond charge, we would expect the two states to be placed on opposite skew edges of each tetrahedron, although other configurations cannot be excluded. On the other hand, and irrespective of the details of their distribution, multiple 1O-ODLs on one Ti$_{4}$ tetrahedron would cause distortions that are incompatible with the ice-type structural fluctuations in the MgTi$_2$O$_4$\xspace system, such as those suggested in the previous study of the local structure of MgTi$_2$O$_4$\xspace~\cite{TorigoeNanoscaleicetypestructural2018}. 
Given that the spin singlet dimer distortion in the ground state of MgTi$_2$O$_4$\xspace~\cite{SchmidtSpinSingletFormation2004} follows the ``two in -- two out'' ice rules~\cite{ander;pr56} on each individual Ti$_{4}$ tetrahedron in the structure, the proposition that the local Ti atomic displacements have the same configuration in the cubic phase~\cite{TorigoeNanoscaleicetypestructural2018} presumably originates in part from the order-disorder type view of the MIT in MgTi$_2$O$_4$\xspace that would be implicated by the survival of dimer-like distortions in the high temperature phase. Our analysis does not support this picture. However, based on the considerations described above, CuIr$_2$S$_4$\xspace may be a better candidate for exhibiting distortions of the ``two in -- two out'' type in the disordered ODL regime, as illustrated by the mapping shown in Fig.~\ref{fig;cis_odl_review}(b). Exploring this matter further, both experimentally and theoretically, should provide a more detailed understanding of these systems. Single crystal diffuse scattering based methods~\cite{Weberthreedimensionalpairdistribution2012b,KrogstadReciprocalspaceimaging2020,DavenportFragile3DOrder2019,RothModelfreereconstructionmagnetic2018b,RothSolvingdisorderedstructure2019b} and dynamical mean-field and advanced first-principles approaches~\cite{PramudyaNearlyfrozenCoulomb2011,MahmoudianGlassyDynamicsGeometrically2015,WangTetragonalFeSePolymorphous2020} would be particularly useful in that regard. \section{Conclusion} Here we applied joint x-ray and neutron pair distribution function analysis to the dimerized MgTi$_2$O$_4$\xspace spinel, a candidate system for hosting a multi-orbital orbital-degeneracy-lifted (ODL) state, to track the evolution of its local atomic structure across its localized-to-itinerant electronic transition. Consistent with recent reports, the local structure does not agree with the average structure above the MIT temperature of 250~K, deep in the metallic cubic regime. However, in stark contrast to previous findings~\cite{TorigoeNanoscaleicetypestructural2018}, we provide unambiguous evidence that the spin singlet dimers vanish at the MIT. The shortest Ti-Ti distance corresponding to spin singlet dimers experiences a discontinuous elongation locally on warming through the MIT, but remains shorter than that prescribed by the cubic average structure. The local distortion in the metallic regime is quantitatively and qualitatively different from that observed in association with the spin singlet state, implying that the MIT is not a trivial order-disorder type transition. The distortion characterizes the entire metallic regime, and persists up to at least 500~K ($\sim$2T$_{s}$). The observed behavior is a fingerprint of the local symmetry broken ODL state observed in the related CuIr$_2$S$_4$\xspace system. The correlation length of local distortions associated with the ODL state in MgTi$_2$O$_4$\xspace is about 1~nm, which is double that seen in CuIr$_2$S$_4$\xspace, implying a two-orbital character of the ODL state. The observations exemplify that high temperature electronic precursor states that govern emergent complex low temperature behaviors in quantum materials can indeed have a multi-orbital degeneracy lifting character. \begin{acknowledgements} Work at Brookhaven National Laboratory was supported by the U.S. Department of Energy (US DOE), Office of Science, Office of Basic Energy Sciences under contract DE-SC0012704. 
LY and MGT acknowledge support from the ORNL Graduate Opportunity (GO) program, which was funded by the Neutron Science Directorate, with support from the Scientific User Facilities Division, Office of Basic Energy Science, US DOE. Work in the Materials Science Division at Argonne National Laboratory (sample synthesis and characterization) was sponsored by the US DOE Office of Science, Basic Energy Sciences, Materials Science and Engineering Division. X-ray PDF measurements were conducted at 28-ID-1 and 28-ID-2 beamlines of the National Synchrotron Light Source II, a US DOE Office of Science User Facility operated for the DOE Office of Science by Brookhaven National Laboratory. Neutron diffraction experiments were carried out at the NOMAD beamline of the Spallation Neutron Source, Oak Ridge National Laboratory, which was sponsored by the Scientific User Facilities Division, Office of Basic Energy Science, US DOE. \end{acknowledgements}
\section{Introduction} Solving optimisation problems is at the heart of every scientific discipline to improve our understanding and interpretation of scientific findings. Evolution-based and swarm intelligence search and optimisation have seen remarkable growth over the years in tackling complex problems ranging from scientific research \cite{abraham2006swarm} to industry \cite{sanchez2012industrial} and commerce \cite{freitas2002review}. Hybridizing evolutionary, swarm, and annealing algorithms \cite{grosan2007hybrid} (the focus of this work) is an active area of research, since hybrid algorithms can usually offer several advantages over standalone algorithms in terms of stability, search speed, and exploration capabilities; these advantages are the goals of such hybrid studies. A few examples of successful demonstrations of hybrid algorithms in different domains are: (1) genetic algorithm (GA) and particle swarm optimisation (PSO) \cite{kao2008hybrid}, (2) GA and Simulated Annealing (SA) \cite{chen2009hybrid}, (3) GA and SA \cite{ma2014hybrid}, (4) PSO and tabu search \cite{shen2008hybrid}, (5) PSO and evolution strategies (ES) \cite{jamasb2019novel}, and many more. Of particular interest to the authors is nuclear power plant design, where evolutionary, swarm, and annealing algorithms have been proposed in several research studies, in standalone and hybrid forms, to optimise nuclear fuel assemblies and cores of light water reactors \cite{zameer2014core,rogers2009optimization,de2009particle}, to reduce fuel costs and improve nuclear reactor safety \cite{kropaczek1991core}. As improving the capabilities of evolutionary and stochastic algorithms in solving optimisation problems is a perennial research target due to their wide usage, we propose a new algorithm called PESA (\textbf{P}SO, \textbf{E}S, and \textbf{S}A \textbf{A}lgorithm). PESA hybridizes three known optimisation techniques by exchanging their search data on-the-fly, storing all their solutions in a buffer, and replaying them frequently based on their importance. The concept of experience replay was first introduced in deep reinforcement learning (RL) \cite{mnih2015human,schaul2015prioritized} to improve agent learning by replaying relevant state/action pairs weighted by their temporal difference and reward values. Accordingly, we introduce experience replay into evolutionary algorithms to determine whether this concept improves performance. First, we hybridize PSO, ES, and SA to create a replay memory with diverse samples by taking the search advantages of each individual algorithm. Next, two modes of experience replay are applied: (1) prioritized replay to enhance the exploration capabilities of PESA, and (2) ``backdoor'' greedy replay to improve algorithm exploitation, such that once the search is close to its end, PESA prioritizes its best experience. To enhance search speed, PSO, ES, and SA are parallelized during the search such that the three algorithms can run simultaneously to collect experiences, before updating the memory and executing the replay. We benchmark PESA against its standalone components, PSO, ES, and SA, to show its promise. A variety of commonly used continuous benchmark functions, of high dimensional nature to represent realistic optimisation problems, are utilized in this paper. We evaluate PESA performance by its ability to find a known global optimum, its exploration/exploitation capabilities, and its computational efficiency. 
\section{Methodology} \label{sec:method} The workflow of PESA is sketched in Figure \ref{fig:pesa}, which shows how the algorithms are connected to each other through the replay memory. The choices of PSO, ES, and SA are not arbitrary. PSO \cite{kennedy1995particle} is known to excel in continuous optimisation, ES \cite{beyer2002evolution} (which originated from the genetic algorithms) performs well in both continuous/discrete spaces, while SA \cite{kirkpatrick1983optimization} is introduced to ensure exploration due to its stochastic nature, and more importantly to drive PESA exploitation to the best solution found, as will be described in the next section. \begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/pesa_arxiv.png} \caption{PESA algorithm workflow. Red arrows represent memory feedback, and $\alpha_{backdoor}$ is the probability of using backdoor greedy replay in SA. $\eta$ and $\eta'$ are the number of PSO particles surviving from the previous generation and from the memory, respectively. $\mu$ and $\mu'$ are the number of ES individuals surviving from the previous generation and from the memory, respectively. $\lambda$ is the full size of the ES offspring. All hyperparameters are defined in detail later in this section.} \label{fig:pesa} \end{figure} \subsection{Evolutionary, Swarm, and Annealing Computation} \label{sec:evol} Evolution strategies (ES) \cite{beyer2002evolution} are inspired by the theory of natural selection. In this work, the well-known $(\mu, \lambda)$ strategy is used, which is indeed similar to the continuous GA in terms of the operators (e.g., crossover, mutation, and reproduction). However, the mutation strength of each gene/attribute in the individual ($\vec{x}$) is not constant, but learned during the evolution. Accordingly, each individual in the population carries a strategy vector of the same size as $\vec{x}$, where the strategy vector is adapted during the evolution, and controls the mutation strength of each attribute. We adopt log-normal adaptation for the strategy vector (see \cite{beyer2002evolution}), where the min and max strategies are bounded between $1/n$ and 0.5, respectively, as suggested by the literature, where $n$ is the size of $\vec{x}$. On the population level, the crossover operation selects two random individuals for mating with probability $CX$, where we use the classical two-point crossover in this work. After crossover, some of the individuals may be selected for mutation with a small probability $MUT$. Notice that for $(\mu, \lambda)$, both $MUT$ and $CX$ on the population level remain fixed, unlike the internal mutation strength for each individual. After fitness evaluation of the population, the selected individuals $\mu$ are passed to the next generation, where these $\mu$ individuals participate in generating the next offspring of size $\lambda$ (i.e., $\lambda \geq \mu$). Particle swarm optimisation (PSO) \cite{kennedy1995particle} is inspired by the movement of organisms in bird or fish flocks. Each particle in the swarm undergoes a position update ($x^{t+1}_i = x^t_i + v^{t+1}_i$), where $i$ is the attribute index and $v$ is the velocity value for that attribute. 
We implement the constriction approach by Clerc and Kennedy \cite{clerc2002particle} for the velocity update, which can be expressed as follows \begin{align} v^{t+1}_i &= K[v^t_i + c_1r_1(pbest^t_i - x^t_i) + c_2r_2(gbest^t - x^t_i)] \\ K &= \frac{2}{|2- \phi - \sqrt{\phi^2-4\phi}|} \\ \phi &= c_1 + c_2, \quad \phi > 4 \end{align} where $c_1$, $c_2$ are the cognitive and social speed constants, respectively, $r_1$, $r_2$ are uniform random numbers between [0,1], and $pbest$, $gbest$ are the local best position for each particle and the global best position of the swarm, respectively. Lastly, $K$ is the constriction coefficient introduced to balance PSO exploration/exploitation and improve stability. Typically, when $c_1 = c_2 = 2.05$, then $K=0.73$. Another advantage of using constriction is that it exempts us from using velocity clamping; therefore, there is no need to specify minimum and maximum velocities, which reduces the number of PSO hyperparameters, and hence those of PESA by definition. The number of particles in the swarm is given by $\eta$. Simulated Annealing (SA) \cite{kirkpatrick1983optimization} is inspired by the concept of annealing in physics to reduce defects in crystals through heating followed by progressive cooling. In optimisation, SA combines hill climbing and pure random-walk to help us find an optimum solution through implementing five basic steps: (1) generate a candidate solution, (2) evaluate the candidate fitness, (3) generate a random neighbor solution and calculate its fitness, (4) compare the old and new fitness evaluations (i.e., the increment $\Delta E$); if the new solution is better, continue with it, and if it is worse, accept it with probability $\alpha=e^{-\Delta E/T}$, where $T$ is the annealing temperature, (5) repeat steps 3-4 until convergence. The temperature is annealed between $T_{max}$ and $T_{min}$ over the annealing period, where the fast annealing schedule is adopted in this work \begin{equation} \label{eq:fast} T = T_{max}\cdot \exp\bigg[\frac{-\log(T_{max}/T_{min})N}{N_{steps}}\bigg], \end{equation} where $N$ is the current annealing step, which builds up from 1 to $N_{steps}$. New SA candidates are generated using a random-walk with a small probability $\chi$, where each input attribute is subjected to perturbation once $rand \sim U[0,1] < \chi$ is satisfied. A small value of $\chi$ between 0.05 and 0.15 seemed to yield better performance in this work. \subsection{Experience Replay} \label{sec:er} Two modes of experience replay are relevant to this work: (1) greedy and (2) prioritized replay. Greedy replay sorts the memory from min to max (assuming a minimization problem), and replays the sample(s) with the lowest fitness every time. Intuitively, always replaying the best samples could lead to improvement early on, but will restrict the diversity of the search and likely end with premature convergence (i.e., the method converges to a local optimum). Therefore, greedy replay in PESA is used only in a special form with SA, described later. Prioritized replay balances between uniform and greedy sampling by using nearly uniform replay early on, allowing PESA to explore more at first and exploit at the end. The probability of replaying sample $i$ can be defined as follows: \begin{equation} \label{eq:prior1} P_i = \frac{p_i^\alpha}{\sum_{d=1}^D p_d^\alpha} \end{equation} where $D$ is the current memory size and the exponent $\alpha$ is the priority coefficient $\in [0,1]$, controlling how much prioritization is used. 
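As an aside, the constriction-based velocity update above is compact enough to sketch in a few lines of Python; the function name and the numpy array layout below are our own illustration, not code from the PESA implementation.
\begin{verbatim}
import numpy as np

def constriction_velocity(v, x, pbest, gbest, c1=2.05, c2=2.05):
    # Clerc-Kennedy constriction update; K ~ 0.73 for c1 = c2 = 2.05
    phi = c1 + c2                  # must satisfy phi > 4
    K = 2.0 / abs(2.0 - phi - np.sqrt(phi**2 - 4.0 * phi))
    r1 = np.random.rand(len(x))
    r2 = np.random.rand(len(x))
    return K * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))

# position update for one particle:
# x_new = x + constriction_velocity(v, x, pbest, gbest)
\end{verbatim}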
Returning to Eq.~\eqref{eq:prior1}, $\alpha = 0$ corresponds to uniform replay (i.e., all samples have equal weights), while $\alpha =1$ corresponds to fully prioritized replay. Notice that here we describe $\alpha =1$ as ``fully prioritized'' replay rather than greedy replay (i.e., the min operator). When $\alpha =1$, the best samples (lowest fitness) are ``more likely'' to be replayed due to their larger weight, but in greedy replay these quality samples are ``100\%'' replayed (because of applying the $min$ operator). Next, $p_i$ refers to the priority of sample $i$, which can take different forms. For our case, we choose a prioritization based on sample rank: \begin{equation} \label{eq:prior2} p_i = \frac{1}{rank(i)} \end{equation} where $rank(i)$ is the sample rank based on fitness value when the memory is sorted from minimum fitness to maximum. After sorting, sample 1 has $p_i = 1$, sample 2 has $p_i = 0.5$, and so on. According to \cite{schaul2015prioritized}, rank-based prioritization is robust since it is insensitive to outliers. Now, to balance exploration/exploitation in prioritized replay, we start the PESA evolution with a small value of $\alpha = \alpha_{init} = 0.01$ (more exploration), and then gradually increase it to $\alpha =\alpha_{end} = 1.0$ (more exploitation) by the end of evolution. After a generation $k$, priorities are calculated by Eq.\eqref{eq:prior2}, sampling probabilities are determined by Eq.\eqref{eq:prior1}, and replay is performed by sampling from a non-uniform distribution with probabilities [$P_1, P_2, ..., P_D$]. Notice that redundant samples (if any) are removed from the memory before re-sampling to avoid biasing the replay toward certain samples. \begin{algorithm}[!h] \small \caption{Evolution Strategy ($\mu + \mu', \lambda$) in PESA} \label{alg:es} \begin{algorithmic}[1] \State \textbullet Given ES hyperparameters: $\mu$, $\mu'$, $\lambda$, $MUT$, $CX$ \For{a generation $k$} \State \textbullet Apply crossover operator to the population ($\mu + \mu'$) with probability $CX$ \State \textbullet Apply mutation operator to the population ($\mu + \mu'$) with probability $MUT$ \If {$MUT + CX < 1$} \State \textbullet Apply reproduction to the population ($\mu + \mu'$) with probability $1 - MUT - CX$ \EndIf \State \textbullet Generate final offspring $\lambda$ from the population ($\mu + \mu'$) \State \textbullet Evaluate fitness of the offspring \State \textbullet Select $\mu$ individuals with best fitness for next generation \EndFor \State \textbullet Return the selected individuals $\bm{\mu} = \{(\vec{x}_1, y_1), (\vec{x}_2, y_2),...,(\vec{x}_\mu, y_\mu)\}$ \end{algorithmic} \end{algorithm} As shown in Figure \ref{fig:pesa}, prioritized replay is used with all three algorithms at the beginning of each generation. For ES, the replay memory provides $\mu'$ individuals at the beginning of every generation, which mix with the original $\mu$ individuals (from the previous generation) using crossover, mutation, and reproduction operations (see Figure \ref{fig:pesa}). Therefore, the ($\mu,\lambda$) algorithm becomes ($\mu + \mu',\lambda$) for PESA, as $\mu'$ individuals are mixed with $\mu$ before forming the next $\lambda$ offspring. The ES algorithm in PESA is given in Algorithm \ref{alg:es}. The prioritized replay for PSO is similar to that for ES, where the swarm particles at every generation consist of $\eta$ particles from the previous generation and $\eta'$ particles from the memory, before going through velocity and position updates as described in section \ref{sec:evol}. 
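For concreteness, the rank-based prioritized sampling of Eqs.~\eqref{eq:prior1} and \eqref{eq:prior2} can be written compactly with numpy. This is a minimal sketch under our own naming conventions, not the actual PESA code.
\begin{verbatim}
import numpy as np

def prioritized_sample(memory_y, k, alpha):
    # p_i = 1/rank(i), P_i = p_i^alpha / sum_d p_d^alpha (minimization)
    y = np.asarray(memory_y)
    ranks = np.empty(len(y))
    ranks[np.argsort(y)] = np.arange(1, len(y) + 1)  # rank 1 = best sample
    p = (1.0 / ranks) ** alpha
    return np.random.choice(len(y), size=k, replace=False, p=p / p.sum())

# e.g., draw the mu' = 30 memory individuals for the next ES generation:
# idx = prioritized_sample(memory_fitness, 30, alpha)
\end{verbatim}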
In both cases (ES, PSO), the values of $\mu'$ and $\eta'$ can be tuned for better performance. The PSO algorithm in PESA is given in Algorithm \ref{alg:pso}. \begin{algorithm}[!h] \small \caption{Particle Swarm Optimisation ($\eta + \eta'$) in PESA} \label{alg:pso} \begin{algorithmic}[1] \State \textbullet Given PSO hyperparameters: $\eta$, $\eta'$, $c_1$, $c_2$ \For{a generation $k$} \State \textbullet Update velocity of swarm particles ($\eta + \eta'$) with constriction coefficient \State \textbullet Generate new swarm by updating all particle positions ($\eta + \eta'$) \State \textbullet Evaluate fitness of the swarm \State \textbullet Select $\eta$ particles with best fitness for next generation \EndFor \State \textbullet Return the selected particles $\bm{\eta} = \{(\vec{x}_1, y_1), (\vec{x}_2, y_2),...,(\vec{x}_\eta, y_\eta)\}$ \end{algorithmic} \end{algorithm} For SA, prioritized replay is used to make an initial guess for the chain before starting a new annealing generation. Since SA is run serially in this work, a single chain is used. Once the generation is done, SA updates the memory with the last pair ($\vec{x}^{last},y^{last}$) and the best pair ($\vec{x}^{best},y^{best}$) observed by the chain in that generation. The SA algorithm in PESA is given in Algorithm \ref{alg:sa}. \begin{algorithm}[!h] \small \caption{Simulated Annealing in PESA} \label{alg:sa} \begin{algorithmic}[1] \State \textbullet Given SA hyperparameters: $T_{max}$, $T_{min}$, $\alpha_{backdoor}$ \State \textbullet Draw initial state from the memory, $\theta' = (\vec{x}_0, y_0)$ \State \textbullet Set $\vec{x}_{prev} \xleftarrow{} \vec{x}_0$, $E_{prev} \xleftarrow{} y_0$ \For{a generation $k$} \If {$rand \sim$ U[0,1] $< \alpha_{backdoor}$} \State \textbullet Draw the best sample from the memory $(\vec{x}',y')$ \State \textbullet Set $\vec{x}\xleftarrow{} \vec{x}'$, $E \xleftarrow{} y'$ \Else \State \textbullet Perform a random walk as next chain state $(\vec{x})$ \State \textbullet Evaluate fitness $E$ for the new state \EndIf \State \textbullet Calculate $\Delta E = E - E_{prev}$ \If {$\Delta E < 0$ OR $\exp(-\Delta E/T) > rand \sim$ U[0,1]} \State \textbullet Accept the candidate \State \textbullet Set $E_{prev} \xleftarrow{} E$, $\vec{x}_{prev} \xleftarrow{} \vec{x}$ \Else \State \textbullet Reject the candidate and restore previous state \State \textbullet Set $E_{prev} \xleftarrow{} E_{prev}$, $\vec{x}_{prev} \xleftarrow{} \vec{x}_{prev}$ \EndIf \State \textbullet Anneal $T$ between $T_{max}$ and $T_{min}$ \EndFor \State \textbullet Return the last chain state ($\vec{x}^{last},y^{last}$) and best state ($\vec{x}^{best},y^{best}$) \end{algorithmic} \end{algorithm} The backdoor greedy replay for SA in Figure \ref{fig:pesa} and Algorithm \ref{alg:sa} has two main benefits: (1) ensuring PESA exploitation at the end of evolution, and (2) providing more guidance to SA, which relies extensively on random-walk. Unlike the prioritized replays, which occur explicitly at the beginning of every generation of all algorithms including SA, the backdoor replay owes its name to the fact that it occurs implicitly during the SA generation, by giving the SA chain a choice between its regular random-walk and the best quality sample in the memory, taken with probability $rand \sim U[0,1] < \alpha_{backdoor}$. 
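A single SA chain step, combining the fast annealing schedule of Eq.~\eqref{eq:fast} with the Metropolis test and the backdoor branch of Algorithm~\ref{alg:sa}, can be sketched as follows; the \texttt{memory.best()} accessor and the Gaussian form of the random-walk perturbation are illustrative assumptions, as the paper does not prescribe them.
\begin{verbatim}
import numpy as np

def anneal_T(step, n_steps, T_max=1.0e4, T_min=1.0):
    # fast annealing schedule from the text
    return T_max * np.exp(-np.log(T_max / T_min) * step / n_steps)

def sa_step(x_prev, E_prev, T, fitness, memory,
            alpha_backdoor=0.1, chi=0.1):
    if np.random.rand() < alpha_backdoor:
        x, E = memory.best()    # backdoor greedy replay (assumed accessor)
    else:
        mask = np.random.rand(len(x_prev)) < chi  # perturb each attribute w.p. chi
        x = x_prev + mask * np.random.normal(0.0, 0.1, len(x_prev))
        E = fitness(x)
    dE = E - E_prev
    if dE < 0 or np.exp(-dE / T) > np.random.rand():
        return x, E             # accept the candidate
    return x_prev, E_prev       # reject, restore previous state
\end{verbatim}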
Since SA tends to accept low quality solutions early on, but tightens the acceptance criteria at the end by rejecting low quality solutions, SA will implicitly drive PESA to always converge to the best solution in the memory once the evolution is close to its end. Second, as SA lacks the learning capabilities of PSO and ES (velocity update, crossover, etc.), the backdoor replay will frequently correct the SA chain to focus the search in relevant space regions as found so far during the evolution by its sister algorithms (PSO, ES). We typically recommend a small value for $\alpha_{backdoor}$ (i.e., $<$ 0.15), since large values lead to repetitive greedy replays, which in turn bias the chain, eventually leading to premature convergence in SA. \subsection{PESA Algorithm} By combining all the parts presented in sections \ref{sec:evol}-\ref{sec:er}, the PESA algorithm can be constructed as given in Algorithm \ref{alg:pesa}. The flow of PESA can be summarised in three main phases: \begin{enumerate} \item Warmup (lines 1-4 in Algorithm \ref{alg:pesa}): Hyperparameters of all individual algorithms (ES, PSO, SA) are specified. The memory is initialized with a few warmup samples ($N_{warmup}$) and a maximum capacity ($D_{max}$). \item Evolution (lines 8-17 in Algorithm \ref{alg:pesa}): The three algorithms, ES, PSO, and SA, are executed in parallel according to Algorithm \ref{alg:es}, Algorithm \ref{alg:pso}, and Algorithm \ref{alg:sa}, respectively. \textit{Each individual algorithm runs its iterations serially}. \item Memory management (lines 6-7, 18-20 in Algorithm \ref{alg:pesa}): This phase involves updating the memory with new samples, calculating and updating sample priorities, annealing the prioritization coefficient ($\alpha$), and cleaning the memory of duplicates. 
\begin{algorithm}[!h] \small \caption{PESA Algorithm with Prioritized Replay} \label{alg:pesa} \begin{algorithmic}[1] \State \textbullet Set hyperparameters of ES, SA, PSO \State \textbullet Set replay parameters: $\alpha_{backdoor}$, $\alpha_{init}$, $\alpha_{end}$ \State \textbullet Construct the replay memory with size $D_{max}$ and initialize with warmup samples $N_{warmup}$ \State \textbullet Set $\alpha \xleftarrow{} \alpha_{init}$ \For{GEN $i = 1$ to $N_{gen}$} \State \textbullet Calculate $p_i = 1/rank(i)$ and priorities $P(i) = p_i^\alpha/\sum_d p_d^\alpha$ \State \textbullet With probabilities $P(i)$, draw samples $\bm{\mu'}$, $\bm{\eta'}$, and $\bm{\theta'}$ \For{\textit{Parallel} Process 1: ES} \State \textbullet Given $\bm{\mu'} = \{(\vec{x}_1, y_1),...,(\vec{x}_{\mu'}, y_{\mu'})\}$, run ES Algorithm \ref{alg:es} \State \textbullet Obtain ES population $\bm{\mu} = \{(\vec{x}_1, y_1), (\vec{x}_2, y_2),...,(\vec{x}_\mu, y_\mu)\}$ \EndFor \For{\textit{Parallel} Process 2: PSO} \State \textbullet Given $\bm{\eta'} = \{(\vec{x}_1, y_1),...,(\vec{x}_{\eta'}, y_{\eta'})\}$, run PSO Algorithm \ref{alg:pso} \State \textbullet Obtain PSO population $\bm{\eta} = \{(\vec{x}_1, y_1), (\vec{x}_2, y_2),...,(\vec{x}_\eta, y_\eta)\}$ \EndFor \For {\textit{Parallel} Process 3: SA} \State \textbullet Given $\bm{\theta'} = \{(\vec{x}', y')\}$, run SA Algorithm \ref{alg:sa} \State \textbullet Obtain SA population $\bm{\theta} = \{(\vec{x}^{last}, y^{last})\}$ \State \textbullet Obtain SA best population $\bm{\theta}^{best} = \{(\vec{x}^{best}, y^{best})\}$ \EndFor \State \textbullet Update the memory with samples $\bm{\mu}, \bm{\eta}$, $\bm{\theta}$, and $\bm{\theta}^{best}$ \State \textbullet Remove duplicated samples from the memory \State \textbullet Anneal $\alpha$ between $\alpha_{init}$ and $\alpha_{end}$ \EndFor \end{algorithmic} \end{algorithm}

As can be noticed from Algorithm \ref{alg:pesa}, PESA features algorithm-based parallelism (see steps 8, 11, and 14 in Algorithm \ref{alg:pesa}), which involves running the three algorithms simultaneously. This feature exists in the algorithm, but it is not activated in this work, since the functions investigated are too fast to evaluate. Internal parallelization of each algorithm is left for future work since, again, the benchmark functions tested in this work are cheap to evaluate, so multiprocessing within each algorithm is also not advantageous. In an effort to minimize the number of hyperparameters in PESA, we assume a similar generation size across algorithms. For example, let us assume a generation of 60 individuals assigned to each algorithm. In this case, the ES population has size $\lambda=60$, the PSO swarm has 60 particles, and the SA chain has $C_{size}=60$ steps. Although these assumptions do not necessarily guarantee optimal performance, the authors believe that such symmetry in PESA has two main advantages. First, the burden of hyperparameter optimisation is significantly reduced. Second, algorithm dynamics are improved, as all internal algorithms finish their generation at roughly the same time, which allows them to stay up to date with the memory. This is clearly under the assumption that the fitness evaluation cost ($y$) is roughly the same regardless of the input value ($\vec{x}$).
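Although not activated in this work, the algorithm-level parallelism of Algorithm \ref{alg:pesa} could be realised with a standard process pool, as in the minimal sketch below. The per-algorithm step functions are assumed to be picklable callables with illustrative names; this is not the released implementation.

\begin{verbatim}
from concurrent.futures import ProcessPoolExecutor

def run_generation(es_step, pso_step, sa_step, replayed):
    """Run one PESA generation with ES, PSO, and SA dispatched in parallel.

    es_step/pso_step/sa_step: callables that take the replayed samples for
    that algorithm and return its new (x, y) population for the generation.
    replayed: tuple of replayed sample sets (mu', eta', theta').
    """
    with ProcessPoolExecutor(max_workers=3) as pool:
        futures = [pool.submit(step, samples)
                   for step, samples in zip((es_step, pso_step, sa_step),
                                            replayed)]
        # Block until all three algorithms finish, then hand the new
        # populations to the memory-update phase
        return [f.result() for f in futures]
\end{verbatim}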
\section{Numerical Tests} \label{sec:tests}

The mathematical forms of the selected benchmark functions, all well-known optimisation benchmarks \cite{jamil2013literature}, are given in Table \ref{tab:funcs}. All functions are $n$-dimensional; here we select $n=50$ to represent a higher-dimensional problem. All functions have a known global minimum at $f(\vec{x}) = 0$, except for the Ridge and Exponential functions, which have their global minima at -5 and -1, respectively.

\begin{table}[htbp] \centering \footnotesize \caption{List of continuous benchmark functions in $n$-dimensional space} \begin{adjustbox}{max width=\textwidth} \begin{tabular}{llll} \toprule Name & Formula & $\vec{x}$ Range & Global Optima \\ \midrule (1) Cigar & $ f(\vec{x}) = x_0^2 + 10^6\sum_{i=1}^n\,x_i^2$ & $[-10,10]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (2) Sphere & $ f(\vec{x}) = \sum_{i=1}^n x_i^2$ & $[-100,100]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (3) Ridge & $ f(\vec{x}) = x_1 + (\sum_{i=2}^{n}x_i^2)^{0.5}$ & $[-5,5]^n$ & $\vec{x}^* = [-5,0, ..., 0], f(\vec{x}^*) = -5$ \\ (4) Ackley & $ f(\vec{x}) = 20-20\exp\Big(-0.2\sqrt{\frac{1}{n}\sum_{i=1}^{n}x_i^2}\Big)-\exp\Big(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_i)\Big) + e$ & $[-32,32]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (5) Bohachevsky & $f(\vec{x}) = \sum_{i=1}^{n-1}(x_i^2 + 2x_{i+1}^2 - 0.3\cos(3\pi x_i) - 0.4\cos(4\pi x_{i+1}) + 0.7)$ & $[-100,100]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (6) Griewank & $ f(\vec{x}) = \frac{1}{4000}\sum_{i=1}^n\,x_i^2 - \prod_{i=1}^n\cos\left(\frac{x_i}{\sqrt{i}}\right) + 1$ & $[-600,600]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (7) Brown & $ f(\vec{x}) = \sum_{i=1}^{n-1}(x_i^2)^{(x_{i+1}^{2}+1)}+(x_{i+1}^2)^{(x_{i}^{2}+1)}$ & $[-1,4]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (8) Exponential & $ f(\vec{x})=-\exp(-0.5\sum_{i=1}^n{x_i^2})$ & $[-1,1]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = -1$ \\ (9) Zakharov & $ f(\vec{x})=\sum_{i=1}^n x_i^{2}+(\sum_{i=1}^n 0.5ix_i)^2 + (\sum_{i=1}^n 0.5ix_i)^4$ & $[-5,10]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (10) Salomon & $ f(\vec{x})=1-\cos(2\pi\sqrt{\sum_{i=1}^{n}x_i^2})+0.1\sqrt{\sum_{i=1}^{n}x_i^2}$ & $[-100,100]^n$ & $\vec{x}^* = \vec{0}, f(\vec{x}^*) = 0$ \\ (11) Quartic & $ f(\vec{x})=\sum_{i=1}^{n}ix_i^4+\text{random}[0,1)$ & $[-1.28,1.28]^n$ & $\vec{x}^*=\vec{0}, f(\vec{x}^*) = 0 + \text{noise}$ \\ (12) Levy & $\begin{aligned} f(\vec{x}) & = \sin^2(\pi w_1) + \sum_{i=1}^{n-1}(w_i-1)^2[1+10\sin^2(\pi w_i+1)] \\ & + (w_n-1)^2[1+\sin^2(2\pi w_n)], \quad w_i = 1 + (x_i-1)/4 \end{aligned}$ & $[-10,10]^n$ & $\vec{x}^* = \vec{1}, f(\vec{x}^*) = 0$ \\ \bottomrule \end{tabular}% \end{adjustbox} \label{tab:funcs}% \end{table}%

The hyperparameters selected to perform the tests are as follows: for ES ($CX=0.6, MUT=0.15, \lambda=60, \mu=30, \mu'=30$), for PSO ($c_1=2.05, c_2=2.05, \eta=30, \eta'=30$), for SA ($T_{max} = 10000, T_{min} = 1, \chi=0.1, C_{size} = 60$), and for the replay parameters ($\alpha_{init}=0.01, \alpha_{end}=1.0, \alpha_{backdoor}=0.1$). The tests are executed with $N_{gen} = 100$ generations and $N_{warmup} = 500$ samples. These hyperparameters yielded satisfactory performance for all test functions. \textit{It is important to highlight that the hyperparameters, initial starting points, and number of generations are preserved between PESA and the standalone algorithms to isolate their effects}. This means that any difference in performance comes purely from the ``experience share and replay'', the contribution of this work.
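For illustration, the benchmark functions can be implemented directly from their formulas in Table \ref{tab:funcs}; a minimal NumPy sketch of the Ackley function (entry 4) is:

\begin{verbatim}
import numpy as np

def ackley(x):
    """Ackley function; global minimum f(0) = 0 for any dimension n."""
    x = np.asarray(x, dtype=float)
    return (20.0 - 20.0 * np.exp(-0.2 * np.sqrt(np.mean(x ** 2)))
            - np.exp(np.mean(np.cos(2.0 * np.pi * x))) + np.e)
\end{verbatim}

For the tests here, the input is drawn from $[-32,32]^{50}$.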
The convergence of the fitness results is plotted in Figure \ref{fig:bench}, which compares PESA against the standalone algorithms given the prescribed hyperparameters. For brevity, the plotted results show the minimum fitness found in each generation, since that is our end goal, so the first generation point is not necessarily the initial guess of each algorithm. In addition, a log scale is used for some functions (Cigar, Sphere, Brown, etc.) to provide a better scale, where we set $10^{-2}$ as a lower bound to represent the zero global minimum. The results clearly show PESA outperforming the standalone ES, PSO, and SA by a wide margin in terms of the number of generations needed to reach the global minimum. First, in all benchmarks, PESA successfully found and converged to the global optimum, while the standalone algorithms failed to do so in most cases. In addition, PESA tends to converge much faster to the feasible region, thanks to the collaborative environment that PESA creates. We can notice that PSO is the most competitive algorithm to PESA. This is expected, since PSO was developed for continuous optimization, which is the nature of all the considered benchmarks. However, PSO alone seems to converge slowly. Due to the high dimensionality, standalone SA struggles to resolve the search space or to converge to a relevant solution in all cases. Standalone ES performance in Figure \ref{fig:bench} is bounded between PSO and SA in most cases, except for the Levy function, for which ES outperforms PSO.

\begin{figure}[h] \centering \includegraphics[width=\textwidth]{figures/bench_arxiv.png} \caption{Comparison of fitness convergence of PESA against ES, PSO, and SA in \textbf{standalone} form for minimization of 12 continuous benchmarks, dimensionality $n=50$ (see Table \ref{tab:funcs})} \label{fig:bench} \end{figure}

To demonstrate PESA's exploration capabilities, Figure \ref{fig:explore} plots the Ackley fitness mean and $1-\sigma$ standard deviation for PESA against standalone ES, PSO, and SA. The statistics are calculated based on ``all'' individuals within each generation (i.e., low- and high-quality solutions). PESA features a much larger error bar than the standalone algorithms, which reflects the sample diversity within each PESA generation. This diversity helps PESA explore the search space better, yielding better overall performance. On the other hand, the very small error bar of ES in Figure \ref{fig:explore} implies a more exploitative behavior than PSO, which explores the space fairly well.

\begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{figures/explore.png} \caption{Convergence of fitness mean and standard deviation of PESA against \textbf{standalone} ES, PSO, and SA for the Ackley function} \label{fig:explore} \end{figure}

To explain how PESA improves the performance of the three individual algorithms, Figure \ref{fig:prog} shows how ES, PSO, and SA perform \textbf{during PESA search} (i.e., NOT as standalone algorithms). First, we notice that PSO leads the PESA search early on, as can be seen from the lower PSO fitness. Afterward, experience replay takes over, first by guiding SA and preventing the divergence it exhibits in Figure \ref{fig:bench}, and second by allowing ES and SA to lead the PESA search, as can be observed in some generations between 5-10. Additionally, thanks to the backdoor greedy replay, SA stays close to ES and PSO over the whole search period, investigating relevant search regions. At the end of the evolution, the three algorithms converge to each other due to the sharing of high-quality solutions across the three algorithms.
Unlike Figure \ref{fig:explore}, which shows PESA's exploration ability, Figure \ref{fig:prog} shows how PESA uses SA to continuously exploit toward the best solutions over the whole evolution period.

\begin{figure}[h] \centering \includegraphics[width=0.55\textwidth]{figures/prog.png} \caption{Fitness convergence of ES, PSO, and SA \textit{\textbf{within PESA}} for the Sphere function} \label{fig:prog} \end{figure}

A comparison of the computational time of the four algorithms for three selected functions is given in Table \ref{tab:time}. The memory-management phase, in the form of sorting the memory, updating and re-sampling from the memory, calculating the priorities, annealing $\alpha$, and cleaning the memory of duplicates, results in a longer run time for PESA compared to the standalone algorithms, as expected. It is worth mentioning that the numbers in Table \ref{tab:time} involve running PESA and the other algorithms in serial (single core), meaning that the PSO, ES, and SA parts of PESA are not running in parallel, since we found that parallelization is not necessary for these benchmarks. Therefore, from Table \ref{tab:time}, out of the $\sim$12s of PESA computing time, about 65\% ($\sim$7-8s) is taken by the algorithms, while the rest is taken by the memory operations. The user can reduce the memory overhead by controlling the full capacity of the memory ($D_{max}$), since processing large memories later in the evolution consumes more time. The tests in Table \ref{tab:time} involve registering every possible unique solution without limits (to obtain the best performance), which explains the significant portion of time consumed by the memory.

\begin{table}[htbp] \centering \footnotesize \caption{Computing time in seconds required by different algorithms to run 100 generations for three selected functions} \begin{tabular}{llll} \toprule Method & Cigar & Sphere & Ackley \\ \midrule PESA (serial) & 11.7s & 11.0s & 12.8s \\ ES (serial) & 1.2s & 1.3s & 1.2s \\ PSO (serial) & 5.6s & 5.2s & 6.2s \\ SA (serial) & 0.5s & 0.5s & 0.9s \\ \bottomrule \end{tabular}% \label{tab:time}% \end{table}%

\section{Conclusions}

The concepts of experience share and replay are demonstrated through the proposed PESA algorithm to improve the search performance of evolutionary/stochastic algorithms. Experience share is performed by connecting particle swarm optimisation (PSO), evolution strategies (ES), and simulated annealing (SA) with a replay memory storing all their observed solutions. Experience replay is conducted by re-sampling from the memory with a priority coefficient to guide the learning of all algorithms. In addition, greedy replay is used in backdoor form with SA to improve PESA's exploitation behaviour. The validation against 12 high-dimensional continuous benchmark functions shows superior performance by PESA against standalone ES, PSO, and SA under similar initial starting points, hyperparameters, and numbers of generations. PESA shows much better exploration behaviour, faster convergence, and an ability to find the global optima compared to its standalone counterparts. Given the promising performance, the authors are now focusing on fully parallelizing PESA such that ES, PSO, and SA can evaluate each generation in a shorter time. This is especially important when the fitness evaluation is expensive (e.g., requires a computer simulation). Additionally, PESA will go through additional benchmarking against other hybrid evolutionary methods in the literature, e.g. ES/SA or RL/PSO.
Lastly, a combinatorial version of PESA will be developed and benchmarked on engineering combinatorial problems with heavy constraints.

\section*{Acknowledgment}

This work is sponsored by Exelon Corporation, a nuclear electric power generation company, under award (40008739).

\section*{Data Availability}

The PESA GitHub repository will be released to the public once the peer-review process is complete; it will include the source implementation and a wide range of unit tests, from benchmarks to engineering applications.

\bibliographystyle{elsarticle-num}
\section{Introduction} Machine learning (ML) systems are increasingly used to automate decisions with direct impact on our daily lives, such as credit scoring, loan assessments, crime prediction, hiring, and college admissions. There is increasing awareness that ML algorithms can affect people in unfair ways with legal or ethical consequences when used to automate decisions~\cite{barocas2016big,angwin2016machine}, for example, by exhibiting discrimination towards certain demographic groups. As social media platforms are a major contributor to the number of automated data-driven decisions that we as individuals are subjected to, it is clear that such ML fairness issues in social media can potentially cause substantial societal harm. Recommender systems are the primary method for a variety of ML tasks on social media data, e.g. suggesting targets of advertisements, products (e.g. movies or music), friends, web pages, and potentially consequential items such as romantic partners or even career paths. Despite the practical challenges due to labor market dynamics~\cite{kenthapadi2017personalized}, professional networking site-based job recommendation approaches~\cite{bastian2014linkedin,frid2019find,gutierrez2019explaining} are helpful for job seekers and employers. However, biases inherent in social media data can lead recommender systems to produce unfair suggestions. For example, XING, a job platform similar to LinkedIn, was found to rank less qualified male candidates higher than more qualified female candidates~\cite{lahoti2019ifair}. Recommendations in educational and career choices are another important motivating application for fair recommender systems. Students' academic choices can have significant impacts on their future careers and lives. In 2010, women accounted for only 18\% of the bachelor's degrees awarded in computer science~\cite{broad2014recruiting}, and interventions to help bridge this gap are crucial~\cite{beede2011women}. Recommender systems can reinforce this disparity or, potentially, help to mitigate it. In this paper, we investigate gender bias in recommender systems trained on social media data for suggesting sensitive items (e.g. academic concentrations or career paths). For social media data in particular, we typically have abundant implicit feedback on user preferences for various \emph{non-sensitive} items in which gender disparities are acceptable, or even desirable (e.g. ``liked'' Facebook pages, movies, or music), but limited data on the \emph{sensitive} items (e.g., users typically have only one or two college concentrations or occupations). User embeddings learned from the non-sensitive data can help predict the sparse sensitive items, but may encode harmful stereotypes, as has been observed for word embeddings~\cite{bolukbasi2016man}. Furthermore, the distribution of sensitive items typically introduces further unwanted bias due to societal disparities in academic concentrations and career paths, e.g. from the ``leaky pipeline'' in STEM education \cite{beede2011women}. We propose a practical technique to mitigate gender bias in sensitive item recommendations while resolving the above challenges. Our approach, which we call \emph{neural fair collaborative filtering (NFCF)}, achieves accurate predictions while addressing \emph{\textbf{sensitive data sparsity}} by pre-training a deep neural network on big implicit feedback data for non-sensitive items, and then fine-tuning the neural network for sensitive item recommendations.
We perform two bias corrections, addressing \emph{(1) bias in the \textbf{input embeddings} due to the non-sensitive items} and \emph{(2) bias in the \textbf{prediction outputs} due to the sensitive items}. An ablation study shows that \emph{\textbf{both} interventions are important for fairness}. We demonstrate the utility of our method on two datasets: \emph{MovieLens} (non-sensitive \emph{movie ratings} and sensitive \emph{occupations}), and a \emph{Facebook} dataset (non-sensitive \emph{Facebook page ``likes''} and sensitive \emph{college majors}). Our main contributions include: \begin{itemize} \item We propose a pre-training + fine-tuning neural network method for fair recommendations on social media data. \item We propose two de-biasing methods: 1) de-biasing latent embeddings, and 2) learning with a fairness penalty. \item We perform extensive experiments showing both fairness and accuracy benefits over baselines on two datasets. \end{itemize} \section{Background and Related Work} In this section, we formalize the problem and discuss collaborative filtering with implicit data and fairness metrics. \subsection{Problem Formulation} Let $M$ and $N$ denote the number of users and items, respectively. Suppose we are given a user-item interaction matrix $\textbf{Y}\in \mathbb{R}^{M\times N}$ of \emph{implicit feedback} from users, defined as \begin{equation} y_{ui} = \begin{cases} 1, & \text{if $u$ interacts with $i$}\\ 0, & \text{otherwise.} \end{cases} \end{equation} Here, $y_{ui}=1$ when there is an interaction between user $u$ and item $i$, e.g. when $u$ ``likes'' Facebook page $i$. In this setting, a value of $0$ does not necessarily mean $u$ is not interested in $i$: it may be that the user is not yet aware of the item, or has not yet interacted with it. While interacted entries reflect users' interest in items, the unobserved entries may just be missing data. Therefore, there is a natural scarcity of strong negative feedback. The collaborative filtering (CF) problem with implicit feedback is formulated as the problem of predicting the scores of unobserved entries, which can be used for ranking the items. The CF model outputs $\hat{y}_{ui}=f(u,i|\Theta)$, where $\hat{y}_{ui}$ denotes the estimated score of interaction $y_{ui}$, $\Theta$ denotes model parameters, and $f$ denotes the function that maps model parameters to the estimated score. If we constrain $\hat{y}_{ui}$ to the range $[0,1]$ and interpret it as the probability of an interaction, we can learn $\Theta$ by minimizing the following negative log-likelihood objective function: \begin{equation}\label{eq:loss} L = - \sum_{(u,i)\in \chi \cup \chi^{-}} \left[ y_{ui}\log\hat{y}_{ui} + (1-y_{ui})\log(1-\hat{y}_{ui}) \right]\mbox{ ,} \end{equation} where $\chi$ represents the set of interacted user-item pairs, and $\chi^{-}$ represents the set of negative instances, which can be all (or a sample of) unobserved interactions. Learning with implicit feedback becomes more challenging when there is not enough observed interaction data per user. In our setting, we further suppose that items $i$ are divided into non-sensitive items ($i_n$) and sensitive items ($i_s$). For example, the $i_n$'s can be \emph{Facebook pages} where user preferences may reasonably be influenced by a protected attribute such as gender, and the user's ``likes'' of the pages are the implicit feedback. Since each user $u$ can (and often does) ``like'' many pages, $u$'s observed non-sensitive data ($u$-$i_n$) is typically large.
On the other hand, $i_s$ may be the user's \emph{occupation} or \emph{academic concentration} provided in their social media profile. We desire that the recommendations of $i_s$ to new users be unrelated to the users' gender (or other protected attribute). Since each user $u$ is typically associated with only a single occupation (or other rarely disclosed sensitive personal data), the data sparsity in the observed sensitive item interactions ($u$-$i_s$) is a major challenge. As a result, it is difficult to directly predict $u$-$i_s$ interactions based on other $u$-$i_s$ interactions. Typical collaborative filtering methods can suffer from overfitting in this scenario, and overfitting often amplifies unfairness or bias in the data, such as harmful stereotypes \cite{zhao2017men,foulds2018intersectional}. Alternatively, the non-sensitive interactions $u$-$i_n$ can be leveraged, but these will by definition encode biases that are unwanted for predicting the sensitive items. For example, liking the \emph{Barbie doll} Facebook page may be correlated with being female and negatively correlated with \emph{computer science}, thus implicitly encoding societal bias in the career recommendations. \subsection{Neural Collaborative Filtering} Traditional matrix factorization (MF) models~\cite{koren2009matrix} map both users and items to a joint latent factor space of dimensionality $v$ such that user-item interactions are modeled as inner products in that space. Each item $i$ and user $u$ is associated with a vector $q_i\in \mathbb{R}^{v}$ and $p_u\in \mathbb{R}^{v}$, respectively, with \begin{equation}\label{eq:MF} \hat{y}_{ui} = q_{i}^{T}p_{u}+\mu + b_i + b_u\mbox{ ,} \end{equation} where $\mu$ is the overall average rating, and $b_u$ and $b_i$ indicate the deviations of user $u$ and item $i$ from $\mu$, respectively. Neural collaborative filtering (NCF)~\cite{he2017neural} replaces the inner products in matrix factorization with a deep neural network (DNN) that learns the user-item interactions. In the input layer, the users and items are typically one-hot encoded, then mapped into the latent space with an embedding layer. NCF combines the latent features of users ${p}_u$ and items ${q}_i$ by concatenating them. Complex non-linear interactions are modeled by stacking hidden layers on the concatenated vector, e.g. using a standard multi-layer perceptron (MLP). A commonly used architecture is a tower pattern, where the bottom layer is the widest and each successive layer has a smaller number of neurons~\cite{he2016deep}. \subsection{Fairness in Recommender Systems} The recommender systems research community has begun to consider issues of fairness in recommendation. A frequently practiced strategy for encouraging fairness is to enforce \emph{demographic parity} among different protected groups. Demographic parity aims to ensure that the individuals in each protected group have similar overall distributions over outcomes (e.g. recommended items)~\cite{zemel2013learning}. Some authors have addressed the unfairness issue in recommender systems by adding a regularization term that enforces demographic parity~\cite{kamishima2011fairness,kamishima2012enhancement,kamishima2013efficiency,kamishima2014correcting,kamishima2016model}. However, demographic parity is only appropriate when user preferences have no legitimate relationship to the protected attributes. In recommendation systems, user preferences are indeed often influenced by protected attributes such as gender, race, and age~\cite{chausson2010watches}.
Therefore, enforcing demographic parity may significantly damage the quality of recommendations. Fair recommendation systems have also been proposed by penalizing disparate distributions of prediction error~\cite{yao2017beyond}, and by making recommended items independent from protected attributes such as gender, race, or age~\cite{kamishima2017considerations}. In addition, \cite{burke2017balanced,burke2017multisided} taxonomize fairness objectives and methods based on which set of stakeholders in the recommender system is being considered, since it may be meaningful to consider fairness among many different groups. Pareto efficiency-based fairness-aware group recommendation~\cite{xiao2017fairness} has also been proposed; however, this method is not effective for personalized fair recommendations. In a non-archival extended abstract~\cite{rashid}, we recently proposed a simple technique to improve fairness in social media-based CF models. Our approach was to learn an NCF model, then debias the user embeddings using a linear projection technique, and predict sensitive items using k-nearest neighbors or logistic regression. This method improves the fairness of CF, but substantially degrades the accuracy of recommendations. In this work, we improve on this approach by using the method in~\cite{rashid} to debias a pre-trained neural model for non-sensitive items, then fine-tuning using a fairness penalty to learn to recommend sensitive items. Our results show improved fairness and accuracy versus~\cite{rashid}. \subsection{Fairness Metrics} We consider several existing fairness metrics which are applicable to collaborative filtering problems. \subsubsection{Differential Fairness} The differential fairness metric \cite{foulds2018intersectional,foulds2018bayesian} aims to ensure equitable treatment for all protected groups, and it provides a privacy interpretation of disparity. Let $M(x)$ be an algorithmic mechanism (e.g. a recommender system) which takes an individual's data $x$ and assigns them an outcome $y$ (e.g. a class label or whether a user-item interaction is present). The mechanism $M(x)$ is $\epsilon$-\emph{differentially fair (DF)} with respect to $(A, \Theta)$ if for all $\theta \in \Theta$ with $x \sim \theta$, and $y \in \mbox{Range}(M)$, \begin{equation} e^{-\epsilon} \leq \frac{P_{M, \theta}(M(x) = y|s_i, \theta)}{P_{M, \theta}(M(x) = y|s_j, \theta)}\leq e^\epsilon \mbox{ ,} \end{equation} for all $(s_i, s_j) \in A \times A$ where $P(s_i|\theta) > 0$, $P(s_j|\theta) > 0$. Here, $s_i$, $s_j \in A$ are tuples of all protected attribute values, e.g. male and female, and $\Theta$, the set of data generating distributions, is typically a point estimate of the data distribution. If all of the $P_{M, \theta}(M(x) = y|s, \theta)$ probabilities are equal for each group $s$, across all outcomes $y$ and distributions $\theta$, then $\epsilon = 0$; otherwise $\epsilon > 0$. \cite{foulds2018intersectional} proved that a small $\epsilon$ guarantees similar utility per protected group, and ensures that protected attributes cannot be inferred based on outcomes. DF can be estimated using smoothed ``soft counts'' of the predicted outcomes based on a probabilistic model.
For gender bias in our recommender (assuming a gender binary), we can estimate $\epsilon$-DF per sensitive item $i$ by verifying that: \begin{align} e^{-\epsilon} \leq \frac{\sum_{u: A = m} \hat{y}_{ui} + \alpha}{N_{m} + 2\alpha}\frac{N_{f} + 2\alpha}{\sum_{u: A = f} \hat{y}_{ui} + \alpha}\leq e^\epsilon \mbox{ ,}\nonumber \\ e^{-\epsilon} \leq \frac{\sum_{u: A = m} (1-\hat{y}_{ui}) + \alpha}{N_{m} + 2\alpha}\frac{N_{f} + 2\alpha}{\sum_{u: A = f} (1-\hat{y}_{ui}) + \alpha}\leq e^\epsilon \mbox{ ,} \label{eqn:smoothedFairnessSoft} \end{align} where the scalar $\alpha$ is each entry of the parameter of a symmetric Dirichlet prior with concentration parameter $2\alpha$, $i$ is an item, and $N_{A}$ is the number of users of gender $A$ ($m$ or $f$). \subsubsection{Absolute Unfairness} The absolute unfairness ($U_{abs}$) metric for recommender systems measures the discrepancy between the predicted behavior for disadvantaged and advantaged users~\cite{yao2017beyond}. It measures inconsistency in absolute estimation error across user types, defined as follows: \begin{equation} U_{abs} = \frac{1}{N}\sum_{j=1}^{N}\Big|\big|E_{D}[\hat{y}_{ui}]_j-E_{D}[r]_j\big|-\big|E_{A}[\hat{y}_{ui}]_j-E_{A}[r]_j\big|\Big| \end{equation} where, for $N$ items, $E_{D}[\hat{y}_{ui}]_j$ is the average predicted score for the $j$-th item for disadvantaged users, $E_{A}[\hat{y}_{ui}]_j$ is the average predicted score for advantaged users, and $E_{D}[r]_j$ and $E_{A}[r]_j$ are the average true scores for the disadvantaged and advantaged users, respectively. $U_{abs}$ captures a single statistic representing the quality of prediction for each user group. If one protected group has small estimation error and the other group has large estimation error, then the former group has the unfair advantage of good recommendations, while the latter receives poor recommendations. \section{Neural Fair Collaborative Filtering} Due to biased data that encode harmful human stereotypes in our society, typical social media-based collaborative filtering (CF) models can encode gender bias and make unfair decisions. In this section, we propose a practical framework to mitigate gender bias in CF recommendations, which we refer to as \emph{neural fair collaborative filtering} (NFCF), as shown in Figure~\ref{fig:NFCF_block}. The main components of our NFCF framework are as follows: an \emph{NCF model}, \emph{pre-training user and non-sensitive item embeddings}, \emph{de-biasing pre-trained user embeddings}, and \emph{fine-tuning with a fairness penalty}. We use NCF as the CF model because of its flexible network structure for pre-training and fine-tuning. We will show the value of each component below with an \emph{\textbf{ablation study}} (Table \ref{tab:ablation_study}). Similarly to \cite{he2017neural}, the DNN under the NCF model can be defined as: \begin{align}\label{eq:NCF} {z}_1 = \phi_1({p}_u,{q}_i) = {\begin{bmatrix}{p}_u\\{q}_i \end{bmatrix}} \mbox{ , } \nonumber\\ {z}_2 = \phi_2(z_1) = a_2(W_{2}^{T}z_1 + b_2) \mbox{ , } \nonumber\\ \vdots \nonumber\\ \phi_L(z_{L-1}) = a_L(W_{L}^{T}z_{L-1} + b_L) \mbox{ , } \nonumber\\ \hat{y}_{ui} = \sigma(h^{T}\phi_L(z_{L-1})) \end{align} where $z_l$, $\phi_l$, $W_l$, $b_l$, and $a_l$ denote the neuron values, mapping function, weight matrix, intercept term, and activation function for the $l$-th layer's perceptron, respectively. The DNN is applied to ${z}_1$ to learn the user-item latent interactions.
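For concreteness, a minimal PyTorch sketch of this architecture is given below: a tower MLP stacked on the concatenated user and item embeddings, with a sigmoid output for $\hat{y}_{ui}$. The layer sizes follow the experimental settings reported later; all names are illustrative rather than taken from our released code.

\begin{verbatim}
import torch
import torch.nn as nn

class NCF(nn.Module):
    """Minimal NCF tower: embeddings + MLP + sigmoid output (sketch)."""

    def __init__(self, num_users, num_items, dim=128,
                 hidden=(256, 64, 32, 16)):
        super().__init__()
        self.user_emb = nn.Embedding(num_users, dim)   # p_u
        self.item_emb = nn.Embedding(num_items, dim)   # q_i
        layers, width = [], 2 * dim
        for h in hidden:              # tower pattern: shrinking layers
            layers += [nn.Linear(width, h), nn.ReLU()]
            width = h
        self.mlp = nn.Sequential(*layers)
        self.out = nn.Linear(width, 1)                 # h^T phi_L(.)

    def forward(self, users, items):
        # z_1 = [p_u ; q_i], then y_hat = sigma(h^T phi_L(z_{L-1}))
        z1 = torch.cat([self.user_emb(users), self.item_emb(items)], dim=-1)
        return torch.sigmoid(self.out(self.mlp(z1))).squeeze(-1)
\end{verbatim}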
\begin{figure*}[t] \centerline{\includegraphics[width=0.85\textwidth]{./figures/NFCF.pdf}} \caption{\small Schematic diagram of neural fair collaborative filtering (NFCF). Red arrows indicate back-propagation only.} \label{fig:NFCF_block} \end{figure*} \begin{algorithm}[t] \caption{Training NFCF for Gender De-biased Recommendations}\label{alg-nfcf} \small \begin{flushleft} \textbf{Input:} user and non-sensitive item pairs: $\mathcal{D}_n = (u,i_{n})$, user and sensitive item pairs: $\mathcal{D}_s = (u,i_{s})$, and gender attribute: $A$\\ \textbf{Output:} Fair CF model $M_\mathbf{W}(x)$ for $i_{s}$ recommendations \\ \end{flushleft} \begin{flushleft} \emph{\textbf{Pre-training steps:}} \end{flushleft} \begin{itemize} \item Randomly initialize $M_\mathbf{W}(x)$'s parameters $\boldsymbol{W}$: $p_u$, $q_{i_n}$, $W_l$, and $b_l$ \item For each epoch of $\mathcal{D}_n$: \begin{itemize} \item For each mini-batch: \begin{itemize} \item Learn $M_\mathbf{W}(x)$'s parameters $\boldsymbol{W}$ by minimizing: \item[] $L = -\sum_{(u,i_n)\in \chi \cup \chi^{-}}[ y_{ui_{n}}\log\hat{y}_{ui_{n}} + (1-y_{ui_{n}})\log(1-\hat{y}_{ui_{n}})]$ \end{itemize} \end{itemize} \end{itemize} \begin{flushleft} \emph{\textbf{De-biasing embeddings steps:}} \end{flushleft} \begin{itemize} \item Compute the gender bias vector $v_B$ using Equations~\ref{eq:female} and~\ref{eq:bias} \item De-bias each user embedding using: $p_{u} := p_{u} - (p_{u}\cdot v_{B})v_{B}$ \end{itemize} \begin{flushleft} \emph{\textbf{Fine-tuning steps:}} \end{flushleft} \begin{itemize} \item Initialize with pre-trained $M_\mathbf{W}(x)$'s parameters $\boldsymbol{W}$: $W_l$, $b_l$, and de-biased $p_{u}$, with randomly initialized $q_{i_s}$ \item For each epoch of $\mathcal{D}_s$: \begin{itemize} \item For each mini-batch: \begin{itemize} \item Fine-tune $M_\mathbf{W}(x)$ by minimizing (while $p_{u}$ is kept fixed): \item[] \quad\quad $\underset{\textbf{W}}{\text{min}}[L_{\mathbf{\chi \cup \chi^{-}}}(\textbf{W}) + \lambda R_{\mathbf{\chi}}(\epsilon_{mean})]$ \end{itemize} \end{itemize} \end{itemize} \end{algorithm} In the first step of our NFCF method, \emph{pre-training user and item embeddings}, NCF is trained to predict users' interactions with \emph{non-sensitive} items (e.g. ``liked'' social media pages) via back-propagation. This leverages plentiful non-sensitive social media data to learn user embeddings and network weights, but may introduce \textbf{demographic bias due to correlations between non-sensitive items and demographics}. E.g., liking the \emph{Barbie doll} page typically correlates with user gender. These correlations are expected to result in systematic differences in the embeddings for different demographics, which in turn can lead to systematic differences in sensitive item recommendations. To address this, in step two the user embeddings from step one are \emph{de-biased}. Our method to de-bias user embeddings adapts very recent work on attenuating bias in word vectors~\cite{dev2019attenuating} to the task of collaborative filtering. Specifically, \cite{dev2019attenuating} propose to debias word vectors by linearly projecting each word embedding $w$ onto a \emph{bias vector} $v_{B}$, which identifies the ``bias component'' of $w$. The bias component is then removed via $w' = w - (w\cdot v_{B})v_B$. To adapt this method to CF, the main challenge is to find the proper bias direction $v_B$; \cite{dev2019attenuating} construct $v_B$ from word embeddings of gender-specific first names, which are not applicable to CF.
We instead use \emph{CF embeddings for users from each protected group}. We first compute a group-specific bias direction for female users as \begin{equation}\label{eq:female} v_{female} = \frac{1}{n_f}(f_1+f_2+\dots +f_{n_f})\mbox{ ,} \end{equation} where $f_1,f_2,\dots$ are the embedding vectors of the female users, and $n_f$ is the total number of female users. We similarly compute a bias direction $v_{male}$ for men. Finally, we compute the overall gender bias vector: \begin{equation}\label{eq:bias} v_{B} = \frac{v_{female}-v_{male}}{||v_{female}-v_{male}||}\mbox{ .} \end{equation} We then de-bias each user embedding $p_{u}$ by subtracting its component in the direction of the bias vector: \begin{equation} p_{u}' = p_{u} - (p_{u}\cdot v_{B})v_{B} \mbox{ .} \end{equation} As we typically do not have demographic attributes for items, we only de-bias user embeddings and not item embeddings. In the third step, we \emph{transfer} the de-biased user embeddings and the pre-trained DNN's parameters to a model for recommending \emph{sensitive items}, which we \emph{fine-tune} for this task. During fine-tuning, a \emph{fairness penalty} is added to the objective function to address a second source of bias: \textbf{demographic bias in the sensitive items}. E.g., more men than women choose computer science careers~\cite{broad2014recruiting}, and this should be corrected~\cite{beede2011women}. We penalize the mean of the \emph{per-item} $\epsilon$'s, \begin{equation} \epsilon_{mean} = \frac{1}{n_s}\sum_{i=1}^{n_s}\epsilon_i \mbox{ ,} \end{equation} where $\epsilon_1, \epsilon_2, \dots \epsilon_{n_s}$ are the DF measures of the sensitive items and $\epsilon_{mean}$ is their average. Following~\cite{foulds2018intersectional}, our learning algorithm for fine-tuning uses the fairness cost as a regularizer to balance the trade-off between fairness and accuracy. Using back-propagation, we minimize the loss function $L_{\mathbf{\chi \cup \chi^{-}}}(\textbf{W})$ from Equation~\ref{eq:loss} for model parameters $\textbf{W}$ plus a penalty on $\epsilon_{mean}$, weighted by a tuning parameter $\lambda>0$: \begin{equation}\label{eq:objective} \underset{\textbf{W}}{\text{min}}[L_{\mathbf{\chi \cup \chi^{-}}}(\textbf{W}) + \lambda R_{\mathbf{\chi}}(\epsilon_{mean})] \end{equation} where $R_{\mathbf{\chi}}(\epsilon_{mean}) = \max(0,\epsilon_{mean_{M_\mathbf{W}(\mathbf{\chi})}} - \epsilon_{mean_{0}})$ is the fairness penalty term, and $\epsilon_{mean_{M_\mathbf{W}(\mathbf{\chi})}}$ is the $\epsilon_{mean}$ for the CF model $M_\mathbf{W}(\mathbf{\chi})$, while $\chi$ and $\chi^{-}$ are the sets of interacted and not-interacted user-item pairs, respectively. Setting the target $\epsilon_{mean_{0}}$ to 0 encourages demographic parity, while setting $\epsilon_{mean_{0}}$ to the dataset's empirical value penalizes any increase in the unfairness metric over the ``societal bias'' in the data (which would in this case be presumed to be legitimate). In our experiments, we use $\epsilon_{mean_{0}}=0$. Pseudo-code for training the \emph{NFCF} algorithm is given in Algorithm~\ref{alg-nfcf}. We also consider an additional variant of the proposed framework, called \emph{NFCF\_embd}, which only de-biases the user embeddings to mitigate bias. In this algorithm, we compute the bias vector $v_B$ on the pre-trained user embeddings, fine-tune the model for sensitive item recommendations without any fairness penalty, and then de-bias the held-out user embeddings using the pre-computed bias vector. Since there is no additional fairness penalty in the objective, this variant converges faster, and there is no requirement to tune the $\lambda$ hyperparameter.
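The de-biasing step amounts to two group averages and one projection per user; a NumPy sketch, assuming a binary gender attribute as in our datasets, is:

\begin{verbatim}
import numpy as np

def debias_user_embeddings(P, is_female):
    """Linear-projection de-biasing of pre-trained user embeddings (sketch).

    P:         (num_users, v) array of user embeddings p_u.
    is_female: boolean array marking female users (binary gender assumed).
    """
    v_female = P[is_female].mean(axis=0)    # group mean for women
    v_male = P[~is_female].mean(axis=0)     # group mean for men
    v_b = v_female - v_male
    v_b /= np.linalg.norm(v_b)              # gender bias direction v_B
    # Remove each embedding's component along the bias direction:
    # p_u' = p_u - (p_u . v_B) v_B
    return P - np.outer(P @ v_b, v_b)
\end{verbatim}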
\section{Experiments} In this section, we validate and compare our model with multiple baselines for recommending careers and academic concentrations using social media data. Our implementation's source code will be made available online. \subsection{Datasets} \begin{table}[t] \centering \small \resizebox{1.0\textwidth}{!}{ \begin{tabular}{llllllllllll} \toprule & \multicolumn{4}{c}{Non-sensitive Data} & \multicolumn{6}{c}{Sensitive Data} \\ \midrule & Users & Items & Pairs & Sparsity & & Users & Males & Females & Items & Pairs & Sparsity \\ \cline{2-5} \cline{7-12} \emph{MovieLens} Dataset & 6,040 & 3,416 & 999,611 & 95.16\% & & 4,920 & 3,558 & 1,362 & 17 & 4,920 & 94.12\% \\ \emph{Facebook} Dataset & 29,081 & 42,169 & 5,389,541 & 99.56\% & & 13,362 & 5,053 & 8,309 & 169 & 13,362 & 99.41\% \\ \bottomrule \end{tabular} } \caption{Statistics of the datasets. \label{tab:data}} \end{table} \begin{figure*}[t] \centerline{\includegraphics[width=0.98\textwidth]{./figures/gender_dist_all_update.pdf}} \caption{\small Gender distributions of example gender-biased careers and college majors for the (a) \emph{MovieLens} and (b) \emph{Facebook} datasets. We report the distributions in the dataset (left columns) and the corresponding top-$1$ recommendations by our NFCF model (right columns).} \label{fig:user_dist_all} \end{figure*} We evaluate our models on two datasets: \emph{MovieLens},\footnote{\url{http://grouplens.org/datasets/movielens/1m/}.} a public dataset which facilitates research reproducibility, and a \emph{Facebook} dataset which is larger and provides a more realistic setting for a fair social media-based recommender system. \subsubsection{\textbf{MovieLens Data}} We analyzed the widely used \emph{MovieLens} dataset, which contains 1 million ratings of 3,900 movies by 6,040 users who joined MovieLens~\cite{harper2015movielens}, a noncommercial movie recommendation service operated by the University of Minnesota. We used \emph{gender} as the protected attribute, self-reported \emph{occupation} as the sensitive item (with one occupation per user), and \emph{movies} as the non-sensitive items. Since we focus on implicit feedback, which is common in a social media setting (e.g. page ``likes''), we converted explicit movie ratings to binary implicit feedback~\cite{koren2008factorization,he2017neural}, where a 1 indicates that the user has rated the item. We discarded movies rated fewer than 5 times, and users who declared their occupation as ``K-12 student,'' ``retired,'' ``unemployed,'' and ``unknown or not specified'' were discarded for career recommendation. A summary of the pre-processed dataset is shown in Table~\ref{tab:data}. \subsubsection{\textbf{Facebook Data}} The \emph{Facebook} dataset we analyzed was collected as part of the myPersonality project~\cite{kosinski2015facebook}. The data for research were collected with opt-in consent. We used \emph{gender} as the protected attribute, \emph{college major} as the sensitive items (at most one per user), and \emph{user-page} interaction pairs as the non-sensitive items. A user-page interaction occurs when a user ``likes'' a Facebook page. We discarded pages that occurred in fewer than 5 user-page interactions. See Table~\ref{tab:data} for a summary of the dataset after pre-processing.
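As an aside on preprocessing, converting explicit ratings to binary implicit feedback and filtering rare items is straightforward; a minimal pandas sketch with illustrative column names (not our released code) is:

\begin{verbatim}
import pandas as pd

def to_implicit(ratings, min_count=5):
    """Convert explicit ratings to binary implicit feedback (sketch).

    ratings: DataFrame with columns ['user', 'item', 'rating'].
    A rated item becomes y_ui = 1; items appearing fewer than
    min_count times are discarded. Unobserved pairs are implicit zeros.
    """
    counts = ratings['item'].value_counts()
    kept = ratings[ratings['item'].isin(counts[counts >= min_count].index)]
    implicit = kept[['user', 'item']].drop_duplicates().copy()
    implicit['y'] = 1
    return implicit
\end{verbatim}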
\subsubsection{\textbf{Gender Distributions for Datasets}} In Figure~\ref{fig:user_dist_all}, we show disparities in the gender distributions of $10$ example careers and college majors for the \emph{MovieLens} and \emph{Facebook} datasets, respectively. For example, 97\% of the associated users for the occupation \emph{homemaker} are women in the \emph{MovieLens} data, while there are only 27\% women among the users associated with the \emph{computer science} major in the \emph{Facebook} data. As a qualitative illustration, we also show the gender distribution of top-1 recommendations from our proposed NFCF model. NFCF mitigated gender bias for most of these sensitive items. In the above examples, NFCF decreased the percentage of women for \emph{homemaker} from 97\% to 50\%, while increasing the percentage of women for \emph{computer science} from 27\% to 48\%. \subsection{Baselines} \begin{table}[t] \centering \small \begin{tabular}{lcccc} \hline \multicolumn{5}{c}{\emph{MovieLens} Dataset}\\ \hline Models & HR@$10$ & NDCG@$10$ & HR@$25$ & NDCG@$25$ \\ \hline NCF & 0.543 & \textbf{0.306} & 0.825 & \textbf{0.377} \\ MF & \textbf{0.551} & 0.304 & \textbf{0.832} & 0.374 \\ \hline \end{tabular} \hspace{0.25 cm} \small \begin{tabular}{lcccc} \hline \multicolumn{5}{c}{\emph{Facebook} Dataset}\\ \hline Models & HR@$10$ & NDCG@$10$ & HR@$25$ & NDCG@$25$ \\ \hline NCF & \textbf{0.720} & \textbf{0.468} & \textbf{0.904} & \textbf{0.514} \\ MF & 0.609 & 0.382 & 0.812 & 0.434 \\ \hline \end{tabular} \caption{Performance of the NCF and MF models for movie and Facebook page recommendations (the pre-training task) on the \emph{MovieLens} and \emph{Facebook} datasets, respectively. \label{tab:preTrain_performance}} \end{table} \begin{figure*}[t] \centerline{\includegraphics[width=0.9\textwidth]{./figures/preTrain_benefit_both.pdf}} \caption{\small Comparison of proposed models with ``typical'' baselines that do not consider fairness. Evaluation of Top-$K$ career and college major recommendations on the (a) \emph{MovieLens} (among 17 unique careers) and (b) \emph{Facebook} (among 169 unique majors) datasets, where $K$ ranges from $1$ to $5$ and $1$ to $25$, respectively. \emph{NCF w/ Pre-train} outperforms all the baselines; NFCF performs similarly.} \label{fig:preTrain} \end{figure*} \begin{figure*}[t] \centerline{\includegraphics[width=0.95\textwidth]{./figures/debiasing_vectors_both.pdf}} \caption{\small De-biasing pre-trained user embeddings by removing the component along the bias direction $v_B$ (PCA projection) for the (a) \emph{MovieLens} and (b) \emph{Facebook} datasets. PCA was performed based on all embeddings.} \label{fig:debiasing_vectors} \end{figure*} We compare our proposed framework to the following ``typical'' baseline models without any fairness constraints: \begin{itemize} \item \textbf{MF w/o Pre-train}. A typical matrix factorization (MF) model trained on the user-item interactions for sensitive item recommendations, where the items contain both non-sensitive and sensitive items. \item \textbf{MF w Pre-train}. A typical MF model pre-trained on the interactions of users and non-sensitive items and fine-tuned on the interactions of users and sensitive items. Specifically, $q_{i}$, $b_i$, and $b_u$ from Equation~\ref{eq:MF} are fine-tuned while $p_{u}$ is kept fixed. \item \textbf{NCF w/o Pre-train}. A typical NCF model trained on the user-item interactions for sensitive item recommendations, where the items contain both non-sensitive and sensitive items.
\item \textbf{NCF w Pre-train}. Typical NCF model which is pre-trained with the interactions of users and non-sensitive items and fine-tuned with the interactions of users and sensitive items. Specifically, $q_{i}$, ${W}_{l}$, and ${b}_{l}$ from Equation~\ref{eq:NCF} are fine-tuned while $p_{u}$ is kept fixed. \item \textbf{DNN Classifier}. A simple baseline where we train a DNN-based classifier to predict career labels given the interactions of users and non-sensitive items as features (i.e. binary features, one per user-page ``like'' or one per user-movie ``rating''). No user embeddings are learned. \item \textbf{BPMF}. We also used Bayesian probabilistic matrix factorization (BPMF) via MCMC~\cite{salakhutdinov2008bayesian} as a baseline, since it typically has good performance with small data. BPMF is trained on the user-item interactions for sensitive item recommendations, where the items contain both non-sensitive and sensitive items. \end{itemize} We also compared our proposed models with the following fair baseline models: \begin{itemize} \item \textbf{Projection-based CF}. This is our previous linear projection-based fair CF method~\cite{rashid}. First, NCF is trained using user and non-sensitive item data, followed by de-biasing user vectors by subtracting the component in the bias direction. Finally, a multi-class logistic regression model is trained on the de-biased user vectors to predict sensitive items. \item \textbf{MF-$U_{abs}$}. The learning objective of the MF model is augmented with a smoothed variation of $U_{abs}$~\cite{yao2017beyond} using the Huber loss~\cite{huber1992robust}, weighted by a tuning parameter $\lambda$. The MF-$U_{abs}$ model is trained with the user-item interactions for career recommendations, where the items include both non-sensitive and sensitive items. \item \textbf{Resampling for Balance}. This method~\cite{ekstrand2018all} involves pre-processing by resampling the training user-item data to produce a gender-balanced version of the data. First, we extract user-item data for users with known gender information and randomly sample the same number of male and female users without replacement. The items include both non-sensitive and sensitive items. Finally, NCF and MF are trained on the gender-balanced user-item data to perform sensitive item recommendation. 
\end{itemize} \begin{table*}[t] \centering \small \begin{tabular}{lcccccc} \toprule \multicolumn{7}{c}{\emph{MovieLens} Dataset}\\ \midrule Ablation study & HR@$5\uparrow$ & NDCG@$5\uparrow$ & HR@$7\uparrow$ & NDCG@$7\uparrow$ & $\epsilon_{mean}\downarrow$ & $U_{abs}\downarrow$ \\ \midrule NFCF & \textbf{0.670} & 0.480 & 0.822 & 0.536 & \textbf{0.083} & \textbf{0.009} \\ w/o pre-train & 0.493 & 0.323 & 0.731 & 0.446 & 0.112 & 0.017 \\ w/o de-biasing embeddings & 0.665 & \textbf{0.481} & \textbf{0.832} & \textbf{0.543} & 0.120 & 0.010 \\ w/o fairness penalty & 0.667 & 0.480 & 0.827 & 0.539 & 0.097 & 0.013 \\ replace NCF w/ MF & 0.514 & 0.350 & 0.707 & 0.423 & 0.122 & 0.021 \\ \bottomrule &&&&&&\\ \toprule \multicolumn{7}{c}{\emph{Facebook} Dataset}\\ \midrule Ablation study & HR@$10\uparrow$ & NDCG@$10\uparrow$ & HR@$25\uparrow$ & NDCG@$25\uparrow$ & $\epsilon_{mean}\downarrow$ & $U_{abs}\downarrow$ \\ \midrule NFCF & 0.551 & 0.326 & 0.848 & \textbf{0.401} & \textbf{0.302} & \textbf{0.024} \\ w/o pre-train & 0.339 & 0.127 & 0.587 & 0.224 & 0.613 & 0.038 \\ w/o de-biasing embeddings & 0.556 & \textbf{0.328} & 0.847 & 0.400 & 0.314 & \textbf{0.024} \\ w/o fairness penalty & \textbf{0.557} & 0.327 & \textbf{0.849} & \textbf{0.401} & 0.363 & 0.026 \\ replace NCF w/ MF & 0.297 & 0.112 & 0.427 & 0.194 & 0.880 & 0.071 \\ \bottomrule \end{tabular} \caption{Ablation study of \emph{NFCF} for career and college major recommendations on the \emph{MovieLens} and \emph{Facebook} datasets. Higher is better for HR and NDCG; lower is better for $\epsilon_{mean}$ and $U_{abs}$. Removing each model component harms performance and/or fairness. \label{tab:ablation_study}} \end{table*} \begin{figure*}[t] \centerline{\includegraphics[width=0.90\textwidth]{./figures/compare_fair_model.pdf}} \caption{\small Comparison of proposed models with fair baselines. Evaluation of Top-$K$ career and college major recommendations on the (a) \emph{MovieLens} (among 17 unique careers) and (b) \emph{Facebook} (among 169 unique majors) datasets, where $K$ ranges from $1$ to $5$ and $1$ to $25$, respectively. 
\emph{NFCF} outperforms all the baselines; \emph{NFCF\_embd} performs similarly.} \label{fig:compare_fair_model} \end{figure*} \begin{table*}[t] \centering \small \resizebox{1.0\textwidth}{!}{ \begin{tabular}{llcccccc} \toprule \multicolumn{8}{c}{\emph{MovieLens} Dataset}\\ \midrule & Models & HR@$5\uparrow$ & NDCG@$5\uparrow$ & HR@$7\uparrow$ & NDCG@$7\uparrow$ & $\epsilon_{mean}\downarrow$ & $U_{abs}\downarrow$ \\ \midrule \multirow{2}{*}{Proposed Models} & NFCF & \textbf{0.670} & 0.480 & 0.822 & 0.536 & \textbf{0.083} & \textbf{0.009} \\ & NFCF\_embd & 0.661 & 0.470 & \textbf{0.825} & 0.531 & 0.091 & 0.016 \\ \midrule \multirow{5}{*}{Typical Baselines} & NCF w Pre-train & 0.667 & \textbf{0.484} & \textbf{0.825} & \textbf{0.542} & 0.188 & 0.022 \\ & NCF w/o Pre-train & 0.570 & 0.360 & 0.762 & 0.432 & 0.244 & 0.026 \\ & MF w Pre-train & 0.548 & 0.362 & 0.747 & 0.436 & 0.285 & 0.060 \\ & MF w/o Pre-train & 0.622 & 0.397 & 0.820 & 0.471 & 0.130 & 0.020 \\ & DNN Classifier & 0.428 & 0.311 & 0.546 & 0.355 & 0.453 & 0.035 \\ & BPMF & 0.225 & 0.138 & 0.338 & 0.180 & 0.852 & 0.063 \\ \midrule \multirow{4}{*}{Fair Baselines} & Projection-based CF ~\cite{rashid} & 0.514 & 0.355 & 0.655 & 0.408 & 0.229 & 0.012 \\ & MF-$U_{abs}$ ~\cite{yao2017beyond} & 0.588 & 0.356 & 0.776 & 0.426 & 0.096 & 0.017 \\ & NCF via Resampling ~\cite{ekstrand2018all} & 0.443 & 0.295 & 0.622 & 0.362 & 0.144 & 0.022 \\ & MF via Resampling ~\cite{ekstrand2018all} & 0.542 & 0.332 & 0.759 & 0.413 & 0.103 & 0.029 \\ \bottomrule &&&&&&&\\ \toprule \multicolumn{8}{c}{\emph{Facebook} Dataset}\\ \midrule & Models & HR@$10\uparrow$ & NDCG@$10\uparrow$ & HR@$25\uparrow$ & NDCG@$25\uparrow$ & $\epsilon_{mean}\downarrow$ & $U_{abs}\downarrow$ \\ \midrule \multirow{2}{*}{Proposed Models} & NFCF & 0.551 & 0.326 & 0.848 & 0.401 & \textbf{0.302} & 0.024 \\ & NFCF\_embd & 0.557 & \textbf{0.333} & 0.850 & 0.397 & 0.359 & \textbf{0.022} \\ \midrule \multirow{5}{*}{Typical Baselines} & NCF w Pre-train & \textbf{0.559} & 0.329 & \textbf{0.851} & \textbf{0.403} & 0.376 & 0.027 \\ & NCF w/o Pre-train & 0.402 & 0.187 & 0.762 & 0.278 & 0.785 & 0.039 \\ & MF w Pre-train & 0.372 & 0.200 & 0.717 & 0.286 & 0.875 & 0.077 \\ & MF w/o Pre-train & 0.267 & 0.119 & 0.625 & 0.210 & 0.661 & 0.029 \\ & DNN Classifier & 0.379 & 0.212 & 0.630 & 0.274 & 0.633 & 0.070 \\ & BPMF & 0.131 & 0.066 & 0.339 & 0.117 & 1.173 & 0.084 \\ \midrule \multirow{4}{*}{Fair Baselines} & Projection-based CF ~\cite{rashid} & 0.419 & 0.244 & 0.674 & 0.307 & 0.407 & 0.030 \\ & MF-$U_{abs}$ ~\cite{yao2017beyond} & 0.163 & 0.007 & 0.627 & 0.184 & 0.629 & 0.026 \\ & NCF via Resampling ~\cite{ekstrand2018all} & 0.315 & 0.153 & 0.586 & 0.222 & 0.442 & 0.025 \\ & MF via Resampling ~\cite{ekstrand2018all} & 0.103 & 0.041 & 0.314 & 0.094 & 0.756 & 0.039 \\ \bottomrule \end{tabular} } \caption{Comparison of proposed models with the baselines in career and college major recommendations on \emph{MovieLens} (17 \emph{careers}) and \emph{Facebook} (169 \emph{majors}). Higher is better for HR and NDCG; lower is better for $\epsilon_{mean}$ and $U_{abs}$. NFCF greatly improves fairness metrics and beats all baselines at recommendation except for NCF w Pre-train, a variant of NFCF without its fairness correction. 
\label{tab:quantitative_results}} \end{table*} \begin{table*}[t] \centering \small \resizebox{0.8\textwidth}{!} { \begin{tabular}{llll} \toprule \multicolumn{4}{c}{\emph{MovieLens} Dataset}\\ \midrule \multicolumn{2}{c}{NFCF} & \multicolumn{2}{c}{NCF w/o Pre-train} \\ \midrule Male & Female & Male & Female \\ \midrule college/grad student & college/grad student & sales/marketing & customer service \\ executive/managerial & executive/managerial & academic/educator & academic/educator \\ academic/educator & technician/engineer & executive/managerial & artist \\ technician/engineer & academic/educator & doctor/health care & writer \\ programmer & programmer & college/grad student & college/grad student \\ \bottomrule &&&\\ \toprule \multicolumn{4}{c}{\emph{Facebook} Dataset}\\ \midrule \multicolumn{2}{c}{NFCF} & \multicolumn{2}{c}{NCF w/o Pre-train} \\ \midrule Male & Female & Male & Female \\ \midrule psychology & psychology & philosophy & psychology \\ english literature & english literature & psychology & nursing \\ graphic design & music & computer science & sociology \\ music & theatre & biochemistry & graphic design \\ nursing & nursing & business admin. & business marketing \\ liberal arts & history & political science & elementary education \\ business admin. & sociology & business management & cosmetology \\ biology & liberal arts & medicine & accounting \\ history & business admin. & law & physical therapy \\ criminal justice & biology & finance & music \\ \bottomrule \end{tabular} } \caption{Top 5 (among 17 unique careers) and 10 (among 169 unique majors) most frequent career and college major recommendations on the \emph{MovieLens} and \emph{Facebook} datasets, respectively, to the overall male and female users using the \emph{NFCF} and \emph{NCF w/o Pre-train} models. \label{tab:qualitative}} \end{table*} \subsection{Experimental Settings} All the models were trained via adaptive gradient descent optimization (Adam) with learning rate = $0.001$ using PyTorch, where we sampled $5$ negative instances per positive instance. The mini-batch size for all models was set to $2048$ and $256$ for user-page and user-career data, respectively, while the embedding size for users and items was set to $128$. The configuration of the DNN under \emph{NFCF} and \emph{NFCF\_embd} was $4$ hidden layers with $256$, $64$, $32$, and $16$ neurons in successive layers, using ``relu'' activations for the hidden layers and a ``sigmoid'' activation for the output layer. We used the same DNN architecture for the \emph{NCF} and \emph{DNN Classifier} models. For the Facebook dataset, we held out $1\%$ and $40\%$ of the user-page and user-college major data, respectively, as the test set, using the remainder for training. Since there are fewer users in the MovieLens dataset, we held out $1\%$ and $30\%$ of the user-movie and user-career data, respectively, as the test set, using the remainder for training. We further held out $1\%$ and $20\%$ of the training user-nonsensitive item and user-sensitive item data, respectively, as the development set for each dataset. The tuning parameter $\lambda$ needs to be chosen as a trade-off between accuracy and fairness~\cite{foulds2018intersectional}. We chose $\lambda$ as $0.1$ for \emph{NFCF} and \emph{MF-$U_{abs}$} via a grid search on the development set according to criteria similar to~\cite{foulds2018intersectional}, i.e. optimizing fairness while allowing up to $2\%$ degradation in accuracy from the typical model.
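To make the fine-tuning objective concrete, the following PyTorch sketch computes the smoothed soft-count estimate of $\epsilon_{mean}$ and the penalized loss of Equation~\ref{eq:objective}, with $\lambda=0.1$ and $\epsilon_{mean_{0}}=0$ as in our settings. The batch tensors (\texttt{scores}, \texttt{labels}, \texttt{item\_ids}, \texttt{is\_female}) are illustrative inputs, not names from our released code.

\begin{verbatim}
import torch
import torch.nn.functional as F

def epsilon_mean(scores, item_ids, is_female, n_items, alpha=0.01):
    """Smoothed soft-count estimate of the mean per-item epsilon-DF (sketch).
    scores: predicted probabilities y_hat; is_female: boolean gender mask."""
    eps = []
    for i in range(n_items):
        m = scores[(item_ids == i) & ~is_female]   # male users' scores
        f = scores[(item_ids == i) & is_female]    # female users' scores
        n_m, n_f = m.numel(), f.numel()
        e_i = torch.tensor(0.0)
        for s_m, s_f in ((m.sum(), f.sum()),
                         ((1 - m).sum(), (1 - f).sum())):
            r = ((s_m + alpha) / (n_m + 2 * alpha)) \
                / ((s_f + alpha) / (n_f + 2 * alpha))
            e_i = torch.maximum(e_i, torch.abs(torch.log(r)))
        eps.append(e_i)
    return torch.stack(eps).mean()

def fine_tune_loss(scores, labels, item_ids, is_female, n_items,
                   lam=0.1, eps_0=0.0):
    """Binary cross-entropy plus lam * max(0, eps_mean - eps_0)."""
    bce = F.binary_cross_entropy(scores, labels.float())
    penalty = torch.clamp(
        epsilon_mean(scores, item_ids, is_female, n_items) - eps_0, min=0.0)
    return bce + lam * penalty
\end{verbatim}

Since the soft counts are differentiable functions of the predicted scores, the penalty can be minimized with the same back-propagation used for the recommendation loss.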
To evaluate the performance of item recommendation on the test data, since it is too time-consuming to rank all items for every user during evaluation~\cite{he2017neural}, we followed a common strategy in the literature~\cite{elkahky2015multi}. For non-sensitive items, for each test instance we randomly sampled $100$ items with which the user had not interacted, and ranked the test item among these $100$ items. For sensitive item recommendations, in the case of the Facebook data we similarly randomly sampled $100$ college majors. For the MovieLens data, there are only $17$ unique careers, so we used the remaining $16$ careers when ranking the test instance. The performance of a ranked list is measured by the average Hit Ratio (HR) and Normalized Discounted Cumulative Gain (NDCG)~\cite{he2015trirank}. The HR measures whether the test item is present in the top-$K$ list, while the NDCG accounts for the position of the hit by assigning higher scores to hits at top ranks. We calculated both metrics for each test user-item pair and reported the average score. Finally, we computed $\epsilon_{mean}$ and $U_{abs}$ on the user-sensitive item test data to measure the fairness of the models in career and college major recommendations. \subsection{Validation of NFCF Model Design} Before comparing to fair recommendation baseline models, we systematically validate our modeling choices for NFCF. \textbf{Pre-training Task Performance:} We first study the performance of the NCF and MF models on the pre-training task, \emph{Facebook page} and \emph{movie} recommendations (Table~\ref{tab:preTrain_performance}). NCF had substantially and consistently better performance than MF on the larger \emph{Facebook} dataset, and similar overall performance on \emph{MovieLens} (better in 2 of 4 metrics). \textbf{Fine-Tuning Performance:} We fine-tuned these models on the interactions of users with the sensitive items for \emph{career} and \emph{college major} recommendations on the \emph{MovieLens} and \emph{Facebook} datasets, respectively. Figure~\ref{fig:preTrain} shows top-$K$ recommendations from the $17$ and $169$ unique \emph{careers} and \emph{college majors} using several ``typical'' baseline models that do not involve any fairness constraints, where $K$ ranges from $1$ to $5$ and from $1$ to $25$ for the MovieLens and Facebook datasets, respectively. \emph{NCF w/ Pre-train} had the best HR and NDCG performance of the baselines, while our proposed \emph{NFCF} and \emph{NFCF\_embd} performed similarly to \emph{NCF w/ Pre-train} on both datasets. Of the typical baselines, \emph{MF w/o Pre-train} and \emph{NCF w/o Pre-train} performed second best on the \emph{MovieLens} and \emph{Facebook} datasets, respectively. For the \emph{MovieLens} dataset, \emph{MF w/o Pre-train} performed better than \emph{MF w/ Pre-train}, presumably due to the relatively small dataset and the relatively few parameters to fine-tune, unlike for the DNN-based NCF model. \emph{BPMF} performed poorly despite using Bayesian inference for scarce data, perhaps because of its initialization via older MF methods \cite{salakhutdinov2008bayesian}. \textbf{Visualization of Embedding De-biasing:} We visualized the PCA projections of the male and female vectors (Equation~\ref{eq:female}) before and after applying the linear projection-based embedding de-biasing method, where the PCA was fit on all of the embeddings. Figure~\ref{fig:debiasing_vectors} shows that, before de-biasing, the male and female vectors have very different directions and magnitudes. After de-biasing, the male and female vectors had a more similar direction and magnitude to each other.
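The de-biasing step visualized above can be sketched compactly. The construction of the gender-bias direction below (the normalized difference between the mean male and mean female user embeddings) is an assumption in the spirit of linear projection-based de-biasing, and is not necessarily the paper's exact Equation~\ref{eq:female}.

\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def debias(embeddings, male_idx, female_idx):
    # Assumed bias direction: normalized difference of group means.
    v = embeddings[male_idx].mean(0) - embeddings[female_idx].mean(0)
    v /= np.linalg.norm(v)
    # Remove each embedding's component along the bias direction.
    return embeddings - np.outer(embeddings @ v, v)

# Toy example: PCA (fit on all embeddings, as in the text) of the
# male and female mean vectors before and after de-biasing.
rng = np.random.default_rng(0)
emb = rng.normal(size=(1000, 128))
male, female = np.arange(500), np.arange(500, 1000)
for E in (emb, debias(emb, male, female)):
    pca = PCA(n_components=2).fit(E)
    print(pca.transform(E[male].mean(0, keepdims=True)),
          pca.transform(E[female].mean(0, keepdims=True)))
\end{verbatim}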
\textbf{Ablation Study:} Finally, we conducted an ablation study in which the components of the method were removed one at a time. As shown in Table~\ref{tab:ablation_study}, there was a large degradation in the performance of \emph{NFCF} when pre-training was removed (the embedding de-biasing step was also removed in this case, since there were no pre-trained user vectors), or when NCF was replaced by MF. Removing the embedding de-biasing method led to better HR and NDCG scores, but with an increase in the gender bias metrics. Similarly, learning without the fairness penalty led to similar performance in HR and NDCG, but greatly increased gender bias. Therefore, \emph{\textbf{both} of the bias correction methods in the NFCF model are necessary to achieve the best level of fairness while maintaining a high recommendation accuracy}. \subsection{Performance for Mitigating Gender Bias in Sensitive Item Recommendations} We evaluated performance for career and college major recommendations in terms of accuracy (HR and NDCG) and fairness ($\epsilon_{mean}$ and $U_{abs}$). In Figure~\ref{fig:compare_fair_model}, we show that our proposed \emph{NFCF} and \emph{NFCF\_embd} models clearly outperformed all of the fair baseline models in terms of HR and NDCG, regardless of the cut-off $K$. \emph{Projection-based CF} performed second best out of all the fair models on both datasets. In Table~\ref{tab:quantitative_results}, we show detailed results for the top $5$ and top $7$ recommendations on \emph{MovieLens} and for the top $10$ and top $25$ recommendations on the \emph{Facebook} dataset. Our proposed \emph{NFCF} model was the most fair career and college major recommender in terms of $\epsilon_{mean}$, while our \emph{NFCF\_embd} was the most fair in terms of $U_{abs}$ on the \emph{Facebook} dataset. In the case of the \emph{MovieLens} dataset, our \emph{NFCF} model was the most fair recommender in terms of both fairness metrics. \emph{NCF w/ Pre-train} performed best in the HR and NDCG metrics on both datasets. \emph{NFCF} and \emph{NFCF\_embd} had nearly as good HR and NDCG performance as \emph{NCF w/ Pre-train}, while also mitigating gender bias. We also found that the pre-training and fine-tuning approach reduced overfitting for \emph{NCF w/ Pre-train}, and thus improved the fairness metrics by reducing bias amplification. This was not the case for \emph{MF w/ Pre-train}, presumably due to the limited number of pre-trained parameters to fine-tune. \emph{Projection-based CF} and \emph{MF-$U_{abs}$} also showed relatively good performance in mitigating bias in terms of $U_{abs}$ compared to the typical models, but at a large sacrifice in accuracy. Similarly, \emph{NCF via Resampling} and \emph{MF via Resampling} had poor accuracy, but improved fairness to some extent over their corresponding ``typical'' models, \emph{NCF w/o Pre-train} and \emph{MF w/o Pre-train}, respectively. As a further qualitative experiment, we recommended the top-$1$ career and college major to each test male and female user via the \emph{NFCF} and \emph{NCF w/o Pre-train} models. In Table~\ref{tab:qualitative}, we show the top $5$ and top $10$ most frequent recommendations to male and female users overall, among the $17$ and $169$ unique careers and majors for the \emph{MovieLens} and \emph{Facebook} datasets, respectively.
\emph{NFCF} was found to recommend similar careers to both male and female users on average for both datasets, while \emph{NCF w/o Pre-train} encoded societal stereotypes in its recommendations. For example, \emph{NCF w/o Pre-train} recommended \emph{computer science} to male users and \emph{nursing} to female users on the \emph{Facebook} dataset, while it recommended \emph{executive/managerial} to male users and \emph{customer service} to female users on the \emph{MovieLens} dataset. \section{Conclusion} We investigated gender bias in social media-based collaborative filtering. To address this problem, we introduced Neural Fair Collaborative Filtering (NFCF), a pre-training and fine-tuning method that corrects gender bias when recommending sensitive items, such as careers or college majors, with little loss in accuracy. On the \emph{MovieLens} and \emph{Facebook} datasets, we achieved better performance and fairness compared to an array of state-of-the-art models. \bibliographystyle{coling}
\section{Introduction} High Resolution Spectroscopy (HRS) is a relatively recent, powerful method for exoplanet atmospheric characterization. It uses a spectral resolution high enough ($R \gtrsim 30,000$) to unambiguously detect the unique sets of spectral lines from atoms or molecules in an exoplanet's spectrum. While the planet's spectrum is often orders of magnitude weaker than the stellar noise, its signal can be detected via cross-correlation with a template spectrum, owing to the large number of lines present at high resolution. This is accomplished by exploiting the planet's changing orbital radial velocity along the observer's line of sight, which helps to remove the stellar and telluric signals, whose spectral features remain effectively at fixed wavelengths over the duration of a typical observation. By removing the components of the spectrum that are constant in time, one is left with noise and the planet's spectrum, which can then be detected via cross-correlation. \citet{birkbyhires} presents a review of the HRS method and recent results from its use. HRS was first applied to the well-known hot Jupiter HD~209458b using the CRIRES instrument on the VLT \citep{2010Natur.465.1049S}, definitively detecting CO in transmission spectra from the planet. Further analyses of this planet's transmission spectra at high resolution have resulted in detections of water vapor \citep{sanchezlopez2019} and helium \citep{alonso2019}. Emission spectra of this planet have also been measured with HRS, providing evidence against an atmospheric temperature inversion \citep{schwarz}, as well as determining both carbon monoxide and water abundances when combined with lower resolution data \citep{combininglowandhigh,combinedhudra}. In this paper we present a re-analysis of the previously published CRIRES/VLT data for this planet \citep{schwarz}, but with template spectra generated from a three-dimensional atmospheric model. One of the unique strengths of HRS is that at the highest resolutions ($R \sim 100,000$) the observed spectra can contain information about the \textit{atmospheric} motion of the planet. The original HRS result by \citet{2010Natur.465.1049S} found hints of day-to-night winds on the planet in a net blue-shift of the planet's spectrum by $2 \pm 1$ km s$^{-1}$ (during transit, day-to-night winds blow toward the observer). Transmission spectra of the hot Jupiter HD~189733b also show evidence for atmospheric motion, including both net Doppler shifts from winds and Doppler broadening from a combination of rotation and eastward equatorial winds \citep{louden2015,Brogianalysis,Flowers}. Measured Doppler broadening in high-resolution emission spectra of directly imaged planets/companions has also been used to constrain the rotation rates of these objects \citep{rotationbpic,Schwarz2016,Bryan2020}. The two sources of atmospheric motion---winds and rotation---are not physically independent; for a recent review of hot Jupiter dynamics, see \citet{showman2020review}. One of the governing forces in determining atmospheric circulation is the Coriolis force, meaning that the rotation rate of a planet strongly influences the wind structure and speeds.
Hot Jupiters are commonly assumed to be tidally locked into rotation rates synchronous with their orbits \citep[e.g.,][]{Rasio1996}, but deviations from this expected rotation state would have consequences for the speed and structure of atmospheric winds \citep{Showman2009}, which then influence the expected Doppler shifts and broadening in HRS data \citep{Rauscher_rotationrates}. It is an ongoing debate within the community how tidal forces interact with the complex structure of hot Jupiters and whether we should assume them to be synchronized or not \citep{Gu2009,Arras2010,Auclair2018,Lee2020,Yu2020}. Given the exquisite spectral detail measurable by HRS, including constraints on atmospheric motions, we may wonder how sensitive it is to the full three-dimensional nature of the planet, and what degree of bias a one-dimensional model will introduce. Another way to state this is to ask whether one-dimensional atmospheric models are sufficient to accurately interpret HRS data. Especially for hot Jupiters, where we expect temperature contrasts of hundreds of Kelvin across the globe \citep{GCM,DobbsDixon2013,kataria2016,Parmentier2018,Deitrick2020,Drummond2020}, differences in the local atmospheric structure can result in limb- or disk-integrated transmission or emission spectra (respectively) that are significantly different from a spectrum calculated using a 1-D model. Several studies have considered how the 3-D nature of a planet can influence lower resolution spectra \citep[e.g.,][]{Fortney2006,Fortney2010,Burrows2010} and, in particular, ways that the use of 1-D models could bias our interpretation of spectral data \citep[e.g.,][]{Feng2016,Blecic2017,Caldas2019,Pluriel2020}. For HRS data, several studies have simulated high-resolution spectra from different 3-D models, both in transmission \citep{Kempton2012,showman2013,Kempton2014,Rauscher_rotationrates} and emission \citep{jisheng,Harada2019}, demonstrating that the complex atmospheric structures of hot Jupiters can influence HRS data. \citet{Flowers} presented a first-of-its-kind analysis of HRS data, using simulated transmission spectra from 3-D models as template spectra in the cross-correlation analysis of observations of the hot Jupiter HD~189733b. Not only was the planet's signal detected at high significance (supporting the validity of the 3-D models), but this work also consistently detected the Doppler signature of day-to-night winds on this planet. When the Doppler effects from the winds were artificially excluded from the calculation of the template spectra, the planet's signal was detected with an anomalous blue-shift; when the effects of the winds were included, the detection was at the expected planet velocity. That is, ignoring the Doppler effects in the simulated transmission spectra resulted in incorrectly inferred planetary motion, confirming their measurable influence on the observed spectra. Here we present an analogous study to \citet{Flowers}, but for emission spectra (as opposed to transmission), in which we use simulated spectra from 3-D models in the HRS cross-correlation analysis.
In addition to studying a complementary observational technique---emission instead of transmission---we also target a different bright hot Jupiter than that analysis, namely HD~209458b. HRS transmission spectra can be directly influenced by atmospheric motion, but are only secondarily affected by the three-dimensional temperature structure \citep{Flowers}. We expect that HRS emission spectra may be much more sensitive to differences in atmospheric thermal structure around the planet, given that any Doppler effects from atmospheric motion will be most sensitive to the brightest regions of the planet \citep{jisheng}. In this paper we empirically determine how sensitive HRS emission spectra are to the 3-D nature of a particular planet, as well as to what degree various aspects of the atmospheric structure contribute to the observed data. Specifically, we study the sensitivity of the data to the planet's rotation period by running a suite of 3-D models for a range of rotation rates, producing a set of consistent temperature and wind structures for each case. We also test the sensitivity of the data to atmospheric chemistry by comparing an assumption of well-mixed abundances or local chemical equilibrium values in the radiative transfer routine we use to post-process the 3-D models and create simulated spectra. We also analyze the relative contributions of the two main opacity sources over the wavelengths of observation (2.285 to 2.348 $\mu$m): carbon monoxide and water. Finally, we test the sensitivity of the data to Doppler effects from atmospheric motions by cross-correlating with simulated spectra calculated with and without those effects. In Section \ref{sec:numerical}, we explain the various numerical methods used in this work: the three-dimensional atmospheric model and the radiative transfer routine used to post-process the 3-D models and calculate simulated emission spectra. Additionally, we briefly describe the results of these standard hot Jupiter models. In Section \ref{sec:data} we describe the observational data, along with details of our reduction and analysis methods. In Section \ref{sec:CC} we present the results of our cross-correlation analysis, comparing the strength of the planetary signal detected when using template spectra from 1-D or 3-D models, and comparing the aforementioned assumptions regarding chemistry, opacity sources, Doppler effects, and rotation rates. In Section \ref{sec:conclusion} we summarize our main results. \section{Numerical Models: 3D GCMs and Simulated Emission Spectra} \label{sec:numerical} In order to create simulated high-resolution emission spectra for HD~209458b, we first use a General Circulation Model to predict the three-dimensional atmospheric structure of the planet---that is, its thermal and wind structure---and then post-process the results using a detailed radiative transfer routine that accounts for the correct geometry and atmospheric Doppler shifts. These modeling methods and results are not particularly novel, having formed the basis of previous papers \citep{Kempton2012,GCM,Rauscher_rotationrates,newrad,jisheng}; however, our suite of models for this particular planet has not been published previously, and so we briefly describe the results in order to set the stage for the comparison between the simulated emission spectra and observed data.
\subsection{General Circulation Model} \label{sec:gcm} General Circulation Models (GCMs) are three-dimensional computational atmospheric models that simulate the underlying physics and circulation patterns of planetary atmospheres. For this work, we utilized the GCM from \cite{GCM} with the radiative transfer scheme upgraded as described in \cite{newrad}. This model solves the primitive equations of meteorology: the standard set of fluid dynamics equations with simplifying assumptions appropriate for the atmospheric context, solved in the rotating frame of the planet \citep[see an early review by][]{SCM2010}. The radiative heating and cooling of the atmosphere uses a double-gray scheme. That is, radiation is treated with two different absorption coefficients under two regimes: an infrared coefficient to model the thermal interaction of the gas with radiation, and an optical coefficient to model the absorption of incoming starlight. For a more detailed explanation of the GCM, see \citet{GCM} and \citet{newrad}. We model the hot Jupiter HD~209458b using the parameters listed in Table \ref{tab:gcm_params}, with system parameters from \citet{hd209params}, a high internal heat flux appropriate for this inflated hot Jupiter \citep{Thorngren2019}, and absorption coefficients and gas properties set to match our previous models of hot Jupiter atmospheres \citep[e.g.,][]{GCM}. Typically, we assume that hot Jupiters have been tidally locked into synchronous orbits, meaning that the rotation period and orbital period are equal. In order to empirically test this, we ran the GCM for a total of 12 different rotation rates spanning values faster and slower than synchronous. The slowest rotation rate was chosen to ensure that at least one of the models fell into the disrupted circulation regime for slow rotation previously found in \citet{Rauscher_rotationrates}. We then extended our rotation rate sampling (in steps of 0.25 km/s in equatorial rotation speed) to comparably cover faster rotation rates. We list the set of chosen rotation periods and their corresponding equatorial rotational velocities in Table \ref{tab:rot_models}, along with some representative wind speeds from each model. We ran each model at a horizontal spectral resolution of T31, corresponding to a physical scale of $\sim$4 degrees at the equator, and with 45 vertical layers evenly spaced in log pressure from 100 bar to 10 microbar. The planets were initialized with a globally averaged temperature-pressure profile and no winds; see \citet{Guillot2010} for a derivation of such profiles and \citet{RauscherandKempton2014} for a discussion of the global averaging parameter (set to $f=0.375$ here). Each simulation was allowed to run for 3000 orbits; by this point the upper atmosphere (including the infrared photosphere) had reached a steady state. \citet{Carone2019} recently demonstrated that the treatment of the deep atmosphere in hot Jupiter simulations---in particular the depth of the bottom boundary and the assumed strengths of convective adjustment and frictional/magnetic damping---can influence the circulation results predicted for the upper, observable atmosphere. Nevertheless, their models of HD~209458b show that this planet exhibits the standard hot Jupiter circulation pattern, in agreement with our results here.
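For concreteness, one common closed-form choice for such a globally averaged initial profile is the analytic double-gray $T(p)$ of \citet{Guillot2010}; the sketch below evaluates it with the parameters of Table \ref{tab:gcm_params}, though the exact functional form used to initialize the GCM may differ in detail.

\begin{verbatim}
import numpy as np

# Parameters from Table 1, converted to SI
g     = 9.434              # m s^-2
k_ir  = 1e-2 * 0.1         # cm^2 g^-1 -> m^2 kg^-1
gamma = 4e-3 / 1e-2        # kappa_vis / kappa_IR
f     = 0.375              # global averaging parameter
sigma = 5.670374419e-8     # Stefan-Boltzmann, W m^-2 K^-4
T_irr = (1.06e6 / sigma) ** 0.25   # from F_irr
T_int = (3500.0 / sigma) ** 0.25   # from F_int

def T_profile(P):
    """Averaged double-gray T(P) (assumed here to follow Guillot 2010,
    Eq. 29 with mu = 1/sqrt(3)); P in Pa, T in K."""
    tau = k_ir * P / g     # gray infrared optical depth
    s3 = np.sqrt(3.0)
    T4 = (0.75 * T_int**4 * (2.0 / 3.0 + tau)
          + 0.75 * T_irr**4 * f * (2.0 / 3.0 + 1.0 / (gamma * s3)
              + (gamma / s3 - 1.0 / (gamma * s3))
              * np.exp(-gamma * tau * s3)))
    return T4 ** 0.25

P = np.logspace(0, 7, 45)  # 10 microbar to 100 bar, in Pa
print(T_profile(P).round(0))
\end{verbatim}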
\begin{deluxetable}{lc} \caption{HD 209458b System Parameters} \label{tab:gcm_params} \tablehead{ \colhead{Parameter} & \colhead{Value}} \startdata Planet radius, $R_{p}$ & $9.9 \times 10^{7}$ m \\ Gravitational acceleration, $g$ & 9.434 m s$^{-2}$ \\ Orbital period, $P_{\mathrm{orb}}$ & 3.525 days \\ Orbital revolution rate, $\omega_{\mathrm{orb}}$ & $2.06318 \times 10^{-5}$ s$^{-1}$ \\ Synchronous rotation speed\tablenotemark{a} & 2.04 km s$^{-1}$ \\ Substellar irradiation, $F_{\mathrm{irr}}$ & $1.06 \times 10^{6}$ W m$^{-2}$\\ Planet internal heat flux, $F_{\mathrm{int}}$ & 3500 W m$^{-2}$\\ Optical absorption coefficient, $\kappa_{vis}$ & $4 \times 10^{-3}$ cm$^{2}$ g$^{-1}$ \\ Infrared absorption coefficient, $\kappa_{IR}$ & $1 \times 10^{-2}$ cm$^{2}$ g$^{-1}$ \\ Specific gas constant, $R$ & 3523 J kg$^{-1}$ K$^{-1}$\\ Ratio of gas constant to heat capacity, $R/c_{p}$ & 0.286 \\ Stellar radius, $R_{*}$ & $1.19$ $R_{\odot}$ \\ Stellar effective temperature, $T_{*eff}$ & $6090$ K \\ \enddata \tablenotetext{a}{In the case of synchronous rotation, this is the corresponding \\ velocity at the equator, calculated as $2 \pi R_{p}/P_{\mathrm{orb}}$.} \end{deluxetable} \begin{deluxetable}{cccc} \caption{Suite of General Circulation Models}\label{tab:rot_models} \tablehead{ \colhead{Rotation} & \colhead{Rotational} & \colhead{Max.\ wind speed} & \colhead{Max.\ wind speed} \\ \colhead{period} & \colhead{speed} & \colhead{at IR photosphere} & \colhead{at 0.1 mbar} \\ \colhead{(days)} & \colhead{(km/s)} & \colhead{(km/s)} & \colhead{(km/s)} } \startdata 9.08 & 0.79 & 2.50 & 6.28 \\ 6.91 & 1.04 & 2.64 & 4.44 \\ 5.57 & 1.29 & 5.65 & 6.87 \\ 4.67 & 1.54 & 5.64 & 6.76 \\ 4.02 & 1.79 & 5.61 & 6.64 \\ \textbf{3.53} & \textbf{2.04} & \textbf{5.64} & \textbf{6.32}\\ 3.14 & 2.29 & 5.43 & 6.19 \\ 2.83 & 2.54 & 5.47 & 6.15 \\ 2.58 & 2.79 & 5.10 & 5.72 \\ 2.37 & 3.04 & 4.78 & 5.56 \\ 2.19 & 3.29 & 3.77 & 5.02 \\ 2.03 & 3.54 & 4.62 & 5.17 \\ \enddata \tablecomments{The bolded values are for the model in a tidally-locked, synchronous rotation state. The rotational speeds are calculated as $2\pi R_p/P_{\mathrm{rot}}$, where $P_{\mathrm{rot}}$ is the rotation period. Continuum emission comes from the IR photosphere (at $\sim$65 mbar), while the absorption line cores come from pressure regions nearer to 0.1 mbar. Wind speeds are measured in the rotating frame of the planet.} \end{deluxetable} \subsection{GCM Results} \label{sec:gcm_results} Most of our models display the quintessential features expected for hot Jupiters: a strong eastward equatorial jet which advects the hottest spot on the planet slightly eastward of the substellar point and reduces---but does not eliminate---a large day-to-night temperature contrast of hundreds of Kelvin. We show this temperature structure for the synchronous model in Figure \ref{fig:cylinmap}. The equatorial jet characteristically extends throughout most of the atmosphere; Figure \ref{fig: windpressure} shows the zonally averaged winds for the synchronous model. Higher in the atmosphere an additional, significant component of the winds is a substellar-to-antistellar flow pattern; in Figure \ref{fig: windpressure} this shows up as a decrease in the averaged east-west wind speed. \begin{figure} \centering \includegraphics[width=3.4in]{STREAM24.png} \caption{The temperature structure near the infrared photosphere ($\sim$65 mbar), for our synchronously rotating model of HD~209458b, centered on the substellar point (at 0,0). Streamlines have been overplotted, with thicker lines showing stronger winds.
In the eastward direction, the winds reach a speed of 5.6 km/s. The hottest gas has been advected to the east of the substellar point by a strong equatorial jet, in the typical hot Jupiter circulation pattern. } \label{fig:cylinmap} \end{figure} \begin{figure} \centering \includegraphics[width=3.25in]{presssinglecbarnumticks.pdf} \caption{Longitudinally averaged east-west wind speeds throughout the atmosphere, for the synchronous rotation case. The eastward equatorial jet (dark blue) extends deep into the atmosphere. The black contour shows the boundary between eastward (positive) and westward (negative) winds.} \label{fig: windpressure} \end{figure} Figures \ref{fig:alltemps} and \ref{fig:windgrid} in the Appendix show maps of the temperature and winds at the infrared photosphere for all 12 models with different rotation rates. In line with results from previous investigations of non-synchronously rotating hot Jupiters \citep{Showman2009,Rauscher_rotationrates,Flowers}, we find that as the rotation rate increases, the stronger Coriolis force causes the equatorial jet to narrow, and eventually secondary, higher-latitude jets form. The wind speeds tend to decrease with increasing rotation rate (see Table \ref{tab:rot_models}), conspiring to create generally similar temperature patterns at the infrared photospheres of each model (Figure \ref{fig:alltemps}). The exceptions to these trends are the two most slowly rotating models, whose circulations have been disrupted from the standard hot Jupiter pattern. This disruption for very slow rotators was first identified by \citet{Rauscher_rotationrates}, and the dynamics have been studied by \citet{Penn2017}. For the purpose of this paper, these most slowly rotating models help to provide a lower limit to the possible rotation rate of HD~209458b, as the westward flow and corresponding advection of the hottest region of the atmosphere would result in an orbital phase curve of thermal emission significantly different from what has been previously observed for this planet. In Figure \ref{fig:pcurve} we show phase curves of the total thermal emission\footnote{Due to the double-gray radiative transfer in our GCM, the thermal emission is effectively bolometric, making it challenging to compare directly to the 4.5 micron flux from \citet{hd209phase}. The phase of peak flux, however, is more directly comparable, as it is indicative of the photospheric temperature structure.} from each model, calculated throughout one orbit. While most of the models do show similar curves, which peak near the measured phase of maximum flux at 4.5 micron \citep[$0.387\pm 0.017$;][]{hd209phase}, the two most slowly rotating models are ruled out by these data, as they peak later in phase. Nevertheless, we include these models in the rest of our analysis in order to investigate how they are constrained by HRS data. \begin{figure} \centering \includegraphics[width=3.4in]{allphasecurve.pdf} \caption{Calculated orbital phase curves of total thermal emission from our suite of models with different rotation rates. Only the models with the slowest two rotation rates---with circulation patterns disrupted from the standard hot Jupiter eastward flow---have phase curves that peak after secondary eclipse (which would occur at a phase of 0.5, not shown here).
Since phase curves of HD~209458b measured at 4.5 microns show a peak before the secondary eclipse \citep[at $0.387\pm 0.017$;][shown by the black dashed line and grey shaded area]{hd209phase}, we find that all models except the two slowest rotators are consistent with observations.} \label{fig:pcurve} \end{figure} Finally, since the CRIRES/VLT emission spectra of HD~209458b are the focus of our paper, we also show the temperature structure and line-of-sight velocities (from both winds and rotation) in the upper atmosphere of the synchronous model in Figure \ref{fig:syncortho}, shown in an orientation corresponding to the first night of observation. This is the region of the atmosphere from which the flux in the CO line cores emerges, meaning that the detailed structure of those line shapes comes from the brightness-weighted local Doppler shifts, integrated across the visible hemisphere. Since the winds are dominantly eastward, they contribute to the Doppler shifts in the same direction as the rotation field. However, the line-of-sight velocity contours are slightly bent away from being strictly aligned with the rotation axis by the specific atmospheric flow pattern. \begin{figure} \centering \includegraphics[width=3.5in]{syncortho218.png} \caption{The temperature structure of the upper atmosphere ($\sim 0.1$ mbar), within the region from which the flux in the CO line cores emerges. The projection is centered on the subobserver point at a phase corresponding to the beginning of the observation, shortly after secondary eclipse. The substellar point is marked with a white star. Also shown are contours of the line-of-sight velocity toward (blue) or away (red) from the observer, due to contributions from both the winds and rotation of the planet (Equation \ref{vloseq}). The contour levels shown in red and blue are $\pm$2, 4, and 6 km/s. The black dotted contour shows the boundary of 0 km/s. Note that whereas at the infrared photosphere the hottest region is east of the substellar point, here it is west of the substellar point, due to convergence in the atmospheric flow at that location.} \label{fig:syncortho} \end{figure} The full set of orthographic projections for our suite of 12 models is shown in Figure \ref{fig:orthogrid} in the Appendix. Aside from the two most slowly rotating models, we see similar temperature and line-of-sight velocity fields across the rest of the models. While higher blue-shifted line-of-sight velocities occur on the visible hemisphere, the red-shifted flow extends across a larger fraction of the planet disk. In contrast, the two slowest rotators have weak contributions to the velocity field from their rotation, and the winds generally work in an opposite direction to the rotation, leading to very little Doppler shifting compared to the other models. In addition, the temperature structure is fairly uniform across the visible hemisphere. A hot feature exists on the western side of the planet (from our perspective, to the left of the subobserver point), where we also see strongly blue-shifted velocities from the combination of rotation and winds blowing around from the night side. Chevron features like this, regions of flow convergence and associated heating, are commonly seen in hot Jupiter GCMs \citep[e.g.,][]{Showman2009,Rauscher2010,Komacek2019} and are related to the transport of momentum from higher latitudes to the equator \citep{ShowmanPolvani2011}.
Depending on the particular model---and the pressure level within the atmosphere---chevron features may appear to the east or west of the substellar point. Here we see multiple chevron features, both near the infrared photosphere and in the upper atmosphere. New state-of-the-art GCMs in \citet{Deitrick2020} also show these features, at multiple resolutions and robust against assumptions regarding vertical hydrostatic equilibrium (see their Figures 19 and 22). While we have already determined that the phase curve data for HD~209458b exclude the two slowest rotation states for this planet (Figure \ref{fig:pcurve}), we are still interested in comparing the simulated high-resolution emission spectra from these models to the rest of the suite. For most of the models, based on Figures \ref{fig:syncortho} and \ref{fig:orthogrid} we expect that the integrated emission spectra should show both red- and blue-shifting of the CO lines, but the detailed line shapes will be controlled by the complex three-dimensionality of the atmospheric temperature and line-of-sight velocity structures. Due to the slowing of the winds with increasing rotation rate (see Table \ref{tab:rot_models} and Figure \ref{fig:orthogrid}), we may expect similar Doppler-induced line profiles for these models. In contrast, for the two most slowly rotating models there may be very minimal Doppler effects shaping the lines in their simulated emission spectra. \subsection{Radiative Transfer Post-Processing} \label{sec:rt} In order to generate high-resolution emission spectra from our three-dimensional models, we apply the code and method outlined in \citet{jisheng}. Briefly, we take the output from the GCM (temperature and winds at 48 $\times$ 96 $\times$ 60 points in latitude $\times$ longitude $\times$ pressure; see Figure \ref{fig: tpprof} for the synchronous case and Figure~\ref{fig:alltp} for all of the GCM outputs) and solve the radiative transfer equation in a geometrically-consistent manner to produce the thermal emission spectrum emanating from the visible hemisphere of the planet. The radiative transfer equation is solved in the limit of pure thermal emission: \begin{equation} I(\lambda)=B_{0}e^{-\tau_{0}}+ \int_{0}^{\tau_{0}}e^{-\tau}B\, d\tau, \end{equation} where $I$ is the intensity at each wavelength $\lambda$, $B$ is the Planck function (calculated from the local temperatures), $\tau$ is the slant optical depth along the line of sight toward the observer, taking into account varying opacities throughout the path, and $B_{0}$ and $\tau_{0}$ are evaluated at the deepest point along the ray. We strike 2,304 ($= 48 \times 96 / 2$) individual line-of-sight intensity rays through the atmosphere and then integrate with respect to the solid angle subtended by each grid cell to produce the planet's emission spectrum in flux units. To correctly account for the line-of-sight geometry we must first interpolate the temperature and wind output from the GCM onto a fixed-\textit{altitude} vertical grid. This interpolation allows us to readily strike straight-through rays along the observer's sight line. This geometrically-consistent approach to the radiative transfer is relatively uncommon in calculations of emission spectra from GCMs. A more common and computationally less challenging technique is to calculate the radiative transfer along radial profiles and assume isotropic emission from the top of the atmosphere. \citet{Caldas2019} have recently shown that using correct ray-tracing geometry is important in calculating transmission spectra from 3-D models; we are not aware of a similar study of geometry's importance in calculating emission spectra.
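A discretized sketch of the pure-thermal-emission integral above, for a single line-of-sight ray, is given below; the layer values are stand-ins for the interpolated GCM grids, and the midpoint quadrature is one simple choice among several.

\begin{verbatim}
import numpy as np

def ray_intensity(B, dtau):
    """I = B0*exp(-tau0) + sum of exp(-tau)*B*dtau along one ray.
    B: Planck-function values per layer, ordered from the observer
    inward; dtau: slant optical depth of each layer. The boundary
    term uses the deepest layer's Planck function."""
    tau_top = np.concatenate(([0.0], np.cumsum(dtau[:-1])))
    tau0 = tau_top[-1] + dtau[-1]      # total slant optical depth
    layer_sum = np.sum(B * np.exp(-(tau_top + 0.5 * dtau)) * dtau)
    return B[-1] * np.exp(-tau0) + layer_sum

# Hypothetical 45-layer ray: hotter (brighter) gas at depth
B = np.linspace(0.2, 1.0, 45)   # arbitrary units
dtau = np.full(45, 0.1)
print(ray_intensity(B, dtau))
\end{verbatim}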
\begin{figure} \centering \includegraphics[width=\columnwidth]{1doverlay.png} \caption{Temperature-pressure profiles throughout the atmosphere for our synchronously rotating model of HD~209458b. The rainbow lines show equatorial profiles, with the hue corresponding to the longitude east of the substellar point. The gray profiles are from the entire planet. We use this 3-D atmospheric structure, together with the local wind velocities, to calculate simulated high-resolution emission spectra for cross-correlation with the observed data. The black lines show examples of four temperature-pressure profiles from a suite of 1-D models (described in Section \ref{1Dmodels}) also used to simulate spectra. These models cover the same temperature range realized by our 3-D models, but use only a single profile to represent the entire planet. Note that the best-fit model from this suite has an unrealistic super-adiabatic profile.} \label{fig: tpprof} \end{figure} As a consequence of having varying temperature conditions over the visible hemisphere of the planet, we may expect that our integrated spectra are influenced by spatial variations in the chemical abundances of our main opacity sources. Based on the temperature range spanned by the GCM outputs and the wavelength range modeled (2.28 -- 2.35 $\mu$m), we expect that H$_2$O and CO will be the dominant opacity sources. One of the simplest assumptions we can make about the abundances of H$_2$O and CO is that they are in chemical equilibrium for the local conditions at each location in the atmosphere. However, this neglects the important influence of mixing from atmospheric dynamics, which is likely to bring these species out of chemical equilibrium. The physically and chemically sophisticated work by \citet{Drummond2020} demonstrated that 3-D mixing is expected to alter the chemical structure of hot Jupiter atmospheres, with the vertical and horizontal advection components both being significant \citep[with similar results also found by][]{Mendoca2018}. In their model of HD~209458b, however, they found minimal differences in the abundances of CO and H$_2$O between their kinetics model and the assumption of chemical equilibrium. While they predicted minimal differences between these cases in their simulated (lower resolution) emission spectra, here we further investigate the influence of chemical abundances in high-resolution emission spectra.\\ The double-gray radiative transfer scheme within our GCM simplifies the multi-wavelength opacities of the atmosphere, meaning that we do not prescribe a specific chemistry, nor does the simulation predict chemical mixing. In order to investigate the impact of chemistry on the emission spectra, we consider two extreme cases within our post-processing framework: abundances determined everywhere by local chemical equilibrium, or abundances that are fully homogenized throughout the atmosphere and set to some constant volume mixing ratio (VMR). The first assumption applies to the limit where dynamics do not create any significant chemical disequilibrium, while the second may be a proxy for fully efficient mixing, with the caveat that we still need to choose a value for the homogenized abundances.
We choose to fix the values for water and CO to the best-fit values from a previous retrieval analysis of these same data \citep[VMR values of $1\times 10^{-3.5}$ for CO and $1\times 10^{-5}$ for water,][]{combininglowandhigh}. There is significant evidence in the literature suggesting a water abundance below the solar equilibrium value \citep[which would be $\sim 5 \times 10^{-4}$;][]{h2oabundance_madhusudhan} for HD~209458b \citep[although see][]{Line2016}. From previous analysis of these HRS data, marginal evidence for H$_2$O was claimed by \citet{Brogi_2019}, with a peak around VMR $\sim 1 \times 10^{-5.5}$ but with an unbounded lower limit. From HST transmission spectroscopy, \citet{Barstow2017} and \citet{Pinhas2019} retrieve low water abundances of $1\times10^{-5}$ and $1\times10^{-4.7}$, respectively. These results are particularly significant as they are obtained with models accounting for the presence of aerosols, and therefore include their known ability to mimic a low water abundance by reducing the contrast of the water band in the WFC3 pass-band. Lastly, a recent attempt at combining both low-resolution and high-resolution emission spectroscopy was presented by \citet{combinedhudra}, resulting in a VMR of $1\times10^{-4.1}$. The observational constraints presented above and the weak detection of water in these data inspired us to explore an additional set of models without water vapor, along with our constant VMR models with water under-abundant compared to equilibrium calculations. \\ In order to self-consistently account for Doppler shifts resulting from winds and rotation in the high-resolution spectra---given that the velocity resolution is comparable to the speeds of atmospheric motion ($\sim$km/s)---we calculate the line-of-sight velocity at an atmospheric height $z$ for a longitude--latitude ($\theta$, $\phi$) pair, with $\theta$ measured from the sub-observer point, as: \begin{multline} v_{LOS}(\theta,\phi) = -u \sin (\theta) - v\cos(\theta) \sin(\phi) \\ +w \cos(\theta) \cos(\phi) -(R_{p}+z)\Omega\sin (\theta) \cos (\phi) \label{vloseq} \end{multline} where $u,v,w$ are the wind speeds in the east-west, north-south, and radial directions, respectively, and $\Omega$ is the planet's bulk rotation rate. We calculate simulated spectra both with and without these Doppler shifts, so that we can quantitatively evaluate how much they contribute to the observed data. We calculate the simulated emission spectra at a higher spectral resolution ($R\sim 250,000$) than that of the CRIRES data, across the same wavelength range (2.2855 -- 2.3475 $\mu$m). During the data analysis, the simulated spectra are convolved with a Gaussian kernel to match the resolving power of CRIRES ($R=100,000$). As the planet rotates throughout the time of observation, we calculate spectra for each exposure. The atmospheric structure from the GCM is output every 4 degrees in orbital phase. In order to match the more frequently sampled observed phases, we created interpolated spectra as follows. For each exposure (corresponding to some orbital phase) we take the GCM outputs from the two nearest-neighbor phases, rotate each atmosphere to the correct orientation, calculate simulated spectra from each, and then combine those two spectra, weighting linearly by how close each GCM output is to the phase of observation. Even before this weighted average, the spectra produced from two adjacent GCM outputs differed only marginally. For the fastest rotating model, the average difference was less than $5\%$. For the slowest rotating model, this difference was only $0.6\%$ on average. Thus, in our interpolation process, the resultant changes to the spectra were on the order of a few percent at most.
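Equation (\ref{vloseq}) translates directly into code; a minimal sketch, assuming angles in radians, wind speeds and heights in km, and $\Omega$ in rad/s, with the synchronous values from Tables \ref{tab:gcm_params} and \ref{tab:rot_models} as defaults.

\begin{verbatim}
import numpy as np

def v_los(theta, phi, u, v, w, z=0.0, R_p=9.9e4, Omega=2.06318e-5):
    """Line-of-sight velocity (the v_LOS equation above): theta =
    longitude from the sub-observer point, phi = latitude; (u, v, w) =
    east-west, north-south, radial wind speeds in km/s. Negative
    values are blue-shifted, toward the observer."""
    return (-u * np.sin(theta)
            - v * np.cos(theta) * np.sin(phi)
            + w * np.cos(theta) * np.cos(phi)
            - (R_p + z) * Omega * np.sin(theta) * np.cos(phi))

# Rotation alone at an equatorial limb gives ~2 km/s, the synchronous
# equatorial rotation speed of Table 1.
print(v_los(theta=np.pi / 2, phi=0.0, u=0.0, v=0.0, w=0.0))
\end{verbatim}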
\subsection{Simulated Emission Spectra} \label{sec:rt_spectra} Simulated spectra from the full set of 12 rotation models, over a partial range of the total wavelength coverage and for a time near the beginning of the observation (phase of $\sim 0.52$), are shown in Figure \ref{fig: spec12rr}. For each model, versions of the spectrum with and without Doppler effects are plotted in solid and dashed lines, respectively. As expected from the discussion of their circulation patterns above, the models with the slowest rotation rates have very little Doppler broadening. In contrast, all of the other models show significant broadening, without a strong dependence on the planet's rotation rate, because in more slowly rotating models the faster winds contribute more to the broadening. The main notable difference between these spectra is in the relative depths of the spectral lines, which is a function of the vertical structure of these atmospheres, both thermal and as probed by the line opacities. \begin{figure*} \centering \includegraphics[width=\textwidth]{broadenedandnot.pdf} \caption{Simulated spectra from post-processing the atmospheric structures predicted by our GCM, color coded by the rotation rate assumed for each model (with the synchronous model in black). In these spectra we only include opacity from CO (not water; see Figure \ref{fig:synccasediffchem} for comparison) and assume local chemical equilibrium abundances. The dashed lines show spectra produced without the influence of Doppler effects, while the solid lines account for shifts and broadening due to winds and rotation. The main result of the atmospheric motion is to produce significant line broadening; for most of the models the amount of broadening is similar, due to a trade-off between the contributions from winds and rotation. The two most slowly rotating models have very little broadening, due to the weak contribution from rotation, but also because of westward winds in these models.} \label{fig: spec12rr} \end{figure*} We can investigate the relative contributions of the thermal profile and changing opacities to the depth of the spectral lines by comparing the different chemical assumptions we use in the post-processing. Figure \ref{fig:synccasediffchem} shows the differences in spectra calculated under our assumptions of chemical equilibrium abundances or constant volume mixing ratios, both with and without water included as an opacity source, for our synchronous model. The spectral features from CO appear fairly consistent for all of our assumed chemistry conditions.
Over the range of pressures and temperatures that contribute to our planet's dayside emitted spectra ($P \sim 0.1-100$ mbar, $T \sim 900-1700$ K; see Figures \ref{fig:cylinmap} and \ref{fig:syncortho}), local chemical equilibrium abundances for CO at solar composition are fairly constant, at a VMR of $\sim 4\times 10^{-4}$, only slightly higher than the value we use for our constant VMR assumption. \begin{figure} \centering \includegraphics[width=3.6in]{synccasediffchem.pdf} \caption{Simulated emission spectra, post-processed from our 3-D atmospheric model, comparing the different assumptions used for the abundances of water and CO, the main sources of opacity at these wavelengths. These spectra are from the synchronously rotating model, over a fraction of the wavelength coverage of the observations; the solid and dashed spectra are produced with and without the Doppler effects of winds and rotation, respectively. The spectra produced assuming abundances determined by local chemical equilibrium and fixed to a constant value look very similar for the CO features. The assumption of local chemical equilibrium results in much more abundant water, with much stronger spectral features in comparison to the constant value that best matches previous observations.} \label{fig:synccasediffchem} \end{figure} In contrast, the assumption of local chemical equilibrium produces significantly different water abundances than the constant VMR value we use \citep[the best-fit value from a previous 1-D analysis of these data;][]{combininglowandhigh}. For the temperature and pressure conditions probed by these emission spectra, local chemical equilibrium abundances for water have VMR $\sim10^{-3}-10^{-4}$, with the hottest and lowest pressure regions dipping down to VMR $\sim 10^{-7}$. These abundances are mostly significantly higher than our constant VMR value ($10^{-5}$), leading to much more visually apparent spectral features in Figure \ref{fig:synccasediffchem}. These differences will strongly influence the significance of detection in our data analysis, as discussed in Section \ref{sec:CC}. The effect of Doppler shifting across the entire spectrum can be assessed by cross-correlating each simulated emission spectrum with the non-Doppler-shifted spectrum calculated from the same model, as shown in Figure \ref{fig:allcc}, where we have plotted these cross-correlation functions for each of our 12 rotation models. The dashed black line shows the spectrum from the synchronous model without Doppler effects cross-correlated with itself, to characterize the intrinsic width of the cross-correlation function. The two slowest rotators have the least amount of broadening, and the second slowest rotator actually has a cross-correlation function similar to the unshifted reference. All of the other rotation rates produce roughly similar levels of broadening, with only minimal net red- or blue-shifts (and no trend in the shift with rotation rate), in agreement with our previous findings in \citet{jisheng}. \begin{figure} \centering \includegraphics[width=3.5in]{AllCCfboxshad.pdf} \caption{For each of our 12 models with different rotation rates, we cross-correlate two simulated spectra from the same model: one with Doppler effects included and one without. (The solid black line is the synchronous model.)
The resulting cross-correlation functions, plotted here, allow us to assess the contribution of the planet's winds and rotation to the overall Doppler shifting and broadening of the lines in the emission spectra. The gray dashed line shows the synchronous model's non-Doppler-shifted spectrum, cross-correlated with itself, to show the intrinsic broadening in the spectra. The dotted vertical lines mark the velocity at the peak of the cross-correlation function for each model. All but the two most slowly rotating models show significant---and similar---broadening, while none of the models exhibit large net red- or blue-shifts. CRIRES allows us to fully resolve the shapes of these line profiles, since its instrumental profile ($\sim 3$ km/s) is narrower than the FWHM of these lines.} \label{fig:allcc} \end{figure} The similarity in Doppler broadening between all but the two most slowly rotating models is to be expected, from the discussions of circulation patterns above and from visual inspection of their spectra in Figure \ref{fig: spec12rr}. As a more quantitative comparison, in Figure \ref{fig:FWHM} we show the width of the cross-correlation function, calculated at $80\%$ of its maximum (shown in Figure \ref{fig:allcc}), as a function of the rotation period of the simulated planet, normalized to the synchronous model. This width serves as a proxy for the degree of broadening caused by the differing sources of Doppler effects. The filled and unfilled circles correspond to spectra that have been broadened by both winds and rotation, and by rotation only, respectively. The scatter in the unfilled circles is a result of differences in temperature structure in the corresponding GCM. Aside from the two slowest rotating models---which exhibit westward flow, opposite to the direction of rotation---allowing the spectra to also be broadened by the winds causes the width to increase. We show the result of a single temperature structure artificially broadened at the various rotation rates with the black dashed line. The unfilled circles lie both above and below this line, meaning that the amount of broadening in the lines themselves does not allow us to constrain the rotation rate strongly. Because the total broadening of the line is sensitive to temperature and wind structures in addition to rotation rate, we are unable to retrieve a rotation rate from the broadening width of the spectra alone. \begin{figure} \centering \includegraphics[width=3.5in]{FWHMrot80.png} \caption{Width of the cross-correlation function, calculated at $80\%$ of its maximum height (shown in Figure \ref{fig:allcc}), as a function of the rotation period of the simulated planet, normalized to the synchronous model. The filled circles correspond to spectra that have been broadened by both wind and rotation, and the open circles represent spectra that have been broadened only by rotation. To produce the black dotted line, we took the temperature structure of the synchronous model and calculated the resulting broadening for each rotation rate. For the two slowest rotating models, we find that the westward winds cause the fully broadened spectra to have a smaller width than the spectra only broadened by rotation. For all of the other models, we see that the addition of winds causes the resulting correlation width to increase. Because the total broadening of the line is sensitive to temperature and wind structures in addition to rotation rate, we are unable to retrieve a rotation rate from the broadening width of the spectra alone.} \label{fig:FWHM} \end{figure}
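The broadening proxy in Figure \ref{fig:FWHM} can be sketched as follows: cross-correlate the Doppler-broadened and unbroadened spectra on a common velocity grid, then measure the full width of the CCF at $80\%$ of its peak. The Gaussian toy lines below stand in for the model spectra, and the width estimate is pixel-limited (no sub-pixel interpolation).

\begin{verbatim}
import numpy as np

def ccf_width(spec_a, spec_b, dv, frac=0.8):
    """Cross-correlate two equally sampled spectra (pixel spacing dv,
    km/s) and return the full width of the CCF at frac of its max."""
    a = spec_a - spec_a.mean()
    b = spec_b - spec_b.mean()
    ccf = np.correlate(a, b, mode="full")
    lags = dv * (np.arange(ccf.size) - (spec_a.size - 1))
    above = lags[ccf >= frac * ccf.max()]
    return above.max() - above.min()

# Toy absorption line and a broadened copy, on ~1.5 km/s pixels
v = np.arange(-50.0, 50.0, 1.5)
narrow = 1.0 - 0.5 * np.exp(-v**2 / (2 * 3.0**2))
broad  = 1.0 - 0.5 * np.exp(-v**2 / (2 * 6.0**2))
print(ccf_width(broad, narrow, dv=1.5))   # broadened width
print(ccf_width(narrow, narrow, dv=1.5))  # intrinsic width
\end{verbatim}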
\subsection{1D Atmospheric Models} In addition to producing post-processed spectra from the 3-D GCM outputs, it is also instructive to compare our results against spectra produced from 1-D models of HD~209458b. We perform comparisons against a suite of previously published 1-D models (described in Section \ref{1Dmodels}) and choose four representative T-P profiles to show in Figure~\ref{fig: tpprof}. These four representatives consist of the best-fitting 1-D model to the observations, two profiles that bound the temperatures produced in our GCM, and a model that approximately reproduces the average equatorial T-P profile produced by our GCM. \section{Observational Data of HD~209458b} \label{sec:data} The data we re-analyze in this paper were originally published in \citet{schwarz}, where the full details of the observations can be found. In brief, the star HD~209458 (K=6.31 mag) was observed for a total of 17.5 hours with the CRIRES instrument on the VLT as part of the ESO program 186.C-0289 in August and September 2011. The system was observed on three separate nights, always shortly after secondary eclipse. Here we utilize only the first two nights of data, which were observed in nodding mode. We discard the third night because it was observed in staring mode for testing purposes and shows a higher noise level. As explained in \citet{schwarz}, the spectra were optimally extracted via the standard ESO pipeline and then re-calibrated in wavelength using the known position of telluric lines as a reference. Due to previously reported issues with the fourth detector of CRIRES, we chose to include only the first three detectors in our analysis. Extracting the planetary signal from the calibrated spectra poses a unique challenge due to the extreme flux ratio between the hot Jupiter and its star. Furthermore, for ground-based observations, spectral absorption lines formed in the Earth's atmosphere (telluric features) must be accounted for, and are often so strong that parts of the data must be masked completely as they exhibit near-zero flux. In order to decouple the planet's spectrum from the stellar and telluric lines, we utilize standard analysis algorithms for HRS \citep[see Section 3.2 of][for a detailed description]{Brogi_2019}. These are based on the principle that over the relatively short period of observations, the planetary lines are Doppler shifted by a varying amount due to the changing orbital motion of the exoplanet, while telluric and stellar lines are essentially stationary\footnote{Stellar lines do shift by $\sim100$ m s$^{-1}$ per hour of observations due to the barycentric velocity of the observer and the stellar motion around the center of mass of the system, but these are negligible compared to the change in the planet's radial velocity.}. Thus, by removing the parts of our signal that do not shift with time, we are left with the planetary spectrum. We apply the latest iteration of the HRS analysis described in \cite{Brogi_2019}, to which we point the reader for a step-by-step description. In short, the algorithm determines a model for the time-dependent stellar and telluric spectrum empirically from the observations, and normalizes the data by dividing out that model. The resulting data product only contains the planet spectrum, deeply embedded in the stellar photon noise at this stage.
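In schematic form, the detrending exploits this time-stationarity. The two-line normalization below is a deliberately simplified sketch: the actual algorithm of \citet{Brogi_2019} fits and divides out a time- and wavelength-dependent model of the stellar and telluric spectrum, which this caricature only approximates.

\begin{verbatim}
import numpy as np

def remove_stationary(flux):
    """flux: (n_exposures, n_pixels). Divide each wavelength channel
    by its time average to suppress lines at fixed wavelengths
    (stellar, telluric), then subtract each exposure's mean so that
    only time-variable structure -- the Doppler-shifting planet lines
    plus noise -- remains."""
    norm = flux / flux.mean(axis=0, keepdims=True)
    return norm - norm.mean(axis=1, keepdims=True)

# Toy data cube: photon noise plus one deep stationary telluric line
rng = np.random.default_rng(0)
flux = 1.0 + rng.normal(0.0, 0.01, size=(40, 1024))
flux[:, 300] *= 0.2
residuals = remove_stationary(flux)
print(residuals[:, 300].std())  # residual is at the noise level
\end{verbatim}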
Similarly to previous studies of atmospheric circulation from transmission spectra \citep{Brogianalysis,Flowers}, we run two parallel versions of the analysis: one with the data as is (hereafter the {\sl real} data), and one containing each model spectrum injected at a small level (hereafter the {\sl injected} data), chosen to be 0.1$\times$ the nominal value. Here the nominal value is the planet's emission spectrum in units of stellar flux, i.e., scaled by a blackbody at the stellar effective temperature and multiplied by the planet-to-star surface ratio (see system parameters in Table~\ref{tab:gcm_params}). The exact value of the scaling factor is not important for the outcome of the analysis, as long as it is significantly smaller than the nominal value. A small scaling factor is needed to realistically simulate the effects of the analysis on each model spectrum without appreciably changing the signal content of the data. In order to detect the planet's emission spectrum, buried in the stellar noise at this stage, we use the standard technique in high-resolution spectroscopy, in which we cross-correlate a template spectrum---or set of templates---for the planet with the data. If the template is a good representation of the planet's spectrum, there will be a maximum cross-correlation value at velocities corresponding to the planet's orbital radial velocity during the time of observation. The significance of each tested model is determined as in previous work: we compute the difference between the CCF of the injected data and the CCF of the real data. This removes the cross-correlation noise and the correlation with the real planet signal, and provides us with the {\sl model} cross-correlation. Note that this is different from the CCF obtained by autocorrelating the spectra, because it contains any alterations that our data analysis necessarily introduces on the planet signal while removing the telluric and stellar spectra. We then compare the model CCF and the real CCF via chi-square, and we assign a significance by discriminating against a non-detection, which in our case is a flat cross-correlation function (i.e., a straight line). Finally, $n$-$\sigma$ confidence intervals are determined by the region in the parameter space where the detection significance drops by $n\,\sigma$. For the full explanation of how the chi-square statistic is utilized, we refer the reader to \citet{Brogianalysis} and \citet{Flowers}. \section{Data Analysis Results} \label{sec:CC} We apply the cross-correlation and significance test explained in Section~\ref{sec:data} to the spectra produced from our three-dimensional model, as well as to a suite of one-dimensional models for comparison. These one-dimensional models are taken from previous work, and further information about them is provided in Section \ref{1Dmodels}. We find significant detections of the planet's signal over the range of template spectra tested, but our strongest detection came from the spectra produced by post-processing our three-dimensional model, as reported in Table \ref{tab:sigvalues}. In particular, we found the highest significance of detection (at 6.8$\sigma$) for the model that was post-processed assuming uniform volume mixing ratios for CO and water, and that included the Doppler effects from winds and rotation.
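To make the velocity search concrete before turning to Figure \ref{fig:vmrsingleh20}, the following is a minimal sketch of evaluating the CCF over trial orbital and rest frame velocities, assuming a circular orbit; in the real analysis this operates on the detrended spectra and their model-injected counterparts described above.

\begin{verbatim}
import numpy as np

def ccf_grid(wave, residuals, phases, t_wave, t_flux,
             Kp_grid, Vr_grid):
    """Sum the data-template cross-correlation over exposures,
    shifting the template to v_p = Kp*sin(2*pi*phase) + Vrest for
    each trial (Kp, Vrest) pair. Velocities in km/s."""
    c = 2.998e5
    out = np.zeros((Kp_grid.size, Vr_grid.size))
    for i, Kp in enumerate(Kp_grid):
        for j, Vr in enumerate(Vr_grid):
            for resid, phase in zip(residuals, phases):
                vp = Kp * np.sin(2.0 * np.pi * phase) + Vr
                shifted = np.interp(wave, t_wave * (1.0 + vp / c),
                                    t_flux)
                out[i, j] += np.dot(resid - resid.mean(),
                                    shifted - shifted.mean())
    return out

# Hypothetical shapes only: 40 exposures just after secondary eclipse
wave = np.linspace(2.2855, 2.3475, 2048)
resid = np.random.default_rng(1).normal(size=(40, 2048))
grid = ccf_grid(wave, resid, np.linspace(0.51, 0.58, 40), wave,
                np.ones(2048), np.arange(100.0, 200.0, 10.0),
                np.arange(-20.0, 21.0, 5.0))
print(grid.shape)
\end{verbatim}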
Figure \ref{fig:vmrsingleh20} shows the significance of cross-correlation detection for this model, over the range of orbital and rest frame velocities included in the analysis. Note that these observations span a relatively small phase range and are taken close to superior conjunction, where the planet's radial velocity curve can be approximated as a linear function of time. This means that higher orbital velocities can be somewhat compensated for by allowing the planet to have a positive rest frame velocity (i.e., anomalous motion away from the observer), resulting in some inherent degeneracy between those parameters. Our detection is consistent with a zero rest frame velocity for the planet and with the orbital velocity reported in \citet{hd209params}. \begin{deluxetable*}{ccccc} \caption{Highest significance detections for the model spectra tested in this work. The largest detection significance came from using the 3D atmospheric model, compared with the 697 1D models tested. Note that the best-fitting 1D model exhibits a non-physical, super-adiabatic lapse rate. \added{For detections broken down by rotation rate, see Table \ref{tab:results} in the appendix.} } \label{tab:sigvalues} \tablehead{ \colhead{Dimensions} & \colhead{Abundances} & \colhead{Molecules Included} & \colhead{Significance (Doppler on)} & \colhead{Significance (Doppler off)}} \startdata 3D & Chemical equilibrium & CO & 6.49 & 6.40 \\ & Chemical equilibrium & CO and H$_2$O & 4.22 & 3.39 \\ & Constant volume mixing ratio & CO & 6.02 & 5.72\\ & Constant volume mixing ratio & CO and H$_2$O & 6.87 & 6.37 \\ \hline 1D & Constant volume mixing ratio & CO and H$_2$O & - & 5.06 \\ \enddata \end{deluxetable*} \begin{figure} \centering \includegraphics[width=3.6in]{watercovmrmatteo.pdf} \caption{The significance of our detection of the planetary signal, showing 1-, 2-, and 3-$\sigma$ confidence intervals around the peak detection (at 6.78$\sigma$, for our spectra calculated using water and CO with constant abundances), over the velocity parameter space explored by the cross-correlation fitting. The literature orbital velocity of the planet and its expected rest frame velocity (zero) are marked by the white star. Our analysis confidently detects the planet at its expected velocity.} \label{fig:vmrsingleh20} \end{figure} One of the main results of our analysis is that \textit{template spectra from our 3-D model---calculated without any fine-tuning---outperform a large suite of template spectra from one-dimensional models} (a 6.8$\sigma$ detection significance compared to 5.1$\sigma$; Table \ref{tab:sigvalues}). This is evidence that the three-dimensional structure of this hot Jupiter's atmosphere leaves detectable signatures in the disk-integrated high-resolution emission spectrum of the planet. In the following sections we explore the various physical properties that could contribute to this enhanced detection and evaluate their influence. \subsection{Comparison to 1D Models} \label{1Dmodels} To compare our results with the modeling presented in past work, we estimated the significance of the cross-correlation with two grids of models obtained with one-dimensional, plane-parallel radiative-transfer calculations. The first grid of models is described in \cite{schwarz} and consists of 704 models describing a parametric $T$-$p$ profile with a region at constant lapse rate ($dT/d\log(p)$) sandwiched between two isothermal regions.
Pressure and temperature at the upper and lower boundaries can be changed, thus exploring a wide range of lapse rates up to $d\log(T)/d\log(p)=0.31$, which includes non-physical super-adiabatic lapse rates. Relative abundances of CO and H$_2$O are also varied in the range log(CO/H$_2$O) = 0--1.5. After excluding models with a thermal inversion layer \added{(ruled out in \citealt{schwarz})}, we were left with 546 models to test. Since these models were not designed to explore high abundance ratios between CO and H$_2$O, we also tested a subset of the models described in \citet{combininglowandhigh} and sampled from the low-resolution posterior retrieved by \citet{Line2016}. From that initial sample of 5,000 models we removed those with a thermal inversion and/or log(CO/H$_2$O) $< 1.5$ \added{\citep[as low CO/H$_2$O models are already included in the grid from][]{schwarz}}, resulting in 151 additional models, spanning abundance ratios up to log(CO/H$_2$O) = 3.0. All of these models have a sub-adiabatic lapse rate in the range $0.05 < d\log T/d\log p < 0.08$. The only broadening applied to the 1-D model spectra arises from the pressure and thermal broadening components of the Voigt profile used to generate the spectral lines. Of the 697 one-dimensional models tested, the highest measured significance is 5.06$\sigma$, with only 14 models reaching a significance greater than 4$\sigma$. These are models with a steep lapse rate ($0.13 < d\log T/d\log p < 0.31$) and an abundance ratio of 10--30 between CO and H$_2$O. Thus, the vast majority of the 1-D models return a significance below the threshold of detection (usually set at 4$\sigma$ for these HRS observations), consistent with the tentative detection reported in \cite{schwarz}. We note that the temperature-pressure profiles explored in the set of 1-D models encompass the range realized in our 3-D model (Figure \ref{fig: tpprof}). This implies that the deficiency of the 1-D models is not that they failed to include the \textit{appropriate} physical conditions of the planet, but rather that those conditions are inherently, and observably, three-dimensional. In Figure \ref{fig:1dspectra}, we show a subset of the spectra produced from the 1D models alongside spectra from our best-fitting 3D model. All of the spectra appear to contain the same absorption lines, yet they result in a range of detection strengths; the subtleties in spectral line shapes and relative depths are not adequately captured by the 1D models. \begin{figure} \centering \includegraphics[width=3.6in]{comparewith1d.png} \caption{A comparison of the spectra produced from a 1D atmosphere with our best-fitting 3D model (in black). The solid black spectrum has been broadened by Doppler effects arising from winds and rotation. These sources of broadening are not included in the dotted black spectrum or any of the 1D spectra. All of the models appear to show the same absorption lines, but the relative depths of absorption, influenced by the underlying temperature structure and chemical abundances, change from model to model. These variations in relative depth and line shape result in a range of detection significances when cross-correlated with the data.
} \label{fig:1dspectra} \end{figure} \subsection{Influence of Temperature Structure} As Table \ref{tab:sigvalues} hints at, and as we will discuss in subsequent sections, the improvement in detection from using the 3-D models over the 1-D models is not primarily due to the chemical or velocity structure of the atmosphere, as those influences on the spectrum give only marginal improvements in the significance of detection. Instead, we find that the contribution from multiple regions of the planet, with different thermal structures, is a much better match to the observed data than a representation of the planet with a single thermal profile. Whether signatures of spatial inhomogeneity are intrinsic to \textit{all} HRS emission observations requires further study, but for this particular planet we find that they are. Recent complementary work by \citet{Taylor2020} predicts that James Webb Space Telescope observations may similarly contain inherent signatures of multiple thermal regions, although whether this inhomogeneity will be measurable depends on wavelength coverage and signal-to-noise. \subsection{Influence of Chemical Structure} Table \ref{tab:sigvalues} shows that for models with CO alone, the assumption of abundances that follow local chemical equilibrium is slightly preferred over using the best-fit value from a previous analysis of these data \citep{Brogianalysis}. However, as discussed in Section \ref{sec:rt_spectra}, local chemical equilibrium does not predict strong variations in the abundance of CO throughout the atmosphere, meaning that the improvement in signal does not come from any significant chemical heterogeneity influencing the disk-integrated spectra, but rather from an abundance slightly closer to reality. It may be the case that by capturing the inherent \textit{thermal} inhomogeneity of the atmosphere, we can more accurately find the correct chemical abundances \citep[][]{Taylor2020}. In contrast to our results for CO, Table \ref{tab:sigvalues} shows a strong decrease in the significance of planet detection when using chemical equilibrium values for water. The data prefer depleted abundances for water; Section \ref{sec:rt_spectra} and the discussion surrounding Figure \ref{fig:synccasediffchem} demonstrate that water at equilibrium values would result in large spectral features that are not apparent in the data, according to our analysis. It is noteworthy that the data do not suggest a complete lack of water; the very low water abundance used in calculating the spectra with constant VMR does improve the planet detection over the comparable CO-only model. A full gridded analysis of varying chemical abundances is outside the scope of this work. Even without considering a full grid, these results show that the 3-D chemical structure of the atmosphere contributes to our enhanced detection, compared to 1-D models, insofar as it predicts the abundance of CO in the atmosphere somewhat more robustly. Notably, we find that the data prefer a water abundance that is orders of magnitude depleted below chemical equilibrium values. \subsection{Influence of Atmospheric Doppler Effects} In addition to predicting the 3-D temperature structure of the planet's atmosphere, our GCM also predicts the wind vectors throughout, all of which are influenced by the rotation rate assumed for the planet.
Here we examine how the Doppler shifts and broadening due to winds and rotation in our simulated spectra may contribute to our enhanced detection of the planet's signal over the 1-D models that do not include this additional physics, and whether the data can help to empirically constrain the planet's wind speeds and rotation rate (generally assumed to be synchronous with its orbit). In Table \ref{tab:sigvalues} we report that including the spectral line shifting and broadening from winds and rotation does enhance our detection of the planet, but with only a minor increase in significance over the spectra without Doppler effects. As discussed and shown above in Figure \ref{fig:allcc}, the main influence of the Doppler effects (for most of the models) is to broaden the spectral lines, since both winds and rotation contribute similar symmetric velocity patterns. Thus we expect the main contribution to the increased detection to be that the planet's actual spectrum does contain some significant broadening from winds and rotation. Even with a symmetric velocity field, an uneven brightness pattern across the planet can result in the red- or blue-shifted side of the planet contributing more emission to the disk-integrated spectrum, resulting in a net Doppler shift \citep{jisheng}. Figure \ref{fig:allcc} shows small net Doppler shifts for the models. Depending on the precision of the data, this could result in a small anomalous radial velocity of the planet if not accounted for in the analysis. In order to test whether a net Doppler shift contributes in any significant way to our detection, in Figure \ref{fig:dopoffkpev} we plot the models' significance of detection in velocity space, comparing the spectra with and without the Doppler effects included. While we see an overall increase in detection significance with the Doppler effects included, there is no noticeable shift in velocity space between the models with and without them. This agrees with our discussion above: the main improvement in significance comes from the broadening of the lines, rather than from any net Doppler shift. \begin{figure*} \centering \includegraphics[width=\textwidth]{doponvsoffsigma.pdf} \caption{A comparison between our significance of planet detection with and without Doppler effects included in our best-fit simulated spectra (\textit{left} and \textit{middle} plots), shown as a function of the planet's assumed orbital velocity and its rest frame velocity (which should be zero unless there is anomalous motion). The \textit{right} plot shows the difference in significance caused by including the Doppler effects in our analysis. While there is a slight increase in detection significance, this does not correspond to any net shift in velocity space, indicating that it is largely due to line broadening rather than line shifting. } \label{fig:dopoffkpev} \end{figure*} \subsubsection{Constraints on rotation and winds?} As part of this investigation, we examined what constraint, if any, could be placed on the rotation rate or wind speeds of HD~209458b. In Figure \ref{fig:combinedvmrh20} we show how the significance of detection depends on which rotation rate we use in our 3-D model of the planet (plotted here as the planet's equatorial velocity). The significance of detection is largely insensitive to the planet's rotation rate, aside from the two most slowly rotating models being slightly disfavored (and those are also inconsistent with thermal phase curve data; see Figure \ref{fig:pcurve} and discussion).
Our small improvement in detection from including Doppler effects, combined with the strong similarity in Doppler broadening for all but the slowest models (Figure \ref{fig:allcc}), makes this result unsurprising. \begin{figure} \centering \includegraphics[width=3.6in]{verticalvmrh2o.pdf} \caption{\replaced{Contours of detection significance for}{Confidence intervals from} cross-correlation between the data and our 3D models with constant volume mixing ratios of CO and water, and Doppler effects included. Similar to Figure \ref{fig:vmrsingleh20}, the white star marks literature values and the equatorial velocity for synchronous rotation. Here, the two plots \replaced{compare the significance of detection}{show the 1-, 2-, and 3-$\sigma$ confidence intervals} for models with different rotation rates as a function of orbital velocity (\textit{top}) and rest frame velocity (\textit{bottom}). The data slightly disfavor the two most slowly rotating models (low values of equatorial velocity), but otherwise the temperature structures and wind patterns of all other models are roughly equally consistent with the data. } \label{fig:combinedvmrh20} \end{figure} However, it is a valuable result to find that the amount of Doppler broadening is so similar across models with a wide range of rotation rates (quantified in Figure \ref{fig:FWHM}). It indicates that we cannot constrain rotation rates as well as we might expect from rotational broadening alone; the winds are faster in the more slowly rotating models, and their predominantly eastward direction lets them compensate for the weaker rotational broadening. Although our analysis applies only to observations around one particular orbital phase, the eastward wind pattern extends around the whole globe, and so we expect the same behavior regardless of orbital phase. This is the same general behavior previously reported for high-resolution transmission spectra in \citet{Flowers}; we have now shown that emission spectra are subject to this inherent physical uncertainty as well. \section{Conclusions and Summary} \label{sec:conclusion} In this project, we combined state-of-the-art observational and modeling techniques to obtain \replaced{a result stronger}{a higher significance detection} than could be achieved with either of these techniques alone. We ran a 3D atmospheric model for the hot Jupiter HD 209458b for a range of rotation rates. We post-processed the resulting atmospheric structures in a geometrically correct way to generate template spectra. We then cross-correlated the synthetic spectra with previously published data for this planet from CRIRES/VLT and detected the planet at a greater significance than with a whole suite of 1D models. We explored why the 3D models were a strong improvement over the 1D models by examining properties such as the temperature and chemical structure and the Doppler shifts from winds and rotation. Our main findings are summarized as follows: \begin{itemize} \item High resolution emission spectra are sensitive to the 3D structure of the atmosphere, at least for these data of this particular hot Jupiter. \item One-dimensional models, despite covering the same range in temperature and pressure, returned detections that were at best $\sim 1.8 \sigma$ lower than our best fit from 3D models. \item In terms of detection significance, the primary improvement is from the use of a 3D temperature structure, with secondary improvements related to the chemistry and Doppler effects.
\item Doppler shifts are present in the high resolution spectra, but are unable to offer strong constraints on wind speed or rotation rate. We have shown that the widths of the spectral lines cannot be directly related to the planet's rotation rate alone. \item Our analysis detects water in these high resolution spectra of HD 209458b, but at a significantly depleted value \added{compared to the solar chemical equilibrium abundance}. \end{itemize} High resolution spectroscopy enables detailed characterization of exoplanets. It is becoming increasingly clear that the three-dimensional nature of planets and their atmospheric dynamics influence high resolution spectra. Looking toward the upcoming era of high resolution spectrographs on Extremely Large Telescopes, we eagerly await the detailed atmospheric characterizations that will be possible. \acknowledgments This research was supported in part by NASA Astrophysics Theory Program grant NNX17AG25G and the Heising-Simons Foundation. MB acknowledges support from the UK Science and Technology Facilities Council (STFC) research grant ST/S000631/1. \added{We thank the referee for their constructive feedback, which helped to improve the clarity of this paper.} \bibliographystyle{aasjournal} \section{Introduction} High Resolution Spectroscopy (HRS) is a relatively recent, powerful method for exoplanet atmospheric characterization. It uses a spectral resolution high enough ($R \gtrsim 30,000$) to unambiguously detect the unique sets of spectral lines from atoms or molecules in an exoplanet's spectrum. While the planet's spectrum is often orders of magnitude weaker than the stellar noise, its signal can be detected via cross-correlation with a template spectrum, \added{due to the increased number of lines present at high resolution}. This is accomplished by exploiting the planet's changing orbital radial velocity along the observer's line of sight, which helps to remove the stellar and telluric signals, whose spectral features remain effectively at fixed wavelengths over the duration of a typical observation. \added{By removing the components of the spectrum that are constant with time, one is left with noise and the planet spectrum, which can then be detected via cross-correlation.} \citet{birkbyhires} presents a review of the HRS method and recent results from its use. HRS was first applied to the well-known hot Jupiter HD~209458b using the CRIRES instrument on the VLT \citep{2010Natur.465.1049S}, definitively detecting CO in transmission spectra from the planet. Further analyses of the transmission spectra of this planet at high resolution have resulted in detections of water vapor \citep{sanchezlopez2019} and helium \citep{alonso2019}. Emission spectra of this planet have also been measured with HRS, providing evidence against an atmospheric temperature inversion \citep{schwarz}, as well as determining both carbon monoxide and water abundances when combined with lower resolution data \citep{combininglowandhigh,combinedhudra}. In this paper we present a re-analysis of the previously published CRIRES/VLT data for this planet \citep{schwarz}, but with template spectra generated from a three-dimensional atmospheric model. One of the unique strengths of HRS is that at the highest resolutions ($R \sim 100,000$) the observed spectra can contain information about the \textit{atmospheric} motion of the planet.
The original HRS result by \citet{2010Natur.465.1049S} found hints of day-to-night winds on the planet in a net blue-shift of the planet's spectrum by $2 \pm 1$ km s$^{-1}$ (during transit, day-to-night winds blow toward the observer). Transmission spectra of the hot Jupiter HD~189733b also show evidence for atmospheric motion, including both net Doppler shifts from winds and Doppler broadening from a combination of rotation and eastward equatorial winds \citep{louden2015,Brogianalysis,Flowers}. Measured Doppler broadening in high-resolution emission spectra of directly imaged planets/companions has also been used to constrain the rotation rates of these objects \citep{rotationbpic,Schwarz2016,Bryan2020}. The two sources of atmospheric motion---winds and rotation---are not physically independent. \added{For a recent review of hot Jupiter dynamics, see \citet{showman2020review}.} One of the governing forces in determining atmospheric circulation is the Coriolis force, meaning that the rotation rate of a planet strongly influences the wind structure and speeds. Hot Jupiters are commonly assumed to be tidally locked into rotation rates synchronous with their orbits \citep[e.g.,][]{Rasio1996}, but deviations from this expected rotation state would have consequences for the speed and structure of atmospheric winds \citep{Showman2009}, which in turn influence the expected Doppler shifts and broadening in HRS data \citep{Rauscher_rotationrates}. It is an ongoing debate within the community how tidal forces interact with the complex structure of hot Jupiters and whether we should assume them to be synchronized or not \citep{Gu2009,Arras2010,Auclair2018,Lee2020,Yu2020}. Given the exquisite spectral detail measurable by HRS, including constraints on atmospheric motions, we may wonder how sensitive it is to the full three-dimensional nature of the planet, and what degree of bias a one-dimensional model will introduce. Another way to state this is to ask whether one-dimensional atmospheric models are sufficient to accurately interpret HRS data. Especially for hot Jupiters, where we expect temperature contrasts of hundreds of Kelvin across the globe \added{\citep{GCM, DobbsDixon2013,kataria2016,Parmentier2018,Deitrick2020,Drummond2020}}, differences in the local atmospheric structure can result in limb- or disk-integrated transmission or emission spectra (respectively) that are significantly different from a spectrum calculated using a 1-D model. Several studies have considered how the 3-D nature of a planet can influence lower resolution spectra \citep[e.g.,][]{Fortney2006,Fortney2010,Burrows2010} and, in particular, ways that the use of 1-D models could bias our interpretation of spectral data \citep[e.g.,][]{Feng2016,Blecic2017,Caldas2019,Pluriel2020}. For HRS data, several studies have simulated high-resolution spectra from different 3-D models, both in transmission \citep{Kempton2012,showman2013,Kempton2014,Rauscher_rotationrates} and emission \citep{jisheng,Harada2019}, demonstrating that the complex atmospheric structures of hot Jupiters can influence HRS data. \citet{Flowers} presented a first-of-its-kind analysis of HRS data, using simulated transmission spectra from 3-D models as template spectra in the cross-correlation analysis of observations of the hot Jupiter HD~189733b. Not only was the planet's signal detected at high significance (supporting the validity of the 3-D models), but this work also consistently detected the Doppler signature of day-to-night winds on this planet.
\replaced{This was accomplished by showing that the best signal detection required an anomalous Doppler shift of the planet if the effect of the winds was artificially turned off in the simulated spectra, but had correct planetary motion with the Doppler effects of the winds included.}{When the Doppler effects from the winds were artificially excluded from the calculation of the template spectra, the planet's signal was detected with an anomalous blue-shift; when the effects of the winds were included, the detection was at the expected planet velocity.} \added{That is, ignoring the Doppler effects on simulated transmission spectra resulted in incorrectly inferred planetary motion, confirming their measurable influence in the observed spectra.} Here we present an analogous study to \citet{Flowers}, but for emission spectra (as opposed to transmission), in which we use simulated spectra from 3-D models in the HRS cross-correlation analysis. In addition to studying a complementary observational technique---emission instead of transmission---we also target a different bright hot Jupiter than that analysis, namely HD~209458b. HRS transmission spectra can be directly influenced by atmospheric motion, but are only secondarily affected by the three-dimensional temperature structure \added{\citep{Flowers}}. We expect that HRS emission spectra may be much more sensitive to differences in atmospheric \added{thermal} structure around the planet, given that any Doppler effects from atmospheric motion will be most sensitive to the brightest regions of the planet \citep{jisheng}. In this paper we empirically determine how sensitive HRS emission spectra are to the 3-D nature of a particular planet, as well as to what degree various aspects of the atmospheric structure contribute to the observed data. Specifically, we study the sensitivity of the data to the planet's rotation period by running a suite of 3-D models for a range of rotation rates, producing a set of consistent temperature and wind structures for each case. We also test the sensitivity of the data to atmospheric chemistry by comparing assumptions of either well-mixed abundances or local chemical equilibrium values in the radiative transfer routine we use to post-process the 3-D models and create simulated spectra. We also analyze the relative contributions of the two main opacity sources over the wavelengths of observation (2.285 to 2.348 $\mu$m): carbon monoxide and water. Finally, we test the sensitivity of the data to Doppler effects from atmospheric motions by cross-correlating with simulated spectra calculated with and without those effects. In Section \ref{sec:numerical}, we explain the various numerical methods used in this work: the three-dimensional atmospheric model and the radiative transfer routine used to post-process the 3-D models and calculate simulated emission spectra. Additionally, we briefly describe the results of these standard hot Jupiter models. In Section \ref{sec:data} we describe the observational data, along with details of our reduction and analysis methods. In Section \ref{sec:CC} we present the results of our cross-correlation analysis, comparing the strength of the planetary signal detected when using template spectra from 1-D or 3-D models, and comparing the aforementioned assumptions regarding chemistry, opacity sources, Doppler effects, and rotation rates. In Section \ref{sec:conclusion} we summarize our main results.
\section{Numerical Models: 3D GCMs and Simulated Emission Spectra} \label{sec:numerical} In order to create simulated high-resolution emission spectra for HD~209458b, we first use a General Circulation Model to predict the three-dimensional atmospheric structure of the planet---that is, its thermal and wind structure---and then post-process the results using a detailed radiative transfer routine that accounts for the correct geometry and atmospheric Doppler shifts. These modeling methods and results are not particularly novel, having formed the basis of previous papers \citep{Kempton2012,GCM,Rauscher_rotationrates,newrad,jisheng}; however, our suite of models for this particular planet has not been published previously, and so we briefly describe the results in order to set the stage for the comparison between the simulated emission spectra and observed data. \subsection{General Circulation Model} \label{sec:gcm} General Circulation Models (GCMs) are three-dimensional computational atmospheric models that simulate the underlying physics and circulation patterns of planetary atmospheres. For this work, we utilized the GCM from \cite{GCM} with the radiative transfer scheme upgraded as described in \cite{newrad}. This model solves the primitive equations of meteorology: the standard set of fluid dynamics equations with simplifying assumptions appropriate for the atmospheric context, solved in the rotating frame of the planet \citep[see an early review by][]{SCM2010}. The radiative heating and cooling of the atmosphere uses a double-gray scheme. That is, radiation is treated with two different absorption coefficients under two regimes: an infrared coefficient to model the thermal interaction of the gas with radiation, and an optical coefficient to model the absorption of incoming starlight. For a more detailed explanation of the GCM, see \citet{GCM} and \citet{newrad}. We model the hot Jupiter HD~209458b using the parameters listed in Table \ref{tab:gcm_params}, with system parameters from \citet{hd209params}, a high internal heat flux appropriate for this inflated hot Jupiter \citep{Thorngren2019}, and absorption coefficients and gas properties set to match our previous models of hot Jupiter atmospheres \citep[e.g.,][]{GCM}. Typically, we assume that hot Jupiters have been tidally locked into synchronous orbits, \added{meaning that the rotation period and orbital period are equal}. In order to empirically test this, we ran the GCM for a total of 12 different rotation rates, spanning values faster and slower than synchronous. \added{The slowest rotation rate was chosen to ensure that at least one of the models fell into the disrupted circulation regime for slow rotation previously found in \citet{Rauscher_rotationrates}. We then extended our rotation rate sampling (in steps of 0.25 km/s in equatorial rotation speed) to comparably cover faster rotation rates.} We list the set of chosen rotation periods and their corresponding equatorial rotational velocities in Table \ref{tab:rot_models}, along with some representative wind speeds from each model. We ran each model at a horizontal spectral resolution of T31, corresponding to a physical scale of $\sim$4 degrees at the equator, and with 45 vertical layers evenly spaced in log pressure from 100 bar to 10 microbar.
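For concreteness, the vertical grid and the suite of rotation rates just described can be reconstructed as in the minimal sketch below; the variable names are ours, and the derived periods match Table \ref{tab:rot_models} only up to rounding.
\begin{verbatim}
import numpy as np

# Vertical grid: 45 layers evenly spaced in log pressure,
# from 100 bar down to 10 microbar (values in bar)
pressures = np.logspace(np.log10(100.0), np.log10(1.0e-5), 45)

# Twelve equatorial rotation speeds (km/s), in steps of
# 0.25 km/s around the synchronous value of 2.04 km/s
rot_speeds = 2.04 + 0.25 * np.arange(-5, 7)

# Corresponding rotation periods in days, P = 2 pi R_p / v
R_P = 9.9e7  # planet radius in meters (Table 1)
periods = 2.0 * np.pi * R_P / (rot_speeds * 1.0e3) / 86400.0
# e.g. ~9.1 days at 0.79 km/s and ~3.5 days at 2.04 km/s
\end{verbatim}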
The planets were initialized with a globally averaged temperature-pressure profile and no winds. \added{See \citet{Guillot2010} for a derivation of such profiles and \citet{RauscherandKempton2014} for a discussion of the global averaging parameter chosen (set to $f=0.375$ here).} Each simulation was allowed to run for 3000 orbits; by this point the upper atmosphere (including the infrared photosphere) had reached a steady state. \citet{Carone2019} recently demonstrated that the treatment of the deep atmosphere in hot Jupiter simulations---in particular the depth of the bottom boundary and the assumed strengths of convective adjustment and frictional/magnetic damping---can influence the circulation results predicted for the upper, observable atmosphere. Nevertheless, their models of HD~209458b show that this planet exhibits the standard hot Jupiter circulation pattern, in agreement with our results here. \begin{deluxetable}{lc} \caption{HD 209458b System Parameters} \label{tab:gcm_params} \tablehead{ \colhead{Parameter} & \colhead{Value}} \startdata Planet radius, $R_{p}$ & $9.9 \times 10^{7}$ m \\ Gravitational acceleration, $g$ & 9.434 m s$^{-2}$ \\ \added{Orbital period & 3.525 days} \\ Orbital revolution rate, $\omega_{\mathrm{orb}}$ & $2.06318 \times 10^{-5}$ s$^{-1}$ \\ \added{Synchronous rotation speed}\tablenotemark{a} & \added{2.04 km s$^{-1}$} \\ Substellar irradiation, $F_{\mathrm{irr}}$ & $1.06 \times 10^{6}$ W m$^{-2}$\\ Planet internal heat flux, $F_{\mathrm{int}}$ & 3500 W m$^{-2}$\\ Optical absorption coefficient, $\kappa_{vis}$ & $4 \times 10^{-3}$ cm$^{2}$ g$^{-1}$ \\ Infrared absorption coefficient, $\kappa_{IR}$ & $1 \times 10^{-2}$ cm$^{2}$ g$^{-1}$ \\ Specific gas constant, $R$ & 3523 J kg$^{-1}$ K$^{-1}$\\ Ratio of gas constant to heat capacity, $R/c_{p}$ & 0.286 \\ Stellar radius, $R_{*}$ & $1.19$ $R_{\odot}$ \\ Stellar effective temperature, $T_{*,\mathrm{eff}}$ & $6090$ K \\ \enddata \tablenotetext{a}{In the case of synchronous rotation, this is the corresponding velocity at the equator, calculated as $2 \pi R_{p}/P_{\mathrm{orb}}$ (equivalently $R_{p}\,\omega_{\mathrm{orb}}$).} \end{deluxetable} \begin{deluxetable}{cccc} \caption{Suite of General Circulation Models}\label{tab:rot_models} \tablehead{ \colhead{Rotation} & \colhead{Rotational} & \colhead{Max.\ wind speed} & \colhead{Max.\ wind speed} \\ \colhead{period} & \colhead{speed} & \colhead{at IR photosphere} & \colhead{at 0.1 mbar} \\ \colhead{(days)} & \colhead{(km/s)} & \colhead{(km/s)} & \colhead{(km/s)} } \startdata 9.08 & 0.79 & 2.50 & 6.28 \\ 6.91 & 1.04 & 2.64 & 4.44 \\ 5.57 & 1.29 & 5.65 & 6.87 \\ 4.67 & 1.54 & 5.64 & 6.76 \\ 4.02 & 1.79 & 5.61 & 6.64 \\ \textbf{3.53} & \textbf{2.04} & \textbf{5.64} & \textbf{6.32}\\ 3.14 & 2.29 & 5.43 & 6.19 \\ 2.83 & 2.54 & 5.47 & 6.15 \\ 2.58 & 2.79 & 5.10 & 5.72 \\ 2.37 & 3.04 & 4.78 & 5.56 \\ 2.19 & 3.29 & 3.77 & 5.02 \\ 2.03 & 3.54 & 4.62 & 5.17 \\ \enddata \tablecomments{The bolded values are for the model in a tidally-locked, synchronous rotation state. The rotational speeds are calculated as $2\pi R_p/P_{\mathrm{rot}}$, where $P_{\mathrm{rot}}$ is the rotation period. Continuum emission comes from the IR photosphere (at $\sim$65 mbar), while the absorption line cores come from pressure regions nearer to 0.1 mbar.
Wind speeds are measured in the rotating frame of the planet.} \end{deluxetable} \subsection{GCM Results} \label{sec:gcm_results} Most of our models display the quintessential features expected for hot Jupiters: a strong eastward equatorial jet, which advects the hottest spot on the planet slightly eastward of the substellar point and reduces---but does not eliminate---the large day-to-night temperature contrast of hundreds of Kelvin. We show this temperature structure for the synchronous model in Figure \ref{fig:cylinmap}. The equatorial jet characteristically extends throughout most of the atmosphere; Figure \ref{fig: windpressure} shows the zonally averaged winds for the synchronous model. Higher in the atmosphere an additional, significant component of the winds is a substellar-to-antistellar flow pattern; in Figure \ref{fig: windpressure} this shows up as a decrease in the averaged east-west wind speed. \begin{figure} \centering \includegraphics[width=3.4in]{STREAM24.png} \caption{The temperature structure near the infrared photosphere ($\sim$65 mbar), for our synchronously rotating model of HD~209458b, centered on the substellar point (at 0,0). Streamlines have been overplotted, with thicker lines showing stronger winds. In the eastward direction, the winds reach a speed of 5.6 km/s. The hottest gas has been advected to the east of the substellar point by a strong equatorial jet, in the typical hot Jupiter circulation pattern. } \label{fig:cylinmap} \end{figure} \begin{figure} \centering \includegraphics[width=3.25in]{presssinglecbarnumticks.pdf} \caption{Longitudinally averaged east-west wind speeds throughout the atmosphere, for the synchronous rotation case. The eastward equatorial jet (dark blue) extends deep into the atmosphere. The black contour shows the boundary between eastward (positive) and westward (negative) winds.} \label{fig: windpressure} \end{figure} Figures \ref{fig:alltemps} and \ref{fig:windgrid} in the Appendix show maps of the temperature and winds at the infrared photosphere for all 12 models with different rotation rates. In line with results from previous investigations of non-synchronously rotating hot Jupiters \citep{Showman2009,Rauscher_rotationrates,Flowers}, we find that as the rotation rate increases, the stronger Coriolis force causes the equatorial jet to narrow, until eventually secondary, higher-latitude jets form. The wind speeds tend to decrease with increasing rotation rate (see Table \ref{tab:rot_models}), conspiring to create generally similar temperature patterns at the infrared photospheres of the models (Figure \ref{fig:alltemps}). The exceptions to these trends are the two most slowly rotating models, whose circulations have been disrupted from the standard hot Jupiter pattern. This disruption for very slow rotators was first identified by \citet{Rauscher_rotationrates}, and the dynamics have been studied by \citet{Penn2017}. For the purpose of this paper, these most slowly rotating models help to provide a lower limit on the possible rotation rate of HD~209458b, as the westward flow and corresponding advection of the hottest region of the atmosphere would result in an orbital phase curve of thermal emission significantly different from what has been previously observed for this planet.
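To make explicit how a phase curve follows from a photospheric temperature map, the toy sketch below integrates blackbody emission over the visible hemisphere at each orbital phase; the grid layout and phase convention are illustrative assumptions rather than the model's actual post-processing.
\begin{verbatim}
import numpy as np

SIGMA_SB = 5.670374e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def thermal_phase_curve(temp_map, lats, lons, phases):
    """Toy bolometric phase curve from a temperature map.

    temp_map   : (n_lat, n_lon) temperatures in K
    lats, lons : grid coordinates in radians, with longitude
                 measured from the substellar point
    Returns the flux at each phase, normalized to its maximum.
    """
    lat2d, lon2d = np.meshgrid(lats, lons, indexing="ij")
    cell_area = np.cos(lat2d)  # relative area of each grid cell
    flux = []
    for phase in phases:
        # Sub-observer longitude (phase 0.5 = secondary eclipse,
        # so the dayside faces the observer there)
        obs_lon = np.pi - 2.0 * np.pi * phase
        # Cosine of the emission angle toward the observer;
        # only the visible hemisphere (mu > 0) contributes
        mu = np.clip(np.cos(lat2d) * np.cos(lon2d - obs_lon),
                     0.0, None)
        flux.append(np.sum(SIGMA_SB * temp_map**4 * mu * cell_area))
    flux = np.array(flux)
    return flux / flux.max()
\end{verbatim}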
In Figure \ref{fig:pcurve} we show phase curves of the total thermal emission\footnote{\added{Due to the double-gray radiative transfer in our GCM, the thermal emission is effectively bolometric, making it challenging to compare directly to the 4.5 micron flux from \citet{hd209phase}. The phase of peak flux, however, is more directly comparable, as it is indicative of the photospheric temperature structure.}} from each model, calculated throughout one orbit. While most of the models show similar curves, which peak near the measured phase of maximum flux at 4.5 micron \citep[$0.387\pm 0.017$;][]{hd209phase}, the two most slowly rotating models are ruled out by these data, as they peak later in phase. Nevertheless, we include these models in the rest of our analysis in order to investigate how they are constrained by HRS data. \begin{figure} \centering \includegraphics[width=3.4in]{allphasecurve.pdf} \caption{Calculated orbital phase curves of total thermal emission from our suite of models with different rotation rates. Only the models with the slowest two rotation rates---with circulation patterns disrupted from the standard hot Jupiter eastward flow---have phase curves that peak after secondary eclipse (which would occur at a phase of 0.5, not shown here). Since phase curves of HD~209458b measured at 4.5 microns show a peak before the secondary eclipse \citep[at $0.387\pm 0.017$;][shown by the black dashed line and grey shaded area]{hd209phase}, we find that all models except the two slowest rotators are consistent with observations.} \label{fig:pcurve} \end{figure} Finally, since the CRIRES/VLT emission spectra of HD~209458b are the focus of our paper, we also show the temperature structure and line-of-sight velocities (from both winds and rotation) in the upper atmosphere of the synchronous model in Figure \ref{fig:syncortho}, in an orientation corresponding to the first night of observation. This is the region of the atmosphere from which the flux in the CO line cores emerges, meaning that the detailed structure of those line shapes comes from the brightness-weighted local Doppler shifts, integrated across the visible hemisphere. Since the winds are dominantly eastward, they contribute to the Doppler shifts in the same direction as the rotation field. However, the line-of-sight velocity contours are slightly bent away from strict alignment with the rotation axis by the specific atmospheric flow pattern. \begin{figure} \centering \includegraphics[width=3.5in]{syncortho218.png} \caption{The temperature structure of the upper atmosphere ($\sim 0.1$ mbar), within the region from which the flux in the CO line cores emerges. The projection is centered on the subobserver point at a phase corresponding to the beginning of the observation, shortly after secondary eclipse. The substellar point is marked with a white star. Also shown are contours of the line-of-sight velocity toward (blue) or away (red) from the observer, due to contributions from both the winds and rotation of the planet (Equation \ref{vloseq}). The contour levels shown in red and blue are $\pm$2, 4, and 6 km/s. The black dotted contour shows the boundary of 0 km/s. \added{Note that whereas at the infrared photosphere the hottest region is east of the substellar point, here it is west of the substellar point due to convergence in the atmospheric flow at that location.
} } \label{fig:syncortho} \end{figure} The full set of orthographic projections for our suite of 12 models is shown in Figure \ref{fig:orthogrid} in the Appendix. Aside from the two most slowly rotating models, we see similar temperature and line-of-sight velocity fields across the rest of the models. While higher blue-shifted line-of-sight velocities occur on the visible hemisphere, the red-shifted flow extends across a larger fraction of the planet disk. In contrast, the two slowest rotators have weak contributions to the velocity field from their rotation, and the winds generally work in a direction opposite to the rotation, leading to very little Doppler shifting compared to the other models. In addition, their temperature structure is fairly uniform across the visible hemisphere. A hot feature exists on the western side of the planet (from our perspective, to the left of the subobserver point), where we also see strongly blue-shifted velocities from the combination of rotation and winds blowing around from the night side. \added{Chevron features like this, regions of flow convergence and associated heating, are commonly seen in hot Jupiter GCMs \citep[e.g.,][]{Showman2009,Rauscher2010,Komacek2019} and are related to the transport of momentum from higher latitudes to the equator \citep{ShowmanPolvani2011}. Depending on the particular model---and the pressure level within the atmosphere---chevron features may appear to the east or west of the substellar point. Here we see multiple chevron features, both near the infrared photosphere and in the upper atmosphere. New state-of-the-art GCMs in \citet{Deitrick2020} also show these features, at multiple resolutions and robust against assumptions regarding vertical hydrostatic equilibrium (see their Figures 19 and 22).} While we have already determined that the phase curve data for HD~209458b exclude the two slowest rotation states for this planet (Figure \ref{fig:pcurve}), we are still interested in comparing the simulated high-resolution emission spectra from these models to the rest of the suite. For most of the models, based on Figures \ref{fig:syncortho} and \ref{fig:orthogrid} we expect that the integrated emission spectra should show both red- and blue-shifting of the CO lines, but the detailed line shapes will be controlled by the complex three-dimensionality of the atmospheric temperature and line-of-sight velocity structures. Due to the slowing of the winds with increasing rotation rate (see Table \ref{tab:rot_models} and Figure \ref{fig:orthogrid}), we may expect similar Doppler-induced line profiles for these models. In contrast, for the two most slowly rotating models there may be very minimal Doppler effects shaping the lines in their simulated emission spectra. \subsection{Radiative Transfer Post-Processing} \label{sec:rt} In order to generate high-resolution emission spectra from our three-dimensional models, we apply the code and method outlined in \citet{jisheng}. Briefly, we take the output from the GCM (temperature and winds at 48 $\times$ 96 $\times$ 60 points in latitude $\times$ longitude $\times$ pressure; see Figure \ref{fig: tpprof} for the synchronous case and Figure~\ref{fig:alltp} for all of the GCM outputs) and solve the radiative transfer equation in a geometrically consistent manner to produce the thermal emission spectrum emanating from the visible hemisphere of the planet.
The radiative transfer equation is solved in the limit of pure thermal emission: \begin{equation} I(\lambda)=B_{0}e^{-\tau_{0}}+ \int_{0}^{\tau_{0}}e^{-\tau}B\, d\tau, \end{equation} where $I$ is the intensity at each wavelength $\lambda$, $B$ is the Planck function (calculated from the local temperatures, with $B_{0}$ its value at the bottom boundary), and $\tau$ is the slant optical depth along the line of sight toward the observer (with total value $\tau_{0}$), taking into account varying opacities throughout the path. We strike 2,304 ($= 48 \times 96 / 2$) individual line-of-sight intensity rays through the atmosphere and then integrate with respect to the solid angle subtended by each grid cell to produce the planet's emission spectrum in flux units. To correctly account for the line-of-sight geometry we must first interpolate the temperature and wind output from the GCM onto a fixed-\textit{altitude} vertical grid. This interpolation allows us to readily strike straight-through rays along the observer's sight line. This geometrically consistent approach to the radiative transfer is still uncommon in calculations of emission spectra from GCMs. A more common and computationally less challenging technique is to calculate the radiative transfer along radial profiles and assume isotropic emission from the top of the atmosphere. \citet{Caldas2019} have recently shown that using correct ray-tracing geometry is important in calculating transmission spectra from 3-D models; we are not aware of a similar study of geometry's importance in calculating emission spectra.\\ \begin{figure} \centering \includegraphics[width=\columnwidth]{1doverlay.png} \caption{Temperature-pressure profiles throughout the atmosphere for our synchronously rotating model of HD~209458b. The rainbow lines show equatorial profiles, with the hue corresponding to the longitude east of the substellar point. The gray profiles are from the entire planet. We use this 3-D atmospheric structure, together with the local wind velocities, to calculate simulated high-resolution emission spectra for cross-correlation with the observed data. The black lines show examples of four temperature-pressure profiles from a suite of 1D models \added{(described in Section \ref{1Dmodels})} also used to simulate spectra. These models cover the same temperature range realized by our 3-D models, but use only a single profile to represent the entire planet. Note that the best-fit model from this suite has an unrealistic super-adiabatic profile.} \label{fig: tpprof} \end{figure} As a consequence of having varying temperature conditions over the visible hemisphere of the planet, we may expect that our integrated spectra are influenced by spatial variations in the chemical abundances of our main opacity sources. Based on the temperature range spanned by the GCM outputs and the wavelength range modeled (2.28--2.35 $\mu$m), we expect that H$_2$O and CO will be the dominant opacity sources. One of the simplest assumptions we can make about the abundances of H$_2$O and CO is that they are in chemical equilibrium for the local conditions at each location in the atmosphere. However, this neglects the important influence of mixing from atmospheric dynamics, which is likely to bring these species out of chemical equilibrium. The physically and chemically sophisticated work by \citet{Drummond2020} demonstrated that 3-D mixing is expected to alter the chemical structure of hot Jupiter atmospheres, with the vertical and horizontal advection components both being significant \citep[with similar results also found by][]{Mendoca2018}.
In their model of HD~209458b, however, they found minimal differences in the abundances of CO and H$_2$O between their kinetics model and the assumption of chemical equilibrium. While they predicted minimal differences between these cases in their simulated (lower resolution) emission spectra, here we further investigate the influence of chemical abundances in high resolution emission spectra.\\ The double-gray radiative transfer scheme within our GCM simplifies the multi-wavelength opacities of the atmosphere, meaning that we do not prescribe a specific chemistry, nor does the simulation predict chemical mixing. In order to investigate the impact of chemistry on the emission spectra, we consider two extreme cases within our post-processing framework: abundances determined everywhere by local chemical equilibrium, or abundances that are fully homogenized throughout the atmosphere and set to some constant volume mixing ratio (VMR). The first assumption applies to the limit where dynamics do not create any significant chemical disequilibrium, while the second may be a proxy for fully efficient mixing, with the caveat that we still need to choose a value for the homogenized abundances. We choose to fix the values for water and CO to the best-fit values from a previous retrieval analysis of these same data \citep[VMR values of $1\times 10^{-3.5}$ for CO and $1\times 10^{-5}$ for water,][]{combininglowandhigh}. There is significant evidence in the literature suggesting a water abundance below the solar equilibrium value \added{\citep[which would be $\sim 5 \times 10^{-4}$;][]{h2oabundance_madhusudhan}} for HD~209458b \citep[although see][]{Line2016}. From a previous analysis of these HRS data, marginal evidence for H$_2$O was claimed by \citet{Brogi_2019}, with a peak around VMR $\sim 1 \times 10^{-5.5}$ but with an unbounded lower limit. From HST transmission spectroscopy, \citet{Barstow2017} and \citet{Pinhas2019} retrieve low water abundances of $1\times10^{-5}$ and $1\times10^{-4.7}$, respectively. These results are particularly significant as they are obtained with models accounting for the presence of aerosols, and therefore for their known ability to mimic a low water abundance by reducing the contrast of the water band in the WFC3 pass-band. Lastly, a recent attempt at combining both low-resolution and high-resolution emission spectroscopy was presented by \citet{combinedhudra}, resulting in a VMR of $1\times10^{-4.1}$. The observational constraints presented above and the weak detection of water in these data inspired us to explore an additional set of models without water vapor, along with our constant-VMR models in which water is under-abundant compared to equilibrium calculations. \\ In order to self-consistently account for Doppler shifts resulting from winds and rotation in the high-resolution spectra---given that the resolution element corresponds to velocities comparable to the speeds of atmospheric motion ($\sim$km/s)---we calculate the line-of-sight velocity for a latitude-longitude ($\theta$, $\phi$) pair at an atmospheric height of $z$ as: \begin{multline} v_{LOS}(\theta,\phi) = -u \sin (\theta) - v\cos(\theta) \sin(\phi) \\ +w \cos(\theta) \cos(\phi) -(R_{p}+z)\Omega\sin (\theta) \cos (\phi) \label{vloseq} \end{multline} where $u,v,w$ are the wind speeds in the east-west, north-south, and radial directions, respectively, and $\Omega$ is the planet's bulk rotation rate.
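As a minimal sketch of how Equation \ref{vloseq} is evaluated on the model grid, the function below applies that expression term by term; the array conventions and function name are our own assumptions, not the actual post-processing code of \citet{jisheng}.
\begin{verbatim}
import numpy as np

def v_los(u, v, w, theta, phi, z, r_p, omega):
    """Line-of-sight velocity from winds plus solid-body
    rotation, following Equation (vloseq) term by term.

    u, v, w    : east-west, north-south, and radial wind
                 speeds (m/s)
    theta, phi : angular coordinates of each grid point
                 (radians), in the convention of the text
    z          : height above the reference radius r_p (m)
    omega      : bulk rotation rate (rad/s)
    All arguments may be NumPy arrays of matching shape.
    """
    return (-u * np.sin(theta)
            - v * np.cos(theta) * np.sin(phi)
            + w * np.cos(theta) * np.cos(phi)
            - (r_p + z) * omega * np.sin(theta) * np.cos(phi))
\end{verbatim}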
We calculate simulated spectra both with and without these Doppler shifts, so that we can quantitatively evaluate how much they contribute to the observed data. We calculate the simulated emission spectra at a higher spectral resolution ($R\sim 250,000$) than that of the CRIRES data, across the same wavelength range (2.2855--2.3475 $\mu$m). During the data analysis, the simulated spectra are convolved with a Gaussian kernel to match the resolving power of CRIRES ($R=100,000$). As the planet rotates throughout the time of observation, we calculate spectra for each exposure. The atmospheric structure from the GCM is output every 4 degrees in phase. In order to match the more frequently sampled observed phases, we created interpolated spectra as follows. For each exposure (corresponding to some orbital phase) we take the GCM outputs from the two nearest-neighbor phases, rotate each atmosphere to the correct orientation, calculate simulated spectra from each, and then combine those two spectra, weighting linearly by how close each GCM output is to the phase of observation. Even before this weighted average, the spectra produced from two adjacent GCM outputs differed only marginally: for the fastest rotating model, the average difference was less than $5\%$, and for the slowest rotating model it was only $0.6\%$ on average. Thus, in our interpolation process, the resultant changes to the spectra were on the order of a few percent at most. \deleted{In addition to producing post-processed spectra from the 3-D GCM outputs, it is also instructive to compare our results against spectra produced from 1-D models of HD 209458b. We perform comparisons against a suite of 1-D models (described in Section \ref{1Dmodels}) and choose four representative T-P profiles to show in Figure~\ref{fig: tpprof}. These four chosen representatives consist of the best fit 1-D model to the observations, two profiles that bound the temperatures produced in our GCM, and a model that approximately reproduces the average equatorial T-P profile produced by our GCM.} \subsection{Simulated Emission Spectra} \label{sec:rt_spectra} Simulated spectra from the full set of 12 rotation models, over a partial range of the total wavelength coverage and for a time near the beginning of the observation (phase of $\sim$0.52), are shown in Figure \ref{fig: spec12rr}. For each model, versions of the spectrum with and without Doppler effects are plotted in solid and dashed lines, respectively. As expected from the discussion of their circulation patterns above, the models with the slowest rotation rates have very little Doppler broadening. In contrast, all of the other models show significant broadening, without a strong dependence on the planet's rotation rate, because in the more slowly rotating models the faster winds contribute additional broadening. The main notable difference between these spectra is in the relative depths of the spectral lines, which are a function of the vertical structure of these atmospheres, both thermal and as probed by the line opacities. \begin{figure*} \centering \includegraphics[width=\textwidth]{broadenedandnot.pdf} \caption{Simulated spectra from post-processing the atmospheric structures predicted by our GCM, color coded by the rotation rate assumed for each model (with the synchronous model in black). In these spectra we only include opacity from CO (not water; see Figure \ref{fig:synccasediffchem} for comparison) and assume local chemical equilibrium abundances.
The dashed lines show spectra produced without the influence of Doppler effects, while the solid lines account for shifts and broadening due to winds and rotation. The main result of the atmospheric motion is to produce significant line broadening; for most of the models the amount of broadening is similar, due to a trade-off between the contributions from winds and rotation. The two most slowly rotating models have very little broadening, due to the weak contribution from rotation, but also because of the westward winds in these models.} \label{fig: spec12rr} \end{figure*} We can investigate the relative contributions of the thermal profile and changing opacities to the depth of the spectral lines by comparing the different chemical assumptions we use in the post-processing. Figure \ref{fig:synccasediffchem} shows the differences in spectra calculated under our assumptions of chemical equilibrium abundances or constant volume mixing ratios, both with and without water included as an opacity source, for our synchronous model. The spectral features from CO appear fairly consistent for all of our assumed chemistry conditions. Over the range of pressures and temperatures that contribute to our planet's dayside emitted spectra ($P \sim 0.1-100$ mbar, $T \sim 900-1700$ K, see Figures \ref{fig:cylinmap} and \ref{fig:syncortho}), local chemical equilibrium abundances for CO at solar composition are fairly constant, at a VMR of $\sim 4\times 10^{-4}$, only slightly higher than the value we use for our constant VMR assumption. \begin{figure} \centering \includegraphics[width=3.6in]{synccasediffchem.pdf} \caption{Simulated emission spectra, post-processed from our 3D atmospheric model, comparing the different assumptions used for the abundances of water and CO, the main sources of opacity at these wavelengths. These spectra are from the synchronously rotating model, over a fraction of the wavelength coverage of the observations; the solid and dashed spectra are produced with and without the Doppler effects of winds and rotation, respectively. The spectra produced assuming abundances determined by local chemical equilibrium and those fixed to a constant value look very similar for the CO features. The assumption of local chemical equilibrium results in much more abundant water, with much stronger spectral features in comparison to the constant value that best matches previous observations.} \label{fig:synccasediffchem} \end{figure} In contrast, the assumption of local chemical equilibrium produces significantly different water abundances than the constant VMR value we use \citep[the best-fit value from a previous 1-D analysis of these data;][]{combininglowandhigh}. For the temperature and pressure conditions probed by these emission spectra, local chemical equilibrium abundances for water have VMR $\sim10^{-3}-10^{-4}$, with the hottest and lowest pressure regions dipping down to VMR $\sim 10^{-7}$. These abundances are, over most of the atmosphere, significantly higher than our constant VMR value ($10^{-5}$), leading to the visually much stronger spectral features in Figure \ref{fig:synccasediffchem}. These differences will strongly influence the significance of detection in our data analysis, as discussed in Section \ref{sec:CC}.
The overall effect of Doppler shifting across the entire spectrum can be assessed by cross correlating each simulated emission spectrum with the non-Doppler shifted spectrum calculated from the same model, as shown in Figure \ref{fig:allcc}, where we have plotted these cross correlation functions for each of our 12 rotation models. The dashed black line shows the spectrum from the synchronous model without Doppler effects cross correlated with itself, to characterize the intrinsic width of the cross-correlation function. The two slowest rotators have the least amount of broadening and the second slowest rotator actually has a cross-correlation function similar to the unshifted reference. All of the other rotation rates produce roughly similar levels of broadening, with only minimal net red- or blue-shifts (and no trend in the shift with rotation rate), in agreement with our previous findings in \citet{jisheng}. \begin{figure} \centering \includegraphics[width=3.5in]{AllCCfboxshad.pdf} \caption{For each of our 12 models with different rotation rates, we cross-correlate two simulated spectra from the same model: one with Doppler effects included and one without. (The solid black line is the synchronous model.) The resulting cross correlation functions, plotted here, allow us to assess the contribution of the planet's winds and rotation to the overall Doppler shifting and broadening of the lines in the emission spectra. The gray dashed line shows the synchronous model's non-Doppler shifted spectrum, cross correlated with itself, to show the intrinsic broadening in the spectra. The dotted vertical lines mark the velocity at the peak of the cross correlation function for each model. All but the two most slowly rotating models show significant---and similar---broadening, while none of the models exhibit large net red- or blue-shifts. CRIRES allows us to fully resolve the shapes of these line profiles since its instrumental profile ($\sim 3$ km/s) is smaller than the FWHM of these lines. } \label{fig:allcc} \end{figure} The similarity in Doppler broadening between all but the two most slowly rotating models is to be expected, from the discussions of circulation patterns above and from visual inspection of their spectra in Figure \ref{fig: spec12rr}. As a more quantitative comparison, in Figure \ref{fig:FWHM} we show the width of the cross correlation function, calculated at $80 \%$ of its maximum (shown in Figure \ref{fig:allcc}) as a function of the rotation period of the simulated planet, normalized to the synchronous model. This width serves as a proxy for the degree of broadening caused by the differing sources of Doppler effects. The filled and unfilled circles correspond to spectra that have been broadened by both winds and rotation and only rotation, respectively. The scatter in the unfilled circles is a result of differences in temperature structure in the corresponding GCM. Aside from the two slowest rotating models---which exhibit westward flow, opposite to the direction of rotation---allowing the spectra to also be broadened by the winds causes the width to increase. We show the result of a single temperature structure artificially broadened at the various rotation rates with the black dashed line. The unfilled circles lie both above and below this line, meaning that the amount of broadening in the lines themselves does not allow us to constrain the rotation rate strongly. Because the total broadening of the line is sensitive to temperature and wind structures in addition to rotation rate, we are unable to retrieve a rotation rate from the broadening width of the spectra alone.
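For readers wishing to reproduce this style of measurement, the width at a fraction of the cross correlation maximum can be computed as in the Python sketch below; the linear interpolation at the crossings and the toy Gaussian profile are our own illustrative choices, not an extract from our analysis code.
\begin{verbatim}
import numpy as np

def width_at_fraction(v, ccf, frac=0.8):
    """Full width of a single-peaked cross correlation function,
    measured at frac * max(ccf), with linear interpolation at the two
    crossings. Assumes the peak sits well inside the velocity grid."""
    level = frac * ccf.max()
    above = np.nonzero(ccf >= level)[0]
    i, j = above[0], above[-1]
    # interpolate the left and right crossings of the threshold
    v_left = np.interp(level, [ccf[i - 1], ccf[i]], [v[i - 1], v[i]])
    v_right = np.interp(level, [ccf[j + 1], ccf[j]], [v[j + 1], v[j]])
    return v_right - v_left

# Toy usage: a Gaussian CCF with sigma = 3 km/s; the expected width at
# 80% of maximum is 2 * 3 * sqrt(2 ln(1/0.8)) ~ 4.0 km/s
v = np.linspace(-30.0, 30.0, 601)
ccf = np.exp(-0.5 * (v / 3.0) ** 2)
print(width_at_fraction(v, ccf))
\end{verbatim}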
\begin{figure} \centering \includegraphics[width=3.5in]{FWHMrot80.png} \caption{ Width of the cross correlation function, calculated at a height of $80 \%$ of its maximum (shown in Figure \ref{fig:allcc}) as a function of the rotation period of the simulated planet, normalized to the synchronous model. The filled circles correspond to spectra that have been broadened from both wind and rotation and the open circles represent spectra that have been broadened only by rotation. To produce the black dotted line, we took the temperature structure of the synchronous model and calculated the resulting broadening for each rotation rate. For the two slowest rotating models, we find that the westward rotating winds cause the fully broadened spectra to have a smaller width than the spectra only broadened by rotation. For all of the other models, we see the addition of winds causes the resulting correlation width to increase. Because the total broadening of the line is sensitive to temperature and wind structures in addition to rotation rate, we are unable to retrieve a rotation rate from the broadening width of the spectra alone. } \label{fig:FWHM} \end{figure} \subsection{1D Atmospheric Models} \added{In addition to producing post-processed spectra from the 3-D GCM outputs, it is also instructive to compare our results against spectra produced from 1-D models of HD 209458b. We perform comparisons against a suite of previously published 1-D models (described in Section \ref{1Dmodels}) and choose four representative T-P profiles to show in Figure~\ref{fig: tpprof}. These four chosen representatives consist of the best fit 1-D model to the observations, two profiles that bound the temperatures produced in our GCM, and a model that approximately reproduces the average equatorial T-P profile produced by our GCM.} \section{Observational Data of HD~209458b} \label{sec:data} The data we re-analyze in this paper were originally published in \citet{schwarz}, where the full details of the observations can be found. In brief, the star HD~209458 (K=6.31 mag) was observed for a total of 17.5 hours with the CRIRES instrument on the VLT as part of the ESO program 186.C-0289 in August and September 2011. The system was observed on three separate nights, always shortly after secondary eclipse. Here we utilize only the first two nights of data, which were observed in nodding mode. We discard the third night because it was observed in staring mode for testing purposes and shows a higher noise level. As explained in \citet{schwarz}, the spectra were optimally extracted via the standard ESO pipeline and then re-calibrated in wavelength using the known position of telluric lines as a reference. Due to previously reported issues with the fourth detector of CRIRES, we chose to include only the first three detectors in our analysis. Extracting the planetary signal from the calibrated spectra poses a unique challenge due to the highly unequal flux ratio of the hot Jupiter and the star. Furthermore, for ground based observations, spectral absorption lines formed in the Earth's atmosphere (telluric features) must be accounted for and are often so strong that parts of the data must be masked completely as they exhibit near-zero flux.
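As a hedged illustration of the masking step just mentioned, one simple approach is to flag channels whose median flux over time falls near zero; the threshold in the Python sketch below is an arbitrary assumption, not necessarily the criterion used in the published pipeline.
\begin{verbatim}
import numpy as np

def mask_deep_tellurics(data, threshold=0.05):
    """Flag spectral channels dominated by saturated telluric lines.

    data      : 2-D array of shape (n_exposures, n_channels),
                continuum-normalized so clean channels sit near 1.
    threshold : channels whose median flux over time falls below this
                fraction of the continuum are masked (illustrative value).

    Returns a boolean array, True where the channel is kept."""
    median_flux = np.median(data, axis=0)   # robust against outliers in time
    return median_flux > threshold

# Toy usage: 40 exposures, 1024 channels, with a block of near-zero-flux
# "telluric" channels injected by hand.
rng = np.random.default_rng(0)
spectra = 1.0 + 0.01 * rng.standard_normal((40, 1024))
spectra[:, 400:420] *= 0.01                 # saturated telluric region
keep = mask_deep_tellurics(spectra)
print(f"masked {np.count_nonzero(~keep)} of {keep.size} channels")
\end{verbatim}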
In order to decouple the planet's spectrum from the stellar and telluric lines, we utilize standard analysis algorithms for HRS \added{(see \citealt[Section 3.2]{Brogi_2019} for a detailed description)}. These are based on the principle that over the relatively short period of observations, the planetary lines are Doppler shifted by a varying amount due to the changing orbital motion of the exoplanet, while telluric and stellar lines are essentially stationary\footnote{Stellar lines do shift by $\sim100$ m s$^{-1}$ per hour of observations due to the barycentric velocity of the observer and the stellar motion around the center of mass of the system, but these are negligible compared to the change in the planet's radial velocity.}. Thus, by removing the parts of our signal that do not shift with time, we are left with the planetary spectrum. We apply the latest iteration of the HRS analysis described in \cite{Brogi_2019}, to which we point the reader for a step-by-step description. In short, the algorithm determines a model for the time-dependent stellar and telluric spectrum empirically from the observations, and normalizes the data by dividing out this model. The resulting data product only contains the planet spectrum, deeply embedded in the stellar photon noise at this stage. Similarly to previous studies of atmospheric circulation from transmission spectra \citep{Brogianalysis, Flowers}, we run two parallel versions of the analysis: one with the data as is (hereafter the {\sl real} data), and one containing each model spectrum injected at a small level (hereafter the {\sl injected} data), chosen to be 0.1$\times$ the nominal value. Here the nominal value is the planet's emission spectrum in units of stellar flux, i.e. scaled by a blackbody at the stellar effective temperature and multiplied by the planet-to-star surface ratio (see system parameters in Table~\ref{tab:gcm_params}). The exact value of the scaling factor is not important for the outcome of the analysis, as long as it is significantly smaller than the nominal value. A small scaling factor is needed to realistically simulate the effects of the analysis on each model spectrum without appreciably changing the signal content of the data. In order to detect the planet's emission spectrum, buried in the stellar noise at this stage, we use the standard technique in high-resolution spectroscopy, where we cross-correlate a template spectrum---or set of templates---for the planet with the data. If the template is a good representation of the planet's spectrum, there will be a maximum cross-correlation value at velocities corresponding to the planet's orbital radial velocity during the time of observation. The significance of each tested model is determined as in previous work: we compute the difference between the CCF of the injected data and the CCF of the real data. This will remove the cross correlation noise and the correlation with the real planet signal, and provide us with the {\sl model} cross correlation. Note that this is different from the CCF obtained by autocorrelating the spectra, because it contains any alterations that our data analysis necessarily introduces in the planet signal while removing telluric and stellar spectra. We then compare the model CCF and the real CCF via chi-square, and we assign a significance by discriminating against a non-detection, which in our case is a flat cross correlation function (i.e. a straight line).
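To make the core cross-correlation step concrete, the following minimal Python sketch shifts a template over a grid of trial velocities and correlates it with each residual spectrum. The interpolation scheme and the toy single-line template are illustrative assumptions, not a description of the exact pipeline.
\begin{verbatim}
import numpy as np

C_KMS = 299_792.458  # speed of light in km/s

def ccf(wl, residuals, wl_template, template, v_grid):
    """Cross-correlate a model template with residual spectra.

    wl          : wavelength grid of the data (1-D)
    residuals   : (n_exposures, n_channels) telluric/stellar-removed data
    wl_template : wavelength grid of the template
    template    : mean-subtracted template flux
    v_grid      : trial radial velocities in km/s

    Returns an (n_exposures, n_velocities) array of CCF values."""
    out = np.zeros((residuals.shape[0], v_grid.size))
    for j, v in enumerate(v_grid):
        # Doppler-shift the template (non-relativistic) and resample
        shifted = np.interp(wl, wl_template * (1.0 + v / C_KMS), template)
        out[:, j] = residuals @ shifted
    return out

# Toy usage: a template with a single line, and "data" containing the
# same line red-shifted by +15 km/s; the CCF should peak there.
wl = np.linspace(2.30, 2.31, 4000)
template = -np.exp(-0.5 * ((wl - 2.305) / 2e-5) ** 2)
data = np.interp(wl, wl * (1.0 + 15.0 / C_KMS), template)[None, :]
v_grid = np.arange(-50.0, 50.5, 1.0)
cc = ccf(wl, data, wl, template, v_grid)
print("peak at", v_grid[np.argmax(cc[0])], "km/s")
\end{verbatim}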
Finally, $n$-$\sigma$ confidence intervals are determined by the region in the parameter space where the detection significance drops by $n\,\sigma$. For the full explanation of how the chi-square statistic is utilized, we refer the reader to \citet{Brogianalysis} and \citet{Flowers}. \section{Data Analysis Results} \label{sec:CC} We apply the cross correlation and significance test explained in Section~\ref{sec:data} to the spectra produced from our three-dimensional model, as well as to a suite of one-dimensional models for comparison. These one-dimensional models are taken from previous work and further information about them is provided in Section \ref{1Dmodels}. We find a significant detection of the planet's signal over the range of template spectra tested, but our strongest detection came from the spectra produced by post-processing our three-dimensional model, as reported in Table \ref{tab:sigvalues}. In particular, we found the highest significance of detection (at $6.8\sigma$) for the model that was post-processed assuming uniform volume mixing ratios for CO and water, and that included the Doppler effects from winds and rotation. Figure \ref{fig:vmrsingleh20} shows the significance of cross-correlation detection for this model, over the range of orbital and rest frame velocities included in the analysis. Note that these observations have a relatively small phase range and they are taken close to superior conjunction, where the planet's radial velocity curve can be approximated with a linear function of time at small signal to noise. This means that higher orbital velocities can be somewhat compensated for by allowing the planet to have a positive rest frame velocity (i.e., anomalous motion away from the observer), resulting in some inherent degeneracy between those parameters. Our detection agrees with a zero rest frame velocity for the planet and the orbital velocity reported in \citet{hd209params}. \begin{deluxetable*}{ccccc} \caption{Highest significance detections for the model spectra tested in this work. The largest gain in detection significance came from using a 3D atmospheric model, compared to the 697 1D models tested. Note that the best fitting 1D model exhibits a non-physical, super-adiabatic lapse rate. \added{For detections broken down by rotation rate, see Table \ref{tab:results} in the appendix.} } \label{tab:sigvalues} \tablehead{ \colhead{Dimensions} & \colhead{Abundances} & \colhead{Molecules Included} & \colhead{Doppler Effects On} & \colhead{Doppler Effects Off}} \startdata 3D & Chemical equilibrium & CO & 6.49 & 6.40 \\ & Chemical equilibrium & CO and H$_2$O & 4.22 & 3.39 \\ & Constant volume mixing ratio & CO & 6.02 & 5.72\\ & Constant volume mixing ratio & CO and H$_2$O & 6.87 & 6.37 \\ \hline 1D & Constant volume mixing ratio & CO and H$_2$O & - & 5.06 \\ \enddata \end{deluxetable*} \begin{figure} \centering \includegraphics[width=3.6in]{watercovmrmatteo.pdf} \caption{The significance of our detection of the planetary signal, showing 1-, 2-, and 3-$\sigma$ confidence intervals from the peak detection (at 6.78 $\sigma$, for our spectra calculated using water and CO with constant abundances), over the velocity parameter space explored by the cross correlation fitting. The literature orbital velocity of the planet is shown as a white star, as is its expected rest frame velocity.
Our analysis confidently detects the planet at its expected velocity.} \label{fig:vmrsingleh20} \end{figure} One of the main results from our analysis is that \textit{template spectra from our 3-D model---calculated without any fine-tuning---outperform a large suite of template spectra from one-dimensional models} (a $6.8\sigma$ detection significance compared to $5.1\sigma$; Table \ref{tab:sigvalues}). This is evidence that the three-dimensional structure of this hot Jupiter's atmosphere leaves detectable signatures in the disk-integrated high-resolution emission spectrum of the planet. In the following sections we explore the various physical properties that could contribute to this enhanced detection and evaluate their influence. \subsection{Comparison to 1D Models} \label{1Dmodels} To compare our results with the modeling presented in past work, we estimated the significance of the cross correlation with two grids of models obtained with one-dimensional, plane-parallel radiative-transfer calculations. The first grid of models is described in \cite{schwarz} and consists of 704 models describing a parametric $T-p$ profile with a region at constant lapse rate ($dT/d\log(p)$) sandwiched between two isothermal regions. Pressure and temperature at the upper and lower boundaries can be changed, thus exploring a wide range of lapse rates up to $d\log(T)/d\log(p)=0.31$, which includes non-physical super-adiabatic lapse rates. Relative abundances of CO and H$_2$O are also varied in the range log(CO/H$_2$O) = 0--1.5. After excluding models with a thermal inversion layer \added{(ruled out in \citealt{schwarz})} we were left with 546 models to test. Since these models were not designed to explore high abundance ratios between CO and H$_2$O, we also tested a subset of the models described in \citet{combininglowandhigh} and sampled from the low-resolution posterior retrieved by \citet{Line2016}. From that initial sample of 5,000 models we remove those models with thermal inversion and/or log(CO/H$_2$O) $< 1.5$ \added{\citep[as low CO/H$_2$O models are already included in the grid from][]{schwarz}}, resulting in 151 additional models, spanning abundance ratios up to log(CO/H$_2$O) = 3.0. All these models have a sub-adiabatic lapse rate in the range $0.05 < d\log T/d\log p < 0.08$. The only broadening that has been applied to the 1-D model spectra arises from the pressure and thermal broadening components of the Voigt profile used to generate the spectral lines. Of the 697 one-dimensional models tested, the highest measured significance is 5.06$\sigma$, with only 14 models reaching a significance value greater than 4$\sigma$. These are models with a steep lapse rate ($0.13 < d\log T/d\log p < 0.31$) and an abundance ratio of 10--30 between CO and H$_2$O. Thus, the vast majority of the 1-D models return a significance below the threshold of detection (usually set at 4$\sigma$ for these HRS observations), consistent with the tentative detection reported in \cite{schwarz}. We note that the temperature-pressure profiles explored in the set of 1-D models encompass the range realized in our 3-D model (Figure \ref{fig: tpprof}). This implies that the deficiency in the 1-D models is not that they did not include the \textit{appropriate} physical conditions of the planet, but rather that those conditions are inherently, and observably, three-dimensional. In Figure \ref{fig:1dspectra}, we show a subset of the spectra produced from the 1D models and spectra from our best fitting 3D model.
All the spectra shown exhibit the same absorption lines, yet they result in a range of detection strengths. The subtleties in spectral line shapes and relative depths are not adequately captured by the 1D models. \begin{figure} \centering \includegraphics[width=3.6in]{comparewith1d.png} \caption{A comparison of the spectra produced from a 1D atmosphere with our best fitting 3D model (in black). The solid black spectrum has been broadened by Doppler effects arising from winds and rotation. These sources of broadening are not included in the dotted black spectrum or any of the 1D spectra. All of the models appear to show the same absorption lines, but the relative depths of absorption, influenced by the underlying temperature structure and chemical abundances, change with each model. These variations in relative depth and line shape result in a range of detection significances when cross correlated with the data. } \label{fig:1dspectra} \end{figure} \subsection{Influence of Temperature Structure} As Table \ref{tab:sigvalues} hints at, and as we will discuss in subsequent sections, the improvement in detection from using the 3-D models over the 1-D models is not primarily due to the chemical or velocity structure of the atmosphere, as those influences on the spectrum only give marginal improvements in the significance of detection. Instead, we find that the contribution from multiple regions of the planet, with different thermal structures, is a much better match to the observed data than a representation of the planet with a single thermal profile. Whether signatures of spatial inhomogeneity are intrinsic to \textit{all} HRS emission observations requires further study, but for this particular planet we find it to be the case. Recent complementary work by \citet{Taylor2020} predicts that James Webb Space Telescope observations may similarly contain inherent signatures of multiple thermal regions, although whether this inhomogeneity will be measurable or not depends on wavelength coverage and signal-to-noise. \subsection{Influence of Chemical Structure} Table \ref{tab:sigvalues} shows that for models with CO alone the assumption of abundances that follow local chemical equilibrium is slightly preferred over using the best-fit value from a previous analysis of these data \citep{Brogianalysis}. However, as discussed in Section \ref{sec:rt_spectra}, local chemical equilibrium does not predict strong variations in the abundance of CO throughout the atmosphere, meaning that the improvement of signal does not come from any significant chemical heterogeneity influencing the disk-integrated spectra, but rather from an abundance slightly closer to reality. It may be the case that by capturing the inherent \textit{thermal} inhomogeneity of the atmosphere, we can more accurately find the correct chemical abundances \citep[][]{Taylor2020}. In contrast to our results for CO, Table \ref{tab:sigvalues} shows a strong decrease in the significance of planet detection when using chemical equilibrium values for water. The data prefer depleted abundances for water; Section \ref{sec:rt_spectra} and the discussion surrounding Figure \ref{fig:synccasediffchem} demonstrate that water at equilibrium values would result in large spectral features that are not apparent in the data, according to our analysis.
It is noteworthy that the data are not suggesting a complete lack of water; the very low water abundance used in calculating the spectra with constant VMR does improve the planet detection over the comparable CO-only model. A full gridded analysis of varying chemical abundances is outside the scope of this work. Even without considering a full grid, these results show that the 3-D chemical structure of the atmosphere contributes to our enhanced detection, compared to 1-D models, insofar as it somewhat more robustly predicts the abundance of CO in the atmosphere. Notably, we find that the data prefer a water abundance that is orders of magnitude depleted below chemical equilibrium values. \subsection{Influence of Atmospheric Doppler Effects} In addition to predicting the 3-D temperature structure of the planet's atmosphere, our GCM also predicts the wind vectors throughout, all of which are influenced by the rotation rate assumed for the planet. Here we examine how the Doppler shifts and broadening due to winds and rotation in our simulated spectra may contribute to our enhanced detection of the planet's signal over the 1-D models that do not include this additional physics, and whether the data can help to empirically constrain the planet's wind speeds and rotation rate (generally assumed to be synchronous with its orbit). In Table \ref{tab:sigvalues} we report that including the spectral line shifting and broadening from winds and rotation does enhance our detection of the planet, but with only a minor increase in significance over the spectra without Doppler effects. As discussed and shown above in Figure \ref{fig:allcc}, the main influence of the Doppler effects (for most of the models) is to broaden the spectral lines, since both winds and rotation contribute similar symmetric velocity patterns. Thus we expect that the main contribution to the increased detection is that the planet's actual spectrum does contain some significant broadening from winds and rotation. Even with a symmetric velocity field, an uneven brightness pattern across the planet can result in the red- or blue-shifted side of the planet contributing more emission to the disk-integrated spectrum, resulting in a net Doppler shift \citep{jisheng}. Figure \ref{fig:allcc} shows small net Doppler shifts for the models. Depending on the precision of the data, this could result in a small anomalous radial velocity of the planet if not included in the analysis. In order to test whether a net Doppler shift contributes in any significant way to our detection, in Figure \ref{fig:dopoffkpev} we plot the models' significance of detection in velocity space, comparing the spectra with and without the Doppler effects included. While we see an overall increase in detection significance with the Doppler effects included, there is no noticeable shift in velocity space between the models with and without them. This agrees with our discussion above, that the main improvement in significance comes from the broadening of the lines, rather than any net Doppler shift. \begin{figure*} \centering \includegraphics[width=\textwidth]{doponvsoffsigma.pdf} \caption{A comparison between our significance of planet detection with and without Doppler effects included in our best-fit simulated spectra (\textit{left} and \textit{middle} plots), shown as a function of the planet's assumed orbital velocity and its rest frame velocity (which should be zero unless there is anomalous motion).
The \textit{right} plot shows the difference in significance caused by including the Doppler effects in our analysis. While there is a slight increase in detection significance, this does not correspond to any net shift in velocity space, indicating that it is largely due to the line broadening rather than any shifting. } \label{fig:dopoffkpev} \end{figure*} \subsubsection{Constraints on rotation and winds?} As part of this investigation, we wanted to see what constraint, if any, could be placed on the rotation rate or wind speeds for HD~209458b. In Figure \ref{fig:combinedvmrh20} we show how the significance of detection depends on which rotation rate we use in our 3-D model of the planet (plotted here as the planet's equatorial velocity). The significance of detection is largely insensitive to the planet's rotation rate, aside from the two most slowly rotating models being slightly disfavored (and those are also inconsistent with thermal phase curve data; see Figure \ref{fig:pcurve} and discussion). Our small improvement in detection from including Doppler effects, combined with the strong similarity in Doppler broadening for all but the slowest models (Figure \ref{fig:allcc}), makes this result unsurprising. \begin{figure} \centering \includegraphics[width=3.6in]{verticalvmrh2o.pdf} \caption{\replaced{Contours of detection significance for}{Confidence intervals from} cross-correlation between the data and our 3D models with constant volume mixing ratios of CO and water, and Doppler effects included. Similar to Figure \ref{fig:vmrsingleh20}, the white star marks literature values and equatorial velocity for synchronous rotation. Here, the two plots \replaced{compare the significance of detection}{show the 1-, 2-, and 3-$\sigma$ confidence intervals} for models with different rotation rates as a function of orbital velocity (\textit{top}) and rest frame velocity (\textit{bottom}). The data slightly disfavor the two most slowly rotating models (low values of equatorial velocity), but otherwise the temperature structures and wind patterns of all other models are roughly equally well allowed by the data. } \label{fig:combinedvmrh20} \end{figure} However, it is a valuable result to determine that the amount of Doppler broadening for models across a wide range of rotation rates is so similar (quantified in Figure \ref{fig:FWHM}). It indicates that we cannot constrain rotation rates as well as we might think from rotational broadening alone; the winds are faster in the more slowly rotating models and their predominantly eastward direction lets them compensate for the weaker rotational broadening. Although our analysis applies only to observations around one particular orbital phase, the eastward wind pattern extends around the whole globe and so we expect the same behavior regardless of orbital phase. This is the same general behavior previously reported for high-resolution transmission spectra in \citet{Flowers}; we have now shown that emission spectra are subject to this inherent physical uncertainty as well. \section{Conclusions and Summary} \label{sec:conclusion} In this project, we combined state-of-the-art observational and modeling techniques to obtain \replaced{a result stronger}{a higher significance detection} than could be achieved with either of these techniques alone. We ran a 3D atmospheric model for the hot Jupiter, HD 209458b, for a range of rotation rates.
We post-processed the resulting atmospheric structures in a geometrically correct way to generate template spectra. We then cross correlated the synthetic spectra with previously published data for this planet from CRIRES/VLT and detected the planet at a greater significance than achieved with a whole suite of 1D models. We explored why the 3D models were a strong improvement over the 1D models by looking at properties such as temperature and chemical structure and Doppler shifts from winds and rotation. Our main findings are summarized as follows: \begin{itemize} \item High resolution emission spectra are sensitive to the 3D structure of the atmosphere, at least for these data of this particular hot Jupiter. \item One-dimensional models, despite covering the same range in temperature and pressure, returned detections that were at best $\sim 1.8 \sigma$ lower than our best fit from 3D models. \item In terms of detection significance, the primary improvement is from the use of a 3D temperature structure, with secondary improvements related to the chemistry and Doppler effects. \item Doppler shifts are present in the high resolution spectra, but are unable to offer strong constraints on wind speed or rotation rate. We have shown that the widths of the spectral lines cannot be directly related to the planet's rotation rate alone. \item Our analysis detects water in these high resolution spectra of HD 209458b, but at a significantly depleted value \added{compared to the solar chemical equilibrium abundance}. \end{itemize} High resolution spectroscopy enables detailed characterization of exoplanets. It is becoming increasingly clear that the three-dimensional nature of planets and their atmospheric dynamics influence high resolution spectra. Looking toward the upcoming era of high resolution spectrographs on Extremely Large Telescopes, we eagerly await what detailed atmospheric characterizations will be possible. \acknowledgments This research was supported in part by NASA Astrophysics Theory Program grant NNX17AG25G and the Heising-Simons Foundation. MB acknowledges support from the UK Science and Technology Facilities Council (STFC) research grant ST/S000631/1. \added{We thank the referee for their constructive feedback which helped to improve the clarity of this paper.} \bibliographystyle{aasjournal}
\section{Introduction} Partial differential equations (PDEs) with random coefficients have been the focus of many studies as they occur in a variety of mathematical models of physical systems. Some examples from this class of PDEs include passive scalar (e.g., fluid temperature or solute concentration) advection by random fluid flows~\cite{mclaughlin1996explicit,balkovsky1998two,chertkov1998intermittent,sinai1989limiting}, linear and nonlinear Schr\"{o}dinger equations with random potentials~\cite{anderson1958absence,bronski1997stability}, light propagating through random media~\cite{stephen1988temporal} and random water waves impinging on a step \cite{bolles2019anomalous,majda2019statistical}. Motivated by the Chicago convection experiments \cite{castaing1989scaling}, random passive scalars have been intensely studied as a simplified model for intermittency in fluid turbulence: while enjoying a linear evolution, they retain many statistical closure features reminiscent of problems in fluid turbulence \cite{sinai1989limiting,yakhot1990phenomenological,pumir1991exponential,kerstein1991linear,kimura1993statistics,ching1994passive}. In particular, the case of a diffusing passive scalar advected by a rapidly fluctuating Gaussian random fluid flow has been the focus of much analysis, as the moment closure problem is bypassed in the white noise limit \cite{majda1993random,majda1993explicit,balkovsky1998two,bronski1997scalar,bronski2000rigorous,kraichnan1968small,mclaughlin1996explicit,sinai1989limiting}. Notably, the availability of closed evolution equations for the statistical correlators led to the discovery that a diffusing passive scalar could inherit a heavy-tailed, non-Gaussian PDF from a Gaussian random fluid flow \cite{majda1993explicit,mclaughlin1996explicit,bronski2000rigorous,bronski2000problem}. Additional studies have explored the role played by finite or infinite correlation times in a random shear flow \cite{resnick1996dynamical,vanden2001non}. This generic non-Gaussian behavior in a passive scalar has been termed `scalar intermittency'. Subsequently, similar findings have been observed in field experimental data, such as atmospheric wind measurements \cite{antonia1977log} as well as observations of stratospheric inert tracers~\cite{sparling2001scale}. Further investigations have provided a more in-depth understanding of how the non-Gaussian measure is dynamically attained \cite{camassa2008evolution}, and explored the case of a passive scalar advected by a shear-free temporally fluctuating wind, where the entire probability measure can be determined at any time \cite{bronski2007explicit}. The latter work also exhibited how the diffusivity adjusts the location of singularities in the probability measure. Additional studies contrasted the scalar PDF inherited by an unbounded linear shear with that of a bounded, periodic shear flow \cite{bronski1997scalar}. This established that for integrable random initial data the PDF would `Gaussianize' at long times, whereas short-ranged, random wave initial data would produce divergent flatness factors in the same limit. While theoretically interesting, unbounded domains are of course unattainable in actual experiments, and the effects of boundaries need to be included for realistic models. Recently, the role of impermeable boundaries has been explored in a channel geometry with deterministic initial conditions \cite{camassa2019symmetry}.
This work established the surprising role that the boundary conditions play in setting the skewness of the PDF. McLaughlin and Majda \cite{mclaughlin1996explicit} established that in free space, with deterministic initial data, the long time PDF skewness would be strictly positive, whereas Monte-Carlo simulations in \cite{camassa2019symmetry} have demonstrated that with no-flux boundary conditions in a channel geometry, the long time PDF skewness can be negative. Further, it has been shown in \cite{camassa2019symmetry} that such flows could be physically realized by a randomly moving wall. More recently, the enhanced diffusion \cite{taylor1953dispersion} and third spatial Aris moment \cite{aris1956dispersion} induced by a periodically moving wall were studied experimentally and theoretically \cite{ding2020enhanced}, where it is noteworthy that the flow's temporal dependence is non-multiplicative. Inspired by the ground state energy expansion strategy for handling more realistic flows (e.g. periodic flows in~\cite{bronski1997scalar}) and the recent numerical findings provided in~\cite{camassa2019symmetry}, here we rigorously establish that impermeable boundary conditions in a channel geometry can yield a scalar PDF with negative long-time skewness. We do so for a range of molecular diffusivities and for arbitrary nonlinear shear layers multiplied by a stationary Ornstein-Uhlenbeck process, through an explicit calculation of the long time scalar skewness asymptotics. Further, we gain insight into the role of the correlation time of the underlying stochastic process in the dynamic evolution of the scalar skewness, and in particular establish that longer correlation times yield increased transient dynamics. The paper is organized as follows: In section \ref{sec:setup}, we formulate the problem of the evolution of a passive scalar field advected by a nonlinear shear layer multiplied by an Ornstein-Uhlenbeck random process in a channel with impermeable boundaries, and introduce some important conclusions for this scalar intermittency model. In section \ref{sec:GroundStateEnergyExpansion}, we derive a long time asymptotic expansion of the $N$-point correlation function of the scalar field using perturbation theory and the spectral theory of differential operators. Based on the $N$-point correlation function, we study the PDF of the scalar and show how the flow controls the asymmetry of the PDF, which makes rigorous and generalizes the conclusions of \cite{camassa2019symmetry}. In section \ref{sec:wind}, we study the model with a spatially uniform shear flow with Gaussian temporal fluctuations. In this special case, the spatially uniform structure gives access to exact formulae for the Green's function and the $N$-point correlation function. These are consistent with the long time asymptotic expansion of section \ref{sec:GroundStateEnergyExpansion} and are verified by the Direct Monte-Carlo (DMC) simulation proposed in \cite{camassa2019symmetry}. In section \ref{sec:numerical}, we perform numerical simulations for spatially non-uniform flows using the backward Monte-Carlo method. The numerical results quantitatively demonstrate the validity of the formulae we derive in section \ref{sec:GroundStateEnergyExpansion}. In section \ref{sec:discuss}, we summarize the conclusions from the findings in the paper and briefly discuss future studies.
\section{Setup and background of the problem for scalar intermittency} \label{sec:setup} We will study intermittency in the following random advection-diffusion equation with deterministic initial condition $T_{0}\left(x,y\right)$ and impermeable channel boundary conditions, \begin{equation}\label{eq:advectionDiffusion} \displaystyle \frac{\partial T}{\partial t}+\xi(t)u(y)\frac{\partial T}{\partial x}=\kappa \Delta T \,,\qquad T(x,y,0)=T_{0}(x,y)\,, \qquad \displaystyle\left. \frac{\partial T}{\partial y}\right|_{y= 0,L}=0 \, , \end{equation} where the domain is $\left\{ (x,y)| x\in \mathbb{R}, y \in [0, L] \right\}$, $L$ is the gap thickness of the channel, $\kappa$ is the diffusivity, and $\xi(t)$ is a zero-mean, Gaussian random process with correlation function $\left\langle \xi(t)\xi(s) \right\rangle=R(t,s)$. The time dependent random shear flow can originate either from a time-varying pressure field or from randomly moving portions of the boundary. Such a shear flow can be obtained by solving the Navier-Stokes equations with boundary conditions matching the wall velocity $\xi(t)$; see section 2 of \cite{camassa2019symmetry} for more details. We note that in this study we only consider shear flows whose spatial averages are non-zero, such as would arise in an experiment in which only one channel wall is moved, with statistics measured in the laboratory frame. The symmetric case involving two oppositely moving walls requires higher order asymptotics to compute leading order long time skewness limits and will be explored in future work. In this paper, one of two additional simplifying assumptions is made: either 1) $\xi(t)$ is a Gaussian white noise in time, so that $R(t,s)=g \delta(t-s)$, or 2) $\xi(t)$ is a stationary Ornstein-Uhlenbeck process with damping $\gamma$ and dispersion $\sigma$, which is the solution of the stochastic differential equation (SDE) $\mathrm{d}\xi (t) =-\gamma \xi (t)\mathrm{d}t +\sigma \mathrm{d}B (t)$ with initial condition $\xi(0) \sim \mathcal{N} (0, {\sigma^2}/{2 \gamma})$. Here $B (t)$ is the standard Brownian motion and $\mathcal{N} (a,b)$ is the normal distribution with mean $a$ and variance $b$. The correlation function of $\xi (t)$ is $R(t,s)=\frac{\sigma^{2}}{2\gamma}e^{-\gamma \left| t-s \right|}$, and $\gamma^{-1}$ is often referred to as the correlation time of the OU process. It is easy to check that the stationary Ornstein-Uhlenbeck process converges to the Gaussian white noise process as the correlation time vanishes with fixed ${\sigma}/{\gamma}$. Notice that $\gamma \sim \frac{1}{\text{Time}}$, $\sigma \sim \frac{1}{\text{Time}^{ \frac{1}{2}}}$.
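For intuition, sample paths of this stationary OU process can be generated with its exact one-step update, as in the short Python sketch below (our own illustration, with arbitrary parameter values); the final line checks the stationary variance $\sigma^{2}/(2\gamma)$.
\begin{verbatim}
import numpy as np

def ou_paths(gamma, sigma, dt, n_steps, n_paths, rng):
    """Sample paths of the stationary Ornstein-Uhlenbeck process
    d xi = -gamma xi dt + sigma dB, using the exact discretization
    xi_{n+1} = xi_n exp(-gamma dt) + b * N(0, 1), with
    b = sigma * sqrt((1 - exp(-2 gamma dt)) / (2 gamma))."""
    xi = np.empty((n_paths, n_steps + 1))
    # stationary initial condition: xi(0) ~ N(0, sigma^2 / (2 gamma))
    xi[:, 0] = rng.normal(0.0, sigma / np.sqrt(2.0 * gamma), n_paths)
    a = np.exp(-gamma * dt)
    b = sigma * np.sqrt((1.0 - a**2) / (2.0 * gamma))
    for n in range(n_steps):
        xi[:, n + 1] = a * xi[:, n] + b * rng.standard_normal(n_paths)
    return xi

rng = np.random.default_rng(1)
xi = ou_paths(gamma=2.0, sigma=1.0, dt=1e-3, n_steps=5000,
              n_paths=2000, rng=rng)
print(xi[:, -1].var())  # should be close to sigma^2/(2 gamma) = 0.25
\end{verbatim}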
With the change of variables \begin{equation} \begin{array}{ccc} Lx'= x \quad Ly'=y&\frac{L^2}{\kappa}t'=t & g=\frac{\sigma}{\gamma} \\ \frac{\kappa}{L^2}\gamma'=\gamma &U= \frac{L}{g^{2}} &v (y,t)= u (y)\xi (t) \\ Uv'(y,t)=v (y, t) &\frac{g \sqrt{\kappa}}{L}\xi'(\frac{L^2}{\kappa}t')=\xi (t) &T' (Lx',Ly',\frac{L^2}{\kappa}t')=T (x,y,t) \\ \end{array} \end{equation} we can drop the primes without confusion and obtain the nondimensionalized version of \eqref{eq:advectionDiffusion}: \begin{equation}\label{eq:advectionDiffusionNonDimension} \displaystyle \frac{\partial T}{\partial t}+\text{Pe} \,\xi(t)u(y)\frac{\partial T}{\partial x}=\Delta T\,,\qquad \displaystyle T(x,y,0)=T_{0}(x,y)\,,\qquad \displaystyle \left.\frac{\partial T}{\partial y}\right|_{y= 0,1}=0\,, \end{equation} where the domain is $\left\{ (x,y)| x\in \mathbb{R}, y \in [0,1] \right\}$, and we have introduced the P\'{e}clet number $\text{Pe}= {U L}/{ \kappa}= {L^{2}}/{ (g^{2}\kappa)}$. When $\xi (t)$ is the white noise process, the correlation function of $\xi (t)$ is $R(t,s)= \delta (t-s)$. Conversely, when $\xi (t)$ is the stationary Ornstein-Uhlenbeck process, the underlying SDE becomes $\mathrm{d}\xi (t) =-\gamma \xi (t)\mathrm{d}t + \gamma\, \mathrm{d}B (t)$ with the initial condition $\xi (0) \sim \mathcal{N} (0,\frac{\gamma}{2})$, and the correlation function of $\xi (t)$ is $R(t,s)= \frac{\gamma}{2}e^{-\gamma \left| t-s \right|}$. Define the $N$-point correlation function $\mathbf{\Psi}_{N}$ of the scalar field $T(x,y,t)$: $\mathbb{R}^{2N}\times \mathbb{R}^{+}\rightarrow \mathbb{R}$ by $\mathbf{\Psi}_N (\mathbf{x}, \mathbf{y},t) =\left<\prod_{j=1}^N T(x_j,y_j,t)\right >_{\xi (t)}$, where $\mathbf{x}=\left(x_1,x_2,\cdots,x_N\right)$, $\mathbf{y}=(y_1,y_2,\cdots,y_N)$. Here, the brackets $\left\langle \cdot \right\rangle_{\xi(t)}$ denote ensemble averaging with respect to the stochastic process $\xi(t)$. The $\mathbf{\Psi}_{N}$ associated with the free space version of \eqref{eq:advectionDiffusion} is known for some special flows. When $\xi(t)$ is the Gaussian white noise process, Majda \cite{majda1993random} showed that $\hat{\mathbf{\Psi}}_{N}$ satisfies an $N$-body parabolic quantum mechanics problem, \begin{eqnarray}\label{closureeqnWhite} \frac{\partial \hat{\mathbf{\Psi}}_{N}}{\partial t} &=& \Delta_N \hat{\mathbf{\Psi}}_{N}- \left( \frac{\text{Pe}^2}{2} \left( \sum\limits_{j=1}^{N}u\left(y_{j}\right) k_{j}\right)^2+ |\mathbf{k}|^2 \right)\hat{\mathbf{\Psi}}_{N}\\ \hat{\mathbf{\Psi}}_{N}(\mathbf{k},\mathbf{y},0)&=& \prod_{j=1}^N \hat{T}_{0}(k_j,y_j) \nonumber \end{eqnarray} where $\hat{f}(\mathbf{k})= \int\limits_{\mathbb{R}^N}^{} \mathrm{d}\mathbf{x}\, e^{\mathrm{i}(\mathbf{x}\cdot \mathbf{k} )}f(\mathbf{x})$ is the Fourier transform of $f(\mathbf{x})$, $\Delta_N$ is the Laplacian operator in $N$ dimensions $\Delta_{N}=\sum\limits_{j=1}^N \frac{\partial^2}{\partial y_j^{2}}$, $\mathbf{k}=\left(k_1,k_2,\cdots,k_N\right)$. When $u (y)=y$, Majda \cite{majda1993random} derived the exact expression of $\Psi_N$. A rotation of coordinates reduces the $N$-dimensional problem to a one-dimensional problem. Then the solution of \eqref{closureeqnWhite} is available via Mehler's formula. Based on this exact $N$-point correlation function, the distribution of the scalar field advected by a linear shear flow has been studied for deterministic and random initial data. The non-Gaussian behaviors of the PDF have been reported in \cite{mclaughlin1996explicit,bronski2000problem,bronski2000rigorous}.
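For concreteness, setting $N=1$ in \eqref{closureeqnWhite} gives a single heat equation in $y$ with a $k$-dependent potential,
\begin{equation*}
\frac{\partial \hat{\mathbf{\Psi}}_{1}}{\partial t} = \frac{\partial^{2} \hat{\mathbf{\Psi}}_{1}}{\partial y^{2}} - \left( \frac{\text{Pe}^{2}}{2}\, u^{2}(y)\, k^{2}+ k^{2} \right)\hat{\mathbf{\Psi}}_{1}, \qquad \hat{\mathbf{\Psi}}_{1}(k,y,0)= \hat{T}_{0}(k,y),
\end{equation*}
so each Fourier mode of the mean scalar decays under a Schr\"odinger-type operator whose potential grows quadratically with the frequency $k$.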
When $\xi(t)$ is the stationary Ornstein-Uhlenbeck process, by introducing an extra variable $z$, Resnick \cite{resnick1996dynamical} showed that $\hat{\mathbf{\Psi}}_{N} (\mathbf{k}, \mathbf{y},t)= \frac{1}{\sqrt{\pi}}\int\limits_{-\infty}^{+\infty} \mathrm{d} z \hat{\psi} (\mathbf{k}, \mathbf{y},z,t) e^{-z^2}$, where $\hat{\psi} (\mathbf{k}, \mathbf{y},z,t)$ satisfies the following partial differential equation: \begin{eqnarray}\label{closureeqnOU} \frac{\partial \hat{\psi}}{\partial t}+\mathrm{i} \text{Pe}\sqrt{\gamma} z \sum\limits_{j=1}^N k_{j} u(y_{j}) \hat{\psi}+ \gamma z \hat{\psi}_z &=& \Delta_N \hat{\psi}- |\mathbf{k}|^{2}\hat{\psi}+ \frac{\gamma}{2}\hat{\psi}_{zz}\\ \hat{\psi}(\mathbf{k},\mathbf{y},z,0)&=& \prod_{j=1}^N \hat{T}_{0}(k_j,y_j) \nonumber \end{eqnarray} When $u (y)=y$, Resnick derived the exact expression for $\Psi_{N}$ via the same strategy Majda used for solving \eqref{closureeqnWhite} and showed that it converges to the solution of \eqref{closureeqnWhite} in the limit $\gamma\rightarrow \infty$ of the OU damping parameter. These results are all derived in free space. The $N$-point correlation function $\Psi_{N}$ for the boundary value problem \eqref{eq:advectionDiffusionNonDimension} is unknown even for simple-geometry domains. For periodic boundary conditions, Bronski and McLaughlin \cite{bronski1997scalar} carried out a second order perturbation expansion for the ground state of periodic Schr\"odinger equations to analyze the inherited probability measure for a passive scalar field advected by periodic shear flows with multiplicative white noise. In \cite{camassa2019symmetry}, equation \eqref{eq:advectionDiffusionNonDimension} was studied with the spatial shear $u(y)=y$ and $\xi(t)$ the white noise process. A dramatically different long time state resulting from the existence of the impermeable boundaries was found. In particular, the PDF of the scalar in the channel case has negative skewness, in stark contrast to free space, where the limiting skewness is positive. Inspired by the observation reported in \cite{camassa2019symmetry}, we further explore here the PDF of the advected scalar in the presence of impermeable boundaries using the perturbation method introduced in \cite{bronski1997scalar}. Briefly, the long time behavior of the Fourier transform $\hat{\Psi}_{N}$ of the $N$-point correlation function of the scalar field is dominated by the neighborhood of the zero frequency $\mathbf{k}=\mathbf{0}$. This observation reduces the series expansion of $\hat{\Psi}_{N}$ to a single multi-dimensional Laplace type integral. Then, standard asymptotic analysis and inverse Fourier transformation yield the long time asymptotic expansion of $\Psi_N$. \section{Long-time asymptotics: ground state energy expansion in channel geometry}\label{sec:GroundStateEnergyExpansion} For bounded domains, the $N$-point correlation function $\mathbf{\Psi}_{N}$ inherits the impermeable boundary condition from the scalar field. From the spectral theory of parabolic differential operators, the solutions of \eqref{closureeqnWhite} and \eqref{closureeqnOU} can be written as an eigenfunction expansion of the form \begin{equation} \begin{array}{rl} \label{eq:eigenfunctionExpansion} \hat{\Psi}_{N}(\mathbf{k},\mathbf{y},t) =& \sum\limits_{l=0}^{\infty}\beta_l(\mathbf{k}) \phi_l (\mathbf{k},\mathbf{y}) e^{-\lambda_l (\mathbf{k}) t} \\ \end{array}.
\end{equation} When the statistics of the velocity field are white in time, $\lambda_l, \phi_l$ are the eigenvalues and eigenfunctions of the eigenvalue problem \begin{equation} \begin{array}{rl} -\lambda_{l}\phi_{l}&= \Delta_N \phi_{l}- \left( \frac{\text{Pe}^2}{2} \left( \sum\limits_{j=1}^{N}u\left(y_{j}\right) k_{j}\right)^2+ |\mathbf{k}|^2 \right)\phi_{l} ,\\ \frac{\partial \phi_{l}}{\partial y_{j}}|_{y_{j}= 0,1}&=0, \qquad \forall \, 1\leq j\leq N. \end{array} \end{equation} For simplicity, we scale $\phi_l$ so that $\left\{ \phi_{l} \right\}_{l=0}^{\infty}$ form an orthonormal basis with respect to the inner product $\left\langle f(\mathbf{y}),g(\mathbf{y}) \right\rangle= \int\limits_{[0,1]^{N}}^{}\mathrm{d} \mathbf{y} f(\mathbf{y})g(\mathbf{y})$ for all $\mathbf{k}$. The coefficients $\beta_l$ are determined by the initial condition and the eigenfunctions via $\beta_l(\mathbf{k})= \left\langle \prod_{j=1}^N \hat{T}_{0}(k_j,y_j), \phi_l(\mathbf{k},\mathbf{y}) \right\rangle $. When $\xi(t)$ is the stationary Ornstein-Uhlenbeck process, $\phi_l(\mathbf{k}, \mathbf{y}) = \frac{1}{\sqrt{\pi}} \int\limits_{-\infty}^{+\infty}\mathrm{d} z \varphi_{l} (\mathbf{k}, \mathbf{y},z) e^{-z^2}$, where $\lambda_l, \varphi_l$ are the eigenvalues and eigenfunctions of the eigenvalue problem \begin{equation} \begin{array}{rl} -\lambda_{l} \varphi_{l}&=-\mathrm{i} \text{Pe}\sqrt{\gamma} z \sum\limits_{j=1}^N k_{j} u(y_{j}) \varphi_{l}- \gamma z \frac{\partial \varphi_{l}}{\partial z} + \Delta_N \varphi_{l}- |\mathbf{k}|^{2}\varphi_{l}+ \frac{\gamma}{2} \frac{\partial^2 \varphi_l}{\partial z^{2}} \\ \frac{\partial \varphi_{l}}{\partial y_{j}}|_{y_{j}= 0,1}&=0, \qquad \forall 1\leq j\leq N. \end{array} \end{equation} We also choose $\varphi_{l}$ such that $\left\{ \varphi_{l} \right\}_{l=0}^{\infty}$ form an orthonormal basis with respect to the inner product $ \left\langle f(\mathbf{y},z),g(\mathbf{y},z) \right\rangle=\frac{1}{\sqrt{\pi}} \int\limits_{-\infty}^{+\infty}\mathrm{d}z \int\limits_{[0,1]^{N}}^{}\mathrm{d} \mathbf{y} f(\mathbf{y},z)g^{*}(\mathbf{y},z)e^{-z^{2}} $, where $g^{*}$ is the complex conjugate of $g$; $\beta_l$ have the same definition as in the Gaussian white noise case. Bronski and McLaughlin \cite{bronski1997scalar} proved that $\lambda_l(\mathbf{k})$ strictly increases with respect to the subscript $l$ for all $\mathbf{k}$, and that each $\lambda_{l}(\mathbf{k})$ attains its global minimum at $\mathbf{k}=\mathbf{0}$; in particular, $\lambda_{0}(\mathbf{0})=0, \lambda_{1}(\mathbf{0})=\pi^{2}$. As a consequence, the series given in (\ref{eq:eigenfunctionExpansion}) is dominated at long times by the ground state $l=0$, since the other terms are $\mathcal{O} (e^{-\pi^{2}t})$. This observation yields the following asymptotic formula, valid at long times, for the $N$-point correlation function of the scalar field: \begin{equation} \begin{array}{rl} \mathbf{\Psi}_{N}(\mathbf{x}, \mathbf{y},t)= \frac{1}{(2\pi)^{N}} \int\limits_{\mathbb{R}^{N}}^{} \mathrm{d}\mathbf{k} e^{-\mathrm{i}(\mathbf{x}\cdot \mathbf{k})}\beta_0(\mathbf{k}) \phi_0 (\mathbf{k},\mathbf{y}) e^{-\lambda_0 (\mathbf{k}) t} +\mathcal{O} (e^{-\pi^{2}t}) \text{ as } t\rightarrow \infty. \end{array} \end{equation} This is an $N$-dimensional Laplace type integral with respect to the frequency variable $\mathbf{k}$.
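Indeed, since $\lambda_{0}(\mathbf{0})=0$ and $\mathbf{k}=\mathbf{0}$ is the global minimum (so $\nabla_{\mathbf{k}}\lambda_{0}(\mathbf{0})=\mathbf{0}$), expanding $\lambda_{0}(\mathbf{k})\approx \frac{1}{2}\mathbf{k}^{\mathrm{T}}\mathbf{H}\mathbf{k}$ near the origin reduces the leading behavior to a Gaussian integral,
\begin{equation*}
\int\limits_{\mathbb{R}^{N}}^{}\mathrm{d}\mathbf{k}\, e^{- \frac{t}{2}\mathbf{k}^{\mathrm{T}}\mathbf{H}\mathbf{k}}=\left( \frac{2\pi}{t} \right)^{\frac{N}{2}}\frac{1}{\sqrt{\det (\mathbf{H})}},
\end{equation*}
which is the source of the algebraic $t^{-N/2}$ decay and of the $\sqrt{\det (\mathbf{H})}$ factor in the expansion below.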
By formula (1) in \cite{inglot2014simple}, the asymptotic expansion of $\mathbf{\Psi}_N(\mathbf{x}, \mathbf{y},t)$ for large $t$ is \begin{equation} \begin{array}{rl} \label{eq:NpointCorrelation} \mathbf{\Psi}_N (\mathbf{x}, \mathbf{y},t)= & \frac{1}{\left( 2\pi t \right)^{\frac{N}{2}} \sqrt{\det (\mathbf{H})}} \left(\int\limits_{0}^{1}\mathrm{d} y\hat{T}_{0} (0,y) \right)^{N} +\mathcal{O}(t^{-\frac{N+2}{2}}) \text{ as } t\rightarrow \infty, \\ \end{array} \end{equation} where $\mathbf{H}_{i,j}= \frac{\partial^2 }{\partial k_i \partial k_j} \lambda_0(\mathbf{k})|_{\mathbf{k}=\mathbf{0}}$ is the Hessian matrix of the eigenvalue $\lambda_0 (\mathbf{k})$ at $\mathbf{k}=\mathbf{0}$. Here, we are primarily concerned with single-point statistics, namely the moments of the random scalar field at a point $(x,y)$, $\left\langle T^{N}(x,y,t) \right\rangle=\mathbf{\Psi}_N (\mathbf{x},\mathbf{y},t)$, where all components of $\mathbf{x}, \mathbf{y}$ equal $x,y$, namely $x=x_{1}=x_{2}=...=x_{N}, y=y_{1}=y_{2}=...=y_{N}$. When $N=1,2,3$, $\det (\mathbf{H})$ in \eqref{eq:NpointCorrelation} depends only on the second derivative of the ground state eigenvalue in the one-dimensional eigenvalue problem, $\lambda^{(2)}=\frac{\partial^{2}}{\partial k_1^{2}}\lambda_0 (k_{1})|_{k_1=0} $, and the mixed second derivative in the two-dimensional eigenvalue problem, $\lambda^{(1,1)}=\frac{\partial^{2}}{\partial k_1\partial k_2}\lambda_0 (k_{1},k_2)|_{k_1=0,k_2=0} $. Hence, as $t\rightarrow \infty$, the first three moments are \begin{equation}\label{eq:firstThreeMoments} \begin{array}{rl} \left\langle T (x,y,t) \right\rangle =&\int\limits_{0}^{1}\mathrm{d} y\hat{T}_{0} (0,y) \displaystyle \frac{1}{(2\pi t)^{\frac{1}{2}}\sqrt{\lambda^{(2)}}}+\mathcal{O} (t^{- \frac{3}{2}}) \\ \left\langle T^{2}(x,y,t) \right\rangle =& \left( \int\limits_{0}^{1}\mathrm{d} y\hat{T}_{0} (0,y) \right)^{2} \displaystyle \frac{1}{2\pi t \sqrt{(\lambda^{(2)})^{2}-(\lambda^{(1,1)})^{2} }}+\mathcal{O} (t^{-2}) \\ \left\langle T^{3}(x,y,t) \right\rangle =& \left(\int\limits_{0}^{1}\mathrm{d} y\hat{T}_{0} (0,y) \right)^{3} \displaystyle \frac{1}{ \left( 2\pi t \right)^{\frac{3}{2}}\sqrt{(\lambda^{(2)} -\lambda^{(1,1)})^{2}(\lambda^{(2)}+2\lambda^{(1,1)})}}+\mathcal{O} (t^{- \frac{5}{2}}). \\ \end{array} \end{equation} Here $\lambda^{(2)},\lambda^{(1,1)}$ can be obtained by the perturbation method introduced in the appendix of \cite{bronski1997scalar}. When $\xi(t)$ is a Gaussian white noise process, the derivatives of the eigenvalues in \eqref{eq:firstThreeMoments} are \begin{equation}\label{eq:eigenvalueWhite} \begin{array}{rl} & \lambda^{(2)}=2+ \text{Pe}^2\int\limits_{0}^{1}\mathrm{d}y\, u^2(y)\\ & \lambda^{(1,1)}= \text{Pe}^{2}\left( \int\limits_{0}^{1}\mathrm{d} y \,u(y) \right)^{2}=\text{Pe}^{2}\bar{u}^{2}\\ \end{array} \end{equation} Conversely, when $\xi(t)$ is the stationary Ornstein-Uhlenbeck process, the derivatives of the eigenvalues in \eqref{eq:firstThreeMoments} are \begin{eqnarray}\label{eq:eigenvalueOU} &&\hspace{-2cm} \lambda^{(2)}= 2+\text{Pe}^{2} \sqrt{\gamma } \int_{0}^{1}\mathrm{d}y \,u(y) \left\{\frac{\cosh \left(\sqrt{\gamma } y\right)}{\sinh\left(\sqrt{\gamma }\right)} \int_{0}^{1}\mathrm{d}s \,u(s) \cosh \left(\sqrt{\gamma } \left(1-s\right)\right) \right. \nonumber \\ &&\hspace{5cm}\left.
-\int_0^y \mathrm{d}s \,u(s) \sinh \left(\sqrt{\gamma } (y-s)\right) \right\} \\ &&\hspace{-2cm} \lambda^{(1,1)}= \text{Pe}^{2}\bar{u}^{2} \nonumber \end{eqnarray} The white noise can be regarded as the limiting case of vanishing correlation time $\gamma^{-1}$ in the stationary Ornstein-Uhlenbeck process. It is natural to ask whether the scalar field statistics with $\xi(t)$ an Ornstein-Uhlenbeck process asymptotically satisfy, as $\gamma\rightarrow \infty$, the corresponding model with the white noise process. In the free-space problem, Resnick \cite{resnick1996dynamical} proves this for the linear shear flow $u(y)=y$ via the exact formula for $\Psi_{N}$. In the channel domain problem, asymptotic analysis shows that equation \eqref{eq:eigenvalueOU} converges to equation \eqref{eq:eigenvalueWhite} as $\gamma\rightarrow+\infty$, which supports this compatibility for large values of the parameter $\gamma$. In the free space problem, \cite{vanden2001non} proves that both of the flows considered in this paper share the same limiting distribution of the scalar field at long times for any $\gamma$. However, in channel domains, the differences between equations \eqref{eq:eigenvalueWhite} and \eqref{eq:eigenvalueOU} lead to different corresponding limiting distributions. Thus, impermeable boundaries can affect the limiting distribution of the random scalar field. The right hand side of \eqref{eq:NpointCorrelation} is independent of $x,y$, which means all points in the domain have the same statistical behavior at long times. Without loss of generality, we focus on the single point $T(0,0,t)$ of the random scalar field. In \cite{camassa2019symmetry}, the authors derived the PDF of $T(0,0,t)$ at long times for the free space version of \eqref{eq:advectionDiffusionNonDimension} with $u(y)=y$, $\text{Pe}=1$, using the method of characteristics and the Green's function. The study of the explicit formula of the PDF for the free space problem shows that the skewness of the PDF of $T(0,0,t)$ is positive at long times, while the numerical studies show the skewness becomes negative in the presence of impermeable channel boundaries, demonstrating that the impermeable boundary has a crucial impact on the PDF of the random scalar field. With the long time asymptotic expansion of the moments \eqref{eq:firstThreeMoments} at hand, we can theoretically study the skewness of $T(0,0,t)$ for various parameters and more general shear flows. Based on the formulas in \eqref{eq:firstThreeMoments}, as $t\rightarrow \infty$, the variance of $T(x,y,t)$ is given by \begin{equation}\label{eq:variaceLongTimeAsymptotic} \begin{array}{rl} \text{Var}(T)= & \left\langle (T- \left\langle T \right\rangle)^{2} \right\rangle\\ = & \left( \displaystyle\int_{0}^{1}\mathrm{d} y\hat{T}_{0} (0,y) \right)^{2} \left( \displaystyle\frac{1}{ \sqrt{(\lambda^{(2)})^{2}-(\lambda^{(1,1)})^{2} }} -\frac{1}{\lambda^{(2)}} \right) \displaystyle \frac{1}{2\pi t} +\mathcal{O}(t^{-2}) . \\ \end{array} \end{equation} Notice that the coefficient of $t^{-1}$ in \eqref{eq:variaceLongTimeAsymptotic} is strictly positive if $\lambda^{(1,1)}\neq 0$, which requires $\bar{u} \neq 0$.
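As a small numerical sketch (our own illustration, with arbitrary example parameters), the quantities in \eqref{eq:eigenvalueWhite} and the variance coefficient in \eqref{eq:variaceLongTimeAsymptotic} can be evaluated directly in Python:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def lambdas_white_noise(u, pe):
    """lambda^(2) and lambda^(1,1) for white-in-time forcing:
      lambda^(2)   = 2 + Pe^2 * int_0^1 u(y)^2 dy
      lambda^(1,1) = Pe^2 * (int_0^1 u(y) dy)^2."""
    u2_int, _ = quad(lambda y: u(y) ** 2, 0.0, 1.0)
    u_int, _ = quad(u, 0.0, 1.0)
    return 2.0 + pe**2 * u2_int, pe**2 * u_int**2

def variance_coefficient(lam2, lam11):
    """Coefficient multiplying (int T0_hat)^2 / (2 pi t) in the
    long-time variance expansion; positive whenever lam11 != 0."""
    return 1.0 / np.sqrt(lam2**2 - lam11**2) - 1.0 / lam2

# Example: linear shear u(y) = y + A with A = 1 and Pe = 1
lam2, lam11 = lambdas_white_noise(lambda y: y + 1.0, pe=1.0)
print(lam2, lam11, variance_coefficient(lam2, lam11))
\end{verbatim}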
As $t\rightarrow \infty$, the skewness of $T(x,y,t)$ is given by \begin{equation}\label{eq:skewnessLongTimeAsymptotic} \begin{array}{rl} \text{S}(T)=& \displaystyle \frac{ \left\langle (T- \left\langle T \right\rangle)^{3} \right\rangle}{\left( \hbox{\small Var}(T) \right)^{\frac{3}{2}}}\\\\ =&\displaystyle \frac{ \displaystyle\frac{1}{\sqrt{(\lambda^{(2)} -\lambda^{(1,1)})^{2}(\lambda^{(2)}+2\lambda^{(1,1)})}} - \displaystyle\frac{3}{\sqrt{\lambda^{(2)} \left((\lambda^{(2)})^{2}-(\lambda^{(1,1)})^{2}\right) }} + \displaystyle\frac{2}{(\lambda^{(2)})^{\frac{3}{2}}} }{\left( \displaystyle \frac{1}{ \sqrt{(\lambda^{(2)})^{2}-(\lambda^{(1,1)})^{2} }} - \displaystyle\frac{1}{\lambda^{(2)}} \right)^{\frac{3}{2}}}+\mathcal{O}(t^{-1}).\\ \end{array} \end{equation} The first term on the right hand side of \eqref{eq:skewnessLongTimeAsymptotic} shows the existence of the long time limit of the skewness, which means the PDF of $T(0,0,t)$ has a persisting asymmetry. There are five factors that affect the limit value: the P\'eclet number $\text{Pe}$, the mean of the spatial component of the flow $\bar{u}$, the shape of $u(y)$, the temporal fluctuation $\xi(t)$ and the OU damping parameter $\gamma$. Figures \ref{fig:SkewnessShearPeA} and \ref{fig:SkewStepPea} show the long time limit of the skewness of $T(0,0,t)$ for the flows with $u(y)=y+A$ and $u(y)=\theta(a-y)$ (with $\theta$ denoting the Heaviside step function) for various P\'eclet numbers and $\bar{u}$, respectively. Panel (a1) in Figure \ref{fig:SkewnessShearPeA} shows that the skewness limit is negative when $\bar{u}={1}/{2}$, $\text{Pe}=1$, which is consistent with the Monte-Carlo simulation results reported in~\cite{camassa2019symmetry}. Both figures show a similar pattern: the skewness is negative when $\text{Pe}$ or $\bar{u}$ is small and positive when they are large. With the step function shear flow, however, the differences are larger, as can be seen by comparing panel (a1) or panel (a2) in Figure \ref{fig:SkewnessShearPeA} with the corresponding panels in Figure \ref{fig:SkewStepPea}. One can see that the change of $u(y)$ dramatically changes the long time asymptotics of the skewness in its $\text{Pe}$--$\bar{u}$ dependence. Of course, while the Ornstein-Uhlenbeck process yields different numerical values compared with the white noise process, for the parameter region $\text{Pe}\times \bar{u} \in [0,4]\times [0,4]$ shown in the figures the relative difference between them is less than $0.1$. Hence it is hard to observe a difference when comparing the left panel and the right panel in Figure \ref{fig:SkewnessShearPeA} or Figure \ref{fig:SkewStepPea}. Figure \ref{fig:skewshearGamma} shows the dependence of the skewness long time limit on the damping parameter $\gamma$. Note that depending on $\text{Pe}$ and $u(y)$, the sign of the skewness can be made to change by varying $\gamma$. \begin{figure} \centering \includegraphics{Figures/skewshearPeA.pdf} \hfill \caption[ ] {\textbf{The skewness limit of $T(0,0,t)$ at long times for various P\'eclet numbers and $\bar{u}$.} In both panels, the flow takes the form $\text{Pe}\,u(y)\xi(t)$, where $u(y)=(y+A)$ and $\bar{u}=A+\frac{1}{2}$. In panel (a1), $\xi(t)$ is the Gaussian white noise process. In panel (a2), $\xi(t)$ is the stationary Ornstein-Uhlenbeck process with $\gamma=1$.
} \label{fig:SkewnessShearPeA} \end{figure} \begin{figure} \centering \includegraphics{Figures/skewStepPea.pdf} \hfill \caption[ ] {\textbf{The skewness limit of $T(0,0,t)$ at long times for various P\'eclet numbers and $\bar{u}$.} In both panels, the flow takes the form $\text{Pe}\,u(y)\xi(t)$, where $u(y)=\theta (a-y)$ and $\bar{u}=a$. In panel (a1), $\xi(t)$ is the Gaussian white noise process. In panel (a2), $\xi(t)$ is the stationary Ornstein-Uhlenbeck process with $\gamma=1$. } \label{fig:SkewStepPea} \end{figure} \begin{figure} \centering \includegraphics{Figures/skewshearGamma.pdf} \hfill \caption[ ] {\textbf{The skewness limit of $T(0,0,t)$ at long times for various damping parameters $\gamma$.} The flow is $\text{Pe}(y+1.2)\xi(t)$, where $\xi (t)$ is a stationary Ornstein-Uhlenbeck process with damping parameter $\gamma$. The cases of $\text{Pe}=1.5$ and $\text{Pe}=1.6$ are shown by the blue and orange curves, respectively.} \label{fig:skewshearGamma} \end{figure} \section{An explicit example for scalar intermittency} \label{sec:wind} In this section we study a special case of \eqref{eq:advectionDiffusion} which yields an exact formula valid at all times. It therefore provides a solid benchmark for the long time asymptotic analysis derived in the previous section. In \cite{bronski2007explicit}, the authors call the advection-diffusion equation \eqref{eq:advectionDiffusion} with $u(y)=1$ the `wind model'. They study the one-dimensional problem when $\xi(t)$ is the Gaussian white noise process. Here, we present the exact formula for the $N$-point correlation function $\mathbf{\Psi}_N$ in the channel domain problem for a general Gaussian process $\xi(t)$. The associated Green's function $G(x,y,x_{0},y_{0},t)$, that is, the solution of \eqref{eq:advectionDiffusion} with the initial condition $T(x,y,0)=\delta(x-x_{0})\delta(y-y_{0})$, can be obtained by separation of variables and the method of characteristics, \begin{equation} \begin{array}{rl} G(x,y,x_{0},y_{0},t)=&K(y,y_{0},t) \displaystyle\frac{1}{ \sqrt{4 \pi t}}\exp \Bigg( \displaystyle - \frac{(x-x_{0}-\text{Pe} \int\limits_0^t\mathrm{d}s \,\xi(s) )^{2}}{4t} \Bigg) ,\\ \end{array} \end{equation} where $K(y,y_{0},t)= 1+ 2 \sum\limits_{n=1}^{\infty} \cos (n\pi y) \cos (n \pi y_0) \exp (-n^2 \pi^2 t)$. The solution with a general initial condition $T_0(x,y)$ can be constructed from the Green's function via convolution, \begin{equation} \begin{array}{rl} T(x,y,t)= & \int\limits_{-\infty}^{+\infty} \mathrm{d}x_0 \int\limits_{0}^{1} \mathrm{d} y_{0} T_0 (x_{0},y_0) G(x,y,x_{0},y_{0},t). \\ \end{array} \end{equation} By the definition of $\mathbf{\Psi}_N$ and the Fourier transform, we have \begin{equation} \begin{array}{rl} &\mathbf{\Psi}_N (\mathbf{x}, \mathbf{y},t)=\\ & \int\limits_{\mathbb{R}^{N}}^{}\mathrm{d}\mathbf{x}_0 \int\limits_{[0, 1]^{N}}^{} \mathrm{d} \mathbf{y}_{0} \frac{1}{(2\pi)^N}\int\limits_{\mathbb{R}^{N}}^{} \mathrm{d} \mathbf{k}\,\exp \Big( \sum\limits_{j=1}^N -t k_j^2-\mathrm{i}k_j (x_{j}-x_{0j})\Big)\left\langle \exp \Big(\mathrm{i} \text{Pe} \int\limits_0^t\mathrm{d}s\, \xi(s) \sum\limits_{j=1}^Nk_j\Big) \right\rangle\\ &\times \prod\limits_{j=1}^NK(y_{j},y_{0j},t )T_{0}(x_{0j},y_{0j}) ,\\ \end{array} \end{equation} where $\mathbf{x}_{0}=\left(x_{01},x_{02},\cdots,x_{0N}\right)$, $\mathbf{y}_{0}=(y_{01},y_{02},\cdots,y_{0N})$. Since $\xi(t)$ is a Gaussian process, $\int\limits_0^t\mathrm{d} s\, \xi(s)$ is a normal random variable at any instant of time.
By utilizing the characteristic function of the normal random variable, we obtain \begin{equation} \begin{array}{rl} &\mathbf{\Psi}_N (\mathbf{x}, \mathbf{y},t)=\\ &\int\limits_{\mathbb{R}^{N}}^{}\mathrm{d}\mathbf{x}_0 \int\limits_{[0, 1]^{N}}^{} \mathrm{d} \mathbf{y}_{0} \frac{1}{(2\pi)^N}\int\limits_{\mathbb{R}^{N}}^{}\mathrm{d} \mathbf{k}\, \exp \Big( \sum\limits_{j=1}^N -t k_j^2-\mathrm{i}k_j (x_{j}-x_{0j})\Big) \exp\left( - \frac{1}{2} v(t) \Big(\text{Pe}\sum\limits_{j=1}^{N}k_j\Big)^{2} \right)\\ &\times \prod\limits_{j=1}^NK(y_{j},y_{0j},t )T_{0}(x_{0j},y_{0j}),\\ \end{array} \end{equation} where $v(t)$ is the variance of the stochastic process $\int\limits_0^t\mathrm{d} s\, \xi(s)$. Comparing this integral with the multivariate normal distribution, we have \begin{equation} \begin{array}{rl} \mathbf{\Psi}_N (\mathbf{x}, \mathbf{y},t)= &\int\limits_{\mathbb{R}^{N}}^{}\mathrm{d}\mathbf{x}_0 \int\limits_{[0, 1]^{N}}^{} \mathrm{d} \mathbf{y}_{0} \frac{\exp\left( -\frac{1}{2}(\mathbf{x}-\mathbf{x}_{0})\Lambda^{-1}(\mathbf{x}-\mathbf{x}_{0})^{\text{T}} \right)}{(2\pi)^\frac{N}{2}\sqrt{\det (\Lambda)} }\prod\limits_{j=1}^NK(y_{j},y_{0j},t )T_{0}(x_{0j},y_{0j}),\\ \end{array} \end{equation} where $\Lambda= 2tI+v(t) \text{Pe}^2\mathbf{e}^{\text{T}}\mathbf{e}$, $I$ is the identity matrix of size $N\times N$, and $\mathbf{e}$ is a $1\times N$ vector with $1$ in all coordinates. By the Sherman-Morrison formula \cite{sherman1950adjustment}, $\Lambda^{-1}=(2t)^{-1}\left( I- \frac{v(t)\text{Pe}^2\mathbf{e}^{\text{T}}\mathbf{e} }{2t+ N v(t)\text{Pe}^2} \right)$, and by the matrix determinant lemma, $\det (\Lambda)= (2t)^{N}\left( 1+ \frac{N v(t)\text{Pe}^2}{2t} \right)$. To compare with the $N$-th moment $\left\langle T^N(x,y,t) \right\rangle$, we choose the initial condition $T_0(x,y)=\delta(x)$. Hence, the solution is independent of $y$, \begin{equation} \begin{array}{rl}\label{eq:LinesourceWind} T(x,t)=\displaystyle\frac{1}{ \sqrt{4 \pi t}}\exp \Bigg( - \frac{(x-\text{Pe} \int\limits_0^t\mathrm{d}s \,\xi(s) )^{2}}{4t} \Bigg), \end{array} \end{equation} and the $N$-th moment is \begin{equation} \begin{array}{rl}\label{eq:NmomentWind} \left\langle T^{N}(x,t) \right\rangle=& \displaystyle\frac{1}{(4\pi t)^{\frac{N}{2}}} \frac{1}{\sqrt{1+\frac{N v(t)\text{Pe}^2}{2t} }}\exp\left( -\frac{N x^{2}}{4t}\left(1-\frac{N \text{Pe}^2 v (t)}{N \text{Pe}^2 v (t) +2 t}\right) \right). \\ \end{array} \end{equation} \begin{figure} \centering \includegraphics{Figures/skewWindPet.pdf} \hfill \caption[ ] {\textbf{Evolution of the random variable $T(0,0,t)$ for various P\'eclet numbers and time}. In both panels, the flow takes the form $\text{Pe}\,u(y)\xi(t)$, where $u(y)= 1$ and $\bar{u}=1$. In panel (a1), $\xi(t)$ is the Gaussian white noise process. In panel (a2), $\xi(t)$ is the stationary Ornstein-Uhlenbeck process with $\gamma=1$. } \label{fig:SkewnessPet} \end{figure} The long time asymptotic expansion of \eqref{eq:NmomentWind} is consistent with \eqref{eq:firstThreeMoments}. Figure \ref{fig:SkewnessPet} shows the skewness evolution of the scalar field at the point $(0,0)$ for various $\text{Pe}$.
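For the white noise case one has $v(t)=t$, and \eqref{eq:NmomentWind} can be evaluated directly. The following minimal Python sketch computes the skewness of $T(0,t)$ from the first three moments (the parameter values are for illustration only; the Ornstein-Uhlenbeck case requires only substituting the appropriate $v(t)$):
\begin{verbatim}
import numpy as np

def moment_wind(N, t, pe, v):
    """Exact N-th moment <T^N(0,t)> of the wind model at x = 0."""
    return (4 * np.pi * t) ** (-N / 2) / np.sqrt(1 + N * v * pe**2 / (2 * t))

def skewness_wind(t, pe, v):
    """Skewness of T(0,t) from the first three raw moments."""
    m1, m2, m3 = (moment_wind(N, t, pe, v) for N in (1, 2, 3))
    mu2 = m2 - m1**2
    mu3 = m3 - 3 * m1 * m2 + 2 * m1**3
    return mu3 / mu2**1.5

for t in (1.0, 10.0, 100.0):  # white noise: v(t) = t
    print(t, skewness_wind(t, pe=1.0, v=t))
\end{verbatim}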
To consider a multi-point statistic, as studied in the Monte-Carlo simulations of \cite{camassa2019symmetry}, we study the average of the scalar field over $x\in[-a,a]$, \begin{equation}\label{eq:WindStrip} \begin{array}{rl} M(a,t)= & \displaystyle\frac{1}{2a}\int\limits_{-a}^aT(x,t)\mathrm{d}x= \frac{1}{2a} \left(\text{erf}\Bigg(\frac{a+\text{Pe} \int\limits_0^{t}\mathrm{d} s\,\xi(s)}{2 \sqrt{t}}\Bigg)+\text{erf}\Bigg(\frac{a-\text{Pe} \int\limits_0^{t}\mathrm{d} s\,\xi(s)}{2 \sqrt{t}}\Bigg)\right),\\ \end{array} \end{equation} where $\text{erf}(z)= \frac{2}{\sqrt{\pi}} \int\limits_0^{z}\mathrm{d} t \,e^{-t^2}$ is the error function. As $a\rightarrow 0$, $M(a,t)$ converges to $T(0,t)$. By exchanging the order of integration and ensemble average, the $N$-th moment of $M(a,t)$ is \begin{equation}\label{eq:NmomentWindStrip} \begin{array}{rl} \left\langle M(a,t)^N \right\rangle= & \displaystyle\frac{1}{(2a)^{N}}\int\limits_{[-a,a]^N}^{}\mathrm{d} \mathbf{x}\left\langle \prod\limits_{j=1}^N T(x_{j},t) \right\rangle. \end{array} \end{equation} To verify the theoretical analysis, we employ the Direct Monte-Carlo (DMC) method proposed in \cite{camassa2019symmetry}. Panel (a2) in figure \ref{fig:wind_strength} shows the skewness computed by the theoretical approach and by the DMC approach; the consistency of the two demonstrates the validity of the theoretical analysis in this section. Panel (a1) in figure \ref{fig:wind_strength} depicts the PDF of $M(\frac{1}{10},1)$ obtained by the DMC method for different P\'{e}clet numbers. It shows that with increasing P\'{e}clet number the PDF changes from negatively skewed to positively skewed, consistent with the observations made from figures \ref{fig:SkewnessShearPeA} and \ref{fig:SkewStepPea}. Figure \ref{fig:SkewnessStripPet} shows the skewness evolution of $M(\frac{1}{10},t)$ computed by \eqref{eq:NmomentWindStrip} for various P\'eclet numbers and different types of temporal fluctuation. Panel (a1) in figure \ref{fig:SkewnessStripPet} shows the skewness evolution when $\xi (t)$ is the white noise process. The skewness is almost unchanged in time, which means the system reaches the long time asymptotic state in a very short time. Panel (a2) in the same figure shows the skewness evolution when $\xi (t)$ is the stationary Ornstein-Uhlenbeck process. The finite correlation time in the temporal fluctuation $\xi(t)$ introduces a noticeable transient before the long time asymptotic state is reached; this phenomenon weakens as the P\'eclet number increases. \begin{figure} \centering \includegraphics{Figures/superposed_peclet_time.pdf} \caption{\textbf{Evolution of the spatially averaged random variable defined in \eqref{eq:WindStrip} for various P\'eclet numbers and time}. In panel (a1), we superpose $3$ probability distributions of $M (\frac{1}{10},1)$ with $\text{Pe}=1$ (blue), $\text{Pe}=2$ (dark green) and $\text{Pe}=3$ (green). In panel (a2), we show the skewness of $M (\frac{1}{10},t)$ calculated through Direct Monte Carlo simulations (blue line) and by numerically integrating the formula \eqref{eq:NmomentWindStrip} (green circles). } \label{fig:wind_strength} \end{figure} \begin{figure} \centering \includegraphics{Figures/skewWindStripPet.pdf} \hfill \caption{ { \textbf{Evolution of the spatially averaged random variable $M( \frac{1}{10},t)$ defined in \eqref{eq:WindStrip} for various P\'eclet numbers and time}. In both panels, the flow takes the form $\text{Pe}\,u(y)\xi(t)$, where $u(y)= 1$ and $\bar{u}=1$.
In panel (a1), $\xi(t)$ is the Gaussian white noise process. In panel (a2), $\xi(t)$ is the stationary Ornstein-Uhlenbeck process with $\gamma=1$. } \label{fig:SkewnessStripPet}} \end{figure} \section{Numerical simulations} \label{sec:numerical} To verify the long-time asymptotic analysis, we need to sample the random scalar field $T(x,y,t)$ at a single point $(x,y)$. The backward Monte-Carlo method based on the Feynman-Kac formula is efficient in this case, since it accesses the single-point value of the scalar field without computing the global solution of \eqref{eq:advectionDiffusion}. For each realization of the stochastic process $\xi(t)$, the random field has the path integral representation $T(x,y,t)= \left\langle T_0(X_{t} (t),Y_{t} (t)) \right\rangle_{B_{1},B_{2}}$ by the Feynman-Kac formula, where $X_{t} (s),Y_{t} (s)$ solve the stochastic differential equation (SDE) \begin{equation} \begin{array}{rl} \mathrm{d}X_{t} (s)&=-\text{Pe}\,\xi (t-s)u(Y_{t} (s))\mathrm{d}s+ \sqrt{2}\, \mathrm{d}B_{1} (s)\\ \mathrm{d}Y_{t} (s)&=\sqrt{2}\,\mathrm{d}B_{2} (s) \\ X_{t} (0)=x& \quad Y_{t}(0)=y,\\ \end{array} \end{equation} where $B_1, B_2$ are independent Brownian motions. Notice that both the white noise process and the stationary Ornstein-Uhlenbeck process are stationary and temporally homogeneous, so the time-reversed process $\xi(t-s)$ has the same law as $\xi(s)$. This property allows us to reuse the solution $(X_{t_{i}} (s),Y_{t_{i}} (s))$ to compute $(X_{t_{i+1}} (s),Y_{t_{i+1}} (s))$, which substantially reduces the computational cost. We solve the SDE by an Euler scheme with time increment $\Delta s =0.01$, \begin{equation} \begin{array}{rl} X_{s_{i+1}}&=X_{s_{i}}-\text{Pe}\,\xi (s_{i})u(Y_{s_{i}})\Delta s+ \sqrt{2 \Delta s}\,n_{1,i}\\ Y_{s_{i+1}}&=Y_{s_{i}}+\sqrt{2\Delta s}\,n_{2,i}, \\ \end{array} \end{equation} where $n_{1,i},n_{2,i}$ are independent and identically distributed standard normal random variables produced by the Mersenne Twister uniform random number generator. We impose billiard-like reflection rules at the boundaries $y=0,1$. We typically generate $10^{6}$ realizations of $\xi(s)$. The realizations of the Gaussian white noise process are produced as $\xi(t_{i})= {n_{i}}/{\sqrt{\Delta s}}$ with $n_i$ standard normal; the Ornstein-Uhlenbeck process is simulated by the scheme in \cite{gillespie1996exact}. For each realization of $\xi (s)$, we use $10^6$ independent SDE solutions $(X_{t}(s), Y_{t} (s))$ to compute the path integral representation of $T(x,y,t)$. The simulations were performed on UNC's Longleaf computing cluster with 400 parallel computing jobs, each taking approximately 3 days. We simulate $T(0,0,t)$ with the initial condition $T_0(x,y)= {e^{-x^2}}/{\sqrt{\pi}}$ for different flows; the results are shown in figure \ref{fig:SkewnessTimeFK}. The blue curves are the numerical skewness evolution and the green horizontal lines are the skewness limits computed by \eqref{eq:skewnessLongTimeAsymptotic}; the consistency between them validates this formula. In panels (a1) and (a2), $\xi(t)$ is the white noise process and $u(y)$ is $y$ and $y+\frac{1}{2}$, respectively. One can see that a larger spatial mean of the flow leads to longer transient dynamics before the long time asymptotic state is reached. In panels (b1) and (b2), $\xi(t)$ is the stationary Ornstein-Uhlenbeck process with $u(y)=y$, and the damping parameter $\gamma$ is $5$ and $50$, respectively. Comparison of panels (b1) and (b2) shows that the longer correlation time in panel (b1) yields more dramatic transient dynamics in the skewness evolution.
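A minimal Python sketch of this backward scheme for a single realization of $\xi$ is given below (the flow $u(y)=y$, the number of sample paths, and the time horizon are illustrative placeholders; the production runs used the much larger ensembles and cluster setup described above):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def reflect(y):
    # billiard-like reflection into [0, 1]
    y = np.mod(y, 2.0)
    return np.where(y > 1.0, 2.0 - y, y)

def sample_T(x, y, t, xi, pe=1.0, u=lambda y: y,
             n_paths=10_000, ds=0.01):
    """Backward Monte-Carlo estimate of T(x,y,t) for one realization
    of xi sampled on the grid s_i = i * ds.  The initial data
    T_0(x,y) = exp(-x^2)/sqrt(pi) is hard-coded below."""
    X = np.full(n_paths, float(x))
    Y = np.full(n_paths, float(y))
    for i in range(int(t / ds)):
        X += (-pe * xi[i] * u(Y) * ds
              + np.sqrt(2 * ds) * rng.standard_normal(n_paths))
        Y = reflect(Y + np.sqrt(2 * ds) * rng.standard_normal(n_paths))
    return np.mean(np.exp(-X**2) / np.sqrt(np.pi))

t, ds = 1.0, 0.01
xi = rng.standard_normal(int(t / ds)) / np.sqrt(ds)  # white noise
print(sample_T(0.0, 0.0, t, xi, ds=ds))
\end{verbatim}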
Comparing panels (a1) and (b2), we see the convergence of the Ornstein-Uhlenbeck case to the white noise case when the correlation time $\gamma^{-1}$ is small. \begin{figure}[tbp] \centering \includegraphics[width=1\linewidth]{Figures/skewgamma_all.pdf} \hfill \caption{\textbf{Skewness evolution for random shear flows fluctuating with Gaussian white noise and Ornstein-Uhlenbeck process statistics.} {Here, we provide the skewness evolution for random shear flows with different spatial means and different fluctuation statistics obtained from Monte Carlo simulations, along with the theoretical long time asymptotic predictions of the skewness \eqref{eq:skewnessLongTimeAsymptotic}. In panels (a1)-(a2) we provide the skewness evolution and corresponding long time asymptotics for flows with Gaussian white noise fluctuations. In panels (b1)-(b2), we provide the skewness evolution and its long time asymptotics for the random shear flow $u(y)=y$ with fluctuations that have Ornstein-Uhlenbeck statistics; the damping parameters of the Ornstein-Uhlenbeck processes are $\gamma=5$ and $\gamma=50$, respectively. In all panels, Monte Carlo simulation results are shown as blue curves and the theoretical predictions of the long time limits of the skewness are shown as green horizontal lines.} \label{fig:SkewnessTimeFK}} \end{figure} \section{Conclusion} \label{sec:discuss} We have demonstrated analytically and numerically that the single point statistics, in particular the \emph{skewness}, of a passive scalar advected by a random shear flow with deterministic initial data have asymmetries of opposite sign at long times depending on the presence or absence of \emph{impermeable boundaries}. We have investigated two types of flow temporal fluctuations, modeled by Gaussian white noise and stationary Ornstein-Uhlenbeck processes, respectively. We have shown the convergence of the Ornstein-Uhlenbeck case to its white noise counterpart in the limit $\gamma\rightarrow \infty$ of the OU damping parameter, which generalizes the conclusion of \cite{resnick1996dynamical} for free space to the confined channel domain problem. Importantly, we observe that the OU damping parameter $\gamma$ plays a more significant role in channel domains than in the free-space problem: the first three moments of the scalar distribution at infinite time depend on the correlation time $\gamma^{-1}$ in the channel domain, in strong contrast to the result of Vanden-Eijnden \cite{vanden2001non} in free space, where the long time PDF is independent of $\gamma$. We have presented detailed discussions of three different shear flows. All of them show the transition of the skewness from negative to positive as either the P\'eclet number or $\bar{u}$ increases, which makes rigorous and generalizes the observation drawn from the simulation results in \cite{camassa2019symmetry}. To provide a benchmark for the theoretical analysis, we have generalized the wind model studied in \cite{bronski2007explicit} and derived the exact formula for the $N$-point correlation function for the flow with no spatial dependence and Gaussian temporal fluctuations. The long time asymptotic expansion of this formula is consistent with our theory for general shear flows. We have presented numerical studies that verify the validity of our theoretical results.
We have performed Direct Monte Carlo simulations for the wind model and observed that the P\'eclet number can adjust the time at which the skewness of the distribution changes sign. Due to the lack of an exact solution for general shear flows, we implemented backward Monte-Carlo simulations to verify the long time asymptotic results we derived. We confirmed that as the damping parameter $\gamma$ increases the stationary Ornstein-Uhlenbeck case converges to the white noise case, and found that the transient for the skewness of the passive scalar's PDF to reach its long time asymptotic state lasts longer as the damping parameter decreases. Future work will include an experimental campaign with the associated theoretical analysis. Our recent study \cite{ding2020enhanced} regarding the enhanced diffusion \cite{taylor1953dispersion} and the third spatial Aris moment \cite{aris1956dispersion} induced by a periodically moving wall led to the development of an experimental framework for the model explored in this paper. The computer-controlled robotic arm we developed for the periodic study can be applied to the case of a randomly moving wall, such as one driven by an OU process $\xi(t)$, with suitable parameters for the fluid and the channel. The induced flow in the channel can be modeled by $y\xi(t)$; hence, the tracers in the fluid satisfy the advection-diffusion equation \eqref{eq:advectionDiffusion}, and the symmetry properties of the tracer's PDF can be predicted by the theory developed here. Perhaps even more interesting will be cases in which the physical shear flow does not decompose into a product of a function of space and a function of time, as happens with the general nonlinear Ferry wave solutions at finite viscosities. More involved analysis will clearly be needed to study these interesting configurations. Lastly, we discuss interesting issues associated with a vanishing spatial mean of the flow. Such a case could be experimentally realized by having two walls execute equal but opposite parallel motions, or by putting the observer in an appropriate frame of reference. The asymptotic analysis strategy we presented does not technically fail if the spatial average of the flow in the physical domain vanishes, namely $\bar{u}= \int_{0}^{1}\mathrm{d} y\, u(y) =0$: the distribution is expected to be symmetric at long times in this case, consistent with our asymptotics, which show that the third moment vanishes at long times with a zero spatial mean. However, the case $\bar{u}=0$ requires considerable additional analysis to determine how the long time PDF relaxes to a symmetric state. Indeed, it is easy to check that the coefficient of $t^{-1}$ in equation \eqref{eq:variaceLongTimeAsymptotic} becomes zero, as do the coefficients of $t^{-\frac{3}{2}}$ and $t^{-\frac{5}{2}}$ in the centered third-moment expansion. This means that higher-order terms in \eqref{eq:firstThreeMoments} are needed for the analysis of the skewness in the case $\bar{u}=0$. The thesis \cite{Kilic} reports preliminary results showing that the point statistics induced by some flows with $\bar{u}=0$ behave distinctly from the case $\bar{u}\neq 0$. More detailed analysis in this direction has been completed and will be reported separately. \section{Acknowledgements} We acknowledge funding from NSF Grant Nos. DMS-1517879 and DMS-1910824, and ONR Grant No. N00014-18-1-2490. \bibliographystyle{elsarticle-harv}
\section{Introduction} Type 1 diabetes (T1D) is a chronic disease affecting 20-40 million people worldwide \citep{you_type_2016}, and its incidence is increasing \citep{tuomilehto_emerging_2013}. People with T1D cannot produce insulin, a hormone that controls blood glucose levels, and must monitor their blood glucose levels and manually administer insulin doses as needed.
Administering too little insulin can lead to hyperglycemia (high blood glucose levels), which results in chronic health complications \citep{control_resource_1995}, while administering too much insulin results in hypoglycemia (low blood glucose levels), leading to coma or death. Getting the correct dose requires careful measurement of glucose levels and carbohydrate intake, resulting in at least 15-17 data points a day. When using a continuous glucose monitor (CGM), this can increase to over 300 data points, or a blood glucose reading every 5 minutes \citep{coffen_magnitude_2009}. CGMs with an insulin pump, a device that delivers insulin, can be used with a closed-loop controller as an `artificial pancreas' (AP). Though the technology behind CGMs and insulin pumps has advanced, there remains significant room for improvement when it comes to the control algorithms \citep{bothe_use_2013, pinsker_randomized_2016}. Current hybrid closed-loop approaches require accurate meal announcements to maintain glucose control. In this work, we investigate deep reinforcement learning (RL) for blood glucose control. RL is a promising solution, as it is well-suited to learning complex behavior and readily adapts to changing domains \citep{clavera2018learning}. We hypothesize that deep RL, the combination of RL with a deep neural network, will be able to accurately infer latent meals to control insulin. Furthermore, as RL is a learning-based approach, we hypothesize that RL will adapt to predictable meal schedules better than baseline approaches. The fact that RL is learning-based means it requires data to work effectively. Unlike many other health settings, there are credible simulators for blood glucose management \citep{visentin_university_2014}. Having a simulator alleviates many concerns of applying RL to health problems \citep{gottesman2018evaluating, gottesman_guidelines_2019}. However, that does not mean RL for blood glucose control is straightforward, and, in this paper, we identify and address several challenges. To the best of our knowledge, we present the first deep RL approach that achieves human-level performance in controlling blood glucose without requiring meal announcements. \subsection*{Generalizable Insights about Machine Learning in the Context of Healthcare} Applying deep RL to blood glucose management, we encountered challenges broadly relevant for RL in healthcare. As such, we believe our solutions and insights, outlined below, are broadly relevant as well. \begin{itemize} \item The range of insulin and carbohydrate requirements across patients makes it difficult to find a single action space that balances the needs of rapid insulin administration and safety. Indeed, many health problems involve changes in action distributions across patients (\textit{e.g.} in anesthesia dosing \citep{bouillon_size}). To solve this problem, we present a robust patient-specific action space that naturally encourages safer policies. \item We found several pitfalls in evaluating our proposed approach that led to unrealistic performance estimates. To address this issue, we used validation data to perform careful model selection, and used extensive test data to evaluate the quality of our models. In RL, it is typical to report performance on the final trained model (without model selection) over a handful of rollouts. Our experiments demonstrate the danger of this approach. \item Deep RL has been shown to be unstable \citep{rlblogpost, henderson_deep_2018}, often achieving poor worst-case performance. 
This is unacceptable for safety-critical tasks, such as those in healthcare. We found that a combination of simple and widely applicable approaches stabilized performance. In particular, we used a safety-augmented reward function, realistic randomness in training data, and random restarts to train models that behaved safely over thousands of days of evaluation. \item Finally, unlike game settings where one has the ability to learn from hundreds of thousands of hours of interaction, any patient-specific model must be able to achieve strong performance using a limited amount of data. We show that a simple transfer learning approach can be remarkably sample efficient and can even surpass the performance of models trained from scratch. \end{itemize} \section{Background and Related Work} This work develops and applies techniques in reinforcement learning to the problem of blood glucose management. To frame this work, we first provide a brief introduction to RL, both in general and as applied to problems in healthcare. We then discuss how RL and other approaches have been used for blood glucose control and present an overview of blood glucose simulation. \subsection{Reinforcement Learning}\label{sec:rl} RL is an approach to optimizing sequential decision making in an environment, which is typically assumed to follow a Markov Decision Process (MDP). An MDP is characterized by a 5-tuple $(S, A, P, R, \gamma)$, where $s \in S$ are the states of the environment, $a \in A$ are actions that can be taken in the environment, the transition function $P: (s, a) \rightarrow s'$ defines the dynamics of the environment, the reward function $R: (s, a) \rightarrow r \in \mathbb{R}$ defines the desirability of state-action pairs, and the discount factor $\gamma \in [0, 1]$ determines the tradeoff between the value of immediate and delayed rewards. The goal in RL is to learn a policy $\pi: s \rightarrow a$, or function mapping states to actions, that maximizes the expected cumulative reward, or: \begin{equation} \arg\max_{\pi \in \Pi} \sum_{t=1}^{\infty} E_{s_t \sim P(s_{t-1}, a_{t-1})}[\gamma^t R(s_t, \pi(s_t))], \end{equation} where $\Pi$ is the space of possible policies and $s_0 \in S$ is the starting state. The state value function, $V(s)$, is the expected cumulative reward where $s_0 = s$. The state-action value function $Q(s,a) = R(s,a) + E_{s' \sim P(s, a)}[\gamma V(s')]$ extends the notion of value to state-action pairs. \subsection{Reinforcement Learning in Healthcare} In recent years, researchers have started to explore RL in healthcare. Examples include matching patients to treatment in the management of sepsis \citep{weng_representation_2017, komorowski_artificial_2018} and mechanical ventilation \citep{prasad_reinforcement_2017}. In addition, RL has been explored to provide contextual suggestions for behavioral modifications \citep{klasnja_efficacy_2019}. Despite its successes, RL has yet to be fully explored as a solution for a closed-loop AP system \citep{bothe_use_2013}. \subsection{Algorithms for Blood Glucose Control} Among recent commercial AP products, proportional-integral-derivative (PID) control is the most common backbone \citep{trevitt_artificial_2015}. The simplicity of PID controllers makes them easy to use, and in practice they achieve strong results \citep{steil2013algorithms}. The Medtronic Hybrid Closed-Loop system, one of the few commercially available, is built on a PID controller \citep{garg_glucose_2017, ruiz_effect_2012}.
A hybrid closed-loop controller adjusts baseline insulin rates but requires human intervention to control for the effect of meals. The main weakness of PID controllers is that they are purely reactive. As a result, they often cannot react fast enough to meals, and thus rely on meal announcements \citep{garg_glucose_2017}. Additionally, without safety modifications, PID controllers can deliver too much insulin, triggering hypoglycemia \citep{ruiz_effect_2012}. In contrast, we hypothesize that an RL approach will be able to leverage patterns associated with meal times, resulting in more responsive and safer policies. Previous works have examined RL for different aspects of blood glucose control. See \cite{tejedor_reinforcement_2020} for a recent survey. Many of these works investigated the use of RL to adapt existing insulin treatment regimens to learn a `human-in-the-loop' policy \citep{ngo_reinforcement-learning_2018, oroojeni_mohammad_javad_reinforcement_2015, sun_dual_2018}. This contrasts with our setting, where we aim to learn a fully closed-loop policy. Like our work, \cite{daskalaki_preliminary_2010} and \cite{de_paula_-line_2015} focus on the task of closed-loop glucose control. \cite{daskalaki_preliminary_2010} use direct future prediction to aid PID-style control, substituting the problem of RL with prediction. \cite{de_paula_-line_2015} use a policy-iteration framework with Gaussian process approximation and Bayesian active learning. While they tackle a similar problem, these works use a simple simulator and a fully deterministic meal routine for training and testing. In our experiments, we use an FDA-approved glucose simulator and a realistic non-deterministic meal schedule, significantly increasing the challenge. \subsection{Glucose Models and Simulation} Models of the glucoregulatory system are important for the development and testing of an AP \citep{cobelli_integrated_1982}. In our experiments, we use the UVA/Padova model \citep{kovatchev_silico_2009}. This simulator models the glucoregulatory system as a nonlinear multi-compartment system, where glucose is generated in the liver, absorbed through the gut, and controlled by external insulin. A more detailed explanation can be found in \cite{kovatchev_silico_2009}. For reproducibility, we use an open-source version of the simulator that comes with 30 virtual patients \citep{xie_simglucose_2018}. The parameter distribution of the patient population is determined by age; the simulator comes with 10 children, 10 adolescents, and 10 adults \citep{kovatchev_silico_2009}. We combine the simulator with a non-deterministic meal schedule (\textbf{Appendix \ref{app:meal}}) to realistically simulate patient behavior. \section{Methods} We present a deep RL approach well suited for blood glucose control. In framing our problem, we pay special attention to the concerns of partial observability and safety. The issue of partial observability motivates us to use a maximum entropy control algorithm, soft actor-critic, combined with a recurrent neural network. Safety concerns inspire many aspects of our experimental setup, including our choice of action space, reward function, and evaluation metrics. We also introduce several strong baselines, both with and without meal announcements, to which we compare. \subsection{Problem Setup}\label{ssec:setup} We frame the problem of closed-loop blood glucose control as a partially-observable Markov decision process (POMDP) consisting of the 7-tuple $(S^*, O, S, A, P, R, \gamma)$.
A POMDP generalizes the MDP setting described in \textbf{Section \ref{sec:rl}} by assuming we do not have access to the true environment states, here denoted $s^* \in S^*$, but instead observe noisy states $s \in S$ according to the (potentially stochastic) observation function $O: s^* \rightarrow s$. This setting applies given the noise inherent in CGM sensors \citep{vettoretti2019modeling} and our assumption of unobserved meals. In our setting, the true states $\mathbf{s}^*_t \in S^*$ are the 13-dimensional simulator states, as described in \cite{kovatchev_silico_2009}. The stochastic observation function $O: \mathbf{s}^*_t \rightarrow b_t, i_t$ maps from the simulator state to the CGM observation $b_t$ and insulin $i_t$ administered. To provide temporal context, we augment our observed state space $\mathbf{s}_t \in S$ to include the previous 4 hours of CGM $\mathbf{b}^t$ and insulin data $\mathbf{i}^t$ at 5-minute resolution: $\mathbf{s}_t = [\mathbf{b}^t, \mathbf{i}^t]$ where: $$ \mathbf{b}^t = [b_{t-47}, b_{t-46}, \dots, b_{t}], \mathbf{i}^t = [i_{t-47}, i_{t-46}, \dots, i_{t}], $$ $b_{t} \in \mathbb{N}_{40:400}$, $i_{t} \in \mathbb{R}_{+}$, and $t \in \mathbb{N}_{1:288}$ represents a time index for a day at 5-minute resolution. Note that in our augmented formulation, the observation function no longer directly maps to observed states, as observed states incorporate significant amounts of historical data. We chose a 4-hour window, as we empirically found it led to strong performance. We use a time resolution of 5 minutes to mimic the sampling frequency of many common CGMs. Actions $a_t \in \mathbb{R}_{\ge 0}$ are real positive numbers, denoting the size of the insulin bolus in medication units. The transition function $P$, our simulator, consists of two elements: i) $M: t \rightarrow c_t$ is the meal schedule, and is defined in \textbf{Appendix \ref{app:meal}}, and ii) $G: (a_t, c_t, \mathbf{s}^*_t) \rightarrow (b_{t+1}, i_{t+1}, \mathbf{s}^*_{t+1})$, where $c_t \in \mathbb{R}_{\ge 0}$ is the amount of carbohydrates input at time $t$ and $G$ is the UVA/Padova simulator \citep{kovatchev_silico_2009}. Note that our meal schedule is patient-specific, and includes randomness in the daily number, size, and timing of meals. The reward function $R: (\mathbf{s}_t, a_t) \rightarrow \mathbb{R}$ is defined as the negative risk $-risk(b_t)$, where $risk$ is the Magni risk function: \begin{equation} risk(b) = 10\left(c_0 \left(\log(b)^{c_1}-c_2\right)\right)^2, \end{equation} with $c_0=3.5506$, $c_1=0.8353$, and $c_2=3.7932$ \citep{magni_model_2007}. These values are set such that $risk(70)=risk(280)=25$, see \textbf{Figure \ref{fig:risk}}. Finally, we set $\gamma=0.99$ for our experiments, a value we determined empirically on validation data in combination with the early termination penalty. Considered in isolation, our reward could lead to dangerous behavior. As it is always negative, cumulative reward is maximized by ending the episode as quickly as possible, which occurs when glucose reaches unsafe levels. To avoid this, we add a termination penalty of $-10^{5}$ to trajectories that enter dangerous blood glucose regimes (blood glucose levels less than 10 or more than 1,000 mg/dL), countering the advantage of early termination. We investigated other reward functions, such as time in range or distance from a target blood glucose value, but found this reward worked best. It led to control schemes with less hypoglycemia, as low blood glucose is penalized more heavily than high glucose.
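As a sanity check on these constants, a minimal Python sketch of the risk computation (natural logarithm, constants as above) confirms that glucose values of 70 and 280 mg/dL map to approximately equal risk:
\begin{verbatim}
import numpy as np

C0, C1, C2 = 3.5506, 0.8353, 3.7932

def magni_risk(b):
    """Magni risk for blood glucose b in mg/dL."""
    return 10.0 * (C0 * (np.log(b) ** C1 - C2)) ** 2

# risk(70) and risk(280) are both approximately 25,
# while values near the euglycemic target are low risk
print(magni_risk(70.0), magni_risk(280.0), magni_risk(120.0))
\end{verbatim}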
Low glucose can occur quickly when large amounts of insulin are given without an accompanying meal. Given the lack of meal announcements and sensor noise in our setting, avoiding hypoglycemia was a significant challenge. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{figures/magni_risk_post.pdf} \caption{The risk function proposed in \citep{magni_model_2007}: the mapping between blood glucose values (in mg/dL, x-axis) and risk values (y-axis). Blood glucose levels corresponding to hypoglycemia are shown in the blue shaded region; the glucose range corresponding to hyperglycemia is shown in the red shaded region. This function identifies low blood glucose values as higher risk than high blood glucose values, which is sensible given how rapidly hypoglycemia becomes dangerous.} \label{fig:risk} \end{figure} \subsection{Soft Actor-Critic} We chose to use the soft actor-critic (SAC) algorithm to learn glucose control policies. We initially experimented with a Deep Q-Network approach \citep{mnih_human-level_2015}. However, choosing a discretized action space (as is required by Q-learning) that accounted for the range of insulin values across a day and allowed exploration proved impractical, as large doses of inappropriately timed insulin can be dangerous. Among continuous control algorithms, we selected SAC as it has been shown to be sample efficient and competitive \citep{haarnoja_soft_2018}. Additionally, maximum entropy policies like the ones produced by SAC can do well in partially observed settings like our own \citep{eysenbach2019if}. SAC produces a stochastic policy $\pi: \mathbf{s}_t \rightarrow p(a)$ $\forall a \in A$, which maps a state to a distribution over possible actions. Under SAC, the policy (or actor) is represented by a neural network with parameters $\phi$. Our network generates outputs $\mu$ and $\log(\sigma)$, which parameterize a normal distribution $\mathcal{N}(\mu, \sigma)$. The actions are distributed according to a TanhNormal distribution, or $\tanh(z), z \sim \mathcal{N}(\mu, \sigma)$. $\pi_\phi$ is trained to maximize the maximum entropy RL objective function: \vspace{-1pt} \begin{equation}\label{eqn:policy_return} J(\pi) = \sum_{t=0}^T \mathbb{E}_{(\mathbf{s}_t, a_t) \sim P(\mathbf{s}_{t-1}, \pi_\phi(\mathbf{s}_{t-1}))} [R(\mathbf{s}_t, a_t) + \alpha H(\pi_\phi(\cdot|\mathbf{s}_t))], \end{equation} \vspace{-1pt} where entropy, $H$, is added to the expected cumulative reward to improve exploration and robustness \citep{haarnoja_soft_2018}. Intuitively, the return in \textbf{Equation \ref{eqn:policy_return}} encourages a policy that can obtain a high reward under a variety of potential actions. The temperature hyperparameter $\alpha$ controls the tradeoff between reward and entropy. In our work, we set this using automatic temperature tuning \citep{haarnoja_soft_2018a}.
\textbf{Equation \ref{eqn:policy_return}} is optimized by minimizing the KL divergence between the action distribution and the distribution induced by the state-action values: \vspace{-1pt} \begin{equation}\label{eqn:pi} J_{\pi}(\phi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\mathrm{D}_{\mathrm{KL}}\left(\pi_{\phi}\left(\cdot | \mathbf{s}_{t}\right) \| \frac{\exp \left(Q_{\theta}\left(\mathbf{s}_{t}, \cdot\right)\right)}{Z_{\theta}\left(\mathbf{s}_{t}\right)}\right)\right], \end{equation} \vspace{-1pt} where $\mathcal{D}$ is a replay buffer containing previously seen $(\mathbf{s}_t, \mathbf{a}_t, r_t, \mathbf{s}_{t+1})$ tuples, $Z_\theta$ is a partition function, and $Q_\theta$ is the state-action value function parameterized by a neural network (also called a critic). This means that our learned policy engages in probability matching, selecting actions with probability proportional to their exponentiated state-action values. This requires an accurate value function. To achieve this, $Q_\theta$ is trained by minimizing the temporal difference loss: \begin{gather}\label{eqn:q} J_{Q}(\theta)=\mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \mathcal{D}}\left[\frac{1}{2}\left(Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\hat{Q}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)\right)^{2}\right], \\ \hat{Q}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)=r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\gamma \mathbb{E}_{\mathbf{s}_{t+1} \sim p}\left[V_{\overline{\psi}}\left(\mathbf{s}_{t+1}\right)\right]. \end{gather} \vspace{-1pt} $V_\psi$ is the soft value function parameterized by a third neural network, and $V_{\overline{\psi}}$ is the running exponential average of the weights of $V_\psi$ over training. This is a continuous variant of the hard target network replication in \citep{mnih_human-level_2015}. $V_\psi$ is trained to minimize: \begin{equation}\label{eqn:v} J_{V}(\psi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\frac{1}{2}\left(V_{\psi}\left(\mathbf{s}_{t}\right)-\mathbb{E}_{\mathbf{a}_{t} \sim \pi_{\phi}}\left[Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\log \pi_{\phi}\left(\mathbf{a}_{t} | \mathbf{s}_{t}\right)\right]\right)^{2}\right]. \end{equation} In summary, we learn a policy, parameterized by a neural network $\pi_\phi$, that maps states to a probability distribution over actions. Optimizing this network (\textbf{Equation \ref{eqn:pi}}) requires an estimate of the soft state-action value function; we learn such an estimate $Q_\theta$ (\textbf{Equation \ref{eqn:q}}) together with a soft value function $V_\psi$ (\textbf{Equation \ref{eqn:v}}). Additional details of this approach, including the gradient calculations, are given in \citep{haarnoja_soft_2018}. In keeping with previous work, when testing our policy we remove the sampling component, instead selecting the mean action $\tanh(\mu)$. We replace the MSE temporal difference loss in \textbf{Equation \ref{eqn:q}} with the Huber loss, as we found this improved convergence. \subsubsection{Recurrent Architecture}\label{sec:architecture} Our network $\pi_\phi$ takes as input the past 4 hours of CGM and insulin data, requiring no human input (\textit{i.e.}, no meal announcements). To approximate the true state $\mathbf{s}^*_t$ from the augmented state $\mathbf{s}_t$, we parameterize $Q_\theta$, $V_\psi$, and $\pi_\phi$ using gated recurrent unit (GRU) networks \citep{cho_learning_2014}, as GRUs have been successfully used for glucose modeling previously \citep{fox_deep_2018, zhu_deep_nodate}.
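To make the actor architecture concrete, a minimal PyTorch sketch of a GRU-based SAC policy head is shown below (the layer sizes follow the description in this section, while the class and variable names are illustrative choices of ours, not the authors' code; negative actions are interpreted as delivering no insulin, as described next):
\begin{verbatim}
import torch
import torch.nn as nn

class GRUPolicy(nn.Module):
    """GRU actor: maps a (batch, seq_len, 2) history of CGM and
    insulin values to a squashed Gaussian over the insulin action."""
    def __init__(self, input_dim=2, hidden_dim=128, num_layers=2):
        super().__init__()
        self.gru = nn.GRU(input_dim, hidden_dim, num_layers,
                          batch_first=True)
        self.mu_head = nn.Linear(hidden_dim, 1)
        self.log_sigma_head = nn.Linear(hidden_dim, 1)

    def forward(self, history):
        out, _ = self.gru(history)
        h = out[:, -1]  # last hidden state summarizes the history
        mu = self.mu_head(h)
        log_sigma = self.log_sigma_head(h).clamp(-20, 2)
        return mu, log_sigma

    def sample(self, history):
        mu, log_sigma = self.forward(history)
        z = mu + log_sigma.exp() * torch.randn_like(mu)
        return torch.tanh(z)  # squash into (-1, 1)

policy = GRUPolicy()
history = torch.randn(1, 48, 2)  # 4 hours at 5-minute resolution
print(policy.sample(history))
\end{verbatim}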
\subsubsection{Patient-Specific Action Space} After the network output layer, actions are squashed using a \textit{tanh} function. Note that this results in half the action space corresponding to negative values, which we interpret as administering no insulin. This encourages sparse insulin utilization and makes it easier for the network to learn to safely control baseline glucose levels. To ensure that the maximum amount of insulin delivered over a 5-minute interval is roughly equal to a normal meal bolus for each individual, we use the average ratio of basal to bolus insulin in a day \citep{kuroda_basal_2011} to calculate a scale parameter for the action space, $\omega_{b}=43.2\,bas$, where $bas$ is the default patient-specific basal insulin rate provided by \cite{xie_simglucose_2018}. \subsection{Efficient Policy Transfer} One of the main disadvantages of deep RL is its poor sample efficiency. Thus, we explored transfer learning techniques to efficiently transfer existing models to new patients. We refer to our method trained from scratch as RL-Scratch, and the transfer approach as RL-Trans. For RL-Trans, we initialize $Q_\theta, V_\psi$ and $\pi_\phi$ for each class of patients (children, adolescents, and adults) using fully trained networks from one patient of that source population (see \textbf{Appendix \ref{app:patient}}). We then fine-tune these networks on data collected from the target patient. When fine-tuning, we modify the reward function by removing the termination penalty and adding a constant positive value (100) to all rewards. This avoids the negative reward issue discussed in \textbf{Section \ref{ssec:setup}}. Removing the termination penalty increased the consistency of returns over training, allowing for a more consistent policy gradient. The additional safety provided by a termination penalty is not required, as we begin with policies that are already stable. We found this simple approach for training patient-specific policies attains good performance while using far less patient-specific data. \subsection{Baselines} We examine three baseline methods for control: basal-bolus (\textbf{BB}), \textbf{PID} control, and PID with meal announcements (\textbf{PID-MA}). BB reflects an idealized human-in-the-loop control strategy, and PID reflects a common closed-loop AP algorithm. PID with meal announcements is based on current AP technology, and is a combination of the two, requiring regular human intervention. Finally, we consider an ``oracle'' approach that has access to the true state $s^*_t$. This decouples the task of learning a policy from state inference, serving as a pseudo-upper bound on performance for our proposed approach. \subsubsection{Basal-Bolus Baseline} This baseline is designed to mimic human control and is an ideal depiction of how an individual with T1D controls their blood glucose. In this setting, we modify the state representation $\mathbf{s}_t$ to include a carbohydrate signal and a cooldown signal (explained below), and remove the historical data: $\mathbf{s}_t = [b_t, i_t, c_t, cooldown]$. Note that the inclusion of a carbohydrate signal, or meal announcement, places the burden of providing accurate and timely estimates of meals on the person with diabetes. Each virtual patient in the simulator comes with the parameters necessary to calculate a reasonable basal insulin rate $bas$ (the same value used in our action space definition), correction factor $CF$, and carbohydrate ratio $CR$.
These three parameters, together with a glucose target $b_g$, define a clinician-recommended policy $\pi(s_t) = bas + (c_t > 0) * (\frac{c_t}{CR} + cooldown * \frac{b_t - b_g}{CF})$, where $cooldown$ is 1 if there have been no meals in the past three hours and 0 otherwise. The cooldown ensures that each meal is only corrected for once. Appropriate settings for these parameters can be estimated by endocrinologists using previous glucose and insulin information \citep{walsh_guidelines_2011}. The parameters for our virtual patient population, which are derived from a distribution validated by clinical trials \citep{kovatchev_silico_2009}, are given in \textbf{Appendix \ref{app:bb}}. \subsubsection{PID Baseline} PID controllers are a common and robust closed-loop baseline \citep{steil2013algorithms}. A PID controller operates by setting the control variable, $a_t$, to the weighted combination of three terms, $a_t = k_P P(b_t) + k_I I(b_t) + k_D D(b_t)$, such that the process variable $b_t$ ($t$ is the time index) remains close to a specified setpoint $b_g$. The terms are calculated as follows: i) the proportional term $P(b_t)=\max(0, b_t - b_g)$ increases the control variable proportionally to the distance from the setpoint, ii) the integral term $I(b_t) = \sum_{j=0}^t (b_j - b_g)$ corrects long-term deviations from the setpoint, and iii) the derivative term $D(b_t) = |b_{t} - b_{t-1}|$ uses the rate of change as a basic estimate of the future. The setpoint and the weights (also called gains) $k_P, k_I, k_D$ are hyperparameters. To compare against the strongest PID controller possible, we tuned these hyperparameters per patient using multiple iterations of grid search with exponential refinement, minimizing risk. Our final settings are presented in \textbf{Appendix \ref{app:pid_param}}. \subsubsection{PID with Meal Announcements} This baseline, which is similar to available hybrid closed-loop AP systems \citep{garg_glucose_2017, ruiz_effect_2012}, combines BB and PID into a control algorithm we call PID-MA. During meals, insulin boluses are calculated and applied as in the BB approach. The basal rate, instead of being fixed, is controlled by a PID algorithm, allowing for adaptation between meals. As above, we tune the gain parameters of the PID algorithm using sequential grid search to minimize risk. \subsubsection{Oracle Architecture} A deep RL approach to learning AP algorithms requires that the representation learned by the network contain sufficient information to control the system. As we are working with a simulator, we can explore the difficulty of this task in isolation by replacing the observed state $\mathbf{s}_t$ with the ground-truth state $\mathbf{s}^*_t$. Though unavailable in real-world settings, this representation decouples the problem of learning a policy from that of inferring the state. Here, $Q_\theta$, $V_\psi$, and $\pi_\phi$ are fully connected with two hidden layers, each with 256 units. The network uses ReLU nonlinearities and BatchNorm \citep{ioffe2015batch}. \subsection{Experimental Setup \& Evaluation} To measure the utility of deep RL for the task of blood glucose control, we trained and tested our policies on data with different random seeds across 30 different simulated individuals. \paragraph{Training and Hyperparameters.} We trained our models separately for each patient: from scratch for 300 epochs for RL-Scratch, and fine-tuned for 50 epochs for RL-Trans, with batch size 256 and an epoch length of 20 days.
We used an experience replay buffer of size $10^6$ and a discount factor of 0.99. We found that extensive training from scratch was required to obtain consistent performance across test runs. We also found that too small an epoch length could lead to dangerous control policies. We optimized the parameters of $Q_\theta$, $V_\psi$ and $\pi_\phi$ using Adam with a learning rate of $3\times10^{-4}$. All deep networks were composed of two layers of GRU cells with a hidden state size of 128, followed by a fully-connected output layer. All network hyperparameters, including the number and size of layers, were optimized on training seeds on a subset of the simulated patients for computational efficiency. Our networks were initialized using PyTorch defaults. \paragraph{Evaluation.} We measured the performance of $\pi_\phi$ on 10 days of validation data after each training epoch. After training, we evaluated on test data using the model parameters from the best epoch as determined by the validation data. While this form of model selection is not typical for RL, we found it led to significant changes in performance (see \textbf{Section \ref{sec:val_selection}}). Our model selection procedure first filters out runs that could not control blood glucose within safe levels over the validation run (glucose between 30-1000 mg/dL), then selects the epoch that achieved the lowest risk. We tested each patient-specific model on 1000 days of test data, broken into 100 independent 10-day rollouts. We trained and evaluated each approach 3 times, resulting in 3000 days of evaluation per method per person. We evaluated approaches using i) risk, the average Magni risk calculated over the 10-day test rollout, ii) \% time spent euglycemic (blood glucose levels between 70-180 mg/dL), iii) \% time hypo/hyperglycemic (blood glucose lower than 70 mg/dL or higher than 180 mg/dL, respectively), and iv) \% of rollouts that resulted in a catastrophic failure, which we define as a run that achieves a minimal blood glucose level below 5 mg/dL (at which point recovery becomes highly unlikely). Note that while catastrophic failures are a major concern, our simulation process does not consider consuming carbohydrates in reaction to low blood glucose levels. This is a common strategy to avoid dangerous hypoglycemia in real life, and thus catastrophic failures, while serious, are manageable. The random seeds controlling noise, meals, and all other forms of randomness were different between training, validation, and test runs. We test the statistical significance of differences between methods using Mood's median test for all metrics except the catastrophic failure rate, for which we use a binomial test. \section{Experiments and Results} Our experiments are divided into two broad categories: i) experiments showing the benefits of deep RL for blood glucose control relative to baseline approaches, and ii) experiments demonstrating the challenges of using deep RL in this setting and how to overcome them. Throughout our experiments, we consider 3 variants of RL methods: i) RL-Scratch, our approach trained from scratch on every individual, ii) RL-Trans, which fine-tunes an RL-Scratch model from an arbitrary child/adolescent/adult, and iii) RL-MA, which uses RL-Scratch trained using the automated meal boluses from BB or PID-MA. We also report results on an Oracle approach, which is trained and evaluated using the ground truth simulator state.
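For reference before turning to the results, a minimal Python sketch of the two classical baselines described in the Methods section is given below (the gains and clinical parameters are illustrative placeholders; the tuned per-patient values are in the appendices):
\begin{verbatim}
def bb_policy(b, carbs, minutes_since_meal, bas, CR, CF,
              b_target=140.0):
    """Basal-bolus baseline: basal rate plus a meal bolus, with a
    correction applied only if no meal occurred in 3 hours."""
    bolus = 0.0
    if carbs > 0:
        cooldown = 1.0 if minutes_since_meal >= 180 else 0.0
        bolus = carbs / CR + cooldown * (b - b_target) / CF
    return bas + bolus

class PID:
    """PID baseline: insulin from the glucose deviation, its
    integral, and its rate of change."""
    def __init__(self, kp, ki, kd, b_target=140.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.b_target = b_target
        self.integral = 0.0
        self.prev_b = None

    def step(self, b):
        p = max(0.0, b - self.b_target)
        self.integral += b - self.b_target
        d = 0.0 if self.prev_b is None else abs(b - self.prev_b)
        self.prev_b = b
        return self.kp * p + self.ki * self.integral + self.kd * d

pid = PID(kp=1e-4, ki=1e-7, kd=1e-2)  # illustrative gains only
print(bb_policy(b=190.0, carbs=50.0, minutes_since_meal=240,
                bas=0.01, CR=10.0, CF=40.0))
print(pid.step(190.0))
\end{verbatim}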
\begin{figure*}[!htbp] \centering \includegraphics[width=\linewidth]{figures/noma_risk.png} \caption{The risk over 10 days for different simulated patients using methods that do not require meal announcements. Each point corresponds to a different random test seed that controls the meal schedule and sensor noise, and the line indicates the median performance for each method on each patient. Results are presented across 3 random training seeds, controlling model initialization and randomness in training. We observe that, although there is a wide range in performance across and within individuals, the RL approaches tend to outperform PID.} \label{fig:full_risk_noma} \end{figure*} \begin{figure*}[!htbp] \centering \includegraphics[width=\linewidth]{figures/ma_risk.png} \caption{The risk over 10 days using methods that require meal announcements. PID-MA tends to outperform BB, and RL-MA outperforms PID-MA.} \label{fig:full_risk_ma} \end{figure*} \subsection{Advantages of Deep RL}\label{sec:advantages} We compare our deep RL approaches to baselines with and without meal announcements across several metrics (\textbf{Section \ref{ssec:baseline}}). We then investigate two hypotheses for why deep RL is well suited to the problem of glucose management without meal announcements: \begin{itemize} \item the high-capacity neural network, integral to the RL approaches, is able to quickly infer when meals occur (\textbf{Section \ref{ssec:meals}}), and \item the learning-based approach is able to adapt to predictable meal schedules better than a PID controller (\textbf{Section \ref{ssec:behavior}}). \end{itemize} \subsubsection{Deep RL vs. Baseline Approaches}\label{ssec:baseline} A comparison of the PID baseline and the RL approaches is presented in \textbf{Figure \ref{fig:full_risk_noma}}. Each point represents a different test rollout by a policy. For the RL approaches, the performance of each method is reported as the combination of 3 random training restarts. For each individual, we rank the approaches in terms of median risk; we then compute an average rank for each approach by taking the mean of its rankings across all 30 individuals. Among the 3 methods that do not require meal announcements, RL-Scratch performs best across patients (average rank 1.33), followed by RL-Trans (average rank 1.70), then PID (average rank 2.97). Note that RL-Scratch, while achieving strong performance overall, reliably performs poorly on adolescent\#002. We discuss this issue in \textbf{Appendix \ref{app:ao2}}. One major advantage of our proposed approach is its ability to achieve strong performance without meal announcements. This does not mean that it does not benefit from meal announcements, as shown in \textbf{Figure \ref{fig:full_risk_ma}}. Among the 3 methods that require meal announcements, RL-MA performs best (average rank 1.07), followed by PID-MA (average rank 2.13), then BB (average rank 2.80). We examine additional metrics in the results presented in \textbf{Table \ref{tab:risk}}. The difference between results that are bold, or bold and underlined, and the next best non-bold result (excluding RL-Oracle) is statistically significant with $p<0.001$. We observe that RL-MA equals or surpasses the performance of all non-oracle methods on all metrics except \% time spent hyperglycemic. Interestingly, all RL variants achieve lower median risk than PID-MA, which requires meal announcements.
This is because the RL approaches achieve low levels of hypoglycemia, which the risk metric heavily penalizes (see \textbf{Figure \ref{fig:risk}}). Note that all methods, including PID-MA, were optimized to minimize this metric. Across patients, the RL methods achieve approximately 60-80\% time euglycemic, compared with the $52\% \pm 19.6\%$ observed in real human control \citep{ayanotakahara_carbohydrate_2015}. These results suggest that deep RL could be a valuable tool for closed-loop or hybrid closed-loop AP control. \begin{table*}[!htbp] \centering \caption{Median risk, percent of time Eu/Hypo/Hyperglycemic, and failure rate calculated using 1000 days of simulation broken into 100 independent 10-day rollouts for each of 3 training seeds for 30 patients, totaling 90k days of evaluation (with interquartile range). Lower Magni risk, hypoglycemia, and hyperglycemia are better; higher euglycemia is better. Hybrid and non-closed-loop approaches (requiring meal announcements) are indicated with $^*$. Approaches requiring a fully observed simulator state are indicated with $^\dagger$. The non-oracle approach with the best average score is in bold and underlined; the best approach that does not require meal announcements is in bold.} \scalebox{0.83}{ \begin{tabular}{lcccccc} \toprule & & Risk & Euglycemia & Hypoglycemia & Hyperglycemia & Failure \\ & & $\downarrow$ & (\%) $\uparrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ \\ \midrule \multirow[c]{3}{*}{\rotatebox[origin=c]{90}{No MA}} & PID & 8.86 (6.8-14.3) & 71.68 (65.9-75.9) & 1.98 (0.3-5.5) & \textbf{24.71 (21.1-28.6)} & 0.12 \\ & RL-Scratch & \textbf{6.50 (4.8-9.3)} & \textbf{72.68 (67.7-76.2)} & \textbf{0.73 (0.0-1.8)} & 26.17 (23.1-30.6) & \textbf{0.07} \\ & RL-Trans & 6.83 (5.1-9.7) & 71.91 (66.6-76.2) & 1.04 (0.0-2.5) & 26.60 (22.7-31.0) & 0.22 \\ \hline \multirow[c]{3}{*}{\rotatebox[origin=c]{90}{MA}} & BB$^*$ & 8.34 (5.3-12.5) & 71.09 (62.2-77.9) & 2.60 (0.0-7.2) & 23.85 (17.0-32.2) & 0.26 \\ & PID-MA$^*$ & 8.31 (4.7-10.4) & 76.54 (70.5-82.0) & 3.23 (0.0-8.8) & \bf{\underline{18.74 (12.9-23.2)}} & \bf{\underline{0.00}} \\ & RL-MA$^*$ & \bf{\underline{4.24 (3.0-6.5)}} & \bf{\underline{77.12 (71.8-83.0)}} & \bf{\underline{0.00 (0.0-0.9)}} & 22.36 (16.6-27.7) & \bf{\underline{0.00}} \\ \midrule & RL-Oracle$^\dagger$ & 3.58 (1.9-5.4) & 78.78 (73.9-84.9) & 0.00 (0.0-0.0) & 21.22 (15.1-26.1) & 0.01 \\ \bottomrule \end{tabular} } \label{tab:risk} \end{table*} \subsubsection{Detecting Latent Meals}\label{ssec:meals} Our approach achieves strong blood glucose control without meal announcements, but how much of this is due to an ability to react to meals? To investigate, we examined the total proportion of insulin delivered on average after meals for PID and RL-Scratch, shown in \textbf{Figure \ref{fig:rl_advantage}a}. A method able to infer meals should use insulin rapidly after meals, as the sooner insulin is administered, the faster glycemic spikes can be controlled while avoiding hypoglycemia. We observe that RL-Scratch administers the majority of its post-meal bolus within one hour of a meal, whereas PID requires over 90 minutes, suggesting RL-Scratch can indeed better infer meals. Interestingly, when considering the percentage of total daily insulin administered in the hour after meals, RL-Scratch responds even more aggressively than BB or PID-MA (54.7\% \textit{vs.} 48.5\% and 47.3\%, respectively). This demonstrates that our RL approach is able to react readily to latent meals shortly after they have occurred.
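The post-meal analysis above reduces to a simple computation over logged traces. The sketch below, with hypothetical variable names and the simulator's 5-minute resolution, computes the fraction of all delivered insulin that falls within a fixed window after meals. \begin{verbatim}
import numpy as np

# Sketch of the post-meal insulin analysis: the fraction of delivered
# insulin that falls within a fixed window after each meal. Variable
# names are hypothetical; traces are at 5-minute resolution.

def post_meal_insulin_fraction(insulin, meal_steps, window_steps=12):
    """insulin: 1-D array of per-step doses; meal_steps: step indices
    of meals; window_steps=12 corresponds to one hour."""
    insulin = np.asarray(insulin, dtype=float)
    post = sum(insulin[t:t + window_steps].sum() for t in meal_steps)
    return post / insulin.sum()
\end{verbatim}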
\subsubsection{Ability to Adapt to Predictable Meal Schedules}\label{ssec:behavior} We hypothesize that one advantage of RL is its ability to compensate for predictable variations (such as meals) in the environment, improving control as the environment becomes more predictable. To test this, we changed the meal schedule generation procedure outlined in \textbf{Algorithm \ref{alg:schedule}} (\textbf{Appendix \ref{app:meal}}) for Adult 1. We removed the small `snack' meals, set all meal occurrence probabilities to 1, and made meal amounts constant (\textit{i.e.}, each day Adult 1 consumes an identical set of meals). We then evaluated both the PID model and RL-Scratch on 3 variations of this environment, characterized by the standard deviation of the meal times (either 0.1, 1, or 10 hours). This tests the ability of each method to take advantage of patterns in the environment. As the standard deviation decreases, the task becomes easier for two reasons: i) there are fewer instances where two meals occur in quick succession, and ii) the meals become more predictable. The results are presented in \textbf{Figure \ref{fig:rl_advantage}b}. We observe that both methods improve in performance as the standard deviation decreases, likely due to (i). However, while RL-Scratch outperforms PID under all settings, the difference increases as the standard deviation of meal times decreases, suggesting RL is better able to leverage the predictability of meals. Specifically, mean risk decreases by roughly 12\% for the PID approach (from 9.65 to 8.54), whereas it decreases by nearly 24\% for the RL approach (from 8.40 to 6.42). This supports our hypothesis that RL is better able to take advantage of predictable meal schedules. \vspace{-3pt} \begin{figure} \centering \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{figures/percent_tdi.pdf} \end{subfigure} \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{figures/time_std_euglycemic.pdf} \end{subfigure} \caption{a) The average amount of insulin (in percent of total daily insulin) provided after meals for PID and RL-Scratch (note: RL-Trans, not shown, is very similar to RL-Scratch). RL-Scratch is able to respond to meals more quickly than PID, with insulin peaking 30 minutes post-meal as opposed to roughly 60 minutes for PID. Additionally, the RL approach finishes delivering most post-meal insulin within one hour, whereas PID takes over 90 minutes. b) The distribution of average risk scores over 300 10-day rollouts for Adult 1 using meal schedules with varying amounts of predictability (meal time standard deviation). While PID performs better with more regularly spaced meals (median risk lowers from 9.66 at std=10 to 8.53 at std=0.1, a 12\% decrease), RL-Scratch sees a larger proportional and absolute improvement (median risk lowers from 8.33 at std=10 to 6.36 at std=0.1, a 24\% decrease).}\label{fig:rl_advantage} \end{figure} \subsection{Challenges for Deep RL} While in the previous section we demonstrated several advantages of using deep RL for blood glucose control, here we emphasize that the application of deep RL to this task and its evaluation are non-trivial. Specifically, in this section we: \begin{itemize} \item demonstrate the importance of our action space formulation for performance (\textbf{Section \ref{ssec:actionspace}}), \item illustrate the critical need for careful and extensive validation, both for model selection and evaluation (\textbf{Section \ref{sec:val_selection}}),
\item show that, applied naively, deep RL leads to an unacceptable catastrophic failure rate, and present three simple approaches to improve this (\textbf{Section \ref{ssec:failures}}), \item address the issue of sample complexity with simple policy transfer (\textbf{Section \ref{ssec:transfer}}). \end{itemize} \subsubsection{Developing an Effective Action Space}\label{ssec:actionspace} One challenging element of blood glucose control in an RL setting is defining the action space. Insulin requirements vary significantly by person (from 16 to 60 daily units in the simulator population we use), and throughout most of the day, insulin requirements are much lower than after meals. To account for these challenges, we used a patient-specific action space in which much of the space corresponds to delivering no insulin (discussed in \textbf{Section \ref{sec:architecture}}). We performed an ablation study to test the impact of these decisions. On an arbitrarily chosen patient (child\#001), we shifted the $\tanh$ output to remove the negative insulin space. This increased the catastrophic failure rate from 0\% (on this patient) to 6.6\%. On a challenging subset of 4 patients (indicated in \textbf{Appendix \ref{app:bb}}), we examined the effect of removing the patient-specific action scaling $\omega_b$. This resulted in a 13\% increase in median risk, from 9.92 to 11.23. These results demonstrate that a patient-specific action space that encourages conservative behavior can improve performance. \subsubsection{Potential Pitfalls in Evaluation}\label{sec:val_selection} In our experiments, we observed two key points for model evaluation: i) while often overlooked in RL, using validation data for model selection during training was key to achieving good performance, and ii) without evaluating on far more data than is typical, it was easy to underestimate the catastrophic failure rate. \paragraph{Performance instability necessitates careful model selection.} Off-policy RL with function approximation, particularly with deep neural networks, is known to be unstable \citep{sutton_reinforcement_2018, gottesman2018evaluating}. As a result, we found it extremely important to be careful in selecting which network (and therefore policy) to evaluate. In \textbf{Figure \ref{fig:validation_selection}a}, we show the fluctuation in validation performance over training for adult\#009. While performance increases on average over the course of training (at least initially), there are several points where performance degrades considerably. \textbf{Figure \ref{fig:validation_selection}b} shows how performance averaged over all patients changes depending on the approach used to select the policy for evaluation. When we simply evaluate using the final epoch, almost half of test rollouts end in a catastrophic failure. Surprisingly, even when we select the model that minimized risk on the validation set, nearly a fifth of rollouts fail. However, by first limiting our pool of models to those that maintain a minimum blood glucose level of at least 30 mg/dL over the validation data, we reduce the catastrophic failure rate to $0.07\%$. As performance instability has been noted in other RL domains \citep{islam_reproducibility_2017, henderson_deep_2018}, this observation is likely relevant to other applications of deep RL in healthcare.
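A sketch of this model selection rule, assuming per-epoch validation statistics are logged during training, is given below; the key names are assumptions for illustration. \begin{verbatim}
# Sketch of the validation-based model selection rule: discard epochs
# whose validation rollout left the safe glucose range, then pick the
# remaining epoch with the lowest risk. The logged per-epoch
# statistics (and their key names) are assumptions for illustration.

SAFE_MIN_BG, SAFE_MAX_BG = 30.0, 1000.0  # mg/dL

def select_epoch(val_stats):
    """val_stats: list of dicts with keys 'epoch', 'min_bg', 'max_bg',
    and 'risk', each computed on a 10-day validation rollout."""
    safe = [s for s in val_stats
            if s['min_bg'] >= SAFE_MIN_BG and s['max_bg'] <= SAFE_MAX_BG]
    if not safe:  # no epoch controlled glucose safely
        return None
    return min(safe, key=lambda s: s['risk'])['epoch']
\end{verbatim}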
\begin{figure}[!htbp] \centering \begin{subfigure}{.45\linewidth} \centering \includegraphics[width=\linewidth]{figures/unstable_training.pdf} \label{fig:validation_curve} \end{subfigure} \begin{subfigure}{.4\linewidth} \includegraphics[width=\linewidth]{figures/validation_selection.pdf} \label{fig:validation_perf} \end{subfigure} \caption{a) The training and validation curve (average reward) for adult\#009. Note that the periods of instability affect both training and validation performance. b) Catastrophic failure rate over all patients for 3 methods of model selection: i) selecting the final training epoch, ii) selecting the epoch that achieved minimal risk, and iii) selecting the minimal-risk epoch that maintained blood glucose above 30 mg/dL. We see large differences in performance depending on the model selection strategy.}\label{fig:validation_selection} \end{figure} \paragraph{Extensive evaluation is necessary.} Health applications are often safety-critical. Thus, the susceptibility of deep RL to unexpected or unsafe behavior can pose a significant risk \citep{leike2017ai, futoma_identifying_2020}. While ongoing work seeks to provide safety guarantees in control applications using deep learning \citep{achiam_constrained_2017, futoma_identifying_2020}, it is important that practitioners take every step to evaluate the safety of their approaches. While it is typical to evaluate RL algorithms on a small number of rollouts \citep{henderson_deep_2018}, in our work we saw how easy it can be to miss unsafe behavior, even with significant testing. We examined the number of catastrophic failures that occurred with RL-Trans using different evaluation sets. Over our full evaluation set of 9,000 10-day rollouts, we observed 20 catastrophic failures (a rate of 0.22\%). However, when we evaluated using only the first 5 test seeds, which is still over 12 years of data, we observed 0 catastrophic failures. Additionally, when we evaluated using 3-day test rollouts instead of 10, we observed only 5 catastrophic failures (a rate of 0.05\%), suggesting that longer rollouts result in a higher catastrophic failure rate. These results demonstrate that, particularly when dealing with noisy observations, it is critical to evaluate potential policies using a large number of different, lengthy rollouts. \subsubsection{Reducing Catastrophic Failures}\label{ssec:failures} Due to their potential danger, avoiding catastrophic failures was a significant goal of this work. The most direct approach we used was to modify the reward function, using a large termination penalty to discourage dangerous behavior. While unnecessary when fine-tuning policies, this technique was crucial when training from scratch. On a subset of 6 patients (see \textbf{Appendix \ref{app:patient}}), including the termination penalty reduced the catastrophic failure rate from $4.2\%$ to $0.2\%$. We found that varying the training data also had a major impact on the catastrophic failure rate. During training, every time we reset the environment, we used a different random seed (which controls the meal schedule and sensor noise). Note that the pools of training, validation, and test seeds were non-overlapping. On a challenging subset of 7 patients (described in \textbf{Appendix \ref{app:patient}}), we ran RL-Scratch with and without this strategy. The run that varied the training seeds had a catastrophic failure rate of $0.15\%$, whereas the run with fixed seeds had a rate of $2.05\%$ (largely driven by adult\#009, who reliably had the highest catastrophic failure rate across experiments).
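Both stabilization strategies are simple to implement. The sketch below expresses them as a gym-style environment wrapper; the simulator interface and the \texttt{bg\_out\_of\_bounds} flag are assumptions for illustration, not the open-source simulator's API. \begin{verbatim}
import random

# Sketch of the two stabilization strategies above: vary the training
# seed on every reset, and add the -1e5 termination penalty when an
# episode ends in a dangerous glucose regime. The env interface and
# the 'bg_out_of_bounds' info flag are assumed for illustration.

class StabilizedEnv:
    def __init__(self, env, train_seeds, termination_penalty=-1e5):
        self.env = env
        self.train_seeds = train_seeds
        self.termination_penalty = termination_penalty

    def reset(self):
        # fresh meal schedule and sensor noise realization each episode
        return self.env.reset(seed=random.choice(self.train_seeds))

    def step(self, action):
        obs, reward, done, info = self.env.step(action)
        if done and info.get('bg_out_of_bounds', False):
            reward += self.termination_penalty
        return obs, reward, done, info
\end{verbatim}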
Other approaches can further improve stability. In \textbf{Table \ref{tab:risk}}, our RL results are averaged over three random restarts in training. This was done to demonstrate that our learning framework is robust to randomness in training data and model initialization. However, in a real application it would make more sense to use validation data to select one model from among the random restarts. We apply this approach in \textbf{Table \ref{tab:ss_risk}}, choosing the seed that obtained the best performance according to our model selection criteria. This improves all the RL methods. Most notably, it further reduces the catastrophic failure rate for the approaches without meal announcements (from 0.07\% to 0\% for RL-Scratch, and from 0.22\% to 0.13\% for RL-Trans). \begin{table*}[t!] \centering \caption{Risk and percent of time Eu/Hypo/Hyperglycemic calculated for the RL approaches, treating the 3 training seeds as different random restarts. The stability of the Scratch and Trans approaches improves relative to performance in Table \ref{tab:risk}.} \scalebox{0.89}{ \begin{tabular}{lccccc} \toprule & Risk & Euglycemia & Hypoglycemia & Hyperglycemia & Failure Rate \\ & $\downarrow$ & (\%) $\uparrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ \\ \midrule RL-Scratch & 6.39 (4.7-8.9) & 72.96 (69.1-76.6) & 0.62 (0.0-1.7) & 25.96 (22.7-29.6) & 0.00 \\ RL-Trans & 6.57 (5.0-9.3) & 71.56 (67.0-75.7) & 0.80 (0.0-1.9) & 27.19 (23.4-31.2) & 0.13 \\ RL-MA & 3.71 (2.7-6.3) & 77.36 (72.7-83.2) & 0.00 (0.0-0.5) & 22.45 (16.7-26.9) & 0.00 \\ \bottomrule \end{tabular} } \label{tab:ss_risk} \end{table*} \subsubsection{Sample Efficiency and Policy Transfer}\label{ssec:transfer} While RL-Scratch achieves strong performance on average, it requires a large amount of patient-specific data: 16.5 years per patient. While RL-Trans reduces this amount, it still requires over 2 years of patient-specific data, which for most health applications would be infeasible. Thus, we investigated how performance degrades as less data is used. In \textbf{Figure \ref{fig:finetune}}, we show the average policy performance by epoch for both RL-Scratch and RL-Trans relative to the PID controller. Note that the epoch axis indicates the maximum epoch available to our model selection procedure, not the epoch actually selected. We see that far less training is required to achieve good performance with RL-Trans. In over 40\% of rollouts, RL-Trans outperforms PID with no patient-specific data (median risk 10.31), and with 10 epochs of training (roughly half a year of data) RL-Trans outperforms PID in the majority of rollouts (59.6\%; median risk 7.95). Interestingly, the lack of patient-specific data does not appear to cause excessive catastrophic failures. With no patient-specific data the failure rate is 0\%; after 5 epochs of training it rises to 0.5\%, and it then declines over training to the final value of 0.22\%. This implies two things: i) patient-specific training can increase the catastrophic failure rate, possibly by learning overly aggressive treatment policies, and ii) our current model selection procedure does not minimize the catastrophic failure rate. We do not observe this for RL-Scratch, where all epochs under 50 have a catastrophic failure rate of over 17\%. These results suggest that our simple transfer approach can be effective even with limited amounts of patient-specific data.
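For reference, the RL-Trans procedure amounts to a few lines around our training loop. In the sketch below, \texttt{load\_networks}, \texttt{magni\_risk}, and \texttt{train} are hypothetical stand-ins for our implementation; the shifted fine-tuning reward (no termination penalty, constant offset of 100) and the settings (50 epochs, learning rate $10^{-4}$) follow our training details. \begin{verbatim}
# Sketch of RL-Trans. load_networks, magni_risk, and train are
# hypothetical stand-ins for our training code, not a library API.

def rl_trans(target_patient, source_checkpoint):
    # initialize from a fully trained patient of the same class
    q_net, v_net, pi_net = load_networks(source_checkpoint)
    # fine-tuning reward: constant offset, no termination penalty
    fine_tune_reward = lambda bg: 100.0 - magni_risk(bg)
    return train(target_patient, q_net, v_net, pi_net,
                 reward_fn=fine_tune_reward, epochs=50, lr=1e-4)
\end{verbatim}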
\begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/finetuning_per_epoch.pdf} \caption{The proportion of test rollouts where RL-Scratch and RL-Trans outperform the median PID risk with different amounts of patient-specific training. We see that without any patient-specific data RL-Trans performs better than PID in 40\% of rollouts. RL-Scratch requires a significant amount of patient-specific data before achieving comparable performance.}\label{fig:finetune} \end{figure} \section{Conclusion} In this work, we demonstrated how deep RL can lead to significant improvements over alternative approaches to blood glucose control, with and without meal announcements (\textbf{Section \ref{ssec:baseline}}). We provided insight into why (\textbf{Section \ref{ssec:meals}}) and when (\textbf{Section \ref{ssec:behavior}}) we would expect RL to perform better. We demonstrated the importance of a robust action space in patient-specific tasks (\textbf{Section \ref{ssec:actionspace}}), showed how careful and extensive validation is necessary for realistic evaluation of performance (\textbf{Section \ref{sec:val_selection}}), investigated several simple and general approaches to improving the stability of deep RL (\textbf{Section \ref{ssec:failures}}), and developed a simple approach to reduce the requirements for patient-specific data (\textbf{Section \ref{ssec:transfer}}). While the main goal of this paper was to advance a clinical application, the challenges we encountered and the solutions they inspired are likely to be of interest to researchers applying RL to healthcare more broadly. While our results in applying deep RL to blood glucose control are encouraging, they come with several limitations. First, our results are based on simulation. The simulator may not adequately capture variation across patients or changes in the glucoregulatory system over time. In particular, the virtual patient population the simulator comes equipped with does not differentiate individuals based on demographic information, such as gender and ethnicity. Thus, the applicability of our proposed techniques to all relevant patient populations cannot be assessed. However, as the simulator is an FDA-approved substitute for animal trials \citep{kovatchev_silico_2009}, success in using it is a nontrivial accomplishment. Second, we define a reward function based on risk. Though optimizing this risk function should lead to tight glucose control, it could lead to excess insulin utilization (as insulin use is unpenalized). Future work could consider resource-aware variants of this reward. Third, our choice of a 4-hour state space discourages learning long-term patterns or trends. In our environment, this did not reduce performance relative to a longer input history, but it could be important for managing blood glucose levels in more realistic simulators or real-world settings \citep{visentin_uva/padova_2018}. Finally, we emphasize that blood glucose control is a safety-critical application. An incorrect dose of insulin could lead to life-threatening situations. Many of the methods we examined, even those that achieve good average performance, are susceptible to catastrophic failures. We have investigated several ways to minimize such failures, including modifying the reward function and selecting models across multiple random restarts.
While the results from \textbf{Table \ref{tab:ss_risk}} suggest these approaches mitigate catastrophic failures, the results of \textbf{Section \ref{sec:val_selection}} show that such empirical evaluations can miss failure cases. To enable researchers to better explore and correct these limitations, we evaluated on an open-source simulator and made all the code required to reproduce our experiments publicly available. \acks{This work was supported by JDRF. The views and conclusions in this document are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of JDRF.} \section{Introduction} Type 1 diabetes (T1D) is a chronic disease affecting 20-40 million people worldwide \citep{you_type_2016}, and its incidence is increasing \citep{tuomilehto_emerging_2013}. People with T1D cannot produce insulin, a hormone that signals cells to take up glucose from the bloodstream, and must continually make decisions about how much insulin to self-administer. Administering too little insulin leads to hyperglycemia (high blood glucose levels), which results in chronic health complications \citep{control_resource_1995}, while administering too much insulin results in hypoglycemia (low blood glucose levels), which can lead to coma or death. Getting the correct dose requires careful measurement of glucose levels and carbohydrate intake, resulting in at least 15-17 data points a day. When using a continuous glucose monitor (CGM), this can increase to over 300 data points, or a blood glucose reading every 5 minutes \citep{coffen_magnitude_2009}. Combined with an insulin pump, a wearable device that delivers insulin, CGMs present an opportunity for closed-loop control, called an `artificial pancreas' (AP). Though the technology behind CGMs and insulin pumps has advanced, there remains significant room for improvement when it comes to the control algorithms \citep{bothe_use_2013, pinsker_randomized_2016}. In particular, current hybrid closed-loop approaches require accurate and timely meal announcements to maintain tight glucose control, and they are not readily able to adapt to latent behavioral patterns. In this work, we investigate the utility of a deep reinforcement learning (RL) based approach for blood glucose control. RL is a promising solution to this problem, as it is well suited to learning complex behavior that readily adapts to changing domains \citep{clavera2018learning}. When combined with a deep neural network for state inference, we hypothesize that RL will be able to infer the presence of latent meals efficiently and adapt well to existing behavioral patterns. Moreover, unlike many other disease settings, there exist credible simulators for blood glucose management \citep{visentin_university_2014}. The presence of a credible simulator alleviates many common concerns about RL applied to problems in health \citep{gottesman_guidelines_2019}. However, that does not mean applying deep RL to this problem is straightforward. In this paper, we identify several challenges in this application and make significant progress in addressing them. As a result of our proposed solutions, we present the first deep RL approach that surpasses human-level performance in controlling blood glucose without requiring meal announcements.
\subsection*{Generalizable Insights about Machine Learning in the Context of Healthcare} Throughout our work applying deep RL to blood glucose management, we encountered several challenges that extend beyond our particular application to problems in applying RL in healthcare settings more generally. As such, we believe the solutions to these problems, and the insights that inspired them, will be of broader interest as well. Specifically: \begin{enumerate} \item Due to large differences in individuals' insulin responsiveness and meal patterns, it is difficult to find a single action space that allows for rapid insulin administration while ensuring safety. To solve this problem, we developed a robust action space that encompasses a range of different individuals and naturally encourages safe policies. Robust action spaces are relevant to any RL problem in healthcare where action distributions are likely to change across patients. \item We found several potential pitfalls in evaluating our proposed approach that could lead to unrealistically pessimistic or optimistic performance assessments. To address this issue, we used validation data to perform careful model selection, and extensive test data to evaluate the quality of our selected models. In RL, it is typical to report the performance of the final trained model (without model selection) over a handful of rollouts. Our experiments demonstrate the danger of this approach. \item While deep RL has been shown to perform well on several tasks, it has also been shown to be brittle and unstable \citep{rlblogpost}, characteristics that are unacceptable in a safety-critical task. We found that a combination of simple and widely applicable techniques was sufficient to stabilize the test performance of our approaches. In particular, we used a safety-augmented reward function, realistic randomness in training data, and model selection over random restarts to train an approach that behaved safely over thousands of days of evaluation data for each patient. \item Finally, unlike game settings where one has the ability to learn from hundreds of thousands of hours of interaction, any learning approach to blood glucose control must be able to achieve strong performance using only a limited number of days of patient-specific data. While a great deal of work has been done on learning policies that generalize well across multiple tasks \citep{teh2017distral}, we show that a simple transfer learning approach can be remarkably sample efficient and can even surpass the performance of models trained from scratch. This insight is encouraging for RL applications in healthcare, as it suggests patient-specific policies can be learned without large amounts of patient-specific data. \end{enumerate} \section{Background and Related Works} In recent years, researchers have started to explore RL in healthcare. Examples include matching patients to treatment in the management of sepsis \citep{weng_representation_2017, komorowski_artificial_2018} and mechanical ventilation \citep{prasad_reinforcement_2017}. In addition, RL has been explored to provide contextual suggestions for behavioral modifications \citep{klasnja_efficacy_2019}. Despite its success in other problem settings, RL has yet to be fully explored as a solution for a closed-loop AP system \citep{bothe_use_2013}.
\subsection{Current AP algorithms and RL for Blood Glucose Control} Among recent commercial AP products, proportional-integral-derivative (PID) control is one of the most common backbones \citep{trevitt_artificial_2015}. The simplicity of PID controllers makes them easy to use, and in practice they achieve strong results \citep{steil2013algorithms}. For example, the Medtronic Hybrid Closed-Loop system, one of the few commercially available, is built on a PID controller \citep{garg_glucose_2017, ruiz_effect_2012}. In this setting, a hybrid closed-loop controller automatically adjusts basal insulin rates but still requires human-directed insulin boluses to adjust for meals. The main weakness of PID controllers in the setting of blood glucose control is their purely reactive nature. As they respond only to current glucose values (including a derivative), they often cannot react quickly enough to meals to satisfactorily control glucose excursions without meal announcements \citep{garg_glucose_2017}. And, without additional safety modifications, PID controllers can overcorrect for meal spikes, triggering hypoglycemia \citep{ruiz_effect_2012}. In contrast, we hypothesize that an RL approach will be able to leverage patterns associated with meal times, resulting in better policies that do not require meal announcements. Several previous works have examined the use of RL for different aspects of blood glucose control; see \cite{tejedor_reinforcement_2020} for a recent survey. Several recent works have investigated the use of RL to adapt existing insulin treatment regimes \citep{ngo_reinforcement-learning_2018, oroojeni_mohammad_javad_reinforcement_2015, sun_dual_2018}. In contrast to our setting, in which we aim to learn a closed-loop control policy, these works have focused on a `human-in-the-loop' setting, in which the goal is to learn optimal correction factors and carbohydrate ratios that can be used in the calculation of boluses; such approaches rely on humans consistently providing accurate and timely meal announcements. Similar to our own work, \cite{daskalaki_preliminary_2010} and \cite{de_paula_-line_2015} focus on the task of closed-loop glucose control. \cite{daskalaki_preliminary_2010} use a model-based actor-critic formulation: their critic directly predicts future glucose values, from which state values are derived, and the actor is an unlearned PID-style controller that uses the current glucose value and state value to control glucose. While their work focuses on closed-loop control, it substitutes the problem of future prediction for that of RL. In \cite{de_paula_-line_2015}, a kernelized Q-learning framework is used for closed-loop glucose control, with Bayesian active learning for on-the-fly personalization. Their work tackles a problem similar to our own, but uses a simple two-compartment model of the glucoregulatory system and a fully deterministic meal routine for testing. In our simulation environment, we found that Q-learning did not lead to satisfactory closed-loop performance; instead, we examine deep actor-critic algorithms for continuous control. \subsection{Glucose Models and Simulation} Models of the glucoregulatory system have long been important to the development and testing of AP systems \citep{cobelli_integrated_1982}. Current models are based on a combination of rigorous experimentation and expert knowledge of the underlying physiological phenomena.
Typical models consist of a multi-compartment model, with various sources and sinks corresponding to physiological phenomena, often involving dozens of patient-specific parameters. The simulator we use in our experiments is the UVA/Padova model \citep{kovatchev_silico_2009}. Briefly, this simulator models the glucoregulatory system as a nonlinear multi-compartment system, where glucose is generated by the liver, absorbed through the gut, and controlled by externally administered insulin. A more detailed explanation can be found in \cite{kovatchev_silico_2009}. For reproducibility, we use an open-source version of the UVA/Padova simulator that comes with 30 virtual patients, each of which consists of several dozen parameters fully specifying the glucoregulatory system \citep{xie_simglucose_2018}. The patients are divided into three classes: children, adolescents, and adults, with 10 patients each. While the simulator includes only 10 patients per class, there is a wide range of patient types within each class, with ages ranging from 7-68 years and recommended daily insulin from 16 to over 60 units. At each time step, the simulator takes as input the amount of insulin administered and carbohydrates consumed, and produces a 13-dimensional state vector describing levels of glucose and insulin in various compartments of the body. We combine the simulator with a meal schedule that simulates patient behavior, so the control policy affects only the insulin doses. \section{Methods} The use of deep RL for blood glucose control presents several challenges. Through extensive experimentation, we found that the choice of action space, reward function, and training data have a significant impact on training and validation performance. Additionally, the high sample complexity of standard RL approaches for continuous control tasks can make the application of these methods in real-world settings infeasible. We address these challenges in turn, settling on a state, action, and reward formulation and learning/validation pipelines that result in policies that achieve strong performance across 30 different simulated patients with the same architecture and hyperparameters, without requiring meal announcements. Finally, we demonstrate how such policies can be transferred across patients in a data-efficient manner. We begin by formalizing the problem. We then describe deep RL approaches that vary in terms of architecture and state representation, and present several baselines: an analogue of human control in the form of a basal-bolus controller, and variants of a PID controller. \subsection{Problem Setup} We frame the problem of blood glucose control as a partially observable Markov decision process (POMDP) consisting of the 6-tuple $(S^*, O, S, A, P, R)$. Our precise formulation of this problem varies depending on the method and setting. Here, we describe the standard formulation and explain further differences as they arise. The true states $\mathbf{s}^*_t \in S^*$ consist of ground-truth levels of insulin, glucose, and carbohydrates in different compartments of the body, as described in \cite{kovatchev_silico_2009}. The observation function $O: \mathbf{s}^*_t \rightarrow b_t$ maps from the true state to the CGM observation $b_t$.
Observed states $\mathbf{s}_t \in S$ consist of the previous 4 hours of CGM data $\mathbf{b}^t$ and insulin data $\mathbf{i}^t$ at the resolution of 5-minute intervals: $\mathbf{s}_t = [\mathbf{b}^t, \mathbf{i}^t]$ where: $$ \mathbf{b}^t = [b_{t-47}, b_{t-46}, \dots, b_{t}], \mathbf{i}^t = [i_{t-47}, i_{t-46}, \dots, i_{t}] $$ and $b_{t} \in \mathbb{N}_{40:400}$, $i_{t} \in \mathbb{R}_{\ge 0}$, and $t \in \mathbb{N}_{1:288}$ is a time index for a day at 5-minute resolution. Note that in our formulation, the observation function \textit{does not} map to observed states, as observed states incorporate significant amounts of historical data. We systematically explored history lengths between 1 and 24 hours as input. After tuning on the validation data, we found that 4 hours struck a good balance between time to convergence and strong performance. We use an update resolution of 5 minutes to mimic the sampling frequency of many common continuous glucose monitors. Actions $a_t \in \mathbb{R}_{\ge 0}$ are non-negative real numbers denoting the size of the insulin bolus in medication units. We experimented with numerous discretized action spaces (as is required by Q-learning approaches), but given the extreme range of insulin values required to achieve good performance (\textit{e.g.,} total daily insulin requirements vary between 16 and 60 units), designing an action space where exploration was feasible proved impractical (as an inappropriately timed bolus can be dangerous). The transition function $P$, our simulator, consists of two elements: i) $M: t \rightarrow c_t$, the meal schedule, which is defined in \textbf{Appendix \ref{app:meal}}, and ii) $G: (a_t, c_t, \mathbf{s}^*_t) \rightarrow (b_{t+1}, i_{t+1}, \mathbf{s}^*_{t+1})$, where $c_t \in \mathbb{R}_{\ge 0}$ is the amount of carbohydrates input at time $t$ and $G$ is a model of the glucoregulatory system whose behavior is defined in accordance with the UVA/Padova simulator \citep{kovatchev_silico_2009}. The reward function $R: (\mathbf{s}_t, a_t) \rightarrow \mathbb{R}$ is defined as the negative risk $-risk(b_t)$, where $risk$ is the asymmetric blood glucose risk function: \begin{equation} risk(b) = 10 \cdot \left(3.5506 \cdot \left(\log(b)^{0.8353} - 3.7932\right)\right)^2 \end{equation} shown in \textbf{Figure \ref{fig:risk}}; it is an established tool for computing glucose risk \citep{magni_model_2007}. Considered in isolation, this reward could lead to dangerous behavior. As it is entirely negative, cumulative reward can be maximized by ending the episode as quickly as possible (corresponding to a catastrophic failure). To mitigate this, we add a termination penalty to this reward function: trajectories that enter dangerous blood glucose regimes (blood glucose levels of less than 10 or more than 1,000 mg/dL) receive a negative reward that outweighs the advantage of early termination. We found that a value of $-10^5$ worked well empirically. We investigated other reward functions, such as time in range, distance from a target blood glucose value, and risk without a termination penalty, but found that optimizing for the proposed reward function consistently led to better control. In particular, it led to control schemes that were less prone to hypoglycemia, as these trajectories were penalized more heavily than occasional high glucose levels. This decision was important for our problem setting, as extreme hypoglycemia can be rapidly triggered by a large dose of insulin without an accompanying meal.
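A minimal Python sketch of this reward follows; as a sanity check, it yields a risk of roughly 3.5 at the hyperglycemic threshold (180 mg/dL) and roughly 25 at the hypoglycemic threshold (70 mg/dL), matching \textbf{Figure \ref{fig:risk}}. \begin{verbatim}
import numpy as np

# A minimal sketch of the reward above: negative Magni risk with a
# large termination penalty outside the survivable glucose range.
# Constants follow the text; b is blood glucose in mg/dL.

def magni_risk(b):
    return 10.0 * (3.5506 * (np.log(b) ** 0.8353 - 3.7932)) ** 2

def reward(b, termination_penalty=-1e5):
    if b < 10.0 or b > 1000.0:  # dangerous regime: episode terminates
        return termination_penalty
    return -magni_risk(b)
\end{verbatim}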
Given the lack of meal announcements and the presence of sensor noise, which can resemble the beginning of a meal spike, avoiding extreme hypoglycemia was one of the major challenges we encountered in applying deep RL to blood glucose control. \begin{figure} \centering \includegraphics[width=0.45\linewidth]{figures/magni_risk_post.pdf} \caption{The risk function proposed by \cite{magni_model_2007}, mapping blood glucose values (in mg/dL, x-axis) to risk values (y-axis). The hypo- and hyperglycemic thresholds are shown as shaded regions. The risk at the hyperglycemic threshold (180 mg/dL) is roughly 3.5; the risk at the hypoglycemic threshold (70 mg/dL) is approximately 25.} \label{fig:risk} \end{figure} \subsection{Soft Actor-Critic} We learn a stochastic policy $\pi: \mathbf{s}_t \rightarrow p(a_t)$, a policy that, given an observed state, produces a distribution over possible actions, using the Soft Actor-Critic (SAC) algorithm \citep{haarnoja_soft_2018}. In particular, our policy network generates outputs $\mu, \log(\sigma)$, which parameterize a normal distribution $\mathcal{N}(\mu, \sigma)$. The actions are distributed according to $\tanh(z), z \sim \mathcal{N}(\mu, \sigma)$. SAC has been shown to be reasonably sample efficient and well-performing when learning continuous control policies. Additionally, the fact that our problem setting is partially observed suggests potential benefits from a stochastic formulation \citep{eysenbach2019if}. SAC, a member of the actor-critic family, trains a stochastic policy network (or actor) parameterized by $\phi$ to maximize the maximum entropy RL objective function: \vspace{-1pt} \begin{equation}\label{eqn:policy_return} J(\pi) = \sum_{t=0}^T \mathbb{E}_{(\mathbf{s}_t, a_t) \sim P(\mathbf{s}_{t-1}, \pi_\phi(\mathbf{s}_{t-1}))} [R(\mathbf{s}_t, a_t) + \alpha H(\pi_\phi(\cdot|\mathbf{s}_t))], \end{equation} \vspace{-1pt} where the entropy regularization term, $H$, is added to the expected cumulative reward to improve exploration and robustness \citep{haarnoja_soft_2018}. Intuitively, the return in \textbf{Equation \ref{eqn:policy_return}} encourages a policy that maximizes expected reward while also maximizing the entropy of the learned policy. The temperature hyperparameter $\alpha$ controls the tradeoff between these two objectives. While it can be set manually, in our work we use the automatic temperature tuning derived by \cite{haarnoja_soft_2018a}. The overall objective function is optimized by minimizing the KL divergence between the action distribution $\pi_{\phi}(\cdot | \mathbf{s}_{t})$ and the distribution induced by the state-action values $Q_{\theta}(\mathbf{s}_{t}, \cdot)$: \vspace{-1pt} \begin{equation}\label{eqn:pi} J_{\pi}(\phi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\mathrm{D}_{\mathrm{KL}}\left(\pi_{\phi}\left(\cdot | \mathbf{s}_{t}\right) \| \frac{\exp \left(Q_{\theta}\left(\mathbf{s}_{t}, \cdot\right)\right)}{Z_{\theta}\left(\mathbf{s}_{t}\right)}\right)\right] \end{equation} \vspace{-1pt} where $\mathcal{D}$ is a replay buffer containing previously seen $(\mathbf{s}_t, \mathbf{a}_t, r_t, \mathbf{s}_{t+1})$ tuples, $Z_\theta$ is a partition function, and $Q_\theta$ is the state-action value function parameterized by a neural network (also called a critic). This means that our learned policy engages in probability matching: it selects an action with probability proportional to its expected value.
$Q_\theta$ is trained by minimizing the temporal difference loss: \begin{gather}\label{eqn:q} J_{Q}(\theta)=\mathbb{E}_{\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right) \sim \mathcal{D}}\left[\frac{1}{2}\left(Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\hat{Q}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)\right)^{2}\right], \\ \hat{Q}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)=r\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)+\gamma \mathbb{E}_{\mathbf{s}_{t+1} \sim p}\left[V_{\overline{\psi}}\left(\mathbf{s}_{t+1}\right)\right]. \end{gather} \vspace{-1pt} $V_{\overline{\psi}}$ uses the running exponential average of the weights of $V_\psi$ (the soft value function, parameterized by a third neural network) over training; this is a continuous variant of the hard target network replication of \cite{mnih_human-level_2015}. $V_\psi$ is trained to minimize: \begin{equation}\label{eqn:v} J_{V}(\psi)=\mathbb{E}_{\mathbf{s}_{t} \sim \mathcal{D}}\left[\frac{1}{2}\left(V_{\psi}\left(\mathbf{s}_{t}\right)-\mathbb{E}_{\mathbf{a}_{t} \sim \pi_{\phi}}\left[Q_{\theta}\left(\mathbf{s}_{t}, \mathbf{a}_{t}\right)-\log \pi_{\phi}\left(\mathbf{a}_{t} | \mathbf{s}_{t}\right)\right]\right)^{2}\right]. \end{equation} In summary: we train a policy, parameterized by a neural network $\pi_\phi$, that maps states to a probability distribution over actions. Training this network (\textbf{Equation \ref{eqn:pi}}) requires an estimate of the soft state-action value function; we learn such an estimate, $Q_\theta$ (\textbf{Equation \ref{eqn:q}}), together with the soft value function $V_\psi$ (\textbf{Equation \ref{eqn:v}}). Additional details of this approach, including the gradient calculations, are given in \cite{haarnoja_soft_2018}. Note that we replace the mean squared error in the temporal difference loss with the Huber loss, as we empirically found this improved convergence. \subsubsection{Recurrent Architecture}\label{sec:architecture} Our proposed approach takes as input only the past 4 hours of CGM and insulin data, mimicking real-world applications without human input (\textit{i.e.}, no meal announcements). To extract useful state information from the noisy CGM and insulin history, we parameterize $Q_\theta$, $V_\psi$, and $\pi_\phi$ using gated recurrent unit (GRU) networks \citep{cho_learning_2014}, as these architectures have successfully been used to model blood glucose data in the past \citep{fox_deep_2018, zhu_deep_nodate}. The GRU in $\pi_\phi$ maps states to the scalar values $\mu$ and $\log(\sigma)$. Our GRU networks are composed of two layers with a hidden state size of 128, followed by a fully-connected output layer. We tuned this architecture using validation data on a limited subset of patients. \subsection{Baselines} We examine three baseline methods for control: basal-bolus (BB), PID control, and PID with meal announcements (PID-MA). BB reflects typical human-in-the-loop control strategies; PID reflects a common control strategy used in preliminary fully closed-loop AP applications; and PID-MA, based on current AP technology, requires regular human intervention. Finally, we consider an `oracle' approach that has access to the true state $s^*_t$. This decouples the task of learning a policy from learning a representation, serving as an upper bound on the performance of our proposed approach. \subsubsection{Basal-Bolus Baseline} This baseline is designed to mimic human control and is an idealized depiction of how an individual with T1D currently controls their blood glucose.
In this setting, we modify the standard state representation $\mathbf{s}_t$ to include a carbohydrate signal and a cooldown signal (explained below), and to remove all measurements occurring before $t$: $\mathbf{s}_t = [b_t, i_t, c_t, cooldown]$. Note that the inclusion of a carbohydrate signal, or meal announcement, places the burden of providing accurate and timely estimates of meals on the person with diabetes. Each virtual patient in the simulator comes with the parameters necessary to calculate a reasonable basal insulin rate $bas$ (the same value used in our action space definition), correction factor $CF$, and carbohydrate ratio $CR$. These three parameters, together with a glucose target $b_g$, define a clinician-recommended policy $\pi(s_t) = bas + (c_t > 0) * (\frac{c_t}{CR} + cooldown * \frac{b_t - b_g}{CF})$, where $cooldown$ is 1 if there have been no meals in the past three hours and 0 otherwise. This ensures that each meal is corrected for only once; otherwise, meals close together in time could lead to over-correction and hypoglycemia. Appropriate settings for these parameters can be estimated by endocrinologists using previous glucose and insulin information \citep{walsh_guidelines_2011}. The parameters for our virtual patient population are given in \textbf{Appendix \ref{app:bb}}. \subsubsection{PID Baseline} Variants of PID controllers are already used in commercial AP applications \citep{garg_glucose_2017}. A PID controller operates by setting the control variable, here $a_t$, to a weighted combination of three terms, $a_t = k_P P(b_t) + k_I I(b_t) + k_D D(b_t)$, such that the process variable $b_t$ (where $t$ is again the time index) remains close to a specified setpoint $b_g$. The terms are calculated as follows: i) the proportional term $P(b_t)=\max(0, b_t - b_g)$ increases the control variable proportionally to the distance from the setpoint, ii) the integral term $I(b_t) = \sum_{j=0}^t (b_j - b_g)$ acts to correct long-term deviations from the setpoint, and iii) the derivative term $D(b_t) = |b_{t} - b_{t-1}|$ acts on a basic estimate of the future, here approximated by the rate of change. The setpoint and the weights (also called gains) $k_P, k_I, k_D$ are hyperparameters. To compare against the strongest PID controller possible, we tuned these hyperparameters extensively using multiple iterations of per-patient grid search with exponential refinement, minimizing risk. Our final settings are presented in \textbf{Appendix \ref{app:pid_param}}. \subsubsection{PID with Meal Announcements} This baseline, which is designed to be similar to commercially available hybrid closed-loop systems \citep{garg_glucose_2017, ruiz_effect_2012}, combines BB with the PID algorithm into a control algorithm that we call ``PID with meal announcements'' (PID-MA). During meals, insulin boluses are calculated and applied as in the BB approach, but instead of using a predetermined fixed basal insulin rate, the PID algorithm controls the basal rate, allowing for adaptation between meals. As above, we tune the gain parameters for the PID algorithm using sequential grid search with exponential refinement to minimize risk. \subsubsection{Oracle Architecture} A deep RL approach to learning AP algorithms requires that: i) the representation learned by the network contain sufficient information to control the system, and ii) an appropriate control algorithm be learned through interaction with the glucoregulatory system.
As we are working with a simulator, we can explore the difficulty of task (ii) in isolation by replacing the state $\mathbf{s}_t$ with the ground-truth state of the simulator, $\mathbf{s}^*_t$. Though unavailable in real-world settings, this representation decouples the problem of learning a policy from that of learning a good state representation. Here, $Q_\theta$, $V_\psi$, and $\pi_\phi$ are fully-connected with two hidden layers, each with 256 units. The network uses ReLU nonlinearities and BatchNorm \citep{ioffe2015batch}. \subsection{Experimental Setup \& Evaluation} To measure the utility of deep RL for the task of blood glucose control, we learned policies using the approaches described above and tested these policies on data with different random seeds across 30 different simulated individuals. \textbf{Training and Model Selection. }We trained our models (from scratch) for 300 epochs (batch size 256, epoch length 20 days) with an experience replay buffer of size $10^6$ and a discount factor of 0.99. We examined different choices of discount factor on a subset of patients, and while our selection performed best, the effect was minor. While our RL algorithms often achieved reasonable performance within the first 20-50 epochs of training, we found that additional training was required for stable performance. We also found that a sufficient epoch length was critical for learning a stable controller: too short an epoch length can lead to dangerous control policies. We optimized the parameters of $Q$, $V$, and $\pi$ using Adam with a learning rate of $3 \times 10^{-4}$. All network hyperparameters, including the number and size of layers, were optimized on training seeds on a subset of the simulated patients. Our networks were initialized using PyTorch defaults. When fine-tuning models transferred across patients, we train them for 50 epochs with a learning rate of $10^{-4}$. All of our code will be made publicly available to allow for replication. For the purpose of anonymous peer review, we have made our code available on an anonymous Google Drive account\footnote{\url{https://tinyurl.com/y6e2m68b}}. \textbf{Evaluation. }We measured the performance (risk) of the policy networks on 10 days of validation data after each epoch. After training over all epochs, we evaluated on test data using the model parameters from the best epoch as determined by the validation data. Our method of choosing the best validation run led to significant changes in performance (see \textbf{Section \ref{sec:val_selection}}). To optimize our validation selection method, we generated a separate set of tuning seeds for a patient and chose the selection approach that resulted in the best performance on this pseudo-test data. Our validation procedure selects the epoch to test by first filtering out runs that could not control blood glucose within safe levels over the validation run (glucose between 30-1000 mg/dL); it then selects the remaining epoch that achieved the lowest mean risk. We evaluated each patient-specific model on 1000 days of test data, broken into 100 independent 10-day rollouts. We trained and evaluated each approach 3 times, resulting in 3000 days of evaluation per method per person.
We evaluated approaches using i) risk, the average Magni risk calculated over the 10-day rollout, ii) \% time spent euglycemic (blood glucose levels between 70-180 mg/dL), iii) \% time hypo/hyperglycemic (blood glucose lower than 70 mg/dL or higher than 180 mg/dL, respectively), and iv) \% of runs that resulted in a catastrophic failure, which we define as a run that reaches a minimum blood glucose level below 5 mg/dL (at which point recovery becomes highly unlikely). Note that while these catastrophic failures are a major concern, our simulation process does not consider consuming carbohydrates to correct for low blood glucose levels, a common strategy to avoid dangerous hypoglycemia in the real world. The random seeds controlling noise, meals, and all other forms of stochasticity were different between training, validation, and test runs. We test the statistical significance of differences between methods using Mood's median test for all metrics except catastrophic failure rate, for which we use a binomial test. \textbf{Patient-Specific Action Space.} After the network output layer, actions are squashed using a $\tanh$ function. Note that this results in half the action space corresponding to negative values, which we interpret as administering no insulin. This encourages sparse insulin utilization and makes it easier for the network to learn to safely control baseline glucose levels. To ensure that the maximum amount of insulin delivered over a 5-minute interval is roughly equal to a normal meal bolus, we use the average daily ratio of basal to bolus insulin \citep{kuroda_basal_2011} to calculate a scale parameter for the action space. We use $\omega_{b}=43.2 \cdot bas$ as the scale, where $bas$ is the default basal insulin rate provided by \cite{xie_simglucose_2018} (which varies per person). This scaling was an important factor in improving performance with a continuous action space. \textbf{Efficient Policy Transfer.} Given that one of the main disadvantages of deep RL approaches is their poor sample efficiency, we sought to explore transfer learning techniques that could allow networks trained from scratch to be efficiently transferred to new patients. We refer to our method trained from scratch as RL-Scratch, and to the transfer approach as RL-Trans. For RL-Trans, we initialize $Q_\theta, V_\psi, \pi_\phi$ for each class of patients (children, adolescents, and adults) using fully trained networks from one randomly selected member of that source population (\textit{e.g.}, Child/Adolescent/Adult 1). We then fine-tune these networks on data collected from the target patient. When fine-tuning, we modify the reward function, removing the termination penalty and adding a large positive constant value (100) to all rewards. Removing the termination penalty led to more consistent returns over training, as it reduces the range of possible episode returns, allowing for a more consistent policy gradient. As we begin with policies that are approximately correct, the additional safety provided by training with a termination penalty is no longer required. This provides a simple approach for training policies using far less data per patient. \section{Experiments and Results} We investigate the performance of several different classes of policies for blood glucose management, with and without meal announcements. We compare the performance of the BB controller, the PID controller with and without meal announcements, and our RL approaches.
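Before presenting results, we give a concrete sketch of the patient-specific action space described above; the exact placement of the clipping and scaling steps is our reading of the text, offered as an illustration rather than a definitive implementation. \begin{verbatim}
import math

# Sketch of the patient-specific action space: a Gaussian sample z is
# squashed by tanh, the negative half-space is interpreted as "no
# insulin," and the positive half is scaled by omega_b = 43.2 * bas
# (bas: the patient's default basal rate from the simulator).

def action_to_insulin(z, bas):
    u = math.tanh(z)                # squashed action in [-1, 1]
    omega_b = 43.2 * bas            # per-patient scale
    return omega_b * max(0.0, u)    # negative outputs give no insulin
\end{verbatim}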
We consider 3 variants of RL methods: i) RL-Scratch, the standard SAC trained from scratch on every individual, ii) RL-Trans, which fine-tunes a trained RL-Scratch model from an arbitrary child/adolescent/adult, and iii) RL-MA, which uses RL-Scratch trained with the same automated meal boluses used for PID-MA. We also report results on an Oracle approach, a SAC method trained and evaluated using the ground-truth simulator state. To begin, we demonstrate the potential of deep RL for learning AP algorithms relative to other approaches. Then, we address the challenges we faced in applying deep RL to this task and present how we overcame them. \begin{figure*}[htb!] \centering \includegraphics[width=\linewidth]{figures/noma_risk.png} \caption{The risk over 10 days achieved by the methods that do not require meal announcements when applied to different simulated patients. Each point corresponds to a different random test seed that controls the meal schedule and sensor noise. The line indicates median performance across all test seeds for each method on each patient. Results for each method are presented across 3 random training seeds, controlling model initialization and randomness in training. We observe that, although there is a wide range in performance across and within individuals, the RL approaches tend to outperform the PID.} \label{fig:full_risk_noma} \end{figure*} \begin{figure*}[htb!] \centering \includegraphics[width=\linewidth]{figures/ma_risk.png} \caption{The risk over 10 days achieved by the methods that require meal announcements. We observe that PID-MA tends to outperform BB, and RL-MA similarly outperforms PID-MA.} \label{fig:full_risk_ma} \end{figure*} \begin{table*}[t!] \centering \caption{Median risk, percent of time Eu/Hypo/Hyperglycemic, and failure rate calculated using 1000 days of simulation broken into 100 independent 10-day rollouts for each of 3 training seeds for 30 patients, totaling 90k days of evaluation (with interquartile range). Lower Magni risk, hypoglycemia, and hyperglycemia are better; higher euglycemia is better. Hybrid and non-closed-loop approaches (requiring meal announcements) are indicated with *. Approaches requiring a fully observed simulator state are indicated with $\dagger$.
The non-oracle approach with the best average score is bolded and underlined; the best approach that does not require meal announcements is bolded.} \scalebox{0.9}{ \begin{tabular}{lcccccc} \toprule & & Risk & Euglycemia & Hypoglycemia & Hyperglycemia & Failure \\ & & $\downarrow$ & (\%) $\uparrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ \\ \midrule \multirow{3}{*}{\rotatebox[origin=c]{90}{No MA}} & PID & 8.86 (6.8-14.3) & 71.68 (65.9-75.9) & 1.98 (0.3-5.5) & \textbf{24.71 (21.1-28.6)} & 0.12 \\ & RL-Scratch & \textbf{6.50 (4.8-9.3)} & \textbf{72.68 (67.7-76.2)} & \textbf{0.73 (0.0-1.8)} & 26.17 (23.1-30.6) & \textbf{0.07} \\ & RL-Trans & 6.83 (5.1-9.7) & 71.91 (66.6-76.2) & 1.04 (0.0-2.5) & 26.60 (22.7-31.0) & 0.22 \\ \hline \multirow{4}{*}{\rotatebox[origin=c]{90}{MA}} & BB* & 8.34 (5.3-12.5) & 71.09 (62.2-77.9) & 2.60 (0.0-7.2) & 23.85 (17.0-32.2) & 0.26 \\ & PID-MA* & 8.31 (4.7-10.4) & 76.54 (70.5-82.0) & 3.23 (0.0-8.8) & \textbf{\underline{18.74 (12.9-23.2)}} & \textbf{\underline{0.00}} \\ & RL-MA* & \textbf{\underline{4.24 (3.0-6.5)}} & \textbf{\underline{77.12 (71.8-83.0)}} & \textbf{\underline{0.00 (0.0-0.9)}} & 22.36 (16.6-27.7) & \textbf{\underline{0.00}} \\ & RL-Oracle$^\dagger$ & 3.58 (1.9-5.4) & 78.78 (73.9-84.9) & 0.00 (0.0-0.0) & 21.22 (15.1-26.1) & 0.01 \\ \bottomrule \end{tabular} } \label{tab:risk} \end{table*} \subsection{Advantages of Deep RL}\label{sec:advantages} We begin by comparing our deep RL approaches to baselines with and without meal announcements. We then investigate two hypotheses for why deep RL is well suited to the problem of glucose management without meal announcements: i) that the high-capacity neural network is able to quickly infer when meals occur, and ii) that the approach is able to adapt to behavioral patterns. \subsubsection{Deep RL vs. Baseline Approaches} Results comparing the PID baseline to the RL approaches are presented in \textbf{Figure \ref{fig:full_risk_noma}}. Each point represents a different test rollout by a policy. For the RL approaches, the performance of each method is reported as the combination of 3 random restarts. For each individual we rank the 3 approaches in terms of median risk, and calculate the average rank by taking the mean of each approach's rankings across all 30 patients. Among the 3 methods that do not require meal announcements, RL-Scratch performs best across patients (average rank 1.33), followed by RL-Trans (average rank 1.7), then PID (average rank 2.97). \textbf{Figure \ref{fig:full_risk_ma}} shows the performance for approaches requiring meal announcements. The mean risk for almost all individuals is above the risk threshold for hyperglycemia of 3.5, which is far from the optimal level of control. However, it is not the case that all time is spent hypo/hyperglycemic: across patients, approximately 60-80\% of time is spent euglycemic, compared with $52\% \pm 19.6\%$ observed in real human control \citep{ayanotakahara_carbohydrate_2015}. Note that RL-Scratch, while achieving strong performance overall, reliably performs quite poorly on adolescent\#002. We discuss this issue in \textbf{Appendix \ref{app:ao2}}. One of the major advantages of our proposed approach is its ability to achieve strong performance without meal announcements. This does not mean that our approach does not benefit from meal announcements. Among the 3 methods that require meal announcements, RL-MA performs best (average rank 1.07), followed by PID-MA (average rank 2.13), then BB (average rank 2.8).
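For concreteness, the sketch below illustrates the average-rank computation and the two significance tests on synthetic stand-in data; only the choice of tests (Mood's median test and a binomial test) comes from our setup, the arrays are hypothetical placeholders, and a recent SciPy is assumed for \texttt{binomtest} and the \texttt{axis} argument of \texttt{rankdata}.
\begin{verbatim}
import numpy as np
from scipy.stats import median_test, binomtest, rankdata

rng = np.random.default_rng(0)
# Hypothetical per-patient median risks: 30 patients x 3 methods.
median_risks = rng.uniform(4.0, 12.0, size=(30, 3))

# Rank the methods within each patient (1 = lowest median risk),
# then average the ranks across patients.
avg_rank = rankdata(median_risks, axis=1).mean(axis=0)

# Mood's median test comparing two methods' per-rollout risks.
risk_a = rng.uniform(4.0, 12.0, size=300)
risk_b = rng.uniform(4.0, 12.0, size=300)
_, p_value, _, _ = median_test(risk_a, risk_b)

# Binomial test on a catastrophic failure count, e.g. 20 failures
# over 9000 rollouts against a hypothesized reference rate.
failure_test = binomtest(k=20, n=9000, p=0.0007)
print(avg_rank, p_value, failure_test.pvalue)
\end{verbatim}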
Taken together, these results suggest that deep RL could be a valuable addition to current hybrid closed-loop systems. We examine additional models and metrics in the results presented in \textbf{Table \ref{tab:risk}}. We observe that RL-MA equals or surpasses the performance of all non-oracle methods on all metrics except hyperglycemia. Interestingly, all the RL variants achieve lower median risk than PID-MA, which requires meal announcements. This is primarily driven by low levels of hypoglycemia, which the risk metric heavily penalizes (see \textbf{Figure \ref{fig:risk}}). Note that all methods, including PID-MA, were optimized to minimize this metric. The differences between results that are bolded, or bolded and underlined, and the next best non-bolded result (excluding RL-Oracle) are statistically significant with $p<0.001$. \subsubsection{Detecting Latent Meals} Our RL approaches achieve strong overall blood glucose control without meal announcements, but how much of this is due to the deep network's ability to detect meals? To investigate this, we looked at the total proportion of insulin delivered on average after meals for PID and RL-Scratch, shown in \textbf{Figure \ref{fig:rl_advantage}a}. If our RL approach were better able to infer meals, we would expect a sharper increase in insulin administered after meals, as the sooner insulin is administered, the faster glycemic spikes can be controlled without risking hypoglycemia. This is precisely what we observe: RL-Scratch administers the majority of its post-meal bolus within one hour of a meal, whereas PID requires two hours. Interestingly, when considering the percentage of total daily insulin administered in the hour after meals, RL-Scratch responds even more aggressively than BB or PID-MA (54.7\% \textit{vs.} 48.5\% and 47.3\%, respectively). This demonstrates that our RL approach is able to readily react to latent meals shortly after they have occurred. \subsubsection{Ability to Adapt to Behavioral Schedules} We hypothesize that one of the potential advantages of RL is its ability to exploit underlying behavioral patterns, improving control as the environment becomes more predictable. To investigate this potential benefit, we explored changing the meal schedule generation procedure outlined in \textbf{Algorithm \ref{alg:schedule}} for Adult 1. We removed the `snack' meals (those with occurrence probabilities of 0.3), set all meal occurrence probabilities to 1, and set all meal amount standard deviations to 0 (\textit{i.e.}, each day Adult 1 consumes an identical set of meals); a sketch of this modified schedule follows below. We then evaluated both the PID model and the RL-Scratch model on 3 variations of this environment, characterized by the standard deviation of the meal times (either 0.1, 1, or 10 hours). This tests the ability of each method to take advantage of patterns in the environment. The results are presented in \textbf{Figure \ref{fig:rl_advantage}b}. We observe that, while RL-Scratch achieves lower risk than PID under all settings, the difference becomes more pronounced as the standard deviation of meal times becomes smaller (and thus meal timing becomes more predictable). Specifically, mean risk decreases by roughly 12\% for the PID approach (from 9.65 to 8.54), whereas it decreases by nearly 24\% for the RL approach (from 8.40 to 6.42). This supports our hypothesis that RL is better able to take advantage of behavioral patterns.
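The following minimal sketch shows how such a predictability-controlled meal schedule can be generated; the base meal times and carbohydrate amounts are illustrative placeholders, as the exact parameters of \textbf{Algorithm \ref{alg:schedule}} are not reproduced here.
\begin{verbatim}
import numpy as np

def predictable_meal_schedule(n_days, time_std_hours, rng):
    # Base meal times (hours of day) and carbohydrate amounts (grams);
    # these values are illustrative placeholders, not the simulator's.
    base_times = np.array([7.0, 12.0, 18.0])
    amounts = np.array([45.0, 70.0, 80.0])
    meals = []
    for day in range(n_days):
        # Occurrence probability 1 and zero amount variance: only the
        # meal *times* are jittered, by the chosen standard deviation.
        times = base_times + rng.normal(0.0, time_std_hours, size=3)
        for t, amount in zip(times, amounts):
            meals.append((day * 24.0 + t, amount))  # (hours, grams)
    return meals

rng = np.random.default_rng(0)
for std in (0.1, 1.0, 10.0):  # the three evaluated settings
    schedule = predictable_meal_schedule(10, std, rng)
\end{verbatim}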
\vspace{-3pt} \begin{figure} \centering \begin{subfigure}{} \includegraphics[width=.4\linewidth]{figures/percent_tdi.pdf} \end{subfigure} \begin{subfigure}{} \includegraphics[width=.4\linewidth]{figures/time_std_euglycemic.pdf} \end{subfigure} \caption{a) The average amount of insulin (in percent of total daily insulin) provided after meals for PID and RL-Scratch (note: RL-Trans, not shown, is very similar to RL-Scratch). We see that RL-Scratch is able to respond to meals more quickly, with insulin peaking 30 minutes post-meal as opposed to roughly 60 minutes for PID. Additionally, the RL approach finishes delivering most post-meal insulin within 1 hour, whereas PID takes approximately 90 minutes. b) The distribution of average risk scores over 300 10-day rollouts for Adult 1 using different meal schedules, where a lower standard deviation of meal times corresponds to a more predictable schedule. While PID performs better with more regularly spaced meals (median risk lowers from 9.66 at std=10 to 8.53 at std=0.1, a 12\% decrease), RL-Scratch sees a larger proportional and absolute improvement (median risk lowers from 8.33 at std=10 to 6.36 at std=0.1, a 24\% decrease).}\label{fig:rl_advantage} \end{figure} \subsection{Challenges for Deep RL} While \textbf{Section \ref{sec:advantages}} demonstrates several advantages of using deep RL for blood glucose control, the application of deep RL to this task and its evaluation is non-trivial. In this section, we begin by discussing the key design decisions made to achieve the successes described above. We pay particular attention to the issue of catastrophic failures, as overcoming them was a major hurdle for the deep RL approaches. We conclude by addressing the issue of sample complexity with patient-specific data. \subsubsection{Developing an Effective Action Space} One challenging element of blood glucose control in an RL setting is the action space. Insulin requirements vary significantly by person (from 16 to 60 daily units in the simulator population we use), and throughout most of the day, insulin requirements are much lower than near meals. To account for these challenges, we used a patient-specific action space where much of the space corresponds to delivering no insulin, as discussed in \textbf{Section \ref{sec:architecture}}. We performed an ablation study to test the impact of these decisions. On an arbitrarily chosen patient (child\#001), we experimented with removing the negative insulin space by adding 1 to the tanh output. We found this increased the catastrophic failure rate from 0\% (on this one patient) to 6.6\%. On a challenging subset of 4 patients (indicated in \textbf{Appendix \ref{app:bb}}), we looked at the effect of removing the patient-specific action scaling $\omega_b$. We found this increased median risk from 9.92 to 11.23, a 13\% increase. These results demonstrate that an action space that encourages safe behavior and naturally accommodates variation across patients can improve performance. \subsubsection{Potential Pitfalls in Evaluation} \label{sec:val_selection} In our experiments, we observed two key points for model evaluation: i) while often overlooked in RL, using validation data for model selection during training was key to achieving good performance, and ii) without evaluating on far more data than is typical, it was easy to underestimate the catastrophic failure rate. \textbf{Performance instability necessitates careful model selection.
} Off-policy RL with function approximation is known to be unstable, and these effects can be amplified when using deep neural networks \citep{sutton_reinforcement_2018}. As a result, we found it was extremely important to be careful in selecting which network to evaluate. In \textbf{Figure \ref{fig:validation_selection}a}, we show the fluctuation in validation performance over training for Adult\#009. While performance increases on average over the course of training, there are several points where performance degrades considerably. \textbf{Figure \ref{fig:validation_selection}b} shows how average performance over all patients changes depending on the approach we use to select the training epoch and corresponding policy. When we simply test on the final epoch, almost half of the test rollouts end in a catastrophic failure. Surprisingly, even when we select the model that performed best on the validation set in terms of average reward, nearly a fifth of rollouts can fail. However, by first limiting our model selection to those models that maintained a minimum blood glucose level of at least 30 mg/dL over the validation data, we reduce the catastrophic failure rate to only $0.07\%$. As others have noted this instability across a range of domains \citep{islam_reproducibility_2017, henderson_deep_2018}, we believe this observation is relevant to future applications of deep RL to healthcare problems. \begin{figure}[hbtp] \centering \begin{subfigure} \centering \includegraphics[width=0.45\linewidth]{figures/unstable_training.pdf} \label{fig:validation_curve} \end{subfigure} \begin{subfigure}{} \includegraphics[width=0.4\linewidth]{figures/validation_selection.pdf} \label{fig:validation_perf} \end{subfigure} \caption{a) The training and validation curves for adult\#009 in terms of average reward. Note that the periods of instability affect both training and validation performance. b) Catastrophic failure rate averaged over all patients for 3 methods of validation selection: i) selecting the final training epoch, ii) selecting the epoch that achieved maximal average reward over the validation rollout, iii) selecting the maximal-reward epoch that also maintained blood glucose above a certain limit (30 mg/dL).}\label{fig:validation_selection} \end{figure} \textbf{Extensive evaluation is necessary.} While health applications vary widely, by and large they are safety critical. Although we have shown that deep learning can improve performance significantly in the average case, the susceptibility of such approaches to unexpected or unsafe behavior poses a significant risk. While ongoing work seeks to provide safety guarantees in control applications using deep learning \citep{achiam_constrained_2017, futoma_identifying_2020}, it is important that practitioners take every step to thoroughly evaluate the safety of their proposed approaches. In our own work, we saw how easy it can be to incorrectly assume an approach is safe, even with significant testing. We examined the number of catastrophic failures that occurred using our RL-Trans method when evaluating with different criteria. Over our full evaluation set of 9000 10-day rollouts, we observed 20 catastrophic failures (a rate of 0.22\%). However, when we only evaluated using the first 5 test seeds, we observed 0 catastrophic failures. Additionally, when we evaluated using 3-day test rollouts instead of 10, we only observed 5 catastrophic failures (a rate of 0.05\%) across the full test set, suggesting that longer rollouts result in a higher catastrophic failure rate. Without evaluating on a large pool of possible scenarios, it is difficult to accurately judge the failure rate of an approach.
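A back-of-the-envelope calculation makes this concrete: assuming a true per-rollout failure rate equal to the 0.22\% observed over our full evaluation, the probability of seeing zero failures in a smaller evaluation is easily computed (illustrative sketch).
\begin{verbatim}
# Probability of observing zero catastrophic failures in n independent
# rollouts, given an assumed true per-rollout failure rate p.
p = 0.0022  # rate observed over our full 9000-rollout evaluation
for n in (100, 500, 9000):
    print(n, (1.0 - p) ** n)
# A 500-rollout evaluation would observe no failures roughly a third
# of the time, falsely suggesting the policy is failure-free.
\end{verbatim}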
\subsubsection{Reducing Catastrophic Failures} We found that varying the training data had a major impact on the catastrophic failure rate. During training, every time we reset the environment we incremented the random seed (which controls the meal schedule and sensor noise) by one. Note that the pools of training, validation, and test seeds were set so there was no chance of overlap resulting from these seed increments. On a challenging subset of 7 patients (described in \textbf{Appendix \ref{app:patient}}), we ran RL-Scratch with and without this seed-increment strategy. The run with the seed increment had a catastrophic failure rate of $0.15\%$; the method without it had a $2.05\%$ rate (largely driven by adult\#009, who was consistently the patient with the highest catastrophic failure rate across experiments). Other approaches can further improve stability. In \textbf{Table \ref{tab:risk}}, our RL results are averaged over three random restarts. This was done to demonstrate that our learning framework is fairly robust to randomness in training data and model initialization. However, in a real application it would make more sense to select one model out of the random restarts using validation data. We apply this approach in \textbf{Table \ref{tab:ss_risk}}. It results in an improvement for all RL methods. Most notably, it drastically reduces the rate of catastrophic failures for the approaches without meal announcements (0.07\% to 0\% for RL-Scratch, and 0.22\% to 0.13\% for RL-Trans). \begin{table*}[t!] \centering \caption{Risk and percent of time Eu/Hypo/Hyperglycemic calculated for the RL approaches treating the 3 training seeds as different random restarts. We see a notable improvement in the stability of the Scratch and Trans approaches.} \scalebox{0.9}{ \begin{tabular}{lccccc} \toprule & Risk & Euglycemia & Hypoglycemia & Hyperglycemia & Failure Rate \\ & $\downarrow$ & (\%) $\uparrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ & (\%) $\downarrow$ \\ \midrule RL-Scratch & 6.39 (4.7-8.9) & 72.96 (69.1-76.6) & 0.62 (0.0-1.7) & 25.96 (22.7-29.6) & 0.00 \\ RL-Trans & 6.57 (5.0-9.3) & 71.56 (67.0-75.7) & 0.80 (0.0-1.9) & 27.19 (23.4-31.2) & 0.13 \\ RL-MA & 3.71 (2.7-6.3) & 77.36 (72.7-83.2) & 0.00 (0.0-0.5) & 22.45 (16.7-26.9) & 0.00 \\ \bottomrule \end{tabular} } \label{tab:ss_risk} \end{table*} \subsubsection{Sample Efficiency and Policy Transfer} While RL-Scratch achieves strong performance on average, it requires a large amount of patient-specific data: 16.5 years per patient. While RL-Trans reduces this amount, it still requires over 2 years of patient-specific data, which would be infeasible for most health applications. Thus, we investigated how performance degrades as less data is used. In \textbf{Figure \ref{fig:finetune}}, we show the average policy performance by epoch of target training for both RL-Scratch and RL-Trans relative to the PID controller, to see how low the patient-specific data requirements can go. Note that the epoch shown determines the maximum possible epoch for our validation selection, not the actual chosen epoch. We see that far less training is required to achieve good performance with RL-Trans. In over 40\% of rollouts, RL-Trans outperforms PID with no patient-specific data (median risk 10.31), and with 10 epochs of training (roughly half a year of data) RL-Trans outperforms PID in the majority of rollouts (59.6\%; median risk 7.95).
Interestingly, the lack of patient-specific data does not appear to cause excessive catastrophic failures. With no patient-specific data the failure rate is 0\%; after 5 epochs of training it rises to 0.5\%, and then declines over training to the final value of 0.22\%. This means two things: i) patient-specific training can increase the catastrophic failure rate, possibly by learning overly aggressive treatment policies, and ii) our current validation selection procedure does not select the epoch that minimizes catastrophic failure rates. Note that we do not observe this for RL-Scratch, where all epochs under 50 have a catastrophic failure rate of over 17\%. These results suggest that our simple transfer approach can be reasonably effective even with limited amounts of patient-specific data. \begin{figure} \centering \includegraphics[width=.5\linewidth]{figures/finetuning_per_epoch.pdf} \caption{The proportion of times RL-Scratch and RL-Trans outperform the average risk of PID over training. We see that without any patient-specific data RL-Trans performs better than PID in 40\% of cases. RL-Scratch requires a significant amount of patient-specific data before achieving comparable performance.}\label{fig:finetune} \end{figure} \section{Conclusion} In this work, we demonstrated how deep RL can significantly improve over alternative approaches to blood glucose control, both with and without meal announcements. While our results applying deep RL to blood glucose control are encouraging, they come with several limitations. First, our results are based on simulation. While the simulator in question is a highly credible one, it may not adequately capture variation across patients or changes in the glucoregulatory system over time. However, as an FDA-approved substitute for animal trials \citep{kovatchev_silico_2009}, success in this simulator is a nontrivial accomplishment. Second, we define a reward function based on risk. Though optimizing this risk function should lead to tight glucose control, it could lead to excess insulin utilization (as insulin use is unpenalized). Future work could consider resource-aware variants of this reward. Finally, we emphasize that blood glucose control is a safety-critical application. An incorrect dose of insulin could lead to life-threatening situations. Many of the methods we examined, even those which achieve good average performance, are susceptible to catastrophic failures. We have investigated several ways to minimize such failures, including modifying the reward function and selecting models across multiple random restarts. While the results from \textbf{Table \ref{tab:ss_risk}} suggest these approaches are successful, the results of \textbf{Section \ref{sec:val_selection}} show that such empirical evaluations can miss failure cases. Despite these limitations, our results clearly demonstrate that deep RL is a promising approach for learning hybrid and fully closed-loop control strategies for blood glucose.
\section{Introduction} \label{sec:intro} Jupiter Trojans are a class of small objects trapped in the Jupiter L4 and L5 Lagrangian points. Their origin is still disputed, with the most likely scenarios falling into the categories 1) coeval trapping of local planetesimals in the 1:1 mean motion resonance with accreting Jupiter as a consequence of drag or collisions \citep{Yoder1979,Shoemakeretal1989} or 2) post Jupiter-formation capture of scattered trans-Neptunian planetesimals following an episode of orbital chaos \citep{Morbidellietal2005} during the orbital migration of the giant planets or as a consequence of the \textit{Jumping Jupiter} scenario as defined in \citet{Nesvornyetal2013}. In either case, Trojans are thought to be primitive objects that experienced little thermal evolution and contain a considerable amount of volatiles, which makes them close relatives of cometary nuclei. \\ With a launch planned for October 2021, Lucy is the thirteenth NASA mission of the Discovery Program and will be the first one to explore the Jupiter Trojan system. Its trajectory is designed to fly by five Trojans -- one of which, (617) Patroclus, is an equal-size binary system -- distributed over the two Lagrangian clouds. The encounter with Leucus is currently planned for June 18, 2028. A coordinated effort has been initiated to support the mission with a systematic program of ground-based observations of the mission targets, aimed at characterizing their dynamical, physical, rotational and photometric properties. The goal is to inform the mission design in order to maximize the scientific return of the encounters and to complement the space-based measurements with data that are most efficiently acquired from the ground. \\ This paper presents new lightcurve photometry of the Lucy target Leucus, which is used, together with the results of stellar occultation campaigns presented in a companion paper \citep{Buieetal2020}, to determine its convex shape, spin axis orientation, albedo, size, sphere-integrated phase curve and V--R color index. \section{Leucus} \label{sec:leucus} (11351) Leucus belongs to the Jupiter L4 Trojan swarm. Radiometric measurements by the IRAS satellite yielded a diameter and a geometric albedo of 42.2$\pm{4.0}$ km and 0.063$\pm{0.014}$, respectively \citep{Tedescoetal2004}. \citet{Gravetal2012} reported a diameter and albedo of 34.155$\pm{0.646}$ km and 0.079$\pm{0.013}$, respectively, based on WISE radiometry. \citet{Levison2016} tentatively classified Leucus as belonging to the D taxonomic type, based on its visible spectral slope \citep{Roigetal2008}. No collisional family membership has been proposed to date. \citet{Frenchetal2013} first realized from lightcurve data that Leucus is a very slow rotator, although they identified an incorrect period of 515 h. \citet{Buieetal2018}, by using photometric observations acquired during the 2016 apparition, combined with the observations by French, determined a firm rotation period of 445.732 $\pm$ 0.021 h. They also estimated a geometric albedo of 0.047 based on the WISE diameter and on an average color index for D-type asteroids.
\section{Observations and data reduction} \label{sec:observations} The new Leucus photometric observations reported in this paper were performed during its 2017, 2018 and 2019 apparitions by using 1.0 m telescopes from the Las Cumbres Observatory Global Telescope (LCOGT) network, the 1.2 m telescope at the Calar Alto Observatory, Spain, and the two 24$''$ telescopes sited at Sierra Remote Observatories (SRO), Auberry, CA, USA, owned and operated by SwRI. The observational circumstances are detailed in Table \ref{tab:obscirc}. Typical exposure times were 5 min for the Calar Alto observations, with the telescope tracked at half the relative tracking rate of the asteroid, in order to reduce smearing and obtain equal point-spread functions for the target and field stars. The LCOGT observations were also typically exposed for 5 min, but were tracked at object rate, since those telescopes do not support half-rate tracking. The SRO systems use an Andor Xyla sCMOS camera with a maximum exposure time of 30 seconds. All data taken on Leucus used this maximum exposure time. In 30 seconds, the motion of Leucus is 125 mas at opposition, corresponding to less than half a pixel of smear and an even smaller fraction of the point-spread function (PSF). The very fast readout and low read noise of the sCMOS camera nearly eliminate the penalties of taking such short exposures. During processing, we can stack as many images as desired to reach an SNR goal. Two stacks are built from the data, one registered on the stars and the other registered on the apparent motion of Leucus. We can use image subtraction to remove the stars in the Leucus stack, but this was not necessary for the 2018 and 2019 data from SRO\null. For these data we chose to stack 10 images at a time and used synthetic aperture integration to retrieve the photometry of Leucus, thus providing an effective integration over 5 minutes. The star-stacked images were used with the same aperture to determine the instrumental magnitudes of the stars. This raw photometry was further binned in time by a factor of 3 to increase the per-point SNR while also allowing the estimation of a good uncertainty for the photometry. For the observations from Calar Alto, most fields were measured with \textit{Himmelspolizey}, a reduction pipeline developed by SH. The latter implements a semi-automatic astrometric/photometric workflow that uses \textit{SExtractor} \citep{Sextractor} for photometric extraction, an optimistic pattern matching algorithm \citep{Tabur2007} for astrometric reduction and a moving object detection algorithm for asteroid identification as described in \citet{Kubicaetal2007}. In the case of crowded fields, synthetic aperture photometry was measured interactively with AstPhot \citep{Mottolaetal1995}.\\ The observations proved to be challenging mainly for two reasons. First, during the 2017 apparition the target was still close to the Galactic center, which meant that involvement of the source with field stars was extremely frequent. This issue was dealt with by applying star subtraction. For the observations taken with the LCOGT network, the subtraction was performed in the image domain, by applying the method described in \citet{Buieetal2018}. For the observations from Calar Alto, the subtraction was performed in the intensity domain, by separately measuring the flux of the involved stars at many epochs distant from the time of the respective appulses.
Although star subtraction mitigates the contamination problem, it is not a perfect solution. While the flux of the polluting star is removed, the photon noise associated with the subtracted flux still contributes to the degradation of the total SNR, sometimes being the dominant contribution. In such cases the affected frames were excluded altogether from the data set. Furthermore, imperfect background source removal, due e.g. to changing seeing conditions during the observations, results in a non-Gaussian distribution of the measurement errors, which can cause outliers in the data that can be difficult to discern.\\ The second challenging aspect was the very long rotation period of the target, which caused a useful night of observations to produce data points at only a single rotational aspect. As a consequence, a large number of nights over extended periods of time were needed to ensure complete rotational coverage. Furthermore, the resulting data set virtually consisted only of sparse photometry, in the sense that subsequent nights' observations did not overlap in rotation phase, preventing a composite lightcurve from being compiled from relative photometry. Composite lightcurves were therefore generally constructed based on absolute photometry, with the disadvantage of the additional uncertainty component due to the errors of the calibration zero points. Fortunately, modern all-sky photometric catalogs with very good coverage, limiting magnitude and accuracy have become available in recent times, so that the problem of the accuracy of the zero point is not as severe as in the past. With the fields of view of our detectors, a large number of suitable photometric catalog stars were always simultaneously present in the same frame as the target, allowing us to identify variable stars and outliers. For the observations from LCOGT, carried out with the SDSS r' filter, we used the APASS photometric catalog \citep{Hendenetal2012}. For the Calar Alto observations, carried out in the Johnson V and Cousins R$_C$ filters, we used the GAIA DR2 catalog \citep{GaiaCollaboration2018} with the transformations from the G photometric band from \citet{Evansetal2018}. The observations from the SRO were performed with a VR broadband filter. Gaia DR2 field stars were used to directly express the asteroid in the Gaia G bandpass without further transformation or color correction. Typically, the relative photometric accuracy of the individual points (binned points for the SRO) ranged from 0.01 to 0.03 mag RMS. The absolute photometric accuracy of the zero points was typically of the order of 0.02 mag RMS. For all observations, the magnitudes were reduced to 1 au from the Sun and the observer. The times were converted to the Barycentric Dynamical Time (TDB) frame, in order to provide a uniform time reference. Further, the times were corrected for the one-leg, target-observer light-travel time.\\ For the purpose of compact representation -- but not for model computation -- the photometric time series were compiled into composites for each individual apparition. This was achieved by performing a Fourier series fit of the fourth order to determine the respective best-fit synodic periods by using the procedure described in \citet{Harrisetal1989}. Normally, we would fit the Fourier coefficients and a phase function simultaneously to absolute photometry data, in order to compensate for brightness changes due to the phase curve.
The implicit assumption is that the shape of the lightcurve -- in particular its amplitude -- remains constant over a few consecutive rotational cycles. While this is a reasonable assumption for most asteroids, in the case of Leucus, given its very slow rotation, the amplitude of the lightcurve can change over consecutive rotational cycles due to the change in viewing and observing geometry. Fitting the phase function simultaneously with the rotation period would tend to compensate for the change in the lightcurve amplitude by skewing the phase function, which could result in varying phase curve slopes for different apparitions. For this reason we performed the composites with a nominal linear phase coefficient $\beta=0.0395$ mag/$\degr$ for all of the observations reported in this paper (see section \ref{subsec:phasecurve}).\\ The composite lightcurves for the 2017, 2018 and 2019 apparitions are reported in Figure \ref{fig:compo-fig}. It can be seen that the respective synodic periods differ from each other by as much as 0.7 h, which corresponds to about 0.15\%. This is expected for two reasons. First, due to the slow rotation, the useful baseline for the determination of the periods for each apparition covers just a handful of rotational cycles, which limits the accuracy of the determination. Secondly -- and to a lesser extent -- the apparent instantaneous rotation rate depends on the rate of change of the topocentric ecliptic longitude of the object, which can make the synodic period change slightly during the course of an apparition and from one apparition to the next. In the next sections we will derive a very accurate sidereal period and phase function by using a dynamical and shape model that makes use of all of the available observations over a baseline of about 6 years.
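As an illustration of the composite-building step, the following minimal sketch fits a fourth-order Fourier series at trial periods in the spirit of \citet{Harrisetal1989}; the arrays \texttt{t} (light-time-corrected epochs in days), \texttt{m} (magnitudes reduced to 1 au and to the reference phase angle) and \texttt{sigma} (their uncertainties) are assumed inputs, and the code is schematic rather than our actual pipeline.
\begin{verbatim}
import numpy as np

def fourier_chi2(t, m, sigma, period_hours, order=4):
    """Chi-squared of a least-squares Fourier-series fit (given order)
    to reduced magnitudes m(t) folded at a trial synodic period."""
    phase = 2.0 * np.pi * ((t * 24.0) % period_hours) / period_hours
    cols = [np.ones_like(phase)]  # constant term
    for k in range(1, order + 1):  # sine/cosine pairs up to `order`
        cols += [np.sin(k * phase), np.cos(k * phase)]
    A = np.column_stack(cols) / sigma[:, None]  # weighted design matrix
    b = m / sigma
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    resid = b - A @ coeffs
    return float(resid @ resid)

def best_synodic_period(t, m, sigma, trial_periods_hours):
    """Scan trial periods and return the one minimizing the residuals."""
    chi2 = [fourier_chi2(t, m, sigma, p) for p in trial_periods_hours]
    return trial_periods_hours[int(np.argmin(chi2))]
\end{verbatim}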
\begin{deluxetable*}{ccccccccccc} \tablenum{1} \tablecaption{Observational circumstances\label{tab:obscirc}} \tablewidth{0pt} \tablecolumns{11} \tablehead{ \colhead{Date} & \colhead{$\lambda$} & \colhead{$\beta$} & \colhead{$\alpha$} &\colhead{r} & \colhead{$\Delta$} & \multicolumn{2}{c}{$\lambda$ (PAB) $\beta$} & \colhead{Band} & \colhead{Observatory} & \colhead{Observers}\\ \colhead{(UT)} & \multicolumn{2}{c}{($\degr$ J2000)} & \colhead{($\degr$)} &\colhead{(au)} & \colhead{(au)} & \multicolumn{2}{c}{($\degr$ J2000)} } \startdata 2017 May 1.1 & 287.4 & +5.7 & 9.683 & 5.5071 & 5.0320 & 282.5 & +5.5 & R$_C$& 493& SH, SMo \\ 2017 May 2.1 & 287.4 & +5.8 & 9.616 & 5.5067 & 5.0175 & 282.6 & +5.5 & R$_C$& 493& SH, SMo \\ 2017 May 20.1 & 287.0 & +6.3 & 7.887 & 5.5005 & 4.7745 & 283.0 & +5.9 & R$_C$& 493& SH, SMo \\ 2017 May 28.1 & 286.5 & +6.6 & 6.841 & 5.4977 & 4.6852 & 283.1 & +6.1 & R$_C$& 493& SMo, SH \\ 2017 May 30.1 & 286.3 & +6.6 & 6.550 & 5.4970 & 4.6646 & 283.1 & +6.1 & R$_C$& 493& SMo, SH \\ 2017 May 31.0 & 286.3 & +6.6 & 6.408 & 5.4966 & 4.6551 & 283.1 & +6.1 & R$_C$& 493& SMo, SH \\ \hline 2017 Jul 16.4 & 281.0 & +7.6 & 2.738 & 5.4798 & 4.4914 & 282.2 & +6.9 & r' & Q64 & MWB, AZ \\ 2017 Jul 17.0 & 280.9 & +7.6 & 2.833 & 5.4795 & 4.4932 & 282.1 & +6.9 & r' & W86 & MWB, AZ \\ 2017 Jul 17.3 & 280.9 & +7.6 & 2.882 & 5.4794 & 4.4942 & 282.1 & +6.9 & r' & W85 & MWB, AZ \\ 2017 Jul 17.6 & 280.8 & +7.6 & 2.933 & 5.4793 & 4.4952 & 282.1 & +6.9 & r' & Q63 & MWB, AZ \\ 2017 Jul 17.9 & 280.8 & +7.6 & 2.980 & 5.4792 & 4.4962 & 282.1 & +6.9 & r' & K93 & MWB, AZ \\ 2017 Jul 18.2 & 280.8 & +7.6 & 3.031 & 5.4791 & 4.4973 & 282.1 & +6.9 & r' & W86 & MWB, AZ \\ \hline 2018 Jun 11.1 & 318.3 & +11.1 & 9.382 & 5.3394 & 4.7451 & 313.5 & +10.5 & R$_C$ & 493 & SH, SMo\\ 2018 Jul 9.1 & 316.6 & +12.1 & 5.829 & 5.3263 & 4.4379 & 313.8 & +11.1 & V & 493 & SH, SMo\\ 2018 Jul 15.0 & 316.0 & +12.2 & 4.895 & 5.3236 & 4.3947 & 313.8 & +11.2 & V & 493 & SH, SMo\\ 2018 Aug 6.9 & 313.2 & +12.7 & 2.406 & 5.3127 & 4.3186 & 313.3 & +11.5 & R$_C$ & 493 & SH, SMo\\ 2018 Aug 8.0 & 313.1 & +12.7 & 2.432 & 5.3123 & 4.3187 & 313.3 & +11.5 & R$_C$ & 493 & SH, SMo\\ 2018 Aug 9.0 & 312.9 & +12.7 & 2.475 & 5.3118 & 4.3192 & 313.3 & +11.5 & R$_C$ & 493 & SH, SMo\\ 2018 Sep 4.9 & 309.8 & +12.5 & 6.171 & 5.2990 & 4.4362 & 312.8 & +11.5 & V & 493 & SH, SMo\\ \hline 2019 Nov 9.1 & 342.2 & +12.6 & 10.103 & 5.1016 & 4.5976 & 347.3 & +12.0 & VR & G80 & MWB\\ 2019 Nov 10.1 & 342.2 & +12.6 & 10.181 & 5.1012 & 4.6112 & 347.4 & +12.0 & VR & G80 & MWB\\ 2019 Nov 11.1 & 342.2 & +12.5 & 10.257 & 5.1008 & 4.6249 & 347.4 & +12.0 & VR & G80 & MWB\\ 2019 Nov 12.1 & 342.2 & +12.5 & 10.329 & 5.1004 & 4.6387 & 347.5 & +11.9 & VR & G80 & MWB\\ 2019 Nov 24.1 & 342.6 & +12.0 & 10.962 & 5.0955 & 4.8123 & 348.2 & +11.7 & VR & G80 & MWB\\ \enddata \tablecomments{This table is an excerpt. The observational circumstances for all of the observation nights are reported in the online material. $\lambda$ and $\beta$ are the topocentric ecliptic longitude and latitude of the target, respectively. $\alpha$ is the solar phase angle, r is the heliocentric distance and $\Delta$ is the topocentric range of Leucus. $\lambda$ and $\beta$ (PAB) are the topocentric ecliptic longitude and latitude of the phase angle bisector, as defined in \citet{Harrisetal1984} } \end{deluxetable*} \begin{figure} \centering \includegraphics[width=13.8cm]{combined.pdf} \caption{Composite lightcurves for the 2017, 2018 and 2019 apparitions.
The data points are folded with the synodic periods listed in the respective graphs, with zero phase corresponding to the respective T$_0$ epochs. The T$_0$ values are one-leg, light-travel-time-corrected Modified Julian Dates (MJD) expressed in the TDB uniform time frame. The listed synodic periods are the exact numbers used for folding the composites and, as such, are reported without uncertainty. Data points beyond rotation phase 1.0 are repeated for clarity. The magnitudes are reduced to 1 au from the observer and from the Sun and to the respective reference phase angles by using a nominal linear phase coefficient $\beta=0.0395$ mag/$\degr$.\label{fig:compo-fig}} \end{figure} \section{Modeling} \label{sec:modeling} \subsection{Data} \label{subsec:data} The photometric lightcurves presented in the previous section and in \citet{Buieetal2018}, and the results from the stellar occultation campaigns reported in \citet{Buieetal2020}, constitute the bulk of the observational data used for our modeling work. In addition, we made use of dense lightcurve photometry by \citet{Frenchetal2013} (retrieved through the ALCDEF service (\url{http://alcdef.org/})) and sparse photometry from the following sources: \begin{enumerate} \item The Gaia DR2 database \citep{GaiaCollaboration2018} retrieved through the VizieR server (\url{https://vizier.u-strasbg.fr/}), \item The ZTF project \citep{Bellmetal2019} retrieved through the IRSA server (\url{https://irsa.ipac.caltech.edu/applications/ztf/}) and from the nightly transient archive at \url{https://ztf.uw.edu}, \item The PAN-STARRS-1 DR2 database \citep{flewellingetal2016} retrieved through a query to the MAST archive (\url{https://catalogs.mast.stsci.edu/}), and \item The ATLAS project \citep{Tonryetal2018} retrieved through the AstDys database (\url{https://newton.spacedys.com/astdys/}). \end{enumerate} For all but the ATLAS observations, the photometric uncertainties were retrieved along with the magnitude data. In the case of the ATLAS observations, we estimated nominal photometric uncertainties by compiling the data into composites for the individual oppositions and computing their residuals. Since the survey observations were acquired in a variety of different photometric systems, for which transformations to the Johnson system are not accurately established, they were treated as relative photometry. Similarly to the dense data set, the times were light-time corrected and the magnitudes reduced to 1 au. Although these additional datasets provide varying degrees of accuracy, they proved to be very useful to extend the coverage and baseline of the observations, which, combined, cover a period of about 6 years.\\ \subsection{Convex inversion} \label{subsec:convexinversion} In this paper we apply the convex shape inversion approach described in \citet{Kaasetal2002book} and references therein to the photometric time series to simultaneously solve for the sidereal period, the spin axis orientation, the photometric function and a convex, polyhedral approximation of the shape. The occultation data are used as a constraint to resolve the spin axis ambiguity, to determine the scale of the object -- and hence its albedo -- and to refine the orientation of the spin axis.
Although it is likely that Leucus does contain some degree of global-scale concavity, we did not feel that the available data -- both because of coverage and photometric accuracy, in the case of lightcurve data, and because of the limited unique observation geometries, in the case of occultation data -- would allow us to address the intrinsic non-uniqueness of the non-convex problem. On the other hand, the convex inversion scheme offers the advantage of a provably convergent method that results in a unique solution in the case of a convex shape \citep{KaasalainenandLamberg2006}, and gracefully degrades in the case of moderate concavities. Non-convex modeling of Leucus will be the subject of future work as soon as more data -- especially at additional occultation geometries -- become available. Although a working implementation of the convex inversion algorithm (\textit{convexinv}) is publicly available \citep{Durechetal2010}, we decided to develop our own implementation based on the original publications \citep{KaasalainenandTorppa2001,Kaasalainenetal2001}. This approach has allowed us to overcome some limitations of the code (described later), to expand its functionality, and to correct a minor bug in the computation of the $\chi^2$ metric that is present in the current \textit{convexinv} version and that has been promptly communicated to the author. Although we did not make use of any code from \textit{convexinv}, we did use that software to validate the results of our own code on a few test cases.\\ The surface brightness of the object is described through its photometric function, which, for the purpose of this work, is assumed to be separable into a \textit{disk function} and a \textit{surface phase function} (see e.g. \citealt{Schroederetal2013}). Following \citet{Kaasalainenetal2001}, we adopt Lommel-Seeliger-Lambert (LSL) scattering as a disk function and a 3-parameter exponential-linear combination as a surface phase function, as described in Appendix A. The LSL disk function is the linear combination of the Lommel-Seeliger and Lambert scattering functions through a partition parameter $c$, and is equivalent to the Lunar-Lambert disk function (\citealt{Lietal2015} and references therein) when the latter is used with a partition function independent of the phase angle. We fixed the $c$ parameter to a constant value of $c=0.1$, appropriate for dark asteroids \citep{Kaasalainenetal2005}. For this work we did not attempt to use more complex photometric models -- such as the Hapke function (\citealt{Hapke2012} and references therein) -- because, due to the small phase angle range in which Trojans can be observed from Earth, no meaningful retrieval of the model parameters is achievable.\\ The brightness of the object depends on the product of its size and its albedo. From unresolved photometric measurements alone, it is not possible to retrieve those two quantities independently. Independent measurements such as thermal radiometry, stellar occultations or direct imaging, however, offer the possibility to disentangle the two quantities. In the absence of more detailed information, we assume that the photometric properties of the target -- in particular its albedo -- are uniform over its surface.
This is a reasonable assumption because, although albedo variations on asteroids do occur, the observed lightcurve variations tend to be dominated by the changing cross section of a non-spherical shape.\\ The convex inversion scheme models the brightness of the target in the space of its Extended Gaussian Image (EGI, \citealt{KaasalainenandTorppa2001}), as opposed to the 3D object space. The EGI represents the discrete equivalent of the inverse of the Gaussian surface density, and is used to represent the areas of the facets oriented towards a particular direction. For the Leucus model we parametrize the EGI with a spherical harmonics expansion of rank and order 8, which results in a total of 81 shape parameters, one of which represents its scale. An exponential-function representation of the EGI is used to guarantee the positivity of the facet areas. We integrate the EGI over the unit sphere by sampling the spherical harmonics expansion at discrete points following a Lebedev quadrature \citep{LebedevandLaikov1999}, which offers greater efficiency compared to a quadrature based on, e.g., uniform or random sampling \citep{Kaasalainenetal2012}. In the case of Leucus we applied a Lebedev rule of order 302, which results in an EGI with the same number of facets.\\ Contrary to \citet{KaasalainenandTorppa2001} -- and to the \textit{convexinv} implementation -- we minimize a weighted metric for solving the least-squares problem, in order to properly account for the varying degree of accuracy of the different data sets. In the case of absolute observations, the optimization is performed by minimizing the reduced $\chi^2_{red}$ metric defined as \begin{equation} \chi_{red}^{2}= \frac{1}{N}\sum\limits_{i}{\frac{(L_i^{obs}-L_i^{mod})^2}{\sigma_i^2}} \end{equation} where $L_i^{obs}$ and $L_i^{mod}$ are the observed and model intensities, respectively, $\sigma_i$ are the intensity uncertainties and $N$ is the number of degrees of freedom for errors. The index $i$ runs over all of the photometric data points.\\ In the case of relative observations, we minimize the deviations of the intensities relative to the average of the respective lightcurve: \begin{equation} \chi_{rel}^{2}= \frac{1}{N}\sum\limits_{k,j}\left(\frac{\bar{L}^{obs}_{j}}{\sigma_{k,j}}\right)^2\left(\frac{L_{k,j}^{obs}}{\bar{L}^{obs}_{j}}-\frac{L_{k,j}^{mod}}{\bar{L}^{mod}_{j}}\right)^2 \end{equation} where the index $k$ runs over the data points of lightcurve $j$, and $\bar{L}^{obs}_{j}$ and $\bar{L}^{mod}_{j}$ are the average intensities of the observed and modeled lightcurve $j$, respectively. The term $\sigma_{k,j}$ represents the uncertainty of the $k^{th}$ data point of lightcurve $j$. If absolute observations were performed during the same apparition in two photometric bands, then a color index term is introduced to tie the two photometric systems together. The color term is also optimized in the procedure. If, on the other hand, one photometric band is never used together with another photometric system during at least one apparition, then these series of observations are treated as relative photometry, in order to avoid a possible parameter coupling between the color index and the spin axis orientation. The non-linear optimization is performed with the Levenberg-Marquardt algorithm \citep{NumericalRecipes}.\\ A unique transformation from the EGI to a polyhedron in 3D space is guaranteed by a theorem of Minkowski \citep{Minkowski1897}.
For this transformation we use the iterative scheme proposed by \citet{Lamberg1993}. The body reference system is defined such that the Z axis coincides with the spin axis. The plane that contains the body center of mass and that is perpendicular to the Z axis defines the XY plane in the body system. The direction of the X axis is chosen such that it coincides with the projection of the principal axis of smallest inertia onto the XY plane. The body X axis also defines the location of the prime meridian and thus the zero longitude. \subsection{Rotation model} \label{subsec:rotationandshape} The rotation period strongly modulates the spectrum of the $\chi^2$ of the fit, with periodic local minima at a minimum spacing $\Delta P\approx P^2/(2T)$ \citep{Kaasalainenetal2001}, where $P$ is the rotation period and $T$ is the total baseline of the lightcurve coverage. It is therefore important that the optimization be started in the vicinity of the correct period, in order to prevent the optimizer from becoming trapped in the local minimum of an alias period. For this reason, the search for the correct sidereal period is the first step in the convex inversion scheme. The search is performed by running the inversion procedure with all trial periods in a relevant range as starting conditions, with a step sufficiently smaller than the minima separation. For each trial period we use twelve different starting pole directions. Figure \ref{fig:period-fig} shows the results of the period scan for Leucus. \begin{figure} \plotone{period_scan.pdf} \caption{Result of the sidereal period scan. Each data point corresponds to a local-minimum solution obtained by using trial periods in the range between 300 h and 600 h as iteration start values for the sidereal period. The global minimum around 445.7 h corresponds to the best-fit solution. \label{fig:period-fig}} \end{figure} In order to identify the coarse direction of the spin axis, we run the optimization procedure by using the best-fit sidereal period derived in the previous paragraph as a starting value and by fixing the spin axis orientation to each of about 20,000 trial directions equally spaced on the celestial sphere. The shape of the object, the sidereal period and the photometric parameters (but not the pole coordinates) are simultaneously optimized for each trial pole. The resulting $\chi^2$ values for each solution are mapped on the celestial sphere via a polar azimuthal equidistant projection and are shown in Figure \ref{figchi2map-fig}. As expected, two equally significant loci for the best solution are identified. This is a consequence of the \textit{ambiguity theorem} \citep{KaasalainenandLamberg2006}, which states that if disk-integrated photometric observations are always carried out in the same photometric plane -- as is the case for low-inclination objects observed from Earth -- then two indistinguishable solutions exist that satisfy the observations and that are separated by about 180$\degr$ in ecliptic longitude. The shapes corresponding to the two solutions are approximately mirror images of one another about the body XY plane. The graph also shows that both solutions are prograde and, as already inferred by \citet{Buieetal2018}, the obliquity of the spin axis is low.\\ \begin{figure} \plotone{Fig3_cropped.pdf} \caption{Polar azimuthal equidistant projection of the $\chi^2$ of the pole solutions. The coordinates are expressed in the J2000 Ecliptic Frame.
The left panel is centered on the North Ecliptic pole and the right one on the South Ecliptic pole. The loci of the two complementary best solutions are clearly visible as white regions. \label{figchi2map-fig}} \end{figure} \subsection{Fit to the occultation data} \label{subsec:occultationfit} By providing disk-resolved information, occultation data can resolve the pole ambiguity, fix the absolute scale of the shape model and, together with the determined $H_V$ value, measure its geometric albedo.\\ In principle, occultation data can also be used to derive non-convex shape models, provided sufficient, densely sampled silhouettes are available at multiple rotation phases. Much work has recently been done concerning the optimal fusion of data coming from disk-integrated photometry and disk-resolved techniques such as stellar occultations, adaptive-optics direct imaging and interferometry (e.g. \citealt{KaasalainenandViikinkoski2012,Viikinkoskietal2015}). In the case of Leucus, however, both the limited lightcurve coverage and the sparse silhouette sampling would not allow a reliable non-convex model to be derived. For this reason we decided to adopt an approach similar to \citet{Durechetal2011} and favor the advantages of the uniqueness and stability of a convex solution. \\ For each of the two best candidate solutions from the previous section, we project the vertices of the shape model onto the plane of sky at the time of each occultation event and then compute the 2D convex hull of the projected points. Applying this procedure to the two complementary solutions visible in Fig. \ref{figchi2map-fig} allowed us to unambiguously identify the correct solution as the one centered at an ecliptic longitude of around 210$\degr$. However, it also became apparent that the best solution had a slight systematic deviation in the orientation of the projection with respect to the occultation data, which could be explained by a small offset ($\approx$5$\degr$) in the direction of the spin axis of the model. Such a small mismatch was not unexpected, as the loci of the solutions in Fig. \ref{figchi2map-fig} are quite broad and shallow, and a small shift in the spin axis direction of the model would produce fits to the lightcurves with similar $\chi^{2}$. Disk-resolved data such as stellar occultations, on the other hand, are much more sensitive to a pole misalignment. For this reason we decided to use the occultation data for the refinement of the solution. As a goodness-of-fit measure for the occultation data we define the metric \begin{equation} \chi_{occ}^{2}= \sum\limits_{i,j}\frac{{(D_{ij})^2}}{N_{op}} \end{equation} where $D_{ij}$ represents the minimum Euclidean distance between the occultation transition point $j$ (either ingress or egress) of event $i$ and the model 2D convex hull of event $i$. $N_{op}$ is the total number of observed transition points. This metric is different from the one chosen by \citet{Durechetal2011}, who prefer to use the distance of the occultation points to the occultation limb measured in the direction of the asteroid ground track. Their choice is justified by the fact that the largest error contribution in their occultation data comes from timing errors and observer reaction times, which act along track. In our case, on the other hand, we estimate the largest errors to be due to the convex shape model, and these are not expected to have a preferred direction.
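For concreteness, a minimal sketch of this distance metric for a single event is given below, assuming \texttt{projected\_vertices} (the sky-plane-projected model vertices) and \texttt{transition\_points} (the observed ingress/egress points) as inputs; the full metric simply accumulates the squared distances over all events before dividing by the total number of transition points.
\begin{verbatim}
import numpy as np
from scipy.spatial import ConvexHull

def point_segment_distance(p, a, b):
    """Minimum distance from 2D point p to the segment from a to b."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def chi2_occ_event(projected_vertices, transition_points):
    """Mean squared minimum distance from each observed ingress/egress
    point to the model silhouette, i.e. the boundary of the 2D convex
    hull of the projected vertices (single-event contribution)."""
    hull = ConvexHull(projected_vertices)
    boundary = projected_vertices[hull.vertices]  # ordered hull points
    segments = list(zip(boundary, np.roll(boundary, -1, axis=0)))
    d2 = [min(point_segment_distance(p, a, b) for a, b in segments) ** 2
          for p in transition_points]
    return float(np.sum(d2)) / len(transition_points)
\end{verbatim}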
\\ $\chi_{occ}^{2}$ is minimized by optimizing the global scale and the Cartesian coordinates of the centers of the projections for each occultation epoch. The latter is necessary to compensate both for inaccuracies in the position of the occulted stars and for the uncertainties of the target ephemeris at the epochs of the occultations. In practice, the minimization is performed by varying the projection centers and the $A_{LSL}$ albedo (see Appendix A) -- which constrains the scale -- in an adaptive grid-search fashion.\\ One practical problem arises from the fact that the convex shape optimization is performed in the EGI space, while the occultation profile optimization is done in the space of the projected shape. It is therefore impractical to perform a simultaneous, combined optimization. For this reason we used $\chi_{occ}^{2}$ as a mild penalty term in a combined $\chi_{tot}^{2}$ of the form \begin{equation} \chi_{tot}^{2}= \chi_{conv}^{2} + \lambda \chi_{occ}^{2} \end{equation} where $\chi_{conv}^{2}$ is the term coming from the convex inversion and $\lambda$ is a small weight. The value of $\lambda$ was determined by experimentation, adopting the value that minimized $\chi_{occ}^{2}$ without significantly increasing $\chi_{conv}^{2}$. We have then computed the quantity $\chi_{tot}^{2}$ for one hundred discrete pole directions within a radius of 10$\degr$ of the best solution (and of the complementary one).\\ \begin{figure} \plotone{mod.pdf} \caption{Observed lightcurves shown along with the respective model lightcurves. The data are plotted in intensity and are normalized to unity at the model maximum value for the respective opposition. The dates reported in the labels represent the epoch of the first observation of each sequence.\label{fig:modlcs}} \end{figure} The parameters for the best-fit solution are reported in Table \ref{tab:results}. The errors quoted for the different quantities have been determined by computing perturbed solutions and investigating the effects on the $\chi^{2}$. It must be noted that this method produces an evaluation of the statistical component of the uncertainties only. The largest contribution to the errors is thought to be due to violations of the model assumptions, such as the assumed functional form of the photometric model and the convexity assumption. Those contributions, however, are virtually impossible to quantify formally. For this reason we refrained from using more sophisticated statistical error models, such as the Markov chain Monte Carlo (MCMC) method, because they also only address the stochastic component of the uncertainty. Figure \ref{fig:modlcs} shows the synthetic lightcurves for the corresponding shape model for the epochs covered by the dense observations reported in this paper. The lightcurve intensities are corrected for the changing heliocentric and topocentric range and are normalized to unity at the maximum of the respective observation window. Besides the rotational variation, the intensity variation due to the changing phase angle is visible. Figure \ref{fig:silhouette-fig} shows the resulting best-fit model overplotted onto the occultation chords by \citet{Buieetal2020}. The complementary (wrong) solution is also plotted in light gray, showing how occultation data can help identify the correct solution.
Please note that, for comparison purposes, we plot the data following Buie's convention of projecting the shape onto the plane of sky \citep{Green}, with the $\eta$ coordinate increasing towards celestial North and the $\xi$ coordinate increasing towards East. This is a different convention from that of \citet{Durechetal2011}, who project the shape onto the fundamental plane, with the $\eta$ coordinate increasing towards West and the $\xi$ coordinate increasing towards North. \begin{figure} \plotone{11351_rot_multi.pdf} \caption{Occulting silhouettes of the best-fit convex model of Leucus (solid black line) along with the rejected complementary model displayed in light gray. The five panels correspond to the epochs of the five occultation events reported by \citet{Buieetal2020}. The red circles correspond to the starts and ends of the respective positive occultation chords. The red, green, and blue arrows represent the X, Y, and Z axes of the body-fixed reference frame, respectively. $\theta$ represents the aspect angle, i.e. the angle between the spin axis and the target-observer direction. \label{fig:silhouette-fig}} \end{figure} We note that the model reproduces the occultation profiles well and captures the general shape of the object, within the limits of a convex representation. In particular, it reproduces the flat sides visible during the events LE20181118 and LE20191002 and the polygonal appearance of event LE20181118. It is also important to note that the occultation data are not directly used to derive the convex shape. The shape is influenced by the occultation data only indirectly, through the refinement of the spin axis orientation. In this light, the match of the convex shape model to the occultation data appears even more convincing. \begin{figure} \plotone{Fig6.pdf} \caption{Six orthogonal projections of the best-fit Leucus convex shape arranged similarly to an unfolded dice. For contrast reasons, Lambert shading is used for the figure rendering instead of the LSL disk function used in the modeling. \label{fig:dice-fig}} \end{figure} Figure \ref{fig:dice-fig} shows six orthogonal views of the best-fit Leucus shape model. As already hinted at by the occultation silhouettes, Leucus' shape deviates considerably from an ellipsoid and is characterized by a comparatively flat Northern hemisphere. The convexity residual of the shape model resulting from the Minkowski transformation \citep{KaasalainenandTorppa2001} is a mere 0.14\%, which, taken at face value, would suggest negligible global-scale concavities. However, the small phase angle range at which observations are available reduces the diagnostic value of this parameter. As a sanity check, we computed the model's inertia tensor with the method by \citet{Dobrovolskis1996} -- assuming a uniform bulk density -- and established that the principal inertia axis is misaligned with respect to the body's rotation axis by about 8$\degr$. In this regard, we recall that the derived shape model represents a convex approximation of the shape, and ignoring concavities can shift the direction of the inertia axes. Similarly, a violation of the assumption of uniform bulk density would shift the direction of the principal axis of inertia. The observed misalignment is therefore not necessarily a hint that the object is not in a principal rotation state. Rather, it is an expression of the fact that the convex shape represents a \textit{photometric shape}, which can locally differ from the physical shape.
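As an illustration of this kind of consistency check, the misalignment angle can be estimated with a simple Monte Carlo integration over the convex hull of the model vertices; this is a numerical stand-in for the analytic method of \citet{Dobrovolskis1996}, again assuming uniform bulk density.
\begin{verbatim}
import numpy as np
from scipy.spatial import Delaunay

def inertia_misalignment(vertices, n_samples=200_000, seed=0):
    """Angle (deg) between the maximum-inertia principal axis of the
    uniform-density convex hull of `vertices` (N x 3, body frame) and
    the body Z (spin) axis, via rejection-sampled Monte Carlo."""
    rng = np.random.default_rng(seed)
    tri = Delaunay(vertices)
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    pts = rng.uniform(lo, hi, size=(n_samples, 3))
    inside = pts[tri.find_simplex(pts) >= 0]   # keep interior points
    inside -= inside.mean(axis=0)              # about the center of mass
    x, y, z = inside.T
    I = np.array([[np.mean(y*y + z*z), -np.mean(x*y),      -np.mean(x*z)],
                  [-np.mean(x*y),       np.mean(x*x + z*z), -np.mean(y*z)],
                  [-np.mean(x*z),      -np.mean(y*z),        np.mean(x*x + y*y)]])
    _, eigvec = np.linalg.eigh(I)
    axis = eigvec[:, -1]                       # largest-inertia axis
    cosang = abs(axis @ np.array([0.0, 0.0, 1.0]))
    return np.degrees(np.arccos(np.clip(cosang, 0.0, 1.0)))
\end{verbatim}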
The maximum extent of a complex shape can be defined in several different ways (see e.g. \citealt{Torppaetal2008}). For our model we define the maximum dimensions as \begin{equation} \begin{array}{l} L_X=\max_i (x_i)-\min_i (x_i) \\ L_Y=\max_i (y_i)-\min_i (y_i) \\ L_Z=\max_i (z_i)-\min_i (z_i) \end{array} \end{equation} where $(x_i,y_i,z_i)$ are the Cartesian coordinates of the $i^{th}$ vertex of the shape model and the $X$, $Y$, and $Z$ body axes are defined as per Sec. \ref{subsec:convexinversion}. This definition produces similar -- but not identical -- extents as the \textit{Overall Dimensions} (OD) definition of \citet{Torppaetal2008} (see their Fig. 1). In particular, with our definition the largest extent is computed in the general direction of the principal axis of smallest inertia, which represents a natural axis of the body. With the OD definition of \citet{Torppaetal2008}, on the other hand, the maximum dimension is the largest extent occurring anywhere in the XY plane. As an example, for a hypothetical body with a rectangular equatorial cross section, with our definition the maximum extent would be the longest side of the rectangle, while according to the definition by \citet{Torppaetal2008} it would be the diagonal of the rectangle. With our definition, the maximum dimensions of Leucus are $L_X=60.8$ km, $L_Y=39.2$ km and $L_Z=27.8$ km (see Table \ref{tab:results}). These values compare with the axes (63.8, 36.6, 29.6) km of the ellipsoidal approximation by \citet{Buieetal2020}, which were derived under the assumption of a strictly equatorial aspect during the occultation events. In the same table the orientation of the model is described both by reporting the Ecliptic J2000 coordinates of the spin axis and the initial angle $\Phi_0$ at the epoch $T_0$, following the formalism by \citet{Kaasalainenetal2001}, and by using the IAU convention of reporting the ICRF equatorial coordinates of the spin axis and the $W_0$ angle at the standard epoch J2000 (\citealt{Archinaletal2018,Archinaletal2019}). The surface-equivalent spherical diameter of the convex model is $D=41.0\pm0.7$ km, whereas its volume is $(30\pm2)\times10^3 ~\rm km^3$. It should be noted, however, that due to the likely presence of concavities the quoted volume should rather be regarded as an upper bound. \begin{deluxetable*}{cc} \tablenum{2} \tablecaption{Results\label{tab:results}} \tablewidth{0pt} \tablecolumns{2} \tablehead{ } \startdata Sidereal Period (h) & 445.683 $\pm$ 0.007\\ Pole J2000 Ecl. Longitude ($\degr$) & 208 \\ Pole J2000 Ecl. Latitude ($\degr$) & +77 \\ Pole J2000 RA ($\degr$) & 248 \\ Pole J2000 Dec ($\degr$) & +58\\ Radius of pole uncertainty ($\degr$, 1$\sigma$) &3\\ Ecliptic obliquity of pole ($\degr$) & 13\\ Orbital obliquity of pole ($\degr$) & 10 \\ $T_0$ (JD$\rm _{TDB}$) & 2456378.0 \\ $\Phi_0$ ($\rm \degr$) & -76.129 \\ $W_0$ ($\degr$) & 60.014 \\ $\dot{W}$ ($\rm\degr day^{-1}$) & 19.38596 $\pm$ 0.00030\\ $p_{\rm V}$ & 0.043 $\pm$ 0.002\\ $A_{\rm LSL}$ & 0.061 $\pm$ 0.002\\ $A_0$ & 0.23 $\pm$ 0.09\\ $D$ (rad)& 0.075 $\pm$ 0.015\\ $k$ ($\rm rad^{-1}$)& -1.07 $\pm$ 0.23\\ $c$ (fixed)& 0.1 \\ $H_{\rm V-LSL}$ (sph. int.)& 10.979 $\pm$ 0.037\\ $H_{\rm V-lin}$ (sph. int.)& 11.034 $\pm$ 0.035\\ $\beta$ ($\rm mag/\degr$) & 0.0395 $\pm$ 0.005\\ $H_{\rm V-HG}$ (sph.
int.)& 10.894 $\pm$ 0.004\\ $G$ & 0.34 $\pm$ 0.02\\ $H_{\rm V-HG_1G_2}$ (sph. int.)& 10.95 $\pm$ 0.01\\ $G_1$ & 0.63 $\pm$ 0.04\\ $G_2$ & 0.23 $\pm$ 0.02\\ $V-R$& 0.464 $\pm$ 0.015\\ $V-r'$& 0.313 $\pm$ 0.021\\ $L_X \rm(km)$& 60.8 \\ $L_Y \rm(km)$& 39.2 \\ $L_Z \rm(km)$& 27.8 \\ Surface-equivalent spherical diam. $\rm(km)$& 41.0 $\pm$ 0.7\\ Photometric surface $\rm(km^2)$ & 5288 $\pm$ 180\\ Volume $\rm(km^3)$ & $\leq 3.0\times10^4$\\ \enddata \tablecomments{Please refer to the text for the definition of the respective quantities.} \end{deluxetable*} \subsection{Sphere-integrated phase curve} \label{subsec:phasecurve} The disk-integrated phase curve of an object condenses the often complex parameter space of a photometric function into a two-dimensional space. As such, the phase curve is a useful phenomenological tool to compare and classify different objects, and to infer the presence of physical phenomena such as coherent backscattering. A practical difficulty in deriving phase curves, however, is that during a single apparition -- and even more so across multiple apparitions -- other viewing and illumination angles (i.e. the aspect angle and the photometric obliquity) change in addition to the phase angle, which affects the phase curve. This effect is more pronounced the more the object deviates from a sphere. Having determined the surface phase function through modeling, however, it is possible to compute the phase curve that the object would display if it were a sphere, freeing it from any dependence on the shape and making it solely an expression of the photometric properties of the surface regolith. This is achieved by integrating the surface phase function over a sphere in a range of phase angles, as detailed in Appendix A. This representation, which we refer to as the \textit{sphere-integrated phase curve}, corresponds to the \textit{reference phase curves} defined by \citet{Kaasalainenetal2001} in the particular case of a sphere. Figure \ref{fig:phasecurve-fig} shows the sphere-integrated phase curve of the LSL photometric function for Leucus (red line), corresponding to the best-fit photometric parameters derived in Section \ref{subsec:occultationfit} and listed in Table \ref{tab:results}. For comparison, fits are also shown for 1) the best-fit linear phase function ($\beta=0.0395\rm\,mag/\degr$), 2) the IAU HG system \citep{Bowelletal1989} and 3) the more recently adopted IAU $\rm{HG_1G_2}$ system \citep{Muinonenetal2010}. The latter was computed with the online tool described in \citet{Penttilaetal2016}. As already apparent during the 2016 apparition \citep{Buieetal2018}, and as observed for several other Trojans (see e.g. \citealt{Shevchenkoetal2012}), Leucus has a very subtle opposition effect, if any. Within the observed phase angle range, all of the phase curves provide a good fit to the data, except for the HG curve, which shows systematic deviations at both the small and the large end of the phase angle range.
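To make the sphere-integration construction concrete, the sketch below integrates a plain Lommel-Seeliger disk function -- a deliberately simplified stand-in for the LSL function of Appendix A -- over an illuminated sphere by Monte Carlo, and compares the result with the known closed form for a Lommel-Seeliger sphere.
\begin{verbatim}
import numpy as np

def ls_sphere_numeric(alpha, n=400_000, seed=0):
    """Disk-integrated brightness of a Lommel-Seeliger sphere at phase
    angle alpha (rad), by Monte Carlo over surface normals."""
    rng = np.random.default_rng(seed)
    nrm = rng.normal(size=(n, 3))
    nrm /= np.linalg.norm(nrm, axis=1, keepdims=True)
    s_hat = np.array([0.0, 0.0, 1.0])                      # to the Sun
    o_hat = np.array([np.sin(alpha), 0.0, np.cos(alpha)])  # to the observer
    mu0, mu = nrm @ s_hat, nrm @ o_hat
    lit = (mu0 > 0) & (mu > 0)
    # reflected flux toward the observer per unit area: mu0*mu/(mu0+mu)
    return np.mean(np.where(lit, mu0 * mu / (mu0 + mu + 1e-300), 0.0))

def ls_sphere_closed(alpha):
    """Closed form, normalized to unity at zero phase."""
    return 1.0 - np.sin(alpha/2) * np.tan(alpha/2) * np.log(1.0/np.tan(alpha/4))

alpha = np.radians(10.0)
print(ls_sphere_numeric(alpha) / ls_sphere_numeric(1e-6),
      ls_sphere_closed(alpha))   # both ~0.976 at 10 deg
\end{verbatim}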
The extrapolation to zero phase produces $H_{\rm V-lin}$ and $H_{\rm V-HG_1G_2}$ values that differ from the LSL solution ($H_{\rm V-LSL}$) by about 0.05 mag and 0.03 mag, respectively. These small deviations are partly due to the fact that no calibrated data were available in the V band below a phase angle of about 1.6$\degr$ to constrain the fit. \citet{Buieetal2018} did observe Leucus in the r' band at phase angles as low as 0.125$\degr$ during the 2016 apparition. However, there appear to be calibration inconsistencies between the 2016 observations and those acquired at the LCOGT in 2017 that are not fully understood. For this reason the 2016 observations were used as a relative data set and do not contribute to our phase curve. It has long been realized (see e.g. \citealt{Oszkiewiczetal2012,Shevchenkoetal2016} and references therein) that a correlation exists between an asteroid's taxonomic class and the shape of its phase curve. The abovementioned tool by \citet{Penttilaetal2016} also performs an unsupervised taxonomic classification based on the derived $H$, $\rm{G_1}$, $\rm{G_2}$ solution. Interestingly, the software classifies Leucus as a D type solely on the basis of its phase curve. This further supports the tentative classification by \citet{Levison2016} that was based on spectral information. It is also important to recall that, since Leucus has a considerably larger equatorial cross section than the polar one, oppositions with pole-on aspect would result in measured phase curves that are brighter than the sphere-integrated phase curve and, conversely, apparitions with equator-on aspect would appear fainter than the spherical model. Given the low obliquity of Leucus' spin axis, however, all apparitions observed from Earth tend to occur at near-equatorial aspect, and therefore Leucus never exposes its largest cross section to the observer. This effect must be considered, e.g., when computing spherical equivalent diameters from observations. \begin{figure} \plotone{11351_phasecur.pdf} \caption{Sphere-integrated phase curve obtained by integrating the best-fit LSL photometric function over a sphere (red solid curve). The blue squares represent the single photometric points for which absolute calibrations and transformations to the Johnson V band were available. Those data points have been corrected by multiplying the intensity of the original photometric measurement by the ratio of the intensity of the spherical model to that of the best-fit convex shape model. In this way the effects of the changing viewing and observing geometry, as well as the rotational variations, are removed. The remaining scatter in the data is mostly due to the SNR of the measurements and to the uncertainty in the zero points of the absolute calibrations. Fits to the data using the IAU HG system, the $\rm{HG_1G_2}$ system, and a linear phase function are also shown for comparison.\label{fig:phasecurve-fig}} \end{figure} \subsection{Albedo} \label{subsec:albedo} The geometric albedo is quite an elusive quantity to measure. First, it is defined for an observation geometry (0$\degr$ solar phase angle) that is rarely observable from Earth, and in which the photometric behavior of different planetary surfaces can vary wildly.
Second, it is the result of an indirect measurement that requires the brightness at zero phase plus a further measurement, such as a thermal flux or a geometric cross section, which adds to the total error budget. Third, the albedo depends in principle on the shape of the object, although this issue is less of a problem for dark objects such as the Trojans. The error on the brightness at zero phase directly translates into the same relative error on the geometric albedo (i.e. a 10$\%$ error in the brightness would cause a 10$\%$ error in the albedo). In our case the geometric albedo is derived through the simultaneous fit of the photometric lightcurves and the occultation data and by applying Eq. A3, which results in an accurate geometric albedo determination $p_{\rm V}=0.043 \pm 0.002$.\\ \citet{Gravetal2012} report for Leucus a much higher geometric albedo of $p_{\rm V}=0.079 \pm 0.013$ and a spherical-equivalent diameter $D=34.155 \pm 0.646$ km derived from WISE observations. These values are clearly incompatible with the occultation footprints. Part of the reason for their overestimation of the albedo is that they used an inaccurate $H_{\rm V}=10.70$, retrieved from the Minor Planet Center database. Very often, such photometric measurements are acquired in non-standard photometric systems, are subject to inaccurate calibrations, and are therefore affected by large uncertainties. A further reason for the discrepancy is that the WISE observations happen to have occurred in the vicinity of the lightcurve minimum, thereby underestimating the average thermal flux of Leucus. If we use our value $H_{\rm V-LSL}=10.979$ to correct their determination with the method proposed by \citet{Harris&Harris1997}, and account for the apparent visible cross section at the time of the WISE observations, we obtain a corrected geometric albedo $p_{\rm V}=0.048\pm0.014$ and a diameter $D=39\pm1$ km. With these corrections applied, the WISE determinations are compatible with our own, within the respective uncertainties. \\ \begin{figure} \plotone{compo_wise.pdf} \caption{WISE observations \citep{Gravetal2012} in the W3 12 $\mu$m thermal band, phased with a synthetic lightcurve from our convex model for the same epoch. Data points beyond rotational phase 1.0 are repeated for clarity. The magnitude scale of the WISE observations is arbitrarily offset to provide the best fit to the model curve. The plot shows that the WISE observations of Leucus were acquired near the lightcurve minimum.} \end{figure} The IRAS albedo and size determinations by \citet{Tedescoetal2004} (0.063$\pm{0.014}$ and 42.2$\pm{4.0}$ km, respectively) were based on an inaccurate H-value of 10.5. By using our $H_{\rm V-LSL}$ value we update their determinations to $p_{\rm V}=0.041 \pm 0.014$ and $D=41.9 \pm 4.0$ km, which are also in agreement with our own determinations.\\ \citet{Buieetal2020} determined geometric albedo values for the four occultation events in 2018 and 2019 by estimating the object's cross-sectional area from best-fit ellipses and by using the absolute photometry reported in this paper. They derived geometric albedo values ranging from 0.035 to 0.043 for the different occultation events, with the scatter of the measurements probably reflecting the uncertainty in the different elliptical approximations of the occultation profiles.
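These corrections rely on the standard relation between absolute magnitude, diameter, and geometric albedo, $D = 1329\,{\rm km} \times 10^{-H_V/5}/\sqrt{p_V}$. A quick sanity check against the values quoted above (for illustration only):
\begin{verbatim}
import numpy as np

def diameter_km(H, p_V):
    """Standard relation D = 1329 km * 10^(-H/5) / sqrt(p_V)."""
    return 1329.0 * 10 ** (-H / 5.0) / np.sqrt(p_V)

def albedo(H, D_km):
    """Inverse of the same relation."""
    return (1329.0 * 10 ** (-H / 5.0) / D_km) ** 2

# Our best-fit values, H_V-LSL = 10.979 and p_V = 0.043:
print(diameter_km(10.979, 0.043))   # ~40.8 km, consistent with Table 2
# Albedo inflation caused by the inaccurate H = 10.70 at fixed diameter:
print(albedo(10.70, 39.0) / albedo(10.979, 39.0))   # ~1.29
\end{verbatim}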
\section{Discussion} \label{sec:cite} The combination of time-resolved, disk-integrated photometry and stellar occultations is a powerful technique that allows the accurate ground-based characterization of otherwise unresolved targets. We have determined a convex shape model that is compatible with the available occultation footprints and, thanks to accurate absolute photometry, produces precise size and albedo estimates of Leucus. Our model also allows us to understand and correct previous incompatible radiometric albedo and size determinations.\\ The accuracy of our rotation model is such that the 1$\sigma$ uncertainty on the rotation phase will be smaller than 2$\degr$ at the time of the June 2028 Lucy encounter with Leucus. At the time of the fly-by the sub-solar latitude will be about -9$\degr$ and the South pole will be permanently illuminated -- although at grazing incidence -- whereas the North pole will be in its winter night. Unfortunately, due to the slow rotation of the body, Lucy will be able to observe at most 60$\%$ of the surface in the 40 hours during which the object will be resolved with more than 40 pixels. For this reason, an accurate ground-based shape model is very valuable to the mission. On the one hand it enables careful planning of the acquisition sequences, in order to guarantee optimum sampling. On the other hand, it complements the data from the mission to complete the uncharted hemisphere, similarly to what was done, e.g., in the case of the Rosetta fly-by of Lutetia \citep{Carryetal2010, Preuskeretal2012}. The latter is of crucial importance for the estimation of the volume of the object and hence of its bulk density. Our convex model already serves this purpose well and represents a good second-order approximation of the shape -- the first order being an ellipsoid \citep{Buieetal2020}. \\ With an angular size of at most 15 mas along its longest axis, Leucus represents a challenging target to resolve for ground-based adaptive optics, the James Webb Space Telescope, or ESO's ALMA. Further improvement of the shape model in the near future can likely only come from dense stellar occultation data at further geometries, and from additional accurate absolute lightcurves. These data could possibly be used to produce a realistic non-convex model, provided the observation geometries are favorable. In this respect it is important to recall that not all concavities are directly resolvable from stellar occultations. A Star-Wars Death-Star shape, for example, would always project a convex occultation silhouette, with its concavity only hinted at by a flat side of the contour. \\ Our photometric modeling of Leucus confirms that the object is very dark and lacks a pronounced opposition effect. These properties place Leucus in the context of other Trojan asteroids and of other primitive, Outer-Belt objects. Due to the limited phase angle range achievable from Earth, however, we caution against extrapolating the derived phase curve to predict the brightness of Leucus at the large phase angles occurring on Lucy approach ($>$ 90$\degr$), because the prediction could be in error by a considerable factor. For this purpose it will be important to benefit from Lucy's vantage point during the cruise phase to extend the coverage of the phase curve to larger angles. Such measurements would also allow the use of more sophisticated photometric models and the unique retrieval of their parameters.
Further, they would allow a reliable determination of the phase integral, which, together with the geometric albedo, is critical to establish the thermal balance of the body.\\ Leucus exhibits an exceptionally slow rotation, the cause of which is currently not known. As of today, only about 0.8\% of the about 32600 asteroids in the Asteroid Lightcurve Data Base (LCDB) \citep{Warneretal2009} for which rotation period estimates are available have a slower spin than Leucus. Objects with such slow rotation cannot be explained as merely belonging to the tail of a single population of rotators \citep{Harris2002}, and some mechanism must have been in place that slowed down their rotation. Radiation recoil forces such as the YORP effect \citep{Rubincam2000} cannot explain the slow rotation of Leucus because of its large size and heliocentric distance. More realistic possibilities are 1) loss of rotational angular momentum due to the evolution and eventual separation of a binary system with an elongated primary \citep{Harris2002} and 2) spin-down due to reaction forces resulting from the sublimation of volatiles. \\ \citet{Pravecetal2014} revised the damping time scales for excited rotation as a function of asteroid size and rotation period. Assuming a bulk density for Leucus of $1\times10^3$ kg m$^{-3}$, their estimate translates into a relaxation time for Leucus in the range 2.3 - 3.0 Gyr. If excited rotation was ever present for Leucus, it could thus still be in place today. Given the good fit of the lightcurves to our simple-rotation model, however, we conclude that if an excited rotation is present, its precession amplitude must be small. Due to the short duration of the fly-by, it is unlikely that any degree of precession can be detected by Lucy from resolved imagery. Instead, unresolved photometric sequences acquired by Lucy during the last few months of approach could be used to search for multiple periodicities in the lightcurves. The detection of a non-principal rotation state would place additional constraints on the dynamics and the internal structure of Leucus.
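The rotation-phase uncertainty quoted above for the encounter follows from straightforward propagation of the period uncertainty over the rotations accumulated since $T_0$; a minimal sketch, in which the elapsed time to the encounter is an assumed round value:
\begin{verbatim}
P, sigma_P = 445.683, 0.007      # sidereal period and 1-sigma (hours)
elapsed_days = 5550.0            # ~T0 (2013 March) to mid-2028, assumed
n_rot = elapsed_days * 24.0 / P  # ~300 rotations accumulated
sigma_phase_deg = 360.0 * n_rot * sigma_P / P
print(sigma_phase_deg)           # ~1.7 deg, i.e. below the quoted 2 deg
\end{verbatim}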
\section{Introduction} \label{section-introduction} Theoretical descriptions of driven amorphous materials remain challenging, despite decades of extensive analytical and computational studies \cite{arceri_landes_2020_Arxiv-2006.09725,berthier_biroli_2011_RevModPhys83_587,nicolas_2018_RevModPhys90_045006, rodney_2011_ModellingSimulMatterSciEng19_083001}. The technical difficulties pertain to the interplay of competing sources of stochasticity --~in particular their self-generated structural disorder~-- and to the resulting out-of-equilibrium nature of these systems. More generally, these issues are common to driven complex systems in a broad sense, \textit{i.e.}~systems composed of many interacting degrees of freedom, and extend to fields as dissimilar as active matter \cite{marchetti_2013_RevModPhys85_1143} or machine learning \cite{baityjesi_2019_JStatMech2019_124013,geiger_2019_PhysRevE100_012115}. Assessing both the similarities and the discrepancies in the statistical features of such systems under different interactions and drivings is thus key to allowing the transfer of known results between them. In that respect, the limit of infinite dimension provides a particularly valuable vantage point, as it can often be treated exactly analytically and thus allows for rigorous analogies already at the mean-field level. Dense many-body systems of pairwise interacting particles constitute a standard model for amorphous materials, allowing us to focus on the key role of their structural (positional) disorder. At sufficiently high packing fraction and low temperature, they behave as amorphous \emph{solids}, meaning that they exhibit a rigidity which is sustained under external shear deformation, up to a sample-dependent maximum shear strain amplitude. Beyond this so-called `yielding point', they might reach a steady state with a finite built-in stress, and thus behave as yield-stress \emph{fluids}. Such a transition from an arrested to a flowing state is to be expected in driven disordered systems, and has prompted the characterisation of the corresponding phase diagrams, as for instance in Refs.~\cite{liu_nagel_1998_Nature396_6706,bi_manning_2016_PhysRevX6_021011}. However, one important pending question is the following: are global \textit{versus} local drivings fundamentally different? This question should be complemented by specifying the nature of the observables of interest: on the one hand mean-field quantities such as the pressure or the average stress (taken as a proxy for the predicted macroscopic stress), and on the other hand spatio-temporal correlations and their related features (such as possible transient or permanent shear bands). While the latter are \textit{a priori} highly sensitive to the built-in spatio-temporal structure of a specific driving, mean-field quantities are better suited --~by their very definition~-- for establishing possible equivalences between different drivings. For sheared amorphous materials, the equivalence between a global shear strain and random local displacements can be addressed analytically in the limit of infinite spatial dimension, where their statics and dynamics become exactly mean-field.
This limit has been extensively studied in the past years, since it provides an exact benchmark for investigating the properties of structural glasses~\cite{book_parisi_urbani_zamponi_2020}, such as the statistical features of their free-energy landscape~\cite{charbonneau_2014_NatureCommunications5_3725} or their response to quasistatic drivings \cite{rainone_urbani_2016_JStatMech2016_053302,biroli_urbani_2016_NatPhys12_1130, urbani_zamponi_2017_PhysRevLett118_038001, biroli_urbani_2018_SciPostPhys4_020,altieri_2019_PhysRevE100_032140}. In fact, this framework can naturally be extended to the new driving protocol recently introduced in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, namely the quasistatic driving of a glass through random local displacements, constant on each particle and spatially correlated. Under this new Athermal Quasistatic Random Displacements (AQRD) protocol, it has been shown in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, for two-dimensional numerical simulations (of Hertzian contact particles, under periodic boundary conditions), that the stress-strain curves are qualitatively similar to those obtained under a standard Athermal Quasistatic Shear (AQS) protocol~\cite{maloney_lemaitre_2006_PhysRevE74_016118}, as are the distributions in the pre-yielding regime of \textit{(i)}~local elastic moduli along elastic branches, \textit{(ii)}~strain intervals between stress drops, and \textit{(iii)}~stress drop magnitudes. More importantly, it was shown that these mean-field-like metrics can be quantitatively collapsed onto one another, in remarkably good agreement with the infinite-dimensional predictions. The aim of the present paper is to provide the detailed derivation of the exact mean-field dynamics which led to these predictions, and to discuss its implications for and beyond quasistatics. Conceptually speaking, our main statement will be the following: in the infinite-dimensional limit, traversing the potential energy landscape of such many-body systems is equivalent under a global shear and under a constant random local displacement field, in the sense that the statistical sampling of the configurational phase space leads to the same mean-field metrics, up to a single rescaling factor. Thereafter we essentially adapt the derivation for the case of a global shear strain presented in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}, following the notations and definitions of the recent extensive review on this topic~\cite{book_parisi_urbani_zamponi_2020}. We start in Sec.~\ref{section-settings-global-vs-local-shear-strain} by defining the random-local-displacement settings we consider, mirroring the global shear case, and we discuss in particular how we choose to encode their spatial correlations. Secondly, we sketch in Sec.~\ref{section-recalling-DMFE} how we can go from the full many-body dynamics to an effective scalar stochastic process with such random local displacements. Thirdly, we focus in Sec.~\ref{section-quasistatics-glassy-states} on the quasistatic driving of a glassy state, starting from a replica-symmetric (RS) equilibrium configuration, and connect with the previously derived \emph{static} results for quasistatic shear in infinite dimension. The latter was first discussed for hard spheres~\cite{rainone_2015_PhysRevLett114_015701}, and its further extensions and ramifications are extensively reviewed in Ref.~\cite{book_parisi_urbani_zamponi_2020}.
In Sec.~\ref{section-AQRD-stress-strain-elastic-modulus} we focus furthermore on the implications for the quasistatic stress-strain curves and the elastic modulus. Finally, in Sec.~\ref{section-discussion-conclusion}, we conclude and discuss some implications and possible perspectives of this work. This whole derivation will essentially allow us to show how our random local forcing and global shear turn out to be strictly equivalent, in the infinite-dimensional limit, upon a simple rescaling of the accumulated strain: the scaling factor is simply controlled by the variance of the relative local displacements of a given pair of interacting particles, which encodes the finite spatial correlations of the local displacement field. This statement holds in particular for athermal quasistatic drivings, and the AQS protocol can be interpreted, from that perspective, as a special case of the AQRD protocol. Note that this statement will be obtained for the replica-symmetric equilibrium case. Further work would be needed to extend the AQRD protocol to the full-replica-symmetry-breaking case (and thus all the way down to the yielding transition), as has already been done for shear~\cite{rainone_urbani_2016_JStatMech2016_053302}. \section{Global shear \textit{versus} random local displacements} \label{section-settings-global-vs-local-shear-strain} We consider the same general settings as in Refs.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002,agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}: a system of $N$ interacting particles in $d$ dimensions, with positions ${\lbrace {\bf x}_i(t) \in \Omega \subset \mathbb{R}^d \rbrace_{i=1,\dots,N}}$ at time $t$. The region $\Omega$ has a volume ${\vert \Omega \vert}$ and thus a number density ${\rho = N/\vert \Omega \vert}$, and for simplicity we assume $\Omega$ to be a cubic region with periodic boundary conditions (\textit{i.e.}~in the same spirit as the numerical settings of Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, for instance). Note that we always take first the thermodynamic limit (${N \to \infty}$ and ${\vert \Omega \vert \to \infty}$ at fixed $\rho$), and only then the infinite-dimensional limit (${d \to \infty}$). We consider the case of pairwise interactions between identical particles, with a generic radial potential ${v (\vert \mathbf{r}_{ij}(t) \vert)}$, where ${\mathbf{r}_{ij}(t) = {\bf x}_{i}(t) - {\bf x}_{j}(t)}$ fluctuates around a typical interaction length~$\ell$. This potential could be chosen to be hard-sphere, soft-sphere or Lennard-Jones-like, for instance, as long as it is thermodynamically stable in high dimension and has a well-defined infinite-dimensional limit upon rescaling: ${\lim_{d \to \infty} v (\ell (1 + h/d)) = \bar v(h)}$ \cite{book_parisi_urbani_zamponi_2020}\footnote{ We recall for instance from Sec.~2.1 of Ref.~\cite{kurchan_2016_JStatMech2016_033210} that we have for soft harmonic spheres ${v(r)=\epsilon \, d^2 (r/\ell-1)^2 \, \theta(\ell -r) = \epsilon \, h^2 \, \theta (-h)= \bar v(h)}$, for soft spheres ${v(r)= \epsilon \, (\ell/r)^{\alpha d} \stackrel{(d\to\infty)}{\to} \epsilon \, e^{-\alpha h} = \bar v(h)}$, and for the Lennard-Jones potential ${v(r) = \epsilon \, \argc{(\ell/r)^{4d} - (\ell/r)^{2d}} \stackrel{(d\to\infty)}{\to} \epsilon \argc{e^{-4h} - e^{-2h}} = \bar v(h)}$. }.
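As a quick numerical illustration of this rescaled limit, here for the soft-sphere case of the footnote and with arbitrary parameter values:
\begin{verbatim}
import numpy as np

# v(l(1 + h/d)) should approach vbar(h) = eps*exp(-alpha*h) as d grows,
# for the soft-sphere potential v(r) = eps*(l/r)^(alpha*d).
eps, ell, alpha, h = 1.0, 1.0, 2.0, 0.5
for d in (3, 10, 100, 1000):
    r = ell * (1.0 + h / d)
    print(d, eps * (ell / r) ** (alpha * d), eps * np.exp(-alpha * h))
\end{verbatim}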
The rationale behind this requirement is that, in the infinite-dimensional limit, the interparticle distances (for effectively interacting particles) have fluctuations of ${\mathcal{O}(1/d)}$ around $\ell$, so that ${r_{ij}(t) = \ell (1 + h_{ij}(t)/d)}$ with the gap ${h_{ij}(t) \sim \mathcal{O}(d^0)}$, and the definition of the rescaled potential ${\bar v(h)}$ allows us to focus on this gap of order $1$. A global shear strain of amplitude $\gamma$, in the plane ${\lbrace \hat{\bf x}_1,\hat{\bf x}_2 \rbrace}$ for instance, is defined by \begin{equation} \hat{\gamma} = \left( \begin{array}{cccc} 0 & \gamma & 0 & \cdots \\ 0 & 0 & 0 & \cdots \\ \vdots & \vdots & \vdots \end{array} \right) \in \mathbb{R}^d \times \mathbb{R}^d \quad \Rightarrow \quad {\bf x}_i' = \underbrace{\argp{\hat{1} + \hat{\gamma}}}_{= \hat{S}_{\gamma}} {\bf x}_i = \argp{\begin{array}{c} x_{i,1} + \gamma x_{i,2} \\ x_{i,2} \\ \vdots \end{array}} = {\bf x}_i + \gamma \, x_{i,2} \, \hat{\bf x}_1 \, , \end{equation} meaning that it assigns to each particle~$i$ a local displacement ${\gamma \mathbf{c}_i}$ with ${\mathbf{c}_i= x_{i,2} \, \hat{\bf x}_1}$: this local displacement is always applied along the direction ${\hat{\bf x}_1}$, and its amplitude depends on the configuration on which the shear strain step is applied, so it is continually updated along the dynamics. The motion of particles in the laboratory frame can then be decomposed into the affine motion due to the accumulated shear strain ${\gamma(t)}$, starting from a given initial configuration, and a `non-affine' correction: \begin{equation} {\bf x}_i(t) = {\bf x}_i(0) + \gamma(t) \, x_{i,2}(0) \, \hat{\bf x}_1 + \mathbf{u}_i(t) \: \Rightarrow \: \mathbf{r}_{ij}(t) = \underbrace{\mathbf{r}_{ij}(0) + \gamma(t) \, r_{ij,2}(0) \, \hat{\bf x}_1}_{=\mathbf{r}_{0,ij}'(t) \text{ (for shear)}} + \mathbf{w}_{ij}(t) \, , \label{eq-def-non-affine-shear} \end{equation} with ${\mathbf{u}_i(t)}$ and ${\mathbf{w}_{ij}(t)}$ the non-affine absolute and relative displacements, respectively. Note that, since the affine transformation is the same for both the absolute and relative positions (using the matrix ${\hat{S}_{\gamma}}$), it makes sense to define a `co-shearing frame' whose coordinates are directly given by the non-affine motion~\cite{maloney_lemaitre_2006_PhysRevE74_016118}. For random local displacements, we essentially release the constraint that all these vectors ${\lbrace \mathbf{c}_i \rbrace_{i=1,\dots,N}}$ be aligned with ${\hat{\bf x}_1}$, and we allow the local displacement vector ${\vert c \rangle \equiv \lbrace \mathbf{c}_i \rbrace_{i=1,\dots,N}}$ to be a \emph{constant} random vector in ${\mathbb{R}^{Nd}}$. We can then generalise the definition of non-affine motion from Eq.~\eqref{eq-def-non-affine-shear} as follows: \begin{equation} {\bf x}_i(t) = {\bf x}_i(0) + \gamma(t) \, \mathbf{c}_i + \mathbf{u}_i(t) \: \Rightarrow \: \mathbf{r}_{ij}(t) = \underbrace{\mathbf{r}_{ij}(0) + \gamma(t) \, \left( \mathbf{c}_i -\mathbf{c}_j \right)}_{=\mathbf{r}_{0,ij}'(t) \text{ (for random local displacements)}} + \mathbf{w}_{ij}(t) \, . \label{eq-def-non-affine-AQRD} \end{equation} Contrary to global shear, we cannot define a `co-shearing frame' \textit{per se}, but we can still focus on the non-affine motion through ${\mathbf{u}_i(t)}$ and ${\mathbf{w}_{ij}(t)}$.
The definitions~\eqref{eq-def-non-affine-shear}-\eqref{eq-def-non-affine-AQRD} will be key in the next section, when we focus on obtaining an effective dynamics for the non-affine motion: the derivation will be very similar for shear and for random local displacements, and their differences will mostly amount to replacing ${\mathbf{r}_{0,ij}'(t)}$ by the corresponding explicit expressions. Moreover, regarding the quasistatic driving of glasses, and thus the connection between AQS and AQRD, what will be relevant is specifically the statistical distribution of the affine motion at finite shear strain. For AQRD, it is thus the distribution of the \emph{relative} local displacements of pairs of particles, ${\mathbf{c}_{ij}=\mathbf{c}_i -\mathbf{c}_j}$, that will matter, through the definition of the associated affine motion ${\mathbf{r}_{0,ij}'(t)}$. An important point, though, is that we must impose the scaling in distribution ${\vert \mathbf{c}_{ij} \vert^2 \sim \mathcal{O}(1/d)}$ in order to mirror the scaling of local displacements under global shear: in the latter case we have indeed ${\vert r_{ij,2}(0)^2 \hat{{\bf x}}_1^2 \vert=r_{ij,2}(0)^2 \sim \mathcal{O}(1/d)}$, since it involves a single component of the $d$-dimensional vector ${\mathbf{r}_{ij}(0)}$, whose norm is of order $1$. If we were considering a finite system, we would moreover have to take care of the finite-size and finite-$N$ scalings, as discussed in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}; here we recall that we consider, on the contrary, an infinite-size system and the thermodynamic limit, so there is no such issue. In order to implement a local displacement vector ${\vert \lbrace c_i \rbrace \rangle \in \mathbb{R}^{Nd}}$ as in the AQRD protocol introduced in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, we first generate a local displacement field ${\mathcal{C}({\bf x})}$ continuously defined for ${{\bf x} \in \mathbb{R}^d}$, with a Gaussian distribution given by ${\overline{\mathcal{C}({\bf x})}=0}$ and ${\overline{\mathcal{C}({\bf x})\, \mathcal{C}({\bf x'})}= \ell^2 \Xi \, f_{\xi}(\vert {\bf x}-{\bf x'} \vert)/d}$, where the overline denotes the statistical average over this `quenched' random displacement field, $\Xi$ is a tunable amplitude which has the units of a length, and ${\xi>0}$ is a finite correlation length. For practical purposes, whenever we need an explicit expression we will assume a Gaussian function ${f_{\xi}(x)=e^{-x^2/(2\xi^2)}/\sqrt{2 \pi \xi^2}}$. Secondly, we associate to each particle a local displacement fixed by its initial position, ${\mathbf{c}_i = \mathcal{C}({\bf x}_i(0))}$, so that the local displacements are also Gaussian distributed with zero mean and a spatial correlation ${\overline{\mathbf{c}_i \, \mathbf{c}_j} = \ell^2 \Xi \, f_{\xi}(\vert \mathbf{r}_{ij} (0) \vert)/d}$. Note that the explicit scaling in $d$ is chosen precisely so as to match the scaling of local displacements in AQS, as discussed above, and that we implicitly assume statistical isotropy of the local displacements, with a radial function for the correlator. From there, we can directly characterise the distribution of the relative local displacements ${\mathbf{c}_{ij}=\mathbf{c}_i -\mathbf{c}_j}$: it is also Gaussian with zero mean, but its variance simplifies considerably in the infinite-dimensional limit, using that ${r = \ell (1 + h/d)}$ with the gap ${h \sim \mathcal{O}(1)}$ and thus ${f_{\xi} \argp{r_{ij}(0)} = f_{\xi}(\ell) + f_{\xi}'(\ell) \frac{\ell}{d} h_{ij}(0) + \mathcal{O}(1/d^2)}$.
This implies on the one hand that, for different pairs ${(i,j) \neq (i',j')}$, correlations are subdominant in high dimension and eventually vanish in the limit ${d \to \infty}$: \begin{equation} \begin{split} d \, \overline{\mathbf{c}_{ij} \mathbf{c}_{i'j'}} &= \ell^2 \Xi \argc{ f_{\xi} \argp{r_{ii'}(0)} + f_{\xi}\argp{r_{jj'}(0)} - f_{\xi}\argp{r_{ij'}(0)} - f_{\xi}\argp{r_{i'j}(0)} } \\ &= \ell^2 \Xi \, f_{\xi}' (\ell) \frac{\ell}{d} \argc{ h_{ii'}(0) + h_{jj'}(0) - h_{ij'}(0) - h_{i'j}(0)} \stackrel{(d \to \infty)}{\to} 0 \end{split} \label{eq-distr-cij-1} \end{equation} whereas on the other hand the variance of a given pair remains finite: \begin{equation} d \, \overline{\mathbf{c}_{ij}^2} = 2 \ell^2 \Xi \, \argc{f_{\xi}\argp{0} - f_{\xi}\argp{r_{ij}(0)}} \stackrel{(d \to \infty)}{\to} 2 \ell^2 \Xi \, \argc{f_{\xi}\argp{0} - f_{\xi}\argp{\ell}} \equiv \mathfrak{F} \argp{\Xi ,\ell, \xi} \, . \label{eq-distr-cij-2} \end{equation} Another way to phrase these results is that spatial correlations between the local displacements ${\mathbf{c}_i = \mathcal{C}({\bf x}_i(0)) }$ only affect the variance of a given pair's \emph{relative} displacement ${\mathbf{c}_{ij}}$, since different pairs of particles effectively do not interact in infinite dimension (or rather their contribution to path-integral statistical averages becomes irrelevant in the limit ${d \to \infty}$). We can choose to decompose these relative displacements as ${\sqrt{d} \, \mathbf{c}_{ij} \equiv \tilde{c}_{ij} \, \hat{\mathbf{c}}_{ij}}$, where ${\hat{\mathbf{c}}_{ij}}$ is a unitary vector with a uniform distribution by statistical isotropy; the factor $\mathfrak{F}$ defined in Eq.~\eqref{eq-distr-cij-2} should then be understood as the variance of the amplitude ${\tilde{c}_{ij}}$, restricted to the pairs whose interactions are relevant for the dynamics (\textit{i.e.}~${r_{ij}(0) \approx \ell}$). Beware that ${\tilde{c}_{ij}}$ is not the norm, since we allow it to take negative values for a given choice of unitary vector, so that its probability distribution is simply the Gaussian function: \begin{equation} \bar{\mathcal{P}} \argp{\tilde{c}_{ij}} =e^{-\tilde{c}_{ij}^2/(2\mathfrak{F})}/\sqrt{2 \pi \mathfrak{F}} \quad \Rightarrow \quad \overline{\tilde{c}_{ij}}=0 \, , \quad \overline{\tilde{c}_{ij}^2}=\mathfrak{F} \, . \label{eq-def-frakF-Gaussian-PDF} \end{equation} Note that we choose a slightly different convention than in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}: here the vector ${\vert c \rangle}$ is a displacement field and thus has the units of a length, and it differs only by a factor $\ell$ from the unitless strain field of Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}; otherwise the other quantities are defined with the same units, and in particular $\sqrt{\mathfrak{F}}$ has the units of a length.
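The identity underlying Eq.~\eqref{eq-distr-cij-2} -- the variance of a relative displacement being $2\argc{f_{\xi}(0)-f_{\xi}(r_{ij})}$ for a Gaussian field with correlator $f_{\xi}$ -- can be checked numerically with a simple Cholesky-based sampler; in this sketch the prefactor $\ell^2\Xi/d$ is set to unity for clarity, and a single scalar component of the field is sampled.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
xi, N, n_samples = 1.0, 200, 20_000
f = lambda r: np.exp(-r**2 / (2 * xi**2)) / np.sqrt(2 * np.pi * xi**2)
pos = rng.uniform(0, 10, size=(N, 3))             # 'initial positions'
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
cov = f(dist)                                     # Gaussian correlator
L = np.linalg.cholesky(cov + 1e-10 * np.eye(N))   # jitter for stability
c = L @ rng.normal(size=(N, n_samples))           # field samples
i, j = 0, 1
print(np.var(c[i] - c[j]),                        # empirical variance
      2 * (f(0.0) - f(dist[i, j])))               # should agree
\end{verbatim}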
This dependence of the variance ${\mathfrak{F}}$ on the spatial correlations of the local displacements is in fact physically expected, and can be simply understood in the following two opposite limits: for an infinite correlation length $\xi$, all the ${\lbrace \tilde{c}_{i} \rbrace}$ would be the same and there would be no relative displacements, so for ${\xi \to \infty}$ their distribution should consistently tend to ${\mathcal{P}({\bf c_{ij}}) \propto \delta(\vert {\bf c_{ij}} \vert)}$; in the opposite limit, the variance ${\mathfrak{F}}$ diverges, so we actually need a finite correlation length ${\xi>0}$ to keep it regular, as is physically the case when we consider discrete interacting particles instead of a true continuum of local displacements. If we assume ${f_\xi(x)}$ to be a normalised Gaussian function, we can compute ${\mathfrak{F}}$ explicitly: \begin{equation} \mathfrak{F} \argp{\Xi,\ell, \xi} = \frac{2}{\sqrt{2 \pi}} \ell \Xi \, \frac{\ell}{\xi} \argp{1 - e^{-(\ell/\xi)^2/2}} \left\lbrace \begin{array}{l} \stackrel{(\ell/\xi \gg 1)}{=} \frac{2}{\sqrt{2\pi}}\ell \Xi \, \frac{\ell}{\xi} \\ \stackrel{(\ell/\xi \ll 1)}{=} \frac{1}{\sqrt{2\pi}} \ell \Xi \, \argp{\ell/\xi}^3 \, , \end{array} \right. \label{eq-mathfrak-F-explicit-Gaussian_fxi} \end{equation} thus predicting a crossover in the $\xi$-dependence, controlled by the ratio ${\ell/\xi}$, with ${\mathfrak{F} \sim 1/\xi}$ at ${\ell/\xi \gg 1}$ and ${\mathfrak{F} \sim 1/\xi^{3}}$ at ${\ell/\xi \ll 1}$. Technically, these specific scalings rely on the functional form of the even correlator ${f_{\xi}(x) = \xi^{-1} f_1(x/\xi)}$, valid in particular for $f_{\xi}$ being a Gaussian function. Nevertheless, the decrease of ${\mathfrak{F}}$ with an increasing correlation length $\xi$ must be qualitatively robust with respect to alternative (physical) choices for ${f_{\xi}(x)}$. As a concluding remark, we note that such a correlation length also exists under global shear (and thus for the AQS protocol as well), and this was indeed the motivation for introducing the AQRD protocol in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. Indeed, when we shear a finite-size system with periodic boundary conditions, the individual affine motions are \emph{by construction} correlated over a finite portion of the whole system, with a periodicity of twice the system size, so the value of ${\xi}$ is set once and for all. Note that this statement is only true in the laboratory frame, since in the co-shearing frame the affine motions are by definition set to zero; the associated local displacements therefore have to be defined in the laboratory frame in order to establish a direct connection between the global-shear and random-local-displacement protocols. An intermediate case is to have regularly patterned local displacements, chosen so as to be compatible with the periodic boundary conditions, for which we can tune the correlation length $\xi$. Such settings have also been examined, in two-dimensional numerical simulations, in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. Here we consider the case of Gaussian random local displacements because it directly relates to the shear case in infinite dimension, as we will see in the next section.
However, these different alternatives only affect the quantitative distribution of the affine motions ${\mathbf{r}_{ij}'(t)}$, and not the effective dynamics of the non-affine motions ${\mathbf{u}_i(t)}$ and ${\mathbf{w}_{ij}(t)}$, so we could \textit{a priori} generalise our derivation to the patterned local displacements as well (this is left for future work). \section{From many-body to effective scalar dynamics} \label{section-recalling-DMFE} We consider the following many-body Langevin dynamics, adapted from the standard shear case~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} for a finite `shear rate' ${\dot{\gamma}(t)}$ (such that the AQRD case corresponds to the limit ${\dot{\gamma}(t) \to 0}$): \begin{equation} \zeta \argc{\dot{\bf x}_i(t)- \dot{\gamma}(t) \, \mathbf{c}_i } = {\bm F}_i(t) + \bm{\xi}_i(t) \, , \quad \text{with} \quad {\bm F}_i(t) = -\sum_{j (\neq i)} \nabla v \argp{\vert {\bf x}_i(t)-{\bf x}_j(t) \vert} \label{eqC3:GENLang-shear-dynamics-AQRD} \end{equation} and a microscopic Gaussian noise ${\lbrace {\bm \xi}_i(t) \rbrace_{i=1,\dots,N}}$, of mean and variance respectively given by \begin{equation} \moy{\xi_{i,\mu}(t)}_{\bm\xi} =0 , \qquad \moy{\xi_{i,\mu}(t) \xi_{j,\nu}(t')}_{\bm \xi} = \delta_{ij} \delta_{\mu\nu} [2 T \zeta \delta(t-t') + \Gamma_C(t,t')] \, , \label{eqC3:GENLang-shear-noise} \end{equation} where the brackets ${\moy{\cdots}_{\bm\xi}}$ denote the statistical average over realisations of the noise~${\bm\xi}$. Here $\zeta$ is the (local) friction coefficient and ${T=\beta^{-1}}$ the temperature of a thermal bath (with the Boltzmann constant ${k_B=1}$ setting the units). The generic noise kernel ${\Gamma_C(t,s)}$ can be chosen so as to describe a wide variety of physical situations, as discussed extensively in Sec.~2 of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002}. By tuning its `persistence time' $\tau$, we can consider anything from isotropic active matter with ${\Gamma_C(t,s) = f_0^2 \, e^{- \valabs{t-s}/\tau}}$, to a pure thermal bath (${\tau=0}$) with ${\Gamma_C(t,s)=0}$, and constant random forces (`${\tau=\infty}$') with ${\Gamma_C(t,s)=f_0^2}$. In addition, we need to specify the distribution of the initial configurations at time ${t=0}$; for explicit computations we can focus on the particular case of an equilibrium (possibly supercooled) liquid phase at a temperature ${T_0=\beta_0^{-1}}$, where positions are sampled from a Gibbs-Boltzmann distribution ${\propto e^{-\frac{\beta_0}{2} \sum_{i \neq j} v(\vert \mathbf{r}_{ij}(0) \vert) }}$. These settings are very similar to the standard case under shear strain, which we examined in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}: here we essentially replaced, for each particle~$i$, the time-dependent local displacements ${\gamma(t) x_{i,2}(t) \hat{\bf x}_1}$ under a global shear (in the plane ${\lbrace \hat{\bf x}_1 ,\hat{\bf x}_2 \rbrace}$) by the random local displacements ${{\gamma}(t) \, \mathbf{c}_i}$. In Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} we focused on the non-affine motion by using the change of variables of Eq.~\eqref{eq-def-non-affine-shear}, which is equivalent to working in the co-shearing frame.
We can similarly use Eq.~\eqref{eqC3:GENLang-shear-dynamics-AQRD} to obtain the dynamics of the non-affine motion under random local displacements: \begin{equation} \zeta \dot{\mathbf{u}}_{i}(t) = {\bm F}_i(t) + \bm{\xi}_i(t) \, , \quad \text{with} \quad {\bm F}_i(t) = -\sum_{j (\neq i)} \nabla v \argp{\vert \mathbf{r}_{0,ij}'(t) + \mathbf{u}_{i}(t) - \mathbf{u}_{j}(t) \vert} \label{eqC3:GENLang-for-ucs-bis-AQRD} \end{equation} with the uniform initial condition ${\mathbf{u}_i(0)=\bm{0} \, \forall i}$. Compared to global shear, we do not have a term ${\zeta \hat{\dot{\gamma}}(t) \mathbf{u}(t)}$, so the dynamics is \textit{de facto} isotropic, and the affine motion is given by ${\mathbf{r}_{0,ij}'(t) \equiv \mathbf{r}_{0,ij} + \gamma(t) \mathbf{c}_{ij}}$ instead of ${\mathbf{r}_{0,ij}'(t) = \hat{S}_{\gamma}(t) \, \mathbf{r}_{0,ij}}$ (see Eqs.~\eqref{eq-def-non-affine-shear}-\eqref{eq-def-non-affine-AQRD}). Note that if we start from a statistically isotropic initial condition, such as equilibrium, with such an isotropic dynamics we can always assume statistical isotropy to hold. This many-body dynamics becomes exactly mean-field in infinite dimension, as shown in previous works for the isotropic case~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002} or in the presence of global shear~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}. In a nutshell, this can be understood as a consequence of two key physical assumptions: particles always stay `close' to their affine motion with respect to their initial position --~in the sense that their non-affine displacements are of ${\mathcal{O}(1/d)}$~-- and each particle has numerous uncorrelated neighbours. Intuition can be gained from the simpler case of an isotropic random walk: each particle has so many directions towards which it can move that it effectively explores a volume whose typical radius shrinks with increasing dimensionality; moreover, in a dense system each particle has ${\mathcal{O}(d)}$ neighbours, and once it interacts with a given neighbour it is very unlikely to interact with it again, making neighbours effectively uncorrelated. The recipe for obtaining the mean-field dynamics is thus as follows: we start from the many-body dynamics~\eqref{eqC3:GENLang-shear-dynamics-AQRD}, focus on the non-affine displacement in Eq.~\eqref{eqC3:GENLang-for-ucs-bis-AQRD}, and Taylor-expand the interaction force ${{\bm F}_i(t)}$ to leading order in the infinite-dimensional limit. Technically, the difficulty is to correctly identify the scaling in $d$ of the different contributions to the dynamics and the leading-order terms. Physically, this amounts to neglecting the contributions of collective feedbacks involving more than two particles, which are subdominant in the infinite-dimensional limit when computing statistical averages of observables. Starting from Eq.~\eqref{eqC3:GENLang-for-ucs-bis-AQRD}, we can directly adapt the mean-field dynamics previously obtained for global shear (specifically Eqs.~(23)-(24) of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}). The interactions with other particles are fully described by three tensorial kernels ${\lbrace \hat{k}(t), \hat{M}_R(t,s),\hat{M}_C(t,s) \rbrace}$, defined as averages involving the force ${\nabla v(\mathbf{r}_{ij}(t))=\nabla v(\mathbf{r}_{0,ij}'(t) + \mathbf{w}_{ij}(t))}$, as a result of the Taylor expansion of ${{\bm F}_i(t)}$.
From this point on, we can drop the indices ${(i,j)}$, keeping in mind only that $\mathbf{c}$ stands for the \emph{relative} local displacement $\mathbf{c}_{ij}$. ${\mathbf{r}_{0}'(t)}$ is set by the distributions of the initial condition and of the local strains, and the non-affine displacements follow an exact mean-field dynamics given by the following \emph{vectorial} stochastic processes: \begin{equation} \label{eq-vectorial-effective-stoch-processes} \begin{split} & \zeta \dot{\mathbf{u}}(t) = - \hat{k}(t) \, \mathbf{u}(t) + \int_0^t \mathrm d s \, \hat{M}_R(t,s) \, \mathbf{u}(s) + \sqrt{2}\,\bm{\Xi}(t) \, , \\ & \frac{\zeta}{2} \dot \mathbf{w}(t) = - \frac12 \hat{k}(t) \, \mathbf{w}(t) + \frac12 \int_0^t \mathrm d s \, \hat{M}_R(t,s) \, \mathbf{w}(s) - \nabla v \argp{\mathbf{r}_{0}'(t) + \mathbf{w}(t)} + \bm{\Xi}(t) \, , \\ & \mathbf{u}(0)=0 \, , \quad \mathbf{w}(0)=0 \, , \\ & \moy{\Xi_{\mu}(t)}_{\bm{\Xi}}=0 \, , \quad \moy{\Xi_{\mu}(t) \Xi_{\nu}(t')}_{\bm{\Xi}} = \delta_{\mu\nu} \argc{ T \zeta \delta(t-t') +\frac12 \Gamma_C(t,t') } + \frac12 M^{\mu\nu}_C(t,t') \, , \end{split} \end{equation} where the noise has zero mean by statistical isotropy, and the tensorial kernels satisfy the following self-consistent equations, with ${\mathbf{r}_0'(t)=\mathbf{r}_0 + \gamma(t) \, \mathbf{c} \,}$: \begin{equation} \label{eqC3:Mself} \begin{split} k^{\mu\nu}(t) &= \rho \int \mathrm d \mathbf{r}_0 \, g_{\text{in}}(\mathbf{r}_0) \int \mathrm d \mathbf{c} \, \bar{\mathcal{P}}(\mathbf{c}) \, \moy{ \nabla_\mu \nabla_\nu v( \mathbf{r}_0'(t) + \mathbf{w}(t))}_{\mathbf{w}} \ , \\ M^{\mu\nu}_C(t,t') &= \rho \int \mathrm d \mathbf{r}_0 \, g_{\text{in}}(\mathbf{r}_0) \, \int \mathrm d \mathbf{c} \, \bar{\mathcal{P}}(\mathbf{c}) \, \moy{ \nabla_\mu v(\mathbf{r}_0'(t) + \mathbf{w}(t)) \nabla_\nu v(\mathbf{r}_0'(t') + \mathbf{w}(t')) }_{\mathbf{w}} \ , \\ M^{\mu\nu}_R(t,s) &= \rho \int \mathrm d \mathbf{r}_0 \, g_{\text{in}}(\mathbf{r}_0) \int \mathrm d \mathbf{c} \, \bar{\mathcal{P}}(\mathbf{c}) \, \, \left. \frac{\delta \moy{ \nabla_\mu v(\mathbf{r}_0'(t) + \mathbf{w}(t)) }_{\mathbf{w},\bm{P}}}{\delta P_{\nu}(s)}\right\vert_{\bm P=\bm0} \ . \end{split} \end{equation} Here $\rho$ is the density, ${g_{\text{in}}(\mathbf{r}_0)}$ the initial distribution of inter-particle distances in the laboratory frame at ${t=0}$~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002,book_hansen_mcdonald-4th-ed}, and ${\bar{\mathcal{P}}(\mathbf{c})}$ the distribution of relative local displacements. The brackets ${\moy{\bullet}_{\mathbf{w}}}$ denote the dynamical average over the stochastic process $\mathbf{w}(t)$ (thus self-consistently defined), starting from a given set ${\lbrace \mathbf{r}_0, \mathbf{c} \rbrace }$. ${\bm{P}(s)}$ is to be understood as a perturbation field added into the dynamics, inside the interaction potential, as ${\nabla v(\mathbf{r}_0'(t)+\mathbf{w}(t) - \bm{P}(t))}$. If we start from a replica-symmetric equilibrium at inverse temperature ${\beta_0}$, we would have ${g_{\text{in}}(\mathbf{r}_0)=g_{\rm{eq}}(\mathbf{r}_0)=e^{-\beta_0 \, v( \mathbf{r}_0 )}}$. Finally, as discussed extensively in the previous section around Eq.~\eqref{eq-distr-cij-2}, we consider specifically vectors ${\mathbf{c} \equiv \tilde{c} \, \hat{\mathbf{c}} / \sqrt{d} }$, with ${\hat{\mathbf{c}}}$ a unitary vector uniformly distributed (by statistical isotropy) and the amplitude ${\tilde{c} \sim \mathcal{O}(1)}$ following the Gaussian distribution of zero mean and variance ${\mathfrak{F}}$ given in Eq.~\eqref{eq-def-frakF-Gaussian-PDF}.
Physically, we emphasise that $\hat{k}(t)$ characterises the averaged divergence of forces at time $t$, ${\hat{M}_C(t,t')}$ is nothing but the two-time force correlator, and ${\hat{M}_R(t,s)}$ the averaged response of the force at time $t$ to a perturbation at an earlier time $s$. We can in fact further reduce these high-dimensional vectorial stochastic processes, using the fact that the pairwise interaction potential is a radial function of the distance ${r(t) \equiv \vert \mathbf{r} (t) \vert = \vert {\mathbf{r}_0'(t) + \mathbf{w}(t)} \vert \approx \ell (1 + h(t)/d)}$, of typical interaction length $\ell$, and that the definition of the gap ${h(t)\sim \mathcal{O}(1)}$ allows us to focus on fluctuations of $\mathcal{O}(1/d)$ around ${r(t) \approx \ell}$. Exactly as for shear (in Sec.~IV.B of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}), we can decompose this inter-particle distance in three contributions as follows: \begin{equation} \label{eq-def-fluctuating-gap} \valabs{\mathbf{r}(t)} \equiv \valabs{\mathbf{r}_0'(t) + \mathbf{w}(t)} \approx r_0'(t) + \hat{\mathbf{r}}_0'(t) \cdot \mathbf{w}(t) + \frac{\mathbf{w}(t)^2 }{2 r_0'(t)} \quad \text{with} \quad \left\lbrace \begin{array}{l} \hat{\mathbf{r}}_0'(t) \cdot \mathbf{w}(t) \equiv \frac{\ell}{d} y(t) \, , \\ \\ \frac{\mathbf{w}(t)^2 }{2 r_0'(t)} \approx \frac{\moy{\mathbf{w}(t)^2} }{2 \ell} \approx \frac{\ell}{d} \Delta_r(t) \, , \end{array} \right. \end{equation} where we used that ${\valabs{\mathbf{w}(t)} \ll r_0'(t)}$ in high dimension, defined ${y(t)}$ as the projection of ${\mathbf{w}(t)}$ on the affine relative displacement ${\hat{\mathbf{r}}_0'(t)}$, and introduced the rescaled mean-square-displacement (MSD) function ${\Delta_r(t) = \frac{d}{\ell^2} \moy{\valabs{\mathbf{u}(t)}^2}}$. We recall that for a global shear we would have instead ${\mathbf{r}_0'(t)=\mathbf{r}_0 + \gamma(t) \, r_{0,2} \, \hat{\bf x}_1}$, with relative displacements $\mathbf{c}$ fully prescribed by the initial condition as ${\mathbf{c} = r_{0,2} \hat{\mathbf{x}}_1}$, or --~equivalently~-- the special case where ${\bar{\mathcal{P}}(\mathbf{c}) = \delta (\mathbf{c} - r_{0,2} \hat{\mathbf{x}}_1)}$. The implications of the more generic definition ${\mathbf{r}_0'(t)=\mathbf{r}_0 + \gamma(t) \, \mathbf{c}}$ are better reflected in the gap associated with the affine contribution ${r_0'(t)}$, \textit{i.e.}~the first contribution in Eq.~\eqref{eq-def-fluctuating-gap}: \begin{equation} r_0'(t) = \left\vert \mathbf{r}_{0}+ \gamma(t) \, \frac{\tilde{c}}{\sqrt{d}} \, \hat{\mathbf{c}} \right\vert \stackrel{(d \to \infty)}{\approx} r_0 \argp{1 + \frac{h_0}{d} + \frac{\gamma(t)}{d} \frac{\tilde{c}}{\, r_0} \, \sqrt{d} \, \hat{\mathbf{r}}_{0} \cdot \hat{\mathbf{c}} + \frac{\gamma(t)^2}{2d} \frac{\tilde{c}^2}{r_0^2}} \, , \label{eq-drift-r0-AQRD} \end{equation} where $h_0$ is the initial gap, ${r_0 \approx \ell}$, and we truncated higher-order terms in ${1/d}$. Note that the scalar product of the two random unit vectors ${\hat{\mathbf{r}}_{0} \cdot \hat{\mathbf{c}}}$ scales in distribution as ${1/\sqrt{d}}$ (despite the fact that it is a sum of $d$ terms each of ${\mathcal{O}(1/d)}$, because of the randomness of both vectors). We have thus ${g_c \equiv \sqrt{d} \, \hat{\mathbf{r}}_{0} \cdot \hat{\mathbf{c}} \sim \mathcal{O}(1)}$, normally distributed with zero mean and unit variance.
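This $1/\sqrt{d}$ scaling is easy to check numerically; a minimal sketch (our own illustration, with arbitrary toy parameters):
\begin{verbatim}
import numpy as np

def sample_gc(d, n, seed=0):
    """Sample g_c = sqrt(d) * (r0_hat . c_hat) for n independent
    pairs of random unit vectors in dimension d."""
    rng = np.random.default_rng(seed)
    r0 = rng.normal(size=(n, d))
    r0 /= np.linalg.norm(r0, axis=1, keepdims=True)
    c = rng.normal(size=(n, d))
    c /= np.linalg.norm(c, axis=1, keepdims=True)
    return np.sqrt(d) * np.einsum('ij,ij->i', r0, c)

g = sample_gc(d=200, n=10_000)
print(g.mean(), g.var())   # both close to 0 and 1, as expected
\end{verbatim}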
Further, we can rewrite Eqs.~\eqref{eq-def-fluctuating-gap}-\eqref{eq-drift-r0-AQRD} for the total gap as \begin{equation} \label{eq-def-gap-in-AQRD} \valabs{\mathbf{r}(t)} = \valabs{\mathbf{r}_0'(t) + \mathbf{w}(t)} \approx \ell (1 + h(t)/d) \quad \text{with} \quad \left\lbrace \begin{array}{l} h(t) = h_0'(t) + y(t) + \Delta_r(t) \, , \\ \\ h_0'(t)= h_0 + \gamma(t) \, g_c \, (\tilde{c}/\ell) + \frac{\gamma(t)^2}{2} \argp{\tilde{c}/\ell}^2 \, . \end{array} \right. \end{equation} This allows us to rewrite the high-dimensional \emph{vectorial} stochastic processes of Eqs.~\eqref{eq-vectorial-effective-stoch-processes}-\eqref{eqC3:Mself} as a \emph{scalar} stochastic process, with its three associated kernels (generalizing Eqs.~(46) and~(48) from Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}): \begin{equation} \label{eq-def-longitudinal-motion-stoch-process-AQRD} \begin{split} & \hat{\zeta} \dot y(t) = - \kappa^{\text{iso}}(t) y(t) + \int_0^t \mathrm d s \, \mathcal{M}^{\text{iso}}_R(t,s) \, y(s) - \bar v'(h(t)) + \Xi(t) \, , \\ & h(t) = h_0 + \gamma(t) \, g_c \, \breve{c} + \frac{\gamma(t)^2}{2} \breve{c}^2 + y(t) + \Delta_r(t) \, , \\ & \text{Initial condition:} \quad y(0)=0 \, , \quad \gamma(0)=0 \, , \quad \Delta_r(0)=0 \, , \quad \text{random} \: \arga{h_0, \breve{c}=\tilde{c}/\ell, g_c} \sim \mathcal{O}(1) \, , \\ & \text{Gaussian noise:} \quad \moy{\Xi(t)}_\Xi=0 \, , \quad \moy{\Xi(t)\Xi(s)}_\Xi= 2T \hat{\zeta} \delta(t-s) + \mathcal{G}_C(t,s)+ \mathcal{M}^{\text{iso}}_C(t,s) \, , \end{split} \end{equation} with the rescaled friction coefficient ${\hat{\zeta}= \frac{\ell^2}{2 d^2} \zeta}$ and noise kernel ${\mathcal{G}_C(t,t')= \frac{\ell^2}{2 d^2} \Gamma_C (t,t')}$, and the three rescaled kernels: \begin{equation} \label{eq-def-kernels-high-dim-gap-AQRD} \begin{split} \kappa^{\text{iso}}(t) &= \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, e^{h_0} g_{\text{in}}(h_0) \, \int \mathrm d \tilde{c} \int \mathrm d g_c \, \bar{\mathcal{P}}(g_c,\tilde{c}) \, \moy{ \bar v''(h(t)) + \bar v'(h(t)) }_{h \vert h_0,\tilde{c},g_c} \ , \\ \mathcal{M}_C^{\text{iso}}(t,t') &= \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, e^{h_0} g_{\text{in}}(h_0) \, \int \mathrm d \tilde{c} \int \mathrm d g_c \, \bar{\mathcal{P}}(g_c,\tilde{c}) \, \moy{ \bar v'(h(t)) \bar v'(h(t')) }_{h \vert h_0,\tilde{c},g_c} \ , \\ \mathcal{M}_R^{\text{iso}}(t,s) &= \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, e^{h_0} g_{\text{in}}(h_0) \, \int \mathrm d \tilde{c} \int \mathrm d g_c \, \bar{\mathcal{P}}(g_c,\tilde{c}) \, \left. \frac{\delta \moy{ \bar v'(h(t)) }_{h \vert h_0,\tilde{c},g_c,\mathcal{P}}}{\delta \mathcal{P}(s)}\right\vert_{\mathcal{P}=0} \ . \end{split} \end{equation} We recall from Refs.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002,agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} that instead of the density $\rho$ we have in those definitions a dependence on the rescaled packing fraction ${\widehat{\varphi} = \rho V_d \ell^d/d}$ with ${V_d = \pi^{d/2} / \Gamma(d/2+1)}$ the volume of the unit sphere in $d$ dimensions.
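For the reader's convenience, the rescaled packing fraction can be evaluated with a few lines of Python (a trivial helper we add purely for illustration):
\begin{verbatim}
import math

def rescaled_packing_fraction(rho, ell, d):
    """widehat(phi) = rho * V_d * ell**d / d,
    with V_d = pi**(d/2) / Gamma(d/2 + 1) the unit-ball volume."""
    V_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    return rho * V_d * ell ** d / d
\end{verbatim}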
If we start from equilibrium at temperature ${T_0=\beta_0^{-1}}$, we simply have ${g_{\text{in}}(h_0) = g_{\text{eq}}(h_0) = e^{-\beta_0 \bar v(h_0)}}$, and the statistical average over the relative local displacements takes the explicit form \begin{equation} \int \mathrm d \tilde{c} \int \mathrm d g_c \, \bar{\mathcal{P}}(g_c,\tilde{c}) \, (\dots) \quad \mapsto \quad \int_{\mathbb{R}} \underbrace{\mathrm d g_c \frac{e^{-g_c^2/2}}{\sqrt{2\pi}}}_{\equiv \mathcal{D} g_c} \, \int_{\mathbb{R}} \mathrm d \tilde{c} \, \frac{e^{-\tilde{c}^2/(2\mathfrak{F})}}{\sqrt{2 \pi \mathfrak{F}}} \, (\dots) \, . \label{eq-def-P-gc-ctilde} \end{equation} Finally, the dynamical equations for the correlation and response functions (and in particular for the MSD function) are strictly the same as for the case of global shear, see Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} (specifically Eq.~(41) and the definitions Eqs.~(31)-(33) and~(40)). If we wish to add an acceleration term with a particle mass $m$ and/or a retarded friction kernel ${\Gamma_R(t,s)}$ in the many-body dynamics~\eqref{eqC3:GENLang-shear-dynamics-AQRD} that we took as a starting point, we can similarly adapt the dynamical equations given in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002} (see specifically its summary section~VII). We emphasise that the case of global shear can be seen as the special case of Eqs.~\eqref{eq-drift-r0-AQRD}-\eqref{eq-def-gap-in-AQRD}, with \begin{equation} \mathbf{r}_0'(t) = \mathbf{r}_0 + \gamma(t) \, r_{0,2} \hat{\mathbf{x}}_1 \quad \Rightarrow \quad \mathbf{c} = r_{0,2} \hat{\mathbf{x}}_1 \: \Leftrightarrow \: \left\lbrace \begin{array}{ll} \text{rescaled amplitude:} & \tilde{c}/r_0 \equiv \sqrt{d} \, c/r_0 = \sqrt{d} \, \hat{r}_{0,2} \equiv g_2 \\ \text{direction:} & \hat{\mathbf{c}}=\hat{\mathbf{x}}_1 \; \Rightarrow \; g_c = \sqrt{d} \, \hat{r}_{0,1} \equiv g_1 \end{array} \right. \label{eq-translation-global-local-shear} \end{equation} where we used the definition for the unit vector coordinates ${\hat{r}_{0,\mu}\equiv g_\mu/\sqrt{d}}$ with ${g_\mu \sim \mathcal{O}(1)}$~\cite{biroli_urbani_2018_SciPostPhys4_020,agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}, and the average over relative displacements~\eqref{eq-def-P-gc-ctilde} simplifies into ${\int \mathcal{D} g_1 \int \mathcal{D} g_2 \, (\dots)}$. In other words, \emph{global shear has the same scalar mean-field dynamics as random local displacements with ${\mathfrak{F}/\ell^2 \equiv 1}$}. Conversely, we can tune the variance ${\mathfrak{F}}$, for instance by playing with the amplitude~$\Xi$ or the spatial correlation length~$\xi$ in Eq.~\eqref{eq-distr-cij-2}, allowing for a more general family of such random local drivings. In the expression~\eqref{eq-def-longitudinal-motion-stoch-process-AQRD} of the gap ${h_0'(t)}$, if we define ${\gamma_{\text{eff}}(t)= \gamma(t) \, \sqrt{\mathfrak{F}}/\ell}$, then $\breve{c}$ can be taken to be Gaussian distributed with zero mean and unit variance, exactly as the random variable ${g_2}$ appearing in the global shear expressions. This means that we can go one step further than Eq.~\eqref{eq-def-P-gc-ctilde}: \begin{equation} \int \mathrm d \tilde{c} \int \mathrm d g_c \, \bar{\mathcal{P}}(g_c,\tilde{c}) \, \argp{\dots \arga{\gamma(t), \tilde{c}, g_c} \dots} \quad \mapsto \quad \int \mathcal{D} g_c \, \int \mathcal{D} \breve{c} \, \argp{\dots \arga{\gamma_{\text{eff}}(t)=\gamma(t) \, \sqrt{\mathfrak{F}}/\ell , \ell \breve{c}, g_c} \dots} \, .
\label{eq-def-P-gc-ctilde-bis} \end{equation} Consequently, and this is our main result, in the limit of infinite dimension, the many-body dynamics under a global shear or spatially correlated random local displacements can be exactly reduced to scalar mean-field dynamics which are strictly equivalent, upon the above rescaling of the accumulated strain. The dependence on the spatial correlation length~$\xi$ persists through the variance ${\mathfrak{F}}$ of relative local displacements, with a functional form which is for instance given by Eq.~\eqref{eq-mathfrak-F-explicit-Gaussian_fxi} for Gaussian distributed local displacements. Technically, this simple connection with global shear relies ultimately on the separation of the affine and non-affine motions ${\mathbf{r}_0'(t)}$ and ${\mathbf{w} (t)}$, respectively. In infinite dimension, the explicit dependence on the `accumulated shear strain' ${\gamma(t)}$ only appears within the affine motion ${\mathbf{r}_0'(t)}$, which keeps a memory of the initial condition, and its associated gap ${h_0'(t)}$. As for the non-affine motion, the feedback of $\gamma(t)$ is treated in an exact mean-field way through the three kernels~\eqref{eq-def-kernels-high-dim-gap-AQRD}. Our new protocol essentially allows for a more general distribution of the relative displacement amplitude ${\bar{\mathcal{P}}(\tilde{c})}$, albeit Gaussian with zero mean and variance ${\mathfrak{F}}$, but it is the exact mean-field reduction of the dynamics at ${d \to \infty}$ which allows us to do so without any loss of generality (see the discussion surrounding Eqs.~\eqref{eq-distr-cij-1}-\eqref{eq-distr-cij-2} in the previous section on that point). Finally, we recall the dynamical definition of the stress under a global shear strain (see Eq.~(49) in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}): \begin{equation} \hat{\sigma}^{\text{shear}}(t) \equiv \frac{\beta \sigma^{\text{shear}}(t)}{d \, \rho} = \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, \int \mathcal{D}g_1 \mathcal{D}g_2 \, e^{h_0} g_{\text{in}}(h_0,g_1,g_2) \, g_1 g_2 \, \moy{\bar v'(h(t))}_{h \vert h_0, g_1, g_2, \gamma(t)} \, , \label{eq-DMFE-def-stress-global-shear} \end{equation} which consists in a statistical average over the initial condition (\textit{i.e.}~the initial gap $h_0$ and rescaled components of ${\hat{\mathbf{r}}_0}$ in the shear plane, $g_{\mu}=\sqrt{d} \hat{r}_{0,\mu}$ for ${\mu=1,2}$) of the average force amplitude ${\moy{\bar v'(h(t))}}$ under an accumulated strain ${\gamma(t)}$. This definition can be generalised to our new protocol, using the translation stated in Eq.~\eqref{eq-translation-global-local-shear} and the decomposition of the gap ${h(t)}$ of Eq.~\eqref{eq-def-gap-in-AQRD}, as follows: \begin{equation} \begin{split} \hat{\sigma}(t) \equiv \frac{\beta \sigma(t)}{d \, \rho} =& \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, e^{h_0} \, g_{\text{in}}(h_0) \int \mathcal{D}g_c \, \int_{\mathbb{R}} \mathrm d \tilde{c} \, \frac{e^{-\tilde{c}^2/(2\mathfrak{F})}}{\sqrt{2 \pi \mathfrak{F}}} \, \frac{\tilde{c}}{r_0} \, g_c \, \moy{\bar v'(h(t))}_{h \vert h_0, g_c, \tilde{c}/r_0,\gamma(t)} \\ =& \frac{\widehat{\varphi}}2 \int^{\infty}_{-\infty}\!\! \mathrm d h_0 \, e^{h_0} \, g_{\text{in}}(h_0) \int \mathcal{D}g_c \, \int \mathcal{D} \breve{c} \, \, \breve{c} \, g_c \, \moy{\bar v'(h(t))}_{h \vert h_0, g_c, \breve{c},\gamma_{\text{eff}}(t)} \, .
\end{split} \label{eq-DMFE-def-stress-random-local-displ} \end{equation} The last equality emphasises once again the equivalence with global shear, upon the rescaling of the accumulated strain, since we can simply replace $g_1$ by ${\breve{c}=\tilde{c}/\ell}$, $g_2$ by $g_c$, and ${g_{\text{in}}(h_0,g_1,g_2)}$ by ${g_{\text{in}}(h_0)}$ (the latter by statistical isotropy). In addition, it provides a quite intuitive definition of the stress in the system: it is the statistical average, for a given interacting pair of particles, of the scalar product of the interparticle force at time $t$ with its assigned relative local displacement, \textit{i.e.} \begin{equation} \frac{\tilde{c}}{r_0} \, g_c \, \moy{\bar v'(h(t))} = \frac{1}{r_0} \underbrace{\sqrt{d} \tilde{c} \, \hat{\mathbf{c}}}_{\equiv \mathbf{c}} \cdot \hat{\mathbf{r}}_0 \, \moy{ v'(r(t))} \, \frac{\ell}{d} \approx \frac{1}{d} \moy{\mathbf{c}_{ij} \cdot \nabla v \argp{\valabs{\mathbf{r}_{ij} (t)}}} \, , \end{equation} where we have reinstated the pair indices just to emphasise that it is computed over pairs, and the brackets denote here the dynamical average at fixed initial conditions. This definition is in agreement with Eq.~(4) in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, where the stress is computed as the scalar product in the ${Nd}$-dimensional phase space of the local relative displacement vector ${\vert \lbrace c_{ij} \rbrace \rangle}$ with the force vector ${\vert \lbrace F_{ij}(t) \rbrace \rangle}$ prescribed by the configuration at time $t$. It reinforces \emph{the notion that the stress is defined with respect to a given driving direction in configuration space}, as further discussed in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. \section{Quasistatic driving of glassy states} \label{section-quasistatics-glassy-states} We now focus on the quasistatic driving of glassy states, meaning that we start from an equilibrium initial condition below the dynamical transition temperature~\cite{book_parisi_urbani_zamponi_2020}. We aim in particular at comparing the AQRD and AQS protocols, as discussed in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. Such quasistatic driving can be investigated directly with static approaches~\cite{rainone_2015_PhysRevLett114_015701,rainone_urbani_2016_JStatMech2016_053302,biroli_urbani_2016_NatPhys12_1130, urbani_zamponi_2017_PhysRevLett118_038001,biroli_urbani_2018_SciPostPhys4_020,altieri_2019_PhysRevE100_032140}, and in Refs.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002,agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} we provided a dynamical derivation of such `state-following protocols'. Because of the equivalence with the case of global shear, which we have established in the previous section \emph{at the level of the dynamics}, we can straightforwardly adapt the results presented in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001} (specifically in Sec.~V), using the notations and definitions of the review book~\cite{book_parisi_urbani_zamponi_2020}. However, we emphasise that our formalism allows us to include more generally the possibility of a finite temperature, directly in the many-body dynamics given by Eqs.~\eqref{eqC3:GENLang-shear-dynamics-AQRD}-\eqref{eqC3:GENLang-shear-noise}, so we are not restricted to the pure athermal case even though we refer by convention to the AQRD and AQS protocols.
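Before moving on, we note that the per-pair stress expression of Eq.~\eqref{eq-DMFE-def-stress-random-local-displ} translates directly into a numerical estimator; a minimal sketch (our own illustration, with harmonic soft spheres chosen purely as an example potential and nonzero separations assumed):
\begin{verbatim}
import numpy as np

def pair_stress(r, c, ell=1.0, eps=1.0):
    """Estimate (1/d) < c_ij . grad v(|r_ij|) > over a sample of pairs.
    r, c : arrays of shape (n_pairs, d) holding the relative positions
    and the assigned relative local displacements.  Harmonic soft
    spheres, v(r) = (eps/2) (1 - r/ell)^2 for r < ell, are used here
    purely as an example potential."""
    d = r.shape[1]
    dist = np.linalg.norm(r, axis=1)
    dv = np.where(dist < ell, -(eps / ell) * (1.0 - dist / ell), 0.0)
    grad_v = dv[:, None] * r / dist[:, None]   # v'(r) * r_hat
    return np.mean(np.einsum('ij,ij->i', c, grad_v)) / d
\end{verbatim}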
In the `solid' glassy phase we have at short timescales equilibrium-like fluctuations (\textit{i.e.}~satisfying fluctuation-dissipation relations), within the local metabasins of a disordered energy landscape, whose statistics depend on the preparation protocol of the system (we assume the corresponding equilibrium to be replica-symmetric, for now). By definition, quasistatic driving amounts to slowly modifying this landscape while always enforcing equilibrium at short timescales. In practice, we substitute such a timescale-separation ansatz into our dynamical mean-field equations (DMFE) and we assume that the mean-square-displacement (MSD) functions saturate to a finite plateau at long times, which can in turn be interpreted as the typical size of the local metabasins. The set of self-consistent equations for this plateau value has a solution as long as the system behaves as a `solid', which can eventually be broken by either increasing the temperature, lowering the packing fraction, shearing too much, or increasing the variance of random forces. Note that we focus here on the RS phase, but the system might undergo a transition to a full replica-symmetry-breaking (fullRSB) phase \cite{rainone_2015_PhysRevLett114_015701,rainone_urbani_2016_JStatMech2016_053302} before actually yielding `for good'. We consider the case of a `strain' ${\gamma}$ smoothly applied over a finite timescale $\tau$, such that at long times the affine motion simply becomes ${\mathbf{r}_0'(t) \to \mathbf{r}_0 + \gamma \mathbf{c} \equiv \hat{S}_{\gamma {\bf c}} \mathbf{r}_0}$. We assume the same ansatz as in Eq.~(51) of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}, which thus leads to the same restricted equilibrium distributions as in Eq.~(55) of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}, replacing only ${\hat{S}_{\gamma}}$ in AQS by ${\hat{S}_{\gamma {\bf c}}}$ in AQRD. In addition to integrating over the initial condition ${\int \mathrm d \mathbf{r}_0 \, g_{\text{in}}(\mathbf{r}_0)}$, we have to average over the local random displacements ${\int \mathrm d {\bf c} \, \bar{\mathcal{P}}({\bf c}) \, (\dots)}$. We complement the ansatz by assuming that we have at long times the MSD plateaus $D_r$ and $D$, with the following definitions (recalled from Eq.~(56) of Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}): \begin{equation} \label{eq:MSDslongtime} D_r = \frac1d \lim_{t\to\infty} \moy{ \valabs{ \mathbf{x}(t) - \mathbf{x}(0) }^2} \, , \quad D = \frac1d \lim_{t\to\infty} \lim_{|t-s|\to\infty} \moy{ \valabs{ \mathbf{x}(t) - \mathbf{x}(s) }^2 } \, , \quad A \equiv 2 D_r - D \, . \end{equation} We assume that we start from an RS equilibrium at inverse temperature ${\beta_0}$, that we are in contact with a thermal bath at inverse temperature ${\beta}$, and for completeness we allow for constant random forces with ${\Gamma_C(t,t')=f_0^2}$ in Eq.~\eqref{eqC3:GENLang-shear-noise}.
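In a simulation, these plateau values can be estimated directly from trajectories; a crude single-trajectory sketch (our own illustration, with hypothetical index parameters; in practice one would average over many samples):
\begin{verbatim}
import numpy as np

def msd_plateaus(x, t_long, lag):
    """Crude plateau estimates of D_r and D from one trajectory x of
    shape (n_t, d), following the long-time definitions above; t_long
    is an index deep in the plateau regime, lag a large separation."""
    d = x.shape[1]
    D_r = np.mean(np.sum((x[t_long:] - x[0]) ** 2, axis=1)) / d
    D = np.mean(np.sum((x[t_long + lag:] - x[t_long:-lag]) ** 2,
                       axis=1)) / d
    return D_r, D, 2.0 * D_r - D   # last entry is A = 2 D_r - D
\end{verbatim}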
These quantities have to satisfy the following set of equations, given first in their high-dimensional vectorial form: \begin{equation} \label{eq-state-following-equa-AQRD} \begin{split} & \frac{1}{D} - \frac{A}{D^2} = - \frac{\beta^2 f_0^2}{2}-\frac{\rho}{d} \int \mathrm d \mathbf{r}_0 \, e^{-\beta_0 v(\mathbf{r}_0)} \int \mathrm d \mathbf{c} \, \bar{\mathcal{P}}(\mathbf{c}) \, e^{\frac{A}{2} \nabla^2 } \argc{ \frac{ \frac{\nabla^2}2 e^{\frac{D}{2} \nabla^2} e^{-\beta v(\hat{S}_{\gamma {\mathbf{c}}} \mathbf{r}_0 )} } { e^{\frac{D}{2}\nabla^2} e^{-\beta v(\hat{S}_{\gamma {\mathbf{c}}} \mathbf{r}_0 )}} } \, , \\ & \frac1{D} = -\frac{\rho}{d} \int \mathrm d \mathbf{r}_0 \, e^{-\beta_0 v(\mathbf{r}_0)} \int \mathrm d {\mathbf{c}} \, \bar{\mathcal{P}}({\mathbf{c}}) \, e^{\frac{A}{2} \nabla^2 } \frac{\nabla^2}2\log \argc{ e^{\frac{D}2 \nabla^2} e^{-\beta v(\hat{S}_{\gamma {\mathbf{c}}} \mathbf{r}_0)} } \, , \end{split} \end{equation} where we used the compact notation ${e^{\frac{\tilde{\Delta}}2 \nabla^2} f(\mathbf{r})= \int \mathrm d \mathbf{x} \, \frac{e^{- \frac{\mathbf{x}^2}{2 \tilde{\Delta}}}}{(2\pi \tilde{\Delta})^{d/2}} f(\mathbf{r} + \mathbf{x})}$. The corresponding glassy free energy ${{\rm f}_g}$, which is the quantity that would be computed and studied using replic{\ae} in static approaches~\cite{book_parisi_urbani_zamponi_2020}, is \begin{equation} \label{eq-state-following-free-energybis-AQRD} - \frac{2}{d} \beta {\rm f}_g = \argc{ 1 + \log(\pi D) + \frac{A}{D} } + \frac{\beta^2 f_0^2 D}{2} + \frac{\rho}{d} \int \mathrm d \mathbf{r}_0 \, e^{-\beta_0 v(\mathbf{r}_0)} \int \mathrm d \mathbf{c} \, \bar{\mathcal{P}}(\mathbf{c}) \, e^{\frac{A}2 \nabla^2} \log \argc{ e^{\frac{D}2 \nabla^2} e^{-\beta v(\hat{S}_{\gamma \mathbf{c}} \mathbf{r}_0)} } \, , \end{equation} and one can check that we can recover in particular Eqs.~\eqref{eq-state-following-equa-AQRD} from the extremalization conditions ${\partial_D {\rm f}_g =0}$ and ${\partial_A {\rm f}_g =0}$, as it should be. For explicit computations of observables, such as the stress-strain curves, we need to work with the scalar formulation of the free energy, which is much more user-friendly and essentially involves scalar Gaussian convolutions. It can be obtained either starting from Eq.~\eqref{eq-state-following-free-energybis-AQRD}, or from the scalar mean-field dynamics with the scalar counterpart of the quasistatic ansatz. Thereafter we directly use the expressions from the review book~\cite{book_parisi_urbani_zamponi_2020}, starting from the definitions of the rescaled MSD functions ${\Delta_r = \frac{d^2}{\ell^2} D_r}$, ${\Delta = \frac{d^2}{\ell^2} D}$, and ${\mathcal{A} = 2 \Delta_r - \Delta}$, as well as the rescaled random-forces variance ${\hat{f}_0^2 = \frac{\ell^2}{2 d^2} f_0^2}$ (same rescaling as for ${\mathcal{G}_C(t,s)=\frac{\ell^2}{2 d^2} \Gamma_C(t,s)}$, given after Eq.~\eqref{eq-def-longitudinal-motion-stoch-process-AQRD}).
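The compact notation above is just a Gaussian convolution; its one-dimensional analogue, which is what effectively enters the scalar formulation below, can be sketched as follows (an illustrative quadrature of our own, not an optimised routine):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def gauss_smooth(f, r, delta):
    """1d analogue of exp((delta/2) * laplacian) f: convolution of f
    with a Gaussian of variance delta, evaluated by quadrature."""
    w = np.sqrt(delta)
    kernel = lambda x: (np.exp(-x**2 / (2.0 * delta))
                        / np.sqrt(2.0 * np.pi * delta))
    val, _ = quad(lambda x: kernel(x) * f(r + x), -10.0 * w, 10.0 * w)
    return val
\end{verbatim}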
The glass free energy takes the now standard form~\cite{book_parisi_urbani_zamponi_2020}: \begin{equation} - \frac{2}{d} \beta {\rm f}_g = \argc{1 + \log \argp{\pi \ell^2 \Delta /d^2} + \frac{2 \Delta_r - \Delta}{\Delta}} + \beta^2 \hat{f}_0^2 \Delta + \widehat{\varphi} \int_{\mathbb{R}} \mathrm d h_0 \, e^{h_0} \, \underbrace{q_{\gamma}\argp{2 \Delta_r - \Delta, \beta_0 ; h_0}}_{\text{only dependence on $\gamma$}} \, \log q \argp{\Delta, \beta;h_0} \, , \label{eq-fg-scalar-shear} \end{equation} with $\widehat{\varphi}$ the rescaled packing fraction as defined after Eq.~\eqref{eq-def-kernels-high-dim-gap-AQRD}, and the following definitions: \begin{eqnarray} q \argp{\Delta, \beta; h_0} &\equiv& \int_{\mathbb{R}} \mathrm d h \, e^{-\beta \bar v(h)} \frac{e^{-\frac{(h-h_0-\Delta/2)^2}{2 \Delta}}}{\sqrt{2 \pi \Delta}} \, , \label{eq-fg-scalar-shear-q-function-generic} \\ q_{\gamma}^{\text{AQS}} \argp{\widetilde{\Delta}, \beta_0 ; h_0} &\equiv& \int \mathrm d g_2 \, \frac{e^{-g_2^2/2}}{\sqrt{2 \pi}} \, q \argp{\widetilde{\Delta} + \gamma^2 g_2^2, \beta_0 ; h_0} \equiv \int \mathcal{D} g_2 \, q \argp{\widetilde{\Delta} + \gamma^2 g_2^2, \beta_0 ; h_0} \, , \label{eq-fg-scalar-shear-q-function-AQS} \\ q_{\gamma}^{\text{AQRD}}\argp{\widetilde{\Delta}, \beta_0 ; h_0} &\equiv& \int \mathrm d \tilde{c} \, \frac{e^{-\tilde{c}^2/(2 \mathfrak{F})}}{\sqrt{2 \pi \mathfrak{F}}} \, q \argp{\widetilde{\Delta} + \gamma^2 \frac{\tilde{c}^2}{\ell^2}, \beta_0 ; h_0} = \int \mathrm d \breve{c} \, \frac{e^{-\breve{c}^2/2}}{\sqrt{2 \pi}} \, q \argp{\widetilde{\Delta} + \underbrace{\gamma^2 \frac{\mathfrak{F}}{\ell^2}}_{\gamma_\text{eff}^2} \breve{c}^2, \beta_0 ; h_0} \, . \label{eq-qgammaAQRD} \end{eqnarray} In this case, we see that the free energy (from which all static observables can be derived) is strictly equivalent for AQS and AQRD \emph{provided that we rescale accordingly the accumulated strain}, specifically by replacing $\gamma$ by ${\gamma_{\text{eff}}= \gamma \sqrt{\mathfrak{F}}/\ell}$. This is a direct consequence of the equivalence between global shear and constant random local displacements --~which we have shown in Sec.~\ref{section-recalling-DMFE} to be valid for the whole dynamics~-- which had to hold in particular for quasistatic drivings. All those quantities can be computed at least numerically, for a given interaction potential ${v(r) = \bar v(h)}$. In practice, the set of equations for $\arga{\Delta,\mathcal{A}=2\Delta_r - \Delta}$ is obtained by extremalization of the free energy~\eqref{eq-fg-scalar-shear}, with ${\partial_{\Delta} {\rm f}_g =0}$ and ${\partial_{\Delta_r} {\rm f}_g =0}$, yielding the following equations: \begin{equation} \begin{split} & \frac{1}{\Delta} - \frac{\mathcal{A}}{\Delta^2} = -\beta^2 \hat{f}_0^2 - \widehat{\varphi} \int_{\mathbb{R}}\mathrm d h_0 \, e^{h_0} \, q_{\gamma} \argp{\mathcal{A}, \beta_0 ; h_0} \frac{\partial_\Delta q \argp{\Delta, \beta ; h_0}}{q \argp{\Delta, \beta ; h_0}} \, , \\ & \frac{1}{\Delta} = - \widehat{\varphi} \int_{\mathbb{R}}\mathrm d h_0 \, e^{h_0} \, \partial_{\mathcal{A}} q_{\gamma} \argp{\mathcal{A}, \beta_0 ; h_0} \, \log q \argp{\Delta, \beta ; h_0} \, .
\end{split} \label{eq-MSD-scalar-complete} \end{equation} Once we have self-consistently determined the values of ${\arga{\Delta,\mathcal{A}}}$ or ${\arga{\Delta,\Delta_r}}$ satisfying these equations (if such a solution exists, the system is in the solid phase, or at least the solid phase can be sustained), we can compute observables such as the shear stress or the pressure~\cite{book_parisi_urbani_zamponi_2020}. The principal interest of including the case with constant random forces is that Eq.~\eqref{eq-MSD-scalar-complete} can then be used more generally to study not only a `strain'-controlled protocol (controlling ${\gamma(t)}$ at ${\hat{f}_0^2=0}$), but also its `stress'-controlled counterpart (controlling ${f_0^2}$ at ${\gamma(t)=0}$). In the next section, we focus on the former case, which was the case under study in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, for which the corresponding stress is computed as ${\sigma = \partial_{\gamma} {\rm f}_g}$ (see Eq.~(10.15) of~\cite{book_parisi_urbani_zamponi_2020}). \section{AQRD \textit{versus} AQS stress-strain curves and elastic modulus} \label{section-AQRD-stress-strain-elastic-modulus} We have established in Sec.~\ref{section-recalling-DMFE} the equivalence between global shear and constant random local displacements at the level of the dynamics, which includes the quasistatic case as explicitly shown in Sec.~\ref{section-quasistatics-glassy-states}. The implications for the quasistatic stress-strain curves and their associated elastic moduli, in the comparison between the AQRD and AQS protocols, are quite straightforward to obtain, as we discuss below. We recall that our formalism allows for a finite temperature, so our predictions are not restricted to the athermal case, although we use thereafter the AQS/AQRD abbreviation by convention and in direct reference to the results announced in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. For a quasistatic global shear in the pre-yielding regime, starting from an RS equilibrium glass, the stress can be computed by taking the derivative of the glass free energy given by Eqs.~\eqref{eq-fg-scalar-shear}-\eqref{eq-qgammaAQRD} with respect to the `true' strain, ${\sigma_{\text{AQS}}(\gamma)=\partial_{\gamma} {\rm f}_g^{\text{AQS}}(\gamma)}$~\cite{book_parisi_urbani_zamponi_2020}. This definition has been used for instance to compute the quasistatic mean-field stress-strain curves given in Refs.~\cite{rainone_2015_PhysRevLett114_015701,rainone_urbani_2016_JStatMech2016_053302, urbani_zamponi_2017_PhysRevLett118_038001,altieri_2019_PhysRevE100_032140}, for sheared hard-sphere systems. Consequently, their AQRD counterparts are simply obtained by the following rescaling: \begin{equation} \gamma_\text{eff} = \gamma \sqrt{\mathfrak{F}}/\ell \quad \Rightarrow \quad \sigma_{\text{AQRD}}(\gamma) \equiv \partial_\gamma {\rm f}_g^{\text{AQRD}} \argp{\gamma} = \partial_\gamma {\rm f}_g^{\text{AQS}} \argp{\gamma_{\text{eff}}} = \frac{\sqrt{\mathfrak{F}}}{\ell} \, \frac{\partial {\rm f}_g^{\text{AQS}} \argp{\gamma_{\text{eff}}}}{\partial {\gamma_{\text{eff}}} } \equiv \frac{\sqrt{\mathfrak{F}}}{\ell} \, \sigma_{\text{AQS}}(\gamma_{\text{eff}}) \, .
\label{stress-strain-curve-AQRD} \end{equation} As for the elastic modulus at zero strain, using the definition given by Eq.~(10.18) in~\cite{book_parisi_urbani_zamponi_2020}, it is simply given by: \begin{equation} \mu_{\text{AQS}} \equiv \frac{\mathrm d \sigma_{\text{AQS}}(\gamma)}{\mathrm d \gamma}\Big\vert_{\gamma=0} \quad \Rightarrow \quad \mu_{\text{AQRD}} \equiv \frac{\mathrm d \sigma_{\text{AQRD}}(\gamma)}{\mathrm d \gamma}\Big\vert_{\gamma=0} = \frac{\mathfrak{F}}{\ell^2} \, \frac{\mathrm d \sigma_{\text{AQS}}(\gamma_{\text{eff}})}{\mathrm d \gamma_{\text{eff}}}\Big\vert_{\gamma_{\text{eff}}=0} = \frac{\mathfrak{F}}{\ell^2} \mu_{\text{AQS}} \, , \label{elastic-modulus-zero-strain-AQRD} \end{equation} where ${\mu_{\text{AQS}}}$ is fixed by the rescaled packing fraction ${\widehat{\varphi}}$, the initial and final inverse temperatures ${\arga{\beta_0,\beta}}$, and of course the specific interaction potential ${v(r)=\bar v(h)}$ with its typical interaction length ${\ell}$ fixing the units of length. Eq.~\eqref{elastic-modulus-zero-strain-AQRD} is particularly interesting since it predicts the ratio $\kappa$ of the initial elastic moduli to be directly given by the unitless variance ${\mathfrak{F}/\ell^2}$. These predictions can be rewritten as follows, denoting by $\tilde{\gamma}$ the `random strain' controlled in AQRD as opposed to $\gamma_{\text{shear}}$, the corresponding AQS strain, in order to match the notations used in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706} (see its Eq.~(7)): \begin{equation} \kappa \equiv \frac{\mu_{\text{AQRD}}}{\mu_{\text{AQS}}} = \frac{\mathfrak{F}}{\ell^2} \, , \quad \gamma_{\text{shear}} = \tilde{\gamma} \sqrt{\kappa} \, , \quad \sigma_{\text{AQS}} = \sigma_{\text{AQRD}}(\tilde{\gamma}) / \sqrt{\kappa} \, . \label{eq-prescription-rescaling-stress-strain-curves-AQRD-AQS} \end{equation} These relations prescribe how one should rescale the AQRD mean-field stress-strain data to make them collapse onto their AQS counterpart, and conversely how to generate AQRD curves from an AQS one. These predictions, obtained in the infinite-dimensional limit, might extend to lower dimensions insofar as mean-field-like quantities are considered. They have been tested in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706} on two-dimensional numerical simulations, and the quantitative agreement was remarkably good for the stress-strain curves, the distributions of elastic moduli, of strain steps between stress drops (\textit{i.e.}~of the distance between two successive saddle points of the potential energy landscape), and of stress drops themselves. According to these predictions, ${\mu_{\text{AQRD}} \propto \mathfrak{F} \, \mu_{\text{AQS}}}$. If we assume in particular ${f_{\xi}(x)}$ to be a normalised Gaussian function, as in Eq.~\eqref{eq-mathfrak-F-explicit-Gaussian_fxi}, we then expect the elastic modulus to have a crossover depending on the ratio ${\ell/\xi}$, with ${\mathfrak{F} \sim 1/\xi}$ at ${\ell/\xi \gg 1}$ and ${\mathfrak{F} \sim 1/\xi^{3}}$ at ${\ell/\xi \ll 1}$, the latter case corresponding to a global shear, where $\xi$ is of the order of the system size~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}. In both cases, this implies that the elastic modulus decreases with increasing $\xi$, as we numerically observe and physically expect: it is easier to deform a glass with local displacements which are more coordinated, \textit{i.e.}~with a larger correlation length.
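The rescaling prescription of Eq.~\eqref{eq-prescription-rescaling-stress-strain-curves-AQRD-AQS} is straightforward to apply to data; a minimal sketch (our own illustration, directly implementing the two relations above):
\begin{verbatim}
import numpy as np

def aqrd_from_aqs(gamma_aqs, sigma_aqs, kappa):
    """Generate an AQRD stress-strain curve from AQS data, using
    gamma_shear = tilde_gamma * sqrt(kappa) and
    sigma_AQS = sigma_AQRD(tilde_gamma) / sqrt(kappa)."""
    tilde_gamma = np.asarray(gamma_aqs) / np.sqrt(kappa)
    sigma_aqrd = np.asarray(sigma_aqs) * np.sqrt(kappa)
    return tilde_gamma, sigma_aqrd
\end{verbatim}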
Less obvious is that a larger correlation length between individual displacements implies a smaller variance of the relative displacements, \textit{i.e.}~$\mathfrak{F}$ must decrease with increasing $\xi$. This crossover is illustrated in Fig.~\ref{fig-rescaled-stress-strain-curves}, on the left for ${\mathfrak{F}} \propto \kappa \equiv \mu_{\text{AQRD}}/\mu_{\text{AQS}}$ and on the right for a set of stress-strain curves, obtained by rescaling ${\sigma_{\text{AQS}}(\gamma)}$ computed for hard spheres in Ref.~\cite{rainone_2015_PhysRevLett114_015701} (see its Fig.~2, for a packing fraction ${\widehat{\varphi}=6}$). Because of the power-law dependence of $\mathfrak{F}$ on $\xi$, these curves can easily be shifted by several orders of magnitude (including their ending point, which is related to the yielding point). \begin{figure}[h] \begin{center} \includegraphics[width=0.85\textwidth]{rescaled-strain-stress-curves-v3} \caption{ \textit{Left:}~Variance of relative displacements ${\mathfrak{F}}$ as a function of the correlation length $\xi$ (both rescaled by the typical interparticle distance $\ell$), assuming a Gaussian correlation as in Eq.~\eqref{eq-mathfrak-F-explicit-Gaussian_fxi} with an overall amplitude ${\Xi=1}$. This amplitude can be tuned to shift the crossover in the scaling with $\xi$, with respect to the value ${\mathfrak{F}/\ell^2=1}$ which indicates the collapse onto the AQS case. \textit{Right:}~Rescaled AQRD stress $\hat{\sigma}_{\text{AQRD}}$ (as defined in Eq.~\eqref{eq-DMFE-def-stress-random-local-displ}) as a function of the `random' strain ${\tilde{\gamma}}$, generated by rescaling of the AQS stress-strain curve in dotted black (reproduced from Fig.~2 in Ref.~\cite{rainone_2015_PhysRevLett114_015701}, for hard spheres at a packing fraction ${\widehat{\varphi}=6}$) according to Eq.~\eqref{eq-prescription-rescaling-stress-strain-curves-AQRD-AQS}. The three continuous curves correspond to ${\xi/\ell \in \lbrace 0.5, 0.7, 1 \rbrace}$ (respectively purple, blue and red), as indicated by the black arrow for increasing $\xi$. They correspond to ${\mathfrak{F}/\ell^2 \in \lbrace 1.38 , 0.73 , 0.31 \rbrace}$ for the amplitude fixed at ${\Xi=1}$. Note that the overall amplitude $\Xi$ is also a control parameter of our protocol, whereas in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706} its value is fixed by the imposed normalisation ${\langle c \vert c \rangle=1}$. } \label{fig-rescaled-stress-strain-curves} \end{center} \end{figure} \section{Conclusion} \label{section-discussion-conclusion} We have established the exact equivalence between global shear and a local forcing --~in the form of imposed constant random local displacements~-- for infinite-dimensional particle systems with pairwise interactions. By adapting the derivation for global shear detailed in Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_334001}, we have shown that this equivalence holds at the level of the full mean-field dynamics upon a simple rescaling of the accumulated effective strain (see Sec.~\ref{section-recalling-DMFE}). This statement holds in particular for quasistatic drivings (in Sec.~\ref{section-quasistatics-glassy-states}), and culminates in the prediction that the AQS and AQRD stress-strain curves and their initial elastic moduli can be collapsed onto each other via Eq.~\eqref{eq-prescription-rescaling-stress-strain-curves-AQRD-AQS} (see Sec.~\ref{section-AQRD-stress-strain-elastic-modulus}).
The key parameter of this equivalence is the unitless variance ${\mathfrak{F}/\ell^2}$ of the relative local displacements --~restricted to interacting pairs of particles~-- which keeps an explicit dependence on the spatial correlation of the local forcing. An increasing correlation length implies a decreasing variance: such a coordinated forcing deforms an amorphous material more efficiently, in the sense that it corresponds to a larger effective accumulated strain ${\gamma_{\text{eff}} = \gamma \sqrt{\mathfrak{F}}/\ell}$. In our derivation, we first assume a Gaussian distribution for the local deformation field. In addition, for explicit computations we consider a Gaussian two-point correlator ${f_\xi}$ (in Eq.~\eqref{eq-distr-cij-2}) for which we predict a crossover from ${\mathfrak{F}(\xi) \sim 1/\xi}$ at ${\xi/\ell \ll 1}$ to ${\sim 1/\xi^3}$ in the opposite limit, the latter being relevant to compare to the global shear case (see Sec.~\ref{section-settings-global-vs-local-shear-strain}). Remarkably, except for these scalings, our construction is not specific to these assumptions. It relies indeed on the conjunction of two features which are exact in the infinite-dimensional limit: \textit{(i)}~different pairs of particles effectively do not interact, in the sense that their contribution becomes irrelevant in path-integral statistical averages (see for instance Ref.~\cite{agoritsas_maimbourg_zamponi_2019_JPhysA52_144002}), so the effective mean-field dynamics is controlled by single-pair statistics; \textit{(ii)}~the statistics of relative displacements tends to a Gaussian distribution, uncorrelated between different pairs and fully characterised by the variance on a given pair. We could consequently consider alternative deformation fields, such as the wave-like patterned fields of Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, and their sole relevant feature for the mean-field dynamics in infinite dimension will always be their variance $\mathfrak{F}$ (albeit with a different functional dependence on $\xi$). A pending issue regarding such infinite-dimensional results is their relevance for systems in low dimensions. As emphasised in the introduction, this limit has the advantage that exact mean-field predictions might be within reach, while on the other hand important physics due to spatial correlations might be completely washed out. Since this work was done in parallel with the numerical study presented in Ref.~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706}, we were able to directly address this issue by comparing our mean-field predictions~\eqref{eq-prescription-rescaling-stress-strain-curves-AQRD-AQS} to 2D numerics under either AQRD or AQS protocols. Beyond qualitatively similar behaviours, we found a remarkably good quantitative agreement regarding the stress-strain curves and the avalanche statistics. First, we could collapse the different stress-strain curves and the distribution of stress drops and strain steps via a rescaling controlled by the ratio of initial elastic moduli ${\kappa=\mu_{\text{AQRD}}(\xi)/\mu_{\text{AQS}}}$. Secondly, we could measure the unitless variance ${\mathfrak{F}(\xi)/\ell^2}$ and show that it coincides with $\kappa$ (as predicted in infinite dimension) for wave-like patterned displacement fields, and also for the Gaussian random displacements that we considered here.
For the latter, quantitative discrepancies nevertheless appear when the correlation length $\xi$ decreases, hinting at the increasing role of spatial correlations in these low-dimensional systems. These results support the physical picture that there is indeed a proper equivalence between global shear and local forcing, regarding the statistical sampling of the configurational phase space, at least as long as we focus on mean-field metrics. A promising perspective for this work would be to challenge this picture in other driven disordered systems, and to systematically disentangle such mean-field behaviour from spatial-correlation effects, depending on the specific nature of the driving. \section*{Acknowledgments} I would like to warmly thank Peter Morse, and also Eric Corwin, Lisa Manning, Sudeshna Roy and Ethan Stanifer, for the discussions on our common work~\cite{morse_roy_agoritsas_2020_Arxiv-2009.07706} which motivated this whole computation, as well as Francesco Zamponi and Ada Altieri for their helpful insight on the infinite-dimensional limit. This work was supported by the Swiss National Science Foundation under the SNSF Ambizione Grant PZ00P2{\_}173962. \bibliographystyle{plain_url}
\section{Introduction} The theory of hyperrings and hyperfields has its origins in the paper of Krasner \cite{Krasner2}. More recently, several authors studied their theory from various points of view. For instance, J.\ Jun, in \cite{Jun}, studies algebraic geometry over hyperrings and gives the definition of hyperideals in a hyperring. M.\ Marshall in \cite{Marshall} studies the definitions of orderings and of positive cones in hyperfields. It was already mentioned in that paper that the two concepts are not equivalent as in the classical theory of those objects. In fact, we show in Example \ref{ex pos cone} that the natural generalization of the construction which, in the classical case, leads to this equivalence does not work in the case of hyperfields.\par In \cite{DeS} B.\ Davvaz and A.\ Salasi deal with hypervaluations of a hyperring onto an ordered abelian group, and also J.\ Lee in \cite{JUN} works with valued hyperfields. These two papers present different approaches to defining a hypervaluation on a hyperfield, which both appear to be interesting and well chosen. \par In this paper we study another possibility, introduced by Kh.\ Mirdar Harijani and S.\ M.\ Anvariyeh in \emph{A hypervaluation of a hyperfield onto a totally ordered canonical hypergroup}, Studia Scientiarum Mathematicarum Hungarica 52 (1), 87-101 (2015). What they propose is a generalization of the definition of Davvaz and Salasi where the value set is allowed to be an ordered canonical hypergroup (see Definition \ref{OCH}). Here the domain is always assumed to be a hyperfield. We show in our main result that, even though this definition is proper, i.e., nontrivial examples can be found, it is not of much interest, since such a hypervaluation can always be decomposed into an order preserving homomorphism of hypergroups and a hypervaluation onto an ordered abelian group (in the sense of Davvaz and Salasi) which, moreover, induces the same valuation hyperring.\par In the paper of Mirdar Harijani and Anvariyeh, several constructions are proposed without details; it turns out that they are not always possible to carry out. In the present article we also provide counterexamples to justify this last assertion. \section{Ordered Canonical Hypergroups} \begin{df} Let $H$ be a nonempty set and $\mathcal{P}^* (H)$ the family of nonempty subsets of $H$. A \emph{hyperoperation} $*$ is a function which associates with every pair $(x,y) \in H \times H$ an element of $\mathcal{P}^* (H)$, denoted by $x*y$. \end{df} A \emph{hypergroupoid} is a nonempty set $H$ with a hyperoperation $*: H \times H \to \mathcal{P}^* (H)$. For $x \in H$, $A,B\subseteq H$ we set \[ A*B=\bigcup_{a\in A,b\in B} a*b, \] $A * x = A * \lbrace x \rbrace$ and $x *A = \lbrace x \rbrace * A$. In 1934 the concept of a hypergroup was defined by F.\ Marty in \cite{Marty} to be a nonempty set $H$ with an associative hyperoperation (see Definition \ref{hypergp} below) such that $x * H = H * x = H$ for all $x \in H$.
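Since a hyperoperation takes values in $\mathcal{P}^* (H)$ rather than in $H$, it can conveniently be modelled on a computer as a map into sets. The following minimal Python sketch (our own illustration, anticipating the sign hypergroup of Example \ref{sign hgp} below) checks associativity and the reproducibility condition $x * H = H * x = H$ of Marty for a finite hypergroupoid encoded as a dictionary:
\begin{verbatim}
from itertools import product

def is_marty_hypergroup(H, op):
    """op: dict mapping ordered pairs (x, y) to a nonempty subset
    of H.  Checks associativity (as sets) and reproducibility."""
    def ext(A, B):  # extend * to subsets: A*B = union of all a*b
        return set().union(*(op[(a, b)] for a, b in product(A, B)))
    assoc = all(ext(op[(x, y)], {z}) == ext({x}, op[(y, z)])
                for x, y, z in product(H, repeat=3))
    repro = all(ext({x}, H) == set(H) == ext(H, {x}) for x in H)
    return assoc and repro

H = {-1, 0, 1}   # the sign structure, defined formally later on
op = {(-1, -1): {-1}, (-1, 0): {-1}, (0, -1): {-1}, (0, 0): {0},
      (1, 1): {1}, (1, 0): {1}, (0, 1): {1},
      (1, -1): {-1, 0, 1}, (-1, 1): {-1, 0, 1}}
print(is_marty_hypergroup(H, op))   # prints True
\end{verbatim}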
A special class of hypergroups, which will be of interest for us, is the following: \begin{df} \label{hypergp} A \emph{canonical hypergroup} is a tuple $(H,*,e)$, where $(H, *)$ is a hypergroupoid and $e$ is an element of $H$ such that the following axioms hold: \begin{itemize} \item[(H1)] the hyperoperation $*$ is associative, i.e., $(x*y)*z=x*(y*z)$ for all $x,y,z \in H$, \item[(H2)] $x*y=y*x$ for all $x,y\in H$, \item[(H3)] for every $x\in H$ there exists a unique $x'\in H$ such that $e\in x*x'$ (the element $x'$ will be denoted by $x^{-1}$), \item[(H4)] $z\in x*y$ implies $y\in x^{-1}*z$ for all $x,y,z\in H$. \end{itemize} \end{df} \begin{re} A canonical hypergroup is a hypergroup in the sense of Marty. Fix $a \in H$ and take $x \in H*a$. Then there exists $h \in H$ such that $x \in h*a \subseteq H$, showing that $H*a \subseteq H$. For the other inclusion, take $x \in H$, then \[x \in x*e \subseteq x*(a^{-1}*a) = (x*a^{-1}) * a, \] so there exists $h \in x*a^{-1} \subseteq H$ such that $x \in h*a \subseteq H*a$. \end{re} \begin{re} In \cite{Mittas} the definition of a canonical hypergroup also requires explicitly that $x*e = \lbrace x \rbrace$ for all $x \in H$. However, we note that this axiom follows from (H3) and (H4). Indeed, suppose that $y \in x*e$ for some $x,y \in H$. Then $e \in x^{-1} * y$ by (H4). Now $y = x$ follows from the uniqueness required in (H3). \end{re} \begin{re} \label{grhy} Note that an abelian group $G$ is not a priori a hypergroup, because the operation on $G$ is not a hyperoperation, as it takes values in $G$ and not in $\mathcal P^* (G)$. But it can be turned into a hypergroup by setting $a * b := \lbrace ab \rbrace$. In other words, we can turn an abelian group into a hypergroup by identifying each element $a$ of $G$ with the singleton $\lbrace a \rbrace$. \end{re} \vspace{0,2cm} \begin{example} \label{sign hgp} Consider the set $H: =\lbrace -1, 0 , 1 \rbrace$ with a hyperoperation $*$ defined as follows: \begin{align*} (-1) * (-1) &=(-1)*0= 0*(-1)=\{-1\}\\ 0 * 0 &= \{0\}\\ 1 * 1 &=1*0=0*1= \{1\}\\ 1 * (-1) &= (-1)*1 =\lbrace -1, 0 , 1\rbrace. \end{align*} Then $(H, *, 0)$ is a canonical hypergroup, called the \emph{sign hypergroup}. As the reader may check, we have that $1^{-1} = -1$, $(-1)^{-1} = 1$, $0^{-1} = 0$. \end{example} The next example can be found in \cite{Krasner}. \begin{example}\label{quotient} Let $R$ be a ring and $G$ a normal subgroup of its multiplicative semigroup. Consider the following equivalence relation $\sim$ on $R$: $a \sim b$ if and only if there exist $g,h \in G$ such that $ag = bh$. The equivalence class of $a \in R$ is \[ aG:= \lbrace ag \mid g \in G \rbrace.\] It is possible to define a hyperoperation on $R/G$ in the following way: \[ aG + bG := \lbrace (ag+bh)G \mid g,h \in G \rbrace. \] Then $(R/G,+,\lbrace 0_R \rbrace)$ is a canonical hypergroup. Indeed, the associative law follows from the same law in $R$, as well as commutativity. The unique inverse of $aG$ is $(-a)G$. Indeed, \[ aG + (-a)G = \lbrace (ag-ah)G \mid g,h \in G \rbrace \ni (a-a)G = 0_R G, \] moreover, if \[ 0_R G \in aG + bG = \lbrace (ag+bh)G \mid g,h \in G \rbrace, \] then there exist $g, h \in G$ such that $0_R = ag+bh$. Multiplying by $g^{-1}$, we obtain that $-a = bhg^{-1}$ and so $(-a)G = bG$. Assume now that $cG \in aG+bG$. We wish to show that $bG \in (-a)G + cG$. We have \[ cG \in \lbrace (ag+bh)G \mid g,h \in G \rbrace, \] so there exist $g,h \in G$ such that $c = ag+bh$.
Multiplying by $h^{-1}$, we obtain $b = -agh^{-1} + ch^{-1}$, so $bG \in \lbrace ((-a)g' + ch')G \mid g', h' \in G \rbrace = (-a)G+cG$. \end{example} \begin{re} \label{re} We note that the sign hypergroup can be obtained as a quotient in the way described in the previous example. Take $R=\mathbb{R}$ and $G=\dot \mathbb{R}^2$, where $\dot\mathbb{R}^2$ denotes the set of non-zero squares in $\mathbb{R}$. The result follows from the fact that every non-zero real number is either a square or the opposite of a square. \end{re} In their paper Mirdar Harijani and Anvariyeh consider the set $\mathbb Z / \mathbb N = \lbrace a \mathbb N \mid a \in \mathbb Z \rbrace$ with the hyperoperation defined as follows: \[a \mathbb N * b\mathbb N = \lbrace c \mathbb N \mid c \in a \mathbb N + b\mathbb N \rbrace.\] This construction does not give a canonical hypergroup in the sense of Definition \ref{hypergp}, as is shown in the next example. The reason is that $\mathbb N$ is not a multiplicative subgroup of the multiplicative semigroup of $\mathbb Z$. \begin{example} Consider the tuple $(\mathbb{Z} / \mathbb{N}, *, 0 \mathbb{N})$. We observe that $0 \mathbb{N} \in 1 \mathbb N * (-k) \mathbb N$ for every $k \in \mathbb{N}$, since ${0 = k \cdot 1 + 1 \cdot (-k) \in 1 \mathbb N + (-k)\mathbb N}$. Hence the inverse of $1 \mathbb N$ is not unique and axiom (H3) is not fulfilled. \end{example} \begin{df}\label{OCH} An \emph{ordered canonical hypergroup} is a tuple $(H, *, e, \leq )$, where $(H, *, e)$ is a canonical hypergroup and $\leq$ is a partial order such that \begin{equation} \label{ord} a \leq b \Longrightarrow a*c \nearrow b*c \end{equation} for all $a,b,c \in H$; here, if $A, B \subseteq H$, then $A \nearrow B$ means that for all $b \in B$ there exists $a \in A$ such that $a \leq b$. If for each $a, b \in H$ either $a \leq b$ or $b \leq a$, then we call $\leq$ an \emph{ordering} or a \emph{linear order relation} on $H$. \end{df} \begin{re} One should not use \lq\lq$\leq$\rq\rq\ on the subsets, as the relation would not be antisymmetric. For example, if $H$ contains a smallest element $b$, then any two subsets $A,B$ which both contain $b$ will satisfy $A \nearrow B$ and $B \nearrow A$ without necessarily being equal. \end{re} \begin{re} It should be noted that some authors in defining $A \nearrow B$ require that for all $a \in A$ there exists $b \in B$ and for all $b \in B$ there exists $a \in A$ such that $a \leq b$. Others instead require only that for all $a \in A$ there exists $b \in B$ such that $a \leq b$. The relations between all these definitions still have to be investigated. However, in what follows we will just use the concept introduced in Definition \ref{OCH}. \end{re} \begin{lem} \label{FVK} Let $(H, *, e, \leq)$ be an ordered canonical hypergroup. Take $a,b,x,y \in H$ and $B \subseteq H$. \begin{enumerate} \item[1)] If $\lbrace a \rbrace \nearrow B$, then $a \leq b$ for all $b \in B$. \item[2)] If $x > e$, then $x^{-1} < e$. \item[3)] If $x \geq e$ and $y \geq e$, then $b \geq e$ for all $b \in x*y$. \item[4)] If $x > e$ and $y \geq e$, then $b > e$ for all $b \in x*y$. \end{enumerate} \end{lem} \begin{proof} 1): This follows from the definition of ``$\nearrow$'' and the fact that $\{a\}$ contains only $a$. \\ 2): By condition (\ref{ord}), $\{x^{-1}\}=e*x^{-1}\nearrow x*x^{-1}$, so $x^{-1}\leq b$ for every $b\in x*x^{-1}$ by part 1). As $e\in x*x^{-1}$, we obtain that $x^{-1}\leq e$. Now $x^{-1}=e$ is impossible because otherwise, $e\in x*x^{-1}=\{x\}$, which implies that $x=e$ in contradiction to our assumption that $x>e$.
\\ 3): If $x\geq e$ and $y\geq e$, then $\{y\}=e*y\nearrow x*y$, so $e\leq y\leq b$ for every $b\in x*y$ by part 1). \\ 4): If $x> e$ and $y\geq e$, then $b\geq e$ for all $b\in x*y$ by part 3). However, $e\notin x*y$ since otherwise, $y=x^{-1}$, which by part 2) yields that $y<e$, in contradiction to our assumption that $y\geq e$. Hence $b\ne e$ and consequently, $b>e$. \end{proof} In \cite{Marshall} the definition of a positive cone is given for hyperfields. Let us now define this concept for canonical hypergroups. \begin{df} A subset $P$ of a canonical hypergroup $(H, *, e)$ is called a \emph{positive cone} if the following axioms hold: \begin{itemize} \item[(P1)] $P \cap -P = \lbrace e \rbrace$, \item[(P2)] $P * P \subseteq P$, \item[(P3)] $P\cup -P=H$. \end{itemize} \end{df} In the theory of ordered abelian groups, the existence of a positive cone is equivalent to the existence of an ordering and there is a one-to-one correspondence between them. It was already mentioned in \cite{Marshall} that this, in general, is no longer true in the case of hyperfields, and the argument holds for hypergroups as well. Mirdar Harijani and Anvariyeh claim that one can always construct an ordering from a positive cone by setting: \begin{equation}\label{2} x \leq y \iff (y * x^{-1}) \cap P \neq \emptyset. \end{equation} In the following example we show that this construction is not always possible. \begin{example} \label{ex pos cone} Take $H:= \mathbb Q / \dot{\mathbb Q}^{2}$, where $\dot{\mathbb Q}^{2}$ denotes the set of nonzero squares in $\mathbb{Q}$. We see that $H$ is a canonical hypergroup with the hyperoperation defined as: \[ a \dot {\mathbb Q}^{2} * b \dot{\mathbb Q}^{2} = \lbrace c\dot{\mathbb Q}^{2} \mid c \in a\dot{\mathbb Q}^{2} + b\dot{\mathbb Q}^{2} \rbrace \] (see also Example \ref{quotient}). Observe that the set $P = {\mathbb Q}^{+} / \dot{\mathbb Q}^{2} \cup \lbrace 0 \rbrace$ fulfils the conditions of the definition of a positive cone. However, the relation defined in (\ref{2}) is not an ordering on $H$. Indeed, observe that $ 5 \dot{\mathbb Q}^{2} \in (2 \dot{\mathbb Q}^{2} * (3 \dot{\mathbb Q}^{2})^{-1}) \cap P$, so, in particular, $(2 \dot{\mathbb Q}^{2} * (3 \dot{\mathbb Q}^{2})^{-1}) \cap P \neq \emptyset$, which means that $3 \dot{\mathbb Q}^{2} \leq 2 \dot{\mathbb Q}^{2}$. On the other hand, $1 \dot{\mathbb Q}^{2} \in (3 \dot{\mathbb Q}^{2} * (2 \dot{\mathbb Q}^{2})^{-1}) \cap P$, so $(3 \dot{\mathbb Q}^{2} * (2 \dot{\mathbb Q}^{2})^{-1}) \cap P \neq \emptyset$, which means that $2 \dot{\mathbb Q}^{2} \leq 3 \dot{\mathbb Q}^{2}$. Clearly, $2 \dot{\mathbb Q}^{2} \neq 3 \dot{\mathbb Q}^{2}$, so the relation $\leq$ is not antisymmetric. \end{example} However, there exist hypergroups in which positive cones and orderings behave as in the classical case. The simplest example is the sign hypergroup: \begin{example} Consider the sign hypergroup $H = \lbrace -1, 0 ,1 \rbrace$ with positive cone $P = \lbrace 0, 1 \rbrace$. We define an order relation $\leq$ on $H$ as follows: $-1 \leq 0 \leq 1$. We leave it to the reader to show that $\leq$ is a linear order on $H$ as in Definition \ref{OCH}. This ordering corresponds to $P$ in the way described in (\ref{2}). Indeed, $(1 * 0^{-1}) \cap P = \lbrace 1 \rbrace$, $(0 * (-1)^{-1}) \cap P = \lbrace 1 \rbrace$, $(1 * (-1)^{-1}) \cap P = \lbrace 0, 1 \rbrace$, so in every case we obtain a nonempty intersection.
\end{example} \section{Hyperrings and hyperfields} \begin{df} A \emph{hyperring} is a tuple $(R,+,\cdot,0,1)$ which satisfies the following axioms: \begin{itemize} \item[(R1)] $(R,+,0)$ is a canonical hypergroup, \item[(R2)] $(R,\cdot,1)$ is a commutative monoid such that $x\cdot0=0$ for all $x\in R$, \item[(R3)] the operation $\cdot$ is distributive with respect to the hyperoperation $+$. That is, for all $x,y,z\in R$, \[ x(y+z)=xy+xz \] as sets. Here for $x\in H$ and $A\subseteq H$ we have set \[ xA:=\{xa\mid a\in A\}. \] \end{itemize} If $(R\setminus\{0\}, \cdot,1)$ is an abelian group, then $(R,+,\cdot,0,1)$ is called a \emph{hyperfield}. \end{df} The following example is the original example of a quotient hyperring (see \cite{Krasner}). \begin{example} \label{quot} Let $R$ be a commutative ring with $1$, $G$ a normal subgroup of its multiplicative semigroup and recall the notations introduced in Example \ref{quotient}. One may define multiplication in $R/G$ as $aG \cdot bG:= (ab)G$. Then $(R/G, +, \cdot, \lbrace 0 \rbrace, 1_RG)$ is a hyperring and if $R$ is a field, then it is a hyperfield. We have to show that (R3) holds. Take $a,b,c \in R$ and suppose that $xG \in aG \cdot (bG + cG)$. Then there exist $g,h \in G$ such that $x = a(bg+ch) = abg + ach$, where we have used the distributivity law in $R$. We obtain \[ xG \in \lbrace (abg + ach)G \mid g,h \in G \rbrace = abG + acG = aG bG + aG cG. \] This shows that $aG \cdot (bG+cG) \subseteq aG bG + aG cG$. To show the converse inclusion we note that \[ xG \in aG bG + aG cG = (ab)G + (ac)G. \] Hence there exist $g,h \in G$ such that $x = (ab)g + (ac)h = a(bg+ch)$, where we used the distributivity law in $R$. We obtain \[ xG \in \lbrace a(bg+ch)G \mid g,h \in G \rbrace = \lbrace aG(bg + ch)G \mid g,h \in G \rbrace = aG \cdot (bG + cG). \] If $R$ is a field, then for every $a \in R\setminus\{0\}$ we have that $a^{-1}G = (aG)^{-1}$. Indeed, by definition $aG \cdot a^{-1}G = (a \cdot a^{-1})G = 1_RG$. \end{example} \begin{example} As we have already noted in Remark \ref{re}, the sign hypergroup $H$ can be seen as a quotient $\mathbb{R} / \dot\mathbb{R}^2$. Since $\mathbb{R}$ is a field, we obtain that $(H, +, \cdot, 0, 1)$ is a hyperfield, where we now denote $*$ by $+$ and $\cdot$ behaves as follows: \begin{align*} -1 \cdot 1 &= 1 \cdot -1 = -1,\\ 0 \cdot 0 &= 0 \cdot 1 = 1 \cdot 0 = 0 \cdot -1 = -1 \cdot 0 = 0, \\ 1 \cdot 1 &= -1 \cdot -1 = 1. \end{align*} \end{example} \vspace{0,2cm} In their paper Mirdar Harijani and Anvariyeh, in an attempt to generalize the construction of formal power series, argue that the set $X$ of order preserving mappings $f:H\to K$ such that the support of $f$ is finite, where $H$ is an ordered canonical hypergroup and $K$ an ordered hyperfield, is a hyperdomain, i.e., a hyperring without zero divisors, with the operations defined as follows: \[ (f+g)(x):=f(x)+_K g(x), \] and \begin{equation}\label{1} (f g)(x):=\sum_{x\in x_1* x_2}f(x_1)g(x_2). \end{equation} First of all we note that $X$ is not even a hyperring since if $f$ is non-constant and order preserving, then $-f$ is not order preserving. In addition, if $K$ is a hyperfield which is not a field, then the multiplication $(fg)(x)$ defined in (\ref{1}) is a subset of $K$ and not an element of $K$ as the definition of a hyperring would require. Hence we do not obtain a hyperring. Finally, even if we do not restrict to order preserving mappings and assume $K$ to be a field, we show, in the next example, that the multiplication defined in (\ref{1}) is not associative.
\begin{example} Let $H=\{-1,0,1\}$ denote the sign hypergroup and $K$ an ordered field. We define the maps $f, g, h:H\to K$ as follows: $f(-1)=f(0) = f(1)=1_K$, $g(-1) = g(0)=g(1)=-1_K$, and $h(1)=1_K$, $h(0)=-1_K$, $h(-1)=0_K$. By direct computations we obtain \begin{align*} ((fg)h)(1) &= 5 \cdot (-1_K) + 5 \cdot 1_K + 3 \cdot (-1_K) + 5 \cdot (-1_K) = 8 \cdot (-1_K) \end{align*} and \begin{align*} (f(gh))(1) &= 2 \cdot (-1_K) + 2 \cdot (-1_K) + 2 \cdot (-1_K) = 6 \cdot (-1_K) \end{align*} where we used the fact that $1\in 1* 1,1* 0,0* 1,1*-1,-1* 1$, that $-1\in -1* -1,-1* 0,0* -1,1* -1,-1 * 1$ and that $0\in0* 0,1* -1,-1* 1$. Here $*$ denotes the operation of $H$. We conclude that $f(gh)\neq (fg)h$, hence the associativity law does not hold. \end{example} \section{Hypervaluations} As mentioned before, there are several approaches to the definition of a hypervaluation on a hyperfield. We now wish to investigate the following. \begin{df}\label{hyperval} Let $(F,+,\cdot,0,1)$ be a hyperfield and $(H,*,e,\leq)$ be an ordered canonical hypergroup. A surjective map $w: F \to H \cup \lbrace \infty \rbrace$ is called a \emph{hypervaluation} on the hyperfield $F$ if the following properties are satisfied: \begin{itemize} \item [(V1)] $w(x) = \infty \iff x = 0$, \item [(V2)] $w(-x) = w(x)$, \item [(V3)] $w( x \cdot y) \in w(x) * w(y)$, \item [(V4)] $z \in x+y \implies w(z) \geq \min \lbrace w(x), w(y) \rbrace$. \end{itemize} \end{df} The following is a nontrivial example of a hypervaluation onto an ordered canonical hypergroup. \begin{example} \label{ex hypval} Let $K$ be a field and $\Gamma$ an ordered abelian group. Assume that a classical valuation \[ v:K\to\Gamma\cup\{\infty\} \] is given. We now define a hypervaluation $w$ from $K$ onto the sign hypergroup $\{-1,0,1\}$. \[ w(x)=\begin{cases}1&\text{if }\infty\neq v(x)>0_\Gamma\\ 0&\text{if }v(x)=0_\Gamma\\ -1&\text{if }v(x)<0_\Gamma\\ \infty&\text{otherwise} \end{cases} \] Let us show that $w$ is a hypervaluation on the field $K$, as in Definition \ref{hyperval}. Clearly, $w(x)=\infty$ if and only if $x=0_K$, and $w(x)=w(-x)$ because that is true for $v$. In order to show that $w(xy)\in w(x) * w(y)$, where $*$ denotes the operation in the sign hypergroup, we observe that if $v(x)$ and $v(y)$ have the same sign there is nothing to show, since $v(xy)=v(x)+v(y)$ will have the same sign as $v(x)$ and $v(y)$. If $v(x)=0$, then $w(x)=0$ and $v(xy)=v(y)$ and $w(xy) = w(y) = 0 * w(y)$; the same holds if $v(y) = 0$. If $x=0$ or $y=0$, then the situation is clear. If, say, $v(x)<0_\Gamma$ and $v(y)>0_\Gamma$, then $w(xy)\in \{-1,0,1\} = -1 * 1 = w(x) * w(y)$, and similarly in the case where $v(x)>0_\Gamma$ and $v(y)<0_\Gamma$. This shows that the third axiom of a hypervaluation holds for $w$. The fourth and last axiom states: \[ z\in x+y \Longrightarrow w(z)\succeq\min\{w(x),w(y)\} \] where $\succeq$ denotes the ordering in the sign hypergroup. Since $K$ is a field, we have to check that $w(x+y)\succeq\min\{w(x),w(y)\}$. If $w(x) \neq w(y)$, then clearly $v(x) \neq v(y)$, so we have $v(x+y) = \min \lbrace v(x), v(y) \rbrace$. Hence $w(x+y) = \min \lbrace w(x), w(y) \rbrace$. If $x=y=0$ there is nothing to show. If $w(x)=w(y)=0$, then $v(x)=v(y)=0_\Gamma$, so $v(x+y) \geq 0_\Gamma$. Then $w(x+y) \in \lbrace 0,1, \infty \rbrace$ and $w(x+y)\succeq 0 = \min\{w(x),w(y)\}$. If $w(x)=w(y)=1$, then $v(x), v(y) > 0_\Gamma$, so $v(x+y) > 0_\Gamma$. We obtain $w(x+y) \in \lbrace 1, \infty \rbrace$ and $w(x+y)\succeq 1 = \min\{w(x),w(y)\}$.
Finally, if $w(x)=w(y)=-1$, then $w(x+y) \in \lbrace -1, 0, 1, \infty \rbrace$, but $\min \lbrace w(x), w(y) \rbrace = -1$, so the fourth axiom also holds for $w$. \end{example} \begin{lem} \label{valpro} Let $w: F \to H \cup \lbrace \infty \rbrace$ be a hypervaluation. Then $w(1_F) = e$ and $w(x^{-1}) = (w(x))^{-1}$. \end{lem} \begin{proof} Let $x \in F$ be such that $w(x) = e$; such an $x$ exists, since $w$ is surjective. Then $e = w(x \cdot 1_F) \in w(x) * w(1_F) = \lbrace w(1_F) \rbrace$, hence $e = w(1_F)$. To prove the second assertion, observe that, since $xx^{-1} = 1_F$, we have that $w(xx^{-1}) = w(1_F) = e$. By axiom (V3) we obtain that $e=w(xx^{-1}) \in w(x) * w(x^{-1})$; then axiom (H3) implies that $w(x^{-1})=(w(x))^{-1}$. \end{proof} \begin{df} Let $(R, +, \cdot, 0,1)$ be a hyperring. An element $x \in R$ is called a \emph{unit of $R$} if there exists $y \in R$ such that $x \cdot y = 1$. \end{df} \begin{df} Let $F$ be a hyperfield and $R \subseteq F$ a hyperring with respect to the hyperaddition and multiplication of $F$. If for every $x \in F$ either $x \in R$ or $x^{-1} \in R$, then $R$ is called a \emph{valuation hyperring}. \end{df} \begin{df} Let $R$ be a hyperring. \begin{enumerate} \item A nonempty subset $I \subseteq R$ is a \emph{hyperideal} if for all $a,b \in I$ and for all $r \in R$ we have $a+b \subseteq I$, $-a \in I$ and $ar \in I$. \item A hyperideal $I \subsetneq R$ is \emph{maximal} if $I$ satisfies the following property: if $J \subseteq R$ is a hyperideal of $R$ such that $I\subsetneq J$, then $J = R$. \end{enumerate} \end{df} In the paper of Mirdar Harijani and Anvariyeh, one can find the first three statements of the following proposition. For the sake of completeness, we rewrite the proof and complete it with details where needed. For our purposes, we also add a fourth statement. \begin{pr} \label{prop} Let $(F, +, \cdot, 0, 1)$ be a hyperfield, $(H, *,e,\leq)$ an ordered canonical hypergroup and $w: F \rightarrow H\cup\{\infty\}$ a hypervaluation. \begin{enumerate} \item[1)] The set $\mathcal{O}_w = \lbrace x \in F \mid w(x) \geq e \rbrace$ is a valuation hyperring. \item[2)] The set $U_w = \lbrace x \in F \mid w(x) = e \rbrace$ is a group under the multiplication of the hyperfield $F$ and consists of all units of $\mathcal O_w$. \item[3)] The set $\mathfrak{m}_w = \lbrace x \in F \mid w(x) > e \rbrace$ is the unique maximal hyperideal of $\mathcal O_w$. \item[4)] The quotient $G:= (F\setminus\{0\}) / U_w$ is an ordered abelian group with respect to the operation $xU_w \cdot yU_w = xyU_w$ and the ordering $xU_w \leq yU_w \Leftrightarrow yx^{-1} \in \mathcal O_w$. \end{enumerate} \end{pr} \begin{proof} 1) If $x, y \in \mathcal O_w$, then $w(x), w(y) \geq e$, so by part 3) of Lemma \ref{FVK} we obtain that for all $b \in w(x) * w(y)$ we have $b \geq e$. Since $w(xy) \in w(x) * w(y)$, in particular $w(xy) \geq e$, hence $xy \in \mathcal O_w$. The inclusion $x+y \subseteq \mathcal O_w$ follows from (V4) and from the fact that $\min \lbrace w(x), w(y) \rbrace \geq e$. Indeed, by (V4), \[ z\in x+y \Longrightarrow w(z) \geq \min\{w(x),w(y)\} \geq e. \] To see that $\mathcal O_w$ is a valuation hyperring, take $x \in F \setminus \mathcal O_w$. Then $w(x) <e$, which implies that $w(x^{-1}) = (w(x))^{-1} >e$ by Lemma \ref{valpro} and part 2) of Lemma \ref{FVK}. Thus, $x^{-1} \in \mathcal O_w$.\\ 2) If $x \in U_w$, then $x^{-1} \in U_w$ too. Indeed, $w(x^{-1}) = (w(x))^{-1} = e$. Hence the elements of $U_w$ are units of $\mathcal O_w$. To show the other inclusion, assume that $x \in \mathcal O_w \setminus U_w$.
Then $w(x)>e$, so $w(x^{-1})<e$ by part 2) of Lemma \ref{FVK}, which means that $x^{-1} \notin \mathcal O_w$.\\ 3) Let $x,y \in \mathfrak{m}_w$. Then by (V4) we obtain that \[ z\in x+y \Longrightarrow w(z)\geq\min\{w(x),w(y)\} > e. \] Thus, $x+y \subseteq \mathfrak{m}_w$. From $w(-x)=w(x)$ it follows that $-x\in\mathfrak{m}_w$. If $x \in \mathfrak{m}_w$ and $y \in \mathcal O_w$, then $w(x) > e$ and $w(y) \geq e$, so by part 4) of Lemma \ref{FVK} we obtain that for every $b \in w(x)*w(y)$ we have $b>e$. Since by (V3) $w(xy) \in w(x) * w(y)$, we conclude that $xy \in \mathfrak {m}_w$. We have proved that $\mathfrak {m}_w$ is a hyperideal. By part 2) and the fact that a hyperideal containing a unit cannot be proper, it follows that $\mathfrak m_w$ is the unique maximal hyperideal of $\mathcal O_w$.\\ 4) By part 2) $U_w$ is a subgroup of the abelian group $(F \setminus \lbrace 0 \rbrace, \cdot, 1)$. So $G$ is the quotient group with respect to this (normal) subgroup. The proof that $\leq$ is a linear order on $G$ is exactly the same as in the classical case.\qedhere \end{proof} \begin{df} Let $(H_1,*_1,e_1)$ and $(H_2,*_2,e_2)$ be canonical hypergroups. \begin{enumerate} \item A \emph{homomorphism of hypergroups} is a function $f:H_1\to H_2$ such that $f(e_1)=e_2$ and $f(a*_1b)\subseteq f(a)*_2f(b)$. \item A \emph{strong homomorphism of hypergroups} is a function $f:H_1\to H_2$ such that $f(e_1)=e_2$ and $f(a*_1b)= f(a)*_2f(b)$ as sets. \item An \emph{isomorphism of hypergroups} is a strong homomorphism of hypergroups which is bijective. \end{enumerate} \end{df} In their paper, Mirdar Harijani and Anvariyeh claim that two hypervaluations $v: F \to H_v\cup\{\infty\}$ and $w: F \to H_w\cup\{\infty\}$ induce the same valuation hyperring if and only if there exists an order preserving isomorphism $f$ between the value hypergroups $H_v, H_w$ such that $w = f \circ v$. Although this statement is true in classical valuation theory, it is no longer true for hypervaluations. \begin{example} Consider the hypervaluations $v, w$ from Example \ref{ex hypval}. We observe that the valuation ring $\mathcal O_v$ of $v$ coincides with the valuation (hyper)ring of $w$. Indeed, if $v(x)\geq 0_\Gamma$, then $w(x)\in\{0,1\}$, so $w(x)\succeq 0$, which means that $\mathcal O_v\subseteq\mathcal O_w$. On the other hand, if $w(x)\succeq 0$, then, by definition, $v(x)\geq 0_\Gamma$, so $x\in \mathcal O_w$ implies $x\in\mathcal O_v$. Note that there is no order preserving isomorphism between the ordered abelian group $\Gamma$ and the sign hypergroup. But in the present case, as shown above, they define the same valuation (hyper)ring in $K$. \end{example} The next theorem states that any hypervaluation from a hyperfield onto an ordered canonical hypergroup is the composition of a hypervaluation onto an ordered abelian group (which induces the same valuation hyperring) and an order preserving homomorphism of hypergroups. \begin{tw} \label{main} Let $F$ be a hyperfield, $(H, *,e, \leq)$ an ordered canonical hypergroup and $w:F \rightarrow H \cup \lbrace \infty \rbrace$ a hypervaluation. Then there exists a hypervaluation $v:F \rightarrow G \cup \lbrace \infty \rbrace$, where $G$ is an ordered abelian group, and an order preserving homomorphism of hypergroups $h:G \rightarrow H$ such that $w = h \circ v$ and $\mathcal O_v = \mathcal O_w$. \end{tw} \begin{proof} By part 4) of Proposition \ref{prop} we can consider the ordered abelian group $G= (F\setminus\{0\}) / U_w$.
Let $h : G \rightarrow H $ be the map \[ h(xU_{w}) = w(x). \] We first show that $h$ is well defined. Suppose that $xU_w = yU_w$. This is equivalent to $xy^{-1} \in U_w$, which holds if and only if $w(xy^{-1}) = e$. From (V3) and Lemma \ref{valpro} we obtain that $e \in w(x) * w(y^{-1}) = w(x) * (w(y))^{-1}$ and this means that $w(x) = w(y)$ by the uniqueness required in axiom (H3). This proves that $h$ is well defined. To show that $h$ is a homomorphism of hypergroups, note that $h(1U_w) = w(1) = e$ by Lemma \ref{valpro}. Take $xU_w, yU_w \in G$. According to Remark \ref{grhy} we obtain that \[ h(xU_w \cdot yU_w) = \lbrace h(xyU_w) \rbrace = \lbrace w(xy) \rbrace \subseteq w(x) * w(y) = h(xU_w) * h(yU_w), \] which proves that $h$ is a homomorphism of hypergroups. Moreover, assume that $xU_w \leq yU_w$, which by definition is equivalent to $e \leq w(yx^{-1})$. From the latter it follows by condition (\ref{ord}) of Definition \ref{OCH} that $\lbrace w(x) \rbrace = e * w(x) \nearrow w(yx^{-1}) * w(x)$. Hence by part 1) of Lemma \ref{FVK}, $w(x) \leq b$ for every $b \in w(yx^{-1}) * w(x)$. As $w(y) = w(yx^{-1}x) \in w(yx^{-1}) * w(x)$, we obtain that $w(x) \leq w(y)$. This proves that $h$ is order preserving.\par We now define $v: F \rightarrow G \cup \lbrace \infty \rbrace$ as follows: \[ v(x) = \left\{ \begin{array}{ll} x U_w, & \textnormal{ if $x \neq 0$}\\ \infty, & \textnormal{ if $ x=0 $} \end{array} \right. \] The map $v$ is a hypervaluation onto the ordered abelian group $G$ (again, modulo the provision of Remark \ref{grhy}). The only axiom which needs justification is the fourth: if $z \in x+y$ for some $x,y \in F$, then we wish to show that $v(z) \geq \min \lbrace v(x), v(y) \rbrace$. Assume, without loss of generality, that $v(x) \leq v(y)$, so that $yx^{-1} \in \mathcal O_w$. From $z \in x+y$ it follows, by reversibility, that $-y \in x-z$, whence \[ -yx^{-1} \in (x-z)x^{-1} = 1 - zx^{-1}. \] This means that $zx^{-1} \in 1 + yx^{-1} \subseteq \mathcal O_w,$ which proves that $v(z) \geq v(x)$. Finally, one can see that $w = h \circ v$. Indeed, if $x \neq 0$, then $h(v(x)) = h(xU_w) = w(x)$ by definition; if $x= 0$, the situation is clear once we set $h(\infty):=\infty$. It remains to show that $\mathcal O_v = \mathcal O_w$. We observe that $x \in \mathcal O_w$ if and only if $1U_w \leq xU_w$, which happens if and only if $x \in \mathcal O_v$. This completes the proof. \end{proof}
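As a closing remark, the hypervaluation of Example \ref{ex hypval} lends itself to a quick numerical spot-check. The Python sketch below is an editorial aid; the choice of the $5$-adic valuation on $\mathbb{Q}$ as the underlying $v$ is ours. It samples random rationals and tests axioms (V3) and (V4); pairs with $x+y=0$ are skipped, since then $w(x+y)=\infty$ and (V4) holds trivially:
\begin{verbatim}
from fractions import Fraction
from itertools import product
import random

def v5(x: Fraction) -> int:
    """The 5-adic valuation on Q, a concrete choice for the valuation v."""
    k, n, d = 0, x.numerator, x.denominator
    while n % 5 == 0: n //= 5; k += 1
    while d % 5 == 0: d //= 5; k -= 1
    return k

def w(x: Fraction) -> int:
    """The induced hypervaluation onto the sign hypergroup {-1, 0, 1}."""
    s = v5(x)
    return 1 if s > 0 else (0 if s == 0 else -1)

def star(a: int, b: int) -> set:
    if a == 0: return {b}
    if b == 0: return {a}
    if a == b: return {a}
    return {-1, 0, 1}

rank = {-1: 0, 0: 1, 1: 2}   # the ordering -1 < 0 < 1

random.seed(0)
xs = [Fraction(random.randint(1, 500), random.randint(1, 500))
      * random.choice([1, -1]) for _ in range(100)]
for x, y in product(xs, repeat=2):
    assert w(x * y) in star(w(x), w(y))                        # (V3)
    if x + y != 0:
        assert rank[w(x + y)] >= min(rank[w(x)], rank[w(y)])   # (V4)
print("(V3) and (V4) verified on all sampled pairs")
\end{verbatim}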
\section{Introduction} The solar wind is steadily removing angular momentum (AM) from the Sun \citep{weber1967angular, mestel1968magnetic}. This can be measured in-situ by evaluating the mechanical AM flux in the solar wind particles, and the stresses in the interplanetary magnetic field \citep{lazarus1971observation, pizzo1983determination, marsch1984distribution, li1999magnetic, finley2019direct}. The value of the current solar AM-loss rate is a useful test of models which attempt to describe the rotation-evolution of low-mass stars \citep[i.e. $M_*\leq 1.3M_{\odot}$; e.g.][]{gallet2013improved, gallet2015improved, brown2014metastable, johnstone2015stellar, matt2015mass, amard2016rotating, amard2019first, blackman2016minimalist, sadeghi2017semi, garraffo2018revolution, see2018open}. Such stars have magnetic activity which is directly linked to their rotation rates \citep{wright2011stellar, wright2016solar}. One consequence is that the habitability of exoplanets will likely depend somewhat on the rotation rate of their host star, and how it has varied in the past \citep[e.g.][]{johnstone2015evolution, gallet2017impacts}. \begin{table*} \caption{Observed Solar Wind Angular Momentum Fluxes} \label{AMvalues} \centering \setlength{\tabcolsep}{2pt} \begin{tabular}{c|cc|c|c|c} \hline\hline Spacecraft & Component &$\langle r^2 F_{AM}\rangle$ & Protons/ & Radial & Source \\ Name && [$\times 10^{30}$erg/ster] & Magnetic & Distance [au] & \\ \hline \textit{Parker Solar Probe*} & {total} & {0.31(0.50)} & 0.9(3.2) & 0.16-0.7 & This Work \\ & \color{red} protons & \color{red} 0.15(0.38)& & & \\ & alphas & -& & & \\ & \color{blue} magnetic & \color{blue} 0.16(0.12)& & & \\ \hline \textit{Wind} & {total} & {0.39} & 2.4& 1 & \begin{NoHyper}\cite{finley2019direct}\end{NoHyper} \\ & \color{red} protons & \color{red} 0.29 & & & \\ & alphas & -0.02 & & & \\ & \color{blue} magnetic & \color{blue} 0.12 & & & \\ \hline \textit{Helios} & {total} & {0.20} & 1.1& 0.3-1 & \begin{NoHyper}\cite{pizzo1983determination}\end{NoHyper} \\ & \color{red} protons & \color{red} 0.17 & & & \\ & alphas & -0.13 & & & \\ & \color{blue} magnetic & \color{blue} 0.15 & & & \\ \hline \textit{Mariner 5} & {total} & {$\bf \sim 1.2$}& $\sim 4.3$& 0.6 & \begin{NoHyper}\cite{lazarus1971observation}\end{NoHyper} \\ & \color{red} protons & \color{red} $\sim1$ & & & \\ & alphas & -& & & \\ & \color{blue} magnetic & \color{blue} 0.23 & & & \\ \hline \hline \end{tabular} \small \item *Values for PSP are the averaged values from Figure \ref{latitudinallyAveraged} in the format E01(E02). \end{table*} Stellar convection, rotation, and magnetic field generation are all intricately linked for low-mass stars \citep[see review of][]{brun2017magnetism}. In general, this causes the rotation rates of Sun-like stars on the main sequence to broadly follow an approximate relationship, \textit{rotation rate} $\propto$ \textit{age}$^{-1/p}$ \citep{skumanich1972time, soderblom1983rotational, barnes2003rotational, barnes2010angular, lorenzo2019constraining}, where $p$ is observed to be around 2. However, with the increasing number of rotation period observations \citep{agueros2011factory, agueros2017setting, mcquillan2013measuring, nunez2015linking, rebull2016rotation, covey2016rapidly, douglas2017poking, nascimento2020rotation}, it has become clear that rotation may not always follow a simple, single power-law in time \citep[e.g.][]{davenport2018rotating, metcalfe2019understanding, reinhold2019transition}.
For example, there may be a temporary ``stalling'' of the spin-down during the early main sequence \citep{curtis2019temporary} or a significant reduction of spin-down torques (or equivalently a larger value of $p$) at approximately the solar age \citep[][]{van2016weakened}. Previous measurements of the solar wind AM flux using the \textit{Helios} \citep{pizzo1983determination, marsch1984distribution}, and \textit{Wind} \citep{finley2019direct} spacecraft, have suffered from pointing errors and/or large uncertainties in the detection of the tangential solar wind speed at 1au (with an amplitude around 1-5 km/s), but these studies suggest an average equatorial {flow of AM per solid angle} $\langle r^2 F_{AM}\rangle$ of around $0.3\pm 0.1\times 10^{30}$erg/steradian. The role of the solar wind alpha particles in carrying AM is unclear. Measurements from the \textit{Helios} spacecraft indicate they carry a net negative AM flux, whereas measurements from \textit{Wind} suggest they have a negligible contribution to the total AM flux. Notable values from previous works can be found in Table \ref{AMvalues}. A common factor in most previous observations, including those that use older spacecraft like \textit{Mariner 5} \citep[][]{lazarus1971observation}, is the presence of localized wind streams that can carry a net negative AM flux. A likely mechanism to generate these negative streams is wind stream-interactions, i.e. when fast wind streams catch up to, and collide with, slow wind streams. It is expected that the fast wind is then significantly deflected, given its lower density, in the direction opposite to rotation. Therefore, the fast solar wind should carry the majority of the observed negative AM flux \citep[which is shown in][]{finley2019direct}. \begin{figure*} \begin{center} \includegraphics[trim=2.cm 4.cm 0.5cm 2.cm, clip, width=\textwidth]{f1.pdf} \end{center} \caption{Solar wind AM flux observed by PSP, in the protons (red vertical ticks), magnetic field stresses (blue horizontal ticks), and their sum (coloured circles), averaged over 2.5 hour intervals versus time. The left panel shows the first encounter E01 (5th Oct - 2nd Dec, 2018), and the right panel shows the second encounter E02 (3rd Mar - 30th Apr, 2019). The perihelia of each orbit are indicated by green dotted lines. The HCS crossings from Figure \ref{trajectory} are indicated with grey dashed lines, with the background color/hatching corresponding to the global magnetic field polarity. A sustained period of negative AM flux, which coincides with PSP being in close proximity to the HCS, is indicated in purple, and is also highlighted in Figure \ref{trajectory}. {We identify repeated crossings of a slow solar wind stream during E01 (in yellow), and an intermediate speed stream during E02 (magenta). These wind streams are shown in more detail in Figures \ref{trajectory} and \ref{windStreams}.}} \label{fluxes} \end{figure*} {During its first two orbits, \textit{Parker Solar Probe} (PSP) observed} the solar wind close to the Sun ($\sim 36R_{\odot}$) to have tangential speeds of up to $\sim 50$km/s \citep{kasper2019alfvenic}, which is far greater than the expected $1-5$km/s from previous \cite{weber1967angular} wind modelling \citep[e.g.][]{reville2020role}. At face value, this implies a larger AM-loss rate from the Sun than previously thought \citep[see also estimates in][]{finley2018effect, finley2019solar}.
However, in this letter we argue that the observations made by PSP during its perihelion passes are not necessarily representative of the global AM-loss rate. As such, by taking into account the spatial variation of the AM flux, our average values from PSP are closer to previous spacecraft observations. \section{Data} In this work we utilise the publicly available data from both the Solar Wind Electrons, Alphas and Protons (SWEAP) instrument suite\footnote{http://sweap.cfa.harvard.edu/pub/data/sci/sweap/spc/L3/ - Accessed March 2020.} \citep{kasper2016solar}, and the FIELDS instrument suite\footnote{http://research.ssl.berkeley.edu/data/psp/data/sci/fields/l2/ - Accessed March 2020.} \citep{bale2016fields}, during the first two orbits of PSP. The Solar Probe Cup (SPC) \citep{case2020solar}, part of the SWEAP instrument suite, is capable of measuring the velocity distribution and density of the solar wind particles using moment-fitting algorithms which return the bulk characteristics of the particle populations. SPC operates at a varying data cadence during the orbit of PSP around the Sun, with its highest sampling rate inside 0.25au. Vector magnetic field data is collected by the FIELDS instrument suite at various time resolutions \citep[e.g.][]{bale2019highly}. For this work, we use the minute cadence data and interpolate this down to the variable time resolution of the SPC data. \begin{figure*} \begin{center} \includegraphics[trim=3.cm 1.cm 2.cm 0.cm, clip, width=\textwidth]{f2.pdf} \end{center} \caption{The trajectory of PSP (grey line) in a reference frame co-rotating with the Sun, projected onto the equatorial plane (as viewed from above the North pole), with the Sun at the center. The first encounter E01 (perihelion 6th November 2018) is in the left panel, and the second encounter E02 (perihelion 4th April 2019) is in the right panel. The AM flux in the solar wind as observed by PSP (in the protons plus magnetic field stresses, as in Figure \ref{fluxes}), is then shown using coloured circles that each represent the 2.5 hour average values. Using the radial wind speed observed by SPC, the connectivity of the magnetic field in the inner heliosphere is visualised with Parker spiral magnetic field lines, which are initialised along PSP's trajectory at 2.5 hour increments (only when $r < 124R_{\odot}$). Each field line is coloured by the magnetic field polarity observed by FIELDS, averaged over the 2.5 hour increment (red is positive, blue is negative). Significant reversals in the observed magnetic field polarity, likely caused by crossing the HCS, are indicated with black lines along PSP's trajectory with their associated dates. {Times when PSP crossed the same solar wind stream are highlighted in the inset figures (yellow for E01, and magenta for E02), with each crossing labelled with a number in the order in which they were encountered. Dashed lines show the expected boundaries of each wind stream based on Parker spiral trajectories that use the average radial wind speed from each wind stream.} The inner annulus at $25R_{\odot}$ displays the average AM flux from Figure \ref{longitudinallyAveraged} where the data are ballistically mapped to $25R_{\odot}$ using Parker spiral trajectories and then binned by Carrington longitude. } \label{trajectory} \end{figure*} During PSP's first orbit we use data from the 5th October 2018 to the 2nd December 2018, with perihelion occurring on the 6th November. This interval is henceforth referred to as E01.
Data for the second orbit is available from the 3rd March 2019 to the 30th April 2019, with perihelion occurring on the 4th April; similarly, this period is referred to as E02. For the first orbit, we supplement the public data during the inbound phase (during October 2018 only), with data supplied by the instrument team ({\color{blue}SWEAP team, private communication}). {During the first two orbits of PSP the alpha particle moments were not well recovered, and so in this letter we focus on the proton observations.} We remove proton and magnetic field data that have been flagged by the instrument teams as containing bad/problematic values. We also evaluate the data taken during the third orbit of PSP (E03), though this dataset is incomplete due to a technical failure of the SPC instrument on approach to perihelion. Therefore, in this letter we focus on the first two orbits and present the third orbit as supplemental information in Appendix \ref{E03}. \section{Observed Solar Wind Angular Momentum Flux} \begin{figure*} \begin{center} \includegraphics[trim=.cm .cm .cm 0.cm, clip, width=0.9\textwidth]{f3.pdf} \end{center} \caption{{The trajectory of PSP plotted as a function of height from the ecliptic plane and cylindrical radius, shown in grey for E01 (top) and E02 (bottom). The 2.5 hour average AM flux is shown with coloured circles, and the times that PSP crossed Wind Stream 1 during E01, and Wind Stream 2 during E02, are highlighted in yellow and magenta respectively. The average AM flux during each crossing is also displayed.}} \label{windStreams} \end{figure*} Using the observations from PSP, we evaluate the solar wind AM flux ($F_{AM}$) as a sum of the mechanical AM carried by the protons ($F_{AM,p}$) and the transfer of AM through magnetic field stresses ($F_{AM,B}$), at the cadence of the SPC instrument. This is given by, \begin{align} r^2F_{AM}=& r^2F_{AM,p} + r^2F_{AM,B},\nonumber\\ =& r^3\sin\theta\rho v_r v_t -r^3\sin\theta\frac{B_t B_r}{4\pi}, \label{AMequation} \end{align} where $r$ is the radial distance of PSP, $\theta$ is its colatitude, $\rho$ is the proton density, $v_r$ is the radial wind speed of the protons, $v_t$ is the tangential wind speed of the protons, $B_r$ is the radial magnetic field strength, and $B_t$ is the tangential magnetic field strength. Here a factor of $r^2$ has been included to remove the dependence of the AM flux on radial distance, as it is only the poloidal vorticity-current stream function (i.e. $r\sin\theta v_t-r\sin\theta B_t B_r/4\pi\rho v_r$) that is a conserved quantity along magnetic field lines under the assumptions of ideal magnetohydrodynamics \citep[see][for the correct nomenclature]{goedbloed2019magnetohydrodynamics}. {The quantity presented throughout this letter, $r^2F_{AM}$, is the flow of AM per solid angle (due to this normalisation with radius), though we often refer to this as an AM flux for simplicity.} This is the same quantity evaluated by previous authors using other spacecraft, and so can be directly compared (see Table \ref{AMvalues}). However, it is worth noting that equation (\ref{AMequation}) does not include the effect of thermal pressure anisotropies, which could influence the magnetic stress term \cite[as discussed in][]{reville2020role}. When summing over all longitudes, equation (\ref{AMequation}) effectively assumes that any AM flux due to thermal pressure anisotropies will sum to zero. Observations suggest the solar wind between 0.3-1au has a mostly isotropic plasma pressure \citep[e.g.][]{marsch1984helios}.
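For concreteness, the evaluation of equation (\ref{AMequation}) amounts to the following few lines (a Python sketch with illustrative round numbers, not actual SPC/FIELDS measurements; all quantities in cgs units):
\begin{verbatim}
import numpy as np

def am_flux_per_solid_angle(r, theta, rho, v_r, v_t, B_r, B_t):
    """Equation (1): r [cm], theta [rad], rho [g/cm^3], v_r, v_t [cm/s],
    B_r, B_t [G]; returns proton, magnetic, total r^2 F_AM [erg/ster]."""
    protons  = r**3 * np.sin(theta) * rho * v_r * v_t
    magnetic = -r**3 * np.sin(theta) * B_t * B_r / (4.0 * np.pi)
    return protons, magnetic, protons + magnetic

Rsun = 6.957e10  # cm
p, b, tot = am_flux_per_solid_angle(
    r=40 * Rsun, theta=np.pi / 2,
    rho=200 * 1.67e-24,          # ~200 protons per cm^3
    v_r=3.0e7, v_t=2.0e6,        # 300 km/s radial, 20 km/s tangential
    B_r=-4.0e-4, B_t=2.0e-4)     # illustrative field components [G]
print(f"protons {p:.2e}  magnetic {b:.2e}  total {tot:.2e} erg/ster")
\end{verbatim}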
Due to small-scale fluctuations in the solar wind, it is necessary to average the AM flux on a sufficient spatial and/or temporal scale to recover the character of the large-scale solar wind flow. {It is of course possible that these fluctuations transport an AM flux in addition to that of the bulk solar wind, for example via compressible MHD waves \citep[e.g.][]{marsch1986acceleration}. However, at present we focus on constraining the properties of the bulk solar wind, as this is likely where the majority of the AM flux is contained.} In Figure \ref{fluxes}, we present the {flow of AM per solid angle ($r^2F_{AM}$)} averaged over 2.5 hour intervals versus time for both E01 and E02. The proton AM flux is shown with red vertical ticks, the magnetic field stresses with blue horizontal ticks, and their total with coloured circles. {The signal to noise on the SPC tangential wind speed observations generally decreases with radial distance, and so the percentage uncertainty increases. However, the proton AM flux varies on a scale that is generally larger than these uncertainties.} These observations indicate that the solar wind AM flux has a substantial spatial variation. PSP even observes significant periods of time with a sustained net negative AM flux (negative flux implies the addition of AM to the Sun). The most obvious example occurred when PSP was in close proximity to the Heliospheric Current Sheet (HCS), which is annotated in purple in the right panel of Figure \ref{fluxes}. In contrast to the protons, the magnetic field stresses do not show much variability (on average around $\sim 0.1 \times 10^{30}$erg/steradian) and are comparable in strength to previous spacecraft observations. Additionally, the uncertainties on a given measurement of $F_{AM,B}$ are much lower than those on $F_{AM,p}$, because the magnetic field direction is generally not as radial as the solar wind velocity. {Note that our averaging timescale is much greater than that of fluctuations due to ``switchbacks'' \citep[e.g.][]{mcmanus2020cross}, and so the magnetic stress term here relates to the large-scale deviation of the interplanetary magnetic field from the radial direction. Future works could consider the effect of these switchbacks on the amount of AM stored in the magnetic field. However, given the observed structure of these fluctuations \citep[switchback rotation angles are investigated in][]{mozer2020switchbacks}, the momentum imparted to the plasma during relaxation of the magnetic field is likely directed radially on average.} Figure \ref{trajectory} shows the same 2.5 hour averages of the {flow of AM per solid angle}, now along the trajectory of PSP during E01 and E02 (with coloured circles). In the background of each panel, the polarity of the interplanetary magnetic field during each 2.5 hour interval is extrapolated into a Parker spiral using the proton radial wind speed as measured by SPC. This helps to visualise the large-scale structure of the magnetic field in the inner heliosphere. Significant magnetic field polarity reversals are highlighted in both Figures \ref{fluxes} and \ref{trajectory}. The variation of the AM flux during the closest approaches of PSP is most clearly seen in the zoomed insets at the bottom right corner of each panel.
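The Parker spiral extrapolation used in Figure \ref{trajectory} (and the ballistic mapping to $25R_{\odot}$ used later for Figure \ref{longitudinallyAveraged}) reduces to a longitude shift at constant radial speed. A minimal Python sketch of one standard implementation follows; it is an editorial illustration and may differ in detail from the pipeline actually used for the figures:
\begin{verbatim}
import numpy as np

OMEGA_SUN = 2.0 * np.pi / (25.38 * 86400.0)  # Carrington rotation rate [rad/s]
RSUN_CM = 6.957e10

def map_longitude(lon_obs_deg, r_obs_rsun, v_r_kms, r_target_rsun=25.0):
    """Ballistic (Parker spiral) mapping: Carrington longitude of the
    source at r_target, assuming the parcel moved at constant v_r."""
    travel_time = (r_obs_rsun - r_target_rsun) * RSUN_CM / (v_r_kms * 1.0e5)
    return (lon_obs_deg + np.degrees(OMEGA_SUN * travel_time)) % 360.0

# e.g. an observation at 54 Rsun in a 350 km/s wind shifts by ~9.5 deg:
print(map_longitude(lon_obs_deg=120.0, r_obs_rsun=54.0, v_r_kms=350.0))
\end{verbatim}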
During PSP's first perihelion, the proton AM flux increases with decreasing radial distance to the Sun, whereas during its second perihelion the proton AM flux is largest for the inbound and outbound observations and is decreased during closest approach (this dip coincides with a sharp decrease in the proton mass flux). Although there are differences in the AM flux between the two perihelia (perihelia are also identified with green dotted lines in Figure \ref{fluxes}), the large-scale organisation of the AM flux, i.e. the locations of the strong positive/negative AM fluxes, shows similarities between E01 and E02. {During the closest approach of PSP in E01, the same slow solar wind stream, hereafter referred to as ``Wind Stream 1'', was crossed three times (highlighted in yellow). Similarly, during E02 PSP crosses another stream, designated ``Wind Stream 2'', though the evidence for this being the same stream throughout PSP's observations is less convincing. {The likely origins of these wind streams are discussed in detail within \cite{Panasenco2020ApJS}.} In Figure \ref{windStreams} we show the latitudinal extent of PSP's orbit during the first two perihelia and highlight these two wind streams. For each crossing we compute the average flow of AM per solid angle measured by PSP during the intervals. Note that these crossings are a few days apart. Theoretically, for the same solar wind stream the AM flux is expected to be constant (apart from during interactions with other wind streams at stream interfaces). However, within Wind Stream 1, the AM flux in the solar wind can be seen to vary from positive to negative. This is a surprising result, as Wind Stream 1 is the best-constrained stream, so it can only be explained in a few ways: 1) AM flux varies substantially in space through individual wind streams, and perhaps these fluctuations relate to PSP's location with respect to the boundaries of other wind streams, 2) AM flux varies substantially in time, and perhaps these fluctuations relate to the formation of the slow solar wind \citep[see][]{rouillard2020relating, reville2020tearing}, or 3) the protons and magnetic field stresses do not account for the total AM flux, i.e. the alpha particles, pressure anisotropies, or other processes, are required to explain the observations. In comparison to Wind Stream 1, Wind Stream 2 appears to be closer to having a steady AM flux from crossing to crossing, despite the stream likely containing wind from various sources. During the perihelion of E02, there is a notable difference in the AM flux carried by Wind Stream 2 (an intermediate speed wind stream) and the slow wind streams either side of it. In this case the faster wind contains roughly a quarter of the AM flux observed in the slow wind either side. This observation, along with our analysis of the two wind streams, may suggest that intermediate/fast solar wind streams carry a smaller but less time-varying AM flux than the slow wind streams, which host a larger, temporally and/or spatially varying, AM flux.} \begin{figure*} \centering \includegraphics[trim=2.5cm 1cm 2.5cm 0.5cm,clip, width=\textwidth]{f4.pdf} \caption{2D histogram of solar wind AM flux (in the protons and magnetic field stresses), versus Carrington longitude, for both E01 (left) and E02 (right). The observations have been ballistically mapped to $25R_{\odot}$ using Parker spiral trajectories with their observed radial wind speeds from SPC, and binned in $20^{\circ}$ bins.
Darker green shades show an increased frequency of observation in the 2D histogram. Note that SPC sampled the solar wind at a much higher cadence during perihelion, which is clearly visible. The average AM flux for each bin as a function of Carrington longitude is then overplotted with a black line. The average values considering just the protons (red), and magnetic field stresses (blue), are also shown. Finally, the sum over all Carrington longitude bins is given in the bottom left text.} \label{longitudinallyAveraged} \end{figure*} \begin{figure*} \centering \includegraphics[trim=2.5cm 1cm 2.5cm 0.5cm,clip, width=\textwidth]{f5.pdf} \caption{AM flux versus heliographic latitude, in the same format as Figure \ref{longitudinallyAveraged}. The average AM flux for each $0.5^{\circ}$ bin as a function of latitude is shown for the protons (red), magnetic field stresses (blue), and their total (black).} \label{latitudinallyAveraged} \end{figure*} Given the variability of the AM flux, it is difficult to disentangle structure in the solar wind due to PSP's varying radial distance, heliographic latitude, and Carrington longitude. As an alternative to the temporal averages shown in Figure \ref{fluxes}, Figures \ref{longitudinallyAveraged} and \ref{latitudinallyAveraged} show the AM fluxes binned in Carrington longitude and heliographic latitude, respectively. Within each spatial bin, the frequency of observing a given value of AM flux colours each 2D histogram (with darker green being higher frequency). The average value from each bin is highlighted in black, along with the averages for just the proton component (red), and just the magnetic stresses (blue). When the data are binned by Carrington longitude (Figure \ref{longitudinallyAveraged}), both E01 and E02 show the AM flux to vary from positive to negative values (this is clearer in the upper right inset figures), with the source of the variation being the proton AM flux (as with the temporal averaging). The AM flux averaged in this way is also shown in an annulus in Figure \ref{trajectory} for both E01 and E02, for visual comparison. When the data are binned by heliographic latitude, the AM flux is observed to have a clearer structure, shown in Figure \ref{latitudinallyAveraged}. The inset figures show an approximately sinusoidal variation for both E01 and E02, but with the latitude dividing the positive and negative wind streams seemingly shifted between the two encounters. Again the magnetic stresses do not show much variability with latitude, in comparison to the proton AM flux. It remains unclear which binning technique best represents the average equatorial AM flux ($F_{AM,eq}$) in the solar wind, and the range of values likely represents the systematic uncertainties arising from the choice of binning method. The highest and lowest values ($r^2F_{AM,eq}=0.18\times 10^{30}$erg/steradian and $0.58\times 10^{30}$erg/steradian) are found by binning the data by Carrington longitude for E01 and E02 respectively. Hereafter, we adopt the average values when binning the data in latitude, due to the similarity in outcome for both orbits, and these values are given in the top entries in Table \ref{AMvalues}. For a \cite{weber1967angular} wind, mechanical AM is gained by the particle population from the stresses in the magnetic field as the wind travels through the heliosphere. This means the value of $F_{AM,p}/F_{AM,B}$ for a solar wind parcel should increase with radial distance.
The precise value of this ratio and how it varies also provides information about the Alfv\'en point \citep{marsch1984distribution}. This ratio and the radial distance of each spacecraft from previous calculations are shown in Table \ref{AMvalues}. However, given the variability of the AM flux with solar cycle \citep[see][]{finley2018effect}, and the varying precision of each instrument, it is difficult to draw any conclusions from this. The cause(s) of the spatially varying AM flux remain unknown, {and there does not appear to be a simple correlation between the enhanced tangential wind speeds and the presence of switchbacks in the solar wind \citep{kasper2019alfvenic}.} Previously, wind stream-interactions have been used to explain the observed variation in $v_t$ \citep[e.g.][]{pizzo1978three}, though the values observed by PSP are far larger than magnetohydrodynamic models would predict. For example, the modelling of E01 by \cite{reville2020role} found a similar trend in positive and negative $v_t$, due to stream-interactions, but with an amplitude of only $\pm 5$km/s. For this reason \cite{reville2020role} suggested that pressure anisotropies, which modify the balance of magnetic field stresses and thus how much AM they transfer to the solar wind particles, might explain some (but not all) of the observed trends in $v_t$ with radial distance. Another potential source of the spatially varying AM flux is from magnetic field foot point motions in the photosphere, caused by circulation of the open magnetic flux \citep[][]{crooker2010suprathermal, fisk2020global}. Additionally, the coherent structure in the AM flux between orbits may also indicate a connection with the Sun's large-scale magnetic field. For example, the HCS was likely similar in shape between E01 and E02 and so could have played a role in organising the AM flux. In our analysis, we have used the data as is, and considered the spatial variation of the AM flux and its conflation with the trajectory of PSP. However, it is important to acknowledge the uncertainties on these measurements, especially the measurements of $v_t$ from SPC, which have yet to be fully explored. {For example, during more recent encounters in which the Solar Probe Analysers \citep[SPAN,][]{whittlesey2020solar} have been able to measure the proton velocities concurrently with SPC, there are discrepancies which have yet to be resolved.} Over the course of PSP's mission lifetime, as the instrument characteristics are better determined, our understanding of the relative contribution of physical flows and pointing error will increase. It is expected that the signal to noise of SPC, and PSP in general, is significantly higher than that of previous spacecraft, such as \textit{Helios} and \textit{Wind} \citep[see][for further details about SPC]{case2020solar}. \section{Conclusion} We have shown that PSP observed significant spatial variability in solar wind AM flux, with some coherent features between the first two orbits. In both orbits we find wind streams that carry positive and negative AM fluxes which are separated in longitude and latitude. {We evaluate two different wind streams which are repeatedly crossed by PSP around each perihelion. From this analysis we show that the AM flux within a given stream can vary substantially, with the slow wind stream (observed during E01) having the largest variations (from positive to negative values), and the intermediate wind stream (observed during E02) being closer to a steady-state flow.
This contrast may be introduced by their different solar wind origins; however, at present there are not enough observations to constrain this.} By averaging the data holistically we are able to produce smaller values for the equatorial AM flux than would be inferred by using only data from the closest approaches of PSP. These values are much closer to previous measurements from a variety of spacecraft at larger radial distances (see Table \ref{AMvalues}), where observations had previously been averaged over $\sim 27$ day intervals to improve the signal to noise \citep[e.g.][]{finley2019direct}. Assuming the solar wind AM flux is, on the large-scale, distributed as $F_{AM}(\theta)\approx F_{AM,eq}\sin^2\theta$, integrating over solid angle gives a global AM-loss rate of $\dot{J}=\int r^2F_{AM}\,{\rm d}\Omega = (8\pi/3)\, r^2F_{AM,eq}$; the average PSP observations ($r^2F_{AM,eq}=0.31-0.50\times 10^{30}$erg/steradian) therefore imply $\dot{J}=2.6-4.2\times 10^{30}$erg. This value is around a factor of 2 smaller than what would be expected from a \cite{skumanich1972time} rotation period evolution \citep[\textit{rotation rate} $\propto$ \textit{age}$^{-1/2}$; e.g.][]{matt2015mass, amard2019first}. This may reflect a decrease in the AM-loss rate of Sun-like stars at the age of the Sun, as proposed by \cite{van2016weakened}, though this value perhaps indicates a less abrupt change to the AM-loss rate. On the other hand, models of stellar rotation-evolution (relying on measured rotation rates of stars at various ages) currently only probe the AM-loss as averaged over timescales of $\sim 10-100$Myrs. Historical estimates of the solar AM-loss rate are currently limited by the available reconstructions of solar activity, which extend back only as far as the last ice age \citep[see][]{finley2019solar}. Over this period of around 9000yrs, it is possible that the Sun had reduced magnetic activity compared to other Sun-like stars \citep[e.g.][]{reinhold2020sun}, which should also be reflected in a weaker AM-loss rate. Thus, if the Sun's magnetism varies on much longer timescales than can currently be measured, AM-loss rates recovered from spacecraft observations would remain ambiguous in the context of stellar spin-evolution. However, this remains an interesting connection between the Sun and other Sun-like stars that will continue to be investigated using concurrent multi-spacecraft observations of the solar wind AM flux (at various radial distances), facilitated by PSP \citep{fox2016solar}, the \textit{Solar Orbiter} spacecraft \citep{mueller2013solar}, and existing instruments at 1au. Additionally, with solar activity increasing as the Sun enters solar cycle 25, such multi-spacecraft observations will be able to study the influence of varying activity on the solar AM-loss rate. \acknowledgments We thank the SWEAP and FIELDS instrument teams of Parker Solar Probe, and the NASA/GSFC's Space Physics Data Facility for providing this data. Parker Solar Probe was designed, built, and is now operated by the Johns Hopkins Applied Physics Laboratory as part of NASA’s Living with a Star (LWS) program (contract NNN06AA01C). Support from the LWS management and technical team has played a critical role in the success of the Parker Solar Probe mission. AJF and SPM acknowledge funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 682393 AWESoMeStars). VR acknowledges funding by the ERC SLOW{\_}\,SOURCE project (SLOW{\_}\,SOURCE - DLV-819189).
MO is funded by Science and Technology Facilities Council (STFC) grant numbers ST/M000885/1 and ST/R000921/1. RFP acknowledges support from the French space agency (Centre National des Etudes Spatiales; CNES; https://cnes.fr/fr). Figures in this work are produced using the python package matplotlib \citep{hunter2007matplotlib}.
\section{\label{sec:intro}Introduction} The thermodynamics of classical Kerr-de Sitter (KdS) black holes has been investigated in \citep{law1deriv,zhang,sekiwa}, where it was shown that they follow the First Law of black hole thermodynamics (see Sec. \ref{sec:firstlaw} for more details). In a more recent paper \citep{silk}, the authors have considered a particle-like picture of Kerr black holes in asymptotically flat backgrounds, and also verified that the First Law is satisfied for large quantum numbers, as expected from the Correspondence Principle. As mentioned in \citep{silk}, such a description may be used for studying Planck-scale relics. In this article, we show that the particle-like quantization scheme described in \citep{silk}, when extended to de-Sitter backgrounds, only admits finitely many excitation states ($n$) for a fixed $\Lambda$. Furthermore, there is a Root Mean Square (RMS) deviation from the First Law of order $\mathcal{O}(\Lambda^{0.45 \pm 0.01})$ (Sec. \ref{subsec:plots}), which vanishes for $\Lambda\rightarrow 0$ (consistent with existing literature) \footnote{We hasten to remind the reader that the vanishing of the RMS deviation does not imply that the First Law is satisfied for all $n$. Indeed, it is still violated for small $n$ even in this case ($\Lambda=0$) and is only satisfied in the large $n$ limit, as expected from the Correspondence Principle.}. \section{\label{sec:kds}Kerr-de Sitter Spacetime} The Kerr-de Sitter (KdS) black hole in $(3+1)$-dimensional spacetime can be described by the following line element in Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ \citep{zhang,sekiwa}, \begin{equation} \dd{s^2} = -\frac{\Delta_r}{\rho^2}\left(\dd{t} - \frac{a^2\sin^2\theta}{\Xi}\dd\phi\right)^2 + \frac{\rho^2}{\Delta_r}\dd{r^2} + \frac{\rho^2}{\Delta_\theta}\dd\theta^2 + \frac{\Delta_\theta \sin^2\theta}{\rho^2}\left(a \dd{t} - \frac{r^2 + a^2}{\Xi}\dd\phi\right)^2, \label{metric} \end{equation} where, \begin{equation} \begin{split} \rho^2 = r^2 + a^2 \cos^2\theta,\quad\Delta_\theta = 1 + \frac{\Lambda}{3}a^2 \cos^2\theta \\ \Xi = 1 + \frac{\Lambda}{3}a^2,\quad \Delta_r = (r^2 + a^2)\left(1-\frac{\Lambda}{3}r^2\right) - 2m r. \end{split} \end{equation} The KdS black hole mass ($M$) is given by $M = m/\,\Xi^2$. Throughout this paper, we have used Planck units ($c = \hbar = G = k_B = 1)$. \subsection{Horizons of the KdS spacetime} \label{subsec:hors} For the purposes of this article, we shall consider the inner (Cauchy) horizon (CH) and the outer (event) horizon (EH), denoted by $r_-$ and $r_+$ respectively. We assume that there are 3 positive roots of $\Delta_r(r)=0$, corresponding to the CH, EH and the cosmological horizon $r_c$. Using $\Delta_r(r_-) = \Delta_r(r_+) = 0$, one can write, after slightly long but simple algebra, the following, \begin{align} a^2 &= \frac{r_- r_+\left[1 - \frac{\Lambda}{3} (r_+^2 + r_-^2 + r_- r_+)\right]}{1 + \frac{\Lambda}{3}r_- r_+},\nonumber\\ 2m &= \frac{(r_- + r_+)(r_-^2 + a^2)(r_+^2 + a^2)}{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + a^2)},\nonumber\\ \Xi &= \frac{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + 2a^2 )-a^4}{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + a^2)}. \label{horquant} \end{align} We shall use these expressions extensively in the rest of the paper. Moreover, for simplicity, we shall henceforth study only the extremal KdS black hole, characterised by $r_+ = r_- = \Gamma$ (say). The various quantities in (\ref{horquant}) will be investigated in the following sections.
\section{\label{sec:qkds}Quantum KdS Black Holes} In this section, we shall follow the quantization scheme outlined in \citep{silk}, where the authors have considered a particle-like picture of a black hole, first discussed in \citep{particleRN} for a Reissner-Nordström black hole. Our primary assumption is that the EH should not be smaller than the associated Compton wavelength of the black hole, i.e. $r_+ = \frac{n}{M} = \Gamma$, where $n \in \mathbb{N}$ (in Planck units) denotes the excitation state of the quantum black hole. Substituting this value in the expression for $a^2$ in (\ref{horquant}), along with the extremality assumption, we get, \begin{equation}\label{quanta} a^2 = \frac{\Gamma^2(1 - \Lambda \Gamma^2)}{1 + \frac{\Lambda}{3}\Gamma^2} = \frac{n^2}{M^2}\left(\frac{1 - \frac{\Lambda n^2}{M^2}}{1 + \frac{\Lambda n^2}{3 M^2}}\right). \end{equation} Therefore, the angular momentum $J_H = Ma$ of the black hole EH is given by, \begin{equation}\label{angmom} J_H = n\left(\frac{1 - \frac{\Lambda n^2}{M^2}}{1 + \frac{\Lambda n^2}{3 M^2}}\right)^\frac{1}{2}. \end{equation} Clearly, this gives the correct result, as assumed in \citep{silk} (for the extremal case), when $\Lambda=0$. Other relevant quantities in \citep{silk,zhang,sekiwa} can be easily calculated following the quantization scheme described above, and are all consistent with their asymptotically flat analogues. We shall skip those calculations and move on to a discussion of the First Law of black hole thermodynamics for the extremal case, in the following section. Using the expressions for $m$ and $\Xi$ in (\ref{horquant}), we can also obtain the following expression for $M$, \begin{equation}\label{massexpr} \frac{n (3 M^2 + n^2 \Lambda)}{3 (M^2 + n^2 \Lambda)^2}=1. \end{equation} Although (\ref{massexpr}) is quadratic in $M^2$, the resulting closed form for $M$ is not particularly illuminating; it is used in the computational sketch at the end of Sec. \ref{sec:firstlaw}. One may note, however, that for $\Lambda=0$, (\ref{massexpr}) gives $M = \sqrt{n}$, which is the result obtained in Eq. (3) of \citep{silk} for the extremal case. \section{\label{sec:firstlaw}First Law of thermodynamics} Before stating the Law, we mention that the angular velocities of the KdS horizons are given in \citep{zhang,sekiwa} as, \begin{equation}\label{omega} \tilde{\Omega} = \frac{a\Xi}{\Gamma^2 + a^2}, ~ \Omega_{\infty} = \frac{\Lambda}{3}a, \end{equation} while the black hole EH angular velocity is given by $\Omega_H=\tilde{\Omega} - \Omega_\infty$. The general form of the First Law of thermodynamics for uncharged black holes may now be stated as\footnote{Ideally, one should write $\delta M = T_H\delta S_H + \Omega_H\delta J_H + V_H \delta P_H$, where $V_H$ is the horizon volume and $P_H = -\Lambda/8\pi$. Since in our case $\Lambda$ is a constant, we simply ignore this term.}, \begin{equation}\label{firstlaw} \delta M = T_H\delta S_H + \Omega_H\delta J_H, \end{equation} where $S_H$ is the horizon entropy, given by $S_H = \pi (\Gamma^2 + a^2)/\,\Xi$, and $T_H$ is the Hawking temperature \citep{hawkrad} of the EH, given by, \begin{equation}\label{Th} T_H = \frac{1}{4\pi}\frac{\partial_r\Delta_r |_{\Gamma}}{\Gamma^2 + a^2} = 0, \end{equation} as expected for an extremal black hole. (\ref{Th}) is straightforward to check by substituting the values of the various quantities in $\partial_r\Delta_r$ and evaluating at $\Gamma = r_+ = r_-$. Therefore, to check (\ref{firstlaw}), we simply need to consider the second term on the RHS. In the following subsection, we numerically calculate both sides of (\ref{firstlaw}) for varying $n$.
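Before turning to the numerical analysis, it is useful to record that solving (\ref{massexpr}) for $M^2$ (keeping the branch with $M \rightarrow \sqrt{n}$ as $\Lambda \rightarrow 0$) gives $M^2 = \frac{n}{2} - \Lambda n^2 + \frac{n}{2}\sqrt{1 - \frac{8\Lambda n}{3}}$. The following Python sketch (an editorial aid with an illustrative value of $\Lambda$; it is not the code used to produce the figures) uses this root, together with (\ref{angmom}), to enumerate the \textit{allowed} excitation states:
\begin{verbatim}
import numpy as np

def mass(n: int, lam: float):
    """Physical root of Eq. (massexpr), which is quadratic in M^2; the
    branch is fixed by requiring M -> sqrt(n) as Lambda -> 0."""
    disc = 1.0 - 8.0 * n * lam / 3.0
    if disc < 0:
        return None                                # no real mass state
    return np.sqrt(n / 2.0 - lam * n**2 + (n / 2.0) * np.sqrt(disc))

def ang_mom(n: int, M: float, lam: float):
    """Eq. (angmom); real only for the allowed excitation states."""
    x = lam * n**2 / M**2
    return n * np.sqrt((1.0 - x) / (1.0 + x / 3.0)) if x <= 1.0 else None

lam = 1.0e-3                                       # illustrative Lambda
for n in range(1, 10**6):
    M = mass(n, lam)
    J = ang_mom(n, M, lam) if M is not None else None
    if J is None:
        print(f"allowed states for Lambda={lam}: n = 1 .. {n - 1}")
        break
\end{verbatim}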
\subsection{\label{subsec:numan}Numerical Analysis} Since the quantities entering the First Law could only be obtained through complicated implicit expressions (e.g. $M$ in (\ref{massexpr})), we proceed to numerically calculate all relevant quantities for a given $\Lambda$ and over all \textit{allowed} values of $n$\footnote{As discussed in the following subsection, the range of \textit{allowed} $n$ refers to those values of $n$, starting from $1$, for which we have $J_H \in \mathbb{R}^+$.}. Furthermore, we confirm that (\ref{firstlaw}) is not exactly valid for this quantum KdS black hole model (although it is seen to be valid for $\Lambda \rightarrow 0,\:n \rightarrow \infty$, as verified in \citep{silk}). To show this, we plotted $\delta M= M_{n+1}-M_n$ and compared it with $\Omega_H^n \delta J^n_H$ (the superscript denotes the excitation state), obtained using (\ref{massexpr}), (\ref{angmom}) and (\ref{omega}). \begin{figure} \includegraphics[width=0.49\textwidth]{img/MPlot.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/JPlot.png} \caption{(Left) - Plots depicting the mass, $M_{KdS}(\Lambda)$, for two different values of $\Lambda$ and the extremal Kerr black hole mass $M_{Kerr}$ \citep{silk}. (Right) - Similar plots for the angular momentum, $J_{KdS}(\Lambda)$, of an extremal quantum KdS black hole and its Kerr analogue, $J_{Kerr}$ ($\Lambda=0$).}\label{fig:MJ} \end{figure} \subsection{\label{subsec:plots}Plots and Results} In Fig. \ref{fig:MJ}, we plot the masses and angular momenta corresponding to the KdS black hole excitation states, $n$. The orange curve denotes the extremal Kerr analogues of the respective quantities, found in \citep{silk}. There is always a maximal angular momentum attained for finite $\Lambda$, after which $J_H$ decreases with increasing $n$. We observe that the mass (energy), $M$, follows the $\sqrt{n}$ behaviour for small $n$, but deviates significantly for larger $n$, also demonstrating a maximum possible mass (energy state), which is attained for some finite $n$ when $\Lambda \neq 0$. However, for $\Lambda \rightarrow 0$, we see, from the plots in Fig. \ref{fig:MJ}, that $M \rightarrow \sqrt{n}$ and $J_H \rightarrow n$, as expected from their analytic expressions (\ref{angmom}) and (\ref{massexpr}). In Fig. \ref{fig:law1plots}, we plot $\delta M$ and $\Omega\delta J$ from (\ref{firstlaw}) (red and green curves respectively), along with their difference (blue curve). It can be seen that the largest deviations from (\ref{firstlaw}) occur near the boundary values of the allowed excitation ($n$) states. Finally, we calculated the Root Mean Square (RMS) deviation from the First Law, $\delta_{RMS}(\Lambda)$, and fit the $\delta_{RMS}(\Lambda)$ vs $-\log_{10}(\Lambda)$ plot in Fig. \ref{fig:fit} using a trial function of the form $f(x) = c\:10^{n x}$ (where $c$ and the exponent $n$ are curve-fit parameters, the latter not to be confused with the excitation number). From the values obtained in the fit, one can calculate that $\delta_{RMS}(\Lambda)$ is of order $\mathcal{O}(\Lambda^{0.45 \pm 0.01})$.
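A minimal version of this First-Law comparison, reusing \texttt{mass()} and \texttt{ang\_mom()} from the sketch in the previous section (again an editorial aid with the same illustrative $\Lambda$, not the original analysis code), could read:
\begin{verbatim}
import numpy as np

def omega_H(n: int, M: float, lam: float) -> float:
    """EH angular velocity from Eqs. (quanta) and (omega), with
    Gamma = n / M and Omega_H = a*Xi/(Gamma^2 + a^2) - (Lambda/3)*a."""
    gam2 = (n / M) ** 2
    a2 = gam2 * (1.0 - lam * gam2) / (1.0 + lam * gam2 / 3.0)
    xi = 1.0 + lam * a2 / 3.0
    return np.sqrt(a2) * (xi / (gam2 + a2) - lam / 3.0)

lam = 1.0e-3
ns = np.arange(1, 334)                    # allowed states for this Lambda
M = np.array([mass(n, lam) for n in ns])
J = np.array([ang_mom(n, m, lam) for n, m in zip(ns, M)])
dM, dJ = np.diff(M), np.diff(J)
OdJ = np.array([omega_H(n, m, lam) for n, m in zip(ns[:-1], M[:-1])]) * dJ
print("RMS deviation from the First Law:",
      np.sqrt(np.mean((dM - OdJ) ** 2)))
\end{verbatim}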
\begin{figure} \begin{tikzpicture} \begin{customlegend}[legend columns=3,legend style={align=right,draw=none,column sep=2ex},legend entries={$\hspace{8em}\color{red}{\bullet}\quad\color{black}{\delta M}$, $\Omega\delta J$, $\delta M - \Omega\delta J$}] \csname pgfplots@addlegendimage\endcsname{red, mark=o*, only marks} \csname pgfplots@addlegendimage\endcsname{green, mark=square*, only marks} \csname pgfplots@addlegendimage\endcsname{blue, mark=diamond*, only marks} \end{customlegend} \end{tikzpicture} \includegraphics[width=0.49\textwidth]{img/e-2.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/e-3.png} \includegraphics[width=0.49\textwidth]{img/e-4.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/e-5.png} \caption{Plots showing the variations of $\delta M$ and $\Omega\delta J$ with $n$, for four values of $\Lambda$. The difference is depicted by the blue curve. The maximum deviation occurs near the boundaries of \textit{allowed} $n$.}\label{fig:law1plots} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{img/rms.png} \caption{Best-fit curve for the trial function $f(x) = c\:10^{n x}$ with fit parameters $c$ and $n$.}\label{fig:fit} \end{figure} \section{\label{sec:results}Conclusion} In the initial sections of this paper, we investigated the particle-like picture of an extremal Kerr black hole in a de-Sitter background, essentially extending the work of \citep{silk}. The novelty of this article lies in estimating the RMS deviation from the First Law, $\delta_{RMS}(\Lambda)$ (which is of order $\mathcal{O}(\Lambda^{0.45 \pm 0.01})$), due to quantization on a de-Sitter background. Interestingly, this deviation vanishes both for quantum Kerr black holes ($\Lambda=0$) \citep{silk} and in the classical KdS case \citep{zhang,sekiwa}, and is therefore consistent with previous results in the appropriate limits. From the plots, it is evident that the RMS deviation arises from the finitely many \textit{allowed} $n$ for any $\Lambda \neq 0$. To the best of our knowledge, there has been no conclusive investigation of the physical reasons behind the existence of only finitely many energy states. A completely consistent theory of quantum gravity might be able to re-affirm (or negate) the deviation results obtained here. Nevertheless, this is an interesting observation requiring further investigation. \bibliographystyle{elsarticle-num-names}
\ref{subsec:plots}), which vanishes for $\Lambda\rightarrow 0$ (consistent with existing literature)\footnote{We hasten to remind the reader that the vanishing of the RMS deviation does not imply that the First Law is satisfied for all $n$. Indeed, it is still violated for small $n$ even in this case ($\Lambda=0$), and is only satisfied in the large $n$ limit, as expected from the Correspondence Principle.}. \section{\label{sec:kds}Kerr-de Sitter Spacetime} The Kerr-de Sitter (KdS) black hole in $(3+1)$-dimensional spacetime can be described by the following line element in Boyer-Lindquist coordinates $(t,r,\theta,\phi)$ \citep{zhang,sekiwa}, \begin{equation} \dd{s^2} = -\frac{\Delta_r}{\rho^2}\left(\dd{t} - \frac{a\sin^2\theta}{\Xi}\dd\phi\right)^2 + \frac{\rho^2}{\Delta_r}\dd{r^2} + \frac{\rho^2}{\Delta_\theta}\dd\theta^2 + \frac{\Delta_\theta \sin^2\theta}{\rho^2}\left(a \dd{t} - \frac{r^2 + a^2}{\Xi}\dd\phi\right)^2, \label{metric} \end{equation} where, \begin{equation} \begin{split} \rho^2 = r^2 + a^2 \cos^2\theta,\quad\Delta_\theta = 1 + \frac{\Lambda}{3}a^2 \cos^2\theta \\ \Xi = 1 + \frac{\Lambda}{3}a^2,\quad \Delta_r = (r^2 + a^2)\left(1-\frac{\Lambda}{3}r^2\right) - 2m r. \end{split} \end{equation} The KdS black hole mass ($M$) is given by $M = m/\,\Xi^2$. Throughout this paper, we use Planck units ($c = \hbar = G = k_B = 1$). \subsection{Horizons of the KdS spacetime} \label{subsec:hors} For the purposes of this article, we shall consider the inner (Cauchy) horizon (CH) and the outer (event) horizon (EH), denoted by $r_-$ and $r_+$ respectively. We assume that $\Delta_r(r)=0$ has three positive roots, corresponding to the CH, the EH, and the cosmological horizon $r_c$. Using $\Delta_r(r_-) = \Delta_r(r_+) = 0$, one can derive, after somewhat lengthy but straightforward algebra, the following, \begin{align} a^2 &= \frac{r_- r_+\left[1 - \frac{\Lambda}{3} (r_+^2 + r_-^2 + r_- r_+)\right]}{1 + \frac{\Lambda}{3}r_- r_+},\nonumber\\ 2m &= \frac{(r_- + r_+)(r_-^2 + a^2)(r_+^2 + a^2)}{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + a^2)},\nonumber\\ \Xi &= \frac{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + 2a^2) - a^4}{r_- r_+(r_+^2 + r_-^2 + r_- r_+ + a^2)}. \label{horquant} \end{align} We shall use these expressions extensively in the rest of the paper. Moreover, for simplicity, we shall henceforth study only the extremal KdS black hole, characterised by $r_+ = r_- = \Gamma$ (say). The various quantities in (\ref{horquant}) will be investigated in the following sections.
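As a quick sanity check of (\ref{horquant}), the reconstructed $a^2$ and $m$ must make $\Delta_r$ vanish at both horizons. The following minimal Python sketch (assuming \texttt{numpy}; the horizon radii and $\Lambda$ are arbitrary illustrative values of ours, not a physical configuration) verifies this numerically:
\begin{verbatim}
import numpy as np

rm, rp, lam = 0.7, 1.3, 1e-3   # r_-, r_+, Lambda (illustrative values)
L3 = lam / 3.0
Sig = rp**2 + rm**2 + rm*rp
a2 = rm*rp*(1.0 - L3*Sig) / (1.0 + L3*rm*rp)
m = (rm + rp)*(rm**2 + a2)*(rp**2 + a2) / (2.0*rm*rp*(Sig + a2))

def Delta_r(r):
    return (r**2 + a2)*(1.0 - L3*r**2) - 2.0*m*r

print(Delta_r(rm), Delta_r(rp))  # both vanish up to round-off
\end{verbatim}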
\section{\label{sec:qkds}Quantum KdS Black Holes} In this section, we follow the quantization scheme outlined in \citep{silk}, which considers a particle-like picture of a black hole, first discussed in \citep{particleRN} for a Reissner-Nordström black hole. Our primary assumption is that the EH should not be smaller than the Compton wavelength associated with the black hole, i.e. $r_+ = \frac{n}{M} = \Gamma$, where $n \in \mathbb{N}$ (in Planck units) denotes the excitation state of the quantum black hole. Substituting this value in the expression for $a^2$ in (\ref{horquant}), along with the extremality assumption, we get, \begin{equation}\label{quanta} a^2 = \frac{\Gamma^2(1 - \Lambda \Gamma^2)}{1 + \frac{\Lambda}{3}\Gamma^2} = \frac{n^2}{M^2}\left(\frac{1 - \frac{\Lambda n^2}{M^2}}{1 + \frac{\Lambda n^2}{3 M^2}}\right). \end{equation} Therefore, the angular momentum $J_H = Ma$ of the black hole EH is given by, \begin{equation}\label{angmom} J_H = n\left(\frac{1 - \frac{\Lambda n^2}{M^2}}{1 + \frac{\Lambda n^2}{3 M^2}}\right)^\frac{1}{2}. \end{equation} Clearly, this reproduces the result assumed in \citep{silk} (for the extremal case) when $\Lambda=0$. Other relevant quantities in \citep{silk,zhang,sekiwa} can easily be calculated following the quantization scheme described above, and are all consistent with their asymptotically flat analogues. We skip those calculations and move on to a discussion of the First Law of black hole thermodynamics for the extremal case in the following section. Using the expressions for $m$ and $\Xi$ in (\ref{horquant}), we can also obtain the following implicit expression for $M$, \begin{equation}\label{massexpr} \frac{n (3 M^2 + n^2 \Lambda)}{3 (M^2 + n^2 \Lambda)^2}=1. \end{equation} Unfortunately, (\ref{massexpr}) cannot be simplified further analytically. However, one may again note that for $\Lambda=0$, (\ref{massexpr}) gives $M = \sqrt{n}$, which is the result obtained in Eq. (3) of \citep{silk} for the extremal case. \section{\label{sec:firstlaw}First Law of thermodynamics} Before stating the Law, we mention that the angular velocities of the KdS horizons are given by \citep{zhang,sekiwa} as, \begin{equation}\label{omega} \tilde{\Omega} = \frac{a\Xi}{\Gamma^2 + a^2}, ~ \Omega_{\infty} = \frac{\Lambda}{3}a, \end{equation} while the black hole EH angular velocity is given by $\Omega_H=\tilde{\Omega} - \Omega_\infty$. The general form of the First Law of thermodynamics for uncharged black holes may now be stated as\footnote{Ideally we should write $\delta M = T_H\delta S_H + \Omega_H\delta J_H + V_H \delta P_H$, where $V_H$ is the horizon volume and $P_H = -\Lambda/8\pi$. Since in our case $\Lambda$ is a constant, we simply drop this term.}, \begin{equation}\label{firstlaw} \delta M = T_H\delta S_H + \Omega_H\delta J_H, \end{equation} where $S_H$ is the horizon entropy, given by $S_H = \pi (\Gamma^2 + a^2)/\,\Xi$, and $T_H$ is the Hawking temperature \citep{hawkrad} of the EH, given by, \begin{equation}\label{Th} T_H = \frac{1}{4\pi}\frac{\partial_r\Delta_r |_{\Gamma}}{\Gamma^2 + a^2} = 0, \end{equation} as expected for an extremal black hole. Equation (\ref{Th}) is straightforward to verify by substituting the values of the various quantities in $\partial_r\Delta_r$ and evaluating at $\Gamma = r_+ = r_-$. Therefore, to check (\ref{firstlaw}), we only need to consider the second term on the RHS. In the following subsection, we numerically evaluate both sides of (\ref{firstlaw}) for varying $n$. \subsection{\label{subsec:numan}Numerical Analysis} Since the quantities entering the First Law could only be obtained as complicated identities or implicit expressions (e.g. $M$ in (\ref{massexpr})), we proceed to compute all relevant quantities numerically for a given $\Lambda$ and over all \textit{allowed} values of $n$\footnote{As discussed in the following subsection, the range of \textit{allowed} $n$ refers to those values of $n$, starting from $1$, for which $J_H \in \mathbb{R}^+$.}. Furthermore, we confirm that (\ref{firstlaw}) is not exactly valid for this quantum KdS black hole model (although it is valid for $\Lambda \rightarrow 0,\:n \rightarrow \infty$, as verified in \citep{silk}). To show this, we computed $\delta M= M_{n+1}-M_n$ and compared it with $\Omega_H^n \delta J^n_H$ (the superscript denotes the excitation state), obtained using (\ref{massexpr}), (\ref{angmom}) and (\ref{omega}).
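For concreteness, the numerical procedure can be sketched as follows (a minimal Python sketch, assuming \texttt{numpy} and \texttt{scipy}; the root bracket, the value of $\Lambda$ and the range of $n$ are illustrative choices of ours, not taken from the computation reported here):
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def mass(n, lam):
    # Solve n(3 M^2 + n^2 L) = 3 (M^2 + n^2 L)^2 for M > 0, cf. (massexpr)
    f = lambda M: n*(3*M**2 + n**2*lam) - 3.0*(M**2 + n**2*lam)**2
    return brentq(f, 1e-8, 10.0*np.sqrt(n))    # bracket is an assumption

def J_H(n, lam, M):
    x = lam*n**2 / M**2
    return n*np.sqrt((1.0 - x)/(1.0 + x/3.0))  # NaN once x >= 1 (disallowed n)

def Omega_H(n, lam, M):
    G = n / M                                  # Gamma = r_+ = n/M
    a = J_H(n, lam, M) / M
    Xi = 1.0 + lam*a**2/3.0
    return a*Xi/(G**2 + a**2) - lam*a/3.0      # Omega_tilde - Omega_infinity

lam = 1e-4
n = np.arange(1, 60)
M = np.array([mass(k, lam) for k in n])
J = J_H(n, lam, M)
ok = np.isfinite(J)                            # 'allowed' n: J_H real
dM = np.diff(M[ok])
OdJ = Omega_H(n[ok], lam, M[ok])[:-1] * np.diff(J[ok])
print(np.sqrt(np.mean((dM - OdJ)**2)))         # RMS deviation from the First Law
\end{verbatim}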
\begin{figure} \includegraphics[width=0.49\textwidth]{img/MPlot.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/JPlot.png} \caption{(Left) Plots depicting the mass, $M_{KdS}(\Lambda)$, for two different values of $\Lambda$, together with the extremal Kerr black hole mass, $M_{Kerr}$ \citep{silk}. (Right) Similar plots for the angular momentum, $J_{KdS}(\Lambda)$, of an extremal quantum KdS black hole and its Kerr analogue, $J_{Kerr}$ ($\Lambda=0$).}\label{fig:MJ} \end{figure} \subsection{\label{subsec:plots}Plots and Results} In Fig. \ref{fig:MJ}, we plot the masses and angular momenta corresponding to the KdS black hole excitation states, $n$. The orange curve denotes the extremal Kerr analogues of the respective quantities, found in \citep{silk}. For finite $\Lambda$, there is always a maximal angular momentum, after which $J_H$ decreases with increasing $n$. We observe that the mass (energy), $M$, follows the $\sqrt{n}$ behaviour for small $n$ but deviates significantly for larger $n$, exhibiting a maximum possible mass (energy state) attained at some finite $n$ when $\Lambda \neq 0$. However, for $\Lambda \rightarrow 0$, we see from the plots in Fig. \ref{fig:MJ} that $M \rightarrow \sqrt{n}$ and $J_H \rightarrow n$, as expected from their analytic expressions (\ref{angmom}) and (\ref{massexpr}). In Fig. \ref{fig:law1plots}, we plot $\delta M$ and $\Omega\delta J$ from (\ref{firstlaw}) (red and green curves respectively), along with their difference (blue curve). It can be seen that the largest deviations from (\ref{firstlaw}) occur near the boundary values of the allowed excitation ($n$) states. Finally, we calculated the Root Mean Square (RMS) deviation from the First Law, $\delta_{RMS}(\Lambda)$, and fit the $\delta_{RMS}(\Lambda)$ vs $-\log_{10}(\Lambda)$ plot in Fig. \ref{fig:fit} using a trial function of the form $f(x) = c\:10^{n x}$ (where $c$ and $n$ are curve-fit parameters). From the values obtained in the fit, one finds that $\delta_{RMS}(\Lambda)$ is of order $\mathcal{O}(\Lambda^{0.45 \pm 0.01})$.
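The fit itself can be reproduced with a standard least-squares routine. In the sketch below (assuming \texttt{scipy}; the $(x,y)$ values are placeholders for illustration, not the data used here), note that $f(x)=c\:10^{nx}$ with $x=-\log_{10}\Lambda$ corresponds to $\delta_{RMS}\sim c\,\Lambda^{-n}$, so a fitted $n\approx-0.45$ reproduces the quoted order:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def f(x, c, n):
    return c * 10.0**(n * x)    # trial function f(x) = c 10^(n x)

x = np.array([2.0, 3.0, 4.0, 5.0])              # -log10(Lambda), placeholders
y = np.array([1.2e-1, 4.1e-2, 1.3e-2, 4.2e-3])  # delta_RMS, placeholders
(c, nfit), cov = curve_fit(f, x, y, p0=(1.0, -0.5))
print(c, nfit, np.sqrt(np.diag(cov)))
\end{verbatim}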
\begin{figure} \begin{tikzpicture} \begin{customlegend}[legend columns=3,legend style={align=right,draw=none,column sep=2ex},legend entries={$\hspace{8em}\color{red}{\bullet}\quad\color{black}{\delta M}$, $\Omega\delta J$, $\delta M - \Omega\delta J$}] \csname pgfplots@addlegendimage\endcsname{red, mark=o*, only marks} \csname pgfplots@addlegendimage\endcsname{green, mark=square*, only marks} \csname pgfplots@addlegendimage\endcsname{blue, mark=diamond*, only marks} \end{customlegend} \end{tikzpicture} \includegraphics[width=0.49\textwidth]{img/e-2.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/e-3.png} \includegraphics[width=0.49\textwidth]{img/e-4.png} \hspace*{\fill} \includegraphics[width=0.49\textwidth]{img/e-5.png} \caption{Plots showing the variation of $\delta M$ and $\Omega\delta J$ with $n$, for four values of $\Lambda$. The difference is depicted by the blue curve. The maximum deviation occurs near the boundaries of the \textit{allowed} $n$.}\label{fig:law1plots} \end{figure} \begin{figure} \centering \includegraphics[width=0.75\textwidth]{img/rms.png} \caption{Best-fit curve for the trial function $f(x) = c\:10^{n x}$ with fit parameters $c$ and $n$.}\label{fig:fit} \end{figure} \section{\label{sec:results}Conclusion} In the initial sections of this paper, we investigated the particle-like picture of an extremal Kerr black hole in a de Sitter background, essentially extending the work of \citep{silk}. The novelty of this article lies in estimating the RMS deviation from the First Law, $\delta_{RMS}(\Lambda)$ (which is of order $\mathcal{O}(\Lambda^{0.45 \pm 0.01})$), due to quantization on a de Sitter background. Interestingly, this deviation vanishes both for quantum Kerr black holes ($\Lambda=0$) \citep{silk} and for the classical case in ordinary KdS spacetime \citep{zhang,sekiwa}, and is therefore consistent with previous results in the appropriate limits. From the plots, it is evident that the RMS deviation arises from the finitely many \textit{allowed} $n$ for any $\Lambda \neq 0$. To the best of our knowledge, there has been no conclusive investigation of the physical reasons behind the existence of only finitely many energy states. A fully consistent theory of quantum gravity might be able to confirm (or negate) the deviation results obtained here. Nevertheless, this is an interesting observation requiring further investigation. \bibliographystyle{elsarticle-num-names}
\section{Introduction} Game developers, publishers, and platforms are all interested to know why certain players play certain games and whether a game would be successful in the marketplace. Many game marketplaces also face the practical problem of recommending new games to players from a large catalog of possible games. Businesses are interested in these methods because good recommendations can increase sales and user engagement. The academic field of recommender systems has investigated methods that can be trained on data sets of historical game and user interactions to predict new interactions. Recommender systems therefore seem like a natural solution to the game recommendation problem: gaming facilitates large data sets of past interactions, and often additional information is available about game content and user profiles. Recommender systems have been investigated in different contexts, but the most popular algorithms can be divided into collaborative filtering (CF) and content based (CB) \cite{ref1}. Collaborative filtering is based on a data set of past player and game interactions; it does not use player or game features, because the predictions are made possible by observed correlations. For example, if players often play two games together, the games are probably similar and we can recommend one game when a player has played the other. In content based prediction, available information about games and/or players is used to enable the predictions. In a game database, games typically have tags, genres, reviews, textual information, gameplay videos, etc. that can be used to create features. Creating player features is very flexible, because one can directly ask the players to answer questions about their preferences, motivations, and gaming habits. Predictions are then based on learning how game features or player features, or both together, result in the observed game likes. So called hybrid recommenders combine the content information (i.e. game features and player features) with collaborative filtering. Academic research is often motivated by improving the accuracy of the methods, since this is an objective and easily evaluated task. However, the recommender system literature has started to recognize that accuracy may not always correlate with perceived utility \cite{ref2}. Collaborative filtering has been found to produce more accurate predictions, unless the methods are tasked to predict for players or games with few or no interactions \cite{ref3}. The setting with no interactions is known as the cold start problem, because collaborative filtering cannot predict in this setting. Research on game recommendation has evaluated methods by their predictive ability on historical player and game interactions, but in reality the problem of predicting for new games and new players is common. We therefore define four evaluation settings \cite{ref4}\cite{ref5}: predicting for past games and past players that have interactions (Setting 1), predicting for new games without players (Setting 2), predicting for new players without games (Setting 3), and predicting for new games and new players simultaneously (Setting 4). In this study, we apply new methods to game recommendation that generalize better than collaborative filtering to the different settings. The resulting methods are simple, fast, and easily interpretable.
We therefore also interpret the model parameters, and find that they provide useful game market information about player traits such as gaming motivations and gameplay preferences. The development of the methods can be motivated by the standard approach to collaborative filtering, the Singular Value Decomposition (SVD), where one or both of the latent vectors are assumed to be given. The task is then to learn the response of players or games, or their interactions, to the given features. The number of possible game and player pairs makes learning the interactions infeasible with the standard approach, so we utilize a mathematical result known as the vec-trick \cite{ref6} to train an identical model very fast. \section{Related Work} Our recommendation models produce a list of game recommendations to a player. This is known as Top-N recommendation, where the methods are evaluated on the accuracy of the predicted list of recommendations, in contrast to how well they would predict missing rating or playtime values, for example. The objective of predicting an accurate list of game recommendations is probably the most relevant real-world task of these systems, since this task has been adopted by many commercial companies. Top-N recommendation differs from the traditional task of predicting missing values. The evaluation is based on ranking accuracy metrics such as precision or recall, not on error metrics like the root mean squared error (RMSE). There is no guarantee that algorithms optimized for the RMSE are also optimal for ranking, because the task is different. Furthermore, item popularity has been found to have a large effect on the error metrics \cite{ref8}. The k Nearest Neighbour (kNN) and the Singular Value Decomposition (SVD) based matrix factorization have become standard methods for predicting missing values, but they also work for the top-N recommendation task \cite{ref3}. In the task of producing game recommendations, one should also take into account whether the information about player-game interactions is implicit or explicit. Implicit signals about liking a game are, for example, owned or played games, whereas explicit data consists of the ratings and opinions provided about the games. Implicit data is often complete, which means that every player and game pair has a value, and the task is to rank the games by the probability of liking them. In explicit data, there are typically many missing values, because a player has not rated every possible game, and the task is to predict the values of the missing ratings. For our models we technically use explicit data, because the player and game interactions are based on the favourite games mentioned in a survey. Data sets similar to our explicit survey-based data on favorite games and player preferences could be obtained without surveys, i.e. implicitly, by crawling gaming platforms for the games that users own or play. In either case, these data sets are understood as complete data for our task. This means that there are no missing values, because all player and game pairs have a value that denotes whether the player mentioned, owned or played the game. In this study, we also utilize player traits and preferences to construct player features. In game research, player preferences can be divided into four main categories: 1) player motivations \cite{x1, x2}, 2) preferred play styles \cite{x3, x4}, 3) gaming mentalities \cite{x5, x6}, and 4) gameplay type preferences \cite{x7, x8}.
Player motivations measure general reasons why players play games, whereas models that investigate play styles are typically based on player behavior data rather than survey data. Gaming mentalities refer to psychometric data on players' typical and preferred gaming intensity (e.g. casual or hardcore gaming) \cite{x9}. Of the four approaches to player preferences, gameplay type preference data is arguably the most promising for producing personalized game recommendations, because it is closely related to game features. Furthermore, it has been shown that gameplay type preferences, such as a preference for exploration, management or aggression, predict the habit of playing games of specific genres \cite{x7}. For these reasons, we make use of gameplay type preference survey data in this study. Comparing recommender systems is difficult for several reasons. The performance may depend on the data set, the prediction task, and the chosen metric \cite{ref2}. In addition, many methods are sensitive to the choice of hyperparameters and the optimization method, which means that authors of new methods may not always have used the best baselines \cite{ref10}. There can also be performance differences between different software implementations of otherwise identical methods \cite{ref11}. However, the simple baseline methods tend to produce competitive results when the hyperparameters are carefully tuned \cite{ref10}. High accuracy is often assumed to correlate with a useful recommender system, but the subjective utility of recommender systems has also become an important research goal in itself \cite{ref2}. Optimizing accuracy can lead to recommending popular items at the cost of personalized results \cite{ref8}. There are studies that have investigated the development of new recommender systems for games. The first study \cite{ref12} used probabilistic matrix factorization based collaborative filtering for the Xbox platform. The second study \cite{ref13} presented a recommender based on archetypes, where their formula (5) is a constrained case of the SVD. The study included comparisons to kNN (cosine) trained on the latent SVD factors, the standard kNN with somewhat small neighbourhood sizes, and random or most popular recommendations. The third study \cite{ref14} investigated a new case-based disability rehabilitation recommender, which is a type of content recommender. The content was based on game descriptions combined with social network information and questionnaire answers of the users. The fourth study \cite{ref15} presented a graph based approach with a biased random walk model inspired by ItemRank, which is a type of hybrid recommender. The fifth study \cite{ref16} is a kNN (cosine) recommender based on content created by latent semantic analysis of Wikipedia articles. The sixth study \cite{ref17} defined a content based recommender to find similar games through the cosine similarity of feature vectors based on GameSpot game reviews. The bag-of-words representation was outperformed by information theoretic co-clustering of adjective-context word pairs to reduce the dimensionality. The seventh study evaluated the quantitative and qualitative performance of game recommenders on the Steam data set \cite{ref18}. They used BPR++, grouped BPR, kNN (cosine) on game tag content, kNN combined with grouped BPR, a simulation of Steam's recommender, SVD, Factorization Machine (FM), and popularity ranking.
They found significant quantitative differences but no qualitative differences between the methods. \section{Data Set} \subsection{Survey data set} The survey data (N=15,894) was collected using a total of 10 standalone web-based surveys. Each of the surveys focused on different aspects of player preferences, but all surveys included open-ended questions about the respondents' favorite games. Most of the samples were collected using a UK-based crowdsourcing platform, market research companies providing large online panels, or by recruiting respondents from social media platforms such as Facebook or Reddit. The survey data includes representative samples from Finland, Denmark, the USA, Canada, and Japan. A typical survey took up to 20 minutes to complete with a computer or a mobile phone, and was targeted at everyone between the ages of 18 and 60. Another type of survey data was collected using a short online player type test. The short test was open to everyone regardless of their prior experience in playing games, or the possible lack of it. Before analyzing the individual survey data sets, the data was cleaned of participants who showed content non-responsivity by responding similarly to every question. The survey data analyzed in this study consists of 15,894 observations, 6,465 unique games, and 80,916 favorite game mentions. There are 1 to 37 favorite game mentions per player, and on average a player mentioned 5 games as his or her favorites. Every game that is mentioned as a favorite has from 1 to 1,108 individual favorite game mentions, i.e. players. The data set is very sparse, since only 0.08\% of possible (player, game)-pairs are favorite game mentions. The answers tended to be more novel and personal than data sets based on played or owned games. Content for the games was obtained by crawling game tags from publicly available sources, such as the Steam platform and the Internet Games DataBase (IGDB), the latter of which provided an API for this purpose. The presence or absence of every tag in each game was stored as a binary indicator. Content for the players was obtained by asking the survey participants' preferences using the Gameplay Activity Inventory (GAIN), which has been validated with cross-cultural data. The GAIN consists of 52 questions (5-point Likert scale, 1=very unpleasant, 5=very pleasant) about gameplay preferences, and the inventory measures five dimensions of gameplay appreciation: aggression, management, caretaking, exploration, and coordination. These questions consist of items such as 'Exploding and Destroying' (factor: aggression), 'Searching for and collecting rare treasures' (factor: exploration), 'Flirting, seducing and romantic dating' (factor: caretaking), 'Matching tiles together' (factor: coordination), and 'Generating resources such as energy or money' (factor: management) (Vahlo et al. 2018). In addition to the GAIN model, we also utilized a 9-item inventory on players' game challenge preferences. These items measure how pleasurable players consider e.g. 'logical problem-solving', 'strategic thinking', 'puzzle solving', 'racing against time', etc. A typical survey included the full 52 plus 9 questions, whereas the online player type test consisted of a partly randomized sample of these questions. \section{Models} \subsection{Data set and validation} Assume we have $n$ players and $m$ games. Denote player $i\in\{1,2,...,n\}$ and game $j\in\{1,2,...,m\}$.
The player and game interactions are stored in an $n \times m$ binary game like matrix $R_{i,j}=\mathbb{I}(\text{player }i\text{ likes game }j)$. The matrix does not have missing entries, because the game like status is known for every player. For example, player $i$ may have answered the first and the third game as their favourites: \begin{equation} \begin{array}{c} R_{i,:} = (1, 0, 1, 0, ..., 0) \end{array} \end{equation} The task is to predict the ranking of games that the player has not answered as their favourite but might like. These predictions are the matrix $R_{i,j}^{*}\in\mathbb{R}$, where only the order of the values in each row matters for ranking. For example, the predictions for player $i$ over all the $m$ games could be: \begin{equation} \begin{array}{c} R_{i,:}^{*} = (1.41, 0.10, 0.82, 0.04, ..., 0.21) \end{array} \end{equation} The recommendation list for a player is obtained by taking the indices of the games with the $k$ largest predicted values in $R_{i,:}^{*}$, where the games that the player has already liked are excluded. In addition to game likes, we also have player and game features that we can use for content based prediction. Denote the $m \times r$ matrix of game features as $X_{\text{tags}}$, where the feature vectors for the $m$ games are stored as rows. In our case these features are indicators of the presence or absence of each of the $r$ game tags. Denote the $n \times s$ matrix of player features as $X_{\text{questions}}$, where the feature vectors for the $n$ players are stored as rows. In our case these features are the responses to the $s$ questions on a Likert scale of -2,-1,0,1,2. To test model performance, we split the data set into training and validation sets as follows. First, we randomly sampled 25\% of games into 'test games' and 25\% of players into 'test players' that the models do not see during training. These games and players test the performance of the models for new games and players. The other games and players belong to the training set. In Setting 1, the models are tested on the task of recommending known games to a known player who has mentioned 3 favourite games. We therefore further selected 20\% of the training set players by randomly sampling amongst those who have liked more than 3 games. Three randomly chosen games of each such player are the 'seed', which belongs to the training set, and the remaining games of these players belong to the validation set for Setting 1. The Setting 2 (new games) validation set consists of the unseen 'test games' for the known players. In Setting 3 (new players), the validation set consists of the unseen 'test players' for the known games. In Setting 4, the validation set consists of the game likes for the unseen 'test games' and 'test players'. We have illustrated the game like matrix $R$, the game features $X_{\text{tags}}$, the player features $X_{\text{questions}}$, and the different validation settings in Figure~\ref{figure:data}. \begin{figure}[htbp] \centerline{\includegraphics[width=\columnwidth]{illustration_validation.eps}} \caption{Illustration of the data set and validation settings.} \label{figure:data} \end{figure} \subsection{Metrics} We use Precision@20 and nDCG@m to measure accuracy in the validation sets. They are defined as follows. \subsubsection{Precision@20} The Precision@k metric counts the number of games the player has liked in the validation set, as a fraction of all games in a recommendation list of length $k$.
Assume the model predicts $R_{i,:}^{*}$ for player $i$, and denote $r^{(i)}$ as the vector of indices that sorts the predictions. The element $r^{(i)}_{j}$ is the $j$'th game in the recommendation list, i.e. the index of the $j$'th largest predicted value in $R_{i,:}^{*}$. The metric for the data set is the average precision over the players: \begin{equation} \begin{array}{c} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{k}\sum_{j=1}^{k} \mathbb{I}(\text{player } i \text{ likes game } r^{(i)}_{j}) \end{array} \end{equation} Precision is a realistic measure of the real-world accuracy of a recommendation list, where $k$ is typically small and the position of a game in the list does not matter. \subsubsection{nDCG@k} The normalized Discounted Cumulative Gain (nDCG@k) metric measures the positions of liked games in the recommendation list. When a player has liked a game, its position in the player's recommendation list is rewarded by the inverse of its logarithmic rank. These are called the discounted cumulative gains. In the optimal ranking, the liked games are ranked at the top of the recommendation list, and with a total of $k_i=|\{j : R_{i,j}^{\text{validation}} = 1\}|$ liked games, the discounted cumulative gain attains the value $\text{IDCG}_i=\sum_{j=1}^{\text{min}(k_i,k)}1/\text{log}_2(j+1)$. The nDCG@k is the discounted cumulative gain in the recommendation list $r^{(i)}$ of length $k$, normalized by the maximum attainable value $\text{IDCG}_i$. The metric for the data set is the average over the players: \begin{equation} \begin{array}{c} \frac{1}{n}\sum_{i=1}^{n}\frac{1}{\text{IDCG}_i}\sum_{j=1}^{k} \frac{\mathbb{I}(\text{player } i \text{ likes game } r^{(i)}_{j})}{\text{log}_2(j+1)} \end{array} \end{equation} Because Precision@20 measures the top recommendations, we used nDCG@m to measure the overall ranking.
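Both metrics are straightforward to compute from the prediction matrix. The following minimal sketch (our own illustration, assuming \texttt{numpy}; filtering out already-liked training games is omitted for brevity) evaluates Precision@k and nDCG@k against a binary validation matrix:
\begin{verbatim}
import numpy as np

def precision_at_k(scores, liked, k=20):
    # scores: (n, m) predictions R*; liked: (n, m) binary validation matrix
    top = np.argsort(-scores, axis=1)[:, :k]        # recommendation lists
    hits = np.take_along_axis(liked, top, axis=1)   # liked indicators
    return hits.mean()                              # average over players

def ndcg_at_k(scores, liked, k):
    top = np.argsort(-scores, axis=1)[:, :k]
    gains = np.take_along_axis(liked, top, axis=1)
    disc = 1.0 / np.log2(np.arange(2, k + 2))       # 1/log2(j+1), j = 1..k
    dcg = (gains * disc).sum(axis=1)
    idcg = np.array([disc[:min(int(ki), k)].sum()
                     for ki in liked.sum(axis=1)])  # ideal DCG per player
    return np.mean(dcg / np.maximum(idcg, 1e-12))
\end{verbatim}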
\subsection{Models} \subsubsection{Multivariate Normal Distribution (MVN)} First we present a simple new collaborative filtering model, which has competitive accuracy but a simpler interpretation. This model allows us to explain the recommendations through an explicit correlation matrix between games. The model also allows us to completely remove the influence of game popularity, to investigate its effect. Assume that every row of the player like matrix is a sample from a multivariate normal distribution: $R_{i,:} \sim\mathcal{N}(\mu,\Sigma)$ with mean vector $\mu\in\mathbb{R}^{m}$ and covariance matrix $\Sigma\in\mathbb{R}^{m \times m}$. This model has a closed form solution for the distribution parameters, because their maximum likelihood estimates are the sample mean vector $\mu_j = \frac{1}{n}\sum_{i} R_{i,j}$ and the sample covariance matrix $\Sigma_{i,j} = \frac{1}{n}\sum_{s}(R_{s,i}-\mu_i)(R_{s,j}-\mu_j)$. At prediction time, we assume that the values of the liked games are known to be one, but the values for the other games are missing. Denote the indices of liked games as $\mathcal{I}$ and the indices of the other games as $\mathcal{J}$, so that $\mathcal{I}\cup\mathcal{J}=\{1,2,...,m\}$. We use indexing $X_{\mathcal{J},\mathcal{I}}$ to denote the submatrix with rows from $\mathcal{J}$ and columns from $\mathcal{I}$, for example. The predictions for the missing game likes are then given by the expectation of the conditional distribution $R_{i,\mathcal{J}}^{*} = \mathrm{E}(R_{i,\mathcal{J}}|(R_{i,j}=1)_{j\in\mathcal{I}})$. This can be shown to equal: \begin{equation} \begin{array}{c} R_{i,\mathcal{J}}^{*} = \mu_{\mathcal{J}} + \Sigma_{\mathcal{J},\mathcal{I}}(\Sigma_{\mathcal{I},\mathcal{I}})^{-1}(\overline{1}-\mu_{\mathcal{I}}) \end{array} \end{equation} To predict without game popularity, we remove the mean vectors from the formula and substitute the sample covariance matrix with the sample correlation matrix. This is equivalent to mean centering and then normalizing the game like matrix column-wise before applying the model.
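A direct implementation of the MVN predictor is compact. The sketch below (our own illustration, assuming \texttt{numpy}; in practice a small ridge term may be added to $\Sigma_{\mathcal{I},\mathcal{I}}$ for numerical stability) scores the unseen games for one player:
\begin{verbatim}
import numpy as np

def mvn_fit(R):
    mu = R.mean(axis=0)                         # game popularity
    Sigma = np.cov(R, rowvar=False, bias=True)  # 1/n sample covariance
    return mu, Sigma

def mvn_predict(mu, Sigma, liked):
    # liked: indices I of the player's liked games
    m = len(mu)
    J = np.setdiff1d(np.arange(m), liked)       # games to score
    S_JI = Sigma[np.ix_(J, liked)]
    S_II = Sigma[np.ix_(liked, liked)]
    # E(R_J | R_I = 1) = mu_J + S_JI S_II^{-1} (1 - mu_I)
    scores = mu[J] + S_JI @ np.linalg.solve(S_II, 1.0 - mu[liked])
    return J, scores                            # rank J by descending scores
\end{verbatim}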
\subsubsection{k Nearest Neighbour (kNN)} The kNN is a simple recommendation method. Assume we have calculated the similarity between any two games in an $m \times m$ matrix $S_{ij}$. The rating prediction for game $j$ considers the $k$ most similar games in the game like matrix for player $i$. Denote this set of most similar games $D^{k}(i,j)$. The prediction for a player is the similarity weighted average of the player's like status over the $k$ most similar games: $R_{i,j}^{*} = \sum_{s\in D^{k}(i,j)} S_{j,s} R_{i,s}/\sum_{s\in D^{k}(i,j)} S_{j,s}$. We evaluated the kNN with neighbourhood sizes of $k=1,2,4,8,...,m$, but we always obtained the best results with the maximum neighbourhood size of $k=m$. As others have pointed out \cite{ref8}, the normalizing denominator is not necessary for the ranking task, and we in fact obtained better predictions without it. We therefore predict simply by: \begin{equation} \label{similarity} \begin{array}{c} R_{i,j}^{*} = \sum_{s\in D^{k}(i,j)} S_{j,s} R_{i,s} \end{array} \end{equation} We define the similarity function as either the cosine (cos) or the Pearson correlation (phi), where $\mu_j=\sum_{i} R_{i,j}/n$ is the column mean of the game like matrix: \begin{equation} \begin{array}{c} S_{i,j}^{\text{cos}}=\frac{\sum_{s}R_{s,i}R_{s,j}}{\sqrt{\sum_{s}R_{s,i}^2}\sqrt{\sum_{s}R_{s,j}^2}} \end{array} \end{equation} \begin{equation} \begin{array}{c} S_{i,j}^{\text{phi}}=\frac{\sum_{s}(R_{s,i}-\mu_i)(R_{s,j}-\mu_j)}{\sqrt{\sum_{s}(R_{s,i}-\mu_i)^2}\sqrt{\sum_{s}(R_{s,j}-\mu_j)^2}} \end{array} \end{equation} \begin{figure*} \includegraphics[width=\textwidth]{illustration_models.eps} \caption{Illustration of the SVD and content models. Models can only generalize to settings (light green) for which they learn representations (light blue).} \label{fig:models} \end{figure*} \subsubsection{Singular Value Decomposition (SVD)} The SVD of dimension $k$ is defined in terms of an $n \times k$ matrix $P$ with latent player factors as rows and an $m \times k$ matrix $G$ with latent game factors as rows. It is a type of regularized matrix factorization, because the predicted game like matrix is the product $R^{*}=PG^T$ of the factor matrices. A prediction for player $i$ and game $j$ is simply the product of the latent player vector $P_{i,:}\in\mathbb{R}^k$ and the latent game vector $G_{j,:}\in\mathbb{R}^k$. These latent vectors are initially unknown. We evaluated different choices of the dimension $k=4,8,16,32,64,128$ and the regularization parameter $\lambda$. For the SVD implementation in the Surprise library, a Python package for recommender systems, we found that a grid of $\lambda=1,2,4,8,16,32,64,128,256,512$ produced a concave maximum between the values. We call the choice $\lambda=0$ PureSVD, because it is then possible to use a standard singular value decomposition. The SVD was sensitive to regularization, but when the regularization choice was optimal we obtained almost identical results for dimensions $k\geq32$. The model parameters are found by minimizing the RMSE between the observed and the predicted game likes: \begin{equation} \begin{array}{c} P, G = \text{argmin}_{P,G} \|R-PG^T\|^2_F + \lambda \|P\|^2_F + \lambda \|G\|^2_F \end{array} \end{equation} where $\|X\|_F$ denotes the Frobenius norm, i.e. the square root of the sum of squares of every element in the matrix $X$. To find the parameters, one approach is to use Alternating Least Squares (ALS) optimization \cite{ref3}. In this method, either the latent game vectors $G$ or the latent player vectors $P$ are assumed to be fixed, and the optimal solution for the other is found. Because in this case every row independently minimizes the squared error associated with that row, we can solve for each row with standard ridge regression: \begin{equation} \begin{array}{c} P_{i,:} = (G^T G + \lambda I)^{-1} G^T R_{i,:}^T \end{array} \end{equation} \begin{equation} \begin{array}{c} G_{j,:} = (P^T P + \lambda I)^{-1} P^T R_{:,j} \end{array} \end{equation} The optimization starts by initializing $P$ and $G$ with random values. We iterate between fixing $G$ to find the optimal $P$, and fixing the resulting $P$ to find the optimal $G$. This is repeated until convergence. \subsubsection{Tags} The first content model is based on game features; we call it the 'Tags' model because our game features are based on game tags. We assume that each player has some interaction strength with each game feature. These interaction strengths are described by a player-specific vector of length $r$. Collect these vectors as rows of the $n \times r$ model parameter matrix $T$, which needs to be learned from data. A given player may for example answer that they like 'Candy Crush' and 'Tetris', which implies that the player interacts positively with the game tags 'puzzle' and 'tile-matching'. We predict the game likes as a product of the game features $X_{\text{tags}}$ and the player interaction strengths $T$: $R^{*}=TX_{\text{tags}}^T$. To find the parameters, we minimize the RMSE between the observed and the predicted game likes: \begin{equation} \begin{array}{c} T = \text{argmin}_{T} \|R- TX_{\text{tags}}^T\|^2_F + \lambda \|T\|^2_F \end{array} \end{equation} Every row $T_{i,:}$ in fact independently minimizes the squared error associated with that row, so the model can be fitted separately for every row with standard ridge regression: \begin{equation} \begin{array}{c} T_{i,:} = (X_{\text{tags}}^T X_{\text{tags}} + \lambda I)^{-1} X_{\text{tags}}^T R_{i,:}^T \end{array} \end{equation}
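Since the Gram matrix $X_{\text{tags}}^T X_{\text{tags}}$ is shared by all players, all $n$ ridge problems can be solved with a single factorization. A minimal sketch (our own illustration, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

def fit_tags(R, X_tags, lam):
    # Row-wise ridge: T_i = (X^T X + lam I)^{-1} X^T R_i^T, solved jointly
    r = X_tags.shape[1]
    A = X_tags.T @ X_tags + lam * np.eye(r)   # (r, r), shared Gram matrix
    T = np.linalg.solve(A, X_tags.T @ R.T).T  # (n, r) player responses
    return T

def predict_tags(T, X_tags):
    return T @ X_tags.T                       # R* = T X_tags^T
\end{verbatim}
The Questions model below is fitted in exactly the same way, with the roles of players and games exchanged.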
\subsubsection{Questions} The second content model is based on player features; we call it the 'Questions' model because our player features are based on a questionnaire about gaming preferences. We assume that each game has an interaction strength with each player feature. These interaction strengths are described by a game-specific vector of length $s$. Collect these vectors as rows of the $m \times s$ model parameter matrix $Q$, which needs to be learned from data. A given game, 'Candy Crush', may for example be liked by players that have stated a preference for 'logical challenges' and 'racing against time'. We predict the game likes as a product of the player features $X_{\text{questions}}$ and the game interaction strengths $Q$: $R^{*}=X_{\text{questions}} Q^T$. To find the parameters, we minimize the RMSE between the observed and the predicted game likes: \begin{equation} \begin{array}{c} Q = \text{argmin}_{Q} \|R- X_{\text{questions}} Q^T\|^2_F + \lambda \|Q\|^2_F \end{array} \end{equation} Again, every row $Q_{j,:}$ independently minimizes the squared error associated with that row, so the model can be fitted separately for every row of $Q$ with standard ridge regression: \begin{equation} \begin{array}{c} Q_{j,:} = (X_{\text{questions}}^T X_{\text{questions}} + \lambda I)^{-1} X_{\text{questions}}^T R_{:,j} \end{array} \end{equation} \subsubsection{Tags X Questions} The final content model is based on both game and player features; we call it the 'Tags x Questions' model because it is based on interactions between the game tags and the questionnaire answers. To model the game likes between every player $i$ and every game $j$, we assume that each (player, game)-pair is described by a feature vector. The feature vector for the pair is the tensor product of the player's features and the game's features. Given the $n \times s$ player feature matrix $X_{\text{questions}}$ and the $m \times r$ game feature matrix $X_{\text{tags}}$, the pair features can be represented as an $nm \times sr$ feature matrix $X_{\text{questions}} \otimes X_{\text{tags}}$. The logic behind this representation implies that the game likes are simply the sum of interaction strengths between player and game features, where the $r \times s$ interaction strength matrix $A$ needs to be learned from data. A given player may for example answer that they like 'logical challenges' and 'racing against time', and the data implies that these answers interact positively with the game tags 'puzzle' and 'tile-matching'. Denote the columnwise stacking of the interaction strength matrix as $\text{vec}(A)$, and the columnwise stacking of the $n \times m$ game like matrix as $\text{vec}(R^{*})$. Further denote the feature matrix as $X_{\text{interactions}}=X_{\text{questions}} \otimes X_{\text{tags}}$. The response is predicted as the sum of the player feature and game feature interactions: $\text{vec}(R^{*})=X_{\text{interactions}}\text{vec}(A)$. To find the model parameters, we minimize the RMSE between the observed and the predicted game likes: \begin{equation} \begin{array}{c} A = \text{argmin}_{A} \|\text{vec}(R)-X_{\text{interactions}} \text{vec}(A)\|^2_F + \lambda \|A\|^2_F \end{array} \end{equation} There is a mathematical optimization shortcut known as the vec-trick \cite{ref6}, which makes the minimization problem computationally tractable in the special case that the feature matrix is a Kronecker product.
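The identity underlying the vec-trick is $(X_{\text{questions}} \otimes X_{\text{tags}})\,\text{vec}(A) = \text{vec}(X_{\text{tags}}\, A\, X_{\text{questions}}^T)$, which avoids ever forming the $nm \times sr$ matrix. A minimal numerical check (our own illustration with random data, assuming \texttt{numpy}):
\begin{verbatim}
import numpy as np

n, m, s, r = 50, 40, 6, 8
Xq = np.random.randn(n, s)      # player features
Xt = np.random.randn(m, r)      # game features
A = np.random.randn(r, s)       # interaction strengths

# vec() = column-wise stacking = reshape with Fortran order
lhs = np.kron(Xq, Xt) @ A.reshape(-1, order='F')
rhs = (Xt @ A @ Xq.T).reshape(-1, order='F')
print(np.allclose(lhs, rhs))    # True
\end{verbatim}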
We use the Python software package RLScore\footnote{http://staff.cs.utu.fi/~aatapa/software/RLScore}, which implements this shortcut to obtain an exact closed form solution to the ridge regression problem: \begin{equation} \begin{array}{c} \text{vec}(A) = (X_{\text{interactions}}^T X_{\text{interactions}} + \lambda I)^{-1} X_{\text{interactions}}^T \text{vec}(R) \end{array} \end{equation} \section{Results} \subsection{Model accuracy} \begin{table}[htbp] \caption{Ranking Accuracy by nDCG@m (\%)} \label{nDCG} \begin{center} \begin{tabular}{|r|r|r|r|r|} \hline \textbf{Model} & \textbf{Setting 1} & \textbf{Setting 2} & \textbf{Setting 3} & \textbf{Setting 4} \\ \hline Random & 13.9 & 13.2 & 14.6 & 13.4 \\ \hline MVN & \textbf{31.9} & 13.2 & 14.6 & 13.4 \\ \hline kNN (cos) & 30.0 & 13.2 & 14.6 & 13.4 \\ \hline kNN (phi) & 27.4 & 13.2 & 14.6 & 13.4 \\ \hline PureSVD & 28.4 & 13.2 & 14.6 & 13.4 \\ \hline SVD & 30.9 & 13.2 & 14.6 & 13.4 \\ \hline Tags & 23.4 & \textbf{22.3} & 14.6 & 13.4 \\ \hline Questions & 26.9 & 13.2 & \textbf{32.2} & 13.4 \\ \hline Tags X Questions & 23.2 & 19.9 & 24.3 & \textbf{20.1} \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \caption{Recommendation List Accuracy by Precision@20 (\%)} \label{Precision} \begin{center} \begin{tabular}{|r|r|r|r|r|} \hline \textbf{Model} & \textbf{Setting 1} & \textbf{Setting 2} & \textbf{Setting 3} & \textbf{Setting 4} \\ \hline Random & 0.1 & 0.1 & 0.1 & 0.1 \\ \hline MVN & \textbf{5.0} & 0.1 & 0.1 & 0.1 \\ \hline kNN (cos) & 3.9 & 0.1 & 0.1 & 0.1 \\ \hline kNN (phi) & 3.2 & 0.1 & 0.1 & 0.1 \\ \hline PureSVD & 3.8 & 0.1 & 0.1 & 0.1 \\ \hline SVD & 4.3 & 0.1 & 0.1 & 0.1 \\ \hline Tags & 2.3 & \textbf{1.7} & 0.1 & 0.1 \\ \hline Questions & 3.4 & 0.1 & \textbf{4.3} & 0.1 \\ \hline Tags X Questions & 2.4 & 1.1 & 2.7 & \textbf{1.2} \\ \hline \end{tabular} \end{center} \end{table} We report the accuracy of the different models using nDCG@m in Table~\ref{nDCG} and Precision@20 in Table~\ref{Precision}. The metrics take values between 0 and 100\%, yet it is challenging to interpret the quality of the recommendations from their absolute values. We can use the accuracy metrics to compare and improve the models, but the results need to be verified qualitatively in practice. Comparing the models by accuracy tells a clear-cut story, and the two metrics support the same conclusions. In Setting 1, the collaborative filtering methods outperform the content based approaches, with the MVN model delivering the best results. In Setting 2 we generalize to new games, and only the Tags and Tags X Questions models are able to generalize. The Tags model has a greater degree of freedom, and it performs better in this setting. In Setting 3 we generalize to new players, and only the Questions and Tags x Questions models are able to generalize. Again, the Questions model has more freedom to fit the data and performs better in this setting. For the last setting, only the Tags X Questions model, which uses both feature sets, is able to make useful predictions. The mathematics underlying the generalization ability is illustrated in Figure~\ref{fig:models}. Predictions can only be made for player and game pairs where both the games and the players have representations that have been learned from the training set. However, when the representations can be learned for the setting, it is useful to use more flexible models.
There is therefore an important trade-off between using the provided features to generalize better and learning latent features for better performance within the training set games and players. \subsection{Model interpretation} \begin{figure*} \includegraphics[width=\textwidth]{illustration_interpretation.eps} \caption{Example of model interpretation: correlated games (MVN), tag responses (Tags), question responses (Questions) and interactions (Tags x Questions)} \label{fig:interpretation} \end{figure*} Because the models are simple linear models, we can interpret the model coefficients. The coefficients provide an explanation of why certain players are predicted to like certain games. This information can be useful for game developers or publishers that seek to understand the gaming market, and it can be used to tune the model towards qualitatively better predictions. Recalling that the MVN model predictions are based on the game mean vector $\mu$ and the covariance matrix $\Sigma$, we interpret how collaborative filtering arrives at the predictions. The mean vector is simply the popularity of each game, calculated as the fraction of players that play each game. This is the starting point of the predictions, which are then adjusted based on the strength of the positive or negative correlations to the games the player has played. For example, in Figure~\ref{fig:interpretation}, we provide 4 example games and their 4 most correlated games. These correlations are very believable, and these are the games whose play probability increases the most when a player mentions the game in question. The exact calculation of the probability is based on the assumption of a normal distribution which, while incorrect, seems to work well in practice. The Tags model is based on inferring player-specific response vectors $T$ to the game tags, based on the games the player has liked. Fitting the model therefore produces, for each of the $n$ players, a vector of $r$ interaction strengths to the game tags. In Figure~\ref{fig:interpretation} we illustrate 4 example players, each with 3 liked games, and the 4 tags with the strongest implied interactions. It seems that the model is able to correctly learn the content of the liked games and describe the player in terms of their tag interactions. At prediction time, the games with these tags are predicted to be played the most by the player. The model can therefore generalize to new games by predicting games that have similar tag content. The Questions model is based on inferring game-specific response vectors $Q$ to the player preferences, based on the players that have liked the game. Fitting the model therefore produces, for each of the $m$ games, a vector of $s$ interaction strengths to the questionnaire answers. Figure~\ref{fig:interpretation} illustrates 4 example games and the 4 answers with the strongest implied interactions. The model is able to correctly learn the content of these games from the types of players that play the game, based on their questionnaire answers. At prediction time, the players with these answers are predicted to play the game. This model can therefore generalize to new players by predicting for players that have similar preferences. The Tags X Questions model infers the interaction strengths $A$ between all player questionnaire answers and game tags, based on every player and game pair. Fitting the model produces a matrix of $r \times s$ interaction strengths, where each answer and tag pair has its own value.
Figure~\ref{fig:interpretation} illustrates 4 example tags and the 4 answers with the strongest implied interactions. These interactions are very logical and similar to what one might define manually: the Puzzle tag, for example, interacts with the answer of liking 'Challenges of crosswords and other word puzzles'. With 61 possible answers and 379 game tags, however, manually defining and tuning 23,119 parameters is infeasible. At prediction time, the players whose preferences best match the game tags are predicted to play the game. This model can therefore generalize to both new players and new games simultaneously. \subsection{Model Recommendations} Finally, we illustrate the model recommendations in Table~\ref{TopRecommendations}. The MVN model produces recommendations which are close to the games the player has played, and the Tags model provides close matches in terms of game genres. The Questions and Tags x Questions models rely on rather ambiguous question answers and, as a result, provide quite generic recommendations. However, their answers still seem better than random. The Tags model seems better than the Questions model, even though the accuracy metrics suggest the opposite conclusion. There is a significant popularity bias visible in the Questions and Tags x Questions models, and to some extent in the MVN model. The effect of popularity can be removed from the MVN model as described earlier, and a similar trick, sketched below, can be used with the other models. First, normalize the game like matrix $R$ column-wise: subtract the mean vector $\mu$ and divide by the standard deviation $\sigma=\sqrt{\mu(1-\mu)}$ to produce a matrix of standardized deviations from baseline popularity. Second, more popular games tend to have more tags provided, so we produce a more egalitarian 'game profile' vector by projecting $X_{\text{Tags}}$ with PCA into a lower dimensional space and normalizing the resulting vector. We found that 16 dimensions worked qualitatively well. We skip reporting these results because they produced worse accuracy on the metrics, even though they virtually eliminated the effect of recommending popular but unrelated games.
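The normalization can be implemented in a few lines. A minimal sketch (our own illustration, assuming \texttt{numpy} and \texttt{scikit-learn}; the choice of 16 components follows the text above):
\begin{verbatim}
import numpy as np
from sklearn.decomposition import PCA

def depopularize(R, X_tags, dim=16):
    mu = R.mean(axis=0)
    sd = np.sqrt(mu * (1.0 - mu))                   # Bernoulli std per game
    R_std = (R - mu) / np.maximum(sd, 1e-12)        # standardized deviations

    profiles = PCA(n_components=dim).fit_transform(X_tags)
    norms = np.linalg.norm(profiles, axis=1, keepdims=True)
    profiles = profiles / np.maximum(norms, 1e-12)  # unit 'game profiles'
    return R_std, profiles
\end{verbatim}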
\begin{table*}[htbp] \caption{Example Players and Top 5 Game Recommendations from Different Models} \label{TopRecommendations} \begin{center} \begin{tabular}{|p{8mm}|p{30mm}|p{23mm}|p{23mm}|p{23mm}|p{23mm}|p{23mm}|} \hline \textbf{Player} & \textbf{Questions Liked} & \textbf{Games Liked} & \textbf{MVN} & \textbf{Tags} & \textbf{Questions} & \textbf{Tags X Questions} \\ \hline 93519 & Engaging in battle, Weapons and skills selection for characters, Searching and collecting rare treasures & Child of Light, Dungeon Master, Shin Megami Tensei: Persona 3 & Persona 5, Xenogears, Shin Megami Tensei: Persona 4, Chrono Cross, Bravely Default & Costume Quest, Ori and the Blind Forest, Abyss Odyssey, Fortune Summoners, Bahamut Lagoon & World of Warcraft, The Witcher 3: Wild Hunt, Diablo, The Elder Scrolls V: Skyrim, Overwatch & Fallout 4, Mass Effect 2, Fallout 3, Fallout: New Vegas, Warframe \\ \hline 93520 & Piloting and steering vehicles, Racing in a high speed, Challenges of tactics & Project CARS, Gran Turismo 5, Forza Horizon 2 & Grand Theft Auto V, Pokémon GO, Forza Motorsport 6, Gran Turismo 6, Hill Climb Racing 2 & DiRT 4, Gran Turismo 2, Gran Turismo (PSP), Forza Motorsport 4, Forza Motorsport 2 & Call of Duty, Grand Theft Auto, Clash of Clans, Angry Birds, Battlefield & StarCraft: Brood War, StarCraft, StarCraft II: Legacy of the Void, Doom II RPG, Call of Duty \\ \hline 93521 & Running in a fast speed while avoiding obstacles, Developing skills and abilities, Challenges of fast reaction & Shovel Knight, Super Mario 3D World, The Legend of Zelda: Ocarina of Time & The Legend of Zelda: Breath of the Wild, Super Mario Galaxy, Super Mario 64, The Legend of Zelda: The Wind Waker, The Legend of Zelda: A Link to the Past & The Legend of Zelda: Twilight Princess, Rogue Legacy, Assassin's Creed IV: Black Flag, The Legend of Zelda: A Link to the Past, Power Stone 2 & TETRIS, League of Legends, Call of Duty, Crash Bandicoot, Minecraft & StarCraft, Tomb Raider, Dota 2, StarCraft: Brood War, Counter-Strike: Global Offensive \\ \hline 93522 & Hugging, kissing and making out, Investigating the story and its mysteries, Challenges of logical problem-solving & Heavy Rain, Steins;Gate, Life Is Strange & The Last of Us, Pokémon GO, The Witcher 3: Wild Hunt, World of Warcraft, TETRIS & Beyond: Two Souls, The Wolf Among Us, Zero Escape: Zero Time Dilemma, Persona 4 Golden, Alan Wake & Sudoku, The Sims, Angry Birds, Pokémon GO, Counter-Strike: Global Offensive & Mass Effect 2, Fallout 3, The Elder Scrolls IV: Oblivion, The Elder Scrolls V: Skyrim, Fallout: New Vegas \\ \hline 93523 & Decorating rooms and houses, Hugging, kissing and making out, Challenges of logical problem-solving & Cities: Skylines, Overcooked, The Sims 2 & The Sims 3, The Sims, Civilization V, The Sims 4, The Elder Scrolls V: Skyrim & Train Valley, Prison Architect, The Sims, RollerCoaster Tycoon 3: Platinum, Tropico 4 & Sudoku, The Sims, TETRIS, Gardenscapes, World of Warcraft & Warframe, The Elder Scrolls IV: Oblivion, Dragon's Dogma: Dark Arisen, The Sims, Stardew Valley \\ \hline \end{tabular} \end{center} \end{table*} \section{Conclusion} Research in game recommendation has focused on the prediction of missing game likes in data sets of historical player and game interactions. However, many practical tasks require predictions for completely new games and players.
Collaborative filtering has been found to be a useful model in the traditional setting, but for games or players with few or no interactions different models are required. We presented the content based Tags, Questions, and Tags X Questions models that generalize to new games, new players, or both simultaneously. These methods are inspired by the Singular Value Decomposition (SVD), a standard approach in collaborative filtering, where one or both of the feature vectors are assumed to be given. The optimization corresponds to a linear model with computational shortcuts, which makes the methods fast and easily interpretable. We compared the following models: a random baseline, Multivariate Normal Distribution (MVN), k Nearest Neighbour (kNN) with cosine or Pearson similarity, PureSVD, SVD, Tags, Questions, and Tags X Questions. We evaluated the performance with the nDCG@m and Precision@20 metrics within known games and players (Setting 1), new games (Setting 2), new players (Setting 3), and both new games and new players (Setting 4). We found that each content based model performed best in the setting for which it was designed, and that restricting the models to use the provided features instead of learning latent features is useful in terms of generalization ability but involves a trade-off in terms of accuracy. Each model can therefore be useful depending on the setting. Finally, we note that accuracy does not tell the full story, because the qualitative results do not seem to correlate perfectly with accuracy.
\chapter*{\centering Abstract} {\setstretch{1.1} In a Quantum Field Theory, the analytic structure of the 2-point correlation functions, \ie\ the propagators, encodes information about the properties of the corresponding quanta, in particular whether or not they are confined. However, in Quantum Chromodynamics (QCD), an analytic solution can only be obtained in a perturbative picture of the theory. For the non-perturbative propagators, one resorts to numerical solutions of QCD that access specific regions of the Euclidean momentum space, as, for example, those computed via Monte Carlo simulations on the lattice. In the present work, we rely on Padé Approximants (PA) to approximate the numerical data for the gluon and ghost propagators, and investigate their analytic structures. In a first stage, the advantages of using PAs are explored when reproducing the properties of a function, focusing on its analytic structure. The use of PA sequences is tested on the perturbative solutions of the propagators, and a residue analysis is performed to help in the identification of the analytic structure. A technique to approximate a discrete set of points by a PA is proposed and tested on several test data sets. Finally, the methodology is applied to the Landau gauge gluon and ghost propagators, obtained via lattice simulations. The results identify a conjugate pair of complex poles for the gluon propagator, which is associated with the infrared structure of the theory. This is in line with the presence of singularities at complex momenta in theories where confinement is observed. Regarding the ghost propagator, a pole at $p^2=0$ is identified. For both propagators, a branch cut is found on the negative real $p^2$-axis, which recovers the perturbative analysis at high momenta. \vspace{2em} \noindent \textbf{Keywords:} Analytic structure, Padé Approximant, Gluon propagator, Ghost propagator, Lattice QCD. } \newpage \blankpage{} \newpage \chapter*{\centering Resumo} {\setstretch{1.1} In a Quantum Field Theory, the analytic structure of the 2-point correlation functions, \ie, the propagators, contains a variety of information about the properties of the quanta of the theory, in particular whether or not they are confined. However, in Quantum Chromodynamics (QCD), an analytic solution is only possible in a perturbative picture of the theory. The propagators can be obtained non-perturbatively by resorting to numerical solutions of QCD for momenta defined in Euclidean space. These solutions can be obtained, for example, through Monte Carlo simulations on the lattice. In this work we rely on Padé Approximants (PA) to analyse the gluon and ghost propagators thus obtained in the Landau gauge, and we investigate their analytic structure. In a first stage, the advantages of using PAs to reproduce the properties of a function, in particular its analytic structure, are explored. The use of PA sequences is tested on the non-perturbative solutions of the propagators, with a residue analysis performed as an aid to the identification of the analytic structure. A new technique to approximate a discrete set of points by a PA is also proposed and tested, and finally applied to the gluon and ghost propagators coming from lattice simulations.
A conjugate pair of complex poles, associated with the infrared structure of the theory, is identified in the gluon propagator, in agreement with the presence of singularities at complex momenta in theories where confinement is observed. As for the ghost propagator, a pole at $p^2=0$ is identified. For both propagators, a branch cut is identified on the negative real $p^2$-axis, thereby recovering the perturbative analysis at high momenta. \vspace{1em} \noindent \textbf{Keywords:} Analytic structure, Padé Approximant, Gluon propagator, Ghost propagator, Lattice QCD. } \newpage \blankpage{} \newpage \chapter*{\centering Acknowledgements} First and foremost, I wish to show my deepest gratitude to my supervisor, Professor Orlando Oliveira, for all the time devoted, his endless availability, and his wise advice. I would also like to thank his research group, for having received and accompanied me over the last year. A wholehearted thanks to my fellow colleagues, especially Guilherme and Maria, whose company is so special. Finally, to Rúben, who read my thesis more times than I did myself, for being messy, but kind, and for being with me most of the time; to my family, who will always be there to catch me if I should fall; to my friends, with whom I share the maxima and minima of my life; and to Dirac and Duna, for having guided me through every single day of the past treacherous year, my biggest thank you. \vspace{1em} This work was supported with funds from the Portuguese National Budget through Fundação para a Ciência e Tecnologia under the project UIDB/04564/2020. The use of Lindgren has been provided under DECI-9 project COIMBRALATT. The use of Sisu has been provided under DECI-12 project COIMBRALATT2. I also acknowledge the Laboratory for Advanced Computing at the University of Coimbra (http://www.uc.pt/lca) for providing access to the HPC resource Navigator. \newpage \blankpage{} \newpage \pagenumbering{roman} \setcounter{page}{9} \tableofcontents{} \newpage \addcontentsline{toc}{chapter}{List of Figures} \listoffigures \newpage \addcontentsline{toc}{chapter}{List of Tables} \listoftables \newpage \addcontentsline{toc}{chapter}{List of Abbreviations} \chapter*{List of Abbreviations} \noindent \textbf{DE} Differential Evolution\par\vspace{1em} \noindent \textbf{DSE} Dyson-Schwinger Equations\par\vspace{1em} \noindent \textbf{PA} Padé Approximant\par\vspace{1em} \noindent \textbf{QCD} Quantum Chromodynamics\par\vspace{1em} \noindent \textbf{QED} Quantum Electrodynamics\par\vspace{1em} \noindent \textbf{QFT} Quantum Field Theory\par\vspace{1em} \noindent \textbf{RGZ} Refined Gribov-Zwanziger\par\vspace{1em} \noindent \textbf{SA} Simulated Annealing\par\vspace{1em} \noindent \textbf{SPM} Schlessinger Point Method\par\vspace{1em} \newpage \pagenumbering{arabic} \chapter{Introduction} The current theoretical picture of the electromagnetic interaction, a component of the electroweak sector of the Standard Model, and of the strong interaction between quarks and gluons, is given by Quantum Electrodynamics (QED) and Quantum Chromodynamics (QCD), respectively. Both are gauge theories, associated with different gauge groups: QED is an abelian gauge theory associated with the symmetry group U(1), whilst QCD is a non-abelian gauge theory with the symmetry group SU(3). The fundamental quanta of QED, \eg\ the electron and the photon, are experimentally observed particles, whereas the quanta of QCD are not.
Indeed, single particle states associated with quarks and gluons have never been observed experimentally. It is believed that quark and gluon states do not belong to the Hilbert space of physical states. Therefore, quarks and gluons can only be present in Nature as components of other particles, \ie, they are confined particles. In Quantum Field Theories (QFT), the 2-point correlation functions, \ie, the propagators, summarise the dynamical information of the theory. In QED, which can be solved via perturbation theory, these propagators are well known, see, \eg, \cite{Ryder,Peskin}. Unfortunately, the same approach cannot be followed in QCD, where perturbative techniques can only be applied in the ultraviolet (UV) momentum region. Additionally, since these quanta cannot be experimentally observed in isolation, their behaviour and properties cannot be directly measured. Hence, the full knowledge of the gluon, quark, and unphysical ghost dynamics has to be acquired by means of theoretical \textit{ab initio} non-perturbative methods. Two such methods are commonly applied to investigate the non-perturbative regime of QCD: the Dyson-Schwinger Equations (DSE), and lattice regularised Monte Carlo simulations (lattice QCD). Both offer non-perturbative solutions of QCD in the whole range of momenta, but both have limitations. Although the DSE promise an exact solution of the theory, they form an infinite system of coupled integral equations, and a self-consistent truncation scheme needs to be applied in such a way that the important properties and quantities are not compromised. Lattice calculations, in turn, are limited by the finite volume of the lattice. Notwithstanding, these non-perturbative methods offer valuable information in the infrared (IR) momentum region, a region of momenta that is not accessible to perturbation theory. Most non-perturbative methods, including the two above, are formulated in Euclidean space. However, the physical theory lives in Minkowski space, where the observables are to be computed. Hence, the Wightman functions (Minkowski space correlation functions) must be obtained from the Schwinger functions (Euclidean space correlation functions) via a Wick rotation. This can only be done if the analytic structures of these functions are known. \vspace{1em} In general, the analytic structure, \ie, the set of zeros, singularities and branch cuts, of a propagator has a well-defined physical interpretation. For example, in QED, the electron propagator has a singularity at the physical mass of the particle. For a typical theory, the analytic structure of a propagator is shown in Figure \ref{fig:TypAS}. In the complex $p^2$-plane, a pole corresponding to the one-particle state should appear, as well as a branch cut associated with two or more free particles; poles related to bound states also appear in the analytic structure \cite{Peskin}. All of these structures occur on the real $p^2$-axis. When a calculation is made, one can choose the integration path to go around these singularities. This also allows one to perform a Wick rotation when going from the Euclidean to the Minkowski space of momenta. However, this rotation is impracticable if complex singularities are present.
While for four-dimensional QED we do not find such singularities\footnote{To the author's best knowledge, complex singularities were found in QED only when formulated in lower dimensions, where it exhibits confinement, see, \eg, \cite{Maris1995,avkl2000}.}, in non-perturbative QCD it is a different story, since the propagators acquire a different analytic structure. \begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{TypAS.png} \caption[Analytic structure of a propagator, for a typical theory, obtained in the Minkowski space.]{Analytic structure of a propagator, for a typical theory, obtained in the Minkowski space. Image from \cite{Peskin}.} \label{fig:TypAS} \end{figure} In fact, in a theory that displays confinement, which is believed to be the case of QCD, the analytic structure of the propagators of confined particles may have singularities that are not associated with physical states \cite{Maris1994}. This has to do with a violation of the local axioms of QFT by theories that exhibit confinement \cite{Binosi2020,Hayashi2020,Alkofer2001}. Knowing the analytic structure of a propagator thus proves to be crucial, not only because it gives information about the physical particle states of the theory, but also because it may bring new insight into the confinement mechanism. It is also indispensable to know the analytic structure whenever one attempts to go from the Euclidean space to the Minkowski space in non-perturbative calculations. In this sense, many studies have been carried out with the main objective of finding the analytic structure of the full propagators of the QCD quanta, \ie, the gluon, the quark, and the unphysical ghost. Predictions and studies concerning the existence of complex poles in the gluon propagator were made in, \eg, \cite{Sorella2011,Siringo2016,Stingl1986,Zwanziger1989,Alkofer2004,Hayashi2020}. Notably, the tree level solution for the propagator in the Refined Gribov-Zwanziger (RGZ) framework, which describes the lattice data extremely well, predicts the existence of a conjugate pair of complex poles at Euclidean momenta \cite{Dudal2010,Cucchieri2012}. These two complex poles were found using a global fit to lattice data in \cite{Dudal2018}. Poles at similar positions in the complex $p^2$-plane were also found in \cite{Binosi2020}, using Padé Approximants (PA) to reconstruct the gluon and ghost propagators obtained via DSE and lattice simulations. Regarding the ghost propagator, its analytic structure seems to be similar to the one obtained perturbatively, \ie, with no complex poles, see, \eg, \cite{Binosi2020,Hayashi2020}. On the other hand, other studies, \eg\ \cite{Strauss2012}, found no evidence of such complex singularities. Further studies were undertaken in other types of theories to investigate the consequences of confinement on the analytic structure of the respective propagators, see \eg \cite{Maris1995,Roberts1992,avkl2000}. In order to complement the already known results, this work focuses on the investigation of the analytic structure of the fundamental propagators of pure SU(3) Yang-Mills theory, obtained non-perturbatively via lattice calculations in the Landau gauge. No particular theoretical or empirical models to describe the lattice data will be considered. Instead, we rely on PAs to investigate the analytic structure of the QCD propagators, since they provide a general approach to the study of functions with singularities across the complex plane \cite{Hillion1977,Billi1994,Yamada2014,Boito2018}.
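To make the connection with complex conjugate poles concrete, it is worth recalling the schematic form of the tree-level RGZ gluon propagator \cite{Dudal2010} (the labels of the mass parameters here are illustrative and need not match the conventions of the references),
\begin{equation*}
D_{RGZ}(p^2)=\frac{p^2+M^2}{p^4+\left(M^2+m^2\right)p^2+M^2m^2+\lambda^4},
\end{equation*}
whose poles solve a quadratic equation in $p^2$,
\begin{equation*}
p^2=\frac{-\left(M^2+m^2\right)\pm\sqrt{\left(M^2-m^2\right)^2-4\lambda^4}}{2}.
\end{equation*}
A conjugate pair of complex poles therefore arises whenever $\left(M^2-m^2\right)^2<4\lambda^4$, which is the situation realised by the global fit of \cite{Dudal2018}.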
\vspace{1em} This work is organised as follows: in the next chapter, we start by introducing and defining Padé approximants in the context of the analytic continuation problem. The use of Padé approximants is then tested on the perturbative propagators in the third chapter, in order to understand how their analytic structures are reproduced, and how faithful this reproduction is. Considering that the full propagators come in the form of discrete sets of data points, a method to reconstruct them with Padé approximants is introduced and tested in the fourth chapter. In the fifth chapter, the analytic structures obtained for the gluon and ghost propagators are presented and discussed, followed by the final conclusions and ideas for future work. \chapter{Elements of Padé Approximants} \label{Chap:PA} The identification of the analytic structure of the gluon and ghost propagators requires knowledge of the latter in the whole complex plane. However, lattice simulations only provide the propagators in the real positive range of Euclidean momenta. In this chapter, we look in detail at rational functions, particularly at PAs, and explore their use to identify singularities and branch cuts at arbitrary complex momenta. A series of tests is made to examine the reliability of PAs in the reproduction of analytic structures. \section{The numerical analytic continuation problem} Numerical analytic continuation, \ie, the task of extending the domain of a function beyond the regime where information is available, for example from a finite set of data points, is a well-known problem in Physics. Examples include the reconstruction of real-time correlations of spectral functions \cite{Tripolt2019}, the calculation of scattering amplitudes \cite{Schlessinger1968}, and situations, \eg\ \cite{Vidberg1977,Cillero2010,Binosi2020,Masjuan2010,Tripolt2017}, where the analytic continuation gives access to information in regions of momenta other than the physical one, to which our knowledge is restricted. A graphical representation of the analytic continuation is shown in Figure \ref{fig:AnalyticContinuation}. \begin{figure}[h] \centering \includegraphics[width=.6\textwidth]{AnalyticContinuation.png} \caption[Graphical representation of an example of analytic continuation.]{Graphical representation of an example of analytic continuation, from \cite{Schlessinger1968}.} \label{fig:AnalyticContinuation} \end{figure} By fitting a function to a set of data points, we can use it to compute values over the whole domain of the function, and not only where data is available. Yet, if we do not know the form of the function represented by the finite set of data, which one do we choose? Power series may seem a good solution. However, we will see that rational functions, particularly PAs, offer a more general and faithful approximation, making them more useful for the numerical continuation of the data to the complex plane. \section{Why rational functions?} In order to convince ourselves that rational functions are indeed richer structures, capable of capturing the analytic properties of the approximated functions, let us consider the following example, taken from \cite{Yamada2014,Masjuan2010}.
Consider the function $F(x)$, given by \begin{equation} F(x)=\sqrt{\frac{1+2x}{1+x}}, \label{eq:Fx} \end{equation} represented graphically in Figure \ref{fig:TaylorVsRational}, which has the following power series expansion around $x=0$ \footnote{Without loss of generality, the expansion is made around the origin. The problem ahead is independent of the expansion point.} \begin{equation} F(x)=\sum_{n=0}^\infty a_n x^n =1+\frac{x}{2}-\frac{5x^2}{8}+\frac{13x^3}{16}-\frac{141x^4}{128}+\mathcal{O}(x^5), \label{eq:TaylorFx} \end{equation} where $a_n$ are the Taylor coefficients of $F(x)$. By denoting the Taylor expansion truncated at order $N$ as the partial sum \begin{equation} F^{[N]}(x) \equiv \sum_{n=0}^N a_n x^n, \end{equation} we have, for $N=4$, \begin{equation} F^{[4]}(x)=1+\frac{x}{2}-\frac{5x^2}{8}+\frac{13x^3}{16}-\frac{141x^4}{128}. \end{equation} Let us suppose that we only have access to the partial sum $F^{[4]}$ and that we have no idea of the exact function that originated it. If we want to compute the value of $F(x)$ at the origin, we have $F^{[4]}(0)=1$, which agrees perfectly with the exact value of $F(0)$. On the other hand, if we try to calculate the value of $F(x)$ at $x=0.5$ via $F^{[4]}(x)$, the approximation is less accurate. In fact, $F^{[4]}(0.5)\simeq1.1265$, while $F(0.5)=2\sqrt{3}/3\simeq1.1547$. Naturally, if we stray away from the expansion point and beyond the radius of convergence, in this case $r=1/2$, the approximation fails drastically, even using expansions truncated at higher orders, as can be seen in Figure \ref{fig:TaylorVsRational}. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{TaylorVsRational.pdf} \caption[Comparison between the original function $F(x)$ and the truncated Taylor expansions $F^{[4]}(x)$ and $F^{[100]}(x)$; and between the original function $F(x)$, the rational function $\widetilde{F}^{[4]}(w(x))$, and the PA $F^{[2|2]}(x)$.]{\textbf{Left: }Comparison between the original function $F(x)$ and the truncated Taylor expansions $F^{[4]}(x)$ and $F^{[100]}(x)$. \textbf{Right: }Comparison between the original function $F(x)$, the rational function $\widetilde{F}^{[4]}(w(x))$, and the PA $F^{[2|2]}(x)$. For $x>0$, the three curves overlap.} \label{fig:TaylorVsRational} \end{figure} In fact, computing $F(\infty)=\lim_{x\to\infty}F(x)$ with only the Taylor expansion is a hopeless task. In this limit, $F^{[N]}(x)$ diverges for any $N\in\mathbb{N}$. Nonetheless, the original function $F(x)$ does not diverge when $x\to\infty$: $\lim_{x\to\infty}F(x)=\sqrt{2}$. How could we achieve this value with only $F^{[4]}(x)$? A cunning trick to transform the expansion into one that lets us estimate the value of $F(\infty)$ is: to perform the change of variables $x\equiv w/(1-2w)$; to define \begin{equation} \widetilde{F}(w) \equiv F(x(w))=(1-w)^{-\frac{1}{2}}; \end{equation} to re-expand it in $w$, \begin{equation} \widetilde{F}(w)=\sum_{n=0}^\infty b_n w^n =1+\frac{w}{2}+\frac{3w^2}{8}+\frac{5w^3}{16}+\frac{35w^4}{128}+\mathcal{O}(w^5); \label{eq:TaylorFw} \end{equation} and, in a similar way to $F(x)$, to define the truncated Taylor expansion of $\widetilde{F}(w)$ as the partial sum \begin{equation} \widetilde{F}^{[N]}(w)\equiv\sum_{n=0}^N b_n w^n. \end{equation} By doing this, the limit $x\to\infty$ is translated into $w\to1/2$. For this value of $w$, the Taylor expansion (\ref{eq:TaylorFw}) converges, and we are able to approximate the value of $\widetilde{F}(1/2)$ and, therefore, of $F(\infty)$.
Let us, then, do so for the lowest values of $N$ in $\widetilde{F}^{[N]}(1/2)$. The resulting partial sums are shown in Table \ref{tab:Fkoo}. The sequence of values shown in Table \ref{tab:Fkoo} converges to $\sqrt{2}\simeq1.4142$. \begin{table}[t] \centering \begin{tabular}{c|cccccc} \toprule $N$ & 0 & 1 & 2 & 3 & 4 & $\cdot\cdot\cdot$ \\ \midrule $\widetilde{F}^{[N]}(1/2)$ & 1 & 1.25 & 1.34375 & 1.38281 & 1.39990 & $\cdot\cdot\cdot$ \\ \bottomrule \end{tabular} \caption{Results for the lowest partial sums of Eq. (\ref{eq:TaylorFw}) at $w=1/2$.} \label{tab:Fkoo} \end{table} Now, let us go back and rewrite $\widetilde{F}^{[4]}(w)$ in terms of $x$ \footnote{Note that $\widetilde{F}^{[4]}(w(x))$ and $F^{[4]}(x)$ are not the same. The first comes from the expansion of $\widetilde{F}(w)$ in $w$, while the latter comes from the expansion of the original function before the change of variables.}, \begin{equation} \widetilde{F}^{[4]}(w(x))= \frac{1+(17/2)x+(219/8)x^2+(637/16)x^3+(2867/128)x^4}{(1+2x)^4}. \label{eq:Ft4x} \end{equation} Clearly, this is not a power expansion, but a rational function. We have already seen that an approximation like (\ref{eq:Ft4x}) allows us to estimate not only the value of $F(x)$ near the origin, but also in the limit $x\to\infty$. A graphical comparison of $\widetilde{F}^{[4]}(w(x))$ with the original function (Figure \ref{fig:TaylorVsRational}) shows the improvement brought by the use of a rational function. Despite the fact that $\widetilde{F}^{[4]}(w(x))$ is defined on $x\in[-1,-1/2]$, where $F(x)$ is undefined (visible in Figure \ref{fig:TaylorVsRational}), the overall behaviour of the original function is well reproduced. If the use of rational functions can considerably improve the approximate description of a function, how do we build one? Surely, it is not in our interest to find the right change of variables for every function we come across. A particular type of rational function is the PA. The idea of PAs is to use the first Taylor coefficients of a given function to build a ratio of polynomials, \ie, a rational function. A simple PA is \begin{equation} P(x)=\frac{a_0+a_1x}{1+b_1x}. \end{equation} In this case, the goal is to fix the unknowns $a_0$, $a_1$ and $b_1$ in such a way that the first three coefficients of the Taylor expansion of $P(x)$ match the first three Taylor coefficients of the function to be approximated. For the function $F(x)$, defined in (\ref{eq:Fx}), we find \begin{equation} P(x)=\frac{1+(7/4)x}{1+(5/4)x}= 1+\frac{x}{2}-\frac{5x^2}{8}+\frac{25x^3}{32}-\frac{125x^4}{128}+\mathcal{O}(x^5). \end{equation} When comparing it with (\ref{eq:TaylorFx}), it can be seen that the first three coefficients are exactly the same (but not the remaining ones). If we now use the limit of $P(x)$ to estimate the value of $F(\infty)$, we get $\lim_{x\to\infty}P(x)=1.4$, which is a better determination than any in Table \ref{tab:Fkoo}. By requiring the matching of the first five Taylor coefficients, we obtain the PA \begin{equation} P(x)=\frac{1+(13/4)x+(41/16)x^2}{1+(11/4)x+(29/16)x^2}, \end{equation} and $\lim_{x\to\infty}P(x)=41/29\simeq1.4138$. If we continue to higher numbers of matching Taylor coefficients, the precision of the estimate of $F(\infty)$ increases. Indeed, for eleven matching coefficients we reach a precision of $\sim\num{e-8}$. Graphically (Figure \ref{fig:TaylorVsRational}), the precision of the approximation via PA is evident. The reproduction of the divergence at $x=-1$ is also worth mentioning.
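As a cross-check, the approximants above can be generated directly with the built-in \texttt{PadeApproximant} function of \textit{Mathematica} \cite{Mathematica}; a minimal sketch (since the PA is unique, the output must coincide with the expressions above):
\begin{verbatim}
(* original function and its [2|2] Pade approximant around x = 0 *)
F[x_] := Sqrt[(1 + 2 x)/(1 + x)];
pa = PadeApproximant[F[x], {x, 0, {2, 2}}];

Limit[pa, x -> Infinity]         (* 41/29, the estimate of F(Infinity) *)
NSolve[Denominator[pa] == 0, x]  (* poles at x ~ -0.913 and x ~ -0.604 *)
\end{verbatim}
Note that both poles of the $[2|2]$ approximant already fall inside the interval $[-1,-1/2]$, where $F(x)$ is undefined, anticipating the way PA sequences mimic branch cuts.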
This capability of reproducing divergences without damaging the overall behaviour of a function makes the PA a valuable tool. \section{The Padé Approximant} In order to use the PA as a tool, a rigorous definition must be given, as in \cite{Yamada2014,Masjuan2010,Boito2018}. For a more formal and complete definition see, \eg, \cite{George1970}. Thus, let us consider a function $f(z)$ that has a series expansion\footnote{Without loss of generality, an expansion around the origin is considered.} in the complex plane \begin{equation} f(z)=\sum_{n=0}^\infty c_n z^n, \end{equation} where $c_n$ are its Taylor coefficients. Let us also denote by $f^{[N]}(z)$ the respective truncated Taylor expansion of order $N$, \begin{equation} f^{[N]}(z)\equiv\sum_{n=0}^N c_n z^n. \end{equation} A Padé Approximant of order $[L|M]$ is defined as the ratio of two polynomials $Q_L(z)$ and $R_M(z)$, of orders $L$ and $M$ respectively, \begin{equation} P^{[L|M]}(z)\equiv\frac{Q_L(z)}{R_M(z)}=\frac{q_0+q_1z+q_2z^2+...+q_Lz^L}{1+r_1z+r_2z^2+...+r_Mz^M}. \label{eq:PA} \end{equation} As is usually done, the normalisation $r_0=1$ is adopted. The coefficients $q_0,...,q_L$ and $r_1,...,r_M$ will be called \textit{Padé coefficients}. The PA of the function $f(z)$ is denoted by $f^{[L|M]}(z)$, and is built such that the Taylor expansion of $f^{[L|M]}(z)$ reproduces exactly the first $L+M+1$ Taylor coefficients of $f(z)$. In this sense, we say that the PA has a \textit{contact} of order $L+M$ with the expansion of $f(z)$, and the difference between the PA and the original function satisfies \begin{equation} f(z)-f^{[L|M]}(z)=\mathcal{O}(z^{L+M+1}). \end{equation} When it exists, the PA is unique for any $L$ and $M$. As we will see later, sequences of PAs are extremely important (and fundamental in the scope of the present work), for it is their stability that gives us confidence in the outcome of the approximations. Sequences with $L=M+J$ are called \textit{near-diagonal} when $J\neq0$, and \textit{diagonal} when $J=0$. Despite the advantages of using PAs seen so far, there is a downside: unlike for Taylor series, there is no general convergence theory for PAs. Nevertheless, complete convergence theorems exist for some particular cases. A good summary of the existing convergence theorems and their details can be found, for example, in \cite{Masjuan2010}. \section{Analytic structure of a PA} \label{Sec:ASofPA} Let us now focus on the analytic structure of functions and on how well the PA can reproduce it. In \cite{Yamada2014}, a series of general examples with test functions is carried out with this objective. Here, we will direct our attention to some of them before moving on to more specific tests. We begin by considering the following complex test functions: \begin{align} \label{eq:f1} f_1(z) &= e^{-z}, \\ \label{eq:f2} f_2(z) &= \left(\frac{z-2}{z+2}\right)e^{-z}, \\ \label{eq:f3} f_3(z) &= e^{-z/(1+z)}, \\ \label{eq:f4} f_4(z) &= \sqrt{\frac{1+2z}{1+z}}. \end{align} The analytic structure of each of the functions above is represented, in the complex plane, in Figures \ref{fig:OriginalVsPade1} to \ref{fig:OriginalVsPade4}. These representations, made with the software \textit{Mathematica} \cite{Mathematica}, are built in such a way that the poles, zeros, and branch cuts are enhanced. To do so, the argument of $f_i(z)$ is represented, instead of its value. This allows us to use the following key to read figures made in this way.
\begin{figure}[H] \centering \includegraphics[width=.8\textwidth]{ComplexPlotLegend.png} \caption[Key for the identification of poles, zeros and essential singularities in representations of complex functions.]{Key for the identification of poles, zeros and essential singularities in representations of complex functions, where a cyclic colour function is used over the argument of the represented function. Image from \cite{ComplexPlot}.} \label{fig:Key} \end{figure} \noindent A branch cut is identified by a black dashed line. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{OriginalVsPade1.pdf} \caption[{Representation of the analytic structure of the test function $f_1(z)$, and distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$.}]{\textbf{Left: }Representation of the analytic structure of the test function $f_1(z)$. The key for the structure identification is in Figure \ref{fig:Key}. \textbf{Right: }Distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$, for the test function $f_1(z)$.} \label{fig:OriginalVsPade1} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{OriginalVsPade2.pdf} \caption[{Representation of the analytic structure of the test function $f_2(z)$, and distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$.}]{\textbf{Left: }Representation of the analytic structure of the test function $f_2(z)$. The key for the structure identification is in Figure \ref{fig:Key}. \textbf{Right: }Distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$, for the test function $f_2(z)$. The pole at $z=-2$ and the zero at $z=2$ appear at the same position for the three represented orders, and are, therefore, overlapped.} \label{fig:OriginalVsPade2} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{OriginalVsPade3.pdf} \caption[{Representation of the analytic structure of the test function $f_3(z)$, and distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$.}]{\textbf{Left: }Representation of the analytic structure of the test function $f_3(z)$. The key for the structure identification is in Figure \ref{fig:Key}. \textbf{Right: }Distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$, for the test function $f_3(z)$.} \label{fig:OriginalVsPade3} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{OriginalVsPade4.pdf} \caption[{Representation of the analytic structure of the test function $f_4(z)$, and distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$.}]{\textbf{Left: }Representation of the analytic structure of the test function $f_4(z)$. The key for the structure identification is in Figure \ref{fig:Key}.
\textbf{Right: }Distribution, in the complex plane, of poles and zeros for the sequence of diagonal PAs of orders $[5|5]$, $[10|10]$ and $[20|20]$, for the test function $f_4(z)$.} \label{fig:OriginalVsPade4} \end{figure} By looking at expressions (\ref{eq:f1}) to (\ref{eq:f4}) and Figures \ref{fig:OriginalVsPade1} to \ref{fig:OriginalVsPade4}, we can conclude that: \begin{itemize} \item $f_1(z)$ has no singularities for $|z|<\infty$; \item $f_2(z)$ has a zero at $z=2$ and a simple pole at $z=-2$; \item $f_3(z)$ has an essential singularity at $z=-1$; \item $f_4(z)$ has a branch cut along the real interval $[-1,-1/2]$. \end{itemize} After computing the PA of a given function, the latter's analytic structure can be inferred from the PA's own analytic structure. As a ratio of polynomials, the zeros of a PA correspond to the roots of its numerator - $Q_L(z)$ in (\ref{eq:PA}) -, and its poles correspond to the roots of its denominator - $R_M(z)$ in (\ref{eq:PA}). Note that these are the only structures present in the analytic structure of a PA. As a consequence, the reconstruction of the original function's analytic structure relies solely on the distribution of poles and zeros of the associated PA. For some functions, an analytic expression for the respective PA can be found. For example, for $f_1(z)$ we have \begin{equation} Q_L(z)=\sum_{k=0}^L\frac{(2L-k)!L!}{(2L)!k!(L-k)!}(-z)^k, \end{equation} \begin{equation} R_M(z)=\sum_{k=0}^M\frac{(2M-k)!M!}{(2M)!k!(M-k)!}z^k. \end{equation} In general, however, a PA is found numerically\footnote{For all the numerical calculations in the present work, the software \textit{Mathematica} \cite{Mathematica} was used.} for a given order $[L|M]$. Once this is done, the poles and zeros can be obtained and represented graphically in the complex plane (a short sketch of the procedure is given below), and the analytic structures can be compared. For our test functions, several orders of diagonal PAs were calculated. The corresponding distributions of poles and zeros are presented in Figures \ref{fig:OriginalVsPade1} to \ref{fig:OriginalVsPade4}. For $f_1(z)$, the distribution of poles and zeros of the obtained PAs is symmetric around the origin, as seen in Figure \ref{fig:OriginalVsPade1}. However, their positions strongly depend on the order of the PA used. Indeed, by increasing the PA's order, the distribution seems to spread and move towards infinity, leaving no structure behind. The same happens with $f_2(z)$ (Figure \ref{fig:OriginalVsPade2}), except that, in this case, a pole and a zero appear in the expected positions, $z=-2$ and $z=2$ respectively. This pole and zero are stable, and their positions in the complex plane seem to be independent of the order of approximation. This behaviour indicates that, within a PA sequence, the original poles and zeros are identified by poles and zeros that are stable in the complex plane. On the other hand, unstable poles and zeros do not correspond to any feature of the analytic structure of $f_2(z)$. A similar, but opposite, behaviour to the one seen for $f_1(z)$ and $f_2(z)$ is observed in the distribution of poles and zeros for $f_3(z)$. However, instead of spreading, the poles and zeros gather around the position where the essential singularity should be (see Figure \ref{fig:OriginalVsPade3}). Lastly, the branch cut of $f_4(z)$ is reproduced by the PA in the form of an alternating sequence of poles and zeros between $z=-1$ and $z=-1/2$. In this case, the distance between neighbouring zeros and poles decreases as the order of the PA is increased.
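For concreteness, a minimal \textit{Mathematica} sketch of this procedure for $f_2(z)$, at the arbitrarily chosen order $[10|10]$, could read:
\begin{verbatim}
f2[z_] := ((z - 2)/(z + 2)) Exp[-z];
pa = PadeApproximant[f2[z], {z, 0, {10, 10}}];

(* zeros and poles of the PA: roots of numerator and denominator *)
zeros = z /. NSolve[Numerator[pa] == 0, z];
poles = z /. NSolve[Denominator[pa] == 0, z];

(* plot both distributions in the complex plane *)
ListPlot[{ReIm /@ poles, ReIm /@ zeros},
 PlotMarkers -> {"x", "o"}, AspectRatio -> 1]
\end{verbatim}
Repeating this for increasing orders, the zero near $z=2$ and the pole near $z=-2$ stay put, while the remaining poles and zeros drift, in line with the discussion above.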
Additional numerical tests were performed in \cite{Yamada2014}. It was found that ``spurious poles'' may appear as the order of the approximation increases, due to insufficient numerical accuracy. Some of these poles have an associated zero that cancels their contribution to the analytic structure. These pole-zero pairings, often called \textit{Froissart doublets}, are artefacts of the approximation, and accumulate in structures around the origin. They have to be identified as such, in order to properly identify the correct analytic structure. Later on, we will introduce a way to remove these unwanted poles by performing a residue analysis (Section \ref{Sec:ResidueAnalysis}). We can now draw some conclusions regarding the reproduction of the analytic structure of a function: \begin{itemize} \item For a function with or without singularities, distributions of poles and zeros may appear. If, as the PA's order is increased, they spread to infinity or are overall unstable, they are not associated with the analytic structure of the original function, but are artefacts of the method; \item For a function with an essential singularity, the structures of poles and zeros tighten around the position of the singularity for increasing orders of approximation; \item Poles and zeros that are stable throughout the PA sequence may be correctly identified as being part of the original function's analytic structure; \item A branch cut is identified by a PA as a sequence of alternating poles and zeros, for which the distance between neighbouring poles and zeros decreases as the order of the PA is increased; \item Increasing the order of approximation may cause the emergence of Froissart doublets (pole-zero pairings), which do not contribute to the analytic structure. \end{itemize} \chapter{Preliminary tests} \label{Chap:AnalyticTests} In the previous chapter, we explored the use of PAs to identify the analytic structure of a function based on the distribution of poles and zeros. Notwithstanding, the functions we are dealing with in our study are not as simple as the ones covered there, and more specific tests have to be performed. These will allow us to identify a set of tools for a proper identification of poles and branch cuts. \section{The perturbative result for the gluon propagator} The results from renormalisation group improved perturbation theory for the gluon propagator and its dressing function are given, respectively, by \begin{equation} D_{gl}(p^2)=\frac{1}{p^2}\left[\frac{11N_c\alpha_s}{12\pi}\ln\left(\frac{p^2}{\Lambda^2}\right)+1\right]^{-\gamma}, \label{Eq:Prop} \end{equation} \begin{equation} d_{gl}(p^2)\equiv p^2 D_{gl}(p^2) =\left[\frac{11N_c\alpha_s}{12\pi}\ln\left(\frac{p^2}{\Lambda^2}\right)+1\right]^{-\gamma}. \label{Eq:Dress} \end{equation} Following \cite{Dudal2018}, in the numerical tests we will use $\alpha_s=0.3837$, $\Lambda=0.425~\si{GeV}$ and $\gamma=13/22$. These results offer a valuable opportunity to study the reliability of PAs for the QCD propagators, since we expect an equivalent behaviour in the UV limit. Throughout this chapter we take Equations (\ref{Eq:Prop}) and (\ref{Eq:Dress}) as test functions, and study them to understand the behaviour and validity of the PA approach. To give a visual idea of the behaviour of these test functions, a graphical representation of (\ref{Eq:Prop}) and (\ref{Eq:Dress}) is shown in Figure \ref{fig:PropAndDress}.
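For the numerical experiments of this chapter, these test functions are straightforward to set up; a minimal \textit{Mathematica} sketch, with the colour factor set to $N_c=3$ for SU(3):
\begin{verbatim}
(* perturbative gluon dressing function and propagator *)
alphaS = 0.3837; Lam = 0.425; gam = 13/22; nc = 3;
omega = 11 nc alphaS/(12 Pi);   (* prefactor of the logarithm *)

dgl[p2_] := (omega Log[p2/Lam^2] + 1)^(-gam);   (* dressing function *)
Dgl[p2_] := dgl[p2]/p2;                         (* propagator *)
\end{verbatim}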
As for the respective analytic structures, they are represented in the complex $p^2$-plane in Figure \ref{fig:PropAndDressAS}. For the propagator, we see a simple pole at the origin, created by the factor $1/p^2$, as well as a branch cut on the whole negative real $p^2$-axis, coming from the logarithm. On the other hand, only the branch cut on the negative real $p^2$-axis appears in the analytic structure of the dressing function. These are the structures we want to reproduce using PA sequences. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{PropAndDress.pdf} \caption[Graphical representation of the gluon propagator $D_{gl}(p^2)$, and of its dressing function $d_{gl}(p^2)$.]{Graphical representation of the gluon propagator $D_{gl}(p^2)$ (left), and of its dressing function $d_{gl}(p^2)$ (right).} \label{fig:PropAndDress} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{PropAndDressAS.pdf} \caption[Analytic structures of the gluon propagator $D_{gl}(p^2)$, and of its dressing function $d_{gl}(p^2)$.]{Analytic structures of the gluon propagator $D_{gl}(p^2)$ (left), and of its dressing function $d_{gl}(p^2)$ (right). The key for the structure identification is in Figure \ref{fig:Key}.} \label{fig:PropAndDressAS} \end{figure} \section{Relation between $L$ and $M$} \label{Sec:LM} The first step in the construction of a PA sequence is to establish the best relation between the orders $L$ and $M$ of the polynomials in (\ref{eq:PA}), since this relation dictates the limiting behaviour of a PA. We know that the propagator, as well as the dressing function, depends only on $p^2$. For this reason, we may impose that only the coefficients associated with even powers of momentum have nonzero values. Hence, for simplicity, we will build our PAs in powers of $p^2$ and not of $p$, \ie, \begin{align} d_{gl}(p^2)\to d_{gl}^{[L|M]}(p^2) = \frac{Q_L(p^2)}{R_M(p^2)}&= \frac{q_0+q_1p^2+q_2(p^2)^2+...+q_L(p^2)^L}{1+r_1p^2+r_2(p^2)^2+...+r_M(p^2)^M} \nonumber \\ &= \frac{q_0+q_1p^2+q_2p^4+...+q_Lp^{2L}}{1+r_1p^2+r_2p^4+...+r_Mp^{2M}}. \end{align} The same applies to the propagator, \begin{equation} D_{gl}(p^2)\to D_{gl}^{[L|M]}(p^2). \end{equation} By looking at the representation of the dressing function (Figure \ref{fig:PropAndDress}), we see that it goes slowly to zero for high values of $p$. It does so as $[\ln p^2]^{-13/22}$, and thus the right choice seems to be a relation that reproduces a similar behaviour at large momenta. Unfortunately, a ratio of polynomials cannot describe a logarithmic function exactly over a wide range of its argument, and so we have to look for the best approach. In Figure \ref{fig:RelationLM}, the functions $[\ln p^2]^{-13/22}$ and $(1/p^2)[\ln p^2]^{-13/22}$ are shown together with some simple PAs. For relatively high values of momentum ($p\sim 10~\si{GeV}$), the dressing function tends to $0$ between $1$ and $1/p^2$ \footnote{Here, the value 1 is an example of a constant value. In fact, for any constant $c>0$, there is a value $p_\text{min}$ such that $[\ln p^2]^{-13/22}<c,~\forall_{p>p_\text{min}}$. Furthermore, it is verified that\[\exists_{p_\text{min}>0}:~c>[\ln p^2]^{-13/22}>\frac{1}{p^2},~\forall_{p>p_\text{min}}.\]}. A similar analysis can be made for the propagator.
This time, for high values of momentum, the propagator goes to zero as $(1/p^2)[\ln p^2]^{-13/22}$, between $1/p^2$ and $1/(p^2)^2$, as seen in Figure \ref{fig:RelationLM} \footnote{Formally, it is verified that\[\exists_{p_\text{min}>0}:~\frac{1}{p^2}>\frac{1}{p^2}[\ln p^2]^{-13/22}>\frac{1}{(p^2)^2},~\forall_{p>p_\text{min}}.\]}. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{RelationLM.pdf} \caption[{Graphical representation of the asymptotic behaviour of $D_{gl}(p^2)$ and $d_{gl}(p^2)$, together with PAs of orders $[N|N]$, $[N-1|N]$, and $[N-2|N]$.}]{Graphical representation of the asymptotic behaviour of $D_{gl}(p^2)$ (left), and $d_{gl}(p^2)$ (right), together with PAs of orders $[N|N]$ (constant function $1$), $[N-1|N]$ (function $1/p^2$), and $[N-2|N]$ (function $1/(p^2)^2$).} \label{fig:RelationLM} \end{figure} A criterion to determine the best sequence in each case, by choosing the suitable relation between $L$ and $M$, will be discussed in Section \ref{Sec:Hint}. \section{The expansion point} By definition, the PA of a function is built from its Taylor expansion, and so it depends on the point around which the expansion is made. Throughout the examples of Chapter \ref{Chap:PA}, we showed little concern for this matter, making all expansions around the origin. However, we are now confronted with functions that are not defined at the origin, \eg, the logarithm. For this reason, we have to find the expansion point that enables the best approximation. Since we are interested in values of $p$ between $\sim 1~\si{GeV}$ and $\sim 10~\si{GeV}$, we require good precision in the reproduction of the original function in this range of momentum. In this sense, we choose the central point $p_0=5.5~\si{GeV}$, and examine the precision of the obtained PA along $p$. Figure \ref{fig:p05.5} shows $d_{gl}(p^2)$, together with the respective PA of order $[1|1]$ and the percent error of the approximation. Other interesting points to expand around are the endpoints of the considered interval, \ie, $p_0=1~\si{GeV}$ and $p_0=10~\si{GeV}$. The PAs of order $[1|1]$ and the percent errors for these expansion points are shown in Figure \ref{fig:p01and10}. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{p05p5.pdf} \caption[{Representation of $d_{gl}(p^2)$, together with the respective PA of order $[1|1]$ using, as expansion point, $p_0=5.5~\si{GeV}$, and the approximation error.}]{Representation of $d_{gl}(p^2)$, together with the respective PA of order $[1|1]$ using, as expansion point, $p_0=5.5~\si{GeV}$ (left), and the approximation error (right).} \label{fig:p05.5} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{p01and10.pdf} \caption[{Representation of $d_{gl}(p^2)$, together with the respective PAs of order $[1|1]$ using, as expansion points, $p_0=1~\si{GeV}$ and $p_0=10~\si{GeV}$, and the approximation error.}]{Representation of $d_{gl}(p^2)$, together with the respective PAs of order $[1|1]$ using, as expansion points, $p_0=1~\si{GeV}$ and $p_0=10~\si{GeV}$ (left), and the approximation error (right).} \label{fig:p01and10} \end{figure} By comparing the obtained errors, we see that if we choose the expansion point near an endpoint we gain precision around it, but lose it at the opposite side. Furthermore, by analysing the error curves, we conclude that the error is lowest at $p=p_0$, reaching its maximum values at the endpoints.
Thus, a good expansion point should be one that lowers the error at both endpoints. For this reason, we need only analyse the error at the endpoints, since we know that no higher values occur in between. Let us now examine how the errors at $p=1~\si{GeV}$ and $p=10~\si{GeV}$ evolve as we increase the order of approximation $[N|N]$ in a diagonal sequence with $p_0=5.5~\si{GeV}$, represented by the blue lines in Figure \ref{fig:p0choice}. We see that, for any value of $N$, the error at $p=1~\si{GeV}$ is the higher of the two, making it the maximum error reached in the interval $[1,10]~\si{GeV}$. The decrease of the error for both values of momentum is more pronounced at lower orders of approximation, up to $N\sim11$. From then on, the decrease of the maximum error is slower. Nonetheless, it is assured that the approximation should not present errors higher than $\sim0.02\%$ for orders of approximation with $N$ greater than 11. We also notice a discrepancy between the errors at $p=1~\si{GeV}$ and $p=10~\si{GeV}$. We can lower the error at $p=1~\si{GeV}$, without letting the error on the opposite side grow too much, by changing the expansion point. The evolution of the errors at $p=1~\si{GeV}$ and $p=10~\si{GeV}$ is represented in Figure \ref{fig:p0choice} for four different values of the expansion point, $p_0=5.5,~5,~4.5~\text{and}~4~\si{GeV}$. This representation shows that the best compromise is obtained for $p_0=4.5~\si{GeV}$, which yields the lowest maximum error for increasing values of $N$. Thus, in the next tests, PAs built around $p_0=4.5~\si{GeV}$ will be considered for both the dressing function and the propagator, since a very similar result is obtained for the latter\footnote{Although this value may not be globally the best one - a deeper analysis could be made -, it does not have to be very precise. Variations of up to $0.5~\si{GeV}$ in $p_0$ translate into minimal variations of the errors' magnitude, as Figure \ref{fig:p0choice} suggests.}. \begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{p0choice.pdf} \caption{Evolution of the approximation error at the endpoints $p=1~\si{GeV}$ and $p=10~\si{GeV}$ with $N$, using $p_0=5.5,~5,~4.5~\text{and}~4~\si{GeV}$ as the expansion point, for the dressing function.} \label{fig:p0choice} \end{figure} \section{``Padé's hint''} \label{Sec:Hint} With the expansion point chosen, we are finally able to calculate PAs. Let us go back and continue the discussion of Section \ref{Sec:LM} on the relations between $L$ and $M$, and calculate, \eg, the PAs of orders $[3|3]$ (with $J=0$) and $[3|4]$ (with $J=-1$) for the dressing function. The Padé coefficients are displayed, respectively, in Tables \ref{tab:dress3.3} and \ref{tab:dress3.4}.
\begin{table}[t] \begin{subtable}{.5\textwidth} \centering \begin{tabular}{ccc} \toprule $i$ & $q_i$ & $r_i$ \\ \midrule 0 & \num{5.7e-1} & $-$ \\ 1 & \num{4.3e-2} & \num{7.9e-2} \\ 2 & \num{8.7e-4} & \num{1.7e-3} \\ 3 & {\color{BrickRed}\num{3.7e-6}} & {\color{BrickRed}\num{8.3e-6}} \\ \bottomrule \end{tabular} \caption{} \label{tab:dress3.3} \end{subtable} \begin{subtable}{.5\textwidth} \centering \begin{tabular}{ccc} \toprule $i$ & $q_i$ & $r_i$ \\ \midrule 0 & \num{5.7e-1} & $-$ \\ 1 & \num{4.9e-2} & \num{9.0e-2} \\ 2 & \num{1.2e-3} & \num{2.4e-3} \\ 3 & {\color{BrickRed}\num{8.3e-6}} & {\color{BrickRed}\num{1.8e-5}} \\ 4 & $-$ & \num{3.0e-9} \\ \bottomrule \end{tabular} \caption{} \label{tab:dress3.4} \end{subtable} \caption[{Padé coefficients obtained for the orders of approximation $[3|3]$ and $[3|4]$ of the dressing function $d_{gl}(p^2)$.}]{Padé coefficients obtained for the orders of approximation $[3|3]$ (a) and $[3|4]$ (b) of the dressing function $d_{gl}(p^2)$.} \end{table} Notice the last coefficients\footnote{Here, the last coefficients are understood as the coefficients of the highest-order terms.} of each polynomial in the tables mentioned above, a careful analysis of which yields an important result. In Table \ref{tab:dress3.3}, the last coefficients, $q_3$ and $r_3$, are both of the same order of magnitude. On the other hand, if we compare the last coefficients in Table \ref{tab:dress3.4}, we see that $r_4$ is three orders of magnitude smaller than $q_3$. However, still in Table \ref{tab:dress3.4}, $r_3$ is only about one order of magnitude larger than $q_3$. In general, for any PA of order $[L|M]$ of the dressing function, the following relations between the Padé coefficients can be verified\footnote{The first relations in (\ref{eq:dressCoefficients}) and, later, in (\ref{eq:propCoefficients}) are considered to hold for differences of at most two orders of magnitude, \ie, $|\log(q_L/r_M)|\lesssim2$.}: \begin{equation} \begin{cases}q_L\sim r_M,\quad L=M\\q_L\ll r_M,\quad L>M\\q_L\gg r_M,\quad L<M\end{cases}. \label{eq:dressCoefficients} \end{equation} \begin{table}[t] \begin{subtable}{.5\textwidth} \centering \begin{tabular}{ccc} \toprule $i$ & $q_i$ & $r_i$ \\ \midrule 0 & \num{2.8e-2} & $-$ \\ 1 & \num{1.8e-3} & \num{1.2e-1} \\ 2 & \num{3.1e-5} & \num{4.6e-3} \\ 3 & {\color{BrickRed}\num{1.0e-7}} & \num{6.6e-5} \\ 4 & $-$ & {\color{BrickRed}\num{2.3e-7}} \\ \bottomrule \end{tabular} \caption{} \label{tab:prop3.4} \end{subtable} \begin{subtable}{.5\textwidth} \centering \begin{tabular}{ccc} \toprule $i$ & $q_i$ & $r_i$ \\ \midrule 0 & \num{2.8e-2} & $-$ \\ 1 & \num{2.1e-3} & \num{1.3e-1} \\ 2 & \num{4.5e-5} & \num{5.7e-3} \\ 3 & {\color{BrickRed}\num{2.6e-7}} & \num{1.0e-4} \\ 4 & $-$ & {\color{BrickRed}\num{5.5e-7}} \\ 5 & $-$ & \num{7.4e-11} \\ \bottomrule \end{tabular} \caption{} \label{tab:prop3.5} \end{subtable} \caption[{Padé coefficients obtained for the orders of approximation $[3|4]$ and $[3|5]$ of the propagator $D_{gl}(p^2)$.}]{Padé coefficients obtained for the orders of approximation $[3|4]$ (a) and $[3|5]$ (b) of the propagator $D_{gl}(p^2)$.} \end{table} Regarding the coefficients of the propagator, we observe that, for the order $[3|4]$, in Table \ref{tab:prop3.4}, the last coefficients are of the same order of magnitude, whereas for the order $[3|5]$, in Table \ref{tab:prop3.5}, the coefficients that are of the same order of magnitude are $q_3$ and $r_4$.
Even if we had not discussed earlier, in Section \ref{Sec:LM}, that the relation between $L$ and $M$ should be $L=M-1$ or $L=M-2$, we could arrive at the same conclusion with a diagonal PA. A quick calculation for a PA of order $[3|3]$ shows exactly this: $q_2=\num{1.4e-5}$, $q_3=\num{3.2e-9}$ and $r_3=\num{2.9e-5}$. In this sense, relations similar to (\ref{eq:dressCoefficients}) can be established for the PAs of the propagator, \begin{equation} \begin{cases}q_L\sim r_M,\quad L=M-1\\q_L\ll r_M,\quad L>M-1\\q_L\gg r_M,\quad L<M-1\end{cases}. \label{eq:propCoefficients} \end{equation} The relations (\ref{eq:dressCoefficients}) and (\ref{eq:propCoefficients}) seem to indicate that, in a certain way, the PA is sensitive to the relation between $L$ and $M$ that best reproduces the original function. For the dressing function, the PA ``hints'' that the most faithful approximation is achieved for approximants with $M=L$, while for the propagator the PA ``advises'' us to use $L=M-1$. This ability of the PA to tell us the proper relation between $L$ and $M$, and to ``correct'' it if a different relation is considered, will be of great importance later on. \section{Poles and zeros distribution} We are now in a position to study the distributions of poles and zeros within PA sequences, and to compare them with the expected analytic structures (Figure \ref{fig:PropAndDressAS}). As already mentioned in Section \ref{Sec:ASofPA}, the zeros of a PA are given by the roots of its numerator and the poles by the roots of its denominator. In Figure \ref{fig:dNN}, the distributions of poles and zeros are represented, in the complex $p^2$-plane, for some values of $N$ within the diagonal PA sequence of order $[N|N]$, for the dressing function. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{dNN.pdf} \caption{Distribution of poles and zeros for some values of $N$ within the diagonal PA sequence of order $[N|N]$, for the perturbative gluon dressing function.} \label{fig:dNN} \end{figure} From $N=1$ to $N=11$ we see an accumulation of alternating poles and zeros on the negative real $p^2$-axis. Following the conclusions of Section \ref{Sec:ASofPA}, this represents the original branch cut, associated with a branch point at $p^2=0$. Beginning at $N=12$, poles and zeros start to emerge in the rest of the complex plane, mostly on its right side. These new poles and zeros come in pairs, \ie, at the same position, and therefore the zeros cancel the poles' contributions to the total function. \vspace{1em} An analogous behaviour is seen for the near-diagonal PA sequence of order $[N-1|N]$ for the propagator, in Figure \ref{fig:DN1N}. However, this time we face an additional problem: the pole from the factor $1/p^2$ is at the same position as the branch point, so we cannot distinguish them by simply representing the distribution of poles and zeros. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{DN1N.pdf} \caption{Distribution of poles and zeros for some values of $N$ within the near-diagonal PA sequence of order $[N-1|N]$, for the perturbative gluon propagator.} \label{fig:DN1N} \end{figure} In the next section we will try to solve this problem by carrying out a residue analysis. \section{Residue analysis} \label{Sec:ResidueAnalysis} Consider, for example, the distribution of poles and zeros of the PA of order $[50|50]$ for the dressing function (Figure \ref{fig:dNN}).
Amidst so many poles and zeros, how can we separate the real ones from those that are artefacts of the method? We need a mathematical tool to decide whether a pole is part of the analytic structure, or whether it is just cancelled by a zero, forming a Froissart doublet. The use of residues to sort out the undesired poles is suggested in \cite{Yamada2014}. In Figure \ref{fig:dRes50}, the distribution of poles and zeros obtained from the order $[50|50]$ PA is represented along with the absolute values of the respective residues\footnote{The residue of a complex function $f(z)$ at $z=z_0$ is defined as the coefficient of $(z-z_0)^{-1}$ in the respective Laurent expansion \cite{Arfken}.} $|A_k|$ for each pole $k$ \footnote{Throughout this work, the residues were numerically computed using the software \textit{Mathematica} \cite{Mathematica}.}. The existence of two distinct levels is clear. The first one, in red tones, with residue values above $\num{e-2}$, corresponds to the poles lying on the negative real $p^2$-axis. The second one, in green/blue tones, corresponds to the poles belonging to the pole-zero pairings. We can clear these last poles out of the representation by performing a \textit{cut} on the residues at $|A_k|=\num{e-2}$, \ie, by representing only the poles $k$ for which $|A_k|\gtrsim\num{e-2}$. For the PA of order $[50|50]$, the distribution of poles and zeros with a cut at $|A_k|=\num{e-2}$ is represented in Figure \ref{fig:dRes50cut}. We see that only the poles on the negative real $p^2$-axis remain after the cut. In this way, the branch cut of the dressing function is faithfully reproduced by the poles that survive the residue cut. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{dRes50.pdf} \caption[{Distribution of poles and zeros for the PA of order $[50|50]$ of $d_{gl}(p^2)$, and the absolute value of the residues $|A_k|$ for each pole $k$.}]{Distribution of poles and zeros for the PA of order $[50|50]$ of $d_{gl}(p^2)$ (left), and the absolute value of the residues $|A_k|$ for each pole $k$ (right). The values of $|A_k|$ are arranged in descending order. The colour code used in the left graphic corresponds to the one used in the right graphic, for the residues' absolute value.} \label{fig:dRes50} \end{figure} \begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{dRes50cut.pdf} \caption[{Distribution of poles and zeros for the PA of order $[50|50]$ of $d_{gl}(p^2)$, with a cut in the residues at $|A_k|=\num{e-2}$.}]{Distribution of poles and zeros for the PA of order $[50|50]$ of $d_{gl}(p^2)$, with a cut in the residues at $|A_k|=\num{e-2}$. The colour scheme codes the residue's absolute value of each pole.} \label{fig:dRes50cut} \end{figure} Let us now compare the distribution of poles and zeros for the PA of order $[50|50]$ of the dressing function with that for the PA of order $[50|51]$ of the propagator, represented in Figure \ref{fig:DpRes50cut}, already with the cut on the residues. A closer look shows that, while for the dressing function the absolute value of the residues decreases as the poles approach the branch point, the opposite happens for the propagator, thus suggesting the presence of a pole at the origin, as expected.
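In practice, the residue cut adds only a few lines to the pole search; a sketch at a moderate order, reusing the definition of \texttt{dgl} given earlier (at orders as high as $[50|50]$, the underlying series coefficients should be computed with extended-precision arithmetic, a detail glossed over here):
\begin{verbatim}
(* PA of the dressing function around the expansion point 4.5^2 GeV^2 *)
pa = PadeApproximant[dgl[p2], {p2, 4.5^2, {10, 10}}];
poles = p2 /. NSolve[Denominator[pa] == 0, p2];

(* residue of a simple pole pk of Q(p2)/R(p2) equals Q(pk)/R'(pk) *)
res = (Numerator[pa]/D[Denominator[pa], p2] /. p2 -> #) & /@ poles;

(* keep only the poles that survive the cut |A_k| > 10^-2 *)
kept = Cases[Transpose[{poles, Abs[res]}],
             {pk_, ak_} /; ak > 10^-2 :> pk]
\end{verbatim}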
\begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{DpRes50cut.pdf} \caption[{Distribution of poles and zeros for the PA of order $[50|51]$ of $D_{gl}(p^2)$, with a cut in the residues at $|A_k|=\num{e-2}$.}]{Distribution of poles and zeros for the PA of order $[50|51]$ of $D_{gl}(p^2)$, with a cut in the residues at $|A_k|=\num{e-2}$. The colour scheme codes the residue's absolute value of each pole.} \label{fig:DpRes50cut} \end{figure} \section{Adding mass generation terms} In order to investigate the reproduction of poles and branch cuts at other locations of the complex $p^2$-plane, we can change their positions by adding mass terms, $m_1^2$ and $m_2^2$, to $D_{gl}(p^2)$ and $d_{gl}(p^2)$, \begin{equation} D_{gl}(p^2)=\frac{1}{p^2+m_2^2}\left[\frac{11N_c\alpha_s}{12\pi}\ln\left(\frac{p^2+m_1^2}{\Lambda^2}\right)+1\right]^{-\gamma}, \label{Eq:PropM} \end{equation} \begin{equation} d_{gl}(p^2)=\left[\frac{11N_c\alpha_s}{12\pi}\ln\left(\frac{p^2+m_1^2}{\Lambda^2}\right)+1\right]^{-\gamma}. \label{Eq:DressM} \end{equation} Thereby, the branch points move from the origin to $p^2=-m_1^2$, both in $D_{gl}(p^2)$ and $d_{gl}(p^2)$, and the pole of $D_{gl}(p^2)$ now appears at $p^2=-m_2^2$. As an example, the mass term $m_1^2$ in $d_{gl}(p^2)$ is set to four different values: $-5$, $5$, $-i10$ and $i10~\si{GeV^2}$. These should translate the branch point in the complex $p^2$-plane to $p^2=5,~-5,~i10~\text{and}~-i10~\si{GeV^2}$, respectively. However, these changes should not alter the direction of the branch cut, which, according to Figure \ref{fig:PropAndDressAS}, is parallel to the real $p^2$-axis and runs from the branch point to the left side of the plane. In Figure \ref{fig:BCdance}, the pole and zero distributions obtained for the approximants of the PA sequences of order $[N|N]$ with $N=50$ \footnote{Despite the fact that only one element from each PA sequence is presented here, the positions of the important poles in the distribution of poles and zeros, according to the residue analysis, are very stable throughout the sequences, and, therefore, they are not shown.} are represented for the four values of $m_1^2$. \begin{figure}[tp] \centering \includegraphics[width=\textwidth]{BCdance.pdf} \caption[{Distribution of poles and zeros for the elements of the PA sequences of order $[N|N]$ with $N=50$ of $d_{gl}(p^2)$, obtained for $m_1^2=-5,~5,~-i10$ and $i10~\si{GeV^2}$.}]{Distribution of poles and zeros for the elements of the PA sequences of order $[N|N]$ with $N=50$ of $d_{gl}(p^2)$, obtained for $m_1^2=-5,~5,~-i10~\text{and}~i10~\si{GeV^2}$. The cut in the residues has been done at $|A_k|=\num{e-2}$, by following the residue analysis of Section \ref{Sec:ResidueAnalysis}. The colour scheme codes the residue's absolute value of each pole.} \label{fig:BCdance} \end{figure} In Figure \ref{fig:BCdance}, we see that, for $m_1^2=-5~\text{and}~5~\si{GeV^2}$, the branch cut and the branch point are in the expected positions and thus correctly identified. Notwithstanding, for $m_1^2=-i10~\text{and}~i10~\si{GeV^2}$, the branch cut is not so well reproduced\footnote{This is based on the convention that the branch cut from the logarithm makes an angle of $\pi$ with the real axis in the complex plane.}. Despite the correct identification of the branch point, the identified branch cut is not parallel to the real $p^2$-axis, contrary to what was anticipated.
In fact, the grey dashed lines (Figure \ref{fig:BCdance}) reveal that the branch cut lies along the direction connecting the branch point to the expansion point\footnote{The same result can be obtained by moving the expansion point through the complex $p^2$-plane.}.

\vspace{1em} Lastly, we need to examine the behaviour of the pole and the branch cut of the propagator when the mass term $m_2^2$ is introduced. In Figure \ref{fig:Pdance}, the distributions of poles and zeros obtained from the approximants within the PA sequences of order $[N-1|N]$ with $N=50$\footnote{Again, only one element from each PA sequence is presented here, due to the stability of the important poles and structures that emerge in the distributions of poles and zeros throughout the sequences.} are represented for four different values of $m_2^2$: $-5,~5,~-5-i10~\text{and}~5+i10~\si{GeV^2}$.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Pdance.pdf} \caption[{Distribution of poles and zeros for the elements of the PA sequences of order $[N-1|N]$ of $D_{gl}(p^2)$ with $N=50$, obtained for $m_2^2=-5,~5,~-5-i10~\text{and}~5+i10~\si{GeV^2}$ and $m_1^2=0$.}]{Distribution of poles and zeros for the elements of the PA sequences of order $[N-1|N]$ of $D_{gl}(p^2)$ with $N=50$, obtained for $m_2^2=-5,~5,~-5-i10~\text{and}~5+i10~\si{GeV^2}$ and $m_1^2=0$. The cut in the residues was performed at $|A_k|=\num{e-2}$, following the residue analysis of Section \ref{Sec:ResidueAnalysis}. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Pdance} \end{figure}

A quick analysis of Figure \ref{fig:Pdance} shows that, for the four different values of $m_2^2$, the pole and the branch point are in the expected positions. Thus, the positions of these singularities in the analytic structure of the propagator are correctly identified by the distribution of poles and zeros of the PA.

\section{The perturbative result for the ghost propagator} Regarding the ghost, the perturbative expressions for its propagator and dressing function are very similar to those of the gluon. Indeed, the only difference between them is the value of the anomalous dimension, which is $\delta=9/44$ for the ghost. The tests made throughout the sections above were repeated for the ghost propagator, and identical results were obtained. For this reason, the conclusions drawn in this chapter are valid for both particles.

\chapter{Approximating a discrete set of points} \label{Chap:Discrete} The tests performed in the last chapter move us a step closer in our journey to find the analytic structure of the non-perturbative gluon and ghost propagators. To do so, we have to approximate the lattice data by PAs. This is exactly where we reach a \textit{cul-de-sac}. Until now, the tests were all done with full knowledge of the original function's analytic expression. Now, we are confronted with a discrete set of data points, with associated statistical errors. In this chapter, we introduce a methodology to approximate a discrete set of data points by a PA. The method will be tested against data sets generated from a collection of simple functions, before being applied to the lattice data for the gluon and ghost propagators.

\section{A ``simple fit''} \label{Sec:SimpleFit} The problem of building a PA from a discrete set of points, rather than from the Taylor expansion of a function, is not new.
In fact, several methods to approximate a set of data points by a rational function have already been explored and employed in Physics, in situations requiring numerical analytic continuation. The \textit{Norm Method} and the \textit{Moment Method}, which both require solving a system of equations, and the \textit{Schlessinger Point Method} (SPM), in which the coefficients can be determined recursively, were all introduced by Schlessinger \cite{Schlessinger1968}; examples of their usage can be found, \eg, in \cite{Haymaker1970,Tripolt2019,Binosi2020}. Some of these methods were tested in the context of this work, but yielded a very poor quality of approximation. Moreover, none of these methods takes the statistical errors of the data into account. For these reasons, a different approach must be considered.

Essentially, we want to reproduce a function $f(x)$ based on a number $K$ of data points $\{(x_1,y_1),...,(x_K,y_K)\}$, with a statistical error $\sigma_i$ associated with each value $y_i$. To do so, these data points will be approximated by a ratio of polynomials, \ie, a PA. The Padé coefficients can be computed by minimising an objective function that measures the deviation of the approximating function from the data points. Throughout the current work, we will use the chi-squared $\chi^2$ as the objective function, defined as \begin{equation} \chi^2\equiv\sum_{i=1}^K\left(\frac{y_i-f(x_i)}{\sigma_i}\right)^2. \end{equation} The quality of the approximation can be measured through the reduced chi-squared, given by \begin{equation} \widetilde{\chi}^2\equiv\frac{\chi^2}{\text{degrees of freedom}}, \end{equation} where the number of degrees of freedom is the difference between the number $K$ of points to be approximated and the number of Padé coefficients to be calculated. A good approximation to the data translates into a reduced chi-squared close to unity, \ie, $\widetilde{\chi}^2\sim1$.

From a mathematical point of view, we are dealing with the non-linear global optimisation problem of determining the absolute minimum of $\chi^2$. Global optimisation problems are non-trivial, and there is no general method that solves this class of problems for an arbitrary function. Herein, the minimisation was done numerically using \textit{Mathematica} \cite{Mathematica}. The minimisation methods used in this work for this global optimisation problem were the \textit{Differential Evolution} (DE) method \cite{DifferentialEvolution} and the \textit{Simulated Annealing} (SA) method \cite{SimulatedAnnealing}\footnote{The \textit{Levenberg--Marquardt} method \cite{LevenbergMarquardt} was also tried, but gave very unstable results compared with the other two methods.}. Both are stochastic optimisation methods. The first one, DE, maintains a population of points, from which a new population is generated through random processes. This population evolves to explore the search space, in order to escape possible local minima. Convergence is achieved when the best points of two consecutive populations agree within the chosen tolerance. As for the SA method, it is inspired by the physical/metallurgical process of annealing. At each iteration, a new point in the search space is randomly generated close to the previous one, replacing it if a lower value is reached.
However, the algorithm also allows the current point to be exchanged for one with a higher value, with a probability that follows a Boltzmann distribution, giving the method the possibility of escaping local minima. As the number of iterations increases, the probability of a replacement by a point with a higher value decreases, simulating a decrease in temperature. The process ends when the maximum number of iterations is reached, or when the method converges to a point within the chosen tolerance.

In order to test the reliability of this methodology, we first apply it to test data sets generated from given functions.

\section{Reproduction of the analytic structure} The functions that will be used to generate the test data sets are: \begin{align} f_1(p^2) &= \ln(p^2+m^2), \\ f_2(p^2) &= \frac{1}{p^2+m^2}, \\ f_3(p^2) &= D_{gl}(p^2+m^2), \end{align} where $D_{gl}(p^2)$ is the perturbative gluon propagator given by (\ref{Eq:Prop}) and, in all cases, $p^2$ is dimensionless. For the mass terms, the cases with $m^2=0$ and $m^2=0.5$ will be explored. These functions give rise to the kinds of singularities that are expected to be found in the analysis of the lattice data. Their analytic structures are: \begin{itemize} \item a branch cut parallel to the real axis with the branch point at $p^2=-m^2$, for $f_1(p^2)$; \item a pole at $p^2=-m^2$, for $f_2(p^2)$; \item a pole at $p^2=-m^2$ and a branch cut parallel to the real axis with the branch point also at $p^2=-m^2$, for $f_3(p^2)$. \end{itemize}

\vspace{1em} For each function $f_i(p^2)$, a set of $K$ points $(p_j,f_i(p_j^2))$, with $j=1,...,K$, is generated by randomly choosing $K$ values of $p_j$ in the chosen interval and calculating $f_i(p_j^2)$ for each one. The statistical errors can be simulated by replacing the value of $f_i(p_j^2)$, at each $p_j$, by a random value near the original one, \ie, $f_i(p_j^2)\to y_j=f_i(p_j^2)(1+\varepsilon \rho)$, where $\rho$ is a random number following a Gaussian distribution centred at 0 with unit standard deviation, and $\varepsilon$ is the desired relative error. Hence, the statistical error associated with $y_j$ is $\sigma_j=\varepsilon f_i(p_j^2)$. In the minimisation, each point $(p_j,y_j)$ contributes with the weight $1/\sigma_j^2$.

The lattice data for the gluon and the ghost propagators that will be used have, in all cases, more than a hundred data points in the range $p\in[0,8]~\si{GeV}$, with statistical errors between $\sim1\%$ and $\sim0.1\%$. Accordingly, test data sets of 100 points distributed in the range $p\in[0,8]$, with $\varepsilon=1\%$, were generated. Following the conclusions of Chapter \ref{Chap:AnalyticTests}, near-diagonal PA sequences of order $[N-1|N]$, with $N$ going from 1 to 20, will be calculated, with $p^2$ as the independent variable of the PAs.

\subsection{Identifying a branch cut} In Figure \ref{fig:Tt1chi2red}, the achieved values of $\widetilde{\chi}^2$ for $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, are represented for the two minimisation methods, DE and SA. The obtained values, which are close to unity, demonstrate both the quality of the minimisation and that the data are well fitted by PAs.
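The procedure just described, generating noisy test data and determining the Padé coefficients by a global minimisation of $\chi^2$, can be prototyped in a few lines. The sketch below is a simplified illustration, not the setup used in this work: it employs SciPy's differential evolution in place of the \textit{Mathematica} routines, and the coefficient bounds are an arbitrary choice of ours (SciPy's \texttt{dual\_annealing} could play the role of the SA method in the same way).
\begin{verbatim}
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)

# Test data from f2(p^2) = 1/(p^2 + m^2): K = 100 points with p in
# [0, 8] and simulated 1% errors, y_j = f(p_j^2)(1 + eps*rho).
m2, eps, K = 0.5, 0.01, 100
p2 = rng.uniform(0.0, 8.0, K)**2
f = 1.0 / (p2 + m2)
y = f * (1.0 + eps * rng.standard_normal(K))
sigma = eps * f

def pade(x, c, L, M):
    # [L|M] approximant in x = p^2, with b_0 normalised to 1, so
    # that there are (L + 1) + M free coefficients. No guard against
    # denominator zeros; fine for a sketch.
    a = c[:L + 1]                           # a_0 ... a_L
    b = np.concatenate(([1.0], c[L + 1:]))  # 1, b_1 ... b_M
    return np.polyval(a[::-1], x) / np.polyval(b[::-1], x)

def chi2(c, L=3, M=4):
    return np.sum(((y - pade(p2, c, L, M)) / sigma) ** 2)

npar = (3 + 1) + 4                          # order [3|4], N = 4
res = differential_evolution(chi2, [(-20, 20)] * npar, seed=1)
print(res.fun / (K - npar))                 # reduced chi-squared
\end{verbatim}
With $K=100$ points and eight free coefficients, the number of degrees of freedom is $100-8=92$; since $f_2$ is itself a ratio of polynomials in $p^2$, the fit can essentially reach $\widetilde{\chi}^2\sim1$.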
In Figure \ref{fig:Tt1curve}, we can visualise how well the generated data are fitted, by simultaneously representing the function $f_1(p^2)$, the respective generated data, and some PAs obtained within the calculated sequence, for example for $m^2=0.5$ and DE\footnote{In general, a very good fit can already be observed in graphical representations at low orders of approximation, usually $N\sim3$. This type of representation will not be presented for further cases, to avoid overloading this work with unnecessary figures. The quality of the approximations will be evaluated only through the analysis of $\widetilde{\chi}^2$.}.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt1chi2red.pdf} \caption{Achieved values of $\widetilde{\chi}^2$ for the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.} \label{fig:Tt1chi2red} \end{figure}

\begin{figure}[tp] \centering \includegraphics[width=.85\textwidth]{Tt1curve.pdf} \caption{Representation of the test data points generated from $f_1(p^2)$, which is also represented, using $m^2=0.5$ and DE, together with the obtained approximants of orders $[3|4]$ and $[17|18]$.} \label{fig:Tt1curve} \end{figure}

A summary of the results can be made by representing simultaneously, in the complex $p^2$-plane, the poles obtained for all values of $N$. In Figure \ref{fig:Tt1all}, this \textit{all-poles representation} is shown for $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. A circumference of unit radius, formed by poles with residues between $\sim\num{e-2}$, in purple, and $\sim\num{e0}$, in yellow/orange, is seen around the origin. In fact, this type of structure is an artefact of the approximation method, created by the poles of Froissart doublets, as already studied in \cite{Yamada2014}. The fact that the poles on the right-hand side of the complex $p^2$-plane have, in general, smaller residues, and are much less scattered than the ones on the left-hand side, reflects the difficulty in identifying the analytic structure beyond the region where the data are defined. We also see an accumulation of poles with high residue on the real negative $p^2$-axis, which suggests the presence of a branch cut. However, we should not be misled by this all-poles representation, as it only serves as an overview of the structures that emerge in the PA sequence. An analysis of the evolution of the distributions of poles and zeros with $N$ is required.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt1all.pdf} \caption[{All-poles representation obtained for the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{All-poles representation obtained for the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt1all} \end{figure}

The evolution, with $N$, of the off-axis poles, \ie, poles that appear at $\text{Im}(p^2)\neq0$, is represented in Figure \ref{fig:Tt1offaxis}, for $f_1(p^2)$ with $m^2=0$ and $m^2=0.5$ and for both DE and SA.
The relevant poles\footnote{When PAs are used to approximate a discrete set of data points with associated statistical errors, a residue analysis as presented in Section \ref{Sec:ResidueAnalysis} becomes impracticable, since the levels in the absolute values of the residues become indistinguishable. Thus, from now on, the residue analysis is based on the relative values of the poles' residues, \ie, poles with higher residues are considered to be more relevant.}, in orange/red tones, are not stable throughout the PA sequence in any of the cases, since they are mixed with the poles from the Froissart doublets in the circumference of unit radius. Following the conclusions of Chapter \ref{Chap:PA}, these can be considered spurious poles.

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt1offaxis.pdf} \caption[{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt1offaxis} \end{figure}

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt1onaxis.pdf} \caption[{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_1(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt1onaxis} \end{figure}

The evolution of the on-axis poles and zeros, \ie, poles and zeros that appear on the real $p^2$-axis, is shown in Figure \ref{fig:Tt1onaxis}, for $f_1(p^2)$ with $m^2=0$ and $m^2=0.5$ and for both DE and SA. For all orders of approximation, the branch cut is well reproduced by alternating poles and zeros on the real negative $p^2$-axis, in the same way as the branch cuts were reproduced in Chapters \ref{Chap:PA} and \ref{Chap:AnalyticTests}. However, whereas for the case with $m^2=0$ the branch point is well identified (the alternating poles and zeros begin at the origin), for $m^2=0.5$ the position of the branch point is not so clear. Nonetheless, there is a difference between the results of the two minimisation methods: for the DE method, poles that are unimportant according to their residues appear near the origin, while for the SA method this is not seen, and the nearest poles on the real negative semiaxis are at $p^2\sim-0.3$.

\subsection{Looking at a single pole} In Figure \ref{fig:Tt2chi2red}, the $\widetilde{\chi}^2$ achieved in the minimisation for $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, is represented for both minimisation methods. Again, the values of $\widetilde{\chi}^2$ show that the PAs provide a good approximation to the data and the function.
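As an aside, the separation into on-axis and off-axis sets used in these representations, and the alternating pole--zero pattern that signals a branch cut, are easy to automate once the poles and zeros of each approximant are available as complex arrays. A minimal sketch, where the tolerance on $\text{Im}(p^2)$ is an assumption of ours:
\begin{verbatim}
import numpy as np

def split_axis(points, im_tol=1e-3):
    # On-axis: |Im(p^2)| below the tolerance; the rest is off-axis.
    points = np.asarray(points)
    on = np.abs(points.imag) < im_tol
    return points[on].real, points[~on]

def poles_zeros_alternate(on_poles, on_zeros):
    # A branch cut is reproduced by strictly alternating poles and
    # zeros along the real negative axis.
    seq = sorted([(x, "pole") for x in on_poles] +
                 [(x, "zero") for x in on_zeros])
    labels = [tag for _, tag in seq]
    return all(a != b for a, b in zip(labels, labels[1:]))

# Toy check: interleaved poles and zeros on the negative real axis.
print(poles_zeros_alternate([-1.0, -0.5, -0.1], [-0.7, -0.3]))
\end{verbatim}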
\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt2chi2red.pdf} \caption{Achieved values of $\widetilde{\chi}^2$ for the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.} \label{fig:Tt2chi2red} \end{figure}

For $f_2(p^2)$, the same artefact that was present for $f_1(p^2)$ (the circumference of poles around the origin) also appears in the all-poles representation, in Figure \ref{fig:Tt2all}, for each value of $m^2$ and for both minimisation methods.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt2all.pdf} \caption[{All-poles representation obtained for the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{All-poles representation obtained for the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt2all} \end{figure}

The evolution of the off-axis poles, in Figure \ref{fig:Tt2offaxis}, shows that no important poles appear at complex $p^2$.

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt2offaxis.pdf} \caption[{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt2offaxis} \end{figure}

The expected pole at $p^2=-m^2$ can be seen in the evolution of the on-axis poles and zeros, in Figure \ref{fig:Tt2onaxis}. For $m^2=0$, both methods successfully identify the pole at the origin. Indeed, a very stable pole is seen at $p^2=0$ throughout the whole PA sequence. In contrast, for $m^2=0.5$, the pole at $p^2=-0.5$ is far more unstable. For the DE method, a good precision in the pole's position is only achieved for $N\in [3,5]$, while for the SA method the pole's position can be considered stable up to $N=9$, with some minor deviations. For higher orders of approximation, we observe that the poles obtained with the SA method drift toward the origin, accompanied by a decrease in the respective residue.

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt2onaxis.pdf} \caption[{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_2(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt2onaxis} \end{figure}

\subsection{Identification of the analytic structure for the perturbative propagator} Finally, for the perturbative gluon propagator $f_3(p^2)$, the generated data were, once more, well fitted by the PAs within the sequence, as can be seen from the values of $\widetilde{\chi}^2$ in Figure \ref{fig:Tt3chi2red}.
\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt3chi2red.pdf} \caption{Achieved values of $\widetilde{\chi}^2$ for the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.} \label{fig:Tt3chi2red} \end{figure}

A quick look at the all-poles representation for $f_3(p^2)$, in Figure \ref{fig:Tt3all}, shows that, for the third time, the circumference of poles around the origin is present in all cases, reinforcing the conclusion that it is an artefact of the approximation. As for $f_1(p^2)$ and $f_2(p^2)$, there are no relevant poles in the complex $p^2$-plane apart from the ones on the real negative $p^2$-axis, as can be observed in the representation of the off-axis poles, in Figure \ref{fig:Tt3offaxis}.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Tt3all.pdf} \caption[{All-poles representation obtained for the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{All-poles representation obtained for the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt3all} \end{figure}

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt3offaxis.pdf} \caption[{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the off-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt3offaxis} \end{figure}

\begin{figure}[p] \centering \includegraphics[width=.9\textwidth]{Tt3onaxis.pdf} \caption[{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA.}]{Evolution of the on-axis poles and zeros with $N$ in the PA sequence, obtained from the test data generated from $f_3(p^2)$, using $m^2=0$ and $m^2=0.5$, and for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Tt3onaxis} \end{figure}

With respect to the expected pole at $p^2=-m^2$: for the case where $m^2=0$, it is well identified within the PA sequence by a stable pole for both DE and SA, clearly seen in the on-axis poles and zeros representation, in Figure \ref{fig:Tt3onaxis}. Additionally, the pole at the origin is also identified by poles that appear near the origin in the off-axis representation obtained for the DE method (Figure \ref{fig:Tt3offaxis}). The case where $m^2=0.5$ does not present such clear results. The pole at $p^2=-0.5$ is more difficult to find in the on-axis representation (Figure \ref{fig:Tt3onaxis}). Although we can perceive the existence of a pole between $p^2=-0.6$ and $p^2=-0.3$, its exact position cannot be read with high precision. Comparing the two minimisation methods, we notice that this identification is more difficult for the SA than for the DE, for which there are some intervals of $N$ ($N\in[2,5]$ and $[15,18]$) where the identified pole appears to be almost stable.
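Stability intervals such as the $N\in[2,5]$ and $N\in[15,18]$ just quoted can be located automatically by tracking the identified pole along the sequence. A minimal sketch, in which the tolerance defining ``stable'' is an assumption of ours:
\begin{verbatim}
import numpy as np

def stable_intervals(pole_vs_N, tol=0.05):
    # pole_vs_N: dict mapping the order N to the tracked pole
    # position in the complex p^2-plane. Returns maximal runs of
    # consecutive orders over which the pole moves less than tol.
    Ns = sorted(pole_vs_N)
    runs, start = [], Ns[0]
    for a, b in zip(Ns, Ns[1:]):
        if abs(pole_vs_N[b] - pole_vs_N[a]) > tol:
            if a > start:
                runs.append((start, a))
            start = b
    if Ns[-1] > start:
        runs.append((start, Ns[-1]))
    return runs

# Toy data: stable near -0.5 for N = 2..5, then wandering.
track = {2: -0.50, 3: -0.49, 4: -0.51, 5: -0.50, 6: -0.2, 7: -0.8}
print(stable_intervals(track))  # [(2, 5)]
\end{verbatim}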
As for the reproduction of the branch cut by the PA sequence, its presence in the on-axis representation (Figure \ref{fig:Tt3onaxis}) is not as clear as it was for $f_1(p^2)$. Nonetheless, alternating poles and zeros can be seen in the interval $p^2\in[-50,0[$, for lower orders of approximation. Similarly to $f_1(p^2)$, unimportant poles appear near the origin for the DE method, whereas for the SA method this does not happen. This indicates that, although the position of the branch point can hardly be pinned down, it is not at $p^2=0$, but somewhere else on the real negative $p^2$-axis, as anticipated.

\section{Guidelines for analytic structure identification} Throughout Chapters \ref{Chap:PA} and \ref{Chap:AnalyticTests}, we performed numerical tests to investigate the analytic structure of a function and studied the distributions of poles and zeros obtained from the PA sequences. In this way, we arrive at the following list of guidelines for how the analytic structure of a function is reproduced by a PA and how it can be identified: \begin{itemize} \item Poles in the complex plane are reproduced by stable poles in the distributions of poles and zeros throughout a PA sequence. A pole at the origin is well identified by poles of the PAs with relatively high residues; in this case, poles with residues of the same order of magnitude can appear close to the origin in the off-axis representation for the DE method. A pole at a position other than the origin is also reproduced by poles with high residues; however, their position is less stable, especially at higher orders of approximation. In this case, better stability is usually obtained with the SA minimisation method. \item Branch cuts are reproduced by alternating poles and zeros. These may be easier to identify at lower orders of approximation. The exact position of the branch point is very difficult to identify, particularly if it is not at the origin. When the branch point is not at the origin, poles with a low residue appear near the origin throughout the PA sequence when the DE method is used; this does not happen for the sequence obtained with the SA minimisation. \item As $N$ grows in the PA sequence, spurious poles, as well as Froissart doublets, tend to emerge and gather around the origin, forming what looks like a circumference of unit radius. A quick residue analysis, together with a study of the poles' stability, seems to be enough to discard the undesired poles from the analytic structure. \end{itemize}

\chapter{Results and discussion} Now that we have a guide for interpreting the evolution of the distribution of poles and zeros within a PA sequence, and for how the analytic structures of functions are reproduced by it, we can proceed to apply the PA analysis to investigate the non-perturbative gluon and ghost propagators.

\section{The ghost propagator} The data for the non-perturbative ghost propagator investigated here were published in \cite{Duarte2016}. They were obtained via a lattice simulation in the Landau gauge performed on a hypercubic spacetime lattice of volume $80^4$ with 70 gauge configurations, using the Wilson gauge action for $\beta=6.0$, and renormalised in the MOM-scheme at $\mu=3~\si{GeV}$, according to \begin{equation} D(\mu^2)\Big|_{\mu=3~\si{GeV}}=\frac{1}{\mu^2}. \end{equation} The propagator data can be seen in Figure \ref{fig:GhostProp}.
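The MOM-scheme condition above amounts to a single multiplicative rescaling of the bare lattice data. A minimal sketch, assuming the bare propagator is tabulated as arrays sorted in increasing momentum and using linear interpolation to evaluate it at $\mu$:
\begin{verbatim}
import numpy as np

def renormalise_mom(p, D_bare, mu=3.0):
    # Determine Z such that Z * D_bare(mu^2) = 1/mu^2 and rescale
    # the whole data set.
    D_mu = np.interp(mu, p, D_bare)   # D at p = mu (GeV)
    Z = 1.0 / (mu**2 * D_mu)
    return Z * D_bare
\end{verbatim}
Since the renormalisation is multiplicative, the statistical errors are rescaled by the same factor $Z$.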
\begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{GhostProp.pdf} \caption[Landau gauge lattice ghost propagator used in the analysis.]{Landau gauge lattice ghost propagator used in the analysis. The statistical errors are smaller than the size of the points.} \label{fig:GhostProp} \end{figure}

Following the analysis of the previous two chapters, the lattice data for the ghost propagator were fitted by PAs of various orders within a PA sequence, using the two minimisation methods, DE and SA. In a first stage, various relations between $L$ and $M$ were tried. Following the conclusions of Section \ref{Sec:Hint}, the use of PAs of order $[N-1|N]$ proved to be the best choice\footnote{This is in accordance with the fact that, in the limit of high momenta, the full propagator has the same behaviour as the one given by the perturbative analysis.}. The values of $\widetilde{\chi}^2$ at the minima are represented in Figure \ref{fig:GhChi}, for $N\in[1,40]$. The values obtained are below unity, demonstrating both the quality of the minimisation and that the data are well reproduced by PAs.

\begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{GhChi.pdf} \caption{Achieved values of $\widetilde{\chi}^2$ for the Landau gauge lattice ghost propagator, for both methods of minimisation, DE and SA.} \label{fig:GhChi} \end{figure}

In Figure \ref{fig:GhAll}, the all-poles representation is shown for both minimisation methods. The same artefact that appeared in the last chapter (the circumference of unit radius formed by Froissart doublets) is clearly visible. An accumulation of poles with high residues on the real negative $p^2$-axis, near the origin, can also be observed, suggesting the presence of a branch cut.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{GhAll.pdf} \caption[All-poles representation obtained for the ghost propagator data, for both methods of minimisation, DE and SA.]{All-poles representation obtained for the ghost propagator data, for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:GhAll} \end{figure}

An analysis of the on-axis poles and zeros, represented in Figure \ref{fig:Ghonaxis}, shows a very stable pole with high residue at the origin, for both the DE and the SA methods. This is a clear indication of the presence of a pole at $p^2=0$ in the analytic structure of the ghost propagator. The remaining poles and zeros seem to indicate the presence of a branch cut on the real negative $p^2$-axis, with a branch point at $p^2\sim-0.1~\si{GeV^2}$\footnote{We must not forget the numerical tests made in Chapter \ref{Chap:Discrete}, where we saw that the branch point is hard to identify if it is not at the origin. Nonetheless, in this case, the stability of the on-axis poles may allow us to correctly identify the branch point's position.}.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Ghonaxis.pdf} \caption[Evolution of the on-axis poles and zeros within the PA sequence obtained for the ghost propagator data, for both methods of minimisation, DE and SA.]{Evolution of the on-axis poles and zeros within the PA sequence obtained for the ghost propagator data, for both methods of minimisation, DE and SA.
The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Ghonaxis} \end{figure}

On the other hand, looking at the evolution of the off-axis poles, in Figure \ref{fig:Ghoffaxis}, we see that no relevant stable poles appear throughout the PA sequence in the rest of the complex $p^2$-plane.

\begin{figure}[tp] \centering \includegraphics[width=\textwidth]{Ghoffaxis.pdf} \caption[Evolution of the off-axis poles within the PA sequence obtained for the ghost propagator data, for both methods of minimisation, DE and SA.]{Evolution of the off-axis poles within the PA sequence obtained for the ghost propagator data, for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Ghoffaxis} \end{figure}

\vspace{1em} These results are in agreement with \cite{Binosi2020,Hayashi2020}, where the analytic structure of the ghost propagator is suggested to be similar to the perturbative result, with no complex singularities. The results also support the no-pole condition for the ghost propagator, as proposed in \cite{Gribov1978}.

\section{The gluon propagator} For the gluon propagator, we use the results of lattice simulations in the Landau gauge using the Wilson gauge action for $\beta=6.0$, renormalised in the MOM-scheme at $\mu=3~\si{GeV}$, as for the ghost propagator in the last section. Several data sets are considered, which were obtained using: a $32^4$ lattice, with physical volume $(3.25~\si{fm})^4$ and 50 gauge configurations, from \cite{Bicudo2015}; a $64^4$ lattice, with physical volume $(6.50~\si{fm})^4$ and 2000 gauge configurations, from \cite{Dudal2018}; an $80^4$ lattice, with physical volume $(8.13~\si{fm})^4$ and 550 gauge configurations, from \cite{Dudal2018}; and a $128^4$ lattice, with physical volume $(13.01~\si{fm})^4$ and 35 gauge configurations, from \cite{Duarte2016}. These data sets for the gluon propagator are represented in Figure \ref{fig:GluonProp}\footnote{The zero momentum propagator is not considered.}. All of them are essentially compatible with each other at the one-standard-deviation level, and so they define a single curve.

\begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{GluonProp.pdf} \caption{Gluon propagator used in the analysis, obtained with four different lattice volumes.} \label{fig:GluonProp} \end{figure}

As seen in Figure \ref{fig:GluonProp}, the $128^4$ lattice simulation has more information in the IR region of momentum, \ie, $p\lesssim 1~\si{GeV}$. In fact, a larger lattice means more information in the IR. Hence, by considering these four data sets, a better sensitivity to the different regions of momentum is achieved.

\vspace{1em} As was done for the ghost propagator, various relations between $L$ and $M$ were tried. Following the conclusions of Section \ref{Sec:Hint}, the best choice proved, once more, to be PAs of order $[N-1|N]$. In Figure \ref{fig:GlChi}, the values of $\widetilde{\chi}^2$ at the minima are shown; they are essentially the same for both minimisation methods. We also observe that the quality of the approximation increases ($\widetilde{\chi}^2$ decreases) with the lattice volume.
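The compatibility of the four data sets at the one-standard-deviation level, mentioned above, can be checked with a naive pointwise test; a minimal sketch, where interpolating one set onto the momenta of the other, restricted to the overlapping range, is our simplification:
\begin{verbatim}
import numpy as np

def compatible_one_sigma(p1, y1, s1, p2, y2, s2):
    # Interpolate set 2 onto the momenta of set 1, restricted to
    # the overlapping range, and require pointwise agreement:
    # |y1 - y2| <= sqrt(s1^2 + s2^2).
    lo, hi = max(p1.min(), p2.min()), min(p1.max(), p2.max())
    sel = (p1 >= lo) & (p1 <= hi)
    y2i = np.interp(p1[sel], p2, y2)
    s2i = np.interp(p1[sel], p2, s2)
    return np.all(np.abs(y1[sel] - y2i)
                  <= np.sqrt(s1[sel]**2 + s2i**2))
\end{verbatim}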
\begin{figure}[tp] \centering \includegraphics[width=.9\textwidth]{GlChi.pdf} \caption{Achieved values of $\widetilde{\chi}^2$ for the Landau gauge lattice gluon propagator, for both methods of minimisation, DE and SA.} \label{fig:GlChi} \end{figure}

In Figure \ref{fig:GlAll}, the all-poles representation is shown for the four lattice volumes, and for both minimisation methods. Apart from the usual artefact, a new structure emerges in the complex $p^2$-plane: a conjugate pair of complex poles with high residue at $\text{Re}(p^2)<0$. This structure becomes clearer and better defined for larger lattice volumes, which indicates that this pair of poles is associated with the IR structure of the theory. On the other hand, for smaller lattice volumes, a branch cut may be identified on the real negative $p^2$-axis.

Also in Figure \ref{fig:GlAll}, a slight suggestion of the existence of another conjugate pair of complex poles may be seen at $\text{Re}(p^2)>0$. However, these poles have a lower residue and are not present for all simulations. Additionally, in some cases, the pair is not identified by both minimisation methods. Hence, further studies are needed to see whether these poles are meaningful or artefacts of the method. Herein, we will not consider the poles at $\text{Re}(p^2)>0$.

\begin{figure}[tp] \centering \includegraphics[width=.97\textwidth]{GlAll.pdf} \caption[All-poles representation obtained for the gluon propagator data with $32^4$, $64^4$, $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA.]{All-poles representation obtained for the gluon propagator data with $32^4$, $64^4$, $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:GlAll} \end{figure}

In the following subsections we examine in more detail the positions of the identified poles and branch cut.

\subsection{Complex poles}

\begin{figure}[tp] \centering \includegraphics[width=.9\textwidth]{Gloffaxis1.pdf} \caption[Evolution of the off-axis poles within the PA sequence, obtained for the gluon propagator data with $32^4$ and $64^4$ lattices, for both methods of minimisation, DE and SA.]{Evolution of the off-axis poles within the PA sequence, obtained for the gluon propagator data with $32^4$ and $64^4$ lattices, for both methods of minimisation, DE and SA. A cut in the residues at $\log|A_k|=0$ has been performed. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Gloffaxis1} \end{figure}

\begin{figure}[tp] \centering \includegraphics[width=.9\textwidth]{Gloffaxis2.pdf} \caption[Evolution of the off-axis poles within the PA sequence, obtained for the gluon propagator data with $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA.]{Evolution of the off-axis poles within the PA sequence, obtained for the gluon propagator data with $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA. A cut in the residues at $\log|A_k|=0$ has been performed. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Gloffaxis2} \end{figure}

In Figures \ref{fig:Gloffaxis1} and \ref{fig:Gloffaxis2}, the off-axis poles are represented for the four lattice volumes, and for both minimisation methods. A cut in the residues at $|A_k|=1$ has already been performed, so that only the relevant poles appear.
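Once a stable window in $N$ is chosen (as discussed next), the position estimate reduces to an arithmetic average of the complex pole locations within that window. A minimal sketch, where folding the conjugate pair onto the upper half-plane and quoting the sample standard deviation as the uncertainty are assumptions of ours:
\begin{verbatim}
import numpy as np

def average_pole(positions):
    # positions: complex pole locations read off from the stable
    # window of the PA sequence (e.g. N in [2, 8] for DE). The
    # conjugate pair is folded onto the upper half-plane.
    z = np.asarray(positions)
    re, im = z.real, np.abs(z.imag)
    return (re.mean(), re.std(ddof=1)), (im.mean(), im.std(ddof=1))
\end{verbatim}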
As already seen in the all-poles representations, the complex poles are more stable throughout the PA sequence for larger lattice volumes, particularly for lower values of $N$. For this reason, the results for the simulation with the largest lattice volume, \ie, the $128^4$ lattice, are the appropriate ones from which to read off the positions of the poles. In the last chapter, we saw that the position of a pole is identified with precision only at lower orders of approximation. In this case, although the pole is reproduced throughout the whole PA sequence by poles with high residue, only the ones obtained at lower values of $N$ should be used to estimate the position of this singularity in the analytic structure of the propagator. In fact, looking at the off-axis poles for the DE method, in Figure \ref{fig:Gloffaxis2}, we observe that the poles are more stable for $N\in[2,8]$ than for the remaining orders. As for the SA method, although in the representation of Re$(p^2)$ the pole is very stable in one position for $N\in[2,7]$, and in another position for $N\in[8,40]$, the representation of Im$(p^2)$ shows otherwise: for $N\in[8,40]$, the imaginary part is much less stable. This behaviour is in accordance with the results of the last chapter, where we saw that, starting at $N\sim9$ and only for the SA method, the identified pole began to spuriously drift toward the origin, accompanied by a decrease in the respective residue.

The poles obtained with $N\in[2,8]$ for the DE method, and the ones obtained with $N\in[2,7]$ for the SA method, are used to estimate the position of these singularities in the analytic structure\footnote{Although the remaining poles are not used, their appearance is important to confirm the presence of these poles in the analytic structure of the propagator.}. An arithmetic average of the respective poles' positions gives the following results for the position of the poles present in the analytic structure of the gluon propagator, for both minimisation methods: \begin{align*} \text{DE:}\quad &p^2=-0.332(30) \pm i 0.506(11)~\si{GeV^2};\\ \text{SA:}\quad &p^2=-0.311(20) \pm i 0.500(10)~\si{GeV^2}. \end{align*}

In Figure \ref{fig:ResultsComp}, the above results are represented, together with results obtained in previous studies. In \cite{Dudal2018}, the tree-level prediction of the RGZ action, which describes the lattice data up to $p\sim1~\si{GeV}$, was used to obtain the position of the singularity in the gluon propagator; a global fit identified a pole at Re$(p^2)\in[-0.32,-0.20]~\si{GeV^2}$ and Im$(p^2)\in\pm[0.38,0.59]~\si{GeV^2}$. In \cite{Binosi2020}, a fixed-order PA, computed with the Schlessinger Point Method (SPM) mentioned in Section \ref{Sec:SimpleFit}, found the singularity at $p^2=-0.30(7) \pm i 0.49(3)~\si{GeV^2}$ for the same $64^4$ lattice data used here, and at $p^2=-0.21(3) \pm i 0.34(2)~\si{GeV^2}$ for the decoupling solution of the DSE.

\begin{figure}[tp] \centering \includegraphics[width=.8\textwidth]{ResultsComp.pdf} \caption[Position of the poles present in the analytic structure of the gluon propagator obtained in the present work and in other studies.]{Position of the poles present in the analytic structure of the gluon propagator obtained in the present work (DE and SA) and in other studies (see text). The confidence region for the positions is defined as an ellipse, whose semiaxes correspond to the errors associated with each result.
Only one of the complex poles in the conjugate pair is shown.} \label{fig:ResultsComp} \end{figure}

By comparing the above results (see Figure \ref{fig:ResultsComp}), we can conclude that there is a discrepancy between the result obtained via the DSE and the ones obtained using lattice data (DE, SA, SPM--lattice, and RGZ), even though the latter were obtained with different approaches to reproducing the lattice data. Indeed, the pole found with the DSE is at smaller absolute values of $\text{Re}(p^2)$ and $\text{Im}(p^2)$, and is thus closer to the origin. Regarding the results based on the lattice data (DE, SA, SPM--lattice, and RGZ), we see that the identified pole lies at a more negative $\text{Re}(p^2)$, but at a broadly similar $\text{Im}(p^2)$, when compared with the result from the DSE. Nonetheless, the good overall compatibility in the position of the conjugate pair of complex poles is reassuring. Furthermore, they match the analysis inspired by Gribov--Zwanziger-type actions \cite{Dudal2018}, for which the solution for the gluon propagator is itself a ratio of polynomials and, thus, a type of PA.

\subsection{Branch cut and branch point}

\begin{figure}[tp] \centering \includegraphics[width=.9\textwidth]{Glonaxis1.pdf} \caption[Evolution of the on-axis poles and zeros within the PA sequence, obtained for the gluon propagator data with $32^4$ and $64^4$ lattices, for both methods of minimisation, DE and SA.]{Evolution of the on-axis poles and zeros within the PA sequence, obtained for the gluon propagator data with $32^4$ and $64^4$ lattices, for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Glonaxis1} \end{figure}

\begin{figure}[tp] \centering \includegraphics[width=.9\textwidth]{Glonaxis2.pdf} \caption[Evolution of the on-axis poles and zeros within the PA sequence, obtained for the gluon propagator data with $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA.]{Evolution of the on-axis poles and zeros within the PA sequence, obtained for the gluon propagator data with $80^4$ and $128^4$ lattices, for both methods of minimisation, DE and SA. The colour scheme encodes the absolute value of the residue of each pole.} \label{fig:Glonaxis2} \end{figure}

The evolution of the on-axis poles and zeros within the PA sequence is shown in Figures \ref{fig:Glonaxis1} and \ref{fig:Glonaxis2}, for both minimisation methods, DE and SA, applied to the four simulation data sets. In contrast to the complex poles identified in the last subsection, the presence of a branch cut on the real negative $p^2$-axis is more evident for the smaller lattices, where the alternating poles and zeros are more noticeable. Regarding the branch point, we concluded, in Chapter \ref{Chap:Discrete}, that its position is difficult to identify using the present methodology. Notwithstanding, we can infer, from Figures \ref{fig:Glonaxis1} and \ref{fig:Glonaxis2}, that the branch point is not at the origin, but somewhere between $p^2=0$ and $p^2=-0.5~\si{GeV^2}$. A possible way to better identify the branch cut and the branch point might be the use of a much larger ensemble of gauge configurations.
Another approach to this problem could be to use approximants inspired by the perturbative gluon propagator, for example, \begin{equation} D_{gl}(p^2)\approx\frac{Q_L(p^2)}{R_M(p^2)}\left[\omega \ln \frac{S_F(p^2)}{T_G(p^2)}+1\right]^{-\gamma}, \end{equation} where $Q_L(p^2)/R_M(p^2)$ and $S_F(p^2)/T_G(p^2)$ are the usual PAs of orders $[L|M]$ and $[F|G]$, respectively. In this way, the information about the branch cut and the branch point could be accessed from the analysis of the poles and zeros of $S_F(p^2)/T_G(p^2)$. Several such tests were made in the context of this work; however, the results obtained were very unstable, and are therefore not reported here.

Notwithstanding, the interval of momenta identified above for the position of the branch point is in agreement with the naive identification of the latter with the ``gluon mass'' term quoted in \cite{Siringo2016} and \cite{Gracey2019}, which is $0.12~\si{GeV^2}$ and $0.36~\si{GeV^2}$, respectively. Additionally, a mass scale of $0.216~\si{GeV^2}$ was obtained in \cite{Dudal2018}. Thus, we clearly see the connection between the position of the branch point and the mass scale that regularises the logarithmic correction to the perturbative result. This mass scale prevents the IR logarithmic divergence of the propagator, making it finite at zero momentum.

\chapter{Conclusions and future work} Throughout this work, we explored the use of Padé Approximants to compute the analytic structure of the Landau gauge gluon and ghost propagators. The approximants were built by solving a global optimisation problem that minimises a chi-squared function, resorting to two different numerical minimisation methods. The PAs were shown to faithfully reproduce the original functions, as well as their analytic structure. This allowed us to have a first glimpse of the analytic structure of the propagators, \ie, to identify their singularities and branch cuts, without relying on a particular theoretical or empirical model to describe the lattice data.

Our methodology revealed the existence of a conjugate pair of poles in the complex $p^2$-plane for the gluon propagator, clearly stemming from the IR structure of the theory. The presence of these complex singularities supports their connection with the non-perturbative phenomenon of confinement. Regarding the ghost propagator, a single pole was found, at $p^2=0$, in agreement with the respective perturbative result. A branch cut on the real negative $p^2$-axis was identified in the analytic structure of both propagators, with the branch points at $\text{Re}(p^2)<0$. Unfortunately, precise values for their positions could not yet be achieved.

\vspace{1em} In the future, it would be important to refine the methodology used here, in order to improve the sensitivity to the branch point position. Larger lattice volumes could be simulated, in order to better probe the IR structure of QCD. It would be interesting, as well, to reconstruct and explore the spectral functions for the gluon and the ghost, using the PAs obtained here for the respective propagators. Additionally, the general properties of the propagators in different gauges could be investigated by considering lattice simulations performed in other gauges. \addcontentsline{toc}{chapter}{References} \printbibliography[title=References] \end{document}
\section{Introduction} Recent advances suggest that neutrino flavor transformation in core-collapse supernovae may occur on much shorter scales than previously believed. This new class of collective effects has been dubbed fast flavor conversion (FFC), in reference to the fact that the associated instabilities grow at rates proportional to the neutrino self-coupling potential $\mu = \sqrt{2} G_F n_\nu$ \cite{sawyer2005, sawyer2009, sawyer2016, chakraborty2016b, chakraborty2016c, dasgupta2017, dasgupta2018, dasgupta2018b, airen2018, abbar2018, abbar2019, capozzi2019, capozzi2019b, yi2019, chakraborty2020, martin2020, johns2020, bhattacharyya2020, abbar2020, shalgar2020b, capozzi2020, xiong2020, shalgar2020, bhattacharyya2020b}. Although a comprehensive appraisal of their prevalence and importance is still lacking, fast instabilities have now been located in several simulations of core-collapse supernovae and accretion-disk systems in remnants of coalescing neutron-star--neutron-star and neutron-star--black-hole binaries \cite{tamborra2017, wu2017, wu2017b, abbar2019b, azari2019, shalgar2019, nagakura2019, morinaga2020, glas2020, abbar2020b, padilla2020, george2020}. It appears quite possible that FFC engenders significant dynamical and compositional changes in these environments. In Ref.~\cite{johns2020} we proposed that FFC stems from specific features of the equations of motion that become more apparent upon expanding in angular moments. Taking the neutrino density to be high, the neutrino angular distributions to be axially symmetric, and the matter background to be homogeneous on the scales of interest, neutrino flavor evolves on short timescales according to the equation \begin{equation} \dot{\mathbf{P}}_v + v \mathbf{P}'_v \cong \mu \left( \mathbf{D}_0 - v \mathbf{D}_1 \right) \times \mathbf{P}_v, \label{pveom} \end{equation} where $\mathbf{P}_v$ is the polarization vector of neutrinos with propagation angle $v = \cos\theta$ ($P_{v,z}$ being its flavor content), $\mathbf{D}_l$ is the $l$th Legendre moment of $\mathbf{P}_v - \mathbf{\bar{P}}_v$, and the overdot and prime denote derivatives with respect to time and space. In the limit that the flavor field is homogeneous, FFC is the result of pendulum-like motion of $\mathbf{D}_1$, a point supported by linear stability and numerical analysis. In the other analytically tractable limit, where the flavor field is stationary, FFC appears to be the result of pendulum-like motion of $\mathbf{D}_0$. Although the conjecture regarding steady-state solutions is supported by direct manipulation of the equations of motion, we have not attempted to test it with stability analysis or numerics. In the numerical calculations of Ref.~\cite{johns2020}, FFC is observed to be nearly periodic. Aperiodicity occurs because each angular moment is coupled to its neighbors in an infinite chain, and nothing prevents power from making its way out to ever higher $l$, never to return. On longer time scales the influence of finite $\omega = \delta m^2 / 2p$ is felt as well, $p$ being neutrino momentum and $\delta m^2$ the mass-squared difference. Crucial conservation laws are broken by $\omega \neq 0$, and slow collective effects (with growth rates proportional to $\sqrt{\omega \mu}$) come into play, interacting with fast oscillations. Whereas in our previous study we emphasized the importance of low-$l$ moments in shaping FFC, here we focus on the dynamics at later times and smaller angular scales. 
Central concepts include irreversibility and collisionless relaxation. Under unitary evolution, there is only one way for a dense system of neutrinos to relax to flavor equilibrium: by transferring power to smaller scales in phase space. The asymptotic state is not a true equilibrium, which can only be achieved with collisional processes, but it is equilibrated in a coarse-grained sense, and for practical purposes the coarse-grained flavor field is what matters most. Viewing neutrino flavor evolution through the lens of collisionless relaxation brings out more clearly some of its important parallels with other physical systems. We discuss phase-space transfer at a general level in Sec.~\ref{analogues}. Flavor relaxation via phase-space transfer has been observed and discussed in numerous references, often with the appellations \textit{kinematic decoherence} and \textit{multi-angle decoherence}, to be contrasted with the collisional variety. Coarse-grained equilibration has been witnessed repeatedly in numerical toy models. Yet the theory describing how the flavor field relaxes---on what time scale, to what asymptotic ensemble, and with what parametric dependence---is underdeveloped. In our view, this is one of the most important gaps in our current understanding of the supernova oscillation problem, as it is the missing link connecting linear instabilities and idealized collective phenomena to realistic neutrino transport. Had we powerful enough computers, the full problem could be simulated and the physics we seek to understand would be incorporated organically. But alas we do not, and given the astrophysical stakes, waiting for computing power to catch up does not strike us as prudent. The foundational exposition of multi-angle decoherence was given by Raffelt and Sigl in Ref.~\cite{raffelt2007b}, where they showed that a nearly isotropic gas of neutrinos is prone to rapid flavor depolarization. One of the important achievements of that paper was to trace the origin of depolarization to exponential growth of $\mathbf{D}_1$ on a time scale proportional to $\sqrt{\omega\mu}^{-1}$. Recent years have seen this phenomenon reinterpreted as an example of a multi-zenith-angle (MZA) instability. But while linearized dispersion relations are a powerful analytic tool, they do not supersede hard-won insights at the level of the polarization vectors. The goal, after all, is not just to locate instabilities, but to understand their consequences. Shortly after the publication of Ref.~\cite{raffelt2007b}, it was established that realistic asymmetries between the fluxes of $\nu_e$ and $\bar{\nu}_e$ typically prevent this instability from taking hold during deleptonization \cite{esteban2007} (with possible caveats now coming from the emerging understanding of \textit{global} asymmetries in the supernova neutrino emission \cite{vartanyan2019}). Still, the observations regarding multi-angle decoherence remain interesting, especially in light of the connection between FFC and pendulum-like motion of $\mathbf{D}_1$. The exponential-growth solution is accompanied by rapid decoherence because, in accord with a conservation law in the equations of motion [Eq.~\eqref{esenergy} below], the growth occurs at the expense of $\mathbf{S}_0$, the $0$th Legendre moment of $\mathbf{P}_v + \mathbf{\bar{P}}_v$. In homogeneous FFC, however, the vector magnitude $D_1$ is constant. 
If \textit{fast} instability is to be complicit in multi-angle decoherence, it must accomplish this end differently than the slow instability identified in Ref.~\cite{raffelt2007b}. We show in Sec.~\ref{cascade} that FFC hastens relaxation by a distinct mechanism in which the flavor field is kinematically decohered by polarization-vector dephasing.\footnote{Here and in the following we use ``dephasing'' to refer to nonzero $\mathbf{A}\times\mathbf{B}$ for two polarization vectors $\mathbf{A}$ and $\mathbf{B}$. Most often, the relevant dephasing will be with respect to $\mathbf{D}_1$. We use ``kinematic decoherence'' to encompass any kind of collisionless depolarization.} This mechanism, whereby power leaks to smaller scales in momentum space, is in fact always operative, even when the flavor field is linearly stable (see below). Any amount of anisotropy is enough to expose the system to this source of effective irreversibility. Regardless of its origin, momentum-space transfer is a rather grave computational concern. Once power cascades down to the smallest angular scale of the calculation, the flavor field begins to feel the finite resolution of momentum space. Errors at the smallest scale become magnified as they propagate back up the chain of multipoles. As we show in Sec.~\ref{numerical}, even a minuscule amount of power slipping out to high $l$ is enough to wreak havoc. One solution, of course, is to evolve a large number of multipoles, but since the details at very high $l$ are practically irrelevant, this is a highly wasteful approach in a setting where one can ill afford to be profligate. A better solution would be to close the equations at a reasonable $l$ in such a way as to avoid spurious backreaction. Consistent with our previous study, we find that \textit{fast oscillations} are decently well approximated by evolving only the largest angular scales. But for \textit{relaxation} to occur, the low-$l$ moments must be able to lose power to the ``inertial range'' that forms at higher $l$. In this sense, cascade is a necessary evil. We do not solve the closure problem here, but at the end of Sec.~\ref{numerical} we indicate possible paths forward and point to some of the strategies that have been employed in plasma kinetics. In Sec.~\ref{conc} we summarize our analysis. The results presented here are based on a simplified model of flavor evolution and are by no means direct predictions of real-world outcomes. The issues we address, however, are quite general. If it turns out that instabilities in realistic settings prompt rapid flavor equipartition (as assumed in Refs.~\cite{wu2017b}, for example), or something comparably simple and robust, then the details of how the flavor field relaxes might be purely academic. But failing that, the physics of relaxation must be understood more deeply, and momentum-space transfer will be an essential element. We speculate, in the final section, on how our findings are likely to be modified in more realistic models. \section{Collisionless relaxation \label{analogues}} The transfer of energy across scales in phase space is a nearly universal feature of weakly collisional kinetic systems. In a classical context, it is related most basically to the law of inertia and the Hamiltonian preservation of phase-space density. The latter constrains what collisionless trajectories can accomplish: The best they can do, as far as transforming the distribution in phase space, is to stretch, squeeze, and knead it. 
Because phase-space transfer, which describes such reconfigurations, tends to move the system toward more typical macroscopic states, it serves as the classical mechanism of collisionless relaxation. Cascade---by which we mean sustained, directed transfer---occurs when a system's approach to equilibrium is facilitated by preferentially shifting energy to smaller scales than the one at which it is sourced (or, in the case of inverse cascade, to larger ones). Fluid and magnetohydrodynamic turbulence may be the most famous examples: In the classic turbulent system, energy at the driving scale cascades down to smaller scales until viscous dissipation becomes efficient. Fluids, of course, are not weakly collisional, but they are the exception that proves the rule of transfer's commonness. Even under the non-Hamiltonian, momentum-isotropizing effects of collisions, turbulent dynamics takes advantage of the other half of phase space and works across spatial scales. Despite neutrino oscillations being a quantum phenomenon, classical kinetics is a good starting point for thinking about weakly collisional neutrino transport. In the quantum-kinetic description \cite{sigl1993, vlasenko2014} (from which Eq.~\eqref{pveom}, for example, derives), physically relevant variations in time and space occur over scales large enough that the Wigner \textit{quasiprobability} distribution resembles a bona fide \textit{probability} distribution. Many-body correlations are also argued, one way or another, to be ignorable \cite{friedland2003, volpe2013}. Neutrino quantum kinetics then appears to describe a theory of individual particles, each carrying quantum degrees of freedom but free-streaming classically. These considerations imply significant resemblances between neutrino flavor evolution and classical kinetics, despite the quantum character of the former. Direct cascade in a kinetic system corresponds, in coordinate space, to the formation of small-scale inhomogeneities and, in momentum space, to the formation of small-scale spectral distortions and angular features. Coordinate- and momentum-space transfer (of which cascade is a special case) are both already possible in the collisionless, field-less Vlasov equation in one spatial dimension: \begin{equation} \dot{f} (t,x,u) + u f' (t,x,u) = 0, \label{simplevlasov} \end{equation} where $f$ is a particle distribution function and $u$ is the particle velocity. If the spectrum is monochromatic, $f$ is simply translated along $x$. But with a spectrum of velocities, inhomogeneity and polychromaticity work together to make for more complicated evolution, involving mode transfer in $x$- and $u$-space. The point, to rephrase the opening remarks of this section, is that particle free-streaming on its own is sufficient to generate smaller-scale features over time. Nonzero fields can enrich the dynamics further, but either way, Liouville's theorem implies that a collisionless Vlasov system can only approach equilibrium in a coarse-grained sense, by hiding the fluctuations below the resolved granularity. Cascade commonly poses numerical challenges. In high-Reynolds-number fluids, like those found in the convective regions of supernovae, spatially resolving the evolution down to the dissipation scale is often impossible \cite{radice2018}. 
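
The ease with which free streaming generates such fine-grained structure can be seen directly. The following sketch is purely illustrative (the perturbation, velocity spectrum, and grid are arbitrary choices, not taken from any calculation in this paper); it evolves Eq.~\eqref{simplevlasov} exactly for a sinusoidally perturbed initial condition:

\begin{verbatim}
# Free streaming in the field-less Vlasov equation:
# f(t, x, u) = f(0, x - u*t, u).  Illustrative parameters only.
import numpy as np

k, eps = 2 * np.pi, 0.1            # perturbation wavenumber, amplitude
u = np.linspace(-1.0, 1.0, 2001)   # velocity grid
g = np.exp(-u**2)                  # smooth velocity spectrum

def f(t, x):
    # Exact solution: the initial data advected along x - u*t.
    return (1.0 + eps * np.cos(k * (x - u * t))) * g

for t in [0.0, 5.0, 50.0]:
    fu = f(t, x=0.0)
    # Filamentation: at fixed x, the distribution oscillates in u with
    # wavenumber k*t, so velocity-space gradients grow linearly in time,
    grad = np.max(np.abs(np.gradient(fu, u)))
    # while the u-integrated (coarse-grained) density relaxes.
    dens = np.trapz(fu, u)
    print(f"t = {t:5.1f}  max|df/du| = {grad:8.2f}  density = {dens:.4f}")
\end{verbatim}

The velocity-space gradients grow without bound while the coarse-grained density settles down: collisionless relaxation in its simplest guise, and a preview of the numerical difficulties that follow.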
In a collisionless plasma, phase-space filamentation causes numerical solutions of the Vlasov equation to fail in finite time, an issue that has been the subject of computationally oriented investigation going back decades \cite{joyce1971, klimas1987}. In collisionless gravitational systems, closely related issues arise when solving for the evolution in phase space \cite{rasio1989}. Cascade must be handled delicately in all of these settings. Besides being the cause of numerical trouble, it is also of \textit{physical} importance, being crucial to (among other things) Reynolds stresses in fluids, phase-space structures and enhanced collisionality in plasmas \cite{dupree1972, pezzi2016}, and phase mixing and violent relaxation in gravitational systems \cite{lynden1967}. In a collisionless gas of neutrinos, there are multiple potential sources and forms of phase-space transfer. The phenomenon most like the previously mentioned examples is the formation of spectral distortions and filamentation due to classical transport: \begin{equation} \dot{f}_{\nu_\alpha} (t, \mathbf{x}, \mathbf{\hat{u}}, E) + \mathbf{\hat{u}} \cdot \boldsymbol{\nabla} f_{\nu_\alpha} (t, \mathbf{x}, \mathbf{\hat{u}}, E) = 0, \label{classical} \end{equation} where $\alpha$ identifies flavor. Because ultrarelativistic neutrinos all travel with velocity $u \approx c = 1$ (hence the appearance of the unit vector $\mathbf{\hat{u}}$), Eq.~\eqref{classical} lacks the dispersive shearing exhibited by Eq.~\eqref{simplevlasov}. Along a given streamline, spectral distortions therefore appear only in the angle-integrated distributions. Other than that, Eq.~\eqref{classical} permits all of the same relaxation processes one expects from the collisionless, field-less Vlasov equation. In practice, however, collisionless relaxation is not a particularly helpful lens through which to view the \textit{classical} transport of supernova neutrinos. It adds little to the usual picture of neutrinos as transiting from an opaque region to a transparent one as they propagate outward in radius. The angular distributions of momentum, for instance, are isotropized by emission, absorption, and scattering, and are rendered more forward-peaked by free-streaming out from a roughly spherical geometry. The microphysics and the environment are simply not conducive to small-scale angular features beyond those already captured by interpolating between the isotropic and single-beam limits. This fact underpins the adequacy of moment methods in neutrino radiative transfer. The situation is quite different with oscillating neutrinos. Flavor ceases to be conserved along trajectories, and the nonlinearity arising from neutrino--neutrino forward scattering supports collective effects that promote collisionless relaxation. Rather in contrast to collisional processes, oscillations \textit{favor} the generation of small-scale features in the neutrino flavor field of a supernova. The oscillation terms engineer this outcome in different ways. Ignoring matter currents (and suppressing independent variables, which are the same as above), the equation of motion of polarization vector $\mathbf{P}$ with unit velocity vector $\mathbf{\hat{u}}$ is \begin{align} \dot{\mathbf{P}} + \mathbf{\hat{u}} \cdot \boldsymbol{\nabla} \mathbf{P} = \bigg[ \omega &\mathbf{B} + \lambda \mathbf{L} + \sqrt{2} G_F \int d\Gamma' \mathbf{D}' \notag \\ &- \sqrt{2} G_F \int d\Gamma' \left( \mathbf{\hat{u}}\cdot\mathbf{\hat{u}}' \right) \mathbf{D}' \bigg] \times \mathbf{P}.
\label{quantum} \end{align} Here, $d\Gamma'$ is a phase-space element, with the integrals over all (anti)neutrino momenta at $(t, \mathbf{x})$; $\lambda = \sqrt{2} G_F n_e$ is the matter potential; and $\mathbf{B}$, $\mathbf{L}$, and $\mathbf{D}$ are the mass, flavor, and difference vectors, respectively. The terms on the right-hand side of Eq.~\eqref{quantum} affect phase-space transfer in distinct ways. The dispersive $\omega \mathbf{B}$ term elicits transfer in $\mathbf{p}$-space---something that we noted was not possible in Eq.~\eqref{classical}---by causing neutrinos of different energies to dephase. This effect is sometimes (but not always) what has been meant by the term \textit{kinematic decoherence}; it represents the momentum-space complement to wave-packet separation in coordinate space \cite{akhmedov2016}. Like all of the terms, $\omega \mathbf{B}$ interacts with convection. Structures in $\mathbf{p}$-space that are generated by it in one spatial region thus get communicated to other regions, and vice versa. Under the common convention that antineutrinos obey the same equations but with $\omega \rightarrow - \omega$, the vacuum term is also responsible for dephasing neutrinos from antineutrinos. $\lambda \mathbf{L}$ is independent of neutrino momentum and therefore has no \textit{direct} effect on $\mathbf{p}$-space transfer. This is why it can be rotated away in settings that are fully homogeneous, neutrino flavor field included. The term is not entirely without consequence, however, even in the fully homogeneous case, as exemplified by the logarithmic slowdown of the bipolar instability \cite{hannestad2006}. In settings that are inhomogeneous in the flavor field, if not also in the matter background, it is responsible for multi-angle matter suppression \cite{esteban2008}. From one viewpoint, this phenomenon is due to the compression of oscillation patterns along non-radial trajectories. Equivalently, it can be seen as a dephasing effect associated with convection. This is the sense in which $\lambda \mathbf{L}$ alters $\mathbf{p}$-space transfer indirectly. In the homogeneous limit, which we adopt in the rest of the paper, the isotropic nonlinear term $\int d\Gamma' \mathbf{D}'$ synchronizes neutrinos and antineutrinos of different momenta by acting equally on all of them \cite{samuel1993, kostelecky1995, qian1995a, pastor2002, johns2016, johns2018}. The anisotropic nonlinear term $\int d\Gamma' \left( \mathbf{\hat{u}}\cdot\mathbf{\hat{u}}' \right) \mathbf{D}'$, on the other hand, causes dephasing because of its dependence on $\mathbf{\hat{u}}$. If the system is sufficiently simple or symmetric, (nearly) periodic anisotropic collective oscillations can result; the dephasing need not be permanent. One of the main points of this paper, though, is that realistically this term always drives \textit{some} dephasing that is effectively irreversible. To some extent, if the environment is stationary \textit{and axially symmetric}, the roles of the nonlinear terms are swapped, $\int d\Gamma' \left( \mathbf{\hat{u}}\cdot\mathbf{\hat{u}}' \right) \mathbf{D}'$ acting to synchronize and $\int d\Gamma' \mathbf{D}'$ acting to dephase \cite{johns2020}. While the stationary and homogeneous limits give intuition, in general the flavor field is spatiotemporally evolving, and the influence of axial asymmetry may be significant. In this paper we focus exclusively on momentum-space transfer, using monochromatic (or integrated-over) spectra for simplicity's sake.
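
The synchronizing and dephasing roles of the nonlinear terms can be illustrated with a toy calculation. The sketch below is ours and purely illustrative: it evolves $N$ mono-directional neutrino polarization vectors with a spread of vacuum frequencies and no antineutrinos (so that the difference vector reduces to the polarization itself), takes $\mathbf{B}$ perpendicular to the initial polarization purely to make the dephasing visible, and uses arbitrary parameter values.

\begin{verbatim}
# Toy demonstration of synchronization by the isotropic
# neutrino-neutrino term.  Mono-directional gas, no antineutrinos,
# maximal mixing; all parameters are arbitrary choices.
import numpy as np
from scipy.integrate import solve_ivp

N = 20
omega = np.linspace(0.8, 1.2, N)   # spread of vacuum frequencies
B = np.array([1.0, 0.0, 0.0])      # mass axis

def rhs(t, y, mu):
    P = y.reshape(N, 3)
    H = omega[:, None] * B + mu * P.mean(axis=0)  # isotropic coupling
    return np.cross(H, P).ravel()

P0 = np.tile([0.0, 0.0, 1.0], N)   # all vectors start flavor-aligned
for mu in [0.0, 100.0]:
    sol = solve_ivp(rhs, (0.0, 50.0), P0, args=(mu,), rtol=1e-8)
    P = sol.y[:, -1].reshape(N, 3)
    # |<P>| stays near 1 when synchronized, shrinks when dephased.
    print(f"mu = {mu:5.1f}  |<P>| = {np.linalg.norm(P.mean(axis=0)):.3f}")
\end{verbatim}

With $\mu$ large compared to the spread in $\omega$, the vectors remain collectively aligned; with $\mu = 0$, vacuum dephasing shrinks the average polarization, a rudimentary instance of the momentum-space transfer discussed above.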
Recent progress in the physics of collective neutrino oscillations has brought to the fore the significance of momentum-space anisotropy for flavor \textit{instability}. Our goal is to develop a deeper understanding of how anisotropy enables flavor \textit{relaxation}. As a starting point, and to make contact with Ref.~\cite{johns2020}, we also adopt axial symmetry. We comment on the extensions to axial asymmetry and $\mathbf{x}$-space transfer in the concluding section. \section{Oscillations and cascade \label{cascade}} For this analysis, we consider homogeneous, axially symmetric, collisionless flavor evolution, and we assume that the neutrino system is functionally monochromatic due to the high neutrino density. These simplifications allow us to isolate certain fundamental aspects of the momentum-space dynamics. In Sec.~\ref{conc} we offer some comments regarding the effects of inhomogeneity, axial asymmetry, and collisions. Effects associated with a spectrum of energies are not pursued here. \subsection{Analysis of collective effects \label{analysis}} With the assumptions stated above, and with the matter potential rotated out of the problem, the equations of motion of the neutrino and antineutrino polarization vectors are \begin{align} &\dot{\mathbf{P}}_v = +\omega \mathbf{B} \times \mathbf{P}_v + \mu \left( \mathbf{D}_0 - v \mathbf{D}_1 \right) \times \mathbf{P}_v, \notag \\ &\dot{\bar{\mathbf{P}}}_v = -\omega \mathbf{B} \times \mathbf{\bar{P}}_v + \mu \left( \mathbf{D}_0 - v \mathbf{D}_1 \right) \times \mathbf{\bar{P}}_v. \label{homogeom} \end{align} (Relative to Eq.~\eqref{pveom} we have dropped the convective term and restored the vacuum term.) Forming the sum and difference vectors $\mathbf{S}_v = \mathbf{P}_v + \mathbf{\bar{P}}_v$ and $\mathbf{D}_v = \mathbf{P}_v - \mathbf{\bar{P}}_v$, then taking Legendre moments, one has \begin{align} &\dot{\mathbf{S}}_l = \omega \mathbf{B} \times \mathbf{D}_l + \mu \mathbf{D}_0 \times \mathbf{S}_l \notag \\ &\hspace{.85 in}- \frac{\mu}{2} \mathbf{D}_1 \times \left( a_l \mathbf{S}_{l-1} + b_l \mathbf{S}_{l+1} \right), \notag \\ &\dot{\mathbf{D}}_l = \omega \mathbf{B} \times \mathbf{S}_l + \mu \mathbf{D}_0 \times \mathbf{D}_l \notag \\ &\hspace{.85 in}- \frac{\mu}{2} \mathbf{D}_1 \times \left( a_l \mathbf{D}_{l-1} + b_l \mathbf{D}_{l+1} \right), \end{align} where $a_l = 2l / (2l+1)$ and $b_l = 2 (l+1) / (2l+1)$ \cite{raffelt2007b}. With $\mu \gg \omega$, $\mathbf{D}_0$ is constant on $\mu^{-1}$ timescales. Fast instability must therefore be caused by instability in $\mathbf{D}_1$. To understand the evolution of $\mathbf{D}_1$ on short timescales, it is helpful to switch to a frame rotating with frequency $\mu D_0$ about $\mathbf{\hat{D}}_0$. We will use primes to denote vectors in the rotating frame. Following Ref.~\cite{johns2020}, we define a (unit-length) ``pendulum vector'' \begin{equation} \boldsymbol{\delta}' = \frac{\mathbf{D}'_1}{D_1}, \end{equation} having angular momentum \begin{equation} \mathbf{L}' = \frac{1}{3} \left( \mathbf{D}'_0 + 2 \mathbf{D}'_2 \right) \end{equation} and spin \begin{equation} \sigma = \boldsymbol{\delta}' \cdot \mathbf{L}', \end{equation} and being acted on by a gravitational force that points opposite to \begin{equation} \mathbf{G}' = \frac{2}{5} \mathbf{D}'_3. 
\label{gravity} \end{equation} Dropping terms proportional to $\omega$, one then finds that $\boldsymbol{\delta}'$ satisfies the pendulum-like equation \begin{equation} \frac{\boldsymbol{\delta}' \times \ddot{\boldsymbol{\delta}}'}{\mu} + \sigma \dot{\boldsymbol{\delta}}' = \mu D_1 \mathbf{G}' \times \boldsymbol{\delta}', \label{fastpendulum} \end{equation} warranting the interpretations just given. Furthermore, the length $D_1$ of the pendulum is constant and its motion is restricted by conservation of the energy-like quantity \begin{equation} E_D = \mu \mathbf{G}' \cdot \mathbf{D}'_1 + \frac{\mu}{2} \mathbf{L}'^2, \end{equation} as well as by a series of more complicated conservation laws. $\mathbf{G}'$ plays the role of gravity but is itself coupled to $\boldsymbol{\delta}'$ directly and to higher moments via $\mathbf{D}'_4$. These features distinguish the fast pendulum from the slow (bipolar) pendulum of Ref.~\cite{hannestad2006}, for which gravity is a fixed external field. Despite the differences, the pendulum analysis accounts for features of fast oscillations seen in linear stability and nonlinear numerics. We reiterate, though, that the fast-pendulum analysis relies on vacuum effects being negligible aside from seeding instability. Nonzero $\omega$ leads to nonconservation of $E_D$, $\mathbf{D}_0$, and $D_1$, among other quantities. A different quantity, \begin{equation} E_S = \omega \mathbf{B} \cdot \mathbf{S}_0 + \frac{\mu}{2} \left( \mathbf{D}_0^2 - \mathbf{D}_1^2 \right), \label{esenergy} \end{equation} is still conserved, however \cite{raffelt2007b}. In isotropic settings, this is the energy of the bipolar pendulum \begin{equation} \mathbf{Q} = \mathbf{S}_0 - \frac{\omega}{\mu} \mathbf{B}. \end{equation} Letting $\mathbf{q} = \mathbf{Q} / Q$ and assuming isotropic angular distributions, the bipolar pendulum equation reads \begin{equation} \frac{\mathbf{q} \times \ddot{\mathbf{q}}}{\mu} + \varsigma \dot{\mathbf{q}} = \omega Q \mathbf{B} \times \mathbf{q}, \label{slowpendulum} \end{equation} where the spin $\varsigma$ of the pendulum is \begin{equation} \varsigma = \mathbf{q} \cdot \mathbf{D}_0. \end{equation} Allowing again for anisotropic angular distributions, the monopole now evolves as \begin{equation} \dot{\mathbf{S}}_0 = \omega \mathbf{B} \times \mathbf{D}_0 + \mu \mathbf{D}_0 \times \mathbf{S}_0 - \mu \mathbf{D}_1 \times \mathbf{S}_1, \label{s0eqtn} \end{equation} hence \begin{equation} \dot{\mathbf{Q}} = \mu \mathbf{D}_0 \times \mathbf{Q} - \mu \mathbf{D}_1 \times \mathbf{S}_1. \label{qeqtn} \end{equation} Anisotropy disrupts the slow pendulum, and it does so in a way that is mediated by the fast pendulum $\mathbf{D}_1$. 
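
For readers who wish to experiment with these equations, a bare-bones moment integrator is sketched below. It is illustrative only: it is not the code used for the calculations in this paper, and the initial moments and the ratio $\mu / \omega$ are toy values rather than simulation data. It implements the Legendre-moment hierarchy with hard truncation at $l_\textrm{max}$ and monitors the invariant $E_S$ of Eq.~\eqref{esenergy}, whose drift is a useful integration diagnostic (see Sec.~\ref{numerical}).

\begin{verbatim}
# Toy integrator for the truncated Legendre-moment hierarchy.
# Moments above lmax are treated as zero (hard truncation).
import numpy as np
from scipy.integrate import solve_ivp

lmax, omega, mu = 40, 1.0, 1.0e3
B = np.array([0.0, 0.0, -1.0])     # mass axis

def rhs(t, y):
    S, D = y.reshape(2, lmax + 1, 3)
    dS, dD = np.zeros_like(S), np.zeros_like(D)
    for l in range(lmax + 1):
        a, b = 2 * l / (2 * l + 1), 2 * (l + 1) / (2 * l + 1)
        Sp = S[l + 1] if l < lmax else 0.0
        Dp = D[l + 1] if l < lmax else 0.0
        Sm, Dm = (S[l - 1], D[l - 1]) if l > 0 else (0.0, 0.0)
        dS[l] = (np.cross(omega * B, D[l]) + mu * np.cross(D[0], S[l])
                 - 0.5 * mu * np.cross(D[1], a * Sm + b * Sp))
        dD[l] = (np.cross(omega * B, S[l]) + mu * np.cross(D[0], D[l])
                 - 0.5 * mu * np.cross(D[1], a * Dm + b * Dp))
    return np.concatenate([dS.ravel(), dD.ravel()])

def E_S(y):
    S, D = y.reshape(2, lmax + 1, 3)
    return omega * B @ S[0] + 0.5 * mu * (D[0] @ D[0] - D[1] @ D[1])

y0 = np.zeros((2, lmax + 1, 3))
y0[0, 0] = [1e-3, 0.0, 1.9]   # S_0 with a small transverse seed
y0[0, 1] = [0.0, 0.0, 0.5]    # S_1
y0[1, 0] = [0.0, 0.0, 0.1]    # D_0
y0[1, 1] = [1e-3, 0.0, 0.3]   # D_1
sol = solve_ivp(rhs, (0.0, 0.05), y0.ravel(), rtol=1e-9, atol=1e-12)
print(f"relative drift in E_S: {E_S(sol.y[:, -1]) / E_S(sol.y[:, 0]) - 1:.2e}")
\end{verbatim}

The $\mathbf{D}_1$-mediated nearest-neighbor coupling is manifest in the final term of each derivative, and the hard truncation is exactly the closure whose consequences are examined in Sec.~\ref{numerical}.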
\begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_6000m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_6000m_ih_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_6000m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_6000m_ih_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_d1_70km_6000m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.42\textwidth]{plot_d1_70km_6000m_ih_5e-7.pdf} } \end{subfigure} \caption{Flavor evolution---$P_{0,z}$ (top), $S_0$ (middle), and $D_1$ (bottom)---with time $t$ in homogeneous, axially symmetric calculations, using the angular distributions and ratio $\mu / \omega$ from a spherically symmetric \textsc{Fornax} simulation at a radius of 70 km and a post-bounce time of 200 ms. The $\bar{\nu}_e$ angular distribution is rescaled to $\alpha = n_{\bar{\nu}_e} / n_{\nu_e} = 0.90$ (black) and $\alpha = 0.85$ (red), and the normalization is such that $| \mathbf{P}_{0} (0) | = 0.5$. The normal hierarchy (NH) is shown on the left, inverted hierarchy (IH) on the right. Time is in units of $\omega^{-1}$. For typical supernova conditions and a neutrino energy of $\sim 10$ MeV, the horizontal axis spans about a microsecond. Gray curves represent the same data as the black curves but time-averaged over windows of duration $6\times 10^{-3} \omega^{-1}$, spanning about 3 periods of fast collective oscillations. In the bottom panels, the dashed lines mark the initial values. At $\alpha = 0.90$, the system is unstable to fast oscillations, which facilitate kinematic decoherence (decay of $S_0$) by a dephasing mechanism. See text for further discussion.} \label{p0z_6000m} \end{figure*} Raffelt and Sigl \cite{raffelt2007b} studied kinematic decoherence in systems with equal fluxes of $\nu_e$ and $\bar{\nu}_e$. They observed that, because $E_S$ is conserved, $S_0$ can only go to zero if appropriate changes occur in $D_0$, $D_1$, or both. In particular, they identified an exponential solution for $\mathbf{D}_1$ as being key to decoherence. Specifically, when the $\mathbf{D}_l$ vectors are small, \begin{equation} \ddot{\mathbf{D}}_1 \cong - \frac{1}{3} \omega \mu \left( \mathbf{B} \cdot \mathbf{S}_0 \right) \mathbf{D}_1. \label{d1exp} \end{equation} In the normal hierarchy (NH), $\mathbf{\hat{B}} \cdot \mathbf{\hat{S}}_0 \approx -1$ and exponential growth is possible when $\mathbf{S}_0$ is in its stable position. In the inverted hierarchy (IH), exponential growth only occurs when the pendulum is unstable and swings away from its initial inverted position. Scenarios with small $\mathbf{D}_l$ vectors are special cases, however, and it has been observed in numerical simulations that collective oscillations often exhibit behavior more in line with single-angle expectations than multi-angle decoherence \cite{duan2006, duan2006b}. As shown in Ref.~\cite{esteban2007}, a large difference between the $\nu_e$ and $\bar{\nu}_e$ fluxes suppresses decoherence. 
Working with the bulb model, wherein neutrinos are emitted semi-isotropically from a single decoupling surface, and using $F(\nu_\alpha)$ to denote the number flux of flavor $\alpha$, the authors identified \begin{equation} \epsilon = \frac{F(\nu_e) - F(\bar{\nu}_e)}{F(\bar{\nu}_e) - F(\bar{\nu}_x)} \end{equation} as the decisive asymmetry parameter determining the transition from quasi-single-angle to decoherent multi-angle behavior. Insofar as bulb-model results are accounted for by analyzing the homogeneous Eqs.~\eqref{homogeom}, the suppression of decoherence can be attributed to other terms showing up on the right-hand side of Eq.~\eqref{d1exp}. It is now understood that the bulb model is itself a special case, assuming as it does that all flavors have angular distributions of the same shape. Quasi-single-angle evolution, kinematic decoherence, and fast oscillations are all related to asymmetries in the fluxes, and in a realistic supernova no single parameter controls the relative significance of each of these facets of the problem. The relevance to fast oscillations of the ratios $D_l / D_0$, for example, was emphasized in Ref.~\cite{johns2020}. The importance of asymmetry in controlling kinematic decoherence can be seen in the equations \begin{align} &\dot{\mathbf{S}}_1 = \omega \mathbf{B} \times \mathbf{D}_1 + \mu \mathbf{D}_0 \times \mathbf{S}_1 + \frac{1}{3} \mu \left( \mathbf{S}_0 + 2 \mathbf{S}_2 \right) \times \mathbf{D}_1, \notag \\ &\dot{\mathbf{D}}_1 = \omega \mathbf{B} \times \mathbf{S}_1 + \frac{2}{3} \mu \left( 2 \mathbf{D}_0 + \mathbf{D}_2 \right)\times \mathbf{D}_1, \label{l1eoms} \end{align} where as usual the role of $\mathbf{D}_1$ in coupling adjacent moments is on display. If $D_0$ is large enough, these equations become \begin{align} &\dot{\mathbf{S}}_1 \cong \mu \mathbf{D}_0 \times \mathbf{S}_1, \notag \\ &\dot{\mathbf{D}}_1 \cong \frac{4}{3} \mu \mathbf{D}_0 \times \mathbf{D}_1. \label{adiabaticd0} \end{align} That is, both $\mathbf{S}_1$ and $\mathbf{D}_1$ tend to track $\mathbf{D}_0$ as it evolves, keeping the decohering term $\mathbf{D}_1 \times \mathbf{S}_1$ small in Eq.~\eqref{s0eqtn}. Then, since \begin{equation} \dot{\mathbf{D}}_0 = \omega \mathbf{B} \times \mathbf{S}_0 \end{equation} as always, the equation of motion of $\mathbf{S}_0$ is approximately that of the isotropic bipolar system. The larger $D_0$ is, the more adiabatic the tracking, the more suppressed the decoherence, and the more accurate the isotropic approximation. A large isotropic asymmetry $D_0$ is not the only way to prevent multi-angle decoherence, however. Large initial values of $D_1$ can also preempt the exponential solution of Eq.~\eqref{d1exp}. The goal of the analysis thus far has been to indicate how fast collective oscillations, slow collective oscillations, and multi-angle decoherence all arise out of the same system of equations. These phenomena can be understood in isolation by appealing to Eqs.~\eqref{fastpendulum}, \eqref{slowpendulum}, and \eqref{d1exp}, respectively. In a linear analysis, they correspond to the fast, bipolar, and multi-zenith-angle instabilities. \subsection{Relaxation and cascade} We now turn to one of our central points, which is that decoherence need not proceed through exponential decay of $S_0$. Even in the quasi-isotropic limit, $\mathbf{D}_1 \times \mathbf{S}_1$ does not vanish precisely and decoherence is expected to take place at some level. 
Moreover, fast oscillations of $\mathbf{D}_1$ have the potential to accelerate the relaxation rate by dephasing $\mathbf{D}_1$ and $\mathbf{S}_1$. This is quite in contrast to bipolar oscillations, which tend to keep these vectors (anti)aligned as discussed above. Acceleration of kinematic decoherence by FFC is shown in Fig.~\ref{p0z_6000m}, which presents numerical solutions using $\mu / \omega$ and angular distributions taken from a spherically symmetric \textsc{Fornax} simulation at 200 ms post-bounce and a radius of $70$ km. The number density $n_{\bar{\nu}_e}$ is treated as a free parameter, and the ratio $\alpha = n_{\bar{\nu}_e} / n_{\nu_e}$ is varied in order to compare evolution under conditions stable to FFC ($\alpha = 0.85$) and conditions unstable to it ($\alpha = 0.90$). The top panel of the figure compares the isotropic flavor composition $P_{0,z}$ for the two choices of $\alpha$. In both mass hierarchies, $\alpha = 0.85$ exhibits quasi-isotropic evolution, whereas $\alpha = 0.90$ exhibits fast oscillations modulated by bipolar motion and---as confirmed by the middle panel, showing $S_0$---by kinematic decoherence. Consistent with the foregoing analysis, no decoherence is visible in the quasi-isotropic evolution, but it is substantial in the NH when the system is unstable to fast oscillations. FFC-assisted decoherence is evident in the IH as well, albeit to a lesser extent. The bottom panel indicates that decoherence proceeds through the growth (NH) or decay (IH) of $D_1$, as expected from the conservation of $E_S$. (Though not shown, $D_0$ is very nearly constant in all calculations presented in the figure.) On $\mu^{-1}$ timescales and in the limit $\mu \gg \omega$, the mass hierarchies are described by approximately identical equations of motion. But as noted previously, $D_1$ is constant under the same assumptions. Decoherence and its hierarchy-dependence can thus be traced back to the term $\omega \mathbf{B} \times \mathbf{S}_1$ in the second of Eqs.~\eqref{l1eoms}. In particular, \begin{equation} \frac{d}{dt} D_1^2 = - 2 \omega \mathbf{B} \cdot \left( \mathbf{D}_1 \times \mathbf{S}_1 \right). \end{equation} The mass hierarchies are distinguished by the orientation of the decohering term $\mathbf{D}_1 \times \mathbf{S}_1$ with respect to $\omega \mathbf{B}$. \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.45\textwidth]{plot_pilvsl_70km_90a_6000m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.45\textwidth]{plot_pilvsl_70km_90a_6000m_ih_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.45\textwidth]{plot_pilvsl_70km_85a_6000m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.45\textwidth]{plot_pilvsl_70km_85a_6000m_ih_5e-7.pdf} } \end{subfigure} \caption{Momentum-space angular power spectra of neutrino flavor for the same set of calculations presented in Fig.~\ref{p0z_6000m}. The power $\Pi_l$ in angular moment $l$ is defined in Eq.~\eqref{powerdef}. From darkest to lightest, the curves show $\log_{10} \Pi_l$ at $t = 0.2$, $0.4$, $0.6$, $0.8$, and $1.0$, in units of $\omega^{-1}$. FFC enhances cascade and hastens relaxation. 
All cases show interesting features at angular scales intermediate between the expanding front of nonzero power and the low-$l$ scales directly involved in fast and bipolar collective oscillations.} \label{pilvsl_6000m} \end{figure*} \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pvz_70km_90a_6000m_nh_5e-7_01.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pvz_70km_90a_6000m_nh_5e-7_05.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pvz_70km_90a_6000m_nh_5e-7_10.pdf} } \end{subfigure} \caption{Flavor composition $P_{v,z}$ plotted as a function of propagation angle $v = \cos \theta$, for the $\alpha = 0.90$, NH calculation of the previous figures. The curves are color-coded by time, beginning at the top of a fast-oscillation dip (blue), lasting a duration of $\sim 10^{-3}$, and ending at the bottom of the dip at time $t_f$ (red). As usual, times are given in units of $\omega^{-1}$. Finite-$\omega$ effects---the disruption of the fast pendulum and the cascade of power to smaller angular scales---become increasingly apparent at later times.} \label{pvz_6000m} \end{figure*} Conservation of the energy-like quantity $E_S$ offers one perspective on collisionless relaxation. Another perspective is provided by unitarity. Defining the power $\Pi_l$ in angular moment $l$, \begin{equation} \Pi_l = \left( l + \frac{1}{2} \right) \big| \mathbf{P}_l \big| ^2, \label{powerdef} \end{equation} unitarity implies that the sum of the power over all angular scales is conserved: \begin{equation} \sum_{l=0}^\infty \Pi_l = \textrm{const}. \label{unitarity} \end{equation} (Similar quantities can of course be defined for antineutrinos. In the cases we study the evolution of antineutrinos is comparable to that of neutrinos.) As the isotropic moment relaxes, higher ones are excited. Recalling that the term in the Hamiltonian proportional to $\mathbf{D}_1$ couples each $\mathbf{P}_l$ to its nearest neighbors, the default expectation is that power will continue to be relayed out to higher $l$. This process constitutes collisionless relaxation via phase-space transfer in homogeneous, axially symmetric flavor evolution. Power is principally lost from the isotropic moments. Although we do not analyze the impact of collisions in this paper, their influence---and especially the relative importance of emission/absorption and scattering processes---will depend on where power resides and how it is transferred collisionlessly. Fig.~\ref{pilvsl_6000m} displays the ``angular power spectra'' in both mass hierarchies, again comparing $\alpha = 0.85$ and $\alpha = 0.90$. As anticipated, cascade occurs in all cases and is greatly magnified by FFC. Enhancement is seen both in the rate at which power travels out to higher $l$ and in the amplitude at which it does so. As the cascade front moves outward, roughly flat regions form in its wake, expanding and becoming flatter with time. (For $\alpha = 0.90$, the rapid oscillations as a function of $l$ are flattened by time-averaging over many fast oscillation periods.) The orders-of-magnitude difference between typical $\Pi_l$ values in the $\alpha = 0.90$ and $\alpha = 0.85$ calculations explains why kinematic decoherence is visible in Fig.~\ref{p0z_6000m} for the former but not the latter. 
These regions are analogous to the inertial ranges encountered in fluid turbulence, which span intermediate scales between those at which driving and dissipation occur. Here the driving is provided by oscillations at large angular scales. Oscillations induce dephasing and thereby momentum-space transfer. In this collisionless system, there is no analogue of a dissipation scale, and so power continues to cascade perpetually out to higher $l$. The asymptotic state, one imagines, is infinitesimal power equally distributed over all moments out to infinity. This is the closest the system can get to fully relaxed while still satisfying unitarity, Eq.~\eqref{unitarity}. Over time, as power cascades to higher $l$, small-scale angular features become increasingly apparent in the flavor composition $P_{v,z}$ as a function of propagation angle $v$. Fig.~\ref{pvz_6000m} shows this development by comparing the $v$-dependence of fast oscillations at different times. Interaction of FFC with momentum-space transfer and slow collective evolution causes the $P_{v,z}$ profile and its periodicity to be increasingly disrupted \cite{johns2020, shalgar2020}. The effect should not be overstated, however, as the outline of the $\omega \rightarrow 0$ collective behavior persists to some degree, at least in these calculations. The essence of the fast pendulum remains even as the pendulum is affected and modulated by finite-$\omega$ effects. We have seen, in this subsection, how the dephasing mechanism of Sec.~\ref{analysis} brings about kinematic decoherence; how the mechanism is intensified by FFC; how the power lost at large angular scales cascades down to smaller ones, relaxing the system in a unitary manner; and how cascade manifests as small-scale angular structures in the flavor evolution. Next we look more closely at the nature of momentum-space transfer far from the monopole. \subsection{Transfer at small angular scales} Further insight into the momentum-space dynamics is gained by observing the evolution of an initially isolated seed in the angular power spectrum. An experiment of this kind allows us to focus on transfer without the complicating features specific to low-$l$ evolution, namely fast and bipolar collective oscillations and the existence of a natural cutoff at $l=0$. (In a real physical setting these features are of course essential, and Ref.~\cite{johns2020} was devoted to studying them.) The section following this one then goes to the other extreme, examining the effects on transfer of an artificial cutoff at $l = l_\textrm{max}$. In Fig.~\ref{pilvsl_diff}, power has been placed by hand at $t=0$ in the $l=50$ moments of $\mathbf{P}_v$ and $\mathbf{\bar{P}}_v$. Color-coded snapshots of the angular power spectrum are shown. These calculations were done using the NH and $\alpha = 0.85$. Without the $l=50$ seeds, they would be the same as the calculation shown in the bottom-left panel of Fig.~\ref{pilvsl_6000m}. The top panel of Fig.~\ref{pilvsl_diff} shows the evolution that results when the seeds are chosen to be parallel to the flavor axis $\mathbf{z}$ at $t=0$. Gently sloping plateaus form on the sides of the $l=50$ spike in the angular power spectrum and expand outward over time. Because of the relatively low amplitudes of these plateaus, little power is lost from $l=50$. The bottom panel shows the evolution when the seeds are initially \textit{perpendicular} to the flavor axis. In this case, a single plateau forms, which encompasses $l=50$ and is flat when averaged over suitable intervals of time.
The entire spike vanishes into this expanding region, which consequently has a much higher amplitude than in the top panel, where only a tiny fraction of the seeded power is sapped by the moments neighboring $l=50$. In contrast with the top panel, here the inverse cascade emanating from the seed overwhelms the direct cascade from low $l$. \begin{figure} \centering \begin{subfigure}{ \centering \includegraphics[width=.44\textwidth]{plot_pilvsl_diff_diag.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.44\textwidth]{plot_pilvsl_diff_off.pdf} } \end{subfigure} \caption{Angular power spectra in calculations with power seeded at $l=50$. In the plot labels, ``Parallel'' indicates that $\mathbf{P}_{50}$ and $\mathbf{\bar{P}}_{50}$ are set parallel to the flavor axis $\mathbf{z}$ at $t=0$; ``Perpendicular'' indicates that they are perpendicular to the flavor axis and parallel to each other. From darkest to lightest, the curves are at times $1.1 \times 10^{-5}$, $2.2 \times 10^{-3}$, $4.4 \times 10^{-3}$, $6.6 \times 10^{-3}$, and $8.8 \times 10^{-3}$ in units of $\omega^{-1}$.} \label{pilvsl_diff} \end{figure} The plateaus in $\Pi_l$ form due to the final term in \begin{align} &\dot{\mathbf{P}}_l = \omega \mathbf{B} \times \mathbf{P}_l + \mu \mathbf{D}_0 \times \mathbf{P}_l \notag \\ &\hspace{.85 in}- \frac{\mu}{2} \mathbf{D}_1 \times \left( a_l \mathbf{P}_{l-1} + b_l \mathbf{P}_{l+1} \right). \label{pleqtn} \end{align} $\mathbf{D}_1$ causes $\mathbf{P}_{49}$ and $\mathbf{P}_{51}$ to sap power from the seed, but only when $\mathbf{P}_{50}$ has a part orthogonal to $\mathbf{D}_1$. This accounts for the different behaviors seen in the two panels. When $\mathbf{\hat{P}}_{50}$ starts (and stays) close to $\pm \mathbf{\hat{D}}_1$, power gradually leaks out of $l=50$, but when they are initially orthogonal, $\mathbf{D}_1$ quickly smears out the power concentrated there. Based on a non-exhaustive parametric study with fixed initial distributions, we find that the spreading rate and height of the plateau are proportional to $\mu$ and, in the parallel case, $\left( \omega \sin 2\theta / \mu \right)^2$, respectively. (Since the seed quickly dissolves into the plateau in the perpendicular calculation, the plateau's height is simply set by the seed power and the plateau width.) These findings are sensible. The final term in Eq.~\eqref{pleqtn} tells us that the rate of moment transfer is scaled by $\mu$, hence the proportionality to $\mu$ of the spreading rate. As for the height, the angular separation between $\mathbf{\hat{P}}_{50}$ and $\pm \mathbf{\hat{D}}_1$ is roughly expected to scale like $\omega \sin 2 \theta / \mu$, where the numerator is the typical separation due to vacuum oscillations and the denominator is suppression due to vectors adiabatically tracking $\mathbf{D}_0$, as discussed below Eq.~\eqref{adiabaticd0}. Because $\Pi_l \propto | \mathbf{P}_l |^2$, one roughly expects the scaling reported above. Unsurprisingly, larger initial $D_0$, which enhances adiabaticity, does tend to decrease the plateau height. Conversely, a larger initial $D_1$ \textit{increases} the height, as it shifts the balance from adiabaticity to decoherence: The third term on the right-hand side of Eq.~\eqref{pleqtn} is enhanced relative to the second one. The empirical scalings are not always straightforward, however, as the typical angular separations are determined by competing effects. 
The spreading rate, dependent only on the last term of Eq.~\eqref{pleqtn}, is independent of $D_0$ and proportional to $D_1$. In the parallel case, the decoherence rate of $\mathbf{P}_{50}$ is expected to be roughly the product of the height and twice the rate at which each plateau front advances, the factor of two coming from the fact that the seed sources both a direct and an inverse cascade. As noted before, these plateaus slope downward away from $l=50$. We have not attempted to estimate the slope. In the perpendicular case, the plateau is approximately flat when averaged over a time scale long relative to the fluctuations in a given $\mathbf{P}_l$ and short relative to the expansion rate. The flatness reflects the efficient scrambling of the initial seed. The $\mu \mathbf{D}_1$ term that causes the plateau to expand is, naturally, also responsible for transferring power between neighbors \textit{within} the plateau. As cascade proceeds, power therefore remains equilibrated (again in a time-averaged sense) over the entirety of the plateau. The parallel seed, in contrast, is a persistent, ``low-entropy'' feature that greatly delays this sort of equilibration. It is worth making contact here with the observation of Ref.~\cite{raffelt2007b} that the equations of motion bear resemblance to drift--diffusion equations in multipole space. Treating $l$ as a continuous variable, the authors showed that the $\mathbf{D}_l$ equation of motion, for example, can be rewritten as [their Eq.~(50)] \begin{align} \dot{\mathbf{D}}_l \cong \omega \mathbf{B} &\times \mathbf{S}_l - \mu \mathbf{D}_1 \times \left( \frac{1}{2l+1} \frac{d \mathbf{D}_l}{dl}+ \frac{1}{2} \frac{d^2 \mathbf{D}_l}{dl^2} \right) \notag \\ &\hspace{.3 in}+ \mu \left( \mathbf{D}_0 - \mathbf{D}_1 \right) \times \mathbf{D}_l. \label{drift} \end{align} In this interpretation of the equations, equipartition between the kinetic and potential energies constituting $E_S$ [Eq.~\eqref{esenergy}] implies that the drift speed and diffusivity are proportional to $\sqrt{\omega \mu}$. Equipartition was expected and observed in Ref.~\cite{raffelt2007b} due to saturation of the exponential instability. The situation here is different. Because our test cases are stable against the exponentially growing $\mathbf{D}_1$ solution, equipartition is not reached, and relaxation occurs instead through subtle dephasing effects in which the oscillatory behavior of $\mathbf{D}_1$ is critical. The drift--diffusion behavior reflects this distinction, as it is linked to the mechanism that releases power from low $l$. \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_200m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_200m_ih_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_200m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_200m_ih_5e-7.pdf} } \end{subfigure} \caption{Same as the top four panels of Fig.~\ref{p0z_6000m}, but the calculations here were done using $l_\textrm{max} = 199$. 
Significant discrepancies, including spurious decoherence, set in at late times due to truncation of the angular-moment expansion.} \label{p0z_200m} \end{figure*} \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_85a_200m_nh_5e-7_1.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_85a_200m_nh_5e-7_2.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_85a_200m_nh_5e-7_3.pdf} } \end{subfigure} \caption{Angular power spectra, comparing a calculation (NH, $\alpha = 0.85$) with $l_\textrm{max} = 199$ (light blue) to one that is converged in $l_\textrm{max}$ (dark blue). In the former case, error accrues at $l_\textrm{max}$ and propagates back to lower $l$. Power artificially builds up until backreaction incites spurious decoherence.} \label{pilvsl_200m_slow} \end{figure*} \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_90a_200m_nh_5e-7_1.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_90a_200m_nh_5e-7_2.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.31\textwidth]{plot_pilvsl_70km_90a_200m_nh_5e-7_3.pdf} } \end{subfigure} \caption{Same as Fig.~\ref{pilvsl_200m_slow}, but with $\alpha = 0.90$. Errors associated with truncation at $l_\textrm{max}$ again cause highly spurious evolution. Note that the sampled times and the range of the vertical axis differ with respect to the previous figure.} \label{pilvsl_200m_fast} \end{figure*} \section{Numerical challenges \label{numerical}} \begin{figure*} \centering \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_4m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_p0z_70km_4m_ih_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_4m_nh_5e-7.pdf} } \end{subfigure} \begin{subfigure}{ \centering \includegraphics[width=.43\textwidth]{plot_s0_70km_4m_ih_5e-7.pdf} } \end{subfigure} \caption{Same as the top four panels of Fig.~\ref{p0z_6000m}, but the calculations here were done using $l_\textrm{max} = 3$. Because $l \leq 3$ are the most relevant for evolution of the slow and fast pendulums, important features of the evolution are decently well approximated. Relaxation is \textit{not} captured, however, because momentum-space cascade is prohibited by the small number of angular moments.} \label{p0z_4m} \end{figure*} In the previous section we observed the continuous transfer of power to smaller angular scales. In some instances, though, such as when the system is stable to FFC, the cascade is of such a modest magnitude that kinematic decoherence is negligible. It might be supposed that the cascade is therefore harmless and irrelevant. That conclusion, we now show, would be incorrect. Spurious evolution occurs when the angular-moment expansion is truncated at too small a value of $l$. Fig.~\ref{p0z_200m}, based on calculations with $200$ moments, exemplifies this point. Here all $\mathbf{P}_l$ and $\mathbf{\bar{P}}_l$ are set to zero at all times for $l \ge 200$. The figure can be directly compared to Fig.~\ref{p0z_6000m}, which is converged in the number of moments. Although the calculations agree at early times, discrepancy eventually sets in. 
The most dramatic difference is that the calculations with 200 moments are beset by spurious decoherence. Tellingly, its onset is largely independent of mass hierarchy but does depend on whether FFC is present or not. The root of the problem is especially clear in the $\alpha = 0.85$ calculations. Fig.~\ref{pilvsl_200m_slow} shows the angular power spectrum $\log_{10} \Pi_l$, comparing the 200-moment calculations to the converged ones. Once the power at $l=199$ becomes nonzero, errors due to truncation start to accrue and propagate back to lower moments. As time proceeds, power artificially builds up in the ``inertial range'' rather than passing through $l=199$ out toward infinity. The build-up, through its backreaction on large angular scales, causes the spurious decoherence seen in the preceding figure. By the end of the 200-moment calculation, the time-averaged power is nearly equipartitioned over $0 \leq l \leq 199$. The situation is similar in the $\alpha = 0.90$ calculations, as shown in Fig.~\ref{pilvsl_200m_fast}. Errors arise at $l=199$, propagate backward, and disrupt the low-$l$ evolution. Ultimately the system tends toward moment equipartition. The artificial build-up of power in the inertial range is less pronounced here, but still evident. Because FFC amplifies and quickens the cascade of power, spurious features arise earlier in the presence of FFC than in its absence. It is well known that multi-angle calculations of neutrino flavor evolution are plagued by spurious instabilities. The traditional solution, working with a discretized approach, has been to chop up the angular coordinate very finely. In bulb-model computations, the total number of angle bins is typically $\mathcal{O}(10^3)$, if not more. Given that modern discrete-ordinate supernova simulations employ $\mathcal{O}(10)$ bins \cite{nagakura2018}, the apparent need to evolve thousands of bins to track oscillations is clearly alarming. Past work aimed at understanding the origin of spurious instabilities has demonstrated the appearance of erroneously unstable modes in the linear regime \cite{sarikas2012c, morinaga2018}. As the angular resolution is increased, the unstable modes proliferate in number but migrate toward the real axis of the complex plane of frequency $\Omega$ (for temporally growing modes) or wave number $K$ (for spatially growing ones). It has been argued that by working with integral quantities instead of discretizing, problematic logarithms are explicitly retained in the linear analysis and spurious instabilities are consequently avoided \cite{morinaga2018, johns2020}. (Possible evidence against this is found in Ref.~\cite{sarikas2012c}, whose authors reported that spurious instabilities still appeared in their analysis when using angular moments. Since they did not go into detail on this point, it is difficult for us to interpret their finding.) If this argument is correct, then spurious instabilities are more properly thought of as numerical artifacts of discretization, as opposed to resolution. The distinction is that spurious instabilities never fully disappear for discretized distributions, no matter how fine the resolution is, whereas they never appear for distributions expanded in basis functions, no matter how coarse the resolution is. This viewpoint is physically appealing, as there is no reason (that we are aware of) for thinking that collective instabilities are acutely sensitive to the details at extremely small scales.
It also seems to be evidenced by Refs.~\cite{sarikas2012c} and \cite{morinaga2018}, which both found that physical modes are already rather well captured with very few bins. Rephrasing slightly, the problem is not that a low-resolution calculation is unaware of the real physics. The problem is that it also knows about the spurious side effects of discretization. The spurious evolution we are addressing in this paper is due to errors in the evolution of $\mathbf{P}_{l_\textrm{max}}$ and $\mathbf{\bar{P}}_{l_\textrm{max}}$ being sequentially communicated to larger angular scales through the $\mathbf{D}_1$-mediated coupling of moment $l$ to moment $l - 1$. Even if angular moments evade the problem of spurious evolution as the term has usually been used (\textit{i.e.}, in reference to instabilities apparently arising from discretization), they do not escape this kind. We surmise that backreaction may have been responsible for the limitations of the multipole calculations reported on in Ref.~\cite{duan2014}, but again, lacking details, we cannot be sure. Another closely related though distinct numerical phenomenon is the recurrence effect described in Ref.~\cite{raffelt2007b}. Recurrence occurs in the moment-truncated system because the number of degrees of freedom has been rendered artificially finite. Power is reflected at $l_\textrm{max}$ and, after a long enough period, the system is (nearly) restored to its initial configuration. As the authors of that paper note, nonlinearity is subdominant in their toy model demonstrating recurrence, allowing for the exponential solution to be effectively reversed as power returns to $l=0$. In our calculations, where kinematic decoherence is effected by the dephasing mechanism, power is scrambled over the finite range of moments as power reflected at $l_\textrm{max}$ interacts with power cascading from $l = 0$. We believe that the lack of recurrence in our calculations is, from another perspective, related to the different drift--diffusion behavior [Eq.~\eqref{drift}] that results from the relaxation process of interest to us here. It is also worth distinguishing the build-up/backreaction problem from another numerical pitfall. Step size is always an issue in the numerical integration of differential equations, but in this system it is particularly critical that a small enough step size be used that $E_S$ is conserved to high precision. Failure to do so can enable the (often rather sudden) onset of spurious decoherence---a problem not quite the same as the step-by-step accumulation of error. We have found in our Runge--Kutta calculations that, for the same step size or error criterion, nonconservation of $E_S$ is typically much more severe when FFC occurs. The spurious onset of decoherence due to integration error is, once more, closely related to but distinct from the onset due to moment truncation. In Ref.~\cite{johns2020} it was pointed out that the moments $l \leq 3$ are indispensable for homogeneous, axially symmetric FFC. The equation of motion of the fast pendulum, Eq.~\eqref{fastpendulum}, involves these four moments explicitly. Retaining only $l \leq 2$ makes FFC impossible, since the pendulum is never unstable if the gravity vector $\mathbf{G}'$ [Eq.~\eqref{gravity}], a rotated and scaled version of $\mathbf{D}_3$, always vanishes. Fortuitously, supernova simulations using M1 closure do provide the first four moments of the classical angular distributions.
For these reasons it is interesting to look at the results of evolving only the first four moments. Fig.~\ref{p0z_4m} shows the same calculations that were plotted in Fig.~\ref{p0z_6000m} (converged in $l_\textrm{max}$) and Fig.~\ref{p0z_200m} ($l_\textrm{max} = 199$), but now using $l_\textrm{max} = 3$. The results, including the fast oscillations, are not entirely dissimilar from those presented in Fig.~\ref{p0z_6000m}. The principal difference is that relaxation is virtually nonexistent in the calculations with only four moments, owing to the fact that there is no inertial range to facilitate kinematic decoherence. Not having an inertial range turns out to be helpful for capturing fast collective oscillations because power is unable to build up artificially and backreact on large scales, but the price is that the realistic transfer of power to small scales cannot occur either. We suspect this finding might have some utility in more sophisticated approaches, for instance if the full flavor transformation can be approximated by taking the $l \leq 3$ evolution and modulating it with a prescription for relaxation. The guiding idea behind such a technique would be that, although relaxation is important, and cascade necessary for achieving it, the details at high $l$ are not critical. If an approach of this kind can be made to work, it would still remain to be seen whether it generalizes to more realistic models, where there may not be as neat a division between large-scale ``pendulum moments'' and small-scale ``cascade moments.'' As mentioned in Sec.~\ref{analogues}, the breakdown of numerical calculations due to increasingly fine phase-space structure is a familiar problem in plasma kinetics. Various strategies have been developed in response. Early work established that the breakdown is caused by approximating an infinite spectrum with a finite one and proposed the addition of small imaginary parts to the truncated set of eigenvalues, either by analytically extrapolating past the evolved modes or explicitly adding damping or collisional terms to the equations of motion \cite{grant1967, joyce1971}. Other approaches include periodically filtering the phase-space distribution in such a way as to smooth out the fine features \cite{cheng1976}, or allowing small-scale information to escape the calculation by imposing outgoing-wave boundary conditions \cite{eliasson2001}. All of these methods artificially generate entropy over time, but it is also possible to implement filtration in a manner that does not \cite{klimas1987}. More details and references on numerically solving the Vlasov equation can be found in Ref.~\cite{eliasson2010}. These ideas---extrapolation, damping, filtration, boundary conditions---need to be adapted to neutrino quantum kinetics, and at this point we cannot vouch for their usefulness. We mention them here only as possible sources of inspiration for future efforts. Our own investigations have been limited, and on the whole we have found that the evolution is somewhat sensitive to intervention. If strong artificial damping is applied to all $l \geq l_d$, for example, the reflection problem is not avoided. (As the damping rate is made \textit{very} large, the equations effectively resemble those of the system truncated at $l_d$.) Artificial damping that grows with $l$ has shown some promise, but our exploration of this approach has been far from exhaustive.
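
To fix ideas, a sketch of the kind of intervention we have in mind is given below. Everything in it is hypothetical: \texttt{rhs\_hierarchy} is a placeholder name for a moment-space right-hand side (such as the toy integrator sketched in Sec.~\ref{cascade}), and the profile shape and parameter values are unoptimized guesses rather than recommendations.

\begin{verbatim}
# Hypothetical closure experiment: artificial damping that grows
# with l, applied on top of a moment-space right-hand side.  The
# state y is assumed to hold the lmax + 1 moment vectors in order.
import numpy as np

lmax, l_d, gamma0, p = 199, 150, 5.0e2, 4
l = np.arange(lmax + 1)
gamma = np.where(l > l_d, gamma0 * ((l - l_d) / (lmax - l_d))**p, 0.0)

def rhs_damped(t, y, rhs_hierarchy):
    dy = rhs_hierarchy(t, y)       # undamped equations of motion
    P = y.reshape(-1, 3)           # stacked moment vectors, l = 0..lmax
    dP = dy.reshape(-1, 3).copy()
    dP -= gamma[:, None] * P       # sap power before it reflects at lmax
    return dP.ravel()
\end{verbatim}

A profile of this sort removes power before it reflects at $l_\textrm{max}$ while leaving the dynamically important low-$l$ moments untouched; whether any such choice can be made robust remains to be seen.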
These plasma-inspired techniques are alternatives to the one described in connection to Fig.~\ref{p0z_4m}, in which only a small number of moments are evolved and relaxation is superimposed on the system in a physically motivated but non-self-consistent way. The plasma strategies are a compromise, evolving just enough of the cascade to capture relaxation at larger scales. At the other extreme is the approach traditionally taken in oscillation calculations: attaining convergence through very high resolution. We emphasize once more that relying on the last of these approaches continues to be a major impediment to progress in computing neutrino flavor evolution, and it may in fact be unnecessary. \section{Discussion \label{conc}} In this paper we have studied relaxation of the neutrino flavor field via momentum-space cascade, operating under the assumptions of monochromaticity, homogeneity, axial symmetry, and collisionless evolution. These simplifications are highly restrictive---axial symmetry and homogeneity above all, most likely---and preclude our calculations from being anything like \textit{predictions} of the flavor evolution taking place at specific times and locations in a supernova. Simplicity is what we are after, however. Our objective is not, at this point, to simulate flavor evolution, but rather to understand what the essential ingredients are and determine how they can be captured with both fidelity to the physics and deference to finite computational resources. In calculations with parameters motivated by simulation data, we have seen that if kinematic decoherence does not occur through an exponential instability, it instead occurs through the comparatively gradual seepage of power to smaller angular scales. Fast oscillations expedite this process, sometimes considerably, by enhancing the dephasing of the relevant polarization vectors. We have situated this relaxation mechanism alongside the other major features of the model: bipolar oscillations, fast oscillations, and the multi-zenith-angle instability. Cascade of power is a serious numerical concern, as errors at $l_\textrm{max}$ propagate back to larger scales, growing in magnitude until their backreaction on the isotropic moment is enough to spuriously decohere it. The computations performed for this study evolved $\mathbf{P}_l$ and $\mathbf{\bar{P}}_l$ directly. We expect similar spurious results to appear in calculations that evolve discretized angle bins, since the origin of the problem---the development of features at small angular scales---is a real aspect of the physics. The problem is not unique to neutrino flavor, either. It is commonly encountered in weakly collisional kinetic systems of all kinds. We believe this is reason for optimism. Various strategies have already been developed in the context of kinetic plasmas for addressing essentially the same numerical challenges. If analogous techniques can be employed successfully for neutrino quantum kinetics, it may be possible to considerably lighten the computational burden of oscillation calculations. The key hypothesis we are advancing, which would make such computational strides possible, is that the evolution of the flavor field should not depend qualitatively on the details at very small angular scales. In reality, collisions damp the angular power spectrum, especially in the high-$l$ region. The scattering rate is typically much smaller than $\mu$, however, and collisions on their own are unlikely to resolve the numerical challenges of cascade. 
Speaking from a computational standpoint, we suspect that power still makes its way out to unacceptably large $l$. On the other hand, collisions have also been noted to exert a less direct but still possibly significant influence through the halo effect \cite{cherry2012, sarikas2012b, cirigliano2018}. From the angular-moment perspective, the result of the halo effect is to isotropize power initially spread over many moments, potentially giving that power (\textit{i.e.}, the scattered particles) greater leverage on the overall flavor evolution.

Having emphasized the numerical significance of our findings, let us return to the physical implications. Relaxing the assumption of axial symmetry permits additional instabilities, both fast and slow. At a more fundamental level, the dimensionality of phase space is simply larger. Whether or not the flavor field becomes unstable as a result of axial asymmetry, there are more channels for collisionless relaxation. There are, in other words, more ways to shift power from large angular scales to small ones. Crucially, the isotropic multipole can be decohered by any of the three vectors $\mathbf{D}_1^0$, $\mathbf{D}_1^{\pm1}$, where $\mathbf{D}_l^m$ is the $(l, m)$ spherical harmonic of the difference vector. Each of these three vectors interacts with the others, the effect of which is probably to disrupt any would-be periodic behavior even beyond the extent that we have seen here. This may already be apparent in the axially asymmetric calculations of Ref.~\cite{shalgar2020}.

With inhomogeneity comes a spectrum of instabilities associated with the nonvanishing convective term, and the dimensionality of phase space is again expanded. Coordinate-space transfer differs in a number of significant ways from momentum-space transfer, and the manner in which our results are modified by convection is likely to depend on the nature of the inhomogeneity itself. What we can say, quite generally, is that we expect phase-space relaxation to figure prominently in realistic settings where the flavor field is unstable---with all of the ramifications that go along with it.

\begin{acknowledgments}
We are grateful to Evan Grohs, Kei Kotake, Zidu Lin, Amol Patwardhan, and Manibrata Sen for stimulating conversations at the INT workshop on ``Neutrinos from the Lab to the Cosmos'' and to Shashank Shalgar and Irene Tamborra for enjoyable correspondence. Support for this work was provided by NASA through the NASA Hubble Fellowship grant \# HST-HF2-51461.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. LJ and GMF acknowledge support from NSF Grant No. PHY-1914242, from the Department of Energy Scientific Discovery through Advanced Computing (SciDAC-4) grant register No. SN60152 (award number DE-SC0018297), and from the NSF N3AS Hub Grant No. PHY-1630782 and Heising-Simons Foundation Grant No. 2017-22. AB acknowledges support from the U.S. Department of Energy Office of Science and the Office of Advanced Scientific Computing Research via the Scientific Discovery through Advanced Computing (SciDAC4) program and Grant DE-SC0018297 (subaward 00009650). In addition, he acknowledges support from the U.S. NSF under Grants AST-1714267 and PHY-1804048.
\end{acknowledgments}
\section{Introduction}

X-ray binaries (XRBs), systems in which a compact object (a black hole [BH] or neutron star [NS]) accretes material from a less evolved stellar companion, are important probes of stellar and binary evolution, compact object populations, and physical accretion mechanisms. Studies of XRB populations in nearby galaxies have revealed important scalings between XRB populations and host galaxy properties, including star formation rate (SFR), stellar mass, and metallicity \citep[e.g.,][]{Prestwich2013,Lehmer2019}. In particular, the galaxy-integrated emission from high-mass XRBs (HMXBs), systems in which the donor star is massive ($>$ 8 $M_{\odot}$), has been shown to scale with SFR \citep[e.g.,][]{Grimm2003,Fabbiano2006,Mineo2012,Lehmer2010,Lehmer2019,Kou2020}, while the emission from low-mass XRBs (LMXBs), systems with low-mass donor stars, has been observed to scale with stellar mass \citep[e.g.,][]{Colbert2004,Gilfanov2004,Boroson2011,Lehmer2010,Lehmer2019,Lehmer2020}. These scalings can be explained by stellar evolution timescales: the high-mass donor stars in HMXBs die off rapidly ($\lesssim$ 40 Myr) following a star-forming episode, while the low-mass donors in LMXBs will live for billions of years following an episode of star formation.

These locally-derived scaling relations for galaxy-integrated $L_{\mbox{\scriptsize{X}}}$\ with SFR and mass have also been shown empirically to evolve with redshift \citep{BZ13LBG,Lehmer2016,Aird2017}, and very recently \citet{Fornasini2019,Fornasini2020} demonstrated that the increase of galaxy-integrated $L_{\mbox{\scriptsize{X}}}$\ per unit SFR with increasing redshift is likely tied to the metallicity evolution of the Universe. This metallicity dependence of $L_{\mbox{\scriptsize{X}}}$\ per unit SFR ($L_{\mbox{\scriptsize{X}}}$/SFR) is supported by studies of HMXBs and ultra-luminous X-ray sources (ULXs), XRBs with $L_{\mbox{\scriptsize{X}}}$\ $>$ 10$^{39}$ erg~s$^{-1}$, whose galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR has been shown to increase with decreasing metallicity \citep[e.g.,][]{Prestwich2013,Douna2015,Brorby2014,Brorby2016,Basu-Zych2013,Basu-Zych2016}. This metallicity scaling for HMXBs and ULXs is corroborated by theoretical binary population synthesis models \citep[e.g.,][]{Linden2010,Fragos2013,Wiktor2017,Wiktor2019}, which find order-of-magnitude differences in galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR between environments at solar and 0.1 $Z_{\odot}$ metallicities. The inverse correlation between XRB $L_{\mbox{\scriptsize{X}}}$/SFR and metallicity is due to the effects of metallicity on stellar and binary evolution, namely the production of more massive BHs and/or more compact binaries, and therefore brighter systems, at lower metallicities \citep[e.g.,][]{Mapelli2010,Linden2010}.

A key implication of these empirically derived and theoretically corroborated scalings of $L_{\mbox{\scriptsize{X}}}$\ with host galaxy properties is the importance of XRBs to normal galaxy emissivity across cosmic time. Theoretical binary population synthesis models, when coupled with prescriptions for the cosmic star formation history (SFH) and metallicity evolution of the Universe, predict that HMXBs will begin to dominate normal galaxy emissivity over LMXBs at $z$ $\gtrsim$ 1--2, and further that the normal galaxy emissivity due to XRBs may begin to dominate over active galactic nuclei (AGN) at $z$ $\gtrsim$ 5--6 \citep[e.g.,][]{Fragos2013,Madau2017}.
The rising dominance of XRBs in the early Universe coupled with cosmological models suggests that emission from XRBs may provide a non-negligible heating source to the intergalactic medium (IGM) during the ``epoch of heating" at $z \approx$ 10--20, prior to reionization \citep[e.g.,][]{Mesinger2013,Mesinger2014,Pacucci2014,Mirocha2014,Fialkov2014,Fialkov2017}. This further suggests that XRBs could have a significant imprint on the 21-cm signal observed from these redshifts \citep[e.g.,][]{Das2017}. In the near future, 21-cm interferometers like the Hydrogen Epoch of Reionization Array (HERA) and Square Kilometre Array (SKA) are expected to observe the signals from this epoch of heating, thus providing constraints on the ionizing properties of X-ray sources during this epoch \citep[e.g.,][]{Greig2017,Park2019}.

However, interpreting the 21-cm results in the context of XRB populations, and further refining predictions from binary population synthesis models for the importance of XRBs at different epochs requires empirically constraining the metallicity dependence of the X-ray spectral energy distribution (SED). In particular, it is critical to constrain the X-ray SED for {\it low-metallicity}, star-forming galaxies, which serve as better analogs to the first galaxies, and in the rest frame soft band (0.5--2 keV), which is the energy band of interest for the photons that most strongly interact with the IGM at high redshift \citep[e.g.,][]{McQuinn2012}. In this work, we present the 0.3--30 keV SED of the low-metallicity, starburst galaxy VV~114, providing an important empirical benchmark for the metallicity dependence of the X-ray SED in both the soft (0.5--2 keV) and hard (2--30 keV) bands.

\begin{deluxetable*}{ccccc} \centering \tablewidth{\textwidth} \tablecaption{Archival and New Observations Used in This Work \label{tab:obs}} \tablehead{ \colhead{Obs. Start Date} & \colhead{Obs. ID} & \colhead{Inst.} & \colhead{Eff. Exposure (ks)} & \colhead{PI} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} } \startdata \hline & & {{\itshape Chandra\/}} & & \\ \hline 2005-10-20 & 7063 & ACIS-S & 59 & T. Heckman \\ \hline & & { {\it XMM-Newton\/}} & & \\ \hline 2019-01-10 & 0830440101 & EPIC-pn & 26 & B. Lehmer \\ 2019-01-10 & 0830440101 & EPIC-MOS1 & 30 & B. Lehmer \\ 2019-01-10 & 0830440101 & EPIC-MOS2 & 26 & B. Lehmer \\ \hline & & { {\it NuSTAR\/}} & & \\ \hline 2019-01-19 & 50401001002 & FPMA & 205 & B. Lehmer \\ 2019-01-19 & 50401001002 & FPMB & 204 & B. Lehmer \\ \enddata \tablecomments{Col. (1): observation start date. Col. (2): observation ID. Col. (3): instrument. Col. (4): good time interval effective exposure times in ks after removing flared intervals. Col. (5): observation PI.} \end{deluxetable*}

VV~114 is a prime target for calibrating the metallicity dependence of the X-ray SED as it is a relatively nearby ($D$ = 88 Mpc\footnote[2]{We calculate the distance for VV~114 taking $z = 0.02$ from NED and assuming $H_{0}$ = 70 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M}$ = 0.3, and $\Omega_{\Lambda}$ = 0.7}) Lyman break analog (LBA). LBAs are highly star-forming, yet relatively dust- and metal-poor galaxies at $z < 0.3$ that resemble higher redshift ($z > 2$) Lyman break galaxies \citep[e.g.,][]{Heckman2005,Hoopes2007,BZ2009}.
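Both of these bookkeeping steps are easy to verify. The following minimal sketch (in Python, assuming only the {\tt astropy} package) reproduces the adopted distance from the stated cosmology and performs the metallicity-to-$Z/Z_{\odot}$ conversion quoted in the next paragraph:

\begin{verbatim}
from astropy.cosmology import FlatLambdaCDM

# Cosmology quoted in the text: flat, H0 = 70 km/s/Mpc, Omega_M = 0.3
cosmo = FlatLambdaCDM(H0=70.0, Om0=0.3)
print(cosmo.luminosity_distance(0.02))   # ~88 Mpc, the adopted distance

# 12+log(O/H) relative to solar, taking solar 12+log(O/H) = 8.69
def z_rel_solar(logoh12, solar=8.69):
    return 10.0 ** (logoh12 - solar)

print(round(z_rel_solar(8.4), 2))        # -> 0.51, as adopted for VV 114
\end{verbatim}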
With VV~114's gas-phase metallicity of 12+log(O/H) = 8.4\footnote[3]{We calculate the gas-phase metallicity for VV~114 from the [\ion{O}{iii}]~$\lambda$5007 and [\ion{N}{ii}]~$\lambda$6584 emission line ratios taken from \citet{Moustakas2006}, and using the method outlined in \citet[][``PP04 {\it O3N2}"]{PP04}. In what follows, we adopt a global metallicity of 0.51 $Z_{\odot}$ for VV~114, assuming $Z_{\odot}$ corresponds to 12+log(O/H) = 8.69 \citep{Asplund2009}.}, global UV + IR SFR of $\sim$ 38 $M_{\odot}$ yr$^{-1}$ as measured from {\it GALEX} and {\it WISE}, and stellar mass of log $M_{\star}$ = 10.65 $M_{\odot}$ \citep{Basu-Zych2013}, scaling relations dictate that it should host a substantial XRB population. With a specific star formation rate (SFR/$M_{\star}$) $>$ 10$^{-10}$ yr$^{-1}$, VV~114 is further expected to be dominated by HMXBs or ULXs, as opposed to LMXBs \citep[e.g.,][]{Lehmer2010}. Indeed, previous X-ray studies of VV~114 have revealed a galaxy with a well-populated X-ray luminosity function (XLF), comprised of six ULXs, which put VV~114 above the ``normal" $L_{\mbox{\scriptsize{X}}}$/SFR derived from nearby star-forming galaxies \citep{Basu-Zych2013,Basu-Zych2016}. Thus, VV~114 offers a unique environment for studying the X-ray SED in that it is highly star forming, relatively low metallicity, and is known to host six ULXs \citep{Basu-Zych2016}.

In this paper, we use new nearly-simultaneous observations of VV~114 from {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ coupled with archival {\itshape Chandra\/}\ data to characterize its 0.3--30 keV SED in terms of both the galaxy-wide X-ray emission and the resolved X-ray source population. This paper is organized as follows: Section~\ref{sec:reduce} discusses the data sets in use, as well as the reduction procedures and spectral fitting techniques employed in the analysis of these datasets. Section~\ref{sec:results} provides the results of the custom spectral modeling of both the point source population and galaxy-wide X-ray emission of VV~114 using all three data sets. Section~\ref{sec:discuss} presents a discussion and interpretation of these results for the low-metallicity SED in the context of previous works, the theoretical scalings of XRB emission with SFR and metallicity, and future 21-cm measurements. Finally, in Section~\ref{sec:conclude} we summarize this work, and discuss future directions. Throughout this paper we assume a \citet{Kroupa} initial mass function (IMF) and, when comparing to any previous works, correct all SFRs following this assumption. Furthermore, we standardize all quoted gas-phase metallicities to values determined using the {\it O3N2} \citet{PP04} calibration.

\section{Observations \& Data Reduction}\label{sec:reduce}

In this section we describe the observations, both archival ({\itshape Chandra\/}) and new ({\it XMM-Newton\/}\ and {\it NuSTAR\/}), used as part of this analysis, as well as an overview of the data reduction procedures.

\subsection{{\itshape Chandra} Imaging \& Spectra}\label{sec:chobs}

We used archival {\itshape Chandra\/}\ data to assess the point source population in VV~114, as only {\itshape Chandra\/}\ has the required spatial resolution to resolve the galaxy-wide emission of VV~114 into individual point sources. The archival {\itshape Chandra\/}\ observation is listed in Table~\ref{tab:obs}, where the observation was performed with ACIS-S in ``Very Faint" (VFAINT) mode, and the listed effective exposure time includes only good time intervals (GTIs).
We reduced the archival {\itshape Chandra\/}\ observation using the standard reduction tools included with {\tt CIAO} version 4.10 and {\tt CALDB} version 4.8.1\footnote[7]{\url{http://cxc.harvard.edu/ciao/download/}}. The {\tt level=1} event files were reprocessed to {\tt level=2} event files using the latest calibration and script {\tt chandra$\_$repro}. We subsequently filtered the {\tt level=2} event files on GTIs determined from background light curves filtered with the task {\tt lc$\_$clean} with default filtering parameters. \begin{figure} \vspace{-5pt} \includegraphics[width=\linewidth]{chandra_rgb_nucontours.pdf} \caption{Adaptively smoothed three-color (red: 0.3--1 keV, green: 1--2 keV, and blue: 2--7 keV) {\itshape Chandra\/}\ image of VV~114 showing the six {\itshape Chandra\/}-detected point sources as green crosses, annotated in order of decreasing brightness, as well as the hot, diffuse gas which permeates the galaxy. The white dashed curves overlaid represent the 4--25 keV {\it NuSTAR\/}\ intensity contours (1.2 $\times$ 10$^{-5}$ counts s$^{-1}$, 1.0 $\times$ 10$^{-5}$ counts s$^{-1}$, 8.6 $\times$ 10$^{-6}$ counts s$^{-1}$), which are comparable to a single source PSF for VV~114.} \label{fig:chandra_rgb} \end{figure} \begin{figure} \vspace{-5pt} \includegraphics[width=\linewidth]{hst_rgb_regions_contours.pdf} \caption{Three-color (red: F814W, green: F435W, and blue: F336W) {\itshape HST\/}\ image of VV~114 with the positions of the six {\itshape Chandra\/}-detected point sources overlaid as green circles, where the region size is scaled by the point source luminosity. The white dashed curves are the 0.3--12 keV {\it XMM-Newton\/}\ intensity contours (2.3 $\times$ 10$^{-3}$ counts s$^{-1}$, 1.2 $\times$ 10$^{-3}$ counts s$^{-1}$, 4.0 $\times$ 10$^{-4}$ counts s$^{-1}$), comparable to a single point source as for the {\it NuSTAR\/}\ observations.}\label{fig:hst_rgb} \end{figure} We then created an exposure map and point spread function (PSF) map using regions that encompassed 90\% of the encircled energy of the PSF using the {\tt CIAO} tools {\tt fluximage} and {\tt mkpsfmap}. We used the images from these procedures as input to {\tt wavdetect} to determine positions and PSF-corrected extraction regions for each of the six point sources in the galaxy. We then extracted spectral files for each of the six point sources using the task {\tt specextract} with the {\tt wavdetect}-determined source positions and extraction regions and the flag {\tt psfcorr = yes}. The {\tt specextract} task produces not only source spectra, but also response matrix files (RMFs), ancillary response files (ARFs), and background spectra when provided with a background extraction region. We use this task for extracting all subsequent {\itshape Chandra\/}\ spectral products, as described next. For extracting background spectra we chose regions encircling VV~114 free of point sources and uncontaminated by diffuse emission from the galaxy. We estimated the diffuse extent ($\sim$ 13 kpc) of the galaxy visually using a combination of soft-band (0.3--1 keV and 1--2 keV) {\itshape Chandra\/}\ and optical {\itshape HST\/}\ images. We also extracted spectral products for the point-source-free diffuse emission using a region encompassing the diffuse extent as described above, but excluding the 90\% encircled energy fraction regions for all six detected point sources. 
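The point-source detection and extraction steps above can be scripted end to end. The sketch below (Python, shelling out to the CIAO tools named in the text) is schematic: the file names are placeholders, most optional parameters are omitted, and the exact parameter lists should be taken from the CIAO documentation for the version in use.

\begin{verbatim}
import subprocess

def run(cmd):
    # Run a CIAO command-line tool, raising on failure
    print(">>", cmd)
    subprocess.run(cmd, shell=True, check=True)

run("chandra_repro indir=./7063 outdir=./7063/repro")
run("fluximage repro/evt2_gti.fits out/vv114 bands=broad binsize=1")
run("mkpsfmap out/vv114_broad_thresh.img outfile=out/psfmap.fits "
    "energy=2.3 ecf=0.9")            # 90% encircled-energy PSF map
run("wavdetect infile=out/vv114_broad_thresh.img "
    "psffile=out/psfmap.fits outfile=out/srcs.fits "
    "scellfile=out/scell.fits imagefile=out/img.fits "
    "defnbkgfile=out/nbkg.fits regfile=out/srcs.reg scales='2 4 8'")
# One call per detected source; specextract also builds the RMF, ARF,
# and background spectrum when given a background region
run("specextract "
    "infile='repro/evt2_gti.fits[sky=region(out/src1.reg)]' "
    "bkgfile='repro/evt2_gti.fits[sky=region(out/bkg.reg)]' "
    "outroot=out/src1 psfcorr=yes")  # PSF-correction flag as in the text
\end{verbatim}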
We likewise extracted spectral products for the galaxy-wide emission, including all six detected point sources, the diffuse emission, and any unresolved component. These {\itshape Chandra\/}\ spectral products are used in constraining the components of the galaxy-wide emission and compared with the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ data, for which the galaxy appears as a single source, as described in Section~\ref{sec:point_src_decomp}.

Finally, we created exposure-corrected images in the soft (0.3--1 keV), medium (1--2 keV), and hard (2--7 keV) bands using the task {\tt fluximage}. We imposed a $\sim$ 1$^{\prime \prime}$ $\times$ 1$^{\prime \prime}$ pixel binning (2 $\times$ 2 native pixels) on the resultant images, which we subsequently used to create a three-color, adaptively smoothed image using {\tt csmooth}. The three-color, adaptively smoothed image is shown in Figure~\ref{fig:chandra_rgb}, and highlights the locations of each of the six {\itshape Chandra\/}-detected point sources (VV~114 X-1 to X-6), along with the hot, diffuse gas, which suffuses the galaxy.

\subsection{{\itshape XMM-Newton} Imaging \& Spectra}\label{sec:xmmobs}

In addition to using the archival {\itshape Chandra\/}\ observation to measure the resolved components of VV~114, we obtained new {\it XMM-Newton\/}\ observations of the galaxy to provide additional constraints on the galaxy-wide emission. Observational data files (ODFs) for these new observations were processed using the {\it XMM-Newton\/}\ Science Analysis System ({\tt SAS} version 17.0)\footnote[8]{\url{https://www.cosmos.esa.int/web/xmm-newton/sas}}. We created event lists from the ODFs for the EPIC-MOS \citep{Turner2001} and EPIC-pn detectors \citep{Struder2001} using the {\tt SAS} tasks {\tt emchain} and {\tt epchain}, respectively. We applied standard filters for the EPIC-MOS detectors to include single, double, and quadruple events (PATTERN $\leq$ 12 \&\& flag== \#XMMEA$\_$EM), and similarly applied standard criteria to include single and double events with conservative flagging for the EPIC-pn detector (PATTERN $\leq$ 4 \&\& FLAG == 0).

With the filtered event lists for each detector, we next constructed X-ray light curves from the entire field, from which we determined the rate thresholds for filtering the event lists for background flaring events. For the MOS detectors we created $>$ 10 keV light curves, creating GTIs by filtering out periods with count rates $>$ 0.2 counts s$^{-1}$, and for the pn camera we created a 10--12 keV light curve, filtering out periods with count rates $>$ 0.5 counts s$^{-1}$. The effective exposures for each observation after filtering on these GTIs are listed in Table~\ref{tab:obs}.

Following GTI correction, we performed source detection on images in five bands for each detector using the task {\tt edetect$\_$chain}. We cross-correlated 29 of the detected point sources with counterparts from the {\itshape Chandra\/}\ observation to determine the translation shift between the observations, finding shifts of +0\farcs25 in R.A. and +1\farcs66 in decl. necessary to bring the images into astrometric alignment.
As VV~114 is consistent with being a single source in {\it XMM-Newton\/}, we determined the appropriate galaxy-wide spectral extraction region for VV~114 in the {\it XMM-Newton\/}\ observations by simulating the combination of the {\itshape Chandra\/}-detected point source PSFs on each {\it XMM-Newton\/}\ detector, using the point source physical positions determined from source detection after astrometric correction was applied. To determine the overall expected PSF of VV~114 in each {\it XMM-Newton\/}\ exposure we used the {\tt SAS} task {\tt psfgen} to simulate the PSF for each point source at its physical position on the {\it XMM-Newton\/}\ detectors, and then combined the simulated PSFs for each of the six point sources, accounting for the physical offsets between each source. The 80\% encircled energy fraction spectral extraction regions in each EPIC exposure determined from this procedure were found to be in good agreement with the optical extent of VV~114 from {\itshape HST\/}\ imaging. In Figure~\ref{fig:hst_rgb}, we show a three-color {\itshape HST\/}\ image of VV~114 with 0.3--12 keV intensity contours from {\it XMM-Newton\/}\ overlaid in white, where the contours approximate the extent of the galaxy-wide spectral extraction region constructed from the PSF simulation procedure. We extracted source spectra for VV~114 for each detector with the task {\tt evselect} using the source regions described above and a spectral bin size of five for the pn and 15 for the MOS exposures. For the pn detector we extracted a background spectrum using {\tt evselect} from a source-free region at a similar RAWY position as VV~114 on the detector, while for both MOS detectors we chose background regions from source-free areas on the same CCD as VV~114. We produced the associated RMFs and ARFs for each spectrum using the tasks {\tt rmfgen} and {\tt arfgen}. These {\it XMM-Newton\/}\ spectral products are used in the analysis of the galaxy-wide spectrum of VV~114 as described in Section~\ref{sec:gal_wide_spec}. \subsection{{\itshape NuSTAR} Imaging \& Spectra}\label{sec:nuobs} The {\it NuSTAR\/}\ data were reduced using {\tt HEASoft} v6.24 and the {\it NuSTAR\/}\ Data Analysis Software {\tt NuSTARDAS} v1.8.0 with {\tt CALDB} version 20190627. We produced level 2 data products by processing the level 1 data files through {\tt nupipeline}, which filters out bad pixels, screens for cosmic rays and high background intervals, and projects the detected events to proper sky coordinates. The {\it NuSTAR\/}\ PSF has an 18$^{\prime \prime}$ FWHM core and a 58$^{\prime \prime}$ half power diameter \citep{Harrison2013} resulting in all six point sources in VV~114 appearing blended as one source in the {\it NuSTAR\/}\ observations (see white, dashed contours, Figure~\ref{fig:chandra_rgb}). Given the extent of the {\it NuSTAR\/}\ PSF, we chose a 30$^{\prime \prime}$ region for extracting the galaxy-wide source spectra to encompass most of the emission from VV~114 while minimizing background contamination. We defined a region for extraction of background spectra from a source-free area on the same detector as VV~114, but separated by at least 20$^{\prime \prime}$ from the galaxy. We produced source and background spectra using these regions, as well as RMFs and ARFs for both the FPMA and FPMB using the task {\tt nuproducts}, ensuring no background subtraction was performed on the source spectra during extraction. 
These spectral products were used in our analysis of the 3--30 keV spectrum of VV~114, as described in Section~\ref{sec:gal_wide_spec}.

\subsection{X-ray Spectral Fitting Technique}\label{sec:fit_techniques}

All spectral fitting was performed with {\tt XSPEC} v12.10.0c \citep{Arnaud1996} using the Cash statistic \citep{Cash1979} as the fit statistic. Because the Cash statistic does not yield a straightforward way to evaluate goodness-of-fit (gof), we evaluated the gof of spectral models using the Anderson-Darling (ad) test statistic and the {\tt XSPEC} Monte Carlo command {\tt goodness}. For each model, we ran the {\tt goodness} command for 1000 realizations with the {\tt ``nosim''} and {\tt ``fit''} options. This procedure simulates spectra based on the current best-fit model, fits these simulated spectra to the model, and then calculates the new test statistic. The {\tt goodness} command returns the distribution of test statistics for the simulated data, which can then be compared to the test statistic for the actual data. Our reported ``gof'' for each model fit is the fraction of simulations returned from {\tt goodness} with a test statistic as large as, or larger than (i.e., statistically worse fits), the test statistic for the actual data \citep[e.g.,][]{Maccarone2016}. Therefore, gof = 0.5 is expected for data consistent with the model, and gof $\sim$ 1 can be interpreted as overfitting the data, since it implies nearly all simulations produced worse fits than the data itself. If all simulations returned smaller test statistics (better fits) than the actual data, the model is rejected (gof $<$ 10$^{-3}$). It is important to note that the gof calculated in this way provides a measure of the confidence level with which a model can be rejected, not a probability for whether the model is correct. Errors on all free model parameters are reported as 90\% confidence intervals, and are computed using the {\tt XSPEC error} command on the output of the {\tt XSPEC mcmc} routine. In all models we set abundances relative to solar using the \citet{Asplund2009} abundance tables.

All spectral fits were performed on the unbinned source spectra, without any background subtraction. To perform such fits, we must define a model for the background for each instrument. For each observation, we modeled the background with both a sky component, representing the contribution from the diffuse background and the unresolved cosmic X-ray background \citep[e.g.,][]{Kuntz2000,Lumb2002}, as well as an instrumental component, representing the contribution from the instrumental continuum and detector lines. We describe the details of the sky and instrumental background components for each instrument below.

We modeled the sky background for both {\itshape Chandra\/}\ and {\it XMM-Newton\/}\ as an absorbed two-temperature thermal plasma ({\tt APEC}) plus power-law. These model components represent the diffuse Galactic and extragalactic cosmic X-ray background, respectively, where we fix the photon index for the cosmic X-ray background to $\Gamma$ = 1.42 \citep{Lumb2002}. For {\it NuSTAR\/}, we modeled the sky background as an absorbed single temperature thermal plasma plus power-law, accounting for the ``Solar" and cosmic X-ray background components, respectively \citep{Wik2014back}. For each sky background model we fixed the foreground Galactic absorption component ({\tt Tbabs}) to $N_{H}$ = 1.20 $\times$ 10$^{20}$ cm$^{-2}$ \citep{nHpimms}.
From our best-fit background models to the background spectra we found sky background {\tt APEC} temperatures of $kT_{1} = 0.1$ keV and $kT_{2} = 0.24$ keV for {\itshape Chandra\/}, and $kT_{1} = 0.1$ keV and $kT_{2} = 0.27$ keV for {\it XMM-Newton\/}. We set $kT = 0.27$ keV for {\it NuSTAR\/}\ following {\it XMM-Newton\/}. The instrumental background for {\itshape Chandra\/}\ was modeled as a power-law continuum superposed with Gaussians, representing detector lines, with the energies and widths of the Gaussian lines fixed following \citet{Bartalucci2014}, but line normalizations allowed to vary relative to the continuum. For all {\it XMM-Newton\/}\ detectors, the instrumental background was composed of a broken power-law for the continuum with detector fluorescence lines as described in \citet{Garofali2017}. The {\it NuSTAR\/}\ instrumental background was modeled as a broken power-law overlaid with 29 Lorentzians following \citet{Wik2014back}, where the line normalizations are allowed to vary relative to the continuum, and FPMA and FPMB were handled independently. We fit the above described background models to the background spectra for each observation to determine the shape of the background at the location of VV~114 for each observation and detector. In subsequent fits to the source region, which includes source plus background data, we include this background as a model component, albeit with nearly all free parameters fixed to their best-fit values (e.g., plasma temperatures listed above), and the normalization fixed to the best-fit normalization for the background spectrum scaled by the ratio of the source to background extraction region areas. In this way, we fix the {\it shape} of the background in the fits to the source region based on the best-fit models for the background spectra, and constrain the {\itshape contribution} of the background to the source spectra via the known source and background spectral extraction region areas. Thus, we fit the spectra for the source region without performing any background subtraction while still minimizing the number of free parameters. 
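As an illustration of this fitting setup, the sketch below uses PyXspec ({\tt XSPEC}'s Python interface). It is minimal and schematic: file names are placeholders, the background component is represented only by a comment, and the {\tt goodness} options quoted above ({\tt nosim}, {\tt fit}) may need to be issued through the interactive interface depending on the {\tt XSPEC} version.

\begin{verbatim}
from xspec import Fit, Model, Spectrum

spec = Spectrum("vv114_src.pi")      # unbinned, un-subtracted spectrum
spec.ignore("**-0.3,8.0-**")         # illustrative fit band

Fit.statMethod = "cstat"             # Cash statistic, as in the text
Fit.statTest = "ad"                  # Anderson-Darling test statistic

# Source model; a fixed background model, with its normalization scaled
# by the ratio of source to background extraction areas, would be added
# as further (frozen) components in the same way
m = Model("tbabs*(apec + tbabs*apec + pow)")
m(1).values = 0.012                  # Galactic N_H in 1e22 cm^-2
m(1).frozen = True                   # fixed, as in the text

Fit.perform()
Fit.goodness(1000)                   # Monte Carlo goodness-of-fit
\end{verbatim}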
\begin{deluxetable*}{cccccccccccccc} \tabletypesize{\tiny} \tablewidth{\textwidth} \tablecaption{Spectral Fit Results for {\itshape Chandra\/}-detected Point Sources \label{tab:point_src_fits}} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{$kT_{\scaleto{1}{3pt}}$} & \colhead{$A_{\scaleto{kT_{1}}{3pt}}$} & \colhead{$N_{\scaleto{\rm H,2}{3pt}}$} & \colhead{$kT_{\scaleto{2}{3pt}}$} & \colhead{$A_{\scaleto{kT_{2}}{3pt}}$} & \colhead{$N_{\scaleto{\rm H, 3}{3pt}}$} & \colhead{} &\colhead{$A_{\scaleto{\Gamma}{3pt}}$} & \colhead{log($L_{\scaleto{{\rm 0.5-8~keV}}{3pt}}$)} & \colhead{log($L_{\scaleto{{\rm 2-10~keV}}{3pt}}$)} & \colhead{} \\ \colhead{Source} & \colhead{Counts} & \colhead{C$_{\scaleto{kT}{3pt}}$} & \colhead{(keV)} & \colhead{(10$^{-5}$)} & \colhead{(10$^{22}$ cm$^{-2}$)} & \colhead{(keV)} & \colhead{(10$^{-4}$)} & \colhead{(10$^{22}$ cm$^{-2}$)} & \colhead{$\Gamma$} & \colhead{(10$^{-5}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{gof} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} & \colhead{(12)} & \colhead{(13)} & \colhead{(14)} } \startdata diffuse$^{{\tt a}}$ & 2477 & $\cdots$ & 0.36$^{+0.26}_{-0.05}$ & 4.67$^{+1.47}_{-2.33}$ & 0.44$^{+0.23}_{-0.12}$ & 0.80$^{+0.11}_{-0.06}$ & 1.36$^{+0.18}_{-0.37}$ & $\cdots$ & 1.78$^{+0.27}_{-0.17}$ & 1.55$^{+0.50}_{-0.30}$ & 41.23$^{+0.02}_{-0.02}$ & 40.76$^{+0.08}_{-0.08}$ & 0.65 \\ X-1$^{{\tt b}}$ & 519 & $\cdots$ & $\dagger$ & 0.18$^{+0.12}_{-0.08}$ & 2.11$^{+0.36}_{-0.23}$ & $\dagger$ & 2.20$^{+0.93}_{-0.80}$ & $\cdots$ & 1.01$^{+0.59}_{-0.24}$ & 0.97$^{+1.24}_{-0.30}$ & 41.01$^{+0.05}_{-0.05}$ & 41.05$^{+0.07}_{-0.07}$ & 0.06 \\ X-2$^{{\tt c}}$ & 744 & 0.04$^{+0.06}_{-0.03}$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$& $\dagger$ & 0.22$^{+0.10}_{-0.06}$ & 2.02$^{+0.22}_{-0.22}$ & 2.44$^{+0.50}_{-0.48}$ & 40.92$^{+0.04}_{-0.04}$ & 40.75$^{+0.08}_{-0.08}$ & 0.14 \\ X-3$^{{\tt c}}$ & 293 & 0.03$^{+0.03}_{-0.02}$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & 0.03$^{+0.16}_{-0.02}$ & 1.53$^{+0.45}_{-0.23}$ & 0.55$^{+0.35}_{-0.12}$ & 40.57$^{+0.07}_{-0.07}$ & 40.45$^{+0.11}_{-0.12}$ & 0.27 \\ X-4$^{{\tt c}}$ & 545 & 0.14$^{+0.05}_{-0.05}$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & 0.18$^{+0.14}_{-0.09}$ & 2.50$^{+0.53}_{-0.40}$ & 1.09$^{+0.69}_{-0.36}$ & 40.61$^{+0.04}_{-0.04}$ & 40.14$^{+0.05}_{-0.05}$ & 0.98 \\ X-5$^{{\tt c}}$ & 178 & 0.06$^{+0.05}_{-0.02}$ & $\dagger$& $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & 0.25$^{+6.99}_{-0.08}$ & 2.17$^{+1.78}_{-0.79}$ & 0.28$^{+2.98}_{-0.18}$ & 40.16$^{+0.09}_{-0.08}$ & 39.76$^{+0.24}_{-0.23}$ & 0.72 \\ X-6$^{{\tt c}}$ & 196 & 0.09$^{+0.04}_{-0.02}$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & $\dagger$ & $>$ 0.10 & 2.34$^{+3.38}_{-0.46}$ & 0.19$^{+0.94}_{-0.08}$ & 40.16$^{+0.07}_{-0.07}$ & 39.55$^{+0.25}_{-0.27}$ & 0.09 \\ \enddata \tablecomments{Best-fit model parameters from spectral fits to the {\itshape Chandra\/}\ observation of each component of VV~114: hot gas and point sources VV~114 X-1 to X-6. Quoted uncertainties are 90\% confidence intervals. Col. (1): source name, footnote describes the spectral model employed in {\tt XSPEC}. Col. (2): total number of counts used in spectral fit. Col. (3): multiplicative constant modifying fixed diffuse model component. Col. (4): plasma temperature in keV of lower temperature {\tt APEC} model component. Col. 
(5): normalization for lower temperature {\tt APEC} component. Col. (6): column density in units of 10$^{22}$ cm$^{-2}$ for higher temperature {\tt APEC} component. Col. (7): plasma temperature in keV for higher temperature {\tt APEC} component. Col. (8): normalization for higher temperature {\tt APEC} component. Col. (9): column density in units of 10$^{22}$ cm$^{-2}$ for power-law component. Col. (10): photon index for power-law component. Col. (11): normalization for power-law component. Col. (12): 0.5--8 keV luminosity, corrected for foreground Galactic absorption and assuming $D$ = 88 Mpc. Col. (13): 2--10 keV luminosity, corrected for foreground Galactic absorption and assuming $D$ = 88 Mpc. Col. (14): goodness-of-fit measure (see Section~\ref{sec:fit_techniques}).} \tablenotetext{\dagger}{\footnotesize Parameters fixed to best-fit values from the fit to the point-source-free spectrum (``diffuse").} \tablenotetext{{\tt a}}{\footnotesize {\tt XSPEC} model: \texttt{tbabs$_{\scaleto{\rm Gal}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$ + pow}), where the foreground Galactic absorption was fixed (\texttt{tbabs$_{\scaleto{\rm Gal}{4pt}}$}; $N_{\scaleto{\rm H}{4pt}}$ = 1.2 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$), and the thermal models assumed $Z$ = 0.51 $Z_{\odot}$.} \tablenotetext{{\tt b}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*(apec$_{\scaleto{2}{4pt}}$ + pow))}, where the foreground Galactic absorption was fixed (\texttt{tbabs$_{\scaleto{\rm Gal}{4pt}}$}; $N_{\scaleto{\rm H}{4pt}}$ = 1.2 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$).} \tablenotetext{{\tt c}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{kT}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$) + tbabs$_{\scaleto{3}{4pt}}$*pow}), where the foreground Galactic absorption was fixed (\texttt{tbabs$_{\scaleto{\rm Gal}{4pt}}$}; $N_{\scaleto{\rm H}{4pt}}$ = 1.2 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$).} \end{deluxetable*}

\section{Results}\label{sec:results}

To construct the galaxy-wide X-ray SED of VV~114 and estimate the XRB contribution, we use the archival {\itshape Chandra\/}\ observation of VV~114 in conjunction with the newly obtained, nearly-simultaneous {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations. In Section~\ref{sec:point_src_decomp}, we present our overall spectral modeling approach and fit results for each major component of VV~114, including the point source population and the hot, diffuse gas component of the galaxy. As the {\itshape Chandra\/}\ observation was taken $\sim$ 13~yr prior to the newly obtained {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations, we consider the potential impact of variability from the sources of compact emission in VV~114 (ULXs and possible AGN) on this analysis. To mitigate the impact of variability, we analyze separately the {\itshape Chandra\/}\ spectra from the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations, applying the constraints from the {\itshape Chandra\/}\ spectral fits, particularly for the non-time-variable hot gas component of the galaxy, to the newly obtained {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ data. In Section~\ref{sec:gal_wide_spec} we present a comparison of results between the different epochs, as well as the galaxy-wide spectrum and associated X-ray SED for VV~114.
\subsection{{\itshape Chandra\/}\ Point Source Decomposition}\label{sec:point_src_decomp}

We began our investigation of the galaxy-wide SED of VV~114 through a spectral decomposition of the major galaxy components using the archival {\itshape Chandra\/}\ observation. With {\itshape Chandra\/}, VV~114 is resolved into six discrete point sources embedded in hot, diffuse gas (see Figure~\ref{fig:chandra_rgb}). All six point sources are ULXs with $L_{\rm 2-10~keV}$ $\approx$ (3--110) $\times$ 10$^{39}$ erg~s$^{-1}$, and are detected with sufficient counts ($\gtrsim$ 200) for simple spectral fitting (Table~\ref{tab:point_src_fits}). The brightest point source in the eastern region of the galaxy (VV~114 X-1) is a possible AGN \citep{Iono2013,Saito2015}, and we devote more attention to discussion of possible AGN contamination in Section~\ref{sec:agn_contrib}.

For each of the six point sources, as well as the point-source-free diffuse emission component, we fit the unbinned source spectra with an appropriately scaled background model as described in Section~\ref{sec:fit_techniques}. Below we describe the spectral models for each component resolved with {\itshape Chandra\/}.

We first assessed the point-source-free hot, diffuse gas component using an absorbed (foreground Galactic and intrinsic) two-temperature thermal plasma ({\tt APEC}) model along with a power-law continuum to account for any unresolved XRB emission from undetected XRBs and the wings of the PSFs of X-ray detected point sources. Given the measured gas-phase metallicity for VV~114 (12 + log (O/H) = 8.4), we fixed the abundance for the {\tt APEC} model components to 0.51 $Z_{\odot}$, a direct conversion assuming \citet{Asplund2009} abundances, where $Z_{\odot}$ corresponds to 12 + log(O/H) = 8.69. The choice of the two-temperature thermal plasma model with intrinsic absorption is physically motivated assuming the diffuse emission detected with {\itshape Chandra\/}\ is produced via a hot disk seen through an intrinsic obscuring column (higher temperature, absorbed {\tt APEC} component), as well as a relatively unobscured hot halo (lower temperature, unabsorbed {\tt APEC} component) \citep[e.g.,][]{Martin2002,Strickland2004}. Such a model is consistent with hot, diffuse emission produced by feedback from supernovae and stellar winds \citep[e.g.,][]{Strickland2004,Grimes2005}. The model choice is further motivated by previous empirical studies, which have found that a two-temperature thermal plasma with intrinsic absorption well describes the diffuse emission in star-forming galaxies across a range of SFRs \citep[e.g.,][]{Summers2003,Hartwell2004,MineoGas,Lehmer2015,Smith2018}. The choice to fix the abundances of the {\tt APEC} components to the measured gas-phase metallicity for VV~114 is supported by previous X-ray investigations of the hot interstellar medium (ISM) in star-forming galaxies, which have found spectral degeneracies when attempting to fit for metal abundances using X-ray spectra \citep[e.g.,][]{Weaver2000,Dahlem2000}, and further that the gas-phase metallicity is a good proxy for the metal abundance of the hot ISM \citep[e.g.,][]{Ott2005I,Grimes2005,Grimes2006}. The best-fit values for the free parameters from this diffuse component spectral model are listed in the first row of Table~\ref{tab:point_src_fits}, with the {\tt XSPEC} description of the model listed in the table notes.
The diffuse gas in VV~114 is well described by $\sim$ 0.4 keV and $\sim$ 0.8 keV components, consistent with plasma temperatures measured for the hot ISM in other star-forming galaxies \citep[e.g.,][]{Ott2005I,Grimes2005,MineoGas,Smith2018}. Previous X-ray analysis of VV~114 using the same {\itshape Chandra\/}\ data was performed by \citet{Grimes2006}, finding $kT = 0.3^{+0.75}_{-0.20}$ and $kT = 0.62 \pm 0.03$ keV. While the lower temperature component from \citet{Grimes2006} is consistent with our findings, the high temperature component of their model is inconsistent with our $kT = 0.80^{+0.11}_{-0.06}$ keV component. This inconsistency despite the same dataset may be due to differences in the spectral extraction and modeling approach. In particular, the \citet{Grimes2006} analysis did not explicitly separate the diffuse emission from the point source emission as we do here, and further employed a {\tt vmekal} model for the hot gas, fitting for the abundances using the {\tt angr} abundance tables \citep{Anders1989} in {\tt XSPEC}, while in this work we have employed the {\tt APEC} model with fixed abundances relative to the \citet{Asplund2009} abundance tables. In subsequent modeling of the six point sources, we included the diffuse gas component listed in Table~\ref{tab:point_src_fits} as a fixed component modified by a free multiplicative constant to account for any residual hot gas in the point source extraction regions. For the point sources themselves, we employed simple absorbed power-law models, appropriate for either XRBs or AGN. Each point source was therefore fit with four freely varying components for the source (i.e., diffuse gas normalization, intrinsic column density, photon index, and power-law normalization). \begin{deluxetable*}{ccccccccccccc} \tabletypesize{\tiny} \tablewidth{\textwidth} \tablecaption{Spectral Fit Results for Galaxy-Wide Models \label{tab:gal_wide_fits}} \tablehead{ \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{} & \colhead{$N_{\scaleto{\rm H, XRB}{3pt}}$} & \colhead{} & \colhead{$E_{\scaleto{\rm break}{3pt}}$} & \colhead{} & \colhead{$A_{\scaleto{\Gamma_{\rm XRB}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm gal}{3pt}}_{\scaleto{{\rm 0.5-8~keV}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm gal}{3pt}}_{\scaleto{{\rm 2-10~keV}}{3pt}}$} & \colhead{} \\ \colhead{Model} & \colhead{Inst.} & \colhead{C$_{\scaleto{\it XMM}{3pt}}$} & \colhead{C$_{\scaleto{{\it NuSTAR\/}}{3pt}}$} & \colhead{C$_{\scaleto{kT}{3pt}}$} & \colhead{(10$^{22}$ cm$^{-2}$)} & \colhead{$\Gamma_{\scaleto{\rm XRB,1}{3pt}}$} & \colhead{(keV)} & \colhead{$\Gamma_{\scaleto{\rm XRB,2}{3pt}}$} & \colhead{(10$^{-5}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{gof} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} & \colhead{(9)} & \colhead{(10)} & \colhead{(11)} & \colhead{(12)} & \colhead{(13)} } \startdata {\scriptsize {\tt pow$_{\scaleto{\rm XRB}{2pt}}$ + pow$_{\scaleto{\rm AGN}{2pt}}$}} & {\itshape Chandra\/}$^{{\tt a}}$ & $\cdots$ & $\cdots$ & 1.26$^{+0.14}_{-0.13}$ & 0.13$^{+0.05}_{-0.03}$ & 2.07$^{+0.19}_{-0.16}$ & $\cdots$ & $\cdots$ & 6.94$^{+1.60}_{-0.85}$ & 41.65$^{+0.02}_{-0.01}$ & 41.43$^{+0.03}_{-0.03}$ & 0.07 \\ {\scriptsize {\tt pow$_{\scaleto{\rm XRB}{2pt}}$}} & {\itshape Chandra\/}$^{{\tt b}}$ & $\cdots$ & $\cdots$ & 1.28$^{+0.15}_{-0.09}$ & 0.08$^{+0.04}_{-0.04}$ & 1.69$^{+0.11}_{-0.14}$ & $\cdots$ & $\cdots$ & 6.37$^{+0.76}_{-0.94}$ & 41.64$^{+0.02}_{-0.02}$ & 
41.41$^{+0.04}_{-0.04}$ & 0.43 \\ {\scriptsize {\tt pow$_{\scaleto{\rm XRB}{2pt}}$ + pow$_{\scaleto{\rm AGN}{2pt}}$}} & {\it XMM}+{\it NuSTAR\/}$^{{\tt a\dagger}}$ & 0.97$^{+0.05}_{-0.04}$ & 0.56$^{+0.05}_{-0.05}$ & $\cdots$ & $\dagger$ & $\dagger$ & $\cdots$ & $\cdots$ & $\dagger$ & 41.60$^{+0.02}_{-0.01}$ & 41.30$^{+0.03}_{-0.03}$ & $<$ 10$^{-3}$ \\ {\scriptsize {\tt pow$_{\scaleto{\rm XRB}{2pt}}$}} & {\it XMM}+{\it NuSTAR\/}$^{{\tt b\ddagger}}$ & 0.96$^{+0.05}_{-0.04}$ & 0.63$^{+0.06}_{-0.06}$ & $\cdots$ & $\ddagger$ & $\ddagger$ & $\cdots$ & $\cdots$ & $\ddagger$ & 41.59$^{+0.02}_{-0.02}$ & 41.30$^{+0.03}_{-0.03}$ & $<$ 10$^{-3}$ \\ {\scriptsize {\tt bknpow$_{\scaleto{\rm ULX}{2pt}}$}} & {\it XMM} + {\it NuSTAR}$^{{\tt c}}$ & 1.39$^{+0.12}_{-0.04}$ & 1.62$^{+0.33}_{-0.17}$ & $\cdots$ & 0.01$^{+0.05}_{-0.01}$ & 1.44$^{+0.08}_{-0.07}$ & 4.03$^{+1.37}_{-0.50}$ & 2.51$^{+0.33}_{-0.19}$ & 2.61$^{+0.23}_{-0.32}$ & 41.54$^{+0.02}_{-0.02}$ & 41.27$^{+0.04}_{-0.04}$ & 0.06 \\ \enddata \tablecomments{Best-fit model parameters from spectral fits to galaxy-wide {\itshape Chandra\/}, {\it XMM-Newton\/}, and {\it NuSTAR\/}\ spectra of VV~114. Quoted uncertainties are 90\% confidence interval. Col. (1): model descriptor. Col. (2): instrument(s), footnote describe the spectral model employed in {\tt XSPEC}. Col. (3): multiplicative constant for {\it XMM-Newton\/}\ spectra ({\it XMM-Newton\/}\ + {\it NuSTAR\/}\ fits only). Col. (4): multiplicative constant for {\it NuSTAR\/}\ spectra ({\it XMM-Newton\/}\ + {\it NuSTAR\/}\ fits only). Col. (5): multiplicative constant modifying the diffuse model component ({\itshape Chandra\/}\ fits only). Col. (6): column density in units of 10$^{22}$ cm$^{-2}$ for XRB power-law or broken power-law component. Col. (7): photon index for XRB power-law or first photon index for broken power-law component. Col. (8): break energy in keV for XRB broken power-law component. Col. (9): second photon index for XRB broken power-law component. Col. (10): normalization for XRB power-law or broken power-law component. Col. (11): galaxy-integrated 0.5--8 keV $L_{\mbox{\scriptsize{X}}}$\ derived from the model, corrected for foreground Galactic absorption and assuming $D$ = 88 Mpc. Col. (12): galaxy-integrated 2--10 keV $L_{\mbox{\scriptsize{X}}}$\ derived from the model, corrected for foreground Galactic absorption and assuming $D$ = 88 Mpc. Col. 
(13): goodness-of-fit measure (see Section~\ref{sec:fit_techniques}).} \tablenotetext{{\tt a}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{kT}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$) + tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm AGN}{4pt}}$ + tbabs$_{\scaleto{4}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$)}, where foreground Galactic absorption ({\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$}) was fixed to $N_{\scaleto{\rm H}{4pt}}$ =1.20 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$, all ({\tt apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$}) components were fixed to values from the fit to the point-source free spectrum, the ({\tt tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm AGN}{4pt}}$}) components were fixed to the values from the fit to the spectrum of VV~114 X-1, and the ({\tt tbabs$_{\scaleto{4}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$}) components are allowed to freely vary.} \tablenotetext{{\tt b}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{kT}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$) + tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$)}, where foreground Galactic absorption ({\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$}) was fixed to $N_{\scaleto{\rm H}{4pt}}$ =1.20 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$, all ({\tt apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$}) components were fixed to values from the fit to the point-source free spectrum, and the ({\tt tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$}) components were allowed to freely vary.} \tablenotetext{{\tt a\dagger}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{\rm inst}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$ + tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm AGN}{4pt}}$ + tbabs$_{\scaleto{4}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$))}, where the only freely varying parameter is the instrumental constant ({\tt constant$_{\scaleto{\rm inst}{4pt}}$}). The foreground Galactic absorption ({\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$}) was fixed to $N_{\scaleto{\rm H}{4pt}}$ =1.20 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$, all ({\tt apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$}) components were fixed to values from the fit to the point-source free spectrum, the ({\tt tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm AGN}{4pt}}$}) components were fixed to the values from the fit to the spectrum of VV~114 X-1, and the ({\tt tbabs$_{\scaleto{4}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$}) component was fixed to the best-fit values from the galaxy-wide fit to the {\itshape Chandra\/}\ observation (model {\tt a}).} \tablenotetext{{\tt b\ddagger}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{\rm inst}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$ + tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$))}, where the only freely varying parameter is the instrumental constant ({\tt constant$_{\scaleto{\rm inst}{4pt}}$}). 
The foreground Galactic absorption ({\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$}) was fixed to $N_{\scaleto{\rm H}{4pt}}$ =1.20 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$, all ({\tt apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$}) components were fixed to values from the fit to the point-source free spectrum, and the ({\tt tbabs$_{\scaleto{3}{4pt}}$*pow$_{\scaleto{\rm XRB}{4pt}}$}) component was fixed to the best-fit values from the galaxy-wide fit to the {\itshape Chandra\/}\ observation (model {\tt b}).} \tablenotetext{{\tt c}}{\footnotesize {\tt XSPEC} model: {\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$*(constant$_{\scaleto{\rm inst}{4pt}}$*(apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$ + tbabs$_{\scaleto{3}{4pt}}$*bknpow))}, where the only freely varying parameters are the instrumental constant ({\tt constant$_{\scaleto{\rm inst}{4pt}}$}) and the parameters of the ({\tt tbabs$_{\scaleto{3}{4pt}}$*bknpow}) model component. The foreground Galactic absorption ({\tt tbabs$_{\scaleto{\rm Gal}{4pt}}$}) was fixed to $N_{\scaleto{\rm H}{4pt}}$ =1.20 $\times$ 10$^{\scaleto{20}{4pt}}$ cm$^{\scaleto{-2}{4pt}}$, and all ({\tt apec$_{\scaleto{1}{4pt}}$ + tbabs$_{\scaleto{2}{4pt}}$*apec$_{\scaleto{2}{4pt}}$}) components were fixed to values from the fit to the point-source-free {\itshape Chandra\/}\ spectrum.} \end{deluxetable*} Using the above described power-law-plus-hot-gas model for VV~114 X-2 to X-6, we find steep photon indices ($\Gamma > 1.5$), relatively low column densities modifying the power-law components, and minimal contributions from the surrounding hot gas, as indicated by the small values of the normalizations to the fixed diffuse gas components. The best-fit parameters and their associated uncertainties (90\% confidence intervals) along with the 0.5--8 and 2--10 keV luminosities for each source from this model are summarized in Table~\ref{tab:point_src_fits}. The models for point sources VV~114 X-2 to X-6 are consistent with their being either collections of unresolved XRBs or ULXs embedded in hot gas, indicative of recent star formation. We initially applied the default power-law-plus-hot-gas model with fixed parameters to VV~114 X-1, but found gof = 0.03, suggesting the model could be improved. We next attempted to fit VV~114 X-1 with a simple absorbed power-law, but found the fit left residuals at energies $<$ 0.5 keV and at the location of emission line complexes between 1--2 keV, indicating the need to include one or more thermal components. Given these results and that the multiwavelength data available for VV~114 (e.g., Figure~\ref{fig:hst_rgb}) indicates heavy obscuration in the eastern portion of the galaxy where VV~114 X-1 is located, we next adopted a slightly altered version of the default model. The new model for VV~114 X-1 consists of an unabsorbed {\tt APEC} component (unobscured hot halo), as well as an absorbed {\tt APEC}-plus-power-law component, representing the obscured emission from the hot disk and VV~114 X-1. In this model for VV~114 X-1 we fixed the {\tt APEC} temperatures to the values from the default model ($kT_{1}$ = 0.36 keV and $kT_{2}$ = 0.80 keV), but allowed the {\tt APEC} normalizations to freely vary. We likewise allowed the intrinsic column density and power-law parameters to freely vary. 
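In PyXspec terms, the altered model for VV~114 X-1 can be set up as in the following schematic sketch (the file name is a placeholder), with the plasma temperatures frozen to the diffuse-fit values while their normalizations remain free:

\begin{verbatim}
from xspec import Fit, Model, Spectrum

spec = Spectrum("vv114_x1.pi")       # placeholder file name
Fit.statMethod = "cstat"

# Unabsorbed halo APEC plus absorbed (disk APEC + power law)
m = Model("tbabs*(apec + tbabs*(apec + pow))")
m.TBabs.nH = 0.012                   # Galactic column (1e22 cm^-2)
m.TBabs.nH.frozen = True

m.apec.kT = 0.36                     # halo temperature, fixed
m.apec.kT.frozen = True
m.apec_4.kT = 0.80                   # disk temperature, fixed
m.apec_4.kT.frozen = True
# Abundances tied to the gas-phase value adopted in the text
m.apec.Abundanc = 0.51
m.apec.Abundanc.frozen = True
m.apec_4.Abundanc = 0.51
m.apec_4.Abundanc.frozen = True
# APEC normalizations, the intrinsic N_H (TBabs_3), and the power-law
# photon index and normalization are left free, as in the text

Fit.perform()
\end{verbatim}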
The fit to VV~114 X-1 using this model yields gof = 0.06, a slight improvement over the default model, returning a high column density ($N_{\rm H} = 2.11^{+0.36}_{-0.23} \times 10^{22}$ cm$^{-2}$) modifying the power-law and higher temperature {\tt APEC} components, and a photon index of $\Gamma = 1.01^{+0.59}_{-0.24}$. The best-fit values and associated uncertainties from this model are listed in the second row of Table~\ref{tab:point_src_fits}. Importantly, the values for the column density and photon index from this model are consistent with values from \citet{Grimes2006} (their source VV~114E) using the same {\itshape Chandra\/}\ data, albeit with a slightly different source model than the one employed here (see discussion in Section~\ref{sec:agn_contrib}). This lends additional support to the adoption of this model for VV~114 X-1. We discuss the significance of this spectral fit result for VV~114 X-1 as a possible AGN in more detail in Section~\ref{sec:agn_contrib}.

\subsection{Galaxy-Wide Spectral Analysis}\label{sec:gal_wide_spec}

We extend the results from the spectral decomposition of VV~114 to construct a galaxy-wide spectral model and determine the dominant spectral component at higher energies. In fitting the {\itshape Chandra\/}\ observation of the galaxy-wide spectrum we consider all major spectral components as derived from the spectral decomposition described in Section~\ref{sec:point_src_decomp}, and then apply these spectral constraints as appropriate in building a galaxy-wide spectral model to be applied to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations, in which VV~114 is consistent with being a single source (see contours in Figures~\ref{fig:chandra_rgb}--\ref{fig:hst_rgb}).

We note that it is not as straightforward to interpret the gof measure for the galaxy-wide fits as for the point source fits. This is because the {\tt goodness} command simulates spectra assuming variance due only to counting statistics, whereas in fitting the galaxy-wide spectra we may be dominated by systematics. Thus, for the galaxy-wide fits, we caution against interpreting very low gof values as indicating the need for additional model components or free parameters, or interpreting gof $\sim$ 1 as ``overfitting,'' as proper inclusion of systematic error would likely serve to widen the distribution of test statistics for the simulated spectra.

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,trim=0 0 0 0, clip]{chandra_specfits_allresid.pdf}
\caption{Spectral fit to the {\itshape Chandra\/}\ spectrum (grey points), where the data points have been binned to a minimum significance of five per spectral bin for plotting purposes. The best-fit total model is shown as the solid black line. Each component of the model is also displayed: the hot gas component (two-temperature thermal plasma model) as a dotted red line, the XRB component (absorbed power-law model) as a dashed blue line, and the background, both sky and instrumental, as the solid gold line. We display the residuals for the {\tt pow$_{\rm XRB}$}, {\tt bknpow$_{\rm ULX}$}, and {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} models as applied to the {\itshape Chandra\/}\ observation in the bottom three panels, annotated with the gof for the fit.
The model consisting of a single absorbed power-law is the most consistent with the {\itshape Chandra\/}\ data; however, as shown by the residuals, the quality of the {\itshape Chandra\/}\ data, particularly at energies $>$ 5 keV, is not sufficient to effectively rule out any of the models tested here.}\label{fig:chandra_spec_resid}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.5\textwidth,trim=0 0 0 0, clip]{XMMNU_specfits_allresid.pdf}
\caption{Spectral fit to joint, nearly-simultaneous {\it XMM-Newton\/}\ (pn: grey) and {\it NuSTAR\/}\ (FPMA + FPMB: cyan) spectra, where the data points have been grouped by instrument and to a minimum significance of five in {\tt XSPEC} for plotting purposes only. The total best-fit spectral model is displayed as a solid black line, with each major component of this model also labeled. The dotted red line shows the hot gas component (a two-temperature thermal plasma model), the dashed blue line represents the ULX component (an absorbed broken power-law), and the solid gold line shows the combined sky and instrumental background (described in Section~\ref{sec:fit_techniques}). Below the plotted spectra with best-fit model we show the residuals for all three models listed in the last three rows of Table~\ref{tab:gal_wide_fits} that were fit to the joint {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra. In each residual panel we list a shorthand for the model type and the gof for the fit. Only the model with the broken power-law component is consistent with the data, indicating that the global emission of VV~114 is dominated by ULXs.}\label{fig:xmmnu_spec_resid}
\end{figure}

We first fit the galaxy-wide {\itshape Chandra\/}\ spectrum of VV~114 using a model comprised of a hot gas component, an obscured AGN-like component, and an XRB population component (model: {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$}). In this {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} model we fixed the hot gas component to the best-fit model for the point-source-free spectrum (first row of Table~\ref{tab:point_src_fits}), allowing this component to be modified only by a freely varying multiplicative constant. We likewise fixed the obscured AGN-like component to the column density and power-law slope and normalization from the best-fit model to VV~114 X-1 (second row of Table~\ref{tab:point_src_fits}), under the assumption that X-1 is an AGN candidate distinct from the other detected point sources.

We consider the XRB population component of VV~114 to consist of the known ULXs VV~114 X-2 to X-6 as well as the unresolved XRBs (e.g., the power-law component in the diffuse-only spectrum), for which the best-fit models return varying levels of intrinsic absorption and a range of photon indices and power-law normalizations. In all galaxy-wide models for VV~114 we therefore model the XRB population with a single absorbed power-law component to account for the {\it combination} of all known ULXs and the unresolved XRBs. The assumption of a single absorbed power-law for the XRB population is consistent with a fit to the stacked spectra of sources VV~114 X-2 to X-6. We allow all parameters associated with this absorbed power-law model (i.e., intrinsic absorption, photon index, normalization) to be freely varying in order to determine the best-fit parameters for the ensemble XRB population contribution to the galaxy-wide emission.
The results for the fit to the galaxy-wide {\itshape Chandra\/}\ spectrum using the {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} model are recorded in the first row of Table~\ref{tab:gal_wide_fits}, where we list the best-fit values for the free parameters and their associated uncertainties, as well as the gof value for the model. The {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} model produces an acceptable fit (gof = 0.07) to the galaxy-wide {\itshape Chandra\/}\ spectrum (residuals in the bottom panel of Figure~\ref{fig:chandra_spec_resid}), returning a photon index $\Gamma = 2.07^{+0.19}_{-0.16}$ for the XRB population power-law component, and a power-law normalization consistent with the summation of normalizations for the known ULXs and unresolved XRBs from the decomposition fits in Table~\ref{tab:point_src_fits}. We also tested a simpler model consisting of the same hot gas component, with parameters fixed to the best-fit values from the point-source-free spectrum, but only a single absorbed power-law component with freely varying absorption, photon index, and normalization (model: {\tt pow$_{\rm XRB}$}). In this model, the single power-law represents the combination of all six point sources (ULXs and possible AGN) and unresolved XRBs. The results from this {\tt pow$_{\rm XRB}$} model fit to the galaxy-wide {\itshape Chandra\/}\ spectrum are listed in the second row of Table~\ref{tab:gal_wide_fits} and shown in the top panel of Figure~\ref{fig:chandra_spec_resid}, with associated residuals in the panel just below. The {\tt pow$_{\rm XRB}$} model returns a photon index of $\Gamma = 1.69^{+0.11}_{-0.14}$, consistent with expectations for a population of XRBs. The gof = 0.43 further suggests that this {\tt pow$_{\rm XRB}$} model is a somewhat more acceptable fit to the galaxy-wide {\itshape Chandra\/}\ spectrum, indicating that an additional power-law component describing VV~114 X-1 is not {\it required} to model the galaxy-wide X-ray emission. We next applied these two models (i.e., {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} and {\tt pow$_{\rm XRB}$} alone) to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra. To start, we simply applied each model to the joint {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra with the parameters of all model components fixed to the best-fit values from the fits to the {\itshape Chandra\/}\ spectrum, save for an overall multiplicative scaling constant for each instrument, which we allowed to vary to account for flux calibration differences or intrinsic variability. We record the results from applying these {\itshape Chandra\/}-derived models to {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ in the third and fourth rows of Table~\ref{tab:gal_wide_fits}, finding that neither model is consistent with the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ data (gof $<$ 10$^{-3}$). We also attempted fitting the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra with {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} and {\tt pow$_{\rm XRB}$} models with freely varying power-law parameters, i.e., without fixing these parameters to the {\itshape Chandra\/}-derived values, but found unacceptable fits in both cases (gof $<$ 10$^{-3}$). The residuals from the {\itshape Chandra\/}-derived {\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} and {\tt pow$_{\rm XRB}$} models (bottom two panels of Figure~\ref{fig:xmmnu_spec_resid}) indicate a reasonable fit to the {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ data at $E \lesssim$ 2--3 keV, but an overestimate of the $E >$ 2--3 keV emission (overall gof $<$ 10$^{-3}$).
These results suggest that the extension of an XRB-like + AGN-like power-law or a single XRB-like power-law to higher energies using parameters derived from fits to the {\itshape Chandra\/}\ data is inconsistent with the observed {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra. We discuss these results in the context of the potential AGN source VV~114 X-1 in Section~\ref{sec:agn_contrib}. \begin{deluxetable*}{cccccccc} \tabletypesize{\tiny} \tablewidth{\textwidth} \tablecaption{Luminosity of Galaxy Components from Best-Fit Spectral Models \label{tab:comp_lx}} \tablehead{ \colhead{} & \colhead{} & \colhead{log $L^{\scaleto{\rm gas}{3pt}}_{\scaleto{{\rm 0.5-2~keV}}{3pt}}$ } & \colhead{log $L^{\scaleto{\rm XRB}{3pt}}_{\scaleto{{\rm 0.5-2~keV}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm gas}{3pt}}_{\scaleto{{\rm 0.5-8~keV}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm XRB}{3pt}}_{\scaleto{{\rm 0.5-8~keV}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm gas}{3pt}}_{\scaleto{{\rm 2-10~keV}}{3pt}}$} & \colhead{log $L^{\scaleto{\rm XRB}{3pt}}_{\scaleto{{\rm 2-10~keV}}{3pt}}$} \\ \colhead{Inst.} & \colhead{Model} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} & \colhead{(erg s$^{-1}$)} \\ \colhead{(1)} & \colhead{(2)} & \colhead{(3)} & \colhead{(4)} & \colhead{(5)} & \colhead{(6)} & \colhead{(7)} & \colhead{(8)} } \startdata {\itshape Chandra\/} & {\scriptsize {\tt pow$_{\scaleto{\rm XRB}{2pt}}$}} & 41.06$^{+0.03}_{-0.03}$ & 41.04$^{+0.04}_{-0.04}$ & 41.09$^{+0.03}_{-0.03}$ & 41.50$^{+0.02}_{-0.02}$ & 39.98$^{+0.03}_{-0.03}$ & 41.39$^{+0.04}_{-0.04}$ \\ {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ & {\scriptsize {\tt bknpow$_{\scaleto{\rm ULX}{2pt}}$}} & 41.09$^{+0.04}_{-0.04}$ & 40.87$^{+0.04}_{-0.04}$ & 41.13$^{+0.04}_{-0.04}$ & 41.33$^{+0.04}_{-0.04}$ & 40.01$^{+0.04}_{-0.04}$ & 41.25$^{+0.04}_{-0.05}$ \\ \enddata \tablecomments{Luminosities in three different bands of the components (hot gas and XRB population) comprising the galaxy-integrated $L_{\mbox{\scriptsize{X}}}$\ of VV~114 from the best-fit spectral models to the {\itshape Chandra\/}\ and {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ spectra. Col. (1): Instrument(s). Col. (2): Best-fit spectral model. Col. (3): 0.5--2 keV luminosity of the hot gas component. Col. (4): 0.5--2 keV luminosity of the XRB population component. Col. (5): 0.5--8 keV luminosity of the hot gas component. Col. (6): 0.5--8 keV luminosity of the XRB population component. Col. (7): 2--10 keV luminosity of the hot gas component. Col. (8): 2--10 keV luminosity of the XRB population component.} \end{deluxetable*} We next swapped the power-law component in the {\tt pow$_{\rm XRB}$} model for a broken power-law (model: {\tt bknpow$_{\rm ULX}$}), a component that is physically and observationally motivated for a ULX-dominated population \citep[e.g.,][]{Gladstone2009,Wik2014,Walton2013,Walton2015,Rana2015,Lehmer2015,Yukita2016}. As in previous fits, we fixed all the diffuse gas model parameters to the best-fit values from the fit to the {\itshape Chandra\/}\ point-source-free spectrum, but allowed all broken power-law parameters as well as the overall multiplicative constant for each instrument to vary. This {\tt bknpow$_{\rm ULX}$} model yields a fit consistent with the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations (gof = 0.06), demonstrating that the data are consistent with a spectral turnover at $\sim$ 4 keV.
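For reference, the {\tt bknpower} component has the standard {\tt XSPEC} photon spectrum
\begin{equation*}
A(E) =
\begin{cases}
K\,E^{-\Gamma_{1}}, & E \leq E_{\rm break} \\[4pt]
K\,E_{\rm break}^{\,\Gamma_{2}-\Gamma_{1}}\,E^{-\Gamma_{2}}, & E > E_{\rm break},
\end{cases}
\end{equation*}
where $K$ is the normalization in photons keV$^{-1}$ cm$^{-2}$ s$^{-1}$ at 1 keV; the fitted break at $E_{\rm break} \sim 4$ keV thus marks where the spectrum steepens from $\Gamma_{1}$ to $\Gamma_{2}$.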
We record the values for the free parameters and their associated uncertainties from the {\tt bknpow$_{\rm ULX}$} model in the last row of Table~\ref{tab:gal_wide_fits}. Given the success of this {\tt bknpow$_{\rm ULX}$} model in fitting the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra, we test this model on the {\itshape Chandra\/}\ observation as well, allowing only the broken power-law normalization and intrinsic absorption to freely vary. We show the residuals for the {\tt bknpow$_{\rm ULX}$} model as applied to the {\itshape Chandra\/}\ data in the middle panel of Figure~\ref{fig:chandra_spec_resid}, demonstrating that this model provides an acceptable fit (gof = 0.27) to the {\itshape Chandra\/}\ observation. However, as shown by the residuals in Figure~\ref{fig:chandra_spec_resid}, the quality of the {\itshape Chandra\/}\ data, particularly at energies $>$ 5 keV, is not sufficient to distinguish between the different models tested here. In fact, in order to obtain an acceptable fit to the {\itshape Chandra\/}\ spectrum with the {\tt bknpow$_{\rm ULX}$} model we must freeze the majority of the model parameters to the best-fit values obtained from fits to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra, indicating that the {\itshape Chandra\/}\ spectrum alone cannot constrain parameters such as the broken power-law photon indices and break energy. This underlines that, while {\itshape Chandra\/}\ is powerful for resolving point sources from the hot, diffuse emission in the galaxy, {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ are critical for high-energy ($>$ 5 keV) constraints, where spectral turnover from a ULX population (or lack thereof) is more apparent. We display the best-fit {\tt bknpow$_{\rm ULX}$} model and its associated components (hot, diffuse gas, broken power-law, and background) as applied to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra in the top panel of Figure~\ref{fig:xmmnu_spec_resid}, with the residuals for this model in the second panel from the top. The residuals from the {\itshape Chandra\/}-derived models ({\tt pow$_{\rm XRB}$ + pow$_{\rm AGN}$} and {\tt pow$_{\rm XRB}$}), which were poor fits to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations, are shown in the bottom two panels of the same figure for reference. The preference for a broken power-law over simple power-law component(s) as constrained by the nearly-simultaneous {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra is consistent with the galaxy-wide emission of VV~114 being dominated by ULXs at energies $\gtrsim$ 2 keV \citep[e.g.,][]{Gladstone2009}. We discuss this ULX-dominated interpretation as it relates to the possible AGN in VV~114, as well as metallicity effects, in Sections~\ref{sec:agn_contrib}--\ref{sec:sed_metal}. In columns 11--12 of Table~\ref{tab:gal_wide_fits} we list the galaxy-integrated {\it total} X-ray luminosity in the 0.5--8 keV and 2--10 keV bands (log $L^{\rm gal}_{\rm 0.5-8~keV}$ and log $L^{\rm gal}_{\rm 2-10~keV}$) corrected for foreground Galactic absorption from each of the spectral models applied to the {\itshape Chandra\/}\ and {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ spectra.
In Table~\ref{tab:comp_lx} we likewise list the galaxy-integrated luminosities of the {\it components} which constitute $L^{\rm gal}_{\rm X}$, namely the luminosities of the hot, diffuse gas (log $L^{\rm gas}_{\rm X}$) and XRB population (log $L^{\rm XRB}_{\rm X}$), in the 0.5--2 keV, 0.5--8 keV, and 2--10 keV bands derived from the best-fit model to the {\itshape Chandra\/}\ spectrum ({\tt pow$_{\rm XRB}$}) and the best-fit model to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra ({\tt bknpow$_{\rm ULX}$}). The discrepancy between the model XRB component luminosities listed in Table~\ref{tab:comp_lx} as measured with {\itshape Chandra\/}\ versus {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ can be attributed to multiple factors, including flux calibration differences between instruments, model differences, and the different integration times and epochs between observations. In Table~\ref{tab:comp_lx} we list XRB component luminosities derived from the {\it best-fit} models for each set of observations, thus some disagreement between {\itshape Chandra\/}\ and {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ is expected given that these observations are fit with different best-fit models ({\tt pow$_{\rm XRB}$} and {\tt bknpow$_{\rm ULX}$}, respectively). However, when we calculate the XRB $L_{\mbox{\scriptsize{X}}}$\ from the {\tt bknpow$_{\rm ULX}$} model fit to the {\itshape Chandra\/}\ spectrum we find log $L^{\rm XRB}_{\rm 0.5-8~keV}$ = 41.47 $\pm$ 0.02 and log $L^{\rm XRB}_{\rm 2-10~keV}$ = 41.33 $\pm$ 0.02, still inconsistent with the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ derived XRB component luminosities listed in Table~\ref{tab:comp_lx} using this same model. That the inconsistency is larger in the 0.5--8 keV band indicates that the depth of the observations is at least partly to blame for the discrepancy between luminosities. In particular, we use the {\it XMM-Newton\/}\ spectra to constrain the luminosity in the 0.5--8 keV band given its increased sensitivity across the entirety of the bandpass relative to {\it NuSTAR\/}; however, {\it XMM-Newton\/}\ is the shallowest of all observations used here, thus the XRB component luminosity in the 0.5--8 keV bandpass as constrained by {\it XMM-Newton\/}\ is likely systematically low compared to {\itshape Chandra\/}. By contrast, the discrepancy in the 2--10 keV band, where we use both {\it NuSTAR\/}\ and {\it XMM-Newton\/}\ to constrain the XRB component luminosity, is within the range expected due to flux calibration differences between instruments \citep{Madsen2015}. It is also possible that variability between epochs affects the derived luminosities; however, as we further demonstrate in Section~\ref{sec:compare_prev} using a set of archival {\it XMM-Newton\/}\ observations, the inconsistency between {\itshape Chandra\/}\ and {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ derived luminosities does not necessarily suggest substantial variability between epochs. All luminosities listed in Tables~\ref{tab:point_src_fits}--\ref{tab:comp_lx} are calculated assuming $D$ = 88 Mpc for VV~114, and using the {\tt cflux} convolution model in {\tt XSPEC} as a component modifying either the overall model (luminosities in Table~\ref{tab:point_src_fits} and Table~\ref{tab:gal_wide_fits}) or the appropriate model component (luminosities in Table~\ref{tab:comp_lx}). Therefore, all luminosities are based on fluxes corrected for Galactic extinction, but not intrinsic extinction.
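For reference, the conversion from these unabsorbed fluxes to the tabulated luminosities is the standard isotropic one, sketched below in Python; the input flux is an arbitrary illustrative value, not a measurement from this work.

\begin{verbatim}
# Convert an unabsorbed flux from cflux (erg/s/cm^2) to a luminosity
# at the adopted distance of VV 114 (D = 88 Mpc): L = 4 pi D^2 F.
import math

MPC_TO_CM = 3.086e24                  # 1 Mpc in cm
D_CM = 88.0 * MPC_TO_CM

def log_luminosity(flux_cgs):
    """Return log10 L (erg/s) for an isotropic source."""
    return math.log10(4.0 * math.pi * D_CM**2 * flux_cgs)

print(round(log_luminosity(1.4e-13), 2))   # illustrative flux -> 41.11
\end{verbatim}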
In the {\tt cflux} model component, we fixed the minimum and maximum energy parameters to return the flux in the appropriate band, and likewise fixed the normalizations of any model component modified by {\tt cflux} to the best-fit values from Tables~\ref{tab:point_src_fits}--\ref{tab:gal_wide_fits}. In refitting the models with {\tt cflux} to produce a galaxy-integrated total or component flux, we allowed only the flux parameter of the {\tt cflux} component, along with any free model parameters other than the component normalizations, to vary. \begin{figure*} \centering \includegraphics[width=\textwidth,trim=0 0 0 0, clip]{vv114_sed_unfolded-eps-converted-to.pdf} \caption{The 0.3--30 keV SED of VV~114 (solid green line) from the best-fit model to the joint {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ spectra, with the major components of the SED included: the hot gas component (dotted red line) and the XRB component (dashed blue line). The SED has been normalized by the SFR of VV~114 (38 $M_{\odot}$ yr$^{-1}$). We display the background-subtracted unfolded data points from {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ binned to a minimum significance of ten per spectral bin to roughly indicate how well the SED is constrained at different energies, although we note that we fit the unbinned spectra without any background subtraction to produce the SED shown in green. In the background as a light grey line we show the simulated SED for a star-forming galaxy at 0.5 $Z_{\odot}$ (see Figure~\ref{fig:sed_comp} for simulated SEDs at other metallicities, and Section~\ref{sec:sed_metal} for details on the construction of simulated SEDs). The SFR-normalized SED of VV~114 is consistent with the simulated SED for a 0.5 $Z_{\odot}$ galaxy, in line with its measured global, gas-phase metallicity of $\sim$ 0.5 $Z_{\odot}$.}\label{fig:sed_unfolded} \end{figure*} Finally, in Figure~\ref{fig:sed_unfolded} we present the SFR-normalized, unfolded 0.3--30 keV SED of VV~114 derived from the best-fit {\tt bknpow$_{\rm ULX}$} spectral model to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra, where the solid green line in Figure~\ref{fig:sed_unfolded} represents the total X-ray SED of VV~114, and the dotted red and dashed blue lines represent the contributions from the hot, diffuse gas and XRB population in the galaxy, respectively. On this same plot, we overlay the unfolded data points from the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra, which have been binned to a minimum significance of ten per spectral bin and background subtracted for display purposes only. We display the data in this way to give a sense of the uncertainty associated with the galaxy SED, particularly at higher energies where the data become more strongly background dominated. \section{Discussion}\label{sec:discuss} Previous studies of VV~114 have found an elevated galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR relative to the average value of local star-forming galaxies \citep{Basu-Zych2013,Basu-Zych2016}. In this section, we discuss the possible explanations for the elevated $L_{\mbox{\scriptsize{X}}}$/SFR in VV~114, namely the potential presence of an AGN and the sub-solar metallicity of the galaxy, ultimately ruling out a significant contribution from an AGN to the galaxy-integrated $L_{\mbox{\scriptsize{X}}}$\ in VV~114. We present a further discussion of the SFR-scaled 0.3--30 keV SED of VV~114 in the context of metallicity effects on XRB populations and future 21-cm measurements.
\subsection{Potential AGN Contribution}\label{sec:agn_contrib} Previous X-ray analyses of VV~114 have explored the possibility that the galaxy contains an AGN in its heavily obscured eastern region, but have been inconclusive as to the definitive presence of an accreting supermassive BH \citep[e.g.,][]{Grimes2006,Basu-Zych2016}. Multiwavelength analysis beyond X-rays offers evidence in favor of the AGN interpretation for VV~114 X-1. Using ALMA, \citet{Iono2013} and \citet{Saito2015} showed evidence of a compact, broad molecular emission component coincident with the eastern nucleus of VV~114, where the high detected HCN (4--3)/HCO$^{+}$ (4--3) ratio is indicative of the presence of a dust-enshrouded AGN, possibly surrounded by compact star-forming regions. In the mid-IR, \citet{LeFloch2002} demonstrated that the strong continuum emission in the eastern portion of the galaxy may be indicative of a heavily obscured AGN. However, \citet{Basu-Zych2013} used optical line ratio diagnostics to demonstrate that VV~114 lies squarely in the star-forming region of the Baldwin, Phillips, and Terlevich diagram \citep{BPT}, indicating that an AGN is unlikely to be the {\it dominant} source of ionizing photons in VV~114. Using the same {\itshape Chandra\/}\ data as analyzed in this work, \citet{Grimes2006} performed a spectral fit to VV~114E (our source VV~114 X-1), finding that their fits required the addition of Gaussian line components centered at 1.39 and 1.83 keV superposed on the continuum emission, line energies roughly corresponding to enhanced Mg and Si, respectively. These authors note that such emission lines may be associated with the presence of a low-luminosity AGN, but are also consistent with emission features expected from a region of intense star formation. Whereas the \citet{Grimes2006} spectral model for VV~114E (our source VV~114 X-1) consists of a single thermal plasma, Gaussian lines, and an absorbed power-law, our model for this source does not require additional Gaussian lines at 1.39 and 1.83 keV to produce an acceptable fit. Rather, our model consists of a two-temperature thermal plasma plus absorbed power-law as summarized in the second row of Table~\ref{tab:point_src_fits}. We find that adding the Gaussian lines at the energies included in \citet{Grimes2006} is degenerate with the features of our two-temperature thermal plasma with $kT = 0.36$ keV and $kT = 0.80$ keV. Furthermore, under the assumption that the hot gas component in VV~114 is associated with starburst activity, our model is strongly motivated by previous studies which have found that the hot gas component in star-forming galaxies is well-described by a two-temperature thermal plasma with characteristic temperatures $<$ 1 keV \citep[e.g.,][]{Strickland2004,Ott2005I,Ott2005II,Grimes2005,Tull2006I,Tull2006II,MineoGas,Smith2018}. Despite these model differences for the hot gas component, we find a photon index and column density for the power-law component of our model for VV~114 X-1 that is consistent with the values found by \citet{Grimes2006} for VV~114E using the same {\itshape Chandra\/}\ data. These fit results, both from this work as recorded in Table~\ref{tab:point_src_fits} and from \citet{Grimes2006} for the same {\itshape Chandra\/}\ data, demonstrate that VV~114 X-1 {\it is} unique relative to the other five point sources resolved by {\itshape Chandra\/}.
In particular, VV~114 X-1 is consistent with a power-law spectrum with $\Gamma \sim 1.0$, while the point sources present in the western portion of the galaxy are consistent with power-law models with $\Gamma > 1.5$. This harder spectrum is to be expected given that VV~114 X-1 sits behind a much higher column density than the sources in the western region of the galaxy (see Figure~\ref{fig:hst_rgb}). The photon index returned from our fit to the {\itshape Chandra\/}\ spectrum of VV~114 X-1 ($\Gamma = 1.01^{+0.59}_{-0.24}$) differs from expectations for a population of HMXBs or a ULX \citep[e.g.,][]{Remillard2006,Berghea2008,Gladstone2009}; however, within the upper range of the 90\% confidence interval on the best-fit value, the photon index is consistent with the power-law slope for a population of more heavily obscured XRBs or perhaps a single ULX \citep[e.g.,][]{Lehmer2013}. Photon indices in the range $\Gamma \sim 1$, such as the best-fit value for VV~114 X-1, have been measured both for a subset of ULXs with $L_{\mbox{\scriptsize{X}}}$ $\gtrsim$ 10$^{40}$ erg s$^{-1}$, possibly indicative of ULXs in the power-law dominated very high state \citep[e.g.,][]{Berghea2008}, and for some Compton-thick AGN \citep[e.g.,][]{Winter2008}. We cannot distinguish between these possibilities based on the {\itshape Chandra\/}\ data alone, though our measured column density for VV~114 X-1 does not support the interpretation of this source as a Compton-thick AGN. We note that \citet{Prestwich2015} find a similarly hard spectrum for the highly luminous source Haro 11 X-1 ($\Gamma = 1.2$), a source which they report is consistent with being a single compact accretor. It is possible that the $\Gamma \sim 1$ photon index measured for VV~114 X-1 is a consequence of the limited data quality, which leaves us insensitive to features such as a spectral turnover that would point more definitively to a ULX versus AGN interpretation. In any case, in the absence of higher quality spectra or long-term monitoring to detect possible state transitions, we cannot distinguish between an AGN and a ULX for VV~114 X-1 on the basis of the {\itshape Chandra\/}\ data alone. Although we cannot definitively determine the nature of VV~114 X-1, it is important to note that our spectral analysis using the newly obtained {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ data indicates that the galaxy-wide emission of VV~114 is not dominated by an AGN at energies $\gtrsim$ 2 keV. For an AGN-dominated galaxy we would expect a spectrum well-fit by a simple power-law \citep[e.g.,][]{Winter2008,Winter2009}. We find that the spectral fits to the joint {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations favor a broken power-law model with $E_{\rm break} \sim 4$~keV, and that the inclusion of a $\Gamma$ $\sim$ 1 power-law model component (i.e., VV~114 X-1) extended to higher energies is inconsistent with the data (see Figure~\ref{fig:xmmnu_spec_resid}). While AGN may exhibit spectral turnover features, the break energies typically occur at $\gtrsim$ 50 keV \citep[e.g.,][]{Molina2006,Winter2009}. The break energy at $\sim$ 4~keV found from our model is consistent with the spectral behavior measured from high-quality ULX spectra, indicative of disk and Comptonized corona components around accreting stellar-mass compact objects \citep{Gladstone2009}.
Thus, the X-ray spectral analysis presented here demonstrates that VV~114 X-1 does not dominate the global 0.3--30 keV emission of VV~114, and that in fact the galaxy-integrated emission is dominated by emission from ULXs. Notably, this finding is corroborated by previous X-ray studies of VV~114: \citet{Grimes2006} showed that if VV~114 X-1 is an AGN, it does not dominate the global emission of VV~114, and similarly \citet{Basu-Zych2016} concluded that the removal of VV~114 X-1 from the XLF results in a luminosity distribution consistent with a collection of blended HMXBs drawn from a ``standard'' HMXB XLF. Both of these studies thus conclude that the galaxy-integrated X-ray emission from VV~114 is consistent with expectations for a galaxy with $\gtrsim$ 2 keV emission dominated by XRBs. In the following sections, we therefore discuss our results for VV~114 assuming the $\gtrsim$ 2 keV emission is dominated by such sources. \subsection{Comparison with SEDs in other Star-Forming Galaxies}\label{sec:sed_comp} \begin{figure*} \centering \includegraphics[width=\textwidth,trim=0 0 0 0, clip]{vv114_sed_comparison-eps-converted-to.pdf} \caption{The SFR-normalized 0.3--30 keV SED of VV~114 (green), based on the best-fit spectral model to the nearly-simultaneous {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ data. The light grey lines in the background are simulated SEDs for a star-forming galaxy at 1.5 $Z_{\odot}$, $Z_{\odot}$, 0.5 $Z_{\odot}$, and 0.1 $Z_{\odot}$, annotated going from low to high normalization (see Section~\ref{sec:sed_metal} for details). We display the observed SFR-normalized SEDs of four other star-forming galaxies at different metallicities for comparison: NGC 253 (yellow, $\sim$ 1.1 $Z_{\odot}$; \citealt{Zaritsky1994}), NGC 3256 (red, $\sim$ 1.5 $Z_{\odot}$; \citealt{MH1999}, \citealt{Trancho2007}), M83 (dark red, $\sim$ 0.96 $Z_{\odot}$; \citealt{Zaritsky1994}), and NGC 3310 (cyan, $\sim$ 0.30 $Z_{\odot}$; \citealt{Eng2008}). The SFR-normalized SED of VV~114 is notably elevated relative to the roughly solar metallicity star-forming galaxies (NGC 253, NGC 3256, and M83), in line with theoretical predictions.}\label{fig:sed_comp} \end{figure*} In this section, we compare the 0.3--30 keV SED of VV~114 with the X-ray SEDs of a small sample of star-forming galaxies at different metallicities already in the literature, in order to discuss the effects of metallicity and SFH on the X-ray SED. In Figure~\ref{fig:sed_comp} we show the SFR-normalized SED of VV~114 (green) relative to the SFR-normalized SEDs of NGC 253 (yellow, \citealt{Wik2014}), NGC 3256 (red, \citealt{Lehmer2015}), M83 (dark red, \citealt{Yukita2016}), and NGC 3310 (cyan, \citealt{Lehmer2015}). All the galaxies in Figure~\ref{fig:sed_comp} have a similar overall SED shape, with spectral turnovers at energies of $\approx$ 3--8 keV, indicating that all these galaxies have substantial ULX populations. While the basic SED shape indicates the same class of source provides the bulk of the hard (2--30 keV) X-ray emission in all these galaxies, the normalizations for the sub-solar metallicity galaxy SEDs of VV~114 and NGC 3310 are noticeably elevated with respect to the normalizations of the solar to super-solar metallicity galaxies \citep[NGC 253, NGC 3256, and M83;][]{Zaritsky1994,MH1999,Trancho2007}.
In particular, the SED of VV~114 is elevated by a factor of $\sim$ 3--11 relative to the solar metallicity SEDs, a factor consistent with theoretical predictions for the enhancement in the bright XRB population at half-solar metallicity from \citet{Fragos2013}. Furthermore, the SED of VV~114 appears elevated in the soft band relative to the solar metallicity galaxies, which, as will be discussed in Section~\ref{sec:sed_metal}, is possibly the result of different ISM conditions at lower metallicity. The SED of NGC 3310 likewise appears elevated in the soft band relative to the solar metallicity galaxies, and furthermore displays an even higher normalization than VV~114 at energies $>$ 1 keV (enhancement by a factor of $\sim$ 8--25 relative to solar). This enhancement factor for NGC 3310 is more consistent with theoretical predictions for an XRB population at 0.1 $Z_{\odot}$, as the theoretical simulations suggest a nearly order-of-magnitude difference in the galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR between XRB populations at solar metallicity and at 0.1 $Z_{\odot}$ \citep{Linden2010,Fragos2013}. However, the reported gas phase metallicity for NGC 3310 is closer to 0.3 $Z_{\odot}$ \citep{dG2003,Eng2008}, which is only slightly lower than the half-solar value reported for VV~114. It is possible that the simulated SEDs based on theoretical predictions presented in Figure~\ref{fig:sed_comp}, although simplified, are actually constraining the metallicity of NGC 3310, indicating that the galaxy may have a gas phase metallicity closer to the low end ($\sim$ 0.2 $Z_{\odot}$) of the measured uncertainty range \citep{Eng2008}. Alternatively, statistical scatter due to XLF sampling can affect galaxy-integrated XRB $L_{\mbox{\scriptsize{X}}}$\ and therefore SED normalization \citep[e.g.,][]{Lehmer2019}. Given $M_{\star}$ $\approx$ 9 $\times$ 10$^{9}$ $M_{\odot}$ and SFR $\approx$ 6 $M_{\odot}$ yr$^{-1}$ for NGC 3310 \citep{Lehmer2015}, and $M_{\star}$ $\approx$ 4 $\times$ 10$^{10}$ $M_{\odot}$ and SFR $\approx$ 38 $M_{\odot}$ yr$^{-1}$ for VV~114 \citep{Basu-Zych2016}, we would expect the statistical scatter in XRB $L_{\mbox{\scriptsize{X}}}$\ for these galaxies to be on the order of 0.3 dex and 0.2 dex, respectively \citep{Lehmer2019}. The SED normalizations of VV~114 and NGC 3310 are elevated with respect to the SEDs of NGC 253, NGC 3256, and M83 by more than 0.5 dex in all cases, and the measured difference in normalization between NGC 3310 and VV~114 is $\sim$ 0.4 dex. Thus, statistical scatter cannot explain the difference in SED normalization between the sub-solar and solar metallicity galaxies, and it is not likely to be responsible for the difference in normalization between NGC 3310 and VV~114. Another possible factor affecting the normalization of the SED is the SFH of the galaxy. HMXBs and ULXs represent a snapshot in the evolution of massive stars in binaries, and thus appear at early times ($\lesssim$ 50 Myr) following a burst of star formation and evolve rapidly (on $\sim$ Myr timescales) away from their X-ray bright phase following core-collapse of the secondary donor star. Binary population synthesis models thus predict that the underlying SFH, or age of the stellar population, will affect the integrated $L_{\mbox{\scriptsize{X}}}$\ from such a population, in addition to the aforementioned metallicity effects. Such models predict a peak in the number of bright XRBs produced at $Z = 0.4 Z_{\odot}$ on timescales of 5--10 Myr post-starburst \citep{Linden2010}.
In these models, the lowest-metallicity populations ($Z < 0.1 Z_{\odot}$) produce vastly more HMXBs, and therefore higher $L_{\mbox{\scriptsize{X}}}$/SFR, than solar metallicity environments, but only on timescales $>$ 10 Myr post-starburst. A number of recent observational studies have corroborated this SFH-dependence for HMXB production using spatially and temporally resolved SFHs in the vicinities of HMXBs \citep[e.g.,][]{Antoniou2010,Antoniou2016,Lehmer2017,Antoniou2019,Garofali2018}. As we do not have a detailed SFH of VV~114 or measurements of individual cluster ages, a joint analysis of the effect of both stellar population age and metallicity on the SED is beyond the scope of this work. However, we can make some conjectures as to the differences between the host environment in NGC 3310 and VV~114 using measured star cluster ages in NGC 3310. These cluster ages, derived from SED-fitting of {\itshape HST\/}\ data, reveal a peak in the cluster age distribution at $\sim$ 30 Myr post-starburst \citep{dG2003SED,dG2003}, well beyond the most favorable timescale ($<$ 10 Myr post-starburst) for boosted HMXB or ULX production at $Z = 0.4 Z_{\odot}$ discussed above. At face value, this would seem to indicate that NGC 3310 does not have a more favorable SFH in terms of XRB production relative to VV~114, and that instead metallicity, perhaps as low as 0.1 $Z_{\odot}$, is the primary effect driving the enhanced SED normalization for NGC 3310. Of course, this analysis is highly simplified, as it assumes simple bursts of star formation when galaxies in fact have much more complex SFHs \citep[e.g.,][]{Eufrasio2017}. In fact, recent work exploring the $L_{\mbox{\scriptsize{X}}}$-SFR scaling relation for XRBs using sub-galactic regions in NGC 3310 identified stellar population age as the likely dominant factor in driving the excess of XRB emission relative to galaxy-wide scaling relations \citep{An2019}. This highlights the need for further studies exploring {\it both} the age and metallicity effects on XRB production, ideally for a larger sample of galaxies across different metallicities, in order to provide improved empirical constraints for the scaling of XRB $L_{\mbox{\scriptsize{X}}}$\ with these host galaxy properties. \subsection{The Effect of Metallicity on the X-ray SED}\label{sec:sed_metal} \begin{figure*} \centering \includegraphics[width=\textwidth,trim=0 0 0 0, clip]{sed_theory_threepanel.pdf} \caption{Simulated SFR-normalized SEDs for a star-forming galaxy at three different metallicities, from left to right: $Z_{\odot}$, 0.5 $Z_{\odot}$, and 0.1 $Z_{\odot}$. In each panel the solid black line shows the total, galaxy-integrated SED, while the dotted red and dashed blue lines show the hot gas and XRB components, respectively. The $Z_{\odot}$ model is constructed to approximate an average of the SED from star-forming galaxies presented in \citet{Wik2014}, \citet{Lehmer2015}, and \citet{Yukita2016}. The 0.5 $Z_{\odot}$ and 0.1 $Z_{\odot}$ models are constructed by scaling the normalization of the XRB component of the solar metallicity SED in the left-hand panel following the theoretical predictions of \citet{Fragos2013}.
The progression of panels from left to right illustrates the theoretical expectation for how the elevation of galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR due to an enhanced XRB population with decreasing metallicity is reflected in the galaxy-wide SED.}\label{fig:sed_theory} \end{figure*} As demonstrated in Section~\ref{sec:sed_comp} for the small sample of nearby, star-forming galaxies, metallicity appears to be a key property affecting the emergent X-ray SED of a galaxy. Likewise, studies of nearby, star-forming galaxies have demonstrated that galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR increases with decreasing metallicity, an effect which is corroborated by theoretical binary population synthesis work \citep[e.g.,][]{Mapelli2010,Linden2010,Fragos2013,Prestwich2013,Douna2015,Brorby2016,Basu-Zych2016,Wiktor2017,Wiktor2019}. This behavior can be attributed to the effects of metallicity on stellar and binary evolution and thus the characteristics of the resultant XRB population, namely the formation of more massive BHs at lower metallicities given weaker stellar winds \citep[e.g.,][]{Mapelli2010}, the formation of more high accretion rate Roche lobe overflow (RLOF) systems due to the more compact nature of the binaries at lower metallicities \citep[e.g.,][]{Linden2010}, and the wider parameter space leading to survivable common envelope phases and therefore production of RLOF systems at low metallicity \citep[e.g.,][]{Bel2010,Linden2010}. The net effect of metallicity on XRB production is therefore the appearance of not only {\it more} HMXBs with decreasing metallicity, but possibly also more {\it luminous} HMXBs, leading to the expectation of a population whose XLF has both a higher normalization and a flatter slope. While observational and theoretical studies alike suggest that there are more luminous XRBs per unit SFR at lower metallicity, the effect of this enhanced $L_{\mbox{\scriptsize{X}}}$/SFR on the emergent X-ray SED is not yet constrained empirically. To understand where our newly measured low-metallicity SED for VV~114 fits in with theoretical expectations for the metallicity dependence of XRB populations we must first build up a theoretically-motivated picture of the changes to the X-ray SED with metallicity. To simulate X-ray SEDs for star-forming galaxies at different metallicities we begin with a baseline X-ray SED informed by the SED studies of the nearby star-forming galaxies shown in Figure~\ref{fig:sed_comp} from \citet{Wik2014}, \citet{Lehmer2015}, and \citet{Yukita2016}. Our baseline SED is constructed from a {\tt Tbabs*(apec + vphabs*apec + vphabs*bknpow)} model in {\tt XSPEC}. This model choice is empirically motivated: the hot gas component in star-forming galaxies has been shown to be well-fit by two-temperature thermal plasma models (e.g., {\tt apec + vphabs*apec}) across a range of SFRs as described in Section~\ref{sec:point_src_decomp} \citep{Strickland2004,Ott2005I,Ott2005II,Grimes2005,Tull2006I,Tull2006II,Li2013,MineoGas,Smith2018}, and studies of Milky Way XRBs and extragalactic ULXs, the dominant sources of compact emission in star-forming galaxies, show spectra well-described by broken power-laws \citep{McClintock2006,Gladstone2009,Fragos2013other}.
In this model we set the column densities (both foreground and intrinsic), thermal plasma temperatures and normalizations, and broken power-law break energy, photon indices, and normalization to reproduce the average SED of M83, NGC 3256, NGC 253, and NGC 3310, which represent the best current empirical constraints on the form of the X-ray SED for star-forming galaxies \citep{Wik2014,Lehmer2015,Yukita2016}. In what follows, we use this toy model to address how the emergent SED evolves away from the solar metallicity benchmark described above due to changes in gas-phase metallicity; however, our approach is simplified, as we cannot address all the complexities of the effect of metallicity on both the hot gas and XRB emission given the relative lack of observational constraints on the X-ray emission from star-forming galaxies across a range of metallicities. We thus account for metallicity effects on this baseline spectrum in two ways: (1) through the abundances of the {\tt vphabs} components; and (2) through a change to the {\tt bknpow} normalization. Altering the {\tt vphabs} abundances for a chosen metallicity is straightforward: we use the \citet{Asplund2009} abundance tables in {\tt XSPEC} and set the abundances relative to solar. To account for the increase in galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR with decreasing metallicity due to XRBs, we scale the {\tt bknpow} normalization from the baseline model by a factor determined from the theoretical scalings of galaxy-integrated XRB $L_{\mbox{\scriptsize{X}}}$\ with metallicity from \citet{Fragos2013}. We choose to scale the XRB component normalization using theoretical scalings, as such scaling relations provide a physically-motivated estimate of XRB $L_{\mbox{\scriptsize{X}}}$\ as a function of metallicity that is broadly consistent with empirical constraints. We leave the {\tt APEC} model parameters fixed with changes in metallicity, as we do not yet have strong observational or theoretical constraints to show how the underlying hot gas component varies with metallicity and SFR; however, we do not expect the {\it shape} of the intrinsic hot gas component to vary strongly with the stellar mass or SFR of a star-forming galaxy \citep[e.g.,][]{Ott2005I,Grimes2005,Smith2005,MineoGas,An2016,Smith2018,An2019}. In the above described model, the {\tt APEC} abundances are thus fixed at solar metallicity for {\it all} simulated SEDs, regardless of assumed gas-phase metallicity. This is in contrast to our fits to the spectra of VV~114, where we set the {\tt APEC} abundances to the measured gas-phase metallicity of the galaxy. In the case of VV~114, we are able to constrain the characteristics of the hot gas (e.g., temperature and normalization) via fits to the observed spectra, given the sub-solar abundance. In the case of our simulated SEDs, the {\tt APEC} component temperatures and normalizations are set based on observed constraints from largely solar metallicity galaxies, thus the adopted values in the toy model are appropriate assuming solar metallicity abundances. Because the emission from hot gas in star-forming galaxies may be a complex function of metallicity, we choose not to change the {\tt APEC} abundances in the simulated SEDs in order to keep the shape of the {\it intrinsic} hot gas component fixed as a function of metallicity.
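Schematically, this two-step adjustment can be expressed in PyXspec as in the sketch below. The target metallicity, the element shown, and the enhancement factor are illustrative (the factor of $\sim$ 10 at 0.1 $Z_{\odot}$ corresponds to the \citet{Fragos2013} scaling quoted below), and the baseline parameter values are omitted.

\begin{verbatim}
# Schematic metallicity scaling of the baseline toy SED in PyXspec.
# Baseline parameter values are omitted; numbers here are illustrative.
from xspec import Model, Xset

Xset.abund = "aspl"       # Asplund et al. (2009) abundance tables

m = Model("tbabs*(apec + vphabs*apec + vphabs*bknpow)")

Z = 0.1                   # target gas-phase metallicity in solar units

# (1) set the metal abundances of the intrinsic absorbers to Z
#     (vphabs has one parameter per element; Fe shown as an example)
m.vphabs.Fe = Z
m.vphabs_5.Fe = Z         # second (repeated) vphabs component

# (2) scale the XRB (bknpow) normalization by the theoretical
#     L_X/SFR enhancement at this Z from Fragos et al. (2013)
enhancement = 10.0        # approximate factor at 0.1 Z_sun (see text)
m.bknpower.norm = m.bknpower.norm.values[0] * enhancement
\end{verbatim}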
We stress that these are simplifying assumptions, meant to produce toy models of the X-ray SED for star-forming galaxies at different metallicities for the purposes of comparison with observed SEDs, as described below. A much larger sample of star-forming galaxies across a range of metallicities would be required to produce a more universal model for the X-ray SED on the basis of host galaxy properties such as metallicity, SFR, and stellar mass. In Figure~\ref{fig:sed_theory}, we show our simulated X-ray SEDs for star-forming galaxies at $Z_{\odot}$, 0.5 $Z_{\odot}$, and 0.1 $Z_{\odot}$. By construction, the shape of neither the intrinsic XRB nor the intrinsic hot gas component changes in our simulated SEDs from $Z_{\odot}$ to 0.1 $Z_{\odot}$; however, the overall normalization of the SED changes with metallicity. In particular, the flux from the XRB component at 0.1 $Z_{\odot}$ is $\sim$ 10$\times$ higher than at $Z_{\odot}$ in the 0.5--8 keV band. The flux due to hot gas likewise increases from $Z_{\odot}$ to 0.1 $Z_{\odot}$, albeit by a more modest factor of $\sim$ 3 in the 0.5--8 keV band. The nearly order-of-magnitude change in the normalization of the XRB component from $Z_{\odot}$ to 0.1 $Z_{\odot}$ is due to the theoretical increase in XRB $L_{\mbox{\scriptsize{X}}}$\ per unit SFR with decreasing metallicity, while the increase in normalization for the hot gas component at 0.1 $Z_{\odot}$ relative to solar can be ascribed to decreased absorption, particularly of soft band photons, given the sub-solar metallicity assumed for the ISM. We show the simulated SED at 0.5 $Z_{\odot}$ as a labeled light grey line relative to the observed SED of VV~114 (green line) in Figure~\ref{fig:sed_unfolded}, and similarly display simulated SEDs at 1.5 $Z_{\odot}$, $Z_{\odot}$, 0.5 $Z_{\odot}$, and 0.1 $Z_{\odot}$ relative to other star-forming galaxies in Figure~\ref{fig:sed_comp}. The measured SED of VV~114 becomes ULX-dominated at energies $\gtrsim$ 1.5 keV and is entirely consistent with the simulated SED at 0.5 $Z_{\odot}$ in this energy range. In other words, with the newly measured 0.3--30 keV SED of VV~114 we confirm theoretical predictions for the effect of metallicity on the high energy SED of a star-forming galaxy, namely an elevated normalization relative to solar indicative of an enhanced XRB population at lower metallicity. By contrast, the soft band (0.5--2 keV) portion of the SED of VV~114 does not agree well with the soft band of the simulated SED at 0.5 $Z_{\odot}$. In particular, the emergent 0.5--2 keV flux of VV~114 is $\sim$ 20$\times$ higher than the flux of our simulated 0.5 $Z_{\odot}$ SED in this same band. This discrepancy is further highlighted when comparing the hot gas versus XRB population contributions to the total soft band emission for VV~114 relative to the simulated SED. From Table~\ref{tab:comp_lx}, we find that $L^{\rm XRB}_{\rm 0.5-2~keV}$ is $\sim$ 40\% of the {\it total} emergent soft band luminosity for VV~114, while it is $\sim$ 20\% of the total for the simulated SED at 0.5 $Z_{\odot}$. Similarly, the soft band portion of the SED of NGC 3310 (cyan line, Figure~\ref{fig:sed_comp}) appears elevated relative to the simulated soft band SED at 0.5 $Z_{\odot}$.
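The soft band fractions quoted above follow directly from the component luminosities in Table~\ref{tab:comp_lx}; for example:

\begin{verbatim}
# XRB fraction of the emergent 0.5-2 keV luminosity of VV 114,
# using the XMM-Newton + NuSTAR log luminosities from Table 'comp_lx'
log_gas, log_xrb = 41.09, 40.87
frac = 10**log_xrb / (10**log_xrb + 10**log_gas)
print(round(frac, 2))     # 0.38, i.e. ~40% of the total soft band
\end{verbatim}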
Recently, \citet{An2019} measured that the hot diffuse component constitutes $\sim$ 57\% of the soft band emission in NGC 3310, implying that the XRB component in this galaxy likewise provides a larger share of the total soft band emission relative to expectations from the simulated SED at similar metallicity. These results imply that the disagreement between the measured and simulated soft band SEDs at low metallicity may stem from incorrect assumptions about ISM properties and, notably, the level of intrinsic absorption, which is most important at energies $\lesssim$ 1.5 keV and likely varies from galaxy to galaxy. In the low-metallicity simulated SEDs, the hot gas component is modeled with the simplified assumption that it approximates the hot ISM of a solar metallicity galaxy, where metallicity is only accounted for in the abundances set in the {\tt vphabs}, or intrinsic absorption, component modifying the hot gas model. As noted above, this assumption is made in order to keep the intrinsic hot gas shape constant as a function of metallicity, and because there is a lack of observational constraints in the literature on how detailed ISM abundance patterns affect the hot gas emission. Additionally, the broken power-law normalization representing the XRB contribution to the simulated SED is scaled by the theoretical predictions for the change in XRB $L_{\mbox{\scriptsize{X}}}$\ with metallicity from \citet{Fragos2013}; however, these theoretical scalings for XRB $L_{\mbox{\scriptsize{X}}}$\ with metallicity are predicated on X-ray SEDs modeled after Milky Way XRBs, and therefore assume Milky Way-like intrinsic absorption modifying the XRB flux. It is quite possible that the ISM properties of a solar metallicity galaxy are different from those of a lower metallicity galaxy such as VV~114 or NGC 3310 at the same SFR. Comparing the measured SED of VV~114 to our toy model for the X-ray SED at 0.5 $Z_{\odot}$ suggests that the assumption of intrinsic absorption measured primarily from solar metallicity galaxies (e.g., M83, NGC 3256, and NGC 253) may be inappropriate for lower metallicity galaxies. This is possibly because more metal-poor galaxies have lower intrinsic column densities, an effect which is manifested most strongly in the soft band portion of the SED. As we do not yet have strong observational constraints on how the ISM properties change as a function of host galaxy properties (e.g., metallicity and SFR), we leave it to future work to provide a more rigorous investigation of the origins of the differences in the soft band SED in star-forming galaxies across different metallicities. \subsection{Comparison of Galaxy-Integrated Properties with Empirical and Theoretical Scaling Relations}\label{sec:compare_prev} As discussed in Section~\ref{sec:sed_comp}, the 0.3--30 keV SED of VV~114 displays a clearly elevated normalization relative to the X-ray SEDs of solar metallicity star-forming galaxies. In this section, we focus on comparing our results for the galaxy-integrated $L_{\mbox{\scriptsize{X}}}$/SFR of VV~114 with results from previous works, as well as expected theoretical and empirical scalings of XRB $L_{\mbox{\scriptsize{X}}}$/SFR for star-forming galaxies as a function of metallicity.
As reported in Table~\ref{tab:gal_wide_fits}, we measured a galaxy-integrated total luminosity of log $L^{\rm gal}_{\rm 2-10~keV}$ = 41.27 $\pm$ 0.04 and log $L^{\rm gal}_{\rm 2-10~keV}$ = 41.41 $\pm$ 0.04 from the {\tt bknpow$_{\rm ULX}$} model fit to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra and the {\tt pow$_{\rm XRB}$} model fit to the {\itshape Chandra\/}\ spectrum, respectively. The only other comparable spectral analysis in the literature for VV~114 was performed by \citet{Grimes2006}, who used the same {\itshape Chandra\/}\ data as presented in this work, in addition to shallow, archival {\it XMM-Newton\/}\ data (not used here). Their analysis found log $L^{\rm gal}_{\rm 2-10~keV}$ = 41.38 for the galaxy-wide emission from fits to the {\itshape Chandra\/}\ spectra of the eastern and western components of the galaxy, consistent within the uncertainties with our {\itshape Chandra\/}-derived total X-ray luminosity (log $L^{\rm gal}_{\rm 2-10~keV}$ = 41.41 $\pm$ 0.04). They do not report a luminosity from their best-fit model to the archival {\it XMM-Newton\/}\ data, so we input the best-fit parameters from their model (Table~5 in \citealt{Grimes2006}) into {\tt XSPEC} and use the {\tt flux} command to derive log~$L^{\rm gal}_{\rm 2-10~keV}$ = 41.23 (corrected for foreground Galactic absorption), consistent with our log~$L^{\rm gal}_{\rm 2-10~keV}$~=~41.27~$\pm$~0.04 for the newly obtained {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations. We note that this agreement is significant, as the luminosity derived from the \citet{Grimes2006} {\it XMM-Newton\/}\ model is based on archival {\it XMM-Newton\/}\ data taken at a different epoch than the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations presented here. This agreement therefore suggests that variability, in a galaxy-integrated sense, is not significant between the epochs at which the archival and new {\it XMM-Newton\/}\ observations were obtained. We next turn to the comparison of XRB $L_{\mbox{\scriptsize{X}}}$/SFR for VV~114 with recent theoretical and empirical constraints on the scaling of these quantities with metallicity. To get a ``clean'' estimate of XRB $L_{\mbox{\scriptsize{X}}}$, a thorough accounting of the different contributions, including resolved point sources (ULXs), unresolved XRBs, the hot ISM contribution, and any possible AGN contamination, is important. In this work, we measure XRB $L_{\mbox{\scriptsize{X}}}$\ from the broken power-law component of the {\tt bknpow$_{\rm ULX}$} model fit to the {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ spectra. We convert luminosities reported in different energy bands from other works to the 0.5--8 keV band using conversion factors determined from WebPIMMS under the assumption of a simple power-law spectrum with $\Gamma$ = 1.7, appropriate for XRBs. \begin{figure} \centering \includegraphics[width=0.5\textwidth,trim=0 0 0 0, clip]{vv114_scaling_comparison-eps-converted-to.pdf} \caption{Galaxy-integrated XRB $L_{\mbox{\scriptsize{X}}}$\ (0.5--8 keV) per unit SFR as a function of gas-phase metallicity for a selection of empirical and theoretical studies. We show the results for VV~114 from this work as the black, labeled circle, as well as from a sample of LBAs \citep[salmon squares;][]{BZ13LBG}, $z = 0.1-0.9$ galaxy stacks \citep[yellow triangles;][]{Fornasini2020}, and $z = 2$ galaxy stacks \citep[maroon triangles;][]{Fornasini2019}. The salmon square outlined in black is the value for VV~114 from \citet{BZ13LBG}.
We likewise show the canonical $L_{\mbox{\scriptsize{X}}}$/SFR values derived from XLF-fitting of resolved XRB populations in star-forming galaxies from \citet{Mineo2012} and \citet{Lehmer2019} as the hashed purple and green regions, respectively, where the extent of the regions denotes the mean metallicity and standard deviation of the galaxies in each sample. Finally, we show the empirically derived scaling for the $L_{\mbox{\scriptsize{X}}}$-SFR-$Z$ plane from \citet{Brorby2016} as the solid blue line, with the dispersion in the relation shown in light blue, and the theoretical scaling of galaxy-integrated XRB $L_{\mbox{\scriptsize{X}}}$/SFR with metallicity from \citet{Fragos2013} as the dashed grey line. The observed galaxy-integrated XRB $L_{\mbox{\scriptsize{X}}}$/SFR value for VV~114 is consistent with the empirical and theoretical scalings of $L_{\mbox{\scriptsize{X}}}$/SFR which account for the effects of metallicity on XRB populations.}\label{fig:scale_comp} \end{figure} In Figure~\ref{fig:scale_comp} we show the $L^{\rm XRB}_{\rm 0.5-8~keV}$/SFR of VV~114 (black circle), assuming a SFR of 38 $M_{\odot}$ yr$^{-1}$, relative to the best-fit $L_{\mbox{\scriptsize{X}}}$/SFR in the same band based on fits to the XLF of star-forming galaxies from \citet{Lehmer2019} (hashed green region) and \citet{Mineo2012} (hashed purple region). The horizontal extents of each region show the mean metallicity of all galaxies in each sample and the standard deviations, which for the \citet{Lehmer2019} sample come from their Table~1 and for the \citet{Mineo2012} sample are taken from the calculations of \citet{Fornasini2019}. The vertical extents of the hashed regions represent the 1$\sigma$ uncertainties on both scalings. Both the \citet{Mineo2012} and \citet{Lehmer2019} scalings should be taken as appropriate for roughly solar metallicity environments. We find the measured $L^{\rm XRB}_{\rm 0.5-8~keV}$/SFR of VV~114 from the {\tt bknpow$_{\rm ULX}$} model fit to the {\it XMM-Newton\/}\ + {\it NuSTAR\/}\ spectra is a 1$\sigma$ outlier from the \citet{Mineo2012} relation, but consistent with the \citet{Lehmer2019} relation within the 1$\sigma$ uncertainties on their XLF-derived scaling. We next compare our derived $L_{\mbox{\scriptsize{X}}}$/SFR for VV~114 to theoretical and empirical scalings for the dependence of $L_{\mbox{\scriptsize{X}}}$/SFR on metallicity from \citet{Fragos2013} and \citet{Brorby2016}, respectively. In Figure~\ref{fig:scale_comp} we show the theoretical \citet{Fragos2013} evolution of $L_{\mbox{\scriptsize{X}}}$/SFR for XRBs with metallicity from binary population synthesis models as a dashed grey line (the absorbed SED model from \citealt{Fragos2013}), and the empirical findings of \citet{Brorby2016} based on analysis of the X-ray emission from a sample of Lyman break galaxies as a solid blue line, with the dispersion in their relation shown in light blue. Our measured value for VV~114 is consistent with the \citet{Brorby2016} empirical relation, though this is expected as their $L_{\mbox{\scriptsize{X}}}$-SFR-$Z$ scaling is derived from a sample of highly star-forming galaxies similar to VV~114. Likewise, VV~114 is consistent with the theoretical predictions from \citet{Fragos2013}, despite the different assumptions for the underlying SED in the \citet{Fragos2013} models, which come from analyzing the spectra of Galactic BH and NS XRBs in different accretion states \citep{Fragos2013other}.
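The band conversion described above can also be reproduced analytically for an unabsorbed power law; a short sketch (pure arithmetic, independent of WebPIMMS) is:

\begin{verbatim}
# Energy-band conversion factor for an unabsorbed power law with
# photon index gamma != 2: the energy flux in (e1, e2) scales as
# (e2**(2-gamma) - e1**(2-gamma)) / (2-gamma).
def band_flux(e1, e2, gamma=1.7):
    a = 2.0 - gamma
    return (e2**a - e1**a) / a

# factor converting a 2-10 keV flux to the 0.5-8 keV band
factor = band_flux(0.5, 8.0) / band_flux(2.0, 10.0)
print(round(factor, 2))   # ~1.38 for gamma = 1.7
\end{verbatim}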
We also show three observational samples for comparison in Figure~\ref{fig:scale_comp}, namely a selection of LBAs from \citet{BZ13LBG}, the high-sSFR sample of $z = 2$ galaxy stacks binned by metallicity from \citet{Fornasini2019}, and the sample of $z = 0.1-0.9$ galaxy stacks binned by metallicity from \citet{Fornasini2020}, all of which are selected to be HMXB-dominated samples. VV~114 is included in the original \citet{BZ13LBG} sample, so we plot VV~114's XRB $L_{\mbox{\scriptsize{X}}}$/SFR from \citet{BZ13LBG} as the salmon square with black outline. The value for VV~114 from \citet{BZ13LBG} is based on the galaxy-wide 2--10 keV luminosity from the \citet{Grimes2006} analysis. To make an appropriate comparison, we therefore correct the \citet{BZ13LBG} $L_{\mbox{\scriptsize{X}}}$\ value for VV~114 to the 0.5--8 keV band, and subtract the contribution from the hot gas component based on the galaxy-wide spectral model presented in \citet{Grimes2006}. With this correction, we find that the values from this work, \citet{Grimes2006}, and \citet{BZ13LBG} are in good agreement. Notably, all observed samples (VV~114, LBAs, $z = 2$ stacks, and $z = 0.1-0.9$ stacks) are elevated with respect to the scalings derived from XLF fitting for nearby star-forming galaxies \citep{Mineo2012,Lehmer2019}, but are consistent with the theoretical and empirical scalings which account for the metallicity dependence of XRB populations. These results underscore the necessity of accounting for metallicity effects in studies of XRBs in star-forming environments, including future empirical constraints on the dependence of the XRB XLF on metallicity. \subsection{Soft Band SED and Relevance to IGM Thermal History}\label{sec:gas_igm} In Section~\ref{sec:sed_comp}, we showed that the normalization of the hard band (2--30 keV) SED of VV~114 agrees with theoretical predictions for enhanced XRB $L_{\mbox{\scriptsize{X}}}$/SFR at lower metallicity. There are no such theoretical predictions currently in the literature for how the shape or normalization of the soft band (0.5--2 keV) SED scales with metallicity; however, there are past observational works that have investigated the scaling of the hot gas $L_{\mbox{\scriptsize{X}}}$\ (as measured in the 0.5--2 keV band) in star-forming galaxies with SFR \citep[e.g.,][]{Strickland2004,Ott2005I,Ott2005II,Grimes2005,MineoGas,Smith2018}. In particular, the \citet{MineoGas} study used {\itshape Chandra\/}\ data for a sample of 21 star-forming galaxies, covering a range of SFRs from 0.1--20 $M_{\odot}$~yr$^{-1}$, to parametrize the linear scaling between $L^{\rm gas}_{\rm X}$ and SFR. Below, we discuss our results for the hot gas $L_{\mbox{\scriptsize{X}}}$\ of VV~114 derived from our best-fit SED relative to the \citet{MineoGas} empirical scaling, and connect this discussion to the importance of the soft band SED to the thermal history of the IGM. From the {\tt APEC} components of the {\tt bknpow$_{\rm ULX}$} model fit to the {\it NuSTAR\/}\ + {\it XMM-Newton\/}\ spectra (Table~\ref{tab:gal_wide_fits}), we find $L^{\rm gas}_{\rm X}$(0.5--2 keV) = 1.23 $\times$ 10$^{41}$ erg~s$^{-1}$ for VV~114. By contrast, the linear $L^{\rm gas}_{\rm X}$--SFR scaling derived from the \citet{MineoGas} sample predicts $L^{\rm gas}_{\rm X}$(0.5--2 keV) $\sim$ 3 $\times$ 10$^{40}$ erg~s$^{-1}$ given the SFR for VV~114, a difference of $\sim$ 0.6 dex from our measured hot gas $L_{\mbox{\scriptsize{X}}}$.
It is possible that the lower metallicity of VV~114 is the origin of this discrepancy, as the \citet{MineoGas} study does not explicitly account for metallicity in deriving the hot gas $L_{\mbox{\scriptsize{X}}}$--SFR scaling and quotes a dispersion of only $\sigma$ = 0.34 dex in the relation. Several previous X-ray studies with {\itshape Chandra\/}\ have also investigated the scaling of $L^{\rm gas}_{\rm X}$ with SFR across a range of star-forming galaxies, from ULIRGs to dwarf starbursts \citep{Grimes2005,Ott2005I,Ott2005II}. Notably, \citet{Ott2005I,Ott2005II} studied a sample of eight dwarf starbursts using {\itshape Chandra\/}, of which the majority were low-metallicity ($Z < 0.3 Z_{\odot}$), finding that the diffuse gas $L_{\mbox{\scriptsize{X}}}$\ was linearly correlated with ``current'' (i.e., H$\alpha$-based) SFR. While \citet{Ott2005I, Ott2005II} {\it did} consider the gas-phase metallicity of the dwarf starbursts in their sample in the context of their study, they did not find any explicit trend of the diffuse gas $L_{\mbox{\scriptsize{X}}}$\ per unit SFR as a function of metallicity. Two of their lowest metallicity galaxies did not have any detected diffuse component, though they note this may be because such emission was below their detection thresholds, while four of their low-metallicity ($Z <$ 0.3 $Z_{\odot}$) dwarf starbursts had substantial diffuse emission detected. Interestingly, they found that the diffuse emission in these dwarf starburst galaxies constituted 60--80\% of the 0.3--8 keV photons from the galaxy \citep{Ott2005I}, a much higher percentage contribution from hot gas $L_{\mbox{\scriptsize{X}}}$\ than predicted from our simulated low-metallicity SEDs presented in Section~\ref{sec:sed_metal}, but in line with the measured percentage contribution from diffuse emission to the soft band for the low-metallicity galaxies VV~114 and NGC 3310 ($\sim$ 60\%, this work and \citealt{An2019}, respectively). The scaling of $L^{\rm gas}_{\rm X}$(0.5--2 keV) with SFR and metallicity has implications for the importance of X-ray photons from star-forming galaxies at high redshift, and their effect on the thermal history of the IGM. In particular, the soft band (0.5--2 keV) portion of the X-ray SED is most important for the epoch of heating, prior to the epoch of reionization \citep[e.g.,][]{Pacucci2014,Das2017}. The mean free paths of photons at $z \sim$10--20 approach the Hubble length at energies $\gtrsim$ 2 keV, thus photons with energies $\gtrsim$ 2 keV will effectively ``free stream'' through the IGM during this epoch \citep[e.g.,][]{McQuinn2012}. Theoretical studies have shown that the cosmic 21-cm signal, which should be measurable with second-generation interferometers such as HERA and SKA, will therefore be sensitive to the shape of the soft band SED of the first galaxies and its scaling with SFR \citep[e.g.,][]{Mesinger2014,Greig2017}. Likewise, theoretical work indicates that the {\it timing} of IGM heating is affected by the shape of the X-ray SED, with ``early'' heating which precedes reionization predicted for softer SEDs, and ``late'' heating which occurs during reionization predicted for harder SEDs \citep[e.g.,][]{Fialkov2014}. Critically, these theoretical studies often assume that the X-ray photons which heat the IGM are produced solely by XRBs in the first star-forming galaxies, without accounting for the hot gas emission from such galaxies \citep[e.g.,][]{Fialkov2014,Madau2017}.
As we have shown here, the hot gas emission can be substantial in the soft band, especially for low-metallicity galaxies. Thus, work such as this, which constrains both the form of the X-ray SED and its component parts, is critical to constraining {\it when} significant IGM heating may occur and predicting the 21-cm fluctuations measurable by next-generation interferometers. The best current empirical constraints on the soft band X-ray SED from star-forming galaxies do not account for the host galaxy metallicity and its effect on the hot gas versus XRB contribution to the emergent soft band flux \citep[e.g.,][]{Grimes2005,MineoGas}. Theoretical studies based on these empirical results show that there may be a factor of three difference in the 21-cm power on large scales between assuming hot gas versus XRBs dominate the soft band emission (soft versus hard spectrum, respectively) \citep[e.g.,][]{Pacucci2014}. Since we expect the pristine, low-metallicity galaxies in the early Universe to have different ISM properties than typical local galaxies, nearby low-metallicity galaxies such as VV~114 serve as better analogs for the first galaxies when it comes to constraining the form of the X-ray SED as it applies to the epoch of heating and the cosmic 21-cm signal. Modeling of the expected 21-cm signal shows that tuning model predictions to constraints based on local star-forming galaxies (ostensibly at solar metallicity) can lead to estimates of 5$\times$ fewer soft photons escaping the galaxy compared to a ``metal-free'' ISM, which is more transparent \citep[e.g.,][]{Das2017}. This, in turn, affects the thermal history of the IGM: if the ISM conditions in the early Universe are similar to those of star-forming galaxies today, fewer soft photons escape galaxies at high redshift, leading to inefficient IGM heating that occurs closer to reionization. We present evidence that VV~114 has an elevated $L_{\mbox{\scriptsize{X}}}$\ per unit SFR in the soft band relative to other highly star-forming galaxies at solar metallicity \citep[e.g.,][]{MineoGas,Wik2014,Lehmer2015,Yukita2016}. This may imply that more soft photons escape low-metallicity galaxies at high redshift, ultimately leading to larger fluctuations in the IGM temperature, and therefore a higher amplitude of the large-scale 21-cm power spectrum \citep{Pacucci2014}. As noted above, the soft band portion of the SED is in general not well calibrated down to low metallicities, but VV~114 offers tantalizing evidence that X-rays from star-forming galaxies may play a critical role in heating the IGM. \section{Summary \& Future Work}\label{sec:conclude} Here we have measured, for the first time, the 0.3--30 keV SED of the low-metallicity, star-forming galaxy VV~114. Through detailed spectral fitting of archival {\itshape Chandra\/}\ as well as the newly obtained, near-simultaneous {\it XMM-Newton\/}\ and {\it NuSTAR\/}\ observations, we showed that the SED of VV~114 has (1) an elevated normalization relative to the X-ray SEDs of solar metallicity galaxies; and (2) a characteristic break at high energies ($\sim$ 4 keV). These SED characteristics are indicative of an enhanced ULX population, which dominates the global X-ray emission from VV~114. Our findings for VV~114 are consistent with theoretical expectations, which predict a factor of at least two enhancement in the galaxy-integrated $L_{\mbox{\scriptsize{X}}}$\ from XRBs at 0.5 $Z_{\odot}$ relative to XRB production at solar metallicity.
We further show that the X-ray SED has a similar {\it shape} for star-forming galaxies of different metallicities, namely that the SED is ULX-dominated at high energies with a substantial hot gas contribution in the soft band. We also present evidence, for the first time, that VV~114 has an elevated soft band (0.5--2 keV) luminosity relative to predictions for the scaling of diffuse gas emission with SFR from previous empirical studies. This elevated soft band $L_{\mbox{\scriptsize{X}}}$\ for VV~114 is possibly due to the more pristine ISM conditions in the galaxy given its lower metallicity. This work underlines the importance of broadening the sample of low-metallicity galaxies across a range of SFRs for which there are measured X-ray SEDs, with constraints on the contributions from both hot diffuse gas and compact sources of emission such as XRBs, in order to offer the best possible empirical framework for interpreting future high-redshift measurements and informing binary population synthesis work. \section*{Acknowledgements} We thank the referee for very helpful comments, which greatly improved the quality of the manuscript. K.G. and B.D.L. gratefully acknowledge financial support from NASA grant 80NSSC18K1605. {\it Facilities:} {\it XMM-Newton} (EPIC-MOS and EPIC-pn), {\it NuSTAR} (FPMA and FPMB), {\it Chandra} (ACIS-S) \software{Astropy \citep{astropy2013,astropy2018}, Matplotlib \citep{matplotlib}, XSPEC \citep{Arnaud1996}, MARX \citep{marx2017}, SAOImage DS9 \citep{ds9}, CIAO \citep{ciao}, HEASoft \citep{heasoft2014}} \newpage \bibliographystyle{aasjournal}
\section{1. Background} In China, with the explosive growth of e-commerce, online-sell-platforms have evolved, qualitatively and quantitatively, into a highly competitive market. In order to survive in such a dynamic and fiercely competitive environment, every online-sell-platform has to develop new techniques to attract more consumers on the buyer side and increase retailers' revenue on the seller side. This work solves a concrete problem: how to help retailers (shops) lift their revenue by employing digital marketing campaigns (DMCs). As illustrated in Figure \ref{iphone}, each online shop (leftmost plot) on the online-sell-platform displays its DMCs in red, and in the rightmost plot each shop can complete the DMC settings either manually or automatically by using the DMC recommendation indicated in red. This DMC recommendation is what this paper is concerned with. Although Figure \ref{iphone} shows prototype plots, the real application had already been implemented by the authors on a leading online-sell-platform in China before this paper. \section{2. Introduction} In traditional marketing, campaigns are often used by retailers to lift revenue. In recent years, with the rapid development of e-commerce, DMCs have become the trend in online marketing. In particular, with flexible combinations of DMCs, more revenue can be achieved than with stand-alone DMCs. For example, the campaign threshold-discount pairs $<threshold,discount>$, say $<60, 5>$ and $<70, 8>$, mean that in an online shop a consumer can have two campaigns at the same time. If the consumer buys a basket of food worth $\$60$ then the campaign $<60, 5>$ is triggered, and the consumer obtains a $\$5$ discount. If the consumer buys a basket of food worth $\$70$ then both campaigns $<60, 5>$ and $<70, 8>$ are triggered, and the larger discount applies, so the consumer obtains an $\$8$ discount. Applying such DMCs helps convert low-value consumers into high-value consumers, which ultimately boosts revenue. In traditional retailing, when big data infrastructure was not available, sellers relied heavily on their selling experience. In the new retailing era, however, with the help of big data, the online-sell-platform can help retailers increase their return by automatically generating combinations of multiple DMCs. This paper presents a comprehensive solution for generating combined multiple DMCs. Here it is necessary to introduce the rules of DMCs. A DMC can be understood as a special kind of digital coupon: it is issued by an individual online shop, hence each shop on the online-sell-platform has its own DMCs oriented towards distinct marketing objectives. Each DMC has a trigger threshold and a discount amount, i.e. a DMC threshold-discount pair. For example, suppose an online shop issues a DMC with a threshold-discount pair of $<90, 10>$; if a consumer's shopping cart is filled with products worth more than $\$90$ then the $<90, 10>$ DMC is triggered. The consumer then obtains a $\$10$ discount, i.e. the consumer saves $\$10$ and only needs to pay $\$80$. \begin{figure*}[t] \centering \includegraphics[width=0.6\textwidth]{background3.png} \caption{The leftmost plot illustrates the online-sell-platform with three independent online-shops, each listing its current DMCs in red. The middle plot gives the consumer-side interface with red fonts indicating the current DMCs active in the shop.
The rightmost plot shows the interface of the retailer side for setting DMCs, with the red fonts giving the multiple-DMC recommendation. } \label{iphone} \end{figure*} In fact, ensuring the effectiveness of recommended digital marketing campaigns is a rather complicated problem. From a macro perspective, price elasticity describes the relationship between discount strength and sales volume; it is a multi-factor quantity, and it is hard to collect the data needed to compute it. The problem is therefore difficult to solve with the price-elasticity methods commonly used in the marketing field. This paper proposes a comprehensive solution to this problem based on DMCNet and the Randomized USM (unconstrained submodular maximization) algorithm\ {\cite{buchbinder2015sb1}}. DMCNet is a neural network model proposed to calculate the triggering probability of a DMC for each consumer. More importantly, the revenue of each DMC for an online shop can be calculated with DMCNet; more details are given in Section 3. The Randomized USM algorithm solves non-monotonic submodular maximization problems such as the optimal combination of DMCs. Together with DMCNet, an (approximately) revenue-maximizing combination of DMCs can be obtained. The remainder of this paper is organized as follows. Section $3$ introduces DMCNet. Section $4$ describes why the optimal combination of DMCs is a non-monotonic submodular maximization problem, and the method used to obtain the optimal revenue. The whole recommendation solution and its pseudo code are presented in Section $5$. Experiments are given in Section 6. The last section concludes the paper. \begin{figure*}[t] \centering \includegraphics[width=.7\textwidth]{DMCNetLSTM.pdf} \caption{Topological structure of DMCNet.} \label{figDMCNet} \end{figure*} \subsection{Review of Related Techniques} The main techniques invoked in this work are recommender systems (RS) and deep learning (DL). A recommender system is a tool that retrieves information to help users make decisions efficiently. With the fast development of e-commerce in particular, RS plays a pivotal and indispensable role, not only in continuously improving the consumer experience but also in lifting retailers' revenue. Essentially, a typical RS leverages user profiles, item features, user-item interaction information and other inputs, such as temporal and spatial data, to predict the user's next choice. Typical RS can be categorized into three classes, i.e. collaborative filtering, content-based RS and hybrid RS, see \cite{rscategory}. Deep learning has received more and more attention over recent decades, following tremendous advances in theory \cite{lecun1989cnn0,lecun1998cnn1,hinton2009DBN,cho2014gru} and computing infrastructure \cite{dean2008mapreduce,shvachko2010hadoop,zaharia2011spark,jouppi2017TPU}. Numerous applications have been developed in industry, achieving state-of-the-art results, e.g. in the fields of natural language processing \cite{vaswani2017attention, devlin2018bert,yang2019xlnet} and computer vision \cite{simonyan2014vgg,he2016resnet,redmon2016you}. Hybrid models incorporating deep learning into recommender systems have become the tendency in industrial recommendation systems \cite{zhang2019rssurvey}, e.g. the deep collaborative model \cite{li2015deepcf}, the wide and deep model \cite{cheng2016wide}, the deepfm model \cite{guodeepfm}, and deep interest models \cite{zhou2018din,zhou2019dien}. It is worth mentioning that the comprehensive solution in this work borrows ideas from both deep-learning-based recommender systems and combinatorial optimization.
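Before proceeding, the DMC triggering rules introduced in this section can be made concrete with a minimal sketch (plain Python; the function and variable names are ours, and we assume discounts do not stack, so the largest triggered discount applies, consistent with the $<60,5>$/$<70,8>$ example above):
\begin{verbatim}
def applied_discount(cart_value, campaigns):
    # campaigns: list of <threshold, discount> pairs active in the shop.
    # A campaign is triggered when the cart value reaches its threshold;
    # the largest triggered discount is applied (assumed: no stacking).
    triggered = [d for t, d in campaigns if cart_value >= t]
    return max(triggered, default=0)

campaigns = [(60, 5), (70, 8)]
assert applied_discount(59, campaigns) == 0
assert applied_discount(60, campaigns) == 5  # <60, 5> triggered
assert applied_discount(70, campaigns) == 8  # both triggered, $8 applies
\end{verbatim}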
\section{3. Digital-Marketing-Campaign Net} In this paper the DMCNet (Digital-Marketing-Campaign Net) is employed to score the probability that a consumer triggers a single DMC. The score is calibrated to the probability, hence the higher the score, the more probable it is that a consumer browsing the shop would trigger the DMC. \subsection{Features Employed in DMCNet} The DMCNet is a deep neural net model and its topological net structure is shown in Figure \ref{figDMCNet}. The input layer contains four kinds of features, i.e. dense features, sparse features, the target DMC threshold-discount pair and the not-target DMC threshold-discount pairs. The dense part contains nine features: order date, shop id, city id, customer id, GMV in the recent 30 days, GMV in the recent 60 days, GMV in the recent 90 days, target DMC threshold and target DMC discount. The sparse features employed in DMCNet include shop category, consumer age and consumer gender, all of which are one-hot encoded. Features specifying campaign threshold and campaign discount are divided into a target part and a not-target part. The target part, colored in orange in Figure \ref{figDMCNet}, contains the target campaign threshold and target campaign discount, while the not-target part employs many not-target campaign threshold-discount pairs, colored in purple in Figure \ref{figDMCNet}. The not-target campaign threshold-discount pairs describe the campaigns simultaneously presented to the consumer along with the target campaign threshold-discount pair. \subsection{DMCNet Structure } For the dense features, a three-hidden-layer fully connected forward network is employed to learn high-order features, and the learned vector is concatenated with the sparse features. The target campaign threshold-discount pair is a scalar tuple which defines the DMC triggering threshold and the discount that the triggered customer can obtain. Among the observed samples, the ranges of the threshold and discount vary widely; therefore both features are monotonically encoded as vectors of length 500, which we term the isotonic encoding. For example, consider the threshold-discount pair $<10, 1>$, meaning that in this campaign if a consumer buys $\$10$ of food then the consumer will obtain a $\$1$ discount. The isotonic encoding of this campaign threshold-discount pair is that $\$10$ is encoded as $(1,1,1,1,1,1,1,1,1,1,0,0,0,...,0)^{\top}_{500\times 1}$ and $\$1$ as $(1,0,0,...,0)^{\top}_{500\times 1}$. After isotonic encoding of the target campaign threshold-discount pair, the 500-dimensional threshold and discount vectors are each element-wise multiplied with the above-mentioned concatenated output of the dense and sparse features. The remaining not-target campaign threshold-discount pairs are also handled with isotonic encoding; the encoded vectors are element-wise multiplied separately to form many output vectors of 500 dimensions. \subsection{DMC Embedding} As shown in Figure \ref{figDMCNet}, many not-target campaign threshold-discount pairs, colored in purple, are employed in DMCNet, and their number varies with context. That is, for different consumers the total number of DMC threshold-discount pairs (target DMC plus not-target DMCs) differs: some consumers may be exposed to more campaigns than others. Therefore, the campaign threshold-discount pairs are handled as sequential features; some sequences may be long while others are short.
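Before turning to the sequence handling, a minimal sketch of the isotonic encoding just described (NumPy; the function and variable names are ours):
\begin{verbatim}
import numpy as np

def isotonic_encode(value, length=500):
    # Monotonic (isotonic) encoding: the first `value` entries are 1,
    # the remaining entries are 0.
    vec = np.zeros(length)
    vec[:int(value)] = 1.0
    return vec

threshold_vec = isotonic_encode(10)  # threshold $10 -> ten leading ones
discount_vec  = isotonic_encode(1)   # discount   $1 -> one leading one

# In DMCNet each encoded vector gates the 500-dimensional output of the
# dense/sparse branch via an element-wise product.
features = np.random.rand(500)       # stand-in for that branch's output
gated_threshold = threshold_vec * features
gated_discount  = discount_vec * features
\end{verbatim}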
Popular methods to handle sequence data include the LSTM, the GRU and the Transformer encoder. Here an LSTM with Attention is used to generate an embedding of the input not-target campaign threshold-discount pairs. This means that a varying number of not-target campaign threshold-discount pairs can be compressed into an embedding of fixed length, the DMC-Embedding, by the LSTM-Attention module of DMCNet. The DMC-Embedding is then processed by a three-hidden-layer fully connected forward net. After concatenation and another three-hidden-layer fully connected forward net (\textit{c.f.} Figure \ref{figDMCNet}), a softmax score is returned that specifies the likelihood of the consumer triggering the target DMC. \section{4. Marketing Campaign Combination Optimization} \subsection{Marketing Campaign Combination} The marketing campaign response describes the relationship between the campaigns and the revenue of the retailer, represented by a set function $f$: the input of $f$ is the set of campaigns set up by the retailer, and the output of $f$ is the revenue. From a business standpoint, the submodularity of this function can be argued as follows: campaigns improve revenue by encouraging consumers to place orders; however, consumers' willingness-to-pay is limited, which means the marginal effect of additional campaigns decreases. According to this analysis, the function $f$ is submodular, i.e. it satisfies the equivalent Definitions 1 and 2. \begin{definition}{\cite{buchbinder2018deterministicsb3}} Given a ground set $\mathcal{N} = \{u_0, u_1,...,u_n\},\ \text{set function } f: 2^{\mathcal{N}} \xrightarrow{} R\ $ is called submodular if and only if $$f(A) + f(B) \ge f(A\cap B) + f(A\cup B)\ \forall A,B \subseteq \mathcal{N}$$ \end{definition} \begin{definition}{\cite{buchbinder2018deterministicsb3}} Given a ground set $\mathcal{N}$ = $\{u_0, u_1,...,u_n\}$, set function $f: 2^\mathcal{N} \xrightarrow{} R\ $ is called submodular if and only if $$f(A + u) - f(A) \ge f(B + u) - f(B) \ \forall A \subseteq B \subseteq \mathcal{N}, u \in \mathcal{N} \backslash B$$ \end{definition} More importantly, the campaign combination problem is not monotonically increasing. For example, take campaign \{a\}=$<39, 3>$ and campaign \{b\}=$<29, 1>$: when \{b\} and \{a\} are exposed to consumers at the same time, some consumers may choose campaign \{b\}, which lowers the average customer price, that is $Revenue(\{a,b\}) < Revenue(\{a \})$. The marketing campaign combination problem is described as follows: $\max f(A),\ s.t. \ |A| = k,\ A\subseteq \mathcal{N}$, $\mathcal{N}$ = $\{u_0, u_1,...,u_n\}$, where $k$ is the size of the combination. The campaign combination problem is therefore treated as an unconstrained, non-monotonic submodular maximization problem. Based on this discussion, this paper proposes the solution sketched in Figure \ref{processgraph} in the next section. \section{5. Recommendation Solution} \subsection{Initializing Candidate Threshold-Discount Pairs} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{processgraphv3.pdf} \caption{The process graph of the multiple-DMC recommending solution.} \label{processgraph} \end{figure} First, an outer iteration traverses thresholds from the minimum to the maximum, with an inner iteration over discounts from 0 to the current threshold. The resulting set $C$, the initial candidate set of threshold-discount pairs, contains all combinations of threshold and discount, \textit{c.f.} Algorithm 1; a compact Python counterpart of the initialization and of the rule-based filtering (Algorithm 3) is also sketched below.
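The following sketch (plain Python; the names are ours, and \texttt{revenue} stands for the DMCNet-based estimate computed in Algorithm 2) mirrors Algorithms 1 and 3 below:
\begin{verbatim}
def init_candidates(t_min, t_max, step=1):
    # Algorithm 1: all <threshold, discount> pairs with
    # 0 <= discount <= threshold.
    return [(t, d) for t in range(t_min, t_max + 1, step)
                   for d in range(0, t + 1, step)]

def filter_by_rules(candidates, revenue, min_gap=5):
    # Algorithm 3: scan pairs in decreasing revenue order and keep a
    # pair only if it is consistent with the pairs kept so far.
    # `revenue` maps a (threshold, discount) pair to its estimate.
    kept = []
    for t, d in sorted(candidates, key=revenue, reverse=True):
        consistent = all(abs(t - t2) >= min_gap       # thresholds >= $5 apart
                         and (d - d2) * (t - t2) > 0  # discount grows with threshold
                         for t2, d2 in kept)
        if consistent:
            kept.append((t, d))
    return kept
\end{verbatim}
Note that the minimum-gap rule also enforces "one threshold, one discount", since two kept pairs can never share the same threshold.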
\begin{algorithm}[h] \caption{Initializing Basic Threshold-Discount Pairs} \begin{algorithmic}[1] \STATE $C \gets \varnothing$ \FOR{$threshold \gets threshold_{min}$ $to$ $threshold_{max}$} \FOR{$discount \gets 0 $ $to$ $threshold$} \STATE $C \gets C \cup \{<threshold, discount>\}$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Computing Profit of Each Threshold-Discount Pair Employing DMCNet} This paper selects consumers whose geographic location is near the shop as the potential consumers. By evaluating each threshold-discount pair against every potential consumer of the shop, the revenue of each threshold-discount pair can be obtained. Pseudo code is given in Algorithm 2; the function $F$ is the DMCNet discussed in the previous section. \begin{algorithm}[h] \caption{Computing Threshold-Discount-Pair Revenue } \begin{algorithmic}[1] \STATE $i \gets 0;$ \STATE $n \gets Count(C);$ \STATE $j \gets 0;$ \STATE $m \gets Count(user)$ \FOR{$i \gets 0$ $to$ $n$} \STATE $Revenue_{C_i} \gets 0$ \FOR{$j \gets 0$ $to$ $m$} \STATE $Revenue_{C_i} \gets Revenue_{C_i} + F(C_i, user_j, \varnothing)$ \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} \subsection{Iterating Greedy Algorithm to Obtain Sequence Of Threshold-Discount Pairs} So far the set of all threshold-discount pairs and their revenues have been obtained. The set generated previously contains all threshold-discount pairs, like $<60, 1>$, $<60, 2>$, $<60, 3>$, $<50, 1>$, $<50, 2>$, $<50, 3>$ and so on, which cannot all be shown simultaneously. For example, in an actual campaign either $<60, 1>$ or $<60, 2>$ will be displayed, meaning only one of them can be shown. Therefore, additional business rules are necessary: first, one threshold can have only one discount; second, the difference between adjacent thresholds must be at least $n$, where we always set $n=\$5$; finally, as the threshold increases, the discount must increase as well. For example, if campaign $<50, 4>$ is included, then the discount at threshold 60 must be higher than $\$4$. These rules, based on common sense and business practice, have two advantages: first, they reduce the scale of the subsequent optimal threshold-discount-pair computation; second, they make the model more consistent with business logic. By sorting the set $C$ by revenue and filtering it with the business rules, a new set $C$ is generated. The function $fit(C_i, R, D )$ returns true if $C_i$ conforms to the rules $R$ given the already accepted pairs $D$, \textit{c.f.} Algorithm 3. \begin{algorithm}[h] \caption{Generating Fundamental Threshold-Discount-Pair Sequence} \begin{algorithmic}[1] \STATE $C \gets Sort(C, revenue)$ \STATE $D \gets \varnothing$ \FOR{$i \gets 0 $ $to$ $n$} \IF{$fit(C_i, R, D )$} \STATE $D \gets D \cup \{C_i\}$ \ENDIF \ENDFOR \STATE $C \gets D$ \end{algorithmic} \end{algorithm} \subsection{Obtain Optimal Threshold-Discount Pairs By Randomized USM Algorithm} Finally, based on the generated set of threshold-discount pairs, pairs are added to the final set successively. A plain greedy algorithm would recompute the revenue for each candidate addition to decide whether the pair should enter the final set. Instead, this paper uses the randomized USM algorithm (\textit{c.f.} Algorithm 4), which is provably more effective: it computes not only the revenue change when a specific threshold-discount pair is added, but also the revenue change when the pair is removed.
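A minimal runnable counterpart of the randomized procedure formalized in Algorithm 4 below (plain Python; the names are ours, and \texttt{f} stands for the DMCNet-based revenue oracle over a set of pairs):
\begin{verbatim}
import random

def randomized_usm(candidates, f):
    # Randomized double-greedy of Buchbinder et al. for unconstrained
    # submodular maximization; a 1/2-approximation in expectation.
    X, Y = set(), set(candidates)
    for c in candidates:
        a = max(f(X | {c}) - f(X), 0.0)  # gain from adding c to X
        b = max(f(Y - {c}) - f(Y), 0.0)  # gain from removing c from Y
        p = 1.0 if a + b == 0 else a / (a + b)
        if random.random() < p:
            X = X | {c}
        else:
            Y = Y - {c}
    return X  # X == Y after the loop
\end{verbatim}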
\begin{algorithm}[h] \caption{Randomized USM Algorithm} \begin{algorithmic}[1] \STATE $X_0 \gets \varnothing, Y_0 \gets C$ \FOR{$i = 1 $ $to$ $n$} \STATE $a_i \gets {F(X_{i-1} + c_i, user) - F(X_{i-1}, user)}$ \STATE $b_i \gets {F(Y_{i-1} - c_i, user) - F(Y_{i-1}, user)}$ \STATE ${a_i'} \gets max\{a_i,0\}, {b_i'} \gets max\{b_i,0\}$ \STATE $with$ $probability$ $\frac{a_i'}{{a_i'} + {b_i'}} $ $do$ : $X_i \gets X_{i-1} + c_i, Y_i \gets Y_{i-1}$ \STATE $with$ $probability$ $\frac{b_i'}{{a_i'} + b_i'} $ $do$ : $X_i \gets X_{i-1}, Y_i \gets Y_{i-1} - c_i$ \ENDFOR \STATE $C \gets X_n$ \end{algorithmic} \end{algorithm} Above, $c_i$ denotes the $i$-th element of $C$ and $n = |C|$; if ${a_i'} + {b_i'} = 0$, the probability in line 6 is taken to be 1. It can be proved that this algorithm is a $1/2$-approximation algorithm for the USM problem, that is, $\text{E}(f(X_n)) \geq \frac{1}{2}f(OPT)$~{\cite{buchbinder2015sb1}}. \begin{table}[htb] \centering \begin{tabular}{l|l|l} \hline\hline Search Method & GMV & Time Cost \\ \hline\hline Global Optimum Searching & {3,976,560} & 6.3 hours \\ Randomized USM Searching & {2,177,239} & 20 minutes \\ Greedy Searching & {1,802,452} & 14 minutes \\ \hline\hline \end{tabular} \caption{GMVs and related time costs of the employed search methods. } \label{tb6} \end{table} \begin{table} \centering \begin{tabular}{l|l|l} \hline \hline City & Sample Size Control & Sample Size Treatment \\ \hline \hline Jinhua & 38 & 17 \\ Ningbo & 42 & 19 \\ Changsha & 62 & 20 \\ Tianjin & 158 & 19 \\ \hline \hline \end{tabular} \caption{Sample sizes of the control group and treatment group across four Chinese cities.} \label{tb1} \end{table} \begin{table}[htb] \centering \begin{tabular}{l|l|l} \hline\hline Model & Train AUC & Test AUC \\ \hline \hline XGBoost & 0.637 & 0.604 \\ DMCNet without LSTM Emb. & 0.750 & 0.700 \\ DMCNet with LSTM Emb. & 0.810 & 0.750 \\ DMCNet with LSTM Emb. \& Att. &0.810 & $\mathbf{0.760}$ \\ \hline\hline \end{tabular} \caption{AUC comparison of the four tested models.} \label{tb5} \end{table} \begin{table*}[htb] \centering \begin{tabular}{l|l|l|l|l} \hline \hline Group & Time & Avg Net GMV pS & Avg Order Vol. pS & Avg Net Cust. Price pS \\ \hline \hline Treatment & Week1 & 1019.61 & 23.4 & 43.58 \\ Treatment & Week2 & 1083.45 & 24.34 & 44.51 \\ \hline chain ratio week2 over week1 & & +6.26\% & +4.02\% & +2.13\% \\ \hline Control & Week1 & 1207.11 & 28.74 & 42.01 \\ Control & Week2 & 1216.89 & 29 & 41.96 \\ \hline chain ratio week2 over week1 & & +0.81\% & +0.9\% & -0.12\% \\ \hline Ratio of Treatment over Control & & +5.45\% & +3.11\% & +2.25\% \\ \hline \hline \end{tabular} \caption{Comparison of average net GMV (Gross Merchandise Volume) per shop, average order volume per shop and average net customer price per shop between the control group and the treatment group.} \label{tb2} \end{table*} \begin{table*}[htb] \centering \begin{tabular}{l|l|l|l} \hline \hline Group & Time & Net Cust. Price Aft. Triggering Camp. & Net Cust. Price Not Triggering Camp. \\ \hline \hline Treatment & Week1 & 45.65 & 42.98 \\ Treatment & Week2 & 51.13 & 41.90 \\ \hline chain ratio week2 over week1 & & +12.02\% & -2.52\% \\ \hline Control & Week1 & 43.64 & 41.42 \\ Control & Week2 & 41.73 & 42.05 \\ \hline chain ratio week2 over week1 & & -4.39\% & +1.53\% \\ \hline Ratio of Treatment over Control & & +16.41\% & -4.05\% \\ \hline \hline \end{tabular} \caption{Comparison of the control group and the treatment group in net customer price after triggering a campaign and net customer price without triggering a campaign.
} \label{tb3} \end{table*} \begin{table*}[htb] \centering \begin{tabular}{l|l|l|l|l} \hline \hline Group & Time & Ratio of Camp.-based Order Vol. Over Total Order Vol. & App P1 & App P2 \\ \hline\hline Treatment & Week1 & 22.31\% & 5.87\% & 24.74\% \\ Treatment & Week2 & 28.14\% & 6.27\% & 26.61\% \\ \hline chain ratio week2 over week1 & & +26.13\% & +6.81\% & +7.56\% \\ \hline Control & Week1 & 26.64\% &7.25\% & 27.85\% \\ Control & Week2 & 27.94\% & 7.18\% & 29.44\% \\ \hline chain ratio week2 over week1 & & +4.88\% & -0.97\% & +5.71\% \\ \hline Ratio of Treatment over Control & & +21.25\% & +7.78\% & +1.85\% \\ \hline \hline \end{tabular} \caption{Comparison of the control group and the treatment group in the ratio of campaign-based order volume over total order volume, app p1 (click-through) and app p2 (conversion).} \label{tb4} \end{table*} \section{6. Real Online Experiment} \subsection{Experiment Setup} The online experiment is an A/B-test performed on online-shops in diverse Chinese cities. The treatment group and control group are selected from four Chinese cities, see Table \ref{tb1}. In total, 65 online-shops are chosen for the treatment group and 300 for the control group. Two time windows (two weeks in total) are selected: week 1 between 2020-04-17 and 2020-04-23 and week 2 between 2020-04-24 and 2020-04-30. There are many diverse types of shops on the authors' online-sell-platform, including online supermarkets, online convenience-stores, online fruit-shops and online fresh-supermarkets. In this work only online convenience-stores are tested, since this kind of shop is popular in Chinese cities. \subsection{Model Comparison} Four distinct models are tested in the experiment: XGBoost, DMCNet without embedding, DMCNet with only LSTM embedding, and DMCNet with LSTM embedding and multihead attention. From Table \ref{tb5}, it is evident that DMCNet with LSTM embedding and multihead attention performs better than the other models. In particular, the difference between DMCNet with and without the LSTM embedding is about 5 percentage points of test AUC. This shows that the LSTM embedding of the sequence of threshold-discount pairs is vital to the model performance. \subsection{Optimization Methods Comparison} This paper compares three optimization methods on one month of data from 100 stores: the greedy algorithm, the randomized USM algorithm, and the global search algorithm. From Table \ref{tb6}, the randomized USM algorithm obtains higher revenue than the greedy algorithm while requiring far less time than the global search. \subsection{Evaluation Metrics} In this experiment, eight distinct metrics are employed, including average net GMV per shop, average order volume per shop, and average net customer price per shop, \textit{c.f.} Table \ref{tb2}. Net customer price after triggering a campaign and net customer price without triggering a campaign are given in Table \ref{tb3}. The ratio of campaign-based order volume over total order volume, app p1 (click-through) and app p2 (conversion) are included as evaluation metrics in Table \ref{tb4}. \subsection{Experiment Results Discussion} First of all, over the two-week experiment, the realized net GMV growth rate of the treatment group reached $6.26\%$, compared to the pre-set target of $3\%$. Secondly, the week-over-week growth of the average net GMV per shop in the treatment group exceeds that of the control group by $5.45\%$, \textit{c.f.} Table \ref{tb2}.
Thirdly, the growth rate of net customer price in the treatment group is higher than in the control group. Considering net customer price after triggering a campaign versus net customer price without triggering a campaign, the former increases while the latter decreases. This indicates that the former effect offsets the negative effect of the latter and eventually pushes up the overall net customer price, see Table \ref{tb3}. The two weeks' chain ratios of net customer price after triggering a campaign between the control and treatment groups also demonstrate that the DMC recommendation is reasonable. Fourthly, in the metric of campaign-based order volume over total order volume, the treatment group performs better than the control group. This shows that the DMC recommendation indeed enlarges the order volume. Combined with the third point, this shows that the DMC recommendation is the direct driver of the net GMV increase in the treatment group. Last but not least, click-through (App P1) and conversion (App P2) both increase in the treatment group but not consistently in the control group, see Table \ref{tb4}, indicating a positive relationship with applying multiple DMCs. \section{7. Conclusion} This work proposes a comprehensive solution for using digital marketing campaigns to increase shops' GMV. First of all, DMCNet is employed to compute the probability of triggering a DMC; its network structure is a hybrid deep learning model combining a deep net with LSTM-Attention. The deep net component learns representations of the user-profile and contextual features, and the LSTM-Attention component serves as the DMC embedding generator. The combinatorial optimization is then performed based on submodular optimization theory to generate a set of optimal threshold-discount pairs. A real online A/B-test on an e-commerce platform shows that the proposed solution increases retailers' revenue significantly.
\section{Introduction} Multi-loop scattering amplitudes are complicated functions of the momenta and helicities of the external states. In certain kinematical regimes, gauge theory amplitudes are constrained by additional symmetries and thus take a simpler form. The typical situation is when external momenta become either soft or collinear, where a gauge theory amplitude factorizes into a product of a universal emission factor and a lower-point amplitude. The soft and collinear behaviors of gauge theory amplitudes are of great theoretical and phenomenological interest for several reasons: \begin{itemize} \item[a)] In applications of QCD, soft and collinear factorization is essential for resumming the large logarithms that appear in fixed-order calculations~\cite{Collins:1989gx, Sterman:1995fz, Sterman:2004pd}; it also provides the theoretical basis of parton shower algorithms for Monte Carlo event generators for high-energy particle collisions~\cite{Buckley:2011ms}. \item[b)] Fixed-order calculations and subtractions rely on infrared factorization, which is essential in formulating general algorithms~\cite{Catani:1996vz, Frixione:1995ms} to handle and cancel infrared singularities. The extension of these next-to-leading order (NLO) algorithms to next-to-next-to-leading order (NNLO) has been carried out over the last ten years to improve the theoretical accuracy of perturbative QCD predictions~\cite{GehrmannDeRidder:2005cm, Catani:2007vq, Czakon:2010td, Boughezal:2011jf, Czakon:2014oma}. The first step at next-to-next-to-next-to-leading order (N${}^{3}$LO) has been taken in the last couple of years through the complete calculation of fully inclusive Higgs production at hadron colliders, which has made exclusive use of the soft-limit expansion to obtain each component: the squared real-virtual~\cite{Anastasiou:2013mca,Kilgore:2013gba}, double-virtual-real~\cite{Duhr:2013msa,Li:2013lsa,Duhr:2014nda,Dulat:2014mda}, double-real-virtual~\cite{Li:2014bfa, Anastasiou:2015yha} and triple-real radiation~\cite{Anastasiou:2013srw} contributions. \item[c)] Soft theorems, together with some mild restrictions of locality and gauge invariance/Adler zeros, are sufficient to fix (tree-level) scattering amplitudes for a variety of theories~\cite{Hamada:2018vrw, Rodina:2018pcb, Kampf:2019mcd}. Thus an independent study of gauge theory soft/collinear behaviors using Wilson lines offers a complementary point of view. \end{itemize} Factorization at the amplitude level in the collinear sector has been obtained in~\cite{Badger:2015cxa,DelDuca:2019ggv,DelDuca:2020vst} for both tree-level quadruple and one-loop triple splitting. In this work, we focus on the soft sector, especially on the leading-power behavior of QCD amplitudes when two soft partons are emitted from a general number of hard partons. At tree level, the soft amplitude can be cast into an abelian part plus an irreducible correlation part which is a color dipole~\cite{Catani:1999ss}. At one loop, the amplitude receives logarithmic loop corrections, but its structure is similar: it is still an abelian contribution plus a non-abelian correlated contribution. The abelian contribution is the product of two ``independent'' single soft emissions, with one of them receiving the one-loop correction. The non-abelian contribution still exhibits a dipole structure, as in the tree-level case.
Combined with the known results for the two-loop single soft amplitude that couples to up to three hard particles~\cite{Dixon:2019lnw}, the tree-level triple soft emission, whose irreducible three-gluon correlated emissions involve color and kinematical correlations with only one hard parton at a time~\cite{Catani:2019nqv}, and the pure dipole part~\cite{Duhr:2013msa,Li:2013lsa,Li:2016ctv,Moult:2018jzp}, the picture of N${}^{3}$LO soft correlations with multiple hard lines is now complete at the amplitude level, which enables a further step towards calculations of the N${}^{3}$LO soft function with multiple legs~\cite{Gao:2019ojf}. The outline of the paper is as follows. In section~\ref{sec:Tree}, we present the definition of the soft amplitude in terms of Wilson lines and review known results at tree level. In section~\ref{sec:LOOP} we present explicit expressions for the double soft gluon and double soft quark amplitudes in color space. In section~\ref{sec:SDE} we explain a novel approach to calculating the master integrals, and in section~\ref{sec:AC} we perform the analytic continuation to the different relevant kinematic configurations. We have tried to present the material in as elementary a way as possible; for readers not familiar with the formalism we have provided a detailed introduction in the appendix. \section{Soft Factorization and tree level amplitude\label{sec:Tree}} The soft current in QCD is defined as the low-energy (soft) limit of the corresponding QCD amplitude, with the property of universal factorization~\cite{Catani:2000pi,Catani:1999ss,Bern:1998sc,Bern:1999ry}. It is defined to all loop orders and contains all the quantum corrections to the tree-level (classical) eikonal factorization formula. The one-loop single soft factorization formula was proved in~\cite{Catani:2000pi}; the main conclusion there is that, in the soft limit of a single gluon with color index $a$, the $n$-point amplitude factorizes into a color operator (see appendix~\ref{sec:colorSpace}) ${\bf J}^{a}$ acting on an $(n-1)$-point amplitude \begin{align} \label{eq:single soft fac} \mathcal{M}^{a}_n{{\buildrel q \rightarrow 0 \over\longrightarrow}}\,\, { \varepsilon}^{\mu}(q) {\bf J}^{a}_{\mu} \cdot \mathcal{M}_{n-1}\,. \end{align} The generalization to multiple soft emissions and to all loop orders is given in Ref.~\cite{Feige:2014wja}, \begin{align} \langle a_1,\dots,a_n|&\mathcal{M}(q_1,\dots,q_n;p_1,\dots,p_m)\rangle|_{ \{q_1,\dots,q_n \} \rightarrow 0} \nonumber\\& \,\longrightarrow\,\,{ \varepsilon}^{\mu_1}(q_1)\dots{ \varepsilon}^{\mu_n} (q_n) {\bf J}^{a_1 \dots a_n}_{\mu_1\dots \mu_n}(q_1,\dots,q_n)|\mathcal{M}(p_1,\dots,p_m)\rangle\,. \label{frac theo} \end{align} The factorization formula holds at leading power in the soft limit~\cite{Bauer:2001yt}. Setting $n_k=p_k/p_k^0$, the explicit form of the color operator is\footnote{We cast the factorization formulae in a form as if the soft partons were gluons; for soft quarks one replaces gluon polarizations and colors with quark polarizations and colors.} \begin{align} {\varepsilon}^{\mu_1}(q_1)\dots{ \varepsilon}^{\mu_n}(q_n) {\bf J}^{a_1 \dots a_n}_{\mu_1 \dots \mu_n}(q_1,\dots,q_n) &\equiv\langle q_1,\dots,q_n;a_1,\dots,a_n|\prod_{k=1}^n Y^{\dagger}_{n_k}(0) |\Omega\rangle\,, \end{align} \begin{align} Y^{\dagger}_{n} (x)\equiv P \{ \exp[ i g_s {\bf T}^a \int_{0}^{\infty}{n\cdot A^a(x + s n)e^{-\eta s}}{ds}] \}\,.
\label{eq:Wilson line} \end{align} where $Y^{\dagger}_{n}$ corresponds to an outgoing soft Wilson line, ${\bf T}^a$ is the color space generator~\cite{Catani:2000pi} (see also appendix~\ref{sec:colorSpace}), and $\eta$ is an infinitesimal prescription parameter. Expanding the path-ordered exponential we get \begin{align} Y^{\dagger}_{n} =&\sum_{m=0}^\infty \frac{(i g_s)^m}{m!} \bf{T}_n^{a_m} \dots \bf{T}_n^{a_2} \bf{T}_n^{a_1} \int_0^\infty d s_1 \int_{s_1}^\infty d s_2 \dots \int_{s_{m-1}}^\infty d s_m e^{-\eta \sum_j s_j} \cr\,& {\rm{T}}\left[ \,n\cdot A^{a_1}(s_1 n) n\cdot A^{a_2}(s_2 n) \dots n\cdot A^{a_m}(s_m n) \right]+ \text{permutations}\,. \end{align} Suppose now we have Wilson lines evaluated at the space-time point $x=0$ (for simplicity we consider two of them); the matrix element between the vacuum and an outgoing state $\beta$ is \begin{align} &\langle \beta| Y^{\dagger}_{n_i}Y^{\dagger}_{n_j}|\Omega\rangle =\sum_{m=0}^\infty \frac{(i g_s)^m}{m!} \prod_{i=m}^1\bf{T}_{n_i}^{a_i} \sum_{l=0}^\infty \frac{(i g_s)^l}{l!} \prod_{j=l}^1\bf{T}_{n_j}^{b_j} \int d \vec{s}\int d \vec{t}\,\,e^{-\eta \sum_i s_i}e^{-\eta \sum_j t_j} \cr\,& \times\langle \beta|\,\, {\rm{T}}\, \bigg[{n_i}\cdot A^{a_1}(s_1 {n_i}) \dots {n_i}\cdot A^{a_m}(s_m {n_i}) \,\,n_j\cdot A^{b_1}(t_1 n_j) \dots n_j\cdot A^{b_l}(t_l n_j) \bigg] |\Omega\rangle \cr\, & +\text{permutations} \cr\,& =\sum_{m=0}^\infty \frac{(i g_s)^m}{m!} \prod_{i=m}^1\bf{T}_{n_i}^{a_i} \sum_{l=0}^\infty \frac{(i g_s)^l}{l!} \prod_{j=l}^1\bf{T}_{n_j}^{b_j} \int_{k_i} \int_{q_j}\int d \vec{s}\,e^{-i (k_1\cdot {n_i}-i\eta) s_1}\dots e^{-i (k_m-i\eta)\cdot {n_i} s_m} \cr\,& \times\int d \vec{t} \,e^{-i (q_1\cdot n_j -i\eta)t_1}\dots e^{-i (q_l-i\eta)\cdot n_j t_l} \bm{\mathcal{F}} \bigg[\langle\beta| {\rm T} \,n_i\cdot A^{a_1}(s_1 {n_i}) \dots n_j\cdot A^{b_l}(t_l n_j) |\Omega\rangle \bigg] \cr\, &+\text{permutations} \cr\,& =\sum_{{m,l}=0}^\infty \frac{(g_s)^{m+l}}{m! l!} \int_{k_i} \int_{q_j}\frac{\bf{T}_{n_i}^{a_m}\bf{T}_{n_i}^{a_{m-1}} \dots \bf{T}_{n_i}^{a_1} } {{n_i}\cdot k_m {n_i}\cdot (k_m+k_{m-1})\dots {n_i}\cdot \sum k_i} \frac{\bf{T}_{n_j}^{b_l}\bf{T}_{n_j}^{b_{l-1}} \dots \bf{T}_{n_j}^{b_1} } {n_j\cdot q_l n_j\cdot (q_l+q_{l-1})\dots n_j\cdot \sum q_j} \cr\,& \times\bm{\mathcal{F}} \bigg[\langle\beta| {\rm T} \,n_i\cdot A^{a_1}(s_1 {n_i}) \dots n_j\cdot A^{b_l}(t_l n_j) |\Omega\rangle \bigg] +\text{permutations}\,, \label{feynRule1} \end{align} where above we took for shorthand \begin{align} \int d \vec{s}\equiv\int_0^\infty d s_1 \int_{s_1}^\infty d s_2 \dots \int_{s_{m-1}}^\infty d s_m\,,\quad\quad n_i\cdot (k_m+\dots)\rightarrow n_i\cdot (k_m+\dots)-i\eta\,. \end{align} In the second step of Eq.~(\ref{feynRule1}) we use the fact that the fields along the $n_i$ direction are at space-like separation from those along the $n_j$ direction, so a single time-ordering operator $\rm T$ suffices. Here $\bm{\mathcal{F}}$ denotes the Fourier transformation of the position-space Green function into momentum space; the bracket thus equals the sum of all Feynman diagrams with the incoming state corresponding to the vacuum $\Omega$, outgoing lines on the mass shell corresponding to the states $\beta$, and lines off the mass shell (including propagators) corresponding to the gauge field operators $ n_i\cdot A^{a_1} \dots n_j\cdot A^{b_l}$. The multiplicity of the Green functions always cancels against the factors $m!\,l!$.
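To see how the eikonal denominators of Eq.~(\ref{feynRule1}) arise, consider the lowest ordered integral for a single gluon attachment of momentum $k$; with the Fourier conventions above it evaluates elementarily,
\begin{align}
i g_s\, {\bf T}^{a} \int_{0}^{\infty} ds\; e^{-i\,(n\cdot k - i\eta)\, s} \;=\; \frac{i g_s\, {\bf T}^{a}}{i\,(n\cdot k - i\eta)} \;=\; \frac{g_s\, {\bf T}^{a}}{n\cdot k - i\eta}\,,
\end{align}
which is the familiar eikonal coupling; iterating the nested integrals produces the ordered denominators $n\cdot k_m$, $n\cdot(k_m+k_{m-1})$, $\dots$ appearing in Eq.~(\ref{feynRule1}).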
To illustrate, we begin with the tree-level example of double soft gluon emission~\cite{Catani:1999ss} (throughout the paper we always take $q_2$ and $q_3$ to be soft), \begin{align} \bf{J}_{a_2 a_3 }^{\mu_2\mu_3 (0)}(q_2,q_3)= & \quad\Graph{1.}{SoftTree.pdf} \nonumber\\ =& \frac{1}{2} \bigg\{ {\bf J}_{a_2}^{\mu_2 (0)}(q_2) \;, {\bf J}_{a_3}^{\mu_3 (0)}(q_3) \bigg\} + i f_{a_2a_3a} \sum_{i} {\bf T}_i^a \bigg\{ \frac{n_i^{\mu_2} q_2^{\mu_3} - n_i^{\mu_3} q_3^{\mu_2}}{(q_2\cdot q_3) \,[n_i\cdot (q_2+q_3)]} \nonumber\\& - \frac{n_i\cdot (q_2-q_3)}{2 [n_i\cdot (q_2+q_3)]} \left[ \frac{n_i^{\mu_2} n_i^{\mu_3}}{(n_i \cdot q_2) (n_i \cdot q_3)} + \frac{g^{\mu_2\mu_3}}{q_2\cdot q_3} \right]\bigg \}\,, \label{eq:TreeCurrent} \end{align} where the curly bracket is the anticommutator of the tree-level single soft currents \begin{align} {\bf J}_{a}^{\mu (0)}(q)= - \sum_{i} {\bf T}^{a}_i \frac{n_i^{\mu}}{n_i \cdot q}\,, \label{eq:treesingle} \end{align} which is the only piece that survives in an abelian theory, while the color-antisymmetric part is typical of a non-abelian theory. The current satisfies several properties: \begin{itemize} \item[a)] It is independent of the helicity and flavor of the massless hard parton; the only information it carries is the color charge and the direction of the hard scattered parton, the latter property being dubbed ``rescaling invariance''. \item[b)] Its divergence is proportional to the total color charge of the hard partons, which is a statement of on-shell gauge invariance \footnote{The QED Ward Identity $q_\mu \mathcal{M}^{\mu}(q) = 0$ does not require the on-shell condition, but QCD does.} , see a detailed discussion in appendix~\ref{sec:colorSpace}. \begin{align} q_{2\mu_2} {\bf J}_{a_2a_3 }^{\mu_2\mu_3(0)}(q_2,q_3) &= \left( {\bf J}_{a_3 }^{\mu_3 (0)}(q_3) \;\delta_{a_2 a} + \frac{i}{2} f_{a_2a_3a} \frac{q_2^{\mu_3}}{q_2\cdot q_3} \right) \sum_{i=1}^{n} {\bf T}_i^a \;\;, \nonumber\\ q_{3\mu_3} {\bf J}_{a_2a_3 }^{\mu_2\mu_3 (0)}(q_2,q_3)& = \left( {\bf J}_{a_2 }^{\mu_2 (0)}(q_2) \;\delta_{a_3 a} + \frac{i}{2} f_{a_3a_2a} \frac{q_3^{\mu_2}}{q_2\cdot q_3} \right) \sum_{i=1}^{n} {\bf T}_i^a \;\;. \label{eq:gaugeInvar} \end{align} \end{itemize} In terms of spinor helicity variables, we convert the current into an amplitude; in the color singlet channel $\gamma^*\rightarrow q \bar q$ it takes the following form \begin{align} \label{eq:doubletree} \langle i_1{\; \bar\imath}_4 |{\bf J}_{a_2a_3}^{\mu_2\mu_3 (0)}(q_2,q_3){\varepsilon_{\mu_2}}(q_2) {\varepsilon_{\mu_3}}(q_3)|\gamma^*\rightarrow q {\bar q}\rangle \equiv \sum_{\sigma \in S_2}{ 2\,(t^{a_{\sigma(2)}}t^{a_{\sigma(3)}})^{~{\; \bar\imath}_4}_{i_1} }\mathop{\cal S}\nolimits^{\rm tree}(1_q,\sigma(2),\sigma(3),4_{\bar q}) \end{align} where we took the quark $q$ in leg 1 and the antiquark ${\bar q}$ in leg 4 with fundamental color indices $i_1$ and ${\; \bar\imath}_4$; this notational convention is kept throughout the paper. The expression for the form factors is \begin{align} \mathop{\cal S}\nolimits^{\rm tree}(1,2^+,3^+,4) &= \frac{ \spa{1}.4}{ \spa{1}.2 \spa2.3 \spa3.4}\,, \nonumber\\ \mathop{\cal S}\nolimits^{\rm tree}(1,2^+,3^-,4)& =\frac{1} {\spa{1}.2\spb2.4+ \spa1.3 \spb3.4\,}\left(\frac{1}{s_{12} +s_{13} }\frac{\spb{1}.4 \spa1.3^3 }{ \spa{2}.3\spa1.2}+\frac{1}{ s_{24}+s_{34}} \frac{\spa{1}.4\spb2.4^3}{ \spb2.3 \spb3.4}\right)\,.
\label{eq:formf} \end{align} Other color and helicity coefficients in Eq.~(\ref{eq:doubletree}) can be obtained by exchanging legs 1 and 4 in Eq.~(\ref{eq:formf}) or by complex conjugation, for example \begin{align} \mathop{\cal S}\nolimits^{\rm tree}(1,3^-,2^+,4)=\mathop{\cal S}\nolimits^{\rm tree}(4,2^+,3^-,1)\,,\quad\quad \mathop{\cal S}\nolimits^{\rm tree}(1,2^-,3^+,4)=\mathop{\cal S}\nolimits^{\rm tree}(1,2^+,3^-,4)^*\,. \end{align} Likewise, we present the result for a double soft quark pair $Q \bar Q$ at tree level, \begin{align} ({\bf J}^{(0)})_{i_2}^{~{\; \bar\imath}_3}(q_2,q_3)\equiv\sum_{n}\frac{-\slash{n}}{n\cdot (q_2+q_3)}\frac{1}{s_{23}}(t^{d})^{~{\; \bar\imath}_3}_{i_2}\bf{T}_n^d \,, \end{align} where $t^d$ is the color matrix in the fundamental representation, and $i_2$, ${\; \bar\imath}_3$ are the color indices of $Q$ and $ \bar Q$ respectively. Taking again the color singlet channel $\gamma^*\rightarrow q \bar q$, \begin{align} \langle i_1{\; \bar\imath}_4 |\bar u(q_3)({\bf J}^{(0)})_{i_2}^{~{\; \bar\imath}_3}(q_2,q_3) v(q_2)|\gamma^*\rightarrow q {\bar q}\rangle \equiv 2\,(t^{d})^{~{\; \bar\imath}_4}_{i_1} (t^{d})^{~{\; \bar\imath}_3}_{i_2}\mathop{\cal S}\nolimits^{{\rm tree}} (2\bar Q^+,3Q^-)\,, \end{align} \begin{align} \mathop{\cal S}\nolimits^{{\rm tree}} (2\bar Q^+,3Q^-)= \frac{-1}{s_{23}}\Bigg(\frac{\spa{3}.1 \spb{1}.2}{s_{12}+s_{13}} - \frac{\spa{3}.4 \spb{4}.2}{s_{24}+s_{34}}\Bigg)\,. \end{align} \section{Double soft current at one-loop\label{sec:LOOP}} \label{sec:double soft result} \begin{figure}[h] \begin{center} \includegraphics[width=\textwidth]{DoubleGluons} \includegraphics[width=\textwidth]{DoubleQuarks} \end{center} \caption{Non-vanishing diagrams for double soft gluons and quarks in Feynman gauge. Diagrams 4 and 5 can couple to three Wilson lines, but this contribution factorizes into the abelian part. } \label{fig:3} \end{figure} Going beyond tree level, the soft amplitude receives quantum corrections. The single soft amplitude was first obtained in Ref.~\cite{Bern:1995ix} for color-ordered amplitudes, and was later re-derived in color space~\cite{Catani:2000pi} in axial gauge using the Catani-Grazzini soft insertion rules, see also~\cite{Bern:1999ry,Bern:1998sc,Dixon:2019lnw}. For double soft emissions, the presence of sub-leading color structures results in an even deeper entanglement of color and kinematics. Historical treatments in the 1990s made exclusive use of \textquoteleft primitive amplitudes\textquoteright, which served as a starting point for a clean factorization of one-loop amplitudes in the soft and collinear regions by separating color issues from kinematic issues~\cite{Bern:1998sc,Bern:1999ry}. A typical example of a primitive amplitude decomposition can be found in the helicity amplitudes for $\gamma^*\to 4$ partons~\cite{Bern:1997sc,Bern:1996ka}, from which in principle the explicit expression for double soft emission could be extracted (as we have done), but the procedure is tedious due to the presence of momentum conservation in the helicity expressions and the nonlinear Schouten identities. Thus a direct calculation in color space is desirable; we show the details below.
\subsection{Setups} Expanding both sides of Eq.~(\ref{frac theo}) to one-loop order in terms of the bare strong coupling for double soft emission, the resulting formula is the starting point of our calculation, \begin{align} &\langle a_2,a_3|\mathcal{M}^{(1)}{(q_2,q_3;p_1,\dots,p_m)}\rangle|_{ q_2 \rightarrow 0,q_3 \rightarrow 0} ={ \varepsilon}^{\mu_2}(q_2){ \varepsilon}^{\mu_3} (q_3)g_s^2 \nonumber\\& \times\left[{\bf J}^{a_2 a_3 (0)}_{\mu_2 \mu_3}(q_2,q_3)|\mathcal{M}^{(1)}{(p_1,\dots,p_m)}\rangle +\bar a \,{\bf J}^{a_2 a_3 (1)}_{\mu_2 \mu_3}(q_2,q_3)|\mathcal{M}^{(0)}{(p_1,\dots,p_m)}\rangle\right]\,, \label{eq:start} \end{align} where the tree-level current is given in Eq.~(\ref{eq:TreeCurrent}), we have factored out a tree-level normalization of $g_s^2$ for the soft current, and we have expanded the results in terms of the rescaled coupling \begin{align} \bar a\equiv\frac{g_s^2}{(4 \pi)^{2-\epsilon}}e^{-\epsilon \gamma_E}=\frac{\alpha_s}{4 \pi}\frac{e^{-\epsilon \gamma_E}}{(4 \pi)^{-\epsilon}}\,, \end{align} where $\alpha_s=g_s^2/(4 \pi)$ and $\gamma_E \approx 0.577216$ is the Euler-Mascheroni constant. We have adopted a diagrammatic calculation approach, with the Feynman diagrams (see Figure \ref{fig:3}) generated by \texttt{Qgraph}~\cite{Nogueira:1991ex}. The color/Dirac algebra and integrand manipulations were performed in \texttt{form}~\cite{Ruijl:2017dtg}. To keep track of the regularization scheme dependence we set the regularization-dependent dimension to be $4-2\delta_R \epsilon$, with $\delta_R=0$ referring to the Four Dimensional Helicity scheme ($\rm FDH $)~\cite{Bern:1991aq,Bern:2002zk} and $\delta_R=1$ referring to the 't~Hooft-Veltman scheme ($\rm HV$)~\cite{tHooft:1972tcz}. The $d$-dimensional loop reduction was based on IBP identities~\cite{Chetyrkin:1981qh} implemented in the Mathematica package \texttt{LiteRed}~\cite{Lee:2012cn}. Below we present the expressions for the soft amplitude, with a normalization factor of \begin{align} c_\Gamma\equiv{\Gamma(1+\epsilon)\Gamma^2(1-\epsilon)\over \Gamma(1-2\epsilon)}\,. \end{align} We first work in time-like kinematic regions ($s_{i j}>0$) and later consider analytic continuations. \subsection{Time-Like results for double soft gluons} For double soft gluons, we cast the amplitudes into two independent helicity configurations (the other helicity configurations can be obtained by complex conjugation). Just as in the tree-level case, Eq.~(\ref{eq:TreeCurrent}), the amplitude is further decomposed into an abelian part and a correlated-emission (non-abelian) part, \begin{align} { \varepsilon}_{\mu_2}(q_2){ \varepsilon}_{\mu_3}(q_3)\bf{J}_{a_2 a_3 }^{\mu_2 \mu_3 (1)}(q_2,q_3)=\mathcal{M}_{a_2 a_3}^{\text{ab.}}(2,3)+\mathcal{M}_{a_2 a_3}^{\rm{nab.}}(2,3). \end{align} The abelian part is a direct product of a one-loop single soft emission and a tree-level single soft emission, \begin{align} \Delta^{\mu}(i,j;q;\epsilon)\equiv&{{ \Gamma(1-\epsilon)\Gamma(1+\epsilon)}\over{\epsilon}^2}\Bigg({\mu^2(-s_{i j}-i\eta)\over{(-s_{i q}-i\eta)\,\,(-s_{q j} -i\eta)\,}}\Bigg)^{\epsilon} \left(\frac{n_i^{\mu}}{n_i \cdot q}-\frac{n_j^{\mu}}{n_j \cdot q}\right)\,, \cr\, \mathcal{M}_{a_2 a_3}^{\rm{ab.}}(2^+,3^+)=& \sum_{i \neq j}^m i f_{a_2 c d}\bf{T}^c_{i}\bf{T}^d_{j}\Delta^{\mu_2}(i,j;q_2;\epsilon){ \varepsilon}_{\mu_2}(q_2) \sum_k^m \frac{-n_k\cdot{ \varepsilon}(q_3)}{n_k\cdot q_3} \bf{T}_k^{a_3} + 2 \longleftrightarrow 3 \,.
\label{eq:ab.} \end{align} \begin{align} &\mathcal{M}_{a_2 a_3}^{\rm{nab.}}(2^+ ,3^+)= -4 { \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}}\sum_{i \neq j}^m f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} {\spa i.j \over \spa i.2 \spa 2.3 \spa3.j} \cr & \times\Bigg\{ {-2\over \epsilon^2} - {1\over \epsilon} \ln{s_{ij} s_{2 3}\over s_{i2} s_{3j}} - {1\over 2}\Bigg(\ln^2{s_{ij}s_{2 3} \over (s_{i2}+s_{i3})(s_{2j}+s_{3j})}+\ln^2{ s_{i2}\over s_{i2} + s_{i3}}+\ln^2{s_{3j}\over s_{2j}+s_{3j}}\Bigg )\, \cr & - \text{Li}_2(1- { s_{i2}\over s_{i2} + s_{i3}})-\text{Li}_2(1- {s_{3j}\over s_{2j}+s_{3j}})-\text{Li}_2(1- {s_{ij}s_{2 3} \over (s_{i2}+s_{i3})(s_{2j}+s_{3j})}) \Bigg\} \cr &+{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}} \Bigg\{{ \frac{C_A-n_f-C_A \delta_R \epsilon}{(-1+\epsilon)(-3+2\epsilon)(-1+2\epsilon)} } \sum_{i}^m i f_{b a_2 a_3} \bf{T}_i^b\, {\frac{s_{i2}-s_{i3}}{s_{i2}+s_{i3}}}\frac{-1}{\spa 2.3^2} \Bigg\} \,, \label{eq:plusplus} \end{align} \begin{align} &\mathcal{M}_{a_2 a_3}^{\rm{nab.}}(2^+ ,3^-)= -4 { \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}}\sum_{i \neq j}^m f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} \cr &\times\Bigg\{{ {1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,}\left({1\over s_{i2}+s_{i3} }{\spb{i}.j \spa i.3^3 \over \spa{2}.3\spa i.2}+{1\over s_{2j}+s_{3j}}{\spa{i}.j\spb2.j^3 \over \spb2.3 \spb3.j}\right)} \cr &\times\Bigg({-2\over \epsilon^2} - {1\over \epsilon} \ln{s_{ij} s_{2 3}\over s_{i2} s_{3j}} +{1\over 2} \ln^2{s_{ij} s_{2 3}\over s_{i2} s_{3j}} - {\pi^2\over 3}\Bigg) \cr &+{\spa i.j\over \spa i.2 \spa2.j }{\spb i.j\over \spb i.3 \spb3.j } \times\left({1\over2} \ln^2{s_{ij} s_{2 3}\over s_{i2} s_{3j}} \right) \cr &+ { 1\over s_{i2}+s_{i3}}\Bigg({ -1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,} { \spb{i}.j \spa i.3^3 \over \spa{2}.3\spa i.2} + { -1 \over \spb{i}.2\spa2.j+ \spb i.3 \spa3.j\,} { \spa{i}.j \spb i.2^3 \over \spb{2}.3\spb i.3} \Bigg) \cr &\times\Bigg({1\over 2} \ln^2{s_{i2}\over s_{i2} +s_{i3} }+ {1\over 2} \ln^2{s_{ij} s_{2 3}\over (s_{i2} +s_{i3} )s_{3j} }\Bigg) \cr &+{ 1\over s_{2j}+s_{3j}}\Bigg({ -1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,} { \spa{i}.j \spb2.j^3 \over \spb{2}.3\spb3.j} +{ -1 \over \spb{i}.2\spa2.j+ \spb i.3 \spa3.j\,} { \spb{i}.j \spa3.j^3 \over \spa{2}.3\spa2.j} \Bigg) \cr &\times\Bigg({1\over 2} \ln^2{s_{3j}\over s_{2j} +s_{3j} } + {1\over 2} \ln^2{s_{ij} s_{2 3}\over (s_{2j} +s_{3j} )s_{i2} }\,\Bigg)\Bigg\}\,, \label{eq:plusminus} \end{align} \begin{align} \label{eq:strong-order} &\mathcal{M}_{a_2 a_3}^{\rm{nab.}}(2^+ ,3^+,2 \ll 3) =4 \sum_{i \neq j}^m f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} {\spa i.3 \over \spa i.2 \spa2.3} {\spa i.j \over \spa i.3 \spa3.j}\, \cr & \Bigg[\Bigg( {1\over \epsilon^2 } + {1\over \epsilon} \ln{-s_{i3}\,\mu^2\over s_{i2}\,s_{2 3}} +{1\over 2} \ln^2{-s_{i3}\,\mu^2\over s_{i2}\,s_{2 3}} + {\pi^2\over 6}\Bigg) +\Bigg( {1\over \epsilon^2 } + {1\over \epsilon} \ln{-s_{ij}\,\mu^2\over s_{i3}\,s_{3j}} + {1\over 2} \ln^2{-s_{ij}\,\mu^2\over s_{i3}\,s_{3j}} + {\pi^2\over 6}\Bigg)\Bigg]\,, \cr\, &\mathcal{M}_{a_2 a_3}^{\rm{nab.}}(2^+ ,3^-,2 \ll 3) =4 \sum_{i \neq j}^m f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} {\spa i.3 \over \spa i.2 \spa2.3} {-\spb i.j \over \spb i.3 \spb3.j}\, \cr & \Bigg[\Bigg( {1\over \epsilon^2 } + {1\over \epsilon} \ln{-s_{i3}\,\mu^2\over s_{i2}\,s_{2 3}} +{1\over 2} \ln^2{-s_{i3}\,\mu^2\over s_{i2}\,s_{2 3}} + {\pi^2\over 6}\Bigg) +\Bigg( {1\over \epsilon^2 } + {1\over \epsilon} \ln{-s_{ij}\,\mu^2\over s_{i3}\,s_{3j}} + {1\over 2} 
\ln^2{-s_{ij}\,\mu^2\over s_{i3}\,s_{3j}} + {\pi^2\over 6}\Bigg)\Bigg]\,. \cr\,
\end{align}
A particular application of our result is to take the strong-ordered limit of the emitted soft gluons; the resulting expressions, collected in Eq.~(\ref{eq:strong-order}), can also be deduced from a first-principles computation. Taking the strong-ordered limit to be $q_2\ll q_3$, the strong-ordered double soft gluon current is obtained by successive application of the single soft factorization formula, Eq.~(\ref{eq:single soft fac}):
\begin{align}
\langle a_2 \,a_3 | \mathcal{M} \rangle\rightarrow \langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|\mathcal{M}\rangle= \langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|a\rangle\langle a|\mathcal{M}\rangle \rightarrow \langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|a\rangle \bf{ J}^{a}_{\mu_3} |\mathcal{M}\rangle\,.
\end{align}
The strong-ordered double soft gluon current is then
\begin{align}
\label{eq:strong-ordered}
\bf{J}_{a_2 a_3 }^{\mu_2 \mu_3 }(q_2\ll q_3)= \langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|a\rangle \bf{ J}^{a}_{\mu_3}\,.
\end{align}
The current $\bf{\tilde J}^{a_2}_{\mu_2}$ acts as an operator in the color space of \text{`soft gluon $3$ + hard partons'}, while the current $\langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|a\rangle$ is partially contracted in the color indices of soft gluon $3$ and thus acts as an operator on the space of the hard partons alone, which makes its multiplication with the current $\bf{ J}^{a}_{\mu_3}$ well defined. It can be written as
\begin{align}
\langle a_3 | \bf{\tilde J}^{a_2}_{\mu_2}|a\rangle= \bf{ J}^{ a_2}_{\mu_2}\langle a_3 | a\rangle+\Delta \bf{ J}^{a;a_2,a_3}_{\mu_2} =\bf{ J}^{ a_2}_{\mu_2}\delta_{a_3 a}+\Delta \bf{ J}^{a;a_2,a_3}_{\mu_2}\,,
\end{align}
where $\bf{ J}^{ a_2}_{\mu_2}$ is the ordinary single soft current and $\Delta \bf{ J}^{a;a_2,a_3}_{\mu_2}$ incorporates the nontrivial effects of color correlation with soft gluon $3$. The tree-level expression was already obtained in~\cite{Catani:1999ss}:
\begin{align}
\bf{J}_{a_2 a_3 }^{ (0) \mu_2 \mu_3 }(q_2\ll q_3)=&\left(\bf{ J}^{(0) a_2}_{\mu_2}\langle a_3 | a\rangle+\langle a_3 |T_3^{a_2}| a\rangle \frac{q_{3 \mu_2}}{-q_3 \cdot q_2}\right) \bf{J}^{(0) a}_{\mu_3} \cr\, =&\left(\bf{ J}^{(0) a_2}_{\mu_2}\delta_{a_3 a}+i f_{a_3 a_2 a} \frac{q_{3 \mu_2}}{-q_3 \cdot q_2}\right) \bf{J}^{(0) a}_{\mu_3}\,, \cr\, \Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2}=&i f_{a_3 a_2 a} \frac{q_{3 \mu_2}}{-q_3 \cdot q_2}\,.
\label{eq:dtree}
\end{align}
The one-loop expression requires a little more effort. Expanding Eq.~(\ref{eq:strong-ordered}) to one-loop order we get
\begin{align}
\bf{J}^{(1) a_2 a_3 }_{ \mu_2 \mu_3 }(q_2\ll q_3) =& \langle a_3| \bf{\tilde J}^{(0) a_2}_{\mu_2} |a\rangle \bf{J}^{(1) a}_{\mu_3}+ \langle a_3| \bf{\tilde J}^{(1) a_2}_{\mu_2} |a\rangle \bf{J}^{(0) a}_{\mu_3} \cr\, =&\left[ \bf{J}^{(0) a_2}_{\mu_2} \delta_{a_3 a}+\Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2} \right] \bf{J}^{(1) a}_{\mu_3} + \left[ \bf{J}^{(1) a_2}_{\mu_2} \delta_{a_3 a}+\Delta \bf{ J}^{(1) a;a_2,a_3}_{\mu_2} \right] \bf{J}^{(0) a}_{\mu_3} \cr\, =&\bf{J}^{(0)a_2}_{\mu_2}\bf{J}^{(1)a_3}_{\mu_3}+ \bf{J}^{(1)a_2}_{\mu_2}\bf{J}^{(0)a_3}_{\mu_3}+ \Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2} \bf{J}^{(1) a}_{\mu_3}+ \Delta \bf{ J}^{(1) a;a_2,a_3}_{\mu_2}\bf{J}^{(0) a}_{\mu_3} \cr\, =&\bigg\{\bf{J}^{(1)a_3}_{\mu_3}\bf{J}^{(0)a_2}_{\mu_2}+ \bf{J}^{(1)a_2}_{\mu_2}\bf{J}^{(0)a_3}_{\mu_3}\bigg\} + \bigg\{ \left[\bf{J}^{(0)a_2}_{\mu_2},\bf{J}^{(1)a_3}_{\mu_3}\right] +\Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2} \bf{J}^{(1) a}_{\mu_3}\bigg\} \cr\,+& \Delta \bf{ J}^{(1) a;a_2,a_3}_{\mu_2}\bf{J}^{(0) a}_{\mu_3}\,.
\end{align}
The current thus receives three independent contributions; the first bracket is the abelian contribution. The remaining contributions are intrinsically non-abelian and are gauge invariant on their own:
\begin{align}
q_2^{\mu_2}\Delta \bf{ J}^{(1) a;a_2,a_3}_{\mu_2}\bf{J}^{(0) a}_{\mu_3}\simeq 0 \,,\quad\quad q_2^{\mu_2}\left(\left[\bf{J}^{(0)a_2}_{\mu_2},\bf{J}^{(1)a_3}_{\mu_3}\right]+\Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2} \bf{J}^{(1) a}_{\mu_3}\right)\simeq 0\,.
\end{align}
The first contribution is
\begin{align}
\Delta \bf{ J}^{(1) a;a_2,a_3}_{\mu_2} \bf{J}^{(0) a}_{\mu_3}=& 2\sum_{i=1}^m i f_{a_2 c d } \langle a_3| \bf {T}_3^c |a \rangle \bf {T}_i^d \Delta_{\mu_2}(3,i;q_2;\epsilon) \sum_{j=1}^m \frac{-{n_j}_{\mu_3}}{n_j\cdot q_3} \bf{T}_j^{a} \cr\, =&-2\sum_{i=1}^m f_{a_2 c d } f_{a_3 c a} \bf {T}_i^d \Delta_{\mu_2}(3,i;q_2;\epsilon) \sum_{j=1}^m \left(\frac{{n_i}_{\mu_3}}{n_i\cdot q_3}-\frac{{n_j}_{\mu_3}}{n_j\cdot q_3}\right) \bf{T}_j^{a} \cr\, =&2\sum_{i\neq j}^m f_{b c a_2} f_{b d a_3} \bf {T}_i^c \bf {T}_j^d \Delta_{\mu_2}(i,3;q_2;\epsilon) \left(\frac{{n_i}_{\mu_3}}{n_i\cdot q_3}-\frac{{n_j}_{\mu_3}}{n_j\cdot q_3}\right)\,.
\label{eq:{1contri}}
\end{align}
The commutator appearing in the second contribution is
\begin{align}
\left[\bf{J}^{(0)a_2}_{\mu_2},\bf{J}^{(1)a_3}_{\mu_3}\right] = &\sum_k^m\sum_{i\neq j}^m \frac{-{n_k}_{\mu_2}}{n_k\cdot q_2} i f_{a_3 c d} \Delta_{\mu_3}(i,j;q_3;\epsilon) \left[ \bf{T}_k^{a_2}, \bf{T}_i^{c} \bf{T}_j^{d} \right] \cr\, =&\sum_k^m\sum_{i\neq j}^m \frac{-{n_k}_{\mu_2}}{n_k\cdot q_2} i f_{a_3 c d} \Delta_{\mu_3}(i,j;q_3;\epsilon) \times \left( i f_{a_2 c e} \bf{T}_k^e \bf{T}_j^d \delta_{ki}+ i f_{a_2 d e} \bf{T}_i^c \bf{T}_k^e \delta_{kj} \right) \cr\, =&-2\sum_{i\neq j}^m f_{b c a_2} f_{b d a_3} \bf{T}_i^c \bf{T}_j^d \Delta_{\mu_3}(i,j;q_3;\epsilon) \frac{-{n_i}_{\mu_2}}{n_i\cdot q_2}\,.
\end{align}
The full second contribution is then
\begin{align}
\left[\bf{J}^{(0)a_2}_{\mu_2},\bf{J}^{(1)a_3}_{\mu_3}\right]+\Delta \bf{ J}^{(0) a;a_2,a_3}_{\mu_2} \bf{J}^{(1) a}_{\mu_3} =& -2\sum_{i\neq j}^m f_{b c a_2} f_{b d a_3} \bf{T}_i^c \bf{T}_j^d \Delta_{\mu_3}(i,j;q_3;\epsilon) \frac{-{n_i}_{\mu_2}}{n_i\cdot q_2} \cr\, +& \sum_{i\neq j}^m f_{a_3 a_2 a} f_{a c d} \bf{T}_i^c \bf{T}_j^d \Delta_{\mu_3}(i,j;q_3;\epsilon) \frac{{q_3}_{\mu_2}}{q_3\cdot q_2} \cr\, =& 2\sum_{i\neq j}^m f_{b c a_2} f_{b d a_3} \bf{T}_i^c \bf{T}_j^d \Delta_{\mu_3}(i,j;q_3;\epsilon) \left(\frac{{n_i}_{\mu_2}}{n_i\cdot q_2}- \frac{{q_3}_{\mu_2}}{q_3\cdot q_2}\right)\,, \cr
\label{eq:{2contri}}
\end{align}
where in the last step we have used the identity
\begin{align}
\sum_{i\neq j}^m ( f_{b c a_2} f_{b d a_3}+ f_{b c a_3} f_{b d a_2}) \bf{T}_i^c \bf{T}_j^d \Delta_{\mu_3}(i,j;q_3;\epsilon) = 0\,.
\end{align}
Adding Eq.~(\ref{eq:{1contri}}) and Eq.~(\ref{eq:{2contri}}) we arrive at
\begin{align}
\bf{J}^{(1) a_2 a_3 }_{ \mu_2 \mu_3 }(q_2&\ll q_3)|_{\rm{nab.}}= 2\sum_{i\neq j}^m f_{b c a_2} f_{b d a_3} \bf{T}_i^c \bf{T}_j^d \left(\frac{{n_i}_{\mu_2}}{n_i\cdot q_2}- \frac{{q_3}_{\mu_2}}{q_3\cdot q_2}\right) \times \left(\frac{{n_i}_{\mu_3}}{n_i\cdot q_3}-\frac{{n_j}_{\mu_3}}{n_j\cdot q_3}\right) \cr\, \times& {{ \Gamma(1-\epsilon)\Gamma(1+\epsilon)}\over{\epsilon}^2} \left[ \Bigg({\mu^2(-s_{i 3}-i\eta)\over{(-s_{i 2}-i\eta)\,\,(-s_{2 3 } -i\eta)\,}}\Bigg)^{\epsilon} + \Bigg({\mu^2(-s_{i j}-i\eta)\over{(-s_{i 3}-i\eta)\,\,(-s_{3 j} -i\eta)\,}}\Bigg)^{\epsilon} \right]\,,
\end{align}
which is in full agreement with Eq.~(\ref{eq:strong-order}). \footnote{${n_i \cdot \varepsilon^+(q)}/{n_i \cdot q}-{n_j \cdot \varepsilon^+(q)}/{n_j \cdot q}=-\sqrt 2 {\langle i j \rangle}/ ( {\langle i q \rangle \langle q j \rangle )}$}

\subsection{Time-like results for double soft quarks}
The soft quark amplitude is decomposed into four gauge-invariant building blocks: a piece of uniform transcendental weight, $\mathcal{M}^{\rm {u.t.}}$; a piece that violates uniform transcendentality, $\mathcal{M}^{\rm {u.t.v.}}$; a sub-leading-color contribution, $\mathcal{M}^{\rm {s.l.}}$, given by the last triangle diagram in Figure~\ref{fig:3}; and the $n_f$ term, $\mathcal{M}^{\rm{n_f}}$. Amplitudes of helicity configuration $\bar Q^-Q^+$ can be obtained by complex conjugation\footnote{The complex conjugation is applied only to the kinematical part; the color factors are left untouched.}.
\begin{align}
\bar u(q_3)({\bf J}^{(1)})_{i_2}^{~{\; \bar\imath}_3}(q_2,q_3) v(q_2)= (\mathcal{M}^{\rm {u.t.}})_{i_2}^{~{\; \bar\imath}_3}+(\mathcal{M}^{\rm{u.t.v.}})_{i_2}^{~{\; \bar\imath}_3} +(\mathcal{M}^{\rm{s.l.}})_{i_2}^{~{\; \bar\imath}_3}+(\mathcal{M}^{\rm{n_f}})_{i_2}^{~{\; \bar\imath}_3}\,.
\end{align} \begin{align} \label{eq:softQuark} &(\mathcal{M}^{\rm {u.t.}})_{i_2}^{~{\; \bar\imath}_3}(2^+,3^-)=\cr\, &4{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}}\sum_{i\neq j}^m ( t^d t^c)^{~{\; \bar\imath}_3}_{i_2} \bf{T}_{i}^c \bf{T}_{j}^d \Bigg\{\frac{-1}{s_{23}}\Bigg(\frac{\spa{3}.i \spb{i}.2}{s_{i2}+s_{i3}} - \frac{\spa{3}.j \spb{j}.2}{s_{2j}+s_{3j}}\Bigg) \Bigg(- \frac{1}{\epsilon^2} - \frac{1}{\epsilon}\ln \frac{s_{ij} s_{23}}{s_{i2} s_{3j}}\Bigg) \cr &+\Bigg(\frac{\spa{i}.j \spb{i}.2^2}{(s_{i2}+s_{i3})(\spb{i}.2 \spa{2}.j + \spb{i}.3 \spa{3}.j)\spb{2}.3}-\frac{\spb{i}.j \spa{i}.3^2}{(s_{i2}+s_{i3})(\spa{i}.2 \spb{2}.j + \spa{i}.3 \spb{3}.j)\spa{2}.3}\Bigg) \cr &\times\Bigg(\frac{\pi^2}{6} + \frac{1}{2} \ln^2\frac{s_{i2}}{s_{i2}+s_{i3}} + \frac{1}{2} \ln^2\frac{s_{ij}s_{23}}{(s_{i2}+s_{i3})s_{3j}} - \frac{1}{4} \ln^2 \frac{s_{ij} s_{23}}{s_{i2}s_{3j}} \Bigg) \cr &+\Bigg(\frac{\spa{i}.j \spb{2}.j^2}{(s_{2j}+s_{3j})(\spa{i}.2 \spb{2}.j + \spa{i}.3 \spb{3}.j)\spb{2}.3}- \frac{\spb{i}.j \spa{3}.j^2}{(s_{2j}+s_{3j})(\spb{i}.2 \spa{2}.j + \spb{i}.3 \spa{3}.j)\spa{2}.3}\Bigg) \cr &\times\Bigg(\frac{\pi^2}{6} + \frac{1}{2} \ln^2\frac{s_{3j}}{s_{2j}+s_{3j}} + \frac{1}{2} \ln^2\frac{s_{ij}s_{23}}{(s_{2j}+s_{3j})s_{i2}} - \frac{1}{4} \ln^2 \frac{s_{ij} s_{23}}{s_{i2}s_{3j}} \Bigg) \Bigg\}\,, \end{align} \begin{align} (\mathcal{M}^{\rm{u.t.v.}})_{i_2}^{~{\; \bar\imath}_3}(2^+,3^-)=-&{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}} \frac{2}{s_{23}} \frac{13-(20+\delta_R)\epsilon+8\epsilon^2}{2\epsilon(-1+\epsilon)(-3+2\epsilon)(-1+2\epsilon)} \cr\, \times& \sum_{i}^mN_c ( t^d )^{~{\; \bar\imath}_3}_{i_2}\bf{T}_i^d \frac{ \spa{3}.i \spb{i}.2}{s_{i2}+s_{i3}}\,, \cr\, % (\mathcal{M}^{\rm{s.l.}})_{i_2}^{~{\; \bar\imath}_3}(2^+,3^-)=-&{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}} \frac{2}{s_{23}}\frac{(2\epsilon-1)\epsilon^2\delta_R-2\epsilon^2+3\epsilon-2}{\epsilon^2(-1+2\epsilon)(-1+\epsilon)} \cr\, \times& \left(-\frac{4 }{N_c}+\frac{N_c}{2}\right) \sum_{i}^m ( t^d )^{~{\; \bar\imath}_3}_{i_2} \bf{T}_i^d\frac{ \spa{3}.i \spb{i}.2}{s_{i2}+s_{i3}}\,, \cr\, (\mathcal{M}^{\rm{n_f}})_{i_2}^{~{\; \bar\imath}_3}(2^+,3^-)=-&{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}} \frac{2}{s_{23}}\frac{2-2\epsilon}{\epsilon(-1+2\epsilon)(-3+2\epsilon)} \cr\, \times&n_f\sum_{i}^m ( t^d )^{~{\; \bar\imath}_3}_{i_2} \bf{T}_i^d \frac{\spa{3}.i \spb{i}.2}{s_{i2}+s_{i3}}\,, \cr\, (\mathcal{M}^{\rm {u.t.}})_{i_2}^{~{\; \bar\imath}_3}(2^+,3^-,2 \ll 3) =&4{ \left(\frac{-s_{2 3}-i\eta}{\mu^2}\right)^{-\epsilon}} \sum_{i\neq j}^m ( t^d t^c)^{~{\; \bar\imath}_3}_{i_2} \bf{T}_{i}^c \bf{T}_{j}^d \frac{\spb{i}.j}{\spb{i}.3 \spb{3}.j \spa{2}.3} \cr\, \times& \Bigg( \frac{1}{\epsilon^2}+\frac{1}{\epsilon}\ln\frac{s_{ij}s_{23}}{s_{i3}s_{3j}} +\frac{\pi^2}{3}+\frac{1}{2}\ln^2\frac{s_{i2}}{s_{i3}}+\frac{1}{2}\ln^2\frac{s_{ij}s_{23}}{s_{i3}s_{3j}}\Bigg)\,. \end{align} \section{Simplified differential equation approaches for master integrals} \label{sec:SDE} \begin{figure}[h] \begin{center} \includegraphics[width=0.6\textwidth]{topology} \end{center} \caption{Topology classifications, with double-line referring to Wilson line} \label{fig:7} \end{figure} The set of master-integrals in our problem can be classified into two topologies as depicted in Figure \ref{fig:7}. The first corresponds to single soft emission. The propagator list for the second pentagon topology is \begin{align} \Bigg\{k^2+i\eta,\,(k+q_2)^2+i\eta,\,(k-q_3)^2+i\eta,\,-n_1\cdot(k+q_2)-i\eta,\,n_4\cdot(k-q_3)-i\eta\Bigg\}\,. 
\label{eq:pentagon}
\end{align}
The soft triangle and soft box integrals can be obtained with the traditional Feynman-parameter approach. For the soft pentagon integral~\cite{Bern:1993kr}, traditional differential equations in the kinematic invariants would be too involved to solve; the proposal here is instead to formulate a differential equation with respect to an auxiliary parameter~\cite{Papadopoulos:2014lla}. To this end we define a new topology with a parameter $z$, which amounts to replacing $q_2$ with $z q_2$:
\begin{align}
\Bigg\{k^2,\,(k+z q_2)^2,\,(k-q_3)^2,\,-n_1\cdot(k+z q_2),\,n_4\cdot(k-q_3)\Bigg\}\,.
\label{eq:topolist}
\end{align}
Now each master integral carries the parameter $z$, and the soft triangle and soft box are obtained by replacing $q_2$ with $z q_2$ in their results, Eq.~(\ref{eq:low point}). For the pentagon integral $I_{z;11111}$ we form a differential equation with respect to $z$; in doing so we define three rescaling invariants for the problem at hand,
\begin{align}
\label{z-change}
x\equiv{s_{1 4}s_{2 3} \over (s_{1 2}+s_{1 3})(s_{2 4 }+s_{3 4})}\,, \quad t\equiv\frac{s_{1 2}}{s_{1 2}+s_{1 3}}\,, \quad s\equiv \frac{s_{2 4}}{s_{2 4}+s_{3 4}}\,,
\end{align}
and set without loss of generality
\begin{align}
n_1\cdot n_4=2\,,\quad q_2\cdot q_3=x/2\,,\quad n_1\cdot q_2=t\,,\quad n_1\cdot q_3=1-t\,,\quad n_4\cdot q_2=s\,,\quad n_4\cdot q_3=1-s\,.
\end{align}
Then the differential equation with respect to the parameter $z$ becomes
\begin{align}
\partial_z\bigg( Q(z) I_{z;11111}\bigg)=& \frac{Q(z)}{z \left(s t z^2-2 s t z+s t+s z-s+t z-t-x z+1\right)} \cr\, \times&\bigg\{ \frac{ s t z^2-s t+s+t-1}{(s-1) t z (s z-s+1) (t z-t+1)}\,\epsilon\, I_{z;01100} \cr\, +&\frac{ s z }{s z-s+1}(2 \epsilon +1)I_{z;11011} -s z (2 \epsilon+1)I_{z;11101} \cr\, +&\frac{(t-1)}{t z-t+1} (2 \epsilon +1)I_{z;10111} - (t-1) (2 \epsilon +1)I_{z;11110} \bigg\}\,,
\end{align}
where $Q(z)$ is an integrating factor, introduced so that the right-hand side no longer depends on the pentagon itself,
\begin{align}
Q(z)\equiv z \left(s t z^2-2 s t z+s t+s z-s+t z-t-x z+1\right)^{\epsilon +1}\,.
\end{align}
The integrating factor reduces the problem to a simple integration, at the price of introducing a square root:
\begin{align}
Q(z)=&z \left(s t (z - r_1) (z - r_2)\right)^{\epsilon +1}\,,\quad \Delta(x,s,t)\equiv\sqrt{s^2+4 s t x-2 s t-2 s x+t^2-2 t x+x^2}\,. \cr\, r_1=&\frac{-s - t + 2 s t + x - \Delta(x,s,t)}{2 s t}\,,\quad\quad r_2=\frac{-s - t + 2 s t + x + \Delta(x,s,t)}{2 s t}\,.
\end{align}
We are now left with the boundary conditions. In general the limit $z \rightarrow 0$ does not commute with the integration over the loop momentum, so it would seem that an independent calculation of the boundary term is necessary. However, since the integrating factor is proportional to $z$, it was conjectured and later confirmed that $Q(z) I_{z;11111}$ vanishes at the origin, provided that $I_{z;11111}$ is regular there (the same trick was applied in~\cite{Papadopoulos:2014lla}).
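The factorization of the quadratic polynomial in $Q(z)$ into $s\,t\,(z-r_1)(z-r_2)$ with the roots quoted above can be verified symbolically. A minimal {\tt sympy} sketch of this consistency check (the variable names are ours and purely illustrative) is
\begin{verbatim}
# Verify that the quadratic bracket in Q(z) factors as s*t*(z - r1)*(z - r2)
# with the roots r_{1,2} quoted above (a consistency check of the algebra).
import sympy as sp

z, s, t, x = sp.symbols('z s t x')

poly  = s*t*z**2 - 2*s*t*z + s*t + s*z - s + t*z - t - x*z + 1
Delta = sp.sqrt(s**2 + 4*s*t*x - 2*s*t - 2*s*x + t**2 - 2*t*x + x**2)
r1 = (-s - t + 2*s*t + x - Delta) / (2*s*t)
r2 = (-s - t + 2*s*t + x + Delta) / (2*s*t)

# The difference simplifies to zero, confirming the factorization
print(sp.simplify(sp.expand(s*t*(z - r1)*(z - r2)) - poly))  # -> 0
\end{verbatim}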
Below in Eq.~(\ref{eq:low point}) and Eq.~(\ref{eq:five-point}) we collect the Euclidean results for the master integrals with a pre-factor of $c_\Gamma(-s_{23}-i\eta)^{-\epsilon}$ \begin{align} I'_{01100}&\equiv\frac{1-2\epsilon}{-2\epsilon}I_{01100}=-\frac{1}{2 \epsilon^2} \,, \quad I_{10011}=\frac{\Gamma(1-2\epsilon)\Gamma(1+\epsilon)}{\epsilon^2\Gamma(1-\epsilon)} \frac{4}{s_{1 4}}\Bigg(\frac{s_{14}s_{23}}{s_{12}s_{34}} \Bigg)^\epsilon \,,\cr\, I_{11011}&=\frac{4 \Gamma(1-2\epsilon) \Gamma(1+\epsilon)}{\epsilon^2 \Gamma(1-\epsilon)}\frac{4}{s_{1 2} s_{3 4}}\left(\frac{s_{14}s_{23}}{s_{12}s_{34}}\right)^\epsilon \, _2F_1\left(1,1+\epsilon,1-\epsilon,-\frac{s_{2 4} }{s_{3 4}}\right) \,, \cr\, I_{11110}&=-\frac{4}{s_{2 3} (s_{1 2}+s_{1 3})}\frac{1}{\epsilon^2}\,_2F_1\left(1,1,1-\epsilon,\frac{s_{1 3} }{s_{1 2}+s_{1 3}}\right) \,, \cr\, I_{01111}&=\frac{\Gamma(1-2\epsilon)}{\Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}\frac{4}{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\left(1- \frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})} \right)^{-1-\epsilon} \cr\, &\times\bigg\{ \frac{2 \Gamma^3(1-\epsilon)\Gamma^2(1+\epsilon)}{\epsilon^2\Gamma(1-2\epsilon)} \left(\frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\right)^\epsilon-\frac{2 \Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}{\epsilon^2\Gamma(1-2\epsilon)} \cr\, &\times\, _2F_1\left(-\epsilon,-\epsilon,1-\epsilon,\frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\right) \bigg\} \,. \label{eq:low point} \end{align} \begin{align} &I_{11111}=\frac{4}{s_{12}s_{34}(s_{12}+s_{13})(s_{24}+s_{34})s_{23}}\frac{1}{\epsilon^2} \cr\,&\times \Bigg\{s_{34}(s_{12}+s_{13})\Bigg[1+\epsilon\ln\frac{s_{34}(s_{12}+s_{13})}{s_{12}(s_{24}+s_{34})}+\frac{1}{2}\epsilon^2\ln^2\frac{s_{34}(s_{12}+s_{13})}{s_{12}(s_{24}+s_{34})}\Bigg] \cr\, &+(s_{12}+s_{13})(s_{24}+s_{34})\Bigg[1-\epsilon\ln\frac{s_{12}s_{34}}{(s_{12}+s_{13})(s_{24}+s_{34})} +\frac{1}{2}\epsilon^2\ln^2\frac{s_{12}s_{34}}{(s_{12}+s_{13})(s_{24}+s_{34})}\Bigg] \cr\, &+s_{12}(s_{24}+s_{34})\Bigg[1+\epsilon\ln\frac{s_{12}(s_{24}+s_{34})}{s_{34}(s_{12}+s_{13})}+\frac{1}{2}\epsilon^2\ln^2\frac{s_{12}(s_{24}+s_{34})}{s_{34}(s_{12}+s_{13})}\Bigg] \cr\,&- \frac{s_{14}s_{23}s_{34}(s_{12}+s_{13})}{(s_{12}+s_{13})(s_{24}+s_{34})-s_{23}s_{14}} \Bigg[\epsilon\ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})} +\frac{1}{2}\epsilon^2\Bigg(\ln^2\frac{s_{14}s_{23}}{(s_{24}+s_{34})s_{12}} \cr\,& +\ln^2\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})} -\ln\frac{s_{14}s_{23}}{(s_{12}+s_{13})s_{34}}+\ln^2\frac{s_{34}}{s_{24}+s_{34}}-\ln^2\frac{s_{12}}{s_{12}+s_{13}}\Bigg)\Bigg] \cr\,&- \frac{s_{14}s_{23}s_{12}(s_{24}+s_{34})}{(s_{12}+s_{13})(s_{24}+s_{34})-s_{23}s_{14}} \Bigg[\epsilon\ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})} +\frac{1}{2}\epsilon^2\Bigg(\ln\frac{s_{14}s_{23}}{(s_{12}+s_{13})s_{34}} \cr\,& +\ln^2\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})} -\ln^2\frac{s_{14}s_{23}}{(s_{24}+s_{34})s_{12}}+\ln^2\frac{s_{12}}{s_{12}+s_{13}}-\ln^2\frac{s_{34}}{s_{24}+s_{34}}\Bigg)\Bigg] \cr\,&+ 2 s_{34}(s_{12}+s_{13})\epsilon^2\Bigg[\frac{\pi^2}{6}+\text{Li}_2\left(1-\frac{s_{24}+s_{34}}{s_{34}}\right)\Bigg] + 2 s_{12}(s_{24}+s_{34})\epsilon^2\Bigg[\frac{\pi^2}{6}+\text{Li}_2\left(1-\frac{s_{12}+s_{13}}{s_{12}}\right)\Bigg] \cr\,&+ 2(s_{12}+s_{13})(s_{24}+s_{34})\epsilon^2\Bigg[\text{Li}_2\left(1- \frac{s_{34}}{s_{24}+s_{34}}\right)+\text{Li}_2\left(1- \frac{s_{12}}{s_{12}+s_{13}}\right)\Bigg] \cr\,&+ 
(s_{12}+s_{13})(s_{24}+s_{34})\frac{(s_{12}+s_{13})(s_{24}+s_{34})-s_{23}s_{14}-s_{34}(s_{12}+s_{13})-s_{12}(s_{24}+s_{34})}{(s_{12}+s_{13})(s_{24}+s_{34})-s_{23}s_{14}} \cr&\times \epsilon^2\text{Li}_2\left(1-\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}\right)-(s_{12}+s_{13})(s_{24}+s_{34})\epsilon^2\frac{\pi^2}{3} \Bigg\}+\cal O(\epsilon)\,.
\label{eq:five-point}
\end{align}
Sections~\ref{sec:double soft result} through~\ref{sec:SDE} contain the main results of this work. The checks we have performed include:
\begin{itemize}
\item Comparison of our results against those extracted from the one-loop FDH amplitudes $\gamma^* \rightarrow q\bar q gg$ and $\gamma^* \rightarrow q\bar q \,\bar Q Q$~\cite{Bern:1997sc,Bern:1996ka}.
\item On-shell gauge invariance (see details in appendix~\ref{sec:colorSpace}):
\begin{align}
q_{2\mu_2}\bf{J}_{a_2 a_3 }^{\mu_2 \mu_3 (1)}(q_2,q_3) &\propto \sum_{i=1}^{m} \bf{T}_i^a \;\;,\cr q_{3\mu_3} \bf{J}_{a_2 a_3 }^{\mu_2 \mu_3 (1)}(q_2,q_3)& \propto \sum_{i=1}^{m} \bf{T}_i^a \;\;.
\end{align}
\item Numerical checks of the master integrals using the toolbox \texttt{pySecDec}~\cite{Borowka:2017idc}.
\end{itemize}

\section{Analytical continuation}\label{sec:AC}
Recall that our results were obtained in time-like kinematics with positive Mandelstam variables; the corresponding topology with explicit prescriptions is
\begin{align}
{\rm{e^+e^-}}:\Bigg\{k^2+i\eta,\,(k+q_2)^2+i\eta,\,(k-q_3)^2+i\eta,\,-n_1\cdot(k+q_2)-i\eta,\,n_4\cdot(k-q_3)-i\eta\Bigg\}\,.
\end{align}
Crossing into ingoing states amounts to reversing some of the hard momenta, say $n_1\rightarrow-n_1$ ($n_4\rightarrow -n_4$); equivalently, one can insist on physical hard momenta with positive energies, i.e.\ $n_1^0>0$ and $n_4^0>0$, and change the prescriptions accordingly. We take the latter approach, so that the Mandelstam variables stay positive; in this way we obtain the other relevant topologies with different prescriptions:
\begin{align}
{\rm{DIS}}:\Bigg\{k^2+i\eta,\,(k+q_2)^2+i\eta,\,(k-q_3)^2+i\eta,\,-n_1\cdot(k+q_2)+i\eta,\,n_4\cdot(k-q_3)-i\eta\Bigg\}\,, \cr\, {\rm{DY}}:\Bigg\{k^2+i\eta,\,(k+q_2)^2+i\eta,\,(k-q_3)^2+i\eta,\,-n_1\cdot(k+q_2)+i\eta,\,n_4\cdot(k-q_3)+i\eta\Bigg\}\,.
\end{align}
Below we perform the analytic continuations from topology ${\rm{e^+e^-}}$ to topologies ${\rm{DIS}}$ and ${\rm{DY}}$ for the master integrals and the soft amplitudes.

\subsection{Analytical continuation of the master integrals}
Recall the three rescaling invariants of the topologies,
\begin{align}
x\equiv{s_{1 4}s_{2 3} \over (s_{1 2}+s_{1 3})(s_{2 4 }+s_{3 4})}\,, \quad t\equiv\frac{s_{1 2}}{s_{1 2}+s_{1 3}}\,, \quad s\equiv \frac{s_{2 4}}{s_{2 4}+s_{3 4}}\,.
\end{align}
Upon reversing a single Wilson line, the phase factor cancels between the numerator and the denominator of each invariant, so no continuation is actually needed. Upon reversing two Wilson lines at the same time, we have
\begin{align}
x\rightarrow |x| e^{-2\pi i}\,,\quad\quad t\rightarrow t\,,\quad\quad s\rightarrow s\,.
\end{align}
The strategy is then to expand around the phase-space region $q_2\parallel q_3$, or equivalently $x\to0$. For example\footnote{The ellipses denote terms that are non-singular in the limit $x \rightarrow 0$.},
\begin{align}
\lim_{x\rightarrow 0}\text{Li}_{2}(1-x)&=\ln(x)\sum_n \frac{x^n}{n}+\dots=-\ln(x) \ln(1-x)+\dots\,, \cr \lim_{x\rightarrow 0}\text{Li}_{3}(1-x)&=-\frac{1}{2}\ln(x) \ln^2(1-x)+\dots\,.
\end{align}
Such functions fail to have Laurent expansions around the origin $x \rightarrow 0$ due to the collinear enhancement proportional to $\ln(x)$, which results in the discontinuities
\begin{align}
\label{eq:acRules}
{\rm{disc.}}\left[\ln(x)\right]&=-2 \pi i\,, \cr {\rm{disc.}}\left[\text{Li}_{2}(1-x)\right]&=2\pi i\, \ln(1-x)\,, \cr {\rm{disc.}}\left[\text{Li}_{3}(1-x)\right]&=\pi i \,\ln^2(1-x)\,.
\end{align}
In section~\ref{sec:SDE} we obtained the Euclidean results for the master integrals. The analytic continuation to the physical region where both Wilson lines are outgoing (for configurations like $\gamma^*\rightarrow$ 4 partons) is trivially obtained via
\begin{align}
(-s_{23}-i\eta)^{-\epsilon}\rightarrow \abs{s_{23}}^{-\epsilon} e^{i\pi\epsilon}\,.
\end{align}
If some of the Wilson lines correspond to initial states, the rules are those of Eq.~(\ref{eq:acRules}), and we conclude:
\begin{itemize}
\item[a)] There is no distinction between the configuration with both Wilson lines outgoing and that with only one Wilson line outgoing.
\item[b)] If both Wilson lines are ingoing, only the box $I_{01111}$ and the pentagon $I_{11111}$ develop nontrivial analytic-continuation behavior.
\end{itemize}
For the soft box $I_{01111}$ we have the following result with both Wilson lines ingoing:
\begin{align}
I^{\rm{ingoing}}_{01111}&=\frac{\Gamma(1-2\epsilon)}{\Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}\frac{4}{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\left(1- \frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})} \right)^{-1-\epsilon} \cr\, &\times\bigg\{ \frac{2 \Gamma^3(1-\epsilon)\Gamma^2(1+\epsilon)}{\epsilon^2\Gamma(1-2\epsilon)} \left(\frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\right)^\epsilon e^{- 2\pi i\epsilon}-\frac{2 \Gamma^2(1-\epsilon)\Gamma(1+\epsilon)}{\epsilon^2\Gamma(1-2\epsilon)} \cr\, &\times\, _2F_1\left(-\epsilon,-\epsilon,1-\epsilon,\frac{s_{1 4} s_{2 3} }{(s_{1 2}+s_{1 3})(s_{2 4}+s_{3 4})}\right) \bigg\} \,.
\end{align}
For the pentagon, the corresponding result with both Wilson lines ingoing can be obtained through the replacements in Eq.~(\ref{eq:five-point})
\begin{align}
\ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})s_{34}}\rightarrow&\ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})s_{34}}-2\pi i\,, \cr\, \ln\frac{s_{23}s_{14}}{s_{12}(s_{24}+s_{34})}\rightarrow&\ln\frac{s_{23}s_{14}}{s_{12}(s_{24}+s_{34})}-2\pi i\,, \cr\, \ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}\rightarrow&\ln\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}-2\pi i\,, \cr\, \text{Li}_2\left(1-\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}\right)\rightarrow&\text{Li}_2\left(1-\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}\right) \cr\, +&2\pi i \ln\left(1-\frac{s_{23}s_{14}}{(s_{12}+s_{13})(s_{24}+s_{34})}\right)\,. \cr\,
\end{align}

\subsection{Drell-Yan vs. DIS vs. $e^+e^-$ TMD soft functions}
Another application of our analytic continuation rules is a direct technical proof of the statement in~\cite{Moult:2018jzp} that the Drell-Yan, DIS, and $e^+e^-$ soft functions are all equal up to three loops; similar considerations and statements can be found in~\cite{Kang:2015moa,Rothstein:2016bsq}. Indeed, we find that
\begin{align}
\mathcal{M}_{a_2 a_3}^{\rm{DIS}}-&\mathcal{M}_{a_2 a_3}^{\rm{e^+e^-}}=0\,.
\end{align}
\begin{align}
\mathcal{M}_{a_2 a_3}^{\rm{DY}}(2^+ ,3^+)-&\mathcal{M}_{a_2 a_3}^{\rm{DIS}}(2^+ ,3^+)= 4 { \left(\frac{s_{2 3}}{\mu^2}\right)^{-\epsilon}}\sum_{i \neq j}^2 f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} {\spa i.j \over \spa i.2 \spa 2.3 \spa3.j} \cr\, \times& (- 2 \pi i ) \bigg[ \frac{1}{\epsilon}+\ln \frac{{s_{ij}s_{2 3} \over (s_{i2}+s_{i3})(s_{2j}+s_{3j})}}{1-{s_{ij}s_{2 3} \over (s_{i2}+s_{i3})(s_{2j}+s_{3j})}} \bigg]+\cal O(\epsilon)\,,
\end{align}
\begin{align}
\mathcal{M}_{a_2 a_3}^{\rm{DY}}(2^+ ,3^-)-&\mathcal{M}_{a_2 a_3}^{\rm{DIS}}(2^+ ,3^-)= -4 { \left(\frac{s_{2 3}}{\mu^2}\right)^{-\epsilon}}\sum_{i \neq j}^2 f_{b c a_2}f_{b d a_3} \bf{T}^c_{i}\bf{T}^d_{j} \times(- 2 \pi i ) \cr\, \times&\bigg[ { {1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,}\left({1\over s_{i2}+s_{i3} }{\spb{i}.j \spa i.3^3 \over \spa{2}.3\spa i.2}+{1\over s_{2j}+s_{3j}}{\spa{i}.j\spb2.j^3 \over \spb2.3 \spb3.j}\right)} \cr\, \times&\left( -\frac{1}{\epsilon}+\ln{s_{ij} s_{2 3}\over s_{i2} s_{3j}} \right) \cr\, +&{\spa i.j\over \spa i.2 \spa2.j }{\spb i.j\over \spb i.3 \spb3.j } \times\bigg( \ln{s_{ij} s_{2 3}\over s_{i2} s_{3j}}\bigg) \cr\, +&{ 1\over s_{i2}+s_{i3}}\Bigg({ -1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,} { \spb{i}.j \spa i.3^3 \over \spa{2}.3\spa i.2} + { -1 \over \spb{i}.2\spa2.j+ \spb i.3 \spa3.j\,} { \spa{i}.j \spb i.2^3 \over \spb{2}.3\spb i.3} \Bigg) \cr\, \times&\ln{s_{ij} s_{2 3}\over (s_{i2} +s_{i3} )s_{3j} } \cr\, +&{ 1\over s_{2j}+s_{3j}}\Bigg({ -1 \over \spa{i}.2\spb2.j+ \spa i.3 \spb3.j\,} { \spa{i}.j \spb2.j^3 \over \spb{2}.3\spb3.j} +{ -1 \over \spb{i}.2\spa2.j+ \spb i.3 \spa3.j\,} { \spb{i}.j \spa3.j^3 \over \spa{2}.3\spa2.j}\Bigg) \cr\, \times& \ln{s_{ij} s_{2 3}\over (s_{2j} +s_{3j} )s_{i 2} } \bigg] +\cal O(\epsilon)\,.
\end{align}
The differences are proportional to $2 \pi i$ and cancel against their complex conjugates; thus the virtual-real-real correction is universal. Since the triple-real contribution does not develop crossing issues, and since the existing results for the double-virtual-real~\cite{Duhr:2013msa,Li:2013lsa} and squared virtual-real~\cite{Catani:2000pi} contributions clearly show universality through their global phase factors, we conclude that the three-loop soft function is universal.

\section{Conclusions}
In this work we presented compact expressions for the amplitudes of double soft gluon and double soft quark emission with generic color structure. Since gauge redundancy is inherent in the color-space formalism, we found that the amplitudes are best described in helicity variables, which minimize this redundancy~\cite{Dixon:1996wi}. In general, the one-loop soft amplitude can be cast into an abelian contribution (for gluons) plus a non-abelian color-correlated emission, where the latter couples to two hard legs at a time. We took the strong-ordered limit of the soft amplitude and obtained a factorized expression in terms of two successive eikonal emissions. We also discussed the analytic-continuation properties of the soft amplitude and found non-vanishing discontinuities when crossing into initial states. However, the discontinuities turn out to be purely imaginary, which shows that the squared amplitude is invariant under crossing. Phenomenologically, this demonstrates the equivalence of the DY, DIS, and $e^+e^-$ TMD soft functions up to three loops, which was assumed by default in~\cite{Li:2020bub}.
Combined with the known results for the two-loop single soft amplitude coupling to up to three hard particles~\cite{Dixon:2019lnw}, and for the tree-level triple soft emission, whose square gives a quadruple correlation~\cite{Catani:2019nqv}, the picture of NNNLO soft correlations with multiple hard legs becomes clear. This opens the way to the calculation of the TMD soft function with multiple soft Wilson lines and to the investigation of factorization violation~\cite{Gao:2019ojf}.

\acknowledgments
I am grateful to Hua Xing Zhu for all the support and guidance during this project. I also thank Yi Bei~Li and Kai~Yan for useful discussions. The Feynman diagrams were drawn with \texttt{feynMF}. This work was supported in part by the NSFC under contract No.~11975200.
\section{Introduction} \label{sec:intro}
Cataclysmic variables (CVs) are binary star systems composed of a white dwarf primary accreting matter from a main-sequence or evolved secondary star \citep{Patterson84,Warner95,Kalomeni16}. Low-mass X-ray binaries (LMXBs) are analogous systems in which the primary is either a black hole or a neutron star instead of a white dwarf \citep{Remillard06}. There are only about 18 dynamically confirmed black hole X-ray binaries known \citep{Casares14}. Finding and modelling CVs and LMXBs allows us to better understand the formation of compact objects and to test binary evolution models \citep{Jonker04,Repetto15}.

The \textit{Chandra} Galactic Bulge Survey (GBS) is tasked with finding more quiescent low-mass X-ray binaries (qLMXBs). Towards this goal the GBS covered a total of 12 deg$^2$ near the bulge of the Galaxy and found 1640 X-ray sources \citep{Jonker11,Jonker14}. Subsequent studies have identified counterparts to these sources at multiple wavelengths, from radio \citep{Maccarone12} and near-infrared \citep{Greiss14} to optical \citep{Hynes12, Udalski12, Britt14, Wevers16, Wevers17}. Some of these counterparts have been deemed likely accreting binaries, motivating further photometric and spectroscopic follow-up \citep{Ratti13, Wevers16_CVn, Johnson17}. In this work we focus on one of these objects, CXOGBS J175553.2-281633 (hereafter CX137).

The optical counterpart to CX137 was first identified by \cite{Udalski12} and classified as an eclipsing binary with a spotted donor star and an orbital period of $10.345$ hr. Subsequent spectroscopic follow-up by \cite{Torres14} revealed broad H$\alpha$ emission and an orbital period consistent with that of \cite{Udalski12}. Based on the properties of the H$\alpha$ emission line and an X-ray luminosity of \mbox{$L_x > 5.8\times10^{30}$ erg s$^{-1}$}, \cite{Torres14} classified the source as a potential low accretion-rate CV or qLMXB with a G/K-type secondary, supporting the ellipsoidal light curve interpretation.

In this work we build on the analysis of \cite{Torres14} by including two extra years of I-band photometry, in which the sinusoidal shape of the light curve can be explained by ellipsoidal modulations. Additionally, we see long-term variations in the shape of the light curve; these are consistent with either an accretion disk that changes in shape and luminosity or a spotted secondary star. We aim to settle the true nature of the object by performing a dynamical study, and find that CX137 is a CV with a K7 secondary and an orbital period of \mbox{$P = 10.34488 \pm 0.00006$ h}, in agreement with previous studies. The source shows no outbursts in our seven years of optical photometry.

This paper is organized as follows: in \S\ref{sec:data} we describe the OGLE photometry and the optical spectroscopy obtained for this study. In \S\ref{sec:data_analysis} we provide an analysis of the data, in which we determine the orbital period, generate a radial velocity curve, and describe the spectral features. In \S\ref{sec:modeling} we present our light curve models, fitting routines, and resulting output parameters. We outline our discussion in \S\ref{sec:params} and our conclusions in \S\ref{sec:conclusions}. All quoted errors represent $1\sigma$ uncertainty, unless otherwise stated.
\section{Observations} \label{sec:data}

\subsection{Gaia} \label{subsec:gaia}
\textit{Gaia} provides precise coordinates for the optical counterpart of CX137 at \mbox{R.A.=${\rm 17^h55^m53^s.26}$}, \mbox{decl.=$-28^\circ16'33''.84$} (ICRS), in addition to proper motion components of \mbox{$\mu_{\rm R.A.} = 1.139\pm0.108$ mas yr$^{-1}$} and \mbox{$\mu_{\rm decl.}=-6.977\pm0.087$ mas yr$^{-1}$}. The parallax of the source was measured by \textit{Gaia} DR2 to be \mbox{$\pi = 1.116 \pm 0.069$ mas}, which corresponds to a distance of \mbox{$d = 879^{+59}_{-52}$ pc} \citep{Bailer18, Gaia18}.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{Both_Periods.pdf}
\caption{\textit{Top:} Optical photometry phased at the best orbital period of $P = 10.34488$ h. \textit{Bottom:} Full OGLE light curve, in which long-term periodic variations in luminosity are seen. The green dots are all the data, while the black dots are binned in phase bins of 0.01 for the phased light curve and in bins of 20 days for the full light curve. We show a tentative period of 796 days as a damped sine curve fit to the binned data. The error bars are approximately equal to the size of the data points and are not plotted for clarity.
\label{fig:phased}}
\end{figure}

\subsection{OGLE Photometry} \label{subsec:photometry}
The optical counterpart of CX137 was observed during the fourth phase of the Optical Gravitational Lensing Experiment (OGLE) project with the 1.3m Warsaw telescope at Las Campanas Observatory \citep{Udalski15}. OGLE provided us with 7 years of \mbox{I-band} photometry, from 2010 to 2016. The typical cadence of these observations ranges from 20 minutes to nominally once a night, with exposure times of 100 s. There is a three-month period in each year when the source is not visible. The photometry was obtained using the difference image analysis method outlined in \cite{Wozniak00}. The individual photometry has typical errors of $<0.01$ mag; see \mbox{Table~\ref{tab:photometry}} for a log of observations.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{spectra.pdf}
\caption{Average continuum-normalized blue and red arm ISIS spectrum for CX137 in the rest frame of the secondary. The spectra show Balmer lines in emission, associated with the accretion disk. Strong stellar features are indicated. The interstellar Na\,D is also marked. $\oplus$ denotes prominent telluric features not associated with the binary.
\label{fig:spectra}}
\end{figure}

\subsection{Optical Spectroscopy} \label{subsec:spectra}
We observed CX137 with the Intermediate dispersion Spectrograph and Imaging System (ISIS; \citealt{Jorden90}) on the 4.2 m William Herschel Telescope (WHT) at the Roque de los Muchachos Observatory on La Palma, Spain, during five observing runs between June and August 2017. The ISIS spectrograph has a dichroic that splits the light into a red and a blue arm, allowing a wide spectral range to be observed simultaneously. For the blue arm we used gratings R158, R300, and R600, and for the red arm gratings R158 and R600. We also obtained one high resolution spectrum with the Inamori-Magellan Areal Camera and Spectrograph (IMACS; \citealt{dressler11}) on the Magellan Baade 6.5 m Telescope at Las Campanas Observatory with the R1200 grating. We provide a log of the spectroscopic observations and the specifications of each grating in Table~\ref{tab:spectroscopy}.
The spectral resolutions provided in the table were approximated by measuring the width of spectroscopic lines in an arc lamp spectrum taken with each grating. We reduced the spectra using standard IRAF\footnote{\label{IRAF}IRAF is written and supported by the National Optical Astronomy Observatories, operated by the Association of Universities for Research in Astronomy, Inc. under cooperative agreement with the National Science Foundation.} routines. The data were bias-subtracted and flat-fielded, the sky emission was subtracted, and the spectra were optimally extracted and wavelength-calibrated using an arc lamp exposure taken after each spectrum. We determine the zero-point of the wavelength calibration of our spectra by measuring the positions of bright sky lines in each spectrum, and apply the corresponding shift to each individual spectrum such that the wavelengths of the sky lines match between all the spectra. For the ISIS spectroscopy, we analyzed the data taken in the red and blue arms using the same procedure, but treat them as individual spectra.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{RVfit.pdf}
\caption{Heliocentric radial velocity curve measured from both the blue and red arms of the ISIS spectrograph. The best-fit sine curve is shown in black. The dashed horizontal line marks the zero point of the sinusoid. The best-fit values of the systemic radial velocity and the radial velocity semi-amplitude are \mbox{$\gamma = 54 \pm 4$ km s$^{-1}$} and K$_2 = 161 \pm 6$ km s$^{-1}$, respectively.
\label{fig:rvfit}}
\end{figure}

\subsection{Spectral Templates} \label{subsec:templates}
Throughout this work we make use of spectral templates from the X-shooter library \citep{Chen11}. We selected spectra of 71 M stars, 33 K stars, and 23 G stars of varying luminosity classes and evolutionary stages. All templates were taken with a $0.''7$ slit with the VIS arm of X-shooter and a nominal resolution of \mbox{R $\sim 10,000$}, equivalent to $\sim30$ km s$^{-1}$ at a wavelength of 8600\AA.

All the spectra of CX137 and the templates were subsequently processed using {\tt Molly}\footnote{{\tt Molly} is a code developed and maintained by T. Marsh and it is available at \url{http://deneb.astro.warwick.ac.uk/phsaap/software/molly/html/INDEX.html}\label{foot:molly}}. First, we apply a heliocentric velocity correction to all spectra using the \textit{hfix} task. We then use \textit{vbin} to bin all the data to a uniform velocity scale, so that the dispersion of the templates matches that of the CX137 spectra. We then normalize each spectrum by dividing it by a fit to the star's continuum. To estimate the continuum we fit a second-order polynomial to each spectrum, masking out regions with strong absorption lines or telluric bands.

\begin{deluxetable}{cc}
\tablecaption{OGLE I-band photometry. All exposure times were 100 s.
\label{tab:photometry}}
\tablehead{\colhead{\hspace{0.5cm}UT Date Range\hspace{0.5cm}} & \colhead{\hspace{0.5cm}Exposures\hspace{0.5cm}}}
\startdata
Mar 5 - Nov 4 2010 & 1685 \\
Feb 3 - Nov 9 2011 & 2042 \\
Feb 3 - Nov 11 2012 & 936 \\
Feb 3 - Oct 30 2013 & 868 \\
Feb 1 - Oct 26 2014 & 848 \\
Feb 7 - Nov 7 2015 & 804 \\
Feb 6 - Oct 30 2016 & 1641 \\
\enddata
\end{deluxetable}

\section{Data Analysis} \label{sec:data_analysis}

\begin{deluxetable*}{ccccccccc}
\tablecaption{Optical Spectroscopy of CX137 \label{tab:spectroscopy}}
\tablewidth{0pt}
\tablehead{ \colhead{UT Date} & \colhead{Exposure} & \colhead{Telescope +} & \colhead{Grating} & \colhead{Dispersion} & \colhead{Resolution} & \colhead{Resolution} & \colhead{Slit width} & \colhead{Wavelength range} \\ & \colhead{(s)} & \colhead{Instrument} & (lines/mm) & (\AA\ pixel$^{-1}$) & (\AA) & (km s$^{-1}$) & (arcsec) & (\AA)}
\startdata
2017 Jun 24 & $6 \times 600$ & WHT + ISIS-red & R158 & 1.81 & 7.70 & 307 & 1.0" & 5500 - 8100 \\
2017 Jun 24 & $6 \times 600$ & WHT + ISIS-blue & R158 & 1.62 & 7.81 & 520 & 1.0" & 3500 - 5400 \\
2017 Jul 11 & $6 \times 600$ & WHT + ISIS-red & R158 & 1.81 & 7.70 & 307 & 1.0" & 5500 - 8100 \\
2017 Jul 11 & $6 \times 600$ & WHT + ISIS-blue & R300 & 0.86 & 4.10 & 273 & 1.0" & 3500 - 5400 \\
2017 Jul 21 & $15 \times 900$ & WHT + ISIS-red & R600 & 0.49 & 1.81 & 72 & 1.0" & 5500 - 8800 \\
2017 Jul 21 & $15 \times 900$ & WHT + ISIS-blue & R600 & 0.45 & 2.02 & 134 & 1.0" & 3500 - 5400 \\
2017 Aug 27 & $3 \times 900$ & WHT + ISIS-red & R600 & 0.49 & 1.81 & 72 & 1.0" & 5500 - 7150 \\
2017 Aug 27 & $3 \times 900$ & WHT + ISIS-blue & R600 & 0.45 & 2.02 & 134 & 1.0" & 3910 - 5400 \\
2017 Aug 29 & $4 \times 900$ & WHT + ISIS-red & R600 & 0.49 & 1.81 & 72 & 1.0" & 5500 - 7150 \\
2017 Aug 29 & $4 \times 900$ & WHT + ISIS-blue & R600 & 0.45 & 2.02 & 134 & 1.0" & 3910 - 5400 \\
2017 Oct 8 & $3 \times 900$ & Magellan + IMACS & R1200 & 0.376 & 1.54 & 54 & 0.9" & 8500 - 8900 \\
\enddata
\tablecomments{The spectral resolution is measured at 4500\AA\ for the ISIS-blue arm, 7500\AA\ for the ISIS-red arm, and 8600\AA\ for IMACS. The wavelength range represents only the high quality portion of the spectra used for our analysis.}
\end{deluxetable*}

\subsection{Photometric Periodicities} \label{sec:period}
We use all 7 years of OGLE I-band photometry to determine the orbital period of the binary. For this we employ the {\tt gatspy} Python package \citep{VanderPlas15}, which provides an implementation of the Lomb-Scargle periodogram to find periodicities in the photometric data. The strongest peak of the periodogram is at a period of $P = 5.17244$ h. Because ellipsoidal modulations produce two maxima and two minima per orbit, the periodogram peaks at half of the true orbital period. Indeed, when the data are phase-folded at this period we see large scatter in the light curve, because the maxima at phases 0.25 and 0.75 have different strengths (see Figure~\ref{fig:phased}). Figure~\ref{fig:phased} shows the light curve phase-folded at twice the period of the strongest peak, $P = 10.34488 \pm 0.00006$ h, consistent with the period found in \cite{Udalski12} and \cite{Torres14}.

Motivated by the fact that spin periods in the range of $0.1-10$\% of the orbital period have previously been observed in magnetic CVs \citep{Norton04}, we search for periodicities in the 100--20,000 s range, with null results. We detect no measurable change in the orbital period over our 7 year baseline. On the other hand, we detect a possible long-term trend at a period of $\sim 796$ days.
Since the full span of the light curve is only 3 times this period, more data are required to confirm whether this is a real periodicity or just a temporary artifact. The data phase-folded at this period are shown in the bottom panel of Figure~\ref{fig:phased}.

\subsection{Spectral Type of the Secondary} \label{sec:spectra}
Figure~\ref{fig:spectra} shows the blue and red normalized ISIS spectra of CX137 averaged in the rest frame of the secondary star. The spectra are mostly dominated by absorption lines from the secondary, with additional Balmer emission lines from an accretion disk. We detect the Mg triplet absorption lines from the secondary, and interstellar Na D lines. We see a weak contribution from TiO bands of the secondary in the $\sim 6100 - 6300$ \AA\ range, and no evidence for helium emission lines, which are common in CVs. This might be due to the lines being veiled by a large flux contribution from the secondary. We can set upper limits on the absolute equivalent widths of $< 1.6$\AA\ for \ion{He}{1} $\lambda7065$ and $<1.2$\AA\ for \ion{He}{2} $\lambda4686$.

To estimate the temperature of the secondary star we first average all the CX137 ISIS data taken with the R600 grating to use as a high S/N reference. We compare this spectrum to those of the X-shooter templates described in Section~\ref{subsec:templates}. First, we correct each template spectrum for the systemic velocity of its star and broaden it by convolving it with a Gaussian function to match the spectral resolution of ISIS. We subtract each template from the normalized CX137 spectrum in the $5580 - 6150$ \AA\ wavelength range (masking out telluric lines and emission lines not associated with the secondary), and search for the template star that produces the lowest residuals, allowing for a varying multiplicative factor $f$, which represents the fractional contribution of the template star to the total flux. We find that the spectrum of CX137 best matches that of HD79349, a K7IV star with a temperature of $3850 \pm 30$ K and a systemic velocity of \mbox{$47.12 \pm 0.15$ km s$^{-1}$} \citep{Arentsen19, Gaia18}. We find a best-fit average optimum factor of $f = 0.52 \pm 0.06$.

\subsection{Radial Velocities} \label{subsec:RV}
To measure the radial velocity of the secondary in each spectrum we use the \textit{xcor} task in {\tt Molly} to cross-correlate the CX137 spectra with the spectrum of the K7IV star HD79349, the template star that best matches the spectra of CX137 (described in Section~\ref{sec:spectra}). The actual choice of template star does not have a noticeable effect on the measured radial velocities. We correct the template star's spectrum for its systemic velocity and broaden it by convolving it with a Gaussian function to match the spectral resolution of the CX137 spectra. We consider the wavelength range listed in Table~\ref{tab:spectroscopy} for each CX137 spectrum, masking out telluric features and emission lines not associated with the secondary before cross-correlating them. We calculate the radial velocities from both the red and blue arms of the ISIS spectrograph independently. The resulting radial velocity curve is shown in Figure~\ref{fig:rvfit}, with the individual measurements provided in Table~A.\ref{tab:radial_velocities}. We note that the radial velocities measured near phase $0.25$ have a large scatter due to noisy spectra taken in poor weather conditions.
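The cross-correlation step itself is conceptually simple: both spectra are rebinned onto a common logarithmic wavelength grid, on which a uniform pixel shift corresponds to a uniform velocity shift, and the peak of the cross-correlation function gives the radial velocity. A minimal {\tt numpy} sketch of this idea (an illustration with our own function and variable names, not the {\tt Molly} implementation) is
\begin{verbatim}
# Sketch of a template cross-correlation radial-velocity measurement:
# rebin object and template onto a common log-lambda grid, cross-correlate,
# and convert the peak lag into a velocity. Illustrative only.
import numpy as np

C_KMS = 299792.458  # speed of light in km/s

def xcorr_rv(wave_obj, flux_obj, wave_tpl, flux_tpl, n_pix=4096):
    # Common log-wavelength grid covering the overlap region
    lo = max(wave_obj.min(), wave_tpl.min())
    hi = min(wave_obj.max(), wave_tpl.max())
    loglam = np.linspace(np.log(lo), np.log(hi), n_pix)
    dv = (loglam[1] - loglam[0]) * C_KMS      # km/s per pixel

    # Interpolate continuum-normalized fluxes onto the grid; subtract the
    # mean so the CCF is dominated by the absorption lines
    f_obj = np.interp(np.exp(loglam), wave_obj, flux_obj) - 1.0
    f_tpl = np.interp(np.exp(loglam), wave_tpl, flux_tpl) - 1.0
    f_obj -= f_obj.mean()
    f_tpl -= f_tpl.mean()

    ccf = np.correlate(f_obj, f_tpl, mode='full')
    lags = np.arange(-n_pix + 1, n_pix)

    # Parabolic interpolation around the CCF peak for sub-pixel precision
    # (assumes the peak is not at the very edge of the lag range)
    k = np.argmax(ccf)
    num = ccf[k - 1] - ccf[k + 1]
    den = ccf[k - 1] - 2.0 * ccf[k] + ccf[k + 1]
    shift = lags[k] + 0.5 * num / den
    return shift * dv                          # radial velocity in km/s
\end{verbatim}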
We model the radial velocity curve with a sine function of the form:
\begin{equation}\label{eq:rv}
v = \gamma + K_2\sin \left[2\pi \left( \frac{t}{P} + \phi \right) \right],
\end{equation}
\noindent where we fix the orbital period to $P = 10.34488$ h, as determined in section~\ref{sec:period}. We fit for the radial velocity semi-amplitude $K_2$, the systemic velocity $\gamma$, and a phase offset $\phi$, where $\phi = 0$ corresponds to photometric phase 0, the inferior conjunction of the secondary star. We find a best-fit model with \mbox{K$_2 = 161 \pm 6$ km s$^{-1}$}, \mbox{$\gamma = 54 \pm 4$ km s$^{-1}$}, and $\phi = 0.00 \pm 0.02$, with a corresponding $\chi^2 = 141$ for 64 degrees of freedom. The quoted uncertainties are for a model in which we scale the error bars of the individual radial velocity measurements so as to obtain a reduced $\chi^2 = 1$ (e.g., \citealt{Marsh94}). The error bars shown in Figure~\ref{fig:rvfit} are the true measured ones, not the scaled values.

\subsection{Estimation of Rotational Broadening} \label{subsec:RB}
To estimate the rotational broadening of the secondary star we compare the set of spectral templates described in Section~\ref{subsec:templates} to the high resolution IMACS spectrum of CX137 taken near photometric orbital phase 0. We normalize the IMACS and X-shooter spectra by dividing them by a second-order polynomial fit to their respective continua (masking out absorption features) in the $8500-8900$\AA\ range. We then degrade the resolution of the X-shooter templates to match that of the IMACS spectrum by convolving them with a Gaussian function, and broaden the templates by a range of velocities from $20-200$ km s$^{-1}$ in steps of 1 km s$^{-1}$ using the \textit{rbroad} task in {\tt Molly}. This task broadens the input spectrum through convolution with the rotational profile of \cite{Gray92}, for which we adopt a limb darkening coefficient of 0.75. Finally, we subtract the broadened templates from the CX137 spectrum, following the same procedure as described in \S\ref{sec:spectra}. Through $\chi^2$ minimization we find a best fit to the rotational velocity of the secondary star in CX137 of \mbox{$v \sin(i) = 101 \pm 3$ km s$^{-1}$}. We find the match to be comparably good for K4III, K3.5III, K2III, and K7IV template stars; G and M stars produce statistically worse fits. The individual $v\sin(i)$ measurements are shown in Table~\ref{tab:templates}.

From the \mbox{$v \sin(i) = 101 \pm 3$ km s$^{-1}$} estimate and the velocity semi-amplitude \mbox{$K_2 = 161 \pm 6$ km s$^{-1}$} calculated in section~\ref{subsec:RV}, we estimate a mass ratio of \mbox{$q = 0.79 \pm 0.06$} using equation~\ref{eq:vsiniq}, which holds for a Roche-lobe-filling secondary that co-rotates with the binary orbit \citep{Wade88}.
\begin{equation}\label{eq:vsiniq}
\frac{v \sin(i)}{K_2} = 0.462 [(1 + q)^2q]^{1/3}
\end{equation}
\begin{figure*}
\centering
\includegraphics[width=0.85\textwidth]{final_models.jpg}
\caption{\textit{Left}: Optical light curves phased at the photometric ephemeris for all eight epochs of observations of CX137. We include the best-fit model described in Section~\ref{sec:spots} in black, where the luminosity of each epoch is divided by its average luminosity. Each panel shows a different epoch in order of time; error bars are not plotted since they are smaller than the marker size. \textit{Right}: Fractional residuals of the best-fit model to the light curve.
\label{fig:model}}
\end{figure*}

\section{Light Curve Modeling} \label{sec:modeling}
\begin{deluxetable}{cccc}
\tablecaption{Rotational broadening of CX137 for different templates \label{tab:templates}}
\tablewidth{0pt}
\tablehead{ \colhead{Star} & \colhead{Spectral Type} & \colhead{v$\sin(i)$} & \colhead{$f$} \\ & & \colhead{km s$^{-1}$} & }
\startdata
HD37763 & K2III & $99 \pm 3$ & $0.48 \pm 0.06$ \\
HD79349 & K7IV & $100 \pm 3$ & $0.41 \pm 0.03$ \\
BS4432 & K3.5III & $100 \pm 3$ & $0.52 \pm 0.04$ \\
HD74088 & K4III & $104 \pm 3$ & $0.50 \pm 0.04$ \\
\enddata
\tablecomments{$f$ is the corresponding optimum factor measured in the $8500-8900$\AA\ range.}
\end{deluxetable}

We proceed to model the optical light curve of CX137 using {\tt XRbinary}\footnote{A full description of {\tt XRbinary} can be found at \url{http://www.as.utexas.edu/~elr/Robinson/XRbinary.pdf}}, a light curve synthesis code developed by E.L. Robinson. This code assumes a binary system composed of a compact primary and a co-rotating secondary star that fills its Roche lobe and transfers mass via an accretion disk. The code models the tidal distortion of the secondary (responsible for the ellipsoidal modulations) and accounts for irradiation of the surface of the secondary by the bright accretion disk. The accretion disk is assumed to be optically thick and to emit as a multi-temperature blackbody. The disk's temperature profile as a function of radius is given by an equation of the form $T^4 \propto R^{-3} \left(1 - (R_{\rm in} / R)^{0.5} \right)$, where $R_{\rm in}$ is the inner disk radius. The precise inner radius of the disk has a negligible effect on the output light curve.

In order to account for the observed asymmetries in the light curve, we model the photometry of CX137 with three different models: (i) a model with a Roche-lobe-filling secondary and an accretion disk that is allowed to vary in luminosity and eccentricity; (ii) a similar model, but with a circular accretion disk whose edge can have a hot and a cool side; and (iii) a model with a circular accretion disk and an edge of uniform temperature, but with two spots on the surface of the secondary. For all models we fit the light curve using the {\tt emcee} MCMC sampler \citep{Foreman13}.

The relevant parameters of the model are: the inclination of the system, $i$; an orbital phase shift of the photometric $T_0$ with respect to the spectroscopic $T_0$, $\phi$; the temperature of the secondary star, $T_2$; the temperature of the edge of the accretion disk, $T_E$; the mass ratio, \mbox{$q = M_2 / M_1$}; the argument of periastron of the disk, $\omega_D$; the outer disk radius, $R_D$; the disk luminosity, $L_D$; the height of the accretion disk, $H_D$; the eccentricity of the accretion disk, $e_D$; the temperature ratio between the hot and cool sides of the disk edge, $T_h$; the width of the hot side of the disk edge, $W_h$; the location of the center of the hot edge of the disk, $\theta_h$; the polar coordinates of the first and second spots on the surface of the secondary, $\phi_{S1}$, $\theta_{S1}$, $\phi_{S2}$, and $\theta_{S2}$; the ratios of the spot temperatures to the secondary temperature, $T_{S1}$ and $T_{S2}$; and the sizes of the spots, $R_{S1}$ and $R_{S2}$. Only the relevant parameters are included in each of the three versions of the models described in the following section.
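For intuition, the disk temperature profile quoted above is easy to evaluate directly; a minimal sketch (with our own arbitrary normalization constant, which is not an {\tt XRbinary} parameter) is
\begin{verbatim}
# Multi-temperature blackbody disk: T^4 proportional to R^-3 (1 - sqrt(R_in/R)).
# T_star is an arbitrary normalization for illustration only.
import numpy as np

def disk_temperature(R, R_in, T_star=1.0):
    """Effective temperature at radius R (R and R_in in the same units)."""
    R = np.asarray(R, dtype=float)
    return T_star * (R**-3.0 * (1.0 - np.sqrt(R_in / R)))**0.25

# The profile vanishes at R = R_in, peaks at R = (49/36) R_in, and falls
# off as T ~ R^(-3/4) at large radii.
R = np.linspace(1.001, 20.0, 2000)       # radii in units of R_in
T = disk_temperature(R, R_in=1.0)
print(R[np.argmax(T)])                   # ~1.36, i.e. 49/36 R_in
\end{verbatim}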
In all models we fix the semi-amplitude of the radial velocity of the secondary to \mbox{$K_2 = 161$ km s$^{-1}$} (derived in \S\ref{subsec:RV}). We use wide, uniform, uninformative priors for $\phi$, $T_2$, $T_E$, $\omega_D$, $R_D$, $e_D$, $W_h$, and all the parameters pertaining to the spots. For $L_D$ we use a prior that is flat in log space, to allow for even sampling of the parameter space across orders of magnitude. For $i$ we use a prior that is flat in $\cos(i)$. We implement a Gaussian prior on the mass ratio centered at \mbox{$q = 0.79 \pm 0.06$} (derived in \S\ref{subsec:RB}). We restrict the accretion disk to be larger than the circularization radius $R_c = (1 + q) (0.5 - 0.227 \log(q))^4$ \citep{Frank02}. Finally, we apply a flat prior on the temperature of the secondary of \mbox{$T_2 = [3500, 4100]$}, based on the temperature of the template star that best matches the spectra of CX137 (derived in Section~\ref{sec:spectra}). The {\tt XRbinary} code interpolates the temperature from a table of Kurucz models, so the measurement of the temperature of the secondary is not very precise ($\pm125$ K); we report only the statistical model uncertainties in Table~\ref{tab:models}.

In order to account for the year-to-year variations in the light curve we separate the photometry into epochs, nominally one for every year of data. Dividing the photometry into epochs allows us to roughly track the evolution of the system, assuming the parameters of the system are approximately constant over the $\sim 8$ months of data each epoch spans (see Figure~\ref{fig:phased}). The shape of the light curve does remain fairly constant within each epoch, except for the 2016 epoch, which we therefore split into two epochs of equal time span, named 2016a and 2016b, each of which does have a stable light curve shape; this gives eight epochs in total. Subdividing the epochs further proved too computationally expensive.

Given that we know the orbital period of the binary is \mbox{$P = 10.34488$ h}, we can calculate the mass function according to the equation:
\begin{equation}\label{eq:massfunction}
\frac{M_1^3 \sin^3(i)}{(M_1 + M_2)^2} = \frac{P K_2^3}{2 \pi G},
\end{equation}
\noindent from which we are able to determine the primary and secondary masses of the system by reparametrizing in terms of $q$, $K_2$, and $i$ using equation~\ref{eq:vsiniq}.

\subsection{Model 1 : Variable Disk}\label{sec:eccentric}
For the first model we allow the accretion disk to vary in luminosity and eccentricity, but do not include any spots on the disk or the secondary. For all epochs we keep $i$, $\phi$, $T_2$, $K_2$, $T_E$, and $q$ constant, but allow the parameters that define the accretion disk, $\omega_D$, $R_D$, $L_D$, $e_D$, and $H_D$, to vary from epoch to epoch. The temperature of the edge of the disk $T_E$ could conceivably change from epoch to epoch, but since this parameter has little to no effect on the output light curve we constrain it to be the same at all epochs for computational purposes.

First, we fit each epoch of photometry independently; we then use the posterior distributions of those MCMC chains as starting positions when fitting all eight epochs simultaneously. We run the MCMC sampler for 1600 steps with 400 walkers and discard the first 50\% as burn-in. We test for convergence using the Gelman-Rubin statistic and find a potential scale reduction factor of $\hat{R} < 1.3$ \citep{Gelman92}. The most likely values are shown in Table \ref{tab:parameters}.
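As a concrete illustration of this reparametrization: with $K_2$ measured for the secondary, Eq.~(\ref{eq:massfunction}) gives $M_1 = (1+q)^2\,f(M)/\sin^3(i)$ and $M_2 = q\,M_1$, where $f(M) \equiv P K_2^3/(2\pi G)$. A short worked sketch (the inclination below is purely illustrative, not a fitted value) is
\begin{verbatim}
# Worked example: component masses from the mass function
#   f(M) = P K2^3 / (2 pi G) = M1^3 sin^3(i) / (M1 + M2)^2
# => M1 = f(M) (1 + q)^2 / sin^3(i),  M2 = q M1.
import numpy as np

G     = 6.674e-11           # m^3 kg^-1 s^-2
M_SUN = 1.989e30            # kg
P     = 10.34488 * 3600.0   # orbital period in seconds
K2    = 161.0e3             # donor semi-amplitude in m/s
q     = 0.79                # mass ratio M2 / M1
i_deg = 66.0                # illustrative inclination, NOT a fitted value

f_M = P * K2**3 / (2.0 * np.pi * G) / M_SUN   # mass function in M_sun
M1  = f_M * (1.0 + q)**2 / np.sin(np.radians(i_deg))**3
M2  = q * M1
print(f"f(M) = {f_M:.3f} Msun, M1 = {M1:.2f} Msun, M2 = {M2:.2f} Msun")
# -> f(M) = 0.186 Msun, M1 = 0.78 Msun, M2 = 0.62 Msun for these inputs
\end{verbatim}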
We see that the posterior distributions of all the relevant parameters are mostly Gaussian. In this model we interpret the changes in the light curve as being due to an accretion disk of varying shape and luminosity. The light curves are well modeled by a disk that becomes smaller and more eccentric from 2010 to 2013, and then returns to its original luminosity three years later, circularizing into a less eccentric disk. The best-fit parameters for each epoch are shown in Table~\ref{tab:parameters}.

For this model we find a best-fit secondary temperature of $4055\pm25$ K and a secondary mass of $M_2 = 0.62\pm0.04$ M$_\odot$, both consistent with a main-sequence K7 star \citep{Cox00} and in agreement with the best-fit template match found in Section~\ref{sec:spectra}. We note that some of the best-fit eccentricity measurements are as high as $e = 0.58$, which is not expected for a low accretion-rate CV with a small accretion disk of radius $\sim R_c$, or for the high mass ratio $q>0.7$. For this reason we proceed to model the light curve with a disk that is forced to be circular, but with an edge that has two zones of independent temperature.

\subsection{Model 2 : Asymmetrical Edge Brightness}\label{sec:asymmetrical}
In this model we fix the eccentricity of the disk $e$ and the argument of periastron $\omega_D$ to 0. In the previous variable-disk model we found the phase shift $\phi$ to be consistent with 0, with an uncertainty of just $0.002$; we therefore also fix this parameter to 0 for computational purposes. In this model we allow the disk edge to have two different temperatures. We model this in {\tt XRbinary} by using a ``spot'' that is allowed to cover an arbitrary width of the edge of the disk, effectively creating a hot and a cool zone on the outer edge of the disk. Physically, this could be produced by the impact of the gas stream on the disk, which causes the region near the impact hot spot to be hotter than the region on the opposite side of the disk.

In this model we fit for the temperature ratio between the hot and cool sides of the disk edge, $T_h$; the width of the hot side of the disk edge, $W_h$; and the location of the center of the hot edge of the disk, $\theta_h$; these last two measured in degrees. $\theta_h$ is defined such that $\theta_h = 0$ is the direction pointing from the primary straight away from the secondary, and $\theta_h = 90$ points towards the observer at phase 0.75, when the observer sees the side of the disk where we would expect an accretion hot spot to be. We fit the model in the same way as described in Section~\ref{sec:eccentric}, in this case running the MCMC for 2000 steps with 400 walkers, and again discarding the first 50\% as burn-in. The resulting model has $\hat{R} < 1.4$. The most likely values are shown in Table \ref{tab:parameters}.

In this model we see a correlation between $W_h$ and $T_h$, since a large hot zone can produce a light curve similar to that of a smaller zone with a higher temperature. These parameters are also correlated with the disk height $H_D$, which together with $W_h$ defines the effective area of the hot zone. The best-fit disk radius is $\sim 1.5 R_c$ throughout all epochs. Models predict that, for the best-fit parameters of CX137, a typical hot spot would cover $\lesssim 5$ deg of the edge of the disk \citep{Livio93}; observed spots cover the range of $14-40$ deg \citep{Warner95}.
We find that only a hot region that covers $\gtrsim 100$ deg of the edge of the disk is able to reproduce the light curve, much larger than what is expected for an impact hot spot. For this model we find a best fit for the secondary temperature of $3814\pm20$ K and a secondary mass of $M_2 = 0.68\pm0.03$ M$_\odot$. A cooler but more massive star is not necessarily consistent with the K7 secondary we expect from our spectral analysis in Section~\ref{sec:spectra}. Allowing the disk to be hotter effectively lowered the temperature of the secondary to the point where this model is not entirely self-consistent, and it is therefore disfavored. Nevertheless, we present this model since its best fit results can help towards a better understanding of the systematic uncertainties in measuring $M_1$, $M_2$, and $i$. Finally, we explore a third model in which the accretion disk is circular and the disk edge has one uniform temperature, but we include two spots on the surface of the secondary. \begin{longrotatetable} \begin{deluxetable*}{cccccccccc} \tablecaption{Best-fit model parameters for each epoch \label{tab:parameters}} \tablehead{\colhead{Parameter} & \colhead{Prior} & \colhead{2010} & \colhead{2011} & \colhead{2012} & \colhead{2013} & \colhead{2014} & \colhead{2015} & \colhead{2016a} & \colhead{2016b}} \startdata \rule{0pt}{4ex} \\ \multicolumn{3}{l}{Model 1 : Variable Disk} \\ \hline $\phi^\dagger$ & $[-0.1 - 0.1] $ & $0.001\pm 0.002$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $T_E [K]^\dagger$ & $[500 - 5000] $ & $2301^{+2000}_{-500}$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\log({\rm L_D / [erg / s]})$ & $[30 - 35] $ & $33.44\pm0.05$ & $33.68\pm0.11$ & $33.67\pm0.14$ & $34.06\pm0.3$ & $33.52\pm0.08$ & $33.46\pm0.04$ & $33.46\pm0.46$ & $33.57\pm0.14$ \\ $e_D$ & $[0.0 - 0.9] $ & $0.15\pm0.04$ & $0.49\pm0.03$ & $0.52\pm0.03$ & $0.58\pm0.05$ & $0.11\pm0.07$ & $0.21\pm0.03$ & $0.46\pm0.11$ & $0.08\pm0.05$ \\ $\omega_D (\rm deg)$ & $[0 - 360] $ & $12.21\pm3.58$ & $108.13\pm2.9$ & $96.21\pm4.49$ & $83.15\pm8.74$ & $17.85\pm19.97$ & $100.63\pm7.09$ & $73.14\pm4.67$ & $78.12\pm21.11$ \\ $R_D [R_c]$ & $[1.0 - 3.0] $ & $1.30\pm0.03$ & $1.03\pm0.03$ & $1.01\pm0.02$ & $1.02\pm0.03$ & $1.48\pm0.10$ & $1.99\pm0.09$ & $1.33\pm0.23$ & $2.10\pm0.21$ \\ $H_D [a]$ & $[0.005 - 0.1]$ & $0.009\pm0.002$ & $0.029\pm0.001$ & $0.028\pm0.001$ & $0.027\pm0.001$ & $0.019\pm0.003$ & $0.046\pm0.003$ & $0.041\pm0.005$ & $0.041\pm0.004$ \\ $f^*$ & & $0.52\pm0.02$ & $0.48\pm0.03$ & $0.52\pm0.03$ & $0.49\pm0.03$ & $0.49\pm0.02$ & $0.52\pm0.02$ & $0.55\pm0.03$ & $0.48\pm0.02$ \\ \rule{0pt}{4ex} \\ \multicolumn{3}{l}{Model 2 : Asymmetrical Edge Brightness} \\ \hline $\log({\rm L_D / [erg / s]})$ & $[30 - 35] $ & $32.94\pm0.09$ & $33.06\pm0.10$ & $33.19\pm0.20$ & $32.99\pm0.2$ & $33.16\pm0.08$ & $32.95\pm0.08$ & $32.90\pm0.25$ & $33.11\pm0.16$ \\ $R_D [R_c]$ & $[1.0 - 3.0] $ & $1.45\pm0.16$ & $1.57\pm0.08$ & $1.52\pm0.17$ & $1.52\pm0.16$ & $1.43\pm0.10$ & $1.56\pm0.14$ & $1.47\pm0.18$ & $1.36\pm0.13$ \\ $H_D [a]$ & $[0.005 - 0.1]$ & $0.027\pm0.002$ & $0.028\pm0.003$ & $0.030\pm0.003$ & $0.028\pm0.004$ & $0.026\pm0.007$ & $0.028\pm0.006$ & $0.029\pm0.006$ & $0.029\pm0.007$ \\ $T_E$ & $[500 - 5000] $ & $1664 \pm 180$ & $1669 \pm 104$ & $1772 \pm 97$ & $1762 \pm 113$ & $1647 \pm 203$ & $1706 \pm 182$ & $1671 \pm 150$ & $1474 \pm 132$ \\ $T_h$ & $[1.0 - 10.0] $ & $1.97\pm0.20$ & $2.23\pm0.16$ & $2.27\pm0.13$ & $2.33\pm0.17$ & $2.08\pm0.24$ & $2.03\pm0.20$ & $2.13\pm0.18$
& $2.04\pm0.10$ \\ $\theta_h [deg]$ & $[0.0 - 180] $ & $10.7\pm1.7$ & $102.3\pm0.7$ & $88.3\pm0.7$ & $61.7\pm0.5$ & $13.4\pm1.6$ & $150.0\pm1.1$ & $70.3\pm1.5$ & $100.9\pm3.1$ \\ $W_h [deg]$ & $[0.0 - 300 ] $ & $149.7\pm3.4$ & $177.2\pm10.6$ & $98.4\pm2.2$ & $168.3\pm0.004$ & $249.9\pm16.8$ & $188.3\pm3.8$ & $196.9\pm5.1$ & $119.0\pm19.0$ \\ $f^*$ & & $0.58\pm0.02$ & $0.53\pm0.02$ & $0.49\pm0.02$ & $0.55\pm0.02$ & $0.49\pm0.02$ & $0.58\pm0.02$ & $0.60\pm0.02$ & $0.52\pm0.02$ \\ \rule{0pt}{4ex} \\ \multicolumn{3}{l}{Model 3 : Spotted Secondary} \\ \hline $R_D [R_c^\dagger]$ & $[1.0 - 3.0] $ & $1.48\pm 0.01$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $H_D [a]^\dagger$ & $[0.005 - 0.1]$ & $0.030\pm0.001$ & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata & \nodata \\ $\log({\rm L_D / [erg / s]})$ & $[30 - 35] $ & $33.65\pm0.07$ & $32.74\pm0.01$ & $33.57\pm0.09$ & $33.49\pm0.08$ & $33.70\pm0.06$ & $33.22\pm0.07$ & $33.43\pm0.09$ & $33.47\pm0.07$ \\ $T_E$ & $[500 - 5000] $ & $2223 \pm 150$ & $4509 \pm 21 $ & $2959 \pm 122$ & $3007 \pm 113$ & $2358 \pm 185$ & $2363 \pm 137$ & $2239 \pm 201$ & $2392 \pm 165$ \\ $\theta_{S1}$ & $[0 - 250] $ & $170.4 \pm 2.5$ & $68.2 \pm 1.6$ & $153.1 \pm 3.1$ & $162.9 \pm 2.8$ & $163.1 \pm 5.4$ & $51.7 \pm 1.9$ & $146.4 \pm 2.2$ & $143.1 \pm 7.2$ \\ $\phi_{S1}$ & $[-110 - 110] $ & $-80.8 \pm 3.1$ & $-55.2 \pm 2.6$ & $-99.9 \pm 2.4$ & $-91.1 \pm 4.3$ & $-94.0 \pm 42.9$ & $-45.5 \pm 3.5$ & $-93.6 \pm 1.9$ & $-88.3 \pm 5.9$ \\ $R_{S1}$ & $[0.0 - 20.0] $ & $12.5 \pm 0.4$ & $13.7 \pm 1.4$ & $16.0 \pm 0.9$ & $16.9 \pm 0.7$ & $12.7 \pm 1.5$ & $14.3 \pm 0.9$ & $16.3 \pm 1.1$ & $9.5 \pm 2.8$ \\ $T_{S1}$ & $[0.1 - 1.0] $ & $0.5 \pm 0.2$ & $0.4 \pm 0.2$ & $0.4 \pm 0.1$ & $0.4 \pm 0.2$ & $0.5 \pm 0.2$ & $0.5 \pm 0.1$ & $0.3 \pm 0.2$ & $0.6 \pm 0.3$ \\ $\theta_{S2}$ & $[0 - 250] $ & $64.2 \pm 2.9$ & $88.2 \pm 0.3$ & $80.9 \pm 1.6$ & $88.5 \pm 1.5$ & $109.9 \pm 8.4$ & $62.1 \pm 1.2$ & $94.2 \pm 1.3$ & $99.3 \pm 2.8$ \\ $\phi_{S2}$ & $[70 - 290 ] $ & $209.4 \pm 5.3$ & $159.9 \pm 1.7$ & $116.8 \pm 2.4$ & $157.4 \pm 3.9$ & $159.4 \pm 18.6$ & $174.5 \pm 4.7$ & $157.4 \pm 3.8$ & $160.9 \pm 8.7$ \\ $R_{S2}$ & $[0.0 - 20.0] $ & $6.9 \pm 0.7$ & $12.9 \pm 0.2$ & $16.1 \pm 0.7$ & $14.4 \pm 0.2$ & $4.1 \pm 0.5$ & $10.7 \pm 0.5$ & $10.9 \pm 0.2$ & $6.8 \pm 0.4$ \\ $T_{S2}$ & $[0.1 - 1.0] $ & $0.5 \pm 0.2$ & $0.5 \pm 0.1$ & $0.4 \pm 0.1$ & $0.5 \pm 0.1$ & $0.4 \pm 0.2$ & $0.6 \pm 0.2$ & $0.4 \pm 0.2$ & $0.6 \pm 0.2$ \\ $f^*$ & & $0.43\pm0.02$ & $0.69\pm0.02$ & $0.46\pm0.02$ & $0.49\pm0.02$ & $0.41\pm0.02$ & $0.59\pm0.02$ & $0.51\pm0.02$ & $0.50\pm0.02$ \\ \enddata \tablecomments{Best model parameters and 1$\sigma$ error bars for the realizations shown in Figure~\ref{fig:model}. The parameters are: the orbital phase $\phi$; the disk luminosity L$_D$, eccentricity $e_D$, and argument of periastron $\omega_D$; the disk radius $R_D$ and edge height $H_D$, these last two in units of the semi-major axis $a$; and the temperature of the edge of the disk $T_E$. We also include the fractional contribution of the donor star to the total flux of the system $f$, calculated in the $V$-band from the posterior distributions of the other model parameters. The uncertainties are purely statistical error bars obtained from the posterior distribution of the MCMC. For most parameters we adopt a flat prior, except for the disk luminosity, which is flat in $\log({\rm L_D})$.
The disk radius $R_D$ is limited to be less than 0.9 times the Roche Lobe radius of the primary $R_1$.} \tablenotetext{*}{These parameters were not fit for, but were calculated using all the posterior distribution samples of the fitted parameters.} \tablenotetext{\dagger}{These parameters are kept constant throughout all epochs.} \end{deluxetable*} \end{longrotatetable}

\subsection{Model 3 : Spotted Secondary}\label{sec:spots} Finally, we consider a model in which the accretion disk is forced to be circular and to have an edge with a single temperature, fixing $W_h = 0$, $\theta_h = 0$, and $T_h = 1$. We place two spots on the surface of the secondary with polar coordinates $\phi_{S1}$, $\theta_{S1}$, $\phi_{S2}$, and $\theta_{S2}$, respectively, and fix $ -110 {\rm\ deg} < \phi_{S1} < 110 {\rm\ deg}$ and $ 70 {\rm\ deg} < \phi_{S2} < 290 {\rm\ deg}$. This prior effectively constrains spot 1 to be on the side of the secondary facing the observer during orbital phase 0.75, and spot 2 on the opposite side of the secondary, allowing for a small overlap region of $20$ deg. The spots have respective angular sizes $R_{S1}$ and $R_{S2}$, and temperature ratios with respect to the secondary $T_{S1}$ and $T_{S2}$, which are constrained to be $< 1$. We fit for the size and height of the disk as in the previous models, but for computational purposes we constrain them to be the same throughout all epochs. We find that the spotted secondary model requires two spots to be able to explain the fact that the brighter peak at phase 0.75 exhibits larger brightness variations than the dimmer peak at phase 0.25 (see the top panel of Figure~\ref{fig:phased}). We fit the model in the same way as the one described in Section~\ref{sec:eccentric}, running the MCMC with 2500 steps and 400 walkers, discarding the first 50\% as burn-in. The resulting model has $\hat{R} < 1.5$. The most likely values are shown in Table \ref{tab:parameters}. We caution that the parameters of the spots are very highly correlated: a small cold spot can produce the same light curve as a larger but hotter spot. Nevertheless, the relevant physical parameters such as the mass ratio and inclination appear Gaussian and mostly unaffected by the spot parameters. In this model we find that $\sim 3$\% of the surface of the secondary is covered by the two modeled spots. For reference, \cite{Watson06} find through Roche Lobe tomography that for the 9.9 hr orbital period CV AE Aqr $\sim 18$\% of one hemisphere of the secondary is spotted. Similarly, the 14.7 hr orbital period CV BV Cen has $\sim 25$\% of a hemisphere covered by spots \citep{Watson07}. In this model we find a secondary temperature of $4050\pm30$ K and a secondary mass of $M_2 = 0.65\pm0.05$ M$_\odot$, very similar to the parameters obtained from the variable disk model described in Section~\ref{sec:eccentric}.

We show the light curve of each epoch and the corresponding most likely model, as well as the residuals of each model, in Figure~\ref{fig:model}. We only include a plot of the spotted secondary model, since all three models presented here are able to reproduce the light curve shape and are visually indistinguishable. The data are shown phase-folded at the photometry ephemeris with $T_0 = 2455260.8204$ and orbital period $P = 10.34488$ h.
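As a concrete illustration of this folding, the following minimal Python sketch (our illustration, not the photometric pipeline; the example timestamps are hypothetical) converts observation times into orbital phase using the quoted ephemeris.

\begin{verbatim}
# Minimal sketch of the phase folding described above.
# T0 and P are taken from the text; HJD timestamps are assumed.
import numpy as np

T0 = 2455260.8204      # ephemeris zero point [HJD]
P = 10.34488 / 24.0    # orbital period [days]

def fold(t):
    """Return orbital phase in [0, 1) for times t in HJD."""
    return np.mod((np.asarray(t) - T0) / P, 1.0)

# Example with hypothetical observation times:
print(fold([2455261.0, 2455262.5]))
\end{verbatim}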
\section{Discussion} \label{sec:params} \begin{deluxetable*}{ccccc} \tablecaption{Best fit parameters \label{tab:models}} \tablewidth{0pt} \tablehead{\colhead{Parameter} & \colhead{Prior} & \colhead{Variable Disk} & \colhead{Asymmetrical Brightness} & \colhead{Spotted Secondary}} \startdata $i$ & $\cos([0.0, 90])$ & $63.8\pm0.5$ deg & $62.2\pm0.2$ deg & $63.1\pm0.4$ deg \\ $T_2$ & $[3500, 4100]$ & $4055\pm25$ K & $3850\pm50$ K & $4050\pm30$ K \\ $q$ & $0.79 \pm 0.06$ & $0.767\pm0.005$ & $0.78\pm0.01$ & $0.779\pm0.006$ \\ $v \sin(i)^*$ & $101 \pm 3.0$ & $99.5\pm0.2$ km s$^{-1}$ & $100.6\pm0.1$ km s$^{-1}$ & $100.5\pm0.2$ km s$^{-1}$ \\ ${M_1}^*$ & \nodata & $0.81\pm0.05$ M$_\odot$ & $0.86\pm0.03$ M$_\odot$ & $0.83\pm0.05$ M$_\odot$ \\ ${M_2}^*$ & \nodata & $0.62\pm0.04$ M$_\odot$ & $0.68\pm0.03$ M$_\odot$ & $0.65\pm0.05$ M$_\odot$ \\ ${R_{\rm 2}}^*$ & \nodata & $0.92\pm0.09$ R$_\odot$ & $1.02\pm0.07$ R$_\odot$ & $0.97\pm0.10$ R$_\odot$ \\ \enddata \tablecomments{List of the best fit parameters that are constant throughout all epochs of photometry and are fit for in all models. $i$ is the orbital inclination, $T_2$ is the secondary temperature, $q$ is the mass ratio, $v \sin(i)$ is the secondary's rotational velocity, $M_1$ and $M_2$ are the primary and secondary masses, respectively, and $R_{\rm 2}$ is the radius of the Roche Lobe of the secondary. For most fitted parameters we adopt a flat prior, except for the orbital inclination, which is flat in $\cos(i)$, and the mass ratio, which has a Gaussian prior.} \tablenotetext{*}{These parameters were not fit for, but were calculated using all the posterior distribution samples of the fitted parameters.} \end{deluxetable*}

For the individual models presented in \S\ref{sec:modeling} we determine the fractional contribution of the template star to the total flux of the system, $f$. We calculate $f$ from the light curve models by measuring the relative flux fraction that the secondary contributes to the total flux of the system in the $V$-band, the closest band to the $ 5580 - 6150$ \AA\ wavelength range used in Section~\ref{sec:spectra} to derive $f = 0.52 \pm 0.06$ from the spectroscopy. From the light curve modeling we find $f$-factors averaged over a full orbit for all epochs of photometry of $f = 0.50 \pm 0.03$ for the variable disk model, $f = 0.54 \pm 0.04$ for the asymmetrical edge brightness model, and $f = 0.51 \pm 0.09$ for the spotted star model. These are all in good agreement with the value measured from the spectra. The $f$ as a function of epoch is shown in Table~\ref{tab:parameters}.

We find best-fit values for the primary mass of \mbox{$M_1 = 0.81$ M$_\odot$}, \mbox{$M_1 = 0.86$ M$_\odot$}, and \mbox{$M_1 = 0.83$ M$_\odot$} for models 1, 2, and 3, respectively. The statistical uncertainties reported in Table~\ref{tab:models} are of the same order as the systematic uncertainties arising from assuming different models. Accounting for these, we adopt a primary mass estimate of \mbox{$M_1 = 0.83 \pm 0.06$ M$_\odot$}, typical for white dwarfs in CVs (e.g. $M_{\rm WD} = 0.83 \pm 0.23$ M$_\odot$; \citealt{Zorotovic11}). Correspondingly, the best estimate for the mass of the secondary is \mbox{$M_2 =0.65\pm0.07$ M$_\odot$}, consistent with the mass of a main sequence K7 star \citep{Cox00} and in agreement with the best fit template match found in Section~\ref{sec:spectra}.
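As a cross-check of these numbers, the following minimal Python sketch (our illustration, not the analysis code of this work) evaluates the mass function of equation~\ref{eq:massfunction} with representative best-fit values of $q$ and $i$; with $M_2 = q M_1$, the left-hand side of equation~\ref{eq:massfunction} reduces to $M_1 \sin^3(i)/(1+q)^2$, which the sketch solves for $M_1$.

\begin{verbatim}
# Sketch (our illustration): component masses from P, K2, q, and i,
# using representative best-fit values quoted in the text.
import numpy as np

G = 6.674e-8           # cm^3 g^-1 s^-2
M_SUN = 1.989e33       # g

P = 10.34488 * 3600.0  # orbital period [s]
K2 = 161.0e5           # RV semi-amplitude of the secondary [cm/s]
q = 0.78               # mass ratio M2/M1
i = np.radians(63.0)   # orbital inclination

f_M1 = P * K2**3 / (2.0 * np.pi * G)        # mass function [g]
M1 = f_M1 * (1.0 + q)**2 / np.sin(i)**3 / M_SUN
M2 = q * M1
print(f"f(M1) = {f_M1 / M_SUN:.3f} Msun, "
      f"M1 = {M1:.2f} Msun, M2 = {M2:.2f} Msun")
\end{verbatim}

Running the sketch returns $M_1 \approx 0.83$ M$_\odot$ and $M_2 \approx 0.65$ M$_\odot$, consistent with the adopted values.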
We find a best fit radius for the Roche Lobe of the secondary of $R_2 = 0.97 \pm 0.15$ R$_\odot$. This radius is larger than expected for a main sequence K7 star (typical radius $R \sim 0.65$ R$_\odot$; \citealt{Pecaut13}), supporting an evolved secondary in CX137.

From the spectra, we determine the ratio of the double-peak separation (DP) to the full width at half maximum (FWHM) of the H$\alpha$ emission line following the method of \cite{Casares15}. We fit H$\alpha$ with a double Gaussian function to measure the DP between the two line peaks, and then fit a single Gaussian to determine the FWHM. We find the average ratio to be DP$/$FWHM$ = 0.55 \pm 0.02$; the results of these fits are shown in Figure~\ref{fig:halpha}. In Figure~\ref{fig:casares} we show $q$ and DP/FWHM of H$\alpha$ plotted alongside the values for other known CVs. We confirm that our parameter estimates agree well with the $q-$DP/FWHM relation for CVs determined by \cite{Casares15}. \cite{Torres14} suggested the double-peaked structure of H$\alpha$ might be due to contamination from photospheric absorption lines from the secondary. Nevertheless, the values we derived for CX137 agree with this trend, which strengthens the case that CX137 is a CV. \begin{figure} \centering \includegraphics[width=\columnwidth]{Halpha.pdf} \caption{Emission line profiles for H$\alpha$ at 6 different phases. The best-fit separation between the two peaks (DP) and the FWHM are shown in each panel. We determine a ratio of DP$/$FWHM$=0.55 \pm 0.02$ following the methods of \cite{Casares16}. \label{fig:halpha}} \end{figure}

We measure the systemic velocity of CX137 from the optical spectra to be \mbox{$\gamma = 54 \pm 4$ km s$^{-1}$} (Figure~\ref{fig:rvfit}). Given the proper motion and distance to CX137 obtained by \textit{Gaia} (see Section~\ref{subsec:gaia}), we can determine the space velocity of CX137 with respect to the Sun to be \mbox{$v = 62 \pm 4$ km s$^{-1}$}, statistically consistent with other CVs (\mbox{$v = 51 \pm 7$ km s$^{-1}$}; \citealt{Ak10}).

\subsection{X-ray Luminosity} \cite{Torres14} provide a lower limit to the X-ray luminosity of CX137 of \mbox{$L_x > 5.8\times10^{30}$ erg s$^{-1}$} for a distance of 0.7 kpc and assuming a hydrogen column density $N_H = 10^{21}$ cm$^{-2}$. Here, we improve this measurement by using the distance to CX137 from \textit{Gaia} of \mbox{$d = 879^{+59}_{-52}$ pc}. In addition, we obtain the extinction in the line of sight to CX137 from the Bayestar19 3D dust maps \citep{Green19} to be $A_V \approx 0.58$. We obtain $N_H = 1.7\times10^{21}$ cm$^{-2}$ from the $A_V$--$N_H$ relation of \cite{Guver09}. We calculate a counts-to-unabsorbed-flux conversion factor of $5.6\times10^{-15}$ erg cm$^{-2}$ s$^{-1}$ count$^{-1}$ for a 2.16 ks exposure taken with ACIS-I during \textit{Chandra} Cycle 9, using a power-law spectrum with $\Gamma = 2$. This corresponds to a $0.5-10$ keV unabsorbed flux of $(8.4 \pm 2.1)\times10^{-14}$ erg cm$^{-2}$ s$^{-1}$, or $L_x = (7.8 \pm 2.2) \times10^{30}$ erg s$^{-1}$ at the \textit{Gaia} distance. We can estimate an accretion rate following the method of \cite{Beuermann04}, using $L_{\rm acc} = \dot{M} G M_1 (1/R_1 - 1/R_D)$, $R_1 = (1.463 - 0.885 (M_1 / M_\odot)) \times 10^9$ cm, and $L_{\rm acc} = (1 + \alpha) L_x$, where $\alpha$ is typically 0.1. We adopt our best estimate for the primary mass of $M_1 = 0.83$ M$_\odot$, and a typical disk radius of $R_D = 10^{10}$ cm, as determined by our models presented in \S\ref{sec:modeling}. We obtain an accretion rate estimate of $\dot{M} \sim 10^{15}$ g s$^{-1}$.
Bahramian et al. (in prep) detected CX137 at a higher $L_x$ in the \textit{Swift} Bulge Survey \citep{Shaw20} during repeated biweekly scans of the Galactic Bulge with the Neil Gehrels \textit{Swift} Observatory. They measured an average $L_x=5\times10^{31}$ erg s$^{-1}$ over many epochs in 2017, with a peak of $L_x = (3\pm2)\times10^{32}$ erg s$^{-1}$, indicating a flux increase by a factor of $38^{+33}_{-26}$ compared to the \textit{Chandra} GBS measurement, which would consequently raise the accretion rate to $\dot{M} \sim 4\times10^{16}$ g s$^{-1}$ during this period. \cite{Teeseling96} found that the accretion rate in non-magnetic CVs is likely underestimated by a factor of $\sim 2$ for systems with inclinations of $\gtrsim 60$ deg. This would bring the accretion rate to $\dot{M} \sim 10^{17}$ g s$^{-1}$, closer to the $\dot{M}$ expected for a Roche Lobe filling subgiant with an orbital period of $10$ hours \citep{King96}.

Using the $L_x$ vs. duty cycle correlation for dwarf novae from \citet{Britt15}, we estimate the duty cycle for CX137 to be $0.063 \pm 0.022$. Accounting for observational cadence, the source should have been in outburst during $94\pm34$ days out of the 1,504 days CX137 was observed by OGLE. One explanation for the lack of outbursts might be that CVs with long orbital periods tend to have shorter outbursts \citep{Hameury17}. Given that the secondary star in CX137 contributes a large fraction of the total flux, an outburst would be of low amplitude, and we could have missed it if it happened when the source was not being observed. Nevertheless, systems with orbital periods similar to or longer than that of CX137, such as AE Aqr (P = 9.9 hr) \citep{Watson06} and BV Cen (P = 14.7 hr) \citep{Watson07}, do show frequent outbursts.

\section{Conclusion} \label{sec:conclusions} \begin{figure} \includegraphics[clip,width=0.90\columnwidth]{casares.pdf} \caption{Relation between the mass ratio $q$ and the ratio of peak separation DP to FWHM of H$\alpha$ for known CVs. The black line is an empirical relation found in \cite{Casares15}, from which this figure is adapted. The parameters found for CX137, shown in red, agree well with the existing relations for CVs. Error bars not visible are smaller than the marker size. \label{fig:casares}} \end{figure} We have modeled 7 years of optical photometry of the binary star CX137. The optical light curve has an asymmetrical sine curve shape, which we interpret as being due to ellipsoidal modulations of a tidally distorted secondary star. We see long-term variations in the shape of the light curve, which we model with: (i) an accretion disk that changes in shape and luminosity, (ii) a circular accretion disk with an edge of asymmetrical temperature, and (iii) a circular accretion disk with a uniform edge temperature but with two spots on the surface of the secondary. We find models (i) and (iii) to be consistent with a K7 star in terms of the best-fit secondary mass and temperature, which is not the case for model (ii), which is therefore disfavored. The light curve variations are thus well fit by either an accretion disk that changes in shape and luminosity, or a spotted secondary star. From the light curve modeling we derive a best fit inclination of \mbox{$i = 63.0\pm0.7$ deg}, a primary mass of \mbox{$M_1 = 0.83 \pm 0.06$ M$_\odot$}, consistent with a white dwarf accretor, and a secondary mass of \mbox{$M_2 = 0.65 \pm 0.07$ M$_\odot$}, consistent with a K7 secondary.
We obtained multiple spectra of the source to construct a radial velocity curve, from which we measure $K_2 = 161.1 \pm 0.7$ km s$^{-1}$ and a systemic velocity \mbox{$\gamma = 54 \pm 4$ km s$^{-1}$}.

\acknowledgments This project was supported in part by an NSF GROW fellowship. PGJ and ZKR acknowledge funding from the European Research Council under ERC Consolidator Grant agreement no 647208. JS was supported by a Packard Fellowship. MAPT acknowledges support by the Spanish MINECO under grant AYA2017-83216-P and support via Ram\'on y Cajal Fellowship RYC-2015-17854. We thank Tom Marsh for the use of {\tt molly}. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. The ISIS spectroscopy was obtained with the WHT, operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrofisica de Canarias. This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. This research has made use of NASA's Astrophysics Data System. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. The OGLE project has received funding from the National Science Centre, Poland, grant MAESTRO 2014/14/A/ST9/00121 to AU.

\software{Astropy \citep{astropy18}, PyRAF \citep{science12}, SAOImage DS9 \citep{Smithsonian00}, emcee \citep{Foreman13}, corner \citep{Foreman16}, Matplotlib \citep{hunter07}, SciPy \citep{Walt11}, NumPy \citep{Oliphant07}, extinction \citep{Barbary16}, PYPHOT (\url{https://github.com/mfouesneau/pyphot}), Molly (\url{http://deneb.astro.warwick.ac.uk/phsaap/software/molly/html/INDEX.html}), XRbinary (\url{http://www.as.utexas.edu/~elr/Robinson/XRbinary.pdf}).}
\section{Improving User Engagement Through an AI-driven Design for Diagnostic Interface} Since the user decides whether to continue their study through \emph{Santa} after viewing the diagnostic page, we look for a page design that encourages user engagement and motivates the user to study further. Throughout this section, we explore designs for the diagnostic page that most effectively express the AI features brought by the back-end AI models and best explain the user's problem-solving process in the diagnostic test. We propose two page designs summarizing the user's diagnostic test result: page design A (Figure \ref{fig:page_A}) and page design B (Figure \ref{fig:page_B}). Each page design provides analytics of the diagnostic test result at a different level of information and explainability, and is powered by different AI models running behind \emph{Santa}. The effectiveness of each page design and its impact on user engagement are investigated through controlled A/B tests in Section \ref{sec:exp}. \begin{figure*} \centering \begin{subfigure}[ht]{0.49\columnwidth} \centering \includegraphics[width=0.5\columnwidth]{figures/page_A.png} \caption{} \label{fig:page_A} \end{subfigure} \begin{subfigure}[ht]{0.49\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B.png} \caption{} \label{fig:page_B} \end{subfigure} \caption{Page designs A (a) and B (b) proposed in the paper. Note that the original version of each page is in Korean; we present English versions of each page design to make them accessible to readers worldwide.} \label{fig:page_AB} \end{figure*} \subsection{Page Design A} Page design A presents the following four components: \emph{Estimated Score}, \emph{Grade by Part}, \emph{Comparison to Users in the Target Score Zone} and \emph{Tutor's Comment}. \begin{figure*} \centering \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_A_score.png} \caption{} \label{fig:page_A_score} \end{subfigure} \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_A_part.png} \caption{} \label{fig:page_A_part} \end{subfigure} \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_A_tutor_comment.png} \caption{} \label{fig:page_A_tutor_comment} \end{subfigure} \caption{\emph{Estimated Score}, \emph{Grade by Part} and \emph{Tutor's Comment} components in the page design A.} \end{figure*} \subsubsection{Estimated Score} This component shows the user's overall expected performance on the actual TOEIC exam and presents the following features: estimated scores, target scores and percentile rank (Figure \ref{fig:page_A_score}). The estimated scores are computed by the back-end score estimation model, and the target scores are values the user entered before the diagnostic test. The estimated scores and the target scores are presented together so that the user can easily compare them. The percentile rank is obtained by comparing the estimated score with the scores of more than a million users recorded in the database of \emph{Santa}. \subsubsection{Grade by Part} This component provides detailed feedback on the user's ability for each question type to help them identify their strengths and weaknesses (Figure \ref{fig:page_A_part}). For each part of the TOEIC exam, the red and white bar graphs show the user's current proficiency level and the proficiency level required to achieve the target score, respectively.
The red bar graphs are obtained by averaging the estimated probabilities of the user correctly answering the potential questions of each part. Similarly, the white bar graphs are obtained by computing the averaged correctness probabilities for each part over users in the target score zone. \begin{figure*} \centering \includegraphics[width=0.4\columnwidth]{figures/page_A_radar_chart.png} \caption{\emph{Comparison to Users in the Target Score Zone} in the page design A. The radar chart is intended to give the feeling that the AI teacher is analyzing the user closely from multiple perspectives by presenting five features capturing particular aspects of their ability.} \label{fig:page_A_radar_chart} \end{figure*} \subsubsection{Comparison to Users in the Target Score Zone} This component shows a radar chart of five features representing particular aspects of the user's ability (Figure \ref{fig:page_A_radar_chart}). The five features explain how the AI models analyze the user's problem-solving process, making \emph{Santa} look more like an AI teacher. The five features are the following: \begin{itemize} \item Performance: The user's expected performance on the actual TOEIC exam. \item Correctness: The probability that the user will correctly answer each given question. \item Timeliness: The probability that the user will solve each given question under the time limit. \item Engagement: The probability that the user will continue studying with \emph{Santa}. \item Continuation: The probability that the user will continue the current learning session. \end{itemize} The red and white pentagons present the five features with the values of the current user and the averaged values of users in the target score zone, respectively. This component is particularly important: as shown in Section \ref{sec:exp}, users' engagement factors vary greatly depending on the presence or absence of the radar chart. \subsubsection{Tutor's Comment} This component presents natural language text describing the user's current ability and suggestions for achieving the target score (Figure \ref{fig:page_A_tutor_comment}). This feature is intended to provide a learning experience closer to being taught by a human teacher through a more human-friendly interaction. Based on the user's diagnostic test result, the natural language text is selected from a set of pre-defined templates. \subsection{Page Design B} Although page design A is proposed to provide AI-powered feedback on the user's diagnostic test result, it has limitations: its composition makes it difficult to deliver detailed information, and it cannot accommodate all the features computed by the AI models. To this end, page design A was revised in the direction of making better use of AI features, leading to page design B, which is more informative and better explains the user's problem-solving process in the diagnostic test. Page design B consists of the following seven components: \emph{Estimated Score}, \emph{Analysis of Problem-Solving Process}, \emph{Skill Proficiency}, \emph{Recommended Lectures}, \emph{Analysis of Users Similar to You}, \emph{Adaptively Offered Curriculum} and \emph{Santa Labs}.
\begin{figure*} \centering \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_score.png} \caption{} \label{fig:page_B_score} \end{subfigure} \centering \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_problem_solving.png} \caption{} \label{fig:page_B_problem_solving} \end{subfigure} \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_proficiency.png} \caption{} \label{fig:page_B_proficiency} \end{subfigure} \caption{\emph{Estimated Score}, \emph{Analysis of Problem-Solving Process} and \emph{Skill Proficiency} components in the page design B.} \end{figure*} \subsubsection{Estimated Score} The target scores and the mountain-shaped graphic illustrating the percentile rank in the \emph{Estimated Score} component of page design A are excluded from that of page design B (Figure \ref{fig:page_B_score}). The \emph{Estimated Score} component of page design B shows only the estimated scores and the number indicating the percentile rank, making this component more concise and intuitive. \subsubsection{Analysis of Problem-Solving Process} This component provides an overall review session for the diagnostic test (Figure \ref{fig:page_B_problem_solving}). It presents the percentage of correct answers, the time taken to complete the diagnostic test and how much longer the user took than the recommended time. Also, through the \emph{View Explanations} button, the user can review the questions in the diagnostic test and their explanations. \subsubsection{Skill Proficiency} This component shows the user's current proficiency level in each skill for the TOEIC exam, making the AI's analysis of the diagnostic test result more transparent and explainable (Figure \ref{fig:page_B_proficiency}). The radar chart represents proficiency levels on a scale of 1 to 10 for the following five skills: listening, reading, grammar knowledge, grammar application and vocabulary. Each proficiency level is obtained by normalizing the average estimated correctness probability over the potential questions for each skill (a minimal sketch of this computation is given below). The red and white pentagons present the values for the five skills of the current user and the averaged values of users in the target score zone, respectively. \begin{figure*} \centering \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_lectures.png} \caption{} \label{fig:page_B_lectures} \end{subfigure} \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_analysis.png} \caption{} \label{fig:page_B_analysis} \end{subfigure} \begin{subfigure}[ht]{0.33\columnwidth} \centering \includegraphics[width=1\columnwidth]{figures/page_B_curriculum.png} \caption{} \label{fig:page_B_curriculum} \end{subfigure} \caption{\emph{Recommended Lectures}, \emph{Analysis of Users Similar to You} and \emph{Adaptively Offered Curriculum} components in the page design B.} \end{figure*} \subsubsection{Recommended Lectures} This component helps the user identify their weaknesses and suggests lectures to address them (Figure \ref{fig:page_B_lectures}). Among the five skills in the \emph{Skill Proficiency} component, the two skills with the lowest proficiency and their sub-skills are presented, and two lectures on these skills are recommended.
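The following minimal Python sketch (our illustration; the function names and the exact linear mapping to the 1--10 scale are assumptions, since the text does not specify the normalization) shows how the per-skill proficiency levels of the \emph{Skill Proficiency} component could be derived from a model's predicted correctness probabilities.

\begin{verbatim}
# Sketch: per-skill proficiency levels from predicted correctness
# probabilities. The rescaling of the average probability in [0, 1]
# to a level in [1, 10] is our assumption.
from collections import defaultdict

def skill_proficiency(predictions):
    """predictions: iterable of (skill, p_correct) pairs over the
    potential questions."""
    sums, counts = defaultdict(float), defaultdict(int)
    for skill, p in predictions:
        sums[skill] += p
        counts[skill] += 1
    # Average probability per skill, linearly rescaled to 1-10.
    return {s: 1.0 + 9.0 * (sums[s] / counts[s]) for s in sums}

preds = [("listening", 0.62), ("listening", 0.71), ("vocabulary", 0.45)]
print(skill_proficiency(preds))
\end{verbatim}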
\subsubsection{Analysis of Users Similar to You} This component provides information on the change in the average scores of \emph{Santa} users at a similar level to the current user, conveying the feeling that a specific score can be attained by using \emph{Santa} (Figure \ref{fig:page_B_analysis}). It shows how the scores of these \emph{Santa} users change by dividing them into the top 20\%, middle 60\% and bottom 20\%, and presents the estimated average score attained after studying with \emph{Santa} for 60 hours. This feature is obtained by finding \emph{Santa} users with the same estimated score as the current user and computing their estimated score every time they consume a learning item. \subsubsection{Adaptively Offered Curriculum} This component presents the learning path personalized to the user to achieve their learning objective (Figure \ref{fig:page_B_curriculum}). When the user changes the target date and target score by swiping, \emph{Santa} dynamically suggests the number of questions and lectures the user must study per day based on their current position. The amount of study the user needs to complete every day is computed by finding \emph{Santa} users whose initial state is similar to that of the current user and tracking how their learning progresses, so that the user can achieve the target score on the target date. \begin{figure*} \centering \includegraphics[width=0.8\columnwidth]{figures/page_B_Santa_labs.pdf} \caption{\emph{Santa Labs} component in the page design B. When the user presses the flask button next to the \emph{Estimated Score} component, a window appears with an explanation of how the estimated scores are calculated.} \label{fig:page_B_Santa_labs} \end{figure*} \subsubsection{Santa Labs} When the user presses the flask button next to each component, a window pops up and provides an explanation of the AI models used to compute the features of the component (Figure \ref{fig:page_B_Santa_labs}). For instance, when the user presses the flask button next to the \emph{Estimated Score} component, a window appears with an explanation of Assessment Modeling \cite{choi2020assessment}, \emph{Santa}'s score estimation modeling method. This component conveys information about the AI technology provided by \emph{Santa}, gives the user a feeling that the AI is actually analyzing them, and increases the credibility of the system. \subsection{Back-end AI Engine} The features in the components of each page design are computed by processing the output of \emph{Santa}'s AI engine, which takes the user's past learning activities as input and models individual users. Whenever the user consumes a learning item suggested by \emph{Santa}, the AI engine updates the models of individual users and makes predictions on specific aspects of their ability. The predictions that the AI engine makes include the following: response correctness, response timeliness, score, learning session dropout and engagement. The response correctness prediction follows the approaches introduced in \cite{lee2016machine} and \cite{choi2020towards}. \cite{lee2016machine} is a Collaborative Filtering (CF) based method which models users and questions as low-rank matrices. Each vector in the user matrix and question matrix represents the latent traits of each user and the latent concepts of each question, respectively. SAINT \cite{choi2020towards} is a deep learning based model that follows the Transformer \cite{vaswani2017attention} architecture.
The deep self-attentive computations in SAINT allow it to capture complex relations among exercises and responses. Since the CF-based model can quickly compute the probabilities of response correctness for the entire set of questions of all users, and SAINT predicts the response correctness probability for each user with high accuracy, the two models are complementary in real-world applications where both accuracy and efficiency are important. Assessment Modeling (AM) \cite{choi2020assessment} is a pre-train/fine-tune approach to address label-scarce educational problems, such as score estimation and review correctness prediction. Following the pre-train/fine-tune method proposed in AM, a deep bidirectional Transformer encoder \cite{devlin2018bert} based score estimation model is first pre-trained to predict the response correctness and timeliness of users conditioned on their past and future interactions, and then fine-tuned to predict the score of each user. The response timeliness and score are predicted by the pre-trained model and the fine-tuned model, respectively. The learning session dropout prediction is based on the method proposed in DAS \cite{lee2020deep}. DAS is a deep learning based dropout prediction model that follows the Transformer architecture. Defining session dropout in a mobile learning environment as one hour of inactivity, DAS computes the probability that the user drops out of the current learning session whenever they consume a learning item. The engagement prediction is made by a Transformer encoder based model. The model is trained by taking the user's learning activity record as input and predicting their payment status, based on the assumption that a user who makes a payment is highly engaged with the system.

\section{Conclusions and Future Work} We have investigated the effects of AI-driven interface design for ITS. In particular, we hypothesized that a diagnostic page design summarizing analytics of students' problem-solving process that makes better use of AI features would encourage student engagement. To this end, we proposed several page designs that couple the interface with AI features at different levels and empirically verified their impact on student engagement. We conducted A/B tests on new students using \emph{Santa}, an active mobile ITS. We considered conversion rate, Average Revenue Per User (ARPU), total profit and the average number of free questions a user consumed as factors measuring the degree of engagement. The A/B test results showed that the page designs that effectively express the AI features brought by the back-end AI engine, and thus better explain the analysis of the user's diagnostic test result, promote all factors of engagement, leading to the conclusion that AI-driven interface design improves student engagement. Avenues of future work include 1) updating a page summarizing the AI's analysis of the user in the learning session after the diagnostic test, and 2) finding more interface designs that can further enhance student engagement by making good use of, expressing and utilizing AI features.

\section{Experimental Studies} \label{sec:exp} In this section, we provide supporting evidence that AI-driven interface design for ITS promotes student engagement by empirically verifying the following through a real-world application: 1) the impact of the radar chart in page design A on user engagement, and 2) a comparison of page designs A and B with respect to user engagement.
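As a complement to the raw engagement numbers reported below, the following sketch (our addition for illustration; the paper itself does not report significance tests) shows how the statistical significance of a conversion-rate difference between two groups could be assessed with a two-proportion $z$-test. The conversion counts are approximated from the rates and group sizes reported in the first A/B test below.

\begin{verbatim}
# Sketch: two-proportion z-test for a conversion-rate difference.
from math import sqrt, erf

def two_proportion_ztest(x_a, n_a, x_b, n_b):
    """x = number of conversions, n = number of users per group."""
    p_a, p_b = x_a / n_a, x_b / n_b
    p = (x_a + x_b) / (n_a + n_b)  # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Approximate counts from the radar-chart A/B test:
# 5.25% of 1,391 users vs 6.26% of 1,486 users.
print(two_proportion_ztest(round(0.0525 * 1391), 1391,
                           round(0.0626 * 1486), 1486))
\end{verbatim}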
We conduct A/B tests on new incoming users of \emph{Santa}, with the users randomly assigned to either group A or group B. Both groups of users take the diagnostic test, and at the end, users in different groups are shown different diagnostic test analytics pages. Throughout the experiments, we consider the following four factors of engagement: conversion rate, Average Revenue Per User (ARPU), total profit and the average number of free questions a user consumed. Monetary profits are an essential factor for evaluating user engagement, since paying for a service means that users are highly satisfied with it and requires a strong determination to use the service actively. For users without the determination to make a payment, the average number of free questions consumed after the diagnostic test is a significant measure of engagement, since it represents their motivation to continue the current learning session. \subsection{Impact of Radar Chart in Page Design A on Student Engagement} From April 15th to 24th, we conducted an A/B test by randomly assigning two different diagnostic test analytics pages to the users: one showing page design A without the radar chart (1,391 users) and the other showing the full page design A (1,486 users). Table \ref{tab:radar_chart} shows the overall results. We see that page design A with the radar chart improves all factors of user engagement. With the radar chart, the conversion rate, ARPU, total profit and the average number of free questions a user consumed increased by 22.68\%, 17.23\%, 25.13\% and 11.78\%, respectively, leading to the conclusion that a more AI-like interface design for ITS encourages student engagement. Figure \ref{fig:daily_payment_radar_chart} and Figure \ref{fig:daily_questions_radar_chart} compare the conversion rate and the average number of free questions a user consumed per day between the two groups of the A/B test, respectively. We observe that the users shown page design A with the radar chart made more payments and solved more free questions throughout the A/B test period. \begin{table}[h] \caption{Impacts of the radar chart on user engagement.} \centering \begin{tabular}{ccc} \toprule & w/o radar chart & w/ radar chart \\ \hline Conversion rate (\%) & 5.25 & 6.26 \\ ARPU (\$) & 5.92 & 6.94 \\ Total profit (\$) & 8,236.01 & 10,305.58 \\ \# of free questions consumed & 14.77 & 16.51 \\ \bottomrule \end{tabular} \label{tab:radar_chart} \end{table} \begin{figure*}[h] \centering \includegraphics[width=0.7\columnwidth]{figures/daily_payment_radar_chart.pdf} \caption{Daily comparison of the conversion rate between the two groups of the A/B test.} \label{fig:daily_payment_radar_chart} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=0.7\columnwidth]{figures/daily_questions_radar_chart.pdf} \caption{Daily comparison of the average number of free questions a user consumed between the two groups of the A/B test.} \label{fig:daily_questions_radar_chart} \end{figure*} \subsection{Comparison of Page Designs A and B on User Engagement} The A/B test of page designs A and B was conducted from August 19th to September 11th by randomly allocating them to the users: 9,442 users were allocated page design A, and 9,722 users were shown page design B. The overall results are shown in Table \ref{tab:AB}.
Compared to page design A, page design B is better at promoting all factors of user engagement, increasing the conversion rate, ARPU, total profit and the average number of free questions a user consumed by 11.07\%, 10.29\%, 12.57\% and 7.19\%, respectively. Note that although the page design with the radar chart in the previous subsection and page design A are the same, the values of the engagement factors for the page design with the radar chart in Table \ref{tab:radar_chart} differ from those of page design A in Table \ref{tab:AB}. The absolute value of each number can be changed by external factors, such as timing and the company's public relations strategy; these external factors do not bias the comparison, as they apply to both groups within each A/B test. The comparisons of the conversion rate and the average number of free questions a user consumed every two days between the users assigned to page designs A and B are presented in Figure \ref{fig:daily_payment_design_AB} and Figure \ref{fig:daily_questions_design_AB}, respectively. We can observe in the figures that users experiencing page design B made more payments and solved more free questions during the A/B test period. Throughout the experiment, the results show that a more informative and explainable interface design for ITS, making better use of AI features, improves student engagement. \begin{table}[h] \caption{Impacts of the page designs A and B on user engagement.} \centering \begin{tabular}{ccc} \toprule & Page design A & Page design B \\ \hline Conversion rate (\%) & 5.60 & 6.22 \\ ARPU (\$) & 5.54 & 6.11 \\ Total profit (\$) & 83,335.76 & 93,807.27 \\ \# of free questions consumed & 15.15 & 16.24 \\ \bottomrule \end{tabular} \label{tab:AB} \end{table} \begin{figure*}[h] \centering \includegraphics[width=0.7\columnwidth]{figures/daily_payment_design.pdf} \caption{Comparison of the conversion rate every two days between the two groups of the A/B test.} \label{fig:daily_payment_design_AB} \end{figure*} \begin{figure*}[h] \centering \includegraphics[width=0.7\columnwidth]{figures/daily_questions_design.pdf} \caption{Comparison of the average number of free questions a user consumed every two days between the two groups of the A/B test.} \label{fig:daily_questions_design_AB} \end{figure*} \section{Introduction} The recent COVID-19 pandemic has had an unprecedented impact across the globe. With social distancing measures in place, many organizations have implemented virtual and remote services to prevent widespread infection and support the social needs of the public. Educational systems are no exception and have changed dramatically with the distinctive rise of online learning. Also, the demand for evaluation methods of learning outcomes that are safe, reliable and acceptable has led the educational environment to take a paradigm shift toward formative assessment. An Intelligent Tutoring System (ITS), which provides pedagogical services in an automated manner, is a promising technique to overcome the challenges that the post-COVID-19 educational environment has brought. However, despite the development and growing popularity of ITS, most studies in ITS research have focused on diagnosing students' knowledge state and suggesting proper learning items, and less effort has been devoted to designing the interface of ITS so that it promotes students' interest in learning, motivation and engagement by making better use of AI features.
For example, Knowledge Tracing (KT), the task of modeling students' knowledge through their learning activities over time, is a long-standing problem in the field of Artificial Intelligence in Education (AIEd). From Bayesian Knowledge Tracing \cite{corbett1994knowledge,yudelson2013individualized} to Collaborative Filtering \cite{thai2010recommender,lee2016machine} and Deep Learning \cite{piech2015deep,zhang2017dynamic,choi2020towards,ghosh2020context}, various approaches have been proposed, and KT is still being actively studied. Learning path construction is also an essential task that ITS performs, where learning items are suggested to maximize students' learning objectives. This task is commonly formulated in a reinforcement learning framework \cite{liu2019exploiting,huang2019exploring,bassen2020reinforcement,zhou2020improving} and is also an active research area in AIEd. On the other hand, little work has been done in the context of the user interface for ITS, including intelligent authoring shells \cite{granic2000user}, affective interfaces \cite{lin2014usability,lin2014influence} and usability testing \cite{chughtai2015usability,koscianski2014design,roscoe2014writing}. Although these works cover important aspects of ITS, the methods are outdated and their effectiveness is not reliable, since the experiments were conducted on a small scale. An ITS interface that does not fully support making the AI's analysis transparent to students adversely affects their engagement. Accordingly, improving the interface of ITS is also closely related to explainable AI. Explaining what exactly makes AI models arrive at their predictions and making them transparent to users is an important issue \cite{gunning2017explainable,gunning2019darpa,dove2020monsters}, and has been actively studied in both the human-computer interaction \cite{abdul2018trends,kizilcec2016much,stumpf2009interacting,wang2019designing} and machine learning \cite{samek2017explainable} communities. There is a large body of work on the issue of explainability in many subfields of AI, including computer vision \cite{norcliffe2018learning,fong2017interpretable,kim2017interpretable,zhang2018interpretable}, natural language processing \cite{lei2017interpretable,fyshe2015compositional,jiang2018interpretable,panigrahi2019word2sense} and speech processing \cite{ravanelli2018interpretable,korzekwa2019interpretable,sun2020fully,tan2015improving}. Explainability in AIEd is mainly studied in the direction of giving feedback that helps students identify their strengths and weaknesses. A method combining item response theory with deep learning has been proposed, from which students' proficiency levels on specific knowledge concepts can be found \cite{cheng2019dirt,wang2019neural,yeung2019deep}. Also, \cite{barria2019explaining,choi2020choose} attempted to give students insight into why the system recommends a specific learning material. In this paper, we explore AI-driven design for the interface of ITS describing diagnostic feedback on students' problem-solving process and investigate its impact on their engagement. We propose several interface designs composed of different AI-powered components. Each page design couples the interface with AI features at a different level, providing different levels of information and explainability. We empirically evaluate the impact of each design on student engagement through \emph{Santa}, an active mobile ITS.
We consider conversion rate, Average Revenue Per User (ARPU), total profit, and the average number of free questions a student consumed as factors measuring the degree of engagement. Controlled A/B tests conducted on more than 20K students in the wild show that AI-driven interface design improves these factors of engagement by up to 25.13\%. \section{Related Works} \subsection{Design of UI for ITS} Although the development of ITS has become an active area of research in recent years, most studies have focused on learning science, cognitive psychology and artificial intelligence, resulting in little work done in the context of UI. \cite{granic2000user} describes the UI issues of an intelligent authoring shell, which is an ITS generator. Through experience in the usage of TEx-Sys, the authoring shell proposed in the paper, the authors discuss the importance of a well designed UI that brings system functionality to users. \cite{glavinic2001interacting} considers applying multiple views to UIs for ITSs. The paper substantiates the usage of multiple perspectives on the domain knowledge structure of an ITS through the implementation of MUVIES, a multiple-view UI for ITS. Understanding students' emotional states has become increasingly important for motivating their learning. Several works \cite{lin2014influence,lin2014usability} incorporate an affective interface in ITS to monitor and respond to students' states of emotion during learning. \cite{lin2014influence} studies the usage of an affective ITS in accounting remedial instruction. Virtual agents in the system analyze students' emotions and behave and give feedback accordingly, to motivate their learning. \cite{lin2014usability} proposes ATSDAs, an affective ITS for digital arts. ATSDAs analyzes the textual input of a student to identify their emotion and learning status. A visual agent in the system adapts to the student, provides text feedback based on the inferred results and thereby increases their learning interest and motivation. The performance of software can be measured by its usability, a quality that quantifies ease of use. Whether applying usability testing and usability principles to the design of UI can improve the performance of ITS is an open question \cite{chughtai2015usability}.
\cite{koscianski2014design} discusses the importance of UI design, usability and software requirements, and suggests employing heuristics from the software engineering and learning science domains in the development process of ITS. An example of applying basic usability techniques to the development and testing of ITS is presented in \cite{roscoe2014writing}. The paper introduces Writing Pal, an ITS for helping to improve students' writing proficiency. The design of Writing Pal includes many usability engineering methods, such as internal usability testing, focus groups and usability experiments. \subsection{Explainability in AIEd} Providing explainable feedback that can identify the strengths and weaknesses of a student is a fundamental task in many educational applications \cite{conati2018ai}. DIRT \cite{cheng2019dirt} and NeuralCDM \cite{wang2019neural} propose methods to enhance the explainability of educational systems through cognitive diagnosis modeling, which aims to discover a student's proficiency levels on specific knowledge concepts. DIRT incorporates neural networks to compute the parameters of an Item Response Theory (IRT) model. With the great feature representation learning power of neural networks, DIRT can learn complex relations between students and exercises and give explainable diagnosis results. A similar approach is taken in NeuralCDM; however, the diagnosis model of NeuralCDM is an extended version of IRT with a monotonicity assumption imposed on the consecutive fully-connected neural network layers before the final output. Deep-IRT \cite{yeung2019deep} is a synthesis of IRT and DKVMN \cite{zhang2017dynamic}, a memory-augmented neural network based knowledge tracing model. Deep-IRT leverages intermediate computations of DKVMN to estimate the item difficulty level and the student ability parameters of the IRT model. EKT, proposed in \cite{huang2019ekt}, is a bidirectional LSTM based knowledge tracing model. EKT explains the change in a student's knowledge mastery levels by modeling the evolution of their knowledge state on multiple concepts over time. Also, equipped with the attention mechanism, EKT quantifies the relative importance of each exercise for the mastery of the student's multiple knowledge concepts. As pointed out in \cite{manouselis2012recommender}, explainability also poses challenges to educational recommender systems. \cite{barria2019explaining} addresses this issue by providing a visual explanation interface composed of a concept-mastery bar chart, a recommendation gauge and a textual explanation. When a certain learning item is recommended, the concept-mastery bar chart shows the concept-level knowledge of the student, the recommendation gauge represents the suitability of the item and the textual explanation describes the recommendation rule explaining why the item is suggested. Rocket, a Tinder-like UI introduced in \cite{choi2020choose}, also provides explainability in learning content recommendation. When an ITS proposes a learning material to a user, Rocket shows a polygonal visual summary of AI-extracted features, such as the probability of the user correctly answering the question being presented and the expected score gain when the user correctly answers the question, which gives the user insight into why the system recommends the learning material. Based on the AI-extracted features, the user can decide whether to consume the suggested learning material or not through a swiping or tapping action.
\section{Santa: A Self-Study Solution Equipped with an AI Tutor}
\begin{figure*}
\centering
\includegraphics[width=1\columnwidth]{figures/Santa_flow.pdf}
\caption{The flow of a user entering and interacting with \emph{Santa}}
\label{fig:santa_flow}
\end{figure*}
\emph{Santa}\footnote{\url{https://aitutorsanta.com}} is a multi-platform AI tutoring service with more than a million users in South Korea, available through Android, iOS, and the Web, that exclusively focuses on the Test of English for International Communication (TOEIC) standardized examination. The test consists of two timed sections, Listening Comprehension (LC) and Reading Comprehension (RC), each with 100 questions, divided into four and three parts, respectively. The final test score ranges from 10 to 990 in steps of 5 points. \emph{Santa} helps users prepare for the test by diagnosing their current state and dynamically suggesting learning items appropriate for their condition. Once a user solves a question, \emph{Santa} provides educational feedback on their response, including an explanation, a lecture, or another question. The flow of a user entering and using the service is described in Figure \ref{fig:santa_flow}. When a new user first opens \emph{Santa}, they are greeted by a diagnostic test (1). The diagnostic test consists of seven to eleven questions resembling those that appear on the TOEIC exam (2). As the user progresses through the diagnostic test, \emph{Santa} records the user's activity and feeds it to a back-end AI engine that models the individual user. At the end of the diagnostic test, the user is presented with a diagnostic page detailing the analytics of the user's problem-solving process in the diagnostic test (3). After that, the user may choose to continue their study by solving practice questions (4).
\section{Challenges and Issues}\label{sec:challenges}
The central concept in the VR literature is to define a more advanced form of user-computer interaction in real time using synthetic three-dimensional multisensory devices \cite{Burdea94, LaValle:2019}. Objects can be presented in ways that go beyond what the user perceives when observing them in a limited area, such as a monitor screen. VR can be classified, according to the user's presence, as immersive or non-immersive. It is said to be immersive when the user has the sensation of being present within the virtual world through sensory devices (stereoscopic glasses or helmets with motion trackers) that capture their movements and behavior. Conversely, it is considered non-immersive when the user is only partially transported to the virtual world via a monitor or projector \cite{Tori06}.
\begin{figure*}
\centering
\includegraphics[scale = 0.7]{vr_architecture.pdf}
\caption{General VR application architecture (Source: own construction, based on \protect\cite{Capilla:2004})}
\label{fig:vr_arch}
\end{figure*}
According to Machado et al. \cite{machado_2003} and LaValle \cite{LaValle:2019}, immersion and interactivity correspond to how the user interacts with the virtual environment, using devices that allow the manipulation of virtual objects. Several graphical libraries have emerged to help develop applications that reproduce requirements in a virtual environment; these libraries act as an abstraction layer providing information such as a device's position or orientation, without the application knowing on which device the information is processed. Despite the benefits of adopting VR for the development of applications in several areas, it poses new challenges for software quality assurance activities. For example, software developed in the context of VR has unique structures, which may represent new sources of faults for the programs developed \cite{Takala:2017}. These new challenges have motivated the development of approaches that aim to contribute to the quality assurance process of software in the context of VR. Automating software testing activities is often a complicated and challenging process. The main tasks of this activity include organizing, executing, and registering the execution of test cases, and verifying the results of their execution. In order to address these tasks in the context of VR, some key points, discussed in the next subsections, should be understood.
\subsection{What should be tested?}\label{subsec:what_test}
Virtual reality systems use individual hardware devices to enable interaction between the user and the system. The inner workings of graphics engines are not the primary concern of VR application developers. Defining scene graphs for organizing 3D objects in a VR world, managing virtual users, controlling sensors for detecting events such as object collisions, and processing events for reacting to user inputs are some of the typical elements of VR systems that developers should be concerned about \cite{Zhao:2009}. According to Runeson \cite{Runeson:2006}, unit testing aims to test the smallest units that make up the system, thus ensuring the proper functioning of these elements and making it easy to find programming faults, incorrect or poorly implemented algorithms, and incorrect data structures, while limiting the internal logic to the boundaries of the unit.
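To make the notion of a unit concrete, the snippet below shows a minimal, illustrative unit test (written in Python for brevity; the projects analyzed later in this paper are C\#-based, where the Unity Test Runner plays an analogous role). The \texttt{aabb\_overlap} routine is a hypothetical example of the kind of small, self-contained unit a VR scene relies on; it is not taken from any surveyed project.
\begin{verbatim}
import unittest

# Hypothetical unit under test: axis-aligned bounding-box (AABB)
# overlap, a small, self-contained routine of the kind used for
# collision detection in a 3D scene.
def aabb_overlap(min_a, max_a, min_b, max_b):
    return all(lo_a <= hi_b and lo_b <= hi_a
               for lo_a, hi_a, lo_b, hi_b
               in zip(min_a, max_a, min_b, max_b))

class AabbOverlapTest(unittest.TestCase):
    def test_overlapping_boxes(self):
        self.assertTrue(aabb_overlap((0, 0, 0), (2, 2, 2),
                                     (1, 1, 1), (3, 3, 3)))

    def test_disjoint_boxes(self):
        self.assertFalse(aabb_overlap((0, 0, 0), (1, 1, 1),
                                      (2, 2, 2), (3, 3, 3)))

if __name__ == "__main__":
    unittest.main()
\end{verbatim}
Such a test confines the checked logic within the limits of the unit; as discussed next, however, much of a VR application's behavior lives in structures outside such units.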
Figure \ref{fig:vr_arch} presents a general architecture of VR applications. As we can observe, VR applications differ from general programs by handling specific devices and by using data structures to represent the objects of a three-dimensional scene, which in turn describe various aspects of the real or imaginary world, such as geometric descriptions, appearance, transformations, behaviors, cameras, and lighting. Each of these properties is created to represent objects present in a virtual environment, raising new challenges related to how to test them. Observing the organization of 3D object elements and assets in scene graphs suggests that a higher-level type of test is needed: in general, because these elements are independent, they have no architectural correlation with the source code. Therefore, integration testing tends to be a more appropriate approach, since its main aim is to verify the communication between the units that make up the system.
\subsection{What does ``adequate testing'' mean?}
The way to address the question \textit{``What does adequate testing mean?"} is to apply test criteria, which consist of sets of rules for dividing and evaluating the valid input domain of the program being tested. A test criterion defines elements of a program that must be exercised during its execution, thereby guiding the tester in the process of designing test cases for the system. A test requirement may be, for example, a specific execution path of the software, a functionality obtained from the specification, a mutation-based approach, etc. \cite{Bertolino:2003}. As pointed out in the previous question, due to the different structures existing in VR applications, it is difficult to define which aspects should be considered when designing a test routine for them. For example, Figure \ref{fig:scene_render_ex} shows that a single frame of a 3D scene comprises different layers, each with aspects that can be taken into account when testing the application. Corrêa et al. \cite{Santos:2013} presented a set of studies that deal with the application of software testing techniques to programs in the VR context, showing that there is interest in the literature on the subject; however, there are still no systematized practices for the activity.
\begin{figure}[ht]
\centering
\includegraphics[scale = .43]{scene_render_example.pdf}
\caption{Example of scene rendering process (Source: own construction, based on \protect\cite{Adrian:2016})}
\label{fig:scene_render_ex}
\end{figure}
Due to the lack of defined requirements, it is not easy to identify test adequacy criteria for VR systems. How can we decide that testing has been sufficient? This question needs to be answered in a context-specific way.
\subsection{What is a failure in VR software?}
In the context of VR applications, the testing activity is hindered by the difficulty of systematizing how the behavior of a test case can be measured. This difficulty is described in the literature as the ``test oracle problem" and appears in cases where traditional means of measuring the execution of a test case are impractical or of little use in judging the correctness of outputs generated from the input domain data \cite{Rapps:1985, Barr:2015}. Test oracles deal with a primary issue in software testing activities: deciding on program correctness based on predetermined test data.
In general, the test oracle accesses a data set needed to evaluate the correctness of the test output. This data set is taken from the specification of the program under test and should contain sufficient information to support the final decision of the oracle \cite{Oliveira14}. A wide range of faults can be explored in the context of VR software: it is possible to verify adherence to the specification based on scene-graph concepts and on specific features of the virtual environment, as well as to verify the behavior of objects and the actions performed by multiple devices. As can be seen, in addition to traditional source code routines, several characteristics need to be taken into account when testing, and defining what can be considered a failure is an important step so that each of the aspects described above can be analyzed correctly during the testing activity.
\subsection{Can we reuse something?}
General capture-and-replay tools can be used, but they offer a shallow level of abstraction; any small change to the system requires the tests to be redone \cite{Leotta:2013}. Therefore, capture-and-replay tools are unsuitable while the system is still under development. From a unit testing point of view, we can still reuse the traditional approach, in which we quickly compare the expected output against a method's actual execution, ensuring that the smallest units of the VR system have been sufficiently tested against their specifications. Regarding integration testing, which is expected to handle new kinds of elements (3D objects, assets, behaviors, etc.), the literature review shows that we still need better-systematized practices for this activity \cite{Santos:2013}.
\subsection{What is done nowadays?}
Almost all 3D applications require some common features; therefore, developers tend to use platforms that provide these features out of the box. Using game engines is one of the most popular approaches among developers, as they help produce the systems and speed up the development process. Recently, popular game engines such as \textit{Unity3D}\footnote{\url{https://docs.unity3d.com/Manual/testing-editortestsrunner.html}} and \textit{Unreal Engine}\footnote{\url{https://docs.unrealengine.com/latest/INT/Programming/Automation}} have released their own sets of testing tools, which allow developers to write automated tests during the development phase and can substantially increase the stability of the product. Despite the existence of such tools, they still do not provide observable test criteria, perhaps due to the lack of studies that propose, experiment with, and validate effectively applicable approaches that are repeatable, documented, and do not rely solely on the tester's creativity.
\section{Conclusion and Future Work}\label{sec:conclusion}
This paper discusses the main challenges related to applying software testing practices in the VR domain. Some of the critical issues related to the quality of these systems were pointed out, and possible solutions that could be used and adapted to deal with such issues were discussed. We also discussed whether or not there is a real need to test VR systems.
To better understand this, a comprehensive study was conducted, guided by three research questions, whose objectives were: to understand the state of the practice of software testing in the context of VR programs ($RQ_1$), to measure metrics and quality attributes in VR software ($RQ_2$), and, finally, to evaluate fault proneness in the collection of software analyzed ($RQ_3$). In order to answer the raised questions, a collection of 119 open-source VR projects was cataloged and manually analyzed to understand the state of the practice concerning the application of software testing techniques. Regarding the application of software testing techniques ($RQ_1$), it was observed that, out of all the projects, only 6 had test cases. Given the results pointed out by $RQ_1$, we evaluated how neglecting the practice of software testing can be detrimental to a software project by analyzing the distribution of code smells among the projects. Smells related to architecture, design, and implementation were analyzed. It can be concluded that there is a high incidence of smells in the projects analyzed, especially implementation smells. We discussed the most common smells in each category, how they can discourage the practice of software testing, and also how they can be avoided if a software testing activity is appropriately conducted. Finally, considering the results of $RQ_2$, we investigated how the lack of good practices and the presence of code smells can impact the quality of the source code produced. To do so, an approach that evaluates code metrics was used to point out classes that are fault-prone ($RQ_3$). The study pointed out that about 12\% of the analyzed VR classes have such characteristics, revealing a significant risk to the success of the projects. The distribution of these classes was also evaluated with respect to the size of the projects analyzed. It was observed that the larger a project becomes, the higher the incidence of fault-prone classes, which may be an indication that neglecting test practices in larger projects is even riskier. We believe that the results reported in this paper will contribute to raising the awareness of the software testing and virtual reality communities about the need for software testing approaches for VR developers. As software testing is one of the development phases, it is necessary to understand the point of view of the stakeholders involved in the process, allowing these groups to narrow down what they deem important and thus making it possible to prioritize verification of the failures that should not manifest in VR applications. Observing such aspects, it is possible to guide the development of a project using a testing approach specific to the VR domain. Thus, in future work, we intend to survey which types of faults in VR applications contribute to a negative experience. The goal of this study is to obtain a view of the interest groups (for example, which types of failures are most critical, which are less relevant, how much each affects the quality of the final product, etc.), in addition to investigating the knowledge of these groups regarding specific types of failures in VR applications. By understanding the intents of the stakeholders, we expect to propose a fault taxonomy for the context of VR programs.
It is believed that, with such a taxonomy, it would be possible to encourage the development of software testing techniques and criteria specific to the context of VR programs, thus spreading the practice of software testing in order to mitigate possible problems and move towards software projects that better meet software quality requirements.
\section{Results and Discussion}\label{sec:discussion}
In this section, we address the research questions and further discuss the results obtained from the analysis and our observations from the empirical study.
\subsection{How is VR software tested?}
Existing software testing techniques seek to establish rules to define test case sets that exercise the testing requirements needed to reveal software faults when they exist. Testing techniques and criteria are of paramount importance in selecting test cases: they allow the tester to create more effective test cases and thus reduce their number, while ensuring that specific pieces of software are exercised by the tests. Testing techniques differ from one another in the source of information used to create the test requirements. Regarding the first research question ($\mathbf{RQ_1}$) of the study, which aims to answer \textit{``How does testing happen in open-source VR software systems?"}, the 119 projects were manually evaluated, and it was found that only 6 VR projects (\textit{Bowlmaster - 53 tests, CameraControls - 60 tests, GraduationGame - 15 tests, MiRepositorio\_VRPAD - 11 tests, space\_concept - 11 tests, UnityBenchmarkSamples - 4 tests}) are concerned with software testing practices, comprising a total of 154 unit test cases to evaluate the projects' functionalities. Despite the existence of unit tests, we were unable to calculate information regarding testing criteria such as code coverage, since \textit{Unity} does not provide an out-of-the-box solution for code coverage (at the time of writing) and uses its own fork of \textit{Mono} \cite{Mono}, which makes it impractical to use other C\# coverage tools \cite{Haas:2014}. Based on the information collected, it can be observed that, of the 119 analyzed projects, only 6 (5.04\%) have some software testing activity, and even the projects that have test cases do not present enough tests to ensure that the main functionalities of the applications were adequately tested. Another interesting aspect observed when analyzing the 6 projects that have test cases is that the \textit{Bowlmaster} project is part of a popular course, with more than 334,000 students enrolled\footnote{\url{https://github.com/CompleteUnityDeveloper/08-Bowlmaster-Original}}, which aims to teach VR application development practices. This demonstrates a concern on the part of educators with the awareness that testing is a determining factor in the software development process, and it is expected that students will be capable of applying software testing concepts in their future projects. Concerning $RQ_1$, we came to the conclusion that there is not yet a consensus regarding the application of software testing practices for VR applications, and this motivated us to explore the next research questions. These results are in agreement with the most recent papers in the literature. Karre et al. \cite{Karre:2019} conducted an empirical study of VR practitioners to learn the current practices and challenges faced in industry.
Their software-testing-related results point to the absence of adequate tools, as well as uncertainty about how to test a VR app apart from conducting a standard field evaluation. As a consequence, this lack of usability evaluation methods and automated testing tools tends to increase the time needed to release a VR product. One possible explanation lies in the challenge of systematizing how the behavior of a test case can be measured in the context of VR programs. This difficulty is described in the literature as the ``test oracle problem" and arises in cases where traditional means of gauging the execution of a test case are impractical or of little use in judging the correctness of outputs generated from input domain data \cite{Rapps:1985, Barr:2015}. Test oracles deal with a primary issue in software testing activities: deciding on program correctness based on predetermined test data. In general, the test oracle accesses a data set needed to evaluate the correctness of the test output. This data set is taken from the specification of the program under test and should contain sufficient information to support the final decision of the oracle \cite{Oliveira14}. It should be emphasized that building a project with good test cases is not an easy task. Testing requires discipline, concentration, and extra effort \cite{Kasurinen:2010}. As a reward, the code presents characteristics such as cleanliness, maintainability, loose coupling, and reusable APIs; moreover, testable code is easier to understand, maintain, and extend. In order to understand the risks and advantages of these characteristics and to accurately answer $RQ_2$ and $RQ_3$, in the next sections we compare VR and Non-VR projects concerning the distribution of code smells and fault proneness.
\subsection{Distribution of Code Smells}
Observing the lack of software testing practice in all the other projects, we decided to investigate how this absence is reflected within the projects. To do so, we measured the incidence of code smells \cite{Fowler:2018} within the projects investigated. This leads us to the second research question ($\mathbf{RQ_2}$): \textit{``What is the distribution of architecture, implementation, and design smells in VR projects?"}. In general, the presence of code smells in software projects indicates the presence of quality problems. For instance, one of the most well-known code smells, God Class, is defined as a class that knows or does too much in the software system. A God Class is a strong indicator of possible problems because such a component aggregates functionality that should be distributed among several other components, therefore increasing the risk of software faults \cite{Hall:2014}. Such problems directly impact characteristics such as maintainability and contribute to making it difficult for the software to evolve. To perform the evaluation discussed in this section, we started from the assumption that projects that do not have test cases tend to have lower code quality than projects that have been tested, considering that their developers probably have not observed aspects capable of triggering unexpected behavior in the application; moreover, the smaller the number of bugs in a system, the higher the quality of the project.
In order to better understand what is related to the lack of tests, we compared the results obtained in the VR projects with the results obtained in Non-VR applications that have well-defined test cases. To better address $\mathbf{RQ_2}$, we identified three different types of code smells in the projects:
\begin{itemize}
\item \textbf{Architecture smells}: focus on identifying points of interest for possible structural problems that hamper activities such as debugging and refactoring and increase the cost of fault correction, since, when present, they increase the complexity of the software \cite{Mo:2015}.
\item \textbf{Implementation smells}: implementation smells, or code smells, were first introduced by Fowler \cite{Fowler:2018} and establish a concept for classifying shortcomings in object-oriented design principles. This class of smells covers principles such as data abstraction, encapsulation, modularity, hierarchy, etc.
\item \textbf{Design smells}: specific types of structures that may indicate the violation of a fundamental design principle, which can impact aspects of design quality \cite{Suryanarayana:2014}.
\end{itemize}
In order to calculate the distribution of the code smells described above within the projects, we used the \textit{Designite} tool \cite{Sharma:2016}. The smells were classified according to the number of occurrences in the analyzed classes and their percentage distribution. The data are presented in Tables \ref{tab:architecture_smells}, \ref{tab:implementation_smells} and \ref{tab:design_smells}. It is worth mentioning that test case classes were not taken into account in this smell classification, since our initial target was to measure the quality aspects of the source code classes; besides, smells in software test code require a whole different classification approach \cite{Tufano:2016}.
\subsubsection{Architecture smells results}
It can be observed in Table \ref{tab:architecture_smells} that, among the VR projects, there is a low incidence of architecture smells, with only three types (ACD, AFC, and AGC) presenting a percentage of occurrence between 0.93\% and 1.70\%. In the Non-VR projects, this category of smells had an even lower incidence: the AUD, AGC, and AFC smells showed the highest occurrence rates, with percentages between 0.26\% and 0.57\%. The behavior of the VR projects can be explained by the fact that, within the \textit{Unity} platform, although an object-oriented language (C\#) is mostly used, the development model follows a component-based programming approach. This approach focuses on the separation of concerns regarding the features to be developed in the system.
\begin{table}[ht]
\scalefont{0.73}
\centering
\caption{Description of the detected architecture smells and their distribution}
\begin{tabular}{l|l|r|r|r|r}
\hline
\textbf{ID} & \textbf{Smell} & \multicolumn{2}{c|}{\textbf{VR}} & \multicolumn{2}{c}{\textbf{Non-VR tested}} \\
 & & \# & \% & \# & \% \\
\hline
AAI & Ambiguous Interface & 29 & 0.13\% & 1 & 0.02\% \\
ACD & Cyclic Dependency & 212 & 0.99\% & 6 & 0.14\% \\
ADS & Dense Structure & 3 & 0.01\% & 2 & 0.05\% \\
AFC & Feature Concentration & 366 & 1.70\% & 24 & 0.57\% \\
AGC & God Component & 201 & 0.93\% & 12 & 0.29\% \\
ASF & Scattered Functionality & 84 & 0.39\% & 8 & 0.19\% \\
AUD & Unstable Dependency & 78 & 0.36\% & 11 & 0.26\% \\
\hline
Std Dev & \multicolumn{1}{c|}{\graycell} & 118.34 & \multicolumn{1}{c|}{\graycell} & 7.17 & \multicolumn{1}{c}{\graycell} \\
\hline
Average & \multicolumn{1}{c|}{\graycell} & 139.0 & \multicolumn{1}{c|}{\graycell} & 9.14 & \multicolumn{1}{c}{\graycell} \\
\hline
\end{tabular}
\label{tab:architecture_smells}
\end{table}
Among the main advantages of a component-based programming approach, we can point out the high reusability of the developed components, due to the low coupling of the components that make up the systems. Although Non-VR applications present lower rates of architecture smells, they mainly show a higher incidence of the AFC smell. This smell occurs when a component implements more than one architectural concern/feature, and its presence can be explained by the programming model adopted: a large part of the Non-VR projects are web applications, which typically use the Model-View-Controller (MVC) pattern. As shown by Aniche et al. \cite{Aniche:2018}, systems that adopt such an architecture can be affected by the kinds of poor practices that lead to the appearance of this smell. From a software testing point of view, the lower rate of architecture smells can be considered a decisive success factor, since low dependence between modules is a characteristic that facilitates the application of unit tests \cite{Aniche:2013}. In general, when it is necessary to communicate with other units of code, \textit{stubs} or \textit{mock} objects are sometimes used to represent this communication. A huge benefit of this approach is that, by lowering the coupling of the system, it is possible to reproduce complex scenarios more easily. Despite the advantages, it is important to keep in mind that there are some threats related to the use of a component-based approach in the context of integration testing, which, as discussed in subsection \ref{subsec:what_test}, must be one of the characteristics prioritized when testing VR applications. The main problems are related to the use of components produced by third parties, since in general they work as a black box and it is necessary to trust their correct functioning. An example of this model is the component asset store available on the Unity platform\footnote{\url{https://assetstore.unity.com/}}. In order to better understand the presented results, for each of the categories of smells analyzed, we verified whether there is, in fact, a statistical difference in the presence of smells between the group of classes that were not tested and the group of classes that were tested during development. Due to the low number of smell types in each category (architecture, design, and implementation), and since we cannot guarantee that the collected data follow a normal distribution, we applied the Mann-Whitney test \cite{Fay:2010} to each category of smells.
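This comparison is straightforward to express with a standard statistical library. The snippet below is a minimal sketch using SciPy; the two samples shown are the aggregate per-smell-type counts from Table \ref{tab:architecture_smells}, used here only as placeholders for the actual test inputs, so the resulting p-value need not match the values reported next.
\begin{verbatim}
from scipy.stats import mannwhitneyu

# Occurrence counts per smell type in each group (placeholder
# values taken from the architecture-smell table).
untested_vr = [29, 212, 3, 366, 201, 84, 78]
tested_nonvr = [1, 6, 2, 24, 12, 8, 11]

stat, p = mannwhitneyu(untested_vr, tested_nonvr,
                       alternative="two-sided")
print(f"U = {stat}, p = {p:.5f}")  # reject H0 when p < alpha = 0.05
\end{verbatim}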
The null hypothesis ($H_0$) of the Mann-Whitney test states that \textit{``the distribution of the variable in question is identical (in the population) in the two groups"}, that is, there is no difference in the presence of smells between classes that have not been tested and classes that have been tested; the alternative hypothesis ($H_1$) states that \textit{``the distributions in the two groups are not the same"}, that is, there is a statistical difference in the incidence of smells between the two groups. Considering $\alpha = 0.05$, the complement of a confidence level of 95\%, $H_0$ could be rejected for the architecture smells with a p-value of 0.00760. This indicates a statistical difference in the presence of architecture smells between classes that were not tested and classes that were tested. Through a descriptive analysis of the number of occurrences of each type of smell, it could be observed that classes that were not tested tend to present a higher rate of architecture smells than classes that were tested.
\subsubsection{Implementation smells results}
In contrast to the architecture smells, Table \ref{tab:implementation_smells} shows a high rate of implementation smells in the VR projects. We highlight ILI, ILS, and IMN, which had occurrence percentages of 31.81\%, 55.55\%, and 117.46\%, respectively. Although it does not pose a direct risk to the source code produced, the ILI smell may be an indicator that something should be revised or refactored: a very long identifier suggests that too much text is needed to distinguish or identify variables and, in some instances, that the programmer may not be using the most suitable data structure.
\begin{table}[ht]
\scalefont{0.69}
\centering
\caption{Description of the detected implementation smells and their distribution}
\begin{tabular}{l|l|r|r|r|r}
\hline
\textbf{ID} & \textbf{Smell} & \multicolumn{2}{c|}{\textbf{VR}} & \multicolumn{2}{c}{\textbf{Non-VR}} \\
 & & \# & \% & \# & \% \\
\hline
ICM & Complex Method & 1,812 & 8.42\% & 9 & 0.22\% \\
ICC & Complex Conditional & 684 & 3.18\% & 14 & 0.33\% \\
IDC & Duplicate Code & 9 & 0.04\% & 1 & 0.02\% \\
IECB & Empty Catch Block & 150 & 0.70\% & 5 & 0.12\% \\
ILM & Long Method & 583 & 2.71\% & 9 & 0.22\% \\
ILPL & Long Parameter List & 2,117 & 9.84\% & 13 & 0.31\% \\
ILI & Long Identifier & 6,841 & 31.81\% & 12 & 0.29\% \\
ILS & Long Statement & 11,947 & 55.55\% & 40 & 0.96\% \\
IMN & Magic Number & 25,264 & 117.46\% & 36 & 0.86\% \\
IMD & Missing Default & 931 & 4.33\% & 17 & 0.65\% \\
IVMCC & Virtual M. C. C.** & 35 & 0.16\% & 5 & 0.12\% \\
\hline
Std Dev & \multicolumn{1}{c|}{\graycell} & 7,425.91 & \multicolumn{1}{c|}{\graycell} & 11.87 & \multicolumn{1}{c}{\graycell} \\
\hline
Average & \multicolumn{1}{c|}{\graycell} & 4,579.36 & \multicolumn{1}{c|}{\graycell} & 14.63 & \multicolumn{1}{c}{\graycell} \\
\hline
\multicolumn{6}{l}{**Virtual Method Call from Constructor} \\
\hline
\end{tabular}
\label{tab:implementation_smells}
\end{table}
ILS occurs when there is an excessively long statement. Long statements tend to make the code difficult to manage and are consequently problematic from the perspective of software testing practice: very long code snippets tend to be harder to test, because they often become too complex compared to smaller snippets, which are managed more efficiently. Finally, IMN occurs when an unexplained number is used in an expression. In general, magic numbers are unique values with some symbolic meaning. Good programming practice indicates that, in these cases, such numbers should be declared as constants, to facilitate the reading of the source code as well as to standardize their use. The Non-VR projects again presented a lower occurrence rate. The most frequent smells were ILS, IMN, and IMD, which achieved, respectively, percentages of 0.96\%, 0.86\%, and 0.65\%. The occurrence of this type of smell is connected with the lack of guidelines for code standardization, as well as the lack of code refactoring practices. Usually, numbers have a meaning; therefore, it is recommended that they be assigned to variables to make the code more readable and self-explanatory. The names of the variables should at least reflect what the variable means, not necessarily its value. Basic guidelines also advise giving an appropriate context and explanation, within the tests, for whatever numbers appear in them. The more sloppily the tests are written, the worse the actual code will be, which can open a door to possible faults; details about that possibility are addressed in the next section. From the standpoint of software testing, opting to use constants instead of magic numbers ensures that, once the value of a constant has been tested, there is no risk of it being erroneously declared in the future. We also applied the Mann-Whitney test to verify whether there is a statistical difference in the presence of implementation smells between the group of classes that were not tested and the classes that were tested. Adopting a confidence level of 95\%, the test presented a p-value of 0.00040, which rejects the null hypothesis and confirms the data presented in Table \ref{tab:implementation_smells}, indicating that classes that were tested tend to present a lower rate of implementation smells.
\begin{table}[ht]
\scalefont{0.65}
\centering
\caption{Description of the detected design smells and their distribution}
\begin{tabular}{l|l|r|r|r|r}
\hline
\textbf{ID} & \textbf{Smell} & \multicolumn{2}{c|}{\textbf{VR}} & \multicolumn{2}{c}{\textbf{Non-VR}} \\
 & & \# & \% & \# & \% \\
\hline
DBH & Broken Hierarchy & 245 & 1.14\% & 8 & 0.19\% \\
DBM & Broken Modularization & 991 & 4.61\% & 46 & 1.10\% \\
DCM & Cyclically-dependent M. & 3,149 & 14.64\% & 45 & 1.08\% \\
DCH & Cyclic Hierarchy & 6 & 0.03\% & 4 & 0.10\% \\
DDH & Deep Hierarchy & 0 & 0.00\% & 1 & 0.02\% \\
DDE & Deficient Encapsulation & 8,101 & 37.67\% & 13 & 0.31\% \\
DDA & Duplicate Abstraction & 2,469 & 11.48\% & 11 & 0.26\% \\
DHM & Hub-like Modularization & 4 & 0.02\% & 16 & 0.38\% \\
DIA & Imperative Abstraction & 627 & 2.92\% & 9 & 0.22\% \\
DIM & Insufficient Modularization & 1,171 & 5.44\% & 44 & 1.05\% \\
DMH & Missing Hierarchy & 18 & 0.08\% & 2 & 0.05\% \\
DMA & Multifaceted Abstraction & 209 & 0.97\% & 2 & 0.05\% \\
DMH & Multipath Hierarchy & 1 & 0.00\% & 2 & 0.05\% \\
DRH & Rebellious Hierarchy & 389 & 1.81\% & 9 & 0.22\% \\
DUE & Unexploited Encapsulation & 15 & 0.07\% & 2 & 0.05\% \\
DUH & Unfactored Hierarchy & 483 & 2.25\% & 7 & 0.17\% \\
DUA & Unnecessary Abstraction & 3,741 & 17.39\% & 17 & 0.41\% \\
DTA & Unutilized Abstraction & 6,987 & 32.49\% & 23 & 0.55\% \\
DWH & Wide Hierarchy & 64 & 0.30\% & 5 & 0.12\% \\
\hline
Std Dev & \multicolumn{1}{c|}{\graycell} & 2,344.41 & \multicolumn{1}{c|}{\graycell} & 14.59 & \multicolumn{1}{c}{\graycell} \\
\hline
Average & \multicolumn{1}{c|}{\graycell} & 1,508.94 & \multicolumn{1}{c|}{\graycell} & 14.00 & \multicolumn{1}{c}{\graycell} \\
\hline
\end{tabular}
\label{tab:design_smells}
\end{table}
\subsubsection{Design smells results}
Finally, we have the design smells, which seek to identify breaches of design principles. From Table \ref{tab:design_smells}, it is possible to conclude that this class of smells presented the highest incidence in the VR projects. The DUA, DTA, and DDE smells had the highest percentages of occurrence, with 17.39\%, 32.49\%, and 37.67\%, respectively. The DUA smell deals with the practice of unnecessary abstractions and is identified when an abstraction has more than one responsibility attributed to it. This smell tends to occur when procedural programming practices are applied in the context of object-oriented programming languages \cite{Suryanarayana:2014}. For VR applications that adopt component-based programming, the appearance of this smell can be explained by the fact that the programming approach focuses on creating interchangeable code modules that work almost independently and do not require familiarity with their inner workings in order to be used. Unnecessary design abstractions increase complexity needlessly and affect the comprehensibility of the overall design; from a software testing point of view, this bad practice tends to hamper testing. DTA occurs when an abstraction is left unused, either because it is not being used directly or because it is not reachable in the source code. This smell correlates with DUA, since unnecessary abstractions tend not to be used. Another factor in the appearance of this smell is linked to code maintenance/refactoring activities, which tend to leave traces of code that are no longer needed. From the standpoint of software testing, the existence of a test base that can be used for regression testing tends to facilitate the localization of source code that is no longer necessary, reducing the occurrence of this smell. From a tester's point of view, code that is not being used in the project does not need to be tested; therefore, identifying these snippets can lead to more efficient testing activities.
Finally, the DDE smell, which identifies cases of poor encapsulation, had the highest occurrence rate in this class of smells. This smell occurs when the declared attribute visibility of a class is more permissive than necessary, for example, when the attributes of a class are unnecessarily declared as public. From the standpoint of software testing, separation of concerns allows implementation details to be hidden. If an abstraction exposes implementation details unnecessarily, it leads to undesirable coupling in the source code. This impacts the testing activity because checking units that have a high degree of coupling becomes more challenging, due to the need for more complex mocks and stubs. Similarly, a high degree of coupling causes changes made in one code snippet to reflect in various parts of the application, causing previously designed tests to fail if they were not adequately constructed. Non-VR applications had a lower occurrence in this category of smells; DBM and DCM presented the highest occurrence, with 1.10\% and 1.08\%, respectively. The occurrence rate of the DCM smell is related to the cyclic dependency issue of the MVC model, while the DBM smell arises when data and/or methods that ideally should have been localized in a single abstraction are separated and spread across multiple abstractions. Once again, we applied the Mann-Whitney test to check whether the data obtained from our empirical evaluation draw a real picture of the behavior of classes that do not have tests compared to classes that were properly tested; the test indicated, with a p-value of 0.00089, that classes that were tested tend to present lower rates of design smells than classes that were not tested. Our second research question ($RQ_2$) sought to answer \textit{``What is the distribution of architecture, implementation, and design smells in VR projects?"}. We investigated the main types of smells in VR applications and compared the results with Non-VR applications. We observed that, in the context of VR applications, there is a greater incidence of code smells related to implementation and design, since these two categories contain code smells that recur frequently due to characteristics inherent to the development of VR applications. We also presented a discussion about how software testing practices can benefit from avoiding the smells with the highest occurrence rates. Finally, the statistical tests performed showed that, when comparing VR and Non-VR projects, the practice of software testing can contribute to increasing quality and reducing the presence of code smells.
\begin{figure*}
\centering
\includegraphics[scale= 0.8]{acl_workflow.pdf}
\caption{General process of the ACL fault prediction approach (Source: own construction)}
\label{fig:acl_workflow}
\end{figure*}
According to Hall et al. \cite{Hall:2014}, code smells have a significant but small effect on faults. This can justify the fact that the Non-VR projects, which have test cases, present a lower rate of code smells when compared to the VR applications, which mostly do not have a well-defined testing activity. However, the presence of smells not only hides potential source code flaws but also contributes to hindering the maintainability and evolution of the source code in larger projects.
This leads us to the last research question of this study ($RQ_3$), which aims to investigate the fault proneness of VR projects. As in the code smell analysis, to evaluate this in more depth we included Non-VR projects in the analysis so as to better understand the results.
\subsection{Analyzing fault proneness}
As mentioned before, the presence of code smells can indicate the absence of quality attributes in the source code, and this can be an indication of faults in the software \cite{Hall:2014}. Similarly, as previously mentioned, a high occurrence rate of code smells in the projects can hinder the practice of software testing. To understand the risks of neglecting this activity, we analyzed the projects concerning fault proneness. Since code smells are identified according to rules and thresholds defined over code metrics \cite{Khomh:2009}, we aim to investigate ($\mathbf{RQ_3}$) the question: \textit{``Can we draw a relationship between code metrics and fault proneness?"}. To do so, we use the code metrics described in Table \ref{tab:projects_metrics} with a fault prediction technique, which uses the metric values as indicators to suggest whether a given piece of source code is fault-prone or not. By exploring relationships between software metrics and fault proneness, we seek to justify the need for software testing activities. For instance, a high value of a specific metric may lead us to suspect, with high probability, the reliability of some parts of the code. The effectiveness of fault prediction techniques is often demonstrated using historical repository data \cite{Herbold:2017}. However, once these techniques are adopted, it is not clear how they would perform on projects that do not match the characteristics (language, platform, domain) of the built model \cite{Peng:2015}. Since we do not have access to a dataset or a bug-tracking history maintained with VR systems data, we exploited an unsupervised fault prediction technique \cite{Yang:2016}, which does not rely on historical data, to investigate fault proneness in the analyzed projects. We use the Average Clustering and Labeling (ACL) \cite{Yang:2016} approach to predict fault proneness in unlabeled datasets. ACL models obtain good prediction performance, comparable to typical supervised learning models in terms of precision and recall, offering a viable choice for fault prediction when no historical fault data are available. This study can help software developers to understand the characteristics of VR software and the potential implications of neglecting software testing activities. Raising awareness is the first step towards VV\&T activities. Figure \ref{fig:acl_workflow} describes the process used by the approach to decide whether a given code instance is fault-prone or not. In general terms, the process consists of four steps, sketched in code form below:
\begin{itemize}
\item calculate the average value of each code metric used;
\item build a metric violation matrix;
\item count the metric violations of each instance; and
\item decide whether the analyzed instance is fault-prone or clean.
\end{itemize}
The first step is self-explanatory and uses the individual metrics of each class, similar to those presented in Table \ref{tab:projects_metrics}, as input data. After this stage, a violation matrix, which evaluates each metric, is built using as a basis the average of each metric across all classes in the project.
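The listing below gives a compact, illustrative reading of these four steps, assuming the metrics are available as a per-class matrix. The cutoff rule shown (half the number of metrics) is a simplification for illustration only; the exact rule is defined in \cite{Yang:2016}.
\begin{verbatim}
import numpy as np

def acl_label(metrics):
    """Flag classes as fault-prone following the four ACL steps.

    metrics: array of shape (n_classes, n_metrics).
    """
    averages = metrics.mean(axis=0)   # step 1: per-metric averages
    violations = metrics > averages   # step 2: violation matrix
    counts = violations.sum(axis=1)   # step 3: violations per class
    cutoff = metrics.shape[1] / 2     # step 4: cutoff derived from
    return counts > cutoff            #         the number of metrics

# Toy usage: four classes described by three metrics each.
m = np.array([[10, 2, 5], [50, 9, 40], [12, 3, 6], [60, 8, 35]])
print(acl_label(m))  # [False  True False  True]
\end{verbatim}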
The next step is to count the number of violations of each class with respect to the evaluated metrics and, finally, to classify whether a particular instance is fault-prone or not. To perform the classification, it is necessary to define a cutoff that is used as a threshold and, if exceeded, identifies the class as fault-prone. The cutoff point is calculated from the number of metrics used in the evaluation. Full details of the approach and its implementation can be found in \cite{Yang:2016} and in the repository that contains the information about this work. The 119 VR projects were analyzed using the described approach and, according to the classification metric adopted, out of the 21,508 classes contained in all the projects, a total of 2,627 classes, or 12.21\%, were classified as having a high probability of containing faults, since they exceed the threshold defined by the approach for considering them clean. Similarly, in the 107 Non-VR projects, out of 21,568 classes, a total of 1,921 were labeled as fault-prone, which corresponds to 8.90\% of the analyzed classes. A superficial analysis might give a mistaken view of these results, suggesting that the reported fault-proneness percentage is low and, therefore, that the time and expense necessary to identify such classes is not justified. However, according to previous investigations \cite{Walkinshaw:2018}, the \textit{Pareto} principle also tends to apply to software faults: it is believed that 20\% of the files are responsible for up to 80\% of the faults found in a project. Therefore, it is natural for the percentage of fault-prone classes to follow this same trend; it is not an exact proportion, and the classification approach is not an exact formula, serving only as a mechanism to assist efforts to apply a software testing approach. As pointed out by Nam \cite{Nam:2014}, defect prediction approaches play a complementary role, helping to identify potential problems in the source code as well as providing a mechanism to improve it and, consequently, to get rid of productivity bottlenecks and future issues. Thus, the results presented here are not intended to point out the exact number of problems in an evaluated software product, but to strengthen the hypothesis that projects that adopt quality criteria, such as software testing practice, tend to be less predisposed to future issues. It is also important to note that, since there is no precise information about the test criteria used in the Non-VR projects, nor about the coverage reached by the designed tests, it is impossible to guarantee that the tests designed for a class are enough to ensure that it is free of problems. Therefore, it is natural that the fault-proneness percentages of projects that have not been tested (VR projects) and projects that have test cases (Non-VR projects) are somewhat similar. It can be observed that, despite having a larger number of classes and lines of code than the VR projects, the Non-VR projects presented a lower fault-proneness rate. It is worth noting that the fault-proneness algorithm is executed only on the classes related to the application's source code, thus disregarding the test classes in the Non-VR projects.
This analysis could be an indication that, due to the practice of testing, the classes of the Non-VR projects have a higher degree of reliability and are therefore less fault-prone than the classes in the VR projects, which mostly do not have test cases.
\begin{figure*}
\centering
\includegraphics[scale = 0.35]{pie_chart.pdf}
\caption{Classification of the VR projects according to the ACL approach (Source: own construction)}
\label{fig:pie_chart_vr}
\end{figure*}
These numbers can be observed in Figure \ref{fig:pie_chart_vr} and are alarming, since they show the negative impact that the lack of robust and standardized testing technologies can cause to the software industry \cite{Planning:2002}. Consequently, it contributes to an increase in the incidence of avoidable faults that tend to appear only after the software is used by its end users. Similarly, the software development cost tends to increase, because historically the process of identifying and correcting faults during development represents more than half of the costs incurred during the development cycle \cite{Brooks:1995}. This delay in product development can lead to situations such as an increase in the time needed to put a product on the market, also resulting in lost market opportunities \cite{Afonso:2008}. We went further, trying to understand how the faults pointed out by the approach are distributed across the projects. Since the projects vary greatly in size, we grouped them into six categories (by number of classes) in order to observe how the distribution of fault-prone classes occurs.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.6]{fault_distribution.pdf}
\caption{Distribution of fault-prone classes according to the size of the projects (Source: own construction)}
\label{fig:lines_chart}
\end{figure}
Figure \ref{fig:lines_chart} shows this distribution. It can be observed that, in both VR and Non-VR projects, there is a relation between the number of fault-prone classes and the size of the projects: the higher the number of classes in a project, the higher the average number of fault-prone classes. This leads us to conclude that neglecting testing activities in larger projects may be even riskier in terms of the project's success. Further analyses could be extracted from the data obtained. However, we believe that the presented data provide a clear answer to $RQ_3$, making it clear that, in a general context, the lack of software testing techniques has a direct impact on quality attributes, as demonstrated by the metrics extracted from the analyzed projects; this directly reflects the adoption of bad development practices, which lead to code smells and, consequently, open the way for an increase in faults. $RQ_3$ sought to answer the question \textit{``Can we draw a relationship between code metrics and fault proneness?"}, and, by applying the fault-proneness detection approach to the projects investigated, it could be observed that neglecting the testing activity can lead to a higher probability of development problems. According to our analysis, the VR projects, which do not have test cases, have a higher propensity to present faults than the Non-VR projects, and that propensity tends to increase as the complexity of the projects increases.
It was also observed that, although all the Non-VR projects have test cases, they still present a high rate of fault proneness. This underscores the importance of developing the practice of software testing within the scope of project development. Although the Non-VR projects have test cases, the test sets provided do not meet basic test criteria, such as code coverage, so the part of the code that is not tested remains prone to possible failures. Another point that this study raised is the need for domain-specific test practices. Software from different domains has different characteristics, which must be adequately investigated. In the context of VR applications, the simple use of unit tests may not be sufficient to attest to the quality of the developed product, since technological advances have led to systems with features such as images, sounds, videos, and differentiated interaction, presenting new challenges compared to software testing in conventional domains, such as the lack of information on typical defects and even the lack of a precise definition of a test case, along with the oracle problem \cite{Rapps:1985, Barr:2015}.
\section{Do We Really Need to Test Virtual Reality Software?}\label{sec:experiment}
Considering the popularization of VR application development, we are interested in understanding, from the software engineering point of view, how the development process of these applications is currently conducted. We are especially interested in the software testing practices in the development process of such applications, in order to address what kinds of malfunctions the lack of testing practice can lead to. One of the most used approaches to quantify quality attributes in software projects is the evaluation of source code metrics. Source code metrics are a significant component of the software measurement process and are commonly used to measure fault proneness and to improve the quality of the source code itself \cite{Palomba:2014}.
\begin{figure*}[t]
\centering
\includegraphics[scale = 0.6]{svr_study_design.pdf}
\caption{Steps taken to carry out the study (Source: own construction)}
\label{fig:study_design}
\end{figure*}
Another factor that can be exploited to evaluate code quality is the identification of anti-patterns, since some studies show a correlation with fault proneness \cite{Khomh:2012}. Therefore, these two aspects are taken into account in the evaluation carried out in our study to investigate the quality of the code in the context of software testing. Ghrairi et al. \cite{Ghrairi:2018} conducted an exploratory study on \textit{Github} and \textit{Stack Overflow} to investigate the most popular languages and engines used in VR projects. According to their results, the most popular language for VR development is C\#, and Unity is the most used game engine for VR application development. Thus, we focus our analyses on these characteristics.
\subsection{Overview of the study}
We formulated the following research questions regarding the quality analysis of VR projects.
\begin{itemize}
\item $\mathbf{RQ_1}:$ \textit{``How does testing happen in open-source VR software systems?"} We focus on understanding how testing practices are being applied in open-source VR projects.
\item $\mathbf{RQ_2}:$ \textit{``What is the distribution of architecture, implementation, and design smells in VR projects?"} We investigated the distribution of smells to find out whether there is a set of code smells that occur more frequently in VR systems.
\item $\mathbf{RQ_3}:$ \textit{``Can we draw a relationship between code metrics and fault proneness?"} It is commonly believed that code metrics relate to fault proneness, i.e., if a set of code metrics reaches a predefined threshold, it is very likely that the project also contains faults. We investigate this using an unsupervised defect prediction approach.
\end{itemize}
\begin{table*}[ht]
\scalefont{0.85}
\centering
\caption{Characteristics of the repositories used in the experiment}
\begin{tabular}{l|r|r|r|r|r|r}
\hline
\multicolumn{1}{c|}{\multirow{2}[4]{*}{Features / Characteristics}} & \multicolumn{2}{c|}{Small size (1 $\thicksim$ 80 classes)} & \multicolumn{2}{c|}{Medium size (81 $\thicksim$ 200 classes)} & \multicolumn{2}{c}{Large size (201+ classes)} \\
\cmidrule{2-7} & \multicolumn{1}{c|}{\textbf{VR}} & \multicolumn{1}{c|}{\textbf{Non-VR}} & \multicolumn{1}{c|}{\textbf{VR}} & \multicolumn{1}{c|}{\textbf{Non-VR}} & \multicolumn{1}{c|}{\textbf{VR}} & \multicolumn{1}{c}{\textbf{Non-VR}} \\
\hline
Nº C\# Classes & 31.5 & 30.2 & 121.7 & 126.4 & 374.8 & 4320 \\
\hline
Nº Branches & 1.45 & 1.7 & 1.7 & 2.5 & 2.56 & 11.4 \\
\hline
Nº Commits & 78.4 & 45.5 & 48.8 & 245 & 66.2 & 8931 \\
\hline
Nº Contributors & 2.6 & 1.4 & 2.5 & 2.2 & 2.9 & 47.1 \\
\hline
Nº Forks & 34.7 & 1.9 & 10.9 & 1 & 76.6 & 82.4 \\
\hline
Nº Subscribers & 15.8 & 2 & 8.1 & 2.5 & 22.3 & 176.8 \\
\hline
Nº Stars & 111.6 & 4 & 23.8 & 0.7 & 187.8 & 89.4 \\
\hline
\end{tabular}%
\label{tab:github_stats}%
\end{table*}%
\subsection{Study Design}
The process of selecting open-source projects consists of a systematized search on \textit{Github} using the keywords ``virtual reality" and ``VR". With the objective of drawing a more specific profile, the search focused only on projects developed for the \textit{Unity} platform, since it has emerged as the most popular VR development platform due to its extensive documentation \cite{Ghrairi:2018}. Our primary aim is to explore virtual reality projects from both the project and source code perspectives. To do so, we cataloged and analyzed a total of 151 open-source projects available on \textit{Github}. Some of the projects could not be analyzed due to either missing external dependencies or custom build mechanisms (i.e., missing standard C\# project files); thus, we were able to analyze a total of 119 projects. In order to draw a picture concerning research questions $RQ_2$ and $RQ_3$, we also cataloged a set of general (Non-VR) open-source projects with similar characteristics (the same C\# programming language). The goal is to compare the information observed in the VR applications with Non-VR applications. We therefore cataloged a total of 177 Non-VR projects. After an individual analysis, we removed duplicated projects and projects that had missing external dependencies or custom build mechanisms, ending up with a total of 107 Non-VR projects usable in our experiment. Since our goal is to analyze the impacts that the lack of software testing practice can cause on VR projects, for the Non-VR projects we use only data related to classes that have been properly tested. Thus, for all the experiments, two types of projects were gathered: the first is a set of VR projects, and the second is a set of general-purpose projects that hold some unit test code used to evaluate their modules.
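The repository search step can be automated against GitHub's public search API. The sketch below illustrates how such a query could be issued; the query string and the number of result pages are illustrative, and the actual selection in this study also involved individual manual analysis.
\begin{verbatim}
import requests

def search_repos(query, pages=2):
    """Collect repository metadata from GitHub's search API."""
    items = []
    for page in range(1, pages + 1):
        resp = requests.get(
            "https://api.github.com/search/repositories",
            params={"q": query, "sort": "stars",
                    "per_page": 100, "page": page},
            headers={"Accept": "application/vnd.github+json"},
            timeout=30)
        resp.raise_for_status()
        items.extend(resp.json()["items"])
    return items

# Candidate VR projects written in C#, ordered by popularity.
vr_candidates = search_repos('VR "virtual reality" language:C#')
\end{verbatim}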
Thus, for all the experiments, two types of projects were gathered. The first is a set of VR projects. The second is a set of general-purpose projects that contain some unit test code used to evaluate their modules. The purpose of the first step is to manually evaluate the VR projects in order to understand how concerned open-source VR application developers are with testing practices for this specific domain. Based on the results of this initial analysis, the second step is to assess how much the lack of software testing activity can contribute to the construction of code that becomes more difficult to maintain over time. For this, an analysis is made regarding the presence of code smells, and the results obtained in VR projects are compared with the results obtained in general-purpose projects. Finally, the last step aims to assess how well code metrics can indicate fault proneness. The goal is to evaluate the projects and observe how the VR projects and the tested projects behave, in order to understand whether testing practice can make a given code snippet less fault-prone compared to untested code snippets. The procedure carried out during the execution of the experiment is described in Figure \ref{fig:study_design} and consists of the following steps: \begin{enumerate} \item Search the \textit{Github} platform for projects with the targeted characteristics. \begin{enumerate} \item VR projects that were developed using the Unity platform. \item General-purpose projects that were developed using the C\# programming language and have software testing practices in their composition. \end{enumerate} \item Extract data related to testing practice in the VR application projects. \item Extract code metrics and code smell information for all projects. \item Use code metrics to calculate fault proneness in all projects. \item Summarize the results and answer the research questions. \end{enumerate} \subsection{Overview of Projects} One of the biggest concerns when carrying out experiments using open-source artifacts is the representativeness of the artifacts and, therefore, whether the results found can be properly generalized \cite{Hillenbrand:2005}. With this in mind, we were careful to select the largest possible number of study objects, in such a way that they cover a wide variety of characteristics and are not limited to toy examples. Table \ref{tab:github_stats} summarizes the information on the artifacts used in the experiments developed in this paper. The data related to the main characteristics of the repositories are presented as averages, with the repositories classified as small (up to 80 classes), medium (between 81 and 200 classes), and large (more than 200 classes). Below we present the main features cataloged for the repositories used: \begin{itemize} \item Branches - branches are used to implement new features or correct faults, in such a way that unstable code is not merged into the main project files. \item Commits - commits refer to the act of submitting the latest source code changes to the repository and making these changes part of the main version of the repository. \item Contributors - the number of people outside the main project development team who have contributed or wish to contribute changes to the project. \item Forks - copies of a repository. Forking a repository allows third parties to perform experiments on the source code without affecting the original project.
\item Subscribers - the number of people who are registered to receive notifications related to updates on activities and changes made to the project. \item Stars - indicates the number of people who have starred the project so that they can follow what happens with it in the future (according to Kalliamvakou et al. \cite{Kalliamvakou:2014}, this metric is commonly used to measure the popularity of repositories). \end{itemize} In order to provide a view of the general scope of the artifacts used, Table \ref{tab:projects_overview} presents information about the general characteristics of all projects. \begin{table}[ht] \centering \caption{General characteristics of the analyzed projects} \begin{tabular}{l|r|r} \hline \multicolumn{1}{c|}{\textbf{Attributes}} & \multicolumn{1}{c|}{\textbf{VR}} & \multicolumn{1}{c}{\textbf{Non-VR}} \\ \hline Projects & 119 & 107 \\ Number of tested classes & 63 & 4,186 \\ Number of classes (total) & 21,508 & 21,563 \\ Lines of code (C\# only) & 2,314,522 & 2,455,766 \\ \hline \end{tabular} \label{tab:projects_overview} \end{table} Aiming to estimate non-functional quality attributes of the projects and to give an overall view of the projects analyzed, we computed a set of object-oriented design metrics \cite{Chidamber:1994}, which are summarized in Table \ref{tab:projects_metrics}. \begin{table}[ht] \scalefont{0.7} \centering \caption{Metrics of the analyzed projects (totals and per-class averages)} \begin{tabular}{l|r|r|r|r} \hline \textbf{Metric} & \textbf{VR (total)} & \textbf{VR (avg.)} & \textbf{Non-VR (total)} & \textbf{Non-VR (avg.)} \\ \hline Number of Children & 3,811 & 0.17 & 716 & 0.17 \\ Number of Fields & 102,785 & 4.77 & 7,062 & 1.68 \\ Number of Methods & 107,516 & 4.99 & 14,374 & 3.43 \\ Number of Properties & 24,395 & 1.13 & 67 & 0.01 \\ Number of Public Fields & 49,793 & 2.31 & 1,368 & 0.32 \\ Depth of Inheritance Tree & 4,072 & 0.18 & 1,278 & 0.30 \\ Number of Public Methods & 65,624 & 3.05 & 9,582 & 2.28 \\ Lack of Cohesion of Methods & 34,197 & 1.58 & 7,958 & 1.90 \\ Weighted Method per Class & 190,658 & 8.86 & 21,959 & 5.24 \\ \hline \end{tabular} \label{tab:projects_metrics} \end{table} All the information about the projects, as well as the data used for plotting the tables and graphs and for the discussion, is available in the experiment repository\footnote{\url{https://github.com/stevao-andrade/ACL_defect_prediction}}.
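To illustrate how metrics such as those in Table \ref{tab:projects_metrics} can feed an unsupervised fault-proneness screen, the sketch below flags classes for which most metric values exceed the per-metric median. This is a simplified, illustrative stand-in (the column layout and the threshold are our assumptions); the actual analysis follows the unsupervised approach validated by Yang et al. \cite{Yang:2016}.

\begin{verbatim}
import pandas as pd

def flag_fault_prone(metrics: pd.DataFrame, threshold_frac=0.5):
    """Flag classes whose metric profile is unusually high.

    metrics: one row per class, one numeric column per code metric
    (e.g., WMC, LCOM, NOM). A class is flagged when more than
    threshold_frac of its metrics exceed the per-metric median.
    """
    # Boolean table: True where a class exceeds the column median.
    above_median = metrics.gt(metrics.median(axis=0), axis=1)
    # Fraction of exceeded metrics per class, compared to the cutoff.
    return above_median.mean(axis=1) > threshold_frac
\end{verbatim}

The flagged classes are not claimed to contain faults; they only indicate where testing effort is most likely to pay off.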
\section{Introduction}\label{sec:introducao} Researchers have studied software faults extensively. These studies have led to the characterization of defects, in both theoretical and practical contexts. Such characterization is essential to evaluate software testing techniques concerning their ability to reveal certain types of faults. These characterizations and taxonomies can provide guides for defining test approaches, which can support fault detection in specific software domains. Technological advancement has led to the development of systems with new features such as images, sounds, videos, and differentiated interaction. Thus, technologies such as Virtual Reality (VR) have made it possible to create three-dimensional environments with real-time interaction. There are various techniques to obtain a greater sensation of immersion; depending on the computational resources, equipment, and systems used, vision, touch, and hearing experiences can be reproduced. Thus, in addition to simulating real situations, VR also allows users to depict and interact with hypothetical situations, involving static or moving virtual objects. Despite the great benefits of adopting VR for the development of applications in various areas, it poses new challenges for verification, validation, and testing (VV\&T) activities. For example, VR software presents novel software structures, such as scene graphs, which may represent new sources of defects for programs. These new challenges motivated the development of some approaches that aim to contribute to the quality assurance process of software in the context of VR. As mentioned by Corrêa et al. \cite{Santos:2013}, there is interest in the literature on the subject. However, there is still no consensus regarding systematized practices for conducting this activity. Studies have shown that the major problem remains the difficulty of dealing with test oracles, which is considered an open research problem. In general, test activities for the VR domain are manually performed and mostly conducted only after the end of the development phase \cite{Correa:2018}. Such activities generally support the generation of test requirements (functional and non-functional), which must be guaranteed before the product is delivered. The lack of studies that evaluate the cost of developing new techniques or of using existing ones, that assess their effectiveness, or that propose tools to support their application contributes to hampering VV\&T activities in general and aggravates this scenario. Regardless of the programming technology used, a primary development goal is to produce high-quality software. Consequently, VR also needs to be tested and vetted for quality. According to Neves et al. \cite{Neves:2018}, there is a series of conceptual challenges and key questions related to quality that need to be addressed when planning to apply software testing practices in new domains: \textit{What should be tested? What does ``adequate testing'' mean? What is a failure in VR software? Can we reuse something? What is done nowadays?} Systematic testing of VR systems must be based on fault models that reflect the structural and behavioral characteristics of VR software. Criteria and strategies for testing VR should be developed based on that fault model. In this paper, we discuss whether new challenges for VV\&T of VR exist that require novel techniques and methods, or whether, instead, we need new ways of combining and using existing approaches. We also evaluate how much the lack of VV\&T activities can negatively impact VR software development. To do so, we analyze the most popular open-source projects and categorize fault-prone code that could be mitigated by adopting VV\&T activities. This paper is organized as follows: Section \ref{sec:related} presents the related work and discusses how it differs from ours; Section \ref{sec:challenges} discusses the critical questions described above, as well as which testing approaches proposed in other domains could be reused for the VR domain; Section \ref{sec:experiment} presents an exploratory study to assess how much the lack of VV\&T activities can be prejudicial to open-source projects; Section \ref{sec:discussion} discusses the results of the experiment presented; Section \ref{sec:limitations} points out some limitations related to this study; finally, conclusions and future work are presented in Section \ref{sec:conclusion}.
\section{Limitations and Threats to Validity}\label{sec:limitations} The main limitations of this study are related to the fact that the data used were gathered from \textit{Github}. Although the collected data enabled us to discuss the state of practice regarding the application of software testing techniques in the context of VR, open-source projects still represent only a small portion of what is produced in the context of VR applications. Commercial and closed-source projects are also part of this universe, and it is not possible to attest that the results discussed from data extracted from open-source projects can be generalized to these other scenarios. The problem described above opens opportunities to develop similar research in partnership with industry, in order to understand whether the results converge in the same direction. In addition to the limitations described above, another limitation that must be pointed out is that, although the assumptions made during the study relate to the context of VR applications in general, all the samples observed use a single technology (\textit{Unity}); therefore, the results indicated here cannot be generalized to other platforms. To achieve such generalization, new studies should be carried out to corroborate or counter the results presented in this study. The validity of the results achieved in experiments depends on factors in the experiment settings. Different types of validity can be prioritized depending on the goal of the experiment. Concerning the threats to the validity of this study, which are related both to the evaluation of code smells and to fault-proneness detection, we highlight that performing an analysis using samples extracted from a single platform, and only from open-source projects, is a significant threat to validity: the projects may not be representative of the universe of all types of projects in the VR domain. Unfortunately, the lack of representativeness is a problem that affects the entire software engineering area, since there is no well-established theory capable of ensuring that a set of programs constitutes a representative sample for experimentation. To mitigate this threat, the largest possible number of projects was assembled, varying in size (small, medium, and large) and application purpose (entertainment, simulation, training, health). Another measure taken to mitigate the threat described above was to analyze Non-VR projects, which served as a basis for comparison with the results obtained from VR projects, ensuring a better-grounded discussion and a minimum baseline, since, unfortunately, there is still no cataloged set of VR applications that meets the requirements to be used in this work. Concerning threats to construct validity, possible mistakes may occur both in the analysis of code smells and in the evaluation of fault proneness. To minimize this threat, the tool \textit{Designite} was used to detect code smells; \textit{Designite} is a commercial tool that has already been successfully used in other experiments \cite{Sharma:2017}.
In the context of the experiments carried out in this study, in addition to using a commercial tool to support the gathering of data related to code smells, all the data presented were evaluated using both descriptive analyses and hypothesis testing, in order to guarantee that the conclusions drawn from this work paint a clear and quantitative picture of the subject explored. Regarding the approach used to detect fault proneness, the strategy was previously validated through experiments on large datasets to attest to its efficacy \cite{Yang:2016}. It is worth mentioning that the main point of the approach is not, in fact, to detect faults in the projects, but to point out classes that have a high probability of containing faults, serving as a guide to direct testing efforts and to support the discussion of the necessity of applying software testing techniques. Finally, we discuss threats to the internal validity of the study, which are related to the level of confidence between the expected results and the results obtained. The whole study was conducted in a way that minimized this threat. To increase confidence in the presented results, the data were analyzed using tables and graphs and were also made available in a repository to enable replication if deemed necessary. \titlespacing\section{0pt}{12pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{10pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{8pt plus 3pt minus 3pt}{1pt plus 1pt minus 1pt} \newcommand\graycell{\cellcolor{lightgray}} \usepackage{comment} \usepackage{colortbl} \usepackage{scalefnt} \usepackage{multirow} \usepackage{subcaption,mwe} \usepackage{amssymb} \title{Towards the Systematic Testing of Virtual Reality Programs (extended version)} \usepackage{authblk} \renewcommand*{\Authfont}{\bfseries} \author[1]{Stevão A. Andrade} \author[2]{Fatima L. S. Nunes} \author[1]{Marcio E. Delamaro} \affil[ ]{Universidade de São Paulo} \affil[1]{Instituto de Ciências Matemáticas e de Computação} \affil[2]{Escola de Artes, Ciências e Humanidades} \affil[ ]{\tt{\{stevao, delamaro\}@icmc.usp.br, [email protected]}} \begin{comment} \title[Towards the Systematic Testing of Virtual Reality Programs]{Towards the Systematic Testing of Virtual Reality Programs} \author[S. A. Andrade, F. L. S. Nunes, and M. E. Delamaro]{ \affil{\textbf{Stevão A. Andrade}~\href{https://orcid.org/0000-0002-1559-6789}{\textcolor{orcidlogo}{\aiOrcid}}~~[~\textbf{Universidade de São Paulo - ICMC~}|\href{mailto:[email protected]}{~\textbf{\textit{[email protected]}}} ]} \affil{\textbf{Fatima L. S. Nunes}~\href{https://orcid.org/0000-0003-0040-0752}{\textcolor{orcidlogo}{\aiOrcid}}~~[~\textbf{Universidade de São Paulo - EACH~}|\href{mailto:[email protected]}{~\textbf{\textit{[email protected]}}} ]} \affil{\textbf{Marcio E. Delamaro}~\href{https://orcid.org/0000-0001-7535-5891}{\textcolor{orcidlogo}{\aiOrcid}}~~[~\textbf{Universidade de São Paulo - ICMC~}|\href{mailto:[email protected]}{~\textbf{\textit{[email protected]}}} ]} } \end{comment} \begin{document} \twocolumn[ \begin{@twocolumnfalse} \maketitle \begin{abstract} \textbf{Abstract} Software testing is a critical activity to ensure that software complies with its specification.
However, current software testing activities tend not to be completely effective when applied to specific software domains such as Virtual Reality (VR), which involves several new types of features, such as images, sounds, videos, and differentiated interaction, that can become sources of new kinds of faults. This paper presents an overview of the main VR characteristics that can have an impact on verification, validation, and testing (VV\&T). Furthermore, it analyzes some of the most successful VR open-source projects to draw a picture concerning the danger of the lack of software testing activities. We compared the current state of software testing practice in open-source VR projects and evaluated how the lack of testing can be damaging to the development of a product. We assessed the incidence of code smells and verified how such projects behave concerning the tendency to present faults. We also performed the same analyses on Non-VR projects to put these results into perspective. The results showed that the practice of software testing is not yet widespread in the development of VR applications. It was also found that there is a high incidence of code smells in VR projects. Analyzing the Non-VR projects, we noticed that classes that have test cases tend to produce fewer smells compared to classes that were not tested. Regarding fault-proneness analysis, we applied an unsupervised approach to both VR and Non-VR projects. Results showed that about 12.2\% of the classes analyzed in VR projects are fault-prone, while Non-VR projects presented a lower fault-proneness rate (8.9\%). Regarding the application of software testing techniques to VR projects, it was observed that only a small number of projects are concerned with developing test cases, perhaps because the necessary supporting tools are still lacking. Concerning smells, we concluded that there is a high incidence in VR projects, especially implementation smells, and this high incidence can have a significant influence on faults. Finally, the study related to fault proneness pointed out that the lack of software testing activity is a significant risk to the success of the projects. \end{abstract} \keywords{Software Testing \and Virtual Reality \and Validation \and Verification \and Code Smells \and Fault Proneness} \vspace{0.35cm} \end{@twocolumnfalse} ] \input{introduction.tex} \input{related.tex} \input{challenges.tex} \input{experiment.tex} \input{discurssion.tex} \input{limitations.tex} \input{conclusion.tex} \section*{Acknowledgements} Stevão A. Andrade's research was funded by FAPESP (São Paulo Research Foundation), process number 2017/19492-1. This study was also financed in part by FAPESP (São Paulo Research Foundation), process number 2019/06937-0. The authors are grateful to the Brazilian National Council of Scientific and Technological Development (CNPq) for assistance (process 308615/2018-2), and to the National Institute of Science and Technology Medicine Assisted by Scientific Computing (INCT MACC) (process 157535/2017-7). We would also like to thank Tushar Sharma and the \textit{Designite} team for providing us with an academic license of the \textit{Designite} tool. \bibliographystyle{apalike} \section{Related Work}\label{sec:related} Although virtual reality studies date back a long time \cite{Boman:1995}, only recently have a few studies addressed the development of VR applications from a software engineering perspective.
Community interest has grown with the recent popularization of tools that have facilitated access to, and consequently developers' interest in, this technology, which is still considered emerging. Rodriguez and Wang \cite{Rodriguez:2017} present a survey about projects developed for the \textit{Unity} platform, highlighting the growth of the number of projects in recent years. Another highlight is the fact that, despite the higher number of applications focused on games and entertainment, there has been an increase in the number of applications for other purposes, such as training and simulations. Unlike our work, the paper does not analyze the content of the cataloged material in detail, and is limited to studying the growing trends, development involvement, favorite topics, and frequent file changes in these projects. Ghrairi et al. \cite{Ghrairi:2018} conducted an exploratory analysis of \textit{Github} projects and questions extracted from \textit{StackOverflow}, analyzing them from a software engineering point of view. The study demonstrates the current state of practice regarding the development of open-source VR applications, highlighting mainly the most used platforms and technologies. Moreover, the paper also discusses topics of interest for VR developers by analyzing the VR questions extracted from \textit{StackOverflow}. The main results show that \textit{Unity} and \textit{Unreal Engine} are the most popular platforms among VR application developers. However, they also point out that more work needs to be done to better understand VR requirements in a software engineering context, which is one of the points that our work seeks to elucidate. From the perspective of software testing applied to the context of VR, we highlight the work produced by Corrêa et al. \cite{Correa:2018}, which presents a proposal for software testing of VR applications. This study generates test data from requirements specified through a semi-formal language for VR application development. This approach moves in the direction pointed out by our study, which is the proposition of mechanisms that allow the systematization of the test activity for VR applications. Karre et al. \cite{Karre:2019} conducted a year-long multi-level exploratory study examining the various software development practices within VR product development teams, in order to understand whether traditional software engineering practices are still exercised during VR product development. The main results show that VR practitioners adopted hybrid software engineering approaches in VR product development and, in general, the interviewed developers complained about the lack of software testing tools. An alternative presented to address this problem is the involvement of consumers during the testing phase (pre-release versions) to understand customer experiences and reactions. Later, Karre et al. \cite{Karre:2020} also assessed aspects related to tasks such as scene design, acoustic design, vergence manipulation, and image depth, which are specific to VR apps and hence require evaluation processes that may differ from traditional ones. They presented a categorization that can support decision making in choosing a usability evaluation method for the future development of VR products.
By using this categorization, teams can adopt suitable methods to improve the usability of their products, since, depending on the industry (automobile, education, healthcare, etc.), the usability requirements and the metrics of the evaluation methods can change.
\section{Introduction} Planets are ubiquitous in the Milky Way. The conditions under which the known planets formed exist in other galaxies as well. Yet each external galaxy occupies such a small area of the sky that the high projected stellar density makes it difficult to study individual stars in enough detail to detect the signatures of planets through either radial velocity measurements or transit detection, the two methods responsible for the discovery of more than 4300 exoplanets (exoplanet.eu). External galaxies host relatively small numbers (a handful to several hundred) of bright X-ray sources (XRSs). Luminous XRSs in external galaxies can therefore be spatially resolved, and we can measure the count rate and X-ray flux from each XRS as a function of time (i.e., deriving the ``light curve''). The dominant set of bright XRSs in external galaxies are X-ray binaries (XRBs), in which black holes (BHs) or neutron stars (NSs), the remnants of massive stars, accrete matter from a stellar companion. \citet{2018ApJ...859...40I} suggested that XRBs may be ideal places to search for planets, because the cross-sectional areas of the X-ray emitting regions can be comparable to or even smaller than planetary cross sections. A planet passing in front of the X-ray emitting region may produce a total or near-total eclipse of the X-rays \citep{2018ApJ...859...40I}. Furthermore, we know that planets are likely to inhabit XRBs. For example, studies of radio emission from four NSs that spin at millisecond periods ({\sl recycled pulsars}), and which were previously XRBs, have led to the discovery of planets \citep{WOLSZCZAN20122}. We therefore expect active XRBs to also host planets. On a related note, eclipse timing variations in the XRB MXB~$1658-298$ suggest the presence of a 23 Jupiter-mass object, a {\sl brown dwarf}\footnote{Planets have masses below roughly $13\, M_J$, where $M_J$ is Jupiter's mass; brown dwarfs have masses between that and $\sim 0.075\, M_\odot=75\, M_J$, the lower mass limit for stars.}, in a 1.5 AU orbit \citep{Jain_Paul_etal}. M51-ULS-1b is the first planet candidate discovered when it passed in front of an XRS whose size is comparable to its own. It completely blocked the X-rays from the XRB M51-ULS-1 for a time interval of 20-30 minutes, with the excursion from baseline lasting roughly 3 hours. There were no simultaneous observations in the optical or infrared, but the regions emitting at these longer wavelengths are so large in comparison with the XRS that there would likely not have been a detectable decline in flux. This phenomenon is to be contrasted with planetary transits of {\sl stars}, which produce relatively small dips in flux across wavebands. During stellar transits the dip in X-ray flux is ${\cal O}(1\%)$, rather than ${\cal O}(100\%)$. The specific shape of the stellar-transit X-ray dip can be modeled by the interactions of X-rays from the stellar corona with the planetary atmosphere \citep{HD18933b_in_Xrays}. The method of X-ray transits we discuss here applies to XRBs rather than stars, and can produce a full eclipse. During the transit of M51-ULS-1, a candidate planet passes in front of a soft-X-ray source that has an effective radius of $\sim 1/3\, R_J$, where $R_J$ is the radius of Jupiter. The method can be applied to external galaxies like M51, because the field of view of the present generation of X-ray detectors is large enough to encompass dozens of bright XRSs, and the total exposure times extend to about $1$~Ms ($\sim 11.6$~d).
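To make the contrast between stellar and XRS transits quantitative, the following back-of-the-envelope sketch (in Python; the numerical values are approximate and are our own illustration) compares the maximum dip produced by an opaque, Jupiter-sized body crossing a Sun-sized star with the dip produced when it crosses an XRS of effective radius $\sim R_J/3$.

\begin{verbatim}
# Fractional flux dip when an opaque body of radius r_body crosses
# the center of a uniform source of radius r_source.
def transit_depth(r_body, r_source):
    return min(1.0, (r_body / r_source) ** 2)

R_JUP = 7.1e9         # cm, radius of Jupiter (approximate)
R_SUN = 7.0e10        # cm, radius of the Sun (approximate)
R_X   = R_JUP / 3.0   # cm, effective radius of the soft XRS

print(transit_depth(R_JUP, R_SUN))  # ~0.01 -> the O(1%) stellar case
print(transit_depth(R_JUP, R_X))    # 1.0   -> a total X-ray eclipse
\end{verbatim}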
XRSs in the Local Group and in the Milky Way, which are generally intrinsically dimmer, can be studied as well. When applied to them, the method is sensitive to planets in closer orbits, for which the transits repeat on shorter time scales. There is a special excitement to discovering planets in external galaxies. The identification of X-ray transits is the only method by which host stellar systems at distances of Mpc to tens of Mpc can be unambiguously identified. Microlensing is sensitive to planetary masses in external galaxies, and possible free-floating planets found through quasar microlensing have been reported \citep{2018ApJ...853L..27D}. Microlensing of light from stars in galaxies such as M31 can also lead to planet discoveries, and in the era of LSST, deep drilling in some crowded fields, combined with the survey's planned image differencing analysis (so-called pixel lensing), should discover planets that orbit intervening stars \citep{2009MNRAS.399..219I}. The identification of the host star can, however, range from difficult to impossible. In fact, for only a small number of the 120 Milky-Way planets discovered via microlensing is information about the host star available (exoplanet.eu). For planets discovered via X-ray transits, we know and can study the system that the planet orbits, even when only a single transit is detected. We can, for example, identify the range of orbital separations the candidate planet has from the XRS, and study the feasibility of its survival within its present environment as well as its survival during previous stages of the binary's evolution. The discovery of M51-ULS-1b initiates investigations of planets and substellar masses orbiting massive stars, which have proved difficult to discover, with only a handful of the known exoplanets orbiting stars with masses larger than $2-3\, M_\odot$. High-mass stars (those with mass greater than about $10\, M_\odot$) tend to be formed only in binaries or in systems with higher multiplicities \citep{PQ}. High-mass binary stars can undergo interesting evolutions that lead to a range of energetic hydrogen-poor supernovae and eventually to BH-BH, NS-NS, or BH-NS mergers. The candidate planet we have discovered is in a circumbinary orbit around a system experiencing an intermediate phase of evolution. One star has evolved and is now a BH or NS, and its companion is a massive star donating matter, making the compact object highly luminous. With a luminosity $> 10^{39}$~erg~s$^{-1}$, M51-ULS-1 is an {\sl ultraluminous} X-ray source. Its ultimate fate depends on the mass of the donor relative to that of the accretor. The discovery of planets around such a system expands the realm of known planetary environments. In \S2 we describe our search through archived X-ray data for X-ray transits, and the identification of a transit of the XRB M51-ULS-1. Section 3 is devoted to establishing the properties of the XRB, which are subsequently used to better understand the planet candidate and its orbit. One of the properties of the XRS that is particularly useful is that its spectrum is thermal, so that we can derive the effective radius of the XRS. Optical observations establish that the XRB is young, likely younger than $20$~Myr. To ensure that the transit is not simply an example of a commonly found type of X-ray dip, we compare it in \S 4 with other dip-like events. We find that the shape and spectral evolution of the transit are different from those of other flux dips.
In particular, the constancy of the spectrum is what is expected during an eclipse or transit, rather than during obscuration by gas and dust or a state change. In \S 5 we establish that the light curve is well fit by a transit model, which yields the size of the eclipser relative to the XRS as well as the relative speed. In \S 6 we explore the nature of the eclipser. We find that the eclipser is substellar and is most likely to be a planet. In \S 7 we study the candidate planet's orbit. We find that it is wide enough to expect that a planet could survive in the radiation field of the XRB, and also to suggest that a planet could have survived the prior evolution of the XRB. Section 8 considers the implications of the discovery: specifically, how large a population of planets could inhabit the set of XRBs we studied, and what are the prospects for future detections? \begin{figure*}[ht!] \begin{center} \includegraphics[width=0.75\textwidth]{figures/data_limited_new.pdf} \caption{Background-subtracted X-ray light curves defined by data points for \textit{Chandra} ObsID 13814. {\sl Black:} counts in 1~ks bins, and the associated 1-$\sigma$ uncertainties. {\sl Red:} running average computed over a timescale of $\pm$2~ks. {\sl Horizontal axis:} time in ks; {\sl Vertical axis:} number of counts per bin. {\bf Top panel:} the short-duration eclipse and roughly 20 ks on each side. {\bf Bottom panel:} the entire duration of the observation.} \label{fig:1} \end{center} \end{figure*} \section{Search for X-Ray Transits} We conducted a systematic search for possible transits in the {\sl Chandra} X-ray light curves of XRSs in three galaxies: M51 (a face-on interacting late-type galaxy), M101 (a face-on late-type galaxy), and M104 (an edge-on early-type galaxy with some star formation). These light curves were available because they had recently been studied for other purposes \citep{2018MNRAS.477.3623W, 2016ApJ...831...56U, 2016MNRAS.456.1859U}. It is possible to make discoveries of planetary transits in archived X-ray light curves, because short-term time variability was often not a primary focus of the original observing programs. Even stellar eclipses lasting ten or more hours have been found after the initial analyses were complete. Planetary transits, a phenomenon that has apparently not been previously targeted, exhibit short-duration deficits of photons, and are therefore particularly prone to being missed or misidentified. We considered all observations of duration greater than $5$~ks, and all obsids (individual observations) for each XRS observed to have had a flux corresponding to $L_X[0.5\, {\rm keV} -8\, {\rm keV}]>10^{37}$~erg~s$^{-1}$ during at least one observation. We studied 667 light curves produced by 55 XRSs in M51, 1600 light curves from 64 XRSs in M101, and 357 light curves from 119 XRSs in M104. The numbers of light curves are larger than the total numbers of XRSs because each physical source was in the fields of multiple exposures. We conducted an automated search specifically designed to identify transits. We required only that there be at least one 1-ks interval with no X-ray counts, and that, however long the low state lasted, there should be a baseline with roughly equal count rates prior to and after the downward dip. We applied our search algorithm to all 2624 light curves in the sets described above.
The criteria, that the light curve should exhibit a drop to zero measured count rate in at least one 1-ks bin, and that the downward deviation should start from and return to a baseline, were enforced as follows. Considering an individual light curve, and denoting the counts in bin $i$ as $C(i),$ we identified all values of $i$ for which $C(i)=0.$ We then considered the time bins just before $i$ ($C(i-j)$, $j=1,2,...$) and just after ($C(i+j)$, $j=1,2,...$). The purpose of this was to measure the duration of the interval during which the count rate was consistent with zero. We did this by counting the total number of consecutive bins in which the count rate was equal to or smaller than 1\footnote{Given the uncertainty associated with small numbers of counts, this was a strict criterion. In trials where we relaxed it, the low states were often clearly associated with longer-term variability rather than with well-defined events.}. The first and last of these bins were, respectively, $i_{low}$ and $i_{high}$, so the duration of the low state is $[(i_{high}-i_{low})+1]$~ks. To determine whether the low state corresponds to a transit, we needed to establish whether the dip started from and returned to a baseline. We therefore considered, in turn, four pairs of points: $({\cal C}_1=C(i_{low}-k), {\cal C}_2=C(i_{high}+k)),$ where the value of $k$ ranges from $1$ to $4$. For each of the four pairs we defined $\sigma=\sqrt{\max({\cal C}_1,{\cal C}_2)}$. If the absolute value of the difference between ${\cal C}_1$ and ${\cal C}_2$ was less than $2\, \sigma$, we considered that pair to be a match. We also required that both ${\cal C}_1$ and ${\cal C}_2$ be $7$ or larger, to ensure that the count rate at baseline is significantly higher than it would have been during the low-count-rate interval. We conducted this check for four pairs of points ($k=1,2,3,4$). If at least two of the four pairs had high enough count rates and were also matches, we considered the event to be a possible transit and flagged it for visual inspection. The 2624 light curves in our study yielded one interval for inspection. This was the light curve in Figure \ref{fig:1}, with an apparent transit lasting $10$~ks to $12$~ks. \begin{figure*} \begin{center} \includegraphics[width=0.42\linewidth]{figures/m51_xray_cut.pdf} \includegraphics[width=0.42\linewidth]{figures/m51_hst_cut.pdf} \caption{Left: false RGB stacked \textit{Chandra}/ACIS-S image of the Whirlpool Galaxy, M\,51 (total exposure of $\approx$850\,ks). Colored points are XRSs: Red is 0.3-1\,keV; green is 1-2\,keV; blue is 2-7\,keV. M\,51-ULS-1 is the orange source at the center of the $60^{\prime\prime}\times60^{\prime\prime}$ dashed white box. Diffuse emission is from hot gas. Right: \textit{HST} image of the area defined by the white box in the left image. Red is the F814W band; green is F555W; blue is F435W. The magenta circle marks the X-ray position of M\,51-ULS-1, which lies at the edge of a young star cluster. The source is located at right ascension and declination 13:29:43.30, +47:11:34.7, respectively.} \label{fig:m51_image} \end{center} \end{figure*}
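For concreteness, the screening just described can be sketched as follows (in Python with NumPy; the function and variable names are ours).

\begin{verbatim}
import numpy as np

def find_transit_candidates(counts):
    """Screen a light curve, binned in 1-ks bins, for transit-like dips.

    counts: integer counts per bin. Returns (i_low, i_high) pairs
    flagged for visual inspection.
    """
    counts = np.asarray(counts)
    flagged, seen = [], set()
    for i in np.where(counts == 0)[0]:
        # Grow the low state over consecutive bins with <= 1 count.
        lo = i
        while lo > 0 and counts[lo - 1] <= 1:
            lo -= 1
        hi = i
        while hi < len(counts) - 1 and counts[hi + 1] <= 1:
            hi += 1
        if (lo, hi) in seen:      # same low state already found
            continue
        seen.add((lo, hi))
        # Baseline check: four pairs of bins bracketing the dip must
        # agree within 2 sigma and both contain at least 7 counts.
        matches = 0
        for k in range(1, 5):
            if lo - k < 0 or hi + k >= len(counts):
                continue
            c1, c2 = counts[lo - k], counts[hi + k]
            sigma = np.sqrt(max(c1, c2))
            if min(c1, c2) >= 7 and abs(c1 - c2) < 2 * sigma:
                matches += 1
        if matches >= 2:          # at least two of the four pairs match
            flagged.append((lo, hi))
    return flagged
\end{verbatim}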
We also employed other approaches to study the light curves. For each of the 2624 light curves, we plotted the cumulative count rate, using a method developed by \citet{2017Sci...355..817I} to discover flares in XRSs. We tested the method to determine whether it could also discover dips, and found it to be effective. The signature of an eclipse, for example, is a flat region in the plot of cumulative count rate versus time. We also measured the total number of counts in each exposure, $C$, and used $C/T_{exp}$, where $T_{exp}$ is the exposure time, to compute the average count rate. We computed running averages of the counts per ks, located the positions of local extrema, and binned the data, initially selecting the bin size so that there would be an average of 10 counts per bin. We subsequently conducted a visual inspection of each light curve with $C>100$ to look for dips in flux. We compared the results of our algorithmic analyses (e.g., significance of changes in flux) with visually identifiable features in the light curve. This process led to the selection of the event shown in Figure \ref{fig:1}. In the appendix we present the details of the {\sl Chandra} and {\sl XMM-Newton} data sets we used to study M51. The data employed for our searches in M101 and M104 were used in a similar manner by previous studies, and more details can be found in \citet{2018MNRAS.477.3623W, 2016ApJ...831...56U, 2016MNRAS.456.1859U}. That our thorough search through the {\sl Chandra} light curves of M51, M101, and M104 identified a single transit candidate demonstrates both that transits can be found and that transit profiles are not common features of the light curves of extragalactic XRSs. Criteria that are less strict (e.g., not requiring a drop to zero flux) might have identified more candidates; our goal, however, was to identify only strong candidates that could then be subjected to a sequence of further tests. As expected for the transit of a spherical object with a well-defined edge, the dip is roughly symmetric and, as we will show in \S 4, the features of the transit are consistent for photons with different energies. The transit has the characteristic shape expected when the size of the transiting object is similar to that of the background source. Fortunately, X-ray studies of the binary, described below, provide an estimate of the size of the XRS. \section{The X-Ray Binary M51-ULS-1} \subsection{X-Ray Properties} The X-ray source exhibiting the apparent transit is M51-ULS-1, one of the brightest XRSs in M51, located at right ascension and declination 13:29:43.30, +47:11:34.7, respectively. The X-ray luminosity (0.3--7 keV) of M51-ULS-1 is $\sim 10^{39}$~erg~s$^{-1}$ \citep{2016MNRAS.456.1859U}, roughly $10^5 - 10^6$ times brighter in X-ray emission than is the Sun at all wavelengths combined. The high X-ray luminosity ensured that the count rate was large enough to both identify and study the transit. M51-ULS-1 belongs to a subclass of fast-accreting XRSs known as {\it ultraluminous supersoft sources} (ULSs), characterized by high luminosity and an almost purely thermal spectrum with typical blackbody temperature $\sim 100$~eV and emitting radius $\sim 10^9$~cm \citep{2016MNRAS.456.1859U}. The data exhibiting the short eclipse were collected during a 190-ks Chandra pointing (ObsID 13814, 2012 September 20). During that observation, the average effective radius of the X-ray-emitting region was estimated to be $R_{\rm X} = 2.5 ^{+4.1}_{-1.1} \times 10^9$~cm (90\% confidence interval) \citep{2016MNRAS.456.1859U}. The radius is extracted from a fit to the broadband X-ray data collected just prior to and after the dip to zero flux.
\footnote{If the underlying spectrum is not a blackbody, the size of the X-ray emitting region could be somewhat smaller or larger in a way not accounted for in the uncertainty limits.} In this particular case, the radius, $R_X$, of the transited XRS is on the order of a few $\times 10^9$~cm, consistent with the sizes of known planets. In \S 5 we find that the light curve data are well fit by a transit model. The model yields a range of eclipser radii in units of $R_X$, as well as a range of relative velocities. \subsection{Optical Properties and Age} Optical observations of M51 provide clues to the age of M51-ULS-1, a good indicator of the age of M51-ULS-1b, since the latter is likely to have been formed with the binary, or else in the binary's natal cluster. Alternatively, if it formed as a result of the evolution of the binary, or of one of its components, M51-ULS-1b would be younger. Several lines of evidence suggest that M51-ULS-1 is a young system. The most direct evidence is from \citet{Terashima}, who identified a possible counterpart in an HST image, consistent with stellar type B2-8. Comparison with the relevant isochrones yields an age range estimate of 4~million~yr to 16~million~yr. While there is a probability of $0.17$ that a star this bright or brighter would be present by chance, we note that a bright counterpart is expected, because a donor that can give mass at a rate high enough to produce an accretion luminosity of $10^{39}$~erg~s$^{-1}$ must either be massive or highly evolved, and would be bright in either case. The fact that there is no bright red star in the vicinity argues for a massive star, with colors consistent with those of a blue supergiant. It is possible, however, that the counterpart includes light from the accretion disk, altering the age estimate. While the counterpart suggests an age smaller than about 20~million~yrs, other considerations independently constrain the age to be less than $\sim 10^8$~yrs. For example, the HST image shows that M51-ULS-1 is located on the edge of a young stellar cluster surrounded by diffuse H$_\alpha$ emission (Figure~\ref{fig:m51_image}). Furthermore, \citet{2017MNRAS.466.1019S} place M51-ULS-1 in a spiral arm. They and other authors have identified M51-ULS-1 as a high-mass X-ray binary, indicating that the donor is young. Finally, ULSs are preferentially found in young stellar populations \citep{2016MNRAS.456.1859U}. The preferred age of the system is less than about 20~Myr, and its maximum age is roughly $10^8$~yr. \subsection{Binary Properties} The total mass, $M_{tot}$, of the XRB is the sum of the accretor's mass, $M_a$, and the donor's mass, $M_d$. If the accretor is a BH, its mass may be near or above $10\, M_\odot.$ The value of $M_{tot}$ could be in the range of tens of solar masses. If the accretor is a NS, its mass is likely to be $\sim 1.4\, M_\odot$. However, for both NS and BH accretors, the observed high luminosity requires a high rate of mass transfer that can be achieved only by donors that are significantly more massive than a NS and/or highly evolved. The lack of evidence of a red luminous giant is consistent with mass being provided by a high-mass donor. Indeed, as mentioned above, M51-ULS-1 is considered to be a high-mass X-ray binary (HMXB) \citep{2017MNRAS.466.1019S}. The present-day value of the donor's mass may be the sum of its primordial mass and mass gained during a previous stage of mass transfer from the progenitor of the presently-observed accretor. The high luminosity of M51-ULS-1 is driven by a high rate of mass infall: an accretion rate of $\sim 10^{-6}~M_\odot$~yr$^{-1}$ is needed to produce a luminosity of $10^{39}$~erg~s$^{-1}$.
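This rate follows from the standard accretion-luminosity relation, $L \simeq \epsilon\,\dot{M} c^2$. As a rough illustrative check (the radiative efficiency $\epsilon$ is an assumption on our part, and its appropriate value depends on the nature of the accretor and of the flow),
\begin{equation}
\dot{M} \simeq \frac{L}{\epsilon\, c^{2}} \approx 9\times 10^{-7}\, M_\odot~{\rm yr^{-1}} \left(\frac{L}{10^{39}~{\rm erg~s^{-1}}}\right) \left(\frac{\epsilon}{0.02}\right)^{-1}.
\end{equation}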
To provide mass at this rate, a star that is not a red supergiant is likely to be close to filling its Roche lobe: $R_L \le (2-3)\, R_d$, where $R_L$ is the Roche-lobe radius. Since the orbital radius is typically just a few times larger than $R_L$, the size of the orbit is linked to the radius of the donor. The donor may be a blue supergiant \citep{Terashima} with radius $\leq 25\, R_\odot$; it may be even smaller if the light from the counterpart is blended with light from the accretion disk, or if the true counterpart is dimmer than the HST-observed emitter. The Roche lobe cannot be more than roughly 2-3 times larger than the donor if mass is to reach the accretor at the requisite high rate. Putting this all together, we find that the most probable range for $a_{bin}/R_d$ is $4-10$, with values toward the lower end of the range more likely, given the high luminosity. We therefore expect the maximum possible size of the binary orbit to be about $3$~AU, with the most likely value several times smaller. \section{Comparing the Transit to Other Light Curve Features}\label{sec:transitfeatures} \subsection{Accretion-Related Dips} X-ray light curves exhibit variability of many types. Flares and long-lasting high and/or low states are observed, as are short-lasting dips that are not transits. It is therefore important to compare the event we identified with others, in order to determine whether its characteristics set it apart from other dip events. We contrast the transit with dip-like behavior found in M51-ULS-1 and in other XRSs in our sample. X-ray telescopes not only count the numbers of photons received, but also record the energy of each incoming photon, so we can explore how lower-energy (``soft'') photons behave compared to higher-energy (``hard'') photons\footnote{We have dubbed the higher-energy band ``hard'' (H) and the lower-energy band ``soft'' (S) in keeping with X-ray astronomy convention. See Figure~4 for {\sl Chandra}; Figure~5 for {\sl XMM-Newton}.}. Energy dependence observed during events can provide clues to the cause of the variability. \begin{figure} \centering \includegraphics[width=0.90\linewidth]{lcs/lc_13814_behr_split.pdf} \caption{Background-subtracted light curve demonstrating the lack of spectral variation across the transit event in M51-ULS-1. The {\sl top panel} shows the background-subtracted {\sl Chandra} count rate light curves in the soft ($S$:0.3-0.7\,keV; red histogram) and hard ($H$:0.7-7\,keV; blue histogram) passbands along with the Gehrels-approximated error bars (vertical bars). The solid black line indicates the net rate in the broad ($0.3-7$~keV) passband, with error bars omitted for clarity. The {\sl bottom panel} shows the color hardness ratio ($C=\log{\frac{S}{H}}$) computed using an accurate Poisson model \citep{2006ApJ...652..610P}. The grey-shaded bands denote the 90\% HPD intervals for counts accumulated over time intervals before, during, and after the eclipse. The error bars increase in size when the counts decrease during the eclipse, but the spectral hardness shows no evidence of a change. Note the continuity of the grey-shaded bands.} \label{fig:eclipse_hardness} \end{figure} One case in which energy dependence should be minimal is during a transition into or out of eclipse.
Figure \ref{fig:eclipse_hardness} demonstrates that despite the sharp drop in intensity during the transit of M51-ULS-1b, the spectrum shows no evidence of a change. It is clear that the ratio of the numbers of soft (S) to hard (H) photons, quantified by the so-called hardness ratio, $\log_{10}(S/H)$, is consistent during the transit with its value out of transit. This behavior supports the interpretation of the event as a transit. Were the count rate higher, we could quantify the similarity through transit-model fits in different energy bands. \begin{figure} \centering \includegraphics[width=0.90\linewidth]{lcs/lc_0303420201_behr_split.pdf} \caption{As in Figure \ref{fig:eclipse_hardness}, but for the M\,51-ULS-1 double-dip feature from \textit{XMM-Newton} observation 303420201. Here, the \textit{XMM-Newton} soft and hard passbands are 0.2-0.7\,keV (red histogram) and 0.7-10\,keV (blue histogram), respectively. Unlike the short-duration eclipse, the hardness ratio of M\,51-ULS-1 varies across the duration of the double-dip, suggesting that this event and the one in Figure~\ref{fig:eclipse_hardness} are caused by different physical processes.} \label{fig:m51_doubledip} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{lcs/lc_934_behr_split.pdf}\\ \includegraphics[width=0.8\linewidth]{lcs/lc_4737_behr_split.pdf}\\ \includegraphics[width=0.8\linewidth]{lcs/lc_5338_behr_split.pdf} \caption{As in Figure \ref{fig:eclipse_hardness}, but for M\,101-ULS, with \textit{Chandra} observations 934, 4737 and 5338 (top, middle, bottom, respectively). Each epoch shows strong energy-dependent variability, unlike what is seen for the M\,51 ULS transit event in Figure~\ref{fig:eclipse_hardness}.} \label{fig:M101_eclipse_lcs} \end{figure}
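For reference, the hardness ratio and a rough uncertainty can be computed from the band counts as in the sketch below (Python/NumPy). This simple Gaussian error propagation is our illustration only; the values shown in the figures use the full Poisson treatment of \citet{2006ApJ...652..610P}.

\begin{verbatim}
import numpy as np

def hardness_ratio(soft, hard):
    """Crude hardness ratio C = log10(S/H) with 1-sigma errors.

    soft, hard: total counts in the soft and hard bands for one
    time interval (both assumed > 0 for this Gaussian shortcut).
    """
    c = np.log10(soft / hard)
    # Propagate sqrt(N) counting errors through log10(S/H).
    sigma = np.hypot(1.0 / np.sqrt(soft),
                     1.0 / np.sqrt(hard)) / np.log(10.0)
    return c, sigma
\end{verbatim}

A transit should leave $C$ unchanged within these errors, while absorption by gas and dust preferentially removes soft photons and drives $C$ downward.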
Figure~\ref{fig:m51_doubledip} shows a different dip-like event from the light curve of M51-ULS-1. In contrast to Figure~\ref{fig:eclipse_hardness}, this dip is not symmetric; the flux does not fall to zero; and the hardness ratio changes across the event. It serves to illustrate that the transit has distinctive features not typical of other dipping behavior. The behavior of the event shown in Figure~\ref{fig:m51_doubledip} suggests that the dip is due to interactions with a high-density feature associated with accretion. Accretion at high rates, whether in XRBs or young stars \citep{2014AJ....147...82C}, can lead to irregularities in the accretion stream or clumping of the accreting material. In XRBs, clumps or portions of the accretion stream may have high enough column density to block X-rays. Because, however, these are diffuse structures, they exhibit variations in gas and dust density that translate into different amounts of absorption as they pass near or in front of the XRS. In such cases, X-rays of different energies exhibit different effects as the density of the material passing in front of the XRS changes. We also note that the passage of material associated with accretion is not generally expected to produce a symmetric light curve dip; furthermore, the irregularity may not be large enough, or well enough centered in relation to the XRS, to block all of the photons. This double-dip event not only illustrates ways in which the properties of an absorption-induced event can differ from those of a transit; it also tells us that there are absorbing clouds or streams, likely associated with the accretion disk, along our line of sight to M51-ULS-1. This indicates that our line of sight is aligned with the plane of the accretion disk of M51-ULS-1. Because the accretion plane is generally aligned with the binary orbital plane, {\sl our line of sight is also aligned with the orbital plane}, suggesting that we may detect eclipses of the XRS by the donor star. Figure~\ref{fig:M101_eclipse_lcs} shows three events from M101, each of which also provides a clear contrast with the transit event. The first is an energy-dependent double-dip-like feature similar to the one in Figure~\ref{fig:m51_doubledip}, likely associated with accretion. The two lower panels show transitions from a low state to a higher state which itself exhibits dips. These events exhibit clear signs of energy dependence. Thus, the characteristics of these and many other events in the 2624 light curves we studied stand in contrast to the characteristics of the transit. {\sl The transit in M51-ULS-1 is an {\bf energy-independent} high-low-high transition with a well-defined baseline. It was uniquely selected by our automated search directed at identifying events with the simplest set of features associated with a transit. It is approximately symmetric, and has a shape typical of transits in which the source and transiting object have comparable size. In \S 5 we will show that it is well fit by a transit model.} \subsection{Intrinsic Variability} In addition to variations caused by the passage of matter in front of the XRS, XRBs exhibit a wide range of intrinsic variability. Intrinsic variations generally show both intensity and spectral changes. One particular type of state common to soft XRBs is an X-ray ``off'' state. These were first observed in luminous supersoft X-ray sources (SSSs) in the Galaxy and Magellanic Clouds \citep{1996LNP...472..165S}. For these nearby XRSs, we know that the X-ray flux diminishes to undetectable levels, while the optical flux increases. Although the transitions appear not to have been observed, the behavior is consistent with an expansion, and then later, when the X-ray emission returns, a contraction of the photosphere \citep{JCGRD}. During observations, M51-ULS-1 exhibited at least one clear X-ray off state (830191401) for which the transition was not observed. We cannot determine whether that off state corresponded to an interval of large photosphere or to the middle portion of an eclipse. If the system was in eclipse, the eclipse lasted longer than 98~ks. As Table~2 shows, there are several additional candidates for X-ray ``off'' states during observations which included no interval of higher count rate. We make no assumptions about the nature(s) of the X-ray off states. Although transitions to off states in SSSs and ULSs have generally not been observed, they are expected to take longer than a ks \citep{JCGRD}. Furthermore, the hardness ratio would change significantly during the transitions into and out of the X-ray off states, in marked contrast to what happens during eclipse. \subsection{Stellar Eclipses} In addition to off states extending over an entire observation, there are two observations, one by {\sl Chandra} and one by {\sl XMM-Newton}, in which there was a transition from a low to a high state, and from a high to a low state, respectively. In each case, the transition occurred during an interval of a ks. The variation shown in the left-hand panel of Figure~\ref{fig:m51_eclipse_lcs}, observed by {\sl Chandra}, exhibits behavior consistent with an egress from an eclipse, presumably by the donor star. The low state is consistent with zero flux.
This state begins prior to the start of the exposure and continues for $15$~ks; the duration of the observed portion of the low state is longer than the full duration of the transit event we present in this paper. This event has two characteristics of an eclipse: (1) the rapid change from zero flux to a significantly larger count rate, and (2) no change in hardness ratio during the transition. A change in hardness ratio would likely signal a change in state, whereas during an eclipse the decrease in flux from the harder and softer X-rays occurs at roughly the same time. If the event is an egress from eclipse, the steep rise indicates that, in contrast to the transit, the eclipser is significantly larger than the XRS. \begin{figure} \centering \includegraphics[width=0.8\linewidth]{lcs/lc_13815_behr_split.pdf}\\ \includegraphics[width=0.8\linewidth]{lcs/lc_0824450901_behr_split.pdf} \caption{As in Figures \ref{fig:eclipse_hardness} and \ref{fig:m51_doubledip}, but for other variable events in M\,51-ULS-1. Left: a long-duration eclipse egress in \textit{Chandra} observation 13815. Right: a long-duration eclipse ingress in \textit{XMM-Newton} observation 824450901. These events are thought to be occultations by the companion star.} \label{fig:m51_eclipse_lcs} \end{figure} \begin{figure} \centering \includegraphics[width=0.8\linewidth]{figures/m51_xmm_noeclipse.pdf}\\ \includegraphics[width=0.8\linewidth]{figures/m51_xmm_ineclipse.pdf}\\ \includegraphics[width=0.8\linewidth]{figures/m51_xmm_chan.pdf} \caption{Left: \textit{XMM-Newton}/pn 0.2-10\,keV image of the observation 303420201 during which M\,51-ULS-1 appears active. Middle: as in the left panel, but for the portion of \textit{XMM-Newton} observation 824450901 in which the source is in eclipse (i.e., $\sim$30--75\,ks in Figure \ref{fig:m51_eclipse_lcs}, right). Right: stacked 0.3-7\,keV \textit{Chandra}/ACIS image of the same field. The 25$^{\prime\prime}$-radius green circle in each image is comparable to the \textit{XMM-Newton} extraction region for M\,51-ULS-1. It is clear from the \textit{Chandra} image that a number of nearby point sources are likely contaminating the \textit{XMM-Newton} extraction region for M\,51-ULS-1 and may be causing the residual hard emission seen while the source is in eclipse.} \label{fig:m51_xmm_field} \end{figure} The right-hand panel shows what appears to be an ingress to eclipse, as observed by {\sl XMM-Newton}. There is a sharp decline, as expected for ingress. There is, however, residual emission that shows some spectral variation detectable even during the low state. {\sl XMM-Newton's} point spread function is large, so the extraction region includes X-ray emission from sources within a few tens of pc at the distance of M51. We have examined the Chandra images of M51-ULS-1 and its surroundings and found several (fainter) point-like XRSs and diffuse emission inside the XMM-Newton/EPIC source extraction region (Figure~\ref{fig:m51_xmm_field}). It is therefore reasonable to hypothesize that the low state is a full eclipse, and that the faint residual emission seen during that time interval in the XMM-Newton data comes from those other sources, unresolved by XMM-Newton. The {\sl Chandra}-observed transition from a low state is an excellent candidate for an eclipse egress, and the {\sl XMM-Newton}-observed transition to a low state is a good candidate for an eclipse ingress. The significance of detecting an ingress to or egress from a stellar eclipse is that it tells us that our line of sight is roughly aligned with the orbital plane.
In \S 7 we will show that the probability of an ingress and/or egress occurring during the roughly 1 Ms of observations afforded M51-ULS-1 may be close to unity.

\section{Fits to the Short-Duration Eclipse}
\label{sec:mcmcfits}

We model the X-ray light curve (see Figure~\ref{fig:xraylc}) using a method that is optimized to analyze low-count X-ray data. We explicitly use the Poisson likelihood, which is appropriate in this regime. We fit the light curve with a function spanning the eclipse over the interval from 135~ks to 165~ks. We represent the XRS as a circular source of radius $R_{\rm X}$, and the eclipser as an opaque circular disk of radius $R_{ec}=f_{ec} R_{\rm X}$. We express all distances in units of $R_{\rm X}$; thus $R_{ec}$ is replaced by $f_{ec}$. The light curve is defined by five parameters, $\theta=\{c_{\rm X},T_{\rm mid},b, f_{ec},v_{pl}\}$, where $c_{\rm X}$ is the X-ray counts per bin outside the eclipse, $T_{\rm mid}$ is the midpoint of the eclipse, $b$ is the smallest unsigned distance from the center of the eclipser to the center of the source during the eclipse, $f_{ec}$ is the radius ratio defined above, and $v_{pl}$ is the velocity at which the eclipser moves across the source. We use data in the range 135~ks$\leq{t}\leq$165~ks after the start of the {\sl Chandra} observation to construct the light curve; the eclipse occurs approximately between 145~ks and 158~ks. The events are binned at $\Delta{t}=471.156$~s, corresponding to $150 \times$ the CCD readout duration ({\tt TIMEDEL}=$3.14104$~s). The velocity is computed in units of $\frac{R_{\rm X}}{\Delta{t}}$, and $T_{\rm mid}$ in ks starting from the beginning of the observation. We compute the area of overlap between the foreground object and the X-ray source by considering them as planar circles whose centers are separated by the projected distance $d(t)$, expressed, like all lengths, in units of $R_{\rm X}$. The overlap area is then
\begin{eqnarray}
{\rm A}(t) &=& 0 ~~~~{\rm for}~~d(t)>1+f_{ec} \nonumber \\
&=& \pi ~~~~{\rm for}~~\max\{1,f_{ec}\}>d(t)+\min\{1,f_{ec}\} \nonumber \\
&=& (\alpha_X - \cos{\alpha_X}\,\sin{\alpha_X}) \nonumber \\
&& + f_{ec}^2 (\alpha_{ec} - \cos{\alpha_{ec}}\,\sin{\alpha_{ec}}) ~~~~{\rm otherwise} \,,
\label{eq:intersection}
\end{eqnarray}
where $\alpha_X$ and $\alpha_{ec}$ are the angles subtended by the intersecting arcs of the star and the foreground object respectively,
\begin{eqnarray}
\alpha_X &=& \arccos{\frac{d(t)^2+1-f_{ec}^2}{2\,d(t)}} \nonumber \\
\alpha_{ec} &=& \arccos{\frac{d(t)^2+f_{ec}^2-1}{2\,d(t)\,f_{ec}}} \,. \nonumber
\end{eqnarray}
Note that when the source and eclipser are of the same size, $f_{ec}=1$, and $\alpha_X$=$\alpha_{ec}$=$\arccos{\frac{d}{2}}$, which results in ${\rm A}$=$\pi$ at $d$=$0$ (complete overlap), and ${\rm A}$=$0$ at $d$=$2$ (complete separation).

\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{m51_471_eclipselc-eps-converted-to.pdf}
\caption{X-ray light curve of the short-duration eclipse, used for the light curve modeling in Section~\ref{sec:mcmcfits}. {\sl Top panel} shows the light curve of the ULS as the blue histogram over the full duration of the {\sl Chandra} observation. The contribution of the background, obtained in a source-free region of twice the source area, is scaled and presented as the red histogram. The light curve is binned at 150$\times$ the CCD readout time, so Moir\'e patterns are not expected. {\sl Bottom panel} shows a zoomed-in view spanning the eclipse. The horizontal red line represents the constant background level estimated for the observation, and dashed red lines indicate $\pm{1}\sigma$ errors on the background. The estimated source count level in the absence of an eclipse is shown as the green dotted line; the level is determined from counts in the bins indicated by filled green circles.}
\label{fig:xraylc}
\end{figure}
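Equation~(\ref{eq:intersection}) translates directly into code. The transcription below is ours (function and variable names are illustrative, not from the production scripts); it can be checked against the $f_{ec}=1$ limits quoted above.

\begin{verbatim}
import numpy as np

def overlap_area(d, f_ec):
    """Overlap area A (units of R_X^2) between the source disk
    (radius 1) and the eclipser disk (radius f_ec), per Eq. (1)."""
    if d > 1.0 + f_ec:                       # disks do not touch
        return 0.0
    if max(1.0, f_ec) >= d + min(1.0, f_ec):  # one disk inside the other
        # The equation assigns complete blocking (A = pi) to this
        # branch; for f_ec < 1 the geometric overlap would instead
        # be pi * f_ec**2.
        return np.pi
    a_x  = np.arccos((d*d + 1.0 - f_ec*f_ec) / (2.0*d))
    a_ec = np.arccos((d*d + f_ec*f_ec - 1.0) / (2.0*d*f_ec))
    return ((a_x - np.cos(a_x)*np.sin(a_x))
            + f_ec**2 * (a_ec - np.cos(a_ec)*np.sin(a_ec)))

# Sanity checks for f_ec = 1: complete overlap, complete separation
assert np.isclose(overlap_area(0.0, 1.0), np.pi)
assert np.isclose(overlap_area(2.0, 1.0), 0.0)
\end{verbatim}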
The model light curve is then computed as
\begin{equation}
{\rm light~curve}(t) = c_{\rm X} \cdot \frac{\pi - {\rm A}(t)}{\pi} + {\rm background} \,,
\end{equation}
where the normalization $c_{\rm X}$ denotes the expected number of counts in each time bin outside the eclipse, and a time-independent background, scaled from a source-free region of the same observation, is added on. We carry out the fitting using a Markov Chain Monte Carlo (MCMC) approach \citep{BDA3}. We use a Metropolis scheme, where new parameter values are drawn based on their current values; we employ, as proposal distributions, a Gaussian for $c_{\rm X}$ and $T_{\rm mid}$, a Uniform for $b$, and a Uniform in log for $f_{ec}$ and $v_{pl}$. We further restrict
\begin{eqnarray}
0< & c_{\rm X} & < \infty \,, \nonumber \\
0 \leq & f_{ec} & \leq 50 \,, \nonumber \\
0 \leq & b & \leq 51 \,, \nonumber \\
10^{-3} \leq & v_{pl} & \leq 10 \,, \nonumber \\
145 \leq & T_{\rm mid} & \leq 158 \,.
\end{eqnarray}
We do not explicitly tie together $f_{ec}$ and $b$, though the fact that an eclipse is observed naturally requires that $b<(f_{ec}+1)$; we expect this correlation to be recovered from the MCMC draws. We also sample the background level in each iteration from a Gaussian distribution, ${\rm background} \sim N(0.38,0.018^2)$, to account for uncertainty in the background determination. We compute the Poisson likelihood of the observed light curve counts for each realization of a model light curve for the parameter values drawn in that iteration, and accept or reject the parameter draw based on the Metropolis rule (always accept if the likelihood is increased; accept with probability equal to the ratio of the new to the old likelihood otherwise). We first run the MCMC chain for $10^5$ iterations using a starting point of $\theta^{(0)}=\{c_{\rm X}=7.75,b=0.5,f_{ec}=2.0,v_{pl}=0.9,T_{\rm mid}=150\}$. We sample 40 different starting points as random deviates from the resulting posterior distributions, and again run $10^5$ iterations for each case. The first 2000 iterations are discarded as burn-in in each case. We combine all the iterations after verifying that the chains converge to the same levels for all parameters. We then construct posterior probability distributions for each parameter as histograms from the MCMC draws after thinning them to the effective sample size ($N_{\rm eff}=N\,\frac{1-\rho}{1+\rho}$, where $N$ is the number of draws and $\rho$ is the lag-1 autocorrelation) in 5000 iteration increments.
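The core accept/reject step described above can be sketched as follows. This is a minimal illustration with hypothetical names, not the production code, which (per the Code availability note) builds on PINTofALE routines.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def log_poisson_like(model, counts):
    """Poisson log-likelihood up to the data-only term log(counts!).
    The model must be strictly positive; the background term
    guarantees this."""
    return float(np.sum(counts * np.log(model) - model))

def metropolis_step(theta, logp, model_lightcurve, counts, propose):
    """One Metropolis update: propose, evaluate, accept or reject."""
    theta_new = propose(theta)          # draw from proposal density
    logp_new = log_poisson_like(model_lightcurve(theta_new), counts)
    # Accept if the likelihood increases; otherwise accept with
    # probability equal to the new-to-old likelihood ratio.
    if np.log(rng.uniform()) < logp_new - logp:
        return theta_new, logp_new
    return theta, logp
\end{verbatim}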
We convert the relative units of $b$, $f_{ec}$, and $v_{pl}$ to physical units by convolving the MCMC posterior draws with a representative distribution of $p(R_{\rm X})$ derived from the X-ray data. As noted above, the 90\% bounds on $R_{\rm X}$ are asymmetric, at [$-1.1$,$+4.1$]~$\times{10^9}$~cm from the nominal best-fit value of $2.5\times{10^9}$~cm. This can be represented by half-Gaussians with widths appropriate for the corresponding 90\% bounds (note that for a standard Gaussian distribution, 90\% of the area on one side of the mean is covered at $\pm1.95\sigma$), which are then rescaled to be continuous through the best-fit value, which now represents the mode of the pasted Gaussian (see the red and blue dashed curves in the left panel of Figure~\ref{fig:probRx}). However, such rescaling, while it preserves the location of the mode, makes the overall distribution narrower, and the resulting 90\% bounds are no longer consistent with the observed values (see intersections of the dashed red and blue cumulative distributions with the horizontal dashed lines at the 5\% and 95\% levels, in the right panel of Figure~\ref{fig:probRx}). We therefore adopt a Gamma distribution\footnote{The gamma distribution is a highly flexible distribution, defined as $$\gamma(R_{\rm X};\alpha,\beta)=\frac{\beta^\alpha}{\Gamma(\alpha)} \cdot R_{\rm X}^{\alpha-1}~e^{-\beta R_{\rm X}},$$ where $\alpha$ and $\beta$ are parameters that control the location and shape of the distribution. The mean=$\frac{\alpha}{\beta}$, variance=$\frac{\alpha}{\beta^2}$, and mode=$\frac{\alpha-1}{\beta}$ are simple functions of the parameters, and conversely, given estimates of the mean, mode, and the variance, we can compute the corresponding parameters.} (solid green lines in Figure~\ref{fig:probRx}) as the representative distribution for $R_{\rm X}$. Specifically, we choose $\gamma(R_{\rm X};\alpha=5.36,\beta=1.45)$; the peak here is displaced by $\approx$20\%, but we consider this a better representation of $R_{\rm X}$ because it matches the measured bounds of $R_{\rm X}$ at the 5\% and 95\% levels well.

\begin{figure}[H]
\centering
\includegraphics[width=1.0\linewidth]{trueRx_probdensity-eps-converted-to.pdf}
\caption{The nominal probability distribution of $R_{\rm X}$. {\sl Left panel} shows the differential density distributions for the conjoined half-Gaussians (red and blue dashed lines) and for a gamma distribution (green solid line). {\sl Right panel} shows the cumulative distributions of the two candidate distributions, along with vertical dotted lines indicating the 5\% and 95\% bounds of $R_{\rm X}$, and horizontal dashed lines indicating the corresponding levels. The gamma distribution matches the bounds better, but has a larger mode.}
\label{fig:probRx}
\end{figure}

Summaries of parameter values are given in Table~\ref{tab:mcmcresults}. The distributions of the various parameters are shown in Figures~\ref{fig:mcmchisto_normtmid}~and~\ref{fig:mcmccontour}. In each case, the location of the mode (vertical solid orange line), the 68\% (blue dashed vertical lines) and 90\% (green dashed vertical lines) highest-posterior density intervals, and the mean and $1\sigma$ errors (horizontal red line situated at the same vertical level as the peak of the distribution) are marked. Notice that in several cases the distributions are skewed, and traditional estimates like the mean and standard deviation are not useful summaries. We thus also provide the mode of the distribution and the 90\% highest-posterior density intervals.\footnote{These constitute the intervals that enclose the highest values of the posterior probability density, and are consequently the smallest uncertainty intervals that can be set.} There are also strong correlations present between $b$, $f_{ec}$, and $v_{pl}$, as seen from the contour plots of their joint posteriors (constructed without thinning the iterations). This suggests that the intervals derived from the marginalized 1D posteriors are too coarse, and that narrower intervals may be obtained over smaller ranges. For instance, $v_{pl}>40$~km~s$^{-1}$ is predominantly obtained when $f_{ec}>2$~R$_{\rm Jup}$, which itself has a lowered probability of explaining the data.
Thus, the preponderance of the probability suggests that the system is better described with smaller values of $f_{ec}$ and $v_{pl}$. Furthermore, notice that $b$ and $f_{ec}$ have a long, narrow extension to large values; this arises essentially because an eclipse can occur for large $b$ only when $f_{ec}$ is also large enough to cover the source even at large displacements. That is, the space of possible models that allow this situation is driven predominantly by the depth of the eclipse and not the profile. This suggests that the number of states that the system can occupy in such configurations is limited, and thus can be described as having low entropy. This measure is not included in our likelihood, but indicates that smaller values of $f_{ec}$ and $b$ are preferred.

\begin{table}
\centering
\begin{tabular}{l r r r}
\hline\hline
Parameter & Mode & 90\% bounds$^\dag$ & Mean $\pm 1\sigma$\\
\hline
$c_{\rm X}$ [ct~bin$^{-1}$] & $7.6$ & $(7.3, 8.0)$ & $7.6 \pm 0.2$ \\
$b$ [km] & $0$ & $(0, 1.8\times{10^5})$ & $(7 \pm 11) \times 10^{4}$ \\
$f_{ec}$ [R$_{\rm Jup}$] & $0.74$ & $(0.18, 2.7)$ & $1.4 \pm 1.3$ \\
$v_{pl}$ [km~s$^{-1}$] & $17.1$ & $(5.1, 56)$ & $30 \pm 20$ \\
$T_{\rm mid}$ [ks] & $152.7$ & $(152.2, 153.4)$ & $152.8 \pm 0.4$ \\
$^\ddag$Eclipse start [ks] & $147.8$ & $(143.9, 151.3)$ & $ 147.4 \pm 2.6$ \\
$^\ddag$Eclipse duration [ks] & $10.5$ & $(3.1, 17.9)$ & $11 \pm 5$ \\
\hline
\multicolumn{4}{l}{$\dag:$ Highest-posterior density bounds} \\
\multicolumn{4}{l}{$\ddag:$ These are values computed from model parameters, not fitted directly}
\end{tabular}%
\caption{Results from the MCMC analysis}
\label{tab:mcmcresults}
\end{table}

\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{m51mcmc_histo_norm_linlin-eps-converted-to.pdf}
\includegraphics[width=1.0\linewidth]{m51mcmc_histo_tmid_linlin-eps-converted-to.pdf}
\caption{The marginalized posterior density distributions of $c_{\rm X}$ (top) and $T_{\rm mid}$ (bottom). The locations of the mode (vertical solid orange line), the 68\% (blue dashed vertical lines) and 90\% (green dashed vertical lines) highest-posterior density intervals, and the mean and $1\sigma$ errors (horizontal red line placed at the same level as the mode) are marked (see Table~\ref{tab:mcmcresults}), as well as listed at the top right of the panels.}
\label{fig:mcmchisto_normtmid}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{m51mcmc_contour_bbR2-eps-converted-to.pdf} \\
\includegraphics[width=0.9\linewidth]{m51mcmc_contour_bbVel-eps-converted-to.pdf} \\
\includegraphics[width=0.9\linewidth]{m51mcmc_contour_R2Vel-eps-converted-to.pdf}
\caption{The joint posterior distributions of $(b,f_{ec})$ (top), $(b,v_{pl})$ (middle), and $(f_{ec},v_{pl})$ (bottom) are shown. Each contour plot shows the joint density, marked at enclosed probability regions for 39\% (blue; 2D Gaussian $1\sigma$ equivalent), 67\% (red; $1.5\sigma$), 85\% (green; $2\sigma$), and 95\% (yellow; $2.5\sigma$). The solid white dot indicates the mode of the distribution. Note that strong correlations are present between these parameters; see text for discussion.}
\label{fig:mcmccontour}
\end{figure}

\section{The Nature of the Transiting Object}

\begin{figure*}
\includegraphics[width=\linewidth]{figures/xrb_radius.png}
\caption{Radius distribution of the eclipser (black solid curve). The lower x-axis is in units of $10^9$ centimeters and the upper x-axis is in units of Jupiter radii ($R_J$).
The y-axis represents relative probability for the radius of the eclipser. The left-most (darkest) grey region shows models of giant planets from roughly 0.5 to 10$M_J$ and 5 to 10 Myr old. The contiguous, somewhat lighter grey region corresponds to the range of possible radii for brown dwarfs from 5 to 10 Myr; the lightest grey region on the far right shows the overlap in radii between brown dwarfs and M dwarfs from 5 to 10 Myr \citep{Baraffe:2003}. The radius distribution of the known transiting hot Jupiter population is shown in orange. The radius of the brown dwarf within the Upper Scorpius association (5-10 Myr), RIK 72b, is shown as the red dashed line.}
\label{fig:radius_ecl_candidates}
\end{figure*}

The size of an object is a powerful indicator of its nature. Four classes of objects have equilibrium radii in the range consistent with the $90\%$ confidence limits of our model: planets, including rocky planets as well as ice giants and gas giants \citep[e.g.][]{2019AJ....157..245H}; white dwarfs (WDs); M dwarfs (stars with mass less than about $0.5\, M_\odot$); and brown dwarfs. WDs have radii that, for different possible WD masses, span the relevant range of derived values of $R_{ec}$. WDs are the high-density remnants of stars with initial mass smaller than roughly $9\, M_\odot$. The radii of the most massive WDs are comparable to the radius of Earth. Less massive WDs have radii of up to a few times $10^9$~cm. WDs, however, can only form when the stars that give rise to them have begun to evolve, generally at ages greater than $10^8$~yrs. M51-ULS-1 is almost certainly too young to be associated with WDs, since most stars that produce WDs will not yet have begun to evolve. In Appendix B we show that there is an independent reason to eliminate WDs from consideration: in the expected range of distances from the XRS, they would serve as gravitational lenses, increasing the amount of light we receive from the XRS rather than causing a dip.

M dwarfs have radii that are strongly dependent on age and irradiation. For example, a 0.2~$M_\odot$ star has a radius of 13.8~$R_J$ at 1~Myr and 7~$R_J$ at 10~Myr, and falls within the 90\% confidence interval at 2~$R_J$ only at ages $\gtrsim$100~Myr. At an estimated age of $10$~Myr, all M dwarfs have significantly larger radii than is estimated for the eclipser. Furthermore, the fact that the eclipser is highly irradiated by the XRB (with typical flux received comparable to that received by a hot Jupiter; \S 7) means that it will shrink more slowly toward its equilibrium radius. We note in addition that there are fewer model solutions near the upper end of the confidence limits, making the large-$R_{ec}$ solutions less plausible.

We briefly consider the effects of irradiation on young low-mass objects. A study in 2003 \citep{Baraffe:2003} produced theoretical predictions for the radii of objects from 0.5$M_J$ to 100$M_J$, spanning the range from gas giant planets with mass five times that of Saturn to the very lowest mass M dwarf stars. Their calculations explored the effects of irradiation over time for these objects. The modeled environment places each object at 0.046~AU from a host star with an effective temperature of 6000~K. This provides a rough guide to the effects of irradiation. The radii of these objects decrease monotonically with increasing age, so only the oldest objects are found at the smallest radii.
For young objects (5 to 10 Myr): giant planets of mass 0.5-13$M_J$ have radii between 1.3 and 1.8$\,R_J$, brown dwarfs of mass 13-80$M_J$ have radii between 1.8 and 5.4$\,R_J$, and M dwarfs of mass 0.08-0.10$M_\odot$ have radii spanning the range 3.7-5.6$\,R_J$. Figure \ref{fig:radius_ecl_candidates} shows the radius distribution of the eclipser, together with the ranges of sizes predicted by the (sub)stellar models \citep{Baraffe:2003} for brown dwarfs and M dwarfs as functions of age. The radius distribution of roughly 300 transiting hot Jupiters is shown for comparison with planet-mass objects in an irradiated environment. On the basis of size, M dwarfs can be eliminated as candidates for the transiting object. Brown dwarfs have radii that overlap the upper end of the confidence interval, which as we have noted has fewer solutions and is therefore less likely. In addition, brown dwarfs are rare relative to planets: several recent studies \citep{Carmichael:2019, Subjak:2019} have addressed this phenomenon, known as the ``brown dwarf desert''. Empirically, a small-radius object is far more likely to be a planet than a brown dwarf. Thus, although we cannot eliminate the possibility that the transiting object is a brown dwarf, we find that planets are much more likely. Planets, unlike brown dwarfs, can have radii across the entire range encompassed by the high-confidence interval, and they are more common companions to stars, as illustrated by Figure~\ref{fig:mass_period_dist}.

\begin{figure*}
\begin{center}
\includegraphics[width=.9\linewidth]{figures/mass_period_distribution_xrb.png}
\caption{Mass-period distribution of a sample of low-mass stellar companions, all known transiting brown dwarfs, and a sample of the transiting giant planet population. The population of brown dwarfs is exaggerated here in that \textit{all} known transiting brown dwarfs are shown, while only a sample of the giant planet population (courtesy of \url{http://exoplanet.eu}) and a sample of the low-mass stellar companion population \citep{2017A&A.608A.129T} are shown.}
\label{fig:mass_period_dist}
\end{center}
\end{figure*}

\section{The Orbit of the Transiting Mass and its Implications}

\subsection{The Orbit}

The orbit of the transiting mass determines its position relative to the XRB and allows us to assess whether it can survive the incoming flux, as well as whether it would have been possible for the candidate planet to survive the evolution of the XRB up until now. The size of the orbit and the orbital period also feed into calculations of the probability of transit detection. For a circumbinary orbit it is straightforward to determine the value of $a_{pl}$, the distance of M51-ULS-1b from the binary's center of mass at the time of transit, since the value of $v_{pl}$ was measured from the short-eclipse fit. The most likely value of $v_{pl}$, the mode of the distribution, is 17~km/s, and the $68\%$ uncertainty bounds are at 8~km/s and 34~km/s. Kepler's law demands that $a_{pl}$ scale as $M_{tot}/v_{pl}^2$, as long as the transiter's mass is much smaller than the mass of the binary.
\begin{equation}
a_{pl}=45~{\rm AU} \, \Bigg(\frac{M_{tot}}{20\, M_\odot}\Bigg)\, \Bigg(\frac{20\, {\rm km/s}}{v_{pl}}\Bigg)^2
\end{equation}
The equation above demonstrates that for values of $M_{tot}$ and $v_{pl}$ similar to those we expect, the distance between the candidate planet and the center of mass of the XRB is in the range of tens of AU. Note that $a_{pl}$ corresponds to the distance at the time of transit.
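As a quick numerical check of this scaling (our own sketch, using SI constants; $a = G M_{tot}/v^2$ for a circular orbit of a test mass):

\begin{verbatim}
G, M_SUN, AU = 6.674e-11, 1.989e30, 1.496e11  # SI units

def a_pl_au(m_tot_msun, v_pl_km_s):
    """Circular-orbit distance a = G*M_tot / v^2 for a test-mass
    orbiter, returned in AU."""
    return G * m_tot_msun * M_SUN / (v_pl_km_s * 1e3) ** 2 / AU

print(a_pl_au(20.0, 20.0))   # ~44 AU, reproducing the 45 AU scaling
print(a_pl_au(20.0, 17.1))   # mode of v_pl -> ~61 AU
\end{verbatim}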
In Appendix B we show that for orbits wider than a few AU, a WD would produce a lensing event rather than a dip in flux. This, in addition to the young age of the system, eliminates the possibility that the transiting mass could be a WD. With an orbital radius on the order of tens of AU, M51-ULS-1b orbits both components of the XRB. However, even if the combination of $M_{tot}$ and $v_{pl}$ is such that the value of $a_{pl}$ is only on the order of several AU, M51-ULS-1b's orbit is still almost certainly circumbinary, since the binary's orbital radius is likely to be a few times smaller than the maximum value of $3$~AU. We now consider the requirement of orbital stability. The ratio between the semimajor axis of the candidate planet's orbit and the binary's orbital radius must be larger than roughly $3$ for the orbit to be stable. This suggests that the planet-candidate's orbit forms a hierarchical system with the XRB; a value of $a_{pl}$ in the range of several AU or higher is consistent with the XRB's properties and the condition of orbital stability.

\subsection{Incident Flux and the Survival of M51-ULS-1b}

The XRS is highly luminous. We compute the ratio of the flux incident on the planet candidate to the flux incident on Earth from the Sun. The luminosity of the Sun is $4\times 10^{33}$~erg~s$^{-1}$, and we take the luminosity of M51-ULS-1 to be $10^6$ times larger.
\begin{equation}
\frac{\cal F}{{\cal F}_{\oplus}} = 490\, \Bigg(\frac{45\, {\rm AU}}{a_{pl}}\Bigg)^2 = 156 \, \Bigg(\frac{80\, {\rm AU}}{a_{pl}}\Bigg)^2
\end{equation}
This is similar to the flux incident on a planet orbiting a solar-luminosity star at 0.05~AU. Gas giants found in such orbits are referred to as ``hot Jupiters''. The high effective temperature of the XRS ($\sim 10^6$~K) means that it is not only a copious emitter of X-rays, but also that a large fraction of the radiation it emits is highly ionizing. Such radiation can lead to the loss of the planetary atmosphere. Although highly luminous systems like M51-ULS-1 have not yet been considered as planetary hosts, an analogous case has been studied. Specifically, main-sequence Sun-like binaries whose components are close enough to interact tidally have been studied as hosts for circumbinary planets \citep{2014A&A...570A..50S}. The stars in such systems have active coronae, and are therefore more luminous in X-rays than they would be had they been isolated. Calculations conducted for several such real systems have explored the range of parameters consistent with planetary survival. At the distance we have estimated for M51-ULS-1b from its XRS, its atmosphere can survive the presently observed X-ray active phase of M51-ULS-1. At optical and infrared wavelengths the dominant source of flux may be the donor star, although the magnitude of the HST-discovered counterpart suggests that the donor does not have a higher bolometric luminosity than the XRS. Thus the discussion above will not be significantly altered by including the effects of the donor star. In summary, a candidate planet in the orbital range we derive for M51-ULS-1b can survive its present-day conditions. This is in contrast to what would be expected for gas or ice giants in close orbits with M51-ULS-1, which would have their envelopes destroyed on relatively short time scales. Of course M51-ULS-1b is influenced by the incident radiation. In analogy to close-orbit exoplanets, it would experience bloating, having a radius somewhat larger than expected for an object of the same mass in a region without the large amount of incident flux. Bloating would also affect brown dwarfs and low-mass stars in this environment; see, e.g., \citet{2019MNRAS.488.3067H}.
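The flux scaling above is straightforward to reproduce (our sketch; the $10^6\,L_\odot$ luminosity is the value adopted in the text):

\begin{verbatim}
def flux_rel_earth(a_pl_au, l_x_lsun=1e6):
    """Incident flux relative to the solar flux at Earth:
    (L/L_sun) / (a/AU)^2."""
    return l_x_lsun / a_pl_au ** 2

print(flux_rel_earth(45.0))  # ~494, cf. the quoted factor of 490
print(flux_rel_earth(80.0))  # ~156
\end{verbatim}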
\subsection{Feasibility of Wide Orbits}

We know that the existence of planets in wide orbits is plausible, because planets with orbits having semimajor axes in the range of tens and hundreds of AU are common among Galactic exoplanets. Direct imaging has led to the discovery of 15 confirmed exoplanets with estimated mass smaller than $13\, M_J$ and semimajor axes between $10$~AU and $100$~AU; similarly, 12 exoplanets have semimajor axes wider than $100$~AU ({\sl exoplanet.eu}; 9 July 2020). There is also a case of a planet in a $23$~AU orbit about a former XRB in M4 \citep{2000ApJ...528..336F}. Even without a detailed evolutionary model, we know that the binary M51-ULS-1 had an interesting history. Here we discuss key elements of that history and show that a wide-orbit planet could survive. M51-ULS-1 experienced an earlier phase of activity during which the star that evolved into today's compact accretor was active. This star could have transferred mass to its companion. Because, however, the companion was not compact, less accretion energy would have been released per unit mass than is released today. The evolution of the most massive star would, however, have had consequences for a circumbinary planet. The evolving star would become more luminous and larger. It would also shed mass through winds, which would tend to make the planetary orbit wider. If a significant amount of mass was ejected in the orbital plane, the planet's orbit would likely have been driven toward the midplane. It is important to note that, even if the first-evolved star reached giant dimensions, a planet in an orbit that was initially several AU wide would be able to survive and would likely be pushed into a wider orbit. If the present-day compact object is a BH, the formation event may have been a ``failed supernova'' during which little mass was lost. If instead the present-day compact object is a NS, significant mass may have been lost through a supernova, and there may also have been a ``kick''. Nevertheless, the presence of the massive star that is today's donor would have allowed the system to survive and would have moderated the speed of the NS's natal kick. Just as it is possible for a binary to survive a supernova explosion, it is also possible for wide-orbit planets to stay bound. The bottom line is that the wide orbit we derive is consistent with the existence and survival of the planet, both in the presently-observed binary and through the possible evolution of the primordial binary, even though not every planet hosted by the XRB will have the same fate.

\section{Galactic Populations of Planets}

\subsection{The Number of Planets in our Sample}

What is the probability of observing a transit by a small object in our data set? By answering this question we can use our detection of M51-ULS-1b to estimate how many planet-size objects are likely to be orbiting the XRBs whose observations comprise our data set. In Appendix~C we derive an expression for ${\mathbb P}_{trans}$, the probability of detecting a transit.
\begin{equation}
{\mathbb P}_{trans}=7.8\times 10^{-5} \, g\, \frac{1}{\alpha} \Big[\frac{T_{obs}}{{\mathrm Ms}}\Big] \Big[\frac{45 \, {\mathrm AU}}{a_{pl}}\Big]^\frac{3}{2} \Big[\frac{M_{tot}}{20\, M_\odot}\Big]^\frac{1}{2}
\end{equation}
New quantities on the right-hand side are: $T_{obs}$, the total time duration of the X-ray observations; $\alpha$, a parameter whose value is ${\cal O}(1);$ and $g$. The value of $g$ is unity if the planetary orbit and the binary orbit are coplanar; otherwise it is smaller. Coplanarity appears to hold, at least approximately, in M51-ULS-1/M51-ULS-1b, suggesting that $g$ is likely to have a value larger than $0.01-0.1$. We can use ${\mathbb P}_{trans}$ to estimate the number of wide-orbit planets likely to be present in the sample of XRBs we studied. There were $238$~XRSs satisfying our selection criteria in our sample of three galaxies, each with a total exposure time comparable to $1$~Ms. Not every XRS detected within the area covered by a galaxy is an XRB.\footnote{Supermassive BHs at galaxy centers may emit X-rays, and supernova remnants can be bright XRSs. Some XRSs may be distant quasars or nearby stars. Many of the latter can be identified by crossmatching with data from other surveys, such as Gaia~\citep{Gaia}.} The probability of detecting a transit by a planet candidate in our survey is estimated by multiplying the probability ${\mathbb P}_{trans}$ by the number of XRBs surveyed, written here as $200 \times (N_{xrb}/200)$. Since we observed a single transit, the number of wide-orbit planet-size objects around the XRBs in our sample can be estimated to be:
\begin{equation}
N_{pl}= \frac{64\, \alpha}{g}\, \Bigg(\frac{a_{pl}}{45 \, {\mathrm AU}}\Bigg)^\frac{3}{2} \Bigg(\frac{20\, M_\odot}{M_{tot}}\Bigg)^\frac{1}{2} \Bigg(\frac{200}{N_{xrb}}\Bigg) \Bigg(\frac{\mathrm Ms}{T_{obs}}\Bigg)
\end{equation}
Our detection of a planetary transit may therefore signal the presence of roughly $64/g$ substellar objects in wide orbits around the XRBs in our sample. Note that, because $g$ is almost certainly smaller than unity, the number of small-radius objects orbiting the XRBs in our sample could be even larger than several dozen. Some XRBs may be more likely to host planets than others, but more investigations are required to determine relative populations. Furthermore, our search may not have discovered smaller objects in equally wide orbits, or even larger objects in closer orbits.
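A one-line numerical reading of these two expressions (our sketch; the defaults are the fiducial values appearing in the equations):

\begin{verbatim}
def p_trans(t_obs_ms=1.0, a_pl_au=45.0, m_tot_msun=20.0,
            g=1.0, alpha=1.0):
    """Transit-detection probability per source, per the
    expression above."""
    return (7.8e-5 * g / alpha * t_obs_ms
            * (45.0 / a_pl_au) ** 1.5
            * (m_tot_msun / 20.0) ** 0.5)

def n_pl(n_xrb=200, **kwargs):
    """Implied number of wide-orbit planet-size objects,
    given one detected transit."""
    return 1.0 / (n_xrb * p_trans(**kwargs))

print(n_pl())  # ~64, matching the prefactor in the N_pl expression
\end{verbatim}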
\subsection{Prospects for Future Observations}

There is no reason to suggest that the data sets we employed are extraordinary. Were we to examine a set of XRSs drawn from similar extragalactic populations, we would expect a similar result. We therefore examined archived data to determine how many independent and roughly equivalent studies could be conducted, to explore the prospects for future discoveries. Both {\sl XMM-Newton} and {\sl Chandra} data are available for this purpose. {\sl XMM-Newton} provides the advantage of a larger effective area, yielding higher count rates. {\sl Chandra}'s low noise and superior spatial resolution mean that there is little or no confusion, even in relatively crowded fields. Archived and new data from both observatories can discover short-duration transits. A search of the {\sl Chandra} archive found that at least 7 galaxies have been observed for $750-1500$~ks, and $13$ others for $250$~ks to $500$~ks. Two of the best-observed galaxies, M31 and M33, are members of the Local Group, where sources tens to a hundred times less luminous than the ones we have studied provide enough photons to allow the detection of short transits. Data from dozens of other galaxies with shorter observations are also useful. {\sl XMM-Newton's} archives are comparably rich. In short, the archives contain enough data to conduct surveys comparable to ours more than ten times over. We therefore anticipate the discovery of more than a dozen additional extragalactic candidate planets in wide orbits. Furthermore, additional data from external galaxies are collected every year. Below we discuss how existing data can additionally be used to search for planets with closer orbits and also for planets orbiting dimmer XRBs. The reason external galaxies are good places to hunt for planets is that the fields of view of today's X-ray telescopes encompass a large fraction of the bright portions of galaxies at distances larger than $6-7$~Mpc. This means that a single observation can collect counts from dozens to hundreds of XRSs. As we consider galaxies nearer to us, the advantage of a broad field of view is diminished. There is nevertheless a significant advantage to be gained because, for example, at the distance of M31 we collect $\sim 100$ times as many counts as we would from the same XRS in a galaxy at $8$~Mpc. In addition, for bright sources, this makes us sensitive to shorter-lived deviations from baseline. Thus, small planets in orbits like that of M51-ULS-1b can be detected, and planets in closer orbits can be detected as well. Furthermore, since the number of XRSs at lower luminosities is larger than the number of high-luminosity sources, planet searches can be conducted on the much larger populations of dimmer XRBs. For example, the central region of M31 contains roughly 400 XRSs with total observing times larger than 0.5~Ms, most with luminosities between $10^{36}~{\rm erg~s}^{-1}$ and $10^{38}~{\rm erg~s}^{-1}$. At lower luminosities substellar objects may be able to survive for longer times in smaller orbits. We will be able to either discover such planets or place meaningful limits on their existence. Finally, the closest XRSs to us are in our own Galaxy, where even light curves of WDs that accrete from close companions at low rates (cataclysmic variables), with luminosities on the order of $10^{31}$~erg~s$^{-1}$, can be examined for evidence of transits. Unless, however, the target is a cluster or other crowded field, only one XRS may be in a single field of view. Furthermore, long exposures are available for smaller numbers of XRSs than in external galaxies. Nevertheless, some XRSs have had excellent time coverage from, for example, the past RXTE mission or the current NICER mission. In addition, new missions, such as ATHENA and the proposed Lynx mission, will increase the X-ray count rates significantly, making it possible to discover more planets in all of the environments considered above.

\subsection{Conclusion}

It is worth noting that it has been possible for us to find something as new as an X-ray transit due to a candidate planet simply because we were looking for it.
XRBs are so variable, and dips due to absorption are so ubiquitous, that transit signatures are not readily recognized.\footnote{The signal we report on here, with the full participation of all coauthors, was originally misidentified by two of us \citep{2016MNRAS.456.1859U}.} Yet, because planets have been found in all environments that have been searched for them, it is reasonable to look for signs of planets in XRBs. Once the results from successful searches are known, new discoveries are likely to emerge from a variety of research groups who may take new looks at interesting light curve features. Our discovery of a single transit will lead to more detailed studies of planets and other low-radius objects in external galaxies. An equally thorough study of independent data sets will be important to develop better statistics. It is within the reach of the present generation of X-ray telescopes to gather information about the population of planets orbiting XRBs, and for future generations of instruments to develop a comprehensive view. The discovery of M51-ULS-1b has established that external galaxies host candidate planets. It also demonstrates that the study of X-ray transits can reveal the presence of otherwise invisible systems, which will also include brown dwarfs and low-mass stars.\footnote{Our method is also capable of discovering them. That a planet-size object was the first discovery may simply reflect a larger population of circumbinary planets than brown dwarfs or M dwarfs.} Discovering and studying extragalactic planets and other small objects in external galaxies can establish connections and contrasts with the Sun's environment in the Milky Way, provide insight into the mutual evolution of stellar and binary orbits, and expand the realm within which we can search for extraterrestrial life. Extending the search will expand the scope of what we can say about our place in the universe.

\newpage

{\bf Data availability:} The {\sl Chandra} and {\sl XMM-Newton} data that support the findings of this study are available from the HEASARC web site: \url{https://heasarc.gsfc.nasa.gov/docs/archive.html}.

\medskip

{\bf Code availability:} We will make all scripts used to run the MCMC analysis in Section~\ref{sec:mcmcfits} available in a Google Drive folder. The scripts use several routines in PINTofALE (\url{https://hea-www.harvard.edu/PINTofALE/}). The hardness ratio code BEHR, used in Section~\ref{sec:transitfeatures}, is available at \url{https://hea-www.harvard.edu/AstroStat/BEHR/}.

\bibliographystyle{mnras}
\section*{Main}
The most accurate methods for studying matter at the atomic scale are wavefunction-based approaches, which explicitly account for many-electron interactions. Given only the positions and nuclear charges of atoms, we can now predict, along with essentially every other observable property, the binding strength of relatively small molecular systems (\textit{i.e.} less than 50 atoms) to within a few tenths of a kcal mol$^{-1}$ using many-body solutions to the Schr\"{o}dinger equation~\cite{Carter2008,Dubecky2013a,Chan-Science}. This is better than the so-called ``chemical accuracy'' of 1 kcal mol$^{-1}$ required for reliable predictions of thermodynamic properties. Indeed, the relative stabilities of many non-covalently bound materials, such as 2D layered materials, pharmaceutical drugs, and different polymorphs of ice, are underpinned by small energy differences on the order of tenths of a kcal mol$^{-1}$~\cite{Reilly2016}. However, experimentally determining binding affinities under well-defined, pristine conditions is notoriously challenging~\cite{Muller-Dethlefs2000}. In addition, thousands of computational works describe physical interactions in materials that are not well understood at the experimental level, for instance as part of rational design initiatives in novel materials including soft colloidal matter, nanostructures, and metal organic and covalent organic frameworks~\cite{Wang2018,Lee2018,Ongari2019}. The present shortage of benchmark information is a major setback for forming reliable predictions across the natural sciences and is frequently addressed through demanding, but increasingly feasible, wavefunction-based methods. However, extending the use of highly-accurate methods to a regime of larger molecules is hindered by theoretical and technical challenges due to the steep increase in computational cost required for an accurate description of many-electron interactions~\cite{Al-Hamdani2019,CCSD_DMC_solidH}.

\begin{figure}[b]
\includegraphics[width=0.45\textwidth]{figure1.png}
\caption{\label{fig1:toc} The CCSD(T) and FN-DMC computed binding energies of a water-benzene dimer~\cite{waterongraphene} are shown in comparison to those of a buckyball-ring complex computed here. It can be seen that the binding energy increases by a factor of $\sim$10, near-linearly with the size of the system, whereas the corresponding disagreement between CCSD(T) and FN-DMC increases by a factor of $\sim$100.}
\end{figure}

Here we use two widely trusted wavefunction methods that can provide sub-chemically accurate solutions to the electronic Schr{\"o}dinger equation for non-covalent interactions. First, we utilize coupled-cluster (CC) theory with single, double, and perturbative triple excitations [CCSD(T)]~\cite{CCSD(T)} -- approximated via the local natural orbital (LNO) scheme to be practicable [LNO-CCSD(T)]~\cite{LocalCC3,LocalCC4}. Coupled cluster theory has gained great prominence, and the label of `gold standard', in the last 30 years for its remarkable accuracy on virtually all systems in its domain of applicability~\cite{ShavittBartlettBook}. Second, we use fixed-node diffusion Monte Carlo (FN-DMC), a stochastic quantum method that computes the energy of the many-electron wavefunction directly. This method has seen a surge of use in recent years, particularly for large molecules and periodic systems with non-covalent interactions~\cite{Dubecky2016,CCSD_DMC_solidH}, such as molecular crystals and adsorption on 2D materials~\cite{Dubecky2016,Zen2018,al-hamdani2017cnt}.
As we demonstrate in Fig.~\ref{fig1:toc}, CCSD(T) and FN-DMC \pn{interaction energies} are in sub-chemical agreement in small systems such as the benzene-water dimer~\cite{waterongraphene}. Nonetheless, FN-DMC and CCSD(T) are still prohibitively expensive for most applications in biology and chemistry, and as a result, very little is known about how predictive these theoretical methods are in the regime of larger molecules. Straightforward extrapolations of interactions from small molecules to large complexes are difficult to make due to the interplay and accumulation of interactions that are non-additive, anisotropic, or have many-body character~\cite{Ambrosetti-Science,Jordan2019,Jenness2010a,waterongraphene,mp2_rpa_nci_fruche}. As such, a deeper understanding of non-covalent interactions can be gained by directly applying state-of-the-art methods in larger molecular complexes. Here, we use a frequently studied compilation, the L7 molecular data set from Sedlak \textit{et al.}~\cite{L7set}, to ascertain the predictive power of FN-DMC and CCSD(T) for relatively large complexes involving intricate $\pi-\pi$ stacking, electrostatic interactions, and hydrogen-bonding (see Fig.~\ref{fig1:intro}). In addition, we consider a larger system of a C$_{60}$ buckyball inside a [6]-cycloparaphenyleneacetylene ring (which we label as C$_{60}$@[6]CPPA), consisting of 132 atoms. This structure has a number of interesting features: (i) an open framework that can be found in covalent organic frameworks and carbon nanotubes, (ii) the buckyball has a large polarizability ($76\pm8$~\AA$^3$)~\cite{Antoine1999}, which gives \yas{rise} to considerable dispersion interactions, and (iii) confinement between the ring and the buckyball that may cause non-trivial long-range repulsive interactions~\cite{Sadhukhan2017}.

\begin{figure}[hbt]
\includegraphics[width=8cm]{figure2.png}
\caption{\label{fig1:intro} The supramolecular complexes from the L7 data set~\cite{L7set} and a buckyball-ring supramolecular complex consisting of 132 atoms.}
\end{figure}

Following recent algorithmic advances for more efficient CCSD(T) and FN-DMC, we predict interaction energies for a set of supramolecular complexes and converge numerical thresholds to the best of our joint knowledge and expertise. \yas{Hereafter, we refer to CCSD(T) and FN-DMC interaction energies but note that a number of approximations are used in both methods. More specifically, the CCSD(T) interaction energies we report come from systematically converging LNO-CCSD(T) towards canonical CCSD(T). Meanwhile, the significance of approximations in FN-DMC interaction energies is assessed using statistical measures.} CCSD(T) and FN-DMC \pn{are consistent} for five of the eight supramolecular complexes, covering a range of interactions including hydrogen bonding and $\pi-\pi$ stacking. However, we find that three key complexes reveal several kcal mol$^{-1}$ differences between best estimated CCSD(T) and FN-DMC calculations. Most notably, a substantial disagreement of $\sim$8 kcal mol$^{-1}$ (or $20~\%$) is found in the interaction energy ($E_\text{int}$ as defined in Methods) of the buckyball-ring system. \pn{This 8 kcal mol$^{-1}$ inconsistency remains on top of the uncertainty estimates incorporating all controllable sources of errors. We also gauge the impact of approximations intrinsic to each method, not covered in the numerical uncertainty estimates, and find that 8 kcal mol$^{-1}$ is an order of magnitude beyond these.
It is thus yet unclear whether this discrepancy would also be present between the approximation-free CCSD(T) and DMC results, or whether it is the result of an unexplored source of error.} \pn{As shown in Fig. \ref{fig1:toc} and in Table \ref{tab:my-table} below, such a sizable deviation cannot be explained solely by the size-extensive growth of the difference between CCSD(T) and FN-DMC.} Consequently, the interaction energies of three of the complexes considered here are still unsettled.

We applied two different, widely used and well-performing DFT approaches developed for capturing long-range \yas{dispersion interactions: DFT+D4~\cite{dftd4} and DFT+MBD~\cite{MBD}. Both methods model London dispersion based on a coarse-grained description and account for all orders of many-body dispersion, albeit in different manners. See Refs.~\cite{Hermann2017a,Grimme2016} for an overview of various ways to capture dispersion in the DFT framework.} We find that DFT+MBD closely matches FN-DMC, while the recent DFT+D4 method agrees well with CCSD(T), irrespective of the level of disagreement between CCSD(T) and FN-DMC. Therefore, the absence of either CCSD(T) or FN-DMC references could incorrectly suggest that one of the DFT methods performs better than the other. This illustrates that the unprecedented level of disagreement amongst state-of-the-art methods in large organic molecules has consequences well outside the developer communities.

\subsection*{State-of-the-art methods for non-covalent interactions}
CCSD(T) and FN-DMC methods account for dynamic electron correlation through an expansion in electron configurations in the former and through the real-space fluctuation of electrons in the latter. These two equally viable formulations can be \pn{illustrated by} the corresponding expressions of $\Psi({\bf R})$, the exact wavefunction:
\begin{itemize}
\item [1.] {\bf DMC:} Imaginary time ($\tau$) propagation of a trial function $\Psi_{\mathrm{T}}({\bf R})$ in real space: $\displaystyle|\Psi({\bf R}) \rangle = \lim_{\tau \to \infty} \text{exp}\!\left[-\tau(\hat{H}-E_\mathrm{T})\right] |\Psi_{\mathrm{T}}({\bf R}) \rangle$
\item[2.] {\bf CC:} Expansion of excited determinants generated via the operator $\hat{T}_n$ from a reference wavefunction: $\displaystyle|\Psi({\bf R}) \rangle = \text{exp}\!\left[\sum_{n=1}^{\infty}\hat{T}_n\right] |\Psi_{\mathrm{T}}({\bf R}) \rangle$
\end{itemize}
The crucial challenge lies in extensively accounting for relatively small fluctuations in the electron charge densities. In FN-DMC this implies the need for relatively small time-steps $\Delta\tau$ for the projection of the wavefunction, as well as an extensive sampling of electron configurations in real-space ($\lim_{\tau \to \infty}$), in order to reduce the stochastic noise in the predicted energy. In coupled cluster theory, non-covalent interactions require a high-order treatment of many-electron processes, as is included in CCSD(T), and a sufficiently large single-particle basis set. Reaching basis set saturation and well-controlled local approximations concurrently for the studied systems required previously unfeasible computational efforts, as shown by the several kcal mol$^{-1}$ scatter of interaction energy predictions reported for the L7 set (see Fig. \ref{fig2:mainres}).
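For orientation, the basis-set extrapolations referred to below are typically of the two-point inverse-cubic form. The following is a minimal sketch, assuming the widely used $X^{-3}$ dependence of the correlation energy; the production calculations follow the scheme of Ref.~\cite{CorrConv}, which may differ in detail.

\begin{verbatim}
def cbs_two_point(e_x, e_y, x):
    """Two-point CBS extrapolation of correlation energies obtained
    with cardinal numbers X and Y = X + 1 (e.g. X=3 for aug-cc-pVTZ),
    assuming E(X) = E_CBS + A / X**3."""
    y = x + 1
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)
\end{verbatim}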
Our recent efforts enabled the following: (i) a systematically converging series of local CCSD(T) results is presented for highly-complicated complexes, (ii) both the local and the basis set incompleteness (BSI) errors are closely monitored using comprehensive uncertainty measures~\cite{LocalCC4}, and (iii) convergence up to chemical accuracy is reached for the complete L7 set concurrently in the local approximations as well as in the basis set saturation. The benefit of such demanding convergence studies is that the resulting interaction energies, up to the respective error bars, can be considered independent of the \pn{corresponding} approximations. Consequently, we expect that the CBS limit of the exact CCSD(T) results could, in principle, be approached similarly using alternative basis sets~\cite{QMC_CC_solids,CCSD_DMC_solidH,MRA_CC2} or local correlation methods~\cite{DLPNO-CCSD(T),PNOCCreview,PNO-CCSDHattig,DLPNO-CCSD(T)-F12}, as is clearly observed for some of the present complexes (see, \textit{e.g.} GGG or CBH in Fig. \ref{fig2:mainres}). We use highly-optimized algorithms both for FN-DMC and CCSD(T), as outlined in Methods, and push them beyond the typically applied limits. We used \textit{circa} 0.7 and 1 million CPU core hours for FN-DMC and CCSD(T), respectively. This is equivalent to running a modern 28-core machine constantly for $\sim7$ years.

\subsection*{Losing consensus on supramolecular interactions}
Demonstrating agreement between fundamentally different electronic structure methods for solving the Schr\"{o}dinger equation provides a proof-of-principle for the accuracy of the methods beyond technical challenges. \pn{To date, disagreements between CCSD(T) and FN-DMC have been reported only for systems where their key assumptions, \textit{e.g.} single-reference wavefunction, accurate node-structure, etc., were not completely fulfilled~\cite{Be2_dmc,dlpno_spincrossover}. Previously however, CCSD(T) and FN-DMC were found to agree, within the error bars, for the interaction energies of small organic molecules with pure dynamic correlation~\cite{A24set2,Dubecky2016,Al-Hamdani2019} as well as some extended systems~\cite{waterongraphene,Zen2018,lih_dmc_cc}. Establishing this agreement for systems at the 100 atom range has, however, been hindered by the sizable or unavailable error estimates for finite systems~\cite{Al-Hamdani2019}.} \yas{For example, binding energies of large host-guest complexes derived from experimental association free energies~\cite{s12l,Sure2015} motivated previous FN-DMC~\cite{Hermann2017} as well as local CCSD(T)~\cite{Calbo2015} computations. However, conclusive remarks could not be made on the consistency of FN-DMC and local CCSD(T) on these complexes due to technical difficulties, unavailable uncertainty estimates for local CCSD(T), and large error estimates on both experimental and FN-DMC energies (up to a few kcal mol$^{-1}$).} \pn{Here, we consider similar but somewhat smaller supramolecular complexes (Fig.~\ref{fig1:intro}) and obtain tightly converged local CCSD(T) and FN-DMC results sufficient for rigorous comparisons (see Fig.~\ref{fig2:mainres} and Table~\ref{tab:my-table}).} The level of uncertainty in our results is indicated by stochastic error bars for FN-DMC and the sum of local and BSI error estimates for CCSD(T). The complexes are arranged in Fig. \ref{fig2:mainres} according to increasing interaction strength, which roughly scales with the size of the interacting surface.
CCSD(T) and FN-DMC agree on the interaction energy \yas{to within 0.5 kcal mol$^{-1}$, taking error bars into account,} for a subset of the complexes we consider: GGG, CBH, GCGC, C3A and PHE. These complexes are between 48 and 112 atoms in size and exhibit $\pi-\pi$ stacking, hydrogen bonding, and dispersion interactions. Therefore, the agreement for these five complexes indicates that their \yas{absolute} interaction energies are established references and can be used to benchmark other methods for large molecules. \yas{Here, relative differences of very small interaction energies have to be interpreted carefully as they are sensitive to the uncertainty estimates. In GGG for example, $\Delta_\text{min}$ is 0.1 kcal mol$^{-1}$ whilst the relative disagreement lies between 3$\%$ and 50$\%$. In contrast, the relative disagreement between FN-DMC and CCSD(T) is better resolved in the more strongly interacting C$_{60}$@[6]CPPA complex, at 20--31$\%$.}

\begin{table}[htb!]
\caption{Interaction energies in kcal mol$^{-1}$ for best estimated CCSD(T) and FN-DMC\pn{, as well as their minimum differences ($\Delta_\text{min}$),} for the L7 supramolecular data set and the buckyball-ring complex (C$_{60}$@[6]CPPA). The indicated errors for CCSD(T) are extrapolated from the convergence of basis sets and local approximations in LNO-CCSD(T). The errors indicated in FN-DMC interaction energies are stochastic errors, with 1 standard deviation (1-$\sigma$).}
\label{tab:my-table}
\begin{ruledtabular}
\begin{tabular}{c rrrr}
Complex & No. of atoms & CCSD(T) & FN-DMC & $\Delta_\text{min}$ \\
\hline
GGG & 48 & $-2.1 $ $\pm~0.2$ & \yas{$-1.5 $ $\pm~0.3$} & 0.1 \\
CBH & 112 & $-11.0$ $\pm~0.2$ & $-11.4$ $\pm~0.4$ & 0.0 \\
GCGC & 58 & $-13.6$ $\pm~0.4$ & \yas{$-12.3$ $\pm~0.3$} & 0.5 \\
C3A & 87 & $-16.5$ $\pm~0.8$ & \yas{$-15.0$ $\pm~0.5$} & 0.2 \\
C2C2PD & 72 & $-20.6$ $\pm~0.6$ & $-18.1$ $\pm~0.4$ & 1.5 \\
PHE & 87 & $-25.4$ $\pm~0.2$ & $-26.5$ $\pm~0.7$ & 0.3 \\
C3GC & 101 & $-28.7$ $\pm~1.0$ & $-24.2$ $\pm~0.7$ & 2.9 \\
C$_{60}$@[6]CPPA & 132 & $-41.7$ $\pm~1.7$ & $-31.1$ $\pm~0.7$ & 8.3
\end{tabular}%
\end{ruledtabular}
\end{table}

A salient and surprising finding is the disagreement between state-of-the-art methods on the interaction energy of three non-trivial complexes: coronene dimer (C2C2PD), circumcoronene-GC base pair (C3GC), and buckyball-ring (C$_{60}$@[6]CPPA). Considering the error bars, the minimum differences ($\Delta_\text{min}$), as indicated in \pn{Table~\ref{tab:my-table} and} Fig.~\ref{fig2:mainres}, are 1.5, 2.9, and 8.3 kcal mol$^{-1}$ for C2C2PD, C3GC, and C$_{60}$@[6]CPPA, respectively, and could be as high as 3.5, 6.2, and 13.1 kcal mol$^{-1}$. \pn{Considering the comparable size of C3A, PHE, and CBH to C2C2PD, C3GC, and C$_{60}$@[6]CPPA, the $\Delta_\text{min}$ values of the latter three complexes are not explained simply by the large size or the large area of the interacting surface.} CCSD(T) predicts consistently stronger interaction in these complexes than FN-DMC,\pn{ but at this point it is unclear what the exact interaction energies are.} C2C2PD has attracted the most attention to date in the CCSD(T) context as it represents a stepping stone between two widely studied systems: the benzene dimer and the graphene bilayer~\cite{Al-Hamdani2019}. C2C2PD has already posed a significant challenge to various local CCSD(T) methods due to its slowly-decaying long-range interactions~\cite{DHNLsupramol,L7CCGrimme1,B97-3c,XSAPT-MBD,DLPNO-CCSD(T)-F12,PNOCCreview,LocalCC4}.
Considerable efforts have been devoted recently~\cite{DLPNO-CCSD(T)-F12,PNOCCreview,LocalCC4} to narrow down the local CCSD(T) interaction energy of C2C2PD to the range of about $-19$ to $-21$ kcal mol$^{-1}$. Thus \pn{the presently reported $-20.6$ $\pm~0.6$ kcal mol$^{-1}$ interaction energy and previous local CCSD(T) results, containing analogous local approximations,} consistently indicate stronger interaction than FN-DMC for C2C2PD.

\begin{figure}[htb]
\includegraphics[width=8.6cm]{figure3.png}
\caption{\label{fig2:mainres} CCSD(T) and FN-DMC interaction energies for the supramolecular complexes of the L7 data set~\cite{L7set} and the C$_{60}$@[6]CPPA buckyball-ring complex, arranged in terms of increasing interaction strength. Gray bars mark the range of interaction energies reported in the literature using alternative wavefunction-based methods [\textit{e.g.} QCISD(T)~\cite{L7set}, and various local CCSD(T) approaches~\cite{DHNLsupramol,L7CCGrimme1,B97-3c,XSAPT-MBD,DLPNO-CCSD(T)-F12,PNOCCreview}]. The yellow bars indicate the delta value ($\Delta_\text{min}$), which is the minimum difference between best converged CCSD(T) and FN-DMC, given by the estimated and stochastic error bars, respectively.}
\end{figure}

\subsection*{Distinct errors using DNA base molecules on circumcoronene}\label{sec:convergence}
The C3GC and C3A complexes are ideal for assessing the convergence of CCSD(T) and FN-DMC, due to their chemical similarity and the importance of $\pi-\pi$ stacking interactions, \textit{i.e.} nucleobases stacked on circumcoronene. CCSD(T) and FN-DMC \yas{agree within 1 kcal mol$^{-1}$} for the interaction energy of C3A, whereas there is a notable disagreement of at least 2.9 kcal mol$^{-1}$ in the interaction energy of C3GC. Interestingly, both systems involve similar interaction mechanisms, with C3GC exhibiting both stacking and hydrogen-bonding interactions. CCSD(T) and FN-DMC interaction energies involve multiple approximations. Known sources of error to consider in our FN-DMC calculations are:
\begin{itemize}
\item The fixed-node approximation which restricts the nodal-structure to that of the input guiding wavefunction.
\item Time-step bias from the discretization of imaginary time for propagating the wavefunction.
\item Pseudopotentials to approximate core electrons for each atom.
\item Non-uniform quality of optimized trial wavefunctions for fragments and bound complex \yas{at larger time-steps}.
\end{itemize}
In obtaining CCSD(T) interaction energies, the sources of error are:
\begin{itemize}
\item Local approximations of long-range electron correlation according to the LNO scheme.
\item Single-particle basis representation of the CCSD(T) wavefunction.
\item Neglected core electron correlation.
\item Missing high-order many-electron contributions beyond CCSD(T).
\end{itemize}
In Fig.~\ref{fig3:convergence} we analyse the most critical approximations for each method using the example of the C3A and C3GC complexes, and we also consider the other remaining known sources of error in Methods.

\begin{figure*}[htb]
\includegraphics[width=16cm]{figure4.png}
\caption{\label{fig3:convergence} The interaction energy of the C3A and C3GC complexes using LNO-CCSD(T) (panels a and b) and FN-DMC (panel c). The orange and green dashed horizontal lines, for C3A and C3GC, respectively, enclose the best estimated CCSD(T) (panels a and b) and the final FN-DMC (panel c) interaction energies using the corresponding uncertainty estimates and stochastic error bars.
The FN-DMC error bars indicate a stochastic error of 1-$\sigma$. The yellow bar denotes the minimum difference between CCSD(T) and FN-DMC ($\Delta_\text{min}$). (a) CP-corrected and uncorrected LNO-CCSD(T) interaction energies using the aug-cc-pV$X$Z basis sets, as well as CBS($X$,$X+1$) extrapolation. (b) Convergence of half CP-corrected LNO-CCSD(T)/CBS(Q,5) interaction energies using a series of LNO thresholds, as well as Normal--Tight (N--T) and Tight--very Tight (T--vT) extrapolations. (c) FN-DMC interaction energies with two nodal surfaces from DFT (PBE0 and LDA) for C3GC, and with different time-steps (given in a.u.) for C3A and C3GC. (d) The C3A complex. (e) The C3GC complex.}
\end{figure*}
For the single-particle basis representation in CCSD(T) we employed conventional correlation-consistent basis sets augmented with diffuse functions~\cite{AUGPVXZ}, aug-cc-pV$X$Z ($X$=T, Q, and 5), as shown in panel a) of Fig.~\ref{fig3:convergence}. The remaining basis set incompleteness (BSI) error is alleviated using extrapolation~\cite{CorrConv} toward the complete basis set (CBS) limit [CBS($X$,$X+1$), $X$=T, Q] and counterpoise (CP) corrections~\cite{BSSE}. The local errors decrease systematically as the LNO threshold sets are tightened (Normal, Tight, very Tight), enabling extrapolations, \textit{e.g.} Normal--Tight (N--T), to estimate the canonical CCSD(T) interaction energy~\cite{LocalCC4} [see panel b) of Fig.~\ref{fig3:convergence}]. Exploiting the systematic convergence properties, an upper bound for both the local and the BSI errors can be given~\cite{cc-error-estimate}.

Benchmarks presented previously for energy differences of a broad variety of systems showed excellent overall accuracy at the Normal--Tight extrapolated LNO-CCSD(T)/CBS(T,Q) level (M1)~\cite{LocalCC4}. However, the BSI error bar of 1.0~kcal~mol$^{-1}$ and the local error bar of 2.2~kcal~mol$^{-1}$ obtained for C3GC at this M1 level preclude a definitive comparison with FN-DMC. The next steps along both series of approximations towards chemical accuracy, \textit{i.e.} the use of very Tight LNO thresholds and the aug-cc-pV5Z basis set (M2), have been enabled by our recent method development efforts~\cite{LocalCC3,LocalCC4,MRCC}. With these better converged interaction energies, the M2-level uncertainty estimates are up to a factor of three smaller than at the M1 level. Explicitly, a local (BSI) error estimate of 0.7 (0.4) kcal mol$^{-1}$ is obtained for C3GC. The same measures are largest for C$_{60}$@[6]CPPA at the M2 level, being 1.1 and 0.6 kcal mol$^{-1}$, respectively. Moreover, for the remaining L7 complexes, the local (BSI) uncertainty estimates indicate even better convergence of 0.1--0.4 (0.1--0.3) kcal mol$^{-1}$. Additional details are provided in Methods and in Section~\ref{SMCC} of the Supplemental Material (SM).

FN-DMC has the advantage that the wavefunction is sampled in real space without the need for a numerical representation of many-particle basis states, thus reducing the sensitivity to the single-particle basis set. Instead, the pertinent sources of error in FN-DMC, which we assess in Fig.~\ref{fig3:convergence}, are the effects of the fixed-node approximation and the time-step bias. Note that these sources of error are not included in the FN-DMC stochastic error bars. First, the different nodal surfaces, obtained from DFT with the LDA and PBE0 functionals, serve to indicate the dependence of the FN-DMC interaction energy on the nodal structure.
Indeed, from Fig.~\ref{fig3:convergence}, we find no indication that the FN-DMC interaction energy of C3GC is affected by the nodal structure outside of the stochastic error bars. Second, FN-DMC energies are sensitive to the time-step, and we rely on recent improvements in FN-DMC algorithms~\cite{zen2016boosting,Zen2019} that enable convergence with time-steps as large as 0.05~a.u. We used 0.03~a.u. and 0.01~a.u. time-steps to compute the interaction energies of C3A and C3GC. Fig.~\ref{fig3:convergence} indicates that the interaction energy is unchanged for C3A and C3GC (within the stochastic error) for the different time-steps considered here. The time-step and fixed-node approximations perform similarly well for the coronene dimer and the buckyball-ring complex (see Section~\ref{SMDMC} of the SM).

\subsection*{Open challenges for the next generation of many-body methods}
CCSD(T) and FN-DMC have been shown to agree with sub-chemical accuracy for small organic dimers~\cite{A24set2,Dubecky2016,Al-Hamdani2019}, molecular crystals~\cite{Zen2018}, and small physisorbed molecules on surfaces~\cite{waterongraphene,lih_dmc_cc}. Indeed, we also find good agreement in the absolute interaction energies for five of the eight complexes considered here. However, the disagreement of several kcal mol$^{-1}$, particularly in C$_{60}$@[6]CPPA, cannot be explained by the controllable sources of error. While both methods are highly sophisticated, they are still approximations to the exact solution of the many-electron Schr{\"o}dinger equation. Moreover, there can be non-trivial coupling between approximations within each method, which remains poorly understood for complex many-electron wavefunctions. Here, we estimate the magnitude of additional approximations that are generally regarded as even more robust, and we contemplate potential strategies for improvement.

\subsubsection{Are we there yet with FN-DMC?}
The reported interaction energies of C2C2PD, C3GC, and C$_{60}$@[6]CPPA indicate that FN-DMC stabilizes the interacting complexes more weakly than CCSD(T). Therefore, one possibility for the discrepancy between the methods is that FN-DMC (as applied here) does not capture the correlation energy in the bound complexes sufficiently. Reasons for this can include the fixed-node approximation and, more generally, insufficient flexibility in the wavefunction ansatz. The Slater-Jastrow ansatz was applied here using a single determinant combined with a Jastrow factor containing explicit parameterizable functions to describe electron-electron, electron-nucleus, and electron-electron-nucleus interactions. We have evaluated FN-DMC interaction energies with different nodal structures for C3GC, C2C2PD, and C$_{60}$@[6]CPPA, and in all cases the FN-DMC interaction energies are in 1-$\sigma$ agreement (see Section~\ref{SMDMC} of the SM), with stochastic errors that are mostly under 1 kcal mol$^{-1}$. Among these systems, the largest potential deviation ($\Delta_\text{max}$) due to the fixed-node error is estimated to be $\sim$3.7 kcal mol$^{-1}$, in C$_{60}$@[6]CPPA. Although this potentially large source of error is not enough to explain the 8.3 kcal mol$^{-1}$ $\Delta_\text{min}$ disagreement with CCSD(T), it remains a pertinent issue for establishing chemical accuracy. Reducing the fixed-node error in such large molecules, for example by using more than one Slater determinant to systematically improve the nodal structure, remains challenging~\cite{Morales2012,Scemama2016}.
Promising alternatives include the Jastrow antisymmetrized geminal power approach, which has recently been shown to recover near-exact results for a small, strongly correlated cluster of hydrogen atoms~\cite{Genovese2019}. The Jastrow factor is a convenient means of increasing the efficiency of FN-DMC since, in the zero time-step limit and with sufficient sampling, the FN-DMC energy is independent of this term. However, the quality of the Jastrow factor can be non-uniform for the bound complex and the non-interacting fragments, which can introduce a bias at larger time-steps. The recent determinant localization approximation (DLA)~\cite{Zen2019} reduces this effect in FN-DMC; it was applied to the C$_{60}$@[6]CPPA complex reported in Table~\ref{tab:my-table} and also tested for the GGG, C3A, and C2C2PD complexes (see Methods for further details). In all cases, FN-DMC with DLA agrees within 1-$\sigma$ with the non-DLA FN-DMC interaction energies. For example, the C2C2PD FN-DMC interaction energy with DLA is $-17.4 \pm 0.5$ kcal mol$^{-1}$, whilst with the standard locality approximation (LA) it is $-18.1 \pm 0.4$ kcal mol$^{-1}$. Moreover, the interaction strengths tend to be weaker with DLA in the systems we consider, \textit{i.e.} farther from the CCSD(T) interaction energies. As such, the discrepancy between FN-DMC and CCSD(T) remains regardless of any potential error from the Jastrow factor in our findings.

We estimate the error from the use of Trail and Needs pseudopotentials~\cite{Trail2005,Trail2005b} in FN-DMC at the Hartree-Fock (HF) level using the interaction energy of C2C2PD. We find a 0.1 kcal mol$^{-1}$ difference between the HF interaction energies with the employed pseudopotentials and without (\textit{i.e.} all-electron), which is well within the acceptable uncertainty for our findings. In addition, Zen \textit{et al.}~\cite{Zen2018} have previously compared Trail and Needs pseudopotentials with correlated electron pseudopotentials~\cite{Trail2013} at the FN-DMC level using the binding energy of the ammonia dimer and found agreement within 0.1 kcal mol$^{-1}$.

In principle, a more flexible wavefunction ansatz allows a more accurate many-body wavefunction to be reached in DMC, thus recovering electron correlation more effectively. To this end, recently introduced machine learning approaches~\cite{NNsolvSchrodingerEq,NNsolvSchrodingerEq2} are promising, but they are more expensive due to the considerable increase in the number of parameters. Once feasible, however, a systematic assessment of the amount of electron correlation recovered by these different ans{\"a}tze in non-covalently bound systems will bring valuable insight to the current puzzle.

\subsubsection{Potential avenues for improvement upon CCSD(T)}
Considering the complexes exhibiting significant $\pi$--$\pi$ interactions, CCSD(T) is found to predict stronger interaction than FN-DMC. As some of the individually negligible but collectively important long-range interactions are only estimated in local CC methods, these potentially overestimated interaction energy contributions could benefit from a higher-level theoretical description~\cite{fragmentbook,PNOCCreview}. In the case of the LNO scheme, the majority of the local approximations have a marginal effect on the interaction energies when very Tight settings are employed~\cite{LocalCC4}. For instance, long-range interactions that do not benefit from the full CCSD(T) treatment add up to at most 2.9 kcal mol$^{-1}$ for the interaction energy of C$_{60}$@[6]CPPA.
The presented error estimate, which allows for almost 40\% error in this term, reliably covers this approximation. While these and other non-negligible LNO approximations are closely monitored (see Section~\ref{localconvg} of the SM), remaining uncertainties outside of the presented error bars cannot be ruled out. All in all, the convergence measures assessing the local errors of the LNO-CCSD(T) interaction energies indicate convergence to at least 97.4\%, corresponding to a local uncertainty of at most 1.1 kcal mol$^{-1}$.

The employed single-particle basis sets perform exceptionally well for CCSD(T) computations of small molecules~\cite{AUGPVXZ,CorrConv}, but approaching the CBS limit of CCSD(T) for large systems is mostly uncharted territory in the literature~\cite{LocalCC4,PNOCCreview}. The agreement of the CP-corrected CBS(T,Q), the CP-corrected CBS(Q,5), and the uncorrected CBS(Q,5) results within 0.06--0.36 kcal mol$^{-1}$ is highly satisfactory (see Sect.~\ref{CCbasis} of the SM). Furthermore, the CBS(5,6) results obtained with the aug-cc-pV6Z basis set for GGG are fully consistent with the CBS(Q,5) interaction energies (see Sect.~\ref{CCbasis} of the SM). CC methods exploiting explicitly correlated wavefunction forms~\cite{PNOCCreview,DLPNO-CCSD(T)-F12}, as well as alternatives to conventional Gaussian basis sets~\cite{QMC_CC_solids,CCSD_DMC_solidH,MRA_CC2}, have emerged recently and could provide independent verification of the systematic convergence studies performed here.

The higher-order contributions of three-, four-, and more-electron processes on top of CCSD(T) are usually found to be negligible for weakly-correlated molecules~\cite{A24set2}. However, the available numerical experience is limited to complexes below about a dozen atoms, and for some highly polarizable systems the beyond-CCSD(T) treatment of three-electron processes has been shown to contribute significantly to three-body dispersion~\cite{LR3bodydispersion}. The weakly-correlated nature of all complexes is indicated by the fact that the perturbative (T) contribution to the total correlation energy component of the CCSD(T) interaction energy is consistently around 18--20\%. Additionally, the CC amplitude based measures all point to pure dynamic correlation. According to our LNO-approximated estimates for the GGG complex, the infinite-order three-electron terms on top of the perturbative (T) treatment contribute only about $-0.01$~kcal~mol$^{-1}$, while the perturbative four-electron contribution~\cite{Pert} is around $-0.02$~kcal~mol$^{-1}$ (see Sect.~\ref{coreTQ} of the SM). Due to the extreme computational cost of such computations, it remains an open and considerable challenge to establish, for a representative sample of systems, whether the contribution of higher-order processes is within sub-chemical accuracy for larger and more complex molecules.

The effect of core correlation is expected to be very small; we therefore attempted to evaluate it independently from the numerical noise of the other approximations. For instance, the all-electron interaction energy of C2C2PD is stronger than the frozen-core one at second order by 0.2 kcal mol$^{-1}$ (4.6~cal~mol$^{-1}$ per C atom).
All in all, core and higher-order correlation effects appear to strengthen the CCSD(T) interaction energies and thus to slightly increase the deviation from FN-DMC.

\subsubsection{Insights from experiments and density-functional approximations}
Experimental binding energies or association constants of supramolecular complexes are particularly valuable, when available, but they also have their limitations, as back-corrections are needed to separate, for example, the effects of thermal fluctuations and solvation~\cite{dissocEreview16}. In the case of C$_{60}$@[6]CPPA, for example, the association constant was measured in benzene solution and indicates a stable encapsulated complex, but one which could not be well characterized by X-ray crystallography, purportedly due to the rapid rotation of the buckyball guest~\cite{Kawase2003}. Instead, a non-fully encapsulated structure was successfully characterized using toluene anchors on the buckyball. This demonstrates the number of physical leaps that exist between what can be measured and what can be accurately computed.

Other high-level methods, such as the full configuration interaction quantum Monte Carlo (FCI-QMC) method~\cite{QMC_CC_solids,CCSD_DMC_solidH}, can be key to assessing the shortcomings of major approximations such as the fixed-node approximation and the treatment of static correlation. Once the severe scaling with system size associated with FCI-QMC and similar methods is addressed, larger molecules will become feasible. At present, however, the lack of reference values for large systems remains a salient problem. The scarcity of reference information has an impact on all other modelling methods, including density-functional approximations (DFAs) and semi-empirical, force field, or machine learning based models, which are validated or parameterized based on higher-level benchmarks. In particular, there is a race to simulate larger, more anisotropic, and more complex materials, accompanied by a difficult choice of modelling method. To demonstrate the consequences of inconsistent references, Fig.~\ref{fig4:dftcomparison} shows the interaction energy discrepancies obtained with two DFAs, PBE0+D4~\cite{dftd4} and PBE0+MBD~\cite{MBD}, which are both designed to capture all orders of many-body dispersion interactions, albeit in different manners. Intriguingly, the PBE0+D4 method is in close agreement with CCSD(T) (mean absolute deviation, MAD=1.1~kcal~mol$^{-1}$), whereas PBE0+MBD is closer to FN-DMC (MAD=1.5~kcal~mol$^{-1}$), but their performance is hard to characterize when CCSD(T) and FN-DMC disagree. Moreover, we decomposed the interaction energies from the DFAs into dispersion components and find that, for C$_{60}$@[6]CPPA, the main difference between PBE0+MBD and PBE0+D4 is 6.5~kcal~mol$^{-1}$ in the two-body dispersion contribution. Differences in the beyond-two-body dispersion interactions are smaller, at most 1.6~kcal~mol$^{-1}$ in C$_{60}$@[6]CPPA.
\begin{figure}[htb]
\includegraphics[width=8.5cm]{figure5.pdf}
\caption{\label{fig4:dftcomparison} The smallest absolute difference in interaction energy between pairs of methods, $\Delta_{\text{min}}(|E^{\text{A}}_{\text{int}} - E^{\text{B}}_{\text{int}}|)$, in kcal mol$^{-1}$, with methods $\text{A}$ and $\text{B}$ indicated along the vertical axis. $\Delta_{\text{min}}$ takes into account the error estimates for CCSD(T) and FN-DMC to show the smallest possible differences with respect to these reference methods.
The DFT methods have no quantified uncertainty estimates associated with them. The compared methods are CCSD(T), FN-DMC, PBE0+MBD, and PBE0+D4. The supramolecular complexes are those in the L7 data set and the C$_{60}$@[6]CPPA buckyball-ring complex.}
\end{figure}

\section*{Discussion}
Until now, disagreements between reference interaction energies of extended organic complexes have typically been ascribed to unconverged results arising from practical bottlenecks. Here, we report highly-converged results at the frontier of wavefunction based methods, uncovering a disconcerting level of disagreement in the interaction energies of three supramolecular complexes. We have computed interaction energies from CCSD(T) and FN-DMC for a set of supramolecular complexes of up to 132 atoms exhibiting challenging intermolecular interactions. The accuracy of these methods has been repeatedly corroborated in the domain of dozen-atom systems with single-reference character, and here we find CCSD(T) and FN-DMC in excellent agreement for five of the supramolecular complexes, suggesting that these methods are able to maintain remarkable accuracy for some larger molecules. However, the FN-DMC and CCSD(T) interaction energies disagree by 1.5 kcal mol$^{-1}$ for the coronene dimer (C2C2PD), by 2.9 kcal mol$^{-1}$ for the GC base pair on circumcoronene (C3GC), and by 8.3 kcal mol$^{-1}$ for the buckyball-ring complex (C$_{60}$@[6]CPPA). These disagreements are underpinned by sub-kcal mol$^{-1}$ stochastic errors in FN-DMC and by a systematically converging series of local CCSD(T) interaction energies accompanied by uncertainty estimates approaching chemical accuracy. Therefore, despite our best efforts to suppress all controllable sources of error, the marked disagreement of FN-DMC and CCSD(T) prevents us from providing conclusive reference interaction energies for these three complexes.

Such large differences in interaction energies surpass the widely-sought 1 kcal mol$^{-1}$ chemical accuracy and indicate that the highest level of caution is required even for our most advanced tools when they are employed at the hundred-atom scale. The supramolecular complexes we report feature $\pi$--$\pi$ stacking, hydrogen bonding, and intermolecular confinement, all of which are ubiquitous across natural and synthetic materials. Thus, our immediate goals are to elucidate the sources of the underlying discrepancies and to explore the scope of systems for which such deviations between reference wavefunction methods occur. Well-defined reference interaction energies, and a better characterization of their predictive power, have growing importance as they are frequently applied in chemistry, materials science, and the biosciences. Our findings should motivate cooperative efforts between experts in computational and experimental methods to obtain well-defined interaction energies and thereby extend the predictive power of first principles approaches across the board.

\section*{Methods}
The L7 structures have been defined by Sedlak \textit{et al.}~\cite{L7set}, and the structures can be found in the begdb database~\cite{Rezac2008}.
Note that the interaction energy, $E_{\text{int}}$, is defined with respect to two fragments even where the complex consists of more than two molecules (as in GGG, GCGC, PHE, and C3GC):
\begin{equation} \label{IEdef}
E_{\text{int}} = E_{\text{com}} - E_{\text{frag}}^1 - E_{\text{frag}}^2
\end{equation}
where $E_{\text{com}}$ is the total energy of the full complex, and $E_{\text{frag}}^1$ and $E_{\text{frag}}^2$ are the total energies of the isolated fragments 1 and 2, respectively. The fragment molecules have the same geometry as in the full complex, \textit{i.e.} they are not relaxed. Further details on the configurations can be found in the SM and in Ref.~\citenum{L7set}. The C$_{60}$@[6]CPPA complex is based on similar complexes in previous theoretical and experimental works~\cite{c60_at_CPPs_angew,s8_chemcomm,Hermann2017} and has been chosen to represent confined $\pi$--$\pi$ interactions that are still numerically tractable with our methodologies. Its geometry has been symmetrized to the $D_{3d}$ point group, while the individual fragments of C$_{60}$ and [6]CPPA are kept frozen ($I_h$ and $D_{6h}$ symmetry, respectively). The structure is provided in the SM.

\subsection*{The local natural orbital CCSD(T) method}
In order to reduce the $N^7$ scaling of canonical CCSD(T) with respect to the system size ($N$), the inverse sixth power decay of pairwise interactions can be exploited (local approximations) and the wavefunction can be compressed further via natural orbital (NO) techniques~\cite{fragmentbook}. Building on such cost-reduction techniques, a number of highly efficient local CCSD(T) methods have emerged in the past decade~\cite{fragmentbook,DLPNO-CCSD(T),DLPNO-CCSD(T)-F12,PNO-CCSDHattig,PNOCCreview,LocalCC3,LocalCC4}. As the local-approximation-free CCSD(T) energy can be approached by the simultaneous improvement of all local truncations in most of these techniques, in principle all local CCSD(T) methods are expected to converge to the same interaction energy. Here we employ the local natural orbital CCSD(T) [LNO-CCSD(T)] scheme~\cite{LaplaceT,LocalCC3}, which, for the studied systems, brings the feasibility of exceedingly well-converged CCSD(T) calculations in line with FN-DMC. The approximations of the LNO scheme automatically adapt to the complexity of the underlying wavefunction and enable systematic convergence towards the exact CCSD(T) correlation energy, with up to 99.99\% accuracy using sufficiently tight settings~\cite{LocalCC4}. The price of improvable accuracy is that the computational requirements can increase drastically depending on the nature of the wavefunction: while LNO-CCSD(T) has been successfully employed for macromolecules, such as small proteins in the 1000-atom range~\cite{LocalCC3,LocalCC4}, the sizable long-range interactions appearing in the complexes studied here pose a challenge for any local CCSD(T) method~\cite{PNOCCreview,DLPNO-CCSD(T),DLPNO-CCSD(T)-F12,LocalCC4}. This motivated several recent developments in our algorithm and computer code over the lifetime of this project, which cumulatively resulted in about a 2--3 orders of magnitude decrease in the time-to-solution and data storage requirements of LNO-CCSD(T)~\cite{LaplaceT,LocalCC3,LocalCC4} and made well-converged computations feasible for all complexes. For instance, we have designed a massively parallel conventional CCSD(T) code specifically for applications within the LNO scheme~\cite{MPICCSDpT} and integrated it with our highly optimized LNO-CCSD(T) algorithms~\cite{LaplaceT,LocalCC3,LocalCC4}.
Here, we report the first large-scale LNO-CCSD(T) applications that exploit the resulting high-performance capabilities using the most recent implementation in the {\sc Mrcc} package~\cite{MRCC} (release date February 22, 2020).

\subsection*{Computational details for CCSD(T)}
The LNO-CCSD(T)-based CCSD(T)/CBS estimates were obtained as the average of the CP-corrected and uncorrected (``half CP'')~\cite{BSSE}, Tight--very Tight extrapolated LNO-CCSD(T)/CBS(Q,5) interaction energies~\cite{LocalCC4}. Except for C3A, C3GC, and C$_{60}$@[6]CPPA, the CBS(Q,5) notation refers to CBS extrapolation~\cite{CorrConv} using the aug-cc-pV$X$Z basis sets~\cite{AUGPVXZ} with $X$=Q and 5. For C3A, C3GC, and C$_{60}$@[6]CPPA, a Normal LNO-CCSD(T)/CBS(Q,5)-based BSI correction ($\Delta_\text{BSI}$) was added to the Tight--very Tight extrapolated LNO-CCSD(T)/aug-cc-pVTZ interaction energies, exploiting the parallel convergence of the LNO-CCSD(T) energies for these basis sets~\cite{LocalCC4}. The local error bars shown, \textit{e.g.}, in panel b) of Fig.~\ref{fig3:convergence} are obtained via the extrapolation scheme of Ref.~\cite{LocalCC4}. The error bars accompanying the LNO-CCSD(T) interaction energies of Fig.~\ref{fig2:mainres} and Table~\ref{tab:my-table}, and determining the interval enclosed by the dashed lines in panels a) and b) of Fig.~\ref{fig3:convergence}, are the sums of the BSI and local error estimates. The BSI error measure is the maximum of two separate error estimates: the difference between the CP-corrected and uncorrected CBS(Q,5) energies, and the difference between the CP-CBS(T,Q) and CP-CBS(Q,5) results. This BSI error bar is increased by an additional term if $\Delta_\text{BSI}$ is employed, according to Sect.~\ref{CCbasis} of the SM. The local error bar of the best estimated CCSD(T) results (see Table~\ref{lccres} of the SM) is obtained from the difference of the Tight and very Tight LNO-CCSD(T) results evaluated with the largest possible basis sets~\cite{LocalCC4}.

\subsection*{Computational details for FN-DMC}
Our FN-DMC calculations use the Slater-Jastrow ansatz with single Slater determinants obtained from DFT. The Jastrow factor for each system contains explicit electron-electron, electron-nucleus, and three-body electron-electron-nucleus terms. The parameters of the Jastrow factor were optimized for each complex using the variational Monte Carlo (VMC) method and the varmin algorithm, which allows for systematic improvement of the trial wavefunction, as implemented in CASINO v2.13.610~\cite{Needs_2009}. Note that the bound complexes were used in the VMC optimizations, and the resulting Jastrow factor was used to compute the corresponding fragments. All systems were treated in real space as non-periodic open systems in VMC and FN-DMC. We performed FN-DMC with the locality approximation (LA) for the non-local pseudopotentials~\cite{Mitas1991} and a 0.03 a.u. time-step for all L7 complexes. Smaller time-steps of 0.003 and 0.01 a.u. were also used to compute the interaction energy of the C2C2PD complex, and the interaction energies from all three time-steps agree within the stochastic error bars. The C$_{60}$@[6]CPPA complex exhibited numerical instability using the standard LA. This prevented sufficient statistical sampling, and we therefore computed this complex with two alternative, more numerically stable approaches.
First, the energy reported in Fig.~\ref{fig2:mainres} and Table~\ref{tab:my-table} was computed using the recently developed determinant localization approximation (DLA)~\cite{Zen2019} implemented in CASINO v2.13.809~\cite{Needs_2009}. The DLA (i) gives better numerical stability than the LA algorithm, allowing more statistics to be accumulated, (ii) has a smaller dependence on the Jastrow factor, and (iii) addresses an indirect issue related to the use of non-local pseudopotentials. Second, the T-move approximation~\cite{Casula2006} (without DLA) was also applied to C$_{60}$@[6]CPPA for comparison. The T-move scheme is more numerically stable than the standard LA algorithm but is also more time-step dependent, and we therefore used results from 0.01 and 0.02 a.u. time-steps to extrapolate the interaction energy to the zero time-step limit, as reported in the SM. The extrapolated interaction energy with the T-move scheme is $-31.14 \pm 2.57$ kcal mol$^{-1}$ using the LDA nodal structure and $-29.16 \pm 2.33$ kcal mol$^{-1}$ using the PBE0 nodal structure. Due to the large stochastic errors on these results, we report the better converged DLA-based interaction energy (with the PBE0 nodal structure) in the main results, but we note that all three FN-DMC predictions agree within the statistical error bars. Furthermore, as the DLA is less sensitive to the Jastrow factor at finite time-steps, we have also tested the interaction energies of the GGG, C3A, and C2C2PD complexes, finding agreement with the LA-based FN-DMC results within one standard deviation. Further details can be found in the SM.

The initial DFT orbitals (which define the nodal structure in FN-DMC) were prepared using PWSCF in Quantum Espresso v.6.1~\cite{Giannozzi2009}. Trail and Needs pseudopotentials~\cite{Trail2005,Trail2005b} were used for all elements, with a plane-wave energy cut-off of 500 Ry. The plane-wave representation of the molecular orbitals from PWSCF was re-expanded in terms of B-splines. Since PWSCF uses periodic boundary conditions, all complexes were centered in an orthorhombic unit cell with a vacuum spacing of $\sim8$~\AA~in each Cartesian direction to ensure that the single-particle orbitals are fully enclosed. LDA orbitals were used for the L7 complexes and, in addition, PBE0 orbitals were also considered for C2C2PD, C3GC, and C$_{60}$@[6]CPPA. In all cases, the final FN-DMC interaction energies from the LDA and PBE0 nodal structures agree within the stochastic errors.

\section*{Acknowledgments}
We thank Dr. Andrea Zen for discussions. We thank the HPC staff for their support and access to the IRIS cluster at the University of Luxembourg and to the DECI resource Saga, based in Norway at Trondheim, with support from the PRACE aisbl (NN9914K). \textbf{Funding:} YSA thanks funding from NIH grant number R01GM118697 and is supported by The National Centre of Competence in Research (NCCR) Materials Revolution: Computational Design and Discovery of Novel Materials (MARVEL) of the Swiss National Science Foundation (SNSF). The work of PRN is supported by the \'UNKP-19-4-BME-418 New National Excellence Program of the Ministry for Innovation and Technology and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences. JGB acknowledges support from the Alexander von Humboldt foundation. MK is grateful for the financial support from the National Research, Development, and Innovation Office (NKFIH, Grant No. KKP126451) and the BME-Biotechnology FIKP grant of EMMI (BME FIKP-BIO).
AT acknowledges financial support from the European Research Council (ERC-CoG grant BeStMo).

\section*{Author contributions}
YSA and PRN contributed equally to this work. Major \textit{investigation} was conducted by PRN (performing the coupled cluster calculations) and YSA (performing the quantum Monte Carlo calculations). JGB performed the PBE0+D4 calculations and supporting validation calculations. DB performed the PBE0+MBD calculations. The work was \textit{conceptualized} by YSA and AT with additional contributions from JGB and PRN. \textit{Software development} to expand the applicability of the LNO-CCSD(T) method in this work was conducted by PRN and MK. PRN performed the \textit{formal analysis} to obtain uncertainty estimates from the LNO-CCSD(T) data. The \textit{original draft} of the manuscript was written by YSA and PRN. Additional \textit{review and editing} of the manuscript was undertaken by JGB, AT, and MK. \textit{Project administration} was led by YSA with contributions from JGB and AT. JGB and AT supervised the work.

\include{supplemental_material}
\end{document}

\section*{Supplemental Material}
\tableofcontents

\section{Details of CCSD(T) computations} \label{SMCC}
Definitions:
\begin{itemize}
\item Interaction energy: according to the Methods Section of the main text, the difference between the energy of the complex, consisting of all molecules, and the two subsystem energies, using unrelaxed structures for the latter. Notation: $E^{\mathrm{LNO-CCSD(T)}}_\mathrm{Y}$[aug-cc-pV$X$Z], where $Y$ refers to the level of local approximations ({\it Normal, Tight}, or {\it very\,Tight}) and $X$ labels the cardinal number of the basis set.
\item Counterpoise (CP) corrected interaction energy: the energies of the subsystems entering the interaction energy expression are evaluated using all single-particle basis functions of the complete complex, including the basis functions residing on the atomic positions of the other subsystem.
\item Local error bar: difference of the Tight and very Tight LNO-CCSD(T) results evaluated with the largest possible basis set.
\item Basis set incompleteness (BSI) error bar: maximum of two BSI error indicators, namely the difference of the CP corrected and uncorrected LNO-CCSD(T)/CBS(Q,5) interaction energies, and the difference of the CP corrected LNO-CCSD(T)/CBS(T,Q) and LNO-CCSD(T)/CBS(Q,5) interaction energies.
\end{itemize}

\subsection{Convergence of local approximations} \label{localconvg}
The LNO-CCSD(T) energy expression reformulates the CCSD(T) energy in terms of localized molecular orbitals (LMOs, ${i^\prime},{j^\prime}$)~\cite{LocalCC2,LaplaceT,LocalCC3}:
\begin{equation}\label{lnoccsdptE}
E^{\mathrm{LNO-CCSD(T)}}= \sum_{i^\prime} \left [ \delta E_{{i^\prime}}^\mathrm{CCSD(T)} + \Delta E_{{i^\prime}}^\mathrm{MP2} + \frac12 \sum\limits_{{j^\prime}}^\text{distant} \delta E_{{i^\prime}{j^\prime}} \right ].
\end{equation}
The correlation energy contribution of distant LMO pairs is obtained at the level of approximate MP2~\cite{LocalMP2,LocalCC3} (third term), while all remaining LMO pairs contribute to the CCSD(T)-level treatment (first term). For the latter, local natural orbitals (LNOs) are first constructed individually for each LMO at the MP2 level using a large domain of atomic and correlating (virtual) orbitals surrounding the LMO. The $\delta E_{{i^\prime}}^\mathrm{CCSD(T)}$ contribution is then computed in this compressed LNO orbital space, while the second term of Eq.
(\ref{lnoccsdptE}) represents a correction for the truncation of the LNO space at the MP2 level of theory.

The convergence of all approximations in LNO-CCSD(T) can be assessed via the use of pre-defined threshold sets, which provide systematic improvement simultaneously for all approximations of the LNO scheme~\cite{LocalCC,LocalCC2,LocaldRPA,LocalMP2,LaplaceT,LocalCC3,LocalCC4}. In this series of threshold sets ({\it Normal}, {\it Tight}, {\it very\,Tight}), the accuracy-determining cutoff parameters are tightened in an exponential manner~\cite{LocalCC4}. For instance, the {\it very\,Tight} set collects truncation thresholds an order of magnitude tighter than those of the {\it Normal} set, which is the default choice.

The convergence behavior of the LNO-CCSD(T) interaction energies separates the studied complexes (see Fig.~2 of the manuscript) into two groups. For GGG, PHE, CBH, and GCGC we observe rapid convergence toward the corresponding canonical CCSD(T) interaction energy, as indicated, \textit{e.g.}, by the local error estimates collected in Table~\ref{lccres}. The excellent convergence is apparent as the differences of the {\it Tight} and {\it very\,Tight} interaction energies are all in the 0.1--0.3 kcal mol$^{-1}$ range for these four complexes. This uncertainty range is highly satisfactory for the local approximations, considering that the estimated basis set incompleteness (BSI) errors for LNO-CCSD(T) are also comparable.
\begin{table}[H]
\footnotesize
\caption{Best converged [{\it Tight}--{\it very\,Tight} extrapolated LNO-CCSD(T)/CBS(Q,5) based] CCSD(T) interaction energies (IEs) and corresponding error estimates with full, half, and no CP correction. Our best estimates are highlighted in bold and are used throughout the manuscript.}
\label{lccres}
\begin{center}
\begin{tabular}{l rrrrrrrr}\hline\hline
System: & GGG & CBH & GCGC & C3A & C2C2PD & PHE & C3GC & C$_{60}$@[6]CPPA \\ \hline
IE, no CP & -1.98 & -10.93 & -13.38 & -16.79 & -20.36 & -25.34 & -28.67 & -41.60 \\
IE, CP & -2.20 & -11.10 & -13.80 & -16.28 & -20.84 & -25.38 & -28.73 & -41.89 \\
IE, half CP & \bf -2.09 & \bf -11.01 & \bf -13.59 & \bf -16.53 & \bf -20.60 & \bf -25.36 & \bf -28.70 & \bf -41.74 \\ \hline
Local error & 0.09 & 0.10 & 0.16 & 0.42 & 0.38 & 0.07 & 0.65 & 1.10 \\
BSI error & 0.11 & 0.06 & 0.22 & 0.24 & 0.24 & 0.12 & 0.19 & 0.36 \\
$\Delta_\mathrm{BSI}$ error & & & & 0.10 & & & 0.17 & 0.25 \\
Total error & 0.20 & 0.15 & 0.39 & 0.75 & 0.62 & 0.18 & 1.01 & 1.71 \\ \hline\hline
\end{tabular}
\end{center}
\end{table}
Consequently, we perform an even more thorough analysis of the local errors for the remaining four complexes, C2C2PD, C3A, C3GC, and C$_{60}$@[6]CPPA, where the local error estimate of the LNO-CCSD(T) interaction energies is larger than 0.3 kcal mol$^{-1}$. The convergence pattern of the C3A and C3GC interaction energies with the {\it Normal}, {\it Tight}, and {\it very\,Tight} settings is shown in panel b) of Fig.~\ref{fig3:convergence} of the main text. The monitored convergence is monotonic, and the remaining local error is about halved in each step, as observed for multiple systems previously~\cite{LocalCC4} as well as for the above four complexes. Additionally, the {\it Normal}--{\it Tight} and the {\it Tight}--{\it very\,Tight} based CCSD(T) estimates [data points with error bars in panel b) of Fig.~\ref{fig3:convergence} of the main text] agree closely, and the {\it Tight}--{\it very\,Tight} error bars are enveloped by the {\it Normal}--{\it Tight} ones.
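To make the threshold-set extrapolation and its accompanying error bar concrete, the following minimal Python sketch reproduces, up to rounding, the {\it Normal}--{\it Tight} entry of Table~\ref{c3gctab} and the M1-level local error bar quoted for C3GC in the main text. It rests on a simplifying assumption that we state explicitly: the extrapolated estimate is formed by adding half of the last correction gained from tightening the thresholds; the production scheme is that of Ref.~\cite{LocalCC4}, and this reduced form merely matches the tabulated values.
\begin{verbatim}
# Sketch of the LNO threshold extrapolation (assumption: add half of
# the last Normal -> Tight correction to the Tight result).
def lno_extrapolate(e_loose, e_tight):
    corr = e_tight - e_loose      # gain from tightening the thresholds
    e_est = e_tight + 0.5 * corr  # extrapolated CCSD(T) estimate
    err = abs(0.5 * corr)         # accompanying local error bar
    return e_est, err

# C3GC, aug-cc-pVTZ, no CP (C3GC table of the SM):
# Normal -45.57, Tight -41.15 kcal/mol
print(lno_extrapolate(-45.57, -41.15))  # ~(-38.94, 2.21)
\end{verbatim}
The tabulated {\it Normal}--{\it Tight} value is $-38.93$ kcal mol$^{-1}$, and the $\approx$2.2 kcal mol$^{-1}$ error bar coincides with the M1-level local error bar quoted for C3GC in the main text.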
The same trends can be observed in Fig.~\ref{coronene_basis} for the coronene dimer, where the LNO-CCSD(T) interaction energies are collected for all investigated basis sets and all three LNO threshold combinations. Again, the convergence patterns with the improving local approximations are parallel for all basis sets, the {\it Normal}--{\it Tight} and {\it Tight}--{\it very\,Tight} estimates agree within 0.5~kcal~mol$^{-1}$, and the {\it Tight}--{\it very\,Tight} error bars are 2--3 times narrower. Although, in the case of C$_{60}$@[6]CPPA, the {\it Normal} to {\it very\,Tight} series is only available with the aug-cc-pVTZ basis set, the 0.2 kcal mol$^{-1}$ agreement of the {\it Normal}--{\it Tight} and {\it Tight}--{\it very\,Tight} based CCSD(T) estimates and the threefold improvement provided by the {\it Tight}--{\it very\,Tight} error bar over the {\it Normal}--{\it Tight} one illustrate behavior analogous to the cases of C2C2PD, C3A, and C3GC.
\begin{figure}[ht!]
\includegraphics[width=0.9\textwidth]{coronene_basis_both.png}
\caption{Convergence of the LNO-CCSD(T) interaction energies for the C2C2PD complex with various basis sets and LNO threshold sets. The left (right) panel collects results obtained without (with) CP correction. The {\it Normal}--{\it Tight} and the {\it Tight}--{\it very\,Tight} extrapolated results are plotted with a smaller point size at the {\it Tight} and {\it very\,Tight} x-axis labels, respectively, and are accompanied by error bars indicating the uncertainty estimate of the local approximations at that level. For comparison, the best CCSD(T)/CBS estimate [{\it Tight}--{\it very\,Tight} extrapolated, half CP corrected LNO-CCSD(T)/CBS(Q,5)] result and its corresponding uncertainty estimate are depicted on both panels via the light blue error bars and dashed horizontal lines. Note the different y ranges of the two panels, as highlighted by the dashed blue lines connecting them. Also note that symbols corresponding to a given basis set are slightly shifted along the x-axis to improve the visibility of all data points.}
\label{coronene_basis}
\end{figure}

Considering briefly the corresponding absolute energies collected in Sect.~\ref{absenergies}, one can observe that the LNO-CCSD(T) correlation energies are also sufficiently well converged. For instance, for the C3GC complex the {\it very\,Tight} based results provide four (almost five) converged significant digits in the correlation energies, which then reliably translates into the observed ca.\ 0.65 kcal mol$^{-1}$ local uncertainty of the LNO-CCSD(T) interaction energies.

One can also consider internal convergence indicators besides the total energy. At the very Tight level, the $\pi$--$\pi$, $\pi$--$\sigma$, and also the majority of the $\sigma$--$\sigma$ orbital interactions benefit from the full CCSD(T) treatment for all complexes. Additionally, none of the remaining weak electronic interactions, contributing only about 0.01\% or less of the correlation energy, are neglected; they are, however, approximated via second-order pair energies~\cite{LocalCC3,LocalCC4}.
At the very Tight level, the orbital domains employed for the LNO-CCSD(T) treatment include all atoms, all atomic orbitals, and the majority of the correlating (virtual) space, spanned by, on average, 80--95\% of the orbitals of the entire complexes.

\subsection{Single-particle basis set convergence} \label{CCbasis}
Regarding the convergence of the interaction energies with respect to the single-particle basis set, we rely on approaches used routinely in wavefunction computations on small molecules. Dunning's correlation consistent basis sets~\cite{AUGPVXZ} employed here are designed to systematically approach the complete basis set (CBS) limit with a polynomial convergence rate, which can be exploited to reduce the remaining basis set incompleteness (BSI) error via basis set extrapolation approaches~\cite{KMSCFCBS,CorrConv}. We employ two-point formulae for extrapolation, labeled as CBS($X$,$X+1$), where $X$ refers to the cardinal number of the aug-cc-pV$X$Z basis set~\cite{AUGPVXZ} with $X$=T, Q, and 5.

For the proper description of the important medium- and long-range interactions and of the cross-polarization of the monomers in the complex, it is crucial to employ diffuse, i.e., spatially spread basis functions. The use of such diffuse basis functions, however, exacerbates the technical challenges characteristic of interaction energy computations with atom-centered Gaussian-type basis functions. As long as the basis set expansion of the monomers is not saturated completely, the basis functions residing on the atoms of one monomer can contribute to the description of the wavefunction components of the other monomer. The resulting basis set superposition error (BSSE) thus emerges from the unbalanced improvement of the basis set expansions of the monomers and the dimer and usually leads to artificially overestimated interaction strengths. The BSSE can be decreased significantly by counterpoise (CP) corrections~\cite{BSSE}, i.e., by using the entire dimer basis set also for the monomer calculations. Naturally, for small basis sets this approach might lead to a more saturated basis set expansion on the monomers and can potentially overcorrect the BSSE. In the case of aug-cc-pV$X$Z with $X$=T, Q, and 5, the CP correction decreases monotonically with increasing basis set size; a decreasing CP correction is thus an excellent indicator of basis set saturation, which we employ here.

To characterize the convergence of the LNO-CCSD(T) interaction energies in terms of basis set completeness, the maximum of two BSI error indicators is considered with the best available LNO threshold set. One of them is the difference of the CP corrected and uncorrected LNO-CCSD(T)/CBS(Q,5) interaction energies, and the other one is the difference of the CP corrected LNO-CCSD(T)/CBS(T,Q) and LNO-CCSD(T)/CBS(Q,5) interaction energies. The resulting BSI error bar values of Table~\ref{lccres} indicate that the basis set convergence behavior is rather homogeneous within each of the two four-membered groups identified above. For the GGG, GCGC, PHE, and CBH interaction energies, this BSI measure is 0.06--0.22 kcal mol$^{-1}$, while for the other four complexes an approximately twice as large uncertainty of 0.19--0.36 kcal mol$^{-1}$ is found. Compared to the similar or larger local error bars, we find this level of basis set convergence to be highly satisfactory. We again investigate more closely only the C3A, C3GC, C2C2PD, and C$_{60}$@[6]CPPA quartet.
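To orient the reader, the sketch below implements the two-point extrapolation and the BSI error measure just described. It is a simplified reconstruction rather than our production code: the inverse-cubic two-point formula~\cite{CorrConv} is assumed for the correlation part of the interaction energy only, the HF part is taken from the CBS(Q,5) extrapolation directly, and the rounded entries of Table~\ref{c3gctab} reproduce the tabulated CBS value only to within a few hundredths of a kcal mol$^{-1}$.
\begin{verbatim}
# Sketch: two-point CBS extrapolation (1/X^3 form, correlation part)
# and the BSI error measure defined above; kcal/mol throughout.
def cbs2(x, corr_x, corr_xp1):
    """Extrapolate correlation energies from cardinal numbers x, x+1."""
    y = x + 1
    return (y**3 * corr_xp1 - x**3 * corr_x) / (y**3 - x**3)

def bsi_error(cp_cbs_tq, cp_cbs_q5, nocp_cbs_q5):
    """Maximum of the two BSI indicators described in the text."""
    return max(abs(cp_cbs_q5 - nocp_cbs_q5), abs(cp_cbs_tq - cp_cbs_q5))

# C3GC, Tight, no CP: correlation interaction energy = total - HF
corr_q, corr_5 = -33.42 - 17.13, -31.92 - 17.43
print(cbs2(4, corr_q, corr_5) + 17.48)  # ~ -30.61; tabulated: -30.63
\end{verbatim}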
The convergence of the LNO-CCSD(T) interaction energies with improving basis sets for C3A and C3GC is shown in panel a) of Fig.~\ref{fig3:convergence} of the main text. The large BSSE obtained with the aug-cc-pVTZ and, to some extent, also with the aug-cc-pVQZ basis set is apparent for both complexes. Such a large BSSE also affects the extrapolation: the CBS(T,Q) results clearly overshoot the basis set limit due to the underestimation of the aug-cc-pVTZ result. The BSSE is significantly reduced by the CP correction. All CP corrected results (solid symbols) agree closely already at the aug-cc-pVTZ level. Most importantly, the CBS(Q,5) entries of both the CP corrected and uncorrected series match each other within a few tenths of a kcal mol$^{-1}$, providing a strong indication of basis set saturation. Upon inspection of the CP corrected and uncorrected interaction energies of Table~\ref{lccres}, this statement can be extended to the remaining six complexes as well.

The left and right panels of Fig.~\ref{coronene_basis} collect the CP uncorrected and corrected LNO-CCSD(T) interaction energies for the coronene dimer. The overbinding of the aug-cc-pVTZ and aug-cc-pVQZ results caused by the BSSE is again significant, close to 50 and 20\%, respectively. With the exception of the overshooting CBS(T,Q) extrapolation, the aug-cc-pV$X$Z energies, with $X$=T, Q, and 5, as well as the CBS(Q,5) extrapolation form a highly convincing, converging series of results both with and without CP correction. The CP corrected and uncorrected CBS results approach the region of convergence from opposite directions; hence their average, i.e., the half CP corrected result, appears to be the best estimate at the CBS(Q,5) level. Concerning CBS(T,Q), the fully CP corrected results are found to be more reliable due to the excessive BSSE obtained with aug-cc-pVTZ. Although we find this level of basis set convergence satisfactory, we invested additional effort to perform LNO-CCSD(T)/aug-cc-pV6Z computations for the GGG complex. The CP corrected CBS(Q,5) and CBS(5,6) results at the very Tight LNO-CCSD(T) level agree to within 0.1 kcal mol$^{-1}$, which is within the local uncertainty.

Finally, we assess the accuracy of the composite BSI correction approach employed for C3A, C3GC, and C$_{60}$@[6]CPPA. Due to the prohibitive computational costs, the most accurate interaction energies presented here for these three systems are obtained by adding a $\Delta_\text{BSI} = E^{\mathrm{LNO-CCSD(T)}}_\mathrm{Normal}[\mathrm{CBS(Q,5)}]- E^{\mathrm{LNO-CCSD(T)}}_\mathrm{Normal}$[aug-cc-pVTZ] BSI correction to the $E^{\mathrm{LNO-CCSD(T)}}_\mathrm{Tight-very \; Tight}$[aug-cc-pVTZ] interaction energies. This formula exploits the similarity of the local approximation convergence curves obtained with different basis sets, and it is numerically identical to $E^{\mathrm{LNO-CCSD(T)}}_\mathrm{Tight-very \; Tight}$[CBS(Q,5)] if the local convergence patterns are exactly parallel. To assess the quality of $\Delta_\text{BSI}$, we compared $\Delta_\text{BSI}$ to the analogous $\Delta_\text{BSI}^{\text{very Tight}}= E^{\mathrm{LNO-CCSD(T)}}_\mathrm{very \; Tight}[\mathrm{CBS(Q,5)}]-E^{\mathrm{LNO-CCSD(T)}}_\mathrm{very \; Tight}$[aug-cc-pVTZ] wherever the latter is available. For the system most similar to the above three, that is, for C2C2PD, the $|\Delta_\text{BSI}^{\text{very Tight}}-\Delta_\text{BSI}|$ value is about 0.12 kcal mol$^{-1}$.
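This unparallelity check can be made concrete with the no-CP C2C2PD entries of Table~\ref{c2tab}. The short sketch below, which is illustrative only and works from the rounded tabulated values, re-derives the quoted 0.12 kcal mol$^{-1}$ measure and shows that the composite best estimate closely matches the directly evaluated CBS(Q,5) value.
\begin{verbatim}
# Sketch of the composite BSI correction; no-CP C2C2PD entries in
# kcal/mol, taken (rounded) from the corresponding SM table.
def best_estimate(e_tvt_avtz, e_norm_cbs, e_norm_avtz):
    delta_bsi = e_norm_cbs - e_norm_avtz  # Normal-level BSI correction
    return e_tvt_avtz + delta_bsi

d_bsi_normal = -25.02 - (-33.84)  # Normal:     CBS(Q,5) - aug-cc-pVTZ
d_bsi_vtight = -20.74 - (-29.68)  # very Tight: CBS(Q,5) - aug-cc-pVTZ
print(round(abs(d_bsi_vtight - d_bsi_normal), 2))       # 0.12
print(round(best_estimate(-29.14, -25.02, -33.84), 2))  # -20.32
# compare with the directly evaluated Tight--very Tight value: -20.36
\end{verbatim}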
To account for the potentially size-extensive nature of this unparallelity error, the final $\Delta_\text{BSI}$ error estimates of Table~\ref{lccres} were obtained by scaling the 0.12 kcal mol$^{-1}$ value with the ratio of the interaction energies of the given complex and C2C2PD. The ``Total error'' values of Table~\ref{lccres} also include this third, $\Delta_\text{BSI}$-related uncertainty estimate for these three complexes.

Even more details can be learned by observing the convergence of the total HF and LNO-CCSD(T) correlation energies separately for the complexes and monomers (see Sect.~\ref{absenergies}). The HF total energies are converged to six significant digits at the CBS(Q,5) level, which translates into a highly convincing convergence level of 0.01 kcal mol$^{-1}$ for the HF part of the interaction energies. In other words, the BSI error estimates collected in Table~\ref{lccres} have a negligible HF and a sizable correlation contribution. Furthermore, the CP corrected interaction energies are converged to this 0.01 kcal mol$^{-1}$ level already with the smallest, aug-cc-pVTZ basis set. As expected, the CCSD(T) correlation energies tend significantly more slowly to the CBS limit with the cardinal number, but the agreement of the CBS(T,Q) and CBS(Q,5) values to four significant digits is again highly satisfactory. This shows that the BSI error estimates, which remain below 0.36 kcal mol$^{-1}$, just like the LNO error estimates, are consistent with the absolute energies and do not benefit from sizable error compensation. Additionally, the computation of the interaction energies according to the supermolecular approach [see Eq. (1) of the main text] is warranted, because the total energies are converged to the necessary number of significant digits.

\subsection{LNO-CCSD(T) energies plotted in Figs.~\ref{fig3:convergence} and \ref{coronene_basis}} \label{absenergies}
In Tables~\ref{c3gctab}--\ref{c2tab}, we collect the absolute HF, the LNO-CCSD(T) correlation, and the corresponding interaction energies using all possible combinations of settings ({\it Normal} to {\it very\,Tight}, aug-cc-pVTZ to aug-cc-pV5Z, the corresponding extrapolated energies, and various CP treatments) to document the numerical data plotted in Figs.~\ref{fig3:convergence} and~\ref{coronene_basis}. Additional analysis is provided in Sects.~\ref{localconvg} and~\ref{CCbasis}.
\begin{table}[h!]
\scriptsize
\caption{HF energies and LNO-CCSD(T) correlation energies [in a.u.], and corresponding interaction energies [$\Delta$E in kcal mol$^{-1}$], obtained for the C3GC dimer with all employed basis sets and LNO threshold combinations, including CBS and LNO extrapolations as well as fully (CP) and half (half CP) CP corrected results.$^\text{a}$ \label{c3gctab} }
\begin{tabular}{l|rrr|rr|ccc}
 & C3GC & circ. & GC & circ.
CP & GC CP & \;\;\;\;\; $\Delta$E \;\;\;\; & $\Delta$E CP & $\Delta$E half CP \\ \hline aug-cc-pVTZ & & & & & & & & \\ HF & -2988.683486 & -2056.327131 & -932.381768 & -2056.328307 & -932.383046 & 15.95 & 17.49 & 16.72 \\ Normal & -12.6945 & -8.8704 & -3.7261 & -8.8783 & -3.7319 & -45.57 & -35.46 & -40.52 \\ Tight & -12.6941 & -8.8746 & -3.7284 & -8.8824 & -3.7341 & -41.15 & -31.19 & -36.17 \\ very Tight & -12.6955 & -8.8771 & -3.7294 & -8.8848 & -3.7352 & -39.84 & -29.90 & -34.87 \\ Normal--Tight & -12.6938 & -8.8768 & -3.7296 & -8.8844 & -3.7353 & -38.93 & -29.05 & -33.99 \\ Tight--very Tight & -12.6962 & -8.8784 & -3.7299 & -8.8860 & -3.7357 & -39.19 & -29.25 & -34.22 \\ \hline aug-cc-pVQZ & & & & & & & & \\ HF & -2988.845437 & -2056.435660 & -932.437072 & -2056.435922 & -932.437370 & 17.13 & 17.48 & 17.30 \\ Normal & -13.2457 & -9.2508 & -3.9073 & -9.2529 & -3.9092 & -37.84 & -34.95 & -36.40 \\ Tight & -13.2452 & -9.2551 & -3.9096 & -9.2570 & -3.9115 & -33.42 & -30.67 & -32.05 \\ very Tight & -13.2467 & -9.2576 & -3.9106 & -9.2595 & -3.9125 & -32.11 & -29.38 & -30.75 \\ Normal--Tight & -13.2450 & -9.2572 & -3.9108 & -9.2591 & -3.9126 & -31.20 & -28.54 & -29.87 \\ Tight--very Tight & -13.2474 & -9.2588 & -3.9111 & -9.2607 & -3.9130 & -31.46 & -28.74 & -30.10 \\ \hline aug-cc-pV5Z & & & & & & & & \\ HF & -2988.878943 & -2056.457991 & -932.448732 & -2056.458027 & -932.448766 & 17.43 & 17.48 & 17.45 \\ Normal & -13.4437 & -9.3858 & -3.9722 & -9.3874 & -3.9728 & -36.35 & -34.95 & -35.65 \\ Tight & -13.4432 & -9.3901 & -3.9745 & -9.3914 & -3.9751 & -31.92 & -30.67 & -31.30 \\ very Tight & -13.4446 & -9.3926 & -3.9755 & -9.3939 & -3.9761 & -30.62 & -29.38 & -30.00 \\ Normal--Tight & -13.4430 & -9.3922 & -3.9757 & -9.3935 & -3.9762 & -29.71 & -28.54 & -29.12 \\ Tight--very Tight & -13.4453 & -9.3938 & -3.9760 & -9.3951 & -3.9766 & -29.97 & -28.74 & -29.35 \\ \hline CBS(T,Q) & & & & & & & & \\ HF & -2988.889784 & -2056.465378 & -932.452216 & -2056.465391 & -932.452245 & 17.45 & 17.48 & 17.46 \\ Normal & -13.6479 & -9.5285 & -4.0395 & -9.5263 & -4.0387 & -32.74 & -34.57 & -33.66 \\ Tight & -13.6475 & -9.5327 & -4.0418 & -9.5304 & -4.0409 & -28.31 & -30.30 & -29.31 \\ very Tight & -13.6489 & -9.5352 & -4.0428 & -9.5329 & -4.0420 & -27.01 & -29.01 & -28.01 \\ Normal--Tight & -13.6472 & -9.5349 & -4.0430 & -9.5325 & -4.0420 & -26.10 & -28.16 & -27.13 \\ Tight--very Tight & -13.6496 & -9.5365 & -4.0433 & -9.5341 & -4.0425 & -26.36 & -28.36 & -27.36 \\ \hline CBS(Q,5) & & & & & & & & \\ HF & -2988.884505 & -2056.461698 & -932.450668 & -2056.461696 & -932.450658 & 17.48 & 17.48 & 17.48 \\ Normal & -13.6514 & -9.5274 & -4.0403 & -9.5284 & -4.0395 & -35.05 & -34.94 & -35.00 \\ Tight & -13.6509 & -9.5317 & -4.0426 & -9.5325 & -4.0417 & -30.63 & -30.67 & -30.65 \\ very Tight & -13.6524 & -9.5342 & -4.0436 & -9.5349 & -4.0428 & -29.32 & -29.38 & -29.35 \\ Normal--Tight & -13.6507 & -9.5338 & -4.0438 & -9.5345 & -4.0429 & -28.42 & -28.53 & -28.47 \\ Tight--very Tight & -13.6531 & -9.5354 & -4.0441 & -9.5361 & -4.0433 & -28.67 & -28.73 & -28.70 \end{tabular} \\ \textsuperscript{\emph{a}} Tight and very Tight results obtained with aug-cc-pVQZ and aug-cc-pV5Z, as well as any derivatives thereof employ the additive BSI correction according to the Methods and ~\ref{CCbasis} Sections. \end{table} \begin{table}[h!] \scriptsize \caption{HF and LNO-CCSD(T) energies for systems of the C3A complex. See caption of~\ref{c3gctab} for more details. \label{c3atab} } \begin{tabular}{l|rrr|rr|ccc} & C3A & circ. 
& adenine & circ. CP & adenine CP & \;\;\;\;\; $\Delta$E \;\;\;\; & $\Delta$E CP & $\Delta$E half CP \\ \hline aug-cc-pVTZ & & & & & & & & \\ HF & -2520.996126 & -2056.327134 & -464.682371 & -2056.327896 & -464.683038 & 8.40 & 9.29 & 8.84 \\ Normal & -10.8264 & -8.8705 & -1.9001 & -8.8753 & -1.9033 & -26.65 & -20.77 & -23.71 \\ Tight & -10.8272 & -8.8746 & -1.9008 & -8.8792 & -1.9038 & -24.12 & -18.40 & -21.26 \\ very Tight & -10.8285 & -8.8770 & -1.9009 & -8.8818 & -1.9040 & -23.32 & -17.52 & -20.42 \\ Normal--Tight & -10.8276 & -8.8767 & -1.9011 & -8.8812 & -1.9041 & -22.86 & -17.22 & -20.04 \\ Tight--very Tight & -10.8291 & -8.8783 & -1.9010 & -8.8831 & -1.9040 & -22.92 & -17.08 & -20.00 \\ \hline aug-cc-pVQZ & & & & & & & & \\ HF & -2521.130595 & -2056.435693 & -464.709364 & -2056.435867 & -464.709527 & 9.08 & 9.29 & 9.18 \\ Normal & -11.2905 & -9.2505 & -1.9900 & -9.2518 & -1.9912 & -22.27 & -20.53 & -21.40 \\ Tight & -11.2912 & -9.2546 & -1.9907 & -9.2558 & -1.9917 & -19.74 & -18.16 & -18.95 \\ very Tight & -11.2925 & -9.2571 & -1.9908 & -9.2584 & -1.9918 & -18.94 & -17.27 & -18.11 \\ Normal--Tight & -11.2916 & -9.2567 & -1.9910 & -9.2577 & -1.9920 & -18.48 & -16.97 & -17.73 \\ Tight--very Tight & -11.2932 & -9.2583 & -1.9909 & -9.2597 & -1.9919 & -18.54 & -16.83 & -17.69 \\ \hline aug-cc-pV5Z & & & & & & & & \\ HF & -2521.158239 & -2056.458030 & -464.714962 & -2056.458051 & -464.714981 & 9.26 & 9.28 & 9.27 \\ Normal & -11.4564 & -9.3855 & -2.0221 & -9.3867 & -2.0225 & -21.34 & -20.26 & -20.80 \\ Tight & -11.4571 & -9.3897 & -2.0227 & -9.3907 & -2.0231 & -18.81 & -17.89 & -18.35 \\ very Tight & -11.4584 & -9.3921 & -2.0229 & -9.3933 & -2.0232 & -18.01 & -17.01 & -17.51 \\ Normal--Tight & -11.4575 & -9.3917 & -2.0231 & -9.3927 & -2.0234 & -17.54 & -16.70 & -17.12 \\ Tight--very Tight & -11.4591 & -9.3933 & -2.0229 & -9.3946 & -2.0233 & -17.61 & -16.56 & -17.09 \\ \hline CBS(T,Q) & & & & & & & & \\ HF & -2521.167416 & -2056.465420 & -464.716755 & -2056.465433 & -464.716780 & 9.26 & 9.29 & 9.27 \\ Normal & -11.6291 & -9.5278 & -2.0556 & -9.5266 & -2.0553 & -19.38 & -20.35 & -19.87 \\ Tight & -11.6299 & -9.5320 & -2.0563 & -9.5305 & -2.0559 & -16.86 & -17.98 & -17.42 \\ very Tight & -11.6312 & -9.5344 & -2.0564 & -9.5331 & -2.0560 & -16.06 & -17.09 & -16.57 \\ Normal--Tight & -11.6303 & -9.5340 & -2.0566 & -9.5325 & -2.0562 & -15.59 & -16.79 & -16.19 \\ Tight--very Tight & -11.6318 & -9.5356 & -2.0565 & -9.5344 & -2.0560 & -15.66 & -16.65 & -16.15 \\ \hline CBS(Q,5) & & & & & & & & \\ HF & -2521.162828 & -2056.461737 & -464.715891 & -2056.461734 & -464.715887 & 9.29 & 9.28 & 9.28 \\ Normal & -11.6304 & -9.5272 & -2.0557 & -9.5283 & -2.0555 & -20.52 & -19.97 & -20.25 \\ Tight & -11.6312 & -9.5313 & -2.0564 & -9.5323 & -2.0561 & -17.99 & -17.60 & -17.80 \\ very Tight & -11.6324 & -9.5338 & -2.0565 & -9.5349 & -2.0562 & -17.19 & -16.72 & -16.95 \\ Normal--Tight & -11.6315 & -9.5334 & -2.0567 & -9.5342 & -2.0563 & -16.73 & -16.42 & -16.57 \\ Tight--very Tight & -11.6331 & -9.5350 & -2.0566 & -9.5362 & -2.0562 & -16.79 & -16.28 & -16.53 \end{tabular} \end{table} \begin{table}[h!] \scriptsize \caption{HF and LNO-CCSD(T) energies for systems of the C2C2PD complex. 
See caption of \ref{c3gctab} for more details.$^\text{a}$ \label{c2tab} }
\begin{tabular}{l|rrr|ccc}
 & C2C2PD & \;\;\;\;\; coronene \;\;\;\;\; & coronene CP & \;\;\;\;\; $\Delta$E \;\;\;\; & $\Delta$E CP & $\Delta$E half CP \\ \hline
aug-cc-pVTZ & & & & & & \\
HF & -1832.429002 & -916.226325 & -916.227229 & 14.84 & 15.97 & 15.41 \\
Normal & -8.0606 & -3.9915 & -3.9968 & -33.84 & -26.01 & -29.92 \\
Tight & -8.0585 & -3.9929 & -3.9982 & -30.76 & -22.93 & -26.84 \\
very Tight & -8.0574 & -3.9932 & -3.9985 & -29.68 & -22.01 & -25.84 \\
Normal--Tight & -8.0574 & -3.9936 & -3.9989 & -29.22 & -21.39 & -25.30 \\
Tight--very Tight & -8.0569 & -3.9934 & -3.9986 & -29.14 & -21.55 & -25.34 \\ \hline
aug-cc-pVQZ & & & & & & \\
HF & -1832.526572 & -916.275811 & -916.276010 & 15.72 & 15.97 & 15.84 \\
Normal & -8.3972 & -4.1641 & -4.1655 & -27.58 & -25.62 & -26.60 \\
Tight & -8.3898 & -4.1628 & -4.1641 & -24.57 & -22.66 & -23.62 \\
very Tight & -8.3866 & -4.1621 & -4.1635 & -23.49 & -21.43 & -22.46 \\
Normal--Tight & -8.3861 & -4.1622 & -4.1635 & -23.07 & -21.19 & -22.13 \\
Tight--very Tight & -8.3850 & -4.1617 & -4.1632 & -22.94 & -20.82 & -21.88 \\ \hline
aug-cc-pV5Z & & & & & & \\
HF & -1832.546868 & -916.286130 & -916.286155 & 15.93 & 15.97 & 15.95 \\
Normal & -8.5176 & -4.2252 & -4.2258 & -26.24 & -25.44 & -25.84 \\
Tight & -8.5008 & -4.2194 & -4.2199 & -22.98 & -22.33 & -22.66 \\
very Tight & -8.4937 & -4.2166 & -4.2171 & -22.05 & -21.33 & -21.69 \\
Normal--Tight & -8.4924 & -4.2165 & -4.2169 & -21.35 & -20.78 & -21.07 \\
Tight--very Tight & -8.4901 & -4.2152 & -4.2158 & -21.59 & -20.83 & -21.21 \\ \hline
CBS(T,Q) & & & & & & \\
HF & -1832.553290 & -916.289361 & -916.289368 & 15.96 & 15.97 & 15.96 \\
Normal & -8.6429 & -4.2901 & -4.2886 & -23.42 & -25.34 & -24.38 \\
Tight & -8.6316 & -4.2868 & -4.2852 & -20.46 & -22.47 & -21.46 \\
very Tight & -8.6268 & -4.2852 & -4.2839 & -19.37 & -21.01 & -20.19 \\
Normal--Tight & -8.6260 & -4.2852 & -4.2835 & -18.98 & -21.03 & -20.00 \\
Tight--very Tight & -8.6243 & -4.2845 & -4.2833 & -18.83 & -20.28 & -19.55 \\ \hline
CBS(Q,5) & & & & & & \\
HF & -1832.550237 & -916.287842 & -916.287839 & 15.97 & 15.96 & 15.97 \\
Normal & -8.6439 & -4.2893 & -4.2891 & -25.02 & -25.25 & -25.13 \\
Tight & -8.6172 & -4.2788 & -4.2784 & -21.50 & -21.98 & -21.74 \\
very Tight & -8.6061 & -4.2738 & -4.2734 & -20.74 & -21.22 & -20.98 \\
Normal--Tight & -8.6039 & -4.2735 & -4.2730 & -19.74 & -20.35 & -20.05 \\
Tight--very Tight & -8.6005 & -4.2713 & -4.2709 & -20.36 & -20.84 & -20.60
\end{tabular} \\
\textsuperscript{\emph{a}} Tight and very Tight results obtained with aug-cc-pVQZ and aug-cc-pV5Z are also directly evaluated without relying on the additive BSI correction according to the Methods and \ref{CCbasis} Sections.
\end{table}
}
\pn{
\clearpage
\subsection{Core and higher-order correlation on top of CCSD(T)}
\label{coreTQ}
Core correlation effects are evaluated using the highly-optimized density-fitting (DF) MP2 implementation of the {\sc Mrcc} package~\cite{MRCC} with large basis sets. In that way, the error of the frozen core approximation can be determined independently of the local and BSI errors. The augmented weighted core-valence basis sets~\cite{wCVXZ12Row}, aug-cc-pwCV$X$Z with $X$=T and Q, were employed in combination with CP corrections.
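For reference, CBS($X$,$X{+}1$) in the tables above denotes a two-point extrapolation of the correlation energies; the tabulated values are consistent with the standard inverse-cubic formula, $E_{\rm CBS} = \left[ (X+1)^3 E_{X+1} - X^3 E_X \right] / \left[ (X+1)^3 - X^3 \right]$. The short sketch below (the HF component is extrapolated separately and is not reproduced here) recovers, for example, the Normal-setting CBS(T,Q) correlation energy of the C3GC complex in Table~\ref{c3gctab}:
\begin{verbatim}
def cbs_corr(e_x, e_y, x, y):
    # Two-point X^-3 extrapolation of correlation energies;
    # x and y = x + 1 are the cardinal numbers of the basis sets.
    return (y**3 * e_y - x**3 * e_x) / (y**3 - x**3)

# Normal-setting LNO-CCSD(T) correlation energies of C3GC (in E_h):
e_tz, e_qz = -12.6945, -13.2457      # aug-cc-pVTZ, aug-cc-pVQZ
print(cbs_corr(e_tz, e_qz, 3, 4))    # -13.6479, the CBS(T,Q) entry
\end{verbatim}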
The core-correlated DF-MP2 interaction energies of the C2C2PD complex, both with aug-cc-pwCVTZ and with aug-cc-pwCVQZ, as well as with CBS(T,Q), are consistently stronger by 0.22~kcal~mol$^{-1}$ (4.6~cal~mol$^{-1}$ per C atom) than those obtained using the frozen-core approach and otherwise identical settings. The missing higher-order electron correlation on top of the CCSD(T) treatment was estimated using the CCSDT(Q) scheme, which includes the infinite-order three-electron and perturbative four-electron contributions~\cite{Pert}. As conventional ninth-power-scaling CCSDT(Q) calculations are many orders of magnitude more expensive than CCSD(T), we relied on the analogous LNO approximations implemented also for CCSDT(Q)~\cite{LocalCC,LocalCC3} in the {\sc Mrcc} package~\cite{MRCC}. Considering that large basis set CCSDT(Q) computations are feasible only for systems with a few atoms, highly-converged LNO-CCSDT(Q) computations are still well beyond current capabilities even for the smallest GGG complex. Relying on looser LNO truncations and the moderate 6-31G**(0.25,0.15) basis set~\cite{L7set}, we were able to perform, for the GGG complex, by far the largest LNO-CCSDT(Q) calculation ever presented. The cumulative local and BSI error of the LNO-CCSDT(Q) interaction energies is estimated to be about 38\% at the corresponding LNO-CCSD(T)/6-31G**(0.25,0.15) level. Up to this uncertainty, the CCSDT contribution on top of CCSD(T) is found to be -0.013~kcal~mol$^{-1}$, while the (Q) correction on top of CCSDT is about -0.021~kcal~mol$^{-1}$. Clearly, both corrections are negligibly small compared to the deviation between CCSD(T) and FN-DMC. As the even higher-order CC terms are expected to be smaller still, it is unlikely that higher-order electron correlation effects missing from CCSD(T) could completely explain the disagreement between CCSD(T) and FN-DMC. The weakly-correlated character of the studied systems is also verified via the T1 diagnostic~\cite{t1measure}. The T1 measures obtained for the most complicated C3GC and C$_{60}$@[6]CPPA complexes are found to be at most 0.016 and 0.014, respectively. Considering that the T1 measure grows with the number of basis functions and that T1 values below 0.02 are considered weakly correlated already for very small systems~\cite{t1measure}, there appears to be no indication of even moderate static correlation. Moreover, neither the HF nor the CCSD iterations indicated any of the problems that usually emerge for strongly correlated systems. The sizes of the singles and doubles amplitudes were also monitored in all domain CCSD computations, indicating the validity of the single-reference approach; it is also reassuring that the LNO approximations were previously found to operate excellently even for moderately statically correlated species~\cite{polypyrrolBM}. The magnitude of the (T) correction compared to the full CCSD(T) interaction energy is also an informative measure of the static or dynamic nature of the correlation. This ratio is consistently around 18--20\% for all 8 complexes, which is well within the range observed for smaller and simpler systems, e.g., in the well-known S66 test set (ca. 13--24\%)~\cite{S66}.
}
\section{Details of Quantum Monte Carlo calculations}
\label{SMDMC}
The FN-DMC calculations mostly used 10 nodes with 28 cores each, and 14,000 walkers distributed across the cores (\textit{i.e.} 50 walkers per core). We used 20 nodes for the C$_{60}$@[6]CPPA complex and 28,000 walkers to reduce the stochastic error in a shorter time.
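The choice of walker population reflects the usual FN-DMC error scaling: the stochastic error decreases roughly as the inverse square root of the number of (statistically independent) samples, $\sim 1/\sqrt{N_{\rm walkers} N_{\rm steps}}$, so that doubling the walker count together with the core count reaches the same error bar in about half the wall time. A minimal illustration of this budgeting (the per-sample error value is purely hypothetical, and equilibration and autocorrelation are neglected):
\begin{verbatim}
import math

def steps_needed(target_err, err_per_sample, n_walkers):
    # smallest n_steps such that
    # err_per_sample / sqrt(n_walkers * n_steps) <= target_err
    return math.ceil((err_per_sample / target_err) ** 2 / n_walkers)

for n_w in (14_000, 28_000):
    print(n_w, steps_needed(5e-4, 2.0, n_w))
# 14000 1143
# 28000 572   -> half the steps for the same target error
\end{verbatim}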
Here we give further details on (i) the optimization of the Jastrow factor for the reported complexes, (ii) time-step and node-structure tests for the coronene dimer, and (iii) results of additional FN-DMC simulations of C$_{60}$@[6]CPPA. In addition, we report the total energies for the C3A and C3GC complexes in Table~\ref{tab:dmcabsenergy}.
\begin{table}[ht]
\centering
\caption{The total energy in Ha for each complex and its monomers, given with stochastic errors from the FN-DMC calculations, alongside a description of the DFT orbitals, plane-wave cut-offs (in Ry), FN-DMC time step (in a.u.), and algorithm (\textit{i.e.}, the standard locality approximation (LA) or the determinant localization approximation (DLA)). The resulting interaction energy (IE) is reported in the last column.}
\label{tab:dmcabsenergy}
\scriptsize
\begin{tabular}{ccccc}
\hline \hline
FN-DMC setup & C3GC & circ. & GC & IE (kcal mol$^{-1}$)\\ \hline
LDA orb/500Ry, 0.03 time-step/LA & $-485.928809\pm0.000934$ & $-317.320337\pm0.000485$ & $-168.569987\pm0.000209$ & $-24.2 \pm 0.7$ \\
PBE0 orb/400Ry, 0.03 time-step/LA & $-485.939114\pm0.001047$ & $-317.326450\pm0.000731$ & $-168.575218\pm0.000148$ & $-23.5 \pm 0.8$\\
PBE0 orb/400Ry, 0.01 time-step/LA & $-485.933462\pm0.000881$ & $-317.325540\pm0.000579$ & $-168.570123\pm0.000603$ & $-23.8 \pm 0.8$ \\ \hline
FN-DMC setup & C3A & circ. & adenine & IE (kcal mol$^{-1}$)\\ \hline
LDA orb/500Ry, 0.03 time-step/LA & $-398.351177\pm0.000648$ & $-317.320307\pm0.000492$ & $-81.006922\pm0.000162$ & $-15.0 \pm 0.5$ \\
LDA orb/500Ry, 0.01 time-step/LA & $-398.351280\pm0.000966$ & $-317.321748\pm0.000886$ & $-81.006670\pm0.000196$ & $-14.3 \pm 0.8$ \\
LDA orb/500Ry, 0.01 time-step/DLA & $-398.438355\pm0.000546$ & $-317.393190\pm0.000585$ & $-81.022893\pm0.000228$ & $-14.0 \pm 0.5$ \\
\hline \hline
\end{tabular}%
\end{table}
\subsection{Variational Monte Carlo Optimization of the Jastrow Factor}
Variational Monte Carlo (VMC) obeys the variational principle, allowing the initial Slater-Jastrow wavefunction to be optimized iteratively towards a lower energy. Importantly, the zero-variance principle ensures that the variance of the energy tends to zero as the exact energy of the system is approached. This is used in the varmin and varmin--linjas optimization algorithms in CASINO~\cite{Needs_2009} to optimize the variable parameters of the Jastrow factor. The Jastrow factor is system-dependent and is composed of explicit distance-dependent polynomial functions for inter-particle interactions, namely electron-electron (\textit{u}), electron-nucleus ($\chi$), and electron-electron-nucleus (\textit{f}) terms. For all complexes, we performed a term-by-term optimization using 24 parameters for \textit{u}, 12--14 parameters per element for $\chi$, and 8 parameters per element for \textit{f}. The resulting VMC energies and variances for the complexes are given in Table~\ref{tab:vmcopt}.
\begin{table}[ht]
\centering
\caption{The variance ($\sigma^2$) and VMC energy ($\textrm{E}_{\textrm{VMC}}$) in atomic units for each complex, as a result of optimizing the trial wavefunction.
The uncertainty is indicated in parentheses.}
\label{tab:vmcopt}
\small
\begin{tabular}{ccc}
\hline \hline
Complex & $\sigma^2$ & $\textrm{E}_{\textrm{VMC}}$ \\ \hline
CBH & 4.07(5) & -249.26(1) \\
C2C2PD & 4.76(6) & -285.769(8) \\
GGG & 5.18(3) & -290.195(5) \\
GCGC & 5.71(3) & -336.296(1) \\
PHE & 6.03(4) & -367.239(8) \\
C3A & 6.38(4) & -397.085(6) \\
C3GC & 7.82(5) & -484.474(7) \\
C$_{60}$@[6]CPPA & 10.54(5) & -624.140(6) \\
\hline \hline
\end{tabular}%
\end{table}
\subsection{Time-Step and Node-Structure Dependence of the Coronene Dimer}
It can be seen from Fig.~\ref{fig:convc2c2pd} that the FN-DMC interaction energy of C2C2PD is converged within the stochastic error bar (corresponding to 1 standard deviation) with respect to the FN-DMC time-step (from 0.003 to 0.03 a.u.). In addition, we computed PBE0 and PBE initial determinants (orbitals) from PWSCF, in order to assess the dependence of the FN-DMC interaction energy on the nodal structure. Fig.~\ref{fig:convc2c2pd} shows that the FN-DMC interaction energy is the same within the stochastic error bars of $\sim$0.5 kcal mol$^{-1}$ across the three nodal structures.
\begin{figure}[ht]
\includegraphics[width=0.45\textwidth]{conv_c2c2pd_updated.pdf}
\caption{\label{fig:convc2c2pd} FN-DMC interaction energy of C2C2PD (coronene dimer) with 0.003, 0.01 and 0.03 a.u. time-steps. Different nodal structures from LDA (black circle), PBE (blue triangle), and PBE0 (red square) initial orbitals are reported using the 0.01 a.u. time-step; these are slightly offset along the x-axis for clarity. }
\end{figure}
\subsection{The GGG Trimer and Coronene Dimer with the Determinant Localization Approximation}
Using non-local pseudopotentials in FN-DMC requires an approximation for the evaluation of the local energy -- not to be confused with the type of local approximations, such as LNO, made in local CCSD(T) methods. The recent determinant localization approximation (DLA) introduced by Zen \textit{et al.}~\cite{Zen2019} has some advantages over the pre-existing standard algorithms: the locality approximation (LA)~\cite{Mitas1991} and the T-move scheme~\cite{Casula2006}. The DLA FN-DMC energies are less sensitive to the Jastrow factor that is used in combination with pseudopotentials at larger time-steps. This enables better overall convergence with the time-step in FN-DMC, and the DLA method is also more numerically stable than the LA. We tested the use of the DLA method for the GGG trimer and the coronene dimer and present the results in Table~\ref{tab:dla}.
\begin{table}[ht]
\centering
\caption{Comparison of the standard LA to the DLA method in the GGG and C2C2PD complexes.}
\label{tab:dla}
\small
\begin{tabular}{cccc}
\hline \hline
Complex & Approximation & Time-step & IE (kcal mol$^{-1}$) \\ \hline
GGG & standard LA & 0.03 & $1.5\pm0.3$ \\
GGG & DLA & 0.03 & $1.4\pm0.2$ \\
C2C2PD & standard LA & 0.03 & $-18.1\pm0.4$ \\
C2C2PD & DLA & 0.01 & $-17.4\pm0.5$ \\
\hline \hline
\end{tabular}%
\end{table}
The interaction energies of the GGG and C2C2PD complexes remain in agreement, within the one-standard-deviation stochastic errors, between the DLA and the standard LA algorithms. These results support the conclusion that the FN-DMC interaction energies are converged with respect to the time-step and the employed Jastrow factors.
\subsection{FN-DMC with T-move on the C$_{60}$@[6]CPPA Complex}
The C$_{60}$@[6]CPPA complex proved to be more challenging to compute with FN-DMC, due to numerical instabilities when using the locality approximation.
This was alleviated by using the DLA method, and separately by using the T-move scheme in place of the locality approximation. The T-move scheme reinstates the variational form of the energy, but the energies with this approximation are more time-step dependent, as can be seen in Fig.~\ref{fig:convc606cppa}. Linear extrapolations to the zero time-step limit yield $-31.14 \pm 2.57$ kcal mol$^{-1}$ using LDA orbitals and $-29.16 \pm 2.33$ kcal mol$^{-1}$ using PBE0 orbitals. Moreover, we show the FN-DMC interaction energy obtained with the DLA at 0.03 and 0.01 a.u. time-steps. In this way, the independence of the interaction energy from the nodal structure and the FN-DMC algorithm is established.
\begin{figure}[ht]
\includegraphics[width=0.5\textwidth]{dmc_s12lext.pdf}
\caption{\label{fig:convc606cppa} FN-DMC interaction energy of the C$_{60}$@[6]CPPA complex using two algorithms. T-move interaction energies at 0.01 and 0.02 a.u. time-steps are shown for LDA (black stars) and PBE0 orbitals (red stars). The linear extrapolation to zero time-step for each set is indicated by the dashed lines, with the result in circles. The errors on the zero-time-step FN-DMC interaction energies are propagated through the extrapolation. For comparison, results with the DLA method are shown as blue squares. The DLA FN-DMC interaction energy at 0.01 a.u. is slightly offset along the x-axis for clarity. }
\end{figure}
\pn{
\section{Computational requirements of LNO-CCSD(T) and FN-DMC}
\label{secmemtime}
CPU core time and minimal memory requirements are collected in Table \ref{memtime} for representative examples: the CBH and C3GC complexes. The very Tight LNO-CCSD(T)/aug-cc-pVTZ computation for C3GC was found to have the largest CPU time requirement among all LNO-CCSD(T) computations. Compared to that, it is interesting to note the case of CBH, which contains even more atoms and almost as many AOs as C3GC. However, due to the relatively low complexity of the wavefunction of CBH, its CPU time demand is found to be up to 100 times smaller than that of C3GC when using the same settings. Unfortunately, the computations were scattered across multiple clusters and CPU types, preventing a straightforward comparison of runtimes with various settings. For that reason, CPU core times and the corresponding CPU types are reported. With that in mind, we find similar trends as in our previous report~\cite{LocalCC4}. For instance, the memory requirement of LNO-CCSD(T) is exceptionally small compared to alternative CCSD(T) implementations, which was essential for the C3A, C3GC, and C$_{60}$@[6]CPPA computations. Moreover, about 3--5 times more operations were performed when using one step tighter LNO settings, just as in our previous computations~\cite{LocalCC4}; this trend is highly useful for estimating the feasibility of analogous computations. It is also important to note that the CPU and memory requirements grow much more slowly with the basis set size than with conventional CCSD(T), where the operation count and data storage increase by about a factor of 10 with one step in the cardinal number hierarchy (e.g., from aug-cc-pVTZ to aug-cc-pVQZ). Compared to LNO-CCSD(T), the FN-DMC runtimes depend less on the chemical composition and can be estimated more accurately based on the number of computed particles. The notably small memory requirement and the ease of efficient parallelization are also apparent benefits of the FN-DMC method.
Moreover, the computational cost of FN-DMC does not change as steeply with various input nodal structures and time-steps, allowing these effects to be estimated at a manageable additional computational cost.
}
\begin{table}[h!]
\scriptsize
\caption{CPU core time (i.e., [number of nodes]*[cores per node]*[wall time in years]) and minimum memory [in GB] requirements of the LNO-CCSD(T) and FN-DMC calculations for the CBH and C3GC complexes with all settings.}
\label{memtime}
\begin{tabular}{ll|cc|cc}
Complex: & & \multicolumn{2}{c}{CBH} & \multicolumn{2}{c}{C3GC} \\ \hline
No. of atoms: & & \multicolumn{2}{c}{112} & \multicolumn{2}{c}{101} \\
 & & \;\;\; memory [GB] \;\;\; & \;\;\; time [core year] \;\;\; & \;\;\; memory [GB] \;\;\; & \;\;\; time [core year] \;\;\; \\ \hline
LNO-CCSD(T)&AOs in aug-cc-pVTZ: & \multicolumn{2}{c}{3404} & \multicolumn{2}{c}{4002} \\
&Normal & 3$^\text{\#}$ & 0.02$^\text{a}$ & 70$^\text{\#}$ & 0.4$^\text{a}$ \\
&Tight & 7$^\text{\#}$ & 0.08$^\text{a}$ & 73$^\text{\#}$ & 3.6$^\text{a,b,e,*}$ \\
&very Tight & 12$^\text{\#}$ & 0.2$^\text{a}$ & 200 & 20$^\text{c,g,*}$ \\
&AOs in aug-cc-pVQZ: & \multicolumn{2}{c}{6376} & \multicolumn{2}{c}{7128} \\
&Normal & 11$^\text{\#}$ & 0.04$^\text{a}$ & 110$^\text{\#}$ & 1.6$^\text{a,*}$ \\
&Tight & 24$^\text{\#}$ & 0.13$^\text{a}$ & - & - \\
&very Tight & 29$^\text{\#}$ & 0.4$^\text{d}$ & - & - \\
&AOs in aug-cc-pV5Z: & \multicolumn{2}{c}{10652} & \multicolumn{2}{c}{11511} \\
&Normal & 32$^\text{\#}$ & 0.1$^\text{e}$ & 63 & 2.7$^\text{f,*}$ \\
&Tight & 50$^\text{\#}$ & 0.3$^\text{h}$ & - & - \\
&very Tight & 86$^\text{\#}$ & 1.2$^\text{f,*}$ & - & - \\ \hline
FN-DMC & LA/0.03 a.u. & 7$^{\dagger}$ & 2.5$^\text{c,e}$ & 15$^{\dagger}$ & 3.3$^\text{c,e}$ \\
\end{tabular} \\
\textsuperscript{\emph{a}} Intel Xeon E5-2670 v3 2.3 GHz
\textsuperscript{\emph{b}} Intel Xeon E5-1650 v2 3.5 GHz
\textsuperscript{\emph{c}} Intel Xeon Gold 6132 2.3 GHz
\textsuperscript{\emph{d}} Intel Xeon E5-2680 v2 2.8 GHz
\textsuperscript{\emph{e}} Intel Xeon E5-2680 v4 2.4 GHz
\textsuperscript{\emph{f}} Intel Xeon E5-2680 v3 2.5 GHz
\textsuperscript{\emph{g}} Intel Xeon Platinum 8180M 2.3 GHz
\textsuperscript{\emph{h}} Intel Xeon Gold 6138 1.9 GHz
\textsuperscript{\emph{*}} Estimated CPU time due to the large number of restarts.
\textsuperscript{\emph{\#}} Fully integral-direct integral transformation; a minimal-memory algorithm would require up to about 3--4 times less memory.
\textsuperscript{\emph{$\dagger$}} Maximum shared memory used, mainly determined by the size of the wavefunction file.
\end{table}
\section{Details of DFT calculations}
The PBE0+MBD calculations were performed using FHI-aims v.190225 with all-electron numerical basis sets, with ``tight" defaults and tier 2 basis functions for all elements. The total energy threshold for self-consistent convergence was set to $10^{-7}$ eV. Spin and relativistic effects have not been included. London dispersion energies from the D4 model are computed with the {\sc dftd4} standalone program using the electronegativity equilibration charges (EEQ) and include a coupled-dipole-based many-body dispersion correction (D4(EEQ)-MBD)~\cite{dftd4}. For all structures, the same geometries were used as in the benchmark calculations.
\section{Geometry of the L7 and the C$_{60}$@[6]CPPA complexes }
The structures and fragment definitions in Ref.~\citenum{L7set} were used for the L7 calculations.
For C$_{60}$@[6]CPPA, a C$_{70}$@[6]CPPA geometry from Ref.~\citenum{Hermann2017} was modified, by replacing C$_{70}$ with C$_{60}$ and the complex was symmetrized to D$_\text{3d}$ point group. The high-symmetry structure allows more efficient calculations with LNO-CCSD(T) with a speedup proportional to the rank of the point group~\cite{LocalCC,LocalCC3}. The stability of this complex was assessed by relaxing the geometry whilst retaining the symmetry group, at the DFT level (B97-3c exchange-correlation functional). The interaction strength increases by less than 0.1 kcal mol$^{-1}$ with respect to the unrelaxed structure. Relaxing the C$_{60}$ and [6]CPPA fragments reduces the interaction strength by 0.9 kcal mol$^{-1}$. The C$_{60}$@[6]CPPA Cartesian coordinates used in LNO-CCSD(T) and FN-DMC calculations is given here. \begin{center} C -0.72650728 -1.22225849 -3.24715547 \\ C 0.72650728 -1.22225849 -3.24715547 \\ C -1.42176054 -0.01804451 -3.24715547 \\ C 1.42176054 -0.01804451 -3.24715547 \\ C 2.59727407 0.14217202 -2.40825772 \\ C 1.17551245 -2.32039134 -2.40825772 \\ C 2.30045705 -2.16706760 -1.60544793 \\ C 3.02696412 -0.90872044 -1.60544793 \\ C -3.02696412 -0.90872044 -1.60544793 \\ C -2.30045705 -2.16706760 -1.60544793 \\ C -2.59727407 0.14217202 -2.40825772 \\ C -1.17551245 -2.32039134 -2.40825772 \\ C 0.00000000 -2.99907290 -1.88979043 \\ C -2.30045914 -2.68553418 -0.24808191 \\ C -1.17551125 -3.33502315 0.24808191 \\ C 0.00000000 -3.49523729 -0.59081634 \\ C -3.02696429 1.74761865 -0.59081634 \\ C -3.47597040 0.64948897 0.24808191 \\ C -2.59727332 1.49953645 -1.88979043 \\ C -3.47597040 -0.64948897 -0.24808191 \\ C -3.02696429 -1.74761865 0.59081634 \\ C -3.02696412 0.90872044 1.60544793 \\ C -2.59727407 -0.14217202 2.40825772 \\ C -2.59727332 -1.49953645 1.88979043 \\ C -0.72650707 3.07578804 -1.60544793 \\ C -1.17551125 3.33502315 -0.24808191 \\ C -1.42176162 2.17821932 -2.40825772 \\ C -2.30045914 2.68553418 0.24808191 \\ C -2.30045705 2.16706760 1.60544793 \\ C -0.00000000 3.49523729 0.59081634 \\ C -0.00000000 2.99907290 1.88979043 \\ C -1.17551245 2.32039134 2.40825772 \\ C 0.69525326 1.24030300 -3.24715547 \\ C 1.42176162 2.17821932 -2.40825772 \\ C -0.69525326 1.24030300 -3.24715547 \\ C 0.72650707 3.07578804 -1.60544793 \\ C 1.17551125 3.33502315 -0.24808191 \\ C 2.59727332 1.49953645 -1.88979043 \\ C 3.02696429 1.74761865 -0.59081634 \\ C 2.30045914 2.68553418 0.24808191 \\ C 0.72650728 1.22225849 3.24715547 \\ C 1.17551245 2.32039134 2.40825772 \\ C 2.30045705 2.16706760 1.60544793 \\ C 1.42176054 0.01804451 3.24715547 \\ C -0.69525326 -1.24030300 3.24715547 \\ C -1.42176054 0.01804451 3.24715547 \\ C -0.72650728 1.22225849 3.24715547 \\ C 0.69525326 -1.24030300 3.24715547 \\ C 0.72650707 -3.07578804 1.60544793 \\ C -0.72650707 -3.07578804 1.60544793 \\ C -1.42176162 -2.17821932 2.40825772 \\ C 1.42176162 -2.17821932 2.40825772 \\ C 3.02696429 -1.74761865 0.59081634 \\ C 2.30045914 -2.68553418 -0.24808191 \\ C 1.17551125 -3.33502315 0.24808191 \\ C 2.59727332 -1.49953645 1.88979043 \\ C 3.02696412 0.90872044 1.60544793 \\ C 3.47597040 0.64948897 0.24808191 \\ C 3.47597040 -0.64948897 -0.24808191 \\ C 2.59727407 -0.14217202 2.40825772 \\ C -4.43498968 4.84818396 -0.00326334 \\ C -5.43089278 3.84285784 -0.00366147 \\ C -3.85193459 5.28074380 -1.21899357 \\ C -3.85171966 5.27950986 1.21280412 \\ C -2.64632983 5.97544200 -1.21280412 \\ C -2.64729099 5.97624511 1.21899357 \\ C -1.98115563 6.26490571 0.00326334 \\ C -6.04345890 2.78186219 -0.00366147 \\ H -4.31742848 
4.99255327 -2.16174586 \\ H -4.31701338 4.99032575 2.15534928 \\ H -2.16324219 6.23380613 -2.15534928 \\ H -2.16496372 6.23527938 2.16174586 \\ C 1.98115563 6.26490571 0.00326334 \\ C 0.61256612 6.62472003 0.00366147 \\ C 2.64632983 5.97544200 -1.21280412 \\ C 2.64729099 5.97624511 1.21899357 \\ C 3.85193459 5.28074380 -1.21899357 \\ C 3.85171966 5.27950986 1.21280412 \\ C 4.43498968 4.84818396 -0.00326334 \\ C -0.61256612 6.62472003 0.00366147 \\ H 2.16324219 6.23380613 -2.15534928 \\ H 2.16496372 6.23527938 2.16174586 \\ H 4.31742848 4.99255327 -2.16174586 \\ H 4.31701338 4.99032575 2.15534928 \\ C 6.41614531 1.41672175 -0.00326334 \\ C 6.04345890 2.78186219 -0.00366147 \\ C 6.49922558 0.69550131 -1.21899357 \\ C 6.49804949 0.69593214 1.21280412 \\ C 6.49804949 -0.69593214 -1.21280412 \\ C 6.49922558 -0.69550131 1.21899357 \\ C 6.41614531 -1.41672175 0.00326334 \\ C 5.43089278 3.84285784 -0.00366147 \\ H 6.48239220 1.24272611 -2.16174586 \\ H 6.48025557 1.24348038 2.15534928 \\ H 6.48025557 -1.24348038 -2.15534928 \\ H 6.48239220 -1.24272611 2.16174586 \\ C 4.43498968 -4.84818396 0.00326334 \\ C 5.43089278 -3.84285784 0.00366147 \\ C 3.85171966 -5.27950986 -1.21280412 \\ C 3.85193459 -5.28074380 1.21899357 \\ C 2.64729099 -5.97624511 -1.21899357 \\ C 2.64632983 -5.97544200 1.21280412 \\ C 1.98115563 -6.26490571 -0.00326334 \\ C 6.04345890 -2.78186219 0.00366147 \\ H 4.31701338 -4.99032575 -2.15534928 \\ H 4.31742848 -4.99255327 2.16174586 \\ H 2.16496372 -6.23527938 -2.16174586 \\ H 2.16324219 -6.23380613 2.15534928 \\ C -1.98115563 -6.26490571 -0.00326334 \\ C -0.61256612 -6.62472003 -0.00366147 \\ C -2.64729099 -5.97624511 -1.21899357 \\ C -2.64632983 -5.97544200 1.21280412 \\ C -3.85171966 -5.27950986 -1.21280412 \\ C -3.85193459 -5.28074380 1.21899357 \\ C -4.43498968 -4.84818396 0.00326334 \\ C 0.61256612 -6.62472003 -0.00366147 \\ H -2.16496372 -6.23527938 -2.16174586 \\ H -2.16324219 -6.23380613 2.15534928 \\ H -4.31701338 -4.99032575 -2.15534928 \\ H -4.31742848 -4.99255327 2.16174586 \\ C -6.41614531 -1.41672175 0.00326334 \\ C -6.04345890 -2.78186219 0.00366147 \\ C -6.49804949 -0.69593214 -1.21280412 \\ C -6.49922558 -0.69550131 1.21899357 \\ C -6.49922558 0.69550131 -1.21899357 \\ C -6.49804949 0.69593214 1.21280412 \\ C -6.41614531 1.41672175 -0.00326334 \\ C -5.43089278 -3.84285784 0.00366147 \\ H -6.48025557 -1.24348038 -2.15534928 \\ H -6.48239220 -1.24272611 2.16174586 \\ H -6.48239220 1.24272611 -2.16174586 \\ H -6.48025557 1.24348038 2.15534928 \\ \end{center}
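Since the structure above was symmetrized to D$_\text{3d}$, the listed coordinates should map onto themselves under the corresponding symmetry operations. A minimal sketch of such a consistency check is given below (assuming, as is conventional, that the principal $C_3$ axis coincides with the $z$ axis; for brevity the check ignores element labels, which a complete test would also match):
\begin{verbatim}
import numpy as np

def is_c3_symmetric(coords, tol=1e-5):
    # Rotate all atoms by 120 degrees about z and verify that every
    # rotated position coincides with some original position.
    c, s = np.cos(2 * np.pi / 3), np.sin(2 * np.pi / 3)
    rot = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    rotated = coords @ rot.T
    dists = np.linalg.norm(rotated[:, None, :] - coords[None, :, :],
                           axis=-1)
    return bool(np.all(dists.min(axis=1) < tol))

# coords: (N, 3) array parsed from the Cartesian coordinates above
\end{verbatim}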
\section{Introduction} \label{sec:intro} The coincident detection of gravitational and electromagnetic radiation from GW170817 \cite{GW_discovery} has allowed us to observe directly the late inspiral of binary neutron stars together with a short gamma-ray burst (sGRB) and a kilonova launched in its aftermath. The interpretation of these observations requires theoretical models, which can be provided by numerical relativity simulations (see, e.g., \cite{ShiFHKKST17,RadPZB18,RezMW18,RuiST18,GieDU19}, as well as \cite{Pas17,BaiR17} for reviews). Since a host of different physical processes and phenomena -- including relativistic magnetohydrodynamics, nuclear reactions and radiation transport (both electromagnetic and neutrino) -- play important roles in the merger of binary neutron stars and the launch of sGRBs and kilonovae, all of these processes must also be accounted for in the numerical simulations. While several current codes can handle at least some of these processes, and can evolve the remnant of neutron star mergers for at least several tens of dynamical timescales, i.e.~tens of milliseconds, the complete modeling of secular processes requires even longer evolution times, posing a formidable challenge for most codes (see, e.g., \cite{RuiLPS16,RadPBZ18,CioKKG19,DePFFLPS20,NedBRDEPSSL20}). On the other hand, the radiation will propagate radially at large distances, where it is measured, and the remnant will rather quickly settle down into an approximately axisymmetric configuration. These are just some motivations for considering algorithms in curvilinear coordinates, which can take optimal advantage of such symmetries, be they exact or approximate. One disadvantage of curvilinear coordinates is the appearance of coordinate singularities. It turns out, however, that these do not affect the stability of suitable evolution schemes as long as all singular terms are handled analytically. The latter can be accomplished with the help of a reference-metric formulation (see \cite{BonGGN04,ShiUF04,Bro09,Gou12,MonC12}) together with a proper rescaling of all tensorial quantities. Such an approach was first demonstrated for Einstein's equations in spherical polar coordinates in full 3+1 dimensions by \cite{BauMCM13}, and very similar methods have now been implemented in the {\tt Einstein Toolkit} (also in spherical polar coordinates \cite{MewZCREB18}), the {\tt NRPy++} code (in more general classes of curvilinear coordinates \cite{RucEB18}), as well as the {\tt SpEC} code (in cylindrical coordinates \cite{Jesetal20}). When coupling matter fields to Einstein's equations in this approach, it is advantageous to cast these matter fields in a reference-metric formulation as well. This has been demonstrated for hydrodynamics in \cite{MonBM14} (hereafter MBM, see also \cite{BauMM15}), magnetohydrodynamics (see \cite{MewZCBEAC20}), as well as electrodynamics (see \cite{BauGH19}), but not yet for radiation hydrodynamics -- which is the subject of this paper. An exact description of radiation transfer entails solving the Boltzmann equation for the specific (energy-dependent) radiation intensity (see, e.g., \cite{Lin66,MihWM84,BauS10}), which, without any approximation or simplifying assumptions, is well beyond the reach of current numerical codes. As an approximation, local effects of radiative cooling can be estimated with a leakage scheme (see, e.g.,~\cite{RufJS96,BauJKST96,RosL03,SekKKS11,GalKRF13,PerCK16,RadGLROR16,RadPHFBR18,GizORPCN19}).
Radiation transport can be approximated by evolving the lowest angular moments of the intensity only, and expressing higher-order moments with the help of suitable closure relations (see \cite{Tho81}). In flux-limited diffusion schemes, only the zeroth-order moment (the radiation energy density) is evolved (see, e.g., \cite{Pom81,BurL86,Sha89,BauST96,RahJJ19,Bruetal20} and references therein). In a two-moment (so-called M1) scheme, the first-order moment (the radiation momentum density, or flux) is evolved together with the zeroth-order moment (e.g.~\cite{RezM94,FarLLS08,MueJD10,ShiKSS11,CarEM13,SadNTZ13,WanSNKKS14,JusOJ15,Oco15,KurTK16,FouDKNPS18,SkiDBRV19,VinFDHKPS20,WeiOR20}). In general, the moments depend on energy in addition to location and time, but in so-called ``gray" treatments this dependence is suppressed by integrating over the energy. In this paper we retrace the derivation of such a gray, two-moment formalism using a reference-metric framework, and present numerical examples. Specifically, we follow the treatment of \cite{FarLLS08}, hereafter FLLS, in Section \ref{sec:eqs}, focussing on the optically-thick regime, but adopt a reference-metric formalism in order to bring the equations into a form that is suitable for implementation in curvilinear coordinates. Unlike in some previous treatments we also use a systematic 3+1 decomposition of all tensorial quantities, thereby avoiding the potential for confusion between tensors of different types. In Section \ref{sec:numerics} we demonstrate the feasibility of solving the equations in spherical polar coordinates by presenting numerical results for two test problems, namely planar radiation hydrodynamics shock problems in flat spacetimes, and Oppenheimer-Snyder collapse to a black hole with radiation. We carefully analyze the early transient behavior of the radiative quantities for the latter, and compare the subsequent radiation field with an approximate analytical solution derived within the relativistic diffusion approximation \cite{Sha89}. Throughout this paper we adopt geometrized units with $G = 1 = c$. We denote spacetime indices with $a, b, c \ldots$ and spatial indices with $i, j, k \ldots$. \section{Equations} \label{sec:eqs} \subsection{Preliminaries} \label{sec:prelims} We assume that the spacetime $M$ has been foliated by a family of spatial slices that coincide with level surfaces of a coordinate time $t$. The spacetime line element can then be written as \begin{eqnarray} \label{line_element} ds^2 & = & g_{ab} dx^a dx^b \nonumber \\ & = & - \alpha^2 dt^2 + \gamma_{ij} (dx^i + \beta^i dt)(dx^j + \beta^j dt), \end{eqnarray} where $g_{ab}$ is the spacetime metric, $\alpha$ the lapse function, $\beta^i$ the shift vector, and \begin{equation} \label{gamma_def} \gamma_{ab} = g_{ab} + n_a n_b \end{equation} the induced spatial metric on the spatial slices. In the last expression, $n^a$ is the future-pointing normal vector on the spatial slices, which we may express as \begin{equation} \label{n} n^a = \alpha^{-1} (1, - \beta^i) \mbox{~~~or~~~} n_a = (- \alpha, 0,0,0). \end{equation} For applications in curvilinear coordinates it is convenient to introduce a spatial reference metric $\hat \gamma_{ij}$ (see, e.g., \cite{BonGGN04,ShiUF04,Bro09,Gou12}). In numerical applications it is most natural to choose this reference metric to be the flat metric in whatever coordinate system is used -- in our code, for example, it is taken to be the flat metric in spherical polar coordinates. This assumption is not necessary, however. 
In our treatment below we will assume only that $\hat \gamma_{ij}$ is independent of time (which could also be relaxed, for example for applications in cosmology), and will present an analytical example with a curved reference metric in Section \ref{sec:numerics_OS}. The Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation of Einstein's equations \cite{NakOK87,ShiN95,BauS98}, governing the evolution of the gravitational fields, has been expressed in terms of a reference metric by \cite{Bro09,Gou12}, and implemented numerically, assuming spherical polar coordinates, in \cite{MonC12,BauMCM13}. In the following we also assume the presence of fluid matter. The equations governing the fluid follow from conservation of rest mass, \begin{equation} \label{bar_cons} \nabla_a (\rho_0 u^a) = 0, \end{equation} and conservation of total stress-energy, \begin{equation} \label{T_cons} \nabla_a T^{ab} = \nabla_a (T^{ab}_{\rm fluid} + R^{ab} ) = 0. \end{equation} Here $\rho_0$ is the fluid's rest-mass density, $u^a$ the fluid's four-velocity, $\nabla_a$ the covariant derivative associated with the spacetime metric $g_{ab}$, and the fluid's stress-energy tensor is given by \begin{equation} T^{ab}_{\rm fluid} = \rho_0 h u^a u^b + P g^{ab}, \end{equation} where $h = 1 + \epsilon + P / \rho_0$ is the specific enthalpy, $\epsilon$ the specific internal energy density, and $P$ the fluid pressure. In (\ref{T_cons}) we have accounted for the presence of radiation by including the radiation stress-energy tensor $R^{ab}$ introduced in Eq.~(\ref{rad_se}) below. As shown in MBM, the equations of relativistic hydrodynamics can also be rewritten with the help of a reference metric, thereby avoiding some of the numerical problems encountered in curvilinear coordinates, and casting the equations in a framework that meshes well with that for the gravitational field equations. In Section \ref{sec:eqs_dynamics} we will follow a very similar procedure to rewrite the dynamical equations for the radiation fields. \subsection{Radiation fields in the fluid frame} \label{sec:eqs_fluid_frame} We assume that the radiation stress-energy tensor $R^{ab}$ can be written as \begin{equation} \label{rad_se} R^{ab} = E u^a u^b + F^a u^b + u^a F^b + {\mathcal P}^{ab}. \end{equation} Here $u^a$ is the fluid four-velocity, \begin{equation} \label{E} E = \int d \nu d \Omega I_\nu, \end{equation} the radiation energy density as measured by an observer comoving with the fluid, \begin{equation} \label{Fa} F^a = h^a_{~b} \int d\nu d\Omega I_\nu N^b, \end{equation} the comoving radiation flux four-vector, \begin{equation} {\mathcal P}^{ab} = h^a_{~c} h^b_{~d} \int d\nu d\Omega I_\nu N^c N^d \end{equation} the comoving radiation stress tensor, $I_\nu$ is the specific intensity, and \begin{equation} h^{a}_{~b} = g^{a}_{~b} + u^a u_b \end{equation} the projection operator that projects onto slices orthogonal to the fluid four-velocity. To illustrate our approach, we assume for simplicity that the radiation is nearly isotropic everywhere, which is appropriate in media that are optically thick. In this case, the radiation stress tensor takes the form ${\mathcal P}^{ab} = {\mathcal P} h^{ab}$, where ${\mathcal P}$ is the radiation pressure. 
The system of equations may then be closed by adopting an Eddington factor of 1/3, so that
\begin{equation} \label{closure}
{\mathcal P}^{ab} = {\mathcal P} h^{ab} = \frac{1}{3} E h^{ab}
\end{equation}
(see, e.g., \cite{ShiKSS11,CarEM13,SadNTZ13,Fou18,FouDKNPS18,WeiOR20} and references therein for more sophisticated closure schemes). In the above integrals, $d \Omega$ is the differential solid angle, $\nu$ is the frequency and $I_\nu = I(x^a; N^i, \nu)$ is the specific intensity of radiation at a location $x^a$, moving in the direction $N^a = p^a / (h\nu)$, all measured in the local Lorentz frame of a fiducial observer. In the last expression $p^a$ is the photon four-momentum, and $h$ the Planck constant (not to be confused with the specific enthalpy). We also note that $F^a$ is orthogonal to the fluid four-velocity,
\begin{equation} \label{u_dot_F_1}
u_a F^a = 0.
\end{equation}
The dynamical equations governing the radiation can then be written as
\begin{equation} \label{eom}
\nabla_b R^{ab} = - G^a,
\end{equation}
where $G^a$ is the radiation four-force
\begin{equation} \label{rad_force}
G^a = \rho_0 \kappa^{\rm abs} (E - 4 \pi B) u^a + \rho_0 (\kappa^{\rm abs} + \kappa^{\rm sc}) F^a.
\end{equation}
Here $\kappa^{\rm abs}$ and $\kappa^{\rm sc}$ are the (frequency-independent) gray-body absorption and scattering opacities, respectively (see, e.g., FLLS for details). In (\ref{rad_force}), the frequency-integrated equilibrium intensity $B(T)$ can be written as
\begin{equation}
4 \pi B = a_R T^4,
\end{equation}
where $T$ is the temperature and $a_R$ a constant. The value of the latter depends on the type of radiation considered: for thermal radiation it equals the usual radiation constant $a$; for each flavor of non-degenerate neutrinos or antineutrinos it is $(7/16) \,a$; and for all neutrinos and antineutrinos combined it is $(7 N_{\nu} / 8)a$, where $N_\nu$ is the number of neutrino flavors contributing to the thermal radiation (see FLLS); here we assume that $k_B T \gg m_\nu$, as is the case in most stellar applications. For situations in which the radiation is in thermal equilibrium with the fluid we have $E = 4 \pi B$, but we will not assume that in general. The radiation moments $E$ and $F^a$, both describing quantities measured by an observer comoving with the fluid, form the {\em primitive} radiation variables. When coupling the equations of motion to the evolution of the spacetime, it is often advantageous to employ a 3+1 decomposition, and to express the radiation equations in terms of {\em conserved} quantities that are related to quantities measured by normal observers.
\subsection{Radiation fields in the normal frame}
\label{sec:eqs_normal_frame}
We start by decomposing the tensors appearing in Section \ref{sec:eqs_fluid_frame} into their normal and spatial components. Using (\ref{gamma_def}), we can write the fluid four-velocity $u^a$, for example, as
\begin{equation}
u^a = g^a_{~b} u^b = \gamma^a_{~b} u^b - n^a n_b u^b.
\end{equation}
Defining the Lorentz factor between normal and fluid observers as
\begin{equation}
W \equiv - n_a u^a = \alpha u^t
\end{equation}
and
\begin{equation} \label{v_def}
v^a \equiv \frac{1}{W} \gamma^a_{~b} u^b = (0, u^i/W + \beta^i/\alpha),
\end{equation}
we may write
\begin{equation} \label{u_decomp}
u^a = W ( v^a + n^a).
\end{equation}
Note that $v^a$ is spatial by construction, $n_a v^a = 0$.
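It also follows from the normalization $u_a u^a = -1$, together with the decomposition (\ref{u_decomp}) and $n_a n^a = -1$, that
\begin{equation}
-1 = u_a u^a = W^2 \left( n_a + v_a \right) \left( n^a + v^a \right) = W^2 \left( - 1 + \gamma_{ij} v^i v^j \right),
\end{equation}
so that the Lorentz factor may be computed directly from the spatial velocity, $W = \left( 1 - \gamma_{ij} v^i v^j \right)^{-1/2}$.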
Our definition follows that used in the ``Valencia" formulation of relativistic hydrodynamics, but differs from that used by many other authors, including FLLS,
\begin{equation} \label{v_FLLS}
v^i_{\rm FLLS} \equiv \frac{u^i}{u^t} = \alpha v^i - \beta^i.
\end{equation}
We similarly decompose the radiation flux four-vector into its normal and spatial parts,
\begin{equation}
{\mathcal F} \equiv - n_a F^a = \alpha F^t,~~~~~~~~{\mathcal F}^a \equiv \gamma^a_{~b} F^b,
\end{equation}
so that
\begin{equation} \label{F_decomp}
F^a = {\mathcal F}^a + {\mathcal F} n^a.
\end{equation}
Note that the orthogonality condition (\ref{u_dot_F_1}) can now be expressed as
\begin{equation} \label{u_dot_F_2}
{\mathcal F} = v_a {\mathcal F}^a.
\end{equation}
Following the same approach for the radiation four-force (\ref{rad_force}) we obtain
\begin{equation}
{\mathcal G} \equiv - n_a G^a = \rho_0 \kappa^{\rm abs} (E - 4 \pi B) W + \rho_0 (\kappa^{\rm abs} + \kappa^{\rm sc}) {\mathcal F}
\end{equation}
and
\begin{equation}
{\mathcal G}^a \equiv \gamma^a_{~b} G^b = \rho_0 \kappa^{\rm abs} (E - 4 \pi B) W v^a + \rho_0 (\kappa^{\rm abs} + \kappa^{\rm sc}) {\mathcal F}^a.
\end{equation}
We now decompose the radiation stress-energy tensor (\ref{rad_se}) into purely normal, purely spatial, and mixed components. Specifically, the purely normal component results in the radiation energy density as observed by a normal observer,
\begin{align} \label{rhorad}
\bar \rho & \equiv n_a n_b R^{ab} = \alpha^2 R^{tt} \nonumber \\
& = W^2 E + 2 W {\mathcal F} + {\mathcal P}(W^2 - 1),
\end{align}
where we have used $n_a n_b h^{ab} = W^2 -1$ in the last equality. Adopting the closure relation (\ref{closure}) we obtain
\begin{equation} \label{rho_bar}
\bar \rho = \frac{4}{3} W^2 E - \frac{1}{3} E + 2 W {\mathcal F}.
\end{equation}
The mixed normal-spatial components of (\ref{rad_se}) yield the momentum flux as observed by a normal observer,
\begin{align} \label{jrad}
\bar \jmath^a & \equiv - \gamma^a_{~b} n_c R^{bc} = \alpha ( R^{at} + \beta^a R^{tt} ) \nonumber \\
& = \frac{4}{3} E W^2 v^a + W {\mathcal F}^a + {\mathcal F} W v^a,
\end{align}
where we have used $\gamma^a_{~b} n_c h^{bc} = - W^2 v^a$. Finally, the radiation stress tensor as observed by a normal observer is given by a purely spatial projection of (\ref{rad_se}),
\begin{align} \label{Srad}
\bar S^{ab} & \equiv \gamma^a_{~c} \gamma^b_{~d} R^{cd} = R^{ab} - \alpha n^a R^{bt} - \alpha n^b R^{at} + \alpha^2 n^a n^b R^{tt} \nonumber \\
& = \frac{4}{3} E W^2 v^a v^b + \frac{1}{3} E \gamma^{ab} + W {\mathcal F}^a v^b + W v^a {\mathcal F}^b.
\end{align}
In the above expressions we introduced the bars in order to distinguish these radiation quantities from similar quantities often defined in the 3+1 decomposition of the matter stress-energy tensor.
\subsection{Dynamical equations for the radiation fields}
\label{sec:eqs_dynamics}
We now project the dynamical equations (\ref{eom}) both along the normal and into the spatial slice. The former will give rise to the radiation energy equation (\ref{tau_dot}) below, while the latter results in the radiation momentum (or flux) equations (\ref{S_dot}).
\subsubsection{The energy equation}
\label{sec:eqs_dynamics_energy}
We start with a normal projection of (\ref{eom}),
\begin{equation} \label{eom_normal}
n_a \nabla_b R^{ab} = \nabla_b (n_a R^{ab}) - R^{ab} \nabla_b n_a = - n_a G^a = {\mathcal G}.
\end{equation}
Applying the identity
\begin{equation}
\nabla_a V^a = \frac{1}{\sqrt{|g|}} \partial_a \left( \sqrt{|g|} \, V^a \right)
\end{equation}
twice -- once for the covariant derivative $\nabla_a$ associated with the spacetime metric $g_{ab}$ and its determinant $g$, and once for the covariant derivative $\hat {\mathcal D}_i$ associated with the reference metric $\hat \gamma_{ij}$ and its determinant $\hat \gamma$ -- we may rewrite the first term in the first equality of (\ref{eom_normal}) as
\begin{align} \label{eq1}
&\nabla_b (n_a R^{ab}) = \frac{1}{\sqrt{-g}} \partial_b \left( \sqrt{-g} n_a R^{ab} \right) \\
& = \frac{1}{\sqrt{-g}} \left\{ \partial_t \left( \sqrt{-g} n_a R^{at} \right) + \partial_i \left( \sqrt{-g} n_a R^{ai} \right) \right\} \nonumber \\
&= - \frac{1}{\alpha \sqrt{\gamma}} \left\{ \partial_t \left( \sqrt{\gamma} \,\alpha^2 R^{tt} \right) + \partial_i \left( \sqrt{\gamma} \,\alpha^2 R^{it} \right) \right\} \nonumber \\
&= - \frac{1}{\alpha \sqrt{\gamma / \hat \gamma}} \left\{ \partial_t \left( \sqrt{\gamma / \hat \gamma} \, \alpha^2 R^{tt} \right) + \hat {\mathcal D}_i \left( \sqrt{\gamma / \hat \gamma} \, \alpha^2 R^{it} \right) \right\}. \nonumber
\end{align}
Here we have used $\sqrt{-g} = \alpha \sqrt{\gamma}$, where $\gamma$ is the determinant of the spatial metric $\gamma_{ij}$. We have also assumed that the determinant of the reference metric, $\hat \gamma$, is independent of time, which would be easy to generalize if desired. Inserting (\ref{eq1}) into (\ref{eom_normal}) we obtain
\begin{equation} \label{tau_dot}
\fbox{$ \partial_t \bar \tau + \hat {\mathcal D}_i f_{\bar \tau}^i = s_{\bar \tau} - \alpha \sqrt{\gamma / \hat \gamma}\, {\mathcal G}, $}
\end{equation}
where we have defined the radiation energy density variable
\begin{equation} \label{tau}
\bar \tau \equiv \sqrt{\gamma / \hat \gamma}\, \alpha^2 R^{tt} = \sqrt{\gamma / \hat \gamma} \, \bar \rho,
\end{equation}
its associated energy flux,
\begin{align} \label{f_tau}
f_{\bar \tau}^i & \equiv \sqrt{\gamma / \hat \gamma} \, \alpha^2 R^{it} = \sqrt{\gamma / \hat \gamma} \, (\alpha \bar \jmath^i - \bar \rho \beta^i ) \\
& = \bar \tau (\alpha v^i - \beta^i) + \alpha \sqrt{\gamma / \hat \gamma} \,\left( \frac{1}{3} E v^i - W {\mathcal F} v^i + W {\mathcal F}^i \right) \nonumber
\end{align}
as well as the source term
\begin{align} \label{s_tau}
s_{\bar \tau} & \equiv - \alpha \sqrt{\gamma / \hat \gamma}\, R^{ab} \nabla_b n_a = \alpha \sqrt{\gamma / \hat \gamma}\, R^{ab} ( K_{ba} + n_b a_a) \nonumber \\
& = \sqrt{\gamma / \hat \gamma}\, ( \alpha \bar S^{ij} K_{ij} - \bar \jmath^i \partial_i \alpha ).
\end{align}
In the last equation
\begin{equation}
K_{ab} \equiv - \gamma_a^{~c} \gamma_b^{~d} \nabla_c n_d = - \nabla_a n_b - n_a a_b
\end{equation}
is the extrinsic curvature, and
\begin{equation}
a_b \equiv n^a \nabla_a n_b = \gamma_b^{~c} \partial_c \ln \alpha
\end{equation}
the acceleration of the normal observer. We note that, in the reference-metric formalism, all quantities are defined using ratios between determinants, rather than just the determinants themselves, and are therefore tensor densities of weight zero. We will discuss some other computational advantages of the reference-metric formalism in Section \ref{sec:numerics_OS_early_analytical} below.
\subsubsection{The momentum equation}
\label{sec:eqs_dynamics_momentum}
We now take a spatial projection of (\ref{eom}), which yields
\begin{equation} \label{eom_spatial}
\gamma_{ia} \nabla_b R^{ab} = - \gamma_{ia} G^a = - {\mathcal G}_i.
\end{equation}
We first rewrite
\begin{equation}
\gamma_{ia} \nabla_b R^{ab} = g_{ia} \nabla_b R^{ab} = \nabla_b (R_i^{~b})
\end{equation}
and then use the identity
\begin{equation}
\nabla_b T_a^{~b} = \frac{1}{\sqrt{|g|}} \partial_b \left( \sqrt{|g|} \, T_a^{~b} \right) - T_c^{~b} \, \Gamma^c_{ab}
\end{equation}
twice to find
\begin{align} \label{eq3}
& \gamma_{ia} \nabla_b R^{ab} \nonumber \\
& ~~~ = \frac{1}{\alpha \sqrt{\gamma / \hat \gamma}} \left\{ \partial_t \left(\alpha \sqrt{\gamma / \hat \gamma} \, R_i^{~t} \right) + \hat {\mathcal D}_j \left(\alpha \sqrt{\gamma / \hat \gamma} \, R_i^{~j} \right) \right\} \nonumber \\
& ~~~~~~ + R_j^{~k} \hat \Gamma^j_{ki} - R_c^{~b} \, \Gamma^c_{bi},
\end{align}
where the $\Gamma^c_{ab}$ are the Christoffel symbols associated with the spacetime metric $g_{ab}$, and the $\hat \Gamma^j_{ki}$ are those associated with the reference metric $\hat \gamma_{ij}$. Inserting (\ref{eq3}) into (\ref{eom_spatial}) we obtain
\begin{equation} \label{S_dot}
\fbox{$ \partial_t \bar S_i + \hat {\mathcal D}_j (f_{\bar S})_i^{~j} = (s_{\bar S})_i - \alpha \sqrt{\gamma / \hat \gamma} \, {\mathcal G}_i, $}
\end{equation}
where we have defined the radiation momentum density, or flux, variable
\begin{equation} \label{S}
\bar S_i \equiv \alpha \sqrt{\gamma / \hat \gamma} \, R_i^{~t} = \sqrt{\gamma / \hat \gamma} \, \bar \jmath_i,
\end{equation}
its associated momentum flux
\begin{align} \label{S_flux}
(f_{\bar S})_i^{~j} & \equiv \alpha \sqrt{\gamma / \hat \gamma} \, R_i^{~j} = \sqrt{\gamma / \hat \gamma} \, ( \alpha \bar S_i^{~j} - \bar \jmath_i \beta^j) \nonumber \\
& = \bar S_i ( \alpha v^j - \beta^j) \\
&~~~~~~+ \alpha \sqrt{\gamma / \hat \gamma} \,\left(\frac{1}{3} E \delta_i^{~j} - W {\mathcal F} v_i v^j + W v_i {\mathcal F}^j \right), \nonumber
\end{align}
and the source term
\begin{equation} \label{s_S}
(s_{\bar S})_i \equiv \alpha \sqrt{\gamma / \hat \gamma} \, ( R_c^{~b} \Gamma^c_{bi} - R_j^{~k} \hat \Gamma^j_{ki} ).
\end{equation}
We now write
\begin{equation}
R_c^{~b} \Gamma^c_{bi} - R_j^{~k} \hat \Gamma^j_{ki} = R^{cb} \Gamma_{cbi} - R^{ck} g_{jc} \hat \Gamma^j_{ki},
\end{equation}
expand $R^{ab}$ into its projections (\ref{rhorad}), (\ref{jrad}) and (\ref{Srad}), and use
\begin{equation}
\Gamma_{(bc)i} = \partial_i g_{bc} = - g_{db} g_{ec} \partial_i g^{de}
\end{equation}
(where $()$ denotes symmetrization) to rewrite the source term (\ref{s_S}) as
\begin{equation}
(s_{\bar S})_i = \sqrt{\gamma / \hat \gamma} \left( - \bar \rho \, \hat {\mathcal D}_i \alpha + \bar \jmath_j \hat {\mathcal D}_i \beta^j + \frac{1}{2} \alpha \bar S^{jk} \hat {\mathcal D}_i \gamma_{jk} \right)
\end{equation}
(see Section III.B in MBM; also note that $\bar \jmath_t = \bar \jmath_i \beta^i$). For most numerical applications, a natural choice for the reference metric $\hat \gamma_{ij}$ is the flat (spatial) metric in whatever coordinate system is used. If so, Eqs.~(\ref{tau_dot}) and (\ref{S_dot}) reduce to familiar expressions (e.g.~Eqs.~(35) and (38) of FLLS) when evaluated in Cartesian coordinates, for which $\hat \gamma = 1$ and $\hat {\mathcal D}_i = \partial_i$. In curvilinear coordinates, we evaluate the flux terms in both equations by writing, for example, $\hat {\mathcal D}_i f_{\bar \tau}^i = \partial_i f_{\bar \tau}^i + f_{\bar \tau}^j \hat \Gamma^i_{ji}$, where the Christoffel symbols $\hat \Gamma^i_{jk}$ are known analytically. We then move these Christoffel terms to the right-hand sides of the equations, as discussed in MBM, so that they act as source terms.
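As a concrete illustration, for the flat reference metric in spherical polar coordinates the flux divergence appearing in (\ref{tau_dot}) takes the familiar form
\begin{equation}
\hat {\mathcal D}_i f_{\bar \tau}^i = \frac{1}{r^2} \partial_r \left( r^2 f_{\bar \tau}^r \right) + \frac{1}{\sin \theta} \partial_\theta \left( \sin \theta \, f_{\bar \tau}^\theta \right) + \partial_\varphi f_{\bar \tau}^\varphi,
\end{equation}
i.e.~$\partial_i f_{\bar \tau}^i$ plus the Christoffel contractions $\hat \Gamma^j_{rj} f_{\bar \tau}^r = (2/r) f_{\bar \tau}^r$ and $\hat \Gamma^j_{\theta j} f_{\bar \tau}^\theta = \cot \theta \, f_{\bar \tau}^\theta$; it is these analytically known, singular terms that are moved to the right-hand side.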
Eqs.~(\ref{tau_dot}) and (\ref{S_dot}) now form the dynamical equations for the conserved radiation variables $\bar \tau$ and $\bar S_i$; in a numerical simulation they have to be solved together with the equations for the gravitational fields, relativistic hydrodynamics and any other fields or sources that are being considered. We present a simple analytical example, also highlighting some advantages of the reference-metric formulation, in Section \ref{sec:numerics_OS_early_analytical}.
\subsection{Recovery}
\label{sec:eqs_recovery}
Solving Eqs.~(\ref{tau_dot}) and (\ref{S_dot}) yields the conserved radiation variables $\bar \tau$ and $\bar S_i$. In the course of the dynamical evolution, however, we also need the primitive variables $E$ and $F^a$ -- or, equivalently, $E$, ${\mathcal F}$ and ${\mathcal F}^i$. The latter variables therefore have to be recovered from the conserved variables. For the hydrodynamical variables, a similar recovery step generally requires a numerical iteration, but for the radiation equations treated here the recovery can be accomplished algebraically, as was the case in FLLS. We start by using (\ref{rho_bar}) in (\ref{tau}),
\begin{equation} \label{recov_1}
\bar \tau = \sqrt{\gamma / \hat \gamma} \left( \frac{1}{3} ( 4 W^2 - 1) E + 2 W {\mathcal F} \right).
\end{equation}
Next we compute the contraction $n_a u_b R^{ab}$ twice; once expressing $R^{ab}$ as in (\ref{rad_se}), i.e.~projected with respect to $u^a$,
\begin{equation}
n^a u^b R_{ab} = W E + {\mathcal F},
\end{equation}
and once expressing $R^{ab}$ in terms of its spatial projections,
\begin{align}
n^a u^b R_{ab} & = n^a g_c^{~b} u^c R_{ab} = n^a (\gamma_c^{~b} - n_c n^b) u^c R_{ab} \nonumber \\
& = \bar \rho W - \bar \jmath_c u^c = W (\bar \rho - \bar \jmath_i v^i).
\end{align}
Multiplying both expressions by $\sqrt{\gamma / \hat \gamma}$ and equating them yields
\begin{equation} \label{recov_2}
W (\bar \tau - \bar S_i v^i) = \sqrt{\gamma / \hat \gamma} \, ( W E + {\mathcal F}).
\end{equation}
This is equivalent to Eq.~(66) in FLLS, once the different definitions of the spatial velocity $v^i$ have been taken into account (see Eq.~\ref{v_FLLS} above). Eqs.~(\ref{recov_1}) and (\ref{recov_2}) now provide two equations for the two unknowns $E$ and ${\mathcal F}$ that can be solved directly given values of the conserved variables $\bar \tau$ and $\bar S_i$. Finally, we insert (\ref{jrad}) into (\ref{S}),
\begin{equation}
\bar S^i = \sqrt{\gamma / \hat \gamma} \left( \frac{4}{3} E W^2 v^i + W {\mathcal F}^i + W {\mathcal F} v^i \right),
\end{equation}
and solve for ${\mathcal F}^i$ to obtain
\begin{equation} \label{f_recovery}
{\mathcal F}^i = \frac{1}{W \sqrt{\gamma / \hat \gamma}} \bar S^i - \frac{4}{3} E W v^i - {\mathcal F} v^i
\end{equation}
(compare Eq.~67 in FLLS). This completes the recovery of the primitive variables $E$, ${\mathcal F}$, and ${\mathcal F}^i$ from the conserved variables $\bar \tau$ and $\bar S_i$. While the recovery of the primitive radiation variables involves algebraic equations only, the solution may nevertheless be affected by significant numerical error, especially at large optical depths. This is because, in such regions, the flux variables ${\mathcal F}$ and ${\mathcal F}^i$ will often be much smaller than the radiation energy density $E$ as well as the conserved quantities $\bar \tau$ and $\bar S_i$. In this case, the flux variables are computed as the small differences between (potentially) much larger numbers, which generally leads to increased numerical error.
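For reference, the complete recovery can be collected in a few lines. The following schematic sketch (with illustrative variable names; index raising with $\gamma^{ij}$ and the rescalings of Section \ref{sec:numerics_implementation} below are omitted, and it is not meant to reproduce our code verbatim) solves (\ref{recov_1}) and (\ref{recov_2}) for $E$ and ${\mathcal F}$ and then evaluates (\ref{f_recovery}):
\begin{verbatim}
import numpy as np

def recover_radiation(tau, S_lo, S_up, v_up, W, sdetg):
    # tau, S_lo/S_up: conserved bar tau and bar S_i / bar S^i;
    # v_up: fluid velocity v^i; W: Lorentz factor;
    # sdetg: sqrt(gamma / hat gamma)
    Sv = np.dot(S_lo, v_up)                      # bar S_i v^i
    # linear system (recov_1)-(recov_2), solved in closed form:
    E = 3.0 * (2.0 * W**2 * (tau - Sv) - tau) \
        / (sdetg * (2.0 * W**2 + 1.0))
    calF = W * (tau - Sv) / sdetg - W * E
    # Eq. (f_recovery); at large optical depth calF and F_up emerge
    # as small differences of large numbers (see the text above)
    F_up = S_up / (W * sdetg) - 4.0 * E * W * v_up / 3.0 - calF * v_up
    return E, calF, F_up
\end{verbatim}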
We will discuss a concrete example in Section \ref{sec:numerics_OS_early_numerical} below.
\section{Numerical examples}
\label{sec:numerics}
\subsection{Numerical implementation}
\label{sec:numerics_implementation}
Most features of our numerical implementation have been described in \cite{BauMCM13,MonBM14,BauMM15}. Specifically, we use a reference-metric approach \cite{BonGGN04,ShiUF04,Bro09,Gou12} to express the Baumgarte-Shapiro-Shibata-Nakamura (BSSN) formulation \cite{NakOK87,ShiN95,BauS98} of Einstein's equations as well as the equations of relativistic hydrodynamics in spherical polar coordinates. In particular, we adopt the flat metric in spherical polar coordinates as our reference metric $\hat \gamma_{ij}$. We rescale all tensorial quantities with appropriate powers of $r$ and $\sin \theta$ so that all singular terms can be treated analytically. For example, for a vector with covariant components we write
\begin{equation} \label{rescaling_vector}
\bar S_i = \left( \begin{array}{c} \tilde S_r \\ r \, \tilde S_\theta \\ r \sin \theta \, \tilde S_\varphi \end{array} \right)
\end{equation}
and evolve the variables $\tilde S_i$ in our code. For vectors with contravariant components we divide by similar factors; for the fluxes $(f_{\bar S})_i^{~j}$ in (\ref{S_flux}) with mixed indices we write
\begin{equation} \label{rescaling_tensor}
(f_{\bar S})_i^{~j} = \left( \begin{array}{ccc} (\tilde f_{\bar S})_r^{~r} & (\tilde f_{\bar S})_r^{~\theta}/r & (\tilde f_{\bar S})_r^{~\varphi} / (r \sin \theta) \\ r \, (\tilde f_{\bar S})_\theta^{~r} & (\tilde f_{\bar S})_\theta^{~\theta} & (\tilde f_{\bar S})_\theta^{~\varphi} / \sin \theta \\ r \sin \theta \, (\tilde f_{\bar S})_\varphi^{~r} & \sin \theta \, (\tilde f_{\bar S})_\varphi^{~\theta} & (\tilde f_{\bar S})_\varphi^{~\varphi} \\ \end{array} \right).
\end{equation}
We impose parity boundary conditions to allow finite-differencing across the origin and the axis (see, e.g., Table I in \cite{BauMCM13}), and Robin-type conditions on the outer boundaries. The latest version of our code uses fourth-order differencing for all spatial derivatives in Einstein's field equations, together with a fourth-order Runge-Kutta time integrator \cite{BauGH19}. We solve the equations of relativistic hydrodynamics using an HLLE approximate Riemann solver \cite{HarLL83,Ein88}, together with a simple monotonized central-difference limiter reconstruction scheme \cite{Van77}. The latter is second-order accurate in most regions, but reduces to first order close to discontinuities or extrema. More accurate schemes are used by many groups (e.g.~\cite{ShiF05,YamST08,Jesetal20}; see also \cite{Tor99,RezZ13}), but are not needed for the numerical examples presented here. We have now implemented the equations of radiation hydrodynamics, in the gray, optically-thick two-moment approximation of Section \ref{sec:eqs} above, in the exact same computational framework as the equations of relativistic hydrodynamics, allowing for fully relativistic radiation hydrodynamics simulations in spherical polar coordinates. As numerical demonstrations we consider flat spacetime tests in Sect.~\ref{sec:numerics_flat}, and heated Oppenheimer-Snyder collapse in Sect.~\ref{sec:numerics_OS}.
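To illustrate the rescalings (\ref{rescaling_vector}) and (\ref{rescaling_tensor}), the following minimal sketch (schematic only; the treatment of grid staggering and of points on the axis in our code is not reproduced here) maps the components of a covariant vector to their regularized counterparts and back:
\begin{verbatim}
import numpy as np

def rescale_covector(S_bar, r, theta):
    # bar S_i -> tilde S_i: divide out the factors of r and
    # r sin(theta) of Eq. (rescaling_vector); the evolved tilde
    # quantities remain regular at the origin and on the axis
    S_r, S_th, S_ph = S_bar
    return np.array([S_r, S_th / r, S_ph / (r * np.sin(theta))])

def unrescale_covector(S_tilde, r, theta):
    # inverse map, tilde S_i -> bar S_i
    S_r, S_th, S_ph = S_tilde
    return np.array([S_r, r * S_th, r * np.sin(theta) * S_ph])
\end{verbatim}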
\subsection{Flat spacetime tests} \label{sec:numerics_flat} \begin{table*} \begin{tabular}{c|c|c|c||c|c|c|c||c|c|c|c} & & & & \multicolumn{4}{c||}{left state} & \multicolumn{4}{c}{right state} \\ \hline type & $\kappa^{\rm abs}$ & $\Gamma$ & $a_R m^4$ & $\rho_0$ & $P$ & $u^z$ & $E$ & $\rho_0$ & $P$ & $u^z$ & $E$ \\ \hline \hline continuous & 0.08 & 5/3 & $1.39 \times 10^{8}$ & 1.0 & $6 \times 10^{-3}$ & 0.69 & $0.18$ & 3.65 & $3.59 \times 10^{-2}$ & $0.189$ & $1.30$ \\ shock & 0.24 & 5/3 & $1.24 \times 10^{10}$ & 1.0 & $3 \times 10^{-5}$ & 0.015 & $1.0 \times 10^{-8}$ & 2.4 & $1.61 \times 10^{-4}$ & $6.25 \times 10^{-3}$ & $2.51 \times 10^{-7}$ \\ \end{tabular} \caption{Left and right states for the flat spacetime tests of Sect.~\ref{sec:numerics_flat}. The continuous solution in the top row corresponds to Test 4 in Table I of FLLS, while the shock solution in the bottom row corresponds to their Test 1 (except that we use $\kappa^{\rm abs} = 0.24$).} \label{Table:1} \end{table*} Stationary and slab-symmetric solutions to the equations of relativistic radiation hydrodynamics in flat (Minkowski) spacetime can be derived by assuming that the solutions to Eqs.~(\ref{eom}), as well as the equations of relativistic hydrodynamics, are independent of time, and depend on one spatial Cartesian coordinate only, say $z$ (see \cite{ZelR66,MihWM84}). Further assuming a $\Gamma$-law equation of state, \begin{equation} \label{gamma_law} P = (\Gamma - 1) \epsilon \rho_0, \end{equation} the equations reduce to a set of five coupled ordinary differential equations that can be solved as discussed in Appendix C of FLLS \footnote{We would like to thank Yuk Tung Liu for pointing out that the first term in the second line of Eq.~(C20) in FLLS misses an overall factor $u^0_R$.}. We will assume that $\kappa^{\rm sc} = 0$, and that $\kappa^{\rm abs}$ is constant. We then transform these semi-analytical solutions to spherical polar coordinates, and adopt the resulting data as initial data for our dynamical evolutions. Given that the data describe stationary solutions, any departure from the initial data serves as a measure of numerical error. \subsubsection{Continuous solutions} \label{sec:numerics_flat_cont} Continuous semi-analytic solutions can be constructed by choosing, at the lower boundary, values of $E$ and $F^z \ll E$, as well as of the fluid's rest-mass density $\rho_0$, pressure $P$, and four-velocity $u^z$, and integrating to larger values of $z$. In particular, we assume that the radiation is in thermal equilibrium with the fluid at the lower boundary, i.e.~$E = 4 \pi B = a_R T^4 = a_R m^4 (P / \rho_0)^4$. Here we have adopted the Maxwell-Boltzmann ideal gas law $P = \rho_0 T / m$, where $m$ is the mean mass of the fluid particles, and where we have chosen units in which Boltzmann's constant is unity, $k_B = 1$. Since no shocks are encountered in this test, we replaced the monotonized central-difference limiter reconstruction scheme with simple quadratic interpolation, and therefore expect second-order convergence for these simulations. \begin{figure}[t] \includegraphics[width = 0.4 \textwidth]{Fig_1a} \includegraphics[width = 0.4 \textwidth]{Fig_1b} \caption{A continuous flat-spacetime solution, showing the fluid rest-mass density $\rho_0$ in the top panel and the radiation energy density $E$ in the bottom panel.
The black rectangular grid represents the semi-analytical solution for the data in the top row of Table \ref{Table:1}, while the colored surface shows the numerical solution at time $t = 10.053$, obtained with $N_r = 320$ radial and $N_\theta = 120$ angular grid points, with the grid extending to $r_{\rm out} = 24$. The white lines represent our spherical polar coordinate system, showing every 12-th radial and every 4-th angular grid line.} \label{Fig:1} \end{figure} As an example, we show results for the boundary values listed in the top row of Table \ref{Table:1}, which correspond to Test 4 listed in Table I of FLLS. We show profiles of this solution in Fig.~\ref{Fig:1}, comparing the numerical solution at time $t = 10.053$ (displayed as the colored surface with spherical polar coordinate lines) with the semi-analytical solution (represented by the rectangular grid). It is difficult to see any difference in these plots. \begin{figure}[t] \includegraphics[width = 0.45 \textwidth]{Fig_2} \caption{Convergence test for the continuous solution shown in Fig.~\ref{Fig:1}, except that we have also boosted the solution with a speed $\beta = 0.1$ in the $z$-direction for this test. The different lines show differences $N^2 \Delta E$, rescaled assuming second-order convergence, between the numerical and semi-analytical solutions. Each line shows an interpolation onto the $z$-axis, for a grid with $N_r = 32 N$ radial and $N_\theta = 12 N$ angular grid points, with the numerical grid extending to $r_{\rm out} = 24$. The top left inset shows the behavior in the vicinity of the center, demonstrating second-order convergence even in the presence of the coordinate singularities at the origin. The bottom right inset shows results for the norm $||\Delta E|| \equiv \int | \Delta E | dV$, integrated to a radius of $r = 12$. The solid line in this inset represents a power-law $(1 / N)^2$.} \label{Fig:2} \end{figure} In Fig.~\ref{Fig:2} we show a convergence test for this setup, except that we have also boosted the solution with a speed $\beta = 0.1$ in the positive $z$-direction for this test. We interpolate our numerical solutions to the $z$-axis, then compute the difference $\Delta E$ between this numerical solution and the semi-analytical solution, and finally multiply these differences by $N^2$. The plot shows that these rescaled differences $N^2 \Delta E$ converge, establishing the expected second-order convergence. The bottom right inset shows that integrals of the numerical error also decrease with $N^{-2}$, as expected. \subsubsection{Discontinuous solutions} \label{sec:numerics_flat_discont} Solutions featuring a shock discontinuity, on the other hand, can be constructed by assuming that the radiation is in thermal equilibrium with the fluid at both the lower and the upper boundary. As discussed in Appendix C of FLLS, a ``shooting method'' can then be employed to integrate the equations from both boundaries to the location of a shock discontinuity at $z = z_{\rm shock}$, where matching conditions are imposed \footnote{It appears difficult to construct these solutions exactly as described in Appendix C of FLLS, because the solutions feature exponential growth approaching the shock discontinuity from both sides. For large optical depths, the iteration used to match these solutions at the discontinuity requires more precision than can be achieved even with long double precision.
We avoided these problems by restricting our optical depths to values slightly smaller than those reported by FLLS, choosing $\kappa^{\rm abs} = 0.24$ rather than their value of $\kappa^{\rm abs} = 0.4$.}. \begin{figure}[t] \includegraphics[width = 0.4 \textwidth]{Fig_3a} \includegraphics[width = 0.4 \textwidth]{Fig_3b} \caption{Same as Fig.~\ref{Fig:1}, except for a solution featuring a shock discontinuity (see the bottom row in Table \ref{Table:1}). For this test, performed with $N_r = 256$ radial and $N_\theta = 48$ angular grid points, and the outer boundary at $r_{\rm out} = 16$, we placed the shock discontinuity at $z = 2$ rather than at $z = 0$, so that the shock front does not coincide with the symmetry plane of the coordinate system.} \label{Fig:3} \end{figure} As an example, we show profiles of the fluid rest-mass density $\rho_0$ and the radiation energy density $E$ at $t = 10.053$ in Fig.~\ref{Fig:3}, demonstrating that the shock discontinuity is well-resolved, even when the shock front does not coincide with a coordinate plane. \subsection{Heated Oppenheimer-Snyder collapse} \label{sec:numerics_OS} An analytical solution describing ``heated Oppenheimer-Snyder collapse'', i.e.~collapse of a homogeneous dust sphere to a black hole (see \cite{OppS39}) with radiation, has been derived in \cite{Sha89} (see also \cite{Sha96} and \cite{BauS10} for a summary). This solution makes several assumptions that are realized only approximately in numerical simulations that adopt a two-moment radiation formalism. One of these assumptions is that all pressure and radiation terms are sufficiently small so that they do not affect the spacetime and dust evolution; another assumption is that the radiative processes can be described in the relativistic diffusion limit (see Appendix A.1 in FLLS). The former condition can be met in a numerical simulation by making suitable choices for the equation of state and the initial data. While the latter approximation is quite adequate during most of the evolution for a star of sufficiently large optical depth, it is violated during an initial transient phase, lasting a few light-travel times across the initial dust sphere, after which its accuracy improves. In this Section we carefully discuss the resulting transition from the initial data to the post-transient diffusion solution. An exact numerical solution has been obtained by integrating the Boltzmann equation without approximation in \cite{Sha96}. \subsubsection{Oppenheimer-Snyder collapse} \label{sec:numerics_OS_OS} \begin{figure}[t] \includegraphics[width = 0.45 \textwidth]{Fig_4.pdf} \caption{A schematic spacetime diagram for Oppenheimer-Snyder collapse. The (red) lines starting out vertically at $\tau = 0$ trace the worldlines of selected dust particles, with the thick line marking the surface. The (black) dotted horizontal line shows a surface of constant proper time $\tau$ (where $\tau$ is measured by observers comoving with the dust), while the (black) dashed line sketches a surface of constant coordinate time $t$. Also included are two characteristics originating at the stellar surface and traveling inwards; one for the radiation field, and one for gauge perturbations (see text for details). } \label{Fig:OS} \end{figure} Oppenheimer-Snyder collapse describes the collapse from rest of a constant-density dust sphere to a black hole \cite{OppS39}.
An analytical solution for this collapse can be constructed by matching a closed-Friedmann solution for the stellar interior to a Schwarzschild solution for the exterior. Expressed in Gaussian normal coordinates, the interior line element is given by \begin{equation} \label{OS_g} ds^2 = - d\tau^2 + a^2(\tau) ( d \chi^2 + \sin^2 \chi d\Omega^2), \end{equation} where $0 \leq \chi \leq \chi_0$ is a radial coordinate that is comoving with the dust particles, $a(\tau)$ a scale factor, and $\tau$ the proper time as observed by observers comoving with the dust. It is also useful to define the conformal time $\eta$ according to $d\tau = a(\tau) \, d\eta$, in terms of which the scale factor $a$ can be expressed as \begin{equation} a = \frac{1}{2} a_m \left(1 + \cos \eta \right), \end{equation} and the proper time $\tau$ as \begin{equation} \tau = \frac{1}{2} a_m \left(\eta + \sin \eta \right) \end{equation} with $0 \leq \eta \leq \pi$. Matching this interior solution to a Schwarzschild exterior solution at the stellar surface results in relations for the initial scale factor $a_m = a(0)$, \begin{equation} a_m = \left( \frac{R_0^3}{2 M} \right)^{1/2} \end{equation} and the maximum value of the radial coordinate $\chi$, \begin{equation} \chi_0 = \sin^{-1} \left( \sqrt{2 M / R_0} \right). \end{equation} Here $R_0$ is the initial areal radius of the dust cloud and $M$ its gravitational mass (see also Section 1.4 in \cite{BauS10} as well as \cite{StaBBFS12}). The dust's rest-mass density $\rho_0 = u_a u_b T^{ab}$, where $u^a$ is the dust particles' four-velocity and $T^{ab}$ the stress-energy tensor, remains homogeneous on each slice of constant proper time $\tau$ and is given by \begin{equation} \label{rho_0_OS} \frac{\rho_0(\tau)}{\rho_0(0)} = \left( \frac{a_m}{a(\tau)} \right)^3. \end{equation} The initial rest-mass density $\rho_0(0)$ is related to the dust cloud's mass and initial radius by \begin{equation} M = \frac{4 \pi}{3} \rho_0(0) R_0^3. \end{equation} \subsubsection{Heated Oppenheimer-Snyder collapse: diffusion approximation} \label{sec:numerics_OS_diff} Given a suitable number of approximations, the radiation emerging from a ``heated'' Oppenheimer-Snyder collapse can be described analytically (see \cite{Sha89}). Specifically, the solution assumes that neither the spacetime nor the dust evolution is affected by the radiation field, that the radiation is in local thermal equilibrium everywhere (so that $4 \pi B = E = a_R T^4$), and that certain time derivatives can be neglected in an optically thick gas, so that equations (\ref{tau_dot}) and (\ref{S_dot}) can be combined to form a relativistic diffusion equation (see also Appendix A.1 in FLLS). Further assuming that the initial radiation energy density is constant, $E(0) = E_0$, and that the initial flux vanishes, the analytical solution is given by equations (3.23) and (3.25) in \cite{Sha89}. In \cite{Sha96}, this analytical solution was compared with an exact numerical solution of the Boltzmann equation (radiation transport equation) without approximation, including the exact boundary conditions at the surface; the two approaches showed very good agreement for the energy density $E(\tau)$ and the emergent flux, i.e.~the radiation momentum density $F^a$ evaluated on the stellar surface. However, the comparison focused on the post-transient behavior, after a few light-crossing times across the star.
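Evaluating these analytical expressions at the proper times $\tau$ recorded by, e.g., Lagrangian tracers (as in the comparisons below) requires inverting the parametric solution for the scale factor numerically. A minimal Python sketch (for illustration only; we assume SciPy's \texttt{brentq} root finder) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def os_scale_factor(tau, a_m):
    """Invert tau = a_m (eta + sin eta) / 2 for the conformal time eta,
    then return a = a_m (1 + cos eta) / 2; valid for 0 <= eta <= pi."""
    eta = brentq(lambda e: 0.5 * a_m * (e + np.sin(e)) - tau, 0.0, np.pi)
    return 0.5 * a_m * (1.0 + np.cos(eta))

# Example: rest-mass density from Eq. (rho_0_OS) for M = 1, R0 = 10 M
M, R0 = 1.0, 10.0
a_m = np.sqrt(R0**3 / (2.0 * M))
rho0_0 = 3.0 * M / (4.0 * np.pi * R0**3)
a = os_scale_factor(10.0, a_m)          # at proper time tau = 10 M
rho0 = rho0_0 * (a_m / a)**3
\end{verbatim}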
\begin{figure}[t] \includegraphics[width = 0.4 \textwidth]{Fig_5a} \includegraphics[width = 0.4 \textwidth]{Fig_5b} \includegraphics[width = 0.4 \textwidth]{Fig_5c} \caption{A comparison of numerical results and analytical expressions for the rest-mass density $\rho_0$ (top panel), the radiation energy density $E$ (middle panel), and the magnitude of the flux $F \equiv (F_a F^a)^{1/2}$ (bottom panel) for heated Oppenheimer-Snyder collapse with $R_0 = 10 M$ (see text for details). The markers represent individual Lagrangian fluid tracers (rather than grid points), while the solid lines represent the analytical solution in the diffusion approximation, computed from the proper times and areal radii recorded by the Lagrangian tracers.} \label{Fig:5} \end{figure} In Fig.~\ref{Fig:5} we show a comparison between our numerical results and the analytical diffusion approximation. For these simulations, we choose an initial areal radius $R_0 = 10 M$, we set up the initial data as described in \cite{StaBBFS12}, and we approximate dust as a fluid with a Gamma-law equation of state (\ref{gamma_law}) with $\Gamma = 1.001$ and with $P = 10^{-6} \rho_0$ initially. We also choose the initial radiation energy density to be $E(0) = E_0 = 10^{-5} \rho_0$, and impose local thermal equilibrium by setting $B = E / (4 \pi)$ as in the analytical solution of \cite{Sha89}, and set the initial flux $F^a$ to zero. With these choices the pressure is radiation dominated and has little influence on the dynamics ($P / \rho_0 \ll M/r$). We adopted $N_r = 2048$ equidistant radial grid points, extending to an exterior outer boundary at an isotropic radius of $r_{\rm max} = 24 M$, and evolved with moving-puncture coordinates, i.e.~1+log slicing for the lapse \cite{BonMSS95}, and a Gamma-driver condition for the shift \cite{ThiBB11,ThiBHBR11}. In the notation of Eq.~(39) in \cite{StaBBFS12} we chose the parameter $\mu_S$ appearing in the Gamma-driver condition according to $\mu_S = \alpha^2$. Since we restrict our analysis to the optically thick stellar interior, we impose radiation boundary conditions close to the stellar surface. For strictly outgoing isotropic emission at the stellar surface the boundary condition is $F = 0.5 E$, where $F$ is the magnitude of the flux \begin{equation} \label{flux_magnitude} F \equiv \left( F_a F^a \right)^{1/2} = \left( {\mathcal F}_i {\mathcal F}^i - {\mathcal F}^2 \right)^{1/2}. \end{equation} Since $E$ quickly plummets at the surface once the evolution is underway, we follow \cite{Sha89} and adopt the ``zero-temperature approximation'' for $E$ at the surface, i.e.~$E=0$. The flux is much smaller than $E$ everywhere in the interior, and we find that its behavior is insensitive to the precise boundary value we choose near and at the surface, provided it is kept small. In keeping with our zero-temperature approximation for $E$ we therefore set ${\mathcal F}^i = 0$ near the surface. We caution that our closure relation (\ref{closure}) does not provide a realistic prescription in the optically thin regions very close to the surface, but in the limit of arbitrarily large values of the opacity this region is infinitesimally thin geometrically. We have chosen the absorption opacity $\kappa^{\rm abs}$ so that the optical depth of the center is $\tau^{\rm abs} = \kappa^{\rm abs} \rho_0 R = 25$, and we set $\kappa^{\rm sc} = 0$. Note that the optical depth increases as $R^{-2}$ as the collapse proceeds.
We follow 25 Lagrangian fluid tracers, and record the fluid and radiation variables observed by these fluid particles together with their proper times $\tau$ and areal radii $R$. At selected instants of coordinate time $t$ we then plot these variables, and compare with the analytical solutions computed from $\tau$ and $R$. Both the analytical and numerical solutions remain valid even after the entire star is inside a black hole. We find very good agreement of our numerical results with the analytical expression (\ref{rho_0_OS}) for the rest-mass density $\rho_0$ (top panel in Fig.~\ref{Fig:5}), and -- consistent with the findings of \cite{Sha96} -- quite good agreement with the diffusion result for the radiation energy density $E$ (middle panel in Fig.~\ref{Fig:5}). Likewise, the magnitude of the flux $F$ (lower panel), which, unlike the components of $F^a$, is a scalar and can be compared directly, is in reasonably good agreement after the initial transient. Similar to the findings of \cite{Sha96}, the values on the surface, which determine the emergent flux, are quite close to the diffusion values, but during the initial transient the behavior is quite different in the stellar interior. The numerical data for the flux drop to very small values, probably dominated by numerical truncation error, even close to the surface, while the analytical solution follows an approximately exponential decay toward greater optical depths. In order to better understand these differences, and to illuminate some of the features of the numerical solution, we analyze the behavior of solutions to the dynamical radiation equations (\ref{tau_dot}) and (\ref{S_dot}) at early times. \subsubsection{Initial transient -- analytical treatment} \label{sec:numerics_OS_early_analytical} We will assume again that neither the spacetime nor the dust evolution are affected by the radiation field, so that both are still given by the expressions of Section \ref{sec:numerics_OS_OS}. Adopting the same Gaussian normal coordinates as used there we can identify the lapse $\alpha = 1$, the shift vector $\beta^i = 0$, and the spatial metric \begin{equation} \gamma_{ij} = a^2(\tau) \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \sin^2 \chi & 0 \\ 0 & 0 & \sin^2 \chi \, \sin^2 \theta \end{array} \right) \end{equation} from the line element (\ref{OS_g}). In these coordinates, slices of constant coordinate time $t$ coincide with slices of constant proper time $\tau$, and the mean curvature on these slices is given by \begin{equation} \label{K} K \equiv \gamma^{ij} K_{ij} = - \frac{3}{a} \, \frac{da}{d\tau}. \end{equation} We also note that, in these comoving coordinates, both normal observers and dust particles follow geodesics; therefore the normal vector $n^a$ on slices of constant $\tau$ must be aligned with the dust's four-velocity $u^a$, $n^a = u^a$. From (\ref{v_def}) we see that the dust's spatial velocity $v^a$ must therefore vanish in these coordinates, $v^a = 0$, and that $W = - n_a u^a = 1$. In order to evaluate the dynamical equations (\ref{tau_dot}) and (\ref{S_dot}) for heated Oppenheimer-Snyder collapse we first choose \begin{equation} \label{reference_OS} \hat \gamma_{ij} = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \sin^2 \chi & 0 \\ 0 & 0 & \sin^2 \chi \, \sin^2 \theta \end{array} \right) \end{equation} as our reference metric, so that \begin{equation} \sqrt{ \gamma / \hat \gamma } = a^3.
\end{equation} We note that this ratio depends on $\tau$ only; in particular it remains finite and non-zero at the center of the coordinate system, highlighting another advantage of the reference-metric formalism. Without using this formalism we would have encountered the term $\gamma^{1/2} = a^3 \sin^2 \chi \, \sin \theta$ instead, which vanishes at the origin, and displays a significantly more complicated dependence on the coordinates. Similarly, in spherical polar coordinate systems, $\gamma^{1/2}$ itself typically scales with $r^2 \sin \theta$ close to the origin. In the reference-metric formalism, this term can be canceled out by choosing the reference metric $\hat \gamma_{ij}$ to be the flat metric in spherical polar coordinates -- thereby avoiding the numerical issues associated with a vanishing determinant. This is essentially what we did in (\ref{reference_OS}). We also note that, from (\ref{u_dot_F_2}) with $v^a = 0$, we must have \begin{equation} {\mathcal F} = 0. \end{equation} We then have \begin{equation} \bar \tau = a^3 \bar \rho = a^3 E \end{equation} from (\ref{tau}) and (\ref{rhorad}), \begin{equation} f_{\bar \tau}^i = a^3 {\mathcal F}^i \end{equation} from (\ref{f_tau}), and \begin{equation} s_{\bar \tau} = \frac{a^3}{3} E \gamma^{ij} K_{ij} = - a^2 E \frac{da}{d\tau} \end{equation} from (\ref{s_tau}), where we have used (\ref{K}) in the last expression. Inserting the last three equations into the energy equation (\ref{tau_dot}) we obtain \begin{equation} \label{tau_dot_OS} \partial_\tau (a^4 E) + a \hat {\mathcal D}_i (a^3 {\mathcal F}^i ) = 0, \end{equation} where we have used $d\tau = dt$ in Gaussian normal coordinates. We similarly evaluate (\ref{S}), (\ref{S_flux}) and (\ref{s_S}) to find \begin{align} \bar S_i & = a^3 {\mathcal F}_i, \label{S_Reg_I} \\ (f_{\bar S})_i^{~j} & = \frac{a^3}{3} E \, \delta_i^{~j}, \\ (s_{\bar S})_i & = \frac{a^3}{6} E \, \gamma^{jk} \hat {\mathcal D}_i \gamma_{jk} = 0 \label{s_S_i} \end{align} (where we have used $\hat {\mathcal D}_i \gamma_{jk} = \hat {\mathcal D}_i (a^2 \hat \gamma_{jk}) = 0$ in the last expression), and insert these into the momentum equation (\ref{S_dot}) to find \begin{equation} \label{S_dot_OS} \partial_\tau (a^3 {\mathcal F}_i) + \frac{1}{3} \hat {\mathcal D}_j (a^3 E \delta_i^{~j} ) = - a^3 \rho_0 (\kappa^{\rm abs} + \kappa^{\rm sc}) {\mathcal F}_i. \end{equation} We briefly note that the term $(s_{\rm \bar S})_i$ in (\ref{s_S_i}) vanishes by virtue of the reference-metric formalism; without this formalism, the covariant derivative $\hat {\mathcal D}_i$ would appear as a partial derivative $\partial_i$ instead, and one would rely on the resulting non-zero terms to be canceled by new terms originating from the appearance of $\gamma^{1/2}$ rather than $(\gamma / \hat \gamma)^{1/2}$ in the divergence term on the left-hand side of (\ref{S_dot_OS}) (see also the discussion in Section III.E in MBM). Equations (\ref{tau_dot_OS}) and (\ref{S_dot_OS}) now form a pair of equations for the two primitive variables $E$ and ${\mathcal F}^i$. Combining the two equations, one can show that, as a consequence of our adopted closure relation (\ref{closure}), the characteristic speeds of the radiation field take the expected values $c_{\rm rad} = \pm \sqrt{1/3}$ as measured by a normal observer comoving with the matter. 
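To sketch the origin of this result, freeze the scale factor $a$ over a light-crossing time and drop the absorption term on the right-hand side of (\ref{S_dot_OS}). Taking a time derivative of (\ref{tau_dot_OS}), writing ${\mathcal F}^i = a^{-2} \hat \gamma^{ij} {\mathcal F}_j$, and eliminating $\partial_\tau {\mathcal F}_j$ with (\ref{S_dot_OS}) then yields
\begin{equation}
\partial_\tau^2 E \simeq \frac{1}{3 a^2} \, \hat \gamma^{ij} \hat {\mathcal D}_i \hat {\mathcal D}_j E,
\end{equation}
a wave equation with coordinate speed $(3 a^2)^{-1/2}$; since proper distance on a slice is $dl = a \, d\chi$, this corresponds to a proper propagation speed of $\pm \sqrt{1/3}$.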
In the schematic spacetime diagram of Fig.~\ref{Fig:OS} we include one such characteristic, originating at the stellar surface at the initial time and traveling towards the stellar center, as the (green) line labeled ``radiation char.''. Now consider as initial data a homogeneous radiation energy density $E(0) = E_0$ throughout the star, and zero flux ${\mathcal F}^i(0) = 0$ everywhere. By contrast, $E(0)$ is set equal to zero outside the star, which distinguishes the stellar surface. For these data, the spatial derivatives in equations (\ref{tau_dot_OS}) and (\ref{S_dot_OS}) vanish identically in the interior. In the domain of dependence of the interior initial data, the flux will then remain zero, ${\mathcal F}^i(\tau) = 0$, while the energy equation (\ref{tau_dot_OS}) is solved by adiabatic heating \begin{equation} \label{E_OS_early} \frac{E(\tau)}{E_0} = \left( \frac{a_m}{a(\tau)} \right)^4, \end{equation} as one might have expected. In the spacetime diagram of Fig.~\ref{Fig:OS}, the domain of dependence of the interior initial data is given by the area marked as Regions I and II, below the radiation characteristic originating from the surface. In these two regions, the analytical solution to the radiation equations is given by (\ref{E_OS_early}) together with ${\mathcal F}^i = 0$ {\em in any coordinate system}. Only in Region III can the radiation field approach the diffusive analytical solution of \cite{Sha89,Sha96}. This reflects the difference between the full transport equations, which are hyperbolic, and the diffusion approximation, which is parabolic. \subsubsection{Initial transient -- numerical results} \label{sec:numerics_OS_early_numerical} \begin{figure}[t] \includegraphics[width = 0.45 \textwidth]{Fig_6} \caption{The lapse $\alpha$ and the dust velocity $v^r$ at coordinate time $t = 5.52 M$ (compare the (green) circled data in Fig.~\ref{Fig:5}). Regions I, II, and III are labeled as in the schematic spacetime diagram of Fig.~\ref{Fig:OS}. Note that $\alpha$ does not depend on $R$, and $v^r = 0$, in Region I.} \label{Fig:6} \end{figure} Numerical codes usually do not adopt Gaussian coordinates, however; instead, a common choice is moving-puncture coordinates with 1+log slicing \cite{BonMSS95}, \begin{equation} \label{1+log} (\partial_t - \beta^i \partial_i) \alpha = - 2 \alpha K. \end{equation} The properties of Oppenheimer-Snyder collapse as rendered in 1+log slicing with initial condition $\alpha(0) = 1$ were analyzed by \cite{StaBBFS12}; in particular, the authors pointed out the existence of a gauge characteristic with characteristic speed $c_{\rm gauge} = \pm \sqrt{2 / \alpha}$ as measured by a normal observer. In Fig.~\ref{Fig:OS}, the (blue) gauge characteristic labeled ``gauge char.'' originating from the surface at the initial time and propagating toward the center separates Region I from Region II. As pointed out by \cite{StaBBFS12}, the lapse remains spatially constant in Region I, and takes the value \begin{equation} \label{lapse} \alpha = 1 + 6 \ln\left( a(\tau) / a_m \right) \end{equation} there, while outside of Region I the lapse also depends on space. In the top panel of Fig.~\ref{Fig:6} we show a snapshot at $t = 5.52 M$ (corresponding to the data shown as the (green) circles in Fig.~\ref{Fig:5}). In Region I, where the lapse remains spatially constant, slices of constant coordinate time $t$ will coincide with slices of constant proper time $\tau$, as sketched in the schematic spacetime diagram of Fig.~\ref{Fig:OS}.
Outside of Region I, however, where the lapse is no longer spatially constant, slices of constant coordinate time depart from those of constant proper time (the dashed and dotted lines, respectively, in Fig.~\ref{Fig:OS}). Furthermore, the normal vector $n^a$ is still aligned with the dust's four-velocity $u^a$ in Region I, so that we still have $v^a = 0$ in this region, as shown in the bottom panel of Fig.~\ref{Fig:6}. \begin{figure}[t] \includegraphics[width = 0.45 \textwidth]{Fig_7} \caption{Same as Fig.~\ref{Fig:6} but for the primitive radiation variables $E$ (top panel) and ${\mathcal F}^r$ (bottom panel). The inset shows the small increase in $E$ towards larger radii in Region II.} \label{Fig:7} \end{figure} \begin{figure}[t] \includegraphics[width = 0.45 \textwidth]{Fig_8} \caption{Same as Fig.~\ref{Fig:6} but for the conserved radiation variables $\bar \tau$ (top panel) and $\bar S_r$ (bottom panel) in the stellar interior. } \label{Fig:8} \end{figure} We can now discuss the consequences of these coordinate properties for the radiation quantities. In Fig.~\ref{Fig:7} we show the primitive radiation energy density $E$ and flux ${\mathcal F}^r$. As expected, $E$ is constant in Region I, according to (\ref{E_OS_early}) together with the observation that slices of constant $t$ and $\tau$ coincide in this region. The latter is not the case in Region II; since a constant coordinate time $t$ corresponds to a later proper time $\tau$ at larger radius, the energy density $E$ slightly increases outwards in Region II (shown in the inset), before dropping significantly more rapidly in Region III, which has come into causal contact with the stellar surface. We also see that the flux ${\mathcal F}^r$ is non-zero in Region III, but very close to zero in Regions I and II, as we would expect. Note, however, that ${\mathcal F}^r$ appears to be affected by significantly more numerical error in Region II than in Region I. This behavior can be understood in terms of the conserved radiation quantities, shown in Fig.~\ref{Fig:8}. As we discussed above, we have (up to numerical error from the hydrodynamical evolution and recovery) $v^r = 0$ in Region I (see bottom panel in Fig.~\ref{Fig:6}); we also expect ${\mathcal F}^r = 0$ and ${\mathcal F} = 0$ in this region. According to equation (\ref{S_Reg_I}) this implies $\bar S_i = 0$ in Region I, consistent with our numerical results shown in the bottom panel of Fig.~\ref{Fig:8}. We therefore expect that all terms on the right-hand side of equation (\ref{f_recovery}) will be small, and that we will hence be able to obtain the analytical solution ${\mathcal F}^i = 0$ to high accuracy. Outside of Region I, however, the lapse is no longer spatially constant, which, as we discussed above, results in non-zero velocities $v^r$ (see the bottom panel of Fig.~\ref{Fig:6}). By the same token, this results in non-zero values for $\bar S_i$ (see the bottom panel of Fig.~\ref{Fig:8}). Solving the recovery equation (\ref{f_recovery}) in Region II, we see that we now compute a (vanishingly) small quantity ${\mathcal F}^i$ from differences between non-zero quantities; evidently, this will lead to larger numerical error than in Region I, as we observed in the bottom panel of Fig.~\ref{Fig:7}.
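This loss of precision is the generic floating-point cancellation effect; the following toy Python snippet (with made-up numbers, unrelated to our actual code) illustrates how recovering a small flux as the difference of two large, nearly equal terms amplifies the relative error:
\begin{verbatim}
big, small_true = 1.0, 1.0e-12              # |calF| << E-sized terms
small_recovered = (big + small_true) - big  # nearly equal numbers
rel_err = abs(small_recovered - small_true) / small_true
print(rel_err)   # of order 1e-4 in double precision, not 1e-16
\end{verbatim}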
Also note that $\bar S_r$ changes sign at around $R = 8.25 M$ in Region III; in the outer part we have $\bar S_r > 0$, reflecting an outward flux close to the surface, while in the inner parts, at larger optical depths, our normal observers see the radiation being dragged inward by the collapsing matter, so that $\bar S_r < 0$. The behavior shown for ${\mathcal F}^r$ in Fig.~\ref{Fig:7} can also be seen for $F = (F_a F^a)^{1/2}$ at early times in Fig.~\ref{Fig:5}. For $t = 0.92 M$ and $t = 5.52 M$, we can clearly distinguish Regions I, II and III. At later times, following the initial transient, both the gauge and radiation characteristics have reached the center, the entire star is now in Region III, and the radiation solution starts to approximate closely that described by the solution to the diffusion equation. The diffusion approximation, in turn, agrees quite well with the exact numerical solution of the Boltzmann equation \cite{Sha96} after the initial transient, even when exact boundary conditions are incorporated at the stellar surface. We point out that the heated Oppenheimer-Snyder collapse problem we probed here is specifically designed to highlight the difference between an exact hyperbolic and an approximate radiation diffusion (parabolic) treatment. In particular, by adopting a very compact initial configuration ($R_0 / M = 10$) and matter that undergoes free-fall collapse at nearly the speed of light, the transient phase, during which the two approaches differ, takes up a non-negligible fraction of the total collapse time. For more realistic scenarios the transient phase, which only lasts a few light travel times across the initial star, represents an insignificant fraction of the total evolution and radiative transport time. \section{Summary} \label{sec:summary} We adopt a two-moment approximation together with a reference-metric formalism to bring the moment equations of relativistic radiation transfer into a form that is well suited for numerical implementations in curvilinear coordinates. While curvilinear coordinates can be very effective in taking advantage of either exact or approximate symmetries, they also introduce coordinate singularities that can be problematic in numerical implementations. One approach is to treat all singular terms analytically, and the reference-metric formalism provides a framework that allows such a treatment. In this paper we derive the equations governing the radiation fields within this formalism, resulting in Eq.~(\ref{tau_dot}) for the radiation energy density and Eq.~(\ref{S_dot}) for the radiation momentum density, or flux. In contrast to many previous treatments we also employ a systematic 3+1 decomposition of the radiation fields. We focus here on the optically-thick regime and adopt an Eddington factor of 1/3 (see Eq.~\ref{closure}), together with a gray (frequency-independent) opacity, but both restrictions can be relaxed. The equations for the radiation fields take a form that is very similar to the corresponding equations of hydrodynamics; an existing relativistic hydrodynamics code can therefore be augmented to treat radiation as well by incorporating the radiation equations into the hydrodynamics algorithm. We implement these equations in a code that adopts spherical polar coordinates, and, as numerical demonstrations, present results for two test problems.
Specifically, we consider stationary planar solutions in flat spacetime, both continuous and with shocks, for which semi-analytical solutions can be constructed by solving ordinary differential equations, as well as heated Oppenheimer-Snyder collapse, for which we carefully analyze the transition from an early transient to a post-transient phase that is well approximated by an analytically known relativistic diffusion solution. Many astrophysical objects and processes -- including single stars, remnants of neutron star mergers or supernova collapse, and accretion onto black holes -- display at least an approximate symmetry. Taking advantage of these symmetries as effectively as possible usually entails adopting curvilinear coordinates, for example spherical polar or cylindrical coordinates. Even when the matter fields lack symmetry, the radiation propagates radially at large distances, where it is measured. The formalism presented in this paper provides one approach for such simulations, and we hope that it will prove useful for the modeling of radiation transport (EM and/or neutrinos) in a number of interesting and important astrophysics scenarios, including the above. \acknowledgments It is a pleasure to thank Yuk Tung Liu for helpful conversations. This work was supported by National Science Foundation (NSF) grants PHY-1707526 and PHY-2010394 to Bowdoin College, NSF grants PHY-166221 and PHY-2006066 and National Aeronautics and Space Administration (NASA) grant 80NSSC17K0070 to the University of Illinois at Urbana-Champaign, and through sabbatical support from the Simons Foundation (Grant No.~561147 to TWB).
\section{Introduction} \label{sec:intro} The next generation of galaxy surveys will provide a detailed chart of cosmic structure and its growth on our cosmic light cone. These include the {\it Euclid} space telescope \citep{2011arXiv1110.3193L,2019arXiv191009273E}, the Dark Energy Spectroscopic Instrument (DESI) \citep{2016arXiv161100036D,2016arXiv161100037D}, the Rubin Observatory Legacy Survey of Space and Time (LSST) \citep{2019ApJ...873..111I,2009arXiv0912.0201L,2018arXiv180901669T}, the Square Kilometre Array (SKA) \citep{2015MNRAS.450.2251Y,2020PASA...37....7S}, the Wide Field InfraRed Survey Telescope (WFIRST) \citep{2015arXiv150303757S}, the Subaru Hyper Suprime-Cam (HSC) and Prime Focus Spectrograph (PFS) surveys \citep{2018PASJ...70S...4A,2016SPIE.9908E..1MT} and the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx) \citep{2014arXiv1412.4872D,2018arXiv180505489D}. These data sets will provide unprecedented statistical power to constrain the initial perturbations, the growth of cosmic structure, and the cosmic expansion history. To access this information requires accurate theoretical models of large-scale structure statistics, such as power spectra and bispectra. While analytical work, such as standard perturbation theory \citep[SPT,][]{Jain:1993jh,1986ApJ...311....6G}, Lagrangian perturbation theory \citep[LPT,][]{Bouchet:1994xp,Matsubara:2007wj}, renormalised perturbation theory \citep{Crocce:2005xy} and effective field theory~\citep[EFT,][]{Carrasco:2012cv,Vlah:2015sea,2016arXiv161009321P}, has made great strides (see also \citet{Bernardeau:2001qr,Desjacques:2016bnm} for reviews), the reference models for large-scale structure are based on computationally intensive $N$-body simulations that compute the complex non-linear regime of structure growth. In recent years, the BACCO simulation project \citep{2020arXiv200406245A}, the Outer Rim Simulation \citep{2019ApJS..245...16H}, the Aemulus project I \citep{DeRose_2019}, the ABACUS Cosmos suite \citep{2018ApJS..236...43G}, the Dark Sky Simulations \citep{2014arXiv1407.2600S}, the MICE Grand Challenge \citep[MICE-GC,][]{10.1093/mnras/stv1708}, the Coyote Universe I \citep{2010ApJ...715..104H} and the Uchuu simulations \citep{2020arXiv200714720I}, among others, involved the generation of expensive $N$-body simulations. While analytical methods compute expectation values of large-scale structure statistics, a simulation generates a single realisation and its output therefore suffers from sample variance. Reducing this variance to a point where it is subdominant to the observational error therefore requires running ensembles of simulations. Computational cosmologists have been tackling the challenge of optimising $N$-body codes and gravity solvers for ever larger numbers of particles. Widely used codes include the parallel Tree Particle-Mesh (TreePM or TPM) codes \texttt{GADGET-II} by \citet{2005MNRAS.364.1105S} and \texttt{GreeM} by \citet{2009PASJ...61.1319I}, the adaptive treecode \texttt{2HOT} by \citet{2013arXiv1310.4502W}, the GPU-accelerated \texttt{ABACUS} code originated from \citet{phdAbacus}, the Hardware/Hybrid Accelerated Cosmology Code (\texttt{HACC}) developed by \citet{2016NewA...42...49H} and the distributed-memory and GPU-accelerated \texttt{PKDGRAV3}, based on Fast Multipole Methods and adaptive particle timesteps, from \citet{2017ComAC...4....2P}.
The memory and CPU time requirements of such computations are a bottleneck for future work on new-generation cosmological data sets. As an example, the 43,100 runs in the \textit{Quijote} simulations from \cite{villaescusanavarro2019quijote}, whose data outputs are public and used in this paper, required $35$ million CPU-core-hours. The search for solutions has led to alternative, fast and approximate ways to generate predictions for large-scale structure statistics. The COmoving Lagrangian Acceleration (COLA) solver of \citet{Tassev_2013} is a PM code that solves the particle equations of motion in an accelerated frame given by LPT. Particles are nearly at rest in this frame for much of the mildly non-linear regime. As a consequence, much larger timesteps can be taken, leading to significant time savings. The $N$-body solver \texttt{F\textsubscript{AST}PM} of \citet{2016MNRAS.463.2273F} operates on a similar principle, using modified kick and drift factors to enforce the Zel'dovich approximation in the mildly non-linear regime. The spatial COLA (sCOLA) scheme \citep{2015arXiv150207751T} extends the idea of using LPT to guide the solution in the spatial domain. \citet{2020arXiv200304925L} have carefully examined and implemented these ideas to allow splitting large $N$-body simulations into many perfectly parallel, independently evolving small simulations. In a different family of approaches, but still using LPT, \citet{2013MNRAS.433.2389M} proposed a parallelised implementation of the PINpointing Orbit Crossing-Collapsed HIerarchical Objects (PINOCCHIO) algorithm from \citet{pino}. \citet{2015MNRAS.446.2621C} developed a physically motivated enhancement of the Zel'dovich approximation called EZmocks. Recently, so-called {\it emulators} have been of great interest: they predict statistics in the non-linear regime based on a generic mathematical model whose parameters are trained on simulation suites covering a range of cosmological parameters. An emulator is trained by \citet{2020arXiv200406245A} on the BACCO simulations; similarly, the Aemulus project contributions II and III \citep{McClintock_2019,Zhai_2019} respectively construct an emulator for the halo mass function and the galaxy correlation function using the Aemulus I suite \citep{DeRose_2019}. Not only do emulators that map cosmological parameters to certain outputs need large numbers of simulations for training, they also do not guarantee unbiased results with respect to full simulation codes, especially outside the parameter range used during training. Recent advances in deep learning have allowed training emulators that reproduce particle positions or density fields starting from initial conditions, therefore essentially emulating the full effect of a low-resolution cosmological $N$-body code---these include the Deep Density Displacement Model ($\mathrm{D}^3\mathrm{M}$) of \citet{He_2019} stemming from the U-NET architecture \citep{2015arXiv150504597R}. \citet{Kodi_Ramanah_2020} describe a complementary deep learning tool that increases the mass and spatial resolution of low-resolution $N$-body simulations using a variant of Generative Adversarial Networks \citep[GAN,][]{2014arXiv1406.2661G}. None of these fast approximate solutions exactly reproduces the results of more computationally intensive codes. They trade computational accuracy for computational speed, especially in the non-linear regime.
In this vein, the recent series of papers \citet{2019MNRAS.482.1786L}, \citet{2019MNRAS.485.2806B} and \citet{2019MNRAS.482.4883C} compare the covariance matrices of clustering statistics given by several low-fidelity methods to those of full $N$-body codes and find statistical biases of up to $20\%$ in the parameter uncertainties. A different approach to this problem is to reduce the stochasticity of the initial conditions, thereby modifying the statistics of the observables in such a way as to reduce sample variance. This is the spirit of the method of fixed fields invented and first explored by \citet{2016PhRvD..93j3519P} and \citet{2016MNRAS.462L...1A}. While this approach does not guarantee that any given statistic will be unbiased, the numerical study by \citet{Villaescusa_Navarro_2018} showed that ``fixing'' succeeds in reducing variance for several statistics of interest with no detectable bias when compared to an ensemble of hundreds of full simulations, and at no additional cost relative to regular simulations. Still, it is clear that other statistics must necessarily be biased, for example, the square of any variance-reduced statistic, such as four-point functions. In this paper, we show that it is possible to get the best of both worlds: the speed of fast surrogates \textit{and} the guarantee of full-simulation accuracy.\footnote{As a jargon reminder, the accuracy and precision of an estimate refer, respectively, to the trueness of its expectation (in terms of the statistical bias) and the confidence in the expectation (standard errors, confidence intervals).} We take inspiration from \textit{control variates}, a classical variance reduction technique (see \citet{Lavenberg1981APO} for a review, and \citet{GORODETSKY2020109257} and \citet{doi:10.1137/15M1046472} for related recent applications), to devise a way to combine fast but approximate simulations (or \textit{surrogates}) with computationally intensive accurate simulations to vastly accelerate convergence while \textit{guaranteeing} arbitrarily small bias with respect to the full simulation code. We call this Convergence Acceleration by Regression and Pooling (CARPool).\footnote{We will consider surrogates to be much faster than simulations, so that we only need to consider the number of simulations to evaluate computational cost.} The paper is organised as follows. In Section~\ref{sec:2sec}, we explore the theory of univariate and multivariate estimation with control variates and highlight some differences in our setting for cosmological simulations. In Section~\ref{sec:3sec}, we briefly discuss both the $N$-body simulation suite and our choice of fast surrogates we use in the numerical experiments presented in Section~\ref{sec:results}. We conclude in Section~\ref{sec:conclusions}. Table~\ref{table:notations} lists mathematical notation and definitions used throughout this paper.
\begin{table} \centering \caption{Mathematical notation and definitions} \begin{tabular}{| m{12em} | m{15em}|} \hline Notation & Description \\ \hline \hline $\mathcal{S}_{N} = \left\{ r_1, \dots, r_N \right\}$ & Set of $N$ random seeds $r_n$ of probability space \\ \hline $\boldsymbol{y}(r_n)\equiv\boldsymbol{y}_n$ & Random column vector of size $p$ at seed $r_n$\\ \hline $\mathbb{E}\left[ \boldsymbol{y} \right]\equiv\boldsymbol{\mu_y}$ & Expectation value of random vector $\boldsymbol{y}$ \\ \hline $\llbracket m,n \rrbracket$ & Set of integers from $m$ to $n$\\ \hline $\boldsymbol{M}^{\boldsymbol{T}}$ & Transpose of real matrix $\boldsymbol{M}$ \\ \hline $\boldsymbol{M}^{\boldsymbol{\dagger}}$ & Moore-Penrose pseudo-inverse of matrix $\boldsymbol{M}$ \\ \hline $\det\left( \boldsymbol{M}\right)$ & Determinant of matrix $\boldsymbol{M}$ \\ \hline $\mathbb{E} \left[\left( \boldsymbol{x} - \mathbb{E} \left[ \boldsymbol{x}\right]\right) \left(\boldsymbol{x} - \mathbb{E} \left[\boldsymbol{x}\right]\right)^{\boldsymbol{T}} \right]$ $\equiv \boldsymbol{\Sigma_{\boldsymbol{xx}}}$ & Variance-covariance matrix of random vector $\boldsymbol{x}$\\ \hline $\mathbb{E} \left[ \left( \boldsymbol{y} - \mathbb{E} \left[ \boldsymbol{y} \right] \right) \left( \boldsymbol{x} - \mathbb{E} \left[ \boldsymbol{x}\right] \right) ^{\boldsymbol{T}} \right]$ $\equiv \boldsymbol{\Sigma_{\boldsymbol{yx}}}$ & Cross-covariance matrix of random vectors $\boldsymbol{y}$ and $\boldsymbol{x}$\\ \hline $\sigma_{y}^2$ & Variance of scalar random variable $y$ \\ \hline $\boldsymbol{0}_{p,q}$ and $\boldsymbol{0}_p$ & Null matrix in $\mathbb{R}^{p \times q}$ and null vector in $\mathbb{R}^{p}$\\ \hline $\boldsymbol{I}_p$ & Square $p \times p$ identity matrix \\ \hline \end{tabular} \label{table:notations} \end{table} \section{Methods}\label{sec:2sec} Let us consider a set of observables $y_i$ we would like to model (e.g., power spectrum or bispectrum bins) and collect them into a vector $\boldsymbol{y}$. The standard estimate of the theoretical expectation of $\boldsymbol y$, $\mathbb{E}\left[ \boldsymbol{y}\right] =\boldsymbol{\mu}$, from a set of \textit{independent and identically distributed} realisations $\boldsymbol{y}_n$, $n=1,\dots N$, is the \textit{sample mean} \begin{equation} \boldsymbol{\bar{y}}=\frac{1}{N}\sum_{n=1}^{N} \boldsymbol{y}_n. \end{equation} Then the standard deviation $\sigma_{i}$ of each element $\bar{y}_i$ decreases as $\mathcal{O}(N^{-\frac{1}{2}})$, under mild regularity conditions (principally that $\sigma_{i}$ exists). Our goal is to find a more precise---i.e. lower-variance---and unbiased estimator of $\mathbb{E}\left[ \boldsymbol{y}\right]$ with a much smaller number of simulations $\boldsymbol{y}_n$. The means by which we achieve this is to construct another set of quantities that are fast to compute such that 1) their means are small enough to be negligible, and 2) their errors are anti-correlated with the errors in the $\boldsymbol{y}_n$,\footnote{The intuition behind this principle is that for two random scalars $a$ and $b$, we have $\sigma_{a+b}^2 = \sigma_{a}^2 + \sigma_{b}^2+2\mathrm{cov}(a,b)$.} and add some multiple of these to $\boldsymbol{\bar{y}}$ to cancel some of the error in the $y_n$. This is the \textit{control variates} principle. \subsection{Theoretical framework} In what follows we will use the word \textit{simulation} to refer to costly high-fidelity runs and \textit{surrogate} for fast but low-fidelity runs. 
\subsubsection{Introduction with the scalar case}\label{sec:unicv} Let us consider a scalar simulated observable $y$, such that $\mathbb{E} \left[ y \right] = \mu $, and a surrogate $c$ of $y$ with $\mathbb{E} \left[ c \right] = \mu_c$. Note that $\mu_c\ne\mu$ in general. For any $\beta \in \mathbb{R}$, the quantity \begin{equation}\label{eq:scalarCV} x(\beta) = y - \beta \left( c - \mu_c \right) \end{equation} is an unbiased estimator of $\mu$ by construction. The optimal value for $\beta$ is determined by minimising the variance of the new estimator, \begin{equation}\label{eq:varScal} \sigma_{x(\beta)}^2 = \beta^2 \sigma_c^2 - 2\beta \mathrm{cov}(y, c) + \sigma_y^2\,. \end{equation} The second-order polynomial \eqref{eq:varScal} has a unique minimiser, \begin{equation}\label{eq:betaStar} \beta^{\star} =\argmin_{\beta \in \mathbb{R}} \sigma_{x(\beta)}^2= \frac{\mathrm{cov}(y,c)}{\sigma_c^2}\,. \end{equation} Plugging equation \eqref{eq:betaStar} into equation \eqref{eq:varScal} allows us to express the variance reduction ratio of control variates as \begin{equation} \frac{\sigma_{x(\beta^{\star})}^2}{\sigma_y^2} = 1 - \rho_{y,c}^2\,, \end{equation} with $\rho_{y,c}$ the Pearson correlation coefficient between $y$ and $c$. The latter result shows that no matter how biased the surrogate $c$ might be, the more correlated it is with the simulation $y$, the better the variance reduction. For the classical control variates method, the choice of $c$ is restricted to cases where $\mu_c$ and $\beta$ are known \textit{a priori}. In Section \ref{sec:InPractice} below, we will consider the more general case, typically encountered in practice, where $\beta$ is not known and we must estimate it from data. \subsubsection{Multivariate control variates} \label{sec:mvcv} Let $\boldsymbol{y}$ be an unbiased and costly simulation statistic of expectation ${\boldsymbol{\mu} \in \mathbb{R}^p}$, and $\boldsymbol{c}$ an approximate realisation with $\mathbb{E} \left[ \boldsymbol{c} \right] = \boldsymbol{\mu_c} \in \mathbb{R}^q$. Similarly to the scalar case, for any $\boldsymbol{\beta} \in \mathbb{R}^{p \times q}$ the control variates estimator is \begin{equation}\label{eq:mvCV} \boldsymbol{x}(\boldsymbol{\beta}) = \boldsymbol{y} - \boldsymbol{\beta} \left( \boldsymbol{c} - \boldsymbol{\mu_c} \right)\,. \end{equation} $\boldsymbol{\Sigma_{xx}}$, the covariance matrix of the random vector $\boldsymbol{x(\beta)}$, is expressed as a function of $\boldsymbol{\beta}$, \begin{equation}\label{eq:varMV} \boldsymbol{\Sigma_{xx}}(\boldsymbol{\beta}) = \boldsymbol{\beta}\boldsymbol{\Sigma_{cc}} \boldsymbol{\beta^T} - \boldsymbol{\beta}\boldsymbol{\Sigma_{yc}^T} - \boldsymbol{\Sigma_{yc}} \boldsymbol{\beta^T} + \boldsymbol{\Sigma_{yy}}\,. \end{equation} Optimising variance reduction here means minimising the confidence region associated with $\mathbb{E}\left[ \boldsymbol{x}(\boldsymbol{\beta})\right]$, represented by the generalised variance $\det\left(\boldsymbol{\Sigma_{xx}}(\boldsymbol{\beta})\right)$. Appendix \ref{app:bayesDer} presents a Bayesian solution to a more general version of this optimisation problem. Here we present an outline of the derivation in \citet{DEOPORTANOVA199380} and \citet{VENKATRAMAN198637}. The course by \citet{LectureCCA} provides an overview of canonical correlation analysis, which is used in the derivation. The oriented volume of the $p$-dimensional parallelepiped spanned by the columns of $\boldsymbol{\Sigma_{xx}(\beta)}$ is minimised, as the analogue of an error bar in the univariate case.
\citet{10.1287/opre.33.3.661} proved that \begin{equation}\label{eq:betaStarMV} \boldsymbol{\beta^{\star}} = \argmin_{\boldsymbol{\beta} \in \mathbb{R}^{p \times q}} \det \left(\boldsymbol{\Sigma_{xx}}(\boldsymbol{\beta})\right) = \boldsymbol{\Sigma_{yc}}\boldsymbol{\Sigma_{cc}^{-1}}\,. \end{equation} Combining equations \eqref{eq:betaStarMV} and \eqref{eq:mvCV} gives the generalised variance reduction \begin{equation} \label{eq:mvVarReduc} \begin{aligned} \frac{\det \left(\boldsymbol{\Sigma_{xx}}(\boldsymbol{\beta})\right)}{\det \left( \boldsymbol{\Sigma_{yy}} \right)} &= \frac{\det \left( \boldsymbol{\Sigma_{yy}} \left( \boldsymbol{I}_{p} - \boldsymbol{\Sigma_{yy}^{-1}} \boldsymbol{\Sigma_{yc}}\boldsymbol{\Sigma_{cc}^{-1}}\boldsymbol{\Sigma_{yc}^T} \right) \right)}{\det \left( \boldsymbol{\Sigma_{yy}} \right)} \\ &= \prod_{n=1}^{s = \mathrm{rank}\left( \boldsymbol{\Sigma_{yc}} \right)} \left( 1 - \lambda_n^2 \right)\,, \end{aligned} \end{equation} where the scalars $\lambda_1^2 \geq \lambda_2^2 \geq \dots \geq \lambda_s^2 \geq 0 $ are the eigenvalues of $ \boldsymbol{\Sigma_{yy}^{-1}} \boldsymbol{\Sigma_{yc}}\boldsymbol{\Sigma_{cc}^{-1}}\boldsymbol{\Sigma_{yc}^T}$, whose square roots are the canonical correlations between $\boldsymbol{y}$ and $\boldsymbol{c}$. More precisely, $\lambda_1$ is the maximum obtainable cross-correlation between any linear combinations $\boldsymbol{u_1^T}\boldsymbol{y}$ and $\boldsymbol{v_1^T}\boldsymbol{c}$, \begin{equation}\label{eq:cca} \displaystyle \lambda_1 = \max_{\boldsymbol{u_1} \in \mathbb{R}^p, \boldsymbol{v_1} \in \mathbb{R}^q} \frac{\boldsymbol{u_1^T}\boldsymbol{\Sigma_{yc}}\boldsymbol{v_1}}{\sqrt{\boldsymbol{u_1^T}\boldsymbol{\Sigma_{yy}}\boldsymbol{u_1}} \sqrt{\boldsymbol{v_1^T}\boldsymbol{\Sigma_{cc}}\boldsymbol{v_1}}}\,, \end{equation} and $\left\{\lambda_n ; n \leq s \right\}$ are found recursively with the constraint of uncorrelatedness between $\left\{ \boldsymbol{u_n^T y}, \boldsymbol{v_n^T c} \right\}$ and $\left\{ \boldsymbol{u_1^T y}, \boldsymbol{v_1^T c}, \dots, \boldsymbol{u_{n-1}^T y}, \boldsymbol{v_{n-1}^T c} \right\}$. At the end, we have two bases for the transformed vectors $\boldsymbol{u}=\left[ \boldsymbol{u_1^Ty}, \dots, \boldsymbol{u_s^Ty}\right]^{\boldsymbol{T}}$ and $\boldsymbol{v}=\left[ \boldsymbol{v_1^Tc}, \dots, \boldsymbol{v_s^Tc}\right]^{\boldsymbol{T}}$ in which their cross-covariance matrix is diagonal, i.e. $\boldsymbol{\Sigma_{uv}}=\mathrm{diag}\left( \lambda_1,\dots, \lambda_s \right)$. \subsection{Estimation in practice}\label{sec:InPractice} In this section, we examine practical implications of the control variates implementation when the optimal control matrix $\boldsymbol{\beta}$ (or coefficients) and the mean of the cheap estimator $\boldsymbol{\mu_c}$ are unknown. We will consider an online approach in order to improve the estimates of \eqref{eq:betaStar} or \eqref{eq:betaStarMV} as simulations and surrogates are computed. Estimating $\boldsymbol{\mu_c}$ is done through an inexpensive pre-computation step that consists in running fast surrogates. From now on, to differentiate our use of the control variates principle and its application to cosmological simulations from the theory presented above, we will refer to it as the CARPool technique. For the purposes of this paper, we will take as our goal to produce low-variance estimates of expectation values of full simulation observables. When we discuss model error, it is therefore only relative to the full simulation.
From an absolute point of view the accuracy of the full simulation depends on a number of factors such as particle number, force resolution, timestepping, inclusion of physical effects, \textit{et cetera.} The numerical examples of full simulations we give are not selected for their unmatched accuracy, but for the availability of a large ensemble that we can use to validate the CARPool results. \subsubsection{Estimation of $\boldsymbol{\mu_c}$} In the textbook control variates setting, the surrogate mean $\boldsymbol{\mu_c}$, a crude approximation of $\boldsymbol{\mu}$, is assumed to be known. There is no reason for this to be the case in the context of cosmological simulations, thus we compute $\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}$ with surrogate samples drawn on a separate set of seeds $\mathcal{S}_{M} = \left\{ r_{1}, \dots, r_{M} \right\}$ ($\mathcal{S}_{N} \cap \mathcal{S}_{M} = \emptyset$, where $\mathcal{S}_{N}$ is the set of initial conditions of simulations). What is then the additional variance-covariance of the control variates estimate stemming from the estimation of $\boldsymbol{\mu_c}$? First, write each cheap-estimator realisation as $\boldsymbol{c}=\boldsymbol{\mu_c} + \boldsymbol{\delta}$, with $\mathbb{E}\left[ \boldsymbol{\delta} \right] = \boldsymbol{0}_q$, \begin{equation}\label{eq:errMuc} \begin{aligned} \bar{\boldsymbol{\mu}}_{\boldsymbol{c}} &= \boldsymbol{\mu_c} + \frac{1}{M}\sum_{i=1}^M \boldsymbol{\delta}_i\,, \\ \boldsymbol{\Sigma}_{\boldsymbol{\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}}\boldsymbol{\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}}} &= \boldsymbol{\Sigma}_{\boldsymbol{\bar{\delta}}\boldsymbol{\bar{\delta}}} = \frac{1}{M}\boldsymbol{\Sigma_{cc}}\,. \end{aligned} \end{equation} Replacing $\boldsymbol{\mu_c}$ by $\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}$ in equations~\eqref{eq:sampleCV} and \eqref{eq:varMV} results in \begin{equation} \begin{aligned} \boldsymbol{\bar{x}}(\boldsymbol{\hat{\beta}}, \boldsymbol{\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}}) &= \boldsymbol{\bar{y}} - \boldsymbol{\hat{\beta}} \left( \boldsymbol{\bar{c}} - \boldsymbol{\mu_c} \right) + \boldsymbol{\hat{\beta}}\boldsymbol{\bar{\delta}}\,, \\ \boldsymbol{\Sigma_{x(\hat{\beta},\bar{\boldsymbol{\mu}}_{\boldsymbol{c}})x(\hat{\beta}, \bar{\boldsymbol{\mu}}_{\boldsymbol{c}})}} &= \boldsymbol{\Sigma_{x(\hat{\beta})x(\hat{\beta})}} + \boldsymbol{\hat{\beta}} \frac{\boldsymbol{\Sigma_{cc}}}{M}\boldsymbol{\hat{\beta}}^{\boldsymbol{T}}\,. \end{aligned} \end{equation} The $\boldsymbol{\hat{\beta}}\boldsymbol{\bar{\delta}}$ term above is statistically independent of the rest of the sum, since it is computed on a separate set of seeds. As expected, additional uncertainty is contributed by $\boldsymbol{\Sigma_{cc}}$ and scaled by the estimated control matrix. See Appendix \ref{app:bayesDer} for a more general, Bayesian derivation of the combined uncertainty while taking into account possible prior information on $\boldsymbol{\mu_y}$ and/or $\boldsymbol{\mu_c}$.
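As a toy illustration of the scalar estimator of Section \ref{sec:unicv} combined with the estimation of $\boldsymbol{\mu_c}$ above, the following Python snippet (synthetic Gaussian data, not cosmological simulations; all numbers are made up) verifies both the unbiasedness despite a biased surrogate and, for this toy model with unit variances, the additional $\rho^2 N / M$ contribution to the variance ratio:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
rho, N, M, n_trials = 0.98, 10, 100, 5000
z1, z2 = rng.standard_normal((2, n_trials, N))
y = 1.0 + z1                                   # simulations, mu_y = 1
c = 0.7 + rho * z1 + np.sqrt(1 - rho**2) * z2  # paired, biased surrogates
# Surrogate-only runs on M separate seeds to estimate mu_c:
mu_c_bar = (0.7 + rng.standard_normal((n_trials, M))).mean(axis=1)
beta = rho                                     # = cov(y, c) / var(c) here
x = y - beta * (c - mu_c_bar[:, None])         # scalar CARPool samples
print("bias     :", x.mean() - 1.0)            # ~0 despite the biased surrogate
print("var ratio:", x.mean(axis=1).var() / y.mean(axis=1).var())
print("predicted:", (1 - rho**2) + rho**2 * N / M)
\end{verbatim}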
\subsubsection{Estimation of the control matrix}\label{sec:estControl} The matrices in equation \eqref{eq:betaStarMV} need to be estimated from data via the bias-corrected sample covariance matrix: \begin{equation}\label{eq:empMat} \begin{aligned} \boldsymbol{\widehat{\Sigma}_{yc}} &= \frac{1}{N-1} \sum_{i=1}^N \left( \boldsymbol{y}_i -\boldsymbol{\bar{y}} \right) \left( \boldsymbol{c}_i -\boldsymbol{\bar{c}} \right)^{\boldsymbol{T}}\,, \\ \boldsymbol{\widehat{\Sigma}_{cc}} &= \frac{1}{N-1} \sum_{i=1}^N \left( \boldsymbol{c}_i -\boldsymbol{\bar{c}} \right) \left( \boldsymbol{c}_i -\boldsymbol{\bar{c}} \right)^{\boldsymbol{T}}\,, \\ \boldsymbol{\widehat{\Sigma_{cc}^{-1}}} &= \boldsymbol{\widehat{\Sigma}_{cc}}^{-1}\,. \end{aligned} \end{equation} The computational cost of $\boldsymbol{y}$ is the limiting factor for estimating $\boldsymbol{\Sigma_{yc}}$. Therefore, the cross-covariance matrix is estimated online, as our primary motivation is to reduce the computation time: in particular, we certainly do not want to run more costly simulations in a precomputation step like we do for $\boldsymbol{\mu_c}$ with fast simulations. Simply put, $\boldsymbol{\widehat{\Sigma}_{yc}}$ is updated each time a new simulation pair is available. Note that for finite $N$, the estimator in equation \eqref{eq:empMat} of the precision matrix $\boldsymbol{\Sigma_{cc}^{-1}}$ is not unbiased \citep{Hartlap_2006}. Moreover, $\boldsymbol{\Sigma_{cc}^{-1}}$ is not defined when $\det \left(\boldsymbol{\Sigma_{cc}} \right)=0$. We have consequently replaced $\boldsymbol{\Sigma_{cc}^{-1}}$ by the Moore-Penrose pseudo-inverse $\boldsymbol{\Sigma_{cc}^{\dagger}}$ in equation \eqref{eq:betaStarMV} for the numerical analysis presented in Section \ref{sec:results}. Since the singular value decomposition (SVD) exists for any complex or real matrix, we can write $\boldsymbol{\Sigma_{yc}} = \boldsymbol{UVW^{T}}$ and $\boldsymbol{\Sigma_{cc}} = \boldsymbol{OPQ^{T}}=\boldsymbol{OPO^{T}}$ by symmetry. The optimal control matrix then reads $\boldsymbol{\beta^{\star}} = \boldsymbol{UVW^{T}OP^{-1}O^{T}}$. The product $\boldsymbol{P^{-\frac{1}{2}}O^{T}}$ whitens the centered surrogate vector elements (principal component analysis whitening), $\boldsymbol{OP^{-\frac{1}{2}}}$ restretches the coefficients and returns them to the surrogate basis, and then $\boldsymbol{UVW^{T}}$ projects the scaled surrogate elements into the high-fidelity simulation basis and rescales them to match the costly simulation covariance. It follows that, when using ${\boldsymbol{\hat\beta}}$ in practice, the projections are done in bases specifically adapted to the $\boldsymbol{y}$ and $\boldsymbol{c}$ samples available. This argument justifies using the same simulation/surrogate pairs first to compute ${\boldsymbol{\hat\beta}}$ (with the Moore-Penrose pseudo-inverse of the surrogate covariance replacing the precision matrix) and then to estimate the CARPool mean. An online estimation of both ${\boldsymbol{\hat\beta}}$ and $\boldsymbol{\bar{x}}(\boldsymbol{\hat{\beta}})$, considering incoming $\left\{ \boldsymbol{y}_n,\boldsymbol{c}_n \right\}$ pairs computed on the same seed $r_n$, amounts to computing a collection of $N$ samples as functions of $\widehat{\boldsymbol{\beta}}$, \begin{equation}\label{eq:mvComp} \boldsymbol{x}_n(\boldsymbol{\hat{\beta}}) = \boldsymbol{y}_n - \boldsymbol{\hat{\beta}} \left( \boldsymbol{c}_n - \boldsymbol{\bar{\mu}_c} \right)\,.
\end{equation} We implement equation \eqref{eq:mvCV} by taking the sample mean of $N$ such variance-reduced samples, \begin{equation}\label{eq:sampleCV} \boldsymbol{\bar{x}}(\boldsymbol{\hat{\beta}}) = \boldsymbol{\bar{y}} - \boldsymbol{\hat{\beta}} \left( \boldsymbol{\bar{c}} - \boldsymbol{\bar{\mu}_c} \right)\,. \end{equation} This way, equation \eqref{eq:sampleCV} can be computed each time a simulation/surrogate pair is drawn from a seed in $\mathcal{S}_{N} = \left\{ r_{1}, \dots, r_{N} \right\}$, after updating ${\boldsymbol{\hat\beta}}$ according to equation~\eqref{eq:empMat}. \subsubsection{Multivariate versus univariate CARPool} So far we have not assumed any special structure for $\boldsymbol{\beta}$. If, as in the classical control variates setting, the (potentially dense) covariances on the right-hand side of equation \eqref{eq:betaStarMV} are known \textit{a priori}, then $\boldsymbol{\beta^{\star}}$ is the best solution because it exploits the mutual information between all elements of $\boldsymbol{y}$ and $\boldsymbol{c}$. In practice, we will be using the online approach discussed in Section \ref{sec:estControl} for a very small number of simulations. If we are limited to a very small number of $\left\{ \boldsymbol{y}_n,\boldsymbol{c}_n \right\}$ pairs compared to the number of elements of the vectors, the estimate of $\boldsymbol{\beta^{\star}}$ can be unstable and possibly worsen the variance of equation~\eqref{eq:sampleCV}, though unbiasedness remains guaranteed. We will demonstrate below that in the case of a small number of simulations and a large number of statistics to estimate from the simulations, it is advantageous to impose structure on $\boldsymbol{\beta}$. In the simplest case, we can set the off-diagonal elements to zero. This amounts to treating each vector element separately and results in a decoupled problem with a separate solution \eqref{eq:betaStar} for each vector element, as we will discuss below. The univariate setting of Section~\ref{sec:unicv}, applied individually to each vector element (\textit{bin}), will be referred to as ``diagonal $\boldsymbol{\beta}$'' or $\boldsymbol{\beta^{\mathrm{diag}}}$, as it amounts to fixing the off-diagonal elements of $\boldsymbol{\Sigma_{cc}}$ and $\boldsymbol{\Sigma_{yc}}$ to zero in equation \eqref{eq:betaStarMV} and only estimating the diagonal elements. The intent of this paper is to show the potential of control variates for cosmological simulations; to this end, we will compare the following unbiased estimators: \begin{itemize}[labelwidth=*] \item \texttt{GADGET}, where we compute the sample mean $\boldsymbol{\bar{y}}$ from $N$-body simulations only. \item Multivariate CARPool described by equation \eqref{eq:mvCV}, where we estimate the control matrix $\boldsymbol{\beta}$ online using equation \eqref{eq:empMat}, and denote it by $\boldsymbol{\beta^{\star}}$. \item Univariate CARPool, where we use the empirical counterpart of equation \eqref{eq:betaStar} as the control coefficient for each element of a vector. In this case, we will write $\boldsymbol{\beta}$ as $\boldsymbol{\beta^{\mathrm{diag}}}$. \end{itemize} Other, intermediate choices between fully dense and diagonal $\boldsymbol{\beta}$ are possible and may be advantageous in some circumstances. We will leave an exploration of these to future work, and simply note here that this freedom to tune $\boldsymbol{\beta}$ does not affect the mean of the CARPool estimate.
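To summarise the estimators compared above, a schematic \texttt{numpy} implementation of the diagonal and dense control matrices might look as follows; this is a sketch under the assumption that the $N$ simulation/surrogate pairs are stored as arrays, and all names are hypothetical:
\begin{verbatim}
import numpy as np

def carpool_mean(y, c, mu_c_bar, diagonal=True):
    """CARPool estimate of E[y] from N simulation/surrogate pairs.

    y, c:     (N, p) arrays of paired simulation/surrogate statistics
    mu_c_bar: (p,) precomputed surrogate mean (here p = q)
    """
    yc = y - y.mean(axis=0)
    cc = c - c.mean(axis=0)
    N = y.shape[0]
    if diagonal:
        # Univariate CARPool: one control coefficient per bin.
        beta = (yc * cc).sum(axis=0) / (cc * cc).sum(axis=0)
        x = y - beta * (c - mu_c_bar)
    else:
        # Multivariate CARPool with the Moore-Penrose pseudo-inverse.
        S_yc = yc.T @ cc / (N - 1)
        S_cc = cc.T @ cc / (N - 1)
        beta = S_yc @ np.linalg.pinv(S_cc)
        x = y - (c - mu_c_bar) @ beta.T
    return x.mean(axis=0)
\end{verbatim}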
\section{Cosmological simulations} \label{sec:3sec} This section describes the simulation methods that we use to compute the statistics presented in Section~\ref{sec:results}. The simulations assume a $\Lambda$ Cold Dark Matter ($\Lambda$CDM) cosmology consistent with the {\it Planck} constraints provided by \citet{2018arXiv180706209P}: $\Omega_\mathrm{m}=0.3175$, $\Omega_\mathrm{b}=0.049$, $h=0.6711$, $n_s=0.9624$, $\sigma_8=0.834$, $w=-1.0$ and $M_{\nu}=0.0~\mathrm{eV}$. \subsection{\textit{Quijote} simulations at the fiducial cosmology} \citet{villaescusanavarro2019quijote} have publicly released data outputs from $N$-body cosmological simulations run with the full TreePM code \texttt{GADGET-III}, a development of the previous version \texttt{GADGET-II} by \citet{2005MNRAS.364.1105S}.\footnote{Instructions to access the data are given at \url{https://github.com/franciscovillaescusa/Quijote-simulations}.} Available data and statistics include simulation snapshots, matter power spectra, matter bispectra and matter probability density functions. The sample mean of each statistic computed from all available realisations gives the unbiased estimator of $\mathbb{E} \left[ \boldsymbol{y} \right] = \boldsymbol{\mu}$. The fiducial cosmology data set contains 15,000 realisations; their characteristics are summarised in Table~\ref{table:yFeat}. \begin{table} \centering \caption{Characteristics of \texttt{GADGET-III} simulations} \begin{tabular}{ | m{12em} | m{12em}| } \hline Characteristic/Parameter & Value \\ \hline \hline Simulation box volume & $\left( 1000~h^{-1} {\rm Mpc} \right)^3$ \\ \hline Number of CDM particles & $N_\mathrm{p} = 512^3$ \\ \hline Force mesh grid size & $N_\mathrm{m} = 1024$ \\ \hline Starting redshift & $z_i = 127$ \\ \hline Initial conditions & Second-order Lagrangian Perturbation Theory (2LPT) \\ \hline Redshift of data outputs & $z \in \left\{ 3.0, 2.0, 1.0, 0.5, 0.0\right\}$ \\ \hline \end{tabular} \label{table:yFeat} \end{table} As discussed in Section \ref{sec:InPractice}, the \textit{Quijote} simulations are selected because we have access to an extensive ensemble of simulations that we can use to validate the CARPool approach. In the following we will look at wavenumbers $k \sim 1~h{\rm Mpc}^{-1}$ where the \textit{Quijote} simulations may not be fully resolved. This is not important for the purposes of this paper; we will consider the full simulation ensemble as the gold standard that we attempt to reproduce with a much smaller number of simulations plus fast surrogates. In the next subsection, we present the chosen low-fidelity simulation code which provides an approximate statistic $\boldsymbol{c}$ for our numerical experiments. \subsection{Choice of approximate simulation method} Any fast solution can be used for $\boldsymbol{c}$, provided that it can be fed with the same initial conditions as the \textit{Quijote} simulations. To this end, the matter power spectrum from \texttt{CAMB} \citep{Lewis_2000} at $z=0$ is rescaled to the initial redshift $z_i = 127$ to generate the initial conditions, as in~\citet{villaescusanavarro2019quijote}. In this work, we use the \texttt{L-PICOLA} code developed by \citet{Howlett_2015}, an MPI parallel implementation of the COLA method \citep{Tassev_2013}. The core idea of COLA is to add residual displacements computed with a PM $N$-body solver to the trajectory given by the first- and second-order LPT approximations.
The evolution of the residual displacement field $\boldsymbol{x_\mathrm{res}}$ appears by rewriting the equation of motion in a frame comoving with the LPT trajectory, \begin{equation}\label{eq:colaEOM} \partial_t^2 \boldsymbol{x_\mathrm{res}} = -\nabla \Phi - \partial_t^2 \boldsymbol{x_\mathrm{LPT}}\,, \end{equation} with $\boldsymbol{x_\mathrm{res}} \equiv \boldsymbol{x} - \boldsymbol{x_\mathrm{LPT}}$. Here, $\boldsymbol{x_\mathrm{LPT}}$ is the LPT approximation to $\boldsymbol{x}$, the Eulerian position of matter particles. The time integration is performed by discretising the derivative $\partial_t^2$ only on the left-hand side of equation \eqref{eq:colaEOM}, while the (second-order) LPT displacements are computed only once and stored. $\Phi$ is the gravitational potential obtained by solving the Poisson equation for the density field of the (Eulerian) CDM particles' positions $\boldsymbol{x}$. \texttt{L-PICOLA} has its own initial conditions generator and uses a slightly modified version of the \texttt{2LPTic} code.\footnote{The parallelised version of the code is available at \url{http://cosmo.nyu.edu/roman/2LPT/}.} To generate \texttt{L-PICOLA} snapshots and extract statistics, we set the free parameters as presented in Table~\ref{table:cFeat}. Justification for these choices, along with more details on COLA and the \texttt{L-PICOLA} implementation, can be found in Appendix \ref{app:cola}. \begin{table} \centering \caption{Characteristics of \texttt{L-PICOLA} simulations} \begin{tabular}{ | m{12em} | m{12em}| } \hline Characteristic/Parameter & Value \\ \hline \hline Number of timesteps & 20 (linearly spaced)\\ \hline Modified timestepping from \citet{Tassev_2013} & $nLPT=+0.5$\\ \hline Force mesh grid size & $N_\mathrm{m} = 512$ \\ \hline Starting redshift & $z_i = 127$ \\ \hline Initial conditions & Second-order Lagrangian Perturbation Theory (2LPT) \\ \hline Redshift of data outputs & $z \in \left\{1.0, 0.5, 0.0\right\}$ \\ \hline \end{tabular} \label{table:cFeat} \end{table} \section{Application and results}\label{sec:results} In this section, we apply the CARPool technique to three standard cosmological statistics: the matter power spectrum, the matter bispectrum, and the one-dimensional probability density function (PDF) of matter fractional overdensity. We seek to improve the precision of estimates of theoretical expectations of these quantities as computed by \texttt{GADGET-III}. To assess the actual improvement, we need the sample mean $\boldsymbol{\bar{y}}$ of the \textit{Quijote} simulations on the one hand, and the estimator \eqref{eq:sampleCV} on the other hand. Additionally, unless stated otherwise, each test case has the following characteristics: \begin{itemize} \item $N_\mathrm{max}=500$ $\left\{ \boldsymbol{y}_i, \boldsymbol{c}_i \right\}$ simulation pairs are generated, and the cumulative sample mean $\boldsymbol{\bar{y}}$ (resp. $\boldsymbol{\bar x(\beta)}$) is computed after every $5$ additional simulations (resp. simulation pairs). \item $M=1,500$ additional fast simulations are dedicated to the estimation of $\boldsymbol{\mu_c}$. \item The sample mean of 15,000 $N$-body simulations, accessible in the \textit{Quijote} database, is taken as the true $\boldsymbol{\mu}$. \item $p = q$ since we post-process \texttt{GADGET-III} and \texttt{L-PICOLA} snapshots with the same analysis codes (i.e. the same vector size for $\boldsymbol{y}$ and $\boldsymbol{c}$). \item The analysis is performed at redshift $z=0.5$.
\item $\delta(\boldsymbol{x}) \equiv \rho(\boldsymbol{x})/\bar{\rho} - 1$ is the matter density contrast field, where the fractional overdensity $\rho(\boldsymbol{x})/\bar{\rho}$ is computed with the Cloud-in-Cell (CiC) mass assignment scheme. Here, exceptionally, $\boldsymbol{x}$ denotes the grid coordinate, not to be confused with the notations used so far. \item $N_\mathrm{grid}$ designates the density contrast grid size when post-processing snapshots. \item We use the bias-corrected and accelerated (BCa) bootstrap,\footnote{Implemented in \url{https://github.com/cgevans/scikits-bootstrap}} with $B =$~5,000 samples with replacement, to compute the $95\%$ confidence intervals of the estimators. \citet{efron1994introduction} explain the computation. \end{itemize} The procedure is illustrated in Figure~\ref{fig:flow}. The first step is to run $M$ fast surrogates to compute the approximate mean $\bar{\boldsymbol{\mu}}_{\boldsymbol{c}}$. How large $M$ should be depends on the accuracy demanded by the user. Then, for each newly picked initial condition, both the expensive simulation code and the low-fidelity method are run to produce a snapshot pair. Only in this step do we need to run the high-fidelity simulation code $N$ times. The mean \eqref{eq:sampleCV} can be computed for each additional pair to track the estimate. In the next section, we assess the capacity of CARPool to use fewer than $10$ simulations and a set of fast surrogates to match the precision of a large number of $N$-body simulations. All the statistics are calculated from the snapshots with the Python 3 module \texttt{Pylians3}.\footnote{Available in the repository \url{https://github.com/franciscovillaescusa/Pylians3}} \begin{figure*} \centering \includegraphics[width=1.85\columnwidth, height = 0.95\textheight]{Figures/Method/cvFlowchart-3.png} \caption{Flowchart of the practical application of CARPool to cosmological simulations. We highlight the estimation of $\boldsymbol{\mu_c}$ as a precomputation step using $M$ additional fast simulations. The larger $M$ is, the smaller the impact on the variance/covariance of the control variates estimator, as expressed in \eqref{eq:errMuc} and Appendix~\ref{app:bayesDer}. Fractional overdensity images are projected slices of width $60~h^{-1}\,{\rm Mpc}$.}\label{fig:flow} \end{figure*} \subsection{Matter power spectrum} This section is dedicated to estimating the power spectrum of matter density in real space at $z=0.5$, the lower end of the range covered by next-generation galaxy redshift surveys. The density contrast $\delta(\boldsymbol{x})$ is computed from each snapshot with the grid size $N_\mathrm{grid} = 1024$. The publicly available power spectra range from $k_\mathrm{min}=$ \num{8.900e-3} $h {\rm Mpc^{-1}}$ to $k_\mathrm{max}=5.569$ $h {\rm Mpc^{-1}}$ and contain $886$ bins. The following analysis is restricted to $k_\mathrm{max}=1.194$ $h {\rm Mpc^{-1}}$, which results in $190$ bins. We simplify our test case by compressing the power spectra into $p=95$ bins, using the appropriate re-weighting by the number of modes in each $k$ bin given in \texttt{Pylians3}. Univariate CARPool gives the best results since we are using the smallest possible number of costly $N$-body simulations; for this reason, power spectrum estimates using the multivariate framework are not shown here.
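The compression of neighbouring $k$-bins mentioned above amounts to averaging band powers with weights given by the number of modes per bin. A possible sketch (hypothetical array names; it assumes the mode counts are supplied alongside the measured spectra, as in \texttt{Pylians3} outputs):
\begin{verbatim}
import numpy as np

def compress_pk(k, pk, n_modes, factor=2):
    """Merge consecutive k-bins, weighting by the number of modes."""
    nbins = (len(k) // factor) * factor  # drop a possible remainder bin
    w = n_modes[:nbins].reshape(-1, factor)
    k2 = (k[:nbins].reshape(-1, factor) * w).sum(axis=1) / w.sum(axis=1)
    p2 = (pk[:nbins].reshape(-1, factor) * w).sum(axis=1) / w.sum(axis=1)
    return k2, p2
\end{verbatim}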
As we discuss in Appendix \ref{app:cola}, we intentionally run our fast surrogate (COLA) in a mode that produces a power spectrum that is highly biased compared to the full simulations, with a power deficit of more than $60\%$ on small scales. \subsubsection{CARPool versus $N$-body estimates}\label{sec:estPk} Figure \ref{fig:pkEst5v500} shows the estimated power spectrum with $95\%$ confidence intervals enlarged by a factor of $20$ for better visibility. Only $5$ $N$-body simulations are needed to compute an unbiased estimate of the power spectrum with much higher precision than $500$ $N$-body runs on large scales and on the scale of Baryon Acoustic Oscillations (BAO). On small scales, confidence intervals are of comparable size.\footnote{While the bootstrap is robust for estimating the $95\%$ error bars of a sample mean with $500$ simulations, it is not equally reliable with a very small number of realisations. This leads to large bin-to-bin variations of the estimated CARPool confidence intervals in Figure \ref{fig:pkEst5v500}. An alternative, parametric computation of confidence intervals with very few samples can be found in Appendix \ref{app:collFigs}, using Student $t$-score values.} \begin{figure} \includegraphics[width=\columnwidth]{Figures/4Pk/FullBoot/reducedPk_500Vs5_samples.png} \caption{Estimated power spectrum with $500$ $N$-body simulations versus $5$ pairs of ``$N$-body + cheap'' simulations, from which $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ is derived. The estimated $95\%$ confidence intervals are computed with the BCa bootstrap. They are enlarged by a factor of 20 for better visibility.} \label{fig:pkEst5v500} \end{figure} We must verify that these results are not produced by a ``lucky'' set of $5$ simulation pairs. To this end, we compute $100$ CARPool means $\boldsymbol{\bar{x}(\widehat{\beta^\mathrm{diag}})}$ from distinct sets of $5$ random seeds. The CARPool estimates fall within sub-percent accuracy relative to the sample mean from 15,000 $N$-body simulations, as illustrated by the upper panel of Figure~\ref{fig:pkRatios5v500}. The \texttt{GADGET} sample mean percentage error of $500$ simulations with respect to 15,000 simulations is plotted with $95\%$ confidence intervals. We stress here that every percentage error plot in this paper shows an error with respect to 15,000 $N$-body simulations. The mean of $500$ \texttt{GADGET} realisations is thus not at zero percent, though the difference is very small. \paragraph*{Beta smoothing.} Since we use a very small number of simulations, the estimates of the diagonal elements of $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ are noisy. This leads to heavy-tailed distributions for some of the CARPool estimates. Since we are free to modify $\boldsymbol{\beta}$ without affecting unbiasedness, we can exploit the expectation that neighboring bins have similar optimal $\boldsymbol{\beta}$. Convolving the diagonal elements with a $5$-bin-wide top-hat window slightly reduces the small-scale spread of CARPool estimates computed with only $5$ \texttt{GADGET} power spectra and removes outliers. The comparison of the two panels in Figure \ref{fig:pkRatios5v500} illustrates this point. Using a $9$-bin-wide Hanning window for the smoothing yields similar results. We call this technique beta smoothing and use it with a 5-bin-wide top-hat window in what follows.
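A minimal sketch of this smoothing step (hypothetical names; \texttt{beta\_diag} holds the per-bin control coefficients):
\begin{verbatim}
import numpy as np

def smooth_beta(beta_diag, width=5):
    """Convolve the diagonal control coefficients with a flat window."""
    kernel = np.ones(width) / width
    # mode='same' keeps the length; edge bins see the implicit zero
    # padding, and reflective padding would be an alternative there.
    return np.convolve(beta_diag, kernel, mode='same')
\end{verbatim}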
Both panels of Figure~\ref{fig:pkRatios5v500} show the symmetric $95\%$ confidence intervals of the estimation of the surrogate mean in dashed lines. They represent the $95\%$ error band likely to stem from the estimation of $\boldsymbol{\mu_c}$, relative to the mean of 15,000 \texttt{GADGET} simulations; this is why, especially at large scales, the CARPool means concentrate slightly away from zero percentage error. Though the unbiased estimator in equation \eqref{eq:sampleCV} takes a precomputed cheap mean, the practitioner can decide to run more approximate simulations on the fly to improve the accuracy of $\boldsymbol{\bar{\mu}_c}$. Note that the CARPool means with $5$ $N$-body simulations still land within the $95\%$ confidence intervals from $500$ \texttt{GADGET} simulations, even at large scales where the difference due to the surrogate mean is visible. Figure \ref{fig:pkBin} exhibits the convergence of one power spectrum bin at the BAO scale as we add more simulations: the $95\%$ error band of the control variates estimate shrinks extremely fast compared to that of the $N$-body sample mean. \begin{figure} \includegraphics[width=0.49\textwidth]{Figures/4Pk/FullBoot/TruePk_5CVRatios100Means500T0.png} \includegraphics[width=0.49\textwidth]{Figures/4Pk/BootSmF/TruePk_5CVRatios100Means500T0.png} \caption{Estimated power spectrum percentage error with respect to 15,000 $N$-body runs: $500$ $N$-body simulations versus $100$ sets of $5$ pairs of ``$N$-body + cheap'' simulations. Each set uses a distinct $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$, calculated with the same seeds used for $\boldsymbol{\bar{x}}$. The upper panel estimate uses $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ while the lower panel convolves the diagonal elements of $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ with a narrow top-hat window. Beta smoothing removes outliers and Gaussianises the tails by effectively increasing the number of degrees of freedom for each $\beta$ estimate. Both panels use the same random seeds. The estimated $95\%$ confidence intervals are plotted for the $N$-body sample mean only, using the BCa bootstrap. The dark blue symbols show the $68\%$ percentile of the CARPool estimates ordered by the absolute value of the percentage error; the rest appears in light blue symbols.} \label{fig:pkRatios5v500} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{Figures/4Pk/Visualization/pk_CV_perf_12.png} \caption{Convergence of a single $k$-bin at the BAO scale: the cumulative sample mean $\boldsymbol{\bar{y}}$ of $N$-body simulations versus the sample mean $\boldsymbol{\bar{x}(\widehat{\beta^\mathrm{diag}})}$. Confidence intervals take into account that $\boldsymbol{\beta^\mathrm{diag}}$ is estimated from the same number of samples used to compute the CARPool estimate of $P(k)$.} \label{fig:pkBin} \end{figure} \subsubsection{Empirical variance reduction}\label{sec:varPk} The left panel of Figure~\ref{fig:empVarPk} shows the empirical generalised variance reduction of the CARPool estimate compared to the standard estimate, as defined in equation \eqref{eq:mvVarReduc}. The vertical axis corresponds to the volume ratio of two parallelepipeds of dimension $p=95$, in other words the volume ratio of error ``boxes'' for two estimators. The determinant $\det\left(\boldsymbol{\widehat{\Sigma_{yy}}}\right)$ is fixed because we take all 15,000 $N$-body simulations available in \textit{Quijote} to compute the most accurate estimate of $\boldsymbol{\Sigma_{yy}}$ we have access to, whereas $\det\left(\boldsymbol{\Sigma_{xx}}(\boldsymbol{\hat{\beta}})\right)$ changes each time new simulation pairs are run.
More precisely, for each data point in Figure~\ref{fig:empVarPk}, we take the control matrix estimate computed with $5k$, $k \in \llbracket 1,100 \rrbracket$, simulation pairs and generate 3,000 $\boldsymbol{x}$ samples according to \eqref{eq:mvComp} to obtain an estimator of $\boldsymbol{\Sigma_{xx}}$. For that, we use 3,000 \textit{Quijote} simulations and 3,000 additional \texttt{L-PICOLA} surrogates run with the corresponding seeds. \begin{figure*} \includegraphics[width=0.49\textwidth]{Figures/4Pk/Visualization/varReduc_Pk.png} \includegraphics[width=0.49\textwidth]{Figures/4Pk/Visualization/sigmaReduc_Pk.png} \caption{Left panel: Generalised variance ratio for the power spectrum up to $k_\mathrm{max} \approx 1.2~h {\rm Mpc^{-1}}$ as a function of the number of available simulations. Each $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ serves to generate 3,000 samples according to \eqref{eq:mvComp} to estimate the CARPool covariance matrix. Right panel: Standard deviation reduction for each power spectrum bin due to CARPool. The blue and black curves respectively use $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ estimated with $500$ samples. The dashed grey curve exhibits the actual standard deviation ratio when we have $5$ samples only to compute $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$. $\boldsymbol{\Sigma_{yy}}$ is estimated using all 15,000 available power spectra from the \textit{Quijote} simulations.} \label{fig:empVarPk} \end{figure*} The simpler univariate scheme outperforms the estimation of the optimal $\boldsymbol{\beta^{\star}}$ for $N=5k$, $k \in \llbracket 1,100 \rrbracket$, corroborating the experiments of Section \ref{sec:estPk}. Furthermore, the variance reduction granted by a sub-optimal diagonal $\boldsymbol{\beta^\mathrm{diag}}$ improves rapidly and soon reaches its apparent limit. We suspect that the slight worsening of the variance reduction, when the number of available samples to estimate $\boldsymbol{\beta^{\star}}$ approaches the vector size $p$, is linked to the eigenspectrum of $\boldsymbol{\Sigma_{cc}^{\dagger}}$ and could be improved by projecting out the eigenmodes corresponding to the smallest, noisiest eigenvalues. We depict the scale-dependent performance of CARPool for the matter power spectrum in the right panel of Figure~\ref{fig:empVarPk}. The vertical axis shows the variance reduction expected from the optimal control coefficients (or matrix). Namely, we take the data points of the left panel for $500$ simulation/surrogate pairs, extract the diagonal of the covariance matrices and divide the arrays element-wise. The blue and black curves show the variance reduction with respect to the sample mean of $N$-body simulations using all 500 simulation/surrogate pairs to estimate the control matrix. In practice, we estimate $\boldsymbol{\beta}$ using only 5 simulation/surrogate pairs; does this noisy $\boldsymbol{\hat\beta}$ lead to significant inefficiency? The grey dashed curve shows the actual standard deviation reduction brought by the rough estimate of $\boldsymbol{\beta^\mathrm{diag}}$ using $5$ simulation pairs only, with which the results of Figures \ref{fig:pkEst5v500} and \ref{fig:pkRatios5v500} are computed. A few $k$-bins show upward fluctuations, but the variance reduction remains close to optimal, especially considering that only $5$ simulations were used and that we did not attempt any regularisation beyond beta smoothing.
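For numerical stability with $p=95$ bins, the determinant ratio in Figure~\ref{fig:empVarPk} is best evaluated in log space; a sketch (hypothetical names; the samples and the reference covariance are assumed given):
\begin{verbatim}
import numpy as np

def log10_generalised_variance_ratio(x_samples, S_yy):
    """log10 of det(Sigma_xx) / det(Sigma_yy) from CARPool samples."""
    S_xx = np.cov(x_samples, rowvar=False)
    sign_x, logdet_x = np.linalg.slogdet(S_xx)
    sign_y, logdet_y = np.linalg.slogdet(S_yy)
    assert sign_x > 0 and sign_y > 0, "covariances must be pos. definite"
    return (logdet_x - logdet_y) / np.log(10.0)
\end{verbatim}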
\subsection{Matter bispectrum} We compute the shot-noise corrected matter bispectrum in real space \citep{Hahn_2020,villaescusanavarro2019quijote}, using \texttt{pySpectrum}\footnote{Available in the repository \url{https://github.com/changhoonhahn/pySpectrum}} with $N_\mathrm{grid}=360$ and bins of width $\Delta k= 3k_\mathrm{f} =$ \num{1.885e-2} $h {\rm Mpc^{-1}}$, where $k_\mathrm{f} = \frac{2\pi}{1000}$ $h {\rm Mpc^{-1}}$ is the fundamental mode set by the box size. As in the previous section, we present only the results using $\boldsymbol{\beta^\mathrm{diag}}$ instead of $\boldsymbol{\beta^{\star}}$. We examine two distinct sets of bispectrum coefficients: in the first case we study the bispectrum for squeezed isosceles triangles as a function of opening angle only, averaging over scale; in the second case we compute equilateral triangles as a function of $k$. \subsubsection{Squeezed isosceles triangles} We start the analysis by grouping isosceles triangles ($k_1=k_2$) and re-weighting the bispectrum monopoles for various $\frac{k_3}{k_1}$ ratios in ascending order. Only squeezed triangles are considered here: $\left(\frac{k_3}{k_1}\right)_\mathrm{max} = 0.20$, so that the dimension of $\boldsymbol{y}$ is $p=98$ (see Table \ref{table:notations}). \paragraph*{CARPool versus $N$-body estimates.} On the order of $5$ samples are required to achieve a precision similar to that of the sample mean of 500 $N$-body simulations, as we show in Figure \ref{fig:bkQkEstSmF5v500} (upper panel). Figure \ref{fig:bkQkRatiosSmF5v500} (upper panel) corroborates the claim by showing the percentage error of 100 CARPool means using $5$ costly simulations each. The reference is the mean of the 15,000 bispectra from the \textit{Quijote} simulations. As in the previous section, we show the $95\%$ error band due to estimation of the surrogate mean $\boldsymbol{\mu_c}$ by dashed curves. \begin{figure} \includegraphics[width=0.49\textwidth]{Figures/4Bk/BootSmF/Bk_500Vs5_samples.png} \includegraphics[width=0.49\textwidth]{Figures/4QkEq/BootSmF/Qk_500Vs5_samples.png} \caption{Upper panel: Estimated bispectrum for squeezed isosceles triangles with $500$ $N$-body simulations versus $5$ pairs of ``$N$-body + cheap'' simulations, from which the smoothed $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ is derived. The estimated $95\%$ confidence intervals are computed with the BCa bootstrap. They are enlarged by a factor of 20 for better visibility. Lower panel: As in the upper panel, but for the reduced bispectrum of equilateral triangles.\label{fig:bkQkEstSmF5v500}} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{Figures/4Bk/BootSmF/TrueBk_5CVRatios100Means500T0.png} \includegraphics[width=0.49\textwidth]{Figures/4QkEq/BootSmF/TrueQk_5CVRatios100Means500T0.png} \caption{Upper panel: Estimated bispectra percentage error for squeezed isosceles triangles with respect to 15,000 $N$-body runs: $500$ $N$-body simulations versus $100$ sets of $5$ pairs of ``$N$-body + cheap'' simulations. Each set uses a distinct $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$, calculated with the same seeds used for $\boldsymbol{\bar{x}}$ and smoothed by a $5$-bin-wide flat window. The estimated $95\%$ confidence intervals are plotted for the $N$-body sample mean only, using the BCa bootstrap. The dark blue symbols show the $68\%$ percentile of the CARPool estimates ordered by the absolute value of the percentage error; light-blue symbols represent the rest.
Lower panel: As in the upper panel, but for the reduced bispectrum of equilateral triangles.\label{fig:bkQkRatiosSmF5v500}} \end{figure} \paragraph*{Empirical variance reduction.} As for the power spectrum, the upper left panel of Figure~\ref{fig:varBkQk} shows that the generalised variance reduction is much more significant when separately estimating control coefficients for each triangle configuration. The right-hand side of the curve suggests an increasing improvement of the multivariate case, but in this range of sample sizes the variance reduction scheme loses its appeal. We have used 1,800 additional simulations to compute the covariance matrices entering the generalised variance estimates. In the upper right panel of the figure, the calculation of the standard deviation ratio for each triangle configuration follows the same logic as in Section~\ref{sec:varPk}. The grey dashed curve corresponds to the standard deviation reduction brought by control coefficients (i.e. the univariate CARPool framework) estimated with $5$ simulation/surrogate pairs only. \begin{figure*} \includegraphics[width=0.49\textwidth]{Figures/4Bk/Visualization/varReduc_Bk.png} \includegraphics[width=0.47\textwidth]{Figures/4Bk/Visualization/sigmaReduc_Bk.png} \includegraphics[width=0.49\textwidth]{Figures/4QkEq/Visualization/varReduc_Qk.png} \includegraphics[width=0.47\textwidth]{Figures/4QkEq/Visualization/sigmaReduc_Qk.png} \caption{Upper left panel: Generalised variance ratio of bispectrum for squeezed isosceles triangles as a function of the number of available simulations. Each $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ serves to generate 1,800 samples according to \eqref{eq:mvComp} to estimate the CARPool covariance matrix. Upper right panel: Standard deviation reduction for each squeezed isosceles triangle to expect from CARPool. The blue and black curves respectively use $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ estimated with $500$ samples. The dashed grey curve exhibits the actual standard deviation ratio when we have $5$ samples only to compute $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$. $\boldsymbol{\Sigma_{yy}}$ is estimated with all 15,000 available bispectra from the \textit{Quijote} simulations. Lower panels: As in the upper panels, but for the reduced bispectrum of equilateral triangles.} \label{fig:varBkQk} \end{figure*} \subsubsection{Equilateral triangles} Here, we analyse equilateral triangles with the common modulus $k_1=k_2=k_3$ varying up to $k_\mathrm{max} = 0.75$ $h {\rm Mpc^{-1}}$. For better visibility, we show the reduced bispectrum monopole $Q(k_1,k_2,k_3)$. \paragraph*{CARPool versus $N$-body estimates.} Similarly to the previous set of triangle configurations, we compare the precision of the CARPool estimator using $5$ $N$-body simulations with that of the sample mean from $500$ \texttt{GADGET} runs. Figure~\ref{fig:bkQkEstSmF5v500} (lower panel) exhibits the estimated reduced bispectrum with $5$ seeds, while Figure~\ref{fig:bkQkRatiosSmF5v500} (lower panel) shows the relative error of various CARPool sets with respect to the reference from 15,000 $N$-body samples. \paragraph*{Empirical variance reduction.} In Figure~\ref{fig:varBkQk} (lower panels), we observe a trend similar to that of the previous experiments: the univariate control coefficients are much better than the control matrix in terms of generalised variance reduction for a realistic number of full $N$-body simulations.
\subsection{Probability density function of smoothed matter fractional overdensity} The power spectrum and the bispectrum are Fourier-space statistics. How does CARPool fare on a purely real-space statistic? In the \textit{Quijote} simulations, the probability density function of the matter fractional overdensity, or the \textit{matter PDF}, is computed on a grid with $N_\mathrm{grid}=512$, smoothed by a top-hat filter of radius $R$. There are $100$ histogram bins in the range $\rho/\bar{\rho} \in \left[ 10^{-2}, 10^{2}\right]$. We work with the $R=5~h^{-1} {\rm Mpc}$ case and restrict the estimation of the PDF to the interval $\rho/\bar{\rho} \in \left[ \num{8e-2}, \num{5e1}\right]$ that contains $p=70$ bins. Note that we intentionally do not do anything to improve the correspondence of the surrogate and simulation histograms, an example of which is displayed in Figure~\ref{fig:pdfEx}. \begin{figure} \includegraphics[width=\columnwidth]{Figures/Method/pdfExample.png} \caption{Probability density function of the smoothed matter fractional overdensity of \texttt{GADGET-III} and \texttt{L-PICOLA} snapshots at $z=0.5$ for the same initial conditions. The characteristics of \texttt{L-PICOLA} are provided in Table \ref{table:cFeat}.} \label{fig:pdfEx} \end{figure} \subsubsection{Empirical variance reduction} For the matter PDF, we show the empirical variance reduction results before the actual estimates: Figure~\ref{fig:varPDF} shows that the variance reduction is much milder for the PDF than for the power spectrum or the bispectrum, both for the univariate and multivariate CARPool frameworks. While the multivariate case does eventually lead to significant gains, CARPool needs $\mathcal{O}(100)$ simulations to learn how to map density contrast in COLA outputs to density contrast in \texttt{GADGET-III} simulations. While COLA places overdense structures close to the right position, their density contrast is typically underestimated, meaning that a level set of the COLA output is informative about a different level set of the \texttt{GADGET-III} simulation. The right panel nonetheless shows that it is possible to reduce the variance of the one-point PDF with CARPool, unlike with paired-fixed fields \citep{Villaescusa_Navarro_2018}. As for the bispectrum, we took the data outputs of 1,800 additional simulations to compute the covariance matrices entering the generalised variance and standard deviation estimates. \begin{figure*} \includegraphics[width=0.49\textwidth]{Figures/4PDF/Visualization/varReduc_PDF.png} \includegraphics[width=0.47\textwidth]{Figures/4PDF/Visualization/sigmaReduc_PDF.png} \caption{Left panel: Generalised variance ratio of the matter PDF as a function of the number of available simulations. Each $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ serves to generate 1,800 samples according to \eqref{eq:mvComp} to estimate the CARPool covariance matrix. Right panel: Standard deviation reduction for each PDF bin expected from CARPool. The blue and black curves respectively use $\widehat{\boldsymbol{\beta}}$ and $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ estimated with $500$ samples. The dashed grey curve exhibits the actual standard deviation ratio when we have $10$ samples only to compute $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$.
$\boldsymbol{\Sigma_{yy}}$ is estimated with all 15,000 available matter PDFs from the \textit{Quijote} simulations.} \label{fig:varPDF} \end{figure*} \subsubsection{CARPool versus $N$-body estimates} For the matter PDF we compare CARPool estimates in both the multivariate and univariate settings. Figures \ref{fig:pdfEst} and \ref{fig:pdfRatios} are paired and show the comparable performance at the tails of the estimated PDF for the smoothed $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ with $50$ samples on the one hand, and the dense $\widehat{\boldsymbol{\beta}}$ matrix obtained with $125$ simulations on the other. We can expect a factor of $\mathcal{O}(10)$ fewer $N$-body simulations to suffice for an accurate estimate of the PDF when applying the simple univariate CARPool technique ($50$ instead of $500$ here). As discussed above, with enough simulations CARPool can learn the mapping between the density contrasts of COLA and \texttt{GADGET} outputs. Therefore, the matter PDF is a case where the multivariate framework, which involves the estimation of $p \times p$ covariance matrices, shows improvement over the more straightforward univariate case once the number of available simulation pairs passes a threshold. \begin{figure} \includegraphics[width=0.49\textwidth]{Figures/4PDF/DiagSmF/PDF_500Vs50_samples.png} \includegraphics[width=0.49\textwidth]{Figures/4PDF/FullBeta/PDF_500Vs125_samples.png} \caption{Estimated matter PDF with $500$ $N$-body simulations versus CARPool estimates. $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ is used in the upper panel whereas the full control matrix is computed in the lower panel. The estimated $95\%$ confidence intervals are computed with the BCa bootstrap. They are enlarged by a factor of $40$ for better visibility.}\label{fig:pdfEst} \end{figure} \begin{figure} \includegraphics[width=0.49\textwidth]{Figures/4PDF/DiagSmF/TruePDF_50CVRatios10Means500T0.png} \includegraphics[width=0.49\textwidth]{Figures/4PDF/FullBeta/TruePDF_125CVRatios4Means500T0.png} \caption{Estimated matter PDF percentage error with respect to 15,000 $N$-body runs: sample mean of 500 $N$-body simulations versus CARPool estimates. In the upper panel, $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$ is used for each set and smoothed by a $5$-bin-wide flat window. In the lower panel, the full control matrix $\widehat{\boldsymbol{\beta}}$ is estimated for each group of seeds. The estimated $95\%$ confidence intervals are plotted for the $N$-body sample mean only, using the BCa bootstrap.}\label{fig:pdfRatios} \end{figure} While we wanted to test the performance of CARPool with minimal tuning, we expect that, with some mild additional assumptions and tuning, the univariate CARPool approach could be improved and gains similar to the multivariate case could be obtained with a smaller number of simulations. As an example, one could pre-process the COLA outputs to match the PDF (and power spectrum) of \texttt{GADGET-III} using the approach described in \citet{2013JCAP...11..048L} to guarantee a close correspondence between bins of density contrast. In addition, a regularising assumption would be to consider transformations from COLA to \texttt{GADGET-III} density contrasts that are smooth and monotonic. \subsection{Summary of results} Here we present a summary of the variance reduction observed in our numerical experiments.
With $M=1,500$ additional fast simulations reserved for estimating the cheap mean $\boldsymbol{\bar{\mu}_c}$, and with percentage errors relative to the mean of 15,000 full $N$-body runs available in \textit{Quijote}, we find: \begin{itemize}[labelwidth=*,label=\textbullet, font = \color{black} \large] \item With only $5$ $N$-body simulations, the univariate CARPool technique recovers the $95$-bin power spectrum up to $k_\mathrm{max} \approx 1.2$ $h {\rm Mpc^{-1}}$ within the $0.5\%$ error band, when the control coefficients are smoothed. \item For the bispectrum of $98$ squeezed isosceles triangle configurations, the recovery is within $2\%$ when $5$ $N$-body simulations are available, and $1\%$ when we have $10$ of them, still with the smoothed $\widehat{\boldsymbol{\beta^\mathrm{diag}}}$. \item The bispectrum estimator of equilateral triangles on $40$ bins falls within the $2\%$ (resp. $1\%$) error band with $5$ simulations (resp. $10$) at large $k$, and performs better than the mean of $500$ \texttt{GADGET} simulations at large scales. \item The variance of matter PDF bins can also be reduced with CARPool, by factors between 3 and 10, implying that the number of required costly simulations is lowered by an order of magnitude. \end{itemize} In Appendix \ref{app:collFigs}, we provide the power spectrum and bispectrum results when the CARPool means are computed with $10$ simulation/surrogate pairs instead of the $5$ pairs presented so far. \section{Discussion and Conclusions}\label{sec:conclusions} We presented Convergence Acceleration by Regression and Pooling (CARPool), a general scheme for reducing variance on estimates of large-scale structure statistics. It operates on the idea of forming a combination (pooling) of a small number of accurate simulations with a larger number of fast but approximate surrogates in such a way as not to introduce systematic error (zero bias) in the combination. The result is equivalent to having run a much larger number of accurate simulations. This approach is particularly well adapted to cosmological applications, where our detailed physical understanding has resulted in a number of perturbative and non-perturbative methods to build fast surrogates for high-accuracy cosmological simulations. To show the operation and promise of the technique, we computed high-accuracy and low-variance predictions for statistics of \texttt{GADGET-III} cosmological $N$-body simulations in the $\Lambda$CDM model at $z=0.5$. A large number of surrogates are available; for illustration we selected the approximate particle mesh solver \texttt{L-PICOLA}. For three different examples of statistics, the matter power spectrum, the matter bispectrum, and the probability density function of matter fractional overdensity, CARPool reduces variance by factors of 10 to 100 even in the non-linear regime, and by much larger factors on large scales. Using only 5 \texttt{GADGET-III} simulations, CARPool is able to compute Fourier-space two-point and three-point functions of the matter distribution at a precision comparable to 500 \texttt{GADGET-III} simulations. CARPool requires 1) inexpensive access to surrogate solutions, and 2) strong correlations between the fluctuations of the surrogate model about its mean and the fluctuations of the expensive and accurate simulations. By construction, CARPool estimates are unbiased compared to the full simulations no matter how biased the surrogates might be.
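The unbiasedness claim is easy to verify on a toy problem; the following self-contained sketch uses a deliberately biased ``surrogate'' whose exact mean is known (all numbers are made up for illustration):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n_trials, N = 10000, 5
mu_y = 1.0                                   # true mean to estimate
means = []
for _ in range(n_trials):
    z = rng.normal(size=N)                   # shared seed content
    y = mu_y + z + 0.1 * rng.normal(size=N)  # "expensive" estimator
    c = 5.0 + 0.6 * z                        # biased, correlated surrogate
    beta = np.cov(y, c)[0, 1] / np.var(c, ddof=1)
    means.append(np.mean(y - beta * (c - 5.0)))  # exact mu_c = 5 here
print(np.mean(means), np.std(means))         # mean ~ 1.0: no bias from c
\end{verbatim}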
In all our examples we achieved substantial variance reductions even though the fast surrogate statistics were highly biased compared to the full simulations. So far we have presented CARPool as a way to accelerate the convergence of ensemble averages of accurate simulations. An equivalent point of view would be to consider it a method to remove approximation error from ensembles of fast mocks by running a small number of full simulations. Such simulations often already exist, as in our case with the {\it Quijote} simulations, not least because strategies to produce fast surrogates are often tested against a small number of simulations. In some cases there are opportunities to use CARPool almost for free: for instance, using linear theory from the initial conditions as a surrogate model has the advantage that $\boldsymbol{\mu_c}$ (the mean linear theory power spectrum) is perfectly known \textit{a priori}. In addition, the de-correlation between linearly and non-linearly evolved perturbations is well-studied, and can be used to set $\boldsymbol{\beta}$. Even for just a single $N$-body simulation, and without the need to estimate $\boldsymbol{\mu_c}$ from an ensemble of surrogates, this would remove cosmic variance on the largest scales better than in our numerical experiments with \texttt{L-PICOLA}, which are limited by the uncertainty of the $\boldsymbol{\mu_c}$ estimate. Regardless of the details of the implementation, the reduction of sample variance on observables could be used to avoid having to run ensembles of simulations (or even surrogates) at the full survey volume. This would simplify simulation efforts for upcoming large surveys since memory limitations rather than computational time are currently the most severe bottleneck for full-survey simulations \citep{2017ComAC...4....2P}. In comparison to other methods of variance reduction, CARPool has the main advantage of guaranteeing lack of model error (``bias'') compared to the full simulation. ``Fixing'' \citep{2016PhRvD..93j3519P,2016MNRAS.462L...1A} explicitly modifies the statistics of the generated simulation outputs; which observables are unbiased must be checked on a case-by-case basis, either through theoretical arguments or through explicit simulation \citep{Villaescusa_Navarro_2018}. \citet{2020MNRAS.496.3862K} argue that ``fixed'' field initialisation is unsuitable for simulation suites that aim to estimate accurate covariance matrices, and they are pessimistic about the possibility of generating mock galaxy catalogues solely with this technique. \citet{2016PhRvD..93j3519P} and \citet{2016MNRAS.462L...1A} also introduce and study the ``pairing'' technique. ``Pairing'' reduces variance for $k$-space observables (such as the power spectrum) by a factor of $\mathcal{O}(1)$ by combining two simulations whose initial conditions only differ by an overall minus sign, that is, they are \textit{phase-flipped}. This technique can be analysed simply in the control variates framework of CARPool. Consider, for the moment, the phase-flipped simulation as the surrogate. The mean of an ensemble of phase-flipped simulations is identical to the mean of the unflipped simulations by symmetry. ``Pairing'' then amounts to taking $\boldsymbol{\beta}=-1$ to cancel contributions of odd-order terms in the initial conditions \citep{2016PhRvD..93j3519P,2016MNRAS.462L...1A} and thereby reduce variance on the simulation output.
Inserting this $\boldsymbol{\beta}$ into equation \eqref{eq:scalarCV} and taking the expectation shows that ``pairing'' is an unbiased estimator of the simulation mean. Other opportunities for exploiting the control variates principle abound; related ideas have been used in the past. As an example, a very recent study \citep{2020arXiv200711417S} succeeds in reducing the variance of the quadrupole estimator of the two-point clustering statistic in redshift space. In this case, the variance reduction is achieved by combining different, correlated lines of sight through the halo catalogue of the Outer Rim simulation. Though not driven by a general theoretical framework that guarantees unbiasedness and optimal variance reduction, their approach does not, for the specific application at hand, require pre-computation of fast surrogates, and it uses a control matrix set by physical assumptions. While we intentionally refrained from tuning CARPool for this first study, there are opportunities to use physical insight to adapt it for cosmological applications. For instance, the one-point remapping technique proposed by \citet{2013JCAP...11..048L}, which allows us to increase the cross-correlation between LPT-evolved density fields and full $N$-body simulations, could be a surrogate itself or improve snapshots of a chosen surrogate for CARPool. In future work we plan to explore intermediate forms of CARPool between the multivariate and univariate versions we study in this paper. Any given entry of $\boldsymbol{y}$ could be predicted by an optimal combination of a small subset of $\boldsymbol{c}$. In this case, the variance reduction could be improved compared to the univariate case, while the reduced dimension of the control matrix would ensure a stable estimate using a moderate number of simulations. The CARPool setup can be applied to numerous ``$N$-body code plus surrogate'' pairs for cosmology. Rather than using a single surrogate, taking advantage of multiple low-fidelity methods for variance reduction is also a possibility to explore, especially if the cost of running a large number of surrogates is non-negligible. For instance, taking linear theory as a second surrogate in addition to \texttt{L-PICOLA} would have strongly reduced the number of \texttt{L-PICOLA} runs required to match the variance of the $\boldsymbol{\mu_c}$ estimate to the massively reduced variance of $\boldsymbol{y}-\boldsymbol{\beta}\left(\boldsymbol{c} - \boldsymbol{\mu_c} \right)$. In this regard, the multi-fidelity Monte Carlo scheme of \citet{doi:10.1137/15M1046472} and the approximate control variates framework of \citet{GORODETSKY2020109257} are recent examples of techniques that reduce variance with multiple surrogates for a fixed computational budget. Furthermore, we can combine CARPool with other techniques. For instance, if the paired-fixed fields initialisation of \citet{2016MNRAS.462L...1A} is found to be unbiased in practice for a particular statistic, then one can combine it with CARPool for further variance reduction. The simplicity of the theory behind CARPool makes the method attractive for various applications both in and beyond cosmology, as long as the conditions given above are satisfied.
Our results suggest that CARPool allows estimating the expectation values of any desired large-scale structure correlators with negligible variances from a small number of accurate simulations, thereby providing a useful complement to analytical approaches such as higher-order perturbation theory or effective field theory. We are planning to explore a number of these applications in upcoming publications. \section*{Acknowledgements} We thank Martin Crocce, Janis Fluri, Cullan Howlett and Hans Arnold Wither for their advice on COLA, and Boris Leistedt for stimulating discussions. N.C. acknowledges funding from LabEx ENS-ICFP (PSL). B.D.W. acknowledges support by the ANR BIG4 project, grant ANR-16-CE23-0002 of the French Agence Nationale de la Recherche; and the Labex ILP (reference ANR-10-LABX-63) part of the Idex SUPER, and received financial state aid managed by the Agence Nationale de la Recherche, as part of the programme Investissements d'avenir under the reference ANR-11-IDEX-0004-02. The Flatiron Institute is supported by the Simons Foundation. Y.A. is supported by LabEx ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*. F.V.N acknowledges funding from the WFIRST program through NNG26PJ30C and NNN12AA01C. \bibliographystyle{mnras}
\section{Introduction} \label{sec:introduction} If $G$ is a graph and $\ensuremath{\mathcal{H}}$ is a set of subgraphs of $G$, we say that an edge-coloring of $G$ is $\ensuremath{\mathcal{H}}$-{\it polychromatic } if every graph from $\ensuremath{\mathcal{H}}$ has all colors present in $G$ on its edges. The $\ensuremath{\mathcal{H}}$-polychromatic number of $G$, denoted $\poly_\ensuremath{\mathcal{H}}(G)$, is the largest number of colors in an $\ensuremath{\mathcal{H}}$-polychromatic coloring. If an $\ensuremath{\mathcal{H}}$-polychromatic coloring of $G$ uses $\ensuremath{\poly_{\sH}(G)}$ colors, it is called an {\it optimal } $\ensuremath{\mathcal{H}}$-polychromatic coloring of $G$. Alon \emph{et al.} \cite{Alon:2007cd} found a lower bound for $\ensuremath{\poly_{\sH}(G)}$ when $G=Q_n$, the $n$-dimensional hypercube, and $\ensuremath{\mathcal{H}}$ is the family of all subgraphs isomorphic to $Q_d$, where $d$ is fixed. Offner \cite{Offner:2008vb} showed this lower bound is, in fact, the exact value for all $d$ and sufficiently large $n$. Bialostocki \cite{Bialostocki:1983wo} showed that if $d=2$, then the polychromatic number is $2$ and that any optimal coloring uses each color about half the time. Goldwasser \emph{et al.} \cite{group_paper} considered the case when $\ensuremath{\mathcal{H}}$ is the family of all subgraphs isomorphic to $Q_d$ minus an edge or $Q_d$ minus a vertex. Bollobas \emph{et al.} \cite{BPRS} treated the case where $G$ is a tree and $\ensuremath{\mathcal{H}}$ is the set of all paths of length at least $r$, where $r$ is fixed. Goddard and Henning \cite{Goddard:2018} considered vertex colorings of graphs such that each open neighborhood gets all colors. For large $n$, it makes sense to consider $\ensuremath{\poly_{\sH}(K_n)}$ only if $\ensuremath{\mathcal{H}}$ consists of sufficiently large graphs. Indeed, if the graphs from $\ensuremath{\mathcal{H}}$ have at most a fixed number $s$ of vertices, then $\ensuremath{\poly_{\sH}(K_n)} =1$ for sufficiently large $n$ by Ramsey's theorem, since even with only two colors there exists a monochromatic clique with $s$ vertices. Axenovich \emph{et al.} \cite{previous_paper} considered the case where $G=K_n$ and $\ensuremath{\mathcal{H}}$ is one of three families of spanning subgraphs: perfect matchings (so $n$ must be even), $2$-regular graphs, and Hamiltonian cycles. They determined $\ensuremath{\poly_{\sH}(K_n)}$ precisely for the first of these and to within a small additive constant for the other two. In this paper, we determine the exact $\ensuremath{\mathcal{H}}$-polychromatic number of $K_n$, where $q$ is a fixed nonnegative integer and $\ensuremath{\mathcal{H}}$ is one of three families of graphs: matchings spanning precisely $n-q$ vertices, $(n-q)$-cycles, and $2$-regular graphs spanning at least $n-q$ vertices. (Setting $q=0$ gives the results of Axenovich \emph{et al.} in \cite{previous_paper} without the constant.) This paper is organized as follows. We give a few definitions and state the main results in Section \ref{sec:main_results}. We give some more definitions in Section \ref{Definitions}. The optimal polychromatic colorings in this paper are all based on a type of ordering, and in Section \ref{Lemmas} we state and prove the technical ordering lemmas we will need. In Section \ref{sec:optimal_polychromatic_colorings} we describe precisely the various ordered optimal polychromatic colorings of $K_n$. In Section \ref{sec:proof_of_theorem:one} we prove Theorem \ref{theorem:one}, a result about matchings.
In Section \ref{sec:_c_q_polychromatic_numbers_1_and_2} we use some classical results on Ramsey numbers for cycles to take care of polychromatic numbers 1 and 2 for cycles. In Section \ref{sec:proofs_of_theorem_and_lemmas_on_long_cycles} we prove Theorem \ref{theorem:six}, a result about coloring cycles, and use some results on long cycles in the literature to prove a lemma we need. In Section \ref{Theorems} we give the rather long proofs of the three main lemmas we need. In Section \ref{sec:polychromatic_cyclic_ramsey_numbers} we show how our results can be reconstituted in a context which generalizes the classical results on Ramsey numbers of cycles presented in Section \ref{sec:_c_q_polychromatic_numbers_1_and_2}. In Section \ref{sec:conjectures_and_closing_remarks} we state a general conjecture of which most of our results are special cases. \section{Main Results} \label{sec:main_results} We call an edge coloring $\varphi$ of $K_n$ {\it ordered} if there exists an ordering $v_1,v_2,\ldots,v_n$ of $V(K_n)$ such that $\varphi(v_i v_j)=\varphi(v_i v_m)$ for all $1\leq i<j<m\leq n$. Moreover this coloring is {\it simply-ordered} if for all $i<j<m$, $\varphi(v_i v_m)=\varphi(v_j v_m) =a$ implies that $\varphi(v_tv_m)=a$ for all $i\leq t\leq j$. Simply-ordered colorings play a fundamental role in this paper. An ordered edge coloring $\varphi$ induces a vertex coloring $\varphi'$ on $V(K_n)$ called the {\it $\varphi$-inherited coloring}, defined by $\varphi'(v_i)=\varphi(v_i v_m)$ for $i<m\leq n$ and $\varphi'(v_n)=\varphi'(v_{n-1})$. We can represent the induced vertex coloring $\varphi'$ by the sequence $c_1,c_2,\ldots,c_n$ of colors, where $c_i=\varphi'(v_i)$ for each $i$. A \emph{block} in this sequence is a maximal set of consecutive vertices of the same color. If $\varphi$ is simply-ordered then the vertices in each color class appear in a single block, so in that case, the number of blocks equals the number of colors. Let $q$ be a fixed nonnegative integer. We define four families of subgraphs of $K_n$ as follows. \begin{enumerate} \item $F_q(n)$ is the family of all matchings in $K_n$ spanning precisely $n-q$ vertices (so $n-q$ must be even) \item $C_q(n)$ is the family of all cycles of length precisely $n-q$ \item $R_q(n)$ is the family of all $2$-regular subgraphs spanning at least $n-q$ vertices. \item $C_q^*(n)$ is the family of all cycles of length precisely $n-q$ where $n$ and $q$ are such that $\poly_{C_q(n)}(K_n)\geq 3$. \end{enumerate} Further, let $\pfq(n)=\poly_{F_q(n)}(K_n)$, $\pcq(n)=\poly_{C_q(n)}(K_n)$, and $\prq(n)=\poly_{R_q(n)}(K_n)$. Our main result is that for $F_q(n), R_q(n)$, and $C^*_q(n)$ there exist optimal polychromatic colorings which are simply ordered, or almost simply ordered (except for $C_q(n)$ if $\pcq(n)=2$). Once we know there exists an optimal simply ordered (or nearly simply ordered) coloring, it is easy to find it and to determine a formula for the polychromatic number. Our main results are the following. \begin{theorem}\label{theorem:one} For all integers $q$ and $n$ such that $q$ is nonnegative and $n-q$ is positive and even, there exists an optimal simply-ordered $F_q$-polychromatic coloring of $K_n$. \end{theorem} \begin{theorem}\label{theorem:two} \cite{previous_paper} If $n\geq 3$, then there exist optimal $R_0$-polychromatic and $C_0$-polychromatic colorings of $K_n$ which can be obtained from simply-ordered colorings by recoloring one edge. 
\end{theorem} \begin{theorem}\label{theorem:three} If $n\geq 4$, then there exist optimal $R_1$-polychromatic and $C_1$-polychromatic colorings of $K_n$ which can be obtained from simply-ordered colorings by recoloring two edges. \end{theorem} \begin{theorem}\label{theorem:four} Let $q\geq 2$ be an integer. If $n\geq q+3$, then there exists an optimal simply-ordered $R_q$-polychromatic coloring of $K_n$. If $n\geq q+4$, then there exists an optimal simply-ordered $C_q$-polychromatic coloring except if $n\in[2q+2,3q+2]$ and $n-q$ is odd. \end{theorem} \begin{theorem}\label{theorem:five} Suppose $q\geq 2$ and $n\geq 6$. \begin{enumerate}[label=\alph*)] \item If $n-q$ is even then there exists a $C_q$-polychromatic 2-coloring of $K_n$ if and only if $n\geq 3q+3$. \item If $n-q$ is odd then there exists a $C_q$-polychromatic 2-coloring of $K_n$ if and only if $n\geq 2q+2$. \end{enumerate} \end{theorem} Theorem \ref{theorem:five} follows from results of Bondy and Erd{\H{o}}s \cite{Bondy:1972} and Faudree and Schelp \cite{Faudree:1974}. The following result, which is needed for the proof of Theorem \ref{theorem:four}, may be of independent interest, so we state it as a theorem: \begin{theorem}\label{theorem:six} Let $n$ and $j$ be integers with $4\leq j\leq n$, and let $\varphi$ be an edge-coloring of $K_n$ with at least three colors so that every $j$-cycle gets all colors. Then every cycle of length at least $j$ gets all colors under $\varphi$. \end{theorem} The statements about cycles in Theorems \ref{theorem:two}--\ref{theorem:five} can be used to get an extension of the result of Faudree and Schelp \cite{Faudree:1974} in the following manner. Let $s$ and $t$ be integers with $t\geq 2$, $s\geq 3$, and $s\geq t$. The $t$-polychromatic cyclic Ramsey number $\pr_t(s)$ is the smallest integer $N\geq s$ such that in any $t$-coloring of the edges of $K_N$ there exists an $s$-cycle whose edges do not contain all $t$ colors. Note that in the special case $t=2$, this is the classical Ramsey number for cycles, the smallest integer $N$ such that in any $2$-coloring of the edges of $K_N$ there exists a monochromatic $s$-cycle. These numbers were determined for all $s$ by Faudree and Schelp \cite{Faudree:1974}, confirming a conjecture of Bondy and Erd{\H{o}}s \cite{Bondy:1972}. \begin{theorem}\label{extension} Let $\pr_t(s)$ be the smallest integer $n\geq s\geq 3$ such that in any $t$-coloring of the edges of $K_n$ there exists an $s$-cycle whose edges do not contain all $t$ colors. If $t\geq 3$, \[ \pr_t(s)=\begin{cases} s, & \mathrm{if\ } 3<s\leq 3\cdot 2^{t-3}\\ s+1, & \mathrm{if\ } s\in [3\cdot2^{t-3}+1,5\cdot2^{t-2}-2]\\ s+2, & \mathrm{if\ } s\in [5\cdot 2^{t-2}-1, 5\cdot 2^{t-1}-4]\\ s + \round\left(\frac{s-2}{2^t-2}\right), & \mathrm{if\ } s\geq 5\cdot2^{t-1}-3 \end{cases} \] \end{theorem} \noindent where $\round\left(\frac{s-2}{2^t-2}\right)$ is the closest integer to $\frac{s-2}{2^t - 2}$, rounding up if it is $\frac{1}{2}$ more than an integer. \section{Definitions} \label{Definitions} Recall that if $\varphi$ is an ordered edge coloring of $K_n$ with respect to the ordering $v_1, \ldots, v_n$ of its vertices, we say that $\varphi'$ is the {\it $\varphi$-inherited coloring} (or just {\it inherited coloring}) if it is the vertex coloring of $K_n$ defined by $\varphi'(v_i)=\varphi(v_i v_j)$ for $1\leq i<j\leq n$ and $\varphi'(v_n)=\varphi'(v_{n-1})$.
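To illustrate these definitions, consider a small example. Take $n=5$ and define $\varphi$ on $E(K_5)$ by $\varphi(v_i v_j)=c_i$ for $i<j$, where $(c_1,c_2,c_3,c_4,c_5)=(1,2,2,3,3)$. Then $\varphi$ is simply-ordered, its inherited coloring is $\varphi'(v_i)=c_i$, and the color classes $M_1=\{v_1\}$, $M_2=\{v_2,v_3\}$, and $M_3=\{v_4,v_5\}$ form three blocks. By contrast, the ordered coloring obtained in the same way from the sequence $(1,2,1,3,3)$ is not simply-ordered, since color $1$ appears in two blocks.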
Given an ordering of $V(K_n)$, any vertex coloring $\varphi'$ such that $\varphi'(v_{n-1})=\varphi'(v_n)$ uniquely determines a corresponding ordered coloring. We define a \emph{color class $M_i$ of color $i$} to be the set of all vertices $v$ where $\varphi'(v)=i$. In this paper, we shall always think of the ordered vertices as arranged on a horizontal line with $v_i$ to the left of $v_j$ if $i<j$. We say that an edge $v_iv_m$, $i<m$, goes from $v_i$ to the right and from $v_m$ to the left. If $X$ is a (possibly empty) subset of $V(K_n)$, we say that the edge-coloring $\varphi$ of $K_n$ is \begin{itemize} \item \emph{$X$-constant} if for any $v\in X$, $\varphi(v u)=\varphi(v w)$ for all $u, w\in V\setminus X$, \item \emph{$X$-ordered} if it is $X$-constant and the vertices of $X$ can be ordered $x_1, \ldots, x_m$ such that for each $i = 1,\ldots, m$, $\varphi(x_i x_p ) = \varphi(x_i x_m) = \varphi(x_i w)$ for all $i<p\leq m$ and all $w\in V\setminus X$. \end{itemize} If $Z$ is a nonempty subset of $V(K_n)$ we say $\varphi$ is \begin{itemize} \item \emph{$Z$-quasi-ordered} if \begin{enumerate} \item $\varphi$ is $Z$-constant, \item each vertex $v_i$ in $Z$ is incident to precisely $n-2$ edges of one color, which we call the \emph{main color} of $v_i$, and one edge $v_i v_j$ of another color, where $v_j\in Z$; if that other color is $t$, then $v_j$ is incident to precisely $n-2$ edges of color $t$. \end{enumerate} \end{itemize} It is not hard to show that there are only two possibilities for the set $Z$ in a $Z$-quasi-ordered coloring: \begin{enumerate} \item $\abs{Z}=3$, the three vertices in $Z$ have different main colors, and there is one edge in $Z$ of each of these colors; \item $\abs{Z}=4$, with two vertices $u,v$ in $Z$ with one main color, say $i$, and two vertices $y,z$ in $Z$ with another main color, say $j$, and $\varphi(uv)=\varphi(uy)=\varphi(vz)=i$, $\varphi(yz)=\varphi(yv)=\varphi(zu)=j$. \end{enumerate} We further say that $\varphi$ is \begin{itemize} \item \emph{quasi-ordered} if it is $Z$-quasi-ordered and $\varphi$ restricted to $V\setminus Z$ is ordered, \item \emph{quasi-simply-ordered} if it is $Z$-quasi-ordered and $\varphi$ restricted to $V\setminus Z$ is simply-ordered, \item \emph{nearly $X$-ordered} if it is $Z$-quasi-ordered and the restriction of $\varphi$ to $V(K_n)\setminus Z$ is $T$-ordered for some (possibly empty) subset $T$ of $V(K_n)\setminus Z$ and $X=Z\cup T$. (If $\varphi$ is nearly $X$-ordered then one or two edges could be recolored to get an $X$-ordered coloring.) \end{itemize} It is easy to check that if $\varphi$ is quasi-ordered (quasi-simply-ordered) for some set $Z$ then if $\abs{Z}=3$ one edge can be recolored, and if $\abs{Z}=4$, then two edges can be recolored, to get an ordered (simply-ordered) coloring. The \emph{maximum monochromatic degree} of an edge coloring of $K_n$ is the maximum number of edges of the same color incident with a single vertex. If the maximum monochromatic degree of a coloring is $d$, and the vertex $v$ is incident with $d$ edges of color $t$, and the other $n-1-d$ edges incident with $v$ have color $s$, we say $v$ is a $t$-max vertex and also a \emph{$(t,s)$-max vertex} with \emph{majority color $t$} and \emph{minority color $s$}. We extend the notion of inherited to quasi-ordered colorings as follows.
If $\varphi$ is a quasi-ordered coloring with $\psi$ the ordered coloring which is a restriction of $\varphi$ to $V\setminus Z$, we define $\varphi'$, the $\varphi$-inherited coloring, by letting $\varphi'(x)$ equal the main color of $x$ if $x\in Z$ and $\varphi'(y)=\psi'(y)$ if $y\not\in Z$. We think of the vertices in $Z$ preceding those not in $Z$, in the order left to right, and if $\abs{Z}=4$ we list two vertices in $Z$ with the same main color first, then the other two vertices with the same main color. \section{Ordering Lemmas} \label{Lemmas} Let $\varphi$ be an ordered edge coloring of $K_n$ with vertex order $v_1,v_2,\ldots,v_n$ and colors $1, \ldots, k$, and let $\varphi'$ be the inherited coloring of $V(K_n)$. For each $t\in[k]$ and $j\in[n]$, let $M_t$ be the color class of color $t$ in $\varphi'$ and let $M_t(j)=M_t\cap\{v_1,v_2,\ldots,v_j\}$. The next lemma is a key structural result that characterizes ordered polychromatic colorings. \begin{lem}\label{orderedPClemma} Let $\varphi:E(K_n) \to[k]$ be an ordered or quasi-ordered coloring with vertex order $v_1,v_2,\ldots, v_n$. Then the following statements hold: \begin{enumerate}[label=(\Roman*)] \item\label{OPC-1F} $\varphi$ is $F_q$-polychromatic $\Longleftrightarrow$ $\forall t\in [k]$ $\exists j\in[n]$ such that $\abs{M_t(j)} >\frac{j+q}{2}$, \item\label{OPC-HC} $\varphi$ is $C_q$-polychromatic $\Longleftrightarrow$ $\forall t\in [k]$ either \begin{enumerate}[label=(\alph*)] \item\label{HCone} $\exists j\in[q+1,n-1]$ such that $\abs{M_t(j)} \geq \frac{j+q}{2}$ or \item\label{HCtwo} $q=0$, $\varphi$ is $Z$-quasi-ordered with $\abs{Z}=3$ and $t$ is the color of some edge in $Z$ or \item\label{HCthree} $q=1$, $\varphi$ is $Z$-quasi-ordered with $\abs{Z}=4$ and $t$ is the color of some edge in $Z$. \end{enumerate} \item\label{OPC-2F} $\varphi$ is $R_q$-polychromatic $\Longleftrightarrow$ $\forall t\in [k]$ either \begin{enumerate} \item\label{2Fone} $\exists j\in[n]$ such that \begin{enumerate}[label=(\roman*)] \item\label{one} $\abs{M_t(j)}>\frac{j+q}{2}$ or \item\label{two} $\abs{M_t(j)}=\frac{j+q}{2}$ and $j\in\{2+q,n-2\}$ or \item\label{three} $\abs{M_t(j)}=\frac{j+q}{2}$ and $\abs{M_t(j+2)}=\frac{j+q+2}{2}$ where $j\in[4+q,n-3].$ \end{enumerate} \item\label{2Ftwo} $q=0$, $\varphi$ is $Z$-quasi-ordered and $t$ is the color of some edge in $Z$ \item\label{2Fthree} $q=1$, $\varphi$ is $Z$-quasi-ordered with $\abs{Z}=3$ and $t$ is the color of some edge in $Z$ \end{enumerate} \end{enumerate} \end{lem} \begin{proof} Note that to prove the lemma, it is sufficient to consider an arbitrary color $t$ and show for $\ensuremath{\mathcal{H}} \in \{ F_q, C_q, R_q \}$ and for each $H\in \ensuremath{\mathcal{H}}$, that the given respective conditions are equivalent to $H$ containing an edge of color $t$. \emph{\ref{OPC-1F}} Let $j$ be an index such that $\abs{M_t(j)}=m_j>(j+q)/2$ and let $H\in F_q(n)$. Let $x_1, \ldots, x_{m_j}$ be the vertices of $M_t(j)$ in order and let $y_1,\ldots,y_{j-m_j}$ be the other vertices of $\{v_1,v_2,\ldots,v_{j}\}$ in order. Since $j-m_j<\frac{j-q}{2}$ and $m_j-q>\frac{j-q}{2}$, at least one edge of $H$ with an endpoint in $M_t(j)$ must go to the right, and thus have color $t$. On the other hand, by way of contradiction, assume that for each $j\in[n]$, $\abs{M_t(j)}\leq (j+q)/2$. Letting $m=\abs{M_t}$, we have $m\leq (n+q)/2$. Consider a matching that spans all vertices except for $q$ vertices in $M_t$.
Let $x_1, \ldots, x_{m-q}$ be the $m-q$ vertices remaining from $M_t$ in order and let $y_1,\ldots,y_{n-m}$ be the vertices outside of $M_t$ in order. Note that since $m\leq (n+q)/2$, we have $n\geq 2m-q$, so $n-m\geq m-q$. Now, let $H$ consist of the edges $x_1y_1, x_2y_2, \ldots, x_{m-q}y_{m-q}$ and a perfect matching on $\{y_{m-q+1}, \ldots, y_{n-m}\}$ (if this set is non-empty). We will show that $y_i$ precedes $x_i$ in the order $v_1,v_2,\ldots,v_n$ for each $i\in[m-q]$, so $H$ has no edge of color $t$. By way of contradiction, assume $x_i$ precedes $y_i$ for some $i\in[m-q]$. Letting $j=2i-1+q$, $y_i$ cannot be among the first $j$ vertices in the order $v_1,v_2,\ldots,v_n$, because if it were there would be at least $i+q$ vertices of color $t$ among these $j$ vertices, so a total of at least $2i+q>j$ vertices. Hence \[ \frac{j+q}{2}=\frac{2i+2q-1}{2}<i+q\leq \abs{M_t(j)}\leq \frac{j+q}{2}, \] which is impossible. Hence $y_i$ precedes $x_i$ for each $i$ and $\varphi$ is not $F_q$-polychromatic. \emph{\ref{OPC-HC}} If $t$ is a color such that \ref{HCone} holds with strict inequality, the argument in \ref{OPC-1F} shows there is an edge of $H$ with color $t$. If $\abs{M_t(j)}=\frac{j+q}{2}$ for some $j\in[q+1,n-1]$ and every edge in $H$ incident to a vertex in $M_t(j)$ goes to the left then, since each of these edges has its other vertex not in $M_t(j)$, $H$ contains $\frac{j-q}{2}$ vertices in $M_t(j)$ and the same number not in $M_t(j)$. If $\frac{j-q}{2}=1$, then the vertex in $M_t(j)$ is incident with at least one edge which goes to the right, and if $\frac{j-q}{2}\geq 2$ then $H$ contains a $2$-regular subgraph, which is impossible because an $(n-q)$-cycle cannot have a $2$-regular subgraph on fewer than $n-q$ vertices. If $t$ is such that \ref{HCtwo} holds, then note that $t$ must be the main color of a vertex in $Z$ and that the cycle must contain 2 edges incident with each vertex in $Z$. Any choice of these edges will contain an edge of color $t$ since only one edge incident with each vertex in $Z$ is not the main color of that vertex. If $t$ is such that \ref{HCthree} holds, then note that $t$ must be the main color of a vertex in $Z$ and any cycle on $n-1$ vertices must contain 2 edges incident with at least three of the four vertices in $Z$. Any choice of these edges will contain an edge of color $t$ since only one edge incident with each vertex in $Z$ is not the main color of that vertex. On the other hand, suppose that for each $j\in [q+1,n-1]$ we have $\abs{M_t(j)}<\frac{j+q}{2}$, and that $\varphi$ is not $Z$-quasi-ordered with $t$ a main color. In particular, when $j=n-2$, we have that $\abs{M_t(n-2)}<\frac{n+q}{2}-1$. Consider a cycle that spans all vertices except for $q$ vertices in $M_t$. Let $m=\abs{M_t}$, let $x_1,\ldots,x_{m-q}$ be the other $m-q$ vertices in $M_t$ in order, and let $y_1,\ldots,y_{n-m}$ be the vertices outside of $M_t$ in order. Note that $n-m\geq m-q$, since $n-m<m-q$ would give $n<2m-q\leq n$, which is impossible. Consider the cycle $y_1 x_1 y_2 x_2 \cdots y_{m-q} x_{m-q} y_{m-q+1}\cdots y_{n-m} y_1$. Suppose $y_i$ is to the right of $x_i$ for some $i$. Then at most $i$ of the first $j=2i+q$ vertices are not in $M_t(j)$, so $\abs{M_t(j)}\geq i+q=\frac{j+q}{2}$, which is impossible. Hence $y_i$ and $y_{i+1}$ are to the left of $x_i$ for each $1\leq i\leq m-q$, all edges of $H$ incident to $M_t$ go to the left, and thus are not of color $t$.
\begin{obs} If $H$ is a $2$-regular subgraph that has no edge of color $t$, then all edges of $H$ incident to $M_t$ go to the left, so at most half the vertices in $H$ are in $M_t$; moreover, if $\abs{M_t(j)}=\frac{j+q}{2}$, then of the first $j$ vertices, precisely $j-q$ are in $H$, precisely half of these in $M_t$, and if $j-q\geq 4$ then these $j-q$ vertices induce a 2-regular subgraph of $H$. \end{obs} \emph{\ref{OPC-2F}} Let $j$ be an index such that \ref{2Fone}~\ref{one}, \ref{two}, or \ref{three} holds. Assume first that \ref{one} holds, i.e., that $\abs{M_t(j)}>\frac{j+q}{2}$, and let $H$ be a $2$-factor. Then the argument given in \ref{OPC-1F} shows that at least one edge of $H$ with an endpoint in $M_t(j)$ must go to the right, and thus have color $t$. Assume that \ref{two} holds. If $j=2+q$, then $M_t$ contains $q+1$ of the first $q+2$ vertices, so $H$ contains a vertex in $M_t$ which has an edge that goes to the right, so there is an edge of color $t$ in $H$. If $j=n-2$ and $H$ has no edges of color $t$, then (by the previous observation) the subgraph of $H$ induced by $[n-2]$ is a $2$-factor. Since the remaining two vertices do not form a cycle, $H$ is not a $2$-factor, a contradiction. Finally, assume that \ref{three} holds. If $H$ does not have an edge of color $t$, then by the previous observation, $H$ has a 2-regular subgraph spanning $j-q+2$ vertices, which has a 2-regular subgraph spanning $j-q$ vertices, which is impossible. If \ref{2Ftwo} or \ref{2Fthree} holds, by the argument for \ref{OPC-HC}, $H$ has an edge of color $t$. On the other hand, suppose that none of \ref{2Fone}, \ref{2Ftwo}, or \ref{2Fthree} hold. We shall construct a $2$-factor that does not have an edge of color $t$. If $\abs{M_t(j)}<\frac{j+q}{2}$ for each $j\in[q+1,n-1]$, then there is a cycle with no color $t$ edge as described in \ref{OPC-HC}. If not, let $i_1,i_2,\ldots,i_h$ be the values of $j$ in $[4+q,n-3]$ for which $\abs{M_t(j)}=\frac{j+q}{2}$. Since \ref{2Fone}\ref{three} is not satisfied, $i_{\ell+1}-i_\ell$ is at least 4 and even for $\ell=1,2,\ldots,h-1$. As before, suppose there are $m$ vertices of color $t$. Let $x_1,x_2,\ldots,x_{m-q}$ be the last $m-q$ of these, in order, and let $y_1,y_2,\ldots,y_{n-m}$ be the other vertices, in order. Note that since $m\leq \frac{n+q}{2}$ we have $m-q\leq\frac{n-q}{2}$ and $n-m\geq\frac{n-q}{2}$. For each $\ell$ in $[1,h-1]$, moving left to right within the interval $[i_\ell+1,i_{\ell+1}]$, there are always more $y$'s than $x$'s (except an equal number of each at the end of the interval), since otherwise there would have been another value of $j$ between $i_\ell$ and $i_{\ell+1}$ where $\abs{M_t(j)}=\frac{j+q}{2}$. Form an $(i_{\ell+1}-i_\ell)$-cycle by alternately taking $y$'s and $x$'s, starting with the $y$ with the smallest subscript. Also form an $(i_1-q)$-cycle using the first $\frac{i_1-q}{2}$ $y$'s and the same number of $x$'s, and an $(n-i_h)$-cycle at the end, first alternating the $y$'s and $x$'s, putting any excess $y$'s at the end. \end{proof} \begin{lem}\label{O2SO} Let $\ensuremath{\mathcal{H}} \in \{F_q, R_q, C_q\}$. If there exists an ordered (quasi-ordered) $\ensuremath{\mathcal{H}}$-polychromatic coloring of $K_n$ with $k$ colors, then there exists one which is simply-ordered (quasi-simply-ordered) with $k$ colors. \begin{proof} Let $V(K_n) =[n]$ with the natural order. If $c'$ is a coloring of $[n]$, a {\it block} of $c'$ is a maximal interval of integers from $[n]$ which all have the same color.
So a simply-ordered $k$-polychromatic coloring has precisely $k$ blocks. We define a {\it block shift operation} as follows. Assume that $t\in[k]$ is a color for which there are at least $2$ blocks. Let $j(t)=j$ be the smallest integer so that $\abs{M_t(j)}>(j+q)/2$, if such exists. If there is a block $[m,s]$ in $M_t$ where $m>j$, delete this block, then take the color of the last vertex in the remaining sequence, and add $s-m+1$ more vertices with this color at the end of the sequence. If each block of color $t$ has its smallest element less than or equal to $j$, consider the block $B$ of color $t$ that contains $j$ and consider another block $B_1$ of color $t$ that is strictly to the left of $B$. Form a new coloring by ``moving'' $B_1$ next to $B$. We see that the resulting coloring has at least one fewer block. Let $c$ be an ordered (quasi-ordered) $F_q$-polychromatic coloring of $K_n$ on vertex set $[n]$ with $k$ colors such that the inherited vertex coloring $c'$ has the smallest possible number of blocks. Assume that color $t$ has at least $2$ blocks. Let $j(t)=j$ be the smallest integer so that $\abs{M_t(j)}>(j+q)/2$. Such $j$ exists by Lemma \ref{orderedPClemma}$\ref{OPC-1F}$, and the color of $j$ is $t$. Apply the block shifting operation. The condition from part $\ref{OPC-1F}$ of Lemma \ref{orderedPClemma} is still valid for all color classes, so the new coloring is $F_q$-polychromatic using $k$ colors. This contradicts the choice of $c$ having the smallest number of blocks. If $c$ is an ordered (quasi-ordered) $C_q$-polychromatic coloring of $K_n$, an argument very similar to the one above shows that if \ref{OPC-HC}\ref{HCone}, \ref{HCtwo}, or \ref{HCthree} holds, there exists a simply-ordered (quasi-simply-ordered) coloring that uses the same number of colors and that is $C_q$-polychromatic. Finally, let $c$ be an ordered (quasi-ordered) $R_q$-polychromatic coloring of $K_n$ on vertex set $[n]$ with $k$ colors such that the inherited vertex coloring $c'$ has the minimum possible number of blocks. Assume that $t\in[k]$ is a color for which there are at least $2$ blocks. If \ref{2Ftwo} or \ref{2Fthree} holds, then the block shifting operation gives a coloring that is still $R_q$-polychromatic with the same number of colors and fewer blocks. Thus, by Lemma \ref{orderedPClemma}\ref{OPC-2F} there exists $j$ such that \begin{enumerate}[label=($\arabic*$)] \item\label{first} $\abs{M_t(j)}>(j+q)/2$ or \item\label{seconda} $\abs{M_t(2+q)}=1+q$ or \item\label{secondb} $\abs{M_t(n-2)}= (n+q-2)/2$ or \item\label{secondc} $\abs{M_t(n-1)}= (n+q-1)/2$ or \item\label{third} $\abs{M_t(j)} = (j+q)/2$ and $\abs{M_t(j+2)} = (j+q+2)/2$ and $4+q\leq j\leq n-3.$ \end{enumerate} If \ref{first} holds, then we apply the block shifting operation and observe, as in the case of $F_q$, that the resulting coloring is still $R_q$-polychromatic with the same number of colors and fewer blocks. The case when \ref{seconda} applies is similar. Assume neither \ref{first} nor \ref{seconda} holds. If \ref{secondb} holds then, since $c'(v_{n-1})=c'(v_n)$, neither $v_{n-1}$ nor $v_n$ can have color $t$. Hence there is another block of color $t$ vertices to the left of the one containing $v_{n-2}$, so we can do a block shift operation to reduce the number of blocks, a contradiction. The same argument works if \ref{secondc} holds. Finally, assume that none of \ref{first}--\ref{secondc} holds, but \ref{third} holds. This implies that $c'(j)=c'(j+2)=t$ and $c'(j+1)=u\neq t$.
Now define $c''$ by $c''(i)=c'(i)$ if $i\not\in\{j+1, j+2\}$, $c''(j+1)=t$, and $c''(j+2)=u$. Clearly $c''$ has at least one fewer block than $c'$. Since $j+q+1$ is odd, the only situation where $c''$ would not be $R_q$-polychromatic is if $\abs{M_u(j+1)}>\frac{j+q+1}{2}$. However, then $\abs{M_u(j-1)}=\abs{M_u(j+1)}-1>\frac{j+q-1}{2}$, so $c''$ is $R_q$-polychromatic after all. \end{proof} \end{lem} \section{Optimal Polychromatic Colorings} \label{sec:optimal_polychromatic_colorings} The following seven colorings are all optimal $F_q$-, $R_q$-, or $C_q$-polychromatic colorings for various values of $q$ and $n$. Each of them is simply-ordered or quasi-simply-ordered. We describe the color classes for each, and give a formula for the polychromatic number $k$ in terms of $q$ and $n$. \subsection{$F_q$-polychromatic coloring $\varphi_{F_q}$ of $E(K_n)$ (even $n-q\geq 2$)} \label{subsec:_k_1f_polychromatic} Let $q$ be nonnegative and $n-q$ positive and even, with $k$ a positive integer such that \begin{equation}\label{n_eq_F} (q+1)(2^k-1)\leq n<(q+1)(2^{k+1}-1). \end{equation} Let $\varphi_{F_q}$ be the simply-ordered edge $k$-coloring with colors $1,2,\ldots,k$ and inherited vertex $k$-coloring $\varphi'_{F_q}$ with successive color classes $M_1, M_2,\ldots, M_k$, moving left to right, such that $\abs{M_i}=2^{i-1}(q+1)$ if $i<k$ and $\abs{M_k}=n-\sum_{i=1}^{k-1}\abs{M_i}=n-(2^{k-1}-1)(q+1)$. We have $k\leq \log_2\frac{n+q+1}{q+1}<k+1$, so $\pfq(n)=k=\floor{\log_2\frac{n+q+1}{q+1}}$. \subsection{$R_q$-polychromatic coloring $\varphi_{R_q}$ ($q\geq 2$)} \label{sec:_prq_polychromatic_qgeq_2_} If $q\geq 2$, $n\geq q+3$, and $n$ and $k$ are such that \eqref{n_eq_F} is satisfied, we let $\varphi_{R_q}=\varphi_{F_q}$ (same color classes), giving us the same formula for $k$ in terms of $n$. \subsection{$C_q$-polychromatic coloring $\varphi_{C_q}$ ($q\geq 2$)} \label{subsec:_k_hc_polychromatic_coloring} If $q\geq 2$, $n\geq q+3$, and \begin{equation}\label{n_eq_C} (2^{k}-1)q+2^{k-1}<n\leq (2^{k+1}-1)q+2^k, \end{equation} let $\varphi_{C_q}$ be the simply-ordered edge $k$-coloring with colors $1,2,\ldots,k$ and inherited vertex $k$-coloring $\varphi'_{C_q}$ with successive color classes $M_1,M_2,\ldots,M_k$ of sizes given by: \begin{align*} \abs{M_1}&=q+1\\ \abs{M_i}&=2^{i-1}q+2^{i-2} \mathrm{\ if\ }i\in[2,k-1]\\ \abs{M_k}&=n-\sum_{i=1}^{k-1}\abs{M_i}=n-2^{k-1}q-2^{k-2} \end{align*} From equation \eqref{n_eq_C} we get $\pcq(n) = k=\floor{\log_2\frac{2(n+q-1)}{2q+1}}$. \subsection{$R_0$-polychromatic coloring $\varphi_{R_0}$ ($q=0$)} \label{subsec:_k_2f_polychromatic_coloring} If $n\geq 3$ and $2^{k-1}-1\leq n<2^{k}-1$, let $\varphi_{R_0}$ be the quasi-simply-ordered coloring with $\abs{Z}=3$ and color class sizes $\abs{M_1}=\abs{M_2}=1$ and $\abs{M_3}=n-2$ if $3\leq n\leq 6$, and, if $n\geq 7$: \begin{align*} \abs{M_1}&=\abs{M_2}=\abs{M_3}=1\\ \abs{M_i}&=2^{i-2} \mathrm{\ if\ }i\in[4,k-1]\\ \abs{M_k}&=n-\sum_{i=1}^{k-1}\abs{M_i}=n-2^{k-2}+1 \end{align*} From this, we get $\poly_{R_0(n)}(K_n)=k=1+\floor{\log_2(n+1)}$ where $n\geq 3$.
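For example, when $n=7$ the formula for $\varphi_{R_0}$ gives $k=1+\floor{\log_2 8}=4$ (and indeed $2^{k-1}-1=7\leq 7<2^k-1=15$); the interval $[4,k-1]=[4,3]$ is empty, so the color class sizes are $\abs{M_1}=\abs{M_2}=\abs{M_3}=1$ and $\abs{M_4}=7-2^{2}+1=4$. Similarly, for $\varphi_{F_0}$ with $n=14$ we get $k=\floor{\log_2 15}=3$ and color class sizes $1$, $2$, and $11$.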
\subsection{$C_0$-polychromatic coloring $\varphi_{C_0}$ ($q=0$)} \label{sec:_c_0_polychromatic_q_0_} If $n\geq 3$ and $3\cdot 2^{k-3}<n\leq 3\cdot 2^{k-2}$, let $\varphi_{C_0}$ be the quasi-simply-ordered coloring with $\abs{Z}=3$ and color class sizes $\abs{M_1}=\abs{M_2}=1$ and $\abs{M_3}=n-2$ if $3\leq n\leq 6$, and, if $n\geq 7$: \begin{align*} \abs{M_1}&=\abs{M_2}=\abs{M_3}=1\\ \abs{M_i}&=3\cdot 2^{i-4} \mathrm{\ if\ }i\in[4,k-1]\\ \abs{M_k}&=n-\sum_{i=1}^{k-1}\abs{M_i}=n-3\cdot 2^{k-4} \end{align*} From this, we get $\poly_{C_0(n)}(K_n)=k=\floor{\log_2\frac{8(n-1)}{3}}$ where $n\geq 4$. \subsection{$R_1$-polychromatic coloring $\varphi_{R_1}$ ($q=1$)} \label{sec:_r_1_polychromatic_q_1_} If $n\geq 4$ and $3\cdot 2^{k-1}-2\leq n<3\cdot 2^k-2$, let $\varphi_{R_1}$ be the quasi-simply-ordered coloring with $\abs{Z}=4$ and color class sizes $\abs{M_1}=2$ and $\abs{M_2}=n-2$ if $4\leq n\leq 9$, and, if $n\geq 10$: \begin{align*} \abs{M_1}&=\abs{M_2}=2\\ \abs{M_i}&=3\cdot 2^{i-2} \mathrm{\ if\ }i\in[3,k-1]\\ \abs{M_k}&=n-\sum_{i=1}^{k-1}\abs{M_i}=n-3\cdot2^{k-2}+2 \end{align*} From this, we get $\poly_{R_1(n)}(K_n)=k=\floor{\log_2\frac{2(n+2)}{3}}$ where $n\geq 4$. \subsection{$C_1$-polychromatic coloring $\varphi_{C_1}$ ($q=1$)} \label{sec:_c_1_polychromatic_q_1_} If $n\geq 4$ and $5\cdot 2^{k-2}\leq n< 5\cdot 2^{k-1}$, let $\varphi_{C_1}$ be the quasi-simply-ordered coloring with $\abs{Z}=4$ and color class sizes $\abs{M_1}=\abs{M_2}=2$ and $\abs{M_3}=n-4$ if $4\leq n\leq 9$ (and then recolor every edge of color $3$ with color $2$), and, if $n\geq 10$: \begin{align*} \abs{M_1}&=\abs{M_2}=2\\ \abs{M_i}&=5\cdot 2^{i-3} \mathrm{\ if\ }i\in[3,k-1]\\ \abs{M_k}&=n-\sum_{i=1}^{k-1}\abs{M_i}=n-5\cdot 2^{k-3}+1 \end{align*} From this, we get $\poly_{C_1(n)}(K_n)=k=\floor{\log_2\frac{4n}{5}}$ where $n\geq 4$. \section{Proof of Theorem \ref{theorem:one} on Matchings} \label{sec:proof_of_theorem:one} We now prove Theorem \ref{theorem:one}. This proof is similar to the proof of Theorem 1 in \cite{previous_paper}. Let $k=\pfq(n)$ be the polychromatic number for matchings spanning $n-q$ vertices in $G=K_n=(V,E)$. Among all $F_q$-polychromatic colorings of $K_n$ with $k$ colors we choose ones that are $X$-ordered for a subset $X$ (possibly empty) of the largest possible size, and, of these, choose a coloring $c$ whose restriction to $V\setminus X$ has the largest possible maximum monochromatic degree. Let $v$ be a vertex of maximum monochromatic degree, $r$, in $c$ restricted to $G[V\setminus X]$, and let the majority color on the edges incident to $v$ in $V\setminus X$ be color $1$. By the maximality of $\abs{X}$, there is a vertex $u$ in $V\setminus X$ such that $c(uv)\neq 1$. Assume $c(uv)=2$. If every matching spanning $n-q$ vertices and containing $uv$ had another edge of color $2$, then the color of $uv$ could be changed to $1$, resulting in an $F_q$-polychromatic coloring where $v$ has a larger maximum monochromatic degree in $V\setminus X$, a contradiction. Hence, there is a matching $F$ spanning $n-q$ vertices in which $uv$ is the only edge with color $2$ in $c$. Let $c(vy_i)=1$, $y_i\in V\setminus X$, $i=1, \ldots, r$. Note that for each $i\in[r]$, $y_i$ must be in $F$. If not, then $F-\{uv\}+\{vy_i\}$ is a matching spanning $n-q$ vertices with no edge of color $2$ (since $uv$ was the unique edge of color $2$ in $F$ and $vy_i$ has color $1$). For each $i\in[r]$, let $y_i w_i$ be the edge of $F$ containing $y_i$ (perhaps $w_i=y_j$ for some $j\neq i$). See Figure \ref{fig:1Fswitch}.
We can get a different matching $F_i$ spanning $n-q$ vertices by replacing the edges $uv$ and $y_i w_i$ in $F$ with the edges $v y_i$ and $u w_i$. Since $F_i$ must have an edge of color $2$ and $c(v y_i)=1$, we must have $c(u w_i)=2$ for each $i\in[r]$. \begin{figure}[htbp] \centering \begin{tikzpicture}[every text node part/.style={align=center},scale=1,inner sep=1.75mm] \node[circle,ultra thick,draw=black,fill=white] (v) at (5,4) {$v$}; \node[circle,ultra thick,draw=black,fill=white] (u) at (7,4) {$u$}; \node[circle,ultra thick,draw=black,fill=white] (y1) at (1,2) {$y_1$}; \node[circle,ultra thick,draw=black,fill=white] (y2) at (3,2) {$y_2$}; \node[circle,ultra thick,draw=black,fill=white,inner sep=.3mm] (y3) at (5,2) {$y_3$\\$w_4$}; \node[circle,ultra thick,draw=black,fill=white,inner sep=.3mm] (y4) at (7,2) {$y_4$\\$w_3$}; \node (dots1) at (8,2) {\large $\ldots$}; \node[circle,ultra thick,draw=black,fill=white] (yr) at (9,2) {$y_r$}; \node[circle,ultra thick,draw=black,fill=white] (w1) at (1,0) {$w_1$}; \node[circle,ultra thick,draw=black,fill=white] (w2) at (3,0) {$w_2$}; \node (dots2) at (8,0) {\large $\ldots$}; \node[circle,ultra thick,draw=black,fill=white] (wr) at (9,0) {$w_r$}; \draw[ultra thick, red] (v) -- node[above] {2} (u); \draw[ultra thick, blue,dotted] (v) -- node[above left] {1} (y1); \draw[ultra thick, blue,dotted] (v) -- node[right] {1} (y2); \draw[ultra thick, blue,dotted] (v) -- node[right] {1} (y3); \draw[ultra thick, blue,dotted] (v) -- node[right] {1} (y4); \draw[ultra thick, blue,dotted] (v) -- node[above right] {1} (yr); \draw[ultra thick, black] (y3) -- (y4); \foreach \i in {1,2,r} { \draw[ultra thick, black] (y\i) -- (w\i); } \end{tikzpicture} \caption{Maximum monochromatic degree in an $F_q$-polychromatic coloring} \label{fig:1Fswitch} \end{figure} If $w_i\in X$ for some $i$ then, since $c$ is $X$-constant, $c(w_iy_i) = c(w_iu) =2$, so $y_i w_i$ and $uv$ are two edges of color $2$ in $F$, a contradiction. So, $w_i\in V\setminus X$. Thus $c(u v)=c(uw_1) = \cdots =c(uw_r) = 2$, and the monochromatic degree of $u$ in $V\setminus X$ is at least $r+1$, larger than that of $v$, a contradiction. Hence $X=V$, $c$ is ordered, and, by Lemma \ref{O2SO}, there exists a simply-ordered $F_q$-polychromatic coloring $c_s$ with $k$ colors. By Lemma \ref{orderedPClemma}$\ref{OPC-1F}$, if $M_1,M_2,\ldots,M_k$ are the successive color classes, moving left to right, of the inherited vertex coloring $c'_s$, then $\abs{M_t}\geq 2^{t-1}(q+1)$ for $t=1,2,\ldots,k$. Since this inequality holds with equality for $t=1,2,\ldots,k-1$ for the inherited vertex coloring $\varphi'_{F_q}$, the number of color classes of $c_s$ cannot be greater than that of $\varphi_{F_q}$, so $k\leq \floor{\log_2 \frac{n+q+1}{q+1}}$. \qed \section{$C_q$-polychromatic Numbers 1 and 2} \label{sec:_c_q_polychromatic_numbers_1_and_2} The following theorem is a special case of a theorem of Faudree and Schelp. \begin{theorem}\cite{Faudree:1974}\label{FS} Let $s\geq 5$ be an integer and let $c(s)$ denote the smallest integer $n$ such that in any 2-coloring of the edges of $K_n$ there is a monochromatic $s$-cycle. Then $c(s)=2s-1$ if $s$ is odd and $c(s)=\frac{3}{2}s-1$ if $s$ is even. \end{theorem} Faudree and Schelp actually determined all values of $c(r,s)$, the smallest integer $n$ such that in any coloring of the edges of $K_n$ with red and blue, there is either a red $r$-cycle or a blue $s$-cycle.
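In particular, $c(s)=c(s,s)$.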
Their theorem extended partial results and confirmed conjectures of Bondy and Erd\H{o}s \cite{Bondy:1972} and Chartrand and Schuster \cite{ChartrandSchuster} (who showed $c(3)=c(4)=6$). The coloring of $K_{2s-2}$ proving the lower bound for $s$ odd is a copy of $K_{s-1,s-1}$ of red edges with all other edges blue, while for $s$ even the coloring of $K_{\frac{3}{2}s-2}$ is a red $K_{\frac{s}{2}-1,s-1}$ with all other edges blue. \begin{proof}[Proof of Theorem \ref{theorem:five}] By Theorem \ref{FS}, with $s=n-q$, if $s\geq 5$ is odd then there is a $C_q$-polychromatic 2-coloring of $K_n$ if and only if $n\leq 2s-2=2(n-q)-2$, that is, if and only if $n\geq 2q+2$. If $s\geq 5$ is even then there is a $C_q$-polychromatic 2-coloring if and only if $n\leq \frac{3}{2}s-2=\frac{3}{2}(n-q)-2$, that is, if and only if $n\geq 3q+4$; since $n-q$ (and hence $n-3q$) is even, this is equivalent to $n\geq 3q+3$. Hence if $n\in[2q+2,3q+2]$ then $\pcq(n)=1$ if $n-q$ is even and $\pcq(n)=2$ if $n-q$ is odd. The smallest value of $n$ for which there is a simply-ordered $C_q$-polychromatic 2-coloring is $n=3q+3$, so there does not exist one if $n-q$ is odd and $n\leq 3q+2$. \end{proof} We remark that the only values for $q\geq2$ and $n$ such that there is no optimal simply-ordered $C_q(n)$-polychromatic coloring of $K_n$ are the ones given in Theorem \ref{theorem:five} ($n\in[2q+2,3q+2]$ and $n-q$ is odd), and $q=2$, $n=5$ (two edge-disjoint monochromatic $C_5$'s give a $2$-coloring of $K_5$ with no monochromatic $C_3$). \section{Proofs of Theorem \ref{theorem:six} and Lemmas on Long Cycles} \label{sec:proofs_of_theorem_and_lemmas_on_long_cycles} We will need some results on the existence of long cycles in bipartite graphs. \begin{theorem}[Jackson \cite{Jackson:1985}]\label{Jackson} Let $G$ be a connected bipartite graph with bipartition $V(G)=S\cup T$ where $\abs{S}=s$, $\abs{T}=t$, and $s\leq t$. Let $m$ be the minimum degree of a vertex in $S$ and $p$ be the minimum degree of a vertex in $T$. Then $G$ has a cycle with length at least $\min\{2s,2(m+p-1)\}$. \end{theorem} \begin{theorem}[Rahman, Kaykobad, Kaykobad \cite{Rahman:2013}]\label{Rahman} Let $G$ be a connected $m$-regular bipartite graph with $4m$ vertices. Then $G$ has a Hamiltonian cycle. \end{theorem} \begin{lem}\label{disjoint_union} Let $B$ be a bipartite graph with vertex bipartition $S,T$ where $\abs{S}=s$, $\abs{T}=t$, and $s\leq t$. Suppose each vertex in $T$ has degree $m$ and each vertex in $S$ has degree $t-m$. Then $B$ has a $2s$-cycle unless $s=t=2m$ and $B$ is the disjoint union of two copies of $K_{m,m}$. \begin{proof} Suppose $s<t$. Summing degrees in $S$ and $T$ gives $s(t-m)=tm$, so \[ m=\frac{st}{s+t}>\frac{st}{2t}=\frac{s}{2}, \] so $B$ is connected. By Theorem \ref{Jackson}, $B$ has a $2s$-cycle, since $2[m+(t-m)-1]=2(t-1)\geq 2s$. If $s=t$, then $B$ is an $m$-regular graph with $4m$ vertices. If $B$ is connected then, by Theorem \ref{Rahman}, it has a $2s$-cycle. If $B$ is not connected then clearly it is the disjoint union of two copies of $K_{m,m}$. \end{proof} \end{lem} We say that a cycle $H'$ of length $n-q$ is obtained from a cycle $H$ of length $n-q$ by a {\it twist} of disjoint edges $e_1$ and $e_2$ of $H$ if $E(H)\setminus \{e_1, e_2\} \subseteq E(H')$, i.e., we remove $e_1$ and $e_2$ from $H$ and introduce two new edges to make the resulting graph a cycle. Note that only one of the two possible ways of adding two new edges yields a cycle, so $H'$ is uniquely determined; the other choice disconnects the graph, though both choices result in a $2$-regular graph.
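For example, twisting the edges $e_1=v_1v_2$ and $e_2=v_4v_5$ of the $6$-cycle $H=v_1v_2v_3v_4v_5v_6v_1$ and adding the edges $v_1v_4$ and $v_2v_5$ yields the $6$-cycle $H'=v_1v_4v_3v_2v_5v_6v_1$, while adding $v_1v_5$ and $v_2v_4$ instead yields the disjoint union of the triangles $v_1v_5v_6$ and $v_2v_3v_4$, which is $2$-regular but not a cycle.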
One main difference between the definitions of $C_q(n)$ and $R_q(n)$ is that for the former we consider only cycles of length precisely $n-q$, whereas in the latter we consider all $2$-regular subgraphs spanning \emph{at least} $n-q$ vertices. This is because we can prove Theorem \ref{extension} for cycles; a similar result for $2$-regular subgraphs remains elusive (see Conjecture \ref{2-regular_conjecture}). \subsection{Proof of Theorem \ref{theorem:six}} \label{subsec:proof_of_theorem_theorem_six} Suppose not. Let $m$ be an integer in $[j,n-1]$ such that every $m$-cycle gets all colors but there is an $(m+1)$-cycle $H=v_1 v_2\cdots v_{m+1} v_1$ which does not have an edge of color $t$. Then $\varphi(v_i v_{i+2})=t$ for all $i$, where the subscripts are read mod $(m+1)$, because otherwise there is an $m$-cycle with no edge of color $t$. \begin{case} If $m+1$ is odd, then $v_1 v_3 v_5\cdots v_{m+1} v_2 v_4\cdots v_{m-2} v_1$ is an $m$-cycle with at most two colors, since all edges except possibly $v_{m-2} v_1$ have color $t$. This is impossible. \end{case} \begin{case} Suppose $m+1$ is even. Then $C_E = v_2 v_4\cdots v_{m+1} v_2$ and $C_O=v_1 v_3\cdots v_m v_1$ are $\frac{m+1}{2}$-cycles with all edges of color $t$. Suppose $H$ has a chord $v_a v_{a+r}$ with color $t$ for some $a$ and odd integer $r$ in $[3,m-2]$. Then $v_{a+2} v_{a+4} \cdots v_{a-2} v_a v_{a+r} v_{a+r+2}\cdots v_{a+r-4}$ is a path with $m$ vertices (missing $v_{a+r-2}$) and all edges of color $t$, so there is an $m$-cycle with at most two colors, which is impossible. Hence if $v_a$ is a vertex in $C_E$ and $v_b$ is a vertex in $C_O$, then $\varphi(v_a v_b)\neq t$. We claim that for each $a$ and even integer $s$, $\varphi(v_a v_{a+s})=t$. If not, then $v_a v_{a+s} v_{a+s+1} \cdots \allowbreak v_{a-3} v_{a-2} v_{a+s-1} v_{a+s-2}\cdots\allowbreak v_{a+1}v_a$ is an $m$-cycle (missing $v_{a-1}$) with no edge of color $t$ (note $\varphi(v_{a-2}v_{a+s-1})\allowbreak\neq t$ because $a-2$ and $a+s-1$ have different parities), which is impossible. Hence, the vertices of $C_E$ and $C_O$ each induce a complete graph with $\frac{m+1}{2}$ vertices and all edges of color $t$, and there are no other edges of color $t$ in $K_n$. If there is a color $w$, different from $t$, such that there exist two disjoint edges of color $w$, then it is easy to find an $m$-cycle with two edges of color $w$ and the rest of color $t$; such a cycle misses some third color, which is impossible. If there do not exist two such edges of color $w$, then all edges of color $w$ are incident to a single vertex $x$, so any $m$-cycle with $x$ incident to two edges of color $t$ does not contain an edge of color $w$ (these exist since $\frac{m+1}{2}\geq 3$), a contradiction.\hfill\qedsymbol \end{case} We remark that the statement in Theorem \ref{theorem:six} would be false without the requirement that there be at least three colors. If $m\geq 3$ is odd, then the coloring consisting of two vertex-disjoint complete graphs, each with $\frac{m+1}{2}$ vertices and all edges of color $t$, with all edges between them of color $w$, has an $(m+1)$-cycle with all edges of color $w$, while every $m$-cycle has edges of both colors. This is the reason for the difference between odd and even values of $n-q$ in Theorem \ref{theorem:five}. The statement would also be false with three colors if $j=3$ and $n=4$. \section{Main Lemmas and Proofs of Theorems} \label{Theorems} We now state and prove the three main lemmas needed for the proofs of Theorems \ref{theorem:two}, \ref{theorem:three}, and \ref{theorem:four}.
\begin{lem}\label{max-vertex} \hfill \begin{enumerate}[label=(\alph*)] \item\label{mvert1} Let $\ensuremath{\mathcal{H}}\in\{R_q(n),C^*_q(n)\}$. Of all optimal $\ensuremath{\mathcal{H}}$-polychromatic colorings, let $\varphi$ be one which is $X$-ordered on a (possibly empty) subset $X$ of $V(K_n)$ of maximum size and, of these, such that $G_m=K_n[Y]$ has a vertex $v\in Y$ of maximum possible monochromatic degree $d$ in $G_m$, where $Y=V(K_n)\setminus X$, $\abs{Y}=m$, and $d<m-1$. If $v$ is incident in $G_m$ to $d$ edges of color 1 and $u\in Y$ is such that $\varphi(vu)=2$, then $v$ is a $(1,2)$-max vertex in $G_m$ and $u$ is a $(2,t)$-max vertex in $G_m$ for some color $t$ (possibly $t=1$). \item\label{mvert2} The same is true if $X\neq \emptyset$ and $\varphi$ is nearly $X$-ordered. \end{enumerate} \begin{proof}[Proof of \ref{mvert1}] Let $y_1,y_2,\ldots,y_d\in Y$ be such that $\varphi(vy_i)=1$. Let $H\in C_q^*$ or $H\in R_q$ be such that $uv$ is the only edge of color 2; such an $H$ must exist, since otherwise we could change the color of $uv$ from $2$ to $1$, giving an $\ensuremath{\mathcal{H}}$-polychromatic coloring with monochromatic degree greater than $d$ in $G_m$, a contradiction. Orient the edges of $H$ to get a directed cycle or $2$-regular graph $H'$ where $\vv{uv}$ is an arc. If $y_i\in H'$ then the predecessor $w_i$ of $y_i$ in $H'$ must be such that $\varphi(w_i u)=2$, because otherwise we can twist $uv$ and $w_i y_i$ to get an $(n-q)$-cycle (if $H\in C_q^*$) or a $2$-regular graph (if $H\in R_q$) with no edge of color 2. Note that $w_i$ must be in $Y$ because otherwise, since $\varphi$ is $X$-constant, $\varphi(w_i u)=\varphi(w_i y_i)=2$, contradicting the assumption that $uv$ is the only edge in $H$ of color 2. Suppose $y_i\not\in H$ for some $i\in[d]$. If $\varphi(y_i u)\neq 2$, then $J=(H\setminus\{uv\})\cup\{vy_i,y_iu\}$ has no edge of color $2$. This is impossible if $H\in R_q$, because $J$ is a $2$-regular graph spanning $n-q+1$ vertices. If $H\in C_q^*$, then $J$ is an $(n-q+1)$-cycle with no edge of color $2$, so by Theorem \ref{theorem:six}, since the coloring uses at least three colors, there exists an $(n-q)$-cycle which is not polychromatic, a contradiction. Hence $\varphi(y_i u)=2$ in either case. Thus, for each $i\in[d]$, either $y_i\not\in H$ and $\varphi(y_i u)=2$, or $y_i\in H$ and $\varphi(w_i u)=2$ where $w_i$ is the predecessor of $y_i$ in $H'$. That gives us $d$ edges in $G_m$ of color $2$ which are incident to $u$. Since $v$ has maximum monochromatic degree in $G_m$, it follows that $v=w_i$ for some $i$ (otherwise $uv$ is a different edge of color $2$ incident to $u$) and it also follows that no edge in $G_m$ incident to $v$ can have color $t$ where $t\not\in\{1,2\}$. This is because if $vz$ were such an edge, as shown above, then either $z\in H$ and $\varphi(w'u)=2$ where $w'$ is the predecessor of $z$ in $H'$, or $z\not\in H$ and $\varphi(zu)=2$. In either case we get $d+1$ edges of color $2$ in $G_m$ incident to $u$, a contradiction. So $v$ is a $(1,2)$-max-vertex and $u$ is a $(2,t)$-max-vertex for some color $t$. The proof of \ref{mvert2} is exactly the same. \end{proof} \end{lem} \begin{lem}\label{structurelemma}Let $n\geq 7$ and $\ensuremath{\mathcal{H}}\in\{R_q(n),C_q(n)\}$. If there does not exist an optimal {\ensuremath{\mathcal{H}}}-polychromatic coloring of $K_n$ with maximum monochromatic degree $n-1$, then one of the following holds.
\begin{enumerate}[label=\alph*)] \item\label{structure-one} $\ensuremath{\mathcal{H}}=C_q(n)$, $n-q$ is odd and $n\in[2q+2,3q+2]$ (and $\pcq(n)=2$). \item $q=0$ and there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring which is $Z$-quasi-ordered with $\abs{Z}=3$. \item $q=1$ and there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring which is $Z$-quasi-ordered with $\abs{Z}=4$. \end{enumerate} \begin{proof}\let\qed\relax First assume that $\ensuremath{\mathcal{H}}=C_q(n)$ and that $q\geq 2$ and $n$ are such that $\pcq(n)\leq 2$. If $n-q$ is even then, by Theorem \ref{theorem:five}, there is a $C_q$-polychromatic 2-coloring if and only if $n\geq 3q+3$. Since $3q+3$ is the smallest value of $n$ such that the simply-ordered $C_q$-polychromatic coloring $\varphi_{C_q}$ uses two colors, if $\pcq(n)\leq 2$ and $n-q$ is even, then there is an optimal simply-ordered $C_q$-polychromatic coloring, and this coloring has a vertex (in fact $q+1$ of them) with monochromatic degree $n-1$. If $n-q$ is odd then, by Theorem \ref{theorem:five}, there is a $C_q$-polychromatic 2-coloring if and only if $n\geq 2q+2$. Since there is a simply-ordered $C_q$-polychromatic 2-coloring if $n\geq 3q+3$, that means that if $n-q$ is odd, $\pcq(n)\leq 2$ and $n\not\in[2q+2,3q+2]$ then there is a simply-ordered $C_q$-polychromatic coloring. Thus if $\pcq(n)\leq 2$, there is an optimal simply-ordered $C_q$-polychromatic coloring, and hence one with maximum monochromatic degree $n-1$, unless $n-q$ is odd and $n\in[2q+2,3q+2]$, which are the conditions for \ref{structure-one}. Now let $\ensuremath{\mathcal{H}}\in\{R_q(n),C^*_q(n)\}$ and suppose there does not exist an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring of $K_n$ with maximum monochromatic degree $n-1$. Of all optimal $\ensuremath{\mathcal{H}}$-polychromatic colorings of $K_n$, let $\varphi$ be one with maximum possible monochromatic degree $d$ (so $d<n-1$). \end{proof} \end{lem} \begin{claim}\label{dsize} $d>\frac{n-1}{2}$. \begin{proof}\let\qed\relax Since there are only two colors at a max-vertex, certainly $d\geq \frac{n-1}{2}$. Assume $d=\frac{n-1}{2}$ (so $n$ is odd) and that $x$ is a max-vertex where colors $i$ and $j$ appear. Then $x$ is both an $i$-max and a $j$-max vertex so, by Lemma \ref{max-vertex}, each vertex in $V$ is a max-vertex. Suppose there are more than 3 colors, say colors $i,j,s,t$ are all used. If $i$ and $j$ appear at $x$ then no vertex $y$ can have colors $s$ and $t$, because there is no color for $xy$. So the sets of colors on the vertices form an intersecting family of $2$-sets. Since there are at least 4 colors, the only way this can happen is if some color, say $i$, appears at every vertex. Let $n_{ij}, n_{is}$, and $n_{it}$ be the number of $(i,j)$-max, $(i,s)$-max, and $(i,t)$-max vertices with $n_{ij}\leq n_{is} \leq n_{it}$. Then $n_{ij}<\frac{n}{2}$ (in fact, $n_{ij}\leq \frac{n}{3}$). If $x$ is an $(i,j)$-max vertex and $y$ is an $(i,s)$-max vertex, then $\varphi(xy)=i$. Hence the number of edges of color $j$ incident to $x$ is at most $n_{ij}-1<\frac{n-2}{2}<d$, a contradiction. Now suppose there are precisely 3 colors. Let $A, B, C$ be the set of all $(1,2)$-max, $(2,3)$-max, and $(1,3)$-max vertices, respectively, with $\abs{A}=a, \abs{B}=b$, and $\abs{C}=c$. All edges from a vertex in $A$ to a vertex in $B$ have color 2, from $B$ to $C$ have color 3, and from $A$ to $C$ have color 1; internal edges in $A$ have color 1 or 2, in $B$ have color 2 or 3, and in $C$ have color 1 or 3.
We clearly cannot have $a$, $b$, or $c$ greater than $\frac{n-1}{2}$ so, without loss of generality, we can assume $a\leq b\leq c\leq \frac{n-1}{2}$ and $a+b+c=n$. Consider the graph $F$ formed by the edges of color 1 or 2. Vertices of $F$ in $B$ or $C$ have degree $\frac{n-1}{2}$, while vertices in $A$ have degree $n-1$. Since $a\leq c$ we have $a\leq \frac{n-b}{2}$. The internal degree in $F$ of each vertex in $B$ is $\frac{n-1}{2}-a\geq \frac{n-1}{2}-\frac{n-b}{2}=\frac{b-1}{2}$. By a standard Dirac-type argument, that means there is a Hamiltonian path within $B$. Similarly there is one within $C$. If $a\geq 2$, that makes it easy to construct a Hamiltonian cycle in $F$. If $a=1$ we must have $b=c=\frac{n-1}{2}$, so $F$ is two complete graphs of size $\frac{n+1}{2}$ which share one vertex. This graph has a spanning 2-regular subgraph if $n\geq 7$ (a 3-cycle and a 4-cycle if $n=7$), so there is no $R_q$-polychromatic 3-coloring for any $q\geq 0$ if $n\geq 7$. If $a=1$ and $b=c=\frac{n-1}{2}$, consider the subgraph of all edges of colors 1 or 3. It consists of a complete bipartite graph with vertex parts $A\cup B$ and $C$, with sizes $\frac{n+1}{2}$ and $\frac{n-1}{2}$, plus internal edges in $C$. Clearly this graph has an $(n-1)$-cycle, but no Hamiltonian cycle. Hence there can be a $C_q$-polychromatic 3-coloring only if $q=0$. However, the $C_0$-polychromatic coloring $\varphi_{C_0}$ uses at least 4 colors if $n\geq 7$, so there is no optimal one with maximum monochromatic degree $\frac{n-1}{2}$. \end{proof} \end{claim} \begin{claim} If $q=0$, then, up to relabeling the colors, there is a $(1,2)$-max-vertex, a $(2,3)$-max-vertex and a $(3,1)$-max-vertex. \begin{proof}\let\qed\relax As in Lemma \ref{max-vertex}, let $v$ be a $(1,2)$-max-vertex and let $u$ be a vertex with $\varphi(uv)=2$, so that $u$ is a $2$-max-vertex. Assume that every max-vertex has majority color either $1$ or $2$. Then $u$ must be a $(2,1)$-max-vertex. This is because by Lemma \ref{max-vertex}, if it were a $(2,t)$-max-vertex for some third color $t$, and $\varphi(u z)=t$, then $z$ would have to be a $t$-max-vertex, a contradiction. Hence, every max-vertex is either a $(1,2)$-max-vertex or a $(2,1)$-max-vertex. Let $S$ be the set of all $(1,2)$-max-vertices, $T$ be the set of all $(2,1)$-max-vertices, and $W=V\setminus(S\cup T)$. Edges within $S$ and from $S$ to $W$ must have color $1$ (because any minority color edge at a max-vertex is incident to a max-vertex of that color), edges within $T$ and from $T$ to $W$ must have color $2$, and all edges between $S$ and $T$ must have color $1$ or $2$. If $\abs{S}=s$ and $\abs{T}=t$ and $m=n-1-d$, then each vertex in $S$ is adjacent to $m$ vertices in $T$ by edges of color $2$ (and adjacent to $t-m$ vertices in $T$ by edges of color $1$), and each vertex in $T$ is adjacent to $m$ vertices in $S$ by edges of color $1$. Suppose $s<t$ and consider any edge $ab$ from $S$ to $T$ of color 2. As before, there is an $H\in\ensuremath{\mathcal{H}}$ which contains $ab$, but no other edges of color 2. Hence $H$ has no edges from $T$ to $W$. Since $s<t$ there must be an edge of $H$ with both vertices in $T$, so it does have another edge of color 2 after all, a contradiction. The same argument works if $t<s$, with an edge of color 1. To avoid this, we must have $s=t=2m$. If there is an edge from $S$ to $W$ then, again, $H$ has an internal edge in $T$, which is impossible. Hence if $\ensuremath{\mathcal{H}}=C^*_0$ then $W=\emptyset$ and every edge has color 1 or 2, which is impossible since the coloring uses at least 3 colors. If $\ensuremath{\mathcal{H}}=R_0$ then the subgraph of $H$ induced by $S\cup T$ is the union of cycles.
If $m=1$ then $S\cup T$ induces a 4-cycle in $H$, two edges of each color, so $ab$ is not the only edge with color 2. If $m\geq 2$ then two applications of Hall's Theorem give two disjoint perfect matchings of edges of color 1 between $S$ and $T$, whose union is a 2-factor of edges of color 1 spanning $S\cup T$, which, together with the subgraph of $H$ induced by $W$, produces a 2-factor $H'\in R_0$ with no edge of color 2. We have shown that $u$ is not a $(2,1)$-max vertex, so it must be a $(2,3)$-max vertex for some third color, which we call color 3. Say $\varphi(uz)=3$. Then, by Lemma \ref{max-vertex}, $z$ is a $3$-max vertex. If $\varphi(vz)=2$, then $z$ would be a $2$-max vertex. So $z$ would be both a $2$-max and a $3$-max vertex, and so $d=\frac{n-1}{2}$, contradicting Claim \ref{dsize}. Hence $\varphi(vz)=1$, which means $z$ must be a $(3,1)$-max vertex. \end{proof} \end{claim} \begin{claim}\label{structure} If $q=0$ then $V$ can be partitioned into sets $A,B,D,E$ where the following properties hold (see Figure \ref{fig:graphfigure}). \begin{enumerate} \item\label{C5one} All vertices in $A$ are $(1,2)$-max-vertices. \item\label{C5two} All vertices in $B$ are $(2,3)$-max-vertices. \item\label{C5three} All vertices in $D$ are $(3,1)$-max-vertices. \item\label{C5four} No vertex in $E$ is a max-vertex. \item\label{C5five} All edges within $A$, from $A$ to $D$, and from $A$ to $E$ are color 1. \item\label{C5six} All edges within $B$, from $B$ to $A$, and from $B$ to $E$ are color 2. \item\label{C5seven} All edges within $D$, from $D$ to $B$, and from $D$ to $E$ are color 3. \item\label{C5eight} $\abs{A}=\abs{B}=\abs{D}=m=n-1-d$. \end{enumerate} \begin{figure}[htbp] \centering \begin{tikzpicture}[every text node part/.style={align=center},scale=3,inner sep=1mm] \node[circle,ultra thick,draw=black,fill=white,inner sep=6.5mm] (e) at (0,0) {$E$}; \node[circle,ultra thick,draw=green!50!black,fill=white] (d) at (0,1.1547) {$D$\\$(3,1)$-max}; \node[circle,ultra thick,draw=blue,fill=white] (a) at (-1,1.73205) {$A$\\$(1,2)$-max}; \node[circle,ultra thick,draw=red,fill=white] (b) at (1,1.73205) {$B$\\$(2,3)$-max}; \draw [ultra thick,blue] (a) .. controls (-1.85,1.85) and (-1.4,2.5) .. (a) {node [above left,pos=.5] {\large 1}}; \draw [ultra thick,loosely dashed,red] (b) .. controls (1.4,2.5) and (1.85,1.85) .. (b) {node [above right,pos=.5] {\large 2}}; \draw [ultra thick,dotted,green!50!black] (d) .. controls (-.5,2) and (.5,2) .. (d) {node [below,pos=.5] {\large 3}}; \draw [ultra thick,blue] (a) -- node[below] {\large 1} (d); \path (a) edge [ultra thick,blue,bend right] node[below left] {\large 1} (e); \path (b) edge [ultra thick,loosely dashed,red,bend right] node[above] {\large 2} (a); \path (b) edge [ultra thick,loosely dashed,red,bend left] node[below right] {\large 2} (e); \draw [ultra thick,dotted,green!50!black] (d) -- node[below] {\large 3} (b); \draw [ultra thick,dotted,green!50!black] (d) -- node[left] {\large 3} (e); \end{tikzpicture} \caption{The sets $A$, $B$, $D$, $E$ of Claim \ref{structure} and the colors of the edges between them.} \label{fig:graphfigure} \end{figure} \begin{proof}\let\qed\relax Let $A=\{x : x\textrm{ is a }(1,2)\textrm{-max vertex}\}$, $B=\{x : x\textrm{ is a }(2,3)\textrm{-max vertex}\}$, $D=\{x : x\textrm{ is a }(3,1)\textrm{-max vertex}\}$ and $E=V\setminus (A\cup B \cup D)$. Let $x\in A$. If $y\in A$, then $\varphi(xy)=1$ because if $\varphi(xy)=2$, then $y$ would be a $2$-max vertex. If $y\in B$, then $\varphi(xy)=2$ because that is the only possible color for an edge incident to $x$ and $y$ and, similarly, if $y\in D$, then $\varphi(xy)=1$.
Suppose $w$ is a max-vertex in $E$. Then the two colors on edges incident to $w$ must be a subset of $\{1,2,3\}$, because otherwise it would be disjoint from $\{1,2\}$, $\{2,3\}$, or $\{1,3\}$, so there would be an edge incident to $w$ for which there is no color. Say $1$ and $2$ are the colors at $w$. Since $w\not\in A$, $w$ is a $(2,1)$-max vertex. Let $z$ be a $(3,1)$-max vertex. Then the edge $wz$ must have color 1 so, by Lemma \ref{max-vertex}, $z$ is a $1$-max vertex, a contradiction. We have now verified \eqref{C5one}--\eqref{C5four}. If $x\in A$ and $w\in E$ then $\varphi(xw)=1$ because if $\varphi(xw)=2$ then $w$ would be a $2$-max vertex. Similar arguments show that if $y\in B$ then $\varphi(yw)=2$ and if $y\in D$ then $\varphi(yw)=3$. We have now verified \eqref{C5one}--\eqref{C5seven}. We have shown that if $x$ is in $A$ then $\varphi(xy)=2$ if and only if $y\in B$. That means $\abs{B}=m$, and by the same argument $\abs{A}=\abs{D}=m$ as well, completing the proof of Claim \ref{structure}. \end{proof} \end{claim} \begin{claim}\label{q=0_optimal-quasi-ordered} If $\ensuremath{\mathcal{H}}\in\{C_0^*,R_0\}$, and there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring satisfying \eqref{C5one}--\eqref{C5eight} with $m>1$, then there exists one with $m=1$, i.e. one that is $Z$-quasi-ordered with $\abs{Z}=3$. \begin{proof} Let $A=\{a_i:i\in[m]\}, B=\{b_i:i\in[m]\}, D=\{d_i:i\in[m]\}$. Define an edge coloring $\gamma$ by \begin{align*} \gamma(a_1 b_i)&=1\mathrm{\ if\ }i>1\\ \gamma(b_1 d_i)&=2\mathrm{\ if\ }i>1\\ \gamma(d_1 a_i)&=3\mathrm{\ if\ }i>1\\ \gamma(u v)&=\varphi(u v)\mathrm{\ for\ all\ other\ }u,v\in V.\\ \end{align*} It is easy to check that $\gamma$ has the structure described above with $m=1$. We have essentially moved $m-1$ vertices from each of $A$, $B$, and $D$ to $E$. Since $a_1$, $b_1$, and $d_1$ each have monochromatic degree $n-2$, any 2-factor must have edges of colors 1, 2, and 3 under the coloring $\gamma$, so if it had all colors under $\varphi$, it still does under $\gamma$. \end{proof} \end{claim} We remark that the coloring $\gamma$ with $m=1$ in Claim \ref{q=0_optimal-quasi-ordered} is $Z$-quasi-ordered with $\abs{Z}=3$. As we have shown, if there exists such an $R_0$-polychromatic coloring $\varphi$ with $m>1$, then there exists one with $m=1$. However, if $m>1$ and $n>6$, a coloring $\varphi$ satisfying properties \eqref{C5one}--\eqref{C5eight} might not be $R_0$-polychromatic. This is because if $E$ has no internal edges with color $1$, then any $2$-factor with a $2m$-cycle consisting of alternating vertices from $A$ and $B$ has no edge with color $1$. However, the modified coloring $\gamma$ (with $m=1$) is an $R_0$-polychromatic coloring because then colors $1$, $2$, and $3$ must appear in any $2$-factor. \begin{claim}\label{max-vertex-q=1} If $q\geq 1$ then, up to relabeling colors, every max vertex is a $(1,2)$-max vertex or a $(2,1)$-max vertex. \begin{proof}\let\qed\relax As before, we assume $v$ is a $(1,2)$-max vertex, that $\varphi(uv)=2$ and that $H\in R_q$ (or $H\in C_q^*$) is such that $uv$ is the only edge of color 2. We know that $u$ is a $(2,t)$-max vertex for some color $t$. By way of contradiction, suppose $u$ is a $(2,3)$-max vertex. Then we have the configuration of Figure \ref{fig:graphfigure}, with $\abs{A}=\abs{B}=\abs{D}=m$. If $uw$ is also an edge of $H$ then $w\in D$, since otherwise $\varphi(uw)=2$. Let $Q$ be the set of vertices not in $H$ (so $\abs{Q}=q>0$) and suppose $p\in Q$ but $p\not\in B$.
Then we can replace $u$ in $H$ with $p$ to get a 2-regular graph (cycle) with no edge of color 2. Hence $Q\subseteq B$. Orient the edges of $H$ to get a directed graph $H'$ where $\vv{uv}$ is an arc. Since $\abs{B\setminus Q}<\abs{D}$, and every vertex in $D$ appears in $H'$, there are some $z\in D$ and $e\not\in B$ such that $\vv{ze}$ is an arc in $H'$. Since $\varphi(zu)=3$ and $\varphi(ev)=1$, twisting $uv$ and $ze$ yields a 2-regular graph (cycle) with no edge of color 2, a contradiction. Hence every max-vertex is a $(1,2)$-max vertex or a $(2,1)$-max vertex. \end{proof} \end{claim} \begin{claim}\label{q>1-XnotEmpty} If $q=1$ then, up to relabeling colors, the vertex set can be partitioned into $S,T,W$ such that \begin{enumerate} \item\label{q>1-first} $S$ is the set of all $(1,2)$-max vertices \item $T$ is the set of all $(2,1)$-max vertices \item $W$ has no max vertices \item All internal edges in $S$ and all edges from $S$ to $W$ have color 1; all internal edges in $T$ and all edges from $T$ to $W$ have color 2 \item\label{q>1-last} The edges of color 1 between $S$ and $T$ form two disjoint copies of $K_{m,m}$, as do the edges of color 2 (so $\abs{S}=\abs{T}=2m$, where $n-m-1$ is the maximum monochromatic degree) \end{enumerate} \begin{proof}\let\qed\relax By Claim \ref{max-vertex-q=1}, if $q\geq 1$, then every max vertex is a $(1,2)$- or $(2,1)$-max vertex. Let $S$ be the set of all $(1,2)$-max vertices and $T$ be the set of all $(2,1)$-max vertices, with $\abs{S}=s$ and $\abs{T}=t$, $s\leq t$, and let $m=n-1-d$, where $d$ is the maximum monochromatic degree. Let $W=V(G)\setminus (S\cup T)$ and let $B$ be the complete bipartite graph with vertex bipartition $S,T$ and edges colored as they are in $G$. So each vertex of $B$ in $S$ is incident with $m$ edges of color 2 and $t-m$ edges of color 1, and each vertex of $B$ in $T$ is incident with $m$ edges of color 1 and $s-m$ edges of color 2. All edges of $G$ within $S$ and between $S$ and $W$ have color 1 (otherwise there would be a $(2,1)$-max vertex not in $T$) and all edges within $T$ and between $T$ and $W$ have color 2. We note that the edges of color 1 in $B$ satisfy the conditions of Lemma \ref{disjoint_union}, so $B$ has a $2s$-cycle of edges of color 1 unless $s=t=2m$ and the edges of color 1 (and those of color 2) form two disjoint copies of $K_{m,m}$. Again, let $v\in S$ and $u\in T$ be such that $\varphi(uv)=2$, and let $H\in C_q^*(n)$ (or $H\in R_q(n)$), $q\geq 1$, be such that $uv$ is the only edge of color 2. If $uw$ is also an edge of $H$ then $w\in S$, because otherwise $\varphi(uw)=2$. Hence if $z$ is a vertex of $G$ not in $H$ then $z\in T$, because otherwise we can replace $u$ with $z$ in $H$ to get $H''\in C_q^*(n)$ (or $H''\in R_q(n)$) with no edge of color 2. That means that if $Q$ is the set of vertices of $G$ not in $H$, then $Q\subseteq T$. Since $uv$ is the only edge in $H$ with color 2, each vertex in $T\setminus Q$ is adjacent in $H$ to two vertices in $S$, so there are $2(t-q)$ edges in $H$ between $S$ and $T$, where $q=\abs{Q}\geq t-s$. Let $M$ be the subgraph of $H$ remaining when the $2(t-q)$ edges in $H$ between $S$ and $T$ have been removed (along with any remaining isolated vertices). If $q=t-s$ then, since every edge in $H$ incident to a vertex in $T$ goes to $S$, either $H$ is a $2s$-cycle and $W=\emptyset$ (if $H\in C_q^*(n)$) or the union of the components of $H$ which have a vertex in $T$ is a 2-regular graph spanning $S$ and the $s=t-q$ vertices in $T$.
In either case, since $s<t$, we can replace the components of $H$ which intersect $T$ with the $2s$-cycle of edges of color 1 promised by Theorem \ref{Jackson}, to get an $H''\in C_q^*(n)$ (or $H''\in R_q(n)$) with no edge of color 2. Hence $q>t-s$. Each component of $M$ is a path with at least one edge, both endpoints in $S$ with interior points in $S$ or $W$. If a component has $j>2$ vertices in $S$, we split it into $j-1$ paths which each have their endpoints in $S$ with all interior points in $W$. If a vertex of $S$ is an interior point in a component then it is an endpoint of two of these paths. The number of such paths is $\frac{2(s-(t-q))}{2}=s-(t-q)>0$. We denote the paths by $P_1,P_2,\ldots,P_r$ where $r=s-(t-q)$. For each $i\in[r]$ for which $P_i$ has more than 2 vertices, we remove the two edges containing the endpoints (which are both in $S$), leaving a path $W_i$ whose vertices are all in $W$ (the union of the vertices in all the $W_i$'s is equal to $W$). We will now show that there cannot be a $2s$-cycle of edges of color 1 in $B$. Suppose $J$ is such a $2s$-cycle. Let $R=\{x_1,x_2,\ldots,x_r\}$ be any set of $r$ vertices in $T\cap V(J)$ and let $K$ be the subgraph of $J$ obtained by removing the $r$ vertices in $R$. For each $i\in [r]$ let $y_{ia}$ and $y_{ib}$ be the vertices adjacent to $x_i$ in $J$. Both are in $S$ and possibly $y_{ib}=y_{ja}$ for some $i\neq j$. Now, for each $i\in[r]$, attach $W_i$ to $y_{ia}$ and $y_{ib}$ ($W_i$ can be oriented either way). More precisely, if $W_i$ is the path $w_{i1},w_{i2},\ldots,w_{id}$ in $W$, we attach it to $K$ by adding the edges $y_{ia}w_{i1}$ and $y_{ib}w_{id}$, while if $W_i$ is empty (meaning the path $P_i$ has only two vertices, so none in $W$) we add the edge $y_{ia}y_{ib}$. The resulting graph $H''$ has no edge of color 2, since we constructed it using only edges from $J$ and edges from $H$ within $S\cup W$. Since $H''$ spans all of $S\cup W$ together with all but $q$ vertices of $T$, $H''$ has $n-q$ vertices. Clearly $H''$ is 2-regular and, if $H$ is a cycle, so is $H''$ (if $H$ is not a cycle, $H''$ will still be a cycle if $H$ does not have any components completely contained in $W$). Thus $H''\in R_q(n)$ ($H''\in C_q^*(n)$) and has no edge of color 2, a contradiction. Hence there is no $2s$-cycle of edges of color 1 in $B$. By Lemma \ref{disjoint_union} it follows that $s=t=2m$ with the edges of color 1 forming two vertex-disjoint copies of $K_{m,m}$. (If these two disjoint copies have vertex sets $S_1\cup T_1$ and $S_2\cup T_2$, where $S_1\cup S_2=S$ and $T_1\cup T_2=T$, then $S_1\cup T_2$ and $S_2\cup T_1$ are the vertex sets which induce two disjoint copies of $K_{m,m}$ with edges of color 2.) We have now verified that properties \eqref{q>1-first}--\eqref{q>1-last} hold if $q\geq 1$. We will now show we get a contradiction if $q\geq 2$. Assume $q\geq 2$. Let $T_1$ and $T_2$ be the sets of vertices in $T$ in the two $s$-cycles of edges of color 1 ($\abs{T_1}=\abs{T_2}=\frac{s}{2}$, $T_1\cup T_2=T$). Recall that $v\in S$, $u\in T$, and $uv$ is the only edge of $H$ of color 2. The subgraph $M$ of $H$ defined earlier still consists of paths which can be split into paths $P_1,P_2,\ldots,P_q$ (since $r=s-t+q=q$) with endpoints in $S$ and interior points in $W$. Let $J$ be the union of the two $s$-cycles of edges of color 1. Choose the subset $Q$ of size $q$ so that it has at least one vertex in each of $T_1$ and $T_2$, say $Q=\{x_1,x_2,\ldots,x_q\}$ where $x_1\in T_1$ and $x_q\in T_2$.
Again, let $K$ be the subgraph obtained from $J$ by removing the vertices in $Q$. Then, as before, the paths $W_1,W_2,\ldots,W_q$ (perhaps some of them empty) can be stitched into $K$. We attach $W_i$ to $y_{ia}$ and $y_{ib}$ if $i\in[2,q-1]$ (just adding the edge $y_{ia}y_{ib}$ if $W_i$ is empty). We attach $W_1$ to $y_{1a}$ and $y_{qb}$ and $W_q$ to $y_{1b}$ and $y_{qa}$, creating an $(n-q)$-cycle if no component of $H$ is contained in $W$, and a 2-regular graph spanning $n-q$ vertices if $H$ has a component contained in $W$. There is no edge of color 2 in this graph, a contradiction. We conclude that if $q\geq 2$ and $\mathcal{H}\in\{R_q(n),C_q^*(n)\}$, then the maximum monochromatic degree in any optimal $\mathcal{H}$-polychromatic coloring is less than $n-1$. \end{proof} \end{claim} \begin{claim} If $\ensuremath{\mathcal{H}}\in\{C_1^*,R_1\}$ and there exists an $\ensuremath{\mathcal{H}}$-polychromatic coloring satisfying \eqref{q>1-first}--\eqref{q>1-last} in Claim \ref{q>1-XnotEmpty} with $m>1$, then there exists one with $m=1$, i.e. one that is $Z$-quasi-ordered with $\abs{Z}=4$. \begin{proof} Assume there is an $R_1$-polychromatic coloring ($C_1^*$-polychromatic coloring) $c$ with $q=1$ satisfying \eqref{q>1-first}--\eqref{q>1-last} of Claim \ref{q>1-XnotEmpty} where $s=t>2$. Let $v$ and $x$ be vertices in $S$ and $u$ and $y$ be vertices in $T$ such that $c(vu)=c(xy)=2$ and $c(xu)=c(vy)=1$. Let $c'$ be the coloring obtained from $c$ by recoloring the following edges (some of them are perhaps recolored with the same color they had under $c$): \begin{center} \begin{tabular}{lll} $c'(vp)=1$ & for all & $p\in T\setminus\{u,y\}$\\ $c'(xp)=1$ & for all & $p\in T\setminus\{u,y\}$\\ $c'(zu)=2$ & for all & $z\in S\setminus\{v,x\}$\\ $c'(zy)=2$ & for all & $z\in S\setminus\{v,x\}$\\ $c'(zp)=3$ & for all & $p\in T\setminus\{u,y\}$ and $z\in S\setminus\{v,x\}$ \end{tabular} \end{center} Since all but one of the edges incident to each of $v$ and $x$ have color 1 under $c'$, and every $(n-1)$-cycle contains $v$ or $x$, certainly every $(n-1)$-cycle contains an edge of color 1. Similarly for $u$ and $y$ and edges of color 2. Every edge which was recolored had color 1 or 2 under $c$, so $c'$ must be a polychromatic coloring with the same number of colors. It has the desired form with $\abs{S}=\abs{T}=2$, so, in fact, is $Z$-quasi-ordered with $Z=\{v,x,u,y\}$. \end{proof} \end{claim} We remark that a coloring $c$ satisfying properties \eqref{q>1-first}--\eqref{q>1-last} of Claim \ref{q>1-XnotEmpty} with $s=t>2$ is actually not $R_1$-polychromatic. To see this, let $S_1 \cup T_1$ and $S_2\cup T_2$ be the vertex sets of the two copies of $K_{m,m}$ of edges of color 1 ($S_1\cup S_2 = S$, $T_1\cup T_2=T$) where $v\in S_1, u\in T_2$ and $uv$ is the only edge of color 2 in $H\in R_1$. The subgraph $M$ of $H$ in the proof of Claim \ref{q>1-XnotEmpty} has only one component (since $s-(t-q)=1$), a path $d w_1 w_2\ldots w_e z$ where $d,z\in S$ and $\{w_1,w_2,\ldots,w_e\}\subseteq W$. To construct a 2-regular subgraph with no edges of color 2 spanning $n-1$ vertices, remove a vertex $x$ in $T_2$ from one of the two $s$-cycles of edges of color 1. If $y_a$ and $y_b$ are the two vertices in $S_2$ adjacent to $x$ in the $s$-cycle, attach the path $w_1 w_2 \ldots w_e$ to $y_a$ and $y_b$ to get a 2-regular subgraph with no edge of color 2 spanning $n-1$ vertices. However, this construction cannot be done when $m=1$, so in this case you do get an $R_1$-polychromatic coloring. \begin{lem}\label{XoToO} Let $\ensuremath{\mathcal{H}}\in\{R_q(n),C^*_q(n)\}$.
\begin{enumerate}[label=\alph*)] \item Suppose for some $X\neq \emptyset$ there exists an optimal $X$-ordered $\ensuremath{\mathcal{H}}$-polychromatic coloring of $K_n$. Then there is one which is ordered. \item Suppose there exists an optimal $Z$-quasi-ordered $\ensuremath{\mathcal{H}}$-polychromatic coloring of $K_n$. Then there is one which is quasi-ordered. \end{enumerate} \begin{proof} Among all such $\ensuremath{\mathcal{H}}$-polychromatic colorings we assume $\varphi$ is one such that \begin{enumerate}[label=\alph*)] \item\label{XoToO-a} if $\varphi$ is $X$-ordered then $X$ has maximum possible size \item\label{XoToO-b} if $\varphi$ is $Z$-quasi-ordered then the restriction of $\varphi$ to $V(K_n)\setminus Z$ is $T$-ordered for the largest possible subset $T$ of $V(K_n)\setminus Z$. In this case, we let $X=Z\cup T$ so $\varphi$ is nearly $X$-ordered (one or two edges could be recolored to make it $X$-ordered). \end{enumerate} For both \ref{XoToO-a} and \ref{XoToO-b} we assume that $\varphi$ is such that its restriction to $G_m=K_n[Y]$ has a vertex $v$ of maximum possible monochromatic degree in $G_m$, where $Y=V(K_n)\setminus X$, $\abs{Y}=m$, and the monochromatic degree of $v$ in $G_m$ is $d<m-1$ (if $d=m-1$ then $\abs{X}$ is not maximal). Since $v$ has maximum monochromatic degree $d$ in $G_m$, by Lemma \ref{max-vertex} it is a $(1,2)$-max vertex in $G_m$, for some colors 1 and 2, and if $u\in Y$ is such that $\varphi(uv)=2$, then $u$ is a $(2,t)$-max vertex for some color $t$ (perhaps $t=1$). As before, let $y_1,y_2,\ldots,y_d$ be vertices in $Y$ such that $\varphi(vy_i)=1$ for $i=1,2,\ldots,d$. Again, let $H\in\ensuremath{\mathcal{H}}$ be such that $uv$ is its only edge with color 2. Let $H'$ be a cyclic orientation of the edges of $H$ such that $\vv{uv}$ is an arc, and let $w_i$ be the predecessor of $y_i$ in $H'$ for $i=1,2,\ldots,d$. As shown before, $\varphi(w_i v)=2$ for $i=1,2,\ldots,d$. Suppose there is an edge of $H$ which has one vertex in $X$ and one in $Y$. Then there exist $w\in Y$ and $x\in X$ such that $\vv{wx}\in H'$. Certainly $w$ is not the predecessor in $H'$ of any $y_i$ in $Y$. Since $\varphi$ is $X$-constant and $uv$ is the only edge of color 2 in $H$, $\varphi(xv)=\varphi(xw)\neq 2$. Now twist $xw,uv$ in $H$. Since $\varphi(xv)\neq 2$, we must have $\varphi(wu)=2$, so $u$ is joined in $G_m$ to at least $d+1$ vertices by edges of color 2, a contradiction \Lightning. Hence $H$ cannot have an edge with one vertex in $X$ and one in $Y$. Now suppose $x\in X$ and $x\not\in H$. If $\varphi(xv)=\varphi(xu)\neq 2$ then $H\setminus\{uv\}\cup\{ux,xv\}$ is an $(n-q+1)$-cycle with no edge of color 2, which is clearly impossible if $\ensuremath{\mathcal{H}}=R_q(n)$, and is impossible if $\ensuremath{\mathcal{H}}=C^*_q(n)$ by Theorem \ref{theorem:six}. Hence $\varphi(xv)=\varphi(xu)=2$ for each $x\in X$. Since $u$ is a $(2,t)$-max vertex for some color $t\neq 2$, we can repeat the above argument with $u$ in place of $v$. That shows that $\varphi(xv)=\varphi(xu)=t$ for each $x\in X$, which is clearly impossible. It remains to consider the possibility that $\ensuremath{\mathcal{H}}=R_q(n)$ and $X$ is spanned by a union of cycles in $H$. Suppose $xz$ is an edge of $H$ contained in $X$. Then we can twist $xz$ and $uv$ to get another subgraph in $R_q$ and, unless either $x$ or $z$ has main color $2$, this subgraph has no edge of color $2$. Hence at least half the vertices in $X$ have main color $2$ (and more than half would if $H$ had an odd component in $X$).
The above argument can be repeated with $u$ in place of $v$. If $u$ is a $(2,t)$-max vertex then that would show that at least half the vertices in $X$ have main color $t\neq 2$. So each vertex in $X$ has main color $2$ or $t$. Since $\varphi$ is $X$-ordered or nearly $X$-ordered, some vertex $x\in X$ has monochromatic degree $n-2$ or $n-1$ and the main color of $x$ must be 2 or $t$. Assume it is $2$. Then every cycle containing $x$ has an edge with color 2, contradicting the assumption that $H$ has only one edge with color 2. Similarly, we get a contradiction if the main color of $x$ is $t$. We have shown there is no vertex $v$ of $G_m$ with monochromatic degree $d<m-1$, so $\varphi$ is ordered or quasi-ordered. \end{proof} \end{lem} Now there is not much left to do to prove Theorems \ref{theorem:two}, \ref{theorem:three}, and \ref{theorem:four}. \subsection{Proof of Theorem \ref{theorem:four}} \label{sec:proof_of_theorem_ref_theorem_four} Theorem \ref{theorem:five} takes care of the case of $C_q$-polychromatic colorings when $q\geq 2$ and $n\in[2q+2,3q+2]$. The smallest value of $n$ for which there is a simply-ordered $C_q$-polychromatic 2-coloring is $n=3q+3$ (the coloring $\varphi_{C_q}$ in Section \ref{subsec:_k_hc_polychromatic_coloring}). Hence if $q\geq 2$ and $\pcq\leq2$ then there exists an optimal simply-ordered $C_q$-polychromatic coloring except if $n-q$ is odd and $n\in[2q+2,3q+2]$, or if $q=2$ and $n=5$ (the coloring of $K_5$ with two monochromatic 5-cycles has no monochromatic 3-cycle). So we need only consider $\ensuremath{\mathcal{H}}\in\{R_q(n),C^*_q(n)\}$ (when $q\geq 2$). Since condition \ref{structure-one} of Lemma \ref{structurelemma} is not satisfied, there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring with maximum monochromatic degree $n-1$. That means it is $X$-ordered, for some nonempty set $X$, so by Lemma \ref{XoToO} there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring which is ordered, and then, by Lemma \ref{O2SO}, one which is simply-ordered.\hfill\qedsymbol \subsection{Proof of Theorem \ref{theorem:two}} \label{sec:proof_of_theorem_ref_theorem_two} If $\ensuremath{\mathcal{H}}\in\{R_0(n),C_0(n)\}$ then, by Lemma \ref{structurelemma}, there exists an optimal $\ensuremath{\mathcal{H}}$-polychromatic coloring which is $Z$-quasi-ordered with $\abs{Z}=3$. Then, by Lemma \ref{XoToO}, there exists one which is quasi-ordered and then, by Lemma \ref{O2SO}, one which is quasi-simply-ordered with $\abs{Z}=3$, so recoloring one edge would give a simply-ordered coloring.\hfill\qedsymbol \subsection{Proof of Theorem \ref{theorem:three}} \label{sec:proof_of_theorem_ref_theorem_three} Exactly the same as the proof of Theorem \ref{theorem:two}, except now $\abs{Z}=4$, so two edges need to be recolored to get a simply-ordered coloring.\hfill\qedsymbol \section{Polychromatic cyclic Ramsey numbers} \label{sec:polychromatic_cyclic_ramsey_numbers} Let $s$, $t$, and $j$ be integers with $t\geq 2$, $s\geq 3$, $s\geq t$, and $1\leq j\leq t-1$. We define $\cyram(s,t,j)$ to be the smallest integer $n$ such that in any $t$-coloring of the edges of $K_n$ there exists an $s$-cycle that uses at most $j$ colors. Erd{\H{o}}s and Gy\'{a}rf\'{a}s \cite{Erdos:1997} defined a related function for cliques instead of cycles. So $\cyram(s,t,1)$ is the classical $t$-color Ramsey number for $s$-cycles and $\cyram(s,2,1)=c(s)$, the function in Theorem \ref{FS}.
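For orientation, consider the smallest case: an $s$-cycle using at most one color is simply a monochromatic $s$-cycle, so
\[
\cyram(3,2,1)=R(3,3)=6\,,
\]
the classical Ramsey number: every $2$-coloring of the edges of $K_6$ contains a monochromatic triangle, while the $2$-coloring of $K_5$ by two monochromatic $5$-cycles contains none.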
While it may be difficult to say much about the function $\cyram(s,t,j)$ in general, if $j=t-1$ we get $\cyram(s,t,t-1)=\pr_t(s)$, the smallest integer $n\geq s$ such that in any $t$-coloring of $K_n$ there exists an $s$-cycle that does not contain all $t$ colors. This is the function of Theorem \ref{extension} if $t\geq 3$, while $\pr_2(s)=c(s)$. \subsection{Proof of Theorem \ref{extension}} \label{sec:proof_of_theorem_extension} Let $q\geq 0$, $s\geq 3$, and $n$ be integers with $n=q+s$. Assume $q\geq 2$. By Theorem \ref{theorem:four} and the properties of the coloring $\varphi_{C_q}$ (see Section \ref{subsec:_k_hc_polychromatic_coloring}), there exists a $C_q$-polychromatic $t$-coloring of $K_n$ if and only if \begin{align*} q+s&=n\geq (2^t -1)q+2^{t-1}+1,\\ s&\geq(2^t-2)q+2^{t-1}+1,\\ q&\leq\frac{s-2^{t-1}-1}{2^t -2} = \frac{s-2}{2^t -2}-\frac{1}{2}. \end{align*} Since $q\geq 2$, we want to choose $s$ so that the right-hand side of the last inequality is at least 2, so \begin{align*} s-2&\geq \frac{5}{2}(2^t -2) = 5\cdot 2^{t-1}-5,\\ s&\geq 5\cdot 2^{t-1}-3. \end{align*} So if $s\geq 5\cdot 2^{t-1}-3$, then the smallest $n$ for which there does not exist a $C_q$-polychromatic $t$-coloring is $n=q+s$ where $q>\frac{s-2}{2^t-2}-\frac{1}{2}$, so $n=s+\left\lfloor \frac{s-2}{2^t-2} +\frac{1}{2} \right\rfloor=s+\round\left(\frac{s-2}{2^t-2}\right)$. (For example, with $t=3$ this gives $\pr_3(s)=s+\round\left(\frac{s-2}{6}\right)$ for all $s\geq 17$.) We note that if $s\geq 5\cdot 2^{t-1}-3$ then $\round\left( \frac{s-2}{2^t-2} \right)\geq \round\left( \frac{5}{2} \right)=3$, so $\pr_t(s)\geq s+3$ if $s\geq 5\cdot 2^{t-1}-3$. Now we assume that $\pr_t(s)=s+2$. So $s+2$ is the smallest value of $n$ for which in any $t$-coloring of the edges of $K_n$ there is an $s$-cycle which does not have all colors, which means there is a $C_1$-polychromatic $t$-coloring when $n=s+1$. Since $q=1$ in such a coloring, by Theorem \ref{theorem:three} and the properties of the coloring $\varphi_{C_1}$, $n\geq 5\cdot 2^{t-2}$. Hence if $s\in[5\cdot 2^{t-2}-1,5\cdot 2^{t-1}-4]$, then $\pr_t(s)=s+2$. Now we assume that $\pr_t(s)=s+1$. Then $s$ is the largest value of $n$ for which some $t$-coloring of $K_n$ has the property that every $s$-cycle gets all colors. So $q=n-s=0$ and, by Theorem \ref{theorem:two} and properties of the coloring $\varphi_{C_0}$, $n\geq 3\cdot 2^{t-3}+1$. Finally, since a $C_0$-polychromatic $t$-coloring requires $n\geq 3\cdot 2^{t-3}+1$ when $t\geq 4$, if $t\geq 4$ and $n\leq 3\cdot 2^{t-3}$ then in any $t$-coloring of $K_n$ some Hamiltonian cycle will not get all colors, so $\pr_t(s)=s$ if $3<s\leq3\cdot2^{t-3}$.\hfill\qedsymbol \section{Conjectures} \label{sec:conjectures_and_closing_remarks} We mentioned that we have been unable to prove a result for 2-regular graphs analogous to Theorem \ref{theorem:six} for cycles. In fact we think it even holds for two colors, except for a few cases with $j$ and $n$ small. \begin{conj}\label{2-regular_conjecture} Let $n\geq 6$ and $j$ be integers such that $3\leq j <n$, and if $j=5$ then $n\geq 9$, and let $\varphi$ be an edge-coloring of $K_n$ so that every 2-regular subgraph spanning $j$ vertices gets all colors. Then every 2-regular subgraph spanning at least $j$ vertices gets all colors under $\varphi$. \end{conj} This does not hold for $j=3$, $n=4$, and 3 colors, nor for $n=5$, $j=3$, and 2 colors. We can extend the notions of $Z$-quasi-ordered, quasi-ordered, and quasi-simply-ordered to sets $Z$ of larger size, allowing a main color to have degree less than $n-2$. Let $q\geq 0$ and $r\geq 1$ be integers such that $q\leq 2r-3$.
Hence $\frac{2r-2}{q+1}\geq 1$, and we let $k=\left\lfloor \frac{2r-2}{q+1} \right\rfloor+1\geq 2$ and $z=k(q+1)$. Let $Z$ be a set of $z$ vertices. We define a \emph{seed-coloring} $\varphi$ with $k$ colors on the edges of the complete graph $K_z$ with vertex set $Z$ as follows. Partition the $z$ vertices into $k$ sets $S_1,S_2,\ldots,S_k$ of size $q+1$. For $j=1,2,\ldots,k$, all edges within $S_j$ have color $j$, all edges between $S_i$ and $S_j$ ($i\neq j$) have color $i$ or $j$, and for each $j$ and each vertex $v$ in $S_j$, $v$ is incident to $\left\lceil \frac{(q+1)(k-1)}{2} \right\rceil$ or $\left\lfloor \frac{(q+1)(k-1)}{2} \right\rfloor$ edges with colors other than $j$ (so, within round off, half of the edges from each vertex in $S_j$ to vertices in other parts have color $j$). We say each vertex in $S_j$ has main color $j$. If $n\geq z$, we get a $Z$-quasi-ordered coloring $c$ of $K_n$ which is an extension of the coloring $\varphi$ on $Z$ if for each $j$ and each $v\in S_j$, $c(vy)=j$ for each $y\in V(K_n)\setminus Z$. If $c$ is $Z$-quasi-ordered then it is quasi-ordered if $c$ restricted to $V(K_n)\setminus Z$ is ordered, and quasi-simply-ordered if $c$ restricted to $V(K_n)\setminus Z$ is simply-ordered. If $r>0$ and $q\geq 0$ are integers we let $\mathscr{R}(n,r,q)$ be the set of all $r$-regular subgraphs of $K_n$ spanning precisely $n-q$ vertices (assume $n-q$ is even if $r$ is odd, so the set is nonempty), and if $r\geq 2$ let $\mathscr{C}(n,r,q)$ be the set of all such subgraphs which are connected. Since $k-1=\left\lfloor \frac{2r-2}{q+1} \right\rfloor\leq \frac{2r-2}{q+1}$, we have $r\geq\frac{(q+1)(k-1)}{2}+1>\left\lceil \frac{(q+1)(k-1)}{2} \right\rceil$. So if $H$ is in $\mathscr{R}(n,r,q)$ or $\mathscr{C}(n,r,q)$, then $H$ contains an edge with each of the $k$ colors on edges within $Z$, because it contains at least one vertex in $S_j$ for each $j$, and fewer than $r$ of the edges incident to this vertex have colors other than $j$. We can get an $\mathscr{R}(n,r,q)$-polychromatic or $\mathscr{C}(n,r,q)$-polychromatic quasi-simply-ordered coloring of $K_n$ with $m>k$ colors by making the color classes $M_t$ on the vertices in $V(K_n)\setminus Z$ for $t=k+1,k+2,\ldots,m$ sufficiently large. If $H\in\mathscr{R}(n,r,q)$, for each $t\in[k+1,m]$ we will need the size of $M_t$ to be at least $q+1$ more than the sum of the sizes of all previous color classes, while if $H\in\mathscr{C}(n,r,q)$ we will need the size of $M_t$ to be at least $q$ more than the sum of the sizes of all previous classes, with an extra vertex in $M_m$. To try to get optimal polychromatic colorings we make the sizes of these color classes as small as possible while still satisfying these conditions. For example, if $r=2$ and $q=0$ then $k=\left\lfloor \frac{2r-2}{q+1} \right\rfloor +1=3$ and $z=k(q+1)=3$, and we get the quasi-simply-ordered colorings $\varphi_{R_0}$ and $\varphi_{C_0}$ with $\abs{Z}=3$ of Theorem \ref{theorem:two}. If $r=2$ and $q=1$ then $k=2$ and $z=4$, and we get the colorings $\varphi_{R_1}$ and $\varphi_{C_1}$ with $\abs{Z}=4$ of Theorem \ref{theorem:three}. \begin{example}[$r=3,q=0$, so $k=5,z=5$]\label{previous-example} Let $\varphi$ be the edge coloring with $Z=\{v_1,v_2,v_3,v_4,v_5\}$ in which $v_i v_{i+1}$ and $v_i v_{i+2}$ (indices $\operatorname{mod}$ $5$) have color $i$. The edges connecting $v_i$ to the remaining vertices in $V(K_n)\setminus Z$ have color $i$. See Figure \ref{fig:kFrpoly}.
\begin{figure}[htbp] \centering \begin{tikzpicture}[mystyle/.style={draw,shape=circle,fill=white}] \node[regular polygon,regular polygon sides=5,minimum size=3cm] (p) {}; \foreach\x in {1,...,5}{\node[mystyle] (p\x) at (p.corner \x){$v_{\x}$};} \foreach\i in {1,...,5} { \foreach\t in {1,2}{ \pgfmathsetmacro{\j}{int(mod(\i-1+\t,5)+1)} \draw (p\i) --node {\i} (p\j); } } \foreach \i in {2,...,5}{ \pgfmathsetmacro{\angle}{int(18+72*\i)} \draw (p\i) --node {\i} (\angle:2.5); } \draw (p1) --node[left] {1} (90:2.5); \path [draw=black,fill=gray,fill opacity=.25,even odd rule] (0,0) circle (3.5) (0,0) circle (2.5); \node (graphlabel) at (0,3) {$V(K_n)\setminus Z$}; \end{tikzpicture} \caption{The coloring for Example \ref{previous-example}.} \label{fig:kFrpoly} \end{figure} \end{example} \begin{example}[$r=3,q=3,k=2,z=8$] $Z$ has two color classes, 4 vertices in each. The complete bipartite graph between these two sets of vertices could have two vertex-disjoint copies of $K_{2,2}$ of one color and also of the other color, or could have an 8-cycle of each color. \end{example} \begin{example}[$r=4,q=2,k=3,z=9$] So $S_1,S_2,S_3$ each have size $q+1=3$. One way to color the edges between parts is, for $j=1,2,3$, to make each vertex in $S_j$ incident with 2 edges of color $j$ to vertices in $S_{j+1}$ and 1 edge of color $j$ to a vertex in $S_{j-1}$ (so it is incident with one edge of color $j+1$ and two edges of color $j-1$, cyclically). The smallest value of $n$ for which this seed can generate a quasi-simply-ordered $\mathscr{R}(n,4,2)$-polychromatic coloring with 5 colors is $n=45$ (the 4$^{\rm{th}}$ and 5$^{\rm{th}}$ color classes would have sizes $9+2+1=12$ and $21+2+1=24$ respectively), while to get a simply-ordered $\mathscr{R}(n,4,2)$-polychromatic coloring with 5 colors you would need $n\geq 69$ (color class sizes $3,3,9,18,36$ work). \end{example} \begin{conj} Let $r\geq 1$ and $q\geq 0$ be integers such that $q\leq 2r-3$. Let $k=\left\lfloor \frac{2r-2}{q+1} \right\rfloor+1\geq 2$ and $z=k(q+1)$. If $n\geq z$ (and $n-q$ is even if $r$ is odd), then there exist optimal quasi-simply-ordered $\mathscr{R}(n,r,q)$- and $\mathscr{C}(n,r,q)$-polychromatic colorings with seed $Z$ with parameters $r,q,k,z$. \end{conj} It is not hard to check that each of these quasi-simply-ordered colorings does at least as well as a simply-ordered coloring for those values of $r$ and $q$. The only question is whether some other coloring does better, and the conjecture says no. What if $\frac{2r-2}{q+1}<1$? Then $k=\left\lfloor \frac{2r-2}{q+1} \right\rfloor+1=1$, which seems to say that no seed $Z$ with at least 2 colors exists. \begin{conj} Let $r\geq 1$ and $q\geq 0$ be integers with $q\geq 2r-2$ and $n\geq q+r+1$, such that $r$ and $n-q$ are not both odd. Then there exists an optimal simply-ordered $\mathscr{R}(n,r,q)$-polychromatic coloring of $K_n$. If $r\geq 2$ then there exists a $\mathscr{C}(n,r,q)$-polychromatic coloring of $K_n$ (unless $r=2$, $q\geq 2$, $n-q$ is odd, and $n\in[2q+2,3q+1]$). \end{conj} Theorem \ref{theorem:one} says this conjecture is true for $r=1$. Theorem \ref{theorem:four} says it is true for $\mathscr{C}(n,r,q)$ for $r=2$ and that it would be true for $\mathscr{R}(n,r,q)$ for $r=2$ if Theorem \ref{theorem:six} held for 2-regular graphs. \bibliographystyle{plain}
\section{Introduction} Suppose that $G$ is a compact connected Lie group. For an integer $n\geqslant 1$ let $\Hom(\Z^{n},G)$ be the space of ordered commuting $n$-tuples in $G$ endowed with the subspace topology as a subset of $G^{n}$. The group $G$ acts by conjugation on $\Hom(\Z^{n},G)$ and the space of representations $\Rep(\Z^{n},G):=\Hom(\Z^{n},G)/G$ can be identified with the moduli space of isomorphism classes of flat connections on principal $G$-bundles over the torus $(\SS^{1})^{n}$. When $n=2$ or $n=3$ these moduli spaces appear naturally in quantum field theories such as Yang–Mills and Chern–Simons theories. Motivated by this, a systematic study of the spaces $\Rep(\Z^{n},G)$ was initiated by Borel, Friedman and Morgan in \cite{BFM} and also by Kac and Smilga in \cite{KS}. In both of these papers the authors showed that the representation spaces $\Rep(\Z^{n},G)$ for $n=2,3$ can be described in terms of the root system associated to a choice of maximal torus in $G$. If $G$ is simply--connected and simple, then $\Rep(\Z^2,G)$ can furthermore be identified with the moduli space of semistable principal bundles over an elliptic curve with structure group the complexification of $G$, and this is known to be a weighted projective space \cite{BS79, FMW, Laszlo, Looijenga}. On the other hand, in \cite{AC} Adem and Cohen started a systematic study of the spaces of homomorphisms $\Hom(\Z^{n},G)$ from the point of view of homotopy theory. Since then a variety of authors have studied these spaces using techniques from geometry and homotopy theory. See for example \cite{ACG,Baird,BLR,GPS,GH19,PS,RS,STG}, among others. The spaces of ordered commuting pairs $\Hom(\Z^2, G)$ turn out to have a surprisingly complicated structure; their integral homology is not known for $G=SU(m)$ when $m>2$, and torsion can appear at primes which divide the order of the Weyl group. From the point of view of algebraic geometry, $\Hom(\Z^2,G)$ can be identified with $\Lambda(G)$, the inertia stack of $G$ with the adjoint action. In this context $\Rep(\Z^2,G)$ can be regarded as the coarse moduli space of the stack, and as the local isotropy groups are not finite, the geometry can be rather intricate. In this article we describe the second homology group of the principal path--component $\Hom(\Z^2,G)_{\BONE}$ as an extension of the Schur multiplier of $\pi_1(G)^2$. \begin{reptheorem}{thm:mainpairs2} Suppose that $G$ is a semisimple compact connected Lie group. Then there is an extension \[ 0\to \Z^s \to H_2(\Hom(\Z^2,G)_{\BONE};\Z) \to H_2(\pi_1(G)^2;\Z)\to 0\,, \] where $s\geqslant 0$ is the number of simple factors in the Lie algebra of $G$. \end{reptheorem} Suppose that $G$ is simply--connected and simple. As $\Rep(\Z^2,G)$ is a weighted projective space, one has $\pi_2(\Rep(\Z^2,G))\cong \Z$. The preceding theorem will be deduced from a calculation of $\pi_2(\Hom(\Z^2,G))$. One of the fundamental results in Lie group theory is that $\pi_2(G)=0$ and $\pi_3(G)\cong\Z$. There is a canonical 3-dimensional integral cohomology class that can be realized through a group homomorphism $SU(2)\to G$. In this paper we obtain the rather surprising result that for commuting pairs there is a canonical class which now appears in dimension two. \begin{reptheorem}{thm:mainpairs} Let $G$ be a simply--connected and simple compact Lie group.
Then \[ \pi_2(\Hom(\Z^2,G)) \cong \Z\,, \] and on this group the quotient map $\Hom(\Z^2,G)\to \Rep(\Z^2,G)$ induces multiplication by the Dynkin index $\textnormal{lcm}\{n_0^\vee,\dots,n^\vee_r\}$ where $n_0^\vee,\dots,n_r^\vee \geqslant 1$ are the coroot integers of $G$. \end{reptheorem} The Dynkin index of $G$ can be defined as the greatest common divisor of the degrees of $\pi_3(G)\to \pi_3(SU(N))$ for all representations $G\to SU(N)$. The values of the Dynkin index of $G$ are explicitly computed in \cite[Proposition 4.7]{KN97} and \cite[Proposition 2.6]{Laszlo-Sorger1997} and agree with the expressions in terms of coroot integers which are tabulated below. In Theorem \ref{thm:su2represents} we show that any embedding $SU(2)\to G$ corresponding to a long root of $G$ induces an isomorphism $\pi_2(\Hom(\Z^2,SU(2)))\cong \pi_2(\Hom(\Z^2,G))$. \begin{table}[b] \centering \def\arraystretch{1.5} \begin{tabular}{|c||c|c|c|c|c|c|c|c|} \hline $G$ & $SU(m)$ & $Spin(k)$ & $Sp(l)$ & $E_6$ & $E_7$ & $E_8$ & $F_4$ & $G_2$ \\ \hline \hline coroot integers & $1$ & $1,2$ & $1$ & $1,2,3$ & $1,2,3,4$ & $1,2,3,4,5,6$ & $1,2,3$ & $1,2$ \\ \hline lcm & $1$ & $2$ & $1$ & $6$ & $12$ & $60$ & $6$ & $2$ \\ \hline \end{tabular} \vspace{10pt} \caption{The set of coroot integers and their least common multiple for the simple simply--connected compact Lie groups ($m\geqslant 1$, $k\geqslant 7$, $l\geqslant 2$).} \label{table:coroot} \end{table} We note that Theorem \ref{thm:mainpairs} effectively computes $\pi_2(\Hom(\Z^2,G)_{\BONE})$ when $G$ is the group of complex or real points of any reductive algebraic group. Indeed, the main result of \cite{PS} asserts that $\Hom(\Z^2,G)_{\BONE}$ deformation retracts onto $\Hom(\Z^2,K)_{\BONE}$ where $K$ is a maximal compact subgroup of $G$. So we may assume that $G$ is a compact connected Lie group. From the standard classification theorems we know that $G\cong \tilde{G}/L$, where $\tilde{G} =(\SS^1)^k\times G_1\times\dots\times G_s$ is a product of a torus and simply--connected simple compact Lie groups $G_1,\dots , G_s$, and $L$ is a finite subgroup in the center of $\tilde{G}$. By \cite[Lemma 2.2]{Goldman} the quotient map $\tilde{G}\to G$ induces a covering map $\Hom(\Z^2,\tilde{G})\to \Hom(\Z^2,G)_{\BONE}$. From Theorem \ref{thm:mainpairs} we then deduce that \[ \pi_{2}(\Hom(\Z^{2},G)_{\BONE})\cong \pi_{2}(\Hom(\Z^{2},G_{1}))\times\cdots\times \pi_{2}(\Hom(\Z^{2},G_{s}))\cong \Z^{s}. \] \begin{corollary} Let $G$ be the component of the identity of the group of complex or real points of a reductive algebraic group, which we assume is defined over $\R$ in the latter case. Then \[ \pi_2(\Hom(\Z^2,G)_{\BONE})\cong \Z^{s}\,, \] where $s\geqslant 0$ is the number of simple factors in the Lie algebra of $G$. \end{corollary} For the groups $SU(m)$ and $Sp(k)$ we extend our computations to commuting $n$-tuples for $n>2$. \begin{reptheorem}{thm:generalsecondhomology} Suppose that $G$ is $SU(m)$ or $Sp(m)$ with $m\geqslant 1$. For every $n\geqslant 1$ the quotient map $\Hom(\Z^{n},G)\to \Rep(\Z^{n},G)$ induces an isomorphism \[ \pi_{2}(\Hom(\Z^{n},G))\cong \pi_{2}(\Rep(\Z^{n},G))\, . \] \end{reptheorem} We then calculate $\pi_2(\Rep(\Z^n,G))$ and obtain the following result. \begin{reptheorem}{thm:casesymplecticandunitary} Let $n\geqslant 1$. 
Then \begin{itemize} \item[(i)] for every $m\geqslant 3$ there is an isomorphism \[ \pi_{2}(\Hom(\Z^{n},SU(m))) \cong \Z^{\binom{{n}}{{2}}}\,, \] and the standard inclusion $SU(m)\to SU(m+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^n,SU(m))) \xrightarrow{\cong} \pi_2(\Hom(\Z^n,SU(m+1)))\,, \] \item[(ii)] for every $k\geqslant 1$ there is an isomorphism \[ \pi_2(\Hom(\Z^n,Sp(k))) \cong \Z^{\binom{{n}}{{2}}}\oplus (\Z/2)^{2^{n}-1-n-\binom{{n}}{{2}}}\,, \] and the standard inclusion $Sp(k)\to Sp(k+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^n,Sp(k)))\xrightarrow{\cong} \pi_2(\Hom(\Z^n,Sp(k+1)))\, . \] \end{itemize} \end{reptheorem} By the same reasoning as above, using the covering space $S^1\times SU(m)\to U(m)$ we observe that for every $m,n\geqslant 1$ the inclusion map $SU(m)\to U(m)$ induces an isomorphism \[ \pi_{2}(\Hom(\Z^{n},SU(m)))\cong \pi_{2}(\Hom(\Z^{n},U(m))) \] so that the preceding theorem also covers the case of the unitary groups. For spaces of commuting pairs in $Spin(m)$ we also explore the behavior of $\pi_2$ with respect to stabilization. \begin{reptheorem}{thm:spinstability} For all $m\geqslant 5$ the standard map $Spin(m)\to Spin(m+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^2,Spin(m)))\xrightarrow{\cong} \pi_2(\Hom(\Z^2,Spin(m+1)))\, . \] \end{reptheorem} Along the way, in Theorem \ref{thm:stabilityrepspin} we prove an integral homology stability result for the moduli spaces $\Rep(\Z^2,Spin(m))$. A motivation for the computations provided in this article is the construction of non-trivial transitionally commutative structures (TC structures) on a trivial principal $G$-bundle. For a Lie group $G$ the classifying bundle for commutativity, $E_{com}G\to B_{com}G$, is a principal $G$--bundle constructed out of the spaces of ordered commuting $n$-tuples in $G$ for all $n\geqslant 0$ (see Section \ref{TC structures} for the definition). In this setting, the analogue of Steenrod's homotopy classification of bundles is as follows: the space $B_{com}G$ classifies principal $G$-bundles that come equipped with a TC structure. A TC structure on a principal $G$-bundle $p\co E\to X$ is a choice of a lifting $\tilde{f}\co X\to B_{com}G$, up to homotopy, of the classifying map $f\co X\to BG$ of the bundle $p\co E\to X$. Therefore, the same underlying bundle can admit different TC structures. In what follows we focus on the associated bundle for the component of the identity $E_{com}G_{\BONE}\to B_{com}G_{\BONE}\to BG$. In the classical setting, the computation $\pi_3(G)\cong \Z$ implies that $\pi_4(BG)\cong \Z$ and this can be used to construct non--trivial principal $G$--bundles over the 4--sphere $\SS^4$. Our main result in the commutative context (see Corollary \ref{cor:pi4}) is the following computation for any simply--connected simple compact Lie group $G$: \[ \pi_4(E_{com}G_{\BONE})\cong \Z\, \textnormal{ and }\, \pi_4(B_{com}G_{\BONE})\cong \pi_4( E_{com}G_{\BONE})\oplus \pi_4(BG)\cong \Z\oplus \Z\, . \] We provide explicit generators for these groups. Moreover we construct examples of non-trivial TC structures on the trivial principal $G$-bundle over the sphere $\SS^{4}$. As a by-product of this construction we obtain geometric representatives for the generators of the reduced commutative $K$-theory group $\widetilde{K}_{com}(\SS^{4})$. \medskip This article is organized as follows. In Section \ref{sec:cohomology} we present a cohomology computation with the goal of determining the rank of $\pi_2(\Hom(\Z^n,G)_{\BONE})$. 
In Section \ref{SectionBredon} we set up the spectral sequence for the homology of the homotopy orbit space $EG\times_G \Hom(\Z^{n},G)_{\BONE}$, which will be the main device used for our computations. In Section \ref{sec:sumspm} we calculate $\pi_2$ of the space of ordered commuting $n$-tuples in $G=SU(m)$ and $G=Sp(k)$. Section \ref{sec:pairs} begins with a review of some known facts concerning centralizers of pairs of commuting elements. Then we prove some of the main technical results of the paper, eventually arriving at the calculation of $\pi_2(\Hom(\Z^2,G))$. In Section \ref{sec:stability} we study the stability behavior for spaces of commuting pairs in the Spin groups. In Section \ref{sec:rolesu2} we study the distinguished role that the group $SU(2)$ plays in the computation of the groups $\pi_{2}(\Hom(\Z^{2},G))$. Finally, in Section \ref{TC structures} we provide geometric interpretations for the results obtained in this article in terms of transitionally commutative principal $G$-bundles. \section*{Acknowledgements} We thank Shrawan Kumar and Burt Totaro for their valuable feedback. \section{Cohomological computations}\label{sec:cohomology} Let $G$ be a compact connected Lie group, let $T\leqslant G$ be a fixed maximal torus, and let $W=N(T)/T$ be the Weyl group. We begin by recalling some known facts concerning the path--components and the fundamental group of $\Hom(\Z^n,G)$ which will be used in the sequel. In general, $\Hom(\Z^n,G)$ is not path--connected. If $\pi_{1}(G)$ is torsionfree, however, then $\Hom(\Z^2,G)$ is path--connected, because the centralizer of any element of $G$ is connected (see \cite[IX \S 5.3 Corollary 1]{Bourbaki}). In fact, classical theorems of Borel \cite{Bo60} show that $\Hom(\Z^2,G)$ is path--connected if and only if $\pi_1(G)$ is torsionfree, and for $n\geqslant 3$ the space $\Hom(\Z^n,G)$ is path--connected if and only if $H_\ast(G;\Z)$ is torsionfree. For example, this is the case for $G=SU(m)$, $G=U(l)$ and $G=Sp(k)$. When $\Hom(\Z^{n},G)$ is not path--connected, we denote by $\Hom(\Z^{n},G)_{\BONE}$ the path--connected component that contains the trivial homomorphism $\BONE\co \Z^n\to G$, equivalently the tuple $\BONE=(1_G,\dots,1_G)$. Then $\Hom(\Z^{n},G)_{\BONE}$ consists precisely of those $n$-tuples that are contained in some maximal torus of $G$. For simply--connected $G$ this follows because the rank of the centralizer of a commuting $n$-tuple is locally constant (see \cite[Corollary 2.3.2]{BFM}). For general $G$ it follows by the same argument applied to the universal cover of $G$ using \cite[Lemma 2.2]{Goldman}. Consequently, \[ \Rep(\Z^n,G)_\BONE:=\Hom(\Z^n,G)_\BONE/G \cong T^n/W\,, \] where on the right hand side the Weyl group acts diagonally on $T^n$. All of our statements will concern the path--components $\Hom(\Z^n,G)_{\BONE}$ and $\Rep(\Z^n,G)_{\BONE}$. The fundamental group of $\Hom(\Z^n,G)_{\BONE}$ is also well-understood. By \cite[Theorem 1.1]{GPS} the inclusion $\Hom(\Z^n,G)_{\BONE}\subseteq G^n$ induces an isomorphism \[ \pi_1(\Hom(\Z^n,G)_{\BONE})\cong \pi_1(G)^n\, . \] In particular, $\Hom(\Z^n,G)_{\BONE}$ is simply--connected if $G$ is. In this situation $\Rep(\Z^n,G)_{\BONE}$ is also simply--connected by \cite[Theorem 1.1]{BLR}. We will use these facts frequently without mention. Now we turn our attention to the rational homology of $\Hom(\Z^n,G)_{\BONE}$. As explained below, the rational cohomology ring of $\Hom(\Z^n,G)_{\BONE}$ admits a well-known description as a ring of $W$-invariants.
A closed formula for the Poincar{\'e} series of $\Hom(\Z^n,G)_{\BONE}$ was derived in \cite{RS} (see Remark \ref{rem:poincare}). For a fixed homological degree $i\in \N$, however, determination of the rank of $H_i(\Hom(\Z^n,G)_{\BONE};\Q)$ can pose a combinatorial challenge. In this section we present a calculation of $H_2(\Hom(\Z^n,G)_{\BONE};\Q)$. This determines the rank of $\pi_2(\Hom(\Z^n,G)_{\BONE})$ as an abelian group. To describe the rational cohomology of $\Hom(\Z^n,G)_{\BONE}$ consider the (surjective) map \begin{alignat*}{1} G\times T^{n}&\to \Hom(\Z^{n},G)_{\BONE}\\ (g,t_{1},\dots,t_{n})&\mapsto (gt_{1}g^{-1},\dots,gt_{n}g^{-1})\, . \end{alignat*} It is invariant under the action of the normalizer $N(T)$ on $G$ by right translation and on $T^n$ diagonally by conjugation. Since $G\times_{N(T)} T^n\cong G/T\times_W T^n$, we obtain an induced map \[ G/T\times_{W}T^{n}\to \Hom(\Z^{n},G)_{\BONE}\, . \] Baird showed in \cite[Theorem 4.3]{Baird} that if $k$ is a field in which $|W|$ is invertible, then this map induces an isomorphism \[ H^\ast(\Hom(\Z^n,G)_{\BONE};k)\cong H^\ast(G/T\times_W T^n;k)\cong H^\ast(G/T\times T^n;k)^W\, . \] \begin{theorem}\label{2nd homology} Suppose that the compact connected Lie group $G$ is simple. Then \[ H_{2}(\Hom(\Z^{n},G)_{\BONE};\Q)\cong \Q^{\binom{n}{2}}. \] \end{theorem} \begin{proof} Because of the universal coefficient theorem, we may as well use complex coefficients throughout this proof. Let $W$ act diagonally on $H^{1}(T;\C)\otimes H^{1}(T;\C)$. As a first step we are going to prove that $(H^{1}(T;\C)\otimes H^{1}(T;\C))^{W}\cong \C$. Let $\ti$ be the Lie algebra of $T$ and $\ti_{\C}$ its complexification. There is an isomorphism of $W$-representations $H^{1}(T;\C)\cong \ti^{*}_{\C}$, where $\ti_{\C}^{*}$ denotes the dual of $\ti_{\C}$. Therefore $(H^{1}(T;\C)\otimes H^{1}(T;\C))^{W}\cong (\ti_{\C}^{*}\otimes \ti_{\C}^{*})^{W}$. To complete the first step we need to prove that $(\ti_{\C}^{*}\otimes \ti_{\C}^{*})^{W}$ is a complex vector space of dimension $1$. Note that $\text{dim}_{\C}(\ti_{\C}^{*}\otimes \ti_{\C}^{*})^{W}$ is the number of times that the trivial representation appears in the $W$-representation $\ti_{\C}^{*}\otimes \ti_{\C}^{*}$. This number is given by \[ \text{dim}_{\C}(\ti_{\C}^{*}\otimes \ti_{\C}^{*})^{W}=\left<\chi_{\ti_{\C}^{*}}^{2},1\right> =\frac{1}{|W|}\sum_{w\in W}\overline{\chi_{\ti_{\C}^{*}}(w)}^{2}. \] In the above equation $\chi_{\ti_{\C}^*}$ denotes the character of $\ti_{\C}^{*}$ and $\left<\cdot,\cdot\right>$ the usual inner product of characters. As $\ti_{\C}^{*}$ is (the complexification of) a real $W$-representation we have $\chi_{\ti_{\C}^{*}}(w)\in \R$. Moreover, $\ti_{\C}^{*}$ is irreducible by \cite[Proposition 14.31]{FultonHarris} since $\g_{\C}$, the complexification of the Lie algebra of $G$, is simple. Therefore, \[ \text{dim}_{\C}(\ti_{\C}^{*}\otimes \ti_{\C}^{*})^{W} =\left<\chi_{\ti_{\C}^{*}}^{2},1\right> =\left<\chi_{\ti_{\C}^{*}},\chi_{\ti_{\C}^{*}}\right> =1. \] Next we prove that $H^{2}(\Hom(\Z^{n},G)_{\BONE};\C)\cong \C^{\binom{n}{2}}$. With complex coefficients there are isomorphisms \[ H^{2}(\Hom(\Z^{n},G)_{\BONE};\C)\cong H^{2}(G/T\times_{W}T^{n};\C) \cong H^{2}(G/T\times T^{n};\C)^{W}. \] Using the K{\"u}nneth theorem we obtain a $W$-equivariant isomorphism \[ H^{2}(G/T\times T^{n};\C)\cong H^{2}(G/T;\C)\oplus (H^{1}(G/T;\C)\otimes H^{1}(T^{n};\C))\oplus H^{2}(T^{n};\C). \] The flag variety $G/T$ is simply--connected and thus $H^{1}(G/T;\C)=0$. 
Furthermore, as an ungraded $W$-representation, $H^{*}(G/T;\C)$ is the regular representation and the only copy of the trivial representation is in degree $0$. This implies that $H^{2}(G/T;\C)^{W}=0$. Therefore, \[ H^{2}(G/T\times T^{n};\C)^{W}\cong H^{2}(T^{n};\C)^{W}. \] By \cite[IX \S 5.2 Corollary 1]{Bourbaki} the space $T/W$ is homeomorphic to $A/\pi_1(G)$, where $A$ is the closure of a Weyl alcove. Now $\pi_1(G)$ is finite as $G$ is simple, and $A$ is contractible, hence $H^{2}(T;\C)^{W}\cong H^{2}(T/W;\C) \cong H^2(A/\pi_1(G);\C)\cong H^2(A;\C)^{\pi_1(G)}=0$. Using this and the K{\"u}nneth theorem we obtain an isomorphism \[ H^{2}(T^{n};\C)^{W}\cong \bigoplus^{\binom{n}{2}}(H^{1}(T;\C)\otimes H^{1}(T;\C))^{W} \cong \C^{\binom{n}{2}}. \] Putting everything together we conclude that $H^{2}(\Hom(\Z^{n},G)_{\BONE};\C)\cong \C^{\binom{n}{2}}$. The universal coefficient theorem now finishes the proof. \end{proof} \begin{corollary} \label{cor:rank} Let $G$ be a simple compact connected Lie group. Then $\pi_{2}(\Hom(\Z^{n},G)_{\BONE})$ has rank $\binom{n}{2}$ as an abelian group. \end{corollary} \begin{proof} As explained in the introduction, $\pi_2(\Hom(\Z^n,G)_{\BONE})\cong \pi_2(\Hom(\Z^n,\tilde{G})_{\BONE})$, where $\tilde{G}$ is the universal covering group of $G$. It is compact since $G$ is simple. Now $\Hom(\Z^{n},\tilde{G})_{\BONE}$ is simply--connected by \cite[Theorem 1.1]{GPS}. Therefore, by the Hurewicz theorem, we obtain \[ \pi_{2}(\Hom(\Z^{n},\tilde{G})_{\BONE})\cong H_{2}(\Hom(\Z^{n},\tilde{G})_{\BONE};\Z)\,. \] The assertion follows from Theorem \ref{2nd homology}. \end{proof} \begin{remark} \label{rem:poincare} Let $G$ be a simple compact connected Lie group. We can view the Weyl group $W$ as a reflection group on $\ti$ and every element $w\in W$ as a linear transformation $w:\ti\to \ti$. Let $d_{1}, d_{2},\dots , d_{r}$ be the characteristic degrees of $W$. By \cite[Theorem 1.1]{RS} the Poincar\'e series of $\Hom(\Z^{n},G)_{\BONE}$ is given by \[ P(\Hom(\Z^{n},G)_{\BONE})(t)=\frac{\prod_{i=1}^{r}(1-t^{2d_{i}})}{|W|} \left(\sum_{w\in W}\frac{\text{det}(1+tw)^{n}}{\text{det}(1-t^{2}w)}\right). \] Our Theorem \ref{2nd homology} shows that the coefficient of $t^{2}$ in this polynomial is precisely $\binom{n}{2}$. Moreover, as $G$ is simple, $\pi_1(G)$ is finite. Then $\pi_1(\Hom(\Z^n,G)_{\BONE})\cong \pi_1(G)^n$ is finite too, so \[ P(\Hom(\Z^{n},G)_{\BONE})(t)=1+{\binom{n}{2}}t^{2}+t^{3}q(t)\,, \] where $q(t)$ is some polynomial with non-negative integer coefficients. \end{remark} \section{The Bredon spectral sequence for $\Hom(\Z^n,G)_{\BONE}$}\label{SectionBredon} In this section we will analyze the Bredon spectral sequence associated to the $G$-space $\Hom(\Z^n,G)_{\BONE}$. \subsection{The set-up} We begin with an observation: \begin{lemma}\label{equihom1} Let $G$ be a simply--connected compact Lie group. There is an isomorphism \[ \pi_2(\Hom(\Z^n,G)_{\BONE})\cong H_{2}(EG\times_G\Hom(\Z^n,G)_{\BONE};\Z). \] \end{lemma} \begin{proof} Since $G$ is assumed to be simply--connected, $BG$ is 3-connected, and $\Hom(\Z^n,G)_{\BONE}$ is simply--connected by \cite[Theorem 1.1]{GPS}. The long exact sequence of homotopy groups associated to the fibration sequence \[ \Hom(\Z^n,G)_{\BONE}\to EG\times_G \Hom(\Z^n,G)_{\BONE}\to BG \] then implies that $EG\times_G \Hom(\Z^n,G)_{\BONE}$ is simply--connected, and that $\pi_2(\Hom(\Z^n,G)_{\BONE})\cong \pi_2(EG\times_G \Hom(\Z^n,G)_{\BONE})$. The result now follows from the Hurewicz theorem.
\end{proof} As a consequence, the homotopy group $\pi_2(\Hom(\Z^n,G)_{\BONE})$ can be computed from the Borel equivariant homology of $\Hom(\Z^n,G)_{\BONE}$. Now if $X$ is a $G$-CW-complex, then the skeletal filtration of $X$ gives rise to an Atiyah--Hirzebruch-style spectral sequence \[ E^2_{p,q}=H^G_p(X;\mathcal{H}_q)\Longrightarrow H_{p+q}(EG\times_G X;\Z)\,, \] where $H^{G}_{p}(X;\H_{q})$ denotes Bredon homology with respect to the covariant coefficient system \[ G/K\mapsto \mathcal{H}_q(G/K):=H_q(BK;\Z)\, , \] for $K$ a closed subgroup of $G$. For Bredon homology in the setting of topological groups we refer the reader to the paper by Willson \cite{willson}. The spectral sequence is a special case of \cite[Theorem 3.1]{willson}. We will refer to it as the \emph{Bredon spectral sequence}. \medskip If $G$ is a compact Lie group, then $\Hom(\Z^n,G)$ may be viewed as an affine $G$-variety, and thus the underlying space admits a $G$-CW-structure by \cite[Theorem 1.3]{ParkSuh}. In fact, if $G$ is simply--connected, it is not too difficult to construct a $G$-CW-structure on $\Hom(\Z^n,G)_{\BONE}$ directly, and we will do this in Lemma \ref{lem:cw} (see also Remark \ref{rem:cwgeneral}). Thus, taking $X=\Hom(\Z^n,G)_{\BONE}$, the spectral sequence we will study takes the form \begin{equation} \label{eq:bredonss} E^2_{p,q}=H^G_p(\Hom(\Z^n,G)_{\BONE};\mathcal{H}_q) \Longrightarrow H_{p+q}(EG\times_G \Hom(\Z^n,G)_{\BONE};\Z)\, . \end{equation} Notice that when $q=0$, $\mathcal{H}_q=\underline{\Z}$ is the constant coefficient system. Therefore, \[ E^2_{p,0}=H^G_{p}(\Hom(\Z^n,G)_{\BONE};\underline{\Z})\cong H_p(\Rep(\Z^n,G)_{\BONE};\Z)\, . \] We will be interested not only in calculating the homotopy group $\pi_2(\Hom(\Z^n,G)_{\BONE})$, but also in describing the effect of the quotient map \[ \pi\co \Hom(\Z^n,G)_{\BONE}\to \Rep(\Z^n,G)_{\BONE} \] on $\pi_2$. By Lemma \ref{equihom1}, this map can be identified with the one induced on $H_2$ by the projection \[ EG\times_G\Hom(\Z^n,G)_{\BONE}\to \Hom(\Z^n,G)_{\BONE}/G= \Rep(\Z^n,G)_{\BONE} \] from the homotopy orbit space to the strict orbit space. In the spectral sequence this corresponds to the composite \[ H_2(EG\times_G\Hom(\Z^n,G)_{\BONE};\Z)\to E^\infty_{2,0}\hookrightarrow E^2_{2,0}\cong H_2(\Rep(\Z^n,G)_{\BONE};\Z)\, . \] The case $n=2$ is of special interest: it is known that if $G$ is simply--connected and simple, then $\Rep(\Z^2,G)$ can be identified naturally with the moduli space of semistable principal bundles over an elliptic curve with structure group the complexification of $G$, and this moduli space is a weighted projective space $\mathbb{CP}(\mathbf{n}^\vee)$, where $\mathbf{n}^\vee=(n_0^\vee,\dots,n_r^\vee)$ is the tuple of coroot integers of $G$ (see Section \ref{sec:componentgroup} for the definition of coroot integers and Section \ref{sec:quotients} for the definition of a weighted projective space). This follows from the work of various authors \cite{BS79,FMW,Laszlo,Looijenga,NS,Ra}. Our proofs will be self-contained; in Proposition \ref{prop:weightedprojective} we will prove independently, and by elementary topological methods, that $\Rep(\Z^2,G)$ is homotopy equivalent to $\mathbb{CP}(\mathbf{n}^\vee)$.
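For concreteness, consider the rank one case $G=SU(2)$, where everything can be made explicit: the coroot integers are $\mathbf{n}^\vee=(1,1)$, the Weyl group $W\cong \Z/2$ acts on $T^2$ by simultaneous inversion, and
\[
\Rep(\Z^2,SU(2))\cong T^2/W
\]
is the classical `pillowcase', whose underlying space is homeomorphic to $\SS^2=\mathbb{CP}^1=\mathbb{CP}(1,1)$.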
The homology of weighted projective spaces was computed by Kawasaki \cite{K73} from which one obtains, for simple simply--connected $G$ of rank $r$, \[ H_p(\Rep(\Z^2,G);\Z)\cong \begin{cases} \Z & \textnormal{if }p\leqslant 2r \textnormal{ is even}, \\ 0 & \textnormal{otherwise.} \end{cases} \] Figure \ref{figure:e2page} depicts the spectral sequence in this situation. We will be interested only in the part of the spectral sequence that is relevant for homological degree two. \begin{figure}[b] \begin{tikzpicture}[scale =0.9] \small \clip (-0.5, -0.5) rectangle (6.5, 5); \draw[step=1.5, gray, very thin] (0,0) grid (7, 5); \draw[pattern=north east lines, pattern color=gray] (0,4.5) rectangle (7,6); \draw[pattern=north east lines, pattern color=gray] (1.5,3) rectangle (7,4.5); \draw[pattern=north east lines, pattern color=gray] (4.5,1.5) rectangle (7,3); \draw[pattern=north east lines, pattern color=gray] (6,0) rectangle (7,1.5); \draw[->, thick](0,0)--(6.5, 0); \draw[->, thick](0,0)--(0, 5); \draw (0.75, 0.75) node[anchor=center]{$\Z$}; \draw (2.25, 0.75) node[anchor=center]{$0$}; \draw (3.75, 0.75) node[anchor=center]{$\Z$}; \draw (5.25, 0.75) node[anchor=center]{$0$}; \draw (0.75, 2.25) node[anchor=center]{$E^2_{0,1}$}; \draw (2.25, 2.25) node[anchor=center]{$E^2_{1,1}$}; \draw (3.75, 2.25) node[anchor=center]{$E^2_{2,1}$}; \draw (0.75, 3.75) node[anchor=center]{$E^2_{0,2}$}; \foreach \x in {0,1, 2, 3} \draw (0.8+3*\x/2,0) node[anchor=north] {$\x$}; \foreach \y in {0,1, 2} \draw (0,0.8+3*\y/2) node[anchor=east] {$\y$}; \draw (2.05, 1.35) node[anchor=center]{$d_{2}$}; \draw[->] (3.3,0.9)-- (1.2, 2.1); \draw[->] (3.3,2.4)-- (1.2, 3.6); \draw[->] (4.8,0.9)-- (2.7, 2.1); \end{tikzpicture} \caption{A portion of the $E^2$-page of the Bredon spectral sequence in the case of commuting pairs in a simple, simply--connected, compact Lie group of rank $\geqslant 1$.} \label{figure:e2page} \end{figure} \subsection{First consequences} Going back to the general situation, we will prove that $E^2_{0,2}$ vanishes in certain cases. For this we will use the following lemma. \begin{lemma}\label{computation spectral sequence} Suppose that $G$ is a compact Lie group and $X$ is a path--connected $G$-CW complex that has a basepoint $x_{0}$ fixed by $G$. Assume that $\M$ is a covariant coefficient system such that $\M(G/G)=0$ and for every morphism $G/H\to G/K$ in the orbit category the induced map $\M(G/H)\to \M(G/K)$ is surjective. Then $H^{G}_{0}(X;\M)=0$. \end{lemma} \begin{proof} After passing to an appropriate subdivision we may assume without loss of generality that the $G$-CW complex structure has a $0$-dimensional cell corresponding to the basepoint $x_{0}$. As this basepoint is fixed by $G$ this cell must be of the form $G/G$. By definition $H^{G}_{p}(X;\M)=H_{p}(C_{*}(X;\M))$. Here the group $C_{i}(X;\M)$ of cellular $i$-chains is defined by \[ C_{i}(X;\M)=\bigoplus_{\sigma\in S_{i}(X)} \M(G/G_{\sigma}), \] where $S_{i}(X)$ is an indexing set of all $i$-dimensional $G$-cells of $X$, and $G_\sigma$ is the isotropy group of $\sigma\in S_i(X)$. To prove the lemma we are going to show that the differential \[ d_{1}\co \bigoplus_{\tau\in S_{1}(X)} \M(G/G_{\tau})\to \bigoplus_{\sigma\in S_{0}(X)} \M(G/G_{\sigma}) \] is surjective. Let us first recall the definition of $d_1$. Suppose that $\tau\in S_1(X)$ and let $f_\tau\co G/G_\tau\times \partial D^1\to X^{(0)}$ be the attaching map of $\tau$. Write $\partial D^1=\{0,1\}$. 
Suppose that $f_\tau$ restricts to $G$-maps $f_{\tau,0}\co G/G_{\tau}\times \{0\}\to G/G_{\sigma_0}\times D^0$ and $f_{\tau,1}\co G/G_{\tau}\times \{1\} \to G/G_{\sigma_1}\times D^0$ for $\sigma_i\in S_0(X)$, $i=0,1$. Given $y\in \mathcal{M}(G/G_\tau)$, we then have \[ d_1(y)=\mathcal{M}(f_{\tau,1})(y)-\mathcal{M}(f_{\tau,0})(y)\,. \] Now suppose that $\sigma\in S_{0}(X)$ and $z\in \M(G/G_{\sigma})$. We must show that $z\in \text{Im}(d_{1})$. As $X$ is assumed to be path--connected, we can find $\sigma_0,\dots,\sigma_n \in S_0(X)$ and $\tau_1,\dots,\tau_n\in S_1(X)$ such that the attaching map $f_{\tau_k}$ restricts to $f_{\tau_k,0}\co G/G_{\tau_k}\times \{0\}\to G/G_{\sigma_{k-1}}\times D^0$ and $f_{\tau_k,1}\co G/G_{\tau_k}\times \{1\}\to G/G_{\sigma_k}\times D^0$, and such that $\sigma_n=\sigma$ and $\sigma_0$ is the $G$-cell corresponding to the $G$-fixed basepoint $x_0$. By hypothesis $\mathcal{M}(f_{\tau_k,i})$ is surjective for every $k=1,\dots,n$ and $i=0,1$. Therefore, we find $y_{n}\in \M(G/G_{\tau_{n}})$ such that $\mathcal{M}(f_{\tau_n,1})(y_{n})=z_n:=z$, and $d_{1}(y_{n})=z_n-z_{n-1}$ for some $z_{n-1}\in \M(G/G_{\sigma_{n-1}})$. Continuing like this we find for every $k=1,\dots,n$ elements $y_{k}\in \M(G/G_{\tau_{k}})$ and $z_{k-1}\in \mathcal{M}(G/G_{\sigma_{k-1}})$ such that $\mathcal{M}(f_{\tau_k,1})(y_{k})=z_{k}$ and $d_{1}(y_{k})=z_{k}-z_{k-1}$. Since $\M(G/G_{\sigma_{0}})=\M(G/G)=0$, we have that $z_0=0$ and thus $d_1(y_1)=z_1$. Then \[ d_{1}(y_{n}+y_{n-1}+\cdots+ y_{1}) =(z_{n}-z_{n-1})+(z_{n-1}-z_{n-2})+\cdots +(z_{2}-z_{1})+z_1=z_{n}=z. \] This proves that $d_{1}$ is surjective. \end{proof} In addition, we need some information about the components of the isotropy groups of $\Hom(\Z^n,G)_{\BONE}$ under the conjugation action of $G$. Suppose that $\underline{x}=(x_1,\dots,x_n)\in \Hom(\Z^n,G)_{\BONE}$ is an $n$-tuple of commuting elements. Then the isotropy group of $\underline{x}$ is \[ G_{\underline{x}}=Z_G(\underline{x})\,, \] i.e., the centralizer of the subset $\{x_1,\dots,x_n\}\subseteq G$. The following fact is explained in \cite[Example 2.4]{AG1}. \begin{lemma} \label{lem:connectedisotropy} Let $G$ be $SU(m)$ or $Sp(m)$ for some $m\geqslant 1$, and let $n\geqslant 1$. Then for every $\underline{x}\in \Hom(\Z^n,G)_{\BONE}$ the centralizer $Z_G(\underline{x})$ is connected. \end{lemma} In general, $Z_{G}(\underline{x})$ may not be connected. For example, when $G=Spin(m)$ with $m\geqslant 7$ there exist pairs $(x,y)\in \Hom(\Z^2,G)$ such that $\pi_0(Z_G(x,y))\cong \Z/2$. The following result is a special case of \cite[Corollary 7.5.3]{BFM} and a proof will be given in Section \ref{sec:componentgroup}. \begin{lemma} \label{lem:componentsisotropy} Let $G$ be a simply--connected and simple compact Lie group. For every $(x,y)\in \Hom(\Z^2,G)_{\BONE}$ the group $\pi_0(Z_G(x,y))$ is a finite cyclic group (of order at most $6$). \end{lemma} We can now prove: \begin{proposition} \label{prop:vanishinge02} Suppose that either $n=2$, or that $n\geqslant 2$ and $G$ is $SU(m)$ or $Sp(m)$ for some $m\geqslant 1$. Then $E^2_{0,2}=H_0^G(\Hom(\Z^n,G)_{\BONE};\mathcal{H}_2)= 0$. \end{proposition} \begin{proof} For ease of notation we denote $\Hom(\Z^n,G)_{\BONE}$ simply by $X$. Let $H$ and $K$ be isotropy groups of $X$ and assume that $gHg^{-1}\leqslant K$ for some $g\in G$.
If we can show that the map \[ \H_2(G/H)=H_{2}(BH;\Z)\to H_{2}(BK;\Z)=\H_2(G/K) \] induced by conjugation and inclusion is surjective, then Lemma \ref{computation spectral sequence} finishes the proof, for $x_{0}=(1_{G},\dots, 1_{G})\in X$ is fixed by the conjugation action and $\H_2(G/G)=H_{2}(BG;\Z)=0$ since $BG$ is $3$-connected. Factoring the map through the isomorphism induced by $H\to gHg^{-1}$, it suffices to prove surjectivity in the case where $H\leqslant K$ and $H_2(BH;\Z)\to H_2(BK;\Z)$ is induced by the inclusion of $H$ into $K$. To see this we consider first the inclusion of the identity components $H_0\leqslant K_0$. There is a torus $T\leqslant H_0$ which is maximal for both $H_0$ and $K_0$ as both groups have maximal rank. By \cite[Theorem V 7.1]{BD} the maps $\pi_{1}(T)\to \pi_{1}(H_{0})$ and $\pi_{1}(T)\to \pi_{1}(K_{0})$ induced by the inclusions are both surjective. This in turn implies that the map $\pi_{1}(H_{0})\to \pi_{1}(K_{0})$ induced by the inclusion is surjective. From the natural isomorphisms \[ H_{2}(BH_{0};\Z)\cong \pi_{2}(BH_{0})\cong \pi_{1}(H_{0})\,, \] and the corresponding ones for $K_0$, we derive that $H_{2}(BH_{0};\Z) \to H_{2}(BK_{0};\Z)$ is surjective. Surjectivity of $H_2(BH;\Z)\to H_2(BK;\Z)$ follows then, because there is a commutative diagram with exact rows \[ \xymatrix{ H_2(BH_0;\Z)_{\pi_0(H)} \ar[r] \ar[d] & H_2(BH;\Z) \ar[d] \ar[r] & 0 \ar@{=}[d] \\ H_2(BK_0;\Z)_{\pi_0(K)} \ar[r] & H_2(BK;\Z) \ar[r] & 0 } \] resulting from the Serre exact sequence of the homotopy fiber sequence \[ BK_{0}\to BK\to B\pi_{0}(K)\,, \] and the corresponding one for $H$. In more detail, consider the Serre spectral sequence \[ \tilde{E}^{2}_{p,q}=H_{p}(\pi_{0}(K);H_{q}(BK_{0};\Z))\Longrightarrow H_{p+q}(BK;\Z)\,. \] As $BK_0$ is path--connected and simply--connected, $H_0(BK_0;\Z)\cong \Z$ is the trivial $\pi_0(K)$-module and $H_1(BK_0;\Z)= 0$. In particular, both $\tilde{E}^{2}_{1,1}$ and $\tilde{E}_{2,1}^2$ vanish. Now observe that $\pi_0(K)$ is a cyclic group; if $n=2$, then this is Lemma \ref{lem:componentsisotropy}, and if $G$ is either $SU(m)$ or $Sp(m)$, then $K$ is connected by Lemma \ref{lem:connectedisotropy}. As a consequence, we have that $H_2(\pi_0(K);\Z)= 0$ and, therefore, $\tilde{E}^{2}_{2,0}=0$. Furthermore, we can identify $\tilde{E}_{0,2}^{2}$ with the coinvariants $H_{2}(BK_{0};\Z)_{\pi_{0}(K)}$. The vanishing of $\tilde{E}_{2,1}^{2}$ implies that $\tilde{E}_{0,2}^{3}=\tilde{E}_{0,2}^{2}$. It follows that \[ \tilde{E}^{4}_{0,2}= H_{2}(BK_{0};\Z)_{\pi_{0}(K)}/\Im(d_{3}\co\tilde{E}_{3,0}^{3}\to \tilde{E}_{0,2}^{3})\,. \] For degree reasons, the differentials from or to $\tilde{E}^{r}_{0,2}$ are zero when $r\geqslant 4$, which implies that $\tilde{E}^{\infty}_{0,2}=\tilde{E}^{4}_{0,2}$. As a result, there is an exact sequence \[ 0\to \Im(d_{3}\co\tilde{E}_{3,0}^{3}\to \tilde{E}_{0,2}^{3}) \to H_{2}(BK_{0};\Z)_{\pi_{0}(K)} \to H_2(BK;\Z)\to 0\,, \] and a corresponding exact sequence for $H$. The diagram is obtained by naturality of the Serre spectral sequence. \end{proof} As a consequence of Proposition \ref{prop:vanishinge02} we obtain the following theorem, which was stated in the introduction in terms of $\pi_2$ rather than $H_2$ (the two statements being equivalent by the Hurewicz theorem). \begin{theorem}\label{thm:generalsecondhomology} Suppose that $G$ is $SU(m)$ or $Sp(m)$ with $m\geqslant 1$. 
For every $n\geqslant 1$ the quotient map $\pi\co \Hom(\Z^{n},G)\to \Rep(\Z^{n},G)$ induces an isomorphism \[ H_{2}(\Hom(\Z^{n},G);\Z)\cong H_{2}(\Rep(\Z^{n},G);\Z)\, . \] \end{theorem} \begin{proof} Let $X:=\Hom(\Z^{n},G)$ and consider the Bredon spectral sequence \[ E^{2}_{p,q}=H^{G}_{p}(X;\H_{q})\Longrightarrow H_{p+q}(EG\times_G X;\Z)\,. \] As pointed out before, the map in question is identified with the composite \[ H_2(EG\times_G X;\Z)\to E_{2,0}^\infty \hookrightarrow E_{2,0}^2\, . \] By Lemma \ref{lem:connectedisotropy}, $G$ acts on $X$ with connected isotropy groups, hence $E^{2}_{p,1}=H^G_{p}(X;\H_1)=0$ for all $p\geqslant 0$. This implies that $d_2\co E^2_{2,0}\to E^2_{0,1}$ is trivial, so that $E^{\infty}_{2,0}\cong E^{2}_{2,0}$. Now $E^2_{0,2}$ vanishes by Proposition \ref{prop:vanishinge02}, whence the only possibly non-zero term of total degree two on the $E^\infty$-page is $E^\infty_{2,0}$. Consequently, the projection $H_2(EG\times_G X;\Z)\to E^{\infty}_{2,0}$ is an isomorphism. \end{proof} As a result, the computation of $\pi_2(\Hom(\Z^n,G))$ for $G=SU(m)$ or $G=Sp(m)$ is reduced to the calculation of $H_2(\Rep(\Z^n,G);\Z)$. This will be addressed in Section \ref{sec:sumspm}. \medskip For the case of commuting pairs we record an intermediate result in the following lemma. \begin{lemma} \label{lem:pretheorem} Let $G$ be a simply--connected and simple compact Lie group. Then \[ \pi_{2}(\Hom(\Z^{2},G))\cong \Z\oplus E^2_{1,1}\,, \] where $E^2_{1,1}=H_1^G(\Hom(\Z^2,G);\mathcal{H}_1)$ is a finite group. Moreover, the image of \[ \pi_\ast\co \pi_2(\Hom(\Z^2,G))\to \pi_2(\Rep(\Z^2,G)) \] is a finite index subgroup $d\Z\subseteq \Z$, where $d$ equals the order of $E^2_{0,1}=H_0^G(\Hom(\Z^2,G);\mathcal{H}_1)$. \end{lemma} \begin{proof} In the Bredon spectral sequence for $\Hom(\Z^2,G)$ we have that $E^2_{2,0}\cong \Z$ (see Figure \ref{figure:e2page}), and $E^{2}_{0,2}=0$ by Proposition \ref{prop:vanishinge02}. Because $EG\times_G \Hom(\Z^2,G)$ is simply--connected, we must have $E^\infty_{0,1}= 0$. The only way this can happen is by $d_2\co E^2_{2,0}\to E^2_{0,1}$ being surjective. As all isotropy groups of $\Hom(\Z^2,G)$ are compact, the coefficient system $\mathcal{H}_1$ takes values in finite abelian groups. Since $\Hom(\Z^2,G)$ is compact, $E^2_{0,1}=H_0^G(\Hom(\Z^2,G);\mathcal{H}_1)$ is finitely generated. Together this implies that $E^2_{0,1}$ is finite. Therefore, the subgroup $E^\infty_{2,0}\leqslant E^2_{2,0}$ is $\ker(d_2)\cong d\Z$, where $d$ is the order of the cyclic group $E^2_{0,1}$. Finally, $E^{\infty}_{1,1}\cong E^2_{1,1}$ because $E^2_{3,0}=0$. Since $E^{\infty}_{2,0}=\ker(d_2)\cong \Z$ is free, we have that $\pi_2(\Hom(\Z^2,G))\cong E^\infty_{2,0}\oplus E^{2}_{1,1}$. \end{proof} The groups $H_p^G(\Hom(\Z^2,G);\H_1)$, $p\geqslant 0$, will be calculated in Section \ref{sec:pairs}. \section{Commuting $n$-tuples in $SU(m)$ and $Sp(k)$} \label{sec:sumspm} In this section we focus on the cases $G=SU(m)$ and $G=Sp(k)$. Since $\Hom(\Z^n,G)$ and $\Rep(\Z^n,G)$ are path--connected for all $n\geqslant 1$, we will omit the subscript $\mathds{1}$ throughout. \begin{theorem}\label{thm:casesymplecticandunitary} Let $n\geqslant 1$.
Then \begin{itemize} \item[(i)] for every $m\geqslant 3$ there is an isomorphism \[ \pi_{2}(\Hom(\Z^{n},SU(m))) \cong \Z^{\binom{n}{2}}\,, \] and the standard inclusion $SU(m)\to SU(m+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^n,SU(m))) \xrightarrow{\cong} \pi_2(\Hom(\Z^n,SU(m+1)))\,, \] \item[(ii)] for every $k\geqslant 1$ there is an isomorphism \[ \pi_2(\Hom(\Z^n,Sp(k))) \cong \Z^{\binom{n}{2}}\oplus (\Z/2)^{2^{n}-1-n-\binom{n}{2}}\,, \] and the standard inclusion $Sp(k)\to Sp(k+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^n,Sp(k)))\xrightarrow{\cong} \pi_2(\Hom(\Z^n,Sp(k+1)))\, . \] \end{itemize} \end{theorem} \begin{proof} Let us first consider the quaternionic unitary groups $Sp(k)$ with $k\geqslant 1$. Because of Theorem \ref{thm:generalsecondhomology} it suffices to calculate $H_{2}(\Rep(\Z^{n},Sp(k));\Z)$. By \cite[Proposition 6.3]{ACG} there is a homeomorphism $\Rep(\Z^{n},Sp(k))\cong SP^{k}((\SS^{1})^{n}/\Z/2)$, where $SP^k$ denotes the $k$-th symmetric power. Here $\Z/2$ acts by complex conjugation on $\SS^{1}\subseteq \C$ and diagonally on $(\SS^{1})^{n}$. Under this identification the inclusion $Sp(k)\to Sp(k+1)$ given by block sum with a $1\times 1$ identity matrix induces the natural inclusion $SP^{k}((\SS^{1})^{n}/\Z/2)\to SP^{k+1}((\SS^{1})^{n}/\Z/2)$. For the space $(\SS^{1})^{n}/\Z/2$ we know by \cite[Proposition 6.9]{ACG} that \[ H_{1}((\SS^{1})^{n}/\Z/2;\Z)=0 \ \text{ and } \ H_{2}((\SS^{1})^{n}/\Z/2;\Z)\cong \Z^{\binom{n}{2}}\oplus (\Z/2)^{2^{n}-1-n-\binom{n}{2}}\,. \] This, together with the theorem of Dold and Thom, implies that $SP^{\infty}((\SS^{1})^{n}/\Z/2)$ is a simply--connected space, and \[ H_{2}(SP^{\infty}((\SS^{1})^{n}/\Z/2);\Z)\cong \pi_{2}(SP^{\infty}((\SS^{1})^{n}/\Z/2)) \cong \Z^{\binom{n}{2}}\oplus (\Z/2)^{2^{n}-1-n-\binom{n}{2}}. \] By Steenrod's splitting \cite[Section 22]{Steenrod} the map $SP^{k}((\SS^{1})^{n}/\Z/2)\to SP^{k+1}((\SS^{1})^{n}/\Z/2)$ is split injective in homology for every $k\geqslant 1$. As the groups $H_{2}(SP^{k}((\SS^{1})^{n}/\Z/2);\Z)$ for $k=1$ and $k=\infty$ agree, each of the maps \begin{equation*} H_{2}(SP^{k}((\SS^{1})^{n}/\Z/2);\Z)\to H_{2}(SP^{k+1}((\SS^{1})^{n}/\Z/2);\Z) \end{equation*} must be an isomorphism. This proves part (ii) of the theorem. Consider now the special unitary group $SU(m)$. By \cite[Proposition 4.4]{LR15} the space $\Rep(\Z^n,SU(m))$ is homotopy equivalent to the universal cover of $\Rep(\Z^n,U(m))$. There is a homeomorphism $\Rep(\Z^n,U(m))\cong SP^m((\mathbb{S}^1)^n)$, so that we obtain \[ \pi_2(\Rep(\Z^n,SU(m)))\cong \pi_2(SP^m((\mathbb{S}^1)^n))\, . \] By the Dold--Thom theorem, there is an isomorphism \[ \pi_2(SP^\infty((\SS^1)^n))\cong H_2((\SS^1)^n;\Z)\cong \Z^{\binom{n}{2}}\, . \] To prove part (i) of the theorem it suffices to show that for every $m\geqslant 3$ the natural map $SP^m((\SS^1)^n)\to SP^{\infty}((\SS^1)^n)$ induces an isomorphism of second homotopy groups. But this follows at once from the unpublished preprint \cite[Section 8]{Tripathyd}, where it is shown that for a connected based CW complex $X$ and every $k\geqslant 0$ the map $\pi_k(SP^m(X))\to \pi_k(SP^{m+1}(X))$ is an isomorphism whenever $m>k$.\footnote{For the particular case $X=(\mathbb{S}^1)^n$ this stabilization result can also be proved directly by constructing a suitable CW-structure on $SP^m((\mathbb{S}^1)^n)$.
However, to avoid digression, we chose not to include a proof of this fact.} \end{proof} \begin{remark} \label{rem:symplecticandunitary} As $SU(2)\cong Sp(1)$, the case of $SU(2)$ is also covered by Theorem \ref{thm:casesymplecticandunitary}. In the case of commuting pairs the inclusion $SU(2)\hookrightarrow SU(3)$ induces an isomorphism $\pi_2(\Hom(\Z^2,SU(2)))\cong \pi_2(\Hom(\Z^2,SU(3)))$. This follows from Theorem \ref{thm:generalsecondhomology} and the fact that the induced map of representation spaces $\Rep(\Z^2,SU(2))\to \Rep(\Z^2,SU(3))$ can be identified with the standard inclusion $\mathbb{CP}^1\subseteq \mathbb{CP}^2$ as shown in \cite[Lemma 5.4]{LR15}. \end{remark} \section{Commuting pairs in a simple and simply--connected group} \label{sec:pairs} Throughout this section $G$ will denote a simply--connected and simple compact Lie group of rank $r\geqslant 1$, unless stated otherwise. In this case $\Hom(\Z^2,G)$ is path--connected and simply--connected, hence we will drop the subscript $\mathds{1}$ from the notation. One objective of this section is to prove Theorem \ref{thm:mainpairs}, which asserts that $\pi_2(\Hom(\Z^2,G))\cong \Z$ and that, on $\pi_2$, the map $\Hom(\Z^2,G)\to \Rep(\Z^2,G)$ induces multiplication by the Dynkin index of $G$. In view of Lemma \ref{lem:pretheorem} the proof amounts to a calculation of $H_p^G(\Hom(\Z^2,G);\mathcal{H}_1)$ for $p=0,1$. Note that, by Lemma \ref{lem:componentsisotropy}, $\pi_0(H)$ is abelian as $H$ ranges over the isotropy groups of $\Hom(\Z^2,G)$. Thus, \[ \H_1(G/Z_G(x,y))=H_1(BZ_G(x,y);\Z)\cong \pi_0(Z_G(x,y))\,, \] for all $(x,y)\in \Hom(\Z^2,G)$. This justifies introducing the coefficient system \[ \underline{\pi}_0\co G/Z_G(x,y)\mapsto \pi_0(Z_G(x,y))\, , \] which we will use from now on instead of $\H_1$. \subsection{The component group $\pi_0(Z_G(x,y))$} \label{sec:componentgroup} In order to calculate the homology groups $H_\ast^G(\Hom(\Z^2,G);\underline{\pi}_0)$ we must acquire a good understanding of the coefficient system $\underline{\pi}_0$. Thus, our first goal is to describe the group of components of the centralizer $Z_G(x,y)$ of a pair $(x,y)\in \Hom(\Z^2,G)$. The description will be in terms of the fundamental group of the derived group $DZ_G(x)$ (Lemma \ref{lem:pi0inpi1}). This description can be found in \cite{BFM}, but since the proofs may not be easy to find we include them here for convenience. We begin by explaining some of our notation regarding root systems and the fundamental alcove. Let $\g_{\C}$ be the complexified Lie algebra of $G$. As a Cartan subalgebra of $\g_{\C}$ we choose $\t_{\C}$, the complexification of the Lie algebra $\t$ of the maximal torus $T$. We will work with the real roots of the root system associated with $(\g_{\C},\t_{\C})$. Thus the roots are $\R$-valued functionals $\alpha\co \t \to \R$ such that the weight space \[ \g_{\alpha}=\{Z\in \g_{\C}~|~ [H,Z]=2\pi i\alpha(H)Z \text{ for all } H\in \t_{\C}\} \] is non-trivial. Choose a $W$-invariant inner product $\langle\cdot,\cdot\rangle$ on $\t$. For each root $\alpha\in \t^\ast$ we can find a unique element $h_{\alpha}\in \t$ such that $\alpha(H)=\langle H,h_{\alpha}\rangle$ for every $H\in \t$. Define $\alpha^{\vee}=2h_{\alpha} /\langle h_{\alpha},h_{\alpha}\rangle \in \t$. The element $\alpha^{\vee}$ is called the coroot associated to the root $\alpha$ and is independent of the choice of inner product $\langle\cdot,\cdot\rangle$.
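To illustrate the normalization, consider $G=SU(2)$ with $T$ the diagonal maximal torus, so that $\t=\{H_a:=\textnormal{diag}(ia,-ia)\mid a\in \R\}$. For the elementary matrix $E_{12}\in \g_\C=\mathfrak{sl}_2(\C)$ one computes \[ [H_a,E_{12}]=2ia\,E_{12}=2\pi i\,(a/\pi)\,E_{12}\,, \] so in this convention the unique positive root is $\alpha(H_a)=a/\pi$. As $\alpha(\alpha^\vee)=\langle \alpha^\vee,h_\alpha\rangle=2$ by construction, the associated coroot is $\alpha^\vee=\textnormal{diag}(2\pi i,-2\pi i)$, which is at the same time a generator of $\ker(\exp\co \t\to T)$.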
Let $\Delta=\{\alpha_{1},\dots,\alpha_{r}\}$ be a fixed set of simple roots for $(\g_\C,\t_\C)$ and $\alpha_{0}$ the lowest root. We can write $-\alpha_{0}^{\vee}$ in the form \[ -\alpha_{0}^{\vee}=n_{1}^{\vee}\alpha_{1}^{\vee}+n_{2}^{\vee}\alpha_{2}^{\vee} +\cdots+n_{r}^{\vee}\alpha_{r}^{\vee} \] for unique integers $n_{1}^{\vee},n_{2}^{\vee},\dots,n_{r}^{\vee}\geqslant 1$ which we call the \emph{coroot integers} of $G$. It will be convenient to set $n_{0}^{\vee}:=1$. The coroot integers $n_0^\vee,\dots,n_r^\vee$ can be found, for instance, in the appendix of \cite{BFM} and they are also listed in Table \ref{table:coroot}. Within $\t$ we find the coroot lattice $Q^\vee$ defined as the $\Z$-span of $\{\alpha_1^\vee,\dots,\alpha_r^\vee\}\subseteq \t$, and the integral lattice $\Lambda:=\ker(\exp\co \t\to T)$ which contains $Q^\vee$. In general, these two lattices determine the fundamental group of $G$ by the well-known formula \[ \pi_1(G)\cong \Lambda/Q^\vee\,, \] see \cite[IX \S 4.9 Theorem 2(b)]{Bourbaki}. Since we assume that $G$ is simply--connected, we have that $\Lambda=Q^\vee$. Let $A=A(\Delta)\subseteq \t$ be the closed alcove that is contained in the closed Weyl chamber determined by $\Delta$ and contains $0\in \t$. The alcove $A$ is an $r$-dimensional simplex supported by the hyperplanes $\{\alpha_j=0\}$ for $1\leqslant j\leqslant r$ and the affine hyperplane $\{\alpha_{0}=-1\}$. (If $G$ were semisimple, then $A$ would be the product of the alcoves of its simple factors.) As explained in \cite[IX \S 5.2 Corollary 1]{Bourbaki}, since $G$ is compact and simply--connected, the exponential map induces a homeomorphism \[ A\xrightarrow{\cong} T/W\, . \] In particular, every $x\in G$ is conjugate to an element of the form $\exp(\tilde{x})$ for a uniquely determined $\tilde{x}\in A$. Thus, for the purpose of describing $Z_G(x)$ we may assume that $x=\exp(\tilde{x})$ for some $\tilde{x}\in A$. In this case we choose $T\leqslant Z_G(x)$ as a maximal torus for $Z_G(x)$. Let $\tilde{\Delta}:=\Delta\cup \{\alpha_0\}$ be the extended set of simple roots. Abusing notation slightly, we will sometimes regard $\tilde{\Delta}$ simply as the set of indices $\{0,1,\dots,r\}$ of the extended set of simple roots. Let $\tilde{\Delta}(x)\subseteq \tilde{\Delta}$ be the proper subset defined by \begin{equation} \label{eq:deltax} \tilde{\Delta}(x):=\{ \alpha\in\tilde{\Delta}\mid \tilde{x} \textnormal{ lies in the wall of }A\textnormal{ determined by }\alpha\}\, . \end{equation} Let $Q^\vee(x)\leqslant Q^\vee$ be the sublattice of the coroot lattice spanned by $\{\alpha^\vee_i \mid i \in \tilde{\Delta}(x)\}$. Then $\tilde{\Delta}(x)$ is a system of simple roots for $Z_G(x)$ relative to $T$, and $Q^\vee(x)$ is the corresponding coroot lattice, see for example \cite[V.2 Proposition 2.3(ii)]{BD}. As $Z_G(x)$ has the same integral lattice as $G$ we have \begin{equation} \label{eq:pi1zgx} \pi_1(Z_G(x))\cong \Lambda/ Q^\vee(x)\cong Q^\vee/Q^\vee(x)\, . \end{equation} The following lemma is a combination of Corollary 3.1.3 and Proposition 7.6.1 of \cite{BFM}. A proof of the first part may be found in \cite[Theorem 1]{KS}. \begin{lemma} \label{lem:pi1derived} The fundamental group of $DZ_G(x)$ is isomorphic to the torsion subgroup of $Q^\vee/Q^\vee(x)$ and is a cyclic group of order \[ n^\vee(x):=\textnormal{gcd}\{n_i^\vee\mid i\in \tilde{\Delta}\backslash \tilde{\Delta}(x)\}\,.
\] A representative in $Q^\vee$ for the generator of $\pi_1(DZ_G(x))$ is \[ \zeta(x):= \frac{1}{n^\vee(x)} \sum_{i\in\tilde{\Delta}\backslash \tilde{\Delta}(x)} n^\vee_i \alpha_i^\vee \, . \] Moreover, if $x'=\exp(\tilde{x}')$ is such that $\tilde{x}'\in A$ and $Z_G(x)\leqslant Z_G(x')$, then the inclusion $Z_G(x)\to Z_G(x')$ induces an injection $\pi_1(DZ_G(x))\to \pi_1(DZ_G(x'))$ sending \[ \zeta(x)\mapsto \frac{n^\vee(x')}{n^\vee(x)} \zeta(x')\,. \] \end{lemma} \begin{proof} By \cite[IX \S 4.6 Corollary 3]{Bourbaki} the inclusion $DZ_G(x)\to Z_G(x)$ induces an isomorphism of $\pi_1(DZ_G(x))$ onto the torsion subgroup of $\pi_1(Z_G(x))\cong Q^\vee/Q^\vee(x)$. Writing the coroot lattice as $Q^\vee=(\bigoplus_{i\in\tilde{\Delta}}\Z\langle \alpha_i^\vee\rangle)/\Z\langle \sum_{i\in\tilde{\Delta}}n_i^\vee\alpha_i^\vee\rangle$, we may write $Q^\vee/Q^\vee(x)$ in the form \[ Q^\vee/Q^\vee(x)\cong \left(\bigoplus_{i\in \tilde{\Delta}\backslash \tilde{\Delta}(x)}\Z\langle \alpha_i^\vee\rangle \right) / \Z\langle\sum_{i\in\tilde{\Delta} \backslash\tilde{\Delta}(x)}n_i^\vee \alpha^\vee_i\rangle = \left(\bigoplus_{i\in \tilde{\Delta}\backslash \tilde{\Delta}(x)}\Z\langle \alpha_i^\vee\rangle \right) / \Z\langle n^\vee(x)\zeta(x)\rangle\, . \] Since $\zeta(x)$ is a linear combination of $\{\alpha^\vee_i \mid i\in\tilde{\Delta}\backslash\tilde{\Delta}(x)\}$ with coprime coefficients, it can be completed to a basis of $\bigoplus_{i\in\tilde{\Delta}\backslash\tilde{\Delta}(x)}\Z\langle \alpha_i^\vee\rangle$. In this basis it becomes obvious that $Q^\vee/Q^\vee(x)\cong \Z^{|\tilde{\Delta}\backslash\tilde{\Delta}(x)|-1}\oplus \Z/n^\vee(x)$, where $\Z/n^\vee(x)$ is generated by the image of $\zeta(x)$. For the second part notice that $Z_G(x)\leqslant Z_G(x')$ implies that every wall of $A$ containing $\tilde{x}$ also contains $\tilde{x}'$, hence $\tilde{\Delta}(x)\subseteq \tilde{\Delta}(x')$. Therefore, $Q^\vee(x)\leqslant Q^\vee(x')$ and by naturality of the isomorphism (\ref{eq:pi1zgx}) the map $\pi_1(Z_G(x))\to \pi_1(Z_G(x'))$ corresponds to the projection $ Q^\vee/Q^\vee(x)\to Q^\vee/Q^\vee(x')$. On the torsion subgroups this projection maps \[ \zeta(x)\mapsto \frac{1}{n^\vee(x)}\sum_{i\in\tilde{\Delta} \backslash\tilde{\Delta}(x')}n_i^\vee\alpha_i^\vee=\frac{n^\vee(x')}{n^\vee(x)}\zeta(x')\,, \] which is an injection $\Z/n^\vee(x)\to \Z/n^\vee(x')$. \end{proof} Let $(x,y)\in \Hom(\Z^2,G)$ and let $\tilde{y}$ denote a lift of $y\in Z_G(x)$ in the universal covering group $\widetilde{Z_G(x)}$. The component group $\pi_0(Z_G(x,y))$ may be described as follows. \begin{lemma} \label{lem:pi0inpi1} For $z\in Z_G(x,y)$ let $[z]\in \pi_0(Z_G(x,y))$ denote the path component determined by $z$. Let $\tilde{z}$ be any lift of $z$ in $\widetilde{Z_G(x)}$. Then the map \[ \delta_y \co \pi_0(Z_G(x,y)) \to \pi_1(DZ_G(x)) \leqslant Z(\widetilde{DZ_G(x)}) \] defined by $\delta_y([z])=[\tilde{y},\tilde{z}]$ is an injective group homomorphism. \end{lemma} \begin{proof} We follow \cite[Section 7.3]{BFM}. Consider the universal covering sequence \[ 1\to Q^\vee/Q^\vee(x) \to \widetilde{Z_G(x)}\to Z_G(x)\to 1\, . \] The sequence is acted upon by the cyclic group $\langle \tilde{y}\rangle$ through conjugation by $\tilde{y}$ on $\widetilde{Z_G(x)}$ and by $y$ on $Z_G(x)$, leaving invariant the central subgroup $Q^\vee/Q^\vee(x)$. 
Passing to fixed points and noting that $Z_G(x)^{\langle \tilde{y}\rangle}\cong Z_G(x,y)$ yields the exact sequence \[ 1\to Q^\vee/Q^\vee(x)\to \widetilde{Z_G(x)}^{\langle\tilde{y}\rangle} \to Z_G(x,y)\xrightarrow{\delta} Q^\vee/Q^\vee(x)\,, \] where the connecting homomorphism $\delta$ is defined by $\delta(z)=\tilde{z}^{\tilde{y}}\tilde{z}^{-1}=[\tilde{y},\tilde{z}]$. Now $\widetilde{Z_G(x)}^{\langle\tilde{y}\rangle}$ is connected, by \cite[IX \S 5.3 Corollary 1]{Bourbaki}, because it is the centralizer of $\tilde{y}$ in the simply--connected group $\widetilde{Z_G(x)}$. Therefore, $\widetilde{Z_G(x)}^{\langle\tilde{y}\rangle}$ maps to the identity component of $Z_G(x,y)$. Since $Q^\vee/Q^\vee(x)$ is discrete, and by exactness, $\delta$ descends to an injective map $\delta_y\co \pi_0(Z_G(x,y))\to Q^\vee/Q^\vee(x)$. To finish the proof, note that $\pi_0(Z_G(x,y))$ is finite, since $Z_G(x,y)$ is compact, so $\delta_y$ factors through the torsion subgroup of $Q^\vee/Q^\vee(x)$. By Lemma \ref{lem:pi1derived}, the latter is identified with $\pi_1(DZ_G(x))$. \end{proof} Lemma \ref{lem:componentsisotropy} is now immediate: \begin{proof}[Proof of Lemma \ref{lem:componentsisotropy}] Let $(x,y)\in \Hom(\Z^2,G)$. By Lemmas \ref{lem:pi1derived} and \ref{lem:pi0inpi1}, $\pi_0(Z_G(x,y))$ is a subgroup of a cyclic group of order $n^\vee(x)$. A look at the coroot diagrams in the appendix of \cite{BFM} (or at Table \ref{table:coroot}) shows that $1\leqslant n^\vee(x) \leqslant 6$. But then $\pi_0(Z_G(x,y))$ must also be cyclic, and of order at most six. \end{proof} For later use we record a further consequence of the preceding lemmas, a special case of \cite[Corollary 7.6.2]{BFM}. \begin{lemma} \label{lem:pi0injective} Let $x=\exp(\tilde{x})$ and $x'=\exp(\tilde{x}')$ for some $\tilde{x},\tilde{x}'\in A$, and let $y,y'\in T$. Suppose that $Z_G(x)\leqslant Z_G(x')$ and $Z_G(x,y)\leqslant Z_G(x',y')$. Then the map $\pi_0(Z_G(x,y))\to \pi_0(Z_G(x',y'))$ induced by the inclusion is injective. \end{lemma} \begin{proof} By naturality of the connecting homomorphism, there is a commutative diagram \[ \xymatrix{ \pi_0(Z_G(x,y)) \ar[r]^-{\delta_y} \ar[d] & \pi_1(DZ_G(x)) \ar[d] \\ \pi_0(Z_G(x',y')) \ar[r]^-{\delta_{y'}} & \pi_1(DZ_G(x')) } \] in which the left hand vertical map is induced by the inclusion $Z_G(x,y)\leqslant Z_G(x',y')$, and the one on the right is induced by the inclusion $Z_G(x)\to Z_G(x')$. The assertion follows, because $\delta_y$ is injective by Lemma \ref{lem:pi0inpi1}, and $\pi_1(DZ_G(x))\to \pi_1(DZ_G(x'))$ is injective by Lemma \ref{lem:pi1derived}. \end{proof} \subsection{Equivariant cell structure} \label{sec:equivariant} We will now describe $\Hom(\Z^2,G)$ as a $G$-equivariant CW-complex. This will enable us to compute the $p$-localization of $H_\ast^G(\Hom(\Z^2,G);\underline{\pi}_0)$ as the homology of a certain $G$-subcomplex of $\Hom(\Z^2,G)$, see Corollary \ref{cor:decomposition}. The $G$-CW-structure on $\Hom(\Z^2,G)$ is obtained from the simplicial structure of the Weyl alcove $A$ as follows. Recall that $\Rep(\Z^2,G)\cong T^2/W$ and $T/W\cong A$. Let \[ p_i\co \Hom(\Z^2,G)\to A\,,\quad i=1,2 \] be the composition of the quotient map $\pi\co \Hom(\Z^2,G)\to \Rep(\Z^2,G)$ and the projection onto the $i$-th component $T^2/W \to A$. Let $\mathscr{F}_n$, $n=0,\dots,r$, denote the set of $n$-dimensional faces of $A$. $A$ has a standard CW-structure whose set of $n$-cells is $\mathscr{F}_n$. Let $(A\times A)^{(n)}$ be the $n$-skeleton of $A\times A$ in the product CW-structure. 
Define \[ \Hom(\Z^2,G)^{(n)}:=(p_1\times p_2)^{-1}((A\times A)^{(n)})\, . \] This defines an increasing sequence of $G$-spaces \begin{equation} \label{eq:gcw} \Hom(\Z^2,G)^{(0)} \subseteq \Hom(\Z^2,G)^{(1)}\subseteq \cdots \subseteq \Hom(\Z^2,G)^{(2r)}=\Hom(\Z^2,G) \end{equation} such that $\Hom(\Z^2,G)^{(n)}$ is obtained from $\Hom(\Z^2,G)^{(n-1)}$ by attaching a set of equivariant $n$-cells, as we now explain. \begin{notation*} To simplify the notation, we identify $A$ with a subset of $T$ without making the exponential map explicit. For a face $\sigma\subseteq A$ we let $b(\sigma)\in \sigma$ denote its barycenter. Since the centralizer $Z_G(x)$ of some $x\in G$ equals the stabilizer of $x$ under the conjugation action of $G$ on itself, we may write $G_x$ instead of $Z_G(x)$. Moreover, since a face $\sigma\subseteq A$ is pointwise fixed if and only if $b(\sigma)$ is fixed, we shall write $G_{\sigma}$ for $G_{b(\sigma)}$. Similarly, we write \begin{flushleft} \def\arraystretch{1.2} \begin{tabular}{rl} $W_{\sigma}$ & for the isotropy group of $b(\sigma)\in T$ under the action of $W$, \\ $G_{(x,y)}$ & for the centralizer $Z_G(x,y)$, \\ $G_{(\sigma,w\tau)}$ & for $G_{(b(\sigma),wb(\tau))}$ where $w\in W$ and $\sigma,\tau$ are faces of $A$. \end{tabular} \end{flushleft} \end{notation*} For each pair of faces $\sigma,\tau$ of $A$, let $\mathscr{C}(\sigma,\tau)$ denote a complete set of representatives for the double cosets $W_{\sigma}\backslash W /W_{\tau}$. For $n\geqslant 0$ the indexing set $J_n$ of the $G$-$n$-cells is \[ J_n=\bigsqcup_{\substack{(\sigma,\tau)\in \mathscr{F}_i\times \mathscr{F}_j \\ i+j=n}} \mathscr{C}(\sigma,\tau)\,. \] The $G$-$n$-cells $\{e^n_\alpha\mid \alpha \in J_n\}$ are built from the faces of the alcove $A$ in the following fashion. Given $\alpha=(\sigma,\tau,w)\in J_n$, the closed $G$-$n$-cell $e_\alpha^n\subseteq \Hom(\Z^2,G)$ is of the form $e^n_{\alpha}=\phi^{n}_\alpha(G/G_{(\sigma,w \tau)}\times \sigma\times \tau)$ where the characteristic map $\phi_\alpha^n$ is given by \begin{alignat*}{2} && \phi_\alpha^n\co G/G_{(\sigma,w\tau)}\times \sigma\times \tau & \to \Hom(\Z^2,G) \\ && (gG_{(\sigma,w\tau)},x,y) & \mapsto (x,wy)^g\,. \end{alignat*} Here the superscript $g$ indicates simultaneous conjugation by $g$. In the definition of $\phi^n_\alpha$ it must be checked that the right hand side is independent of the choice of representative for the coset $gG_{(\sigma,w\tau)}$. To see this notice that $G_\sigma\leqslant G_x$ and $G_{w\tau}=G_\tau^{\tilde{w}}\leqslant G_y^{\tilde{w}}=G_{wy}$ where $\tilde{w}\in N_G(T)$ is a lift of $w\in W$. Therefore, \begin{equation} \label{eq:isotropyboundary} G_{(\sigma,w\tau)}=G_\sigma \cap G_{w\tau}\leqslant G_x\cap G_{wy}=G_{(x,wy)}\,, \end{equation} showing that $\phi^n_\alpha$ is well defined. \begin{lemma} \label{lem:cw} The filtration (\ref{eq:gcw}) is a $G$-CW-structure on $\Hom(\Z^2,G)$ whose set of $G$-$n$-cells is $\{e_\alpha^n\mid \alpha\in J_n\}$. \end{lemma} \begin{proof} Set $\Hom(\Z^2,G)^{(-1)}:=\emptyset$ and assume $n\geqslant 0$. Let $P$ denote the pushout of \[ \bigsqcup_{(\sigma,\tau,w)\in J_n} G/G_{(\sigma,w\tau)}\times\sigma\times\tau \longleftarrow \bigsqcup_{(\sigma,\tau,w)\in J_n} G/G_{(\sigma,w\tau)}\times \partial(\sigma\times \tau) \xrightarrow{\sqcup_{J_n}f^n_\alpha} \Hom(\Z^2,G)^{(n-1)} \] where the attaching maps $\{f^n_\alpha \mid \alpha\in J_n\}$ arise as the restriction of the characteristic maps $\{\phi^n_\alpha \mid \alpha\in J_n\}$ to the boundary of the complex $\sigma\times \tau$. 
We must show that the map \[ h\co P\to \Hom(\Z^2,G)^{(n)} \] induced by the characteristic maps and the inclusion $\Hom(\Z^2,G)^{(n-1)}\hookrightarrow \Hom(\Z^2,G)^{(n)}$ is a homeomorphism. In fact, as $P$ is compact and $\Hom(\Z^2,G)^{(n)}$ is Hausdorff, it is enough to show that $h$ is a bijection. Clearly, the image of $h$ contains $\Hom(\Z^2,G)^{(n-1)}\subseteq \Hom(\Z^2,G)^{(n)}$. Now suppose that $(z_1,z_2)\in \Hom(\Z^2,G)^{(n)}\backslash \Hom(\Z^2,G)^{(n-1)}$. Since $G$ is assumed simply--connected, there exists $g\in G$ such that $(z_1,z_2)=(x,w'y)^g$ for some $w'\in W$ and uniquely determined $x,y\in A$. Furthermore, there are unique faces $\sigma\in\mathscr{F}_i$ and $\tau\in \mathscr{F}_{n-i}$ such that $(x,y)$ is in the relative interior of $\sigma\times \tau$. Now suppose that $w\in \mathscr{C}(\sigma,\tau)$ represents the double coset determined by $w'$. Then $w'=awb$ for $a\in W_{\sigma}$ and $b\in W_{\tau}$. Thus, $(x,w'y)=(x,wy)^{\tilde{a}}$ where $\tilde{a}\in N_G(T)$ is a lift of $a$. Now $\phi^n_{(\sigma,\tau,w)}\co (g\tilde{a}G_{(\sigma,w\tau)},x,y)\mapsto (z_1,z_2)$, showing that $h$ is surjective. Now suppose that $p,p'\in P$ and $h(p)=h(p')$. If either $p$ or $p'$ is represented by an element of $\Hom(\Z^2,G)^{(n-1)}$ then so is the other. It follows that $p=p'$ as $\Hom(\Z^2,G)^{(n-1)}\hookrightarrow \Hom(\Z^2,G)^{(n)}$ is injective. If neither $p$ nor $p'$ lifts to $\Hom(\Z^2,G)^{(n-1)}$, then $h(p)$ and $h(p')$ each lie in the image of a characteristic map. This means that there are $(\sigma,\tau,w),(\sigma',\tau',w')\in J_n$ and $g,g'\in G$ such that $h(p)=(x,wy)^g$ and $h(p')=(x',w'y')^{g'}$ for some $(x,y)\in\textnormal{int}(\sigma\times \tau)$ and $(x',y')\in\textnormal{int}(\sigma'\times \tau')$ (where $\textnormal{int}$ denotes the relative interior of a cell). Then \[ (x,wy)\equiv (x',w'y') \textnormal{ modulo }W\,, \] which implies that $x=x'$ and $y=y'$ (by projecting to $A\times A$), and further that $\sigma=\sigma'$ and $\tau=\tau'$ as every point of $A\times A$ lies in the relative interior of a unique cell. Let $w''\in W$ be such that $(x,wy)=(w''x,w''w'y)$. Then $w''\in W_{\sigma}$ and $w^{-1}w''w'\in W_{\tau}$. This implies that $w\in \mathscr{C}(\sigma,\tau)$ and $w'\in \mathscr{C}(\sigma,\tau)$ represent the same double coset, hence $w=w'$. Finally, $(x,wy)^g=(x,wy)^{g'}$ implies that $g\equiv g'$ modulo $G_{(\sigma,w\tau)}$, and therefore $p=p'$. It follows that $h$ is injective. \end{proof} \begin{remark} \label{rem:cwgeneral} In the same way, one can construct a $G$-CW-structure on $\Hom(\Z^k,G)_{\BONE}$ for any $k\geqslant 1$. The $G$-$n$-cells are then indexed over $k$-tuples $(\sigma_1,\dots,\sigma_k)\in \mathscr{F}_{i_1}\times \cdots \times \mathscr{F}_{i_k}$ such that $\sum_j i_j=n$ and a complete set of representatives $\mathscr{C}(\sigma_1,\dots,\sigma_k)$ for the $W$-orbits of the diagonal $W$-set $W/W_{\sigma_1}\times \cdots \times W/W_{\sigma_k}$. (When $k=2$ the latter reduces to a set of representatives for the double cosets $W_{\sigma_1}\backslash W / W_{\sigma_2}$.) This, in fact, gives the indexing set of the $n$-cells in a (non-equivariant) CW-structure on $T^k/W$. A proof analogous to that of Lemma \ref{lem:cw} then shows that this CW-structure can be lifted to a $G$-equivariant CW-structure on $\Hom(\Z^k,G)_{\BONE}$. \end{remark} Let $\pi_0(G_{(x,y)})_{(p)}=\pi_0(G_{(x,y)})\otimes_{\Z} \Z_{(p)}$ denote the localization of the abelian group $\pi_0(G_{(x,y)})$ (see Lemma \ref{lem:componentsisotropy}) at $p$.
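Concretely, localization at $p$ of a finite abelian group extracts its $p$-primary part; for instance, \[ (\Z/6)_{(2)}\cong \Z/2\,,\qquad (\Z/6)_{(3)}\cong \Z/3\,,\qquad (\Z/m)_{(p)}=0 \ \textnormal{ whenever } p\nmid m\,. \] In particular, since $\pi_0(G_{(x,y)})$ is cyclic of order at most $6$, the only non-trivial values of $\pi_0(G_{(x,y)})_{(p)}$ that can occur are $\Z/2$, $\Z/3$, $\Z/4$ and $\Z/5$.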
Relative to the CW-structure just described we have: \begin{lemma} \label{lem:subcomplex} Let $p$ be a prime. The subspace $X_G(p)\subseteq \Hom(\Z^2,G)$ defined by \[ X_G(p):=\{(x,y)\in \Hom(\Z^2,G)\mid \pi_0(G_{(x,y)})_{(p)}\neq 0\} \] is a $G$-subcomplex of $\Hom(\Z^2,G)$. \end{lemma} \begin{proof} It is clear that $X_G(p)$ is a union of open $G$-cells. We must show that it is also a union of closed $G$-cells. Let $\alpha=(\sigma,\tau,w)\in J_n$ and write \[ \partial e^n_\alpha:=e^n_\alpha\cap \Hom(\Z^2,G)^{(n-1)}\,,\quad\; \textnormal{int}(e^n_\alpha):=e^n_\alpha\backslash \partial e^n_\alpha\,. \] Suppose that $\textnormal{int}(e^n_\alpha)\subseteq X_G(p)$. This means that $\pi_0(G_{(\sigma,w\tau)})_{(p)}\neq 0$. We are going to show that $\partial e^n_\alpha\subseteq X_G(p)$. Suppose that $(z_1,z_2)\in \partial e^n_\alpha$. Then there is $g\in G$ such that $(z_1,z_2)=(x,wy)^g$ where $(x,y)\in \partial(\sigma\times \tau)$. In particular, $\pi_0(G_{(z_1,z_2)})\cong \pi_0(G_{(x,wy)})$. As noted in (\ref{eq:isotropyboundary}) we have that $G_\sigma\leqslant G_x$ and $G_{(\sigma,w\tau)}\leqslant G_{(x,wy)}$. By Lemma \ref{lem:pi0injective}, the map $\pi_0(G_{(\sigma,w\tau)})\to \pi_0(G_{(x,wy)})$ induced by the inclusion is injective. The map remains injective after $p$-localization, hence $\pi_0(G_{(x,wy)})_{(p)}\neq 0$. But this implies that $(z_1,z_2)\in X_G(p)$, which finishes the proof. \end{proof} Let us write $(\underline{\pi}_0)_{(p)}$ for the coefficient system obtained by localizing $\underline{\pi}_0$ objectwise at $p$. \begin{corollary} \label{cor:decomposition} Let $\mathcal{P}$ denote the set of primes which divide at least one coroot integer of $G$. Then there is an isomorphism \[ H_\ast^G(\Hom(\Z^2,G);\underline{\pi}_0)\cong \bigoplus_{p\in \mathcal{P}} H_\ast^G(X_G(p);(\underline{\pi}_0)_{(p)})\, . \] \end{corollary} \begin{proof} We know from Lemma \ref{lem:componentsisotropy} that $\underline{\pi}_0$ takes values in finite abelian groups. Hence, it splits as a direct sum of its localizations at the various primes. On the other hand, we derive from Lemmas \ref{lem:pi1derived} and \ref{lem:pi0inpi1} that $(\underline{\pi}_0)_{(p)}$ is trivial unless $p$ divides a coroot integer of $G$. Therefore, $\underline{\pi}_0\cong \bigoplus_{p\in\mathcal{P}} (\underline{\pi}_0)_{(p)}$. The corollary is now a consequence of Lemma \ref{lem:subcomplex}; by definition of $X_G(p)$, the inclusion of the chain complex computing $H_\ast^G(X_G(p);(\underline{\pi}_0)_{(p)})$ into the chain complex computing $H_\ast^G(\Hom(\Z^2,G);(\underline{\pi}_0)_{(p)})$ is an isomorphism. \end{proof} To understand why the decomposition of Corollary \ref{cor:decomposition} is useful, we must take a closer look at the coefficient system $(\underline{\pi}_0)_{(p)}$. \begin{lemma} \label{lem:equivarianthomologyquotient} Let $p\in \mathcal{P}$. Suppose that $G\notin\{E_7,E_8\}$ or that $p>2$. Then \[ H_\ast^G(X_G(p);(\underline{\pi}_0)_{(p)}) \cong H_\ast(R_G(p);\Z/p) \] where $R_G(p):=X_G(p)/G$. \end{lemma} \begin{proof} Let $G$ and $p$ be fixed. The assumption that either $G\notin \{E_7,E_8\}$ or that $p>2$ will only become relevant towards the end of the proof. Let $\mathcal{O}_G$ denote the orbit category of $G$, whose objects are the homogeneous spaces $G/H$ for closed subgroups $H\leqslant G$ and whose morphisms are the $G$-equivariant maps.
For the construction of the chain complex $C_\ast(X_G(p);\underline{\pi}_0)$ only a subcategory $\mathcal{O}_G'$ of $\mathcal{O}_G$ is relevant, namely the subcategory generated by the equivariant maps that enter into the definition of the differentials. To prove the lemma it suffices to show that under the stated assumptions the restriction of $(\underline{\pi}_0)_{(p)}$ to $\mathcal{O}_G'$ is naturally isomorphic to the constant coefficient system $\underline{\Z/p}$. The maps defining the differential $C_n(X_G(p);\underline{\pi}_0)\to C_{n-1}(X_G(p);\underline{\pi}_0)$ arise by considering the composite of an attaching map $f^n_\alpha$ for an $n$-cell $e^n_\alpha$ and the collapse of the complement of an $(n-1)$-cell $e^{n-1}_\beta$ into a point \cite{willson}. In our situation this looks as follows. Let $\alpha=(\sigma,\tau,w)\in J_n$, let $\beta=(\sigma',\tau',w')\in J_{n-1}$, and let $e^n_\alpha$ respectively $e^{n-1}_\beta$ be the corresponding closed cells of $\Hom(\Z^2,G)$. The pair $(e^n_\alpha,e^{n-1}_\beta)$ can contribute to the differential only if $\textnormal{int}(e^{n-1}_\beta)\cap \partial e^n_\alpha\neq \emptyset$. Let us assume that the intersection is non-empty. It follows, then, that $\sigma'\times\tau'\subseteq \partial(\sigma\times\tau)$ and that $e^{n-1}_\beta\subseteq \partial e^n_\alpha$. Now $\textnormal{int}(e^{n-1}_\beta)$ contains the homeomorphic image (under the characteristic map $\phi^{n-1}_\beta$ of $e^{n-1}_\beta$) of the orbit $G/G_{(\sigma',w'\tau')}\times \{(b(\sigma'),b(\tau'))\}$. The preimage of this orbit under the attaching map $f^n_\alpha$ is $G/G_{(\sigma,w\tau)} \times\{(b(\sigma'),b(\tau'))\}$. The composite map \[ G/G_{(\sigma,w\tau)} \times\{(b(\sigma'),b(\tau'))\} \xrightarrow{(\phi^{n-1}_\beta)^{-1}\circ f^n_\alpha} G/G_{(\sigma',w'\tau')}\times \{(b(\sigma'),b(\tau'))\} \] is a $G$-equivariant map and is therefore determined by the image of $eG_{(\sigma,w\tau)}$. Recalling the definition of $f^n_\alpha$ and $\phi^{n-1}_\beta$ we find that $eG_{(\sigma,w\tau)}\mapsto g^{-1}G_{(\sigma',w'\tau')}$ where $g$ is any element of $G$ satisfying \begin{equation} \label{eq:conditionog} (b(\sigma'),wb(\tau'))^g=(b(\sigma'),w'b(\tau'))\, . \end{equation} (The existence of such $g$ is implicit in the assumption that $e^{n-1}_\beta\subseteq \partial e^n_\alpha$.) In particular, we have that $g\in G_{\sigma'}$. Now let $\mathcal{O}_G'$ be the subcategory of $\mathcal{O}_G$ generated by the $G$-maps \begin{alignat*}{2} && G/G_{(\sigma,w\tau)} & \to G/G_{(\sigma',w'\tau')} \\ && eG_{(\sigma,w\tau)} & \mapsto g^{-1}G_{(\sigma',w'\tau')} \end{alignat*} where $g$ satisfies condition (\ref{eq:conditionog}), and $\sigma'\subseteq \sigma$ and $\tau'\subseteq \tau$. Then $\mathcal{O}_G'$ includes all morphisms needed to form the chain complex $C_\ast(X_G(p);\underline{\pi}_0)$. We are going to show that the restriction of $(\underline{\pi}_0)_{(p)}$ to $\mathcal{O}'_G$ is naturally isomorphic to the constant coefficient system $\underline{\Z/p}$. Consider a morphism in $\mathcal{O}_G'$ and let \[ \pi_0(G_{(\sigma,w\tau)})\to \pi_0(G_{(\sigma',w'\tau')}) \] be the map obtained by applying $\underline{\pi}_0$ to it. It can be factored as the map induced by the inclusion $G_{(\sigma,w\tau)}\to G_{(\sigma',w\tau')}$ and the map induced by conjugation $(-)^g\co G_{(\sigma',w\tau')}\to G_{(\sigma',w'\tau')}$. 
This yields the upper row in the following diagram: \begin{equation} \label{dgr:naturalisomorphism} \xymatrix{ \pi_0(G_{(\sigma,w\tau)}) \ar[r] \ar[d]^-{\delta_{wb(\tau)}} & \pi_0(G_{(\sigma',w\tau')}) \ar[r]^-{(-)^g} \ar[d]^-{\delta_{wb(\tau')}} & \pi_0(G_{(\sigma',w'\tau')}) \ar[d]^-{\delta_{w'b(\tau')}} \\ \pi_1(DG_\sigma) \ar[r] & \pi_1(DG_{\sigma'}) \ar@{=}[r] & \pi_1(DG_{\sigma'}) \\ \Z/p \ar@{=}[rr] \ar[u]_-{\epsilon_\sigma \zeta(b(\sigma))} && \Z/p \ar[u]_-{\epsilon_{\sigma'}\zeta(b(\sigma'))}. } \end{equation} The first map in the middle row is the one induced by the inclusion $G_\sigma\to G_{\sigma'}$. Let us show that the upper half of the diagram commutes. The left hand square commutes by naturality of the connecting homomorphism. To see that the right hand square commutes, let $\tilde{g}$ denote a lift of $g$ in the universal cover $\widetilde{G_{\sigma'}}$, and let $z\in G_{(\sigma',w\tau')}$. By definition of the connecting homomorphism (see Lemma \ref{lem:pi0inpi1}), we obtain \[ \delta_{w'b(\tau')}(z^g)=[\widetilde{w'b(\tau')},\widetilde{z^g}]= [\widetilde{(wb(\tau'))^g},\widetilde{z^g}]=[\widetilde{wb(\tau')},\tilde{z}]^{\tilde{g}} =\delta_{wb(\tau')}(z)\,. \] In the second equality we used (\ref{eq:conditionog}), and in the last equality we used the fact that the commutator is in the center of $\widetilde{G_{\sigma'}}$. Next, we are going to show that the vertical arrows in the upper half of the diagram become isomorphisms after $p$-localization. Let $(x,y)\in X_G(p)$ with $x\in A$. Then, by Lemma \ref{lem:pi0inpi1}, $\pi_1(DG_x)$ contains $p$-torsion, hence $p\mid n^\vee(x)$ as $\pi_1(DG_x) \cong \Z/n^\vee(x)$. The set of coroot integers of $G$ displayed in Table \ref{table:coroot} is at the same time the set of possible values that $n^\vee(x)$ can attain as $x$ ranges over the alcove $A$. If $G\notin \{E_7,E_8\}$ or $p>2$, then $n^\vee(x)$ does not contain repeated primes. Hence, $\pi_1(DG_x)_{(p)}\cong \Z/p$. By Lemma \ref{lem:pi0inpi1}, the map $\delta_y\co \pi_0(G_{(x,y)})\to \pi_1(DG_x)$ is injective, and it remains so after $p$-localization. It follows that \[ \pi_0(G_{(x,y)})_{(p)}\xrightarrow{\cong} \pi_1(DG_x)_{(p)}\, . \] To finish the proof we show that if $\sigma'\subseteq \sigma$ and $p\mid n^\vee(b(\sigma))$, then the map \[ \pi_1(DG_\sigma)_{(p)}\xrightarrow{\cong} \pi_1(DG_{\sigma'})_{(p)} \] induced by the inclusion $G_\sigma\leqslant G_{\sigma'}$ is naturally isomorphic to the identity at $\Z/p$. This is achieved by making an appropriate choice of generators. Recall from Lemma \ref{lem:pi1derived} that $\pi_1(DG_\sigma)$ is a cyclic group of order $n^\vee(b(\sigma))$ generated by $\zeta(b(\sigma))$. Let \[ \epsilon_\sigma=\begin{cases} 3 & \textnormal{if }n^\vee(b(\sigma))=6\textnormal{ and } p=2, \\ 2 & \textnormal{if }n^\vee(b(\sigma))=6\textnormal{ and } p=3, \\ 1 & \textnormal{otherwise.} \end{cases} \] Then $\Z/p\to \pi_1(DG_\sigma)$ sending $1\mapsto \epsilon_\sigma \zeta(b(\sigma))$ is injective and becomes an isomorphism after $p$-localization. Moreover, the map $\pi_1(DG_\sigma)\to \pi_1(DG_{\sigma'})$ sends $\epsilon_\sigma \zeta(b(\sigma))\mapsto \epsilon_{\sigma'}\zeta(b(\sigma'))$ by the second part of Lemma \ref{lem:pi1derived}. This demonstrates commutativity of the lower part of diagram (\ref{dgr:naturalisomorphism}). As a consequence, (the $p$-localization of) diagram (\ref{dgr:naturalisomorphism}) establishes a natural isomorphism of $(\underline{\pi}_0)_{(p)}$ with $\underline{\Z/p}$ as asserted. 
\end{proof} Let us comment on what happens when $G\in\{E_7,E_8\}$ and $p=2$, and restrict the coefficient system $\underline{\pi}_0$ to the $G$-subcomplex $X_G(2)$ of $\Hom(\Z^2,G)$. According to Table \ref{table:coroot} the coefficient system $(\underline{\pi}_0)_{(2)}$ can now evaluate to $\Z/2$ and $\Z/4$. Thus, in contrast to Lemma \ref{lem:equivarianthomologyquotient} it need not be isomorphic to a constant coefficient system. To handle these cases we observe that the argument of Lemma \ref{lem:subcomplex} shows that \[ X_G(4):=\{(x,y)\in X_G(2) \mid \pi_0(G_{(x,y)})_{(2)}\cong \Z/4\} \] is a $G$-subcomplex of $X_G(2)$. Restricted to $X_G(4)$ the coefficient system $(\underline{\pi}_0)_{(2)}$ is naturally isomorphic to $\underline{\Z/4}$ and as a consequence we obtain the following. \begin{lemma} \label{lem:e7e8rg4} Let $G=E_7$ or $G=E_8$. Then \begin{equation*} \label{eq:RG4} H^G_\ast(X_G(4);(\underline{\pi}_0)_{(2)})\cong H_\ast(R_G(4);\Z/4)\,, \end{equation*} where $R_G(4):=X_G(4)/G$. \end{lemma} \subsection{The quotient spaces $R_G(p)$} \label{sec:quotients} Recall the $G$-CW-complex \[ X_G(p)=\{(x,y)\in \Hom(\Z^2,G)\mid \pi_0(Z_G(x,y))_{(p)}\neq 0\} \] and its orbit space $R_G(p)=X_G(p)/G$. At this point, we have reduced the computation of $H_\ast^G(\Hom(\Z^2,G);\underline{\pi}_0)$ to a computation of the non-equivariant homology of $R_G(p)$, with an exception when $G$ is $E_7$ or $E_8$. As $R_G(p)$ is the orbit space of a $G$-subcomplex of $\Hom(\Z^2,G)$, it is a subcomplex in the induced CW-structure of $T^2/W$. It would be interesting to describe this subcomplex, but here we will content ourselves with a calculation of the homology of $R_G(p)$. This will be achieved by providing an explicit homotopy equivalence of $R_G(p)$ with a weighted projective space. Let $\mathbf{w}=(w_0,\dots,w_r)$ be a tuple of positive integers. Consider the weighted action of the circle group $\SS^1\subseteq \C$ on the unit sphere $\SS^{2r+1}\subseteq \C^{r+1}$ defined by \[ \lambda\cdot (z_0,\dots,z_r)=(\lambda^{w_0}z_0,\dots,\lambda^{w_r}z_r) \; \quad \; (\lambda\in \SS^1,\; (z_0,\dots,z_r)\in \SS^{2r+1})\, . \] The quotient space \[ \mathbb{CP}(\mathbf{w}):=\SS^{2r+1}/\SS^1_\mathbf{w} \] is called a \emph{weighted projective space}. Here we use the subscript $\mathbf{w}$ to indicate the weighted $\SS^1$-action. The most familiar example arises when $\mathbf{w}=(1,\dots,1)$, in which case $\mathbb{CP}(\mathbf{w})=\mathbb{CP}^{r}$ is the usual complex projective space. \medskip Recall that, relative to a fixed simply--connected simple compact Lie group $G$, we let $\mathcal{P}$ denote the set of primes dividing a coroot integer of $G$; by inspection of Table \ref{table:coroot}, $\mathcal{P}=\{n_1^\vee,\dots,n_r^\vee\}\backslash\{1,4,6\}$. Define \[ \mathcal{Z}:=\{n_1^\vee,\dots,n_r^\vee\}\backslash \{1,6\}\, . \] Thus $\mathcal{Z}=\mathcal{P}$ except when $G$ is $E_7$ or $E_8$, in which case $\mathcal{Z}=\mathcal{P}\cup \{4\}$. The objective of this subsection is to prove: \begin{proposition} \label{prop:weightedprojective} Let $\mathbf{n}^\vee=(n_0^\vee,\dots,n^\vee_r)$ be the tuple of coroot integers of $G$ and let $p\in\mathcal{Z}$ be fixed. Let $\mathbf{n}^\vee(p)=(n_{i_0}^\vee,\dots,n_{i_k}^\vee)$ denote the tuple obtained from $\mathbf{n}^\vee$ by removing those entries that are not divisible by $p$.
Let $\iota_p\co \mathbb{CP}(\mathbf{n}^\vee(p))\to \mathbb{CP}(\mathbf{n}^\vee)$ be the map defined in homogeneous coordinates by $[y_0,\dots,y_k]\mapsto [z_0,\dots,z_r]$ where $z_{l}=y_j$ if $l=i_j$ for some $0\leqslant j\leqslant k$, and $z_l=0$ otherwise. Then there is a commutative diagram \[ \xymatrix{ R_G(p) \ar[r]^-{\simeq} \ar[d] & \mathbb{CP}(\mathbf{n}^\vee(p)) \ar[d]^-{\iota_p} \\ \Rep(\Z^2,G) \ar[r]^-{\simeq} & \mathbb{CP}(\mathbf{n}^\vee) } \] in which the horizontal arrows are homotopy equivalences and the left hand vertical arrow is the subspace inclusion. \end{proposition} Like any complete toric variety a weighted projective space is simply--connected (for the definition of weighted projective space as a toric variety see for example \cite[p. 35]{Fulton}, and for the result on the fundamental group see \cite[Section 3.2]{Fulton}). The integral cohomology of $\mathbb{CP}(\mathbf{w})$ was computed by Kawasaki in \cite[Theorem 1]{K73}. The homology is then obtained from the universal coefficient theorem: \begin{equation} \label{eq:homologyofweightedprojectivespace} H_k(\mathbb{CP}(\mathbf{w});\Z)\cong \begin{cases} \Z\,, & \textnormal{if } k\leqslant 2r \textnormal{ is even,} \\ 0\,, &\textnormal{otherwise.} \end{cases} \end{equation} In view of this result we have: \begin{corollary} \label{cor:homologyrp} Let $p\in \mathcal{P}$ and assume that $G\not\in \{E_7,E_8\}$ or that $p>2$. Then \[ H_k^G(X_G(p);(\underline{\pi}_0)_{(p)})\cong \begin{cases} \Z/p\,, & \textnormal{if } k \leqslant 2(\ell-1)\textnormal{ is even}, \\ 0\,, & \textnormal{otherwise}, \end{cases} \] where $\ell\geqslant 1$ is the number of coroot integers of $G$ divisible by $p$. \end{corollary} \begin{proof} By Lemma \ref{lem:equivarianthomologyquotient}, $H_\ast^G(X_G(p);(\underline{\pi}_0)_{(p)})\cong H_\ast(R_G(p);\Z/p)$. By Proposition \ref{prop:weightedprojective}, $R_G(p)$ is homotopy equivalent to a weighted projective space of complex dimension $\ell-1$. The homology groups follow therefore from Kawasaki's computation (\ref{eq:homologyofweightedprojectivespace}). \end{proof} Our approach to Proposition \ref{prop:weightedprojective} is as follows. The homotopy equivalence $\Rep(\Z^2,G)\simeq \mathbb{CP}(\mathbf{n}^\vee)$ will be obtained as a composite of maps, \begin{equation} \label{eq:equivalencewp} \Rep(\Z^2,G)\xrightarrow{\cong} A\otimes_{\mathbb{I}} F\xrightarrow{\simeq} A\otimes_{\mathbb{I}} \bar{F} \xrightarrow{\cong} \mathbb{CP}(\mathbf{n}^\vee)\, . \end{equation} The two spaces in the middle are certain coends that will be defined below, and the homotopy equivalence is induced from a natural equivalence of diagrams $F\xrightarrow{\sim} \bar{F}$. The equivalence $R_G(p)\simeq \mathbb{CP}(\mathbf{n}^\vee(p))$ will be obtained from this by restriction to subspaces, whence the diagram in Proposition \ref{prop:weightedprojective} will commute by construction. In brief, the homotopy equivalence $\Rep(\Z^2,G)\simeq \mathbb{CP}(\mathbf{n}^\vee)$ is given by \begin{equation} \label{eq:explicitequivalence} (x,y) \textnormal{ modulo }G \mapsto (a_0,a_1t_1,\dots,a_rt_r)/\sqrt{\sum_{i=0}^r a_i^2} \textnormal{ modulo } \SS^1_{\mathbf{n}^\vee}\, , \end{equation} where $x\in A$ and $(a_0,\dots,a_r)\in \Delta^r$ are the barycentric coordinates of $x$, and $y\in T$ and $(t_1,\dots,t_r)\in (\SS^1)^r$ are the coordinates of $y$ with respect to the one-parameter subgroups of $T$ determined by the coroots $\alpha_1^\vee,\dots,\alpha_r^\vee$. This will be clear once we have proved Lemma \ref{lem:coendwp}. 
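As a consistency check (not needed for the proof), observe that the bottom row of the diagram does not involve $p$: it asserts that $\Rep(\Z^2,G)$ is homotopy equivalent to $\mathbb{CP}(\mathbf{n}^\vee)$ for every simply--connected simple $G$. For $G=SU(m)$ or $G=Sp(k)$ all coroot integers equal $1$ (see Table \ref{table:coroot}), so this reads \[ \Rep(\Z^2,SU(m))\simeq \mathbb{CP}^{m-1}\,,\qquad \Rep(\Z^2,Sp(k))\simeq \mathbb{CP}^{k}\, . \] The second equivalence is consistent with the homeomorphism $\Rep(\Z^2,Sp(k))\cong SP^k((\SS^1)^2/\Z/2)$ used in the proof of Theorem \ref{thm:casesymplecticandunitary}, since $(\SS^1)^2/\Z/2$ is homeomorphic to $\SS^2$ and $SP^k(\SS^2)\cong \mathbb{CP}^k$; the first is consistent with the identification of $\Rep(\Z^2,SU(2))\to \Rep(\Z^2,SU(3))$ with $\mathbb{CP}^1\subseteq \mathbb{CP}^2$ in Remark \ref{rem:symplecticandunitary}. In these cases $\mathcal{Z}=\emptyset$, and there are no spaces $R_G(p)$ to consider.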
\medskip To describe the first one of the two homeomorphisms in (\ref{eq:equivalencewp}) we will regard $\Rep(\Z^2,G)$ and $R_G(p)$ as spaces over the alcove $A$. Fix $p\in \mathcal{Z}$. For an integer $m$ let $A(m)\subseteq A$ be the subspace of the fundamental alcove defined by \[ x\in A(m)\Longleftrightarrow m\mid n^\vee(x)\,. \] This is again a simplex, in fact, a face of $A$ as we explain below (Lemma \ref{lem:amface}). Note that if $(x,y)\in X_G(p)$ and $x\in A$, then $p\mid n^\vee(x)$, hence $x\in A(p)$. Thus, there is a commutative diagram \[ \xymatrix{ R_G(p) \ar[r] \ar[d]^-{q} & \Rep(\Z^2,G) \ar[d]^-{\textnormal{pr}_1} \\ A(p) \ar[r] & A } \] in which $\textnormal{pr}_1$ is the projection of $\Rep(\Z^2,G)$ onto the first component and $q$ is its restriction to the subspace $R_G(p)$. The two horizontal maps are the inclusions. We will describe the fibers of $\textnormal{pr}_1$ and $q$. \begin{notation*} When $K$ is a group, we write $K/\Ad_K$ for the quotient by the conjugation (or adjoint) action of $K$ on itself. The conjugacy class of an element $g\in K$ is denoted by $(g)\in K/\Ad_K$. Similarly, given $(x,y)\in \Hom(\Z^2,G)$ we write $((x,y))\in \Rep(\Z^2,G)$ for its equivalence class under simultaneous conjugation. When $(x,y)\in T^2$, we will also write $((x,y))\in T^2/W$ for its equivalence class under the diagonal action of $W$. \end{notation*} Clearly, if $x\in A$, then the assignment $(y)\mapsto ((x,y))$ defines a homeomorphism \begin{equation} \label{eq:pr1fiber} Z_G(x)/\Ad_{Z_G(x)}\cong \textnormal{pr}_1^{-1}(x)\,. \end{equation} On the other hand, if $x\in A(p)$, then $q^{-1}(x)=\textnormal{pr}_1^{-1}(x)\cap R_G(p)$ corresponds to a subspace of $Z_G(x)/\Ad_{Z_G(x)}$. To describe it we rewrite (\ref{eq:pr1fiber}) using the finite covering \begin{alignat*}{2} \rho\co && \widetilde{DZ_G(x)}\times Z(Z_G(x))_0 & \to Z_G(x) \\ && (g,t) & \mapsto u(g)t\,, \end{alignat*} where $Z(Z_G(x))_0$ is the identity component of the center of $Z_G(x)$, and $u\co \widetilde{DZ_G(x)}\to DZ_G(x)$ is the universal covering (see \cite[IX \S 1.4 Corollary 1]{Bourbaki}). There is a finite subgroup $C \leqslant Z(\widetilde{DZ_G(x)})\times Z(Z_G(x))_0$ such that $\rho$ descends to an isomorphism \begin{equation} \label{eq:coveringderivedcenter} \widetilde{DZ_G(x)}\times_{C}Z(Z_G(x))_0\xrightarrow{\cong} Z_G(x)\, . \end{equation} Under this isomorphism (\ref{eq:pr1fiber}) becomes \begin{equation} \label{eq:pr1fiber2} \textnormal{pr}_1^{-1}(x)\cong \left(\widetilde{DZ_G(x)}/ \Ad_{\widetilde{DZ_G(x)}}\right)\times_{C} Z(Z_G(x))_0\, . \end{equation} Here $C$ acts on the adjoint orbits in the natural way: If $K$ is any group, then an action of the center $Z(K)$ on $K/\Ad_K$ is defined by \begin{equation} \label{eq:actionofcenter} c\cdot (g)=(cg)\,,\quad (c\in Z(K),\,g\in K)\, . \end{equation} We can now describe the fibers of $q\co R_G(p)\to A(p)$. \begin{lemma} \label{lem:qfiber} Let $p\in \mathcal{Z}$, and let $x\in A(p)$. Then, there is an element $\xi(x)\in Z(\widetilde{DZ_G(x)})$ and a homeomorphism \[ q^{-1}(x)\cong \left( \widetilde{DZ_G(x)}/\Ad_{\widetilde{DZ_G(x)}} \right)^{\langle \xi(x)\rangle}\times_{C} Z(Z_G(x))_0\,. \] \end{lemma} Note that both $C$ and $\langle \xi(x)\rangle$ act through subgroups of the center $Z(\widetilde{DZ_G(x)})$. As the center is abelian, the two actions commute, and there is an induced action of $C$ on the $\langle \xi(x)\rangle$-fixed points. \begin{proof} Let us discuss the case where $p$ is a prime. 
The case $p=4$ is treated analogously; see Remark \ref{rem:fiberqp4}. By definition of $R_G(p)$ and the isomorphism (\ref{eq:pr1fiber}) we have that \begin{equation} \label{eq:qx-1} q^{-1}(x)\cong\{y\in Z_G(x) \mid \pi_0(Z_G(x,y))_{(p)}\neq 0\}/\Ad_{Z_G(x)}\, . \end{equation} To reformulate the condition $\pi_0(Z_G(x,y))_{(p)}\neq 0$ we will use the connecting homomorphism defined in Lemma \ref{lem:pi0inpi1}. To this end, let \[ \xi(x):=\begin{cases} \textnormal{any generator of } \pi_1(DZ_G(x))_{(p)}\cong \Z/p & \textnormal{if }p>2, \\ \textnormal{the unique element of order $2$ in }\pi_1(DZ_G(x))_{(2)} & \textnormal{if }p=2 \end{cases} \] viewed as an element in the center of $\widetilde{DZ_G(x)}$. Then we have that \[ \pi_0(Z_G(x,y))_{(p)}\neq 0\Longleftrightarrow \xi(x)\in \Im(\delta_y) \Longleftrightarrow \exists \tilde{z}\in \widetilde{Z_G(x)}: [\tilde{y},\tilde{z}]=\xi(x)\, . \] Since $\widetilde{Z_G(x)}\cong \widetilde{DZ_G(x)}\times \widetilde{Z(Z_G(x))_0}$, the last condition is equivalent to $\tilde{y}\in U(x)\times \widetilde{Z(Z_G(x))_0}$, where \[ U(x):=\{\tilde{a}\in \widetilde{DZ_G(x)}\mid \exists \tilde{z}\in \widetilde{DZ_G(x)}: [\tilde{a},\tilde{z}]=\xi(x)\}\,. \] Similarly to (\ref{eq:pr1fiber2}) we then obtain \[ q^{-1}(x)\cong \left(U(x)/\Ad_{\widetilde{DZ_G(x)}}\right)\times_{C} Z(Z_G(x))_0\, . \] Finally, writing out the commutator shows that \[ \exists \tilde{z}\in \widetilde{DZ_G(x)}: [\tilde{a},\tilde{z}] =\xi(x) \Longleftrightarrow \xi(x)\cdot (\tilde{a})=(\tilde{a}) \,, \] hence $U(x)/\Ad_{\widetilde{DZ_G(x)}}\cong \left( \widetilde{DZ_G(x)}/\Ad_{\widetilde{DZ_G(x)}}\right)^{\langle \xi(x)\rangle}$, and the lemma follows. \end{proof} \begin{remark} \label{rem:fiberqp4} In the case $p=4$ the defining condition for $q^{-1}(x)$ reads $\pi_0(Z_G(x,y))_{(2)}\cong \Z/4$ and one chooses $\xi(x)$ to be any generator of $\pi_1(DZ_G(x))_{(2)}\cong \Z/4$. The rest of the proof goes through verbatim. \end{remark} Observe that the fibers $q^{-1}(x)$ only depend on the face $\sigma\subseteq A(p)$ such that $x\in \textnormal{int}(\sigma)$ and not on the specific point $x$ chosen within $\textnormal{int}(\sigma)$. That is, if $x,x'\in \textnormal{int}(\sigma)$, then under the identification (\ref{eq:qx-1}) the fibers $q^{-1}(x)$, $q^{-1}(x')$ and $q^{-1}(b(\sigma))$ all correspond to the same subspace of $Z_G(b(\sigma))/\Ad_{Z_G(b(\sigma))}$, and likewise for $\textnormal{pr}_1$. Together with the combinatorial structure of the alcove this allows for a more manageable description of $R_G(p)$ and $\Rep(\Z^2,G)$. To this end, recall that the faces of the alcove $A$ are in one-to-one correspondence with the proper subsets $I\subseteq \tilde{\Delta}$ of the extended set of simple roots as follows: \[ I\subsetneq \tilde{\Delta}\; \longleftrightarrow\; \sigma^{I}\in \mathscr{F}_{r-|I|}\, , \] where \[ \sigma^I:=\{x\in A\mid \forall \alpha\in I: x\textnormal{ lies in the wall of } A\textnormal{ determined by }\alpha\}\, . \] Since $A$ is a simplex and $\sigma^I$ is an intersection of facets, $\sigma^I$ is a face of $A$. For example, if $I=\emptyset$, then $\sigma^{\emptyset}=A$. If $I=\tilde{\Delta}\backslash \{\alpha_j\}$ for some $j\in \{0,\dots,r\}$, then $\sigma^I$ is the vertex opposite the wall of $A$ determined by $\alpha_j$. If $x\in A$ and $I=\tilde{\Delta}(x)$ (as defined in eq.~(\ref{eq:deltax}) of Section \ref{sec:componentgroup}), then $\sigma^I$ is the minimal face of $A$ containing $x$. Also relevant to us is: \begin{lemma} \label{lem:amface} Let $m$ be an integer and let $I_m=\{\alpha_j\in \tilde{\Delta}\mid m\nmid n_j^\vee\}$. Then $\sigma^{I_m}=A(m)$.
\end{lemma} \begin{proof} We have that \begin{equation*} \begin{split} x\in A(m) & \Longleftrightarrow m\mid n^\vee(x) \\& \Longleftrightarrow \forall j\in \tilde{\Delta}\backslash \tilde{\Delta}(x): m\mid n_j^\vee \\& \Longleftrightarrow I_m\subseteq \tilde{\Delta}(x) \\& \Longleftrightarrow \forall \alpha\in I_m : x\textnormal{ lies in the wall of $A$ determined by }\alpha \\& \Longleftrightarrow x\in \sigma^{I_m}\, , \end{split} \end{equation*} by definition of $A(m)$, $n^\vee(x)$, $\tilde{\Delta}(x)$ and $I_m$. \end{proof} It follows that $A(m)$ is a face of $A$, hence a simplex.\medskip Let $\I$ denote the poset of proper subsets of $\tilde{\Delta}$ partially ordered by inclusion. Note that when $I,J\in \I$ and $I\subseteq J$, then $\sigma^J$ is a face of $\sigma^I$. Therefore, the assignment $I\mapsto \sigma^I$ defines a contravariant functor from $\I$ to $\mathbf{Top}$ sending the inclusion $I\subseteq J$ to the opposite inclusion $\sigma^{J}\subseteq \sigma^I$. This functor encodes the face structure of the fundamental alcove. Abusing notation, we denote it by \[ A\co \I^{\op} \to \mathbf{Top}\,, \quad I \mapsto A(I)=\sigma^I\, . \] Similarly, for an integer $m$ we denote by \[ A(m)\co \I(m)^{\op}\to \mathbf{Top} \] the restriction of $A$ to the subposet $\I(m)\subseteq \I$ consisting of those subsets $I$ containing $I_m$ (see Lemma \ref{lem:amface}). It encodes the face structure of the simplex $A(m)$. On the other hand, we can define the following covariant functors on $\I$. Recall that if $\sigma,\tau$ are faces of $A$ and $\tau\subseteq \sigma$, then there is an inclusion $Z_G(b(\sigma))\leqslant Z_G(b(\tau))$. This induces a map $Z_G(b(\sigma))/\Ad_{Z_G(b(\sigma))}\to Z_G(b(\tau))/\Ad_{Z_G(b(\tau))}$ (which is, in fact, a surjection rather than an inclusion) and thus by (\ref{eq:pr1fiber}) a map \[ \textnormal{pr}_1^{-1}(b(\sigma))\to \textnormal{pr}_1^{-1}(b(\tau))\,. \] Now if $\sigma,\tau$ are faces of $A(p)$ (where $p$ is as in Lemma \ref{lem:qfiber}), then this restricts to a map \[ q^{-1}(b(\sigma))\to q^{-1}(b(\tau))\,. \] This is perhaps most easily seen from the description of the fibers of $q$ displayed in (\ref{eq:qx-1}). We have that $(y)\in q^{-1}(b(\sigma))$ if and only if $\pi_0(Z_G(b(\sigma),y))_{(p)}\neq 0$. But then $\pi_0(Z_G(b(\tau),y))_{(p)}\neq 0$ by Lemma \ref{lem:pi0injective}, hence $(y)\in q^{-1}(b(\tau))$. Therefore, there are functors \[ F\co \I \to \mathbf{Top}\,, \quad I \mapsto \textnormal{pr}_1^{-1}(b(\sigma^{I})) \] and \[ F'\co \I(p) \to \mathbf{Top}\,, \quad I \mapsto q^{-1}(b(\sigma^{I}))\, . \] We may now identify $\Rep(\Z^2,G)$ and $R_G(p)$ as coends of the functor pairs $(A,F)$ and $(A(p),F')$, respectively. To this end, recall \begin{definition} Let $\mathcal{C}$ be a small category and $M\co \mathcal{C}\to \mathbf{Top}$ and $L\co \mathcal{C}^{\op}\to \mathbf{Top}$ a pair of co- and contravariant functors. The \emph{coend} $L\otimes_{\mathcal{C}} M$ is the topological space defined by \[ L\otimes_{\mathcal{C}} M=\left(\bigsqcup_{c\in \Ob(\mathcal{C})} L(c)\times M(c)\right)/\approx\,, \] where the equivalence relation $\approx$ is given by \[ (L(i)(a),b)\approx (a,M(i)(b)) \] for all $c,d\in \mathcal{C}$, $i\in \Mor_{\mathcal{C}}(c,d)$, $a\in L(d)$ and $b\in M(c)$. \end{definition} \begin{notation*} For $x\in L(c)$ and $y\in M(c)$ we denote by $x\otimes y$ the equivalence class of $(x,y)$ in the coend. \end{notation*} The following lemma explains the first one of the two homeomorphisms in (\ref{eq:equivalencewp}). 
\begin{lemma} \label{lem:coend} There is a commutative diagram \[ \xymatrix{ R_G(p) \ar[r]^-{\cong} \ar[d] & A(p)\otimes_{\I(p)} F' \ar[d] \\ \Rep(\Z^2,G) \ar[r]^-{\cong} & A\otimes_{\I} F } \] in which the two vertical maps are inclusions of subspaces. \end{lemma} \begin{proof} We first construct the bottom map. Each $((x,y))\in \Rep(\Z^2,G)$ is represented by a pair $(x,y)$ with $x\in A$. Define \begin{alignat*}{2} h\co \Rep(\Z^2,G) & \to A\otimes_{\I} F && \\ ((x,y)) & \mapsto x\otimes (y) & \quad \quad & (x\in \sigma^{\tilde{\Delta}(x)},\, (y)\in Z_G(x)/\Ad_{Z_G(x)}). \end{alignat*} To see that $h$ is well defined, suppose that $((x,y))=((x',y'))$ and both $x,x'\in A$. Then $x=x'$ and there is $g\in Z_G(x)$ such that $y'=y^g$. Hence, $(y)=(y')$ as elements of $Z_G(x)/\Ad_{Z_G(x)}$. To see that $h$ is surjective, observe that $F(\emptyset)=Z_G(b(A))/\Ad_{Z_G(b(A))}=T$, since $b(A)$ is a regular point of $G$. Hence, $A\times T$ appears as one of the disjoint summands in the definition of the coend. Since for each $\sigma\subseteq A$ the map $T\to Z_G(b(\sigma))/\Ad_{Z_G(b(\sigma))}$ is surjective, every element of $A\otimes_{\I}F$ has a representative in $A\times T$ and is therefore in the image of $h$. Finally, to see that $h$ is injective, suppose that $x\otimes (y)=x'\otimes (y')$. Since for each inclusion $I\subseteq J$ the map $\sigma^J\to \sigma^I$ is injective, we must have $x=x'$. But then, $(y)=(y')$ as elements of $Z_G(x)/\Ad_{Z_G(x)}$, so $((x,y))=((x',y'))$. Together this proves that $h$ is a continuous bijection. Since $\Rep(\Z^2,G)$ is compact and $A\otimes_{\I} F$ is Hausdorff, $h$ is a homeomorphism. To obtain the homeomorphism in the top row of the square, first observe that the inclusion map \[ \bigsqcup_{I\in \I(p)} \sigma^I\times F'(I) \hookrightarrow \bigsqcup_{I\in\I} \sigma^I\times F(I) \] induces a homeomorphism $i$ of $A(p)\otimes_{\I(p)}F'$ onto its image in $A\otimes_{\I}F$. The homeomorphism $h$ from before restricts to a bijection of $R_G(p)$ with $i(A(p)\otimes_{\I(p)}F')$. Since both $R_G(p)$ and $i(A(p)\otimes_{\I(p)}F')$ carry the subspace topology, this bijection is a homeomorphism. \end{proof} We now turn our attention to the homotopy equivalence in (\ref{eq:equivalencewp}). To this end, we introduce another diagram on $\I$ and $\I(p)$. Given $I\in \I$, let $\t(I)\leqslant \t$ be the $\R$-linear span of $\{\alpha^\vee_i\mid i\in I\}$, and let $T(I)\leqslant T$ be the subtorus whose Lie algebra is $\t(I)$; in other words, $T(I)$ is the image of $\t(I)$ under $\exp\co \t\to T$. Note that $T(I)=T\cap DZ_G(b(\sigma^I))$ can be identified with the maximal torus of $DZ_G(b(\sigma^I))$. When $I\subseteq J$, then $T(I)\leqslant T(J)$, and there is a quotient map $T/T(I)\to T/T(J)$. This gives rise to a diagram of tori \[ \bar{F}\co \I \to \mathbf{Top}\,, \quad I \mapsto T/T(I)\, . \] We will not distinguish notationally between $\bar{F}$ and its restriction to $\I(p)$. \begin{lemma} \label{lem:coendequivalence} There are natural equivalences of diagrams $F\xrightarrow{\simeq} \bar{F}$ and $F'\xrightarrow{\simeq} \bar{F}$ which fit into a commutative diagram \[ \xymatrix{ A(p)\otimes_{\I(p)} F' \ar[r]^-{\simeq} \ar[d] & A(p)\otimes_{\I(p)} \bar{F} \ar[d] \\ A\otimes_{\I} F \ar[r]^-{\simeq} & A\otimes_{\I} \bar{F} } \] where the two vertical maps are inclusions of subspaces. \end{lemma} Suppose that $K$ is a compact, simply--connected, simple Lie group.
To prove Lemma \ref{lem:coendequivalence} we need a fact concerning the action, specified in (\ref{eq:actionofcenter}), of the center $Z(K)$ of a compact, simply--connected, simple Lie group $K$ on $K/\Ad_K$. Let $S\leqslant K$ be a maximal torus, $\s$ its Lie algebra, and assume that appropriate choices have been made that allow us to identify $K/\Ad_K$ with (the closure of) an alcove $A_K\subseteq \s$. Then it is well known that the resulting action of $Z(K)$ on $A_K$ is through affine isometries of $\s$ permuting the vertices of $A_K$ (see \cite[Section 3.2]{BFM}). \begin{lemma} \label{lem:alcovecontractible} Let $K$ be a compact simply--connected semisimple Lie group with center $Z(K)$. Then $K/\Ad_K$ is $Z(K)$-equivariantly contractible. \end{lemma} \begin{proof} It suffices to prove the lemma in the case where $K$ is simple, because in the general case there is $s\geqslant 1$ such that $K\cong K_1\times \dots \times K_s$ where each $K_i$ is simple. Hence, $K/\Ad_K \cong K_1/\Ad_{K_1}\times \dots\times K_s/\Ad_{K_s}$, and the action of $Z(K)\cong Z(K_1)\times \dots \times Z(K_s)$ is through the action of $Z(K_i)$ on $K_i/\Ad_{K_i}$ for every $i=1,\dots,s$. Assuming that $K$ is simple, we identify $K/\Ad_K$ with a Weyl alcove $A_K$ in the Lie algebra $\s$ of a maximal torus for $K$. Then $A_K$ is a simplex given in barycentric coordinates by \[ A_K=\left\{ \sum_{v\in V} a_v v \mid a_v\in \R_{\geqslant 0} \textnormal{ for all } v\in V \textnormal{ and } \sum_{v\in V}a_v=1\right\}\subseteq \s\, , \] where $V\subseteq A_K$ is the set of vertices. As $Z(K)$ acts on $A_K$ through affine isometries of $\s$ permuting the vertices of $A_K$, the action is determined by this permutation action on the vertex set $V$. In particular, the barycenter $b(A_K)=\left(\sum_{v\in V} v\right)/|V|$ is a global fixed point for the $Z(K)$-action. Now let $\Gamma\leqslant Z(K)$ be any subgroup. Let $V=V_1\sqcup \dots \sqcup V_{\ell}$ be the decomposition of $V$ into orbits with respect to the permutation action of $\Gamma$ on $V$. Then a point $x=\sum_{v\in V} a_v v$ of $A_K$ is fixed by $\Gamma$ if and only if $a_v=a_u$ for all $v,u\in V_j$ and all $j=1,\dots,\ell$. Clearly, if $x$ is fixed by $\Gamma$, then so is $ab(A_K)+(1-a)x$ for every $a\in [0,1]$. It follows that the fixed point space $A_K^\Gamma$ is a star-shaped domain in $\s$ with respect to the barycenter $b(A_K)$ and therefore contractible. As this holds for all subgroups $\Gamma\leqslant Z(K)$, $A_K$ is $Z(K)$-equivariantly contractible. \end{proof} \begin{proof}[Proof of Lemma \ref{lem:coendequivalence}] We will only describe the equivalence $F'\xrightarrow{\simeq} \bar{F}$, as the equivalence $F\xrightarrow{\simeq} \bar{F}$ follows analogously. By Lemma \ref{lem:qfiber}, \[ F'(I)=q^{-1}(b(\sigma^I))\cong \left( \widetilde{DZ_G(b(\sigma^I))}/\Ad_{\widetilde{DZ_G(b(\sigma^I))}} \right)^{\langle \xi(b(\sigma^I))\rangle}\times_{C} Z(Z_G(b(\sigma^I)))_0\, . \] Since both $\langle \xi(b(\sigma^I))\rangle$ and $C$ act through the center of $\widetilde{DZ_G(b(\sigma^I))}$, Lemma \ref{lem:alcovecontractible} implies that the projection onto $Z(Z_G(b(\sigma^I)))_0$ induces a homotopy equivalence \[ F'(I)\xrightarrow{\simeq} Z(Z_G(b(\sigma^I)))_0/C\, , \] that is natural in $I$. The latter space can be identified naturally with $Z_G(b(\sigma^I))/DZ_G(b(\sigma^I))$ as a look at the isomorphism (\ref{eq:coveringderivedcenter}) shows. On the other hand, as $T(I)=T\cap DZ_G(b(\sigma^I))$ the inclusion $T\hookrightarrow Z_G(b(\sigma^I))$ induces a natural isomorphism \[ T/T(I) \cong Z_G(b(\sigma^I))/DZ_G(b(\sigma^I))\, .
\] This proves the equivalence $F'\xrightarrow{\simeq} \bar{F}$. To see that the natural equivalences $F'\xrightarrow{\simeq} \bar{F}$ and $F\xrightarrow{\simeq} \bar{F}$ induce homotopy equivalences $A(p)\otimes_{\I(p)} F'\simeq A(p)\otimes_{\I(p)}\bar{F}$ and $A\otimes_{\I} F\simeq A\otimes_{\I} \bar{F}$ we recognize the coends as homotopy colimits of the diagrams $F'$, $F$ and $\bar{F}$. For this recall that, if $M\co \mathcal{C}\to \mathbf{Top}$ is a small diagram of spaces, then a model for the homotopy colimit of $M$ is the coend \[ \underset{\mathcal{C}}{\hocolim}\, M=B(-/ \mathcal{C})\otimes_{\mathcal{C}} M\,, \] where $B(-)$ is the classifying space functor and $c /\mathcal{C}$ denotes the category of objects under $c\in\mathcal{C}$, see \cite[XII. \S 2.1]{BK72}. Since the diagrams $A\co I\mapsto \sigma^I$ and $I\mapsto B(I/\I)$ are naturally isomorphic, we find that \[ \Rep(\Z^2,G)\cong A\otimes_{\I} F\cong B(-/\I)\otimes_{\I} F = \underset{\I}{\hocolim}\, F\, , \] and similarly, \[ R_G(p)\cong \hocolim_{\I(p)} F'\,. \] The required homotopy equivalences are now implied by homotopy invariance of homotopy colimits. Finally, commutativity of the diagram follows by inspection. \end{proof} The following lemma completes the proof of Proposition \ref{prop:weightedprojective}. Essentially, this is an identification of the coend $A\otimes_{\I} \bar{F}$ with the weighted projective space $\mathbb{CP}(\mathbf{n}^\vee)$. This kind of identification is well known in toric topology (see e.g. \cite[Section 5.3]{WZZ}), but we will give a direct proof here. \begin{lemma} \label{lem:coendwp} There is a commutative diagram \[ \xymatrix{ A(p)\otimes_{\I(p)} \bar{F} \ar[r]^-{\cong} \ar[d] & \mathbb{CP}(\mathbf{n}^\vee(p)) \ar[d]^-{\iota_p} \\ A\otimes_{\I} \bar{F} \ar[r]^-{\cong} & \mathbb{CP}(\mathbf{n}^\vee) } \] in which the left hand vertical map is a subspace inclusion. \end{lemma} \begin{proof} The top horizontal map is simply the restriction of the bottom map, so we first construct the latter. To this end, we replace $A\otimes_{\I}\bar{F}$ by the homeomorphic identification space $(A\times T)/{\approx}$ where \[ (x,t)\approx (x',t') \Longleftrightarrow x=x'\textnormal{ and } t^{-1}t'\in T(\tilde{\Delta}(x))\,. \] The homeomorphism $(A\times T)/{\approx}\to A\otimes_{\I} \bar{F}$ is given by mapping $(x,t)\mapsto x\otimes t$. To define a homeomorphism of $(A\times T)/{\approx}$ with $\mathbb{CP}(\mathbf{n}^\vee)=\SS^{2r+1}/\SS^1_{\mathbf{n}^\vee}$ we shall view $\SS^{2r+1}$ as the $(r+1)$-fold unreduced join \[ \SS^{2r+1}\cong \SS^1\ast\cdots \ast \SS^1=(\SS^1)^{\ast(r+1)}\, . \] It is convenient to write elements of $(\SS^1)^{\ast(r+1)}$ formally as tuples $\langle a_0 t_0, \dots, a_r t_r\rangle$ with \[ a_i\in [0,1],\, t_i\in \SS^1\textnormal{ for } i=0,\dots,r\textnormal{ and }\sum_{i=0}^r a_i=1\,, \] and subject to the identification $0 t=0 t'$ for all $t,t'\in \SS^1$. The homeomorphism with the unit sphere $\SS^{2r+1}\subseteq \C^{r+1}$ is then given by \[ \langle a_0 t_0,\dots,a_r t_r\rangle \mapsto (a_0t_0,\cdots,a_rt_r)/\lVert(a_0t_0,\cdots,a_rt_r)\rVert\, . \] Note that this map is $\SS^1_{\mathbf{n}^\vee}$-equivariant for the diagonal (weighted) action on $(\SS^1)^{\ast(r+1)}$. We identify points of $A$ with their barycentric coordinates $\mathbf{a}=(a_0,\dots,a_r)\in\Delta^r$.
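Here and in what follows, the weighted action of $\lambda\in \SS^1_{\mathbf{n}^\vee}$ on the join is understood coordinate-wise; with our conventions it corresponds to the usual weighted action on $\SS^{2r+1}\subseteq \C^{r+1}$ under the homeomorphism just described, and is given by
\[
\lambda\cdot \langle a_0 t_0,\dots,a_r t_r\rangle = \langle a_0 (\lambda^{n_0^\vee}t_0),\dots,a_r (\lambda^{n_r^\vee}t_r)\rangle\,.
\]
We record this formula explicitly, as it is used repeatedly in the verification that the map $\phi$ defined below is well defined and injective.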
Given $t\in T$, we let $(t_1,\dots,t_r)\in (\SS^1)^r$ denote the coordinates of $t$ with respect to the decomposition \[ \SS^1_{\alpha_1^\vee} \times \cdots \times \SS^1_{\alpha_r^\vee} \xrightarrow{\cong} \SS^1_{\alpha_1^\vee} \cdots \SS^1_{\alpha_r^\vee} = T\,, \] where $\SS^1_{\alpha_i^\vee}\leqslant T$ is the one-parameter subgroup determined by the coroot $\alpha_i^\vee$.\footnote{Not to be confused with our notation $\SS^1_{\mathbf{w}}$ for the circle group acting on $\SS^{2r+1}$ with weights $\mathbf{w}$.} Define \begin{alignat*}{1} \phi\co (A\times T)/{\approx} & \to \SS^{\ast(r+1)}/\SS^1_{\mathbf{n}^\vee} \\ [\mathbf{a},t] & \mapsto \langle a_0 e,a_1 t_1,\dots,a_r t_r\rangle \SS^1_{\mathbf{n}^\vee}\, , \end{alignat*} where $e\in \SS^1$ is the identity element. We claim that $\phi$ is a continuous bijection of the compact space $(A\times T)/\approx$ with the Hausdorff space $\SS^{\ast(r+1)}/\SS^1_{\mathbf{n}^\vee}$, thus a homeomorphism. To see that $\phi$ is well defined, it suffices to check that if $a_i=0$ for some $i\in\{0,\dots,r\}$, then $\phi([\mathbf{a},ts])=\phi([\mathbf{a},t])$ for any $s\in \SS^1_{\alpha_i^\vee}$ and $t\in T$. This is clear when $1\leqslant i\leqslant r$, because in this case $\phi([\mathbf{a},t])$ is independent of $t_i$ by the identifications made in the join construction. Now suppose that $a_0=0$ and let $s \in \SS^1_{\alpha_0^\vee} \leqslant T$. Since $-\alpha_0^\vee=\sum_{i=1}^rn_i^\vee\alpha_i^\vee$, we can parametrize $\SS_{\alpha_0^\vee}^1$ by $\lambda \mapsto (\lambda^{n_1^\vee},\dots,\lambda^{n_r^\vee})\in \prod_{i=1}^r \SS^1_{\alpha_i^\vee}$, thus $s=(\lambda^{n_1^\vee},\dots,\lambda^{n_r^\vee})$ for some $\lambda\in \SS^1$. Then \begin{equation*} \begin{split} \phi([\mathbf{a},ts])& =\langle 0 e,a_1 t_1\lambda^{n_1^\vee}, \dots,a_r t_r\lambda^{n_r^\vee}\rangle \SS^1_{\mathbf{n}^\vee} \\ &=\langle 0 \lambda^{-n_0^\vee},a_1 t_1,\dots,a_r t_r\rangle \SS^1_{\mathbf{n}^\vee}\\&=\phi([\mathbf{a},t])\, . \end{split} \end{equation*} Here we used the weighted action of $\SS^1_{\mathbf{n}^\vee}$ in the second equality. We conclude that $\phi$ is well defined and continuous. It is clear that $\phi$ is surjective, that is, that every element of $(\SS^1)^{\ast(r+1)}$ is equivalent to one of the form $\langle a_0 e,a_1 t_1,\dots,a_r t_r\rangle$ modulo $\SS^1_{\mathbf{n}^\vee}$. To see that $\phi$ is injective, suppose that $\phi([\mathbf{a},t])=\phi([\mathbf{a}',t'])$, that is, suppose that \[ \langle a_0 e,a_1 t_1,\dots,a_r t_r\rangle \SS^1_{\mathbf{n}^\vee} =\langle a_0' e,a_1' t_1',\dots,a_r' t_r'\rangle \SS^1_{\mathbf{n}^\vee}\, . \] Then there exists $\lambda \in \SS^1$ such that \begin{equation} \label{eq:joinequal} \langle a_0' e,a_1' t_1',\dots,a_r' t_r'\rangle =\langle a_0 \lambda ,a_1 t_1 \lambda^{n_1^\vee},\dots,a_r t_r\lambda^{n_r^\vee}\rangle\, . \end{equation} First, this implies that $\mathbf{a}=\mathbf{a}'$ by the properties of the join construction. In addition, we claim that $t^{-1}t'\in T(\tilde{\Delta}(\mathbf{a}))$. From this it will follow that $(\mathbf{a},t)\approx (\mathbf{a}',t')$, hence that $\phi$ is injective. As a subset of $\{0,\dots,r\}$ the set $\tilde{\Delta}(\mathbf{a})$ consists of those $i$ such that $a_i=0$. Consider first the case $0\not\in \tilde{\Delta}(\mathbf{a})$, that is, $a_0\neq 0$. From (\ref{eq:joinequal}) we deduce that $\lambda=e$, and $t_i=t_i'$ for all $i\not\in \tilde{\Delta}(\mathbf{a})$. Therefore, $t^{-1}t'\in T(\tilde{\Delta}(\mathbf{a}))$.
In the case $a_0=0$, we deduce that $t_i^{-1}t_i'=\lambda^{n_i^\vee}$ for all $i\not\in \tilde{\Delta}(\mathbf{a})$. Therefore, letting $s=(\lambda^{n_1^\vee},\dots,\lambda^{n_r^\vee})\in \SS^1_{\alpha_0^\vee}$, we have that $t^{-1}t's^{-1}\in T(\tilde{\Delta}(\mathbf{a})\backslash\{0\})$, hence $t^{-1}t'\in T(\tilde{\Delta}(\mathbf{a}))$. This completes the proof that $\phi$ is a homeomorphism. The top horizontal map in the diagram is obtained by restriction of $\phi$. The subspace $A(p)\otimes_{\I(p)} \bar{F}$ of $A\otimes_{\I} \bar{F}$ consists of those $[\mathbf{a},t]$ with $\mathbf{a}\in A(p)$. By definition, $\mathbf{a}\in A(p)$ is equivalent to $a_i=0$ for all $i$ such that $p\nmid n_i^\vee$. Thus $\phi$ maps $A(p)\otimes_{\I(p)} \bar{F}$ homeomorphically onto the subspace \[ \left\{ \langle a_0t_0,\dots,a_rt_r\rangle \SS_{\mathbf{n}^\vee}^1\in \SS^{\ast(r+1)}/\SS^1_{\mathbf{n}^\vee} \mid a_i=0 \textnormal{ if }p\nmid n_i^\vee\right\}\subseteq \SS^{\ast(r+1)}/\SS^1_{\mathbf{n}^\vee}\,. \] This space is homeomorphic to the weighted projective space $\mathbb{CP}(\mathbf{n}^\vee(p))$. By inspection, one identifies the inclusion into $\SS^{\ast(r+1)}/\SS^1_{\mathbf{n}^\vee}$ with the map $\iota_p\co \mathbb{CP}(\mathbf{n}^\vee(p))\to \mathbb{CP}(\mathbf{n}^\vee)$ described in Proposition \ref{prop:weightedprojective}. \end{proof} \subsection{The cases $E_7$ and $E_8$ and the proofs of Theorems \ref{thm:mainpairs} and \ref{thm:mainpairs2}} \label{sec:e7e8} Thus far we have determined $H_\ast(X_G(p);(\underline{\pi}_0)_{(p)})$ under the assumption that $G\not\in \{E_7,E_8\}$ or $p>2$. In this subsection we calculate additional homology groups for $G=E_7$ and $G=E_8$ which are needed to prove Theorem \ref{thm:mainpairs}. \begin{lemma} \label{lem:e7e8homology} Suppose that $G=E_7$ or $G=E_8$. Then \[ H_k^G(X_G(2);(\underline{\pi}_0)_{(2)}) \cong \begin{cases} \Z/4\,, & \textnormal{ if }k=0, \\ 0 \,, & \textnormal{ if }k=1. \end{cases} \] \end{lemma} \begin{proof} To start observe that Lemma \ref{lem:e7e8rg4} and Proposition \ref{prop:weightedprojective} imply that \begin{equation}\label{caseX(4)} H_k^G(X_G(4);(\underline{\pi}_0)_{(2)}) \cong \begin{cases} \Z/4\,, & \textnormal{ if }k=0, \\ 0 \,, & \textnormal{ if }k=1. \end{cases} \end{equation} In what follows we are going to prove that \begin{equation}\label{relativeBredon} H_{k}^{G}(X_G(2),X_G(4);(\underline{\pi}_0)_{(2)})=0 \ \ \text{ for } \ \ k=0, 1. \end{equation} The result is then obtained using (\ref{caseX(4)}) and (\ref{relativeBredon}) in the long exact sequence in Bredon homology associated to the pair $(X_G(2),X_G(4))$ \[ \cdots\to H^{G}_{k}(X_G(4);(\underline{\pi}_0)_{(2)})\to H_{k}^{G}(X_G(2);(\underline{\pi}_0)_{(2)}) \to H_{k}^{G}(X_G(2),X_G(4);(\underline{\pi}_0)_{(2)})\to \cdots \] A standard argument using excision and the long exact sequence associated to the pair $(X_G(2)/X_G(4),X_G(4)/X_G(4))$ shows that (\ref{relativeBredon}) follows from \[ H_{k}^{G}(X_G(2)/X_G(4);(\underline{\pi}_0)_{(2)})=0 \ \ \text{ for } \ \ k=0, 1. \] We show this next. Let $\mathcal{Q}$ denote the coefficient system whose value is $\Z/2$ everywhere, except for $G/G$ where we set $\mathcal{Q}(G/G)=0$, and for which each map $\Z/2\to \Z/2$ is an isomorphism. The coefficient systems $(\underline{\pi}_0)_{(2)}$ and $\mathcal{Q}$ agree when restricted to the quotient complex $X_G(2)/X_G(4)$, hence \[ H_{k}^{G}(X_G(2)/X_G(4);(\underline{\pi}_0)_{(2)})\cong H_{k}^{G}(X_G(2)/X_G(4);\mathcal{Q}). 
\] Therefore, the proof of the lemma reduces to showing that $H_{k}^{G}(X_G(2)/X_G(4);\mathcal{Q})$ vanishes when $k=0, 1$. To see this for $k=0$ we observe that the coefficient system $\mathcal{Q}$ satisfies the conditions of Lemma \ref{computation spectral sequence}. Moreover, $X_G(2)/X_G(4)$ is path--connected, as the same is true for $X_G(2)$, and the basepoint $X_G(4)/X_G(4)$ is fixed by the $G$-action. Lemma \ref{computation spectral sequence} then implies that \begin{equation}\label{stepk=0} H_{0}^{G}(X_G(2)/X_G(4);\mathcal{Q})=0. \end{equation} Finally, we handle the case $k=1$. For this notice that Proposition \ref{prop:weightedprojective} implies that, up to homotopy equivalence, both $R_G(2)$ and $R_G(4)$ can be identified with weighted projective spaces. Therefore, for the constant coefficient $\underline{\Z/2}$ we have \begin{equation}\label{trivialstep1} H_{k}^{G}(X_G(2)/X_G(4);\underline{\Z/2}) \cong H_{k}(R_G(2)/R_G(4);\Z/2)\cong \begin{cases} \Z/2\,, & \textnormal{ if }k=0, \\ 0 \,, & \textnormal{ if }k=1. \end{cases} \end{equation} Let $\K$ denote the coefficient system which is $0$ everywhere, except for $G/G$ where we set $\K(G/G)= \Z/2$. Then we have a short exact sequence of coefficient systems \[ 0\to \K\to \underline{\Z/2}\to \mathcal{Q}\to 0. \] Using (\ref{stepk=0}), (\ref{trivialstep1}) and the long exact sequence in Bredon homology associated to the above short exact sequence we obtain the short exact sequence \begin{equation}\label{longesE} 0\to H_{1}^{G}(X_G(2)/X_G(4);\mathcal{Q}) \to H_{0}^{G}(X_G(2)/X_G(4);\K) \to H_{0}^{G}(X_G(2)/X_G(4);\underline{\Z/2}) \to 0. \end{equation} By definition of $\K$, $H_{*}^{G}(X_G(2)/X_G(4);\K)$ is the $\Z/2$-homology of the space of $G$-fixed points of $X_G(2)/X_G(4)$. The only fixed point is the basepoint, so $H_{0}^{G}(X_G(2)/X_G(4);\K)\cong \Z/2$. From the exact sequence (\ref{longesE}) we know that \[ H_{0}^{G}(X_G(2)/X_G(4);\K)\to H_{0}^{G}(X_G(2)/X_G(4);\underline{\Z/2})\cong \Z/2 \] is surjective, hence an isomorphism. Exactness of (\ref{longesE}) thus implies that \[ H_{1}^{G}(X_G(2)/X_G(4);\mathcal{Q})=0\,, \] as we wanted to show. \end{proof} With the calculations of the previous lemma we have gathered all the necessary pieces to prove \begin{theorem} \label{thm:mainpairs} Let $G$ be a simply--connected and simple compact Lie group. Then \[ \pi_2(\Hom(\Z^2,G)) \cong \Z\,, \] and on this group the quotient map $\Hom(\Z^2,G)\to \Rep(\Z^2,G)$ induces multiplication by the Dynkin index $\textnormal{lcm}\{n_0^\vee,\dots,n_r^\vee\}$, where $n_0^\vee,\dots,n_r^\vee \geqslant 1$ are the coroot integers of $G$. \end{theorem} \begin{proof} To calculate $\pi_2(\Hom(\Z^2,G))$ our starting point is Lemma \ref{lem:pretheorem}, where we showed that $\pi_2(\Hom(\Z^2,G))\cong \Z\oplus E_{1,1}^2$. By Corollary \ref{cor:decomposition}, the group $E_{k,1}^2$ splits for each $k\geqslant 0$ into a direct sum \[ E_{k,1}^2=H_k^G(\Hom(\Z^2,G);\underline{\pi}_0)\cong \bigoplus_{p\in \mathcal{P}} H_k^G(X_G(p);(\underline{\pi}_0)_{(p)})\,, \] where $\mathcal{P}$ is the set of primes dividing a coroot integer of $G$. For $k=1$ each direct summand is trivial, either by Corollary \ref{cor:homologyrp} or by Lemma \ref{lem:e7e8homology}, hence $E_{1,1}^2=0$. This proves that $\pi_2(\Hom(\Z^2,G))\cong \Z$.
Lemma \ref{lem:pretheorem} also showed that the degree\footnote{By \textit{degree} of a map $\Z\to \Z$ we will always mean its absolute value.} of $\pi_\ast\co \pi_2(\Hom(\Z^2,G))\to \pi_2(\Rep(\Z^2,G))$ equals the order of the finite group $E_{0,1}^2$. When $G$ is neither $E_7$ nor $E_8$, then $E_{0,1}^2\cong \bigoplus_{p \in \mathcal{P}}\Z/p$ by Corollary \ref{cor:homologyrp}. On the other hand, when $G=E_7$ or $G=E_8$, then Lemma \ref{lem:e7e8homology} implies that $E_{0,1}^2\cong\Z/4\oplus \bigoplus_{p\in\mathcal{P}\backslash\{2\}}\Z/p$. By inspection (see Table \ref{table:coroot}), the order of $E_{0,1}^2$ equals $\textnormal{lcm}\{n_0^\vee,\dots,n_r^\vee\}$, and the proof of the theorem is complete. \end{proof} Combining Proposition \ref{prop:weightedprojective} and Theorem \ref{thm:mainpairs} we derive the following more general result: \begin{theorem} \label{thm:mainpairs2} Suppose that $G$ is a semisimple compact connected Lie group. Then there is an extension \[ 0\to \Z^s \to H_2(\Hom(\Z^2,G)_{\BONE};\Z) \to H_2(\pi_1(G)^2;\Z)\to 0\,, \] where $s\geqslant 1$ is the number of simple factors in the Lie algebra of $G$. \end{theorem} \begin{proof} Consider the Serre spectral sequence for the universal covering sequence \[ \Hom(\Z^2,\tilde{G})\to \Hom(\Z^2,G)_{\BONE}\to B\pi_1(G)^2\,. \] Inspection of the proof of \cite[Lemma 2.2]{Goldman} shows that the action by deck translation of $\pi_1(G)^2$ on $\Hom(\Z^2,\tilde{G})$ is simply $(x,y)\mapsto (ax,by)$ where $a,b\in \pi_1(G)$ are viewed as elements in the center of $\tilde{G}$. We claim that this action is trivial on $H_2(\Hom(\Z^2,\tilde{G});\Z)$. Let us first treat the case when $G$ is simple. Then we can view $H_2(\Hom(\Z^2,\tilde{G});\Z)\cong \Z$ as a $\pi_1(G)^2$-submodule of $H_2(\Rep(\Z^2,\tilde{G});\Z)$ through Theorem \ref{thm:mainpairs}. The action of $Z(\tilde{G})$ on $\Rep(\Z^2,\tilde{G})$ given by translation of either coordinate is trivial on homology, because it extends to an action of a maximal torus on $\SS^{2r+1}/\SS^1$ under the identification in (\ref{eq:explicitequivalence}). Since $H_2(\Hom(\Z^2,\tilde{G});\Z)$ is torsion-free, the differential $d_3\co H_3(\pi_1(G)^2;\Z)\to H_2(\Hom(\Z^2,\tilde{G});\Z)$ is trivial, and the extension follows. The same argument applies when $G$ is semisimple, with the only difference that now $\Hom(\Z^2,\tilde{G})\cong \Hom(\Z^2,G_1)\times \cdots \times \Hom(\Z^2,G_s)$, hence $H_2(\Hom(\Z^2,\tilde{G});\Z)\cong \Z^s$, where $G_1,\dots,G_s$ are the simple factors in $\tilde{G}$. \end{proof} Let $G$ be simple, and suppose that the finite group $\pi_1(G)$ is not cyclic; then neither is $H_2(\pi_1(G)^2;\Z)$, and we deduce from Theorem \ref{thm:mainpairs2} that $H_2(\Hom(\Z^2,G)_{\BONE};\Z)$ necessarily has torsion. This happens for $G=PSO(4n)$ for any $n\geqslant 1$, since $\pi_1(PSO(4n))\cong \Z/2\oplus \Z/2$. As another example consider $SO(3)$. One shows that $H_2(\Hom(\Z^2,SO(3))_{\BONE};\Z)\cong \Z$ (for example, using the description $\Hom(\Z^2,SO(3))_{\BONE}\cong (\SS^2\times_{\Z/2}(\SS^1)^2)/(\SS^2\times_{\Z/2}\ast)$ provided in \cite[Theorem 3.1]{STG}). In this case the extension is $\Z\stackrel{2}{\longrightarrow}\Z\to \Z/2$.\medskip
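To put Theorem \ref{thm:mainpairs} into perspective, we record the degree of $\pi_\ast\co \pi_2(\Hom(\Z^2,G))\to \pi_2(\Rep(\Z^2,G))$ in a few cases; the values of the coroot integers used here can be read off from Table \ref{table:coroot}. For $G=SU(n)$ and $G=Sp(k)$ all coroot integers equal $1$, so $\pi_\ast$ is an isomorphism; these are the only simple types for which this happens. For $G=Spin(2\ell)$ with $\ell\geqslant 4$ the coroot integers are $(1,1,2,\dots,2,1,1)$ (cf. the proof of Theorem \ref{thm:stabilityrepspin} below), so $\pi_\ast$ is multiplication by $2$. At the other extreme, for $G=E_8$ the coroot integers are $1,2,2,3,3,4,4,5,6$ in some order, and $\pi_\ast$ is multiplication by $\textnormal{lcm}\{1,\dots,6\}=60$.\medskip

For $n\geqslant 2$ let $p_{i,j}\co \Hom(\Z^n,G)_{\BONE}\to \Hom(\Z^2,G)$ be the projection onto the $i$-th and $j$-th component. Denote by $p\co \Hom(\Z^n,G)_{\BONE}\to \prod_{1\leqslant i<j\leqslant n} \Hom(\Z^2,G)$ the map whose $(i,j)$-th component is $p_{i,j}$.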
For later reference we record \begin{corollary} \label{cor:h2modtorsion} Let $n\geqslant 2$. Then the map $p\co \Hom(\Z^n,G)_{\BONE}\to \prod^{\binom{n}{2}}\Hom(\Z^2,G)$ induces an isomorphism \[ p_\ast \co \pi_2(\Hom(\Z^n,G)_{\BONE})/\textnormal{torsion} \xrightarrow{\cong} \bigoplus^{\binom{n}{2}} \pi_2(\Hom(\Z^2,G))\,. \] \end{corollary} \begin{proof} For each $1\leqslant i< j\leqslant n$ we have a natural inclusion of the $i$-th and $j$-th component \[ I_{i,j}:\Hom(\Z^2,G)\to \Hom(\Z^n,G)_{\BONE} \] which inserts the identity in all the other components. Observe that, since $\pi_2(G)=0$, for all $i,j,k,l$ we have that $(p_{i,j})_\ast\circ (I_{k,l})_\ast=0$, unless $i=k$ and $j=l$ in which case $(p_{i,j})_\ast\circ (I_{i,j})_\ast=id$. This implies that $p_{*}$ is surjective. By Corollary \ref{cor:rank} and Theorem \ref{thm:mainpairs}, both the domain (modulo torsion) and codomain of $p_\ast$ are free abelian groups of rank $\binom{n}{2}$. Since $p_{*}$ is a surjective map of free abelian groups of the same rank it must be an isomorphism. \end{proof} \section{Stability for commuting pairs in Spin groups} \label{sec:stability} In this section we study the stability behavior in the case of Spin groups. For $m\geqslant 3$ the group $Spin(m)$ is the universal covering group of $SO(m)$. The standard inclusion $SO(m)\hookrightarrow SO(m+1)$, given by block sum with a $1\times 1$ identity matrix, induces a map $Spin(m)\to Spin(m+1)$. The purpose of this section is to prove Theorem \ref{thm:spinstability} which asserts that for $m\geqslant 5$ the map \begin{equation} \label{eq:stabilizationmapspin} \Hom(\Z^2,Spin(m))\to \Hom(\Z^2,Spin(m+1)) \end{equation} induces an isomorphism of second homotopy groups. We begin by proving a general homology stability result for representation spaces of spinor groups which may be interesting on its own. Only a special case of it will be needed to prove Theorem \ref{thm:spinstability}. Let \[ i_{\ell-1}^{\textnormal{ev}}\co \Hom(\Z^2,Spin(2\ell-2))\to \Hom(\Z^2,Spin(2\ell)) \] denote the map obtained by iterating the stabilization map (\ref{eq:stabilizationmapspin}). Similarly, define \[ i_{\ell-1}^{\textnormal{odd}}\co \Hom(\Z^2,Spin(2\ell-1))\to \Hom(\Z^2,Spin(2\ell+1))\, . \] Let $\bar{i}_{\ell-1}^{\textnormal{ev}}$ and $\bar{i}_{\ell-1}^{\textnormal{odd}}$ denote the respective maps induced on representation spaces. \begin{theorem} \label{thm:stabilityrepspin} \leavevmode \begin{itemize} \item[(i)] When $\ell\geqslant 4$, then the map \[ (\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast\co H_k(\Rep(\Z^2,Spin(2\ell-2));\Z)\to H_k(\Rep(\Z^2,Spin(2\ell));\Z) \] is an isomorphism for all $k\leqslant 2\ell-7$ and multiplication by $2$ for $k=2\ell-6$. \smallskip \item[(ii)] When $\ell\geqslant 3$, then the map \[ (\bar{i}_{\ell-1}^{\textnormal{odd}})_\ast\co H_k(\Rep(\Z^2,Spin(2\ell-1));\Z)\to H_k(\Rep(\Z^2,Spin(2\ell+1));\Z) \] is an isomorphism for all $k\leqslant 2\ell-5$ and multiplication by $2$ for $k=2\ell-4$. \end{itemize} \end{theorem} \begin{proof} We begin by proving (i). For $\ell\geqslant 3$ the group $Spin(2\ell)$ is described by the root system $D_\ell$ (for $\ell=3$ there is an isomorphism $D_3\cong A_3$ yielding the exceptional isomorphism $Spin(6)\cong SU(4)$). Let $\{\alpha_1,\dots,\alpha_\ell\}$ be a set of simple roots. 
It may be chosen in such a way that, when $\ell\geqslant 4$, the subset $\{\alpha_2,\dots,\alpha_\ell\}$ determines a subroot system of $D_\ell$ of type $D_{\ell-1}$ and the image of $Spin(2\ell-2)$ in $Spin(2\ell)$ is the subgroup corresponding to this subroot system. Now let $\ell\geqslant 4$. Let $\mathbf{n}^{\vee}_{\ell-1}=(1,1,2,\dots,2,1,1)\in\Z^{\ell}$ be the tuple of coroot integers of $Spin(2\ell-2)$ (of which $\ell-4$ entries are equal to $2$). By Proposition \ref{prop:weightedprojective}, the map $\bar{i}_{\ell-1}^{\textnormal{ev}}$ is equivalent to a map $f_{\ell-1} \co \mathbb{CP}(\mathbf{n}^\vee_{\ell-1}) \to \mathbb{CP}(\mathbf{n}^\vee_{\ell})$, so that $\deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast)=\deg((f_{\ell-1})_\ast)$. Let $(\mathbf{n}^\vee_{\ell-1})'\in \Z^{\ell-2}$ be the tuple consisting of the first $\ell-2$ entries of $\mathbf{n}^\vee_{\ell-1}$. We are going to describe $f_{\ell-1}$ explicitly, and show that it fits into a commutative diagram \begin{equation} \label{dgr:wp12} \xymatrix{ \mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\ar[d] \ar[dr] & \\ \mathbb{CP}(\mathbf{n}^\vee_{\ell-1}) \ar[r]^-{f_{\ell-1}} & \mathbb{CP}(\mathbf{n}^\vee_{\ell}). } \end{equation} Here the map $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_{\ell-1})$ is the ``standard inclusion'', described in homogeneous coordinates by $[z_0,\dots,z_{\ell-3}]\mapsto [z_0,\dots,z_{\ell-3},0,0]$, and $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_{\ell})$ is the ``standard inclusion'' $[z_0,\dots,z_{\ell-3}]\mapsto [z_0,\dots,z_{\ell-3},0,0,0]$. Let us first show how the conclusion of the theorem is obtained from commutativity of the diagram. If $\mathbf{w}\in \Z^{r+1}_{>0}$, $r\geqslant 1$, and $p_{\mathbf{w}}\co \mathbb{CP}^r\to \mathbb{CP}(\mathbf{w})$ is the projection given by $[z_0,\dots,z_r]\mapsto [z_0^{w_0},\dots,z_r^{w_r}]$, then the degree of $(p_{\mathbf{w}})_\ast\co H_{2k}(\mathbb{CP}^r;\Z)\to H_{2k}(\mathbb{CP}(\mathbf{w});\Z)$ is given by \begin{equation} \label{eq:degreeofmapofweightedprojectivespaces} \textnormal{lcm}\left\{\frac{w_{i_0}\cdots w_{i_k}}{\textnormal{gcd}\{w_{i_0},\dots,w_{i_k}\}} \mid 0\leqslant i_0 <\cdots < i_k\leqslant r \right\}\,, \end{equation} see \cite[p. 244]{K73}. This can be used to show that the inclusion $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_{\ell-1})$ induces homology isomorphisms in degrees $k\leqslant 2\ell-6$, whereas $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_\ell)$ induces isomorphisms in degrees $k\leqslant 2\ell-7$ and multiplication by $2$ in degree $k=2\ell-6$. Consequently, $f_{\ell-1}\co \mathbb{CP}(\mathbf{n}^\vee_{\ell-1})\to \mathbb{CP}(\mathbf{n}^\vee_\ell)$ induces isomorphisms in degrees $k\leqslant 2\ell-7$ and multiplication by $2$ in degree $k=2\ell-6$.
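Let us verify the assertions about the two standard inclusions; the computation is elementary and is included for the reader's convenience. Suppose that $\mathbf{w}$ consists of $1$s and $2$s, with exactly $t$ entries equal to $2$. A subset $\{i_0<\dots<i_k\}$ containing exactly $j$ entries equal to $2$ contributes $2^j$ to (\ref{eq:degreeofmapofweightedprojectivespaces}) if $j\leqslant k$, and $2^k$ if $j=k+1$; hence the lcm equals $2^{\min\{t,k\}}$. The tuples $(\mathbf{n}^\vee_{\ell-1})'$, $\mathbf{n}^\vee_{\ell-1}$ and $\mathbf{n}^\vee_{\ell}$ contain $\ell-4$, $\ell-4$ and $\ell-3$ entries equal to $2$, respectively. Moreover, each standard inclusion above is covered by a standard inclusion of ordinary complex projective spaces, which is an isomorphism on $H_{2k}$ in the relevant range; comparing with the projections $p_{\mathbf{w}}$ then shows that the degree of a standard inclusion $\mathbb{CP}(\mathbf{w}')\to \mathbb{CP}(\mathbf{w})$ on $H_{2k}$ is the quotient of the two lcm values. For $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_{\ell-1})$ this quotient is $2^{\min\{\ell-4,k\}}/2^{\min\{\ell-4,k\}}=1$ for all $k\leqslant \ell-3$, while for $\mathbb{CP}((\mathbf{n}^\vee_{\ell-1})')\to \mathbb{CP}(\mathbf{n}^\vee_{\ell})$ it equals $2^{\min\{\ell-3,k\}}/2^{\min\{\ell-4,k\}}$, which is $1$ for $k\leqslant \ell-4$ and $2$ for $k=\ell-3$; in homological degrees $2k$ this is precisely the claim made above.

Now we construct $f_{\ell-1}$. Fix a maximal torus $T_\ell\leqslant Spin(2\ell)$, and let $W_\ell$ be the corresponding Weyl group. The maximal torus $T_{\ell-1}\leqslant Spin(2\ell-2)$ is the subtorus of $T_\ell$ generated by the one-parameter subgroups corresponding to the coroots $\alpha_2^\vee,\dots,\alpha_\ell^\vee$. Let $\theta_{\ell-1}\co T_{\ell-1}\hookrightarrow T_{\ell}$ denote the inclusion. The map $\bar{i}_{\ell-1}^{\textnormal{ev}}\co \Rep(\Z^2,Spin(2\ell-2))\to \Rep(\Z^2,Spin(2\ell))$ may be identified with the map $(T_{\ell-1}\times T_{\ell-1})/W_{\ell-1}\to (T_{\ell}\times T_{\ell})/W_{\ell}$ induced by $\theta_{\ell-1}$.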
Let $A_\ell$ denote the fundamental alcove determined by $\{\alpha_1,\dots,\alpha_\ell\}$, and let $A_{\ell-1}$ be the alcove in $\t_{\ell-1}$ determined by $\{\alpha_2,\dots,\alpha_\ell\}$. Our aim is to define a map $f_{\ell-1}$ making the following diagram commute: \begin{equation} \label{dgr:fl} \xymatrix{ (T_{\ell-1}\times T_{\ell-1})/W_{\ell-1} \ar[d]^-{\simeq} \ar[r]^-{\bar{i}_{\ell-1}^{\textnormal{ev}}} & (T_{\ell}\times T_{\ell})/W_{\ell} \ar[d]^-{\simeq} \\ (A_{\ell-1}\times T_{\ell-1})/{\approx} \ar[r]^-{f_{\ell-1}} & (A_{\ell}\times T_{\ell})/{\approx} } \end{equation} The spaces in the bottom row are homeomorphic to weighted projective spaces, see Lemma \ref{lem:coendwp}. The vertical maps are the equivalences established in the course of proving Proposition \ref{prop:weightedprojective}. For example, the right hand map takes a $W_\ell$-orbit $((t_1,t_2))$ to the equivalence class under $\approx$ of any representative $(x,t)$ of $((t_1,t_2))$ with $x\in A_\ell$ and $t\in T_\ell$. The inclusion $\theta_{\ell-1}\co T_{\ell-1}\hookrightarrow T_\ell$ descends to a map $T_{\ell-1}/W_{\ell-1}\to T_\ell/W_\ell$, hence defines a map of alcoves $\bar{\theta}_{\ell-1}\co A_{\ell-1}\to A_\ell$. To define $f_{\ell-1}$ we begin by describing $\bar{\theta}_{\ell-1}$ in barycentric coordinates. For this we will work with explicit formulae for the root system $D_\ell$ which can be found, for example, in \cite[Plate IV]{Bourbaki46}. Note that $D_\ell$ is self-dual, which means that the expressions for roots and weights shown in \cite{Bourbaki46} also represent the coroots and coweights, respectively. We identify the Lie algebra $\mathfrak{t}_\ell$ of $T_\ell$ with $\R^\ell$. With respect to the standard basis $\{e_1,\dots,e_\ell\}$ of $\R^\ell$, the simple coroots can be chosen to be \begin{equation} \label{eq:corootsstandardbasis} \alpha_i^\vee=e_i-e_{i+1}\;\textnormal{ for }\;i=1,\dots,\ell-1\,,\;\textnormal{ and }\; \alpha^\vee_\ell=e_{\ell-1}+e_{\ell}\,. \end{equation} The Weyl group $W_\ell$ acts on $\t_\ell$ by signed permutations $e_i\mapsto \pm e_{\sigma(i)}$ ($\sigma\in \Sigma_\ell$) such that the total number of negative signs is even. Note that $\mathfrak{t}_{\ell-1}=\textnormal{span}(\alpha_2^\vee,\dots,\alpha_\ell^\vee) =\textnormal{span}(e_2,\dots,e_\ell) \subseteq \t_\ell$. Recall from \cite[VI \S 2.2 Corollary of Proposition 5]{Bourbaki46} that, as a subspace of $\mathfrak{t}_\ell$, $A_{\ell}$ is the convex hull \[ A_{\ell}=\textnormal{Conv}(0,\omega^\vee_1/n_1,\dots,\omega^\vee_\ell/n_\ell)\,, \] where $\omega_i^\vee$ is the $i$-th fundamental coweight (defined by $\alpha_j(\omega_i^\vee)=\delta_{ij}$) and $n_i$ is the root integer associated to $\alpha_i$ (which in the case of $D_\ell$ equals the $i$-th coroot integer $n_i^\vee$). Let us write $v_i:=\omega_i^\vee/n_i$, so that $\{0,v_1,\dots,v_\ell\}$ is the set of vertices of $A_\ell$. Let $\{0,u_1,\dots,u_{\ell-1}\}$ denote the set of vertices of $A_{\ell-1}\subseteq \t_{\ell-1}\subseteq \t_\ell=\R^\ell$. Using \cite[Plate IV]{Bourbaki46} we can write the vertices in terms of the standard basis of $\R^\ell$. The result is displayed in Table \ref{table:vertices}. 
\begin{table}[h] \centering \def\arraystretch{1.5} \begin{tabular}{ll} $u_1=e_2$ & $v_1=e_1$ \\ $u_2=\frac{1}{2}\left(e_2+e_3\right)$ & $v_2=\frac{1}{2}\left(e_1+e_2\right)$ \\ \vdots & \vdots \\ $u_{\ell-3}= \frac{1}{2}\left(e_2+\cdots + e_{\ell-2}\right)$ & $v_{\ell-3} =\frac{1}{2}\left(e_1+\cdots+e_{\ell-3}\right)$ \\ $u_{\ell-2}= \frac{1}{2}\left(e_2+\cdots + e_{\ell-1}-e_\ell\right)$ & $v_{\ell-2}=\frac{1}{2}\left(e_1+\cdots+e_{\ell-2}\right)$ \\ $u_{\ell-1}= \frac{1}{2}\left(e_2+\cdots + e_{\ell-1}+e_\ell\right)$ & $v_{\ell-1}=\frac{1}{2}\left(e_1+\cdots+e_{\ell-1}-e_{\ell}\right)$ \\ & $v_{\ell}=\frac{1}{2}\left(e_1+\cdots+e_{\ell-1}+e_\ell\right)$ \end{tabular} \vspace{10pt} \caption{Vertices of $A_{\ell-1}=\textnormal{Conv}(0,u_1,\dots,u_{\ell-1})$ and $A_{\ell}=\textnormal{Conv}(0,v_1,\dots,v_\ell)$.} \label{table:vertices} \end{table} Let $x\in A_{\ell-1}$. Then $\bar{\theta}_{\ell-1}(x)=wx$, where $w\in W_\ell$ is any element such that $w x\in A_\ell$. Let $\sigma^+\in W_\ell$ be the cyclic permutation mapping $e_i\mapsto e_{i-1}$ for $i=2,\dots,\ell$, and $e_1\mapsto e_\ell$. Let $\sigma^-\in W_\ell$ be the element whose underlying permutation is $\sigma^+$, but which sends $e_1\mapsto -e_\ell$ and $e_\ell\mapsto -e_{\ell-1}$. One can check that $\sigma^+ u_i=\sigma^-u_i=v_i$ for all $1\leqslant i\leqslant \ell-3$, that $\sigma^+(u_{\ell-2}+u_{\ell-1}) =\sigma^-(u_{\ell-2}+u_{\ell-1})=2v_{\ell-2}$, and that $\sigma^+ u_{\ell-1}=\sigma^- u_{\ell-2} =(v_{\ell-1}+v_{\ell})/2$. Now suppose that $x=a_1u_1+\dots+a_{\ell-1}u_{\ell-1} \in A_{\ell-1}$ with $a_i\geqslant 0$ and $a_1+\cdots+a_{\ell-1}\leqslant 1$. If $a_{\ell-2}\leqslant a_{\ell-1}$, then, writing \[ x=a_1u_1+\dots+a_{\ell-3}u_{\ell-3}+a_{\ell-2}(u_{\ell-2}+u_{\ell-1})+(a_{\ell-1}-a_{\ell-2})u_{\ell-1}\,, \] we see that \[ \sigma^+ x=a_1v_1+\cdots+a_{\ell-3}v_{\ell-3}+2a_{\ell-2}v_{\ell-2}+\frac{a_{\ell-1}-a_{\ell-2}}{2}(v_{\ell-1}+v_{\ell})\, , \] which is a convex combination of $0,v_1,\dots,v_\ell$, hence a point in $A_\ell$. On the other hand, if $a_{\ell-2}>a_{\ell-1}$, then we find in a similar way that \[ \sigma^- x=a_1v_1+\cdots+a_{\ell-3}v_{\ell-3}+2a_{\ell-1}v_{\ell-2}+\frac{a_{\ell-2}-a_{\ell-1}}{2}(v_{\ell-1}+v_{\ell})\,, \] which is again in $A_{\ell}$. From this we derive a description of the map $\bar{\theta}_{\ell-1}\co A_{\ell-1}\to A_\ell$ in barycentric coordinates: \[ (a_0,\dots,a_{\ell-1})\stackrel{\bar{\theta}_{\ell-1}}{\longmapsto} \left(a_0,\dots,a_{\ell-3},2\min\{a_{\ell-2},a_{\ell-1}\}, \frac{|a_{\ell-1}-a_{\ell-2}|}{2},\frac{|a_{\ell-1}-a_{\ell-2}|}{2}\right)\, . \] Now let $[x,t]\in (A_{\ell-1}\times T_{\ell-1})/{\approx}$, and let $(a_0,\dots,a_{\ell-1})$ be the barycentric coordinates of $x$. Since we want the diagram (\ref{dgr:fl}) to commute, we are forced to set \[ f_{\ell-1}([x,t]):=[\bar{\theta}_{\ell-1}(x),w\theta_{\ell-1}(t)]\,, \] where $w=\sigma^+$ if $a_{\ell-2}\leqslant a_{\ell-1}$, and $w=\sigma^-$ if $a_{\ell-2}> a_{\ell-1}$. Let $(t_1,\dots,t_{\ell-1})\in (\SS^1)^{\ell-1}$ be the coordinates of $t\in T_{\ell-1}$ with respect to $\alpha_2^\vee,\dots,\alpha_\ell^\vee$. Then it is easily verified, using the action of $W_\ell$ on the coroots $\alpha_1^\vee,\dots,\alpha_\ell^\vee$ (see (\ref{eq:corootsstandardbasis})), that \[ \sigma^+\theta_{\ell-1}(t)= (t_1,\dots,t_{\ell-3},t_{\ell-1}t_{\ell-2},t_{\ell-1},t_{\ell-1})\,, \quad \sigma^-\theta_{\ell-1}(t)=(t_1,\dots,t_{\ell-3},t_{\ell-1}t_{\ell-2},t_{\ell-2},t_{\ell-2})\,. 
\] Now it is not difficult to check that $f_{\ell-1}$ is well defined and continuous (for the definition of the equivalence relation $\approx$ see the proof of Lemma \ref{lem:coendwp}). Moreover, the diagram (\ref{dgr:fl}) commutes by construction. Using the homeomorphism $\phi$ defined in the proof of Lemma \ref{lem:coendwp} we could give a description of $f_{\ell-1} \co \mathbb{CP}(\mathbf{n}^\vee_{\ell-1})\to \mathbb{CP}(\mathbf{n}^\vee_\ell)$ in homogeneous coordinates. Relevant to our proof, however, is merely the observation that the restriction of $f_{\ell-1}$ to the first $\ell-2$ homogeneous coordinates of $\mathbb{CP}(\mathbf{n}^\vee_{\ell-1})$ (i.e., setting $a_{\ell-2}=a_{\ell-1}=0$) equals the standard inclusion $[z_0,\dots,z_{\ell-3}]\mapsto [z_0,\dots,z_{\ell-3},0,0,0]$; but this is evident from the description of $\bar{\theta}_{\ell-1}$ and $w\theta_{\ell-1}$ given above, as these coordinates depend only on $a_0,\dots,a_{\ell-3}$ and $t_1,\dots,t_{\ell-3}$. This proves part (i) of the theorem. Part (ii) for $\ell\geqslant 4$ is proved in the same way. The group $Spin(2\ell+1)$ is described by the root system $B_{\ell}$. A basis $\{\alpha_1,\dots,\alpha_\ell\}$ of simple roots can be chosen in such a way that the subgroup $Spin(2\ell-1)$ corresponds to the subroot system $B_{\ell-1}$ obtained by omitting the simple root $\alpha_1$. A calculation as before shows that the stabilization map $\bar{i}_{\ell-1}^{\textnormal{odd}}$ is equivalent to the one induced by the map $A_{\ell-1}\times T_{\ell-1}\to A_{\ell}\times T_\ell$ sending \[ ((a_0,\dots,a_{\ell-1}),(t_1,\dots,t_{\ell-1}))\mapsto ((a_0,\dots,a_{\ell-2},a_{\ell-1},0),(t_1,\dots,t_{\ell-2},t_{\ell-1}^2,t_{\ell-1}))\, . \] In particular, the restriction to the first $\ell-1$ homogeneous coordinates is equal to the standard inclusion. The conclusion follows again from \cite{K73} and a commutative diagram similar to (\ref{dgr:wp12}). For the case $\ell=3$ see Remark \ref{rem:spin5spin7}. \end{proof} \begin{remark} \label{rem:oddspin} Theorem \ref{thm:stabilityrepspin} should be compared with the homology stability result of Ramras and Stafa, \cite[Theorem 1.1]{RS2}. From their result one deduces that the rational homology groups of $\Rep(\Z^2,Spin(2\ell-1))$ and $\Rep(\Z^2,Spin(2\ell+1))$, and of $\Rep(\Z^2,Spin(2\ell-2))$ and $\Rep(\Z^2,Spin(2\ell))$, respectively, are abstractly isomorphic up to homological degree $k\leqslant \ell-1$. \end{remark} Next we prove stability of $\pi_2$ for spaces of commuting pairs in spinor groups. As in the previous theorem, it is natural to divide the analysis into the case of even and odd Spin groups; it will be enough, however, to treat the even case from which the general case may be deduced. \begin{theorem} \label{thm:spinstability} For all $m\geqslant 5$ the map $Spin(m)\to Spin(m+1)$ induces an isomorphism \[ \pi_2(\Hom(\Z^2,Spin(m)))\xrightarrow{\cong} \pi_2(\Hom(\Z^2,Spin(m+1)))\, . \] \end{theorem} \begin{proof} We first focus on the range $m\geqslant 6$. The case $m=5$ will be dealt with separately at the end. Let $\ell\geqslant 4$. By Theorem \ref{thm:mainpairs}, all three groups in the sequence \[ \pi_2(\Hom(\Z^2,Spin(2\ell-2))) \to \pi_2(\Hom(\Z^2,Spin(2\ell-1)))\to \pi_2(\Hom(\Z^2,Spin(2\ell))) \] are isomorphic to $\Z$. Thus, if the composite map is an isomorphism, then so are the two component maps, since their degrees multiply to $1$.
To prove the theorem in the range $m\geqslant 6$ it therefore suffices to show that for all $\ell\geqslant 4$ the map $i_{\ell-1}^{\textnormal{ev}}\co \Hom(\Z^2,Spin(2\ell-2))\to \Hom(\Z^2,Spin(2\ell))$ induces an isomorphism of second homotopy groups. Consider the commutative diagram \[ \xymatrix{ \pi_2(\Hom(\Z^2,Spin(2\ell-2)))\ar[r]^-{(i_{\ell-1}^{\textnormal{ev}})_\ast} \ar[d]^-{(\pi_{\ell-1})_\ast} & \pi_2(\Hom(\Z^2,Spin(2\ell))) \ar[d]^-{(\pi_{\ell})_\ast} \\ \pi_2(\Rep(\Z^2,Spin(2\ell-2))) \ar[r]^-{(\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast} & \pi_2(\Rep(\Z^2,Spin(2\ell))) } \] in which the maps are the obvious ones. As all groups in the diagram are isomorphic to $\Z$, each map is determined by its degree. The degrees of the vertical maps are described by Theorem \ref{thm:mainpairs}, from which we deduce that \begin{equation*} \label{eq:deg} \deg((i_{\ell-1}^{\textnormal{ev}})_\ast)=\frac{\deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast) \deg((\pi_{\ell-1})_\ast)}{\deg((\pi_{\ell})_\ast)}=\begin{cases} \deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast)/2 & \textnormal{if } \ell=4, \\ \deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast) & \textnormal{if } \ell>4. \end{cases} \end{equation*} Theorem \ref{thm:stabilityrepspin} implies that $\deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast)=1$ if $\ell >4$ and $\deg((\bar{i}_{\ell-1}^{\textnormal{ev}})_\ast)=2$ if $\ell=4$, hence $\deg((i_{\ell-1}^{\textnormal{ev}})_\ast)=1$ for all $\ell\geqslant 4$. This proves the theorem in the range $m\geqslant 6$. It remains to prove that $Spin(5)\to Spin(6)$ induces an isomorphism as stated. It is enough to show that the map $\pi_2(\Hom(\Z^2,Spin(5)))\to \pi_2(\Hom(\Z^2,Spin(6)))$ is surjective, as both groups are isomorphic to $\Z$. Consider the following sequence of matrix groups \[ \xymatrix{ Spin(4) \ar[r] & Spin(5) \ar[r] & Spin(6) \\ SU(2)\times SU(2) \ar@{}[u]|{\rotatebox{90}{$\cong$}} & Sp(2) \ar@{}[u]|{\rotatebox{90}{$\cong$}} & SU(4) \ar@{}[u]|{\rotatebox{90}{$\cong$}} } \] Recall that $Sp(k)=\{A\in GL(k,\mathbb{H})\mid A^\dagger A=\BONE\}$. In particular, $Sp(1)\cong SU(2)$ is the group of quaternions of unit norm. It is well known, see \cite[Theorem 5.20]{MT91}, that the first map in the sequence is the block sum $Sp(1)^2\to Sp(2)$, $(A,B)\mapsto A\oplus B$. The second map is the inclusion $Sp(2)\hookrightarrow SU(4)$ resulting from \[ M(2,\mathbb{H})\to M(4,\C)\,,\; A+\mathbf{j}B \mapsto \begin{pmatrix} A & -\bar{B} \\ B & \bar{A} \end{pmatrix}\, , \] see \cite[Theorem 5.21]{MT91}. There is a permutation $P\in \Sigma_4\leqslant U(4)$ such that the composition of $SU(2)^2\to Sp(2) \to SU(4)$ is conjugate by $P$ to the block sum. As $U(4)$ is path--connected, the composition is homotopic to the block sum. Then its restriction to the first $SU(2)$-factor is homotopic to the canonical inclusion of $SU(2)$ into $SU(4)$, which by Theorem \ref{thm:casesymplecticandunitary} and Remark \ref{rem:symplecticandunitary} induces an isomorphism $\pi_2(\Hom(\Z^2,SU(2)))\cong \pi_2(\Hom(\Z^2,SU(4)))$. As a consequence, the map $\pi_2(\Hom(\Z^2,Spin(4)))\to \pi_2(\Hom(\Z^2,Spin(6)))$ is surjective, hence so is $\pi_2(\Hom(\Z^2,Spin(5)))\to \pi_2(\Hom(\Z^2,Spin(6)))$.
\end{proof} \begin{remark} \label{rem:spin5spin7} To prove the case $\ell=3$ of Theorem \ref{thm:stabilityrepspin} (ii) we consider the sequence of maps \[ \xymatrix{ \Rep(\Z^2,Spin(6)) \ar[r] \ar@/^1.4pc/[rr]^-{2} & \Rep(\Z^2,Spin(7)) \ar[r] \ar@/_1.4pc/[rr]_-{1} & \Rep(\Z^2,Spin(8)) \ar[r] & \Rep(\Z^2,Spin(9)) \\ } \] The two labels indicate the degree of the induced map of second homotopy groups as implied by the case $\ell=4$ of Theorem \ref{thm:stabilityrepspin}. It follows that the first map in the sequence has degree $2$ on second homotopy groups. Now $\pi_2(\Rep(\Z^2,Spin(5)))\to \pi_2(\Rep(\Z^2,Spin(6)))$ is an isomorphism: all coroot integers of $Spin(5)\cong Sp(2)$ and of $Spin(6)\cong SU(4)$ equal $1$, so by Theorem \ref{thm:mainpairs} the quotient maps identify $\pi_2(\Hom)$ with $\pi_2(\Rep)$ for both groups, and $\pi_2(\Hom(\Z^2,Spin(5)))\to \pi_2(\Hom(\Z^2,Spin(6)))$ is an isomorphism by Theorem \ref{thm:spinstability}. Hence $\pi_2(\Rep(\Z^2,Spin(5)))\to \pi_2(\Rep(\Z^2,Spin(7)))$ must have degree 2. \end{remark} \section{The distinguished role of $SU(2)$} \label{sec:rolesu2} In this section we continue to work with a fixed simply--connected simple compact Lie group $G$. Let \[ \nu \co SU(2)\hookrightarrow G \] be the embedding that corresponds to the highest root of $G$. Then the induced map \[ \nu_\ast\co H_3(SU(2);\Z)\xrightarrow{\cong} H_3(G;\Z) \] is an isomorphism, see \cite[III Proposition 10.2]{BS52}. \begin{theorem} \label{thm:su2represents} The map $\nu \co SU(2)\hookrightarrow G$ induces an isomorphism \[ \pi_2(\Hom(\Z^2,SU(2)))\cong \pi_2(\Hom(\Z^2,G))\, . \] \end{theorem} \begin{proof} A standard argument with the semi-algebraic triangulation theorem \cite[Theorem 9.2.1]{BCR} shows that we can give $G^2=G\times G$ a CW-structure in such a way that $\Hom(\Z^2,G)$ is a subcomplex, hence the inclusion $i\co \Hom(\Z^2,G)\to G^2$ is a cofibration. Let us consider the Puppe sequence of $i$, \[ \Hom(\Z^2,G)\xrightarrow{i} G^2\to G^2 /\Hom(\Z^2,G) \xrightarrow{\partial} \Sigma \Hom(\Z^2,G)\xrightarrow{-\Sigma i} \Sigma G^2\, . \] By the K{\"u}nneth theorem, and the fact that $G$ is $2$-connected, we have that \[ H_3(\Sigma G^2;\Z)\cong H_2(G^2;\Z)=0\,. \] Moreover, if $j\co G\vee G\to \Hom(\Z^2,G)$ is the inclusion, then the composite map \[ H_3(G\vee G;\Z)\xrightarrow{j_\ast} H_3(\Hom(\Z^2,G);\Z) \xrightarrow{i_\ast} H_3(G^2;\Z) \] is an isomorphism. Hence, $i_\ast$ is surjective. From the long exact homology sequence associated to the Puppe sequence we derive the isomorphism \[ \partial_\ast\co H_3(G^2/\Hom(\Z^2,G);\Z)\xrightarrow{\cong} H_2(\Hom(\Z^2,G);\Z)\, . \] Let $\overline{\nu\times\nu}\co SU(2)^2/\Hom(\Z^2,SU(2))\to G^2/\Hom(\Z^2,G)$ be the map induced by $\nu\co SU(2)\hookrightarrow G$ upon passage to quotients. By naturality of the connecting map $\partial$ and by the Hurewicz theorem, it suffices to show that there is an isomorphism \[ (\overline{\nu\times \nu})_\ast \co H_3(SU(2)^2/\Hom(\Z^2,SU(2));\Z)\xrightarrow{\cong} H_3(G^2/\Hom(\Z^2,G);\Z)\, . \] To see this we consider the commutator map $G^2\to G$, $(x,y)\mapsto [x,y]$. It is constant on commuting pairs, hence induces a map \[ \gamma\co G^2/\Hom(\Z^2,G)\to G\, . \] We claim that when $G=SU(2)$ the induced map \[ \gamma_\ast\co H_3(SU(2)^2/\Hom(\Z^2,SU(2));\Z)\xrightarrow{\cong} H_3(SU(2);\Z) \] is an isomorphism. Indeed, observe that \[ SU(2)^2/\Hom(\Z^2,SU(2))\cong Y^+\,, \] where $Y^+$ is the one-point compactification of the space $Y:=SU(2)^2\backslash \Hom(\Z^2,SU(2))$ of non-commuting pairs in $SU(2)$. It is known that the commutator map restricted to $Y$ is a locally trivial fiber bundle over $SU(2)\backslash \{1\}$ with compact fiber $F:=\{(x,y)\in SU(2)^2\mid [x,y]=-1\}$, see \cite[VI 1 (a)]{AMcC}.
In particular, as $SU(2)\backslash \{1\}$ is contractible (it is a sphere with one point removed), there is a homeomorphism $Y\cong (SU(2)\backslash\{1\})\times F$ under which the commutator map corresponds to the projection onto the first factor. Therefore, $\gamma$ is equivalent to the map $SU(2)\wedge F_+\to SU(2)$ induced by $F_+\to S^0$. Because this map has a section, and because $H_3(SU(2)\wedge F_+;\Z)\cong H_2(\Hom(\Z^2,SU(2));\Z)\cong \Z$, the induced map $H_3(SU(2)\wedge F_+;\Z)\to H_3(SU(2);\Z)$ must be an isomorphism. Consequently, $\gamma_\ast$ is an isomorphism in the case $G=SU(2)$. Finally, we contemplate the commutative diagram \[ \xymatrix{ H_3(SU(2)^2/\Hom(\Z^2,SU(2));\Z) \ar[r]^-{\gamma_\ast}_-{\cong} \ar[d]^-{(\overline{\nu\times \nu})_\ast} & H_3(SU(2);\Z) \ar[d]^-{\nu_\ast}_-{\cong} \\ H_3(G^2/\Hom(\Z^2,G);\Z) \ar[r]^-{\gamma_\ast} & H_3(G;\Z) } \] By Theorem \ref{thm:mainpairs} all groups in the diagram are isomorphic to $\Z$, which forces both the left hand vertical arrow as well as the bottom horizontal arrow to be isomorphisms. This finishes the proof of the theorem. \end{proof} In the study of spaces of homomorphisms $\Hom(\Gamma,G)$ it is natural to ask for the properties of the classifying space map \[ B\co \Hom(\Gamma,G)\to \textnormal{map}_\ast(B\Gamma,BG)\,, \] which takes a homomorphism $\phi\co \Gamma\to G$ to the classifying map $B\phi\co B\Gamma\to BG$ of a flat principal $G$-bundle over $B\Gamma$ with holonomy $\phi$. On path--connected components $B$ describes the contribution of flat bundles to the set of isomorphism classes of principal $G$-bundles over $B\Gamma$; on higher homotopy groups there is a similar interpretation in terms of families of flat bundles (cf. \cite[Section 7]{R19}). For $\Gamma=\Z^2$ we have the following corollary of Theorem \ref{thm:su2represents}. \begin{corollary} \label{cor:classifyingmap} The classifying space map \[ B\co \Hom(\Z^2,G)\to \textnormal{map}_\ast(\SS^1\times \SS^1,BG) \] induces an isomorphism on $\pi_k$ for all $k\leqslant 2$. \end{corollary} Here we identify $B\Z^2\simeq \SS^1\times \SS^1$ up to homotopy. \begin{proof} Since $BG$ is $3$-connected, the space $\textnormal{map}_\ast(\SS^1\times \SS^1,BG)$ is path--connected. By adjunction and the homotopy equivalence $\SS^k\wedge(\SS^1\times \SS^1)\simeq \SS^{k+2}\vee \bigvee^2 \SS^{k+1}$ (for $k>0$), we have natural isomorphisms \[ \pi_k(\textnormal{map}_\ast(\SS^1\times \SS^1,BG))\cong [\SS^k\wedge (\SS^1\times \SS^1),BG]\cong \pi_{k+1}(G)\oplus \pi_{k}(G)\oplus \pi_k(G) \] for all $k>0$. In particular, $\textnormal{map}_\ast(\SS^1\times \SS^1,BG)$ is simply--connected. Since $\Hom(\Z^2,G)$ is also path--connected and simply--connected, it only remains to prove the case $k=2$ of the corollary. We shall prove this case by reducing first to $G=SU(2)$, and then further to the stable unitary group $U=\colim_{n\to \infty} U(n)$ for which the result is known. By Theorem \ref{thm:su2represents}, by naturality of the classifying space map, and because $\nu\co SU(2)\hookrightarrow G$ induces an isomorphism $\pi_3(SU(2))\cong \pi_3(G)$, it suffices to show that \[ \pi_2(\Hom(\Z^2,SU(2))) \to \pi_2(\textnormal{map}_\ast(\SS^1\times \SS^1,BSU(2)))\cong \pi_3(SU(2)) \] is an isomorphism. The isomorphism $\pi_3(SU(2))\cong \pi_3(U)$ induced by the inclusion $SU(2)\to U$ is classical. 
On the other hand, we deduce from Theorem \ref{thm:casesymplecticandunitary} and Remark \ref{rem:symplecticandunitary} that $SU(2)\to U$ also induces an isomorphism $\pi_2(\Hom(\Z^2,SU(2)))\cong \pi_2(\Hom(\Z^2,U))$. It is then enough to show that the classifying space map induces an isomorphism \[ \pi_2(\Hom(\Z^2,U)) \xrightarrow{\cong} \pi_2(\textnormal{map}_\ast(\SS^1\times \SS^1,BU))\, . \] This isomorphism is an application of \cite[Theorem 3.4]{R11}. \end{proof} Loosely speaking, the corollary says that every principal $G$-bundle over $\SS^2\times (\SS^1)^2$ arises from an $\SS^2$-family of flat bundles over $(\SS^1)^2$, and the associated family of holonomies $\SS^2\to \Hom(\Z^2,G)$ is uniquely determined up to homotopy. Also observe that since $\pi_2(\Hom(\Z^2,G))\to \pi_2(\Rep(\Z^2,G))$ need not be an isomorphism, the map induced by $B$ on second homotopy groups does not factor through $\Rep(\Z^2,G)$ in general. This is in contrast to the fact that $\pi_0(\Hom(\Gamma,G))\to [B\Gamma,BG]$ factors through $\pi_0(\Rep(\Gamma,G))$ so long as $G$ is connected (cf. \cite[Lemma 2.5]{AC}). \begin{remark} Theorem \ref{thm:mainpairs} showed that the image of $\pi_2(\Hom(\Z^2,G))$ in $\pi_2(\Rep(\Z^2,G))$ has index $\textnormal{lcm}\{n_0^\vee,\dots,n_r^\vee\}$. With the ideas of the previous theorem we can give an alternative explanation of this fact. Without reference to Theorem \ref{thm:mainpairs}, the proof of Theorem \ref{thm:su2represents} and the fact that $\pi_2(\Hom(\Z^2,G))$ has rank one as an abelian group (Corollary \ref{cor:rank}) show that the map $\nu\co SU(2)\to G$ induces an isomorphism \[ \pi_2(\Hom(\Z^2,SU(2))) \cong \pi_2(\Hom(\Z^2,G))/\textnormal{torsion}\, . \] By Theorem \ref{thm:generalsecondhomology}, $\pi_\ast\co \pi_2(\Hom(\Z^2,SU(2)))\to \pi_2(\Rep(\Z^2,SU(2)))$ is an isomorphism, hence the degree of $\pi_\ast\co \pi_2(\Hom(\Z^2,G))/\textnormal{torsion}\to \pi_2(\Rep(\Z^2,G))$ equals the degree of the map $\pi_2(\Rep(\Z^2,SU(2)))\to \pi_2(\Rep(\Z^2,G))$ induced by $\nu$. To calculate the degree one identifies the map $\Rep(\Z^2,SU(2))\to \Rep(\Z^2,G)$ up to equivalence with a map $\mathbb{CP}(1,1)\to \mathbb{CP}(\mathbf{n}^\vee)$ through a case-by-case argument with root systems similar to the proof of Theorem \ref{thm:spinstability}. One finds that, for each $G$, there exists $j\in \{1,\dots,r\}$ such that $\mathbb{CP}(1,1)\to \mathbb{CP}(\mathbf{n}^\vee)$ is homotopic to \[ \mathbb{CP}(1,1)\xrightarrow{i_j} \mathbb{CP}(1,\dots,1)\xrightarrow{p_{\mathbf{n}^\vee}} \mathbb{CP}(\mathbf{n}^\vee)\,, \] where $i_j$ is the inclusion $[z_0,z_1] \mapsto [z_0,0,\dots,0,z_1,0,\dots,0]$ with $z_1$ in the $j$-th position, and the second map is the projection $[z_0,\dots,z_r]\mapsto [z_0^{n_0^\vee},\dots,z_r^{n_r^\vee}]$. By the formula displayed in (\ref{eq:degreeofmapofweightedprojectivespaces}), the degree of the composite map on second homology groups equals $\textnormal{lcm}\{n_0^\vee,\dots,n_r^\vee\}$, independent of $j$. \end{remark} \begin{remark} Consider a representation $\rho\co G\to SU(N)$ and let $D_{\rho}$ be the Dynkin index of $\rho$, that is the degree of $\rho_\ast\co \pi_3(G)\to \pi_3(SU(N))$. Theorem \ref{thm:su2represents} (or Corollary \ref{cor:classifyingmap}) shows that the degree of $\pi_2(\Hom(\Z^2,G))\to \pi_2(\Hom(\Z^2,SU(N)))$ equals $D_\rho$. 
On the other hand, as the degree of $\pi_2(\Hom(\Z^2,SU(N)))\to \pi_2(\Rep(\Z^2,SU(N)))$ is one (Theorem \ref{thm:mainpairs}), the degree of $\pi_2(\Rep(\Z^2,G))\to \pi_2(\Rep(\Z^2,SU(N)))$ equals $D_\rho/D$, where $D=\textnormal{gcd}\{D_\rho\mid \rho\co G\to SU(N),\, N\geqslant 1\}$ is the Dynkin index of $G$, which equals $\textnormal{lcm}\{n_0^\vee,\dots,n_r^\vee\}$ as noted in the introduction. \end{remark} \section{Application: TC structures on trivial principal $G$-bundles}\label{TC structures} In this section we provide a geometric interpretation for the calculations given in this article. In particular, we show how examples of non-trivial transitionally commutative (TC) structures on trivial principal $G$-bundles can be constructed using non-trivial elements in $\pi_{2}(\Hom(\Z^{2},G))$. \subsection{TC structures on principal $G$-bundles} We start by reviewing the concept of a TC structure on a principal $G$-bundle. Assume that $G$ is a Lie group. We can associate to $G$ a simplicial space, denoted $[B_{com}G]_{*}$ and defined by $[B_{com}G]_n:=\Hom(\Z^{n},G)\subset G^n$. The face and degeneracy maps in $[B_{com}G]_{*}$ are defined as the restriction of the face and degeneracy maps in the classical bar construction $[BG]_{*}$. The geometric realization of the simplicial space $[B_{com}G]_*$ is denoted by $B_{com}G$ and called the classifying space for commutativity in $G$ \cite{ACT,AG}. The space $B_{com}G$ is naturally a subspace of $BG$ and we denote by $p_{com}\co E_{com}G\to B_{com}G$ the restriction of the universal principal $G$-bundle $p\co EG\to BG$ to $B_{com}G$. The space $B_{com}G$ classifies principal $G$-bundles that come equipped with an additional structure that we will refer to as a transitionally commutative structure. To explain this further we need to recall some basic definitions from bundle theory. Suppose that $q\co E\to X$ is a principal $G$-bundle with $G$ acting on the right on $E$ and that $X$ is a CW-complex. By local triviality we can find an open cover $\U=\{U_{i}\}_{i\in I}$ of $X$ together with trivializations $\varphi_{i}\co q^{-1}(U_{i})\to U_{i}\times G$ for every $i\in I$. If $U_{i}$ and $U_{j}$ are such that $U_{i} \cap U_{j}\ne \emptyset$, then for every $x\in U_{i}\cap U_{j}$ and all $g\in G$ we have $\varphi_{i} \varphi_{j}^{-1}(x,g)=(x,\rho_{i,j}(x)g)$. Here $\rho_{i,j}\co U_{i}\cap U_{j}\to G$ is a continuous function called the transition function. The different transition functions satisfy the cocycle identity \[ \rho_{i,k}(x)=\rho_{i,j}(x) \rho_{j,k}(x) \] for every $x\in U_{i}\cap U_{j}\cap U_{k}$. Now, if for every $i,j,k\in I$ and every $x\in U_{i}\cap U_{j}\cap U_{k}$ the elements $\rho_{i,j}(x), \rho_{j,k}(x)$ and $\rho_{i,k}(x)$ commute with each other, then we say that $\{\rho_{i,j}\}$ is a commutative cocycle. Following \cite{AG}, we call a principal $G$-bundle $q\co E\to X$ transitionally commutative if it admits a trivialization in such a way that the corresponding transition functions define a commutative cocycle. Let $f\co X\to BG$ be the classifying map of a principal $G$--bundle $q\co E\to X$ over a finite CW--complex $X$. Then by \cite[Theorem 2.2]{AG} it follows that $f$ factors through $B_{com}G$, up to homotopy, if and only if $q$ is transitionally commutative. Next we define the notion of a transitionally commutative structure on principal bundles as in \cite{SimonPhD} and \cite{RV}. \begin{definition} Suppose that $q\co E\to X$ is a transitionally commutative principal $G$-bundle.
A \textsl{transitionally commutative structure (TC structure)} on $q:E\to X$ is a choice of map $\tilde{f}\co X\to B_{com}G$, up to homotopy, such that $i \tilde{f}\co X\to BG$ is a classifying map for $q\co E\to X$. Here $i\co B_{com}G\to BG$ denotes the natural inclusion. \end{definition} With the previous definition the set of TC structures on principal $G$-bundles over a space $X$ is precisely the set $[X,B_{com}G]$ of homotopy classes of maps $X\to B_{com}G$. \begin{example} Consider the trivial principal $G$-bundle $\textnormal{pr}_{1}\co X\times G\to X$. This bundle admits a TC structure given by the homotopy class of a constant map $X\to B_{com}G$. We refer to this TC structure as the trivial TC structure. \end{example} Observe that the definition of TC structures implies that the same underlying principal $G$-bundle can admit TC structures in many different ways. \subsection{Examples of TC structures on the trivial bundle} Our next goal is to construct non-trivial TC structures for the trivial principal $G$-bundle over $\SS^{4}$. Let $E_{com}G$ denote the homotopy fiber of the inclusion $i\co B_{com}G\to BG$. As a direct consequence of \cite[Theorem 6.3]{ACT}, the homotopy fiber sequence $E_{com}G\to B_{com}G\to BG$ induces for every $n\geqslant 0$ a split short exact sequence \begin{equation} \label{eq:sespibcom} 1\to \pi_{n}(E_{com}G)\to \pi_{n}(B_{com}G) \xrightarrow{i_{*}}\pi_{n}(BG)\to 1\,. \end{equation} If $G$ is connected, then both $B_{com}G$ and $BG$ are simply--connected. In particular, for every $n\geqslant 0$ we have that $[\SS^{n},B_{com}G]\cong \pi_{n}(B_{com}G)$ so that $\pi_{n}(B_{com}G)$ agrees with the set of TC structures on principal $G$-bundles over $\SS^{n}$. Let $f\co \SS^{n}\to B_{com}G$ be a TC structure on the trivial principal $G$-bundle over $\SS^{n}$. Then $[f]\in \pi_{n}(B_{com}G) $ belongs to \[ \Ker(i_{*}\co \pi_{n}(B_{com}G)\to \pi_{n}(BG))\cong \pi_{n}(E_{com}G)\,. \] In fact, elements in $\pi_{n}(E_{com}G)$ correspond precisely to TC structures on the trivial principal $G$-bundle over $\SS^{n}$. If $G=SU(m)$ or $G=Sp(k)$, then $E_{com}G$ is $3$-connected by \cite[Proposition 3.2]{AG}. This implies that the lowest dimensional sphere for which a non-trivial TC structure on the trivial principal $G$-bundle can exist is $\SS^{4}$. For other simply--connected compact Lie groups $G$ this may not hold, as a result of the fact that $\Hom(\Z^n,G)$ can be disconnected. However, the following variation of $B_{com}G$ was considered in \cite{AG}. Let $[B_{com}G_{\BONE}]_\ast$ denote the sub-simplicial space of $[B_{com}G]_\ast$ defined by $[B_{com}G_{\BONE}]_n:=\Hom(\Z^n,G)_{\BONE}$. Let $E_{com}G_{\BONE}$ denote the homotopy fiber of the inclusion $B_{com}G_{\BONE}\to BG$. The proof of \cite[Theorem 6.3]{ACT} shows that there is a split exact sequence of the form (\ref{eq:sespibcom}) also when $E_{com}G$ and $B_{com}G$ are replaced by $E_{com}G_{\BONE}$ and $B_{com}G_{\BONE}$, respectively. Therefore, \cite[Proposition 3.2]{AG} implies that $E_{com}G_{\BONE}$ is $3$-connected if $G$ is simply--connected. Moreover, $\pi_4(B_{com}G_{\BONE})\cong \pi_4(E_{com}G_{\BONE})\oplus \Z$ if in addition $G$ is assumed simple. \begin{corollary} \label{cor:pi4} Let $G$ be a simply--connected simple compact Lie group. Then \[ \pi_4(E_{com}G_{\BONE})\cong \Z\, \textnormal{ and }\, \pi_4(B_{com}G_{\BONE})\cong \Z\oplus \Z\, . 
\] \end{corollary} \begin{proof} As explained in \cite{AG} a model for $E_{com}G_{\BONE}$ is the geometric realization of a simplicial space $[E_{com}G_{\BONE}]_\ast$ with $n$-simplices $[E_{com}G_{\BONE}]_n:=\Hom(\Z^n,G)_{\BONE}\times G$. To keep our proof short we refer to \cite{AG} for more details about the simplicial structure. The filtration of $E_{com}G_{\BONE}$ arising from the simplicial structure leads to a spectral sequence (see \cite[Theorem 11.14]{MayGeometry}) which takes the form \[ E^2_{p,q}=H_p(H_q([E_{com}G_{\BONE}]_\ast;\Z)) \Longrightarrow H_{p+q}(E_{com}G_{\BONE};\Z)\, . \] The proof of \cite[Proposition 3.3]{AG} shows that $E^2_{p,q}=0$ for all $p,q\geqslant 0$ with $0< p+q\leqslant 3$. Let us determine $E^2_{p,q}$ for $p+q=4$. Since $\Hom(\Z^n,G)_{\BONE}\times G$ is path--connected and simply--connected for all $n\geqslant 0$, we have that $E_{p,0}^2=0$ for all $p\geqslant 1$ and $E_{p,1}^2=0$ for all $p\geqslant 0$. The chain complex computing $E^2_{1,3}$ reads \[ \cdots \to H_3(\Hom(\Z^2,G)\times G;\Z)\xrightarrow{d_2} H_3(G\times G;\Z) \xrightarrow{d_1} H_3(G)\to 0\,, \] with differentials $d_k=\sum_{i=0}^{k}(-1)^i\partial_i$ where $\partial_i\co [E_{com}G]_{k}\to [E_{com}G]_{k-1}$ is the $i$-th face map. Let $i_2\co G\to G\times G$ be the inclusion $x\mapsto (1,x)$. Then a short calculation shows that $\ker(d_1)= \Im((i_2)_\ast)$. Moreover, if $i_2'\co G\to \Hom(\Z^2,G) \times G$ denotes the inclusion $x\mapsto (1,1,x)$, then $d_2(i_2')_\ast=(i_2)_\ast$. Hence, $E_{1,3}^2=\ker(d_1)/\Im(d_2) =0$. The chain complex computing $E_{2,2}^2$ reads \[ \cdots \to H_2(\Hom(\Z^3,G)_{\BONE}\times G;\Z)\xrightarrow{d_3} H_2(\Hom(\Z^2,G)\times G;\Z) \xrightarrow{d_2} 0\, , \] because $G$ is assumed simply--connected and therefore $H_2(G\times G;\Z)=0$ by the K{\"u}nneth theorem. Now $H_2(\Hom(\Z^2,G)\times G;\Z)\cong H_2(\Hom(\Z^2,G);\Z)\cong \Z$, by Theorem \ref{thm:mainpairs}. As a consequence, $d_3$ factors through $H_2(\Hom(\Z^3,G)_{\BONE};\Z)/\textnormal{torsion}$. We claim that $d_3=0$, hence $E^2_{2,2}\cong \Z$. To see this let $j\co \bigvee^3 \Hom(\Z^2,G)\to \Hom(\Z^3,G)_{\BONE}$ be the map induced by $(x,y)\mapsto (x,y,1)$, $(x,y)\mapsto (x,1,y)$, and $(x,y)\mapsto (1,x,y)$. By Corollary \ref{cor:h2modtorsion}, the map $j_\ast$ induced by $j$ on second homology groups is an isomorphism modulo torsion. Therefore, to see that $d_3=0$ it is enough to show that $d_3j_\ast=0$, but this is an easy calculation. Finally, it is clear that the differential $d_1\co H_4(G\times G;\Z)\to H_4(G;\Z)$ in the chain complex computing $E_{0,4}^2$ is surjective, so $E_{0,4}^2=0$. It follows that the only non-trivial group in total degree $4$ of the $E^2$-page is $E^2_{2,2}$ and there are no non-trivial differentials originating from or arriving at $E^2_{2,2}$. Hence, we conclude that $H_4(E_{com}G_{\BONE};\Z)\cong E^{2}_{2,2}\cong \Z$. The isomorphism $\pi_4(E_{com}G_{\BONE})\cong \Z$ is obtained from the Hurewicz theorem and the fact that $E_{com}G_{\BONE}$ is $3$-connected. \end{proof} We show next how an element of $\pi_{2}(\Hom(\Z^{2},G))\cong \Z$ can be used to construct a TC structure on the trivial principal $G$-bundle over $\SS^{4}$. Let $\beta:\SS^{2}\to \Hom(\Z^{2},G)$ be any map. If $\beta_{1}$ and $\beta_{2}$ are the components of $\beta$, then we have that $\beta_{1}(x),\beta_{2}(x)\in G$ commute for all $x\in \SS^{2}$. We are going to use the functions $\beta_{1}$ and $\beta_{2}$ to construct a commutative cocycle with values in $G$. 
The construction we give follows the idea of \cite[Section 3]{RV} where the case $G=O(2)$ was studied. Consider \[ \SS^{4}=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \R^{5}~|~ x_{0}^{2}+x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{4}^{2}=1\}. \] We can cover $\SS^{4}$ using the closed sets $C_{1}$, $C_{2}$ and $C_{3}$ given by \begin{align*} C_{1}&=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}\leqslant 0\}\,,\\ C_{2}&=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}\geqslant 0,\, x_{4}\geqslant 0\}\,,\\ C_{3}&=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}\geqslant 0,\, x_{4}\leqslant 0\}\,. \end{align*} Notice that \[ C_{1}\cap C_{2}\cap C_{3}=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}= 0,\, x_{4}= 0\} \cong \SS^{2}. \] From now on we identify $\SS^{2}$ with $C_{1}\cap C_{2}\cap C_{3}$. In addition, observe that $C_{1}\cap C_{2}\cong \D^{3}$ and under this identification the boundary $\SS^{2}$ corresponds to $C_{1}\cap C_{2}\cap C_{3}$. The same is true for $C_{1}\cap C_{3}$ and $C_{2}\cap C_{3}$. Recall that $\pi_{2}(G)= 0$, hence $\beta_1\co \SS^2\to G$ is null-homotopic. Since $C_{1}\cap C_{2}\cong \D^{3}$, we can find a continuous map $\rho_{1,2}\co C_{1}\cap C_{2}\to G$ such that $\left.\rho_{1,2}\right|_{C_{1}\cap C_{2}\cap C_{3}}\co C_{1}\cap C_{2}\cap C_{3}\to G$ agrees with $\beta_{1}$. Similarly, the choice of a null-homotopy of $\beta_2$ defines a continuous map $\rho_{2,3}\co C_{2}\cap C_{3}\to G$ such that $\left.\rho_{2,3}\right|_{C_{1}\cap C_{2}\cap C_{3}}\co C_{1}\cap C_{2}\cap C_{3}\to G$ agrees with $\beta_2$. To define the transition function $\rho_{1,3}\co C_1\cap C_3\to G$ we consider the retraction $r\co C_{2}\to C_{2}\cap C_{3}$ defined by \[ r(x_{0},x_{1},x_{2},x_{3},x_{4}) := \left(\sqrt{1-x_{1}^{2}-x_{2}^{2}-x_{3}^{2}},x_1,x_{2},x_{3},0 \right). \] Then define $\rho_{1,3}\co C_{1}\cap C_{3}\to G$ by \[ \rho_{1,3}(0,x_{1},x_{2},x_{3},x_{4}) :=\rho_{1,2}(0,x_{1},x_{2},x_{3},-x_{4})\rho_{2,3}(r(0,x_{1},x_{2},x_{3},-x_{4})). \] For $i>j$ we set $\rho_{i,j}:=\rho_{j,i}^{-1}$. Notice that if $x\in C_{1}\cap C_{2}\cap C_{3}$, then $x=(0,x_{1},x_{2},x_{3},0)$ and $r(x)=x$. Therefore, for $x\in C_{1}\cap C_{2}\cap C_{3}$ we have that $\rho_{1,3}(x)=\rho_{1,2}(x)\rho_{2,3}(x)$ by definition of $\rho_{1,3}$. Thus, $\{\rho_{i,j}\}$ satisfies the cocycle condition. Moreover, since $\rho_{1,2}(x)=\beta_{1}(x)$ and $\rho_{2,3}(x)=\beta_{2}(x)$ for all $x\in C_{1}\cap C_{2}\cap C_{3}$, we conclude that $\{\rho_{i,j}\}$ defines a commutative cocycle relative to the closed cover $\mathcal{C}:=\{C_{1},C_{2},C_{3}\}$ of $\SS^4$. Let $E_\beta$ be the space defined by \[ E_\beta:=(C_{1}\times G\sqcup C_{2}\times G\sqcup C_{3}\times G)/{\sim}\,, \] where $(j,x,g)\sim (i,x,\rho_{ij}(x)g)$. The projection map $E_\beta\to \SS^{4}$ induced by $(j,x,g)\mapsto x$ defines a principal $G$-bundle by \cite[Lemma 3.1]{RV}. \begin{lemma} \label{lem:trivialbundle} The principal $G$-bundle $E_\beta \to \SS^4$ is trivial. \end{lemma} \begin{proof} By \cite[Lemma 3.2]{RV}, the principal $G$-bundle $E_\beta$ is isomorphic to the bundle obtained using the clutching function $\varphi\co C_{1}\cap (C_{2}\cup C_{3})\cong \SS^{3}\to G$ given by \begin{equation*} \varphi(x)=\begin{cases} \rho_{1,2}(x)\rho_{2,3}(r(x))\,, &\text{if } x\in C_{1}\cap C_{2}\,,\\ \rho_{1,3}(x)\,, &\text{if } x\in C_{1}\cap C_{3}\,. \end{cases} \end{equation*} By construction this function satisfies $\varphi(0,x_{1},x_{2},x_{3},x_{4})=\varphi(0,x_{1},x_{2},x_{3},-x_{4})$. 
This implies that $\varphi$ factors through $C_1\cap C_2\cong \D^{3}$, hence $\varphi$ is null-homotopic. Therefore, the principal $G$-bundle $E_\beta$ is trivial. \end{proof} Let $N_\ast(\mathcal{C})$ denote the {\v C}ech nerve of the closed cover $\mathcal{C}$ of $\SS^4$. The commutative cocycle $\{\rho_{i,j}\}$ defines a simplicial map $[f_\beta]_\ast \co N_{*}(\mathcal{C})\to [B_{com}G_{\BONE}]_{*}$ sending $x\in C_{i_0}\cap \dots \cap C_{i_n}$ to $(\rho_{i_0,i_1}(x),\dots,\rho_{i_{n-1},i_n}(x))\in \Hom(\Z^n,G)_{\BONE}$. Upon geometric realization we obtain a map \[ f_\beta:=|[f_\beta]_\ast|\co |N_\ast(\mathcal{C})|\to B_{com}G_{\BONE}\, . \] We can choose open neighbourhoods $U_i\supset C_i$, $i=1,2,3$, such that for every choice of indices the inclusion $C_{i_0}\cap \dots \cap C_{i_n}\hookrightarrow U_{i_0}\cap \dots \cap U_{i_n}$ is a homotopy equivalence. Let $\mathcal{U}=\{U_1,U_2,U_3\}$ be the resulting open cover of $\SS^4$. The map of {\v C}ech nerves $N_\ast(\mathcal{C})\to N_\ast(\mathcal{U})$ is a levelwise homotopy equivalence of proper simplicial spaces, hence induces a homotopy equivalence $|N_\ast(\mathcal{C})|\simeq |N_\ast(\mathcal{U})|$. Since $\mathcal{U}$ is numerable, the natural map $|N_{*}(\mathcal{U})|\to \SS^4$ is a homotopy equivalence by \cite[Proposition 4.1]{Segal1}. We conclude that $|N_\ast(\mathcal{C})|\to \SS^4$ is a homotopy equivalence. Let $\lambda\co \SS^4\to |N_\ast(\mathcal{C})|$ be a homotopy inverse. The argument of \cite[Lemma 3.3]{RV} shows that $f_\beta \lambda\co \SS^4\to B_{com}G_{\BONE}$ is a TC structure on $E_\beta$, i.e., that $if_\beta \lambda\co \SS^4\to BG$ classifies $E_\beta$. Since $E_\beta$ is trivial, by Lemma \ref{lem:trivialbundle}, $if_\beta \lambda$ is null homotopic. Thus, $f_\beta\lambda$ factors, up to homotopy, through the homotopy fiber $E_{com}G_{\BONE}$. \medskip Let us construct another commutative cocycle on $\SS^4$, this time representing a generator of $\pi_4(BG)\cong \Z$. To this end, we cover $\SS^4$ by the contractible closed sets \begin{align*} D_{1}&=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}\leqslant 0\}\,,\\ D_{2}&=\{(x_{0},x_{1},x_{2},x_{3},x_{4})\in \SS^{4} \mid x_{0}\geqslant 0\}\,. \end{align*} Observe that $D_1\cap D_2\cong \SS^3$. Let $\tau_{1,2}\co D_{1}\cap D_{2} \to G$ represent a generator of $\pi_3(G)$. To be concrete, we choose $\tau_{1,2}=\nu$, where $\nu\co SU(2)\to G$ is the map defined in Section \ref{sec:rolesu2}. Then $\tau_{1,2}$ defines trivially a commutative cocycle relative to the closed cover $\mathcal{D}:=\{D_1,D_2\}$. As before, the cocycle $\{\tau_{i,j}\}$ defines a simplicial map $[g_\nu]_\ast \co N_\ast(\mathcal{D})\to [B_{com}G_{\BONE}]_\ast$ and thus a map $g_\nu \co |N_\ast(\mathcal{D})|\to B_{com}G_{\BONE}$. Upon choosing a homotopy equivalence $\lambda'\co \SS^4\to |N_\ast(\mathcal{D})|$ one obtains a classifying map $ig_\nu\lambda'\co \SS^4\to BG$ for the bundle clutched by $\nu\co \SS^3\to G$. Since $\nu$ represents a generator of $\pi_3(G)$, the homotopy class of $ig_\nu \lambda'$ generates $\pi_4(BG)$.\medskip From now on we tacitly identify $|N_\ast(\mathcal{C})|$ and $|N_\ast(\mathcal{D})|$ with $\SS^4$ and drop the homotopy equivalences $\lambda$ and $\lambda'$ from the notation. \begin{theorem} Let $[\beta]\in \pi_2(\Hom(\Z^2,G))$ and $[\nu]\in \pi_3(G)$ be generators. Then the TC structures $[f_\beta]$ and $[g_\nu]$ generate $\pi_4(B_{com}G_{\BONE})\cong \Z\oplus \Z$. In particular, $f_{\beta}$ lifts to a generator of $\pi_4(E_{com}G_{\BONE})\cong \Z$. 
\end{theorem} \begin{proof} The proof is by comparison of the spectral sequences associated to the simplicial spaces $N_\ast(\mathcal{C})$, $N_\ast(\mathcal{D})$ and $[B_{com}G_{\BONE}]_\ast$. Let us first consider the spectral sequence \[ {}_{\mathcal{C}}E^2_{p,q}=H_p(H_q(N_\ast(\mathcal{C});\Z)) \;\Longrightarrow\; H_{p+q}(|N_\ast(\mathcal{C})|;\Z)\,. \] In each degree $N_\ast(\mathcal{C})$ is a disjoint union of contractible spaces and spaces homeomorphic to $\SS^2$. This readily shows that ${}_{\mathcal{C}}E^2_{p,q}=0$ unless $q=0,2$. In the case $q=0$ the chain complex $H_0(N_\ast(\mathcal{C});\Z)$ computes the homology of a $2$-simplex, hence ${}_{\mathcal{C}}E^2_{4,0}=0$. In the case $q=2$ we observe that $H_2(N_2(\mathcal{C});\Z)\cong H_2(C_1\cap C_2\cap C_3;\Z)\cong \Z$, hence ${}_{\mathcal{C}}E^2_{2,2}$ is a quotient of $\Z$. In fact, we must have ${}_{\mathcal{C}}E^2_{2,2}\cong \Z$, because $H_4(|N_\ast(\mathcal{C})|;\Z)\cong \Z$. For the same reason, ${}_{\mathcal{C}}E^2_{2,2}$ is not hit by any non-zero differential. Since ${}_{\mathcal{C}}E^2_{0,3}=0$, there is no non-zero differential leaving ${}_{\mathcal{C}}E^2_{2,2}$ either. We conclude that ${}_{\mathcal{C}}E^\infty_{2,2}\cong H_2(C_1\cap C_2\cap C_3;\Z)\cong \Z$ is the only non-zero group in total degree $4$. The analysis of the spectral sequence $\{{}_{\mathcal{D}}E^\ast_{p,q}\}$ calculating $H_\ast(|N_\ast(\mathcal{D})|;\Z)$ is very similar; one finds that ${}_{\mathcal{D}}E_{1,3}^\infty\cong H_3(D_1\cap D_2;\Z)\cong \Z$ is the only non-zero group in total degree $4$. Finally, we consider the spectral sequence \[ E^2_{p,q}=H_p(H_q([B_{com}G_{\BONE}];\Z))\; \Longrightarrow \; H_{p+q}(B_{com}G_{\BONE};\Z)\, . \] Since $[B_{com}G_{\BONE}]_0$ is a one point space, we trivially have that $E^2_{0,q}=0$ for all $q>0$. Because $\Hom(\Z^n,G)_{\BONE}$ is path--connected and simply--connected for all $n\geqslant 0$, we further have that $E^2_{4,0}=E^2_{3,1}=0$. On the other hand, we find that $E_{1,3}^2$ is a quotient of $H_{3}(G;\Z)\cong \Z$, and likewise $E^2_{2,2}$ is a quotient of $H_2(\Hom(\Z^2,G);\Z)\cong \Z$ (Theorem \ref{thm:mainpairs}). In Corollary \ref{cor:pi4} we showed that $H_4(B_{com}G_{\BONE};\Z)\cong \Z\oplus \Z$; but this is possible only if $E^2_{1,3}\cong \Z$ and $E_{2,2}^2\cong \Z$ and none of these is hit by a non-zero differential. Furthermore, for degree reasons and because $E_{0,3}^2=0$, there are no non-zero differentials originating from either $E^2_{1,3}$ or $E^2_{2,2}$. Hence, $E^\infty_{1,3}\cong H_3(G;\Z)\cong \Z$ and $E^\infty_{2,2}\cong H_2(\Hom(\Z^2,G);\Z)\cong \Z$. 
The simplicial maps $[f_\beta]_\ast\co N_\ast(\mathcal{C})\to [B_{com}G_{\BONE}]_\ast$ and $[g_\nu]_\ast\co N_\ast(\mathcal{D})\to [B_{com}G_{\BONE}]_\ast$ give rise to the following diagram of extensions: \[ \xymatrix{ 0 \ar[r] & 0 \ar[r] \ar[d] & H_{4}(|N_\ast(\mathcal{C})|;\Z) \ar[r] \ar[d]^-{(f_\beta)_\ast} & {}_{\mathcal{C}}E_{2,2}^{\infty}\cong \Z \ar[r] \ar[d]^-{\cong}_-{\beta_\ast} & 0 \\ 0 \ar[r] & E_{1,3}^{\infty}\cong \Z \ar[r] & H_{4}(B_{com}G_{\BONE};\Z) \ar[r] & E_{2,2}^{\infty}\cong \Z \ar[r] & 0 \\ 0 \ar[r] & {}_{\mathcal{D}}E_{1,3}^{\infty}\cong \Z \ar[r] \ar[u]_-{\cong}^-{\nu_\ast} & H_{4}(|N_\ast(\mathcal{D})|;\Z) \ar[r] \ar[u]_-{(g_\nu)_\ast} & 0 \ar[u] \ar[r] & 0 } \] The preceding discussion shows that the map ${}_{\mathcal{D}}E_{1,3}^\infty\to E^\infty_{1,3}$ can be identified with the map $(\nu)_\ast\co H_3(\SS^3;\Z)\to H_3(G;\Z)$, and ${}_{\mathcal{C}}E^\infty_{2,2}\to E^\infty_{2,2}$ may be identified with $\beta_\ast\co H_2(\SS^2;\Z)\to H_2(\Hom(\Z^2,G);\Z)$; both are isomorphisms by choice. The commutative diagram together with the Hurewicz theorem imply that $[f_\beta]$ and $[g_\nu]$ generate $\pi_4(B_{com}G_{\BONE})$. Since $if_\beta$ is null homotopic, $[f_\beta]\in \Ker(i_\ast\co \pi_4(B_{com}G_{\BONE})\to \pi_4(BG))\cong \pi_4(E_{com}G_{\BONE})$ and it is clear that $[f_{\beta}]$ generates $\pi_4(E_{com}G_{\BONE})$. \end{proof} \subsection{An explicit generator of $\pi_2(\textnormal{Hom}(\Z^2,G))$} We finish our discussion by constructing a map $\beta \co \SS^2\to \Hom(\Z^2,G)$ whose homotopy class generates the group $\pi_2(\Hom(\Z^2,G))\cong \Z$. In theory, this enables us to write down an explicit commutative cocycle on $\SS^4$ (relative to the closed cover $\{C_1, C_2,C_3\}$ described above) which represents the generator of $\pi_4(E_{com}G_{\BONE})\cong \Z$. By Theorem \ref{thm:su2represents} it suffices to construct $\beta$ in the case $G=SU(2)$; composition with the embedding $\nu\co SU(2)\to G$ yields a generator for general $G$. Let $T\leqslant SU(2)$ be the maximal torus consisting of all diagonal matrices and identify it with the unit circle $\SS^1\subseteq \C$. Under this identification the action of the Weyl group $W\cong \mathbb{Z}/2$ corresponds to complex conjugation on $\mathbb{S}^1$. Let $I=[0,1]$ denote the unit interval and let $\Delta=\{(s,t)\in I^2 \mid s\leqslant t\}$. Our model for $\mathbb{S}^2$ will be the boundary of the prism $P=\Delta\times I$. Consider the continuous map \begin{alignat*}{2} r & \co \Delta && \to (\mathbb{S}^1)^2 \subset \Hom(\Z^2,SU(2)) \\ & (s,t) && \mapsto (e^{2\pi i s}, e^{2\pi i t})\, , \end{alignat*} whose image is the closure of a fundamental domain for the diagonal $\mathbb{Z}/2$--action on $(\mathbb{S}^1)^2$. The basic idea to construct the desired homotopy class is to choose a null homotopy of \[ r|_{\partial \Delta}\co \partial \Delta \to \Hom(\Z^2,SU(2))\, , \] which exists because $\Hom(\Z^2,SU(2))$ is simply--connected. Up to homotopy, this induces a map $\Delta /\partial \Delta\cong \mathbb{S}^2 \to \Hom(\Z^2,SU(2))$. However, some care must be taken as the choice of null homotopy will in general affect the resulting homotopy class. Let $\rho\co (I,\partial I)\to (\mathbb{S}^1,1)$ represent a generator of $\pi_1(\mathbb{S}^1)$ and let $h \co (I,\partial I)\times I\to (SU(2),1)$ be \emph{any} fixed null homotopy of $i \rho$, where $i\co (\mathbb{S}^1,1)\hookrightarrow (SU(2),1)$ is the inclusion. Let $i_{1} \co SU(2)\to \Hom(\Z^2,SU(2))$ denote the inclusion of the first factor. 
Similarly, let $i_2$ denote the inclusion of the second factor. Let $d\co SU(2)\to \Hom(\Z^2,SU(2))$ be the diagonal map. We now extend $r$ to a continuous map \[ \beta \co \partial P\to \Hom(\Z^2,SU(2)) \] as follows: Define $\beta|_{\Delta\times \{0\}}:=r$ and let $\beta|_{\Delta\times \{1\}}$ be the constant map with value $(1,1)$. The boundary $\partial \Delta$ is the union of the three intervals \[ \Delta_{01}=\{s=0\} \,, \quad \Delta_{02}=\{s=t\}\,, \quad \Delta_{12}=\{t=0\}\,, \] each of which is identified with $I$. Define $\beta|_{\Delta_{01}\times I}:=i_2 h$, $\beta|_{\Delta_{02}\times I}:=d h$, and $\beta|_{\Delta_{12}\times I}:=i_1 h$. By inspection, these maps can be glued together and define a continuous map $\beta\co \partial P \to \Hom(\Z^2,SU(2))$. \begin{proposition}\label{prop:geometricgenerator} The homotopy class of $\beta$ generates $\pi_2(\Hom(\Z^2,SU(2)))$. \end{proposition} \begin{proof} By Theorem \ref{thm:mainpairs} it suffices to show that the composite map \[ \partial P\xrightarrow{\beta} \Hom(\Z^2,SU(2)) \xrightarrow{\pi} \Rep(\Z^2,SU(2)) \] represents a generator of $\pi_2(\Rep(\Z^2,SU(2)))$. Since $\Rep(\Z^2,SU(2)) \cong \mathbb{S}^2$, it is enough to verify that $\pi \beta$ has degree $\pm 1$. Let $\textnormal{int}(\Delta)$ denote the relative interior of the bottom face of $\partial P$, and let $U\subseteq \SS^2$ be the image of $\textnormal{int}(\Delta)$ under $\pi\beta$. Then $U$ is open and $\pi\beta$ maps $\partial P\backslash \textnormal{int}(\Delta)$ into $\SS^2\backslash U$. Any $x\in U$ has a unique preimage under $\pi\beta$. By considering local degrees it follows that $\pi \beta$ has degree $\pm 1$. \end{proof} Let $\nu\co SU(2)\to G$ be the embedding corresponding to the highest root of $G$. \begin{corollary} The homotopy class of $(\nu\times \nu)\beta$ generates $\pi_2(\Hom(\Z^2,G))$. \end{corollary} \begin{proof} This is immediate from Proposition \ref{prop:geometricgenerator} and Theorem \ref{thm:su2represents}. \end{proof} \begin{remark} In \cite{AG,AGLT} a K-theory group was introduced, defined for a finite CW complex $X$ by $\tilde{K}_{com}(X):=[X,B_{com}U]$. In \cite[Proposition 5.2]{Simon} it was shown that the inclusion $SU(2)\to U$ induces an isomorphism $\pi_4(B_{com}SU(2))\cong \pi_4(B_{com}U)$, and $\tilde{K}_{com}(\SS^4)\cong \Z\oplus \Z$. Thus, the results of this section can be used to construct explicit commutative cocycles over $\SS^4$, relative to the closed covers $\mathcal{C}$ and $\mathcal{D}$, representing the two generators of $\tilde{K}_{com}(\SS^4)$. \end{remark}
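As an illustrative aside, and not part of the formal arguments above, the commutativity constraint underlying the construction of $\beta$ can be checked numerically. The following sketch (our own illustration, with hypothetical function names) samples the map $r(s,t)=(e^{2\pi i s},e^{2\pi i t})$ on the fundamental domain $\Delta$ and verifies that each sampled pair consists of two commuting matrices in $SU(2)$, i.e. defines a point of $\Hom(\Z^2,SU(2))$.
\begin{verbatim}
# Numerical sanity check (illustration only): sampled values of
# r(s,t) = (e^{2 pi i s}, e^{2 pi i t}) in the diagonal torus of SU(2)
# are pairs of commuting special unitary matrices, i.e. points of Hom(Z^2,SU(2)).
import numpy as np

def torus_pair(s, t):
    b1 = np.diag([np.exp(2j * np.pi * s), np.exp(-2j * np.pi * s)])
    b2 = np.diag([np.exp(2j * np.pi * t), np.exp(-2j * np.pi * t)])
    return b1, b2

rng = np.random.default_rng(0)
for _ in range(10_000):
    s, t = np.sort(rng.random(2))         # (s, t) in the domain s <= t
    b1, b2 = torus_pair(s, t)
    for b in (b1, b2):                    # each matrix lies in SU(2)
        assert np.allclose(b.conj().T @ b, np.eye(2))
        assert np.isclose(np.linalg.det(b), 1.0)
    assert np.allclose(b1 @ b2, b2 @ b1)  # the pair commutes
print("all sampled pairs lie in Hom(Z^2, SU(2))")
\end{verbatim}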
\section{Introduction}
\label{intro}
Over the past few years, the adoption of Machine Learning (ML) techniques in the field of network security has become prominent \cite{granville1,granville2}. This is mainly due to the possibility of tackling a range of ever more sophisticated attacks able to circumvent security systems which rely on classic feature inspection (e.g. port control, signature-based flow detection, IP black-listing, etc.). It is useful to recall that, especially in encrypted traffic analysis, the difference between \textit{deterministic} and \textit{stochastic} features is crucial. Deterministic ones pertain to ``static'' information embodied in security protocols such as TLS (e.g. record length, handshake types, cipher suites, etc.) \cite{cisco-nids}, or IPSec (ISAKMP SPI Initiator/Responder, payload length, etc.). On the other hand, stochastic features exploit the probabilistic nature (hard to hide in encrypted flows as well) of some traffic characteristics (e.g. the distribution of inter-arrival times). By means of ML-based techniques, it becomes quite straightforward to manage stochastic features which, coupled with the deterministic ones, allow an encrypted data flow to be characterized as accurately as possible. Moreover, new network intrusion detection systems (NIDS) \cite{gaspary1,gaspary2} can actually interact with ML-based engines in order to learn the statistical features that characterize the various traffic flows and, in turn, classify them according to specific performance/time efficiency trade-offs. Among various possibilities of interaction, the Simple Network Management Protocol (SNMP) offers a flexible and standardized way to collect traffic data from a number of agents, by relying on Management Information Base (MIB) objects which provide information about some features to be managed \cite{cerroni1,cerroni2,cerroni3}. This notwithstanding, one of the biggest problems is to deal with the jungle of ML techniques which, depending on the underlying strategy, can exhibit completely different performance when applied in the intrusion detection field. In fact, most attempts at surveying ML-based traffic classification problems have resulted in non-homogeneous comparisons and often unfair outcomes. Another flaw found in the existing literature concerns the choice of a valid dataset. Most studies, in fact, rely on the obsolete KDD$99$ dataset \cite{kdd-dataset}, or on its evolved version NSL-KDD \cite{nsl-kdd}. These datasets either do not contemplate essential features of modern data traffic (e.g. voice, video), or contain outdated signatures of network attacks \cite{icissp-dataset3}. In this paper we address the aforementioned shortcomings, providing the following main contributions. \textit{i)} We survey neural-based techniques, which have recently gained prominence thanks to deep-based methods, in the specific context of traffic classification problems; we provide a critical comparison of algorithms that share a common paradigm (e.g. deep neural networks, learning vector quantization, etc.), including also techniques that are not typically applied to intrusion detection, such as weightless neural networks, whose results are quite unexpected. \textit{ii)} We go beyond a traditional survey, providing an experimental-based assessment; we take into account novel traffic datasets (such as CIC-IDS-2017/2018), where both single-class cases (namely benign vs. malign) and multi-class cases (namely benign vs. malign$_1$ ... vs.
malign$_k$) are considered; we perform both performance analysis (through accuracy/F-measure figures) and time-complexity analysis. The paper is organized as follows. Section \ref{sec:rw} presents an {\em excursus} of related literature where, by means of a comparative table, we highlight differences and commonalities with other surveys. In Sect. \ref{sec:nn} we review the most credited neural-based methods. Section \ref{sec:dataset} presents details about the considered novel datasets, where traffic features are grouped according to a similarity criterion. In Sect. \ref{sec:experim} we provide an experimental-based comparison, where different neural-based techniques are juxtaposed through performance and time-complexity analyses. Finally, Section \ref{sec:conclusion} concludes this work by also indicating some promising research directions.
\section{Related Work on ML techniques applied to network intrusion detection}
\label{sec:rw}
ML-based intrusion detection is becoming attractive for a variety of network environments including service providers \cite{classif_dimauro}, sensor networks \cite{classif_liotta}, Internet of Things \cite{tnsm-classif-iot}, and automotive \cite{classif_pascale}. This notwithstanding, a great part of the scientific literature facing the problem of data traffic classification through machine learning techniques suffers from the dataset obsolescence issue. For many years, the only dataset available to the scientific community was the so-called KDD$99$, which is still broadly used today to validate ML-based algorithms and techniques when dealing with traffic classification problems. Unfortunately, this is an outdated, $20$-year-old dataset involving network attacks that have already been mitigated. An example is {\em satan probing}, an attack aimed at probing a computer for security loopholes, based on the {\em satan} tool that was created at the end of the 1990s but is today no longer documented as a Web page. Other examples include: {\em warezmaster/warezclient}, which exploits some old vulnerabilities of the anonymous FTP protocol, and {\em smurf attacks}, aimed at exploiting default router settings that allowed directed broadcasts. Yet, nowadays, router vendors simply deactivate this functionality. An ameliorated version of KDD$99$ is known as NSL-KDD. However, despite providing some improvements (e.g. no redundant records, better balancing between training and test set), NSL-KDD still does not take into account crucial information that characterizes novel cyber attacks. This notwithstanding, both the KDD$99$ and NSL-KDD datasets are still broadly used to test some functionalities of NIDSs that implement neural-based techniques. For instance, in \cite{precic_paper2,precic_paper3,ids_ann_19} the authors show the effectiveness of using an NIDS in conjunction with an artificial neural network (ANN) to improve the quality of traffic detection, where a validation stage on the KDD$99$ dataset is performed. A deep learning approach is used in \cite{precic_paper1}, based on KDD$99$, to verify accuracy against an SVM methodology. A novel approach based on ANN (referred to as self-taught learning) is applied in \cite{precic_paper4} to enable an NIDS to detect previously unseen attacks via reconstructions made on unlabeled data. This work provides tests on both KDD$99$ and NSL-KDD. In \cite{precic_paper5} and \cite{precic_paper6} the authors adopt neural-based methods exploiting Self-Organizing Maps.
In \cite{precic_paper7} and \cite{precic_paper8}, Learning Vector Quantization is coupled with SVM and k-Nearest Neighbor, respectively, to detect traffic anomalies. In \cite{ids_dnn_1,ids_dnn_2,ids_dnn3,ids_dnn4} deep neural network concepts are applied to intrusion detection systems. The KDD$99$ and NSL-KDD datasets have also been used to test a variety of non-neural techniques such as Support Vector Machine \cite{kdd_svm1,kdd_svm2,kdd_svm3,kdd_svm4,kdd_svm5}, Principal Component Analysis \cite{kdd_pca1,kdd_pca2,kdd_pca3,kdd_pca4,kdd_pca5}, Decision Trees \cite{kdd_decision1,kdd_decision2,kdd_decision3,kdd_decision4,kdd_decision5}, and various unsupervised approaches \cite{kdd_unsup1,kdd_unsup2,kdd_unsup3,kdd_unsup4,kdd_unsup5,kdd_unsup6}. In contrast to the aforementioned works, some recent datasets are emerging from new testbeds which adhere more strictly to real-world network scenarios. For instance, the Cyber Range Lab of the Australian Centre for Cyber Security provides two recent datasets: the UNSW-NB15 dataset \cite{unsw2}, which includes a mix of legitimate network activities and synthetic attacks, and the Bot-IoT dataset \cite{bot-iot1}, which embeds normal and simulated network traffic gathered in an IoT-compliant testbed, along with various types of attacks. Again, the datasets recently released by the Canadian Institute for Cybersecurity (CIC) \cite{cic} represent the state of the art in terms of both complexity and novelty of network attacks. These datasets have been created starting from an experimental testbed under controlled conditions \cite{icissp-dataset3}, whereby an attack network (including a router, a switch, and four PCs equipped with Kali Linux OS, a popular Linux distribution for penetration testing) is counterposed to a victim network (including routers, firewalls, switches, and various PCs equipped with Windows, Linux, and MacOS operating systems). An evolved version of this testbed has been designed to run on Amazon AWS \cite{aws}. In this case, the attacking infrastructure includes $50$ PCs, and the victim network includes $420$ PCs and $30$ servers. The implemented attacks encompass Distributed Denial of Service (DDoS), Portscan, and Bruteforce, along with typical Android-based network attacks injecting malicious code such as adware, malware, and ransomware. The interest in such novel datasets is witnessed by recent works, as detailed in the following. In \cite{cic_paper1}, the authors validate an artificial neural network system on a CIC-released dataset to detect the malicious traffic generated by a botnet among the regular traffic. A hybrid neural network to detect anomalies in network traffic is proposed in \cite{cic_paper2}, where the CIC-IDS-2017 dataset has been exploited. Specifically focused on DDoS detection is the work in \cite{cic_paper3}, where a neural-based approach relying on the implementation of a simple Multi-Layer Perceptron is contrasted with the Random Forest technique. Again focused on DDoS detection is \cite{cic_paper4}, where some classic ML-based techniques (e.g. Na\"{\i}ve Bayes and Logistic Regression) are used to distinguish regular traffic from malicious traffic.
\begin{table*}[h]
\centering
\caption{Prominent Related Work surveying ML techniques applied to Network Intrusion Detection.}
\small
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{|p{2.7cm}|p{3cm}|p{2.5cm}|p{6cm}|}
\hline
\textbf{Authors} & \textbf{Experiments} & \textbf{Single/Multi Class} & \textbf{Description} \\ \hline
Nguyen et al. \cite{survey_pure1} & N/A & N/A & Classic survey on ML-based techniques with pointers to other works but with no experiments. \\ \hline
Boutaba et al. \cite{survey_pure2} & N/A & N/A & Survey on ML-based techniques applied to various networking-related problems (from traffic classification to routing or QoS/QoE management). \\ \hline
Hindy et al. \cite{survey_pure3} & N/A & N/A & Survey on IDS techniques taking into account ML algorithms such as ANN, K-means and SVM. \\ \hline
Khraisat et al. \cite{survey_pure4} & N/A & N/A & Survey on Signature and Anomaly-based IDS techniques applying ML methods on the NSL-KDD dataset. \\ \hline
Aldweesh et al. \cite{survey_pure5} & N/A & N/A & Survey on Deep Learning techniques for IDSs with pointers to other works but with no experiments. \\ \hline
Fernandes et al. \cite{survey_pure6} & N/A & N/A & Survey on various techniques (ML, Statistical, Information Theory) for intrusion detection with pointers to other works but with no experiments. \\ \hline
Buczak et al. \cite{survey_pure7} & N/A & N/A & Survey on Data Mining and ML methods for intrusion detection with pointers to other works but with no experiments. \\ \hline
Tidjon et al. \cite{survey_pure8} & N/A & N/A & Survey on various techniques (mainly ML-based) to be applied in intrusion detection with pointers to other works but with no experiments. \\ \hline
Azwar et al. \cite{azwar} & Performance analysis & Single Class & Non-homogeneous comparisons among various approaches (trees, NN) using the modern CIC-IDS$17$ dataset. \\ \hline
Moustafa et al. \cite{moustafa_table} & Performance analysis & Single Class & Holistic survey on ML methods with experiments on feature reduction techniques (ARM, PCA, ICA). \\ \hline
Meena et al. \cite{Meena} & Time analysis & Single Class & Non-homogeneous comparisons between J48 and Na\"{\i}ve Bayes techniques on the KDD and NSL-KDD datasets. \\ \hline
Rama et al. \cite{kdd_rama} & Performance analysis, Time analysis (partial) & Single Class & Non-homogeneous comparisons among various algorithms (e.g. J48, Na\"{\i}ve Bayes, Bagging) on the KDD and NSL-KDD datasets. \\ \hline
Yin et al. \cite{kdd_rnn} & Performance analysis & Single/Multi Class & Non-homogeneous comparisons among various and different approaches (e.g. J48, ANN, SVM) on the KDD$99$ dataset. \\ \hline
This work & Performance analysis, Time analysis & Single/Multi Class & Homogeneous comparisons among neural-based approaches (Deep, Weightless NN, LVQ, SOM) performed on the modern CIC-IDS-2017/2018 datasets. \\ \hline
\end{tabular}
\label{tab:works}
\end{table*}
Going more precisely into the set of papers that share with the proposed work the purpose of surveying and/or comparing ML techniques for intrusion detection, we have gathered the prominent papers in Table \ref{tab:works}. The first column contains a pointer to the source; the second column reports the type of experimental analysis (if any); the third column highlights the type of datasets utilized (single/multi-class); the last column provides a brief description of the surveyed material, whereby the qualification ``non-homogeneous'' refers to a comparison among techniques belonging to different families, which are thus hardly comparable.
Going beyond the adoption of novel datasets, we want to highlight the two most significant differences between the proposed review and the technical literature. First, to avoid dispersion, we prefer to focus on a specific family of ML techniques, namely neural networks, so as to guarantee fairer comparisons among outcomes. Such a focus allows us to better reveal the surprising behavior of weightless neural networks (traditionally exploited in the field of image classification), which exhibit an extraordinarily advantageous accuracy/time-complexity trade-off w.r.t. other NN-based methods. Second, we carry out an experimental analysis covering performance and time complexity (the latter often neglected in the technical literature) for all the described NN techniques; this effort overcomes the limits of gathering findings from various works (where different authors exploit different testbeds exhibiting different performance), thus resulting in a uniform vision of neural methods in the field of intrusion detection. Accordingly, Table \ref{tab:works} helps pinpoint such novel aspects of the present paper, as per its last row.
\section{Neural-based techniques under scrutiny}
\label{sec:nn}
In this section we offer a brief recap of the neural-based techniques that we then employ in Sect. \ref{sec:experim} to perform our comparative assessment. We start with the Multi-Layer Perceptron (MLP), which represents the basis for implementing ANNs and deep neural networks (DNN). Then, we consider WiSARD, one of the most representative algorithms of weightless neural networks. These typically provide interesting performance results, but have traditionally been applied in domains such as image classification rather than intrusion detection. We examine Learning Vector Quantization (LVQ) methods (with $3$ variants), where the notion of {\em codebook vector} will be introduced. Finally, we take into account the Self-Organizing Maps (SOM) technique.
\subsection{Multi-Layer Perceptron}
The Multilayer Perceptron is one of the most representative types of feedforward ANNs, in which no cycles arise in the connections among nodes. MLPs exhibit a fully connected structure of neurons, where each individual neuron calculates the weighted sum of $n$ inputs, adds a bias term $b$, and then applies an activation function $\phi(\cdot)$ to produce the output $s$, namely
\begin{equation}
s=\phi \left( \sum_{i=1}^{n}w_i x_i +b \right).
\label{eq:neuron}
\end{equation}
Figure \ref{fig:nn} depicts \textit{i)} (left panel) the model of a single neuron (or single-unit perceptron) implementing (\ref{eq:neuron}), and \textit{ii)} (middle panel) a simplified structure of a typical MLP model with $5$ neurons in the input layer, $3$ neurons in the hidden layer, and $1$ neuron in the output layer. In the case of multiple hidden layers, the MLP implements a Deep Neural Network (DNN) \cite{bengio}, as reported in the right panel. The basic MLP functioning is described next. Input $I_n$ (see Fig. \ref{fig:nn} - middle panel) activates the hidden layer (or layers) by following the forward activation direction, which is what justifies the term {\em feed-forward} neural network. Similarly, neurons in the hidden layers feed forward into output neurons; thus, an output value is obtained. It is useful to recall that the activation function is aimed at deciding whether or not a neuron should be activated, introducing some non-linearity into the neuron output.
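To make the above concrete, the following minimal sketch (our own illustration, not the implementation used for the experiments of Sect. \ref{sec:experim}) reproduces the single-neuron computation of (\ref{eq:neuron}) and a forward pass through a small MLP, with ReLU in the hidden layer and Softmax at the output; the layer sizes mirror the middle panel of Fig. \ref{fig:nn}, except that two output neurons are used to expose class probabilities.
\begin{verbatim}
# Minimal sketch of Eq. (1) and of an MLP forward pass (illustration only).
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def neuron(x, w, b, phi=relu):
    # s = phi(sum_i w_i * x_i + b)
    return phi(np.dot(w, x) + b)

def mlp_forward(x, W1, b1, W2, b2):
    h = relu(W1 @ x + b1)                 # hidden layer (ReLU)
    logits = W2 @ h + b2
    e = np.exp(logits - logits.max())     # numerically stable Softmax
    return e / e.sum()                    # output layer (class probabilities)

rng = np.random.default_rng(42)
x = rng.random(5)                                    # 5 input features
W1, b1 = rng.standard_normal((3, 5)), np.zeros(3)    # 5 -> 3 hidden neurons
W2, b2 = rng.standard_normal((2, 3)), np.zeros(2)    # 3 -> 2 output classes
print(neuron(x, W1[0], b1[0]))                       # single-neuron output s
print(mlp_forward(x, W1, b1, W2, b2))                # class probabilities
\end{verbatim}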
A variety of activation functions exist \cite{glorot}, including: step function, linear, sigmoid, hyperbolic tangent, ReLU (Rectified Linear Unit), Softmax, and many others.
\begin{figure*}[t]
\centering
\includegraphics[scale=0.65,angle=90]{Images/nn3}
\caption{Single Neuron model (left panel). MLP model (middle panel). Deep NN model (right panel).}
\label{fig:nn}
\end{figure*}
The MLP training stage is achieved through \textit{backpropagation}, a mechanism exploited to adjust the neurons' weights so as to progressively minimize the error through Gradient Descent (GD), an iterative optimization algorithm used to find local minima. Precisely, the purpose is to minimize an error function (e.g. least squares):
\begin{equation}
\mathcal{E}=\sum_{p \in \mathcal{P}} || t_p - s_p ||^2,
\end{equation}
where $\mathcal{P}$ is the set of training patterns, $t_p$ is the target, and $s_p$ is the output for the example pattern $p$. The weight-updating rule, used to progressively compute the new weight $w_{new}$, is derived by evaluating the gradient $\partial \mathcal{E} / \partial w$, so that:
\begin{eqnarray}
w_{new}&=&w+\Delta w, \nonumber \\
\Delta w &=& - \eta \frac{\partial \mathcal{E}}{\partial w} + \alpha \Delta w_{prev},
\end{eqnarray}
where: \textit{i)} $\eta$ is the {\em learning rate}, namely a hyperparameter lying in the range $(0,1)$ associated with the step size of the GD procedure (N.B.: a too small $\eta$ implies slow convergence, whereas a too large $\eta$ could result in indefinite oscillations); \textit{ii)} $\alpha$ is defined as the {\em momentum}, a term lying in the range $(0,1)$ used to weight the influence of the previous update $\Delta w_{prev}$. It is worth noting that, since derivative operations are involved in the backpropagation algorithm, non-linear activation functions whose derivatives exist and are finite (e.g. Sigmoid, ReLU) have to be exploited. In our MLP-based experiment we use two types of activation functions: ReLU for all the layers except the output, and Softmax for the neurons in the output layer. ReLU has been proven to be one of the most effective activation functions when dealing with deep neural networks \cite{relu1,relu2}, since it allows the whole network to converge rapidly. In turn, Softmax is particularly suited for handling multiple classes in the output stage \cite{softmax}.
\subsection{Evolved Deep architectures}
Deep learning approaches make it possible to face a problem in a hierarchical way. Lower layers of the model are associated with a basic representation of the problem, whereas higher layers encode more complex aspects. Inputs feeding each layer of a DNN are manipulated through transformations which are parametrized by a number of weights. Although deep approaches are very promising, two main issues remain open: first, training these architectures requires significant computational power; second, the huge number of hyperparameters makes the tuning process very hard. In the following, we briefly discuss the most recent architectures relying on a deep-based approach.
\textbf{Convolutional Neural Networks (CNNs)}: Such a technique \cite{cnn} takes inspiration from the human visual cortex, which embodies areas of cells responsive to particular regions of the visual field. This structure makes CNNs exceptionally suited for applications such as image classification or object detection. Two main stages characterize the CNN lifecycle: feature extraction and classification.
In the first stage, the so-called convolutional filters extract multiple features from the input and encode them in feature maps. The ``convolution'' is the mathematical operation consisting of an element-wise product and sum between two matrices, and has the main drawback of being hugely time-consuming. The output of each convolutional layer feeds the activation function layer (e.g. ReLU), which produces an activation map starting from a feature map. Finally, an optional pooling layer keeps only significant data. On the other hand, the classification stage is composed of a number of fully connected (or dense) layers followed by a Softmax output layer.
\textbf{Recurrent Neural Networks (RNNs)}: It is a class of DNNs conceived on the basis of a work of Rumelhart \cite{rnn}, explicitly designed to deal with sequential data. Thus, RNNs are well suited for modelling language (intended as a sequence of interconnected words) in the field of so-called natural language processing (NLP). The key concept in RNNs is the presence of cycles, which represent the internal memory exploited to evaluate current data with respect to the past. Such a temporal dependency calls for the introduction of time-based hidden states obeying:
\begin{eqnarray}
h(t)=\phi(h(t-1),x(t)),
\end{eqnarray}
with the meaning that an internal state $h$ at time $t$ can be represented in terms of the input at time $t$ and the previous hidden state at time $t-1$. In this way, an RNN is helpful to predict the next element of a time series, or the next word in a sentence based on a number of previous words. One of the main drawbacks of RNNs is dealing with long-term dependencies, connected with transferring information from earlier to later time steps across long sequences. To tackle this issue, two more sophisticated variants of RNNs have been introduced: LSTM and GRU.
\textbf{Long Short-Term Memory (LSTM)}: It is a special RNN architecture \cite{lstm} conceived to learn long-term dependencies, namely, to store information for a long period of time. Basically, an LSTM unit (replacing an ordinary node in the hidden layer) is represented by a cell state in charge of carrying the information, and by three different \textit{gates} aimed at regulating the information flow: \textit{forget}, \textit{input}, and \textit{output} gates. The \textit{forget} gate decides what information to keep or discard on the basis of a forgetting coefficient calculated from the input data $x(t)$ and the previous hidden state $h(t-1)$; the \textit{input} gate decides how to update the cell state; the \textit{output} gate decides which information has to be passed to the next unit, on the basis of the input data and the previous hidden state. For the $i$-th LSTM unit, the hidden state at time $t$ can be expressed as:
\begin{eqnarray}
h^i(t)=out^i(t) \cdot \tanh(c^i(t)) ,
\end{eqnarray}
where $out^i(t)$ is an output gate which tunes the amount of memory, and $c^i(t)$ represents the cell state of LSTM unit $i$ at time $t$.
\textbf{Gated Recurrent Unit (GRU)}: It is a lightweight version of LSTM \cite{gru} and has only two gates: the \textit{update} gate and the \textit{reset} gate. The former plays a role similar to the forget and input gates of LSTM; the latter is exploited to decide how much past data to forget. In the GRU model, the hidden state $h^i(t)$, corresponding to the $i$-th GRU unit, can be expressed as a linear interpolation between the hidden state at time $t-1$ and the candidate activation (or hidden state) $\widetilde{h^i}(t)$, viz.
\begin{eqnarray}
h^i(t)=(1-z^i(t)) h^i(t-1) + z^i(t) \widetilde{h^i}(t),
\end{eqnarray}
where $z^i(t)$ is the update gate, which decides the amount of updating towards the candidate activation.
\begin{figure}[t]
\centering
\includegraphics[scale=0.36,angle=90]{Images/discriminator}
\caption{Model of a single class discriminator for the WiSARD algorithm.}
\label{fig:discri}
\end{figure}
\subsection{WiSARD}
WiSARD\footnote{\textbf{Wi}lkes, \textbf{S}tonham and \textbf{A}leksander \textbf{R}ecognition \textbf{D}evice} is a supervised method \cite{wisard,wisard_new} that was originally conceived for image classification, but has recently been proven to be effective in more general multi-class problems. WiSARD falls under the class of weightless neural networks (WNNs) since it exploits lookup tables, instead of weights, to store the functions evaluated by the individual neurons. WNNs rely on a mechanism inspired by the encoding functionalities of random access memory (RAM), since input data are transformed into binary form. This process has a noteworthy benefit in terms of time complexity due to the use of Boolean logic, which can be further improved by exploiting pipelining and parallelism. In short, WiSARD is composed of a set of classifiers (or {\em discriminators}), each one in charge of learning binary patterns associated with a particular class. In turn, a discriminator is composed of a set of neurons, referred to as RAM neurons, as depicted in Fig. \ref{fig:discri}. Similarly to conventional RAM circuits, a RAM neuron (a.k.a. $n$-tuple neuron) can be interpreted as a RAM having $2^n$ memory locations addressed by $n$ address lines (inputs) representative of neuron connectivity. During the training stage, each RAM neuron learns the occurrences of $n$-tuple vectors extracted from a training pattern, and stores them in a memory cell (equivalent to a RAM writing operation). Precisely, let $\mu_{a,i}$ be the memory cell with address $a$ in the $i$-th RAM (initially empty); the following update rule holds:
\begin{equation}
\mu_{a,i}=\theta \left( \sum_{{p}\in{\mathcal{P}}} \delta_{a,a_i (p)} \right),
\end{equation}
where: function $\theta(z)$ amounts to $z$ if $0\le z \le 1$ and to $1$ if $z>1$; $p$ is a pattern defined in the training set $\mathcal{P}$; $a_i (p)$ is the address generated starting from pattern $p$; $\delta$ is the Kronecker delta function, amounting to $1$ if $a=a_i (p)$ and $0$ elsewhere. The classification stage (equivalent to a RAM reading operation) consists of classifying an unseen pattern $s$ by assigning $s$ to the class $c$ whose discriminator exhibits the highest output, namely
\begin{equation}
\argmax_{c} \left( \sum_{i=1}^{K} \mu_{a_i(s),i} \right),
\end{equation}
where the response of the $c$-th class discriminator on pattern $s$ is:
\begin{equation}
r_c(s)=\frac{1}{K} \sum_{i=1}^{K} \mu_{a_i(s),i}.
\end{equation}
Accordingly, the overall response of a set of discriminators trained on $N$ classes produces a response vector $\bold{r}(s)=[r_{c_1}(s),\dots,r_{c_N}(s)]$.
\subsection{Learning Vector Quantization}
Learning Vector Quantization (LVQ) \cite{lvq} directly stems from classical Vector Quantization, a signal-approximation method aimed at representing the input data vectors $x \in \mathbb{R}^n$ through a finite number of {\em codebook} vectors $m_i \in \mathbb{R}^n, i=1,2,\dots,k$.
The goal is to find the codebook vector ${m_v}$ that best approximates $x$, usually in terms of Euclidean distance, namely:
\begin{equation}
v= \argmin_{i} || x - m_i ||.
\label{eq:lvq}
\end{equation}
The main purpose of LVQ is to define {\em class regions} (over the input data space), each one containing a subset of similarly labeled codebook vectors. Accordingly, it is possible to pinpoint hyperplanes between neighboring codebook vectors defining the so-called {\em quantization regions}. By assuming that all samples of ${x}$ derive from a finite set of classes $S_k$, the idea is to \textit{i)} assign a subset of codebook vectors to each class $S_k$, and \textit{ii)} search for ${m_v}$ having the smallest Euclidean distance from ${x}$. Different versions of LVQ have been introduced in the literature with slight differences, as described in the following.
\vspace{6pt}
\subsubsection{LVQ1}
Assume that $x(t)$ is an input sample and $m_i(t)$ contains sequential values of $m_i$ ($t=0,1,2,\dots$). The LVQ1 algorithm makes it possible to find values of $m_i$ in (\ref{eq:lvq}) that asymptotically minimize classification errors, and is defined by the following set of equations:
\begin{eqnarray}
&m_v(t+1)&=m_v(t)+\alpha(t)\left[x(t)-m_v(t)\right], \label{eq:lvq_1} \\
&m_i(t+1)&=m_i(t), \label{eq:lvq_2} \label{eq:lvq1}
\end{eqnarray}
where (\ref{eq:lvq_1}) is applied with the plus sign if $x$ and $m_v$ belong to the same class (and with a minus sign if they belong to different classes), (\ref{eq:lvq_2}) holds for $ i \neq v$, and $\alpha(t)$ is the learning rate. This algorithm also admits an optimized version (a.k.a. OLVQ1), where an individual learning rate $\alpha_i$ is assigned to each $m_i$; thus, the basic $m_v(t+1)$ equation in (\ref{eq:lvq1}) becomes:
\begin{equation}
m_v(t+1)=\left[ 1- c(t)\alpha_v(t) \right] m_v(t) + c(t)\alpha_v(t)x(t),
\end{equation}
where $c(t)=+1$ [$-1$] if the classification is correct [wrong].
\vspace{6pt}
\subsubsection{LVQ2}
Differently from the standard procedure implemented in LVQ1, here two codebook vectors $m_i$ and $m_j$ (belonging to the correct and to the wrong class, respectively) are simultaneously updated. In this case, $x$ must fall within a ``window'' defined around the hyperplane between $m_i$ and $m_j$. The corresponding algorithm is defined by the following set of equations:
\begin{eqnarray}
&m_i(t+1)&=m_i(t)+\alpha(t)\left[ x(t) -m_i(t) \right], \label{eq:lvq2_1} \\
&m_j(t+1)&=m_j(t)-\alpha(t)\left[ x(t) -m_j(t) \right], \label{eq:lvq2_2}
\end{eqnarray}
where (\ref{eq:lvq2_1}) holds since $x$ and $m_i$ belong to the same class, (\ref{eq:lvq2_2}) holds since $x$ and $m_j$ belong to different classes, with $m_i$ and $m_j$ being the two closest codebook vectors to $x$. Moreover, $x$ has to fall within a window of relative width $w$, namely
\begin{equation}
\textnormal{min} \left( \frac{e_i}{e_j}, \frac{e_j}{e_i} \right) > k,
\end{equation}
where $e_i$ and $e_j$ represent, respectively, the Euclidean distances of $x$ from $m_i$ and $m_j$, and $k=\frac{1-w}{1+w}$.
\vspace{6pt}
\subsubsection{LVQ3}
\label{sec:lvq3}
This variant of LVQ2 admits the same set of equations (\ref{eq:lvq2_1}), (\ref{eq:lvq2_2}), with the difference that the learning rate is weighted with a parameter $\epsilon$, whose best values are empirically found to lie in the interval $[0.1,0.5]$:
\begin{equation}
m_h(t+1)=m_h(t)+\epsilon \alpha(t) \left[x(t)-m_h(t)\right],
\end{equation}
for $h \in \{i,j\}$ if $x$, $m_i$, and $m_j$ belong to the same class.
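For illustration purposes only, one LVQ1 training epoch can be sketched as follows (a simplification of ours, not our evaluation code; the actual experiments rely on the hyperparameters later reported in Table \ref{tab:hyperparams}, e.g. $20$ codebook vectors and a learning rate of $0.3$):
\begin{verbatim}
# Minimal sketch of one LVQ1 epoch (illustration only): the closest codebook
# vector is attracted if it carries the same label, repelled otherwise.
import numpy as np

def lvq1_epoch(X, y, codebook, cb_labels, alpha=0.3):
    for x, label in zip(X, y):
        v = np.argmin(np.linalg.norm(codebook - x, axis=1))  # winner index v
        sign = 1.0 if cb_labels[v] == label else -1.0        # same class: attract
        codebook[v] += sign * alpha * (x - codebook[v])      # LVQ1 update rule
    return codebook

rng = np.random.default_rng(1)
X, y = rng.random((100, 78)), rng.integers(0, 2, 100)  # toy data: 78 features
cb, cb_y = X[:20].copy(), y[:20].copy()                # 20 codebook vectors
cb = lvq1_epoch(X, y, cb, cb_y)
\end{verbatim}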
\subsection{Self-Organizing Maps}
SOM takes inspiration from a particular adaptive characteristic that allows the human brain to reinforce learned experience. Specifically, given a physical stimulus (namely, the input) which activates multiple neurons of a certain brain area in parallel, those neurons that are more sensitive to the input stimulus end up influencing all the other neighboring neurons. This biological mechanism has led to designing SOM as a nonlinear mapping of high-dimensional input data onto elements of a low-dimensional array (a.k.a. \textit{lattice}) \cite{kohonen}, according to a principle known as competitive learning. This is different from the classic ANN-based approach, where weights are updated to iteratively minimize errors. In competitive learning, several neurons are fed with the same input (in parallel) and compete to become the possible ``winner'' in relation to that particular input. According to this strategy, and assuming that the weight vector of neurons has the same dimensionality as the input, the output neuron activation increases with larger similarity between the weight vector of the neuron and the input \cite{aggarwal}. Precisely, by considering a network composed of $k$ neurons (with $k$ much smaller than the data set size), an input vector $x=\left[ x_1, x_2, \dots, x_n \right]^T \in \mathbb{R}^n$ and a reference (or weight) vector $m_i=\left[m_{i1}, m_{i2}, \dots, m_{in} \right]^T \in \mathbb{R}^n$ associated with neuron $i$, the competitive approach can be summarized according to the following steps:
\begin{itemize}
\item[] \textit{i)} Compute the winner neuron $c=\argmin_{i} \lVert x - m_i \rVert$, $i \in \{1,\dots,k\}$, where the distance $\lVert x - m_i \rVert$ has the meaning of the activation value for the $i$-th neuron; the neuron achieving the smallest distance is declared the winner.
\item[] \textit{ii)} The $i$-th neuron is updated according to the following rule: $m_i(t+1)=m_i(t) + h(t)\left[ x(t) - m_i(t) \right]$, where $t=0,1,\dots$ is an integer discrete time reference, whereas $h(t)$ is the {\em neighborhood function} defined over the lattice points, typically implemented through a Gaussian kernel (the same as the one exploited for the present experimental analysis):
\begin{equation}
h(t)=\alpha(t)\cdot \textnormal{exp} \left( - \frac{\lVert \ell_c - \ell_i \rVert^2}{2 \sigma^2 (t)} \right),
\end{equation}
where: $\alpha(t)$ is the learning rate factor, $\ell_c \in \mathbb{R}^2$ and $\ell_i \in \mathbb{R}^2$ are, respectively, the location vectors of neurons $c$ and $i$ mapped on the lattice structure (supposed to be bi-dimensional), and $\sigma(t)$ represents the kernel width.
\end{itemize}
Although SOM may be considered the unsupervised counterpart of LVQ, a common way to exploit SOM for classification purposes is to consider a supervised version (sometimes dubbed LVQ-SOM \cite{kohonen}, and implemented in our analysis) relying on the following consideration: once each training sample $x(t)$ and each $m_i(t)$ have been assigned to specific classes, $h(t)$ has to be set positive if $x(t)$ and $m_i(t)$ belong to the same class, and negative if they belong to different classes, with such a rule applied to each $m_i(t)$ in the neighborhood of the winner.
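Again for illustration only, the competitive step described above can be sketched as follows (here in its unsupervised form; the supervised LVQ-SOM variant used in our analysis additionally flips the sign of $h(t)$ on class mismatch):
\begin{verbatim}
# Minimal sketch of one SOM competitive-learning step (illustration only).
import numpy as np

def som_step(x, m, loc, alpha=0.3, sigma=1.0):
    # m: (k, n) weight vectors; loc: (k, 2) lattice coordinates of the neurons
    c = np.argmin(np.linalg.norm(m - x, axis=1))   # winner neuron
    d2 = np.sum((loc - loc[c]) ** 2, axis=1)       # squared lattice distances
    h = alpha * np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian neighborhood h(t)
    m += h[:, None] * (x - m)                      # update rule of step ii)
    return m

rng = np.random.default_rng(2)
m = rng.random((16, 78))                                         # 16 neurons
loc = np.array([[i, j] for i in range(4) for j in range(4)], float)  # 4x4 lattice
m = som_step(rng.random(78), m, loc)
\end{verbatim}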
\begin{table}[htp] \centering \caption{Synthetic description of adopted features} \resizebox{.45\textwidth}{!}{ \begin{tabular}{c|l} \textbf{Family} & \multicolumn{1}{c}{\textbf{List of features}} \\ \midrule \multirow{5}[2]{*}{\textbf{Coarse-Grained}} & 1. Source IP Address \\ & 2. Destination IP Address \\ & 3. Source Port \\ & 4. Destination Port \\ & 5. Transport Protocol Type \\ \midrule \multirow{23}[2]{*}{\textbf{Time-Based}} & 6. Flow duration \\ & 7. Average inter-arrival times (IAT) between two flows \\ & 8. IAT standard deviation (std) between two flows \\ & 9. IAT max between two flows \\ & 10. IAT min between two flows \\ & 11. IAT tot between two pkts sent in fwd direction \\ & 12. IAT avg between two pkts sent in fwd direction \\ & 13. IAT std between two pkts sent in fwd direction \\ & 14. IAT max between two pkts sent in fwd direction \\ & 15. IAT min between two pkts sent in fwd direction \\ & 16. IAT tot between two pkts sent in bwd direction \\ & 17. IAT avg between two pkts sent in bwd direction \\ & 18. IAT std between two pkts sent in bwd direction \\ & 19. IAT max between two pkts sent in bwd direction \\ & 20. IAT min between two pkts sent in bwd direction \\ & 21. Avg time a flow was active before becoming idle \\ & 22. Std time a flow was active before becoming idle \\ & 23. Min time a flow was active before becoming idle \\ & 24. Max time a flow was active before becoming idle \\ & 25. Avg time a flow was idle before becoming active \\ & 26. Std time a flow was idle before becoming active \\ & 27. Min time a flow was idle before becoming active \\ & 28. Max time a flow was idle before becoming active \\ \midrule \multirow{6}[2]{*}{\textbf{Flow-Based}} & 29. Flow byte rate \\ & 30. Flow pkt rate \\ & 31. Avg no. of pkts in a sub-flow in fwd direction \\ & 32. Avg no. of pkts in a sub-flow in bwd direction \\ & 33. Avg no. of bytes in a sub-flow in fwd direction \\ & 34. Avg no. of bytes in a sub-flow in bwd direction \\ \midrule \multirow{20}[2]{*}{\textbf{Packet-Based}} & 35. Tot pkts in the fwd direction \\ & 36. Tot length of pkts in the fwd direction \\ & 37. Avg length of pkts \\ & 38. Std length of pkts \\ & 39. Variance length of pkts \\ & 40. Avg length of pkts in the fwd direction \\ & 41. Std length of pkts in the fwd direction \\ & 42. Max length of pkts in the fwd direction \\ & 43. Min length of pkts in the fwd direction \\ & 44. Tot pkts in the bwd direction \\ & 45. Tot length of pkts in the bwd direction \\ & 46. Avg length of pkts in the bwd direction \\ & 47. Std length of pkts in the bwd direction \\ & 48. Max length of pkts in the bwd direction \\ & 49. Min length of pkts in the bwd direction \\ & 50. Avg no. of pkt bulk rate in fwd direction \\ & 51. Avg no. of pkt bulk rate in bwd direction \\ & 52. Fwd pkt rate \\ & 53. Bwd pkt rate \\ & 54. Min segment size in fwd direction \\ \midrule \multirow{6}[2]{*}{\textbf{Byte-Based}} & 55. Avg no. of byte rate in fwd direction \\ & 56. Avg no. of byte rate in bwd direction \\ & 57. No. of bytes sent in init win in fwd direction \\ & 58. No. of bytes sent in init win in bwd direction \\ & 59. Tot bytes used for headers in fwd direction \\ & 60. Tot bytes used for headers in bwd direction \\ \midrule \multirow{19}[2]{*}{\textbf{Flag-Based}} & 61. No. of times URG flag set in fwd direction \\ & 62. No. of times PSH flag set in fwd direction \\ & 63. No. of times FIN flag set in fwd direction \\ & 64. No. of times SYN flag set in fwd direction \\ & 65. No. 
of times RST flag set in fwd direction \\ & 66. No. of times ACK flag set in fwd direction \\ & 67. No. of times URG flag set in bwd direction \\ & 68. No. of times PSH flag set in bwd direction \\ & 69. No. of times FIN flag set in bwd direction \\ & 70. No. of times SYN flag set in bwd direction \\ & 71. No. of times RST flag set in bwd direction \\ & 72. No. of times ACK flag set in bwd direction \\ & 73. PSH flag count \\ & 74. FIN flag count \\ & 75. SYN flag count \\ & 76. RST flag count \\ & 77. ACK flag count \\ & 78. ECE flag count \\ \bottomrule \end{tabular}% } \label{tab:superfeatures}% \end{table}% \begin{table*}[t] \centering \caption{Optimized hyperparameters for the exploited algorithms.} \small \renewcommand{\arraystretch}{1.35} \begin{tabular}{|p{2.1cm}|p{8cm}|} \hline \textbf{Algorithm} & \textbf{Optimized hyperparameters and model info} \\ \hline MLP-$1$, Deep-$2$, Deep-$3$ & $\cdot$ Stochastic Gradient Descent with adaptive hyperparams. (Adam version \cite{adam}) \\ & $\cdot$ LR (learn. rate)=$0.001$ \\ & $\cdot$ Number of weights=$2000$ \\ & $\cdot$ Exp. Decay (first moment estimate)=$0.9$ \\ & $\cdot$ ReLU activation function \\ & $\cdot$ Neurons per hidden layer: MLP-$1$($26$); Deep-$2$($23$,$10$); Deep-$3$($20$,$16$,$11$) \\ \hline Convolutional & $\cdot$ $7$ filters $(4\times1)$, $8$ neurons fully connected \\ & $\cdot$ $1$ Pooling layer with Pooling size = 2 \\ & $\cdot$ $1$ Dropout layer with Dropout rate = 0.3 \\ & $\cdot$ ReLU activation function \\ \hline Recurrent-type & $\cdot$ $18$ Recurrent units (RNN), $10$ neurons fully connected \\ & $\cdot$ $6$ LSTM units, $8$ neurons fully connected\\ & $\cdot$ $8$ GRU units, $10$ neurons fully connected \\ \hline WiSARD & $\cdot$ Batch Size=$100$ \\ & $\cdot$ Resolution (in bits) per neuron: $8$ \\ \hline LVQ(1,2,3) & $\cdot$ Batch Size=$100$ \\ & $\cdot$ LR=$0.3$ \\ & $\cdot$ Codebook Vectors=$20$ \\ & $\cdot$ Window Size (for LVQ$2$ and LVQ$3$)=$0.3$ \\ & $\cdot$ $\epsilon$ (for LVQ$3$)=$0.1$ \\ \hline SOM & $\cdot$ Batch Size=$100$ \\ & $\cdot$ LR=$0.3$ \\ & $\cdot$ Hexagonal Topology with Neighborhood Size = $8$ \\ & $\cdot$ Neighborhood Function: Gaussian \\ \hline \end{tabular} \label{tab:hyperparams} \end{table*} \section{The Experimental Datasets} \label{sec:dataset} In this section we present further details about the datasets employed in our experimental analysis. As already remarked in Section \ref{sec:rw}, the datasets must convey the most recent information about a variety of cyber attacks across data networks. We adopted datasets released by CIC \cite{cic}, namely: the {\em DDoS} dataset, containing traffic relating to distributed denial of service attacks designed to saturate network resources; the {\em Portscan} dataset, including instances of Portscan, a technique used to discover open ports on network devices; the {\em WebAttack} dataset, which encompasses various malicious traffic ranging from Cross-Site Scripting to SQL Injection; the {\em TOR} dataset, a collection of network traffic traversing the anonymous TOR circuit, often conveying malicious information; and the {\em Android} dataset, embedding a number of mobile (Android-based) adware samples. These datasets have been used to perform two kinds of neural-based analyses: a {\em single-class} analysis, aimed at classifying the network traffic as benign or malicious (binary information); and a {\em multi-class} analysis, aimed at distinguishing more classes of attacks.
Each dataset has been cleaned and re-balanced through a Python routine designed from scratch, and contains $2\cdot10^4$ instances and $78$ features, which are grouped into $6$ macro-classes, as indicated in the following (refer to Table \ref{tab:superfeatures} for an exhaustive list of features): \begin{itemize} \item \textbf{Coarse-grained features:} Source and Destination Port, Protocol Type, Source and Destination IP Address; \item \textbf{Time-based features:} Backward/Forward inter-arrival times between two flows, duration of active flow (mean, std, min, max), duration of an idle flow (mean, std, min, max), etc.; \item \textbf{Flow-based features:} Length of a flow (mean, min, max, etc.); \item \textbf{Packet-based features:} Backward/Forward number of packets in a flow, Backward/Forward length of packets in a flow (mean, std, min, max, etc.); \item \textbf{Byte-based features:} Backward/Forward number of bytes in a flow, Backward/Forward number of bytes used for headers, etc.; \item \textbf{Flag-based features:} Number of packets with active TCP/IP flags (SYN, FIN, PUSH, RST, URG, etc.). Please note that, for example, feature $62$ ($68$) indicates the no. of times the PSH flag is set in the forward (backward) direction, whereas feature $73$ indicates the overall number of packets containing such a flag. \end{itemize} These features allow one to derive statistical information that cannot be hidden in a possibly malicious flow. Let us consider, for instance, a DDoS attack. This is typically designed to overwhelm the resources of a target network \cite{ddos1bis,ddos2} by conveying legitimate information (e.g. trivial HTTP requests in the case of an application-layer DDoS attack) which would pass unnoticed by classic signature-based detection systems. However, DDoS attacks are designed as a coordinated effort where multiple malicious entities (a.k.a. {\em bots}) each send a few tiny packets to the target. Similarly, the structure of a Portscan attack, characterized by a very quick scan of the victim's destination ports by the network attacker, can be learned by combining destination-port information with time-based information. \begin{figure*}[t!] \centering \begin{tabular}{cc} \subfloat[]{\includegraphics[scale=0.51]{Images/single-dd-perf-colab2}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.51]{Images/single-ps-perf-colab2}} \hspace{2.5mm} \\ \subfloat[]{\includegraphics[scale=0.51]{Images/single-wa-perf-colab2}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.51]{Images/single-tor-perf-colab2}} \hspace{2.5mm} \end{tabular} \caption{Performance in terms of Accuracy/F-Measure for different single-class datasets: (a) DDoS; (b) Portscan; (c) WebAttack; (d) TOR.} \label{fig:perf_single} \end{figure*} \section{Experiment-based Assessment} \label{sec:experim} The main purpose of our experimental analysis is to compare the neural techniques introduced in Sect. \ref{sec:nn} across the datasets described in Section \ref{sec:dataset}. Aiming at a fair comparison, we adopt $10$-fold cross-validation for each experiment. The model structure for each algorithm and the pertinent hyperparameters are summarized in Table \ref{tab:hyperparams}. \noindent The whole assessment comprises: \begin{itemize} \item \textbf{Performance analysis}: obtained by evaluating two metrics typically used in the field of traffic classification \cite{accuracy2,tnsm-perf}, viz.
\SubItem {\em Accuracy}: the ratio of correctly predicted observations to the total, calculated by \begin{equation} \frac{TP+TN}{TP+TN+FP+FN}; \end{equation} \SubItem {\em F-Measure}: an indicator of per-class performance, calculated by \begin{equation} 2 \cdot \frac{\textnormal{Precision} \cdot \textnormal{Recall} }{\textnormal{Precision}+\textnormal{Recall}}, \end{equation} where Precision is the ratio of correctly classified traffic over the total predicted traffic in a class, and Recall is the ratio of correctly classified traffic over all ground-truth traffic in a class. \item \textbf{Time complexity analysis}: derived by measuring the whole classification process (including training time) as the number of instances grows from $10^3$ to $2\cdot10^4$. \end{itemize} We perform the overall analysis on a PC equipped with an Intel Core\texttrademark{} i5-7200U CPU @ 2.50 GHz and 16 GB of RAM. For the sake of convenience, we split our analysis in two: the single-class and the multi-class cases. \begin{figure*}[t!] \centering \begin{tabular}{cc} \subfloat[]{\includegraphics[scale=0.47]{Images/single_perf_var_inst_ddos}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.47]{Images/times_single_ddos}} \hspace{2.5mm} \\ \subfloat[]{\includegraphics[scale=0.47]{Images/single_perf_var_inst_ps}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.47]{Images/times_single_ps}} \hspace{2.5mm} \end{tabular} \caption{Single-class (DDoS): (a) Average Accuracy vs. dataset size; (b) Classification Time vs. dataset size. Single-class (Portscan): (c) Average Accuracy vs. dataset size; (d) Classification Time vs. dataset size.} \label{fig:aveacc_time} \end{figure*} \subsection{Single-Class Analysis} We start by evaluating the performance of the different neural-based techniques by considering four single-class datasets ({\em DDoS}, {\em Portscan}, {\em WebAttack}, and {\em TOR}), reporting accuracy and F-measure in Figures \ref{fig:perf_single}(a), \ref{fig:perf_single}(b), \ref{fig:perf_single}(c), and \ref{fig:perf_single}(d), respectively. The choice of four completely different datasets (representative of four substantially different network attacks) allows us to verify the effectiveness of the tested algorithms and their relationship with the data. Among the classic ANN-based algorithms we distinguish between deep versions (Deep-$2$, Deep-$3$) and non-deep ones (MLP-$1$), whose detailed structure is reported in Table \ref{tab:hyperparams}. MLP-$1$ refers to a standard MLP with $3$ layers: $1$ input layer, $1$ hidden layer, and $1$ output layer. On the other hand, the deep versions are indicated by Deep-$2$ ($1$ input layer, $2$ hidden layers, $1$ output layer) and Deep-$3$ ($1$ input layer, $3$ hidden layers, $1$ output layer). In order to compare algorithms of similar capacity, MLP-$1$, Deep-$2$ and Deep-$3$ have been set with the same number of weights ($2000$). This is about one order of magnitude smaller than the dataset size, according to what is recommended in the literature \cite{lisboa,prasad08}. We also note that the number of weights for MLP-$1$, Deep-$2$ and Deep-$3$ directly results from the different number of neurons chosen for each hidden layer, namely: $26$ neurons for MLP-1; $23$ and $10$ neurons for the two hidden layers of Deep-2; and $20$, $16$, and $11$ neurons for the $3$ hidden layers of Deep-3 (see also Table \ref{tab:hyperparams}). As concerns the evolved deep models (CNN, RNN, LSTM, GRU), details about their structures are available in Table \ref{tab:hyperparams}.
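For illustration, the three MLP-based architectures of Table \ref{tab:hyperparams} can be instantiated in a few lines. This is a minimal sketch assuming a Keras/TensorFlow stack (the paper does not prescribe a specific framework); feeding all $78$ features to the input layer, the softmax output, and the cross-entropy loss are illustrative choices.
\begin{verbatim}
from tensorflow import keras
from tensorflow.keras import layers

def build_mlp(hidden_sizes, n_features=78, n_classes=2):
    """ReLU hidden layers sized as in the hyperparameter table:
    MLP-1 -> (26,), Deep-2 -> (23, 10), Deep-3 -> (20, 16, 11)."""
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for size in hidden_sizes:
        model.add(layers.Dense(size, activation="relu"))
    model.add(layers.Dense(n_classes, activation="softmax"))
    # Adam optimizer with the learning rate quoted in the table
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

mlp1  = build_mlp((26,))
deep2 = build_mlp((23, 10))
deep3 = build_mlp((20, 16, 11))
\end{verbatim}
Note that with $78$ input features and $26$ hidden neurons, the first layer alone contributes roughly $2000$ weights, consistent with the capacity constraint discussed above.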
Also for these approaches, the choice of a particular structure (e.g. the number of filters in CNN, the number of recurrent units in RNN, and so forth) is aimed at obtaining a number of weights comparable with MLP-$1$, Deep-$2$, and Deep-$3$, so as to ensure a fair comparison. In our analysis, the MLP-based versions (MLP-$1$, Deep-$2$ and Deep-$3$) and the evolved deep techniques (CNN, RNN, LSTM, GRU) exhibit good values of average accuracy (all around $0.99$) for all four examined datasets (panels of Fig. \ref{fig:perf_single}). Similarly, for all the aforementioned algorithms and for each dataset, the F-measure exhibits high values, indicating a negligible fraction of false positives and false negatives. Similarly high performance is exhibited by WiSARD for all the datasets, with the slight exception of the Portscan dataset. By contrast, the LVQ and SOM methodologies have lower performance and seem to suffer from the presence of more false positives and false negatives. This is visible for the Portscan dataset (Fig. \ref{fig:perf_single}(b)), which exhibits an oscillating F-measure; this can be ascribed to the different codebook vectors obtained when considering different input datasets. In particular, competitive learning methods seem to be sensitive to the particular type of attack dataset. In fact, the Euclidean distance (which is used as the main metric within the discussed competitive learning approaches) may not perfectly capture the properties of the various datasets, since they are structurally different. This is due to the fact that the Euclidean distance is not a scale-invariant measure and thus has difficulty dealing with vector components having different dynamic ranges. This is also the reason why the $3$ variants of LVQ can behave slightly differently when considering diverse datasets. Another useful analysis is aimed at evaluating the impact of dataset size on performance. For the sake of compactness, we choose DDoS and Portscan as benchmark datasets, since they are representative of two completely different and well-structured network attacks. Precisely, we evaluate the average accuracy (between DDoS/Portscan and benign classes) over dataset sizes varying between $10^3$ and $2\cdot10^4$ instances. The results shown in Figures \ref{fig:aveacc_time}(a) and \ref{fig:aveacc_time}(c) provide a clear indication that: \textit{i)} both ANNs and WiSARD exhibit very stable performance as the dataset size varies; \textit{ii)} LVQ shows some minor fluctuations, with an average accuracy that is just slightly under $0.9$ in a few cases. In this case, the fluctuations can be ascribed to the codebook size, which would require heuristic adjustments as the dataset size varies. For the sake of fairness, the performance comparison must be complemented by a time-complexity analysis, as shown in Figs. \ref{fig:aveacc_time}(b) and (d). Therein, the classification time (in seconds) is plotted against the number of instances (from $10^3$ to $2\cdot10^4$). For both experiments (DDoS dataset in Fig. \ref{fig:aveacc_time}(b) and Portscan dataset in Fig. \ref{fig:aveacc_time}(d)) we can reasonably recognize three slots on the $y$-axis: the first one, ranging from $10^3$ to $1.5\cdot10^4$ seconds, where MLP-$1$, Deep-$2,3$ and the evolved deep architectures (CNN, RNN, LSTM, GRU) operate (slowest algorithms); the second one, ranging from about $10$ to $10^2$ seconds, which includes WiSARD (medium-fast algorithm); the third one, ranging from about $1$ to $5$ seconds, including LVQ$1$, LVQ$2$, LVQ$3$, and SOM (fastest algorithms).
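These timing curves can be reproduced conceptually as follows. This is a schematic sketch in which scikit-learn's MLPClassifier stands in for the actual models, and \texttt{X}, \texttt{y} denote a hypothetical preprocessed feature matrix and label vector.
\begin{verbatim}
import time
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_predict

def classification_time(model, X, y, n_instances):
    """Measure the whole classification process (training
    included) on the first n_instances samples, under
    10-fold cross validation."""
    start = time.perf_counter()
    cross_val_predict(model, X[:n_instances], y[:n_instances], cv=10)
    return time.perf_counter() - start

# Illustrative stand-in for MLP-1 (26 hidden neurons, Adam, LR=0.001)
mlp1 = MLPClassifier(hidden_layer_sizes=(26,), solver="adam",
                     learning_rate_init=0.001, activation="relu")
sizes = [1000, 2000, 5000, 10000, 20000]
# times = [classification_time(mlp1, X, y, n) for n in sizes]
\end{verbatim}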
We are not surprised that deep networks are up to $2$ orders of magnitude slower with respect to the other techniques, due to the fully connected neurons between consecutive layers. As regards the evolved deep architectures, their time complexity is typically associated with the adoption of more sophisticated structures, such as the convolutional layers (CNN), or of mechanisms to retain the internal memory states (RNN, LSTM, GRU). \begin{figure*}[t!] \centering \begin{tabular}{cc} \subfloat[]{\includegraphics[scale=0.51]{Images/multi-perf-accu-colab2}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.51]{Images/multi-perf-fmeas-colab2}} \end{tabular} \caption{Performance analysis for a $4$-class dataset: (a) Accuracy, (b) F-Measure.} \label{fig:perf_multi} \end{figure*} \begin{figure*}[t!] \centering \begin{tabular}{cc} \subfloat[]{\includegraphics[scale=0.5]{Images/multi_perf_var_inst-colab}} \hspace{2.5mm} \subfloat[]{\includegraphics[scale=0.5]{Images/times_multi-colab}} \end{tabular} \caption{Multi-class dataset: (a) Average Accuracy vs. dataset size; (b) Classification Time vs. dataset size.} \label{fig:multi_aveacc_time} \end{figure*} On the other hand, approaches based on competitive learning (LVQs, SOM) are faster, since only the winner neurons are enabled to update their weights, although this comes at the cost of a lower accuracy. Surprisingly, the best trade-off between accuracy and time complexity is offered by WiSARD. Its RAM-based structure, in fact, allows a fast learning stage, since it deals with binary information regardless of the data dimension. In terms of time complexity, the feed-forward incremental learning scheme implemented in WiSARD is more advantageous than classic backpropagation, which requires many iterations to converge. \begin{figure*}[t!] \centering \begin{tabular}{cc} \subfloat[]{\includegraphics[scale=0.38,angle=90]{Images/mse_single-colab3}} \hspace{3mm} \subfloat[]{\includegraphics[scale=0.5]{Images/mse_multi-colab}} \end{tabular} \caption{MSE analysis when introducing more sophisticated deep models (CNN, RNN, LSTM, GRU). Single-class (DDoS benchmark dataset) (a); Multi-class (DDoS-Portscan-Adware-Benign dataset) (b).} \label{fig:mse_a2} \end{figure*} \subsection{Multi-Class Analysis} Multi-class analysis is useful to analyze how the various algorithms react when dealing with different classes of traffic. With this aim, we built a $4$-class dataset by mixing (in a balanced way) three malicious classes (DDoS, Portscan, and Adware) with a Benign class. The rationale behind this choice is to take into account a mix of peculiar threats. As regards the performance analysis, for the sake of readability, we show the multi-class accuracy and F-measure separately in Figs. \ref{fig:perf_multi}(a) and \ref{fig:perf_multi}(b), respectively. As a general trend, we observe satisfactory classification performance from the MLP and Deep algorithms. WiSARD exhibits good performance in classifying DDoS and Portscan ($0.9957$ and $0.9952$ accuracy, respectively), but performs poorly in classifying the Adware and Benign classes ($0.784$ and $0.782$ accuracy, respectively). The reason lies in the structural difference between these types of traffic. DDoS and Portscan are both highly ``structured'' attacks. The former can be characterized in terms of the high rate of messages that a bot sends to a victim; the latter hides continuous ping-based requests towards a target in order to unveil possible open ports.
By contrast, Adware can easily be confused with a benign flow, since it just conveys annoying banners, as often occurs within legitimate web portals as well. The situation changes drastically when we look at the remaining algorithms (LVQs and SOM), where false positives carry a high weight, especially for the Portscan, Adware, and Benign classes, as shown in Fig. \ref{fig:perf_multi}(b). Also, the F-measure is often below $0.4$ in the case of the Adware and Benign classes. With respect to the single-class case, the effects produced by the codebook class assignment and by the peculiarities of the Euclidean measure are amplified here. Finally, among the competitive-based approaches, LVQ-$3$ exhibits the best performance, due to the introduction of the empirical parameter $\epsilon$ (see Sect. \ref{sec:lvq3}), which helps to better regulate the distance between the data and the pertinent codebook vectors. This notwithstanding, all the considered techniques exhibit fairly stable accuracy averages across the whole range of dataset sizes (from $10^3$ to $2\cdot10^4$ instances), as revealed by Fig. \ref{fig:multi_aveacc_time}(a). As expected, the average accuracy in all cases pertaining to the multi-class experiment is slightly lower than its single-class counterpart, due to the fact that classifying more than two classes is a more challenging task. Let us now examine whether the multi-class analysis has an impact on the time complexity of the considered algorithms. Figure \ref{fig:multi_aveacc_time}(b) shows that the order of magnitude of the classification time of each algorithm is similar to the single-class case. As expected, the slight increase in classification time going from the single- to the multi-class case can be ascribed to the higher number of different traffic flows to be classified. For example, along the varying dataset sizes $\left[ 10^3, 2\cdot10^3, 5\cdot10^3, 10^4, 2\cdot10^4 \right]$, WiSARD exhibits the following (approximate) classification times: $\left[ 8.047, 12.24, 30.731, 53.474, 104.783 \right]$ seconds in the single-class case, and $\left[ 8.389, 15.058, 31.831, 61.958, 119.848 \right]$ seconds in the multi-class case. Once again, WiSARD appears to provide the best trade-off between performance and time complexity. \subsection{General Considerations} A number of interesting considerations may be derived from our comparative analysis. First, not all neural-based techniques are equally applicable to network intrusion detection, particularly when performance is the key issue. ANN approaches (including modern deep-based methods) tend to offer noteworthy accuracy, on both single-class and multi-class datasets. Unfortunately, though, accuracy comes at the expense of time complexity, which demands that deep-based techniques rely on specialized GPU-based hardware. Moreover, as an auxiliary investigation pertaining to the deep approaches, we evaluate the impact of the weight-update mechanism on the overall performance, expressed through the Mean Square Error (MSE). The behavior is shown in Figs. \ref{fig:mse_a2}(a) and \ref{fig:mse_a2}(b) for the single- and multi-class datasets, respectively, evaluated over $100$ epochs. This analysis reveals that most techniques exhibit limited fluctuations around the median MSE value, as well as quite low MSE values. The only exception is CNN, whose higher MSE is reasonably due to the complexity of the convolution operation.
Similar considerations were raised in a bioinformatics study \cite{cnn_fluct}, where only the MLP and CNN techniques were compared. Detailed values are reported in Table \ref{tab:mse_results}, in terms of median and inter-quartile range (IQR, the difference between the third and first quartiles) values. On average, the most ``stable'' techniques are the classic Deep-$2$ and Deep-$3$ architectures. The addition of more sophisticated structures (e.g. a convolutional layer for CNN, LSTM/GRU units, and so forth) can translate into MSE fluctuations, due to the presence of additional hyperparameters to tune. Obviously, such fluctuations directly reflect the internal structure of each deep technique and can be more (e.g. CNN) or less (e.g. RNN) pronounced. \begin{table}[ht] \centering \caption{Median and IQR values (MSE analysis) for deep-based techniques} \begin{tabular}{c c c c c } & \multicolumn{2}{c}{\textbf{Median}} & \multicolumn{2}{c}{\textbf{IQR}} \\ \cmidrule{1-5} \textbf{Technique} & \multicolumn{1}{c}{Single-Class} & \multicolumn{1}{c}{Multi-Class} & \multicolumn{1}{c}{Single-Class} & \multicolumn{1}{c}{Multi-Class} \\ \midrule Deep-2 & 1.5$\cdot$ 10$^{-3}$ & 2.02 $\cdot$ 10$^{-2}$ & 1.8$\cdot$ 10$^{-3}$ & 7 $\cdot$ 10$^{-3}$ \\ Deep-3 & 7.4$\cdot$ 10$^{-4}$ & 1.45 $\cdot$ 10$^{-2}$ & 1.6$\cdot$ 10$^{-3}$ & 6.1$\cdot$ 10$^{-3}$ \\ CNN & 1.74 $\cdot$ 10$^{-2}$ & 6.2 $\cdot$ 10$^{-2}$ & 4.1$\cdot$ 10$^{-3}$ & 1.19$\cdot$ 10$^{-2}$ \\ RNN & 1.4 $\cdot$ 10$^{-3}$ & 2.06 $\cdot$ 10$^{-2}$ & 2.1$\cdot$ 10$^{-3}$ & 6.6$\cdot$ 10$^{-3}$ \\ LSTM & 1.8 $\cdot$ 10$^{-3}$ & 3.02 $\cdot$ 10$^{-2}$ & 2.9$\cdot$ 10$^{-3}$ & 6.7 $\cdot$ 10$^{-3}$ \\ GRU & 2.4 $\cdot$ 10$^{-3}$ & 2.45 $\cdot$ 10$^{-2}$ & 2$\cdot$ 10$^{-3}$ & 8.1 $\cdot$ 10$^{-3}$ \\ \hline \end{tabular}% \label{tab:mse_results}% \end{table}% Remarkably, deep-learning techniques could find interesting application in the network intrusion management field when the training set exhibits slow time dynamics. In this case, the training operation (being time/resource consuming) could be performed only once (or rarely over time). As regards the competitive learning techniques (LVQ in its $3$ variants and SOM), the pros and cons are (almost) reversed with respect to the ANN algorithms. In the case of the single-class analysis, a dependency on the type of dataset is highlighted (better performance on the DDoS dataset than on Portscan). This is reasonably due to the fact that different inputs produce different mappings onto the codebook vectors. As expected, this effect is further amplified in the multi-class datasets, where false positives and false negatives lead to unstable F-measure figures (Fig. \ref{fig:perf_multi}(b)). However, the competitive learning algorithms exhibit very appealing time complexity figures (up to $3$ orders of magnitude lower than ANNs), thanks to their approach of taking into account only the ``winner'' neurons during the whole classification process. Accordingly, competitive techniques could find useful application in highly time-variant intrusion detection settings, where it is crucial to quickly adapt to dataset variations. Possibly, they can perform an early (even if rough) detection to be refined later by means of other algorithms. Finally, an unexpected finding is offered by WiSARD which, through its weightless mechanism, provides the best trade-off between performance (accuracy/F-measure) and time complexity, in both the single- and multi-class cases.
This outstanding behavior is mainly due to two aspects: first, the binarization scheme (jointly with the RAM-like neuron structure) adopted by WiSARD allows it to handle multivariate data in a fast way. Second, the training stage follows an incremental approach, since each sample reinforces past knowledge when updating the network state; this implies fast convergence and makes WiSARD particularly suitable for domains where online learning is crucial. What is even more interesting is that WiSARD has hardly ever been considered in the field of traffic flow classification, while it appears to be a very interesting candidate for intrusion detection. This WNN-based approach should certainly be considered a promising alternative to existing methods, particularly for its potential to meet the near-real-time constraints involved in NIDS classification problems. \section{Conclusion} \label{sec:conclusion} This work explores the applicability of prominent neural-based techniques to network intrusion detection. We carry out an experiment-based comparison to quantify the performance and trade-offs achievable with each solution. Our aim of performing a fair comparison has directed our investigation to focus on artificial neural networks (ANNs). This was to avoid the typical issues arising when comparing ML methods relying on different rationales (e.g. ANN vs. SVM vs. Decision Trees). In the related work section, we have provided pointers to numerous other surveys, illustrating similarities (in aims) and differences (in methodologies). The key peculiarity of our paper is its experiment-based approach to reviewing alternative ANN options. We based our evaluation on modern datasets (CIC-IDS-2017/2018), while most of the earlier experimental results are based on the outdated KDD$99$ dataset (reflecting network attack issues that have largely been solved). Our review provides useful performance (in terms of accuracy and F-measure) and time complexity data, which form the basis for a trade-off analysis across different ANNs such as deep networks or competitive learning-based networks. To add value to our study, we also considered methods that have not typically been employed in intrusion detection, particularly the weightless neural networks (WNNs). The outcomes reveal a number of interesting findings. \textit{i)} ANN-based approaches (including deep networks) are characterized by outstanding performance in almost all cases (as expected). Yet, they suffer the drawback of being slow, due to the underlying backpropagation algorithm, which is typically slow to converge. \textit{ii)} Neural-based techniques relying on a competitive learning approach (LVQ$1$, LVQ$2$, LVQ$3$, SOM) overturn this perspective, thanks to a much reduced time complexity (due to the winner-neuron mechanism). However, this comes with a lower performance when different datasets are considered (due to the different mapping mechanism between input and codebook vectors). \textit{iii)} The WiSARD algorithm (representative of WNNs) surprisingly exhibits the best trade-off in terms of performance and time complexity, making it an appealing candidate to operate in conjunction with intrusion detection systems. The proposed analysis could be extended in several ways. The first (and perhaps most natural) direction is to further investigate the potential of weightless neural networks in the context of network intrusion detection.
Beyond WiSARD, in fact, other WNN algorithms such as the Probabilistic Logic Node (PLN), the Goal Seeking Neuron (GSN), or the General Neural Unit (GNU) could reveal noteworthy properties when applied in this field. Then, we suggest exploring how the addition of a feature selection (FS) pre-processing stage would reduce time complexity. This could be particularly advantageous for deep techniques (the most critical in terms of time constraints), even if the adoption of a poorly designed FS strategy could result in additional computational overhead. Finally, having identified the pros and cons of neural-based techniques in the field of intrusion detection, an engine that automatically selects (or combines) the NN-based strategies best fitting the underlying network environment (e.g. highly vs. scarcely time-variant traffic profiles) could be designed. This latter point could have intriguing implications for prospective $6G$ scenarios which, according to network experts, will be characterized by automatic service provisioning, intelligent resource management, and smart network adjustments. \bibliographystyle{unsrt}
\section{Introduction} Most of the real world is governed by complex and chaotic dynamical systems, such as those found in vibration engineering, cardiac arrhythmia, and stock prices. All of these dynamical systems pose a challenge for modelling with neural networks.\\ There are several types of neural networks, such as feed-forward networks, CNNs (Convolutional Neural Networks), RNNs (Recurrent Neural Networks), Autoencoders, etc. Feed-forward and convolutional neural networks are made up of neurons connected in layers, which take inputs from the previous layer and pass on the output to the next layer. Recurrent neural networks differ from both feed-forward and convolutional neural networks, as their neurons are connected in a “recurrent” manner and may have feedback loops between sets of neurons. The feedback loops give the network the additional capability to capture temporal dynamics and learn time series data. Reservoir computing, which is a subset of recurrent neural networks, was first proposed by Jaeger \cite{jaeger}. In the reservoir computing approach \cite{jaeg-haas,herzog}, a large pool of neurons acts as a “reservoir” that imparts dynamic behaviour to the input data in order to generate the output. \\ The primary motivation behind this research is to model complex dynamics without relying on a recurrent neural network, by including a temporal part in the activation function. The activation function fluctuates based on a time-dependent parameter that can be generated in different ways, such as through the logistic or cubic map. \begin{figure} \begin{center} {\includegraphics[height=12.5cm]{szder.png}} \end{center} \caption{Activation function and its derivative for $\phi(t)=1$ and $\phi(t)=2.8$.}\label{fszd} \end{figure} \section{Spatio-Temporal Activation Function}\label{lgnn} Each type of network may differ in structural topology, but all share the same basic building blocks: the neuron and an activation function. In neural networks, the activation function acts as a way to map the weighted sum of the inputs going into each neuron to a useful output. There are four major types of activation functions: binary step, sigmoid, tanh, and ReLU. These functions differ in several respects; for example, the sigmoid function generates output that is normalised between $0$ and $1$, while the tanh output is normalised between $-1$ and $1$. In this research work, a new spatio-temporal activation function, similar to the sigmoid activation function, is proposed, which is a function of the sum of weighted inputs, $z$, and also of a temporal term. The proposed spatio-temporal function $S(z,t)$ is \begin{equation} \label{esz} S(z,t) = \frac{1}{1+\exp(-\phi(t)z)}, \end{equation} with derivative \begin{equation} \label{eszd} \frac{dS(z, t)}{dz}=\phi(t)S(z,t)\left(1-S(z,t)\right). \end{equation} The derivative of the new activation function, $S(z,t)$, is similar to the Gaussian probability distribution, and the temporal term $\phi(t)$ plays the role of varying its standard deviation. Figure \ref{fszd} shows the activation function and its derivative for constant $\phi(t)=1$ and $\phi(t)=2.8$. The temporal term $\phi(t)$ in the present case is a linear function of a time-dependent parameter, $\alpha(t)$, as follows \begin{equation} \label{newequation} \phi(t) = \phi_0+k\alpha(t), \end{equation} where $\phi_0$ and $k$ are constants.
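A minimal NumPy sketch of the activation function, its derivative, and the temporal term is given below; the values of $\phi_0$ and $k$ are placeholders, not the parameters used in the experiments.
\begin{verbatim}
import numpy as np

def S(z, phi_t):
    """Spatio-temporal sigmoid: S(z, t) = 1 / (1 + exp(-phi(t) z))."""
    return 1.0 / (1.0 + np.exp(-phi_t * z))

def dS_dz(z, phi_t):
    """Derivative dS/dz = phi(t) S (1 - S), Gaussian-like in z."""
    s = S(z, phi_t)
    return phi_t * s * (1.0 - s)

def phi(alpha_t, phi0=1.0, k=0.2):
    """Temporal term phi(t) = phi0 + k alpha(t); phi0, k placeholders."""
    return phi0 + k * alpha_t
\end{verbatim}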
Figure \ref{diagram} shows the schematic of the activation function.\\ In this case, $\alpha(t)$ can follow the dynamics of the logistic map \cite{salis}, based on the growth parameter $r$, as follows \begin{equation}\label{elog1} \alpha (t+1) = r \alpha(t)(1-\alpha(t)). \end{equation} The function $f(\alpha)=r\alpha(1-\alpha)$ is the logistic map, $f:[0,1] \to [0,1]$, which for $3.6<r<4$ gives rise to the chaotic dynamical behaviour shown in Figure \ref{flog}. It should be noted that Eqn. \ref{elog1} governs the time evolution of $\alpha(t)$ while Eqn. \ref{eszd} is the derivative of the sigmoid function, and both follow a similar functional form, one in the time domain $t$ and the other in the input domain $z$. Interestingly, the derivative of the activation function in Eqn. \ref{eszd} also has the form of a logistic map. \begin{figure} \begin{center} {\includegraphics[height=3cm]{logistic_data.png}} \end{center} \caption{Chaotic dynamical plot of $\alpha(t)$ for $r=4$.}\label{flog} \end{figure} \begin{figure} \begin{center} {\includegraphics[height=6cm]{logistic_bifurcation.png}} \end{center} \caption{Bifurcation diagram of the logistic map.}\label{bifurcation} \end{figure} \\The temporal function $\phi(t)$ is normalised between $(\phi_{min},\phi_{max})$ as follows \begin{equation}\label{enorm} \phi(t) = \phi_{min} + \frac{(\alpha(t) -\alpha_{min})}{(\alpha_{max} - \alpha_{min})}(\phi_{max}-\phi_{min}), \end{equation} where $\alpha_{max}$ and $\alpha_{min}$ are the maximum and minimum values taken by the function $\alpha(t)$. \section{Response of spatio-temporal activation function} In the case of the logistic map, when $r = 4$, $\alpha_{min}$ is 0 and $\alpha_{max}$ is 1. These values vary with $r$ and can be read from the logistic bifurcation diagram shown in Figure \ref{bifurcation}. The values of $\phi_{max}$ and $\phi_{min}$ depend on the range of the data to be generated. Increasing the gap between $\phi_{max}$ and $\phi_{min}$ will increase the range of the neuron's output. For example, with $\alpha_{max} = 1$ and $\alpha_{min}=0$, the function $\phi(t)$ is \begin{equation}\label{enorm2} \phi(t) = \phi_{min} + \alpha(t)(\phi_{max}-\phi_{min}). \end{equation} Figure \ref{fanno} shows the output from a neuron displaying chaotic behaviour using $\phi_{max} = 1.1$ and $\phi_{min} = 0.9$. The standard deviation corresponding to this range of $\phi_{max}$ and $\phi_{min}$ is found to be $0.013$. Figure \ref{standard} shows the relationship between the standard deviation of the generated output and the difference between $\phi_{max}$ and $\phi_{min}$. By increasing the range of the bounds $(\phi_{max} - \phi_{min})$, the chaotic behaviour of the resulting output of the neuron varies with larger deviations around the mean; e.g. for $\phi_{max} = 1.2$ and $\phi_{min} = 0.8$, the standard deviation increases proportionately to $0.026$. Figure \ref{fauto} shows the autocorrelation of the resulting output of the neuron, which hovers around zero, confirming the ability of the modified activation function to display chaotic behaviour. The logistic map is not the only chaotic map that can be used in the proposed activation function. For example, we can use the cubic map, $x_{t+1} = rx_t - x_{t}^3$, which also produces chaotic behaviour for $2.3 < r < 3$, as shown in Figure \ref{cubicbifurcation}. As described earlier, $\alpha_{min}$ and $\alpha_{max}$ need to be set correctly by checking the range of the bifurcation diagram for the corresponding chaotic function being used.
These values can be worked out by looking at the maximum and minimum values of the bifurcation diagram for given $r$ values. Table \ref{rtable} gives $\alpha_{min}$ and $\alpha_{max}$ for different values of $r$ for the logistic map, $x_{t+1} = rx_t(1-x_t)$. Table \ref{rtable2} gives the $\alpha_{min}$ and $\alpha_{max}$ values for the cubic map, $x_{t+1} = rx_t - x_t^3$. Both sets of values are rough estimates obtained by generating time series data for the different values of $r$, and they can be improved by generating longer time series. \begin{figure} \begin{center} {\includegraphics[height=10cm]{Diagram.png}} \end{center} \caption{Neuron with the spatio-temporal activation function}\label{diagram} \end{figure} \begin{table} \begin{tabular}{|c|c|c|} \hline $r$ & $\alpha_{min}$ & $\alpha_{max}$ \\ \hline\hline 3.5 & 0.382 & 0.875 \\ \hline 3.6 & 0.333 & 0.894 \\ \hline 3.7 & 0.261 & 0.923 \\ \hline 3.8 & 0.181 & 0.949 \\ \hline 3.9 & 0.123 & 0.967 \\ \hline 4.0 & 0.000 & 1.000 \\ \hline \end{tabular} \caption{$\alpha_{max}$ and $\alpha_{min}$ for different values of $r$ in the logistic map.}\label{rtable} \end{table} \begin{table} \begin{tabular}{|c|c|c|} \hline $r$ & $\alpha_{min}$ & $\alpha_{max}$ \\ \hline\hline 2.3 & 0.668 & 1.342 \\ \hline 2.4 & 0.585 & 1.408 \\ \hline 2.5 & 0.286 & 1.520 \\ \hline 2.6 & -1.605 & -0.035 \\ \hline 2.7 & -1.698 & 1.664 \\ \hline 2.8 & -1.759 & 1.405 \\ \hline 2.9 & -1.884 & 1.899 \\ \hline 3.0 & -1.953 & 1.861 \\ \hline \end{tabular} \caption{$\alpha_{max}$ and $\alpha_{min}$ for different values of $r$ in the cubic map.}\label{rtable2} \end{table} \begin{figure} \begin{center} {\includegraphics[height=4.9cm]{chart.png}} \end{center} \caption{Standard deviation corresponding to different ranges of $(\phi_{max} - \phi_{min})$.}\label{standard} \end{figure} \begin{figure} \begin{center} {\includegraphics[height=4.5cm]{annout.png}} \end{center} \caption{Output of the neuron displaying chaotic dynamical behaviour using flat input data.}\label{fanno} \end{figure} \begin{figure} \begin{center} {\includegraphics[height=4.5cm]{autocorr.png}} \end{center} \caption{Autocorrelation of the resulting output of the neuron.}\label{fauto} \end{figure} \begin{figure} \begin{center} {\includegraphics[height=5.4cm]{cubic.png}} \end{center} \caption{Bifurcation diagram of the cubic map.}\label{cubicbifurcation} \end{figure} \section{Numerical Experiment}\label{ann} A numerical experiment was performed with a flat time series as the input and a chaotic time series as the target output, as shown in Figure \ref{outputvsinput}. A single neuron using the proposed spatio-temporal activation function was trained against this chaotic data, while the input was a flat value, with parameters $\phi_{max} = 3.5$ and $\phi_{min} = -2.7$, and with $\alpha_{max}$ and $\alpha_{min}$ set to 1 and 0, respectively. \begin{figure} \begin{center} {\includegraphics[height=4cm]{outputandinput.png}} \end{center} \caption{Chaotic time series and flat input data.}\label{outputvsinput} \end{figure} After training, the same flat input data was fed to the neural network, and the generated output time series compared well with the actual chaotic time series, as shown in Figure \ref{output}. In this study, the Lyapunov exponent \cite{salis}, $\lambda$, is used as a measure of chaos. The Lyapunov exponent of the function $f(x)$ is defined as follows: \begin{equation} \lambda = \frac{1}{n} \sum_{i=0}^{n-1}\ln(|f'(x_i)|), \end{equation} where $x_{n+1} = f(x_n)$.
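For illustration, this definition can be evaluated directly for the logistic map; the following sketch uses an illustrative burn-in length to discard the initial transient, and the same orbit can also be used to read off the rough $\alpha_{min}$ and $\alpha_{max}$ estimates of Tables \ref{rtable} and \ref{rtable2}.
\begin{verbatim}
import numpy as np

def lyapunov_logistic(r, x0=0.2, n=10000, burn_in=1000):
    """Estimate lambda = (1/n) sum ln|f'(x_i)| for the
    logistic map f(x) = r x (1 - x), after a burn-in."""
    x = x0
    for _ in range(burn_in):      # let the orbit settle
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        # |f'(x)| = |r (1 - 2x)|
        total += np.log(abs(r * (1.0 - 2.0 * x)))
        x = r * x * (1.0 - x)
    return total / n
\end{verbatim}
As a sanity check, for $r=4$ the estimate approaches the known value $\ln 2 \approx 0.693$ for the logistic map itself.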
For the chaotic data the Lyapunov exponent is $\lambda = -1.01$, while for the neural network's output $\lambda = -0.98$. Thus, the new activation function is able to map the chaotic time series data successfully.\\ Figure \ref{sigmoid} shows different instances of the activation function while generating the output time series depicted in Figure \ref{output}. An important thing to notice is that the activation function occasionally flips, because the $\phi_{min}$ taken in this example is negative. Though it may seem odd, this helps the neuron generate chaotic data that matches the range of the expected output.\\ \begin{figure} \begin{center} {\includegraphics[height=5cm]{trained.png}} \end{center} \caption{Chaotic time series vs. output of the neural network.}\label{output} \end{figure} \begin{figure} \begin{center} {\includegraphics[height=4cm]{sigmoid.png}} \end{center} \caption{Instances of the activation function with different $\phi$ values during the generation of the chaotic time series output.}\label{sigmoid} \end{figure} \section{Conclusion}\label{conclusion} In the present research work, a two-dimensional activation function is proposed, which includes an additional temporal term to impart dynamic behaviour to the output without relying on recurrent neural networks. The temporal term can be driven by the logistic map, the cubic map, or any other chaotic map. The derivative at any instance of the activation function is similar to the Gaussian probability distribution with a varying, time-dependent standard deviation. Consequently, when the temporal term changes with time, the activation function fluctuates, leading to dynamical behaviour in the output. The new activation function is able to successfully map the chaotic data. Further research is required to investigate the spatio-temporal activation function in a multilayered feed-forward neural network and its ability to capture chaos using the logistic/cubic map.
\section{Introduction} {\it Ab initio} calculations aim to describe nuclear features while employing high-precision interactions that describe two- and three-nucleon systems (often referred to as ``realistic interactions''), such as those derived from meson exchange theory \cite{machleidt1987bonn, machleidt1989meson} (e.g. CD-Bonn \cite{Machleidt01}), chiral effective field theory \cite{van1994few, epelbaum2009modern, machleidt2011chiral} (e.g. NNLO$_\mathrm{opt}$ \cite{Ekstrom13} and N3LO \cite{EntemM03}), or $J$-matrix inverse scattering (JISP16 \cite{ShirokovMZVW07, Shirokov2010nn}). As such calculations do not depend on any information about the nucleus under consideration, these methods can be used in nuclear regions where experimental data is currently sparse or unavailable, e.g., along the pathways of nucleosynthesis and toward a further exploration of the exotic physics of rare isotopes. While realistic interactions build upon rich physics at the nucleon-nucleon (NN) level, it is impossible to identify terms in the interaction that are responsible for emergent dominant features in nuclei, such as deformation, pairing, and clustering. These features, which are revealed in even the earliest data on nuclear structure, have informed many successful nuclear models, such as Elliott's \SU{3} model \cite{Elliott58, Elliott58b,ElliottH62} and the Bohr collective model \cite{BohrMottelson69}, with a focus on deformation, as well as algebraic \cite{Racah42,Belyaev58} and exact \cite{Richardson1964} pairing models. Recently, we have shown that calculations with Hamiltonians that build upon the ones used in these earlier studies and, in addition, allow for configuration mixing \cite{DreyfussLTDB13,TobinFLDDB14,MioraLKPD2019} yield results that are consistent with those of the {\it ab initio} symmetry-adapted no-core shell model (SA-NCSM) \cite{LauneyDD16, DytrychLDRWRBB20}. In particular, the no-core symplectic model (NCSpM) has offered successful descriptions of excitation energies, monopole and quadrupole transitions, quadrupole moments, and rms radii for a range of nuclei (from $A$=8 to $A$=24 systems, including cluster effects in the $^{12}$C Hoyle state) \cite{DreyfussLTDB13,TobinFLDDB14,DreyfussLTDBDB17}, by employing quadrupole-quadrupole ($Q \cdot Q$) and spin-orbit interaction terms. In Ref. \cite{MioraLKPD2019}, exact solutions to the shell model plus isoscalar and isovector pairing have been provided for low-lying $0^+$ states, and, e.g., the energy of the lowest isobaric analog state in $^{12}$C has been shown to agree with the corresponding \textit{ab initio} findings. Therefore, it is interesting to trace this similarity in outcomes down to specific features of the realistic interactions. In this paper, we provide new insight into correlations within realistic interactions through the use of the deformation-related \SU{3} symmetry. Specifically, we show that only a part of the nucleon-nucleon interaction appears to be essential for the description of nuclear dynamics, especially at low energies. When expressed in the \SU{3} symmetry-adapted basis, the interaction -- given as \SU{3} tensors -- shows a clear preference toward a specific subset of tensors, allowing us to determine its dominant components. Most importantly, these features appear regardless of the underlying theory used to construct the interaction.
Furthermore, an almost universal behavior is revealed by ``soft-core'' potentials such as JISP16, or by the renormalized (``softened'') counterparts of ``harder'' interactions that use, e.g., the Okubo-Lee-Suzuki (OLS) \cite{Okubo1954diagonalization, LeeSuzuki80} and Similarity Renormalization Group (SRG) \cite{BognerFP07} renormalization techniques. Further, to complete the picture, we show that these features are directly linked to the important physics, i.e., deformation, clustering, pairing, and spin-orbit effects, that drove the development of earlier, and considerably simpler, schematic models. The importance of the various interaction components is studied in SA-NCSM calculations. In particular, we study nuclear structure observables of $^{12}$C, such as the low-lying excitation spectrum, B(E2) reduced transition probabilities, and root mean square (rms) radii. We compare the results that use the entire interaction with those that use interactions that have been selected down to their dominant components. The agreement observed for all these observables is remarkable, even when a small fraction of the interaction is used. \section{Theoretical method} \begin{figure*} [t] \centering \includegraphics[width=0.99\textwidth]{decomp_JISP16.pdf} \includegraphics[width=0.99\textwidth]{decomp_N3LO.pdf} \caption{\label{fig:decomp} Relative strengths $\mathfrak s$ (in \%) for the \SU{3}-coupled JISP16 (top) and N3LO (bottom) NN interactions and their effective counterparts with $\hbar\Omega=15$ MeV and 20 MeV, respectively, in the $N_\mathrm{max}$ = 6 model space. The “eff. JISP16” is obtained by the OLS technique for $A$=12, while “eff. N3LO” is obtained by SRG with $\lambda_\mathrm{SRG}$ = 2.0 fm$^{-1}$. $T$ is the isospin of the two-nucleon system. A set of $(\lambda_0 \mu_0)S_0$ quantum numbers and its conjugate correspond to each of the interaction terms. Only terms with $>$1\% relative strength for each $T$ are shown; there are more than 120 terms with less than 1\% strength for this model space. } \end{figure*} \subsection{SA-NCSM framework} The SA-NCSM is a no-core shell model with an \SU{3}-coupled or \SpR{3}-coupled symmetry-adapted basis \cite{LauneyDD16,DytrychLDRWRBB20}. Similar to the NCSM \cite{NavratilVB00,NavratilVB00b}, it uses a harmonic oscillator (HO) basis, where the HO major shells are separated by a parameter $\hbar \Omega$. The model space is capped by an $N_\mathrm{max}$ cutoff, which is the maximum total number of oscillator quanta above the lowest HO configuration for a given nucleus. The SA-NCSM calculates eigenvalues and eigenvectors of the nuclear interaction Hamiltonian and subsequently uses the eigenvectors for calculations of the nuclear observables. The results approach the exact value as $N_\mathrm{max}$ increases, and in the $N_\mathrm{max} \rightarrow \infty$ limit they become independent of the HO parameter $\hbar \Omega$. Within a given complete $N_\mathrm{max}$ model space, the SA-NCSM results exactly match those of the NCSM for the same interaction. The use of symmetries in the SA-NCSM allows one to select the model space by considering only the physically relevant subspace, which is only a fraction of the corresponding complete $N_\mathrm{max}$ space. In the SA-NCSM, the SA basis is constructed using an efficient group-theoretical algorithm for each HO major shell \cite{DraayerLPL89}.
While we do not use explicit construction of conventional NCSM bases, for completeness, we show the unitary transformation from a two-particle $JT$-coupled basis state to an \SU{3}-coupled state: \begin{eqnarray} &&\ket{ \eta_r \eta_s \omega \kappa (LS) \Gamma M_\Gamma } \nonumber \\ &=&\frac{1}{ \sqrt{1+\delta_{\eta_r\eta_s}} } \{a^\dagger_{(\eta_r\, 0) \ensuremath{\textstyle{\frac{1}{2}}} } \times a^\dagger_{(\eta_s\,0) \ensuremath{\textstyle{\frac{1}{2}}} } \}^{ \omega \kappa (LS) \Gamma M_\Gamma}\ket{0} \nonumber \\ &=&\frac{1}{ \sqrt{1+\delta_{\eta_r\eta_s}} } \sum_{\substack{ l_r l_s\\ j_r j_s}}\Pi_{j_r j_s L S} \RedCG{(\eta_r\ 0)l_r}{(\eta_s\ 0) l_s}{\omega \kappa L} \nonumber \\ &\times&\Wigninej{l_r}{l_s}{L}{1/2}{1/2}{S}{j_r}{j_s}{J} \{a^\dagger_{r} \times a^\dagger_{s} \}^{ \Gamma M_\Gamma}\ket{0}, \label{su3basis} \end{eqnarray} where we use the conventional labels $\textstyle { r(s)=\{\eta (l\,\frac{1}{2})j \textstyle{ t=\frac{1}{2}}\} }$ and $\Gamma=JT$, with $\eta=0,1,2,\dots$ being the oscillator shell number and $\Pi_j=\sqrt{2j+1}$, and with $a^\dagger_{(\eta \,0)\ensuremath{\textstyle{\frac{1}{2}}}}$ being the creation operator that creates a particle of spin $\ensuremath{\textstyle{\frac{1}{2}}}$ in HO major shell $\eta$. We use the \SU{3} quantum numbers $\textstyle { \omega \equiv (\lambda\, \mu) = (\eta_r\, 0) \times (\eta_s\, 0) }, \, \tilde{\omega} \equiv (\mu\, \lambda) $, and $\kappa$ denotes the multiplicity of the total orbital momentum $L$ for a given $\omega$; $S$ is the total intrinsic spin, and $\RedCG{}{}{}$ are reduced SU(3) Clebsch-Gordan coefficients. \subsection{\SU{3} interaction tensors} Two-body isoscalar (charge-independent) interactions are typically given in a representation of a $JT$-coupled HO basis, $\ket{rs\Gamma M_\Gamma}$, that is, $V^{\Gamma}_{rstu} = \braketop{rs \Gamma M_\Gamma=0}{V}{tu \Gamma M_\Gamma=0}$. This takes advantage of the fact that such an interaction transforms as a scalar under rotations in coordinate and isospin space, that is, it is an \SO{3}$\times$\SU{2}$_T$ tensor of rank zero. Analogously, the interaction can be represented in an $\SU{3}\times$\SU{2}$_S\times$\SU{2}$_T$-coupled HO basis $\ket{ \eta_r \eta_s \omega \kappa (LS) \Gamma M_\Gamma }$ (\ref{su3basis}). The corresponding interaction matrix elements are similarly given as $V^{\Gamma}_{(\chi \omega \kappa L S)_{fi}}\equiv \braketop{(\chi \omega \kappa (LS) \Gamma M)_f}{V}{(\chi \omega \kappa (LS) \Gamma M)_i}$, with $\chi \equiv \{\eta_r \eta_s\}$ and with the symmetry property $V^{\Gamma}_{(\chi \omega \kappa L S)_{if}}=V^{\Gamma}_{(\chi \omega \kappa L S)_{fi}}$. Using the fact that the interaction can be represented as a sum of \SU{3}$\times $\SU{2}$_S$ tensors, $V=\sum_{\rho_0 \omega_0 \kappa_0 S_0}V^{\rho_0 \omega_0 \kappa_0 S_0}$, the matrix elements can be further reduced with respect to \SU{3} and the spin-isospin space (for $T_0=0$), $V_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0}\equiv \braketop{(\chi \omega S)_f;T}{|V^{\omega_0 \kappa_0 S_0}|}{(\chi \omega S)_i;T}_{\rho_0}$ (see Appendix).
Using that the interaction can be decomposed in this way, the following conjugation relations hold for the $\SU{3}\times\SU{2}_S$ tensors, \begin{eqnarray} V_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0}&=&(-)^{S_i-S_f+S_0}(-)^{\omega_f-\omega_i}\sqrt{\frac{\dim \omega_f}{\dim \omega_i}} V_{(\chi \omega S)_{fi};T}^{\rho_0 \tilde \omega_0 \kappa_0 S_0} \nonumber \\ V_{(\chi \omega S)_{ii};T}^{\rho_0 \omega_0 \kappa_0 S_0}&=&(-)^{S_0}V_{(\chi \omega S)_{ii};T}^{\rho_0 \tilde \omega_0 \kappa_0 S_0}, \end{eqnarray} where \begin{equation} \dim \omega = \frac{1}{2}(\lambda +1)( \mu+1)(\lambda+\mu + 2) \label{dim}. \end{equation} To simplify the equations in the paper, we introduce a symmetrized tensor, \begin{equation} v_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0}=(-)^{\omega_i-S_i-T }\sqrt{\dim \omega_i}V_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0}, \end{equation} with the conjugation relation \begin{equation} v_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0}=(-)^{S_0}v_{(\chi \omega S)_{fi};T}^{\rho_0 \tilde \omega_0 \kappa_0 S_0}. \end{equation} We note that, in the case when $\chi_i=\chi_f$, $\omega _i =\omega _f$, and $S_i = S_f$, we will use the notation $v_{(\chi \omega S);T}^{\rho_0 \omega_0 \kappa_0 S_0}$. \subsection{Strength of \SU{3} interaction tensors} \begin{figure}[t] \includegraphics[width=0.48\textwidth]{ExEn_vs_comp.pdf} \caption{\label{fig:12Cen} Excitation energy of the first $2^+$ and $4^+$ states in $^{12}$C from SA-NCSM calculations (connected lines) as a function of the fraction of the terms kept in the interaction, compared to experiment \cite{ASelove90} (labeled as ``Expt.''). Results for $N_\mathrm{max}$ = 6, 8, 10, and 12 are shown for various selections of the JISP16 interaction with $\hbar\Omega=15$ MeV. Specifically, the value 1 on the abscissa indicates that the full interaction (100\%) was used, while an abscissa value of 0.4 implies that only the most significant 40\% of the tensors were retained, etc. } \end{figure} The significance of the various \SU{3} tensors can be estimated by their Hilbert-Schmidt norm, which is analogous to the norm of a matrix $A$ defined as $||A||=\sqrt{\sum_{ij}A_{ij}A_{ji}}$. In particular, the strength of a Hamiltonian $H$ can be estimated by the norm $\sigma_H$ constructed as \cite{HechtD74,French66, FrenchR71, ChangFT71, KotaH10, LauneyCPC2014} \begin{equation} \sigma^2_H= \expV{(H- \expV{H})^\dagger (H- \expV{H}) }=\expV{H^2 }-\expV{H}^2, \label{sigma} \end{equation} where $\expV{\dots}\equiv \frac{1}{ \mathcal N } \text{Tr}(\dots)$ denotes the trace of the Hamiltonian matrix divided by the number ${\mathcal N}$ of diagonal matrix elements. In the present study, $H$ is a two-body Hamiltonian, and ${\mathcal N}$ enumerates all possible two-particle configurations.
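For a generic Hermitian matrix, Eq. (\ref{sigma}) can be evaluated in a few lines; the sketch below is purely schematic, with a random symmetric matrix standing in for an actual interaction matrix.
\begin{verbatim}
import numpy as np

def strength(H):
    """Hilbert-Schmidt strength sigma_H:
    sigma^2 = <H^2> - <H>^2, with <.> = Tr(.) / N."""
    N = H.shape[0]          # number of two-particle states
    avg_H = np.trace(H) / N
    avg_H2 = np.trace(H @ H) / N
    return np.sqrt(avg_H2 - avg_H**2)

# Illustrative use with a random symmetric stand-in matrix
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 50))
print(strength((A + A.T) / 2.0))
\end{verbatim}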
\cite{Tanihata85}) and the B(E2: $2^+_1 \rightarrow 0^+_1$) value (in e$^2$fm$^4$) (experimental value from \cite{ASelove90}) as a function of the fraction of the terms kept in the interaction. SA-NCSM calculations use various selections for the JISP16 interaction for $\hbar\Omega=15$ MeV and different $N_\mathrm{max}$ model spaces. } \end{figure*} For given $T_f=T_i=T$ and a $\ket{ \chi^* \omega \kappa (LS) \Gamma M_\Gamma }$ basis with $\chi ^* \equiv \{ \eta_r \eta_s \}, \eta_r \leq \eta_s $, the norm $\sigma_{\omega_0 \kappa_0 S_0;T}$ of each SU(3)-symmetric tensor is determined using Eq. (\ref{sigma}): \begin{eqnarray} \sigma^2_{\omega_0 \kappa_0 S_0;T}&=&\frac{1}{ \mathcal N }\sum_{(\chi^*\omega S)_{f,i}\rho_0} \frac{1}{\Pi^2_{T_f S_0 T_0}\dim \omega_0} | v_{(\chi \omega S)_{if};T}^{\rho_0 \omega_0 \kappa_0 S_0} |^2 \nonumber \\ &&-(V_c^{\omega_0 \kappa_0 S_0;T})^2, \end{eqnarray} where the number of two-particle basis states ${\mathcal N}$ and the average monopole part $V_c^{\omega_0 \kappa_0 S_0}=\expV{V^{\omega_0 \kappa_0 (L_0=S_0 S_0)\Gamma_0=0 M_{\Gamma_0} =0}}$ are given, respectively, as \begin{equation} \mathcal N = \sum_{\chi^*\omega \kappa LSJ M_J} 1 = \sum_{\chi^*\omega S} \Pi^2_{S} \dim \omega, \end{equation} \begin{eqnarray} V_c^{\omega_0 \kappa_0 S_0} =\frac{1}{ \mathcal N} \sum_{\substack{\chi^*\omega \kappa \\ LSJ \rho_0}} \frac{\Pi^2_J\Pi_L}{\Pi_{S_0T}\sqrt{{\rm dim}\omega}} (-1)^{S_0+L+J-T-\omega} \nonumber \\ \times \Wigsixj{L}{S}{J}{S}{L}{S_0} \langle\omega \kappa L;\omega_{0} \kappa_{0} L_{0}\|\omega \kappa L \rangle_{\rho_0} v_{(\chi \omega S);T}^{\rho_0 \omega_0 \kappa_0 S_0}. \end{eqnarray} For a given isospin $T$, the strength of the entire Hamiltonian $H_T$ is determined by the strengths of its components, $\sigma^2_{H_T}=\sum_{\omega_0 \kappa_0 S_0} \sigma^2_{\omega_0 \kappa_0 S_0;T}$. We can then define a relative strength for each SU(3)-symmetric component ($\omega_0 \kappa_0 S_0$) as \begin{equation} {\mathfrak s}^2_{\omega_0 \kappa_0 S_0;T}=\frac{\sigma^2_{\omega_0 \kappa_0 S_0;T} }{\sigma^2_{H_T}}=\frac{\sigma^2_{\omega_0 \kappa_0 S_0;T} }{\sum_{\omega_0 \kappa_0 S_0}{\sigma^2_{\omega_0 \kappa_0 S_0;T}}}. \label{relsigma} \end{equation} Using Eq. (\ref{JTtoSU3}), we can decompose any two-body interaction into \SU{3}-symmetric components. The contribution of each of the components within the interaction is given by its relative strength (\ref{relsigma}) (see Fig. \ref{fig:decomp} for the realistic JISP16 and N3LO interactions). As can be seen from these results, only a small number of \SU{3} tensors dominate the interaction, with the vast majority of the components having less than 1\% of the total strength. Similar behavior is observed for other interactions. It should be noted that in the $JT$-coupled basis, no such dominance of interaction matrix elements is apparent. This exercise demonstrates a long-standing principle that holds across all of physics; namely, one should work within a framework that is as closely aligned with the dynamics as possible. \section{Results and Discussions} \subsection{Observables in $^{12}$C} The decomposition of the interaction in the \SU{3} basis allows us to choose sets of major components to construct new selected interactions. These interactions can be used for calculations of various nuclear properties that can then be compared to the results from the initial interaction. In this way, we can examine how sensitive specific nuclear properties are to the interaction components. 
Several selected interactions were constructed for this study. The selection is done by ordering the interaction tensors from the highest relative strength to the lowest and then including the largest ones to add up to 60--90\% of the initial total strength. Depending on the $N_\mathrm{max}$ of the interaction, the number of selected \SU{3} tensors differs. For example, the JISP16 interaction for $N_\mathrm{max}$ = 10 and $\hbar\Omega=15$ MeV has 169 unique $(\lambda_0 \mu_0)S_0$ tensors overall, of which the 51 largest account for about 80\% of the total strength. After the selection, the total strength is not rescaled to that of the initial interaction. Throughout this work we will refer to selected interactions in terms of the fraction of interaction tensors kept, that is, the number of SU(3)-symmetric components in the selected interaction relative to the number of all such components in the initial interaction for a given $N_\mathrm{max}$ and $\hbar\Omega$. Analysis of the results shows that low-lying excitation energies of $^{12}$C are not sensitive to the number of selected \SU{3} tensors, provided that the most dominant ones are included in the interaction (Fig. \ref{fig:12Cen}). With only half of the interaction tensors, the excitation energies essentially do not differ from the corresponding results that use the full interaction, and even with less than 30\% of the interaction components the deviation for most of the values is insignificant. The comparatively large deviation in the 4$^+$ energy for $N_\mathrm{max}$ = 6 that appears when about 20\% of the \SU{3} components are used is likely due to the small model space. This issue disappears at higher $N_\mathrm{max}$ values, and even the $N_\mathrm{max}$ = 6 results for the 2$^+$ state compare remarkably well to those of the initial interaction for all selections. The selected interactions yield results very close to those of the initial one for other observables as well. For example, the $^{12}$C ground-state rms radius and the B(E2: $2^+ \rightarrow 0^+$) value have very low dependence on the selection (Fig. \ref{fig:obs}), with variations nearly inconsequential compared to the deviations from experiment (the underprediction of these observables for the JISP16 interaction has been addressed, e.g., in Ref. \cite{DytrychMLDVCLCS11}). Specifically, the values are essentially the same when half of the interaction components are used. With less than 30\% of the interaction components, the difference from the initial-interaction results is less than 2\% for the rms radius and less than 7\% for the B(E2) value. Thus, small deviations start to appear only for heavily trimmed interactions, indicating that the long-range physics is mostly preserved when only the dominant interaction terms are used. In addition, vital information about the nuclear structure can be found through analysis of the ($\lambda \mu$)$S$ configurations that comprise the SA-NCSM wavefunction. This uncovers the physically relevant features that arise from the complex nuclear dynamics, as shown in Ref. \cite{LauneyDD16}. In other words, the wavefunctions contain a manageable number of major \SU{3} components that account for most of the underlying physics. Indeed, we find that calculations with various selected interactions largely preserve the major components of the wavefunction (Fig. \ref{fig:wfns}).
For the ground state of $^{12}$C calculated in the $N_\mathrm{max}$ = 12 model space, the probability amplitude for each set of the quantum numbers ($\lambda \mu$)$S$ is almost unchanged when a little less than half (46\%) of the JISP16 interaction tensors are used in the calculations. Even with about a quarter (26\%) of the tensors, the \SU{3} structure remains the same, with only a slight difference in the amplitudes. \begin{figure} [b] \includegraphics[width=0.5 \textwidth]{wfns.pdf} \caption{\label{fig:wfns} Probability amplitudes for the $(\lambda\,\mu)S$ configurations that make up the $^{12}$C ground state ($0^+_1$), calculated in the $N_\mathrm{max}$ = 12 model space using the JISP16 interaction for $\hbar\Omega=15$ MeV (labeled by ``All") and two selected interactions (labeled by the fraction of the interaction components kept, 46\% and 26\%). Only states with probability amplitudes $>0.003$ are shown. } \end{figure} \begin{figure} \includegraphics[width=0.45 \textwidth]{rms_select_vs_hw3.pdf} \caption{\label{fig:rms_select_hw} $^{12}$C ground state rms radius from SA-NCSM calculations with $N_\mathrm{max}$ = 6 model space vs. $\hbar\Omega$, using the full (``All") and selected (labeled by the percentage of the tensors kept) JISP16 interaction. } \end{figure} \begin{figure} \includegraphics[width=0.5 \textwidth]{En_select_vs_hw.pdf} \caption{\label{fig:en_hw} Excitation energies of the first $2^+$ and $4^+$ states for $^{12}$C from SA-NCSM calculations with $N_\mathrm{max}$ = 6 and $N_\mathrm{max}$ = 8 model spaces using the full JISP16 interaction (``All") and its selected counterpart (with 37\% of the tensors kept), with $\hbar\Omega=15, 20$ and $25$ MeV, and compared to experiment. } \end{figure} As mentioned above, the dependence on the HO parameter $\hbar\Omega$ disappears in the $N_\mathrm{max} \rightarrow \infty$ limit; however, even for comparatively small $N_\mathrm{max}$ model spaces, there is often a range of $\hbar\Omega$ values that achieves convergence for selected observables, while typically larger $N_\mathrm{max}$ model spaces are required outside this range. For long-range observables, such a range often falls close to an empirical estimate given by $\hbar\Omega=41/A^{1/3}$ \cite{BohrMottelson69}, which is 18 MeV for $^{12}$C. We investigate the dependence of the ground state rms radius of $^{12}$C on $\hbar\Omega$ using different selections (Fig. \ref{fig:rms_select_hw}). We examine small model spaces, where the $\hbar\Omega$ dependence is large and its effect on the interaction selections is expected to be enhanced; yet, we ensure that these model spaces provide results close to the $N_\mathrm{max}$ = 12 outcomes (see the $N_\mathrm{max}$ = 6 and 8 results in Figs. \ref{fig:12Cen} and \ref{fig:obs}). Compared to the full interaction, the results indicate that, indeed, small deviations are observed for values around $\hbar\Omega=18$ MeV, and the deviations become larger at higher (less optimal) $\hbar\Omega$ values (Fig. \ref{fig:rms_select_hw}). Similarly, the excitation energies for the $\hbar\Omega = 15$ MeV calculations are much less sensitive to the interaction selection (Fig. \ref{fig:en_hw}), whereas the deviation in the results between the initial and selected interactions increases for higher $\hbar\Omega$. However, this difference gets smaller with increasing model space. To summarize, the selection of the interactions affects the calculations with optimal $\hbar\Omega$ values the least.
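The selection procedure described above (ordering the tensors by relative strength and accumulating them up to a target fraction of the total strength) can likewise be summarized in a short sketch; the component strengths and the threshold below are toy values, not the actual decomposition:

\begin{verbatim}
def select_tensors(strengths, keep_fraction=0.8):
    # Order the SU(3)xSU(2)_S components by strength and keep the
    # largest ones until `keep_fraction` of the total strength (the
    # sum of the sigma^2 values) is accumulated.
    order = sorted(strengths, key=strengths.get, reverse=True)
    total = sum(strengths.values())
    kept, acc = [], 0.0
    for comp in order:
        if acc >= keep_fraction * total:
            break
        kept.append(comp)
        acc += strengths[comp]
    return kept, acc / total

# toy strengths for five components; keep ~80% of the total strength
strengths = {"(0 0)S0=0": 5.0, "(2 0)S0=0": 2.0, "(1 1)S0=1": 1.5,
             "(2 2)S0=0": 1.0, "(4 0)S0=0": 0.5}
kept, frac = select_tensors(strengths, 0.8)
print(kept, round(frac, 2))  # three components carrying ~85% of the strength
\end{verbatim}

As in the text, the retained components are not rescaled to recover the initial total strength.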
\begin{figure} \includegraphics[width=0.49\textwidth]{2H_energy_15_v2.pdf} \caption{\label{fig:2H} Energies of the proton-neutron system for the positive-parity lowest-lying states ($<$ 30 MeV), calculated in the SA-NCSM in $N_\mathrm{max}$ =12 model space using the JISP16 interaction, with all terms kept (100\%) as compared to a selection that keeps only 26\% of the terms, for $\hbar\Omega$=15 MeV. } \end{figure} It is interesting to examine how the selection of NN interactions affects the nucleon-nucleon physics. As a simple illustration, we study the Hamiltonian for the proton-neutron system and its corresponding eigenvalues. In addition to $T=0$ states, we consider $T=1$ states, which can also inform the proton-proton and neutron-neutron systems. To do this, we look for deviations in the corresponding eigenvalues as compared to those computed with the full interaction. We note that these comparisons use bound single-particle basis states, so the results will not apply to proton-neutron scattering states; however, since the same many-body method is used, any deviations inform about the interaction selection. In particular, we observe that only about a quarter of the \SU{3}-symmetric interaction components (the most dominant ones) can reproduce, with high accuracy, the energies obtained with the full interaction for most of the low-lying states of the proton-neutron system (Fig. \ref{fig:2H}). To estimate the difference in energies, we calculate the root mean square error (RMSE), $\mathrm{RMSE} = \sqrt{\frac{1}{N_d}\sum_i^{N_d} \Big(E_{\rm all}^i-E_{\rm sel}^i\Big)^2}$, where $E_{\rm all}$ and $E_{\rm sel}$ are the eigenenergies calculated with the initial and selected interactions, respectively, the summation is over all positive- or negative-parity states, and $N_d$ is the total number of states. For negative-parity $0\le J \le 5$ states with energies up to 30 MeV, we find the RMSE to be about 0.9--1.2 MeV depending on $\hbar\Omega$, whereas for positive-parity states, it is between 0.5 and 0.9 MeV. Similar RMSE values are seen even for the higher-lying spectrum up to 50 MeV. As can be seen from Fig. \ref{fig:2H}, the main deviations come from the second and third $1^+$ and $3^+$ states, indicating that certain states are more sensitive to the selection than others. \subsection{Dominant features in realistic interactions} There are various renormalization techniques, such as OLS and SRG, that are employed to ``soften'' the realistic interactions, which can then be used in comparatively smaller model spaces. Comparing the \SU{3} decompositions of initial interactions to their renormalized (effective) counterparts shows that the same major \SU{3} tensors remain dominant after renormalization (Fig. \ref{fig:decomp}). In the case of JISP16, the tensors with the largest relative strengths practically do not change. The renormalization has a larger impact on the N3LO interaction, where the spread over various tensors is larger. Here, only a few SU(3)-symmetric components change significantly, while the others change only slightly. It should be noted that the two effective counterparts of the interactions resemble each other (Fig. \ref{fig:decomp}). A similar behavior is observed for, e.g., the AV18 \cite{WiringaSS95} and CD-Bonn interactions \cite{LauneyDD16}. Examining the largest contributing tensors of realistic interactions, we can link them to the monopole operator (the HO potential), $Q \cdot Q$, pairing, spin-orbit and tensor forces.
The key idea is that the position and momentum operators, $\vec r$ and $\vec p$ respectively, have an \SU{3} rank $(1\,0)$, and conjugate $(0\,1)$ (to preserve hermiticity), with SU(2)$_S$ rank zero ($S_0=0$, that is, the operator does not change spin). Hence, the HO potential operator ($\sim r^2=\vec r \cdot \vec r$) has orbital momentum $L_0=0$ and spin $S_0=0$, and \SU{3} rank of $(2\,0)$ and $(0\,0)$ (and conjugates), whereas the quadrupole operator $Q$, given by the tensor product of $\vec r$, has $L_0=2$ and $S_0=0$, and $(2\,0)$ and $(1\,1)$ (and conjugates) \cite{TobinFLDDB14}; similarly for the tensor force, but with $L_0=2$ and $S_0=2$. The $Q \cdot Q$ operator, which describes the interaction of each nucleon with the quadrupole moment of the nucleus, will then have $L_0=0$ and spin $S_0=0$, along with $(4\,0)$, $(2\,0)$, $(2\,2)$ and $(0\,0)$ (and conjugates). The spin-orbit operator has $L_0=1$ and $S_0=1$, with an \SU{3} rank of $(1\,1)$. Indeed, the scalar $(0\,0)$ $S_0=0$ dominates for a variety of realistic interactions, and especially in their effective counterparts (see Fig. \ref{fig:decomp}); it is typically followed by $(2\,0)$, $(4\,0)$ and $(2\,2)$ $S_0=0$ and their conjugates. These \SU{3} modes are the ones that appear in the $Q \cdot Q$ interaction, while $(\lambda \, \lambda)$ configurations dominate the pairing interactions within a shell \cite{BahriED95}. The dominant $(2\,0)$ and $(1\,1)$ $S_0=2$ modes, and conjugates, can be linked to the tensor force. Finally, the $(1\,1)$ $S_0=1$ mode can be linked to the spin-orbit force. These features, we find, repeat for various realistic interactions and, more notably, the similarity is found to be further enhanced for their renormalized counterparts. Given the link between the phenomenon-tailored interactions and major terms in realistic interactions, it is then not surprising that both \emph{ab initio} approaches and earlier schematic models can successfully describe dominant features in nuclei. \section{Conclusions} Realistic NN interactions expressed in the \SU{3} basis show a clear dominance of only a small fraction of terms. We performed \emph{ab initio} calculations of several observables in $^{12}$C using interactions that were selected down to the most significant terms and compared them to the calculations with the initial interactions. We found that for small $\hbar\Omega$ values even the interactions with less than half of the terms produce almost the same results as the initial interaction for the low-lying spectrum, B(E2) values, and rms radii of $^{12}$C. The selection appears to affect more the calculations that use interactions with higher $\hbar\Omega$ values in small model spaces; however, the deviations between the initial and selected interaction results decrease as the model space becomes larger. In addition, the eigenvalues of the proton-neutron system for all of the positive- and negative-parity states below 30 MeV change only slightly with as few as a quarter of the initial interaction terms. By analyzing the most dominant terms of various realistic interactions, we found that they can be linked to well-known nuclear forces. In particular, inspection of these terms allowed us to link them to the widely used HO potential, $Q \cdot Q$, pairing, spin-orbit and tensor forces.
Moreover, we saw that after renormalization the NN interactions, regardless of their type, have largely the same dominant terms with similar strengths, indicating that the renormalization techniques strengthen the same dominant terms in all interactions. \section*{ACKNOWLEDGMENTS} Support from the U.S. National Science Foundation (ACI-1713690, OIA-1738287, PHY-1913728), the Czech Science Foundation (16-16772S) and the Southeastern Universities Research Association is gratefully acknowledged. This work benefitted from computing resources provided by NSF's Blue Waters computing system (NCSA), LSU (www.hpc.lsu.edu), and the National Energy Research Scientific Computing Center (NERSC). \section*{APPENDIX} \label{apdx} In standard second-quantized form, a one- and two-body interaction Hamiltonian is given in terms of fermion creation $a_{ jm(1/2)\sigma }^\dagger$ and annihilation $\tilde{a}_{ j-m(1/2)-\sigma } = (-1)^{j-m+1/2 -\sigma }a_{ jm(1/2)\sigma }$ tensors, which create or annihilate a particle of type $\sigma =\pm 1/2$ (proton/neutron) in the HO basis. In Eq. (\ref{V2ndQF}), $V_{rstu}^{\Gamma}$ is the two-body antisymmetric matrix element in the $JT$-coupled scheme [$V_{rstu}^{\Gamma }=-(-)^{r+s-\Gamma}V_{srtu}^{\Gamma }= -(-)^{t+u-\Gamma } V_{rsut}^{\Gamma }=(-)^{r+s-t-u}V_{srut}^{\Gamma }= V_{turs}^{\Gamma }$]. For an isospin-nonconserving two-body interaction of isospin rank ${\mathcal T}$, the coupling of fermion operators is as follows, $\{\{a_r^\dagger \otimes a_s^\dagger\}^{JT}\otimes \{a_t \otimes a_u\}^{JT} \}^{(0{\mathcal T})}$, with $V_{rstu}^{({\mathcal T}) J T}$ matrix elements. \begin{widetext} \begin{eqnarray} V&=&-\frac{1}{4}\sum_{rstu \Gamma} \sqrt{(1+\delta _{rs})(1+\delta _{tu})}\Pi_\Gamma V_{rstu}^\Gamma \{\{a_r^\dagger \otimes a_s^\dagger\}^\Gamma \otimes \{\tilde a_t \otimes \tilde a_u\}^\Gamma \}^{(\Gamma_0 M_{\Gamma_0})} \nonumber\\ &=& \sum_{{\tiny \begin{array}{c} (\chi^* \omega S)_{fi} \\ \rho_0 \omega_0 \kappa_0 S_0 \end{array} }} \frac{(-1)^{\omega_0-\omega_f+\omega_i}}{\sqrt{(1+\delta_{\eta_r\eta_s})(1+\delta_{\eta_t\eta_u})}} \frac{1}{\Pi_{S_0}}\sqrt{\frac{\dim \omega_f}{\dim \omega_0}} V_{(\chi \omega S)_{f,i}T}^{\rho_0 \omega_0 \kappa_0 S_0} \times \nonumber\\ &&\sum_{\rho'_0} \Phi_{\rho'_0 \rho_0}(\omega_0 \omega_i \omega_f) \{\{a_{\eta_r}^\dagger \otimes a_{\eta_s}^\dagger\}^{\omega_f S_f T} \otimes \{\tilde a_{\eta_t} \otimes \tilde a_{\eta_u}\}^{\omega_i S_i T} \}^{\rho'_0 \omega_0 \kappa_0 (L_0=S_0 S_0)\Gamma_0=0 M_{\Gamma_0}=0} , \label{V2ndQF} \end{eqnarray} \end{widetext} where $\dim \omega$ is defined in Eq. (\ref{dim}) and the phase matrix $\Phi_{\rho'_0 \rho_0}(\omega_0\omega_i\omega_f)$ accommodates the interchange between the coupling of $\omega_0$ and $\omega_i$ to $\omega_f$, so that for the \SU{3} Clebsch-Gordan coefficients we have \cite{Escher1997thesis} \begin{widetext} \begin{equation} \CG{\omega_0\kappa_0 L_0 M_0}{\omega_i \kappa_i L_i M_i }{\omega_f\kappa_f L_f M_f}_{\rho_0} = \sum_{\rho'_0} \Phi_{\rho_0\rho'_0}(\omega_0\omega_i\omega_f)\CG{\omega_i\kappa_i L_i M_i}{\omega_0 \kappa_0 L_0 M_0 }{\omega_f\kappa_f L_f M_f}_{\rho'_0}. \end{equation} \end{widetext} For the special case when $\rho=1$, that is, where the \SU{3} coupling $\{\omega_i \otimes\omega_0\} \rightarrow\omega_f$ is unique, the phase matrix reduces to a simple phase factor $(-1)^{(\lambda_0+\mu_0)+(\lambda_i+\mu_i)-(\lambda_f+\mu_f)}$.
Finally, the interaction reduced matrix elements in a $\SU{3}\times\SU{2}_S\times\SU{2}_T$-coupled HO basis are given as, \begin{widetext} \begin{eqnarray} V_{(\chi \omega S)_{fi};T}^{\rho_0 \omega_0 \kappa_0 S_0} &=& (-)^{S_f+S_0} \Pi_{TS_0} \frac{ \dim \omega_0}{\dim \omega_f} \sum_{J(\kappa L)_{if}} (-)^{L_i+J} \Pi_J^2 \Pi_{L_f} \Wigsixj{L_f}{S_f}{J}{S_i}{L_i}{S_0} \RedCGw{i}{0}{f}{0} V^{\Gamma}_{(\chi \omega \kappa L S)_{fi}} \nonumber \\ & =& (-)^{S_f+S_0}\Pi_{TS_0} \frac{\dim \omega_0}{\dim \omega_f} \sum_{{\tiny J(\kappa L)_{if} }} (-)^{L_i+J} \Pi_J^2 \Pi_{L_f} \Wigsixj{L_f}{S_f}{J}{S_i}{L_i}{S_0} \RedCGw{i}{0}{f}{0} \times \nonumber \\ && \Pi_{L_iL_fS_iS_f} \sum_{{\tiny \begin{array}{c} l_rl_sl_tl_u \\ j_rj_sj_tj_u \end{array} }} \sqrt{\frac{(1+\delta_{rs})(1+\delta_{tu})} {(1+\delta_{\eta_{r}\eta_{s}})(1+\delta_{\eta_{t}\eta_{u}})}} \Pi_{j_{r}j_{s}j_{t}j_{u}} \RedCG{(\eta_{r}\,0) l_{r}}{(\eta_{s}\,0)l_{s}}{(\omega \kappa L)_f} \times \nonumber \\ && \RedCG{(\eta_{t}\,0)l_{t}}{(\eta_{u}\,0)l_{u}}{(\omega \kappa L)_i} \Wigninej{ l_{r} }{\ensuremath{\textstyle{\frac{1}{2}}}}{ j_{r} }{ l_{s} }{\ensuremath{\textstyle{\frac{1}{2}}}}{ j_{s} } {L_f}{S_f}{J} \Wigninej{ l_{t} }{\ensuremath{\textstyle{\frac{1}{2}}}}{ j_{t} }{ l_{u} }{\ensuremath{\textstyle{\frac{1}{2}}}}{ j_{u} } {L_i}{S_i}{J} V^\Gamma_{rstu}, \label{JTtoSU3} \end{eqnarray} \end{widetext} where $V^{\Gamma}_{(\chi \omega \kappa L S)_{fi}}$ is a two-body interaction in a \SU{3}-$JT$-coupled scheme, as mentioned above $\RedCG{}{}{}$ are reduced SU(3) Clebsch-Gordan coefficients, and we use \SU{2} Wigner 6-j and 9-j symbols. \bibliographystyle{apsrev}
\section*{Main comment} \addcontentsline{toc}{section}{Main comment} Glenn Shafer's paper is a powerful appeal for a wider use of betting ideas and intuitions in statistics. He admits that p-values will never be completely replaced by betting scores, and I discuss it further in Appendix~\ref{app:A} (one of the two online appendices that I have prepared to meet the word limit). Both p-values and betting scores generalize Cournot's principle \cite{Shafer:2007-local}, but they do it in their different ways, and both ways are interesting and valuable. Other authors have referred to betting scores as Bayes factors \cite{Shafer/etal:2011} and e-values \cite{Vovk/Wang:arXiv1912a-local,Grunwald/etal:arXiv1906}. For simple null hypotheses, betting scores and Bayes factors indeed essentially coincide \cite[Section 1, interpretation 3]{Grunwald/etal:arXiv1906}, but for composite null hypotheses they are different notions, and using ``Bayes factor'' to mean ``betting score'' is utterly confusing to Bayesians \cite{Robert:2011-local}. However, the Bayesian connection still allows us to apply Jeffreys's \cite[Appendix B]{Jeffreys:1961} rule of thumb to betting scores; namely, a p-value of 5\% is roughly equivalent to a betting score of $10^{1/2}$, and a p-value of 1\% to a betting score of 10. This agrees beautifully with Shafer's rule (6), which gives, to two decimal places: \begin{itemize} \item for $p=5\%$, $3.47$ instead of Jeffreys's $3.16$ (slight overshoot); \item for $p=1\%$, $9$ instead of Jeffreys's $10$ (slight undershoot). \end{itemize} The term ``e-values'' emphasizes the fundamental role of expectation in the definition of betting scores (somewhat similar to the role of probability in the definition of p-values). It appears that the natural habitat for ``betting scores'' is game-theoretic while for ``e-values'' it is measure-theoretic \cite{Shafer:personal}; therefore, I will say ``e-values'' in the online appendices (Appendix~\ref{app:A} and \cite{Vovk:B}), which are based on measure-theoretic probability. In the second online appendix \cite{Vovk:B} I give a new example showing that betting scores are not just about communication; they may allow us to solve real statistical and scientific problems (more examples will be given by my co-author Ruodu Wang). David Cox \cite{Cox:1975} discovered that splitting data at random not only allows flexible testing of statistical hypotheses but also achieves high efficiency. A serious objection to the method is that different people analyzing the same data may get very different answers (thus violating ``inferential reproducibility'' \cite{Goodman/etal:2016,Held/Schwab:2020}). Using e-values instead of p-values remedies the situation. \subsection*{Acknowledgments} Thanks to Ruodu Wang for useful discussions and for sharing with me his much more extensive list of advantages of e-values. This research has been partially supported by Amazon, Astra Zeneca, and Stena Line.
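As a numerical aside (not part of the comment above), the agreement between the two calibrations can be checked in a few lines. Shafer's rule (6) is not restated here; the form $e(p)=1/\sqrt{p}-1$ used below is an assumption inferred from the two quoted values (3.47 at $p=5\%$ and 9 at $p=1\%$):

\begin{verbatim}
import math

def rule6(p):
    # Assumed form of Shafer's rule (6), e = 1/sqrt(p) - 1, inferred
    # from the quoted values: e(0.05) ~ 3.47 and e(0.01) = 9.
    return 1.0 / math.sqrt(p) - 1.0

# Jeffreys's rule of thumb: p = 5% ~ 10^(1/2), p = 1% ~ 10
for p, jeffreys in [(0.05, 10 ** 0.5), (0.01, 10.0)]:
    print(f"p = {p:4.2f}: rule (6) gives {rule6(p):.2f}, "
          f"Jeffreys gives {jeffreys:.2f}")
\end{verbatim}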
\section{Introduction} Individual radio source evolution is a field of research where most of the ingredients are known. However, many details are still poorly understood, and it is necessary to investigate how such ingredients interact and influence each other. In a scenario where radio sources grow in a self similar way, the evolutionary stage of each radio source originated by an active galactic nucleus depends on its linear size. Compact symmetric objects (CSOs), with a linear size, LS, $<$ 1 kpc and a two-sided structure that is reminiscent of Fanaroff-Riley radio galaxies \citep{fr74}, are likely to represent radio sources in an early evolutionary stage \citep[see e.g.][]{wilkinson94,readhead96}. Following an evolutionary path, CSOs would become medium-sized objects (MSOs) with 1$<$ LS $<$ 15-20 kpc, which are the progenitors of classical Fanaroff-Riley radio sources \citep[e.g.,][]{fanti01}. This is supported by the estimate of kinematic and radiative ages: 10$^{2-3}$ yr for CSOs \citep[see e.g.,][]{murgia03,polatidis03}, 10$^{4-6}$ yr for MSOs \citep{murgia99} and 10$^{7-8}$ yr for large radio galaxies \citep{orru10,harwood17}.\\ Several evolution models \citep[e.g.][]{fanti95,readhead96,snellen00,antao12} have been proposed to describe how the physical parameters, such as luminosity, linear size and velocity, evolve as the radio emission deploys. However, various aspects concerning the early stages of the radio evolution predicted by the models do not match the observations. This indicates that the initial parameters considered in evolution models must be improved by a better knowledge of the physical conditions when the radio emission turns on. In particular, the interaction with the ambient medium may play a crucial role during the early stage of radio source evolution \citep[e.g.][]{dd13,morganti13,collier18,keim19,sobolewska19}.\\ To test the physical conditions soon after the onset of the radio emission, it is essential to define a fair sample of CSOs, large enough to be statistically sound. A correct classification requires targeted (sub-)parsec scale observations that prove their double morphology and identify the core position. Another indirect way to search for CSOs is by the analysis of their radio spectrum. An empirical anti-correlation was found between the projected linear size and the peak frequency \citep[e.g.,][]{odea97}: the smaller the source, the higher the peak frequency is. In this context, the youngest CSOs should be sought among those sources whose synchrotron spectrum peaks above a few GHz. \\ High frequency peakers \citep[HFPs,][]{dd00} are a heterogeneous class of extragalactic sources, mainly made of blazars and CSOs, and characterized by a radio spectrum that peaks above $\sim$5 GHz. With the aim of searching for CSOs, two samples of HFP radio sources in the northern hemisphere have been constructed and are currently available: the ``bright'' sample (sources brighter than 300 mJy at 5 GHz) selected by \citet{dd00} and the ``faint'' sample (sources with flux density between 50 and 300 mJy at 5 GHz around the North Galactic Cap) presented in \citet{cstan09}, consisting of about 60 sources each. These samples were built starting from the NRAO VLA Sky Survey (NVSS) and 87 Green Bank (87GB) catalogues and selecting only those sources with a radio spectrum peaking at 5 GHz or above, and subsequently cleaned via quasi-simultaneous multi-frequency observations with the Very Large Array (VLA). 
Further epochs of quasi-simultaneous multi-frequency spectra were obtained to distinguish CSO candidates from variable flat-spectrum sources that matched the initial selection criteria owing to their variability \citep{tinti05, torniainen05,sadler06,hovatta07,mo07,mingaliev12}. Moreover, all the objects from the bright sample and $\sim$30 per cent of those from the faint sample were imaged at parsec-scale resolution in order to determine their morphology \citep{mo06,mo12}. CSOs are not significantly variable \citep{odea98}, possess low polarization, and have double/triple radio morphology characterized by mini-lobes/hotspots; relativistic beaming does not play a major role in them. On the other hand, blazars do possess strong variability across the whole electromagnetic spectrum, have high polarization, and have core-jet structures on pc-scales. However, in the youngest radio sources, substantial variability is expected as a consequence of the source expansion, on timescales much longer than in beamed objects, and with modest amplitude. In newly born radio sources, a significant evolution of the radio emission can occur on time-scales of the order of a few decades. Assuming a homogeneous radio source in adiabatic expansion with the magnetic field frozen in the plasma, the flux density variation in the optically thick part of the spectrum is $\Delta S \propto \left(1+ (\Delta t/t_0) \right)^3$, where $t_0$ is the source age and $\Delta t$ is the time interval between two observations. If $\Delta t$ is a large fraction of $t_{0}$, $\Delta S$ can be significant.\\ In this paper we present results on new VLA observations from 1 to 32 GHz of 35 out of 61 sources from the faint HFP sample, and Very Long Baseline Array (VLBA) observations at 15 and 24 GHz of a sub-sample of 12 sources. The long time-baseline of multi-epoch VLA observations (more than a decade) allows us to study the long-term variability and investigate whether some spectral changes are consistent with a CSO in adiabatic expansion. On the other hand, dual-frequency observations with milli-arcsecond resolution provide a deep look into the radio source structure on scales of a few parsecs. The combination of VLA and VLBA information is necessary for determining the nature of each object and removing blazars that contaminate the sample of CSO candidates. The final goal is the construction of a sample of genuinely young compact symmetric objects. The determination of the physical properties in the very early phase of radio source evolution will provide important constraints on the initial conditions assumed in the development of evolutionary models.\\ This paper is organized as follows: in Section 2 we present the observations and data analysis; results are reported in Section 3 and discussed in Section 4. A brief summary is presented in Section 5. Throughout this paper, we assume the following cosmology: $H_{0} = 71\; {\rm km/s\, Mpc^{-1}}$, $\Omega_{\rm M} = 0.27$ and $\Omega_{\rm \Lambda} = 0.73$, in a flat Universe. The spectral index $\alpha$ is defined as $S {\rm (\nu)} \propto \nu^{- \alpha}$. \\ \section{Radio observations} \subsection{VLA observations} Simultaneous multi-frequency VLA observations of 35 out of the 61 sources from the ‘faint’ HFP sample \citep{cstan09} were carried out during two runs in 2012 April and May (Table \ref{radio-log}).
Observations were performed in L (1$-$2 GHz), S (2$-$4 GHz), C (4.5$-$6.5 GHz), X (8$-$10 GHz), Ku (13$-$15 GHz), K (20.2$-$22.2 GHz) bands, and for one run also in the Ka (31$-$33 GHz) band (project code: 12A-048). Observations had a bandwidth of 2 GHz, with the exception of L band. In each frequency band the bandwidth was divided into 16 spectral windows. In both runs 3C\,286 was used as primary calibrator and bandpass calibrator, with the exception of K and Ka bands, where J0927$+$3902 (4C\,39.25) was used as bandpass calibrator. Target sources were observed for about 1.5 min per frequency. Secondary calibrators were chosen to minimize the antenna slewing.\\ Calibration was performed using the \texttt{CASA} software \citep{mcmullin07} following the standard procedure for the VLA. Parts of L, S and C bands were highly affected by RFI, and we had to flag some spectral windows. Errors on the amplitude calibration are conservatively 3 per cent in L, C, and X bands, 5 per cent in Ku band, and 10 per cent in S, K, and Ka bands. After the a priori calibration, imaging was done with the \texttt{CASA} task \texttt{CLEAN} and the \texttt{AIPS} task \texttt{IMAGR}. Phase-only self-calibration of the target field was generally performed in L-band, given the presence of many sources in the field of view, granting a substantial amount of flux density for the model. \\ We produced an image for each spectral window of each band in order to maximize the spectral coverage (Fig. \ref{radio_spectra}). In Table \ref{vla-flux} we report the flux densities at 1.4, 1.7, 4.5, 5.0, 8.1, 8.4, 15, 22, 32 GHz, in order to have a direct comparison with the values from the narrow-band receivers of historical VLA observations \citep{cstan09,mo10}. When RFI affects any of those frequencies, no value is reported.\\ The error on the image plane, $\sigma_{\rm rms}$, is usually around 50 $\mu$Jy/beam, but it may be as high as 0.2$-$0.5 mJy/beam in some spectral windows particularly affected by RFI. \\ \begin{table} \caption{Log of radio observations.} \begin{center} \begin{tabular}{lccc} \hline Date & Project & Configuration & Code \\ \hline 2012-04-25 & AO281 & VLA-C & a \\ 2012-06-16 & AO281 & VLA-CnB & b \\ 2019-01-19 & BO057 & VLBA & c \\ \hline \end{tabular} \end{center} \label{radio-log} \end{table} \begin{table*} \caption{Multi-frequency VLA flux density of faint HFP sources.
Column 1: source name; column 2: optical counterpart; column 3: redshift: a $p$ indicates a photometric redshift; columns 4-12: flux density (in mJy) at 1.4, 1.7, 4.5, 5.0, 8.1, 8.4, 14.5, 21.5, and 32 GHz, respectively; columns 13 and 14: spectral index below and above the peak frequency, respectively.} \begin{center} \begin{tabular}{cccccccccccccc} \hline Source & ID & $z$ & S$_{1.4}$ & S$_{1.7}$ & S$_{4.5}$ & S$_{5.0}$ & S$_{8.1}$ & S$_{8.4}$ & S$_{14.5}$ & S$_{21.5}$ & S$_{32}$ & $\alpha_{\rm b}$ & $\alpha_{\rm a}$ \\ (1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) & (9) & (10)&(11)&(12)&(13)&(14)\\ \hline J0754$+$3033 & Q &0.769 & 66$\pm$2 & 75$\pm$2 & 142$\pm$4 & 145$\pm$4 & 143$\pm$4 & 141$\pm$4 & 108$\pm$5 & 96$\pm$10 & - & -0.6 & 0.4\\ J0819$+$3823 & Q & - & 19$\pm$1 & 25$\pm$1 & 115$\pm$3 & 118$\pm$4 & - & - & 41$\pm$2 & 25$\pm$3 & - & -1.5 & 1.1\\ J0821$+$3107 & Q & 2.625& 96$\pm$5 & 106$\pm$5&71$\pm$2&68$\pm$2&52$\pm$2&50$\pm$2&34$\pm$4&33$\pm$5& - & - & 0.5\\ J0951$+$3451 & G & 0.358 & 27$\pm$1 & 36$\pm$1 & 63$\pm$2 & 64$\pm$2 & 57$\pm$2 & 56$\pm$2 & 35$\pm$2 & 27$\pm$3 & 21$\pm$2 & -1.0 & 0.6\\ J0955$+$3355& Q & 2.491& 46$\pm$2 & 61$\pm$3& 68$\pm$2 & 66$\pm$2& 47$\pm$1 & 45$\pm$1 & 26$\pm$2 & 20$\pm$3 & - & -0.9 & 0.8\\ J1008$+$2533 & Q & 1.96 & 54$\pm$3 & 69$\pm$3 & 102$\pm$3 & 105$\pm$3 & 103$\pm$3 & 104$\pm$3 & 115$\pm$6 & 130$\pm$13 &- & -1.1 & 0.0\\ J1020$+$4320 & Q & 1.964 & 117$\pm$4 & 158$\pm$5 & 306$\pm$9 & 303$\pm$9 & 243$\pm$7 & 239$\pm$7 & 161$\pm$8 & 124$\pm$12& - & -1.0 & 0.5\\ J1025$+$2541 & G & 0.457 & 26$\pm$1 & 31$\pm$1 & 42$\pm$1 & 38$\pm$1 & 23$\pm$1 & 21$\pm$1 & 11$\pm$1 & 5$\pm$1 & - & -1.2 & 1.2\\ J1035$+$4230 & Q & 2.44 & 23$\pm$1 & 28$\pm$1 & 77$\pm$2 & 88$\pm$3 & 84$\pm$3 & 83$\pm$3 & 59$\pm$3 & 40$\pm$4 & 29$\pm$3 & -1.1 & 0.7\\ J1044$+$2959 & Q & 2.983 & 52$\pm$2 & 66$\pm$2 & 132$\pm$4 & 134$\pm$4 & 124$\pm$4 & 123$\pm$4 & 95$\pm$5 & 75$\pm$8 & - & -0.8 & 0.5\\ J1046$+$2600 & - & - & 16$\pm$1 & 20$\pm$1 & 35$\pm$1 & 33$\pm$1 & 23$\pm$1 & 22$\pm$1 & 11$\pm$1 & 6$\pm$1 & - &-1.1 & 1.1\\ J1052$+$3355 & Q & 1.407& - & 23$\pm$5 & 24$\pm$1 & 22$\pm$1 & 13$\pm$1 & 11$\pm$1 & 7$\pm$1 & 6$\pm$1 & - & - &0.9\\ J1054$+$5058 & Q & - & 12$\pm$1 & 13$\pm$1 & 21$\pm$1 & 21$\pm$1 & 28$\pm$1 & 30$\pm$1 & 36$\pm$2 & - & - & -0.5 & -\\ J1058$+$3353 & G & 0.265 & - & 21$\pm$3 & - & - & 78$\pm$2 & 80$\pm$2 & 116$\pm$6 & 118$\pm$12 & - & -0.7 & -\\ J1107$+$3421 & - & - & 29$\pm$1 & 42$\pm$2 & 64$\pm$2 & 60$\pm$2 & 37$\pm$1 & 35$\pm$1 & 18$\pm$1 & 9$\pm$1 & - &-1.4 & 1.1\\ J1137$+$3441& Q &0.835 & 13$\pm$2 & 30$\pm$3 & 59$\pm$2 & 62$\pm$2 & 63$\pm$2 & 63$\pm$2 & 61$\pm$3 & 59$\pm$6 & - & -0.9 & 0.1\\ J1218$+$2828 & G & 0.18p& - & - & 55$\pm$2 & 56$\pm$2 & 41$\pm$1 & 39$\pm$1 & 29$\pm$2 & 42$\pm$4 & - & -0.5 & 0.6\\ J1240$+$2323 & G & 0.38p & 18$\pm$5 & 24$\pm$3 & - & - & 50$\pm$2 & 50$\pm$2 & 45$\pm$1 & 42$\pm$1 & - & -0.6 & 0.2\\ J1240$+$2425& Q & 0.831 & 63$\pm$3 & 54$\pm$3 & 30$\pm$1 & 28$\pm$1 & 18$\pm$1 & 17$\pm$1 & 10$\pm$1 & 7$\pm$1 & - & - & 0.9\\ J1258$+$2820 & Q & - & -& - & 45$\pm$2 & 49$\pm$2 & 51$\pm$2 & 51$\pm$2 & 45$\pm$2 & 34$\pm$3 & - & -0.3 & 0.5\\ J1309$+$4047 & Q & 2.91 & - & 62$\pm$2 & 128$\pm$4 & 127$\pm$4 & 103$\pm$3 & 101$\pm$3 & 64$\pm$3 & 37$\pm$4 & - & -0.9 & 0.7\\ J1420$+$2704 & Q & - & - & 19$\pm$1 &69$\pm$3 & 70$\pm$2 & 65$\pm$2 & 64$\pm$2 & 45$\pm$2 & 29$\pm$3 & 22$\pm$2 & -1.0 & 0.7\\ J1421$+$4645 & Q & 1.668 & - & 126$\pm$4 & 237$\pm$7 & 244$\pm$7 & 244$\pm$7 & 244$\pm$4 & 204$\pm$10 & 184$\pm$18 &136$\pm$14 & -0.7 & 0.4\\ J1459$+$3337 &
Q & 0.645 & 50$\pm$5 & 71$\pm$5 & 216$\pm$7 & 223$\pm$7 & 188$\pm$6 & 183$\pm$6 & 108$\pm$6 & 68$\pm$7 & 43$\pm$4 & -1.0 & 1.0\\ J1512$+$2219& G & 0.40p& 14$\pm$2 & 24$\pm$3 & 24$\pm$1 & 21$\pm$1 & 10$\pm$1 & 9$\pm$1 & 3$\pm$1 & 2$\pm$1 & - & -1.7 & 1.3\\ J1528$+$3816 & Q & 0.749 & 27$\pm$1 & 32$\pm$1 & 52$\pm$2 & 53$\pm$2 & 56$\pm$2 & 57$\pm$2 & 58$\pm$6 & 57$\pm$6 & 59$\pm$6 & -0.6 & 0.0\\ J1530$+$2705 & G & 0.033 & 24$\pm$3 & 29$\pm$3 & 50$\pm$2 & 50$\pm$2 & 41$\pm$1 & 40$\pm$1 & 28$\pm$2 & 26$\pm$3 & - & -0.8 & 0.4\\ J1530$+$5137 & G & 0.632p & 53$\pm$2 & 60$\pm$2 & 98$\pm$3 & 100$\pm$3 & 111$\pm$3 & 112$\pm$3 & 111$\pm$6 & 115$\pm$12 & 127$\pm$13 & -0.5 & 0.0\\ J1547$+$3518& Q & - & - & 23$\pm$2 & 41$\pm$1 & 44$\pm$1 & 51$\pm$1 & 53$\pm$2 & 53$\pm$3 & 57$\pm$6 & - & -0.5 & -\\ J1602$+$2646 & G & 0.372 & - & 44$\pm$2 & 162$\pm$5 & 189$\pm$6 & 297$\pm$9 & 310$\pm$9 & 345$\pm$17 & 303$\pm$30 &266$\pm$27 & -1.0 & 0.3\\ J1613$+$4223 & Q & - & 43$\pm$1 & 70$\pm$2 & 197$\pm$6 & 188$\pm$6 & 107$\pm$3 & 102$\pm$3 & 33$\pm$3 & 15$\pm$2 & - & -1.9 & 1.6\\ J1616$+$4632& Q & 0.950 & 82$\pm$4 & 83$\pm$4 & 77$\pm$2 & 79$\pm$2 & 78$\pm$2 & 78$\pm$2 & 62$\pm$3 & 58$\pm$6 & - & - & 0.2\\ J1617$+$3801 & Q & 1.607 & 26$\pm$1 & 34$\pm$1 & 72$\pm$2 & 77$\pm$2 & 115$\pm$3 & 121$\pm$4 & 128$\pm$7 & 107$\pm$11 &83$\pm$8 & -1.0 & 0.4\\ J1624$+$2748 & G & 0.541p & 19$\pm$1 & 21$\pm$1 & 97$\pm$3 & 105$\pm$3 & 155$\pm$5 & 162$\pm$5 & 175$\pm$9 & 172$\pm$17 & 153$\pm$15 & -1.2 & -\\ J1719$+$4804 & Q & 1.084 & 70$\pm$2 & 83$\pm$3 & 83$\pm$3 & 77$\pm$2 & 47$\pm$1 & 44$\pm$1 & 18$\pm$2 & 11$\pm$1 & - & -0.8 & 0.7\\ \hline \end{tabular} \end{center} \label{vla-flux} \end{table*} \begin{table} \caption{Pc-scale radio morphology and accurate source position of faint HFP sources with VLBA observations. Column 1: source name; column 2: morphology: CD = compact double; MR = marginally resolved; Un = unresolved. Columns 3 and 4: right ascension and declination of the main component; Column 5: VLBA calibrator observed for phase-referencing. The uncertainty on the position is about 0.16 mas.} \begin{center} \begin{tabular}{ccccc} \hline Source & M. &RA & Dec & Cal.\\ & &(J2000) &(J2000) & (B1950)\\ \hline J0754$+$3033& CD &07:54:48.8514 & 30:33:55.020 & 0738$+$313 \\ J0819$+$3823& CD &08:19:00.9562 & 38:23:59.810 & 0821$+$394 \\ J1002$+$5701& Un &10:02:41.6661 & 57:01:11.484 & 1014$+$615 \\ J1025$+$2541& CD &10:25:23.7918 & 25:41:58.362 & 1012$+$232 \\ J1035$+$4230& MR &10:35:32.5776 & 42:30:18.959 & 1020$+$400 \\ J1044$+$2959& Un &10:44:06.3428 & 29:59:01.004 & 1059$+$282 \\ J1046$+$2600& MR &10:46:57.2508 & 26:00:45.104 & 1040$+$244 \\ J1107$+$3421& CD &11:07:34.3382 & 34:21:18.596 & 1101$+$384 \\ J1420$+$2704& Un &14:20:51.4879 & 27:04:27.045 & 1417$+$273 \\ J1459$+$3337& CD &14:59:58.4359 & 33:37:01.776 & 1504$+$377 \\ J1613$+$4223& Un &16:13:04.8033 & 42:23:18.893 & 1638$+$398 \\ J1719$+$4804& Un &17:19:38.2496 & 48:04:12.248 & 1726$+$455 \\ \hline \end{tabular} \end{center} \label{phase-vlba} \end{table} \begin{table*} \caption{VLBA flux density of HFP sources. Column 1: source name; column 2: source component; columns 3 and 4: flux density (in mJy) at 15 and 24 GHz, respectively; column 5: VLBA spectral index between 15 and 24 GHz; column 6: VLA spectral index between 15 and 22 GHz; columns 7 and 8: fractional flux density between VLBA and VLA, S$_{\rm VLBA}/$S$_{\rm VLA}$ at 15 and 24 GHz, respectively. 
For the source J1002$+$5701 the VLA spectral index and flux density refers to data from \citet{cstan09}.} \begin{center} \begin{tabular}{cccccccc} \hline Source & Comp & S$_{\rm 15}$ & S$_{24}$ &$\alpha_{15}^{24}$ & $\alpha_{\rm VLA}$& F$_{15}$ & F$_{24}$\\ & & mJy & mJy & & & \% & \%\\ \hline J0754$+$3033 & E & 50.7$\pm$3.5 & 38.0$\pm$2.7&0.6$\pm$0.2 & - & - & -\\ & W & 26.0$\pm$1.8 & 14.2$\pm$1.0 & 1.3$\pm$0.2 & - & - & -\\ & Tot & 79.4$\pm$5.5 & 54.7$\pm$3.8 &0.8$\pm$0.2 & 0.3$\pm$0.3&0.73&0.57\\ J0819$+$3823& E & 4.7$\pm$0.3 & - & - & -& - & -\\ & W & 21.7$\pm$1.5 & - & - & - & - & -\\ & Tot& 26.4$\pm$1.8 & 10.4$\pm$0.7 & 2.0$\pm$0.2& 1.0$\pm$0.3&0.64 & 0.42\\ J1002$+$5701& Tot & 8.0$\pm$0.6 & 3.1$\pm$0.2 & 2.0$\pm$0.2 & 1.0$\pm$0.3&0.40 & 0.24\\ J1025$+$2541& E & 7.8$\pm$0.5 & - & - & - & - & -\\ & W & 2.0$\pm$0.1 & - & - & - & - & -\\ & Tot & 9.8$\pm$0.7 & 6.9$\pm$0.5 & 0.7$\pm$0.2 & 1.6$\pm$0.5&0.89&1.4\\ J1035$+$4230& E & - & 13.0$\pm$0.9 & - & - & - & -\\ & W & - & 5.2$\pm$0.4 & - & - & - & -\\ & Tot & 28.5$\pm$2.0 & 18.2$\pm$1.3 & 1.0$\pm$0.2& 0.8$\pm$0.3&0.48&0.45\\ J1044$+$2959& Tot & 73.8$\pm$5.2 & 45.1$\pm$3.1 & 1.0$\pm$0.2& 0.5$\pm$0.3&0.78&0.60\\ J1046$+$2600& E & - & 5.3$\pm$0.4 & - & -& - & -\\ & W & - & 4.1$\pm$0.3 & - & -& - & -\\ & Tot & 11.2$\pm$0.8 & 9.4$\pm$0.7 & 0.4$\pm$0.2& 1.2$\pm$0.3&1.00&0.64\\ J1107$+$3421& E & 4.2$\pm$0.5 & 3.2$\pm$0.2 & 0.6$\pm$0.3 & -& - &-\\ & W & 3.1$\pm$0.5 & 4.0$\pm$0.5 & -0.5$\pm$0.4 & - & - & -\\ & Tot & 7.3$\pm$0.7 & 7.2$\pm$0.7 & 0.0$\pm$0.3& 1.4$\pm$0.3& 0.40 & 0.80\\ J1420$+$2704& Tot & 28.0$\pm$2.0 & 18.4$\pm$1.3 &0.9$\pm$0.2 & 0.9$\pm$0.3&0.62&0.63\\ J1459$+$3337& E & 5.3$\pm$0.4 & 3.3$\pm$0.3 & 1.0$\pm$0.3 & -& - & -\\ & W & 20.1$\pm$1.4 & 11.5$\pm$0.8 & 1.2$\pm$0.2 & - & - & -\\ & Tot & 25.4$\pm$1.8 & 14.8$\pm$1.0 & 1.1$\pm$0.2 & 0.9$\pm$0.3& 0.23&0.22\\ J1613$+$4223&Tot& 28.9$\pm$2.0 & 5.8$\pm$0.4 & 3.4$\pm$0.2 & 1.6$\pm$0.4&0.87&0.39\\ J1719$+$4804&Tot& 4.2$\pm$0.3 & 3.5$\pm$0.3 & 0.4$\pm$0.2 & 1.0$\pm$0.3&0.23&0.32\\ \hline \end{tabular} \end{center} \label{vlba-flux} \end{table*} \begin{table*} \caption{Variability and peak frequency of faint HFP sources. Column 1: source name (sources which are still considered CSO candidates are marked with a diamond); columns 2-5: variability computed between two consecutive epochs, V$_{\rm ep}$ ($a$=1998, $b$=1999, $c$=2000, $d$=2003, $e$=2004, $f$=2006, $g$=2007, $h$=2012. For the precise dates see \citet{cstan09} and \citet{mo10}). Column 6: variability computed between the first epoch \citep[1998-2000,][]{cstan09} and last epoch (2012) of VLA observations, V$_{\rm tot}$; columns 7-11: peak frequency during the first (1998-1999), second (2003), third (2004), fourth (2006-2007) and last (2012) observing epochs, respectively; column 12: variability classification - NV = non variable, SV = slightly variable, V = variable (see Section \ref{sec_variability}). 
} \begin{center} \begin{tabular}{cccccccccccc} \hline Source &V$_{\rm ep (1-2)}$&V$_{\rm ep (2-3)}$&V$_{\rm ep (3-4)}$&V$_{\rm ep (4-5)}$&V$_{\rm tot (1-5)}$&$\nu_{\rm p,1}$&$\nu_{\rm p,2}$&$\nu_{\rm p,3}$&$\nu_{\rm p,4}$&$\nu_{\rm p,5}$& Class.\\ (1)&(2)&(3)&(4)&(5)&(6)&(7)&(8)&(9)&(10)&(11)&(12)\\ \hline J0754$+$3033$^{\diamond}$& 26.5$^{b,d}$& 1.5$^{d,e}$& 1.9$^{e,f}$& 6.1$^{f,h}$& 57.4&8.8$\pm$0.6&7.8$\pm$0.4&8.3$\pm$0.5&7.4$\pm$0.2&6.5$\pm$1.0& V\\ J0819$+$3823$^{\diamond}$& 6.1$^{a,d}$& 2.7$^{d,e}$& 8.5$^{e,f}$& 13.5$^{f,h}$ & 7.4&6.1$\pm$0.9&5.7$\pm$1.0&6.0$\pm$0.9&6.2$\pm$0.9&5.6$\pm$1.9&SV\\ J0821$+$3107$^{\diamond}$& 52.5$^{b,f}$ & 41.0$^{f,h}$ & - & - & 130.0 & 3.4$\pm$0.3 & - & - &2.7$\pm$0.3&$<$2.0& V \\ J0951$+$3451$^{\diamond}$& 0.2$^{a,d}$& 4.0$^{d,f}$ & 2.7$^{f,h}$ & - & 12.6&6.1$\pm$0.5&6.0$\pm$0.6& - & 5.6$\pm$0.5&5.6$\pm$1.0 & SV\\ J0955$+$3335& 6.2$^{b,d}$ & 9.7$^{d,f}$ & 9.8$^{f,h}$ & - & 124.1 & 5.7$\pm$0.6&5.2$\pm$0.5& -&4.6$\pm$0.4&2.4$\pm$0.4 & V\\ J1008$+$2533& 17.8$^{a,e}$ & 1.4$^{e,f}$ & 3.4$^{f,h}$ & - & 19.7&5.9$\pm$0.5& - &4.9$\pm$0.4&5.4$\pm$0.4&4.1$\pm$0.7 &SV\\ J1020$+$4320& 1.7$^{b,e}$& 17.7$^{e,h}$ & - & - & 23.3&4.6$\pm$0.4& - &4.6$\pm$0.5& - &5.2$\pm$1.2&SV\\ J1025$+$2541& 15.2$^{a,e}$& 19.7$^{e,h}$& - & - & 36.9&4.2$\pm$0.7& - &3.7$\pm$0.5& - &3.3$\pm$1.2 &V\\ J1035$+$4230$^{\diamond}$& 7.8$^{b,e}$& 8.4$^{e,h}$& - & - & 20.8&7.1$\pm$0.6& - &6.7$\pm$0.8& - &7.0$\pm$1.9 &SV\\ J1044$+$2959$^{\diamond}$& 4.1$^{b,e}$& 22.6$^{e,h}$& - & - & 35.9&7.1$\pm$0.7& - &4.8$\pm$0.3& - &6.1$\pm$1.2 &V\\ J1046$+$2600& 4.9$^{a,e}$& 22.4$^{e,h}$& - & - & 13.4&4.7$\pm$0.7& - &4.8$\pm$0.6& - &4.0$\pm$1.3 &SV\\ J1052$+$3355$^{\diamond}$& 42.0$^{b,e}$ & 4.8$^{e,f}$ & 73.3$^{f,h}$ & - & 206.0 &5.2$\pm$0.6& - &1.6$\pm$0.1 &4.2$\pm$0.5&$<$2 & V\\ J1054$+$5058$^{\diamond}$& 7.6$^{c,f}$& 9.1$^{f,h}$& - & - & 7.1&$>$22 & - & - &$>$22 & $>$15 & SV\\ J1058$+$3353& 41.4$^{b,d}$ & 105.4$^{d,h}$ & - & - & 72.2 &6.7$\pm$0.6&26.5$\pm$0.3& - & - &31.7$\pm$0.6 & V\\ J1107$+$3421$^{\diamond}$& 14.5$^{b,d}$& 8.5$^{d,e}$& 2.8$^{e,f}$ &22.7$^{f,h}$& 49.9&4.6$\pm$0.6&4.4$\pm$0.8&4.6$\pm$0.6&4.2$\pm$0.6&3.9$\pm$1.4 & V\\ J1137$+$3441& 32.5$^{b,d}$ & 105.4$^{d,h}$ & - & - & 19.3&14.3$\pm$0.4& $>$34& - & - &9.7$\pm$0.7 & V\\ J1218$+$2828 & 19.5$^{a,d}$ & 176.2$^{d,h}$ & - & - & 131.2 &6.9$\pm$0.7&7.5$\pm$0.6& - & - &3.9$\pm$0.4 & V\\ J1240$+$2323 & 6.2$^{a,d}$ & 4.2$^{d,f}$ & 20.6$^{f,h}$ & - & 9.4 &7.8$\pm$0.6&9.3$\pm$0.4& - &9.8$\pm$0.4&9.8$\pm$0.6 & SV\\ J1240$+$2425& 14.8$^{a,d}$ & 2.6$^{d,f}$ & 193.0$^{f,h}$ & - & 156.5 &3.8$\pm$0.5&$<$1.4 & - &2.7$\pm$0.3&$<$ 1 & V\\ J1258$+$2820 & 21.3$^{b,d}$ & 4.6$^{d,f}$ & 3.2$^{f,h}$ & - & 22.9 & 4.7$\pm$0.3&7.0$\pm$0.4& - &14.6$\pm$0.3& 8.1$\pm$0.5 &SV\\ J1309$+$4047$^{\diamond}$& 7.0$^{b,d}$& 1.6$^{d,f}$& 5.3$^{f,h}$ & - & 2.4&5.4$\pm$0.6&5.7$\pm$0.6& - &5.4$\pm$0.7&4.8$\pm$1.0 &SV\\ J1420$+$2704$^{\diamond}$& 8.6$^{b,d}$& 2.8$^{d,g}$& 6.0$^{g,h}$ & - & 24.6&7.2$\pm$0.8&6.5$\pm$0.6&6.6$\pm$0.6& - &6.9$\pm$1.3 & SV\\ J1421$+$4645& - & - & - & - & 44.1&5.5$\pm$0.3& - & -& - &7.8$\pm$1.0& V\\ J1459$+$3337& 37.3$^{b,d}$& 182.2$^{d,h}$& - & - & 250.8&21.2$\pm$0.9&15.8$\pm$0.8& - & - &2.8$\pm$0.9 & V\\ J1512$+$2219$^{\diamond}$& - & - & - & - & 92.3 &2.8$\pm$0.3& - & - & - & 1.6$\pm$0.5 &V\\ J1528$+$3816& - & - & - & - & 19.4&17.7$\pm$0.4& - & - & - &15.2$\pm$0.7 & SV\\ J1530$+$2705& 58.1$^{b,d}$ & 4.7$^{d,g}$ & 20.6$^{g,h}$ & - & 22.9 &10.2$\pm$0.6&5.6$\pm$0.7& - &7.2$\pm$0.7&4.6$\pm$0.6 &V\\ J1530$+$5137& - & - & - & - & 52.7&15.9$\pm$0.2& - & - & - 
&19.9$\pm$0.7 & V\\ J1547$+$3518$^{\diamond}$& 5.9$^{b,d}$ & 7.2$^{d,f}$ & 8.4$^{f,h}$ & - & 23.9 &17.5$\pm$0.5&16.3$\pm$0.1& - &16.4$\pm$0.4&14.9$\pm$0.6 & SV\\ J1602$+$2646& 3.1$^{b,d}$& 22.5$^{d,h}$& - & - & 28.5&15.9$\pm$0.5&16.2$\pm$0.5& - & - &15.0$\pm$1.5 & V\\ J1613$+$4223$^{\diamond}$& 6.7$^{b,e}$& 0.8$^{e,g}$& 1.2$^{g,h}$ & - & 12.6&4.7$\pm$0.8&4.5$\pm$0.8&4.5$\pm$0.8& - &4.4$\pm$2.0 &SV\\ J1616$+$4632& 26.1$^{b,f}$ & 126.9$^{f,h}$ & - & - & 62.2 & $>$22 & - & - & 8.6$\pm$0.3&2.6$\pm$0.2 & V\\ J1617$+$3801& 1.3$^{b,d}$& 9.7$^{d,h}$& - & - & 14.4&12.0$\pm$0.4&9.6$\pm$0.5& - & - &12.5$\pm$1.7 & SV\\ J1624$+$2748$^{\diamond}$& 3.8$^{b,d}$& 4.7$^{d,h}$& - & - & 2.7&13.0$\pm$0.7&14.2$\pm$0.6& - & - &16.9$\pm$2.3 & NV\\ J1719$+$4804$^{\diamond}$& 27.6$^{b,d}$& 2.3$^{d,e}$& 34.3$^{e,g}$&120.7$^{g,h}$& 254.3&10.8$\pm$0.4&6.3$\pm$0.4&6.2$\pm$0.4&4.9$\pm$0.4&2.8$\pm$0.9 & V\\ \hline \end{tabular} \end{center} \label{vla-variability} \end{table*} \subsection{VLBA observations and data reduction} VLBA observations at 15 and 24 GHz of a sub-sample of 12 faint HFP sources were carried out on 2019 January 19 in dual polarization with an aggregate bit rate of 2 Gbps (project code BO057). The target sources were selected on the basis of their peak frequency below 7 GHz, in order to have observations in the optically-thin part of the spectrum. \\ Each source was observed for about 25 min at 15 GHz and for 30 min at 24 GHz, spread into twelve to fifteen short scans of about 2 min each, switching between frequencies and sources in order to improve the coverage of the ({\it u,v}) plane. Target sources are too faint for fringe fitting, and the observations were performed in phase-referencing mode. Phase calibrators are reported in Table \ref{phase-vlba}. \\ Calibration and data reduction were performed following the standard procedures described in the Astronomical Image Processing System ({\texttt AIPS}) cookbook. J0927+3902 was used to generate the bandpass correction. The amplitudes were calibrated using antenna system temperatures and antenna gains and applying an atmospheric opacity correction. The uncertainties on the amplitude calibration were found to be approximately 7 per cent at both frequencies.\\ Images were produced in {\texttt AIPS} with the task {\texttt IMAGR}. Phase-only self-calibration was performed for those sources stronger than 10 mJy. The rms noise level on the image plane is between 0.07 and 0.3 mJy beam$^{-1}$. Flux densities are reported in Table \ref{vlba-flux}. When the source is resolved, we define as E and W the eastern and western components, respectively (in Fig. \ref{vlba-images} north is up and east is to the left). \\ The uncertainty on the source position was estimated by comparing the positions at 15 and 24 GHz, and it is about 0.16 mas.
\\ \begin{figure*} \begin{center} \special{psfile=0754-color.ps voffset=-350 hoffset=-40 vscale=45 hscale=45} \special{psfile=0819-color.ps voffset=-350 hoffset=140 vscale=45 hscale=45} \special{psfile=0821-color.ps voffset=-350 hoffset=320 vscale=45 hscale=45} \special{psfile=0951-color.ps voffset=-535 hoffset=-40 vscale=45 hscale=45} \special{psfile=0955-color.ps voffset=-535 hoffset=140 vscale=45 hscale=45} \special{psfile=1008-color.ps voffset=-535 hoffset=320 vscale=45 hscale=45} \special{psfile=1020-color.ps voffset=-720 hoffset=-40 vscale=45 hscale=45} \special{psfile=1025-color.ps voffset=-720 hoffset=140 vscale=45 hscale=45} \special{psfile=1035-color.ps voffset=-720 hoffset=320 vscale=45 hscale=45} \vspace{20cm} \caption{Radio spectra of the 35 HFPs from the ``faint'' HFP sample observed with the VLA during the observing runs presented in this paper. Black diamonds and black solid line refer to the first epoch observations \citep[1998-2000,][]{cstan09}; blue triangles and a blue dotted line refer to observations in 2003; green asterisks and a green dashed line refer to observations in 2004; orange $+$ signs and orange dash-dot line refer to observations in 2006/2007; red crosses and red dash-dot-dot line refer to the last epoch (2012). Flux densities and precise observing dates for the epoch 2003, 2004, and 2006-2007 can be found in \citet{mo10}. The designations V, SV, and NV, mean that the source is classified as variable, slightly variable, and non variable, respectively, while F indicates a flat radio spectrum during at least one epoch (see Section 3.2).} \label{radio_spectra} \end{center} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \begin{center} \special{psfile=1044-color.ps voffset=-350 hoffset=-40 vscale=45 hscale=45} \special{psfile=1046-color.ps voffset=-350 hoffset=140 vscale=45 hscale=45} \special{psfile=1052-color.ps voffset=-350 hoffset=320 vscale=45 hscale=45} \special{psfile=1054-color.ps voffset=-535 hoffset=-40 vscale=45 hscale=45} \special{psfile=1058-color.ps voffset=-535 hoffset=140 vscale=45 hscale=45} \special{psfile=1107-color.ps voffset=-535 hoffset=320 vscale=45 hscale=45} \special{psfile=1137-color.ps voffset=-720 hoffset=-40 vscale=45 hscale=45} \special{psfile=1218-color.ps voffset=-720 hoffset=140 vscale=45 hscale=45} \special{psfile=1240+23-color.ps voffset=-720 hoffset=320 vscale=45 hscale=45} \vspace{20cm} \caption{Continued.} \end{center} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \begin{center} \special{psfile=1240+24-color.ps voffset=-350 hoffset=-40 vscale=45 hscale=45} \special{psfile=1258-color.ps voffset=-350 hoffset=140 vscale=45 hscale=45} \special{psfile=1309-color.ps voffset=-350 hoffset=320 vscale=45 hscale=45} \special{psfile=1420-color.ps voffset=-535 hoffset=-40 vscale=45 hscale=45} \special{psfile=1421-color.ps voffset=-535 hoffset=140 vscale=45 hscale=45} \special{psfile=1459-color.ps voffset=-535 hoffset=320 vscale=45 hscale=45} \special{psfile=1512-color.ps voffset=-720 hoffset=-40 vscale=45 hscale=45} \special{psfile=1528-color.ps voffset=-720 hoffset=140 vscale=45 hscale=45} \special{psfile=1530+27-color.ps voffset=-720 hoffset=320 vscale=45 hscale=45} \vspace{20cm} \caption{Continued.} \end{center} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \begin{center} \special{psfile=1530+51-color.ps voffset=-350 hoffset=-40 vscale=45 hscale=45} \special{psfile=1547-color.ps voffset=-350 hoffset=140 vscale=45 hscale=45} \special{psfile=1602-color.ps voffset=-350 hoffset=320 vscale=45 hscale=45} 
\special{psfile=1613-color.ps voffset=-535 hoffset=-40 vscale=45 hscale=45} \special{psfile=1616-color.ps voffset=-535 hoffset=140 vscale=45 hscale=45} \special{psfile=1617-color.ps voffset=-535 hoffset=320 vscale=45 hscale=45} \special{psfile=1624-color.ps voffset=-720 hoffset=-40 vscale=45 hscale=45} \special{psfile=1719-color.ps voffset=-720 hoffset=140 vscale=45 hscale=45} \vspace{20cm} \end{center} \caption{Continued.} \end{figure*} \begin{figure*} \begin{center} \special{psfile=0754_U_WINDOW.PS voffset=-260 hoffset=-10 vscale=40 hscale=40} \special{psfile=0754_K_WINDOW.PS voffset=-260 hoffset=240 vscale=40 hscale=40} \special{psfile=0819_U_WINDOW.PS voffset=-460 hoffset=-10 vscale=40 hscale=40} \special{psfile=0819_K_WINDOW.PS voffset=-460 hoffset=240 vscale=40 hscale=40} \special{psfile=1025_U_WINDOW.PS voffset=-660 hoffset=-10 vscale=40 hscale=40} \special{psfile=1025_K_WINDOW.PS voffset=-660 hoffset=240 vscale=40 hscale=40} \vspace{22cm} \caption{VLBA images at 15 GHz ({\it left}) and at 24 GHz ({\it right}). On each image we provide the source name, the observing frequency, the peak brightness (peak br.) and the first contour (f.c.), which is 3 times the off-source noise level on the image plane. Contours increase by a factor of 2. The beam is plotted in the bottom left-hand corner of each image.} \label{vlba-images} \end{center} \end{figure*} \addtocounter{figure}{-1} \begin{figure*} \begin{center} \special{psfile=1107_U_WINDOW.PS voffset=-260 hoffset=-10 vscale=40 hscale=40} \special{psfile=1107_K_WINDOW.PS voffset=-260 hoffset=240 vscale=40 hscale=40} \special{psfile=1459_U_WINDOW.PS voffset=-460 hoffset=-10 vscale=40 hscale=40} \special{psfile=1459_K_WINDOW.PS voffset=-460 hoffset=240 vscale=40 hscale=40} \vspace{15cm} \caption{Continued.} \end{center} \end{figure*} \section{Results} The large band width of the VLA observations allowed us to determine the spectral shape roughly continuously from 1 to 22 GHz, with only a few gaps among X, Ku and K bands. All the 35 sources are unresolved in the VLA images and we measured the flux densities using the task \texttt{JMFIT} in \texttt{AIPS} which performs a Gaussian fit on the image plane. In Fig. \ref{radio_spectra} we plot the flux densities for each spectral window, together with the measurements from earlier epochs as it is described in the caption. \\ VLBA observations could detect all the target sources at 15 and 24 GHz. Among the 12 sources observed with the VLBA, 7 sources ($\sim$58 per cent) are resolved or marginally resolved on pc-scale. As in the case of VLA, for the unresolved sources, or unresolved components, we measured the flux density using the task \texttt{JMFIT}, whereas for sources with complex structure we estimated the total flux density using \texttt{TVSTAT}, which extracts the flux density on a selected polygonal area on the image plane. Errors on the VLA and VLBA flux densities are estimated by $\sigma = \sqrt{\sigma_{\rm cal}^{2} + \sigma_{\rm rms}^{2}}$, where $\sigma_{\rm cal}$ is the uncertainty on the amplitude calibration (see Section 2), and $\sigma_{\rm rms}$ is the 1-$\sigma$ noise level measured on the image plane. 
The former contribution is generally much larger than the latter.\\ Errors on the spectral index are computed by standard error propagation:\\ $\sigma_{\rm sp} = \sqrt{ \left(\frac{\sigma_{S1}}{S_1} \right)^2 + \left( \frac{\sigma_{S2}}{S_2} \right)^2} \frac{1}{{\rm ln(\nu_2) - ln(\nu_1)}}$\\ \noindent where $S_{i}$ and $\sigma_{Si}$ are the flux density and the flux density error, respectively, at the frequency $\nu_i$. \\ Sloan Digital Sky Survey (SDSS) information from data release 12 \citep[DR12,][]{alam15} has been used to identify the host and its redshift (either spectroscopic or photometric) for the sources still lacking an optical counterpart in previous studies. The updated information is reported in Table \ref{vla-flux}. \\ \subsection{Radio spectrum} \label{sec-spectral} One of the main characteristics of young radio sources is the convex radio spectrum that turns over at a frequency related to the source size/age. In general, as the source expands in the interstellar medium (ISM) of the host galaxy, the peak frequency progressively moves to lower and lower frequencies. The anti-correlation found between the peak frequency and the linear size \citep{odea97} implies that the smaller and younger sources should have the spectral peak above a few GHz. \\ Following \citet{mo10} and with the goal of estimating the peak frequency and how it changes with time, we fitted the simultaneous multi-frequency radio spectrum for each epoch with the purely analytic function: \begin{equation} {\rm Log(}S{\rm )} = a + {\rm Log (}\nu{\rm )} \times (b + c \,{\rm Log (}\nu{)}) \label{eq_spectrum} \end{equation} \noindent where $S$ is the flux density, $\nu$ the frequency, and $a$, $b$, and $c$ numeric parameters. Fits to the radio spectra are presented in Fig. \ref{radio_spectra}.\\ For two sources, J1052$+$3355 and J1512$+$2219, the fit did not converge. In particular, the spectrum of J1512$+$2219 is highly convex, whereas in J1052$+$3355 the lack of data points at low frequencies prevents the fit in the optically-thick part of the spectrum.\\ Following \citet{mo10} we compute the spectral index below ($\alpha_{\rm b}$) and above ($\alpha_{\rm a}$) the peak frequency. For some sources we could not estimate either $\alpha_{\rm b}$ or $\alpha_{\rm a}$, owing to the lack of data points below or above the peak frequency.\\ In 6 sources, J1008$+$2533, J1137$+$3441, J1240$+$2323, J1528$+$3816, J1530$+$5137, and J1616$+$4632, the spectral shape changes from convex in the first epoch to flat in 2012, with $\alpha_{\rm a} < 0.4$. In addition, the source J1054$+$5058 shows an inverted spectrum up to 24/32 GHz in all epochs. The remaining sources keep a convex spectrum, but with some amount of flux density variation. \\ In 19 out of the 35 observed sources (54 per cent; 14 quasars, 4 galaxies, and 1 object still missing the optical counterpart) we observe a decrease of the flux density in the optically-thin part of the spectrum, and a flux density increase below the current peak frequency, which may be consistent with a source in adiabatic expansion. Although the variability probed by the VLA monitoring for J0955$+$3335 is consistent with a source in adiabatic expansion, the flux density measured by VLBA images in an intermediate epoch (in 2010) revealed a temporary increase of the flux density at 15 and 22 GHz, which is hard to reconcile with an expanding source. For this reason \citet{mo12} labelled this source a blazar candidate, and we do not consider this source as a CSO candidate anymore.
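As an aside, the fit of Eq. (\ref{eq_spectrum}) and the resulting peak-frequency estimate can be reproduced with a short least-squares script. The following Python sketch uses toy flux densities (loosely patterned on Table \ref{vla-flux}), not a re-analysis of our data:

\begin{verbatim}
import numpy as np

def fit_log_parabola(nu, s):
    # Least-squares fit of Eq. (eq_spectrum):
    # log10(S) = a + b*log10(nu) + c*log10(nu)^2.
    x, y = np.log10(nu), np.log10(s)
    c, b, a = np.polyfit(x, y, 2)   # coefficients, highest power first
    # a convex spectrum has c < 0 and peaks at nu_p = 10^(-b/(2c))
    nu_p = 10.0 ** (-b / (2.0 * c)) if c < 0 else np.inf
    return (a, b, c), nu_p

# toy convex spectrum (frequencies in GHz, flux densities in mJy)
nu = np.array([1.4, 1.7, 4.5, 5.0, 8.1, 8.4, 15.0, 22.0])
s = np.array([27.0, 36.0, 63.0, 64.0, 57.0, 56.0, 35.0, 27.0])
(_, _, _), nu_p = fit_log_parabola(nu, s)
print(f"peak frequency ~ {nu_p:.1f} GHz")
\end{verbatim}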
As for J0955$+$3335, the same reasoning applies to the source J1052$+$3355 (see Section 3.5).\\
The remaining 16 sources (46 per cent; 9 quasars, 6 galaxies, and 1 object still missing an optical counterpart) show random flux density increases or decreases, as expected for blazars.\\
In Table \ref{vla-variability} we provide the peak frequency for each observing epoch. In 13 sources the peak frequency remains roughly constant within the errors, while in 11 sources it moves to lower frequencies with time. Among the latter group, the source J1218$+$2828 represents an outlier: between 1.4 and 15 GHz the spectral shape is convex, with the peak frequency shifted towards lower frequencies with respect to that estimated in earlier epochs. However, above 15 GHz the flux density increases again, suggesting the presence of an additional compact component with a highly inverted spectrum. The radio spectrum becomes clearly inverted within the 2-GHz bandwidth sampled by the present K-band observations. In 4 sources the peak moves to higher frequencies with time, while in 5 sources it moves up and down without any trend. Remarkably, in all the 18 sources with spectral variability consistent with adiabatic expansion, the peak frequency is constant or decreases with time.\\
\subsection{Variability}
\label{sec_variability}
Samples selected at high radio frequencies ($>$ 5 GHz) have proved to be highly contaminated by blazars \citep{tinti05, torniainen05,sadler06,sadler08,hovatta07,mo07,mingaliev12}. Flux density and spectral variability are common properties of blazars, whereas CSOs are among the least variable sources, with variability of about 10 per cent at most over one year \citep{odea98}, although some ``secular'' variations are known for a few objects \citep[e.g. OQ\,208,][]{cstan97}.\\
In order to identify and remove highly variable sources we performed multi-epoch VLA observations covering quasi-simultaneously the frequencies between $\sim$1 GHz and 22 GHz. Following \citet{tinti05}, we estimate the variability by means of the parameter $V$ defined as:
\begin{equation}
V = \frac{1}{m} \sum_{i=1}^{m} \frac{(S_{\rm A}(i) - S_{\rm B}(i))^{2}}{\sigma_{i}^{2}}
\label{eq-var}
\end{equation}
\noindent where $S_{\rm A}(i)$ and $S_{\rm B}(i)$ are the flux densities at the $i$-th frequency in two consecutive epochs, $\sigma_{i}$ is the error on $S_{\rm A}(i) - S_{\rm B}(i)$, and $m$ is the number of sampled frequencies. We compute the variability index on consecutive epochs, $V_{\rm ep}$, in order to detect a flaring state followed by a period of quiescence. In addition, we compute $V_{\rm tot}$ between the first epoch from \citet{cstan09} and the last epoch (data presented in this paper) in order to determine the long-term variability spanning a decade of observations (between 1998-1999 and 2012). Results are reported in Table \ref{vla-variability}. In Fig. \ref{histo-var} we plot the distribution of $V_{\rm tot}$ for galaxies, quasars, and all sources. Values range between about 3 and 255. Only 1 source optically associated with a galaxy (J1624$+$2748) has $V_{\rm tot} <$4, and is marked as non-variable (NV) in Table \ref{vla-variability} and in Fig. \ref{radio_spectra}. The threshold $V = 4$ corresponds to a variability of about 10 per cent between two epochs. The largest fraction of the objects, $\sim$47 per cent (16 sources; 13 quasars, 2 galaxies, and 1 object with no optical counterpart), have $V\leq$25, indicating that some variability, up to about 30 per cent, is common on long time scales.
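For completeness, Eq.~\ref{eq-var} translates directly into a few lines of Python (a minimal sketch with our own naming; we read $\sigma_{i}$ as the quadrature sum of the single-epoch flux density errors):
\begin{verbatim}
# Sketch of the variability index V defined above; our naming.
import numpy as np

def variability_index(S_a, err_a, S_b, err_b):
    """V between two epochs sampled at the same m frequencies."""
    sigma2 = err_a**2 + err_b**2      # error on S_a - S_b, squared
    return np.mean((S_a - S_b)**2 / sigma2)

# V < 4 corresponds to ~10 per cent variability: non-variable (NV)
\end{verbatim}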
These sources are marked as slightly variable (SV) in Table \ref{vla-variability} and in Fig. \ref{radio_spectra}. Sources with a variability index above 25 are marked as variable (V). If the spectrum turns out to be flat, the source is also labelled F (see Sect. \ref{sec-spectral}).\\
In Fig. \ref{fig-var} we plot for each source the variability index computed for consecutive epochs, $V_{\rm ep}$, as a function of the variability index computed over the whole period, $V_{\rm tot}$. Usually the variation between two consecutive epochs is smaller than that estimated over the whole period. This is consistent with changes produced by a source in adiabatic expansion. 10 sources (5 quasars, 4 galaxies, and 1 object with unidentified optical counterpart) have $V_{\rm ep} > V_{\rm tot}$, indicating that the variation observed between two consecutive epochs is larger than the variability derived between 1998$-$1999 and 2012. This behaviour is typical of blazars, which randomly alternate between low-activity and high-activity states. \\
\begin{figure}
\begin{center}
\special{psfile=histo-variability.ps voffset=-270 hoffset=0 vscale=40 hscale=40}
\vspace{7.5cm}
\caption{Distribution of the variability index computed over the whole period $V_{\rm tot}$, for galaxies (red dashed line), quasars (blue dotted line), and all sources (black solid line).}
\label{histo-var}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\special{psfile=var-sources.eps voffset=-220 hoffset=0 vscale=40 hscale=40}
\vspace{7.5cm}
\caption{Variability index computed for two consecutive epochs $V_{\rm ep}$ as a function of the variability index computed over the whole period $V_{\rm tot}$ for each source. Crosses, diamonds and triangles refer to galaxies, quasars, and sources with unidentified optical counterpart, respectively. The line indicates $V_{\rm ep} = V_{\rm tot}$.}
\label{fig-var}
\end{center}
\end{figure}
\subsection{Dynamical ages}
For a homogeneous synchrotron source with the magnetic field frozen into the adiabatically expanding plasma, the flux density in the optically-thick part of the spectrum increases with time:
\begin{equation}
S_{1} = S_{0} \left( \frac{t_{0} + \Delta t}{t_{0}}\right)^{3}
\label{eq_thick}
\end{equation}
\noindent where $S_{0}$ and $S_{1}$ are the flux densities at the times $t_{0}$ and $t_{0} + \Delta t$, the latter being the source age at the time of the last observing epoch. Moreover, the peak frequency moves towards lower frequencies:
\begin{equation}
\nu_{p,1} = \nu_{p,0} \left( \frac{t_{0}}{t_{0} + \Delta t} \right)^{4}
\label{eq_peak}
\end{equation}
\noindent where $\nu_{p,0}$ and $\nu_{p,1}$ are the peak frequencies at the times $t_{0}$ and $t_{0} + \Delta t$. Among the 35 sources observed in 2012, 18 (14 quasars, 3 galaxies, and 1 object with no optical counterpart) show a variability that is consistent with the expectation of adiabatic expansion. We estimate the dynamical age of the radio source, $t_{0} + \Delta t$, using Eq. \ref{eq_thick} and comparing the flux density at 1.4/1.7 GHz in the first and last observing epoch. The dynamical age estimated by Eq. \ref{eq_peak} is highly uncertain due to the large uncertainty on the peak frequency. The estimated dynamical ages for the 11 sources with significant variability consistent with adiabatic expansion are reported in Table \ref{tab-lifetime}. Dynamical ages range between 40 and 270 yr, supporting the idea that these sources are in a very early phase of their evolution.
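The dynamical age follows from inverting Eq.~\ref{eq_thick}; a minimal sketch (our naming), with $S_{0}$ and $S_{1}$ the optically-thick flux densities separated by $\Delta t$, is:
\begin{verbatim}
# Sketch (our naming): invert the optically-thick relation
# S1/S0 = ((t0 + dt)/t0)^3 for the dynamical age t_age = t0 + dt.
def dynamical_age(S0, S1, dt_days):
    dt_yr = dt_days / 365.25
    ratio = (S1 / S0) ** (1.0 / 3.0)   # = (t0 + dt) / t0
    t0 = dt_yr / (ratio - 1.0)
    return t0 + dt_yr

# e.g. a doubling of the flux density over 4700 days gives ~62 yr
\end{verbatim}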
For the remaining 7 sources the variation in the optically-thick part of the spectrum is consistent within the errors, preventing us from setting any constraints on their age. \\
\begin{table}
\caption{Estimated dynamical ages for faint HFP sources whose variability is consistent with relativistic plasma in adiabatic expansion. Column 1: source name; column 2: time (in days) elapsed between the first and last VLA observations; column 3: source dynamical age (in yr) at the epoch of the last VLA observations in 2012; column 4: linear size (pc); column 5: estimated source expansion velocity in units of the speed of light.}
\begin{center}
\begin{tabular}{ccccc}
\hline
Source name & $\Delta t$ & $t_{\rm age}$ & $r$ & $v$\\
 & days & yr & pc& $c$\\
\hline
J0754$+$3033 & 4713 & 130 & 12 & 0.5\\
J0819$+$3823 & 4932 & 240 & - & -\\
J0955$+$3335 & 4709 & 240 & - & -\\
J1044$+$2959 & 4576 & 75 & $<$4& $<$0.7\\
J1052$+$3355 & 4709 & 100 & - & -\\
J1107$+$3421 & 4709 & 270 & - & -\\
J1309$+$4047 & 4688 & 255 & 6.8& 0.4\\
J1420$+$2704 & 4596 & 130 & - & -\\
J1459$+$3337 & 4699 & 40 & 7 & 0.9\\
J1613$+$4223 & 4720 & 225 & - & -\\
J1719$+$4804 & 4720 & 160 & $<$2.5 &$<$0.1\\
\hline
\end{tabular}
\end{center}
\label{tab-lifetime}
\end{table}
\subsection{Parsec-scale structure}
A sub-sample of 12 sources from the faint HFP sample was observed with the VLBA in 2019 January. These sources were selected on the basis of their peak frequency being below 7 GHz, in order to have VLBA observations at 15 and 24 GHz in the optically-thin part of the spectrum. 5 sources (about 42 per cent) are resolved into two components, while 2 sources are marginally resolved at 24 GHz only (Table \ref{phase-vlba}). Among the 12 newly observed sources, 6 were targets of earlier VLBA observations at 8.4 and 15 GHz \citep{mo08,mo12}. The source J1107$+$3421 is resolved in VLBA images at both epochs, whereas J1002$+$5701 (not observed with the VLA in 2012), which appeared slightly resolved in \citet{mo12}, is now unresolved. Remarkable cases are J0819$+$3823 and J1459$+$3337, which show a double structure in the last epoch of observations, while they were unresolved in earlier observations. J1420$+$2704 and J1613$+$4223 are unresolved in both observing epochs. In general, the flux density at 15 GHz in our VLBA observations in 2019 is significantly smaller than the flux density measured at the same frequency in earlier VLBA observations, in agreement with the trend of decreasing flux density observed in the VLA monitoring campaign. Usually S$_{\rm VLBA}/$S$_{\rm VLA}$ at 15 GHz is comparable to or higher than at 24 GHz. In three sources S$_{\rm VLBA}/$S$_{\rm VLA}$ is higher at 24 GHz. Although the central frequencies of the VLA and VLBA observations are slightly different (14.5 and 21.5 GHz at the VLA, and 15 and 24 GHz at the VLBA), the flux densities can still be compared. In fact, even in the case of the source with the steepest spectral index (J1613$+$4223, $\alpha_{\rm VLA} \sim$1.6), the difference between the flux density at 14.5 (21.5) GHz and the one extrapolated to 15 (24) GHz is within the error. The spectral index of the sources has been computed considering the full-resolution images at both frequencies, since they provide the lowest rms noise levels. This may cause an artificial steepening of the spectral index if some extended emission is present.
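To make the extrapolation argument explicit (the arithmetic here is ours, for illustration), a power law $S \propto \nu^{-\alpha}$ with $\alpha = 1.6$ gives
\begin{equation*}
\frac{S(15\,{\rm GHz})}{S(14.5\,{\rm GHz})} = \left( \frac{15}{14.5} \right)^{-1.6} \simeq 0.95,
\end{equation*}
i.e. a correction of only $\sim$5 per cent at 15 GHz, comparable to or smaller than the typical flux density uncertainties of these faint sources.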
In general, the spectral index ranges between $-0.5$ and 2.0 (Table \ref{vlba-flux}), indicating the presence of compact, flat-spectrum components, like cores and mini-hotspots, as well as steep-spectrum emission from jets or mini-lobes. \\
\subsection{Notes on individual sources}
Here we provide a brief description of the sources deserving some discussion. Sources which are considered CSO candidates are marked in boldface. An asterisk indicates that the VLBA observations of the source are presented in this paper.\\
\indent {\bf J0754+3033*}: The VLA radio spectrum shows moderate variability on long time scales ($V_{\rm tot} \sim 57$), but low variability when consecutive epochs separated by a few years or less ($V_{\rm ep} < 2$ between 2003 and 2004, and between 2004 and 2006; Table \ref{vla-variability}) are considered. The turnover frequency moves slightly to lower frequencies, from 8.9$\pm$0.6 GHz in 1999 to 6.5$\pm$1.0 GHz in 2012. The flux density in the optically-thin part of the spectrum decreases with time, whereas it increases in the optically-thick part. The dynamical age inferred from its variability is about 130 yr (Table \ref{tab-lifetime}). On pc-scale the source shows a double structure (Fig. \ref{vlba-images}) with the components separated by about 1.6 mas (i.e. about 12 pc at the redshift of the source). The spectral index of the eastern component is 0.6, indicating the presence of freshly injected/accelerated relativistic particles, as one would expect if the component hosts the core and/or a mini-hotspot, whereas it is steeper ($\alpha \sim$1.3) in the western component (Table \ref{vlba-flux}). These characteristics suggest that this source is a CSO candidate.\\
\indent {\bf J0819$+$3823*}: The VLA radio spectrum is marginally variable, with $2.7 < V_{\rm ep} < 14$ and $V_{\rm tot} \sim 7$. The peak frequency is roughly constant within the errors ($\nu_{\rm p} \sim$ 6 GHz), while the flux density increases in the optically-thick part of the spectrum and decreases in the optically-thin part. The dynamical age computed from its variability is about 240 yr (Table \ref{tab-lifetime}). In the VLBA image at 15 GHz there is diffuse emission to the east of the compact component that dominates the flux density, and the source size is about 1 mas. At 24 GHz the compact component is slightly resolved into two components separated by about 0.4 mas (i.e. the distance between the peaks of the two components). The spectral index integrated over the whole VLBA structure is about 2.0, while the VLA spectral index between 15 and 22 GHz is about 1.0, suggesting the presence of extended steep-spectrum emission (Table \ref{vlba-flux}). The larger S$_{\rm VLBA}/$S$_{\rm VLA}$ at 15 GHz than at 24 GHz supports this interpretation. The source was unresolved in the earlier VLBA observations presented in \citet{mo12}. Although the complicated pc-scale radio structure prevents us from unambiguously classifying this source as a CSO, the VLA variability is in agreement with what is expected for a source in adiabatic expansion. For this reason we still consider this object a CSO candidate.\\
\indent {\bf J0951$+$3451}: The VLA radio spectrum is convex in all three observing epochs and the peak frequency is roughly constant within the errors ($\nu_{\rm p} = 5.6\pm1.0$ GHz). The source shows low variability ($V_{\rm ep}$ and $V_{\rm tot} \sim$12) and the flux density increases in the optically-thick part of the spectrum, while it slightly decreases in the optically-thin part.
The source is resolved into three components by VLBA observations, with the central region showing a flat spectral index \citep{mo12}. These characteristics confirm the source as a CSO.\\
\indent J1008$+$2533: The VLA radio spectrum had a convex shape only during the first epoch. In the subsequent epochs the spectrum shows a complex shape, with an inverted part below 5 GHz and a flattening at higher frequencies. However, the variability index has relatively small values, $1<V_{\rm ep} <18$ and $V_{\rm tot} \sim$20. On pc-scale it shows a core-jet structure, with a compact component with an inverted spectrum dominating the radio emission \citep{mo12}. These characteristics confirm the blazar nature of this source.\\
\indent J1025$+$2541*: The VLA radio spectrum has a convex shape in all three observing epochs. The source is moderately variable, with $V_{\rm ep} \sim 15-20$ and $V_{\rm tot} \sim$37. The peak frequency determined for each epoch is consistent within the errors, with a hint of a decrease from $\sim$4.2 GHz to $\sim$3.3 GHz. At 15 GHz the pc-scale structure is resolved into two amorphous components whose peaks are separated by about 1.1 mas (i.e. about 6.5 pc), whereas at 24 GHz it shows a single component roughly coincident with the brightest part of the source visible at 15 GHz. The VLBA flux density at 24 GHz is higher than the flux density observed by the VLA in 2012, suggesting a blazar-like variability.\\
\indent {\bf J1107+3421*}: The VLA radio spectrum shows moderate variability, with $2 < V_{\rm ep} < 23$ and $V_{\rm tot} \sim 50$. The turnover frequency is roughly constant within the errors, whereas the flux density in the optically-thick part of the spectrum increases with time. The dynamical age computed from its variability is about 270 yr (Table \ref{tab-lifetime}). The pc-scale radio source is characterized by two components (Fig. \ref{vlba-images}) separated by 1 mas. The source position angle slightly changes, from -75$^{\circ}$ at 15 GHz to -80$^{\circ}$ at 24 GHz. Although the double structure was already pointed out by \citet{mo12}, the flux density ratio at 15 GHz between the components changed from S$_{\rm E}$/S$_{\rm W}$ $\sim$2 in 2010 to $\sim$1.3 in 2019. The spectral index of the eastern component is about 0.6, while in the western component $\alpha =$-0.5$\pm$0.4, indicating an inverted spectrum. This component may be either the core or a very compact self-absorbed hotspot, as in the case of the HFP sources J1335$+$5844 and J1735$+$5049 \citep{mo14}. The fractional flux density S$_{\rm VLBA}/$S$_{\rm VLA}$ is higher at 24 GHz than at 15 GHz (about 80 per cent and 40 per cent at 24 and 15 GHz, respectively). On the basis of the VLA variability and the pc-scale properties, we still consider this source a CSO candidate.\\
\indent{\bf J1309$+$4047}: The VLA radio spectrum is roughly constant, with $1 < V_{\rm ep} < 7$ and $V_{\rm tot} \sim 2$. The peak frequency is constant within the errors ($\sim$4.5 GHz). The dynamical age computed from the variability is about 255 yr (Table \ref{tab-lifetime}). On pc-scale, the source shows a double structure whose components are separated by about 0.8 mas, i.e. 6.8 pc at the redshift of the source \citep{mo12}. The steep spectral index derived from the VLBA data makes us consider this object a CSO candidate. \\
\indent J1459+3337*: This radio source was first identified as an HFP object by \citet{edge96}, with a turnover frequency of about 30 GHz.
In the two decades thereafter, the peak progressively moved to lower and lower frequencies, to about 21 GHz and 15 GHz in 1999 and 2003, respectively, and our new observations in 2012 set the turnover at about 3 GHz. The flux density in the optically-thin part of the spectrum progressively decreases with time, while in the optically-thick part of the spectrum it steadily increases (the flux density at 1.4 GHz progressively increases from $\sim$8 mJy in 1993 to $\sim$50 mJy in 2012). The source displays one of the highest variability indices, $V_{\rm tot} \sim$250. The dynamical age computed from its variability is about 40 yr (Table \ref{tab-lifetime}). We observed a change in the radio morphology of this source: it was unresolved in VLBA observations in 2005 \citep{mo08}, while in our new observations it shows a double structure (Fig. \ref{vlba-images}), with the two components separated by about 1.1 mas and 0.9 mas at 15 and 24 GHz, respectively (i.e. about 7 pc at the distance of the source). The flux density ratio is $S_{\rm W}/S_{\rm E} \sim$ 3.8 and 3.4 at 15 and 24 GHz, respectively. The spectral index is relatively steep, with $\alpha \sim$1.0 and $\sim$1.2 in the eastern and western component, respectively. This source shows one of the largest discrepancies between VLA and VLBA flux densities, S$_{\rm VLBA}/$S$_{\rm VLA} \sim$ 20 per cent at both frequencies, indicating a huge flux density decrement between 2012 and 2019 with no significant variation of the spectral shape between these frequencies (Table \ref{vlba-flux}). The spectral and morphological changes may be explained in terms of either a CSO or a knot moving downstream along the approaching jet. In the first scenario the two components may be two asymmetric mini-lobes that are moving away from each other. However, if we consider that the source grows from $<$0.3 mas ($<$2 pc) in 2005 to 1 mas (about 7 pc) in 2019, we infer an apparent expansion speed of about $c$, favouring the interpretation of a knot propagating along the approaching jet, produced by a huge flare a few decades ago in a moderately beamed radio source. Although the variability associated with a single event hardly lasts longer than a few years \citep{hovatta08}, there are some cases in which the ejected component can be followed for a longer time. An example is 3C\,84, which underwent a huge increase of flux density in the 1960s, followed by the ejection and expansion of the southern jet \citep{walker94}, whereas not much can be said about the northern counterpart due to severe free-free absorption that prevents its detection below 22 GHz. A similar situation may have occurred in the case of J1459$+$3337, where the component emerging from the main compact region may be the approaching jet. This interpretation may be supported by the slightly different position of the eastern component in our images at 15 and 24 GHz with respect to the brightest one. For these reasons, we consider J1459$+$3337 a blazar-like candidate.\\
\indent J1530$+$2705: The VLA radio spectrum shows moderate variability, with $4 \leq V_{\rm ep} < 60$ and $V_{\rm tot} \sim 23$. Changes of the peak frequency and of the flux density in the optically-thick and optically-thin parts of the spectrum do not follow any trend with time. This radio source is hosted in a nearby galaxy which is part of a group \citep{mo10}.
No information on the pc-scale structure is available, but the erratic variability suggests that this source is a blazar.\\
\indent{\bf J1613$+$4223*}: The VLA radio spectrum is highly convex, with $\alpha_{\rm b}=-1.9$ and $\alpha_{\rm a} = 1.6$, and shows modest variability, with $0.8 \leq V_{\rm ep} < 16$ and $V_{\rm tot} \sim 13$. The dynamical age estimated from the variability is about 225 yr (Table \ref{tab-lifetime}). The peak frequency is constant within the errors ($\sim$4.5 GHz). On pc-scale the source is unresolved at both 15 and 24 GHz, with an upper limit on its angular size of 0.4 mas. The spectral index of the whole source is very steep, suggesting the presence of steep-spectrum, low-surface-brightness emission that may have been missed at 24 GHz, as supported by the much larger S$_{\rm VLBA}/$S$_{\rm VLA}$ observed at 15 GHz than at 24 GHz (Table \ref{vlba-flux}). On the basis of the VLA variability, which is consistent with a source in adiabatic expansion, we still consider this source a CSO candidate.\\
\indent{\bf J1624$+$2748}: The VLA radio spectrum is convex and displays one of the lowest variability levels estimated for this sample, with $V_{\rm ep} \sim4$ and $V_{\rm tot} \sim3$. The peak frequency is roughly constant within the errors. No information on the pc-scale structure is available, but the variability properties make this object a very promising CSO candidate.\\
\indent{\bf J1719$+$4804*}: The radio spectrum is convex and displays high variability, with $V_{\rm tot} \sim 254$. The peak frequency shifts from 10.8 GHz to 2.8 GHz between 1999 and 2012. The dynamical age computed from the variability is about 160 yr (Table \ref{tab-lifetime}). The pc-scale structure is unresolved in our VLBA images, giving an upper limit on the angular size of 0.3 mas, which corresponds to a linear size of 2.5 pc at the redshift of the source. The VLBA flux density at 15 and 24 GHz is about 4 mJy and 3.5 mJy, respectively, indicating a further decrease with respect to that observed in 2012. The VLBA spectral index is rather flat, $\alpha = 0.4\pm0.2$, suggesting that the radio emission is dominated by the core or by a very compact self-absorbed hotspot, as in the case of the HFP sources J1335$+$5844 and J1735$+$5049 \citep{mo14}. This source shows one of the largest discrepancies between VLBA and VLA flux densities, with S$_{\rm VLBA}/$S$_{\rm VLA} \sim$ 20 and 30 per cent at 15 and 24 GHz, respectively, indicating a huge flux density decrement between 2012 and 2019 with only a slight variation of the spectral shape between these frequencies (Table \ref{vlba-flux}). The VLA variability, consistent with what is expected in the case of adiabatic expansion, and the lack of unambiguous blazar characteristics make us still consider this source a CSO candidate.\\
\section{Discussion}
Ideally, unbeamed young radio sources are characterized by a low level of flux density variability, low fractional polarization, and double/triple structures dominated by lobe/hotspot components when studied with sub-arcsecond resolution. The location of the source core at the centre of a two-sided radio structure would be the hallmark of a CSO. In contrast, blazars show significant variability, pc-scale core-jet structures, and high and variable fractional polarization. It is therefore clear that the study of variability, morphology, and polarization is the key to disentangling the nature of a radio source.
\\ HFP sources are all unresolved by arcsecond-scale VLA observations, and higher-resolution observations are necessary for investigating their structure. With the aim of determining the source structures, the optically-thin emission of 23 sources from the faint HFP sample has been observed with the VLBA, in 2010 and 2019, and the results are reported in \citet{mo12} and in this paper. Despite the pc-scale resolution, 14 out of the 23 sources observed with the VLBA are unresolved or only marginally resolved. The optically-thin spectral index derived from VLA data (Table \ref{vla-flux} and \citet{mo10}) indicates that 3 of these sources (J1002$+$5701, J1436$+$4820, and J1613$+$4223) have $\alpha_{\rm a} >1.0$, suggesting that the emission is dominated by steep-spectrum emission, likely from mini-lobes. Among the sources with a resolved structure, 5 sources (J0754$+$3033, J0943$+$5113, J0951$+$3451, J1107$+$3421, and J1135$+$3624) have a double/triple structure that is consistent with those of CSOs, whereas for the other sources the double morphology may be interpreted in terms of either mini-lobes/hotspots or a core-jet structure. In general, the detection of only two components makes the classification of these sources rather tentative \citep[e.g.][]{snellen00b, deller14}. J0951$+$3451 shows slight variability between the first and last epoch, whereas J0754$+$3033 and J1107$+$3421 are highly variable. The other two sources were found to be non-variable by \citet{mo10}, and the lack of VLA observations in 2012 prevents us from studying their long-term variability.\\
Monitoring campaigns of high frequency peaking objects show that moderate variability is a common characteristic of these objects. Earlier studies of the sources from the faint HFP sample pointed out that about 40 per cent of the target sources are non-variable \citep{mo10}. This percentage drastically decreases to 1 object out of 35 (3 per cent) when we consider a longer time separation between the observing epochs. Contrary to what is found for other samples of high frequency peakers, the random variability typical of beamed objects is found in a larger fraction of the galaxies ($\sim$60 per cent) than of the quasars ($\sim$43 per cent).\\
\subsection{Variability in young radio sources}
Flux density and spectral variability are not common features of the class of CSOs, but they are characteristic of blazars. Samples selected at high frequencies (in the GHz regime) are more contaminated by beamed objects than samples selected at lower frequencies or with different criteria \citep[e.g.][]{coppejans16}. Variability studies are thus used to discriminate between CSOs and blazars. However, significant variability may also be observed among the youngest CSOs, in which freshly produced bubbles of magnetized plasma are expanding in a rather inhomogeneous ambient medium, implying an irregular expansion rate. Moreover, in CSOs the time elapsed between the first and last epoch of the monitoring campaign corresponds to a significant fraction of the lifetime of the radio source. It is then quite likely to observe spectral and flux density changes. Dynamical ages estimated on the basis of the variation of the optically-thick flux density range between 40 and 270 yr, supporting the idea that these sources are in a very early phase of their evolution.
The values derived in this way should be regarded as order-of-magnitude estimates of the dynamical ages, owing to the strong assumption of a single homogeneous component that is expanding with no effects from the ambient medium. Moreover, the core activity might not be continuous, and its likely erratic on-off cycle could perturb the predicted flux density variability.\\
If these sources are actually in a very early stage of their evolution, their large-scale counterparts should be sought among low-power radio sources observed in the MHz regime. In fact, at least for those sources for which it is possible to ``see'' the epoch-by-epoch evolution, from Eq. \ref{eq_peak} we expect that the peak frequency would decrease by a factor of about 16 (i.e. $2^{4}$) as $\Delta t$ approaches $t_{0}$, falling into the MHz regime. In parallel, the optically-thin flux density decreases as:
\begin{equation}
S_{1} = S_{0} \left( \frac{t_{0} + \Delta t}{t_{0}} \right)^{-2\delta}
\label{eq_thin}
\end{equation}
\noindent where $S_{0}$ and $S_{1}$ are the flux densities at the times $t_{0}$ and $t_{0} + \Delta t$, i.e. at the first and last observing epoch, and $\delta$ is the spectral index of the energy distribution of the relativistic particles (N($E$) $\propto E^{-\delta}$, $\delta = 2\alpha + 1$). If in Eq. \ref{eq_thin} we consider typical values of $\delta$ between 2 and 3, we find that the flux density would decrease by a factor of about 16$-$60 (i.e. $2^{2\delta}$ for $\Delta t = t_{0}$), becoming of (sub-)mJy level. Only the flux density below the peak frequency becomes more prominent with time. Part of these sources may be the progenitors of the population of low-power radio sources \citep[see e.g.][]{tingay15,baldi15,baldi19,mingo19} and/or of MHz-peaked spectrum radio sources \citep[]{coppejans15}, or they may represent short-lived episodes of radio emission from an AGN \citep{czerny09}. The Square Kilometre Array, with its huge improvement in sensitivity in the MHz regime, will enable systematic studies of the population of faint radio sources, providing a fundamental step in our understanding of their evolution.\\
For the sources with information on their parsec-scale structure and redshift, we estimate the expansion speed $v$ by:
\begin{equation}
v = \frac{\theta \; D_{\rm L}}{(1+z)^2} \frac{1+z}{t_{\rm age}}
\label{eq-valocity}
\end{equation}
\noindent where $\theta$ is the angular size measured from the VLBA images, $D_{\rm L}$ is the luminosity distance of the source, $z$ is the redshift, and $t_{\rm age}$ the estimated dynamical age; the factor $D_{\rm L}/(1+z)^{2}$ is the angular-size distance, while the remaining factor of $(1+z)$ accounts for cosmological time dilation.\\
The expansion speed ranges between 0.1$c$ and 0.7$c$, in agreement with values estimated for the population of CSOs \citep[see e.g.][]{polatidis03,antao12}, with the only exception of the quasar J1459$+$3337, which turned out to be likely a blazar on the basis of its pc-scale structure. Owing to the uncertainty on the source age, the expansion speed should be considered an upper limit. For none of the sources could our VLBA observations identify the core region, preventing us from investigating whether both jets expand at the same velocity or whether the ambient medium plays a role in their growth. Jet-cloud interaction seems to be common during the first evolutionary phase, when the jet is piercing its way through the dense and inhomogeneous gas of the narrow-line region \citep{dd13}. The presence of an inhomogeneous ambient medium has been found in some CSOs from the bright HFP sample \citep{dd00}, where free-free absorption is observed towards only one of the two jets/mini-lobes \citep[e.g.
J0428$+$3259 and J1511$+$0518,][]{mo08b}, and highly asymmetric structures mark the presence of clouds slowing down the expansion on one side, preventing its adiabatic expansion and enhancing synchrotron losses \citep[e.g. J1335$+$5844,][]{mo14}. However, the sources studied so far are relatively powerful (L$_{\rm 1.4 GHz}$ $>$ 10$^{26}$ W/Hz), and there are only a few studies on the effects the ambient medium has on the expansion of low-power (L$_{\rm 1.4 GHz}$ $<$ 10$^{26}$ W/Hz) jets \citep[e.g.][]{kunert10}. Deep and systematic VLBI observations are necessary for investigating the role of the ambient medium during the first phase of the source expansion in faint objects.\\
\subsection{Steep spectral shape}
About 20 per cent (7 out of 35) of the sources discussed here have a rather steep optically-thin VLA spectrum ($\alpha_{\rm a} >$ 1.0), which is quite uncommon in radio galaxies with active regions. These small and compact radio sources have somewhat different characteristics with respect to common extended radio galaxies. The equipartition magnetic field increases as we consider smaller and smaller sources: from a few $\mu$G in the lobes of the classical FR-I/II sources \citep{croston05}, to a few mG in compact steep-spectrum sources \citep{fanti95}, and up to a few tens to hundreds of mG in young HFP objects \citep{mo08b}. This implies that, for a given observing frequency, the Lorentz factor of the radiating particles is smaller than in larger sources with weaker fields. Moreover, the radiative losses are higher, shortening the radiative lifetime of the relativistic electrons and shifting the break frequency to lower and lower values. In the small sources the outer components are usually dominated by bright and compact regions, like mini-hotspots, while no significant emission from the lobes is observed \citep{tingay03,gugliucci05,mo08b,mo14}, supporting the severe-losses hypothesis. \\
Alternatively, the steep spectra might be caused by an injection spectral index that is steeper than what is usually found \citep{harwood16,harwood17}. Deep VLBI observations are necessary to unveil the presence of low-luminosity extended structures and to disentangle the two scenarios.\\
\section{Summary}
In this paper, we presented the results of a multi-epoch, multi-frequency VLA monitoring campaign and of pc-scale VLBA observations of CSO candidates from the faint HFP sample. The conclusions that we can draw from this investigation are:
\begin{itemize}
\item 5 out of the 12 sources (42 per cent) observed with milli-arcsecond resolution turned out to be compact doubles. Taking into account earlier observations of an additional 11 objects, we end up with a total of 11 sources showing a morphology consistent with what is expected for young radio sources. However, the radio structure and the spectral index information are not conclusive, and deeper pc-scale observations are necessary to probe the nature of the sources and identify the locations of the centres of activity.
\item 34 out of the 35 sources (97 per cent) observed with the VLA show moderate to strong long-term variability. Only one source, J1624$+$2748, shows neither spectral nor flux density variability.
\item 18 radio sources possess spectral and flux density variability that is consistent with a cloud of homogeneous magnetized relativistic plasma in adiabatic expansion. For the sources with known redshift we estimate the dynamical ages, which range from a few tens to a few hundred years.
The corresponding expansion velocity is mainly between 0.1$c$ and 0.7$c$, similar to values found in young radio sources. However, among these sources, one object shows pc-scale properties and an estimated velocity of about $c$, suggesting a blazar nature.
\item In 17 sources the flux density changes randomly, as expected in blazars, and in 6 sources the spectrum becomes flat in the last observing epoch, confirming that samples selected in the GHz regime are highly contaminated by beamed objects.
\item No significant dichotomy is found in the flux density variability between galaxies and quasars, with a slightly larger fraction of galaxies showing the erratic variability typical of beamed sources.
\end{itemize}
The fast evolution that we observe in some CSO candidates suggests that they are unlikely to be the progenitors of classical Fanaroff-Riley radio galaxies. Thanks to its huge improvement in sensitivity and its wide frequency coverage up to 100 GHz, the Next Generation Very Large Array\footnote{ngvla.nrao.edu} will be the optimal instrument for shedding light on the nature and fate of these objects.\\
\section*{Acknowledgments}
We thank the anonymous referee for reading the manuscript carefully and making valuable suggestions. The VLA and VLBA are operated by the US National Radio Astronomy Observatory, which is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This work has made use of the NASA/IPAC Extragalactic Database (NED), which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. AIPS is produced and maintained by the National Radio Astronomy Observatory, a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Funding for SDSS-III has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, and the U.S. Department of Energy Office of Science. The SDSS-III web site is http://www.sdss3.org/. SDSS-III is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS-III Collaboration including the University of Arizona, the Brazilian Participation Group, Brookhaven National Laboratory, Carnegie Mellon University, University of Florida, the French Participation Group, the German Participation Group, Harvard University, the Instituto de Astrofisica de Canarias, the Michigan State/Notre Dame/JINA Participation Group, Johns Hopkins University, Lawrence Berkeley National Laboratory, Max Planck Institute for Astrophysics, Max Planck Institute for Extraterrestrial Physics, New Mexico State University, New York University, Ohio State University, Pennsylvania State University, University of Portsmouth, Princeton University, the Spanish Participation Group, University of Tokyo, University of Utah, Vanderbilt University, University of Virginia, University of Washington, and Yale University.
\section*{Data Availability}
The data underlying this article are available in the NRAO Data Archive (https://science.nrao.edu/observing/data-archive) under the project codes AO281 and BO057. Calibrated data are available on request.
\section{Methods}
\subsection{Sample Preparation}
The Cr$_2$O$_3${} used in this study is a commercially available single crystal from MaTeK with a (0001) surface orientation. The originally \SI{5x5x1}{\mm} crystal was broken into two halves along a diagonal (see SI). The sample was prepared by removing magnetic contamination (presumably resulting from the polishing process) with a \SI{100}{\s} ArCl$_2$ plasma etch (ICP-RIE, Sentech) performed in \SI{2}{\s} steps. One side of the crystal was then spin-coated with an HSQ layer (FOx, Dow Corning), which was subsequently patterned by electron-beam lithography and developed to create \SI{10x2}{\um} mesa masks. These mask patterns were transferred into the sample with a \SI{100}{\s} ArCl$_2$ plasma etch. The masks were then removed using HF. This process results in the \SI{166}{\nm}-tall structures seen in the inset of Fig.~\ref{fig:Sample}a. For the measurements, the sample was mounted on a small Peltier element, placed on top of an open-loop, piezoelectric scanner (Attocube ANSxyz100), allowing us to heat the sample up to $\approx$\SI{340}{K}.
\subsection{NV Magnetometry}
The NV center is a point defect in diamond, whose $S=1$ electronic ground-state spin can be initialized and read out through optical excitation at \SI{532}{\nm}. Specifically, we use state-dependent fluorescence to identify the Zeeman splitting between the $\ket{\pm1}$ spin levels using optically detected magnetic resonance (ODMR). All measurements in this study were performed using scanning, all-diamond parabolic pillars~\cite{Hedrich2020a} housing a single NV center and integrated into a custom confocal imaging setup equipped with a CW \SI{532}{\nm} laser~\cite{Grinolds2013}. The measurements were performed with $<$\SI{10}{\uW} of continuous-wave optical excitation, a factor of two below typical saturation powers for NVs in these parabolic scanning pillars~\cite{Hedrich2020a}. The microwave (MW) field needed to manipulate the NV is provided by a \SI{30}{\um} gold loop antenna with a typical effective driving strength of \SI{0.25}{G} at the NV. These low excitation powers (both MW and laser) ensure that we do not disturb the DW, which we confirmed by repeating scans and verifying that the DW position remained unchanged. A small bias magnetic field ($<$\SI{60}{G}) was applied along the NV axis using a permanent magnet to allow for a sign-sensitive measurement of the stray magnetic fields. Both 2D magnetic field images and linescans were performed using a feedback technique that locks the microwave driving frequency to the instantaneous NV spin transition frequency, as described in~\cite{Schoenfeld2011}. We employ single-pixel integration times ranging from \SI{0.3}{\s} for full-field images to \SI{5}{\s} for individual line scans, with a noise floor of $\approx$~\SI{3.3}{\micro\tesla}$/\sqrt{\textrm{Hz}}$.
\subsection{Domain Wall Nucleation}
DWs are nucleated in the otherwise mono-domain, single-crystal Cr$_2$O$_3${} using magnetoelectric cooling through the N\'eel temperature. A uniform magnetic field is achieved by placing two \SI{5x5}{\cm} permanent magnets adjacent to each other and in close proximity to the sample. The result is a nearly homogeneous magnetic field of $B\approx550~$mT along the center normal, as measured by a Hall probe (AS NTM, FM302 Teslameter, Projekt Elektronik). In addition, we apply electric fields across the sample using a split-gate capacitor consisting of two quartz plates with \SI{100}{\nm} Au evaporated onto the surface.
The Cr$_2$O$_3${} sample is then centered over the capacitor gap and sandwiched between the top and bottom gates together with thin mica sheets that prevent electroplating of Au onto the crystal surface. A schematic of this setup may be found in the SI. The whole device is then heated to far above the N\'eel temperature and allowed to cool to room temperature while simultaneously applying $\pm$\SI{750}{V} between the electrodes, leading to an electric field of $E\approx$ \SI{0.75}{MV/m} across the crystal. The resulting $|\bm{E}\times \bm{B}| = 0.41\times10^6$ VT/m is sufficient to force the Cr$_2$O$_3${} sample into a two-domain state. The crystal can again be made mono-domain by repeating this procedure with a uniform capacitor rather than the split-gate. This process has been repeated twice to demonstrate its reproducibility; each realization resulted in a different domain configuration (see SI).
\subsection{Domain Wall Dragging}
The repeated movement of the DW is demonstrated via local heating with a laser, a process we describe as ``laser dragging''. For this, we remove the NV scanning probe and focus the laser onto the sample surface with a beam diameter of $\approx$\SI{420}{\nm}. We scan the sample at a speed of roughly \SI{80}{\nm/s}, perpendicular to the DW, over distances exceeding \SI{10}{\um} before reducing the laser power back to below \SI{10}{\uW}. With this method, the minimum laser power at which we have observed DW motion is \SI{135}{\uW}. We then replace the NV scanning probe and image the new DW position. These experiments were performed at a sample temperature of $304~$K, achieved through heating with the Peltier element. This domain wall dragging can also be verified through direct measurements with the NV (see SI).
\subsection{Fitting to Mesa Stray Fields}
The mesa structures etched into the surface of Cr$_2$O$_3${} play a critical role in our study, as they act as sources of stray fields for characterizing the surface magnetization ($\sigma_m$) and the NV-sample spacing ($d_{NV}$), as well as providing reference markers. For the former, we consider 29 linecut sections, each taken over a mesa, and fit the stray field at the mesa edges by modifying a well-studied model~\cite{Tetienne2015}, which describes the stray field as arising from line currents along the top and bottom edges of the mesa (see SI). We obtain estimates of the NV angles ($\theta_{NV}$ and $\phi_{NV}$), $d_{NV}$, $\sigma_m$ and the mesa edge positions, as well as their variances, through the Metropolis-Hastings (MH) algorithm (see SI). In particular, by multiplying the likelihood distributions of all 29 datasets, taken at various temperatures, we obtain reasonable estimates of the NV angles, $\theta_{NV} = 60.7\pm2.9$ and $\phi_{NV} = 260.6\pm 0.8$ degrees. We furthermore extract a mean $d_{NV}=51.4\pm19.2~$nm. By considering only the six measurements taken at room temperature, we determine the value of the surface moment density, $\sigma_m = 2.1 \pm 0.3 $ $\mu_B$/nm$^2$, where the error corresponds to the standard deviation of the measurements.
\subsection{Fitting to Domain Wall Stray Fields}
To describe the stray field of a domain wall in Cr$_2$O$_3$, we begin with the typical description of the evolution of the magnetic moments of the two sublattices in this collinear antiferromagnet \cite{Belashchenko2016a,Mitsumata2011}.
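Before detailing the wall model, we note that the core of the MH sampling used for both the mesa and domain wall fits can be sketched in a few lines of Python (function and variable names are ours; the physical stray-field likelihoods are described in the SI):
\begin{verbatim}
# Minimal Metropolis-Hastings sketch (our naming). Draws samples of
# the parameter vector p, e.g. (theta_NV, phi_NV, d_NV, sigma_m, ...),
# given a log-likelihood built from the stray-field model.
import numpy as np

def metropolis_hastings(log_like, p0, step, n_samples, rng):
    p = np.asarray(p0, dtype=float)
    logL = log_like(p)
    samples = []
    for _ in range(n_samples):
        prop = p + step * rng.standard_normal(p.size)  # random walk
        logL_prop = log_like(prop)
        if np.log(rng.uniform()) < logL_prop - logL:   # accept/reject
            p, logL = prop, logL_prop
        samples.append(p.copy())
    return np.array(samples)  # histogram -> likelihood distributions
\end{verbatim}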
For the wall model, we assume a Bloch wall of the typical form:
\begin{align}
L_x &= 0,\\
L_y &= \sech(x/\ell_m),\\
L_z &= \tanh(x/\ell_m),
\label{eq:DWprofile}
\end{align}
where $\ell_m$ is the magnetic length, as given in the main text. Thus, the domain wall profile (as shown in Fig.~\ref{fig:Sample}c) is determined by $\ell_m$, allowing us to use this parameter to characterize the domain wall width. This form of the domain wall profile is then used in the derivation of the stray field along the NV axis, as described in the SI. We again use the MH algorithm to compare our model of the stray field with the stray-field data. To do so, the NV-sample spacing, angles, and surface magnetization previously extracted from the mesa fits are used as prior information in the fit. In particular, we consider the NV-sample spacing on a case-by-case basis, as each DW linescan is taken concurrently with a mesa linescan. The upper bound for $\ell_m$ stated in the main text is then obtained from the extracted likelihood distributions at room temperature. We examine the extrema of the distributions, selecting the 98$^{th}$ percentile of the cumulative distribution function (CDF) as the upper limit on $\ell_m$. This implies that, at room temperature, our data exclude an $\ell_m$ larger than \SI{32}{\nm}. We note that, for completeness, the stray-field data have also been analyzed under the assumption of a N\'eel wall, resulting in a fit of similar quality with slightly changed model parameters. However, the resulting domain wall width is consistently smaller for a N\'eel wall, verifying the validity of our statement on the upper limit of $\ell_m$, regardless of wall type.
\subsection{Simulation Details}
The spin-lattice simulations are performed with the in-house developed SLaSi package~\cite{SLaSi, Pylypovskyi13f}, rewritten in the CUDA framework, and are based on a generic antiferromagnet consisting of a simple cubic lattice, described by the effective Hamiltonian:
\begin{equation}
\begin{aligned}
\mathcal{H} & = \dfrac{\mathcal{J} S^2}{2} \sum_{i,i'} \bm{\mu}_{i} \cdot \bm{\mu}_{i'} - \dfrac{K S^2}{2} \sum_{i} (\mu_{i}^z)^2\\
&+ c_d \dfrac{\mu_0g^2\mu_\textsc{b}^2S^2}{8\pi} \sum_{i\neq j} \left[ \dfrac{\bm{\mu}_i\cdot \bm{\mu}_j}{r_{ij}^3} - 3 \dfrac{(\bm{\mu}_i\cdot \bm{r}_{ij})(\bm{\mu}_j\cdot \bm{r}_{ij})}{r_{ij}^5} \right].
\end{aligned}
\label{eq.ham}
\end{equation}
Here, $\mathcal{J}$ is the exchange integral, $K$ is the easy-axis ($\bm{e}_z$) anisotropy, $\bm{\mu}_i$ is the unit vector representing the direction of the magnetic moment at the $i$-th lattice site, and $i'$ runs over the nearest neighbors of $i$, yielding the oppositely oriented magnetic sublattices. To represent a generic antiferromagnet, while approximating the properties of Cr$_2$O$_3${}, we set $S=1$, $a = 0.277~$nm, $\mathcal{J}$= 2.34$\times 10^{-9}~$pJ~\cite{Shi2009a} and $K$= $2.6\times 10^{-10}~$pJ, which leads to $\ell_m = a\sqrt{\mathcal{J}/K} =$ \SI{0.83}{\nm}. These values allow for a reasonable sample size in the spin-lattice simulations and properly reproduce the effects observed in experiments. The last term in the Hamiltonian (\ref{eq.ham}) represents the dipolar interaction, which we control with the parameter $c_d\in\{0,1\}$. We find that dipolar interactions change the results of our simulations neither qualitatively nor quantitatively, and we therefore favor $c_d=0$ in the following, consistent with other studies.
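As a quick numerical check (ours) of the quoted magnetic length, $\ell_m = a\sqrt{\mathcal{J}/K}$ evaluates to:
\begin{verbatim}
import math
a = 0.277        # lattice constant, nm
J = 2.34e-9      # exchange integral, pJ
K = 2.6e-10      # easy-axis anisotropy, pJ
print(a * math.sqrt(J / K))   # -> 0.831 nm, i.e. ~0.83 nm as quoted
\end{verbatim}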
With this, we solve the set of Landau-Lifshitz-Gilbert equations
\begin{equation}
\dfrac{\mathrm{d}\bm{\mu}_i}{\mathrm{d}t} = \dfrac{1}{\hbar S} \bm{\mu}_i \times \dfrac{\partial \mathcal{H}}{\partial \bm{\mu}_i} + \alpha_\textsc{g} \bm{\mu}_i \times \dfrac{\mathrm{d}\bm{\mu}_i}{\mathrm{d}t},
\end{equation}
where $\alpha_\textsc{g} = 0.5$ is the Gilbert damping, using the Runge-Kutta-Fehlberg scheme of order 4-5 with a fixed time step to find the equilibrium magnetic state, defined by $\max|\mathrm{d}\bm{\mu}_i/\mathrm{d}t|\to 0$. We simulate parallelepiped-shaped samples with the mesa faces coinciding with lattice planes. This is done without loss of generality, as simulations with arbitrarily oriented mesas show no significant variations. To simulate a given bulk domain wall position, in particular for the study shown in Fig.~\ref{fig:Memory}c, we fix the equilibrium domain wall position with notches at the sample boundaries. The initial state is defined as either a straight DW, which can cross the mesa, or one which is pinned at and bent around the mesa edges. The magnetization is then relaxed, and its energy is compared to that of an unperturbed domain wall far from the mesa. The excess energy of the initial state can cause the DW to switch from a high-energy (strongly extended) to a low-energy state, thereby imitating a switch induced by an external stimulus. We note that the present model contains no bias to select a particular DW type (Bloch, N\'eel, or other). The resulting equilibrium DW type in the simulations is thus determined by the initial magnetic state we choose before numerical relaxation and may also vary along the DW. We varied details of the initial conditions and observed no influence of the DW type on the relaxed DW trajectory. Finally, to further investigate the robustness of our model, we performed simulations in which we lower the structural symmetry of the lattice by shifting the two magnetic sublattices by half a lattice constant with respect to each other along the main axis of the cubic lattice. Here, we observe quantitatively similar results to those for the original simple cubic lattice, indicating that our model is indeed robust against variations in model parameters. Thus, though the considered spin lattice is not a perfect representation of the Cr$_2$O$_3${} spin lattice, these simplifications appear justified, as our minimal model already captures all features of DW mechanics observed in our experiments. Furthermore, the generality of this model means it should be applicable to any achiral, uniaxial antiferromagnet.
\section{Acknowledgements}
We thank O.~Gomonay and S.A.~D\'{i}az for fruitful discussions and M.~Fiebig and M.~Giraldo for optical characterisation of our Cr$_2$O$_3${} samples at an early stage of the experiment. We would further like to thank M.~Kasperczyk and P.~Amrein for their help with efficient implementations of the Metropolis-Hastings algorithm, A.~K\'{a}kay at the HZDR for providing us with computation time for micromagnetics, and D.~Broadway and L.~Thiel for valuable input on figures. We would also like to thank A.V.~Tomilo at the TSNUK for his help with the spin-lattice simulations as well as his very helpful insight.
We gratefully acknowledge financial support through the NCCR QSIT, a competence centre funded by the Swiss NSF, through the Swiss Nanoscience Institute, by the EU FET-OPEN Flagship Project ASTERIQS (grant $820394$), SNF Project Grant $188521$, German Research Foundation (projects MA $5144/22-1$ and MA $5144/24-1$) and Taras Shevchenko National University of Kyiv (Project No. $19$BF$052-01$). \bibliographystyle{apsrev4-1}
\subsection{Implementation Details}
\paragraph{Neural Network Architecture Details}
We use a two-layer feedforward neural network with 256 hidden units in each layer. Rectified linear units~(ReLU) follow each layer of both the actor and the critic, except the last layer. For the last layer of the actor network, a tanh function is used as the activation function to squash the action range into $[-1,1]$. \emph{GRAC} then multiplies the output of the tanh function by \emph{max action} to transform $[-1,1]$ into [$-$\emph{max action}, \emph{max action}]. The actor network outputs the mean and sigma of a Gaussian distribution.
\paragraph{CEM Implementation}
Our CEM implementation is based on the CEM algorithm described in Pourchot~\etal~\cite{pourchot2018cemrl}.
\setcounter{algorithm}{1}
\begin{algorithm}[H]
{Input: Q-function $Q(s,a)$; population size $N_{pop}$; elite size $N_{elite}$, where $N_{elite} \leq N_{pop}$; maximum number of CEM iterations $N_{cem}$.}\\
{Initialize the mean $\mu$ and covariance matrix $\Sigma$ from the actor network predictions.}
\begin{algorithmic}[1]
\For {$i=1,\ldots,N_{\text{cem}}$}
\State Draw the current population set $\{a_{pop} \}$ of size $N_{pop}$ from $\mathcal{N}(\mu, \Sigma)$.
\State Receive the $Q$ values $\{q_{pop}\} = \{Q(s,a) | a \in \{a_{pop}\} \}$.
\State Sort $\{q_{pop}\}$ in descending order.
\State Select the top $N_{elite}$ $Q$ values and choose their corresponding actions $a_{pop}$ as the elite set $\{a_{elite}\}$.
\State Calculate the mean $\mu$ and covariance matrix $\Sigma$ of the set $\{ a_{elite} \}$.
\EndFor
\end{algorithmic}
{Output: The top elite of the final iteration.}
\caption{CEM}
\label{alg:cem}
\end{algorithm}
\paragraph{Additional Detail on Algorithm 1: GRAC}
In Line 2 of Alg.~1, the actor has to select an action $a$ based on the state $s$. In the test stage, the actor directly uses the predicted mean as the output action $a$. In the training stage, the actor first samples an action $\hat{a}$ from the predicted Gaussian distribution $\pi_{\phi}(s)$; then \emph{GRAC} runs \emph{CEM} to find a second action $\tilde{a}=\emph{CEM}(Q(s,\cdot;\theta_2),\pi_{\phi}(\cdot | s))$. \emph{GRAC} uses $a=\argmax_{\{\tilde{a},\hat{a}\}}\{\min_{j=1,2} Q(s,\tilde{a};\theta_j), \min_{j=1,2} Q(s,\hat{a};\theta_j)\}$ as the final action; a Python sketch of the CEM routine is given below.
\subsection{Appendix on Experiments}
\subsubsection{Hyperparameters used}
Table~\ref{tab:hyper} and Table~\ref{tab:hyper_env} list the hyperparameters used in the experiments. $[a,b]$ denotes a linear schedule from $a$ to $b$ during the training process.
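As a concrete illustration of Algorithm~\ref{alg:cem}, a minimal NumPy sketch of the CEM loop follows; the names and the diagonal initial covariance are ours, chosen for illustration (the actual implementation follows \cite{pourchot2018cemrl}):
\begin{verbatim}
# Minimal NumPy sketch of the CEM loop (our naming). q_fn evaluates
# min_j Q(s, a; theta_j) for a batch of actions; mu and sigma come
# from the actor head for the current state s.
import numpy as np

def cem(q_fn, mu, sigma, n_iter=2, n_pop=25, n_elite=5, rng=None):
    rng = rng or np.random.default_rng()
    cov = np.diag(sigma**2)              # diagonal initial covariance
    for _ in range(n_iter):
        pop = rng.multivariate_normal(mu, cov, size=n_pop)
        q = q_fn(pop)                    # Q values of the population
        elite = pop[np.argsort(q)[::-1][:n_elite]]  # top-q elites
        mu = elite.mean(axis=0)
        cov = np.cov(elite, rowvar=False)
    return elite[0]                      # top elite, final iteration
\end{verbatim}
The default arguments match the values in Table~\ref{tab:hyper} ($N_{cem}=2$, $N_{pop}=25$, $N_{elite}=5$).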
\begin{table}[H]
\centering
\begin{tabular}{@{}p{0.3\linewidth}p{0.15\linewidth}}
\hline \hline
Parameters & Values \\
\hline \hline
discount $\gamma$ & 0.99\\
\hline
replay buffer size & 1e6\\
\hline
batch size & 256 \\
\hline
optimizer & Adam~\cite{kingma2014adam}\\
\hline
learning rate & 3e-4\\
\hline
$N_{\text{cem}}$ & 2\\
\hline
$N_{\text{pop}}$ & 25\\
\hline
$N_{\text{elite}}$ & 5\\
\hline\hline
\end{tabular}
\caption{Hyperparameter Table}
\label{tab:hyper}
\end{table}
\begin{table}[H]
\centering
\begin{tabular}{@{}p{0.17\linewidth}p{0.12\linewidth} p{0.11\linewidth} p{0.11\linewidth} p{0.15\linewidth} p{0.14\linewidth}}
\hline \hline
Environments & ActionDim & $K$ in Alg.1 & $\alpha$ in Alg.1 & CemLossWeight & RewardScale \\
\hline
Ant-v2 & 8 & 20 & [0.7,~85] & 1.0/ActionDim & 1.0\\
Hopper-v2 & 3 & 20 & [0.85,~0.95] & 0.3/ActionDim & 1.0\\
HalfCheetah-v2 & 6 & 50 & [0.7,~0.85] & 1.0/ActionDim &0.5\\
Humanoid-v2 & 17 & 20 & [0.7,~0.85] & 1.0/ActionDim & 1.0\\
Swimmer-v2 & 2 & 20 & [0.5,~0.75] & 1.0/ActionDim & 1.0\\
Walker2d-v2 & 6 & 20 & [0.8,~0.9] & 0.3/ActionDim & 1.0\\
\hline\hline
\end{tabular}
\caption{Environment Specific Parameters}
\label{tab:hyper_env}
\end{table}
\subsubsection{Additional Learning Curves}
\label{appendsec:experiment}
Figure 5 shows the learning curves for policy improvement with the evolution strategy.
\begin{figure}[htb!]
\centering
\begin{tabular}{ccc}
\begin{minipage}{.3\textwidth}
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_ant.pdf}
\centering
\par\small{(a)~Returns on Ant-v2}
\end{minipage} &
\begin{minipage}{.3\linewidth}
\centering
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_hopper.pdf}
\par\small{(b)~Returns on Hopper-v2}
\end{minipage} &
\begin{minipage}{.3\linewidth}
\centering
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_humanoid.pdf}
\par\small{(c)~Returns on Humanoid-v2}
\end{minipage}
\end{tabular}
\begin{tabular}{ccc}
\begin{minipage}{.3\linewidth}
\centering
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_halfcheetah.pdf}
\par\small{(d)~Returns on HalfCheetah-v2}
\end{minipage} &
\begin{minipage}{.3\linewidth}
\centering
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_swimmer.pdf}
\par\small{(e)~Returns on Swimmer-v2}
\end{minipage} &
\begin{minipage}{.3\linewidth}
\centering
\includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_walker2d.pdf}
\par\small{(f)~Returns on Walker2d-v2}
\end{minipage}\\
\end{tabular}
\caption{Learning curves for the OpenAI gym continuous control tasks. The \emph{GRAC} actor network uses a combination of two actor loss functions, denoted as QLoss and CEMLoss. \emph{QLoss Only} represents the actor network trained only with QLoss. \emph{CEM Loss Only} represents the actor network trained only with CEMLoss. In general, \emph{GRAC} achieves better performance than using either \emph{CEMLoss} or \emph{QLoss} alone.}
\end{figure}
Figure 6 shows the learning curves for the ablation study of Self-Regularized TD Learning.
\begin{figure}[htb!]
\centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_ant.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Ant-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_halfcheetah.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(c)~Returns on HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(d)~Average of $Q_1$ over training batch on HalfCheetah-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_humanoid.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(e)~Returns on Humanoid-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(f)~Average of $Q_1$ over training batch on Humanoid-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_walker2d.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(g)~Returns on Walker2d-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(h)~Average of $Q_1$ over training batch on Walker2d-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_swimmer.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(i)~Returns on Swimmer-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(j)~Average of $Q_1$ over training batch on Swimmer-v2} \end{minipage} \end{tabular} \caption{Learning curves and average $Q_1$ values ($y^{\prime}_1$ in Alg.~1 of the main paper). \emph{DDPG} w/o target network quickly diverges, as seen by the unrealistically high Q values. \emph{DDPG} is stable but often progresses more slowly. If we remove the target network and add the proposed target regularization, we both maintain stability and achieve faster or comparable learning.} \end{figure} \subsubsection{Hyperparameter Sensitivity for the Termination Condition of Critic Network Training}\label{appendsubsec:termination} We also run experiments to examine how sensitive \emph{GRAC} is to hyperparameters such as $K$ and $\alpha$ in Alg.~1. The critic networks are updated until the critic loss has decreased to $\alpha$ times its initial value, or for at most $K$ iterations, before proceeding to update the actor network. In practice, we decrease $\alpha$ over the course of training. Fig.~\ref{fig:results} shows five learning curves on Ant-v2 run with five different hyperparameter values. We find that a moderate value of $K=10$ is enough to stabilize the training process, and increasing $K$ further does not have a significant influence on training, as shown on the right of Fig.~\ref{fig:results}. $\alpha$ usually lies within the range $[0.7,0.9]$, and most tasks are not sensitive to minor changes.
However, on the task of Swimmer-v2, we find that $\alpha$ needs to be small enough ($<0.7$) to prevent divergence. In practice, without appropriate $K$ and $\alpha$ values, divergence usually happens within the first 50k training steps, so appropriate values for $K$ and $\alpha$ can be selected quickly. \begin{figure*}[ht!] \centering \begin{tabular}{cc} \begin{minipage}{.45\textwidth} \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_1.pdf} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.45\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_2.pdf} \par\small{(b)~Returns on Ant-v2} \end{minipage} \end{tabular} \caption{Learning curves for the OpenAI gym Ant-v2 environment. $K$ denotes the maximum number of critic iterations. $\alpha$ denotes the fraction of the initial loss value at which the iteration terminates. In practice, we decrease $\alpha$ over the course of training. $\text{alpha\_}a\text{\_}b$ denotes a schedule with initial value $a$ and final value $b$ for $\alpha$.}\label{fig:results} \end{figure*} \subsection{Theorems and Proofs} \label{appendsec:theorems} For the sake of clarity, we make the following technical assumption about the function approximation capacity of the neural networks that we use to approximate the action distribution. \textbf{State separation assumption:} The neural network chosen to approximate the policy family $\Pi$ is expressive enough to approximate the action distribution for each state $\pi(s,\cdot)$ separately. \setcounter{theorem}{0} \subsubsection{Theorem 1: \textbf{$Q$-loss Policy Improvement}} \label{appendsubsec:theorem1} \begin{theorem} Starting from the current policy $\pi$, we update the policy to maximize the objective $J_\pi = \E_{(s,a) \sim \rho_\pi(s,a)} Q^{\pi}(s,a)$. The maximization converges to a critical point denoted as $\pi_{new}$. Then the induced Q function, $Q^{\pi_{new}}$, satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 1] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately; for each state we are maximizing $\E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a)$. Therefore, we have $\forall s, \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. \begin{equation} \begin{array}{rl} Q^{\pi}(s,a) &= r(s,a) + \gamma \E_{s^{\prime}}V^{\pi}(s^{\prime}) \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} Q^{\pi}(s^{\prime}, a^{\prime}) \\ &= r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} [r(s^{\prime}, a^{\prime}) + \gamma \E_{s^{\prime \prime}} V^{\pi}(s^{\prime \prime})] \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} r(s^{\prime}, a^{\prime}) + \gamma^2 \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} \E_{s^{\prime \prime}} \E_{a^{\prime \prime} \sim \pi_{new}} Q^{\pi}(s^{\prime \prime}, a^{\prime \prime}) \\ &= \ldots \quad \mbox{(repeatedly unrolling the Q function)} \\ &\leq Q^{\pi_{new}}(s,a) \end{array} \end{equation} \end{proof} \subsubsection{Theorem 2: \textbf{\emph{CEM} Policy Improvement}} \label{appendsubsec:theorem2} \begin{theorem} We assume that the \emph{CEM} process is able to find the optimal action of the state-action value function, $a^*(s) = \argmax_{a}Q^{\pi}(s,a)$, where $Q^{\pi}$ is the Q function induced by the current policy $\pi$.
By iteratively applying the update $ \E_{(s,a) \sim \rho_\pi(s,a)} [Q(s,a^*)-Q(s,a)]_{+}\nabla \log\pi(a^*|s)$, the policy converges to $\pi_{new}$. Then $Q^{\pi_{new}}$ satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 2] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately. Then, for each state $s$, the policy $\pi_{new}$ will converge to a delta function at $a^*(s)$. Therefore we have $\forall s, \max_a Q^{\pi}(s,a) = \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. Then, following the derivation in Eq.~(1), we have $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a)$. \end{proof} \subsubsection{Theorem 3: \textbf{Max-Min Double Q-learning Convergence}} \label{appendsubsec:theorem3} \begin{theorem} We keep two tabular value estimates $Q_{1}$ and $Q_{2}$, and update via \begin{equation} \begin{array}{rl} Q_{t+1, 1}(s,a) &= Q_{t,1}(s,a) + \alpha_t(s,a) (y_t-Q_{t,1}(s,a))\\ Q_{t+1, 2}(s,a) &= Q_{t,2}(s,a) + \alpha_t(s,a) (y_t-Q_{t,2}(s,a)), \end{array} \end{equation} where $\alpha_t(s,a)$ is the learning rate and $y_t$ is the target: \begin{equation} \begin{array}{cl} y_t & = r_t(s_t, a_t) + \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{1, 2 \}} Q_{t,i}(s_{t+1}, a') \\ a^{\pi} & \sim \pi(s_{t+1})\\ a^* & = \argmax_{a'} Q_{t,2}(s_{t+1}, a') \\ \end{array} \end{equation} We assume that the MDP is finite and tabular, the variance of the rewards is bounded, and $\gamma \in [0,1]$. We assume each state-action pair is sampled an infinite number of times and both $Q_{1}$ and $Q_{2}$ receive an infinite number of updates. We further assume the learning rates satisfy $\alpha_t(s,a) \in [0,1]$, $\sum_t \alpha_t(s,a) = \infty$, $\sum_t [\alpha_t(s,a)]^2 < \infty$ with probability 1, and $\alpha_t(s,a)=0, \forall (s,a) \neq (s_t, a_t)$. Finally, we assume \emph{CEM} is able to find the optimal action $a^*(s) = \argmax_{a'}Q(s,a';\theta_2)$. Then Max-Min Double Q-learning will converge to the optimal value function $Q^*$ with probability $1$. \end{theorem} \begin{proof}[Proof of Theorem 3] This proof closely follows Appendix A of \cite{fujimoto2018addressing}. We first prove that $Q_2$ converges to the optimal Q value $Q^*$. Following the notation of \cite{fujimoto2018addressing}, we have \begin{equation*} \begin{array}{rl} F_t(s_t,a_t) \triangleq& y_t(s_t,a_t) - Q^*(s_t, a_t) \\ =& r_t + \gamma \max_{a^{\prime} \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a^{\prime}) - Q^*(s_t, a_t) \\ =& F_t^Q(s_t, a_t) + c_t \end{array} \end{equation*} where \begin{eqnarray*} F_t^Q(s_t, a_t) &=& r_t + \gamma Q_{t,2}(s_{t+1}, a^*) - Q^*(s_t, a_t) \\ &=& r_t + \gamma \max_{a^{\prime}} Q_{t,2}(s_{t+1}, a^{\prime}) -Q^*(s_t,a_t) \\ c_t &=& \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a') - \gamma Q_{t, 2}(s_{t+1}, a^*) \end{eqnarray*} $F^Q_t$ is associated with the optimal Bellman operator. It is well known that the optimal Bellman operator is a contraction. It remains to prove that $c_t$ converges to 0. Based on the update rules (Eq.~(A2)), it is easy to show that for any pair $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0. This implies that $\Delta_t(s, a^\pi) = Q_{t,1}(s,a^\pi) - Q_{t,2}(s,a^\pi)$ converges to 0 and $\Delta_t(s, a^*) = Q_{t, 1}(s,a^*) - Q_{t, 2}(s,a^*)$ converges to 0.
Therefore, $\min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a) - Q_{t, 2}(s,a) \leq 0$ and the left-hand side converges to zero, for $a \in \{a^\pi, a^*\}$. Since $Q_{t, 2}(s, a^*) \geq Q_{t, 2}(s, a^\pi)$, we have \[ \min_{i \in \{1, 2\}} Q_{t, i}(s, a^*) \leq \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a') \leq Q_{t, 2}(s,a^*) \] Therefore $c_t = \gamma \big( \max_{a' \in \{a^{\pi}, a^* \}} \min_{ i \in \{ 1, 2 \}} Q_{t, i}(s, a') - Q_{t, 2}(s,a^*) \big)$ converges to 0, which proves that $Q_{t, 2}$ converges to $Q^*$. Since for any pair $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0, $Q_{t, 1}$ also converges to $Q^*$. \end{proof} \subsection{Self-Regularized TD Learning}\label{sec:target} Reinforcement learning is prone to instability and divergence when a nonlinear function approximator such as a neural network is used to represent the Q function~\cite{tsitsiklis1997analysis}. Mnih~\etal~\cite{mnih2015human} identified several reasons for this. One is the correlation between the current action-values and the target value. Updates to $Q(s_t,a_t)$ often also increase $Q(s_{t+1},a^{*}_{t+1})$, where $a^{*}_{t+1}$ is the optimal next action. Hence, these updates also increase the target value $y_t$, which may lead to oscillation or divergence of the policy. More formally, given transitions $(s_t,a_t,r_t,s_{t+1})$ sampled from the replay buffer distribution $\mathcal{B}$, the Q network can be trained by minimizing the loss function $\mathcal{L}(\theta_i)$ at iteration $i$: \begin{equation} \mathcal{L}(\theta_i) = \E_{(s_t,a_t) \sim \mathcal{B}}\|Q(s_t,a_t; \theta_i) - y_i\|^2 \end{equation} where, for now, we take $y_i= r_t + \gamma\max_{a}Q(s_{t+1},a;\theta_i)$ to be the target for iteration $i$, computed from the current Q network parameters $\theta_i$, with ${a}^{*}_{t+1}=\argmax_{a}Q(s_{t+1},a;\theta_i)$. If we update the parameters $\theta_{i+1}$ to reduce the loss $\mathcal{L}(\theta_i)$, this changes both $Q(s_t,a_t;\theta_{i+1})$ and $Q(s_{t+1},a_{t+1}^{*};\theta_{i+1})$. Assuming an increase in both values, the new target value $y_{i+1} = r_t + \gamma Q(s_{t+1},a^*_{t+1}; \theta_{i+1})$ for the next iteration will also increase, which can lead to an explosion of the Q function. We demonstrate this behavior in an ablation experiment with results in Fig.~\ref{fig:abl1}. We also show how maintaining a separate target network~\cite{mnih2015human} with frozen parameters $\theta^-$ to compute $y_{i+1} = r_t + \gamma Q(s_{t+1},a^*_{t+1}; \theta^-)$ delays the update of the target and therefore leads to more stable learning of the Q function. However, delaying the target updates also comes at the price of slowing down the learning process. We propose a Self-Regularized TD-learning approach that minimizes the TD-error while also keeping the change of $Q(s_{t+1},a^*_{t+1})$ small. This regularization mitigates the divergence issue~\cite{tsitsiklis1997analysis} and no longer requires a target network that would otherwise slow down the learning process. Let $y^{\prime}_i = Q(s_{t+1},a_{t+1}^{*}; \theta_i)$ and $y_i=r_t + \gamma y^{\prime}_i$. We define the learning objective as \begin{equation} \min_{\theta} \|Q(s_t,a_t;\theta) - y_i\|^2 + \|Q(s_{t+1},a^{*}_{t+1};\theta) - y^\prime_i\|^2 \end{equation} where the first term is the original TD-Learning objective and the second term is the regularization term penalizing large updates to $Q(s_{t+1}, a^*_{t+1})$.
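The following PyTorch-style sketch illustrates this objective, together with the iteration cap $K$ and loss-ratio threshold $\alpha$ used as the critic termination condition (Appendix 2.4). It assumes a single critic for readability, whereas the full method maintains two critics and uses the max-min target of Section~\ref{sec:cem_c}; \texttt{q\_net}, \texttt{optimizer}, and the batch variables are placeholders rather than parts of the released implementation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def critic_update(q_net, optimizer, batch, gamma=0.99, K=20, alpha=0.8):
    # Self-regularized TD update (sketch): the targets y and y' are
    # computed once from the *current* parameters and then frozen --
    # no separate target network is maintained.
    s, a, r, s_next, a_star = batch
    with torch.no_grad():
        y_prime = q_net(s_next, a_star)      # y'_i
        y = r + gamma * y_prime              # y_i
    initial_loss = None
    for _ in range(K):
        loss = (F.mse_loss(q_net(s, a), y)                     # TD term
                + F.mse_loss(q_net(s_next, a_star), y_prime))  # regularizer
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if initial_loss is None:
            initial_loss = loss.item()
        elif loss.item() < alpha * initial_loss:
            break   # loss shrank to alpha times its initial value
    return loss.item()
\end{verbatim}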
Note that when the current Q network updates its parameters $\theta$, both $Q(s_t,a_t)$ and $Q(s_{t+1},a^{*}_{t+1})$ change. Hence, the target value $y_i$ also changes, which differs from the approach of keeping a frozen target network for a few iterations. We will demonstrate in our experiments that this self-regularized TD-Learning approach removes the delays in the update of the target value, thereby achieving faster and more stable learning. \subsection{Self-Guided Policy Improvement with Evolution Strategies}\label{sec:cem_a} The policy, known as the actor, is updated through a combination of two parts. The first part, which we call the Q-loss policy update, improves the policy through local gradients of the current Q function, while the second part, which we call the \emph{CEM} policy update, finds a high-value action via \emph{CEM} in a broader neighborhood of the Q function landscape and updates the action distribution to concentrate on this high-value action. We describe the two parts formally below. Given states $s_t$ sampled from the replay buffer, the Q-loss policy update maximizes the objective \begin{equation} J_{\pi}(\phi) = \E_{s_t \sim \mathcal{B},\hat{a}_t \sim \pi}[Q(s_t, \hat{a}_t)], \end{equation} where $\hat{a}_t$ is sampled from the current policy $\pi(\cdot | s_t)$. The gradient is taken through the reparameterization trick. We reparameterize the policy using a neural network transformation as described in Haarnoja~\etal~\cite{haarnoja2018soft}, \begin{equation} \hat{a}_t = f_{\phi}(\epsilon_t | s_t) \end{equation} where $\epsilon_t$ is an input noise vector sampled from a fixed distribution, such as a standard multivariate normal distribution. Then the gradient of $J_\pi(\phi)$ is: \begin{equation} \nabla J_{\pi}(\phi) = \E_{s_t \sim \mathcal{B}, \epsilon_t \sim \mathcal{N}}[\frac{\partial Q(s_t,f_{\phi}(\epsilon_t | s_t))} {\partial f} \frac{\partial f_\phi(\epsilon_t | s_t)}{\partial \phi}] \end{equation} For the CEM policy update, given a minibatch of states $s_t$, we first find a high-value action $\bar{a}_t$ for each state by running \emph{CEM} on the current Q function, $\bar{a}_t = \text{\emph{CEM}}(Q(s_t,\cdot), \pi(\cdot | s_t))$. Then the policy is updated to increase the probability of this high-value action. The guided update on the parameter $\phi$ of $\pi$ at iteration $i$ is \begin{equation} \E_{s_t \sim \mathcal{B}, \hat{a}_t \sim \pi} [Q(s_t,\bar{a}_t) - Q(s_t,\hat{a}_t)]_{+} \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t). \end{equation} We use $Q(s_t, \hat{a}_t)$ as a baseline term, since its expectation over actions $\hat{a}_t$ gives the usual baseline $V(s_t)$: \begin{equation} \E_{s_t\sim \mathcal{B}} [Q(s_t,\bar{a}_t)-V(s_t)] \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t) \end{equation} In our implementation, we only perform an update if the improvement on the $Q$ function, $Q(s_t,\bar{a}_t)-Q(s_t,\hat{a}_t)$, is non-negative, to guard against the occasional cases where \emph{CEM} fails to find a better action. Combining both parts, the final update rule on the parameter $\phi_i$ of policy $\pi_i$ is \[ \phi_{i+1} = \phi_{i} - \lambda \nabla_{\phi} J_{\pi_i}(\phi_{i} ) - \lambda \E_{s_t \sim \mathcal{B}, \hat{a}_t \sim \pi_i} [Q(s_t,\bar{a}_t) - Q(s_t,\hat{a}_t)]_{+} \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t) \] where $\lambda$ is the step size.
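A sketch of one combined actor update, under the same placeholder conventions as before, is given below; \texttt{cem\_fn} stands for running the \emph{CEM} search of Alg.~\ref{alg:cem} per state, and \texttt{cem\_weight} plays the role of CemLossWeight in Table~\ref{tab:hyper_env}.
\begin{verbatim}
def actor_update(actor, q1, cem_fn, optimizer, states, cem_weight=1.0):
    # One combined policy update: Q-loss + CEM-guided loss (sketch).
    mu, sigma = actor(states)
    eps = torch.randn_like(mu)
    a_hat = mu + sigma * eps               # reparameterized sample f_phi(eps|s)

    # Part 1: Q-loss, maximize E[Q(s, f_phi(eps|s))] via its negation.
    q_loss = -q1(states, a_hat).mean()

    # Part 2: CEM-guided loss; a_bar and the advantage weight are
    # treated as constants (no gradient flows through them).
    with torch.no_grad():
        a_bar = cem_fn(states)             # high-value actions from CEM
        adv = (q1(states, a_bar) - q1(states, a_hat)).clamp(min=0.0)
    log_prob = torch.distributions.Normal(mu, sigma).log_prob(a_bar).sum(-1)
    cem_loss = -(adv * log_prob).mean()    # [.]_+ weighted log-likelihood

    loss = q_loss + cem_weight * cem_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
\end{verbatim}
The \texttt{clamp(min=0.0)} implements the positive-part operator $[\cdot]_{+}$, so no update is applied when \emph{CEM} fails to find a better action.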
We can prove that if the Q function has converged to $Q^{\pi}$, the state-action value function induced by the current policy, then both the Q-loss policy update and the \emph{CEM} policy update are guaranteed to improve the current policy. We formalize this result in Theorem 1 and Theorem 2, and prove them in Appendix 3.1 and 3.2. \begin{theorem} \textbf{$Q$-loss Policy Improvement} Starting from the current policy $\pi$, we maximize the objective $J_\pi = \E_{(s,a) \sim \rho_\pi(s,a)} Q^{\pi}(s,a)$. The maximization converges to a critical point denoted as $\pi_{new}$. Then the induced Q function, $Q^{\pi_{new}}$, satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{theorem} \textbf{\emph{CEM} Policy Improvement} Assuming the \emph{CEM} process is able to find the optimal action of the state-action value function, $a^*(s) = \argmax_{a}Q^{\pi}(s,a)$, where $Q^{\pi}$ is the Q function induced by the current policy $\pi$. By iteratively applying the update $ \E_{(s,a) \sim \rho_\pi(s,a)} [Q(s,a^*)-Q(s,a)]_{+}\nabla \log\pi(a^*|s)$, the policy converges to $\pi_{new}$. Then $Q^{\pi_{new}}$ satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \subsection{Max-min Double Q-Learning}\label{sec:cem_c} Q-learning~\cite{Watkins:89} is known to suffer from overestimation~\cite{thrun1993issues}. van Hasselt~\cite{hasselt2010double} proposed Double Q-learning, which uses two Q functions with independent sets of weights to mitigate the overestimation problem. Fujimoto~\etal~\cite{fujimoto2018addressing} proposed Clipped Double Q-learning with two Q functions, denoted as $Q(s,a;\theta_1)$ and $Q(s,a;\theta_2)$, or $Q_1$ and $Q_2$ for short. Given a transition $(s_t,a_t,r_t,s_{t+1})$, Clipped Double Q-learning uses the minimum of the two Q function estimates when calculating the target value in the TD-error~\cite{sutton1998introduction}: \begin{equation} y= r_t + \gamma\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j) \label{eqn:clipped} \end{equation} where $\hat{a}_{t+1}$ is the predicted next action. Fujimoto~\etal~\cite{fujimoto2018addressing} noted that such an update rule may induce an underestimation bias. In addition, $\hat{a}_{t+1}=\pi_{\phi}(s_{t+1})$ is the prediction of the actor network, whose parameters ${\phi}$ are optimized according to the gradients of $Q_1$. In other words, $\hat{a}_{t+1}$ tends to be selected according to the $Q_1$ network, which consistently increases the discrepancy between the two Q functions. In practice, we observe that the discrepancy between the two Q function estimates, $|Q_1 - Q_2|$, can increase dramatically, leading to an unstable learning process. An example is shown in Fig.~\ref{fig:abl2}, where $Q(s_{t+1},\hat{a}_{t+1};\theta_1)$ is always larger than $Q(s_{t+1},\hat{a}_{t+1};\theta_2)$. We introduce \emph{Max-min Double Q-Learning} to reduce the discrepancy between the Q functions. We first select $\hat{a}_{t+1}$ according to the actor network $\pi_{\phi}(s_{t+1})$. Then we run \emph{CEM} to search the landscape of $Q_2$ within a broad neighborhood of $\hat{a}_{t+1}$ and return a second action $\tilde{a}_{t+1}$. Note that \emph{CEM} selects an action $\tilde{a}_{t+1}$ that maximizes $Q_2$, while the actor network selects an action $\hat{a}_{t+1}$ that maximizes $Q_1$. We gather four different Q-values: $Q(s_{t+1},\hat{a}_{t+1};\theta_1)$, $Q(s_{t+1},\hat{a}_{t+1};\theta_2)$, $Q(s_{t+1},\tilde{a}_{t+1};\theta_1)$, and $Q(s_{t+1},\tilde{a}_{t+1};\theta_2)$.
We then apply a max-min operation to compute a target value that cancels the biases induced by $\hat{a}_{t+1}$ and $\tilde{a}_{t+1}$. \begin{equation} \label{eq:maxmin} \begin{aligned} y & = r_t + \gamma \max\{\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j), \min_{j=1,2} Q(s_{t+1},\tilde{a}_{t+1};\theta_j)\} \end{aligned} \end{equation} The inner min operation $\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j)$ is adopted from Eq.~\ref{eqn:clipped} and mitigates overestimation~\cite{thrun1993issues}. The outer max operation helps to reduce the difference between $Q_1$ and $Q_2$. In addition, the max operation provides a better approximation of the Bellman optimality operator~\cite{sutton1998introduction}. We visualize $Q_1$ and $Q_2$ during the learning process in Fig.~\ref{fig:abl2}. The following theorem formalizes the convergence of the proposed Max-min Double Q-Learning approach in the finite MDP setting. We prove the theorem in Appendix 3.3. \begin{theorem} We keep two tabular value estimates $Q_{1}$ and $Q_{2}$, and update via \begin{equation} \begin{array}{rl} Q_{t+1, 1}(s,a) &= Q_{t,1}(s,a) + \alpha_t(s,a) (y_t-Q_{t,1}(s,a))\\ Q_{t+1, 2}(s,a) &= Q_{t,2}(s,a) + \alpha_t(s,a) (y_t-Q_{t,2}(s,a)), \end{array} \end{equation} where $\alpha_t(s,a)$ is the learning rate and $y_t$ is the target: \begin{equation} \begin{array}{cl} y_t & = r_t(s_t, a_t) + \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{1, 2 \}} Q_{t,i}(s_{t+1}, a') \\ a^{\pi} & \sim \pi(s_{t+1})\\ a^* & = \argmax_{a'} Q_{t,2}(s_{t+1}, a') \\ \end{array} \end{equation} We assume that the MDP is finite and tabular, the variance of the rewards is bounded, and $\gamma \in [0,1]$. We assume each state-action pair is sampled an infinite number of times and both $Q_{1}$ and $Q_{2}$ receive an infinite number of updates. We further assume the learning rates satisfy $\alpha_t(s,a) \in [0,1]$, $\sum_t \alpha_t(s,a) = \infty$, $\sum_t [\alpha_t(s,a)]^2 < \infty$ with probability 1, and $\alpha_t(s,a)=0, \forall (s,a) \neq (s_t, a_t)$. Finally, we assume \emph{CEM} is able to find the optimal action $a^*(s) = \argmax_{a'}Q(s,a';\theta_2)$. Then Max-Min Double Q-learning will converge to the optimal value function $Q^*$ with probability $1$. \end{theorem} \subsection{Comparative Evaluation} We present \emph{GRAC}, a self-guided and self-regularized actor-critic algorithm, as summarized in Algorithm~\ref{alg:alg1}. To evaluate {\em GRAC}, we measure its performance on the suite of MuJoCo continuous control tasks~\cite{todorov2012mujoco}, interfaced through OpenAI Gym~\cite{brockman2016openai}. We compare our method with \emph{DDPG}~\cite{lillicrap2015continuous}, \emph{TD3}~\cite{fujimoto2018addressing}, \emph{TRPO}~\cite{schulman2015trust}, and \emph{SAC}~\cite{haarnoja2018soft}. We use the source code released by the original authors and adopt the same hyperparameters reported in the original papers. Hyperparameters for all experiments are in Appendix 2.1. Results are shown in Figure \ref{fig:compare_results}. {\em GRAC} outperforms or performs comparably to all other algorithms in both final performance and learning speed across all tasks. \begin{figure*}[ht!]
\centering \includegraphics[width=.85\linewidth]{imgs/fig_1_all_returns_0.pdf} \begin{tabular}{ccc} \begin{minipage}{.2\textwidth} \centering \par\small{(a)~Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Hopper-v2} \end{minipage} & \begin{minipage}{.2\linewidth} \centering \par\small{(c)~Humanoid-v2} \end{minipage} \end{tabular} \includegraphics[width=.85\linewidth]{imgs/fig_1_all_returns_1.pdf} \begin{tabular}{ccc} \begin{minipage}{.2\linewidth} \centering \par\small{(d)~HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(e)~Swimmer-v2} \end{minipage} & \begin{minipage}{.2\linewidth} \centering \par\small{(f)~Walker2d-v2} \end{minipage}\\ \end{tabular} \caption{Learning curves for the OpenAI gym continuous control tasks. For each task, we train 8 instances of each algorithm, using 8 different seeds. Evaluations are performed every 5000 interactions with the environment. Each evaluation reports the return (total reward), averaged over 10 episodes. For each training seed, we use a different seed for evaluation, which results in different start states. The solid curves and shaded regions represent the mean and standard deviation, respectively, of the average return over 8 seeds. All curves are smoothed with window size 10 for visual clarity. \emph{GRAC} (orange) learns faster than other methods across all tasks. \emph{GRAC} achieves comparable results to the state-of-the-art methods on the Ant-v2 task and outperforms prior methods on the other five tasks, including the complex high-dimensional Humanoid-v2.}\label{fig:compare_results} \end{figure*} \subsection{Ablation Study} In this section, we present ablation studies to understand the contribution of each proposed component: Self-Regularized TD-Learning~(Section~\ref{sec:target}), Self-Guided Policy Improvement~(Section~\ref{sec:cem_a}), and Max-min Double Q-Learning~(Section~\ref{sec:cem_c}). We present our results in Fig.~\ref{fig:table2}, in which we compare the performance of {\em GRAC} with alternatives, each of which removes one component from {\em GRAC}. Additional learning curves can be found in Appendix 2.2. We also run experiments to examine how sensitive {\em GRAC} is to hyperparameters such as $\alpha$ and $K$ listed in Alg.~\ref{alg:alg1}; the results can be found in Appendix 2.4. \paragraph{Self-Regularized TD Learning} To verify the effectiveness of the proposed self-regularized TD-learning method, we apply our method to \emph{DDPG} (\emph{DDPG w/o target network w/ target regularization}). We compare against two baselines: the original \emph{DDPG} and \emph{DDPG} without target networks for both actor and critic (\emph{DDPG w/o target network}). We choose DDPG because it does not have additional components such as Double Q-Learning, which may complicate the analysis of this comparison. In Fig.~\ref{fig:abl1}, we visualize the average returns and the average $Q_1$ values over training batches ($y^{\prime}_1$ in Alg.~\ref{alg:alg1}). The $Q_1$ values of \emph{DDPG w/o target network} change dramatically, which leads to poor average returns. \emph{DDPG} maintains stable Q values but makes slow progress. Our proposed \emph{DDPG w/o target network w/ target regularization} maintains stable Q values and learns considerably faster. This demonstrates the effectiveness of our method and its potential to be applied to a wide range of DRL methods. Due to the page limit, we only include results on Hopper-v2 here; the results on the other tasks are in Appendix 2.3. All tasks exhibit a similar phenomenon.
\begin{figure}[hb!] \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_0.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Hopper-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Hopper-v2} \end{minipage} \end{tabular} \caption{Learning curves and average $Q_1$ values ($y^{\prime}_1$ in Alg.~\ref{alg:alg1}) on Hopper-v2. \emph{DDPG} w/o target network quickly diverges, as seen by the unrealistically high Q values. \emph{DDPG} is stable but progresses slowly. If we remove the target network and add the proposed target regularization, we both maintain stability and achieve faster learning than \emph{DDPG}.}\label{fig:abl1} \end{figure} \paragraph{Policy Improvement with Evolution Strategies} The GRAC actor network uses a combination of two actor loss functions, denoted as \emph{QLoss} and \emph{CEMLoss}. \emph{QLoss} refers to the unbiased gradient estimator that extends the \emph{DDPG}-style policy gradients~\cite{lillicrap2015continuous} to stochastic policies. \emph{CEMLoss} represents the policy improvement guided by the action found with the zeroth-order optimization method CEM. We run two additional ablation experiments on all six control tasks and compare them with our original policy training method, denoted as \emph{GRAC}. As seen in Fig.~\ref{fig:table2}, in general \emph{GRAC} achieves better performance than using either \emph{CEMLoss} or \emph{QLoss} alone. The significance of the improvements varies across the six control tasks. For example, \emph{CEMLoss} plays a dominant role in Swimmer, while \emph{QLoss} has a major effect in HalfCheetah. This suggests that \emph{CEMLoss} and \emph{QLoss} are complementary. \begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/fig_table.pdf} \caption{Final average returns, normalized w.r.t.\ {\em GRAC} for all tasks. For each task, we train each ablation setting with 4 seeds, and average the last 10 evaluations of each seed (40 evaluations in total). Actor updates without CEMLoss (\emph{GRAC w/o CEMLoss}) and actor updates w.r.t.\ the minimum of both Q networks (\emph{GRAC w/o CriticCEM w/ minQUpdate}) achieve slightly better performance on Walker2d-v2 and Hopper-v2. GRAC achieves the best performance on 4 out of 6 tasks, especially on more challenging tasks with higher-dimensional state and action spaces (Humanoid-v2, Ant-v2, HalfCheetah-v2). This suggests that the individual components of GRAC complement each other.} \label{fig:table2} \end{figure} \paragraph{Max-min Double Q-Learning} We additionally verify the effectiveness of the proposed Max-min Double Q-Learning method. We run an ablation experiment that replaces Max-min with Clipped Double Q-learning~\cite{fujimoto2018addressing}, denoted as \emph{GRAC w/o CriticCEM}. In Fig.~\ref{fig:abl2}, we visualize the learning curves of the average return, $Q_1$ ($y^{\prime}_1$ in Alg.~\ref{alg:alg1}), and $Q_1-Q_2$ ($y^{\prime}_1-y^{\prime}_2$ in Alg.~\ref{alg:alg1}). \emph{GRAC} achieves high performance while maintaining a smoothly increasing Q function. Note that the difference between the Q functions, $Q_1-Q_2$, remains around zero for \emph{GRAC}. \emph{GRAC w/o CriticCEM} shows high variance and drastic changes in the learned $Q_1$ value. In addition, $Q_1$ and $Q_2$ do not always agree. Such unstable Q values result in a performance crash during the learning process.
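For reference, the max-min target of Eq.~\ref{eq:maxmin} can be sketched as follows, again with placeholder names rather than the released implementation:
\begin{verbatim}
import torch

def max_min_target(q1, q2, r, s_next, a_hat, a_tilde, gamma=0.99):
    # a_hat:   next action from the actor (tends to favor Q1)
    # a_tilde: next action found by CEM on Q2's landscape
    with torch.no_grad():
        min_q_hat = torch.min(q1(s_next, a_hat), q2(s_next, a_hat))
        min_q_tilde = torch.min(q1(s_next, a_tilde), q2(s_next, a_tilde))
        # The inner min mitigates overestimation; the outer max keeps
        # Q1 and Q2 from drifting apart.
        return r + gamma * torch.max(min_q_hat, min_q_tilde)
\end{verbatim}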
Instead of Max-min Double Q-Learning, another way to address the gap between $Q_1$ and $Q_2$ is to perform actor updates on the minimum of the $Q_1$ and $Q_2$ networks (as in SAC). Replacing Max-min Double Q-Learning with this trick achieves lower performance than \emph{GRAC} on more challenging tasks such as HalfCheetah-v2, Ant-v2, and Humanoid-v2 (see \emph{GRAC w/o CriticCEM w/ minQUpdate} in Fig.~\ref{fig:table2}). \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{imgs/fig_3_abl_critic_cem_0.pdf} \begin{tabular}{ccc} \begin{minipage}{.3\textwidth} \begin{flushright} \par\small{(a)~Returns on Ant-v2} \end{flushright} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par \small{(b)~Average of $Q_1$ over training batch on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \begin{flushleft} \par\small{(c)~Average of $Q_1-Q_2$ over training batch on Ant-v2} \end{flushleft} \end{minipage} \end{tabular} \caption{Learning curves (left), average $Q_1$ values (middle), and the average difference between $Q_1$ and $Q_2$ (right) on Ant-v2. Average Q values are computed as the minibatch average of $y^{\prime}_1$ and $y^{\prime}_2$, defined in Alg.~\ref{alg:alg1}. \emph{GRAC w/o CriticCEM} replaces Max-min Double Q-Learning with Clipped Double Q-Learning. Without Max-min Double Q-Learning to balance the magnitudes of $Q_1$ and $Q_2$, $Q_1$ blows up significantly compared to $Q_2$, leading to divergence.}\label{fig:abl2} \end{figure}
\subsection{CEM Implementation} Our CEM implementation is based on the CEM algorithm described in Pourchot~\etal~\cite{pourchot2018cemrl}. \setcounter{algorithm}{1} \begin{algorithm}[H] \Require{Input: Q-function Q(s,a); size of population $N_{pop}$; size of elite $N_{elite}$ where $N_{elite} \leq N_{pop}$; max iteration of CEM $N_{cem}$.}\\ \Initialization{Initialize the mean $\mu$ and covariance matrix $\Sigma$ from actor network predictions.} \begin{algorithmic}[1] \For {$i=1...,N_{\text{cem}}$} \State Draw the current population set $\{a_{pop} \}$ of size $N_{pop}$ from $\mathcal{N}(\mu, \Sigma)$. \State Receive the $Q$ values $\{q_{pop}\} = \{Q(s,a) | a \in \{a_{pop}\} \}$. \State Sort $\{q_{pop}\}$ in descending order. \State Select top $N_{elite}$ Q values and choose their corresponding $a_{pop}$ as elite $\{a_{elite}\}$. \State Calculate the mean $\mu$ and covariance matrix $\Sigma$ of the set $\{ a_{elite} \}$. \EndFor \end{algorithmic} \Output{Output: The top one elite in the final iteration.} \caption{CEM} \label{alg:cem} \end{algorithm} \subsection{Additional Detail on Algorithm 1: GRAC} The actor network outputs the mean and sigma of a Gaussian distribution. In Line 2 of Alg.1, the actor has to select action $a$ based on the state $s$. In the test stage, the actor directly uses the predicted mean as output action $a$. In the training stage, the actor first samples an action $\hat{a}$ from the predicted Gaussian distribution $\pi_{\phi}(s)$, then \emph{GRAC} runs \emph{CEM} to find a second action $\tilde{a}=\emph{CEM}(Q(s,\cdot;\theta_2),\pi_{\phi}(\cdot | s)$). \emph{GRAC} uses $a=\argmax_{\{\tilde{a},\hat{a}\}}\{\min_{j=1,2} Q(s,\tilde{a};\theta_j), \min_{j=1,2} Q(s,\hat{a};\theta_j)\}$ as the final action. \section{Appendix on Experiments} \subsection{Hyperparameters used} Table~\ref{tab:hyper} and Table~\ref{tab:hyper_env} list the hyperparameters used in the experiments. $[a,b]$ denotes a linear schedule from $a$ to $b$ during the training process. 
\begin{table}[H] \centering \begin{tabular}{@{}p{0.3\linewidth}p{0.15\linewidth}} \hline \hline Parameters & Values \\ \hline \hline discount $\gamma$ & 0.99\\ \hline replay buffer size & 1e6\\ \hline batch size & 256 \\ \hline optimizer & Adam~\cite{kingma2014adam}\\ \hline learning rate in critic & 3e-4\\ \hline learning rate in actor & 2e-4\\ \hline $N_{\text{cem}}$ & 2\\ \hline $N_{\text{pop}}$ & 256\\ \hline $N_{\text{elite}}$ & 5\\ \hline\hline \end{tabular} \caption{Hyperparameter Table} \label{tab:hyper} \end{table} \begin{table}[H] \centering \begin{tabular}{@{}p{0.17\linewidth}p{0.12\linewidth} p{0.11\linewidth} p{0.11\linewidth} p{0.15\linewidth} p{0.14\linewidth}} \hline \hline Environments & ActionDim & $K$ in Alg.1 & $\alpha$ in Alg.1 & CemLossWeight & Reward Scale \\ \hline Ant-v2 & 8 & 20 & [0.7,~85] & 1.0/ActionDim & 1.0 \\ Hopper-v2 & 3 & 20 & [0.85,~0.95] & 0.3/ActionDim & 1.0\\ HalfCheetah-v2 & 6 & 50 & [0.7,~0.85] & 1.0/ActionDim & 0.5\\ Humanoid-v2 & 17 & 20 & [0.7,~0.85] & 1.0/ActionDim & 1.0\\ Swimmer-v2 & 2 & 20 & [0.5,~0.75] & 1.0/ActionDim & 1.0\\ Walker2d-v2 & 6 & 20 & [0.8,~0.9] & 0.3//ActionDim & 1.0\\ \hline\hline \end{tabular} \caption{Environment Specific Parameters} \label{tab:hyper_env} \end{table} \subsection{Additional Learning Curves for Policy Improvement with Evolution Strategy} \label{appendsec:experiment} \begin{figure}[H] \centering \begin{tabular}{ccc} \begin{minipage}{.3\textwidth} \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_ant.pdf} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_hopper.pdf} \par\small{(b)~Returns on Hopper-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_humanoid.pdf} \par\small{(c)~Returns on Humanoid-v2} \end{minipage} \end{tabular} \begin{tabular}{ccc} \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_halfcheetah.pdf} \par\small{(d)~Returns on HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_swimmer.pdf} \par\small{(e)~Returns on Swimmer-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_walker2d.pdf} \par\small{(f)~Returns on Walker2d-v2} \end{minipage}\\ \end{tabular} \caption{Learning curves for the OpenAI gym continuous control tasks. The \emph{GRAC} actor network uses a combination of two actor loss functions, denoted as QLoss and CEMLoss. \emph{QLoss Only} represents the actor network only trained with QLoss. \emph{CEM Loss Only} represents the actor network only trained with CEMLoss. 
In general \emph{GRAC} achieves a better performance compared to either using \emph{CEMLoss} or \emph{QLoss}.} \end{figure} \subsection{Additional Learning Curves for Ablation Study of Self-Regularized TD Learning} \begin{figure}[H] \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_ant.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Ant-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_halfcheetah.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on HalfCheetah-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_humanoid.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Humanoid-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Humanoid-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_walker2d.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Walker2d-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Walker2d-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_swimmer.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Swimmer-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Swimmer-v2} \end{minipage} \end{tabular} \caption{Learning curves and average $Q_1$ values ($y^{\prime}_1$ in Alg. 1 of the main paper). \emph{DDPG} w/o target network quickly diverges as seen by the unrealistically high Q values. \emph{DDPG} is stable but often progresses slower. If we remove the target network and add the proposed target regularization, we both maintain stability and achieve a faster or comparable learning rate.} \end{figure} \subsection{Hyperparameter Sensitivity for the Termination Condition of Critic Network Training}\label{appendsubsec:termination} We also run experiments to examine how sensitive \emph{GRAC} is to some hyperparameters such as $K$ and $\alpha$ listed in Alg.1. The critic networks will be updated until the critic loss has decreased to $\alpha$ times the original loss, or at most $K$ iterations, before proceeding to update the actor network. In practice, we decrease $\alpha$ in the training process. Fig. 3 shows five learning curves on Ant-v2 running with five different hyperparameter values. 
We find that a moderate value of $K=10$ is enough to stabilize the training process, and increasing $K$ further does not have significant influence on training, shown on the right of Fig. 3. $\alpha$ is usually within the range of $[0.7,0.9]$ and most tasks are not sensitive to minor changes. However on the task of Swimmer-v2, we find that $\alpha$ needs to be small enough ($<0.7$) to prevent divergence. In practice, without appropriate $K$ and $\alpha$ values, divergence usually happens within the first 50k training steps, thus it is quick to select appropriate values for $K$ and $\alpha$. \begin{figure*}[ht!] \centering \begin{tabular}{cc} \begin{minipage}{.45\textwidth} \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_1.pdf} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.45\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_2.pdf} \par\small{(b)~Returns on Ant-v2} \end{minipage} \end{tabular} \caption{Learning curves for the OpenAI gym Ant-v2 environment. $K$ denotes the maximum number of iterations. $\alpha$ denotes the remaining percent of loss value when terminating the iteration. In practice, we decrease $\alpha$ in the training process. $\text{alpha} a\text{\_}b$ denotes the initial value $a$ and final value $b$ for $\alpha$.}\label{fig:results} \end{figure*} \section{Theorems and Proofs} \label{appendsec:theorems} For the sake of clarity, we make the following technical assumption about the function approximation capacity of neural networks that we use to approximate the action distribution. \textbf{State separation assumption:} The neural network chosen to approximate the policy family $\Pi$ is expressive enough to approximate the action distribution for each state $\pi(s,\cdot)$ separately. \subsection{Theorem 1: \textbf{$Q$-loss Policy Improvement}} \label{appendsubsec:theorem1} \begin{theorem} Starting from the current policy $\pi$, we update the policy to maximize the objective $J_\pi = \E_{(s,a) \sim \rho_\pi(s,a)} Q^{\pi}(s,a)$. The maximization converges to a critical point denoted as $\pi_{new}$. Then the induced Q function, $Q^{\pi_{new}}$, satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 1] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately, for each state we are maximizing $\E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a)$. Therefore, we have $\forall s, \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. 
\begin{equation} \begin{array}{rl} Q^{\pi}(s,a) &= r(s,a) + \gamma \E_{s^{\prime}}V^{\pi}(s^{\prime}) \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} Q^{\pi}(s^{\prime}, a^{\prime}) \\ &= r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} [r(s^{\prime}, a^{\prime}) + \gamma \E_{s^{\prime \prime}} V^{\pi}(s^{\prime \prime})] \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} r(s^{\prime}, a^{\prime}) + \gamma^2 \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} \E_{s^{\prime \prime}} \E_{a^{\prime \prime} \sim \pi_{new}} Q^{\pi}(s^{\prime \prime}, a^{\prime \prime}) \\ &= \ldots \quad \mbox{(repeatedly unroll Q function )} \\ &\leq Q^{\pi_{new}}(s,a) \end{array} \end{equation} \end{proof} \subsection{Theorem 2: \textbf{\emph{CEM} Policy Improvement}} \label{appendsubsec:theorem2} \begin{theorem} We assume that the \emph{CEM} process is able to find the optimal action of the state-action value function, $a^*(s) = \argmax_{a}Q^{\pi}(s,a)$, where $Q^{\pi}$ is the Q function induced by the current policy $\pi$. By iteratively applying the update $ \E_{(s,a) \sim \rho_\pi(s,a)} [Q(s,a^*)-Q(s,a)]_{+}\nabla \log\pi(a^*|s)$, the policy converges to $\pi_{new}$. Then $Q^{\pi_{new}}$ satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 2] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately. Then, for each state $s$, the policy $\pi_{new}$ will converge to a delta function at $a^*(s)$. Therefore we have $\forall s, \max_a Q^{\pi}(s,a) = \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. Then, following Eq. (1) we have $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a)$ \end{proof} \subsection{Theorem 3: \textbf{Max-Min Double Q-learning Convergence}} \label{appendsubsec:theorem3} \begin{theorem} We keep two tabular value estimates $Q_{1}$ and $Q_{2}$, and update via \begin{equation} \begin{array}{rl} Q_{t+1, 1}(s,a) &= Q_{t,1}(s,a) + \alpha_t(s,a) (y_t-Q_{t,1}(s,a))\\ Q_{t+1, 2}(s,a) &= Q_{t,2}(s,a) + \alpha_t(s,a) (y_t-Q_{t,2}(s,a)), \end{array} \end{equation} where $\alpha_t(s,a)$ is the learning rate and $y_t$ is the target: \begin{equation} \begin{array}{cl} y_t & = r_t(s_t, a_t) + \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{1, 2 \}} Q_{t,i}(s_{t+1}, a') \\ a^{\pi} & \sim \pi(s_{t+1})\\ a^* & = argmax_{a'} Q_{t,2}(s_{t+1}, a') \\ \end{array} \end{equation} We assume that the MDP is finite and tabular and the variance of rewards are bounded, and $\gamma \in [0,1]$. We assume each state action pair is sampled an infinite number of times and both $Q_{1}$ and $Q_{2}$ receive an infinite number of updates. We further assume the learning rates satisfy $\alpha_t(s,a) \in [0,1]$, $\sum_t \alpha_t(s,a) = \infty$, $\sum_t [\alpha_t(s,a)]^2 < \infty$ with probability 1 and $\alpha_t(s,a)=0, \forall (s,a) \neq (s_t, a_t)$. Finally we assume \emph{CEM} is able to find the optimal action $a^*(s) = \argmax_{a'}Q(s,a';\theta_2)$. Then Max-Min Double Q-learning will converge to the optimal value function $Q^*$ with probability $1$. \end{theorem} \begin{proof}[Proof of Theorem 3] This proof will closely follow Appendix A of \cite{fujimoto2018addressing}. We will first prove that $Q_2$ converges to the optimal Q value $Q^*$. 
Following notations of \cite{fujimoto2018addressing}, we have \begin{equation*} \begin{array}{rl} F_t(s_t,a_t) \triangleq& y_t(s_t,a_t) - Q^*(s_t, a_t) \\ =& r_t + \gamma \max_{a^{\prime} \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a^{\prime}) - Q^*(s_t, a_t) \\ =& F_t^Q(s_t, a_t) + c_t \end{array} \end{equation*} Where \begin{eqnarray*} F_t^Q(s_t, a_t) &=& r_t + \gamma Q_{t,2}(s_{t+1}, a^*) - Q^*(s_t, a_t) \\ &=& r_t + \gamma \max_{a^{\prime}} Q_{t,2}(s_{t+1}, a^{\prime}) -Q^*(s_t,a_t) \\ c_t &=& \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a') - \gamma Q_{t, 2}(s_{t+1}, a^*) \end{eqnarray*} $F^Q_t$ is associated with the optimum Bellman operator. It is well known that the optimum Bellman operator is a contractor, We need to prove $c_t$ converges to 0. Based on the update rules (Eq. (A2)), it is easy to prove that for any tuple $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0. This implies that $\Delta_t(s, a^\pi) = Q_{t,1}(s,a^\pi) - Q_{t,2}(s,a^\pi)$ converges to 0 and $\Delta_t(s, a^*) = Q_{t, 1}(s,a^*) - Q_{t, 2}(s,a^*)$ converges to 0. Therefore, $\min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a) - Q_{t, 2}(s,a) \leq 0$ and the left hand side converges to zero, for $a \in {a^\pi, a^*}$. Since we have $Q_{t, 2}(s, a^*) >= Q_{t, 2}(s, a^\pi)$, then \[ \min_{i \in \{1, 2\}} Q_{t, i}(s, a^*) \leq \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a') \leq Q_{t, 2}(s,a^*) \] Therefore $c_t = \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{ i \in \{ 1, 2 \}} Q_{t, i}(s, a') - Q_{t, 2}(s,a^*)$ converges to 0. And we proved $Q_{t, 2}$ converges to $Q^*$. Since for any tuple $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0, $Q_{t, 1}$ also converges to $Q^*$. \end{proof} \section{Introduction} \input{tex/intro.tex} \section{Related Work} \input{tex/related.tex} \section{Preliminaries} \input{tex/pre.tex} \section{Technical Approach} \input{tex/tech.tex} \section{Experiments} \input{tex/exp.tex} \section{Conclusion} \input{tex/con.tex} \section{Broader Impact} \input{tex/impact.tex} \section{Introduction} \input{tex/intro.tex} \section{Related Work} \input{tex/related.tex} \section{Preliminaries} \input{tex/pre.tex} \section{Technical Approach} \input{tex/tech.tex} \section{Experiments} \input{tex/exp.tex} \section{Conclusion} \input{tex/con.tex} \section{Broader Impact} \input{tex/impact.tex} \subsection{Implementation Details} \paragraph{Neural Network Architecture Details} We use a two layer feedforward neural network of 256 and 256 hidden nodes respectively. Rectified linear units~(ReLU) are put after each layer for both the actor and critic except the last layer. For the last layer of the actor network, a tanh function is used as the activation function to squash the action range within $[-1,1]$. \emph{GRAC} then multiplies the output of the tanh function by \emph{max action} to transform [-1,1] into [-\emph{max action}, \emph{max action}]. The actor network outputs the mean and sigma of a Gaussian distribution. \paragraph{CEM Implementation} Our CEM implementation is based on the CEM algorithm described in Pourchot~\etal~\cite{pourchot2018cemrl}. 
\setcounter{algorithm}{1} \begin{algorithm}[H] {Input: Q-function Q(s,a); size of population $N_{pop}$; size of elite $N_{elite}$ where $N_{elite} \leq N_{pop}$; max iteration of CEM $N_{cem}$.}\\ {Initialize the mean $\mu$ and covariance matrix $\Sigma$ from actor network predictions.} \begin{algorithmic}[1] \For {$i=1...,N_{\text{cem}}$} \State Draw the current population set $\{a_{pop} \}$ of size $N_{pop}$ from $\mathcal{N}(\mu, \Sigma)$. \State Receive the $Q$ values $\{q_{pop}\} = \{Q(s,a) | a \in \{a_{pop}\} \}$. \State Sort $\{q_{pop}\}$ in descending order. \State Select top $N_{elite}$ Q values and choose their corresponding $a_{pop}$ as elite $\{a_{elite}\}$. \State Calculate the mean $\mu$ and covariance matrix $\Sigma$ of the set $\{ a_{elite} \}$. \EndFor \end{algorithmic} {Output: The top one elite in the final iteration.} \caption{CEM} \label{alg:cem} \end{algorithm} \paragraph{Additional Detail on Algorithm 1: GRAC} The actor network outputs the mean and sigma of a Gaussian distribution. In Line 2 of Alg.1, the actor has to select action $a$ based on the state $s$. In the test stage, the actor directly uses the predicted mean as output action $a$. In the training stage, the actor first samples an action $\hat{a}$ from the predicted Gaussian distribution $\pi_{\phi}(s)$, then \emph{GRAC} runs \emph{CEM} to find a second action $\tilde{a}=\emph{CEM}(Q(s,\cdot;\theta_2),\pi_{\phi}(\cdot | s)$). \emph{GRAC} uses $a=\argmax_{\{\tilde{a},\hat{a}\}}\{\min_{j=1,2} Q(s,\tilde{a};\theta_j), \min_{j=1,2} Q(s,\hat{a};\theta_j)\}$ as the final action. \subsection{Appendix on Experiments} \subsubsection{Hyperparameters used} Table~\ref{tab:hyper} and Table~\ref{tab:hyper_env} list the hyperparameters used in the experiments. $[a,b]$ denotes a linear schedule from $a$ to $b$ during the training process. \begin{table}[H] \centering \begin{tabular}{@{}p{0.3\linewidth}p{0.15\linewidth}} \hline \hline Parameters & Values \\ \hline \hline discount $\gamma$ & 0.99\\ \hline replay buffer size & 1e6\\ \hline batch size & 256 \\ \hline optimizer & Adam~\cite{kingma2014adam}\\ \hline learning rate & 3e-4\\ \hline $N_{\text{cem}}$ & 2\\ \hline $N_{\text{pop}}$ & 25\\ \hline $N_{\text{elite}}$ & 5\\ \hline\hline \end{tabular} \caption{Hyperparameter Table} \label{tab:hyper} \end{table} \begin{table}[H] \centering \begin{tabular}{@{}p{0.17\linewidth}p{0.12\linewidth} p{0.11\linewidth} p{0.11\linewidth} p{0.15\linewidth} p{0.14\linewidth}} \hline \hline Environments & ActionDim & $K$ in Alg.1 & $\alpha$ in Alg.1 & CemLossWeight & RewardScale \\ \hline Ant-v2 & 8 & 20 & [0.7,~85] & 1.0/ActionDim & 1.0\\ Hopper-v2 & 3 & 20 & [0.85,~0.95] & 0.3/ActionDim & 1.0\\ HalfCheetah-v2 & 6 & 50 & [0.7,~0.85] & 1.0/ActionDim &0.5\\ Humanoid-v2 & 17 & 20 & [0.7,~0.85] & 1.0/ActionDim & 1.0\\ Swimmer-v2 & 2 & 20 & [0.5,~0.75] & 1.0/ActionDim & 1.0\\ Walker2d-v2 & 6 & 20 & [0.8,~0.9] & 0.3//ActionDim & 1.0\\ \hline\hline \end{tabular} \caption{Environment Specific Parameters} \label{tab:hyper_env} \end{table} \subsubsection{Additional Learning Curves} \label{appendsec:experiment} Figure 5 shows the learning curves for policy improvement with evolution strategy. \begin{figure}[htb!] 
\centering \begin{tabular}{ccc} \begin{minipage}{.3\textwidth} \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_ant.pdf} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_hopper.pdf} \par\small{(b)~Returns on Hopper-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_humanoid.pdf} \par\small{(c)~Returns on Humanoid-v2} \end{minipage} \end{tabular} \begin{tabular}{ccc} \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_halfcheetah.pdf} \par\small{(d)~Returns on HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_swimmer.pdf} \par\small{(e)~Returns on Swimmer-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_abl_actor_loss_return_walker2d.pdf} \par\small{(f)~Returns on Walker2d-v2} \end{minipage}\\ \end{tabular} \caption{Learning curves for the OpenAI gym continuous control tasks. The \emph{GRAC} actor network uses a combination of two actor loss functions, denoted as QLoss and CEMLoss. \emph{QLoss Only} represents the actor network only trained with QLoss. \emph{CEM Loss Only} represents the actor network only trained with CEMLoss. In general \emph{GRAC} achieves a better performance compared to either using \emph{CEMLoss} or \emph{QLoss}.} \end{figure} Figure 6 shows the learning curves for ablation Study of Self-Regularized TD Learning. \begin{figure}[htb!] \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_ant.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Ant-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_halfcheetah.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on HalfCheetah-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_humanoid.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Humanoid-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Humanoid-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_walker2d.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Walker2d-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Walker2d-v2} \end{minipage} \end{tabular} \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering 
\subsubsection{Hyperparameter Sensitivity for the Termination Condition of Critic Network Training}\label{appendsubsec:termination} We also run experiments to examine how sensitive \emph{GRAC} is to the hyperparameters $K$ and $\alpha$ listed in Alg.1. The critic networks are updated until the critic loss has decreased to $\alpha$ times the original loss, or for at most $K$ iterations, before proceeding to update the actor network. In practice, we decrease $\alpha$ during the training process. Fig.~\ref{fig:results} shows five learning curves on Ant-v2 obtained with five different hyperparameter values. We find that a moderate value of $K=10$ is enough to stabilize the training process, and that increasing $K$ further does not have a significant influence on training, as shown on the right of Fig.~\ref{fig:results}. $\alpha$ usually lies within the range $[0.7,0.9]$, and most tasks are not sensitive to minor changes. However, on the task of Swimmer-v2, we find that $\alpha$ needs to be small enough ($<0.7$) to prevent divergence. In practice, without appropriate $K$ and $\alpha$ values, divergence usually happens within the first 50k training steps; thus, it is quick to select appropriate values for $K$ and $\alpha$.
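As a concrete reading of this termination rule, the following PyTorch-style sketch stops the critic updates once the loss drops below $\alpha$ times its initial value or after $K$ gradient steps; the function and variable names are our own, and the released implementation may differ in detail.
\begin{verbatim}
# PyTorch-style sketch (ours) of the critic termination rule: update the
# critic until the loss drops below alpha times its initial value, or
# for at most K gradient steps. critic_loss_fn recomputes the loss on
# the current minibatch.
def train_critic(critic_loss_fn, optimizer, K, alpha):
    initial_loss = None
    for _ in range(K):
        loss = critic_loss_fn()
        if initial_loss is None:
            initial_loss = loss.item()
        elif loss.item() <= alpha * initial_loss:
            break  # loss decreased enough; move on to the actor update
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
\end{verbatim}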
\begin{figure*}[ht!] \centering \begin{tabular}{cc} \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_1.pdf} \par\small{(a)~Returns on Ant-v2} \end{minipage} & \begin{minipage}{.45\linewidth} \centering \includegraphics[width=.85\linewidth]{imgs/fig_4_hyperparameter_2.pdf} \par\small{(b)~Returns on Ant-v2} \end{minipage} \end{tabular} \caption{Learning curves for the OpenAI gym Ant-v2 environment. $K$ denotes the maximum number of critic iterations. $\alpha$ denotes the fraction of the initial loss value at which the iteration terminates. In practice, we decrease $\alpha$ during the training process. In the legend, $\text{alpha}\_a\_b$ denotes a schedule with initial value $a$ and final value $b$ for $\alpha$.}\label{fig:results} \end{figure*} \subsection{Theorems and Proofs} \label{appendsec:theorems} For the sake of clarity, we make the following technical assumption about the function approximation capacity of the neural networks that we use to approximate the action distribution. \textbf{State separation assumption:} The neural network chosen to approximate the policy family $\Pi$ is expressive enough to approximate the action distribution for each state $\pi(s,\cdot)$ separately. \setcounter{theorem}{0} \subsubsection{Theorem 1: \textbf{$Q$-loss Policy Improvement}} \label{appendsubsec:theorem1} \begin{theorem} Starting from the current policy $\pi$, we update the policy to maximize the objective $J_\pi = \E_{(s,a) \sim \rho_\pi(s,a)} Q^{\pi}(s,a)$. The maximization converges to a critical point denoted as $\pi_{new}$. Then the induced Q function, $Q^{\pi_{new}}$, satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 1] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately; for each state, we are maximizing $\E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a)$. Therefore, we have $\forall s, \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. \begin{equation} \begin{array}{rl} Q^{\pi}(s,a) &= r(s,a) + \gamma \E_{s^{\prime}}V^{\pi}(s^{\prime}) \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} Q^{\pi}(s^{\prime}, a^{\prime}) \\ &= r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} [r(s^{\prime}, a^{\prime}) + \gamma \E_{s^{\prime \prime}} V^{\pi}(s^{\prime \prime})] \\ &\leq r(s,a) + \gamma \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} r(s^{\prime}, a^{\prime}) + \gamma^2 \E_{s^{\prime}} \E_{a^{\prime} \sim \pi_{new}} \E_{s^{\prime \prime}} \E_{a^{\prime \prime} \sim \pi_{new}} Q^{\pi}(s^{\prime \prime}, a^{\prime \prime}) \\ &= \ldots \quad \mbox{(repeatedly unrolling the Q function)} \\ &\leq Q^{\pi_{new}}(s,a) \end{array} \end{equation} \end{proof} \subsubsection{Theorem 2: \textbf{\emph{CEM} Policy Improvement}} \label{appendsubsec:theorem2} \begin{theorem} We assume that the \emph{CEM} process is able to find the optimal action of the state-action value function, $a^*(s) = \argmax_{a}Q^{\pi}(s,a)$, where $Q^{\pi}$ is the Q function induced by the current policy $\pi$. By iteratively applying the update $\E_{(s,a) \sim \rho_\pi(s,a)} [Q(s,a^*)-Q(s,a)]_{+}\nabla \log\pi(a^*|s)$, the policy converges to $\pi_{new}$. Then $Q^{\pi_{new}}$ satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{proof}[Proof of Theorem 2] Under the state separation assumption, the action distribution for each state, $\pi(s, \cdot)$, can be updated separately. Then, for each state $s$, the policy $\pi_{new}$ will converge to a delta function at $a^*(s)$. Therefore, we have $\forall s, \max_a Q^{\pi}(s,a) = \E_{a \sim \pi_{new}(s,\cdot)} Q^{\pi}(s,a) \geq \E_{a \sim \pi(s,\cdot)} Q^{\pi}(s,a) = V^{\pi}(s)$. Then, following Eq.~(1), we have $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a)$. \end{proof} \subsubsection{Theorem 3: \textbf{Max-Min Double Q-learning Convergence}} \label{appendsubsec:theorem3} \begin{theorem} We keep two tabular value estimates $Q_{1}$ and $Q_{2}$, updated via \begin{equation} \begin{array}{rl} Q_{t+1, 1}(s,a) &= Q_{t,1}(s,a) + \alpha_t(s,a) (y_t-Q_{t,1}(s,a))\\ Q_{t+1, 2}(s,a) &= Q_{t,2}(s,a) + \alpha_t(s,a) (y_t-Q_{t,2}(s,a)), \end{array} \end{equation} where $\alpha_t(s,a)$ is the learning rate and $y_t$ is the target: \begin{equation} \begin{array}{cl} y_t & = r_t(s_t, a_t) + \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{1, 2 \}} Q_{t,i}(s_{t+1}, a') \\ a^{\pi} & \sim \pi(s_{t+1})\\ a^* & = \argmax_{a'} Q_{t,2}(s_{t+1}, a') \\ \end{array} \end{equation} We assume that the MDP is finite and tabular, that the variance of the rewards is bounded, and that $\gamma \in [0,1]$. We assume each state-action pair is sampled an infinite number of times and both $Q_{1}$ and $Q_{2}$ receive an infinite number of updates. We further assume the learning rates satisfy $\alpha_t(s,a) \in [0,1]$, $\sum_t \alpha_t(s,a) = \infty$, $\sum_t [\alpha_t(s,a)]^2 < \infty$ with probability 1 and $\alpha_t(s,a)=0, \forall (s,a) \neq (s_t, a_t)$.
Finally, we assume that \emph{CEM} is able to find the optimal action $a^*(s) = \argmax_{a'}Q(s,a';\theta_2)$. Then Max-Min Double Q-learning converges to the optimal value function $Q^*$ with probability $1$. \end{theorem} \begin{proof}[Proof of Theorem 3] This proof closely follows Appendix A of \cite{fujimoto2018addressing}. We first prove that $Q_2$ converges to the optimal Q value $Q^*$. Following the notation of \cite{fujimoto2018addressing}, we have \begin{equation*} \begin{array}{rl} F_t(s_t,a_t) \triangleq& y_t(s_t,a_t) - Q^*(s_t, a_t) \\ =& r_t + \gamma \max_{a^{\prime} \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a^{\prime}) - Q^*(s_t, a_t) \\ =& F_t^Q(s_t, a_t) + c_t \end{array} \end{equation*} where \begin{eqnarray*} F_t^Q(s_t, a_t) &=& r_t + \gamma Q_{t,2}(s_{t+1}, a^*) - Q^*(s_t, a_t) \\ &=& r_t + \gamma \max_{a^{\prime}} Q_{t,2}(s_{t+1}, a^{\prime}) -Q^*(s_t,a_t) \\ c_t &=& \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s_{t+1}, a') - \gamma Q_{t, 2}(s_{t+1}, a^*) \end{eqnarray*} $F^Q_t$ is associated with the optimum Bellman operator. It is well known that the optimum Bellman operator is a contraction. It remains to prove that $c_t$ converges to 0. Based on the update rules (Eq. (A2)), it is easy to prove that for any tuple $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0. This implies that $\Delta_t(s, a^\pi) = Q_{t,1}(s,a^\pi) - Q_{t,2}(s,a^\pi)$ converges to 0 and $\Delta_t(s, a^*) = Q_{t, 1}(s,a^*) - Q_{t, 2}(s,a^*)$ converges to 0. Therefore, $\min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a) - Q_{t, 2}(s,a) \leq 0$ and the left-hand side converges to zero, for $a \in \{a^\pi, a^*\}$. Since $Q_{t, 2}(s, a^*) \geq Q_{t, 2}(s, a^\pi)$, we have \[ \min_{i \in \{1, 2\}} Q_{t, i}(s, a^*) \leq \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{ 1, 2 \}} Q_{t, i}(s, a') \leq Q_{t, 2}(s,a^*) \] Therefore, $c_t = \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{ i \in \{ 1, 2 \}} Q_{t, i}(s, a') - \gamma Q_{t, 2}(s,a^*)$ converges to 0, which proves that $Q_{t, 2}$ converges to $Q^*$. Since for any tuple $(s, a)$, $\Delta_t(s, a) = Q_{t, 1}(s,a) - Q_{t, 2}(s,a)$ converges to 0, $Q_{t, 1}$ also converges to $Q^*$. \end{proof} \subsection{Self-Regularized TD Learning}\label{sec:target} Reinforcement learning is prone to instability and divergence when a nonlinear function approximator such as a neural network is used to represent the Q function~\cite{tsitsiklis1997analysis}. Mnih~\etal\cite{mnih2015human} identified several reasons for this. One is the correlation between the current action-values and the target value: updates to $Q(s_t,a_t)$ often also increase $Q(s_{t+1},a^{*}_{t+1})$, where $a^{*}_{t+1}$ is the optimal next action. Hence, these updates also increase the target value $y_t$, which may lead to oscillations or the divergence of the policy. More formally, given transitions $(s_t,a_t,r_t,s_{t+1})$ sampled from the replay buffer distribution $\mathcal{B}$, the Q network can be trained by minimising the loss function $\mathcal{L}(\theta_i)$ at iteration $i$: \begin{equation} \mathcal{L}(\theta_i) = \E_{(s_t,a_t) \sim \mathcal{B}}\|(Q(s_t,a_t; \theta_i) - y_i)\|^2 \end{equation} where for now let us assume $y_i= r_t + \gamma\max_{a}Q(s_{t+1},a;\theta_i)$ to be the target for iteration $i$, computed based on the current Q network parameters $\theta_i$, and ${a}^{*}_{t+1}=\argmax_{a}Q(s_{t+1},a)$ to be the optimal next action under the current Q network.
If we update the parameters to $\theta_{i+1}$ to reduce the loss $\mathcal{L}(\theta_i)$, this changes both $Q(s_t,a_t;\theta_{i+1})$ and $Q(s_{t+1},a_{t+1}^{*};\theta_{i+1})$. Assuming an increase in both values, the new target value $y_{i+1} = r_t + \gamma Q(s_{t+1},a^*_{t+1}; \theta_{i+1})$ for the next iteration will also increase, leading to an explosion of the Q function. We demonstrate this behavior in an ablation experiment with results in Fig.~\ref{fig:abl1}. We also show how maintaining a separate target network~\cite{mnih2015human} with frozen parameters $\theta^-$ to compute $y_{i+1} = r_t + \gamma Q(s_{t+1},a^*_{t+1}; \theta^-)$ delays the update of the target and therefore leads to more stable learning of the Q function. However, delaying the function updates also comes at the price of slowing down the learning process. We propose a Self-Regularized TD-learning approach to minimize the TD-error while also keeping the change of $Q(s_{t+1},a^*_{t+1})$ small. This regularization mitigates the divergence issue~\cite{tsitsiklis1997analysis} and no longer requires a target network that would otherwise slow down the learning process. Let $y^{\prime}_i = Q(s_{t+1},a_{t+1}^{*}; \theta_i)$ and $y_i=r_t + \gamma y^{\prime}_i$. We define the learning objective as \begin{equation} \min_{\theta} \|Q(s_t,a_t;\theta) - y_i\|^2 + \|Q(s_{t+1},a^{*}_{t+1};\theta) - y^\prime_i\|^2 \end{equation} where the first term is the original TD-learning objective and the second term is the regularization term penalizing large updates to $Q(s_{t+1}, a^*_{t+1})$. Note that when the current Q network updates its parameters $\theta$, both $Q(s_t,a_t)$ and $Q(s_{t+1},a^{*}_{t+1})$ change. Hence, the target value $y_i$ will also change, which is different from the approach of keeping a frozen target network for a few iterations. We demonstrate in our experiments that this self-regularized TD-learning approach removes the delay in the update of the target value, thereby achieving faster and more stable learning.
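A minimal PyTorch-style sketch of the objective above is given below; the tensor and function names are our own, and, as in the equations, the targets $y_i$ and $y^{\prime}_i$ are evaluated with the current parameters but detached from the gradient.
\begin{verbatim}
# Minimal PyTorch-style sketch (ours) of the self-regularized TD
# objective above. q_net(s, a) returns Q(s, a; theta); the targets are
# computed with the current parameters but receive no gradient.
import torch

def self_regularized_td_loss(q_net, s, a, r, s_next, a_next_star, gamma):
    with torch.no_grad():
        y_prime = q_net(s_next, a_next_star)   # y'_i = Q(s', a*; theta_i)
        y = r + gamma * y_prime                # y_i  = r + gamma * y'_i
    td_term = (q_net(s, a) - y).pow(2).mean()
    reg_term = (q_net(s_next, a_next_star) - y_prime).pow(2).mean()
    return td_term + reg_term
\end{verbatim}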
\subsection{Self-Guided Policy Improvement with Evolution Strategies}\label{sec:cem_a} The policy, known as the actor, is updated through a combination of two parts. The first part, which we call the Q-loss policy update, improves the policy through local gradients of the current Q function, while the second part, which we call the \emph{CEM} policy update, finds a high-value action via \emph{CEM} in a broader neighborhood of the Q function landscape and updates the action distribution to concentrate towards this high-value action. We describe the two parts formally below. Given states $s_t$ sampled from the replay buffer, the Q-loss policy update maximizes the objective \begin{equation} J_{\pi}(\phi) = \E_{s_t \sim \mathcal{B},\hat{a}_t \sim \pi}[Q(s_t, \hat{a}_t)], \end{equation} where $\hat{a}_t$ is sampled from the current policy $\pi(\cdot | s_t)$. The gradient is taken through the reparameterization trick. We reparameterize the policy using a neural network transformation as described in Haarnoja~\etal~\cite{haarnoja2018soft}, \begin{equation} \hat{a}_t = f_{\phi}(\epsilon_t | s_t) \end{equation} where $\epsilon_t$ is an input noise vector sampled from a fixed distribution, such as a standard multivariate Normal distribution. Then the gradient of $J_\pi(\phi)$ is: \begin{equation} \nabla J_{\pi}(\phi) = \E_{s_t \sim \mathcal{B}, \epsilon_t \sim \mathcal{N}}[\frac{\partial Q(s_t,f_{\phi}(\epsilon_t | s_t))} {\partial f} \frac{\partial f_\phi(\epsilon_t | s_t)}{\partial \phi}] \end{equation} For the \emph{CEM} policy update, given a minibatch of states $s_t$, we first find a high-value action $\bar{a}_t$ for each state by running \emph{CEM} on the current Q function, $\bar{a}_t = \text{\emph{CEM}}(Q(s_t,\cdot), \pi(\cdot | s_t))$. Then the policy is updated to increase the probability of this high-value action. The guided update on the parameter $\phi$ of $\pi$ at iteration $i$ is \begin{equation} \E_{s_t \sim \mathcal{B}, \hat{a}_t \sim \pi} [Q(s_t,\bar{a}_t) - Q(s_t,\hat{a}_t)]_{+} \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t). \end{equation} We use $Q(s_t, \hat{a}_t)$ as a baseline term, since its expectation over actions $\hat{a}_t$ yields the usual baseline $V(s_t)$: \begin{equation} \E_{s_t\sim \mathcal{B}} [Q(s_t,\bar{a}_t)-V(s_t)] \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t) \end{equation} In our implementation, we only perform an update if the improvement on the $Q$ function, $Q(s_t,\bar{a}_t)-Q(s_t,\hat{a}_t)$, is non-negative, to guard against the occasional cases where \emph{CEM} fails to find a better action. Combining both parts of the update, the final update rule on the parameter $\phi_i$ of policy $\pi_i$ is \[ \phi_{i+1} = \phi_{i} - \lambda \nabla_{\phi} J_{\pi_i}(\phi_{i} ) - \lambda \E_{s_t \sim \mathcal{B}, \hat{a}_t \sim \pi_i} [Q(s_t,\bar{a}_t) - Q(s_t,\hat{a}_t)]_{+} \nabla_{\phi} \log\pi_i(\bar{a}_t|s_t) \] where $\lambda$ is the step size. We can prove that if the Q function has converged to $Q^{\pi}$, the state-action value function induced by the current policy, then both the Q-loss policy update and the \emph{CEM} policy update are guaranteed to improve the current policy. We formalize this result in Theorem 1 and Theorem 2 and prove them in Appendices 3.1 and 3.2. \begin{theorem} \textbf{$Q$-loss Policy Improvement} Starting from the current policy $\pi$, we maximize the objective $J_\pi = \E_{(s,a) \sim \rho_\pi(s,a)} Q^{\pi}(s,a)$. The maximization converges to a critical point denoted as $\pi_{new}$. Then the induced Q function, $Q^{\pi_{new}}$, satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem} \begin{theorem} \textbf{\emph{CEM} Policy Improvement} Assume that the \emph{CEM} process is able to find the optimal action of the state-action value function, $a^*(s) = \argmax_{a}Q^{\pi}(s,a)$, where $Q^{\pi}$ is the Q function induced by the current policy $\pi$. By iteratively applying the update $\E_{(s,a) \sim \rho_\pi(s,a)} [Q(s,a^*)-Q(s,a)]_{+}\nabla \log\pi(a^*|s)$, the policy converges to $\pi_{new}$. Then $Q^{\pi_{new}}$ satisfies $\forall (s,a), Q^{\pi_{new}}(s,a) \geq Q^{\pi}(s,a).$ \end{theorem}
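To make the combined update concrete, we sketch both parts below in PyTorch style. The toy \texttt{cem\_action} helper uses a diagonal Gaussian search distribution, and all names, shapes, and the single-state setting are our own illustrative simplifications of Algorithm 2, not the released implementation.
\begin{verbatim}
# PyTorch-style sketch (ours) of the combined actor update for a single
# state. pi is a reparameterizable torch.distributions.Normal produced
# by the actor; q_min(s, a) = min_j Q(s, a; theta_j). The
# diagonal-Gaussian CEM below is a toy simplification of Algorithm 2.
import torch

def cem_action(q, s, mu, sigma, n_pop=25, n_elite=5, n_iter=2):
    for _ in range(n_iter):
        pop = mu + sigma * torch.randn(n_pop, mu.shape[-1])
        scores = q(s.expand(n_pop, -1), pop).squeeze(-1)
        elite = pop[scores.topk(n_elite).indices]
        mu, sigma = elite.mean(0), elite.std(0)
    return elite[0]  # top elite of the final iteration

def actor_loss(pi, q_min, s, a_bar):
    a_hat = pi.rsample()                      # reparameterized sample
    q_loss = -q_min(s, a_hat).mean()          # Q-loss policy update
    gap = (q_min(s, a_bar) - q_min(s, a_hat)).clamp(min=0).detach()
    cem_loss = -(gap * pi.log_prob(a_bar).sum(-1)).mean()
    return q_loss + cem_loss                  # minimized by the optimizer
\end{verbatim}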
\subsection{Max-min Double Q-Learning}\label{sec:cem_c} Q-learning~\cite{Watkins:89} is known to suffer from overestimation~\cite{thrun1993issues}. Hasselt~\etal~\cite{hasselt2010double} proposed Double Q-learning, which uses two Q functions with independent sets of weights to mitigate the overestimation problem. Fujimoto~\etal~\cite{fujimoto2018addressing} proposed Clipped Double Q-learning with two Q functions denoted as $Q(s,a;\theta_1)$ and $Q(s,a;\theta_2)$, or $Q_1$ and $Q_2$ in short. Given a transition $(s_t,a_t,r_t,s_{t+1})$, Clipped Double Q-learning uses the minimum of the two estimates of the Q functions when calculating the target value in the TD-error~\cite{sutton1998introduction}: \begin{equation} y= r_t + \gamma\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j) \label{eqn:clipped} \end{equation} where $\hat{a}_{t+1}$ is the predicted next action. Fujimoto~\etal~\cite{fujimoto2018addressing} noted that such an update rule may induce an underestimation bias. In addition, $\hat{a}_{t+1}=\pi_{\phi}(s_{t+1})$ is the prediction of the actor network, whose parameters ${\phi}$ are optimized according to the gradients of $Q_1$. In other words, $\hat{a}_{t+1}$ tends to be selected according to the $Q_1$ network, which consistently increases the discrepancy between the two Q-functions. In practice, we observe that the discrepancy between the two estimates of the Q-function, $|Q_1 - Q_2|$, can increase dramatically, leading to an unstable learning process. An example is shown in Fig.~\ref{fig:abl2}, where $Q(s_{t+1},\hat{a}_{t+1};\theta_1)$ is always bigger than $Q(s_{t+1},\hat{a}_{t+1};\theta_2)$. We introduce \emph{Max-min Double Q-Learning} to reduce the discrepancy between the Q-functions. We first select $\hat{a}_{t+1}$ according to the actor network $\pi_{\phi}(s_{t+1})$. Then we run \emph{CEM} to search the landscape of $Q_2$ within a broad neighborhood of $\hat{a}_{t+1}$ to return a second action $\tilde{a}_{t+1}$. Note that \emph{CEM} selects an action $\tilde{a}_{t+1}$ that maximises $Q_2$, while the actor network selects an action $\hat{a}_{t+1}$ that maximises $Q_1$. We gather four different Q-values: $Q(s_{t+1},\hat{a}_{t+1};\theta_1)$, $Q(s_{t+1},\hat{a}_{t+1};\theta_2)$, $Q(s_{t+1},\tilde{a}_{t+1};\theta_1)$, and $Q(s_{t+1},\tilde{a}_{t+1};\theta_2)$. We then run a max-min operation to compute the target value, which cancels the biases induced by $\hat{a}_{t+1}$ and $\tilde{a}_{t+1}$. \begin{equation} \label{eq:maxmin} \begin{aligned} y & = r_t + \gamma \max\{\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j), \min_{j=1,2} Q(s_{t+1},\tilde{a}_{t+1};\theta_j)\} \end{aligned} \end{equation} The inner min-operation $\min_{j=1,2} Q(s_{t+1},\hat{a}_{t+1};\theta_j)$ is adopted from Eq.~\ref{eqn:clipped} and mitigates overestimation~\cite{thrun1993issues}. The outer max operation helps to reduce the difference between $Q_1$ and $Q_2$. In addition, the max operation provides a better approximation of the Bellman optimality operator~\cite{sutton1998introduction}. We visualize $Q_1$ and $Q_2$ during the learning process in Fig.~\ref{fig:abl2}. The following theorem formalizes the convergence of the proposed Max-min Double Q-Learning approach in the finite MDP setting. We prove the theorem in Appendix 3.3. \begin{theorem} We keep two tabular value estimates $Q_{1}$ and $Q_{2}$, updated via \begin{equation} \begin{array}{rl} Q_{t+1, 1}(s,a) &= Q_{t,1}(s,a) + \alpha_t(s,a) (y_t-Q_{t,1}(s,a))\\ Q_{t+1, 2}(s,a) &= Q_{t,2}(s,a) + \alpha_t(s,a) (y_t-Q_{t,2}(s,a)), \end{array} \end{equation} where $\alpha_t(s,a)$ is the learning rate and $y_t$ is the target: \begin{equation} \begin{array}{cl} y_t & = r_t(s_t, a_t) + \gamma \max_{a' \in \{a^{\pi}, a^* \}} \min_{i \in \{1, 2 \}} Q_{t,i}(s_{t+1}, a') \\ a^{\pi} & \sim \pi(s_{t+1})\\ a^* & = \argmax_{a'} Q_{t,2}(s_{t+1}, a') \\ \end{array} \end{equation} We assume that the MDP is finite and tabular, that the variance of the rewards is bounded, and that $\gamma \in [0,1]$.
We assume each state-action pair is sampled an infinite number of times and both $Q_{1}$ and $Q_{2}$ receive an infinite number of updates. We further assume the learning rates satisfy $\alpha_t(s,a) \in [0,1]$, $\sum_t \alpha_t(s,a) = \infty$, $\sum_t [\alpha_t(s,a)]^2 < \infty$ with probability 1 and $\alpha_t(s,a)=0, \forall (s,a) \neq (s_t, a_t)$. Finally, we assume that \emph{CEM} is able to find the optimal action $a^*(s) = \argmax_{a'}Q(s,a';\theta_2)$. Then Max-Min Double Q-learning converges to the optimal value function $Q^*$ with probability $1$. \end{theorem} \subsection{Comparative Evaluation} We present \emph{GRAC}, a self-guided and self-regularized actor-critic algorithm, as summarized in Algorithm~\ref{alg:alg1}. To evaluate {\em GRAC}, we measure its performance on the suite of MuJoCo continuous control tasks~\cite{todorov2012mujoco}, interfaced through OpenAI Gym~\cite{brockman2016openai}. We compare our method with \emph{DDPG}~\cite{lillicrap2015continuous}, \emph{TD3}~\cite{fujimoto2018addressing}, \emph{TRPO}~\cite{schulman2015trust}, and \emph{SAC}~\cite{haarnoja2018soft}. We use the source code released by the original authors and adopt the same hyperparameters reported in the original papers. Hyperparameters for all experiments are listed in Appendix 2.1. Results are shown in Figure~\ref{fig:compare_results}. {\em GRAC} outperforms or performs comparably to all other algorithms in both final performance and learning speed across all tasks. \begin{figure*}[ht!] \centering \includegraphics[width=.85\linewidth]{imgs/fig_1_all_returns_0.pdf} \begin{tabular}{ccc} \begin{minipage}{.2\textwidth} \centering \par\small{(a)~Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Hopper-v2} \end{minipage} & \begin{minipage}{.2\linewidth} \centering \par\small{(c)~Humanoid-v2} \end{minipage} \end{tabular} \includegraphics[width=.85\linewidth]{imgs/fig_1_all_returns_1.pdf} \begin{tabular}{ccc} \begin{minipage}{.2\linewidth} \centering \par\small{(d)~HalfCheetah-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(e)~Swimmer-v2} \end{minipage} & \begin{minipage}{.2\linewidth} \centering \par\small{(f)~Walker2d-v2} \end{minipage}\\ \end{tabular} \caption{Learning curves for the OpenAI gym continuous control tasks. For each task, we train 8 instances of each algorithm, using 8 different seeds. Evaluations are performed every 5000 interactions with the environment. Each evaluation reports the return (total reward), averaged over 10 episodes. For each training seed, we use a different seed for evaluation, which results in different start states. The solid curves and shaded regions represent the mean and standard deviation, respectively, of the average return over 8 seeds. All curves are smoothed with window size 10 for visual clarity. \emph{GRAC} (orange) learns faster than other methods across all tasks. \emph{GRAC} achieves comparable results to the state-of-the-art methods on the Ant-v2 task and outperforms prior methods on the other five tasks, including the complex high-dimensional Humanoid-v2.}\label{fig:compare_results} \end{figure*} \subsection{Ablation Study} In this section, we present ablation studies to understand the contribution of each proposed component: Self-Regularized TD Learning~(Section~\ref{sec:target}), Self-Guided Policy Improvement~(Section~\ref{sec:cem_a}), and Max-min Double Q-Learning~(Section~\ref{sec:cem_c}).
We present our results in Fig.~\ref{fig:table2}, in which we compare the performance of {\em GRAC} with alternatives, each removing one component from GRAC. Additional learning curves can be found in Appendix 2.2. We also run experiments to examine how sensitive GRAC is to hyperparameters such as $\alpha$ and $K$ listed in Alg.~\ref{alg:alg1}; the results can be found in Appendix 2.4. \paragraph{Self-Regularized TD Learning} To verify the effectiveness of the proposed self-regularized TD-learning method, we apply our method to \emph{DDPG} (\emph{DDPG w/o target network w/ target regularization}). We compare against two baselines: the original \emph{DDPG} and \emph{DDPG} without target networks for both actor and critic (\emph{DDPG w/o target network}). We choose DDPG because it does not have additional components such as Double Q-Learning, which may complicate the analysis of this comparison. In Fig.~\ref{fig:abl1}, we visualize the average returns and the average $Q_1$ values over training batches (${y^\prime}_1$ in Alg.~\ref{alg:alg1}). The $Q_1$ values of \emph{DDPG w/o target network} change dramatically, which leads to poor average returns. \emph{DDPG} maintains stable Q values but makes slow progress. Our proposed \emph{DDPG w/o target network w/ target regularization} maintains stable Q values and learns considerably faster. This demonstrates the effectiveness of our method and its potential to be applied to a wide range of DRL methods. Due to the page limit, we only include results on Hopper-v2. The results on the other tasks are in Appendix 2.3. All tasks exhibit a similar phenomenon. \begin{figure}[hb!] \centering \begin{tabular}{cc} \begin{minipage}{.55\textwidth} \centering \includegraphics[width=\linewidth]{imgs/fig_2_abl_three_loss_0.pdf} \end{minipage} \end{tabular} \begin{tabular}{cc} \begin{minipage}{.3\textwidth} \centering \par\small{(a)~Returns on Hopper-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par\small{(b)~Average of $Q_1$ over training batch on Hopper-v2} \end{minipage} \end{tabular} \caption{Learning curves and average $Q_1$ values ($y^{\prime}_1$ in Alg.~\ref{alg:alg1}) on Hopper-v2. \emph{DDPG} w/o target network quickly diverges, as seen by the unrealistically high Q values. \emph{DDPG} is stable but progresses slowly. If we remove the target network and add the proposed target regularization, we both maintain stability and achieve faster learning than \emph{DDPG}.}\label{fig:abl1} \end{figure} \paragraph{Policy Improvement with Evolution Strategies} The GRAC actor network uses a combination of two actor loss functions, denoted as \emph{QLoss} and \emph{CEMLoss}. \emph{QLoss} refers to the unbiased gradient estimator that extends the \emph{DDPG}-style policy gradients~\cite{lillicrap2015continuous} to stochastic policies. \emph{CEMLoss} represents the policy improvement guided by the action found with the zeroth-order optimization method CEM. We run two further ablation experiments on all six control tasks and compare them with our original policy training method, denoted as \emph{GRAC}. As seen in Fig.~\ref{fig:table2}, in general \emph{GRAC} achieves better performance than using either \emph{CEMLoss} or \emph{QLoss} alone. The significance of the improvements varies across the six control tasks. For example, \emph{CEMLoss} plays a dominant role in Swimmer, while \emph{QLoss} has a major effect in HalfCheetah. This suggests that \emph{CEMLoss} and \emph{QLoss} are complementary.
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{imgs/fig_table.pdf} \caption{Final average returns, normalized w.r.t.\ {\em GRAC} for all tasks. For each task, we train each ablation setting with 4 seeds and average the last 10 evaluations of each seed (40 evaluations in total). Actor updates without CEMLoss (\emph{GRAC w/o CEMLoss}) and actor updates w.r.t.\ the minimum of both Q networks (\emph{GRAC w/o CriticCEM w/ minQUpdate}) achieve slightly better performance on Walker2d-v2 and Hopper-v2. GRAC achieves the best performance on 4 out of 6 tasks, especially on the more challenging tasks with higher-dimensional state and action spaces (Humanoid-v2, Ant-v2, HalfCheetah-v2). This suggests that the individual components of GRAC complement each other.} \label{fig:table2} \end{figure} \paragraph{Max-min Double Q-Learning} We additionally verify the effectiveness of the proposed Max-min Double Q-Learning method. We run an ablation experiment in which Max-min is replaced by Clipped Double Q-learning~\cite{fujimoto2018addressing}, denoted as \emph{GRAC w/o CriticCEM}. In Fig.~\ref{fig:abl2}, we visualize the learning curves of the average return, $Q_1$ ($y^{\prime}_1$ in Alg.~\ref{alg:alg1}), and $Q_1-Q_2$ ($y^{\prime}_1-y^{\prime}_2$ in Alg.~\ref{alg:alg1}). \emph{GRAC} achieves high performance while maintaining a smoothly increasing Q function. Note that the difference between the Q functions, $Q_1-Q_2$, remains around zero for \emph{GRAC}. \emph{GRAC w/o CriticCEM} shows high variance and drastic changes in the learned $Q_1$ value. In addition, $Q_1$ and $Q_2$ do not always agree. Such unstable Q values result in a performance crash during the learning process. Instead of Max-min Double Q-Learning, another way to address the gap between $Q_1$ and $Q_2$ is to perform actor updates on the minimum of the $Q_1$ and $Q_2$ networks (as in SAC). Replacing Max-min Double Q-Learning with this trick achieves lower performance than \emph{GRAC} on the more challenging tasks such as HalfCheetah-v2, Ant-v2, and Humanoid-v2 (see \emph{GRAC w/o CriticCEM w/ minQUpdate} in Fig.~\ref{fig:table2}). \begin{figure}[ht!] \centering \includegraphics[width=0.8\linewidth]{imgs/fig_3_abl_critic_cem_0.pdf} \begin{tabular}{ccc} \begin{minipage}{.3\textwidth} \begin{flushright} \par\small{(a)~Returns on Ant-v2} \end{flushright} \end{minipage} & \begin{minipage}{.3\linewidth} \centering \par \small{(b)~Average of $Q_1$ over training batch on Ant-v2} \end{minipage} & \begin{minipage}{.3\linewidth} \begin{flushleft} \par\small{(c)~Average of $Q_1-Q_2$ over training batch on Ant-v2} \end{flushleft} \end{minipage} \end{tabular} \caption{ Learning curves (left), average $Q_1$ values (middle), and average of the difference between $Q_1$ and $Q_2$ (right) on Ant-v2. Average Q values are computed as the minibatch average of $y^{\prime}_1$ and $y^{\prime}_2$, defined in Alg.~\ref{alg:alg1}. \emph{GRAC w/o CriticCEM} represents replacing Max-min Double Q-Learning with Clipped Double Q-Learning. Without Max-min Double Q-Learning to balance the magnitudes of $Q_1$ and $Q_2$, $Q_1$ blows up significantly compared to $Q_2$, leading to divergence.
}\label{fig:abl2} \end{figure}
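For reference, the max-min target used throughout these ablations can be written compactly as follows; this is our own PyTorch-style transcription of the target equation in Section~\ref{sec:cem_c}, with illustrative names.
\begin{verbatim}
# PyTorch-style transcription (ours) of the max-min target: a_hat is
# the actor's proposal, a_tilde the CEM proposal found on Q2.
import torch

def max_min_target(q1, q2, r, s_next, a_hat, a_tilde, gamma):
    min_hat = torch.min(q1(s_next, a_hat), q2(s_next, a_hat))
    min_tilde = torch.min(q1(s_next, a_tilde), q2(s_next, a_tilde))
    return r + gamma * torch.max(min_hat, min_tilde)
\end{verbatim}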
\section{Introduction} In industry, we often find combinatorial optimisation problems that are non-trivial and NP-hard, which means that, in practice, they cannot be solved in polynomial time. Examples of such problems are production process planning and scheduling in single- or multi-objective domains. One of the most common is the Permutation Flow Shop Scheduling Problem (PFSP) \cite{ltgaPopulationSizing,productionSchedulingIeee}. The objective of PFSP is to optimise the quality of the production schedule. A solution in PFSP defines the order in which the production tasks (jobs) are put on the plan by a scheduler. PFSP is considered in single- \cite{ltgaPopulationSizing,productionSchedulingIeee}, multi- \cite{productionSchedulingMO,productionMOrealNumber} and many-objective versions \cite{productionSchedulingManyO}. For some of the problems that emerge from industrial optimisation, real numbers are employed to encode a solution \cite{productionMOrealNumber,productionYorkRealNumbers}. For others, sets of discrete values (including binary values) can be used \cite{productionDiscrete,paintsOrig}.\par In this paper, we consider a problem of manufacturing process planning in factories producing bulk commodities. Such a process comprises manufacturing recipe selection and resource allocation. The main optimisation objective of this case study is to increase production line utilisation and, consequently, to decrease the total production time (makespan) of batch production by executing an appropriate number of recipes producing the ordered amounts of commodities. The extension beyond the typical covering problem is that the amounts of the commodities produced should be as close to the ordered ones as possible (i.e., the surpluses should be minimised). The considered problem is an instance of multi-objective optimisation and, as such, is referred to as the multi-objective bulk commodity production problem (MOBCPP) in this paper. This problem is practical and, being an extension of a classic covering problem, belongs to the NP-hard class \cite{Garey1990}. MOBCPP is a multi-objective problem. In multi-objective optimisation, we consider $m$ objective functions $f_i(x), i \in \{0,1,\ldots,m-1\}$. Without loss of generality, we may state that the values of all these functions are to be minimised. In this paper, we only consider problems with solutions encoded by $l$ discrete (binary) variables. Thus, a solution $x$ is a binary vector $x=(x_0,x_1,\ldots,x_{l-1})$. For each $x$, the objective value vector is $f(x) = (f_0(x),f_1(x),\ldots,f_{m-1}(x))$.\par A method in multi-objective optimisation is expected to return a Pareto front \cite{MoGomeaGecco,EliteArchive}. A Pareto front is a set of non-dominated solutions. A solution $x^0$ \textit{dominates} a solution $x^1$ if and only if $f_i(x^0) \leq f_i(x^1)$ $\forall{i}\in \{0,1,\ldots,m-1\}$ and $f(x^0)\neq f(x^1)$. A solution that is not dominated by any other solution is a Pareto-optimal solution. A Pareto-optimal set $\mathcal{P}_S$ is the set of all Pareto-optimal solutions. The Pareto-optimal front $\mathcal{P}_F$ is the set of objective value vectors of all Pareto-optimal solutions. The number of Pareto-optimal solutions may be large for many problems (in continuous optimisation, it is often infinite). Therefore, it is usually sufficient to find a good approximation of $\mathcal{P}_F$.\par
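Since dominance checks underlie all of the methods discussed below, it may help to state the definition above directly as code; the following snippet is a plain Python transcription of the definition for the minimisation case.
\begin{verbatim}
# Plain transcription of the dominance definition above (minimisation).
def dominates(f_x0, f_x1):
    """True iff the objective vector f_x0 dominates f_x1."""
    return (all(a <= b for a, b in zip(f_x0, f_x1))
            and tuple(f_x0) != tuple(f_x1))

# Example: dominates((1.0, 2.0), (1.0, 3.0)) returns True.
\end{verbatim}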
\begin{comment} The recipes can be executed on different compatible resources. Various recipes can be used to produce the same commodity. Consequently, the decision problem includes the selection of the multisubset (i.e.\ a combination with repetitions) of the recipes and their allocation to compatible resources, such that the appropriate amount of goods are produced with the minimal surplus in the shortest possible time. As such, the problem resembles, to a certain degree, FJSP with process plan flexibility (FJSP-PPF) \cite{Ozguven2010} or the earlier-formulated ``JSP with alternative process plans" \cite{Thomalla2001}. However, none of the papers known to the authors considers process planning by recipe multisubset selection to satisfy both the criteria of the shortest makespan and the minimal surplus of the ordered commodities. \end{comment} If the scale of problem instances is large, metaheuristics may be employed as effective and efficient solvers \cite{paintsOrig,muppetsBaldwinEon,productionYorkRealNumbers,ltgaPopulationSizing}. Evolutionary methods are capable of delivering high-quality solutions while consuming a reasonable amount of computational resources. In some practical problems, there is more than one conflicting objective to optimise. For such problems, instead of a single solution, a set of so-called Pareto-optimal solutions is sought. For each of these solutions, improving one objective worsens at least one other objective.\par Many methods have been proposed for multi-objective optimisation, for instance, the well-known NSGA-II \cite{nsga2}, which employs mechanisms to bias the evolutionary search towards $\mathcal{P}_F$ and to preserve the diversity of the final Pareto front approximation. Another proposition is the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) \cite{moead,Zhou2011,Ma2014b,Ma2014,Ma2016}. The idea behind MOEA/D is to decompose the Pareto front and exchange information only between individuals that optimise a similar part of $\mathcal{P}_F$. One of the key advantages of MOEA/D over NSGA-II is that it does not require the computation of the so-called crowding distance, which is computationally expensive. NSGA-II and MOEA/D are typical reference methods in multi-objective optimisation \cite{MoGomeaSwarm}, so they are also employed as baselines in this paper.\par Solutions for the considered MOBCPP problem are binary-coded. Thus, the solution space is discrete. Methods that employ linkage learning are particularly effective in the optimisation of problems characterised by such solution spaces. This observation applies to both theoretical \cite{muppets,3lo,MoGomeaSwarm,ltga} and practical problems \cite{ltgaPopulationSizing,subpopInitLL,mupMemo}. It has also been shown that methods employing linkage learning may significantly outperform others that do not use such techniques \cite{muppets,mohBOA,MoGomeaSwarm,mupMemo}. One of the recent propositions dedicated to solving multi-objective problems is the Multi-objective Gene-pool Optimal Mixing Evolutionary Algorithm (MO-GOMEA) \cite{MoGomeaGecco,MoGomeaSwarm}. MO-GOMEA is based on the concept of the Linkage Tree Genetic Algorithm (LTGA) \cite{ltga,ltgaGomeaNaming,ltgaPopulationSizing}. LTGA is a Genetic Algorithm (GA) that employs linkage learning techniques \cite{3lo,P3Original,ltga,dsmga2} to improve its effectiveness. Similarly to LTGA in single-objective domains, MO-GOMEA has significantly outperformed competing methods (including NSGA-II and MOEA/D) in multi-objective optimisation \cite{MoGomeaGecco,MoGomeaSwarm}.
Among other techniques, MO-GOMEA clusters the population and processes the subpopulations separately to obtain high-quality results. Therefore, it is capable of optimising different parts of the Pareto front separately, using linkage that is supposed to describe the features of each Pareto front part.\par According to \cite{linkageQuality}, methods that employ linkage learning depend on the quality of the linkage they use. If the linkage quality is too low, linkage-based methods perform similarly to their competitors that do not consider gene dependencies. In \cite{linkLearningDetermined}, the authors examine the dependency between a method's effectiveness and its linkage learning model. They show that if the problem structure is complex (there are many gene dependencies) \cite{watsonHiff,watsonHiffPPSNfirst}, it is favourable to learn linkage during the method's execution rather than to obtain the linkage in a pre-optimisation step. Finally, in \cite{3lo}, the authors show that to ensure a method's effectiveness, the linkage should be of high quality, but it should also be diverse. To obtain this, they propose a method that utilises a multi-population approach. Note that multi-population approaches are usually employed to increase population diversity \cite{subpopInitLL,muppets,mupMemo}, but they may also be useful in obtaining diverse linkage \cite{3lo}; the lack of linkage diversity is also a likely reason for the poor performance of LTGA reported in \cite{linkageLearningIsBad}. The research considering the influence of linkage quality on a method's performance is at an early stage. For instance, the reasons why linkage diversity is important for effectively solving problems with a complex structure (including so-called overlapping building blocks, a typical feature of practical problems) require further investigation \cite{3lo}. Nevertheless, the objective of this paper is to use the conclusions of the research already conducted in this area and to apply them as intuitions that guide us towards proposing an effective method for solving the MOBCPP problem. If these intuitions are accurate, such a method should also be effective in solving typical benchmarks employed in multi-objective optimisation.\par As stated before, MO-GOMEA is a state-of-the-art method for multi-objective optimisation. However, despite its high effectiveness, it also has some disadvantages. First, it requires a clusterisation of the population. The number of required clusters is adjusted automatically at runtime; however, if the number of clusters is too low or too high, the method may become ineffective \cite{MoGomeaGecco}. Second, although LTGA (the single-objective base of MO-GOMEA) is highly effective in single-objective optimisation, for problems with so-called overlapping blocks it is outperformed by the Parameter-less Population Pyramid (P3) \cite{3lo,P3Original,fP3,afP3}. P3 is another state-of-the-art method in single-objective optimisation. Problems with overlapping blocks contain blocks of highly dependent genes; however, some of the genes in these blocks are also dependent on genes from other blocks \cite{3lo,P3Original,fP3,afP3}. Inter-block dependencies are typical of practical problems \cite{watsonHiff,watsonHiffPPSNfirst}.\par In this paper, we propose the Multi-objective Parameter-less Population Pyramid (MO-P3) to solve the MOBCPP problem effectively. The motivations behind proposing this method are as follows.
In contrast to LTGA, P3 maintains numerous different linkages at the same time, which should be beneficial for practical problems \cite{3lo}. MO-P3 uses this linkage diversity to avoid the need for population clusterisation. P3 is a relatively recent method that effectively solves single-objective problems with overlapping blocks. For such problems, P3 has been shown to be significantly more effective than LTGA and the Dependency Structure Matrix Genetic Algorithm II (DSMGA-II) \cite{dsmga2}. Since practical problems often contain blocks that overlap \cite{watsonHiff,watsonHiffPPSNfirst}, P3 seems to be a good starting point for solving the practical multi-objective problem considered in this paper. At each MO-P3 iteration, a new individual is added to the population and updated with the use of the collected linkages and the rest of the population, similarly to P3. However, in MO-P3, each new individual is assigned a weight vector that directs the search towards a chosen part of the Pareto front. This feature resembles the behaviour of MOEA/D.\par The extensive experimental work described in this paper shows that for the considered practical problem, MO-P3 yields results of a higher quality than NSGA-II and MOEA/D. MO-P3 also yields slightly better results than MO-GOMEA. However, its main advantage over MO-GOMEA is that MO-P3 obtains high-quality results for MOBCPP significantly faster (considering both fitness evaluations and computation time). We also present the MO-P3 performance on typical benchmarks. On all but one of them, MO-P3 outperforms all competing methods. Therefore, MO-P3 may prove useful in solving multi-objective problems in general. Thus, the contribution of this paper is threefold. First, we propose a method dedicated to solving a hard and industrially-relevant practical problem. Second, we fill a gap in the field of Evolutionary Computation, namely the lack of a P3-based method dedicated to solving multi-objective problems. Finally, we show that MO-P3 is highly competitive on a typical benchmark set, so our contribution goes beyond the original problem we set out to solve: we propose a new and effective method for multi-objective discrete optimisation.\par The rest of this paper is organised as follows. In the next section, we present the related work, which includes linkage learning, the presentation of the state-of-the-art methods employing linkage, the issue of Pareto front clusterisation and MO-GOMEA. In Section \ref{sec:problemDef}, we define the MOBCPP problem. In the fourth section, we describe the proposed MO-P3 approach in detail. Sections \ref{sec:exp:paints} and \ref{sec:exp:benchmarks} report the results obtained for MOBCPP and the benchmark problems, respectively. The results are discussed in the seventh section. Finally, the last section points out future research directions and concludes this paper. \section{Related Work} \label{sec:relWork} In this section, we present the research related to our propositions. In the first subsection, we discuss in detail the issue of linkage learning and the linkage learning techniques employed by the methods considered in this paper. In Section \ref{sec:relWork:dsmMethods}, we show the details of modern evolutionary methods. These methods are single-objective, but one of them is the base of our proposition and another is the base of the main competing method considered in this paper.
In the fifth subsection, we present the latest advances in discrete multi-objective optimisation. Finally, in the last two subsections, we present MOEA/D in more detail and review previous research related to manufacturing scheduling using multi-objective GAs. \subsection{Linkage Learning} \label{sec:relWork:ll} Linkage learning is one of the techniques used to detect features of the problem being optimised. Such knowledge is used at runtime to improve the method's effectiveness and efficiency. In this section, we present the general linkage classifications and the more recent techniques employed by state-of-the-art methods in evolutionary computation. In this paper, we concentrate on linkage learning techniques dedicated to discrete domains; however, problem decomposition has also been found useful in continuous domains \cite{ieeeSurvey}. \subsubsection{General Description and Classifications} \label{sec:relWork:ll:general} Linkage is a piece of information that describes possible dependencies between genes. If such knowledge is accurate and used properly, it may significantly increase the effectiveness of an evolutionary method. In recent years, many different techniques have been proposed to obtain linkage. These techniques may be classified with regard to their features. For instance, linkage learning techniques may be classified on the basis of how good and bad linkage are distinguished, how linkage is represented, and how linkage is stored \cite{llClassification}. If a method uses only the fitness value to differentiate between good and bad linkage, it employs a \textit{unimetric way}, which is typical for older Genetic Algorithms (GAs), although some relatively modern methods also adopt it \cite{muppets,muppetsActive}. Nevertheless, current state-of-the-art methods (e.g., the Parameter-less Population Pyramid (P3) \cite{P3Original}, the Dependency Structure Matrix Genetic Algorithm II (DSMGA-II) \cite{dsmga2}, the Linkage Tree Genetic Algorithm (LTGA) \cite{ltga}) employ a multi-metric approach, which means that they use more measures than pure fitness to find linkage of high quality. If the linkage is represented by dedicated structures (e.g., trees, graphs, or matrices), such a representation is called \textit{virtual}. Linkage represented by the position of the gene in a genotype is called \textit{physical} \cite{muppets}. Finally, the linkage may be stored in one central database or it may be distributed across the population (i.e., each individual may carry its own linkage information).\par A more recent linkage classification was proposed in \cite{DSMorig} and supplemented in \cite{omidvar,muppetsBaldwinEon}. It considers five different ways of linkage generation. The first class uses perturbation and analyses the subsequent fitness changes \cite{omidvar}. Another way to generate linkage is to evolve the order of the genes in the chromosome, which is referred to as \textit{interaction adaptation} \cite{muppets}. An evolutionary method may also build probabilistic models, as the Estimation of Distribution Algorithms do \cite{mohBOA}. Surprisingly, in some situations, randomly generated linkage may also improve a method's effectiveness \cite{linkageRandom}. Finally, the last class (proposed in \cite{muppetsBaldwinEon}) is the comparison of evolution results. The methods employing this technique compare individuals that resulted from different evolutionary processes to obtain linkage.
A similar classification of the linkage techniques employed in Cooperative Coevolution may be found in \cite{linkageClassificationCC}. \subsubsection{Dependency Structure Matrix} \label{sec:relWork:ll:dsm} The Dependency Structure Matrix (DSM) is a square matrix that stores the dependencies occurring between the components (genes). This structure is derived from information theory~\cite{dsmga2} and is applied in evolutionary methods to describe gene dependencies. The problem size $n$, i.e., the number of genes, determines the size of the DSM. Each element $d_{i, j} \in \mathbb{R}$ of $\mathrm{DSM} = [d_{i, j}]_{n \times n}$ indicates how strongly the $i^{th}$ and $j^{th}$ genes depend on each other. Usually, mutual information~\cite{mutualInformation} is used as the dependency measure. It is defined as \begin{equation} \label{eq:mutualInformation} I(X, Y) = \sum_{x \in X} \sum_{y \in Y} p(x, y) \ln\frac{p(x,y)}{p(x)p(y)} \geq 0, \end{equation} where $X$ and $Y$ are random variables. The value of mutual information is proportional to the strength of the dependency between the pair of genes. If $X$ and $Y$ are independent, the value of $I(X,Y)$ is $0$, because \begin{equation} p(x,y) = p(x)p(y) \implies \ln{\frac{p(x,y)}{p(x)p(y)}} = \ln{1} = 0. \end{equation} It is also assumed that $\ln{\frac{p(x,y)}{p(x)p(y)}}$ equals $0$ whenever $p(x,y)$, $p(x)$ or $p(y)$ is equal to $0$. \begin{table} \caption{Population of individuals to demonstrate the DSM creation procedure} \centering% \label{tab:populationDSM} \begin{tabular}{ccccc} \hline \multirow{2}{*} {\textbf{Population}} & \multicolumn{4}{c}{\textbf{Genotype}} \\ & $G_1$ & $G_2$ & $G_3$ & $G_4$ \\ \hline $1^{st}$ individual & 0 & 1 & 0 & 1 \\ $2^{nd}$ individual & 0 & 1 & 0 & 1 \\ $3^{rd}$ individual & 1 & 1 & 1 & 1 \\ $4^{th}$ individual & 1 & 1 & 0 & 1 \\ $5^{th}$ individual & 0 & 0 & 1 & 1 \\ \hline \end{tabular} \end{table} To demonstrate the process of DSM creation, we use the population of $5$ binary-coded individuals presented in Table~\ref{tab:populationDSM}, where $G_i$ denotes the $i^{th}$ gene. Formula~(\ref{eq:mutualInformation}) represents the mutual information that can be calculated for any pair of random variables. In particular, it can be used to measure the dependency between any two genes. Thus, a gene-level version of formula~(\ref{eq:mutualInformation}) may be defined as \begin{equation} \label{eq:mutualInformationGenes} I(G_i, G_j) = \sum_{g_i \in G_i} \sum_{g_j \in G_j} p_{i,j}(g_i, g_j) \ln\frac{p_{i,j}(g_i, g_j)}{p_i(g_i)p_j(g_j)}, \end{equation} where $g_i$ and $g_j$ indicate the possible values of the $i^{th}$ ($G_i$) and $j^{th}$ ($G_j$) genes, respectively. For instance, if an optimisation problem is binary, then $g_i \in \{0, 1\} = G_i$ and $g_j \in \{0, 1\} = G_j$. To calculate the probabilities appearing in formula~(\ref{eq:mutualInformationGenes}), all individuals in the population are taken into consideration. The value $p_{i,j}(g_i, g_j)$ denotes the joint probability that the $i^{th}$ gene has value $g_i$ and, simultaneously, the $j^{th}$ gene has value $g_j$. Moreover, $p_i(g_i)$ indicates the probability that the value of the $i^{th}$ gene is $g_i$. Table~\ref{tab:DSM} presents the DSM obtained for the population shown in Table~\ref{tab:populationDSM}. All DSM entries presented in Table~\ref{tab:DSM} have been calculated using formula~(\ref{eq:mutualInformationGenes}).
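To make the above procedure concrete, the following minimal Python sketch (our own illustration; all names are ours) computes the mutual-information entries of the DSM for the population of Table~\ref{tab:populationDSM}; it reproduces, for example, the $(G_1,G_2)$ and $(G_2,G_3)$ entries of Table~\ref{tab:DSM}.
\begin{verbatim}
# Sketch: DSM construction via mutual information
# for the example population (one string per individual).
from collections import Counter
from math import log

pop = ["0101", "0101", "1111", "1101", "0011"]
N, n = len(pop), len(pop[0])

def mutual_information(i, j):
    pi  = Counter(ind[i] for ind in pop)
    pj  = Counter(ind[j] for ind in pop)
    pij = Counter((ind[i], ind[j]) for ind in pop)
    # terms with a zero joint probability are skipped (the 0-convention)
    return sum((c / N) * log((c / N) / ((pi[gi] / N) * (pj[gj] / N)))
               for (gi, gj), c in pij.items())

dsm = [[mutual_information(i, j) for j in range(n)] for i in range(n)]
print(round(dsm[0][1], 2), round(dsm[1][2], 2))   # -> 0.12 0.22
\end{verbatim}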
\begin{table} \caption{DSM for the population presented in Table~\ref{tab:populationDSM}} \centering \label{tab:DSM} \begin{tabular}{ccccc} \hline \textbf{} & $G_1$ & $G_2$ & $G_3$ & $G_4$ \\ \hline $G_1$ & X & 0.12 & 0.00 & 0.00 \\ $G_2$ & 0.12 & X & 0.22 & 0.00 \\ $G_3$ & 0.00 & 0.22 & X & 0.00 \\ $G_4$ & 0.00 & 0.00 & 0.00 & X \\ \hline \end{tabular} \end{table} DSM-based linkage learning may lead to excellent results and is employed by the leading methods in the field of discrete optimisation \cite{P3Original,ltga,psDSMGA2,dsmga2}. For some problems, it facilitates finding a high-quality linkage \cite{ltga}, which is crucial for solving the problem. Additionally, it supports updating the linkage information at runtime, which is key when addressing problems with a complex structure \cite{linkLearningDetermined}. Such a structure may be commonly found in practical problems \cite{watsonHiff,watsonHiffPPSNfirst}.\par As presented in \cite{linkageQuality}, not all problem types are easy to decompose for DSM-based linkage learning. Recently, Linkage Learning based on Local Optimisation (3LO) was proposed in \cite{3lo}. 3LO is an empirical linkage learning technique, which means that, instead of \textit{predicting} gene dependencies as other linkage learning techniques do, it verifies them with an empirical check. Thanks to the idea behind it, 3LO is proven not to report any \textit{false linkage}. False linkage takes place when two independent genes are reported as dependent by a linkage learning technique. A drawback of 3LO is its computational cost; therefore, the methods using it perform worse when overlapping problems need to be solved \cite{3lo}. This observation justifies the choice of a DSM-using method as the base of our proposition. \subsubsection{Linkage Trees} \label{sec:relWork:ll:linkTree} The DSM has been created to find linkage and it contains only pairwise gene-dependency values. Therefore, a clustering algorithm is employed to merge pairs of genes into larger groups. Different techniques of DSM utilisation have been proposed~\cite{dsmga2, dsmga2e, ltga, P3Original}. The only technique employed by the methods considered in this paper is the linkage tree construction algorithm; hence, it is described in detail below. Nevertheless, at the end of this section, we also give some insights into other DSM utilisation techniques.\par To construct a linkage tree, the distance $D(G_i, G_j)$ between the $i^{th}$ and $j^{th}$ genes is calculated using mutual information (formula~(\ref{eq:mutualInformationGenes})) and joint entropy: \begin{equation} \label{eq:distanceMeause} D(G_i, G_j) = \frac{H(G_i, G_j) - I(G_i, G_j)}{H(G_i, G_j)}, \end{equation} where \begin{equation} \label{eq:entropy} H(G_i, G_j) = - \sum_{g_i \in G_i} \sum_{g_j \in G_j} p_{i,j}(g_i, g_j) \ln{p_{i,j}(g_i, g_j)}. \end{equation} Note that when $H(G_i, G_j)$ equals $0$, the distance $D(G_i, G_j)$ is defined to be $0$ as well. In Table~\ref{tab:distances}, we report the values of gene distances computed for the population presented in Table~\ref{tab:populationDSM}.
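Continuing the sketch given earlier (same assumptions and helper names), the distance of formula~(\ref{eq:distanceMeause}) can be computed from the joint entropy and the mutual information; the printed values agree, up to rounding, with the two-decimal entries of Table~\ref{tab:distances}.
\begin{verbatim}
# Sketch (continued): gene distance from joint entropy and mutual info.
def joint_entropy(i, j):
    pij = Counter((ind[i], ind[j]) for ind in pop)
    return -sum((c / N) * log(c / N) for c in pij.values())

def distance(i, j):
    h = joint_entropy(i, j)
    return 0.0 if h == 0 else (h - mutual_information(i, j)) / h

print(f"{distance(0, 1):.4f} {distance(1, 2):.4f}")   # -> 0.8877 0.7652
\end{verbatim}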
\begin{table} \caption{Distances between genes for the population presented in Table~\ref{tab:populationDSM}} \centering \label{tab:distances} \begin{tabular}{ccccc} \hline \textbf{} & $G_1$ & $G_2$ & $G_3$ & $G_4$ \\ \hline $G_1$ & X & 0.88 & 1.00 & 1.00 \\ $G_2$ & 0.88 & X & 0.76 & 1.00 \\ $G_3$ & 1.00 & 0.76 & X & 1.00 \\ $G_4$ & 1.00 & 1.00 & 1.00 & X \\ \hline \end{tabular} \end{table} A linkage tree consists of nodes corresponding to clusters which group genes that are considered to be dependent on one another. During linkage tree construction, the two most related clusters are joined. Initially, one single-gene cluster is created for each gene; thus, the linkage tree construction algorithm starts from $n$ single-gene clusters, where $n$ is the size of the given optimisation problem. Then, the merging operation is repeated until only one cluster (consisting of all genes) remains. Formula~(\ref{eq:distanceMeause}) is used to calculate the distance between two clusters that each contain only a single gene. If one of the clusters contains more than one gene, the following reduction formula is used: \begin{equation} D(C_k, (C_i \cup C_j)) = \frac{|C_i|}{|C_i| + |C_j|} D(C_k, C_i) + \frac{|C_j|}{|C_i| + |C_j|} D(C_k, C_j), \end{equation} where $|C_i|$, $|C_j|$ and $|C_k|$ indicate the sizes of clusters $C_i$, $C_j$ and $C_k$, respectively. According to Table~\ref{tab:distances}, the distance between clusters $\{G_1\}$ and $\{G_2, G_3\}$ is calculated as follows: \begin{equation} \begin{aligned} D(\{G_1\}, (\{G_2\} \cup \{G_3\})) &= \frac{|\{G_2\}|}{|\{G_2\}| + |\{G_3\}|} D(\{G_1\}, \{G_2\}) \\ &+ \frac{|\{G_3\}|}{|\{G_2\}| + |\{G_3\}|} D(\{G_1\}, \{G_3\}) \\ &= \frac{0.88}{2} + \frac{1}{2} = 0.94. \end{aligned} \end{equation} The process of linkage tree creation for the population given in Table~\ref{tab:populationDSM} is presented step by step in Figure~\ref{fig:linkageTreeCreation}. To simplify the diagram, the label $G_i$ has been replaced by the number $i$; for instance, in Figure~\ref{fig:linkageTreeCreation}, we use $1$ instead of $G_1$. \begin{figure} \centering \includegraphics[width=0.75\linewidth]{linkage_tree_creation.png} \caption{Subsequent steps of the linkage tree creation process for the population from Table~\ref{tab:populationDSM}} \label{fig:linkageTreeCreation} \end{figure} Linkage trees are employed by the Linkage Tree Genetic Algorithm (LTGA) \cite{ltga}, also denoted as the Linkage Tree Gene-pool Optimal Mixing Evolutionary Algorithm (LT-GOMEA) \cite{ltgaGomeaNaming}. Another method that employs linkage trees is the Parameter-less Population Pyramid (P3) \cite{P3Original,fP3}. Both methods are described in the next subsection.\par Another way of using the DSM is the creation of an incremental linkage set. An incremental linkage set is a sequence of gene indexes. During its creation, a single gene index is selected randomly first. Then, among the indexes not yet included in the sequence, the one that has the strongest relation to the last gene in the sequence is repeatedly appended. Incremental linkage sets are employed by the Dependency Structure Matrix Genetic Algorithm II (DSMGA-II) \cite{dsmga2} and the Two-edge Dependency Structure Matrix Genetic Algorithm II (DSMGA-IIe) \cite{dsmga2e}, both presented in the following subsection. \subsection{DSM-using Methods} \label{sec:relWork:dsmMethods} In this section, we present the methods that employ the DSM and information theory for linkage discovery.
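Before turning to the individual methods, the incremental-linkage-set construction mentioned above can be illustrated with a short sketch. The snippet below is our own (it reuses the \texttt{dsm} matrix from the earlier sketch, and the deterministic \texttt{start} argument replaces the random choice of the first index, for reproducibility).
\begin{verbatim}
# Sketch: greedy incremental linkage set (ILS) built from a DSM.
def incremental_linkage_set(dsm, start):
    n = len(dsm)
    seq = [start]
    while len(seq) < n:
        rest = [i for i in range(n) if i not in seq]
        # append the index most strongly linked to the last one appended
        seq.append(max(rest, key=lambda i: dsm[seq[-1]][i]))
    return seq

print(incremental_linkage_set(dsm, 0))   # -> [0, 1, 2, 3]
\end{verbatim}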
\subsubsection{Linkage Tree Genetic Algorithm} \label{sec:relWork:dsmMethods:ltga} Linkage trees were first employed by the Linkage Tree Genetic Algorithm (LTGA)~\cite{ltga, ltgaPopulationSizing}, one of the first DSM-using methods shown to be highly effective. LTGA is a population-based method that uses the linkage tree construction algorithm described in Section \ref{sec:relWork:ll:linkTree}. Recently, LTGA has been improved and renamed the Linkage Tree Gene-pool Optimal Mixing Evolutionary Algorithm (LT-GOMEA) \cite{ltgaPopulationSizing}. \par Instead of crossover, LTGA uses an operator called optimal mixing (OM). OM involves two individuals (called the \textit{source} and the \textit{donor}) and a cluster (a node of the linkage tree). The genes of the donor that are marked by the cluster replace the corresponding genes of the source. The operation is reversed if the fitness of the source decreases; otherwise, the source remains modified. All individuals in the population are mixed using OM. During OM, all clusters except the linkage tree root are considered, and the donor is selected randomly for each cluster. If, after OM, an individual remains unmodified, OM is executed for this individual once again with the best-found individual as the donor. This step is called the forced improvements (FI) phase. FI is also executed when the best-found individual has not been improved for a certain number of iterations. \par As an example of OM, let us consider a 6-bit binary problem with an assumed fitness function for which the fitness values below hold. The genotype of the source individual is $110011$, and its fitness is 6. The first considered cluster marks genes 1, 2, and 5, and the donor individual is $010101$. After mixing, the genotype of the source would become $010001$, and its fitness would decrease to 4. Therefore, the change introduced by mixing is rejected (we wish to maximise the fitness) and the source stays $110011$. The second considered cluster marks genes 2 and 3, and the randomly chosen donor for this cluster is $000111$. After mixing, the source's genotype becomes $100011$; we assume its fitness remains 6. Since the fitness has not decreased, the change is preserved. The third cluster marks genes 2 to 5, and the donor chosen for this cluster is $111111$. After mixing, the source's genotype becomes $111111$; we assume its fitness rises to 7, so the change is preserved. OM continues in this manner until all the clusters that do not cover the whole genotype have been considered.\par LTGA requires one parameter, namely the population size. Finding its appropriate value for a particular test case via tuning may be difficult. Therefore, a population-sizing scheme for LTGA was proposed in~\cite{ltgaPopulationSizing}; LTGA employing this scheme is denoted as LT-GOMEA. LT-GOMEA maintains multiple LTGA instances with different population sizes. The first LTGA instance contains only one individual. During LT-GOMEA execution, a new LTGA instance, with a population twice as large as the previously added one, is added at every $4^{th}$ iteration. Some of the LTGA instances may be found useless and deleted, which limits their number. A single LTGA instance is found useless if all of its individuals are the same or if its average population fitness is worse than the average fitness of at least one LTGA instance with a larger population size. Additionally, all LTGA instances with populations smaller than that of an instance found useless are treated as useless as well. All LTGA instances are isolated from each other.
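The OM operation exemplified above can be summarised in a few lines. The sketch below is our own illustration; \texttt{fitness} is a placeholder for the (maximised) objective, and the cluster is given as a set of 0-based gene indexes (so the cluster marking genes 1, 2, and 5 of the example becomes \texttt{\{0, 1, 4\}}).
\begin{verbatim}
# Sketch of optimal mixing (OM) for one source/donor/cluster triple.
def optimal_mixing(source, donor, cluster, fitness):
    candidate = list(source)
    for gene in cluster:           # copy donor genes marked by the cluster
        candidate[gene] = donor[gene]
    # keep the change only if the fitness has not decreased
    return candidate if fitness(candidate) >= fitness(source) else list(source)

onemax = lambda x: sum(x)          # placeholder fitness
src, don = [1, 1, 0, 0, 1, 1], [0, 1, 0, 1, 0, 1]
print(optimal_mixing(src, don, {0, 1, 4}, onemax))
# -> [1, 1, 0, 0, 1, 1]  (the mixed genotype has lower fitness, so rejected)
\end{verbatim}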
Only during the FI phase is the globally best individual (found by any LTGA instance) used as a donor. Additionally, LT-GOMEA introduces two changes to LTGA. First, LT-GOMEA computes the DSM on the basis of the whole population, while LTGA uses only half of the population. Second, during the OM operation, LT-GOMEA considers the linkage tree clusters in a random order, while LTGA uses them in the order of their creation. These two changes are supposed, respectively, to increase the quality of linkage and to remove a potential bias that may influence the method for a particular problem.\par \subsubsection{Parameter-less Population Pyramid} \label{sec:relWork:dsmMethods:p3} The Parameter-less Population Pyramid (P3)~\cite{P3Original} uses the same linkage tree construction algorithm and the same OM operator as LTGA. However, its population structure is significantly different from that of any other GA-based method. P3 maintains its population in a pyramid-like structure divided into subpopulations called \textit{levels}. Every individual in the population is unique. The population size is not limited and increases during runtime.\par The general P3 procedure can be described in the following way. At every iteration, a new individual is created randomly and initially optimised by the First Improvement Hill Climber (FIHC)~\cite{P3Original}. FIHC is a local search algorithm operating on a vector $\overrightarrow{x}=[x_1, \ldots , x_n]$ of $n$ decision variables. Initially, FIHC randomly chooses a gene order. For each gene $x_i$, all available values are checked; if one of them improves the fitness, it replaces the original value of $x_i$. This procedure is repeated until no gene is changed during a full FIHC iteration. After the optimisation by FIHC, the new individual climbs up the pyramid: with the use of OM, it is mixed with all individuals of a single level, considering the bottom pyramid levels first. If the fitness of the new individual improves during this operation, the improved individual is added to the next pyramid level. If a successful OM involves an individual from the top level, a new \textit{level} (subpopulation) is first added to the pyramid. The overall idea of P3 is presented in Figure \ref{fig:p3FlowChart}.\par \begin{figure} \centering \includegraphics[width=0.9\linewidth]{p3_flowchart.png} \caption{Visualisation of the P3 concept} \label{fig:p3FlowChart} \end{figure} \subsubsection{Dependency Structure Matrix Genetic Algorithm II} \label{sec:relWork:dsmMethods:dsmga2} The Dependency Structure Matrix Genetic Algorithm II is another method that employs DSM-based linkage learning \cite{dsmga2}. However, unlike P3 or LTGA, it uses an incremental linkage set (ILS) instead of a linkage tree. An ILS is a sequence of gene indexes. The process of ILS building starts from a single gene index and repeatedly adds the index with the strongest connection (in terms of the DSM weights) to the previously added gene among those not yet included in the sequence. Similarly to LTGA, DSMGA-II maintains a single population with a fixed number of individuals. Two operators are used: restricted mixing and back mixing. Restricted mixing processes a single individual. First, an incremental linkage set is created, starting from a random gene index. Then, the consecutive genes are flipped according to the incremental linkage set. The operation continues until a better fitness is obtained, or until the same fitness is obtained but the modified individual is absent from the population.
If all the genes have been flipped but the fitness of the modified individual is worse than that of the starting individual, the changes made by restricted mixing are rejected. However, if restricted mixing leads to a change (i.e., the new individual has a higher or equal fitness but a genotype that does not exist in the population), the back mixing operation is triggered. During back mixing, the change introduced by restricted mixing is injected into other individuals. The injection is preserved if it improves fitness and rejected otherwise.\par DSMGA-II has been shown to be effective in solving theoretical \cite{dsmga2} and practical problems \cite{steinerTrees}. Recently, its parameter-less version that employs a population-sizing scheme (denoted as psDSMGA-II) was proposed in \cite{psDSMGA2}. Although psDSMGA-II improves the effectiveness of the original DSMGA-II, its main disadvantage is the same as its predecessor's: it is less effective than LT-GOMEA or P3 in solving problems with overlapping building blocks \cite{afP3,psDSMGA2}. \subsection{Linkage Diversity} \label{sec:relWork:linkageDiversity} P3 is effective in solving hard computational problems \cite{P3Original,fP3,afP3}. Compared to LT-GOMEA, P3 performs significantly better when the problem to be solved has highly overlapping blocks (e.g., NK fitness landscapes) \cite{afP3}. This feature is important in practice because such blocks are typical for real-life problems \cite{watsonHiffPPSNfirst,OverlappingSimon}. To the best of our knowledge, no detailed analysis of the superiority of P3 over LT-GOMEA and DSMGA-II in solving problems with overlaps has been published yet. Below, we propose an explanation of this superiority.\par Let us introduce the deceptive function of unitation~\cite{decFunc}. Formula (\ref{eq:dec3}) defines the deceptive function of order $k$; the solution is binary-coded (i.e., it is a string of $0$s and $1$s): \begin{equation} \label{eq:dec3} \mathit{dec(u)}= \begin{cases} k - 1 - u & \text{if } u < k\\ k & \text{if } u = k \end{cases}, \end{equation} where $u$ is the sum of gene values (the so-called \textit{unitation}) and $k$ is the deceptive function size. The optimal solution of the order-3 deceptive function is $111$, while the suboptimum is $000$. Let us consider the concatenation of three order-3 deceptive functions, where the first three bits refer to the first function (building block), the next three bits refer to the second function, and so on. The optimal solution to this problem is $111 111 111$. However, most of the population of typical GA-based methods is deceived towards the $000 000 000$ solution. If this problem is sufficiently large, it may become intractable. On the other hand, concatenations of deceptive functions are easy to solve if the problem nature (the perfect linkage) is known \cite{grayWhitley}. In the given example, the perfect linkage that groups the dependent gene indexes is $(1,2,3)$, $(4,5,6)$ and $(7,8,9)$. Such linkage may be represented by a single linkage tree (Figure \ref{fig:perfectLinkageTree}).
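The deceptive function and its concatenations are easy to implement, which makes them convenient benchmarks. The sketch below (our own) evaluates formula~(\ref{eq:dec3}) and the block concatenations discussed in this subsection, including the overlapping variant introduced next.
\begin{verbatim}
# Sketch: order-k deceptive function and its (possibly overlapping)
# concatenation over a list of bits x.
def dec(u, k):
    return k if u == k else k - 1 - u

def concat_deceptive(x, k, o=0):
    step, total, start = k - o, 0, 0
    while start + k <= len(x):
        total += dec(sum(x[start:start + k]), k)
        start += step
    return total

print(concat_deceptive([1] * 9, 3))       # -> 9 (optimum 111111111)
print(concat_deceptive([0] * 9, 3))       # -> 6 (deceptive attractor)
print(concat_deceptive([1] * 7, 3, o=1))  # -> 9 (three blocks, overlap o=1)
\end{verbatim}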
\begin{figure} \centering \includegraphics[width=0.5\linewidth]{separable_problem_perfect_link.png} \caption{Linkage tree that represents a perfect linkage for the concatenation of three order-3 deceptive functions} \label{fig:perfectLinkageTree} \end{figure} \begin{figure} \centering \includegraphics[width=0.3\linewidth]{posDependencies.png} \caption{Block position dependencies for the problem constructed from the concatenation of three order-3 deceptive functions without overlap and with overlap $o=1$} \label{fig:posDependencies} \end{figure} Let us now consider the problem of overlapping deceptive functions, which is an example of a problem with overlapping blocks. The size of the overlap is defined by $o\in\{0,1,\ldots,k-1\}$, where $k$ is the length of all the considered blocks. The first block is defined on the first $k$ positions of the genotype. Each block except the first one is defined on the last $o$ positions of the preceding block and the next $k-o$ positions. For instance, the positions referring to the second block start at the $(k-o+1)^{th}$ position and finish at the $(2\cdot k-o)^{th}$ position. Examples of deceptive block concatenations with and without overlap are given in Figure \ref{fig:posDependencies}. \begin{figure}[h] \subfloat[Perfect linkage for the first block on positions $\{1,2,3\}$]{% \includegraphics[width=0.3\linewidth]{overlap_problem_perfect_link_1st_block}} \label{fig:linkageForOverlap_a}\hfill \subfloat[Perfect linkage for the second block on positions $\{3,4,5\}$]{% \includegraphics[width=0.3\linewidth]{overlap_problem_perfect_link_2nd_block}} \label{fig:linkageForOverlap_b}\hfill \subfloat[Perfect linkage for the last block on positions $\{5,6,7\}$]{% \includegraphics[width=0.3\linewidth]{overlap_problem_perfect_link_3rd_block}} \label{fig:linkageForOverlap_c}\hfill \caption{Possible linkage trees for three order-3 deceptive functions concatenation with overlap $o=1$} \label{fig:linkageForOverlap} \end{figure} In Figure \ref{fig:linkageForOverlap}, we present possible linkage trees for the concatenation of three order-3 deceptive functions with the $o=1$ overlap. Note that although all these linkage trees are correct, each of them marks only one of the blocks. If the blocks overlap, it is impossible to mark all blocks with a single tree. Therefore, maintaining and using several different linkages at the same time may be beneficial when solving problems with overlaps. This observation is confirmed by the results presented in \cite{3lo}. The proposed reasoning is valid only under the assumption that a single linkage tree (even if it is correct) may not be enough to solve a problem with overlaps. To the best of our knowledge, a study that analyses the need for linkage diversity has not been published yet, but the results presented in the literature seem to support the above claim \cite{3lo,P3Original,fP3,afP3,psDSMGA2}. The comparison between the original DSMGA-II and DSMGA-II with population-sizing (psDSMGA-II) shows that psDSMGA-II significantly outperforms its predecessor for overlapping problems \cite{dsmga2PopulationSizing}. A similar observation can be made for the comparison between LTGA and LT-GOMEA \cite{P3Original,afP3}. Population-sizing was proposed to eliminate the necessity of tuning and defining the population size parameter (see Section \ref{sec:relWork:dsmMethods:ltga}). However, as a side effect, population-sizing leads to the maintenance of more than one LTGA/DSMGA-II population.
All these populations maintain separate linkages and communicate with each other via the globally best individual. Thus, the globally best individual may be updated by OM in which the donor may be any individual from any LTGA/DSMGA-II population; depending on the population, a different linkage is used. The above reasoning leads to the conclusion that population-sizing introduces, as a side effect, a linkage diversity (limited, but still better than none), and this linkage diversity seems to improve the quality of results for problems with overlapping building blocks.\par \begin{figure} \centering \includegraphics[width=0.95\linewidth]{p3_example.png} \caption{An example of employing linkage diversity in P3} \label{fig:linkDiversP3Example} \end{figure} \subsection{The Significance of Linkage Quality and Diversity} \label{sec:relWork:linkageDiversityExample} As presented in \cite{linkageQuality}, the quality of the linkage may be the key to solving hard computational problems. To show the significance of using a diverse linkage, let us analyse the example shown in Figure \ref{fig:linkDiversP3Example}. We consider a problem built from three order-3 deceptive functions with overlap $o=1$, presented in the lower part of Figure \ref{fig:posDependencies}. The P3-like population is divided into three levels. The linkage information of the first, second, and third level corresponds to the linkage trees presented in Figure \ref{fig:linkageForOverlap} (a), (b), and (c), respectively. We assume that, among others, the pyramid contains the following individuals: \begin{itemize} \item individual $1110000$ (optimal for the first block) on the first level, \item individual $0011100$ (optimal for the second block) on the second level, \item individual $0000111$ (optimal for the third block) on the third level. \end{itemize} In the example pictured in Figure \ref{fig:linkDiversP3Example}, we analyse a single iteration of P3, in which we try to add to the pyramid a new individual with the genotype $0000000$. It is possible that, after OM with the individuals on the first level, the new individual receives the optimal value for the first block (its genotype then becomes $1110000$). If, during OM with the second and the third level, the new individual receives the optimal values for the second and the third block, respectively, then its final genotype is optimal (built only from $1$s). Note that obtaining an optimal individual is possible because P3 employs many different linkages that mark various parts of the genotype.\par Let us now consider the same individuals, but grouped in a single population (containing individuals $1110000$, $0011100$, and $0000111$). We assume that the linkage corresponds to the linkage tree presented in Figure \ref{fig:linkageForOverlap} (b). Note that in such a situation, it is impossible to obtain the optimal individual using OM. It is possible to insert the second block of $1$s from individual $0011100$ into individuals $1110000$ and $0000111$, obtaining individuals $1111100$ and $0011111$, respectively. However, using the linkage tree from Figure \ref{fig:linkageForOverlap} (b), it is impossible to pass the first and the third block of $1$s successfully without destroying the other blocks.
Note that it is impossible to separately insert $1$s for genes 1, 2, 6, and 7 -- the fitness value of $1011111$, $0111111$, $1111101$, and $0111110$ is $6$, which is lower than the fitness value of $0011111$ and $1111100$, equal to $7$. Thus, an operation converting a single $0$ into $1$ in individuals $0011111$ and $1111100$ will be rejected by OM.\par For problems encoded with a large number of genes and with a high amount of overlap, the dependencies between genes are significantly more complex. Thus, it is intuitive that, in such cases, it is preferable to use more diverse linkage information. LT-GOMEA uses a population-sizing mechanism; it therefore maintains many subpopulations and, consequently, many linkages. However, based on the research results presented in \cite{linkageQuality}, the number of levels in the pyramid is usually significantly higher than the number of subpopulations maintained by LT-GOMEA. Thus, when P3 and LT-GOMEA are applied to the same problem, we may expect that P3 will use a more diverse linkage than LT-GOMEA. Therefore, P3 is more suitable for solving overlapping problems.\par In this paper, as a competing method, we also consider NSGA-II, which employs a standard crossover operator and no linkage learning mechanisms. Let us consider the probability of successfully inserting the first block of three $1$s from individual $1110000$ into individuals $0011100$ and $0011111$. We consider the uniform crossover, which is independent of the gene order. The assumption that genetic operators should be independent of the gene order seems intuitive and is typical for modern evolutionary algorithms \cite{3lo}. The probability of successfully inserting the first block of $1$s into individual $0011100$ is $2^{-4}$ (we need to exchange genes 1 and 2, and we must not exchange genes 4 and 5). However, if we wish to insert the same block of $1$s into individual $0011111$, the probability is $2^{-6}$. Moreover, the probability of successfully exchanging a particular building block without destroying the other blocks decreases quickly with the increase of the problem size. Therefore, typical crossover operators that do not use linkage learning will not be effective when strong inter-gene dependencies exist. As shown above, the diverse linkage maintained by P3 may be highly useful when overlapping problems are being solved. However, the pyramid-like form of the P3 population has some drawbacks. For instance, when the number of levels is large (e.g., over 30), it may be hard to exchange some building blocks, even if the appropriate linkage and the appropriate blocks exist in the population. This issue has been detected for non-overlapping problems, and modifications of P3 that address it were proposed in \cite{fP3,afP3}. \subsection{Pareto Front Clusterisation} \label{sec:relWork:clusterisation} The goal of multi-objective optimisation is to obtain a Pareto front that covers, or is relatively close to, the optimal Pareto front. Thus, to measure the quality of a Pareto front, its proximity to the optimal Pareto front and its diversity are often assessed \cite{frontQualityMeasurement}. To obtain a diverse and high-quality Pareto front, evolutionary methods tend to preserve the overall population diversity, often emphasising the diversity in the objective space. This may be achieved by employing the crowding distance, as in NSGA-II \cite{nsga2}, or a density measure, as in SPEA2 \cite{spea2}.
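For completeness, the crowding distance mentioned above can be sketched as follows (our own illustration of the operator used by NSGA-II; boundary solutions receive an infinite distance so that they are always preserved).
\begin{verbatim}
# Sketch: crowding distance of one non-dominated front.
def crowding_distance(front):
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        order = sorted(range(n), key=lambda i: front[i][obj])
        lo, hi = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi == lo:
            continue
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (front[c][obj] - front[a][obj]) / (hi - lo)
    return dist

print(crowding_distance([(1, 5), (2, 3), (4, 2), (6, 1)]))
# -> [inf, 1.35, 1.3, inf]  (up to floating-point rounding)
\end{verbatim}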
Nevertheless, such operators may not be sufficient to obtain good-quality results for hard optimisation problems. Therefore, linkage learning techniques may be useful to recognise the problem's nature and to exchange appropriate solution parts during the optimisation process \cite{MoGomeaGecco,MoGomeaSwarm}. However, for different Pareto front parts, the problem's features may be significantly different. For instance, for the practical problem considered in this paper, the solution minimising the production makespan may be significantly different from the solution minimising production surpluses. Similar observations can be made for other practical problems \cite{MoGomeaSwarm}. Therefore, some of the evolutionary methods employed for multi-objective optimisation split the population into clusters, usually based on the objective space \cite{mohBOA,clusterizationBosman}. \subsection{Multi-objective Gene-pool Optimal Mixing Evolutionary Algorithm} \label{sec:relWork:moGomea} MO-GOMEA is a multi-objective version of LTGA (see Section \ref{sec:relWork:dsmMethods:ltga}). In addition to the concepts of LTGA, it employs other ideas, such as Pareto front clusterisation and an elitist archive. MO-GOMEA is a parameter-less method, which is an important feature for practical purposes.\par MO-GOMEA uses a so-called \textit{elitist archive}, i.e., a separate population in which non-dominated solutions are stored. Maintaining such a buffer is beneficial for multi-objective evolutionary algorithms because, due to the stochastic nature of the search, some non-dominated solutions might otherwise be discarded \cite{MoGomeaSwarm}. In some optimisation problems, the Pareto front may contain an infinite (or too large to store) number of solutions \cite{ElitistArchive2}. Thus, a newly found non-dominated solution is added to the elitist archive only if it dominates at least one solution from the archive or if it increases the diversity of the archive in the solution space \cite{ElitistArchive2}.\par At each iteration, MO-GOMEA clusters its population with the use of \textit{k}-leader-means clustering \cite{clusterizationBosman}. The population is divided into \textit{k} clusters containing \textit{c} solutions each. In MO-GOMEA, $c = \frac{2}{k}\cdot|\mathcal{P}_F|$; such a value of \textit{c} causes the clusters to overlap and avoids the situation in which some individuals do not belong to any cluster. Each cluster is a subpopulation that is later processed by LTGA. The only difference is that, since the problem is multi-objective, the source solution is considered improved if, after optimal mixing, it dominates its previous version or it can be added to the elitist archive. MO-GOMEA requires specifying two parameters: the population size and the number of clusters. To overcome this issue, it employs the so-called interleaved multi-start scheme (IMS) \cite{MoGomeaSwarm}, which is equivalent to the population-sizing scheme described in Section \ref{sec:relWork:dsmMethods:ltga}.\par \subsection{Multi-Objective Evolutionary Algorithm based on Decomposition} \label{sec:relWork:moead} The Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) is a multi-objective optimisation algorithm whose main principle is to decompose an $m$-objective problem into $N$ single-objective subproblems \cite{moead}.
The three decomposition techniques available in MOEA/D are the weighted sum approach, the Tchebycheff approach, and the penalty-based boundary intersection approach. For example, when the Tchebycheff approach is employed, the maximisation of an $m$-objective optimisation problem $F(x) = (f_1(x), \ldots, f_m(x))^T$ can be decomposed into the optimisation of $N$ subproblems, where the objective function of the $j$-th subproblem, $j=1,\ldots,N$, is: \begin{equation} \label{tchebycheff} g^{te} \left(x \vert \lambda^j, z^*\right) = \max_{1 \leq i \leq m} \left\{ \lambda_i^j | f_i(x) - z^*_i | \right\}, \end{equation} where $x$ indicates a given solution in the decision space, $\lambda^j=(\lambda_1^j,\ldots,\lambda_m^j)$ is a weight vector, $z^*$ indicates the reference point with $z_i^* = \max \{f_i(x) \mid x \in \mathrm{\Omega} \}$, and $|\cdot|$ denotes the absolute value. Several different methods for selecting the weight vectors $\lambda^j$ have been proposed, including the classic simplex-centroid and simplex-lattice designs, as well as more recent transformation methods and uniform decomposition measurement \cite{Ma2014c}. The single-objective subproblems (\ref{tchebycheff}) are solved simultaneously by evolutionary algorithms, and their solutions jointly form an approximation of the Pareto-optimal front of the original multi-objective problem $F(x)$. One of the main assumptions of MOEA/D is that the optimal solutions to neighbouring subproblems (in terms of the Euclidean distance between the corresponding weight vectors) are likely to be similar. Hence, a chromosome describing a solution to a certain single-objective subproblem can be crossed over only with the chromosomes of the neighbouring subproblems, where the neighbourhood size is a parameter. Each population comprises the best solutions found so far for all $N$ subproblems. In MOEA/D, the one-point crossover operator and the standard mutation operator are applied. Thanks to the weight vector set being defined a priori, the diversity of the population is maintained without computing crowding distances, which is one of the major costs in other multi-objective evolutionary algorithms, such as NSGA-II \cite{nsga2}. MOEA/D has attracted much attention in the field of evolutionary multi-objective optimisation, and several modifications have been proposed, as surveyed in \cite{Zhou2011}. Some of the suggested improvements, for example, the integration with opposition-based learning \cite{Ma2014b}, Baldwinian learning \cite{Ma2014}, or end-user preference incorporation \cite{Ma2016}, can be applied to the method proposed in this paper; the addition of these features is considered future work. \subsection{Multi-objective GAs for Manufacturing Scheduling} \label{sec:relWork:IPP} One of the pioneering studies on industrial production planning with a multi-objective GA was described in \cite{Ishibuchi1998}. In that paper, a set of non-dominated solutions was determined for a classic flowshop scheduling problem with three objectives, namely minimising the makespan, the maximum tardiness, and the total flowtime. A weighted sum of these three criteria was treated as the fitness value of each individual, but the weight values were specified randomly whenever a pair of parent solutions was selected. Consequently, each point of the solution space was generated using a different weight vector. A local search was then applied to further improve those solutions.
However, the considered problem was rather abstract, and the considered plant and task-set sizes were limited \cite{Ishibuchi1998}. Attempts have also been made to solve various real-world industrial scheduling problems with customised multi-objective GAs. In \cite{Li2009}, for example, a real-world manufacturing problem of steel tube production was described as an extension of the classic Job-Shop Scheduling Problem with compatible resources (also known as the Flexible Job-Shop Scheduling Problem). A multi-objective GA was applied with two objectives: the minimisation of resource idleness and of the order waiting time. That paper stated explicitly that the prior research related to Job-Shop Scheduling Problems was impractical, as it was based on oversimplified models and assumptions. Nevertheless, that model is still inappropriate for the batch production problem considered in this paper; in particular, it is unable to select recipes or minimise the commodity surplus. Another interesting real-world manufacturing problem, textile batch dyeing scheduling, was presented in \cite{Huynh2018}. In that problem, a batch comprises clothes of the same colour whose total weight does not exceed the capacity of the manufacturing resource. Again, that problem is different from the one analysed in this paper, as in the considered case the resources can produce only an exact weight of a given commodity. The total amount of manufactured commodities depends only on the recipe multisubset (i.e., a combination with repetitions) selected for manufacturing. Hence, a batching heuristic, as proposed in \cite{Huynh2018}, is not applicable; instead, a technique for optimising the recipe multisubset selection is needed. GAs were applied to such a problem in \cite{paintsOrig}, yet the optimisation was performed with typical multi-objective GAs (NSGA-II and MOEA/D); in particular, no linkage learning was performed in that approach. Note that in the research presented in this paper, we compare against both methods considered in \cite{paintsOrig}, and both of them are outperformed by linkage learning GAs (namely, MO-GOMEA and MO-P3) on the considered practical problem and on a wide set of benchmark problems. More examples of applying multi-objective GAs to manufacturing scheduling problems were surveyed in \cite{Gen2014}, including Job-Shop Scheduling Problems, Flexible Job-Shop Scheduling Problems, dispatching in flexible manufacturing systems, and integrated process planning and scheduling. However, none of the papers reviewed there dealt with recipe multisubsets or with minimising the surplus of the manufactured commodities. Similarly, none of the reviewed papers used linkage learning to improve the performance of the applied GAs, as is proposed in this paper. \section{Real-World Multi-Objective Bulk Commodity Production Problem Formulation} \label{sec:problemDef} In this section, the considered practical problem is first described and then formalised as an instance of the classic covering problem (CP), extended to multiple objectives. \subsection{Real-World Scenario Description} The considered scenario is based on a manufacturing process in which a certain amount of commodities is produced by combining supplies, ingredients or raw substances following a stored recipe. The main optimisation objective of this case study is to decrease the makespan of batch production.
Depending on the selected multisubset (i.e., a combination with repetitions) of recipes, the time needed to produce a commodity may vary significantly, which influences the percentage of manufacturing time that is truly productive, known as the Overall Equipment Effectiveness (OEE). In the considered multi-objective bulk commodity production problem (MOBCPP), each recipe produces a certain amount of commodity per batch. Consequently, to satisfy an order for a certain commodity, one or more recipes producing this commodity have to be selected and allocated to resources. However, the total commodity amount produced by any selection of recipes may differ from the ordered amount of that commodity. If a certain commodity cannot be produced in exactly the required amount, some commodity surplus is expected. As the surplus storage can be expensive and a larger surplus usually implies a higher cost of the raw substances used in production, additional optimisation objectives can be defined: not only the makespan but also the surplus of each produced commodity has to be minimised. This observation leads to the conclusion that multi-objective optimisation techniques, as described earlier in this paper, can be applied. In particular, this problem can be viewed as a variant of the classic covering problem, as shown in the following subsection. \subsection{Problem Formulation} The considered factory manufactures bulk commodities $c_j$, $j = 1, \ldots, m$. These commodities can be produced by executing each of the pre-defined manufacturing recipes $\gamma_i$, $i=1,\ldots,n$, $x_i$ times on the only resource $\pi$. The objective is to minimise the makespan \begin{equation} \sum_{i=1}^{n}t_ix_i, \end{equation} where $t_i$ denotes the pre-defined execution time of recipe $\gamma_i$, subject to \begin{equation}\label{eq:constraints} \sum_{i=1}^{n}\delta_{i,j}x_i \geq o_j, \quad j = 1, \ldots, m, \end{equation} where $o_j$ denotes the ordered amount of commodity $c_j$ and $\delta_{i,j}$ is the amount of commodity $c_j$ produced by recipe $\gamma_i$. The problem defined in this way is a typical example of CP and, as such, is an instance of integer linear programming (ILP) and belongs to the NP-hard class \cite{Garey1990}. The first extension of the above problem is the possibility of multiple resources $\pi_i$ in the factory, each being capable of executing recipe $\gamma_i$. Hence, the makespan minimisation objective can be rewritten as minimising \begin{equation} \max_{i \in \{1,\ldots,n\}} (t_ix_i) \end{equation} subject to the same constraints as provided in (\ref{eq:constraints}). The next modification is caused by the surplus storage cost in the factory and the cost of the raw substances needed to produce the commodities, which force the factory to minimise not only the makespan but also the surplus of each commodity, i.e., to produce as little of each commodity as possible while satisfying the ordered amounts. Hence, the following $m$ objectives need to be added to the optimisation problem: $\forall j \in \{1,\ldots,m\}$ minimise \begin{equation} \sum_{i=1}^{n}\delta_{i,j}x_i, \end{equation} subject to the same constraints (\ref{eq:constraints}) as above. The number of executions of each recipe $\gamma_i$ can be viewed as bounded by the ordered amounts of the commodities produced by this recipe, computed with the equation \begin{equation}\label{eq:ceil} \mu_i=\max_{j \in \{1,\ldots,m\}} \left\lceil \frac{o_j}{\delta_{i,j}}\right\rceil. \end{equation} This upper bound facilitates the binary encoding of the solutions for the GA, as described in the next section.
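To make the formulation concrete, the following minimal Python sketch (our own; the data below form a toy instance, not one of the industrial instances used later in the paper) evaluates the makespan and surplus objectives and the upper bound of equation~(\ref{eq:ceil}).
\begin{verbatim}
# Sketch: MOBCPP objectives for a toy instance with one commodity.
from math import ceil

t     = [2, 3]        # execution time t_i of recipe gamma_i
delta = [[3], [5]]    # delta[i][j]: amount of commodity c_j made by gamma_i
o     = [10]          # ordered amounts o_j

def makespan(x):      # each recipe gamma_i runs on its own resource pi_i
    return max(t[i] * x[i] for i in range(len(x)))

def production(x, j):
    return sum(delta[i][j] * x[i] for i in range(len(x)))

def feasible(x):
    return all(production(x, j) >= o[j] for j in range(len(o)))

mu = [max(ceil(o[j] / delta[i][j]) for j in range(len(o)))
      for i in range(len(delta))]
print(mu)             # -> [4, 2], the per-recipe job upper bounds

x = [0, 2]            # execute recipe gamma_2 twice
print(feasible(x), makespan(x), production(x, 0) - o[0])   # -> True 6 0
\end{verbatim}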
The considered problem is a discrete combinatorial problem. As pointed out in \cite{moSurvey}, when such problems are solved by conventional methods, the required computation time may increase exponentially. Therefore, the use of Multi-Objective Evolutionary Algorithms is justified. \section{Multi-Objective Parameter-less Population Pyramid for Solving MOBCPP} \label{sec:mop3} In this section, we describe the details, motivations and intuitions behind the proposed Multi-Objective Parameter-less Population Pyramid (MO-P3). In the first subsection, we discuss the solution encoding, while in the second one, we describe the proposed method. \subsection{Solution Encoding} \label{sec:mop3:encoding} As presented in the previous section, in the MOBCPP problem, the ordered amounts are known. However, the number of production tasks (\textit{jobs}) is not specified because each recipe may produce a different amount of a commodity. \begin{comment} In MOBCPP, the goal is to satisfy orders in the shortest possible time with the overproduction as low as possible. The commodities are produced by executing recipes; each recipe execution is termed as a \textit{job}. The produced amount of a commodity may differ for each recipe and one recipe may produce more than one commodity during its execution. We can compute how many jobs are necessary to satisfy the ordered amount of a commodity when a particular recipe is selected. \end{comment} Let us consider an example with a single commodity. The ordered amount is $o_1 = 10$ units. There are two available recipes, $\gamma_1$ and $\gamma_2$, that produce $\delta_{1,1}=3$ and $\delta_{2,1}=5$ units of commodity $c_1$, respectively. Thus, based on equation (\ref{eq:ceil}), to satisfy order $o_1$, it is sufficient to execute $\mu_1=4$ jobs using recipe $\gamma_1$ or $\mu_2=2$ jobs using recipe $\gamma_2$. Note that a solution to the problem instance considered in this example may be encoded as a 6-bit-long binary string, where the first four bits refer to the jobs executing recipe $\gamma_1$ and the last two bits refer to the jobs executing recipe $\gamma_2$.\par Based on the above example, we may state that a solution to MOBCPP can be encoded as a binary string in which each bit refers to a single job executing a particular recipe. In this paper, we order the bits in the following manner. First, we consider all bits that refer to the jobs producing the first commodity, then the bits that refer to the jobs producing the second commodity, and so on. The jobs producing more than one commodity are located at the position suitable for the produced commodity with the lowest index. Among the bits that refer to the production of a particular commodity, we first encode the bits referring to the jobs using the recipe with the lowest index in the recipe list, then the bits referring to the jobs using the recipe with the second-lowest index in the list, and so on. The minimum number of jobs needed to satisfy the ordered amount $o_j$ using recipe $\gamma_i$, which produces $\delta_{i,j}$ commodity units, is computed with equation (\ref{eq:ceil}). \begin{figure}[ht!] \centering% \includegraphics[width=0.9\textwidth]{encodingExample} \caption{Solution encoding example} \label{fig:encoding:ex} \end{figure} Let us consider the following example with two commodities, ordered in the amounts of $o_1 = 10$ and $o_2 = 8$ units.
The first commodity may be produced with the use of recipes $\gamma_1$ and $\gamma_2$ that produce $\delta_{1,1}=3$ and $\delta_{2,1}=5$ commodity units, respectively. The second commodity may be produced with the use of three recipes ($\gamma_3$, $\gamma_4$ and $\gamma_5$) that produce $\delta_{3,2}=5$, $\delta_{4,2}=2$ and $\delta_{5,2}=3$ units of this commodity. The encoding and a solution to this problem instance are presented in Fig. \ref{fig:encoding:ex}. First, the jobs referring to the order for the first commodity are encoded. Among them, the first four bits refer to the jobs using recipe $\gamma_1$ and the next two to the jobs using recipe $\gamma_2$. Analogously, for the order referring to the second commodity, the first two bits refer to recipe $\gamma_3$, the next four to recipe $\gamma_4$ and the last three to recipe $\gamma_5$.\par Note that Fig. \ref{fig:encoding:ex} presents a feasible solution, i.e., one that produces enough commodities. However, a solution encoded in the manner proposed above may be infeasible; moreover, it may needlessly exceed the order size. Therefore, to fix both issues, we propose the genotype repair algorithm presented in Pseudocode \ref{alg:imAndSdsnc}. Any genotype updated by the proposed algorithm is feasible and does not use more jobs of a particular recipe than necessary (i.e., all jobs that may be abandoned without violating the order constraint are removed). The proposed solution encoding is not unique, as two or more different genotypes can encode the same solution. The genotype repair algorithm works as follows. For each gene (corresponding to a particular recipe), the list of orders produced by the corresponding recipe is gathered. If, for at least one order, we need to increase the production amount, then we set the gene value to $1$. If, for all orders, we can resign from using the recipe without violating the order size, then we set the gene value to $0$. Otherwise, the gene value remains unmodified. \begin{algorithm} \caption{Genotype repair algorithm} \begin{algorithmic}[1] \Procedure{Repair}{Genotype} \State ResGenotype = Genotype \For{$Gene \gets 1$ \textbf{to} length($ResGenotype$)} \State $OrdersIndexes \gets$ GetOrdersForGene($ResGenotype, Gene$) \State $decision = -1$ \Comment{$-1$: job removable; $0$: keep unchanged; $1$: job required} \For{$i \gets 1$ \textbf{to} length($OrdersIndexes$)} \State $Order \gets OrdersIndexes[i]$ \State $OrderToDo \gets $ GetOrderSize($Order$) \State $OrderPlanProd \gets $ GetPlanProd($Order,ResGenotype$) \State $RecipeAmount \gets$ GetRecipeAmountForGene($Order,Gene$) \If {$OrderToDo > OrderPlanProd$} \State $decision = 1$ \Comment{planned production does not cover the order} \EndIf \If {$OrderToDo > OrderPlanProd - RecipeAmount$ \textbf{and} $decision = -1$} \State $decision = 0$ \Comment{this job cannot be dropped without violating the order} \EndIf \EndFor \If {$decision = 1$} \State $ResGenotype[Gene] = 1$ \EndIf \If {$decision = -1$} \State $ResGenotype[Gene] = 0$ \EndIf \EndFor \State \Return{ResGenotype} \EndProcedure \end{algorithmic} \label{alg:imAndSdsnc} \end{algorithm} \subsection{Multi-Objective Parameter-less Population Pyramid} \label{sec:mop3:mmethod} In this section, we present the motivations behind the proposed Multi-Objective Parameter-less Population Pyramid (MO-P3) and its description. The main reasons for using P3 as the research starting point for a method dedicated to solving the practical multi-objective problem considered in this paper are enumerated below.\par \textbf{Motivation 1}. P3 is effective in solving problems with overlapping building blocks (see Section \ref{sec:relWork:linkageDiversity}).
\subsection{Multi-Objective Parameter-less Population Pyramid}
\label{sec:mop3:mmethod}
In this section, we present the motivations behind the proposed Multi-Objective Parameter-less Population Pyramid (MO-P3) and its description. The main reasons for using P3 as a research starting point for a method dedicated to solving the practical multi-objective problem considered in this paper are enumerated below.\par \textbf{Motivation 1}. P3 is effective in solving problems with overlapping building blocks (see Section \ref{sec:relWork:linkageDiversity}). Such relations between building blocks are typical for real-life problems \cite{watsonHiffPPSNfirst,OverlappingSimon}. P3 has also been found more effective than LTGA and DSMGA-II when applied to a single-objective practical problem \cite{steinerTrees} (to the best of our knowledge, the results presented in \cite{steinerTrees} are the only comparison between P3, LTGA and DSMGA-II based on practical problem instances). Thus, it is reasonable to assume that a P3-based method dedicated to solving a practical multi-objective problem may be more effective than MO-GOMEA (which is LTGA-based).\par \textbf{Motivation 2}. As explained in Section \ref{sec:relWork:moGomea}, MO-GOMEA clusters its population with respect to the objective space. MO-GOMEA maintains separate linkages for each Pareto front cluster. This kind of multi-objective problem decomposition is the key feature of MO-GOMEA and one of the reasons for its high effectiveness. MO-GOMEA is based on the idea of LTGA, which maintains a single linkage at a time. On the other hand, P3 maintains many linkages (one per pyramid level). As explained in Section \ref{sec:relWork:dsmMethods:p3}, this linkage diversity is likely to be the reason for the high effectiveness of P3 in solving heavily overlapping problems. Maintaining many different linkages facilitates identifying different blocks (groups of gene indexes). The same feature is required when solving multi-objective problems. Thus, a P3-based method may be highly effective in solving multi-objective problems even without the objective-space decomposition performed by MO-GOMEA.\par Considering the above motivations, we propose a Multi-Objective Parameter-less Population Pyramid (MO-P3) that includes the following mechanisms. MO-P3 uses the elitist archive in the same way as MO-GOMEA (see Section \ref{sec:relWork:moGomea}). In the original P3, each individual that climbs up the pyramid optimises a single-objective problem. Therefore, to adapt P3 to multi-objective optimisation, at the start of each MO-P3 iteration a normalised weight vector is chosen that transforms the multi-objective problem into a single-objective one. Thus, during its climb, each individual optimises a single-objective problem. However, since the weights differ between iterations, each individual climbs towards a different part of the Pareto front.\par
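The exact scalarisation formula is not restated here; a weighted sum of the normalised objective values is one natural reading of this transformation, sketched below in Python (illustrative names, not an excerpt from the published source code):
\begin{verbatim}
import random

def random_weight_vector():
    # a normalised two-dimensional weight vector (the Random strategy)
    w = random.random()
    return (w, 1.0 - w)

def scalarised_fitness(objectives, weights):
    # weighted-sum scalarisation: the single-objective value compared
    # by the hill climber and mixing steps during one pyramid climb
    return sum(w * f for w, f in zip(weights, objectives))
\end{verbatim}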
\begin{algorithm} \caption{The general MO-P3 overview} \begin{algorithmic}[1] \Procedure{MO-P3}{} \State $levels \gets $ \{ CreateNewEmptyPop()\} \Comment{initialization} \While{$\neg stopCondition$} \State $weightVec \gets $ GetWeightVector(); \State $newInd \gets $ GetRandomIndividual(); \State $newInd \gets $ FIHC($newInd, weightVec$); \If {$newInd \notin levels$} \State InsertIndividualOnLevel($levels[0]$,$newInd$); \EndIf \For {\textbf{each }$level \in levels$} \State $IndImpr \gets $ ImproveIndWithLevel($level$, $newInd$, $weightVec$); \If {$IndImpr \neq newInd$ } \State $newInd \gets IndImpr$ \If {$newInd \notin levels$} \State $nextLevel \gets$ GetNextLevel($level$); \If {$nextLevel = empty$ } \State $nextLevel \gets $ \{ CreateNewEmptyPop()\} \State AddNewTopLevel($levels, nextLevel$); \EndIf \State InsertIndividualOnLevel($nextLevel$,$newInd$); \EndIf \EndIf \EndFor \EndWhile \EndProcedure \end{algorithmic} \label{alg:mop3Overview} \end{algorithm}
The general overview of MO-P3 is presented in Pseudocode \ref{alg:mop3Overview}. For each new individual, the weight vector is chosen to direct the search towards one of the parts of the Pareto front (lines 4 and 5). The new individual is improved by the First Improvement Hill Climber (FIHC) and added to the pyramid (lines 6-9). Then, the new individual climbs up the pyramid. Note that the chosen weight vector is used during the whole iteration. The new individual may cross with any other individual that has been added to the pyramid. However, any time the new individual is mixed with individuals on a given pyramid level (line 11), the linkage gathered for this level is employed, and the search is directed by the weight vector selected at the beginning of the iteration (line 4).\par The way of choosing the weight vector at the beginning of each MO-P3 iteration is crucial. It should push the search to focus on different parts of the Pareto front. However, if the search is too heavily biased towards some parts of the Pareto front, the linkage diversity offered by the pyramid-like population structure may not be sufficient. In consequence, the linkage gathered by MO-P3 may become useful for optimising only some parts of the Pareto front. If so, the quality of solutions referring to other Pareto front parts (those for which the linkage stored by MO-P3 is not useful) may be low. In this paper, we consider two different strategies of choosing the weight vector. In the first one, the weight vector is chosen randomly. MO-P3 using this technique will be denoted as MO-P3-Random. The second strategy of choosing the weight vector is presented in Pseudocode \ref{alg:smartWeightVector} and denoted as MO-P3-Smart.
\begin{algorithm} \caption{Smart strategy of choosing the two-dimensional weight vector} \begin{algorithmic}[1] \Procedure{GetWeightVector}{ElitistArchive} \State $EApoints \gets empty$ \For {\textbf{each} \textit{sol} in \textit{ElitistArchive}} \State $sum \gets sol.FirstObjNormalised + sol.SecondObjNormalised$ \State $newPoint.FirstWeight \gets sol.FirstObjNormalised / sum$ \State $newPoint.SecondWeight \gets sol.SecondObjNormalised / sum$ \State $EApoints \gets EApoints \cup \{newPoint\}$ \EndFor \State $EApoints \gets$ Sort($EApoints$) \State $interv_1 \gets$ GetRandomInt($1, $ SizeOf($EApoints$)$-1$) \State $interv_2 \gets$ GetRandomInt($1, $ SizeOf($EApoints$)$-1$) \State $length_1 \gets $ GetEuclDist($EApoints.At(interv_1), EApoints.At(interv_1 + 1)$) \State $length_2 \gets $ GetEuclDist($EApoints.At(interv_2), EApoints.At(interv_2 + 1)$) \If {$length_1 > length_2$} \State $intervChosen \gets interv_1$ \Else \State $intervChosen \gets interv_2$ \EndIf \State $IntervalStart \gets EApoints.At(intervChosen).FirstWeight$ \State $IntervalEnd \gets EApoints.At(intervChosen + 1).FirstWeight$ \State $weightVec.First \gets $ GetRandomReal($IntervalStart, IntervalEnd$) \State $weightVec.Second \gets 1 - weightVec.First$ \State \Return{weightVec} \EndProcedure \end{algorithmic} \label{alg:smartWeightVector} \end{algorithm}
In the \textit{smart} strategy of choosing the weight vector, each solution in the elitist archive is transformed into a weight vector. The resulting array of weight vectors is then sorted with respect to the weight corresponding to the first objective. After sorting, the distances between consecutive vectors may be interpreted as crowding distances \cite{nsga2}. Then, a tournament of size two is used to choose the interval with the higher value of the crowding distance. The weight vector is randomly chosen from the interval returned by the tournament.\par
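In Python, the \textit{smart} strategy may be sketched as follows. Since the two weights sum to one, the Euclidean distance between consecutive weight points reduces to the gap in the first coordinate; the names are illustrative and the sketch assumes at least two archive points:
\begin{verbatim}
import random

def smart_weight_vector(archive_points):
    # archive_points: normalised (f1, f2) objective pairs taken from
    # the elitist archive
    weights = []
    for f1, f2 in archive_points:
        s = f1 + f2
        weights.append((f1 / s, f2 / s))
    weights.sort()                        # sort by the first weight

    # size-2 tournament over the intervals between consecutive
    # vectors: prefer the wider gap, i.e., the more poorly
    # represented Pareto-front region
    gap = lambda i: weights[i + 1][0] - weights[i][0]
    i1 = random.randrange(len(weights) - 1)
    i2 = random.randrange(len(weights) - 1)
    chosen = i1 if gap(i1) > gap(i2) else i2

    w1 = random.uniform(weights[chosen][0], weights[chosen + 1][0])
    return (w1, 1.0 - w1)
\end{verbatim}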
The motivation behind the \textit{smart} strategy of choosing the weight vector is as follows. MO-P3-Smart is expected to push the search towards those regions of the Pareto front that are poorly represented. On the other hand, such a bias may cause the linkage to be useful for optimising only some parts of the Pareto front. In this case, the overall Pareto front quality may decrease significantly.\par In this section, we have proposed a multi-objective method dedicated to solving the MOBCPP problem. We have proposed both the problem-dedicated solution encoding and the new method. MO-P3 is based on the P3 method. The key modification is the choice of the weight vector that directs the optimisation of each new individual during its climb. We propose two strategies for choosing the weight vector: Random and Smart. In the subsequent sections, we show that our proposition is more effective than the competing methods in solving both the MOBCPP problem and the typical benchmarks employed in multi-objective discrete optimisation.
\section{Experiments on MOBCPP}
\label{sec:exp:paints}
In this section, we present the results obtained for the MOBCPP problem. The objective of the experiments was to compare the effectiveness of the proposed MO-P3 with other methods dedicated to multi-objective optimisation on the basis of the MOBCPP problem. The rest of this section is organised as follows. In the first subsection, we present the experiment setup. In Section \ref{sec:exp:paints:strategiesComparison}, the two considered MO-P3 versions are compared. In the third subsection, we analyse the ratio between the number of fitness function evaluations (FFE) and the computation time. Finally, in the last subsection, we compare MO-P3 with MO-GOMEA, MOEA/D and NSGA-II.
\subsection{Experiments Setup}
\label{sec:exp:paints:setup}
In this paper, we consider $27$ different MOBCPP problem instances. Each instance is related to a real-life configuration that occurs or may occur in practice. We consider two groups of test cases. In the first group (16 test cases), we consider one production hall with multiple machines. In these scenarios, each job can be executed on any machine (more information about these scenarios is given in Table \ref{tab:testCases}). In the second group of test cases (11 test cases), we consider scenarios with multiple production halls. In each hall, we can produce only a subgroup of paints (more information is given in Table \ref{tab:testCasesMulti}). Note that even for a relatively low number of resources, the number of encodable solutions is large. Since MOBCPP is NP-hard (see Section \ref{sec:problemDef}), the considered test cases may be considered difficult to solve.
Each experiment has been repeated 50 times.\par \begin{table} \caption{The parameters of the single-hall test cases} \centering% \label{tab:testCases} \begin{tabular}{ccc} \hline \textbf{Parameter} & \textbf{Min} & \textbf{Max} \\ \hline \textbf{Production halls} & \multicolumn{2}{c}{1} \\ \textbf{Resources} & 2 & 3 \\ \textbf{Commodities} & 6 & 20 \\ \textbf{Recipes} & 12 & 60 \\ \textbf{Genotype length} & 46 & 746 \\ \textbf{Encodable solutions} & $7.03 \cdot 10^{13}$ & $3.70 \cdot 10^{224}$\\ \hline \end{tabular} \end{table} \begin{table} \caption{The parameters of the multi-hall test cases} \centering% \label{tab:testCasesMulti} \begin{tabular}{ccc} \hline \textbf{Parameter} & \textbf{Min} & \textbf{Max} \\ \hline \textbf{Production halls} & 2 & 12 \\ \textbf{Resources} & 2 & 24 \\ \textbf{Commodities} & 6 & 72 \\ \textbf{Recipes} & 12 & 144 \\ \textbf{Genotype length} & 92 & 552 \\ \textbf{Encodable solutions} & $10^{27}$ & $10^{162}$\\ \hline \end{tabular} \end{table}
We consider two MO-P3 versions that employ two different strategies for weight vector initialisation (see Section \ref{sec:mop3:mmethod}). Depending on the employed strategy, they will be denoted as MO-P3-Random and MO-P3-Smart, respectively. We use three competing methods: the Non-dominated Sorting Genetic Algorithm II (NSGA-II) \cite{nsga2}, the Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D) \cite{moead} and the Multi-objective Gene-pool Optimal Mixing Evolutionary Algorithm (MO-GOMEA) \cite{MoGomeaSwarm}. NSGA-II has been selected as it is commonly employed as a baseline in the multi-objective optimisation domain. Similarly to \cite{MoGomeaGecco}, we use bit-flipping mutation with probability $1/l$, where $l$ is the genotype length, and a crossover probability of 0.9. To make NSGA-II independent of the gene order, we use uniform crossover. Finally, we consider population sizes of 25, 50, 100, 200 and 400 individuals, the same as in \cite{MoGomeaGecco,MoGomeaSwarm}. MOEA/D is frequently used as a research starting point and as a baseline \cite{MoGomeaSwarm,Zhou2011}. Similarly to NSGA-II, MOEA/D requires specification of the population size; we consider the same population sizes as in the NSGA-II case. MO-GOMEA is a state-of-the-art method in the multi-objective optimisation domain that has been reported to significantly outperform NSGA-II \cite{MoGomeaGecco,MoGomeaSwarm} and MOEA/D \cite{MoGomeaSwarm}. Note that MO-GOMEA and the proposed MO-P3 are parameter-less methods. Thus, no tuning is necessary. This feature makes these methods particularly useful for practical implementations. We have omitted the comparison with the Multi-objective Hierarchical Bayesian Optimization Algorithm \cite{mohBOA} because it was significantly outperformed by MO-GOMEA \cite{MoGomeaGecco,MoGomeaSwarm}.\par For the considered methods, we use the source codes published by their authors\footnote{\url{http://www.iitk.ac.in/kangal/codes.shtml} for NSGA-II, the source code used in \cite{MoGomeaSwarm} for MO-GOMEA, the source code given in \cite{P3Original} for P3 and \url{https://github.com/ZhenkunWang} for MOEA/D}. All sources have been merged at the problem-definition level into one project that is available at \url{https://github.com/przewooz/moP3}. Additionally, with the source code, all settings files and the detailed results of all experiments are provided.\par
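For reference, the two variation operators used in the NSGA-II baseline above can be sketched as follows (a minimal Python sketch of the standard operators, not an excerpt from the cited source code):
\begin{verbatim}
import random

def uniform_crossover(parent_a, parent_b):
    # each gene is taken from either parent with equal probability,
    # which makes the operator independent of the gene order
    return [a if random.random() < 0.5 else b
            for a, b in zip(parent_a, parent_b)]

def bit_flip_mutation(genotype):
    # each bit flips with probability 1/l, l being the genotype length
    p = 1.0 / len(genotype)
    return [1 - g if random.random() < p else g for g in genotype]
\end{verbatim}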
As the stop condition, we use the number of fitness function evaluations (FFE). This choice is motivated by the significant amount of computation time consumed by fitness value computation. The computation budget has been set to 25 million fitness evaluations.\par As a quality measure, we use the Inverted Generational Distance (IGD). IGD is defined as \begin{equation} \label{eq:igd} D_{\mathcal{P}_F\to \mathcal{S}}(\mathcal{S}) = \frac{1}{|\mathcal{P}_F|} \sum_{\substack{f^0 \in \mathcal{P}_F}} \min_{x \in \mathcal{S}}{\{d(f(x), f^0) \}}, \end{equation} where $\mathcal{P}_F$ is the Pareto-optimal front, $\mathcal{S}$ is the final front proposed by the optimiser and $d(\cdot,\cdot)$ is the Euclidean distance. IGD is the average distance from each point in $\mathcal{P}_F$ to the nearest point in $\mathcal{S}$. The lower the IGD value, the higher the quality of the proposed front $\mathcal{S}$. The optimal IGD value is $0$, which means that $\mathcal{S}$ covers $\mathcal{P}_F$. We can also compute the average distance from each point in $\mathcal{S}$ to the nearest point in $\mathcal{P}_F$. Such a measure is called the Generational Distance (GD) \cite{MoGomeaSwarm}. The advantage of IGD over GD is that the IGD value is optimal if and only if $\mathcal{S}$ covers the whole $\mathcal{P}_F$. Conversely, the GD value is optimal if $\mathcal{S}$ is a subset of $\mathcal{P}_F$ \cite{MoGomeaGecco}. Therefore, we favour IGD over GD.\par The optimal Pareto front must be known to compute IGD. Unfortunately, the considered test cases are based on practice and the optimal Pareto front is not known. To overcome this issue, we construct a pseudo-optimal Pareto front in the following way. For each test case, we consider all $\mathcal{S}$ fronts proposed by every method in every run. From this set of points, we choose only the non-dominated ones. The pseudo-optimal Pareto front obtained this way may not be optimal. Nevertheless, all considered $\mathcal{S}$ fronts contain only points that are part of the pseudo-optimal Pareto front or are dominated by its points. The same procedure of pseudo-optimal Pareto front creation has been applied in \cite{MoGomeaGecco}.
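Equation (\ref{eq:igd}) translates directly into a short Python routine (a minimal sketch for clarity; in the experiments, the measure is computed by the merged framework itself):
\begin{verbatim}
import math

def igd(pareto_front, proposed_front):
    # both arguments are lists of objective-space points (tuples);
    # returns the average distance from each reference point to its
    # nearest point in the proposed front (0 means full coverage)
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return sum(min(dist(f0, s) for s in proposed_front)
               for f0 in pareto_front) / len(pareto_front)
\end{verbatim}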
\subsection{The Comparison of MO-P3 with Random and Smart Strategies}
\label{sec:exp:paints:strategiesComparison}
To compare the performance of MO-P3-Random and MO-P3-Smart, we consider the median IGD value that describes the quality of the proposed Pareto front and the median FFE number necessary to obtain the final solution. To check the statistical significance of the differences, we use the Wilcoxon signed-rank test with a typical 5\% significance level. The summarised results are presented in Table \ref{tab:randomVsSmart}.\par \begin{table} \caption{The effectiveness comparison between MO-P3 employing Random and Smart strategies} \centering% \label{tab:randomVsSmart} \begin{tabular}{ccccc} \hline \textbf{Test-case type} & \textbf{Measure} & \textbf{Random} & \textbf{Equal} & \textbf{Smart} \\ \hline \multirow{2}{*}{\textbf{Single-hall}} & \textbf{IGD} & 7 & 9 & 0 \\ & \textbf{Median FFE until final solution} & 8 & 7 & 1 \\ \multirow{2}{*}{\textbf{Multi-hall}} & \textbf{IGD} & 0 & 11 & 0 \\ & \textbf{Median FFE until final solution} & 0 & 11 & 0 \\ \hline \end{tabular} \end{table}
For test cases with a single production hall, MO-P3 with the \textit{random} strategy has outperformed MO-P3-Smart for 7 test cases (out of 16) and has never been found inferior. Moreover, MO-P3-Random has also been faster in finding the solution in 50\% of the cases and slower for only one test case.\par The \textit{smart} strategy chooses the weight vector at each MO-P3 iteration in a way that should force the method to obtain a more diverse Pareto front. Thus, it may seem surprising that MO-P3-Smart has been outperformed by MO-P3-Random. However, as stressed in Section \ref{sec:mop3:mmethod}, the \textit{smart} strategy may bias the method towards some solution space regions. If this happens, the linkage gathered and utilised by MO-P3 may become useful only for improving some parts of the Pareto front. As a consequence, the overall Pareto front quality will drop. Note that the drop in or lack of linkage diversity may cause a method to become ineffective \cite{3lo}.\par For the test cases considering many production halls, both strategies report results of equal quality. The differences in the median FFE necessary for reaching the best result are not statistically significant. Such results may seem surprising when compared to those obtained for a single production hall. A reasonable explanation of this fact is as follows. When we consider multiple halls and each hall produces only a subgroup of paints, the key issue is to find the appropriate linkage that divides a genotype into subparts responsible for production in each of the halls. For DSM-using methods (like P3 or MO-GOMEA), such a task may be easy for some problem types \cite{linkageQuality} and hard for others. If, for these test cases, the key to finding a high-quality Pareto front is to find a high-quality linkage that divides a genotype into the appropriate parts, then the multi-level population structure of MO-P3 is the key to solving these problem instances. The other MO-P3 features that cause the domination of MO-P3-Random over MO-P3-Smart in the case of single-hall test cases do not seem to significantly influence the results for test cases considering many production halls.\par \begin{figure} \includegraphics[width=\linewidth]{paints_mop3_random_vs_smart.png} \caption{The $FFE_{Random}/FFE_{Smart}$ ratio of FFE spent on reaching the final result by MO-P3-Random and MO-P3-Smart for test cases with a single production hall. Horizontal axis: problem size} \label{fig:ffeRationSmartvSRandom} \end{figure}
In Figure \ref{fig:ffeRationSmartvSRandom}, we show the $FFE_{Random} / FFE_{Smart}$ ratio for test cases with a single production hall. The $FFE_{Strategy}$ value is the median FFE necessary for finding the final solution. The figure shows which method has been faster depending on the number of genes necessary to encode the problem solution. We omit such a comparison for the multi-hall test cases because there are no statistically significant differences in the FFE number necessary to find the final solution between MO-P3-Random and MO-P3-Smart. If the value of the ratio is below $1$, then MO-P3-Random is faster; if it is higher, then the situation is the opposite. Note that the longer the genotype, the faster MO-P3-Random is compared to MO-P3-Smart. This observation is consistent with the conclusion that the likely reason for the low effectiveness of MO-P3-Smart is the loss of linkage diversity. The lower the number of genes, the less important the quality of linkage \cite{linkageQuality}. In other words, the chance for successful crossing is inversely proportional to the genotype length (see Section \ref{sec:exp:benchmarks}).
That is why MO-P3-Smart is almost equally fast in finding the final solution for genotypes no longer than 200 genes (for two test cases, it is even faster than MO-P3-Random). However, for longer genotypes, MO-P3-Random is up to twenty times faster (with equal or better result quality). A reasonable explanation of this observation is that MO-P3-Random possesses linkage that is good enough to optimise any part of the Pareto front, while MO-P3-Smart does not, due to the bias caused by the \textit{smart} strategy. The situation in which precisely constructed algorithms (or their parts) are outperformed by their random-based competitors is rather rare and may be considered a phenomenon. However, similar cases may be found in the literature of the field \cite{linkageLearningIsBad2}.\par Since MO-P3-Random outperforms MO-P3-Smart, in the latter parts of this paper we consider only the MO-P3 version that employs the \textit{random} strategy. Thus, whenever we refer to MO-P3, we mean MO-P3-Random.
\subsection{FFE and Computation Time Ratio Comparison}
\label{sec:exp:paints:ffeTimeRatio}
Figure \ref{fig:plotFfeRatio} presents the median number of fitness function evaluations per second. All MOBCPP test cases have been considered. The values have been measured for short 10-minute runs performed on a Dell PowerEdge R430 server (Intel Xeon E5-2670 2.3 GHz, 64 GB RAM) running 64-bit Windows Server 2012. To assure the precision of computation time measurement, the number of computation processes has always been one fewer than the number of available CPU nodes. All experiments have been executed in a single thread without any other resource-consuming processes running. Such an experiment setup seems reliable for experiments using a time-based stop condition. A similar experiment setup may be found in \cite{muppets,3lo,muppetsActive}.\par As stated in Subsection \ref{sec:exp:paints:setup}, for the considered test problem, a significant amount of computation resources is spent on fitness value computation. Nevertheless, the $FFE/ComputationTime$ ratio comparison is important as it shows which method is faster. \begin{figure} \includegraphics[width=\linewidth]{plotFfeRatio.png} \caption{Median $FFE/ComputationTime$ ratio per method for all considered test cases} \label{fig:plotFfeRatio} \end{figure} As presented in Figure \ref{fig:plotFfeRatio}, MO-P3 is significantly faster than any other considered method. The statistical significance of the median differences has been confirmed by the Wilcoxon signed-rank test. For the null hypothesis that the median $FFE/ComputationTime$ ratio is equal to the median of any other method, the \textit{p}-value has not been higher than $10^{-56}$. NSGA-II and MOEA/D results depend on the population size. Such an observation is expected since the smaller the population size, the more likely the population is to get stuck. When the population is stuck, the method frequently requires fitness computation for the same genotypes. If this happens, the fitness may be recomputed or recovered from the caching buffer that stores the fitness values for some of the already considered genotypes. In the considered experiments, all methods have been joined at the problem definition level, and the fitness is not recomputed if an individual remains unchanged. Therefore, the $FFE/ComputationTime$ ratio for NSGA-II and MOEA/D with low population size values is low.
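The caching buffer mentioned above can be as simple as a memoisation wrapper around the fitness function (a sketch of the general idea, not the project's actual implementation):
\begin{verbatim}
def make_cached_fitness(fitness):
    # memoise fitness values so unchanged genotypes are not
    # recomputed; only genuine evaluations count towards the
    # FFE budget
    cache = {}
    evaluations = [0]
    def cached(genotype):
        key = tuple(genotype)
        if key not in cache:
            cache[key] = fitness(genotype)
            evaluations[0] += 1
        return cache[key]
    return cached, evaluations
\end{verbatim}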
Note that fitness caching may lead to the following consequences. If the method is stuck and tends to consider the same small subset of encodable genotypes, the $FFE/ComputationTime$ ratio may become so low that the computation resources spent on activities other than fitness computation become significant, and FFE is no longer a fair and reliable measure. A more detailed analysis of this phenomenon may be found in \cite{fitnessCaching,PEACh,PEAChWPlfl}. Due to the very low $FFE/ComputationTime$ ratio values obtained for NSGA-II and MOEA/D, the considered population size for these methods is 400 individuals hereafter.
\subsection{The Comparison between MO-P3 and the Competing Methods}
\label{sec:exp:paints:moP3moGomea}
\begin{figure} \includegraphics[width=\linewidth]{plotIGDPaintsMop3MogoMoeadNsga.png} \caption{The IGD-based comparison of considered methods for test cases using a single hall} \label{fig:paintsSingleHall} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{plotIGDmultiPaintsMop3MogoMoeadNsga.png} \caption{The IGD-based comparison of considered methods for multi-hall test cases} \label{fig:paintsMultiHall} \end{figure}
The IGD comparison between MO-P3 and the rest of the considered methods is presented in Figures \ref{fig:paintsSingleHall} and \ref{fig:paintsMultiHall}. MO-P3 outperforms NSGA-II and MOEA/D for both types of test cases. Such a result is expected since neither NSGA-II nor MOEA/D uses linkage information. Therefore, these methods are not capable of recognising the nature of the problem and cannot use this knowledge to improve the effectiveness of the optimisation process. Thus, in the latter part of this subsection, we compare MO-P3 with MO-GOMEA, which employs linkage learning, is parameter-less like MO-P3, and has recently been proposed to solve multi-objective problems effectively.\par \begin{figure} \includegraphics[width=\linewidth]{ffeRatioMOP3vsMoGomea.png} \caption{The ratio of FFE spent on reaching the final result by MO-P3 and MO-GOMEA for test cases with a single production hall} \label{fig:ffeRatioMOP3vsMOGOMEA} \end{figure} For all the test cases considering a single production hall, the median IGD values obtained by MO-P3 and MO-GOMEA have been equal. However, MO-P3 has been significantly faster in finding the solution in most of the runs. The Wilcoxon signed-rank test with the 5\% significance level has confirmed that these differences are statistically significant for 7 test cases. For another 7 test cases, the differences have not been significant, and MO-GOMEA has been faster for two test cases. Figure \ref{fig:ffeRatioMOP3vsMOGOMEA} shows the $FFE_{MOP3} / FFE_{MO-GOMEA}$ ratio (analogous to the ratio shown in Figure \ref{fig:ffeRationSmartvSRandom}). For most of the considered test cases, MO-P3 is about two times faster than MO-GOMEA. However, the FFE ratio presented in Figure \ref{fig:ffeRatioMOP3vsMOGOMEA} does not seem related to the genotype length. This observation is expected, as both compared methods gather linkage and try to keep it diverse. MO-GOMEA supports the different linkages necessary to optimise different Pareto front parts by clustering individuals, while MO-P3 uses a pyramid-like population structure.\par The situation is different for the multi-hall test cases. For most of the test cases, both methods report similar IGD, but MO-P3 outperforms MO-GOMEA for four test cases and is outperformed in only one of them.
The difference between single- and multi-hall test cases is that, to solve the latter successfully, a method has to precisely decompose the problem into parts referring to different production halls. MO-P3 maintains a larger number of linkage sets. Thus, it is more likely that some of these linkages will be precise enough to allow for successful gene exchange for some of the halls (see Section \ref{sec:relWork:linkageDiversityExample}). Therefore, for a relatively low number of halls, both methods report results of equal quality (for one of them, MO-GOMEA even outperforms MO-P3). However, when the number of halls increases, MO-P3 outperforms MO-GOMEA. These differences are statistically significant. Moreover, MO-P3 has been faster than MO-GOMEA in finding the final result (in terms of FFEs) for 7 of the 11 test cases. For the other test cases, the differences have not been statistically significant.\par For the MOBCPP problem, MO-P3 and MO-GOMEA yield results of similar quality. However, MO-P3 has outperformed MO-GOMEA for four test cases when many production halls are considered and has been outperformed by MO-GOMEA for only one test case. Since these results are statistically significant, we may state that MO-P3 outperforms MO-GOMEA for the MOBCPP problem. An important advantage of MO-P3 over MO-GOMEA is that MO-P3 requires fewer FFEs to reach the final result. This situation takes place for 11 out of the 27 considered test cases, and MO-GOMEA is better for only two test cases. If we also consider the fact that MO-P3 performs about two times more FFEs per second than MO-GOMEA (see Figure \ref{fig:plotFfeRatio}), we can state that MO-P3 is better at solving MOBCPP because it reports results of slightly higher quality and is significantly faster in reaching these results in terms of both FFEs and computation time.\par In this section, we have shown that the proposed MO-P3 outperforms NSGA-II and MOEA/D for the considered practical problem. It has also been demonstrated that it performs better and significantly faster than MO-GOMEA. The intuition that, in multi-objective optimisation, P3-based methods may be significantly faster than LTGA-based (GOMEA-based) methods in solving practical problem instances has been confirmed. To get a full view of MO-P3's effectiveness, we report a comparison based on popular theoretical benchmarks in the next section.
\section{Experiments on Benchmark Problems}
\label{sec:exp:benchmarks}
In this section, we compare MO-P3 with MO-GOMEA, MOEA/D and NSGA-II using various theoretical benchmarks. The objective of this comparison is to evaluate the overall MO-P3 effectiveness in solving assorted multi-objective problems. As a baseline, we use NSGA-II and MOEA/D. Similarly to the results presented in \cite{MoGomeaGecco,MoGomeaSwarm}, MOEA/D and NSGA-II were outperformed by MO-GOMEA. In this section, we show that, except for one benchmark problem (multi-objective knapsack), MO-P3 performs even better than MO-GOMEA.
\subsection{Benchmark Problems}
In this subsection, we present the considered benchmark problems. We use the same benchmarks as in \cite{MoGomeaGecco,MoGomeaSwarm}. For multi-objective weighted MAXCUT and multi-objective knapsack, we have adopted the implementations from \cite{MoGomeaSwarm}, and we have developed our own implementations of \textit{Zeromax-Onemax}, \textit{Trap5-Inverse Trap5} and \textit{Leading Ones Trailing Zeros} (LOTZ).
\subsubsection{Zeromax-Onemax}
The objectives for the \textit{Zeromax-Onemax} problem are defined as \begin{equation} \label{eq:probDef:oneMaxZeroMax} f_{Onemax}(u) = u; \quad f_{Zeromax}(u) = l-u, \end{equation} where $l$ is the genotype length and $u$ is the \textit{unitation} (see Section \ref{sec:relWork:linkageDiversity}). Thus, $f_{Onemax}$ maximises the number of ones in the genotype, while $f_{Zeromax}$ maximises the number of zeros. The optimal Pareto front $\mathcal{P}_F$ contains $l+1$ points. Many solutions can refer to a single point on $\mathcal{P}_F$, except for the two extreme points that correspond to the all-ones and all-zeros strings. Thus, it is potentially harder to find the extreme points of $\mathcal{P}_F$ than its remaining regions \cite{mohBOA}. Note that every encodable solution lies on $\mathcal{P}_F$.
\subsubsection{Trap5-Inverse Trap5}
The deceptive function of unitation~\cite{decFunc} has been introduced in Section \ref{sec:relWork:linkageDiversity} in formula (\ref{eq:dec3}). The inverse deceptive function of unitation is defined as \begin{equation} \label{eq:dec3inverse} \mathit{dec_{inverse}(u)}= \begin{cases} u - 1 & \text{if } u > 0\\ k & \text{if } u = 0 \end{cases}, \end{equation} where $u$ is the sum of the gene values (the so-called \textit{unitation}) and $k$ is the deceptive function size. In this paper, we use $k=5$, which is the same setting as used in \cite{MoGomeaGecco,MoGomeaSwarm}. The \textit{Trap5-Inverse Trap5} problem is a concatenation of order-5 deceptive blocks. The first objective refers to the trap-5 function and maximises the number of blocks built from ones, while the second objective maximises the number of blocks built from zeroes. The number of points in $\mathcal{P}_F$ is $l/5+1$. Similarly to the case of the \textit{Zeromax-Onemax} problem, there is only one solution that refers to each extreme $\mathcal{P}_F$ point, but there may be more solutions that refer to other parts of $\mathcal{P}_F$. Similarly to single-objective optimisation, it is difficult to solve a problem built from deceptive blocks if a method is unable to obtain linkage of appropriately high quality \cite{MoGomeaSwarm,ltga}.
\subsubsection{Leading Ones Trailing Zeros (LOTZ)}
\textit{Leading ones trailing zeroes} is a classic benchmark in multi-objective optimisation. The first objective is the \textit{Leading Ones} function, while the second is the \textit{Trailing Zeroes} function. Using 0-based gene indexing, they are defined as \begin{equation} \label{eq:lotz} \mathit{f_{LO}(x)}=\sum_{i=0}^{l-1} \prod_{j=0}^{i} x_j; \quad \mathit{f_{TZ}(x)}=\sum_{i=0}^{l-1} \prod_{j=i}^{l-1} (1-x_j). \end{equation} The leading ones function ($f_{LO}$) counts the number of consecutive ones at the beginning of a genotype, and the trailing zeroes function ($f_{TZ}$) counts the number of consecutive zeroes at its end. The number of points in the Pareto-optimal front is $l+1$. In the case of LOTZ, each point in $\mathcal{P}_F$ refers to exactly one genotype.
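The three benchmark objectives above are easy to express in Python. The trap definition below assumes the standard order-$k$ deceptive trap of formula (\ref{eq:dec3}) with $k=5$ (a minimal sketch):
\begin{verbatim}
def trap5(block):
    # order-5 deceptive trap: optimum at all ones,
    # deceptive slope towards all zeros
    u = sum(block)
    return 5 if u == 5 else 4 - u

def inverse_trap5(block):
    # mirrored trap: optimum at all zeros
    u = sum(block)
    return 5 if u == 0 else u - 1

def lotz(x):
    # returns the (Leading Ones, Trailing Zeroes) objective pair
    lo = 0
    for bit in x:
        if bit != 1:
            break
        lo += 1
    tz = 0
    for bit in reversed(x):
        if bit != 0:
            break
        tz += 1
    return lo, tz
\end{verbatim}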
\subsubsection{Multi-objective Weighted MAXCUT}
We employ the same multi-objective MAXCUT problem version as in \cite{MoGomeaSwarm,MoGomeaGecco} and use the same source code implementing the MAXCUT problem as in \cite{MoGomeaSwarm}. The problem instances have been generated using the approach proposed in \cite{maxcutOrigin}. The problem is defined as follows.\par Let $G=(V,E)$ be a weighted undirected graph, where $V=\{v_0, v_1,\ldots,v_{l-1}\}$ is a set of $l$ vertices and $E$ is the set of edges $(v_i,v_j)$. Each edge has an associated weight $w_{i,j}$. In the weighted MAXCUT problem, the objective is to find a maximum cut, i.e., a partition of the $l$ vertices into two disjoint subsets $A$ and $B$ ($B = V \setminus A$) such that the total weight of the edges $(v_i,v_j)$ with $v_i \in A$ and $v_j \in B$ is maximised. We solve each MAXCUT instance for two different weight sets, making the problem bi-objective.\par The solution is encoded as a string of $l$ bits, where each variable $x_i$ corresponds to one vertex. If $x_i=0$ then $v_i \in A$, and $v_i \in B$ otherwise. The same solution encoding has been used in \cite{MoGomeaSwarm,MoGomeaGecco}. In the experiments, we consider problem instances for $l \in \{12, 25, 50, 100\}$. The Pareto-optimal front $\mathcal{P}_F$ is necessary to compute IGD. For $l \in \{12, 25\}$, the optimal Pareto front has been obtained by the enumeration method. For $l=50$ and larger instances, we use the reference sets for $\mathcal{P}_F$ proposed in \cite{MoGomeaGecco}. The instances and the reference Pareto front sets are the same as in \cite{MoGomeaSwarm,MoGomeaGecco}.
\subsubsection{Multi-Objective Knapsack}
In the multi-objective knapsack problem, we consider $l$ items and $m$ knapsacks. Each knapsack $k$ has capacity $c_k$, and each item $i$ is characterised by a weight $w_{i,k}$ and a profit $p_{i,k}$ corresponding to each knapsack $k$. Each item $i$ may be either selected and placed in every knapsack or not selected at all. Thus, the problem solution may be encoded as an $l$-bit binary string. If the total weight of the selected items does not violate the capacity constraint of any knapsack, the solution is feasible. The objective is to maximise the profits of all knapsacks at the same time. The problem may be defined as \begin{equation} \label{eq:knapsack} \max_x (f_0(x), f_1(x),\ldots,f_{m-1}(x)), \end{equation} where $f_k(x) = \sum_{i=0}^{l-1}p_{i,k}x_i$ for $k=0,1,\ldots,m-1$, subject to $\sum_{i=0}^{l-1} w_{i,k}x_i~\leq~c_k$ for every $k$. We use the same problem implementation as in \cite{MoGomeaSwarm}. Therefore, we also use the same mechanism to repair a solution that violates the constraints \cite{knapsackRepair}. The repair algorithm removes selected items one by one until all the constraints are satisfied. The items with the lowest profit/weight ratio are removed first.\par We employ the same bi-objective knapsack instances as in \cite{MoGomeaSwarm,knapsackRepair}. The considered number of items is $l \in \{100, 250, 500, 750\}$. For the instance of 750 items, we use the pseudo-optimal $\mathcal{P}_F$ created by combining many Pareto fronts, as employed in \cite{MoGomeaSwarm}. For the remaining instances, we use the optimal Pareto fronts reported in \cite{knapsackRepair}.
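The repair mechanism can be sketched as follows. The exact aggregation of the per-knapsack ratios follows \cite{knapsackRepair}; the maximum over knapsacks used below is one common variant, and the code is illustrative rather than the cited implementation:
\begin{verbatim}
def repair_knapsack(selected, weights, profits, capacities):
    # selected:  0/1 decision per item
    # weights:   weights[i][k]; profits: profits[i][k]; capacities: c[k]
    # assumes strictly positive weights
    x = list(selected)
    m = len(capacities)
    load = [sum(weights[i][k] for i in range(len(x)) if x[i])
            for k in range(m)]
    while any(load[k] > capacities[k] for k in range(m)):
        # drop the item whose best profit/weight ratio across
        # knapsacks is the lowest
        worst = min((i for i in range(len(x)) if x[i]),
                    key=lambda i: max(profits[i][k] / weights[i][k]
                                      for k in range(m)))
        x[worst] = 0
        for k in range(m):
            load[k] -= weights[worst][k]
    return x
\end{verbatim}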
\subsection{Main Results for Benchmarks}
\begin{figure} \includegraphics[width=\linewidth]{plotScalabilitySimpleBench.png} \caption{Scalability of MO-P3 and the competing methods for benchmark problems} \label{fig:scalePerfsimpleBench} \end{figure}
In Figure \ref{fig:scalePerfsimpleBench}, we show the scalability of MO-P3, MO-GOMEA and MOEA/D on the \textit{Zeromax-Onemax}, \textit{Trap5-Inverse Trap5} and \textit{LOTZ} problems. We present the median FFE necessary to find the optimal Pareto front. NSGA-II is excluded from the comparison because it has not found the optimal Pareto front in most of the runs for any of these problems. For these benchmarks, MO-P3 outperforms MO-GOMEA. The reason for this superiority is as follows. For all three benchmarks, the linkage is the same for all Pareto front parts. For instance, for the \textit{Trap5-Inverse Trap5} problem, the corresponding bits always occupy the same blocks. This situation may favour MO-P3 because the population clusterisation employed by MO-GOMEA does not bring any benefit here. However, thanks to linkage learning, MO-GOMEA is the only competing method that can successfully solve the \textit{Trap5-Inverse Trap5} problem. MOEA/D and NSGA-II were unable to find the optimal Pareto front even for a 25-bit problem version. Note that for the problems based on deceptive trap functions, the DSM-using methods (e.g., MO-GOMEA and MO-P3) are capable of finding the perfect problem decomposition in the early stages of the run \cite{linkageQuality}. This capability allows them to solve the problem. \begin{figure} \includegraphics[width=\linewidth]{plotScalabilityMaxcut.png} \caption{Scalability of MO-P3 and the competing methods for the MAXCUT problem} \label{fig:scalabilityMaxcut} \end{figure} \begin{figure} \includegraphics[width=\linewidth]{plotScalabilityKnap.png} \caption{Scalability of MO-P3 and the competing methods for the knapsack problem} \label{fig:scalabilityKnap} \end{figure} In Figure \ref{fig:scalabilityMaxcut}, we present the scalability on the MAXCUT problems. MO-P3 performs better than all the remaining considered methods, including MO-GOMEA. It is the only method that can solve all the MAXCUT instances for $l \leq 50$. On the other hand, MO-GOMEA outperforms MO-P3 for the bi-objective knapsack problem (Figure \ref{fig:scalabilityKnap}). For this problem, MO-P3 is also slightly outperformed by MOEA/D. Note that the bi-objective knapsack problem is the only problem considered in this paper for which MO-P3 has been outperformed by any other method.
\section{Results Discussion}
In this paper, we have proposed a new multi-objective optimisation method based on the Parameter-less Population Pyramid. MO-P3 uses a weight vector to direct the optimisation of new solutions added to the pyramid. Two different strategies have been considered: \textit{random} and \textit{smart}. The comparison based on the practical MOBCPP problem has shown that MO-P3-Random performs better. Such an observation may be surprising but is readily explained: the \textit{random} strategy does not bias MO-P3 towards any part of the Pareto front. The lack of bias allows MO-P3 to maintain diverse linkage, which seems to be the key to solving the considered practical problem. The importance of linkage is also supported by the comparison between MO-P3 and NSGA-II. For all the considered test cases except the smallest one, MO-P3 has outperformed NSGA-II, as the IGD values for MO-P3 have been equal to $0$ in all the runs. This means that at least some of the points from the Pareto fronts proposed by NSGA-II have been dominated by points from the Pareto fronts proposed by MO-P3.\par The comparison with MO-GOMEA on the basis of the MOBCPP instances shows that both methods yield results of the same quality. However, for most of the test cases, MO-P3 requires significantly fewer FFEs to reach the best result. These results confirm the intuition presented in Section \ref{sec:mop3} that a P3-based method may be more suitable for solving practical problems than LTGA-based ones. The high potential of applying MO-P3 to practical problems is also confirmed by the analysis of the $FFE/ComputationTime$ ratio.
It is significantly higher for MO-P3 than for MO-GOMEA and NSGA-II, since MO-P3 does not spend computation resources on the population clusterisation performed by MO-GOMEA or on the crowding distance computation done by NSGA-II.\par The comparison made on the multi-objective benchmarks also indicates the high effectiveness and high efficiency of MO-P3. For the \textit{Zeromax-Onemax}, \textit{Trap5-Inverse Trap5} and \textit{LOTZ} problems, MO-P3 has found the optimal Pareto front in all runs, outperforming MO-GOMEA and NSGA-II. MO-GOMEA has been unable to find the optimal Pareto front for $l>25$. Moreover, for these three problems, MO-P3 has found the optimal Pareto fronts up to 100 times faster than MO-GOMEA. For the MAXCUT problem, MO-P3 has also outperformed the competing methods. The bi-objective knapsack problem has been the only one for which MO-GOMEA yielded Pareto fronts of better quality. A detailed analysis of MO-GOMEA and MO-P3 behaviour on the knapsack problem instances requires further investigation and is out of this paper's scope. However, it is possible that for the considered problem instances, MO-GOMEA can divide its population into clusters that allow it to obtain linkage of high quality (i.e., linkage that captures the true gene dependencies for the considered Pareto front parts).\par
\section{Conclusions}
In this paper, we have proposed MO-P3, a new method designed to solve a practical industrial multi-objective problem related to production planning. The proposed method is adjusted to the problem with an appropriate solution encoding-decoding algorithm. Since solutions are represented by binary strings, MO-P3 has been compared with NSGA-II, a typical baseline in multi-objective optimisation, and MO-GOMEA, a state-of-the-art method for solving multi-objective problems in discrete domains. Our proposition outperforms all competing methods for the set of considered practical problem instances.\par The experiments conducted on a set of typical benchmarks have confirmed both the high effectiveness and the high efficiency of MO-P3. The proposed method is not only capable of finding optimal or near-optimal Pareto fronts, but it is also the fastest in most of the cases. Additionally, it significantly outperforms both of the competing methods when the $FFE/ComputationTime$ ratio is taken into account. This positive feature is obtained thanks to the fact that MO-P3 does not require computationally costly operations like population clusterisation or crowding distance calculation.\par The main future work directions are as follows. The behaviour of MO-P3 on the knapsack problem must be analysed to investigate the reason why it performs worse than MO-GOMEA on this problem. Another future work objective is the development of MO-P3 itself to further improve its effectiveness and efficiency.
\section*{Acknowledgement}
We would like to thank Hoang Luong (Centrum Wiskunde \& Informatica) for giving us access to the original MO-GOMEA source code, which significantly sped up the preparation of the experiments and improved the paper quality.\par We also wish to express our gratitude to Marcin Komarnicki (Wroclaw University of Science and Technology) for helping us organise the source code and for improving the readability and composition of the paper.\par The authors acknowledge the support of the EU H2020 SAFIRE project (Ref. 723634).
\section{Introduction} The description of open quantum systems is often based on the master equation with a relaxation operator \cite{breuer,blum} \begin{equation} \label{eq_1_} \frac{\partial \hat{\rho}}{\partial t}+\frac{i}{\hbar}\left[\hat{H},\hat{\rho}\right]=\hat{R}\left(\hat{\rho}\right). \end{equation} There are many approximations to the form of the relaxation operator that make Eq.~(\ref{eq_1_}) more tractable. Phenomenological models are particularly popular because of their simplicity. A hybrid approach is also possible, which combines the microscopic description of relaxation of populations with phenomenological models of relaxation of quantum coherences; see, for example, \cite{wang2015,winzer2013}. In the energy basis the populations and quantum coherences correspond to the diagonal and off-diagonal elements of the density matrix, respectively. The choice of phenomenological models was discussed in a number of papers \cite{mermin1970,tokman2013,zhang2014,salmilehto2012,tokman2009,atwal2002}. Here we derive a universal and relatively simple expression for the relaxation operator of quantum coherences in an ensemble of quasiparticles in a solid, which is free from the inconsistencies typical of the known models. We use this operator to generalize the Lindhard formula \cite{haug2009quantum} and consider the case of a dissipative 2D system such as graphene as an example. The simplest phenomenological relaxation operator has the following form in the energy basis \cite{fain1969quantum}: \begin{equation} \label{eq_2_} R_{\alpha \beta}=-{\gamma}_{\alpha \beta}\left[{\rho}_{\alpha \beta}-{\delta}_{\alpha \beta}n^{\left(0\right)}_{\beta}\right], \end{equation} where $ {\gamma}_{\alpha \beta} $ is the relaxation rate for the transition $ \alpha \leftrightarrow \beta $ and $ n^{\left(0\right)}_{\beta} $ are the equilibrium populations. This model corresponds to the well-known replacement $ \omega \Rightarrow \omega +i\gamma $ when all constants are equal, $ {\gamma}_{\alpha \beta}= \gamma $. For the coherences such a relaxation operator is in agreement with the well-known Lindblad form \cite{fain1969quantum,lindblad1975,scully1997quantum}, whereas the diagonal elements, according to Eq.~\eqref{eq_2_}, relax to the equilibrium state. Unfortunately, this popular model has serious flaws, as described below.
\subsection{Violation of the continuity equation}
Using expression \eqref{eq_2_} in Eq.~(\ref{eq_1_}) can lead to a number of inconsistencies and mistakes. First of all, it leads to violation of the continuity equation connecting the charge density and the current density in a distributed system, as well as to an incorrect stationary perturbation limit \cite{mermin1970,tokman2013}. As a consequence of the violation of the continuity equation, for a bounded isolated system in an alternating external field the relation $ \mb{J}=\frac{\partial}{\partial t}\mb{P} $ between the dipole moment of the system $ \mb{P} $ and the average current $ \mb{J} $ is no longer valid \cite{tokman2013}. For processes $ \propto \mathrm{e}^{i\mb{\kappa}\mb{r}-i\omega t} $ the violation of the continuity equation leads to violation of the standard relation $ \omega Q_{\mb{\kappa}\omega}=\mb{\kappa}{\mb{j}}_{\mb{\kappa}\omega} $ between the Fourier harmonics of the charge density $ Q_{\mb{\kappa}\omega} $ and the current density $ {\mb{j}}_{\mb{\kappa}\omega} $ (this relation follows from substituting the plane-wave dependence into the continuity equation $ \partial Q/\partial t+\mb{\mathrm{\nabla}}\cdot \mb{j}=0 $).
As a result, if one calculates the conductivity $ \sigma \left(\omega ,\mb{\kappa}\right) $ and the polarizability $ \chi \left(\omega ,\mb{\kappa}\right) $ independently, the fundamental relationship between them, namely \begin{equation} \label{eq_3_} \sigma \left(\omega ,\mb{\kappa}\right)=-i\omega \chi \left(\omega ,\mb{\kappa}\right), \end{equation} turns out to be satisfied only with an accuracy of the order of $ \sim {\gamma}/{\omega} $. Therefore, one has to choose which of these two quantities is more ``correct'' or adequate for a particular situation and calculate it in the framework of the particular microscopic model of the material. In this case Eq.~\eqref{eq_3_} has to be considered as a definition that allows one to find the other quantity (see, for example, \cite{tokman2009}). This is hardly acceptable, since there is no universal rule for choosing which response function is ``correct'': $ \sigma $ or $ \chi $. When describing high-frequency or low-dissipation processes in which $ \omega \gg \gamma $, this inconsistency does not lead to significant errors. At the same time, in the region of relatively low frequencies the use of Eq.~\eqref{eq_2_} is highly problematic (see, for example, \cite{tokman2013}). Of particular interest in this regard is the description of Coulomb screening in a dissipative system. In this case, Mermin \cite{mermin1970} proposed a modified relaxation operator, which can be represented as follows: \begin{equation} \label{eq_4_} R_{\alpha \beta}=-\gamma \left[{\rho}_{\alpha \beta}-{\delta}_{\alpha \beta}n^{\left(0\right)}_{\beta}-{\eta}^{\left(\mathrm{st}\right)}_{\alpha \beta}\left(\delta \mu \right)\right], \end{equation} where $ {\eta}^{\left(\mathrm{st}\right)}_{\alpha \beta}\left(\delta \mu \right) $ is a quasistationary perturbation of the equilibrium density matrix, which is linear in the perturbation of the chemical potential $ \delta \mu $. For processes $ \propto \mathrm{e}^{i\mb{\kappa}\mb{r}-i\omega t} $, Mermin \cite{mermin1970} developed a procedure which allows one to find the solution for $ \delta \mu \left(\mb{\kappa},\omega \right) $ that preserves the continuity equation. The latter guarantees that, when the relaxation operator \eqref{eq_4_} is used, the relation \eqref{eq_3_} is satisfied with the conductivity and polarizability calculated independently. In Eq.~\eqref{eq_4_} the matrix $ {\eta}^{\left(\mathrm{st}\right)}_{\alpha \beta}\left(\delta \mu \right) $ does not depend on the relaxation constant, since it corresponds to the equilibrium state which the system approaches for a stationary perturbation (i.e., when $ \omega \to 0 $), regardless of the relaxation mechanism. This approach goes back to the paper by Landau \cite{landau1935} on the theory of the dispersion of the magnetic permeability in ferromagnetic media. It is important to note that the procedure proposed in \cite{mermin1970} is limited to the simplest case, when plane waves are considered as the basis eigenstates of the unperturbed Hamiltonian and the energy dispersion of the carriers is parabolic with respect to the quasimomentum $ \mb{k} $, i.e.~it corresponds to the electron current being proportional to the electron quasimomentum: $ \mb{j}=-\frac{e}{m}\hbar \mb{k} $, where $ -e $ is the electron charge and $ m $ is a fixed effective mass.
\subsection{The static limit}
The second important test of the relaxation operator model is the behavior of the solution to the master equation (\ref{eq_1_}) in the limit of a static perturbing potential.
In this case a \textit{closed system} should reach an equilibrium state in a given external potential. Such a state should not depend on the nature and rate of relaxation, and there is obviously no current in it. As a result, the following requirements appear reasonable: (\textbf{i}) for any $ \mb{\kappa} $ the quantity $ \lim\limits_{\omega \to 0}\mathrm{Re}[\chi \left(\omega ,\mb{\kappa}\right)]=\lim\limits_{\omega \to 0}{\omega}^{-1}\mathrm{Im}[\sigma \left(\omega ,\mb{\kappa}\right)] $ should not depend on the parameters and the model of relaxation, and (\textbf{ii}) $ \lim\limits_{\omega \to 0}\omega \mathrm{Im}[\chi \left(\omega ,\mb{\kappa}\right)]=\lim\limits_{\omega \to 0}\mathrm{Re}[\sigma \left(\omega ,\mb{\kappa}\right)]=0 $. However, such a solution cannot describe the situation in which a conductive sample with boundaries permeable to carriers is an element of a direct current circuit. In the latter case the limit $ \lim\limits_{ \substack{ \mb{\kappa}\to 0 \\ \omega \to 0}} \omega \mathrm{Im}\chi \left(\omega ,\mb{\kappa}\right)=\lim\limits_{ \substack{ \mb{\kappa}\to 0 \\ \omega \to 0}} \mathrm{Re}\sigma \left(\omega ,\mb{\kappa}\right)=\sigma \left(\gamma \right) $ is nonzero and should correspond to the ohmic conductivity in a uniform constant field, which depends on the relaxation constant $ \gamma $. There is no contradiction with the previous statement, since an element of an electric circuit is obviously not a closed system. A continuous transition from the equilibrium current-free solution to the Ohmic conductivity is possible only within the framework of a problem with boundary conditions. The current-free steady state can be obtained by expanding the initial equations in powers of a small parameter which is independent of the relaxation constant, $ \displaystyle \frac{e\delta \mathrm{\Phi}}{\langle W\rangle} $, where $ \delta \mathrm{\Phi} $ is the maximum potential drop and $ \langle W\rangle $ is a characteristic electron energy. The state with direct current satisfying Ohm's law can be obtained by expanding in powers of another small parameter, $ \displaystyle \frac{eE}{\gamma \langle p\rangle} $, which includes the relaxation constant $ \gamma $, the characteristic value of the electric field $ E $, and the characteristic momentum $ \langle p\rangle $ of the carriers in the conduction band. Note that under the condition $ \displaystyle \frac{eE}{\gamma \langle p\rangle}\ll 1 $ the initial equilibrium distribution of carriers in the conduction band is weakly perturbed for any ratio $ \displaystyle \frac{e\delta \mathrm{\Phi}}{\langle W\rangle} $. Let us discuss how the above properties relate to the functions $ \sigma \left(\omega ,\mb{\kappa}\right) $ and $ \chi \left(\omega ,\mb{\kappa}\right) $, obtained by solving the master equation with the relaxation operators defined by Eq.~\eqref{eq_2_} and Eq.~\eqref{eq_4_}, respectively, for conduction electrons in a metal or in a bulk semiconductor far from any boundaries. For the standard model given by Eq.~\eqref{eq_2_}, $ \lim\limits_{\mb{\kappa}\to 0}\sigma \left(\omega ,\mb{\kappa}\right) $ corresponds to the complex Drude conductivity in a uniform field. In the limit $ \lim\limits_{ \substack{ \mb{\kappa}\to 0 \\ \omega \to 0}} \mathrm{Re}[\sigma \left(\omega ,\mb{\kappa}\right)]=\sigma \left(\gamma \right) $, one obtains the standard Ohmic conductivity in a \textit{uniform and time-independent} field.
At the same time, for finite values of $ \mb{\kappa} $ and $ \omega \to 0 $ the expression for the conductivity is incorrect: there is no solution corresponding to the equilibrium current-free state, since $ \lim\limits_{\omega \to 0}\mathrm{Re}[\sigma \left(\omega ,\mb{\kappa}\right)] \neq 0 $. As noted above, for the relaxation model of Eq.~\eqref{eq_2_} the relationship \eqref{eq_3_} is violated. Therefore, independent calculations of the conductivity and susceptibility $ \chi \left(\omega ,\mb{\kappa}\right) $ lead to different results, but both of them are incorrect: in the limit $ \omega \to 0 $ and for finite values of $ \mb{\kappa} $ the expression for $ \lim\limits_{\omega \to 0}\mathrm{Re}[\chi \left(\omega ,\mb{\kappa}\right)] $ depends on the relaxation parameter $ \gamma $. The model based on Eq.~\eqref{eq_4_} preserves the continuity equation, and the limit $ \omega \to 0 $ for any finite $ \mb{\kappa} $ corresponds to an equilibrium current-free state in a closed system \cite{mermin1970}. In this case, however, it follows from the relations in \cite{mermin1970} that the limit $ \mb{\kappa}\to 0 $ leads to $ \lim\limits_{\mb{\kappa}\to 0}\omega \mathrm{Im}[\chi \left(\omega ,\mb{\kappa}\right)]=0 $, both for finite values of $ \omega $ and after taking the subsequent limit $ \omega \to 0 $. Thus, the model proposed in \cite{mermin1970} provides a more adequate description of the screening effects in comparison with the standard approximation \eqref{eq_2_}, but does not describe Ohmic conductivity in a uniform field. In addition, it cannot include interband transitions and is limited to the quadratic energy dispersion of quasiparticles. Besides, the model is rather complicated.
\subsection{The relaxation operator in a real basis}
For a real Hamiltonian in the absence of a magnetic field \cite{landau2013quantum} one can always choose the basis eigenfunctions to be real. In this case a much simpler relaxation operator was proposed in \cite{tokman2013}: \begin{equation} \label{eq_5_} R_{\alpha \beta}=-{\gamma}_{\alpha \beta}\left({\rho}_{\alpha \beta}-{\rho}_{\beta \alpha}\right). \end{equation} Equation \eqref{eq_5_} does not determine the relaxation of the diagonal elements of the density matrix; however, if necessary, the relaxation operator for the populations can be added separately: see, for example, \cite{fain1969quantum,tokman2013,zhang2014}. Note that the diagonal elements (populations) are usually not perturbed in the linear approximation with respect to an external field. Equation \eqref{eq_5_} was obtained in \cite{tokman2013} from first principles for a system which possesses an electric dipole-allowed transition interacting with a radiation reservoir. In this case, the interaction of the quantum system with the reservoir is described beyond the rotating wave approximation (RWA), i.e., the interaction Hamiltonian includes off-resonant counter-rotating terms. The master equation beyond the RWA was studied also in \cite{fleming2010, stokes2012, munro1996}. The relaxation operator \eqref{eq_5_} is not of the Lindblad form \cite{lindblad1975}. Nevertheless, one can show \cite{tokman2013,munro1996} that at times exceeding the averaging time corresponding to the Markov approximation, the use of the relaxation operator \eqref{eq_5_} does not violate the condition of positive definiteness of the density matrix. In the steady-state case, the solution of Eq.~\eqref{eq_1_} with the relaxation operator \eqref{eq_5_} corresponds to an equilibrium state in a static external field, and this equilibrium state does not depend on the relaxation constants. Since Eq.~\eqref{eq_5_}, in contrast to Eq.~\eqref{eq_4_}, does not explicitly depend on the external field (in Eq.~\eqref{eq_4_} the external field defines the value of $ \delta \mu $), this result may seem paradoxical. The point, however, is that in the real basis a stationary perturbation corresponds to real values of $ {\rho}_{\alpha \beta} $, so that the relaxation operator \eqref{eq_5_} is zero.
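To make the last point explicit, recall the first-order stationary response (a textbook perturbation-theory step, added here for clarity): for a static perturbation $ \hat{V} $ the steady-state limit of Eq.~\eqref{eq_1_} without relaxation gives
\begin{equation*}
\left(E_{\alpha}-E_{\beta}\right){\rho}^{(1)}_{\alpha \beta}+V_{\alpha \beta}\left(n^{\left(0\right)}_{\beta}-n^{\left(0\right)}_{\alpha}\right)=0, \qquad {\rho}^{(1)}_{\alpha \beta}=\frac{V_{\alpha \beta}\left(n^{\left(0\right)}_{\alpha}-n^{\left(0\right)}_{\beta}\right)}{E_{\alpha}-E_{\beta}},
\end{equation*}
where $ E_{\alpha} $ are the unperturbed energies. In a real basis the matrix elements $ V_{\alpha \beta}=V_{\beta \alpha} $ are real, hence $ {\rho}^{(1)}_{\alpha \beta}={\rho}^{(1)}_{\beta \alpha} $ and the operator \eqref{eq_5_} indeed vanishes.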
In the steady-state case, the solution of Eq.~\eqref{eq_1_} with the relaxation operator \eqref{eq_5_} corresponds to an equilibrium state in a static external field, and this equilibrium state does not depend on the relaxation constants. Since Eq.~\eqref{eq_5_}, in contrast to Eq.~\eqref{eq_4_}, does not explicitly depend on the external field (in Eq.~\eqref{eq_4_} the external field defines the value $ \delta \mu $), this result seems paradoxical. The point, however, is that in the real basis the stationary perturbation corresponds to real values of $ {\rho}_{\alpha \beta} $, so that the relaxation operator \eqref{eq_5_} vanishes. For time-varying fields, the use of the relaxation operator \eqref{eq_5_} ensures that the relation $ \mb{J}=\frac{\partial}{\partial t}\mb{P} $ is satisfied for a bounded isolated system \cite{tokman2013}. For the simplest systems (a harmonic oscillator and free particles) placed in a dissipative reservoir and, simultaneously, in a magnetic field, a generalization of the model based on Eq.~\eqref{eq_5_} was developed in \cite{tokman2013,zhang2014}. These works proposed an approach based on the transition from the energy representation to the coordinate representation, taking into account the requirement of gauge invariance of the observables in an external field with a nonzero vector potential. This condition imposes certain restrictions on the relaxation operator \cite{tokman2013,tokman2009}. As we see, there is a strong motivation to derive a phenomenological relaxation operator that has an antisymmetric structure similar to that of Eq.~\eqref{eq_5_} and remains applicable in the most general case. In this paper we obtain such a relaxation operator which \noindent (\textbf{i}) is valid for charged carriers in solids with an arbitrary energy dispersion, in particular the Dirac spectrum; \noindent (\textbf{ii}) preserves the continuity equation while including both intraband and interband transitions; \noindent (\textbf{iii}) allows one to obtain both the stationary ``current-free'' regime in equilibrium and the Ohmic direct-current regime in the limit of a uniform static field; \noindent (\textbf{iv}) is significantly simpler than the model proposed in \cite{mermin1970}. In the simplest (intraband) version, the analog of the expression Eq.~\eqref{eq_5_} for $ |\mb{k}\rangle$-states has the form \begin{equation} \label{eq_6_} R_{\mb{k}\mb{q}}=-{\gamma}_{\mb{kq}}\left({\rho}_{\mb{kq}}-{\rho}_{-\mb{q}-\mb{k}}\right). \end{equation} This expression has a simple interpretation. Suppose that the transition between the states $ |\mb{k}\rangle \leftrightarrow |\mb{q}\rangle $ is accompanied by some excitation of the ``reservoir''. This excitation must have quasimomentum $ \mb{p} $, such that $ \hbar \mb{k} = \hbar\mb{q}+\mb{p} $. However, in this case, the conservation of momentum is also valid for the transition $ |{-}\mb{q}\rangle \leftrightarrow |{-}\mb{k}\rangle $, since the relation $ -\hbar \mb{q} =- \hbar \mb{k}+\mb{p} $ also holds. Thus, the reservoir modes inevitably ``couple'' the transitions $ |\mb{k}\rangle \leftrightarrow |\mb{q}\rangle $ and $ |{-}\mb{q}\rangle \leftrightarrow |{-}\mb{k}\rangle $, which is reflected in the expression \eqref{eq_6_}. The structure of the paper is as follows. Section \ref{sec_relaxation} establishes general requirements for the structure of a relaxation operator, following from the conservation of the number of particles.
An operator for an ensemble of quasiparticles in a solid is constructed, which ensures that the continuity equation is satisfied exactly or after averaging over the lattice period; see Eq.~\eqref{eq_25_} below. It turns out that the derivation of such an operator is significantly simplified for systems that are symmetric with respect to time reversal (TRS systems). Here we mean the corresponding property of the isolated system without taking into account its relatively weak interaction with a dissipative reservoir. In Section \ref{sec_lindhard}, the Lindhard formula is obtained for a dissipative 2D system using the correct (in the above sense) relaxation operator. In Section \ref{sec_comparison} we compare our results applied to graphene with the results of using the standard model given by Eq.~\eqref{eq_2_}. In Appendix \ref{sec_appendix_TRS} a property of TRS systems is established which is important for the derivation of the relaxation operator. Appendices \ref{appendix_graphene} and \ref{appendix_WSM} describe the procedure for deriving a phenomenological relaxation operator in graphene and in Weyl semimetals with broken time-reversal symmetry. In Appendix \ref{sec_appendix_homogeneous} the properties of the linear susceptibility in the limit of a uniform high-frequency field are considered. Appendix E contains the derivation of the susceptibility of monolayer graphene in the limit of $ \mb{\kappa} = 0 $. \section{Relaxation operator preserving the continuity equation} \label{sec_relaxation} \subsection{Basic relationships} First, we write the Hamiltonian of a nonrelativistic electron in a periodic potential in a fairly general form, \[\hat{H}={\hat{H}}_0\left(\mb{r},\hat{\mb{p}}, \hat{\mb{s}}\right),\] where the dependence of the Hamiltonian on the coordinate $ \mb{r} $ is periodic, $ \hat{\mb{p}} = -i\hbar \mb{\mathrm{\nabla}} $ is the momentum operator, $ \hat{\mb{s}}=\frac{1}{2}\left({\mb{x}}_0{\hat{\sigma}}_x+{\mb{y}}_0{\hat{\sigma}}_y+{\mb{z}}_0{\hat{\sigma}}_z\right) $ is the spin operator, and $ {\hat{\sigma}}_{x,y,z} $ are the Pauli matrices. The dependence on the spin operator may be due, e.g., to spin-orbit coupling. If there are perturbing fields defined by the electrodynamic potentials $ \varphi \left(\mb{r},t\right) $ and $ \mb{A}\left(\mb{r},t\right) $, the operator $ \hat{H} $ can be obtained from the unperturbed Hamiltonian $ {\hat{H}}_0\left(\hat{\mb{p}}\right) $ as \cite{landau2013quantum} \begin{equation} \label{eq_7_} \hat{H}={\hat{H}}_0\left(\hat{\mb{p}}\Rightarrow \hat{\mb{p}}+\frac{e}{c}\mb{A}\right)-e\varphi . \end{equation} For simplicity, we do not consider here the spin-dependent components of the perturbation operator, such as the energy of a spin magnetic moment in a magnetic field, $ {\hat{V}}_B=-{\mu}_B\left[\hat{\mb{s}}\cdot \left(\mb{\mathrm{\nabla}} \times \mb{A}\right)\right] $, where $ {\mu}_B $ is the Bohr magneton, or the spin-orbit coupling term in the perturbing field, $ {\hat{V}}_{s-o}=-\frac{e\hbar}{2m^2c^2}\left[\left(\frac{1}{c}\dot{\mb{A}}+\mb{\mathrm{\nabla}}\varphi \right)\times \hat{\mb{p}}\right]\cdot \hat{\mb{s}} $ \cite{landau2013quantum,gantmakher1987carrier,berestetskii1982quantum}.
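As the simplest illustration of Eq.~\eqref{eq_7_} (a standard textbook step, written out here for later reference), for $ {\hat{H}}_0=-eU\left(\mb{r}\right)+\frac{1}{2m}{\hat{\mb{p}}}^2 $ the substitution gives
\begin{equation*}
\hat{H}=-eU\left(\mb{r}\right)+\frac{1}{2m}{\left(\hat{\mb{p}}+\frac{e}{c}\mb{A}\right)}^2-e\varphi ,
\end{equation*}
which leads to the velocity operator $ \hat{\mb{v}}=\frac{i}{\hbar}\left[\hat{H},\mb{r}\right]=\frac{1}{m}\left(\hat{\mb{p}}+\frac{e}{c}\mb{A}\right) $ used below.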
Consider the energy basis given by the stationary solution of the Schr\"{o}dinger equation: \[{\hat{H}}_0{\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)={E_{\alpha}\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right),\] where $ {\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right) $ are eigenfunctions of the unperturbed Hamiltonian $ {\hat{H}}_0 $, $ \alpha $ is an index or a set of indices indicating a stationary state of the Hamiltonian taking into account spin orientation, and the spin coordinate $ s $ takes the values 1 and 2 denoting the spinor components $ \left( \begin{array}{c} {\mathrm{\Psi}}_{\alpha}\left(\mb{r},1\right) \\ {\mathrm{\Psi}}_{\alpha}\left(\mb{r},2\right) \end{array} \right) $, whose squared moduli give the probabilities for the spin projection onto the quantization axis to equal $ \frac{1}{2} $ and $ -\frac{1}{2} $. The observed current density $ \mb{j}\left(\mb{r}\right) $ and free carrier density $ n\left(\mb{r}\right) $ can be expressed through the elements of the density matrix in the given basis $ {\rho}_{\alpha \beta} $ as follows (see, for example, \cite{wang2016,kutayiah2018}), \begin{equation} \label{eq_8_} n\left(\mb{r}\right)=\sum_{\alpha \beta}{\sum^2_{s=1}{\left[{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)\right]}{\rho}_{\alpha \beta}}, \end{equation} \begin{equation} \label{eq_9_} \mb{j}\left(\mb{r}\right)=-\frac{e}{2}\sum_{\alpha \beta}{\sum^2_{s=1}{\left\{{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right)\left[\hat{\mb{v}}{\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)\right]+\left[{\hat{\mb{v}}}^\ast{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right)\right]{\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)\right\}}{\rho}_{\alpha \beta}}, \end{equation} where $ \hat{\mb{v}}=\frac{i}{\hbar}\left[\hat{H},\mb{r}\right]=\frac{1}{m}\left(\hat{\mb{p}}+\frac{e}{c}\mb{A}\right) $ is the velocity operator. Since our goal in this section is to include the limitations imposed by the conservation of the particle number, we neglected the vortex (divergence-free) spin current in Eq.~\eqref{eq_9_}, \[{\mb{j}}_S\left(\mb{r}\right)={\mu}_Bc\mb{\mathrm{\nabla}} \times \sum_{\alpha \beta}{\sum^2_{s=1}{\left[{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right)\hat{\mb{s}}{\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)\right]{\rho}_{\alpha \beta}}},\] because, being a curl, it does not affect the evolution of the carrier density. For a spin-independent Hamiltonian the spatial coordinates and spin separate. The simplest example is \[{\hat{H}}_0=-eU\left(\mb{r}\right)+\frac{1}{2m}{\hat{\mb{p}}}^2,\] where $ U\left(\mb{r}\right) $ is a periodic lattice potential. In this case, the summation in Eqs.~\eqref{eq_8_}, \eqref{eq_9_} over two equal spin states gives only the degeneracy factor $ g=2 $ in the final expressions: \begin{equation} \label{eq_10_} n\left(\mb{r}\right)=g\sum_{\alpha \beta}{\left[{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r}\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right)\right]}{\rho}_{\alpha \beta}, \end{equation} \begin{equation} \label{eq_11_} \mb{j}\left(\mb{r}\right) =- g\frac{e}{2}\sum_{\alpha \beta}{\left\{{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r}\right)\left[\hat{\mb{v}}{\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right)\right]+\left[{\hat{\mb{v}}}^\ast{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r}\right)\right]{\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right)\right\}{\rho}_{\alpha \beta}}, \end{equation} where the functions $ {\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right) $ are scalar (not spinors). For brevity, we will refer to such particles as ``spinless''.
For Fourier harmonics $ n_{\mb{\kappa}}=\frac{1}{{\left(2\pi \right)}^{\varsigma}}\int{n\left(\mb{r}\right){\mathrm{e}}^{-i\mb{\kappa}\mb{r}}d^{\varsigma}r} $ and $ {\mb{j}}_{\mb{\kappa}}=\frac{1}{{\left(2\pi \right)}^{\varsigma}}\int{\mb{j}\left(\mb{r}\right){\mathrm{e}}^{-i\mb{\kappa}\mb{r}}d^{\varsigma}r} $ it follows from Eqs.~\eqref{eq_10_} and \eqref{eq_11_} that \begin{equation} \label{eq_12_} n_{\mb{\kappa}}=\frac{g}{{\left(2\pi \right)}^{\varsigma}}\sum_{\alpha \beta}{{\left({\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}}{\rho}_{\alpha \beta}, {\mb{j}}_{\mb{\kappa}}=-\frac{g}{{\left(2\pi \right)}^{\varsigma}}\frac{e}{2}\sum_{\alpha \beta}{{\left({\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\cdot \hat{\mb{v}}+\hat{\mb{v}}{\cdot \mathrm{e}}^{-i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}{\rho}_{\alpha \beta}}, \end{equation} where $ \varsigma $ is the system dimension. The evolution of the density matrix is described by the von Neumann equation \begin{equation} \label{eq_13_} \frac{\partial {\rho}_{\alpha \beta}}{\partial t}+\frac{i}{\hbar}\sum_{\gamma}{\left(H_{\alpha \gamma}{\rho}_{\gamma \beta}-{\rho}_{\alpha \gamma}H_{\gamma \beta}\right)}=0. \end{equation} By applying the summation operation \[\sum_{\alpha \beta}{\sum^2_{s=1}{{\mathrm{\Psi}}^\ast_{\beta}{\left(\mb{r},s\right)\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)\left(\cdots \right)}}\] to Eq.~\eqref{eq_13_} and taking into account Eqs.~\eqref{eq_8_}, \eqref{eq_9_}, we arrive at the continuity equation \begin{equation} \label{eq_14_} \frac{\partial n}{\partial t}+\frac{\mb{\mathrm{\nabla}}\cdot \mb{j}}{-e}=0. \end{equation} \subsection{Correct phenomenological relaxation operator} The correct relaxation operator in Eq.~\eqref{eq_1_} must not violate the continuity equation, i.e. the conservation of the number of particles given by Eq.~\eqref{eq_14_}. Taking into account Eqs.~\eqref{eq_8_} and \eqref{eq_9_}, the particle conservation law is satisfied under the condition \begin{equation} \label{eq_15_} \sum_{\alpha \beta} \sum^2_{s=1}{{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)R_{\alpha \beta}} = 0, \end{equation} or, for ``spinless'' particles, \begin{equation} \label{eq_16_} \sum_{\alpha \beta} {\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r}\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right)R_{\alpha \beta} = 0. \end{equation} As noted in the Introduction, the diagonal elements of the density matrix are usually not perturbed in the linear approximation with respect to the Hamiltonian of interaction with an external field. Therefore, when calculating the linear response of the medium it is sufficient to take into account only the off-diagonal elements of the relaxation operator $ R_{\alpha \beta} $ in the sum \eqref{eq_15_}. In the basis of real wave functions of ``spinless'' particles, which can always be chosen for a system without a magnetic field \cite{landau2013quantum}, the relaxation operator of the form of Eq.~\eqref{eq_5_} ensures that Eq.~\eqref{eq_16_} is satisfied. Such a basis is ``natural'' for a discrete nondegenerate spectrum describing finite motion. For particles in a system with translational symmetry (in free space or in a periodic lattice field), the most ``natural'' basis is a set of complex wave functions that are eigenfunctions of the momentum or quasi-momentum operator. When such a basis is used, the procedure for constructing a relaxation operator similar to Eq.~\eqref{eq_5_} becomes more complicated.
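In the real basis the check is immediate (a one-line sketch, assuming symmetric rates $ {\gamma}_{\alpha \beta}={\gamma}_{\beta \alpha} $; the complex conjugation is omitted because the basis functions are real): substituting Eq.~\eqref{eq_5_} into Eq.~\eqref{eq_16_} gives
\begin{equation*}
-\sum_{\alpha \beta}{{\mathrm{\Psi}}_{\beta}\left(\mb{r}\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right){\gamma}_{\alpha \beta}\left({\rho}_{\alpha \beta}-{\rho}_{\beta \alpha}\right)}=0,
\end{equation*}
since relabeling $ \alpha \leftrightarrow \beta $ in the term with $ {\rho}_{\beta \alpha} $ reproduces the term with $ {\rho}_{\alpha \beta} $.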
A further complication is caused by the spin dependence of the Hamiltonian. However, as will be shown below, for TRS systems the corresponding procedure is not too cumbersome. It relies on the assumption that the relaxation rate of quantum coherence for a given transition depends only on its energy. Let us now use the time-reversal symmetry of the system. The time-reversal operation $ \hat{T} $ applied to a scalar energy eigenfunction is simply complex conjugation: \[\hat{T}{\mathrm{\Psi}}_{\alpha}\left(\mb{r}\right)={\mathrm{\Psi}}^\ast_{\alpha}\left(\mb{r}\right).\] When applied to the spinor $ \left( \begin{array}{c} {\mathrm{\Psi}}_{\alpha}\left(\mb{r},1\right) \\ {\mathrm{\Psi}}_{\alpha}\left(\mb{r},2\right) \end{array} \right) $ this operation takes the form \cite{landau2013quantum} \begin{equation} \label{eq_17_} \hat{T}\left( \begin{array}{c} {\mathrm{\Psi}}_{\alpha}\left(\mb{r},1\right) \\ {\mathrm{\Psi}}_{\alpha}\left(\mb{r},2\right) \end{array} \right)=i{\hat{\sigma}}_y\left( \begin{array}{c} {\mathrm{\Psi}}^\ast_{\alpha}\left(\mb{r},1\right) \\ {\mathrm{\Psi}}^\ast_{\alpha}\left(\mb{r},2\right) \end{array} \right)=\left( \begin{array}{c} {\mathrm{\Psi}}^\ast_{\alpha}\left(\mb{r},2\right) \\ -{\mathrm{\Psi}}^\ast_{\alpha}\left(\mb{r},1\right) \end{array} \right). \end{equation} The Hamiltonian of a TRS system commutes with the operator $ \hat{T} $. This property is inherent in closed systems, and closure implies, among other things, the absence of an external magnetic field. The Hamiltonian of such systems satisfies the condition $ \hat{H}\left(\hat{\mb{p}},\hat{\mb{s}}\right)=\hat{H}\left(-\hat{\mb{p}},-\hat{\mb{s}}\right) $. The presence of an external dc electric field does not affect the symmetry with respect to time reversal \cite{landau2013quantum}. For ``spinless'' particles, such commutativity is equivalent to the Hamiltonian being real in the $\mb{r}$-representation. To construct a correct relaxation operator we make use of the fact that for every basis state $|\alpha \rangle$ the state $\hat{T}|\alpha \rangle$ coincides with the same or another basis state, up to a constant phase: \begin{equation} \label{eq_19_} |{\alpha}^\prime \rangle ={\mathrm{e}}^{i{\varphi}_{\alpha}}\hat{T}|\alpha \rangle. \end{equation} Such a basis always exists, since it can consist of eigenfunctions of the operator $\hat{T}$, which commutes with the Hamiltonian, $\left[\hat{H},\hat{T}\right]=0$. In the latter case $|{\alpha}^\prime \rangle =|\alpha \rangle$. For degenerate energy levels there is freedom in the choice of basis states, and the states $|{\alpha}^\prime \rangle$ and $|\alpha \rangle$ can be different. However, the condition \eqref{eq_19_} should be satisfied, since in the general case the state $\hat{T}|\alpha \rangle$ can be equal to a linear combination of basis states corresponding to the same energy level. Since $ {\hat{T}}^2=\pm 1 $, where the upper sign corresponds to a scalar state function and the lower sign to a spinor, Eq.~\eqref{eq_19_} is reciprocal: \begin{equation} \label{eq_19_1} |\alpha \rangle ={\mathrm{e}}^{i{\varphi}_{{\alpha}^\prime}}\hat{T}|{\alpha}^\prime \rangle \end{equation} and for a ``spinless'' particle $ {\varphi}_{{\alpha}^\prime}={\varphi}_{\alpha} $, while for a particle described by the spinor $ {\varphi}_{{\alpha}^\prime}={\varphi}_{\alpha}+\pi $.
In both cases, for any pair of states $|\alpha \rangle$ and $|\beta \rangle$ \begin{equation} \label{eq_20_} {\varphi}_{{\beta}^\prime}-{\varphi}_{{\alpha}^\prime}={\varphi}_{\beta}-{\varphi}_{\alpha}. \end{equation} Another useful relation follows from Eqs.~\eqref{eq_17_} and \eqref{eq_19_}: \begin{equation} \label{eq_21_} \sum^2_{s=1}{\left[{\mathrm{\Psi}}^\ast_{\beta}\left(\mb{r},s\right){\mathrm{\Psi}}_{\alpha}\left(\mb{r},s\right)-{{\mathrm{e}}^{i\left({{\varphi}_{\alpha}-\varphi}_{\beta}\right)}\mathrm{\Psi}}^\ast_{{\alpha}^\prime}\left(\mb{r},s\right){\mathrm{\Psi}}_{{\beta}^\prime}\left(\mb{r},s\right)\right]}=0. \end{equation} Using Eq.~\eqref{eq_19_}, we construct the relaxation operator in the following way: \begin{equation} \label{eq_18_} R_{\alpha \beta}=-{\gamma}_{\alpha \beta}\left({\rho}_{\alpha \beta}-{{\mathrm{e}}^{i\left({\varphi}_{\beta}-{\varphi}_{\alpha}\right)}\rho}_{{\beta}^\prime {\alpha}^\prime}\right). \end{equation} Calculating the sum of the off-diagonal elements in Eq.~\eqref{eq_15_} with the relaxation operator \eqref{eq_18_}, using Eqs.~\eqref{eq_20_} and \eqref{eq_21_}, and rearranging the summation indices, we confirm that Eq.~\eqref{eq_15_} is satisfied, which, as stated earlier, is the criterion ensuring the continuity equation in the system. Consider the following examples: (\textbf{i}) free particles, (\textbf{ii}) ``spinless'' particles in a periodic lattice, and (\textbf{iii}) a spin-dependent system in a periodic lattice. \noindent (\textbf{i}) In this case we have $ |\alpha \rangle =|\mb{k}\rangle $, where the set of vectors $ \mb{k} $ is given by periodic boundary conditions. In the coordinate representation the wave function has the form $ {\mathrm{\Psi}}_{\mb{k}}\left(\mb{r}\right)={\mathrm{e}}^{i\mb{kr}} $, $ {{\mathrm{\Psi}}_{{\mb{k}}^\prime}\left(\mb{r}\right) = \mathrm{\Psi}}^\ast_{\mb{k}}={\mathrm{\Psi}}_{-\mb{k}} $, so that the corresponding phases ${\varphi}_{\alpha}$ and ${\varphi}_{{\alpha}^\prime}$ in Eqs.~\eqref{eq_19_} and \eqref{eq_19_1} are equal to zero. Then Eq.~\eqref{eq_18_} gives \[R_{\mb{k}\mb{q}}=-{\gamma}_{\mb{kq}}\left({\rho}_{\mb{kq}}-{\rho}_{-\mb{q}-\mb{k}}\right).\] A simple interpretation of this expression is discussed in the Introduction. \noindent (\textbf{ii}) In this case we have $ |\alpha \rangle =|c,\mb{k}\rangle $, where $c$ is a band index. In the coordinate representation the wave function has the form of a Bloch function: $ {\mathrm{\Psi}}_{c\mb{k}}\left(\mb{r}\right)={\psi}_{c\mb{k}}\left(\mb{r}\right){\mathrm{e}}^{i\mb{kr}} $, where $ {\psi}_{c\mb{k}}\left(\mb{r}\right) $ is a periodic function with the lattice period, or a sum of periodic functions (when several sublattices exist). For a real Hamiltonian in the $\mb{r}$-representation, the dependence of the electron energy on the quasimomentum in a given band is a symmetric function: $ E_{c\mb{k}}=E_{c-\mb{k}} $, $ {{\mathrm{\Psi}}_{c-\mb{k}} = {\mathrm{e}}^{i{\varphi}_{c\mb{k}}}\mathrm{\Psi}}^\ast_{c\mb{k}} $. Thus, each energy state is at least fourfold degenerate: twice in quasi-momentum and twice in spin. It follows from Eq.~\eqref{eq_18_} that \begin{equation} \label{eq_22_} R_{c\mb{k}d\mb{q}}=-{\gamma}_{c\mb{k}d\mb{q}}\left({\rho}_{c\mb{k}d\mb{q}}-{\mathrm{e}}^{i\left({\varphi}_{d\mb{q}}-{\varphi}_{c\mb{k}}\right)}{\rho}_{d\, -\mb{q}\, c\, -\mb{k}}\right). \end{equation} (\textbf{iii}) For a spin-dependent periodic Hamiltonian, the spin degeneracy can be lifted.
In this case we have $ {\mathrm{\Psi}}_{c\mb{k}}\left(\mb{r},s\right)={\psi}_{c\mb{k}}\left(\mb{r},s\right){\mathrm{e}}^{i\mb{kr}} $, where the periodic functions are the components of the spinor, $ {\psi}_{c\mb{k}}\left(\mb{r},1\right) $ and $ {\psi}_{c\mb{k}}\left(\mb{r},2\right) $. The energies of electron states with opposite quasi-momenta and, simultaneously, with opposite average values of spin projections onto a given axis, turn out to be equal. Such states are connected by the operation of time reversal. Thus, we have $ E_{c\mb{k}}=E_{c^\prime -\mb{k}} $, where \[\left( \begin{array}{c} {\psi}_{c^\prime -\mb{k}}\left(\mb{r},1\right) \\ {\psi}_{c^\prime -\mb{k}}\left(\mb{r},2\right) \end{array} \right){\mathrm{e}}^{-i\mb{kr}}={\mathrm{e}}^{i{\varphi}_{c\mb{k}}}{\left[\left( \begin{array}{c} {\psi}_{c\mb{k}}\left(\mb{r},2\right) \\ -{\psi}_{c\mb{k}}\left(\mb{r},1\right) \end{array} \right){\mathrm{e}}^{i\mb{kr}}\right]}^\ast.\] To avoid any confusion, we emphasize that the band index $c$ numbers all states with different energies for a given $ \mb{k} $. The bands created by spin splitting correspond to different values of the index $c$. When $ \mb{k}=0 $, such bands intersect or touch, so that the degeneracy is restored. Indeed, if the state with $ \mb{k}=0 $ and energy $ E_{c\mb{k}=0} $ is described by the spinor $ \left( \begin{array}{c} {\psi}_c\left(\mb{r},1\right) \\ {\psi}_c\left(\mb{r},2\right) \end{array} \right) $, then the same energy level corresponds to the state $ \hat{T}\left( \begin{array}{c} {\psi}_c\left(\mb{r},1\right) \\ {\psi}_c\left(\mb{r},2\right) \end{array} \right)=\left( \begin{array}{c} {\psi}^\ast_c\left(\mb{r},2\right) \\ -{\psi}^\ast_c\left(\mb{r},1\right) \end{array} \right) $, linearly independent of the first state. Therefore, the state with energy $ E_{c\mb{k}=0} $ is degenerate. The choice of band numbering at the crossing or contact point is a matter of convention. The relaxation operator \eqref{eq_18_} again has the form of Eq.~\eqref{eq_22_} if we consider the band index to be conserved when the sign of the quasi-momentum changes, i.e., choose the band numbering so that $ E_{c\mb{k}}=E_{c-\mb{k}} $. \subsection{Averaged relaxation operator} The relaxation operator satisfying the continuity equation for carriers in a solid couples coherences at transitions that are symmetric in quasimomentum space with respect to the point $ \mb{k}=0 $. It often makes sense to restrict ourselves to the carrier states in a relatively small vicinity of a certain point $ {\mb{k}}_{0} $ of the Brillouin zone. Suppose that such a point is an extremum, in the vicinity of which the carrier dispersion can be considered symmetric: $ E_c\left({\mb{k}-\mb{k}}_{0}\right)=E_c\left({\mb{k}}_{0}-\mb{k}\right) $ (for example, the vicinity of the Dirac points \cite{vafek2014,orlita2014}). Whenever we calculate observable quantities involving only the states within a small part of the Brillouin zone, we in fact determine their values averaged over a scale much longer than the lattice period $a$. In this case it makes sense to require that the continuity equation be satisfied also ``on average'' over the same scales. It turns out that a relatively simple relaxation operator preserving the number of particles ``on average'' can be constructed without requiring TRS of the system. The corresponding averaging implies a rather narrow interval of quasi-momenta $ \delta k $ satisfying \begin{equation} \label{eq_23_} \delta k a \ll 1.
\end{equation} Let us average Eq.~\eqref{eq_15_} over the lattice period, assuming that inequality \eqref{eq_23_} is satisfied: \begin{equation} \label{eq_24_} \sum_{cd\mb{k}\mb{q}}{\sum^2_{s=1}{\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r},s\right)\psi}_{c\mb{k}}\left(\mb{r},s\right)}}{\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}R_{c\mb{k}d\mb{q}}}=0, \end{equation} where the bar denotes the corresponding averaging, and the quasimomentum is counted from the point $ {\mb{k}=\mb{k}}_{0} $. The diagonal terms $c\mb{k}= d\mb{q}$ in Eq.~\eqref{eq_24_} give zero contribution to the sum, since the diagonal terms under the averaging bar are equal to one (normalization condition), the exponential terms are also equal to one, and the conservation of the total number of particles in the system requires that $ \sum\limits_{c\mb{k}} R_{c\mb{k}c\mb{k}} = 0 $. The remaining sum of the off-diagonal terms determines the coordinate-dependent part of the particle density which has nonzero spatial harmonics and contributes to the charge continuity equation. Therefore, only the off-diagonal components of the relaxation operator determine whether the continuity equation is preserved and Eq.~\eqref{eq_3_} is satisfied. In the averaged description this is true even beyond the linear response theory. In the non-averaged description, the diagonal terms can also be coordinate-dependent and contribute to the continuity equation, as is clear from Eq.~\eqref{eq_15_}. However, within the linear response theory the diagonal terms are not perturbed by the field and only the off-diagonal elements of the relaxation operator need to be considered. We will seek the matrix of the relaxation operator in a form close to Eq.~\eqref{eq_22_}, replacing the factor $ {\mathrm{e}}^{i\left({\varphi}_{d\mb{q}}-{\varphi}_{c\mb{k}}\right)} $ with a matrix element $ G_{c\mb{k}d\mb{q}} $ to be determined: \begin{equation} \label{eq_25_} R_{c\mb{k}d\mb{q}}=-{\gamma}_{c\mb{k}d\mb{q}}\left({\rho}_{c\mb{k}d\mb{q}}-G_{c\mb{k}d\mb{q}}{\rho}_{d\,-\mb{q}\,c\,-\mb{k}}\right). \end{equation} Substituting Eq.~\eqref{eq_25_} into Eq.~\eqref{eq_24_}, we get \[\sum_{c\mb{k}d\mb{q}}{{{\gamma}_{c\mb{k}d\mb{q}}\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}\left[\sum^2_{s=1}{\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r},s\right)\psi}_{c\mb{k}}\left(\mb{r},s\right)}}\left({\rho}_{c\mb{k}d\mb{q}}-G_{c\mb{k}d\mb{q}}{\rho}_{d\,-\mb{q}\,c\,-\mb{k}}\right)\right]}=0,\] which, after rearranging the summation indices, becomes \begin{equation} \label{eq_26_} \sum_{c\mb{k}d\mb{q}}{{{\gamma}_{c\mb{k}d\mb{q}}\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}\sum^2_{s=1}{\left[\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r},s\right)\psi}_{c\mb{k}}\left(\mb{r},s\right)}-\overline{{\psi}^\ast_{c-\mb{k}}{\left(\mb{r},s\right)\psi}_{d-\mb{q}}\left(\mb{r},s\right)}G_{d\,-\mb{q}\,c\,-\mb{k}}\right]{\rho}_{c\mb{k}d\mb{q}}}}=0.
\end{equation} As a result, we obtain the following expression for the matrix elements $ G_{c\mb{k}d\mb{q}} $, \begin{equation} \label{eq_27_} G_{c\mb{k}d\mb{q}}=\frac{1}{G_{d\,-\mb{q}\,c\,-\mb{k}}}=\frac{\sum^2_{s=1}{\overline{{\psi}^\ast_{c-\mb{k}}{\left(\mb{r},s\right)\psi}_{d-\mb{q}}\left(\mb{r},s\right)}}}{\sum^2_{s=1}{\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r},s\right)\psi}_{c\mb{k}}\left(\mb{r},s\right)}}} \end{equation} or, for ``spinless'' particles, \begin{equation} \label{eq_28_} G_{c\mb{k}d\mb{q}}=\frac{1}{G_{d\,-\mb{q}\,c\,-\mb{k}}}=\frac{\overline{{\psi}^\ast_{c-\mb{k}}{\left(\mb{r}\right)\psi}_{d-\mb{q}}\left(\mb{r}\right)}}{\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r}\right)\psi}_{c\mb{k}}\left(\mb{r}\right)}}. \end{equation} A different approach to describing quasiparticles in a crystal is to work with wave functions averaged over the lattice period. The truncated model Hamiltonian that defines such states can be reconstructed using the matrix $ \hat{E}\left(\mb{k}\right) $, whose eigenvalues define the energy bands and the carrier energy dispersion in each band (see, for example, \cite{gantmakher1987carrier}). This matrix can be calculated in various approximations, with magnetic and/or spin effects ``hidden'' in the form of the matrix $ \hat{E}\left(\mb{k}\right) $. In the region of $\mb{k}$-space corresponding to Eq.~\eqref{eq_23_} and in the absence of perturbing external fields, the averaged Hamiltonian has the form \begin{equation} \label{eq_29_} {\hat{H}}_0=\frac{1}{\hbar}{\left(\frac{\partial}{\partial \mb{k}}\hat{E}\right)}_{\mb{k}=0}\cdot \hat{\mb{p}}+\frac{1}{2{\hbar}^2}\sum_{ij}{{\left(\frac{{\partial}^2\hat{E}}{\partial k_i\partial k_j}\right)}_{\mb{k}=0}}\cdot {\hat{p}}_i{\hat{p}}_j, \end{equation} where $ i,j=x,y,z $. The Hamiltonian $ {\hat{H}}_0\left(\hat{\mb{p}}\right) $ is generally an $ N\times N $ matrix which defines a basis of states in the form of \textit{N}-component vectors corresponding to the energies $ E_{c\mb{k}} $: \begin{equation} \label{eq_30_} {\mb{\mathrm{U}}}_{c\mb{k}}\left(\mb{r}\right)\equiv {\mb{u}}_{c\mb{k}}{\mathrm{e}}^{i\mb{kr}}, \, {\mb{u}}_{c\mb{k}}=\left( \begin{array}{c} u^{\left(1\right)}_{c\mb{k}} \\ \vdots \\ u^{\left(N\right)}_{c\mb{k}} \end{array} \right). \end{equation} The elements of the vector $ {\mb{u}}_{c\mb{k}} $ in Eq.~\eqref{eq_30_} are the coefficients of the expansion of the Bloch function in terms of orthogonal periodic functions or orthogonal periodic two-component functions (spinors). The scalar product \[\left({\mb{u}}^\ast_{d\mb{q}}{\cdot \mb{u}}_{c\mb{k}}\right)=\sum^N_{n=1}{u^{\left(n\right)*}_{d\mb{q}}u^{\left(n\right)}_{c\mb{k}}}\] corresponds to the averaged quantities in Eqs.~\eqref{eq_27_}, \eqref{eq_28_}, \begin{equation} \label{eq_31_} \left({\mb{u}}^\ast_{d\mb{q}}{\cdot \mb{u}}_{c\mb{k}}\right)=\sum^2_{s=1}{\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r},s\right)\psi}_{c\mb{k}}\left(\mb{r},s\right)}}, \end{equation} or, for ``spinless'' particles, \begin{equation} \label{eq_32_} \left({\mb{u}}^\ast_{d\mb{q}}{\cdot \mb{u}}_{c\mb{k}}\right)=\overline{{\psi}^\ast_{d\mb{q}}{\left(\mb{r}\right)\psi}_{c\mb{k}}\left(\mb{r}\right)}.
\end{equation} For a model with the ``averaged'' Hamiltonian Eq.~\eqref{eq_29_}, the influence of perturbing fields given by the electrodynamic potentials $ \varphi \left(\mb{r},t\right) $ and $ \mb{A}\left(\mb{r},t\right) $ is taken into account by transforming the unperturbed Hamiltonian $ {\hat{H}}_0\left(\hat{\mb{p}}\right)\Rightarrow \hat{H}\left(\hat{\mb{p}},\mb{A},\varphi \right) $ using Eq.~\eqref{eq_7_}. The equation for the density matrix Eq.~\eqref{eq_13_} with the ``averaged'' Hamiltonian $ \hat{H}\left(\hat{\mb{p}},\mb{A},\varphi \right) $ satisfies the continuity equation \eqref{eq_14_}, in which \begin{equation} \label{eq_33_} n\left(\mb{r}\right)=g\sum_{c\mb{k}d\mb{q}}{{\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}\left({\mb{u}}^\ast_{d\mb{q}}{\cdot \mb{u}}_{c\mb{k}}\right){\rho}_{c\mb{k}d\mb{q}}}, \end{equation} \begin{equation} \label{eq_34_} \mb{j}\left(\mb{r}\right)=-\frac{e}{2}g\sum_{c\mb{k}d\mb{q}}{\left[{\mb{u}}^\ast_{d\mb{q}}{\mathrm{e}}^{-i\mb{qr}}\cdot \left(\hat{\mb{v}}\cdot {\mb{u}}_{c\mb{k}}{\mathrm{e}}^{i\mb{kr}}\right)+\left({\hat{\mb{v}}}^{\mb{*}}\cdot {\mb{u}}^\ast_{d\mb{q}}{\mathrm{e}}^{-i\mb{qr}}\right){\cdot \mb{u}}_{c\mb{k}}{\mathrm{e}}^{i\mb{kr}}\right]}{\rho}_{c\mb{k}d\mb{q}}, \end{equation} where the velocity operator $ \hat{\mb{v}}=\frac{i}{\hbar}\left[\hat{H},\mb{r}\right] $ is an $ N\times N $ matrix, and $ g $ is a degeneracy factor which can take into account both spin degeneracy and the presence of identical extremum points in different valleys of the Brillouin zone (see, for example, \cite{katsnelson2012graphene}). For massless Dirac fermions in Eq.~\eqref{eq_29_} we have $ {\left(\frac{{\partial}^2\hat{E}}{\partial k_i\partial k_j}\right)}_{\mb{k}={\mb{k}}_0}=0 $; therefore, the velocity operator matrix is composed of constant elements and does not involve differentiation: \[\hat{\mb{v}}=\frac{i}{\hbar}\left[\hat{H},\mb{r}\right]={\frac{i}{{\hbar}^2}\left(\frac{\partial}{\partial \mb{k}}\hat{E}\right)}_{\mb{k}={\mb{k}}_0}\cdot \left[\hat{\mb{p}},\mb{r}\right]={\frac{1}{\hbar}\left(\frac{\partial}{\partial \mb{k}}\hat{E}\right)}_{\mb{k}={\mb{k}}_0}.\] Thus, for fermions in Weyl semimetals, graphene and low-energy surface states in topological insulators of the type Bi$_2$Se$_3$, such an ``algebraic'' velocity operator is formed by Pauli matrices and is a $ 2\times 2 $ matrix \cite{vafek2014}; for Kane fermions in Cd$_x$Hg$_{1-x}$Te, it is a $ 6\times 6 $ matrix \cite{orlita2014}. In these and similar cases, the products in the expression for the current density Eq.~\eqref{eq_34_} are all algebraic, which leads to a certain simplification (see, for example, \cite{wang2016}).
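For instance, in the standard two-band model of graphene near a Dirac point (cf. Appendix \ref{appendix_graphene}) one has $ \hat{E}\left(\mb{k}\right)=\hbar v_F\left({\hat{\sigma}}_xk_x+{\hat{\sigma}}_yk_y\right) $, so that
\begin{equation*}
\hat{\mb{v}}=\frac{1}{\hbar}\frac{\partial \hat{E}}{\partial \mb{k}}=v_F\left({\mb{x}}_0{\hat{\sigma}}_x+{\mb{y}}_0{\hat{\sigma}}_y\right),
\end{equation*}
a constant $ 2\times 2 $ matrix, where $ v_F $ is the Fermi velocity.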
Since in this case we have $ {\mb{u}}^\ast_{d\mb{q}}\left(\hat{\mb{v}}{\mb{u}}_{c\mb{k}}\right)=\left({\hat{\mb{v}}}^{\mb{*}}{\mb{u}}^\ast_{d\mb{q}}\right){\mb{u}}_{c\mb{k}} $, it follows from Eq.~\eqref{eq_34_} that \[\mb{j}\left(\mb{r}\right)=-eg\sum_{c\mb{k}d\mb{q}}{{\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}\left({\mb{u}}^\ast_{d\mb{q}}\cdot \hat{\mb{v}}\cdot {\mb{u}}_{c\mb{k}}\right){\rho}_{c\mb{k}d\mb{q}}}.\] The continuity equation for an ensemble of electrons described by a truncated Hamiltonian is satisfied when \[\sum_{cd\mb{k}\mb{q}}{\left({\mb{u}}^\ast_{d\mb{q}}\cdot {\mb{u}}_{c\mb{k}}\right){\mathrm{e}}^{i\left(\mb{k}-\mb{q}\right)\mb{r}}R_{c\mb{k}d\mb{q}}}=0.\] After a derivation similar to Eqs.~\eqref{eq_24_}--\eqref{eq_27_} (or using the pairs of equations \eqref{eq_27_},\eqref{eq_31_} or \eqref{eq_28_},\eqref{eq_32_}) we obtain the following expression for the relaxation matrix written in the form of Eq.~\eqref{eq_25_}: \begin{equation} \label{eq_35_} G_{c\mb{k}d\mb{q}}=\frac{1}{G_{d\,-\mb{q}\,c\,-\mb{k}}}=\frac{\left({\mb{u}}^\ast_{c-\mb{k}}{\cdot \mb{u}}_{d-\mb{q}}\right)}{\left({\mb{u}}^\ast_{d\mb{q}}{\cdot \mb{u}}_{c\mb{k}}\right)}. \end{equation} Note that for a wide class of systems the following condition is satisfied: \begin{equation} \label{eq_36_} {\left|G_{c\mb{k}d\mb{q}}\right|}^2=1. \end{equation} This is similar to the non-averaged system described by Bloch eigenfunctions, for which the relaxation operator is given by Eq.~\eqref{eq_22_} and the role of the coefficient $ G_{c\mb{k}d\mb{q}} $ is played by the unimodular factor $ {\mathrm{e}}^{i\left({\varphi}_{d\mb{q}}-{\varphi}_{c\mb{k}}\right)} $. In Appendix \ref{sec_appendix_TRS} we will show that to satisfy the condition \eqref{eq_36_} it is sufficient (although not necessary) to have a TRS effective Hamiltonian $ {\hat{H}}_0\left(\hat{\mb{p}}\right) $. Appendix \ref{appendix_WSM} contains an example of a system (a Weyl semimetal) with broken time-reversal symmetry, for which the condition \eqref{eq_36_} is nevertheless satisfied. Appendix \ref{sec_appendix_homogeneous} shows that, when the linear susceptibility is calculated in the limit of a uniform external field, violating the condition \eqref{eq_36_} does not affect the result. Thus, Eqs.~\eqref{eq_25_}, \eqref{eq_27_}, \eqref{eq_35_} define a relatively simple relaxation operator, the use of which in the master equations preserves the continuity equation for the observables. Note that the relaxation operator Eq.~\eqref{eq_25_} cannot be reduced to the standard form Eq.~\eqref{eq_2_} by formally setting $ G_{c\mb{k}d\mb{q}}=0 $. Accordingly, the response of the medium obtained using the relaxation operator Eq.~\eqref{eq_25_} does not reduce to the one obtained on the basis of the standard relaxation model by setting $ G_{c\mb{k}d\mb{q}}=0 $. This is because the coefficients $ G_{c\mb{k}d\mb{q}} $ determined by the properties of the eigenstates of the Hamiltonian obey Eq.~\eqref{eq_27_} or Eq.~\eqref{eq_35_}. Therefore, if we formally set $G_{c\mb{k}d\mb{q}}=0$ for any transition, we automatically have $ G_{d\,-\mb{q}\,c\,-\mb{k}}=\infty $. \section{Generalization of the Lindhard formula in a dissipative system in two dimensions} \label{sec_lindhard} \subsection{Screening effect in a monolayer} Let the monolayer with charge carriers be located in the plane $z=0$. In the region $ z<0 $, there is a substrate with a dielectric constant $ \varepsilon $. Consider the electric field potential $ \mathrm{\Phi}\left(\mb{r},z,t\right) $, where the vector $ \mb{r} $ belongs to the \textit{xy} plane.
We write the potential as an expansion in 2D Fourier harmonics, \[\mathrm{\Phi} = \int{d\omega}\mathop{\int\!\!\!\!\int}{{\mathrm{\Phi}}_{\mb{\kappa}\omega}\left(z\right){\mathrm{e}}^{i\mb{\kappa}\mb{r}-i\omega t}d^2\kappa}.\] The complex amplitudes of the field harmonics in the layer plane are given by $ {\mb{E}}_{\mb{\kappa}\omega}=-i\mb{\kappa}\mathrm{\Phi}_{\mb{\kappa}\omega}\left(0\right) $. Hereafter, to simplify the expressions, we will use the notation $ {\mathrm{\Phi}}_{\mb{\kappa}\omega} $ instead of $ {\mathrm{\Phi}}_{\mb{\kappa}\omega}\left(0\right) $. Let $ \chi \left(\omega ,\mb{\kappa}\right) $ be the 2D linear susceptibility of a layer, which determines its surface polarization excited by a field harmonic: \[{\mb{P}}_{\mb{\kappa}\omega}=-i\mb{\kappa}\chi \left(\omega ,\mb{\kappa}\right){\mathrm{\Phi}}_{\mb{\kappa}\omega}.\] Harmonics of the surface charge are related to the harmonics of surface polarization by $ Q_{\mb{\kappa}\omega}=-i\mb{\kappa}{\mb{P}}_{\mb{\kappa}\omega} $, from which \begin{equation} \label{eq_37_} Q_{\mb{\kappa}\omega}=-{\kappa}^2\chi \left(\omega ,\mb{\kappa}\right){\mathrm{\Phi}}_{\mb{\kappa}\omega}. \end{equation} We will seek a response to the external potential $ \mathrm{\Phi} $, taking into account the excitation of the self-consistent potential $ \delta \mathrm{\Phi} $: \begin{equation} \label{eq_38_} Q_{\mb{\kappa}\omega}=-{\kappa}^2\chi \left(\omega ,\mb{\kappa}\right)\left({\mathrm{\Phi}}_{\mb{\kappa}\omega}+\delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}\right). \end{equation} The Poisson equation outside the monolayer gives \[\left(-{\kappa}^2+\frac{{\partial}^2}{\partial z^2}\right){\delta \mathrm{\Phi}}_{\mb{\kappa}\omega}\left(z\right)=0.\] Its solution, continuous at the layer and decaying away from it, is \[{\delta \mathrm{\Phi}}_{\mb{\kappa}\omega}\left(z\right) = \delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}{\mathrm{e}}^{\mp \kappa z},\] where the upper and lower signs correspond to the upper and lower half-spaces. The value of $ \delta {\mathrm{\Phi}}_{\mb{\kappa}\omega} $ can be determined using Gauss's theorem: \begin{equation} \label{eq_39_} \kappa \delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}+\varepsilon \kappa \delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}=4\pi Q_{\mb{\kappa}\omega}. \end{equation} As a result, we get from Eqs.~\eqref{eq_38_}, \eqref{eq_39_} \begin{equation} \label{eq_40_} {\mathrm{\Phi}}^{\left(\mathrm{scr}\right)}_{\mb{\kappa}\omega} = \frac{{\mathrm{\Phi}}_{\mb{\kappa}\omega}}{1+\frac{\kappa}{1+\varepsilon}4\pi \chi \left(\omega ,\mb{\kappa}\right)}, \end{equation} where $ {\mathrm{\Phi}}^{\left(\mathrm{scr}\right)}_{\mb{\kappa}\omega}={\mathrm{\Phi}}_{\mb{\kappa}\omega}+\delta {\mathrm{\Phi}}_{\mb{\kappa}\omega} $ is the harmonic of a screened potential. Note that equating the denominator in Eq.~\eqref{eq_40_} to zero gives the dispersion equation for a 2D plasmon supported by the monolayer \cite{yao2014}: $ 1+\frac{\kappa}{1+\varepsilon}4\pi \chi \left(\omega ,\mb{\kappa}\right)=0 $.
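For completeness, the algebra behind Eq.~\eqref{eq_40_} is a one-line rearrangement: Eq.~\eqref{eq_39_} gives $ \delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}=\frac{4\pi Q_{\mb{\kappa}\omega}}{\left(1+\varepsilon \right)\kappa} $, and substituting Eq.~\eqref{eq_38_} yields
\begin{equation*}
\delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}=-\frac{4\pi \kappa \chi \left(\omega ,\mb{\kappa}\right)}{1+\varepsilon}\left({\mathrm{\Phi}}_{\mb{\kappa}\omega}+\delta {\mathrm{\Phi}}_{\mb{\kappa}\omega}\right), \qquad \left(1+\frac{4\pi \kappa \chi \left(\omega ,\mb{\kappa}\right)}{1+\varepsilon}\right){\mathrm{\Phi}}^{\left(\mathrm{scr}\right)}_{\mb{\kappa}\omega}={\mathrm{\Phi}}_{\mb{\kappa}\omega}.
\end{equation*}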
Let us show that Eq.~\eqref{eq_40_} corresponds exactly to the Lindhard formula for a 2D system in the absence of dissipation \cite{haug2009quantum}: \begin{equation} \label{eq_41_} {\mathrm{\Phi}}^{\left(\mathrm{scr}\right)}_{\mb{\kappa}\omega} = \frac{{\mathrm{\Phi}}_{\mb{\kappa}\omega}}{1- \frac{2}{1+\varepsilon}{\mathrm{\Phi}}_{0\mb{\kappa}}g\sum_{\alpha \beta}{\frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega}}}, \end{equation} where $ {\mathrm{\Phi}}_{0\mb{\kappa}} $ is a spatial 2D harmonic of the interaction potential of point charges $ {e^2}/{r} $, \[{\mathrm{\Phi}}_{0\mb{\kappa}}=\frac{1}{4{\pi}^2}\int_{\infty}{\frac{e^2}{r}}{\mathrm{e}}^{-i\mb{\kappa}\mb{r}}d^2r=\frac{e^2}{2\pi \kappa},\] $ f_{\alpha} $ and $ E_{\alpha} $ are the population and energy of quasiparticles corresponding to the state $ |\alpha \rangle $, and $ g $ is the degeneracy factor. To compare Eq.~\eqref{eq_40_} and Eq.~\eqref{eq_41_}, we need to obtain an expression for the susceptibility $ \chi \left(\omega ,\mb{\kappa}\right) $. We use the equation for the complex amplitude of the linear perturbation of the density matrix $ {{\rho}_{\alpha \beta}=\tilde{\rho}}_{\alpha \beta}{\mathrm{e}}^{-i\omega t} $ under the action of the harmonic of the potential $ {\mathrm{\Phi}}_{\mb{\kappa}\omega}{\mathrm{e}}^{i\mb{\kappa}\mb{r}-i\omega t} $, \begin{equation} \label{eq_42_} -i\omega {\tilde{\rho}}_{\alpha \beta}+i\frac{E_{\alpha}-E_{\beta}}{\hbar}{\tilde{\rho}}_{\alpha \beta}=-\frac{i}{\hbar}e{\mathrm{\Phi}}_{\mb{\kappa}\omega}{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\left(f_{\alpha}-f_{\beta}\right), \end{equation} which yields \begin{equation} \label{eq_43_} {\tilde{\rho}}_{\alpha \beta}=-e\frac{{\mathrm{\Phi}}_{\mb{\kappa}\omega}{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\left(f_{\alpha}-f_{\beta}\right)}{E_{\alpha}-E_{\beta}-\hbar \omega}. \end{equation} It follows from the first of Eqs.~\eqref{eq_12_} that \begin{equation} \label{eq_44_} Q_{\mb{\kappa}\omega} =- \frac{eg}{4{\pi}^2}\sum_{\alpha \beta}{{\left({\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}{\tilde{\rho}}_{\alpha \beta}}. \end{equation} Substituting Eqs.~\eqref{eq_43_}, \eqref{eq_44_} into Eq.~\eqref{eq_37_} and using the relation $ {\left({\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}={\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2 $ we obtain \begin{equation} \label{eq_45_} \chi \left(\omega ,\mb{\kappa}\right)=-\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{\alpha \beta}{\frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega}}. \end{equation} It is easy to see that the substitution of Eq.~\eqref{eq_45_} into Eq.~\eqref{eq_40_} leads to Eq.~\eqref{eq_41_}. Another way to derive the linear susceptibility is to calculate the Fourier harmonic of the current $ {\mb{j}}_{\mb{\kappa}\omega} $, which follows from the second of Eqs.~\eqref{eq_12_}: \begin{equation} \label{eq_46_} {\mb{j}}_{\mb{\kappa}\omega} =- \frac{eg}{4{\pi}^2}\sum_{\alpha \beta}{{\frac{1}{2}\left({\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\hat{\mb{v}}+\hat{\mb{v}}{\mathrm{e}}^{-i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}{\tilde{\rho}}_{\alpha \beta}}. 
\end{equation} Using the identity $ {\left(\frac{\mathrm{\nabla}u\cdot \hat{\mb{v}}+\hat{\mb{v}}\cdot \mathrm{\nabla}u}{2}\right)}_{\beta \alpha}=i\frac{E_{\beta}-E_{\alpha}}{\hbar}u_{\beta \alpha} $ which holds for any function $ u\left(\mb{r}\right) $ (see, for example, \cite{wang2016}), from Eqs.~\eqref{eq_46_} and \eqref{eq_43_} we obtain the expression for the conductivity \begin{equation} \label{eq_47_} \sigma \left(\omega ,\mb{\kappa}\right)=i\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{\alpha \beta}{\frac{E_{\alpha}-E_{\beta}}{\hbar}\times \frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega}}. \end{equation} Using the relation $ \sum_{\alpha}{{\left({\mathrm{e}}^{\mp i\mb{\kappa}\mb{r}}\right)}_{\beta \alpha}{\left({\mathrm{e}}^{\pm i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}}=1 $, which is valid for any index $ \beta $, Eq.~\eqref{eq_47_} can be reduced to the following form, \[\sigma \left(\omega ,\mb{\kappa}\right)=i\omega \frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{\alpha \beta}{\frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega}}=-i\omega \chi \left(\omega ,\mb{\kappa}\right),\] which corresponds to the fundamental relationship Eq.~\eqref{eq_3_}. \subsection{Accounting for relaxation within the standard model} Using the standard relaxation operator defined by Eq.~\eqref{eq_2_} in the equation for the perturbation of the density matrix Eq.~\eqref{eq_42_} results in the substitution $ \omega \to \omega +i{\gamma}_{\alpha \beta} $ in the corresponding relations Eqs.~\eqref{eq_45_}, \eqref{eq_47_}: \begin{equation} \label{eq_48_} \chi \left(\omega ,\mb{\kappa}\right)=-\frac{e^2}{4{\pi}^2{\kappa}^2}g\sum_{\alpha \beta}{\frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega -i\hbar {\gamma}_{\alpha \beta}}}, \end{equation} \begin{equation} \label{eq_49_} \sigma \left(\omega ,\mb{\kappa}\right)=\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{\alpha \beta}{\left(i\omega -{\gamma}_{\alpha \beta}\right)\frac{\left(f_{\alpha}-f_{\beta}\right){\left|{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{\alpha \beta}\right|}^2}{E_{\alpha}-E_{\beta}-\hbar \omega -i\hbar {\gamma}_{\alpha \beta}}}. \end{equation} Obviously, this solution, obtained using the relaxation operator in the form of Eq.~\eqref{eq_2_}, violates Eq.~\eqref{eq_3_}. This can lead to significant errors in the low-frequency range (see, for example, \cite{tokman2013,zhang2014,tokman2009}). \subsection{Accounting for relaxation with the modified relaxation operator} Let us consider a system with quasi-particle states $ |\alpha \rangle =|c,\mb{k}\rangle$ using the relaxation operator Eq.~\eqref{eq_25_}, which preserves the average number of particles.
The density matrix equations become \begin{equation} \label{eq_50_} -i\omega {\tilde{\rho}}_{c\mb{k}d\mb{q}}+i\frac{E_{c\mb{k}}-E_{d\mb{q}}}{\hbar}{\tilde{\rho}}_{c\mb{k}d\mb{q}}=-\frac{i}{\hbar}e{\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}\left(f_{c\mb{k}}-f_{d\mb{q}}\right)-{\gamma}_{c\mb{k}d\mb{q}}\left({\tilde{\rho}}_{c\mb{k}d\mb{q}}-{G_{c\mb{k}d\mb{q}}\tilde{\rho}}_{d\,-\mb{q}\,c\,-\mb{k}}\right), \end{equation} \begin{equation} \label{eq_51_} -i\omega {\tilde{\rho}}_{d\,-\mb{q}\,c\,-\mb{k}}+i\frac{E_{d\mb{q}}-E_{c\mb{k}}}{\hbar}{\tilde{\rho}}_{d\,-\mb{q}\,c\,-\mb{k}}=-\frac{i}{\hbar}e{\mathrm{\Phi}}_{\omega ;d-\mb{q}\,c-\mb{k}}\left(f_{d-\mb{q}}-f_{c-\mb{k}}\right)-{\gamma}_{c\mb{k}d\mb{q}}\left({\tilde{\rho}}_{d\,-\mb{q}\,c\,-\mb{k}}-G_{d\,-\mb{q}\,c\,-\mb{k}}{\tilde{\rho}}_{c\mb{k}d\mb{q}}\right), \end{equation} where $ {\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}={\mathrm{\Phi}}_{\mb{\kappa}\omega}{\left({\mathrm{e}}^{i\mb{\kappa}\mb{r}}\right)}_{c\mb{k}d\mb{q}} $, $ {\gamma}_{c\mb{k}d\mb{q}}={\gamma}_{d\,-\mb{q}\,c\,-\mb{k}} $, and the energy $ E_{c\mb{k}} $ does not depend on the sign of $\mb{k}$. From Eqs.~\eqref{eq_50_}, \eqref{eq_51_}, taking into account the relationship $ G_{c\mb{k}d\mb{q}}G_{d\,-\mb{q}\,c\,-\mb{k}}=1 $ (see Eq.~\eqref{eq_35_}), we obtain \begin{align} {\tilde{\rho}}_{c\mb{k}d\mb{q}} & = -e\frac{\left(E_{d\mb{q}}-E_{c\mb{k}}-\hbar \omega \right){\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}\left(f_{c\mb{k}}-f_{d\mb{q}}\right)}{{\hbar}^2{\omega}^2+2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}-{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2} \nonumber \\ & + i\hbar {\gamma}_{c\mb{k}d\mb{q}}e\frac{{\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}\left(f_{c\mb{k}}-f_{d\mb{q}}\right)-G_{c\mb{k}d\mb{q}}{\mathrm{\Phi}}_{\omega ;d-\mb{q}\,c-\mb{k}}\left(f_{c-\mb{k}}-f_{d-\mb{q}}\right)}{{\hbar}^2{\omega}^2+2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}-{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2}. \label{eq_52_} \end{align} Within the averaged description, the following relationship is satisfied, \begin{equation} \label{eq_53_} {\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}={\mathrm{\Phi}}_{\mb{\kappa}\omega}{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left({\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right), \end{equation} which gives, after taking Eq.~\eqref{eq_35_} into account, \begin{equation} \label{eq_54_} {\mathrm{\Phi}}_{\omega ;d-\mb{q}\,c-\mb{k}}={\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}G^\ast_{c\mb{k}d\mb{q}} . \end{equation} Using Eqs.~\eqref{eq_53_}, \eqref{eq_54_} and assuming that the populations are determined only by the energies of states, we transform the second term on the right-hand side of Eq.~\eqref{eq_52_} into \[i\hbar {\gamma}_{c\mb{k}d\mb{q}}e{\mathrm{\Phi}}_{\mb{\kappa}\omega}{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left({\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right)\frac{\mathrm{1}-{\left|G_{c\mb{k}d\mb{q}}\right|}^2}{{\hbar}^2{\omega}^2+2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}-{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2}\left(f_{c\mb{k}}-f_{d\mb{q}}\right).\] Under the condition \eqref{eq_36_} this term is equal to zero.
As a result, we obtain the following expression for the perturbation of the density matrix, \begin{equation} \label{eq_55_} {\tilde{\rho}}_{c\mb{k}d\mb{q}}=-e\frac{\left(E_{c\mb{k}}-E_{d\mb{q}}+\hbar \omega \right){\mathrm{\Phi}}_{\omega ;c\mb{k}d\mb{q}}\left(f_{c\mb{k}}-f_{d\mb{q}}\right)}{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-{\hbar}^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}} . \end{equation} As noted at the end of Section \ref{sec_relaxation}, for a wide class of systems the coefficients satisfy $\left|G_{c\mb{k}d\mb{q}}\right|=1$ (condition \eqref{eq_36_}). We assume this to be true here. Moreover, it will be shown in Appendix \ref{sec_appendix_homogeneous} that the derivation of the linear susceptibility in the limit of a uniform external field for an arbitrary value of $\left|G_{c\mb{k}d\mb{q}}\right|$ gives the same result as for $\left|G_{c\mb{k}d\mb{q}}\right|=1$. The expression for the amplitude of monochromatic oscillations of the charge is obtained by substituting Eq.~\eqref{eq_55_} into Eq.~\eqref{eq_44_}. As a result, taking into account Eq.~\eqref{eq_37_}, we obtain the expression for the susceptibility, \begin{equation} \label{eq_56_} \chi \left(\omega ,\mb{\kappa}\right)=-\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{cd\mb{k}\mb{q}}\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2\left(E_{c\mb{k}}-E_{d\mb{q}}+\hbar \omega \right)}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar^2\omega^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}. \end{equation} It is easy to verify that as $ {\gamma}_{c\mb{k}d\mb{q}}\to 0 $, the expression Eq.~\eqref{eq_56_} reduces to the dissipationless formula Eq.~\eqref{eq_45_}. The resulting expression for the susceptibility can be simplified, given the identity \begin{equation} \label{eq_57_} \sum_{cd\mb{k}\mb{q}}\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}=0. \end{equation} To prove Eq.~\eqref{eq_57_}, we relabel $ c\mb{k} \leftrightarrow d\mb{q} $ in the sum and obtain \[\sum_{cd\mb{k}\mb{q}}\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}=\sum_{cd\mb{k}\mb{q}}\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2\frac{{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}-{\delta}_{\mb{k}\left(\mb{q}-\mb{\kappa}\right)}}{2}}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}.\] Under the condition $ \left|G_{c\mb{k}d\mb{q}}\right|=1 $ we always have $ {\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2={\left|{\mb{u}}^\ast_{c-\mb{k}}\cdot {\mb{u}}_{d-\mb{q}}\right|}^2 $. In this case, the right-hand side of the last expression can be represented as the difference of two identical sums, which vanishes, so that Eq.~\eqref{eq_57_} is satisfied.
As a result, we get \begin{equation} \label{eq_58_} \chi \left(\omega ,\mb{\kappa}\right)=-\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{cd\mb{k}\mb{q}}\frac{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}. \end{equation} For an independent derivation of the conductivity, we use Eqs.~\eqref{eq_46_}, \eqref{eq_55_} to obtain \[\sigma \left(\omega ,\mb{\kappa}\right)=i\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{c\mb{k}d\mb{q}}{\frac{E_{c\mb{k}}-E_{d\mb{q}}}{\hbar}\times \frac{\left(E_{c\mb{k}}-E_{d\mb{q}}+\hbar \omega \right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2\left(f_{c\mb{k}}-f_{d\mb{q}}\right)}{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-{\hbar}^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}.\] Then we use the relation \[\sum_{cd\mb{k}\mb{q}}{\frac{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-\hbar}^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}=0,\] whose proof is completely analogous to that of Eq.~\eqref{eq_57_}. As a result, we arrive at \[\sigma \left(\omega ,\mb{\kappa}\right)=i\omega \frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{c\mb{k}d\mb{q}}{\frac{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)\left(f_{c\mb{k}}-f_{d\mb{q}}\right){{\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{{\left(E_{c\mb{k}}-E_{d\mb{q}}\right)}^2-{\hbar}^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{q}}}}=-i\omega \chi \left(\omega ,\mb{\kappa}\right).\] As we see, in this case the relationship \eqref{eq_3_} is always satisfied; therefore, it suffices to analyze the properties of the expression \eqref{eq_58_}, which determines the value of $ \chi(\omega,\mb{\kappa}) $. An important feature of the expression \eqref{eq_58_} is that the transition to the limit of a constant field is nontrivial. Depending on the order in which we take the limits, the relations obtained in the limit $ \omega \to 0 $ allow us to describe both the response of an equilibrium quasi-closed system to a perturbing potential and the Ohmic conductivity of an open system. For a nonzero value of $ \mb{\kappa} $ in the limit $ \omega =0 $, Eq.~\eqref{eq_58_} defines a real value of $ \chi $, independent of the relaxation constants. If the quantities $ f_{c\mb{k}} $ correspond to the equilibrium distribution in the absence of external fields, then expression \eqref{eq_58_} corresponds to the equilibrium state of the system placed in the potential field, with the electron-field interaction treated as a perturbation. This result corresponds to the correct limit of a stationary response to an external non-uniform constant perturbing field, since such a response itself should not depend on the mechanism and rate of relaxation. We will arrive at a different result if we first take the limit $ \mb{\kappa}\to 0 $.
In this case we obtain \begin{equation} \label{eq_59_} \chi \left(\omega ,0\right)=-\frac{e^2g}{4{\pi}^2}\sum_{c\neq d,\mb{k}}{\frac{{\left|{\mb{n}\cdot}{\mb{r}}_{c\mb{k}d\mb{k}}\right|}^2\left(f_{c\mb{k}}-f_{d\mb{k}}\right)\left(E_{c\mb{k}}-E_{d\mb{k}}\right)}{{{\left(E_{c\mb{k}}-E_{d\mb{k}}\right)}^2-\hbar}^2{\omega}^2-2{i\hbar}^2\omega {\gamma}_{c\mb{k}d\mb{k}}}}+\frac{e^2g}{4{\pi}^2}\sum_{c\mb{k}}{\frac{\left(\mb{n}\frac{\partial}{\partial \mb{k}}E_{c\mb{k}}\right)\left(\mb{n}\frac{\partial}{\partial \mb{k}}f_{c\mb{k}}\right) }{{\hbar}^2{\omega}^2+2{i\hbar}^2\omega {\gamma}_{c\mb{k}c\mb{k}}}}, \end{equation} where $ {\mb{r}}_{c\mb{k}d\mb{k}}={\mb{u}}^\ast_{c\mb{k}}i\frac{\partial}{\partial \mb{k}}{\mb{u}}_{d\mb{k}} $ is the matrix element of the coordinate operator defined in the $\mb{k}$-representation for a ``direct'' transition; $ \mb{n}=\frac{\mb{\kappa}}{\left|\mb{\kappa}\right|} $. It is easy to verify that when we use Eq.~\eqref{eq_59_} to determine the conductivity $ \sigma \left(\omega ,0\right)=-i\omega \chi \left(\omega ,0\right) $, the second term on the right-hand side of Eq.~\eqref{eq_59_} defines the standard intraband Drude conductivity, up to a factor of 2 in the definition of the relaxation constant. To summarize, if the limit $ \omega \to 0 $ is taken after taking the limit $ \mb{\kappa}\to 0 $, then a physically transparent result is obtained: interband transitions determine a susceptibility that does not depend on the relaxation constant, whereas intraband transitions determine a finite Drude conductivity that does depend on it. However, this conclusion is valid for a finite band gap only. Otherwise (as, for example, in graphene), a more complex contribution of interband transitions is possible. Note that for $ \omega \to 0 $, a continuous transition from the equilibrium ``current-free'' solution given by Eq.~\eqref{eq_58_} to the Ohmic conductivity given by Eq.~\eqref{eq_59_} is possible only for a problem with boundary conditions. The resonance denominator in the expression for the susceptibility Eq.~\eqref{eq_58_} corresponds to a classical harmonic oscillator with friction; this structure of the electrodynamic response is typical of relaxation operators with an antisymmetric structure like that of Eq.~\eqref{eq_5_} \cite{tokman2013, zhang2014}. For the simplest open systems that violate time-reversal symmetry, for example free particles or a harmonic oscillator in a magnetic field, expressions similar to Eq.~\eqref{eq_58_} were obtained in \cite{tokman2013,zhang2014}. The corresponding expressions for the phenomenological relaxation operator were derived there by imposing the requirement of gauge invariance of the electrodynamic response \cite{tokman2013, tokman2009}. In the present case, no additional requirements, except for the conservation of the particle number, were imposed. As noted above, all the necessary information about the system is ``hidden'' in the form of specific Bloch functions $ {\psi}_{c\mb{k}}\left(\mb{r},s\right) $ or, within the simplified description, in the form of vectors $ {\mb{u}}_{c\mb{k}} $. When used in Eqs.~\eqref{eq_25_}, \eqref{eq_27_} or Eqs.~\eqref{eq_25_}, \eqref{eq_35_}, respectively, these sets of state functions unambiguously define the correct phenomenological relaxation operator.
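As a consistency check (a sketch assuming a single parabolic band $ E_{c\mb{k}}={\hbar}^2k^2/2m $, a constant intraband rate $ {\gamma}_{c\mb{k}c\mb{k}}=\gamma $, and the identification $ n_s=\frac{g}{4{\pi}^2}\sum_{\mb{k}}{f_{c\mb{k}}} $ for the surface carrier density), integration by parts in the second term of Eq.~\eqref{eq_59_} gives
\begin{equation*}
\sigma \left(\omega ,0\right)=-i\omega \chi \left(\omega ,0\right)=\frac{in_se^2}{m\left(\omega +2i\gamma \right)},
\end{equation*}
i.e., precisely the Drude formula with the effective collision rate $ 2\gamma $ mentioned above.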
\section{Application to graphene} \label{sec_comparison} \subsection{General considerations} In this section, we compare the results for the dielectric response of graphene obtained with the modified relaxation operator Eq.~\eqref{eq_25_} and the standard relaxation operator Eq.~\eqref{eq_2_}. One has to choose for which particular quantity to carry out the comparison, the surface susceptibility $ \chi \left(\omega ,\mb{\kappa}\right) $ or the surface conductivity $ \sigma \left(\omega ,\mb{\kappa}\right) $, since using the standard relaxation operator Eq.~\eqref{eq_2_} violates Eq.~\eqref{eq_3_}. Whenever the standard relaxation operator is used, we introduce the notation $ {\chi}^{\left(st\right)}\left(\omega ,\mb{\kappa}\right) $ and $ {\sigma}^{\left(st\right)}\left(\omega ,\mb{\kappa}\right) $. For quasiparticle states $ |\alpha \rangle =|c,\mb{k}\rangle $ the expressions Eqs.~\eqref{eq_48_}, \eqref{eq_49_} take the form \begin{equation} \label{eq_60_} {\chi}^{\left(st\right)}\left(\omega ,\mb{\kappa}\right)=-\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{cd\mb{k}\mb{q}}{\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}{\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{E_{c\mb{k}}-E_{d\mb{q}}-\hbar \omega -i\hbar {\gamma}_{c\mb{k}d\mb{q}}}}, \end{equation} \begin{equation} \label{eq_61_} {\sigma}^{\left(st\right)}\left(\omega ,\mb{\kappa}\right)=\frac{e^2g}{4{\pi}^2{\kappa}^2}\sum_{cd\mb{k}\mb{q}}{\left(i\omega -{\gamma}_{c\mb{k}d\mb{q}}\right)\frac{\left(f_{c\mb{k}}-f_{d\mb{q}}\right){\delta}_{\mb{k}\left(\mb{q}+\mb{\kappa}\right)}{\left|{\mb{u}}^\ast_{c\mb{k}}\cdot {\mb{u}}_{d\mb{q}}\right|}^2}{ E_{c\mb{k}}-E_{d\mb{q}}-\hbar \omega -i\hbar {\gamma}_{c\mb{k}d\mb{q}}}}. \end{equation} As an important example, we compare the values of $ {\sigma}^{\left(st\right)}\left(0,\mb{\kappa}\right) $ and $ {\chi}^{\left(st\right)}\left(0,\mb{\kappa}\right) $. Using exactly the same approach as in the derivation of Eq.~\eqref{eq_57_}, we obtain $ \mathrm{Re}[{\sigma}^{\left(st\right)}\left(0,\mb{\kappa}\right)] \neq 0 $, but $ \mathrm{Im}[{\chi}^{\left(st\right)}\left(0,\mb{\kappa}\right)] =0 $. The relaxation operator \eqref{eq_25_} corresponds to the susceptibility given by Eq.~\eqref{eq_58_}, which leads to $ \mathrm{Im}[\chi \left(0,\mb{\kappa}\right)]=\mathrm{Im}[{\chi}^{\left(st\right)}\left(0,\mb{\kappa}\right)] = 0 $. This suggests that, if one wants to use the standard relaxation operator Eq.~\eqref{eq_2_} for low frequencies and finite values of $ \mb{\kappa} $, one obtains more adequate results by calculating the susceptibility $ \chi $ rather than the conductivity, since it is the condition $ \mathrm{Im}\,\chi \left(0,\mb{\kappa}\right)=0 $ that corresponds to the correct stationary state for finite values of $ \mb{\kappa} $. Therefore, below we compare the susceptibilities derived with the different relaxation operators rather than the conductivities. \subsection{Comparison between the standard and new model of the relaxation operator for graphene} Consider monolayer graphene, for which the wave functions $ {\mb{u}}_{c\mb{k}} $ and the electron energy dispersion $ E_{c\mb{k}} $ are given in Appendix B (see Eqs.~\eqref{appendixB_eq_8}, \eqref{appendixB_eq_9}).
If we assume that the relaxation rate is a constant, $\gamma_{c\mb{k}d\mb{q}} = \gamma$, then the susceptibility given in Eq.~\eqref{eq_58_} is written as \begin{align} \chi(\omega,\mb{\kappa}) = - \frac{e^2 g}{4\pi^2\kappa^2} \sum_{cd} \int d^2\mb{q} \frac{ (f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}}) \left| u^\ast_{c,\mb{q}+\mb{\kappa}} \cdot u_{d\mb{q}}\right|^2 (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}}) } { (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 - \hbar^2\omega^2 - 2i\hbar^2\omega \gamma } . \label{eq63} \end{align} The above expression can be calculated numerically. However, when $\omega \to 0$, the denominator can become zero (even for $ \mb{\kappa}\neq 0 $ if $ c=d $), and a naive numerical integration fails. Therefore, we need to analyze the behavior of the susceptibility in the vicinity of $\omega = 0$. The reciprocal of the denominator of $\chi(\omega,\mb{\kappa})$ can be written as \begin{align} \frac{1}{ (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 - \hbar^2\omega^2 - 2i\hbar^2\omega \gamma } = \frac{ (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 - \hbar^2\omega^2 + 2i\hbar^2\omega \gamma } { \left( (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 - \hbar^2\omega^2 \right)^2 + 4 \hbar^4 \omega^2 \gamma^2 } . \end{align} Therefore, \begin{align} &\phantom{{}={}}\lim\limits_{\omega\rightarrow 0} \frac{1}{ (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 - \hbar^2\omega^2 - 2i\hbar^2\omega \gamma } \nonumber \\ &= \mathrm{V.p.} \left\{ \frac{1}{ (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 } \right\} + i\pi \delta( (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 ) , \end{align} where $\mathrm{V.p.}$ stands for the principal value of the integral. As a result, the susceptibility in the zero-frequency limit is given by \begin{align} &\lim\limits_{\omega\rightarrow 0} \chi(\omega,\mb{\kappa}) = - \frac{e^2 g}{4\pi^2\kappa^2} \sum_{cd} \int d^2\mb{q} \mathrm{V.p.} \left\{ \frac{ (f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}}) \left| u^\ast_{c,\mb{q}+\mb{\kappa}} \cdot u_{d\mb{q}}\right|^2 } { E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}} } \right\} \nonumber \\ &- i \pi \frac{e^2 g}{4\pi^2\kappa^2} \sum_{cd} \int d^2\mb{q} (f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}}) \left| u^\ast_{c,\mb{q}+\mb{\kappa}} \cdot u_{d\mb{q}}\right|^2 (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}}) \delta( (E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}})^2 ) \nonumber \\ &= - \frac{e^2 g}{4\pi^2\kappa^2} \sum_{cd} \int d^2\mb{q} \mathrm{V.p.} \left\{ \frac{ (f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}}) \left| u^\ast_{c,\mb{q}+\mb{\kappa}} \cdot u_{d\mb{q}}\right|^2 } { E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}} } \right\} \nonumber \\ &- i \pi \frac{e^2 g}{8\pi^2\kappa^2} \sum_{cd} \int d^2\mb{q} (f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}}) \left| u^\ast_{c,\mb{q}+\mb{\kappa}} \cdot u_{d\mb{q}}\right|^2 \delta( E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}} ) . \end{align} One can see that the imaginary part of $\chi(\omega,\mb{\kappa})$ is zero when $\omega\rightarrow 0$ for $ \mb{\kappa}\neq 0 $: since the equilibrium occupation numbers depend only on energy, the factor $(f_{c,\mb{q}+\mb{\kappa}} - f_{d\mb{q}})$ vanishes on the support of $\delta( E_{c,\mb{q}+\mb{\kappa}}-E_{d\mb{q}} )$. Now we calculate the susceptibility of intrinsic monolayer graphene by carrying out the integration in $k$-space numerically. In the plots below, we assumed $T=300~\mathrm{K}$ and $\gamma = 10^{14}~\mathrm{s}^{-1}$ ($\gamma/2\pi \simeq 16$~THz). In Fig.~\ref{Fig:chi_vs_w} we show the susceptibility as a function of frequency, calculated with the new model, Eq.~(\ref{eq63}), in comparison with the standard model, Eq.~(\ref{eq_60_}). The inset shows the behavior near $\omega=0$. The difference between the predictions of the two models is very large when $\omega \leq \gamma$. In the high-frequency limit $\omega \gg \gamma$ the two models give very similar results.
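For reference, the integration just described can be sketched in a few lines of Python (our own illustrative code, not the authors' implementation). It assumes the Dirac dispersion $E_{c\mb{k}} = c\hbar v_F|\mb{k}|$ with band index $c=\pm 1$, the standard graphene spinor overlap $\left| u^\ast_{c\mb{k}} \cdot u_{d\mb{q}}\right|^2 = (1+cd\cos\theta_{\mb{k}\mb{q}})/2$, intrinsic Fermi--Dirac occupations at $T=300$~K, and a crude rectangle rule with a momentum cutoff; it omits the overall prefactor $-e^2g/(4\pi^2\kappa^2)$ and, as explained above, is unreliable near $\omega = 0$.
\begin{verbatim}
import numpy as np

HBAR = 1.054571817e-34            # hbar [J s]
VF = 1.0e6                        # assumed Fermi velocity [m/s]
KT = 1.380649e-23 * 300.0         # k_B T at 300 K [J]

def energy(c, kx, ky):
    # Dirac dispersion: E = c * hbar * v_F * |k|, c = +1 or -1
    return c * HBAR * VF * np.hypot(kx, ky)

def fermi(e):
    # intrinsic graphene: chemical potential at the Dirac point
    return 1.0 / (1.0 + np.exp(e / KT))

def overlap(c, kx1, ky1, d, kx2, ky2):
    # assumed spinor overlap |u*_{c,k1} . u_{d,k2}|^2
    n1, n2 = np.hypot(kx1, ky1), np.hypot(kx2, ky2)
    cos12 = (kx1 * kx2 + ky1 * ky2) / np.maximum(n1 * n2, 1e-300)
    return 0.5 * (1.0 + c * d * cos12)

def chi_integral(omega, kappa, gamma, qmax=5e8, nq=801):
    """Rectangle-rule value of the double integral in Eq. (eq63),
    without the prefactor -e^2 g / (4 pi^2 kappa^2).  The cutoff
    qmax [1/m] is an assumption and must be checked for convergence;
    the rule is unreliable as omega -> 0 (resonant denominator)."""
    q = np.linspace(-qmax, qmax, nq)
    qx, qy = np.meshgrid(q, q)
    dq2 = (q[1] - q[0]) ** 2
    total = 0.0 + 0.0j
    for c in (+1, -1):
        for d in (+1, -1):
            de = energy(c, qx + kappa, qy) - energy(d, qx, qy)
            num = ((fermi(energy(c, qx + kappa, qy))
                    - fermi(energy(d, qx, qy)))
                   * overlap(c, qx + kappa, qy, d, qx, qy) * de)
            den = (de ** 2 - (HBAR * omega) ** 2
                   - 2j * HBAR ** 2 * omega * gamma)
            total += np.sum(num / den)
    return total * dq2
\end{verbatim}
For instance, \texttt{chi\_integral(2*np.pi*8.0e12, 2*np.pi*2.0e6, 1.0e14)} corresponds, up to the omitted prefactor and discretization error, to a single point of the new-model curves at $\omega=0.5\gamma$ and $\kappa/2\pi=2~\mu\mathrm{m}^{-1}$.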
\begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{fig1a.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{fig1b.pdf} \end{subfigure} \caption{ The real part (a) and imaginary part (b) of the surface susceptibility for undoped monolayer graphene as a function of frequency for the two models: standard (solid blue curve) and new (dashed red curve). The plots are calculated for $T=300~\mathrm{K}$, $\gamma = 10^{14}~\mathrm{s}^{-1}$ ($\gamma/2\pi \simeq 16$~THz), and $\kappa/2\pi = 2~\mu\mathrm{m}^{-1}$. The inset in each figure shows the curves near zero frequency. } \label{Fig:chi_vs_w} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{fig2a.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{fig2b.pdf} \end{subfigure} \caption{ The real part (a) and imaginary part (b) of the surface susceptibility for undoped monolayer graphene as a function of the wave vector $\kappa$ for the two models: standard (solid blue curve) and new (dashed red curve). The plots are calculated for $T=300~\mathrm{K}$ and $\gamma = 10^{14}~\mathrm{s}^{-1}$ ($\gamma/2\pi \simeq 16$~THz). The frequency is fixed at $\omega=0.5\gamma$, i.e., $\omega/2\pi = 8.0~\mathrm{THz}$. } \label{Fig:chi_vs_kappa_w_eq_0.5_gamma} \end{figure} \begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=\linewidth]{fig3a.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\linewidth]{fig3b.pdf} \end{subfigure} \caption{ The real part (a) and imaginary part (b) of the surface susceptibility for undoped monolayer graphene as a function of the wave vector $\kappa$ for the two models: standard (solid blue curve) and new (dashed red curve). The plots are calculated for $T=300~\mathrm{K}$ and $\gamma = 10^{14}~\mathrm{s}^{-1}$ ($\gamma/2\pi \simeq 16$~THz). The frequency is fixed at $\omega=2.0\gamma$, i.e., $\omega/2\pi = 31.8~\mathrm{THz}$. } \label{Fig:chi_vs_kappa_w_eq_2.0_gamma} \end{figure} Figure \ref{Fig:chi_vs_kappa_w_eq_0.5_gamma} shows the susceptibility as a function of the wave vector $\kappa$, while the frequency is fixed at $\omega = 0.5\gamma$. In Fig.~\ref{Fig:chi_vs_kappa_w_eq_2.0_gamma} we show the same dependence with the frequency fixed at a higher value, $\omega = 2.0\gamma$. Clearly, at low frequencies the two models behave very differently as $\kappa$ approaches zero. The difference between the models becomes less important as the frequency gets larger than the relaxation rate. \subsection{The case of $\kappa\rightarrow 0$ for graphene} \label{sec:graphene_kappa_0} In the case of ${\kappa} \rightarrow 0$, the susceptibility in the modified model is given in Eq.~\eqref{eq_59_}, and the susceptibility in the standard model is given in Eq.~\eqref{eq_60_}. Here we compare the results by deriving analytical expressions for the susceptibility. In order to do this, we consider the case of zero temperature, so that the distribution of electrons is given by $f_{n\mb{k}} = \theta(E_F-E_{n\mb{k}})$, where $\theta(x)$ is the Heaviside function and $E_F$ is the Fermi level, which is related to the Fermi wavevector $k_F$ by $E_F = \mathrm{sgn}(E_F)\, \hbar v_F k_F$, where $\mathrm{sgn}(x)$ is the sign function. The detailed derivation is given in Appendix \ref{appendix:chi_kappa_0_derivation}.
For the modified model, we find the following expression for the part of $\chi(\omega,0)$ due to interband transitions: \begin{align} \chi_{\mathrm{inter}}(\omega, 0) &= -\frac{e^2 g}{16\pi} \frac{1}{\hbar\omega\sqrt{ 1+2i\gamma/\omega }} \ln \left[ \frac{2 v_F k_F - \omega\sqrt{ 1+2i\gamma/\omega } }{2 v_F k_F + \omega\sqrt{ 1+2i\gamma/\omega } } \right] , \end{align} where the branch of the square root is chosen so that $\mathrm{Re}\,\sqrt{ 1+2i\gamma/\omega } > 0$. The contribution of intraband transitions is found to be \begin{align} \chi_{\mathrm{intra}}(\omega, 0) &= -\frac{e^2 g}{4\pi\hbar} \frac{v_F k_F}{\omega^2 (1+2i\gamma/\omega) } . \end{align} For the standard relaxation operator, the contribution of interband transitions is \begin{align} \chi^{\mathrm{(st)}}_{\mathrm{inter}}(\omega, 0) &= -\frac{e^2 g}{16\pi} \frac{1}{\hbar\omega (1+i\gamma/\omega) } \ln \left[ \frac{2 v_F k_F - \omega (1+i\gamma/\omega) } {2 v_F k_F + \omega (1+i\gamma/\omega) } \right], \end{align} and the contribution of the intraband transitions is \begin{align} {\chi}^{\mathrm{(st)}}_{\mathrm{intra}} \left(\omega, 0\right) &= -\frac{e^2g}{4\pi\hbar} \frac{v_F k_F}{\omega^2(1 + i\gamma/\omega)^2} . \end{align} These results show that the functions $\chi(\omega,0)$ calculated in the standard and modified models coincide at large frequencies $\omega \gg \gamma$, while they are quite different in the region where $\omega$ is of the order of or smaller than $\gamma$. Indeed, for $\omega \gg \gamma$ both intraband expressions reduce to $-\frac{e^2 g}{4\pi\hbar}\frac{v_F k_F}{\omega^2}$, and the two interband expressions agree to first order in $\gamma/\omega$, since $\sqrt{1+2i\gamma/\omega}\approx 1+i\gamma/\omega$. \section{Conclusions} In conclusion, we derived a phenomenological relaxation operator for quasiparticles in a crystalline solid, which has a number of important advantages as compared to widely used models. Our relaxation operator is valid for charged carriers in solids with an arbitrary energy dispersion, in particular the Dirac spectrum; it preserves the continuity equation while including both intraband and interband transitions; it allows one to obtain both the stationary ``current-free'' regime in equilibrium and the well-known ohmic direct-current regime in the limit of a uniform static field; and it is much simpler and more general than the model proposed in \cite{mermin1970}. We demonstrated a significant difference between the results of applying the standard and modified models of relaxation of quantum coherence in the low-frequency region. We believe that the proposed relaxation operator model should be used in a wide range of problems related to the interaction of electromagnetic fields with condensed matter systems. \begin{acknowledgments} This work has been supported in part by the Air Force Office for Scientific Research Grant No.~FA9550-17-1-0341, National Science Foundation Award No.~1936276, and Texas A\&M University through X-grant and T3-grant programs. M.T. acknowledges the support from RFBR Grant No. 18-29-19091mk. M.E. acknowledges the support from Federal Research Center Institute of Applied Physics of the Russian Academy of Sciences (Project No. 0035-2019-004). \end{acknowledgments}
\section{Background} \label{sec:background} The $n$-cube, which we denote by $Q_n$, is the graph whose vertex set is the set of all binary $n$-tuples, with two vertices adjacent if and only if they differ in precisely one coordinate (so they are at Hamming distance 1). Let $[n]=\set{1,2,\ldots,n}$. We sometimes denote a vertex $\left[x_1,x_2,\ldots,x_n\right]$ of $Q_n$ by the subset $S$ of $[n]$ such that $i\in S$ if and only if $x_i=1$. So if $n=4$, then $\emptyset$ denotes $\left[ 0 0 0 0\right]$, and $\set{1,3}$, or 13, denotes $\left[ 1 0 1 0\right]$, and $\set{\set{1},\set{1,3}}$ (or $\set{1,13}$) denotes $\set{\left[ 1 0 0 0 \right], \left[ 1 0 1 0 \right]}$. The weight of a vertex is its number of 1s. For each positive integer $d$ less than or equal to $n$, $Q_n$ has $\binom{n}{d}2^{n-d}$ subgraphs which are isomorphic to $Q_d$ ($d$ coordinates can vary, while $n-d$ coordinates are fixed). Let $H$ and $K$ be subsets of $V(Q_d)$ (we call $H$ and $K$ configurations in $Q_d$). We say $K$ is an \emph{exact copy} of $H$ if there is an automorphism of $Q_d$ which sends $H$ to $K$. For example, $\set{\emptyset,12}$ is an exact copy of $\set{2,123}$ in $Q_3$, but $\set{2,13}$ is not (the vertices are distance 3 apart). So if $K$ is an exact copy of $H$ then they induce isomorphic subgraphs of $Q_d$, but the converse may not hold. Let $d$ and $n$ be positive integers with $d\leq n$, let $H$ be a configuration in $Q_d$, and let $S$ be a subset of $V(Q_n)$. We let $G(H,d,n,S)$ denote the number of sub-$d$-cubes $R$ of $Q_n$ in which $S\cap R$ is an exact copy of $H$, $\Gmax(H,d,n)=\max_{S\subseteq V(Q_n)} G(H,d,n,S)$, $g(H,d,n,S)=\frac{G(H,d,n,S)}{\binom{n}{d}2^{n-d}}$ denote the fraction of sub-$d$-cubes $R$ of $Q_n$ in which $S\cap R$ is an exact copy of $H$, and $\ex(H,d,n)=\frac{\Gmax(H,d,n)}{\binom{n}{d}2^{n-d}}=\max_{S\subseteq V(Q_n)} g(H,d,n,S)$. Note that $\ex(H,d,n)$ is the average of $2n$ densities $g(H,d,n-1,S_j)$, each of them the fraction of sub-$d$-cubes $R$ in a sub-$(n-1)$-cube of $Q_n$ in which $R\cap S_j$ is an exact copy of $H$, where $S_j$ is the intersection of a maximizing subset $S$ of $V(Q_n)$ with one of the $2n$ sub-$(n-1)$-cubes. Hence $\ex(H,d,n)$ is the average of $2n$ densities, each of them less than or equal to $\ex(H,d,n-1)$, which means $\ex(H,d,n)$ is a nonincreasing function of $n$, so we can define the $d$-cube density $\pi(H,d)$ of $H$ by \[ \pi(H,d)=\lim_{n\to\infty} \ex(H,d,n). \] So $\pi(H,d)$ is the limit as $n$ goes to infinity of the maximum fraction, over all $S\subseteq V(Q_n)$, of ``good'' sub-$d$-cubes -- those whose intersection with $S$ is an exact copy of $H$. As far as we know, this paper is the first to define the notion of $d$-cube density. There have been many papers on Tur\'{a}n and Ramsey type problems in the hypercube. There has been extensive research on the maximum fraction of edges of $Q_n$ one can take without creating a cycle of a given length \cite{Chung:1992fk,Conlon:2010,Furedi:2015,Thomason:2009}, and a few papers on vertex Tur\'{a}n problems in $Q_n$ \cite{Johnson:1989cy,kostochka1976piercing}. There has also been extensive work on which monochromatic cycles must appear in any edge-coloring of a large hypercube with a fixed number of colors \cite{Alon:2006,Axenovich:2006,Chung:1992,Conder:1993}, and a few results on which vertex structures must appear \cite{Goldwasser:2012}.
In \cite{AKS:2006,Goldwasser:2018,Offner:2008} results were obtained on the polychromatic number of $Q_d$ in $Q_n$: the maximum number of colors in an edge coloring of a large $Q_n$ such that every sub-$d$-cube gets all colors. We wanted to investigate a different extremal problem in the hypercube: the maximum density of a small structure within a subgraph of a large hypercube. Instead of using graph isomorphism to determine whether two substructures are the same, it seemed to capture the essence of a hypercube better to require the small structure to be ``rigid'' within a sub-$d$-cube, and that is what motivated our definition of $d$-cube density. There are strong connections between $d$-cube density and \emph{inducibility} of a graph, a notion of extensive study over the past few years. Given graphs $G$ and $H$, with $\abs{V(G)}=n$ and $\abs{V(H)}=k$, the \emph{density} of $H$ in $G$, denoted $d_H(G)$, is defined by \[ d_H(G)=\frac{\textrm{\# of induced copies of $H$ in $G$} }{\binom{n}{k}}. \] Pippenger and Golumbic \cite{PG:1975} defined the \emph{inducibility} $I(H)$ of $H$ by \[ I(H)=\lim_{n\to\infty}\max_{\abs{V(G)}=n}d_H(G). \] Within the past few years $I(H)$ has been determined for all graphs $H$ with 4 vertices except the path $P_4$ \cite{exoo1986dense,Hirst:2014jp,EvenZoharLinial:2015}. Given a graph $H$, a natural candidate for maximizing the number of induced copies of $H$ is a balanced blow-up of $H$: equipartition the $n$ vertices into $\abs{V(H)}=k$ classes corresponding to the vertices of $H$ and add all possible edges between each pair of parts corresponding to an edge of $H$. Any $k$-subset which has one vertex in each part will induce a copy of $H$, so $I(H)\geq \frac{k!}{k^k}$ for any graph $H$ with $k$ vertices (a short computation justifying this bound is given below). Iterating blow-ups of $H$ within each part improves the bound to $I(H)\geq \frac{k!}{k^k-k}$. A natural generalization of $I(H)$ is to restrict $G$ to a particular class of graphs. Let $\mathscr{G}$ be a class of graphs. The inducibility of $H$ in $\mathscr{G}$ is defined by \[ I(H,\mathscr{G})=\lim_{n\to\infty}\max_{\abs{V(G)}=n,G\in\mathscr{G}}d_H(G), \] if the limit exists (if $\mathscr{G}$ is the class of all graphs the limit always exists). Let $\mathscr{T}$ be the family of all triangle-free graphs. Hatami et al. \cite{Hatami:2013} and Grzesik \cite{Grzesik:2012} used flag algebras to show that $I(C_5,\mathscr{T})=\frac{5!}{5^5}=\frac{24}{625}$, achieving the non-iterated blow-up lower bound. In \cite{CLP:2020}, Choi, Lidicky, and Pfender consider the inducibility of oriented graphs (directed graphs with no 2-cycles). For the directed path $\vv{P_k}$ they conjectured that \[ I(\vv{P_k})=\frac{k!}{(k+1)^{k-1}-1}, \] the lower bound provided by an iterated blow-up of the directed cycle $\vv{C_{k+1}}$. To eliminate the possibility of iterated blow-ups they considered the family $\vv{\mathcal{T}}$ of oriented graphs with no transitive tournament on three vertices (so every 3-cycle is directed). They conjectured that \[ I(\vv{P_k},\vv{\mathcal{T}})=\frac{k!}{(k+1)^{k-1}}. \] Again, the lower bound is provided by a blow-up of $\vv{C_{k+1}}$ (no iterations). They used flag algebras to prove their conjecture for $k=4$: \[ I(\vv{P_4},\vv{\mathcal{T}})=\frac{4!}{5^3}=\frac{24}{125}. \] It has been shown \cite{Bollobas:1986bfa,BrownSidorenko:1994} that if $H$ is a complete bipartite graph then the graph that maximizes $I(H)$ can be chosen to be complete bipartite. There are also a few results on inducibility of 3-graphs \cite{FalgasRavry:2012uu}.
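For completeness, here is the elementary count behind the non-iterated blow-up bound $I(H)\geq \frac{k!}{k^k}$ quoted above: a balanced blow-up has $(n/k)^k$ $k$-subsets with one vertex in each part, each inducing a copy of $H$, and \[ \frac{(n/k)^k}{\binom{n}{k}}=\frac{k!}{k^k}\cdot \frac{n^k}{n(n-1)\cdots(n-k+1)}\;\longrightarrow\;\frac{k!}{k^k}\quad\textrm{as } n\to\infty. \]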
As with inducibility, $d$-cube density is exceedingly difficult to determine for all but a few configurations $H$ and values of $d$. A different kind of blow-up can be used to produce lower bounds. For a few configurations $H$ we have been able to prove a matching upper bound, generally using results on inducibility to do so. In this paper we determine the $d$-cube density of a ``perfect'' path with 4 vertices in $Q_3$ and a ``perfect'' 8-cycle in $Q_4$. \section{Results} \label{sec:results} We have some results in a forthcoming paper for certain configurations $H$ when $d$ is equal to 2, 3, or 4, and for a couple of infinite families with $d$ any integer greater than 2. If $H$ is two opposite vertices in $Q_2$, clearly $\pi(H,2)=1$ (let $S$ be all vertices in $Q_n$ of even weight). A more interesting example is when $H$ is two adjacent vertices in $Q_2$. Then it is not hard to show that $\pi(H,2)=\frac{1}{2}$. (For the lower bound, take $S$ to be all vertices in $Q_n$ such that the sum of coordinates 1 through $\floor{\frac{n}{2}}$ is even. Any sub-2-cube which has one varying coordinate in and one out of $\left[1,\floor{\frac{n}{2}}\right]$ will have an exact copy of $H$.) We have been unable to determine $\pi(W_d,d)$ for any $d\geq 2$ when $W_d$ is a single vertex in $Q_d$. Letting $S$ be the set of all vertices in $Q_n$ with weight a multiple of 3 shows that $\pi(W_2,2)\geq \frac{2}{3}$. Using flag algebras, Rahil Baber \cite{Baber:2014p} has shown that $\pi(W_2,2)\leq .686$. We suspect that for sufficiently large $d$ one cannot do better than choosing vertices randomly with uniform probability $2^{-d}$, which gives a density approaching $\frac{1}{e}$ in the limit as $d$ goes to infinity. Let $P_{d+1}$ denote the vertex set of a path in $Q_d$ with $d+1$ vertices whose endpoints are Hamming distance $d$ apart. We call $P_{d+1}$ a \emph{perfect path}. For example, $\set{\emptyset,1,12,123,1234}$ and $\set{13,3,\emptyset,4,24}$ are both perfect paths in $Q_4$, while $\set{13,3,\emptyset,4,14}$ is not, even though these 5 vertices do induce a graph-theoretic path. Let $C_{2d}$ denote the vertex set of a $2d$-cycle in $Q_d$ in which all $d$ opposite pairs of vertices are distance $d$ apart. We call $C_{2d}$ a \emph{perfect $2d$-cycle}. The only graph-theoretic induced 6-cycle in $Q_3$ is perfect, but while $\set{\emptyset,1,12,123,1234,234,34,4}$ is a perfect 8-cycle, $\set{\emptyset,1,12,123,23,234,34,4}$ and $\set{\emptyset,1,12,123,1234,134,34,3}$ induce nonisomorphic 8-cycles in $Q_4$ which are not perfect. The main results in this paper are the two following theorems. \begin{theorem}\label{PC8theorem} $\pi(C_8,4)=\frac{3}{32}$ \end{theorem} \begin{theorem}\label{P4theorem} $\pi(P_4,3)=\frac{3}{8}$ \end{theorem} These are special cases of the following conjectures. \begin{conj}\label{cycleconj} $\pi(C_{2d},d)=\frac{d!}{d^d}$ for all $d\geq 4$. \end{conj} \begin{conj}\label{pathconj} $\pi(P_{d+1},d)=\frac{d!}{(d+1)^{d-1}}$ for all $d\geq 3$. \end{conj} Note that the formulas in these two conjectures are the same as in the conjectures about the inducibility of directed cycles and paths in oriented graphs. Conjecture \ref{cycleconj} is significant because, as we show in Proposition \ref{mindensity}, $\pi(H,d)\geq \frac{d!}{d^d}$ for all configurations $H$ in $Q_d$ for all $d\geq 1$. To show $\frac{d!}{d^d}$ is also an upper bound when $d=4$, we needed to find the inducibility of two vertex-disjoint edges in the family of all bipartite graphs.
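Although $\pi(H,d)$ is difficult to determine, the finite quantities $\ex(H,d,n)$ can be computed by brute force for very small $n$ directly from the definitions of Section~\ref{sec:background}. The following sketch (our own illustrative Python code, unrelated to any software cited in this paper, and feasible only for tiny $n$) computes $\ex(P_4,3,4)$; exact copies are checked against the $2^d d!$ automorphisms of $Q_d$, each a coordinate permutation composed with a translation.
\begin{verbatim}
from itertools import combinations, permutations, product

def automorphisms(d):
    # Aut(Q_d): every automorphism is a coordinate permutation
    # followed by a translation (XOR with a fixed vertex)
    return [(perm, mask) for perm in permutations(range(d))
            for mask in range(1 << d)]

def apply_auto(perm, mask, v):
    w = 0
    for i, p in enumerate(perm):
        w |= ((v >> i) & 1) << p
    return w ^ mask

def exact_copies(H, d):
    # the orbit of H under Aut(Q_d), as frozensets of vertex labels
    return {frozenset(apply_auto(p, m, v) for v in H)
            for p, m in automorphisms(d)}

def sub_d_cubes(n, d):
    # each sub-d-cube of Q_n as the list of its 2^d vertices,
    # indexed by the vertices of Q_d
    cubes = []
    for free in combinations(range(n), d):
        fixed = [c for c in range(n) if c not in free]
        for bits in product((0, 1), repeat=n - d):
            base = sum(b << c for b, c in zip(bits, fixed))
            cubes.append([base | sum(((u >> i) & 1) << c
                                     for i, c in enumerate(free))
                          for u in range(1 << d)])
    return cubes

def ex_brute_force(H, d, n):
    good, cubes = exact_copies(H, d), sub_d_cubes(n, d)
    best = 0
    for S in range(1 << (1 << n)):      # all subsets of V(Q_n)
        hits = sum(frozenset(u for u in range(1 << d)
                             if (S >> cube[u]) & 1) in good
                   for cube in cubes)
        best = max(best, hits)
    return best / len(cubes)

# a perfect path P_4 in Q_3: 000 -> 100 -> 110 -> 111
print(ex_brute_force({0b000, 0b100, 0b110, 0b111}, 3, 4))
\end{verbatim}
Vertices are encoded as integers whose bits are the coordinates; the loop over all $2^{16}$ subsets of $V(Q_4)$ takes on the order of a minute.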
To prove both Theorem \ref{PC8theorem} and Theorem \ref{P4theorem}, we show that the $d$-cube density we are trying to determine is equal to the maximum fraction of $d$-sequences of an $n$-set which have certain properties, and then we solve the sequence problems. \section{Constructions} \label{sec:constructions} Consider the following construction, which gives a lower bound for the $d$-cube density of any configuration $H$ in $Q_d$, for any $d$. Let $[n]$ denote the set $\{1,2,\ldots,n\}$. We partition $[n]$ into $A_1,A_2,\ldots,A_d$ and let $B$ be the set of binary $d$-tuples representing $H$. For each vertex $\vec{v}=\left[v_1,v_2,\ldots,v_n\right]$ in $Q_n$ we let $\vec{v}(A_i)$ equal 0 or 1 according to $\displaystyle\vec{v}(A_i)\equiv \sum_{j\in A_i}v_j \mod 2$. We put $\vec{v}$ in $S$ if and only if the $d$-tuple $\left( \vec{v}(A_j) \right)_{j\in[d]}$ is in $B$. For example, for a perfect 8-cycle in $Q_4$, we could have $B=\{0000,\allowbreak 1000,\allowbreak 1100,\allowbreak 1110,\allowbreak 1111,\allowbreak 0111,\allowbreak 0011,\allowbreak 0001\}$, and $\vec{v}$ would be in $S$ if and only if its numbers of 1s in coordinates in $A_1,A_2,A_3,A_4$ are either even, even, even, even, or odd, even, even, even, and so on. We observe that if a sub-$d$-cube has one varying coordinate in each of $A_1,A_2,\ldots,A_d$, then it will contain an exact copy of $H$. By taking an equipartition of $[n]$, we find the following lower bound: \begin{prop}\label{mindensity} $\displaystyle \pi(H,d)\geq \frac{d!}{d^d}$ for all configurations $H$ in $Q_d$ for all positive integers $d$. \end{prop} We call a set $S$ constructed in this way a \emph{blow-up of $H$}. This notion of blow-up is clearly related to, but not the same as, the blow-up of a graph (for one thing, a blow-up of a graph has one part for each vertex, whereas a blow-up of a configuration in $Q_d$ has $d$ parts). In $Q_2$, the only configuration $H$ for which equality holds in Proposition \ref{mindensity} is two adjacent vertices. The smallest upper bound for any of the 22 possible configurations in $Q_3$, as computed by Rahil Baber using flag algebras, is $.3048$ (when $H$ is two adjacent vertices in $Q_3$), so it is highly unlikely that any configuration in $Q_3$ has 3-cube density equal to $\frac{2}{9}$, the lower bound provided by Proposition \ref{mindensity}. Of the 238 possible configurations in $Q_4$, only three have flag-algebra upper bounds on their 4-cube densities less than $.1$: one is the perfect 8-cycle, for which Theorem \ref{PC8theorem} says the exact value is $\frac{3}{32}=.09375$, and another is a graph-theoretic, but not perfect, induced 8-cycle, with flag-algebra 4-cube density upper bound $.094205$. So there seems to be something special about the perfect 8-cycle. For the perfect path $P_{d+1}$ in $Q_d$ it turns out that a blow-up of $C_{2d+2}$ gives a better lower bound than that provided by Proposition \ref{mindensity}: \begin{prop}\label{path_density} $\displaystyle\pi(P_{d+1},d)\geq\frac{d!}{(d+1)^{d-1}}$ for all positive integers $d$. \begin{proof} Let $S$ be a blow-up of $C_{2d+2}$. That is, we partition $[n]$ into $A_1,\allowbreak A_2,\ldots,\allowbreak A_{d+1}$ and let $B$ be the set of binary $(d+1)$-tuples in a copy of $C_{2d+2}$. For each vertex $\vec{v}=[v_1,\ldots,v_n]$ in $Q_n$, we let $\vec{v}(A_i)$ equal 1 or 0 according to $\displaystyle\vec{v}(A_i)\equiv \sum_{j\in A_i}v_j \mod 2$. We put $\vec{v}$ in $S$ if and only if the $(d+1)$-tuple $\left( \vec{v}(A_j) \right)_{j\in[d+1]}$ is in $B$.
If a sub-$d$-cube has one varying coordinate in each of $d$ of the parts (and none in the remaining part), then it will contain an exact copy of $P_{d+1}$. For example, if $d=3$ and $B=\{0000,\allowbreak 1000,\allowbreak 1100,\allowbreak 1110,\allowbreak 1111,\allowbreak 0111,\allowbreak 0011,\allowbreak 0001\}$, and we select a sub-3-cube with one varying coordinate in each of $A_1,A_2,$ and $A_4$ (so each coordinate in $A_3$ is fixed), then if $\vec{v}(A_3)=0$ the 4-tuples $0001, \allowbreak 0000,\allowbreak 1000,\allowbreak 1100$ in $B$ give us an exact copy of $P_4$ in any such sub-3-cube, while if $\vec{v}(A_3)=1$, then $1110,\allowbreak 1111,\allowbreak 0111,\allowbreak 0011$ does the same. If the partition is an equipartition, selecting the coordinates of the sub-$d$-cube one-by-one shows that \[ \pi(P_{d+1},d)\geq \frac{(d+1)!}{(d+1)^{d}} = \frac{d!}{(d+1)^{d-1}}. \] \end{proof} \end{prop} \section{Local density, perfect cycles, and sequences} \label{sec:local_density_perfect_cycles_and_sequences} Let $H$ be a configuration in $Q_d$ and $S$ be a subset of $V(Q_n)$. For each vertex $v$ in $S$, we let $\Gin(H,d,n,S)$ be the number of sub-$d$-cubes $R$ of $Q_n$ containing $v$ in which $S\cap R$ is an exact copy of $H$, $\Gmaxin(H,d,n)=\max_{v\in S} \Gin(H,d,n,S)$ where the max is over all $v$ and $S$ such that $v\in S$, $\gin(H,d,n,S)=\frac{\Gin(H,d,n,S)}{\binom{n}{d}}$ denote the fraction of sub-$d$-cubes $R$ of $Q_n$ containing $v$ in which $S\cap R$ is an exact copy of $H$, and $\exlin(H,d,n)=\frac{\Gmaxin(H,d,n)}{\binom{n}{d}}$. As with $\ex(H,d,n)$, a simple averaging argument shows that $\exlin(H,d,n)$ is a nonincreasing function of $n$, so we define $\pilin(H,d)$ by \[ \pilin(H,d)=\lim_{n \to \infty}\exlin(H,d,n). \] For each vertex $v\not\in S$ we perform a similar procedure to define $\Gout(H,d,n,S)$, $\Gmaxout(H,d,n)$, $\gout(H,d,n,S)$, and $\pilout(H,d)$. So $\pilin(H,d)$ and $\pilout(H,d)$ are the maximum local densities of sub-$d$-cubes with an exact copy of $H$ among all sub-$d$-cubes containing a fixed vertex $v$, where $v$ is in $S$ or out of $S$, respectively. Finally, we define $\pil(H,d)$ to be $\max\{\pilin(H,d),\pilout(H,d)\}$. Since the global density cannot be more than the maximum local density, we must have $\pi(H,d)\leq \pil(H,d)$. For most configurations $H$ for which we have been able to determine $\pi(H,d)$, our procedure has been to prove an upper bound for $\pil(H,d)$ which matches the density of a construction. If $S\subseteq V(Q_n)$ we let $\overline{S}$ denote $V(Q_n)\setminus S$, and if $H$ is a configuration in $Q_d$ we let $\overline{H}$ denote $V(Q_d)\setminus H$. Clearly $\pi(H,d)=\pi(\overline{H},d)$ and $\pilin(H,d)=\pilout(\overline{H},d)$. If $H$ is self-complementary in $Q_d$, i.e., $\overline{H}$ is an exact copy of $H$, then $\pilin(H,d)=\pilout(\overline{H},d)=\pilout(H,d)=\pil(H,d)$. Every configuration in $Q_3$ with 4 vertices is self-complementary, including $P_4$, and it is easy to check that $C_8$ is self-complementary in $Q_4$ (the complements of the two non-perfect induced 8-cycles in $Q_4$ are not 8-cycles). Now we pose and solve a different maximization problem whose answer we will show to be $\pi(C_8,4)$. Let $S$ be a set of size $n$ and $d$ a positive integer. We consider a set $A(d,n)$ of sequences of $d$ distinct elements of $S$. Given a sequence $w$ in $A(d,n)$, an \emph{end-segment of $w$} is the set of the first $j$ elements of $w$ or the set of the last $j$ elements of $w$, for some $j$ in $[1,d)$.
We say the set $A(d,n)$ has Property $U$ if the two following conditions are satisfied: \begin{enumerate} \item For each pair of sequences $w$ and $x$ in $A(d,n)$, if $D$ is an end-segment of $w$ and all elements of $D$ are in the sequence $x$, then $D$ is an end-segment of $x$ with elements in the same order as in $w$ (so if $abc$ is the beginning of $w$, and $a,b,$ and $c$ all appear in $x$, then either $x$ begins $abc$ or ends $cba$). \item A sequence and its reversal are not both in $A(d,n)$ (unless $d=1$). \end{enumerate} For example, if $x$ and $w$ are sequences in a set $A(5,n)$ with Property $U$ and if $x$ is $abcde$, then $w$ cannot be $abceg$ (or its reversal), $abegh$ (or its reversal), or $ghiaj$ (or its reversal), but could be $fbdcg$ (or its reversal) or $edgbh$ (or its reversal). It is easy to see that no two sequences in $A(d,n)$ can have the same set of $d$ elements. Let $T(d,n)$ denote the maximum size of a family $A(d,n)$ with Property $U$. \begin{prop} $\Gmaxin(C_{2d},d,n)=T(d,n)$ for all $d\geq 2$. \begin{proof} Without loss of generality, we can assume that $\emptyset$ is a vertex where the local $d$-cube density of $C_{2d}$ is a maximum and that $\Gin(C_{2d},d,n,S)=\Gmaxin(C_{2d},d,n)$. Now we construct a set $A(d,n)$ of $d$-sequences. The sequence $a_1,a_2,\ldots,a_d$ or its reversal is in $A(d,n)$ if and only if the intersection of $S$ and the sub-$d$-cube containing $\emptyset$ in which $a_1,a_2,\ldots,a_d$ are the nonconstant coordinates is equal to $\{\emptyset,\allowbreak a_1,\allowbreak a_1 a_2,\allowbreak a_1 a_2 a_3,\ldots,\allowbreak a_1 a_2\cdots a_d,\allowbreak a_2 a_3\cdots a_d,\ldots,\allowbreak a_{d-1}a_d,\allowbreak a_d\}$. We claim that $A(d,n)$ has Property $U$. Suppose it does not, say $x$ and $w$ are sequences in $A(d,n)$ with $a_1 a_2 \cdots a_j$ an end-segment in $x$ all of whose elements are in $w$ but not an end-segment in $w=b_1 b_2\cdots b_d$. Then $\{a_1,\allowbreak a_2,\ldots,\allowbreak a_j\}$ is a subset of $\{b_1,\allowbreak b_2,\ldots,\allowbreak b_d\}$, so it is another vertex in $S$ which is in the sub-$d$-cube containing the perfect $2d$-cycle $\{\emptyset,\allowbreak b_1,\allowbreak b_1 b_2,\ldots,\allowbreak b_{d-1}b_d,\allowbreak b_d\}$, a contradiction. Similarly, by reversing the procedure, a family of sequences with Property $U$ and size $T(d,n)$ can be used to construct $T(d,n)$ sub-$d$-cubes containing $\emptyset$ with exact copies of $C_{2d}$. \end{proof} \end{prop} We define $t(d,n)$ to be $\frac{T(d,n)}{\binom{n}{d}}$. Hence $t(d,n)=\frac{\Gmaxin(C_{2d},d,n)}{\binom{n}{d}}=\exlin(C_{2d},d,n)$ is a nonincreasing function of $n$, so we can define $t(d)$ by $t(d)=\lim_{n \to \infty}t(d,n)=\lim_{n \to \infty}\exlin(C_{2d},d,n)=\pilin(C_{2d},d)$. Hence we have \begin{prop}\label{pilin=td} For all $d\geq 2$ \[ \pilin(C_{2d},d)=t(d). \] \end{prop} We now calculate $t(3)$. Let $A(3,n)$ be a set of 3-sequences with Property $U$. No symbol can appear at an end of one sequence and in the middle of another, so we let $A$ be the set of symbols which appear at the beginning or end of a sequence and $B$ be the set of symbols which appear in the middle. If $\abs{A}=m$ and $\abs{B}=p$, then, since a sequence and its reversal cannot both be in $A(3,n)$, the total number of sequences is at most $\binom{m}{2}p=\frac{(n-m)m(m-1)}{2}$, which is maximized when $m=\left\lceil{\frac{2n}{3}}\right\rceil$. Hence, \[ \pilin(C_6,3) = t(3) = \lim_{n\to\infty} \frac{\left( \frac{2}{3}n \right)^2 \left( \frac{1}{3}n \right)}{2\binom{n}{3}}=\frac{4}{9}.
\] To find $\pilout(C_6,3)$, we just note that if $S$ is the set of all vertices in $Q_n$ with weight not divisible by 3, then every $Q_3$ containing $\emptyset$ has an exact copy of $C_6$ (the unique vertex of weight 3 in each $Q_3$ containing $\emptyset$ is also not in $S$), so $\pilout(C_6,3)=1$. Using this same set $S$, it is not hard to show that $\pi(C_6,3)\geq \frac{1}{3}$ (any $Q_3$ whose smallest-weight vertex has weight a multiple of 3 has an exact copy of $C_6$). We have been unable to show equality, but Baber's flag algebra upper bound of $.3333333336$ would seem to show equality must hold. To prove Theorem \ref{PC8theorem}, we will prove a result about inducibility in bipartite graphs. \begin{theorem}\label{bipartite_max_4_set_matching} The limit, as $n$ goes to infinity, of the maximum over all bipartite graphs $G$ with $n$ vertices of the fraction of sets of 4 vertices of $G$ which induce two disjoint edges is equal to $\frac{3}{32}$. \begin{proof} Suppose $M,P$ is a bipartition of $V(G)$, where $\abs{M}=m$ and $\abs{P}=p$. Let $\set{u_1,u_2,\ldots,u_m}$ and $\set{v_1,v_2,\ldots,v_p}$ be the vertices of $M$ and $P$ with respective degrees $r_1,r_2,\ldots,r_m$ and $c_1,c_2,\ldots,c_p$. For $i\neq j$, let $t_{i,j}$ denote the number of vertices in $P$ which are adjacent to both $u_i$ and $u_j$. Hence the total number of ``good'' sets of 4 vertices is \[ N=\sum_{i<j} (r_i-t_{i,j})(r_j-t_{i,j}), \] where the sum is over all pairs $i,j$ such that $1\leq i<j\leq m$. To get an upper bound for this, we first get an upper bound on the sum $S$ of all pairs of the factors in the products: \[ S = \sum_{i<j}\left[ (r_i-t_{i,j}) + (r_j-t_{i,j})\right] = (m-1)\sum_{i=1}^m r_i - 2\sum_{l=1}^p \binom{c_l}{2}. \] This is because each $r_i$ appears in a sum with each $r_j$ where $j\neq i$, and because $\sum_{i<j} t_{i,j}=\sum_{l=1}^p \binom{c_l}{2}$, since each pair of edges adjacent to a vertex $v_l$ is counted precisely once in the sum. Let $w=\sum_{i=1}^m r_i = \sum_{l=1}^p c_l$. Then \begin{align*} S &= (m-1)\sum_{i=1}^m r_i - \sum_{l=1}^p c_l^2 + \sum_{l=1}^p c_l\\ &= mw-\sum_{l=1}^p c_l^2\\ &\leq mw-\sum_{l=1}^p\left( \frac{w}{p} \right)^2\\ &= mw - \frac{w^2}{p}, \end{align*} where the inequality is by Cauchy-Schwarz. The function $f(w)=mw-\frac{w^2}{p}$ is maximized when $w=\frac{mp}{2}$, so $S\leq \frac{m^2p}{4}$. Now, we return to our consideration of $N=\sum_{i<j} (r_i-t_{i,j})(r_j-t_{i,j})$. The product $(r_i-t_{i,j})(r_j-t_{i,j})$ is at most $\left(\frac{p}{2}\right)^2$, achieved when $r_i=r_j=\frac{p}{2}$ and $t_{i,j}=0$, in which case $(r_i-t_{i,j})+(r_j-t_{i,j})=p$. Since $S\leq\frac{m^2p}{4}$, to maximize $N$ we want to have $\frac{m^2}{4}$ products equal to $\left( \frac{p}{2} \right)^2$, with all other products being $0\cdot 0$. Hence $N\leq \frac{m^2p^2}{16}$. Since $m+p=n$, this is maximized when $m=p=\frac{n}{2}$, so $N\leq\frac{n^4}{256}$. It is not hard to conclude from the above inequalities that equality holds only when $G$ is two disjoint copies of $K_{\frac{n}{4},\frac{n}{4}}$. This is the same graph which maximizes, among all graphs with $n$ vertices, the number of induced subgraphs with 4 vertices consisting of two edges which share a vertex and an isolated vertex. This gives a fraction of ``good'' sets of 4 vertices equal to \[ \frac{\frac{n^4}{256}}{\binom{n}{4}} = \frac{n^3}{(n-1)(n-2)(n-3)}\cdot \frac{3}{32}. \] \end{proof} \end{theorem} \begin{proof}[Proof of Theorem \ref{PC8theorem}] By Proposition \ref{mindensity}, $\pi(C_8,4)\geq\frac{3}{32}$.
Since $C_8$ is self-complementary in $Q_4$, $\pi(C_8,4)\leq\pil(C_8,4)=\pilin(C_8,4)=t(4)$, the last equality by Proposition \ref{pilin=td}. So to complete the proof we just need to show that $t(4)\leq \frac{3}{32}$. Let $A(4,n)$ be a maximum-size set of 4-sequences with Property $U$ with elements from an $n$-set. Let $A=\{i\in[n]:i \textrm{ is the first or last element in a sequence in }A(4,n)\}$ and let $B=[n]\setminus A$. We construct a bipartite graph $G$ with vertex bipartition $A,B$. If $a\in A$ and $b\in B$, then $[a,b]$ is an edge of $G$ if and only if $a$ and $b$ are consecutive elements in some sequence in $A(4,n)$ (so some sequence begins $ab$ or ends $ba$). Suppose $a_1 b_1 b_2 a_2$ is a sequence in $A(4,n)$. Then $[a_1,b_1]$ and $[a_2,b_2]$ are edges of $G$. Suppose $a_1 b_2$ (or $a_2 b_1$) is also an edge. Then $a_1 b_2$ is an end-segment of some sequence in $A(4,n)$, which is impossible because $\{ a_1, b_2\}\subseteq \{ a_1, \allowbreak b_1,\allowbreak b_2,\allowbreak a_2 \}$, but $a_1 b_2$ is not an end-segment in $a_1 b_1 b_2 a_2$. Hence the size $T(4,n)$ of $A(4,n)$ is at most the number of sets of 4 vertices in $G$ which induce two disjoint edges, and by Theorem \ref{bipartite_max_4_set_matching}, \[ t(4)=\lim_{n\to\infty}\frac{T(4,n)}{\binom{n}{4}}\leq \frac{3}{32}. \] \end{proof} So Conjecture~\ref{cycleconj} is true for $d=4$. Zongchen Chen \cite{Chen} has shown that $t(d)=\frac{d!}{d^d}$ for all $d\geq 4$. While we know that $\pilin(C_{2d},d)=t(d)=\frac{d!}{d^d}$ for all $d\geq 4$, we have been unable to show that $\pilout(C_{2d},d)=\pilin(C_{2d},d)$ if $d\geq 5$ (our proof for $d=4$ used the fact that $C_8$ is self-complementary in $Q_4$), which would complete a proof of Conjecture~\ref{cycleconj}. We have seen that equality in Conjecture \ref{cycleconj} does not hold for $d=3$ (since $\pi(C_6,3)\geq \frac{1}{3}$). \section{Perfect Paths} \label{sec:perfect_paths} To determine the $d$-cube density of $P_4$ in $Q_3$, as mentioned in Section \ref{sec:local_density_perfect_cycles_and_sequences}, our procedure is to prove an upper bound for $\pil(P_4,3)$ which matches the density of the construction given in Proposition \ref{path_density}. We will show \begin{prop}\label{pilP_4,3} $\displaystyle\pil(P_4,3)\leq \frac{3}{8}$. \end{prop} Let $S$ be a set of size $n$ and $d$ a positive integer. Let $R(d,n)$ be a family of pairs of sequences of elements in $S$, which we call \emph{$d$-bisequences}, one sequence of length $i$ and the other of length $d-i$, where $i\in [0,d]$, and where the $d$ elements in the pair of sequences are distinct. Given a bisequence $w=\{a_1;a_2\}$ in $R(d,n)$, an \emph{initial-segment of $w$} is the set of the first $j$ elements of either $a_1$ or $a_2$, where $j\in [1,d]$. We say that the set $R(d,n)$ has Property $V$ if the following conditions are satisfied: \begin{enumerate} \item For each pair of bisequences $w=\{a_1;a_2\}$ and $x=\{a_3;a_4\}$ in $R(d,n)$, if $L$ is an initial segment of $w$ and all elements of $L$ are in the $d$-bisequence $x$, then $L$ is an initial segment of one of the sequences in $x$ with elements in the same order as in $w$. \item A bisequence $\{a_1;a_2\}$ and its reversal $\{a_2;a_1\}$ are not both in $R(d,n)$. \end{enumerate} Let $B(d,n)$ denote the maximum size of a family of bisequences $R(d,n)$ with Property $V$ and let $b(d,n)=\frac{B(d,n)}{\binom{n}{d}}$. \begin{prop}\label{path-bisequence} $\exlin(P_{d+1},d,n)=b(d,n)$.
\begin{proof} Without loss of generality, we can assume that $\emptyset$ is a vertex where the local $d$-cube density of $P_{d+1}$ is a maximum. Now we construct a set $R(d,n)$ of bisequences. The bisequence $\{(a_1,a_2,\ldots,a_j);(b_1,b_2,\ldots,b_i)\}$ or its reversal is in $R(d,n)$ if and only if the intersection of $S$ and the sub-$d$-cube containing $\emptyset$ in which $a_1,a_2,\ldots,a_j,b_1,b_2,\ldots,b_i$ are the nonconstant coordinates is equal to $\{a_1 a_2\cdots a_j, a_1 a_2\cdots a_{j-1}, \ldots,a_1,\emptyset,b_1,b_1 b_2,\ldots, b_1 b_2\cdots b_i\}$. Note that $i$ or $j$ could be equal to 0. We claim that $R(d,n)$ has Property $V$. Suppose it does not, say $w$ and $x$ are bisequences in $R(d,n)$ with $a_1a_2\cdots a_l$ an initial-segment of $w$ all of whose elements are in $x$ but not an initial-segment (or not in the same order as in $w$) of either of the sequences in $x=\{(b_1,b_2,\ldots,b_j);(c_1,c_2,\ldots,c_i)\}$. Then $\{\emptyset,a_1,a_1 a_2,\ldots,a_1 a_2 \cdots a_l\}$ is the vertex set of a path contained in the $d$-cube containing the path $\{b_1 b_2\cdots b_j,b_1 b_2\cdots b_{j-1},\ldots,b_1,\emptyset,c_1,c_1 c_2,\ldots,c_1 c_2\cdots c_i\}$, but it is not a subpath of it, a contradiction. Similarly, reversing the procedure, a family of bisequences with Property $V$ and size $B(d,n)$ can be used to construct $B(d,n)$ sub-$d$-cubes containing $\emptyset$ with exact copies of $P_{d+1}$. Hence $\Gmaxin(P_{d+1},d,n)=B(d,n)$, and dividing by $\binom{n}{d}$ gives the desired equality. \end{proof} \end{prop} Since $b(d,n)=\exlin(P_{d+1},d,n)$ is a non-increasing function of $n$, we can define $b(d)$ to be equal to $\displaystyle\lim_{n\to\infty}b(d,n)$. So $b(d)$ is the limit as $n$ goes to infinity of the maximum fraction of $d$-subsets of $[n]$ which can be the sets of elements of a family $R(d,n)$ of bisequences with Property $V$. We have \[ \pilin(P_{d+1},d)=\lim_{n\to\infty}\exlin(P_{d+1},d,n)=\lim_{n\to\infty}b(d,n)=b(d), \] and we can find $\pilin(P_{d+1},d)$ by finding $b(d)$. Clearly if $R(d,n)$ is a family of $d$-bisequences with Property $V$, then no symbol can appear as the first element of a sequence in one bisequence and as a non-initial element of a sequence in another. Furthermore, the following properties are easy to verify. \begin{lem}\label{triangle-free3} Suppose $R(3,n)$ is a family of 3-bisequences with Property $V$. \begin{enumerate}[label=(\roman*)] \item If $\{bxy;\emptyset\}$ and $\{bxz;\emptyset\}$ are in $R(3,n)$, then $\{byz;\emptyset\}$ is not. \item If $\{bxz;\emptyset\}$ and $\{byz;\emptyset\}$ are in $R(3,n)$ then $\{bxy;\emptyset\}$ is not. \item If $\{bx;c\}$ and $\{bx;d\}$ are in $R(3,n)$, then $\{dx;c\}$ is not. \item If $\{bx;c\}$ and $\{dx;c\}$ are in $R(3,n)$, then $\{dx;b\}$ is not. \end{enumerate} \begin{proof} (i) If the assumption in (i) holds, then $\{byz;\emptyset\}$ cannot be in $R(3,n)$ because it has ``$by$'' as an initial segment, and the elements of that segment lie in one of the sequences of $\{bxy;\emptyset\}$ of which it is not an initial segment, violating Property $V$. Statements (ii), (iii), and (iv) are just as easy to verify. \end{proof} \end{lem} \begin{proof}[Proof of Proposition \ref{pilP_4,3}] By Proposition \ref{path-bisequence} it suffices to show that $b(3)\leq \frac{3}{8}$. Let $R(3,n)$ be a family of 3-bisequences with Property $V$.
Let $A$ be the set of elements in $[n]$ which appear as the first element of some sequence in $R(3,n)$, let $W=[n]\setminus A$, and let $a=\abs{A}$ and $w=\abs{W}$. For each $e\in A$, let $G_e$ be the graph with vertex set $W$ and edge set $\{[x,y] : \{eyx;\emptyset\}\textrm{ or }\{exy;\emptyset\} \textrm{ or one of their reversals is in }R(3,n)\}$. By statements (i) and (ii) in Lemma \ref{triangle-free3}, $G_e$ is a triangle-free graph for each $e \in A$. Hence by Tur\'{a}n's theorem, at most $\frac{w^2}{4}$ unordered pairs $(x,y)$ of elements $x$ and $y$ in $W$ can appear as edges in $G_e$. That means that the total number of 3-subsets of $[n]$ which can be the set of elements of a 3-bisequence in $R(3,n)$ with one element in $A$ and two in $W$ is at most $\frac{w^2}{4}\cdot a$. Similarly, for each element $x$ in $W$ we let $G_x$ be the graph with vertex set $A$ and edge set $\{[b,c] : \{bx;c\}\textrm{ or }\{cx;b\}\textrm{ or one of their reversals is in }R(3,n)\}$. By statements (iii) and (iv) in Lemma \ref{triangle-free3}, $G_x$ is triangle-free, and an argument identical to the one for $G_e$ shows that the total number of 3-subsets of $[n]$ which can be the set of elements of a 3-bisequence in $R(3,n)$ with two elements in $A$ and one in $W$ is at most $w\cdot\frac{a^2}{4}$. If $B(3,n)$ is the size of $R(3,n)$, then \begin{align*} B(3,n) &\leq a\cdot \frac{w^2}{4}+ w\cdot \frac{a^2}{4}\\ &= \frac{aw}{4}\cdot n. \end{align*} This is maximized when $a=w=\frac{n}{2}$, so $B(3,n)\leq\frac{n^3}{16}$ and $b(3)=\lim_{n \to \infty}\frac{B(3,n)}{\binom{n}{3}}\leq \frac{3}{8}$. \end{proof} This also shows that Theorem \ref{P4theorem} holds. \begin{proof}[Proof of Theorem \ref{P4theorem}] By Proposition \ref{path_density} and Proposition \ref{pilP_4,3}, \[ \frac{3}{8}\leq\pi(P_4,3)\leq\pil(P_4,3)\leq\frac{3}{8}. \] \end{proof} So Conjecture \ref{pathconj} holds for $d=3$. Lending credence to this conjecture is that Baber's flag algebra upper bound for $\pi(P_5,4)$ is .19200000058, while the conjecture with $d=4$ gives $\frac{24}{125} = .192$. \section*{Acknowledgement} We thank Zongchen Chen for his insightful observations on $d$-cube density. \bibliographystyle{amsplain}
\section*{Acknowledgments} This research was partially supported by the US National Science Foundation grants DMS-1819222, DMS-1912747, CCF-1934568, DMS-2012266; the US Office of Naval Research grant N00014-20-1-2381; and Russell Sage Foundation Grant 2196. \section{Applications in Graph Signal Approximation} \label{sec:applications} In this section, we demonstrate the usefulness of our proposed NGWP dictionaries for efficient approximation of graph signals on two graphs, and compare their performance with that of the previously proposed methods, i.e., the global graph Laplacian eigenbasis; the graph Haar basis; the graph Walsh basis; the GHWT coarse-to-fine (c2f) best basis~\cite{IRION-SAITO-GHWT}; the GHWT fine-to-coarse (f2c) best basis~\cite{IRION-SAITO-GHWT}; and the eGHWT best basis~\cite{SHAO-SAITO-SPIE}. Note that the graph Haar basis is a particular basis choosable from the GHWT f2c dictionary and the eGHWT dictionary, while the graph Walsh basis is choosable from both versions of the GHWT dictionaries as well as the eGHWT; see~\cite{IRION-SAITO-TSIPN, SHAO-SAITO-SPIE} for the details. We use the $\ell^1$-norm minimization as the best-basis selection criterion for all the best bases in our experiments. The edge weights of the dual graph $G^\star$ are the reciprocals of the DAG pseudometric between the corresponding eigenvectors, as defined in Eq.~\eqref{eq:DAG}. For a given graph $G$ and a graph signal $\bm{f}$ defined on it, we first expand $\bm{f}$ with respect to each of those dictionaries. Then, to measure the approximation performance, we sort the expansion coefficients in non-increasing order of magnitude, and use the top $k$ most significant terms to approximate $\bm{f}$, where $k$ ranges from $0$ to about $0.5 \times N$, i.e., 50\% of the total number of terms. The approximation performance is measured by the relative $\ell^2$ approximation error as a function of the fraction of coefficients retained, which ranges from $0$ to $0.5$. \subsection{Images Sampled on the Sunflower Graph} We consider the so-called ``sunflower'' graph shown in Fig.~\ref{fig:sunflower-graph}. This particular graph has $400$ nodes, and each edge weight is set as the reciprocal of the Euclidean distance between the endpoints of that edge. Consistently counting the number of spirals in such a sunflower graph gives rise to the \emph{Fibonacci numbers}, i.e., $0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, \dots$. One may observe that 8 clockwise and 13 counterclockwise spirals emanate from the center in Fig.~\ref{fig:sunflower-graph}. See, e.g., \cite{MATHAI-DAVIS, VOGEL}, and our code \texttt{SunFlowerGraph.jl} in \cite{HAOTIAN-GITHUB}, for algorithms to construct such sunflower grids and graphs. We can also view such a distribution of nodes as a simple model of the distribution of \emph{photoreceptors in mammalian visual systems} due to cell generation and growth; see, e.g., \cite[Chap.~9]{RODIECK}. \begin{figure} \centering\includegraphics[width = 0.5\textwidth]{Sunflower.png} \caption{Sunflower graph ($N=400$); node radii vary for visualization purposes only} \label{fig:sunflower-graph} \end{figure} Such a viewpoint motivates the following sampling scheme: we overlay the sunflower graph on several parts of the standard Barbara image and sample the grayscale value at each node using bilinear interpolation.
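For concreteness, the top-$k$ approximation protocol just described can be written in a few lines (a schematic sketch assuming an orthonormal basis stored as the columns of a matrix; the function name is ours and does not refer to any released code). For an orthonormal basis, Parseval's relation lets one read the relative error directly off the sorted coefficient magnitudes.
\begin{verbatim}
import numpy as np

def relative_l2_errors(f, basis, fractions):
    """Relative errors ||f - f_k||_2 / ||f||_2, where f_k keeps the
    k largest-magnitude coefficients of f in the orthonormal `basis`
    (an N x N matrix whose columns are the basis vectors)."""
    coeffs = basis.T @ f                          # analysis step
    energy = np.sort(np.abs(coeffs))[::-1] ** 2   # non-increasing
    # tail[k] = squared l2 norm of the dropped coefficients
    tail = np.concatenate((np.cumsum(energy[::-1])[::-1], [0.0]))
    nf = np.linalg.norm(f)
    return [np.sqrt(tail[int(round(fr * len(coeffs)))]) / nf
            for fr in fractions]
\end{verbatim}
For example, \texttt{relative\_l2\_errors(f, basis, np.linspace(0, 0.5, 51))} produces one error curve of the kind plotted in the figures below.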
In particular, we sample two different regions: her left eye and pants, where quite different image features are represented, i.e., a piecewise-smooth image containing oriented edges and a textured image with directional oscillatory patterns, respectively. First, let us discuss our approximation experiments on Barbara's eye graph signal, which are shown in Fig.~\ref{fig:barb-eye}. \begin{figure} \begin{subfigure}{0.325\textwidth} \centering\includegraphics[width = \textwidth]{barb_sunflower_eye.png} \caption{The sunflower graph overlaid on Barbara's left eye} \end{subfigure} \begin{subfigure}{0.325\textwidth} \centering\includegraphics[width = \textwidth]{SunFlower_barbara_feye.png} \caption{Barbara's left eye as an input graph signal} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_approx.png} \caption{Approximation performance of various methods} \end{subfigure} \caption{Barbara's left eye region sampled on the sunflower graph nodes (a) as a graph signal (b); the relative $\ell^2$ approximation errors by various methods (c)} \label{fig:barb-eye} \end{figure} From Fig.~\ref{fig:barb-eye}(c), we observe the following: 1) the VM-NGWP best basis performed best, closely followed by the PC-NGWP best basis; 2) the global graph Laplacian eigenbasis worked relatively well; and 3) those bases chosen from the Haar/Walsh wavelet packet dictionaries did not perform well. Note that the GHWT c2f best basis turned out to be the graph Walsh basis for this graph signal. These observations can be attributed to the fact that the Barbara's eye graph signal is not of a piecewise-\emph{constant} nature: it is a piecewise-\emph{smooth} graph signal. Hence, those methods using smoother dictionary vectors worked better. Yet, the smooth localized basis vectors in the NGWP dictionaries made a difference in performance compared to the global graph Laplacian eigenbasis. In other words, the localized basis vectors were important for efficiently capturing the features of this graph signal. In order to examine what kind of basis vectors were chosen as the best basis to approximate this Barbara's eye signal, we display the most important nine VM-NGWP best basis vectors in Fig.~\ref{fig:barb-eye-top9}. The corresponding PC-NGWP best basis vectors are relatively similar; hence they are not shown here. We note that most of these top basis vectors essentially work as oriented edge detectors for the eye, including the iris part, while the basis vectors \#2 and \#7 take care of shading and peripheral features of this region.
\begin{figure} \begin{center} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector2.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector3.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector4.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector5.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector6.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector7.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector8.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector9.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_feye_DAG_VM_NGW_important_basis_vector10.png} \end{subfigure} \caption{Nine most important VM-NGWP vectors (the DC vector not shown) for Barbara's eye} \label{fig:barb-eye-top9} \end{center} \end{figure} Now, let us discuss our second approximation experiment: Barbara's pants region as an input graph signal, as shown in Fig.~\ref{fig:barb-pants}. The nature of this graph signal is completely different from that of the eye region: it is dominated by the directional oscillatory patterns of her pants. \begin{figure} \begin{subfigure}{0.325\textwidth} \centering\includegraphics[width = \textwidth]{barb_sunflower_pants.png} \caption{The sunflower graph overlaid on Barbara's pants/knee region} \end{subfigure} \begin{subfigure}{0.325\textwidth} \centering\includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser.png} \caption{Barbara's pants region as an input graph signal} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_approx.png} \caption{Approximation performance of various methods} \end{subfigure} \caption{Barbara's pants region sampled on the sunflower graph nodes (a) as a graph signal (b); the relative $\ell^2$ approximation errors by various methods (c)} \label{fig:barb-pants} \end{figure} From Fig.~\ref{fig:barb-pants}(c), we observe the following: 1) the NGWP best bases and the eGHWT best basis performed very well, and their performance is quite close until about $30$\% of the coefficients are retained; eventually, the eGHWT best basis outperformed all the others; 2) the GHWT f2c best basis performed relatively well, behind those three bases in 1); 3) there is a substantial gap in performance between those four bases and the rest, i.e., the graph Haar basis, the graph Walsh basis (= the GHWT c2f best basis in this case too), and the global graph Laplacian eigenbasis. The eGHWT is known to be quite efficient in capturing oscillating patterns, as shown by Shao and Saito for the graph setting~\cite{SHAO-SAITO-SPIE} and by Lindberg and Villemoes for the classical non-graph setting~\cite{LINDBERG-VILLEMOES}.
Hence, it is encouraging that our NGWPs are competitive with the eGHWT for this type of textured signal. On the other hand, the graph Haar basis, the graph Walsh basis, and the global graph Laplacian eigenbasis do not have oriented anisotropic basis vectors, and consequently they performed poorly. As in the case of Barbara's eye signal, we display the most important nine NGWP best basis vectors for approximating Barbara's pants signal in Fig.~\ref{fig:barb-pants-top9}, but this time we display the PC-NGWP best basis vectors, since that best basis performed slightly better than the VM-NGWP best basis. The corresponding VM-NGWP best basis vectors, however, are relatively similar. We note that the majority of these basis vectors are of a higher-frequency nature than those for the eye signal shown in Fig.~\ref{fig:barb-eye-top9}, reflecting the oscillating anisotropic patterns of her pants. The basis vector \#1 takes care of shading in this region, while the basis vector \#5 is localized around the lower right peripheral region. The latter takes care of the strong spotty pattern (the dark point surrounded by light gray points) around that region in the input graph signal shown in Fig.~\ref{fig:barb-pants}(b). \begin{figure} \begin{center} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector2.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector3.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector4.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector5.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector6.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector7.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector8.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector9.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{SunFlower_barbara_ftrouser_DAG_PC_NGW_important_basis_vector10.png} \end{subfigure} \caption{Nine important PC-NGWP vectors (the DC vector not shown) for Barbara's pants} \label{fig:barb-pants-top9} \end{center} \end{figure} \subsection{Toronto Traffic Network} We obtained the road network data of the City of Toronto from its open data portal\footnote{URL: \url{https://open.toronto.ca/dataset/traffic-signal-vehicle-and-pedestrian-volumes}}. Using the street names and intersection coordinates included in the dataset, we construct the graph representing the road network, with $N = 2275$ nodes and $M = 3381$ edges. Fig.~\ref{fig:toronto-fdensity}(a) displays this graph. As before, each edge weight was set as the reciprocal of the Euclidean distance between the endpoints of that edge. We analyze two graph signals on this street network: 1) the spatial distribution of the street intersections; and 2) the pedestrian volume measured at such intersections.
The graph signal 1) was constructed by counting the number of nodes within a disk of radius about $5$ km centered at each node. In other words, this is simply a smoothed histogram of the distribution of street intersections, computed with overlapping circular bins of equal size. The radius of this disk was chosen as the longest edge length, measured in the Euclidean distance, among all $3381$ edges; this longest edge is located at the northeast corner of the graph, as one can easily see in Fig.~\ref{fig:toronto-fdensity}(a). The graph signal 2) is the most recent 8 peak-hour volume counts collected at those intersections (i.e., nodes in this graph) where there are traffic signals. The data were typically collected between the hours of 7:30 am and 6:00 pm, over the period of 03/22/2004--02/28/2018.

\begin{figure}
\begin{subfigure}{0.49\textwidth}
\centering\includegraphics[width=\textwidth]{Toronto_fdensity.png}
\caption{A smooth spatial distribution of the street intersections}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering\includegraphics[width=\textwidth]{Toronto_fdensity_DAG_approx.png}
\caption{Approximation performance of various methods}
\end{subfigure}
\caption{(a) A graph signal representing the smooth spatial distribution of the street intersections on the Toronto street network; the horizontal and vertical axes of this plot represent the longitude and latitude geo-coordinates of this area, respectively. (b) The results of our approximation experiments.}
\label{fig:toronto-fdensity}
\end{figure}

From Fig.~\ref{fig:toronto-fdensity}(b), we observe that the qualitative behaviors of these error curves are similar to those of Barbara's eye signal shown in Fig.~\ref{fig:barb-eye}(c). That is, 1) the NGWP best bases outperformed all the others, and the difference between the VM-NGWP and the PC-NGWP is negligible; 2) the global graph Laplacian eigenbasis worked quite well; 3) the bases derived from the Haar-Walsh wavelet packet dictionaries did not perform well. In order to examine what kind of basis vectors were chosen to approximate this smooth histogram of street intersections, we display the most important nine VM-NGWP best basis vectors in Fig.~\ref{fig:toronto-fdensity-top9}. We note that these top basis vectors exhibit different spatial scales: the basis vectors \#4, \#6, and \#9 are rather localized while the basis vectors \#2, \#3, and \#7 are of medium scale, trying to characterize the sharp gradient of the density function toward the most crowded lower middle region. Then, there are coarse scale basis vectors such as \#1 and \#5. The basis vector \#8 is interesting: it is more oscillatory than the others, and tries to characterize the variations within the dense lower middle region.
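As a concrete illustration of the construction of the graph signal 1) described at the beginning of this subsection, here is a minimal NumPy sketch (the function name is ours; this is a sketch, not our actual implementation):
\begin{verbatim}
import numpy as np

def intersection_density(xy, edges):
    # radius = the longest Euclidean edge length among all edges
    r = max(np.linalg.norm(xy[i] - xy[j]) for i, j in edges)
    # for each node, count the nodes within the disk of radius r
    d2 = ((xy[:, None, :] - xy[None, :, :]) ** 2).sum(axis=2)
    return (d2 <= r * r).sum(axis=1).astype(float)
\end{verbatim}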
\begin{figure}
\begin{center}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector2.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector3.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector4.png}
\end{subfigure} \\
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector5.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector6.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector7.png}
\end{subfigure} \\
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector8.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector9.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fdensity_DAG_VM_NGW_important_basis_vector10.png}
\end{subfigure}
\caption{Nine important VM-NGWP vectors (the DC vector not shown) for street intersection density data on the Toronto street map}
\label{fig:toronto-fdensity-top9}
\end{center}
\end{figure}

Now, let us analyze the pedestrian volume data measured at the street intersections as shown in Fig.~\ref{fig:toronto-fp}(a), which is quite localized around the densest region in the lower middle section of the street graph. Fig.~\ref{fig:toronto-fp}(b) shows the approximation errors of various methods.

\begin{figure}
\begin{subfigure}{0.49\textwidth}
\centering\includegraphics[width=\textwidth]{Toronto_fp.png}
\caption{Pedestrian volume graph signal}
\end{subfigure}
\begin{subfigure}{0.49\textwidth}
\centering\includegraphics[width=\textwidth]{Toronto_fp_DAG_approx.png}
\caption{Approximation performance of various methods}
\end{subfigure}
\caption{The pedestrian volume graph signal on the Toronto street network (a); the results of our approximation experiments (b)}
\label{fig:toronto-fp}
\end{figure}

From Fig.~\ref{fig:toronto-fp}(b), we observe the following: 1) the eGHWT best basis clearly outperformed all the other methods; 2) the graph Haar basis and both versions of the GHWT best bases performed relatively well, closely followed by the VM-NGWP best basis; 3) the PC-NGWP best basis was outperformed by the VM-NGWP best basis and the other three bases; 4) the global bases, i.e., the graph Laplacian eigenbasis and the graph Walsh basis, did not perform well. In order to examine the performance difference between the VM-NGWP best basis and the PC-NGWP best basis, we display their most important nine basis vectors in Figs.~\ref{fig:toronto-fp-top9} and \ref{fig:toronto-fp-top9-pc}, respectively. We note that the top VM-NGWP best basis vectors exhibit different spatial scales: the basis vectors \#2 and \#9 are rather localized while the basis vectors \#1, \#3, \#4, \#7, and \#8 are of medium scale, trying to characterize the pedestrian data around the most crowded lower middle region. The basis vectors \#5 and \#6 try to delineate the boundary between the high and the low pedestrian volume regions.
\begin{figure}
\begin{center}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector2.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector3.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector4.png}
\end{subfigure} \\
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector5.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector6.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector7.png}
\end{subfigure} \\
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector8.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector9.png}
\end{subfigure}
\begin{subfigure}{0.325\textwidth}
\includegraphics[width = \textwidth]{Toronto_fp_DAG_VM_NGW_important_basis_vector10.png}
\end{subfigure}
\caption{Nine important VM-NGWP vectors (the DC vector not shown) for pedestrian volume data on the Toronto street map}
\label{fig:toronto-fp-top9}
\end{center}
\end{figure}

On the other hand, the top PC-NGWP best basis vectors are much more localized than the VM-NGWP best basis vectors. As one can see from Fig.~\ref{fig:toronto-fp-top9-pc}, the basis vectors \#5, \#8, and \#9 are extremely localized, and there are neither medium nor coarse scale basis vectors among these top 9 basis vectors. The reason behind this performance difference between the VM-NGWP and the PC-NGWP is the following. The VM-NGWP best basis for this graph signal turned out to be a true graph Shannon wavelet basis with scale $j=4$, i.e., the subspaces $V^{\star(4)}_0$, $V^{\star(4)}_1$, $V^{\star(3)}_1$, $V^{\star(2)}_1$, $V^{\star(1)}_1$ were selected as its best basis. On the other hand, the PC-NGWP best basis turned out to be a graph wavelet basis with $j=1$, i.e., the subspaces $V^{\star(1)}_0$ and $V^{\star(1)}_1$. Hence, the latter has only fine scale basis vectors. This is because the PC-NGWP procedure promotes localization of its basis vectors too strictly at scale $j=1$ by using the orthogonal projections of the standard basis vectors supported on $V^{(1)}_k$ of $G$ onto the subspace $V^{\star(1)}_k$ of $G^\star$. Since the pedestrian volume data is quite non-smooth and localized, the $\bm{\delta}$-like basis vectors with scale $j=1$ in the PC-NGWP tend to generate sparser coefficients, i.e., a small number of large-magnitude coefficients with many negligible ones. Therefore, the best-basis algorithm with the $\ell^1$-norm cost ends up favoring those $\bm{\delta}$-like basis vectors in the PC-NGWP.
\begin{figure} \begin{center} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector2.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector3.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector4.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector5.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector6.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector7.png} \end{subfigure} \\ \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector8.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector9.png} \end{subfigure} \begin{subfigure}{0.325\textwidth} \includegraphics[width = \textwidth]{Toronto_fp_DAG_PC_NGW_important_basis_vector10.png} \end{subfigure} \caption{Nine important PC-NGWP vectors (the DC vector not shown) for pedestrian volume data on the Toronto street map} \label{fig:toronto-fp-top9-pc} \end{center} \end{figure} \section{Background} \label{sec:back} \subsection{Graph Laplacians and Graph Fourier Transform} Let $G=G(V,E)$ be an undirected connected graph. Let ${V=V(G)=\{v_1,v_2,\ldots,v_N\}}$ denote the set of nodes (or vertices) of the graph, where $N \, := \, |V(G)|$. For simplicity, we typically associate each vertex with its index and write $i$ in place of $v_i$. $E=E(G)=\{e_1,e_2,\ldots,e_M\}$ is the set of edges, where each $e_k$ connects two vertices, say, $i$ and $j$, and $M \, := \, |E(G)|$. In this article we consider only finite graphs (i.e., $M, N < \infty$). Moreover, we restrict to the case of simple graphs; that is, graphs without loops (an edge connecting a vertex to itself) and multiple edges (more than one edge connecting a pair of vertices). We use $\bm{f} \in \mathbb{R}^N$ to denote a graph signal on $G$, and we define $\bm{1} \, := \, (1, \ldots, 1)^{\scriptscriptstyle{\mathsf{T}}} \in \mathbb{R}^N$. We now discuss several matrices associated with graphs. The information in both $V$ and $E$ is captured by the \emph{edge weight matrix} $W(G) \in \mathbb{R}^{N \times N}$, where $W_{ij} \geq 0$ is the edge weight between nodes $i$ and $j$. In an unweighted graph, this is restricted to be either $0$ or $1$, depending on whether nodes $i$ and $j$ are adjacent, and we may refer to $W(G)$ as an \emph{adjacency matrix}. In a weighted graph, $W_{ij}$ indicates the \emph{affinity} between nodes $i$ and $j$. In either case, since $G$ is undirected, $W(G)$ is a symmetric matrix. We then define the \emph{degree matrix} $D(G)$ as the diagonal matrix with entries $D_{ii} = \sum_j W_{ij}$. With this in place, we are now able to define the \emph{(unnormalized) Laplacian} matrix, \emph{random-walk normalized Laplacian} matrix, and \emph{symmetric normalized Laplacian} matrix, respectively, as \begin{align} \label{eq:graphlaps} L(G) &\, := \, D(G)-W(G) \nonumber \\ L_\mathrm{rw}(G) &\, := \, D(G)^{-1} L(G) \\ L_\mathrm{sym}(G) &\, := \, D(G)^{-1/2} L(G) D(G)^{-1/2} \nonumber . 
\end{align}
We use $0=\lambda_0 \leq \lambda_1 \leq \ldots \leq \lambda_{N-1}$ to denote the sorted Laplacian eigenvalues and $\bm{\phi}_0,\bm{\phi}_1,\ldots,\bm{\phi}_{N-1}$ to denote their corresponding eigenvectors, where the specific Laplacian matrix to which they refer will be clear from either context or subscripts. Denoting $\Phi \, := \, [ \bm{\phi}_0, \ldots, \bm{\phi}_{N-1} ]$ and $\Lambda \, := \, \mathrm{diag}([\lambda_0, \ldots, \lambda_{N-1}])$, the eigendecomposition of $L(G)$ can be written as $L(G) = \Phi \Lambda \Phi^{\scriptscriptstyle{\mathsf{T}}}$. Since we only consider connected graphs here, we have $0 = \lambda_0 < \lambda_1$. This second smallest eigenvalue $\lambda_1$ is called the \emph{algebraic connectivity} of $G$ and the corresponding eigenvector $\bm{\phi}_1$ is called the \emph{Fiedler vector} of $G$. The Fiedler vector plays an important role in graph partitioning and spectral clustering; see, e.g., \cite{vonLuxburg2007}, which suggests the use of the Fiedler vector of $L_\mathrm{rw}(G)$ for spectral clustering over that of the other Laplacian matrices.
\begin{remark}
In this article, we use the Fiedler vectors of $L_\mathrm{rw}$ of an input graph and its subgraphs as a tool to hierarchically bipartition the graph, although any other graph partitioning method can be used in our proposed algorithms. However, note that we use the unnormalized graph Laplacian eigenvectors of $L(G)$, for simplicity, to construct the dual domain of $G$ and consequently our graph wavelet packet dictionaries. In other words, $L_\mathrm{rw}$ is only used to compute its Fiedler vector for our graph partitioning purposes.
\end{remark}
The graph Laplacian eigenvectors are often viewed as generalized Fourier modes on graphs. Therefore, for any graph signal $\bm{f} \in \mathbb{R}^N$ and coefficient vector $\bm{g} \in \mathbb{R}^N$, the \emph{graph Fourier transform} and \emph{inverse graph Fourier transform}~\cite{SHUMAN-ETAL} are defined by
\begin{align}
\mathcal{F}_G(\bm{f}) \, := \, \Phi^{\scriptscriptstyle{\mathsf{T}}} \cdot \bm{f} \in \mathbb{R}^N \quad \text{ and } \quad \mathcal{F}^{\scriptscriptstyle{-1}}_G(\bm{g}) \, := \, \Phi \cdot \bm{g} \in \mathbb{R}^N .
\end{align}
Since $\Phi$ is an orthogonal matrix, it is not hard to see that $\mathcal{F}^{\scriptscriptstyle{-1}}_G \circ \mathcal{F}_G = I_{N}$. Thus, we can use $\mathcal{F}_G$ as an analysis operator and $\mathcal{F}^{\scriptscriptstyle{-1}}_G$ as a synthesis operator for graph harmonic analysis.

\subsection{Graph Wavelet Transforms and Frames}
\label{sec:graph-wavelet}
We now briefly review graph wavelet transforms and frames; see, e.g., \cite{SHUMAN-ETAL, ortega2018graph} for more information. Translation and dilation are two important operators for classical wavelet construction. However, unlike in $\mathbb{R}^d$ ($d \in \mathbb{N}$) or its finite and discretized lattice graph $P_{N_1} \times \cdots \times P_{N_d}$, we cannot assume that the underlying graph has a self-symmetric structure in general (i.e., the interior nodes do not all have the same neighborhood structure, unless the graph is, for instance, an unweighted semiregular tree~\cite{biyikoglu2007laplacian}). Therefore, it is difficult to construct graph wavelet bases or frames by translating and dilating a single mother wavelet function with a fixed shape, e.g., the Mexican hat mother wavelet in $\mathbb{R}$, because the graph structure varies at different locations.
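Before reviewing graph wavelet constructions on such general graphs, the following minimal NumPy sketch illustrates the objects defined above, namely the three graph Laplacians of Eq.~\eqref{eq:graphlaps} and the graph Fourier transform pair (a dense-matrix sketch for illustration only; the function names and the toy example are ours):
\begin{verbatim}
import numpy as np

def graph_laplacians(W):
    d = W.sum(axis=1)                   # degrees D_ii
    L = np.diag(d) - W                  # unnormalized Laplacian
    Lrw = L / d[:, None]                # D^{-1} L
    s = 1.0 / np.sqrt(d)
    Lsym = s[:, None] * L * s[None, :]  # D^{-1/2} L D^{-1/2}
    return L, Lrw, Lsym

def gft(Phi, f):
    return Phi.T @ f                    # analysis; synthesis is Phi @ g

# toy example: a weighted path graph with 3 nodes
W = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])
L, Lrw, Lsym = graph_laplacians(W)
lam, Phi = np.linalg.eigh(L)            # eigenvalues ascending; Phi orthogonal
g = gft(Phi, np.array([1., 2., 3.]))    # graph Fourier coefficients
\end{verbatim}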
Instead of translating and dilating a mother wavelet, some researchers, e.g., Hammond et al.~\cite{hammond2011wavelets}, constructed wavelet frames by shifting smooth graph spectral filters to be centered at different nodes. A general framework for building such wavelet frames can be summarized as follows:
\begin{align}
\bm{\psi}_{j,n} \, := \, \overbrace{\Phi F_j \Phi^{\scriptscriptstyle{\mathsf{T}}}}^{\text{Filtering}} \bm{\delta}_n \quad \text{for } j = 0,1,\cdots,J \text{ and } n = 1,2,\cdots, N, \label{eq:wavelet-frame}
\end{align}
where the index $j$ stands for the scale of the spectral filtering (the greater $j$, the finer the scale), the index $n$ represents the center location of the wavelet, and the diagonal matrices $F_j \in \mathbb{R}^{N\times N}$ satisfy $F_0(l,l) = h(\lambda_{l-1})$ and $F_j(l,l) = g(s_j\lambda_{l-1})$ for $1 \leq l \leq N$ and $1 \leq j \leq J$. Here, $h$ is a scaling function (which mainly deals with the small eigenvalues), while $g$ is a graph wavelet generating kernel. For example, the kernel proposed in \cite{hammond2011wavelets} can be approximated by Chebyshev polynomials, which leads to a fast algorithm. Note that $\{s_j\}_{1\leq j \leq J}$ are dilation parameters and $\bm{\delta}_n$ is the standard basis vector centered at node $n$. Furthermore, one can show that as long as the generalized partition of unity
\begin{equation}
\label{eq:partitionofunity}
A \cdot I_N \leq \sum_{j = 0}^{J} F_j^2 \leq B \cdot I_N , \qquad 0 < A \leq B
\end{equation}
holds, $\{ \bm{\psi}_{j,n} \}_{0 \leq j \leq J; 1 \leq n \leq N}$ forms a \emph{graph wavelet frame}, which can be used to decompose and recover any given graph signal~\cite{hammond2011wavelets}.

\subsection{Non-Trivial Eigenvector Distances}
\label{sec:non-trivial eigenvector distances}
However, one important drawback of the above method is that the construction of the spectral filters $F_j$ depends solely on the eigenvalue distribution (except for some flexibility in choosing the filter pair $(h, g)$ and the dilation parameters $\{s_j\}$) and does \emph{not} reflect how the eigenvectors \emph{behave}. For simple graphs such as $P_N$ and $C_N$ (a cycle graph with $N$ nodes), the graph Laplacian eigenvectors are global sinusoids whose frequencies can simply be read off from the corresponding eigenvalues, as discussed in \cite{IRION-SAITO-SPIE, saito2018can}. Hence, the usual \emph{Littlewood-Paley wavelet theory} (see, e.g., \cite[Sec.~4.2]{Daubechies1992}, \cite[Sec.~2.4]{jaffard2001wavelets}) applies to those simple graphs. Unfortunately, the graph Laplacian eigenvectors of a general graph can behave in a much more complicated manner than those of $P_N$ or $C_N$ (as discussed in~\cite{SAITO-WOEI-KOKYUROKU, WHY4-LAA, saito2018can, CLONINGER-STEINERBERGER, LI-SAITO-SPIE}). They cannot be described and ordered simply by the corresponding eigenvalues. Therefore, it is problematic to design graph wavelets using spectral filters built solely upon the eigenvalues, and we need a way to distinguish eigenvector behaviors. To quantify the difference in behavior between the eigenvectors, however, we cannot use the usual $\ell^2$-distances among them since they all have the same value $\sqrt{2}$ due to their orthonormality. As a remedy, we measure the ``behavioral'' differences between the eigenvectors as discussed in~\cite{CLONINGER-STEINERBERGER, saito2018can, LI-SAITO-SPIE}; one such measure is used in our numerical experiments in Section~\ref{sec:applications}. We briefly summarize this particular behavioral distance below.
See~\cite{CLONINGER-STEINERBERGER, saito2018can, LI-SAITO-SPIE} for more information on the other eigenvector metrics.

Instead of the usual $\ell^2$-distance, we use the \emph{absolute gradient} of each eigenvector as its feature vector describing its behavior. More precisely, let $Q(G) \in \mathbb{R}^{N \times M}$ be the \emph{incidence matrix} of an input graph $G(V,E,W)$ whose $k$th column indicates the head and tail of the $k$th edge $e_k \in E$. Note that we need to orient each edge of $G$ in an arbitrary manner, i.e., temporarily form a directed graph, in order to construct its incidence matrix. For example, suppose $e_k$ joins nodes $i$ and $j$; then we can set either $(Q_{ik}, Q_{jk})=(-\sqrt{W_{ij}}, \sqrt{W_{ij}})$ or $(\sqrt{W_{ij}}, -\sqrt{W_{ij}})$. Of course, we set $Q_{lk}=0$ for $l \neq i, j$. It is easy to see that $Q(G) \, Q(G)^{\scriptscriptstyle{\mathsf{T}}}=L(G)$.
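The following is a minimal sketch of this construction (function names ours); note that taking the entrywise absolute value in the second function makes the resulting feature vector independent of the arbitrary edge orientations:
\begin{verbatim}
import numpy as np

def incidence_matrix(W, edges):
    # edges: list of (i, j) node pairs; orientations chosen arbitrarily
    N, M = W.shape[0], len(edges)
    Q = np.zeros((N, M))
    for k, (i, j) in enumerate(edges):
        w = np.sqrt(W[i, j])
        Q[i, k], Q[j, k] = -w, w   # arbitrary orientation of edge e_k
    return Q                       # satisfies Q @ Q.T == L(G)

def abs_gradient(Q, phi):
    return np.abs(Q.T @ phi)       # |grad_G| phi in R^M, orientation-free
\end{verbatim}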
We now define the \emph{DAG pseudometric} between $\bm{\phi}_i$ and $\bm{\phi}_j$ by
\begin{equation}
\label{eq:DAG}
d_{\mathrm{DAG}}(\bm{\phi}_i,\bm{\phi}_j) \, := \, \| |\nabla_G| \bm{\phi}_i - |\nabla_G| \bm{\phi}_j \|_2 \quad \text{where $|\nabla_G| \bm{\phi} \, := \, \operatorname{abs}(Q^{\scriptscriptstyle{\mathsf{T}}} \bm{\phi}) \in \mathbb{R}^M_{\geq 0}$},
\end{equation}
where $\operatorname{abs}(\cdot)$ applies the absolute value entrywise to its argument. We note that this quantity is not a metric but a \emph{pseudometric} because the identity of indiscernibles among the metric axioms is not satisfied. In order to see the meaning of this quantity, let us analyze its square as follows:
\begin{align*}
d_{\mathrm{DAG}}(\bm{\phi}_i,\bm{\phi}_j)^2 & = \inner{|\nabla_G| \bm{\phi}_i - |\nabla_G| \bm{\phi}_j}{|\nabla_G| \bm{\phi}_i - |\nabla_G| \bm{\phi}_j}_E \\
&= \inner{|\nabla_G| \bm{\phi}_i}{|\nabla_G | \bm{\phi}_i}_E + \inner{|\nabla_G| \bm{\phi}_j}{|\nabla_G| \bm{\phi}_j}_E - 2 \inner{|\nabla_G| \bm{\phi}_i}{|\nabla_G| \bm{\phi}_j}_E\\
&= \lambda_i + \lambda_j - \sum_{x\in V} \sum_{y \sim x}|\bm{\phi}_i(x) - \bm{\phi}_i(y)| \cdot |\bm{\phi}_j(x) - \bm{\phi}_j(y)| \quad \text{thanks to $QQ^{\scriptscriptstyle{\mathsf{T}}} = L$,}
\end{align*}
where $\langle \cdot,\cdot \rangle_E$ is the inner product over the edges. The last term of this formula can be viewed as \emph{a global average of the absolute local correlation} between the eigenvectors. In this sense, this quantity is related to the \emph{Hadamard-product affinity} between eigenvectors proposed by Cloninger and Steinerberger~\cite{CLONINGER-STEINERBERGER}. Note that the computational cost is $O(M)$ for each $d_{\mathrm{DAG}}(\cdot,\cdot)$ evaluation provided that the eigenvectors have already been computed.

\subsection{Graph Wavelet Packets}
Instead of building a graph wavelet packet dictionary from the graph wavelet frames summarized in Section~\ref{sec:graph-wavelet}, one can also generalize the classical wavelet packets to graphs. The classical wavelet packet decomposition (or dictionary construction) of a 1D discrete signal is obtained by passing it through a full binary tree of filters (each node of the tree represents either the low-pass or the high-pass filtered version of the coefficients entering that node, followed by the subsampling operation) to get a set of binary-tree-structured coefficients~\cite{Coifman-Wickerhauser}, \cite[Sec.~8.1]{MALLAT-BOOK3}. This basis dictionary for an input signal of length $N$ has up to $N(1+\log_2N)$ basis vectors (hence it is clearly redundant), yet it contains more than $1.5^N$ searchable orthonormal bases (ONBs)~\cite{Coifman-Wickerhauser, THIELE-VILLEMOES}. For the purpose of efficient signal approximation, the \emph{best-basis algorithm} originally proposed by Coifman and Wickerhauser~\cite{Coifman-Wickerhauser} can find the most desirable ONB (and the expansion coefficients of the input signal) for such a task among this immense number of ONBs. The best-basis algorithm requires a user-specified cost function, e.g., the $\ell^p$-norm ($0 < p \leq 1$) of the expansion coefficients for sparse signal approximation, and the basis search starts at the bottom level of the dictionary and proceeds upwards, comparing the cost of the coefficients at the children nodes to the cost of the coefficients at their parent nodes. This best-basis search only costs $O(N)$ operations provided that the expansion coefficients of the input signal have already been computed.
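To make the search concrete, here is a minimal sketch of the best-basis algorithm with the $\ell^1$-norm cost (a sketch under our own conventions: the expansion coefficients are assumed to be stored in a Python dictionary \texttt{tree} keyed by $(j,k)$, with the children of node $(j,k)$ being $(j+1,2k)$ and $(j+1,2k+1)$):
\begin{verbatim}
import numpy as np

def l1_cost(coeffs):
    return np.abs(coeffs).sum()

def best_basis(tree, node=(0, 0)):
    # returns (list of chosen (j, k) nodes, total cost of their coefficients)
    j, k = node
    children = [(j + 1, 2 * k), (j + 1, 2 * k + 1)]
    if not all(c in tree for c in children):
        return [node], l1_cost(tree[node])        # bottom of the tree
    b0, c0 = best_basis(tree, children[0])
    b1, c1 = best_basis(tree, children[1])
    if c0 + c1 < l1_cost(tree[node]):
        return b0 + b1, c0 + c1                   # children are cheaper
    return [node], l1_cost(tree[node])            # keep the parent
\end{verbatim}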
In order to generalize the classical wavelet packets to the graph setting, however, there are two main difficulties: 1) the concept of the frequency domain of a given graph is not well defined; and 2) the relation between the Laplacian eigenvectors and the sampling locations is much more subtle on general graphs. For 1), we propose to construct a \emph{dual graph} $G^\star$ of the input graph $G$, view it as the natural spectral domain of $G$, and use any graph partitioning method to hierarchically bipartition $G^\star$ instead of building low-pass and high-pass filters as in the classical case. This can be viewed as a \emph{generalized Littlewood-Paley theory}. For 2), we propose a node-eigenvector organization algorithm called the \emph{pair-clustering algorithm}, which implicitly provides a downsampling process on graphs; see Section~\ref{sec:pair-clustering} for the details.

\section{Discussion}
\label{sec:discussion}
In this article, we proposed two ways to construct graph wavelet packet dictionaries that fully utilize the natural dual domain of an input graph: the VM-NGWP and the PC-NGWP dictionaries. Then, using two different graph signals on each of the two different graphs, we compared their performance in approximating a given graph signal against our previous multiscale graph basis dictionaries, such as the GHWT and eGHWT dictionaries, which include the graph Haar and the graph Walsh bases. Our proposed dictionaries outperformed the others on piecewise smooth graph signals, and performed reasonably well for graph signals sampled on an image containing oriented anisotropic texture patterns. On the other hand, our new dictionaries were beaten by the eGHWT on the non-smooth and localized graph signal. One potential reason for such behavior is the fact that our dictionaries are a direct generalization of the ``Shannon'' wavelet packet dictionaries, i.e., their ``frequency'' domain support is localized and well controlled while their ``time'' domain support is not compact. In order to improve the performance of our dictionaries for such non-smooth localized graph signals, we need to bipartition $G^\star$ recursively but \emph{smoothly with overlaps}, which may lead to a graph version of the \emph{Meyer wavelet packet dictionary}~\cite[Sec.~7.2.2, 8.4.2]{MALLAT-BOOK3}, whose basis vectors are more localized in the ``time'' domain than those of the Shannon wavelet packet dictionary. In fact, it is interesting to investigate such smooth partitioning with overlaps not only on $G^\star$ but also on $G$ itself since it may lead to a graph version of the local cosine basis dictionary~\cite{LCT}, \cite[Sec.~8.4.3]{MALLAT-BOOK3}. We also note the differences between the VM-NGWP and the PC-NGWP dictionaries: the PC-NGWP may allow one to ``pinpoint'' a particular node where the basis vectors should concentrate in a more explicit manner than the VM-NGWP does; the latter requires one to examine the computed basis vectors if one wants to know around which nodes of the input graph they concentrate. On the other hand, the difference in their computational costs is just a constant factor in the $O(N^3)$ operations. Hence, it is important to investigate how to reduce the computational complexity in both cases. One possibility is to use only the first $N_0$ graph Laplacian eigenvectors with $N_0 \ll N$. Clearly, one cannot represent a given graph signal precisely with $N_0$ eigenvectors, but this scenario may be acceptable for certain applications including graph signal clustering, classification, and regression.
Of course, it is also of interest to investigate whether we can develop faster versions of the varimax rotation and MGSLp algorithms; this forms one of our future research projects.

\section{Introduction and Motivation}
\label{sec:intro}
There is an explosion of interest in, and demand for, analyzing data sampled on graphs and networks. This has motivated the development of more flexible yet mathematically sound \emph{dictionaries} (i.e., overcomplete collections of atoms or basis vectors) for data analysis and signal processing on graphs. Our main goal here is to build \emph{smooth} multiscale localized basis dictionaries on an input graph, with beneficial reconstruction and sparsity properties, and to fill the ``gap'' left by our previous graph basis dictionary constructions~\cite{IRION-SAITO-HGLETS, IRION-SAITO-GHWT, IRION-SAITO-SPIE, IRION-SAITO-TSIPN, SHAO-SAITO-SPIE}, as we explain below. Our approach differs from the standard literature in that we fully utilize both the similarities between the nodes (through the graph adjacency matrix) and the similarities between the eigenvectors of the graph Laplacian matrix (through new non-trivial eigenvector distances).

Previous approaches to constructing such graph basis dictionaries break down into two main categories. The first category partitions the nodes through recursive graph cuts to generate multiscale basis dictionaries. This includes: the Hierarchical Graph Laplacian Eigen Transform (HGLET)~\cite{IRION-SAITO-HGLETS}; the Generalized Haar-Walsh Transform (GHWT)~\cite{IRION-SAITO-GHWT}; its extension, the eGHWT~\cite{SHAO-SAITO-SPIE}; and other Haar-like graph wavelets (see, e.g., \cite{MURTAGH-Haar, LEE-NADLER-WASSERMAN, GAVISH-NADLER-COIF, COIF-GAVISH, SZLAM-HAAR-GRAPH}). However, their basis vectors either are nonsmooth piecewise constants or have non-overlapping supports. The second category uses spectral filters on the Laplacian (or diffusion kernel) eigenvalues to generate multiscale smooth dictionaries. This includes: the Spectral Graph Wavelet Transform~\cite{hammond2011wavelets}; Diffusion Wavelets~\cite{coifman2006diffusion}; and extensions to spectral graph convolutional networks~\cite{levie2018cayleynets}. However, these dictionaries ignore the fact that the eigenvectors have a more complex relationship than simply the difference in magnitude of the corresponding eigenvalues~\cite{saito2018can, CLONINGER-STEINERBERGER, LI-SAITO-SPIE}. These relationships can result from eigenvector localization in different clusters, differing scales in multi-dimensional data, etc. These notions of similarity and difference between eigenvectors, while studied in the eigenfunction literature~\cite{saito2018can, CLONINGER-STEINERBERGER, LI-SAITO-SPIE}, have yet to be incorporated into building localized dictionaries on graphs.

We combine the benefits of both approaches to construct the graph equivalent of spatial-spectral filtering. We have two approaches: one is to utilize the \emph{dual geometry} of an input graph without partitioning the input graph, and the other is to fully utilize clustering and partition information in both the input graph and its dual domain.
Our first approach, detailed in Section~\ref{sec:varimax-ngwp}, fills the ``gap'' in the cycle of our development of the graph basis dictionaries, i.e., the HGLET, GHWT, and eGHWT. This approach is a direct generalization of the classical wavelet packet dictionary~\cite[Chap.~8]{MALLAT-BOOK3} to the graph setting: we hierarchically partition the dual domain to generate tree-structured ``subbands,'' each of which is an appropriate subset of the graph Laplacian eigenvectors. We also want to note the following correspondence: the HGLET~\cite{IRION-SAITO-HGLETS} is a graph version of the Hierarchical Block DCT dictionary~\cite[Sec.~8.3]{MALLAT-BOOK3} (i.e., the non-smooth non-overlapping version of the local cosine dictionary~\cite{LCT}, \cite[Sec.~8.5]{MALLAT-BOOK3}), and the former exactly reduces to the latter if the input graph is $P_N$, a path graph with $N$ nodes. The former hierarchically partitions the input graph while the latter does the same (in a non-adaptive manner) on the unit interval $[0,1]$ in the \emph{time} domain. On the other hand, the GHWT~\cite{IRION-SAITO-GHWT} is a graph version of the Haar-Walsh wavelet packet dictionary~\cite{WPK}, \cite[Sec.~8.1]{MALLAT-BOOK3}, and the former exactly reduces to the latter if the input graph is $P_N$. The latter hierarchically partitions the interval $[0, N)$ in the \emph{sequency} domain while the former does the same by graph domain partitioning plus reordering; see~\cite{IRION-SAITO-GHWT, IRION-SAITO-SPIE, IRION-SAITO-TSIPN} for the details. Our graph basis dictionary using this first approach is a graph version of the \emph{Shannon} wavelet packet dictionary~\cite[Sec.~8.1.2]{MALLAT-BOOK3}, which hierarchically partitions the interval $[0, 1/2)$ (or $[0, \pi]$ depending on how one defines the Fourier transform) in the \emph{frequency} domain. Again, the former essentially reduces to the latter if the input graph is $P_N$.

Our second approach, detailed in Section~\ref{sec:pair-clustering}, partitions \emph{both} the input graph \emph{and} its dual domain; more precisely, we first hierarchically partition the dual domain, and then partition the input graph with constraints imposed by the dual domain partition. This approach parallels and generalizes classical time-frequency analysis, where the time domain is replaced by a general \emph{node-domain} geometry and the frequency domain is replaced by a general \emph{eigenvector-domain} organization that changes as a function of location in the node-domain. A version of this approach of node-eigenvector organization that maps the eigenvectors to a 1D projection of the graph has also been considered as a visualization technique for low-frequency eigenfunctions on clustered graphs~\cite{girault2019s}.

We aim for the significance and impact of this research to be twofold. First, these results provide the first set of graph wavelet packet bases that adaptively scale to the local structure of the graph. This is especially important for graphs with complicated multiscale structure, whose graph Laplacians have localized eigenfunctions, for example. This is an impactful direction, as most of the graph wavelet packet bases previously proposed only tile the node-eigenfunction ``plane'' along the node ``axis,'' while the Laplacian eigenfunctions only tile that plane along the eigenfunction ``axis.''
Our approach in Section~\ref{sec:pair-clustering} constructs filters in both the nodal domain and the eigenfunction domain, similar to the classical \emph{time-frequency adapted} wavelet packets that tile both the time and the frequency domains~\cite{THIELE-VILLEMOES, herley1997joint}. Second, in the long term, this is a first method of systematically using the novel concept of eigenvector dual geometry~\cite{saito2018can, CLONINGER-STEINERBERGER, LI-SAITO-SPIE}. This direction can set a path for future modifications of spectral graph theory applications to incorporate dual geometry.

This article is organized as follows. Section~\ref{sec:back} reviews fundamentals: the basics of graphs, i.e., graph Laplacians and the graph Fourier transform, as well as previously proposed graph wavelet transforms and frames. It also reviews non-trivial metrics of graph Laplacian eigenvectors, which are used to construct the dual domain of an input graph. Section~\ref{sec:varimax-ngwp} presents a natural graph wavelet packet dictionary constructed through hierarchical partitioning of the dual graph of Laplacian eigenvectors. Section~\ref{sec:pair-clustering} presents a second version of a natural graph wavelet packet dictionary constructed through a pair of hierarchical partitions, one on the input graph and one on its dual domain. In Section~\ref{sec:applications}, we demonstrate the usefulness of our proposed graph wavelet packet dictionaries in graph signal approximation using numerical experiments. Code scripts can be found at \cite{HAOTIAN-GITHUB}. Finally, we discuss our findings gained through these numerical experiments and near-future projects for further improvements of our dictionaries.

\section{Modified Gram-Schmidt with \texorpdfstring{$\ell^p$}{lp} Pivoting Orthogonalization}
\label{app:mgslp}
We implemented a simplified version (i.e., Algorithm~\ref{algo:MGS-Lp-pivoting}) of the modified Gram-Schmidt with mixed $\ell^2$-$\ell^p$ ($0 < p < 2$) pivoting algorithm in~\cite{CoifmanRonaldR2006Dw}. Our version skips the step of computing the largest $\ell^2$-norm and picking the parameter $\lambda$ (a notation used in~\cite{CoifmanRonaldR2006Dw}) to increase the numerical stability. Instead, we directly set up a tolerance parameter, i.e., $\mathrm{tol}$ in Algorithm~\ref{algo:MGS-Lp-pivoting}, for robustness.
On the other hand, we keep the $\ell^p$ ($0 < p < 2$) pivoting portion of the MGS (i.e., we always perform the orthogonalization process on the vector with the minimum $\ell^p$-norm in the candidate pool), which nicely preserves the sparsity of the obtained wavelet-like vectors after the orthogonalization process. The MGSLp algorithm is summarized as follows.

\begin{algorithm}
\DontPrintSemicolon
\SetKwData{tol}{tol}
\KwIn{List of unit vectors $v = [v_1, \ldots, v_m] \in \mathbb{R}^{N \times m}$; norm parameter $0 < p < 2$ (default value: $1$); error tolerance \tol\ (default value: $10^{-12}$)}
\KwOut{List of orthonormal vectors $q = [q_1, \ldots, q_r] \in \mathbb{R}^{N \times r}$ where $r=\operatorname{rank}(v)$}
$q = \emptyset$ \tcp*{initialize the output list}
$w = [\|v_1\|_p, \ldots, \|v_m\|_p]$
\For{$i$ from $1$ to $m$}{
$k = i-1+\operatorname{findmin}(w)$ \tcp*{find the minimum $\ell^p$-norm index}
swap($v_i$, $v_k$) \tcp*{pivoting}
\If{$\|v_i\|_2 < \mathrm{tol}$}{
break \tcp*{check linear dependency}
}
$\tilde{v} = v_i / \| v_i \|_2$
$w = \emptyset$ \tcp*{re-initialize the $\ell^p$-norm vector}
\For{$j$ from $i+1$ to $m$}{
$v_j = v_j - (\tilde{v}^{\scriptscriptstyle{\mathsf{T}}} \cdot v_j)\tilde{v}$
$w$ = append($w$, $\|v_j\|_p$)
}
$q$ = append($q$, $\tilde{v}$)
}
\Return{$q$}
\caption{Modified Gram-Schmidt Orthogonalization with $\ell^p$ pivoting}
\label{algo:MGS-Lp-pivoting}
\end{algorithm}

\section{Natural Graph Wavelet Packets using \emph{Pair-Clustering}}
\label{sec:pair-clustering}
Another way to construct a natural graph wavelet packet dictionary is to mimic the convolution and subsampling strategy of the classical wavelet packet dictionary construction: form a full binary tree of spectral filters (i.e., the hierarchical bipartition tree of the dual graph $G^\star$) and then perform the filtering/downsampling process based on the relations between the nodes and the eigenvectors of $G$. In order to fully utilize such relations, we look for a coordinated pair of partitions on $G$ and $G^\star$, which is realized by our \emph{pair-clustering} algorithm described below. We first describe the one-level pair-clustering algorithm and then proceed to the hierarchical version.

\subsection{One-Level Pair-Clustering}
\label{sec:1levpc}
Suppose we partition the dual graph $G^\star$ into $K \geq 2$ clusters using any method, including the spectral clustering~\cite{vonLuxburg2007} we used in the previous section. Let $V^\star_1, \ldots, V^\star_K$ be those mutually disjoint $K$ clusters of the nodes $V^\star$ representing the graph Laplacian eigenvectors, i.e., $\displaystyle V^\star = \bigsqcup_{k=1}^K V^\star_k$, which is also often written as $\displaystyle \bigoplus_{k=1}^K V^\star_k$. Denote the cardinality of each cluster as $N_k \, := \, |V^\star_k|$, $k = 1,\ldots,K$; clearly we have $\displaystyle \sum_{k = 1}^K N_k = N$. Then, we also partition the original node set $V$ into $K$ mutually disjoint clusters, $V_1, \ldots, V_K$, with the constraints that $|V_k|=|V^\star_k|=N_k$, $k = 1,\ldots,K$, and that the members of $V_k$ and $V^\star_k$ are as ``closely related'' as possible. The purpose of partitioning $V$ is to select appropriate graph nodes as sampling points around which the graph wavelet packet vectors built from the information in $V^\star_k$ are localized. With a slight abuse of notation, let $V$ also represent the collection of the standard basis vectors in $\mathbb{R}^N$, i.e., $V \, := \, \{\bm{\delta}_1, \ldots, \bm{\delta}_N\}$, where $\bm{\delta}_k(k)=1$ and $0$ otherwise.
In order to formalize this constrained clustering of $V$, we define the \emph{affinity measure} $\alpha$ between $V_k$ and $V^\star_k$ as follows:
\begin{align}
\alpha(V_k,V^\star_k) \, := \, \sum_{\bm{\delta} \in V_k, \bm{\phi} \in V^\star_k} | \inner{\bm{\delta}}{\bm{\phi}} |^2,
\end{align}
where $\inner{\cdot}{\cdot}$ is the standard inner product in $\mathbb{R}^N$. Note that $\displaystyle \alpha(V, V^\star) = \sum_{\bm{\delta} \in V, \bm{\phi} \in V^\star} | \inner{\bm{\delta}}{\bm{\phi}} |^2 = \sum_{\bm{\phi} \in V^\star} \| \bm{\phi} \|^2 = N$. Denote the feasible partition space as
\begin{displaymath}
U(V; N_1, N_2, \ldots, N_K) \, := \, \left\{ (V_1,V_2,\ldots,V_K) \, \biggl| \, \bigsqcup_{k=1}^K V_k = V; \, |V_k|=N_k, k=1,\ldots,K \right\}.
\end{displaymath}
Now we need to solve the following optimization problem for a given partition of $\displaystyle V^\star= \bigsqcup_{k=1}^K V^\star_k$:
\begin{equation}
\label{eq:objective}
(V_1, \ldots, V_K) = \arg \max_{U(V; N_1,N_2,\ldots,N_K)} \sum_{k = 1}^K \alpha(V_k,V^\star_k) .
\end{equation}
This is a discrete optimization problem, and in general it is not easy to find the globally optimal solution except for the case $K = 2$. For $K=2$, we can find the desired partition of $V$ by the following greedy algorithm: 1) compute $\alpha(\{\bm{\delta}\}, V^\star_1) - \alpha(\{\bm{\delta}\},V^\star_2)$ for each $\bm{\delta} \in V$; 2) select the $N_1$ vectors $\bm{\delta} \in V$ with the largest such values, set them as $V_1$, and set $V_2 = V \setminus V_1$; see the sketch below. When $K>2$, we can find a \emph{local optimum} by a similar strategy: 1) compute the values $\alpha(\{\bm{\delta}\},V^\star_1)$ for each $\bm{\delta} \in V$; 2) select the $N_1$ such $\bm{\delta}$'s with the largest values, and set them as $V_1$; 3) compute the values $\alpha(\{\bm{\delta}\},V^\star_2)$ for each $\bm{\delta} \in V \setminus V_1$, select the $N_2$ such $\bm{\delta}$'s with the largest values, and set them as $V_2$; 4) repeat the above process to produce $V_3, \ldots, V_K$. While this greedy strategy may not reach the global optimum of Eq.~\eqref{eq:objective}, we find that empirically it attains a reasonably large value of the objective function.

\subsection{Hierarchical Pair-Clustering}
\label{sec:hierarchical-clustering}
In order to build a multiscale graph wavelet packet dictionary, we develop a hierarchical (i.e., multilevel) version of the pair-clustering algorithm. First, let us assume that the hierarchical bipartition tree of $V^\star$ has already been computed using the same algorithm discussed in Section~\ref{sec:gstar-partition}. We begin with level $j=0$, where $V^{(0)}_0$ is simply $V = \{\bm{\delta}_1,\bm{\delta}_2,\cdots,\bm{\delta}_N\}$ and $V^{\star(0)}_0$ is $V^\star = \{\bm{\phi}_0,\bm{\phi}_1,\cdots,\bm{\phi}_{N-1}\}$. Then, we perform the one-level pair-clustering algorithm ($K=2$) to get $\left(V^{\star(1)}_0, V^{(1)}_0\right)$ and then $\left(V^{\star(1)}_1, V^{(1)}_1\right)$. We iterate the above process to generate the paired clusters $\left(V^{\star(j)}_k, V^{(j)}_k\right)$, $j=0, \ldots, {j_\mathrm{max}}$, $k=0, \ldots, 2^j-1$. Note that we force the partition so that each of the terminal nodes, $V^{\star({j_\mathrm{max}})}_k$, $V^{({j_\mathrm{max}})}_k$, contains only a single entry. In other words, the hierarchical pair-clustering algorithm eventually pairs each standard basis vector $\bm{\delta}_l$ with a certain eigenvector $\bm{\phi}_m$.
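The elementary $K=2$ pair-clustering step referred to above, which the hierarchical version applies at every node of the bipartition tree, can be sketched as follows (a minimal NumPy sketch under our own naming; since $\alpha(\{\bm{\delta}_i\}, V^\star_k)$ is simply the sum of the squared $i$th entries of the eigenvectors in $V^\star_k$, the greedy solution reduces to sorting one score vector):
\begin{verbatim}
import numpy as np

def pair_cluster_two(Phi1, Phi2):
    # Phi1: N x N1 eigenvectors in V*_1;  Phi2: N x N2 eigenvectors in V*_2
    # score_i = alpha({delta_i}, V*_1) - alpha({delta_i}, V*_2)
    score = (Phi1 ** 2).sum(axis=1) - (Phi2 ** 2).sum(axis=1)
    N1 = Phi1.shape[1]
    order = np.argsort(-score)      # node indices, descending by score
    V1 = np.sort(order[:N1])        # the N1 nodes paired with V*_1
    V2 = np.sort(order[N1:])        # the remaining nodes, paired with V*_2
    return V1, V2
\end{verbatim}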
\subsection{Generating the NGWP Dictionary}
Once we generate the two hierarchical bipartition trees $\left\{V^{(j)}_k\right\}$ and $\left\{V^{\star(j)}_k\right\}$, we can proceed to generate the NGWP vectors $\left\{\Psi^{(j)}_k\right\}$ that form an NGWP dictionary. For each $\bm{\delta}_l \in V^{(j)}_k$, we first compute the orthogonal projection of $\bm{\delta}_l$ onto the span of $V^{\star(j)}_k$, i.e., $\operatorname{span}\left(\Phi^{(j)}_k\right)$, where $\Phi^{(j)}_k$ denotes those eigenvectors of $L(G)$ belonging to $V^{\star(j)}_k$. Unfortunately, $\Phi^{(j)}_k \left(\Phi^{(j)}_k\right)^{\scriptscriptstyle{\mathsf{T}}} \bm{\delta}_l$ and $\Phi^{(j)}_k \left(\Phi^{(j)}_k\right)^{\scriptscriptstyle{\mathsf{T}}} \bm{\delta}_{l'}$ are not mutually orthogonal for $\bm{\delta}_l, \bm{\delta}_{l'} \in V^{(j)}_k$ in general. Hence, we need to orthogonalize the vectors $\left\{\Phi^{(j)}_k \left(\Phi^{(j)}_k\right)^{\scriptscriptstyle{\mathsf{T}}} \bm{\delta}_l\right\}_l$. We use the \emph{modified Gram-Schmidt with $\ell^p$ ($0 < p < 2$) pivoting orthogonalization} (MGSLp)~\cite{CoifmanRonaldR2006Dw} to generate the orthonormal graph wavelet packet vectors associated with $V^{\star(j)}_k$ (and hence also with $V^{(j)}_k$). The MGSLp algorithm, listed in Appendix~\ref{app:mgslp}, tends to generate localized orthonormal vectors because the $\ell^p$-norm pivoting promotes sparsity. We refer to the graph wavelet packet dictionary $\left\{\Psi^{(j)}_k\right\}_{j=0:{j_\mathrm{max}}; \, k=0:2^j-1}$ generated by this algorithm as the \emph{Pair-Clustering Natural Graph Wavelet Packet} (PC-NGWP) dictionary.

\subsection{Computational Complexity}
\label{sec:pc-cost}
At each node $V^{\star(j)}_k$ of the hierarchical bipartition tree of the dual graph $G^\star$, the orthogonal projection of the standard basis vectors in $V^{(j)}_k$ onto $\operatorname{span}\left(\Phi^{(j)}_k\right)$ and the MGSLp procedure are the two main computational burdens of our PC-NGWP dictionary construction. The orthogonal projection costs $O\left(N \cdot (N^j_k)^2 + N \cdot N^j_k\right)$ while the MGSLp costs $O\left(2 N \cdot (N^j_k)^2\right)$. Hence, the dominating cost of this procedure is $O\left(3 N \cdot (N^j_k)^2\right)$ for each $(j,k)$, and we need to sum this cost over all the tree nodes. Let us analyze the special case of the perfectly balanced bipartition tree with $N=2^{j_\mathrm{max}}$, as we did for the VM-NGWP in Section~\ref{sec:varimax-cost}. In this case, the bipartition tree has $1+{j_\mathrm{max}}$ levels, and $N^j_k = 2^{{j_\mathrm{max}}-j}$, $k=0, \ldots, 2^j-1$. So, for the $j$th level, using Eq.~\eqref{eq:Njk2}, we have $O(3 N^3 \cdot 2^{-j})$. Finally, by summing this from $j=0$ to ${j_\mathrm{max}}-1$ (again, no computation is needed at the bottom level leaves), the total cost for the PC-NGWP dictionary construction in this ideal case is $O(3N^3 \cdot (2-2/N)) \approx O(6N^3)$. So, it still requires $O(N^3)$ operations; the difference from the VM-NGWP lies in the constants, i.e., $6$ (PC-NGWP) vs.\ $1000 \cdot 4/3$ (the worst case VM-NGWP).

\section{Varimax Rotation}
\label{app:varimax}
The varimax rotation algorithm is a basic singular value (BSV) algorithm, as Jennrich demonstrated in \cite{JENNRICH}. The algorithm for solving Eq.~\eqref{eq:varimax} can be summarized as follows:
\begin{enumerate}
\item[(0)] Initialize an orthogonal rotation matrix $R$.
\item[(1)] Compute $\mathrm{ d} f/\mathrm{ d} R$, where $f$ is the objective function defined in Eq.~\eqref{eq:varimax}.
Here $\mathrm{ d} f/\mathrm{ d} R$ is the matrix of partial derivatives of $f$ at $R$.
\item[(2)] Find the singular value decomposition $U \Sigma V^*$ of $\mathrm{ d} f/\mathrm{ d} R$.
\item[(3)] Replace $R$ by $UV^*$ and go to (1) or stop.
\end{enumerate}
Jennrich also showed that under certain general conditions, this algorithm converges to a stationary point from any initial estimate. The above appears to be the standard varimax rotation algorithm available in many packages, e.g., MATLAB\,\textsuperscript{\footnotesize\textregistered}\footnote{MATLAB is a registered trademark of The MathWorks, Inc.}, R, etc.

\begin{algorithm}
\DontPrintSemicolon
\SetKwData{tol}{tol}
\SetKwData{maxit}{maxit}
\KwIn{Full column rank input matrix whose columns are to be rotated $A \in \mathbb{R}^{N \times m}$ ($m \leq N$); maximum number of iteration steps \maxit\ (default value: $1000$); relative tolerance \tol\ (default value: $10^{-12}$)}
\KwOut{Rotated matrix $B\in \mathbb{R}^{N \times m}$}
$B=A$ \tcp*{initialize the output matrix}
$S=0$ \tcp*{initialize the nuclear norm}
\For{$i$ from $1$ to \maxit}{
$S_0 = S$
$[U, \Sigma, V] = \operatorname{svd}(A^{\scriptscriptstyle{\mathsf{T}}} \cdot(N \cdot B^{\circ 3} - B \cdot \operatorname{diag}(B^{\scriptscriptstyle{\mathsf{T}}} \cdot B)))$ \tcp*{$B^{\circ 3} \, := \, B \circ B \circ B$\\ where $\circ$ is the Hadamard (entrywise) product}
$T = U \cdot V^*$ \tcp*{update the orthogonal rotation matrix}
$S = \operatorname{trace}(\Sigma)$
$B = A \cdot T$ \tcp*{update the rotated matrix}
\If{$|S - S_0|/S < \mathrm{tol}$}{
break \tcp*{stop when $S$ does not change much}
}
}
\Return{$B$}
\caption{The Varimax Rotation Algorithm}
\label{algo:VM-rotation}
\end{algorithm}

\section{Natural Graph Wavelet Packets using \emph{Varimax} Rotations}
\label{sec:varimax-ngwp}
Given a graph $G = G(V,E,W)$ with $|V| = N$ and a non-trivial distance $d$ between its eigenvectors (e.g., $d_{\mathrm{DAG}}$ of Eq.~\eqref{eq:DAG}), we build a complete dual graph $G^\star = G^\star(V^\star, E^\star, W^\star)$ by viewing the eigenvectors as its nodes, $V^\star = \{ \bm{\phi}_0, \ldots, \bm{\phi}_{N-1}\}$, and the non-trivial affinity between eigenvector pairs as its edge weights, $W^\star_{ij} = 1 / d(\bm{\phi}_{i-1}, \bm{\phi}_{j-1})$ for $i \neq j$, $i,j = 1,2,\cdots,N$. Using $G^\star$ to represent the graph spectral domain and to study relations between the eigenvectors is clearly more \emph{natural} and effective than simply using the eigenvalue magnitudes, as hinted at in \cite{CLONINGER-STEINERBERGER, saito2018can, LI-SAITO-SPIE}. In this section, we propose one of our graph wavelet packet dictionary constructions based solely on hierarchical bipartitioning of $G^\star$. The basic steps to generate such a graph wavelet packet dictionary for $G$ are quite straightforward:
\begin{description}
\item[Step 1:] \emph{Bipartition the dual graph $G^\star$ recursively} via any method, e.g., spectral graph bipartition using the \emph{Fiedler vectors};
\item[Step 2:] \emph{Generate wavelet packet vectors} using the eigenvectors belonging to each subgraph of $G^\star$ that are \emph{well localized on $G$}.
\end{description}
Note that Step~1 corresponds to bipartitioning the frequency band of an input signal using the \emph{characteristic functions} in the classical setting. Hence, our graph wavelet packet dictionary constructed as above can be viewed as a graph version of the \emph{Shannon} wavelet packet dictionary~\cite[Sec.~8.1.2]{MALLAT-BOOK3}. We now describe the details of each step below.
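As a concrete illustration of the dual graph construction at the beginning of this section, the following is a minimal NumPy sketch (function names ours) that computes the edge weight matrix $W^\star$ from the pairwise $d_{\mathrm{DAG}}$ distances of Eq.~\eqref{eq:DAG}:
\begin{verbatim}
import numpy as np

def dual_graph_weights(Phi, Q):
    # Phi: N x N matrix whose columns are phi_0, ..., phi_{N-1}
    # Q:   N x M incidence matrix, so |grad_G| phi = abs(Q.T @ phi)
    feat = np.abs(Q.T @ Phi)               # M x N absolute gradients
    N = Phi.shape[1]
    Wstar = np.zeros((N, N))
    for i in range(N):
        for j in range(i + 1, N):
            d = np.linalg.norm(feat[:, i] - feat[:, j])
            # guard: d_DAG is only a pseudometric and can vanish
            Wstar[i, j] = Wstar[j, i] = 1.0 / max(d, 1e-12)
    return Wstar
\end{verbatim}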
\subsection{Hierarchical Bipartitioning of \texorpdfstring{$G^\star$}{G*}}
\label{sec:gstar-partition}
Let $V^\star = V^{\star(0)}_0 \, := \, [\bm{\phi}_0, \ldots, \bm{\phi}_{N-1}]$ be the node set of the dual graph $G^\star$, which is simply the set of the eigenvectors of the unnormalized graph Laplacian matrix $L(G)$. Suppose we obtain the \emph{hierarchical bipartition tree} of $V^{\star(0)}_0$ as shown in Fig.~\ref{fig:binary-tree-dual}.

\begin{figure}
\begin{center}
\begin{tikzpicture}[scale = 1]
\tikzset{every tree node/.style={minimum width=3.5em,align=center,anchor=north}, blank/.style={draw=none}, edge from parent/.style={draw, edge from parent path={(\tikzparentnode) -- (\tikzchildnode)}}, level distance=1.1cm}
\Tree [.$V^{\star(0)}_0$ [.$V^{\star(1)}_0$ [.$V^{\star(2)}_0$ \edge[dashed]; $\,$ \edge[dashed]; $\,$ ] [.$V^{\star(2)}_1$ \edge[dashed]; $\,$ \edge[dashed]; $\,$ ] ] [.$V^{\star(1)}_1$ [.$V^{\star(2)}_2$ \edge[dashed]; $\,$ \edge[dashed]; $\,$ ] [.$V^{\star(2)}_3$ \edge[dashed]; $\,$ \edge[dashed]; $\,$ ] ] ]
\end{tikzpicture}
\end{center}
\caption{The hierarchical bipartition tree of the dual graph nodes $V^\star \equiv V^{\star(0)}_0$, which corresponds to the frequency domain bipartitioning used in the classical wavelet packet dictionary.}
\label{fig:binary-tree-dual}
\end{figure}

Hence, each $V^{\star(j)}_k$ contains an appropriate subset of the eigenvectors of $L(G)$. Let $N^j_k$ be the number of such eigenvectors belonging to $V^{\star(j)}_k$. With a slight abuse of notation, let $V^{\star(j)}_k$ also denote a matrix of size $N \times N^j_k$ whose columns are those eigenvectors. As we mentioned earlier, any graph bipartitioning method can be used to generate this hierarchical bipartition tree of $G^\star$. Typically, we use the Fiedler vector of the random-walk normalized graph Laplacian matrix $L_\mathrm{rw}$ (see Eq.~\eqref{eq:graphlaps}) of each subgraph of $G^\star$, whose use is preferred over that of $L$ or $L_\mathrm{sym}$, as von Luxburg discussed~\cite{vonLuxburg2007}.

\subsection{Localization on \texorpdfstring{$G$}{G} via Varimax Rotation}
To realize Step~2 of the above basic algorithm, we propose to use the \emph{varimax rotation} on $V^{\star(j)}_k \in \mathbb{R}^{N \times N^j_k}$ for each $j$ and $k$. A varimax rotation is an orthogonal rotation, originally proposed by Kaiser~\cite{kaiser1958varimax} and often used in \emph{factor analysis} (see, e.g., \cite[Chap.~11]{MULAIK}), that maximizes the variances of the energy distribution (or a scaled version of the \emph{kurtosis}) of the input column vectors, which can also be interpreted as an \emph{approximate entropy minimization of the distribution of the eigenvector components}~\cite[Sec.~3.2]{SAITO-GENSPIKE-FINAL}. For the implementation of the varimax rotation algorithm, see Appendix~\ref{app:varimax}, which is based on the Basic Singular Value (BSV) Varimax Algorithm of \cite{JENNRICH}. Thanks to the orthonormality of the columns of $V^{\star(j)}_k$, this is equivalent to finding an orthogonal rotation that maximizes the overall \emph{4th order moments}, i.e.,
\begin{equation}
\label{eq:varimax}
\Psi^{(j)}_k \, := \, V^{\star(j)}_k \cdot R^{(j)}_k, \quad \text{where $R^{(j)}_k = \arg \max_{R \in \mathrm{SO}(N^j_k)} \sum_{p=1}^N \sum_{q=1}^{N^j_k} \left[ \left( V^{\star(j)}_k \cdot R \right)^{\circ 4}\right]_{p,q}$},
\end{equation}
where $\circ 4$ denotes the entrywise 4th power. The column vectors of $\Psi^{(j)}_k$ are \emph{more ``localized'' in the original domain $G$} than those of $V^{\star(j)}_k \equiv \Phi^{(j)}_k$.
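For reference, the BSV iteration of Algorithm~\ref{algo:VM-rotation} admits a compact sketch in NumPy (a sketch under the default parameters stated in Appendix~\ref{app:varimax}, not our actual implementation):
\begin{verbatim}
import numpy as np

def varimax(A, maxit=1000, tol=1e-12):
    # rotate the columns of A to maximize the sum of their
    # entrywise 4th powers, cf. Eq. (eq:varimax)
    N, m = A.shape
    T, S = np.eye(m), 0.0
    for _ in range(maxit):
        B = A @ T
        d = (B ** 2).sum(axis=0)        # diag(B^T B)
        U, sv, Vt = np.linalg.svd(A.T @ (N * B ** 3 - B * d))
        T = U @ Vt                      # new orthogonal rotation
        S_old, S = S, sv.sum()          # nuclear norm of the gradient
        if abs(S - S_old) / S < tol:    # relative change of S
            break
    return A @ T
\end{verbatim}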
This type of localization is important since the graph Laplacian eigenvectors in $\Phi^{(j)}_k$ are, in general, of \emph{global} nature. We also note that the column vectors of $\Psi^{(j)}_k$ are orthogonal to those of $\Psi^{(j')}_{k'}$ as long as the latter is neither a direct ancestor nor a direct descendant of the former. Hence, Steps~1 and 2 of the above basic algorithm truly generate the graph wavelet packet dictionary for an input graph signal, where one can run the best-basis algorithm of Coifman-Wickerhauser~\cite{Coifman-Wickerhauser} to extract the ONB most suitable for a task at hand (e.g., an efficient graph signal approximation) once an appropriate cost function is specified (e.g., the $\ell^p$-norm minimization, $0 < p \leq 1$). Note also that it is easy to extract a graph Shannon wavelet basis from this dictionary by specifying the appropriate nodes in the binary-tree-structured dual graph nodes explicitly, i.e., $\Psi^{(1)}_1, \Psi^{(2)}_1, \ldots, \Psi^{({j_\mathrm{max}})}_1$, and the father wavelet vectors $\Psi^{({j_\mathrm{max}})}_0$. We refer to the graph wavelet packet dictionary $\left\{\Psi^{(j)}_k\right\}_{j=0:{j_\mathrm{max}}; \, k=0:2^j-1}$ generated by this algorithm as the \emph{Varimax Natural Graph Wavelet Packet} (VM-NGWP) dictionary. Let us now demonstrate that our algorithm actually generates the classical \emph{Shannon} wavelet packet dictionary~\cite[Sec.~8.1.2]{MALLAT-BOOK3} when the input graph is the simple path $P_N$. \begin{figure} \begin{subfigure}{0.33\textwidth} \centering\includegraphics[width=\textwidth]{NGWvarimaxj5k1.png} \caption{Father wavelet vectors $\Psi^{(4)}_0$} \end{subfigure} \begin{subfigure}{0.33\textwidth} \centering\includegraphics[width=\textwidth]{NGWvarimaxj5k2.png} \caption{Mother wavelet vectors $\Psi^{(4)}_1$} \end{subfigure} \begin{subfigure}{0.33\textwidth} \centering\includegraphics[width=\textwidth]{NGWvarimaxj5k5.png} \caption{Wavelet packet vectors $\Psi^{(4)}_4$} \end{subfigure} \caption{Some of the Shannon wavelet packet vectors on $P_{512}$} \label{fig:shannon512} \end{figure} Note that the varimax rotation algorithm does not necessarily sort the vectors as shown in Figure~\ref{fig:shannon512} because the maximization in Eq.~\eqref{eq:varimax} is invariant under any permutation of the columns and any sign flip of each column. In other words, to produce Figure~\ref{fig:shannon512}, we carefully applied sign flips to some of the columns and sorted the columns so that each subfigure simply shows translations of the corresponding wavelet packet vectors. \subsection{Computational Complexity} \label{sec:varimax-cost} The varimax rotation algorithm of Appendix~\ref{app:varimax} is iterative in nature and is an example of the BSV algorithms~\cite{JENNRICH}: each iteration at the dual node set $V^{\star(j)}_k$ requires computing the full Singular Value Decomposition (SVD) of a matrix of size $N^j_k \times N^j_k$ representing a gradient of the objective function, which itself is computed by multiplying matrices of sizes $N^j_k \times N$ and $N \times N^j_k$. The convergence is checked with respect to the relative error between the current and previous gradient estimates measured in the nuclear norm (i.e., the sum of the singular values). For our numerical experiments in Section~\ref{sec:applications}, we set the maximum number of iterations to 1000 and the error tolerance to $10^{-12}$.
Therefore, to generate $\Psi^{(j)}_k$ for each $(j,k)$, the computational cost in the worst-case scenario is $O\left(c \cdot (N^j_k)^3 + N \cdot (N^j_k)^2\right)$, where $c=1000$; the first term accounts for the SVD computations and the second for the matrix multiplications. For a perfectly balanced bipartition tree with $N=2^{j_\mathrm{max}}$, we have $N^j_k = 2^{{j_\mathrm{max}}-j}$, $j=0, \ldots, {j_\mathrm{max}}$, $k=0, \ldots, 2^j-1$. Hence we have: \begin{equation} \label{eq:Njk2} \sum_{k=0}^{2^j-1} (N^j_k)^2 = \sum_{k=0}^{2^j-1} 2^{2({j_\mathrm{max}}-j)} = 2^{2{j_\mathrm{max}}-2j} \cdot 2^j = N^2 \cdot 2^{-j} , \end{equation} and \begin{displaymath} \sum_{k=0}^{2^j-1} (N^j_k)^3 = \sum_{k=0}^{2^j-1} 2^{3({j_\mathrm{max}}-j)} = 2^{3{j_\mathrm{max}}-3j} \cdot 2^j = N^3 \cdot 2^{-2j} . \end{displaymath} Note that at the bottom level $j={j_\mathrm{max}}$, each node is a leaf containing only one eigenvector, so no rotation needs to be estimated or computed there. Hence, summing the cost $O\left(c \cdot (N^j_k)^3 + N \cdot (N^j_k)^2\right)$ from $j=0$ to ${j_\mathrm{max}}-1$, the total worst-case computational cost is $O((2+4c/3)N^3 - 2N^2 - (4c/3) N)$; in other words, this is an $O(N^3)$ algorithm. In practice, convergence is often achieved in fewer than 1000 iterations at each node, except possibly at the nodes with small $j$ whose $N^j_k$ is large. For example, when computing the VM-NGWP dictionary for the path graph $P_{512}$ shown in Figure~\ref{fig:shannon512}, the average number of iterations over all the dual graph nodes $\left\{ V^{\star(j)}_k\right\}_{j=0:{j_\mathrm{max}}; \, k=0:2^j-1}$ was $70.43$ with a standard deviation of $107.95$.
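For concreteness, Algorithm~\ref{algo:VM-rotation} translates almost line by line into NumPy. The following sketch is only an illustration of the iteration analyzed above, not the tested implementations available in MATLAB or R.
\begin{verbatim}
import numpy as np

def vm_rotate(A, maxit=1000, tol=1e-12):
    """Varimax rotation of the columns of A via the BSV iteration."""
    N = A.shape[0]
    B, S = A.copy(), 0.0
    for _ in range(maxit):
        S0 = S
        # Gradient of the varimax criterion at the current rotation:
        # A^T (N * B^{o3} - B diag(B^T B)), with B^{o3} entrywise.
        G = A.T @ (N * B**3 - B * np.diag(B.T @ B))
        U, sigma, Vt = np.linalg.svd(G)
        S = sigma.sum()              # nuclear norm of the gradient
        B = A @ (U @ Vt)             # update the rotated matrix
        if S > 0 and abs(S - S0) / S < tol:
            break                    # S no longer changes appreciably
    return B
\end{verbatim}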
\section{Supplementary information} \setcounter{figure}{0} \section{Self-pulsing mechanism.} \label{sec:SPmec} Driven optical micro- and nano-cavities in the classical regime are often modelled using temporal coupled-mode theory, the specific formulation of which depends on the boundary conditions, i.e., on the way the cavity is excited~\cite{CPT}. For a photonic crystal cavity bidirectionally coupled to a bus waveguide (the fibre tapered loop in our experiments), used for excitation at power $P_{in}=\hbar\omega_L\lvert\overline{a}_{in}\rvert^2$ and frequency $\omega_L$, the equation governing the complex electric field amplitude (in the frame rotating at $\omega_L$) can be written as: \begin{equation} \label{CPTeq} \frac{da(t)}{dt} = i\Delta a(t) - \frac{\kappa}{2}a(t) + \sqrt{\frac{\kappa_{e}}{2}}\overline{a}_{in} \end{equation} where $a(t)$ is normalized such that the intracavity photon number is given by $n_{c}(t)=a^*(t)a(t)$, $\Delta = \omega_{o}-\omega_{L}$ is the detuning, $\kappa$ the overall photonic decay rate and $\kappa_{e}$ the total photon decay rate into the extrinsic channel used for excitation, in this case the bus waveguide. The outcoupled field satisfies $a_{out}=a_{in}+ \sqrt{\frac{\kappa_{e}}{2}}a$, where we have assumed that the bidirectional coupling is symmetric. This equation admits a simple steady-state solution $\overline{a}$ for a CW laser drive, the intra-cavity photon number $\overline{n}_{c}=\lvert\overline{a}\rvert^2$ being: \begin{equation} \label{eq:numberphotons} \overline{n}_c=\frac{\kappa_e/2}{(\kappa/2)^2+\Delta^2}\frac{P_{in}}{\hbar\omega_L} \end{equation} The resonant circulating intensity, $I$, can be estimated~\cite{Vahala} in a high-$Q$, small-volume $V$ optical resonator as $I = P_{in} \left( \frac{\lambda}{2 \pi n_g} \right) (Q/V)$, where $P_{in}$ is the input excitation power at wavelength $\lambda$ and $n_g$ is the group index of the resonant mode. For an input power of 1 mW and a typical silicon OMC mode with $Q \sim 5\cdot10^4$ and $V=2.4(\lambda/n)^3$, this corresponds to $1.8\,\mathrm{GW/m^2}$. This large stored energy results in a significant non-linear behaviour of the dielectric medium. In the case of silicon, due to its central symmetry, the second-order susceptibility tensor is null and only third-order terms need to be considered~\cite{siliconsecondorder}. To such order, the main non-linear processes in silicon for single-frequency operation in the telecom range are two-photon absorption (TPA) and a dispersive Kerr effect, arising from the imaginary and real parts, respectively, of the third-order susceptibility of electronic origin. Nevertheless, free-carrier absorption (FCA) needs to be considered, since a large population of free carriers $N_{e}$ can be generated precisely due to two-photon absorption. A schematic describing the microscopic nature of these two phenomena can be found at the top of Fig.~\ref{fig:SP}. Lastly, most of the absorbed optical power in the cavity will be released to the lattice through the decay of the photoexcited carriers, increasing its temperature. The decay being much faster than the dynamics of the temperature, the energy transfer from the electron population to the lattice can be considered immediate for the purposes of modelling the temperature field in the cavity. Such a temperature rise in the cavity region will in turn produce a shift in the cavity resonant frequency $\omega_{o}$ of opposite sign to the one mediated by the presence of free carriers.
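As a quick numerical illustration of Eq.~(\ref{eq:numberphotons}), the steady-state photon number can be evaluated directly; the parameter values below are illustrative placeholders only, not the fitted parameters of our device.
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
c0 = 299792458.0         # m / s

def n_photons(P_in, wavelength, kappa, kappa_e, Delta):
    """Steady-state intracavity photon number, Eq. (2)."""
    omega_L = 2 * np.pi * c0 / wavelength
    return (kappa_e / 2) / ((kappa / 2)**2 + Delta**2) \
           * P_in / (hbar * omega_L)

# Illustrative placeholder values (not our device parameters):
print(n_photons(P_in=1e-3, wavelength=1550e-9,
                kappa=2*np.pi*4e9, kappa_e=2*np.pi*1e9, Delta=0.0))
\end{verbatim}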
Following the derivations in ~\cite{nonlinear1,nonlinear2,nonlinear3}, all such non-linear processes can be microscopically introduced into Maxwell's equations with the corresponding non-linear polarization terms, and then cast into the coupled-mode formalism by writing: \begin{subequations} \begin{gather} \frac{da(t)}{dt}= i\Delta a(t) - \left(\frac{\kappa}{2} + \frac{c^2}{n_{Si}^2}\frac{\hbar\omega_L\beta_{TPA}\lvert a(t)\rvert^2}{2V_{TPA}}+ \frac{c}{n_{Si}}\frac{\sigma_{r}N_{e}}{2 V_{FCA}} \right)a(t) + \sqrt{\frac{\kappa_{e}}{2}}\overline{a}_{in} \label{eq:CPTnonlinear1}\\ \Delta= \omega_{L} - \left(\omega_{o}-\frac{\omega_{o}}{n_{Si}}\frac{\sigma_{i}N_{e}}{V_{FCA}}+\frac{\omega_{o}}{n_{Si}}n_{T}\Delta T\right) \label{eq:CPTnonlinear2} \end{gather} \end{subequations} where, in addition to the cavity losses in the linear regime ($\kappa/2$), the absorbed power due to two-photon absorption and free-carrier absorption has been considered in (\ref{eq:CPTnonlinear1}). Dispersion due to free carriers $N_{e}$ and the temperature increase $\Delta T$ has also been introduced, as can be seen in the equation for the detuning in (\ref{eq:CPTnonlinear2}). Here $\beta_{TPA}$ is the tabulated two-photon absorption coefficient~\cite{betaTPA}, $V_{TPA}$ and $V_{FCA}$ the characteristic volumes of the two-photon and free-carrier absorption processes, respectively, $n_{Si}$ the refractive index of silicon, $c$ the speed of light, $\sigma_r$ and $\sigma_i$ the free-carrier absorption and dispersion \textit{cross-sections}~\cite{sigma} and $n_T$ the first-order refractive index variation caused by temperature. Here, we have already neglected the dispersion associated with the Kerr effect due to its difference in magnitude compared to free-carrier dispersion and thermo-optic dispersion. We note that linear absorption does not appear explicitly in (\ref{eq:CPTnonlinear1}) since its effect is already taken into account through the intrinsic cavity losses $\kappa_i$, which contribute to $\kappa$. This implies that solely considering linear absorption already requires solving a much more complex system in which $N_e$ and $\Delta T$ have a prominent role in the dynamics. Actually, in the sample explored in this manuscript, linear absorption dominates over the rest of the terms for the power levels considered, and little to no difference is observed when disregarding the two-photon absorption term. The influence of the free-carrier population $N_e$ and the temperature increase $\Delta T$ in Eqs. (\ref{eq:CPTnonlinear1}) and (\ref{eq:CPTnonlinear2}) implies that their dynamics need to be tracked too. These are obviously very complex space- and time-dependent processes, but they can be cast in the following form when the absorption processes mentioned above and phenomenological decay rates are considered: \begin{equation}\label{eq:FCD} \frac{dN_{e}(t)}{dt}= -\gamma_{fc} N_{e} + \frac{1}{2}\frac{c^2}{n_{Si}^2}\frac{\hbar\omega_L\beta_{TPA}}{V_{TPA}}\lvert a(t)\rvert^4+\frac{c}{n_{Si}}\frac{\alpha_{lin}}{R_{eff}}\lvert a(t)\rvert^2 \end{equation} \begin{equation}\label{eq:TO} \frac{d\Delta T(t)}{dt}= -\gamma_{th} \Delta T + \frac{1}{\rho_{Si}C_{p,Si}V_{eff,T}} \left(\frac{c^2}{n_{Si}^2}\frac{\beta_{TPA}}{V_{TPA}}\lvert a(t)\rvert^4+\frac{c}{n_{Si}}\frac{\sigma_{r}N_{e}(t)}{V_{FCA}}\lvert a(t)\rvert^2+\frac{c}{n_{Si}}\frac{\alpha_{lin}}{R_{eff}}\lvert a(t)\rvert^2\right) \end{equation} where we have considered unavoidable linear absorption in addition to TPA and FCA.
Here $\alpha_{lin}$ is the linear absorption coefficient~\cite{alphalin}, $R_{eff}$ represents the inverse of the fraction of the optical mode inside the silicon, $\rho_{Si}$ and $C_{p,Si}$ the density and constant-pressure specific heat capacity of silicon, $V_{eff,T}$ an effective thermal volume for the cavity, and $\gamma_{fc}$ and $\gamma_{th}$ the free-carrier and thermal decay rates.\\ \begin{figure}[t] \centering \includegraphics[width=0.85\textwidth]{FigS1.pdf} \caption[radiationpressure]{\textbf{Self-pulsing dynamics of a highly driven silicon optical cavity}. Driven at \emph{high} powers, the dynamics of free carriers $N_e$ and temperature changes $\Delta T$ need to be tracked. They couple dispersively to the optical resonance and can end up leading to periodic but highly anharmonic dynamical states. Both dynamical variables $\Delta T$ (a) and $N_e$ (b) display a periodic behaviour at frequency $\nu_{SP}$, forming a closed trajectory in phase space (e). The resulting time-periodic detuning $\Delta$ between the optical cavity and the driving laser modulates the photon number $\overline{n}_c$ and the transmission, the typical time trace of which is shown in (d). (f) Fast Fourier transform (FFT) of the time trace. Time and frequency are given relative to the thermal decay rate $\gamma_{th}$.} \label{fig:SP} \end{figure} An understanding of the dynamic behaviour of an optical cavity driven at high enough power requires the simultaneous solution of Equations (\ref{eq:CPTnonlinear1}), (\ref{eq:FCD}) and (\ref{eq:TO}). To simplify these, we can use the characteristic times involved in the different physical processes. For the optical modes we deal with in this work, the decay time of the optical mode ($\sim \text{ps}$) is orders of magnitude faster than the nonlinear dispersion mechanisms ($\sim \text{ns}$). As a consequence, we can assume $N_{e}$ and $\Delta T$ to be constant on the time-scales involved in (\ref{eq:CPTnonlinear1}). In addition, the losses induced by the non-linear processes can at first be neglected with respect to the linear ones due to the apparently high density of surface states in our SOI-based photonic structures, a possible reason for the low quality factors experimentally observed. Solving for the steady-state intra-cavity photon number $\overline{n}_c$ as in (\ref{eq:numberphotons}), we can actually restrict our system of equations to (\ref{eq:FCD}) and (\ref{eq:TO}) by considering the adiabatic response of the optical cavity, using the steady-state number of photons (\ref{eq:numberphotons}) with a time-dependent detuning (\ref{eq:CPTnonlinear2}) and replacing $\lvert a(t)\rvert^2=\overline{n}_c(t)$ in (\ref{eq:FCD}) and (\ref{eq:TO}). These simplified equations can be integrated without much computational effort and capture most of the experimental features in the presented experiment as well as in our previous experiments~\cite{dani_selfpulsing}. We consider all parameters defining the equations above, except the laser parameters $P_{in}$ and $\omega_L$, as given. In most situations, the dynamic solution to the system of differential equations is a fixed or equilibrium point in the phase space defined by $\{N_{e},\Delta T\}$, which leads to a stable spectral shift of the cavity mode, typically dominated by the thermal part and referred to as a thermo-optic shift~\cite{TOeffect}. For particular combinations of $P_{in}$ and $\omega_L$, stable limit cycles exist in phase space.
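A minimal sketch of such an integration with SciPy is given below. The lumped coefficients in \texttt{p} are symbolic placeholders for the combinations of material parameters appearing in Eqs.~(\ref{eq:FCD}) and (\ref{eq:TO}); the numbers are illustrative only and are not calibrated to our device.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

def sp_rhs(t, y, p):
    """Simplified self-pulsing equations with adiabatic photon number."""
    Ne, dT = y
    # Time-dependent detuning: free-carrier vs thermal dispersion
    Delta = p['Delta0'] + p['c_fc'] * Ne - p['c_th'] * dT
    # Adiabatic intracavity photon number, Eq. (2)
    nc = (p['kappa_e'] / 2) / ((p['kappa'] / 2)**2 + Delta**2) * p['Pin_hw']
    dNe = -p['g_fc'] * Ne + p['a_tpa'] * nc**2 + p['a_lin'] * nc
    ddT = -p['g_th'] * dT + p['b_tpa'] * nc**2 \
          + p['b_fca'] * Ne * nc + p['b_lin'] * nc
    return [dNe, ddT]

# Dimensionless placeholder coefficients (illustrative only):
p = dict(Delta0=-1.0, c_fc=2.0, c_th=1.5, kappa=1.0, kappa_e=0.4,
         Pin_hw=5.0, g_fc=10.0, g_th=1.0, a_tpa=0.1, a_lin=1.0,
         b_tpa=0.01, b_fca=0.05, b_lin=0.1)
sol = solve_ivp(sp_rhs, (0.0, 50.0), [0.0, 0.0], args=(p,), max_step=0.01)
\end{verbatim}
Whether the trajectory settles onto a fixed point or onto a self-pulsing limit cycle depends on the chosen combination of drive power and detuning, as discussed above.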
With initial conditions $\{N_{e}(0),\Delta T(0)\}$ inside the basin of attraction of this limit cycle, the dynamic solution in the limit $t\to\infty$ tends to the cycle, forming a periodic closed trajectory in phase space $\{N_{e}^*(t),\Delta T^*(t)\}$, a self-pulsing (SP) limit cycle~\cite{nonlinear1}, which is generally highly anharmonic. The typical shape of this closed trajectory, as well as the resulting temporal traces for $N_{e}$, $\Delta T$, $\Delta$ and the transmission $T$, are obtained by numerical integration and are depicted in Fig.~\ref{fig:SP}(a,b,c,d,e). Panel (f) shows the Fast Fourier Transform (FFT) of the transmission trace in (d), taken over many oscillation periods.\\ \begin{figure}[t] \centering \includegraphics[scale=0.53]{FigS2.pdf} \caption[Simulated radiofrequency spectra as a function of wavelength with and without the mechanical resonator]{\textbf{Simulated radiofrequency spectra as a function of wavelength with and without the mechanical resonator.} Radiofrequency spectra of the transmission time trace as a function of the laser wavelength $\lambda_L$ without (a) and with (b) the mechanical mode at $\Omega_m=54$ MHz. When the mechanical motion is taken into account, the system can evolve to mechanical lasing, driven by the self-pulsing-induced modulation of the cavity photons $n_c(t)$. Frequency entrainment occurs when an integer multiple of $\nu_{SP}$ is close to the mechanical frequency, i.e., $m \nu_{SP}=\Omega_m$. Three such values of $m$ are highlighted with arrows and are denoted as M3, M2 and M1, depending on the value of $m$.} \label{fig:SPandMEClambda} \end{figure} Since in most experiments our knob is the laser wavelength $\lambda_L$ (the laser frequency $\omega_L$), it is interesting to study how the SP evolves with $\lambda_L$ as it is scanned across the optical cavity mode of frequency $\omega_{o}$. Numerically solving the set of equations described above for many laser wavelengths leads to the colormap of the radiofrequency spectrum shown in Fig. \ref{fig:SPandMEClambda}(a). To mimic experimental measurement conditions, these simulations start with a laser drive on the blue side of the resonance. The initial conditions for numerical integration at each wavelength correspond to the solution $\{N_{e}(t_*),\Delta T(t_*),x(t_*)\}$ of the previous wavelength, with $t_*$ chosen such that the solution has reached a steady state.\\ \section{Mechanical lasing.} \label{sec:meclasing} The system of equations described above only considers the optical response of the system. If the optical cavity of interest is coupled to a mechanical mode of frequency $\Omega_m$ and dissipation rate $\Gamma_m$ via radiation pressure with a vacuum optomechanical coupling rate $g_o$, the dynamics of the mechanical mode might have to be considered, at least for particular values of the laser drive $P_{in}$ and $\lambda_L$. Even if the coupling rate $g_o$ is low or if the system is far from the sideband-resolved condition ($\kappa<\Omega_m$), where the response of the mechanical resonator to an external stimulus is truly modified~\cite{cavityoptomechanics}, the periodic modulation of the photon number $\overline{n}_c(t)$ at frequency $\nu_{SP}$ induced by the development of a SP limit cycle can drive the mechanical mode into a coherent oscillation state. This happens whenever $\nu_{SP}$ or any of its harmonics $M\nu_{SP}$ is close to the mechanical frequency $\Omega_m$.
Since the SP mechanism has no fundamental natural frequency of its own, once the mechanical resonator is excited the self-pulsing is entrained to oscillate at the frequency of the mechanical mode for some range of the drive parameters. Actually, for sufficiently large absolute values of $g_o$ and high mechanical dissipation $\Gamma_m$, the self-pulsing might never occur without excitation of the mechanical mode, leading to sharp jumps between lasing states associated with different harmonics $M$ of the SP limit cycle, which is what we observe in our experiment and is depicted in Figure 1 of the main text. To understand the observed dynamics, the following system of equations now needs to be solved: \begin{equation}\label{eq:FCD2} \frac{dN_{e}(t)}{dt}= -\gamma_{fc} N_{e} + \frac{1}{2}\frac{c^2}{n_{Si}^2}\frac{\hbar\omega_L\beta_{TPA}}{V_{TPA}}\overline{n}_c(t)^2+\frac{c}{n_{Si}}\frac{\alpha_{lin}}{R_{eff}}\overline{n}_c(t) \end{equation} \begin{equation}\label{eq:TO2} \frac{d\Delta T(t)}{dt}= -\gamma_{th} \Delta T + \frac{1}{\rho_{Si}C_{p,Si}V_{eff,T}} \left(\frac{c^2}{n_{Si}^2}\frac{\beta_{TPA}}{V_{TPA}}\overline{n}_c(t)^2+\frac{c}{n_{Si}}\frac{\sigma_{r}N_{e}(t)}{V_{FCA}}\overline{n}_c(t)+\frac{c}{n_{Si}}\frac{\alpha_{lin}}{R_{eff}}\overline{n}_c(t)\right) \end{equation} \begin{equation} \label{eq:nonlinearmechanics} \frac{d^2 x(t)}{dt^2}+\Gamma_m\frac{dx(t)}{dt}+\Omega_m^2x(t)=\hbar \frac{g_o}{x_{zpf}m_{eff}}\overline{n}_c(t) \end{equation} where $x(t)$ is the generalized coordinate for the displacement of the mechanical mode and $m_{eff}$ its effective mass. The first two equations correspond to the ones solved in the previous section, and the third describes the dynamics of the mechanical resonator under the action of the radiation pressure force $F_{RP}(t)=\hbar\frac{g_o}{x_{zpf}}n_c(t)$. In this last equation, the action of thermal Langevin forces is left out for simplicity, although these are necessary to experimentally observe the mechanical modes at low driving powers. Due to the low-frequency MHz modes considered herein, these three equations are solved again considering the adiabatic response of the optical cavity to the rest of the dynamics, i.e. $n_c(t)=\overline{n}_c(t)$, which implies \begin{equation} \label{eq:numberphotonstime} \overline{n}_c=\frac{\kappa_e/2}{(\kappa/2)^2+\Delta^2}\frac{P_{in}}{\hbar\omega_L} \end{equation} where the detuning \begin{equation} \label{eq:detuntotal} \Delta= \omega_{L} - \left(\omega_{o}-\frac{\omega_{o}}{n_{Si}}\frac{\sigma_{i}N_{e}}{V_{FCA}}+\frac{\omega_{o}}{n_{Si}}n_{T}\Delta T-\frac{g_o}{x_{zpf}}x\right) \end{equation} now includes the dispersive effect of the mechanical mode displacement on the optical cavity.\\ For ease of understanding, Fig.~\ref{fig:SPandMEClambda}(b) depicts what happens for an optomechanical system where both regions of pure self-pulsing and mechanical lasing are visible. Several entrainment plateaus show the different orders $M$ of the mechanical lasing, where the $M$-th harmonic of the SP frequency, $M\nu_{SP}$, coincides with the mechanical frequency $\Omega_m$. Except for the inclusion of the mechanical dynamics and the optomechanical coupling rate $g_o$, the rest of the parameters are kept as in the previous section. This allows us to understand how the SP mechanism acts as a driving term for the mechanical lasing, as can easily be seen by comparing Figures \ref{fig:SPandMEClambda}(a) and (b), where regions of pure SP are practically equivalent.
Whenever $g_o$ or $\Gamma_m$ increases, it is easier to drive the mechanical mode at a frequency difference $\Omega_m-M\nu_{SP}$, and a minimal drive can end up locking the SP to a subharmonic, which results in spectra similar to the one observed experimentally, where regions of pure SP are not observed.\\ \section{Injection-locking dynamics.} We now develop a numerical model to qualitatively describe the experimental results obtained in the injection-locking experiment. For this purpose, we only have to add the effect of modulating the incoming laser power $P_{in}$ via the electro-optic modulator (EOM). The time-dependent input power is given by $P_{mod}=P_{in}\cos^2\left( \pi \frac{V-V_{\pi}}{2V_{\pi}}\right)$, where $P_{in}$ is again the input excitation power entering the EOM, $V$ the voltage applied to the EOM and $V_{\pi}$ the characteristic half-wave voltage, which for our EOM is $V_{\pi}=6.7\,$V. The voltage used in this work is given by $V=V_{DC}+V_{AC}\cos(2\pi f_{mod}t)$, where $V_{DC}$ is fixed at the optimal modulation point $V_{DC}=V_{\pi}/2$ and $V_{AC}$ can be varied by using the VNA as an RF source. \\ \begin{figure}[t] \centering \includegraphics[scale=0.9]{FigS3.pdf} \caption[radiationpressure]{\textbf{Radiofrequency spectra in the injection-locking regime.} Power spectral density (PSD) of the (a) transmission and (b) mechanical time traces. An external modulation frequency far from the lasing frequency ($f_{mod}=$ 100.28 MHz, black solid line) leads to two different peaks, while the external tone locks the mechanical lasing frequency for small differences ($f_{mod}=$ 100.34 MHz, blue solid line).} \label{fig:num_sim1} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.8\textwidth]{FigS4.pdf} \caption[radiationpressure]{\textbf{Numerical results reproducing the coherent response of the mechanical signal.} (a) Color plot of the PSD of the transmitted signal obtained from the Right cavity for a modulation voltage of $V_{AC}$= 0.034 $V_{\pi}$. (b), (c) Coherent magnitude and phase response obtained from the temporal traces of the Right cavity.} \label{fig:num_sim2} \end{figure} Using the same time-scale arguments as the ones invoked for the free-carrier, temperature and mechanical dynamics, the response of the optical cavity to the power modulation can be considered instantaneous (the modulation frequencies $f_{mod}$ are close to the mechanical frequency $\Omega_m$) and we can this time set the intracavity photon number shown in Eq. (\ref{eq:numberphotonstime}) as: \begin{equation} \label{eq:numberphotonsmod} \overline{n}_c=\frac{\kappa_e/2}{(\kappa/2)^2+\Delta^2}\frac{P_{mod}}{\hbar\omega_L} \end{equation} which implies that the numerical resolution of the equations is practically equivalent to what is done for standard mechanical lasing. Note, however, that being able to spectrally resolve the two peaks, the modulation and lasing peaks, whenever they are very close in frequency requires extremely long simulation time windows. For example, for a frequency difference $ \lvert\Omega_{OMO}-f_{mod}\rvert=0.01$ MHz in the unlocked regime, resolving the two peaks requires a time window of at the very least 100 $\mu$s. However, even longer times are required in practice since the solution only becomes stable after some initial time.
The simulations shown here use a time span $t\in[0,400\,\mu\mathrm{s}]$.\\ \begin{figure}[b] \centering \includegraphics[width=0.75\textwidth]{FigS5.pdf} \caption[Simulated Arnold tongue]{\textbf{Simulated Arnold tongue as a function of the modulation depth $V_{AC}$}. (a) Color plot of the transmitted signal FFT spectra and (b) the mechanical displacement FFT spectra from $V_{AC}= 0.0149 V_{\pi}$ to $V_{AC}= 0.0472 V_{\pi}$. The vertical dashed white line highlights the oscillator frequency in the absence of any drive, $\Omega_{OMO,0}$, while the other two delimit the lock range and evidence an asymmetric growth on both sides of $\Omega_{OMO,0}$. } \label{fig:Arnold} \end{figure} As explained in the main text, the use of a neighbouring optical cavity allows us to read out the mechanical dynamics via the non-vanishing cross-coupling terms. In any case, what we can be certain of is that the thermal and free-carrier effects are extremely low when reading out the temporal dynamics of the outcoupled light in the neighbouring cavity. This implies that whatever we observe in that cavity can be directly compared to what is numerically obtained for the mechanical resonator dynamics. Fig.~\ref{fig:num_sim1} shows the numerical solutions obtained for the injection-locking scheme. Fig. \ref{fig:num_sim1}(a) and (b) show the PSD obtained from both the transmission and mechanical displacement time traces, mimicking the spectra obtained while measuring on the right (R) and left (L) cavity, respectively. The black curves correspond to the spectrum for a modulation frequency $f_{mod}$ far from the mechanical frequency. The absence of direct modulation on the mechanical time trace implies that the peak at the modulation frequency is obviously much smaller than the mechanical lasing peak. The blue plots correspond to a small detuning between the excitation frequency and the mechanical frequency, and we clearly see that only a single peak is observed, which corresponds to the injection-locked regime. Fig. \ref{fig:num_sim2}(a) shows the color plot of the PSD signal as obtained from the mechanical displacement time traces, illustrating the lock range for a modulation voltage of $V_{AC}=0.034V_{\pi}$. Fig. \ref{fig:num_sim2}(b) and (c) show the reconstructed magnitude and phase response from the simulated temporal traces of the mechanical displacement. The recovered response agrees well with the experimental measurements shown in Figure 2(b) and (d) of the main manuscript.\\ The Arnold tongues shown are obtained by integrating the full system of equations for several modulation frequencies $f_{mod}$, using the solution $\{N_{e}(t_*),\Delta T(t_*),x(t_*)\}$ of the previous modulation frequency as an initial condition for each value of $f_{mod}$. To mimic the experimental observation, we show in Fig.~\ref{fig:Arnold} the Arnold tongue as observed from the PSD colormaps of both the transmission time traces (a) and the mechanical time traces (b). Much like in the experiment, whose spectra are shown in Figure 3 of the main text, the lock range grows asymmetrically on both sides of the mechanical frequency (white dashed line). The qualitative agreement with the experimentally obtained Arnold tongues is obvious, although the size of the lock range for a particular modulation amplitude $V_{AC}$ is much larger in the experimental setting. This discrepancy is likely due to the difficulty of obtaining precise values for all of the governing parameters in the non-linear dynamical equations of the optical cavity.
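For reference, the modulated drive entering Eq.~(\ref{eq:numberphotonsmod}) amounts to a one-line modification of the input power. A minimal sketch:
\begin{verbatim}
import numpy as np

V_pi = 6.7   # half-wave voltage of the EOM, in volts

def P_mod(t, P_in, V_AC, f_mod, V_DC=V_pi / 2):
    """EOM-modulated input power, biased at the optimal point V_pi/2."""
    V = V_DC + V_AC * np.cos(2 * np.pi * f_mod * t)
    return P_in * np.cos(np.pi * (V - V_pi) / (2 * V_pi))**2
\end{verbatim}
Replacing $P_{in}$ by this time-dependent power in the adiabatic photon number of Eq.~(\ref{eq:numberphotonstime}) reproduces the injection-locking simulations described above.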
\\ \section{Higher-harmonic injection-locking.} \begin{figure}[t!] \centering \includegraphics[width=0.75\textwidth]{FigS6.pdf} \caption[Experimental Arnold tongue at the first harmonic]{\textbf{Experimental Arnold tongue at the first harmonic}. (Color online) Colormaps of the power spectral density (PSD) of light transmitted at the wavelength of the probe laser for modulation amplitudes from $V_{AC}$ = 0.00275$V_{\pi}$ to $V_{AC}$ = 0.0153$V_{\pi}$. (Left) Modulation frequency $f_{mod}$ around $f_{OMO,0}$ and (right) modulation frequency $f_{mod}$ around $2f_{OMO,0}$. The dashed white lines mark either the frequency of the free-running oscillator at $f_{OMO,0}$=100.355 MHz or its first harmonic.} \label{fig:n2sync} \end{figure} Injection-locking of a regenerative oscillator can also be achieved close to higher-order harmonics of the oscillation frequency, i.e., at $f_{mod}\sim nf_{OMO}$~\cite{pikovsky}. This was checked experimentally by carrying out measurements like the ones shown in Fig. 4 of the main text but using a modulation frequency around the first harmonic, i.e. $n=2$. This is shown, along with measurements done at the direct oscillation frequency, i.e. $n=1$, in Fig.~\ref{fig:n2sync}. We see that in this measurement run the asymmetry in the Arnold tongue for direct modulation at $f_{mod}\sim f_{OMO}$ does not show up, while the Arnold tongue when driving close to the first harmonic exhibits an asymmetry in the opposite direction to the one shown in the main text. This is currently under investigation in a configuration in which we injection-lock only the SP dynamics. \section{Mode hybridization and phonon detection.} \begin{figure}[t] \centering \includegraphics[width=\textwidth]{FigS7.pdf} \caption[radiationpressure]{\textbf{Phonon detection}. (a) Optical resonances of cavities A and B when coupled through a common optical fiber. (b) Thermally driven mechanical resonances. The small detuning is due to fabrication imperfections. (c) Detection of cavity A in mechanical lasing (blue) by employing cavity B (red), mechanically coupled via a common tether. (d) The detected signal follows the phonon laser switching.} \label{fig:modeshybrid} \end{figure} As mentioned in the manuscript, the string-like mechanical modes of both beams would have exactly the same oscillation frequency $\Omega_{m,A}=\Omega_{m,B}$ in the perfect case and would hybridize into symmetric (+) and antisymmetric ($-$) modes with frequencies $\Omega_{m,\pm}=\Omega_{m,A}\pm G$, with $G$ the coupling induced by the common tether. Fabrication disorder breaks the symmetry and each mode has a different original oscillation frequency, $\Omega_{m,A}$ and $\Omega_{m,B}$. For our structures, the level of fabrication disorder is relatively strong and the uncoupled detuning $\Delta_m=\Omega_{m,A}-\Omega_{m,B}$ is considerable, much larger than the $G$ induced by the coupling tether(s) for this type of string-like mode. This implies that, once coupled, $\Omega_{m,+}\sim\Omega_{m,A}$ and $\Omega_{m,-}\sim\Omega_{m,B}$ or vice versa. Assuming the former case and that all other normal modes of the uncoupled resonators are far in energy (which is obviously the case for this type of mode), the modes can be written as \begin{subequations} \begin{align} \lvert + \rangle_m &= \lvert A\rangle_m + \alpha_m\lvert B\rangle_m \\ \lvert - \rangle_m &= \lvert B\rangle_m - \alpha_m\lvert A\rangle_m \end{align} \end{subequations} with $0<\alpha_m\ll 1$.
The same happens for the optical modes of the structure, but due to the stronger dependence of the mode frequencies on the existing fabrication disorder (hole and wing sizes, rugosity, loss of verticality, etc.) we can assume (and this is confirmed by the large experimental detunings observed) that they do not couple at all, and set $\alpha_o=0$. In such a case, we define the optomechanical coupling rates $g_o$ between the original optical and mechanical modes as $g_{o,\lvert A\rangle_m\lvert A \rangle_o}=g_A$, $g_{o,\lvert B\rangle_m\lvert B \rangle_o}=g_B$, $g_{o,\lvert A\rangle_m\lvert B \rangle_o}=0$ and $g_{o,\lvert B\rangle_m\lvert A \rangle_o}=0$, where in principle we could expect $g_A$ to be similar to $g_B$. In fact, one can easily see that for the mechanical mode considered in this work, which is asymmetric with respect to the nanobeam axis, one would expect no coupling at all to any symmetric/antisymmetric optical mode, i.e. $g_A=g_B=0$. However, as obtained from direct numerical simulation of the geometries extracted from high-resolution SEM micrographs, the optical modes develop a modified shape due to asymmetries in the fabricated structure, leading to effective, and in this particular example high, optomechanical couplings $g_{A}$ and $g_{B}$, which can be of different absolute value and sign depending on where the modified field is located. Indeed, if the field relocates to one side of the beam, that side is contracted when the other side of the beam is expanded in a string-like mode, leading to different signs of $g_{A/B}$. When the tether is added, the optomechanical coupling terms can be cast as \begin{subequations} \begin{align} g_{o,\lvert +\rangle_m\lvert A \rangle_o}&=g_A \\ g_{o,\lvert -\rangle_m\lvert B \rangle_o}&=g_B \\ g_{o,\lvert +\rangle_m\lvert B \rangle_o}&=\alpha_m g_B \\ g_{o,\lvert -\rangle_m\lvert A \rangle_o}&=\alpha_m g_A \end{align} \end{subequations} which guarantees that, whatever the signs of $g_A$ and $g_B$, the read-out of a particular oscillating mechanical mode, e.g. $\lvert +\rangle_m$, with optical modes A and B leads to a particular phase relation, while the read-out of the other mode yields the same phase relation shifted by half a cycle.\\ \begin{figure} \centering \includegraphics[width=\textwidth]{FigS8.pdf} \caption[radiationpressure]{\textbf{Activation and detection of either symmetric or antisymmetric mechanical modes.} RF spectra and temporal traces when Cavity A is in a lasing state and Cavity B is just transducing (panels a and c), and vice versa (panels b and d). Black (Red) curves correspond to traces recorded when detecting the transmitted signal coming from Cavity A (B).} \label{fig:timetrace} \end{figure} The value of $\alpha_m$ is very small in all of our experiments and therefore we need the mechanical oscillator A (B) to be in a high-amplitude state to be able to experimentally observe any signal by driving the neighbouring optical cavity B (A). Since the OMCs used for the experiments in the main text did not both reach the mechanical lasing state simultaneously, we present here results obtained from another pair of coupled beams with very similar geometry, only with slightly longer structures, leading to lower mechanical frequencies $\Omega_A$ and $\Omega_B$ than those observed in the main text.
When weakly driving cavity A (B) and transducing the thermal motion of mode A (B), we are unable to observe any signal above the noise floor of our spectrum analyzer coming from the thermal motion of nanobeam B (A), as can be seen in Fig.~\ref{fig:modeshybrid}(b). In panel (c) of the same figure, we see that when nanobeam A (blue) is brought to a mechanical lasing state like the one described previously, we can also add a weak probe driving cavity B (green) and read out both the mechanics of cavity A and its own thermal motion. As seen in (d), the additional peak disappears when the laser driving optical mode A is turned off, evidencing the \emph{distant} phonon detection mechanism. This independent assessment is achieved by using two narrow bandpass filters (BPFs) as discussed in the main text. The experiment can also be reproduced in the reverse way (Fig.~\ref{fig:timetrace}(a,b)), allowing us to study the hybrid nature of the modes in more detail by looking at averaged temporal traces, where any signal coming from the existing thermal motion vanishes. We see (Fig.~\ref{fig:timetrace}(c)) that the signal of nanobeam A in a lasing state, as read out via its own optical mode and via optical mode B, shows a fixed phase relation, and that the fixed phase relation obtained for the reverse case (d) is shifted by $\pi$. As stated above, this confirms the presence of in-phase and anti-phase oscillations of both beams in the two mechanical eigenmodes, even though we cannot infer which mode is symmetric and which one is antisymmetric without knowing the particular signs of the coupling terms. \end{widetext}
\section{Introduction} Early-type galaxies (ETGs) have long been believed to emerge from collisions between smaller progenitor galaxies (first proposed by \citealt{Toomre72}), but nowadays it is clear that their formation history is more complex \citep[e.g.][]{Oser10}. Their structural and kinematic properties divide them into (1) fainter (absolute magnitude $M_B > -20.5$) and coreless fast rotators, which are nearly axisymmetric and have discy-distorted isophotes, and (2) brighter and more massive slow rotators with flat cores, which are moderately triaxial and have boxy-distorted isophotes (\citealt{Faber87,Bender88_a,Bender89,Kormendy96,Cappellari07,Emsellem07}). For the formation of fainter elliptical galaxies dissipational processes are believed to be important (e.g. \citealt{Bender92,Barnes96,Genzel01,Tacconi05,Cappellari07,Hopkins08, Johansson09}), whereas the latest evolutionary phases in the formation of massive ellipticals are dominated by collisionless processes \citep[e.g.][]{Naab06,Cappellari16,Naab17,Moster19}. \\ In general, these merging events modify the potential structure and populate a rich diversity of stellar orbits \citep{Roettgers14}. The intrinsic shape and orbital structure of such galaxies are not directly observable. Instead, sophisticated dynamical models are needed to process kinematic and photometric observational data to extract all the information about the orbital structure and internal composition of the galaxy. \\ Dynamical models are based on the collisionless Boltzmann equation, which governs the motion of stars in elliptical galaxies. Dynamical models that go beyond the recovery of velocity moments and aim at reconstructing the entire galaxy structure additionally take advantage of the Jeans theorem \citep[e.g.][]{Binney08}. This implies that the distribution function, which is the most general description of a system of stars, is constant along individual trajectories in phase space. In this regard, \citet{Schwarzschild79} pioneered an orbit superposition technique, where the equations of motion are numerically integrated for a finite number of stellar trajectories embedded in an assumed gravitational potential with contributions from the stars and possibly dark components. The weighted superposition of the orbits is determined for which the surface brightness and projected velocity distributions of the model match the observed ones in a least-squares sense \citep[e.g.][]{Richstone84}. Besides the orbital weights, all unknown quantities like the central black hole mass and dark matter distribution are varied between different models. The model producing the best fit to the projected velocity distributions is then associated with the correct model parameters. Any galaxy in a steady state can be modeled by Schwarzschild's orbit superposition technique. In order to determine both the mass and the internal motions of the stars, and to resolve the underlying mass-anisotropy degeneracy, one needs to describe the deviation of the observed absorption lines from a Gaussian profile by additional Gauss-Hermite functions of at least third and fourth order (\citealt{Gerhard93,vanderMarel93,Bender94}), or, preferably if the signal-to-noise ratio permits, measure the line shape non-parametrically (e.g. \citealt{Mehrgan19}). \\ Early applications of Schwarzschild's orbit superposition technique concentrated on spherical models (e.g. \citealt{Richstone85,Rix97}).
Since the simplified assumption of spherical symmetry is not true for most galaxies, later applications of Schwarzschild's orbit superposition technique assumed axisymmetry (e.g. \citealt{vanderMarel98,Cretton99,Gebhardt00,Thomas04,Valluri04}). \\ However, it is nowadays known that the most massive galaxies are neither spherical nor axisymmetric but triaxial objects. Observational indications are provided by isophotal twists in the surface brightness distribution, velocity anisotropy, minor-axis rotation, kinematically decoupled cores and the statistical distribution of the ellipticity of the isophotes (\citealt{Illingworth77,Bertola78,Bertola79,Williams79,Schechter78,Bender88_b,Franx88,Vincent05}). \citet{Schwarzschild79} proved the existence of self-consistent triaxial stellar systems in dynamical equilibrium with numerical orbit superposition models. Also, $N$-body simulations supported the idea of triaxial ellipsoidal stellar bulges and dark matter halos (e.g. \citealt{Aarseth78,Hohl79,Miller79,Barnes92,Jing02,Naab03,Bailin05}). \\ High-mass galaxies are of particular interest in several astrophysical respects. For example, the stellar initial mass function (IMF) is discussed to vary among galaxies, with the largest excess stellar mass compared to the locally measured Kroupa \citep{Kroupa01} or Chabrier \citep{Chabrier03} IMF occurring in the most massive galaxies (e.g. \citealt{vanDokkum10,Cappellari12,Vazdekis15,Parikh18,Thomas11,Posacki15,Treu10,Smith15}). Moreover, different growth models for supermassive black holes (SMBHs) predict different amounts of intrinsic scatter at the high-mass end of SMBH scaling relations (e.g. \citealt{Peng07,Jahnke11,Somerville15,Naab17}). In order to address these questions, precision dynamical mass measurements of the stars and SMBHs are required, and these are directly linked to triaxial modeling to avoid artificial scatter introduced by wrong symmetry assumptions. \citet{Thomas07} showed that the stellar mass-to-light ratio (and, thus, indirect inferences about the IMF) can be biased by up to 50\% in extreme cases when using axisymmetric models for a maximally triaxial galaxy. Moreover, a wrongly assumed mass-to-light ratio influences the determination of the mass of the central black hole in the model. The work by \citet{vandenBosch10} suggests that the assumption of axisymmetry may bias black-hole measurements in massive ellipticals. They find that the best-fitting black hole mass estimate doubles when modeling NGC 3379 with their triaxial code \citep{vandenBosch08} in comparison to axisymmetric models. Triaxial dynamical modeling routines are therefore required to recover unbiased stellar mass-to-light ratios and black hole masses with the best possible accuracy.\\ To understand the uncertainties and ambiguities of triaxial modeling one has to distinguish the following three essentially different effects: \begin{itemize} \item[ 1)] the intrinsic uncertainty of the applied dynamical modeling algorithm, which can only be tested under circumstances where the solution is designed to be unique; this is one of the aspects covered in this paper; \item[ 2)] the uncertainty in the reconstruction of orbital and mass parameters from typical observational data given the right deprojection, which is also addressed in this paper; \item[ 3)] the uncertainty of the deprojection routine; this topic is covered in \citet{deNicola20}. \end{itemize} All previously described effects need to be combined to evaluate the uncertainties in the whole modeling process.
This will be investigated in a future paper. \\ So far, two dynamical modeling codes using Schwarzschild's orbit superposition technique deal with triaxiality, namely those by \citet{vandenBosch08} and \citet{Vasiliev19}. Their estimated precision and efficiency will be discussed later. In this paper, we present our newly developed triaxial Schwarzschild code called \verb'SMART' (``Structure and MAss Recovery of Triaxial galaxies'') and test the code on a realistic high-resolution numerical merger simulation including supermassive black holes by \citet{Rantala18}. \\ This paper is structured as follows: We introduce \texttt{SMART} and describe its specific benefits in Section~\ref{sec:Triaxial Schwarzschild code SMART}. We then discuss the most important aspects for choosing this particular simulation in Section~\ref{sec:The N-body simulation}. In this section we furthermore explain all relevant steps to extract the data needed for modeling the simulation. In Section~\ref{sec:Results} we show the results of these models. Section~\ref{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs} deals with the quasi-uniqueness of the anisotropy recovery when fitting full line-of-sight velocity distributions. This is followed by a short discussion of remaining sources of systematics and a comparison to other triaxial modeling codes in Section~\ref{sec:Discussion}. We summarize our results and conclusions in Section~\ref{sec:Summary and Conclusion}. \section{Triaxial Schwarzschild code \texttt{SMART}} \label{sec:Triaxial Schwarzschild code SMART} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Figure1.pdf} \caption{Our implemented triaxial Schwarzschild modeling code \texttt{SMART} follows the classical Schwarzschild modeling routine (red panels) but is unique in calculating the potential by expansion into spherical harmonics, setting up an adaptive orbit library and computing the orbit superposition by maximizing an entropy-like quantity (green panels). This results in specific advantages (blue panels), e.g. that our code is able to deal with realistic changes in the gravitational potential, such as when the SMBH causes a more spherical potential in the center. Our orbit library is adaptive and responds to changes in the integrals of motion. \texttt{SMART} can process density output from any deprojection routine, e.g. a non-parametric deprojection dealing with degeneracies.} \label{fig:Figure1} \end{figure*} \verb'SMART' (``Structure and MAss Recovery of Triaxial galaxies'') is a fully 3D orbit superposition code based on the axisymmetric code of \citet{Thomas04} and its original extension to three dimensions and non-axisymmetric densities by \citet{Finozzi18}. It is written in FORTRAN 90/95 \citep{Brainerd96}. \texttt{SMART} follows the classical Schwarzschild method consisting of the computation of the potential and forces for a given density, the setting up of an orbit library and the subsequent superposition of the orbits. \\ In the following sections we explain in more detail how \texttt{SMART} creates self-consistent density-, potential- and orbit-configurations and how we weight the orbits in order to fit the input density and velocity structure. One main feature of our code is that it uses a five-dimensional starting space for the orbit library to adapt to potentials with a radially varying structure of the integrals-of-motion space.
Fig.~\ref{fig:Figure1} gives a schematic overview of the code's main modules and the benefits of their specific implementation. \subsection{Coordinate systems and binning} \label{sec:Coordinate systems and binning} We use two different coordinate systems to describe the intrinsic and projected properties of a galaxy. To transform between the intrinsic coordinates ($x$, $y$, $z$), adapted to the symmetry of the object, and the coordinates ($x^{\prime}$, $y^{\prime}$, $z^{\prime}$) adapted to the sky projection, two matrices $P$ and $R$ are used: \begin{equation} \left(\begin{array}{l}{x^{\prime}} \\ {y^{\prime}} \\ {z^{\prime}}\end{array}\right)=R \cdot P \cdot\left(\begin{array}{l}{x} \\ {y} \\ {z}\end{array}\right), \end{equation} with \begin{equation} \label{eq:R matrix} R =\left(\begin{array}{ccc}{\sin \psi} & {-\cos \psi} & {0} \\ {\cos \psi} & {\sin \psi} & {0} \\ {0} & {0} & {1}\end{array}\right) \end{equation} and \begin{equation} P =\left(\begin{array}{ccc}{-\sin \varphi} & {\cos \varphi} & {0} \\ {-\cos \vartheta \cos \varphi} & {-\cos \vartheta \sin \varphi} & {\sin \vartheta} \\ {\sin \vartheta \cos \varphi} & {\sin \vartheta \sin \varphi} & {\cos \vartheta}\end{array}\right). \end{equation} $P$ and its corresponding viewing angles $\vartheta$ and $\varphi$ project to the plane of the sky, with $z^{\prime}$ being the line of sight. $R$ and its corresponding rotation angle $\psi$ rotate the coordinates $x^{\prime}$ and $y^{\prime}$ in the plane of the sky around $z^{\prime}$. If not stated otherwise, the intrinsic long axis is hereafter assumed to coincide with $x$, the intrinsic intermediate axis with $y$ and the intrinsic short axis with $z$. \texttt{SMART} works with a cell structure based on spherical coordinates. Intrinsic properties like the stellar or dark matter distribution or individual orbital properties are integrated over small cells in configuration and/or velocity space. We use a linear sampling for the elevation $\theta \in [-90^{\circ},90^{\circ}]$ and the azimuth $\phi \in [0^{\circ},360^{\circ}]$. Radial bins are spaced in even intervals of the radial binning index \begin{equation} \label{eq:radbin_cval_celz} i_r=\frac{1}{a} \log \left(c+\frac{a}{b} r\right). \end{equation} The constant $c$ allows one to adapt the central binning scheme from logarithmic ($c=0$) to linear ($c=1$). The constants $a$ and $b$ are determined once the radial extent of the library $r_\mathrm{min} \le r \le r_\mathrm{max}$ and the number of radial bins $N_r$ are set; they are chosen so that the minimum radius $r_\mathrm{min}$ lies within the first radial bin and the maximum radius $r_\mathrm{max}$ in the last one \citep[see also ][]{Siopis09}. \\ Similar to the spatial properties, the line-of-sight velocity distributions (LOSVDs) are integrated over small cells in phase space given by the spatial pattern of the observations (e.g. Voronoi bins; \citealt{Cappellari03}) and the velocity resolution of the LOSVD data. Like its axisymmetric predecessor \citep{Thomas04}, the code uses the entire information contained in the full LOSVDs. See Sections~\ref{sec:Spatial binning} and~\ref{sec:Kinematic Data and velocity binning} for more details.
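As an illustration of these conventions (\texttt{SMART} itself is written in FORTRAN; Python is used here only for compactness), the combined projection $R \cdot P$ and the radial binning index of Eq.~(\ref{eq:radbin_cval_celz}) can be sketched as follows:
\begin{verbatim}
import numpy as np

def projection(theta, phi, psi):
    """Combined matrix R.P mapping intrinsic (x, y, z) to sky
    coordinates (x', y', z') for viewing angles (theta, phi)
    and sky rotation psi, all in radians."""
    st, ct = np.sin(theta), np.cos(theta)
    sp, cp = np.sin(phi), np.cos(phi)
    P = np.array([[-sp,       cp,       0.0],
                  [-ct * cp, -ct * sp,  st ],
                  [ st * cp,  st * sp,  ct ]])
    R = np.array([[np.sin(psi), -np.cos(psi), 0.0],
                  [np.cos(psi),  np.sin(psi), 0.0],
                  [0.0,          0.0,         1.0]])
    return R @ P

def radial_bin_index(r, a, b, c):
    """Radial binning index i_r = log(c + (a/b) r) / a;
    c = 0 gives logarithmic, c = 1 linear binning near the center."""
    return np.log(c + (a / b) * r) / a
\end{verbatim}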
\subsection{Density and potential} The total gravitational potential \begin{equation} \Phi=\Phi_{*}+\Phi_{\mathrm{DM}}+\Phi_{\mathrm{SMBH}} \end{equation} is composed as the sum of a Keplerian contribution from a supermassive black hole ($\Phi_{\mathrm{SMBH}}$) and the contributions from the stars ($\Phi_{\mathrm{*}}$) and dark matter ($\Phi_{\mathrm{DM}}$). \texttt{SMART} allows the use of non-parametric densities for both the stars and the dark matter. The stellar density is generally assumed to be provided in 3D tabulated form (as for example returned from a non-parametric deprojection, e.g. \citealt{deNicola20}). The same holds for the dark matter halo. However, the code can also run with quasi-parametric deprojections (e.g. Multi Gaussian Expansion or MGE models, \citealt{Monnet92,Emsellem94,Cappellari02}). It can also run with parametric dark matter halos (e.g. NFW profiles; \citealt{Navarro96}). \\ The solution to the Poisson equation is obtained with the help of an expansion in spherical harmonics \citep{Binney08}. For this, the stellar and dark matter densities are individually interpolated by first performing a bi-linear interpolation among the elevation and azimuthal angle bins and afterwards a linear interpolation in the logarithm of the radial bins. For integrating these interpolated densities we use a 10-point Gaussian quadrature algorithm from \citet{Press07}. The advantage of calculating the potential by expansion into spherical harmonics, in comparison to other techniques, e.g. the MGE method, is its ability to deal with non-parametric densities and halos. \subsection{Orbit library} Every orbit in a gravitational potential is uniquely defined by its integrals of motion (e.g.~\citet{Binney08}). The number of (isolating) integrals of motion depends on the given potential. Furthermore, every integral of motion reduces the dimensionality of the trajectories of the stars in the galaxy. Regular potentials admit in general three integrals of motion, one of which is the energy $E$. In the axisymmetric case another classical integral is known explicitly: the z-component of the angular momentum, $L_z$. The third integral, $I_3$, is usually not known explicitly. In the general triaxial case, only the energy is given explicitly. Near the central SMBH, however, the potential becomes more and more Keplerian, and the number of isolating integrals of motion in a Keplerian potential is five. An example of a system in an almost Keplerian potential that is described by a 5-integral distribution function is the asymmetric disc in the center of M31 (\citealt{Bender05,Brown13}). Since our code is not restricted to axisymmetric or triaxial symmetries, we aim for a 5D starting space for the stellar orbits, which we gain by systematically sampling $E$, $L_z$, $v_r$, $r$ and $\phi$. The details of the initial conditions sampling technique are described in Section~\ref{sec:Orbital Representation of the Phase Space}. \\ In total, \texttt{SMART} sets up and integrates $\sim50000$ orbits for 100 surface-of-section (SOS) crossings, i.e. for 100 crossings of the equatorial plane in the upward direction (for a more detailed description of SOS see App.~\ref{sec:Orbital Representation of the Phase Space}). This number of orbits was intentionally chosen to be higher than in the axisymmetric predecessor code of \citet{Thomas04} to address all necessary complexities given by, e.g., a radially changing structure in the integrals-of-motion space.
Moreover, this number of orbits proves to be sufficient to directly recover the (phase-space) density without any dithering of orbits in the starting space. \\ If the potential at hand is, e.g., axisymmetric, then all orbits conserve $L_z$ and precess around the rotation axis. In this case, the orbits will fill the $\phi$ dimension automatically. Likewise, orbits will be represented by invariant curves in the ($r$,$v_\theta$)-plane, due to the conservation of $I_3$. Hence, when sequentially sampling the orbital launch conditions, the dimension of the submanifold containing all the orbital initial conditions that are not yet represented shrinks automatically, according to the number of integrals of motion provided by the gravitational potential under study. \\ Since, in general, triaxial potentials have three integrals of motion, a 2D starting space at a given energy (\citealt{deZeeuw85,Schwarzschild93}) would provide a sufficient orbit sampling: one could sample initial conditions from the $(x,z)$-plane, producing mainly tube orbits, and compensate this by launching additional box orbits from the equipotential surfaces \citep{vandenBosch08}. However, it is not clear whether the distribution function of realistic triaxial galaxies requires a 5D starting space near the SMBH in the center. With our choice of a 5D starting space we guarantee that our set of orbits adapts to the actual complexity of the integrals-of-motion space. In a realistic triaxial galaxy, like the studied simulation, this structure changes from a more spherical center (requiring at least four integrals of motion) into nearly prolate outskirts. Furthermore, it allows us to model systems like eccentric discs with distribution functions that obviously depend on more than three integrals of motion. In Fig.~\ref{fig:Figure23} we show that our implemented orbit sampling and integration routine (cf. Section~\ref{sec:Orbit integration and classification}) yields a homogeneous and dense coverage of phase space. \subsubsection{Orbit integration and classification} \label{sec:Orbit integration and classification} \texttt{SMART} integrates the orbital equations of motion $ \frac{d\vec{v}_{i}}{dt} = - \vec{\nabla} \Phi(\vec{x}_i)$, where $i$ denotes the orbit index, in cartesian coordinates by means of the Cash-Karp algorithm \citep{Cash90}. This 5th-order Runge-Kutta method is implemented using an adaptive integration step-size (see \citealt{Press07}). The default integration time of the individual orbits corresponds to 100 SOS-crossings. \\ At each integrated time-step the contribution of orbit $i$ to the luminosity, internal velocity moments and projected LOSVDs is calculated as the fraction of time the orbit spends in the corresponding bins. Projected quantities are convolved with the relevant PSF (point spread function) at every time step and before binning. The PSF can either be provided as a parametrised two-dimensional Gaussian or in terms of a PSF image. The convolution is performed via a Monte Carlo method by randomly perturbing the coordinates $x^{\prime}(t),y^{\prime}(t)$ (cf. Section~\ref{sec:Coordinate systems and binning}) according to the respective PSF. \\ Modeling a galaxy with \texttt{SMART} does not require an exact orbit classification analysis. However, we built in an approximate classification method. For this, \texttt{SMART} checks the sign conservation of the angular momentum in the x-, y- and z-direction at every SOS-crossing event.
\subsection{Orbit superposition} \label{sec:Orbit superposition} The orbital weights $w_i$, which are decisive for the consistency between the observed and modeled luminosity as well as for the projected velocity profiles, are iteratively changed until the difference $\chi^2$ between the observed LOSVDs $\cal{L}_\mathrm{data}$ and modeled LOSVDs $\cal{L}_\mathrm{mod}$ is minimal: \begin{equation} \chi^{2}= \sum_{j^{\prime}}^{N_{\text {losvd }}} \sum_{k}^{N_{\text {vlos }}}\left(\frac{{\cal{L}_\mathrm{data}}^{j^{\prime} k}-{\cal{L}_\mathrm{mod}}^{j^{\prime} k}}{\Delta {\cal{L}_\mathrm{data}}^{j^{\prime} k}}\right)^{2}. \label{chi squared} \end{equation} Here, $j^{\prime}$ denotes the spatial bin index of the $N_{\text {losvd }}$ data cells and $k$ the velocity bin index of the $N_{\text {vlos }}$ velocity bins. $\Delta {\cal{L}_\mathrm{data}}^{j^{\prime} k}$ is the error of the data in the specific bin. An advantage of \texttt{SMART} is that it uses the full information contained in the LOSVDs rather than the Gauss-Hermite parameters alone (cf., e.g., \citealt{Mehrgan19} for a discussion of the benefits of using non-parametric LOSVDs in measuring galaxy masses). The luminosity density serves as a boundary condition for the choice of the orbital weights. \\ The problem of solving for the weights $w_i$ is usually underdetermined because the number of orbits is much larger than the number of data points. We therefore regularize our models by maximizing an entropy-like quantity \begin{equation} \label{eq:costfunc} \hat{S} \equiv S - \alpha \, \chi^2, \end{equation} where \begin{equation} \label{eq:smoothing} S = - \sum_i w_i \ln \left( \frac{w_i}{\omega_i} \right). \end{equation} In the absence of any other constraints, the entropy maximisation yields $w_i \propto \omega_i$ (cf. Section~\ref{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs}). Thus, the $\omega_i$ are bias factors for the orbital weights $w_i$ and can be used to smooth the orbit model. Moreover, they can be used to construct orbit models with specific properties, e.g. orbit models dominated by certain families of orbits (cf. Section~\ref{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs} for examples). The particular form of the entropy in Equation~\ref{eq:smoothing} guarantees the positivity of the orbital weights. The maximum-entropy technique is flexible, however: other choices for the entropy allow for negative weights as well \citep{Richstone88}. Technically, for each regularization value $\alpha$, \texttt{SMART} maximises $\hat{S}$ by computing the relevant Lagrange multipliers. The iterative adjustment of the $w_i$'s is performed using Newton's method. The implemented method is based on \citet{Richstone88}.
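In compact form, the regularized objective of Equations~\ref{eq:costfunc} and \ref{eq:smoothing} reads as in the following Python sketch; this is a simplified stand-in that omits the luminosity-density boundary conditions and the Lagrange-multiplier machinery of the actual implementation:
\begin{verbatim}
import numpy as np

def shat(w, omega, alpha, L_orb, L_data, dL_data):
    """Entropy-regularized objective S - alpha*chi^2 (illustrative).
    L_orb: (N_data, N_orbit) orbit LOSVD matrix; L_data, dL_data:
    flattened data LOSVDs and their errors; w, omega: orbital
    weights and bias factors."""
    S = -np.sum(w * np.log(w / omega))
    resid = (L_data - L_orb @ w) / dL_data
    return S - alpha * np.sum(resid**2)
\end{verbatim}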
A detailed description of the algorithm, as well as tests demonstrating the high accuracy of \texttt{SMART}, can be found in Appendix~\ref{sec:Optimization Algorithm and Testing}. As we will describe in full detail in Section~\ref{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs}, the entropy term in Equation~\ref{eq:costfunc} makes the solution for the orbital weights unique. While this might be advantageous from an algorithmic point of view, it comes in principle with the danger of a bias. As we will show, by varying the orbital bias factors $\omega_i$ the maximum-entropy technique in principle allows the reconstruction of any of the potentially degenerate solutions for the orbital weights (cf. Section~\ref{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs}). However, the results of the following Sections imply that when fitting all the information contained in the full LOSVDs, the remaining degeneracies in the weight reconstruction have little impact on the 'macroscopic' galaxy parameters of interest, like the anisotropy. One natural choice for the orbital bias factors is $\omega_i = V_i$, where $V_i$ is the phase-space volume represented by orbit $i$ (cf. e.g. \citealt{Richstone88,Thomas04}). With this choice, \begin{equation} S= - \sum_{i} w_{i} \ln \left(\frac{w_{i}}{V_{i}}\right) = - \int f \ln (f) d^{3} r d^{3} v \end{equation} equals the Boltzmann entropy. Since the Boltzmann entropy increases during dissipationless evolutionary processes due to phase mixing and violent relaxation, galaxy models with a large Boltzmann entropy are more likely than those with a small one (\citealt{Richstone88,Thomas07}). However, in collisionless self-gravitating systems every entropy-like functional of the phase-space distribution function is expected to increase, such as the generalized H-function \citep{Tremaine86}, the entropy of the ideal gas \citep{White87} or the Tsallis entropy \citep{Tsallis88}. The choice of $\omega_i$ is thus arbitrary to some degree. This, and the fact that in the case of a triaxial potential it is computationally expensive to calculate the correct phase-space volume $V_i$ for every orbit, motivated us to set \begin{equation} \omega_{i}= \mathrm{const.} = 1. \end{equation} This functional form was also tested by \citet{deLorenzi07} in the slightly different context of a made-to-measure (M2M; \citealt{Syer96,Bissantz04}) algorithm for $N$-body particle models. Compared to the Boltzmann entropy, $\omega_i = \mathrm{const.}$ leads to a relative preference for orbits with small $V_i$, while orbits with large $V_i$ are relatively suppressed. With this choice of constant orbital bias factors $\omega_i$, the entropy (cf. Equation~\ref{eq:smoothing}) resembles the Shannon entropy and yields the least ``informed'' set of orbital weights. \subsection{Mass Optimisation with \texttt{SMART}} \texttt{SMART} is designed to determine the viewing angles and the mass components, such as the dark matter halo, the stellar mass-to-light ratio and the black hole mass, by searching for the model with the smallest $\chi^2$. To deal with this multi-dimensional parameter space, \texttt{SMART} uses NOMAD (Nonlinear Optimisation by Mesh Adaptive Direct search), a software package optimised for time-consuming constrained black-box optimisations (\citealt{Audet06,LeDigabel11}). NOMAD is able to optimise a noisy function with unknown derivatives and to converge to the best-fitting model using a direct-search scheme.
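The interface between the model and the optimiser is thus a black box mapping mass parameters to $\chi^2$. The sketch below illustrates this with a dummy model-evaluation function (hypothetical; a real evaluation builds and fits a full orbit library) and a brute-force grid scan standing in for NOMAD's mesh-adaptive direct search:
\begin{verbatim}
import itertools

def chi2_model(m_bh, upsilon, s_dm):
    # Dummy stand-in for one full SMART model evaluation (hypothetical):
    # a smooth bowl centred on fiducial "true" parameters.
    return ((m_bh - 1.7e10) / 1.7e9)**2 + (upsilon - 1.0)**2 \
        + (s_dm - 1.0)**2

def direct_search(mbh_grid, ups_grid, sdm_grid):
    # Exhaustive scan as a simple stand-in for the direct-search scheme.
    return min(itertools.product(mbh_grid, ups_grid, sdm_grid),
               key=lambda p: chi2_model(*p))
\end{verbatim}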
\texttt{SMART} runs on multiple computer cores. The orbit processing (including the setup of the initial conditions, orbit integration, orbit classification and computation of the internal and projected velocity distributions) as well as the relevant linear-algebra operations applied for their superposition are parallelised. \section{\texorpdfstring{The $N$-body simulation}{The N-body simulation}} \label{sec:The N-body simulation} In order to test \texttt{SMART} on a realistic mock galaxy, we use a high-resolution collisionless numerical merger simulation by \citet{Rantala18}. The simulation is a single-generation binary merger of two equal-mass elliptical galaxies, each with an effective radius of $7\mathrm{\,kpc}$ and hosting a supermassive black hole of $8.5 \times 10^{9} M_{\odot}$; it corresponds to the so-called $\gamma$-1.5-BH-6 simulation in \citet{Rantala18}. The two initial galaxies are set up using a spherically symmetric Dehnen density-potential pair \citep{Dehnen93} with an initial inner stellar density slope of $\rho \propto r^{-3 / 2}$ for both progenitor galaxies. The merger results in a triaxial remnant galaxy with a supermassive black hole of $1.7 \times 10^{10} M_{\odot}$, a sphere of influence\footnote{We here use the definition of the sphere of influence as the radius within which the total stellar mass equals the black hole mass, i.e., $M_{*}(r_{\mathrm{SOI}})=M_{BH}.$} of $r_{\mathrm{SOI}} \sim1 \mathrm{\,kpc}$ and an effective radius of $r_e\sim14\mathrm{\,kpc}$. With that, the remnant resembles NGC 1600, a galaxy showing a very large core with a tangentially biased central stellar orbit distribution \citep{Thomas16}. The simulation is based on the hybrid tree-$N$-body code KETJU (\citealt{Rantala17, Karl15}), which is able to accurately compute the dynamics close to the black hole thanks to the algorithmic chain regularization method AR-CHAIN (\citealt{Mikkola06,Mikkola08}). The computation of the global galactic dynamics is based on the tree code GADGET-3 \citep{Springel05}. We analyse a snapshot of the simulation $\sim$1.4 Gyr after the galaxy centers have merged, such that the remnant can be assumed to be in a steady state. At this stage, the actual distance between the two merging black holes in the simulation is $5\mathrm{\,pc}$. The merger remnant shows a radially varying triaxiality parameter: it is increasingly round towards the centre, more oblate at small radii, reaches its maximum triaxiality of $T = 0.5$ at about $3\mathrm{\,kpc}$, and becomes more prolate in the outskirts. The simulation contains $8.3 \times 10^6$ stellar particles with masses of $10^{5} M_{\odot}$ each, leading to a total stellar mass of $8.3 \times 10^{11}M_{\odot}$. The mass ratio between one supermassive black hole and one stellar particle is $M_{BH}/M_*=8.5 \times 10^4$ and is therefore sufficiently large to investigate a realistic interaction of the SMBH binary with the stars~\citep{Mikkola92}. The number of dark matter particles is $2 \times 10^7$, with masses of $7.5 \times 10^{6} M_{\odot}$ each, leading to a total dark matter mass of $1.5 \times 10^{14} M_{\odot}$.\\ The simulation is particularly suitable to test \texttt{SMART} because of its (1) very high resolution (including properly resolved black-hole dynamics); (2) realistic orbital structure and shape; and (3) realistic mass composition (black hole, stars, and dark matter) with a realistic stellar density core (e.g. \citealt{Thomas09,Rantala19}).
Finally, the number of stellar particles is large enough to measure fully resolved LOSVDs from the central sphere of influence of the black holes out to dark-matter-dominated regions. \subsection{Processing the simulation data} \subsubsection{Orientation of the Simulation} \label{sec:Orientation of the Simulation} We aim to orient our intrinsic coordinate system as closely as possible to the intrinsic symmetry axes of the merger remnant. However, the stellar and dark matter principal axes of the simulated remnant are not aligned. Hence, the orientation of the main axes depends on the radius and on the mass component for which the reduced inertia tensor (see e.g. \citealt{Bailin05}) is calculated. Such an offset between the stellar and dark matter halo axes is not unexpected for collisionless merger simulations (see e.g. \citealt{Novak06}). We decided to center the remnant on the stars and black holes and afterwards orient it using the reduced inertia tensor for stars and dark matter within $30\mathrm{\,kpc}$. With this, the stellar elliptical isophotes for the three different projections are well aligned with the projected principal axes within the field of view of $15\mathrm{\,kpc} \times 15\mathrm{\,kpc}$. There is a negligible residual misalignment, which is strongest for the major-axis projection but nowhere larger than $\sim5^{\circ}$ (see Fig.~\ref{fig:Figure3} and~\ref{fig:Figure16}). \subsubsection{Density} Due to the good alignment, and taking advantage of the nearly triaxial intrinsic symmetry of the merger remnant, we increase the effective resolution of the simulation for computing the density by a factor of 8 by folding all stellar and dark matter particles into one octant. The stars are then binned into concentric radial shells with 1000 stars in each shell, and the dark matter particles are binned into radial shells with 5000 dark matter particles each. The individual shells are subdivided into angular bins, such that the elevation angle $\theta \in [0^\circ, 90^\circ]$ increases in constant $\sin(\theta)$-steps of 0.1 and the azimuthal angle $\phi \in [0^\circ, 90^\circ]$ increases in constant $\phi$-steps of $10^\circ$. \\ Within $r<0.28\mathrm{\,kpc}$ for the stellar and $r<10\mathrm{\,kpc}$ for the dark matter particles, the resolution is too low to extract a smooth density. Here, we extrapolate the logarithmic densities from the outer parts by a first-order polynomial fit in the logarithm of the radius. For the stellar densities we use the slope and intercept averaged over all angular bins. The dark matter density is extrapolated using the slopes and intercepts of the individual angular bins. We ensure that the total enclosed mass is well reproduced by the extrapolated density. To smooth the radial density profiles, SciPy's \texttt{gaussian\_filter()} function \citep{Virtanen19} is used.
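A minimal sketch of this inward extrapolation for a single angular bin (illustrative function names; the averaging of the fitted slopes and intercepts over angular bins for the stellar density is omitted):
\begin{verbatim}
import numpy as np

def extrapolate_inner_density(r, rho, r_fit_min, r_fit_max):
    """First-order polynomial fit of log(rho) against log(r) in the
    well-resolved range, returned as a callable for small radii."""
    sel = (r >= r_fit_min) & (r <= r_fit_max)
    slope, intercept = np.polyfit(np.log(r[sel]), np.log(rho[sel]), 1)
    return lambda r_new: np.exp(intercept + slope * np.log(r_new))
\end{verbatim}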
\subsubsection{Spatial binning} \label{sec:Spatial binning} For spatially binning the kinematic input data we use the Voronoi tessellation method of \citet{Cappellari03}. For each tested projection (see Section~\ref{sec:Applying SMART to the simulation}) we construct a separate set of Voronoi bins. To end up with a roughly constant number of stellar simulation particles $N_*$ in each bin, we define the signal-to-noise ratio as \begin{align} \frac{\mathrm{signal}}{\mathrm{noise}}=\sqrt{\mathrm{signal}}=\sqrt{N_{*}}= \begin{cases} 70 \textrm{ for } r<r_{\mathrm{SOI}} ,\\ 150 \textrm{ for } r_{\mathrm{SOI}}<r<15\mathrm{\,kpc} . \end{cases} \end{align} Averaged over the five different projections, this results in a total number of $N_{\text {losvd }}=N_{\text {voronoi }}=227$ Voronoi bins within the whole field of view and 54 Voronoi bins within $r_\mathrm{SOI}$. \\ This resolution is chosen to conform with realistic observational data: it does not exceed that of high-resolution wide-field spectroscopic observations with, e.g., MUSE (cf. \citealt{Mehrgan19}), but still proves to be sufficiently high for this analysis. \subsubsection{Kinematic data and velocity binning} \label{sec:Kinematic Data and velocity binning} For each spatial bin we calculate the line-of-sight velocity distributions for $N_{\text {vlos }}=45$ equally sized velocity bins with $v^{\mathrm{max}}_{\mathrm{min}} = \pm 1600 \mathrm{\,km\,s^{-1}}$, chosen to cover about 10 times the velocity dispersion. This results in a velocity resolution of $\Delta v_{\mathrm{vlos}} = 71.11 \mathrm{\,km\,s^{-1}}$. \\ It is not obvious which ``error'' should be assigned to the kinematic input data of the simulation, since the simulation has no measurement error in the usual sense of scatter between repeated measurements. However, the $\chi^2$-minimisation formally requires information about an ``error''. We tested several assumed error bars: \begin{compactenum} \item[-] the difference between two kinematic datasets, each determined using half of the simulation particles, \item[-] the Poisson noise, and \item[-] a constant absolute error for each LOSVD, set to 10 percent of that LOSVD's maximum. \end{compactenum} In order to prevent an underestimation of the relative error for the major-axis projection, which has more particles along the line of sight than the other projections, the constant absolute error proves to be the most suitable method and is used in the present analysis. Setting the error value to 10\% of the maximum value of each LOSVD corresponds to a velocity uncertainty of $\Delta v = 13\mathrm{\,km\,s^{-1}}$, a dispersion error of $\Delta \sigma = 13 \mathrm{\,km\,s^{-1}}$ (or 5\% relative error) and $\Delta h_n = 0.03$ for the higher-order Gauss-Hermite moments. This is a reasonable choice both in terms of real observational errors and in terms of the scatter in the kinematic maps of the merger simulation (see Fig.~\ref{fig:Figure3} and \ref{fig:Figure19}).
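The velocity binning and the adopted constant error per LOSVD can be summarised in a few lines of Python (illustrative; the helper names are hypothetical):
\begin{verbatim}
import numpy as np

N_VLOS, V_MAX = 45, 1600.0  # km/s; dv = 2*V_MAX/N_VLOS = 71.11 km/s

def bin_losvd(v_los, weights=None):
    """Histogram the line-of-sight velocities of the particles in one
    Voronoi bin into the 45 equal-width velocity bins."""
    edges = np.linspace(-V_MAX, V_MAX, N_VLOS + 1)
    losvd, _ = np.histogram(v_los, bins=edges, weights=weights)
    return losvd / losvd.sum()

def losvd_error(losvd, frac=0.10):
    # Constant absolute "error": 10% of the maximum of this LOSVD,
    # assigned to every velocity bin.
    return np.full_like(losvd, frac * losvd.max(), dtype=float)
\end{verbatim}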
\subsection{Applying \texttt{SMART} to the simulation} \label{sec:Applying SMART to the simulation} For the purpose of testing our code, \texttt{SMART} is provided with the correct viewing angles and with the 3D normalized stellar (i.e. luminosity) density $\rho_*$ from the simulation as well as with the normalized 3D dark matter density $\rho_{\mathrm{DM}}$. The DM scaling parameter $s_{\mathrm{DM}}$ is to be determined by \texttt{SMART}. We skip any surface-brightness deprojection, since degeneracies in the deprojection (see e.g. \citealt{deNicola20}) would only hamper a clean evaluation of our code. We parameterise the density as \begin{equation} \label{eq:densmodel} \rho = M_{BH} \times \delta(r) + \Upsilon \cdot \rho_{*} + s_{\mathrm{DM}} \cdot \rho_{\mathrm{DM}}, \end{equation} where our fit parameters are the black hole mass $M_{BH}$, the stellar mass-to-light ratio $\Upsilon$ and the multiplication factor $s_\mathrm{DM}$ setting the amplitude of the dark matter density profile favored by \texttt{SMART}. We determine these parameters by finding the minimum in $\chi^2$. \\ We model and analyse five different projections: (1) $\vartheta=90^{\circ}$, $\varphi=0^{\circ}$, i.e. the major-axis projection; (2) $\vartheta=90^{\circ}$, $\varphi=90^{\circ}$, i.e. the intermediate-axis projection; (3) $\vartheta=0^{\circ}$, $\varphi=90^{\circ}$, i.e. the minor-axis projection; (4) $\vartheta=90^{\circ}$, $\varphi=10^{\circ}$, i.e. a projection 10$^{\circ}$ off the major axis in the azimuthal direction; and (5) $\vartheta=90^{\circ}$, $\varphi=45^{\circ}$, i.e. a projection in between the major- and intermediate-axis projections. Without loss of generality, $\psi$ was set to $90^{\circ}$ for all viewing directions, i.e. $R$ (see eq.~\ref{eq:R matrix}) equals the unit matrix.\\ The field of view is chosen to be $15 \mathrm{\,kpc} \times 15 \mathrm{\,kpc}$. The minimum sampled starting radius is set to $r_{\mathrm{min}}=0.05\mathrm{\,kpc}$ and the maximum sampled starting radius to $r_{\mathrm{max}}=80\mathrm{\,kpc}$. We find optimal results for a central binning with $c=0.5$ in Equation~\ref{eq:radbin_cval_celz}: for the simulated merger remnant this guarantees that the difference in circular velocity across one radial bin equals the model's velocity resolution $\Delta v_{\mathrm{vlos}} = 71.11 \mathrm{\,km\,s^{-1}}$ already at a radius of $r=0.16\mathrm{\,kpc}=0.16\cdot r_\mathrm{SOI}$. For $c=1$ this would only be reached at $r=0.38\mathrm{\,kpc}=0.38 \cdot r_\mathrm{SOI}$, degrading the black hole mass recovery by $\sim10\%$.\\ For each tested projection we model two halves of the LOS kinematic data: after projection onto the plane of the sky (see Section~\ref{sec:Coordinate systems and binning}), we separately model the half of the kinematic data with positive $x'$-coordinates (hereafter called the 'right half' of the galaxy) and the half with negative $x'$-coordinates (hereafter the 'left half' of the galaxy). \section{Results} \label{sec:Results} \subsection{Choice of Regularization} \label{sec:Choice of Regularization} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure2.pdf} \caption{Choice of regularization and 1-dimensional mass recovery results for the five different projections (different columns). The first row shows the $rms_{\sigma}$- (thick line) and $rms_{v,\sigma}$-profile (thin line) for the modeled right half of the galaxy (black) and left half of the galaxy (red) with the correct black hole mass, stellar mass-to-light ratio and dark matter scale factor as input. All values are plotted against the increasing regularization value $\alpha$ in logarithmic units. The x-axis ticks mark the tested $\alpha$-values. The minima $\mathrm{min}(rms_{\sigma})$ (thick line) and $\mathrm{min}(rms_{v,\sigma})$ (thin line) are marked as vertical lines and suggest suitable regularization values. The thin black vertical line in the major-axis panel overlaps with the thick black vertical line, and the thin red vertical line in the $\vartheta=90^{\circ}$, $\varphi=10^{\circ}$ panel overlaps with the thick red vertical line. The second row shows the corresponding $\chi^2/N_{\mathrm{data}}$ values. The third and fourth rows show the 1-dimensional mass recovery results for the stellar mass-to-light ratio $\Upsilon/\Upsilon_{\mathrm{sim}}$ and black hole mass $M_{BH}/M_{BH,\mathrm{sim}}$, normalized by the correct values of the simulation. The y-axis ticks for the third and fourth rows mark the concrete masses which were tested and used as input values for the models.
The black dotted line marks unity, which is achieved when the model recovers the mass correctly. The stellar mass-to-light ratio and black hole mass were recovered with an accuracy better than 5\%. Such an intrinsic precision under similar conditions has not yet been demonstrated with other Schwarzschild modeling codes.} \label{fig:Figure2} \end{figure*} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure3.png} \caption{Velocity maps of the simulation (top row) and of the model (bottom row) for the major-axis projection, i.e. the projection with the line of sight parallel to the major axis of the simulation, based on the simulation's orientation described in Section~\ref{sec:Orientation of the Simulation} (for the other projections see Fig.~\ref{fig:Figure19}). The different panels show the velocity in $\mathrm{km\,s^{-1}}$ (first column), the velocity dispersion in $\mathrm{km\,s^{-1}}$ (second column), the $h_3$-parameter (third column) and the $h_4$-parameter (fourth column) over the whole field of view. The fifth panel in the top row shows the corresponding surface-brightness map from the simulation in units of the logarithmic number of stellar particles $N_*$. The contour lines correspond to isodensity surfaces. The fifth panel in the bottom row shows $\chi_{\mathrm{losvd}}^2/N_{\mathrm{vlos}}$, i.e. the deviation of the model fit from the kinematic input data. We show the result for the model with the correct stellar mass-to-light ratio, black hole mass and dark matter scale factor as input parameters, evaluated at the most suitable regularization parameter $\alpha(min(rms_\sigma))$. The maps in the second row combine the results of the modeled left and right halves of the galaxy. Therefore, the $\chi^2$-maps show two colorbars for the two halves. } \label{fig:Figure3} \end{figure*} When applying the code to realistic noisy measurements, regularization becomes important to prevent the orbital weights from fitting the noise in the data. The optimal regularization parameter $\alpha$ for a specific observational data set can be determined by running Monte Carlo simulations on kinematic mock data \citep{Thomas05} and is the value providing the minimum deviation of the intrinsic properties (like the distribution function, velocity moments, or mass parameters) from the default model. When fitting noiseless ideal data, one would expect the best results for $\chi^2 \to 0$, i.e. $\alpha \to \infty$ (neglecting recovery degeneracies and assuming an ``error'' can be defined). Even in that case, however, due to residual systematics (like the finite resolution of the orbit library) and due to the intrinsic noise in the $N$-body simulation, we still expect that the best result may not be achieved asymptotically for very large $\alpha$, but already for some finite value of the regularization parameter. To take this into account, we split the code test into two phases: (1) we fit the orbit model with the correct mass parameters and determine the value of $\alpha$ for which the internal structure of the simulation is best recovered; specifically, we use the 2nd-order velocity moments for this comparison. (2) We then also vary the mass parameters and test how well they can be recovered. Our benchmark is the optimised $\alpha$ from the comparison of the moments, but we will discuss the results for all $\alpha$ to demonstrate their robustness.
\\ To quantify the deviation between the model's velocity dispersions $\sigma_r,\sigma_\theta$ and $\sigma_\phi$ and the true ones from the simulation, we define \begin{equation} \label{eq: rms_sigma} rms_\sigma=\frac{1}{3} \sum_{i} rms_{\sigma_{i}}= \frac{1}{3} \sum_{i} \sqrt{\frac{1}{N_{\mathrm{data}}} \sum_{j=1}^{N_{\mathrm{data}}}\left(\frac{\sigma_{i, \mathrm{data}}-\sigma_{i, \mathrm{mod}}}{\sigma_{i,\mathrm{data}}}\right)^{2}}, \end{equation} where the index $i$ runs over the three coordinates $r,\theta, \phi$. \\ The $rms_{\sigma}$-profiles of the five tested projections and their respective modeled halves, as functions of the regularization parameter $\alpha$, are plotted as thick lines in the top row of Fig.~\ref{fig:Figure2}. The x-axis ticks mark all $\alpha$-values tested in this analysis. \\ The second row in Fig.~\ref{fig:Figure2} shows the quality of the fit as a $\chi^2/N_{\text{data}}$-profile (for the definition of $\chi^2$ see Equation~\ref{chi squared}), again plotted against the regularization parameter. Here, $\chi^2$, i.e. the deviation of the model fit from the kinematic input data, is normalized by the number of input data points $N_{\text{data}}$, given by the number of Voronoi bins $N_{\text {voronoi }}$ times the number of velocity bins $N_{\text{vlos}}$. As expected, the fit to the data is poor when $\alpha$ is low (high $\chi^2/N_{\mathrm{data}}$). In this regime, it is essentially the entropy term that is maximised, and the data (entering via $\alpha \cdot \chi^2$, cf. eq.~\ref{eq:costfunc}) have little influence on the fit. Consequently, the anisotropy strongly depends on the $\omega_i$ and, in our case, happens to be a poor representation of the internal moments of the merger (high $rms$ values, see Fig.~\ref{fig:Figure2}, first row). With increasing $\alpha$, both the fit quality and the agreement with the merger structure improve. However, at very high $\alpha$-values further improvements of the fit no longer improve the internal moments, since we are dominated by the noise of the $N$-body simulation. The most suitable choices of regularization can be read off from the minima $\mathrm{min}(rms_{\sigma})$ of the $rms_{\sigma}$-profiles and are marked as thick vertical lines in Fig.~\ref{fig:Figure2}. Their average value is $\alpha(\mathrm{min}(rms_{\sigma}))$=0.41. This value, however, depends on the specific setup of \texttt{SMART} as well as on the input data. Evaluated at the individual most suitable regularization values and then averaged over all ten models (five projections with two halves each), we obtain a minimum value of only $\mathrm{min}(rms_{\sigma})= 0.008$. This extremely good agreement demonstrates that our orbit sampling represents the phase space very well. We also test the influence of rotation on the comparison. Due to the overall small angular momentum, and thus small absolute velocity amplitude, the relative errors in $v$ are sometimes large: the absolute error of the first-order velocity moments, averaged over all angular and radial bins, is only $\Delta v=3.9\mathrm{\,km\,s^{-1}}$, but the maximum velocity over these bins is likewise only $23.4\mathrm{\,km\,s^{-1}}$.
We therefore define $rms_{v,\sigma}$ as the normalized deviation of the first internal moments and the velocity dispersions together, \begin{equation} \begin{aligned} rms_{v,\sigma}=\frac{1}{3} \sum_{i} rms_{v_i,\sigma_{\mathrm{i}}} =\frac{1}{3} \sum_{i} \sqrt{\frac{1}{2N_{\mathrm{data}}}\sum_{j=1}^{N_{\mathrm{data}}} \left( \Delta^2_{v_i}+ \Delta^2_{\sigma_i} \right)}, \\ \mathrm{with} \: \Delta_{v_i}=\frac{v_{i,\mathrm{data}}-v_{i,\mathrm{mod}}}{\sqrt{v^2_{i,\mathrm{data}} + \sigma^2_{i,\mathrm{data}}}} \: \mathrm{and} \: \Delta_{\sigma_i}=\frac{\sigma_{i,\mathrm{data}}-\sigma_{i,\mathrm{mod}}}{\sqrt{v^2_{i,\mathrm{data}} + \sigma^2_{i,\mathrm{data}}}}, \end{aligned} \end{equation} where the index $i$ again runs over the three coordinates $r,\theta, \phi$. \\ The $rms_{v,\sigma}$-profiles are plotted against the regularization parameter as thin lines in the top row of Fig.~\ref{fig:Figure2}. As expected, their minima appear at regularization values similar to those of the $rms_{\sigma}$-profiles. They are marked as thin vertical lines and their average value is $\alpha(\mathrm{min}(rms_{v,\sigma}))$=0.012. Again, this value depends on the specific setup. Evaluated at the individual $\alpha(\mathrm{min}(rms_{v,\sigma}))$-values and then averaged over the five different projections and their respective modeled halves, we obtain a value of only $\mathrm{min}(rms_{v,\sigma})=0.012$. \\ All values in the vicinity of $\alpha(min(rms_{\sigma}))$ and $\alpha(min(rms_{v,\sigma}))$ are good regularization choices. Within this regularization region \texttt{SMART} is able to fit the kinematic input data well for each tested projection (see Fig.~\ref{fig:Figure3} and \ref{fig:Figure19}). Averaged over the five different projections and their respective modeled halves, we obtain mean values and deviations of $\bar{v}=(0.11\pm2.09)\mathrm{\,km\,s^{-1}, } \text{ }\bar{\sigma}=(309.75\pm2.54)\mathrm{\,km\,s^{-1}, } \text{ }\bar{h_3}=0.00\pm0.01\text{ and }\bar{h_4}=0.01\pm0.01$, again evaluated at $\alpha(min(rms_\sigma))$. The $\chi^2$-maps in Fig.~\ref{fig:Figure3} and \ref{fig:Figure19} show that the models for each projection fit the kinematic input data homogeneously well over the field of view, with slightly larger deviations in the center. If not specifically annotated, the results shown in the further analysis are evaluated at $\alpha(min(rms_{\sigma}))$. However, the quality of the models does not strongly depend on the exact regularization value, because a broader range of regularization values around the determined $\alpha$-values is appropriate and results in equally good mass-parameter reproductions within the overall scatter (see also the third and fourth rows in Fig.~\ref{fig:Figure2}, which will be explained in Section~\ref{sec:Mass recovery}). \\
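For reference, the two diagnostics (together with the anisotropy parameter used in the next subsection) translate into a few lines of Python; the array shapes are an assumption of this sketch:
\begin{verbatim}
import numpy as np

def rms_sigma(sig_data, sig_mod):
    """Eq. (rms_sigma); sig_* have shape (3, N_data) for (r, theta, phi)."""
    rel = (sig_data - sig_mod) / sig_data
    return np.sqrt(np.mean(rel**2, axis=1)).mean()

def rms_v_sigma(v_data, v_mod, sig_data, sig_mod):
    """Combined deviation of first moments and dispersions."""
    norm2 = v_data**2 + sig_data**2
    d2 = ((v_data - v_mod)**2 + (sig_data - sig_mod)**2) / norm2
    return np.sqrt(np.mean(d2 / 2, axis=1)).mean()

def beta(sig_r, sig_th, sig_ph):
    # Anisotropy: beta = 1 - (sig_th^2 + sig_ph^2) / (2 sig_r^2).
    return 1 - (sig_th**2 + sig_ph**2) / (2 * sig_r**2)
\end{verbatim}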
\subsection{Reproduction of internal moments and orbit structure} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figure4.pdf} \caption{Reproduction of the internal properties within $r\in[0.25\mathrm{\,kpc}, 15\mathrm{\,kpc}]$. Top panel: the internal velocity dispersions in the radial direction $\sigma_r$ (solid line), elevation direction $\sigma_{\theta}$ (dotted line) and azimuthal direction $\sigma_{\phi}$ (dashed line) from the model (black lines), averaged over the five different projections, accurately follow the true ones from the simulation (blue lines) out to the $15\mathrm{\,kpc}$ field of view. The grey shaded lines mark the deviations between the different projections. Middle panel: the anisotropy parameter $\beta$ is also well reproduced and captures the tangentially anisotropic orbit distribution ($\beta<0$) within the core radius $r_b=r_{\mathrm{SOI}}\sim1\mathrm{\,kpc}$ as well as the radially anisotropic orbit distribution ($\beta>0$) outside $r_b$. Bottom panel: radial distribution of the orbit fractions $f_{\mathrm{orbitclass}}$ classified by \texttt{SMART}. } \label{fig:Figure4} \end{figure} The previously shown small $rms_\sigma$- and $rms_{v,\sigma}$-values already demonstrate the very good recovery of the internal moments by the model when the correct mass parameters and viewing angles are provided. The high level of agreement between model and simulation is further illustrated in Fig.~\ref{fig:Figure4}. The anisotropy parameter $\beta=1-\frac{\sigma_{\theta}^{2}+\sigma_{\phi}^{2}}{2 \sigma_{r}^{2}}$ of the model matches the profile of the simulated one (see Fig.~\ref{fig:Figure4}, middle panel). The model also reproduces the negative $\beta$ within the core radius $r_b$, which equals the black hole sphere of influence \citep{Thomas16}, reflecting the tangential orbit distribution caused by black hole 'core scouring': within the sphere of influence, ETGs at the high-mass end exhibit central regions which are fainter than an extrapolation of a Sérsic function \citep{Sersic63} fitted to the outer surface brightness profile would suggest. The commonly accepted theory for the formation of these 'cores' is a gravitational slingshot process acting on stars on radial orbits, caused by SMBH binaries that arise in galaxy mergers (e.g. \citealt{Begelman80, Hills80, Ebisuzaki91, Milosavljevic01,Merritt06, Rantala18}). \\ The classification of the integrated orbits is also plotted in Fig.~\ref{fig:Figure4} (bottom panel). Orbits in the immediate vicinity of the black hole are spherical and Keplerian orbits, as expected due to the SMBH. Z-tubes are the predominant orbits within $0.3\mathrm{\,kpc}<r<3\mathrm{\,kpc}$, causing the more oblate shape of the galaxy in this range. For $r>3\mathrm{\,kpc}$ the majority of orbits are classified as x-tubes, corresponding to the prolate shape in the outskirts of the simulation as described in Section~\ref{sec:The N-body simulation}. Our orbit classification analysis matches well the one by Frigo et al. (in prep.), which is based on an orbit frequency analysis of the simulation. \subsection{Mass recovery} \label{sec:Mass recovery} So far, we have provided \texttt{SMART} with the correct mass parameters of the simulation: the stellar mass-to-light ratio $\Upsilon_{\mathrm{sim}}$, black hole mass $M_{BH,\mathrm{sim}}$ and dark matter multiplication factor $s_{\mathrm{DM},\mathrm{sim}}$. Even though \texttt{SMART} can be provided with any type of dark matter profile, we here adopt the dark matter density profile shape of the simulation and concentrate on finding the correct mass multiplication scale factor $s_{\mathrm{DM}}$ of this predetermined halo shape (cf. eq.~\ref{eq:densmodel}). The following sections show the one-dimensional mass-recovery results of individually determining the favored stellar mass-to-light ratio $\Upsilon$ or black hole mass $M_{BH}$ (Section~\ref{sec:1-dimensional mass recovery of upsilon and mbh}) as well as the two-dimensional mass-recovery results of simultaneously determining the favored $\Upsilon$ and $M_{BH}$ (Section~\ref{sec:2-dimensional mass recovery of upsilon and mbh}) or $\Upsilon$ and $s_{\mathrm{DM}}$ (Section~\ref{sec:2-dimensional mass recovery of upsilon and mdm}).
We skip any 3-dimensional mass-parameter recovery, since in the context of testing the orbit library this would not provide more information than the combined 2-dimensional recoveries. \subsubsection{1-dimensional mass recovery of $\Upsilon$ and $M_{BH}$} \label{sec:1-dimensional mass recovery of upsilon and mbh} Fig.~\ref{fig:Figure2} shows the 1-dimensional mass-recovery results for $\Upsilon$ (third row) and $M_{BH}$ (fourth row) for the five different projections and their respective modeled halves. For testing the recovery of the black hole mass, we provide the model with the correct $\Upsilon_{\mathrm{sim}}$- and $s_{\mathrm{DM},\mathrm{sim}}$-values and run nine models with different black hole masses within $M_{BH} \in [0.79 M_{BH,\mathrm{sim}}, 1.21 M_{BH,\mathrm{sim}}]$, including the correct one. The tested mass grid is finer close to $M_{BH}/M_{BH,\mathrm{sim}}=1$, and the exact tested values can be read off from the ordinate ticks in Fig.~\ref{fig:Figure2}. \\ For testing the 1-dimensional mass recovery of the stellar mass-to-light ratio we provide $M_{BH,\mathrm{sim}}$ and $s_{\mathrm{DM},\mathrm{sim}}$ and test the same $\Upsilon/\Upsilon_{\mathrm{sim}}$-values as for the black hole mass analysis. \\ Fig.~\ref{fig:Figure2} shows the favored mass parameters, i.e. the mass parameters where $\chi^2/N_{\text{data}}$ is smallest, as a function of $\alpha$. As one can see, for our fiducial choice of $\alpha$ (i.e. $\alpha=\alpha(\mathrm{min}(rms_{\sigma}))$, the best recovery of the velocity dispersions), the \textit{average} mass recovery performs excellently, with $\Delta M_{BH}=5\%$ and $\Delta \Upsilon=2\%$. In fact, above $\log \alpha \gtrsim -3$, the results are very robust, with little dependence on $\alpha$. Within the overall minor scatter, all models, independent of the chosen half of the galaxy or projection, show equally good fits and reproductions of the internal moments and mass parameters. \subsubsection{2-dimensional mass recovery of $\Upsilon$ and $M_{BH}$} \label{sec:2-dimensional mass recovery of upsilon and mbh} For simultaneously recovering $\Upsilon$ and $M_{BH}$ with \texttt{SMART} we sample a two-dimensional grid of input masses for 49 models per projection with $\Upsilon \in [0.79 \Upsilon_{\mathrm{sim}},1.21 \Upsilon_{\mathrm{sim}}]$ and $M_{BH}\in [0.79 M_{BH,\mathrm{sim}},1.21 M_{BH,\mathrm{sim}}]$. We again model all five different projections but fit only the right half of the galaxy, since the 1-dimensional mass recovery showed no significant difference between the respective halves of each projection. The results are plotted in Fig.~\ref{fig:Figure5}. Evaluated at the regularization value $\alpha(min(rms_{\sigma}))$, the stellar mass-to-light ratio is correctly recovered in every case and the black hole mass is reproduced with an accuracy of $6\%$, averaged over the different projections. \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{Figure5.pdf} \caption{2-dimensional mass recovery results of $\Upsilon$ and $M_{BH}$ for the positive halves of the five different projections. For each projection we evaluate 49 models with different $\Upsilon$- and $M_{BH}$-input masses covering a 2-dimensional grid with a step size of 7\% around the correct mass parameters. Each plot contains the $\chi^2/N_{\mathrm{data}}$-colorbar for the individual projection. The favored models are marked with a grey cross.
The model always finds the correct stellar mass-to-light ratio and recovers the black hole mass with a minor average deviation of 6\%.} \label{fig:Figure5} \end{figure*} \subsubsection{2-dimensional mass recovery of $\Upsilon$ and $s_{\mathrm{DM}}$} \label{sec:2-dimensional mass recovery of upsilon and mdm} For recovering $\Upsilon$ and $s_{\mathrm{DM}}$ we sample a two-dimensional grid of input masses for 49 models along the major-axis projection with $\Upsilon \in [0.79 \Upsilon_{\mathrm{sim}},1.21 \Upsilon_{\mathrm{sim}}]$ and $s_{\mathrm{DM}} \in [0.7 s_{\mathrm{DM},\mathrm{sim}},1.3 s_{\mathrm{DM},\mathrm{sim}}]$. We here model the right half of the major-axis projection. The result is shown in Fig.~\ref{fig:Figure6}. Evaluated at $\alpha(min(rms_{\sigma}))$, the dark matter scale factor is slightly overestimated, by 10\%, and the stellar mass-to-light ratio slightly underestimated, by $7\%$, confirming the accurate mass reconstruction of the previous tests. \\ \\ In conclusion, these mass-recovery results demonstrate that our orbit sampling and superposition algorithms allow for a very accurate reconstruction of the mass composition and orbital structure of triaxial systems with known density shapes. \\ \begin{figure} \centering \includegraphics[width=0.7\textwidth]{Figure6.pdf} \caption{2-dimensional mass recovery results of $\Upsilon$ and $s_{\mathrm{DM}}$ for the positive half of the major-axis projection. $s_{\mathrm{DM}}$ is the mass multiplication scale factor of the predetermined halo shape of the simulation. We evaluate 49 models with different $\Upsilon$- and $s_{\mathrm{DM}}$-input masses covering a 2-dimensional grid with a step size of 7\% for the stellar mass-to-light ratio and 10\% for the dark matter scaling factor. The favored model is marked with a grey cross. The model slightly overestimates the dark matter halo scale factor by 10\% and slightly underestimates the stellar mass-to-light ratio by 7\%.} \label{fig:Figure6} \end{figure} \subsection{Beyond 2nd order velocity moments} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{Figure7.pdf} \caption{Internal velocity distributions in the radial (red), elevation (blue) and azimuthal (green) directions, obtained by integrating over the respective spatial bin of the model and over the other velocity components, for three different radii of $r=0.16\mathrm{\,kpc}$ (left panel), $r=0.77\mathrm{\,kpc}$ (middle panel) and $r=12.46\mathrm{\,kpc}$ (right panel). The internal velocity distributions are calculated for 31 velocity bins within the positive and negative escape velocity evaluated at each radial and angular bin of the \texttt{SMART}-specific grid. We show the mass-weighted stellar internal velocity distributions per velocity bin, averaged within spherical shells. The non-shaded lines correspond to the simulation data and the shaded lines show the modeled results. Close to the black hole, the elevation and azimuthal distributions show two maxima, corresponding to the tangentially anisotropic orbit distribution. \texttt{SMART} reproduces the internal velocity distributions in general and also follows this specific behaviour in the central bins.} \label{fig:Figure7} \end{figure*} So far, we have focused on the reproduction of the first- and second-order internal velocity moments. The previous sections have shown that the full shape of the LOSVDs contains enough information to accurately reconstruct the mass and the anisotropy structure of the orbit distribution.
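Constructing an internal velocity distribution amounts to histogramming one velocity component of the particles (or orbital contributions) in a spatial bin within the local escape velocity, as in the following sketch (a hypothetical helper for one spatial bin and one velocity component):
\begin{verbatim}
import numpy as np

def internal_vdist(v, v_esc, n_bins=31, weights=None):
    """Mass-weighted distribution of one velocity component in a
    spatial bin, using 31 bins spanning +/- the local escape
    velocity."""
    edges = np.linspace(-v_esc, v_esc, n_bins + 1)
    h, _ = np.histogram(v, bins=edges, weights=weights)
    return h / h.sum()
\end{verbatim}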
In Fig.~\ref{fig:Figure7} we show the full mass-weighted stellar velocity distributions per velocity bin, plotted against the velocity in $\mathrm{\,km\,s^{-1}}$, in the radial, elevation and azimuthal directions, obtained by integrating over the other velocity components and the respective spatial bins of the model. We find that the central azimuthal and elevation velocity distributions within $r<r_{\mathrm{SOI}}$ have two maxima which become more pronounced closer to the center (see Fig.~\ref{fig:Figure7}). This likely reflects the strong tangential anisotropy produced during the formation of the core and is probably linked to the negative $h_4$ parameter at the center of the merger remnant (see Fig.~\ref{fig:Figure3} and~\ref{fig:Figure19}), which will be investigated in more detail in a separate paper. The full internal velocity distributions contain more information about the formation process than the velocity moments alone. We therefore extended \texttt{SMART} to calculate the internal velocity distributions for 31 velocity bins within the positive and negative escape velocity, i.e., $v^{max}_{min}=\pm v_{esc}(r,\theta,\phi)$, evaluated at each radial and angular bin of the \texttt{SMART}-specific grid (see Section~\ref{sec:Coordinate systems and binning}). The internal velocity distributions averaged within spherical shells reproduce those from the simulation well, with a deviation of $rms=0.07$ averaged over all velocity bins with $v<1000\mathrm{\,km\,s^{-1}}$ and radial bins within $r \in [0.25\mathrm{\,kpc},15\mathrm{\,kpc}]$ (see Fig.~\ref{fig:Figure7}). Including the outer wings of the internal velocity distributions with $v>1000\mathrm{\,km\,s^{-1}}$, the $rms$ increases; however, the number of simulation particles in these bins is very small. We will apply this ability of \texttt{SMART} to model the full internal velocity distributions in future studies of real observational data. \\ We also checked how the $rms$ deviation of the full internal velocity distributions depends on the regularization and compared it with the $rms_\sigma$- and $rms_{v,\sigma}$-profiles. It shows the same shape, and its minimum appears in the same regularization region; it therefore provides no additional information for determining the most suitable regularization value. \\ \\ \subsection{Robustness and Uniqueness checks} In order to test the robustness of the results against modifications of our fiducial setup, and in order to test the uniqueness of the results, we perform the following checks:\\ \subsubsection{PSF convolution and noise} \label{sec:PSF convolution and noise} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figure8.pdf} \caption{Choice of regularization and 1-dimensional mass recovery results for the positive half of a PSF-convolved, noisy version of the intermediate-axis projection. The $rms$- and $\chi^2/N_\mathrm{data}$-profiles show the same shape and suitable regularization region as in the noiseless, non-PSF-convolved case. The $\Upsilon$- and $M_{BH}$-reproduction also shows no remarkable change within the overall scatter.} \label{fig:Figure8} \end{figure} \begin{figure} \centering \includegraphics[width=0.53\textwidth]{Figure9.pdf} \caption{2-dimensional mass recovery results of $\Upsilon$ and $M_{BH}$ for the positive half of a PSF-convolved, noisy version of the intermediate-axis projection.
We again evaluate 49 models with different $\Upsilon$- and $M_{BH}$-input masses covering a 2-dimensional grid with a step size of 7\% around the correct parameters. The favored model is marked with a grey cross. The model finds the correct stellar mass-to-light ratio and slightly underestimates the black hole mass, by $7\%$. Within the overall scatter this resembles the results of the models without noise and PSF convolution. } \label{fig:Figure9} \end{figure} While \citet{Valluri04} describe that three-integral, axisymmetric, orbit-based modeling algorithms in general show a flat-bottomed $\chi^2$ distribution, being unable to determine the black hole mass to better than a factor of $\sim 3.3$, \citet{Magorrian06} demonstrates that this is only true for noiseless data. According to this, our model should not be able to precisely determine the correct black hole mass of the simulation due to the lack of an ``error''. However, the previous results have already shown that \texttt{SMART} achieves a well-defined black hole mass owing to a well-defined minimum in the $\chi^2$-profile, probably aided by the intrinsic noise of the $N$-body simulation. Nevertheless, we check whether the minimum in the $\chi^2$-curve changes when simulating an ``error'' in the kinematic input data. For this, we model the positive half of the intermediate-axis projection after adding Gaussian noise to the simulation, chosen such that the velocity dispersion of the noisy kinematic simulation data has an observationally realistic error of $\sim3\%$ ($\bar{v}=(0.14\pm7.64)\mathrm{\,km\,s^{-1}, } \text{ }\bar{\sigma}=(288.86\pm7.80)\mathrm{\,km\,s^{-1}, } \text{ }\bar{h_3}=0.00\pm0.02\text{, }\bar{h_4}=0.02\pm0.02$). To achieve even more realistic conditions we furthermore smooth the data by simulating a PSF convolution with a FWHM of $2.43\mathrm{\,arcsec}$, which corresponds to $0.24\mathrm{\,kpc}$, i.e. about a quarter of the sphere of influence, when assuming the galaxy to be at a distance of $20\mathrm{\,Mpc}$. The velocity maps constructed in this way are shown in Fig.~\ref{fig:Figure21}. We provide \texttt{SMART} with the FWHM used for the two-dimensional PSF convolution and test the same $\Upsilon$ and $M_{BH}$ input masses as in Section~\ref{sec:Mass recovery}. The corresponding results for the 1-dimensional mass recoveries when modeling this modified input data are plotted as turquoise lines in Fig.~\ref{fig:Figure8} and do not differ decisively from those without PSF convolution and noise. The stellar mass-to-light ratio is slightly underestimated, by $\Delta \Upsilon(\alpha(min(rms_\sigma)))=3.5\%$ or $\Delta \Upsilon(\alpha(min(rms_{v,\sigma})))=7\%$, and the black hole mass is overestimated by $\Delta M_{BH}(\alpha(min(rms_\sigma)))=3.5\%$ or underestimated by $\Delta M_{BH}(\alpha(min(rms_{v,\sigma})))=7\%$. The $rms_{\sigma}$-, $rms_{v,\sigma}$- and $\chi^2/N_{\mathrm{data}}$-profiles are of course shifted upwards but follow the same form as those without noise and PSF convolution (cf. Fig.~\ref{fig:Figure2}). Fig.~\ref{fig:Figure9} shows the 2-dimensional mass recovery of $\Upsilon$ and $M_{BH}$ for this model. The stellar mass-to-light ratio is again correctly reproduced and the black hole mass is underestimated by only 7\%. Thus, within the overall scatter, \texttt{SMART} models the noisy, PSF-convolved kinematics as well as it does the default kinematics without any ``error''.
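The construction of such degraded mock data can be sketched as follows; the Monte Carlo coordinate perturbation mirrors the PSF treatment of Section~\ref{sec:Orbit integration and classification}, while tying the noise amplitude to each LOSVD's maximum is an assumption of this illustration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def perturb_with_psf(x, y, fwhm):
    """Monte-Carlo PSF convolution: perturb the projected particle
    coordinates with a circular Gaussian of the given FWHM before
    spatial binning."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return (x + rng.normal(0.0, sigma, x.shape),
            y + rng.normal(0.0, sigma, y.shape))

def add_noise(losvd, frac):
    # Gaussian noise; frac is tuned until the resulting dispersion
    # error reaches the desired ~3% level.
    return losvd + rng.normal(0.0, frac * losvd.max(), losvd.shape)
\end{verbatim}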
\subsubsection{Changing the orbital bias factors $\omega_i$} \label{sec:Changing the orbital bias factors omegai} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figure10.pdf} \caption{Choice of regularization and 1-dimensional mass recovery results for the positive half of the major-axis projection when using the default constant orbital bias factors $\omega_i=$ const. (black line), when using the best-fit orbital weights $w_i(\boldsymbol{X}^+)$ of the positive half of the major axis as orbital bias factors, i.e., $\omega_i=w_i(\boldsymbol{X}^+)$ (pink line), and when using $w_i(\boldsymbol{X}^-)$ of the negative half of the major axis as orbital bias factors, i.e., $\omega_i=w_i(\boldsymbol{X}^-)$ (blue line). The modified orbital bias factors do not improve the results, indicating that the choice of $\omega_i=$ const. is sufficient. } \label{fig:Figure10} \end{figure} As discussed in Section~\ref{sec:Orbit superposition}, the orbital bias factors $\omega_i$ of Equation~\ref{eq:smoothing} can be used to control the orbital weights $w_i$. In the absence of other constraints, $w_i \propto \omega_i$. Our choice of $\omega_i = 1$ is somewhat arbitrary; in fact, it biases the $w_i$'s strongly away from the true solution. Thus, we want to test whether this affects our fits. Specifically, we test the opposite extreme: we remodel the positive half of the major axis, abbreviated below as $\boldsymbol{X}^+$, by setting $\omega_i=w_i(\boldsymbol{X}^+)$, where $w_i(\boldsymbol{X}^+)$ are the orbital weights of the best-fit model for $\boldsymbol{X}^+$. We also remodel $\boldsymbol{X}^+$ by setting the bias factors $\omega_i=w_i(\boldsymbol{X}^-)$, the orbital weights of the best-fit model for the negative half of the major axis $\boldsymbol{X}^-$. Fig.~\ref{fig:Figure10} shows the results for these completely independent model fits. As expected, the $rms$- and $\chi^2/N_{\mathrm{data}}$-profiles start at smaller values, since at $\alpha = 0$ the $\omega_i$'s bias the orbital weights most strongly. As motivated above, in the specific case here the weights are biased towards a previous fit, which explains the better initial $\chi^2$ and $rms$. However, in our fiducial $\alpha$ range, the reproduction of the internal moments and the quality of the fit are the same as for the case with constant $\omega_i$'s. Consequently, the simplifying assumption of constant $\omega_i$ does not change the modeling results significantly. \subsubsection{Degeneracy} \label{sec:Degeneracy} \begin{figure} \centering \includegraphics[width=1\textwidth]{Figure11.pdf} \caption{Choice of regularization and 1-dimensional mass recovery results for the positive half of the major-axis projection when using the default constant initial orbital bias factors $\omega_i=$ const. (black line) and when using the best-fit orbital weights $w_i(\boldsymbol{Y}^+)$ of the positive half of the intermediate axis as orbital bias factors, i.e., $\omega_i=w_i(\boldsymbol{Y}^+)$ (brown line). The additional information from the second projection axis appears to reduce the degeneracy, leading to a minor improvement in the internal-moments reproduction as seen in the lower $rms_\sigma$- and $rms_{v,\sigma}$-values.
} \label{fig:Figure11} \end{figure} Repeating the analysis just described (Subsection~\ref{sec:Changing the orbital bias factors omegai}), but using the orbital bias factors of the right half of the \textit{intermediate}-axis projection, $\omega_i=w_i(\boldsymbol{Y}^+)$ (brown line in Fig.~\ref{fig:Figure11}), as initial values for remodeling $\boldsymbol{X}^+$, we obtain a minor improvement in the reconstruction of the internal moments (the $rms_{\sigma}$- and $rms_{v,\sigma}$-values are slightly smaller than for the default model, black line), while the mass recovery yields equal results. The fact that, for the same quality of fit (i.e. the same $\chi^2$), the $rms$ of the internal moments is smaller for the model with $\omega_i$ set equal to the orbital weights from another modeled projection direction (in this case the intermediate-axis projection $\boldsymbol{Y}$) suggests that these $\omega_i$ contain some information in addition to the kinematics of the given projection (in this case the major-axis projection $\boldsymbol{X}$). That information from another line of sight can improve the model is not surprising. In fact, it shows that some of the already very small residual $rms$ in the internal velocity moments is due to remaining degeneracies in the recovery of the orbital weights. These degeneracies appear to be surprisingly small. Our results imply that as long as the deprojected light profile and normalized DM halo are known and the orbit sampling is dense, masses and anisotropies can be recovered with very high accuracy, independent of the viewing angle. \subsubsection{Change in orbit sampling technique} \label{sec:Change in orbit sampling technique} \begin{figure} \centering \includegraphics[width=1.0\textwidth]{Figure12.pdf} \caption{Choice of regularization and 1-dimensional mass recovery results for the positive half of the major-axis projection (left column) and the positive half of the intermediate-axis projection (right column) when swapping the $x$- and $z$-coordinates in the simulation, so that the orbit sampling in \texttt{SMART} changes from a sampling of $v_\phi$ to $v_\theta$. Whereas the mass recovery is slightly improved, the $rms_\sigma$- and $rms_{v,\sigma}$-profiles show slightly deteriorated values. Overall, both techniques lead to essentially the same results.} \label{fig:Figure12} \end{figure} Even though our results already demonstrate that our orbit library efficiently reproduces all orbit types required in a triaxial potential, we want to check the robustness of our orbit sampling method (cf. App.~\ref{sec:Orbital Representation of the Phase Space}). We therefore change the angular-momentum sampling direction from the minor axis to the major axis, corresponding to a change from $v_{\phi}$- to $v_{\theta}$-sampling. The basic idea is that a more homogeneous angular-momentum sampling along the major axis instead of the minor axis might improve the mass recovery. Thus, we swap the major- and minor-axis coordinates within the simulation and rerun the models for the new major-axis (left column in Fig.~\ref{fig:Figure12}) and intermediate-axis kinematics (right column in Fig.~\ref{fig:Figure12}). With this, the angular-momentum orbit sampling proceeds along the major axis of the simulated remnant. The results are shown in Fig.~\ref{fig:Figure12}. One can see that both orbit libraries produce the same results.
In fact, the modified orbit-sampling procedure yields a slightly better mass recovery but a slightly worse reproduction of the internal moments (higher $rms_{\sigma}$- and $rms_{v,\sigma}$-values) for equally good kinematic fits. Nevertheless, regardless of the chosen angular-momentum sampling axis, our orbit-sampling technique (cf. App.~\ref{sec:Orbital Representation of the Phase Space}) of creating initial conditions belonging to certain energy shells and angular-momentum sequences has proven to be highly efficient. It produces a general and complete set of orbits for triaxial potentials that can deal with changing structure in the integrals-of-motion space, since it reproduces all relevant properties and radially adapts itself to the more spherical center as well as the more prolate outskirts of the simulated galaxy. \\ \subparagraph*{Summary of Robustness and Uniqueness Checks.} In conclusion, these checks show that \texttt{SMART} is robust against minor internal modifications as well as changes in the input data. \texttt{SMART} has proven its ability to handle noisy and PSF-convolved data. Furthermore, we have shown that constant orbital bias factors are a good approximation. Even though it is impossible to obtain observations from two viewing points, we have demonstrated that this would allow the minor degeneracies permitted by the kinematic data to be reduced even further; we verified this by using the orbital bias factors from a second projection direction. Moreover, the results are not affected by changing the orbit-sampling technique from setting up $L_z$-sequences to setting up $L_x$-sequences. This shows that, as expected, the choice of sampling axis is not decisive and that the orbit-sampling routine is universal. \\ \section{The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs} \label{sec:The quasi-uniqueness of the anisotropy reconstruction when fitting full LOSVDs} The results of the previous Section have shown that the anisotropy of the $N$-body merger remnant can be reconstructed with very high accuracy from the Schwarzschild models that we fitted to the full LOSVDs. This not only demonstrates the high accuracy of our orbit-superposition model, but also implies that when models can exploit the full information contained in the entire LOSVDs, the remaining degeneracy in the recovery of the distribution function cannot affect the anisotropy or mass recovery significantly. In this Section we use the maximum-entropy technique to explore this in more depth. \subsection{The Maximum Entropy Technique and the Mathematical Structure of the Solution Space} \label{sec:Maximum Entropy Technique and Structure of Solution Space} As described in Section~\ref{sec:Orbit superposition}, we solve for the orbital weights $w_i$ using a maximum-entropy technique (cf. eqs. \ref{eq:costfunc} and \ref{eq:smoothing}). The $\chi^2$ term in Equation~\ref{eq:costfunc} contains the kinematical constraints and is the deviation between the observed LOSVDs $\cal{L}_\mathrm{data}$ and the model prediction $\cal{L}_\mathrm{mod}$, i.e. the weighted sum over the contributions of all orbits to LOSVD $j^\prime$ and line-of-sight velocity bin $k$, \begin{equation} {\cal{L}_\mathrm{mod}}^{j^\prime,k} \equiv \sum_{i=1}^{N_\mathrm{orbit}} w_i \, {\cal{L}_\mathrm{orb}}^{j^\prime,k,i}.
\end{equation} It is convenient to think of the observed $\vect{\cal{L}}_\mathrm{data}$ and the model predictions $\vect{\cal{L}}_\mathrm{mod}$ as vectors with $N_\mathrm{data} = N_\mathrm{losvd} \times N_\mathrm{vlos}$ elements. Then, \begin{equation} \label{eq:kinmat} \vect{\cal{L}}_\mathrm{mod} = \mathrm{\cal{L}_\mathrm{orb}} \cdot \vect{w}, \end{equation} where $\mathrm{\cal{L}_\mathrm{orb}}$ is a matrix with $N_\mathrm{data}$ rows and $N_\mathrm{orbit}$ columns. \\ In addition to the kinematical observations $\vect{\cal{L}}_\mathrm{data}$, the orbital weights $\vect{w}$ are subject to photometric constraints. In analogy to Equation~\ref{eq:kinmat}: \begin{equation} \vect{p}_\mathrm{mod} = \mathrm{P_\mathrm{orb}} \cdot \vect{w}, \end{equation} where $\vect{p}_\mathrm{mod}$ is a vector with the model predictions for the 3D luminosity density at spatial position $j_3$ in the galaxy ($j_3 = 1,\ldots,N_\mathrm{phot}$). To guarantee the self-consistency of our model, the respective observed $\vect{p}_\mathrm{data}$ are not included via a $\chi^2$ term. Instead, we treat them as boundary conditions for the fit: \begin{equation} \label{eq:constraints} \vect{p}_\mathrm{data} \stackrel{!}{=} \vect{p}_\mathrm{mod}. \end{equation} Hence, we seek the maximum of Equation~\ref{eq:costfunc} subject to the linear equality constraints \begin{equation} \label{eq:densityconstraints} \vect{p}_\mathrm{data} - \mathrm{P_\mathrm{orb}} \cdot \vect{w} = 0. \end{equation} For convenience, we normalise the $\vect{p}_\mathrm{data}$ such that $\sum \vect{p}_\mathrm{data} = 1$. Since we are only interested in positive orbital weights (see below), the orbital weights obey $0 \leq w_i \leq 1$ and we can restrict the maximization to the corresponding $N_\mathrm{orbit}$-dimensional unit cube, which is convex. \\ To show that Equation~\ref{eq:costfunc} has a unique global maximum that can be controlled through the bias factors $\omega_i$ it is convenient to consider the equivalent minimisation problem for $f \equiv - \hat{S}$ given by multiplying Equation~\ref{eq:costfunc} with $-1$. \\ Let us first consider Equation~\ref{eq:costfunc} without the entropy term $S$. The $\chi^2$ term can be written as \begin{equation} \chi^2 = \vect{\cal{L}}_\mathrm{data}^{T} \mathrm{C_v} \vect{\cal{L}}_\mathrm{data} - 2 \vect{\cal{L}}_\mathrm{data}^{T} \mathrm{C_v} \mathrm{\cal{L}_\mathrm{orb}} \vect{w} + \vect{w}^{T} \mathrm{\cal{L}_\mathrm{orb}}^{T} \mathrm{C_v} \mathrm{\cal{L}_\mathrm{orb}} \vect{w}, \end{equation} where $\mathrm{C_v}$ is the inverse of the covariance matrix of the observed LOSVDs and is positive definite. The Hesse matrix of $\chi^2$ reads \begin{equation} \mathrm{\nabla^2} \chi^2 = 2 \mathrm{\cal{L}_\mathrm{orb}}^{T} \mathrm{C_v} \mathrm{\cal{L}_\mathrm{orb}} . \end{equation} Because $\mathrm{C_v}$ is positive definite, the symmetric matrix $\mathrm{\nabla^2} \chi^2$ is at least positive semi-definite and $\chi^2$ is convex. \\ The minimisation of $\chi^2$ alone, subject to the linear equality constraints Equation~\ref{eq:densityconstraints}, is therefore a convex optimisation problem with affine equality constraints in standard form. As such, every local minimum is a global minimum \citep[e.g.][]{convex}. In general we cannot assume that $\chi^2$ is {\it strictly} convex (i.e. that $\mathrm{\nabla^2} \chi^2$ is positive definite). The set of orbital weights that solve $\chi^2(\vect{w}) = \chi^2_\mathrm{min}$ may therefore be non-unique.
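To make the structure of this constrained fit concrete, the following minimal sketch sets up and solves the problem numerically for a small mock configuration. It is written in Python with hypothetical problem sizes and randomly generated mock data; it illustrates the optimisation problem defined by Equations~\ref{eq:costfunc}, \ref{eq:kinmat} and (\ref{eq:densityconstraints}) but is not part of the \texttt{SMART} implementation, and for simplicity it assumes a diagonal inverse LOSVD covariance.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_orb, n_data, n_phot = 40, 25, 5        # hypothetical problem sizes

L_orb = rng.random((n_data, n_orb))      # orbit contributions to the LOSVDs
P_orb = rng.random((n_phot, n_orb))      # orbit contributions to the density
w_true = rng.random(n_orb); w_true /= w_true.sum()
L_data = L_orb @ w_true                  # mock "observed" LOSVDs
p_data = P_orb @ w_true                  # density boundary conditions
C_v = np.full(n_data, 1.0e4)             # diagonal inverse covariance
omega = np.ones(n_orb)                   # orbital bias factors (Shannon case)
alpha = 1.0e-2                           # smoothing parameter

def f(w):                                # f = -Shat = -S + alpha * chi^2
    chi2 = np.sum(C_v * (L_data - L_orb @ w) ** 2)
    S = -np.sum(w * np.log(np.maximum(w, 1e-300) / omega))
    return -S + alpha * chi2

res = minimize(f, np.full(n_orb, 1.0 / n_orb),
               bounds=[(0.0, 1.0)] * n_orb,
               constraints={"type": "eq",
                            "fun": lambda w: P_orb @ w - p_data},
               method="SLSQP")
w_fit = res.x                            # maximum-entropy orbital weights
\end{verbatim}
For every fixed choice of the $\omega_i$, the strict convexity established below guarantees that the minimiser found here is unique.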
This non-uniqueness is not surprising given that the linear equation \begin{equation} \vect{\nabla} \chi^2 = 2 \vect{w}^{T} \mathrm{\cal{L}_\mathrm{orb}}^{T} \mathrm{C_v} \mathrm{\cal{L}_\mathrm{orb}} - 2 \vect{\cal{L}}_\mathrm{data}^{T} \mathrm{C_v} \mathrm{\cal{L}_\mathrm{orb}} \equiv 0 \end{equation} will in general be underconstrained if $N_\mathrm{orbit} > N_\mathrm{data}$. As already mentioned above, the reconstruction of the distribution function is not unique even if the deprojected density and the potential are known. We give an example in Appendix~\ref{sec:Optimization Algorithm and Testing}. \\ Because the Hesse matrix of $-S$ is diagonal with the $i$-th element equal to $1/w_i$, the entropy term $-S$ is {\it strictly} convex. In contrast to the case of $\chi^2$, the set of orbital weights that minimise $-S$ (or, equivalently, that maximise $S$) is {\it unique}. For the entropy alone this is easy to see since \begin{equation} \nabla S = - \log \frac{w_i}{\omega_i} - 1 \equiv 0 \end{equation} can be solved analytically: $w_i = \exp(-1) \cdot \omega_i$. The constraints from Equation~\ref{eq:densityconstraints} will shift the solution, but the strict convexity still guarantees that it remains unique \citep[e.g.][]{convex}. \\ For general $\alpha>0$, the Hesse matrix of $f$ is the sum $\mathrm{\nabla^2} (-S) + \alpha \mathrm{\nabla^2} \chi^2$ and $f$ is always strictly convex. Equation~\ref{eq:costfunc} subject to the constraints (\ref{eq:densityconstraints}) hence always has a unique solution. \\ As already mentioned above, from an algorithmic point of view it is advantageous to have a unique solution. From the physical point of view this is not desirable, because any algorithm that picks up only one of the potentially many solutions that minimise $\chi^2$ in Equation~\ref{chi squared} may lead to a {\it bias}. However, suppose $\vect{s_1}$ and $\vect{s_2}$ are two solutions which lead to the same $\chi^2_\mathrm{min}$. By setting $\omega_i = s_{1,i} \cdot \exp(1)$ we can make $\vect{s_1}$ the global solution of Equation~\ref{eq:costfunc}. Likewise, by setting $\omega_i = s_{2,i} \cdot \exp(1)$ we can make $\vect{s_2}$ the global solution of Equation~\ref{eq:costfunc}. This shows that the maximum-entropy formulation of the problem in principle allows the reconstruction of the entire solution space (via variation of the $\omega_i$).
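Continuing the purely illustrative sketch above, degenerate solution pairs of this kind can be constructed explicitly: any direction $\vect{v}$ in the common null space of $\mathrm{\cal{L}_\mathrm{orb}}$ and $\mathrm{P_\mathrm{orb}}$ changes neither $\chi^2$ nor the density constraints.
\begin{verbatim}
from scipy.linalg import null_space

A = np.vstack([L_orb, P_orb])      # rows that every solution must reproduce
v = null_space(A)[:, 0]            # one degenerate direction
step = 0.05 / np.abs(v).max()      # assumed small enough to stay in [0, 1]
s1, s2 = w_fit, w_fit + step * v   # same chi^2, same density

for s in (s1, s2):
    omega = np.e * np.maximum(s, 1e-6)  # omega_i = s_i * exp(1); small floor
    # rerunning the minimisation above with these omega now returns
    # (approximately) s as the unique maximum-entropy solution
\end{verbatim}
This mirrors the argument above: each member of a degenerate pair becomes the unique global optimum for a suitable choice of the $\omega_i$.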
\subsection{Testing the Uniqueness of the Anisotropy Recovery} \label{sec:Experiment to testing the Uniqueness of the Anisotropy Recovery} The previous Section has shown that the recovery of the distribution function (or, equivalently, of the orbital weights) is in general not unique, even when the deprojection and the potential are known. The maximum entropy technique recovers one of the many possible solutions, which is unique for every given set of $\omega_i$. Varying the $\omega_i$ makes it possible to sample the full solution space. On the other hand, the fits of the $N$-body merger remnant have shown that the anisotropy recovery is very accurate and stable even under variations of the $\omega_i$. Moreover, in Appendix~\ref{sec:Optimization Algorithm and Testing} we explicitly construct two different phase-space distribution functions that fit a given set of kinematics equally well. Even though they have different orbital weights, they reveal very similar anisotropies in the second-order velocity moments. This suggests that while the recovery of the full distribution function is non-unique, the anisotropy of the second-order moments is actually very well constrained by the information contained in the full LOSVDs. \\ To investigate this further, we create kinematic data of several toy models with different intrinsic properties: with randomised orbit weights (called RANDOM in the following), with an overpopulation of box/chaotic orbits (BOX), with an overpopulation of $z$-tubes (ZTUBE), with an overpopulation of only the prograde $z$-tubes (ZROT), and with an overpopulation of $x$-tubes (XTUBE). We then fit the mock kinematic data of these toy models under different choices for the $\omega_i$, trying to push the fitted orbit model towards extreme shapes. The goal is to test how accurately and stably the internal velocity anisotropy is recovered when we fit the entire information contained in the LOSVDs. \\ The toy models are constructed as maximum-entropy models (i.e. through maximisation of eq.~\ref{eq:costfunc} with $\alpha = 0$; cf.~\citealt{Thomas07}). For the BOX model, we increase the $\omega_i$ of all box orbits by a factor of 1000; for the XTUBE and ZTUBE models we increase the $\omega_i$ of the respective tube orbits by the same amount. For the ZROT toy model we increase the $\omega_i$ only for prograde $z$-tubes, and for the RANDOM model we use randomised $\omega_i$ (cf. Appendix~\ref{sec:Optimization Algorithm and Testing}). The weights are then still forced to satisfy the density constraints of the $N$-body simulation. For each toy model, we create kinematic mock data for four different projections (major-, intermediate-, minor-axis projection and the $\theta=45^\circ, \phi=45^\circ$-projection). We then model every projection of all toy models (in total 20 different input data sets) and test four different methods for the fits: we use (i) our default constant orbital bias factors, i.e. $\omega_i=1$, corresponding to the Shannon entropy (abbreviated as 'shannon' in Fig.~\ref{fig:Figure13}); (ii) orbital bias factors increased by a factor of 10 for the box/chaotic orbits (box10); (iii) orbital bias factors increased by a factor of 10 for the $z$-tubes (ztube10); and (iv) orbital bias factors increased by a factor of 10 for the $x$-tubes (xtube10). A sketch of this bias-factor construction is given below. \\
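In code, the construction of the toy-model entropies and of the fit entropies (i)-(iv) amounts to a simple per-class reweighting of the bias factors. The sketch below continues the illustration from the previous Section; the class labels are hypothetical stand-ins for the output of \texttt{SMART}'s orbit classification routine.
\begin{verbatim}
# hypothetical per-orbit class labels standing in for the classification
classes = rng.choice(["box", "xtube", "ztube_pro", "ztube_retro", "sph"],
                     size=n_orb)

def bias_factors(boost, factor):
    """Return omega with the given orbit classes up-weighted."""
    omega = np.ones(n_orb)
    omega[np.isin(classes, boost)] *= factor
    return omega

omega_BOX     = bias_factors(["box"], 1000.0)        # toy-model construction
omega_ZROT    = bias_factors(["ztube_pro"], 1000.0)  # prograde z-tubes only
omega_box10   = bias_factors(["box"], 10.0)          # fit method (ii)
omega_ztube10 = bias_factors(["ztube_pro", "ztube_retro"], 10.0)  # (iii)
\end{verbatim}
The toy models themselves then follow from maximising the entropy with these $\omega_i$ (i.e. $\alpha = 0$) subject to the density constraints, exactly as in the sketch above.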
Figure~\ref{fig:Figure13} shows the resulting anisotropy profiles (averaged over shells) for these models. The different rows correspond to the different input toy models and the different columns show the individual projection directions. The dotted data points symbolise the anisotropy profiles of the input toy models (which are of course the same for the different projections). The colored lines (green, blue, orange, red) show the recovered anisotropies of \texttt{SMART} fits using entropy functions with enhanced bias factors for specific orbit types as described under (i)-(iv) above. The grey lines, for comparison, show the anisotropies that result when we maximise the above entropy functions (i)-(iv) without fitting the mock LOSVDs of the toy models. The grey lines therefore illustrate the variety of different anisotropy profiles that can be constructed by varying the orbital bias factors $\omega_i$. They also indicate the range of different anisotropy profiles that are consistent with the given density distribution. \\ As one can see, even though the entropy functions tend to push the fits into extreme directions, the range of anisotropy profiles recovered after the fit to the LOSVDs is very narrow. Averaging over all radii and toy models fitted with different entropies, the mean deviation from the input models (dots) is $|\Delta \beta| = 0.05$. The average spread in $\beta$ is slightly larger inside the sphere of influence, $|\Delta \beta(r<r_{\mathrm{SOI}})| = 0.09$, than outside, $|\Delta \beta(r_{\mathrm{SOI}}<r<r_{\mathrm{FOV}})| = 0.03$. One possible explanation for this might be the larger number of degrees of freedom of spherical orbits near the center. In addition, because $\beta$ involves the ratio of the intrinsic dispersions, the same fractional error in the intrinsic dispersions results in a 4 times larger $|\Delta \beta|$ when the anisotropy is as tangential as $\beta = -1.5$ compared to the isotropic case. Overall, the Shannon entropy (red line) recovers the $\beta$ anisotropy best. This is our default entropy used in \texttt{SMART}.\\ These results, together with Section~\ref{sec:Results}, strongly suggest that the information contained in the full LOSVDs constrains the anisotropy in the second-order velocity moments very well. In turn, this is the reason why our models can reproduce the mass of the black hole and of the stars in the $N$-body simulation very well. \\ At larger radii, only the reconstruction of the intrinsic anisotropy of the toy model with enhanced prograde $z$-tubes (i.e. ZROT) turns out to be difficult when viewed along the minor axis. This, however, is expected since any rotation around the minor axis cannot be observed and, thus, cannot be reconstructed from this viewing direction. Since we use equal $\omega_i$ for prograde and retrograde orbits in our {\it fitted} models, these models do not have intrinsic rotation in the $z$-tubes for this projection. Consequently, the tangential velocity {\it dispersion} is larger than in the toy model and the fitted $\beta$ becomes too negative. We checked that if we use the true second-order velocity moment rather than the velocity dispersion in the tangential direction, then the differences between the outer profiles of the ZROT model and the fits along the minor axis disappear. For dynamical models which aim for a full phase-space reconstruction (like Schwarzschild models) and which use the full information encoded in the LOSVDs (see also \citealt{Vasiliev19}), the anisotropy should be recoverable with a typical error of $|\Delta \beta| \approx 0.05$. We found larger anisotropy discrepancies (up to $|\Delta \beta| = 0.5$) only in extremely tangentially biased regions inside the sphere of influence. \\ As an example of the corresponding agreement of the reconstructed orbit fractions, Figure~\ref{fig:Figure14} shows the case of $z$-tubes. The intended overpopulation of the $z$-tubes in the ZTUBE toy model (third row) and the ZROT toy model (fourth row) can be clearly seen in comparison to the other toy models. Independent of the chosen line of sight and entropy method (i.e. the bias factors $\omega_i$), the fraction of $z$-tubes is qualitatively recovered and follows the enhancement tendencies. The orbit fractions, however, are less well determined by the data than the anisotropy and show a stronger dependence on the entropy. The same is true for the $x$-tubes, box/chaotic and spherical/Kepler orbits shown in Figures~\ref{fig:Figure28} to~\ref{fig:Figure30}.
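For reference, the anisotropy profiles discussed here follow from the fitted orbital weights via mass-weighted sums of per-orbit velocity moments. The following minimal sketch assumes precomputed, hypothetical per-orbit masses and second velocity moments on a radial grid and adopts the common convention $\beta = 1 - \sigma_t^2/(2\sigma_r^2)$; for simplicity it uses second moments, whereas dispersions additionally subtract the mean streaming motion (which matters for the ZROT case discussed above).
\begin{verbatim}
def beta_profile(w, m_orb, vr2_orb, vth2_orb, vph2_orb):
    """w: (n_orb,); *_orb: (n_orb, n_radius) orbit-averaged quantities."""
    m = (w[:, None] * m_orb).sum(axis=0)           # mass per radial shell
    sr2 = (w[:, None] * m_orb * vr2_orb).sum(axis=0) / m
    st2 = (w[:, None] * m_orb * (vth2_orb + vph2_orb)).sum(axis=0) / m
    return 1.0 - st2 / (2.0 * sr2)
\end{verbatim}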
\begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Figure13.pdf} \caption{Recovery of the anisotropy profiles of different input toy models by \texttt{SMART} fits with different entropy functions. The input models (black dots) are constructed with specific orbit classes overpopulated, resulting in different anisotropy profiles (the overpopulated orbit type is labelled on the y-axis of each individual row; details in the text). All five toy models are well recovered. This is independent of the choice of the projection axis (different columns; from left to right major, intermediate, minor and a diagonal axis) and of the assumed entropy function in the fit (colored lines). For example, the anisotropy profile of the toy model with an overpopulation of box/chaotic orbits (BOX; second row) is well reproduced by models maximising the Shannon entropy (red lines, labelled shannon) but also with other entropy functions that use a ten times higher bias factor for box orbits (green, labelled box10), for $x$-tubes (blue, xtube10) or for $z$-tubes (orange, ztube10). The grey lines correspond to the anisotropies implied by maximising these four different entropy functions without fitting the kinematic data. They symbolise the variety of anisotropy profiles which are in principle possible for different choices of $\omega_i$. After fitting the kinematic data, the average deviation (averaged over all radii and toy models fitted with different entropy techniques) between recovered and input anisotropy is very small, $|\Delta \beta| = 0.05$.} \label{fig:Figure13} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Figure14.pdf} \caption{Recovery of the $z$-loop orbit fractions of different input toy models by models with different entropy methods. Here we show the same analysis as in Figure~\ref{fig:Figure13}, but now for the reconstruction of the fraction of orbits classified as $z$-tubes. The color coding matches Fig.~\ref{fig:Figure4} and~\ref{fig:Figure23}. Independent of the tested projection, the $z$-loop fractions of the individual input toy models are well recovered by the models using different entropy methods. The same is true for the other orbit class fractions (see Fig.~\ref{fig:Figure28}-~\ref{fig:Figure30}).} \label{fig:Figure14} \end{figure*} \section{Intermediate axis rotation} \begin{figure} \centering \includegraphics[width=0.95\textwidth]{Figure15.png} \caption{Fitting mock kinematic input data containing intermediate axis rotation. When modeling artificial LOSVDs showing a net rotation along the intermediate axis, \texttt{SMART} is able to reproduce a $y$-rotation signal when integrating the orbits for a limited integration time of 2\,Gyr (first row). The $y$-rotation becomes visible in the $v$- (left panel) and $h_3$- (right panel) maps. When integrating for 1000 SOS-crossings (second row), the $y$-rotation signal can no longer be fitted by the model.} \label{fig:Figure15} \end{figure} One of the first investigations of whether tube orbits with net rotation around the intermediate axis are stable in a triaxial ellipsoid was carried out by \citet{Heiligman79}, who studied a triaxial model with fixed axis ratios of 1:1.25:2 by numerical methods and stated that “Y-tube orbits nearly certainly do not exist in the adopted model”. Also \citet[p. 263]{Binney08} assert that “tube orbits around the intermediate axis are unstable” in a triaxial potential. \citet{Adams07} analysed the instability of orbits in a triaxial cusp potential, which are initially confined to one of the three principal axes, under a perturbation along the perpendicular direction. They found that orbits around any of the principal axes are unstable to perpendicular motions.
However, consistent with previous results, they again state that orbits around the intermediate axis are the most likely to be unstable. This instability is strongest for box orbits originally lying in the $x$-$z$ plane when the axis ratio of these two axes is largest. \\ Our orbit classification routine in \verb'SMART' finds no $y$-tubes in the sense that there is no sign conservation of the angular momentum along the intermediate axis over our default integration period of 100 SOS-crossings, in agreement with the studies cited above. However, some of the orbits integrated for the $N$-body models do show $y$-rotation for a limited time span. When providing \verb'SMART' with artificial projected input line-of-sight velocity distributions that mimic a net rotation along the intermediate axis, the model is able to produce a $y$-rotation signal of the order of $10\mathrm{\,km\,s^{-1}}$ in the fit. A rotation of this magnitude is small but in principle detectable with the resolution of today's telescopes. Fig.~\ref{fig:Figure15} shows the \verb'SMART' fit to the major-axis projection of the simulation (cf. Fig.~\ref{fig:Figure3}) when assuming the viewing angles of the minor-axis projection. With this, we simulate a hypothetical rotation along the intermediate axis. The top row shows the velocity and $h_3$ maps when the orbit integration is stopped after 2\,Gyr if this is shorter than the time needed for 100 SOS-crossings. Indeed, the model reproduces a $y$-rotation signal. The amplitude of this residual $y$-rotation decreases as the orbital integration time is increased, and vanishes when all orbits are integrated for 1000 SOS-crossings (bottom row). \\ This analysis indicates that the model's triaxial potential, which is constructed based on the 3D density from the realistic $N$-body merger simulation, contains, contrary to expectations, orbits with $y$-rotation for a physically relevant time span. It remains unclear at this moment whether such a $y$-rotation indeed appears in real elliptical galaxies. If so, then the connection between kinematic misalignments and photometric twists is less constrained than often assumed when only rotation around the intrinsic long and short axes is considered.
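The classification criterion used here, sign conservation of an angular momentum component along the integrated orbit, can be sketched as follows. This is a simplified illustration operating on hypothetical arrays of the angular momentum components sampled along one orbit; the actual routine in \verb'SMART' may differ in detail.
\begin{verbatim}
import numpy as np

def classify_orbit(Lx, Ly, Lz):
    """Classify one orbit from angular momentum components sampled
    over (e.g.) 100 SOS-crossings; simplified sketch."""
    def sign_conserved(L):
        return bool(np.all(L > 0.0) or np.all(L < 0.0))
    if sign_conserved(Lx):
        return "x-tube"
    if sign_conserved(Lz):
        return "z-tube"
    if sign_conserved(Ly):
        return "y-tube"   # never triggered over 100 SOS-crossings
    return "box/chaotic"
\end{verbatim}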
\section{Discussion} \label{sec:Discussion} \subsection{Remaining sources of systematics} All relevant properties of the simulated merger remnant galaxy were shown to be recovered with convincing precision, and the deviations we found are almost negligible. The remaining deviations (at the $\sim 5-10\%$ level) can originate either from \texttt{SMART} or from the simulation. One remaining contribution to the scatter in the final mass recovery certainly comes from the finite binning resolution of the simulation data and of the \verb'SMART' models (see Sections~\ref{sec:Triaxial Schwarzschild code SMART} and~\ref{sec:The N-body simulation}). In particular, the need to extrapolate the density towards the center due to the limited resolution of the simulation introduces uncertainties. \\ A further inaccuracy is potentially induced by the softening length in the simulation, which is used to avoid unrealistic 2-body encounters between massive particles. Force calculations for radii smaller than the softening length are consequently modified by the softening. In the close vicinity of the black hole, the stellar particles in the simulation are modeled using a non-softened algorithmic chain regularization technique (ARCHAIN; \citealt{Mikkola06, Mikkola08}) including post-Newtonian corrections (e.g., \citealt{Will06}). The particles outside this chain radius are treated using softened gravitational force calculations based on the GADGET-3 \citep{Springel05} leapfrog integrator. The chain radius $r_{\mathrm{chain}}$ is chosen to be at least 2.8 times larger than the GADGET-3 softening length $\epsilon$ ($r_{\mathrm{chain}}>2.8\epsilon$) to ensure that the particles within the chain remain non-softened. The softening length in the simulation for the stellar particles is $\epsilon_*=3.5\mathrm{\,pc}$ and the softening length for the dark matter particles is $\epsilon_{\mathrm{DM}}=100\mathrm{\,pc}$. The recovery of the global stellar mass-to-light ratio and of the dark matter scale factor will be unaffected by these relatively small values. However, there might be a remaining influence on the black hole mass recovery within $r_{\mathrm{SOI}}$. Nevertheless, this effect is expected to be small. \subsection{Comparison to other triaxial Schwarzschild Models} Two other dynamical modeling codes in the literature use Schwarzschild's orbit superposition technique and deal with triaxiality: those of \citet{vandenBosch08} and \citet{Vasiliev19}. The code \verb'FORSTAND' by \citet{Vasiliev19} is applicable to galaxies of all morphological types. When it is assumed that the deprojection and the dark matter halo are known and provided to the code, the models of noise-free axisymmetric disc mock datasets taken from $N$-body simulations showed very weak constraints on $M_{BH}$: any value between zero and 5-10 times the true black hole mass was equally consistent with the data. As read from Fig. 2 of their paper, the stellar mass-to-light ratio showed a variation of around 20\%. \\ \citet{Jin19} tested the triaxial Schwarzschild code of \citet{vandenBosch08} by applying it to nine triaxial galaxies from the large-scale, high-resolution Illustris-1 simulation \citep{Vogelsberger14}, which provides a stellar and dark matter resolution of $\sim10^6M_{\odot}$. When the black hole mass is fixed and the model is allowed to deproject the mock dataset, the stellar mass within an average effective radius is underestimated by $\sim$24\% and the dark matter is overestimated by $\sim$38\%. Their averaged model results obtained from mock data with different viewing angles tend to be too radial in the outer regions, with better anisotropy matches in the inner region. \\ Of course, these results cannot be used for a direct comparison due to the widely varying resolution, and in the case of the analysis by \citet{Jin19} the deprojection probably causes the major deviation. Nevertheless, \texttt{SMART} clearly adds further progress in modeling triaxial galaxies, as demonstrated by the precision established here. \section{Summary and Conclusion} \label{sec:Summary and Conclusion} We have developed a new triaxial dynamical Schwarzschild code called \texttt{SMART} and tested its efficiency and reliability by applying it to an $N$-body merger simulation including supermassive black holes. The simulation was deliberately selected for its high accuracy, reasonable formation process, realistic internal structure and ability to precisely calculate the dynamics close to the central black hole.
This made it possible to check whether \texttt{SMART} is able to recover all relevant properties, including the mass of a supermassive black hole, of a realistic triaxial galaxy when the deprojected light profile and normalized DM halo are provided.\\ \texttt{SMART} computes the potential and forces by expansion into spherical harmonics, which allows it to deal with non-parametric densities and halos. Its orbit library contains 50000 integrated orbits, which are set up by creating random initial radial and velocity values within given energy shells and angular momentum sequences and by filling the surfaces of section. This ensures the ability to adapt to a radially changing number of integrals of motion. The orbit superposition is executed by maximizing an entropy-like quantity and by using the full line-of-sight velocity distributions instead of Gauss-Hermite parameters alone. \\ These features enable \texttt{SMART} to reconstruct all relevant properties of the merger remnant with excellent precision. We now recap the requirements which were set on \texttt{SMART} and proved to be fulfilled in this analysis: \begin{itemize} \item[-] \texttt{SMART} is able to reproduce the anisotropy profile and internal velocity dispersions with an $rms_\sigma$ of only $1.2\%$. \item[-] \texttt{SMART} reproduces the stellar mass-to-light ratio, \item[-] black hole mass, \item[-] and mass scale factor of the dark matter density profile with a precision on the $5$--$10\%$ level. To our knowledge this is the first time that the intrinsic precision at given deprojected light profile has been quantified to be this high. This sets the basis for further investigations of the whole modeling procedure. \item[-] \texttt{SMART} fits the line-of-sight velocity distributions well, with mean values and deviations of only $\bar{v}=(0.11\pm2.09)\mathrm{\,km\,s^{-1}, } \text{ } \bar{\sigma}=(309.75\pm2.54)\mathrm{\,km\,s^{-1}, } \text{ } \bar{h_3}=0.00\pm0.01\text{ and }\bar{h_4}=0.01\pm0.01$. \end{itemize} For the determination of these accuracy values, the simulation is modeled from up to five different projections. The mass recovery precision is achieved for noiseless as well as noisy and PSF-convolved kinematic input data. We extensively discuss that the maximum-entropy technique provides an elegant way to study the range of possible orbit distributions consistent with a given set of data. Our tests with the $N$-body data and with additional toy models strongly suggest that when the full information contained in the entire LOSVDs is used to constrain the model, the remaining degeneracies in the recovery of the exact phase-space distribution function do not affect 'macroscopic' properties of the galaxy models, like the anisotropy in the second-order velocity moments. This is the basis for the very good reconstruction of the orbital structure and of the mass of the black hole and the stars with \texttt{SMART}. It was shown that the orbit library is robust against axis changes and generates a complete set of well-superposed orbits necessary to model a triaxial galaxy with all corresponding internal structures. \\ The accurate mass parameter recovery accomplished by \texttt{SMART} also suggests that the projected kinematic data contain only minor degeneracies, provided that the deprojection is known.
We showed that these remaining minor degeneracies could in principle be narrowed even further if information about the orbital bias factors from a second projection direction were provided. When analysing the elevation and azimuthal internal velocity distributions of the simulation, we find that the central radial bins show two maxima. This corresponds to the negative $h_4$-parameter in the center and the strong tangential anisotropy produced during the core formation. \texttt{SMART} is able to reconstruct this phenomenon with an accuracy of $\sim 7\%$. A further discovery of scientific interest is intermediate axis rotation, which is produced by orbits contained in the model's orbit library representing the simulation's triaxial potential. Independent of the question whether such intermediate axis rotation really appears in the real universe, it was shown that our model contains orbits with $y$-rotation whose stability was empirically found to be maintained for at least 2\,Gyr. \\ \section*{Acknowledgements} We acknowledge the support by the DFG Cluster of Excellence "Origin and Structure of the Universe". The dynamical models have been computed on the facilities of the Computational Center for Particle and Astrophysics (C2PAP) and we are grateful for the support by F. Beaujean through the C2PAP. \section*{Data Availability Statement} The data underlying this article will be shared on reasonable request to the corresponding author. \clearpage \bibliographystyle{mnras}
\section{Introduction} \begin{wrapfigure}{r}{0.45\textwidth} \vspace{-12.5pt} \includegraphics[width=0.45\textwidth]{images/area6_renderc.png} \captionsetup{justification=centering} \caption{Sample segmentation results on S3DIS \cite{2017arXiv170201105A} Area $3$.}\label{fig:area6} \vspace{-10pt} \end{wrapfigure} Semantic segmentation of large-scale 3D pointcloud data has attracted considerable interest for real-world applications. Automatic site inspection for quality verification, for example, is one of the emerging applications for pointcloud semantic segmentation. Deployment of a 3D pointcloud semantic segmentation algorithm can avoid delays and reduce costs caused by human errors. Figure \ref{fig:area6} shows how accurately our proposed multi-resolution graph neural network can perform semantic segmentation. When it comes to dense large-scale pointcloud scans, the current methods either require a significant reduction in the data density \cite{kpconv, qi2016volumetric, DBLP:journals/corr/LiPSQG16, DBLP:journals/corr/EngelckeRWTP16, pointnet, shelnet} or offer only limited processing capability for multiple scans \cite{DBLP:journals/corr/MasciBBV15, 10.1007/978-3-030-01237-3_6, pointnet2, pointcnn, pointweb}. Two challenges hinder the development of pointcloud semantic segmentation algorithms for dense large-scale pointcloud scans. The first is the extensive memory usage required for processing dense pointclouds with up to billions of data points in a single scan. Prior research has attempted to downsample the original pointcloud to the point that common computing hardware can be utilized. Performing drastic downsampling on a dense pointcloud can, however, negatively impact the segmentation result. Intricate details initially present in the dense pointcloud are removed during the downsampling process. This is undesirable since sparse pointclouds contain fewer geometric features. The segmentation result obtained with a sparse pointcloud would be inaccurate when interpolated back onto the original dense pointcloud. To address this issue, we convert the large-scale pointclouds into semantically similar point clusters. By doing this, the amount of GPU memory demanded by the semantic segmentation network is drastically reduced. When tested on the benchmark Stanford Large-Scale 3D Indoor Spaces Dataset, we observe that MuGNet can run inference on up to 45 pointclouds at once, each containing on average 2.6 million points. Besides the challenge posed by computation requirements, semantic segmentation is also challenged by the orderless and uneven nature of raw pointclouds. This leads to less desirable performance of deep convolutional neural networks, which require evenly arranged and ordered data. Prior research has attempted to convert the orderless pointclouds to meshes or voxels as ways to artificially structure pointclouds before applying deep convolutional neural networks \cite{qi2016volumetric, DBLP:journals/corr/LiPSQG16, DBLP:journals/corr/EngelckeRWTP16, acnn, interpolate}. Such attempts typically result in voluminous rendered data and introduce unnecessary computation overhead to the algorithm. To overcome this challenge, we propose a framework with graph convolutions that are invariant to permutations of pointcloud orderings. Segmentation can thus be performed without artificially structuring the pointcloud. In addition to addressing the two major challenges, our proposed framework also features a bidirectional multi-resolution fusion network.
The framework reasons about the relationships between adjacent clusters at different resolutions with both forward and backward paths. We observe that the concatenated graph features from different resolutions provide a richer feature representation than the resultant features from the final convolution layer alone. The backward fusion network then further enriches the representations. With the aforementioned design components, we demonstrate that our proposed MuGNet achieves 88.5\% overall accuracy and 69.8\% mIOU on Stanford Large-Scale 3D Indoor Spaces Dataset semantic segmentation, outperforming SPG \cite{DBLP:journals/corr/abs-1711-09869} by 7.7\% in mIOU and 3\% in overall accuracy. Figure \ref{fig:area6} presents a sample semantic segmentation result for Area 3 in the Stanford Large-Scale 3D Indoor Spaces Dataset. \section{Related Work} \label{sec:citations} Three main categories of learning-based frameworks have been previously proposed for pointcloud semantic segmentation: voxel-based, point-based, and graph-based approaches. We outline the corresponding approaches as follows. \hspace{5mm} \textbf{Voxel-based approach:} As attempts to tackle the orderless and uneven nature of pointclouds, previous algorithms have ventured to convert pointclouds into structured voxel data such that convolutional neural networks can be applied. Volumetric CNN \cite{qi2016volumetric}, for example, pioneered on voxelized shapes; FPNN \citep{DBLP:journals/corr/LiPSQG16} and Vote3Deep \citep{DBLP:journals/corr/EngelckeRWTP16} proposed special methods to deal with the sparsity problem in volumes. Voxelizing a large pointcloud scan, however, imposes computation overhead. Such an operation becomes infeasible for processing dense large-scale pointclouds. \hspace{5mm} \textbf{Point-based approach:} PointNet \cite{pointnet} has inspired many previous works to process pointclouds as direct input. Typical point-based frameworks learn feature encodings of each point. The encoded features are then aggregated with a permutation-invariant function to arrive at transformation-invariant segmentation results. Spatial and spectral convolutions have been implemented in previous works \cite{DBLP:journals/corr/MasciBBV15, Tang2019ChebNetEA,10.1007/978-3-030-01237-3_6, acnn, kpconv, pointcnn, pointnet2} to improve the efficiency of feature encoding. All of these approaches are demanding in memory usage and thus require either a sliding-window approach or a downsampling approach to process dense pointclouds. In contrast, our framework alleviates the memory demand during the segmentation process by forming point clusters based on geometric similarities. In this way, we can process multiple dense pointclouds at once with a commonly available GPU. \hspace{5mm} \textbf{Graph-based approach:} While pointclouds are inherently orderless and unevenly distributed, they can be represented as graphs with interdependencies between points. Node classification can be performed to distinguish the classes among the nodes in the graph representations \cite{DBLP:journals/corr/KipfW16, DBLP:journals/corr/HamiltonYL17,velickovic2018graph}. Wang et al.\cite{DBLP:journals/corr/abs-1801-07829} first applied the concept of graph convolutional networks to pointcloud data and formulated an approach that dynamically computes the graphs at each layer of the network.
Various types of graph convolutional neural networks have since been utilized in the context of pointcloud segmentation \cite{mining, Wang2019_GACNet, DBLP:journals/corr/abs-1711-09869, eng, dgcnn}. All of these frameworks sequentially infer the graph embeddings with only forward network paths. In contrast, we employ a bidirectional framework that retains rich contextual information derived from multiple convolution units. The backward path of our network receives graph embeddings at different resolutions and infers rich contextual relationships to achieve high segmentation accuracy. \section{Proposed Computational Framework} \label{sec:methodology} \begin{figure}[htbp] \centering \includegraphics[width=0.85\textwidth]{images/overview.png} \captionsetup{justification=centering} \caption{Illustration of the overall workflow for MuGNet. The input pointcloud is first clustered, and then each formed cluster is classified into its respective semantic class based on cluster features.} \label{overview} \end{figure} Given a large-scale pointcloud with millions of points spanning up to hundreds of meters, directly processing it with a deep neural network requires an enormous amount of computation capability. Downsampling points from the original pointcloud by orders of magnitude is a common practice to cope with this limitation. This approach, however, takes away the intricate details in the original pointcloud and yet still suffers from expensive memory usage. We propose MuGNet, a multi-resolution graph neural network inspired by EfficientDet \cite{tan2019efficientdet}, to effectively translate large-scale pointclouds into directed connectivity graphs and efficiently segment the pointclouds from the formed graph with a bidirectional graph convolution network. MuGNet allows the processing of multiple large-scale pointclouds in a single pass while producing excellent segmentation accuracy. Figure \ref{overview} showcases the overall workflow of MuGNet. In the subsequent sections, we will further explain the cluster formation process and introduce the three key design features in our segmentation network: cluster-feature embedding, feature-fusion backbone, and bidirectional-convolution network. \subsection{Clustering algorithm for graph formation} \label{sec:cluster} We preprocess large-scale pointclouds into geometrically homogeneous clusters, a 3D equivalent of the superpixels commonly used in image analysis. The existing point clustering approaches can be roughly classified into unsupervised geometric approaches and supervised learning-based approaches. There is currently no standard clustering strategy that is suitable for large-scale pointclouds. We analyze their relative merits individually as follows and indicate our reasoning for choosing the supervised clustering approach. \hspace{5mm} \textbf{(1) Unsupervised clustering} Given pointcloud data with millions of points, geometric features based on the neighborhood planarity, linearity, and scattering can be calculated \cite{article, DBLP:journals/corr/abs-1711-09869}; a sketch of one common covariance-based definition is given below.
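As a hedged illustration (the exact conventions differ slightly across the cited works), these features can be derived from the eigenvalues $\lambda_1 \geq \lambda_2 \geq \lambda_3$ of the covariance matrix of a point's local neighborhood:
\begin{verbatim}
import numpy as np

def geometric_features(neighbors):
    """neighbors: (k, 3) array of a point's k nearest neighbors.
    Returns (linearity, planarity, scattering); one common convention,
    assuming a non-degenerate neighborhood (lambda_1 > 0)."""
    cov = np.cov(neighbors.T)                      # 3x3 local covariance
    lam = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    l1, l2, l3 = lam
    return (l1 - l2) / l1, (l2 - l3) / l1, l3 / l1
\end{verbatim}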
The unsupervised methods rely on the assumption that geometrically or radiometrically homogeneous segments are also semantically homogeneous. This assumption is challenged when attempting to form semantically homogeneous clusters for semantic segmentation \cite{DBLP:journals/corr/abs-1904-02113}. Since the clusters are formed without the oversight of semantic information, the clusters can have semantic impurity among their constituent points. The impurity within each of the clusters negatively impacts the performance of the subsequent node-classification network and limits the final semantic-segmentation accuracy. \hspace{5mm} \textbf{(2) Supervised clustering} The supervised-clustering method proposed by Landrieu et al.\cite{DBLP:journals/corr/abs-1904-02113} standardizes a pointcloud with a spatial transformer, embeds local features with a small MLP-based network, and finally forms clusters by optimizing the generalized minimal partition problem with the introduction of a contrastive loss as a feedback metric for cluster purity. The supervised clustering approach achieves significant improvements compared to the unsupervised methods in terms of reduced semantic impurity within the formed clusters. The semantically pure clusters can provide enhanced geometric and semantic connectivity information for subsequent learning tasks. On average, the supervised clustering technique can effectively encapsulate the pointcloud information of a Stanford Large-Scale 3D Indoor Spaces Dataset room scan containing $\sim2.6$ million points into $\sim10^3$ clusters. Compared to randomly downsampling a pointcloud scan, the conversion to point clusters achieves a drastic reduction in data size while retaining richer contextual information from the original pointcloud. Figure \ref{s3dis}(c) and \ref{vKITTI}(c) visualize sample pointcloud scans color-coded by geometric features and point clusters produced by the supervised clustering approach. \subsection{Cluster-feature Embedding} \label{sec:clusteremb} \begin{figure}[h] \centering \captionsetup{justification=centering} \includegraphics[width=0.9\linewidth]{images/network_all.png} \caption{MuGNet includes cluster embedding and bidirectional graph convolution networks. The input pointcloud is clustered based on geometric similarities and learnable features into the cluster set $C = \{C_1, C_2, ...C_K\},$ where $K$ denotes the total number of clusters formed for a pointcloud.} \label{network} \end{figure} As shown in Figure \ref{network}, our cluster-feature embedding network is applied to each of the formed pointcloud clusters. The points in a cluster go through a multi-resolution feature-fusion network with three sets of multi-layer perceptrons. The input points contain geometric features analogous to \cite{DBLP:journals/corr/abs-1711-09869}. The feature-fusion network receives the pointcloud at three different resolutions. The highest-resolution channel is defined as the cluster points, from which the other two lower-resolution channels are formed by down-sampling with linear layers. Each of the three channel inputs then goes through a series of 2D convolutions, and their output features are concatenated together into a single feature vector. Utilizing the feature-fusion network for cluster-feature extraction effectively identifies cluster features that are not only locally representative but also distinctive at a larger receptive field. \subsection{Feature-fusion Backbone} \label{sec:backbone} \begin{wrapfigure}{r}{0.25\textwidth} \captionsetup{justification=centering} \vspace{-17.5pt} \includegraphics[width=0.25\textwidth]{images/backbone} \caption{Exploded-view of the backbone layers.}\label{fig:backbone} \vspace{-5pt} \end{wrapfigure} The node-classification network that identifies the semantic class for each formed cluster employs a backbone network to extract feature vectors at different resolutions.
The feature vectors are subsequently fed to a bidirectional pyramidal feature-fusion network. Consider a graph $G(V, E)$ containing point clusters as nodes and node features obtained from the embedding network, where $V = \{1, 2, \ldots, N\}$ and $E \subseteq V\times V$ represent the set of vertices and edges in the formed graph, respectively. $N$ denotes the total number of nodes. The set of neighboring nodes of Node $i$ can be denoted as $N(i) = \{j:(i, j) \in E\} \cup \{i\}$. Each individual node feature $h_i\in\mathbb{R}^F$ is associated with a corresponding graph Node $i\in V$ among the set of node features $H = \{h_1, h_2, ..., h_N\}$, where $F$ denotes the feature dimension at each node. The backbone network consists of four residual blocks, each constructed with a graph convolution, batch normalization, and activation layer: $$H^{bb out}_i = Activation(BatchNorm(GraphConv(H^{bb in}_i))),$$ where $i$ denotes the level of the backbone. It has been observed in previous works that with the message-passing mechanism of graph convolutions, the node features eventually become similar as the number of passes increases. Short-term and long-term residual connections are added to the basic building blocks to increase information density and avoid diminishing gradients as the network depth increases. For the short-term residual connection, the output features from the previous block are aggregated with a simple addition operation. The residual block with a short-term direct connection from the previous block is formulated as: $$H^{bb out}_2 = \{Activation(BatchNorm(GraphConv(H^{bb in}_2))) + H^{bb out}_1\}.$$ The outputs from the residual blocks are fed into the bidirectional graph convolution network in parallel. At the end of the backbone structure, the features are concatenated together for cluster classification: $$H^{bb out} = \{H^{bb out}_1, H^{bb out}_2, ..., H^{bb out}_4\},$$ where $H^{bb out}$ expresses the concatenated final feature output from the backbone network.
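As a concrete illustration of this residual backbone block, the following is a minimal PyTorch Geometric-style sketch written under our own assumptions; in particular, the text does not pin down the exact graph convolution operator, so \texttt{GCNConv} here is a stand-in.
\begin{verbatim}
import torch.nn as nn
from torch_geometric.nn import GCNConv  # stand-in for the GraphConv operator

class ResidualGraphBlock(nn.Module):
    """H_out = Activation(BatchNorm(GraphConv(H_in))) + H_res (sketch)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = GCNConv(channels, channels)
        self.bn = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, h, edge_index, h_res=None):
        out = self.act(self.bn(self.conv(h, edge_index)))
        # short-term residual connection from the previous block, if any
        return out if h_res is None else out + h_res
\end{verbatim}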
\subsection{Bidirectional-graph Convolution Network} \label{sec:bidirection} Multi-scale feature-fusion aims to aggregate features at different resolutions. Given a list of multi-scale features, $H^{in} = (H^{in}_{l1}, H^{in}_{l2}, ...)$, where each $H^{in}_{li}$ represents the feature at level $l_i$, the feature-fusion network acts as a transformation function, $f$, that aggregates features at different resolutions and outputs a list of new features denoted as $H^{out} = f(H^{in})$. The structure of the bidirectional graph convolution network is shown in Figure \ref{network}. The network receives graph features, $H^{in} = (H^{in}_{1} ...H^{in}_{4})$, at different resolutions from backbone levels 1-4. The conventional feature pyramidal network used in image tasks aggregates multi-scale features in a top-down manner and performs resizing operations for resolution matching. Each network node in the pyramidal network is formulated in a similar fashion to the basic backbone building block, where batchnorm and activation layers are incorporated on top of each graph convolution: $$H^{out}_i = Activation(BatchNorm(GraphConv(H^{in}_i))).$$ For graph convolutions, resizing becomes inefficient and causes information mixing during the message-passing operation. The feature pyramidal network is thus redesigned to produce a constant number of features at each level: \begin{align*} & H^{out}_4 = GraphConv(H^{in}_4),\\ & H^{out}_3 = GraphConv(H^{in}_3+H^{out}_4),\\ & \cdots \\ & H^{out}_1 = GraphConv(H^{in}_1+H^{out}_2). \end{align*} While propagating the node features through a pyramidal network complements the baseline backbone structure by enriching the aggregated feature information, it is still limited by the one-way information flow. To address this issue, we configure a bidirectional graph propagation network that creates a two-way information flow to aggregate information in both the deep-to-shallow and the shallow-to-deep direction. In addition to the bidirectional passes, a skip connection that travels from the input node to the output node is incorporated for each resolution. In this way, feature information is more densely fused, and the diminishing-gradient issue can be avoided to an extent. Resultant features from the fusion network at different resolutions contribute unequally to the final output feature, and naively concatenating the features together requires unnecessarily heavy memory usage. Instead, an additional set of weights is introduced when aggregating the multi-resolution output features into a single feature vector. The intuition is to have the network itself learn the importance of each resolution during training. This approach not only avoids manual assignment of importance based on potentially biased or inaccurate human knowledge but also leads to more economical memory usage compared to naive concatenation. To give a sample formulation of the final bidirectional GraphConv network with learnable weights for different resolutions, the aggregation at level 3 is shown as follows: \begin{align*} & H^{mid}_3 = GraphConv(\frac{w_1 \cdot H^{in}_3 + w_2 \cdot H^{in}_4}{w_1 + w_2 + \epsilon}),\\ & H^{out}_3 = GraphConv(\frac{w_1' \cdot H^{in}_3 + w_2' \cdot H^{mid}_3 + w_3'\cdot H^{out}_2}{w_1' + w_2' + w_3' + \epsilon}), \end{align*} where $H^{mid}_3$ indicates the middle feature-passing node at the third level on the top-to-bottom pathway. Features propagated from the channels for the other resolutions are constructed in a similar fashion. A sketch of such a weighted fusion node is given below. \\ We have found through experiments that graph neural networks are prone to information assimilation, which leads to ineffective feature-fusion at deeper convolution levels. This phenomenon is conceptually analogous to the thermodynamic equilibrium state where the temperature gradient between two objects diminishes as the energy from one object is transferred to another. To address this problem, we send the output features from both the backbone network and the pyramidal fusion network to the segmentation module.
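The weighted fusion node referenced above can be sketched as follows, again in a PyTorch Geometric style and under our own assumptions; keeping the learnable fusion weights positive with a ReLU mirrors the $w_i/(\sum_j w_j + \epsilon)$ normalization but is our choice, not a detail specified in the text.
\begin{verbatim}
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class WeightedFusionNode(nn.Module):
    """Fuses same-sized feature maps with learned, normalized weights."""
    def __init__(self, channels, n_inputs, eps=1e-4):
        super().__init__()
        self.w = nn.Parameter(torch.ones(n_inputs))  # one weight per path
        self.conv = GCNConv(channels, channels)
        self.eps = eps

    def forward(self, feats, edge_index):
        w = torch.relu(self.w)            # keep fusion weights positive
        fused = sum(wi * f for wi, f in zip(w, feats)) / (w.sum() + self.eps)
        return self.conv(fused, edge_index)
\end{verbatim}
For instance, the level-3 output node above would be instantiated with \texttt{n\_inputs=3} and fed $(H^{in}_3, H^{mid}_3, H^{out}_2)$.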
\section{Results} \label{sec:result} In this section, we evaluate MuGNet on various 3D pointcloud segmentation benchmarks, including the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset \cite{2017arXiv170201105A} and the Virtual KITTI (vKITTI) dataset \cite{DBLP:journals/corr/GaidonWCV16}. The network performance is evaluated by the mean Intersection-over-Union (mIOU) and Overall Accuracy (OA) of all object classes specific to each dataset. \subsection{Segmentation Results on S3DIS} \label{sec:benchmark} The Stanford Large-Scale 3D Indoor Spaces Dataset provides a comprehensive, clean collection of indoor pointcloud scans. There are in total over 695 million points and indoor scans of 270 rooms \cite{2017arXiv170201105A}. Each scan is a dense pointcloud of a medium-sized room ($ \sim 20\times15\times5$ meters). We use the standard $6$-fold cross-validation approach in our experiments. \begin{figure}[htbp] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{images/s3dis.png} \caption{Qualitative semantic segmentation results of MuGNet on the validation set of S3DIS \cite{2017arXiv170201105A}.} \label{s3dis} \end{figure} Table \ref{s3dis_6fold} shows the network's performance averaged over all six areas. Our network has achieved better performance than state-of-the-art graph-based methods. When compared to the latest state-of-the-art non-graph-based network RandLA-Net \cite{hu2019randla}, our network also achieves comparable results, with 0.5\% higher overall accuracy and only 0.2\% lower mIOU. It should be noted that our network has been validated to consistently arrive at the reported result across the five times that the network was trained. In contrast, some of the baselines tend to produce inconsistent results due to their random sampling operations. \begin{table}[!htbp] \setlength{\tabcolsep}{2pt} \centering \captionsetup{justification=centering} \scriptsize \begin{tabular}{c|cc|ccccccccccccc} \toprule \textbf{} & \textbf{OA} & \textbf{mIOU} & \textbf{ceiling} & \textbf{floor} & \textbf{wall} & \textbf{beam} & \textbf{column} & \textbf{window} & \textbf{door} & \textbf{chair} & \textbf{table} & \textbf{bookcase} & \textbf{sofa} & \textbf{board} & \textbf{clutter}\\ \midrule \textbf{PointNet \cite{pointnet}} & 78.5 & 47.6 & 88 & 88.7 & 69.3 & 42.4 & 23.1 & 47.5 & 51.6 & 42 & 54.1 & 38.2 & 9.6 & 29.4 & 35.2\\ \textbf{Engelmann et al. \cite{eng}} & 81.1 & 49.7 & 90.3 & 92.1 & 67.9 & 44.7 & 24.2 & 52.3 & 51.2 & 47.4 & 58.1 & 39 & 6.9 & 30.0 & 41.9\\ \textbf{DGCNN \cite{dgcnn}} & 84.1 & 56.1 & - & - & - & - & - & - & - & - & - & - & - & - & -\\ \textbf{SPG \cite{DBLP:journals/corr/abs-1711-09869}} & 85.5 & 62.1 & 89.9 & 95.1 & 76.4 & 62.8 & 47.1 & 55.3 & 68.4 & 73.5 & 69.2 & 63.2 & 45.9 & 8.7 & 52.9\\ \textbf{RandLA-Net \cite{hu2019randla}} & 88.0 & \textbf{70.0} & - & - & - & - & - & - & - & - & - & - & - & - & -\\ \textbf{MuGNet (ours)} & \textbf{88.5} & 69.8 & \textbf{92.0} & \textbf{95.7} & \textbf{82.5} & \textbf{64.4} & \textbf{60.1} & \textbf{60.7} & \textbf{69.7} & \textbf{82.6} & \textbf{70.3} & \textbf{64.4} & \textbf{52.1} & \textbf{52.8} & \textbf{60.6}\\ \bottomrule \end{tabular} \vspace{0.5em} \caption{Semantic segmentation results measured by overall accuracy, mean intersection over union, and intersection over union of each class for all six areas in the S3DIS dataset \cite{2017arXiv170201105A}.} \label{s3dis_6fold} \end{table} \begin{table}[!htbp] \setlength{\tabcolsep}{2pt} \centering \captionsetup{justification=centering} \scriptsize \begin{tabular}{c|cc|ccccccccccccc} \toprule \textbf{ } & \textbf{OA} & \textbf{mIOU} & \textbf{ceiling} & \textbf{floor} & \textbf{wall} & \textbf{beam} & \textbf{column} & \textbf{window} & \textbf{door} & \textbf{chair} & \textbf{table} & \textbf{bookcase} & \textbf{sofa} & \textbf{board} & \textbf{clutter}\\ \midrule \textbf{PointNet \cite{pointnet}} & - & 41.1 & 88.8 & 97.3 & 69.8 & 0.05 & 3.9 & 46.3 & 10.8 & 52.6 & 58.9 & 40.3 & 5.8 & 26.4 & 33.2\\ \textbf{Engelmann et al.
\cite{eng}} & - & 48.9 & 90.1 & 96.0 & 69.9 & 0.0 & 18.4 & 38.3 & 23.1 & 75.9 & 70.4 & 58.4 & 40.9 & 13.0 & 41.6\\ \textbf{SPG \cite{DBLP:journals/corr/abs-1711-09869}} & 86.4 & 58.0 & 89.3 & 96.9 & 78.1 & 0.0 & 42.8 & 48.9 & 61.6 & 84.7 & 75.4 & 69.8 & 52.6 & 2.1 & 52.2\\ \textbf{GACNet \cite{Wang2019_GACNet}} & 87.8 & 62.8 & \textbf{92.3} & \textbf{98.3} & 81.9 & 0.0 & 20.3 & \textbf{59.1} & 40.8 & 78.5 & \textbf{85.8} & 61.7 & \textbf{70.7} & \textbf{74.7} & 52.8\\ \textbf{MuGNet (ours)} & \textbf{88.1} & \textbf{63.5} & 91.0 & 96.9 & \textbf{83.2} & \textbf{5.0} & \textbf{37.0} & 54.3 & \textbf{62.6} & \textbf{85.3} & 76.4 & \textbf{70.1} & 55.2 & 55.2 & \textbf{53.4}\\ \bottomrule \end{tabular} \vspace{0.5em} \caption{Semantic segmentation results measured by overall accuracy, mean intersection over union, and intersection over union of each class in Area $5$ of the S3DIS dataset \cite{2017arXiv170201105A}.} \label{s3dis_a5} \end{table} We would also like to investigate our network's performance compared to GACNet \cite{Wang2019_GACNet}, a state-of-the-art graph convolution network for pointcloud semantic segmentation. The results for Area $5$ are compared since this is the only area reported in that work. As shown in Table \ref{s3dis_a5}, our network still achieves better performance than the rest of the baselines for the segmentation task on Area $5$. Moreover, most of the compared baseline networks struggle with handling a single pointcloud scan at once. They require drastic down-sampling and dividing of a large pointcloud room into small $1\times 1 \times 1$ blocks with sparse points. Our network, on the other hand, can effectively process multiple rooms in a single pass. \subsection{Segmentation Results on Virtual KITTI} \label{sec:vKITTI} \begin{figure}[htbp] \centering \captionsetup{justification=centering} \includegraphics[width=\linewidth]{images/vkitti_corl.png} \caption{Qualitative semantic segmentation results of MuGNet on the validation set of vKITTI \cite{DBLP:journals/corr/GaidonWCV16}.} \label{vKITTI} \end{figure} The Virtual KITTI (vKITTI) dataset \cite{DBLP:journals/corr/GaidonWCV16} contains simulated LiDAR data acquired through $50$ annotated $1242 \times 375$ resolution monocular videos generated from five different worlds in an urban setting. Each of the scans in the dataset is a dense pointcloud with $13$ semantic classes. The annotated 2D depth images are projected into 3D space to produce simulated LiDAR scans that resemble pointclouds obtained by real-life LiDAR scanners. For testing and training, the scans are separated into $6$ non-overlapping sets. We obtain the final evaluation following the $6$-fold validation protocol. \begin{table}[!htbp] \setlength{\tabcolsep}{2pt} \centering \captionsetup{justification=centering} \scriptsize \begin{tabular}{c|cc|ccccccccccccc} \toprule \textbf{} & \textbf{OA} & \textbf{mIOU} & \textbf{terrain} & \textbf{tree} & \textbf{vegetation} & \textbf{building} & \textbf{road} & \textbf{g-rail*} & \textbf{t-sign*} & \textbf{t-light*} & \textbf{pole} & \textbf{misc} & \textbf{truck} & \textbf{car} & \textbf{van}\\ \midrule \textbf{PointNet \cite{pointnet}} & 63.3 & 17.9 & 32.9 & 76.4 & 11.9 & 11.7 & 49.9 & 3.6 & 2.8 & 3.7 & 3.5 & 0.7 & 1.5 & 25.1 & 3.4\\ \textbf{Engelmann et al.
\cite{eng}} & 73.2 & 26.4 & 38.9 & 87.1 & 14.6 & 44.0 & 58.4 & 12.4 & 9.4 & 10.6 & 5.3 & 2.2 & 3.6 & 43.0 & 13.3\\ \textbf{MuGNet (ours)} & \textbf{85.1} & \textbf{50.0} & \textbf{70.0} & \textbf{88.6} & \textbf{35.2} & \textbf{63.0} & \textbf{80.2} & \textbf{40.8} & \textbf{32.0} & \textbf{56.3} & \textbf{23.4} & \textbf{3.92} & \textbf{7.1} & \textbf{84.3} & \textbf{65.4}\\ \bottomrule \end{tabular} \vspace{0.5em} \caption{Overall accuracy, mean intersection over union, and intersection over union of each class for all six splits in the vKITTI dataset \cite{DBLP:journals/corr/GaidonWCV16}. *"t-" is short for traffic; "g-" is short for guard.} \label{vkitti_6fold} \end{table} Table \ref{vkitti_6fold} shows a quantitative comparison of MuGNet against other benchmark methods for the dataset, and Figure \ref{vKITTI} presents MuGNet's qualitative segmentation results. The reported model is trained with the XYZ coordinates without RGB values to investigate our model's ability to learn geometric features. For comparison purposes, the benchmarked models are also trained with the same configuration. MuGNet has been demonstrated to exceed previous approaches in all evaluation metrics. It should be noted that during inference MuGNet can fit all $15$ scans in each of the six splits into a single GPU simultaneously, making it more viable for inspection tasks where computation resources are limited. \subsection{Efficiency Analysis} \label{sec:efficiency} The scalability of MuGNet is investigated by increasing the number of rooms to be processed in a single shot with the Stanford Large-Scale 3D Indoor Spaces (S3DIS) dataset \cite{2017arXiv170201105A}. Table \ref{table:efficiency_table} shows the number of rooms for inference against the respective inference time and GPU memory consumption, and Figure \ref{fig:efficiency_plot} presents the corresponding plot of the trend. With a single NVIDIA GTX $1080$Ti GPU, up to $45$ rooms (totaling $38,726,591$ points) can be accommodated within the available computation resources. We have observed that as the number of rooms increases, the growth in memory usage slows down and resembles pseudo-logistic growth. The inference time also increases at only a relatively low rate. The growth trends indicate that the model scales nicely with an increased number of scans to be processed. The model would, therefore, be desirable for application scenarios where a multitude of dense scans need to be processed in few passes. \begin{figure}[!htb] \minipage{0.6\textwidth} \scriptsize \begin{tabular}{ p{.16\textwidth}| p{.33\textwidth}| p{.33\textwidth} } \toprule \textbf{\# of Rooms} & \textbf{Mean Inference Time (sec)} & \textbf{GPU Memory Usage (GB)} \\ \midrule \textbf{1} & 0.5056 & 1.042\\ \textbf{5} & 0.5396 & 4.073\\ \textbf{12} & 0.6124 & 6.684\\ \textbf{34} & 0.7077 & 10.365\\ \textbf{45} & 0.8332 & 10.617\\ \bottomrule \end{tabular} \vspace{14pt} \captionof{table}{Efficiency analysis on the S3DIS dataset \cite{2017arXiv170201105A}.}\label{table:efficiency_table} \endminipage\hfill \minipage{0.4\textwidth} \vspace{-4pt} \includegraphics[width=\linewidth]{images/efficiency_plot.png} \captionof{figure}{Corresponding efficiency plot.}\label{fig:efficiency_plot} \endminipage\hfill \end{figure} \subsection{Ablation Study} \label{sec:ablation} In order to further investigate the behavior of our network, we conduct the following ablation studies. All ablated networks are trained and tested with the same $6$-fold validation setup as described in Section 4.1.
\hspace{5mm} \textbf{(1$\sim$3) Removing Bidirectional GraphConv:} We study the backbone's ability to propagate useful information on its own. For conventional convolutions, it is well established that deeper networks often produce higher-quality features that are more representative of the data distribution and in turn improve network performance. If this phenomenon carries over to graph convolutions, then feeding features from deeper layers of the backbone into the Bidirectional GraphConv should in theory yield more accurate segmentation. We therefore remove the bidirectional network entirely and pass the final backbone features directly to the segmentation network, varying the number of backbone blocks among $7$, $14$, and $28$ to probe the capacity of the backbone alone. Across the three depths tested, a deeper backbone generally leads to better performance. \hspace{5mm} \textbf{(4) Stacking multiple Bidirectional GraphConvs:} The Bidirectional GraphConv network is structurally capable of stacking multiple copies of itself in parallel. Previous works in 2D image processing have shown a positive correlation between model performance and the number of stacked networks. However, graph convolutions differ fundamentally from conventional convolutions on 2D images: the filtering operation is replaced by a message-passing mechanism. Naively adopting the stacking structure from 2D image processing therefore leads to sub-optimal results, as Table \ref{experiments} confirms. \hspace{5mm} \textbf{(5) Increasing the depth of the backbone:} Previous studies on graph neural networks have warned of a loss of expressive power as network complexity increases \cite{express, power}. It is therefore crucial to investigate the impact of backbone depth on the Bidirectional GraphConv network. When the backbone is deepened to $14$ layers and the outputs of its last $4$ layers are fed into the Bidirectional GraphConv, the resulting segmentation is worse than with a $4$-layer backbone. Moreover, GPU memory usage increases by roughly one third relative to our final network. Despite the incorporation of short-term and long-term residual connections, graph convolutions remain prone to information assimilation (over-smoothing) among nodes, so a deeper backbone network does not necessarily extract higher-quality features. \begin{table}[!htbp] \setlength{\tabcolsep}{2pt} \centering \captionsetup{justification=centering} \scriptsize \begin{tabular}{p{25em} | p{3.5em}} \toprule \textbf{ } & \textbf{mIOU\%}\\ \midrule \textbf{(1) Backbone with 7 layers} & 59.0\\ \textbf{(2) Backbone with 14 layers} & 53.4\\ \textbf{(3) Backbone with 28 layers} & 64.5\\ \textbf{(4) Stacking 2 bidirectional GraphConvs} & 66.3\\ \textbf{(5) MuGNet with 14-layer backbone} & 63.7\\ \textbf{(6) MuGNet Baseline} & \textbf{69.8}\\ \bottomrule \end{tabular} \vspace{0.5em} \caption{The $6$-fold validated mean IOU of ablated networks tested on the S3DIS dataset \cite{2017arXiv170201105A}.\\ The results are compared against the final MuGNet configuration.} \label{experiments} \end{table} \vspace{-2em} \section{Conclusion} \label{sec:conclusion} In this paper, we have demonstrated that it is possible to process a large quantity of dense pointcloud scans by reformulating the learning objective to utilize a bidirectional graph convolutional network.
Most current approaches rely on drastic downsampling or sliding-window operations, both of which degrade the segmentation of dense pointclouds. In contrast, we transform the pointclouds into graph representations, which radically reduces the computational demand and enables the processing of up to forty-five room scans with a single commercially available GPU. A multi-resolution cluster embedding network is introduced to provide high-quality representations of the point clusters. Finally, a bidirectional graph convolution network aggregates useful features from different graph resolutions. Extensive experiments on benchmark datasets demonstrate our network's ability to process a multitude of pointclouds at once while achieving state-of-the-art accuracy for pointcloud semantic segmentation. Our model is therefore desirable for application scenarios where a multitude of pointcloud scans need to be processed at once and detailed pointcloud information needs to be retained in the segmented output. Future work will investigate potential industrial applications of the model. \clearpage
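To make the graph-construction step concrete, the following Python sketch shows one simple way to coarsen a pointcloud into a cluster graph (voxel clustering followed by a $k$-nearest-neighbour graph over the cluster centroids). It illustrates the general idea only and is not the exact MuGNet pipeline; the voxel size and $k$ are illustrative placeholders.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def cluster_graph(points, voxel=0.2, k=8):
    # Assign each point to a voxel cell (illustrative clustering).
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    n = inv.max() + 1
    # Average the points in each cluster to obtain centroids.
    counts = np.bincount(inv, minlength=n).astype(float)
    centroids = np.stack(
        [np.bincount(inv, weights=points[:, d], minlength=n) / counts
         for d in range(3)], axis=1)
    # Connect each centroid to its k nearest neighbours.
    _, nbrs = cKDTree(centroids).query(centroids, k=min(k + 1, n))
    edges = [(i, j) for i in range(n) for j in nbrs[i][1:]]
    return centroids, np.asarray(edges)
\end{verbatim}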
\section{Introduction} \label{sec:intro} Meaningful information is often hidden in subtle interactions and associations within data. Naturally, graphs are well-suited to representing the connectivity structures that emerge from many real-world social, biological, and physical phenomena. Often, to gain a deeper understanding of a graph's topology, it is useful to summarize a graph through specific representative characteristics that capture its structure. These summarizations often abstract away some graph details and no longer represent any single graph, but rather an entire set of graphs sharing similar characteristics. The faithfulness of a graph model on an input graph is usually tested by asking a model to make predictions about the graph's evolution or by generating a new graph using some production scheme. In the generative case, if the model faithfully captures the source graph's structure, the subsequently generated graphs should resemble the original according to some similarity criteria. These graph models come in many varieties. For example, the Erdős–Rényi model relies only on external parameters---typically a count of nodes and edges---to determine how it will randomly connect nodes, making it incapable of truly \textit{learning} any deeper topological structure~\citep{erdos1960evolution}. More recent graph models, like the Chung-Lu model, improve on the Erdős–Rényi model by combining specific extrinsic parameters with information learned directly from the input graph~\citep{chung2002average}. Then, there are those models, including grammar-based schemes and graph neural networks, that are parameterized solely by the topology of the source graph. These latter two classes of graph models seek a more comprehensive approach by imbuing their production strategies with salient topological information extracted directly from the source graph. \begin{figure}[tb!] \centering \input{figures/inf-mirror-framework} \caption{The Infinity Mirror method iteratively fits a graph model $\mathcal{M}$ on $G_i$, uses the fit parameters $\Theta_i$ to generate a new graph $G_{i + 1}$, and repeats with $G_{i + 1}$. Model biases and errors are quickly revealed.} \label{fig:framework} \end{figure} Just as statistical biases exist in classical machine learning models, any graph model will make implicit assumptions and value judgments that influence the learning and generation process. Of course, this is not necessarily undesirable; principled assumptions are often necessary for decision-making. Certain aspects of graphs may be more important based on the model wielder's focus and intentions. Indeed, models like Chung-Lu---which learns a degree distribution---and the various stochastic block models, which capture clustering information, are clear about which graph properties they preserve and ignore. However, what assumptions do graph neural networks make when learning parameters from a source graph? What implicit biases drive a grammar-based model to extract one production rule over another? Often, these questions are difficult to answer, and these hidden inclinations may not be easily revealed using traditional methodologies. In this paper, we present the Infinity Mirror test: a framework for revealing and evaluating statistical biases present in graph models. The Infinity Mirror test takes its name from the children's toy, which contains two mirrors that reflect light interminably between them.
As illustrated in Fig.~\ref{fig:framework}, this framework operates by iteratively applying a particular graph model onto a graph that it previously generated, constructing a sequence of generated graphs starting from some source graph. Just as a JPEG image reveals compression artifacts when repeatedly re-compressed, a graph will degenerate when the same model is repeatedly fit on its own outputs. This sequence of generated graphs can be analyzed using a variety of graph similarity metrics to quantify how the generated graphs diverge from the source graph. If the chain is allowed to grow long enough, this repetition is likely to cause the generated graphs to deviate from the source in a way that exposes unknown statistical biases hidden in the model. \section{Preliminaries} \label{sec:prelim} A graph $G = (V,E)$ is defined by a finite set of nodes $V$ and a set of edges $E$. We denote a node by $v_i \in V$; an edge between $v_i$ and $v_j$ is given by $e_{ij} = (v_i, v_j) \in E$. For convenience, we let $n = |V|$ and $m = |E|$. It is sometimes desirable to represent the graph as an $n \times n$ adjacency matrix $A$, where $A_{ij} = 1$ if $e_{ij} \in E$ and $A_{ij} = 0$ otherwise. We take the convention that all graphs are undirected, static, and unlabeled unless otherwise indicated. \subsection{Graph Models} A graph model $\mathcal{M}$ is any process or algorithm by which a set of salient features $\Theta$ can be extracted from a graph $G$. In prediction scenarios, the performance of $\mathcal{M}$ can be assessed using standard precision and recall metrics on held-out data. If $\mathcal{M}$ also describes how new graphs can be constructed from $\Theta$, its performance can be analyzed by comparing the generated graphs to $G$ using various measures of graph similarity. Early graph models like the random graph of Erd\H{o}s and R\'{e}nyi~\citep{erdos1960evolution}, the small world network of Watts and Strogatz~\citep{watts1998collective}, the scale-free graph of Albert and Barab\'{a}si~\citep{barabasi1999emergence} and its variants~\citep{bianconi2001competition,ravasz2003hierarchical}, or the more recent LFR benchmark graph generators~\citep{lancichinetti2008benchmark} generate graphs by applying hand-tuned parameters to some underlying generative process. This exercise of fine-tuning the model parameters so that the generated graphs are topologically faithful to an input graph is taxing and often unsuccessful. In response, graph models were developed to automatically learn the topological properties of the source graph for more faithful generation. One of the first of this new generation of graph models was the Chung-Lu/configuration model~\citep{chung2002average}. It generated graphs by randomly rewiring edges based on the degree sequence of the source graph. Even though the degree sequence of the generated graph exactly matched that of the original, the configuration model often failed to incorporate higher-order topological structures like triangles, cliques, cores, and communities observed in the original graph. Since its introduction, more comprehensive models have attempted to fix these flaws by proposing improvements like incorporating assortativity and clustering~\citep{pfeiffer2012fast,mussmann2014assortativity,mussmann2015incorporating,kolda2014scalable}. For example, the Block Two-level Erd\H{o}s-R\'{e}nyi (BTER) model interprets its input as a scale-free collection of dense Erd\H{o}s-R\'{e}nyi graphs~\citep{kolda2014scalable}.
BTER respects two properties of the original graph: local clustering and degree sequence. However, BTER sometimes fails to capture higher-order structures and degenerates in graphs with homogeneous degree sequences (\textit{e.g.}, grids). Stochastic Block Models (SBMs) primarily consider communities in the source graph and then create a block matrix that encodes communities as block-to-block connectivity patterns~\citep{karrer2011stochastic, funke2019stochastic}. To generate a graph, the SBM creates an Erd\H{o}s-R\'{e}nyi graph inside each block and random bipartite graphs across communities. Since their introduction, SBMs have been extended to handle edge-weighted~\citep{aicher2013adapting}, bipartite~\citep{larremore2014efficiently}, temporal~\citep{peixoto2015inferring}, and hierarchical networks~\citep{peixoto2014hierarchical}. Likewise, Exponential Random Graph Models (ERGMs)~\citep{robins2007introduction}, Kronecker graph models~\citep{leskovec2010kronecker,leskovec2007scalable,gleich2012moment}, and graph grammar models~\citep{aguinaga2018learning, sikdar2019modeling, hibshman2019towards} are able to generate graphs that are more-or-less faithful to the source graph. Recent advances in graph neural networks have produced graph generators based on recurrent neural networks~\citep{you2018graphrnn}, variational autoencoders~\citep{salha2020simple, kipf2016variational, li2018learning}, transformers~\citep{yun2019graph}, and generative adversarial networks~\citep{bojchevski2018netgan}, each of which has advantages and disadvantages. Graph autoencoders (GraphAE) learn an embedding of the input graph's nodes via message-passing and then construct new graphs by taking inner products of the embedding vectors and passing them through an activation function (\textit{e.g.}, sigmoid). NetGAN trains a Generative Adversarial Network (GAN) to \textit{generate} and \textit{discriminate} between real and synthetic random walks over the input graph. After training, the model builds graphs from random walks produced by the generator. GraphRNN, a kind of Recurrent Neural Network (RNN), decomposes the process of graph generation into two separate RNNs---one for generating a sequence of nodes, and the other for the sequence of edges. \begin{figure}[tb!] \centering \includegraphics[width=0.99\linewidth,bb=0in 0in 13.13in 21.39in]{figures/fig2-pdf.pdf} \vspace{.3cm} \caption{(A) Example graph with three distinct communities. (B) Graphs $G_1$, $G_5$, and $G_{10}$ are generated using the Infinity Mirror framework described in Fig.~\ref{fig:framework} for various models. Chung-Lu and Kronecker immediately lose the community structure of the source graph. Kronecker progressively makes the graph more dense. CNRG and SBM are able to retain the input graph's community structure, albeit with some appreciable deterioration. NetGAN is able to capture the topology of the input graph but fails to generate graphs beyond the first iteration.} \label{fig:toy-example} \vspace{-.2cm} \end{figure} \subsection{Graph Comparison Metrics} Graph models are typically evaluated by their ability to predict nodes and edges. Although the prediction task is important, it does not measure a model's ability to capture a graph's topological structure. Another evaluation approach involves comparing a source graph $G$ to a new graph $\hat{G}$ generated from $\mathcal{M}$ using a measure of graph similarity or divergence. Here, the choice of metric influences the differences and biases that are exposed within the model.
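Such comparisons are straightforward to compute in practice. As one concrete example, the sketch below computes the Jensen-Shannon divergence between the degree distributions of two graphs, a first-order metric used throughout the sequel; the shared integer binning is one of several reasonable choices, and the graphs are assumed to follow a \texttt{networkx}-style interface.
\begin{verbatim}
import numpy as np
from scipy.spatial.distance import jensenshannon

def degree_js(G, H):
    # G, H: networkx-style graphs (assumption).
    dg = [d for _, d in G.degree()]
    dh = [d for _, d in H.degree()]
    top = max(max(dg), max(dh)) + 1
    p, _ = np.histogram(dg, bins=range(top + 1), density=True)
    q, _ = np.histogram(dh, bins=range(top + 1), density=True)
    # jensenshannon returns the JS distance; square it to get
    # the divergence (in bits, with base 2).
    return jensenshannon(p, q, base=2.0) ** 2
\end{verbatim}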
The simplest graph comparison metrics are computed by comparing distributions of first-order graph properties like the graph's degree and PageRank distributions. In addition to visually inspecting these distributions, graph modelers also employ quantitative metrics like the Jensen-Shannon (JS) divergence. More advanced metrics compare two graphs by examining properties of their adjacency matrices. There are two conventional approaches in this space: (1) known node-correspondence metrics, which assume every node in $G$ has a known corresponding node in $\hat{G}$, and (2) unknown node-correspondence metrics, where there is no assumed node correspondence. \textsc{DeltaCON} is an example of a known node-correspondence metric that compares node affinities between $G$ and $\hat{G}$~\citep{koutra2013deltacon,koutra2016deltacon}. Affinities are measured using belief propagation, which associates every node with a vector measuring its influence on its $k$-hop neighbors. A score is produced by comparing the vectors of corresponding node affinities. Unfortunately, most graph models do not generate graphs with a known node-correspondence. Although best-guess node correspondence can be predicted if needed, known node-correspondence metrics like \textsc{DeltaCON} and the cut distance~\citep{liu2018cut} are not well-suited for the present work. Fortunately, there also exist many unknown node-correspondence metrics. Examples include Portrait divergence, $\lambda$-distance, and NetLSD~\citep{bagrow2019information,wilson2008study,tsitsulin2018netlsd}. We can also directly compare the graphlet counts~\citep{ahmed2015efficient} between $G$ and $\hat{G}$, or compute their graphlet correlation distance (GCD)~\citep{prvzulj2007biological} by counting node orbits. The Network Laplacian Spectral Descriptor (NetLSD) is a permutation-invariant graph comparison method that simulates a heat diffusion process on each graph. NetLSD produces a high-dimensional vector that corresponds to the sum of the heat in discretized timesteps over the simulation, and the Euclidean distance between these vectors can be used to compare two or more graphs. Finally, loss functions from recent graph neural network models compare $G$ to $\hat{G}$ by collating derived graph features like power-law exponents, diameters, and node centrality scores~\citep{bojchevski2018netgan}. In each case, the features extracted by a model and the generative process by which predictions are made carry inherent biases, which may elude even the most comprehensive performance or comparison metric. \section{Infinity Mirror Test} \label{sec:method} The Infinity Mirror~\citep{aguinaga2016infinity} test seeks to expose a graph model's implicit biases by iteratively computing hereditary chains of graphs generated by the model. For a graph model $\mathcal{M}$ and source graph $G_0$, we define a chain $\langle G_1, G_2, \dots, G_\ell \rangle$ of length $\ell$ by computing: $$G_{i + 1} = \mathcal{M}(G_i, \Theta_i)$$ \noindent at each iteration $i$ of the chain, where $\Theta_i$ denotes the features extracted from $G_i$ by the model $\mathcal{M}$. Fig.~\ref{fig:framework} illustrates this iterative fit-and-generate process. Because each subsequent graph is a lossy reflection of a previous graph, which itself may have been a lossy reflection of one prior, we expect that this chain of graphs will diverge from the source graph as the chain grows longer.
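In pseudocode, the test is a short loop; the \texttt{fit}/\texttt{generate} interface below is a hypothetical abstraction over the models studied here, not the API of any particular implementation.
\begin{verbatim}
def infinity_mirror(Model, G0, length=10):
    # Model is assumed to expose fit(G) -> theta and
    # generate(theta) -> graph (hypothetical interface).
    chain, G = [], G0
    for _ in range(length):
        model = Model()               # refit from scratch each time
        theta = model.fit(G)          # Theta_i extracted from G_i
        G = model.generate(theta)     # G_{i+1} = M(G_i, Theta_i)
        chain.append(G)
    return chain
\end{verbatim}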
By inspecting the divergence along the chain of generated graphs, patterns of model error or bias should become easier to detect. This chain of graphs can be evaluated from several different perspectives. The most straightforward way to measure the error in a chain is by comparing the initial graph $G_0$ to the last graph $G_\ell$ using a graph similarity metric. This provides insight into the total degradation resulting from model-specific errors and biases. A natural extension of this idea is to compare consecutive graphs in the chain. Let $\mu$ represent a specific graph comparison metric and $\Delta_i = \mu(G_{i - 1}, G_i)$, then each chain will provide a sequence $\mathbf{\Delta} = \langle \Delta_1, \dots \Delta_\ell \rangle$. This vector of distances can then be analyzed to extract information from one or multiple chains; for example, $\sum_i{\Delta_i}$ can be used as a proxy for the accumulated error measured by $\mu$ under repeated applications of $\mathcal{M}$. One might also consider accumulated error in $\mathbf{\Delta}$ as a kind of anti-convergence process, wherein the quality of $\mathcal{M}$ can be measured by stability in the chain. Analogously, analyzing the features $\Theta_i$ that a model learns when generating a chain might shine a light on the inner workings of a model, which might not be reflected by merely comparing output graphs to each other. However, this would require a more tailored, heuristic approach than we can provide in the present work as different generative models learn different sets of features that are often incomparable. \para{Kronecker Graphs.} Next, we apply the Infinity Mirror test to the Kronecker graph model, which is known to produce graphs that approximate a log-normal degree distribution (with a heavy tail)~\citep{seshadhri2013depth}. Although this property of the Kronecker graph model is often desirable, it is no doubt a bias that is encoded into the model. If we provide a source graph that does not follow a log-normal degree sequence, we expect the degree distributions along the generated chain to diverge from that of the source graph. \begin{figure}[t!] \vspace*{5em} \centering \input{figures/kron_transition} \vspace*{0.5em} \input{figures/example_kron_dist} \vspace*{0.75em} \input{figures/example_kron_overtime} \caption{ (A) A generic $2\times 2$ Kronecker initiator matrix $\mathcal{I} = [a\, b; c\, d], 0 \le a,b,c,d < 1$ can be used to generate an adjacency matrix of size $2^n \times 2^n$ by using Kronecker products. (B) State transition diagram corresponding to the initiator matrix $\mathcal{I}$ in (A) visually representing the recursive growth process of Kronecker graphs. (C) Degree distribution on the Flights graph illustrated over seven fit-and-generate iterations of the Kronecker model. The Kronecker initiator matrix from \texttt{KronFit} is positioned over the individual ridge plots. (D) Jensen-Shannon divergence of the $i^\textrm{th}$ graph compared to source graph $JS(G_0,G_i)$ (red line) and of the $i^\textrm{th}$ graph compared to the $i-1^\textrm{th}$ graph $JS(G_{i-1},G_i)$ (green line). This example shows hidden bias in the Kronecker graph model: the degree distribution tends to flatten and oscillate after a few iterations. } \label{fig:kronexample} \end{figure} In the Kronecker model, graphs are modeled as a repeated Kronecker product of a $k \times k$ initiator matrix $\mathcal{I}$ (usually, $k = 2$) as shown in Fig.~\ref{fig:kronexample}(A).
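Concretely, generation from a fitted initiator amounts to taking repeated Kronecker powers of $\mathcal{I}$ and sampling each edge independently. The dense sketch below makes this explicit; it is illustrative only and feasible only for small $n$, since tools like \texttt{KronGen} avoid materializing the full probability matrix.
\begin{verbatim}
import numpy as np

def kronecker_sample(I, n, seed=0):
    # Edge probabilities: the n-th Kronecker power of initiator I.
    P = I.copy()
    for _ in range(n - 1):
        P = np.kron(P, I)               # k^n x k^n probabilities
    rng = np.random.default_rng(seed)
    A = rng.random(P.shape) < P         # sample each potential edge
    A = np.triu(A, 1)                   # drop self-loops and ...
    return (A | A.T).astype(np.uint8)   # ... symmetrize (undirected)
\end{verbatim}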
Every entry of $\mathcal{I}$ can be thought of as a transition probability between two states, as seen in Fig.~\ref{fig:kronexample}(B). The \texttt{KronFit}\footnote{\texttt{KronFit} and \texttt{KronGen} can be obtained from Stanford's SNAP library} utility can be used to fit the initiator matrix to an input graph, while the \texttt{KronGen} utility can be used to generate new graphs from an initiator matrix, by performing repeated Kronecker products of the initiator matrix. As a result, it can only generate graphs whose number of nodes is a power of $k$. For example, consider the plots in Fig.~\ref{fig:kronexample}(C) which illustrate the degree distributions of a chain of graphs obtained by performing the Infinity Mirror test on a graph of airline flights\footnote{\url{https://openflights.org/data.html}} using the Kronecker model. The subplot labeled $G_0$ shows the degree distribution of the original Flights graph. We then fed the Flights graph to \texttt{KronFit} and generated a graph from the learned initiator matrix (\texttt{KronGen}) to create a new graph $G_1$~\citep{leskovec2010kronecker}. The plot labeled $G_1$ illustrates the degree distribution of this new graph. The degree distributions of $G_0$ and $G_1$ look visually similar. The next step is to compute some distribution similarity measure to analytically compare the two distributions (we use the Jensen-Shannon (JS) divergence), concluding that the graph model has some error $\epsilon$. Rather than stopping here, the Infinity Mirror methodology continues this fit-and-generate process. So, we input $G_1$ into the Kronecker model and generated a new graph $G_2$. Its degree distribution is illustrated in the plot labeled $G_2$. Likewise, the plots labeled $G_3,G_4,\ldots, G_7$ illustrate the degree distribution of subsequent iterations. A visual inspection of these plots shows that the degree sequence of the Kronecker model degenerates into an ever-spreading oscillating distribution. We also note that the entries in the Kronecker initiator matrices monotonically increase in magnitude across generations, signifying the increase in edge density~\citep{leskovec2010kronecker}. Fig.~\ref{fig:kronexample}(D) provides an analytical view of the visual plots on top as the number of iterations continues to $10$. The cumulative JS divergence compares the degree distribution of $G_0$ to $G_i$, thereby accounting for the error over multiple iterations; conversely, the iterative JS divergence compares the degree distribution of $G_{i-1}$ to $G_{i}$, thereby accounting for errors in each iteration. The iterative error shows that each fit-and-generate step produces some amount of error, which is expected when performing any modeling. However, it is important to note that the cumulative error is not simply a summation of each iteration error. It is certainly possible for an iterative error made by some later iteration to correct an earlier error and lead to a decrease in the cumulative error. In this case, we show that the cumulative error starts off low, but diverges asymptotically. In summary, this iterative test not only captures model-induced error but can also reveal bias encoded in the model, such as the Kronecker model's dispersing degree distribution previously uncovered and formalized by Seshadhri \textit{et al.}~\citep{seshadhri2013depth}. \newpage \begin{figure*}[tb!]
\centering \input{figures/degree_js} \input{figures/legend-degree} \caption{Jensen-Shannon (JS) divergence of degree distribution (top) and PageRank distribution (bottom) over 10 fit-and-generate iterations on three real-world datasets and two synthetic datasets. Plots represent mean JS divergence over 50 chains; confidence intervals are not plotted for clarity but do not exceed $\pm$0.028 in the worst case. The area above the Erd\H{o}s-R\'{e}nyi curves is shaded in gray to indicate worse-than-random performance. Some graph models tend towards total divergence over 10 iterations, indicating that they degenerate (in interesting ways). Other models are stable, indicating that they perform well on this data on the degree and PageRank metrics.} \label{fig:degree} \end{figure*} \section{Experiments} In this section, we apply the Infinity Mirror test to several graph models using relevant graph properties and metrics. The Infinity Mirror test reveals interesting error patterns in many graph models and also suggests hidden biases worthy of further investigation. Our open-source implementation can be found on GitHub\footnote{\url{www.github.com/satyakisikdar/infinity-mirror}}. \para{Data.} We perform experiments over five source graphs, denoted $G_0$. These include three real-world graphs commonly found in the graph modeling literature: OpenFlights (Flights), an email graph from a large European research institution (Email), and a network of chess competitions (Chess). We also examine synthetic graphs: a 3000-node tree (Tree) where each node has 2, 3, or 4 children with equal probability, and a ring of cliques (Clique-Ring) where each node in a 500-node ring is replaced by a 4-clique ($K_4$). Summary statistics on the datasets can be found in Tab.~\ref{tab:dataset}. \begin{table}[htb] \centering \caption{Dataset summary statistics. Avg. CC is the average local clustering, and Avg. PL is the average unweighted shortest path length.} \vspace{0.1cm} \footnotesize{ \begin{tabular}{@{} l rrr rr @{}} \toprule \textbf{Dataset} & \textbf{$\mathbf{|V|}$} & \textbf{$\mathbf{|E|}$} & \textbf{\# Triangles} & \textbf{Avg. CC} & \textbf{Avg. PL} \\ \midrule Flights & 2,905 & 15,645 & 72,843 & 0.45 & 4 \\ Email & 986 & 16,064 & 105,461 & 0.40 & 2.5 \\ Chess & 7,115 & 55,779 & 108,584 & 0.18 & 4 \\ Clique-Ring & 2,000 & 3,500 & 2,000 & 0.75 & 250 \\ Tree & 2,955 & 2,954 & 0 & 0 & 12 \\ \bottomrule \end{tabular} } \label{tab:dataset} \end{table} \para{Graph Models.} There are hundreds of possible graph models to which the Infinity Mirror test may be applied; however, the current work is not a survey of graph models; instead, we sample archetypal graph models from among the various options available. These include the Chung-Lu model~\citep{chung2002average}, clustering-based node replacement graph grammars (CNRG)~\citep{sikdar2019modeling}, the block two-level Erd\H{o}s-R\'{e}nyi model (BTER)~\citep{seshadhri2012bter}, degree-corrected stochastic block models (SBM)~\citep{karrer2011sbm}, hyperedge replacement graph grammars (HRG)~\citep{aguinaga2016growing, aguinaga2018learning}, Kronecker graphs~\citep{leskovec2010kronecker, leskovec2007scalable}, the bottom-up graph grammar extractor (BUGGE)~\citep{hibshman2019towards}, a generative adversarial network (NetGAN)~\citep{bojchevski2018netgan}, a graph linear autoencoder (LinearAE)~\citep{salha2020simple}, graph convolutional neural networks (GCNAE)~\citep{kipf2016variational}, and graph recurrent neural networks (GraphRNN)~\citep{you2018graphrnn}.
Random graphs generated using the Erd\H{o}s-R\'{e}nyi model with an edge probability equal to the density of the input graph are also included as a baseline. \para{Methodology.} For each combination of $\mathcal{M}$ and $G_0$, we create 50 independent fit-and-generate chains, each of length 10 (\textit{i.e.}, $G_1, G_2,\ldots,G_{10}$). The 50 independent chains are almost certainly different because each graph model incorporates some stochasticity during feature extraction, graph generation, or both. GraphRNN does not conform to the above format, because it learns from and generates batches of graphs at a time. For this model, we initially feed in 50 identical copies of $G_0$, which are used to generate a batch of 50 different $G_1$s; in this manner, every iteration of GraphRNN works on batches of 50 graphs for both input and output. In a small number of cases, some graph models (like NetGAN in Fig.~\ref{fig:toy-example}) degenerate to such an extent that they can no longer be refit on their output. In those cases, we plot as much data as possible before the model fails. \begin{figure*}[tb!] \centering \input{figures/deltacon} \input{figures/legend-degree} \caption{Portrait divergence (top) and $\lambda$-distance (bottom) over 10 fit-and-generate iterations on three real-world datasets and two synthetic datasets. Confidence intervals are not plotted for clarity, but do not exceed $\pm$0.021 in the worst case. The area above the Erd\H{o}s-R\'{e}nyi curves is shaded in gray to indicate worse-than-random performance. Like the degree and PageRank distributions in Fig.~\ref{fig:degree}, many graph models degenerate, but some remain stable.} \label{fig:deltacon} \end{figure*} \subsection{Degree and PageRank Metrics} In this section, following the Kronecker example, we show how the degree and PageRank distributions change over the iterations of the Infinity Mirror. The degree of a node is the number of connections it has in the graph, and the degree distribution describes the probability of a node having a given degree in a graph. For directed graphs, it is common to plot in-degree and out-degree distributions; for simplicity, in the present work, we consider only the total degree (in-edges plus out-edges). The degree distribution is a critical component in a graph's overall topology. The Chung-Lu model focuses exclusively on generating graphs with a degree distribution that conforms to an input degree distribution. Likewise, the Kronecker graph model has been rigorously analyzed and shown to generate graphs with oscillating degree distributions that approximate a log-normal shape with a heavy tail. Early work on graph topology by Erd\H{o}s and R\'{e}nyi found that random graphs have a binomial degree distribution~\citep{erdos1960evolution}. Then Albert and Barab\'{a}si found that many large real-world graphs exhibit a power-law degree distribution~\citep{albert2002statistical}, a claim recently challenged by a comprehensive analysis showing that many degree distributions that appear to follow a power law are in fact fat-tailed~\citep{broido2019scale}. Graph models claiming to be comprehensive ought to accurately encode a graph's degree distribution regardless of its shape. The five plots in the top row of Fig.~\ref{fig:degree} show the mean JS divergence over the 50 independent trials between the degree distribution of the source graph $G_0$ and each new graph $G_i$ (\textit{i.e.}, the cumulative error from Fig.~\ref{fig:kronexample}).
We compute error bars at the 95\% confidence level, but they are not plotted to maintain clarity. Over all datasets and models, the maximum 95\% confidence interval was $\pm0.028$, and most were $\pm0.01$. As expected, we find the Chung-Lu model, which directly and exclusively models the degree distribution, performs well on real-world graphs, but poorly on the synthetic graphs. This is likely because the Chung-Lu model is tailored to graphs with a power-law degree distribution. The degree distributions of the synthetic graphs, however, are multinomial and therefore not well modeled by the Chung-Lu model. The CNRG graph grammar model performs well at generating graphs that match the degree distribution of synthetic graphs. The random graphs of Erd\H{o}s and R\'{e}nyi are included as a baseline. We set the number of nodes and edges in the random graph generator equal to the number of nodes and edges in the source graph. As expected, the performance of the random graphs stays the same over various iterations. We expected that the performance of most graph models would degenerate towards the performance of the random graph. However, we show that some graph models quickly degenerate past the performance of the random model. In addition to the degree distribution, the bottom row in Fig.~\ref{fig:degree} shows the JS divergence of the PageRank distributions of $G_0$ and $G_i$. The PageRank algorithm assigns each node a weight PR$(v)$ signifying the probability that a random walker over the graph's edges lands on that node. A node's PageRank score is highly correlated with its (in-)degree, but contains additional information about the graph's topology. However, unlike the degree, PageRank is real-valued, so the JS divergence cannot be computed directly. In our analysis we create $100$ equal-width bins from $0$ to $\underset{v \in \{V_0\cup V_i\}}{\max}(\textrm{PR}(v))$ and compute the JS divergence between the two binned PageRank distributions. As expected, the performance of each model using the PageRank distribution is similar to the degree distribution. In summary, the degree and PageRank distributions uncover known biases in the Kronecker and Chung-Lu graph generators. Specifically, the degree distributions of the Kronecker graph model oscillate and have a heavy tail, and the Chung-Lu graph model works well on real-world networks that have (approximately) a power-law degree sequence, but does not accurately model non-scale-free synthetic graphs. \subsection{Portrait Divergence and $\lambda$-Distance} Degree and PageRank distributions describe a graph from an important but limited perspective. Indeed, similar values of degree and PageRank statistics do not necessarily imply a similar network topology~\citep{prvzulj2007biological}. A more thorough examination that compares the topology and internal substructures between the source graph and each new iteration is needed to describe each model in more detail. Among the many options~\citep{tantardini2019comparing}, we use Portrait divergence~\citep{bagrow2019information} and $\lambda$-distance~\citep{wilson2008study}, both unknown node-correspondence distance measures, to compare the topology of two graphs. The Portrait divergence is based on a portrait of a graph~\citep{bagrow2008portraits}, which is a matrix $B$ whose entry $B_{\ell,k}$, for $\ell=0,1,\ldots,d$ (with $d$ the graph diameter) and $k=0,1,\ldots,N-1$, counts the number of nodes that have exactly $k$ nodes at shortest-path distance $\ell$.
Simply put, the portrait of a graph measures the distribution of its shortest path lengths. Because the shortest path lengths are correlated with edge density, degree distribution, etc., the network portrait is a comprehensive (and visually compelling) summary of a graph's topology. Furthermore, the graph portrait induces a probability $P(k,\ell)$ that a randomly chosen node has $k$ nodes at distance $\ell$, and therefore defines a probability distribution over the nodes of a graph. The Portrait divergence is therefore defined as the JS divergence of portrait distributions from two graphs~\citep{bagrow2019information}. \begin{figure}[tb!] \centering \input{figures/heatmap} \vspace*{.2cm} \includegraphics[width=.99\linewidth]{figures/figure1-crop.pdf} \vspace*{.2cm} \input{figures/legend_pca} \caption{(Top) PCA weights for the graphlet vectors. (Bottom) 2-D PCA of graphlet vectors on all five datasets (represented by shape) and all 12 graph models (represented by separate plots), showing all 10 iterations (represented by color). The original graphs are plotted in green. Each mark represents the coordinate of a generated graph averaged over all trials. These illustrations show how graphs degenerate comparatively over multiple iterations.} \label{fig:pca} \end{figure} The $\lambda$-distance is similar to Portrait divergence. It is defined as the Euclidean distance between the eigenvalues of two graphs. In the present work, we use the graph Laplacian $\mathcal{L} = D - A$ (\textit{i.e.}, the difference between the degree and adjacency matrices) instead of the adjacency matrix due to its desirable properties (\textit{e.g.}, lower co-spectrality between non-isomorphic graphs)~\citep{wilson2008study}. Results of Portrait divergence and $\lambda$-distance are plotted in Fig.~\ref{fig:deltacon}. As in Fig.~\ref{fig:degree}, each mark represents the mean over 50 trials on each source graph and model over 10 iterations. Again, confidence intervals are not plotted for clarity but do not exceed $\pm$0.021 in the worst case. Close examination of these results again shows that some models degenerate, and others remain consistent---depending on the source graph and model. Although these plots show that models degenerate, they do not show \textit{how} the graphs change, as Fig.~\ref{fig:kronexample} does. \subsection{Tracking Degeneracy: Graphlets} Graph modelers have also found that the number and types of various graphlets contribute significantly to a graph's topology. To that end, Yaverouglu et al. introduced the Graphlet Correlation Distance (GCD) that compares two graphs based on the Spearman correlation of the graphlet orbits of each node~\citep{yaverouglu2014revealing}. Unfortunately, the GCD algorithm requires significant processing resources, so instead, we computed the total graphlet counts for all possible connected 2, 3, and 4-node configurations within each graph using the Parallel Parameterized Graphlet Decomposition (PGD) algorithm~\citep{ahmed2015efficient}, \textit{i.e.}, we counted all the edges, closed triangles, open triangles, 3-paths, etc. for each graph. Then we calculate the Relative Graphlet Frequency Distance (RGFD) as the sum of the absolute differences in graphlet counts between two graphs~\citep{prvzulj2004modeling}. The counting step produces a $9$-dimensional vector---one entry for each graphlet.
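Following this description, the RGFD computation itself is a one-liner once the graphlet counts are available; the sketch below assumes the counts are precomputed (\textit{e.g.}, by PGD) and, for simplicity, compares plain relative frequencies, whereas the original formulation log-scales them.
\begin{verbatim}
import numpy as np

def rgfd(counts_g, counts_h):
    # counts_*: length-9 vectors of connected 2-,3-,4-node
    # graphlet counts for two graphs (assumed precomputed).
    p = np.asarray(counts_g, dtype=float)
    q = np.asarray(counts_h, dtype=float)
    p, q = p / p.sum(), q / q.sum()     # relative frequencies
    return float(np.abs(p - q).sum())   # sum of absolute differences
\end{verbatim}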
Likewise, the Network Laplacian Spectral Descriptor (NetLSD) summarizes a graph's features into a 250-dimensional vector derived from the solution of a heat equation that resembles the dynamics of a random walker. Each entry in the vector is a discretized sample from the heat equation summed over all nodes as the energy in the system dissipates over the edges in the graph~\citep{tsitsulin2018netlsd}. \begin{figure*}[tb!] \centering \input{figures/lsd} \vspace{.2cm} \input{figures/legend-degree} \caption{Relative Graphlet Frequency divergence (RGFD) (top) and NetLSD Distance (bottom) over 10 fit-and-generate iterations on three real-world datasets and two synthetic datasets. Confidence intervals are not plotted for clarity, but become large as iterations increase.} \label{fig:lsd} \end{figure*} In either case, the description vectors can be reduced with principal component analysis (PCA) and visualized. Fig.~\ref{fig:pca} shows the results of PCA on the graphlet vectors with each model separated out. Each mark in the plot for a specific model, therefore, represents the reduced vector, averaged over 50 trials for each iteration. Different datasets are represented with different shapes, while different iterations follow a gradient of blue to red, with blue representing the first iteration. The coordinates of the original datasets are marked in green to serve as a reference. PCA was used instead of t-SNE or other dimension reduction techniques because its reduced coordinates are simple linear combinations of the original entries. This yields an interpretable representation whenever the original vector carries semantic meaning; fortunately, the graphlet vectors represent normalized graphlet counts. The top part of Fig.~\ref{fig:pca} shows the weights for each element that, when combined, map each mark to its 2-dimensional point. Interpreting PCA weights is fraught with difficulty, but in this case, the findings are fairly clear: the $x$-axis is highly correlated with the 4-clique and negatively correlated with 4-path and 3-star, while the $y$-axis is highly correlated with 3-star and 4-clique, negatively correlated with $4$-tailed triangles. Both axes are uncorrelated with the 4-cycle. Representing axes in this way allows the reader to begin to draw conclusions from the plot in Fig.~\ref{fig:pca}. We call out two interesting findings as examples. First, from the HRG plot, we observe that the markers tend to shift upwards in the $y$-axis with increasing iterations, as evidenced by the change in marker colors from blue to red. This could be due to an increase in the number of $4$-cliques and $3$-stars. Simply put, the HRG model appears to be biased towards generating more cliques. This can be validated by the observation that the \textit{hyperedges} in the HRG model are formed directly from a clique-tree. Second, the graph autoencoder based models (GCNAE and LinearAE) tend to diverge quickly from the input graphs, away from all the models, to occupy a region with high values of $x$. This transition, for some datasets like the Flights network, is gradual, as seen by the trail of ``+''s in the GCNAE and LinearAE plots. High $x$ values could be the result of a large number of $4$-cliques and a lack of $4$-paths and $3$-stars, indicating an increase in the density of edges in the generated graphs.
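The projection itself is routine; a sketch with \texttt{scikit-learn}, assuming a matrix \texttt{X} whose rows are the per-graph graphlet vectors (one row per model, iteration, and trial), is given below. The rows of \texttt{weights} play the role of the axis interpretations discussed above.
\begin{verbatim}
from sklearn.decomposition import PCA

def embed_graphlets(X):
    # X: rows = graphs, columns = 9 normalized graphlet counts.
    pca = PCA(n_components=2)
    coords = pca.fit_transform(X)    # one 2-D point per graph
    weights = pca.components_        # per-graphlet axis weights
    return coords, weights
\end{verbatim}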
These observations prompted us to examine the topological degeneracy of the graphs from yet another perspective in Sec.~\ref{sec:aplcc}, looking at how average clustering and average shortest path lengths evolve across iterations for different models. There are certainly additional conclusions that may be drawn from the PCA plots, but we leave these for the reader. Semantic analysis of PCA on NetLSD vectors may also be possible, but we do not consider such analysis in the present work. In addition to PCA, we can also compare the $9$- and $250$-dimensional vectors of each generated graph with the respective vectors from the source graph. Indeed, the distance between these vectors of two graphs is precisely how the Relative Graphlet Frequency Distance (RGFD) and the NetLSD distance are defined in their original works. Because the calculations used to compute the RGFD and NetLSD distances are not bounded, it is difficult to compare models across source graphs directly. However, comparisons among models on the same graph are still valid. With that caveat in mind, results for the RGFD and the NetLSD distance are plotted in Fig.~\ref{fig:lsd}. Like earlier plots, each mark represents the mean over 50 trials on each source graph and model, over 10 fit-and-generate iterations. The confidence intervals are not plotted for clarity. However, confidence intervals widen in the RGFD plots and become quite large ($\approx$0.08) in the final iterations. The growth in confidence intervals indicates that some models are not consistent over multiple trials. CNRG produced graphs that were isomorphic to the source graph on the Clique-Ring data, resulting in distances of 0, which cannot be displayed on a log scale; we instead plotted these results along the bottom of the plots. Like earlier plots, most models tend toward an asymptote near the random graph baseline. We are surprised to find that many models perform worse than the random baseline, even in early generations. \begin{figure}[tb!] \centering \includegraphics[width=.99\linewidth]{figures/figure2-crop.pdf} \vspace*{.5cm} \input{figures/legend_pca} \caption{Variation of average clustering (CC) and average path length (APL) across all five datasets (represented by shape) and all 12 graph models (represented by separate plots), showing all 10 iterations (represented by color). The original graphs are plotted in green. Each mark represents the coordinate of a generated graph averaged over all trials. These illustrations show how graphs degenerate comparatively over multiple iterations.} \label{fig:aplcc} \end{figure} \subsection{Tracking Degeneracy: Clustering and Geodesics} \label{sec:aplcc} A graph's topology is entirely determined by how its nodes are situated in relation to one another. How nodes \emph{cluster} together can have a dramatic impact on how the network is interpreted in a given domain. Similarly, the (unweighted) shortest path lengths, also known as \emph{geodesics}, play an influential role in scenarios both abstract and real-world. In this section, we analyze how graphs' average clustering and average shortest path lengths change as the Infinity Mirror test is applied. Average clustering measures the amount of triadic closure in a graph. Real-world social networks often contain many more triangles, and therefore have higher average clustering, than an Erd\H{o}s-R\'{e}nyi graph with the same number of nodes and edges~\citep{klimek2013triadic}.
Similarly, distances between nodes, as measured by the lengths of shortest paths, are indicative of how far apart or close together the nodes in a graph are, with meaningful implications in many applications. Graphs mined from real-world data, such as social and biological networks, often exhibit a low average shortest path length, known as the \emph{small world} property~\citep{watts1998collective}. Synthetic graphs (\textit{e.g.}, trees, cycles), on the other hand, can have much longer geodesics. If a graph model hopes to capture the global nature of interactions in a network, then the average path length is an important property to preserve. Tab.~\ref{tab:dataset} lists the average clustering and average shortest path length of each dataset used in the paper. In Fig.~\ref{fig:aplcc}, we have plotted the average clustering (CC) against the average path length (APL) of twelve generative models on five datasets as they experience 10 iterations of the Infinity Mirror test. Points plotted are aggregated means across the 50 independent chains of the Infinity Mirror, with the initial graph $G_0$ for each dataset colored green and subsequent iterations $G_1$ through $G_{10}$ colored in a gradient from dark blue to red. Datasets are distinguished by differently shaped markers. Notably, we can see that HRG displays a gradual progression from smaller CCs to larger ones while staying within a relatively stable window of APL values across datasets. Kronecker displays a similarly gradual progression, whereby the APL values consistently and monotonically decrease as the clustering coefficients creep higher over the iterations. This further validates our previous observation that Kronecker tends to create highly dense graphs as the model is iteratively fit. The situation for the autoencoders GCNAE and LinearAE is more clear-cut. We can see that, regardless of where the dataset is initially located in CC$\times$APL space, the resulting chains are tightly grouped in the bottom right of the plot, indicating very high clustering and short average distances between nodes. Importantly, this transition occurs suddenly, with only a few of the very early iterations falling outside of that lower-right region. The Erd\H{o}s–R\'{e}nyi random baseline and Chung-Lu display a similarly sudden and consistent grouping of points towards the lower left of the graph, which corresponds with the common knowledge that these random graph models tend to have low clustering and small inter-node distances. CNRG performs markedly well on these metrics. We can see that the points for each dataset stay fairly near their initial starting point for a few iterations before gradually tending to decrease clustering. On all of the datasets, either CC or APL is very well preserved, while the other metric slowly drifts away from the initial values (except on Clique-Ring, for which CNRG generates isomorphic copies). Interestingly, both BUGGE and BTER seem to have qualitatively similar behavior on the same datasets, tending to suddenly shift them to roughly the same regions. SBM displays a tendency to decrease clustering on most datasets, while also startlingly decreasing the APL of Clique-Ring in particular. Finally, GraphRNN and NetGAN display peculiar behavior. GraphRNN displays neither the tight, sudden grouping nor the gradual, consistent decay of some of the other models. Instead, the points seem to generally congregate near the bottom of the plot, indicating a consistent decrease in APL but inconsistent behavior in terms of clustering.
For some datasets, like Clique-Ring, clustering is significantly diminished for a time before increasing again near the end of the 10 iterations, while for the Tree dataset, clustering seems to be slowly increasing. On the other hand, NetGAN could only be evaluated on the Clique-Ring dataset due to model failure on the others. It significantly decreased both CC and APL on this dataset before marginally raising APL towards the end. \subsection{Tracking Degeneracy: Learned Model Spaces} \begin{figure*}[tb!] \centering \input{figures/model-study.tex} \caption{Model parameters for iterations 1, 5, and 10 fit on the Clique-Ring dataset using Kronecker, SBM, BUGGE, and CNRG. Kronecker initiator matrices tend towards denser graphs. The SBM model tends towards fewer, sparser communities. Higher probabilities are redder, and only the non-zero elements on the block matrices are drawn. We only draw the right-hand side of the two most frequent BUGGE production rules, which show that a $K_5$ is encoded (instead of a $K_4$ from the source graph). CNRG generates graphs that are isomorphic to the source graph where $\bar{C}_{500}$ is a 500-node cycle. Rule frequencies are denoted by $\times$.} \label{fig:models} \end{figure*} Graph metrics like degree and PageRank distribution from Fig.~\ref{fig:degree}, Portrait and $\lambda$-distance from Fig.~\ref{fig:deltacon}, and Graphlet and NetLSD distances from Fig.~\ref{fig:lsd} generally show that models tend to degenerate as they are repeatedly asked to model their generated graph. The results from Fig.~\ref{fig:pca} offer clues to the ways in which they degenerate, but those plots investigate the outputs of the models, not the models themselves. Next, we investigate how the models themselves change in order to gain further insights into the biases encoded into each model. Unfortunately, only a handful of the graph models studied in the present work contain interpretable parameters. For example, the model parameters of neural networks (\textit{i.e.}, GraphRNN, LinearAE, GCNAE, and NetGAN) are well known to be difficult to interpret. The BTER, HRG, and Chung-Lu parameters carry semantic meaning but are still difficult for a human to interpret. Likewise, the Erd\H{o}s-R\'{e}nyi model is just the number of nodes and edges, which do not degenerate over iterations. What remains are the Kronecker, SBM, BUGGE, and CNRG models. We display the interpretable graph model parameters $\Theta$ for iterations 1, 5, and 10 of the Clique-Ring source graph in Fig.~\ref{fig:models}. The Kronecker model shows the initiator matrix, which generates a graph by repeatedly computing the Kronecker product and stochastically drawing an edge in the resulting matrix according to the value in each cell. We find that three entries of the initiator matrix tend towards 0.999 and the remaining entry towards 0.775. This leads to the generation of increasingly dense graphs, similar to what we observed in Fig.~\ref{fig:kronexample} and Fig.~\ref{fig:aplcc}. The SBM models graph communities directly in a reduced block matrix. New graphs are generated by drawing tight-knit groups according to the probabilities in each block and connecting disparate communities according to their off-diagonal probabilities. We show that, as the iterations increase, the number of detected communities decreases.
In addition, we observe that the off-diagonal elements are minuscule, leading to the creation of graphs with few and sparsely interconnected components---eventually leading to small disconnected islands. Node replacement graph grammar models of the Clique-Ring produced by BUGGE and CNRG are easily interpretable. These models encode context-free grammars of graph substructures where a node represented on the left-hand-side of a production rule is replaced with the graph on the right-hand-side of the rule. CNRG first encodes the individual 4-cliques into separate nonterminal nodes (bottom rule) and then encodes the 500-node ring into a single starting rule. In this way, CNRG is able to capture the \textit{exact} topology of the graph and is, therefore, able to generate an isomorphic copy over multiple iterations. BUGGE treats all edges as bi-directed, including in the production rules, which reveal an interesting bias. In the first iteration $\Theta_1$, the top rule encodes cliques of arbitrary size, and the bottom rule encodes a chain of cliques. Because the top rule in the first iteration does not limit either the number or the size of individual cliques, we observe in later iterations that 5-cliques start appearing frequently. These rules still preserve the clique-ring structure, but the size of the individual cliques starts to vary. \section{Conclusions} \label{sec:findings} In summary, this work presents a new methodology for analyzing graph models by repeatedly fitting and generating. This iterative fit-and-generate process can reveal hidden model biases, facilitating their critical examination. For example, we find that graph autoencoders and convolutional neural networks quickly densify to almost complete graphs in the limit. Conversely, NetGAN and GraphRNN models sparsify and shrink a graph in the limit and degenerate quickly. Interestingly, the graphs produced by the Kronecker and HRG models do not monotonically degenerate; sometimes, their performance improves after multiple generations. It is unclear why this might be; further research is needed to indicate when and why this improvement occurs. The parameters of CNRG, BUGGE, and SBMs can be more closely inspected to further reveal biases that are encoded into each of these models. Finally, it is our hope that these findings will be used to more deeply understand the statistical biases that are encoded into graph models and aid the future development of more robust graph models. \section*{Acknowledgements} This work is supported by grants from the US National Science Foundation (\#1652492), DARPA (FA8750-20-2-1004), and the US Army Research Office (W911NF-17-1-0448).
\section*{Ville Turunen:\\ Time-frequency analysis on groups} \paragraph{Abstract:} {\it Phase-space analysis or time-frequency analysis can be thought of as Fourier analysis simultaneously both in time and in frequency, originating from signal processing and quantum mechanics. On groups having a unitary Fourier transform, we introduce and study a natural family of time-frequency transforms, and investigate the related pseudo-differential operators. } \section{Introduction} Time-frequency analysis is a subfield of Fourier analysis. It studies ``time'' dependent signals (functions or distributions), presenting them simultaneously both in ``time'' and in ``frequency'', and consequently manipulating them as sharply as possible. Traditionally, ``time'' and ``frequency'' refer to real variables, where the Fourier integral transform is the essential tool. In this text, we establish time-frequency analysis on those locally compact groups that allow a unitary Fourier transform. Time-frequency transforms in Cohen's class present signals as joint time-frequency distributions, which are linked to the pseudo-differential operators for manipulating signals. The time-frequency concepts apply to phase-space analysis, e.g.\ to position-momentum presentations of wavefunctions in quantum mechanics. One of Cohen's original motivating examples in \cite{Cohen1966} was the deduction of the Born--Jordan phase-space transform, stemming from the Born--Jordan quantization of Heisenberg's matrix mechanics \cite{Heisenberg,BornJordan,BornHeisenbergJordan}. Time-frequency analysis has been studied for $p$-adic numbers \cite{Haran}, on more general locally compact commutative groups \cite{Kutyniok}, and on certain classes of locally compact groups \cite{MantoiuRuzhansky}. However, our treatment is not subsumed by those works. For a compact group, the time-frequency plane is the Cartesian product of the group and its unitary dual. Time-frequency transforms will be ``time-frequency invariant'' sesquilinear mappings on pairs of test functions (trigonometric polynomials, or Schwartz--Bruhat functions), with values in the corresponding space of matrix-valued test functions on the time-frequency plane. In the non-commutative setting, the ``frequency modulations'' require careful rethinking. Euclidean time-frequency analysis is usually built around the symmetric Wigner transform, corresponding to the Weyl pseudo-differential quantization. However, groups often lack suitable scalings, so we build our time-frequency analysis around the always existing Rihaczek or Kohn--Nirenberg transform: this could have been the starting point for the Euclidean theory. A time-frequency transform dictates a pseudo-differential quantization, and we shall study this connection. On compact Lie groups, the Kohn--Nirenberg quantization has been treated e.g. in \cite{Taylor,RuzhanskyTurunen,gardingRuzhanskyTurunen,RuzhanskyTurunenWirth,Fischer}. The compact group results are finally generalized to those locally compact groups that allow a unitary Fourier transform. \section{On Euclidean time-frequency analysis} To motivate our definitions for time-frequency analysis on compact groups $G$, let us briefly explain how analogous concepts can be presented on Euclidean spaces $\mathbb R^n$, avoiding technicalities. The general background is presented in the monographs \cite{Cohen1995} and \cite{Grochenig}. To underline the similarities, we use quite similar notions both on $G$ and on $\mathbb R^n$. Signals are nice-enough functions $u:\mathbb R^n\to\mathbb C$.
We call variables $x,y\in\mathbb R^n$ {\it time-like} (or {\it position-like}) and variables $\xi,\eta\in\widehat{\mathbb R}^n\cong\mathbb R^n$ {\it frequency-like} (or {\it momentum-like}). The starting point is the formula \begin{equation} u(x) = \iint {\rm e}^{{\rm i}2\pi(x-y)\cdot\eta}\,u(y)\,{\rm d}y\,{\rm d}\eta \end{equation} for the Schwartz test functions $u\in\mathscr S(\mathbb R^n)$. Define the {\it Fourier transform} $\widehat{u}$ by \begin{equation} \widehat{u}(\eta) := \int {\rm e}^{-{\rm i}2\pi y\cdot\eta}\,u(y)\,{\rm d}y. \end{equation} From the Schwartz test function space $\mathscr S(\mathbb R^n)$, the Fourier transform extends to a unitary operator $\mathscr F:L^2(\mathbb R^n)\to L^2(\widehat{\mathbb R}^n)$: in other words, $\mathscr F=(u\mapsto\widehat{u})$ is a linear bijection satisfying \begin{equation} \langle u,v\rangle = \langle \widehat{u},\widehat{v}\rangle, \end{equation} where the Hilbert space $L^2(\mathbb R^n)$ has the inner product defined by \begin{equation} \langle u,v\rangle := \int u(x)\,v(x)^\ast\,{\rm d}x, \end{equation} where $\lambda^\ast$ is the complex conjugate of $\lambda\in\mathbb C$. Signal $u$ has the {\it norm} $\|u\|=\langle u,u\rangle^{1/2}$ and the {\it energy} $\|u\|^2=\langle u,u\rangle$. The {\it symplectic Fourier transform} is then $F=\mathscr F\otimes\mathscr F^{-1}$, taking functions on the {\it time-frequency plane} (or {\it phase-space}) $\mathbb R^n\times\widehat{\mathbb R}^n$ to functions on the {\it ambiguity plane} $\widehat{\mathbb R}^n\times\mathbb R^n$. A Cohen class {\it time-frequency transform} $D$ of signals $u,v$ is $D(u,v):\mathbb R^n\times\widehat{\mathbb R}^n\to\mathbb C$, \begin{eqnarray} D(u,v)(x,\eta) & = & F^{-1}\left(\phi\ FW(u,v) \right)(x,\eta) \\ & = & \iint {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,{\rm e}^{+{\rm i}2\pi x\cdot\xi}\,\phi(\xi,y)\,FW(u,v)(\xi,y) \,{\rm d}\xi\,{\rm d}y,\label{EQ:TFtransform} \end{eqnarray} where $\phi:\widehat{\mathbb R}^n\times\mathbb R^n\to\mathbb C$ is the {\it ambiguity kernel}, $W(u,v):\mathbb R^n\times\widehat{\mathbb R}^n\to\mathbb C$ is the {\it Wigner transform}, \begin{equation}\label{DEFN:Wigner} W(u,v)(x,\eta) := \int {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,u(x+y/2)\,v(x-y/2)^\ast\,{\rm d}y, \end{equation} and $FW(u,v):\widehat{\mathbb R}^n\times\mathbb R^n\to\mathbb C$ is the {\it ambiguity transform}, \begin{equation} FW(u,v)(\xi,y) = \int {\rm e}^{-{\rm i}2\pi x\cdot\xi} \,u(x+y/2)\,v(x-y/2)^\ast\,{\rm d}x. \end{equation} As pointed out by Gr\"ochenig in \cite{Grochenig}, in the literature there is no precise definition of a Cohen class transform $D$. Informally, such $D$ is obtained by smoothing the Wigner transform by some tempered distribution as the convolution kernel. In light of the time-frequency results in the sequel, we would suggest that the ambiguity kernel $\phi$ should be a smooth function with polynomially bounded derivatives: in other words, we would then have a Schwartz multiplier $$ \left(h\mapsto F^{-1}(\phi\,Fh)\right): \mathscr S(\mathbb R^n\times\widehat{\mathbb R}^n) \to\mathscr S(\mathbb R^n\times\widehat{\mathbb R}^n). $$ Indeed, the literature examples of ambiguity kernels $\phi$ seem to be smooth with polynomially bounded derivatives. Moreover, those examples in the literature are typically bounded with $|\phi(\xi,y)|\leq 1$, which yields the $L^2$-boundedness $$ \|D(u,v)\|_{L^2} \leq \|u\|_{L^2} \|v\|_{L^2}. $$ Note also that $(u,v)\mapsto D(u,v)$ is sesquilinear: $u\mapsto D(u,v)$ is linear, and $v\mapsto D(u,v)$ conjugate-linear.
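\paragraph{Numerical aside.} To make the definitions above concrete, the following minimal sketch is our own illustration, not part of the formal development: it realizes the Wigner and ambiguity transforms on the cyclic group $\mathbb Z/N\mathbb Z$ with $N$ odd, where the Haar integral is the mean over the group, the dual carries the counting measure, and $y/2$ means multiplication by the inverse of $2$ modulo $N$. The code assumes NumPy and all names are ours; it checks that the directly computed ambiguity transform agrees with the symplectic Fourier transform of the Wigner transform, and that $\|W(u,v)\|=\|u\|\,\|v\|$, consistently with the bound $\|D(u,v)\|_{L^2}\leq\|u\|_{L^2}\|v\|_{L^2}$ for $|\phi|\leq 1$.
\begin{verbatim}
import numpy as np

N = 31                                  # odd order: 2 is invertible mod N
inv2 = (N + 1) // 2                     # inverse of 2 modulo N
omega = np.exp(2j * np.pi / N)
rng = np.random.default_rng(0)
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Wigner transform W(u,v)(x,k) = (1/N) sum_y omega^(-yk) u(x+y/2) conj(v(x-y/2)):
ys = np.arange(N)
W = np.zeros((N, N), dtype=complex)
for x in range(N):
    for k in range(N):
        W[x, k] = np.mean(omega ** (-ys * k)
                          * u[(x + inv2 * ys) % N]
                          * np.conj(v[(x - inv2 * ys) % N]))

# Ambiguity transform, directly and as the symplectic Fourier transform of W:
xs, ks = np.arange(N), np.arange(N)
FW_direct = np.zeros((N, N), dtype=complex)
FW_sympl = np.zeros((N, N), dtype=complex)
for xi in range(N):
    for y in range(N):
        FW_direct[xi, y] = np.mean(omega ** (-xs * xi)
                                   * u[(xs + inv2 * y) % N]
                                   * np.conj(v[(xs - inv2 * y) % N]))
        FW_sympl[xi, y] = np.mean(omega ** (-xs * xi)
                                  * (W @ omega ** (y * ks)))

print(np.allclose(FW_direct, FW_sympl))                      # True
print(np.isclose(np.sum(np.abs(W) ** 2) / N,                 # ||W(u,v)||^2
                 np.mean(np.abs(u) ** 2) * np.mean(np.abs(v) ** 2)))
\end{verbatim}
The design choice of the normalized Haar measure on the group and the counting measure on the dual is the one that survives the passage to compact groups below.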
The idea is that the {\it time-frequency distribution} $D[u]:=D(u,u)$ would be a quasi-energy density for signal $u$ (or a quasi-probability density for wavefunction $u$). If $v(x)={\rm e}^{+{\rm i}2\pi x\cdot\xi}\,u(x-y)$ then \begin{equation} D[v](x,\eta) = D[u](x-y,\eta-\xi), \end{equation} reflecting the idea that $v$ is ``$u$ shifted in time-frequency by $(y,\xi)$''. For example, if the ambiguity kernel $\phi$ in \eqref{EQ:TFtransform} is given by $\phi(\xi,y):={\rm e}^{{\rm i}2\pi(\xi\cdot y)\tau}$ for $\tau\in\mathbb R$, this defines the {\it Rihaczek-$\tau$-transform} $D=R_\tau$, where \begin{equation} R_\tau(u,v)(x,\eta) = \int {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,u(x+(\tau+1/2)y)\,v(x+(\tau-1/2)y)^\ast\,{\rm d}y. \end{equation} Sometimes $W_\tau:=R_{\tau-1/2}$ is called the {\it Wigner-$\tau$} or {\it Shubin-$\tau$ transform}. Transforms $R_\tau$ and $R_{-\tau}$ are conjugates to each other in the sense that $R_\tau(u,v)(x,\eta)^\ast = R_{-\tau}(v,u)(x,\eta)$. Especially, $R_0=W$, the Wigner transform. The {\it Kohn--Nirenberg transform} (or the {\it Rihaczek transform}) is $R:=R_{-1/2}$, which will be the starting point for time-frequency analysis on groups. The {\it anti-Kohn--Nirenberg transform} refers to $R_{+1/2}$. Here $D=R$ with $\phi(\xi,y)={\rm e}^{-{\rm i}\pi\xi\cdot y}$, giving \begin{equation} R(u,v)(x,\eta) = u(x)\,{\rm e}^{-{\rm i}2\pi x\cdot\eta}\,\widehat{v}(\eta)^\ast. \end{equation} It is easy to check that $$ \|R_\tau(u,v)\|_{L^2(\mathbb R^n\times\widehat{\mathbb R}^n)} = \|u\|_{L^2(\mathbb R^n)} \|v\|_{L^2(\mathbb R^n)}. $$ Hence the {\it Born--Jordan transform} $Q$ defined by the integral average \begin{equation} Q(u,v) = \int_0^1 W_\tau(u,v)\,{\rm d}\tau = \int_{-1/2}^{1/2} R_\tau(u,v)\,{\rm d}\tau \end{equation} satisfies $$ \|Q(u,v)\|_{L^2(\mathbb R^n\times\widehat{\mathbb R}^n)} \leq \|u\|_{L^2(\mathbb R^n)} \|v\|_{L^2(\mathbb R^n)}. $$ The ambiguity kernel of $D=Q$ satisfies $\displaystyle\phi(\xi,y) =\int_0^1 {\rm e}^{{\rm i}2\pi\xi\cdot y(\tau-1/2)}\,{\rm d}\tau = {\rm sinc}(\xi\cdot y)$, where ${\rm sinc}(t)=\sin(\pi t)/(\pi t)$ for $t\not=0$. Let $\phi$ be the ambiguity kernel of time-frequency transform $D$. Property $\phi(0,0)=1$ corresponds to the normalization \begin{equation}\label{EQ:normalization} \iint D(u,v)(x,\eta)\,{\rm d}\eta\,{\rm d}x = \langle u,v\rangle. \end{equation} Properties $\phi(\xi,0)=1$ and $\phi(0,y)=1$ correspond respectively to the margins \begin{equation} \int D(u,v)(x,\eta)\,{\rm d}\eta = u(x)\,v(x)^\ast,\quad \int D(u,v)(x,\eta)\,{\rm d}x = \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast. \end{equation} Property $|\phi(\xi,y)|\equiv 1$ corresponds to the so-called {\it Moyal identity} \cite{Moyal} \begin{equation} \langle D(u,v),D(f,g)\rangle = \langle u,f\rangle\,\langle v,g\rangle^\ast. \end{equation} In applied sciences and engineering, perhaps the most common time-frequency transforms date back to Gabor's work \cite{Gabor}: these transforms $D$ are of the form \begin{equation} D(u,v)(x,\eta) := \mathscr G_wu(x,\eta)\ \mathscr G_wv(x,\eta)^\ast, \end{equation} where the {\it $w$-windowed short-time Fourier transform} (STFT) is defined by \begin{equation} \mathscr G_w u(x,\eta) := \int {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,u(y)\,w(y-x)^\ast\,{\rm d}y, \end{equation} and the corresponding ambiguity kernel is $\phi=FW[w]^\ast$. Then the normalization \eqref{EQ:normalization} means $\|w\|^2 = \langle w,w\rangle = 1$, and then $D[u]=D(u,u)$ is called the {\it $w$-spectrogram} of $u$.
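\paragraph{Numerical aside.} Continuing the illustration above (ours, with NumPy assumed and all names ours), the Kohn--Nirenberg transform and the $w$-spectrogram are easy to tabulate on $\mathbb Z/N\mathbb Z$, and the margins, the energy identity $\|R(u,v)\|=\|u\|\,\|v\|$ and the normalization \eqref{EQ:normalization} can then be tested directly.
\begin{verbatim}
import numpy as np

N = 64
ks = np.arange(N)
omega = np.exp(2j * np.pi / N)
rng = np.random.default_rng(1)
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
uhat, vhat = np.fft.fft(u) / N, np.fft.fft(v) / N    # Fourier coefficients

# Kohn-Nirenberg transform R(u,v)(x,k) = u(x) omega^(-xk) conj(vhat(k)):
R = np.outer(u, np.conj(vhat)) * omega ** (-np.outer(ks, ks))
print(np.allclose(R.sum(axis=1), u * np.conj(v)))            # time margin
print(np.allclose(R.mean(axis=0), uhat * np.conj(vhat)))     # frequency margin
print(np.isclose(np.sum(np.abs(R) ** 2) / N,                 # energy identity
                 np.mean(np.abs(u) ** 2) * np.mean(np.abs(v) ** 2)))

# w-spectrogram D[u] = |G_w u|^2 with a unit-energy window w:
w = rng.standard_normal(N) + 1j * rng.standard_normal(N)
w /= np.sqrt(np.mean(np.abs(w) ** 2))                # <w,w> = 1
G = np.array([np.fft.fft(u * np.conj(np.roll(w, x))) / N for x in range(N)])
print(np.isclose((np.abs(G) ** 2).sum() / N,         # normalization:
                 np.mean(np.abs(u) ** 2)))           # equals ||u||^2
\end{verbatim}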
Once a time-frequency transform $D$ is chosen, it defines the {\it $D$-quantization} $a\mapsto a^D$ by the $L^2$-duality \begin{equation} \langle u,a^D v\rangle = \langle D(u,v),a\rangle. \end{equation} Here the weight function $a:\mathbb R^n\times\widehat{\mathbb R}^n\to\mathbb C$ is called a {\it symbol} of {\it pseudo-differential operator} $a^D=(v\mapsto a^D v)$. Conversely, time-frequency transform $D$ can be recovered from the quantization map $a\mapsto a^D$, whose properties reflect the properties of $D$. Wigner-$\tau$-transform $W_\tau=R_{\tau-1/2}$ corresponds to the so-called {\it Weyl-$\tau$-quantization} $a\mapsto a^{W_\tau}$, \begin{equation} a^{W_\tau} v(x) = \iint {\rm e}^{{\rm i}2\pi(x-y)\cdot\eta} \,a(x+\tau(y-x),\eta) \, v(y)\,{\rm d}y\,{\rm d}\eta. \end{equation} Especially, the Wigner transform $W=W_{1/2}=R_0$ corresponds to the {\it Weyl quantization} $a\mapsto a^W$. The Rihaczek (or Kohn--Nirenberg) transform $R=W_0=R_{-1/2}$ corresponds to the {\it Kohn--Nirenberg quantization} $a\mapsto a^R$, \begin{equation} a^R v(x) = \iint {\rm e}^{{\rm i}2\pi(x-y)\cdot\eta}\,a(x,\eta) \, v(y)\,{\rm d}y\,{\rm d}\eta = \int {\rm e}^{{\rm i}2\pi x\cdot\eta}\,a(x,\eta)\,\widehat{v}(\eta) \,{\rm d}\eta. \end{equation} The Born--Jordan quantization $\displaystyle a\mapsto a^Q = \int_0^1 a^{W_\tau}\,{\rm d}\tau = \int_{-1/2}^{1/2} a^{R_\tau}\,{\rm d}\tau$ satisfies \begin{equation} a^Q v(x) = \iint {\rm e}^{{\rm i}2\pi(x-y)\cdot\eta} \int_0^1 a(x+\tau(y-x),\eta) \,{\rm d}\tau\,v(y)\,{\rm d}y\,{\rm d}\eta. \end{equation} Weyl introduced his quantization in 1927 in \cite{Weyl}, and Wigner his distribution in 1932 in \cite{Wigner} for quantum mechanics. The Wigner distribution was independently discovered in \cite{Ville}, with applications to signal processing. The Born--Jordan quantization was implicit in \cite{BornJordan} for polynomial symbols, but in the modern sense the Born--Jordan distribution was deduced by Cohen in \cite{Cohen1966}. The Kohn--Nirenberg quantization arose from the studies \cite{KohnNirenberg,Hormander} by H\"ormander, Kohn and Nirenberg. \section{Euclidean revision} On a compact group $G$, we cannot expect to find a reasonable analogy to the Wigner transform $W$, which is the central object in the Euclidean case presented above. This is simply because analogies to the Euclidean scaling $(y\mapsto y/2):\mathbb R^n\to\mathbb R^n$ are missing on a typical compact group $G$. This problem does not disappear by a naive doubling change of variable in the integral formula: see Example~\ref{EXA:falseWigner}. Of course, on the odd-order cyclic group $\mathbb Z/N\mathbb Z$, such scalings $y\mapsto y/2$ exist in modular arithmetic. On the other hand, there is no necessity to start with the symmetric Wigner transform in the Euclidean case, either. Instead, we could have built the Cohen class theory around the non-symmetric Kohn--Nirenberg quantization, and this approach will work on compact groups, too. There is another illuminating point of view: Due to the time-frequency shift-invariance, time-frequency transform $D$ is already encoded in the data \begin{equation} D(u,v)(0,0) = \langle D(u,v),\delta\rangle = \langle u,\delta^D v\rangle, \end{equation} where $\delta=\delta_{(0,0)}$ is the Dirac delta distribution at the time-frequency origin $(0,0)\in\mathbb R^n\times\widehat{\mathbb R}^n$. Despite such a highly singular symbol $\delta$, pseudo-differential operator $\delta^D$ is typically rather well-behaved.
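\paragraph{Numerical aside.} The quantization formulas can likewise be tested numerically. The sketch below is our illustration on $\mathbb Z/N\mathbb Z$ as a toy model (NumPy assumed, names ours): it builds the Kohn--Nirenberg operator $a^R$ from a random symbol, verifies the defining duality $\langle u,a^Dv\rangle=\langle D(u,v),a\rangle$ for $D=R$, and shows that the singular symbol $\delta$ is just a finite array in this setting, so that the pairing against $\delta$ indeed evaluates at the time-frequency origin.
\begin{verbatim}
import numpy as np

N = 32
ks = np.arange(N)
P = np.exp(2j * np.pi * np.outer(ks, ks) / N)   # P[x,k] = omega^(xk)
rng = np.random.default_rng(2)
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)
a = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
vhat = np.fft.fft(v) / N                        # Fourier coefficients of v

aRv = (a * P) @ vhat                            # (a^R v)(x), second formula
R = np.outer(u, np.conj(vhat)) * np.conj(P)     # R(u,v)(x,k)

lhs = np.mean(u * np.conj(aRv))                 # <u, a^R v>
rhs = np.sum(R * np.conj(a)) / N                # <R(u,v), a>
print(np.isclose(lhs, rhs))                     # the defining duality holds

# The Dirac delta at the time-frequency origin, as a finite array:
delta = np.zeros((N, N)); delta[0, 0] = N       # <b, delta> = b(0, 0)
print(np.isclose(np.sum(R * np.conj(delta)) / N, R[0, 0]))   # True
\end{verbatim}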
We call $\delta^D$ the {\it original localization operator}, as $\delta^D v$ tries to be the ``localization of $v$ to the time-frequency origin'', which strictly speaking cannot be achieved in view of the Heisenberg uncertainty principle. If \begin{equation} \delta^D v(z) = \int K_{\delta^D}(z,y)\,v(y)\,{\rm d}y, \end{equation} i.e. if $K_{\delta^D}$ is the Schwartz distribution kernel of $\delta^D$, then \begin{equation}\label{EQ:kernelEuclidean} D(u,v)(x,\eta) = \iint u(x+z)\,{\rm e}^{-{\rm i}2\pi z\cdot\eta} \,K_{\delta^D}(z,y)^\ast\,{\rm e}^{+{\rm i}2\pi y\cdot\eta}\,v(x+y)^\ast \,{\rm d}z\,{\rm d}y. \end{equation} This formula suggests a natural variant for compact groups $G$, where time shifts do not pose problems, whereas frequency modulations are elusive. \section{Fourier analysis on compact groups} Let $e$ denote the neutral element of group $G$. A topological group $G$ is a group and a Hausdorff space, where the group operation $((x,y)\mapsto xy):G\times G\to G$ and the inversion $(x\mapsto x^{-1}):G\to G$ are continuous. Time-frequency analysis on non-compact locally compact groups is treated in Section~\ref{SEC:LCG}, generalizing most (but not all) of our compact case results. In the sequel, unless otherwise mentioned, $G$ is a {\it compact group}: in other words, $G$ is a topological group with compact topology. Monographs \cite{HewittRoss1} and \cite{HewittRoss2} present background in Fourier analysis on compact groups. From the Peter--Weyl theorem, which we shall review later, it follows that such $G$ is isomorphic to a closed subgroup of the Cartesian product of a family of unitary matrix groups. If $G$ is commutative, instead of this multiplicative notation for group operations, it is common to use additive notation: that is, instead of $xy,x^{-1},e$, writing $x+y,-x,0$, respectively. Let $C(G)$ be the vector space of continuous functions $u:G\to\mathbb C$, endowed with the norm $u\mapsto\|u\|_{ C(G)}=\max\{|u(x)|:\, x\in G\}$. Especially, the unit constant function ${\bf 1}=(x\mapsto 1):G\to\mathbb C$ belongs to $C(G)$. Let \begin{equation} \int u(x)\,{\rm d}x = \int_G u(x)\,{\rm d}x \in \mathbb C \end{equation} be the {\it Haar integral} of $u\in C(G)$: the corresponding Haar measure is the unique translation-invariant Borel probability measure on $G$. We obtain the space $L^2(G)$ of square-integrable functions or {\it signals} by completing $C(G)$ with respect to the {\it norm} $\|u\|:=\langle u,u\rangle^{1/2}$ given by the {\it inner product} $(u,v)\mapsto\langle u,v\rangle$, \begin{equation} \langle u,v\rangle := \int u(x)\,v(x)^\ast\,{\rm d}x. \end{equation} Here $\|u\|^2=\langle u,u\rangle$ is the {\it energy} of the signal. A {\it unitary representation} of compact group $G$ on Hilbert space $\mathscr H_\eta$ is a strongly continuous group homomorphism $\eta:G\to\mathscr U(\mathscr H_\eta)$ to the group $\mathscr U(\mathscr H_\eta)$ of unitary operators on $\mathscr H_\eta$. Hence $\eta(xy)=\eta(x)\,\eta(y)$, $\eta(x^{-1})=\eta(x)^{-1}=\eta(x)^\ast$, $\eta(e)=I$ (the identity operator on $\mathscr H_{\eta}$). The {\it Fourier coefficient} of $u\in L^2(G)$ at $\eta$ is the bounded linear operator $\widehat{u}(\eta)=\mathscr F u(\eta):\mathscr H_\eta\to\mathscr H_\eta$ defined by \begin{equation} \widehat{u}(\eta) = \mathscr F u(\eta) := \int u(x)\,\eta(x)^\ast\,{\rm d}x. \end{equation} The {\it left regular representation} of $G$ is $\pi_L:G\to\mathscr U(L^2(G))$ defined by \begin{equation} \pi_L(y)u(x) := u(y^{-1}x) \end{equation} for almost all $x\in G$.
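\paragraph{Numerical aside.} As a small illustration of this definition (ours; NumPy assumed, names ours): on the cyclic group $\mathbb Z/N\mathbb Z$ the left regular representation acts by $\pi_L(y)u(x)=u(x-y)$, i.e. by cyclic shifts, and the sketch below checks the homomorphism property and unitarity, and diagonalizes a shift matrix by the discrete Fourier transform, giving a first glimpse of the Peter--Weyl decomposition discussed next.
\begin{verbatim}
import numpy as np

N = 12
rng = np.random.default_rng(3)
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)

def pi_L(y, u):
    """Left regular representation on Z/NZ: (pi_L(y) u)(x) = u(x - y)."""
    return np.roll(u, y)

y, z = 3, 7
print(np.allclose(pi_L(y, pi_L(z, u)), pi_L(y + z, u)))      # homomorphism
print(np.isclose(np.mean(np.abs(pi_L(y, u)) ** 2),
                 np.mean(np.abs(u) ** 2)))                   # unitarity

# The matrix of pi_L(3) is a cyclic permutation, and the unitary DFT
# matrix diagonalizes it; the eigenvalues are character values.
S = np.roll(np.eye(N), 3, axis=0)
print(np.allclose(S @ u, pi_L(3, u)))                        # same operator
F = np.fft.fft(np.eye(N)) / np.sqrt(N)
D = F @ S @ np.conj(F).T
print(np.allclose(D, np.diag(np.diag(D))))                   # diagonal
\end{verbatim}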
The left regular representation $\pi_L$ can be thought of as embedding the group $G$ into the ``rotations'' acting on Hilbert space $\mathscr H=L^2(G)$: thus we can study the group by tools of functional analysis. Unitary representations $\xi,\eta$ of $G$ are {\it equivalent} if there is a unitary isomorphism $U:\mathscr H_\xi\to\mathscr H_\eta$ such that $$ U\xi(x) = \eta(x) U $$ for all $x\in G$. The corresponding equivalence class is then denoted by $[\xi]=[\eta]$. Unitary representation $\eta$ is called {\it irreducible} if the operators $\eta(x)$ have no non-trivial common invariant subspaces in $\mathscr H_\eta$. Let $$ \varepsilon=(x\mapsto 1):G\to\mathscr U(\mathbb C) $$ denote the trivial irreducible unitary representation, corresponding to ``zero frequency'', a unit signal with no oscillations. We distinguish the trivial unitary representation $\varepsilon$ from the unit constant function $ {\bf 1}=(x\mapsto 1):G\to\mathbb C, $ even though they are effectively the same. This convention will clarify the exposition. The {\it unitary dual} $\widehat{G}$ of $G$ consists of equivalence classes $[\eta]$ of irreducible unitary representations of $G$. To make notation lighter, instead of $[\eta]\in\widehat{G}$ we simply write $\eta\in\widehat{G}$. Due to the compactness of $G$, for each $\eta\in\widehat{G}$, Hilbert space $\mathscr H_\eta$ is finite-dimensional. Hence in the sequel we assume that $\eta(x)\in\mathbb C^{d_\eta\times d_\eta}$ is a unitary matrix of dimension $d_\eta\in\mathbb Z^+$: there is such a choice in that equivalence class $\eta\in\widehat{G}$. The corresponding Fourier coefficient $\widehat{u}(\eta)$ is a matrix, belonging to $\mathbb C^{d_\eta\times d_\eta}$. Function $u\in L^2(G)$ is called a {\it trigonometric polynomial} if it has only finitely many non-zero Fourier coefficients: in this sense, trigonometric polynomials are band-limited signals. Equivalently, $u\in L^2(G)$ is a trigonometric polynomial if and only if the span of $\{\pi_L(y)u:\ y\in G\}$ is a finite-dimensional vector space. The space of trigonometric polynomials is denoted by $\mathscr T(G)$. By the Peter--Weyl theorem, the left regular representation can be decomposed into a direct sum of irreducible unitary representations \begin{equation} \pi_L = \bigoplus_{\eta\in\widehat{G}} d_\eta\, \eta, \end{equation} corresponding to the Fourier decomposition of signals $u$: in the sense of $L^2(G)$, there is the Fourier inverse formula (Fourier series) \begin{equation} u(x) = \sum_{\eta\in\widehat{G}} d_\eta\, {\rm tr}\left( \eta(x)\,\widehat{u}(\eta)\right), \end{equation} where ${\rm tr}$ is the usual matrix {\it trace}. Here $\{\sqrt{d_\eta}\,\eta_{jk}:\ \eta\in\widehat{G},\ 1\leq j,k\leq d_\eta\}$ is an orthonormal basis for the Hilbert space $L^2(G)$. Remember that ${\rm tr}(AB)={\rm tr}(BA)$, but often ${\rm tr}(ABC)\not={\rm tr}(CBA)$. In the sequel, for matrix-valued functions $\widehat{a}$ on $\widehat{G}$, we write ``non-commutative integrals'' \begin{equation} \int \widehat{a}(\eta)\,{\rm d}\eta := \int_{\widehat{G}} {\rm tr}\left(\widehat{a}(\eta)\right) {\rm d}\mu_{\widehat{G}}(\eta) = \sum_{\eta\in\widehat{G}} d_\eta\,{\rm tr}(\widehat{a}(\eta)). \end{equation} Here $\mu_{\widehat{G}}$ is the Plancherel measure. We obtain $$ u(x) = \int \eta(x)\,\widehat{u}(\eta)\,{\rm d}\eta = \int \widehat{u}(\eta)\,\eta(x)\,{\rm d}\eta = \iint u(y)\,\eta(y^{-1}x)\,{\rm d}y\,{\rm d}\eta.
$$ Defining $\|\widehat{u}\|:=\langle\widehat{u},\widehat{u}\rangle^{1/2}$, where \begin{equation} \langle\widehat{u},\widehat{v}\rangle := \int \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast\,{\rm d}\eta, \end{equation} we obtain the Plancherel (or Parseval) identity \begin{equation}\label{EQ:Plancherel} \langle u,v\rangle = \langle \widehat{u},\widehat{v}\rangle. \end{equation} Especially, $\|u\|^2=\|\widehat{u}\|^2$ is the conservation of energy. Consequently, the Fourier transform $\mathscr F=(u\mapsto\widehat{u})$ is a Hilbert space isomorphism $\mathscr F:L^2(G)\to L^2(\widehat{G})$. Furthermore, the Fourier transform can also be viewed as linear isomorphisms \begin{eqnarray} && \mathscr F=(u\mapsto\widehat{u}):\mathscr T(G)\to\mathscr T(\widehat{G}),\\ && \mathscr F=(f\mapsto\widehat{f}):\mathscr T'(G)\to\mathscr T'(\widehat{G}), \end{eqnarray} where $\mathscr T'(G)$ is the space of {\it trigonometric distributions} or formal trigonometric expansions $f$. Here $\mathscr T'(\widehat{G})$ consists of all functions $\widehat{f}$ on $\widehat{G}$ such that $\widehat{f}(\eta)\in\mathbb C^{d_\eta\times d_\eta}$ for each $\eta\in\widehat{G}$. Elements $\widehat{u}\in\mathscr T(\widehat{G})\subset\mathscr T'(\widehat{G})$ are those which have only finitely many non-zero Fourier coefficients. On compact group $G$, the algebra of test functions can be enlarged from trigonometric $\mathscr T(G)$ to the Schwartz space (or Schwartz--Bruhat space) $\mathscr S(G)$, introduced by Bruhat in \cite{Bruhat}. Let $\mathscr J$ be the family of the closed normal subgroups $K$ of $G$ such that $G/K$ is isomorphic to a Lie group: for short, $G/K$ is a Lie group. Endow $\mathscr J$ with the inverse inclusion order. For $K\in\mathscr J$, we identify $u\in C^\infty(G/K)$ with $u\circ\pi_K:G\to\mathbb C$, where $\pi_K=(x\mapsto xK):G\to G/K$ is the quotient map. Hence $C^\infty(G/K)\subset C(G)$. The reflexive space of {\it Schwartz test functions} is the inductive limit $$ {\mathscr S}(G) := \lim_{\longrightarrow} C^\infty(G/K) $$ of the direct system $\left( (C^\infty(G/K))_{K\in\mathscr J}, (f_{KL})_{K,L\in\mathscr J:\ L\subset K} \right)$, where functions $f_{KL}:C^\infty(G/K)\to C^\infty(G/L)$ are defined by $f_{KL}(u)(xL):=u(xK)$. The strong dual of the Schwartz space $\mathscr S(G)$ is the {\it Schwartz distribution space} $\mathscr S'(G)$, and they are complete nuclear barreled spaces. Function spaces are treated as subsets of distribution spaces, and we have $$ \mathscr T(G)\subset\mathscr S(G)\subset C(G) \subset L^\infty(G)\subset L^2(G)\subset L^1(G) \subset\mathscr S'(G)\subset\mathscr T'(G). $$ The Fourier transform can also be viewed as linear isomorphisms \begin{eqnarray} && \mathscr F=(u\mapsto\widehat{u}):\mathscr S(G)\to\mathscr S(\widehat{G}),\\ && \mathscr F=(f\mapsto\widehat{f}):\mathscr S'(G)\to\mathscr S'(\widehat{G}), \end{eqnarray} where $\mathscr S(\widehat{G})\subset L^2(\widehat{G})$ and $\mathscr S'(\widehat{G})\subset\mathscr T'(\widehat{G})$. There is a positive central trigonometric approximate identity, i.e. a net of central positive trigonometric polynomials $h_\alpha$ of unit $L^1$-norm such that \begin{equation} \lim_\alpha \|u-h_\alpha\ast u\|_{L^1(G)} = 0 \end{equation} for every $u\in L^1(G)$, see \cite{HewittRoss2} (Theorem 28.53). Let us present a brief related construction: Let $\alpha=(U,m)$, where $m\in\mathbb Z^+$ and $U$ is a symmetric neighborhood of $e\in G$ meaning $U=U_eU_e$ for a neighborhood $U_e=U_e^{-1}$ of $e\in G$.
Choose central $f=f_U\in C(G)$ such that $\|f\|_{L^2}=1$, and $f(x)=0$ whenever $x\not\in U$. Approximate $f$ by central $g=g_{(U,m)}\in\mathscr T(G)$ such that $\|f-g\|_{C(G)} < 1/(m\|f\|_{C(G)})$. Define central $h=h_{(U,m)}\in\mathscr T(G)$ by $h:=|g|^2/\|g\|_{L^2}^2$. The index pairs $\alpha=(U,m)$ and $\beta=(V,n)$ have the partial order \begin{eqnarray*} \alpha\leq\beta & \iff & V\subset U\ {\rm and}\ m\leq n. \end{eqnarray*} The functions $h_\alpha$ form a positive central trigonometric approximate identity. Convolution $u\ast v$ of signals $u,v$ is the signal defined by \begin{equation} u\ast v(x) := \int u(xy^{-1})\,v(y)\,{\rm d}y. \end{equation} Then $\widehat{u\ast v}=\widehat{v}\,\widehat{u}$, that is $\widehat{u\ast v}(\eta)=\widehat{v}(\eta)\,\widehat{u}(\eta)$, as $$ \iint \eta(x)^\ast\, u(xy^{-1})\, v(y)\,{\rm d}y\,{\rm d}x = \int \eta(y)^\ast\,v(y) \int \eta(xy^{-1})^\ast\, u(xy^{-1}) \,{\rm d}x\,{\rm d}y. $$ The unitary dual $\widehat{G}$ does not have a group structure when $G$ is non-commutative. Nevertheless, we define a formal convolution by \begin{equation} \widehat{u}\ast\widehat{v} := \mathscr F\left((\mathscr F^{-1}\widehat{u}) \,\mathscr F^{-1}\widehat{v}\right). \end{equation} Here we have commutativity $\widehat{v}\ast\widehat{u}=\widehat{u}\ast\widehat{v}$ also on non-commutative groups $G$, since multiplication of scalar-valued functions is commutative. Matrix $M=\begin{bmatrix} M_{jk}\end{bmatrix}\in\mathbb C^{d\times d}$ is {\it positive semi-definite} (or {\it positive}, for short) if $$ 0\leq \langle Mz,z\rangle := \sum_{k=1}^d (Mz)_k\,\overline{z_k} = \sum_{j,k=1}^d \overline{z_j}\,M_{jk}\,z_k. $$ The Fourier series (or ``non-commutative integral'') over $\widehat{G}$ behaves much like the Haar integral over $G$. For instance, $$ \int\widehat{u}(\eta)\,{\rm d}\eta=u(e),\quad\quad\quad \int u(x)\,{\rm d}x = \widehat{u}(\varepsilon). $$ If $\widehat{u}\geq 0$ in the sense that $\widehat{u}(\eta)\geq 0$ for all $\eta\in\widehat G$ then $u(e)=\displaystyle\int\widehat{u}(\eta)\,{\rm d}\eta\geq 0$. \begin{exa} {\rm For $1\leq p<\infty$ the {\it Schatten-$p$-norm} of a matrix $M\in\mathbb C^{d\times d}$ is $$ \|M\|_{S^p} := \left({\rm tr}(|M|^p)\right)^{1/p}, $$ where $|M|:=(M\,M^\ast)^{1/2}$. The {\it operator norm} $\|M\|_{op}$ or the {\it Schatten-$\infty$-norm} is the largest singular value of $M$, $$ \|M\|_{op} = \|M\|_{S^\infty} = \lim_{p\to\infty} \|M\|_{S^p}, $$ or alternatively $\|M\|_{op} = \sup\{\|Mz\|_{\mathbb C^d}:\,\|z\|_{\mathbb C^d}\leq 1\}$, where $\|z\|_{\mathbb C^d}^2 = \sum_{k=1}^d |z_k|^2$. Here $\|M\|_{S^1}={\rm tr}(|M|)$ is the {\it trace class norm}, and $\|M\|_{HS}=\|M\|_{S^2}$ is the {\it Hilbert--Schmidt norm}. The Lebesgue spaces $L^p(\widehat{G})$ have the norms given by \begin{eqnarray} \|\widehat{u}\|_{L^p(\widehat{G})} & := & \left(\int |\widehat{u}(\eta)|^p\,{\rm d}\eta \right)^{1/p},\\ \|\widehat{u}\|_{L^\infty(\widehat{G})} & := & \sup_{\eta\in\widehat{G}} \|\widehat{u}(\eta)\|_{op}. \end{eqnarray} If $1\leq p,q\leq\infty$ such that $1/p+1/q=1$, then $$ |u\ast v(e)| = \left| \int \widehat{v}(\eta)\,\widehat{u}(\eta)\,{\rm d}\eta \right| \leq \int |\widehat{v}(\eta)\,\widehat{u}(\eta)|\,{\rm d}\eta \leq \|\widehat{u}\|_{L^p(\widehat{G})}\,\|\widehat{v}\|_{L^q(\widehat{G})}.
$$ } \end{exa} \paragraph{Dirac and Kronecker deltas.} In the distributional sense, the Fourier inverse formula $\displaystyle u(x) = \int \eta(x)\ \widehat{u}(\eta)\,{\rm d}\eta $ gives the expression $$ \delta_e(x) = \int \eta(x)\,{\rm d}\eta $$ for the Dirac delta distribution $\delta_e\in C'(G)$ at $e\in G$. Also, for $\eta\in\widehat{G}$, $$ \int \eta(x)^\ast\,{\rm d}x = \widehat{\bf 1}(\eta) = \delta_\varepsilon(\eta)\,I\in\mathbb C^{d_\eta\times d_\eta}, $$ where the Kronecker delta $\delta_\varepsilon$ at $\varepsilon\in\widehat{G}$ satisfies $ \delta_\varepsilon(\eta) := \begin{cases} 1 & {\rm if}\ \varepsilon=\eta\in\widehat{G},\\ 0 & {\rm if}\ \varepsilon\not=\eta\in\widehat{G}. \end{cases} $ \begin{exa} {\rm The compact commutative Lie groups $G$ are easy to list up to an isomorphism: such a $G$ is isomorphic to a product of finitely many discrete cyclic groups $\mathbb Z/N\mathbb Z$ and a flat torus $\mathbb T^n=\mathbb R^n/\mathbb Z^n$ for some $n\in\mathbb N=\{0,1,2,3,\cdots\}$. Let us review the notions above in the familiar case of the torus group $G=\mathbb T^n=\mathbb R^n/\mathbb Z^n$. The Haar measure on $G$ is given by the usual Lebesgue measure, and for functions $u\in L^2(G)$ the traditional Fourier coefficient transform $\widehat{u}:\mathbb Z^n\to\mathbb C$ is defined by $$ \widehat{u}(\eta) := \int_{\mathbb T^n} {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,u(y)\,{\rm d}y. $$ The inverse Fourier transform is given by the $L^2$-converging Fourier series $$ u(x) = \sum_{\eta\in\mathbb Z^n} {\rm e}^{+{\rm i}2\pi x\cdot\eta} \,\widehat{u}(\eta). $$ Here the irreducible unitary representations are one-dimensional $$ x\mapsto {\rm e}^{+{\rm i}2\pi x\cdot\eta}, $$ and we may obviously identify $\widehat{G}$ with $\mathbb Z^n$, which is a non-compact discrete commutative group. The convolutions are now given by $$ u\ast v(x) = \int_{\mathbb T^n} u(x-y)\,v(y)\,{\rm d}y,\quad\quad\quad \widehat{u}\ast\widehat{v}(\xi) = \sum_{\eta\in\mathbb Z^n} \widehat{u}(\xi-\eta)\,\widehat{v}(\eta). $$ } \end{exa} \section{Hopf algebras of functions and distributions} The test function spaces $\mathscr T(G)$ of trigonometric polynomials and $\mathscr S(G)$ of Schwartz functions can be endowed with Hopf algebra structures. Notice that \begin{eqnarray*} \mathscr T(G\times G) & \cong & \mathscr T(G)\otimes\mathscr T(G),\\ \mathscr S(G\times G) & \cong & \mathscr S(G) \hat{\otimes} \mathscr S(G), \end{eqnarray*} where $\otimes$ denotes the algebraic tensor product, and $\hat{\otimes}$ the projective tensor product. The commutative unital $C^\ast$-algebra $C(G)$ of continuous functions has involution $\iota:C(G)\to C(G)$ given by $\iota u(x):=u(x)^\ast$. Let us define mappings \begin{eqnarray} m_0: C(G\times G)\to C(G), && m_0 w(x):=w(x,x),\\ \eta_0:\mathbb C\to C(G), && \eta_0(\lambda):=\lambda\,{\bf 1},\\ \Delta_0: C(G)\to C(G\times G), && \Delta_0 u(x,y):=u(xy),\\ \varepsilon_0: C(G)\to\mathbb C, && \varepsilon_0 u := u(e),\\ S_0: C(G)\to C(G), && S_0 u(x) := u(x^{-1}). \end{eqnarray} When these mappings are restricted to trigonometric polynomials and to Schwartz test functions, respectively, $\mathscr T(G)$ and $\mathscr S(G)$ can be regarded as Hopf algebras.
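\paragraph{Numerical aside.} Before dualizing, here is a small illustration of our own (NumPy assumed, names ours): on $\mathbb Z/N\mathbb Z$ the mappings $\Delta_0$, $\varepsilon_0$ and $S_0$ become finite arrays, and the Hopf algebra axioms become pointwise identities that can be tested directly.
\begin{verbatim}
import numpy as np

N = 10
rng = np.random.default_rng(4)
u = rng.standard_normal(N) + 1j * rng.standard_normal(N)

X, Y = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
Delta0_u = u[(X + Y) % N]          # coproduct: Delta_0 u(x,y) = u(x+y)

# Counit axiom (varepsilon_0 applied in the first slot): u(e y) = u(y).
print(np.allclose(Delta0_u[0, :], u))                        # True

# Antipode axiom m_0 (S_0 tensor I) Delta_0 u = varepsilon_0(u) 1:
# the diagonal of (x,y) -> u(-x+y) is the constant u(0).
antipode_side = u[(-X + Y) % N]    # (S_0 tensor I) Delta_0 u
print(np.allclose(np.diag(antipode_side), u[0]))             # True
\end{verbatim}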
By dualizing the structure of $\mathscr T(G)$, we obtain mappings \begin{eqnarray*} && (m_1,\eta_1,\Delta_1,\varepsilon_1,S_1)\\ & := & (\Delta_0',\varepsilon_0',m_0',\eta_0',S_0') \end{eqnarray*} where for $f,g\in\mathscr T'(G)$ we have \begin{eqnarray} m_1(f\otimes g)=f\ast g, && \mathscr F m_1(f\otimes g)(\xi)=\widehat{f}(\xi)\,\widehat{g}(\xi),\\ \eta_1(\lambda) = \lambda\,\delta_e, && \mathscr F(\eta_1(\lambda))(\xi)=\lambda\,I \in\mathbb C^{d_\xi\times d_\xi},\\ \Delta_1 f(x,y)=f(x)\,\delta_x(y), && \widehat{\Delta_1 f}(\xi\otimes\eta) = \widehat{f}(\xi\otimes\eta), \\ \varepsilon_1(f) = \int f(x)\,{\rm d}x, && \varepsilon_1(f) = \widehat{f}(\varepsilon), \\ S_1 f(x) = f(x^{-1}), && \widehat{S_1 f}(\eta)=\widehat{f}(\eta^\ast)^T. \end{eqnarray} Here $M^T$ is the transpose of matrix $M$, and $\eta^\ast\in\widehat{G}$ is the contragredient representation of $\eta\in\widehat{G}$, defined by $\eta^\ast(x):=\eta(x^{-1})^T$. \section{Symplectic Fourier transform} We call $G\times\widehat{G}$ the {\it time-frequency plane} (or the {\it position-momentum space}, or the {\it phase-space}), where time-frequency points $(x,\eta)\in G\times\widehat{G}$ consist of {\it time} $x\in G$ and of {\it frequency} $\eta\in\widehat{G}$. We shall deal with Hilbert space $L^2(G\times\widehat{G})$, where the inner product is given by \begin{equation} \langle b,a\rangle = \iint b(x,\eta)\,a(x,\eta)^\ast \,{\rm d}\eta\,{\rm d}x. \end{equation} Here the matrix elements of $x\mapsto a(x,\eta)\in\mathbb C^{d_\eta\times d_\eta}$ belong to $L^2(G)$ for all $\eta\in\widehat{G}$. The {\it ambiguity plane} \begin{equation} \widehat{G}\times G = \left\{(\xi,y):\ \xi\in\widehat{G}, y\in G\right\} \end{equation} is the Fourier dual to the time-frequency plane $G\times\widehat{G}$ by the {\it symplectic Fourier transform} $F$, which is the linear isomorphism \begin{equation} F=(\mathscr F\otimes I)(I\otimes\mathscr F^{-1}): L^2(G\times\widehat{G})\to L^2(\widehat{G}\times G). \end{equation} Thus if $a\in L^2(G\times\widehat{G})$ then $Fa\in L^2(\widehat{G}\times G)$, \begin{equation} Fa(\xi,y) = \int \xi(x)^\ast \int \eta(y)\,a(x,\eta) \,{\rm d}\eta\,{\rm d}x. \end{equation} As in traditional signal processing, here we may call $y\in G$ the {\it time-delay} or {\it lag} variable, and $\xi\in\widehat{G}$ the {\it frequency-delay} or {\it Doppler} variable. The inverse symplectic Fourier transform is then given by \begin{equation} a(x,\eta) = \int \eta(y)^\ast \int \xi(x)\, Fa(\xi,y) \,{\rm d}\xi\,{\rm d}y. \end{equation} Then $$ \langle Fa,Fb \rangle = \langle a,b\rangle,\quad\quad\quad \|Fa\|^2 = \langle Fa,Fa\rangle = \langle a,a\rangle = \|a\|^2. $$ Matrix-valued functions on $G\times\widehat{G}$ and $\widehat{G}\times G$ can be multiplied ``pointwise'': $$ (ab)(x,\eta):= a(x,\eta)\,b(x,\eta),\quad\quad\quad ((Fa)Fb)(\xi,y):=Fa(\xi,y)\,Fb(\xi,y). $$ Then the {\it convolution} $a\ast b$ of $a,b$ on $G\times\widehat{G}$ is defined by \begin{equation} a\ast b := F^{-1}((Fb)Fa). \end{equation} For example, $a\ast I = \lambda I$, where \begin{equation} \lambda = Fa(\varepsilon,e) = \iint a(x,\eta)\,{\rm d}\eta\,{\rm d}x\in\mathbb C. \end{equation} We shall also need spaces of matrix-valued test functions and distributions.
Especially, we have linear isomorphisms \begin{eqnarray} (I\otimes\mathscr F) & : & \mathscr S(G\times G)\to\mathscr S(G\times\widehat{G}), \\ F & : & \mathscr S(G\times\widehat{G})\to\mathscr S(\widehat{G}\times G), \end{eqnarray} where we have the projective tensor product isomorphisms \begin{eqnarray*} \mathscr S(G\times G) & \cong & \mathscr S(G)\hat{\otimes}\mathscr S(G), \\ \mathscr S(G\times\widehat{G}) & \cong & \mathscr S(G)\hat{\otimes}\mathscr S(\widehat{G}), \\ \mathscr S(\widehat{G}\times G) & \cong & \mathscr S(\widehat{G})\hat{\otimes}\mathscr S(G). \end{eqnarray*} Then $\mathscr S'(\ldots)$ will denote the respective distribution space corresponding to the test function space $\mathscr S(\ldots)$. \section{Kohn--Nirenberg quantization} The Kohn--Nirenberg quantization of pseudo-differential operators serves as the starting point from which all the different time-frequency transforms will be obtained. The idea of the Kohn--Nirenberg pseudo-differential operators on compact Lie groups was introduced by Taylor in \cite{Taylor}, and further investigated e.g. in \cite{RuzhanskyTurunen,gardingRuzhanskyTurunen,RuzhanskyTurunenWirth,Fischer}. \begin{dfn} {\rm The {\it Kohn--Nirenberg symbol} $a\in\mathscr S'(G\times\widehat{G})$ of linear mapping $B:\mathscr S(G)\to\mathscr S'(G)$ is defined by \begin{equation}\label{DEFN:symbolKohnNirenberg} a(x,\eta) = \eta(x)^\ast B\eta(x), \end{equation} where matrix elements of $B\eta$ belong to $\mathscr S'(G)$. Then \begin{equation}\label{EQ:operatorKohnNirenberg} Bv(x) = \int \eta(x)\,a(x,\eta)\,\widehat{v}(\eta)\,{\rm d}\eta = \int a(x,\eta)\,\widehat{v}(\eta)\,\eta(x)\,{\rm d}\eta, \end{equation} and we call $a^R:=B$ the {\it Kohn--Nirenberg pseudo-differential operator} with symbol $a$. The invertible mapping $a\mapsto a^R$ is called the {\it Kohn--Nirenberg quantization}. For $u,v\in\mathscr S(G)$, we define the corresponding {\it Kohn--Nirenberg {\rm (or the {\it Rihaczek})} time-frequency transform} $R(u,v)\in\mathscr S(G\times\widehat{G})$ by \begin{equation}\label{EQ:quantizationKohnNirenberg} \langle u,a^{R}v\rangle = \langle R(u,v),a\rangle \end{equation} for all symbols $a\in\mathscr S'(G\times\widehat{G})$. Then the {\it Kohn--Nirenberg ambiguity transform} is $FR(u,v)=(\mathscr F\otimes I)(I\otimes\mathscr F^{-1})R(u,v) \in\mathscr S(\widehat{G}\times G)$. } \end{dfn} \begin{rem} {\rm Combining \eqref{EQ:operatorKohnNirenberg} and \eqref{EQ:quantizationKohnNirenberg}, we obtain \begin{equation}\label{EQ:KN} R(u,v)(x,\eta) = u(x)\,\eta(x)^\ast\,\widehat{v}(\eta)^\ast \quad\in\quad\mathscr B(\mathscr H_\eta). \end{equation} Especially, $R(u,v)(e,\varepsilon)=u(e)\,\widehat{v}(\varepsilon)^\ast\in\mathbb C$. Notice that the same definition extends directly to distributions $u,v\in\mathscr S'(G)$, so that $R(u,v)\in\mathscr S'(G\times\widehat{G})$, and then $FR(u,v)\in\mathscr S'(\widehat{G}\times G)$. Moreover, \begin{eqnarray} FR(u,v)(\xi,y) & = & \int \xi(x)^\ast \int \eta(y)\,R(u,v)(x,\eta) \,{\rm d}\eta\,{\rm d}x\\ & = & \int \xi(x)^\ast\,u(x)\,v(xy^{-1})^\ast\,{\rm d}x \quad\in\quad \mathscr B(\mathscr H_\xi). \end{eqnarray} Especially, $FR(u,v)(\varepsilon,e) =\langle u,v\rangle =\langle\widehat{u},\widehat{v}\rangle$. Notice that $FR(u,v)(\xi,y) = \widehat{f_y\,}(\xi)$, where $f_y(x)=u(x)\,v(xy^{-1})^\ast$. The Cauchy--Schwarz inequality yields \begin{equation} \int |f_y(x)|\,{\rm d}x \leq \|u\|\,\|v\|.
\end{equation} } \end{rem} \begin{rem} {\rm On a compact group $G$, the Kohn--Nirenberg transform $R$ maps $\mathscr T(G)\times\mathscr T(G)$ to $\mathscr T(G\times\widehat{G})$. Why? Let $u,v\in\mathscr T(G)$. Since $$ u(x) = (u\otimes{\bf 1})(x,y),\quad v(xy^{-1})^\ast = \left((I\otimes S_0)\Delta_0(\iota v)\right)(x,y), $$ this shows that both $(x,y)\mapsto u(x)$ and $(x,y)\mapsto v(xy^{-1})^\ast$ belong to $\mathscr T(G\times G)$. Hence also $(x,y)\mapsto u(x)\,v(xy^{-1})^\ast$ belongs to $\mathscr T(G\times G)$. By similar reasoning, we see that the Kohn--Nirenberg transform maps $\mathscr S(G)\times\mathscr S(G)$ to $\mathscr S(G\times\widehat{G})$. } \end{rem} \section{Time-frequency transforms and quantizations} Time-frequency transform $(u,v)\mapsto D(u,v)$ will be ``time-frequency invariant'', taking test functions of time to matrix-valued test functions of time-frequency. More precisely: \begin{dfn} {\rm {\it Time-frequency transform} $D:\mathscr S(G)\times\mathscr S(G)\to\mathscr S(G\times\widehat{G})$ is a mapping of the form \begin{equation}\label{DEFN:TFtransform} D(u,v) := F^{-1}\left(\phi_D\,FR(u,v)\right), \end{equation} where $\phi_D\in\mathscr S'(\widehat{G}\times G)$ is the {\it ambiguity kernel} (or {\it Doppler-lag kernel}). Time-frequency transform is called {\it band-limited} if it maps $\mathscr T(G)\times\mathscr T(G)$ to $\mathscr T(G\times\widehat{G})$. } \end{dfn} \begin{rem} {\rm Trigonometric function $u\in\mathscr T(G)$ is {\it band-limited} in the sense that it has only finitely many non-zero Fourier coefficients. For the Kohn--Nirenberg transform $R$, notice that $\phi_R(\xi,y)=I$ for all $(\xi,y)\in\widehat{G}\times G$. Thus we may have $\phi_D\not\in\mathscr S(\widehat{G}\times G)$. Nevertheless, \begin{equation} FD(u,{\bf 1})(\xi,y)=\phi_D(\xi,y)\,\widehat{u}(\xi) \end{equation} for all $u\in\mathscr T(G)$. Hence the matrix elements of $y\mapsto\phi_D(\xi,y)$ belong to $\mathscr S(G)$ for each $\xi\in\widehat{G}$. Band-limitedness of $D$ is equivalent to these matrix elements being trigonometric polynomials: for instance, the Kohn--Nirenberg transform $R$ is band-limited. The time-frequency transform can also be expressed as \begin{eqnarray} D(u,v)(x,\eta) & = & \int \eta(y)^\ast \int \xi(x) \,\phi_D(\xi,y)\,FR(u,v)(\xi,y)\,{\rm d}\xi\,{\rm d}y \\ & = & R(u,v)\ast\psi_D(x,\eta), \end{eqnarray} where $\psi_D=F^{-1}(\phi_D)$ is the {\it time-frequency kernel} of $D$, corresponding to the {\it ambiguity kernel} $\phi_D=F(\psi_D)$. Sometimes we need the {\it time-lag kernel} $\varphi_D = (I\otimes\mathscr F^{-1})\psi_D = (\mathscr F^{-1}\otimes I)\phi_D$. Notice that the kernels $$ \psi_D(x,\eta),\quad \varphi_D(x,y),\quad \phi_D(\xi,y) $$ contain the same information, with different variables $x,y\in G$ and $\xi,\eta\in\widehat{G}$. With the approach above, we have avoided finding ``frequency modulations'' on non-commutative groups; the commutative case still works fine, and yet we obtain many essential features also in the non-commutative setting. } \end{rem} \begin{rem} {\rm If $u,v\in\mathscr T'(G)$ then $\phi_D\,FR(u,v)\in\mathscr T'(\widehat{G}\times G)$, so that we can define \begin{equation} D(u,v):=F^{-1}(\phi_D\,FR(u,v))\in\mathscr T'(G\times\widehat{G}). \end{equation} } \end{rem} \begin{dfn} {\rm Let $D$ be a time-frequency transform. The corresponding {\it $D$-quantization} $a\mapsto a^D$ satisfies \begin{equation}\label{DEFN:quantization} \langle u,a^D v\rangle = \langle D(u,v),a\rangle.
\end{equation} Linear operators $a^D=(v\mapsto a^D v)$ are called {\it $D$-pseudo-differential operators}. } \end{dfn} In the sequel, we investigate how the properties of different kernels affect the properties of the time-frequency transform $D$ and the $D$-quantization $a\mapsto a^D$. Due to \eqref{DEFN:quantization}, $a^Dv\in\mathscr S'(G)$ if $v\in\mathscr S(G)$ and $a\in\mathscr S'(G\times\widehat{G})$. Moreover, if $v\in\mathscr S'(G)$ and $a\in\mathscr S(G\times\widehat{G})$, then $a^Dv\in\mathscr S(G)$. Thereby we have \begin{eqnarray} a^D:\mathscr S(G)\to\mathscr S'(G) & {\rm if} & a\in\mathscr S'(G\times\widehat{G}),\\ a^D:\mathscr S'(G)\to\mathscr S(G) & {\rm if} & a\in\mathscr S(G\times\widehat{G}). \end{eqnarray} Different quantizations can be linked to the Kohn--Nirenberg case: \begin{lem}\label{LEM:C2R} Let $D$ be a time-frequency transform, and let $a\in\mathscr S'(G\times\widehat{G})$. Then $a^D=b^R$, where $Fb(\xi,y)=\phi_D(\xi,y)^\ast\,Fa(\xi,y)$. \end{lem} \paragraph{Proof.} Noticing that $$ \langle D(u,v),a\rangle = \langle FD(u,v),Fa\rangle = \langle FR(u,v),Fb\rangle = \langle R(u,v),b\rangle, $$ we obtain $\langle u,a^D v\rangle = \langle u,b^R v\rangle$. \hfill {\bf QED} \begin{dfn} {\rm For a time-frequency transform $(u,v)\mapsto D(u,v)$, we call \begin{equation} D[u]:=D(u,u) \end{equation} the {\it time-frequency distribution} of signal $u$. Notice that $D[\lambda u]=|\lambda|^2 D[u]$ for all $\lambda\in\mathbb C$, so define the equivalence class $[u]$ of {\it indistinguishable signals} by \begin{equation} [u] := \left\{ \lambda u:\ \lambda\in\mathbb C,\ |\lambda|=1 \right\}. \end{equation} } \end{dfn} Value $D[u](x,\eta)\in\mathscr B(\mathscr H_\eta)$ presents an idealized operator-valued energy density at time-frequency $(x,\eta)\in G\times\widehat{G}$ for a scalar-valued signal $u:G\to\mathbb C$. Since we work over the complex scalars, the polarization identity shows that the numeric data families \begin{equation}\label{EQ:uaCu} \left(\langle u,a^Du\rangle\right)_{u\in\mathscr S(G)} \quad{\rm and}\quad \left( \langle u,a^Dv\rangle\right)_{u,v\in\mathscr S(G)} \end{equation} mediate the same information. Thereby the {\it invertibility} of time-frequency transform $D$ refers to the invertibility of the mapping $[u]\mapsto D[u]$. This amounts to the properties of ambiguity kernel $\phi_D$. Invertibility is not merely ``being bijective''; it also involves numerical stability (cf. the inverse problem for the traditional heat equation). For invertibility, we need $\phi_D(\xi,y)$ to be invertible for almost every $(\xi,y)\in\widehat{G}\times G$, and, for numerical stability, that $\phi_D$ grows or decays at most polynomially at infinity. The Kohn--Nirenberg transform is invertible, since $$ \int \eta(y)\,R[u](x,\eta)\,{\rm d}\eta = u(x)\,u(xy^{-1})^\ast. $$ \begin{exa} {\rm An analogue of Wigner-$\tau$-pseudo-differential operators on certain families of locally compact groups was introduced and studied in \cite{MantoiuRuzhansky}. On a compact group, this Wigner-$\tau$-quantization would formally correspond to our time-frequency transform $D$, which has the ambiguity kernel of the form $$ \phi_D(\xi,y) = \xi(\tau(y)), $$ where $\tau:G\to G$ is a suitable function. } \end{exa} \paragraph{Boundedness in energy.} What if also $\phi_D\in L^\infty(\widehat{G}\times G)$?
In other words, $\phi_D$ would be bounded in the sense that $\|\phi_D\|_{L^\infty} < \infty$ for \begin{equation} \|\phi_D\|_{L^\infty} = \sup_{(\xi,y)\in\widehat{G}\times G} \|\phi_D(\xi,y)\|_{op}, \end{equation} where $\|M\|_{op}$ is the spectral norm of operator $M$. We obtain the following boundedness result on $L^2$-spaces, where norms $\|f\|$ are the appropriate $L^2$-norms: \begin{thm}\label{THM:L2boundedness} Let $\phi_D\in L^\infty(\widehat{G}\times G)$ for a time-frequency transform $D$. Then \begin{eqnarray} \|D(u,v)\| & \leq & \|\phi_D\|_{L^\infty}\,\|u\|\,\|v\|, \label{INEQ:L2C}\\ \|a^Dv\| & \leq & \|\phi_D\|_{L^\infty}\,\|a\|\,\|v\|, \end{eqnarray} for all $u,v\in L^2(G)$ and $a\in L^2(G\times\widehat{G})$. For the Kohn--Nirenberg transform, $\|R(u,v)\|=\|u\|\,\|v\|$, and $\|a^Rv\|\leq \|a\|\,\|v\|$. \end{thm} \paragraph{Proof.} In the special case of the Kohn--Nirenberg transform, $\|\phi_R\|_{L^\infty}=1$ as $\phi_R(\xi,y)=I$ for all $(\xi,y)\in\widehat{G}\times G$. Moreover, \begin{eqnarray*} \|R(u,v)\|^2 & = & \langle R(u,v),R(u,v)\rangle \\ & = & \iint u(x)\,\eta(x)^\ast\,\widehat{v}(\eta)^\ast \,\widehat{v}(\eta)\,\eta(x)\,u(x)^\ast\,{\rm d}\eta\,{\rm d}x\\ & = & \int |u(x)|^2\,{\rm d}x \int \widehat{v}(\eta)^\ast\,\widehat{v}(\eta) \,{\rm d}\eta\\ & = & \|u\|^2\,\|v\|^2. \end{eqnarray*} The $L^2$-norm is preserved in the symplectic Fourier transform: $$ \|D(u,v)\| = \|FD(u,v)\| = \| \phi_D\,FR(u,v)\|. $$ Let $\|M\|_{HS}=({\rm tr}(M M^\ast))^{1/2}$ denote the Hilbert--Schmidt norm. Recall that $\|MN\|_{HS}\leq \|M\|_{op}\|N\|_{HS}$. Thereby \begin{eqnarray*} \|\phi_D\,FR(u,v)\|^2 & = & \iint \|\phi_D(\xi,y)\,FR(u,v)(\xi,y)\|_{HS}^2 \,{\rm d}\xi \,{\rm d}y \\ & \leq & \iint \|\phi_D(\xi,y)\|_{op}^2 \, \|FR(u,v)(\xi,y)\|_{HS}^2 \,{\rm d}\xi \,{\rm d}y \\ & \leq & \|\phi_D\|_{L^\infty}^2\,\|FR(u,v)\|^2 \ = \ \|\phi_D\|_{L^\infty}^2\,\|R(u,v)\|^2. \end{eqnarray*} Inequality \eqref{INEQ:L2C} follows from this, because $\|R(u,v)\|=\|u\|\,\|v\|$. Hence by the Cauchy--Schwarz inequality we obtain \begin{eqnarray*} \left|\langle u,a^D v\rangle\right| = \left|\langle D(u,v),a\rangle\right| \leq \|D(u,v)\|\,\|a\| \leq \|\phi_D\|_{L^\infty}\,\|u\|\,\|v\|\,\|a\|, \end{eqnarray*} completing the proof. \hfill {\bf QED} ${}$ Let us emphasize the invariance under the time translations, in the sense that $D[v](x,\eta)=D[u](yx,\eta)$ if $v(x)=u(yx)$. The frequency modulations are more elusive in the non-commutative case, but nevertheless, the message of the next result is that the information can be ``shifted'' to the specific point $(e,\varepsilon)$ in the time-frequency plane: \begin{thm}\label{THM:localizationoriginal} Time-frequency transform $D$ can be recovered from the evaluation mapping $(u\mapsto D[u](e,\varepsilon)):\mathscr S(G)\to\mathbb C$. \end{thm} \paragraph{Proof.} For $u,v\in\mathscr S(G)$ we have \begin{eqnarray*} D(u,v)(x,\eta) & = & \int \eta(y)^\ast \int \xi(x)\,\phi_D(\xi,y) \int \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t\,{\rm d}\xi\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \varphi_D(t^{-1}x,y)\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t\,{\rm d}y. \end{eqnarray*} Especially, \begin{equation}\label{EQ:original} \langle u,\delta^D u\rangle = D[u](e,\varepsilon) = \int u(x) \left(\int \varphi_D(x^{-1},y^{-1}x)^\ast\,u(y)\,{\rm d}y \right)^\ast {\rm d}x, \end{equation} where $\delta=\delta_{(e,\varepsilon)}$ is the Dirac--Kronecker delta distribution at $(e,\varepsilon)\in G\times\widehat{G}$. 
Hence from knowing all $D[u](e,\varepsilon)$ we obtain $\varphi_D$ and thereby $D$. \hfill {\bf QED} \begin{rem}{\rm Notice that in the statement of the previous Theorem on a compact group $G$, we could replace the test function space $\mathscr S(G)$ by $\mathscr T(G)$.} \end{rem} \paragraph{Original localizations.} Let us call $(e,\varepsilon)\in G\times\widehat{G}$ the {\it origin of the time-frequency plane} $G\times\widehat{G}$. As seen in the proof of Theorem~\ref{THM:localizationoriginal} above, the {\it original localization} pseudo-differential operator $\delta^D=(\delta_{(e,\varepsilon)})^D:\mathscr S(G)\to\mathscr S'(G)$ encodes all the information about the time-frequency transform $D$. The original localization $\delta^D$ is bounded on $L^2(G)$ if and only if \begin{equation}\label{DEFN:uniformbound} |D(u,v)(e,\varepsilon)| = |\langle u,\delta^D v\rangle| \leq c\,\|u\|\,\|v\| \end{equation} for all $u,v\in\mathscr S(G)$, where $c<\infty$ is a constant. Original localizations provide an alternative way to understand time-frequency transforms. Notice that if $K_{\delta^D}$ is the Schwartz integral kernel of the original localization $\delta^D$, then \begin{equation}\label{EQ:Cfrom0} D(u,v)(x,\eta) = \iint u(xz)\,\eta(z)^\ast \,K_{\delta^D}(z,y)^\ast\,\eta(y)\,v(xy)^\ast \,{\rm d}z\,{\rm d}y, \end{equation} in analogy to the Euclidean case \eqref{EQ:kernelEuclidean}. Here $K_{\delta^D}(z,y)^\ast = \varphi_D(z^{-1},y^{-1}z)$, that is $\varphi_D(x,y)=K_{\delta^D}(x^{-1},x^{-1}y^{-1})^\ast$. Hence \begin{equation}\label{EQ:ambiguitySchwartz} \phi_D(\xi,y) = \int K_{\delta^D}(x,xy^{-1})^\ast\,\xi(x)\,{\rm d}x. \end{equation} Moreover, if we define {\it amplitude} $a_D$ by \begin{equation} a_D(x,y,\eta):=\int a(t,\eta)\,K_{\delta^D}(t^{-1}x,t^{-1}y)\,{\rm d}t \end{equation} then $\displaystyle a^D v(x) = \int K_{a^D}(x,y)\,v(y)\,{\rm d}y$ for the Schwartz kernel $K_{a^D}$: \begin{equation} K_{a^D}(x,y) = \int \eta(y^{-1}x)\,a_D(x,y,\eta)\,{\rm d}\eta. \end{equation} \begin{exa} {\rm Since $R(u,v)(e,\varepsilon)=u(e)\,\widehat{v}(\varepsilon)^\ast$, the Kohn--Nirenberg original localization is given by \begin{equation} \delta^{R}v(x) = \widehat{v}(\varepsilon)\,\delta_e(x) =\int v(y)\,{\rm d}y\ \delta_e(x). \end{equation} Here $\delta^{R}:\mathscr S(G)\to\mathscr S'(G)$ is unbounded on $L^2(G)$ unless $G$ is finite. Amplitude $a_R$ of $a^R$ satisfies $a_R(x,y,\eta)=a(x,\eta)$. The so-called {\it anti-Kohn--Nirenberg transform} $R^\ast$ satisfies $R^\ast(u,v)(e,\varepsilon)=\widehat{u}(\varepsilon)\,v(e)^\ast$. Its original localization satisfies $ \delta^{(R^\ast)} v(x) = v(e), $ and its amplitudes are given by $a_{R^\ast}(x,y,\eta)=a(y,\eta)$. } \end{exa} \section{Uncertainty and original localizations} In this section we discuss original localizations related to the Heisenberg uncertainty principle in quantum mechanics. Our quantum states $u$ are unit vectors in the Hilbert space $\mathscr H=L^2(G)$, identifying states $u,v$ whenever $[u]=[v]$. Bounded observables are self-adjoint operators $A:\mathscr H\to \mathscr H$, and \begin{equation} \begin{cases} \mu = \mu_A^u := \langle Au,u\rangle, & \\ \sigma = \sigma_A^u := \|Au-\mu u\| & \end{cases} \end{equation} are the {\it expectation} and the {\it deviation} of measurement $A$ in state $u$, respectively. For instance, let $A=\sum_{\alpha\in J} \alpha\,P_\alpha$ where $P_\alpha$ is an orthogonal projection, with distinct measured values $\alpha\in J\subset\mathbb R$. 
Then the interpretation is the following: in initial state $u$, our measurement gives value $\alpha\in J$ with probability $\|P_\alpha u\|^2$, and then $u$ collapses to state $P_\alpha u/\|P_\alpha u\|$. Let $A,B$ be bounded observables. The {\it uncertainty observable} of the pair $(A,B)$ is \begin{equation} -{\rm i}\hbar^{-1} [A,B] = -{\rm i}\hbar^{-1} (AB-BA), \end{equation} where we normalize the Dirac--Planck constant so that $\hbar := (2\pi)^{-1}$. Applying the Cauchy--Schwarz inequality, we obtain the Heisenberg uncertainty inequality \begin{equation} \left|\mu_{-{\rm i}\hbar^{-1}[A,B]}^u\right| \leq 2\hbar^{-1}\,\sigma_A^u\,\sigma_B^u. \end{equation} Suppose that above $A$ is a ``position operator'' and $B$ a ``momentum operator'': $Au=fu$ and $\widehat{Bu}=\widehat{u}\,\widehat{g}$ (that is $Bu=g\ast u$), initially with $f,g\in\mathscr S(G)$ (later considering $f,g\in\mathscr S'(G)$), where for self-adjointness we should have real-valued ``coordinate function'' $f$, and $g(z)^\ast=g(z^{-1})$. If $A,B$ are able to distinguish $(e,\varepsilon)\in G\times\widehat{G}$ in a reasonable fashion, then a good candidate for an original localization $\delta^D$ would be given by \begin{equation} \delta^D := -{\rm i}2\pi [A,B]. \end{equation} Then \begin{eqnarray*} D(u,v)(e,\varepsilon) & = & \iint {\rm i}2\pi\left(f(x)-f(y)\right) g(xy^{-1})^\ast\,u(x)\,v(y)^\ast \,{\rm d}y\,{\rm d}x. \end{eqnarray*} We shall return to this uncertainty commutator approach when dealing with cyclic groups in Section~\ref{SEC:cyclic}. If here $f=\delta_e\in\mathscr S'(G)$ and $g={\bf 1}$ then $$ D(u,v) = {\rm i}2\pi \left( R(u,v)-R^\ast(u,v) \right), $$ where the conjugate transforms $R^\ast(u,v):=R(v,u)^\ast$ will be studied in Section~\ref{SEC:symmetry}. \section{Symmetry}\label{SEC:symmetry} \begin{dfn} {\rm Let $D:\mathscr S(G)\times\mathscr S(G)\to\mathscr S(G\times\widehat{G})$ be a time-frequency transform. We define its {\it conjugate} \begin{equation}\label{DEFN:conjugateC} D^\ast:\mathscr S(G)\times\mathscr S(G)\to\mathscr S(G\times\widehat{G}) \end{equation} by $D^\ast(u,v):=D(v,u)^\ast$, more precisely \begin{equation}\label{DEFN:cC} D^\ast(u,v)(x,\eta) := D(v,u)(x,\eta)^\ast \end{equation} for all $u,v\in\mathscr S(G)$ and $(x,\eta)\in G\times\widehat{G}$. We call time-frequency transform $D$ {\it symmetric} if $D^\ast=D$. The $D$-quantization is {\it symmetric} if for all $u,v\in\mathscr S(G)$ $$ \langle a^D u,v\rangle = \langle u,a^D v\rangle $$ whenever $a\in\mathscr S(G\times\widehat{G})$ satisfies $a(x,\eta)^\ast=a(x,\eta)$ for all $(x,\eta)\in G\times\widehat{G}$. } \end{dfn} \begin{thm} Mapping $D^\ast$ defined in \eqref{DEFN:conjugateC},\eqref{DEFN:cC} is a time-frequency transform. Moreover, the following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] For all $u\in\mathscr S(G)$ we have $D[u](e,\varepsilon)\in\mathbb R$. \item[{\rm (b)}] Time-frequency transform $D$ is symmetric. \item[{\rm (c)}] The $D$-quantization is symmetric. \item[{\rm (d)}] The time-lag kernel $\varphi_D=(I\otimes\mathscr F^{-1})\psi_D=(\mathscr F^{-1}\otimes I)\phi_D$ satisfies \begin{equation}\label{EQ:symmetry} \varphi_D(x,y)^\ast = \varphi_D(yx,y^{-1}). \end{equation} \end{itemize} \end{thm} \begin{rem} {\rm The superficial non-symmetry in the appearance of \eqref{EQ:symmetry} is just due to the fact that the Kohn--Nirenberg transform itself is not symmetric.
Moreover, in the statement of the previous Theorem on a compact group $G$, we can replace the test function space $\mathscr S(G)$ by $\mathscr T(G)$. } \end{rem} \paragraph{Proof.} On the one hand, \begin{eqnarray*} D(u,v)(x,\eta) & = & \int \eta(y)^\ast \int \xi(x)\,\phi_D(\xi,y)\,FR(u,v)(\xi,y) \,{\rm d}\xi\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \xi(x)\,\phi_D(\xi,y) \int \xi(z)^\ast\,u(z)\,v(zy^{-1})^\ast \,{\rm d}z\,{\rm d}\xi\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \varphi_D(z^{-1}x,y)\,u(z)\,v(zy^{-1})^\ast \,{\rm d}z\,{\rm d}y. \end{eqnarray*} On the other hand, \begin{eqnarray*} D(v,u)(x,\eta)^\ast & = & \int \eta(y) \int \varphi_D(z^{-1}x,y)^\ast\,v(z)^\ast\,u(zy^{-1}) \,{\rm d}z\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \varphi_D(z^{-1}x,y^{-1})^\ast\,u(zy) \,v(z)^\ast\,{\rm d}z\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \varphi_D(yz^{-1}x,y^{-1})^\ast\,u(z) \,v(zy^{-1})^\ast\,{\rm d}z\,{\rm d}y, \end{eqnarray*} showing that \begin{equation}\label{EQ:timelagconjugate} \varphi_{D^\ast}(x,y) = \varphi_D(yx,y^{-1})^\ast, \end{equation} and leading to the equivalence of conditions (b) and (d). In the special case of $(x,\eta)=(e,\varepsilon)$ and $v=u$, this also gives the equivalence of (d) and (a). Moreover, if $D$ is symmetric and $a^\ast=a$, then $$ \langle a^D u,u\rangle = \langle u,a^D u\rangle^\ast = \langle D[u],a\rangle^\ast = \langle D[u]^\ast,a^\ast\rangle = \langle D[u],a\rangle = \langle u,a^D u\rangle, $$ so that $a\mapsto a^D$ is also symmetric. Thus (b) implies (c). Now suppose $a\mapsto a^D$ is symmetric. Let $(h_\alpha)_\alpha$ be a bounded left approximate identity with $0\leq h_\alpha\in\mathscr S(G)$. Define $a_\alpha\in\mathscr S(G\times\widehat{G})$ by $a_\alpha(x,\eta):=h_\alpha(x)\,\delta_\varepsilon(\eta)I$. Then $a_\alpha(x,\eta)^\ast=a_\alpha(x,\eta)$, and $$ D[u](e,\varepsilon) = \langle D[u],\delta_{(e,\varepsilon)}\rangle = \lim_{\alpha}\langle D(u,u),a_\alpha\rangle = \lim_{\alpha}\langle u,(a_\alpha)^D u\rangle, $$ which is real-valued due to the symmetry of the quantization. Hence condition (c) implies (a). \hfill {\bf QED} \begin{rem} {\rm Clearly, $(D^\ast)^\ast = D$. Notice also that \begin{equation} (a^D)^\ast = (a^\ast)^{(D^\ast)}, \end{equation} and especially $(\delta^D)^\ast=\delta^{(D^\ast)}$. This follows from $$ \langle u,(a^D)^\ast v\rangle = \langle v,a^D u\rangle^\ast = \langle D(v,u)^\ast,a^\ast\rangle = \langle D^\ast(u,v),a^\ast\rangle = \langle u,(a^\ast)^{(D^\ast)}v\rangle. $$ } \end{rem} \begin{exa} {\rm The conjugate $R^\ast$ of the Kohn--Nirenberg transform $R$ satisfies \begin{equation} R^\ast(u,v)(x,\eta) = R(v,u)(x,\eta)^\ast = \widehat{u}(\eta)\,\eta(x)\,v(x)^\ast. \end{equation} The corresponding pseudo-differential quantization satisfies \begin{eqnarray*} \langle u,a^{(R^\ast)} v\rangle & = & \langle R^\ast(u,v),a\rangle \\ & = & \iint \widehat{u}(\eta)\,\eta(x)\,v(x)^\ast\,a(x,\eta)^\ast \,{\rm d}\eta\,{\rm d}x \\ & = & \int u(y) \left( \iint a(x,\eta)\,v(x)\,\eta(x^{-1}y) \,{\rm d}\eta\,{\rm d}x\right)^\ast {\rm d}y, \end{eqnarray*} leading to \begin{equation} a^{(R^\ast)} v(x) = \iint \eta(y^{-1}x)\,a(y,\eta)\,v(y)\,{\rm d}\eta\,{\rm d}y. \end{equation} Mapping $a\mapsto a^{(R^\ast)}$ is called the {\it anti-Kohn--Nirenberg quantization}. It is easy to find that $\phi_{R^\ast}(\xi,y)=\xi(y)$. } \end{exa} \begin{exa} {\rm Let $D$ be a time-frequency transform.
Then $$ D=\frac{D+D^\ast}{2}+{\rm i}\,\frac{D-D^\ast}{2{\rm i}}, $$ where the symmetric time-frequency transforms $(D+D^\ast)/2$ and $-{\rm i}(D-D^\ast)/2$ could be called the respective {\it real} and {\it imaginary parts} of $D$. } \end{exa} \begin{exa}\label{EXA:falseWigner} {\rm For the moment, let us try to introduce Wigner distribution on compact groups $G$. The Euclidean space Wigner transform \eqref{DEFN:Wigner} satisfies \begin{eqnarray*} W(u,v)(x,\eta) & = & \int_{\mathbb R^n} {\rm e}^{-{\rm i}2\pi y\cdot\eta} \,u(x+y/2)\,v(x-y/2)^\ast\,{\rm d}y \\ & = & 2^n \int_{\mathbb R^n} {\rm e}^{-{\rm i}2\pi 2z\cdot\eta} \,u(x+z)\,v(x-z)^\ast\,{\rm d}z. \end{eqnarray*} It would be tempting to define the ``Wigner transform ${\bf W}$'' of $u,v\in\mathscr S(G)$ by \begin{equation}\label{DEFN:falseWigner} {\bf W}(u,v)(x,\eta) := \int \eta(z)^\ast\,u(xz)\,v(xz^{-1})^\ast\,{\rm d}z \end{equation} possibly up to a constant multiple, depending on $G$. The problem here is that $(xz^{-1})^{-1}(xz)=z^2$ is not the lag $z$ in time. Transform ${\bf W}$ would also be formally symmetric, as ${\bf W}(v,u)(x,\eta)^\ast={\bf W}(u,v)(x,\eta)$. Nevertheless, \begin{eqnarray*} F{\bf W}(u,{\bf 1})(\xi,y) & = & \int \xi(x)^\ast \int \eta(y) \int \eta(z)^\ast\,u(xz) \,{\rm d}z\,{\rm d}\eta\,{\rm d}x \\ & = & \int \xi(x)^\ast u(xy)\,{\rm d}x \\ & = & \xi(y)\,\widehat{u}(\xi). \end{eqnarray*} So for such ${\bf W}$ to be a time-frequency transform, we would have $\phi_{\bf W}(\xi,y)=\xi(y)$, meaning that ${\bf W}=R^\ast$, the anti-Kohn--Nirenberg transform: this is possible only when $G=\{e\}$ is the trivial group of one element. Hence, $\bf W$ defined in formula \eqref{DEFN:falseWigner} is a dead-end in time-frequency analysis, and it does not make sense to talk about a corresponding Weyl-like pseudo-differential quantization: especially, $(u,v)\mapsto {\bf W}(u,v)$ would not be modulation-invariant for commutative $G\not=\{e\}$. However, consider such a compact group $G$, where $(y\mapsto y^2):G\to G$ is a bijection: its inverse $(y\mapsto y^{1/2}):G\to G$ is a homeomorphism of the compact Hausdorff space $G$. Then \begin{equation} W(u,v)(x,\eta) := \int \eta(y)^\ast\,u(xy^{1/2})\,v(xy^{-1/2})^\ast\,{\rm d}y \end{equation} defines the natural {\it Wigner transform} on $G$, where $y^{-1/2}=(y^{1/2})^{-1}$, and $\phi_W(\xi,y)=\xi(y^{1/2})$. Especially, it is possible to define the Wigner time-frequency transform on finite cyclic groups of odd order, or on $p$-adic groups for primes $p\not=2$. Related questions on commutative locally compact groups have been treated in \cite{Kutyniok}. } \end{exa} \section{Normalization, and time-frequency margins} \begin{dfn} {\rm We call time-frequency transform $D$ {\it normalized} if \begin{equation}\label{EQ:marginenergy} \iint D(u,v)(x,\eta)\,{\rm d}\eta\,{\rm d}x = \langle u,v\rangle = \langle \widehat{u},\widehat{v}\rangle \end{equation} for all $u,v\in\mathscr S(G)$. Especially for $v=u$ formula \eqref{EQ:marginenergy} yields the energy $\|u\|^2=\|\widehat{u}\|^2$. We say that the $D$-quantization has {\it correct traces} if \begin{equation}\label{EQ:trace} {\rm tr}(a^D) = \iint a(t,\eta)\,{\rm d}\eta\,{\rm d}t \end{equation} for all $a\in\mathscr S(G\times\widehat{G})$. } \end{dfn} \begin{thm} The following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] $\iint D[{\bf 1}](x,\eta)\,{\rm d}\eta\,{\rm d}x = 1$. \item[{\rm (b)}] Time-frequency transform $D$ is normalized. \item[{\rm (c)}] The $D$-quantization has correct traces. 
\item[{\rm (d)}] The ambiguity kernel satisfies $\phi_D(\varepsilon,e)=1\in\mathbb C$. \end{itemize} Especially, the Kohn--Nirenberg transform $R$ is normalized. \end{thm} \begin{rem}{\rm Condition (a) in the previous Theorem is relevant only for compact groups $G$. }\end{rem} \paragraph{Proof.} Conditions (a), (b) and (d) are equivalent, because \begin{eqnarray*} \iint D(u,v)(x,\eta)\,{\rm d}\eta\,{\rm d}x & = & FD(u,v)(\varepsilon,e) \\ & = & \phi_D(\varepsilon,e)\,FR(u,v)(\varepsilon,e) \\ & = & \phi_D(\varepsilon,e)\,\langle u,v\rangle. \end{eqnarray*} Let $a\in\mathscr S(G\times\widehat{G})$. By Lemma~\ref{LEM:C2R}, we see that $a^D=b^R$, where $b:=F^{-1}(\phi_D^\ast Fa)\in\mathscr S(G\times\widehat{G})$. Hence $$ \iint b(x,\eta)\,{\rm d}\eta\,{\rm d}x = Fb(\varepsilon,e) = \phi_D(\varepsilon,e)^\ast Fa(\varepsilon,e) = \phi_D(\varepsilon,e)^\ast \iint a(x,\eta)\,{\rm d}\eta\,{\rm d}x. $$ Moreover, $b(x,\eta)=\eta(x)^\ast\,(b^R\eta)(x)$, so that $$ {\rm tr}(a^D) = {\rm tr}(b^R) = \sum_{\eta\in\widehat{G}} d_\eta \sum_{j,k=1}^{d_\eta} \langle b^R\eta_{jk},\eta_{jk}\rangle = \iint b(x,\eta)\,{\rm d}\eta\,{\rm d}x. $$ Thus conditions (c) and (d) are equivalent. \hfill {\bf QED} \begin{rem} {\rm Let us find how the Schwartz kernel $K\in\mathscr S(G\times G)$ of $a^D$ is related to the symbol $a\in\mathscr S(G\times\widehat{G})$ in the previous proof: \begin{eqnarray*} && \langle u,a^D v\rangle \\ & = & \langle D(u,v),a\rangle \\ & = & \iint D(u,v)(t,\eta)\, a(t,\eta)^\ast\,{\rm d}\eta\,{\rm d}t\\ & = & \iiint \eta(y)^\ast \int \xi(t)\,\phi_D(\xi,y) \int \xi(x)^\ast\,u(x)\,v(xy^{-1})^\ast\,{\rm d}x\,{\rm d}\xi\,{\rm d}y \,a(t,\eta)^\ast\,{\rm d}\eta\,{\rm d}t \\ & = & \int u(x)\left(\iint \eta(y)\,a(t,\eta) \iint \,v(xy^{-1})\,\xi(x)\,\phi_D(\xi,y)^\ast \,\xi(t)^\ast\,{\rm d}\xi\,{\rm d}y\,{\rm d}\eta\,{\rm d}t \right)^\ast {\rm d}x. \end{eqnarray*} Hence we obtain $$ K(x,z) = \iint \eta(z^{-1}x)\,a(t,\eta) \int\xi(t^{-1}x)\,\phi_D(\xi,z^{-1}x)^\ast \,{\rm d}\xi\,{\rm d}\eta\,{\rm d}t. $$ Here naturally $\displaystyle {\rm tr}(a^D) = \int K(x,x)\,{\rm d}x$. } \end{rem} \begin{dfn} {\rm We say that time-frequency transform $D$ has the {\it correct time margins} if \begin{equation}\label{EQ:margintime} \int D(u,v)(x,\eta)\,{\rm d}\eta = u(x)\,v(x)^\ast \end{equation} for all $u,v\in\mathscr S(G)$ and $x\in G$. We say that $D$-quantization is {\it correct in time} if \begin{equation} a^Dv(x) = f(x)\,v(x) \end{equation} for all $v\in\mathscr S(G)$ and for all symbols $a$ of the time-like form $a(x,\eta)=f(x) I$, where $f\in\mathscr S(G)$. } \end{dfn} \begin{thm} The following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] $D[\delta_e]=\delta_e\otimes I$. In other words, $ D[\delta_e](x,\eta)=\delta_e(x)\,I. $ \item[{\rm (b)}] Time-frequency transform $D$ has the correct time margins. \item[{\rm (c)}] The $D$-quantization is correct in time. \item[{\rm (d)}] The ambiguity kernel satisfies $\phi_D(\xi,e)=I$ for all $\xi\in\widehat{G}$. \end{itemize} \end{thm} \paragraph{Proof.} For any time-frequency transform $D$ we have $$ F(D[\delta_e])(\xi,y) = \phi_D(\xi,y) \int \xi(z)\,\delta_e(z)\,\delta_e(zy^{-1})^\ast\,{\rm d}z = \phi_D(\xi,e)\,\delta_e(y). $$ On the other hand, if $D[\delta_e](x,\eta)=\delta_e(x)\,I$ then $$ F(D[\delta_e])(\xi,y) = \int \xi(x)^\ast \int \eta(y)\,\delta_e(x) \,{\rm d}\eta\,{\rm d}x = \delta_e(y)\,I. $$ Thus conditions (a) and (d) are equivalent. 
By the Fourier inverse formula, \begin{eqnarray*} \int D(u,v)(x,\eta)\,{\rm d}\eta & = & \iint \eta(y)^\ast \int \xi(x) \,\phi_D(\xi,y)\,FR(u,v)(\xi,y)\,{\rm d}\xi\,{\rm d}y\,{\rm d}\eta\\ & = & \int \xi(x) \,\phi_D(\xi,e)\,FR(u,v)(\xi,e)\,{\rm d}\xi \\ & = & \int \xi(x)\,\phi_D(\xi,e)\,\widehat{u\,v^\ast}(\xi)\,{\rm d}\xi, \end{eqnarray*} so that conditions (b) and (d) are equivalent. Now assume condition (b), and let $a(x,\eta)=f(x)$. Then \begin{eqnarray*} \langle u,a^D v\rangle & = & \langle D(u,v),a\rangle\\ & = & \iint D(u,v)(x,\eta)\,a(x,\eta)^\ast\,{\rm d}\eta\,{\rm d}x \\ & = & \iint D(u,v)(x,\eta)\,{\rm d}\eta\,f(x)^\ast\,{\rm d}x \\ & = & \int u(x)\,v(x)^\ast\,f(x)^\ast\,{\rm d}x, \end{eqnarray*} so $a^Dv(x)=f(x)\,v(x)$. That is, condition (b) implies (c). Finally, assume condition (c). Let $(h_\alpha)_\alpha$ be a bounded left approximate identity with $0\leq h_\alpha\in\mathscr S(G)$. By translation, it is enough to check the time margins at $x=e$: \begin{eqnarray*} \int D(u,v)(e,\eta)\,{\rm d}\eta & = & \iint D(u,v)(t,\eta)\,\delta_e(t)\,{\rm d}\eta\,{\rm d}t \\ & = & \lim_\alpha \langle D(u,v),h_\alpha\otimes I\rangle \\ & = & \lim_\alpha \langle u,(h_\alpha\otimes I)^D v\rangle \\ & = & \lim_\alpha \langle u,h_\alpha v\rangle \\ & = & u(e)\,v(e)^\ast. \end{eqnarray*} This proves condition (b) of the correct margins in time. \hfill {\bf QED} \begin{dfn} {\rm We say that time-frequency transform $D$ has the {\it correct frequency margins} if \begin{equation}\label{EQ:marginfrequency} \int D(u,v)(x,\eta)\,{\rm d}x = \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast \end{equation} for all $u,v\in\mathscr S(G)$ and $\eta\in\widehat{G}$. As a special case $v=u$ of \eqref{EQ:marginfrequency}, matrix $\widehat{u}(\eta)\,\widehat{u}(\eta)^\ast$ is the ``energy density'' of $u$ at frequency $\eta\in\widehat{G}$. We say that $D$-quantization is {\it correct in frequency} if \begin{equation} b^Dv(x) = v\ast g(x), \quad {\rm i.e.}\quad \widehat{b^Dv}(\eta)=\widehat{g}(\eta)\,\widehat{v}(\eta), \end{equation} for all $v\in\mathscr S(G)$ and for all symbols $b$ of the frequency-like form $b(x,\eta)=\widehat{g}(\eta)$, where $g\in\mathscr S(G)$. } \end{dfn} \begin{thm} The following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] $D[{\bf 1}]={\bf 1}\otimes\delta_\varepsilon I$. In other words, $D[{\bf 1}](x,\eta)=\delta_\varepsilon(\eta)\,I$. \item[{\rm (b)}] Time-frequency transform $D$ has the correct frequency margins. \item[{\rm (c)}] The $D$-quantization is correct in frequency. \item[{\rm (d)}] The ambiguity kernel satisfies $\phi_D(\varepsilon,y)=1\in\mathbb C$ for all $y\in G$. \end{itemize} \end{thm} \paragraph{Proof.} For any time-frequency transform $D$ we have $$ F(D[{\bf 1}])(\xi,y) = \phi_D(\xi,y) \int \xi(z)\,{\rm d}z = \phi_D(\varepsilon,y)\,\delta_\varepsilon(\xi). $$ On the other hand, $D[{\bf 1}](x,\eta)=\delta_\varepsilon(\eta)\,I$ gives here $$ F(D[{\bf 1}])(\xi,y) = \int \xi(x)^\ast \int \eta(y)\,\delta_\varepsilon(\eta)\,I \,{\rm d}\eta\,{\rm d}x = \int\xi(x)^\ast\,{\rm d}x = \delta_\varepsilon(\xi). $$ Hence conditions (a) and (d) are equivalent. By $\mathscr F,\mathscr F^{-1}$ canceling each other, we obtain \begin{eqnarray*} \int D(u,v)(x,\eta)\,{\rm d}x & = & \iint \eta(y)^\ast \int \xi(x) \,\phi_D(\xi,y)\,FR(u,v)(\xi,y)\,{\rm d}\xi\,{\rm d}y\,{\rm d}x\\ & = & \int \eta(y)^\ast \,\phi_D(\varepsilon,y)\,FR(u,v)(\varepsilon,y)\,{\rm d}y\\ & = & \iint \eta(y)^\ast\,\phi_D(\varepsilon,y)\,u(z)\,v(zy^{-1})^\ast \,{\rm d}y\,{\rm d}z. 
\end{eqnarray*} Especially, $$ \int D(u,\delta_e)(x,\eta)\,{\rm d}x = \int \eta(y)^\ast\,\phi_D(\varepsilon,y)\,u(y)\,{\rm d}y $$ which equals $\widehat{u}(\eta)$ for all $u\in\mathscr S(G)$ if and only if $\phi_D(\varepsilon,y)=1$ for all $y\in G$: in that case also $$ \int D(u,v)(x,\eta)\,{\rm d}x = \iint \eta(z)^\ast\,u(z)\,\eta(zy^{-1})\,v(zy^{-1})^\ast \,{\rm d}y\,{\rm d}z = \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast. $$ Thus conditions (b) and (d) are equivalent. Now assume condition (b), and let $b(x,\eta)=\widehat{g}(\eta)$. Then \begin{eqnarray*} \langle u,b^Dv\rangle & = & \langle D(u,v),b \rangle \\ & = & \iint D(u,v)(x,\eta)\,b(x,\eta)^\ast\,{\rm d}\eta\,{\rm d}x \\ & = & \iint D(u,v)(x,\eta)\,{\rm d}x\,\widehat{g}(\eta)^\ast\,{\rm d}\eta \\ & = & \int \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast\,\widehat{g}(\eta)^\ast \,{\rm d}\eta\\ & = & \int \widehat{u}(\eta) \left(\widehat{g}(\eta)\,\widehat{v}(\eta)\right)^\ast{\rm d}\eta\\ & = & \langle \widehat{u},\widehat{g}\,\widehat{v}\rangle \quad =\quad \langle u,v\ast g\rangle. \end{eqnarray*} Hence condition (c) follows from (b). Finally, assume condition (c). Then \begin{eqnarray*} \int D(u,v)(x,\eta)\,{\rm d}x & = & \iint D(u,v)(x,\omega)\,\delta_\eta(\omega)\,{\rm d}\omega\,{\rm d}x \\ & = & \langle D(u,v),{\bf 1}\otimes\delta_\eta I\rangle \\ & = & \langle u,({\bf 1}\otimes\delta_\eta I)^D v \rangle \\ & = & \langle \widehat{u},\delta_\eta\,\widehat{v} \rangle \\ & = & \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast, \end{eqnarray*} so that we obtain condition (b) of the correct margins in frequency. \hfill {\bf QED} \begin{exa} {\rm In a sense, on a finite group $G$ of $|G|$ elements, the minimal time-frequency transform $D$ having the correct margins would satisfy \begin{equation}\label{EQ:addmargins} \phi_D(\xi,y) = \begin{cases} I & {\rm if}\ \xi=\varepsilon\ {\rm or}\ y=e,\\ 0 & {\rm otherwise}. \end{cases} \end{equation} Then \begin{equation} D(u,v)(x,\eta) = \widehat{u}(\eta)\,\widehat{v}(\eta)^\ast + \frac{1}{|G|}\left( u(x)\,v(x)^\ast-\langle u,v\rangle\right)I. \end{equation} Such $D$ could be added to other time-frequency transforms that would otherwise have zero margins: for instance, this happens when the original localization comes from a commutator of position and momentum operators, like on cyclic groups in Section~\ref{SEC:cyclic}. } \end{exa} \section{Positivity} From the application point of view, a reasonable time-frequency transform ought to be at least normalized: this does not pose any problems. However, it turns out that pointwise positivity typically conflicts with the margin properties, and thus positivity may not be an entirely desirable property. \begin{dfn} {\rm {\it Positivity} of time-frequency transform $D$ means $$ D[u](x,\eta)\geq 0 $$ for all $u\in\mathscr S(G)$ and all $(x,\eta)\in G\times\widehat{G}$. {\it Positivity} of the $D$-quantization $a\mapsto a^D$ means that for all $u\in\mathscr S(G)$ $$ \langle u,a^Du\rangle \geq 0 $$ whenever $a\in\mathscr S(G\times\widehat{G})$ is positive in the sense that $a(x,\eta)\geq 0$ for all $(x,\eta)\in G\times\widehat{G}$. } \end{dfn} \begin{exa} {\rm In the trivial case of the one-element group $G=\{e\}$, defining $D(u,v)(x,\eta):=u(e)\,v(e)^\ast$ gives a positive time-frequency transform with the correct margins in time and in frequency.
For time-frequency transforms, positivity is a special case of symmetry: } \end{exa} \begin{thm} The following conditions are equivalent: \begin{itemize} \item[{\rm (a)}] For all $u\in\mathscr S(G)$ we have $D[u](e,\varepsilon)\geq 0$. \item[{\rm (b)}] Time-frequency transform $D$ is positive. \item[{\rm (c)}] The $D$-quantization is positive. \item[{\rm (d)}] The time-lag kernel satisfies $\varphi_D(x,y)=\int\kappa(x,z)\,\kappa(yx,z)^\ast {\rm d}z$ for some $\kappa$. \end{itemize} \end{thm} \paragraph{Proof.} Condition (b) trivially implies (a). Assume condition (a). Let $K_{\delta^D}$ denote the Schwartz kernel of the original localization $\delta^D:\mathscr S(G)\to\mathscr S'(G)$. Then for any $u\in\mathscr S(G)$ and $z=(z_k)_{k=1}^{d_\eta}\in\mathbb C^{d_\eta}$ we have \begin{eqnarray*} \langle D[u](t,\eta)\,z,z\rangle & = & \sum_{j,k=1}^{d_\eta} z_j^\ast z_k\ D[u]_{jk}(t,\eta) \\ & = & \sum_{j,k=1}^{d_\eta} z_j^\ast z_k \iint K_{\delta^D}(x,y)\,u(tx)\,u(ty)^\ast\,\eta_{jk}(yx^{-1}) \,{\rm d}x\,{\rm d}y\\ & = & \sum_{\ell=1}^{d_\eta} \iint K_{\delta^D}(x,y) \,u_\ell(x)\,u_\ell(y)^\ast {\rm d}x\,{\rm d}y \\ & = & \sum_{\ell=1}^{d_\eta} D[u_\ell](e,\varepsilon)\ \geq\ 0, \end{eqnarray*} where $\displaystyle u_\ell(x):=\sum_{k=1}^{d_\eta} z_k\,\eta_{k\ell}(x)^\ast u(tx)$. Hence condition (a) implies (b). Let $a\geq 0$. Then $a^\ast=a=(a^{1/2})^2$, where $a^{1/2}(x,\eta)=a(x,\eta)^{1/2}$ is the positive square root of $a(x,\eta)$, and \begin{eqnarray*} \langle u,a^D u\rangle & = & \langle D[u],a\rangle \\ & = & \iint D[u](x,\eta)\,a(x,\eta) \,{\rm d}\eta\,{\rm d}x \\ & = & \iint D[u](x,\eta)\,a(x,\eta)^{1/2}\,a(x,\eta)^{1/2} \,{\rm d}\eta\,{\rm d}x \\ & = & \iint a(x,\eta)^{1/2}\,D[u](x,\eta)\,a(x,\eta)^{1/2} \,{\rm d}\eta\,{\rm d}x \quad \geq\quad 0, \end{eqnarray*} where the last inequality follows because the ``integrand'' $a^{1/2}\,D[u]\,a^{1/2}$ is positive: notice that here both the Haar integral and the ``non-commutative $\eta$-integral'' are positive functionals. Hence condition (b) implies (c). Now suppose $a\mapsto a^D$ is positive and $u\in\mathscr S(G)$. Take $(h_\alpha)_{\alpha}$ to be a bounded left approximate identity, where $0\leq h_\alpha\in\mathscr S(G)$, such that $\lim_\alpha\langle u,h_\alpha\rangle = u(e)$. Define $a_\alpha\in\mathscr S(G\times\widehat{G})$ by $a_\alpha(x,\eta):=h_\alpha(x)\,\delta_\varepsilon(\eta)I$. Then $a_\alpha(x,\eta)\geq 0$, and $$ D[u](e,\varepsilon) = \langle D[u],\delta_{(e,\varepsilon)}\rangle = \lim_{\alpha}\langle D(u,u),a_\alpha\rangle = \lim_{\alpha}\langle u,(a_\alpha)^D u\rangle, $$ which is non-negative due to the positivity of the quantization. Hence condition (c) implies (a). Assuming (d), from \eqref{EQ:original} we obtain \begin{eqnarray*} D[u](e,\varepsilon) & \stackrel{\eqref{EQ:original}}{=} & \int u(x) \left( \int \varphi_D(x^{-1},y^{-1}x)^\ast\,u(y)\,{\rm d}y\right)^\ast {\rm d}x \\ & = & \iint u(x)\,u(y)^\ast\,\varphi_D(x^{-1},y^{-1}x)\,{\rm d}y\,{\rm d}x \\ & \stackrel{{\rm (d)}}{=} & \iiint u(x)\,u(y)^\ast \kappa(x^{-1},z)\,\kappa(y^{-1},z)^\ast\,{\rm d}z\,{\rm d}y\,{\rm d}x \\ & = & \int \left| \int u(x)\,\kappa(x^{-1},z)\,{\rm d}x\right|^2{\rm d}z \quad \geq\quad 0, \end{eqnarray*} yielding condition (a). Finally, let $\delta^D=A^2$ for a positive operator $A$.
Then \begin{eqnarray*} \varphi_D(x,y) & = & K_{\delta^D}(x^{-1},x^{-1}y^{-1})^\ast \\ & = & \int K_A(x^{-1},z)^\ast\,K_A(z,x^{-1}y^{-1})^\ast\,{\rm d}z \\ & = & \int K_A(x^{-1},z)^\ast\,K_A(x^{-1}y^{-1},z)\,{\rm d}z \\ & = & \int \kappa(x,z)\,\kappa(yx,z)^\ast\,{\rm d}z, \end{eqnarray*} when setting $\kappa(x,z)=K_A(x^{-1},z)^\ast$. Thus condition (d) follows from (a). \hfill {\bf QED} \paragraph{Spectrograms.} A simple example of positive original localization operators is an orthogonal projection $\delta^D:L^2(G)\to L^2(G)$ onto the $1$-dimensional subspace spanned by a unit-energy {\it window} $w\in\mathscr S(G)$: \begin{equation} \delta^D v := \langle v,w\rangle\,w. \end{equation} The window here should be ``focused at $(e,\varepsilon)\in G\times\widehat{G}$'' in a reasonable sense: most of the energy of $w$ should be concentrated near $e\in G$, and most of the energy of $\widehat{w}$ near $\varepsilon\in\widehat{G}$. In any case, now $K_{\delta^D}(x,y)=w(y)^\ast w(x)$, and \begin{eqnarray} D(u,v)(x,\eta) & = & \iint u(xz)\,\eta(z)^\ast\,K_{\delta^D}(z,y)^\ast\,\eta(y)\,v(xy)^\ast \,{\rm d}y\,{\rm d}z\\ & = & \mathscr G_wu(x,\eta)\ \mathscr G_wv(x,\eta)^\ast, \end{eqnarray} where \begin{equation} \mathscr G_wu(x,\eta) := \int \eta(y)^\ast\,u(y)\,w(x^{-1}y)^\ast\,{\rm d}y \end{equation} defines the $w$-windowed {\it short-time Fourier transform} $\mathscr G_w u$ of signal $u$. Notice that \begin{eqnarray} \mathscr G_w\delta_e(x,\eta) & = & w(x^{-1})^\ast\ =:\ \widetilde{w}(x), \\ \mathscr G_w{\bf 1}(x,\eta) & = & \widehat{\overline{w}}(\eta)^\ast\,\eta(x)^\ast. \end{eqnarray} Clearly $D[u](x,\eta):=D(u,u)(x,\eta)\geq 0$, and we may call it the {\it $w$-spectrogram} of signal $u$ at $(x,\eta)\in G\times\widehat{G}$. Actually, such a short-time Fourier transform formula on unimodular groups was briefly mentioned in \cite{ChirikjianKyatkin}, as an analogue to the Euclidean case. Let us find the corresponding ambiguity kernel $\phi_D$: \begin{eqnarray*} & & FD(u,v)(\xi,y)\\ & = & \int \xi(x)^\ast \int \eta(y) \int \eta(t)^\ast\, u(t)\, w(x^{-1}t)^\ast \,{\rm d}t \int w(x^{-1}s)\, v(s)^\ast\, \eta(s)\, {\rm d}s \,{\rm d}\eta\,{\rm d}x \\ & = & \int \xi(x)^\ast \int u(t)\, w(x^{-1}t)^\ast \,w(x^{-1}ty^{-1})\, v(ty^{-1})^\ast \,{\rm d}t \,{\rm d}x \\ & = & \int \left( \int \xi(x^{-1}t)\,w(x^{-1}t)^\ast\,w(x^{-1}ty^{-1}) \,{\rm d}x \right) \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t \\ & = & \int \left(\int \xi(z)\,w(z)^\ast\,w(zy^{-1}) \,{\rm d}z \right) \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t \\ & = & \left(\int \xi(z)^\ast\,w(z)\,w(zy^{-1})^\ast \,{\rm d}z \right)^\ast \int \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t \\ & = & \phi_D(\xi,y)\,FR(u,v)(\xi,y), \end{eqnarray*} where $$ \phi_D(\xi,y) = \int \xi(z)\,w(z)^\ast\,w(zy^{-1})\,{\rm d}z = FR(w,w)(\xi,y)^\ast. $$ Hence $\varphi_D(x,y)=(\mathscr F^{-1}\otimes I)\phi_D(x,y) =w(x^{-1})^\ast\,w(x^{-1}y^{-1})=\widetilde{w}(x)\,\widetilde{w}(yx)^\ast$, where $\widetilde{w}(t)=w(t^{-1})^\ast$. The energy normalization then amounts to the energy normalization of the window: $$ 1 = \phi_D(\varepsilon,e) = \int |w(x)|^2\,{\rm d}x = \|w\|^2. $$ The correct margins in time would mean $$ I = \phi_D(\xi,e) = \int \xi(x)\,|w(x)|^2\,{\rm d}x = \widehat{|w|^2}(\xi), $$ i.e. $|w|^2=\delta_e$, the Dirac delta at $e\in G$. From another point of view, here \begin{eqnarray*} D[\delta_e](x,\eta) & = & |w(x^{-1})|^2\ =\ |\widetilde{w}(x)|^2,\\ D[{\bf 1}](x,\eta) & = & \widehat{\overline{w}}(\eta)^\ast\ \widehat{\overline{w}}(\eta).
\end{eqnarray*} Consequently, it is too much to ask for the correct margins here, but the energy normalization follows just from $\|w\|=1$. \begin{rem} {\rm Let $D$ be a positive time-frequency transform satisfying the correct margins both in time \eqref{EQ:margintime} and in frequency \eqref{EQ:marginfrequency}. Suppose $\delta^D$ is bounded on $L^2(G)$. By the spectral decomposition of $\delta^D$, then $G$ must be the trivial group of just one element $e$, and $D(u,v)(x,\eta)=u(e)\,v(e)^\ast$. } \end{rem} \section{Unitarity} \begin{dfn} {\rm Time-frequency transform $D$ is called {\it unitary} if it satisfies the {\it Moyal identity} \begin{equation}\label{EQ:Moyal} \langle D(u,v),D(f,g)\rangle = \langle u,f\rangle\,\langle v,g\rangle^\ast \end{equation} for all $u,v\in\mathscr S(G)$ and $f,g\in\mathscr S'(G)$. The $D$-quantization $a\mapsto a^D$ is called {\it unitary} if \begin{equation} \langle a,b\rangle = \langle a^D,b^D\rangle \end{equation} for all $a,b\in\mathscr S(G\times\widehat{G})$, where $\langle a^D,b^D\rangle = {\rm tr}\left( a^D\,(b^D)^\ast\right)$. } \end{dfn} \begin{thm}\label{THM:unitary} The following conditions are equivalent: \begin{enumerate} \item[{\rm (a)}] $\langle D(u,{\bf 1}),D(\delta_e,\delta_y)\rangle = u(e)$ for all $u\in\mathscr S(G)$ and $y\in G$. \item[{\rm (b)}] Time-frequency transform $D$ is unitary. \item[{\rm (c)}] The $D$-quantization is unitary. \item[{\rm (d)}] Ambiguity operators $\phi_D(\xi,y)$ are unitary for all $(\xi,y)\in\widehat{G}\times G$. \end{enumerate} Especially, the Kohn--Nirenberg transform is unitary. \end{thm} \begin{rem} {\rm In condition (a) of Theorem~\ref{THM:unitary}, on non-compact $G$ we may approximate the constant ${\bf 1}\not\in\mathscr S(G)$ within $\mathscr S(G)$. } \end{rem} \paragraph{Proof.} As $\phi_R(\xi,y)\equiv I$, the Kohn--Nirenberg transform satisfies condition (d). Moreover, it is unitary, because \begin{eqnarray*} \langle R(u,v),R(f,g)\rangle & = & \iint u(x)\,\eta(x)^\ast\,\widehat{v}(\eta)^\ast \,\widehat{g}(\eta)\,\eta(x)\,f(x)^\ast\,{\rm d}\eta\,{\rm d}x \\ & = & \int u(x)\,f(x)^\ast\,{\rm d}x \int \widehat{v}(\eta)^\ast\,\widehat{g}(\eta) \,{\rm d}\eta \\ & = & \langle u,f\rangle\,\langle\widehat{g},\widehat{v}\rangle \quad = \quad \langle u,f\rangle\,\langle v,g\rangle^\ast. \end{eqnarray*} Assume (d), i.e. the unitarity of the ambiguity operators $\phi_D(\xi,y)$. Then \begin{eqnarray*} && \langle D(u,v),D(f,g)\rangle \\ & = & \langle FD(u,v),FD(f,g)\rangle \\ & = & \iint \phi_D(\xi,y)\,FR(u,v)(\xi,y) \,FR(f,g)(\xi,y)^\ast\,\phi_D(\xi,y)^\ast\,{\rm d}\xi\,{\rm d}y \\ & = & \iint FR(u,v)(\xi,y) \,FR(f,g)(\xi,y)^\ast \,{\rm d}\xi\,{\rm d}y \\ & = & \langle FR(u,v),FR(f,g)\rangle \\ & = & \langle R(u,v),R(f,g)\rangle. \end{eqnarray*} Thus condition (d) implies (b), as we already know that $R$ is unitary. Condition (b) implies condition (a), because for $(u,v,f,g)=(u,{\bf 1},\delta_e,\delta_y)$ we have $$ \langle u,f\rangle \langle v,g\rangle^\ast = u(e). $$ Now assume condition (a), and let $(u,v,f,g)=(u,{\bf 1},\delta_e,\delta_y)$, and $M(\omega,t):=\phi_D(\omega,t)^\ast \phi_D(\omega,t)$. Then \begin{eqnarray*} u(e) & = & \langle D(u,v),D(f,g) \rangle \\ & = & \langle FD(u,v),FD(f,g) \rangle \\ & = & \iint M(\xi,t) \int \xi(x)^\ast u(x)\,v(xt^{-1})^\ast {\rm d}x \left(\int \xi(z)^\ast f(z)\,g(zt^{-1})^\ast\,{\rm d}z\right)^\ast {\rm d}\xi\,{\rm d}t \\ & = & \int M(\xi,y^{-1})\,\widehat{u}(\xi)\,{\rm d}\xi. 
\end{eqnarray*} Since this holds for every $u\in\mathscr S(G)$, we have $M(\xi,y^{-1})=I$ for every $(\xi,y)\in\widehat{G}\times G$. Hence condition (d) follows from (a). Finally, let us consider the Hilbert--Schmidt inner product of operators: \begin{eqnarray*} \langle a^D,b^D\rangle & = & \iint K_{a^D}(x,y)\,K_{b^D}(x,y)^\ast\,{\rm d}x\,{\rm d}y \\ & = & \iiint \xi(yt)\,\phi_D(\xi,t)^\ast Fa(\xi,t)\,{\rm d}\xi \int Fb(\omega,t)^\ast\,\phi_D(\omega,t)\,\omega(yt)^\ast {\rm d}\omega\,{\rm d}x\,{\rm d}y \\ & = & \iint \phi_D(\xi,t)^\ast Fa(\xi,t)\,Fb(\xi,t)^\ast\,\phi_D(\xi,t) \,{\rm d}\xi\,{\rm d}t. \end{eqnarray*} It is clear that this equals $\langle Fa,Fb\rangle=\langle a,b\rangle$ for all $a,b\in\mathscr S(G\times\widehat{G})$ if and only if condition (d) holds: thus conditions (c) and (d) are equivalent. \hfill {\bf QED} \begin{rem} {\rm By the previous Theorem, unitary time-frequency transforms satisfy the Moyal identity~\eqref{EQ:Moyal} also for all $u,v,f,g\in L^2(G)$. As a consequence of the unitarity of the Kohn--Nirenberg transform, the energy densities $D[v_\alpha]$ uniformly cover the time-frequency plane $G\times\widehat{G}$ for any time-frequency transform $D$: } \end{rem} \begin{cor} Let $D$ be normalized, i.e. $ \phi_D(\varepsilon,e) = 1. $ Let $(v_\alpha)_{\alpha\in J}$ be an orthonormal basis of $L^2(G)$. Then $b^R=I$, where \begin{equation} b = \sum_{\alpha\in J} D[v_\alpha]. \end{equation} \end{cor} \paragraph{Proof.} Notice that $$ \langle u,v\rangle = \langle\sum_{\alpha\in J} \langle u,v_\alpha\rangle\,v_\alpha,v\rangle = \sum_{\alpha\in J}\langle u,v_\alpha\rangle\,\langle v,v_\alpha\rangle^\ast. $$ Thus by the previous Theorem, for the Kohn--Nirenberg transform $R$ we have $$ \langle u,v\rangle = \sum_{\alpha\in J} \langle R(u,v),R(v_\alpha,v_\alpha)\rangle = \langle R(u,v),\sum_{\alpha\in J} R[v_\alpha]\rangle = \langle u,a^{R}v\rangle, $$ yielding $a^{R}=I$ with $$ a = \sum_{\alpha\in J} R[v_\alpha]. $$ Now $$ \sum_{\alpha\in J} D[v_\alpha] = \sum_{\alpha\in J} R[v_\alpha]\ast \psi_D = I\ast\psi_D = \lambda I, $$ where $\displaystyle\lambda=\iint \psi_D(x,\eta)\,{\rm d}\eta\,{\rm d}x = \phi_D(\varepsilon,e) = 1$. \hfill {\bf QED} \section{Inner invariance} Let us study the invariance under inner automorphisms $(x\mapsto z^{-1}xz):G\to G$. We denote $u_z(x):=u(z^{-1}xz)$ for $u\in\mathscr S(G)$ and $x,z\in G$. \begin{dfn} {\rm Time-frequency transform $D$ is called {\it inner} if it satisfies \begin{equation}\label{EQ:inner} D(u_z,v_z)(x,\eta)=\eta(z)\,D(u,v)(z^{-1}xz,\eta)\,\eta(z)^\ast \end{equation} for all $u,v\in\mathscr S(G)$, $(x,\eta)\in G\times\widehat{G}$ and $z\in G$. The $D$-quantization $a\mapsto a^D$ is called {\it inner} if \begin{equation} \left(a^D(v_z)\right)_{z^{-1}} = a^D v \end{equation} for all $v\in\mathscr S(G)$ and $z\in G$ whenever $a\in\mathscr S(G\times\widehat{G})$ satisfies $a(z^{-1}xz,\eta)=\eta(z)^\ast\,a(x,\eta)\,\eta(z)$ for all $(x,\eta)\in G\times\widehat{G}$ and $z\in G$. } \end{dfn} \begin{thm} The following conditions are equivalent: \begin{enumerate} \item[{\rm (a)}] $D[u_z](e,\varepsilon)=D[u](e,\varepsilon)$ for all $u\in\mathscr S(G)$ and $z\in G$. \item[{\rm (b)}] Time-frequency transform $D$ is inner. \item[{\rm (c)}] The $D$-quantization is inner. \item[{\rm (d)}] $\phi_D(\xi,zyz^{-1})=\xi(z)\,\phi_D(\xi,y)\,\xi(z)^\ast$ for all $(\xi,y)\in\widehat{G}\times G$ and $z\in G$. \end{enumerate} Especially, the Kohn--Nirenberg transform is inner.
\end{thm} \paragraph{Proof.} Condition (a) is a special case of condition (b). Condition (d) implies condition (b), because \begin{eqnarray*} && D(u_z,v_z)(x,\eta) \\ & = & \int \eta(y)^\ast \int \xi(x)\, \phi_D(\xi,y) \int \xi(t)^\ast\,u_z(t)\,v_z(ty^{-1})^\ast \,{\rm d}t\,{\rm d}\xi\,{\rm d}y \\ & = & \int \eta(y)^\ast \int \xi(x)\, \phi_D(\xi,y) \int \xi(ztz^{-1})^\ast\,u(t)\,v(tz^{-1}y^{-1}z)^\ast \,{\rm d}t\,{\rm d}\xi\,{\rm d}y \\ & = & \int \eta(zyz^{-1})^\ast \int \xi(z^{-1}x)\, \phi_D(\xi,zyz^{-1}) \,\xi(z^{-1})^\ast \int \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast \,{\rm d}t\,{\rm d}\xi\,{\rm d}y \\ & \stackrel{\rm (d)}{=} & \int \eta(zy z^{-1})^\ast \int \xi(z^{-1}xz)\,\phi_D(\xi,y) \int \xi(t)^\ast\,u(t)\,v(ty^{-1})^\ast\,{\rm d}t\,{\rm d}\xi\,{\rm d}y \\ & = & \eta(z)\,D(u,v)(z^{-1}xz,\eta)\,\eta(z)^\ast. \end{eqnarray*} Suppose $a\in\mathscr S(G\times\widehat{G})$ is inner invariant: now assuming condition (b), we obtain condition (c), because $$ \langle u,(a^D(v_z))_{z^{-1}}\rangle = \langle u_z,a^D(v_z)\rangle = \langle D(u_z,v_z),a\rangle \stackrel{\rm (b)}{=} \langle D(u,v),a\rangle = \langle u,a^D v\rangle. $$ Now assume condition (c). Let $(h_\alpha)_\alpha$ be an inner invariant approximate identity in $\mathscr S(G)$. Let $a_\alpha(x,\eta)=h_\alpha(x)\,\delta_\varepsilon(\eta)I :\mathscr H_\eta\to\mathscr H_\eta$. Then for all $u\in\mathscr S(G)$ and $z\in G$ we have $$ D[u](e,\varepsilon) = \langle u,\delta_{(e,\varepsilon)}^D u \rangle = \lim_\alpha \langle u,a_\alpha^D u\rangle \stackrel{\rm (c)}{=} \lim_\alpha \langle u,(a_\alpha^D(u_z))_{z^{-1}}\rangle = D[u_z](e,\varepsilon). $$ Hence condition (c) implies condition (a). Finally, conditions (a) and (d) are equivalent, because for the kernel $\varphi_D=(\mathscr F^{-1}\otimes I)\phi_D$ on one hand $$ D[u](e,\varepsilon) = \iint \varphi_D(x^{-1},y)\,u(x)\,u(xy^{-1})^\ast\,{\rm d}x\,{\rm d}y, $$ and on the other hand \begin{eqnarray*} D[u_z](e,\varepsilon) & = & \iint \varphi_D(x^{-1},y)\,u_z(x)\,u_z(xy^{-1})^\ast \,{\rm d}x\,{\rm d}y \\ & = & \iint \varphi_D(x^{-1},y)\,u(z^{-1}xz)\,u(z^{-1}xy^{-1}z)^\ast \,{\rm d}x\,{\rm d}y \\ & = & \iint \varphi_D(zx^{-1}z^{-1},y)\,u(x)\,u(xz^{-1}y^{-1}z)^\ast \,{\rm d}x\,{\rm d}y \\ & = & \iint \varphi_D(zx^{-1}z^{-1},zyz^{-1})\,u(x)\,u(xy^{-1})^\ast \,{\rm d}x\,{\rm d}y. \end{eqnarray*} This completes the proof. \hfill {\bf QED} \section{On locally compact groups}\label{SEC:LCG} Time-frequency analysis on compact groups was presented above so that the results turn out to have natural counterparts on those locally compact groups that allow reasonable Fourier analysis. We shall consider two families of such groups: the Abelian ones, and the type I second-countable unimodular locally compact groups. \subsection{Locally compact Abelian groups}\label{SUBSEC:LCA} For locally compact Abelian groups, time-frequency analysis has been studied e.g. in \cite{Kutyniok}, and Kohn--Nirenberg pseudo-differential operators have been treated in \cite{GrochenigStrohmer}. We just have to modify the definitions a bit, and then the results hold as such. In the commutative case, the frequency matrices would be just one-dimensional scalars, which drastically simplifies many of the proofs. What to change? Let $G$ be a locally compact Abelian group. Now $\widehat{G}$ is the {\it character group} of $G$, consisting of the characters $\eta:G\to U(1)$, i.e. continuous scalar unitary homomorphisms. By the Pontryagin--van Kampen duality theorem, $\widehat{G}$ is a locally compact Abelian group.
The group operation is given by the multiplication of the characters, and the topology is the natural compact-open topology. In the non-compact case, we choose a positive regular group-invariant measure on $G$ to be the Haar measure: this is unique up to a scalar multiple, and $G$ then has infinite measure. After this, we choose the Haar measure on $\widehat{G}$ so that the Fourier transform and the Fourier inverse transform formulas match: \begin{eqnarray} \widehat{u}(\eta) = \int_G u(y)\,\eta(y)^\ast\,{\rm d}y, && u(x) = \int_{\widehat{G}} \eta(x)\,\widehat{u}(\eta)\,{\rm d}\eta \end{eqnarray} for those $u\in L^1(G)$ for which $\widehat{u}\in L^1(\widehat{G})$. Then we take the test function space to be $\mathscr S(G)$, the Schwartz--Bruhat space on $G$. The corresponding tempered distribution space is denoted by $\mathscr S'(G)$. Why did we not choose Eymard's {\it Fourier algebra} $A(G)$ as a space of test functions on compact groups $G$? Here $u\in A(G)$ has the norm $\|u\|_{A(G)}:=\|\widehat{u}\|_{L^1(\widehat{G})}$, see \cite{Eymard}. The Fourier algebra initially looks like an inviting alternative, especially as on the compact Abelian groups it coincides with the {\it Feichtinger algebra}. The Feichtinger algebra has turned out to be a natural setting for time-frequency analysis on locally compact Abelian groups, see e.g. \cite{Feichtinger1,Feichtinger2,Grochenig}. However, on non-commutative compact groups the Kohn--Nirenberg transform would not map $A(G)\times A(G)$ to $A(G\times\widehat{G})$, and we would have similar difficulties with the Kohn--Nirenberg quantization, which is our starting point for the time-frequency analysis on groups. The difficulties boil down to the fact that the co-multiplication $\Delta$ does not necessarily map $A(G)$ to $A(G\times G)$, as \begin{eqnarray*} \|u\|_{A(G)} & = & \sum_{\eta\in\widehat{G}} d_\eta\,{\rm tr}(|\widehat{u}(\eta)|),\\ \|\Delta u\|_{A(G\times G)} & = & \sum_{\eta\in\widehat{G}} d_\eta^2\,{\rm tr}(|\widehat{u}(\eta)|), \end{eqnarray*} where the dimensions $d_\eta$ may grow arbitrarily large. Of course, $d_\eta\equiv 1$ when the group is commutative, and then $\Delta:A(G)\to A(G\times G)$ is an isometry, and the Kohn--Nirenberg transform behaves well. All in all, on a locally compact Abelian group $G$, a time-frequency transform is a mapping $$ D:\mathscr S(G)\times\mathscr S(G)\to\mathscr S(G\times\widehat{G}) $$ such that $$ FD(u,v)(\xi,y) = \phi_D(\xi,y)\,FR(u,v)(\xi,y), $$ where the ambiguity kernel $\phi_D:\widehat{G}\times G\to\mathbb C$ defines a Schwartz multiplier $h\mapsto F^{-1}(\phi_D\,Fh)$. Then we have the translation-modulation invariance $$ D[M_\xi T_y u](x,\eta) = D[u](x-y,\xi^{-1}\eta), $$ where $T_yu(x) := u(x-y)$ and $M_\xi u(x) := \xi(x)\,u(x)$. In the case of a compact group $G$, the approximate identities on $G\times\widehat{G}$ could be treated merely on $G$. This is not enough on non-compact locally compact Abelian groups $G$, but the modification for $G\times\widehat{G}$ is easy. Notice that in the calculations for non-compact $G$, the distribution ${\bf 1}\not\in\mathscr S(G)$ occasionally has to be approximated by test functions. \subsection{Type I second-countable unimodular groups} Let $G$ be a type I second-countable unimodular locally compact group. For background information, see e.g. \cite{Dixmier,Folland,Fuhr}. Unimodularity of $G$ means that the left-invariant Haar measure coincides with the right-invariant Haar measure: briefly, it is the Haar measure of $G$.
Recall that a topological space is second-countable when its topology has a countable base. In our convention, topological groups are always Hausdorff spaces, and consequently second-countable locally compact groups are metrizable with a complete metric. Moreover, second-countable locally compact groups are of type I if and only if they are postliminal: this means that for each $\eta\in\widehat{G}$ the compact linear operators $M:H_\eta\to H_\eta$ belong to the closure of $\{\widehat{u}(\eta):\,u\in L^1(G)\}$. On such a group $G$, the Schwartz--Bruhat space $\mathscr S(G)$ will be the test function space, with the corresponding Schwartz--Bruhat distributions $\mathscr S'(G)$. The time-frequency analysis results on compact groups are carried to $G$ without major changes in formulations and proofs. The unit constant function ${\bf 1}:G\to\mathbb C$ is a distribution which does not belong to $\mathscr S(G)$ on non-compact $G$, but it can be approximated by the test functions. \section{Example of finite cyclic groups} \label{SEC:cyclic} Consider time-frequency analysis on the finite cyclic group $G=\mathbb Z/N\mathbb Z$, where $\widehat{G}\cong G$. First, label spaces $G,\widehat{G}$ by functions $f:G\to\mathbb R$ and $\widehat{g}:\widehat{G}\to\mathbb R$. Define respective {\it position and momentum operators} $A,B:L^2(G)\to L^2(G)$ by \begin{eqnarray} A u := f\,u, && \widehat{Bu} := \widehat{g}\,\widehat{u}. \end{eqnarray} The {\it uncertainty observable} of measurement pair $(A,B)$ is \begin{equation} \delta^{D_{\mathbb Z/N\mathbb Z}} := -{\rm i}2\pi[A,B] = -{\rm i}2\pi\left(AB-BA\right). \end{equation} This means \begin{equation} \delta^{D_{\mathbb Z/N\mathbb Z}} v(x) = \int K_{\mathbb Z/N\mathbb Z}(x,y)\,v(y)\,{\rm d}y, \end{equation} where \begin{equation} K_{\mathbb Z/N\mathbb Z}(x,y) = {\rm i}2\pi \left(f(y)-f(x)\right) g(x-y) \end{equation} corresponds to the time-lag kernel $\varphi_{D_{\mathbb Z/N\mathbb Z}}:G\times G\to\mathbb C$, \begin{equation} \varphi_{D_{\mathbb Z/N\mathbb Z}}(x,y) = K_{\mathbb Z/N\mathbb Z}(-x,-x-y)^\ast = {\rm i}2\pi \left(f(-x)-f(-x-y)\right) g(y)^\ast. \end{equation} As $D(u,v)(0,0)=\langle u,\delta^D v\rangle$, by the time-frequency shift-invariance \begin{equation} |D_{\mathbb Z/N\mathbb Z}(u,v)(x,\eta)| \leq 2\pi \|AB-BA\|\,\|u\|\|v\| \leq 4\pi\,\|f\|_{L^\infty} \|\widehat{g}\|_{L^\infty} \|u\|\,\|v\| \end{equation} for all $(x,\eta)\in G\times\widehat{G}$. For the ambiguity kernel $\phi_{D_{\mathbb Z/N\mathbb Z}}:\widehat{G}\times G\to\mathbb C$, \begin{equation} \phi_{D_{\mathbb Z/N\mathbb Z}}(\xi,y) = {\rm i}2\pi \widehat{f}(-\xi) \left(1-{\rm e}^{{\rm i}2\pi\xi y/N}\right) g(y)^\ast. \end{equation} A natural choice for the position labeling function $f:G\to\mathbb R$ could be \begin{equation}\label{EQ:f-left} f(x) := x/N\quad {\rm for}\quad 0\leq x<N \end{equation} (here $f(x):=x/N$ for $0<x\leq N$ would be another good choice, but it ultimately leads to the same limit as $N\to\infty$ in the next section). Observe that for $0<\eta<N$ $$ 0 = N^{-1}\sum_{x=0}^{N-1}\left((x+1)/N-x/N\right) {\rm e}^{-{\rm i}2\pi x\eta/N} = {\rm e}^{{\rm i}2\pi\eta/N}\left(\widehat{f}(\eta) + N^{-1}\right) -\widehat{f}(\eta), $$ yielding \begin{equation} \widehat{f}(\eta) = \frac{-1/N}{1-{\rm e}^{-{\rm i}2\pi\eta/N}}, \end{equation} so that if $\widehat{g}(\eta)=f(\eta)$ (i.e. 
$g(y)=N\widehat{f}(-y)$) then \begin{equation}\label{EQ:ambiguitydiscrete} \phi_{D_{\mathbb Z/N\mathbb Z}}(\xi,y) = \begin{cases} \frac{{\rm i}2\pi}{N} \frac{1-{\rm e}^{{\rm i}2\pi\xi y/N}} {\left(1-{\rm e}^{{\rm i}2\pi\xi/N}\right) \left(1-{\rm e}^{-{\rm i}2\pi y/N}\right)} & {\rm if}\ \xi\not=0\ {\rm and}\ y\not=0,\\ 0 & {\rm if}\ \xi=0\ {\rm or}\ y=0. \end{cases} \end{equation} Let us define the time-frequency transform $Q_{\mathbb Z/N\mathbb Z}$ on the finite cyclic group $G=\mathbb Z/N\mathbb Z$ by its ambiguity kernel, where \begin{equation} \phi_{Q_{\mathbb Z/N\mathbb Z}}(\xi,y) = \begin{cases} \frac{{\rm i}2\pi}{N} \frac{1-{\rm e}^{{\rm i}2\pi\xi y/N}} {\left(1-{\rm e}^{{\rm i}2\pi\xi/N}\right) \left(1-{\rm e}^{-{\rm i}2\pi y/N}\right)} & {\rm if}\ \xi\not=0\ {\rm and}\ y\not=0,\\ 1 & {\rm if}\ \xi=0\ {\rm or}\ y=0. \end{cases} \end{equation} That is, we summed \eqref{EQ:ambiguitydiscrete} and \eqref{EQ:addmargins}, obtaining the correct margins. \begin{thm} Mapping $[u]\mapsto Q_{\mathbb Z/N\mathbb Z}[u]$ is invertible for all $N\in\mathbb Z^+$. The corresponding $Q_{\mathbb Z/N\mathbb Z}$-quantization is invertible if and only if $N$ is prime or $N=1$. \end{thm} \paragraph{Proof.} Let $D=Q_{\mathbb Z/N\mathbb Z}$. Case $N=1$ is trivial. Assume now that $N$ is prime. Then the ambiguity kernel $\phi_D$ has no zeros, so let $g(\xi,y):=\phi_D(\xi,y)^{-1}$. Hence starting from $D[u] = F^{-1}(\phi_D\,FR[u])$ we find $FR[u]=g\,FD[u]$, and from it we obtain $u(x)\,u(x-y)^\ast$ for all $x,y\in\mathbb Z/N\mathbb Z$. Thus $[u]\mapsto D[u]$ is invertible when $N$ is prime. What about the invertibility of the $D$-quantization $a\mapsto a^D$? Recall that the Kohn--Nirenberg quantization $a\mapsto a^R$ is invertible: a linear mapping $A:L^2(\mathbb Z/N\mathbb Z)\to L^2(\mathbb Z/N\mathbb Z)$ is of the form $A=a^R$, where $ a(x,\eta)=\eta(x)^\ast A\eta(x) $ for $\eta(x):={\rm e}^{{\rm i}2\pi x\eta/N}$. Then the $D$-quantization $b\mapsto b^D$ is invertible, because \begin{eqnarray*} && \langle u,a^R v\rangle = \langle R(u,v),a\rangle = \langle FR(u,v),Fa\rangle = \langle FD(u,v),Fb\rangle = \langle D(u,v),b\rangle \\ & = & \langle u,b^D v\rangle, \end{eqnarray*} where $Fb=g^\ast Fa$: here $a^R=b^D$. This concludes the case of prime $N$. Finally, let us consider composite $N\geq 4$. Now $\phi_D(\xi,y)=0$ if and only if $\xi,y\not=0$ and $\xi y\equiv 0$ modulo $N$; in particular, $\xi$ and $y$ are then zero divisors modulo $N$. In this case, $b^D=0$ if $b$ is a symbol such that $Fb$ is supported only on this zero set of $\phi_D$. Hence the $D$-quantization is neither injective nor surjective (due to the finite-dimensionality). However, it turns out that $[u]\mapsto D[u]$ is still invertible. Finding $[u]$ from $D[u]$ is reduced to phase retrieval, as we easily get the time margins $ |u(x)|^2 = \sum_{\eta=1}^N D[u](x,\eta). $ Especially, the case $u=0$ is trivial, so assume $u\not=0$. Knowing $D[u]$, we also find $$ F^{-1}D[u](\xi,y) = \phi_D(\xi,y)\,\frac{1}{N}\sum_{z=1}^N {\rm e}^{-{\rm i}2\pi z\xi/N} \,u(z)\,u(z-y)^\ast. $$ From this, since $\displaystyle \frac{1-{\rm e}^{{\rm i}2\pi\xi y/N}}{1-{\rm e}^{{\rm i}2\pi\xi/N}} = \sum_{k=0}^{y-1} {\rm e}^{{\rm i}2\pi k\xi/N}$ for $0<y<N$, we obtain the numbers $$ E(x,y) := \sum_{k=0}^{y-1} u(x+k)\,u(x+k-y)^\ast $$ for all $x$. We may recover only the equivalence class $[u]$ of $u$, but suppose we know the complex phase of some $u(z)\not=0$. We proceed recursively as follows: We find the numbers $u(z+1)$ and $u(z-1)$ from $E(z+1,1)$ and $E(z,1)$, respectively.
If we have already recovered the numbers $u(z\pm h)$ for $0\leq h<j$, then we stably obtain the numbers $u(z+j)$ and $u(z-j)$ by finding their complex phases from $E(z+1,j)$ and $E(z,j)$, respectively. This completes the proof. \hfill{\bf QED} \begin{rem} {\rm In the previous proof, the stable algorithm for $D[u]\mapsto [u]$ can be built around any point $z\in\mathbb Z/N\mathbb Z$ for which $u(z)\not=0$. Let us also note the estimates $$ |\phi_D(\xi,y)|\leq |\phi_D(1,y)| = \frac{2\pi}{N}\left|1-{\rm e}^{{\rm i}2\pi/N}\right|^{-1}\leq\frac{\pi}{2} $$ for all $N\geq 2$ and $\xi,y$. Without loss of generality, for $0<y\leq N/2$ this follows by observing that $$ \phi_D(\xi,y) = \frac{{\rm i}2\pi}{N} \left(1-{\rm e}^{-{\rm i}2\pi y/N}\right)^{-1} \sum_{k=0}^{y-1} {\rm e}^{{\rm i}2\pi k\xi/N}. $$ By the geometry of the unit circle, the optimal bounds $$ |\phi_D(\xi,y)|\leq \frac{2\pi}{N} \left|1-{\rm e}^{{\rm i}2\pi/N}\right|^{-1} $$ form a monotonically decreasing sequence with the limit $1$ as $N\to\infty$. } \end{rem} \section{Limit of cyclic case: Born--Jordan} Next we study what happens to the transforms $D_{\mathbb Z/N\mathbb Z}$ when we take the limit $N\to\infty$, interpreting $\mathbb Z/N\mathbb Z$ as tending either to the compact circle group $\mathbb T=\mathbb R/\mathbb Z$ or to the non-compact group $\mathbb Z$ of integers. Starting from these natural time-frequency transforms of signals on $\mathbb Z/N\mathbb Z$, we thus study the limiting cases on compact $\mathbb T$ and non-compact $\mathbb Z$, as well as their further limits on the real line $\mathbb R$. In the limit $N\to\infty$ towards the compact group $\mathbb T$, from the transforms $D_{\mathbb Z/N\mathbb Z}$ of the previous section we obtain the time-frequency transform $D_{\mathbb T}$ with ambiguity kernel $\phi_{D_{\mathbb T}}:\mathbb Z\times\mathbb T\to\mathbb C$, where \begin{equation} \phi_{D_{\mathbb T}}(\xi,y) = \begin{cases} -\xi^{-1} \left(1-{\rm e}^{{\rm i}2\pi\xi y}\right) / \left(1-{\rm e}^{-{\rm i}2\pi y}\right) & {\rm if}\ \xi\not=0\ {\rm and}\ y\not=0, \\ 1 & {\rm if}\ \xi\not=0\ {\rm and}\ y=0, \\ 0 & {\rm if}\ \xi=0. \end{cases} \end{equation} Indeed, $y\mapsto\phi_{D_{\mathbb T}}(\xi,y)$ is a trigonometric polynomial: \begin{eqnarray*} \phi_{D_\mathbb T}(\xi,y) & = & \frac{1}{|\xi|}\sum_{k=0}^{|\xi|-1}{\rm e}^{-{\rm i}2\pi yk} \quad{\rm if}\quad \xi<0,\\ \phi_{D_\mathbb T}(\xi,y) & = & \frac{1}{\xi} \sum_{k=1}^{\xi} {\rm e}^{+{\rm i}2\pi yk} \quad\quad{\rm if}\quad \xi>0. \end{eqnarray*} Consequently, $(h\mapsto F^{-1}(\phi_D\,Fh)): \mathscr S(\mathbb T\times\mathbb Z)\to\mathscr S(\mathbb T\times\mathbb Z)$ is a Schwartz multiplier. Moreover, the time-frequency transform $D_{\mathbb T}$ is band-limited, mapping $\mathscr T(\mathbb T)\times\mathscr T(\mathbb T)$ to $\mathscr T(\mathbb T\times\mathbb Z)$. Since $|\phi_{D_{\mathbb T}}(\xi,y)|\leq 1$, by Theorem~\ref{THM:L2boundedness} we have the $L^2$-bounds \begin{eqnarray} \|D_{\mathbb T}(u,v)\|\leq \|u\|\,\|v\|,\\ \|a^{D_\mathbb T} v\|\leq \|a\|\,\|v\|. \end{eqnarray} Analogously, we have the time-frequency transform $D_{\mathbb Z}$ on the non-compact group $\mathbb Z$, with ambiguity kernel $\phi_{D_{\mathbb Z}}:\mathbb T\times\mathbb Z\to\mathbb C$, \begin{equation} \phi_{D_{\mathbb Z}}(\xi,y) = \begin{cases} y^{-1} \left(1-{\rm e}^{{\rm i}2\pi\xi y}\right) / \left(1-{\rm e}^{{\rm i}2\pi\xi}\right) & {\rm if}\ \xi\not=0\ {\rm and}\ y\not=0, \\ 1 & {\rm if}\ \xi=0\ {\rm and}\ y\not=0, \\ 0 & {\rm if}\ y=0.
\end{cases} \end{equation} Hence the time-lag kernel $\varphi_{D_{\mathbb Z}}:\mathbb Z\times\mathbb Z\to\mathbb C$ is given by \begin{equation} \varphi_{D_{\mathbb Z}}(x,y) = \begin{cases} 1/|y| & {\rm if}\ -y< x\leq 0\ {\rm or}\ 0< x\leq -y,\\ 0 & {\rm otherwise.} \end{cases} \end{equation} Here $\varphi_{D_{\mathbb Z}}(x,y)=K_{\mathbb Z}(-x,-x-y)^\ast$ (equivalently, $K_{\mathbb Z}(x,y)=\varphi_{D_{\mathbb Z}}(-x,x-y)^\ast$), with \begin{equation} \delta^{D_{\mathbb Z}} v(x) = \sum_{y\in\mathbb Z} K_{\mathbb Z}(x,y)\,v(y), \end{equation} with kernel $K_{\mathbb Z}:\mathbb Z\times\mathbb Z\to\mathbb C$ given by \begin{equation} K_{\mathbb Z}(x,y) = \begin{cases} 1/|x-y| & {\rm if}\ y<0\leq x\ {\rm or}\ x<0\leq y,\\ 0 & {\rm otherwise.} \end{cases} \end{equation} In the continuum limit on $\mathbb R$, we obtain the time-frequency transform $D_{\mathbb R}$, with \begin{equation} \delta^{D_{\mathbb R}}v(x) = \int_{\mathbb R} K_{\mathbb R}(x,y)\,v(y)\,{\rm d}y, \end{equation} where the Schwartz kernel $K_{\mathbb R}:\mathbb R\times\mathbb R\to\mathbb C$ is given by \begin{equation} K_{\mathbb R}(x,y) = \begin{cases} 1/|x-y| & {\rm if}\ xy<0, \\ 0 & {\rm otherwise.} \end{cases} \end{equation} Hence $D_{\mathbb R}=Q$ is the Born--Jordan transform, \begin{equation} Q(u,v)(x,\eta) = \int_{\mathbb R} {\rm e}^{-{\rm i}2\pi y\eta}\frac{1}{y}\int_{x-y/2}^{x+y/2} u(t+y/2)\,v(t-y/2)^\ast \,{\rm d}t\,{\rm d}y. \end{equation} Time-frequency transform $D_{\mathbb Z/N\mathbb Z}$ has zero margins both in time and in frequency, but the margins for $Q$ are correct. \paragraph{Alternative way.} Above, we went from $\mathbb Z/N\mathbb Z$ to $\mathbb R$ via $\mathbb Z$. What if our route had been via $\mathbb T$ instead? The outcome must still be the Born--Jordan transform. Let us check this process: The time-frequency transform $D_{\mathbb T}$ on the compact group $\mathbb T$ has time-lag kernel $\varphi_{D_{\mathbb T}}:\mathbb T\times\mathbb T\to\mathbb C$, where for $y\not=0$ we have \begin{equation} \varphi_{D_{\mathbb T}}(x,y) = {\rm i}2\pi\,\frac{w(x)-w(x+y)}{1-{\rm e}^{-{\rm i}2\pi y}}, \end{equation} with the sawtooth wave $w:\mathbb T\to\mathbb R$ satisfying $w(x)=x$ for $0<x<1$. Now \begin{equation} \delta^{D_{\mathbb T}} v(x) = \int K_{\mathbb T}(x,y)\,v(y)\,{\rm d}y, \end{equation} with kernel $K_{\mathbb T}:\mathbb T\times\mathbb T\to\mathbb C$ given by $K_{\mathbb T}(x,y)=\varphi_{D_{\mathbb T}}(-x,x-y)^\ast$, \begin{equation} K_{\mathbb T}(x,y) = -{\rm i}2\pi\,\frac{1-(x-y)}{1-{\rm e}^{{\rm i}2\pi(x-y)}} \end{equation} when $-1<y<0<x<1$ and $x-y\not=1$: if here $x,y\to 0$, we again obtain the Born--Jordan transform $Q$ as the continuum limit. Properties of the Born--Jordan transform were studied in \cite{Turunen}, where closely related variants of $D_{\mathbb T},D_{\mathbb Z}$ were also introduced. \section{Computed pictures of discrete distributions} In the following pictures, we present three different discrete time-frequency distributions for the same signal: the periodic and non-periodic Born--Jordan distributions, and a spectrogram. The original speech signal of the author has 1000 samples, with a sampling rate of 4000 Hz. The pictures were produced using Matlab. In the grey-scale time-frequency distribution pictures, higher values are darker in shade. For the spectrogram, zero value corresponds to white. For the other time-frequency images, zero value corresponds to mid-grey.
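Such discrete distributions are straightforward to compute from the ambiguity kernel. As a minimal illustration (not the Matlab code used for the figures below), the following Python/NumPy sketch computes $Q_{\mathbb Z/N\mathbb Z}[u]$; it uses plain counting-measure DFT conventions, so the constants differ from the normalized Haar measure of this paper.
\begin{verbatim}
# Sketch: discrete time-frequency distribution Q_{Z/NZ}[u] from its
# ambiguity kernel (Python/NumPy; counting-measure DFT conventions).
import numpy as np

def phi_Q(N):
    # Margin-corrected ambiguity kernel phi_{Q_{Z/NZ}}(xi, y).
    xi = np.arange(N)[:, None]
    y = np.arange(N)[None, :]
    mask = (xi != 0) & (y != 0)
    num = 1.0 - np.exp(2j * np.pi * xi * y / N)
    den = (1.0 - np.exp(2j * np.pi * xi / N)) * (
        1.0 - np.exp(-2j * np.pi * y / N))
    den = np.where(mask, den, 1.0)            # avoid division by zero
    return np.where(mask, (2j * np.pi / N) * num / den, 1.0 + 0j)

def tf_transform(u, v, phi):
    # D(u,v) = inverse symplectic Fourier transform of phi * FR(u,v);
    # with phi = 1 this reproduces the Kohn-Nirenberg transform R(u,v).
    N = u.size
    lag = (np.arange(N)[:, None] - np.arange(N)[None, :]) % N  # z - y
    A = np.fft.fft(u[:, None] * np.conj(v[lag]), axis=0)       # (xi, y)
    return np.fft.fft(np.fft.ifft(phi * A, axis=0), axis=1)    # (x, eta)

# Time margin check: since phi(xi, 0) = 1, summing over eta and
# dividing by N recovers u(x) v(x)^*.
N = 8
u = np.random.randn(N) + 1j * np.random.randn(N)
Q = tf_transform(u, u, phi_Q(N))
assert np.allclose(Q.sum(axis=1) / N, np.abs(u) ** 2)
\end{verbatim}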
\newpage \includegraphics[trim=10mm 0mm 0mm 10mm,scale=0.75]{signalWhy.pdf} \captionof{figure} {Speech signal ``Why?'', sampling rate 4000 Hz.} \includegraphics[trim=3mm 0mm 0mm 0mm,scale=0.75]{BJwhy0.pdf} \captionof{figure}{Time-frequency distribution $Q_{\mathbb Z}[u]$ for signal $u$ (``Why?'').} \newpage \includegraphics[trim=3mm 0mm 0mm 0mm,scale=0.75]{period250.pdf} \captionof{figure}{Time-frequency distribution $Q_{\mathbb Z/N\mathbb Z}[u]$ for the periodized signal $u$ (``...Why\,Why\,Why\,Why...''), zooming into a single period of 250 ms.} \includegraphics[trim=3mm 0mm 0mm 0mm,scale=0.75]{spectro41.pdf} \captionof{figure}{Spectrogram for periodized signal $u$ (``...Why\,Why\,Why\,Why...''), with a Gaussian window, zooming into a single period of 250 ms.}
\section{Introduction} \noindent Scene appearance is directly dependent on the light source properties, such as the spectral composition of the emitted light, and the position and direction of the light source, whose interaction with the scene objects provokes shaded surfaces or dark cast shadows that become essential visual cues to understand image content. Esti\-ma\-ting the properties of the light conditions from a single image is an initial step to improve subsequent computer vision algorithms for image understanding. In this paper we perform a preliminary study to estimate the color and position of the light with a simple and unified approach, based on the shading properties of the image, where we assume a single scene illuminant. Estimating the color of the light from a single image has been a focus of attention in previous research. Computational color constancy (CC) has been studied in a large number of works \cite{FOSTER2011Color,Finlayson2006, gijsenij2011computational} where the problem was tackled from different points of view \cite{gijsenij2011computational}. A first approach was to extract statistics from RGB image values under different assumptions to estimate the canonical white. A second approach was to introduce spatio-statistical information like gradients or the frequency content of the image. A last group of CC algorithms tried to get the information from physical cues of the image (highlights, shadows, inter-reflections, etc.). In recent years, new approaches have been based on deep learning frameworks, where the solution is driven by the data with physical constraints in the loss functions. An updated comprehensive compendium and comparison of CC algorithm performances can be found in \cite{cheng2014illuminant,Lou2015DeepColor,das2018color,Yan2018MultipleIE}. In this work we propose one more approach based on a deep architecture, but the color of the light source is estimated jointly with the light direction. Estimating the direction of the light has also been tackled in different areas like computer graphics, computer vision or augmented reality. Single image light direction estimation can be divided into two different kinds of approaches. First, those in which light probes with known reflectance and geometric properties are used. A specular sphere is commonly used to represent light position in different computer graphics applications \cite{Debevec1998RenderingSO,Agusanto2003PhotorealisticRF,Schnieders2011LightSE}. Randomly shaped objects were also used to detect light position by Mandl et al. in \cite{Mandl2017LearningLF}, jointly with a deep learning approach to get the light position with each of these randomly shaped objects. Second, we find those works in which no probe is used, and where multiple image cues such as shading, shape and cast shadows are the basis to estimate light direction. Some examples of these works can be found in the computer vision literature \cite{Lalonde2009EstimatingNI,samaras2003incorporating,Panagopoulos2011IlluminationEA,panagopoulos2009robust,arief2012realtime}. More recently, some deep learning methods have been proposed to estimate scene illumination and have been used for different computer graphics tasks. Gardner et al.\cite{gardner2017learning} introduced a method to convert low dynamic range (LDR) indoor images to high dynamic range (HDR) images: first they used a deep network to localize the light source in the LDR image environmental map, and then they used another network with these annotated LDR images to convert them to HDR images.
Following a similar approach, Geoffroy et al. \cite{hold2017deep} introduced a method to convert outdoor LDR images to HDR images. They trained their network with a set of panorama images and predicted HDR environmental maps with sky and sun positions. Later on, Geoffroy et al. \cite{gardner2019deep} extended their previous idea to indoor lighting, but replaced the environmental maps with geometric and photometric light properties. Sun et al. \cite{Sun2019SingleIP} introduced an encoder-decoder based network to relight a single portrait image: the encoder predicts the input image environmental map and an embedding for the scene information, while the decoder builds a new version of the image scene with the target environmental map and obtains a relighted portrait. Very recently, Murmann et al. \cite{murmann2019dataset} have introduced the \textit{Multi-illumination dataset} of $1000$ scenes, each one acquired under $25$ different light position conditions, and they used a deep network to predict a right-side illuminated image from its corresponding left-side illuminated image. In this work we will test our proposal on this new wild dataset after providing a procedure to compute the light direction from each sample. To sum up, we can state that a wide range of works has tackled the problem of estimating the color and direction of the scene light source from different points of view, focusing on specific applications. In this work we propose a simple end-to-end approach to jointly characterize the light source of a scene, both in color and direction. We aim to measure the level of accuracy we can achieve, so that the method can be applied to a wide range of images without using probes in the scene, becoming a robust preliminary stage to be subsequently combined with any task. To this end, the paper is organized as follows: first we introduce a new synthetic dataset; secondly we use it to train a deep architecture that estimates light properties from a single image; finally we show how our proposal performs on three different scenarios: synthetic, real indoor and natural images. \section{A synthetic dataset}\label{sec:dataset} \noindent In order to train our end-to-end network that estimates light conditions we developed a new image dataset similar to SID \cite{sial2020deep}, which was created for intrinsic image decomposition. This dataset is formed by a large set of images of surreal scenes where Shapenet objects \cite{shapenet} are enclosed in the center of multi-sided rooms with highly diverse backgrounds provided by flat textures on the room walls, resulting in a large range of different light effects. From now on we will refer to this dataset as SID1, and we will propose a new one better adapted to our problem, using the same methodology and software provided by the authors, which is based on the open source Blender rendering engine to synthesize images. Our new dataset will be called SID2. The main difference is that it introduces more than one object in each scene, with the aim of increasing the number of strong light effects and interactions. Additionally, we also introduce more variability in the distance from the light source to the scene center. The dataset is formed by $45,000$ synthetic images with the corresponding ground truth data: direction and color of the light source. \begin{figure} \begin{tabular}{c} \includegraphics[width=0.45\textwidth]{images/generation_setup.png} \\ \end{tabular} \caption{Image Generation Setup.
Camera and light positions are given in spherical coordinates $(r, \vartheta,\varphi)$.} \label{Figure:generation_setup} \end{figure} We made several assumptions in building the dataset: (a) objects are randomly positioned around the scene center but always close to the room ground floor to have realistic cast shadows; (b) the light source direction goes from the scene center to a point on an imaginary semi-sphere of random radius, and the light has a random RGB color; (c) the camera is positioned at a random distance from the center of the scene, always with the focal axis pointing to the scene center. In Figure \ref{Figure:generation_setup} we show the diagram of the synthetic world we defined for the generation of SID2. More specifically, we took $45,000$ 3D objects from the Shapenet dataset \cite{shapenet}. As in \cite{sial2020deep}, we did not use textured surfaces; we used a diffuse bidirectional scattering distribution function (BSDF) with random color and roughness values for each mesh texture in each object. This roughness parameter controls how much light is reflected back from each object surface. We randomly picked from $1$ to $3$ objects in each image. They were placed at random locations within the camera view range. We placed an empty object in the center of the scene to ensure non-overlapping among the rest of the objects. Light direction was randomly defined in spherical coordinates $(r_l,\vartheta_l,\varphi_l)$, namely radius, pan and tilt, respectively. We took random values within the ranges of $[20m,50m]$, $[0^{\circ},360^{\circ}]$ and $[30^{\circ},90^{\circ}]$, respectively, in steps of $1^{\circ}$ for pan and $5^{\circ}$ for tilt. Light intensity and chromaticity were randomly selected, but chromaticity was constrained to be around the Planckian locus to simulate natural lighting conditions. Camera position is also denoted in spherical coordinates as $(r_c,\vartheta_c,\varphi_c)$, where $r_c$ was fixed at $20m$, the pan was fixed at $\vartheta_c=0^{\circ}$, and the tilt randomly varied within $[10^{\circ},70^{\circ}]$. In the final ground-truth (GT), light pan and tilt are provided with reference to the camera position, in order not to depend on real world positions, which are usually not available in real images. Backgrounds were generated in the same way as in \cite{sial2020deep}.
\section{Our deep architecture} \noindent We propose an inception-based encoder-decoder architecture to predict light parameters. In Figure \ref{fig:deep architecture} we give a scheme, where we can see that our encoder has five modules combining 3 types of layers: inception, convolution and pooling. The encoder input is the image, which is transformed to a higher dimensional feature space; from this embedding, three decoders predict the pan, tilt and color of the light source. Pan and tilt output predictions are given as functions of angle differences. We use the functions $\sin(\vartheta_c -\vartheta_l)$ and $\cos(\vartheta_c -\vartheta_l)$ to bound the pan output. Similarly, the tilt prediction is represented by the angle differences $\sin(\varphi_c - \varphi_l)$ and $\cos(\varphi_c - \varphi_l)$. Finally, color is predicted as R, G and B values. We used the split inception module from \cite{szegedy2016rethinking}, which replaces $n \times n$ convolution filters with $1 \times n$ and $n \times 1$ filters, to achieve faster convergence with fewer parameters overall. Our global loss function to estimate illumination parameters is based on three terms: \begin{equation} Loss(x,\hat{y}) = \alpha_1 L_{Pan}(x,\hat{y}) + \alpha_2 L_{Tilt}(x,\hat{y}) + \alpha_3 L_{Color}(x,\hat{y}) \end{equation} where $x$ is the input image, $\hat{y}$ is a 7-dimensional vector giving the estimation of the scene light properties represented by $x$, and $\alpha_i$ are the weights for the different loss terms defined for pan, tilt and color, which are respectively given by: \begin{equation} \begin{array}{lll} L_{Pan}(x,\hat{y}) = MSE\{(\hat{y}_1 - \sin(\vartheta_c^x -\vartheta_l^x)) + (\hat{y}_2 - \cos(\vartheta_c^x -\vartheta_l^x))\} \\ L_{Tilt}(x,\hat{y}) = MSE\{(\hat{y}_3 - \sin(\varphi_c^x -\varphi_l^x)) + (\hat{y}_4 - \cos(\varphi_c^x -\varphi_l^x))\} \\ L_{Color}(x,\hat{y}) = \arccos\left((\hat{y}_5 \cdot x_{RGB}) \,/\, (\|\hat{y}_5\|\,\|x_{RGB}\|)\right) \end{array}\nonumber \end{equation} $L_{Pan}$ and $L_{Tilt}$ are computed as the mean square error ($MSE$) between the estimations for pan ($\hat{y}_1$ and $\hat{y}_2$) and for tilt ($\hat{y}_3$ and $\hat{y}_4$) and the corresponding functions of the difference between the camera and light positions in the ground-truth of $x$. The third loss term, $L_{Color}$, is the mean angular error between the estimated RGB values, $\hat{y}_5$, and the color of the light for image $x$ provided in the ground-truth. This network has been trained using the Adam optimizer \cite{AdamKingmaB14}, with an initial learning rate of $0.0002$, which is decreased by a factor of $0.1$ on reaching a plateau.
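For concreteness, the following is a minimal PyTorch-style sketch of the combined loss above (a hedged illustration: the framework, function and variable names are our own assumptions, not published code, and each MSE term is written component-wise):
\begin{verbatim}
# Sketch of the combined loss (assumed PyTorch; all names illustrative).
import torch
import torch.nn.functional as F

def light_loss(y_hat, pan_diff, tilt_diff, rgb, a1=1.0, a2=1.0, a3=1.0):
    # y_hat: (B, 7) = [sin/cos of pan diff, sin/cos of tilt diff, R, G, B]
    # pan_diff, tilt_diff: ground-truth angle differences in radians, (B,)
    # rgb: ground-truth light color, (B, 3)
    l_pan = F.mse_loss(y_hat[:, 0], torch.sin(pan_diff)) \
          + F.mse_loss(y_hat[:, 1], torch.cos(pan_diff))
    l_tilt = F.mse_loss(y_hat[:, 2], torch.sin(tilt_diff)) \
           + F.mse_loss(y_hat[:, 3], torch.cos(tilt_diff))
    # Mean angular error between predicted and ground-truth color vectors.
    cos_sim = F.cosine_similarity(y_hat[:, 4:7], rgb, dim=1)
    l_color = torch.acos(cos_sim.clamp(-1 + 1e-7, 1 - 1e-7)).mean()
    return a1 * l_pan + a2 * l_tilt + a3 * l_color
\end{verbatim}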
The network was trained using the Adam optimizer \cite{AdamKingmaB14}, with an initial learning rate of $0.0002$, decreased by a factor of $0.1$ upon reaching a plateau. Weights were initialized using He normal initialization \cite{He2016DeepRL}, and all experiments in the following sections used a batch size of $16$. In the following sections we show the results of several experiments evaluating the performance of the architecture on different datasets and conditions. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{images/Network2b.png} \caption{Deep Architecture. Inception module from \cite{szegedy2016rethinking}} \label{fig:deep architecture} \end{figure} \section{Experiment 1: Synthetic dataset} \noindent In this first experiment we trained and tested the proposed architecture on two different datasets: SID1 (single object) and SID2 (multiple objects). The results are shown in Table \ref{tab:Exp_SID}, where we report the pan and tilt components of the error separately, together with the global angular error of the direction estimate and the color error. We can see that all the estimations improve when the network is trained on the more complex dataset, SID2, where light interactions among multiple objects provide richer shading cues. Color estimation, however, shows only a slight improvement: its performance is similar for both datasets. In Figure \ref{Figure:SID_examples} we show qualitative results on the SID2 dataset. Images are ordered from smaller (left) to larger (right) direction estimation error. \begin{table}[h!] \tiny \centering \resizebox{\columnwidth}{!}{\begin{tabular}{|c|c|c|c|c|} \hline Dataset & Pan & Tilt & Direction & Color \\ \hline \hline SID1 & $14.63$ & $9.86$ & $16.98$ & $1.05$\\ \hline SID2 & $10.46$ & $9.21$ & $14.22$ & $1.02$ \\ \hline \end{tabular}} \caption{\textit{Table 1.} Estimation Errors (in degrees) for light source direction and color with the proposed architecture trained on SID1 and SID2.} \label{tab:Exp_SID} \vspace{4mm} \end{table} Intuitively, as the light becomes more zenithal, shadows shorten and cast shadows carry more uncertainty for estimating light direction. We analysed the performance of the method at different tilt levels of the light source, from near the ground (level 1: $[30^\circ$, $50^\circ]$) up to the zenithal area (level 3: $[70^\circ$, $90^\circ]$). This effect is confirmed in Table \ref{tab:my_label2}, where the estimated direction errors clearly increase from level 1 to level 3. \begin{table}[!h] \tiny \centering \resizebox{\columnwidth}{!}{\begin{tabular}{|c|c|c|c|c|} \hline Tilt range & Pan & Tilt & Direction & Color \\ \hline \hline Level 1 & $4.92$ & $5.14$ & $7.57$ & $1.04$\\ \hline Level 2 & $8.51$ & $5.14$ & $11.97$ & $0.90$ \\ \hline Level 3 & $22.36$ & $18.33$ & $28.33$ & $1.00$\\ \hline \end{tabular}} \caption{\textit{Table 2.} Estimation Errors (in degrees) for light direction and color at different tilt levels.} \label{tab:my_label2} \vspace{4mm} \end{table} Similarly, we analysed the performance at different pan levels, each covering a $90\degree$ sector: level 1 corresponds to light coming from the front, level 2 from the right, level 3 from the back, and level 4 from the left. The tilt angle was kept between $30\degree$ and $70\degree$ to analyze the pan error while minimizing zenithal tilt effects. Table \ref{tab:my_label3} shows the results for this experiment; both direction and color errors are consistent across all pan levels.
\begin{table}[!h] \centering \tiny \resizebox{\columnwidth}{!}{\begin{tabular}{|c|c|c|c|c|} \hline Pan range & Pan & Tilt & Direction & Color \\ \hline \hline Level1 & $6.87$ & $5.10$ & $9.10$ & $0.98$\\ \hline Level2 & $6.24$ & $5.12$ & $8.80$ & $0.96$ \\ \hline Level3 & $6.35$ & $5.09$ & $9.46$ & $1.03$\\ \hline Level4 & $6.01$ & $5.16$ & $9.03$ & $0.97$\\ \hline \end{tabular}} \caption{\textit{Table 3.} Estimation Errors (in degrees) for light direction and color at different pan levels.} \label{tab:my_label3} \vspace{0.4cm} \end{table} \section{Experiment 2. Multi-illumination dataset} \noindent Once we have evaluated our method on synthetic images, we want to analyze whether it generalises to real images. We have tested our method on the \textit{Multi-illumination dataset} (MID) \cite{murmann2019dataset}, which contains $1000$ different indoor scenes, each containing a diffuse and a specular sphere at random locations. The light source is mounted above the camera and can be rotated to different predefined pan and tilt angles, creating different lighting conditions. The dataset provides the orientation of the light source for each acquired image, but since the light can bounce off the walls, the direction of the incident light on the scene is not determined by the light source angles alone and needs to be recomputed. We have defined a procedure to compute the incident light direction from the specular sphere present in all the scenes, whose highlights provide enough information to collect our GT data (pan and tilt angles between light and camera). The color of the light is obtained from the average color of the diffuse sphere. To obtain the light direction we used the ideas proposed in \cite{Schnieders2011LightSE}, which assume that the angle of the incident light is equal to the angle of the outgoing light at the specular highlight on a spherical ball. We use a reference image in each scene where the light and the camera both point in the same direction, towards the center of the scene. We also assumed that the light is mounted at $10\degree$ elevation with respect to the scene center. The angles obtained from these reference images allow us to correct the angular displacement caused by the sphere shifting position within the image across the remaining scene images.
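The highlight-based procedure above essentially inverts a mirror reflection on the sphere. The following minimal numpy sketch recovers the incident light direction from a highlight, assuming a known sphere center and radius in camera coordinates; it illustrates the geometric idea rather than the exact MID pipeline, and all names are ours.

\begin{verbatim}
import numpy as np

def light_dir_from_highlight(center, radius, ray):
    """center: sphere center in camera coordinates (camera at origin).
    ray: viewing direction through the highlight pixel."""
    d = ray / np.linalg.norm(ray)
    # Ray-sphere intersection: ||t*d - center||^2 = radius^2.
    b = -2.0 * d @ center
    c = center @ center - radius**2
    t = (-b - np.sqrt(b**2 - 4.0 * c)) / 2.0   # nearest intersection
    p = t * d                                  # highlight point on sphere
    n = (p - center) / radius                  # outward surface normal
    v = -d                                     # surface-to-camera direction
    # Mirror reflection: direction from the surface towards the light.
    l = 2.0 * (n @ v) * n - v
    return l / np.linalg.norm(l)
\end{verbatim}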
\begin{table}[!h] \centering \small \resizebox{\columnwidth}{!}{\begin{tabular}{|l|c|c|c|c|} \hline Dataset (Error) & Pan & Tilt & Direction & Color \\ \hline \hline Masked MID (Mean) & $21.38$ & $10.14$ & $22.72$ & $0.63$ \\ Masked MID (Median) & $13.74$ & $7.64$ & $17.80$ & $0.40$ \\ \hline UnMasked MID (Mean) & $14.28$ & $6.96$ & $15.44$ & $0.36$ \\ UnMasked MID (Median) & $8.20$ & $4.83$ & $10.83$ & $0.24$ \\ \hline \end{tabular}} \caption{\textit{Table 4.} Estimation errors in degrees on two versions of MID dataset (with Masked or UnMasked spheres).} \label{tab:wild1} \vspace{0.4cm} \end{table} Starting from the network trained on SID2, we fine-tuned it on this dataset under two different conditions: (a) keeping the reference spheres in the image, and (b) masking them out. Although specular spheres are not present in general real images, the first configuration provides an upper bound on the performance of our method on images in the wild. Table \ref{tab:wild1} shows the results of this experiment. As expected, network performance is much higher when the complete, unmasked images are used as inputs. We can also observe that the color estimation results are better than on SID2, mainly due to the stability of the single white light source in this dataset. To analyze the results while removing the influence of outliers, we also report the median errors on this dataset. For the upper-bound condition, these results are as good as the ones obtained using only synthetic images. Qualitative results are provided in Figure \ref{Figure:wild_examples}. The top row depicts the original images, the second row shows the spheres generated from the GT information, and the third row shows synthetic spheres generated from the obtained predictions. Finally, we performed a last experiment on this dataset by dividing the test set in two: (a) images with incident light coming from the front, and (b) from the back. Table \ref{tab:wild2} shows the errors computed for these two sets. Both color and direction errors are higher when the light comes from the back of the scene and a large area of the image becomes saturated. We note that the GT we created presents low accuracy for the subset of images with back light sources. This is due to the inherent uncertainty in what can be inferred from spheres illuminated from the back. Therefore, this division of the MID dataset is highly recommended when analysing results derived from this GT. \begin{table}[!h] \centering \tiny \resizebox{\columnwidth}{!}{\begin{tabular}{|c|c|c|c|c|} \hline Light position & Pan & Tilt & Direction & Color\\ \hline \hline Front & $11.51$ & $6.33$ & $13.09$ & $0.34$ \\ \hline Back & $32.90$ & $11.21$ & $31.23$ & $0.52$ \\ \hline \end{tabular}} \caption{\textit{Table 5.} Estimation errors in degrees dividing MID dataset in front and back light.} \label{tab:wild2} \vspace{0.4cm} \end{table} \renewcommand{\arraystretch}{1.5} \newcommand\sizephotosyn{0.15} \begin{figure*}[h!] \begin{center} \setlength{\tabcolsep}{1pt} \begin{tabular}{rcccccc} \raisebox{1.2cm}{(a)} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im1.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im2.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im3.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im8.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im4.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/gt/im5.png} \\[-0.1cm] \raisebox{1.2cm}{(b)} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im1.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im2.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im3.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im8.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im4.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/pdcn/im5.png} \\[-0.1cm] \raisebox{1.2cm}{(c)}& \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im1.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im2.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im3.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im8.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im4.png} & \includegraphics[width=\sizephotosyn\textwidth]{images/Synthetic/diff/im5.png} \\[-0.1cm] \hline\hline $Direction$ & $0.57$ & $1.58$ & $4.3$ & $8.55$ & $8.81$ & $30.0$ \\[-0.2cm] $Color$ & $1.27$ & $0.78$ & $1.64$ & $1.8$ & $0.22$ & $0.24$ \\ \hline \end{tabular} \end{center} \caption{Direction and Color estimation examples on SID2 dataset: (a) Original images,
(b) Images generated with the estimated light properties, (c) RGB image subtraction between (a) and (b). The bottom rows give the corresponding computed errors for direction and color in degrees, ordered from smaller (left) to larger (right) direction estimation error.} \label{Figure:SID_examples} \end{figure*} \newcommand\sizephoto{0.225} \begin{figure*}[h!] \begin{center} \setlength{\tabcolsep}{1pt} \begin{tabular}{rcccc} \raisebox{1.0cm}{(a)} & \includegraphics[width=\sizephoto\textwidth]{images/real/data/im1.jpg} & \includegraphics[width=\sizephoto\textwidth]{images/real/data/im2.jpg} & \includegraphics[width=\sizephoto\textwidth]{images/real/data/im3.jpg} & \includegraphics[width=\sizephoto\textwidth]{images/real/data/im4.jpg}\\[-0.1cm] \raisebox{0.3cm}{(b)} & \includegraphics[width=\sizephoto\textwidth]{images/real/gt/im1.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/gt/im2.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/gt/im3.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/gt/im4.png}\\[-0.1cm] \raisebox{0.3cm}{(c)}& \includegraphics[width=\sizephoto\textwidth]{images/real/pdcn/im1.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/pdcn/im2.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/pdcn/im3.png} & \includegraphics[width=\sizephoto\textwidth]{images/real/pdcn/im4.png}\\ [-0.1cm]\hline\hline $Direction$ & $3.16$ & $5.60$ & $20.12$ & $25.05$ \\[-0.2cm] $Color$ & $0.16$ & $0.21$ & $1.30$ & $0.35$ \\ \hline \end{tabular} \end{center} \caption{Direction and Color estimation examples on Multi-illumination dataset: (a) Original images, (b) Ground-truth plotted on corresponding spheres, (c) Estimations provided by our proposed architecture. Bottom rows are computed errors for direction and color in degrees.} \label{Figure:wild_examples} \end{figure*} \section{Experiment 3. Natural images} \noindent Previous experiments show the performance of our method on synthetic and real indoor images. Here, we show a few qualitative results on real outdoor images. In Figure \ref{Figure:real_examples} we show some examples with strong outdoor cast shadows; in order to visually evaluate the predictions, we depict a synthetic pole at the top left corner of each image. In these examples the camera is assumed to be at a $45\degree$ tilt from the ground. The four images on the left are from the SBU shadow dataset \cite{vicente2016large}, and the two on the right were captured with a mobile device. \newcommand\sizephotosynb{0.16} \begin{figure*}[h!] \begin{center} \setlength{\tabcolsep}{1pt} \begin{tabular}{cccccc} \includegraphics[width=\sizephotosynb\textwidth]{images/real_result1.png} & \includegraphics[width=\sizephotosynb\textwidth]{images/real_result2.png} & \includegraphics[width=\sizephotosynb\textwidth]{images/real_result3.png} & \includegraphics[width=\sizephotosynb\textwidth]{images/real_result4.png} & \includegraphics[width=\sizephotosynb\textwidth]{images/real_result5.png} & \includegraphics[width=\sizephotosynb\textwidth]{images/real_result6.png} \\[-0.1cm] \end{tabular} \end{center} \caption{Examples of light direction estimation on natural images. Predicted direction is plotted top left in each image.} \label{Figure:real_examples} \end{figure*}
\section{Conclusions} In this work we have demonstrated the plausibility of using a simple deep architecture to estimate the physical light properties of a scene from a single image. The proposed approach is based on training a deep regression architecture on a large and diversified synthetic dataset. We show that the obtained regressor can generalize to real images and can be used as a preliminary step for further, more complex tasks. \bibliographystyle{plain} \small
\section{Introduction} \label{sec:intro} \subsection{Scientific Background} The integration of mobile health (mHealth) devices into behavioral health research has fundamentally changed the way researchers and interventionalists are able to collect data as well as deploy and evaluate intervention strategies. Leveraging mobile and sensing technologies, just-in-time adaptive interventions (JITAI) or ecological momentary interventions are designed to provide tailored support to participants based on their mood, affect, and socio-environmental context \citep{heron2010ecological, nahum2017just}. In order to deliver theory-based interventions at critical moments, researchers collect intensive longitudinal data using ecological momentary assessment (EMA) methods, which aim to capture psychological, emotional, and environmental factors that may relate to a behavioral outcome in near real-time. In practice, JITAIs' effectiveness depends on accurately identifying high-risk situations by the user or by pre-determined decision rules to initiate the delivery of intervention components. Decision rules for efficacious interventions rely on a thorough understanding of the factors that characterize a subject's risk for a behavioral outcome, the dynamics of these risk factors' relation with the outcome over time, and the knowledge of possible strategies to target a risk factor \citep{nahum2017just}. In the analysis of this paper, we investigate a behavioral health intervention study that targets smoking cessation. Historically, smoking cessation studies have used health behavior theory \citep{shiffman2002immediate,timms2013dynamical} or group-level trends of smoking antecedents \citep{piasecki2013smoking} to determine when a JITAI should be triggered. However, this approach is limited since current health behavior models are inadequate for guiding the dynamic and granular nature of JITAIs \citep{riley2011health,klasnja2015microrandomized}. Additionally, the design of efficacious smoking cessation interventions is challenged by the complexity of smoking behaviors around a quit attempt and misunderstandings of the addiction process \citep{piasecki2002have}. More recently, smoking behavior researchers have capitalized on the ability of mHealth techniques to collect rich streams of data capturing subjects' experiences close to their occurrence at a high temporal resolution. The structure, as well as the complexity, of these data provide unique opportunities for the development and implementation of more advanced analytical methods compared to traditional longitudinal data analysis methods used in behavioral research (e.g., mixed models, growth curve models) \citep{trail2014functional}. For example, researchers have applied reinforcement learning \citep{luckett2019estimating} and dynamic systems approaches \citep{trail2014functional,rivera2007using, timms2013dynamical} to design and assess optimal treatment strategies using mHealth data. Additionally, \cite{koslovsky2018bayesian}, \cite{de2017use} and \cite{berardi2018markov} have applied hidden and observed Markov models to study transitions between discrete behavioral states, \cite{shiyko2012using} and \cite{dziak2015modeling} have used mixture models to identify latent structures, and \cite{kurum2016time} have employed joint modeling techniques to study the complexity of smoking behaviors. 
Greater insights into the dynamic relation between risk factors and smoking behaviors have been generated by the application of functional data techniques \citep{ trail2014functional,vasilenko2014time,koslovsky2017time, tan2012time}. These methods are well-suited for high-dimensional data with unbalanced and unequally-spaced observation times, matching the format of data collected with EMAs. They also require few assumptions about the structure of the relations between risk factors and behavioral outcomes. One popular approach uses varying-coefficient models, which belong to the class of generalized additive (mixed) models. These semiparametric regression models allow a covariate's corresponding coefficient to vary as a smooth function of other covariates \citep{hastie1993varying}. For example, \cite{selya2015nicotine} examined how the relation between the number of cigarettes smoked during a smoking event and smoking-related mood changes varies as a function of nicotine dependence. More frequently, penalized splines have been employed in varying-coefficient models to investigate how the effect of a covariate varies as a function of time, leading to time-varying effect models (TVEM) \citep{tan2012time,lanza2013advancing,koslovsky2017time, shiyko2012using, mason2015time,vasilenko2014time}. These approaches allow researchers to identify the critical moments at which a particular risk factor is strongly associated with smoking behaviors, information that can be used to design tailored intervention strategies based on a subject's current risk profile. \subsection{Model Overview} While there are various inferential challenges that functional data analysis models can address, in the application of this paper we focus on incorporating three recurring themes in behavioral research to explore the relations between risk factors and smoking behaviors: \begin{enumerate} \item \textit{Model Assumptions} - Numerous smoking behavior research studies have relied on semiparametric, spline-based methods to learn the relational structure between risk factors and outcomes \citep{tan2012time,vasilenko2014time}. \item \textit{Variable Selection} - One of the main objectives of intensive longitudinal data analysis is to identify or re-affirm complex relations between risk factors and behavioral outcomes over time \citep{walls2005models}. \item \textit{Latency} - A common aim in smoking behavior research studies is to identify latent structure in the data, such as groups or clusters of subjects with similar smoking behaviors over time \citep{mccarthy2016repeated, cursio2019latent, geiser2013analyzing, dziak2015modeling, brook2008developmental}. \end{enumerate} To incorporate and expand upon these features in our analysis, we develop a flexible Bayesian varying-coefficient regression modeling framework for longitudinal binary responses that uses variable selection priors to provide insights into the dynamic relations between risk factors and outcomes. We embed spike-and-slab variable selection priors as mixtures of a point mass at zero (spike) and a diffuse distribution (slab) \citep{george1993variable,brown1998multivariate} and adopt the formulation of \cite{scheipl2012spike} to deconstruct the varying-coefficient terms, in our case time-varying effects, into a main effect, a linear interaction term, and a non-linear interaction term.
Unlike previous approaches in behavioral health research that use time-varying effect models, our formulation allows us to gain inference on whether a given risk factor is related to smoking behavior while also learning the type of relation. Additionally, by performing selection on fixed as well as random effects, our method is equipped to identify relations that vary over time and across subjects. For this, we exploit a P\'olya-Gamma augmentation scheme that enables efficient sampling without sacrificing interpretability of the regression coefficients as log odds ratios \citep{polson2013bayesian}. Furthermore, we adopt a Bayesian semiparametric approach to model fixed and random effects by replacing the traditional spike-and-slab prior with a nonparametric construction to cluster risk factors that have similar strengths of association. \subsection{ Just-in-Time Adaptive Interventions for Smoking Abstinence} Although multiple studies have examined momentary predictors of smoking lapse \citep{shiffman2000dynamic,piasecki2003smoking,businelle2014predicting}, JITAIs for smoking cessation are still nascent. Thus far, studies have used participant-labeled GPS coordinates to trigger supportive messages to prevent smoking \citep{naughton2016context}, or have tailored messages to the duration and intensity of participants' self-reported side effects while taking varenicline \citep{mcclure2016evaluating}. Using our proposed approach, we analyze intensive longitudinal data (ILD) collected in a study investigating the utility of a novel, smartphone-based smoking cessation JITAI (\textit{SmartT}). The \textit{SmartT} intervention \citep{businelle2016ecological} uses a lapse risk estimator to identify moments of heightened risk for lapse, and tailors treatment messages in real-time based upon the level of imminent smoking lapse risk and currently present lapse triggers. To our knowledge, no other studies have used EMA data to estimate risk for imminent smoking lapse and deliver situation-specific, individually-tailored treatment content prior to lapse. In this study, adult smokers (N=81) recruited from a smoking cessation research clinic were randomized to the \textit{SmartT} intervention, the National Cancer Institute's QuitGuide (\textit{NCI QuitGuide}), or weekly counseling sessions (\textit{usual care}), and followed over a five-week period spanning one week prior to a scheduled quit attempt to four weeks after. At the beginning of the assessment period, baseline measures were collected, and subjects were shown how to complete EMAs on a study-provided smartphone. Throughout the assessment period, subjects completed daily diaries and received four random EMAs on the smartphone each day. In each EMA, subjects were prompted about their recent smoking behaviors and alcohol consumption, and answered various questions regarding current psychological, social, and environmental factors that may contribute to an increased risk of smoking behaviors. Findings indicate that our approach is well-positioned to help researchers evaluate, design, and deliver tailored intervention strategies in the critical moments surrounding a quit attempt. In particular, results confirm previously identified temporal relations between smoking behaviors around a quit attempt and risk factors. They also indicate that subjects differ in how they respond to different risk factors over time.
Furthermore, we identify clusters of active risk factors that can help researchers prioritize intervention strategies based on their relative strength of association at a given moment. Importantly, our approach generates these insights with minimal assumptions regarding which risk factors are related to smoking in the presence of others, the structural form of the relation for active terms, or the parametric form of the regression coefficients. The rest of the paper is organized as follows. In section \ref{sec:methods}, we present our modeling approach and describe prior constructions. In section \ref{sec:case}, we investigate the relations between risk factors and smoking behaviors in the critical moments surrounding a scheduled quit attempt using mHealth data. In section \ref{sec:simul}, we conduct a simulation study investigating the variable selection and clustering performance of our proposed method on simulated data. In section \ref{sec:sens}, we evaluate prior sensitivity of our model. In section \ref{sec:remarks}, we provide concluding remarks. \section{Methods} \label{sec:methods} The objective of our analysis is to identify relations between a set of risk factors (i.e., baseline and EMA items) and a binary outcome (i.e., momentary smoking) repeatedly collected over time. For this, we employ a Bayesian variable selection framework that allows a flexible structure for the unknown relations. We achieve this by performing selection not only on main effects, but additionally on linear and non-linear interaction terms as well as random effects. In this work, we refer to fixed and random effects in the context of hierarchical or multilevel models, where fixed effects are constant across subjects and random effects differ at the subject level. We chose this terminology based on its familiarity within both frequentist and Bayesian paradigms, but point out that the fixed or population-level effects are treated as random variables in our model, and thus follow a probability distribution. \subsection{A Varying-Coefficient Model for Intensive Longitudinal Data Collected with EMAs} Let $y_{ij} \in \{0,1\}$ represent momentary smoking for subject $i = 1,\dots, N$, and $ \boldsymbol{x}_{ij}$ and $\boldsymbol{z}_{ij} $ represent $P$- and $D$-dimensional vectors of risk factors collected on each subject at time $j = 1,\dots, n_i$, respectively. To maintain temporality in our particular application (see section \ref{sec:case} for more details), we model the relation between momentary smoking by the next assessment and current, potential risk factors as a varying-coefficient model of the type \begin{equation}\label{VICM} logit(P(y_{i,j+1} = 1|\boldsymbol{x}_{ij},\boldsymbol{z}_{ij},u_{ij})) = \sum_{p=1}^{P}f_p(u_{ij}) x_{ijp} + \boldsymbol{\alpha}^{\prime}_i\boldsymbol{z}_{ij}, \end{equation} where $f_p(u)$ are smooth functions of a scalar covariate $u$, and $\boldsymbol{\alpha}_i$ represents subject-specific random effects. Similar temporal assumptions have been made previously in smoking behavior research studies \citep{bolman2018predicting,minami2014relations, shiffman1996first,shiffman2013conceptualizing,shiyko2014modeling}. Note that, in general, researchers may use the framework in Eq. \eqref{VICM} to model the relation between a binary outcome and potential risk factors collected concurrently, in addition to lagged trends, as is typical in longitudinal studies \citep{fitzmaurice2012applied}. With this formulation, we include varying-coefficient terms for each of the $P$ risk factors based on $u$.
However, in general, we can specify varying-coefficient terms that depend on $u^{\prime} \ne u$, and thus the number of varying-coefficient terms in the full model is not strictly $P$. If $u$ is chosen to represent time, then this model is commonly referred to as a \textit{time-varying effect model} in smoking behavior research \citep{tan2012time,vasilenko2014time,dziak2015modeling,koslovsky2017time}. Note that $ \boldsymbol{z}_{ij}$ is typically a subset of $\boldsymbol{x}_{ij} $ \citep{kinney2007fixed,cheng2010real,hui2017hierarchical} and that incorporating a 1 in $ \boldsymbol{x}_{ij}$ and $\boldsymbol{z}_{ij} $ allows for an intercept term that varies as a function of $u$ and a random intercept term, respectively. Additionally, this formulation can handle time-invariant risk factors, such as baseline items, by fixing $x_{ijp}$ ($z_{ijd}$) to $x_{ip}$ ($z_{id}$) for all observations $j$. We approximate the smooth functions with spline basis functions. Specifically, \begin{equation}\label{smooth} f_p(u_{ij})= \boldsymbol{ \mathcal{U}}_{ij}^{\prime} \boldsymbol{\phi}_p, \end{equation} where $\boldsymbol{ \mathcal{U}}_{ij}$ is an $r_p$-dimensional vector of spline basis functions evaluated at $u_{ij}$, and $ \boldsymbol{\phi}_p$ is the corresponding $r_p$-dimensional vector of spline coefficients. For simplicity, the splines are constructed with an equal number of equally spaced knots that depend on the minimum and maximum values of $\boldsymbol{u}$. \subsection{Penalized Priors for the Spline Coefficients}\label{Spline} Using a combination of variable selection and shrinkage priors, our approach generates insights on the underlying structure of the smooth functions by reconstructing them as the summation of main effect, linear interaction, and non-linear interaction components. Formally, we rewrite Equation (Eq.) \eqref{smooth} as \begin{equation}\label{smoothII} f_p(u_{ij}) = \beta^{*}_p\boldsymbol{ \mathcal{U}}_{ij}^{*\prime}\boldsymbol{\xi}_p + \beta_p^{\circ}u_{ij} + \beta_{0p}, \end{equation} where the constant term $\beta_{0p}$ captures the main effect of $\boldsymbol{x}_p$, $\beta_p^{\circ}$ represents the effect of the linear interaction between $\boldsymbol{u}$ and $\boldsymbol{x}_p$, and $\beta^{*}_p\boldsymbol{\xi}_p$ is a parameter-expanded vector of coefficients corresponding to the non-linear interaction term. To derive the non-linear component in Eq. \eqref{smoothII}, we start by penalizing the spline functions in Eq. \eqref{smooth} with a second-order Gaussian random walk prior following \begin{equation}\label{smoothprior} \boldsymbol{ \mathcal{U}} \boldsymbol{\phi}_p |s^2 \sim N(\boldsymbol{0},s^2\boldsymbol{ \mathcal{U}}\boldsymbol{P}^{-}\boldsymbol{ \mathcal{U}}^{\prime}), \end{equation} where $\boldsymbol{\mathcal{U}}$ is a $\sum_{i = 1}^N (n_i-1) \times r_p$-dimensional matrix with each row corresponding to $\boldsymbol{\mathcal{U}}_{ij}^{\prime}$ for the $i^{th}$ subject at the $j^{th}$ assessment, $s^2$ controls the amount of smoothness, and $\boldsymbol{P}$ is the appropriate penalty matrix \citep{lang2004bayesian}.
Next, we take the spectral decomposition of $\boldsymbol{ \mathcal{U}}\boldsymbol{P}^{-}\boldsymbol{ \mathcal{U}}^{\prime} =$ $\begin{bmatrix} \boldsymbol{U}_+ & \boldsymbol{U}_{\circ} \end{bmatrix}$ $\begin{bmatrix} \boldsymbol{V}_+ & \boldsymbol{0}\\ \boldsymbol{0} & \boldsymbol{0} \end{bmatrix}$ $\begin{bmatrix} \boldsymbol{U}_+^{\prime} \\ \boldsymbol{U}_{\circ}^{\prime}\end{bmatrix},$ where $\boldsymbol{U}_+$ is a matrix of eigenvectors with corresponding positive eigenvalues along the diagonal of matrix $\boldsymbol{V}_+$, and $\boldsymbol{U}_{\circ}$ holds the eigenvectors associated with the zero eigenvalues. Now, we can re-define the smooth functions in Eq. \eqref{smooth} as the sum of non-linear (penalized) interaction, linear (non-penalized) interaction, and main effect terms as presented in Eq. \eqref{smoothII}, where the penalized term is written as $\boldsymbol{\mathcal{U}}^*\boldsymbol{\varphi}_p^*$ with $\boldsymbol{\mathcal{U}}^* = \boldsymbol{U}_+\boldsymbol{V}_+^{1/2}$. By assuming independent normal priors for $\boldsymbol{\varphi}_p^*$, a proper prior for the penalized terms that is proportional to Eq. \eqref{smoothprior} can be obtained. We take two additional measures to enhance the computational efficiency of the resulting MCMC algorithm. First, only the eigenvalues/vectors that explain a majority of the variability in Eq. \eqref{smoothprior} are used to construct $\boldsymbol{\mathcal{U}}^*$. Additionally, we apply a parameter-expansion technique for the penalized terms in $f_p (\cdot)$, setting $\boldsymbol{\varphi}_p^* = \beta^{*}_p\boldsymbol{\xi}_p$, where $\beta^{*}_p$ is a scalar and $\boldsymbol{\xi}_p$ is a vector with the same dimension as $\boldsymbol{\varphi}_p^*$. This technique enables us to perform selection on the penalized terms as a group rather than determining their inclusion separately. By rescaling $\beta^{*}_p$ and $\boldsymbol{\xi}_p$ at each MCMC iteration, such that $|\boldsymbol{\xi}_p|$ has mean equal to one, $\boldsymbol{\xi}_p$ maintains the shape of the smooth function and $\beta^{*}_p$ represents the term's strength of association, while preserving identifiability, similar to \cite{scheipl2012spike}.
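For readers who prefer code, the construction of the penalized design matrix can be sketched in a few lines of numpy. Here the design matrix is a random stand-in, the penalty is the usual second-order difference matrix, and the $99.9\%$ retention rule anticipates the one used in our case study; this is an illustration of the idea, not our exact implementation.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n, r = 500, 20                       # observations, spline basis size
U = rng.normal(size=(n, r))          # stand-in for the spline design

# Second-order random-walk penalty P = D'D, D the 2nd difference matrix.
D = np.diff(np.eye(r), n=2, axis=0)
P = D.T @ D

# Spectral decomposition of U P^- U' (P^- is a pseudo-inverse).
M = U @ np.linalg.pinv(P) @ U.T
w, V = np.linalg.eigh(M)             # eigenvalues in ascending order
w, V = w[::-1], V[:, ::-1]           # sort descending
w = np.clip(w, 0.0, None)            # drop numerical negatives

# Keep the leading eigenpairs explaining 99.9% of the variability.
keep = np.searchsorted(np.cumsum(w) / w.sum(), 0.999) + 1
U_star = V[:, :keep] * np.sqrt(w[:keep])  # penalized design matrix
\end{verbatim}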
For variable selection, we impose spike-and-slab prior distributions on the $3P = T$-dimensional vector $\boldsymbol{\beta} = (\beta^*_1,\beta^{\circ}_1,\beta_{01},\dots, \beta^*_P,\beta^{\circ}_P, \beta_{0P})^{\prime}$. In general, the spike-and-slab prior distribution is composed of a mixture of a Dirac delta function at zero, $\delta_0(\cdot)$, and a known distribution, $\mathcal{S}(\cdot)$, such as a normal with mean zero and diffuse variance \citep{george1993variable,brown1998multivariate}. A latent indicator variable, $\nu_t$, representing a risk factor's inclusion or exclusion in the model determines whether the risk factor's regression coefficient is set to zero (spike) or free to be estimated in the model (slab). Specifically, for a given coefficient $\beta_t$, we assume \begin{eqnarray} \label{eq:prior_beta} \beta_t|\nu_t \sim \nu_t\cdot \mathcal{S}(\beta_t) + (1-\nu_t)\delta_0(\beta_t). \end{eqnarray} To complete the prior specification for this portion of the model, we assume that the slab component, $\mathcal{S}(\beta_t)$, follows a $N(0,\tau^2)$ with variance $\tau^2$, and that the inclusion indicators are distributed as $\nu_t|\theta_t \sim \mbox{Bernoulli}(\theta_t)$, with prior probability of inclusion $\theta_t \sim \mbox{Beta}(a_{\nu_t},b_{\nu_t})$. Integrating out $\theta_t$, we obtain $\nu_t \sim$ Beta-Binomial$(a_{\nu_t},b_{\nu_t})$, where the hyperparameters $a_{\nu_t}$ and $b_{\nu_t}$ are set to control the sparsity of the model. Lastly, each element $\xi_{pr}$ of $\boldsymbol{\xi}_p$ is assumed to follow a $N(\mu_{pr},1)$, with mean $\mu_{pr} =\pm1$ with equal probability. Placing a majority of the prior mass for each $ \xi_{pr}$ around $\pm 1$ is motivated by the role it plays in the expansion of $\boldsymbol{\varphi}_p^*$, as described above. \subsection{Prior Specification for the Random Effects} We perform selection on the random effects, $\boldsymbol{\alpha}_i$, using the modified Cholesky decomposition approach of \cite{chen2003random}. Specifically, we reparameterize the random effects as \begin{equation} \boldsymbol{\alpha}_{i} = \boldsymbol{K}\boldsymbol{\Gamma}\boldsymbol{\zeta}_i, \end{equation} where $\boldsymbol{K}$ is a positive diagonal matrix with elements $\boldsymbol{\kappa} = (\kappa_1,\dots, \kappa_D)^{\prime}$, and $\boldsymbol{\Gamma}$ is a lower triangular matrix with diagonal elements set to one and unconstrained elements otherwise. To perform variable selection, we set the prior for $\boldsymbol{\kappa}$ to follow a spike-and-slab prior distribution similar to that of section 2.2, where the slab distribution $\mathcal{S}(\kappa_d) = FN(m_0,v_0)$. Here, $FN$ represents a folded normal distribution defined as $$FN(m_0,v_0) = (2\pi v_0)^{-1/2}\exp(-(\kappa_d - m_0)^2/(2v_0)) + (2\pi v_0)^{-1/2}\exp(-(\kappa_d + m_0)^2/(2v_0)),$$ where $m_0 \in \mathbb{R}$ and $v_0 > 0 $ are location and scale parameters, respectively. Note that we forgo the parameter-expansion approach of \cite{kinney2007fixed}, which introduces a redundant multiplicative parameter in the implied random effect covariance matrix, in favor of a model that enables meaningful inference for $\boldsymbol{\kappa}$ and ultimately their cluster assignments. Similar to section \ref{Spline}, we let the corresponding inclusion indicators $\lambda_d$ follow a Beta-Binomial$(a_{\lambda_d},b_{\lambda_d})$ to induce sparsity on the random effect terms. Additionally, we assume the $D(D-1)/2$-dimensional vector of free elements in $\boldsymbol{\Gamma}$ follows $N(\boldsymbol{\gamma}_0, V_{\gamma}) \cdot I(\boldsymbol{\gamma} \in \mathcal{Z}) $, where $I$ represents an indicator function, and $\mathcal{Z}$ represents the set of parameters with corresponding random effects included in the model. For example, if the $d^{th}$ random effect is included (i.e., $\lambda_d = 1$), then $\gamma_{d1},\dots,\gamma_{d,d-1}\mbox{ and } \gamma_{d+1,d}, \dots \gamma_{D,d} \in \mathcal{Z}$. Lastly, we assume $\boldsymbol{\zeta}_i \sim N(\boldsymbol{0},\boldsymbol{I}).$
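A small numpy sketch of this reparameterization may help fix ideas. The dimension and numerical values below are illustrative only; zeros in $\boldsymbol{\kappa}$ correspond to excluded random effects, and a draw from the folded normal slab could be obtained as the absolute value of a normal draw.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
D = 4                                  # number of random-effect terms

# kappa: non-negative scales; zeros mark excluded random effects.
# A folded-normal draw would be, e.g., abs(rng.normal(m0, sqrt(v0))).
kappa = np.array([1.2, 0.0, 0.7, 0.3])
K = np.diag(kappa)

# Gamma: lower triangular, unit diagonal, free elements below it.
Gamma = np.eye(D)
Gamma[np.tril_indices(D, k=-1)] = rng.normal(scale=0.3,
                                             size=D * (D - 1) // 2)

# Subject-specific standard normal vector.
zeta_i = rng.normal(size=D)

# Random effects and their implied covariance K Gamma Gamma' K.
alpha_i = K @ Gamma @ zeta_i
cov = K @ Gamma @ Gamma.T @ K          # row/col 2 is zero: effect excluded
\end{verbatim}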
\subsection{Spiked Nonparametric Priors} To complete our approach, we investigate nonparametric prior constructions for the spike-and-slab components of the reparameterized fixed and random effects by assuming that the slab component follows a Dirichlet process (DP). These priors are commonly referred to as spiked DP (SDP) priors \citep{canale2017pitman,kim2009spiked,savitsky2010spiked,dunson2008bayesian}. In the context of our model, SDP priors allow us to simultaneously select influential risk factors while clustering effects with similar relations to the smoking outcome. The formulation we use here is sometimes referred to as an ``outer" SDP prior, since the point mass at zero is outside of the base distribution of the DP. Alternatively, the ``inner" construction places the spike-and-slab prior inside the DP, serving as the base distribution. The inner formulation provides the opportunity for coefficients to cluster at zero, but does not force a point mass at zero explicitly. As such, the likelihood that a coefficient is assigned to the trivial cluster grows with the number of coefficients excluded from the model. Alternatively, the outer formulation is a more informative prior, since it explicitly assigns a point mass at zero, and, in addition, it carries a lighter computational burden since it does not require auxiliary variables for MCMC sampling \citep{neal2000markov, savitsky2010spiked}. We refer readers to \cite{canale2017pitman} for a detailed explanation of the structural differences between the two prior formulations. First, we assume the regression coefficients associated with the main effects and linear interaction terms follow an SDP, to provide insights on risk factors that share underlying linear trends with momentary smoking by the next assessment over the course of the study. Specifically, we assume the slab component in Eq. \eqref{eq:prior_beta} is a Dirichlet process prior $H \sim DP(\vartheta,H_0)$, with base distribution $H_0 = N(0,\tau^2)$ and concentration parameter $\vartheta$. Furthermore, we assume a hyperprior $\vartheta \sim G(a_{\vartheta},b_{\vartheta})$, with $a_{\vartheta},b_{\vartheta} > 0$. For the nonlinear interaction terms, we avoid the SDP since it would produce uninterpretable cluster assignments due to the parameter-expansion approach taken to improve selection performance. For example, similar values for $\beta_t^*$ and $\beta_{t'}^*$ may correspond to vastly different $\boldsymbol{\varphi}^*_t$ and $\boldsymbol{\varphi}^*_{t'}$, depending on their respective $\boldsymbol{\xi}$ and spline basis functions. Similarly, placing a DP prior on the individual components in $\boldsymbol{\xi}$, or even $\boldsymbol{\varphi}$, would not provide interpretable results on the overall nonlinear effect. We take a similar approach for the random effects. Here, we assume the slab components for the diagonal elements of $\boldsymbol{K}$ follow $\mathcal{S}(\kappa_d) = W$, $W \sim DP( \mathcal{A}, W_0),$ where the base distribution $W_0$ is the $FN(m_0,v_0)$ distribution and $\mathcal{A}$ is the concentration parameter of the DP. To complete the prior assumptions for the random effects portion of the model, we let $\mathcal{A} \sim G(a_{\mathcal{A}}, b_{\mathcal{A}})$, where $a_{\mathcal{A}},b_{\mathcal{A}}>0$ are shape and rate parameters, respectively. There is evidence that relaxing parametric assumptions for random effects using DP priors may cause inferential challenges, as the mean of the random effects is non-zero almost surely \citep{li2011center,yang2012bayesian,cai2017bayesian}. Our approach differs in that we do not directly replace the typical normal assumption for random effects with a nonparametric prior. Instead, we place a nonparametric prior on the covariance decomposition components, $\boldsymbol{K}$, while letting $\boldsymbol{\zeta}_i$ follow a normal distribution centered at zero. As such, our approach avoids any identifiability issues with the fixed effects while still relaxing the parametric assumption on the reparameterized random effects, $\boldsymbol{K \Gamma \zeta}_i$. It is important to note that by doing this we are adopting a Bayesian semiparametric modeling structure, since the random effects are linear combinations of spiked Dirichlet process and normal random variables \citep{muller2007semiparametric}. \subsection{Posterior Inference} For posterior inference, we implement a Metropolis-Hastings within Gibbs algorithm.
The full joint model is defined as $$ f(\boldsymbol{y}|\boldsymbol{\varrho},\boldsymbol{\omega},\boldsymbol{x},\boldsymbol{u},\boldsymbol{z})p(\boldsymbol{\omega})p(\boldsymbol{\beta}|\boldsymbol{\nu})p(\boldsymbol{\nu})p(\vartheta)p(\boldsymbol{K}|\boldsymbol{\lambda})p(\boldsymbol{\lambda})p(\mathcal{A})p(\boldsymbol{\xi}|\boldsymbol{\mu})p( \boldsymbol{\mu})p(\boldsymbol{\zeta})p(\boldsymbol{\Gamma}),$$ where $\boldsymbol{\varrho} = \{\boldsymbol{\beta}, \boldsymbol{\xi}, \boldsymbol{K}, \boldsymbol{\Gamma},\boldsymbol{\zeta} \}$. We use the P\'olya-Gamma augmentation of \cite{polson2013bayesian} to sample efficiently from the posterior distribution of the logistic regression model. Following \cite{polson2013bayesian}, we express the likelihood contribution of $y_{i,j+1}$ as $$f(y_{i,j+1}|\cdot) = \frac{(e^{\psi_{ij}})^{y_{i,j+1}}}{(1 + e^{\psi_{ij}})} \propto \exp({k_{i,j+1}\psi_{ij}})\int_{0}^{\infty}\exp(-\omega_{i,j+1}\psi_{ij}^2/2)p(\omega_{i,j+1}|n_{i,j+1},0)\, d\omega_{i,j+1}, $$ where $k_{i,j+1} = y_{i,j+1} - n_{i,j+1}/2$ and $p(\omega_{i,j+1}|n_{i,j+1},0)$ is the density of a P\'olya-Gamma random variable, $PG(n_{i,j+1},0)$. Using the notation presented in the previous sections, we set $$\psi_{ij} = \sum_{p=1}^{P}(\beta^{*}_p\boldsymbol{ \mathcal{U}}_{ij}^{*\prime}\boldsymbol{\xi}_p + \beta_p^{\circ}u_{ij} + \beta_{0p})x_{ijp} + \boldsymbol{z}_{ij}^{\prime} \boldsymbol{K} \boldsymbol{\Gamma} \boldsymbol{\zeta}_i.$$
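To illustrate the augmentation on a stripped-down problem, the following Python sketch performs one Gibbs sweep for a plain Bayesian logistic regression with a $N(\boldsymbol{0},\tau^2\boldsymbol{I})$ prior, omitting the splines, selection steps, and random effects of our full sampler. It assumes the third-party \texttt{polyagamma} package for the $PG$ draws; all other names are illustrative.

\begin{verbatim}
import numpy as np
from polyagamma import random_polyagamma  # third-party PG sampler

def gibbs_sweep(beta, X, y, tau2, rng):
    """One Polya-Gamma Gibbs sweep for logit(P(y=1)) = X beta."""
    psi = X @ beta
    # Step 1: omega_i | beta ~ PG(1, psi_i); h = 1 for binary outcomes.
    omega = random_polyagamma(1, psi, random_state=rng)
    # Step 2: beta | omega ~ N(m, V), with kappa = y - 1/2.
    kappa = y - 0.5
    V = np.linalg.inv(X.T @ (omega[:, None] * X)
                      + np.eye(X.shape[1]) / tau2)
    m = V @ (X.T @ kappa)
    return rng.multivariate_normal(m, V)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))
beta_true = np.array([1.0, -0.5, 0.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ beta_true)))
beta = np.zeros(3)
for _ in range(1000):
    beta = gibbs_sweep(beta, X, y, tau2=2.0, rng=rng)
\end{verbatim}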
The MCMC sampler used to implement our model is outlined below in Algorithm 1. A more detailed description of the MCMC steps as well as a graphical representation of the model are provided in the Supplementary Material. After burn-in and thinning, the remaining samples obtained from running Algorithm 1 for $\tilde{T}$ iterations are used for inference. To determine a risk factor's inclusion in the model, its marginal posterior probability of inclusion (MPPI) is empirically estimated by calculating the average of its respective inclusion indicator's MCMC samples \citep{george1997approaches}. Note that inclusion for both fixed and random effects is determined marginally for $\beta_t$ and $\lambda_d$, respectively. Commonly, covariates are included in the model if their MPPI exceeds 0.50 \citep{barbieri2004optimal} or a Bayesian false discovery rate threshold, which controls for multiplicity \citep{newton2004detecting}. \begin{algorithm} \caption{MCMC Sampler}\label{MCMC} \begin{algorithmic}[1] \State Input data $\boldsymbol{y},\boldsymbol{x},\boldsymbol{u},\boldsymbol{z}$ \State Initialize parameters: $\boldsymbol{\varrho}, \boldsymbol{\omega}, \boldsymbol{\nu},\boldsymbol{\lambda},\vartheta, \mathcal{A}, \boldsymbol{\mu}$ \State Set $DP_{\bar{\boldsymbol{\beta}}}$ and $DP_{\boldsymbol{K}}$ to True or False to indicate a DP slab on the fixed or random effects, respectively. \For{iteration $\tilde t = 1,\dots,\tilde T$} \For{$i = 1,\dots,N$} \For{ $j = 1,\dots,n_i-1$} \State Update $\omega_{i,j+1} \sim PG(1,\psi_{ij})$ \EndFor \EndFor \If {$DP_{\bar{\boldsymbol{\beta}}}$ } \State Update cluster assignment of $\bar{\boldsymbol{\beta}}$ following \cite{neal2000markov} algorithm 2. \EndIf \State Jointly update $\boldsymbol{\beta}$ and $\boldsymbol{\nu}$ with Between and Within Step following \cite{savitsky2011variable}. \State Update $\boldsymbol{\xi}$ from FCD $N(\mu_{\boldsymbol{\xi}},V_{\boldsymbol{\xi}})$. \For{ $p = 1,\dots,P$} \State Rescale $\boldsymbol{\xi}_p$ and $\beta_p^*$ so $\boldsymbol{\varphi}_p^*$ remains unchanged. \EndFor \For{$p = 1,\dots,P$} \For{ $r = 1,\dots,r_p$} \State Set $\mu_{pr} = 1$ with probability $1/(1 + \exp(-2\xi_{pr}))$. \EndFor \EndFor \State Update $\vartheta$ by the two-step Gibbs update of \cite{escobar1995bayesian}. \If {$DP_{\boldsymbol{K}}$ } \State Update cluster assignment of $\boldsymbol{\kappa}$ following \cite{neal2000markov} algorithm 2. \EndIf \State Jointly update $\boldsymbol{K}$ and $\boldsymbol{\lambda}$ with Between and Within Step following \cite{savitsky2011variable}. \State Update $\mathcal{A}$ following the two-step Gibbs update of \cite{escobar1995bayesian}. \State Update $\boldsymbol{\Gamma}$ from FCD $N(\hat{\boldsymbol{\gamma}},\hat{V}_{\gamma})\cdot I(\boldsymbol{\gamma} \in \mathcal{Z})$. \For{ $i = 1,\dots, N $ } \State Update $\boldsymbol{\zeta}_i$ from FCD $N(\hat{\boldsymbol{\zeta}}_i,\hat{V}_{\zeta_i})$. \EndFor \EndFor \end{algorithmic} \end{algorithm}
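Given post-burn-in samples of the inclusion indicators from Algorithm 1, the MPPI computation and the median-model rule described above reduce to a couple of numpy operations; the indicator draws below are random placeholders.

\begin{verbatim}
import numpy as np

# nu_samples: (n_mcmc, T) array of 0/1 inclusion indicator draws for
# the fixed-effect terms, collected after burn-in and thinning.
rng = np.random.default_rng(4)
nu_samples = rng.integers(0, 2, size=(500, 9))  # illustrative draws

mppi = nu_samples.mean(axis=0)          # marginal posterior prob. of inclusion
selected = np.flatnonzero(mppi >= 0.5)  # median-model rule
\end{verbatim}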
\section{Case Study} \label{sec:case} In this section, we study the smoking behaviors of a group of adult smokers recruited from a smoking cessation research clinic. The overall research goal of this study was to identify and investigate the structural form of the relations between a set of risk factors and smoking over a five-week period surrounding a scheduled quit attempt, using intensive longitudinal data collected with EMAs. \subsection{Data Analysis} In the study design, momentary smoking, our outcome of interest, was defined as whether or not a subject reported smoking in the 4 hours prior to the current EMA. However, at each EMA a subject was prompted on their \textit{current} psychological, social, environmental, and behavioral status. Thus, to maintain temporality in this study, we assessed the relations between momentary smoking and measurements collected at the previous EMA. As such, regression coefficients are interpreted as the log odds of momentary smoking by the next assessment for a particular risk factor. In this study, we investigated psychological and affective factors including \textit{urge} to smoke, feelings of \textit{restlessness}, \textit{negative affect} (i.e., irritability, frustration/anger, sadness, worry, misery), \textit{positive affect} (i.e., happiness and calmness), being \textit{bored}, \textit{anxiousness}, and \textit{motivation to quit smoking}. Additionally, we investigated numerous social and environmental factors, such as whether or not the subject was \textit{interacting with a smoker}, whether cigarettes were easily available (\textit{cigarette availability}), and whether or not the subject was drinking alcohol (\textit{alcohol consumption}). Also, we included a set of baseline, time-invariant measures (i.e., heaviness of smoking index (\textit{HSI}), \textit{age} (years), being \textit{female}, and treatment assignment) in the model. For each of these risk factors, we included a fixed main effect, linear interaction, and non-linear interaction term, as well as a random main effect and linear interaction term. All interactions investigated in this analysis were between risk factors and assessment time (i.e., $u_{ij} = t_{ij}$), and the $t_{ij}$ were centered so that $t=0$ represents the beginning of the scheduled quit attempt. Only complete EMAs with corresponding timestamps were included in this analysis, resulting in 9,634 total observations, with a median of 151 assessments per individual (IQR 101.5-162). All continuous covariates were standardized to mean zero and variance one before analysis to help reduce multicollinearity and place covariates on the same scale for interpretation. The spline functions were initially generated with 20 basis functions, but only the eigenvalues/eigenvectors that captured 99.9\% of the variability were included in the model to reduce the parameter space and computation time, similar to \cite{scheipl2012spike}. This reduced the number of columns of the penalized design matrix $\boldsymbol{\mathcal{U}}^*$ to 8 in our application. We applied our model with the traditional spike-and-slab prior, as well as with the spiked DP. When fitting each model, we chose a non-informative prior for the fixed and random effects' inclusion indicators, $a_{\nu_t} = b_{\nu_t} = a_{\lambda_d} = b_{\lambda_d} = 1$. This assumption reflects the exploratory nature of our study, aimed at learning potential relations between risk factors and smoking behaviors with little or no information regarding their occurrence in the presence of other risk factors. We assumed a mildly informative prior on the fixed regression coefficients by setting $\tau^2= 2$. This places 95\% of the prior probability for included regression coefficients between odds ratios of 0.06 and 16. Additionally, we set $ v_0= v^*= 10$, $m_0 = m^* = 0$, and ${\Gamma} \sim N( \boldsymbol{\gamma}_0 = \boldsymbol{0},\boldsymbol{V}_{\gamma}=\boldsymbol{I} )$. Lastly, when using the SDP prior, the hyperparameters for the concentration parameters $\vartheta$ and $\mathcal{A}$ were set to $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 1$. For posterior inference, we ran our MCMC algorithm with and without SDP priors for both fixed and random effects for 10,000 iterations, treating the first 5,000 as burn-in and thinning to every 10$^{th}$ iteration. Trace plots of the parameters' posterior samples indicated good convergence and mixing. Additionally, we observed a relatively high correlation ($\sim 97$\%) between the posterior probabilities of inclusion obtained from two chains initiated with different parameter values, and potential scale reduction factors, $\hat{R}$, below 1.1 for each of the selected $\boldsymbol{\beta}$ and $\boldsymbol{K}$ \citep{gelman1992inference}, further demonstrating that the MCMC procedure was working properly and the chains converged. To assess model fit, a residual plot and a series of posterior predictive checks were performed, in which we compared replicated data sets from the posterior predictive distribution of the model to the observed data \citep{gelman2000diagnostic}. Overall, we found strong evidence of good model fit. See the Supplementary Materials for details. Inclusion in the model was determined using the median model approach \citep{barbieri2004optimal} (i.e., marginal posterior probability of inclusion (MPPI) $\geq 0.50 $). For the SDP model, clusters of regression coefficients were determined using sequentially-allocated latent structure optimization to minimize the lower bound of the variation of information loss \citep{wade2018bayesian,dahl2017}. To compare the predictive performance of both models, we performed a leave-one-out cross-validation approximation procedure, following the approach proposed by \cite{vehtari2017practical}. This approach approximates leave-one-out (LOO) cross-validation with the expected log pointwise predictive density (elpd).
By using Pareto smoothed importance sampling (PSIS) for estimation, it provides a more stable estimate compared to the method of \cite{gelfand1996model}. We used the R package \texttt{loo} \citep{vehtari2016loo}, which requires the pointwise log-likelihood for each subject $i = 1, \dots , N$ at each observation $j = 1, \dots, n_i$, calculated at each MCMC iteration $s = 1, \dots, S$, and produces an estimated $\widehat{\mbox{elpd}}$ value, with larger values implying a superior model. \subsection{Results} Overall, we found better predictive performance for the model with SDP priors versus the traditional spike-and-slab priors, $\widehat{\mbox{elpd}}_{SDP} = -2985.1 $ and $\widehat{\mbox{elpd}}_{SS} = -3062.7 $, respectively. Plots of the marginal posterior probabilities of inclusion for the fixed and random effects selected using our proposed approach with SDP priors are found in Figure \ref{fig:MPPI}. Figure \ref{fig:curves} presents the time-varying effects selected using the same model. Compared to \textit{usual care}, we found a higher odds of momentary smoking by the next assessment for those assigned to the \textit{NCI QuitGuide} group prior to the quit attempt. However, immediately after the quit attempt, we observed a lower odds of momentary smoking by the next assessment for those assigned to the \textit{NCI QuitGuide} group, which gradually increased to the initial level over the remainder of the study (top left panel). Similarly, we observed a positive relation between having the \textit{urge} to smoke and momentary smoking by the next assessment prior to the quit attempt, which diminished during the three weeks following the quit attempt before sharply increasing during the fourth week post-quit (top right panel). Throughout the assessment period, we observed a positive relation between \textit{negative affect} and momentary smoking by the next assessment, which increased during the first week post-quit before leveling off at an odds ratio of 1.75 until the third week after the quit attempt. We additionally found a positive relation between \textit{cigarette availability} and the odds of momentary smoking by the next assessment that strengthened over the assessment window. For a 1 SD increase in \textit{cigarette availability}, the odds of momentary smoking by the next assessment increased by 300\% for the typical subject one week after the quit attempt, holding all else constant. In the two lower panels of Figure \ref{fig:curves}, we observe a relatively weak, oscillating effect of being \textit{bored} and of \textit{interacting with a smoker} on momentary smoking by the next assessment, respectively. In addition to these effects, the model identified a constant effect for \textit{alcohol consumption} in the last hour and for \textit{motivation to quit smoking} over the assessment period. A similar set of fixed effect relations was identified by our model without the SDP prior, with the exception of not selecting being \textit{bored}. \begin{figure} \centering \includegraphics[scale=0.39]{MPPI_fixed.png} \includegraphics[scale=0.39]{MPPI_random.png} \caption{Smoking Cessation Study: Marginal posterior probabilities of inclusion (MPPI) for fixed ({\it top}) and random ({\it bottom}) effects. Selected fixed effects in ascending order: NCI (NL-INTX), urge to quit (NL-INTX), cigarette availability (all), interacting with a smoker (NL-INTX), negative affect (NL-INTX, main), being bored (NL-INTX), alcohol consumption (main), motivation to quit (main), HSI (NL-INTX).
Selected random effects in ascending order: urge (main), cigarette availability (main), being bored (main), motivation to quit (main), SmartT (L-INTX), interacting with a smoker (L-INTX), being bored (L-INTX). Dotted lines represent the inclusion threshold of 0.50. NL-INTX: non-linear interaction, L-INTX: linear interaction. } \label{fig:MPPI} \end{figure} \begin{figure} \centering \includegraphics[scale=0.3]{SmartT_NCI.png} \includegraphics[scale=0.3]{SmartT_Urge.png}\\ \vskip 2mm \includegraphics[scale=0.3]{SmartT_Negative.png} \includegraphics[scale=0.3]{SmartT_Cigarettes.png}\\ \vskip 2mm \includegraphics[scale=0.3]{SmartT_Bored.png} \includegraphics[scale=0.3]{SmartT_Interacting.png} \caption{Smoking Cessation Study: Time-varying effects on momentary smoking by the next assessment of those covariates selected by our model with SDP priors. Shaded regions represent pointwise 95\% CI. Dashed lines indicate an odds ratio of one. } \label{fig:curves} \end{figure} Compared to standard TVEMs, our approach deconstructs the structure of the relations between risk factors and smoking behaviors over time, aiding the interpretation of the underlying trends. This information may help the development and evaluation of tailored intervention strategies targeting smoking cessation using mHealth data. For example, \textit{negative affect} has an obvious positive association with momentary smoking by the next assessment that wavers around an odds ratio of 1.2 to 1.5 for a majority of the study. However, based on Figure \ref{fig:curves}, it is unclear whether or not the effect diminishes linearly over time. By performing selection on the main effect, linear interaction, and non-linear interaction terms separately, we are able to obtain an actual point estimate for the constant effect of \textit{negative affect} (OR 1.40), as opposed to subjectively assuming a range of values from the plot. Additionally, since the linear interaction term was not selected, we can claim that the effect was not linearly decreasing over time and that it simply wavered around the constant effect throughout the study. Tables \ref{tab:one} and \ref{tab:two} present the estimated variances and corresponding 95\% credible intervals (CI) for the random effects selected using SDP priors and traditional spike-and-slab priors, respectively. Using SDP priors, our method identified a random main effect for \textit{urge} to smoke, \textit{cigarette availability}, being \textit{bored}, and \textit{motivation to quit smoking}, as well as a random linear interaction with time for assignment to the \textit{SmartT} treatment group, \textit{interacting with smokers}, and being \textit{bored}. Thus, even though we did not discover an overall difference in the odds of momentary smoking by the next assessment for those assigned to the \textit{SmartT} treatment versus \textit{usual care}, we observed evidence that subjects responded differently to the \textit{SmartT} treatment across the assessment window. With the traditional spike-and-slab priors, we found similar results overall; however, the model selected a random main effect, rather than a random linear interaction, for \textit{interacting with smokers}, and additionally suggested a random effect for \textit{anxiousness}. By using SDP priors, our approach is capable of clustering covariates that share similar linear trends with momentary smoking by the next assessment over time. In practice, this information can be used to help construct decision rules when designing future intervention strategies.
In our analysis, only five main effect and linear interaction terms were selected, and each of them was allocated to its own cluster. With this knowledge, researchers can prioritize targeting risk factors based on their relative strength of association at a given moment. Had some of these risk factors' effects been clustered together, researchers might rely more heavily on other pieces of information, such as the cost or success rates for a particular intervention strategy, when assessing which risk factors to target during a high-risk moment.
\begin{table} \centering \footnotesize \begin{tabular}{ccc} \hline \textbf{Random Effect} & \textbf{$\hat{\sigma}^2$} & \textbf{95\% CI} \\ \hline Intercept & 0.923 & (0.539, 1.528) \\ Urge & 0.152 & (0.031, 0.278) \\ Cigarette Availability & 0.865 & (0.394, 1.467) \\ Bored & 0.183 & (0.076, 0.398) \\ Motivation to Quit Smoking & 0.156 & (0.045, 0.311) \\ SmartT $\times$ Time & 0.077 & (0.010, 0.210) \\ Interacting with a Smoker $\times$ Time & 0.016 & (0.002, 0.050) \\ Bored $\times$ Time & 0.002 & (0.000, 0.005) \\ \hline \end{tabular} \caption{Smoking Cessation Study: Estimated variances with corresponding 95$\%$ credible intervals (CI) for selected random effects with SDP priors based on MPPI $\geq 0.50$.} \label{tab:one} \end{table}
\begin{table} \centering \footnotesize \begin{tabular}{ccc} \hline \textbf{Random Effect} & \textbf{$\hat{\sigma}^2$} & \textbf{95\% CI} \\ \hline Intercept & 1.317 & (0.676, 2.487) \\ Urge & 0.099 & (0.011, 0.248) \\ Cigarette Availability & 0.905 & (0.503, 1.607) \\ Interacting with a Smoker & 0.848 & (0.286, 1.924) \\ Bored & 0.244 & (0.065, 0.517) \\ Anxiousness & 0.140 & (0.001, 0.361) \\ Motivation to Quit Smoking & 0.212 & (0.076, 0.448) \\ SmartT $\times$ Time & 0.062 & (0.016, 0.155) \\ Bored $\times$ Time & 0.002 & (0.000, 0.004) \\ \hline \end{tabular} \caption{Smoking Cessation Study: Estimated variances with corresponding 95$\%$ credible intervals (CI) for selected random effects with traditional spike-and-slab priors based on MPPI $\geq 0.50$.} \label{tab:two} \end{table}
Similar to previous studies investigating the temporal relation between risk factors and smoking behaviors around a quit attempt, our results show a convex relation between \textit{urge} to smoke and momentary smoking after the quit attempt, a positive association with \textit{cigarette availability} throughout the quit attempt, and a positive, increasing relation between \textit{negative affect} and momentary smoking during the first week after the quit attempt \citep{koslovsky2017time,vasilenko2014time}. Existing TVEM approaches, however, typically model the repeated measures structure of the data by simply including a random intercept term in the model, neglecting to investigate random main effects or interaction terms. They also do not incorporate variable selection. Our approach, on the other hand, delivers insights on how relations vary over time as well as how they vary across individuals.
\subsection{Sensitivity Analysis} \label{sec:sens}
To investigate our model's sensitivity to prior specification, we set each of the hyperparameters to default values and then evaluated the effect of manipulating each term on the results obtained in Section \ref{sec:case}. For the default parameterization, we set the hyperparameters for the prior inclusion indicators $\boldsymbol{\nu}$ and $\boldsymbol{\lambda}$ to $a_{\nu_t}=b_{\nu_t}=a_{\lambda_d}=b_{\lambda_d} = 1$.
For interpretation, $a_{\nu_t}=b_{\nu_t}=1$ implies that the prior probability of inclusion for a fixed effect is $a_{\nu_t}/(a_{\nu_t} + b_{\nu_t})=0.50$. The default values for the variance of the normal distribution for the slab of $\boldsymbol{\beta}_0$ and $\boldsymbol{\beta}^{\circ}$ as well as the base distribution for $\boldsymbol{\beta}^{*}$ were each fixed at $5$. Additionally, the mean and variance for the random effect terms' proposal and prior distributions were set to $0$ and $5$, respectively. The hyperparameters for the concentration parameters $\vartheta$ and $\mathcal{A}$ were set to $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 1$. Lastly, we assumed ${\Gamma} \sim N( \boldsymbol{\gamma}_0 = \boldsymbol{0},\boldsymbol{V}_{\gamma}=\boldsymbol{I} )$. We ran our MCMC algorithm for 10,000 iterations, treating the first 5,000 iterations as burn-in and thinning to every $10^{th}$ iteration for the SDP model, similar to our case study. For each of the fixed and random effects, inclusion in the model was determined using the median model approach \citep{barbieri2004optimal}. Since the true model is never known in practice, we evaluated each model parameterization in terms of sparsity levels and overlap with the results reported in the case study section. Specifically, we present the total number of terms selected for both fixed and random effects (\# Fixed and \# Random). We also provide the proportion of active risk factors in our case study that were also included by each model and the proportion of inactive risk factors that were also excluded by each model, for fixed (f-IN and f-EX) and random effects (r-IN and r-EX) as well as overall (IN and EX). Results of the sensitivity analysis are reported in Table \ref{tab:four}. Compared to the results presented in the case study, we found relatively consistent overlap in the risk factors included and excluded by each model overall. We observed moderate sensitivity to hyperparameter values in terms of percent overlap for fixed and random effects of risk factors included in the model, an artifact of the relatively weak associations identified for some of the risk factors. Notably, risk factors showing stronger associations with momentary smoking at the next assessment (e.g., \textit{negative affect}, \textit{cigarette availability}, and \textit{motivation to quit smoking}) were selected by the model regardless of prior specification. Conversely, weaker relations between momentary smoking at the next assessment and risk factors, such as \textit{being bored} and \textit{interacting with a smoker}, were more sensitive to hyperparameters. We also observed that the number of selected fixed and random effects increased (decreased) as the prior probability of inclusion increased (decreased), as expected. In practice, there are a variety of factors researchers should consider when setting the prior probability of inclusion, including the aim of the research study, the desired sparsity of the model, prior knowledge of covariate inclusion, and results from simulation and sensitivity analyses, to name a few. From a clinical perspective, $\tau^2=10$ reflects a relatively diffuse prior for a given risk factor (i.e., odds ratio between 0.002 and roughly 500). To further investigate the model's sensitivity to regression coefficients' variances, we set $\tau^2=v_0=1000$, and found somewhat similar results to the model with $\tau^2 =v_0 = 10$ overall (i.e., IN = 0.8, EX = 0.8).
Here, we unexpectedly found non-monotonic behavior in the proportion of included and excluded terms as a function of the coefficients' variance, which might also reflect our model's sensitivity to relatively weak associations as previously noted. In theory, the selection of random effects may be sensitive to the ordering of the columns of $\boldsymbol{Z}$, since the Cholesky decomposition is itself order-dependent \citep{muller2013model}. In our case study, we did not observe any differences regarding which random effects were selected with a random permutation of the $\boldsymbol{Z}$ columns. In Section \ref{sec:sens_sim}, we further demonstrate our model's robustness to the ordering of $\boldsymbol{Z}$ on simulated data.
\begin{table} \centering \footnotesize \begin{tabular}{cccccc} \hline & $a_{v_t} = a_{\lambda_d} = 1$, $b_{v_t} = b_{\lambda_d} = 9$ & & $\tau^2 = v_0 = 2$ & & $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 0.1$ \\ \cline{2-2} \cline{4-4} \cline{6-6} \# Fixed & 4 & & 8 & & 6 \\ \# Random & 5 & & 5 & & 7 \\ IN & 0.60 & & 0.70 & & 0.80 \\ f-IN & 0.44 & & 0.67 & & 0.56 \\ r-IN & 0.50 & & 0.50 & & 0.83 \\ EX & 0.80 & & 0.60 & & 1.00 \\ f-EX & 1.00 & & 1.00 & & 1.00 \\ r-EX & 0.78 & & 0.78 & & 0.89 \\ & $a_{v_t} = a_{\lambda_d} = 9$, $b_{v_t} = b_{\lambda_d} = 1$ & & $\tau^2 = v_0 = 10$ & & $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 10$ \\ \cline{2-2} \cline{4-4} \cline{6-6} \# Fixed & 10 & & 7 & & 6 \\ \# Random & 8 & & 5 & & 4 \\ IN & 1.00 & & 0.60 & & 0.80 \\ f-IN & 0.78 & & 0.67 & & 0.56 \\ r-IN & 0.83 & & 0.50 & & 0.33 \\ EX & 0.60 & & 1.00 & & 1.00 \\ f-EX & 0.83 & & 0.80 & & 1.00 \\ r-EX & 0.67 & & 0.78 & & 0.78 \\ \hline \end{tabular} \caption{Case Study Data: Sensitivity results for the proposed model with SDP across various prior specifications. The total numbers of terms selected for fixed and random effects are indicated as \# Fixed and \# Random, respectively. The proportion of active (inactive) risk factors presented in the case study that were also included (excluded) by each model is reported as f-IN and r-IN (f-EX and r-EX), for fixed and random effects, respectively. Finally, the overall proportion of active (inactive) risk factors presented in the case study that were also included (excluded) by each model is represented as IN (EX). } \label{tab:four} \end{table}
\section{Simulation Study} \label{sec:simul}
In this section, we evaluate our model in terms of variable selection and clustering performance on simulated data similar in structure to our case study data. We compared our method with and without SDP priors on varying-coefficient and random effects to two other Bayesian methods that are designed to handle this class of models. The first is the method of \cite{scheipl2012spike}, which has previously shown promising results performing function selection in structural additive regression models using continuous spike-and-slab priors. Their approach differs from ours in that they assume parameter-expanded normal-mixture-of-inverse-gamma (peNMIG) distribution priors for selection, inspired by \cite{ishwaran2005spike}, and design a Metropolis-Hastings algorithm with penalized iteratively weighted least squares for updating regression coefficients within the logistic framework.
A popular alternative to spike-and-slab priors for inducing sparsity in high-dimensional regression settings is to assume global-local shrinkage priors on the regression coefficients (see \cite{van2019shrinkage, bhadra2019lasso} for detailed reviews). At the request of a reviewer, we additionally compared our proposed model to a reparameterized version with shrinkage priors \citep{carvalho2009handling}. To achieve this, we replaced the spike-and-slab priors on $\boldsymbol{\beta}$ with horseshoe priors, which belong to the class of global-local scale mixtures of normal priors \citep{polson2010shrink}. For random effects, $\boldsymbol{K}$, we assumed a similar global-local structure for the scale parameters of the folded-normal distribution, $v_0$. To our knowledge, the theoretical properties and selection performance of global-local scale mixtures of non-normal priors have yet to be explored. However, we conjectured that the global-local framework should effectively shrink inactive random effects towards zero and allow active terms to be freely estimated. Details of the resulting model and accompanying MCMC algorithm are found in the Supplementary Material. We simulated $N = 100$ subjects with $20$-$40$ observations randomly spaced across an assessment window with $t_{ij} \in [0,1]$, without loss of generality. For each observation, we generated a set of 15 covariates, $\boldsymbol{x}_i$, comprising an intercept term and 14 continuous covariates simulated from a $N_{14}(\boldsymbol{0},\Sigma)$, where $\Sigma_{st} = w^{|s-t|}$ and $w = 0.3$. To simulate time-varying covariate trajectories, we randomly jittered half of the elements within $\boldsymbol{x}_i$ by $N(0,1)$. Additionally, we set $\boldsymbol{z}_{ij} = \boldsymbol{x}_{ij}$. Thus, each full model contained 15 main effects, linear interactions, non-linear interactions, and random main effects, corresponding to 60 potential terms (or groups of terms for the non-linear interaction components) to select. The first 5 functional terms in the true model were defined as \begin{itemize} \item $f_1(t_{ij}) = \pi\sin(3\pi t_{ij}) + 1.4t_{ij} - 1.6$ \item $f_2(t_{ij}) = \pi\cos(2\pi t_{ij}) + 1.6$ \item $f_3(t_{ij}) = - \pi t_{ij}\sin(5\pi t_{ij}) + 1.7t_{ij} - 1.5$ \item $f_4(t_{ij}) = - 1.5t_{ij} + 1.6$ \item $f_5(t_{ij}) = - 1.6,$ \end{itemize} and the random effects $\boldsymbol{a}_i \sim N(\boldsymbol{0}, \Sigma_{\alpha})$ with $\sigma_{kk} = 0.75$ and $\sigma_{jk} = 0.4$ for $j \neq k$, $j,k = 1,\dots, 5$. Thus, in the true model, $\psi_{ij} = \sum_{p = 1}^5f_p(t_{ij})x_{ijp} + \boldsymbol{z}_{ij}^{\prime}\boldsymbol{a}_i$. Note that to impose an inherent clustering for the main effects and linear interaction terms, their values were specified to center around $\pm 1.5$. We ran each of the MCMC algorithms on 50 replicated data sets, using 7,500 iterations, treating the first 3,750 iterations as burn-in and thinning to every $10^{th}$ iteration for each model. The spline functions were generated similarly to our application. We set the hyperparameters for the inclusion indicators, $a_{\nu_t} = b_{\nu_t} = a_{\lambda_d} = b_{\lambda_d} = 1$, imposing a non-informative prior for selection of fixed and random effect terms. Additionally, we fixed the regression coefficient hyperparameters to $\tau^2 = 2$ and $m_0 = 0$ with $v_0 = 10$. For the concentration parameters $\vartheta$ and $\mathcal{A}$, we assumed $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 1$.
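For concreteness, the data-generating mechanism above can be reproduced in a few lines of code. The following is a minimal Python sketch of our reading of the stated design; it is not the code used in the study, it assumes Bernoulli-logit outcomes (consistent with the logistic framework), and it interprets ``jittering half of the elements'' as perturbing a random half of the covariate columns:
\begin{verbatim}
# Minimal sketch of the simulation design described above (Python).
# Assumptions: Bernoulli-logit outcomes; "jittering half the elements"
# read as perturbing a random half of the covariate columns.
import numpy as np

rng = np.random.default_rng(2020)
N, P, w = 100, 14, 0.3
idx = np.arange(P)
Sigma = w ** np.abs(idx[:, None] - idx[None, :])   # Sigma_st = w^|s-t|

f = [lambda t: np.pi * np.sin(3 * np.pi * t) + 1.4 * t - 1.6,
     lambda t: np.pi * np.cos(2 * np.pi * t) + 1.6,
     lambda t: -np.pi * t * np.sin(5 * np.pi * t) + 1.7 * t - 1.5,
     lambda t: -1.5 * t + 1.6,
     lambda t: np.full_like(t, -1.6)]

# Random-effect covariance: 0.75 on the diagonal, 0.4 off-diagonal.
Sigma_a = np.full((5, 5), 0.4) + 0.35 * np.eye(5)

data = []
for i in range(N):
    n_i = rng.integers(20, 41)                  # 20-40 observations
    t = np.sort(rng.uniform(0.0, 1.0, n_i))     # times in [0, 1]
    X = np.column_stack([np.ones(n_i),
                         rng.multivariate_normal(np.zeros(P), Sigma, n_i)])
    cols = rng.choice(np.arange(1, P + 1), P // 2, replace=False)
    X[:, cols] += rng.normal(0.0, 1.0, (n_i, len(cols)))   # jitter
    a_i = rng.multivariate_normal(np.zeros(5), Sigma_a)
    psi = sum(f[p](t) * X[:, p] for p in range(5)) + X[:, :5] @ a_i
    y = rng.binomial(1, 1.0 / (1.0 + np.exp(-psi)))        # z_ij = x_ij
    data.append((t, X, y))
\end{verbatim}
The standardization of the covariates described next would then be applied before model fitting.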
Before analysis, the covariates were standardized to mean $0$ and variance $1$. For each of the models with spike-and-slab priors, inclusion in the model for both fixed and random effects was determined using the median model approach \citep{barbieri2004optimal}. For the horseshoe model, fixed effects were considered active if their corresponding 95\% credible interval did not contain zero, similar to \cite{bhadra2019lasso}. The 95\% credible interval for random effects will almost surely not contain zero. As a naive alternative, we assumed a random effect was active in the model if its posterior mean exceeded a given threshold. For the sake of demonstration, we evaluated the performance of the model over a grid of potential threshold values, and presented the results for the best performing model overall. Notably, this solution is only feasible when the true answer is known, which is never the case in practice. Variable selection performance was evaluated via sensitivity (SENS), specificity (SPEC), and the Matthews correlation coefficient (MCC) for fixed and random effects separately. These metrics are defined as $$SENS = \frac{TP}{FN + TP}$$ $$SPEC = \frac{TN}{FP + TN}$$ $$MCC = \frac{TP \times TN - FP \times FN}{\sqrt{(TP + FP)(TP + FN)(TN + FP)(TN + FN)}},$$ where $TN$, $TP$, $FN$, and $FP$ represent the true negatives, true positives, false negatives, and false positives, respectively. For the SDP models, clusters of regression coefficients were determined using sequentially-allocated latent structure optimization to minimize the lower bound of the variation of information loss \citep{wade2018bayesian,dahl2017}. Once clusters were determined, clustering performance was evaluated using the variation of information, a measure of distance between two clusterings ranging from $0$ to $\log R$, where $R$ is the number of items to cluster and lower values imply better clustering \citep{meilua2003comparing}. Figure \ref{fig:simul} presents the estimated smooth functions obtained using our proposed method with SDP priors on a randomly selected replicated data set from the simulation study. Here, $f_1(t_{ij})$ represents the global intercept, composed of a main effect, linear interaction, and non-linear interaction term that were forced into the model. Of interest is the ability of the model to properly select the influential components in $f_2(t_{ij})$ and $f_3(t_{ij})$ and additionally capture their structure. Using the method proposed in \cite{dahl2017} to identify latent clusters of fixed main effect and linear interaction terms, our method successfully clustered the linear interaction in $f_1(t_{ij})$ and the main effects in $f_2(t_{ij})$ and $f_4(t_{ij})$, while incorrectly assigning the linear interaction term in $f_3(t_{ij})$ to its own cluster. Additionally, the main effects in $f_1(t_{ij})$, $f_3(t_{ij})$, and $f_5(t_{ij})$ were appropriately clustered together, while the linear interaction term in $f_4(t_{ij})$ was incorrectly assigned to its own cluster. The remaining, uninfluential terms were all allocated to the trivial group. Despite $f_1(t_{ij})$ and $f_3(t_{ij})$ having similar main effect and linear interaction terms, they are dramatically different in terms of their non-linear interaction terms. However, by clustering their underlying linear trajectories, our model with SDP priors was able to uncover similarities in their relations with the outcome over time that traditional approaches would fail to discover.
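The selection and clustering criteria above are simple to compute once the MPPIs and cluster labels are in hand. Below is a minimal Python sketch (all helper names are ours) of the median model rule and the metrics just defined; the variation of information is computed as $VI = H(c_1 \mid c_2) + H(c_2 \mid c_1)$:
\begin{verbatim}
# Selection metrics (median model: MPPI >= 0.5) and variation of
# information, as defined in the text; helper names are ours.
import numpy as np

def selection_metrics(truth, mppi, threshold=0.5):
    sel = (np.asarray(mppi) >= threshold).astype(int)
    tru = np.asarray(truth).astype(int)
    tp = np.sum((sel == 1) & (tru == 1))
    tn = np.sum((sel == 0) & (tru == 0))
    fp = np.sum((sel == 1) & (tru == 0))
    fn = np.sum((sel == 0) & (tru == 1))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    mcc = (tp * tn - fp * fn) / denom if denom > 0 else 0.0
    return sens, spec, mcc

def variation_of_information(c1, c2):
    # VI(c1, c2) = H(c1 | c2) + H(c2 | c1); ranges over [0, log R].
    c1, c2 = np.asarray(c1), np.asarray(c2)
    vi = 0.0
    for a in np.unique(c1):
        for b in np.unique(c2):
            p = np.mean(c1 == a)             # proportion in cluster a
            q = np.mean(c2 == b)             # proportion in cluster b
            r = np.mean((c1 == a) & (c2 == b))
            if r > 0:
                vi -= r * (np.log(r / p) + np.log(r / q))
    return vi
\end{verbatim}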
\begin{figure} \centering \includegraphics[width=4.5in,height=2.1in]{function_1.png} \\ \includegraphics[width=4.5in,height=2.1in]{function_2.png} \\ \includegraphics[width=4.5in,height=2.1in]{function_3.png} \caption{Simulated Data: Estimated smooth functions $f_1(t_{ij})$, $f_2(t_{ij})$, and $f_3(t_{ij})$ for a randomly selected replicate data set generated in the simulation study. Each estimated smooth function is represented by a solid black line with pointwise 95\% credible regions in grey. Dashed lines represent the true log odds ratios as a function of time. } \label{fig:simul} \end{figure}
Table \ref{tab:simul} reports results for our proposed method with SDP priors (PGBVSDP), our proposed method without SDP priors (PGBVS), peNMIG, and our model with horseshoe priors (PGHS) in terms of average sensitivity, specificity, and MCC for fixed (fSENS, fSPEC, fMCC) and random (rSENS, rSPEC, rMCC) effects across the replicate data sets with standard errors in parentheses. Additionally, for the PGBVSDP model, we provide clustering performance results for fixed (fCLUST) and random effects (rCLUST). Since each of the random effects was simulated similarly, clusterings were compared to a single cluster for the non-zero terms. Overall, the methods had relatively similar results for fixed effects, with PGBVS and PGHS performing the best in terms of sensitivity (1.00 and 1.00) and MCC (0.96 and 0.99), respectively. Our method with SDP priors, PGBVSDP, obtained the highest specificity for fixed effects overall. Given that the maximum possible values fCLUST and rCLUST could take on were 3.4 and 2.7, respectively, we found fairly strong clustering performance for both fixed (0.39) and random (0.92) effects with PGBVSDP. We observed more variability in the selection of random effects across models. Random effect selection sensitivity was markedly lower than for the fixed effects for all of the models. In terms of specificity (one minus the false positive rate) for random effects, our methods, regardless of prior formulation, dramatically outperformed peNMIG, with PGBVS obtaining the highest specificity overall (0.96). However, PGBVSDP and PGBVS had lower sensitivity with respect to random effects compared to PGHS. While PGHS performed well separating active from inactive random effects, recall that the truth was used to select the optimal selection threshold. The improved performance of PGBVS, PGBVSDP, and PGHS in terms of variable selection was achieved in considerably less computation time compared to peNMIG. Our core method was able to run 7,500 iterations in a fifth of the time compared to peNMIG, accessed via \cite{scheipl2011spikeslabgam}. Using the SDP priors, which require additional updates for clustering the regression coefficients, we observed a two-fold increase in computation time for PGBVSDP compared to PGBVS. However, on average, the PGBVSDP approach still achieved about a $50\%$ reduction in computation time compared to peNMIG. It is important to note that for comparison, all algorithms were run in series, even though the R package spikeSlabGAM \citep{scheipl2011spikeslabgam} provides functionality to run multiple chains in parallel.
\begin{table} \centering \footnotesize \begin{tabular}{cccccccc} \hline & PGBVSDP & & PGBVS & & peNMIG & & PGHS \\ \cline{2-2} \cline{4-4} \cline{6-6} \cline{8-8} fSENS & 0.96 (0.09) & & 1.00 (0.02) & & 0.93 (0.11) & & 1.00 (0.00) \\ fSPEC & 0.99 (0.02) & & 0.98 (0.02) & & 0.94 (0.04) & & 0.96 (0.01) \\ fMCC & 0.94 (0.08) & & 0.96 (0.05) & & 0.83 (0.10) & & 0.99 (0.02) \\ fCLUST & 0.39 (0.21) & & - & & - & & - \\ rSENS & 0.76 (0.21) & & 0.62 (0.25) & & 0.46 (0.23) & & 0.86 (0.24) \\ rSPEC & 0.88 (0.10) & & 0.96 (0.05) & & 0.64 (0.16) & & 0.90 (0.11) \\ rMCC & 0.63 (0.26) & & 0.64 (0.23) & & 0.11 (0.33) & & 0.76 (0.21) \\ rCLUST & 0.92 (0.50) & & - & & - & & - \\ Time (s) & 4658 (271) & & 2235 (46) & & 10720 (1116) & & 3076 (74) \\ \cline{1-8} \end{tabular} \caption{Simulated Data: Results for the proposed model with and without the SDP on regression coefficients compared to peNMIG \citep{scheipl2012spike} and our model with horseshoe priors \citep{carvalho2009handling}. Results are averaged over 50 replicate data sets with standard deviations in parentheses. } \label{tab:simul} \end{table}
\section{Sensitivity Analysis} \label{sec:sens_sim}
To assess the model's sensitivity to hyperparameter settings, we set each of the hyperparameters to default values and then evaluated the effect of manipulating each term on selection and clustering performance. For the default parameterization, we set the hyperparameters for the prior inclusion indicators $\boldsymbol{\nu}$ and $\boldsymbol{\lambda}$ to $a_{\nu_t}=b_{\nu_t}=a_{\lambda_d}=b_{\lambda_d} = 1$. The default values for the variance of the normal distribution for the slab of $\boldsymbol{\beta}_0$ and $\boldsymbol{\beta}^{\circ}$ as well as the base distribution for $\boldsymbol{\beta}^{*}$ were each fixed at $5$. Additionally, the mean and variance for the random effect terms' proposal and prior distributions were set to $0$ and $5$, respectively. The hyperparameters for the concentration parameters $\vartheta$ and $\mathcal{A}$ were set to $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 1$. Lastly, we assumed ${\Gamma} \sim N( \boldsymbol{\gamma}_0 = \boldsymbol{0},\boldsymbol{V}_{\gamma}=\boldsymbol{I} )$. We ran our MCMC algorithm on the 50 replicated data sets generated in the simulation study, using 7,500 iterations, treating the first 3,750 iterations as burn-in and thinning to every $10^{th}$ iteration for the SDP model. Results of the sensitivity analysis are reported in Table \ref{tab:sens}. As expected, we found that the sensitivity (specificity) increased (decreased) as the prior probability of inclusion for the fixed and random effects increased. The model did not seem sensitive to the variance assumed for the normal and folded normal priors assigned to the fixed and random effect slab distributions, respectively. Similarly, we found comparable results in terms of sensitivity and specificity for different values of the concentration parameters' hyperparameters. In terms of clustering, we saw marginally better variation of information measures with larger concentration parameter hyperparameters. However, across simulation runs, we observed relatively high standard errors in terms of the variation of information measures. To assess potential sensitivity to the order of random effects in our simulations, we re-ran the simulation study with a random permutation of the columns of $\boldsymbol{Z}$.
Similar to the case study, we found no evidence of sensitivity to random effect ordering with our model, as the results were almost identical to those presented in Table \ref{tab:simul} with PGBVSDP (rSENS = 0.76 (0.20), rSPEC = 0.87 (0.09), rMCC = 0.62 (0.20), rCLUST = 0.94 (0.42)).
\begin{table} \centering \footnotesize \begin{tabular}{cccccc} \hline & $a_{v_t} = a_{\lambda_d} = 1$, $b_{v_t} = b_{\lambda_d} = 9$ & & $\tau^2 = v_0 = 2$ & & $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 0.1$ \\ \cline{2-2} \cline{4-4} \cline{6-6} fSENS & 0.92 (0.13) & & 0.97 (0.08) & & 0.94 (0.12) \\ fSPEC & 0.99 (0.02) & & 0.99 (0.02) & & 0.99 (0.02) \\ fMCC & 0.93 (0.10) & & 0.96 (0.07) & & 0.94 (0.09) \\ fCLUST & 0.45 (0.30) & & 0.35 (0.20) & & 0.45 (0.25) \\ rSENS & 0.50 (0.20) & & 0.79 (0.20) & & 0.54 (0.28) \\ rSPEC & 0.87 (0.09) & & 0.88 (0.08) & & 0.85 (0.10) \\ rMCC & 0.40 (0.26) & & 0.66 (0.23) & & 0.41 (0.29) \\ rCLUST & 1.30 (0.36) & & 0.91 (0.44) & & 1.25 (0.50) \\ & $a_{v_t} = a_{\lambda_d} = 9$, $b_{v_t} = b_{\lambda_d} = 1$ & & $\tau^2 = v_0 = 10$ & & $a_{\vartheta} = b_{\vartheta} = a_{\mathcal{A}} = b_{\mathcal{A}} = 10$ \\ \cline{2-2} \cline{4-4} \cline{6-6} fSENS & 0.99 (0.03) & & 0.96 (0.07) & & 0.94 (0.11) \\ fSPEC & 0.96 (0.03) & & 0.99 (0.02) & & 0.99 (0.02) \\ fMCC & 0.91 (0.06) & & 0.95 (0.07) & & 0.93 (0.11) \\ fCLUST & 0.40 (0.20) & & 0.39 (0.23) & & 0.41 (0.24) \\ rSENS & 0.85 (0.20) & & 0.78 (0.20) & & 0.74 (0.23) \\ rSPEC & 0.84 (0.10) & & 0.89 (0.10) & & 0.86 (0.10) \\ rMCC & 0.66 (0.19) & & 0.67 (0.25) & & 0.60 (0.27) \\ rCLUST & 0.84 (0.49) & & 0.89 (0.47) & & 0.97 (0.49) \\ \hline \end{tabular} \caption{Simulated Data: Sensitivity results for the proposed model with SDP on regression coefficients. Results are averaged over 50 replicated data sets with standard errors in parentheses.} \label{tab:sens} \end{table}
\section{Conclusions} \label{sec:remarks}
In this paper, we have investigated intensive longitudinal data collected in a novel smartphone-based smoking cessation study to better understand the relation between potential risk factors and smoking behaviors in the critical moments surrounding a quit attempt, using a semiparametric Bayesian time-varying effect modeling framework. Unlike standard TVEMs, our approach deconstructs the structure of the relations between risk factors and smoking behaviors over time, which aids in formulating hypotheses regarding dynamic relations between risk factors and smoking in the critical moments around a quit attempt. By performing variable selection on random effects, the approach delivers additional insights on how relations vary over time as well as how they vary across individuals. Furthermore, the use of non- and semiparametric prior constructions allows simultaneous variable selection for fixed and random effects while learning latent clusters of regression coefficients. As such, our model is designed to discover various forms of latent structures within the data without requiring strict model assumptions or burdensome tuning procedures. Results from our analysis have confirmed previously identified temporal relations between smoking behaviors and \textit{urge} to smoke, \textit{cigarette availability}, and \textit{negative affect}. They have also identified subject-specific heterogeneity in the effects of \textit{urge} to smoke, \textit{cigarette availability}, and \textit{motivation to quit}.
Additionally, we have found that subjects differed in how they responded to the \textit{SmartT} treatment (compared to usual care), \textit{interacting with a smoker}, and being \textit{bored} over time. This has practical relevance, as researchers can use this information to design adaptive interventions that prioritize targeting risk factors based on their relative strength of association at a given moment. These findings also reinforce the importance of designing dynamic intervention strategies that are adaptive to subjects' current risk profiles. Throughout this work, we have demonstrated how our method is well-suited to aid the development and evaluation of future JITAI strategies targeting smoking cessation using mHealth data. The existing \textit{SmartT} algorithm delivers treatment based on the presence of six lapse triggers, which are weighted based on their relative importance in predicting risk of lapse \citep{businelle2016ecological}. The results of this study allow for a more dynamic algorithm that takes into account not only the time-varying relationships between psychosocial and environmental variables and smoking lapse, but also the different ways in which individuals experience a quit attempt. For example, the results suggest that providing momentary support to cope with \textit{urge} to smoke and \textit{negative affect} may be more useful if delivered in the early stages of a quit attempt, but may become less important by week 4 post-quit. However, messages that address \textit{cigarette availability}, \textit{alcohol consumption}, and \textit{motivation to quit smoking} may be a more important focus for the entire quit attempt. Although the findings for this small sample may not be generalizable to larger, more diverse populations, these methods are the next step in developing a personalized smoking risk algorithm that can inform highly specific, individualized treatment for each smoker. It is important to note that selection of a risk factor by our proposed method (or any variable selection technique) does not imply clinical significance. Notably, the pointwise credible intervals often contained odds ratios of one, and most risk factors were only influential for brief moments throughout the study period. While these results highlight the importance of understanding risk factors' dynamic relations with smoking to design tailored intervention strategies, we recommend using our method for hypothesis generation in practice and conducting confirmatory studies before generalizing results. Compliance rates for EMA studies typically range between 70\% and 90\%, with a recommended threshold of 80\% \citep{jones2006continuous}. In our case study, the compliance rate was 84\%. Additionally, 97.3\% of all assessments were completed once initiated, and subjects were unable to skip questions within an assessment. Since subjects were assessed multiple times per day, nonresponse was attributed more to situational context (e.g., driving) than smoking status. Thus, for this study, we found the missing completely at random assumption for missing observations justified. However, future studies may consider the development of advanced analytical methods for EMA data sets that can handle different types of missingness assumptions and other potential biases, such as social desirability bias. In this analysis, we focus on time-varying effects due to their recent popularity in smoking behavior research \citep{tan2012time,shiyko2012using,vasilenko2014time,koslovsky2017time,lanza2013advancing,shiyko2014modeling}.
A promising alternative for investigating the complexity of smoking behaviors around a quit attempt is the varying index coefficient model, which allows a covariate's effect to vary as a function of multiple other variables \citep{ma2015varying}. By incorporating variable selection priors, researchers could identify which variables are responsible for modifying a covariate's effect. Oftentimes, behavioral researchers are interested in exploring other forms of latent structure, such as clusters of individuals who respond similarly to treatments or have similar risk profiles over time. Taking advantage of the flexibility and efficiency of our approach, future work could extend our core model to address these research questions by recasting it into a mixture modeling framework. In addition, while we have developed our method for binary outcomes due to their prevalence in smoking behavior research studies, our approach is easily adaptable to other data structures found within and outside of smoking behavior research, such as time-to-event data \citep{sha2006bayesian} and continuous outcomes. While our method borrows information across regression coefficients, we avoided imposing structure among covariates via heredity constraints, which restrict the model space for higher order terms depending on the inclusion status of the lower order terms that comprise them. Researchers interested in extending our approach to accommodate these and other forms of hierarchical constraints may adjust the prior probabilities of inclusion \citep{chipman1996bayesian}. Lastly, while we were hesitant to present variable selection results for PGHS due to the limited understanding of global-local priors for non-Gaussian distributions, the approach showed good selection performance in our simulations. Furthermore, when applied to the case study data, we obtained promising predictive performance (i.e., $\widehat{\mbox{elpd}}_{HS} = -2955.5$) that warrants future investigation of its theoretical properties.
\section*{Acknowledgements}
Matthew Koslovsky is supported by NSF via the Research Training Group award DMS-1547433.
\section*{Supplementary Material}
$\mbox{ }$ \\ \noindent \textbf{R-package for PGBVS:} \\ R-package PGBVS contains code to perform the methods described in the article. The package also contains functionality for reproducing the data used in the sensitivity and simulation studies and for posterior inference. The R package is located at \url{https://github.com/mkoslovsky/PGBVS}. \vspace{1cm} \noindent \textbf{Supplementary Information:} \\ This file contains a description of the full joint distribution of our model with a graphical representation, a detailed description of our proposed MCMC algorithm with and without SDP priors, and derivations for the prior marginal likelihood used to sample latent cluster assignments. Additionally, we include details of the goodness-of-fit analysis for the case study. \bibliographystyle{imsart-nameyear}
\section{Introduction}
The new era of direct imaging of exoplanets has revealed a population of Jupiter-like objects that orbit their host stars at large separations ($\sim$10--100 AU; \citealt{Bowler16,Nielsen19,vigan20}). These giant planets, with masses of $\sim$2--14 $M_\mathrm{Jup}$ and effective temperatures of $\sim$500--2000 K, are young ($\sim$15--200 Myr) compared to exoplanets discovered through other methods (e.g., Doppler spectroscopy, transit, gravitational microlensing) because their detectability is enhanced at young ages (e.g., \citealt{Baraffe08}). The formation of these gas giant planets has traditionally been challenging for the two main planet formation models, core (or pebble) accretion and gravitational instability (e.g., \citealt{Dodson-Robinson09}). Some planet formation scenarios influence a planet's final atmospheric composition more than others. A potential connection between formation and composition highlights the importance of studying the properties of exoplanet atmospheres. It has long been suggested that the compositions of giant planets in our Solar System were likely determined by their initial location in the protoplanetary disk and the accretion they experienced (e.g., \citealt{Owen99}). For example, the ratio of the abundances of carbon and oxygen (C/O) in a Jovian planet atmosphere has been suggested as a potential way to trace the location and mechanism of formation (e.g., \citealt{Madhu11}). To estimate elemental abundances, however, we need a detailed understanding of the chemical and dynamical histories of the giant planets' atmospheres. The luminosity and effective temperature of a giant planet decrease with time, causing its atmosphere to undergo considerable changes even over a period of time equal to the age of the directly imaged planet population, and certainly over a few billion years. In particular, the vertical mixing timescales will change as the planet's atmospheric dynamics evolve and as the radiative-convective boundary moves to higher pressures. Changes in temperature and pressure will also result in changes in the atmospheric abundances of gases and condensates. The composition of their atmospheres could be further altered by continued accretion of solid bodies from the planetary disk, or mixing inside the metal-rich core (e.g., \citealt{Mousis09}). Important trace molecules (H$_{2}$O, CH$_{4}$, CO$_{2}$, CO, NH$_{3}$, and N$_{2}$) of giant planets are greatly impacted by these complex chemical and physical processes that occur over time (e.g., \citealt{Zahnle14}). Because of these complexities, detailed abundance measurements for certain species, such as oxygen, have been challenging for the planets in our Solar System. For Saturn, only upper limits on the C/O ratio have been measured \citep{wong04,visscher05}. For Jupiter, previous estimates of C/O were impacted by inconclusive findings on the water abundance in the atmosphere from the Galileo probe. Using Juno data, \cite{li20} recently measured the water abundance in the equatorial zone as $2.5^{+2.2}_{-1.6} \times 10^{3}$ ppm, suggesting an oxygen abundance roughly three times the Solar value. The directly imaged planets offer an interesting laboratory for pursuing detailed chemical abundances, as they have not undergone as many complex changes in composition as their older counterparts. The $\kappa$ Andromedae ($\kappa$ And) system consists of a B9V-type host star with a mass of $\sim$2.7 $M_{\odot}$ and a bound companion, $\kappa$ And b \citep{Carson13}.
The primary is one of the most massive stars known to host an extrasolar planet or low-mass brown dwarf companion. $\kappa$ And b has been described as a ``super-Jupiter,'' with a lower mass limit near or just below the deuterium burning limit \citep{Carson13,Hinkley13}. \cite{Zuckerman11} proposed that $\kappa$ And is a member of the Columba association with an age of $\sim$30 Myr, leading \cite{Carson13} to adopt that age and estimate $\kappa$ And b to have a mass of $\sim$12.8 $M_\mathrm{Jup}$ with DUSTY evolutionary models \citep{Chabrier00}. However, \cite{Hinkley13} suggested that $\kappa$ And b had a much older isochronal age of 220 ${\pm}$ 100 Myr, a higher surface gravity ($\log g \approx 4.33$ as opposed to $\log g \sim 4$ for 30 Myr), and a mass of 50$^{+16}_{-13}$ $M_\mathrm{Jup}$ by comparing its low-resolution $YJH$-band spectra with empirical spectra of brown dwarfs. \cite{Bonnefoy14} derived an age of 30$^{+120}_{-10}$ Myr, similar to \cite{Carson13}, based on the age of the Columba association and a lower mass limit of 10 $M_\mathrm{Jup}$ based on ``warm-start'' evolutionary models, but did not constrain the surface gravity. More recent studies of $\kappa$ And b by \cite{currie18} and \cite{uyama20} have concluded that the object is low gravity ($\log g \sim 4$--4.5) and resembles an L0--L1 dwarf. Other studies focusing on the host star found the system to be young ($t$ $\sim$ 30--40 Myr; \citealt{david15,brandt15}). Using CHARA interferometry, \cite{jones16} constrained the rotation rate, gravity, luminosity, and surface temperature of $\kappa$ And A and compared these properties to stellar evolution models, showing that the models favor a young age, 47$^{+27}_{-40}$ Myr, which agrees with a more recent age estimate of $42^{+6}_{-4}$ Myr for the Columba association by \cite{bell15}. Understanding the orbital dynamics of exoplanets can also put constraints on formation pathways. Radial velocity measurements can be used to break the degeneracy in the orientation of the planets' orbital plane. While astrometric measurements from imaging are ever increasing in precision (e.g., \citealt{wang2018}), measuring the radial velocity (RV) of directly imaged exoplanets is challenging due to the high spectral resolution required, balanced against the faintness of the planets and their contrast with respect to their host stars. The first RV measurement of a directly imaged planet was of $\beta$ Pictoris b, using the Cryogenic High-Resolution Infrared Echelle Spectrograph (CRIRES, R = 100,000) at the Very Large Telescope (VLT; \citealt{kaeufl04}). An RV of $-15.4 \pm 1.7$ km s$^{-1}$ relative to the host star was measured via cross-correlation with a CO molecular template \citep{snellen14}. \cite{haffert19} detected H$\alpha$ around PDS 70 b and c, but the radial velocities measured were of the accretion itself and not of the motion of the planets. \cite{ruffio19} measured the RV of HR 8799 b and c with a 0.5 km s$^{-1}$ precision using a joint forward modeling of the planet signal and the starlight (speckles). Here we present $R\sim4000$ $K$-band spectra of $\kappa$ And b. In Section \ref{sec:data_red} we report our observations and data reduction methods. In Section \ref{sec:model} we use atmosphere model grids and forward modeling Markov Chain Monte Carlo methods to determine the best-fit effective temperature, surface gravity, and metallicity of the companion. We use our best-fit parameters and \textit{PHOENIX} models with scaled molecular mole fractions to derive a C/O ratio of 0.70$_{-0.24}^{+0.09}$ for $\kappa$ And b.
In Section \ref{sec:rvs} we use the joint forward modeling technique devised by \cite{ruffio19} to measure $\kappa$ And b's radial velocity and to constrain the plane and eccentricity of its orbit. In Section \ref{sec:discussion} we discuss the implications of our results and future work. \par
\section{Data Reduction}\label{sec:data_red}
$\kappa$ And b was observed in 2016 and 2017 with the OSIRIS integral field spectrograph (IFS) \citep{Larkin06} in the $K$ broadband mode (1.965--2.381 $\mu$m) with a spatial sampling of 20 milliarcseconds per lenslet. A log of our observations is given in Table \ref{tab:obslog}. Observations of blank patches of sky and an A0V telluric standard (HIP 111538) were obtained close in time to the data. We also obtained dark frames with exposure times matching our dataset. The data were reduced using the OSIRIS data reduction pipeline \citep[DRP;][]{Krabbe04,Lockhart19}. Data cubes are generated using the standard method in the OSIRIS DRP, using rectification matrices provided by the observatory. On the advice of the DRP working group, we did not use the Clean Cosmic Rays DRP module. We combined the sky exposures from each night and subtracted them from their respective telluric and object data cubes (we did not use scaled sky subtraction).
\begin{deluxetable}{lccc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{OSIRIS Observations of $\kappa$ Andromedae b\label{tab:obslog}} \tablehead{ \colhead{Date} & \colhead{Number of} & \colhead{Integration} & \colhead{Total Int.} \\ \colhead{(UT)} & \colhead{Frames} & \colhead{Time (min)} & \colhead{Time (min)} \\ } \startdata 2016 Nov 6 & 5 & 10 & 50 \\ 2016 Nov 7 & 8 & 10 & 80 \\ 2016 Nov 8 & 5 & 10 & 50 \\ 2017 Nov 4 & 13 & 10 & 130 \enddata \end{deluxetable}
After extracting one-dimensional spectra for the telluric sources, we used the DRP to remove hydrogen lines, divide by a blackbody spectrum, and combine all spectra for each respective night. An initial telluric correction for $\kappa$ And b was then obtained by dividing all object frames by the final combined telluric calibrator spectrum. Once the object data cubes are fully reduced, we identify the location of the planet. The location can be challenging to find due to the brightness of the speckles even at the separation of the planet ($\sim$1\arcsec). Speckle positions vary with wavelength, while the position of the planet signal does not. In order to locate the planet, we visually inspect each cube while stepping through it in wavelength and determine which features do not move with wavelength. Once we find the planet, we record the spatial coordinates. During preliminary spectral extraction, we noted that the telluric frames did a poor job of correcting some absorption features, particularly in the blue part of the spectrum. We therefore used the speckles from $\kappa$ And A that are present in all datacubes to derive a telluric correction spectrum for each individual exposure. This correction works well because $\kappa$ And A is a B9-type star with very few intrinsic spectral lines, so the majority of the spectral features will be from Earth's atmosphere. We masked the location of the planet and extracted a 1-D spectrum from the rest of the datacube to use as the telluric spectrum. As with the A0V star, we removed the hydrogen lines and blackbody spectrum based on the temperature of $\kappa$ And A. Once the data cubes were reduced and the planet location identified, we used a custom IDL routine to remove speckles.
The program smooths and rebins the data to $\lambda/\Delta\lambda\sim$50, and then magnifies each wavelength slice, $\lambda$, about the star by $\lambda_m/\lambda$ with $\lambda_m = 2.173~\mu$m, the median wavelength in the $K$-band. The generated data cube has speckles that are positionally aligned, with the planet position varying. The program then fits first-order polynomials to every spatial location as a function of wavelength \citep{Barman11,Konopacky13}. We use the known position of the planet to mask it, preventing bias in the polynomial fit. The results of the fits are subtracted from the full-resolution spectrum before the slices are demagnified (a schematic sketch of this procedure appears at the end of this section). Figure \ref{fig:speckles} shows a wavelength-collapsed frame from a data cube before and after speckle removal.
\begin{figure*} \epsscale{0.60} \plotone{OSIRIS_rev.png} \caption{An example data cube image frame, collapsed in wavelength via median, from our OSIRIS $\kappa$ And b data set. The panel on the left shows a reduced cube before speckle removal, demonstrating the brightness of the speckles at the location of $\kappa$ And b. The right panel shows the data cube after the speckle removal process. The algorithm effectively removes the speckle noise, leaving most of the flux from the planet behind for spectral extraction.} \label{fig:speckles} \end{figure*}
Uncertainties were determined by calculating the RMS between the individual spectra at each wavelength. These uncertainties include contributions from statistical error in the flux of the planet and the speckles as well as some additional error in the blue end of the spectrum due to imperfect removal of large telluric features in this region. The OH sky lines are well-subtracted and have a negligible contribution to the uncertainties. We also tested our reduction methodology by planting a fake planet with a flat spectrum in each data cube and going through the same reduction process as above. When we ran the speckle subtraction and then extracted the fake planet spectra from each cube, there were some fluctuations in the spectra, particularly near the ends of the spectral range. We also tested the speckle subtraction algorithm and extracted the fake planet spectra using a higher-order polynomial fit, but the continuum fluctuations were much larger. We therefore determined that the first-order polynomial fit introduces the least continuum bias to our data. The uncertainties from the extracted spectra incorporate most of the impact of this bias, with some residual impact at the blue and red ends. We mitigate the impact in further analysis through removal of the continuum (see Section \ref{sec:temp_g_m}). Once the speckles are removed, we extract the object spectrum using a box of $3 \times 3$ spatial pixels (spaxels). After extracting the $\kappa$ And b spectrum from each frame, we normalize each individual spectrum to account for seeing and background fluctuations, and we apply a barycentric correction to each spectrum. Finally, we median-combine all 30 individual spectra. To calibrate the flux of our spectra, we calculated the flux at each wavelength such that, when integrated, the flux matches the $K$-band apparent magnitude ($14.37 \pm 0.07$) from \cite{currie18}. Figure \ref{fig:extracted_spec} shows the combined, flux calibrated spectrum for $\kappa$ And b.
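The schematic sketch of the speckle-suppression procedure referenced above is given below. This is a simplified Python illustration rather than the custom IDL routine actually used: it assumes the star lies at the array center, adopts an arbitrary 2-pixel planet mask radius, and handles image edges crudely.
\begin{verbatim}
# Schematic sketch of the speckle suppression described above (Python).
# Assumptions: star at the array center, 2-pixel planet mask radius,
# simplified edge handling; the production routine was written in IDL.
import numpy as np
from scipy.ndimage import zoom

LAMBDA_M = 2.173  # reference wavelength (micron)

def center_crop_or_pad(img, shape):
    """Crop or zero-pad `img` symmetrically to `shape`."""
    out = np.zeros(shape, dtype=img.dtype)
    dy, dx = (np.array(img.shape) - np.array(shape)) // 2
    sy, sx = max(dy, 0), max(dx, 0)
    oy, ox = max(-dy, 0), max(-dx, 0)
    ny = min(img.shape[0], shape[0]); nx = min(img.shape[1], shape[1])
    out[oy:oy + ny, ox:ox + nx] = img[sy:sy + ny, sx:sx + nx]
    return out

def speckle_subtract(cube, wavelengths, planet_xy, mask_r=2.0):
    """cube: (n_lam, ny, nx); planet_xy: (x, y) in the original frames."""
    wavelengths = np.asarray(wavelengths)
    n_lam, ny, nx = cube.shape
    aligned = np.empty_like(cube)
    for k, lam in enumerate(wavelengths):
        # Magnify by lambda_m / lambda so speckles align across slices.
        mag = zoom(cube[k], LAMBDA_M / lam, order=1)
        aligned[k] = center_crop_or_pad(mag, (ny, nx))
    # Planet track in the magnified frames (it moves; speckles do not).
    cx, cy = (nx - 1) / 2.0, (ny - 1) / 2.0
    s = LAMBDA_M / wavelengths
    px, py = cx + (planet_xy[0] - cx) * s, cy + (planet_xy[1] - cy) * s
    resid = np.empty_like(aligned)
    for j in range(ny):
        for i in range(nx):
            good = np.hypot(i - px, j - py) > mask_r  # mask planet track
            y = aligned[:, j, i]
            if good.sum() > 2:  # first-order polynomial in wavelength
                c = np.polyfit(wavelengths[good], y[good], 1)
                resid[:, j, i] = y - np.polyval(c, wavelengths)
            else:
                resid[:, j, i] = y
    out = np.empty_like(resid)
    for k, lam in enumerate(wavelengths):  # demagnify back
        out[k] = center_crop_or_pad(zoom(resid[k], lam / LAMBDA_M,
                                         order=1), (ny, nx))
    return out
\end{verbatim}
In practice, the rectified OSIRIS cubes, the measured planet coordinates, and a more careful interpolation scheme would replace these placeholders.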
\begin{figure*} \epsscale{0.95} \plotone{fillbt_c_5_12_20.png} \caption{Our fully reduced, combined, and flux calibrated moderate-resolution OSIRIS $K$-band spectra of $\kappa$ And b. The errors are shown as a shaded light blue region.} \label{fig:extracted_spec} \end{figure*}
Once we had our fully reduced, combined, and flux calibrated spectra, we wanted to analyze the spectrum both with and without the continuum. The expectation is that by removing the continuum, some of the residual correlated noise from the speckles gets removed as well. To remove the continuum, we apply a high-pass filter with a kernel size of 200 spectral bins to each of the individual spectra. We then subtract the smoothed spectrum from each original spectrum. Once all the individual spectra had been continuum subtracted, we median-combined them as well and found the uncertainties by calculating the RMS of the individual spectra at each wavelength.
\section{Spectral Modeling}\label{sec:model}
\subsection{Synthetic Spectra}
Our first goal is to constrain the temperature, surface gravity, and metallicity of $\kappa$ And b. In order to do this, we must construct a model grid that spans the expected values of these parameters. In a number of previous works on $\kappa$ And b, the temperature was estimated to be $\sim$2000 K and the surface gravity ($\log g$) $<5$ \citep[e.g.,][]{Hinkley13,Bonnefoy14,todorov16,currie18,uyama20}. The metallicity has not been constrained, but estimates from the host star suggest values near solar or slightly subsolar \citep{wu11,jones16}. Based on these measurements, we generated a custom grid based on the \textit{PHOENIX} model framework. The details on the computation of this grid are described in \cite{Barman11,barman15}, with the updated methane line list from \cite{yurchenko14} and the optical opacities from \cite{karkoschka10}. The grid spans a temperature range of 1500--2500~K, a $\log g$ range of 2--5.5~dex, and a metallicity range of $-0.5$--0.5~dex, which encompasses the range of values previously reported for $\kappa$ And b. For a $\sim$2000~K object, carbon is already in CO rather than CH$_4$ throughout the atmosphere, and thus the amount of CO should be constant with height. Therefore, for $\kappa$ And b, we chose not to model vertical mixing ($K_{zz} = 0$). The cloud properties for young gas giants and brown dwarfs are notoriously complex. In our modeling framework, we are able to incorporate clouds in several different ways. We can generate a thick cloud with an ISM-like grain size distribution (DUSTY, \citealt{allard01}), a complete lack of cloud opacity (COND, \citealt{allard01}), or an intermediate model that spans these two extremes (ICM, \citealt{Barman11}). Given the estimated temperature and surface gravity of $\kappa$ And b, we chose to use a DUSTY cloud model in our grid, which has been shown to do a reasonably good job at reproducing brown dwarf spectra with similar properties \citep[e.g.,][]{kirkpatrick06}. We will therefore refer to the custom grid constructed here as \textit{PHOENIX-ACES-DUSTY} to distinguish it from other models based on the \textit{PHOENIX} framework. We explore this choice of cloud model and describe results from a few other models in Section \ref{sec:temp_g_m}. The synthetic spectra from the grid were calculated with a wavelength sampling of 0.05~\AA\ from 1.4 to 2.4~$\mu$m. Each spectrum was convolved with a Gaussian kernel with a FWHM that matched the OSIRIS spectral resolution \citep{barman15}.
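Because the same continuum treatment is applied to the data and, as noted below, to the synthetic spectra, we include a minimal sketch of the high-pass step here. The 200-bin kernel size is taken from the text; the specific choice of a median smoother is our assumption.
\begin{verbatim}
# Minimal sketch of the continuum removal: smooth each 1-D spectrum
# with a broad (~200-bin) kernel and subtract. The median smoother is
# an assumption; the text specifies only the kernel size.
import numpy as np
from scipy.ndimage import median_filter

def subtract_continuum(flux, kernel=200):
    flux = np.asarray(flux, dtype=float)
    continuum = median_filter(flux, size=kernel, mode='nearest')
    return flux - continuum
\end{verbatim}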
Both flux calibrated and continuum subtracted data were modeled and analyzed. The synthetic spectra were flux calibrated and continuum subtracted using the same routines as the data.
\subsection{Forward Modeling}\label{sec:for_mod}
To determine the best-fit \textit{PHOENIX-ACES-DUSTY} model, we use a forward-modeling approach following \cite{Blake2010}, \cite{Burgasser16}, Hsu et al. (in prep), and Theissen et al. (in prep). The effective temperature ($T_\mathrm{eff}$), surface gravity ($\log g$), and metallicity ([M/H]) are inferred using a Markov Chain Monte Carlo (MCMC) method built on the \texttt{emcee} package that uses an implementation of the affine-invariant ensemble sampler \citep{GoodmanWeare10,ForemanMackey13}. We assume normally distributed residuals, and thus the log-likelihood function is computed as follows
\begin{equation}
\ln L = -\frac{1}{2} \sum_{p} \left[\left(\frac{\mathrm{data}[p] - D[p]}{\sigma[p]}\right)^2 + \ln\left(2\pi\sigma[p]^2\right)\right],
\end{equation}
where $\sigma[p]$ denotes the provided uncertainties, $\mathrm{data}[p]$ is our science data, and $D[p]$ is the forward-modeled data. The uncertainty is taken as the difference between the 84th and 50th percentile as the upper limit, and the difference between the 50th and 16th percentile as the lower limit for all model parameters. If the posterior distributions follow normal (Gaussian) distributions, then this equates to the 1-$\sigma$ uncertainty in each parameter (e.g., \citealt{Blake2010, Burgasser16}). Assuming that there are no additional systematic uncertainties in the data or in the models, these uncertainties should be an accurate reflection of our knowledge of each parameter. We discuss and attempt to account for additional systematic uncertainties in the data and the models in Section \ref{sec:temp_g_m}. The data is forward-modeled using the following equation:
\begin{equation}
D[p] = C \times M\left[p\left(\lambda \left[1 - \frac{RV}{c}\right]\right),T_\mathrm{eff},\log g, \mathrm{[M/H]}\right] * \kappa_G(\Delta \nu_\mathrm{inst}) + C_\mathrm{flux}.
\end{equation}
Here, $p(\lambda)$ is the mapping of the wavelength values to pixels, $M[p(\lambda)]$ is the stellar atmosphere model parameterized by effective temperature ($T_\mathrm{eff}$), surface gravity ($\log g$), and metallicity ([M/H]), $C$ is the dilution factor, (radius/distance)$^2$, that scales the model to the observed fluxes (which is a measure of the radius since the distance is known; e.g., \citealt{theissen14,kesseli19}), and $\kappa_G(\Delta \nu_\mathrm{inst})$ is the line spread function (LSF), with which the model is convolved (denoted by $*$), calculated from the OSIRIS resolution of $R = 4000$ to be 34.5~km s$^{-1}$. The $RV$ is the radial velocity that is used here only to account for wavelength calibration errors in the OSIRIS DRP, $c$ is the speed of light, and $C_\mathrm{flux}$ is an additive continuum correction to account for potential systematic offsets in the continuum. This final parameter ($C_\mathrm{flux}$) is only used when fitting the continuum normalized data. Our MCMC runs used 100 walkers, 500 steps, and a burn-in of 400 steps to ensure parameters were well mixed.
\subsection{Temperature, Gravity, and Metallicity}\label{sec:temp_g_m}
We ran our MCMC fitting procedure on both the flux calibrated spectrum and the continuum-subtracted spectrum. The best-fit parameters for our flux calibrated data are $T_\mathrm{eff} = 1588 \pm 5$ K, $\log g = 4.72^{+0.05}_{-0.06}$, and a metallicity of [M/H] = $0.5 \pm 0.01$.
For the radius, which comes from the multiplicative flux parameter, we found $R = 1.00 \pm 0.02~R_\mathrm{Jup}$. For our continuum-subtracted data, the best-fit parameters were $T_\mathrm{eff} = 2048 \pm 11$~K, $\log g = 3.77 \pm 0.03$, and a metallicity of [M/H] = $-0.11 \pm 0.02$. Radii cannot be derived for the continuum-subtracted data. Figures \ref{fig:model_continuum} through \ref{fig:corner_flat} show the best-fit spectrum overplotted on our data, and the resulting corner plots from our MCMC analysis for both the initially extracted and continuum-subtracted spectra.
\begin{deluxetable*}{lccccc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Summary of atmospheric parameters derived from MCMC fits.} \label{tab:atm_param} \tablehead{ \colhead{Spectra} & \colhead{Effective Temperature} & \colhead{Surface Gravity} & \colhead{Metallicity} & \colhead{Radius} & \colhead{Luminosity}\\ \colhead{$\kappa$ And b} & \colhead{$T_\mathrm{eff}$ (K)} & \colhead{$\log g$} & \colhead{[M/H]} &\colhead{($R_\mathrm{Jup}$)} &\colhead{$\log_{10}\left(\frac{L}{L_\odot}\right)$} } \startdata \multicolumn{6}{c}{PHOENIX-ACES-DUSTY} \\ \hline OSIRIS Including Continuum & $1588 \pm 5$ & $4.72^{+0.05}_{-0.06}$ & $0.50^{+0.01}_{-0.01}$ & $1.0 \pm 0.02$ & $-4.2 \pm 0.1$ \\ OSIRIS Continuum Subtracted & $2048 \pm 11$ & $3.77 \pm 0.03$ & $-0.11 \pm 0.02$ & n/a & n/a\\ CHARIS All Bands & $2021^{+20}_{-19}$ & $3.64^{+0.18}_{-0.10}$ & $0.46^{+0.03}_{-0.07}$ & $0.99 \pm 0.02$ & $-3.80 \pm 0.02$\\ CHARIS K Band Only & $1707^{+147}_{-118}$ & $4.62^{+0.48}_{-0.63}$ & $-0.12^{+0.28}_{-0.23}$ & $1.4 \pm 0.2$ & $-3.8 \pm 0.1$ \\ SpeX 2MASS J01415823$-$4633574 & $1972^{+9}_{-10}$ & $2.93^{+0.08}_{-0.14}$ & $0.49 \pm 0.01$ & n/a & n/a \\ \hline \multicolumn{6}{c}{BT-SETTL} \\ \hline OSIRIS Including Continuum & $1630^{+7}_{-5}$ & $3.5^{+0.5}_{-0.4}$ & n/a & $1.10 \pm 0.10$ & $-4.11 \pm 0.1$ \\ OSIRIS Continuum Subtracted & $2128^{+70}_{-73}$ & $4.47^{+0.02}_{-0.06}$ & n/a & n/a & n/a \\ CHARIS All Bands & $1817^{+48}_{-18}$ & $5.15^{+0.13}_{-1.13}$ & n/a & $1.2 \pm 0.1$ & $-3.8 \pm 0.1$\\ CHARIS K Band Only & $1647^{+194}_{-96}$ & $4.16^{+0.35}_{-0.33}$ & n/a & $1.5 \pm 0.3$ & $-3.8 \pm 0.2$\\ \hline \multicolumn{6}{c}{DRIFT-PHOENIX} \\ \hline OSIRIS Including Continuum & $2200^{+100}_{-130}$ & $4.0^{+0.3}_{-0.5}$ & n/a & $1.0^{+0.2}_{-0.1}$ & $-3.6 \pm 0.2$ \\ OSIRIS Continuum Subtracted & $2126^{+104}_{-131}$ & $4.19^{+0.2}_{-0.22}$ & n/a & n/a & n/a \\ CHARIS All Bands & $1747^{+20}_{-18}$ & $3.99^{+0.19}_{-0.20}$ & n/a & $1.5 \pm 0.1$ & $-3.7 \pm 0.1$\\ CHARIS K Band Only & $1863^{+289}_{-233}$ & $4.22^{+0.50}_{-0.46}$ & n/a & $1.3 \pm 0.3$ & $-3.7 \pm 0.2$ \\ \hline Adopted Values & 2050 & 3.8 & $-0.1$ & 1.2 & $-3.8$ \\ Range of Allowed Values & 1950--2150 & 3.5--4.5 & $-0.2$ to $0.0$ & 1.0--1.5 & $-3.9$ to $-3.5$ \enddata \tablenotetext{-}{The grid used in each case is noted above the derived parameters. Using the range of best-fit values and our estimates of systematic uncertainties, the adopted atmospheric parameters for $\kappa$ And b and their allowed ranges are shown in the last two rows. We use the convention for metallicity where [M/H] = $\log _{10}\left(\frac{N_{M}}{N_{H}}\right)_\mathrm{star} - \log _{10}\left(\frac{N_{M}}{N_{H}}\right)_{\sun}$} \end{deluxetable*}
The discrepancy between the two fits, one with the continuum and one without, is not entirely unexpected. The continuum is strongly impacted by residual systematic errors from the speckle noise, which injects features at low spatial frequencies.
Effective temperature is particularly sensitive to the continuum shape and, as a bolometric quantity, is better estimated by including data from a broader range of wavelengths. Subtracting the continuum mitigates and removes some of these residual errors. \begin{figure*} \epsscale{0.95} \plotone{cont_flam_resid.png} \caption{Results from the MCMC model fit to the OSIRIS spectrum for $\kappa$ And b without continuum removal (black). The best-matching \textit{PHOENIX-ACES-DUSTY} model has $T_\mathrm{eff}$ = 1588 K, $\log g$ = 4.72, and [M/H] = $0.50$ (magenta). The residuals between the data and the model are plotted in gray. The shape of the continuum is impacted by speckle noise, which modulates at low spatial frequencies and leaves residual noise in our dataset after speckle removal. The fits are then driven to lower temperatures and higher metallicities than previously found for $\kappa$ And b due to the impact on the continuum shape, particularly at blue wavelengths.} \label{fig:model_continuum} \end{figure*} \begin{figure*} \epsscale{0.95} \plotone{continuum_triangle_rev_rev.pdf} \caption{Corner plot corresponding to the fit shown in Figure \ref{fig:model_continuum}. The diagonal shows the marginalized posteriors. The covariances between all the parameters are in the corresponding 2-d histograms. The blue lines represent the 50th percentile, and the dotted lines represent the 16th and 84th percentiles. $C$ corresponds to the dilution factor that scales the model by \((radius)^2 (distance)^{-2}\), as mentioned in Section \ref{sec:for_mod}.} \label{fig:corner_continuum} \end{figure*} \begin{figure*} \epsscale{0.95} \plotone{flat_feb17_residuals.png} \caption{Results from the MCMC model fit to the OSIRIS spectrum after continuum removal (black). The best-fitting model, with $T_\mathrm{eff}$ = 2048 K, $\log g$ = 3.77, and [M/H] = $-0.11$, is shown in magenta. The residuals between the flattened data and the flattened model are in gray.} \label{fig:model_flat} \end{figure*} \begin{figure*} \epsscale{0.95} \plotone{cont_triangle_rev.png} \caption{Corner plot corresponding to the fit shown in Figure \ref{fig:model_flat}. The diagonal shows the marginalized posteriors. The covariances between all the parameters are in the corresponding 2-d histograms. The blue lines represent the 50th percentile, and the dotted lines represent the 16th and 84th percentiles. $C_\mathrm{flux}$ is the additive flux parameter and $C$ corresponds to the dilution factor that scales the model by \((radius)^2 (distance)^{-2}\), as mentioned in Section \ref{sec:for_mod}.} \label{fig:corner_flat} \end{figure*} In order to verify that the temperature estimates we derived from the flattened spectra are robust, we ran our MCMC fitting code using the \textit{PHOENIX-ACES-DUSTY} grid on the CHARIS spectrum from \cite{currie18}, which spans a much larger range of wavelengths (Figure \ref{fig:charis_2mass}). We adjusted our MCMC code for the CHARIS data by changing the LSF to 7377~km~s$^{-1}$, appropriate for that instrument. We fit all near-infrared bands simultaneously, and also performed a fit using only the $K$-band. For the fit to all the bands simultaneously, we obtained $T_\mathrm{eff} = 2021^{+20}_{-19}$~K, $\log g = 3.64^{+0.18}_{-0.10}$, [M/H] = $0.46^{+0.03}_{-0.07}$, and R = 0.99 $\pm$ 0.02 R$_{Jup}$. When we fit only the $K$-band of the CHARIS spectrum we obtained $T_\mathrm{eff} = 1707^{+147}_{-118}$~K, $\log g = 4.62^{+0.48}_{-0.63}$, [M/H] = $-0.12^{+0.28}_{-0.23}$, and R = 1.4 $\pm$ 0.2 R$_{Jup}$.
The all-wavelength fit is consistent with the results we obtained for our continuum-normalized spectrum fitting of the OSIRIS data, while the $K$-band-only fit is slightly lower in temperature, albeit with large uncertainties. Our fits to the CHARIS data are also consistent with the results obtained in \citet{currie18} and \citet{uyama20} ($T_\mathrm{eff} \approx 1700$--$2000$~K, $\log g \approx 4$--$4.5$, R = 1.3--1.6 R$_{Jup}$). For a more detailed comparison of the OSIRIS continuum to the CHARIS spectrum, we binned our OSIRIS $K$-band spectra to the same sampling as the \cite{currie18} spectra shown in Figure~\ref{fig:charis_2mass}. The spectra were consistent, except that the OSIRIS spectral peak was shifted very slightly towards the red. Two CHARIS data points are less than 1.5-$\sigma$ off from our OSIRIS data, and the rest (4 additional points) are consistent within the error bars. Figure~\ref{fig:charis_2mass} also shows a comparison between our spectrum and the best-matching brown dwarf from the SpeX prism library \citep{Burgasser14} found by \citet{currie18}, 2MASS J01415823$-$4633574 \citep{kirkpatrick06}. This source is a young, early L-type object associated with the Tucana-Horologium association (age $\sim 40$~Myr). Since the match to this brown dwarf is quite good, we also fit its spectrum using the same model grid and our MCMC framework, adjusting the model resolution to match SpeX. We found fully consistent properties, with temperatures of $\sim$2050--2130~K and $\log g \approx 3$--4 for this source. Figure~\ref{fig:bigplot} shows all available spectral and photometric data for $\kappa$ And b. Overplotted on the spectrum is the best-fit \textit{PHOENIX-ACES-DUSTY} model based on the continuum-subtracted OSIRIS spectrum. The model is scaled to match the continuum flux at $K$-band, which in turn is derived using the $K$-band magnitude in \citet[$K_s = 14.37 \pm 0.07$;][]{currie18}. The match to the \citet{currie18} spectrum is quite good, in alignment with the consistent effective temperature we derive from fitting that data set with our models. While the shape of the $J$- and $H$-band spectra is similar to the CHARIS spectrum, the model over-predicts the flux by 2--7-$\sigma$ in the $H$-band and 1--4-$\sigma$ in the $J$-band, and slightly underpredicts the flux by $\sim$1.5-$\sigma$ near 4 $\mu$m. The reason that a similar temperature is derived from an all-band fit to the CHARIS spectrum using our models is that the flux scaling parameter, and thus the radius, is lowered in this case such that the model ``trisects'' the three bands, matching $J$- and $H$-band quite nicely, but then underpredicting the $K$-band flux. \begin{figure*} \epsscale{0.95} \plotone{CHARIS_2MASS_OSIRIS.png} \caption{OSIRIS $K$-band data of $\kappa$ And b compared to \cite{currie18} low-resolution CHARIS data of $\kappa$ And b and their best-matching field source, 2MASS J01415823$-$4633574 from the SpeX Library \citep{kirkpatrick06}. A fit to the SpeX spectrum (not shown) reveals temperatures and gravities consistent with the OSIRIS and CHARIS data on $\kappa$ And b.} \label{fig:charis_2mass} \end{figure*} \begin{figure*} \epsscale{0.95} \plotone{NEWBIGFIG_NB.png} \caption{All available spectral and photometric data for $\kappa$ And b compared to the best-fit \textit{PHOENIX-ACES-DUSTY} model, shown in gray, of $T_\mathrm{eff}$ = 2048 K, $\log g$ = 3.77, and [M/H] = $-0.11$ over the near-infrared. Our OSIRIS data are shown in black.
The \cite{currie18} low-resolution CHARIS spectrum is plotted in dark blue. Photometric data points are taken from \cite{Bonnefoy14}, \cite{Carson13}, \cite{currie18}, and \cite{uyama20}. The model matches the data at $K$-band, but predicts higher flux in $H$- and $J$-band (though the morphology is consistent). The mismatch at the low and high wavelength range is likely due to our use of a DUSTY cloud model.} \label{fig:bigplot} \end{figure*} The mismatch at $J$ and $H$ bands is most likely due to the cloud properties used in our grid. We are using a DUSTY cloud model, which is meant to be a limiting case of a true thick cloud model. Generally, DUSTY models do a reasonable job at matching spectra in this temperature range (2000--2500~K). A slight modification to the cloud properties could result in a general change to the flux at a given band without dramatically impacting the spectral morphology. Given the insensitivity of the continuum-normalized OSIRIS spectrum to clouds, it is encouraging that all fits are returning consistent temperatures in spite of the flux offsets. A recent analysis of the CHARIS data by \citet{uyama20} found a slightly lower temperature using models from \citet{allard12}, \citet{Chabrier00}, and \citet{witte11} (\textit{BT-SETTL}, \textit{BT-DUSTY}, \textit{DRIFT-PHOENIX}). These models make different assumptions about cloud properties from those in our grid. The \textit{BT-SETTL} grids treat clouds with number density and size distribution as a function of depth based on nucleation, gravitational settling, and vertical mixing \citep{allard12}. The \textit{DRIFT-PHOENIX} grids treat clouds by including the effects of nucleation, surface growth, surface evaporation, gravitational settling, convection, and element conservation \citep{witte11}. \citet{uyama20} were able to get very good matches at all wavelengths using these models, with temperatures of 1700--1900~K and $\log g$ between 4 and 5. The range of uncertainties they found encompasses $\sim$2000~K and is close to the range of temperatures we find with \textit{PHOENIX-ACES-DUSTY}. Since our subsequent analysis of the chemical abundances of $\kappa$ And b relies on knowledge of the temperature and gravity, we performed additional modeling to compare these models with our continuum-normalized OSIRIS data. In addition to differences in cloud parameters, each set of models incorporates slightly different assumptions that lead to systematic differences in the output spectra for the same parameters, such as temperature and gravity (e.g., \citealt{oreshenko20}). These systematics are not captured in the formal uncertainties from each MCMC run. We attempt to account for these systematics by looking at the range of values given by the three models. We incorporated both the \textit{BT-SETTL} and \textit{DRIFT-PHOENIX} models into our MCMC analysis code, and fit our OSIRIS spectrum using the same procedure described above. The best fit using \textit{BT-SETTL} yielded $T_\mathrm{eff} = 2128^{+70}_{-73}$ K and $\log g = 4.47^{+0.02}_{-0.06}$. The \textit{DRIFT-PHOENIX} models generally provided poor matches to the higher resolution data, but yielded $T_\mathrm{eff} = 2126^{+104}_{-131}$ K and $\log g = 4.19^{+0.2}_{-0.22}$ as best-fit parameters. We found no fits with \textit{DRIFT-PHOENIX} that properly captured the first drop of the CO bandhead at $\sim$2.29 $\mu$m. We also fit the CHARIS data using our code and these model grids, and found parameters consistent with \citet{uyama20}.
We then looked in detail at the difference between our best fits to the OSIRIS data and these lower temperature models at $R \sim 4000$. The $\chi^2$ of the best fits ($T_\mathrm{eff} = 2100$~K) is significantly better than the $\chi^2$ of a $T_\mathrm{eff} =1700$~K, $\log g = 4$ model, by roughly 5$\sigma$ using either grid. Table \ref{tab:atm_param} shows the results for all atmospheric parameters derived in this work. We use the range of best-fit values from the OSIRIS continuum-normalized data to define the adopted parameters for temperature, gravity, and metallicity, as the resolved line information offers the strongest constraints on those parameters. We adopt a value of $T_\mathrm{eff} = 2050$~K, with a range of 1950--2150~K, $\log g=3.8$, with a range of 3.5--4.5, and [M/H] = $-0.1$, with a range of $-0.2$ to $0.0$. For the radius, we use the median value from the OSIRIS continuum-included data and the CHARIS data to arrive at $R=1.2~R_{Jup}$, with a range of 1.0--1.5~$R_{Jup}$. This yields an implied bolometric luminosity of $\log(L/L_{\odot}) = -3.7$, with a range of $-3.9$ to $-3.5$, consistent with the estimate from \citet{currie18}. While it is possible that lower temperatures could be invoked for $\kappa$ And b, a more detailed analysis including a variation of cloud models will be required to determine whether this is a viable solution that also matches the OSIRIS data. Since our high resolution data are not particularly informative for cloud properties, we leave such analysis to future work. \subsection{Mole Fractions of CO and H$_2$O} With best-fit values for temperature, surface gravity, and metallicity, we can fit for the abundances of CO and H$_2$O in our OSIRIS $K$-band spectra. Once best-fit values were determined for $T_\mathrm{eff}$, $\log g$, and [M/H], we fixed those parameters to generate a grid of spectra with scaled mole fractions of the molecules for the $K$-band \citep{barman15}. Since our best-fit metallicity was slightly subsolar (roughly 80\% of the solar value), we note that the overall abundances of these molecules will be slightly lower than in the Sun, but their unscaled \textit{ratios} will match the solar ones. The molecular abundances of CO, CH$_4$, and H$_2$O were scaled relative to their initial values from 0 to 1000 using a uniform logarithmic sampling, resulting in 25 synthetic spectra. We fit for the mole fraction of H$_2$O first, holding CO and CH$_4$ at their initial values. The fit was restricted to wavelengths shortward of the CO bandhead to avoid biases from overlapping CO lines. Next, the H$_2$O mole fraction was set to its nominal value, and we fit for scaled CO. While in principle we could do the same analysis for CH$_4$, we did not do so because at these temperatures no significant amount of CH$_4$ is expected in our $K$-band spectrum. Figure \ref{fig:chisq_co_h2o} shows the resulting \(\chi^2\) distribution as a function of CO and H$_2$O mole fraction. The models with the lowest \(\chi^2\) when compared to the flattened data gave the best fits for both H$_2$O and CO. The best fit for H$_2$O had a scaling of 1, and the best fit for CO had a scaling of 1.66. To calculate the 1-$\sigma$ uncertainties in each mole fraction value, we used the values from models within $\Delta\chi^2 = 1$ of our lowest \(\chi^2\). Using interpolation along the curves shown in Figure \ref{fig:chisq_co_h2o}, the range of mole fractions encompassed by these uncertainties is 0.599 to 3.24 times the initial mole fraction of CO, and 0.599 to 1.791 times the initial H$_2$O mole fraction. A sketch of this interpolation procedure is given below.
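The following is a minimal sketch of the $\chi^2$ scan and the $\Delta\chi^2 = 1$ interpolation described above, assuming the grid of scaled synthetic spectra (\texttt{model\_grid}), the data, and the uncertainties are already in hand; the sampling bounds are illustrative, and the interpolation assumes a roughly convex $\chi^2$ curve.
\begin{verbatim}
import numpy as np

def chisq(data, model, sigma):
    return np.sum(((data - model) / sigma) ** 2)

scales = np.logspace(-1.5, 1.5, 25)        # illustrative log-uniform scan
chi2 = np.array([chisq(data, spec, sigma) for spec in model_grid])

best = np.argmin(chi2)
target = chi2[best] + 1.0                  # Delta chi^2 = 1 level

# Interpolate the crossing points on each side of the minimum;
# np.interp requires increasing abscissae, hence the reversal below
lo = np.interp(target, chi2[:best + 1][::-1], scales[:best + 1][::-1])
hi = np.interp(target, chi2[best:], scales[best:])
print(f"scale = {scales[best]:.2f}, 1-sigma range [{lo:.2f}, {hi:.2f}]")
\end{verbatim}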
\begin{figure*} \epsscale{0.95} \plotone{CO_H2O_chisq.png} \caption{Results of $T_\mathrm{eff}$ = 2048 K and $\log g$ = 3.77 model fits with varying mole fractions for both H$_2$O and CO to our continuum-subtracted OSIRIS spectrum. The mole fractions are given in logarithmic units relative to the solar ratios, such that a value of zero corresponds to the solar value. The scalings of both CO and H$_2$O prefer values near solar. From these fits we find C/O = 0.70$_{-0.24}^{+0.09}$.} \label{fig:chisq_co_h2o} \end{figure*} \cite{todorov16} derived a water abundance for $\kappa$ And b using spectral retrieval with a one-dimensional plane-parallel atmosphere and a single cloud layer that covers the whole planet. This modeling was done on the low-resolution spectrum from P1640 presented in \cite{Hinkley13}. They derived $\log(n_\mathrm{H_{2}O})$ for four cases that varied in the treatment of molecular species and clouds. In each case, they found consistent values for the mole fraction of water, with $\log(n_\mathrm{H_{2}O}) \sim -3.5$. Our best-matching mole fraction for water is $\log(n_\mathrm{H_{2}O}) \sim -3.7$, which is consistent within the uncertainties of \cite{todorov16}. \subsection{C/O Ratios} Giant planets formed by gravitational instability should have atmospheric element abundances that match their host stars \citep{helled2009}. If giant planets form by a multi-step core accretion process, it has been suggested that a range of elemental abundances is possible \citep{oberg2011,madhu19}. In this scenario, the abundances in the atmospheres of giant planets formed by core/pebble accretion are highly dependent on the location of formation relative to the CO, CO$_2$, and H$_2$O frost lines and on the amount of solids acquired by the planet during the runaway accretion phase. This can be diagnosed using the C/O ratio. The dependence of the C/O ratio on the atmospheric mole fractions ($N$) is \[\frac{\mathrm{C}}{\mathrm{O}}=\frac{N(\mathrm{CH_4})+N(\mathrm{CO})}{N(\mathrm{H_2O})+N(\mathrm{CO})},\] \noindent and for small amounts of CH$_{4}$, as in $\kappa$ And b's case, the C/O ratio can be determined by H$_2$O and CO alone \citep{barman15}. The C/O ratio we derive for $\kappa$ And b is 0.70$_{-0.24}^{+0.09}$. In Figure \ref{fig:co_comparison} we show a visual comparison of three different models with different values of C/O, with our best-fit model in the middle panel. Clearly, the models with low C/O do not produce deep enough lines in the CO bandhead, and the models with C/O near unity make the first drop in the CO bandhead too wide. At lower resolution it would be difficult to distinguish these cases, demonstrating the need for higher spectral resolution to probe such abundance ratios. \begin{figure*} \epsscale{0.95} \plotone{coratio_comp.png} \caption{Visual comparison of three different $T_\mathrm{eff}$ = 2048 K and $\log g$ = 3.77 models with different values of C/O in our scaled mole fraction grid. The best-fit C/O ratio is shown in the central panel, while values of very high (bottom) and very low (top) C/O ratio are clearly disfavored by our data. The relative strengthening or weakening of the primary CO bandhead at $\sim$2.29 $\mu$m is a fairly clear discriminator at R$\sim$4000 that might otherwise be lost at lower spectral resolution.} \label{fig:co_comparison} \end{figure*}
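As a small numerical illustration of the formula above, the mole fractions below are purely illustrative values chosen to give a ratio near the fitted 0.70; they are not the fitted abundances themselves.
\begin{verbatim}
def c_to_o(n_ch4, n_co, n_h2o):
    """C/O ratio from atmospheric mole fractions."""
    return (n_ch4 + n_co) / (n_h2o + n_co)

# CH4 is negligible at these temperatures, so CO and H2O dominate:
print(c_to_o(n_ch4=0.0, n_co=3.5e-4, n_h2o=1.5e-4))  # 0.70
\end{verbatim}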
Due to the remaining uncertainty in the temperature of the planet, we verified that lower temperature model grids with scaled mole fractions return C/O ratios encompassed by our model. We explored a grid with a temperature of $\sim$1900 K and a $\log g\sim$4, scaling the ratios of H$_2$O and CO by the same values as above. We find that the best-matching spectrum at this temperature also gives C/O $\sim$0.70, with similar uncertainties. Given the obvious changes in the spectral morphology expected at high and low C/O ratio, as shown in Figure \ref{fig:co_comparison}, it is not surprising that a small temperature change does not dramatically change the best-fit C/O ratio. Thus we assert that our uncertainties properly capture our current knowledge of the C/O ratio for $\kappa$ And b. \section{Kinematic Modeling}\label{sec:rvs} \subsection{Radial Velocity Measurement} Radial velocity measurements can be used to help determine the orientation of the planet's orbital plane. We measure the radial velocity of $\kappa$ And b following a method similar to the one described in \citet{ruffio19}. A significant limitation of \citet{ruffio19} is that the atmospheric transmission is calculated using A0 star calibrators, which assumes that the tellurics do not change during the course of a night. This assumption is not valid for the $\kappa$ And b data presented in this work, as discussed in Section \ref{sec:data_red}. We therefore improved upon that method to correct for the biases due to the variability of the telluric lines relative to the calibrator. A common way to address such systematics in high-resolution spectroscopic data is to use a principal component analysis (PCA)-based approach to subtract the correlated residuals in the data \citep{Hoeijmakers2018,PetitditdelaRoche2018,WangJi2018}. However, this approach can lead to over-subtraction of the planet signal and therefore also bias the final estimates. For example, water lines from the companion can be inadvertently removed because telluric water lines appear in the PCA modes. The over-subtraction can be mitigated by jointly fitting for the planet signal and the PCA modes, which is possible in the framework presented in \citet{ruffio19}. The original data model is
\begin{equation}
{\bm d} = {\bm M}_1{\bm \phi}_1 + {\bm n}.
\end{equation}
The data ${\bm d}$ is a vector including the pixel values of a spectral cube stamp centered at the location of interest. The data vector has $N_{\bm d}=5\times5\times N_\lambda$ elements corresponding to a $5\times5$ spaxel stamp in the spatial dimensions and $N_\lambda$ spectral channels (e.g., $N_\lambda = 1665$ in $K$-band). The matrix ${\bm M}_1$ includes a model of the companion and the spurious starlight. It is defined as ${\bm M}_{1}=[{\bm c}_{\mathrm{0,planet}},{\bm c}_1,\dots,{\bm c}_{25}]$, where the ${\bm c}_i$ are column vectors with the same size as the data vector ${\bm d}$. The companion model ${\bm c}_{\mathrm{0,planet}}$ is also a function of the RV of the companion. The linear parameters of the model are included in ${\bm \phi}_1$ and the noise is represented by the random vector ${\bm n}$. A spectrum of the planet can be extracted at any location in the image by first subtracting a fit of the null hypothesis (i.e., ${\bm M}_{0}=[{\bm c}_1,\dots,{\bm c}_{25}]$) from the data, ${\bm r}_{xy} = {\bm d} - {\bm M}_{0}{\bm \phi}_0$, and then fitting the companion PSF at each spectral channel to the residual stamp spectral cube. We perform this operation at each location in the field of view and divide the resulting residual spectra by their local low-pass filtered data spectrum.
This results in a residual vector ${\bm r}_{xy}$, normalized to the continuum, for each spaxel in the field of view. After masking the spaxels surrounding the true position of the companion, a PCA of all the ${\bm r}_{xy}$ for a given exposure defines a basis of the residual systematics in the data. These principal components can be used to correct the model of the data. Before they can be included in the data model, each principal component needs to be rescaled to the local continuum, which is done by multiplying it by the low-pass filtered data at the location of interest. Finally, these 1D spectra are applied to the 3D PSF to provide column vectors that can be used in the model matrix. We denote these column vectors $\{ {\bm r}_{\mathrm{pc}1},{\bm r}_{\mathrm{pc}2},\dots\}$, ordered by decreasing eigenvalues. A new data model ${\bm M}_2$ including the first $K$ principal components is defined as
\begin{equation}
{\bm M}_2 = [{\bm c}_{\mathrm{0,planet}},{\bm c}_1,\dots,{\bm c}_{25},{\bm r}_{\mathrm{pc}1},\dots,{\bm r}_{\mathrm{pc}K}].
\label{eq:modelwithPCA}
\end{equation}
We define a new vector of linear parameters ${\bm \phi}_2$ including $K$ more elements than ${\bm \phi}_1$. The advantage of this approach is that the PCA modes are jointly fit with the star and companion models, preventing over-subtraction. Additionally, the general form of the linear model is unchanged, which implies that the radial velocity estimation is otherwise identical to \citet{ruffio19}. Figure \ref{fig:kapAndbRV} shows the RV estimates for each exposure as a function of the number of principal components used in the model. The final RV converges from $-11.9\pm0.4\,\mathrm{km}\,\mathrm{s}^{-1}$ to $-13.9\pm0.4\,\mathrm{km}\,\mathrm{s}^{-1}$ as the number of modes increases, suggesting a $2\,\mathrm{km}\,\mathrm{s}^{-1}$ bias in the original model. In order to increase our confidence in the robustness of the RV estimate and its uncertainty, we calculate the final RV and uncertainty after binning the data in pairs to account for possible correlations between exposures. Each pair of measurements is replaced by its mean value and the larger of the two uncertainties. We note that the reduced $\chi^2$ is lower than unity, which suggests that the final uncertainty is not underestimated. \begin{figure*} \centering \includegraphics[width=1\linewidth]{RV_kap_And_measurements.pdf} \caption{Radial velocity (RV) measurements of $\kappa$ And b by individual exposure and epoch of observation. The grey region represents the current uncertainty in the RV of the star. The RVs are shown for different numbers of principal components (none, 1, 5, and 10, respectively) included in the data model. The weighted mean RVs (solid horizontal lines) converge as the number of principal components increases. The final RV values and uncertainties are available in Table \ref{tab:RVs}.} \label{fig:kapAndbRV} \end{figure*} Additionally, we perform a simulated companion injection and recovery at each location in the field of view to estimate possible residual biases in the data. The corrected RV estimates are shown in Figure \ref{fig:kapAndbRV_fakes_corrected} and prove to be consistent with the results from Figure \ref{fig:kapAndbRV}. Table \ref{tab:RVs} summarizes the RV estimates, uncertainties, and $\chi_r^2$ for the different cases presented previously. The uncertainties are inflated when $\chi_r^2$ is greater than unity.
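For a fixed RV, the joint fit of Equation \ref{eq:modelwithPCA} reduces to ordinary linear least squares; a schematic of the resulting RV grid search is sketched below. The column-building inputs \texttt{planet\_model}, \texttt{star\_cols}, and \texttt{pca\_cols} are hypothetical stand-ins, and the noise covariance is ignored for simplicity.
\begin{verbatim}
import numpy as np

def fit_rv(d, rv_grid, planet_model, star_cols, pca_cols):
    """Grid search over RV, jointly fitting planet, starlight, and
    rescaled PCA modes with linear least squares.

    d            : flattened data stamp, shape (N,)
    planet_model : callable rv -> companion column, shape (N,)
    star_cols    : starlight columns, shape (N, 25)
    pca_cols     : rescaled principal components, shape (N, K)
    """
    chi2 = np.empty(len(rv_grid))
    for i, rv in enumerate(rv_grid):
        M2 = np.column_stack([planet_model(rv), star_cols, pca_cols])
        phi2, *_ = np.linalg.lstsq(M2, d, rcond=None)
        chi2[i] = np.sum((d - M2 @ phi2) ** 2)
    return rv_grid[np.argmin(chi2)]
\end{verbatim}
Because the PCA modes are fit simultaneously with the companion column, any companion signal they contain is balanced against the planet model rather than silently subtracted.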
\begin{figure*} \centering \includegraphics[width=1\linewidth]{RV_kap_And_measurements_fakes.pdf} \caption{Same as Figure \ref{fig:kapAndbRV}, but corrected for biases using simulated planet injection and recovery. The final RV values and uncertainties are available in Table \ref{tab:RVs}.} \label{fig:kapAndbRV_fakes_corrected} \end{figure*} \begin{deluxetable*}{@{\extracolsep{4pt}}c|lr|lr|lrr|lrr} \tablewidth{0pc} \tabletypesize{\scriptsize} \label{tab:RVs} \tablecaption{$\kappa$ And b RV estimates summary.} \tablehead{ \colhead{}& \multicolumn{2}{c}{Independent}& \multicolumn{2}{c}{Binned}& \multicolumn{3}{c}{Independent + injection \& recovery}& \multicolumn{3}{c}{Binned + injection \& recovery} \\ \cline{2-3} \cline{4-5} \cline{6-8} \cline{9-11} \colhead{\# PCs} & \colhead{RV} & \colhead{$\chi^2_r$} & \colhead{RV} & \colhead{$\chi^2_r$} & \colhead{RV} & \colhead{$\chi^2_r$} & \colhead{Offset} & \colhead{RV} & \colhead{$\chi^2_r$} & \colhead{Offset} \\ \colhead{} & \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} & \colhead{} & \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} & \colhead{} & \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} & \colhead{} & \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} & \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} & \colhead{}& \colhead{($\mathrm{km}\,\mathrm{s}^{-1}$)} } \startdata None & $-11.9\pm0.4\tablenotemark{a}$ & $1.8$ & $-11.9\pm0.3\tablenotemark{a}$ & $1.1$ & $-13.2\pm0.3\tablenotemark{a}$ & $1.6$ & $-1.28$ & $-13.1\pm0.3$ & $0.8$ & $-1.28$ \\ 1 & $-13.2\pm0.3\tablenotemark{a}$ & $1.5$ & $-13.1\pm0.3$ & $0.8$ & $-13.3\pm0.4\tablenotemark{a}$ & $1.6$ & $-0.14$ & $-13.3\pm0.3$ & $0.9$ & $-0.15$ \\ 5 & $-13.8\pm0.3\tablenotemark{a}$ & $1.4$ & $-13.8\pm0.3$ & $0.8$ & $-14.0\pm0.3\tablenotemark{a}$ & $1.6$ & $-0.16$ & $-14.0\pm0.3$ & $0.9$ & $-0.17$ \\ 10 & $-13.9\pm0.3\tablenotemark{a}$ & $1.4$ & $-13.9\pm0.3$ & $0.8$ & $-14.1\pm0.4\tablenotemark{a}$ & $1.6$ & $-0.19$ & $-14.1\pm0.3$ & $1$ & $-0.19$ \enddata \tablenotetext{-}{(Columns 2--3) RVs calculated using a data model that includes principal components as defined in Equation \ref{eq:modelwithPCA}. The final RVs were calculated with a weighted mean assuming that each individual exposure is independent. (Columns 4--5) Same as columns 2--3, but pairs of consecutive exposures were averaged and the larger of their uncertainties used. (Columns 6--7) RVs are corrected for biases using simulated planet injection and recovery. The resulting offset on the final RV with and without the injection and recovery is given in column 8. (Columns 9--11) Same as columns 6--8, but combining consecutive pairs of exposures.} \tablenotetext{a}{Uncertainties have been inflated by $\chi^2_r$ when $\chi^2_r$ is greater than unity.} \end{deluxetable*} We conclude that the RV of $\kappa$ And b is $-14.1\pm0.4\,\mathrm{km}\,\mathrm{s}^{-1}$ (cf.\ Table \ref{tab:RVsummary}), while the estimates for the RV of the star are $-12.7\pm0.8\,\mathrm{km}\,\mathrm{s}^{-1}$ \citep{Gontcharov2006} and $-11.87\pm1.53\,\mathrm{km}\,\mathrm{s}^{-1}$ \citep{Becker2015}. These values are consistent within the uncertainties; we use $-12.7\pm0.8\,\mathrm{km}\,\mathrm{s}^{-1}$ in what follows because its uncertainty is smaller. The relative RV between the companion and the star is $-1.4\pm0.9\,\mathrm{km}\,\mathrm{s}^{-1}$, for which the error is dominated by the stellar RV. As in \citet{ruffio19}, this highlights the need to better constrain the stellar RVs of stars hosting directly imaged companions.
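The quoted relative RV and its uncertainty follow from straightforward quadrature error propagation, e.g.:
\begin{verbatim}
import numpy as np

rv_b,    err_b    = -14.1, 0.4   # companion RV (this work), km/s
rv_star, err_star = -12.7, 0.8   # stellar RV (Gontcharov 2006), km/s

rel_rv  = rv_b - rv_star              # -1.4 km/s
rel_err = np.hypot(err_b, err_star)   # ~0.9 km/s, dominated by the star
\end{verbatim}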
\begin{deluxetable}{lc} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablecaption{Final RVs for $\kappa$ Andromedae b.} \label{tab:RVsummary} \tablehead{ \colhead{Date} & \colhead{RV ($\mathrm{km}\,\mathrm{s}^{-1}$)} } \startdata 2016 Nov 6--8 & $-14.3\pm0.4$ \\ 2017 Nov 4 & $-13.6\pm0.6$ \enddata \end{deluxetable} \pagebreak \subsection{Orbital Analysis} The orbit of $\kappa$ And b has been explored using astrometry by several authors (\citealt{Blunt17,currie18,uyama20,Bowler19}). Though the orbit is highly under-constrained in terms of phase coverage, current fits to astrometry have yielded some constraints on the orbital orientation and eccentricity of the companion. In particular, the eccentricity is currently estimated to be fairly high ($>$0.7). The measurement of an RV for the companion with our OSIRIS data offers a valuable new piece of information, with which degeneracies in the orbital orientation can be resolved. To determine the constraints provided by the RV measurement, we performed a series of orbit fits with both astrometry from the literature (\citealt{Carson13,Bonnefoy14,currie18,uyama20}) and our OSIRIS RV, using the code described in \citet{Kosmo-Oneil19}. Specifically, we use the Efit5 code \citep{meyer12}, which uses MULTINEST to perform a Bayesian analysis of our data (e.g., \citealt{feroz09}), and we use two different priors. We first use the typical flat priors in the orbital parameters, including period (P), eccentricity (e), time of periastron passage (T$_0$), inclination (flat in $\sin i$), longitude of the ascending node ($\Omega$ or O), and longitude of periastron passage ($\omega$ or w). We also use the observationally-based priors derived in \citet{Kosmo-Oneil19}. Although we believe the latter are more appropriate in this case due to the biases introduced by flat priors for under-constrained orbits, we include both for completeness. We performed fits both with and without the RV point derived above to determine the impact of including the RV. We fix the distance to 50.0 $\pm$ 0.1 pc \citep{gaia18}, and the mass to 2.7 $\pm$ 0.1 M$_{\odot}$ as estimated by \citet{jones16}, which encompasses the range of values they found given the uncertainty in the internal metallicity of $\kappa$ And A. The results of these fits are given in Table \ref{tab:orbit} and shown visually in corner plots in Figures \ref{fig:corner_obs_rv}--\ref{fig:corner_flat_no_rv}. The addition of the RV constrains $\Omega$ to most likely be $\sim$85--90$^{\circ}$, although due to the large uncertainty in the RV the secondary peak is not completely ruled out. Additionally, the RV pushes the distribution of eccentricities slightly higher than the astrometry alone, with global minima $>$0.8, although the uncertainties encompass the previous values. Figures \ref{fig:orbit_model_obs} and \ref{fig:orbit_model_flat} demonstrate the impact of the RV on the best-fit orbits. Although the best fits are not strictly meaningful due to undersampling of the period, there are clear differences in the orbit predictions when the RV is included: the best fit with astrometry alone favors RVs closer to 0 km s$^{-1}$. \begin{figure*} \centering \includegraphics[width=1\linewidth]{corner_obs_rv_9jan2020.png} \caption{Corner plot showing the results of fitting the orbit of $\kappa$ And b, including both astrometry and radial velocities. In this case, we use the observationally-based prior presented in \citet{Kosmo-Oneil19}, which can help account for biases in parameters like T$_0$ that arise in undersampled orbits.
Note that this prior does increase the range of T$_0$ included in our 1$\sigma$ uncertainties more than is seen when flat priors in the orbital parameters are used (e.g., Figure \ref{fig:corner_flat_rv}). The allowed parameter space for $\omega$ also shrinks considerably when RV is included (see for comparison Figure \ref{fig:corner_obs_no_rv}).} \label{fig:corner_obs_rv} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{corner_obs_no_rv_9jan2020.png} \caption{The same as Figure \ref{fig:corner_obs_rv}, but with no RV included in the fit. The results give values for orbital parameters consistent with previous fits found in the literature. A secondary peak in $\Omega$ can be seen more prominently here around $\sim$270$^{\circ}$ that is nearly absent in Figure \ref{fig:corner_obs_rv}. The addition of the RV eliminates this degeneracy in the orbit plane orientation.} \label{fig:corner_obs_no_rv} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{corner_flat_rv_9jan2020.png} \caption{Corner plot showing the results of fitting the orbit of $\kappa$ And b, including both astrometry and radial velocities. In this case, we use the typical flat priors in fit parameter space for easier comparison to previous work. Note that the use of these priors leads to a highly peaked prediction for T$_0$. Since velocity changes rapidly at this orbital phase, we will be able to test whether this prediction holds true in the next few years (Figure \ref{fig:orbit_model_flat}).} \label{fig:corner_flat_rv} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{corner_flat_no_rv_9jan2020.png} \caption{The same as Figure \ref{fig:corner_flat_rv}, but with no RV included in the fit. The results give values for orbital parameters consistent with previous fits found in the literature. Again, a secondary peak in $\Omega$ can be seen more prominently here around $\sim$270$^{\circ}$ that is nearly absent in Figure \ref{fig:corner_flat_rv}. The addition of the RV eliminates this degeneracy in the orbit plane orientation (regardless of prior choice).} \label{fig:corner_flat_no_rv} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{model_kelly_prior.png} \caption{Best-fit orbits with (red) or without (blue) inclusion of the OSIRIS RV using the observationally-based prior. Because of the large parameter space allowed by the astrometry, the best fits are shown here only for illustrative purposes. The left panel shows the orbits on the plane of the sky, while the right panel demonstrates the variation in the relative RV of the planet with time. Including the RV increases the preferred eccentricity of the best-fit solution, though the astrometry drives solutions to high eccentricities regardless. Based on the right-hand panel, the RVs clearly have more diagnostic power in the next several years than the astrometry. If the planet is indeed approaching periastron passage, a rapid decrease in the relative radial velocity is predicted.} \label{fig:orbit_model_obs} \end{figure*} \begin{figure*} \centering \includegraphics[width=1\linewidth]{model_normal_prior.png} \caption{The same as Figure \ref{fig:orbit_model_obs}, but using flat orbit parameter priors.
Either choice of prior yields preferred orbit solutions that approach periastron in the next several years, stressing the utility of more RVs.} \label{fig:orbit_model_flat} \end{figure*} The current prediction (whether RVs are included or not) is that $\kappa$ And b is on its way towards closest approach in the next 20--30 years. It is possible this prediction is impacted by systematics in the astrometric dataset, which is drawn from multiple different cameras and reduction pipelines. Indeed, the observational prior is meant to account for this known bias in T$_0$, and using it pushes the prediction of periastron later by about 10 years (Table \ref{tab:orbit}). If it is the case that the planet is heading towards closest approach, the predicted change in RV in the next several years is significant and thus can be easily confirmed with more data of similar quality in the next decade. Thus spectroscopy has the potential to provide much more stringent constraints on the orbit in the near term than additional astrometric measurements. \begin{turnpage} \begin{deluxetable}{@{\extracolsep{1pt}}l|cccccc|cccccc|cccccc} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablecaption{Derived orbit parameters for $\kappa$ And b. \label{tab:orbit}} \tablehead{ \colhead{}& \multicolumn{6}{c}{Global Minimum}& \multicolumn{6}{c}{Mean}& \multicolumn{6}{c}{1$\sigma$ Range}\\ \cline{2-7} \cline{8-13} \cline{14-19} \colhead{Fit Type} & \colhead{P} & \colhead{ecc} & \colhead{T$_0$} & \colhead{inc.} & \colhead{$\omega$} & \colhead{$\Omega$} & \colhead{P} & \colhead{ecc} & \colhead{T$_0$} & \colhead{inc.} & \colhead{$\omega$} & \colhead{$\Omega$}& \colhead{P} & \colhead{ecc} & \colhead{T$_0$} & \colhead{inc.} & \colhead{$\omega$} & \colhead{$\Omega$} \\ \colhead{} & \colhead{(yr)} & \colhead{} & \colhead{(yr)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(yr)} & \colhead{} & \colhead{(yr)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(yr)} & \colhead{} & \colhead{(yr)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(deg)} } \startdata Flat Priors & 398.0 & 0.83 & 2042.5 & 152.1 & 186.4 & 104.4 & 428.0 & 0.83 & 2041.7 & 155.9 & 184.1 & 107.1 & 293.0 - & 0.80 - & 2039.7 - & 132.0 - & 158.6 - & 91.5 - \\ Astrometry+RV & & & & & & & & & & & & & 925.6 & 0.86 & 2050.1 & 171.7 & 236.9 & 163.6 \\ \hline Flat Priors & 291.42 & 0.79 & 2040.85 & 156.2 & 68.5 & 149.6 & 576.1 & 0.83 & 2040.8 & 150.3 & 127.7 & 97.1 & 311.4 - & 0.75 - & 2039.5 - & 127.9 - & 56.4 - & 53.8 - \\ Astrometry Only & & & & & & & & & & & & & 1088.24 & 0.89 & 2046.2 & 167.6 & 219.7 & 294.5 \\ \hline Obs. Priors & 576.6 & 0.78 & 2051.7 & 129.3 & 172.5 & 92.5 & 523.0 & 0.80 & 2049.9 & 132.0 & 175.9 & 94.1 & 293.0 - & 0.72 - & 2041.4 - & 119.9 - & 158.4 - & 86.4 - \\ Astrometry+RV & & & & & & & & & & & & & 975.2 & 0.85 & 2060.4 & 113.1 & 200.2 & 163.6 \\ \hline Obs. Priors & 380.4 & 0.66 & 2050.8 & 127.3 & 150.7 & 78.1 & 468.7 & 0.78 & 2045.6 & 133.5 & 141.9 & 85.0 & 287.2 - & 0.64 - & 2040.3 - & 118.8 - & 69.9 - & 65.0 - \\ Astrometry Only & & & & & & & & & & & & & 884.0 & 0.87 & 2056.2 & 153.7 & 183.9 & 272.9 \\ \enddata \end{deluxetable} \clearpage \end{turnpage} \section{Discussion and Conclusions}\label{sec:discussion} Using moderate-resolution spectroscopy, we have greatly expanded our knowledge of the low-mass, directly imaged companion, $\kappa$ And b. In recent years, most studies of the $\kappa$ And system have led to the conclusion that it is young, as originally predicted by \cite{Carson13}.
Our derivation of a low surface gravity ($\log g < 4.5$) using our OSIRIS spectrum is another piece of evidence in favor of a young age. If we consider the age range adopted by \cite{jones16} of 47$^{+27}_{-40}$ Myr, the predicted mass for a $\sim$2050 K object ranges from $\sim$10--30 $M_\mathrm{Jup}$ (e.g., \citealt{baraffe15}). We note that our best-fit surface gravity of 3.8 is too low to be consistent with evolutionary models for this mass range, which predict $\log g \approx$ 4--4.7. However, our uncertainties allow for gravities up to $\log g \sim 4.5$. The OSIRIS data do not favor $\log g$ greater than 4.5, which argues for an age less than $\sim$50 Myr. Our derived radius is also on the low end of what is allowed by evolutionary models, which predict R = 1.3 R$_{Jup}$ for older, more massive objects through R = 1.8 R$_{Jup}$ for younger, lower mass objects. Our uncertainties again are sufficient to encompass this range. The implied bolometric luminosity is consistent with \citet{currie18}, who note that it is similar to other young, substellar objects. Future photometry, spectra, or modeling that tightens the temperature, cloud properties, and radius could yield stronger constraints on the mass of $\kappa$ And b. $\kappa$ And b is an excellent candidate for moderate-resolution spectroscopy at shorter wavelengths to look for lines from higher atomic number species beyond carbon and oxygen. With future measurements of highly gravity-sensitive lines, like potassium in the $J$-band if detectable, stronger limits can be placed spectroscopically on $\log g$, which will provide a more robust age. Further mass constraints could also come from astrometric measurements with \textit{Gaia} or additional radial velocity measurements, including velocities for the star, although the precision of such RVs may be limited. Given the size and separation of $\kappa$ And b, its formation pathway is of considerable interest. Our measurement of C/O here provides one possible diagnostic of formation. We note that a number of recent works have demonstrated that the C/O ratio is impacted by a variety of phenomena beyond formation location in the disk. These include the grain size distribution \citep{piso15}, migration of grains or pebbles \citep{booth17}, migration of the planets themselves \citep{cridland20}, and whether the accreted material is from the midplane (\citealt{morbidelli14,batygin18}). Current studies are therefore incorporating more chemical and physical processes into models to get a better idea of what impacts the C/O ratio and what exactly the ratio tells us about formation. With these studies in mind, we turn to the C/O ratio we have measured for $\kappa$ And b. Although our current uncertainties allow for somewhat elevated C/O ratios, the most likely scenario is that C/O is roughly consistent with the solar value. This result diagnostically points to a very rapid formation process, potentially through either gravitational instability or common gravitational collapse similar to a binary star system. The complication, however, is that the comparison must be made to the host star in order to draw definitive conclusions about formation. The C/O ratio of the host star, $\kappa$ And A, has not been measured or reported in the literature. For a late B-type star, probing these abundances is challenging, although certainly possible \citep{takeda16}.
However, the rapid rotation of $\kappa$ And A ($\sim$162 km s$^{-1}$; \citealt{royer07}) may make abundance determinations difficult. High resolution optical spectroscopy of the star would be able to probe potential diagnostic lines, such as the O~I triplet at 7771~\AA. Until individual abundance estimates for C and O are available, however, we can only conclude that the evidence points to roughly similar values for the host star and the companion if the star has abundances similar to the Sun. In terms of overall metallicity, the [Fe/H] abundance of $\kappa$ And A was estimated by \cite{wu11} to be subsolar, [M/H] = $-0.32 \pm$ 0.15. However, \citet{jones16} argue this is unlikely to be the true internal metallicity of the star, instead adopting a roughly solar abundance range of [M/H] = 0.00$\pm$0.14 based on the range of metallicities in nearby open clusters. Interestingly, our slightly subsolar best-fit metallicity for $\kappa$ And b may suggest that the star is indeed metal poor overall. A number of theoretical works have suggested that formation via gravitational instability would preferentially occur around low metallicity stars. Metal poor gas allows for shorter cooling timescales, allowing planets to quickly acquire sufficient density to avoid shearing (e.g., \citealt{boss02,cai06,helled11}). Since metals are difficult to measure in high-mass hosts like $\kappa$ And A, direct metallicity measurements of the planets themselves could provide insight into the values for the host star. We note that the derived abundances could be impacted by non-equilibrium chemistry effects in the $K$-band, and measuring atomic abundances can mitigate this issue and may be preferable (e.g., \citealt{nikolov18}). Additional metallicity measurements for directly imaged planets will also help probe the intriguing trend that the correlation of planet occurrence and metallicity breaks down at $\sim$4 M$_\mathrm{Jup}$ \citep{santos17}. The apparently low metallicity of $\kappa$ And b is certainly consistent with this finding. $\kappa$ And b now represents a fourth case of a directly imaged planet, in addition to three of the HR 8799 planets \citep{Konopacky13,barman15,molliere20}, where the C/O ratio formation diagnostic did not reveal ratios that clearly point to formation via core/pebble accretion. The scenario certainly cannot be ruled out given the uncertainties in the data and the range of possible C/O ratios predicted by models (e.g., \citealt{madhu19}). Because of this uncertainty, other probes of formation will be needed to shed additional light on this fascinating population of companions. That includes the suggestion that the high eccentricity of $\kappa$ And b is a result of scattering with another planetary-mass object. Our results cannot shed light on potential formation closer to the star using C/O as a diagnostic until we can improve our uncertainties. Since the C/O ratio is largely a function of the amount of solids incorporated into the atmosphere, it is possible that the large masses of these planets simply imply that they very efficiently and rapidly accreted their envelopes. This could have included enough solid pollution in the envelope to return the C/O ratio to the original value. Indeed, there are pebble accretion scenarios proposed in which it is possible to achieve slightly superstellar C/H and C/O, but stellar O/H, ratios via significant accretion of large, metal-rich grains \citep{booth17}, which is consistent with our results for $\kappa$ And b.
The next steps for the $\kappa$ And system will be confirmation of the currently favored high-eccentricity solutions using more RVs, and continued astrometric monitoring with consistent instrumentation to limit astrometric systematics. The strong CO lines and favorable contrast make $\kappa$ And b an excellent candidate for high-resolution, AO-fed spectroscopy with instruments like KPIC on Keck, IRD on Subaru, or CRIRES on the VLT (e.g., \citealt{snellen14,WangJi2018}). We can also determine whether the bulk population of directly imaged planets shows C/O ratios consistent with solar/stellar values by continuing to obtain moderate- or high-resolution spectra of these companions. If the population of directly imaged planets shows C/O distinct from what has been seen with closer-in giant planets probed via transmission spectroscopy, this could point to distinct formation pathways for these sets of objects. \acknowledgements We would like to thank the referee, Joe Carson, for reviewing this work, and Marshall Perrin and Justin Otor for helpful conversations relating to this work. K. K. W. would also like to thank Thea Kozakis and Laura Stevens for discovering $\kappa$ And b. J.-B. R. acknowledges support from the David \& Ellen Lee Prize Postdoctoral Fellowship. Work conducted by Laci Brock and Travis Barman was supported by the National Science Foundation under Award No. 1405504. Support for this work was provided by NASA through the NASA Hubble Fellowship grant HST-HF2-51447.001-A awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. Material presented in this work is supported by the National Aeronautics and Space Administration under Grants/Contracts/Agreements No. NNX17AB63G and NNX15AD95G issued through the Astrophysics Division of the Science Mission Directorate. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Aeronautics and Space Administration. The data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
\section{Introduction} Choquet's theory of integrability (as described by Denneberg \cite{Denn} and Wang and Klir \cite{WK}) leads to a new class of nonlinear operators, called in \cite{GN2020b} \emph{Choquet operators} because they are defined by a mix of conditions representative of Choquet's integral. The technical definition is detailed as follows. Given a Hausdorff topological space $X,$ we will denote by $\mathcal{F}(X)$ the vector lattice of all real-valued functions defined on $X$, endowed with the pointwise ordering. Two important vector sublattices of it are \[ C(X)=\left\{ f\in\mathcal{F}(X):\text{ }f\text{ continuous}\right\} \] and \[ C_{b}(X)=\left\{ f\in\mathcal{F}(X):\text{ }f\text{ continuous and bounded}\right\} . \] With respect to the sup norm, $C_{b}(X)$ becomes a Banach lattice. See the next section for details concerning ordered Banach spaces. As is well known, all norms on the $N$-dimensional real vector space $\mathbb{R}^{N}$ are equivalent. See Bhatia \cite{Bhatia2009}, Theorem 13, p. 16. When endowed with the sup norm and the coordinate-wise ordering, $\mathbb{R}^{N}$ can be identified (algebraically, isometrically and in order) with the space $C\left( \left\{ 1,...,N\right\} \right) $, where $\left\{ 1,...,N\right\} $ carries the discrete topology. Suppose that $X$ and $Y$ are two Hausdorff topological spaces and $E$ and $F$ are respectively ordered vector subspaces of $\mathcal{F}(X)$ and $\mathcal{F}(Y).$ An operator $T:E\rightarrow F$ is said to be a \emph{Choquet operator }(respectively a\emph{ Choquet functional when }$F=\mathbb{R}$) if it satisfies the following three conditions: \begin{enumerate} \item[(Ch1)] (\emph{Sublinearity}) $T$ is subadditive and positively homogeneous, that is \[ T(f+g)\leq T(f)+T(g)\quad\text{and}\quad T(af)=aT(f) \] for all $f,g$ in $E$ and $a\geq0;$ \item[(Ch2)] (\emph{Comonotonic additivity}) $T(f+g)=T(f)+T(g)$ whenever the functions $f,g\in E$ are comonotone in the sense that \[ (f(s)-f(t))\cdot(g(s)-g(t))\geq0\text{ for all }s,t\in X; \] \item[(Ch3)] (\emph{Monotonicity}) $f\leq g$ in $E$ implies $T(f)\leq T(g).$ \end{enumerate} The linear Choquet operators acting on ordered Banach spaces are nothing but the linear and positive operators acting on these spaces; see Corollary 1. While they are omnipresent in the various fields of mathematics, the nonlinear Choquet operators are less visible, their study beginning with the seminal papers of Schmeidler \cite{Schm86}, \cite{Schm89} in the 80's. An important step forward was made through the contributions of Zhou \cite{Zhou}, Marinacci and Montrucchio \cite{MM2004} and Cerreia-Vioglio, Maccheroni, Marinacci and Montrucchio \cite{CMMM2012}, \cite{CMMM2015}, which led to the study of vector-valued Choquet operators in their own right. See \cite{GN2020a} and \cite{GN2020b}. Interestingly, the condition of comonotonic additivity (the substitute for additivity) lies at the core of many results in real analysis. Indeed, its meaning in the context of real numbers can be easily understood by identifying each real number $x$ with the affine function $\alpha _{x}(t)=tx$, $t\in\mathbb{R}.$ As a consequence, two real numbers $x$ and $y$ are comonotone if and only if the functions $\alpha_{x}$ and $\alpha_{y}$ are comonotone, equivalently, if either both $x$ and $y$ are nonnegative or both are nonpositive. This yields the simplest example of a Choquet functional from $\mathbb{R}$~into itself which is not linear, the function $x\rightarrow x^{+}.$ A small numerical illustration of the conditions (Ch1)--(Ch3) is sketched below.
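The following sketch (given only as an illustration; it is not part of the theory developed below) computes the discrete Choquet integral of a nonnegative function on a finite set with respect to a capacity, and checks numerically that comonotonic additivity holds for a comonotone pair but fails for a non-comonotone one; the capacity is chosen arbitrarily.
\begin{verbatim}
import numpy as np

def choquet(f, mu):
    """Discrete Choquet integral of a nonnegative vector f with respect
    to a capacity mu (a dict mapping frozensets to nonnegative reals)."""
    n = len(f)
    total, prev = 0.0, 0.0
    for i in np.argsort(f):                 # ascending values of f
        level = frozenset(j for j in range(n) if f[j] >= f[i])
        total += (f[i] - prev) * mu[level]  # layer-cake summation
        prev = f[i]
    return total

# A (non-additive) capacity on the two-point set {0, 1}
mu = {frozenset(): 0.0, frozenset({0}): 0.4,
      frozenset({1}): 0.4, frozenset({0, 1}): 1.0}

f, g = np.array([1.0, 2.0]), np.array([3.0, 5.0])   # comonotone pair
assert np.isclose(choquet(f + g, mu),
                  choquet(f, mu) + choquet(g, mu))  # (Ch2) holds

h, k = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # not comonotone
print(choquet(h + k, mu), choquet(h, mu) + choquet(k, mu))  # 1.0 vs 0.8
\end{verbatim}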
At the same time one can indicate a large family of nonlinear Choquet operators from $C([-1,1])$ into an arbitrary ordered Banach space $E,$ \[ T_{\varphi,U}(f)=U\left( \int_{-1}^{1}f^{+}(tx)\varphi(x)\mathrm{d}x\right) , \] where $\varphi\in C([-1,1])$ is any nonnegative function such that $\varphi(0)=0$ and $U:C([-1,1])\rightarrow E$ is any monotonic linear operator. Building on previous work by Zhou \cite{Zhou}, Cerreia-Vioglio, Maccheroni, Marinacci and Montrucchio \cite{CMMM2012} proved that a larger class of functionals defined on a space $C(X)$ (where $X$ is a compact Hausdorff space) admits a Choquet analogue of the Riesz representation theorem. The aim of our paper is to further extend their results to the case of operators by developing a Choquet-Bochner theory of integration relative to monotone set functions taking values in ordered Banach spaces. Section 2 is devoted to a quick review of some basic facts from the theory of ordered Banach spaces. While the particular case of Banach lattices is nicely covered by a series of textbooks such as those by Meyer-Nieberg \cite{MN} and Schaefer \cite{Sch1974}, the general theory of ordered Banach spaces is still waiting to become the subject of an authoritative book. In Section 3 we develop the theory of the Choquet-Bochner integral associated to a vector capacity (that is, to a monotone set function $\boldsymbol{\mu}$ taking values in an ordered Banach space such that $\boldsymbol{\mu}(\emptyset)=0).$ As is shown in Theorem 1, this integral has all the nice features of the Choquet integral: monotonicity, positive homogeneity and comonotonic additivity. The transfer of properties from vector capacities to their integrals also works in a number of important cases such as upper/lower continuity and submodularity. See Theorem \ref{thmtransfer}. In the case of submodular vector capacities with values in a Banach lattice, the integral analogue of the modulus inequality also holds. See Theorem \ref{subadthm}. Section 4 deals with the integral representation of the Choquet operators defined on spaces $C(X)$ ($X$ being compact and Hausdorff) and taking values in a Banach lattice with order continuous norm. The main result, Theorem \ref{thmEWgen}, shows that each such operator is the Choquet-Bochner integral associated to a suitable upper continuous vector capacity. In Section 5, this representation is generalized to the framework of comonotonic additive operators with bounded variation. See Theorem \ref{thmfinal}. The basic ingredient is Lemma \ref{lemtech}, which shows that every comonotonic additive operator with bounded variation can be written as the difference of two positively homogeneous, translation invariant and monotone operators. The paper ends with a short list of open problems. \section{Preliminaries on ordered Banach spaces} An \emph{ordered vector space} is a real vector space $E$ endowed with an order relation $\leq$ such that the following two conditions are satisfied: \begin{align*} x & \leq y\text{ implies }x+z\leq y+z\text{ for all }x,y,z\in E;\text{ and}\\ x & \leq y\text{ implies }\lambda x\leq\lambda y\text{ for }x,y\in E\text{ and }\lambda\in\mathbb{R}_{+}=[0,\infty). \end{align*} In this case the set $E_{+}=\left\{ x\in E:x\geq0\right\} $ is a convex cone, called the \emph{positive cone}.
A real Banach space endowed with an order relation that makes it an ordered vector space is called an \emph{ordered Banach space} if the norm is monotone on the positive cone, that is, \[ 0\leq x\leq y\text{ implies }\left\Vert x\right\Vert \leq\left\Vert y\right\Vert . \] Note that in this paper we will consider only ordered Banach spaces $E$ whose positive cones are closed (in the norm topology), \emph{proper} $(-E_{+}\cap E_{+}=\left\{ 0\right\} )$ and \emph{generating} $(E=E_{+}-E_{+})$. A convenient way to emphasize the properties of ordered Banach spaces is that described by Davies in \cite{Davies1968}. According to Davies, a real Banach space $E$ endowed with a closed and generating cone $E_{+}$ such that \[ \left\Vert x\right\Vert =\inf\left\{ \left\Vert y\right\Vert :y\in E,\text{ }-y\leq x\leq y\right\} \text{\quad for all }x\in E, \] is called a \emph{regularly ordered Banach space}. Examples are the Banach lattices and some other spaces such as $\operatorname*{Sym}(n,\mathbb{R})$, the ordered Banach space of all $n\times n$-dimensional symmetric matrices with real coefficients. The norm of a symmetric matrix $A$ is defined by the formula \[ \left\Vert A\right\Vert =\sup_{\left\Vert x\right\Vert \leq1}\left\vert \langle Ax,x\rangle\right\vert , \] and the positive cone $\operatorname*{Sym}^{+}(n,\mathbb{R})$ of $\operatorname*{Sym}(n,\mathbb{R})$ consists of all symmetric matrices $A$ such that $\langle Ax,x\rangle\geq0$ for all $x.$ \begin{lemma} \label{Lem0}Every ordered Banach space can be renormed by an equivalent norm to become a regularly ordered Banach space. \end{lemma} For details, see Namioka \cite{Nam}. Some other useful properties of ordered Banach spaces are listed below. \begin{lemma} \label{lem1}Suppose that $E$ is a regularly ordered Banach space. Then: $(a)$ There exists a constant $C>0$ such that every element $x\in E$ admits a decomposition of the form $x=u-v$ where $u,v\in E_{+}$ and $\left\Vert u\right\Vert ,\left\Vert v\right\Vert \leq C\left\Vert x\right\Vert .$ $(b)$ The dual space of $E,$ $E^{\ast},$ when endowed with the dual cone \[ E_{+}^{\ast}=\left\{ x^{\ast}\in E^{\ast}:x^{\ast}(x)\geq0\text{ for all }x\in E_{+}\right\} \] is a regularly ordered Banach space. $(c)\ x\leq y$ in $E$ is equivalent to $x^{\ast}(x)\leq x^{\ast}(y)$ for all $x^{\ast}\in E_{+}^{\ast}.$ $(d)$ $\left\Vert x\right\Vert =\sup\left\{ x^{\ast}(x):x^{\ast}\in E_{+}^{\ast},\text{ }\left\Vert x^{\ast}\right\Vert \leq1\right\} $ for all $x\in E_{+}.$ $(e)$ If $(x_{n})_{n}$ is a decreasing sequence of positive elements of $E$ which converges weakly to $0,$ then $\left\Vert x_{n}\right\Vert \rightarrow0.$ \end{lemma} The assertion $(e)$ is a generalization of Dini's lemma in real analysis; see \cite{CN2014}, p. 173. \begin{proof} The assertion $(a)$ follows immediately from Lemma \ref{Lem0}. For $(b)$, see Davies \cite{Davies1968}, Lemma 2.4. The assertion $(c)$ is an easy consequence of the Hahn-Banach separation theorem; see \cite{NP2018}, Theorem 2.5.3, p. 100. The assertion $(d)$ is also a consequence of the Hahn-Banach separation theorem; see \cite{SW1999}, Theorem 4.3, p. 223. \end{proof} \begin{corollary} \label{cor1}Every ordered Banach space $E$ can be embedded into a space $C(X)$, where $X$ is a suitable compact space. \end{corollary} \begin{proof} According to the Alaoglu theorem, the set $X=\left\{ x^{\ast}\in E_{+}^{\ast }:\left\Vert x^{\ast}\right\Vert \leq1\right\} $ is compact relative to the $w^{\ast}$ topology.
Taking into account the assertions $(c)$ and $(d)$ of Lemma \ref{lem1} one can easily conclude that $E$ embeds into $C(X)$ (algebraically, isometrically and in order) via the map
\[
\Phi:E\rightarrow C(X),\text{\quad}\left( \Phi(x)\right) (x^{\ast})=x^{\ast}(x).
\]
\end{proof}

The following important result is due to V. Klee \cite{Klee}. A simple proof of it is available in \cite{NO2020}.

\begin{lemma}
\label{lem1*}Every positive linear operator $T:E\rightarrow F$ acting on ordered Banach spaces is continuous.
\end{lemma}

Sometimes, spaces with a richer structure are necessary. A vector lattice is any ordered vector space $E$ such that $\sup\{x,y\}$ and $\inf\{x,y\}$ exist for all $x,y\in E.$ In this case for each $x\in E$ we can define $x^{+}=\sup\left\{ x,0\right\} $ (the positive part of $x$), $x^{-}=\sup\left\{ -x,0\right\} $ (the negative part of $x$) and $\left\vert x\right\vert =\sup\left\{ -x,x\right\} $ (the modulus of $x$). We have $x=x^{+}-x^{-}$ and $\left\vert x\right\vert =x^{+}+x^{-}.$

A vector lattice endowed with a norm $\left\Vert \cdot\right\Vert $ such that
\[
\left\vert x\right\vert \leq\left\vert y\right\vert \text{ implies }\left\Vert x\right\Vert \leq\left\Vert y\right\Vert
\]
is called a normed vector lattice; it is called a Banach lattice when in addition it is norm complete. Examples of Banach lattices are numerous: the discrete spaces $\mathbb{R}^{n},$ $c_{0},$ $c$ and $\ell^{p}$ for $1\leq p\leq\infty$ (endowed with the coordinate-wise order), and the function spaces $C(K)$ (for $K$ a compact Hausdorff space) and $L^{p}(\mu)$ with $1\leq p\leq\infty$ (endowed with the pointwise order). Of special interest are the Banach lattices with \emph{order continuous norm}, that is, the Banach lattices for which every monotone and order bounded sequence is convergent in the norm topology. So are $\mathbb{R}^{n},$ $c_{0}$ and $L^{p}(\mu)$ for $1\leq p<\infty$.

\begin{lemma}
\label{lem2}Every monotone and order bounded sequence of elements in a Banach lattice $E$ with order continuous norm admits a supremum and an infimum, and all closed order intervals in $E$ are weakly compact.
\end{lemma}

For details, see Meyer-Nieberg \cite{MN}, Theorem 2.4.2, p. 86.

\section{The Choquet-Bochner Integral}

This section is devoted to the extension of Choquet's theory of integrability to the framework of integration with respect to a monotone set function with values in the positive cone of a regularly ordered Banach space $E$. This draws a parallel to the real-valued case already treated in full detail by Denneberg \cite{Denn} and Wang and Klir \cite{WK}.

Given a nonempty set $X,$ by a \emph{lattice} of subsets of $X$ we mean any collection $\Sigma$ of subsets that contains $\emptyset$ and $X$ and is closed under finite intersections and unions. A lattice $\Sigma$ is an \emph{algebra} if in addition it is closed under complementation. An algebra which is closed under countable unions and intersections is called a $\sigma$-algebra.

Of special interest is the case where $X$ is a compact Hausdorff space and $\Sigma$ is either the lattice $\Sigma_{up}^{+}(X)$ of all \emph{upper contour closed sets} $S=\left\{ x\in X:f(x)\geq t\right\} ,$ or the lattice $\Sigma_{up}^{-}(X)$ of all \emph{upper contour open sets} $S=\left\{ x\in X:f(x)>t\right\} ,$ associated to pairs $f\in C(X)$ and $t\in\mathbb{R}.$ When $X$ is a compact metrizable space, $\Sigma_{up}^{+}(X)$ coincides with the lattice of all closed subsets of $X$ (and $\Sigma_{up}^{-}(X)$ coincides with the lattice of all open subsets of $X$).
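As a concrete illustration of ours: on $X=[0,1]$, the function $f(x)=x(1-x)$ and the level $t=3/16$ produce the upper contour closed set
\[
\left\{ x\in\lbrack0,1]:x(1-x)\geq3/16\right\} =[1/4,3/4],
\]
since $x(1-x)\geq3/16$ amounts to $(4x-1)(4x-3)\leq0.$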
In what follows $\Sigma$ denotes a lattice of subsets of an abstract set $X$ and $E$ is a regularly ordered Banach space.

\begin{definition}
\label{defcap}A set function $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is called a vector capacity if it verifies the following two conditions:

$(C1)$ $\boldsymbol{\mu}(\emptyset)=0;$ and

$(C2)~\boldsymbol{\mu}(A)\leq\boldsymbol{\mu}(B)$ for all $A,B\in\Sigma$ with $A\subset B$.
\end{definition}

Notice that any vector capacity $\boldsymbol{\mu}$ is positive and takes values in the order interval $[0,\boldsymbol{\mu}(X)].$

An important class of vector capacities is that of \emph{additive} (respectively $\sigma$-\emph{additive}) \emph{vector measures with positive values}, that is, of capacities $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ with the property
\[
\boldsymbol{\mu}\left( {\displaystyle\bigcup\nolimits_{n}} A_{n}\right) ={\displaystyle\sum\nolimits_{n}} \boldsymbol{\mu}(A_{n}),
\]
for every finite (respectively infinite) sequence $A_{1},A_{2},A_{3},...$ of disjoint sets belonging to $\Sigma$ such that $\cup_{n}A_{n}\in\Sigma.$ Some other classes of capacities exhibiting various extensions of the property of additivity are listed below.

A vector capacity $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is called \emph{submodular} if
\begin{equation}
\boldsymbol{\mu}(A\cup B)+\boldsymbol{\mu}(A\cap B)\leq\boldsymbol{\mu}(A)+\boldsymbol{\mu}(B)\text{\quad for all }A,B\in\Sigma\label{submod}
\end{equation}
and it is called \emph{supermodular} when the inequality (\ref{submod}) holds in the reversed way. Every additive measure taking values in $E_{+}$ is both submodular and supermodular.

A vector capacity $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is called \emph{lower continuous} (or continuous by ascending sequences) if
\[
\lim_{n\rightarrow\infty}\boldsymbol{\mu}(A_{n})=\boldsymbol{\mu}\left( {\displaystyle\bigcup\nolimits_{n=1}^{\infty}} A_{n}\right)
\]
for every nondecreasing sequence $(A_{n})_{n}$ of sets in $\Sigma$ such that $\cup_{n=1}^{\infty}A_{n}\in\Sigma;$ $\boldsymbol{\mu}$ is called \emph{upper continuous} (or continuous by descending sequences) if
\[
\lim_{n\rightarrow\infty}\boldsymbol{\mu}(A_{n})=\boldsymbol{\mu}\left( \cap_{n=1}^{\infty}A_{n}\right)
\]
for every nonincreasing sequence $(A_{n})_{n}$ of sets in $\Sigma$ such that $\cap_{n=1}^{\infty}A_{n}\in\Sigma.$ If $\boldsymbol{\mu}$ is an additive capacity defined on a $\sigma$-algebra, then its upper/lower continuity is equivalent to the property of $\sigma$-additivity.

When $\Sigma$ is an algebra of subsets of $X,$ then to each vector capacity $\boldsymbol{\mu}$ defined on $\Sigma$ one can attach a new vector capacity $\overline{\boldsymbol{\mu}}$, the \emph{dual} of $\boldsymbol{\mu},$ which is defined by the formula
\[
\overline{\boldsymbol{\mu}}(A)=\boldsymbol{\mu}(X)-\boldsymbol{\mu}(X\setminus A).
\]
Notice that $\overline{\left( \overline{\boldsymbol{\mu}}\right) }=\boldsymbol{\mu}.$ The dual of a submodular (supermodular) capacity is a supermodular (submodular) capacity. Also, the dual of a lower continuous (upper continuous) capacity is an upper continuous (lower continuous) capacity.

\begin{example}
There are several standard procedures to attach to a $\sigma$-additive vector measure $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ certain not necessarily additive capacities.
So is the case of \emph{distorted measures,} $\nu(A)=T(\boldsymbol{\mu}(A)),$ obtained from $\boldsymbol{\mu}$ by applying to it a continuous nondecreasing distortion $T:[0,\boldsymbol{\mu}(X)]\rightarrow\lbrack0,\boldsymbol{\mu}(X)].$ The vector capacities $\nu$ so obtained are both upper and lower continuous. For example, this is the case when $E=\operatorname*{Sym}(n,\mathbb{R})$, $\boldsymbol{\mu}(X)=\mathrm{I}$ $($the identity of $\mathbb{R}^{n}),$ and $T:[0,\mathrm{I}]\rightarrow\lbrack0,\mathrm{I}]$ is the distortion defined by the formula $T(A)=A^{2}.$
\end{example}

Taking into account the assertions $(c)$ and $(d)$ of Lemma \ref{lem1}, some aspects (but not all) of the theory of vector capacities are straightforward consequences of the theory of $\mathbb{R}_{+}$-valued capacities.

\begin{lemma}
\label{lemweak}A set function $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is a submodular \emph{(}supermodular, lower continuous, upper continuous\emph{)} vector capacity if and only if $x^{\ast}\circ\boldsymbol{\mu}$ is a submodular \emph{(}supermodular, lower continuous, upper continuous\emph{)} $\mathbb{R}_{+}$-valued capacity whenever $x^{\ast}\in E_{+}^{\ast}.$

When $E=\mathbb{R}^{n},$ this assertion can be formulated via the components $\mu_{k}=\operatorname*{pr}_{k}\circ\boldsymbol{\mu}$ of $\boldsymbol{\mu}.$
\end{lemma}

In what follows the term (upper) \emph{measurable function} refers to any function $f:X\rightarrow\mathbb{R}$ all of whose upper contour sets $\left\{ x\in X:f(x)\geq t\right\} $ belong to $\Sigma$. When $\Sigma$ is a $\sigma$-algebra, this notion of measurability is equivalent to Borel measurability. We will denote by $B(\Sigma)$ the set of all bounded measurable functions $f:X\rightarrow\mathbb{R}.$ In general $B(\Sigma)$ is not a vector space (unless $\Sigma$ is a $\sigma$-algebra). However, even when $\Sigma$ is only an algebra, the set $B(\Sigma)$ enjoys some nice stability properties: if $f,g\in B(\Sigma)$ and $\alpha,\beta\in\mathbb{R},$ then
\[
\inf\{f,g\},\text{ }\sup\left\{ f,g\right\} \text{ and }\alpha+\beta f
\]
also belong to $B(\Sigma).$ See \cite{MM2004}, Proposition 15.

Given a capacity $\mu:\Sigma\rightarrow\mathbb{R}_{+},$ the \emph{Choquet integral} of a measurable function $f:X\rightarrow\mathbb{R}$ on a set $A\in\Sigma$ is defined as the sum of two improper Riemann integrals,
\begin{align}
(\operatorname*{C})\int_{A}f\mathrm{d}\mu & =\int_{0}^{+\infty}\mu\left( \{x\in A:f(x)\geq t\}\right) \mathrm{d}t\label{Chint}\\
& +\int_{-\infty}^{0}\left[ \mu\left( \{x\in A:f(x)\geq t\}\right) -\mu(A)\right] \mathrm{d}t.\nonumber
\end{align}
Accordingly, $f$ is said to be \emph{Choquet integrable} on $A$ if both integrals above are finite. See the seminal paper of Choquet \cite{Ch1954}. If $f\geq0$, then the last integral in the formula (\ref{Chint}) is 0. When $\Sigma$ is a $\sigma$-algebra, the inequality sign $\geq$ in the above two integrands can be replaced by $>;$ see \cite{WK}, Theorem 11.1, p. 226.

Every bounded measurable function is Choquet integrable. The Choquet integral coincides with the Lebesgue integral when the underlying set function $\mu$ is a $\sigma$-additive measure defined on a $\sigma$-algebra. The theory of the Choquet integral is available from numerous sources, including the books of Denneberg \cite{Denn} and Wang and Klir \cite{WK}.
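A worked numerical instance (our illustration): take $X=[0,1]$, $\Sigma$ the Borel $\sigma$-algebra, and the distorted Lebesgue measure $\mu(A)=\left( \lambda(A)\right) ^{2}$, where $\lambda$ denotes the Lebesgue measure. For $f(x)=x$ one has $\{x\in X:f(x)\geq t\}=[t,1]$ for $t\in\lbrack0,1]$, so
\[
(\operatorname*{C})\int_{X}f\mathrm{d}\mu=\int_{0}^{1}(1-t)^{2}\mathrm{d}t=\frac{1}{3},
\]
as opposed to the Lebesgue value $1/2.$ Moreover, this $\mu$ is supermodular: writing $a=\lambda(A\cap B)$, $u=\lambda(A\setminus B)$ and $v=\lambda(B\setminus A)$, supermodularity reduces to $(a+u+v)^{2}+a^{2}-(a+u)^{2}-(a+v)^{2}=2uv\geq0.$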
The concept of integrability of a measurable function with respect to a vector capacity $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ can be introduced by a fusion between the Choquet integral and the Bochner theory of integration of vector-valued functions. Recall that a function $\psi:\mathbb{R}\rightarrow E$ is \emph{Bochner integrable} with respect to the Lebesgue measure on $\mathbb{R}$ if there exists a sequence of step functions $\psi_{n}:\mathbb{R}\rightarrow E$ such that
\[
\lim_{n\rightarrow\infty}\psi_{n}(t)=\psi(t)\text{ almost everywhere and }\int_{\mathbb{R}}\left\Vert \psi-\psi_{n}\right\Vert \mathrm{d}t\rightarrow0.
\]
In this case, the (Bochner) integral of $\psi$ is defined by
\[
\int_{\mathbb{R}}\psi\mathrm{d}t=\lim_{n\rightarrow\infty}\int_{\mathbb{R}}\psi_{n}\mathrm{d}t.
\]
Notice that if $T:E\rightarrow F$ is a bounded linear operator, then
\begin{equation}
T\left( \int_{\mathbb{R}}\psi\mathrm{d}t\right) =\int_{\mathbb{R}}T\circ\psi\,\mathrm{d}t. \label{CB_opercommuting}
\end{equation}
Details about the Bochner integral are available in the books of Diestel and Uhl \cite{DU1977} and Dinculeanu \cite{Din}.

\begin{definition}
\label{defIntB}A measurable function $f:X\rightarrow\mathbb{R}$ is called Choquet-Bochner integrable with respect to the vector capacity $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ if for every $A\in\Sigma$ the functions $t\rightarrow\boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right) $ and $t\rightarrow\boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right) -\boldsymbol{\mu}(A)$ are Bochner integrable on $[0,\infty)$ and $(-\infty,0]$ respectively. Under these circumstances, the Choquet-Bochner integral over $A$ is defined by the formula
\begin{align*}
(\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu} & =\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right) \mathrm{d}t\\
& +\int_{-\infty}^{0}\left[ \boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right) -\boldsymbol{\mu}(A)\right] \mathrm{d}t.
\end{align*}
\end{definition}

According to the formula (\ref{CB_opercommuting}), if $f$ is Choquet-Bochner integrable, then
\begin{equation}
x^{\ast}\left( (\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}\right) =(\operatorname*{C})\int_{A}f\mathrm{d}(x^{\ast}\circ\boldsymbol{\mu}), \label{functcomm}
\end{equation}
for every positive linear functional $x^{\ast}\in E_{+}^{\ast}.$

A large class of Choquet-Bochner integrable functions is indicated below.

\begin{lemma}
\label{lem6}If $f:X\rightarrow\mathbb{R}$ is a bounded measurable function, then it is Choquet-Bochner integrable on every set $A\in\Sigma$.
\end{lemma}

\begin{proof}
Suppose that $f$ takes values in the interval $[0,M].$ Then the function $\varphi(t)=\boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right) $ is positive and nonincreasing on the interval $[0,M]$ and null outside this interval.
As a consequence, the function $\varphi$ can be approximated by the sequence of step functions defined as follows:
\begin{align*}
\varphi_{n}(t) & =0\text{ if }t\notin\lbrack0,M],\\
\varphi_{n}(t) & =\varphi(t)\text{ if }t=M,
\end{align*}
and
\[
\varphi_{n}(t)=\varphi\left( \frac{k}{n}M\right)
\]
if $t\in\lbrack\frac{k}{n}M,\frac{k+1}{n}M)$ and $k=0,...,n-1.$ A simple computation (with $t_{k}=kM/n$) shows that
\begin{align*}
\int_{0}^{+\infty}\left\Vert \varphi(t)-\varphi_{n}(t)\right\Vert \mathrm{d}t & =\int_{0}^{M}\left\Vert \varphi(t)-\varphi_{n}(t)\right\Vert \mathrm{d}t\\
& \leq{\displaystyle\sum\nolimits_{k=0}^{n-1}} \int_{t_{k}}^{t_{k+1}}\left\Vert \varphi(t)-\varphi_{n}(t)\right\Vert \mathrm{d}t\\
& \leq\frac{2M}{n}\left\Vert \boldsymbol{\mu}(X)\right\Vert \rightarrow0
\end{align*}
as $n\rightarrow\infty,$ which yields the Bochner integrability of the function $\varphi.$ See \cite{DU1977}, p. 44. The other cases, when $f$ takes values in the interval $[-M,0]$ or in an interval $[m,M]$ with $m<0<M,$ can be treated in a similar way.
\end{proof}

The next lemma collects a number of simple (but important) properties of the Choquet-Bochner integral.

\begin{lemma}
\label{lemgenprop}Suppose that $E$ is an ordered Banach space, $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is a vector capacity and $A\in\Sigma$.

$(a)$ If $f$ and $g$ are Choquet-Bochner integrable functions, then
\begin{gather*}
f\geq0\text{ implies }(\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}\geq0\text{\quad\emph{(}positivity\emph{)}}\\
f\leq g\text{ implies }\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}\leq\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu}\text{\quad\emph{(}monotonicity\emph{)}}\\
\left( \operatorname*{CB}\right) \int_{A}af\mathrm{d}\boldsymbol{\mu}=a\cdot\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}\text{\quad for all }a\geq0\text{\quad\emph{(}positive homogeneity\emph{)}}\\
\left( \operatorname*{CB}\right) \int_{A}1\cdot\mathrm{d}\boldsymbol{\mu}=\boldsymbol{\mu}(A)\text{\quad\emph{(}calibration\emph{).}}
\end{gather*}

$(b)$ In general, the Choquet-Bochner integral is not additive, but if $f$ and $g$ are Choquet-Bochner integrable functions which are also comonotonic in the sense of Dellacherie \cite{Del1970} \emph{(}that is, $(f(\omega)-f(\omega^{\prime}))\cdot(g(\omega)-g(\omega^{\prime}))\geq0$ for all $\omega,\omega^{\prime}\in X$\emph{)}, then
\[
\left( \operatorname*{CB}\right) \int_{A}(f+g)\mathrm{d}\boldsymbol{\mu}=\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}+\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu}.
\]
In particular, the Choquet-Bochner integral is translation invariant:
\[
\left( \operatorname*{CB}\right) \int_{A}(f+c)\mathrm{d}\boldsymbol{\mu}=\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}+c\boldsymbol{\mu}(A),
\]
for all Choquet-Bochner integrable functions $f$, all sets $A\in\Sigma$ and all numbers $c\in\mathbb{R}$.
\end{lemma}

\begin{proof}
According to Lemma \ref{lem1} $(c)$ and formula (\ref{CB_opercommuting}), applied in the case of an arbitrary functional $x^{\ast}\in E_{+}^{\ast},$ the proof of both assertions $(a)$ and $(b)$ reduces to the case of capacities with values in $\mathbb{R}_{+},$ already covered by Proposition 5.1 in \cite{Denn}, pp. 64-65.
\end{proof}

\begin{corollary}
The equality
\[
\left( \operatorname*{CB}\right) \int_{A}(\alpha f+c)\mathrm{d}\boldsymbol{\mu}=\alpha\left( \left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}\right) +c\cdot\boldsymbol{\mu}(A)
\]
holds for all Choquet-Bochner integrable functions $f$, all sets $A\in\Sigma$ and all numbers $\alpha\in\mathbb{R}_{+}$ and $c\in\mathbb{R}$.
\end{corollary}

The next result describes how the special properties of a vector capacity transfer to the Choquet-Bochner integral.

\begin{theorem}
\label{thmtransfer}$(a)$ If $\boldsymbol{\mu}$ is an upper continuous capacity, then the Choquet-Bochner integral is an upper continuous operator, that is,
\[
\lim_{n\rightarrow\infty}\left\Vert \left( \operatorname*{CB}\right) \int_{A}f_{n}\mathrm{d}\boldsymbol{\mu}-\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}\right\Vert =0,
\]
whenever $(f_{n})_{n}$ is a nonincreasing sequence of Choquet-Bochner integrable functions that converges pointwise to the Choquet-Bochner integrable function $f$ and $A\in\Sigma$.

$(b)$ If $\boldsymbol{\mu}$ is a lower continuous capacity, then the Choquet-Bochner integral is lower continuous in the sense that
\[
\lim_{n\rightarrow\infty}\left\Vert \left( \operatorname*{CB}\right) \int_{A}f_{n}\mathrm{d}\boldsymbol{\mu}-\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}\right\Vert =0
\]
whenever $(f_{n})_{n}$ is a nondecreasing sequence of Choquet-Bochner integrable functions that converges pointwise to the Choquet-Bochner integrable function $f$ and $A\in\Sigma$.

$(c)$ If $\Sigma$ is an algebra and $\boldsymbol{\mu}:\Sigma\rightarrow E_{+}$ is a submodular capacity, then the Choquet-Bochner integral is a submodular operator in the sense that
\[
\left( \operatorname*{CB}\right) \int_{A}\sup\left\{ f,g\right\} \mathrm{d}\boldsymbol{\mu}+\left( \operatorname*{CB}\right) \int_{A}\inf\{f,g\}\mathrm{d}\boldsymbol{\mu}\leq\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}+(\operatorname*{CB})\int_{A}g\mathrm{d}\boldsymbol{\mu}
\]
whenever $f$ and $g$ are Choquet-Bochner integrable and $A\in\Sigma$.
\end{theorem}

\begin{proof}
$(a)$ Since $\boldsymbol{\mu}$ is an upper continuous capacity and $(f_{n})_{n}$ is a nonincreasing sequence of measurable functions that converges pointwise to the measurable function $f$, it follows that
\[
\boldsymbol{\mu}\left( \{x\in A:f_{n}(x)\geq t\}\right) \searrow\boldsymbol{\mu}\left( \{x\in A:f(x)\geq t\}\right)
\]
in the norm topology. Taking into account the property of monotonicity of the Choquet-Bochner integral (already noticed in Lemma \ref{lemgenprop} $(a))$ we have
\[
(\operatorname*{CB})\int_{A}f_{1}\mathrm{d}\boldsymbol{\mu}\geq(\operatorname*{CB})\int_{A}f_{2}\mathrm{d}\boldsymbol{\mu}\geq\cdots\geq(\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu},
\]
so by Beppo Levi's monotone convergence theorem from the theory of the Lebesgue integral (see \cite{Din}, Theorem 2, p. 133) it follows that
\[
x^{\ast}\left( (\operatorname*{CB})\int_{A}f_{n}\mathrm{d}\boldsymbol{\mu}\right) =(\operatorname*{C})\int_{A}f_{n}\mathrm{d}\mu^{\ast}\rightarrow(\operatorname*{C})\int_{A}f\mathrm{d}\mu^{\ast}=x^{\ast}\left( (\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}\right)
\]
for all $x^{\ast}\in E_{+}^{\ast}$, where $\mu^{\ast}=x^{\ast}\circ\boldsymbol{\mu}$. The conclusion of the assertion $(a)$ is now a direct consequence of the generalized Dini's lemma (see Lemma \ref{lem1} $(e)$).

$(b)$ The argument is similar to that used to prove the assertion $(a)$.
$(c)$ Since $\Sigma$ is an algebra, both functions $\inf\{f,g\}$ and $\sup\left\{ f,g\right\} $ are measurable. The fact that $\boldsymbol{\mu}$ is submodular implies
\begin{multline*}
\boldsymbol{\mu}\left( \left\{ x:\sup\{f,g\}(x)\geq t\right\} \right) +\boldsymbol{\mu}\left( \left\{ x:\inf\{f,g\}(x)\geq t\right\} \right) \\
=\boldsymbol{\mu}\left( \left\{ x:f(x)\geq t\right\} \cup\left\{ x:g(x)\geq t\right\} \right) +\boldsymbol{\mu}\left( \left\{ x:f(x)\geq t\right\} \cap\left\{ x:g(x)\geq t\right\} \right) \\
\leq\boldsymbol{\mu}\left( \left\{ x:f(x)\geq t\right\} \right) +\boldsymbol{\mu}\left( \left\{ x:g(x)\geq t\right\} \right) ,
\end{multline*}
and the same works when $\boldsymbol{\mu}$ is replaced by $\mu^{\ast}=x^{\ast}\circ\boldsymbol{\mu},$ where $x^{\ast}\in E_{+}^{\ast}$ is arbitrarily fixed. Integrating side by side the last inequality, it follows that
\[
\left( \operatorname*{C}\right) \int_{A}\sup\left\{ f,g\right\} \mathrm{d}\mu^{\ast}+\left( \operatorname*{C}\right) \int_{A}\inf\{f,g\}\mathrm{d}\mu^{\ast}\leq\left( \operatorname*{C}\right) \int_{A}f\mathrm{d}\mu^{\ast}+(\operatorname*{C})\int_{A}g\mathrm{d}\mu^{\ast},
\]
which yields (via Lemma \ref{lem1} $(c))$ the submodularity of the Choquet-Bochner integral.
\end{proof}

The property of subadditivity of the Choquet-Bochner integral is the subject of the following result:

\begin{theorem}
\label{subadthm}\emph{(The Subadditivity Theorem)} If $\boldsymbol{\mu}$ is a submodular capacity, then the associated Choquet-Bochner integral is subadditive, that is,
\[
\left( \operatorname*{CB}\right) \int_{A}(f+g)\mathrm{d}\boldsymbol{\mu}\leq\left( \operatorname*{CB}\right) \int_{A}f\mathrm{d}\boldsymbol{\mu}+\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu}
\]
whenever $f$, $g$ and $f+g$ are Choquet-Bochner integrable functions and $A\in\Sigma$.

In addition, when $E$ is a Banach lattice and $f$, $g,$ $f-g$ and $g-f$ are Choquet-Bochner integrable functions, then the following integral analogue of the modulus inequality holds true:
\[
\left\vert (\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}-(\operatorname*{CB})\int_{A}g\mathrm{d}\boldsymbol{\mu}\right\vert \leq(\operatorname*{CB})\int_{A}|f-g|\mathrm{d}\boldsymbol{\mu}
\]
for all $A\in\Sigma$. In particular,
\[
\left\vert (\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}\right\vert \leq(\operatorname*{CB})\int_{A}|f|\mathrm{d}\boldsymbol{\mu}
\]
whenever $f$ and $-f$ are Choquet-Bochner integrable functions and $A\in\Sigma.$
\end{theorem}

\begin{proof}
According to Lemma \ref{lemweak}, if $\boldsymbol{\mu}$ is submodular, then every real-valued capacity $\mu^{\ast}=x^{\ast}\circ\boldsymbol{\mu}$ also is submodular, whenever $x^{\ast}\in E_{+}^{\ast}$. Then
\[
\left( \operatorname*{C}\right) \int_{A}(f+g)\mathrm{d}\mu^{\ast}\leq\left( \operatorname*{C}\right) \int_{A}f\mathrm{d}\mu^{\ast}+\left( \operatorname*{C}\right) \int_{A}g\mathrm{d}\mu^{\ast},
\]
as a consequence of Theorem 6.3, p. 75, in \cite{Denn}. Therefore the first inequality in the statement of Theorem \ref{subadthm} is now a direct consequence of Lemma \ref{lem1} $(c)$ and the formula (\ref{functcomm}).
For the second inequality, notice that the subadditivity property implies
\begin{align*}
(\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu} & =(\operatorname*{CB})\int_{A}\left( f-g+g\right) \mathrm{d}\boldsymbol{\mu}\\
& \leq\left( \operatorname*{CB}\right) \int_{A}\left( f-g\right) \mathrm{d}\boldsymbol{\mu}+\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu},
\end{align*}
and taking into account that $f-g\leq\left\vert f-g\right\vert $ we infer that
\[
(\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}-\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu}\leq\left( \operatorname*{CB}\right) \int_{A}\left\vert f-g\right\vert \mathrm{d}\boldsymbol{\mu}.
\]
Interchanging $f$ and $g$ we also obtain
\[
\pm\left( (\operatorname*{CB})\int_{A}f\mathrm{d}\boldsymbol{\mu}-\left( \operatorname*{CB}\right) \int_{A}g\mathrm{d}\boldsymbol{\mu}\right) \leq\left( \operatorname*{CB}\right) \int_{A}\left\vert f-g\right\vert \mathrm{d}\boldsymbol{\mu}
\]
and the proof is done.
\end{proof}

\section{The integral representation of Choquet operators defined on a space $C(X)$}

The special case when $X$ is a compact Hausdorff space and $\Sigma=\mathcal{B}(X),$ the $\sigma$-algebra of all Borel subsets of $X,$ allows us to shift the entire discussion concerning the Choquet-Bochner integral from the vector space $B(\Sigma)$ (of all bounded real-valued Borel measurable functions) to the Banach lattice $C(X)$ (of all real-valued continuous functions defined on $X$). Indeed, in this case $C(X)$ is a subspace of $B(\Sigma)$ and all the nice properties stated in Lemma \ref{lemgenprop} and in Theorem \ref{subadthm} remain true when restricting the integral to the space $C(X).$ As a consequence, the integral operator
\begin{equation}
\operatorname*{CB}\nolimits_{\boldsymbol{\mu}}:C(X)\rightarrow E,\text{\quad}\operatorname*{CB}\nolimits_{\boldsymbol{\mu}}(f)=(\operatorname*{CB})\int_{X}f\mathrm{d}\boldsymbol{\mu}, \label{CPoper}
\end{equation}
associated to a vector-valued submodular capacity $\boldsymbol{\mu}:\mathcal{B}(X)\rightarrow E,$ is a Choquet operator.

The linear and positive functionals defined on $C(X)$ can be represented either as integrals with respect to a unique regular Borel measure (the Riesz-Kakutani representation theorem) or as Choquet integrals (see Epstein and Wang \cite{EW1996}).

\begin{example}
The space $c$ of all convergent sequences of real numbers can be identified with the space of continuous functions on $\mathbb{\hat{N}}=\mathbb{N}\cup\{\infty\}$ $($the one-point compactification of the discrete space $\mathbb{N})$. The functional
\[
I:c\rightarrow\mathbb{R},\text{\quad}I(x)=\lim_{n\rightarrow\infty}x(n)
\]
is linear and monotone \emph{(}therefore continuous by Lemma \emph{\ref{lem1*})} and its Riesz representation is
\[
I(x)=\int_{\mathbb{\hat{N}}}x(n)\mathrm{d}\delta_{\infty}(n),
\]
where $\delta_{\infty}$ is the Dirac measure concentrated at $\infty,$ that is, $\delta_{\infty}(A)=1$ if $A\in\mathcal{P}(\mathbb{\hat{N}})$ and $\infty\in A$, and $\delta_{\infty}(A)=0$ if $\infty\notin A.$ Meanwhile, $I$ admits the Choquet representation
\[
I(x)=(\operatorname*{C})\int x(n)\mathrm{d}\mu(n),
\]
where $\mu$ is the capacity defined on the power set $\mathcal{P}(\mathbb{\hat{N}})$ by $\mu(A)=0$ if $A$ is finite and $\mu(A)=1$ otherwise.
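A quick verification of the latter representation (for the reader's convenience): if $x\geq0$ and $L=\lim_{n\rightarrow\infty}x(n),$ then the set $\{n\in\mathbb{\hat{N}}:x(n)\geq t\}$ is infinite for every $t<L$ and finite for every $t>L,$ so that
\[
(\operatorname*{C})\int x(n)\mathrm{d}\mu(n)=\int_{0}^{+\infty}\mu\left( \{n:x(n)\geq t\}\right) \mathrm{d}t=\int_{0}^{L}1\,\mathrm{d}t=L;
\]
the case of an arbitrary $x\in c$ follows from the translation invariance of the Choquet integral.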
\end{example}

As the Riesz-Kakutani representation theorem also holds in the case of linear and positive operators $T:C(X)\rightarrow E$, it seems natural to search for an analogue in the case of Choquet operators. The answer is provided by the following representation theorem, which extends a result due to Epstein and Wang \cite{EW1996} from functionals to operators:

\begin{theorem}
\label{thmEWgen}Let $X$ be a compact Hausdorff space and $E$ be a Banach lattice with order continuous norm. Then for every comonotonic additive and monotone operator $I:C(X)\rightarrow E$ with $I(1)>0$ there exists a unique upper continuous vector capacity $\boldsymbol{\mu}:\Sigma_{up}^{+}(X)\rightarrow E_{+}$ such that
\begin{equation}
I(f)=(\operatorname*{CB})\int_{X}f\mathrm{d}\boldsymbol{\mu}\text{\quad for all }f\in C(X). \label{reprformula}
\end{equation}

Moreover, $\boldsymbol{\mu}$ admits a unique extension to $\mathcal{B}(X)$ $($also denoted $\boldsymbol{\mu})$ that fulfils the following two properties of regularity:

$(R1)$ $\boldsymbol{\mu}(A)=\sup\left\{ \boldsymbol{\mu}(K):K\text{ closed, }K\subset A\right\} $ for all $A\in\mathcal{B}(X);$

$(R2)$ $\boldsymbol{\mu}(K)=\inf\left\{ \boldsymbol{\mu}(O):O\text{ open, }O\supset K\right\} $ for all closed sets $K.$
\end{theorem}

Recall that $\Sigma_{up}^{+}(X)$ represents the lattice of upper contour sets associated to the continuous functions defined on $X.$ This lattice coincides with the lattice of all closed subsets of $X$ when $X$ is compact and metrizable. The proof of Theorem \ref{thmEWgen} needs several auxiliary results and will be detailed at the end of this section.

\begin{lemma}
\label{lem-aux-1*}Let $E$ be an ordered Banach space. Then every monotone and translation invariant operator $I:C(X)\rightarrow E$ is Lipschitz continuous.
\end{lemma}

\begin{proof}
Given $f,g\in C(X),$ one can choose a decreasing sequence $(\alpha_{n})_{n}$ of positive numbers such that $\alpha_{n}\downarrow\left\Vert f-g\right\Vert .$ Then $f\leq g+\left\Vert f-g\right\Vert \leq g+\alpha_{n}\cdot1,$ which implies
\[
I(f)\leq I(g+\alpha_{n}\cdot1)=I(g)+\alpha_{n}I(1)
\]
due to the properties of monotonicity and translation invariance of $I.$ Since the role of $f$ and $g$ is symmetric, this leads to the fact that $\left\vert I(f)-I(g)\right\vert \leq I(1)\cdot\alpha_{n}$ for all $n,$ whence, by passing to the limit as $n\rightarrow\infty,$ we conclude that
\[
\left\vert I(f)-I(g)\right\vert \leq I(1)\cdot\left\Vert f-g\right\Vert .
\]
\end{proof}

\begin{lemma}
\label{lem-aux-1}Let $E$ be an ordered Banach space. Then every comonotonic additive and monotone operator $I:C(X)\rightarrow E$ is positively homogeneous.
\end{lemma}

\begin{proof}
Let $f\in C(X)_{+}.$ Since $I$ is comonotonic additive, we get $I(0)=I(0+0)=2\cdot I(0),$ which implies $I(0)=0$. As a consequence, $I(0\cdot f)=0=0\cdot I(f).$ Then the same argument shows that $I(2f)=I(f+f)=2I(f)$ and by mathematical induction we infer that $I(pf)=pI(f)$ for all $p\in\mathbb{N}$.

Now, consider the case of positive rational numbers $r=p/q,$ where $p,q\in\mathbb{N}$. Then $I(f)=I\left( q\cdot\frac{1}{q}f\right) =qI\left( \frac{1}{q}f\right) $, which implies $I\left( \frac{1}{q}f\right) =\frac{1}{q}\cdot I(f)$. Therefore
\[
I\left( \frac{p}{q}f\right) =pI\left( \frac{1}{q}f\right) =\frac{p}{q}\cdot I(f).
\]
Passing to the case of an arbitrary positive number $\alpha,$ let us choose a decreasing sequence $(r_{n})_{n}$ of rationals converging to $\alpha$.
Then $r_{n}f\geq\alpha f$ for all $n,$ which yields $r_{n}I(f)=I(r_{n}f)\geq I(\alpha f)$ for all $n\in\mathbb{N}$. Passing here to the limit (in the norm of $E$) it follows that $\alpha I(f)\geq I(\alpha f)$. On the other hand, considering a sequence of positive rational numbers $s_{n}\nearrow\alpha$ and reasoning as above, we easily obtain $\alpha I(f)\leq I(\alpha f)$, which combined with the previous inequality proves the assertion of Lemma \ref{lem-aux-1} in the case of nonnegative functions.

When $f\in C(X)$ is arbitrary, one can choose a positive number $\lambda$ such that $f+\lambda\geq0.$ By the above reasoning and the property of comonotonic additivity, for all $\alpha\geq0,$
\begin{align*}
\alpha I(f)+\alpha\lambda I(1) & =\alpha I\left( f+\lambda\right) =I(\alpha\left( f+\lambda\right) )\\
& =I(\alpha f+\alpha\lambda)=I(\alpha f)+\alpha\lambda I(1),
\end{align*}
which ends the proof of Lemma \ref{lem-aux-1}.
\end{proof}

\begin{lemma}
\label{lem-aux-3}Let $E$ be a Banach lattice with order continuous norm. Then every monotone, positively homogeneous and translation invariant operator $I:C(X)\rightarrow E$ is weakly compact and upper continuous.
\end{lemma}

\begin{proof}
The weak compactness of $I$ follows from the fact that $I$ maps the closed order intervals in $C(X)$ (in particular, all closed balls) into closed order intervals in $E$, and all such intervals are weakly compact in $E$. See Lemma \ref{lem2}.

For the property of upper continuity, let $(f_{n})_{n}$ be any nonincreasing sequence of functions in $C(X)$ which converges pointwise to a continuous function $f.$ By Dini's lemma, the sequence $(f_{n})_{n}$ is convergent to $f$ in the norm topology of $C(X).$ Therefore, for each $\varepsilon>0$ there is an index $N$ such that $f\leq f_{n}<f+\varepsilon$ for all $n\geq N.$ Since $I$ is monotone it follows that
\begin{equation}
I(f)\leq I(f_{n})\leq I(f+\varepsilon)=I(f)+\varepsilon\cdot I(1)\text{ for all }n\geq N, \label{consDini}
\end{equation}
where $1$ is the unit of $C(X).$ Taking into account that $E$ has order continuous norm and $I(f_{n})\geq I(f_{n+1})\geq I(f)$ for all $n$, it follows that the limit $\lim_{n\rightarrow\infty}I(f_{n})$ exists in $E$ and $\lim_{n\rightarrow\infty}I(f_{n})\geq I(f)$. Combining this fact with (\ref{consDini}) we infer that $I(f)\leq\lim_{n\rightarrow\infty}I(f_{n})\leq I(f)+\varepsilon I(1).$ Since $\varepsilon>0$ was chosen arbitrarily, we conclude that $\lim_{n\rightarrow\infty}I(f_{n})=I(f).$
\end{proof}

The next result was stated for real-valued functionals in \cite{Zhou}, Lemma 1 (and attributed there to Massimo Marinacci). For the convenience of the reader we include here the details.

\begin{lemma}
\label{lem-aux-2}Let $E$ be an ordered Banach space.

$(a)$ Suppose that $I:C(X)\rightarrow E$ is a monotone, positively homogeneous and translation invariant operator. The following two properties are equivalent:

$(a_{1})$ $\lim_{n\rightarrow\infty}I(f_{n})=I(f)$ for any nonincreasing sequence $(f_{n})_{n}$ in $C(X)$ that converges pointwise to a function $f$ also in $C(X);$

$(a_{2})$ $\lim_{n\rightarrow\infty}I(f_{n})\leq I(f)$ for any nonincreasing sequence $(f_{n})_{n}$ in $C(X)$ and any $f$ in $C(X)$ such that for each $x\in X$ there is an index $n_{x}\in\mathbb{N}$ such that $f_{n}(x)\leq f(x)$ whenever $n\geq n_{x}$.
$(b)$ For any vector capacity $\boldsymbol{\mu}:\mathcal{B}(X)\rightarrow E_{+}$, the following two properties are equivalent:

$(b_{1})$ $\lim_{n\rightarrow\infty}\boldsymbol{\mu}(A_{n})=\boldsymbol{\mu}(A)$, for any nonincreasing sequence $(A_{n})_{n}$ of sets in $\mathcal{B}(X)$ such that $A=\cap_{n=1}^{\infty}A_{n};$

$(b_{2})$ $\lim_{n\rightarrow\infty}\boldsymbol{\mu}(A_{n})\leq\boldsymbol{\mu}(A)$, for any nonincreasing sequence $(A_{n})_{n}$ of sets in $\mathcal{B}(X)$ and any $A\in\mathcal{B}(X)$ such that $\cap_{n=1}^{\infty}A_{n}\subset A$.

All the limits above are considered in the norm topology of $E$.
\end{lemma}

\begin{proof}
$(a_{1})\Rightarrow(a_{2})$. Let $(f_{n})_{n}$ be a nonincreasing sequence in $C(X)$ and let $f\in C(X)$ be such that, for all $x\in X$, there is an $n_{x}$ with $f_{n}(x)\leq f(x)$ for all $n\geq n_{x}$. Since the sequence $(\max\{f_{n},f\})_{n}$ is also a nonincreasing sequence in $C(X)$ and $\lim_{n\rightarrow\infty}\max\{f_{n}(x),f(x)\}=f(x)$ for all $x\in X$, $(a_{1})$ implies $\lim_{n\rightarrow\infty}I(\max\{f_{n},f\})=I(f)$. By the monotonicity of $I$ it follows that $\lim_{n\rightarrow\infty}I(f_{n})\leq\lim_{n\rightarrow\infty}I(\max\{f_{n},f\})=I(f)$.

$(a_{2})\Rightarrow(a_{1})$. Let $f_{n},f\in C(X)$, $n\in\mathbb{N}$, be such that $(f_{n})_{n}$ is nonincreasing and $\lim_{n\rightarrow\infty}f_{n}(x)=f(x)$ for all $x\in X$. Fix $\varepsilon>0$ arbitrarily. Since for each $x\in X$ there exists $n_{x}\in\mathbb{N}$ such that $f_{n}(x)\leq f(x)+\varepsilon$ for all $n\geq n_{x}$, $(a_{2})$ implies (using monotonicity and translation invariance) that $\lim_{n\rightarrow\infty}I(f_{n})\leq I(f+\varepsilon\cdot1)=I(f)+\varepsilon I(1)$. Letting $\varepsilon\rightarrow0$ it follows that $\lim_{n\rightarrow\infty}I(f_{n})\leq I(f)$. But since $(f_{n})_{n}$ is nonincreasing and $I$ is monotone, we also have $\lim_{n\rightarrow\infty}I(f_{n})\geq I(f)$, which combined with the previous inequality implies $\lim_{n\rightarrow\infty}I(f_{n})=I(f)$.

The equivalence $(b_{1})\Leftrightarrow(b_{2})$ can be proved in a similar way.
\end{proof}

Recall that in the case of a compact Hausdorff space $X$ the lattice $\Sigma_{up}^{+}(X)$ represents the lattice of upper contour sets associated to the continuous functions defined on $X.$ This lattice coincides with the lattice of all closed subsets of $X$ when $X$ is compact and metrizable; indeed, if $d$ is the metric of $X,$ then every closed subset $A\subset X$ admits the representation $A=\left\{ x:-d(x,A)\geq0\right\} .$

\begin{proof}
[Proof of Theorem \ref{thmEWgen}]Notice first that according to Lemma \ref{lem-aux-1} and Lemma \ref{lem-aux-3} the operator $I$ is also positively homogeneous and upper continuous.

Every set $K\in\Sigma_{up}^{+}(X)$ admits a representation of the form
\[
K=\left\{ x:f(x)\geq\alpha\right\} ,
\]
for suitable $f\in C(X)$ and $\alpha\in\mathbb{R}.$ As a consequence, its characteristic function $\chi_{K}$ is the pointwise limit of a nonincreasing sequence $(f_{n}^{K})_{n}$ of continuous and nonnegative functions. For example, one may choose
\begin{equation}
f_{n}^{K}(x)=1-\inf\left\{ 1,n(\alpha-f)^{+}\right\} =\left\{
\begin{array}[c]{cl}
0 & \text{if }f(x)\leq\alpha-1/n\\
\in(0,1) & \text{if }\alpha-1/n<f(x)<\alpha\\
1 & \text{if }f(x)\geq\alpha\text{ (i.e., }x\in K).
\end{array}
\right. \label{Ury}
\end{equation}
See \cite{Zhou}, p. 1814.
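(A quick check, for the reader's convenience: since $n(\alpha-f)^{+}$ increases with $n$, the sequence $(f_{n}^{K})_{n}$ is nonincreasing; for $x\in K$ one has $f_{n}^{K}(x)=1$ for every $n$, while for $x\notin K$ one has $f(x)<\alpha$ and hence $f_{n}^{K}(x)=0$ as soon as $n>1/(\alpha-f(x)).$)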
Since $I$ is monotone, the sequence $(I(f_{n}^{K}))_{n}$ is also nonincreasing and bounded from below by $0$, which implies (due to the order continuity of the norm of $E)$ that it is also convergent in the norm topology of $E.$

This allows us to define $\boldsymbol{\mu}$ on the sets $K\in\Sigma_{up}^{+}(X)$ by the formula
\begin{equation}
\boldsymbol{\mu}(K)=\lim_{n\rightarrow\infty}I(f_{n}^{K}). \label{defmu}
\end{equation}
The definition of $\boldsymbol{\mu}(K)$ is independent of the particular sequence $(f_{n}^{K})_{n}$ with the aforementioned properties. Indeed, if $(g_{n}^{K})_{n}$ is another such sequence, fix a positive integer $m$ and infer from Lemma \ref{lem-aux-2} $(a)$ that $\lim_{n\rightarrow\infty}I(g_{n}^{K})\leq I(f_{m}^{K})$. Taking the limit as $m\rightarrow\infty$ on the right-hand side we get
\[
\lim_{n\rightarrow\infty}I(g_{n}^{K})\leq\lim_{m\rightarrow\infty}I(f_{m}^{K}).
\]
Then, interchanging $(f_{n}^{K})_{n}$ and $(g_{n}^{K})_{n},$ we conclude that actually equality holds.

Clearly, the set function $\boldsymbol{\mu}:\Sigma_{up}^{+}(X)\rightarrow E_{+}$ is a vector capacity and it takes values in the order interval $[0,I(1)].$

We next show that $\boldsymbol{\mu}$ is upper continuous on $\Sigma_{up}^{+}(X)$, that is,
\[
\boldsymbol{\mu}(K)=\lim_{n\rightarrow\infty}\boldsymbol{\mu}(K_{n})
\]
whenever $(K_{n})_{n}$ is a nonincreasing sequence of sets in $\Sigma_{up}^{+}(X)$ such that $K=\cap_{n=1}^{\infty}K_{n}$. Indeed, using formula (\ref{Ury}) one can choose a nonincreasing sequence of continuous functions $g_{n}$ such that $g_{n}\geq\chi_{K_{n}},$ $g_{n}=0$ outside the neighborhood of radius $1/n$ of $K_{n}$ and $I(g_{n})-\boldsymbol{\mu}(K_{n})\rightarrow0$; see the analogous reasoning concerning relations (7)-(9) in \cite{Zhou}, pp. 1814-1815. Then $\boldsymbol{\mu}(K)=\lim_{n\rightarrow\infty}I(g_{n}),$ which implies the equality $\boldsymbol{\mu}(K)=\lim_{n\rightarrow\infty}\boldsymbol{\mu}(K_{n}).$

The next goal is the representation formula (\ref{reprformula}). For this, let $x^{\ast}\in E_{+}^{\ast}$ be arbitrarily fixed and consider the comonotonic additive and monotone functional
\[
I^{\ast}=x^{\ast}\circ I:C(X)\rightarrow\mathbb{R}.
\]
It verifies $I^{\ast}(1)=x^{\ast}(I(1))>0,$ so by Theorem 1 in \cite{Zhou} there is a unique upper continuous capacity $\nu^{\ast}:\Sigma_{up}^{+}(X)\rightarrow\lbrack0,I^{\ast}(1)]$ such that
\[
I^{\ast}(f)=(\operatorname*{C})\int_{X}f\mathrm{d}\nu^{\ast}\text{\quad for all }f\in C(X).
\]
The capacity $\nu^{\ast}$ is obtained via an approximation process similar to (\ref{defmu}). Therefore
\[
\nu^{\ast}(K)=\lim_{n\rightarrow\infty}x^{\ast}(I(f_{n}^{K}))=x^{\ast}\left( \lim_{n\rightarrow\infty}I(f_{n}^{K})\right) =x^{\ast}(\boldsymbol{\mu}(K))
\]
for all $K\in\Sigma_{up}^{+}(X),$ which implies
\[
x^{\ast}(I(f))=(\operatorname*{C})\int_{X}f\mathrm{d}(x^{\ast}\circ\boldsymbol{\mu}).
\]
Since $x^{\ast}\in E_{+}^{\ast}$ was arbitrarily fixed, an appeal to Lemma \ref{lem1} $(c)$ easily yields the equality
\[
I(f)=(\operatorname*{CB})\int_{X}f\mathrm{d}\boldsymbol{\mu}.
\]

For the second part of Theorem \ref{thmEWgen}, since $\boldsymbol{\mu}$ takes values in an order bounded interval, one can extend it to all Borel subsets of $X$ via the formula
\[
\boldsymbol{\mu}(A)=\sup\left\{ \boldsymbol{\mu}(K):K\text{ closed, }K\subset A\right\} ,\text{\quad}A\in\mathcal{B}(X).
\]
The fact that the resulting set function $\boldsymbol{\mu}$ is a vector capacity is immediate.
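(One way to see that the supremum above exists: the family $\left\{ \boldsymbol{\mu}(K):K\text{ closed, }K\subset A\right\} $ is order bounded by $I(1)$, and a Banach lattice with order continuous norm is order complete, cf. \cite{MN}; the family is moreover upward directed, since $K_{1},K_{2}\subset A$ closed imply $K_{1}\cup K_{2}\subset A$ closed.)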
This set function $\boldsymbol{\mu}$ also verifies the regularity condition $(R2)$. Indeed, given a closed set $K$, we can consider the sequence of open sets
\[
O_{n}=\left\{ x\in X:d(x,K)<1/n\right\} .
\]
Clearly, $\boldsymbol{\mu}(K)\leq\boldsymbol{\mu}(O_{n})\leq\boldsymbol{\mu}(K_{n})$ where
\[
K_{n}=\left\{ x\in X:d(x,K)\leq1/n\right\} .
\]
Since $\boldsymbol{\mu}$ is upper continuous on closed sets, it follows that $\lim_{n\rightarrow\infty}\boldsymbol{\mu}(K_{n})=\boldsymbol{\mu}(K)$, whence $\lim_{n\rightarrow\infty}\boldsymbol{\mu}(O_{n})=\boldsymbol{\mu}(K).$ Therefore $\boldsymbol{\mu}$ verifies the regularity condition $(R2)$. The uniqueness of the extension of $\boldsymbol{\mu}$ to $\mathcal{B}(X)$ is guaranteed by the condition $(R1).$
\end{proof}

\begin{remark}
\label{rem2}If the operator $I$ is submodular, that is,
\[
I(\sup\left\{ {f,g}\right\} )+I(\inf\left\{ {f,g}\right\} )\leq I(f)+I(g)\text{\quad for all }f,g\in C(X),
\]
then the vector capacity $\boldsymbol{\mu}:\Sigma_{up}^{+}(X)\rightarrow E_{+}$ provided by Theorem \emph{\ref{thmEWgen}} is submodular. This is a consequence of Theorem $13$ $(c)$ in \emph{\cite{CMMM2012}}. For the convenience of the reader we will recall here the argument. Let $A,B\in\Sigma_{up}^{+}(X)$ and consider the sequences $(f_{n}^{A})_{n}$ and $(f_{n}^{B})_{n}$ of continuous functions associated respectively to $A$ and $B$ as in formula \emph{(\ref{defmu})}. Then
\[
\boldsymbol{\mu}(A)=\lim_{n\rightarrow\infty}I(f_{n}^{A}),\text{\quad}\boldsymbol{\mu}(B)=\lim_{n\rightarrow\infty}I(f_{n}^{B}),\text{\quad}\boldsymbol{\mu}(A\cup B)=\lim_{n\rightarrow\infty}I(\sup\left\{ f_{n}^{A},f_{n}^{B}\right\} )
\]
and $\boldsymbol{\mu}(A\cap B)=\lim_{n\rightarrow\infty}I(\inf\left\{ f_{n}^{A},f_{n}^{B}\right\} ).$ Since $I$ is submodular, it follows that
\[
\boldsymbol{\mu}(A\cup B)+\boldsymbol{\mu}(A\cap B)\leq\boldsymbol{\mu}(A)+\boldsymbol{\mu}(B),
\]
and the proof is done.
\end{remark}

\section{The case of operators with bounded variation}

The representation Theorem \ref{thmEWgen} can be extended outside the framework of monotone operators by considering the class of operators with bounded variation. As above, $X$ is a compact Hausdorff space and $E$ is a Banach lattice with order continuous norm.

\begin{definition}
\label{defOpBV}An operator $I:C(X)\rightarrow E$ has \emph{bounded variation} \emph{over an order interval} $[f,g]$ if
\begin{equation}
\vee_{f}^{g}I=\sup{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k})-I(f_{k-1})\right\vert \text{\quad exists in }E, \label{sumvar}
\end{equation}
the supremum being taken over all finite chains $f=f_{0}\leq f_{1}\leq\cdots\leq f_{n}=g$ of functions in the Banach lattice $C(X).$ The operator $I$ is said to have \emph{bounded variation} if it has bounded variation on all order intervals $[f,g]$ in $C(X).$
\end{definition}

Clearly, if $I$ is monotone, then $\vee_{f}^{g}I=I(g)-I(f)$ for all $f\leq g$ in $C(X)$ and thus $I$ has bounded variation.

More generally, every operator $I:C(X)\rightarrow E$ which can be represented as the difference $I=I_{1}-I_{2}$ of two monotone operators $I_{1},I_{2}:C(X)\rightarrow E$ has bounded variation. This follows from the order completeness of $E$ and the modulus inequality, which provides an upper bound for the sums appearing in formula (\ref{sumvar}):
\[
{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k})-I(f_{k-1})\right\vert \leq I_{1}(g)-I_{1}(f)+I_{2}(g)-I_{2}(f).
\]
Remarkably, the converse also holds.
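A minimal example of ours illustrating the gain in generality: for two distinct points $x_{0},x_{1}\in X$, the operator $I(f)=f(x_{0})-f(x_{1})$ from $C(X)$ to $\mathbb{R}$ is linear, hence comonotonic additive, and is the difference of the two monotone evaluation functionals $f\rightarrow f(x_{0})$ and $f\rightarrow f(x_{1})$, so it has bounded variation; yet $I$ is not monotone.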
The basic ingredient is the following result.

\begin{lemma}
\label{lemtech}Suppose that $I:C(X)\rightarrow E$ is a comonotonic additive operator with bounded variation. Then there exist two positively homogeneous, translation invariant and monotone operators $I_{1},I_{2}:C(X)\rightarrow E$ such that $I=I_{1}-I_{2}.$

Moreover, if $I$ is upper continuous, then both operators $I_{1}$ and $I_{2}$ can be chosen to be upper continuous.
\end{lemma}

\begin{proof}
The proof follows in the footsteps of Lemma 14 in \cite{CMMM2012}, by noticing first the following four facts:

$(a)$ According to our hypotheses,
\[
I(\alpha f+\beta)=\alpha I(f)+I(\beta)
\]
for all $f\in C(X),$ $\alpha\in\mathbb{R}_{+}$ and $\beta\in\mathbb{R}.$

$(b)$ $\vee_{0}^{f+\alpha}I=\vee_{-\alpha}^{f}I$ for all $f\in C(X)$ and $\alpha\in\mathbb{R}$ with $f+\alpha\geq0.$ This follows from the definition of the variation.

$(c)$ $\vee_{0}^{\alpha f}I=\alpha\vee_{0}^{f}I$ for all $f\in C(X)_{+}$ and $\alpha\in\mathbb{R}_{+}.$ Indeed, for every $\varepsilon\in E,$ $\varepsilon>0,$ there exists a chain $0=f_{0}\leq f_{1}\leq\cdots\leq f_{n}=f$ such that
\[
{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k})-I(f_{k-1})\right\vert \geq\vee_{0}^{f}I-\varepsilon.
\]
According to fact $(a)$ the chain $0=\alpha f_{0}\leq\alpha f_{1}\leq\cdots\leq\alpha f_{n}=\alpha f$ verifies
\[
\vee_{0}^{\alpha f}I\geq{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(\alpha f_{k})-I(\alpha f_{k-1})\right\vert \geq\alpha\vee_{0}^{f}I-\alpha\varepsilon.
\]
As $\varepsilon>0$ was arbitrarily fixed, it follows that $\vee_{0}^{\alpha f}I\geq\alpha\vee_{0}^{f}I.$ By replacing $\alpha$ by $1/\alpha$ and then $f$ by $\alpha f$, one obtains the reverse inequality, $\vee_{0}^{\alpha f}I\leq\alpha\vee_{0}^{f}I.$

$(d)$ $\vee_{-\alpha}^{f}I=\vee_{-\alpha}^{0}I+\vee_{0}^{f}I=\vee_{0}^{\alpha}I+\vee_{0}^{f}I$ for all $f\in C(X)_{+}$ and $\alpha\in\mathbb{R}_{+}.$

The fact that $\vee_{-\alpha}^{f}I\geq\vee_{-\alpha}^{0}I+\vee_{0}^{f}I=\vee_{0}^{\alpha}I+\vee_{0}^{f}I$ is a direct consequence of the definition of variation. For the other inequality, fix arbitrarily $\varepsilon>0$ in $E$ and choose a chain $-\alpha=f_{0}\leq f_{1}\leq\cdots\leq f_{n}=f$ such that
\[
{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k})-I(f_{k-1})\right\vert \geq\vee_{-\alpha}^{f}I-\varepsilon.
\]
Then $-\alpha=-f_{0}^{-}\leq-f_{1}^{-}\leq\cdots\leq-f_{n}^{-}=0$ and $0=f_{0}^{+}\leq f_{1}^{+}\leq\cdots\leq f_{n}^{+}=f.$ Since all pairs $-f_{k}^{-},f_{k}^{+}$ are comonotonic, we have
\begin{align*}
\vee_{-\alpha}^{0}I+\vee_{0}^{f}I & \geq{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(-f_{k}^{-})-I(-f_{k-1}^{-})\right\vert +{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k}^{+})-I(f_{k-1}^{+})\right\vert \\
& \geq{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(-f_{k}^{-})-I(-f_{k-1}^{-})+I(f_{k}^{+})-I(f_{k-1}^{+})\right\vert \\
& ={\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert I(f_{k})-I(f_{k-1})\right\vert \geq\vee_{-\alpha}^{f}I-\varepsilon,
\end{align*}
and, $\varepsilon>0$ being arbitrary, the conclusion follows.

Now we can proceed to the choice of the operators $I_{1}$ and $I_{2}.$ By definition,
\[
I_{1}(f)=\vee_{0}^{f}I\text{\quad if }f\in C(X)_{+},
\]
while if $f\in C(X)$ is an arbitrary function, one chooses $\alpha\in\mathbb{R}_{+}$ such that $f+\alpha\geq0$ and puts
\[
I_{1}(f)=\vee_{0}^{f+\alpha}I-\alpha\vee_{0}^{1}I.
\]
The fact that $I_{1}$ is well-defined (that is, independent of $\alpha)$ can be proved as follows.
Suppose that $\alpha,\beta>0$ are two numbers such that $f+\alpha\geq0$ and $f+\beta\geq0.$ Without loss of generality we may assume that $\alpha<\beta.$ Then, according to the facts $(b)-(d)$, we have
\begin{align*}
\vee_{0}^{f+\beta}I-\beta\vee_{0}^{1}I & =\vee_{0}^{f+\alpha+(\beta-\alpha)}I-\beta\vee_{0}^{1}I\overset{\text{fact }(b)}{=}\vee_{-(\beta-\alpha)}^{f+\alpha}I-\beta\vee_{0}^{1}I\\
& \overset{\text{fact }(d)}{=}\vee_{-(\beta-\alpha)}^{0}I+\vee_{0}^{f+\alpha}I-\beta\vee_{0}^{1}I\\
& \overset{\text{fact }(b)}{=}\vee_{0}^{\beta-\alpha}I+\vee_{0}^{f+\alpha}I-\beta\vee_{0}^{1}I\\
& \overset{\text{fact }(c)}{=}(\beta-\alpha)\vee_{0}^{1}I+\vee_{0}^{f+\alpha}I-\beta\vee_{0}^{1}I\\
& =\vee_{0}^{f+\alpha}I-\alpha\vee_{0}^{1}I.
\end{align*}
By definition,
\[
I_{2}=I_{1}-I,
\]
so that $I=I_{1}-I_{2}.$

Let $f,g$ be two functions in $C(X)$ such that $f\leq g$ and let $\alpha>0$ be such that $f+\alpha\geq0.$ Since $I$ has bounded variation,
\[
I(g)-I(f)=I(g+\alpha)-I(f+\alpha)\leq\vee_{f+\alpha}^{g+\alpha}I\leq\vee_{0}^{g+\alpha}I-\vee_{0}^{f+\alpha}I=I_{1}(g)-I_{1}(f),
\]
and since also $\vee_{f+\alpha}^{g+\alpha}I\geq0,$ we infer that both operators $I_{1}$ and $I_{2}=I_{1}-I$ are monotonic. Due to the fact $(c),$ the operator $I_{1}$ is positively homogeneous. Therefore the same is true for $I_{2}.$

For the property of translation invariance, let $f\in C(X)$, $\beta\in\mathbb{R}$ and choose $\alpha>0$ such that $f+\beta+\alpha\geq0$ and $\beta+\alpha\geq0.$ Then
\begin{align*}
I_{1}(f+\beta) & =\vee_{0}^{f+\beta+\alpha}I-\alpha\vee_{0}^{1}I\\
& =I_{1}(f+\beta+\alpha)-\alpha\vee_{0}^{1}I,
\end{align*}
while from facts $(b)\&(d)$ we infer that $I_{1}(f)=I_{1}(f+\beta+\alpha)-\left( \beta+\alpha\right) \vee_{0}^{1}I.$ Therefore
\[
I_{1}(f+\beta)=I_{1}(f)+\beta I_{1}(1),
\]
which proves that indeed $I_{1}$ is translation invariant. The same holds for $I_{2}=I_{1}-I.$

As concerns the second part of Lemma \ref{lemtech}, it suffices to prove that $I_{1}$ is upper continuous when $I$ has this property. Let $(f_{n})_{n}$ be a decreasing sequence in $C(X)$ which converges pointwise to a function $f$ also in $C(X).$ By Dini's lemma, $\left\Vert f_{n}-f\right\Vert \rightarrow0$. Since $I_{1}$ is a Lipschitz operator (see Lemma \ref{lem-aux-1*}) we conclude that $\left\Vert I_{1}(f_{n})-I_{1}(f)\right\Vert \rightarrow0.$ This ends the proof of Lemma \ref{lemtech}.
\end{proof}

Now it is clear that the representation Theorem \ref{thmEWgen} can be extended to the framework of comonotonic additive operators $I:C(X)\rightarrow E$ with bounded variation by considering Choquet-Bochner integrals associated to differences of vector capacities (that is, to set functions with bounded variation in the sense of Aumann and Shapley \cite{AS1974}).

Let $\Sigma$ be a lattice of subsets of $X$ and $\boldsymbol{\mu}:\Sigma\rightarrow E$ a set function taking values in a Banach lattice $E$ with order continuous norm.
The \emph{variation} of the set function $\boldsymbol{\mu}$ is the set function $\left\vert \boldsymbol{\mu}\right\vert $ defined by the formula
\[
\left\vert \boldsymbol{\mu}\right\vert (A)=\sup{\displaystyle\sum\nolimits_{k=1}^{n}} \left\vert \boldsymbol{\mu}(A_{k})-\boldsymbol{\mu}(A_{k-1})\right\vert \text{\quad for }A\in\Sigma,
\]
where the supremum is taken over all finite chains $A_{0}=\emptyset\subset A_{1}\subset\cdots\subset A_{n}\subset A$ of sets in the lattice $\Sigma.$ The space of set functions with bounded variation,
\[
\operatorname*{bv}(\Sigma,E)=\left\{ \boldsymbol{\mu}:\Sigma\rightarrow E:\boldsymbol{\mu}(\emptyset)=0\text{ and }\left\vert \boldsymbol{\mu}\right\vert (X)\text{ exists in }E\right\} ,
\]
is a normed vector space when endowed with the norm
\[
\left\Vert \boldsymbol{\mu}\right\Vert _{\operatorname*{bv}}=\left\Vert \left\vert \boldsymbol{\mu}\right\vert (X)\right\Vert .
\]
Associated to every set function $\boldsymbol{\mu}\in\operatorname*{bv}(\Sigma,E)$ are two positive vector-valued set functions, the \emph{inner upper variation} of $\boldsymbol{\mu},$ defined by
\[
\boldsymbol{\mu}^{+}(A)=\sup{\displaystyle\sum\nolimits_{k=1}^{n}} \left( \boldsymbol{\mu}(A_{k})-\boldsymbol{\mu}(A_{k-1})\right) ^{+}\text{\quad for }A\in\Sigma,
\]
and the \emph{inner lower variation} of $\boldsymbol{\mu},$ defined by
\[
\boldsymbol{\mu}^{-}(A)=\sup{\displaystyle\sum\nolimits_{k=1}^{n}} \left( \boldsymbol{\mu}(A_{k})-\boldsymbol{\mu}(A_{k-1})\right) ^{-}\text{\quad for }A\in\Sigma;
\]
in both cases the supremum is taken over all finite chains $A_{0}=\emptyset\subset A_{1}\subset\cdots\subset A_{n}\subset A$ of sets in $\Sigma.$ Notice that
\[
\boldsymbol{\mu}=\boldsymbol{\mu}^{+}-\boldsymbol{\mu}^{-}\text{ and }\left\vert \boldsymbol{\mu}\right\vert =\boldsymbol{\mu}^{+}+\boldsymbol{\mu}^{-}.
\]

\begin{lemma}
\label{lemmesbv}We assume that $E$ is a Banach lattice with order continuous norm. The following conditions are equivalent for a set function $\boldsymbol{\mu}:\Sigma\rightarrow E:$

$(a)$ $\boldsymbol{\mu}\in\operatorname*{bv}(\Sigma,E);$

$(b)$ $\boldsymbol{\mu}^{+}$ and $\boldsymbol{\mu}^{-}$ are vector capacities$;$

$(c)$ there exist two vector capacities $\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}_{2}$ on $\Sigma$ such that $\boldsymbol{\mu}=\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2}.$

Moreover, for any such decomposition we have $\boldsymbol{\mu}^{+}\leq\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}^{-}\leq\boldsymbol{\mu}_{2}$.
\end{lemma}

The details are similar to those presented in \cite{AS1974}, Ch. 1, \S 4. We are now in a position to state the main result of this section.

\begin{theorem}
\label{thmfinal}Let $X$ be a compact Hausdorff space and $E$ be a Banach lattice with order continuous norm. Then for every comonotonic additive operator $I:C(X)\rightarrow E$ with bounded variation there exists a unique upper continuous set function $\boldsymbol{\mu}:\Sigma_{up}^{+}(X)\rightarrow E$ with bounded variation such that
\begin{align}
I(f) & =\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t\label{genreprformula}\\
& +\int_{-\infty}^{0}\left[ \boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) -\boldsymbol{\mu}(X)\right] \mathrm{d}t\nonumber
\end{align}
for all $f\in C(X).$
\end{theorem}

\begin{proof}
By Lemma \ref{lemtech}, there exist two operators $I_{1},I_{2}:C(X)\rightarrow E$ that are monotone, translation invariant, positively homogeneous, upper continuous and such that $I=I_{1}-I_{2}$.
Define $\boldsymbol{\mu}:\Sigma_{up}^{+}(X)\rightarrow E$ by $\boldsymbol{\mu}=\boldsymbol{\mu}_{1}-\boldsymbol{\mu}_{2},$ where $\boldsymbol{\mu}_{1},\boldsymbol{\mu}_{2}$ are associated to the operators $I_{1}$ and $I_{2}$ via Theorem \ref{thmEWgen}. Then $\boldsymbol{\mu}$ is upper continuous and by Lemma \ref{lemmesbv} it also has bounded variation.

We now prove that the representation formula (\ref{genreprformula}) holds. Suppose that $f\in C(X)_{+}.$ The integral
\[
\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t
\]
is well defined as a Bochner integral (see Lemma \ref{lem6}). Given $\varepsilon>0,$ one can choose an equidistant division $0=t_{0}<\cdots<t_{m}=\left\Vert f\right\Vert $ of $[0,\left\Vert f\right\Vert ]$ such that
\begin{align}
& \left\Vert \int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-\sum_{k=0}^{m-1}\boldsymbol{\mu}\left( \{x:f(x)\geq t_{k}\}\right) (t_{k+1}-t_{k})\right\Vert \nonumber\\
& =\left\Vert \int_{0}^{\left\Vert f\right\Vert }\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-\sum_{k=0}^{m-1}\boldsymbol{\mu}\left( \{x:f(x)\geq t_{k}\}\right) (t_{k+1}-t_{k})\right\Vert <\varepsilon\label{cond0}
\end{align}
and $\left\Vert f\right\Vert /m<\varepsilon$. Denote $C_{k}=\{x:f(x)\geq t_{k}\}$ for $k=0,...,m-1.$ By the definition of $\boldsymbol{\mu}_{1}$ and $\boldsymbol{\mu}_{2}$ (see the proof of Theorem \ref{thmEWgen}) one can choose functions $f_{n}^{C_{k}}\in C(X)_{+}$ such that
\begin{equation}
\left\Vert \boldsymbol{\mu}\left( C_{k}\right) -I(f_{n}^{C_{k}})\right\Vert <\varepsilon/\left\Vert f\right\Vert \text{ and }n^{-1}<\left\Vert f\right\Vert /m. \label{cond1}
\end{equation}
Because the functions $f_{n}^{C_{k}}$ are defined by the formula (\ref{Ury}) and $n^{-1}<\left\Vert f\right\Vert /m$, it follows that the functions $f_{n}^{C_{i}}(t_{i+1}-t_{i})$ and $\sum_{k=i+1}^{m-1}f_{n}^{C_{k}}(t_{k+1}-t_{k})$ are comonotonic for all indices $i,$ so that
\begin{align*}
I(\sum_{k=0}^{m-1}f_{n}^{C_{k}}(t_{k+1}-t_{k})) & =I(f_{n}^{C_{0}})(t_{1}-t_{0})+I(\sum_{k=1}^{m-1}f_{n}^{C_{k}}(t_{k+1}-t_{k}))\\
& =\cdots=\sum_{k=0}^{m-1}I(f_{n}^{C_{k}})(t_{k+1}-t_{k});
\end{align*}
the property of positive homogeneity of $I$ is assured by Lemma \ref{lemtech}. Notice that
\[
f(x)\leq\sum_{k=0}^{m-1}f_{n}^{C_{k}}(x)(t_{k+1}-t_{k})\leq f(x)+2\varepsilon\text{\quad for all }x\in X.
\]
Since the operators $I_{1},I_{2}$ are monotone and translation invariant, it follows that
\[
I_{j}(f)\leq I_{j}(\sum_{k=0}^{m-1}f_{n}^{C_{k}}(t_{k+1}-t_{k}))\leq I_{j}(f)+2\varepsilon I_{j}(1)\text{\quad for }j\in\{1,2\},
\]
whence
\begin{equation}
\left\Vert I_{j}(\sum_{k=0}^{m-1}f_{n}^{C_{k}}(t_{k+1}-t_{k}))-I_{j}(f)\right\Vert \leq2\varepsilon\left\Vert I_{j}(1)\right\Vert \text{\quad for }j\in\{1,2\}.
\label{cond2}
\end{equation}
Therefore
\begin{align*}
& \left\Vert \int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-I(f)\right\Vert \\
& \leq\left\Vert \int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-\sum_{k=0}^{m-1}I(f_{n}^{C_{k}})(t_{k+1}-t_{k})\right\Vert \\
& \quad+\left\Vert \sum_{k=0}^{m-1}I(f_{n}^{C_{k}})(t_{k+1}-t_{k})-I(f)\right\Vert \\
& \overset{\text{see }(\ref{cond2})}{\leq}\left\Vert \int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-\sum_{k=0}^{m-1}I(f_{n}^{C_{k}})(t_{k+1}-t_{k})\right\Vert \\
& \quad+2\varepsilon\left\Vert I_{1}(1)\right\Vert +2\varepsilon\left\Vert I_{2}(1)\right\Vert \\
& \leq\left\Vert \int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t-\sum_{k=0}^{m-1}\boldsymbol{\mu}\left( C_{k}\right) (t_{k+1}-t_{k})\right\Vert \\
& \quad+\left\Vert \sum_{k=0}^{m-1}\boldsymbol{\mu}\left( C_{k}\right) (t_{k+1}-t_{k})-\sum_{k=0}^{m-1}I(f_{n}^{C_{k}})(t_{k+1}-t_{k})\right\Vert \\
& \quad+2\varepsilon\left\Vert I_{1}(1)\right\Vert +2\varepsilon\left\Vert I_{2}(1)\right\Vert \\
& \overset{\text{see }(\ref{cond0})\&(\ref{cond1})}{\leq}2\varepsilon\left( 1+\left\Vert I_{1}(1)\right\Vert +\left\Vert I_{2}(1)\right\Vert \right) .
\end{align*}
Since $\varepsilon>0$ was arbitrarily fixed, the above reasoning yields formula (\ref{genreprformula}) in the case of positive functions.

If $f\notin C(X)_{+},$ then $f+\left\Vert f\right\Vert \in C(X)_{+}$ and by the preceding considerations we have
\begin{multline*}
I(f)+\left\Vert f\right\Vert I(1)=I(f+\left\Vert f\right\Vert )=\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)+\left\Vert f\right\Vert \geq t\}\right) \mathrm{d}t\\
=\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t+\int_{-\left\Vert f\right\Vert }^{0}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t\\
=\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t+\int_{-\left\Vert f\right\Vert }^{0}\left[ \boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) -\boldsymbol{\mu}(X)\right] \mathrm{d}t+\left\Vert f\right\Vert \boldsymbol{\mu}(X)\\
=\int_{0}^{+\infty}\boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) \mathrm{d}t+\int_{-\infty}^{0}\left[ \boldsymbol{\mu}\left( \{x:f(x)\geq t\}\right) -\boldsymbol{\mu}(X)\right] \mathrm{d}t+\left\Vert f\right\Vert I(1).
\end{multline*}
The proof of the representation formula (\ref{genreprformula}) is now complete.

As concerns the uniqueness of $\boldsymbol{\mu}$, suppose that $\boldsymbol{\nu}$ is another upper continuous set function with bounded variation for which the formula (\ref{genreprformula}) holds. Given a set $K=\left\{ x:f(x)\geq t\right\} \in\Sigma_{up}^{+}(X),$ it is known that the functions $f_{n}^{K}:X\rightarrow\lbrack0,1]$ defined by the formula (\ref{Ury}) decrease to $\chi_{K}.$ This implies that $(\left\{ x:f_{n}^{K}(x)\geq t\right\} )_{n}$ is decreasing to $K$ for each $t\in(0,1].$ Consider the sequence of functions
\[
\varphi_{n}:[0,1]\rightarrow E,\text{\quad}\varphi_{n}(t)=\boldsymbol{\nu}(\left\{ x:f_{n}^{K}(x)\geq t\right\} ).
\]
Notice that all these functions have bounded variation, their variations being bounded by $\left\vert \boldsymbol{\nu}\right\vert (X).$ By Lebesgue dominated convergence,
\[
\lim_{n\rightarrow\infty}\int_{0}^{1}\varphi_{n}(t)\mathrm{d}t=\boldsymbol{\nu}(K).
\]
On the other hand, by (\ref{genreprformula}) and the definition of $\boldsymbol{\mu},$
\[
\boldsymbol{\mu}(K)=\lim_{n\rightarrow\infty}I(f_{n}^{K})=\lim_{n\rightarrow\infty}\int_{0}^{1}\varphi_{n}(t)\,\mathrm{d}t=\boldsymbol{\nu}(K),
\]
which ends the proof of the uniqueness.
\end{proof}

\section{Open problems}

We end our paper by mentioning a few open problems that might be of interest to our readers.

\begin{problem}
\label{probl1}Is the order continuity of the norm of $E$ a necessary condition for the validity of Theorems \ref{thmEWgen} and \ref{thmfinal}?
\end{problem}

As was noticed by Bartle, Dunford and Schwartz \cite{BDS}, much of the theory of weakly compact linear operators defined on a space $C(X)$ is dominated by the concept of absolute continuity. For more recent contributions see \cite{N1975}, \cite{N1979} and \cite{N2009}.

Suppose that $\mathcal{A}$ is a $\sigma$-algebra and $E$ is a Banach lattice with order continuous norm. A vector capacity $\boldsymbol{\mu}:\mathcal{A}\rightarrow E_{+}$ is called absolutely continuous with respect to a capacity $\lambda:\mathcal{A}\rightarrow\lbrack0,\infty)$ (denoted $\boldsymbol{\mu}\ll\lambda$) if for every $\varepsilon>0$ there is $\delta>0$ such that
\[
A\in\mathcal{A},\text{ }\lambda(A)<\delta\Longrightarrow\left\Vert \boldsymbol{\mu}(A)\right\Vert <\varepsilon.
\]
The following lemma extends a result proved by Pettis in the context of $\sigma$-additive measures.

\begin{lemma}
\label{probl2}If $\boldsymbol{\mu}$ is upper continuous and $\lambda$ is upper continuous and supermodular, then the condition $\boldsymbol{\mu}\ll\lambda$ is equivalent to the following one:
\[
A\in\mathcal{A}\text{ and }\lambda(A)=0\Longrightarrow\boldsymbol{\mu}(A)=0.
\]
\end{lemma}

The proof is immediate, by reductio ad absurdum.

\begin{problem}
\label{probl3}Suppose $\boldsymbol{\mu}:\mathcal{A}\rightarrow E_{+}$ is an upper continuous vector capacity. Does there exist a capacity $\lambda:\mathcal{A}\rightarrow\lbrack0,\infty)$ such that $\boldsymbol{\mu}\ll\lambda?$ If yes, is it possible to choose $\lambda$ of the form $\lambda=x^{\ast}\circ\boldsymbol{\mu}$ for a suitable $x^{\ast}\in E_{+}^{\ast}?$
\end{problem}

Also of interest is the following operator analogue of Problem \ref{probl3}:

\begin{problem}
Does there exist for each Choquet operator $I:C(X)\rightarrow E$ a functional $x^{\ast}\in E_{+}^{\ast}$ such that for every $\varepsilon>0$ there is $\delta>0$ with the property
\[
\left\Vert I(f)\right\Vert \leq\varepsilon\left\Vert f\right\Vert +\delta x^{\ast}\left( \left\vert f\right\vert \right) \text{\quad for all }f\in C(X)?
\]
\end{problem}

{\bf Declaration of interests:} none.
\section{Introduction}
Polymer confinement plays an important role in many aspects of adhesion, wetting, lubrication, and friction of complex fluids, from both theoretical and technological points of view. For more than two decades, theoretical and experimental works~\cite{Thompson1992,Eisenriegler1993,Aoyagi2001,Harmandaris2005,Batistakis2012,FKremer2014,Pressly2019} have shown that the dynamic and structural properties of polymers subject to confinement may deviate from those in the bulk as the interaction between polymers and confining surfaces becomes non-negligible. For example, the conformations of polymer chains in a melt near a wall shrink remarkably in the direction perpendicular to the wall compared to bulk chains, while they extend only slightly parallel to the wall~\cite{Aoyagi2001,Pakula1991,Cavallo2005}. The mobility of long chains in confined melts is also affected by entanglement effects, while in turn the conformational deviations from bulk chains influence the distribution of entanglements~\cite{Shin2007,Sussman2014,Russell2017,Lee2017,Garcia2018}. Furthermore, many studies have focused on the dependence of the glass transition temperature $T_g$ on the nature of the confinement~\cite{Alcoutlabi2005,Ediger2014,Vogt2018}. Therefore, it is important to understand the mechanical properties of confined polymer melts and how confinement impacts both the viscous and the elastic properties of amorphous polymer films with different surface substrates or even free surfaces.

Computer simulations provide a powerful method to mimic the behavior of polymers under well-defined external conditions, covering the range from atomic to coarse-grained (CG) scales~\cite{Binder2005,Barrat2010}. However, the cost of computing time rises dramatically as the size and complexity of the systems increase. Therefore, developing appropriate coarse-grained models that preserve the global thermodynamic properties as well as the local mechanical and chemical properties remains an important subject~\cite{Murat1998,Kremer1990,Kremer1992,Plathe2002,Harmandaris2006,Gujrati2010,Vettorel2010,Zhang2013,Karatrantos2019}. One of the successful monomer-based models, namely the bead-spring (BS) model~\cite{Kremer1990,Kremer1992} together with an additional bond-bending potential~\cite{Faller1999,Faller2000,Faller2001,Everaers2004}, has been successfully employed to provide a better understanding of the generic scaling properties of polymer melts in bulk. For such a model, static and dynamic properties of highly entangled polymer melts in bulk have been extensively studied in our previous work~\cite{Hsu2016,Hsu2017}. We have verified the crossover scaling behavior of the mean square displacement of monomers between several characteristic time scales, as predicted by the Rouse model~\cite{Rouse1953} and reptation theory~\cite{deGennes1979,Doi1980,Doi1983,Doi1986}, over several orders of magnitude in time. For weakly semiflexible polymer chains of up to $N=2000$ monomers we have also confirmed that they behave as ideal chains to a very good approximation. For these chains the entanglement length in the melt is $N_e=28$, estimated through the primitive path analysis and confirmed by the plateau modulus~\cite{Everaers2004,Moreira2015,Hsu2016}, so that chains of $N=2000$ monomers correspond to about $72 N_e$. Thus, we here focus on this model and a related variant for the study of the equilibration of polymer melts confined between two repulsive walls as supported films, and of free-standing films after the walls are removed.
Each film contains $n_c=1000$ chains of $N=2000$ monomers at the bulk melt density $\rho_0=0.85\sigma^{-3}$. Directly equilibrating such large and highly entangled chains in bulk or confinement is not feasible within reasonably accessible computing time. A novel and very efficient methodology has recently been developed~\cite{Zhang2013,Zhang2014} for equilibrating large and highly entangled polymer melts in bulk described by the bead-spring model~\cite{Kremer1990,Kremer1992}. Through a hierarchical backmapping of CG chains described by the soft-sphere CG model~\cite{Vettorel2010,Zhang2013} from low resolution to high resolution and a reinsertion of the microscopic details of the bead-spring chains, highly entangled polymer melts in bulk are finally equilibrated by molecular dynamics (MD) simulations using the package ESPResSo++~\cite{Espressopp,Espressopp20}. To first order, the required computing time depends only on the overall system size and becomes independent of chain length. Similar methodologies have also been used to equilibrate high-molecular-weight polymer blends~\cite{Ohkuma2018} and polystyrene melts~\cite{Zhang2018}.

In this paper, we extend the application of the soft-sphere approach to confined polymer melts and subsequently to free-standing films. As polymer chains are described at a lower resolution, the number of degrees of freedom becomes smaller. Here we adapt this hierarchical approach in detail to equilibrate polymer melts confined between two walls. Moreover, we apply our newly developed, related model~\cite{Hsu2019,Hsu2019e} to prepare polymer films with one or two free surfaces. Differently from Refs.~\onlinecite{Vettorel2010,Zhang2014}, we take the bending elasticity of the bead-spring chains in a bulk melt into account for the parameterization of the soft-sphere CG model. Namely, the underlying microscopic bead-spring chains are weakly semiflexible (bending constant $k_\theta=1.5\epsilon$) instead of fully flexible ($k_\theta=0.0\epsilon$).

The outline of this paper is as follows: In Sec.~II, we introduce the main features of the microscopic bead-spring model and of the soft-sphere coarse-grained model used for studying the confined polymer melts. The application of the soft-sphere CG model to confined CG melts, and the conformational properties of fully equilibrated confined CG melts, are addressed in Sec.~III. In Sec.~IV, we reinsert the microscopic details into the confined CG melts and discuss the equilibration procedures. In Sec.~V, we show how to prepare films with one or two free surfaces by switching to another variant of the bead-spring model. Finally, our conclusions are given in Sec.~VI.

\section{Models}
\subsection{Generic microscopic bead-spring models}
\label{BSM}
In the microscopic bead-spring model~\cite{Kremer1990,Kremer1992}, all monomers at a distance $r$ interact via a shifted Lennard-Jones (LJ) potential $U_{\rm LJ}(r)$,
\begin{eqnarray}
U_{\rm LJ}(r)=\left\{\begin{array}{ll}
V_{\rm LJ}(r)-V_{\rm LJ}(r=r_{\rm cut}) & \;, \, r \le r_{\rm cut} \\
0 &\;, \, r>r_{\rm cut}
\end{array} \right. \, ,
\label{eq-ULJ}
\end{eqnarray}
with
\begin{equation}
V_{\rm LJ}(r)=4\epsilon\left[\left(\frac{\sigma}{r}\right)^{12}-\left(\frac{\sigma}{r}\right)^{6} \right]
\end{equation}
where $\epsilon$ is the energy strength of the pairwise interaction, and the cutoff is placed at the minimum of the potential, $r_{\rm cut}=2^{1/6}\sigma$, such that both force and potential vanish at $r_{\rm cut}$.
These LJ units also provide a natural time definition via $\tau= \sigma \sqrt{m/\epsilon}$, $m=1$ being the mass of the monomers. The temperature is set to $T = 1.0\epsilon/k_B$, $k_B$ being the Boltzmann constant, which is set to one. Along the backbone of the chains a finitely extensible nonlinear elastic (FENE) binding potential $U_{\rm FENE}(r)$ is added,
\begin{eqnarray}
U_{\rm FENE}(r) = \left\{\begin{array}{ll}
-\frac{k}{2}R_0^2 \ln \left[1-\left(\frac{r}{R_0}\right)^2 \right] & \;,\, r < R_0 \\
\infty & \;, \, r \ge R_0
\end{array} \right . \, ,
\label{eq-UFENE}
\end{eqnarray}
where $k=30 \epsilon/\sigma^2$ is the force constant and $R_0=1.5 \sigma$ is the maximum bond length. For controlling the bending elasticity, i.e., the chain stiffness, the standard bond-bending potential~\cite{Faller1999,Faller2000,Faller2001,Everaers2004} is given by
\begin{equation}
U_{\rm BEND}^{\rm (old)}(\theta)=k_\theta (1-\cos \theta) \,, \qquad 0<\theta <\pi
\label{eq-UBENDold}
\end{equation}
with the bond angle $\theta$ defined by
$
\theta=\cos^{-1}\left( \frac{\vec{b}_j \cdot \vec{b}_{j+1}} {\mid \vec{b}_j \mid \mid \vec{b}_{j+1} \mid} \right)
$
where $\vec{b}_j=\vec{r}_{j+1}-\vec{r}_{j}$ is the bond vector connecting the $j$th and the $(j+1)$th monomer of the same chain. The bending factor $k_\theta$ is set to $1.5\epsilon$ and the melt density is set to the widely used value of $0.85 \sigma^{-3}$ throughout the whole paper.

For studying polymer melts under confinement, we first consider the simpler example of polymer melts confined between two planar, structureless repulsive walls. The walls, placed at $z=0$ and $z=L_z$, are described by the 10-4 Lennard-Jones planar wall potential~\cite{Grest1996,Aoyagi2001},
\begin{eqnarray}
U_{\rm wall}(z)=\left\{\begin{array}{ll}
V_{\rm wall}(z)-V_{\rm wall}(z=\sigma) \quad &,\quad z \leq \sigma \\
V_{\rm wall}(L_z-z)-V_{\rm wall}(L_z-z=\sigma) \quad &,\quad L_z-z \leq \sigma \\
0 &,\quad {\rm otherwise}
\end{array} \right .
\label{eq-Uwall}
\end{eqnarray}
with
\begin{equation}
V_{\rm wall}(z)= 4 \pi \varepsilon_w \left [\frac{1}{5}\left(\frac{\sigma}{z}\right)^{10}-\frac{1}{2} \left(\frac{\sigma}{z}\right)^4 \right] \,.
\label{eq-Vwall}
\end{equation}
Here $\varepsilon_w$ is the interaction strength between the monomers and the walls, and $z$ and $L_z-z$ are the vertical distances of a monomer from the two walls, respectively.

For preparing polymer films with one or two free surfaces, we have to stabilize the system at zero pressure, particularly in the direction perpendicular to the walls. Only then can we switch off the wall potential without rendering the system unstable. Therefore, a short-range attractive potential, which reduces the pressure to zero, is added~\cite{Hsu2019,Hsu2019e} with an additional shift term,
\begin{eqnarray}
U_{\rm ATT}(r)=\left\{\begin{array}{ll}
\alpha \left[ \cos(\pi \left(\frac{r}{r_{\rm cut}} \right)^2) +1 \right] ,& {r_{\rm cut}} \leq r < r^a_{c} \\
& \\
0 ,& {\rm otherwise}
\end{array} \right . \,,
\label{eq-Uatt}
\end{eqnarray}
such that $U_{\rm ATT}(r)=U_{\rm LJ}(r)$ at $r=r_{\rm cut}$. Here $\alpha=0.5145\epsilon$ is the strength parameter and $r^a_{c}=\sqrt{2}r_{\rm cut} \approx 1.5874\sigma$ is the upper cutoff, chosen such that the force vanishes at $r=r_{\rm cut}$ and $r=r^a_{c}$. Note that this additional potential does not alter the characteristic conformations at $T= 1 \epsilon / k_B$, so that we can switch between these different models as needed~\cite{Kremer1990,Kremer1992,Auhl2003,Hsu2019,Hsu2019e}.
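For illustration, the bead-spring potentials introduced above can be evaluated with a few lines of Python/NumPy; the sketch below is ours (not part of any simulation package), uses the parameter values quoted in the text, and is vectorized over NumPy arrays of distances:
\begin{verbatim}
import numpy as np

EPS, SIG = 1.0, 1.0                 # LJ units
R_CUT = 2.0 ** (1.0 / 6.0)          # cutoff at the potential minimum

def v_lj(r):
    return 4.0 * EPS * ((SIG / r) ** 12 - (SIG / r) ** 6)

def u_lj(r):
    # Shifted, purely repulsive LJ potential, Eq. (eq-ULJ)
    return np.where(r <= R_CUT, v_lj(r) - v_lj(R_CUT), 0.0)

def u_fene(r, k=30.0, r0=1.5):
    # FENE bond potential, Eq. (eq-UFENE); diverges for r >= R0
    r = np.atleast_1d(np.asarray(r, dtype=float))
    u = np.full_like(r, np.inf)
    inside = r < r0
    u[inside] = -0.5 * k * r0**2 * np.log(1.0 - (r[inside] / r0) ** 2)
    return u

def u_wall(z, lz, eps_w=0.005):
    # 10-4 LJ wall potential for walls at z = 0 and z = Lz, Eq. (eq-Uwall)
    v = lambda d: 4.0 * np.pi * eps_w * (0.2 * (SIG / d) ** 10
                                         - 0.5 * (SIG / d) ** 4)
    z = np.atleast_1d(np.asarray(z, dtype=float))
    u = np.zeros_like(z)
    u = np.where(z <= SIG, v(z) - v(SIG), u)          # lower wall
    u = np.where(lz - z <= SIG, v(lz - z) - v(SIG), u)  # upper wall
    return u
\end{verbatim}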
Note that with $U_{\rm BEND}^{\rm (old)}(\theta)$, bead-spring chains tend to stretch out as the temperature decreases. To avoid such an artificial chain stretching in studies of polymer melts under cooling, Eq.~(\ref{eq-UBENDold}) can be replaced by~\cite{Hsu2019,Hsu2019e}
\begin{equation}
U_{\rm BEND}(\theta) = -a_\theta \sin^2 (b_\theta \theta) \,, \qquad 0 < \theta <\theta_c=\pi/b_\theta \,.
\label{eq-UBEND}
\end{equation}
The fitting parameters $a_\theta=4.5\epsilon$ and $b_\theta=1.5$ are determined such that the local conformations of the chains remain essentially unchanged compared to those obtained with $k_\theta=1.5\epsilon$ in Eq.~(\ref{eq-UBENDold}) at temperature $T=1.0\epsilon/k_B$.

\subsection{Soft-sphere coarse-grained model}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.70\textwidth,angle=0]{eqFilm_fig01.pdf}
\caption{Snapshot of the configuration of a bead-spring chain of $N=250$ monomers represented by a coarse-grained chain of $N_{\rm CG}=10$ fluctuating soft spheres. Each sphere corresponds to $N_b=25$ monomers, cf. text.}
\label{fig-bsmsph}
\end{center}
\end{figure*}
In the soft-sphere approach~\cite{Vettorel2010}, each bead-spring polymer chain in a melt is represented by a chain of $N_{\rm CG}=N/N_b$ fluctuating soft spheres in a CG view, as shown in Fig.~\ref{fig-bsmsph}. The coordinate of the center and the radius of the $s$th sphere in the $i$th chain, $\vec{r}_i(s)$ and $\sigma_i(s)$, are given by
\begin{equation}
\vec{r}_i(s)=\vec{R}_{{\rm CM},i}(s)=\frac{1}{N_b}\sum_{k=(s-1)N_b+1}^{sN_b}\vec{r}_{i,k}
\label{eq-mapping-cm}
\end{equation}
and
\begin{equation}
\sigma_i(s)=R_{g,i}(s)=\left(\frac{1}{N_b}\sum_{k=(s-1)N_b+1}^{sN_b} \mid \vec{r}_{i,k}-\vec{r}_i(s) \mid^2 \right)^{1/2} \,,
\label{eq-mapping-rg}
\end{equation}
respectively. Here $\vec{r}_{i,k}$ is the coordinate of the $k$th monomer in chain $i$, and $\vec{R}_{{\rm CM},i}(s)$ and $R_{g,i}(s)$ are the center of mass (CM) and the radius of gyration of the corresponding subchain of $N_b$ monomers, respectively. Since we extend the application of the soft-sphere CG model to semiflexible chains and follow slightly different strategies from Refs.~\onlinecite{Zhang2013,Zhang2014}, we list the soft-sphere potentials in detail in the following. The spheres are connected by a harmonic bond potential
\begin{equation}
U_{\rm bond}^{\rm (CG)}(d_i(s)) =\frac{3}{2b^2_{\rm CG}}\,d_i^2(s)
\label{eq-Ubond-CG}
\end{equation}
and an angular potential
\begin{equation}
U_{\rm ang}^{\rm (CG)}(\theta_i(s))=\frac{1}{2}k_{\rm bend}(1-\cos \theta_i(s)) \quad \textrm{with} \quad \cos \theta_i(s)=\frac{\vec{d}_i(s) \cdot \vec{d}_i(s+1)} {\mid \vec{d}_i(s)\mid \mid \vec{d}_i(s+1) \mid}
\label{eq-Uang-CG}
\end{equation}
where $\vec{d}_i(s)=\vec{r}_i(s+1)-\vec{r}_i(s)$ is the bond vector, $d_i(s)=\mid \vec{d}_i(s) \mid$ its length, and $b_{\rm CG}$ denotes the effective bond length. The radius of the $s$th sphere in chain $i$ and its fluctuation are controlled by the following two potentials~\cite{Vettorel2010,Zhang2013},
\begin{equation}
U_{\rm sphere}^{\rm (CG)}(\sigma_i(s))=a_1\frac{N_b^3}{\sigma^{6}_i(s)}+a_2\frac{\sigma_i^2(s)}{N_b}
\end{equation}
and
\begin{equation}
U_{\rm self}^{\rm (CG)}(\sigma_i(s))=a_3\sigma_i^{-3} (s)\,,
\label{eq-Uself-CG}
\end{equation}
respectively.
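As an illustration of the mapping in Eqs.~(\ref{eq-mapping-cm}) and (\ref{eq-mapping-rg}), a minimal Python sketch (ours, not part of ESPResSo++) that maps one microscopic chain onto its soft-sphere representation reads:
\begin{verbatim}
import numpy as np

def map_to_spheres(coords, n_b=25):
    """Map one bead-spring chain, coords of shape (N, 3), onto
    N/n_b soft spheres: centers are the subchain centers of mass,
    radii the subchain radii of gyration."""
    n = coords.shape[0]
    assert n % n_b == 0, "N must be a multiple of the blob size"
    blocks = coords.reshape(n // n_b, n_b, 3)
    centers = blocks.mean(axis=1)
    radii = np.sqrt(((blocks - centers[:, None, :]) ** 2)
                    .sum(axis=2).mean(axis=1))
    return centers, radii

# Example: an ideal random-walk chain of N = 2000 monomers
rng = np.random.default_rng(0)
chain = np.cumsum(rng.normal(scale=0.56, size=(2000, 3)), axis=0)
centers, radii = map_to_spheres(chain)   # 80 spheres
\end{verbatim}
Here the per-component step width $0.56\sigma$ is a hypothetical choice that merely reproduces a bond length of order $\ell_b \approx 0.97\sigma$ for the example chain.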
The non-bonded interactions between any two different spheres, reflecting the excluded volume interactions according to Flory's theory~\cite{Flory1949,deGennes1979,Murat1998}, are taken into account by the potential
\begin{eqnarray}
&& U_{\rm{nb}}^{\rm (CG)}(\vec{r}_i(s),\sigma_i(s);\vec{r}_{i'}(s'),\sigma_{i'}(s')) \nonumber \\
&&=\varepsilon_1 \left(\frac{2\pi(\sigma_i^2(s)+\sigma_{i'}^2(s'))}{3}\right)^{-3/2} \exp\left[-\frac{3(\vec{r}_i(s)-\vec{r}_{i'}(s'))^2}{2(\sigma_i^2(s)+\sigma_{i'}^2(s'))} \right] \, \textrm{for} \, i\neq i' \,\textrm{or} \, s \neq s'\,.
\label{eq-Unb-CG}
\end{eqnarray}
The parameters $k_{\rm bend}$, $a_1$, $a_2$, $a_3$, $b_{\rm CG}$, and $\varepsilon_1$, which depend on $N_b$, are determined numerically via curve fitting. In analogy to Eq.~(\ref{eq-Uwall}), the potential of the two soft repulsive planar walls located at $z=0$ and $z=L_z$, which depends on the radius of each sphere, is given by
\begin{eqnarray}
U_{\rm wall}^{\rm (CG)}(z)=\left\{\begin{array}{ll}
V_{\rm wall}^{\rm (CG)}(z)-V_{\rm wall}^{\rm (CG)}(z=\sigma_i(s)) \quad &,\quad z \leq \sigma_i(s) \\
V_{\rm wall}^{\rm (CG)}(L_z-z)-V_{\rm wall}^{\rm (CG)}(L_z-z=\sigma_i(s)) \quad &,\quad L_z-z \leq \sigma_i(s) \\
0 &,\quad {\rm otherwise}
\end{array} \right .
\label{eq-Uwall-CG}
\end{eqnarray}
with
\begin{equation}
V_{\rm wall}^{\rm (CG)}(z)= 4 \pi \varepsilon_w^{\rm (CG)} \left [\frac{1}{5}\left(\frac{\sigma_i(s)}{z}\right)^{10}-\frac{1}{2} \left(\frac{\sigma_i(s)}{z}\right)^4 \right]
\label{eq-Vwall-CG}
\end{equation}
where $\varepsilon_w^{\rm (CG)}$ is the interaction strength between the soft spheres and the walls, and $z$ and $L_z-z$ are the vertical distances from the two walls, respectively.

For the parameterization of the soft-sphere CG model, we take $15$ independent and fully equilibrated bulk polymer melts of bead-spring chains with $k_\theta=1.5\epsilon$, obtained in previous work~\cite{Zhang2014,Moreira2015,Hsu2016}, as our reference systems. Using Eqs.~(\ref{eq-mapping-cm}), (\ref{eq-mapping-rg}) with $N_b=25$, each melt, containing $n_c=1000$ bead-spring chains of $N=2000=N_{\rm CG}N_b$ monomers in a cubic simulation box of size $V=L^3$ ($L\approx 133 \sigma$, with periodic boundary conditions in the $x$-, $y$-, and $z$-directions) at the melt density $\rho_0=0.85\sigma^{-3}$, is mapped onto a CG melt containing $n_c=1000$ soft-sphere chains of $N_{\rm CG}=80$ spheres at the CG melt density $\rho_0^{\rm (CG)}(N_{\rm CG})=\rho_0N_{\rm CG}/N=0.034\sigma^{-3}$. The parameters $a_1=6.3444\times 10^{-3}\sigma^6$, $a_2=4.1674\sigma^{-2}$, $k_{\rm bend}=1.3229\epsilon$, $a_3=22.0\epsilon \sigma^3$, $b_{\rm CG}=6.68\sigma \epsilon^{-1/2}$, and $\varepsilon_1=290.0\epsilon \sigma^{3}$ are determined~\cite{Vettorel2010,Zhang2013} such that the average conformational properties of the reference melt systems in a CG representation are reproduced by fully equilibrated CG melts of soft-sphere chains. Quantitatively, the conformational properties are characterized by the probability distributions of the radius of the soft spheres, $P(\sigma,N_b)$, of the bond length connecting two successive soft spheres, $P(d,N_b)$, and of the bond angle between two successive bonds, $P(\theta,N_b)$, by the average mean square internal distance between the $j$th and the $(j+s)$th soft sphere along the same chain, $\langle R^2(s,N_b) \rangle$, and by the pair distribution function of all pairs of soft spheres, $g(r,N_b)$ (see Figs.~\ref{fig-psigma-meltCG}, \ref{fig-R2s-wall}, discussed in the next section).
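The Gaussian form of Eq.~(\ref{eq-Unb-CG}) translates directly into code; the following Python sketch (our illustration, with the parameter value quoted above) returns the pair energy of two overlapping spheres:
\begin{verbatim}
import numpy as np

EPS1 = 290.0   # strength parameter (in units of epsilon * sigma^3)

def u_nb_cg(r_a, r_b, sig_a, sig_b, eps1=EPS1):
    """Gaussian overlap repulsion between two soft spheres with
    centers r_a, r_b and radii sig_a, sig_b, Eq. (eq-Unb-CG)."""
    s2 = sig_a ** 2 + sig_b ** 2
    dr = np.asarray(r_a, dtype=float) - np.asarray(r_b, dtype=float)
    pref = eps1 * (2.0 * np.pi * s2 / 3.0) ** (-1.5)
    return pref * np.exp(-3.0 * np.dot(dr, dr) / (2.0 * s2))

# Two spheres of typical size ~3 sigma at center distance 4 sigma
print(u_nb_cg([0.0, 0.0, 0.0], [4.0, 0.0, 0.0], 3.0, 3.0))
\end{verbatim}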
Since the excluded volume effect among the $N_b$ monomers of each subchain is ignored in the parameterization of the soft-sphere approach and self-entanglements on this length scale are negligible ($N_b=25<N_e=28$), subchains behave as ideal chains (alternatively, one could include the excluded volume relevant for semidilute solutions via a Flory term). In this case, we can simplify the several steps of hierarchical backmapping~\cite{Zhang2014} to a single fine-graining step that reintroduces the microscopic details of the subchains once a CG melt has reached its equilibrated state.
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig02a.pdf} \hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig02b.pdf}\\
(c)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig02c.pdf}\\
\caption{Probability distributions of the radius $\sigma$ of the soft spheres, $P(\sigma)$ (a), of the bond length $d$ between two successive soft spheres, $P(d)$ (b), and of the bond angle $\theta$ between two successive bonds, $P(\theta)$ (c), for a fully equilibrated bulk CG melt and a confined CG melt ($n_c=1000$, $N_{\rm CG}=80$). Data for the reference systems ($n_c=1000$, $N=2000$) in a CG representation are also shown for comparison. Data are averaged over $30$ (bulk) and $60$ (confined melts) independent configurations.}
\label{fig-psigma-meltCG}
\end{center}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig03a.pdf} \hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig03b.pdf}\\
(c)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig03c.pdf} \hspace{0.1cm}
(d)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig03d.pdf}\\
\caption{(a)(b) Rescaled mean square internal distance, $\langle R^2(s,N_b) \rangle / s$, plotted as a function of $s$. (c)(d) Pair distribution function of sphere pairs, $g(r,N_b)$, plotted as a function of $r$. Data for a bulk CG melt, a confined CG melt, and a confined melt in a CG representation are shown, as indicated. In (a)(c), all $n_c=1000$ chains are considered, while only chains having their CMs in the interval $[0.3L_z, 0.7L_z]$ are counted in (b)(d). Data for the reference systems in a CG representation are also shown for comparison.}
\label{fig-R2s-wall}
\end{center}
\end{figure*}

\section{Equilibration of soft-sphere chains in a confined CG melt}
In this section, we extend the application of the soft-sphere CG model from polymer melts in bulk to polymer melts confined between two repulsive walls \{Eq.~(\ref{eq-Uwall-CG})\}, first focusing on the CG melt containing $n_c=1000$ chains of $N_{\rm CG}=80$ spheres at the CG bulk melt density $\rho_0^{\rm (CG)}=0.034\sigma^{-3}$. For the comparison to our reference systems, we set the distance between the two walls equal to the linear dimension of the bulk simulation box. Thus, we locate the two walls at $z=0\sigma$ and $z=L_z \approx 133 \sigma$, keeping periodic boundary conditions along the $x$- and $y$-directions with the lateral linear dimensions $L_x=L_y=133\sigma$. Of course, one can adjust $L_z$ and extend or reduce $L_x$, $L_y$ to keep the bulk melt density as needed.
The initial configurations of the soft-sphere chains in terms of $\{\sigma_i(s), d_i(s), \theta_i(s)\}$ are randomly generated according to their corresponding Boltzmann weights $\exp[-\beta U_{\rm sphere}^{\rm (CG)}(\sigma_i(s))]$, $\exp[-\beta U_{\rm bond}^{\rm (CG)}(d_i(s))]$, and $\exp[-\beta U_{\rm ang}^{\rm (CG)}(\theta_i(s))]$, respectively, where $\beta=1/(k_BT)=1.0\epsilon^{-1}$. Additionally, we require $1\sigma<\sigma_i(s)<\sigma_{\rm max}=8\sigma$ and $0\sigma<d_i(s)<d_{\rm max}=21\sigma$, and restrict the coordinates of the sphere centers, $\vec{r}_i(s)$, by the condition $\sigma_i(s)<r_{i,z}(s)<(L_z-\sigma_i(s))$. It is computationally more efficient to perform Monte Carlo (MC) simulations to equilibrate the confined CG melts. Similar to Ref.~\onlinecite{Zhang2013}, our simulation algorithm includes three types of MC moves: (i) For a local move, one of the $n_cN_{\rm CG}$ spheres is randomly selected, say the $s$th sphere of the $i$th chain; the sphere of radius $\sigma_i(s)$ at $\vec{r}_i(s)=(r_{i,x}(s), r_{i,y}(s), r_{i,z}(s))$ is displaced within the range $-\sigma_i(s)< \Delta r_{i,x}(s),\; \Delta r_{i,y}(s),\; \Delta r_{i,z}(s) < \sigma_i(s)$. The trial move is accepted if $\exp[-\beta (\Delta U_{\rm nb}^{\rm (CG)}+\Delta U_{\rm bond}^{\rm (CG)}+\Delta U_{\rm ang}^{\rm (CG)} +\Delta U_{\rm wall}^{\rm (CG)})]>\eta$, where $\eta\in[0,1)$ is a uniform random number. (ii) For a trial change of sphere size, a randomly selected sphere is assigned a new radius $\sigma_i^{\rm (new)}(s)$ drawn from the Boltzmann weight of $U_{\rm sphere}^{\rm (CG)}$; the trial move is accepted if $\exp[-\beta (\Delta U_{\rm nb}^{\rm (CG)}+\Delta U_{\rm self}^{\rm (CG)}+\Delta U_{\rm wall}^{\rm (CG)})]>\eta$. (iii) For a snake-slithering move, one end of the $n_c$ chains is randomly selected, and $\sigma_i^{\rm (new)}(s)$, $d_i^{\rm (new)}(s)$, and $\theta_i^{\rm (new)}(s)$ of the selected sphere are randomly generated according to their corresponding Boltzmann weights. The trial move is accepted if $\exp[-\beta (\Delta U_{\rm nb}^{\rm (CG)}+\Delta U_{\rm self}^{\rm (CG)}+\Delta U_{\rm wall}^{\rm (CG)})]>\eta$. A cutoff at $r_{\rm cut}^{\rm (CG)}=20\sigma$ for the non-bonded interactions between two different spheres, $U_{\rm nb}^{\rm (CG)}$, is also introduced~\cite{Vettorel2010}, since the contributions for $r>20\sigma$ are negligible. This has no detectable influence on any measured physical observable, while it speeds up the simulations by a factor of four. Applying a linked-cell algorithm with the cell size $L_c=2.66\sigma$ ($L_x/L_c=L_y/L_c$ is very close to an integer), smaller than the cutoff $r_{\rm cut}^{\rm (CG)}$, speeds up the simulation by an additional factor of $2.5$, i.e., by a factor of ten altogether. It takes about $10$ hours of CPU time on an Intel 3.60GHz PC for a confined CG melt to reach its equilibrated state (after $2\times 10^7$ MC steps are performed). The acceptance ratio is about 73\% for a trial change of sphere size, 45\% for a local move, and 41\% for a snake-slithering move.

Choosing the wall strength $\varepsilon_{w}^{\rm (CG)}\approx {\cal O}(0.1-1)\epsilon$, there is no detectable influence on the probability distributions $P(\sigma,N_b)$, $P(d,N_b)$, and $P(\theta,N_b)$ compared to those of an equilibrated CG melt in bulk, as shown in Fig.~\ref{fig-psigma-meltCG}. Since soft spheres are allowed to penetrate each other in CG melts, the distributions $P(\sigma,N_b)$ and $P(d,N_b)$ are slightly narrower for the equilibrated CG melts, both in bulk and in confinement, than for the reference systems, while the average values of the radius and of the bond length remain the same.
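A schematic Python version of the local move (i) is given below. It is a sketch under our own naming conventions: the callback \texttt{delta\_u} is a hypothetical user-supplied function that returns the total energy change (non-bonded, bond, angle, and wall terms), and the hard rejection at the walls mirrors the coordinate restriction used for the initial configurations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)

def local_move(centers, radii, i, s, delta_u, lz, beta=1.0):
    """One Metropolis trial move of sphere s in chain i.
    `delta_u(i, s, trial)` must return the total energy change."""
    sig = radii[i, s]
    trial = centers[i, s] + rng.uniform(-sig, sig, size=3)
    # keep the sphere center between the two walls
    if not (sig < trial[2] < lz - sig):
        return False
    du = delta_u(i, s, trial)
    if rng.random() < np.exp(-beta * du):   # Metropolis criterion
        centers[i, s] = trial
        return True
    return False
\end{verbatim}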
We have also compared the estimates of $\langle R^2(s,N_b)\rangle$ and $g(r,N_b)$ for an equilibrated confined CG melt to an equilibrated CG melt in bulk and to the reference data in Fig.~\ref{fig-R2s-wall}. The curve of $\langle R^2(s,N_b) \rangle$, averaged over all $n_c=1000$ chains of the confined CG melt, deviates from the bulk behavior for $s>10$ due to the confinement effect, while for $s<6$ it is slightly smaller than the bulk value. The latter is an artifact of the soft-sphere CG model, in which the excluded volume within the size of the spheres is not considered. After monomers are reinserted into the soft-sphere chains, the local excluded volume and the corresponding correlation hole effect automatically correct these deviations. The discrepancy for $s<6$ is, however, still within the fluctuations observed in bulk. When monomers are reinserted into the soft-sphere chains, the estimate of $g(r,N_b)$ starts to increase at $r \approx 3\sigma$ and then decreases at $r\approx 5 \sigma$. This indicates that near the walls the distance between any two spheres decreases due to the confinement effect.
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig04a.pdf} \hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig04b.pdf}
\caption{(a) Soft-sphere density profile rescaled by the CG bulk melt density, $\rho^{\rm (CG)}(z,N_{\rm CG})/\rho_0^{\rm (CG)}(N_{\rm CG})$, plotted as a function of $z$ for fully equilibrated confined CG melts ($n_c=1000$, $N_{\rm CG}=80$, $N_b=25$) with three different values of $\varepsilon_{w}^{\rm (CG)}$, as indicated. (b) Two components of the rescaled mean square radius of gyration in the directions parallel ($||$) and perpendicular ($\perp$) to the walls, $\langle R^2_{g,\alpha}(z,N_b) \rangle/\langle R^2_{g,\alpha} \rangle_0$, plotted versus the rescaled position $z/L_z$ of the CM of the chains (bin size $3\sigma$), including error bars, for a confined CG melt with $\varepsilon_{w}^{\rm (CG)}=0.5\epsilon$ and $L_z=133\sigma$. In (a), data for the reference systems in a CG representation are also shown for comparison. In (b), data for a confined melt of ($n_c=1000$, $N=2000$) based on the bead-spring model with $\varepsilon_{w}=0.005\epsilon$ and $L_z=134\sigma$ in a CG representation (BS $\rightarrow$ CG) are also shown for comparison.}
\label{fig-rhoz-wall}
\end{center}
\end{figure*}
To investigate the confinement effect on the packing and conformations of a polymer melt, we determine the soft-sphere density profile between the two walls,
\begin{equation}
\rho^{\rm (CG)}(z,N_b)=\frac{1}{L_x L_y}\sum_{i=1}^{n_c} \sum_{s=1}^{N_{\rm CG}} \delta(z_i(s)-z) \, .
\end{equation}
Fig.~\ref{fig-rhoz-wall}a shows that the soft-sphere density profiles (bin size $1\sigma$) for the three different interaction strengths between the soft spheres and the walls, $\varepsilon_{w}^{\rm (CG)}/\epsilon=1.0$, $0.5$, and $0.1$, coincide within small fluctuations. The bulk melt density persists, i.e., $\rho^{\rm (CG)}(z,N_b) =\rho_0^{\rm (CG)}$, between $z=5\sigma$ and $z=L_z-5\sigma$. Approaching a wall, $\rho^{\rm (CG)}(z,N_b)$ increases, reaches a maximum at $z=3\sigma$, and then drops to zero next to the wall. This indicates that the confinement effect is weak for spheres sitting in the middle regime between the two walls.
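In practice, the delta function in the profile above is binned into a histogram; a short Python sketch (ours) of this analysis reads:
\begin{verbatim}
import numpy as np

def density_profile(z_coords, lz, lx, ly, bin_width=1.0):
    """Number-density profile rho(z) between walls at z = 0 and
    z = lz, from the z-coordinates of all sphere centers
    (or monomers)."""
    edges = np.arange(0.0, lz + bin_width, bin_width)
    counts, _ = np.histogram(z_coords, bins=edges)
    rho = counts / (lx * ly * bin_width)       # counts per slab volume
    z_mid = 0.5 * (edges[:-1] + edges[1:])     # bin centers
    return z_mid, rho
\end{verbatim}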
The shape of $\rho^{\rm (CG)}(z,N_b)$ near a wall is related to $P(\sigma,N_b)$, since $P(\sigma,N_b)$ has its maximum at $z=\langle \sigma_i(s) \rangle \approx 3.06\sigma$ (see Fig.~\ref{fig-psigma-meltCG}). The two components of the mean square radius of gyration, resolved with respect to the $z$-component of the CM of the chains, are defined as
\begin{equation}
\langle R_{g,\alpha}^2(z,N_b) \rangle = \frac{\sum_{i=1}^{n_c} \sum_{s=1}^{N_{\rm CG}} (\vec{r}_{i,\alpha}(s) - \vec{r}^{\rm (CM)}_{i,\alpha})^2 \delta(r_{i,z}^{\rm (CM)}-z)} {N_{\rm CG}\sum_{i=1}^{n_c}\delta(r^{\rm (CM)}_{i,z}-z)}
\label{eq-Rg}
\end{equation}
where $\vec{r}_{i,\alpha=||}(s)=\vec{r}_{i,x}(s)+\vec{r}_{i,y}(s)$ and $\vec{r}_{i,\alpha=\perp}(s)=\vec{r}_{i,z}(s)$. Fig.~\ref{fig-rhoz-wall}b shows that the linear dimensions of confined chains having their CM in the regime $0.3L_z\le r^{\rm (CM)}_{i,z} \le 0.7L_z$ are, within the fluctuations, the same as in a bulk melt. For polymer chains of $N=2000$ monomers in a bulk melt, the mean square radius of gyration is $\langle R_g^2 \rangle_0=\langle R_{g,||}^2 \rangle + \langle R_{g,\perp}^2 \rangle \approx 909\sigma^2$. With decreasing distance $z$ from the walls, $\langle R_{g,||}^2(z,N_b) \rangle$ increases moderately, with larger fluctuations, while $\langle R^2_{g,\perp}(z,N_b) \rangle$ decreases gradually, even in the regime where the density is still $\rho^{\rm (CG)}(z,N_{\rm CG}) \approx \rho_0^{\rm (CG)}$. Note that none of the chains has its CM directly next to a wall. From the results shown in Fig.~\ref{fig-rhoz-wall}b, we expect $\langle R^2(s,N_b) \rangle$ and $g(r,N_b)$ to follow the bulk behavior if only chains sitting in the middle regime ($0.3L_z\le r_{i,z}^{\rm (CM)} \le 0.7L_z$) between the two walls are counted. This is indeed seen in Fig.~\ref{fig-R2s-wall}b,d. Similar behavior has been observed for shorter chains confined between two walls~\cite{Pakula1991,Aoyagi2001,Cavallo2005,Sarabadani2014}.

\section{Equilibrating bead-spring chains in a confined melt}
\subsection{Backmapping procedure}
\label{Backmapping}
After equilibration of the CG melt, we apply a backmapping strategy similar to that developed in Ref.~\onlinecite{Zhang2014} to reinsert the microscopic details of the bead-spring model described in Sec.~\ref{BSM} (see Fig.~\ref{fig-backmapping-wall}). In this strategy, consecutive monomers along the chains are bonded via the FENE potential \{Eq.~(\ref{eq-UFENE})\} and the shifted LJ potential \{Eq.~(\ref{eq-ULJ})\}. The non-bonded and bond-bending interactions are excluded at this step. The confinement is introduced by the repulsive wall potential \{Eq.~(\ref{eq-Uwall})\}. Each soft-sphere CG chain of $N_{\rm CG}=80$ spheres is now replaced by a bead-spring chain of $N=2000$ monomers. To preserve the relationship between a soft sphere and a subchain of $N_b=25$ monomers given in Eqs.~(\ref{eq-mapping-cm}), (\ref{eq-mapping-rg}), two pseudopotentials~\cite{Zhang2014} for the $s$th soft sphere in the $i$th chain are implemented,
\begin{equation}
U_{\rm cm}(\vec{r}_i(s),\vec{R}_{\rm CM,i}(s))=k_{\rm cm} \left[\vec{r}_i(s)-\vec{R}_{{\rm CM},i}(s) \right ]^2
\end{equation}
and
\begin{equation}
U_g (\sigma_i(s),R_{g,i}(s))= k_g \left[\sigma_i^2(s)-R_{g,i}^2(s)\right]^2
\end{equation}
where $k_{\rm cm}$ and $k_g$ determine the coupling strengths. The forces derived from these two potentials drive the center of mass and the radius of gyration of each subchain to the center and the radius of the corresponding soft sphere, respectively.
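A compact Python sketch of the two pseudopotential energies (our illustration; an MD code would add the corresponding forces to the integrator) is:
\begin{verbatim}
import numpy as np

def u_backmap(centers, radii, coords, n_b=25, k_cm=50.0, k_g=5.0):
    """Total pseudopotential energy U_cm + U_g coupling one
    bead-spring chain (coords, shape (N, 3)) to its underlying
    soft-sphere chain (centers, radii)."""
    blocks = coords.reshape(-1, n_b, 3)
    cm = blocks.mean(axis=1)                       # subchain CMs
    rg2 = ((blocks - cm[:, None, :]) ** 2).sum(axis=2).mean(axis=1)
    u_cm = k_cm * ((centers - cm) ** 2).sum()      # U_cm term
    u_g = k_g * ((radii ** 2 - rg2) ** 2).sum()    # U_g term
    return u_cm + u_g
\end{verbatim}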
Namely, each bead-spring chain ends up sitting on top of its corresponding soft-sphere CG chain (see Fig.~\ref{fig-bsmsph}). During this backmapping procedure, it is practical to perform MD simulations in the NVT ensemble with a weakly coupled Langevin thermostat at $T=1.0\epsilon/k_B$, setting the friction constant to $\Gamma=0.5\tau^{-1}$. Choosing $k_{\rm cm}=50.0\epsilon$ and $k_g=5.0\epsilon$, the integration time step is set to $\Delta t=0.005\tau$. At this stage, all $1000$ soft-sphere CG chains can be mapped onto $1000$ bead-spring chains confined between the two walls simultaneously, since there are no interactions between different chains. Snapshots of the configurations of the fully equilibrated confined CG melt and of the backmapped confined melt of bead-spring chains are shown in Fig.~\ref{fig-backmapping-wall}. Here the strength of the wall potential $U_{\rm wall}(z)$ is set to $\varepsilon_{w}=0.005\epsilon$. To keep the bulk melt density $\rho_0=0.85\sigma^{-3}$ in the middle regime between the two walls (the weak confinement regime) in the microscopic representation, we set $L_z=134\sigma$ instead of $L_z=133\sigma$, taking into account that the repulsive wall potential is a steep but smooth function. The reinsertion requires about $80\tau$ of MD time, i.e., about $2.8$ hours of CPU time on a single processor of an Intel 3.60GHz PC.
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.85\textwidth,angle=0]{eqFilm_fig05.pdf}
\caption{Snapshots of the configuration of a fully equilibrated CG melt containing $n_c=1000$ chains of $N_{\rm CG}=80$ soft spheres confined between two walls at the CG melt density $\rho_0^{\rm (CG)}(N_{\rm CG})=0.034\sigma^{-3}$, and of the corresponding fine-grained configuration containing $n_c=1000$ chains of $N=N_{\rm CG}N_b=2000$ monomers based on the bead-spring model at the melt density $\rho_0=0.85\sigma^{-3}$, obtained via the backmapping procedure (Sec.~\ref{Backmapping}). Here each soft sphere is represented by a subchain of $N_b=25$ monomers. The walls are placed at $z=0$ and $z=L_z$, where $L_z=133\sigma$ (left) and $134\sigma$ (right). Periodic boundary conditions are applied in the directions parallel to the walls, i.e., along the $x$- and $y$-directions, and the lateral linear dimensions are $L_x=L_y=133\sigma$.}
\label{fig-backmapping-wall}
\end{center}
\end{figure*}

\subsection{Equilibration procedure}
\label{Equilibration}
In the next step, the full excluded volume interaction of Sec.~\ref{BSM} has to be reintroduced. To avoid an ``explosion'' of the system due to overlapping monomers, the excluded volume interactions between non-bonded pairs of monomers are switched on in a quasi-static way (slow push-off)~\cite{Auhl2003, Moreira2015}. To this end, the shifted LJ potential for each non-bonded pair of monomers at distance $r$, \{Eq.~(\ref{eq-ULJ})\}, is first replaced by a force-capped LJ potential
\begin{eqnarray}
U_{\rm FC-LJ}(r)=\left\{\begin{array}{ll}
(r-r_{\rm fc})U_{\rm LJ}^{'}(r_{\rm fc})+U_{\rm LJ}(r_{\rm fc}) & {\rm for} \, r\le r_{\rm fc} \\
U_{\rm LJ}(r) & {\rm for} \, r > r_{\rm fc}
\end{array} \right .
\end{eqnarray}
where $r_{\rm fc}$ is an adjustable cutoff distance in this warm-up procedure.
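In code, the force capping amounts to a linear continuation of the potential below $r_{\rm fc}$. The following minimal sketch (ours) reuses the \texttt{u\_lj} helper from the earlier listing (passed in as a callable) and approximates the slope $U_{\rm LJ}^{'}(r_{\rm fc})$ numerically:
\begin{verbatim}
import numpy as np

def u_fc_lj(r, r_fc, u_lj, h=1.0e-6):
    """Force-capped LJ potential: linear continuation of u_lj(r)
    below the capping distance r_fc (slow push-off)."""
    slope = (u_lj(r_fc + h) - u_lj(r_fc - h)) / (2.0 * h)
    return np.where(r <= r_fc,
                    (r - r_fc) * slope + u_lj(r_fc),
                    u_lj(r))
\end{verbatim}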
During the warm-up, the cutoff $r_{\rm fc}=r_{\rm fc}^{o}$ for all non-bonded pairs except the next-nearest neighboring (nn) pairs along the same chain decreases monotonically, cycle by cycle, from $2^{1/6}\sigma$ to $0.8\sigma$. For the nn pairs, $r_{\rm fc}=r_{\rm fc}^{\rm nn}$ is set to $1.1\sigma$ initially and is tuned according to the following cost function~\cite{Moreira2015},
\begin{equation}
C = \int_{s=20}^{s=50} ds \left [ \left(\frac{\langle R^2(s) \rangle}{s} \right)_{\textrm{master curve}} -\left(\frac{\langle R^2(s) \rangle}{s} \right)_{\textrm{current cycle}} \right ]
\label{eq-cost}
\end{equation}
evaluated at the end of each cycle of the warm-up procedure, with the restriction $0.8\sigma \le r_{\rm fc}^{\rm nn} \leq 1.1\sigma$. The master curve refers to fully equilibrated polymer melts in bulk (the reference systems). In this process the nn excluded volume is adjusted according to the value of $C$: it is reduced ($r_{\rm fc}^{\rm nn}$ increased by $0.01\sigma$) if $C<0$ and enhanced ($r_{\rm fc}^{\rm nn}$ decreased by $0.01\sigma$) if $C>0$; if $|C|<0.0001\sigma^2$, $r_{\rm fc}^{\rm nn}$ remains unchanged. We have thus assumed that the mean square internal distance, $\langle R^2(s) \rangle$, of the confined chains should finally coincide with the master curve at least for $s<50$; a sketch of this feedback loop is given after this paragraph. For a polymer melt under strong confinement, the bulk behavior may no longer hold even for $s<50 \approx 2N_e$. However, it is more important to first correct the chain distortion caused by the absence of excluded volume interactions between monomers (see Fig.~\ref{fig-Warmup-wall}b). Once the criterion of the cost function is removed, confined chains relax very quickly on the short length scales dominated by both entanglement and confinement effects. Note that this final equilibration step only affects subchains of lengths up to the order of $N_e$.

We perform MD simulations in the NVT ensemble with a Langevin thermostat at the temperature $T=1.0\epsilon/k_B$, using the package ESPResSo++~\cite{Espressopp,Espressopp20}, to equilibrate a confined polymer melt containing $n_c=1000$ chains of $N=2000$ monomers, following the procedures below: (a) In the warm-up procedure, $120$ cycles are performed, with $1.5 \times 10^{5}$ MD steps per cycle in the first $80$ cycles and $5 \times 10^4$ MD steps per cycle in the remaining $40$ cycles, using a larger friction constant $\Gamma=1.0\tau^{-1}$ and a small time step $\Delta t=0.0002\tau$. Differently from Ref.~\onlinecite{Moreira2015}, more MD steps and a slower rate of decrease of the cutoff $r_{\rm fc}^o$ for the non-bonded monomer pairs other than nn pairs are required for the confined polymer melt, due to the competition between the excluded volume effect and the confinement effect. In the first $100$ cycles, $r_{\rm fc}^{\rm nn}$ is updated at each cycle according to the cost function. In the remaining $20$ cycles, $r_{\rm fc}^{\rm nn}$ decreases by $0.01\sigma$ per cycle until reaching the minimum value $0.8\sigma$ (see Fig.~\ref{fig-Warmup-wall}c). (b) In the relaxation procedure, the shifted LJ potential $U_{\rm LJ}(r)$ is restored. We first perform $10^5$ MD steps with $\Gamma=0.5\tau^{-1}$ and $\Delta t=0.001\tau$, and then another $2 \times 10^6$ MD steps with $\Delta t=0.005\tau$ to ensure that the confined melt reaches its equilibrated state. Afterwards, we can set the time step to its standard value, $\Delta t =0.01\tau$, for the further study of the confined polymer melt in equilibrium.
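The feedback rule of Eq.~(\ref{eq-cost}) used in step (a) can be summarized in a few lines of Python. This is a sketch under our own naming conventions; \texttt{r2s\_master} and \texttt{r2s\_cycle} are the tabulated $\langle R^2(s)\rangle/s$ curves:
\begin{verbatim}
import numpy as np

def update_rfc_nn(rfc_nn, s, r2s_master, r2s_cycle,
                  step=0.01, lo=0.8, hi=1.1, tol=1.0e-4):
    """One feedback update of the nn capping distance r_fc^nn.
    s, r2s_master, r2s_cycle: arrays of s and <R^2(s)>/s values."""
    window = (s >= 20) & (s <= 50)
    cost = np.trapz(r2s_master[window] - r2s_cycle[window], s[window])
    if abs(cost) < tol:
        return rfc_nn              # converged, keep the value
    if cost < 0.0:
        rfc_nn += step             # chains too extended: soften
    else:
        rfc_nn -= step             # chains too compact: harden
    return float(np.clip(rfc_nn, lo, hi))
\end{verbatim}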
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig06a.pdf} \hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig06b.pdf}\\
(c)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig06c.pdf}
\caption{(a) Monomer density profile rescaled by the bulk melt density, $\rho(z)/\rho_0$, plotted as a function of $z$ for a polymer melt confined between two walls located at $z=0\sigma$ and $z=L_z=134\sigma$, in the intermediate stage of the warm-up procedure. (b) Rescaled mean square internal distance, $\langle R^2(s) \rangle / (s\ell_b^2)$, plotted versus $s$ for chains in a confined polymer melt right after the backmapping and warm-up procedures, $\ell_b=0.964\sigma$ being the root mean square bond length. (c) Time series of the cutoff distances for the next-nearest neighboring pairs of monomers ($r_{\rm fc}^{\rm nn}$) and for the other non-bonded pairs ($r_{\rm fc}^{\rm o}$) entering $U_{\rm FC-LJ}(r)$ during a warm-up procedure. In (a), several values of the strength $\varepsilon_w$ are chosen, as indicated, and only data for $z \le 10\sigma$ are shown in the inset. In (b), the master curve for the average behavior of the reference systems is also shown for comparison.}
\label{fig-Warmup-wall}
\end{center}
\end{figure*}
While warming up the confined polymer melt, the local monomer density profile $\rho(z)$ near the walls varies with the interaction strength $\varepsilon_w$ between monomers and walls, as shown in Fig.~\ref{fig-Warmup-wall}a. For $\varepsilon_w=0.005\epsilon$, the bulk melt density $\rho_0$ is conserved all the way up to the wall (bin size $0.5\sigma$). At the same time, after the confined polymer melt is warmed up, the curve of $\langle R^2(s) \rangle$ coincides with the master curve for $s<50$, as shown in Fig.~\ref{fig-Warmup-wall}b. A typical variation of the cutoff distances $r_{\rm fc}^{\rm nn}$ and $r_{\rm fc}^{\rm o}$ for the non-bonded pairs of monomers during the warm-up procedure is shown in Fig.~\ref{fig-Warmup-wall}c. At the end of the warm-up procedure, $U_{\rm FC-LJ}(r)$ approaches $U_{\rm LJ}(r)$, so that one can switch back to $U_{\rm LJ}(r)$ for the further relaxation of the confined polymer melt without difficulty.

First, we examine the difference between the equilibrated confined melt and the reference bulk melt, as shown in Fig.~\ref{fig-eqbsm-wall}. We compare the whole system in terms of the mean square internal distance $\langle R^2(s) \rangle$, the single chain structure factor $S_c(q)$, and the collective structure factor $S_{||}(q)$ in the direction parallel to the walls. The curve of $\langle R^2(s) \rangle$ estimated only for chains having their CMs in the interval $0.3L_z\le r_{i,z}^{\rm (CM)} \le 0.7L_z$ follows the master curve, while the curve obtained by averaging over all $1000$ chains starts to deviate from the bulk behavior at $s=250$. To detect any anisotropy of chain conformations under confinement, we distinguish $S_{c,\perp}(q=q_z)$, where the wave vector $\vec{q}$ is oriented along the $z$-direction perpendicular to the walls, and $S_{c,||}(q=(q_x^2+q_y^2)^{1/2})$. $S_{c,||}(q)$ reveals the same chain structure as in the bulk melt, as shown in Fig.~\ref{fig-eqbsm-wall}b. The shift of $S_{c,\perp}(q)$ toward slightly larger values of $q$ relative to the bulk curve again indicates that the estimate of $\langle R^2_{g,\perp} \rangle$ averaged over all $1000$ chains is smaller. Nevertheless, the chains still behave as ideal chains.
Comparing the collective structure factor of the whole melt in the direction parallel to the walls between the confined polymer melt and the bulk melt, we see no difference, as the distance between the two walls is comparable to the linear dimension of the bulk system (see Fig.~\ref{fig-eqbsm-wall}c). The local packing of the monomers, characterized by the pair distribution function $g(r)$ including both inter- and intrachain pairs of monomers, is also in perfect agreement with the bulk melt, as shown in Fig.~\ref{fig-eqbsm-wall}d.
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig07a.pdf}\hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig07b.pdf}\\
(c)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig07c.pdf}\hspace{0.1cm}
(d)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig07d.pdf}
\caption{(a) Rescaled mean square internal distance, $\langle R^2(s) \rangle / (s\ell_b^2)$, plotted as a function of $s$. (b) Two components of the single chain structure factor, $S_{c,||}(q)$ and $S_{c,\perp}(q)$, plotted versus $q$. (c) Collective structure factor $S(q)$ plotted versus $q$. (d) Pair distribution function $g(r)$ plotted versus $r$. Data are for a fully equilibrated confined melt of bead-spring chains with the interaction strength $\varepsilon_w=0.005\epsilon$. In (b), the theoretically predicted slope $q^{-2}$ for a Gaussian coil is also shown for comparison. Data for the average behavior of fully equilibrated polymer melts in bulk (the reference systems) are also shown for comparison.}
\label{fig-eqbsm-wall}
\end{center}
\end{figure*}
Finally, we return to the outset and analyze the conformational properties of the equilibrated confined melt in a CG representation by mapping the bead-spring chains to soft-sphere chains using Eqs.~(\ref{eq-mapping-cm}), (\ref{eq-mapping-rg}). As shown in Figs.~\ref{fig-R2s-wall} and \ref{fig-rhoz-wall}b, all results, obtained by averaging over $14$ configurations within $6\tau_e$, are consistent within the fluctuations with the data obtained from the MC simulations of a confined CG melt. Since the microscopic model properly accounts for the excluded volume interactions between monomers, the curves of $\langle R^2(s,N_b) \rangle$ and $g(r,N_b)$ at short length scales no longer deviate from the curves for the reference systems in a CG representation. This shows that the confined polymer melt indeed reaches equilibrium on all length scales.

\section{Preparation of supported and free-standing films at zero pressure}
In order to study free-standing or supported films at pressure $P=0.0\epsilon/\sigma^{3}$, stabilizing non-bonded attractive monomer-monomer interactions are required. For this purpose we switch to our recently developed CG model~\cite{Hsu2019,Hsu2019e} based on the bead-spring model~\cite{Kremer1990,Kremer1992}, by simply turning on the attractive potential $U_{\rm ATT}(r)$ \{Eq.~(\ref{eq-Uatt})\} for non-bonded monomer pairs and replacing the bending potential $U_{\rm BEND}^{\rm (old)}(\theta)$ with $U_{\rm BEND}(\theta)$ \{Eq.~(\ref{eq-UBEND})\}. This choice of interactions has the additional advantage that it allows us to study glassy films as well. Starting from a fully equilibrated polymer melt confined between two walls, as obtained in the last section, we perform MD simulations in the NVT ensemble with a Langevin thermostat at the temperature $T=1.0\epsilon/k_B$ using the package ESPResSo++~\cite{Espressopp,Espressopp20}, keeping the short-range repulsion from the walls.
Fig.~\ref{fig-p0}a shows that the three diagonal terms of the pressure tensor $P_{\alpha \beta}(t)$ drop from $4.9\epsilon/\sigma^3$ ($5.0\epsilon/\sigma^3$ for bulk melts) to $0.1\epsilon/\sigma^3$ within a very short time of about $20 \tau$. We further relax the confined polymer film for $30000\tau \approx 13\tau_e$, $\tau_e\approx 2266\tau$ being the entanglement time~\cite{Hsu2016}, by performing MD simulations in the NPT ensemble (Hoover barostat with Langevin thermostat~\cite{Martyna1994,Quigley2004}, as implemented in ESPResSo++~\cite{Espressopp,Espressopp20}) at temperature $T=1.0\epsilon/k_B$ and pressure $P=(P_{xx}+P_{yy}+P_{zz})/3 =0.0\epsilon/\sigma^3$, to finally bring the pressure from $0.1\epsilon/\sigma^3$ down to $0.0\epsilon/\sigma^3$. Under these conditions, an equilibrated free-standing film is generated by removing the two walls, i.e., by turning off the wall potential at $z=0\sigma$ and $z=L_z=134\sigma$. If we remove only one of the walls, we obtain a polymer film on a single supporting substrate, for which one can of course introduce appropriate adhesion interactions.
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig08a.pdf}\hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig08b.pdf}\\
(c)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig08c.pdf}\hspace{0.1cm}
(d)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig08d.pdf}\\
\caption{(a) Time series of the diagonal terms of the pressure tensor $P_{\alpha \beta}(t)$. (b)(c) Rescaled mean square internal distance, $\langle R^2(s) \rangle / (s\ell_b^2)$, plotted versus $s$ for all chains (b) and for chains having their CMs in the interval $[0.3L_z, 0.7L_z]$ (c) in a confined polymer melt. (d) Monomer density profile rescaled by the bulk melt density, $\rho(z)/\rho_0$, plotted as a function of $z$ between two walls located at $z=0\sigma$ and $z=L_z=134\sigma$. Data for confined polymer melts based on the two variants of the bead-spring model are shown, as indicated. In (b)(c), data for the reference systems (bulk melts) and for the free-standing film after relaxing for $30000\tau$ are also shown for comparison. In the inset of (d), only data for $z\le 5\sigma$ are shown. The Gibbs dividing surfaces for the film in its different states, at $z=z_G^{\rm (lower)}=0.35\sigma$, $1.21\sigma$, and $1.82\sigma$ from left to right, are marked by arrows. The root mean square bond length increases slightly from $\ell_b \approx 0.964\sigma$ to $\ell_b \approx 0.967\sigma$ after switching from the potential $U_{\rm BEND}^{\rm (old)}(\theta)$ to $U_{\rm BEND}(\theta)$ and turning on $U_{\rm ATT}(r)$.}
\label{fig-p0}
\end{center}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.85\textwidth,angle=0]{eqFilm_fig09.pdf}
\caption{Snapshots of the configurations of a fully equilibrated melt containing $n_c=1000$ chains of $N=2000$ monomers confined between two walls (a confined film) at the melt density $\rho=0.85\sigma^{-3}$, and with free surfaces (a free-standing film), at $T=1.0\epsilon/k_B$ and $P=0.0\epsilon/\sigma^3$.}
\label{fig-films}
\end{center}
\end{figure*}
The overall conformations of all chains and of the inner chains ($0.3L_z \le r^{\rm (CM)}_{i,z} \le 0.7L_z$), as characterized by $\langle R^2 (s) \rangle$, are preserved within the fluctuations for the confined polymer melt based on the two variants of the bead-spring model and after relaxing for $30000\tau$ (NPT MD simulations), as shown in Fig.~\ref{fig-p0}b,c.
The monomer density profile $\rho(z)$ in the direction perpendicular to the walls, relative to the density in the interior of the confined melt, is also preserved, as shown in Fig.~\ref{fig-p0}d. The thickness of the films, $D$, is determined according to the concept of the Gibbs dividing surface, which has been applied to identify the interface between two different phases~\cite{Hansen2013,Kumar1994,Peter2006}, e.g., liquid and vapor, or polymer and vacuum, based on the density profile in the direction perpendicular to the interfaces. The locations of the Gibbs dividing surfaces (planar surfaces) corresponding to the upper and lower bounds of the films,
\begin{equation}
z_G^{\rm (upper)} = z_c + \frac{1}{\bar{\rho}}\int_{z_c}^{z_{\rm max}}\rho(z) dz \quad {\rm and} \quad z_G^{\rm (lower)} = z_c - \frac{1}{\bar{\rho}}\int_{z_{\rm min}}^{z_c} \rho(z)dz \,,
\label{eq-zG}
\end{equation}
are obtained from the equal-area requirements
\begin{equation}
\int_{z_c}^{z_G^{\rm (upper)}}(\rho(z)-\bar{\rho}) dz = \int_{z_G^{\rm (upper)}}^{z_{\rm max}} (\rho(z)-0) dz
\end{equation}
and
\begin{equation}
\int_{z_{\rm min}}^{z_G^{\rm (lower)}}(\rho(z)-0) dz = \int_{z_G^{\rm (lower)}}^{z_c} (\rho(z)-\bar{\rho}) dz \,,
\end{equation}
respectively. Here $z_{\rm min}$ and $z_{\rm max}$ are the two limits where $\rho(z)$ approaches zero, and $z_c=(z_{\rm min}+z_{\rm max})/2$. The mean monomer density is given by
\begin{equation}
\bar{\rho}=\frac{1}{2\Delta z}\int_{z_c-\Delta z}^{z_c+\Delta z} \rho(z) dz
\end{equation}
where $\Delta z$ is chosen such that the monomer density profile $\rho(z)$ in the interval $[z_c-\Delta z,z_c+\Delta z]$ reaches a plateau value within small fluctuations. Choosing $\Delta z = 2.5\sigma$, the thickness of the confined film, $D=z_G^{\rm (upper)}-z_G^{\rm (lower)}$, at $T=1.0\epsilon/k_B$ decreases from $133.4\sigma$ at $P=4.9\epsilon/\sigma^3$ to $131.6\sigma$ at $P=0.1\epsilon/\sigma^3$ due to the short-range attractive interaction between non-bonded monomers, and is finally stabilized at $130.3\sigma$ at $P=0.0\epsilon/\sigma^3$, where the lateral dimensions of the film increase slightly from $L_x=L_y \approx 133.0\sigma$ to $L_x=L_y \approx 134.0\sigma$. To further relax the free-standing film, we have also performed MD simulations in the NVT ensemble at the temperature $T=1.0\epsilon/k_B$ for $30000\tau$, during which the pressure stays at $P=0.0\epsilon/\sigma^3$. The perpendicular simulation box size is set to $L_z=194\sigma$ to prevent any interaction of monomers across the boundaries of the box in the $z$-direction. Snapshots of the configurations of the confined and free-standing films after relaxing for $30000\tau$ are shown in Fig.~\ref{fig-films}. The estimates of $\langle R^2(s) \rangle$ for all chains and for chains in the middle part of the free-standing film, and of $\rho(z)$ with bin size $0.25\sigma$, are shown in Figs.~\ref{fig-p0}b,c and \ref{fig-free}, respectively. After relaxing the polymer chains in the free-standing film for $30000\tau \approx 13\tau_e$, the thickness of the free-standing film, $130.5\sigma$, is still comparable to that of the confined polymer film, $130.3\sigma$. The average conformations of all chains and of the inner chains also remain the same, while the tails of the monomer density profile become longer, indicating that the surface becomes slightly rougher.
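A numerical implementation of Eq.~(\ref{eq-zG}) is straightforward; the following Python sketch (ours) computes the two dividing surfaces and the film thickness from a tabulated density profile on a uniform grid:
\begin{verbatim}
import numpy as np

def film_thickness(z, rho, dz_plateau=2.5):
    """Gibbs dividing surfaces and film thickness D from a density
    profile rho(z) tabulated on a uniform grid z, Eq. (eq-zG)."""
    nz = np.nonzero(rho > 0.0)[0]
    z_min, z_max = z[nz[0]], z[nz[-1]]        # support of rho(z)
    z_c = 0.5 * (z_min + z_max)
    mid = (z > z_c - dz_plateau) & (z < z_c + dz_plateau)
    rho_bar = np.trapz(rho[mid], z[mid]) / (2.0 * dz_plateau)
    upper = z > z_c
    lower = z < z_c
    z_g_up = z_c + np.trapz(rho[upper], z[upper]) / rho_bar
    z_g_lo = z_c - np.trapz(rho[lower], z[lower]) / rho_bar
    return z_g_lo, z_g_up, z_g_up - z_g_lo
\end{verbatim}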
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig10.pdf}
\caption{Monomer density profile rescaled by the bulk melt density, $\rho(z)/\rho_0$, plotted as a function of $z$ along the direction perpendicular to the interfaces of the free-standing film at different relaxation times. For comparison, the centers of the free-standing and confined films in the $z$-direction are matched. In the inset, only data for $z \le 4\sigma$ are shown. The Gibbs dividing surfaces for the free-standing film in its different states, at $z=z_G^{\rm (lower)}=1.82\sigma$ and $1.75\sigma$ for the relaxation times $t=0\tau$ (right after removing the two walls) and $t=30000\tau$, respectively, are marked by arrows.}
\label{fig-free}
\end{center}
\end{figure*}
\begin{figure*}[t!]
\begin{center}
(a)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig11a.pdf}\hspace{0.1cm}
(b)\includegraphics[width=0.32\textwidth,angle=270]{eqFilm_fig11b.pdf}\\
(c)\includegraphics[width=0.43\textwidth,angle=270]{eqFilm_fig11c.pdf}\hspace{0.1cm}
(d)\includegraphics[width=0.43\textwidth,angle=270]{eqFilm_fig11d.pdf}\\
\caption{Normalized densities of all monomers and of end monomers, $\phi(z)$ and $\phi_e(z)$, respectively, and relative excess density of end monomers, $\phi_e^{\rm (excess)}(z)$, plotted as functions of $z$ along the direction perpendicular to the interfaces of the confined (a)(c) and free-standing (b)(d) films. Explicit bead-spring chains are shown in red, while the underlying soft-sphere chains are shown in green, as indicated. For the most sensitive data, $\phi_e^{\textrm{(excess)}}$, error bars are included in (c) and (d). The centers of the free-standing and confined films in the $z$-direction are located at $z=67\sigma$ for comparison.}
\label{fig-pend}
\end{center}
\end{figure*}
Finally, the density of chain ends near the surfaces is examined for both confined and free-standing films at $T=1.0\epsilon/k_B$ and $P=0.0\epsilon/\sigma^3$. For this purpose we compare the normalized density of all monomers, $\phi(z)=\rho(z)/\rho_0$, the normalized density of end monomers, $\phi_e(z)=\rho_e(z)/(2\rho_0/N)$, and the relative excess of end monomers, $\phi_e^{\textrm{(excess)}}(z)=[\rho_e(z)-(2/N)\rho(z)]/(2\rho_0/N)$, along the direction perpendicular to the interfaces, averaged over $30$ configurations within $6\tau_e$, as shown in Fig.~\ref{fig-pend}. To illustrate the dependence on chain length and bead size, we have also mapped the bead-spring chains of $N=2000$ monomers of size $\sigma=1$ onto the underlying soft-sphere chains of $N_{\rm CG}=80$ spheres of approximate size $6\sigma$ (average radius $\langle R_g(N_b=25) \rangle \approx 3.0\sigma$). For the films we observe a weak enrichment of end monomers at the surfaces due to a potential gain of entropy~\cite{Wu1995,Matsen2014}. The related depletion zone for chain ends near the surface, as predicted by self-consistent field theories~\cite{Wu1995,Matsen2014}, however, turns out to be too small to be resolved within the fluctuations of our data. The coarse-graining slightly smears out this enrichment effect due to the overlap of the soft spheres. For the free-standing film, the enrichment of end monomers near the interfaces is slightly less pronounced, while at the same time the interface widens. This indicates a weak roughening of the free surface compared to the confined film. Nevertheless, the excess $\phi_e^{\rm (excess)}(z)$ near the surfaces is in both cases very small and levels off on the scale of the typical bulk density correlation length, given e.g. in Fig.~\ref{fig-eqbsm-wall}.
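The normalization of the end-monomer excess defined above can be made explicit in a short sketch (ours), reusing the \texttt{density\_profile} helper from the earlier listing:
\begin{verbatim}
import numpy as np

def end_excess(z_all, z_ends, n, rho0, lz, lx, ly, bin_width=0.25):
    """Relative excess of end monomers, phi_e^(excess)(z), from the
    z-coordinates of all monomers (z_all) and of the 2*n_c chain
    ends (z_ends); n is the chain length N."""
    z, rho = density_profile(z_all, lz, lx, ly, bin_width)
    _, rho_e = density_profile(z_ends, lz, lx, ly, bin_width)
    phi_e_exc = (rho_e - (2.0 / n) * rho) / (2.0 * rho0 / n)
    return z, phi_e_exc
\end{verbatim}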
\section{Conclusion} In this paper, we have developed an efficient methodology to equilibrate films of long-chain polymers and applied this method to a polymer film in which $1000$ chains of $2000\approx 72 N_e$ monomers are confined between two repulsive walls at the bulk melt density $\rho=0.85\sigma^{-3}$. Starting from a confined CG melt of $1000$ chains of $80$ soft spheres~\cite{Vettorel2010,Zhang2013} at rather high resolution, such that each sphere corresponds to $25$ monomers, it takes only $12.8$ hours of CPU time on a single Intel 3.6 GHz processor to prepare an initial configuration based on the bead-spring model ($10$ hours for equilibrating the confined CG melt using a MC simulation and $2.8$ hours for reinserting monomers into soft spheres using a MD simulation). By gradually switching on the excluded volume interactions between two monomers, overlapping monomers are pushed apart slowly in a warm-up procedure. Finally, the confined polymer melt is relaxed with the full standard potentials. This takes about $182$ hours of CPU time using 48 cores (2.7 GHz) on a Dual Intel Xeon Platinum 8168 ($155$ hours for the warm-up procedure, $27$ hours for the relaxation procedure). As found in previous studies~\cite{Zhang2014,Ohkuma2018}, the MD time required for equilibrating confined polymer melts based on the bead-spring model is only about $t=12900\tau \approx 5.69 \tau_e$. Following the same strategy, one can easily equilibrate highly entangled polymer melts confined between two walls at distances ranging from thick films to thin films (in which the distance between the two walls is smaller than the radius of gyration of the chains) within easily manageable computing time. Our work opens ample possibilities to study static and dynamic properties of highly entangled polymer chains in large polymer films, including e.g. entanglement distributions. Varying the interaction potential between walls and monomers, or even replacing the wall potential by other potentials, only requires short local relaxation runs starting from a fully equilibrated polymer melt confined between two repulsive walls. Switching to our recently developed coarse-grained model for studying polymer melts under cooling~\cite{Hsu2019,Hsu2019e}, fully equilibrated confined and free-standing films at the temperature $T=1.0\epsilon/k_B$ and pressure $P=0.0\epsilon/\sigma^3$ are also obtained in this work. This provides a direct route to further investigate the relation between the glass transition temperature and the thickness of films of highly entangled polymer chains at zero pressure. Beyond that, it would also be interesting to analyze the rheological properties and local morphology of films deformed by stretching or shearing.\\ \\ \noindent {\bf DATA AVAILABILITY} The data that support the findings of this study are available from the corresponding author upon reasonable request. \begin{acknowledgments} H.-P. H. thanks T. Ohkuma and T. Stuehn for helpful discussions. We also thank K. Ch. Daoulas for carefully reading our paper. This work has been supported by European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013)/ERC Grant Agreement No.~340906-MOLPROCOMP. We also gratefully acknowledge the computing time granted by the John von Neumann Institute for Computing (NIC) and provided on the supercomputer JUROPA at J\"ulich Supercomputing Centre (JSC). \end{acknowledgments}
\section{Introduction\label{introduction}} Recent experimental advances in synthesizing various quantum platforms, including ultra-cold atoms in optical lattices \cite{jotzu2014experimental,daley2012measuring,schreiber2015observation,choi2016exploring,flaschner2018observation}, trapped ions \cite{jurcevic2017direct,martinez2016real,neyenhuis2017observation,smith2016many}, nitrogen-vacancy centers in diamond \cite{yang2019floquet}, superconducting qubit systems \cite{guo2019observation}, and quantum walks in photonic systems \cite{wang2019simulating,xu2020measuring}, provide a framework for experimentally studying different quantum systems far from thermodynamic equilibrium. Nonequilibrium physics with exotic properties, specifically the quantum time evolution beyond the thermodynamic equilibrium description as well as the dynamics of out-of-equilibrium quantum many-body criticality, has recently attracted the attention of theoretical \cite{Dutta:2017717,Essler2016,Fogarty2020,Campbell2016,Polkovnikov2011,Mitra2018,jafari2019dynamical,bhattacharya2017emergent,halimeh2019dynamical,vzunkovivc2018dynamical,heyl2013dynamical,budich2016dynamical,heyl2017quenching,weidinger2017dynamical} and experimental \cite{xu2020measuring,yang2019floquet,jurcevic2017direct,guo2019observation,wang2019simulating,smith2016many,martinez2016real,neyenhuis2017observation,schreiber2015observation,aidelsburger2011experimental,aidelsburger2013realization} research in physics. Lately, a new area of research named dynamical quantum phase transitions (DQPTs) \cite{heyl2013dynamical,heyl2018dynamical} has been introduced for non-equilibrium quantum systems as a counterpart of conventional equilibrium thermal phase transitions. Within DQPTs, real time plays the role of the control parameter, analogous to temperature in conventional equilibrium phase transitions \cite{heyl2018dynamical,zvyagin2016dynamical}. A DQPT represents a phase transition between dynamically emerging quantum phases that occurs during the nonequilibrium coherent quantum time evolution under quenching or time-periodic modulation of the Hamiltonian \cite{heyl2018dynamical,zvyagin2016dynamical,yang2019floquet}. The concept of DQPTs originates from the analogy between the equilibrium partition function of a system and the boundary partition function, which measures the overlap between an initial state and its time-evolved counterpart, termed the Loschmidt amplitude (LA). Just as an equilibrium phase transition is characterized by nonanalyticities in the thermal free energy, in a DQPT the nonanalytic behavior manifests as singularities in the LA, a dynamical analog of the equilibrium free energy \cite{heyl2018dynamical,zvyagin2016dynamical}. Furthermore, nonanalyticities in the dynamical free energy are accompanied by zeros of the LA in the complex time plane, known in statistical physics as Fisher zeros of the partition function \cite{heyl2013dynamical,heyl2018dynamical,vzunkovivc2018dynamical,jafari2019dynamical,halimeh2019dynamical,bhattacharya2017emergent,fisher1967theory,lee1952statistical,van1984location,zvyagin2016dynamical}. It has also been established that there exists a dynamical topological order parameter (DTOP), analogous to order parameters at conventional quantum phase transitions, which can characterize DQPTs \cite{budich2016dynamical,vajna2015topological,sedlmayr2018fate}. The presence of a DTOP illustrates the emergence of a topological characteristic associated with the time evolution of nonequilibrium systems.
The DTOP takes integer values as a function of time and exhibits unit-magnitude jumps at the critical times, signaling the occurrence of DQPTs \cite{sedlmayr2018fate,yang2019floquet,vajna2015topological,flaschner2018observation}. Decoherence and particle loss processes do affect the dynamics; hence, the notion of DQPTs has been extended to a generalized form, i.e., the thermal (mixed) state dynamical phase transition \cite{heyl2017dynamical,bhattacharya2017mixed,sedlmayr2018fate}. DQPTs have been extensively explored from both theoretical \cite{Karrasch2013,halimeh2019dynamical,heyl2013dynamical,budich2016dynamical,vzunkovivc2018dynamical,jafari2019dynamical,andraschko2014dynamical,sharma2015quenches,jafari2019quench,zhou2018dynamical,canovi2014first,bhattacharya2017emergent,hickey2014dynamical,schmitt2015dynamical,Sun2020,Zhou2019,Mera2018,Khatun2019,Srivastav2019,Abdi2019,Cao2020,Bhattacharyya2020,Ding2020,Rylands2020,Hu2020,Pastori2020,Kyaw2020} and experimental \cite{flaschner2018observation,jurcevic2017direct,guo2019observation,wang2019simulating,yang2019floquet,martinez2016real,lanyon2011universal,buyskikh2016entanglement,bernien2017probing,atala2014observation} points of view. Most research on DQPTs is associated with both sudden and slow quantum quenches of the Hamiltonian \cite{Sharma2016b,Sedlmayr2018,Sedlmayr2020,jafari2019dynamical,halimeh2019dynamical,bhattacharya2017emergent,budich2016dynamical,heyl2013dynamical,heyl2017quenching,vzunkovivc2018dynamical}. It was first found that, during the quench procedure, crossing the equilibrium quantum critical point (EQCP) is an essential condition to observe a DQPT \cite{heyl2013dynamical}. However, subsequent analytical studies indicate that DQPTs can occur even without quenching across an EQCP \cite{jafari2019dynamical,halimeh2019dynamical,vajna2014disentangling,sharma2015quenches}. In addition, the theory of DQPTs has recently been extended to Floquet systems \cite{kosior2018dynamical,kosior2018dynamical1,yang2019floquet}. In Floquet DQPTs, as opposed to the conventional quantum quench scenario, systems evolve under a time-periodic Hamiltonian \cite{yang2019floquet,kosior2018dynamical}. Although the quench-induced nonanalyticities of the dynamical free energy recur periodically with an amplitude that decays in time, in Floquet systems DQPTs occur periodically without decaying in time \cite{kosior2018dynamical,kosior2018dynamical1}. Despite numerous studies of DQPTs in a wide variety of quantum systems, comparatively little attention has been devoted to quantum Floquet systems. To the best of our knowledge, there is currently no conclusive analytical evidence establishing under which circumstances DQPTs appear in quantum Floquet systems. Further studies are needed to clarify and shed more light on this issue. For instance, exactly solvable models play a particularly important role in this direction. In the present work, we contribute to expanding the systematic understanding of Floquet DQPTs by introducing the exactly solvable, periodically time-dependent extended XY model in the presence of a staggered magnetic field. We analytically study the pure and mixed state DQPT characteristics, especially the underlying topological features. We show that pure state DQPTs occur when the system enters the adiabatic regime.
In other words, the minimum driving frequency required for the appearance of Floquet DQPTs is equal to the threshold frequency needed for the transition from the nonadiabatic to the adiabatic regime. In the nonadiabatic regime, the quasi-spins (i.e., noninteracting effective spins) feel a constant effective Zeeman field, which does not induce Rabi oscillations between spin up and down states \cite{Schwinger1937}. When the system enters the adiabatic regime, the quasi-spins oscillate between spin up and down states \cite{Schwinger1937}. We also show that the pure state dynamical free energy undergoes periodic nonanalyticities that do not decay in time. The nonanalyticities of the mixed state dynamical free energy likewise persist periodically at finite temperature, although their amplitude decreases with time. This implies that the mixed state Floquet DQPT time scale is the same as that of the corresponding pure state and does not depend on the temperature. Moreover, the discrete jumps of the DTOP of pure and mixed states consistently confirm the topological feature of the DQPTs, although this topological nature is lost at sufficiently high temperatures. It should be mentioned that, analogous to the message of Ref.~[\onlinecite{heyl2013dynamical}], the existence of Floquet DQPTs in Ref.~[\onlinecite{yang2019floquet}] is connected to well-known equilibrium quantities given by two gapless critical points. In contrast, the underlying model here has a single critical point, and the Floquet DQPTs within a frequency window are shown to be related to the well-known dynamical notion of adiabatic versus nonadiabatic processes. \section{Periodically Driven Extended XY Model\label{model}} The Hamiltonian of the one-dimensional harmonically driven extended XY spin chain in the staggered magnetic field is given by \begin{eqnarray} {\cal H}(t) = \sum_{n=1}^{N}&& J_{1} \Big[\cos(\omega t) \Big(S_n^x S_{n+1}^x + S_n^y S_{n+1}^y \Big) \nonumber\\ &&- (-1)^{n} \sin(\omega t) \Big(S_n^x S_{n+1}^y - S_n^y S_{n+1}^x \Big) \nonumber\\ &&-(-1)^{n} J_{2} \Big(S_n^x S_{n+1}^z S_{n+2}^x + S_n^y S_{n+1}^z S_{n+2}^y \Big) \nonumber\\ &&+(-1)^{n} h_{s} S_n^z \Big] \label{eq1}, \end{eqnarray} where $N$ is the size of the system, $h_{s}$ is the magnitude of the staggered magnetic field, and $\omega$ is the driving frequency. Here we impose periodic boundary conditions, and $S_n^\alpha$ are the spin-half operators at the $n$th site, i.e. $S_n^\alpha=\frac{1}{2}\sigma^{\alpha}_{n}; \;\;\; \alpha=\{x,y,z\}$, where $\sigma^{\alpha}$ are Pauli matrices. The first and second terms in Eq. (\ref{eq1}) describe the time dependent nearest neighbour XY and staggered Dzyaloshinskii-Moriya interactions \cite{Jafari2008}, and the third term is a staggered cluster (three-spin) interaction \cite{Titvinidze}. The Hamiltonian, Eq. (\ref{eq1}), can be exactly diagonalized by the Jordan-Wigner transformation, which transforms spins into spinless fermions, where $c^{\dagger}_{n}$ ($c_{n}$) is the fermion creation (annihilation) operator (see Appendix \ref{A1}). The crucial step is to define two independent fermions at site $n$, $c_{n-1/2}^{A}=c_{2n-1}$, and $c_{n}^{B}=c_{2n}$. This can be regarded as splitting the chain into a lattice with a diatomic unit cell.
Introducing the Nambu spinor $\Gamma^{\dagger}_k=(c_{k}^{\dagger B},~c_{k}^{\dagger A})$, the Fourier transformed Hamiltonian can be expressed as a sum of independent terms, each acting in the two-dimensional Hilbert space generated by $k$, \begin{eqnarray} {\cal H}(t)=\sum_{k}\Gamma_{k}^{\dagger}H_{k}(t) \Gamma_{k} \label{eq2}. \end{eqnarray} The Bloch single particle Hamiltonian $H_{k}(t)$ in Eq. (\ref{eq2}) is $H_{k}(t)=J_{1}\Big[h_{xy}\Big(\cos(\omega t)\sigma^{x}+\sin(\omega t)\sigma^{y}\Big)+h_{z}\sigma^{z}\Big]/2$, where $h_{xy}(k)=2\cos(k/2)$ and $h_{z}(k)=J_{2}\cos(k)+2h_{s}$. Eq. (\ref{eq2}) implies that the Hamiltonian of $N$ interacting spins (Eq. (\ref{eq1})) can be mapped to the sum of $N/2$ noninteracting quasi-spins. In the next section, we deal with the Hamiltonian of noninteracting quasi-spins to show that the quasi-spins cross over from the nonadiabatic to the adiabatic regime as the driving frequency $\omega$ is tuned. \subsection{Exact solution of the time-dependent Schr\"{o}dinger equation\label{schrodingerEQ}} The noninteracting (quasi-spin) Hamiltonian $H_{k}(t)$ is exactly the same as the Schwinger-Rabi model \cite{Schwinger1937} of a spin in a time dependent effective magnetic field, $H_{k}(t)=\vec{h}_{k}(t)\cdot\vec{S}$, with $\vec{h}_{k}(t)=J_{1}(h_{xy}(k)\cos(\omega t),h_{xy}(k)\sin(\omega t),h_{z}(k))$ and $|\vec{h}_{k}|=J_{1}(h^2_{xy}(k)+h^2_{z}(k))^{1/2}$. In such a case, the polar and azimuthal angles of the effective magnetic field are $\theta_{k}=\arctan(h_{xy}(k)/h_{z}(k))$ and $\varPhi(t)=\omega t$, respectively. Using the time-dependent Schr\"{o}dinger equation ${\it i}\frac{d}{dt}|\psi_{k}^{\pm}(t)\rangle=H_{k}(t)|\psi_{k}^{\pm}(t)\rangle$ in the rotating frame given by the periodic unitary transformation $U_{R}(t)=\exp[{\it i}\omega(\mathbb{1}-\sigma^{z})t/2]$, the time dependent Hamiltonian is transformed to its time-independent form $\mathbb{H}_{k}|\chi^{\pm}_{k}\rangle=E^{\pm}_{k}|\chi^{\pm}_{k}\rangle$ where \begin{equation} \mathbb{H}_{k}=[h_{xy}(k)\sigma^{x}+(h_{z}(k)-\omega)\sigma^{z}+\omega\mathbb{1}]/2 \label{eq3}, \end{equation} and $|\chi^{\pm}_{k}\rangle=U_{R}^{\dagger}(t)|\psi_{k}^{\pm}(t)\rangle$ (see Appendix \ref{A2}). For simplicity and without loss of generality we take $J_{1}=1$, and $J_{2}, h_{s}, \omega>0$, henceforth. The eigenvalues and eigenvectors of the time-independent noninteracting Hamiltonian $\mathbb{H}_{k}$ are \begin{equation} E^{\pm}_{k}=\frac{\omega}{2}\pm\frac{1}{2} \sqrt{h_{xy}^{2}(k)+(h_{z}(k)-\omega)^{2}} \label{eq4}, \end{equation} \begin{eqnarray} &&|\chi^{-}_{k}\rangle =\left( \begin{array}{c} \cos(\gamma_{k}/2) \\ \sin(\gamma_{k}/2) \\ \end{array} \right), \nonumber\\ &&|\chi^{+}_{k}\rangle =\left( \begin{array}{c} \sin(\gamma_{k}/2) \\ -\cos(\gamma_{k}/2) \\ \end{array} \right), \label{eq5} \end{eqnarray} where \begin{eqnarray} \label{eq6} \gamma_{k}=\arctan\Big[\frac{\sin(\theta_{k})}{\cos(\theta_{k})-\omega/|\vec{h}_{k}|}\Big]. \end{eqnarray} It is worth mentioning that the time-independent extended XY model in the presence of the renormalized staggered magnetic field \begin{eqnarray} {\cal H} = \sum_{n=1}^{N}&& J_{1} \Big[\Big(S_n^x S_{n+1}^x + S_n^y S_{n+1}^y \Big)+(-1)^{n} h^{eff}_{s} S_n^z \nonumber\\ &&+(-1)^{n} J_{2} \Big(S_n^x S_{n+1}^z S_{n+2}^x + S_n^y S_{n+1}^z S_{n+2}^y \Big)\Big] \label{eq7}, \end{eqnarray} with $h^{eff}_{s}=h_{s}-\omega/2$, yields the noninteracting (quasi-spin) Hamiltonian $\mathbb{H}_{k}$, the same as Eq.
(\ref{eq3}) (apart from an additive constant), which leads to the eigenvectors and eigenvalues given by Eqs. (\ref{eq4})-(\ref{eq5}). It is clear that the ground state is separated from the excited state by the energy gap $\Delta_{k}=|E^{+}_{k}-E^{-}_{k}|$, which vanishes at the Brillouin zone boundary $k_{c}=\pm\pi$ for $h^{eff}_{s}=h_{s}-\omega/2=J_{2}/2$. Thus, a QPT occurs at $h^{eff}_{s}=J_{2}/2$, where the system transforms from the spin-liquid phase ($h^{eff}_{s}<J_{2}/2$) to the long-range ordered antiferromagnetic phase ($h^{eff}_{s}>J_{2}/2$). \begin{figure} \centerline{\includegraphics[width=1\columnwidth]{fig1.pdf}} \caption{(Color online) Variation of $\cos(\gamma_k)$ versus $k/\pi$ for $J_2=\pi, h_{s}=3\pi$ and $\omega=4\pi$ (solid line), $\omega=6\pi$ (dotted line) and $\omega=8\pi$ (dash-dotted line).} \label{fig1} \end{figure} \subsection{Topological transition from adiabatic to nonadiabatic regime\label{diabatictoadiabatic}} An adiabatic cyclic process is associated with slowly changing the periodic parameters driving a physical system. In such a case, the system returns to its initial state after a cycle. However, a quantum state may gain a phase factor that can be given as the sum of a dynamical phase, which depends on the Hamiltonian parameters, and an extra term of geometrical origin. The latter is known as the Berry phase and can be determined from the geometric properties of the path traced by the driving parameters in the parameter space of the Hamiltonian \cite{Berry1984}. Nonadiabatic cyclic processes may also produce geometric phases that are smaller than adiabatic Berry phases but with similar geometric characteristics \cite{Aharonov1987,Bohmbook,JAFARI20133279}. Consider the instantaneous non-degenerate eigenstates $|m,\textbf{R}\rangle$ of the system for a given set of parameters \textbf{R} in $H(\textbf{R})$, i.e. $H(\textbf{R})|m,\textbf{R}\rangle=E_{m}(\textbf{R})|m,\textbf{R}\rangle$, where $E_{m}(\textbf{R})$ is the corresponding eigenvalue. Hence, the Berry phase is expressed as $\Upsilon_{m}=\oint A^{m}(\textbf{R})$, where $A^{m}(\textbf{R})$ is the so-called Berry connection expressed as a (local) differential one-form \cite{Bohmbook} \begin{equation} A^{m}(\textbf{R})=A^{m}_{\upsilon}dR^{\upsilon}=i\langle m,\textbf{R}|\frac{\partial}{\partial R^{\upsilon}}|m,\textbf{R}\rangle dR^{\upsilon}. \label{eq8} \end{equation} Using Stokes' theorem, we arrive at $\Upsilon_{m}=\int_{S} F^{m}(\textbf{R})$, where $F^{m}(\textbf{R})$ is the Berry curvature two-form given by \cite{Bohmbook} \begin{equation} F^{m}(\textbf{R})=\frac{\partial A^{m}_{\mu}}{\partial R^{\upsilon}}dR^{\upsilon}\wedge dR^{\mu}=dA^{m}. \label{eq9} \end{equation} Here, $\wedge$ is the antisymmetric wedge product and $dA^{m}$ stands for the exterior derivative of the Berry connection one-form $A^{m}$. The Chern number, which is given by the integral of the Berry curvature over the whole parameter space, encodes the information about a topological transition between two different driving regimes (see Appendix \ref{A4}). In the adiabatic regime, a spin can adapt to the variation of the effective magnetic field and will remain in its instantaneous eigenstate during the slow evolution. In contrast, in the nonadiabatic regime, the spin cannot align with the magnetic field and hence is exposed to an average magnetic field \cite{gomez2012topological,zener1932non,betthausen2012spin,leon2014dynamical}. This distinction can be made concrete numerically, as sketched below.
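As an illustration, the following minimal Python sketch evaluates $\gamma_{k}$ of Eq.~(\ref{eq6}) across the Brillouin zone together with the per-mode adiabaticity indicator $\Theta(1-\omega/|\vec{h}_{k}|)$ of Eq.~(\ref{eq10}) below; the parameters mirror Fig.~\ref{fig1}, while the grid resolution is an arbitrary choice.
\begin{verbatim}
import numpy as np

# Parameters as in Fig. 1 (J1 = 1); the k-grid resolution is arbitrary
J2, hs = np.pi, 3.0 * np.pi
k = np.linspace(-np.pi, np.pi, 4001)

h_xy = 2.0 * np.cos(k / 2.0)         # transverse component of h_k
h_z = J2 * np.cos(k) + 2.0 * hs      # longitudinal component of h_k
h = np.sqrt(h_xy**2 + h_z**2)        # field magnitude |h_k|
theta = np.arctan2(h_xy, h_z)        # polar angle theta_k

for omega in (4 * np.pi, 6 * np.pi, 8 * np.pi):
    # Eq. (6); arctan2 keeps gamma_k on [0, pi] since sin(theta_k) >= 0
    gamma = np.arctan2(np.sin(theta), np.cos(theta) - omega / h)
    # Theta(1 - omega/|h_k|) = 1 for adiabatically driven modes
    adiabatic = np.heaviside(1.0 - omega / h, 0.0)
    print(f"omega = {omega/np.pi:.0f} pi: cos(gamma_k) spans "
          f"[{np.cos(gamma).min():+.2f}, {np.cos(gamma).max():+.2f}], "
          f"adiabatic mode fraction = {adiabatic.mean():.2f}")
\end{verbatim}
Only for $\omega=6\pi$ does $\cos(\gamma_{k})$ sweep (nearly) the full interval $[-1,1]$, consistent with the frequency window $|J_{2}-2h_{s}|\leq\omega\leq\sqrt{(J_{2}+2h_{s})^2+4}$ derived below.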
\begin{figure*} \begin{minipage}{\linewidth} \centerline{\includegraphics[width=0.31\linewidth]{fig2a.pdf} \includegraphics[width=0.31\linewidth]{fig2b.pdf} \includegraphics[width=0.37\linewidth]{fig2c.pdf}} \centering \end{minipage} \begin{minipage}{\linewidth} \centerline{\includegraphics[width=0.33\linewidth]{fig2d.pdf} \includegraphics[width=0.33\linewidth]{fig2e.pdf} \includegraphics[width=0.33\linewidth]{fig2f.pdf}} \centering \end{minipage} \caption{(Color online) The density plot of the Loschmidt echo $|{\cal L}_{k}(t)|^{2}$ as a function of time $t$ and $k$, for (a) $\omega=4\pi$ (b) $\omega=6\pi$ and (c) $\omega=8\pi$. The rate function of the Loschmidt amplitude versus time $t$ for (d) $\omega=4\pi$ (e) $\omega=6\pi$ and (f) $\omega=8\pi$. In all plots we take $J_{2}=\pi$, $h_{s}=3\pi$. } \label{fig2} \end{figure*} Implementing the exact Floquet states, we obtain the Berry curvature, which gives an exact expression for the Chern number ($C$) (see Appendix \ref{A4}), \begin{equation} C=\Theta (1-\frac{\omega}{\sqrt{h_{xy}^2(k)+h_{z}^2(k)}})=\Theta (1-\frac{\omega}{|\vec{h}_{k}|}), \label{eq10} \end{equation} where $\Theta(x)$ is the Heaviside step function. According to the calculated Chern number (Eq. (\ref{eq10})), in the adiabatic regime $\omega<|\vec{h}_{k}|$ we get $C=1$, where the quasi-spin is able to follow the evolution of the magnetic field $\vec{h}_{k}$, leading to Rabi oscillations \cite{Schwinger1937}. However, in the nonadiabatic regime $\omega>|\vec{h}_{k}|$ ($C=0$), which corresponds to the Landau-Zener transition \cite{zener1932non}, the spin cannot follow the effective magnetic field and is only subjected to the average field \cite{betthausen2012spin,leon2014dynamical}. To explain the transition from the adiabatic to the nonadiabatic regime in simple terms, let us revisit the definition of the angle $\gamma_{k}$ in Eq. (\ref{eq6}). To obtain oscillations of the quasi-spin, $\gamma_{k}$ needs to vary from $0$ to $\pi$, i.e., $\gamma_{k}\in [0,\pi]$. This is possible only if $\omega<|\vec{h}_{k}|$, such that the denominator of Eq. (\ref{eq6}) can become zero, letting the argument run over $(-\infty,\infty)$. In turn, it is required that the driving frequency ranges from $|J_{2}-2h_{s}|$ to $\sqrt{(J_{2}+2h_{s})^2+4}$, i.e., $|J_{2}-2h_{s}|\leq\omega\leq\sqrt{(J_{2}+2h_{s})^2+4}$. In Fig. \ref{fig1}, the variation of $\cos(\gamma_{k})$ is plotted versus $k/\pi$ at $J_{2}=\pi$, $h_s=3\pi$ for different values of $\omega$. As seen, $\gamma_{k}$ changes from $0$ to nearly $\pi$ (dotted line) for the driving frequency $\omega=6\pi$, which forces the system to evolve adiabatically. On the other hand, in the nonadiabatic regime ($\omega=4\pi, 8\pi$), $\cos(\gamma_{k})$ is roughly constant, which means that the quasi-spins cannot align with the effective magnetic field. \section{Dynamical Phase Transition\label{DQPTs}} As mentioned in the Introduction, there has recently been a renewed focus on the study of DQPTs, probing nonanalyticities of the dynamical free energy of a quenched system in both pure and mixed states \cite{heyl2018dynamical,heyl2017dynamical,bhattacharya2017mixed}. The DQPT notion emanates from the resemblance between the canonical partition function of an equilibrium system $Z(\beta)=Tr e^{-\beta{\cal H}}$ and the quantum boundary partition function $Z(z)=\langle\Psi|e^{-z{\cal H}}|\Psi\rangle$ \cite{LeClair,Piroli}.
When $z=it$, the quantum boundary partition function corresponds to the Loschmidt amplitude ${\cal L}(t)=\langle\Psi(\lambda_{1})|e^{-i{\cal H}(\lambda_{2})t}|\Psi(\lambda_{1})\rangle$. In such a case, the LA is the overlap amplitude of the initial quantum state $|\Psi(\lambda_{1})\rangle$ with its time-evolved state under the post-quench Hamiltonian ${\cal H}(\lambda_{2})$. DQPTs are defined by sharp nonanalyticities in the rate function of the Loschmidt echo (LE) $|{\cal L}_{k}(t)|^{2}$, given by \cite{Pollmann,heyl2013dynamical,andraschko2014dynamical,sharma2015quenches} \begin{eqnarray} \nonumber g(t)=-\frac{1}{2\pi}\int_{-\pi}^{\pi} dk \ln|{\cal L}_{k}(t)|^{2}. \end{eqnarray} In addition, these nonanalyticities are signaled by the zeros of $Z(z)$, known as Fisher zeros \cite{heyl2013dynamical,heyl2018dynamical}. Here, we investigate the Floquet DQPTs in the proposed periodically time-dependent Hamiltonian, Eq. (\ref{eq1}), to explore the characteristics of DQPTs in quantum Floquet systems. \subsection{Pure state dynamical phase transition\label{pureDQPT}} According to the results of Section \ref{schrodingerEQ}, the time-evolved ground state $|\psi^{-}_{k}(t)\rangle$ of the quasi-spin Hamiltonian $H_{k}(t)$ is given by {\small \begin{eqnarray} |\psi^{-}_{k}(t)\rangle&=&U_{R}(t)e^{-iE^{-}_{k}t}|\chi^{-}_{k}\rangle =e^{-iE^{-}_{k}t}e^{i\omega(\mathbb{1}-\sigma^{z})t/2}|\chi^{-}_{k}\rangle. \label{eq11} \end{eqnarray} } Due to the decoupling of different momentum sectors, the initial and time-evolved ground states of the original Hamiltonian exhibit a factorization property that is expressed by \begin{eqnarray} &&|\psi^{-}(t)\rangle=\Pi_{k}|\psi^{-}_{k}(t)\rangle=\Pi_{k} e^{-iE^{-}_{k}t}U_{R}(t)|\chi^{-}_{k}\rangle, \nonumber\\ &&|\psi^{-}(t=0)\rangle=\Pi_{k}|\chi^{-}_{k}\rangle. \label{eq12} \end{eqnarray} It is straightforward to show that the LA corresponding to the ground state of the proposed model is given by \begin{eqnarray} \nonumber {\cal L}(t)&=&\langle\psi^{-}(0)|\psi^{-}(t)\rangle=\Pi_{k}{\cal L}_{k}(t),\\ \nonumber {\cal L}_{k}(t)&=&\langle\psi^{-}_{k}(0)|\psi^{-}_{k}(t)\rangle =e^{-iE^{-}_{k}t}\langle\chi^{-}_{k}|U_{R}(t)|\chi^{-}_{k}\rangle\\ \label{eq13} &=&e^{-iE^{-}_{k}t}\Big[\cos^2(\frac{\gamma_{k}}{2})+\sin^2(\frac{\gamma_{k}}{2})e^{i\omega t}\Big]. \end{eqnarray} \begin{figure*} \centerline{\includegraphics[width=0.325\linewidth]{fig3a.pdf} \includegraphics[width=0.325\linewidth]{fig3b.pdf} \includegraphics[width=0.325\linewidth]{fig3c.pdf}} \caption{(Color online) Lines of Fisher zeros for $J_{2}=\pi$, $h_{s}=3\pi$, and (a) $\omega=4\pi$, (b) $\omega=6\pi$ and (c) $\omega=8\pi$.} \label{fig3} \end{figure*} \begin{figure*} \centerline{\includegraphics[width=0.33\linewidth]{fig4a.pdf} \includegraphics[width=0.33\linewidth]{fig4b.pdf} \includegraphics[width=0.33\linewidth]{fig4c.pdf}} \caption{(Color online) The dynamical topological order parameter as a function of time for $J_{2}=\pi$, $h_{s}=3\pi$ and (a) $\omega=4\pi$, (b) $\omega=6\pi$ and (c) $\omega=8\pi$.} \label{fig4} \end{figure*} The density plot of the LE and the rate function of the LE are shown in Figs.~\ref{fig2}(a)-(f). It can be clearly seen that in the adiabatic regime (Fig. \ref{fig2}(b)) there exist critical points $k^{\ast}$ and $t^{\ast}$ where ${\cal L}_{k^{\ast}}(t^{\ast})$ becomes zero. In contrast, there is no such critical point in the nonadiabatic regime (Figs. \ref{fig2}(a) and \ref{fig2}(c)); a numerical sketch of the rate function based on Eq. (\ref{eq13}) is given below.
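The following minimal Python sketch evaluates $g(t)$ from Eq.~(\ref{eq13}) on a discrete $k$-grid; the default parameters mirror Fig.~\ref{fig2}, and the grid size is a hypothetical choice.
\begin{verbatim}
import numpy as np

def rate_function(t, omega, J2=np.pi, hs=3.0 * np.pi, nk=2000):
    """g(t) = -(1/2 pi) \int dk ln|L_k(t)|^2, with L_k(t) from Eq. (13).

    The overall phase exp(-i E_k^- t) drops out of |L_k(t)|^2."""
    k = np.linspace(-np.pi, np.pi, nk, endpoint=False)
    h_xy = 2.0 * np.cos(k / 2.0)
    h_z = J2 * np.cos(k) + 2.0 * hs
    # gamma_k of Eq. (6), rewritten using
    # sin(theta_k)/(cos(theta_k) - omega/|h_k|) = h_xy/(h_z - omega)
    gamma = np.arctan2(h_xy, h_z - omega)
    c2 = np.cos(gamma / 2.0) ** 2
    s2 = np.sin(gamma / 2.0) ** 2
    echo = np.abs(c2 + s2 * np.exp(1j * omega * t)) ** 2
    return -np.mean(np.log(echo))

# In the adiabatic regime, cusps appear at t*_n = (2n+1) pi / omega;
# for omega = 6 pi the first cusp sits at t = 1/6
print(rate_function(t=1.0 / 6.0, omega=6.0 * np.pi))
\end{verbatim}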
Consequently, the nonanalyticities in the rate function of the LA, and hence the DQPTs, occur for driving frequencies at which the system evolves adiabatically (Fig. \ref{fig2}(e)). The real-time instances at which the DQPTs appear are exactly the times at which at least one factor in the LA becomes zero, i.e., ${\cal L}_{k^{\ast}}(t^{\ast})=0$. According to Eq. (\ref{eq13}), we find that a DQPT happens only when there is a mode $k^{\ast}$ satisfying $J_{2}\cos(k^{\ast})+2h_{s}-\omega=0$, which leads to $|J_{2}-2h_{s}|<\omega<J_{2}+2h_{s}$. Since $J_{2}+2h_{s}<\sqrt{4+(J_{2}+2h_{s})^{2}}$, we conclude that the nonanalyticities in the rate function of the LA can only exist when the system evolves adiabatically. In other words, the minimum driving frequency required for the emergence of Floquet DQPTs is equal to the threshold frequency needed for the transition from the nonadiabatic to the adiabatic regime. Consequently, the LA shows a periodic sequence of real-time nonanalyticities in the adiabatic regime at \begin{equation} t^{\ast}_{n}=(2n+1)\frac{\pi}{\omega}=(n+\frac{1}{2}) t^{\ast},~~n\in\mathbb{Z}^+ \label{eq14}, \end{equation} with the period $T_{p}=t^{\ast}=2\pi/\omega$. This result is in agreement with the numerical simulations shown in Figs. \ref{fig2}(b) and \ref{fig2}(e). Furthermore, in contrast to DQPTs in the conventional quench mechanism, in which the height of the rate-function singularities decays with time, the height of the cusps in Floquet DQPTs does not decay with time. As mentioned, the nonanalyticities of the LA rate function are accompanied by Fisher zeros of the boundary partition function. The boundary partition function of our model is given by \begin{equation} {\cal L}_{k}(z)=e^{-E^{-}_{k}z}\Big[\cos^2(\frac{\gamma_{k}}{2})+\sin^2(\frac{\gamma_{k}}{2})e^{\omega z}\Big] \label{eq15}. \end{equation} Zeros of Eq. (\ref{eq15}) coalesce in the thermodynamic limit to the family of lines labeled by a number $n \in \mathbb{Z}$: \begin{eqnarray} z_{n}=\frac{1}{\omega}\Big[\ln(\frac{1+\cos(\gamma_{k})}{1-\cos(\gamma_{k})}) +i(2n+1)\pi\Big]. \label{eq16} \end{eqnarray} \begin{figure*} \begin{minipage}{\linewidth} \centerline{\includegraphics[width=0.31\linewidth]{fig5a.pdf} \includegraphics[width=0.31\linewidth]{fig5b.pdf} \includegraphics[width=0.37\linewidth]{fig5c.pdf}} \centering \end{minipage} \begin{minipage}{\linewidth} \centerline{\includegraphics[width=0.33\linewidth,height=0.26\linewidth]{fig5d.pdf} \includegraphics[width=0.33\linewidth,height=0.26\linewidth]{fig5e.pdf} \includegraphics[width=0.33\linewidth]{fig5f.pdf}} \centering \end{minipage} \caption{(Color online) The density plot of the generalized Loschmidt amplitude versus $t$ and $k$ for $\beta=1$, and (a) $\omega=4\pi$, (b) $\omega=6\pi$ and (c) $\omega=8\pi$. The rate function of the generalized Loschmidt amplitude versus time for different $\beta$ and (d) $\omega=4\pi$, (e) $\omega=6\pi$ and (f) $\omega=8\pi$. In all plots $J_{2}=\pi$, $h_{s}=3\pi$. } \label{fig5} \end{figure*} The lines of Fisher zeros are sketched in Fig. \ref{fig3} for different values of the driving frequency. It can be clearly seen that the imaginary axis is crossed by the lines of Fisher zeros in the adiabatic regime (Fig. \ref{fig3}(b)). However, the Fisher zeros do not cut the imaginary axis for driving frequencies that drive the system nonadiabatically (Figs. \ref{fig3}(a) and \ref{fig3}(c)). We have also plotted the density plot of the time-dependent expectation values of the quasi-spin components $\langle\sigma^{\alpha}\rangle$, shown in Fig.
\ref{fig7} in Appendix \ref{A5}. In the adiabatic regime, it is clearly visible that the quasi-spin components oscillate between spin up and down states and cover the range $[-1,1]$. As stated in Sec.~\ref{introduction}, the dynamical topological order parameter has been proposed to represent the topological characteristic associated with DQPTs. The DTOP displays integer values as a function of time and reveals unit-magnitude jumps at the critical times at which the DQPTs occur. The DTOP is given by \begin{eqnarray} \label{eq17} \nu_D(t)=\frac{1}{2\pi}\int_0^\pi\frac{\partial\phi^G(k,t)}{\partial k}\mathrm{d}k, \end{eqnarray} where the geometric phase $\phi^G(k,t)$ is obtained from the total phase $\phi(k,t)$ by subtracting the dynamical phase $\phi^{D}(k,t)$, i.e., $\phi^G(k,t)=\phi(k,t)-\phi^{D}(k,t)$. The total phase $\phi(k,t)$ is the phase factor of the LA in its polar representation, i.e. ${\cal L}_{k}(t)=|{\cal L}_{k}(t)|e^{i\phi(k,t)}$, and the dynamical phase is $\phi^{D}(k,t)=-\int_0^t \langle \psi_{k}^{-}(t')|\mathbb{H}_{k}|\psi_{k}^{-}(t')\rangle dt'$; they are obtained as follows {\small \begin{eqnarray} \nonumber \phi(k,t)=-E^{-}_{k}t &+& \tan^{-1}\Big(\frac{\sin^{2}(\gamma_{k}/2)\sin(\omega t)}{\cos^{2}(\gamma_{k}/2) +\sin^{2}(\gamma_{k}/2)\cos(\omega t)}\Big),\\ \nonumber \phi^{D}(k,t)&=&-E^{-}_{k}t+(1-\cos(\gamma_{k}))\omega t/2. \end{eqnarray} } \begin{figure*} \centerline{\includegraphics[width=0.33\linewidth]{fig6a.pdf} \includegraphics[width=0.33\linewidth]{fig6b.pdf} \includegraphics[width=0.33\linewidth]{fig6c.pdf}} \caption{(Color online) The mixed state topological order parameter as a function of time for different values of $\beta$, $J_{2}=\pi$, $h_{s}=3\pi$, and (a) $\omega=4\pi$, (b) $\omega=6\pi$ and (c) $\omega=8\pi$.} \label{fig6} \end{figure*} The DTOP is displayed in Fig. \ref{fig4} for different values of the driving frequency. From Fig. \ref{fig4} one can see that the DTOP smoothly decreases/increases with time in the nonadiabatic regime (Figs. \ref{fig4}(a) and \ref{fig4}(c)), which reflects the absence of DQPTs. The unit jumps in Fig. \ref{fig4}(b) feature the topological aspects of the DQPTs, which happen in the adiabatic regime. \subsection{Mixed state dynamical phase transition} In experiments \cite{flaschner2018observation,jurcevic2017direct} that investigate far-from-equilibrium theoretical concepts, the initial state in which the system is prepared is typically not a pure state but rather a mixed state. This has motivated the proposal of a generalized Loschmidt amplitude (GLA) for mixed thermal states, which perfectly reproduces the nonanalyticities manifested in the pure state DQPTs \cite{bhattacharya2017mixed, heyl2017dynamical}. Here, we investigate the mixed state Floquet DQPT notion for the Floquet Hamiltonian, Eq. (\ref{eq1}). The GLA for a thermal mixed state is defined as follows \begin{equation} {\cal GL}(t) =\prod_{k} {\cal GL}_{k}(t)=\prod_{k}Tr \Big(\rho_{k}(0) U(t)\Big), \label{eq18} \end{equation} where $\rho_{k}(0)$ is the mixed state density matrix at time $t=0$, and $U(t)$ is the time-evolution operator. The mixed state density matrix and time-evolution operator of the Floquet Hamiltonian (Eq.
(\ref{eq1})) are given by \begin{equation} \label{eq19} \rho_{k}(0)=\frac{e^{-\beta \mathbb{H}_{k}}}{{\rm Tr}(e^{-\beta \mathbb{H}_{k}})} =\frac{1}{2}\Big(\mathbb{1}-\tanh(\beta\frac{\Delta_{k}}{2}){{\hat n}_k}\cdot {\vec {\sigma}}\Big),\\ \label{eq20} U(t)=U_{R}(t)e^{-{\it i}\mathbb{H}_{k}t}=e^{{\it i}\omega(\mathbb{1}-\sigma^{z})t/2}e^{-{\it i}\mathbb{H}_{k}t}, \end{equation} respectively, where $\mathbb{H}_{k}=\frac{1}{2}(\omega\mathbb{1}+\Delta_{k}{\hat n}_k\cdot\vec {\sigma})$, with ${\hat n}_k=(h_{xy}(k),0,h_{z}(k)-\omega)/\Delta_{k}$, and $\beta=T^{-1}$ is the inverse temperature with Boltzmann constant $k_{B}=1$. A rather lengthy calculation yields an exact expression for the GLA, \begin{eqnarray} \label{eq21} {\cal GL}_{k}(t)&=&\frac{1}{\Delta_{k}}\Big[\Delta_{k}\cos(\frac{\omega t}{2})\cos(\frac{\Delta_{k}t}{2})\\ \nonumber &-&(h_z(k)-\omega)\sin(\frac{\omega t}{2})\sin(\frac{\Delta_{k}t}{2})\\ \nonumber &+&i\Big(\Delta_{k}\cos(\frac{\omega t}{2})\sin(\frac{\Delta_{k}t}{2})\\ \nonumber &+&(h_z(k)-\omega)\sin(\frac{\omega t}{2})\cos(\frac{\Delta_{k}t}{2})\Big)\tanh(\beta\Delta_{k}/2)\Big]. \end{eqnarray} The density plot of the GLA is shown versus time $t$ and $k$ in Figs. \ref{fig5}(a)-(c) for different values of the driving frequency at $\beta=1$. As seen, in the adiabatic regime (Fig. \ref{fig5}(b)) the critical points $k^{\ast}$ and $t^{\ast}$, where the GLA becomes zero, are exactly the same as the corresponding ones in the LA. In the nonadiabatic regime, there is no critical point at which the GLA becomes zero. So we expect that the mixed state DQPT occurs in the adiabatic regime even at finite temperatures. The comparison of Fig. \ref{fig2}(b) with Fig. \ref{fig5}(b) shows that the GLA is deformed in time. Our numerical results show that the deformation grows with increasing time and temperature. The rate function of the GLA is plotted versus time in Figs. \ref{fig5}(d)-(f) for different values of $\beta$ and the driving frequency. As is clear, the nonanalyticities in the rate function of the GLA appear in the adiabatic regime (Fig. \ref{fig5}(e)). Although the GLA correctly reproduces the critical mode $k^{\ast}$ and critical time $t^{\ast}$ observed for the pure state DQPT, the height of the cusps decreases with increasing time, and this decay of the cusp heights becomes more pronounced with increasing temperature (Fig. \ref{fig5}(e)). It has to be noted that for temperatures smaller than the temperature associated with the minimum energy gap, the critical modes and times of the mixed state DQPT remain unaffected (this can be checked directly from Eq. (\ref{eq21}); see the sketch below). For higher temperatures the fingerprint of the DQPT is washed out, signaling a crossover to a new regime without DQPTs (see the inset of Fig. \ref{fig5}(e), which shows the case $\beta=0.1$). In Fig. \ref{fig8} (see Appendix \ref{A8}) the time-dependent expectation values of the quasi-spin components $\langle\sigma^{\alpha}\rangle$ are plotted at finite temperature. Similar to the pure state case, in the adiabatic regime the quasi-spin components oscillate between spin up and down states. Remarkably, the range over which the quasi-spin components oscillate shrinks with increasing temperature, and at high temperatures their expectation values are approximately constant. Analogous to the pure state DQPT, a topological invariant has been established for the mixed state DQPT to display its topological characteristics.
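Before turning to that invariant, we note that Eq.~(\ref{eq21}) is straightforward to evaluate numerically; the Python sketch below (with the same illustrative parameters as above) returns ${\cal GL}_{k}(t)$ for a single mode.
\begin{verbatim}
import numpy as np

def gla_mode(t, k, beta, omega, J2=np.pi, hs=3.0 * np.pi):
    """Generalized Loschmidt amplitude GL_k(t) of Eq. (21) for one
    mode k at inverse temperature beta."""
    h_xy = 2.0 * np.cos(k / 2.0)
    h_z = J2 * np.cos(k) + 2.0 * hs
    delta = np.sqrt(h_xy**2 + (h_z - omega) ** 2)   # gap Delta_k
    cw, sw = np.cos(omega * t / 2.0), np.sin(omega * t / 2.0)
    cd, sd = np.cos(delta * t / 2.0), np.sin(delta * t / 2.0)
    thermal = np.tanh(beta * delta / 2.0)
    re = delta * cw * cd - (h_z - omega) * sw * sd
    im = (delta * cw * sd + (h_z - omega) * sw * cd) * thermal
    return (re + 1j * im) / delta
\end{verbatim}
One checks directly from this expression that at the critical mode, where $h_z(k^{\ast})=\omega$, both the real and the imaginary part are proportional to $\cos(\omega t/2)$, so the zeros at $t^{\ast}_{n}=(2n+1)\pi/\omega$ survive at any finite $\beta$, in line with the temperature independence of the critical modes and times noted above.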
In the mixed state DQPT the total phase and dynamical phase are defined as $\phi(k,\beta,t)=Arg\Big[Tr\big(\rho(k,\beta,0)U(t)\big)\Big]$ and $\phi^{D}(k,\beta,t)=-\int_{0}^{t} Tr[\rho(k,\beta,t')H(k,t')]dt'$, respectively. Hence, the topological invariant $\nu_D(t)$ can be calculated for a mixed state using Eq. (\ref{eq17}), in which $\phi^{G}(k,\beta,t)=\phi(k,\beta,t)-\phi^{D}(k,\beta,t)$. After a lengthy calculation, one can obtain $\phi(k,\beta,t)$ and $\phi^{D}(k,\beta,t)$ for a mixed state (see Appendix \ref{A7}). The mixed state topological invariant is illustrated in Fig. \ref{fig6} for different values of the driving frequency and $\beta$. One can clearly see that $\nu_D(t)$ exhibits a nearly perfect quantization (unit jumps) as a function of time between two successive critical times $t^{\ast}$ in the adiabatic regime, Fig. \ref{fig6}(b). The quantized structure of $\nu_D(t)$ is only observed as long as the temperature is smaller than the temperature associated with the minimum energy gap. Although abrupt jumps of $\nu_D(t)$ are observed at higher temperatures, they do not show quantized values representing a topological character, as seen in the inset of Fig. \ref{fig6}(b) for $\beta=0.1$. This confirms the existence of a crossover temperature below which mixed state DQPTs exist and are signaled by the mixed state DTOP, which is nearly quantized. \section{Conclusion} We have studied both pure and mixed state Floquet dynamical quantum phase transitions in the periodically driven extended XY model in the presence of a staggered magnetic field. We have shown that the proposed Floquet Hamiltonian with $N$ interacting spins can be mapped to $N/2$ noninteracting quasi-spins subjected to a time dependent effective magnetic field (Schwinger-Rabi model). The calculated values of the Chern number reveal that there exists a topological transition from the nonadiabatic to the adiabatic regime. In the adiabatic regime, the quasi-spins follow the time dependent effective magnetic field and oscillate between up and down states. However, in the nonadiabatic regime the quasi-spins cannot trace the time dependent effective magnetic field and feel an average magnetic field. We have obtained the range of driving frequency over which the system experiences an adiabatic cyclic process and shows DQPTs. It can be understood that for very high frequencies the adiabatic evolution is lost, while at very low frequencies the period of the cyclic dynamics diverges, which prevents any recurrence that could show a DQPT at finite time. This justifies the presence of a frequency window over which the DQPTs are observed. We would like to stress that our model has a single gapless critical point, and the DQPTs observed within a frequency window are different from their counterparts in Ref.~[\onlinecite{yang2019floquet}], which possesses two distinct critical points that define the frequency window. We have also obtained the exact expressions of the Loschmidt amplitude and the generalized Loschmidt amplitude of the proposed Floquet system. Our results confirm that both pure and mixed state dynamical phase transitions occur whenever the system evolves adiabatically. In other words, the minimum frequency needed for the emergence of the dynamical phase transition is equal to the threshold driving frequency necessary to drive the system into the adiabatic regime.
Furthermore, we have observed that for the mixed state dynamical phase transition there is a crossover temperature above which the nonanalyticities in the rate function of the generalized Loschmidt amplitude and the quantization of the mixed state DTOP get completely wiped out. Floquet systems may show a pre-thermal phase protected by discrete time-translation symmetry in the absence of disorder and integrals of motion \cite{Prosen1998,Luca2014,Lazarides2014,Ponte2015,Mori2016,Khemani2016,Abanin2017,Else2017}. It would be an exciting topic to investigate the presence of DQPTs in a pre-thermal regime, which can be realized by adding interactions/perturbations to the model we discussed. \section{Acknowledgements} A. L. would like to acknowledge the support from Sharif University of Technology under Grant No. G960208.
\section{Introduction} There have been many developments in estimating average treatment effects (ATE). In various fields, the estimation of the ATE provides a central insight into the causal effect of a treatment (e.g., an intervention, an environmental policy, and so on) on an outcome, on average for the whole population. However, in addition to the ATE, it is critically important to identify subpopulations of the population that would benefit the most from a treatment and/or would be most vulnerable to an environmental exposure. In the context of air pollution, it is deemed important to public health to identify the subpopulations that are most vulnerable, so that effective interventions can be put in place to mitigate adverse health effects \citep{lee2018discovering}. There is extensive literature on assessing heterogeneity of causal effects that is based on estimating the conditional average treatment effect (CATE). For each combination of covariates $\mathbf{X} = \mathbf{x}$ (i.e., subset of the features space), the CATE can be estimated with the same set of causal assumptions that are needed for estimating the ATE \citep{athey2016}. Under the same identification assumptions, earlier works on estimating the CATE rely on nearest-neighbor matching and kernel methods \citep{crump2008nonparametric, lee2009non}. \cite{wager2018estimation} discuss that these approaches may fail in handling a large number of covariates. This issue is often referred to as the \textit{curse of dimensionality} \citep{bellman1961adaptive, robins1997toward}. Recently, other nonparametric approaches have leveraged machine learning methods such as Random Forest \citep{breiman2001random} and Bayesian Additive Regression Trees (BART) \citep{chipman2010bart}. These approaches have been successful when the number of features is large. For instance, in their seminal contributions, \cite{foster2011subgroup} and \cite{hill2011bayesian} used forest-based algorithms for the prediction of the missing potential outcomes. In a similar spirit, \cite{hahn2020bayesian} proposed a BART-based approach but with a novel parametrization of the outcome surfaces. In more recent contributions, \cite{wager2018estimation} and \cite{athey2019generalized} developed forest-based methods for the estimation of heterogeneous treatment effects. They also provide asymptotic theory for the conditional treatment effect estimators and valid statistical inference. Despite the success in accurately estimating the CATE using machine learning methods, these tree ensemble methods offer little guidance about which covariates or, even further, subpopulations (i.e., subsets of the features space defined by multiple covariates) bring about treatment effect heterogeneity. Outputs/results obtained from existing methods are hard to interpret by human experts because the parametrizations of the covariate space are complicated. This issue is well known as the \textit{lack of interpretability}. Increasing model interpretability is key to understanding and furthering human knowledge. However, effort to improve interpretability is so far lacking in the current causal inference literature dealing with the study of treatment effect heterogeneity. In this paper, we propose a novel Causal Rule Ensemble (CRE) method that ensures interpretability while maintaining a high level of accuracy in estimation.
The CRE method uses decision rules obtained from multiple trees, selects a key subset of rules to identify subpopulations contributing to heterogeneous treatment effects, and estimates the CATE for each selected rule. Interpretability is usually a qualitative concept, often defined as the degree to which a human can understand the cause of a decision or consistently predict the results of the model \citep{miller2018explanation, kim2016examples}. Decision rules are ideal for this non-mathematical definition of interpretability. A decision rule consists of simple \textit{if-then} statements regarding several conditions and corresponds to a specific subpopulation. A handful of decision rules, called \textit{causal rules}, can be chosen using high-performance machine learning techniques while remaining easy to understand. We achieve the following three main goals: (1) discovering de novo causal rules that lead to heterogeneity of causal effects; (2) providing valid inference and large sample properties for the CATE with respect to the newly discovered rules; and (3) assessing sensitivity to unmeasured confounding bias for the rule-specific causal effects. To do so, we follow \cite{athey2016} and rely on a sample-splitting approach that divides the total sample into two smaller subsamples: one for discovering a set of interpretable decision rules that could lead to treatment effect heterogeneity (i.e., discovery sample) and the other for estimating the rule-specific treatment effects (i.e., inference sample). We also tailor a sensitivity analysis method proposed by \cite{zhao2019sensitivity} to assess the robustness of the rule-specific treatment effects to unmeasured confounding. Furthermore, the CRE method has several other advantages besides interpretability and estimation accuracy. It allows practitioners to represent treatment effect heterogeneity in a more flexible and stable way. This stability provides higher standards of replicability. Also, it is a powerful tool to disentangle the effect modifiers (namely, drivers of causal effect heterogeneity) from measured confounders. The remainder of the paper is organized as follows. In Section \ref{sec:interpretability}, we introduce the main definitions of the CATE and interpretable decision rules. In Section \ref{sec:discovery}, we describe the algorithm used for the discovery of causal rules. Section \ref{sec:inference} introduces our innovative ideas regarding estimation and sensitivity analysis. In Section \ref{sec:simulations}, we conduct simulation studies. In Section \ref{sec:application}, we apply the proposed method to the Medicare Data. Section \ref{sec:discussion} discusses the strengths and weaknesses of our proposed approach and areas of future research. \section{Treatment Effect Heterogeneity, Interpretability and Sample-Splitting}\label{sec:interpretability} \subsection{Causal Treatment Effects} Suppose there are $N$ subjects. For each subject, let $Y_i$ be an outcome, $Z_i$ be a binary treatment, and $\mathbf{X}_i$ be a $K$-dimensional vector of covariates. Following the potential outcome framework \citep{neyman1923, rubin1974estimating}, we define two potential outcomes $Y_i(1)$ and $Y_i(0)$ as a function of the treatment assigned to each subject $i$ (i.e., $Y_i(Z_i)$). $Y_i(1)$ is the potential outcome for unit $i$ under treatment, while $Y_i(0)$ is the potential outcome under control.
The fundamental problem is that the treatment effect $\tau_i = Y_i(1) - Y_i(0)$ cannot be observed from a given sample $(Y_i, Z_i, \mathbf{X}_i)$ since we can observe only one of the potential outcomes \citep{holland1986statistics}. Since either $Z_i=1$ or $Z_i=0$ is realized, the corresponding potential outcome $Y_i(Z_i)$ is observed, and the other potential outcome $Y_i(1 - Z_i)$ is \textit{counterfactual} and always missing. For instance, if $Z_i=1$, we can observe $Y_i(1)$, but cannot observe $Y_i(0)$. What we can observe is the realization of $Y_i$ that can be formalized as $Y_i = Z_i Y_i(1) + (1-Z_i)Y_i(0)$. It is extremely difficult to estimate $\tau_i$ as this quantity is never observed in reality. Instead, we consider the conditional average treatment effect (CATE) $\tau(x)$ defined as $\tau(x) = \mathbb{E} \left[Y_i(1) - Y_i(0) | \mathbf{X}_i = x \right]$ and the average treatment effect (ATE) defined as $\tau = \mathbb{E}_{X}[\tau(x)]$. Although $\tau(x)$ cannot be observed, $\tau(x)$ can be estimated under the assumption of no unmeasured confounders \citep{rosenbaum1983central}, \begin{equation} \left(Y_i(1), Y_i(0) \right) \independent Z_i \>\vert\> \mathbf{X}_i. \label{assump:unconfounded} \end{equation} This assumption means that the two potential outcomes depend on $\mathbf{X}_i$, but are independent of $Z_i$ conditional on $\mathbf{X}_i$. By using the propensity score $e(x) = \mathbb{E}[Z_i \vert \mathbf{X}_i=x]$ \citep{rosenbaum1983central}, the CATE $\tau(x)$ can be identified as \begin{equation} \tau(x) = \mathbb{E} \left[\left( \frac{Z_i}{e(x)} - \frac{1-Z_i}{1-e(x)} \right) Y_i \>\vert\> \mathbf{X}_i =x \right]. \label{identification1} \end{equation} However, in practice we do not know whether a considered set $\mathbf{X}_i$ is sufficient for assumption~\eqref{assump:unconfounded} to hold. When there exists a source of unmeasured confounding, this assumption is violated, and the identification results do not hold. Sensitivity analysis provides a useful tool to investigate the impact of unmeasured confounding bias, which will be discussed in Section~\ref{ssec:sa}. \subsection{Interpretability and Decision Rules} Our primary goal is to provide an interpretable structure of $\tau(x)$ (discovery of treatment effect heterogeneity) and then estimate $\tau(x)$ in an efficient and precise manner (estimation of treatment effect heterogeneity). The functional form of $\tau(x)$ is not known, and may have a complicated structure varying across subgroups. A complex model can be considered to get a better estimate of $\tau(x)$, but such a model may mask the truly informative structure of $\tau(x)$. Instead, it is possible to ``extract'' (or ``approximate'') important subspaces of $\mathbf{X}$ that are responsible for the variation of $\tau(x)$, using a sparse representation of the CATE defined by parsimonious subsets of the covariates space \citep{imai2013estimating}. The extracted subspaces may not perfectly describe the treatment effect heterogeneity, but provide a concise and informative summary, and are thus more interpretable. This is often referred to as the \textit{trade-off} between interpretability and accuracy. More specifically, putting too much weight on achieving high accuracy in the estimation of the causal effect for a given covariate profile generally comes at the cost of compromising interpretability.
On the other hand, trying to improve interpretability might lead to overly simple structures that are not representative of the true $\tau(x)$ structure, which in turn may reduce the statistical precision. It is important to find a good balance between interpretability and estimation accuracy. Our work focuses on finding this balance and, furthermore, improving interpretability while minimizing the loss of estimation accuracy. To do so, we consider decision rules as base learners, and describe the heterogeneous treatment effect as a linear combination of these learners. \begin{figure}[t!] \centering \begin{tikzpicture}[level distance=60pt, sibling distance=10pt, edge from parent path={(\tikzparentnode) -- (\tikzchildnode)}] \tikzset{every tree node/.style={align=center}} \Tree [.{Total sample} \edge node[auto=right,pos=.6]{Male$=0$}; {Female} \edge node[auto=left,pos=.6]{Male$=1$};[.{Male} \edge node[auto=right,pos=.6]{Young$=0$}; {Old male} \edge node[auto=left,pos=.6]{Young$=1$}; {Young male} ]] \end{tikzpicture} \caption{An example tree.} \label{fig:example_tree} \end{figure} We first introduce decision rules with a formal definition. Let $S_k$ be the set of possible values of the $k$th covariate and $s_{k, m} \subseteq S_k$ be a specific subset corresponding to the $m$th rule. Then, each decision rule $r_m(x)$ can be represented as \begin{equation*} r_m(x) = \prod_{k: s_{k, m} \neq S_k} \mathbbm{1}(x_k \in s_{k, m}). \end{equation*} Define the covariate space $D = S_1 \times \cdots \times S_K$ as a Cartesian product of $K$ sets. A vector of covariates $\mathbf{X}_i$ must lie in $D$ for all $i$. Also, we define the subset $D_m$ corresponding to the rule $r_m(x)$, $D_m = s_{1, m} \times \cdots \times s_{K, m}$. Then, $r_m(x)=1$ if $x \in D_m$ and $r_m(x)=0$ otherwise. Decision rules as base learners are easily obtained from decision trees. Figure~\ref{fig:example_tree} shows a toy example. The decision tree in this figure consists of several decision rules; four decision rules can be extracted from this tree. For example, the young male group can be expressed as $r_4 = \mathbbm{1}( \text{Male} = 1) \times \mathbbm{1}( \text{Young} = 1)$. Other decision rules are listed in Table~\ref{tab:rules_example}. We note that $r_2$ represents the internal node of the male group. Each decision rule corresponds to either an internal or a terminal node (leaf), except for the initial node (root). Including rules corresponding to internal nodes may increase the total number of decision rules to consider, but it can be helpful not to miss important decision rules that are essential for describing the variability of $\tau(x)$ (a minimal code sketch of these rule indicators appears below). \begin{table} \centering \caption{Extracting rules from the example tree in Figure~\ref{fig:example_tree}} \begin{tabular}{cc} \hline Rules & Conditions \\ [0.05cm] \hline $r_1$ & Male$=0$ \\ $r_2$ & Male$=1$ \\ $r_3$ & Male$=1$ \& Young$=0$ \\ $r_4$ & Male$=1$ \& Young$=1$ \\ \hline \end{tabular} \label{tab:rules_example} \end{table} \subsection{Sample Splitting} Analyses of heterogeneous treatment effects, or subgroup analyses, are typically conducted for subgroups defined \textit{a priori} to avoid the cherry-picking problem of reporting only subgroups with extremely high/low treatment effects \citep{cook2004subgroup}. However, defining subgroups a priori requires a fairly good understanding of the treatment effect, probably from previous literature. On top of that, another problem is that researchers may miss unexpected subgroups.
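To fix ideas, the decision rules of Table~\ref{tab:rules_example} translate directly into binary base learners. The minimal Python sketch below builds the corresponding rule matrix from toy covariate data; the column names and values are hypothetical stand-ins.
\begin{verbatim}
import pandas as pd

# Toy covariates; column names follow the example tree above
X = pd.DataFrame({"male": [0, 1, 1, 1, 0], "young": [1, 0, 1, 0, 0]})

# Each rule r_m(x) is a product of indicator conditions (Table 1)
rules = {
    "r1": lambda x: (x["male"] == 0),
    "r2": lambda x: (x["male"] == 1),
    "r3": lambda x: (x["male"] == 1) & (x["young"] == 0),
    "r4": lambda x: (x["male"] == 1) & (x["young"] == 1),
}

# Rule matrix: entry (i, m) equals r_m(X_i) in {0, 1}
R = pd.DataFrame({name: rule(X).astype(int) for name, rule in rules.items()})
print(R)
\end{verbatim}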
To overcome these limitations of a priori subgroup analysis, we propose to use data-driven machine learning approaches combined with honest inference \citep{athey2016}. In particular, we use a sample-splitting approach that divides the total sample into two smaller subsamples \citep{athey2016, lee2018discovering}: (1) discovery and (2) inference subsamples. Using the discovery subsample, decision rules are generated, and among them a few candidates are selected. These selected rules are regarded as rules given a priori when making statistical inference using the remaining inference subsample. The main advantage of sample-splitting is to provide \textit{transparency}, which is foundational to reproducibility and replicability. The discovery step is performed using only the data in the discovery subsample, which enables researchers to avoid damaging the validity of the later inference step. Also, this separate discovery step can be considered as a pre-analysis step to increase efficiency. When the causal rules selected by the discovery step represent treatment effect heterogeneity well, in the context of estimation, methods with sample-splitting often perform as well as methods without sample-splitting, or perform better in high-dimensional settings. \begin{algorithm}[h] \caption{Overview of the Causal Rule Ensemble (CRE) Method \label{alg:dis}} \begin{itemize} \item Randomly split the total sample into two smaller samples: the discovery and inference subsamples \item The Discovery Step (performed on the discovery subsample): \begin{enumerate} \item Rule generation (Section~\ref{ss:rule_gen}) \begin{enumerate} \item Estimate $\tau_i$ using existing methods for estimating the CATE \item Use $(\hat{\tau}_i, \mathbf{X}_i)$ to generate a collection of trees through tree-ensemble methods \item From the collection, extract decision rules $r_j$ by removing duplicates \end{enumerate} \item Rule regularization (Section~\ref{subsection:rulediscovery}) \begin{enumerate} \item Generate a new vector $\tilde{\mathbf{X}}_i^*$ whose $j$th component corresponds to rule $r_j$ \item Apply stability selection to $(\hat{\tau}_i, \tilde{\mathbf{X}}_i^*)$ and select potentially important decision rules $\tilde{\mathbf{X}}$. \end{enumerate} \end{enumerate} \end{itemize} \begin{itemize} \item The Inference Step (performed on the inference subsample): \begin{enumerate} \item Estimate the CATE for the selected decision rules represented by $\tilde{\mathbf{X}}$ (Section~\ref{ss:estimation}) \item Sensitivity analysis for the estimated CATE from the previous step (Section~\ref{ssec:sa}) \end{enumerate} \end{itemize} \end{algorithm} \section{Discovery Step: Which Subgroups Are Potentially Important?}\label{sec:discovery} This section illustrates the step for discovering potentially important subgroups in terms of decision rules. In particular, we introduce a generic method that creates a set of decision rules and identifies the rule-generated structure. The discovery step consists of two parts: (1) rule generation and (2) rule regularization. For rule generation, we create base learners that are building blocks to describe the heterogeneous structure of treatment effects. The rule regularization is used to choose the \textit{necessary} building blocks, and is needed for interpretability and stability. \subsection{Rule Generation} \label{ss:rule_gen} We use decision rules as base learners. Decision rules can be externally specified by using prior knowledge. However, we propose to employ a data-driven approach that uses hundreds of trees and extracts decision rules.
This data-driven approach overcomes two drawbacks of classical subgroup analyses: (1) they rely strongly on subjective decisions about which heterogeneous subpopulations should be investigated; and (2) they cannot discover rules other than the ones defined \textit{a priori} by the researchers. To create base learners for $\tau(x)$, we first estimate $\tau_i$. This estimation is not used for making inference; it is designed only for exploring the heterogeneous treatment effect structure. Many different approaches for the estimation of $\tau(x)$ can be considered. There has been methodological advancement in directly estimating $\tau(x)$ by using machine learning methods such as Causal Forest \citep{wager2018estimation} or Bayesian Causal Forest (BCF) \citep{hahn2020bayesian}. The BCF method directly models the treatment effect as $\tau_i = m_0(\mathbf{X}_i; \hat{e}) + \alpha(\mathbf{X}_i; \hat{e}) Z_i$ using the estimated propensity score $\hat{e}(\mathbf{X}_i)$. \cite{hahn2020bayesian} show that BCF produces precise estimates of $\tau(x)$. This evidence is consistent with recent works showing the excellent performance of Bayesian machine learning methodologies in causal inference scenarios \citep{hill2011bayesian, hahn2018atlantic, logan2019decision, bargagli2019heterogeneous, starling2019targeted, nethery2019estimating}. The BCF method can be applied to the discovery sample to estimate $\tau_i$; we denote the resulting estimates by $\hat{\tau}_i^{BCF}$. Alternative approaches for the estimation of $\tau(x)$ are discussed in Appendix \ref{Appendix:comparison}. Here, we highlight that our simulation results show that the CRE algorithm discovers the true causal rules more often when BCF is used to estimate $\tau(x)$ (additional details on these simulations are provided in Appendix \ref{Appendix:comparison}). Once the unit-level treatment effect estimates $\hat{\tau}_i$ are obtained, one can fit a decision tree to the data $(\hat{\tau}_i, \mathbf{X}_i)$ and extract decision rules as discussed above. However, using a single tree is neither an efficient nor a stable way to find important decision rules. This is due to two main factors: the \textit{greedy} nature of tree-based algorithms and their lack of flexibility. Binary trees are greedy algorithms: they do not subdivide the population based on the overall best splits (the set of splits that would minimize the overall criterion function), but instead pick the best split at each step (the one that minimizes the criterion function at that particular step)\footnote{Optimal trees \citep{bertsimas2017optimal} accommodate this shortcoming at the cost of a much higher computational burden.}. Moreover, binary trees may fail to spot \textit{simultaneous} patterns of heterogeneity in the data because of their binary nature. Imagine that the treatment effect differs by sex and by high school diploma. The tree-based algorithm may be able to spot only one of the two drivers of heterogeneity if one affects the treatment effect more strongly than the other. Even when both heterogeneity drivers are discovered, they are spotted in a suboptimal way, as an interaction between the two variables (e.g., women with a high school diploma, men with no diploma, and so on).
To address these shortcomings of single trees, tree ensemble methods such as Random Forest \citep{breiman2001random} and Gradient Boosting \citep{friedman2001greedy} can be applied to obtain a collection of trees. \cite{nalenz2018tree} note that boosting and Random Forest are different in nature, so using both approaches generates a wider set of decision rules. Even after removing duplicate rules, this leads to a more diverse set of candidate rules and thus increases the probability of capturing the important ones. We follow \cite{friedman2008predictive} and \cite{nalenz2018tree} in using the same tuning-parameter settings for Gradient Boosting and Random Forest. We also note that one should avoid using overly lengthy rules (i.e., with many conditions) or too many decision rules, as either would reduce interpretability. \subsection{Rule Regularization and Stability Selection} \label{subsection:rulediscovery} Let $r_{m}(x)$, $m=1, \ldots, M^*$, denote the rules generated as in Section~\ref{ss:rule_gen}. Since each rule $r_{m}(x)$ indicates whether $x$ satisfies the rule or not, it takes the value 0 or 1. Define $\tilde{\mathbf{X}}^*$ as a new matrix whose columns are the decision rules. The number of rules, $M^*$, is usually larger than $K$ and depends on how heterogeneous $\tau(x)$ is. Hence, even if the original dataset $\mathbf{X}$ is not high-dimensional, $\tilde{\mathbf{X}}^*$ can be. The set of generated rules contains potentially important rules: some of them may truly describe the heterogeneity in the treatment effects, while many others may be insignificant. Rule selection is therefore applied to distinguish the few important rules from the many insignificant ones. This regularization step increases efficiency and improves interpretability: if too many rules were selected, the resulting description of the heterogeneity would be too complex to understand. We consider the following linear regression model, \begin{equation} \tau(x) = \beta_0 + \sum_{m=1}^{M^*} \beta_m r_m(x) + \epsilon. \label{eqn:model1} \end{equation} Because the model is linear, \eqref{eqn:model1} lends itself to a familiar interpretation of the coefficients $\{ \beta_m\}_{0}^{M^*}$. Using this linear model, one can employ the following penalized regression to select important rules: \begin{equation} \hat{\bm{\beta}} = \underset{\bm{\beta}}{\mathrm{argmin}} \bigg\{ \sum_{i} \Big( \hat{\tau}_i - \beta_0 - \sum_{m=1}^{M^*} \beta_m r_m(\mathbf{X}_i) \Big)^{2} + \lambda ||\bm{\beta}||_p \bigg\}, \label{eqn:modellasso} \end{equation} where $\lambda$ is the regularization parameter and $||\cdot||_p$ is the $l_p$-norm. Variable selection, however, is a notoriously difficult problem. The Least Absolute Shrinkage and Selection Operator (LASSO) estimator \citep{tibshirani1996regression} has been popular and widely used over the past two decades to solve the problem in~\eqref{eqn:modellasso} by employing the $l_1$-norm ($\sum_{m=1}^{M^*}|\beta_m|$). The usefulness of this estimator among other penalized regression methods has been demonstrated in various applications \citep{su2016identifying, belloni2016inference, chernozhukov2016locally, chernozhukov2017double}. The estimate $\hat{\bm{\beta}}$ shrinks toward zero as $\lambda$ increases, which yields a sparse structure close to the true model.
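As an illustration, a minimal sketch of this $l_1$-penalized rule selection is given below (Python with scikit-learn; the inputs and the cross-validated choice of $\lambda$ are illustrative assumptions, and the limitations of cross-validation are discussed next):

\begin{verbatim}
import numpy as np
from sklearn.linear_model import LassoCV

# Hypothetical inputs: binary rule matrix X_rules (n x M*) and estimated
# unit-level effects tau_hat; both are illustrative placeholders.
rng = np.random.default_rng(1)
n, M_star = 500, 40
X_rules = rng.integers(0, 2, size=(n, M_star)).astype(float)
tau_hat = 1.5 * X_rules[:, 0] - X_rules[:, 3] + rng.normal(size=n)

# l1-penalized regression of tau_hat on the rules, lambda chosen by
# cross-validation (see the discussion of this choice in the text).
fit = LassoCV(cv=5).fit(X_rules, tau_hat)
selected = np.flatnonzero(fit.coef_ != 0)
print("rules with nonzero coefficients:", selected)
\end{verbatim}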
The consistency of LASSO variable selection has been studied by \cite{zhao2006model}, among others. The biggest challenge, however, is choosing a proper value of $\lambda$ for consistent selection. Cross-validation is usually employed to choose $\lambda$, but it may fail for high-dimensional data \citep{meinshausen2006high}. Stability selection \citep{meinshausen2010stability} can be used to enhance the performance of the LASSO estimator. Roughly speaking, a subsampling scheme is used to estimate the selection probabilities of the variables (decision rules, in our case). Given two parameters (a cut-off threshold and the average number of selected variables), variable selection can then be carried out in a comparatively robust and transparent way. Moreover, \cite{meinshausen2010stability} showed that the solution of stability selection depends little on the initial regularization chosen, which is a desirable feature when selecting decision rules. Among the $M^*$ initially generated decision rules, we assume that $M$ (with $M \ll M^*$) rules are selected as the output of the discovery procedure. We then define $\tilde{\mathbf{X}}$ as the matrix containing only the selected decision rules. Figure~\ref{fig:trees} depicts the intuition behind these steps of rule discovery and selection. The figure shows a simple forest composed of just five trees. Each node of each tree (excluding the roots) represents a causal rule (\textit{rule discovery}), while the nodes highlighted in red represent the causal rules selected by the stability selection methodology (\textit{rule selection}). \begin{figure}[H] \centering \includegraphics[width=1\textwidth]{ensemble_of_trees.jpg} \caption{Rule discovery and selection in a simple forest.} \label{fig:trees} \end{figure} \section{Inference Step: Which Subgroups Are Really Different?}\label{sec:inference} \label{s:inference} Using the rules that were discovered and selected in the discovery step, the remaining inference sample is used to make inference. We propose a generic approach to estimate the rule-specific treatment effects and compare it with existing approaches. Furthermore, we propose a new sensitivity analysis method to assess the impact of unmeasured confounding bias on the causal conclusions. \subsection{Estimating the subgroup-specific treatment effect} \label{ss:estimation} The decision rules selected in the discovery step define, in analogy with~\eqref{eqn:model1}, the linear model $$ \bm{\tau} = \tilde{\mathbf{X}} \bm{\beta} + \bm\epsilon $$ with $\mathbb{E}(\bm{\epsilon} | \mathbf{x}) = 0$ and $\text{var}(\bm{\epsilon} | \mathbf{x}) = \sigma^2 \bm{I}$. The main goal of this inference step is to estimate $\bm{\beta}$, which represents the rule-specific treatment effects; that is, we estimate the CATEs for the subpopulations corresponding to the selected rules. We use ordinary least squares (OLS) to estimate $\bm{\beta}$. However, this estimator cannot be computed directly because the unit-level treatment effect vector $\bm{\tau}$ is not observed. Hence, we define a new vector $\bm{\tau}^* = (\tau_1^*, \ldots, \tau_N^*)^T$ that is an estimate of $\bm{\tau}$. The vector $\bm{\tau}^*$ is only an intermediate quantity used in the inference step. Although $\bm{\tau}^*$ is an estimate, we reserve the hat notation for the fitted values of the linear model, to keep the two distinct; $\tau_i^*$ is treated as an observable quantity measured with some sampling error.
The quantity $\tau_i^*$ can be represented by \begin{equation} \tau_i^* = \tau_i + u_i \>\>\> \text{where}\>\>\> \mathbb{E}(u_i | \mathbf{x}_i) =0 \text{ and } \text{var}(u_i | \mathbf{x}_i) = w_i. \label{eqn:tau_model} \end{equation} In general, the variance $w_i$ is not constant across individuals. Combining this with the model~\eqref{eqn:model1}, the modified linear model is \begin{equation} \tau_i^* = \beta_0 + \sum_{j=1}^{M} \beta_j \tilde{X}_{ij} + \nu_i = \tilde{\mathbf{X}}_i \bm{\beta} + \nu_i, \label{eqn:model2} \end{equation} where $\tilde{\mathbf{X}}_i$ is the $i$th row of $\tilde{\mathbf{X}}$ and $\nu_i = \epsilon_i + u_i$. \cite{lewis2005estimating} considered a linear model similar to~\eqref{eqn:model2} and found via simulation studies that the OLS estimator performs well in many cases. They also found that, since the error $\nu_i$ is not homoscedastic, White's or Efron's heteroscedasticity-consistent standard errors should be used. Based on these findings, we also consider the OLS estimator for $\bm{\beta}$, defined as \begin{equation} \hat{\bm{\beta}} = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T \bm{\tau}^*. \label{eqn:estimate} \end{equation} The fitted value is then $\hat{\bm{\tau}} = \tilde{\mathbf{X}} \hat{\bm{\beta}} = \tilde{\mathbf{X}}(\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T \bm{\tau}^*$. The intermediate vector $\bm{\tau}^*$ can be chosen in various ways; however, for our estimator~\eqref{eqn:estimate} to be valid, each $\tau_i^*$ has to be unbiased with finite variance. Given that the matrix $\tilde{\mathbf{X}}$ is fixed, we can prove that the estimator $\hat{\beta}_j$, $j=1, \ldots, M$, is a consistent estimator of $\beta_j$, the average treatment effect for the subgroup defined by the decision rule $r_j$, provided $r_j$ is not included in another rule $r_{j'}$. \begin{theorem} If $\tau_i^*$ satisfies the model~\eqref{eqn:tau_model} (Condition 1) and $\mathbb{E}(\tilde{\mathbf{X}}_i^T \tilde{\mathbf{X}}_i) = \mathbf{Q}$ is a finite positive definite matrix (Condition 2), then the estimator $\hat{\bm{\beta}} = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T \bm{\tau}^*$ is a consistent estimator for $\bm{\beta}$. \label{thm:consistency} \end{theorem} An example is to use the inverse probability weighting (IPW) or stabilized IPW (SIPW) approaches: both $\tau_i^* = \hat{\tau}_i^{IPW}$ and $\tau_i^* = \hat{\tau}_i^{SIPW}$ satisfy the model~\eqref{eqn:tau_model}. Therefore, the following corollary is obtained from Theorem~\ref{thm:consistency}. \begin{corollary} The estimator $\hat{\bm{\beta}}^{(S)IPW} = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T \bm{\tau}^*$ where $\tau_i^* = \hat{\tau}_i^{(S)IPW}$ is consistent. \end{corollary} We need additional assumptions to prove the asymptotic normality of $\hat{\bm\beta}$. For a general covariate matrix, the following three conditions are required: \begin{enumerate} \setcounter{enumi}{2} \item $\mathbb{E}(\tilde{\mathbf{X}}_{ij}^4) < \infty$; \item $\mathbb{E}(\nu_i^4) < \infty$; \item $\mathbb{E}(\nu_i^2 \tilde{\mathbf{X}}_i^T \tilde{\mathbf{X}}_i) = \bm\Omega $ is a positive definite matrix. \end{enumerate} Since $\tilde{\mathbf{X}}_{ij}$ is either 0 or 1 in our setup, Condition (3) is satisfied by design. The following theorem gives the asymptotic distribution of $\hat{\bm\beta}$.
\begin{theorem} If Conditions (1)-(5) hold, then $$ \sqrt{N} (\hat{\bm\beta} - \bm{\beta}) \overset{d}{\to} \mathcal{N}(0, \mathbf{V}) \:\:\: \text{as} \:\:\: N \to \infty $$ where $\mathbf{V} = \mathbf{Q}^{-1} \bm\Omega \mathbf{Q}^{-1}$. \end{theorem} The variance $\mathbf{V}$ usually has to be estimated. The variance-covariance matrix estimator $\hat{\mathbf{V}}_N = \hat{\mathbf{Q}}^{-1} \hat{\bm\Omega} \hat{\mathbf{Q}}^{-1}$ can be obtained from the sandwich formula, where $\hat{\mathbf{Q}} = N^{-1} \sum_{i=1}^{N} \tilde{\mathbf{X}}_i^T \tilde{\mathbf{X}}_i$, $\hat{\bm\Omega} = N^{-1} \sum_{i=1}^{N} \hat{\nu}_i^2 \tilde{\mathbf{X}}_i^T \tilde{\mathbf{X}}_i$ and $\hat{\nu}_i = \tau_i^* - \tilde{\mathbf{X}}_i {\hat{\bm\beta}}$. This estimator is robust and often called White's estimator \citep{white1980heteroskedasticity}. Other approaches to obtaining a heteroscedasticity-consistent covariance matrix are discussed in \cite{long2000using}. For small samples, Efron's estimator \citep{efron1982jackknife}, known as the HC3 estimator, can be used instead. Also, if the variance $w_i$ is known from the large-sample properties of the existing method used to obtain $\tau_i^*$, then feasible generalized least squares estimators \citep{lewis2005estimating} can be considered. Instead of estimation, hypothesis testing for identifying the true decision rules can be considered. As we will see in the simulation study, rules that truly represent the treatment effect heterogeneity are discovered and selected with high probability; at the same time, however, non-true rules are selected as well. If one wants to know which rules are really important for describing the treatment effect heterogeneity, formal testing can be used to choose a final model for the treatment effect heterogeneity. For instance, the null hypothesis $H_0: \bm{\beta} = 0$ can be tested using the Wald-type test statistic $T_N = N \hat{\bm\beta}^T \hat{\mathbf{V}}_N^{-1} \hat{\bm\beta}$; if $T_N > \chi_{M, 1-\alpha}^2$, $H_0$ is rejected.
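For concreteness, a minimal numerical sketch of the sandwich variance estimator and the Wald-type test follows (Python; the rule matrix, effect estimates, and dimensions are illustrative placeholders):

\begin{verbatim}
import numpy as np
from scipy import stats

# Illustrative inputs: selected rule matrix X_t and intermediate
# estimates tau_star (placeholders, not real data).
rng = np.random.default_rng(2)
N, M = 800, 3
X_t = rng.integers(0, 2, size=(N, M)).astype(float)
tau_star = X_t @ np.array([1.0, -1.0, 0.0]) + rng.normal(size=N)

# OLS fit and residuals.
beta_hat = np.linalg.solve(X_t.T @ X_t, X_t.T @ tau_star)
nu_hat = tau_star - X_t @ beta_hat

# Sandwich (White) formula: V_hat = Q^{-1} Omega Q^{-1}.
Q_hat = X_t.T @ X_t / N
Omega_hat = (X_t * nu_hat[:, None] ** 2).T @ X_t / N
Q_inv = np.linalg.inv(Q_hat)
V_hat = Q_inv @ Omega_hat @ Q_inv

# Wald statistic T_N = N * beta' V^{-1} beta, compared to chi^2_M.
T_N = N * beta_hat @ np.linalg.solve(V_hat, beta_hat)
print("Wald statistic:", T_N, "p-value:", stats.chi2.sf(T_N, df=M))
\end{verbatim}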
\subsection{Sensitivity Analysis} \label{ssec:sa} The validity and consistency of the estimator $\hat{\bm{\beta}}$ rely on the assumption of no unmeasured confounders and on the correct specification of the propensity score model. In reality, there is no guarantee that these assumptions are satisfied, and they are not directly testable. Without further knowledge about unmeasured confounding, it is extremely difficult to quantify how much bias can occur. Instead of attempting to quantify the degree of unmeasured confounding in a given dataset, it is more realistic to examine how our causal conclusions would change under various degrees of such bias. In this subsection, we propose a sensitivity analysis to examine the impact of potential violations of the no unmeasured confounding assumption. We introduced a generic approach for estimating $\bm{\beta}$ in Section~\ref{ss:estimation}; in our sensitivity analysis, we consider the special case where $\tau_i^*$ is estimated as $\hat{\tau}_i^{SIPW}$. Define $\mathbf{W} = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T$, an $M \times N$ matrix, and let $W_j$ be the $j$th row of $\mathbf{W}$ and $W_{ji}$ the $(j, i)$ element of $\mathbf{W}$. Our estimator $\hat{\beta}_j$ is then explicitly represented by \begin{align*} \hat{\beta}_j &= \hat{\beta}_j(1) - \hat{\beta}_j(0) \quad \text{where}\\ \hat{\beta}_j(1) &= \left[\frac{1}{N} \sum_{i=1}^{N} \frac{Z_i}{\hat{e}(\mathbf{X}_i)} \right]^{-1} \left[\sum_{i=1}^{N} \frac{ W_{ji} Y_i Z_i}{\hat{e}(\mathbf{X}_i)} \right] \\ \hat{\beta}_j(0) &= \left[\frac{1}{N} \sum_{i=1}^{N} \frac{1-Z_i}{1- \hat{e}(\mathbf{X}_i)} \right]^{-1} \left[\sum_{i=1}^{N} \frac{ W_{ji} Y_i(1-Z_i)}{1 - \hat{e}(\mathbf{X}_i)} \right]. \end{align*} We consider the marginal sensitivity model introduced by \cite{tan2006distributional} and \cite{zhao2019sensitivity}. Let the true propensity probability be $e_0(\mathbf{x}, y; a) = \text{P}_0 (Z=1 | \mathbf{X} = \mathbf{x}, Y(a)=y)$ for $a \in \{0, 1\}$. If the assumption of no unmeasured confounders holds, this probability is the same as $e_0(\mathbf{x}) = \text{P}_0 (Z=1 | \mathbf{X} = \mathbf{x})$, which is identifiable from the data. Unfortunately, this assumption cannot be tested since $e_0(\mathbf{x}, y; a)$ is generally not identifiable from the data. For each value of a sensitivity parameter $\Lambda$, introduced in detail below, the maximum deviation of $e_0(\mathbf{x}, y; a)$ from the identifiable quantity $e_0(\mathbf{x})$ is restricted, and the sensitivity analysis is conducted for each $\Lambda$ to see whether our conclusions change qualitatively. In addition to this non-identifiability of $e_0(\mathbf{x}, y; a)$, there is another difficulty: obtaining $e_0(\mathbf{x})$ non-parametrically is hard when $\mathbf{X}$ is high-dimensional. In practice, $e_0(\mathbf{x})$ is estimated by a parametric logistic model of the form $e_{\gamma}(\mathbf{x}) = \exp(\gamma' \mathbf{x})/\{1 + \exp(\gamma' \mathbf{x})\}$, where $e_{\gamma_0}(\mathbf{x})$ can be considered the best parametric approximation of $e_0(\mathbf{x})$ and is used for the sensitivity analysis. Our sensitivity model assumes that the true propensity probability $e_0(\mathbf{x}, y; a) = \text{P}_0 (Z=1 | \mathbf{X} = \mathbf{x}, Y(a)=y)$ satisfies: \begin{equation} e_0(\mathbf{x}, y; a) \in \mathcal{E}_{\gamma_0}(\Lambda) = \left\{ 0 < e(\mathbf{x}, y; a) < 1: 1/\Lambda \leq OR\{e(\mathbf{x}, y; a), e_{\gamma_0}(\mathbf{x})\} \leq \Lambda \right\} \quad \text{for } a \in \{0, 1\} \label{eqn:sensi_model} \end{equation} where $e_{\gamma_0}(\mathbf{x}) = \text{P}_{\gamma_0}(Z=1 | \mathbf{X} = \mathbf{x})$ and $OR\{e(\mathbf{x}, y; a), e_{\gamma_0}(\mathbf{x})\} = \{(1-e(\mathbf{x}, y; a)) \cdot e_{\gamma_0}(\mathbf{x})\}/ \{e(\mathbf{x}, y; a)\cdot(1- e_{\gamma_0}(\mathbf{x}))\}$. The deviation of $e_0(\mathbf{x}, y; a)$ is symmetric with respect to the parametrically identifiable quantity $e_{\gamma_0}(\mathbf{x})$, and the degree of the deviation is governed by the sensitivity parameter $\Lambda \geq 1$. When $\Lambda = 1$, $e_0(\mathbf{x}, y; a) = e_{\gamma_0}(\mathbf{x})$ for all $a$, which implies that there is no violation of the assumptions. If the propensity score model is correctly specified, $e_{\gamma_0}(\mathbf{x}) = e_0(\mathbf{x})$; if there is no unmeasured confounder, $e_0(\mathbf{x}, y; a) = e_0(\mathbf{x})$. Therefore, under both assumptions, $e_0(\mathbf{x}, y; a) = e_{\gamma_0}(\mathbf{x})$. Our sensitivity model considers violations of both assumptions, and resembles the model proposed by \cite{rosenbaum2002observational}.
The connection between the two models is illustrated in Section 7.1 of \cite{zhao2019sensitivity}. \begin{algorithm}[t] \caption{Constructing the confidence interval of $\beta_j$ for each sensitivity parameter $\Lambda$ \label{alg:ci}} \begin{enumerate} \item Generate the matrix $\mathbf{W} = (\tilde{\mathbf{X}}^T \tilde{\mathbf{X}})^{-1} \tilde{\mathbf{X}}^T$ \item In the $\ell$th of $L$ iterations: \begin{enumerate} \item Generate a bootstrapped sample: $(Z_i^{(\ell)}, Y_i^{(\ell)}, \mathbf{X}_i^{(\ell)})_{i=1, \ldots, N}$ \item Generate transformed outcomes $\tilde{Y}_{ji}^{(\ell)} = W_{ji} Y_i^{(\ell)}$ for all $i$, where $W_{ji}$ is the $(j,i)$ element of $\mathbf{W}$ \item Reorder the indices such that the first $N_1 = \sum_{i=1}^{N} Z_i^{(\ell)}$ units are the treated units, with $\tilde{Y}_{j1} \geq \ldots \geq \tilde{Y}_{j, N_1}$, and the rest are the control units, with $\tilde{Y}_{j,N_1+1} \geq \ldots \geq \tilde{Y}_{j, N}$ \item Compute the estimate $\hat{\gamma}^{(\ell)}$ by fitting the logistic regression of $Z_i^{(\ell)}$ on $\mathbf{X}_i^{(\ell)}$ \item Solve the following optimization problems: $$ \text{min or max} \frac{\sum_{i=1}^{N_1} \tilde{Y}_{ji}^{(\ell)}[1 + q_i \exp\{-\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}] }{\sum_{i=1}^{N_1} [1 + q_i \exp\{ -\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}]} - \frac{\sum_{i=N_1+1}^{N} \tilde{Y}_{ji}^{(\ell)}[1 + q_i \exp\{\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}] }{\sum_{i= N_1+1}^{N} [1 + q_i \exp\{ \hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}]} $$ subject to $1/\Lambda \leq q_i \leq \Lambda$ for $1 \leq i \leq N$, and denote the minimum by $L_j^{(\ell)}$ and the maximum by $U_j^{(\ell)}$ \end{enumerate} \item Construct the $(1-\alpha)$-coverage confidence interval $[L_j, U_j]$, where $L_j = Q_{\alpha/2} \left( L_j^{(\ell)} \right)$ and $U_j = Q_{1 - \alpha/2} \left( U_j^{(\ell)} \right)$ are the empirical $\alpha/2$ and $1-\alpha/2$ quantiles over the iterations $\ell =1, \ldots, L$ \end{enumerate} \end{algorithm} The $(1-\alpha)$-coverage confidence interval of $\beta_j$ can be constructed using the percentile bootstrap. We denote $W_{ji} Y_i$ by $\tilde{Y}_{ji}$ and treat $\tilde{Y}_{ji}$ as if it were an observed outcome; the confidence interval for each $\beta_j$ is then constructed through the procedure in Algorithm~\ref{alg:ci}. The resulting confidence intervals have at least $100(1-\alpha)\%$ coverage probability even in the presence of unmeasured confounding, and the validity of the percentile bootstrap confidence interval $[L_j, U_j]$ can be proved using Theorem 1 in \cite{zhao2019sensitivity}. In Step (2e) of Algorithm~\ref{alg:ci}, the optimization problem can be solved efficiently by separating it into two simpler optimization problems: the minimum is obtained when the first part $\frac{\sum_{i=1}^{N_1} \tilde{Y}_{ji}^{(\ell)}[1 + q_i \exp\{-\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}] }{\sum_{i=1}^{N_1} [1 + q_i \exp\{ -\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}]}$ is minimized and the second part $\frac{\sum_{i=N_1+1}^{N} \tilde{Y}_{ji}^{(\ell)}[1 + q_i \exp\{\hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}] }{\sum_{i= N_1+1}^{N} [1 + q_i \exp\{ \hat{\gamma}^{(\ell)} \mathbf{X}_i^{(\ell)} \}]}$ is maximized. For instance, the minimization of the first part is achieved at $\{q_i: q_i = 1/\Lambda \text{ for } i=1, \ldots, b, \> \text{and } q_i = \Lambda \> \text{ for } i= b+1, \ldots, N_1 \}$ for some $b$. To find the minimum, it suffices to check every possible value of $b$, and the computational complexity is $O(N_1)$; see Proposition 2 in \cite{zhao2019sensitivity} for details.
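A minimal sketch of this separable optimization is given below (Python; all inputs are illustrative placeholders, and the direct scan shown is, for clarity, the $O(N_1^2)$ version of the $O(N_1)$ procedure described above):

\begin{verbatim}
import numpy as np

# Minimize the first ratio over q_i in [1/Lambda, Lambda].  With the
# transformed outcomes sorted in decreasing order, a minimizer sets
# q_i = 1/Lambda for i <= b and q_i = Lambda otherwise, for some cutoff b,
# so it suffices to scan over b.  All inputs are placeholders.
rng = np.random.default_rng(3)
N1, Lam = 200, 1.05
y_tilde = np.sort(rng.normal(size=N1))[::-1]   # decreasing order
g = np.exp(-rng.normal(size=N1))               # the exp(-gamma' x_i) terms

def ratio(q):
    a = 1.0 + q * g
    return np.sum(y_tilde * a) / np.sum(a)

values = []
for b in range(N1 + 1):
    # The direct scan is O(N1^2); maintaining the two sums incrementally
    # in b yields the O(N1) procedure described in the text.
    q = np.concatenate([np.full(b, 1.0 / Lam), np.full(N1 - b, Lam)])
    values.append(ratio(q))
print("minimum of the first term:", min(values))
\end{verbatim}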
\section{Simulation} \label{sec:simulations} In this section, we present two simulation studies to assess the performance of the CRE method. In the first study, we evaluate the discovery step in terms of how well the method discovers the true underlying decision rules. In the second study, we evaluate the overall performance of both the discovery and inference steps in terms of estimation accuracy. \subsection{Simulation study: Rules Discovery}\label{subsec:rule_discovery} To evaluate the ability of CRE in the discovery step, we run a series of simulations in which we assess how often CRE spots the true underlying decision rules. First, we assess the \textit{absolute} performance of CRE; then we compare its performance with that of the Honest Causal Tree (HCT) method \citep{athey2016}. The HCT method is a causal decision tree algorithm and discovers disjoint decision rules through a single tree. To evaluate the ability to discover decision rules, we count how many of the true rules are captured among the discovered ones. As discussed in Section \ref{sec:discovery}, many approaches can be used within the CRE method to estimate the individual treatment effect $\tau_i$. We have found via simulation studies that the BCF approach performs better than alternatives such as $\hat{\tau}_i^{IPW}$ or BART (in Appendix \ref{Appendix:comparison}, we show the comparative performances of BCF, BART, IPW and outcome regression). Thus, in this simulation study, we implement the BCF approach within the CRE method; we call this version \textit{CRE-BCF}. For the data-generating process, we generate the covariate matrix $\mathbf{X}$ with 10 binary covariates $X_{i1}, \ldots, X_{i,10}$. The binary treatment indicator $Z_i$ is drawn from a binomial distribution, $Z_i \sim \textit{Binom}(\pi_i)$, where $\pi_i = \text{logit}^{-1}(-1 + X_{i1} - X_{i2} + X_{i3})$. The observed outcome is generated as $y_i = y_{0i} (1 - z_i) + y_{1i} z_i$, where the potential outcomes are functions of the confounders $X_{i1}, X_{i2}$ and $X_{i3}$ as specified below. We consider three factors: (i) the number of true decision rules, (ii) the effect size $k$, ranging from 0 to 2, and (iii) the sample size, $N=1000$ or $2000$. To produce a more meaningful comparison between CRE-BCF and HCT, we restrict the data-generating process to causal rules that are representable through a binary tree. In particular, the potential outcomes are generated by $Y_i(0) \sim N(X_{i1} + 0.5 X_{i2} + X_{i3}, 1)$ and $Y_i(1) = Y_i(0) + \tau(\mathbf{X}_i)$. In the case of two causal rules, $\tau = \tau(\mathbf{X}_i) = k$ if $X_{i1}=0, X_{i2}=0$, $\tau = -k$ if $X_{i1}=1, X_{i2}=1$, and $\tau=0$ otherwise. In the case of four causal rules, $\tau = k$ if $(X_{i1}, X_{i2}, X_{i3}) = (0, 0, 1)$, $\tau = 2k$ if $(X_{i1}, X_{i2}, X_{i3}) = (0, 0, 0)$, $\tau = -k$ if $(X_{i1}, X_{i2}, X_{i3}) = (0, 1, 0)$, $\tau = -2k$ if $(X_{i1}, X_{i2}, X_{i3}) = (0, 1, 1)$, and $\tau=0$ otherwise. This scenario was chosen because it is the most favourable to the HCT algorithm. Moreover, we introduce variations in the set of covariates used to define the causal rules (we refer to these variables as effect modifiers) by switching $(X_1, X_2, X_3)$ with $(X_8, X_9, X_{10})$. This change represents the case where the effect modifiers are different from the confounders.
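For concreteness, a minimal sketch of the two-rule data-generating process follows (Python; this is an illustrative transcription of the description above, not our exact simulation code):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
N, k = 1000, 1.0
X = rng.integers(0, 2, size=(N, 10))    # 10 binary covariates

# Confounded treatment assignment via an inverse-logit propensity score.
pi = 1.0 / (1.0 + np.exp(-(-1 + X[:, 0] - X[:, 1] + X[:, 2])))
Z = rng.binomial(1, pi)

# Potential outcomes: Y(0) depends on the confounders X1, X2, X3,
# and tau follows the two true causal rules.
y0 = rng.normal(X[:, 0] + 0.5 * X[:, 1] + X[:, 2], 1.0)
tau = np.where((X[:, 0] == 0) & (X[:, 1] == 0), k,
      np.where((X[:, 0] == 1) & (X[:, 1] == 1), -k, 0.0))
y = y0 + Z * tau   # observed outcome y = y0 (1 - z) + y1 z, y1 = y0 + tau
\end{verbatim}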
Investigating the latter scenario is important since confounders may affect the ability of the algorithm to spot the correct causal rules. Figures \ref{figure:CRE-BCF_vs_HCT} and \ref{figure:CRE-BCF_vs_HCT_modification} depict the results for the case in which the same variables act as both confounders and effect modifiers and for the case of different variables, respectively. \begin{figure}[] \centering \includegraphics[width=1\textwidth]{CRE-BCF_vs_HCT.pdf} \caption{Average number of correctly discovered rules in the case where the same variables are used as confounders and effect modifiers. The first column depicts the case of two true rules, while the second column depicts the case of four true rules. In the first row the sample size is 1,000, while in the second it is 2,000.} \label{figure:CRE-BCF_vs_HCT} \end{figure} \begin{figure}[] \centering \includegraphics[width=1\textwidth]{CRE-BCF_vs_HCT_modification.pdf} \caption{Average number of correctly discovered rules in the case where the confounders are different from the effect modifiers. The first column depicts the case of two true rules, while the second column depicts the case of four true rules. In the first row the sample size is 1,000, while in the second it is 2,000.} \label{figure:CRE-BCF_vs_HCT_modification} \end{figure} When $(X_1, X_2, X_3)$ are both confounders and effect modifiers, CRE-BCF consistently outperforms HCT; in the other scenario, CRE-BCF still performs better, but the performance gap is narrower, especially in the case with two true causal rules. In both scenarios, the gap widens when the number of true rules is four, showing that HCT is consistently unable to detect all four true rules. These simulations show that CRE-BCF detects the true causal rules more often, especially when the confounders and effect modifiers overlap. This is an attractive feature of CRE-BCF, as such scenarios are common in real-world applications. For instance, in pollution studies income can be a confounder, as poorer people live in neighbourhoods with higher levels of pollution, and also an effect modifier, as poorer people may have worse living conditions and, in turn, experience larger negative effects from pollution. It is also worth noting that CRE returns a smaller number of detected rules (4 to 7) compared to HCT (10 to 84). \subsection{Simulation study: Rule-specific Effects Estimation}\label{subsection:estimation} In this subsection, we evaluate the overall performance of the CRE method, including both the discovery and inference steps. In the previous simulation study, we found that the BCF approach performs well in discovering the underlying decision rules, and it has been shown empirically that BCF estimates $\tau_i$ with high accuracy. Thus, in this simulation study, we use the BCF approach in both the discovery and inference steps of the CRE method: in the discovery step, $(\mathbf{X}_i, \hat{\tau}_i^{BCF})$ is used to discover and select important decision rules, and in the inference step the intermediate variable $\tau_i^*$ is estimated with the BCF approach (CRE-BCF). As a comparator, we consider a BCF approach that does not use the sample-splitting technique, which we call \textit{original-BCF}. The original-BCF provides the estimates $\hat{\tau}_i$ but no interpretable form of the heterogeneous treatment effects.
By contrast, CRE-BCF provides both a set of decision rules that greatly increases the interpretability of the findings and the corresponding estimates for the discovered structure. Since it is difficult to compare the two methods in terms of interpretability, in this simulation study we compare them in terms of estimation accuracy. \cite{athey2016} recommend a (50\%, 50\%) split between the discovery and inference samples, but \cite{lee2018discovering} show through simulation studies that (25\%, 75\%) performs better than (50\%, 50\%). We investigate six different ratios, from (10\%, 90\%) to (50\%, 50\%); note that the original-BCF can be considered the extreme case of (0\%, 100\%). \begin{table} \centering \caption{RMSE comparison between the CRE and BCF methods} \begin{tabular}{c|cccccc|c} \hline & \multicolumn{7}{c}{Method} \\ $N$ & CRE50 & CRE40 & CRE30 & CRE25 & CRE20 & CRE10 & BCF \\ \hline 500 & 0.568 & 0.520 & 0.486 & \textbf{0.478} & 0.498 & 0.598 & 0.391 \\ 1000 & 0.399 & 0.366 & 0.347 & \textbf{0.343} & 0.344 & 0.406 & 0.355 \\ 1500 & 0.326 & 0.299 & 0.279 & \textbf{0.276} & 0.278 & 0.320 & 0.309 \\ 2000 & 0.283 & 0.258 & 0.239 & \textbf{0.231} & 0.234 & 0.262 & 0.237 \\ \hline \end{tabular} \label{tab:sim_overall} \end{table} For the data-generating process, covariates, treatment and potential outcomes are generated in the same way as in the previous simulation study. The only difference is that we assume two true underlying decision rules: (1) $X_{i1}=0, X_{i2}=0$ and (2) $X_{i1}=1, X_{i2}=1$. The treatment effect $\tau_i$ is defined as $\tau_i = 1$ if $X_{i1}=0, X_{i2}=0$, $\tau_i = -1$ if $X_{i1}=1, X_{i2}=1$, and $\tau_i=0$ otherwise. We consider four sample sizes, $N=500, 1000, 1500$ and $2000$, and use the root mean squared error (RMSE) to compare the two methods. Table~\ref{tab:sim_overall} shows the performance comparison: for each sample size we consider 1000 simulated datasets and report the average of the 1000 RMSE values. Among the considered ratios, (25\%, 75\%) yields the lowest RMSE for every sample size; the RMSE decreases as the proportion of the discovery sample grows to 25\% and increases thereafter. When the sample size is small (i.e., $N=500$), the RMSE for CRE25 is higher than that for BCF; however, it is lower when $N$ is moderately large. This result shows that even though, for instance, the CRE25 method uses only 75\% of the total sample for inference, it is as efficient as the original-BCF, which uses 100\% of the total sample for inference. \section{Application to the Medicare data} \label{sec:application} We apply the proposed CRE method to the Medicare data to study the effect of long-term exposure to fine particulate matter (PM$_{2.5}$) on 5-year mortality. \cite{lee2018discovering} studied the treatment effect of exposure to PM$_{2.5}$ with 110,091 matched pairs and discovered treatment effect heterogeneity in a single tree structure using the HCT approach. However, as pointed out in their discussion, a single tree unavoidably contains subgroups that are not informative for describing treatment effect heterogeneity, due to the nature of tree algorithms. In particular, they found six disjoint subgroups, but only three of them were informative for describing the treatment effect heterogeneity.
Also, as we discussed and showed in our simulation study, the HCT approach can be affected by sample-to-sample variations. Briefly, the matched Medicare data contain Medicare beneficiaries in the New England region of the United States between 2000 and 2006. The treatment is whether the two-year (2000-2001) average exposure to PM$_{2.5}$ is greater than 12 $\mu g \slash m^3$. The outcome is five-year mortality measured over 2002-2006: for an individual, the outcome is 0 if he/she died before the end of 2006 and 1 otherwise. There are four individual-level covariates: sex (male, female), age (65-70, 71-75, 76-80, 81-85, 86+), race (white, non-white), and Medicaid eligibility (eligible, non-eligible). Medicaid eligibility is considered a variable indicating socioeconomic status: if an individual is eligible for Medicaid, it is highly likely that he/she has a lower household income, so we use this variable as a proxy for low income. In the matched data, these four variables are exactly matched. There are also 8 ZIP code-level and 2 county-level covariates, which are fairly balanced between the treated and control groups; see Table 2 in \cite{lee2018discovering} for the covariate balance. Namely, the additional variables used are body-mass index (BMI), smoker rate, Hispanic population rate, Black population rate, median household income, median value of housing, \% of people below the poverty level, \% of people below high school education, \% of owner-occupied housing and population density. See \cite{di2016assessing} for a general description of the Medicare data. We apply the CRE method to the same discovery and inference samples, split using a (25\%, 75\%) ratio and containing 27,500 and 82,591 matched pairs, respectively. Since the two individuals in a matched pair share the same covariates but experience different treatment values, the observed outcomes can be considered as the two potential outcomes of a hypothetical individual representing the corresponding matched pair. The treated-minus-control difference can then be considered an estimate of $\tau_i$ for matched pair $i$. However, since our outcome is binary, the estimate $\hat{\tau}_i^{Matching}$ can take only one of the three possible values $\{-1, 0, 1\}$; this discreteness is undesirable under the linear regression model~\eqref{eqn:model1}. Instead, we ignore the matched structure and use the $27{,}500 \times 2 = 55{,}000$ individuals in the discovery sample as if they were obtained independently. We use the logistic BART approach to estimate the potential outcome functions $m_z(x) = \mathbb{E}[Y_i(z) | \mathbf{X}_i = x]$, $z=0, 1$, and obtain the estimate $\hat{\tau}_i^{BART} = \hat{m}_1(\mathbf{X}_i) - \hat{m}_0(\mathbf{X}_i)$. Note that for a continuous outcome, as shown in the simulation study, the BCF approach can be considered. \begin{table} \centering \caption{Discovered decision rules and estimated coefficients for the decision rules} \begin{tabular}{ll|rcrc} \hline \multicolumn{2}{c}{Rules} & \multicolumn{4}{c}{Covariates used}\\ & & \multicolumn{2}{c}{Individual} & \multicolumn{2}{c}{Indiv. + ZIP code} \\ \hline \# & Description & Est. & 95\% CI & Est. & 95\% CI \\ \hline
& Intercept & 0.070 & (0.054, 0.087) & 0.077 & (0.061, 0.094) \\ $r_1$ & $\mathbbm{1}(\text{white} = 0)$ & -0.008 & (-0.027, 0.011) & & \\ $r_2$ & $\mathbbm{1}(65 \leq \text{age} \leq 75)$ & -0.012 & (-0.024, 0.000) & -0.009 & (-0.021, 0.002) \\ $r_3$ & $\mathbbm{1}(65 \leq \text{age} \leq 80)$ & -0.027 & (-0.045, -0.010) & -0.027 & (-0.045, -0.010)\\ $r_4$ & $\mathbbm{1}(65 \leq \text{age} \leq 85) \cdot \mathbbm{1}(\text{Medicaid}=0)$ & -0.033 & (-0.050, -0.016) & -0.031 & (-0.048, -0.015) \\ [0.1cm] $r_5$ & $\mathbbm{1}(\text{hispanic \%} =0) \cdot \mathbbm{1}(\text{education} =1)$ & & & -0.019 & (-0.038, 0.000)\\ [0.1cm] $r_6$ & $\mathbbm{1}(\text{hispanic \%} =0) \cdot \mathbbm{1}(\text{education} =1)$ \\ & $ \cdot \mathbbm{1}(\text{population density} =0)$ & & & -0.045 & (-0.067, -0.022)\\ \hline \end{tabular} \label{tab:est} \end{table} In the discovery step, using the estimates $\hat{\tau}_i^{BART}$, we first apply the CRE method with the four individual-level covariates only. Four decision rules, $r_1, r_2, r_3, r_4$, are discovered; Table~\ref{tab:est} describes them in the left columns. Based on this finding, the model $\tau(x) = \beta_0 + \sum_{j=1}^{4} \beta_j r_j(x)$ is considered for the later inference step. The first rule corresponds to non-white people, who make up only 7\% of the total population. The next three rules are defined by age, with $r_2$ included in $r_3$. The intercept $\beta_0$ represents the treatment effect for the subgroup of Medicaid-eligible white people aged 81-85 and white people aged above 85; this subgroup corresponds to the single-tree finding in \cite{lee2018discovering}. Furthermore, we extend the CRE method to the four individual-level and eight ZIP code-level covariates. The ZIP code-level covariates could not be considered in \cite{lee2018discovering} because pairs were matched exactly only on the individual-level covariates. When the CRE method is applied, five rules are discovered: the three previous rules $r_2, r_3, r_4$ and two additional rules, $r_5$ and $r_6$. The additional rules are defined by the Hispanic population share (above 10\% or not), education (share who did not complete high school above 30\% or not), and population density (above the average or not). The rule $r_5$ identifies the subgroup of people living in areas where the proportion of Hispanic residents is below 10\% and the proportion of people who did not complete high school is above 30\%. The rule $r_6$ is nested in $r_5$, adding the condition that the population density is below the average. Before making inference with the discovered rules, we want to emphasize two aspects of the discovery step of the CRE method. First, since the CRE method discovers decision rules instead of a whole tree, only important subgroups (decision rules) are selected, and the remaining subgroups are represented by the intercept. For example, \cite{lee2018discovering} discovered six subgroups, but only a few of them were informative for describing the treatment effect heterogeneity. Second, the discovered rules are stable: if another discovery sample were chosen, the discovered rules would hardly change, whereas a discovered tree varies in its size and terminal nodes. This robustness to sample-to-sample variation makes the findings replicable, a feature that is particularly valuable given the higher standards of reproducibility demanded in the context of open science.
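The inference step reported next relies on the stabilized IPW construction of the intermediate vector $\bm{\tau}^*$. A minimal sketch is given below (Python; all inputs are illustrative placeholders). Applying the rows of $\mathbf{W}$ to the stabilized version reproduces the decomposition $\hat{\beta}_j = \hat{\beta}_j(1) - \hat{\beta}_j(0)$ of Section~\ref{ssec:sa}.

\begin{verbatim}
import numpy as np

# Illustrative construction of tau_star via (stabilized) inverse
# probability weighting; e_hat is an estimated propensity score.
rng = np.random.default_rng(5)
N = 1000
Z = rng.binomial(1, 0.4, size=N)
Y = rng.binomial(1, 0.2, size=N).astype(float)   # binary outcome
e_hat = np.full(N, 0.4)                          # placeholder e_hat

w1, w0 = Z / e_hat, (1 - Z) / (1 - e_hat)
tau_ipw = Y * w1 - Y * w0                              # IPW version
tau_sipw = Y * w1 / w1.mean() - Y * w0 / w0.mean()     # stabilized (SIPW)
\end{verbatim}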
Next, with the discovered decision rules, we use the remaining 82,591 pairs in the inference sample to estimate the rule-specific treatment effects. For the first rule set $\{r_1, \ldots, r_4\}$, the corresponding coefficients are estimated and reported in Table~\ref{tab:est}. To obtain the estimates and 95\% confidence intervals, we set $\tau_i^* = \hat{\tau}_i^{SIPW}$, for which asymptotic normality holds, and use the asymptotic distribution. The Causal Forest method \citep{athey2019generalized} could also be used, since it guarantees asymptotic normality for the estimator of $\bm{\beta}$. Note that the logistic BART approach can be used in the discovery step, but its asymptotic properties are not guaranteed. In the first pair of results columns of Table~\ref{tab:est}, all the coefficients except the intercept are negative. The intercept indicates that individuals who do not belong to any of the discovered rules $\{r_1, r_2, r_3, r_4 \}$ are significantly affected by exposure to air pollution: there is a 7 percentage-point increase in the mortality rate. Though the estimates for $r_1, r_2$ are negative, they are not statistically significant at the significance level $\alpha = 0.05$. The estimates for $r_3, r_4$ are significant, which means that people below 80 ($r_3$) and people below 85 who are not eligible for Medicaid ($r_4$) are significantly less vulnerable than the others. When the ZIP code-level covariates are included, all the estimates for the discovered rules $\{r_2, r_3, r_4, r_5, r_6\}$ are negative. Similarly, the estimate for $r_2$ is not significant, and of the newly discovered rules $\{r_5, r_6\}$ only $r_6$ is statistically significant. We also want to emphasize the interpretation of the coefficients in the inference step. A non-significant estimate does not mean that the corresponding subgroup has a null treatment effect; in this context, non-significance means that the decision rule is not important for describing the treatment effect heterogeneity. For instance, consider the subgroup of Black people above 85, which belongs to $r_1$ only. Although the estimate $\hat{\beta}_1$ is not significant, the treatment effect for this subgroup is $\hat{\beta}_0 + \hat{\beta}_1 = 0.062$ with 95\% CI (0.041, 0.084), meaning that this subgroup is significantly affected by air pollution. As another example, the subgroup of Black people below 75 has the effect $\hat{\beta}_0 + \hat{\beta}_1 + \hat{\beta}_2 + \hat{\beta}_3 = 0.023$ (95\% CI: (0.003, 0.043)), which is still significant. Since we have the asymptotic distribution of $\hat{\bm{\beta}}$, any subgroup's treatment effect can be examined. \begin{table} \centering \caption{Sensitivity analysis for the treatment effect heterogeneity using the percentile bootstrap} \begin{tabular}{l|ccccc} \hline Rules & \multicolumn{5}{c}{Sensitivity Parameter $\Lambda$}\\ & $1.01$ & $1.02$ & $1.03$ & $1.04$ & $1.05$\\ \hline
Inter. & (0.054, 0.100) & (0.045, 0.110) & (0.035, 0.121) & (0.024, 0.130) & (0.014, 0.140)\\ $r_2$ & (-0.026, 0.007) & (-0.031, 0.012) & (-0.036, 0.017) & (-0.041, 0.022) & (-0.047, 0.028)\\ $r_3$ & (-0.051, -0.003) & (-0.060, 0.004) & (-0.069, 0.014) & (-0.077, 0.023) & (-0.086, 0.033)\\ $r_4$ & (-0.054, -0.009) & (-0.062, 0.000) & (-0.071, 0.009) & (-0.078, 0.017) & (-0.087, 0.024)\\ $r_5$ & (-0.040, 0.004) & (-0.046, 0.012) & (-0.053, 0.016) & (-0.059, 0.020) & (-0.066, 0.026)\\ $r_6$ & (-0.072, -0.017) & (-0.080, -0.011) & (-0.085, -0.004) & (-0.091, 0.002) & (-0.098, 0.008) \\ \hline \end{tabular} \label{tab:sensi} \end{table} Finally, to evaluate the robustness of the above findings about the treatment effect heterogeneity, we conduct a sensitivity analysis in the inference step, setting $\tau_i^* = \hat{\tau}_i^{SIPW}$. We use the model $\tau_i = \beta_0 + \sum_{j=2}^{6} \beta_j r_j$ for the sensitivity analysis. Under the sensitivity model~\eqref{eqn:sensi_model}, we obtain a set of 95\% CIs for the coefficients for each $\Lambda$. Table~\ref{tab:sensi} shows the 95\% CIs for $\Lambda$ from 1.01 to 1.05. As $\Lambda$ increases, all the CIs widen. When $\Lambda = 1.04$, the CIs of all the rule coefficients contain zero and there is no longer evidence for heterogeneity. In particular, $\Lambda=1.04$ means that if there is an unmeasured confounder that can make the estimated propensity score deviate from the true score by a factor of 1.04 on the odds-ratio scale, then our finding about the heterogeneity can be explained by this unmeasured bias. One further thing to note is that, even if the heterogeneity can be explained by an unmeasured bias at $\Lambda = 1.04$, the treatment effect for the baseline subgroup (i.e., the intercept) remains significant. \section{Discussion} \label{sec:discussion} This paper proposes a new data-driven method for studying treatment effect heterogeneity that notably improves interpretability and provides helpful guidance about subgroups with heterogeneous effects in terms of decision rules. Moreover, the proposed CRE methodology addresses well-known shortcomings of binary trees by providing a more stable, flexible and robust way to discover and estimate heterogeneous effects. Indeed, CRE is stable to sample-to-sample variations, leading to more reproducible results, and its flexibility allows for the discovery of a wider set of causal rules. CRE also provides robust results for the detection of causal rules in the presence of overlap between confounders and effect modifiers. Although the CRE method makes inference using a smaller inference subsample due to sample-splitting, it maintains estimation precision at a level similar to existing methods while providing an interpretable form of the treatment effect heterogeneity. The CRE method is generic and fully compatible with existing methods for estimating the CATE. Its performance may vary with the methods chosen to generate the base decision rules in the discovery sample and the intermediate values $\tau_i^*$ in the inference sample; the CRE method can therefore be thought of as a \textit{refinement} of the outputs produced by existing methods. If an estimation method for the CATE is highly precise, it is likely to detect the treatment effect heterogeneity during the estimation procedure.
When the CRE method is paired with such an estimation method, it discovers the underlying treatment effect structure with high probability and represents this structure in an easy-to-interpret form. Indeed, a few simple rules are of great importance for public policy implications; when it comes to precision medicine, however, discovering a possibly lengthy rule that is specific to a patient could be of interest. The proposed CRE method requires the researcher to specify some tuning parameters. In particular, one should choose the number of trees generated for extracting decision rules and the cut-off threshold used in stability selection during the discovery step. Previous studies show that performance is not much affected by the specification of these parameters, and they provide general guidance on how to specify them. However, the optimal choice of the splitting ratio between the discovery and inference samples is not yet known. Even though the (25\%, 75\%) ratio is shown to perform best in our simulation studies, we do not know whether this ratio works best for all real-world datasets. A larger proportion for the discovery sample may be required if the data are sparse; conversely, a smaller proportion may suffice when the underlying effect structure is simple. Regardless of the splitting ratio, it is more important to choose a proper number of decision rules during the discovery step. The choice may depend on the questions that practitioners want to answer. For example, public policy makers generally want to discover a short list of risk factors. A few important subgroups defined by the risk factors are usually easy to understand and foster focused discussions about the assessment of the potential risks and benefits of policy actions. Also, given limited resources, public health can be promoted efficiently when prioritized subgroups are available. By contrast, a comparatively larger set of decision rules can be chosen, for instance, in precision medicine. There, an important goal is to identify patient subgroups that respond to treatment at a much higher (or lower) rate than the average \citep{loh2019subgroup}. Identifying a subgroup that must avoid the treatment due to excessive side effects can also be valuable information; however, discovering only a few subgroups is likely to miss such extreme subgroups. A number of extensions of the CRE method are possible. First, the CRE method maintains the benefits that existing methods have, i.e., asymptotic normality and unbiasedness: if an existing method can produce unbiased point estimates for $\tau(x)$ with valid confidence intervals, the CRE method can also produce unbiased estimates for $\bm{\beta}$ with valid confidence intervals. Bayesian methods such as BART or BCF can also be used, and empirically they perform very well; however, the validity of Bayesian inference, such as the construction of credible intervals, remains a question for future research. Second, the discovery step of the CRE method can be considered a dimension reduction procedure. We used a set of decision rules as a basis, but it may be possible to use other forms to characterize the treatment effect heterogeneity. Finally, we proposed an approach for sensitivity analysis of unmeasured confounding bias based on the inverse probability of treatment weighting estimator.
A general sensitivity analysis approach that is compatible with a larger class of estimation methods would be helpful; future research is needed to develop such a method. \bibliographystyle{apalike}
\section{Introduction} A key aspect of the formation of stars and their planets concerns the pristine chemical composition of their parental cloud of gas and dust. Various connections between the chemical composition of planet-host stars and the planet frequency have been observed and proposed, such as primordial abundance anomalies, planet engulfment events, retention of refractory elements by rocky planets and/or the cores of giant planets, and the formation of Jupiter analogues, which could trap significant quantities of dust exterior to their orbits, preventing large amounts of refractory elements from being accreted onto the forming star \citep{santos2001, santos2004, fischer, Melendez2009, ramirez2011, Wang2015, Ghezzi2018, Petigura2018, Teske2019, Booth2020}. The abundances of carbon, nitrogen and oxygen beyond the ice line in a protoplanetary disc of gas/dust (or simply nebula) are closely related to the efficiency with which planetesimals are assembled to form planetary embryos (bodies with sizes between those of the Moon and Mars) and the cores of giant planets. This is because, at large distances from the central protostar, the icy planetesimals that exist there owing to the lower temperatures have an enhanced mutual sticking process \citep{Haghighipour2013, Marboeufetal2014}. By developing a model that computes the chemical composition and the ice-to-rock mass ratio of icy planetesimals throughout protoplanetary discs of different masses, \citet{Marboeufetal2014} found that the ice content of these bodies is mainly dominated by H$_{2}$O, CO, CO$_{2}$, CH$_{3}$OH, and NH$_{3}$. Their models predict the ice-to-rock ratio in icy planetesimals to be equal to 1($\pm$0.5), as observed in the Solar System's comets, but still 2-3 times lower than usually assumed in planet formation models. Therefore, in some sense, the pristine CNO content of a star plays an important role in the potential planet formation around it, and this could also be imprinted in the chemical composition and structure of the formed planets (and vice-versa). The chemical evolution of the solar neighbourhood is based on the stellar yields of all elements together with other key ingredients, such as the star formation rate, the matter infall from the halo, the radial gas and star flows across the disc, the vertical gas outflows from the disc, the stellar initial mass function, the relative rates of different kinds of supernovae and the frequency of close binaries \citep{Tinsley1980, Pagel2009, Matteucci2012}. The production of C, N, O and Fe over time, as well as that of other abundant elements, determines the temporal evolution of their abundances in the interstellar medium (ISM) and is imprinted in the composition of stars and their planets. The CNO surface abundances of evolved stars are modified by nuclear burning together with internal mixing processes \citep{Charbonnel1994, Gratton2000, Maas2019}. Therefore, observing dwarf stars is indispensable for tracing the CNO enrichment of the ISM. Whilst carbon is a primary element (i.e., it is synthesized starting from H and He in the parent star), nitrogen is considered both a primary element (when produced, e.g., in low-metallicity fast-rotating massive stars, see \citet{Meynet2002a}) and a secondary one (when synthesized from $^{12}$C or $^{16}$O nuclei already present since the star's birth).
Intermediate-mass (2-3\,$\leq$\,M\,$\leq$\,7-9\,M$_{\odot}$) and massive (M\,$\geq$\,7-9\,M$_{\odot}$) stars are the main sources of C and N pollution of the ISM owing to their shorter lifetimes relative to low-mass stars (M\,$\leq$\,2-3\,M$_{\odot}$), although 1-2\,M$_{\odot}$ stars may also eject some amount of C and N through the first thermal pulse on the asymptotic giant branch, AGB \citep{Karakas2016}. The relative contributions of intermediate-mass and massive stars to the C and N production are still uncertain \citep{Meynet2002a, Meynet2002b}. The main carbon isotope, $^{12}$C, is essentially made by the triple-$\alpha$ process. The isotope $^{13}$C is made residually in different ways \citep{Meynet2002b}: {\bf (a)} in the CN cycle, also named the CNO-I cycle (H burning in intermediate-mass and massive stars), {\bf (b)} in the $^{12}$C burning (low-metallicity fast-rotating massive stars), and {\bf (c)} through proton-capture nucleosynthesis on the AGB (intermediate-mass stars with hot bottom burning just below the convective envelope). Nitrogen basically comes from C and/or O, which makes its production more complex than that of carbon. The most abundant isotope, $^{14}$N, is synthesized and ejected to the ISM through distinct channels: {\bf (i)} in the complete CNO-II cycle, ejected through the thermal pulses of the AGB phase for isolated stars only \citep{Pettini2002}; {\bf (ii)} through the $^{12}$C burning in low-metallicity fast-rotating massive stars, ejected by their strong winds \citep{Meynet2002a}; {\bf (iii)} through a proton-capture reaction at the hot base of the convective envelope, ejected during the AGB phase of intermediate-mass stars (like $^{13}$C) \citep{Marigo2001, Meynet2002a}; and {\bf (iv)} in the CN cycle from $^{12}$C and the ON cycle from $^{16}$O in stars of any mass, and ejected by them. However, the relative importance of these processes is not well determined. Oxygen, an $\alpha$-element, is mainly ejected to the ISM in the pre-supernova phase of type II core-collapse supernova events (SN-II) \citep{Chiappini2003}. In the chemical enrichment scenario of the solar neighbourhood, iron-peak elements (with Fe as a template) come from the combined production of type Ia supernovae (SN-Ia) and SN-II. In fact, the observed decrease of [$\alpha$-element/Fe] versus [Fe/H] in thick and thin disc stars reflects the relative importance of SN-Ia over SN-II along the formation history of the Galaxy's disc \citep{Feltzingetal2003, Bensbyetal2003, Melendezetal2008, Alves-Britoetal2010, RecioBlancoetal2014, Weinbergetal2019}. Explosions of classical novae in close binary systems play a specifically important role in the production of the rare CNO isotopes $^{13}$C, $^{15}$N and $^{17}$O \citep{Romano2019, JoseHernanz2007}, whereas SN-Ia events eject negligible amounts of CNO elements to the ISM. Therefore, the ratios C/Fe, N/Fe, $^{12}$C/$^{13}$C, C/N, C/O and N/O in dwarf stars provide valuable information about the main nucleosynthetic processes for these elements over the evolution of the Galaxy's disc. For instance, the relative contributions of the yields of single massive stars and of isolated AGB stars (low- and intermediate-mass stars), the SN-II/SN-Ia ratio and the close binary fraction could be understood through robust Galactic chemical evolution (GCE) models with those observations as constraints \citep{Chiappini2003, Kobayashi2011, Sahijpal2013, Romano2017, Romano2019}.
Specifically, \citet{Romano2017} state that measurements of $^{12}$C/$^{13}$C in wide samples of nearby dwarfs would be very useful for the construction of representative GCE models. Analysing nearby solar twins provides an opportunity to study the chemical evolution of the local disc with time around the solar metallicity, because they can encompass very different ages, ranging from about the formation of the thin disc until now \citep{TucciMaia2016}. The opportunity becomes even more interesting when the solar twins have very well-determined fundamental parameters, resulting in precise relative ages. \citet{Bedell2018}, \citet{Spina2018} and \citet{Nissen2015} have recently investigated how abundance ratios [X/Fe] in solar twins are related to age and [Fe/H]. \citet{Bedell2018} and \citet{Nissen2015}, for instance, similarly obtained that [C/Fe] and [O/Fe] increase with stellar age, but they did not include nitrogen and $^{12}$C/$^{13}$C in their studies.

We point out that nitrogen has not yet been derived for the current sample of solar twins, although a first analysis of N in 11 solar twins was performed by \citet{Melendez2009}. The main reasons are the lack of measurable atomic lines in the optical region and the fact that absorption features of molecules (e.g. CH, NH and CN) are actually spectral blends requiring spectral synthesis. For instance, equivalent widths of a weak atomic N\,I line at 7468.3\,{\AA} and spectral synthesis of NH in the ultraviolet were used in the study of solar-type stars by \citet{Ecuvillon2004}. More recently, \citet{DaSilva2015} and \citet{Suarez-Andres2016}, analysing samples of solar-type dwarfs, found that giant planet hosts are nitrogen-rich (on the [N/H] scale) when compared with their control samples of dwarfs without known planets. However, the increase of [N/Fe] as a function of [Fe/H] and the negative trend of [N/Fe] with stellar age, as observed in both works, do not imply any significant difference between the two stellar groups.

In the current work we have homogeneously measured precise abundances of carbon and nitrogen, plus the main carbon isotopic ratio $^{12}$C/$^{13}$C, in a sample of well-studied solar twins with distances up to 100\,pc, spanning a wide range of ages from around the formation epoch of the Galaxy's thin disc until a few hundred Myr ago \citep{Bedell2018, Spina2018, Botelho2019}. The analysis of our results is focused on the distributions of a set of CNO abundance ratios as a function of the age and metallicity of these solar twins, in order to provide unprecedented and self-consistent constraints for GCE modelling \citep[e.g.][]{Romano2017, Sahijpal2013}. Our paper is organized as follows. In Section 2, we present the sample of solar twins, their main parameters and the spectroscopic data. Section 3 details the homogeneous chemical analysis, carried out differentially with respect to the Sun, for deriving the C abundance, the $^{12}$C/$^{13}$C isotopic ratio and the N abundance based on the spectral synthesis of molecular features of the electronic systems CH\,A-X, $^{13}$CH\,A-X and CN\,B-X, respectively. In Section 4, we study the behaviour of [C/Fe], [N/Fe], $^{12}$C/$^{13}$C, [C/N], [C/O] and [N/O] as a function of [Fe/H] and isochrone stellar age, and also [C/N], [C/O] and [N/O] as a function of [O/H] (an alternative stellar metallicity indicator). Finally, Section 5 summarizes our main results and conclusions.
\section{Solar twins sample and data}

The sample is composed of 67 solar twins, stars with atmospheric parameters very similar to those of the Sun, within around $\pm100\thinspace$K in $T_{\rm eff}$ and $\pm0.1\thinspace$dex in $\log g$ and [Fe/H] \citep{Ramirez2009}, previously studied by \citet{Bedell2018}, \citet{Spina2018} and \citet{Botelho2019}. The atmospheric parameters are those of \citet{Spina2018}. The typical errors in $T_{\rm eff}$, $\log g$ and [Fe/H] are 4\,K, 0.012\,dex and 0.004\,dex, respectively, derived from a line-by-line differential spectroscopic analysis relative to the Sun by means of equivalent width (EW) measurements of Fe\,I and Fe\,II lines. We also adopted the isochronal ages and masses derived by \citet{Spina2018}, who employed the \textbf{q}$^{2}$ code, applying the isochrone method with the Yonsei-Yale isochrones (as described in \citealt{Ramirez2014a, Ramirez2014b}). This sample spans a wide range in age, i.e. from 500\,Myr up to 8.6\,Gyr with a typical error of 0.4\,Gyr, which makes it well suited for probing the evolution of the Galactic disc. We considered the elemental abundances from \citet{Spina2018}, who analysed 12 neutron-capture elements (Sr, Y, Zr, Ba, La, Ce, Pr, Nd, Sm, Eu, Gd, and Dy), and the abundances of 17 additional elements (C, O, Na, Mg, Al, Si, S, Ca, Sc, Ti, V, Cr, Mn, Co, Ni, Cu and Zn) from \citet{Bedell2018}. The elemental abundances in both works were also measured based on a line-by-line analysis relative to the Sun. We have adopted the line-of-sight rotational velocity $V\sin i$ and macro-turbulence velocity $V_{\rm macro}$ given in \citet{leonardo} to reproduce the line broadening of our sample of solar twin stars.

The spectra were obtained with the HARPS spectrograph (High Accuracy Radial velocity Planet Searcher) on the 3.6\,m telescope of the ESO (European Southern Observatory) La Silla Observatory in Chile \citep{Mayor2003}. The HARPS spectra cover $\lambda\lambda$3780-6910\,{\AA} with a resolving power R\,=\,115,000. Stacking all HARPS spectra of each star resulted in a very high signal-to-noise ratio (SNR) of approximately 800\,per\,pixel, as measured around 6000\,{\AA}, with a minimum of 300\,per\,pixel and a maximum of 1800\,per\,pixel. The solar spectrum used in this work is a combination of several exposures of sunlight reflected from the asteroid Vesta, also observed with HARPS. Table~\ref{tab_sample} shows the photospheric parameters, broadening velocities, isochrone ages and oxygen abundances of the solar twins sample.

\begin{table*}
\centering
\caption{
Parameters of the 67 solar twins collected from previously published works: photospheric parameters and isochrone ages by \citet{Spina2018}, macro-turbulence and rotation velocities by \citet{leonardo}, and oxygen abundances by \citet{Bedell2018}. The solar parameters are added in the first row. The full table is available online.
}
\label{tab_sample}
\resizebox{\linewidth}{!}{
\begin{tabular}{rrrrrrrrr}
\hline
Star ID & $T_{\rm eff}$ & log\,$g$ & [Fe/H] & $\xi$ & $V_{\rm macro}$ & $V\sin i$ & age & [O/H]\\
 & (K) & & (dex) & (km.s$^{-1}$) & (km.s$^{-1}$) & (km.s$^{-1}$) & (Gyr) & (dex) \\
\hline
Sun & 5777 & 4.440 & 0.000 & 1.00 & 3.20 & 2.04 & 4.56 & 0.00\\
HIP\,003203 & 5868$\pm$9 & 4.540$\pm$0.016 & -0.050$\pm$0.007 & 1.16$\pm$0.02 & 3.27 & 3.82 & 0.50$\pm$0.30 & -0.139$\pm$0.021\\
HIP\,004909 & 5861$\pm$7 & 4.500$\pm$0.016 & 0.048$\pm$0.006 & 1.11$\pm$0.01 & 3.33 & 4.01 & 0.60$\pm$0.40 & -0.038$\pm$0.017\\
HIP\,006407 & 5775$\pm$7 & 4.505$\pm$0.013 & -0.058$\pm$0.006 & 0.98$\pm$0.01 & 2.96 & 2.30 & 1.90$\pm$0.70 & -0.110$\pm$0.013\\
HIP\,007585 & 5822$\pm$3 & 4.445$\pm$0.008 & 0.083$\pm$0.003 & 1.01$\pm$0.01 & 3.37 & 1.90 & 3.50$\pm$0.40 & 0.054$\pm$0.005\\
-- & -- & -- & -- & -- & -- & -- & -- & -- \\
HIP\,118115 & 5798$\pm$4 & 4.275$\pm$0.011 & -0.036$\pm$0.003 & 1.10$\pm$0.01 & 3.55 & 0.89 & 8.00$\pm$0.30 & -0.088$\pm$0.010\\
\hline
\end{tabular}
}
\end{table*}

\section{Determination of C, N and $^{12}$C/$^{13}$C}

The uniform determination of the C abundance was performed from a set of selected absorption lines of the molecular electronic system A$^{2}\Delta$-X$^{2}\Pi$ of $^{12}$CH (hereafter CH\,A-X; \citealt{Jorgensen}), and the N abundance from a set of selected lines of the molecular electronic system B$^{2}\Sigma^{+}$-X$^{2}\Sigma^{+}$ of $^{12}$C$^{14}$N (the CN Violet System, hereafter CN\,B-X; \citealt{Brooke}). The homogeneous determination of the $^{12}$C/$^{13}$C isotopic ratio was also obtained using features from the CH\,A-X system ($^{12}$CH\,A-X lines mixed with $^{13}$CH\,A-X lines). We used the C abundance for deriving the N abundance (in the analysis of the CN\,B-X lines) and for extracting the $^{12}$C/$^{13}$C ratio. Although \citet{Bedell2018} had already determined the C abundance for this same solar twin sample, a new determination was necessary in order to make the chemical analysis homogeneous: \citet{Bedell2018} used equivalent widths of atomic and molecular lines (C\,I and CH\,A-X) to derive the carbon abundance, whereas in the current work we adopt the spectral synthesis technique for measurable and almost isolated CH\,A-X lines that were carefully selected based on their sensitivity to variations of the C abundance.

We used the same stellar parameters and model atmosphere grid (ATLAS9 by \citealt{castelli2004new}) as presented in \citet{Spina2018}. We also adopted the same version (2014) of the MOOG code \citep{moog}, which solves the photospheric radiative transfer with line formation under the LTE (local thermodynamic equilibrium) approximation. The non-LTE analysis of molecular spectral lines is largely unexplored territory in the literature, but such effects are probably more important in stars with extended atmospheres \citep{Lambertetal2013} than in our sample of dwarf stars. Throughout our differential analysis, we have also adopted the standard parameters for the Sun: $T_{\rm eff}$\,=\,5777\,K, $\log g$\,=\,4.44, [Fe/H]\,=\,0.00\,dex, and $\xi$\,=\,1.00\,km.s$^{-1}$ \citep[e.g.][]{Cox2000}. The adopted solar chemical composition is that of \citet{Asplund2009}.
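For clarity, the line-by-line differential scheme underlying all abundances quoted in this work can be summarized as follows (a schematic formulation in our own notation, not an equation taken from the codes themselves):
\begin{equation}
\delta A_{\lambda}(\mathrm{X}) = A_{\lambda}^{\rm star}(\mathrm{X}) - A_{\lambda}^{\rm Sun}(\mathrm{X}), \qquad \mathrm{[X/H]} = \big\langle \delta A_{\lambda}(\mathrm{X}) \big\rangle_{\lambda} ,
\end{equation}
where $A_{\lambda}(\mathrm{X})$ is the abundance of species X extracted from an individual line or feature $\lambda$. Because every line is first calibrated on the solar spectrum, uncertainties in the $gf$ values and residual modelling systematics largely cancel in the difference, which is what enables the millidex-level precision quoted above.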
The oxygen abundance, as derived by \citet{Bedell2018} for the same solar twins sample, is taken into account throughout the whole spectral synthesis procedure for determining the abundances of C and N and the isotopic ratio $^{12}$C/$^{13}$C, since the molecular dissociative equilibrium is solved by the MOOG code. The atomic line list has been compiled from the VALD database \citep{Ryabchikova2015}, and we have added the following molecular lines from the Kurucz database \citep{Kurucz}: $^{12}$C$^{1}$H and $^{13}$C$^{1}$H lines of the A-X system \citep{Jorgensen}, $^{12}$C$^{1}$H and $^{13}$C$^{1}$H lines of the B-X system \citep{Jorgensen}, $^{12}$C$^{12}$C, $^{12}$C$^{13}$C and $^{13}$C$^{13}$C lines of the d-a system \citep{Brooke}, $^{12}$C$^{14}$N, $^{12}$C$^{15}$N and $^{13}$C$^{14}$N lines of the B-X system \citep{Brooke}, and $^{14}$N$^{1}$H and $^{15}$N$^{1}$H lines of the A-X system \citep{Kurucz2017}. We have verified the impact of varying $^{12}$C/$^{13}$C and $^{14}$N/$^{15}$N on the selected CH\,A-X and CN\,B-X lines, respectively, and we found that those molecular lines are insensitive to changes in these isotopic ratios (the difference in the resulting abundance is always smaller than the abundance error). The adopted dissociation energies of CH, C$_{2}$, CN and NH are those of \citet{Barklem2016}.

The first step of our differential chemical analysis relative to the Sun was the selection of (almost) isolated lines of CH\,A-X and CN\,B-X and features of $^{13}$CH\,A-X ($^{13}$CH-$^{12}$CH partially mixed lines). To aid this search, we adopted the solar atlas of \citet{Wallace2011}, which identifies various molecular and atomic lines. We were thus able to find the best candidates for unperturbed CH\,A-X and CN\,B-X lines. This atlas also allowed us to identify continuum points (or pseudo-continuum in some cases) for a better flux normalization between the observed spectrum and the model spectra. After this step, a detailed investigation is made via spectral synthesis to address the individual contributions of all species in the region. Finally, a global $gf$ calibration is made to finely reproduce the observed HARPS solar spectrum over each selected molecular absorption. An automated Python code adjusts the $gf$-value under a line-by-line approach for each selected region. For the calibration relative to the Sun, we obtained [X/Fe] close to zero (absolute value always smaller than or equal to 0.005\,dex) for the derived abundances of C and N in all selected lines, and $^{12}$C/$^{13}$C between 87.3 and 90.7 for the resulting isotopic ratio in all selected features (see Fig.~\ref{ch4210}, Fig.~\ref{cn4180} and Fig.~\ref{c4300} in the following subsections, in which we show examples). The next step was to search for continuum points and adequate $\chi^{2}$ windows for every molecular line/feature, including an inspection of the effective sensitivity to variations of the measured parameter (C, N or $^{12}$C/$^{13}$C). We employ the same procedure to extract the elemental abundance based on the $\chi^{2}$ minimization as applied in \citet{Botelho2019}.
The $\chi^{2}$ is computed only in a window around the central wavelength of the line/feature, directly providing the best abundance or isotopic ratio among the synthetic spectra generated for different values of this parameter: $\chi^{2}\,=\,\sum_{i\,=\,1}^{n}(O_{i} - S_{i})^{2}/\sigma(O_{i})^{2}$, where $O_{i}$ and $S_{i}$ are the fluxes of the observed and synthetic spectra, respectively, $\sigma(O_{i})$ is the error in the observed flux, and $i$ represents the wavelength point. The observed flux error is estimated as a function of the continuum SNR, that is, $\sigma(O_{i})\,=\,O_{i}/$SNR$_{continuum}$. The $\chi^{2}$ of every spectral synthesis fit is plotted as a function of [X/Fe] for deriving the C and N abundances, and as a function of the $^{12}$C/$^{13}$C ratio for obtaining this isotopic ratio. An automated procedure has been used to perform the spectral synthesis fit in order to measure a final representative abundance or isotopic ratio. Even though the spectral synthesis has been applied automatically for each molecular line/feature (CH\,A-X, CN\,B-X and $^{13}$CH\,A-X), we have performed a general visual inspection of all spectral synthesis fits. The stars HIP\,010303, HIP\,030037, HIP\,038072 and HIP\,083276 were eliminated from our measurements because their spectral fits for all selected CH\,A-X lines were unreliable, not providing a representative abundance of C and therefore making it impossible to derive N and $^{12}$C/$^{13}$C. Perhaps there is a problem with the data reduction in that region for those stars, as their computed $\chi^{2}$ values are very high in comparison with the $\chi^{2}$ distribution of the other stars. In summary, we were able to measure C, $^{12}$C/$^{13}$C and N in 63 solar twins from high-resolution, high-quality spectra through a self-consistent and homogeneous procedure.

\subsection{Carbon and nitrogen abundances}

In order to derive the carbon abundance, eleven lines of the (0,0) vibrational band and one line of the (1,0) vibrational band of the CH\,A-X system were selected in the spectral range $\lambda\lambda$4211-4387\,{\AA}. Table~\ref{tab_CHlines} shows the twelve CH\,A-X lines. For obtaining the nitrogen abundance, we selected five CN\,B-X lines in 4180-4212\,{\AA}, from the bands (0,0), (1,0) and (2,0) (respectively 2, 2 and 1 lines). The five CN\,B-X lines are shown in Table~\ref{tab_CNlines}. We have also verified that the selected lines of $^{12}$CH\,A-X and $^{12}$C$^{14}$N\,B-X are insensitive to variations of the main isotopic ratios of C and N ($^{12}$C/$^{13}$C and $^{14}$N/$^{15}$N). Table~\ref{tab_linelistCH} and Table~\ref{tab_linelistCN}, respectively, present the line lists, 2\,{\AA} wide, centred on the central wavelengths of the selected CH\,A-X and CN\,B-X lines, whose $gf$ values have been calibrated to the solar spectrum.

\begin{table*}
\centering
\caption{
The comprehensive list of CH\,A-X lines used in this work for determining the C abundance in solar twin stars. The line identification (first column) adopts the short molecule notation and the wavelength of the main molecular electronic transition as a whole number in Angstroms. The central wavelength of the spectral absorption, assuming a Gaussian profile, is presented in the second column. The spectral range in which the $\chi^{2}$ is computed and the corresponding number of pixels are shown in the third and fourth columns, respectively.
The blue and red continuum intervals are presented in sequence and, finally, the vibrational band and the number of lines blended to form the molecular absorption are shown in the last columns. Two CH\,A-X lines are double absorptions, i.e. two absorptions side by side (CH4248 and CH4278).
}
\label{tab_CHlines}
\resizebox{\linewidth}{!}{
\begin{tabular}{rrrrrrrr}
\hline
line & $\lambda_{central}$ & $\chi^{2}$ window & n. pixels & blue continuum & red continuum & band & n. lines \\
 & ({\AA}) & ({\AA}) & & ({\AA}) & ({\AA}) & ($v'$, $v''$) & \\
\hline
CH4210 & 4210.96 & 4210.83-4211.10 & 27 & 4207.54-4207.60 & 4211.55-4211.62 & (0,0) & 3 \\
CH4212 & 4212.65 & 4212.54-4212.76 & 22 & 4211.56-4211.62 & 4212.94-4213.00 & (0,0) & 3 \\
CH4216 & 4216.60 & 4216.50-4216.70 & 20 & 4214.10-4214.16 & 4218.53-4218.59 & (1,0) & 3 \\
CH4217 & 4217.23 & 4217.11-4217.35 & 24 & 4214.10-4214.16 & 4218.53-4218.59 & (0,0) & 3 \\
CH4218 & 4218.71 & 4218.61-4218.81 & 20 & 4217.90-4217.96 & 4218.93-4218.99 & (0,0) & 3 \\
CH4248 & 4248.72 & 4248.64-4249.01 & 37 & 4246.26-4246.33 & 4251.50-4251.57 & (0,0) & 2 \\
 & 4248.94 & & & & & & 1 \\
CH4255 & 4255.25 & 4255.14-4255.36 & 22 & 4253.40-4253.46 & 4256.67-4256.73 & (0,0) & 2 \\
CH4278 & 4278.85 & 4278.77-4279.14 & 37 & 4278.55-4279.59 & 4281.66-4281.70 & (0,0) & 2 \\
 & 4279.06 & & & & & & 1 \\
CH4281 & 4281.96 & 4281.84-4282.08 & 24 & 4281.66-4281.70 & 4283.48-4283.54 & (0,0) & 3 \\
CH4288 & 4288.73 & 4288.62-4288.87 & 25 & 4287.20-4287.28 & 4290.55-4290.60 & (0,0) & 4 \\
CH4378 & 4378.25 & 4378.13-4378.37 & 24 & 4377.62-4377.68 & 4379.86-4379.93 & (0,0) & 3 \\
CH4387 & 4387.06 & 4386.94-4387.16 & 22 & 4385.50-4385.58 & 4392.38-4392.45 & (0,0) & 3 \\
\hline
\end{tabular}
}
\end{table*}

\begin{table*}
\centering
\caption{
The comprehensive list of CN\,B-X lines used in this work for determining the N abundance in solar twin stars. The same notation and layout as in Tab.~\ref{tab_CHlines} are adopted.
}
\label{tab_CNlines}
\resizebox{\linewidth}{!}{
\begin{tabular}{rrrrrrrr}
\hline
line & $\lambda_{central}$ & $\chi^{2}$ window & n. pixels & blue continuum & red continuum & band & n. lines \\
 & ({\AA}) & ({\AA}) & & ({\AA}) & ({\AA}) & ($v'$, $v''$) & \\
\hline
CN4180 & 4180.02 & 4179.95-4180.08 & 13 & 4179.06-4179.12 & 4181.01-4181.06 & (2,0) & 3 \\
CN4192 & 4192.94 & 4192.84-4193.03 & 19 & 4192.72-4192.78 & 4195.77-4195.85 & (1,0) & 3 \\
CN4193 & 4193.40 & 4193.33-4193.45 & 14 & 4192.72-4192.78 & 4195.77-4195.85 & (1,0) & 3 \\
CN4195 & 4195.95 & 4195.87-4196.01 & 14 & 4195.77-4195.85 & 4203.26-4203.32 & (0,0) & 3 \\
CN4212 & 4212.25 & 4212.15-4212.34 & 19 & 4211.56-4211.63 & 4212.92-4212.98 & (0,0) & 6 \\
\hline
\end{tabular}
}
\end{table*}

\begin{table}
\centering
\caption{
Line lists for the CH\,A-X line regions after the $gf$ calibration to the solar spectrum: wavelength, species code, excitation potential of the transition's lower level, $gf$ and species identification. The species code follows the MOOG standard notation, i.e. atomic number(s) before the decimal point (listed in ascending order for molecules), followed by the ionization level immediately after the decimal point (0: neutral, 1: first ionized, and so on). Each list covers a region of 2\,{\AA} centred on a given CH\,A-X line. The first list is for the CH4210 line. Full tables for all twelve CH\,A-X lines are available online.
}
\label{tab_linelistCH}
\begin{tabular}{rrrrl}
\hline
CH4210 & & & & \\
\hline
wavelength & species code & $\chi_{e}$ & $gf$ & species \\
({\AA}) & & (eV) & & \\
\hline
4209.9650 & 606.0 & 1.559 & 0.316E-03 & C$_{2}$ \\
4209.9710 & 607.0 & 0.258 & 0.135E-05 & CN \\
4209.9760 & 607.0 & 2.348 & 0.682E-04 & CN \\
4209.9760 & 607.0 & 0.998 & 0.941E-05 & CN \\
4209.9870 & 607.0 & 4.001 & 0.289E-03 & CN \\
--- & --- & --- & --- & --- \\
4211.9670 & 606.0 & 1.574 & 0.257E$+$02 & C$_{2}$ \\
\hline
\end{tabular}
\end{table}

\begin{table}
\centering
\caption{
Line lists for the CN\,B-X line regions after the $gf$ calibration to the solar spectrum. The same notation as in Tab.~\ref{tab_linelistCH} is adopted. Each list covers a region of 2\,{\AA} centred on a given CN\,B-X line. The first list is for the CN4180 line. Full tables for all five CN\,B-X lines are available online.
}
\label{tab_linelistCN}
\begin{tabular}{rrrrl}
\hline
CN4180 & & & & \\
\hline
wavelength & species code & $\chi_{e}$ & $gf$ & species \\
({\AA}) & & (eV) & & \\
\hline
4179.0220 & 607.0 & 3.822 & 0.226E-02 & CN \\
4179.0270 & 606.0 & 1.515 & 0.211E-09 & C$_{2}$ \\
4179.0360 & 24.0 & 3.849 & 0.486E-03 & Cr\,I \\
4179.0380 & 107.0 & 1.139 & 0.110E-06 & NH \\
4179.0390 & 607.0 & 3.518 & 0.211E-02 & CN \\
--- & --- & --- & --- & --- \\
4181.0140 & 606.0 & 1.538 & 0.469E-08 & C$_{2}$ \\
\hline
\end{tabular}
\end{table}

To perform the spectral synthesis fit of every selected CH\,A-X and CN\,B-X line, seven synthetic spectra are computed with a uniform step of 0.10\,dex in [X/Fe] (X: C or N), and these are resampled in wavelength to the sampling of the observed spectrum. After that, the continuum level of the observed spectrum needs to be fitted to the continuum level of each synthetic spectrum; the continuum correction is multiplicative and is derived by using both the blue and red continuum ranges (see Tab.~\ref{tab_CHlines} and Tab.~\ref{tab_CNlines}). To derive the resulting elemental abundance from each molecular line, the $\chi^{2}$ between the model and observed spectra is computed in the line window. The $\chi^{2}$ window covers from 20 up to 37 pixels for the CH\,A-X lines and from 13 up to 19 pixels for the CN\,B-X lines. The C abundance is derived first (as a simple mean over the lines) and the N abundance is then obtained (also as a simple mean) after fixing the C abundance. Both the C and N abundances were subsequently used to measure the $^{12}$C/$^{13}$C isotopic ratio, making the overall chemical analysis self-consistent. Figure~\ref{ch4210} and Figure~\ref{cn4180} are examples of the spectral synthesis calibration to the solar spectrum, in which the C and N measurements are derived from a CH\,A-X line and a CN\,B-X line, respectively; they show diagnostic plots for the individual contributions from different species in the line profile and for the spectral synthesis itself, as well as the graph of $\chi^{2}$ as a function of [X/Fe], whose minimum directly provides the resulting elemental abundance. No resulting abundance from any CH\,A-X or CN\,B-X line was excluded by the 3$\sigma$ clipping criterion.
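To make the extraction step concrete, the following minimal Python sketch reproduces the logic just described: a multiplicative continuum rescaling from the blue and red continuum windows, a $\chi^{2}$ computed over the line window with $\sigma(O_{i})\,=\,O_{i}/$SNR, and a parabolic fit of $\chi^{2}$ versus [X/Fe] whose vertex gives the abundance (the 1$\sigma$ error anticipates the $\Delta\chi^{2}\,=\,\nu$ recipe detailed after the figures below). All array and function names are hypothetical illustrations; in the actual pipeline the synthetic fluxes come from MOOG.

\begin{verbatim}
import numpy as np

def extract_abundance(wave, obs_flux, snr, grid_xfe, syn_grid,
                      win, blue_cont, red_cont):
    """Best [X/Fe] and 1-sigma error from a grid of synthetic spectra.

    grid_xfe : trial [X/Fe] values (e.g. 7 values in 0.10 dex steps)
    syn_grid : synthetic fluxes already resampled onto `wave`
    win, blue_cont, red_cont : (min, max) wavelength intervals in Angstroms
    """
    in_win = (wave >= win[0]) & (wave <= win[1])
    in_cont = (((wave >= blue_cont[0]) & (wave <= blue_cont[1])) |
               ((wave >= red_cont[0]) & (wave <= red_cont[1])))
    chi2 = []
    for syn in syn_grid:
        # Multiplicative continuum correction from the blue+red windows.
        scale = np.mean(syn[in_cont]) / np.mean(obs_flux[in_cont])
        obs = obs_flux * scale
        sigma = obs / snr               # sigma(O_i) = O_i / SNR_continuum
        chi2.append(np.sum((obs[in_win] - syn[in_win])**2
                           / sigma[in_win]**2))
    # Parabolic fit of chi2 vs [X/Fe]; its vertex gives the abundance.
    a, b, c = np.polyfit(grid_xfe, chi2, 2)
    best = -b / (2.0 * a)
    # 1-sigma error where chi2 rises by nu = n_pixels - 1 above the minimum.
    nu = int(in_win.sum()) - 1
    err = np.sqrt(nu / a)               # from a*(x - best)**2 = nu
    return best, err
\end{verbatim}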
\begin{figure*}
\includegraphics[scale=0.270]{Fig_1_CH4210_linefit.pdf}
\includegraphics[scale=0.246]{Fig_1_CH4210_chi.pdf}
\includegraphics[scale=0.255]{Fig_1_CH4210_contrib.pdf}
\caption{
Example of the spectral synthesis calibration to the Sun of a CH\,A-X line (CH4210), showing the individual contributions from different species in the line profile (bottom panel), spectral comparisons between the observed spectrum and seven synthetic spectra (top-left panel), and the $\chi^{2}$ graph as a function of [X/Fe] (top-right panel).
}
\label{ch4210}
\end{figure*}

\begin{figure*}
\includegraphics[scale=0.270]{Fig_2_CN4180_linefit.pdf}
\includegraphics[scale=0.246]{Fig_2_CN4180_chi.pdf}
\includegraphics[scale=0.255]{Fig_2_CN4180_contrib.pdf}
\caption{
Example of the spectral synthesis calibration to the Sun of a CN\,B-X line (CN4180). The same plots as in Fig.~\ref{ch4210} are adopted.
}
\label{cn4180}
\end{figure*}

The $\chi^{2}$ minimization procedure directly provides a good estimate of the error in [X/Fe]. The abundance error is derived where $\chi^{2}$ increases by $\nu$ from its minimum value along the polynomial curve fitted to $\chi^{2}$ versus [X/Fe], where $\nu$ is the number of pixels in the $\chi^{2}$ window minus a single free parameter (the elemental abundance itself). The uncertainties in abundance due to the photospheric parameter errors, plus the impact of the continuum level adjustment (considering the influence of the flux noise), have been added in quadrature to the error from the spectral synthesis itself (the $\chi^{2}$ estimate). For estimating the final error in the N abundance, the full carbon abundance error is also taken into account. We have also estimated the impact of the oxygen abundance error on the abundances of carbon and nitrogen, and we found that its influence is negligible (i.e. the difference in the resulting abundance is always smaller than 1\,millidex). The error in [X/H] is computed in quadrature with the error in [Fe/H]. The global error in [X/Fe] derived from each molecular line roughly depends on the local SNR (measured in the spectral region of the molecular lines). The SNR changes from 250\,per\,pixel up to 600\,per\,pixel for the CH\,A-X line regions, and from 200\,per\,pixel up to 500\,per\,pixel for the CN\,B-X line regions.

Concerning the C abundance derived by \citet{Bedell2018} for the same solar twins sample, we have found a systematic difference in the [C/H] scale of about -0.04\,dex between our determinations and their average [C/H] abundance ratios based on EW measurements of C\,I and CH\,A-X lines. However, this small discrepancy is lower than the data dispersion (e.g. a standard least-squares linear fit provides an $rms$ of 0.08\,dex). The final C and N abundances (on the [X/Fe] scale) for the 63 analysed solar twins are computed as simple averages of the abundances measured from the selected individual molecular lines. The errors in the final [C/Fe] and [N/Fe] are estimated as simple means of their individual values. The error in [C/Fe] varies from 0.004 up to 0.011\,dex with an average value around 0.006\,dex (the error in [C/H] changes from 0.004 up to 0.013\,dex). The error in [N/Fe] varies from 0.015 up to 0.057\,dex, with an average value around 0.030\,dex (the error in [N/H] changes from 0.016 up to 0.057\,dex).

\subsection{$^{12}$C/$^{13}$C isotopic ratio}

In order to self-consistently derive the $^{12}$C/$^{13}$C isotopic ratio, our previous measurements of the C and N abundances were adopted.
We found six $^{13}$CH-$^{12}$CH\,A-X features around the CH G band at $\lambda$4300\,{\AA} (all containing lines of the (0,0) vibrational band of the CH\,A-X system) that are sensitive to variations of the $^{12}$C/$^{13}$C ratio. Table~\ref{tab_CClines} shows these selected CH\,A-X features (in fact, $^{13}$CH\,A-X lines mixed with $^{12}$CH\,A-X lines). Table~\ref{tab_linelistCC} presents the line lists, 2\,{\AA} wide, centred on the central wavelengths of the selected $^{13}$CH-$^{12}$CH\,A-X features, whose $gf$ values have been calibrated to the solar spectrum.

\begin{table*}
\centering
\caption{
The comprehensive list of $^{13}$CH-$^{12}$CH\,A-X features used in this work for determining the $^{12}$C/$^{13}$C isotopic ratio in solar twin stars. The same notation and layout as in Tab.~\ref{tab_CHlines} are adopted.
}
\label{tab_CClines}
\resizebox{\linewidth}{!}{
\begin{tabular}{rrrrrrrr}
\hline
feature & $\lambda_{central}$ & $\chi^{2}$ window & n. pixels & blue continuum & red continuum & band & n. lines \\
 & ({\AA}) & ({\AA}) & & ({\AA}) & ({\AA}) & ($v'$, $v''$) & \\
\hline
$^{13}$CH4297 & 4297.10 & 4297.03-4297.17 & 14 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 6 \\
$^{13}$CH4298 & 4298.14 & 4298.08-4298.21 & 13 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 3 \\
$^{13}$CH4299A & 4299.42 & 4299.34-4299.51 & 17 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 7 \\
$^{13}$CH4299B & 4299.74 & 4299.65-4299.83 & 18 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 8 \\
$^{13}$CH4300 & 4300.67 & 4300.58-4300.76 & 18 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 5 \\
$^{13}$CH4303 & 4303.32 & 4303.24-4303.40 & 16 & 4292.83-4292.88 & 4304.93-4305.01 & (1,0) & 3 \\
\hline
\end{tabular}
}
\end{table*}

Analogously to the measurements of the C and N abundances, seven synthetic spectra are computed to derive the $^{12}$C/$^{13}$C isotopic ratio, assuming a uniform step of 15 in this free parameter. The model spectra are resampled in wavelength to the sampling of the observed spectrum. The observed continuum level is also individually fitted to the continuum level of each synthetic spectrum (see Tab.~\ref{tab_CClines}). To derive the resulting isotopic ratio from each selected $^{13}$CH-$^{12}$CH\,A-X feature, the $\chi^{2}$ between the model and observed spectra is also computed along the feature window (covering from 13 up to 18\,pixels). During the spectral synthesis procedure, whilst the isotopic ratio $^{14}$N/$^{15}$N is fixed to the solar value, the ratio $^{12}$C/$^{13}$C to be measured starts from the solar value. We have adopted the solar values suggested by \citet{Asplund2009} ($^{12}$C/$^{13}$C\,=\,89.4$\pm$0.2 and $^{14}$N/$^{15}$N\,=\,435$\pm$57). Figure~\ref{c4300} is an example of the spectral synthesis calibration to the solar spectrum, from which $^{12}$C/$^{13}$C is derived by using a selected $^{13}$CH-$^{12}$CH\,A-X feature. This figure shows diagnostic plots for the individual contributions from different species in the line profile and for the spectral synthesis itself, as well as the graph of $\chi^{2}$ versus $^{12}$C/$^{13}$C, in which the isotopic ratio is recovered from the $\chi^{2}$ minimum.

\begin{table}
\centering
\caption{
Line lists for the $^{13}$CH-$^{12}$CH\,A-X feature regions after the $gf$ calibration to the solar spectrum.
The same notation as in Tab.~\ref{tab_linelistCH} is adopted, now also including the mass numbers of the atoms (listed in ascending order) in the species code for molecular lines, but excluding the species identification (the last column of the previous tables). Each list covers a region of 2\,{\AA} centred on a given $^{13}$CH-$^{12}$CH\,A-X feature. The first list is for the $^{13}$CH4297 feature. Full tables for all six $^{13}$CH-$^{12}$CH\,A-X features are available online.
}
\label{tab_linelistCC}
\begin{tabular}{rrrr}
\hline
$^{13}$CH4297 & & & \\
\hline
wavelength & species code & $\chi_{e}$ & $gf$ \\
({\AA}) & & (eV) & \\
\hline
4296.1010 & 606.01313 & 2.145 & 0.731E-01 \\
4296.1030 & 606.01313 & 1.329 & 0.637E-01 \\
4296.1060 & 606.01213 & 2.043 & 0.111E-01 \\
4296.1060 & 607.01314 & 2.349 & 0.302E-04 \\
4296.1080 & 106.00112 & 1.247 & 0.550E-02 \\
--- & --- & --- & --- \\
4298.1000 & 607.01214 & 0.566 & 0.259E-03 \\
\hline
\end{tabular}
\end{table}

\begin{figure*}
\includegraphics[scale=0.270]{Fig_3_13CH4300_linefit.pdf}
\includegraphics[scale=0.246]{Fig_3_13CH4300_chi.pdf}
\includegraphics[scale=0.255]{Fig_3_13CH4300_contrib.pdf}
\caption{
Example of the spectral synthesis calibration to the Sun of a selected $^{13}$CH-$^{12}$CH\,A-X feature ($^{13}$CH4300). The same plots as in Fig.~\ref{ch4210} are adopted, except for the top-right panel, which shows $\chi^{2}$ as a function of $^{12}$C/$^{13}$C instead.
}
\label{c4300}
\end{figure*}

By adopting the 3$\sigma$ clipping criterion over the resulting $^{12}$C/$^{13}$C measurements for every sample star, four $^{13}$CH-$^{12}$CH\,A-X features were used for seven stars, five features for another seven stars, and all six features for the remaining 49 stars. The estimation of the error in $^{12}$C/$^{13}$C is similar to that performed for the C and N abundances, i.e. the $\chi^{2}$ minimization approach objectively gives the error due to the spectral synthesis procedure itself. The basic error in $^{12}$C/$^{13}$C is estimated from a variation of $\chi^{2}$ by ($\nu$\,-\,1) units along the polynomial curve fitted to $\chi^{2}$ versus $^{12}$C/$^{13}$C, where $\nu$ is the number of pixels in the $\chi^{2}$ window. The error due to the spectral synthesis is added in quadrature with the propagated photospheric parameter errors, the uncertainties in the C abundance, and the errors from the impact of the continuum level adjustment. We have also estimated the impact of the error in the oxygen abundance on the error of the isotopic ratio, and we found that its influence is negligible. We have verified that the global error in $^{12}$C/$^{13}$C derived from each molecular feature roughly depends on the local SNR (measured in the corresponding spectral regions). The SNR changes from 170\,per\,pixel up to 400\,per\,pixel for the $^{13}$CH-$^{12}$CH\,A-X feature regions. As for the C and N abundances, the final $^{12}$C/$^{13}$C isotopic ratios for the 63 analysed solar twins are computed as simple averages of the individual measurements from the $^{13}$CH-$^{12}$CH\,A-X features, and their errors as simple means of the individual errors. The error in $^{12}$C/$^{13}$C varies from 1.9 up to 10.5, with 4.3 as an average value.

\section{Analysis of the results}

We have measured C, $^{12}$C/$^{13}$C and N in 63 solar twins from HARPS high-resolution, high-quality spectra through a self-consistent, homogeneous and automated procedure.
Seven old $\alpha$-enhanced solar twins (HIP\,014501, HIP\,028066, HIP\,030476, HIP\,065708, HIP\,073241, HIP\,074432 and HIP\,115577) and the single solar twin anomalously rich in $s$-process elements, HIP\,064150, were excluded from the further analysis, in order to be consistent with the analyses of other elements carried out by \citet{Bedell2018} and \citet{Spina2018} for the same solar twins sample, which represents a homogeneous Galactic population (i.e. the local thin disc; \citealt{Ramirezetal2012}). Therefore, the remaining 55 thin disc solar twins are investigated to study the correlations of [C/Fe], [N/Fe], $^{12}$C/$^{13}$C, [C/N], [C/O] and [N/O] with [Fe/H] and age, as well as [C/N], [C/O] and [N/O] versus [O/H]. The nearby $\alpha$-enhanced stars are on average older than the ordinary thin disc stars \citep{Adibekyanetal2011, Haywoodetal2013}. The $\alpha$-rich stars, whose [<$\alpha$/Fe>] is higher by about 0.1\,dex, split into two distinct stellar populations of the Galactic disc, carrying two distinct dynamical histories, in contrast with the local thin disc stars. Whilst the metal-poor $\alpha$-rich stars actually belong to the thick disc based on their kinematics and orbital parameters, the metal-rich $\alpha$-rich stars exhibit nearly circular orbits close to the Galactic plane (similarly to the thin disc stars). Those seven old $\alpha$-enhanced solar twins could either be 'metal-rich' thick disc stars \citep{Bedell2018} or may have come from inner disc regions, as speculated by \citet{Adibekyanetal2011} for the metal-rich $\alpha$-rich stars.

The determination of the C abundance, N abundance and $^{12}$C/$^{13}$C ratio reached, respectively, 100, 100 and 80\,per\,cent completeness in terms of all selected molecular lines under the 3$\sigma$-clipping criterion. Table~\ref{results} compiles our measurements for the 63 solar twins. Figure~\ref{hist} shows the distributions of [C/H], [N/H] and $^{12}$C/$^{13}$C among the 55 thin disc solar twins. Regarding the 55 thin disc solar twins, [C/Fe] changes from -0.129 up to 0.042\,dex and [C/H] from -0.207 up to 0.080\,dex. Their average values are, respectively, about -0.040\,dex ($\sigma$\,=\,0.033\,dex) and -0.042\,dex ($\sigma$\,=\,0.070\,dex). The mean errors of [C/Fe] and [C/H] are, respectively, 0.006 and 0.007\,dex; their respective ranges are 0.004-0.011\,dex and 0.004-0.013\,dex. [N/Fe] changes from -0.313 up to 0.023\,dex and [N/H] from -0.310 up to 0.087\,dex. Their average values are, respectively, about -0.094\,dex ($\sigma$\,=\,0.075\,dex) and -0.096\,dex ($\sigma$\,=\,0.108\,dex). The average errors in [N/Fe] and [N/H] are the same, i.e. around 0.030\,dex. The standard deviations of the [N/Fe] and [N/H] errors are, respectively, 0.016 and 0.057\,dex. Also regarding the 55 thin disc solar twins, the isotopic ratio $^{12}$C/$^{13}$C varies from 70.9 up to 101.1, with an average value around 85.8 ($\sigma$\,=\,6.2). The error in $^{12}$C/$^{13}$C varies from 1.9 up to 10.5, with 4.3 as an average value (as for all 63 solar twins).

We have investigated [C/Fe], [N/Fe], $^{12}$C/$^{13}$C, [C/N], [C/O] and [N/O] as a function of [Fe/H] and isochrone stellar age, and also [C/N], [C/O] and [N/O] as a function of [O/H]. Only $^{12}$C/$^{13}$C is additionally analysed as a function of [C/Fe] and [N/Fe]. We adopt a linear dependence for all relations, based on those 55 thin disc solar twins; the results are discussed in the following subsections.
We have used the Kapteyn kmpfit package\footnote{https://www.astro.rug.nl/software/kapteyn/index.html} to perform all fits to the data. The KMPFIT code performs a robust linear fit of $y$ versus $x$, which minimizes the orthogonal distances of the data points to the fitting curve, taking into account the errors in both variables $x$ and $y$ under a variance weighting approach. The solar data are not included in any of the linear fits. Table~\ref{value-adj} compiles the results of all computed linear fits for [C/Fe], [N/Fe], $^{12}$C/$^{13}$C, [C/N], [C/O] and [N/O] as a function of [Fe/H] and age; Table~\ref{value-adj-razao} for $^{12}$C/$^{13}$C as a function of [C/Fe] and [N/Fe]; and Table~\ref{value-adj-oh} for [C/N], [C/O] and [N/O] as a function of [O/H]. A minimal sketch of these fits is shown at the end of this subsection.

\begin{table*}
\centering
\caption{
Carbon and nitrogen abundances and $^{12}$C/$^{13}$C ratios measured in this work, relative to the Sun, for 63 solar twins. The solar data are included in the first row. The full table is available online.
}
\label{results}
\begin{tabular}{rrrrrr}
\hline
Star ID & {[}C/H{]} & {[}C/Fe{]} & {[}N/H{]} & {[}N/Fe{]} & $^{12}$C/$^{13}$C \\
 & (dex) & (dex) & (dex) & (dex) & \\
\hline
Sun & 0.002$\pm$0.001 & 0.002$\pm$0.001 & 0.002$\pm$0.018 & 0.002$\pm$0.018 & 88.7$\pm$0.5 \\
HIP\,003203 & -0.129$\pm$0.012 & -0.079$\pm$0.010 & -0.301$\pm$0.056 & -0.251$\pm$0.056 & 84.9$\pm$9.8 \\
HIP\,004909 & -0.018$\pm$0.011 & -0.066$\pm$0.009 & -0.122$\pm$0.042 & -0.170$\pm$0.042 & 89.3$\pm$8.5 \\
HIP\,006407 & -0.092$\pm$0.012 & -0.034$\pm$0.010 & -0.180$\pm$0.045 & -0.122$\pm$0.045 & 87.9$\pm$7.5 \\
HIP\,007585 & 0.040$\pm$0.006 & -0.043$\pm$0.005 & -0.035$\pm$0.024 & -0.118$\pm$0.024 & 87.0$\pm$3.2 \\
-- & -- & -- & -- & -- & -- \\
HIP\,118115 & -0.054$\pm$0.007 & -0.018$\pm$0.006 & -0.196$\pm$0.030 & -0.160$\pm$0.030 & 85.1$\pm$4.0 \\
\hline
\end{tabular}
\end{table*}

\begin{figure}
\center
\includegraphics[scale=0.300]{Fig_4_hist_ch.pdf}
\includegraphics[scale=0.300]{Fig_4_hist_nh.pdf}
\includegraphics[scale=0.300]{Fig_4_hist_cisoratio.pdf}
\caption{
Distributions of [C/H], [N/H] and $^{12}$C/$^{13}$C among the 55 thin disc solar twins.
}
\label{hist}
\end{figure}

\subsection{[C/Fe] and [N/Fe] versus [Fe/H] and age}

Concerning the relations between [C,N/Fe] and [Fe/H] over the small metallicity range of solar twins (Fig.~\ref{cfe-nfe-feh-age}), we have found that: {\bf (i)} [C/Fe] is anti-correlated with [Fe/H], with a slope of -0.056$\pm$0.012 (the relative error of the negative slope is about 21\,per\,cent); and {\bf (ii)} there is a tentative correlation between [N/Fe] and [Fe/H] (slope relative error of 35\,per\,cent, or just 2.8$\sigma$ of significance). The Sun lies slightly above the fits of [C/Fe] and [N/Fe] versus [Fe/H] of the solar twins (deviating on average by about 1\,$rms$ from the linear relations). This result is indeed expected, because \citet{Melendez2009} found, as confirmed afterwards by other works \citep{Ramirez2009, Ramirez2010, Nissen2015, Bedell2018}, that the Sun is slightly deficient in refractory elements relative to volatile ones in comparison with most solar twins. Since C and N are volatile elements and Fe is a refractory element, the Sun is thus slightly enhanced in C and N relative to Fe. The 50\,per\,cent equilibrium condensation temperatures of C, N, O and Fe, for the solar nebula and the Solar System composition taking into account the total pressure at 1\,AU, are respectively 40, 123, 180 and 1334\,K \citep{Lodders2003}.
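For reproducibility, the sketch below illustrates the kind of orthogonal-distance linear fit described at the beginning of this section, following the effective-variance recipe from the Kapteyn package documentation, which weights each point by the errors in both coordinates. The data arrays shown are hypothetical placeholders, not our measurements:

\begin{verbatim}
import numpy as np
from kapteyn import kmpfit  # https://www.astro.rug.nl/software/kapteyn/

def residuals(p, data):
    """Effective-variance residuals for a straight line y = a + b*x,
    accounting for 1-sigma errors in both x and y."""
    a, b = p
    x, y, xerr, yerr = data
    return (y - a - b * x) / np.sqrt(yerr**2 + (b * xerr)**2)

# Hypothetical arrays: e.g. x = [Fe/H] and y = [C/Fe] with their errors.
x = np.array([-0.05, 0.00, 0.08])
y = np.array([-0.03, -0.04, -0.05])
xerr = np.full(3, 0.006)
yerr = np.full(3, 0.006)

fit = kmpfit.Fitter(residuals=residuals, data=(x, y, xerr, yerr))
fit.fit(params0=[0.0, 0.0])   # initial guesses: intercept a, slope b
print("intercept, slope:", fit.params)
print("1-sigma errors  :", fit.stderr)
\end{verbatim}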
We see relatively good agreement when comparing our derived relations [C/Fe]-[Fe/H] and [N/Fe]-[Fe/H] with those of \citet{DaSilva2015}, obtained over a more extended range in [Fe/H] than that of solar twins for a sample of 120 thin disc FGK dwarfs. Restricting the results of \citet{DaSilva2015} to the small metallicity range of solar twins, there would be a negative trend between [C/Fe] and [Fe/H] and no trend between [N/Fe] and [Fe/H], with the solar data also slightly above the overall data set. Unfortunately, \citet{DaSilva2015} did not publish the slopes for the anti-correlation and correlation of [C/Fe] vs. [Fe/H] found respectively for stars with [Fe/H] below and above zero, nor for the positive correlation of [N/Fe] vs. [Fe/H] found over the whole [Fe/H] range.

\citet{Suarez-Andres2017} recently presented linear fits between [C/Fe] and [Fe/H] for a large sample of 1110 solar-type stars. They also derived [C/Fe]-[Fe/H] fits for stars with [Fe/H] below and above the solar value. We unfortunately cannot do the same, because the [Fe/H] amplitude of our solar twins sample is much smaller. On the other hand, we are able to split our sample in terms of stellar age thanks to its wide coverage in this parameter (see the following paragraphs, and the last subsection in the case of the carbon abundance ratios). The sample of \citet{Suarez-Andres2017} was split into two groups of stars with planets of different masses and a large comparison sample of stars without detected planets. Additionally, they performed linear fits between [C/H] and [Fe/H] for these subsamples over the whole wide metallicity range (-1.4\,$\leq$\,[Fe/H]\,$\leq$\,+0.6\,dex). For the stellar sample without planets, \citet{Suarez-Andres2017} found a slope of +0.970$\pm$0.008, which agrees within 1$\sigma$ with the slope derived by us for this relation (+0.989$\pm$0.015). Note that the solar twins sample of the current work contains only two known planet-host stars so far. On the other hand, \citet{Nissen2015} studied the chemical content of a sample of 21 solar twins, as we do here (just 3 of them hosting known planets). However, \citet{Nissen2015} did not provide any fit for the [C/Fe]-[Fe/H] relation. An inspection of their results for carbon suggests a negative trend between [C/Fe] and [Fe/H], as derived by us for a larger solar twins sample. \citet{Suarez-Andres2016} obtained, for a restricted sample of 32 solar-type stars without known planets, no correlation between [N/Fe] and [Fe/H] in the range -0.45\,$\leq$\,[Fe/H]\,$\leq$\,+0.55\,dex (the slope that they found has a large error, i.e. +0.040$\pm$0.070). The whole sample of \citet{Suarez-Andres2016} comprises 74 stars, 42 of which hosted detected planets at the publication date. The slope of the tentative correlation [N/Fe]-[Fe/H] derived by us for solar twins also agrees within 1$\sigma$ with the result of \citet{Suarez-Andres2016}.

\begin{figure*}
\includegraphics[scale=0.266]{Fig_5_cfe_nfe_vs_feh_vertical.pdf}
\includegraphics[scale=0.372]{Fig_5_cfe_nfe_vs_age_vertical.pdf}
\caption{
[C/Fe] and [N/Fe] as a function of [Fe/H] and isochrone stellar age for the 55 thin disc solar twins: the global linear fit is shown in every plot (blue solid line), and the broken linear fit is only presented in the [X/Fe]-age plots (green solid line), splitting the sample into stars with ages up to the Sun's age and stars older than the Sun (the break is fixed at the Sun's age).
The red points represent the seven old $\alpha$-rich solar twins, and the magenta point represents the star rich in $s$-process elements, HIP\,064150. These stars are not included in the fits, nor is the Sun, whose data are also plotted as a reference (green symbol). The results of each linear fit are shown in Table~\ref{value-adj}.
}
\label{cfe-nfe-feh-age}
\end{figure*}

The linear fits of [C,N/Fe] as a function of the isochrone stellar age over the wide age range of our solar twins sample (Fig.~\ref{cfe-nfe-feh-age}) provide the following results: {\bf (i)} [C/Fe] is positively and well correlated with stellar age (slope of +0.013$\pm$0.001\,dex.Gyr$^{-1}$ with a relative error of 8\,per\,cent, i.e. 13$\sigma$ of significance); and {\bf (ii)} [N/Fe] is also positively and well correlated with stellar age (slope of +0.013$\pm$0.002\,dex.Gyr$^{-1}$ with a relative error of about 15\,per\,cent). The Sun again lies slightly above the derived fits of both [C/Fe] and [N/Fe] versus age for the solar twins (deviating on average by 1.2\,$rms$ from the linear relations), confirming that the Sun is slightly deficient in refractory elements relative to volatile ones in comparison with solar twins \citep{Melendez2009, Nissen2015, Bedell2018}.

The analysis of the abundance ratios as a function of stellar age is extended with linear fits that fix a break point at the Sun's age, whose value is very well established (the fit results are added to Tab.~\ref{value-adj}). The sample of 55 thin disc solar twins is split into stars with ages up to the Sun's age (23 {\lq\lq}young{\rq\rq} solar twins) and stars with ages greater than the solar age (32 {\lq\lq}old{\rq\rq} solar twins). Analogously to the global single linear fits, the alternative broken linear fits (applied with the Python SciPy ODR function, as sketched below) perform the minimization of the orthogonal distances of the data points to the fitting function, taking into account the errors in both variables. By fixing the abscissa of the break point, the fitting computation is simplified (four free parameters instead of five), and two linear functions are derived with a connection/transition at the solar age. Each broken linear fit is shown together with the global fit in the same plot (right panel in Fig.~\ref{cfe-nfe-feh-age}, bottom panel in Fig.~\ref{razao-feh-age}, plus Fig.~\ref{artigo_cn_co_no_age}). For instance, \citet{DaSilva2015} applied independent linear fits of [X/Fe] versus isochrone stellar age for a sample of FGK dwarfs, also considering ages below and above the Sun's age (X: C, N, O, Na, Mg, Si, Ca, Ti, V, Mn, Fe, Ni, Cu and Ba). Whilst [C/Fe] shows a tentative correlation with age for the {\lq\lq}young{\rq\rq} solar twins, it is correlated with age for the {\lq\lq}old{\rq\rq} solar twins, such that the broken linear fit matches well the overall behaviour of [C/Fe] given by the global linear fit (Fig.~\ref{cfe-nfe-feh-age}). The broken linear fit exhibits an overall effective abundance-ratio dispersion comparable to the dispersion of the global single fit: we estimate an effective $rms$ of 0.039\,dex over the whole age scale against 0.037\,dex for the global fit, where $rms_{\rm effective}\,\approx\,\sqrt{(rms_{\rm young\,twins}^{2}\,+\,rms_{\rm old\,twins}^{2})/2}$.
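The broken fits can be sketched with the SciPy ODR machinery mentioned above. The continuous parameterization below (value at the break plus two slopes) is an assumption of this illustration: the text quotes four free parameters, and only the fixed break at the solar age is prescribed, so the three-parameter continuous variant used here is one possible choice. Array contents are hypothetical placeholders:

\begin{verbatim}
import numpy as np
from scipy import odr

T_SUN = 4.56  # Gyr: the break point is fixed at the solar age

def broken_line(beta, t):
    """Two linear segments connected at t = T_SUN.
    beta = (y0, b_young, b_old): value at the break and the two slopes."""
    y0, b_young, b_old = beta
    slope = np.where(t <= T_SUN, b_young, b_old)
    return y0 + slope * (t - T_SUN)

# Hypothetical arrays: isochrone ages and [X/Fe] with errors in both axes.
age = np.array([1.0, 3.0, 5.0, 7.0])
xfe = np.array([-0.08, -0.05, -0.01, 0.03])
data = odr.RealData(age, xfe, sx=np.full(4, 0.4), sy=np.full(4, 0.006))
out = odr.ODR(data, odr.Model(broken_line), beta0=[0.0, 0.01, 0.01]).run()
print("y(T_Sun) and slopes:", out.beta, "+/-", out.sd_beta)
\end{verbatim}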
The broken linear fit of [N/Fe]-age gives a stronger correlation with age for the {\lq\lq}young{\rq\rq} solar twins in comparison with the global fit (slope of +0.041$\pm$0.008\,dex.Gyr$^{-1}$), and a negative trend for the {\lq\lq}old{\rq\rq} solar twins (see Fig.~\ref{cfe-nfe-feh-age} and Tab.~\ref{value-adj}). However, there is an overall decrease of [N/Fe] with time from the older solar twins towards the younger solar twins, as seen in the global single fit. The effective dispersion of the broken fit over the whole age scale is also equivalent to the dispersion of the global single fit (overall $rms$ of 0.067\,dex against 0.070\,dex for the global fit).

If we consider our solar twins relations [C,N/Fe]-age as roughly representative of the local thin disc evolution, crude estimates for the current [C/Fe] and [N/Fe] ratios in the local ISM would be respectively -0.11 and -0.13\,dex (as suggested by the linear coefficients of the global fits). In addition to these estimates, the global fits also suggest that [C/Fe] and [N/Fe] would have been around 0.00 and -0.02\,dex, respectively, in the local ISM 8.6\,Gyr ago, which is close to {\lq\lq}the formation epoch{\rq\rq} of the thin disc. The [C/Fe]-age broken fit for the solar twins younger and older than the Sun provides the same crude estimates for the current and ancient values of [C/Fe] in the local ISM. In fact, the absolute values of both abundance ratios estimated for the past and the present are less statistically significant than their overall distribution and variation in time. At the least, we are confident in predicting that both [C/Fe] and [N/Fe] in the local ISM are certainly sub-solar nowadays, and that they were on average close to the solar ratios 8.6\,Gyr ago. Therefore, [C/Fe] and [N/Fe] have decreased along the evolution of the Galactic thin disc in the solar neighbourhood during nearly 9\,Gyr. This is likely due to a higher relative production of iron in the local disc (basically by SN-Ia in binary systems) in comparison with the nucleosynthesis of carbon and nitrogen (by single low-mass/intermediate-mass AGB stars, plus metal-rich massive stars too). Whilst the carbon-to-iron ratio decreases in time and with the iron abundance, the nitrogen-to-iron ratio decreases in time but seems to increase with the iron abundance. Consequently, it is difficult to reach a conclusive statement about the impact of the AGB-stars/SN-Ia ratio, or even the frequency of isolated stars relative to binary stars, in consistently explaining the lowering of the carbon-to-iron and nitrogen-to-iron ratios over time.

We have found a few GCE models \citep{Kobayashi2011, Sahijpal2013} that partially agree with our estimates for the [C/Fe] and [N/Fe] abundance ratios in the solar neighbourhood. The GCE models with AGB yields by \citet{Kobayashi2011} predict [N/Fe] around 0.0\,dex for {\lq\lq}the formation epoch{\rq\rq} of the thin disc (see Fig.\,13 in that publication, considering [Fe/H]\,=\,-1.5\,dex as the minimum metallicity for the thin disc stars), and predict a $^{12}$C/$^{13}$C isotopic ratio around 80 at the current time (see Figs.\,17-19 in that publication).
The case A models of \citet{Sahijpal2013} predict [C/Fe] around -0.1\,dex at the current time (see Fig.\,5 in that paper), and grid model\,\#30 of case A in the same work specifically predicts [C/Fe] around the solar value for {\lq\lq}the formation epoch{\rq\rq} of the thin disc (also see Fig.\,5 in that publication, assuming an initial [Fe/H]\,=\,-1.5\,dex). Besides these comparisons against theoretical predictions, all the GCE models of \citet{Romano2019} predict an anti-correlation between [C/Fe] and [Fe/H] around the solar ratios, as we observe, and three of them show a positive correlation between [N/Fe] and [Fe/H], in agreement with our result. There are only two models that simultaneously reproduce both observed relations for our solar twins sample, although just roughly and qualitatively. Like the others, these fiducial models, named MWG-02 and MWG-07, are multi-zone models with two almost independent infall episodes, a star formation rate proportional to both the stellar and total surface mass densities, the Kroupa initial mass function (IMF) with a slope $x$\,=\,1.7 for the range of massive stars, and a specific set of stellar yields for low-mass/intermediate-mass stars, super-AGB stars (7-9\,M$_{\odot}$) and massive stars (see more details in \citealt{Romano2019}). The particular nucleosynthesis prescriptions of the MWG-02 model are based on the stellar yields of low-mass/intermediate-mass stars from \citet{Karakas2010} and the stellar yields of massive stars from \citet{Nomotoetal2013} with no stellar rotation effects considered, and on the absence of contributions from super-AGB stars, hypernovae and novae. The ingredients of the MWG-07 model are: stellar yields of low-mass/intermediate-mass stars, super-AGB stars and super-solar-metallicity stars from \citet{Venturaetal2013}, stellar yields of massive stars from \citet{Limongi2018} with no stellar rotation effects included, and the absence of contributions from hypernovae and novae.

\citet{Nissen2015} also obtained a linear correlation of [C/Fe] as a function of age for a smaller sample of solar twins, whose slope (0.0139$\pm$0.0020\,dex.Gyr$^{-1}$) and intercept (-0.110$\pm$0.011\,dex) are very close to our results. Similarly to \citet{Nissen2015}, \citet{DaSilva2015} presented an important work, providing for 140 FGK dwarfs a series of linear fits of abundance ratios [X/Fe] as a function of [Fe/H] and isochrone stellar age. Since \citet{DaSilva2015} performed independent fits for ages below and above the solar age without publishing the slopes, we can only make a qualitative comparison. Whilst they found a nearly global positive trend of [C/Fe] versus age, agreeing with our prediction, their data suggest an overall negative trend of [N/Fe] as a function of age, opposite to our global single fit over the whole age scale. The same holds for \citet{Suarez-Andres2016}, who also found negative trends of [N/Fe] as a function of age for samples of solar-type stars with and without known planets that are more restricted than that of \citet{DaSilva2015}. On the other hand, we have found a negative trend of [N/Fe] with age only for the solar twins older than the Sun. Very recently, \citet{Nissenetal2020} have proposed that the relations between several abundance ratios and stellar age derived from observations of solar-type stars ([C/Fe] and [Fe/H] included) might be represented by two distinct sequences, which would be interpreted as evidence of two episodes of star formation induced by the accretion of gas onto the Galactic disc.
The separation of the two sequences on the stellar age scale would be around 5-6\,Gyr, close to the formation epoch of the Sun (see Figure 3 in their paper, showing the age-metallicity relation). This result would substantiate the fits of [C,N/Fe] as a function of stellar age for FGK dwarfs younger and older than the Sun performed by \citet{DaSilva2015}, and alternatively performed in the current work too, or even those of [C,N/Fe] as a function of [Fe/H] by \citet{DaSilva2015}, and of [C/Fe] as a function of [Fe/H] by \citet{Suarez-Andres2017} for stars with [Fe/H] below and above zero. We are then able to qualitatively compare our thin disc solar twins relation [C/Fe]-age against the results of \citet{Nissenetal2020}, obtained very recently for 72 nearby solar-type stars with -0.3\,$\leq$\,[Fe/H]\,$\leq$\,+0.3\,dex through the analysis of HARPS spectra. They split their sample into two distinct stellar populations based on their age-[Fe/H] distribution: old stars and young stars. \citet{Nissenetal2020} found [C/Fe] increasing with stellar age, as we have derived too. A linear fit would not be good enough to reproduce the whole data distribution of both populations, which cover ages from 0 up to 11\,Gyr. No fit of any kind was attempted by them (rather than a linear fit, perhaps a higher-order polynomial fit would work for their data distribution). Coincidentally, they found that [C/Fe] tends to decrease from about +0.1\,dex 10\,Gyr ago down to around -0.1\,dex now, such that the carbon-to-iron ratio would be very close to the solar value nearly 8.6\,Gyr ago, as we observe.

Our relations [C/Fe]-[Fe/H] and [C/Fe]-age derived for local thin disc solar twins can also be compared with the very recent carbon abundance measurements obtained for 2133 FGK dwarfs by \citet{Franchinietal2020} in the Gaia-ESO Survey. They classified their sample stars as thin and thick disc members by using independent chemical, kinematical and orbital criteria. The sub-solar anti-correlation of [C/Fe] versus [Fe/H] found by us qualitatively agrees with the relation obtained by \citet{Franchinietal2020} for thin disc dwarfs, especially in the cases of the kinematical and orbital classifications (unfortunately, they did not apply any kind of fit to their data). The overall increase of [C/Fe] as a function of age found by us is compatible with the results of \citet{Franchinietal2020} for the thin disc stars, which cover ages from 2 up to 12\,Gyr (again, the kinematical and orbital classifications provide better agreement). Differently from our analysis, the variation of the carbon-to-iron ratio seems to follow a high-order polynomial function of age better than a linear fit. They obtained [C/Fe] increasing from about -0.12\,dex 2\,Gyr ago up to near the solar level 12\,Gyr ago.

\subsection{$^{12}$C/$^{13}$C versus [C/Fe], [N/Fe], [Fe/H] and age}

Since we have homogeneously measured the abundances of C and N and the isotopic ratio $^{12}$C/$^{13}$C, we can investigate effects due to some internal mixing process associated with the CNO cycles that could have changed these photospheric chemical abundances, perhaps altering their pristine chemical composition. According to the plots of $^{12}$C/$^{13}$C versus [C/Fe] and [N/Fe] (Fig.~\ref{razao-cfe-nfe}), the $^{12}$C/$^{13}$C ratio is somewhat correlated with [C/Fe] (slope with a relative error of 25\,per\,cent, i.e. 3.9$\sigma$ of significance), as expected from some mixing process connected with a CNO cycle.
On the other hand, the C isotopic ratio is not anti-correlated with [N/Fe]; instead, it is positively correlated (slope with a relative error of about 16\,per\,cent), unlike the expectation for CNO processing. These trends are compatible with no deep internal mixing bringing products of the CNO cycle to the surface.

\begin{figure}
\includegraphics[scale=0.300]{Fig_6_isoC_vs_cfe.pdf}
\includegraphics[scale=0.300]{Fig_6_isoC_vs_nfe.pdf}
\caption{
$^{12}$C/$^{13}$C as a function of [C/Fe] and [N/Fe] for the 55 thin disc solar twins. The same description as in Fig.~\ref{cfe-nfe-feh-age} is adopted. The results of each linear fit are shown in Table~\ref{value-adj-razao}.
}
\label{razao-cfe-nfe}
\end{figure}

\begin{table*}
\centering
\caption{Slopes, intercepts and $rms$ of the linear fits of $^{12}$C/$^{13}$C versus [C/Fe] and [N/Fe], derived for the 55 thin disc solar twins.}
\label{value-adj-razao}
\begin{tabular}{llll|lll}
\hline
 & \multicolumn{3}{c}{{[}C/Fe{]}} & \multicolumn{3}{c}{{[}N/Fe{]}} \\
\hline
 & slope & intercept & $rms$ & slope & intercept & $rms$ \\
 & (dex$^{-1}$) & & & (dex$^{-1}$) & & \\
\hline
$^{12}$C/$^{13}$C & 55$\pm$14 & 86.8$\pm$0.8 & 6.0 & 50.2$\pm$8.2 & 88.9$\pm$0.8 & 6.3 \\
\hline
\end{tabular}
\end{table*}

The linear fit of $^{12}$C/$^{13}$C as a function of metallicity (Fig.~\ref{razao-feh-age}) shows that $^{12}$C/$^{13}$C is positively and well correlated with [Fe/H] for the solar twins sample over their metallicity range -0.126\,$\leq$\,[Fe/H]\,$\leq$\,+0.132\,dex (slope of +56.5$\pm$7.2\,dex$^{-1}$, i.e. a slope with a relative error of just 13\,per\,cent). The isotopic ratio $^{12}$C/$^{13}$C increases from 78.2 up to 92.8 over the metallicity interval of our sample stars (a variation of 3\,$rms$ of the derived fit). In fact, the statistical significance of this observational result lies basically in the overall increase of the isotopic ratio as a function of [Fe/H], leaving the absolute extreme values aside. Surprisingly, the GCE models follow an opposite trend with [Fe/H], something that is worth exploring in further GCE models.

\begin{figure}
\includegraphics[scale=0.300]{Fig_7_isoC_vs_feh.pdf}
\includegraphics[scale=0.300]{Fig_7_isoC_vs_age.pdf}
\caption{
$^{12}$C/$^{13}$C as a function of [Fe/H] and age for the 55 thin disc solar twins. The same description as in Fig.~\ref{cfe-nfe-feh-age} is adopted. The results of each linear fit are shown in Table~\ref{value-adj}.
}
\label{razao-feh-age}
\end{figure}

The linear fit of $^{12}$C/$^{13}$C as a function of the isochrone stellar age (Fig.~\ref{razao-feh-age}) shows that $^{12}$C/$^{13}$C is marginally correlated with age for the solar twins sample (slope of +0.614$\pm$0.250\,Gyr$^{-1}$, i.e. a slope with a relative error of 41\,per\,cent, or just 2.5$\sigma$ of significance). There seems to be a positive trend between this isotopic ratio and the isochrone age, implying an overall decrease of $^{12}$C/$^{13}$C from a value of 86.6 (8.6\,Gyr ago) down to 81.3 at the current time (a variation of only 0.8\,$rms$ of the derived fit). The broken linear fit splitting the sample into solar twins younger and older than the Sun is statistically equivalent to the global single fit (also see Tab.~\ref{value-adj}). The result from the $^{12}$C/$^{13}$C-age relation is indeed very interesting in association with the other derived relation, $^{12}$C/$^{13}$C-[Fe/H].
We remind the reader that the fundamental statistical significance of our result lies in the negative trend found between the isotopic ratio and time, rather than in the absolute values of the predictions for the local ISM now and 8.6\,Gyr ago. GCE models should confirm or refute this observational evolutionary trend. Nevertheless, a decrease of $^{12}$C/$^{13}$C in time would perhaps be expected along the evolution of the Galactic disc due to the delayed contribution of the low-mass/intermediate-mass AGB stars. The terrestrial $^{12}$C/$^{13}$C\,=\,89.4$\pm$0.2 \citep{Coplen2002}, suggested by \citet{Asplund2009} to represent the current solar value (and its pristine value too), is in between the solar photosphere values of 86.8$\pm$3.8 by \citet{Scott2006}, 91.4$\pm$1.3 by \citet{Ayres2013}, and 93.5$\pm$0.7 by \citet{Lyons2018}. Possible causes for the small apparent discrepancy between the terrestrial and solar ratios are discussed by \citet{Lyons2018}. Recently, the $^{12}$C/$^{13}$C ratio was employed by \citet{Adibekyan2018} as one of the criteria defining solar siblings, i.e. stars that formed in the same birth cluster as the Sun and thus share its age, chemical composition and kinematics \citep{Ramirez2014b}. We recommend that the variation in the $^{12}$C/$^{13}$C ratio around the solar value should be within 4.5 units: 84.9\,$\leq$\,$^{12}$C/$^{13}$C\,$\leq$\,93.9 (i.e. a 3$\sigma$ agreement around the solar value proposed by \citet{Asplund2009} and \citet{Coplen2002}, assuming an average error from the four values previously listed). We have found 28 stars in our sample of 55 thin disc solar twins that could be solar sibling candidates based on the C isotopic ratio alone. However, considering that solar siblings must have ages similar to the Sun's, only 10 stars are left (HIP\,025670, HIP\,040133, HIP\,049756, HIP\,064673, HIP\,07585, HIP\,079672, HIP\,089650, HIP\,095962, HIP\,104045 and HIP\,117367). We propose that the age interval for convergence to the solar age should span from 3.36\,Gyr up to 5.76\,Gyr, i.e. 3$\sigma$ around the solar value assuming the typical error in the isochrone ages of our solar twins sample. Solar siblings, however, must also have kinematics similar to the Sun's; after further applying this condition, no solar sibling candidate remains.
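This solar sibling down-selection amounts to chaining simple cuts on the isotopic ratio, the age and, finally, the kinematics. A minimal sketch of the first two cuts is given below; the input file and column names are illustrative assumptions, not a published data product.

\begin{verbatim}
import numpy as np

# 3-sigma acceptance windows derived in the text.
ISO_LO, ISO_HI = 84.9, 93.9   # 12C/13C window around the solar value
AGE_LO, AGE_HI = 3.36, 5.76   # Gyr, window around the solar age

# Hypothetical per-star table for the 55 twins (names are placeholders).
stars = np.genfromtxt("solar_twins_table.csv", delimiter=",",
                      names=("hip", "iso_ratio", "age"),
                      dtype=None, encoding=None)

by_ratio = stars[(stars["iso_ratio"] >= ISO_LO) &
                 (stars["iso_ratio"] <= ISO_HI)]
by_age = by_ratio[(by_ratio["age"] >= AGE_LO) &
                  (by_ratio["age"] <= AGE_HI)]

# In the text: 28 stars pass the isotopic-ratio cut, 10 of those also
# pass the age cut, and none survives the solar-kinematics requirement.
print(len(by_ratio), "by ratio;", len(by_age), "by ratio + age")
\end{verbatim}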
Concerning the comparisons against predictions of the state-of-the-art GCE models that track the CNO isotopes in the local ISM by \citet{Romano2017}, our results for solar twins, spanning ages from 0.5 up to 8.6\,Gyr (around 8\,Gyr in time), agree with the predictions of one of their models, because the $^{12}$C/$^{13}$C ratio indeed appears to decrease from 8.6\,Gyr ago up to now (i.e. from a Galactic evolutionary time of about 5\,Gyr until nowadays), and this model simultaneously reproduces both the solar and ISM $^{12}$C/$^{13}$C ratios. This model does not include C yields from super-AGB stars and does not take the effects of stellar rotation on the yields of massive stars into account (Model\,1 in \citet{Romano2017}, renamed MWG-02 in \citet{Romano2019}). Like the \citet{Romano2019} GCE models, it is a multi-zone model with two infall episodes, a star formation rate proportional to both stellar and total surface mass density, the Kroupa IMF with a slope $x$\,=\,1.7 for the range of massive stars, and a set of stellar yields for low-mass/intermediate-mass stars, super-AGB stars and massive stars (see \citet{Romano2017} for details). Whilst the current C isotopic ratio for the ISM of the neighbourhood adopted by \citet{Romano2017} is 68$\pm$15 (measured by \citet{Milam2005} as typical of the local ratio), we suggest a value of 81.3$\pm$1.4, such that there is an agreement within 1$\sigma$.

\subsection{[C/N], [C/O] and [N/O] versus [Fe/H], [O/H] and age}

We have used our measurements of carbon and nitrogen, together with the oxygen abundances homogeneously derived by \citet{Bedell2018} for the same solar twins sample, to compute the [C/N], [C/O] and [N/O] ratios. Regarding the relations of these abundance ratios as a linear function of [Fe/H] (shown in Fig.~\ref{artigo_cn_co_no_feh_oh}), we have found that: {\bf (i)} [C/N] is anti-correlated with [Fe/H] (the relative error of the negative slope is 22\,per\,cent, i.e. a 4.6$\sigma$ significance level); {\bf (ii)} [C/O] is anti-correlated with [Fe/H] (relative error of the negative slope of about 18\,per\,cent); and {\bf (iii)} there is a tentative correlation between [N/O] and [Fe/H] (roughly a 2$\sigma$ significance level). The Sun, when compared against the fits versus [Fe/H] of the solar twins, seems to be about normal (within about 1$\sigma$) in the C/N, C/O and N/O ratios. In \citet{Melendez2009}, the [C,N,O/Fe] ratios are about the same within the error bars, which is expected considering that the three elements have similarly low condensation temperatures. In this sense, the C/N, C/O and N/O ratios in \citet{Melendez2009} are roughly the same (within the errors) in the Sun and solar twins, as also found in the present work. Because oxygen has only a few electronic transitions in the optical range, whereas FGK stars show many Fe\,I and Fe\,II lines, iron is usually used to trace the stellar metallicity. On the other hand, oxygen is the most abundant metal in stars, being in fact considered the best indicator of the metallicity of any star or stellar system. Therefore, it becomes interesting to compare [C/N], [C/O] and [N/O] directly with [O/H], also because these abundance ratios, when measured in the gas phase of other galaxies, are often given as a function of O, and because oxygen has a simpler nucleosynthetic origin (it is basically made by massive stars that die as SNe-II) in comparison with iron (both SNe-II and SNe-Ia). [C/N], [C/O] and [N/O] as a function of [O/H] for the 55 thin disc solar twins are also shown in Fig.~\ref{artigo_cn_co_no_feh_oh}. Regarding the corresponding linear fits, we have found that: {\bf (i)} [C/N] is anti-correlated with [O/H] (the negative slope has a 7.9$\sigma$ significance level); {\bf (ii)} [C/O] is anti-correlated with [O/H] (negative slope with an 8.1$\sigma$ significance level); and {\bf (iii)} [N/O] is correlated with [O/H] (positive slope with a 3.9$\sigma$ significance level). [C/N] and [C/O] present the same behaviour as a function of both [Fe/H] and [O/H], but while [N/O] is well correlated with [O/H], it only shows a tentative correlation against [Fe/H]. The Sun seems normal (within about 1$\sigma$) in the C/N, C/O and N/O ratios. Drawing an analogy between the $\alpha$-elements oxygen and magnesium (both products of the evolution of massive stars), we can qualitatively compare our relation [C/O]-[O/H] derived for local thin disc solar twins with the global relation [C/Mg]-[Mg/H] observed by the GALAH\footnote{GALactic Archaeology with HERMES} optical spectroscopic survey for two distinct Galactic disc stellar populations comprising 12,381 out of 70,924 GALAH stars \citep{Griffithetal2019}.
Their two stellar groups are denoted high-$\alpha$/low-Ia and low-$\alpha$/high-Ia (where Ia means SN-Ia), based on cuts in [Mg/Fe] in the [Mg/Fe] vs. [Fe/H] diagram (see their Figure 2), corresponding, respectively, to thick and thin disc stars as chemically identified in other works (e.g. \citet{Franchinietal2020}). \citet{Griffithetal2019} found that [C/Mg] is anti-correlated with [Mg/H] for the low-$\alpha$ group (thin disc members), although no fit is provided. [C/Mg] versus [Mg/H] qualitatively shows the same behaviour as our [C/O]-[O/H] fit, albeit their relation seems steeper. By {\lq\lq}cross-correlating{\rq\rq} our observed relations [C/O] and [N/O] versus [O/H] with the predictions of the GCE models by \citet{Romano2019}, we have found that there is no agreement in the case of the [C/O]-[O/H] relation, which we observe as an anti-correlation, in contrast to the positive correlation predicted by some of the \citet{Romano2019} models covering the chemical composition of solar twins ([C/O]\,=\,[O/H]\,=\,0.0($\pm$\,0.15)\,dex). On the other hand, there is a qualitative agreement of the observed relation [N/O]-[O/H] for our solar twins sample with some of the \citet{Romano2019} models, showing that [N/O] is positively correlated with [O/H] around [N/O]\,=\,[O/H]\,=\,0.0($\pm$\,0.13)\,dex. The MWG-07 model of \citet{Romano2019}, which roughly and simultaneously reproduces both observed [C,N/Fe]-[Fe/H] relations (Subsection\,4.1), is not among the models that show qualitative agreement with the observed [N/O]-[O/H] relation. However, one of them is coincidentally the same model that reproduces our observed relation $^{12}$C/$^{13}$C-age, which is named MWG-02 by \citet{Romano2019} and is equivalent to Model\,1 in \citet{Romano2017}.

\begin{figure*}
\includegraphics[scale=0.315]{Fig_8_cn_co_no_vs_feh_vertical.pdf}
\includegraphics[scale=0.315]{Fig_8_cn_co_no_vs_oh_vertical.pdf}
\caption{ [C/N], [C/O] and [N/O] as a function of [Fe/H] and [O/H] for the 55 thin disc solar twins. The same description as in Fig.~\ref{cfe-nfe-feh-age} is adopted. The results of each linear fit are shown in Table~\ref{value-adj} and Table~\ref{value-adj-oh}. }
\label{artigo_cn_co_no_feh_oh}
\end{figure*}

\begin{figure}
\includegraphics[scale=0.400]{Fig_9_cn_co_no_vs_age_vertical.pdf}
\caption{ [C/N], [C/O] and [N/O] as a function of the isochrone age for the 55 thin disc solar twins. The same description as in Fig.~\ref{cfe-nfe-feh-age} is adopted. The results of each linear fit are shown in Table~\ref{value-adj}. }
\label{artigo_cn_co_no_age}
\end{figure}

\begin{table*}
\centering
\caption{ Slopes, intercepts and $rms$ of the global linear fits of [C/Fe], [N/Fe], [C/N], [C/O], [N/O] and $^{12}$C/$^{13}$C versus [Fe/H] and isochrone stellar age, derived for the 55 thin disc solar twins (first row of each abundance ratio). These parameters are also listed for the broken linear fits carried out only as a function of age, which split the sample into solar twins younger and older than the Sun (respectively the second and third rows of each abundance ratio).
}
\label{value-adj}
\begin{tabular}{lrrrrrr}
\hline
& \multicolumn{3}{c}{{[}Fe/H{]}} & \multicolumn{3}{c}{stellar age} \\ \hline
& slope & intercept & $rms$ & slope & intercept & $rms$ \\
& & (dex) & (dex) & (dex\,Gyr$^{-1}$) & (dex) & (dex) \\ \hline
{[}C/Fe{]} & -0.056$\pm$0.012 & -0.039$\pm$0.001 & 0.033 & 0.013$\pm$0.001 & -0.107$\pm$0.003 & 0.037 \\
& & & & 0.007$\pm$0.004 & -0.093$\pm$0.014 & 0.038 \\
& & & & 0.022$\pm$0.006 & -0.160$\pm$0.036 & 0.040 \\
{[}N/Fe{]} & 0.162$\pm$0.057 & -0.064$\pm$0.003 & 0.080 & 0.013$\pm$0.002 & -0.135$\pm$0.010 & 0.070 \\
& & & & 0.041$\pm$0.008 & -0.207$\pm$0.031 & 0.075 \\
& & & & -0.015$\pm$0.007 & 0.045$\pm$0.059 & 0.059 \\
{[}C/N{]} & -0.274$\pm$0.059 & 0.034$\pm$0.004 & 0.056 & -0.007$\pm$0.002 & 0.069$\pm$0.011 & 0.054 \\
& & & & -0.028$\pm$0.006 & 0.127$\pm$0.023 & 0.052 \\
& & & & 0.013$\pm$0.006 & -0.061$\pm$0.045 & 0.046 \\
{[}C/O{]} & -0.130$\pm$0.023 & -0.022$\pm$0.002 & 0.026 & -0.005$\pm$0.001 & 0.002$\pm$0.004 & 0.025 \\
& & & & -0.016$\pm$0.003 & 0.027$\pm$0.012 & 0.018 \\
& & & & 0.011$\pm$0.004 & -0.097$\pm$0.025 & 0.027 \\
{[}N/O{]} & 0.120$\pm$0.061 & -0.059$\pm$0.004 & 0.050 & 0.002$\pm$0.002 & -0.070$\pm$0.011 & 0.050 \\
& & & & 0.011$\pm$0.005 & -0.095$\pm$0.020 & 0.057 \\
& & & & -0.006$\pm$0.005 & -0.017$\pm$0.039 & 0.040 \\ \hline
& (dex$^{-1}$) & & & (Gyr$^{-1}$) & & \\ \hline
$^{12}$C/$^{13}$C & 56.5$\pm$7.2 & 85.3$\pm$0.5 & 4.7 & 0.614$\pm$0.250 & 81.3$\pm$1.4 & 6.3 \\
& & & & -0.218$\pm$0.893 & 83.9$\pm$3.4 & 6.2 \\
& & & & 1.248$\pm$0.709 & 77.2$\pm$6.2 & 6.3 \\ \hline
\end{tabular}
\end{table*}

\begin{table}
\centering
\caption{ Slopes, intercepts and $rms$ of the global linear fits of [C/N], [C/O] and [N/O] versus [O/H], derived for the 55 thin disc solar twins. }
\label{value-adj-oh}
\begin{tabular}{lrrr}
\hline
& \multicolumn{3}{c}{{[}O/H{]}} \\ \hline
& slope & intercept & $rms$ \\
& & (dex) & (dex) \\ \hline
{[}C/N{]} & -0.400$\pm$0.051 & 0.034$\pm$0.004 & 0.046 \\
{[}C/O{]} & -0.161$\pm$0.020 & -0.024$\pm$0.002 & 0.025 \\
{[}N/O{]} & 0.201$\pm$0.053 & -0.059$\pm$0.004 & 0.045 \\ \hline
\end{tabular}
\end{table}

After inspecting the global linear relations of [C/N], [C/O] and [N/O] as a function of the isochrone stellar age over the wide age range of the 55 thin disc solar twins (Fig.~\ref{artigo_cn_co_no_age}), we have found that: {\bf (i)} [C/N] is moderately anti-correlated with age (the relative error of the negative slope is about 29\,per\,cent, i.e. a 3.5$\sigma$ significance level); {\bf (ii)} [C/O] is anti-correlated with age (relative error of the negative slope of about 20\,per\,cent); and {\bf (iii)} there is no correlation between [N/O] and age (the relative error of the slope reaches 100\,per\,cent). Alternatively, we have also derived broken linear fits of [C/N], [C/O] and [N/O] by splitting the sample into solar twins younger and older than the Sun (see the parameters of the broken linear fits in Tab.~\ref{value-adj} and the plots in Fig.~\ref{artigo_cn_co_no_age}). The [C/N] and [C/O] ratios are found to be anti-correlated with the stellar age for the solar twins younger than the Sun (slightly more anti-correlated than in the global single fits), but both ratios show a positive trend with age for the older solar twins. Curiously, the break point of the [C/N]-age broken fit matches the solar ratio. [N/O], as shown by the broken fit for {\lq\lq}young{\rq\rq} and {\lq\lq}old{\rq\rq} solar twins, follows the same overall behaviour as the global single linear fit (i.e.
it is likely constant in time). The overall dispersion of every broken fit is comparable to the dispersion of the corresponding global single fit. Notice that the dependence of the [C/N] and [C/O] ratios on age (i.e. the temporal evolution of the thin disc) is enhanced for the {\lq\lq}young{\rq\rq} solar twins in comparison with the overall behaviour derived from the single linear fit (the slopes differ at the 2$\sigma$ level). The result for [C/O] corroborates the rise of [C/Mg] for thin disc young stars observed by \citet{Franchinietal2020}, suggesting two competing scenarios: {\bf (i)} a delayed contribution from low-mass stars to explain the carbon abundance in the Galactic thin disc in recent times \citep{Marigoetal2020}, and {\bf (ii)} an enhanced mass loss through stellar winds from metal-rich massive stars that produces an increased C yield at solar and super-solar metallicities. On one hand, the Sun, when compared against the fits versus age of the solar twins, seems about normal (within about 1$\sigma$) in the C/N, C/O and N/O ratios. On the other hand, the C/N ratio in the Sun is placed close to the bottom end of the solar twins distribution in all of the [C/N]-[Fe/H], [C/N]-[O/H] and [C/N]-age planes. The solar C/O ratio is found near the high end of the distribution for the mid-age solar twins only (in all of the [C/O]-[Fe/H], [C/O]-[O/H] and [C/O]-age distributions), which qualitatively agrees well with \citet{Bedell2018}, who analysed the same solar twins sample, as well as with \citet{Nissenetal2020} for a sample of solar analogues. Particularly, the N/O ratio in the Sun is found to be placed at the high end of the overall solar twins distribution (1.2\,$rms$ on average above the relations [N/O]-[Fe/H], [N/O]-[O/H] and [N/O]-age of the solar twins). Our derived relation [C/O]-age for thin disc solar twins can finally be directly compared with the relations [C/Mg] vs. age and C/O number ratio vs. age by \citet{NissenGustafsson2018}, who compiled data from a set of different samples of solar twins: \citet{Nissen2015}, \citet{Nissenetal2017} and \citet{Bedell2018} (the last of these analysed the same sample as the current work, but performed its own C measurements, as explained before). While the [C/Mg]-age relation of \citet{NissenGustafsson2018} has a large data dispersion, making a temporal analysis hard to assess, their C/O ratio appears to decrease toward solar twins younger than the Sun, in contrast to what we have observed. However, the variation of the C/O ratio at a fixed age is comparable with the abundance dispersion measured by us. Taking into account the global behaviour of these abundance ratios over the whole age scale, the variations of [C/N] and [C/O] are in fact both rather small along the evolution of the Galactic thin disc in the solar neighbourhood (only 10-17\,per\,cent), and there is no variation at all for [N/O]. [C/N] in the ISM would have increased from around the solar value 8.6\,Gyr ago up to +0.07\,dex now, whilst [C/O] would have increased from -0.04\,dex 8.6\,Gyr ago up to the solar ratio now. The absence of evolution of the N/O ratio in both time and [Fe/H] in solar twins suggests uniformity in the nitrogen-oxygen budget for potential giant planet formation around these stars (note that C/O has, or would have, small variations versus age and [Fe/H] respectively, as also derived by \citet{Bedell2018} and \citet{Nissen2015}).
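The broken fits and the 2$\sigma$ slope comparison mentioned above can be reproduced with two independent least-squares fits on either side of the solar age. The sketch below is an illustration only, assuming hypothetical input arrays and a split at an assumed solar age of 4.6\,Gyr.

\begin{verbatim}
import numpy as np

SOLAR_AGE = 4.6  # Gyr, assumed split between "young" and "old" twins

# Hypothetical arrays for the 55 twins (placeholders, not published files).
age = np.loadtxt("solar_twins_age.dat")  # isochrone ages in Gyr
c_n = np.loadtxt("solar_twins_cn.dat")   # [C/N] in dex

def fit_with_error(x, y):
    """OLS linear fit returning slope, slope error and residual rms."""
    coeffs, cov = np.polyfit(x, y, deg=1, cov=True)
    rms = np.sqrt(np.mean((y - np.polyval(coeffs, x))**2))
    return coeffs[0], np.sqrt(cov[0, 0]), rms

young = age < SOLAR_AGE
s_young, e_young, _ = fit_with_error(age[young], c_n[young])
s_old, e_old, _ = fit_with_error(age[~young], c_n[~young])
s_all, e_all, _ = fit_with_error(age, c_n)

# Difference between the "young" and global slopes in sigma units,
# with the two slope errors added in quadrature.
n_sigma = abs(s_young - s_all) / np.hypot(e_young, e_all)
print(f"young {s_young:+.3f}+/-{e_young:.3f} | "
      f"global {s_all:+.3f}+/-{e_all:.3f} | diff = {n_sigma:.1f} sigma")
\end{verbatim}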
We are now able to state that, along the evolution of the Galactic local disc, C seems to have been stochastically accumulated in the ISM relative to N and O, since the C/N and C/O abundance ratios increase with time, although they decrease with both metallicity indicators (the Fe and O abundances) in the limited metallicity coverage of solar twins. As expected, C and N are produced and ejected by low-mass/intermediate-mass (AGB phase) plus massive stars (SN-II), and O basically by massive stars only. Even though the N/O ratio is kept constant in time, N does not necessarily follow O, especially because [N/O] is found to be correlated with [O/H]. Therefore, nitrogen production seems to have a relatively larger contribution from massive stars than that of carbon, or, in other words, carbon would be relatively more synthesized by low-mass/intermediate-mass stars than nitrogen (as suggested by \citet{Franchinietal2020} and \citet{Marigoetal2020}). Alternatively, however, an enhanced mass loss from metal-rich massive stars could produce an increased C yield at solar and super-solar metallicities.

\section{Summary and Conclusions}

We have measured C, $^{12}$C/$^{13}$C and N in 63 solar twins through a self-consistent and homogeneous procedure based on a high-precision spectral synthesis of molecular lines/features in the blue region (specifically $\lambda\lambda$4170-4400\,{\AA}); 7 of them are old alpha-enhanced solar twins and 1 shows an excess of $s$-process elements, and these were excluded from further analysis, leaving 55 solar twins for studying correlations with age and metallicity in the solar neighbourhood. All 55 stars have thin disc kinematics \citep{Ramirezetal2012}. The carbon abundance has been derived from $^{12}$CH\,A-X lines and the $^{12}$C/$^{13}$C ratio from $^{13}$CH-$^{12}$CH features of the same electronic system. The nitrogen abundance has been extracted from $^{12}$C$^{14}$N B-X lines (CN Violet System), taking into account the measured carbon abundance and its main isotopic ratio. We provide comprehensive lists of these molecular lines and features: twelve CH\,A-X lines, six $^{13}$CH-$^{12}$CH\,A-X features and five CN\,B-X lines. The average errors in the C and N abundances are around 0.006 and 0.03\,dex respectively (very similar in both [X/Fe] and [X/H] scales). Specifically, the error in $^{12}$C/$^{13}$C ranges from 1.9 up to 10.5 (about 4.3 on average). We conclude, as expected, that the measured abundances of C and N, as well as the isotopic ratio $^{12}$C/$^{13}$C, do in fact represent the pristine chemical composition of every solar twin of our sample, because we have not found an overall connection among the photospheric $^{12}$C/$^{13}$C, [C/Fe], [N/Fe] and [C/N] that could be indicative of some internal mixing process associated with the CNO cycles (for instance, $^{12}$C/$^{13}$C is not correlated with [$^{12}$C/$^{14}$N]; it is in fact anti-correlated). We confirm that the Sun is slightly enhanced in the volatiles C and N relative to Fe in comparison with solar twins, as \citet{Bedell2018}, \citet{Nissen2015}, and \citet{Melendez2009} have already found for volatiles in general. On the one hand, the Sun seems to be normal in the $^{12}$C/$^{13}$C, C/O, C/N and N/O ratios within the errors (about 1$\sigma$) in comparison with solar twins.
Notice that \citet{Melendez2009} found comparable (within the errors) C/Fe, N/Fe and O/Fe ratios in the Sun relative to the solar twins, probably because the highly volatile elements C, N and O have similarly low condensation temperatures. On the other hand, we have particularly found that the N/O ratio in the Sun is placed at the high end of the overall solar twins distribution, the solar C/N ratio close to the bottom of the distribution, and the solar C/O ratio near the high end of the distribution for the mid-age solar twins only. The linear fits of [C/Fe] and [N/Fe], as a function of [Fe/H] and isochrone stellar age for the analysed solar twins sample, lie below the solar value over the whole range of both [Fe/H] and age. Whilst [C/Fe] decreases with [Fe/H] in the restricted metallicity interval of solar twins, we have found a positive trend between [N/Fe] and [Fe/H]. On the other hand, both [C/Fe] and [N/Fe] increase with age (or, in other words, both [C/Fe] and [N/Fe] decrease with time, from around the solar ratio value 8.6\,Gyr ago down to nearly -0.1\,dex now). Furthermore, the Sun is placed around 1$\sigma$ on average above the linear fits [X/Fe]-[Fe/H] and [X/Fe]-age. This indicates that the Sun is slightly enhanced in both C and N relative to Fe in comparison with solar twins. For the relations [C/Fe]-[Fe/H] and [N/Fe]-[Fe/H], our results agree with \citet{DaSilva2015, Suarez-Andres2016, Suarez-Andres2017}, who investigated solar-type dwarfs. For the [C/Fe]-[Fe/H] relation, our results also agree with \citet{Nissen2015}, who analysed solar twins like the ones studied in this work. Our under-solar anti-correlation of [C/Fe] vs. [Fe/H] qualitatively agrees with the relation obtained by \citet{Franchinietal2020} for thin disc dwarfs (especially in the cases of their kinematical and orbital classifications). Regarding the [C/Fe]-age relation, our results agree with \citet{DaSilva2015, Bedell2018} (only the latter work done for solar twins). Coincidentally, \citet{Nissenetal2020} found that [C/Fe] tends to decrease from about +0.1\,dex 10\,Gyr ago down to around -0.1\,dex now, such that the carbon-to-iron ratio would be very close to the solar value nearly 8.6\,Gyr ago, as we observe through both kinds of linear fits (single and broken). The overall increase of [C/Fe] as a function of age found by us is compatible with the results of \citet{Franchinietal2020}, which cover ages from 2 up to 12\,Gyr. However, for the [N/Fe]-age relation, our results agree with neither \citet{DaSilva2015} nor \citet{Suarez-Andres2016}, perhaps because these studies were not restricted to solar twins. Our solar twins sample has the advantage, relative to solar-type dwarfs, that better precision can be obtained, because systematic errors are largely cancelled out in a differential analysis, resulting in improved stellar parameters and chemical abundances, and also in more precise stellar ages relative to the Sun, which lies near the middle of the age distribution of solar twins. Comparisons against predictions of the inhomogeneous chemical evolution models for the solar neighbourhood by \citet{Sahijpal2013} show that our results agree qualitatively with all of their models for the relations [C/Fe]-[Fe/H] and [N/Fe]-[Fe/H] around [Fe/H]\,=\,0, because they predict [C/Fe]-[Fe/H] with a negative slope and [N/Fe]-[Fe/H] with a positive slope around the solar metallicity.
These results suggest that C and N possibly have different stochastic nucleosynthetic productions during the evolution of the Galaxy's thin disc. We have measured the $^{12}$C/$^{13}$C isotopic ratio for solar twins for the first time, focusing the evolutionary analysis on those 55 thin disc solar twins. Our measurements are certainly useful as important constraints for chemical evolution models of the solar neighbourhood, as \citet{Romano2017} pointed out. We predict 81.3$\pm$1.4 for the current C isotopic ratio in the ISM of the solar neighbourhood, in agreement within 1$\sigma$ with the value of 68$\pm$15 observed by \citet{Milam2005}. No solar sibling is found in our sample following the criteria of ages, chemical compositions ($^{12}$C/$^{13}$C included) and kinematics similar to the Sun's \citep{Ramirez2014b, Adibekyan2018}. We have found that $^{12}$C/$^{13}$C is positively correlated with [Fe/H] in the small metallicity range of solar twins. This result should be tested against predictions of robust GCE models for the solar vicinity. Regarding the $^{12}$C/$^{13}$C-age relation specifically, this isotopic ratio seems to have decreased slightly in time along the evolution of the nearby Galaxy's thin disc: we have found a possible positive trend between $^{12}$C/$^{13}$C and stellar age. This is in qualitative agreement with the predictions of a couple of GCE models \citep{Romano2017, Romano2019}. We have obtained linear anti-correlations for [C/N] and [C/O] as a function of [Fe/H] and [O/H]. For [N/O] specifically, we have found no correlation with [Fe/H], but a positive correlation with [O/H]. We have derived linear anti-correlations for [C/N] and [C/O] as a function of age (i.e. both abundance ratios increase with time), and again there is no correlation for [N/O]. [C/O] seems to be more anti-correlated with [Fe/H] and age than [C/N]. In fact, the variations of [C/N] and [C/O] are both rather small along the evolution of the Galactic thin disc in the solar neighbourhood. However, we have found greater rates of increase in time of both the [C/N] and [C/O] ratios for the solar twins younger than the Sun in comparison with the overall single fit. We have found one common GCE model by \citet{Romano2017} and \citet{Romano2019} (under different denominations) that roughly, qualitatively and simultaneously predicts the relations [C/Fe]-[Fe/H], [N/Fe]-[Fe/H], $^{12}$C/$^{13}$C-age and [N/O]-[O/H] derived in the present work for a sample of thin disc solar twins. Surprisingly, the GCE models follow an opposite trend between $^{12}$C/$^{13}$C and [Fe/H] (we have obtained a direct correlation), something that is worth exploring in further GCE models. Moreover, GCE models must also confirm or refute the observational trend between $^{12}$C/$^{13}$C and stellar age derived by us for solar twins. We can conclude that C does not exactly follow either N or O as a function of [Fe/H], [O/H], or time. We can also state that carbon and nitrogen have likely had different nucleosynthetic origins along the Galaxy's thin disc evolution. Nitrogen production seems to have a relatively larger contribution from massive stars than that of carbon, or, in other words, carbon would be relatively more synthesized by low-mass/intermediate-mass stars than nitrogen (as suggested by \citet{Franchinietal2020} and \citet{Marigoetal2020}). Particularly, we have found an increase in time of both the [C/N] and [C/O] ratios for the solar twins younger than the Sun.
However, an alternative explanation could come from an enhanced mass loss through stellar winds from metal-rich massive stars that would increase the C yield at solar and super-solar metallicities. The absence of evolution of [N/O] in both [Fe/H] and time in solar twins suggests uniformity in the nitrogen-oxygen budget for potential giant planet formation around these stars, although [N/O] is seen to be correlated with [O/H]. Our results can contribute to the {\lq}CNO composition -- planet formation{\rq} connection (mainly linked to the formation of icy planetesimals, watery super-earths and/or giant planets; \citealt{Marboeufetal2014}). Our results for C, $^{12}$C/$^{13}$C and N in a sample of nearby solar twins, with well-determined parameters and spanning a wide range in age, are certainly excellent constraints for understanding the chemical evolution of the thin disc in the solar vicinity, as well as for studying the nucleosynthetic origins of C, N, O and Fe along the Galaxy's evolution.

\section*{Acknowledgements}

This study was financed in part by the Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'\i vel Superior - Brasil (CAPES) - Finance Code 001 (RBB and ADCM acknowledge this support). ADCM also thanks Conselho Nacional de Desenvolvimento Cient\'\i fico e Tecnol\'ogico (CNPq) for the research productivity grant (309562/2015-5). JM thanks FAPESP (2018/04055-8) and CNPq for the research productivity grant (306730/2019-7). LS acknowledges financial support from the Australian Research Council (Discovery Project 170100521). We are also grateful to the anonymous referee for his/her critical revision, which improved this work.

\section*{Data availability}

The data underlying this article are available in the article and in its online supplementary material.

\bibliographystyle{mnras}
\section{Introduction} \label{introduction} Traditional community consultation methods, such as town halls, public forums, and workshops, are the \textit{modus operandi} for public engagement~\cite{mahyar2019deluge, ehsaei2015successful}. For fair and impartial civic decision-making, the inclusivity of community members' feedback is paramount~\cite{gordon2016civic, mahyar2019deluge, torres2007citizen}. However, traditional methods rarely provide opportunities for inclusive public participation~\cite{Mansbridge2006NormsStudy, Levine2005FutureDeliberation, bryson2013designing}. For instance, reticent meeting attendees struggle to speak up and articulate their viewpoints due to fear of confronting outspoken and dominant individuals~\cite{tracy2007contentious, brabham2009crowdsourcing}. This lack of inclusivity in traditional face-to-face meetings results in an uneven representation of community members and often fails to capture broader perspectives of attendees~\cite{innes2004reframing}. As a result, these methods often fall short in achieving the desired exchange of perspectives between government officials and the community~\cite{salgado2014so, Karpowitz2005DisagreementDeliberation, sanders1997against}. Furthermore, meeting organizers grapple with simultaneously facilitating often contentious discussions and taking meeting notes to capture attendees' broader perspectives~\cite{innes2004reframing, Mansbridge2006NormsStudy}. These bottlenecks further obstruct inclusivity and may lead to biased decisions that can significantly impact people's lives~\cite{Mansbridge2006NormsStudy, mahyar2019deluge}. Advancements in computer-mediated technology can address this predicament by creating a communication channel between these entities. Bryan~\cite{bryan2010real} and Gastil~\cite{gastil2008political} investigated the state of town halls\footnote{In this work, we use the term \emph{Town Hall} to refer to various community consultation approaches where community members convene to meet and discuss civic issues with government officials or decision-makers. Town Halls can take many forms, but the main goal is to establish a communication channel between the officials and the community, to inform the community about civic issues, and often to receive the community's feedback~\cite{bryan2010real}.} and demonstrated a steady decline in civic participation due to the growing disconnect between local government and the community. To reengage disconnected, reticent, or disenfranchised community members, researchers in HCI and digital civics\footnote{Digital Civics is an emerging interdisciplinary area that explores novel ways to utilize technology for promoting broader public engagement and participation in the design and delivery of civic services~\cite{kennethdd, Vlachokyriakos, Olivier}.} have offered novel strategies and technological interventions to increase engagement~\cite{mahyar2019deluge, kennethdd, Olivier, Vlachokyriakos, gordon2016civic}. Researchers in this field have proposed several online technologies that made wider participation possible for community members (e.g.,~\cite{mahyar2018communitycrit, decidim:2019, polis:2017, democracyos:2019}).
Despite the introduction of such online platforms, government officials and decision-makers predominantly favor traditional face-to-face meetings to create relationships, foster discourse, and conduct follow-up conversations with community members~\cite{mahyar2019deluge, asad2017creating, corbett2018going} to understand their views and aspirations~\cite{ehsaei2015successful, von2005democratizing, innes1999consensus}. However, employing technology to capture attendees' feedback---in particular, silent attendees' feedback---in face-to-face meetings remains largely unexplored. Commonly, feedback in meetings is gathered by voting or polling attendees~\cite{murphy2009promotion, boulianne2018citizen, lukensmeyer21} or by taking notes during the meeting~\cite{lukensmeyer2002taking, manuel2017participatory, bryan2010real}. However, voting often restricts attendees to only agreeing or disagreeing, which often harms the richness of the captured feedback rather than promoting inclusivity~\cite{mahyar2019deluge, bergstrom2009vote}. To help alleviate this problem, prior work mostly focused on automatic speech recognition~\cite{voicea, tur2010calo} and interactive annotations~\cite{banerjee2006smartnotes, kalnikaite2012markup} to help organizers take notes for creating reports. However, these methods rarely preserve the discussion context or improve the inclusivity of attendees' feedback. To better understand the needs of attendees and organizers in community consultations and explore how technology can help to address these issues, we conducted a formative study by attending three town halls in a college town in the United States. We surveyed a total of 66 attendees to inquire about their ability to voice their opinions during town halls. 17\% of the attendees (11 responses) expressed that despite being physically present, they could not voice their opinions. They attributed this to factors such as intimidation from other participants, lack of confidence, and fear of interruption. Moreover, we surveyed 20 organizers to identify what could help them make better use of town halls and ensure that public feedback is prioritized in the reports they author. We found that the organizers often relied on their memories or the notes taken during the meeting to generate reports. However, they struggled to accurately capture or remember the attendees' feedback and important details after the meeting. In effect, these incomplete memories and meeting notes could potentially result in reports that are not comprehensive. These findings indicated a requirement for better capturing attendees' feedback and preserving meeting discussion details to support organizers in authoring more comprehensive reports. Based on our formative study, we designed and developed CommunityClick, a system for town halls that captures more inclusive feedback from attendees during the meeting and enables organizers to author more comprehensive meeting reports. We modified iClickers~\cite{iclicker} to allow attendees to silently and anonymously provide feedback and to enable organizers to tag the meeting discussion at any time during the meeting. We chose to use iClickers due to their familiarity and ease of use~\cite{morse2010clicker, herreid2006clicker, whitehead2010usingiClicker} as an audience response system (ARS)~\cite{nickerson1993real}. Furthermore, we augmented the automatically-generated meeting transcript by synchronizing it with attendees' feedback and organizers' tags.
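Concretely, this augmentation can be pictured as a timestamp join between automatically transcribed segments and timestamped iClicker events. The sketch below is only an illustration of this idea; the data structures and field names are our assumptions, not CommunityClick's actual schema.

\begin{verbatim}
from bisect import bisect_right

# Illustrative records: transcript segments from ASR and iClicker events.
segments = [
    {"start": 0.0, "end": 14.2, "text": "Let's discuss the first proposal."},
    {"start": 14.2, "end": 31.8, "text": "I worry about the renovation cost."},
]
events = [
    {"time": 16.4, "source": "attendee", "label": "agree"},
    {"time": 20.1, "source": "organizer", "label": "main issue"},
]

def augment(segments, events):
    """Attach each timestamped event to the transcript segment it falls in."""
    starts = [s["start"] for s in segments]
    for s in segments:
        s["feedback"], s["tags"] = [], []
    for e in events:
        i = bisect_right(starts, e["time"]) - 1  # segment containing the event
        key = "feedback" if e["source"] == "attendee" else "tags"
        segments[i][key].append(e["label"])
    return segments

for s in augment(segments, events):
    print(s["text"], s["tags"], s["feedback"])
\end{verbatim}

Under this reading, attendee presses become per-segment feedback while organizer presses become tags, which is what the interface later exposes for exploration.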
We also provided an interactive interface where organizers can explore, analyze, and utilize the augmented meeting transcript in a data-driven way, along with a novel feedback-weighted summary of the meeting discussion, to author meeting reports at their convenience. To evaluate the efficacy of our approach, we conducted a field experiment in the wild, followed by eight semi-structured interviews with experienced organizers. Our results demonstrate the efficacy of CommunityClick in giving voice to reticent participants to increase their involvement in town halls, capturing attendees' feedback, and enabling organizers to compile more inclusive, comprehensive, and accurate meeting reports in a way that lends credibility to the report creation process. Our key contributions in this work are as follows: 1) using a communitysourcing technology to enable attendees to share their feedback at any time during the meeting by modifying iClickers as a real-time response mechanism, 2) augmenting meeting transcripts by combining organizers' tags and attendees' feedback to preserve discussion context and feedback from a broader range of attendees, 3) applying a novel feedback-weighted summarization method to generate meeting summaries that prioritize the community's feedback, 4) developing an interface for organizers to explore and utilize the augmented meeting transcript to author more comprehensive reports, and 5) insights from a field experiment in the wild that demonstrate how technology can be effective in capturing attendees' voices and authoring more inclusive reports, along with future directions to enhance our approach. \section{Background} \label{background} In this section, we describe the current challenges that inhibit inclusivity in town hall meetings. We also discuss existing technologies designed to promote engagement in various meeting scenarios and to help analyze and utilize meeting data. \subsection{Current Challenges in Town Halls} Prior investigations by Bryan~\cite{bryan2010real} and Gastil~\cite{gastil2008political} showed a steady decline in civic participation in town halls due to the growing disconnect between local government and community members and the decline in social capital~\cite{putnam2000bowling, costa2003understanding, rahn1998social}. Despite the introduction of online methods to increase public engagement in the last decade~\cite{coleman2001bowling, mahyar2018communitycrit, decidim:2019, polis:2017, democracyos:2019, kim2016budgetmap}, government officials continue to prefer face-to-face meetings to engage the community in the decision-making process~\cite{mahyar2019deluge, button1999deliberative, ehsaei2015successful}. They believe face-to-face meetings facilitate two-way communication between decision-makers and community members that can foster discourse and help them understand the views and aspirations of the community members~\cite{button1999deliberative, ehsaei2015successful, von2005democratizing, mahyar2019deluge, innes1999consensus}. However, constraints such as fixed physical locations for co-located meetings and the scarcity of time and resources limit the efficacy of face-to-face processes and inhibit the ability of officials to make proper use of town halls~\cite{Levine2005FutureDeliberation, Mansbridge2006NormsStudy, innes2004reframing, seltzer2013citizen, baker2007achieving}.
Such constraints might repel or alienate citizens for whom traveling to or dedicating time for town halls may not be a viable option for reasons of physical access, economics, or intrinsic interest~\cite{misra2014crowdsourcing, irvin2004citizen, seltzer2013citizen, convertino2015large}. While predictors such as education, income, and intrinsic interest help gauge civic engagement~\cite{davies2014civic}, social dynamics, such as shyness and a tendency to avoid confrontation with dominant personalities, can also hinder opinion sharing in town halls by favoring privileged individuals who are comfortable or trained to take part in contentious public discussions~\cite{brabham2009crowdsourcing, tracy2007contentious}. Specifically, people with training in analytical and rhetorical reasoning often find themselves at an advantage when discussing complex and critical civic issues~\cite{sanders1997against}. As a result, town halls inadvertently cater to a small number of privileged individuals, and silent participants often become disengaged despite physically attending the meetings~\cite{gordon2011immersive}. Due to the lack of inclusivity, the outcome of such meetings often tends to feel unjust and opaque to the general public~\cite{fung2006varieties, corbette2019trust}. \subsection{Technological Interventions to Increase Engagement in Town Halls} To increase broader civic participation, researchers in HCI have proposed both online~\cite{democracyos:2019, mahyar2018communitycrit, decidim:2019, polis:2017, kim2016budgetmap} and face-to-face~\cite{bergstrom2007conversation, keske2010consulting, lukensmeyer2002taking, taylor2012empowering} technological interventions that use the communitysourcing\footnote{Communitysourcing leverages the specific knowledge of a targeted community. It follows the crowdsourcing model, but takes a more direct approach in harnessing collective ideas, input, and feedback from targeted community members to find solutions to complex problems such as civic decision-making~\cite{brabham2010crowdsourcing, heimerl2012communitysourcing}.} approach. For instance, to increase engagement in town halls, some researchers have experimented with audience response systems (ARS)~\cite{kay2009examining, keske2010consulting, murphy2009promotion, boulianne2018citizen}. Murphy used such systems to promote democracy and community partnerships~\cite{murphy2009promotion}. Similarly, Boulianne et al. deployed clicker devices in contentious public discussions about climate change to gauge public opinion~\cite{boulianne2018citizen}. Bergstrom et al. used a single-button device with which attendees anonymously voted (agree/disagree) on issues during the meeting. They showed that back-channel voting helped underrepresented users get more involved in the meeting~\cite{bergstrom2009vote}. The America\textit{Speaks} public engagement platform, 21st Century Town Meeting\textregistered, also used audience response systems to collect feedback and perform straw polls and votes during town halls~\cite{lukensmeyer21}. However, in these works, the audience response systems were used either for binary voting or polling~\cite{bergstrom2009vote, lukensmeyer21}, or to receive feedback on specific questions that expected attendees' feedback on a Likert-scale-like spectrum~\cite{likert1932technique}. These restrictions limit when and how meeting attendees can share their feedback.
Audience response systems have seen widespread success as a lightweight tool to engage participants, promote discussions, and invoke critical thinking in the education domain~\cite{morse2010clicker, herreid2006clicker, addison2009usingiclicker, keller2007researchclicker, whitehead2010usingiClicker, mollborn2010meetingclicker}. As such, these devices have the potential to provide a communication channel for silent participants to express their opinions without the obligation to verbalize and risk confrontation. We build upon their success in the education domain by appropriating iClickers for the civic domain. We modify and utilize iClickers for town halls to create a mechanism that supports silent and real-time community feedback. HCI researchers have proposed research solutions to increase participation in face-to-face meetings, such as design charrettes or group meetings in general, by using various tools and methods~\cite{mahyar2016udcospace, hunter2011memtable, bergstrom2007conversation, bergstrom2009vote, Shi2017IdeaWallStimuli}. Some researchers used interactive tabletop and large-screen surfaces to engage attendees~\cite{mahyar2016udcospace, hunter2011memtable, Shi2017IdeaWallStimuli}. For example, UD Co-Spaces~\cite{mahyar2016udcospace} used a tabletop-centered multi-display environment for engaging the public in complex urban design processes. Memtable~\cite{hunter2011memtable} used a tabletop display to increase attendees' engagement by integrating annotations using a multi-user text editor on top of multimedia artifacts. IdeaWall visualized thematically grouped discussion contents in real-time on a large screen during the meeting to encourage attendees to discuss such topics~\cite{Shi2017IdeaWallStimuli}. However, large displays along with real-time visualizations might distract attendees from concentrating on meeting discussions~\cite{appleton2005gis}, which might lead to less contribution from the participants irrespective of how vocal they are~\cite{gordon2011immersive}. Furthermore, these innovative approaches might be overwhelming for meeting attendees, especially in the civic domain, due to the heterogeneity of participants with a wide spectrum of expertise and familiarity with technology~\cite{costa2003civic, tracy2007contentious, adams2004publicandemocratic}. It might also be impractical and financially infeasible to use expensive tabletops and large interactive displays for town halls organized in a majority of cities. \subsection{Technologies to Analyze and Utilize Meeting Data} Prior work showed that decision-makers often relied on meeting reports generated by organizers to inform public policies that had potential long-term impacts on the community~\cite{lukensmeyer2002taking}. Commonly, organizers generate these reports based on the outcome of voting or polling attendees~\cite{murphy2009promotion, boulianne2018citizen, lukensmeyer21}, and on notes taken during the meeting~\cite{lukensmeyer2002taking, manuel2017participatory, bryan2010real}. They often use manual note-taking techniques such as pen-and-paper or text editors~\cite{di1972listening}. However, simultaneously taking notes to capture attendees' feedback and facilitating the meeting to guide and encourage discussions is overwhelming and often leads to losing critical information and ideas~\cite{rowe2005typology, piolat2005cognitive}. Sometimes the meetings are audio-recorded and transcribed for review before writing the report.
However, such reviewing processes require significant manpower and labor~\cite{lukensmeyer21, lukensmeyer2002taking}. Furthermore, the audio recordings themselves do not capture the feedback of reticent participants who did not speak up. In general meeting scenarios, researchers have proposed improvements to manual note-taking for capturing meeting discussions~\cite{chiu2001liteminutes, kam2005livenotes, meetingKing, davis1998notepals}. For example, LiteMinutes used a web interface that allowed creating and distributing meeting contents~\cite{chiu2001liteminutes}. MeetingKing provided meeting agenda templates to help organizers create reports~\cite{meetingKing}. Similarly, LiveNotes facilitated note-taking using a shared whiteboard where users could write notes using a digital pen or a keyboard~\cite{kam2005livenotes, bennett1991optical}. Another group of researchers experimented with various tools and techniques to support facilitating online synchronous group discussions. For example, SolutionChat provides moderation support to group discussion facilitators by visualizing group discussion stages and featured opinions from participants and suggesting appropriate moderator responses~\cite{lee2020solutionchat}. Bot in the Bunch takes a different approach and proposes a chatbot agent that aims to enhance goal-oriented online group discussions by managing time, encouraging participants to contribute to the discussion, and summarizing the discussion~\cite{kim2020bot}. Similarly, Tilda synthesizes online chat conversations using structured summaries and enables annotation of chat conversations to improve recall~\cite{zhang2018tilda}. Closer to our approach, some researchers proposed different techniques for automatically capturing meeting discussions without significant manual intervention~\cite{tur2010calo, voicea, banerjee2006smartnotes, iCompassTech}. For example, CALO and Voicea are automated systems for annotating, transcribing, and analyzing multiparty meetings~\cite{tur2010calo, voicea}. These systems use automatic speech recognition~\cite{rabiner1993fundamentals} to transcribe the meeting audio and extract topics, question-answer pairs, and action items using natural language processing~\cite{hirschberg2015advances}. However, automatic approaches are often error-prone and susceptible to miscategorization of important discussion topics~\cite{mcgregor2017moretomeeting, chuang:2012}. To address this problem, some researchers suggested incorporating human intervention to control and compile reports without completely depending on automatically generated results~\cite{banerjee2006smartnotes, kalnikaite2012markup, iCompassTech}. SmartNotes~\cite{banerjee2006smartnotes} and commercial applications such as ICompassTech~\cite{iCompassTech} use this approach, where users can add and revise notes and topics. Although these methods used a combination of automatic and interactive techniques, they enabled the utilization of meeting discussions without capturing sufficient circumstantial input. The automatically generated transcript might record discussions, but it may not contain feedback shared by all attendees, especially silent ones. Furthermore, these tools are designed for small-scale group meetings and may not be practical for town halls in the civic domain, where attendee numbers and requirements can vary significantly~\cite{bryan2010real, lukensmeyer2002taking}.
The heterogeneity of attendees in town halls~\cite{costa2003civic, adams2004publicandemocratic}, the lack of inclusivity in sharing their opinions~\cite{Karpowitz2005DisagreementDeliberation, sanders1997against}, the deficient methods to record meeting discussions~\cite{lukensmeyer21}, and the limited financial and design pragmatism of existing technologies~\cite{mahyar2016udcospace, Shi2017IdeaWallStimuli} necessitate closer investigation and innovation in designing technology that addresses both meeting attendees' and organizers' challenges regarding inclusivity in town halls. \section{Formative Study: School Building Town Halls} \label{formative_study} To inform our work, we wanted to understand the perspectives and needs of organizers and attendees who participate in community consultations. To this end, we attended several town halls, including multiple public engagement sessions in a college town in the United States (U.S.). The agenda for these town halls included discussions regarding new public school buildings, where the town authorities wanted the community's feedback on two new proposals that involved millions of dollars worth of renovation or reconstruction. The town authorities made careful considerations to ensure that the public had access to all the information about the proposals by arranging six community-wide engagement sessions organized by professional facilitators. We joined three of these six sessions over a span of three months. We decided to investigate this particular case due to the unique characteristics of the town, where there is relatively high citizen engagement in discussions around education and public schools. We also wanted to investigate how people discuss potentially contentious proposals in town halls that pit two contrasting ideas for future public school buildings against each other. \subsection{Participants and Procedures} We approached both the meeting attendees, who were community members, and the meeting organizers, who included facilitators and town officials. The town halls began with the organizers presenting their proposal to attendees. Afterward, attendees engaged in discussions and shared their ideas, opinions, and concerns about the proposals. The facilitators were responsible for guiding the discussion and collecting the attendees' feedback. They played a neutral role in the discussion and did not influence the attendees' opinions in any way. To understand the attendees' perspectives, we conducted surveys after each town hall. The attendees were all residents of the town and attended the meeting of their own volition. We surveyed a total of 66 attendees. We refer to the attendees from our formative study as \textbf{FA} and organizers from our formative study as \textbf{FO}. In the survey, we asked them open-ended questions regarding their motivation to join the town hall meetings, their experiences in these meetings, and their thoughts on what makes these face-to-face meetings successful. We also asked about their ability to voice their opinions in town halls and about their familiarity with audience response technologies.
Furthermore, to gain an understanding of what type of semantic tags they wanted to provide during a town hall, we compiled a list of tags based on prior work on meeting discussions or public engagements that focused on characterizing effective participation during synchronous communication between organizers and meeting attendees in both online and face-to-face settings~\cite{zhang2018tilda, nathan2012incase, derek2010collaborative, zhang2017characterizing, im2018deliberation, murray2006incorporating}. The list of tags is presented in Fig.~\ref{fig:survey}(B). We provided this list to the attendees and asked them to rate which of these tags would help them to express their thoughts in town halls. They rated each tag on a 5-point Likert scale~\cite{likert1932technique}. During these town halls, we made contact with the organizers and explained our project goals. From these initial contacts, we used the snowball method~\cite{goodman1961snowball} to reach other organizers working across the U.S. to learn more about their practices regarding town hall meeting organization, facilitation, and report creation. We conducted surveys with a total of 20 organizers with an average experience of 10.5 years (min 1 year, max 35 years) in conducting town halls across the U.S. Our survey participants consisted of town administrators, town clerks, senior planners, directors, professional facilitators, and chairs of different town committees. We asked them open-ended questions around their meeting data recording practices, including what data they prioritize when recording, the challenges they face while simultaneously organizing the meeting and recording meeting notes, and their post-meeting data analysis and meeting report generation processes. We also asked about their choice of tools and technologies for recording meeting notes and for creating reports, and what they think are the elements that constitute a good report. To identify representative tags that could help organizers track and categorize meeting discussion, we compiled another list of tags based on prior work~\cite{zhang2018tilda, nathan2012incase, derek2010collaborative, im2018deliberation, murray2006incorporating}. We used a different list from the one we provided to attendees (Fig.~\ref{fig:survey}(A)) because prior work suggests that organizers and attendees need different tags to categorize meeting discussion based on their perspectives. We asked organizers to rate the tags in order of importance on a 5-point Likert scale~\cite{likert1932technique}. The survey questions and lists of tags are provided as supplementary materials. \subsection{Findings} \begin{figure} \includegraphics[width=1\textwidth]{Figures/surveyresults.pdf} \caption{This figure presents attendees' and organizers' perceived importance of two sets of tags that we compiled based on prior research: (A) shows the ratings of 20 organizers on a list of 10 tags for organizers; (B) shows the ratings of 66 community members on a list of 9 tags for attendees' feedback. } \label{fig:survey} \end{figure} Here, we report the findings from our surveys with both attendees and organizers. The 66 attendees we surveyed were highly motivated to attend town halls, and the majority of them (64\%, 42 responses) considered attending such meetings to provide their feedback on civic issues as their \emph{civic duty}. Most of the attendees (88\%, 58 responses) attended two or more town halls every year.
Regarding their familiarity with technology, every meeting attendee (100\%, 66 responses) mentioned having a computer, smartphone, and internet access, but 61\% of them (40 responses) had never used an audience response system before. We also found that 17\% (11 responses) of meeting attendees felt they were not able to share their feedback during these meetings, and 23\% (15 responses) were not satisfied with the way town halls were organized to discuss critical issues. It was surprising for us to find that 17\% of people from a homogeneous, relatively wealthy, and educated community in a college town believed that they could not voice their opinions during town halls. Despite their unfamiliarity with audience response systems, the majority of the meeting attendees (87\%, 57 responses) mentioned that they were willing to use such devices in town halls to share their feedback. In response to the question regarding what makes face-to-face town hall meetings successful, a group of attendees mentioned that the success of town hall meetings hinges upon the attendees' ability to openly communicate their opinions and hear others' opinions. One attendee (FA-17) mentioned, \blockquote{\emph{Town halls need to provide opportunities for all people to be heard, having diversity of voices and opinions, and be present in the discussion.}} Another attendee (FA-36) mentioned, \blockquote{\emph{Being face-to-face means asking questions, listening to others' questions and ideas, and to be able to see their emotions, nuances and expressions.}} Several attendees mentioned facing challenges around sharing their opinions due to being dominated in the discussion. One such attendee (FA-23) mentioned, \blockquote{\emph{Town halls give us the chance to talk things out, but often this doesn't happen and people get shut down.}} Another attendee (FA-11) mentioned, \blockquote{\emph{You need to have ground rules so that we stay on track and no one dominates the discussion.}} Some attendees considered that facilitators play an important role in the success of town halls and that skilled and well-equipped facilitators can make a difference. One attendee (FA-57) emphasized the importance of skilled facilitators, saying, \blockquote{\emph{Professional facilitators understand the context of our community, and the history of tension regarding the issues. They move it along and listen with open minds.}} Another attendee (FA-47) mentioned, \blockquote{\emph{Skilled facilitators make sure all voices are heard, organize the discussion around goals and keep everyone focused on tasks.}} When rating the tags for sharing opinions during meetings (Fig.~\ref{fig:survey}(B)), the majority (90\%) of the attendees considered \emph{Agree} and \emph{Disagree} to be the most important tags. 75\% or more attendees considered \emph{Unsure}, \emph{Important}, \emph{Confused}, and \emph{Neutral} to be important. The majority of the organizers (17 responses) mentioned that manpower and budget constraints often forced them to forgo the appointment of designated note-takers, and thus they had to shuffle between organizing and note-taking during the meetings. Some organizers also mentioned how context-switching between organizing the meeting and taking notes often led to missing critical evidence or information (8 responses).
They employed a variety of methods for recording meeting data, depending on their convenience and their ability and experience with such methods, including pen-and-paper (5 responses), text editors (6 responses), audio or video recorders (3 responses), or a combination of these. To generate reports from the notes taken during meetings, organizers usually used text editors (17 responses), with a preference for Microsoft Word and Notepad (12 responses). However, when asked about the time required to compile a report from meeting notes, their responses varied from 15 minutes to a few days. The variation can be attributed, in part, to the amount of notes and the format in which they were captured. All organizers (20 responses) mentioned that the report generation process involves some form of summarization of the meeting records to retain the most important discussion components. One organizer (FO-7) explained, \blockquote{\emph{If I'm responsible for the report, I listen to the recording, review the notes, and translate them into coherent accounts of meeting attendees, topics, discussion, and decisions/outcomes.}} Another organizer (FO-1) mentioned, \blockquote{\emph{I start with the agenda, review the audio, then edit and summarize the meeting notes into report content.}} Organizers also described high-level properties of what would constitute a \textit{good report}. One organizer (FO-1) mentioned that good meeting reports should be \blockquote{\emph{accurate, comprehensive, clear, and concise}}. Another organizer (FO-4) emphasized that several components constitute a good report, including \blockquote{\textit{[the] main agenda items, relevant comments, consensus, action plans, and feedback}}. One organizer (FO-17) thought meeting reports should be \blockquote{\textit{balanced, fair, and pertinent}} towards multiple perspectives; however, this is often challenging because they only \blockquote{\textit{listen to a few, while others stay silent}}. When rating tags that would help them take better notes to create good reports (Fig.~\ref{fig:survey}(A)), the meeting organizers unanimously (100\%) endorsed the tag \emph{Decision}. The tags \emph{Topic}, \emph{Action Items}, \emph{Main Issue}, and \emph{Date} were preferred by more than 70\% of organizers, and all other tags except \emph{Q\&A} were favored by more than 50\% of organizers. These survey responses suggest that preferences for tagging meeting discussion vary among organizers, and that different sets of tags might be needed to generate reports from meetings with diverse agendas. \subsection{Design Goals} Based on prior work and our formative study, we identified four design goals to guide our system design that address the requirements and challenges of both meeting attendees and organizers. First, we found that some attendees lacked a way to respond to ongoing discussions, and many organizers struggled to keep track of the discussion and take notes simultaneously. Thus, we needed to provide a communication channel between attendees and organizers for sharing opinions and capturing feedback (G1). Second, organizers often refer to several sources of meeting data, including meeting notes and audio/video recordings, to compile meeting reports. Hence, the meeting discussion audio, organizers' tags, and attendees' feedback should be captured and combined to provide organizers with a holistic view of the meeting data (G2).
Third, the organizers perform some form of summarization to generate meeting reports, yet many struggled to account for attendees' feedback while summarizing meeting data. This challenge motivated our third goal: to introduce summarization techniques that incorporate attendees' feedback and help organizers get the gist of meeting discussions (G3). Finally, organizers needed to examine the meeting-generated data to capture more inclusive attendees' feedback so that they could write more comprehensive reports. To that end, our final design goal was to provide exploration and report authoring functionalities to help them investigate the meeting data and identify evidence to generate reports that include and reflect attendees' feedback (G4). \begin{figure} \includegraphics[width=1\textwidth]{Figures/cc_fig.pdf} \caption{A snapshot of CommunityClick's workflow. During the meeting, attendees and organizers can use iClickers to share feedback and tag the meeting, respectively. The meeting is also audio-recorded. The audio recording is transcribed automatically and then augmented with the organizers' tags and attendees' feedback. The system then generates a feedback-weighted discussion summary and extracts the most relevant topics. The interactive interface, available online, enables organizers to explore and utilize the augmented meeting discussion to examine the data and author meeting reports.} \label{fig:cc_block} \end{figure} \section{CommunityClick} \label{sysdesign} Guided by the design goals, we designed and developed CommunityClick, a system in which we modified iClickers to serve as a real-time response mechanism and augmented meeting transcripts to capture more inclusive feedback. We also introduced a novel feedback-weighted summarization to prioritize attendees' feedback and enabled exploration and utilization of the augmented transcript through an interactive interface for organizers to author meeting reports. Here, we provide a scenario where CommunityClick can be employed (Fig.~\ref{fig:cc_block}), followed by the system description. \subsection{User Scenario} Michelle is an organizer who has been appointed by local government officials to organize an important town hall. Given the importance of the meeting, she decides to deploy CommunityClick so she can focus on facilitating the meeting while using iClickers to capture the community's feedback. Adam is a community member who is attending the town hall. He cares about the community and wants to share his opinions on the agenda. He prefers to avoid confrontations, especially in town halls, as he is worried about speaking up and running into arguments. In the meeting, he is given an iClicker and instructions for using it to share his opinions via five options. Adam engages in discussion with other attendees in the meeting, but whenever he feels hesitant to speak up, he uses the iClicker to provide feedback. A week later, Michelle finally gets around to writing the report of the town hall. By now, she has forgotten a significant portion of the meeting discussion. She logs in to CommunityClick and selects the town hall. She uses the timeline and feedback-weighted summary to get an overview of the meeting discussion and jog her memory. She uses her own tags, the timeline, and the interactive summary to investigate the augmented meeting transcript, which contains attendees' feedback alongside the discussion segments.
Finally, she authors the report by importing information from the transcript into the text editor. \begin{figure}[h] \centering \includegraphics[width=0.5\textwidth]{Figures/apts.pdf} \caption{The apparatus used to capture organizers' tags and attendees' feedback. (A) The iClicker for organizers to tag the meeting. (B) The iClicker for attendees to add their feedback. We used different sets of tags for organizers and attendees based on our formative study. Each iClicker was labeled with the respective set of tags to reduce the cognitive load of mapping options to the iClicker buttons. (C) The iClicker recorder. We used an Adafruit Feather M0 with the 900 MHz RFM69W/RFM69HW transceiver to capture iClicker clicks with timestamps in real-time to synchronize tags and feedback with meeting audio.} \label{fig:apa} \end{figure} \subsection{System Description} In the following, we describe how we addressed the design goals by using iClickers, augmenting meeting transcripts, performing text analysis, and developing an interactive interface. \subsubsection{Modifying iClickers to enable attendees and organizers to provide real-time responses (G1)} We used iClickers for both organizers and attendees to enable them to respond to meeting discussions at any time during the meeting without the need to manually take notes or speak up to share opinions. The iClicker is a communication device that uses radio frequency to allow users to respond anonymously using its five buttons (Fig.~\ref{fig:apa}). Despite the widespread usage of smartphones, we chose iClickers as an audience response system because recent statistics show that 20\% of U.S. citizens do not yet have access to smartphones~\cite{pewresearch}. Furthermore, smartphones are often a major cause of distraction and hindrance to participation in meetings~\cite{bajko2016prevalence, kushlev2016silence}. There are also technical overheads involved, including application installation, maintenance, and operating-system version compatibility issues, which might disengage participants. In contrast, iClickers have proven successful in town halls due to their familiarity and the anonymity they afford in sharing opinions and collecting attendees' feedback on specific questions~\cite{bergstrom2009vote, murphy2009promotion, boulianne2018citizen}. The anonymous use of iClickers could ensure that silent participants can also share their opinions about the ongoing discussion with the organizer without engaging in a potentially heated debate. Moreover, we modified the iClickers to go beyond previous approaches by allowing meeting organizers and attendees to respond to the ongoing discussion using all five different options instead of only binary agreement or disagreement. The tag options are customizable, and depending on their meeting agendas, organizers can set up an appropriate list of tags and feedback options before the meeting. We used different types of iClickers for organizers and attendees: the organizers used instructor iClickers and the attendees used regular ones (Fig.~\ref{fig:apa}). This helped us effectively separate organizers' and attendees' responses. To reduce the cognitive load of mapping and remembering the options, we labeled each iClicker with the organizers' tags or attendees' feedback options.
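To make the response pipeline concrete, the following minimal Python sketch illustrates one way clicks from the two device types could be represented and routed into separate organizer-tag and attendee-feedback streams. This is an illustrative sketch rather than our production code: the device identifiers and function names are hypothetical, and the default button-to-label mappings shown are customizable per meeting.

\begin{verbatim}
from dataclasses import dataclass

# Default button-to-label mappings (hypothetical; customizable per meeting).
ORGANIZER_TAGS = {"A": "Main Issue", "B": "Concern", "C": "Supportive",
                  "D": "New Idea", "E": "Good Point"}
ATTENDEE_FEEDBACK = {"A": "Agree", "B": "Disagree", "C": "Unsure",
                     "D": "Important", "E": "Confused"}

@dataclass
class Click:
    device_id: str    # unique iClicker identifier
    button: str       # one of "A".."E"
    timestamp: float  # seconds since the start of the meeting

def route_click(click, organizer_ids):
    """Label a raw click as an organizer tag or attendee feedback.

    Instructor (organizer) and regular (attendee) iClickers have
    distinct device identifiers, which lets us separate the streams.
    """
    if click.device_id in organizer_ids:
        return ("organizer_tag", ORGANIZER_TAGS[click.button], click.timestamp)
    return ("attendee_feedback", ATTENDEE_FEEDBACK[click.button], click.timestamp)
\end{verbatim}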
\subsubsection{Combining automatically generated transcripts with tags and feedback to capture more inclusive feedback (G2)} We recorded and synchronized three different sets of data generated simultaneously in the meeting---the discussion audio, the organizers' tags, and the attendees' feedback via iClickers. We recorded the meeting audio using a common omnidirectional microphone and removed noise from the recording using the open-source audio editor Audacity\textregistered. However, capturing organizers' tags and attendees' feedback from iClickers was non-trivial due to limitations in the hardware access and API provided by the iClicker manufacturer: the original software and hardware in factory settings did not provide timestamped data on each click. As a result, we customized the hardware and API to record organizers' tags and attendees' feedback (Fig.~\ref{fig:apa}). We used an Adafruit Feather M0 with the 900 MHz RFM69W/RFM69HW transceiver to collect iClicker clicks and timestamps transmitted over radio frequency on the same bandwidth. This allowed us to accurately and precisely capture and synchronize iClicker interactions with the time of discussion. To transcribe the meeting audio, we used automatic speech recognition from Assembly AI~\cite{assemblyai}. We assessed the quality of this method by comparing its output to human-generated reference transcripts and found our approach to be on par with them; the results of these analyses are presented in full in the \hyperref[appendix]{Appendix} section. We combined the transcript with the timestamped tags and feedback to transform the recorded meeting audio into timestamped text. Furthermore, we used the organizers' tags to divide the meeting transcript into manageable and consumable segments. Previous work showed that there is a gap (2 seconds on average) between hearing something, registering it, and taking action upon it, such as clicking a button for annotation~\cite{risko2013collaborative}. Based on prior work and our early pilot experiments, for each organizer's tag, we created a 30-second time window around the tag (2 seconds before the tag and 28 seconds after it). The complete meeting transcript is divided into similar 30-second segments. For each segment, we collected the attendees' feedback provided within that time window. Consequently, the meeting audio is transformed into timestamped segments, each containing transcribed conversation, organizers' tags, and a set of attendees' feedback (Fig.~\ref{fig:interface}(E)). We also extracted the main discussion points from the transcript segments using TopicRank~\cite{bougouin2013topicrank} (Fig.~\ref{fig:interface}(B)). We chose this method because it produces multi-word topics, which allow for better topic representation~\cite{blei2009visualizing}.
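The segmentation logic above can be summarized in a short sketch. The following Python fragment, building on the \texttt{Click} representation from the previous sketch, shows how tag-centered 30-second windows could be formed and how attendees' feedback could be aggregated per segment. The data structures and function names are illustrative assumptions, and the uniform segmentation of untagged stretches of the transcript is omitted for brevity.

\begin{verbatim}
from collections import Counter

SEGMENT_LEN = 30.0  # seconds
PRE_TAG = 2.0       # average reaction gap before an organizer's tag

def window_feedback(start, end, attendee_feedback):
    """Count attendees' feedback labels falling inside [start, end)."""
    return Counter(label for t, label in attendee_feedback
                   if start <= t < end)

def build_segments(utterances, organizer_tags, attendee_feedback, duration):
    """Form a 30-second segment around each organizer tag.

    utterances:        list of (start_time, text) from the transcript
    organizer_tags:    list of (timestamp, tag_label)
    attendee_feedback: list of (timestamp, feedback_label)
    """
    segments = []
    for t, tag in organizer_tags:
        start = max(0.0, t - PRE_TAG)
        end = min(duration, start + SEGMENT_LEN)
        text = " ".join(w for ts, w in utterances if start <= ts < end)
        segments.append({"start": start, "tag": tag, "text": text,
                         "feedback": window_feedback(start, end,
                                                     attendee_feedback)})
    return segments
\end{verbatim}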
\subsubsection{Applying a text summarization method to incorporate attendees' feedback into meeting discussion summaries (G3)} To summarize the meeting transcript, we used the graph-based TextRank algorithm~\cite{mihalcea2004textrank}, a variation of the PageRank algorithm~\cite{page1999pagerank}. TextRank is known to deliver reasonable results in summarizing meeting transcripts~\cite{garg2009clusterrank} in unsupervised settings. However, it treats all input text the same, without any domain-specific consideration. Our goal was to incorporate attendees' feedback into the summarization process so that the resulting summary was weighted by attendees' feedback. To that end, we made two critical modifications to the original methodology: 1) incorporating attendees' feedback when computing the relative importance of sentences, and 2) replacing the vanilla similarity function used in TextRank with the bag-of-words ranking function BM25, which has proven to work well in information retrieval tasks~\cite{robertson1995okapi}. Each individual transcript is treated as a set of sentences ($S_1, S_2, \ldots, S_n$), and each sentence is considered an independent node. We used a similarity function between sentences to construct edges between nodes: the higher the similarity between two sentences, the more important the edge between them in the graph. The original TextRank algorithm relates two sentences based on the content (tokens) they share. For two sentences $S_i$ and $S_j$ with common tokens $w_k$, this relation is given by Equation~\ref{tr_sim}. \begin{equation} \centering sim(S_i, S_j) = \frac{\vert \{w_k \vert w_k \in S_i \wedge w_k \in S_j\}\vert}{\log(\vert S_i \vert) + \log(\vert S_j \vert)} \label{tr_sim} \end{equation} We replaced this similarity function with the BM25 ranking function defined by Equation~\ref{bm_sim}. \begin{equation} \centering sim(S_i, S_j) = \sum_{w_k \in S_i} IDF(w_k) \, \frac{f(w_k, S_j) \cdot (a + 1)}{f(w_k, S_j) + a \cdot (1 - b + b \cdot \frac{\vert S_j \vert}{\mu_{DL}})} \label{bm_sim} \end{equation} \noindent where $a, b$ are function parameters ($a=1.2, b=0.75$), $f(w_k, S_j)$ is $w_k$'s term frequency in $S_j$, $IDF$ is the inverse document frequency, $\vert S_j \vert$ is the length of $S_j$, and $\mu_{DL}$ is the average sentence length in our collection. More importantly, since we timestamped and tagged our transcripts, we knew which instances were potentially more important in terms of garnering attendees' feedback. To incorporate this, we scaled every edge weight $w_{i,j}$ by a factor $\epsilon$ determined experimentally (set to $1.10$ if either of the sentences being compared prompted feedback, and $0.90$ otherwise). From the constructed graph of sentences, we computed the final relative importance of every vertex (sentence) and selected the top-$n$ sentences corresponding to about $30\%$ of the total length of the transcript, which were then presented, in their original order, as the summary of the meeting discussion. We performed a series of ablation tests to evaluate the robustness of our summarization approach; the results are presented in full in the \hyperref[appendix]{Appendix} section. For three different meeting transcripts generated using CommunityClick, we quantitatively evaluated the auto-generated summaries against human-annotated reference summaries using the widely used ROUGE metrics~\cite{lin-2004-rouge}. Across the different meeting transcripts, we observed similar ROUGE scores, indicating consistent summary quality across meetings. Additionally, we evaluated our algorithmic approach on the popular AMI meeting corpus~\cite{carletta2005ami}, and our results were comparable to the current state-of-the-art methods~\cite{zhong2019searching, shang2018unsupervised} employed on the AMI dataset under unsupervised settings.
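For illustration, the following simplified Python sketch shows how such a feedback-weighted TextRank variant could be implemented, using the \texttt{networkx} implementation of PageRank. It assumes a list of sentences and a parallel boolean list marking which sentences prompted attendees' feedback; the tokenizer and IDF estimate are deliberately simple, so this should be read as a sketch of the algorithm rather than our exact implementation.

\begin{verbatim}
import math, re
from collections import Counter
import networkx as nx

A, B = 1.2, 0.75               # BM25 parameters (a, b in Eq. 2)
EPS_UP, EPS_DOWN = 1.10, 0.90  # feedback-based edge scaling

def tokenize(s):
    return re.findall(r"[a-z']+", s.lower())

def bm25(si_tokens, sj_tokens, idf, avg_len):
    """BM25 score of sentence j for the 'query' tokens of sentence i."""
    tf = Counter(sj_tokens)
    score = 0.0
    for w in set(si_tokens):
        f = tf[w]
        if f:
            denom = f + A * (1 - B + B * len(sj_tokens) / avg_len)
            score += idf[w] * f * (A + 1) / denom
    return score

def summarize(sentences, has_feedback, ratio=0.30):
    """Rank sentences with feedback-weighted PageRank; keep ~30% in order."""
    toks = [tokenize(s) for s in sentences]
    n = len(sentences)
    avg_len = sum(len(t) for t in toks) / n
    df = Counter(w for t in toks for w in set(t))
    # Okapi-style IDF, kept positive even for frequent tokens.
    idf = {w: math.log(1 + (n - d + 0.5) / (d + 0.5)) for w, d in df.items()}
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            w = bm25(toks[i], toks[j], idf, avg_len)
            if w > 0:
                eps = EPS_UP if (has_feedback[i] or has_feedback[j]) else EPS_DOWN
                g.add_edge(i, j, weight=w * eps)
    scores = nx.pagerank(g, weight="weight")
    k = max(1, int(ratio * n))
    top = sorted(sorted(range(n), key=scores.get, reverse=True)[:k])
    return [sentences[i] for i in top]  # extracted sentences, original order
\end{verbatim}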
\begin{figure} \centering \includegraphics[width=1\textwidth]{Figures/interface.pdf} \caption{A snapshot of CommunityClick's interface. A) The title provides useful metadata about the meeting, such as date and location. B) The main topics extracted from the meeting transcript. C) The timeline visualizes organizers' tags in chronological order; each circle represents a tag, and clicking on a circle brings organizers to the corresponding transcript segment. D) The interactive feedback-weighted summary. E) The transcript view displays the transcript text alongside the organizer's assigned tag, the main topic, and aggregated attendees' feedback in that time interval for each segment. F) Filters for attendees' feedback (based on the options provided on the iClickers). G) The bar chart displays attendees' feedback. H) Filters for organizers' tags. I) A rich text editor for organizers to author the report. J) Options to view or collapse the summary or the transcript view.} \label{fig:interface} \end{figure} \subsubsection{Enabling exploration of an augmented meeting transcript through an interactive interface to help organizers author meeting reports (G4)} We developed CommunityClick's interface as a web application. It allows multi-faceted exploration and analysis of meeting data through several components: the title, filters, discussion topics, timeline, summary, text editor, and the augmented meeting transcript segments (Fig.~\ref{fig:interface}). The title contains metadata about the meeting, including the meeting title, date, and location (Fig.~\ref{fig:interface}(A)). The filters allow organizers to explore the transcript segments according to the selected feedback or tags of interest (Fig.~\ref{fig:interface}(F, H)); these options are customizable, and organizers may adapt the tags to suit their purpose before the meeting. We chose to visually collapse filtered-out transcript segments rather than remove them from the view entirely, to communicate to users that additional conversation transpired between the currently visible segments. In the topic and timeline component, we provide the list of most relevant topics and the timeline of the meeting discussion (Fig.~\ref{fig:interface}(B, C)). The organizers can filter the transcript segments based on any topic. The timeline displays the organizers' tags chronologically using circles, where each circle represents a tag and its color corresponds to the organizer's tag (Fig.~\ref{fig:interface}(C)). This provides the organizers with a temporal distribution of tags that demonstrates how the conversation progressed during the meeting. When a circle is selected, the transcript scrolls to the corresponding segment, whose background is highlighted to distinguish it from other segments. The feedback-weighted extractive summary is presented in a textbox (Fig.~\ref{fig:interface}(D)). Each sentence in the summary is interactive; upon selection, the view navigates to the transcript segment the sentence was extracted from. This can help organizers explore the transcript and better understand why a sentence was added to the summary. Below the summary, we added a rich text editor for authoring the meeting report with rich formatting options (Fig.~\ref{fig:interface}(I)), along with options for attaching additional files or images. Once the report is created, it can be printed to PDF directly, without switching to external printing applications. Finally, we present the augmented transcript divided into transcript segments (Fig.~\ref{fig:interface}(E)). The segments are ordered chronologically.
Each transcript segment contains the transcript text, the associated organizer's tag, the most relevant extracted topic, the time of the segment, an option to import the summary of the selected segment into the text editor, and aggregated attendees' feedback in the form of a bar chart. For easy tracking, we highlight the transcript text that is added to the summary. Organizers can edit the segments to assign or change tags and topics. However, to mitigate the injection of bias, they have no control over attendees' feedback. To reduce clutter on the screen, we added two additional filters to collapse the summary or the augmented transcript (Fig.~\ref{fig:interface}(J)). \subsection{Pilot Study} To explore whether CommunityClick could be effectively deployed in a real-world town hall, we performed a pilot study where we simulated a town hall with nine participants. We recruited eight participants as meeting attendees and one participant, who had previous experience with organizing meetings, as the organizer. We refer to the attendees who participated in our pilot study as \textbf{PA}. We recruited all participants by word of mouth from a public university in the U.S. For the discussion topics, we selected two contentious issues regarding the university that were popular at the time of our pilot study: building a new common room for graduate students and the rise of racist activities across the campus. All participants were graduate students (6 males and 2 females, with an average age of 27.25). The goal of the pilot study was to assess the system workflow for potential deployment and whether the attendees could share their feedback silently using iClickers without interrupting others. Furthermore, we used the augmented transcript from the meeting to enable the pilot study organizer to explore the meeting discussions and identify potential interface issues. The meeting took 60 minutes, similar to a traditional town hall. We collected 292 items of feedback from attendees (avg 36.5 per attendee $\pm$ 8.23) and 56 tags from the organizer. After the meeting, we asked attendees open-ended questions about their experiences of using iClickers to voice their opinions. The findings suggested that attendees found iClickers easy to get used to. They were able to share their feedback silently using iClickers, managed to avoid potential confrontations, and thought they could contribute more compared to their experiences in other meetings. However, one attendee mentioned difficulty remembering which iClicker button mapped to which feedback option. Another attendee mentioned that while expressing strong agreement or disagreement with the ongoing discussion, some attendees might spam the iClicker button, which might \blockquote{\emph{dilute the opinion values}} (PA-4). The organizer mentioned that using iClickers enabled him to focus more on the discussion and that the interface allowed him to better capture attendees' feedback. He recalled that some attendees were silent, but the bar charts showed feedback from all eight participants, meaning they were participating silently. He could also identify the flow of the discussion and important discussion points. This pilot study helped us to better understand and solidify operational procedures for a real-world deployment of our system. Based on the feedback we received, we modified the system and user interface. For instance, we added a spam-prevention technique~\cite{sun2013synthetic} by calibrating the system to capture at most one click from each attendee's iClicker within a 30-second window, negating the possibility of diluting the values of specific feedback options. Furthermore, we added an option to collapse the summary or transcript to reduce interface clutter. Finally, we added written labels to the iClickers to reduce attendees' cognitive load in remembering the mapping of response options.
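The spam-prevention calibration is essentially a per-device debounce. A minimal sketch follows, assuming the \texttt{Click} records from the earlier sketch and a simple sliding window; the deployed system may instead align windows with the 30-second transcript segments.

\begin{verbatim}
def dedupe_clicks(clicks, window=30.0):
    """Keep at most one click per attendee iClicker per 30-second window.

    clicks: iterable of Click records (see the earlier sketch).
    Returns the retained clicks in chronological order.
    """
    last_kept = {}  # device_id -> timestamp of the last retained click
    kept = []
    for c in sorted(clicks, key=lambda c: c.timestamp):
        if c.timestamp - last_kept.get(c.device_id, -window) >= window:
            kept.append(c)
            last_kept[c.device_id] = c.timestamp
    return kept
\end{verbatim}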
\section{Evaluation} \label{evaluation} We evaluated the application of iClickers as a real-time response system, particularly the ability of silent attendees to share feedback, and the efficacy of our approach in enabling organizers to explore, capture, and incorporate attendees' feedback to author more comprehensive reports. To that end, we conducted a field experiment to examine whether attendees could effectively use iClickers to voice their feedback. In addition, we conducted follow-up semi-structured interviews with 8 expert organizers to evaluate whether CommunityClick could enable them to capture attendees' feedback and generate more comprehensive reports. \subsection{Field Experiment: Parking Town Hall} We deployed CommunityClick at a town hall in a college town in the U.S. The meeting focused on a new set of proposals to improve parking conditions in the town. We reached out to the town officials a month before the meeting took place. They allowed us to deploy our system, record the discussion, and use the data. They also introduced us to the organizer who facilitated the meeting. We explained the procedure to them and discussed the tags to be used for both the organizer and the attendees. The town hall took place on a Thursday evening and was open for all to attend. \subsubsection{Meeting Participants} There were 31 attendees and 1 organizer present at the meeting. We provided all 31 attendees with iClickers labeled with the Agree, Disagree, Unsure, Important, and Confused tags. We provided the organizer with an iClicker labeled with the Main Issue, Concern, Supportive, New Idea, and Good Point tags, as per our pre-meeting discussion. \subsubsection{Procedure} At the beginning of the town hall, we gave the meeting attendees a brief five-minute tutorial on how to use the iClickers. We also received the attendees' consent to record their discussions. The meeting began with the organizer presenting the meeting agenda and the new parking proposals to the attendees. The attendees and the organizer used iClickers to tag the conversation throughout the meeting. After the presentation, the attendees engaged in discussing the proposals. The meeting lasted 76 minutes. At the end of the meeting, we provided attendees with post-study questionnaires that asked various questions, such as their reasons for attending the meeting, their usual experience during town halls, whether they could share their opinions by speaking up, and how using iClickers compared to such experiences. They responded on a five-point Likert scale. We also asked them open-ended questions about their experience of working with iClickers, whether they faced any issues or challenges, and suggestions to improve their experiences and our approach. The post-study questionnaire is provided as supplementary material. \subsubsection{Data Collection and Analysis} We were given permission by government officials to collect and use the data from the town hall. We collected 61 minutes of meeting audio for transcription.
We also collected the organizer's tags and attendees' feedback from the meeting. In total, we captured 56 tags from the organizer. Out of 31 meeting attendees, 22 used the iClickers we provided to share their feedback, and 20 of these 22 attendees filled out the post-study questionnaire. We report the statistics based only on these 20 attendees' responses. We captured a total of 492 items of attendees' feedback, with an average of 24.6 feedback items per attendee and a standard deviation of 6.44. This data was later used to populate CommunityClick's interface for demonstrating its various functionalities to meeting organizers, which we describe in Section~\ref{sub:interview}. We also collected the post-study questionnaire responses and entered them into spreadsheets for creating charts (Fig.~\ref{fig:field_exp}) and statistics for analysis. \subsubsection{Findings} From the analysis of the attendees' iClicker usage patterns, we found that the attendees used the tag \textit{Agree} the most with 187 clicks (38\%), followed by \textit{Important} with 103 clicks (21\%), \textit{Disagree} with 93 clicks (19\%), \textit{Confused} with 79 clicks (16\%), and finally \textit{Unsure} the least with only 30 clicks (6\%). Initially, we were surprised to see the large gap between agreement and disagreement. However, upon closer inspection, we found that on several occasions, attendees who were using iClickers clicked \textit{Agree} while other, vocal attendees were verbally expressing their disagreement with a discussion topic. This behavior pattern indicates that the silent attendees used iClickers to support an ongoing argument alongside sharing their own opinions. We also found that the attendees did not press any iClicker options during the introduction, when the organizer was setting up the discussion, or during the conclusion of the meeting, when the organizer expressed gratitude for attendance and other social conversation took place. This suggests that the attendees took their opinion sharing via iClickers seriously and did not randomly click different options during the meeting. \begin{figure} \centering \includegraphics[width=1\textwidth]{Figures/field_exp_data.pdf} \caption{The results from the field experiment. A) Attendees' responses show that the majority of meeting attendees were not satisfied with the status quo of town halls but found iClickers easy to get used to. It also displays the number of attendees who thought they could share their voices by speaking up or by using iClickers. B) A deeper comparison between speaking up and using iClickers to share attendees' feedback. The diamonds (\textcolor{gray}{\ding{117}}) and stars (\textcolor{gray}{\ding{86}}) represent 20 attendees' (A1-A20) responses to questions that asked them to rate their experiences of sharing opinions by speaking up and by using iClickers, respectively, during town halls. The arrows show the increase or decrease in their ratings. They demonstrate that the majority of the participants who were not satisfied with the current methods of sharing opinions (A1-A3, A5) during town halls found iClickers to be a better alternative, and that participants who were comfortable with speaking up during meetings did not endorse iClickers as strongly as a way to share their voices.} \label{fig:field_exp} \end{figure} From the analysis of the post-study questionnaires, we found that all 20 attendees either lived or worked in the town of Amherst, where the town hall was organized.
95\% of these attendees (19 responses) were well-accustomed to such meetings, mentioning that they attended similar town halls twice a year, while 50\% (10 responses) attended such meetings more than five times per year. When asked about their exposure to technology, all meeting attendees reported owning at least one computer and one smartphone with an internet connection, which they were comfortable using. However, their responses regarding their experiences in town halls varied, as presented in Fig.~\ref{fig:field_exp}(A). 25\% of attendees (5 responses) mentioned that they did not feel they were able to voice their thoughts in town halls. Only 35\% (7 responses) were pleased with the way such town halls are organized, while 50\% of the attendees (10 responses) were neutral in their responses. 75\% of attendees (15 responses) responded that they got used to iClickers quickly. 85\% (17 responses) mentioned they were able to share their thoughts using iClickers, compared to only 65\% (13 responses) who were comfortable speaking up to share opinions. The majority of the attendees (90\%, 18 responses) were positive about their experiences of using iClickers to share their opinions. One attendee (A9) mentioned, \blockquote{\emph{I feel like I could contribute more than usual and I would definitely like use it in future meetings.}} We further compared the attendees' responses on their ability to share their thoughts in town halls by voicing their opinions against using iClickers (Fig.~\ref{fig:field_exp}(B)). The data show that all but one (A4) of the attendees who did not think they could share their thoughts by speaking up thought they could voice their opinions using iClickers. One such attendee (A2) mentioned, \blockquote{\textit{I didn't like what others were saying. But instead of interrupting, I just used the clicker to say that I didn't agree with them.}} We also found that, while agreeing that iClickers could provide a way to voice opinions in town halls, the attendees who strongly preferred speaking up did not rate their experience of using iClickers to share opinions as highly. One of these attendees (A17) mentioned, \blockquote{\textit{I was distracted, so I didn't use it that much.}} We identified two important insights from this field experiment (Fig.~\ref{fig:field_exp}). First, it demonstrated that silent attendees who could not speak their minds found in iClickers a way to voice their opinions without apprehension of confrontation in town halls (Fig.~\ref{fig:field_exp}(A) and (B), attendees A1-A3, A5). However, we also found that some attendees who strongly agreed that they were satisfied with the current method of sharing opinions by speaking up did not as strongly endorse iClickers as a way to share their opinions (Fig.~\ref{fig:field_exp}(B), attendees A13-A15, A17, A18). We speculate two reasons for these reduced ratings. First, for attendees who are already comfortable with speaking up, iClickers might seem like an additional step to share opinions, which might lead to distraction, as mentioned by one of the attendees (A17). Second, there might be a reluctance to deviate from the norm and use technology in established, albeit impaired, town hall proceedings and customs. Nevertheless, our results suggest that the addition of iClickers could be an acceptable trade-off between providing silent attendees a way to communicate their opinions and mildly inconveniencing the adept and vocal meeting attendees.
\subsection{Semi-structured Interviews with Meeting Organizers} \label{sub:interview} We conducted semi-structured interviews with 8 expert meeting organizers who were experienced in organizing or facilitating town halls to gather data on communities' needs, issues, and ideas. They were also adept at compiling meeting reports that play a pivotal role in informing civic decision-making. Our objective was to examine whether CommunityClick's interactive interface could help organizers better capture attendees' feedback to author more comprehensive reports that preserve the equity and inclusivity of voiced opinions in town halls. \subsubsection{Participants} We reached out to a total of 29 expert organizers from across the U.S., 8 of whom responded by agreeing to help us evaluate CommunityClick. Our interviewees were experts in their fields with intimate knowledge of town hall organization and decision-making. We refer to our semi-structured interview participants as \textbf{P}. On average, they had over 20 years of experience. One interviewee (P1) was the organizer from our field experiment---the town hall on parking. We made connections with the others (P3-P8) during our formative study. All of our interviewees were based in the U.S. \subsubsection{Procedure} Several experts we engaged with to evaluate CommunityClick were excited about its potential and agreed to deploy the system in their then-upcoming town halls. Our original evaluation plan involved several deployments in the wild, followed by providing organizers with the meeting audio from these deployments and asking them to write two reports: one using their current method of writing reports, and the other using CommunityClick's augmented meeting transcript and interactive interface. We wanted to study the differences between these reports to investigate the efficacy of our system. However, the COVID-19 pandemic forced the organizers to cancel all town halls until further notice, and we were compelled to cut our evaluation short. Due to this setback, we revised our evaluation procedure as follows. We deployed the CommunityClick interface on a public domain and shared it with our interviewees via email at least two weeks before our interviews. To maintain privacy of usage, we provided each organizer with their own user account and login credentials. We also provided detailed instructions on how to use CommunityClick's various features and encouraged the interviewees to explore the interface at their own convenience. We populated the interface with the data collected from the simulated meeting in our pilot study as well as the meeting from our field experiment for the interviewees to explore. During the interview sessions, we asked them open-ended questions focusing on their current practices for town hall organization, how using CommunityClick differed from these practices, how useful CommunityClick could be in capturing silent attendees' feedback and marginalized perspectives, whether the interface could allow them to author better reports, and finally, suggestions to improve CommunityClick. We also allowed them to ask any questions they might have about CommunityClick. The interview questions are provided as supplementary material. We conducted the interviews over video conferencing via Zoom~\cite{zoom}. The interviews lasted between 45 and 60 minutes. All participation was voluntary. Each interview was conducted by an interviewer and a dedicated note-taker from our research team.
\subsubsection{Data Collection and Analysis} We transcribed over 400 minutes of audio recordings from our interviews with organizers. We also took extensive notes throughout the interviews. Finally, we thematically analyzed the interview transcripts and notes using the open-coding method~\cite{burnard1991method}. Two members of our research team independently coded the data at the sentence level using a spreadsheet application. The inter-coder reliability, measured using Krippendorff's alpha~\cite{krippendorff2011computing}, was 0.89. We held several iterations of discussions among the research team to condense the codes into the themes presented in Table~\ref{tab:theme_table}. \begin{table*} \caption{This table shows the themes that emerged from analyzing the interviews with the organizers, along with the codes associated with each theme and their descriptions.} \scriptsize \setlength\tabcolsep{6pt} \ra{1.5} \centering \begin{tabular}[t]{p{2.6cm} p{6cm} p{4cm}} \toprule \textbf{Themes} & \textbf{Codes} & \textbf{Descriptions}\\ \midrule Enabling inclusivity & Equitable platform, problem speaking up, opinion sharing, understanding others, inclusive opinions & CommunityClick's impact on inclusivity in town halls\\ Diverse perspectives & Shared narrative, honest reflections, attendees' reactions, identifying conversation flow & Different perspectives and opinions shared in town halls\\ Report quality & Meeting summarization, missing information, credible process, comprehensiveness, accurate representation & CommunityClick's utility in creating reports\\ Meeting organization & Unstructured discussions, real-time attendees' response, tracking response, customized tags, measuring consensus & Organizing meeting-generated data\\ Interface learnability & Intuitiveness, easy-to-use, formatting, data exploration & Users' ability to learn and use interface features\\ Concerns and caveats & Technology as a barrier, tech-savvy, distraction factors, young generation & Concerns regarding CommunityClick's usage in town halls\\ Improvement suggestions & Real-time feedback, opinion statistics, organizers' input & Suggestions to improve our approach\\ \bottomrule \end{tabular} \label{tab:theme_table} \end{table*} \subsubsection{Findings} Our analysis of the interview transcripts and notes surfaced critical insights into how CommunityClick could enable attendees to share opinions and help organizers capture inclusive feedback to author more comprehensive reports. We elaborate on these insights in the following and discuss possible room for improvement. \\\\ \noindent\textbf{CommunityClick can create a more inclusive platform to share opinions.} Our interviewees were unanimous (8 out of 8) in acknowledging CommunityClick's potential to create an inclusive platform for the community to share their opinions. They shared with us several example scenarios they had experienced where CommunityClick could have provided silent attendees a way to speak their minds. P1 mentioned, \blockquote{\textit{People want to share their opinions, but sometimes they just can't express themselves because they're not comfortable talking, or they're nervous about how they'll appear to or who is around the table. Often they are intimidated. Here, }"intimidated" \textit{is a strong word.
But I don't think it's the wrong word.}} P2 mentioned, \blockquote{\textit{There was an individual who attended several meetings, it was clear that their presence had an impact on people's willingness to speak at all, or the opposite effect, where people escalated in reaction to that person. Giving them the ability to click help both ways. They can avoid confrontation or avoid escalation by just clicking.}} Similarly, P3 drew examples from his experiences, saying, \blockquote{\textit{Even if the attendees are from the U.S., [people with] different upbringings or cultural backgrounds have a disadvantage to those who are quite familiar with the moors of group dynamics. In our town halls, we only take votes on questions or get agreements, but in a conversation, there are so many euphemisms, colloquialisms, and metaphors that make it difficult for someone unfamiliar with them to understand others' reactions. There is real value in using options like ``confused'' and ``unsure'' to allow them to record that they didn't understand the conversation instead of forcing them to agree or disagree.}} P6 found further value in separating the attendees' tags and the organizers' tags to establish organizers' credibility. She mentioned, \blockquote{\textit{The organizers cannot unintentionally skew the attendees' feedback because [their tags] are separate. That way, we know the recorded feedback is unbiased.}} \\\\ \noindent\textbf{The augmented transcripts provide evidence of attendees' reflections.} One of our primary goals was to enable organizers to have access to attendees' perspectives to form a shared narrative of the meeting discussions. After exploring CommunityClick's interface, the majority of interviewees (7 out of 8) mentioned how it enabled them to capture meeting attendees' reflections on the meeting agenda. P6 mentioned, \blockquote{\textit{It provides a way of ensuring that voices and reactions are reflected as people speak and click. It is a huge step towards having a more honest reflection of what really went on in the meeting.}} P3 further emphasized how CommunityClick not only captured the attendees' feedback but also allowed navigation of the conversation flow using the Timeline, \blockquote{\textit{This tool allows me to see both how many ideas have traction or agreement and how many don't, but just as importantly, how the flow went. The facilitators are concerned with the way topics are discussed in town halls. These topics are influenced by the surrounding conversations. It [Timeline] allows me to see reactions that might or might not be intended because of the sequence of conversations. Having a way to track that has a huge value.}} Regarding the interactive augmented transcript, P4 specifically preferred the way it enabled her to track attendees' responses. She drew a comparison with her usual methods for note-taking during town halls saying, \blockquote{\textit{We usually have a table facilitator and then a table observer. The table observer takes detailed notes, but it adds to the number of staff we have to have. So that creates an additional challenge, but the speech to text transcription makes a big difference in recording people's reactions. With [CommunityClick], maybe we won't need a table observer.}} P5 also mentioned how CommunityClick gave credence to attendees' reactions during the meeting discussions through the feedback bar charts. 
She said, \blockquote{\textit{It makes a lot of sense to see where people are aligned, where the challenges are, and giving information from their reactions. When changing policies, we hear from only a few voices who are either for or against something, and they tend to dominate the conversation. Having a visual and data-driven way to show what was the actual participation is gold. Sometimes people feel that a proposal is not aligned with their ideas. With the bar chart, you can show them that maybe you are the outlier and others agree with the proposal.}} \\\\ \noindent\textbf{CommunityClick can help create more comprehensive and accurate reports.} All of our interviewees had prior experience of writing reports by summarizing meeting discussions and identifying key discussion points. They found various aspects of CommunityClick useful for authoring not only more comprehensive reports but also more accurate ones that lend credibility to the report creation process. P1 drew parallels with his experience of working in scenarios where designated note-takers took notes and his team generated the reports from those notes. He mentioned, \blockquote{\textit{People who take notes have varying abilities and the notes vary in quality. Instead, as you are writing reports, you have [CommunityClick], where you can see and add the reactions to what [attendees] discussed right away, it builds credibility for the process.}} P3 echoed similar sentiments, saying, \blockquote{\textit{You are usually distracted by the conversation while taking notes, which means you might miss every third word at a particular moment, which could be the difference between agreement and disagreement. Having it transcribed and summarized will remind a facilitator of some things that he or she may not have remembered or make it more accurate, because they may have remembered it differently. I love the fact that the [text analysis methods] can capture that objectively for us.}} P4 also emphasized the usefulness of importing a summary into the text editor. She mentioned, \blockquote{\textit{Having the text editor where you can start writing the report and pull in pieces from the transcript could be really helpful, because then as you read through the transcript and you're writing about some themes, you can pull characteristic quotes that would really help bring in more evidence for claims for those themes.}} Furthermore, we found that the report creation process can take a few hours to a few days depending on variables such as the way notes were taken, the length of the meeting, and the report creator's skills. P7 highlighted the reduced workload and efficiency that CommunityClick could provide, saying, \blockquote{\textit{There is a physical component of getting into it, typing it up, theming, organizing, and editing which always takes longer than anticipated, I can see some of those issues can be fixed with this.}} \\\\ \noindent\textbf{CommunityClick preserves the flow of meeting discussions by establishing an implicit structure.} The majority of our interviewees (6 out of 8) thought CommunityClick could best be utilized to organize unstructured meeting discussions. They emphasized that, contrary to asking meeting attendees to respond to specific questions in town halls, CommunityClick allowed attendees to respond whenever they wanted, creating an implicit structure for the meeting while preserving the flow of discussion.
One interviewee (P1) mentioned, \blockquote{\textit{[CommunityClick] would provide the biggest benefit in more unstructured kind of discussions. If you have a town hall, where people are less likely to speak up, [tags] would be helpful to understand their reactions and help with the theming.}} Another interviewee (P5) mentioned, \blockquote{\textit{It's hard to keep track of many ideas, but the visual organization of information helps to gauge reactions and figuring out if we reached consensus. But most importantly, it helps me to see if there are any social or racial injustice components into the proposals where there can be negative reactions.}} They also found the option to customize the attendees' feedback and organizers' tags useful for adapting to different meeting scenarios. One interviewee (P3) mentioned, \blockquote{\textit{Words may mean different things in different meetings. Having the ability to label and customize [tags] individually would be a way for different organizations to adjust to that. Sometimes we want to know [attendees'] hopes and concerns, but other times, we just want to know if they agree or disagree.}} However, P7 raised a concern about larger meetings, saying, \blockquote{\textit{I think [CommunityClick] will be useful for smaller meetings, but if there are hundreds of people, and everyone is speaking over each other, I'm not sure if you will be able to cope with that.}} \\\\ \noindent\textbf{The simplicity and learnability of the interface afford intuitive exploration.} From a usability standpoint, all of our interviewees (8 out of 8) found CommunityClick's interface simple and straightforward to work with. P3 extolled the interface, saying, \blockquote{\textit{It's very intuitive, simple, easy to use, and navigate after the fact, edit, and update. All user interface features look well-thought-out to me considering the inner workings are extremely delicate and complicated.}} P4 valued the rich editing options of the text editor. She said, \blockquote{\textit{The automatic summaries can be used as a starting point of the report, as an initial cut, and then I can delete the things that might not be very useful and build up the report by adding more to it and formatting it. I can clearly see a lot of thought was put into designing the interface.}} P5 thought that the interactivity of the timeline was useful for navigating the augmented transcript. She mentioned, \blockquote{\textit{When I started clicking on the buttons on the circles at the top [timeline], it was very intuitive, like, it just automatically brings you to the places where that correlates with the statements, so you understand what it's connected to.}} P6 further emphasized CommunityClick's potential as a record-keeping and meeting exploration system, saying, \blockquote{\textit{Everything is linked together. So in that sense, it makes intuitive and logical sense when I'm looking at the data. It will be a total game-changer for policymakers and community organizers.}} \\\\ \noindent\textbf{Concerns around technology in town halls.} Although our interviewees praised CommunityClick, some of them raised a few important concerns.
P8 mentioned how technology usage could be troublesome in town halls, saying, \blockquote{\textit{It feels like the technology itself could be seen as a barrier, because a lot of people might not feel quite as comfortable, clicking on things and reacting to.}} On a different note, P4 raised concerns about the sense of urgency such technology might impose on the meeting attendees. She said, \blockquote{\textit{[Attendees'] reactions, they are decisions that are being made in the spur of the moment as they're hearing information. And it's such a complex, sociological, and psychological response to information.}} Her concern was whether the urge to immediately respond to someone's perspectives could inhibit the ability to contemplate and deliver a measured response. Further concerns arose from P5, who mentioned how younger meeting attendees might have an advantage in town halls if the technology is heavily used, saying, \blockquote{\textit{Younger generations tend to use technology so much more easily. And they turn to it so much more easily than older generations.}} \\\\ \noindent\textbf{Possible room for improvement.} We received some feedback from our interviewees on how to improve CommunityClick. Some of the suggestions focused on adding more real-time components to CommunityClick that could further augment ongoing discussions. For example, P3 mentioned, \blockquote{\textit{Right now, [CommunityClick] helps me to understand attendees' reactions after the fact. I think if the facilitators could see them in real-time as the attendees are clicking away, they might be able to steer the conversation better to be more fruitful and fair.}} Other suggestions involved adding functionality for organizers to add their own topics on top of the automatically extracted ones. In that regard, P4 mentioned, \blockquote{\textit{I guess it depends on who is using the system, but we look to dive a little bit more and would want to maybe customize the topics and themes}}. \section{Discussion} \label{discussions} From our field experiment and interviews with experts, we found that CommunityClick could be used to create an equitable platform for sharing more inclusive feedback in town halls. Compared to prior approaches to utilizing technology in town halls~\cite{murphy2009promotion, boulianne2018citizen}, CommunityClick enabled attendees to use iClickers to silently and anonymously voice their opinions at any time during the meeting, without interrupting the discussion or fearing being shut down by dominant personalities. We extended the use of audience response systems in prior work by adding five customizable options that go beyond binary agreement or disagreement and by modifying iClickers into a real-time response system~\cite{boulianne2018citizen, bergstrom2009vote}. The experts we interviewed valued options such as \textit{Confused} or \textit{Unsure} for identifying whether attendees were disengaged or did not understand the conversation, without forcing them to agree or disagree~\cite{Karpowitz2005DisagreementDeliberation, sanders1997against}. The customizability of both organizers' tags and attendees' feedback further added to the flexibility of our approach, which could potentially increase its adaptability in meetings with diverse agendas in both civic and other domains.
Moreover, the automation and augmentation of speech-to-text transcription, the extraction of the most relevant discussion topics, and the feedback-weighted summarization of meeting discussion could potentially eliminate the need for separate note-takers. According to the organizers we interviewed, this could reduce the manpower requirement significantly compared to established approaches~\cite{lukensmeyer21, lukensmeyer2002taking}. From organizers' perspectives, CommunityClick could help create more comprehensive and accurate reports that provide evidence of attendees' reflections. Prior work experimented with annotations during face-to-face meetings~\cite{kalnikaite2012markup, kelkar2010livetagging} for memory recall. In our work, we empowered organizers to go beyond recollection of events during meetings by integrating their own customized tags, enabling them to capture a more comprehensive picture of the meeting discussion. Furthermore, our interactive summary and attendees' feedback visualization provided a visual and data-driven way to highlight attendees' viewpoints and outliers on critical points of discussion. This could enable organizers to receive a clearer reflection of what occurred in the meeting and author more accurate reports based on tangible evidence rather than incomplete interpretation~\cite{mahyar2019deluge}, which could further lend credibility to the report creation process. Prior work highlighted concerns about accuracy in computational approaches to analyzing meeting data~\cite{mcgregor2017moretomeeting}. However, from our interviews with experts, we found that the accuracy and comprehensiveness of meeting reports often depend on the meeting length, the method of taking notes during the meeting, and the note-takers' skills. We posit that the synchronization of meeting data and the addition of inclusive attendees' feedback into the summary and interface will enable organizers to author more accurate reports with the added benefit of reduced manpower requirements. Although some of the meeting organizers we interviewed highlighted that the augmented transcripts, discussion topics, and summaries could only be accessed after the meeting finished, we argue that this latency is an acceptable trade-off for increased comprehensiveness, credibility, and accuracy in generating reports. Furthermore, providing access to the variety of meeting-generated data only after the meeting could also help reduce both organizers' and attendees' distraction during the meeting~\cite{gordon2011immersive, appleton2005gis} and allow them to engage with the ongoing discussion. In particular, the separation of attendees' feedback and organizers' tags, along with the evidence of attendees' feedback in meeting reports, could pave the way to instill trust between decision-makers in the local government and community members, which is considered a wicked problem in the civic domain~\cite{corbette2019trust, corbett2018problem, harding2015, mahyar2019deluge}. \subsection{Marginalization in Civic Participation and the Role of Technology} Marginalization can be broadly defined as the exclusion of a population from mainstream social, economic, cultural, or political life~\cite{given2008sage}, and it still stands as a barrier to inclusive participation in the civic domain~\cite{mahyar2019deluge, dickinson2019cavalry}.
Researchers in HCI and CSCW have explored various communitysourcing approaches to include marginalized populations in community activities, proceedings, and designs~\cite{dickinson2019cavalry, erete2017empowered, walsh2019ai+, mahyar2018communitycrit, kim2016budgetmap}. In this work, we added to this body of work by designing technology that includes silent voices in civic participation to increase the visibility of marginalized opinions. Our field experiment and interviews with experts demonstrated the efficacy of our approach in enabling, capturing, and including silent attendees' participation in town halls, regardless of their reasons for staying silent (social dynamics, fear of confrontation, cultural background, etc.). Our work also answers the call for more inclusive data analysis processes and practices to augment computational approaches~\cite{rhody2016dig, d2020data} by proposing a text summarization method that includes and prioritizes attendees' feedback when generating meeting discussion summaries. Such inclusive data analysis techniques could reflect the community's opinions, identify social injustice, and support accountability in the outcome of civic engagements~\cite{erete2017empowered}. However, designing communitysourcing technologies to include marginalized opinions and amplify participation alone may not be enough to resolve the inequality in sharing opinions in the civic domain~\cite{torres2007citizen, bovaird2007beyond}. Despite the success of previous works~\cite{erete2017empowered, lukensmeyer21, boulianne2018citizen}, technology is rarely integrated with existing manual practices, and follow-ups on engagements between government officials and community members are seldom propagated back to the community. This lack of communication might leave attendees uncertain about whether actions will be taken to reflect their opinions. As a result, the power dynamics between government officials and the community remain unbalanced, especially for marginalized populations~\cite{dickinson2019cavalry, erete2017empowered, dickinson2018inclusion, taylor2012empowering}. One way to establish the practicality of using technology to include marginalized opinions is to integrate it into existing processes to convince officials of its efficacy. Our work provides first steps towards the integration of marginalized perspectives; however, long-term studies are required to assess the feasibility of integrating public perspectives into actual civic decisions. \subsection{Integrating CommunityClick in Today's Town Halls} Our formative study with 66 attendees and field experiment with 20 attendees revealed a striking resemblance in attendees' ability to share their opinions in town halls. We found that 17\% (11 out of 66) of attendees from the formative study and 25\% (5 out of 20) of attendees from the field experiment were not comfortable speaking up in town halls and needed a way to share their opinions. However, similar to previous works~\cite{lukensmeyer21, gastil2008political}, some of the meeting organizers we interviewed were apprehensive about depending solely on technology due to the logistics involved in procuring and maintaining electronic devices such as iClickers, the reliability of technology that requires proper management, and the unfair advantage for younger generations who are more receptive to novel technologies.
We argue that renewed motivation, careful design choices, and detailed instructions could help overcome the novelty barrier of technology, even for people from older generations~\cite{vaportzis2017older, d2014three}. Based on our experiences from this study, we advocate for integrating technology with current face-to-face methods. We do not suggest completely replacing traditional methods with technology; rather, we suggest augmenting them with technology to address some of their limitations. CommunityClick could be gradually integrated as a fail-safe or an auxiliary method to complement the current process. All the functionality of CommunityClick would remain operational while organizers take personal notes in parallel with their iClicker interactions. This would allow organizers to retain their current practices while taking advantage of augmented meeting transcripts, discussion topics, and summaries from CommunityClick to better capture and understand attendees' perspectives when compiling reports. Another way to integrate CommunityClick into current processes would be to provide statistics of attendees' feedback so that organizers could track the discussion dynamics and facilitate the conversation accordingly to keep attendees engaged. However, prior work suggested that real-time visualizations or displays can add to attendees' distraction, causing them to disengage from the ongoing discussion~\cite{gordon2011immersive, appleton2005gis}. To circumvent this issue, optional companion mobile applications could be introduced only for organizers to receive real-time feedback on attendees' iClicker responses, so that they can make course corrections without distracting the attendees or influencing their opinions. \subsection{Expanding CommunityClick to other Domains} From our field experiment and interviews with the experts, we demonstrated CommunityClick's potential in elevating traditional town halls. We argue that our proposed data pipeline can be expanded to other domains where opinion sharing is important for successful operation. For example, in education, whether in classroom settings or in massive open online courses~\cite{kizilcec2013deconstructing, d2007mind, jones2013classroom}, it could enable students to share feedback at any time without interrupting the class, indicating whether they understood a certain concept or were confused and needed more elaboration, especially when the class size is large. It could also allow educators to track the effectiveness of their content delivery, class progress, and student motivation, which might help them adjust the course curriculum more effectively and efficiently, instead of receiving feedback from students once every semester. More importantly, familiarity with iClickers in education eliminates the entry barrier for the technology, making the system readily adoptable~\cite{whitehead2010usingiClicker, addison2009usingiclicker}. Similarly, CommunityClick could be used as a possible alternative to expensive commercial applications in meeting domains within business and other corporate organizations~\cite{meetingKing, iCompassTech}. As in town halls, in a corporate setting CommunityClick could provide text summaries and direct evidence of attendees' feedback from the meeting transcripts for better decision-making.
Automatic text summarization remains a challenging problem~\cite{tas2007survey}, especially when it comes to automatically summarizing meeting discussions~\cite{gillick2009global}. Our summarization approach emphasized the importance of meeting attendees' feedback, in contrast to purely text-based summarization approaches that treat all text documents alike~\cite{tas2007survey, mihalcea2004textrank}. We argue that this \textit{feedback-weighted} summary could be valuable for generating domain-specific, contextual text summarization in other meeting genres.
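To make the idea of feedback weighting concrete, the sketch below shows one way it could enter an extractive summarizer. It is purely illustrative and is not CommunityClick's actual pipeline; the function name, the text-based relevance scores, and the per-sentence feedback counts are all hypothetical.

\begin{verbatim}
def feedback_weighted_summary(sentences, text_scores,
                              feedback_counts, weight=0.5, k=5):
    """Illustrative extractive summarizer: blend a text-based
    relevance score with the amount of attendee feedback logged
    while each sentence was spoken, then keep the top-k sentences
    in their original order."""
    blended = [
        (i, (1 - weight) * t + weight * f)
        for i, (t, f) in enumerate(zip(text_scores, feedback_counts))
    ]
    top = sorted(blended, key=lambda p: p[1], reverse=True)[:k]
    return [sentences[i] for i, _ in sorted(top)]
\end{verbatim}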
Furthermore, there are potential applications of CommunityClick as a record-keeping tool, which might be particularly applicable in journalism: journalists could utilize iClickers to annotate an interview conversation with important tags and later review the augmented conversation using CommunityClick's interface to better organize and write news articles or interview reports. \section{Limitations and Future Work} Our evaluations suggested the efficacy of CommunityClick in providing a voice to reticent participants and enabling organizers to better capture the community's perspectives. However, we had only one real-world deployment of CommunityClick at a town hall due to the pandemic. We will continue to engage with meeting organizers and deploy CommunityClick in town halls to study its long-term impact and attempt to gradually integrate our approach into the predominantly manual town hall ecosystem. Also, as the findings from our field experiment suggested, iClickers might be distracting as a new technology for some attendees in town halls. Furthermore, the logistical issues associated with hardware procurement and the unavailability of software APIs might be a hindrance for some communities. To circumvent these issues, low-cost alternatives to iClickers or fully software-based audience response applications could be utilized~\cite{voxvote, slido}. A fully software-based solution could also enable attendees to provide open-ended textual feedback, which CommunityClick does not support in its current state. However, further comparative studies are required to assess the cost, efficacy, and applicability of such alternatives as replacements for iClickers that provide the same benefits without incurring additional financial, computational, or cognitive overhead. There are several avenues to improve CommunityClick in the future. For example, it could be augmented with more real-time components, including a companion application that delivers dynamic feedback statistics for organizers to access and utilize during the meeting. We will also explore novel methodologies to speed up the automatic speech-to-text transcription process to further reduce the time required for data processing. One approach could be to utilize parallel pipeline processing~\cite{chaiken2008scope}, where audio signals from the meeting are processed concurrently, which might reduce processing time significantly. To further provide evidence from attendees' feedback to help organizers when authoring reports, the audio of discussions could be added and synchronized with the transcript segments. This could enable organizers to identify vocal cues and impressions from the attendees who spoke during the town hall to further contextualize the discussion segment~\cite{mehrabian2017nonverbal}. In addition, we will investigate the possibility of tracking individual iClicker IDs, without risking attendees' privacy, to anonymously identify potentially contentious individuals who might be pushing specific agendas or marginalizing minority viewpoints in a discussion. However, further studies are required to understand the potential computational challenges that might arise with such extensions. To further improve report generation, we will explore novel technologies in natural language generation~\cite{wen2015semantically} to automatically draft meeting reports, which could further reduce meeting organizers' workload. In addition to exporting the created reports, we will enable exporting various statistics around attendees' feedback, organizers' tags, and discussion topics, to be added to the report or used separately when presenting the outcome of the town halls to decision-makers. \section{Conclusion} In this study, we investigated the practices and issues around the inequality of opportunity in providing feedback in town halls, especially for reticent participants. To inform our work, we attended several town halls and surveyed 66 attendees and 20 organizers. Based on our findings, we designed and developed CommunityClick, in which we modified iClickers into a real-time response mechanism to give voice to silent meeting attendees and reflect their feedback by augmenting automatically generated meeting transcripts with organizers' tags and attendees' feedback. We proposed a novel feedback-weighted text summarization method, along with extracting the most relevant discussion topics, to better capture the community's perspectives. We also designed an interactive interface that enables multi-faceted exploration of the summary, main discussion topics, and augmented meeting-generated data so that organizers can author more inclusive reports. We deployed CommunityClick in the wild to conduct a field experiment and interviewed 8 expert organizers to evaluate our system. Our evaluation demonstrated CommunityClick's efficacy in creating a more inclusive communication channel to capture attendees' opinions from town halls and provide evidence of attendees' feedback, which could help organizers author more comprehensive and accurate reports to inform critical civic decision-making. We discussed how CommunityClick could be integrated into the current town hall ecosystem and possibly expanded to other domains. \bibliographystyle{ACM-Reference-Format}
\section{Introduction} Historically, metrics for evaluating the quality of machine translation (MT) have relied on assessing the similarity between an MT-generated hypothesis and a human-generated reference translation in the target language. Traditional metrics have focused on basic, lexical-level features such as counting the number of matching n-grams between the MT hypothesis and the reference translation. Metrics such as {\sc Bleu} \cite{papineni-etal-2002-bleu} and {\sc Meteor} \cite{banerjee-lavie-meteor2009} remain popular as a means of evaluating MT systems due to their lightweight and fast computation. Modern neural approaches to MT result in much higher quality of translation that often deviates from monotonic lexical transfer between languages. For this reason, it has become increasingly evident that we can no longer rely on metrics such as {\sc Bleu} to provide an accurate estimate of the quality of MT \cite{barrault-etal-2019-findings}. While an increased research interest in neural methods for training MT models and systems has resulted in a recent, dramatic improvement in MT quality, MT evaluation has fallen behind. The MT research community still relies largely on outdated metrics, and no new, widely-adopted standard has emerged. In 2019, the WMT News Translation Shared Task received a total of 153 MT system submissions \cite{barrault-etal-2019-findings}. The Metrics Shared Task of the same year saw only 24 submissions, almost half of which were entrants to the Quality Estimation Shared Task, adapted as metrics \cite{ma-etal-2019-results}. The findings of the above-mentioned task highlight two major challenges to MT evaluation which we seek to address herein \cite{ma-etal-2019-results}: namely, that current metrics \textbf{struggle to accurately correlate with human judgement at segment level} and \textbf{fail to adequately differentiate the highest performing MT systems}. In this paper, we present {\sc Comet}, a PyTorch-based framework for training highly multilingual and adaptable MT evaluation models that can function as metrics. Our framework takes advantage of recent breakthroughs in cross-lingual language modeling \cite{laser2019-Artetxe, devlin-etal-2019-bert, NIPS2019_8928, conneau2019unsupervised} to generate prediction estimates of human judgments such as \textit{Direct Assessments} (DA) \cite{graham-etal-2013-continuous}, \textit{Human-mediated Translation Edit Rate} (HTER) \cite{Snover06astudy}, and metrics compliant with the \textit{Multidimensional Quality Metric} framework \cite{mqm}. Inspired by recent work on Quality Estimation (QE) that demonstrated that it is possible to achieve high levels of correlation with human judgements even without a reference translation \cite{fonseca-etal-2019-findings}, we propose a novel approach for incorporating the source-language input into our MT evaluation models. Traditionally, only QE models have made use of the source input, whereas MT evaluation metrics rely instead on the reference translation. We show that using a multilingual embedding space allows us to leverage information from all three inputs, and we demonstrate the value added by the source as input to our MT evaluation models. To illustrate the effectiveness and flexibility of the {\sc Comet} framework, we train three models that estimate different types of human judgements and show promising progress towards both better correlation at segment level and robustness to high-quality MT.
We will release both the {\sc Comet} framework and the trained MT evaluation models described in this paper to the research community upon publication. \section{Model Architectures} \label{sec:model} Human judgements of MT quality usually come in the form of segment-level scores, such as DA, MQM and HTER. For DA, it is common practice to convert scores into relative rankings ({\small DA}RR) when the number of annotations per segment is limited \cite{bojar-etal-2017-results, ma-etal-2018-results, ma-etal-2019-results}. This means that, for two MT hypotheses $h_i$ and $h_j$ of the same source $s$, if the DA score assigned to $h_i$ is higher than the score assigned to $h_j$, $h_i$ is regarded as a ``better'' hypothesis.\footnote{In the WMT Metrics Shared Task, if the difference between the DA scores is not higher than 25 points, those segments are excluded from the {\scriptsize DA}RR data.} To accommodate these differences, our framework supports two distinct architectures: the {\bf Estimator model} and the {\bf Translation Ranking model}. The fundamental difference between them is the training objective. While the Estimator is trained to regress directly on a quality score, the Translation Ranking model is trained to minimize the distance between a ``better'' hypothesis and both its corresponding reference and its original source. Both models are composed of a cross-lingual encoder and a pooling layer. \subsection{Cross-lingual Encoder} \label{ssec:encoder} The primary building block of all the models in our framework is a pretrained, cross-lingual model such as multilingual BERT \cite{devlin-etal-2019-bert}, XLM \cite{NIPS2019_8928} or XLM-RoBERTa \cite{conneau2019unsupervised}. These models contain several transformer encoder layers that are trained to reconstruct masked tokens by uncovering the relationship between those tokens and the surrounding ones. When trained with data from multiple languages, this pretraining objective has been found to be highly effective in cross-lingual tasks such as document classification and natural language inference \cite{conneau2019unsupervised}, generalizing well to unseen languages and scripts \citep{pires-etal-2019-multilingual}. For the experiments in this paper, we rely on XLM-RoBERTa (base) as our encoder model. Given an input sequence $x = \left[x_0, x_1, ..., x_n\right]$, the encoder produces an embedding $\bm{e}_{j}^{(\ell)}$ for each token $x_j$ and each layer $\ell \in \{0,1,...,k\}$. In our framework, we apply this process to the source, MT hypothesis, and reference in order to map them into a shared feature space. \subsection{Pooling Layer} \label{ssec:pooling} The embeddings generated by the last layer of the pretrained encoders are usually used for fine-tuning models to new tasks. However, \citet{tenney-etal-2019-bert} showed that different layers within the network can capture linguistic information that is relevant for different downstream tasks. In the case of MT evaluation, \citet{ZhangBERTScore} showed that different layers can achieve different levels of correlation and that utilizing only the last layer often results in inferior performance. In this work, we follow the approach described in \citet{peters-etal-2018-deep} and pool information from the most important encoder layers into a single embedding for each token, $\bm{e}_{j}$, by using a layer-wise attention mechanism. This embedding is then computed as: \begin{equation} \label{eq:attention} \bm{e}_{j} = \mu \bm{E}_{j}^\top \bm{\alpha} \end{equation} where $\mu$ is a trainable weight coefficient, $\bm{E}_{j} = [\bm{e}_{j}^{(0)}, \bm{e}_{j}^{(1)}, \dots, \bm{e}_{j}^{(k)}]$ corresponds to the vector of layer embeddings for token $x_j$, and $\bm{\alpha} = \textrm{softmax} ([\alpha^{(0)}, \alpha^{(1)}, \dots, \alpha^{(k)}])$ is a vector of layer-wise trainable weights. In order to avoid overfitting to the information contained in any single layer, we use layer dropout \cite{kondratyuk-straka-2019-75}, in which with probability $p$ the weight $\alpha^{(i)}$ is set to $-\infty$. Finally, as in \cite{reimers-gurevych-2019-sentence}, we apply average pooling to the resulting word embeddings to derive a sentence embedding for each segment.
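As a rough illustration, the pooling layer could be implemented along the following lines in PyTorch. This is a minimal sketch, not the released implementation; the class name, tensor layout, and dropout handling are assumptions.

\begin{verbatim}
import torch
import torch.nn as nn

class LayerwiseAttentionPool(nn.Module):
    """Combine embeddings from all encoder layers (Eq. 1) and
    mean-pool over tokens to get one sentence embedding."""
    def __init__(self, num_layers, layer_dropout=0.1):
        super().__init__()
        self.alpha = nn.Parameter(torch.zeros(num_layers))  # set to zero
        self.mu = nn.Parameter(torch.ones(1))               # trainable scalar
        self.layer_dropout = layer_dropout

    def forward(self, layers, mask):
        # layers: (num_layers, batch, seq, dim); mask: (batch, seq)
        alpha = self.alpha
        if self.training and self.layer_dropout > 0:
            drop = torch.rand_like(alpha) < self.layer_dropout
            alpha = alpha.masked_fill(drop, float("-inf"))
        w = torch.softmax(alpha, dim=0)
        # layer-wise attention over encoder layers
        tokens = self.mu * torch.einsum("l,lbsd->bsd", w, layers)
        # average pooling over non-padded tokens
        m = mask.unsqueeze(-1).float()
        return (tokens * m).sum(1) / m.sum(1).clamp(min=1.0)
\end{verbatim}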
\subsection{Estimator Model} \label{ssec:estimator} \begin{figure*} \centering \begin{minipage}{.48\textwidth} \centering \includegraphics[width=1.0\linewidth]{images/small_estimator.jpg} \caption{Estimator model architecture. The source, hypothesis and reference are independently encoded using a pretrained cross-lingual encoder. The resulting word embeddings are then passed through a pooling layer to create a sentence embedding for each segment. Finally, the resulting sentence embeddings are combined and concatenated into a single vector that is passed to a feed-forward regressor. The entire model is trained by minimizing the Mean Squared Error (MSE).} \label{fig:estimator} \end{minipage}% \hfill \begin{minipage}{.48\textwidth} \centering \vspace{25pt} \includegraphics[width=1.0\linewidth]{images/small_ranker.jpg} \caption{Translation Ranking model architecture. This architecture receives 4 segments: the source, the reference, a ``better'' hypothesis, and a ``worse'' one. These segments are independently encoded using a pretrained cross-lingual encoder and a pooling layer on top. Finally, using the triplet margin loss \cite{SchroffKP15}, we optimize the resulting embedding space to minimize the distance between the ``better'' hypothesis and the ``anchors'' (source and reference).} \label{fig:ranking_model} \end{minipage} \end{figure*} Given a $d$-dimensional sentence embedding for the source, the hypothesis, and the reference, we adopt the approach proposed in RUSE \cite{shimanaka-etal-2018-ruse} and extract the following combined features: \begin{itemize} \item Element-wise source product: $\bm{h} \odot \bm{s}$ \item Element-wise reference product: $\bm{h} \odot \bm{r}$ \item Absolute element-wise source difference: $|\bm{h} - \bm{s}|$ \item Absolute element-wise reference difference: $|\bm{h} - \bm{r}|$ \end{itemize} These combined features are then concatenated to the reference embedding $\bm{r}$ and hypothesis embedding $\bm{h}$ into a single vector $\bm{x} = [\bm{h}; \bm{r}; \bm{h} \odot \bm{s}; \bm{h} \odot \bm{r}; |\bm{h} - \bm{s}|; |\bm{h} - \bm{r}|]$ that serves as input to a feed-forward regressor. The strength of these features lies in highlighting the differences between embeddings in the semantic feature space. The model is then trained to minimize the mean squared error between the predicted scores and the quality assessments (DA, HTER or MQM). Figure~\ref{fig:estimator} illustrates the proposed architecture.
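A compact sketch of this feature combination and regressor head might look as follows; the hidden sizes are hypothetical (the actual hyper-parameters are listed in the Appendices), and 768 is the dimension of XLM-RoBERTa (base) embeddings.

\begin{verbatim}
import torch
import torch.nn as nn

def combine_features(h, s, r):
    """Build x = [h; r; h*s; h*r; |h-s|; |h-r|] from sentence
    embeddings of hypothesis, source and reference."""
    return torch.cat(
        [h, r, h * s, h * r, (h - s).abs(), (h - r).abs()], dim=-1)

# Feed-forward regressor over the combined features, trained with
# MSE against DA / HTER / MQM scores (layer sizes hypothetical).
regressor = nn.Sequential(
    nn.Linear(6 * 768, 1024), nn.Tanh(), nn.Linear(1024, 1))
loss_fn = nn.MSELoss()
\end{verbatim}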
Note that we chose not to include the raw source embedding ($\bm{s}$) in our concatenated input. Early experimentation revealed that the value added by the source embedding as an extra input feature to our regressor was negligible at best. A variation on our HTER estimator model trained with the vector $\bm{x} = [\bm{h}; \bm{s}; \bm{r}; \bm{h} \odot \bm{s}; \bm{h} \odot \bm{r}; |\bm{h} - \bm{s}|; |\bm{h} - \bm{r}|]$ as input to the feed-forward only succeeded in boosting segment-level performance in 8 of the 18 language pairs outlined in section \ref{sec:results} below, and the average improvement in Kendall's Tau in those settings was +0.0009. As noted in \citet{zhao2020limitations}, while cross-lingual pretrained models are adaptive to multiple languages, the feature space between languages is poorly aligned. On this basis, we decided in favor of excluding the source embedding, on the intuition that the most important information comes from the reference embedding and that reducing the feature space would allow the model to focus on the most relevant information. This does not, however, negate the general value of the source to our model: where we include combination features such as $\bm{h} \odot \bm{s}$ and $|\bm{h} - \bm{s}|$, we do note gains in correlation, as explored further in section \ref{sect:source-value} below. \subsection{Translation Ranking Model} \label{ssec:ranking} Our Translation Ranking model (Figure \ref{fig:ranking_model}) receives as input a tuple $\chi = (s, h^{+}, h^{-}, r)$ where $h^{+}$ denotes a hypothesis that was ranked higher than another hypothesis $h^{-}$. We then pass $\chi$ through our cross-lingual encoder and pooling layer to obtain a sentence embedding for each segment in $\chi$. Finally, using the embeddings $\{\bm{s}, \bm{h^{+}}, \bm{h^{-}}, \bm{r}\}$, we compute the triplet margin loss \cite{SchroffKP15} in relation to the source and reference: \begin{equation} \label{eq:tloss} L(\chi) = L(\bm{s}, \bm{h^{+}}, \bm{h^{-}}) + L(\bm{r}, \bm{h^{+}}, \bm{h^{-}}) \end{equation} where: \begin{equation} \label{eq:tloss1} L(\bm{s}, \bm{h^{+}}, \bm{h^{-}}) = \max\{0,\; d(\bm{s}, \bm{h^{+}}) - d(\bm{s}, \bm{h^{-}}) + \epsilon\} \end{equation} \begin{equation} \label{eq:tloss2} L(\bm{r}, \bm{h^{+}}, \bm{h^{-}}) = \max\{0,\; d(\bm{r}, \bm{h^{+}}) - d(\bm{r}, \bm{h^{-}}) + \epsilon\} \end{equation} Here, $d(\bm{u}, \bm{v})$ denotes the Euclidean distance between $\bm{u}$ and $\bm{v}$, and $\epsilon$ is a margin. Thus, during training the model optimizes the embedding space so that the distance between the anchors ($\bm{s}$ and $\bm{r}$) and the ``worse'' hypothesis $\bm{h^{-}}$ is greater by at least $\epsilon$ than the distance between the anchors and the ``better'' hypothesis $\bm{h^{+}}$. During inference, the described model receives a triplet $(s, \hat{h}, r)$ with only one hypothesis. The quality score assigned to $\hat{h}$ is the harmonic mean between the distance to the source $d(\bm{s}, \bm{\hat{h}})$ and the distance to the reference $d(\bm{r}, \bm{\hat{h}})$: \begin{equation} \label{eq:ranking_inference} f(s, \hat{h}, r) = \frac{2 \times d(\bm{r}, \bm{\hat{h}}) \times d(\bm{s}, \bm{\hat{h}})}{d(\bm{r}, \bm{\hat{h}}) + d(\bm{s}, \bm{\hat{h}})} \end{equation} Finally, we convert the resulting distance into a similarity score bounded between 0 and 1 as follows: \begin{equation} \label{eq:ranking_similarity} \hat{f}(s, \hat{h}, r) = \frac{1}{1+f(s, \hat{h}, r)} \end{equation}
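The training loss and the inference rule above can be sketched in a few lines of PyTorch; this is purely illustrative, and the margin value and function names are assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def ranking_loss(s, h_pos, h_neg, r, eps=1.0):
    """Triplet margin loss (Eqs. 2-4), with the source and the
    reference embeddings both acting as anchors."""
    return (F.triplet_margin_loss(s, h_pos, h_neg, margin=eps, p=2)
            + F.triplet_margin_loss(r, h_pos, h_neg, margin=eps, p=2))

def predict_score(s, h, r):
    """Inference (Eqs. 5-6): harmonic mean of the distances to the
    source and the reference, mapped to a similarity in (0, 1]."""
    d_src = torch.norm(s - h, dim=-1)
    d_ref = torch.norm(r - h, dim=-1)
    f = 2 * d_ref * d_src / (d_ref + d_src).clamp(min=1e-8)
    return 1.0 / (1.0 + f)
\end{verbatim}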
\section{Corpora} \label{sec:Corpora} To demonstrate the effectiveness of our described model architectures (section \ref{sec:model}), we train three MT evaluation models, each targeting a different type of human judgment. To train these models, we use data from three different corpora: the QT21 corpus, the {\small DA}RR data from the WMT Metrics shared task (2017 to 2019), and a proprietary MQM-annotated corpus. \subsection{The QT21 corpus} \label{sec:qt21} The QT21 corpus is a publicly available\footnote{QT21 data: \url{https://lindat.mff.cuni.cz/repository/xmlui/handle/11372/LRT-2390}} dataset containing industry-generated sentences from the information technology and life sciences domains \cite{specia-etal_MTSummit:2017}. This corpus contains a total of 173K tuples with source sentence, respective human-generated reference, MT hypothesis (either from a phrase-based statistical MT system or from a neural MT system), and post-edited MT (PE). The language pairs represented in this corpus are: English to German (en-de), Latvian (en-lv) and Czech (en-cs), and German to English (de-en). The HTER score is obtained by computing the translation edit rate (TER) \cite{Snover06astudy} between the MT hypothesis and the corresponding PE. Finally, after computing the HTER for each MT hypothesis, we built a training dataset $D = \{s_i, h_i, r_i, y_i\}_{i=1}^N$, where $s_i$ denotes the source text, $h_i$ denotes the MT hypothesis, $r_i$ the reference translation, and $y_i$ the HTER score for the hypothesis $h_i$. In this manner we seek to learn a regression $f(s, h, r) \rightarrow y$ that predicts the human effort required to correct the hypothesis by looking at the source, hypothesis, and reference (but not the post-edited hypothesis). \subsection{The WMT {\small DA}RR corpus} \label{sec:daRR} Since 2017, the organizers of the WMT News Translation Shared Task \cite{barrault-etal-2019-findings} have collected human judgements in the form of adequacy DAs \cite{graham-etal-2013-continuous, graham-etal-2014-machine, graham_baldwin_moffat_zobel_2017}. These DAs are then mapped into relative rankings ({\small DA}RR) \cite{ma-etal-2019-results}. The resulting data for each year (2017-19) form a dataset $D = \{s_i, h_i^+, h_i^-, r_i\}_{i=1}^N$ where $h_i^+$ denotes a ``better'' hypothesis and $h_i^-$ denotes a ``worse'' one. Here we seek to learn a scoring function $g$ such that the score assigned to $h_i^+$ is strictly higher than the score assigned to $h_i^-$ ($g(s_i, h_i^+, r_i) > g(s_i, h_i^-, r_i)$).
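For concreteness, the mapping from DA annotations to {\small DA}RR pairs, using the 25-point threshold mentioned in section 2, can be sketched as follows; the data layout is hypothetical.

\begin{verbatim}
def build_darr_pairs(annotations, min_gap=25.0):
    """Turn DA-scored hypotheses for one source segment into
    relative-ranking pairs; `annotations` is a list of
    (hypothesis, da_score) tuples."""
    pairs = []
    for i, (h_i, y_i) in enumerate(annotations):
        for h_j, y_j in annotations[i + 1:]:
            if abs(y_i - y_j) > min_gap:  # 25 DA points in WMT
                pairs.append((h_i, h_j) if y_i > y_j
                             else (h_j, h_i))  # (better, worse)
    return pairs
\end{verbatim}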
This data\footnote{The raw data for each year of the WMT Metrics shared task is publicly available on the results page (2019 example: \url{http://www.statmt.org/wmt19/results.html}). Note, however, that the \texttt{README} files highlight that this data is not well documented and that the scripts occasionally require custom utilities that are not available.} contains a total of 24 high- and low-resource language pairs such as Chinese to English (zh-en) and English to Gujarati (en-gu). \subsection{The MQM corpus} \label{sec:mqm} The MQM corpus is a proprietary internal database of MT-generated translations of customer support chat messages that were annotated according to the guidelines set out in \citet{mqm_guidelines}. This data contains a total of 12K tuples, covering 12 language pairs from English to: German (en-de), Spanish (en-es), Latin-American Spanish (en-es-latam), French (en-fr), Italian (en-it), Japanese (en-ja), Dutch (en-nl), Portuguese (en-pt), Brazilian Portuguese (en-pt-br), Russian (en-ru), Swedish (en-sv), and Turkish (en-tr). Note that in this corpus English is always the source language, never the target language. Each tuple consists of a source sentence, a human-generated reference, an MT hypothesis, and its MQM score annotated by one (or more) professional editors. The MQM metric referred to throughout this paper is an internal metric defined in accordance with the MQM framework \citep{mqm}. Errors are annotated under an internal typology with three main error types: `Style', `Fluency' and `Accuracy'. Our MQM scores range from $-\infty$ to 100 and are defined as: \begin{equation} \label{eq:mqm} \text{\footnotesize MQM} = 100 - \frac{{I}_{\text{\scriptsize Minor}} + 5 \times {I}_{\text{\scriptsize Major}} + 10 \times {I}_{\text{\scriptsize Crit.}}}{\text{\footnotesize Sentence Length}} \times 100 \end{equation} where ${I}_{\text{\scriptsize Minor}}$ denotes the number of minor errors, ${I}_{\text{\scriptsize Major}}$ the number of major errors and ${I}_{\text{\scriptsize Crit.}}$ the number of critical errors. Our MQM metric takes into account the severity of the errors identified in the MT hypothesis, leading to a more fine-grained metric than HTER or DA. When used in our experiments, these values were divided by 100 and truncated at 0. As in section~\ref{sec:qt21}, we constructed a training dataset $D = \{s_i, h_i, r_i, y_i\}_{i=1}^N$, where $s_i$ denotes the source text, $h_i$ denotes the MT hypothesis, $r_i$ the reference translation, and $y_i$ the MQM score for the hypothesis $h_i$.
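Assuming the formula above, the sentence-level target used for training can be computed with a short routine; the function below is a hypothetical sketch.

\begin{verbatim}
def mqm_score(n_minor, n_major, n_critical, sentence_length):
    """Severity-weighted MQM (Eq. 7), normalized by sentence
    length, then divided by 100 and truncated at 0 as done for
    the training targets."""
    penalty = n_minor + 5 * n_major + 10 * n_critical
    score = 100.0 - penalty / sentence_length * 100.0
    return max(score / 100.0, 0.0)
\end{verbatim}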
\begin{table*}[!ht] \centering \caption{Kendall's Tau ($\tau$) correlations on language pairs with English as source for the WMT19 Metrics {\footnotesize DA}RR corpus. For {\sc Bertscore} we report results with the default encoder model for a complete comparison, but also with XLM-RoBERTa (base) for a fair comparison with our models. The values reported for {\sc YiSi-1} are taken directly from the shared task paper \cite{ma-etal-2019-results}.} \label{tab:english-to-x2019} \begin{tabular}{lcccccccc} \hline \textbf{Metric} & \textbf{en-cs} & \textbf{en-de} & \textbf{en-fi} & \textbf{en-gu} & \textbf{en-kk} & \textbf{en-lt} & \textbf{en-ru} & \textbf{en-zh} \\ \specialrule{1.5pt}{1pt}{1pt} {\sc Bleu} & 0.364 & 0.248 & 0.395 & 0.463 & 0.363 & 0.333 & 0.469 & 0.235 \\ {\sc chrF} & 0.444 & 0.321 & 0.518 & 0.548 & 0.510 & 0.438 & 0.548 & 0.241 \\ {\sc YiSi-1} & 0.475 & 0.351 & 0.537 & 0.551 & 0.546 & 0.470 & 0.585 & 0.355 \\ {\sc Bertscore} {\footnotesize ({default})} & 0.500 & 0.363 & 0.527 & 0.568 & 0.540 & 0.464 & 0.585 & 0.356 \\ {\sc Bertscore} {\footnotesize ({xlmr-base})} & 0.503 & 0.369 & 0.553 & 0.584 & 0.536 & 0.514 & 0.599 & 0.317 \\ \hline {\sc Comet-hter} & 0.524 & 0.383 & 0.560 & 0.552 & 0.508 & 0.577 & 0.539 & 0.380 \\ {\sc Comet-mqm} & 0.537 & 0.398 & 0.567 & 0.564 & 0.534 & 0.574 & \textbf{0.615} & 0.378 \\ {\sc Comet-rank} & \textbf{0.603} & \textbf{0.427} & \textbf{0.664} & \textbf{0.611} & \textbf{0.693} & \textbf{0.665} & 0.580 & \textbf{0.449} \\ \hline \end{tabular} \end{table*} \section{Experiments} \label{sec:experiments} We train two versions of the Estimator model described in section \ref{ssec:estimator}: one that regresses on HTER (\textbf{{\sc Comet-hter}}), trained with the QT21 corpus, and another that regresses on our proprietary implementation of MQM (\textbf{{\sc Comet-mqm}}), trained with our internal MQM corpus. For the Translation Ranking model, described in section \ref{ssec:ranking}, we train with the WMT {\small DA}RR corpus from 2017 and 2018 (\textbf{{\sc Comet-rank}}). In this section, we introduce the training setup for these models and the corresponding evaluation setup. \subsection{Training Setup} \label{ssec:estimator_setup} The two versions of the Estimators (\textbf{{\sc Comet-HTER/MQM}}) share the same training setup and hyper-parameters (details are included in the Appendices). For training, we load the pretrained encoder and initialize both the pooling layer and the feed-forward regressor. Whereas the layer-wise scalars $\bm{\alpha}$ from the pooling layer are initially set to zero, the weights of the feed-forward are initialized randomly. During training, we divide the model parameters into two groups: the encoder parameters, which include the encoder model and the scalars from $\bm{\alpha}$, and the regressor parameters, which include the parameters of the top feed-forward network. We apply gradual unfreezing and discriminative learning rates \cite{howard-ruder-2018-universal}, meaning that the encoder model is frozen for one epoch while the feed-forward is optimized with a learning rate of $3\mathrm{e}{-5}$. After the first epoch, the entire model is fine-tuned, but the learning rate for the encoder parameters is set to $1\mathrm{e}{-5}$ in order to avoid catastrophic forgetting. In contrast to the two Estimators, for the \textbf{{\sc Comet-rank}} model we fine-tune from the outset. Furthermore, since this model does not add any new parameters on top of XLM-RoBERTa (base) other than the layer scalars $\bm{\alpha}$, we use a single learning rate of $1\mathrm{e}{-5}$ for the entire model.
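In PyTorch, the two parameter groups and the gradual unfreezing schedule could be set up roughly as follows; the optimizer choice and helper names are assumptions, and the exact hyper-parameters are those listed in the Appendices.

\begin{verbatim}
import torch

def make_optimizer(encoder, head, encoder_lr=1e-5, head_lr=3e-5):
    """Discriminative learning rates: one group for the encoder
    (including the layer scalars), one for the feed-forward head."""
    return torch.optim.Adam([
        {"params": encoder.parameters(), "lr": encoder_lr},
        {"params": head.parameters(), "lr": head_lr},
    ])

def set_encoder_frozen(encoder, frozen=True):
    """Gradual unfreezing: keep the encoder frozen for the first
    epoch, then call with frozen=False for the remaining epochs."""
    for p in encoder.parameters():
        p.requires_grad = not frozen
\end{verbatim}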
\subsection{Evaluation Setup} \label{ssec:metrics} We use the test data and setup of the WMT 2019 Metrics Shared Task \cite{ma-etal-2019-results} in order to compare the {\sc Comet} models with the top performing submissions of the shared task and other recent state-of-the-art metrics such as {\sc Bertscore} and {\sc Bleurt}.\footnote{To ease future research, we will also provide, within our framework, detailed instructions and scripts to run other metrics such as {\sc chrF}, {\sc Bleu}, {\sc Bertscore}, and {\sc Bleurt}.} The evaluation method used is the official Kendall's Tau-like formulation, $\tau$, from the WMT 2019 Metrics Shared Task \cite{ma-etal-2019-results}, defined as: \begin{equation} \label{eq:kendall} \tau = \frac{\textit{Concordant} - \textit{Discordant}}{\textit{Concordant} + \textit{Discordant}} \end{equation} where \textit{Concordant} is the number of times a metric assigns a higher score to the ``better'' hypothesis $h^+$ and \textit{Discordant} is the number of times a metric assigns a higher score to the ``worse'' hypothesis $h^-$ or the scores assigned to both hypotheses are the same. As mentioned in the findings of \cite{ma-etal-2019-results}, segment-level correlations of all submitted metrics were frustratingly low. Furthermore, all submitted metrics exhibited a dramatic lack of ability to correctly rank strong MT systems. To evaluate whether our new MT evaluation models better address this issue, we follow the evaluation setup used in the analysis presented in \cite{ma-etal-2019-results}, where correlation levels are examined for portions of the {\small DA}RR data that include only the top 10, 8, 6 and 4 MT systems.
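A minimal sketch of this statistic, assuming each evaluated {\small DA}RR pair is stored as a (score for the better hypothesis, score for the worse hypothesis) tuple:

\begin{verbatim}
def kendall_tau_like(pairs):
    """WMT-style Kendall's Tau (Eq. 8): ties are counted as
    discordant, following the shared task definition."""
    concordant = sum(1 for better, worse in pairs if better > worse)
    discordant = len(pairs) - concordant
    return (concordant - discordant) / (concordant + discordant)
\end{verbatim}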
\begin{table*}[!ht] \centering \caption{Kendall's Tau ($\tau$) correlations on language pairs with English as a target for the WMT19 Metrics {\footnotesize DA}RR corpus. As with {\sc Bertscore}, for {\sc Bleurt} we report results for two models: the base model, which is comparable in size to the encoder we used, and the large model, which is twice the size.} \label{tab:x-to-english2019} \begin{tabular}{llllllll} \hline \textbf{Metric} & \textbf{de-en} & \textbf{fi-en} & \textbf{gu-en} & \textbf{kk-en} & \textbf{lt-en} & \textbf{ru-en} & \textbf{zh-en} \\ \specialrule{1.5pt}{1pt}{1pt} {\sc Bleu} & 0.053 & 0.236 & 0.194 & 0.276 & 0.249 & 0.177 & 0.321 \\ {\sc chrF} & 0.123 & 0.292 & 0.240 & 0.323 & 0.304 & 0.115 & 0.371 \\ {\sc YiSi-1} & 0.164 & 0.347 & 0.312 & \textbf{0.440} & 0.376 & 0.217 & 0.426 \\ {\sc Bertscore} {\footnotesize ({default})} & 0.190 & 0.354 & 0.292 & 0.351 & 0.381 & 0.221 & 0.432 \\ {\sc Bertscore} {\footnotesize ({xlmr-base})} & 0.171 & 0.335 & 0.295 & 0.354 & 0.356 & 0.202 & 0.412 \\ {\sc Bleurt} {\footnotesize ({base-128})} & 0.171 & 0.372 & 0.302 & 0.383 & 0.387 & 0.218 & 0.417 \\ {\sc Bleurt} {\footnotesize ({large-512})} & 0.174 & 0.374 & 0.313 & 0.372 & 0.388 & \textbf{0.220} & 0.436 \\ \hline {\sc Comet-hter} & 0.185 & 0.333 & 0.274 & 0.297 & 0.364 & 0.163 & 0.391 \\ {\sc Comet-mqm} & \textbf{0.207} & 0.343 & 0.282 & 0.339 & 0.368 & 0.187 & 0.422 \\ {\sc Comet-rank} & 0.202 & \textbf{0.399} & \textbf{0.341} & 0.358 & \textbf{0.407} & 0.180 & \textbf{0.445} \\ \hline \end{tabular} \end{table*} \section{Results} \label{sec:results} \subsection{From English into X} \label{ssec:from-English} Table \ref{tab:english-to-x2019} shows results for all eight language pairs with English as source. We contrast our three {\sc Comet} models against baseline metrics such as {\sc Bleu} and {\sc chrF}, the 2019 task-winning metric {\sc YiSi-1}, as well as the more recent {\sc Bertscore}. We observe that across the board our three models trained with the {\sc Comet} framework outperform, often by significant margins, all other metrics. Our {\small DA}RR Ranker model outperforms the two Estimators in seven out of eight language pairs. Also, even though the MQM Estimator is trained on only 12K annotated segments, it performs roughly on par with the HTER Estimator for most language pairs, and outperforms all the other metrics in en-ru. \subsection{From X into English} \label{sect:into-english} Table \ref{tab:x-to-english2019} shows results for the seven into-English language pairs. Again, we contrast our three {\sc Comet} models against baseline metrics such as {\sc Bleu} and {\sc chrF}, the 2019 task-winning metric {\sc YiSi-1}, as well as the recently published metrics {\sc Bertscore} and {\sc Bleurt}. As in Table \ref{tab:english-to-x2019}, the {\small DA}RR model shows strong correlations with human judgements, outperforming the recently proposed English-specific {\sc Bleurt} metric in five out of seven language pairs. Again, the MQM Estimator shows surprisingly strong results despite the fact that this model was trained with data that did not include English as a target. Although the encoder used in our trained models is highly multilingual, we hypothesise that this powerful ``zero-shot'' result is due to the inclusion of the source in our models. \subsection{Language pairs not involving English} \label{sect:no-english} \begin{table}[!ht] \centering \caption{Kendall's Tau ($\tau$) correlations on language pairs not involving English for the WMT19 Metrics {\small DA}RR corpus.} \label{tab:not-english2019} \begin{tabular}{llll} \hline \textbf{Metric} & \textbf{de-cs} & \textbf{de-fr} & \textbf{fr-de} \\ \specialrule{1.5pt}{1pt}{1pt} {\sc Bleu} & 0.222 & 0.226 & 0.173 \\ {\sc chrF} & 0.341 & 0.287 & 0.274 \\ {\sc YiSi-1} & 0.376 & 0.349 & 0.310 \\ {\sc Bertscore} {\footnotesize ({default})} & 0.358 & 0.329 & 0.300 \\ {\sc Bertscore} {\footnotesize ({xlmr-base})} & 0.386 & 0.336 & 0.309 \\ \hline {\sc Comet-hter} & 0.358 & 0.397 & 0.315 \\ {\sc Comet-mqm} & 0.386 & 0.367 & 0.296 \\ {\sc Comet-rank} & \textbf{0.389} & \textbf{0.444} & \textbf{0.331} \\ \hline \end{tabular} \end{table} All three of our {\sc Comet} models were trained on data involving English (either as a source or as a target). Nevertheless, to demonstrate that our metrics generalize well, we test them on the three WMT 2019 language pairs that include English in neither the source nor the target. As can be seen in Table \ref{tab:not-english2019}, our results are consistent with the observations in Tables \ref{tab:english-to-x2019} and \ref{tab:x-to-english2019}. \subsection{Robustness to High-Quality MT} \label{sect:top-systems} For this analysis, we use the {\small DA}RR corpus from the 2019 Shared Task and evaluate on the subset of the data from the top performing MT systems for each language pair. We included language pairs for which we could retrieve data for at least ten different MT systems (i.e. all but kk-en and gu-en). We contrast against the strong, recently proposed {\sc Bertscore} and {\sc Bleurt}, with {\sc Bleu} as a baseline. Results are presented in Figure \ref{fig:Top models}.
For language pairs where English is the target, our three models are either better than or competitive with all others; where English is the source, we note that in general our metrics exceed the performance of the others. Even the MQM Estimator, trained with only 12K segments, is competitive, which highlights the power of our proposed framework. \begin{figure}[ht!] \centering \begin{tikzpicture}[scale=0.8, transform shape] \pgfplotsset{every axis legend/.append style={ at={(0.5,1.03)}, anchor=south}} \begin{axis}[ xlabel=Top models from X to English, ylabel=Kendall Tau ($\tau$), xticklabels={ , , All, 10 , 8, 6, 4}, legend columns=2] \addplot[color=Unbabel7,mark=x] coordinates { (1, 0.327) (2, 0.240) (3, 0.198) (4, 0.165) (5 , 0.134) }; \addplot[color=red,mark=x] coordinates { (1, 0.207) (2, 0.115) (3, 0.07) (4, 0.062) (5 , 0.026) }; \addplot[color=Unbabel2,mark=x] coordinates { (1, 0.305) (2, 0.227) (3, 0.192) (4, 0.174) (5 , 0.150) }; \addplot[color=Unbabel5,mark=x] coordinates { (1, 0.316) (2, 0.230) (3, 0.192) (4, 0.167) (5 , 0.126) }; \addplot[color=Unbabel1,mark=x] coordinates { (1, 0.287) (2, 0.215) (3, 0.175) (4, 0.159) (5 , 0.143) }; \addplot[color=Unbabel4,mark=x] coordinates { (1, 0.318) (2, 0.227) (3, 0.175) (4, 0.151) (5, 0.104) }; \legend{{\sc Comet-rank},{\sc Bleu},{\sc Comet-mqm},{\sc Bertscore},{\sc Comet-hter}, {\sc Bleurt}} \end{axis} \end{tikzpicture} \begin{tikzpicture}[scale=0.8, transform shape] \begin{axis}[ xlabel=Top models from English to X, ylabel=Kendall Tau ($\tau$), xticklabels={ , , All, 10 , 8, 6, 4}, ] \addplot[color=red,mark=x] coordinates { (1, 0.363) (2, 0.170) (3, 0.106) (4, 0.068) (5 , 0.045) }; \addplot[color=Unbabel5,mark=x] coordinates { (1, 0.488) (2, 0.370) (3, 0.296) (4, 0.256) (5 , 0.226) }; \addplot[color=Unbabel7,mark=x] coordinates { (1, 0.587) (2, 0.442) (3, 0.355) (4, 0.326) (5 , 0.302) }; \addplot[color=Unbabel2,mark=x] coordinates { (1, 0.521) (2, 0.405) (3, 0.334) (4, 0.294) (5 , 0.260) }; \addplot[color=Unbabel1,mark=x] coordinates { (1, 0.503) (2, 0.393) (3, 0.318) (4, 0.271) (5 , 0.239) }; \end{axis} \end{tikzpicture} \caption{Metrics performance over all and the top (10, 8, 6, and 4) MT systems.} \label{fig:Top models} \end{figure} \subsection{The Importance of the Source} \begin{table*}[!ht] \centering \caption{Comparison between {\sc Comet-rank} (section \ref{ssec:ranking}) and a reference-only version thereof on WMT18 data. Both models were trained with WMT17 data, which means that the reference-only model is never exposed to English during training.} \label{tab:value-src} \begin{tabular}{lllllllll} \hline \textbf{Metric} & \textbf{en-cs} & \textbf{en-de} & \textbf{en-fi} & \textbf{en-tr} & \textbf{cs-en} & \textbf{de-en} & \textbf{fi-en} & \textbf{tr-en} \\ \specialrule{1.5pt}{1pt}{1pt} \multicolumn{1}{l|}{{\sc Comet-rank} \footnotesize ({ref.
only})} & 0.660 & 0.764 & 0.630 & \multicolumn{1}{l|}{0.539} & 0.249 & 0.390 & 0.159 & 0.128 \\ \multicolumn{1}{l|}{{\sc Comet-rank}} & 0.711 & 0.799 & 0.671 & \multicolumn{1}{l|}{0.563} & 0.356 & 0.542 & 0.278 & 0.260 \\ \hline \multicolumn{1}{l|}{$\Delta \tau$} & 0.051 & 0.035 & 0.041 & \multicolumn{1}{l|}{0.024} & \textbf{0.107} & \textbf{0.155} & \textbf{0.119} & \textbf{0.132} \\ \hline \end{tabular} \end{table*} \label{sect:source-value} To shed some light on the actual value and contribution of the source-language input to our models' ability to learn accurate predictions, we trained two versions of our {\small DA}RR Ranker model: one that uses only the reference, and another that uses both reference and source. Both models were trained using the WMT 2017 corpus, which only includes language pairs from English (en-de, en-cs, en-fi, en-tr). In other words, while English was never observed as a target language during training for either variant of the model, the training of the second variant includes English source embeddings. We then tested these two model variants on the WMT 2018 corpus for these language pairs and for the reversed directions (with the exception of en-cs, because cs-en does not exist for WMT 2018). The results in Table \ref{tab:value-src} clearly show that for the translation ranking architecture, including the source improves the overall correlation with human judgments. Furthermore, the inclusion of the source exposed the second variant of the model to English embeddings, which is reflected in a higher $\Delta \tau$ for the language pairs with English as a target. \section{Reproducibility} \label{sec:reproducibility} We will release both the code-base of the {\sc Comet} framework and the trained MT evaluation models described in this paper to the research community upon publication, along with the detailed scripts required to run all reported baselines.\footnote{These will be hosted at: \url{https://github.com/Unbabel/COMET}} All the models reported in this paper were trained on a single Tesla T4 (16GB) GPU. Moreover, our framework builds on top of PyTorch Lightning \cite{falcon2019pytorch}, a lightweight PyTorch wrapper created for maximal flexibility and reproducibility. \section{Related Work} \label{sec:literature-review} Classic MT evaluation metrics are commonly characterized as \textbf{$n$-gram matching metrics} because, using hand-crafted features, they estimate MT quality by counting the number and fraction of $n$-grams that appear simultaneously in a candidate translation hypothesis and one or more human references. Metrics such as {\sc Bleu} \cite{papineni-etal-2002-bleu}, {\sc Meteor} \cite{banerjee-lavie-meteor2009}, and {\sc chrF} \cite{popovic-2015-chrf} have been widely studied and improved \cite{koehn-etal-2007-moses, popovic-2017-chrf, denkowski-lavie-2011-meteor, guo-hu-2019-meteor}, but, by design, they usually fail to recognize and capture semantic similarity beyond the lexical level. In recent years, word embeddings \cite{NIPS2013_5021, pennington-etal-2014-glove, peters-etal-2018-deep, devlin-etal-2019-bert} have emerged as a commonly used alternative to $n$-gram matching for capturing word-level semantic similarity.
\textbf{Embedding-based metrics} like {\sc YiSi-1} \cite{lo-2019-yisi}, {\sc MoverScore} \cite{zhao-etal-2019-moverscore} and {\sc Bertscore} \cite{ZhangBERTScore} create soft alignments between reference and hypothesis in an embedding space and then compute a score that reflects the semantic similarity between those segments. However, human judgements such as DA and MQM capture much more than just semantic similarity, which imposes an upper bound on the correlation between human judgements and the scores produced by such metrics. \textbf{Learnable metrics} \cite{shimanaka-etal-2018-ruse, mathur-etal-2019-putting} attempt to directly optimize the correlation with human judgments, and have recently shown promising results. {\sc Bleurt} \cite{Sellam&das-bleurt}, a learnable metric based on BERT \cite{devlin-etal-2019-bert}, claims state-of-the-art performance for the last three years of the WMT Metrics Shared Task. Because {\sc Bleurt} builds on top of English-BERT \cite{devlin-etal-2019-bert}, it can only be used when English is the target language, which limits its applicability. Also, to the best of our knowledge, all previously proposed learnable metrics have focused on optimizing DA, which, due to a scarcity of annotators, can prove inherently noisy \cite{ma-etal-2019-results}. \textbf{Reference-less MT evaluation}, also known as Quality Estimation (QE), has historically often regressed on HTER for segment-level evaluation \cite{bojar-etal-2013-findings, bojar-etal-2014-findings, bojar-etal-2015-findings, bojar-etal-2016-findings, bojar-etal-2017-findings}. More recently, MQM has been used for document-level evaluation \cite{specia-etal-2018-findings, fonseca-etal-2019-findings}. By leveraging highly multilingual pretrained encoders such as multilingual BERT \cite{devlin-etal-2019-bert} and XLM \cite{NIPS2019_8928}, QE systems have been showing promising correlations with human judgements \cite{kepler-etal-2019-unbabels}. Concurrently, the OpenKiwi framework \cite{kepler-etal-2019-openkiwi} has made it easier for researchers to push the field forward and build stronger QE models. \section{Conclusions and Future Work} \label{sec:conclusions} In this paper we present {\sc Comet}, a novel neural framework for training MT evaluation models that can serve as automatic metrics and be easily adapted and optimized for different types of human judgements of MT quality. To showcase the effectiveness of our framework, we sought to address the challenges reported in the 2019 WMT Metrics Shared Task \cite{ma-etal-2019-results}. We trained three distinct models which achieve new state-of-the-art results for segment-level correlation with human judgments, and show a promising ability to better differentiate high-performing systems. One of the challenges of leveraging the power of pretrained models is the burden of their parameter count and inference time. A primary avenue for future work on {\sc Comet} will look at the impact of more compact solutions such as DistilBERT \citep{sanh2019distilbert}. Additionally, whilst we outline the potential importance of the source text above, we note that our {\sc Comet-rank} model weighs source and reference differently during inference but equally in its training loss function. Future work will investigate the optimality of this formulation and further examine the interdependence of the different inputs.
\section*{Acknowledgments} We are grateful to André Martins, Austin Matthews, Fabio Kepler, Daan Van Stigt, Miguel Vera, and the reviewers, for their valuable feedback and discussions. This work was supported in part by the P2020 Program through projects MAIA and Unbabel4EU, supervised by ANI under contract numbers 045909 and 042671, respectively.
\section{Introduction} \textit{Introduction.}\textemdash Event GW170817~\cite{TheLIGOScientific:2017qsa} marked not only the first direct detection of a binary neutron star (BNS) merger via gravitational waves but also the simultaneous detection of the short $\gamma$-ray burst (sGRB) GRB170817A \cite{2017GCN.21520....1V,2017GCN.21517....1K} and the kilonova AT 2017gfo, with its afterglow radiation in the radio, optical/IR, and X-ray bands. These detections constituted a golden moment in the era of multimessenger astronomy~\cite{GBM:2017lvd,Monitor:2017mdv, Abbott:2017wuw}. To investigate the different scenarios for jet formation triggering such electromagnetic (EM) events, fully general relativistic, magnetohydrodynamic (GRMHD) simulations are needed (for recent reviews see, e.g., \cite{Paschalidis:2016agf,Baiotti:2016qnr,Ciolfi:2020cpf}). The most common scenario for launching a magnetically-driven jet is a black hole-disk (BH-disk) remnant. BNS mergers that lead to hypermassive neutron star (HMNS) remnants inevitably collapse to BHs immersed in gaseous disks. These remnants are robust engines for jet launching~\cite{prs15,Ruiz:2018wah,Ruiz:2016rai,Ruiz:2017inq,Ruiz:2019ezy}. Their lifetimes are $\Delta t\simeq 150\ {\rm ms}$ and their outgoing Poynting luminosities $\sim 10^{52\pm 1}\ \rm erg/s$, both consistent with typical sGRBs~\cite{Bhat:2016odd,Lien:2016zny,Svinkin:2016fho,Ajello:2019zki}, as well as with the Blandford-Znajek (BZ) mechanism for launching jets and their associated Poynting luminosities \cite{BZeffect77,Thorne86}. The key requirement for the emergence of a jet is the existence of a large-scale poloidal magnetic field along the direction of the total orbital angular momentum of the BH-disk remnant~\cite{Ruiz:2020via}. Such a field arises when the NS is initially endowed with a dipolar magnetic field, whether or not it is confined to the interior of the NS. During the BNS merger and the HMNS phase, magnetic instabilities drive the magnetic energy to saturation levels~\cite{Kiuchi:2014hja}. Following the HMNS collapse to a BH, such magnetic fields launch a mildly relativistic, magnetically-driven outflow with a Lorentz factor $\Gamma_L\gtrsim 1.2$ confined inside a tightly wound magnetic funnel. This becomes an ``incipient jet'' once regions above the BH poles approach force-free values ($B^2/8\,\pi \rho_0\gg 1$). Here $B$ and $\rho_0$ are the strength of the magnetic field and the rest-mass density, respectively. For axisymmetric, Poynting-dominated jets, the maximum $\Gamma_L$ ultimately attained in the funnel is approximately $B^2/8\,\pi \rho_0$ \citep{B2_over_2RHO_yields_target_Lorentz_factor}. Therefore, incipient jets will become highly relativistic, as required by sGRB models~\cite{Zou2009}. The possibility of jet launching from a stable NS remnant has recently been investigated \cite{Ruiz:2017due,Ciolfi2019,Ciolfi:2020hgg}. In particular, Ref. \cite{Ruiz:2017due} presented $200\ \rm ms$-long numerical simulations of a stable (supramassive \cite{1992ApJ...398..203C}) NS remnant initially threaded by a dipolar magnetic field that extended from the stellar interior into its exterior. Such a stable NS remnant showed no evidence of jet formation, since the outflow confined in the funnel had $\Gamma_L\lesssim 1.03$ and $B^2/8\,\pi \rho_0\lesssim 1$, thereby lacking the force-free magnetosphere needed for a BZ-like mechanism to power a collimated jet \cite{Shapiro:2017cny}.
These results suggest that a supramassive NS remnant, which may arise and live arbitrarily long following the merger of a BNS, probably cannot be the progenitor of a sGRB. These results have been confirmed in Refs. \cite{Ciolfi2019,Ciolfi:2020hgg}, which reported the emergence of an outflow with $\Gamma_L\lesssim 1.05$ after $\gtrsim 212\ \rm ms$ following the merger of a magnetized low-mass BNS. However, it has been suggested that neutrino effects may help reduce the baryon load in the region above the poles of the NS, which may drive up the force-free parameter in the funnel~\cite{Mosta:2020hlh} and lead to jet formation. Several questions need to be addressed regarding the central engine that launches an incipient jet, as well as the nature of the jet itself. First, the membrane paradigm implies that a spinning BH immersed in a disk is a sufficient condition for jet formation~\cite{Thorne82,Thorne86}, but {\it is it also a necessary condition?} If yes, then a stable NS remnant (like a supramassive NS) cannot be the generator of such jets. If no, then {\it is a NS jet qualitatively the same as the one launched from BH-disks?} In particular, {\it can one still describe it as a BZ-like jet?} If not, {\it what are the main differences from a BZ-like jet?} It has been argued that, contrary to the membrane paradigm, the horizon is not the ``driving force'' behind the BZ mechanism but rather the ergoregion is \cite{Komissarov:2002dj,Komissarov:2004ms,2005MNRAS.359..801K, Ruiz:2012te}. Thus it may be possible that \textit{normal} NSs cannot launch a BZ jet, but \textit{ergostars} (NSs that contain ergoregions) can. If that is the case, {\it can one distinguish between BH-disk and ergostar jets?} To address the above questions, we employ our recently constructed, dynamically stable ergostars \cite{Tsokaros:2019mlz,Tsokaros:2020qju} and perform a series of GRMHD simulations of differentially rotating HMNSs with and without ergoregions to assess the effect of the ergoregion on launching a jet. At the same time, we make critical comparisons between the candidate jets and the ones generated by a BH-disk remnant. Our results suggest that an ergoregion neither facilitates nor inhibits the launching of outflows. A magnetically-driven outflow with a maximum value of $\Gamma_L\sim 2.5$ is launched whether or not an ergoregion is present (see left column in~Fig.~\ref{fig:bfield}). This outflow therefore is not consistent with sGRBs, which require flows to reach $\Gamma_L\gtrsim 20$~\cite{Zou2009}. In contrast to the BH-disk case, where the force-free parameter~$B^2/8\pi\rho_0$ above the BH poles reaches values of $\gtrsim 100$, the force-free parameter in the HMNS cases is only~$\lesssim 10$. We find that the Poynting luminosity of our evolved models is comparable to the values reported in \cite{Shibata_2011}, as well as to the BZ luminosity. This shows that the value of the luminosity by itself cannot be taken as a criterion to distinguish compact objects with masses in the range $3-5\ M_\odot$ (the so-called mass gap). Finally, the angular frequency ratio of the magnetic field lines around the HMNSs is at least twice the value $\Omega_F/\Omega_H\sim 0.5$ expected from the BZ mechanism~\citep{Blandford1977}. Thus our current simulations suggest that the BZ mechanism for launching relativistic jets operates only when a spinning BH is present, and that neither normal NSs nor ergostars can be the central engines that power sGRBs. \begin{center} \begin{table*} \caption{Equilibrium models. 
ER denotes the existence or not of an ergoregion. The parameter $\hat{A}=A/R_e$, where $R_e$ is the equatorial radius, determines the degree of differential rotation, $R_p/R_e$ is the ratio of polar to equatorial radius, $M_0$ is the rest mass, $M$ is the ADM mass, $J$ is the ADM angular momentum, $T/|W|$ is the ratio of kinetic to gravitational energy, $P_c$ is the rotational period corresponding to the central angular velocity $\Omega_c$, $\Omega_c/\Omega_s$ is the ratio of the central to the surface angular velocity, and $t_{\rm dyn}\sim 1/\sqrt{\rho}$ is the dynamical timescale.} \label{tab:idmodels} \begin{tabular}{lcccccccccccc} \hline\hline Model & EOS & ER & $\hat{A}^{-1}$ & $R_p/R_e$ & $M_0\ [M_\odot]$ & $M\ [M_\odot]$ & $R_e\ [{\rm km}]$ & $J/M^2$ & $T/|W|$ & $P_c/M$ & $\Omega_c/\Omega_s$ & $t_{\rm dyn}/M$\\ \hline NS1 & ALF2cc & \xmark & $0.2$ & $0.4688$ & $6.973$ & $5.587$ & $12.55$& $0.8929$ & $0.2423$ & $25.21$ & $1.359$ & $6.6$ \\ \hline ES1 & ALF2cc & \cmark & $0.2$ & $0.4531$ & $7.130$ & $5.709$ & $12.49$& $0.9035$ & $0.2501$ & $24.18$ & $1.378$ & $6.5$ \\ \hline NS2 & SLycc2 & \xmark & $0.3$ & $0.4688$ & $4.839$ & $3.944$ & $8.873$& $0.8666$ & $0.2329$ & $21.66$ & $1.707$ & $6.7$ \\ \hline ES2 & SLycc2 & \cmark & $0.3$ & $0.4531$ & $4.930$ & $4.017$ & $8.753$& $0.8759$ & $0.2403$ & $20.58$ & $1.743$ & $6.6$ \\ \hline\hline \end{tabular} \end{table*} \end{center} \textit{Numerical setup.}\textemdash The HMNS initial data are constructed with the GR rotating equilibrium code described in \cite{1992ApJ...398..203C}, using two equations of state (EOSs). The first is ALF2cc, which was employed in \cite{Tsokaros:2019mlz} to find the first dynamically stable ergostars. It is based on the ALF2 EOS \cite{Alford2005}, with the inner region of rest-mass density $\rho_0\geq\rho_{0s}=\rho_{0\rm nuc}=2.7\times 10^{14}\ {\rm g/cm^3}$ replaced by $P=\sigma(\rho-\rho_s) + P_s$, where $\sigma$ is a dimensionless constant, $\rho$ is the total mass-energy density, and $P_s$ is the pressure at $\rho_s$. Here we assume $\sigma=1$, i.e. a causal core. Since the causal core starts at a relatively low density, $\rho_{0\rm nuc}$, the models based on this EOS have density profiles that resemble those of quark stars, which exhibit a finite surface density. The crust of a NS obeying the ALF2cc EOS is only $\sim 6\%$ of the equatorial radius. These models, denoted by NS1 and ES1 in Table \ref{tab:idmodels}, were presented in Table I of \cite{Tsokaros:2019mlz} (with names iA0.2-rp0.47 and iA0.2-rp0.45, respectively). They both belong to the sequence of stars having the same central rest-mass density $\rho_0=4.52\times 10^{14}\ {\rm g/cm^3}$ and are differentially rotating NSs with a $j$-const rotation law, $j(\Omega)=A^2 (\Omega_c - \Omega)$, where $\Omega_c$ is the angular velocity at the center of the star and $\hat{A}=5$. NS1 has no ergoregion, while the adjacent model ES1, which rotates slightly faster, does. The second EOS is SLycc2 \cite{Tsokaros:2020qju}, based on the SLy EOS \cite{Douchin01} with the interior region of rest-mass density $\rho_0\geq 2 \rho_{0\rm nuc}$ replaced by the same causal EOS as given above. Here the causal core lies further from the stellar surface, so a smoother transition to the surface is achieved. Differentially rotating models with this EOS have been fully explored in \cite{Tsokaros:2020qju}.
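The label ``causal core'' can be verified in one line: for the pressure law above, the sound speed (with $c=1$) is
\begin{equation*}
c_s^2=\frac{dP}{d\rho}=\sigma\quad\Longrightarrow\quad c_s=1\ \ \text{for}\ \sigma=1,
\end{equation*}
i.e. the sound speed saturates the speed of light in the core, making this the stiffest pressure law compatible with causality.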
Here we pick two models, NS2 and ES2, that again belong to the sequence of central rest-mass density $\rho_0=7.82\times 10^{14}\ {\rm g/cm^3}$ and have $\hat{A}=3.\bar{3}$. They are slightly more differentially rotating than the ALF2cc models. Model NS2 has no ergoregion, while the adjacent model ES2 does. Further properties of all our adopted models can be found in Table \ref{tab:idmodels}. In the following, we consider both pure hydrodynamic and magnetized (MHD) evolutions of these models. For the magnetized cases, the stars are initially threaded by a dipole-like magnetic field whose strength at the NS poles ranges from $4.5\times 10^{14}\rm G$ to $1.5\times 10^{16}\rm G$, with the magnetic dipole moment aligned with the stellar spin (see~top panel~of Fig.~3~in~\cite{Ruiz:2019ezy}). We verify that the magnetorotational instability (MRI) is initially captured in our models by computing the quality factor $Q_{\rm MRI}\equiv\lambda_{\rm MRI}/dx$, which measures the number of grid points per wavelength of the fastest-growing MRI mode, as in~\cite{UIUC_PAPER2}. Here $\lambda_{\rm MRI}$ is the fastest-growing MRI wavelength. We choose astrophysically large magnetic fields to mimic their growth due to the Kelvin-Helmholtz instability (KHI) and the MRI during BNS merger and HMNS formation. These instabilities can amplify moderate magnetic fields ($\sim 10^{13} \rm G$) to rms values in the HMNS of~$\sim 10^{15.5}\rm G$, and locally up to $\sim 10^{17}\ \rm G$~\cite{Kiuchi:2015sga,Kiuchi:2014hja,Aguilera-Miret:2020dhz}. To capture the magnetosphere that surrounds the NS, we set a variable, low-density magnetosphere in the HMNS exterior such that the plasma parameter $\beta\equiv P_{\rm gas}/ P_{\rm mag}=0.01$ everywhere~\cite{Ruiz:2018wah}. In all our cases, this low-density magnetosphere increases the total rest-mass of the star by $\lesssim 1\%$, consistent with the values reported previously (see~e.g.~\cite{Ruiz:2016rai}). The ideal GRMHD equations are then integrated everywhere, imposing on top of the magnetosphere a density floor $\rho_0^{\rm atm}= 10^{-10}\, \rho_0^{\rm max}$, where $\rho_0^{\rm max}$ is the initial maximum rest-mass density of the system. \begin{figure*} \begin{center} \begin{turn}{90} \bf NS2: Normal star\hspace{0.5cm} $\bm{t/M=630}$ \end{turn} \includegraphics[width=0.99\columnwidth]{SLycc2_iA0.3_rho7.9a_mag_lower_t_p_30_velocities_zoomout.png} \includegraphics[width=0.99\columnwidth]{SLycc2_iA0.3_rho7.9a_mag_lower_t_p_30_bsqrd_zoomout.png} \vspace{0.1cm} \begin{turn}{90} \bf ES2: Ergostar\hspace{0.5cm} $\bm{t/M=700}$ \end{turn} \includegraphics[width=0.99\columnwidth]{SLycc2_iA0.3_rho7.9_mag_lower_t_p_33_velocities_zoomout.png} \includegraphics[width=0.99\columnwidth]{SLycc2_iA0.3_rho7.9_mag_lower_t_p_33_bsqrd_zoomout.png} \vspace{0.1cm} \begin{turn}{90} \bf Black hole-disk \hspace{0.5cm} $\bm{t/M=220}$ \end{turn} \includegraphics[width=0.99\columnwidth]{bhdisk_velocity_rot_p.png} \includegraphics[width=0.99\columnwidth]{bhdisk_bsqrd_p.png} \caption{Final rest-mass density profiles normalized to the initial maximum density (left column) and the force-free parameter inside the helical magnetic funnel (right column) for cases NS2 (top row), ES2 (middle row), and BH-disk (bottom row). White lines depict the magnetic field lines, the arrows display fluid velocities, and $P_c$ is the rotation period measured at the point where the rest-mass density is maximum.
Here $M = 5.9\ \rm km$.} \label{fig:bfield} \end{center} \end{figure*} In addition to the above HMNS models, we evolve a BH-disk system of $4\,M_\odot$ with an initial dimensionless spin $J_{\rm BH}/M^2_{\rm BH}= 0.9$, chosen to match the corresponding parameters of models NS2 and ES2 in Table~\ref{tab:idmodels}. The BH is surrounded by a non-self-gravitating accretion disk modeled by a $\Gamma=4/3$ polytropic EOS, initially threaded by a purely poloidal magnetic field confined to the disk interior. The maximum value of the magnetic-to-gas pressure ratio is $10^{-3}$ (see~Eq.~2~in~\cite{Khan:2018ejm}). Since we neglect the self-gravity of the disk, our model can be scaled to an arbitrary rest-mass density and magnetic field, keeping $\beta$ constant (see Eq.~A4 in~\cite{Khan:2018ejm}). We evolve the above systems using the Illinois GRMHD moving-mesh-refinement code (see e.g.~\cite{Etienne:2010ui}), which employs the Baumgarte-Shapiro-Shibata-Nakamura formulation of Einstein's equations~\cite{shibnak95,BS} with puncture gauge conditions~(see~Eqs.~(2)-(4) in~\cite{Etienne:2007jg}). The MHD equations are solved in conservation-law form adopting high-resolution shock-capturing methods. The constraint $\nabla\cdot\vec{B}=0$ is maintained during the evolution by integrating the magnetic induction equation using a vector potential $\mathcal{A}^\mu$ \cite{Etienne:2010ui}. The generalized Lorenz gauge \cite{Farris:2012ux} is employed to avoid the appearance of spurious magnetic fields~\cite{Etienne:2011re}. The pressure is decomposed into a cold and a thermal part, $P = P_{\rm cold} + (\Gamma_{\rm th}-1)\rho_0 (\epsilon-\epsilon_{\rm cold})$, where $P_{\rm cold}$ and $\epsilon_{\rm cold}$ are the pressure and specific internal energy computed from the initial-data EOS (ALF2cc or SLycc2). For the thermal part we assume $\Gamma_{\rm th}=5/3$. In our NS simulations we used nine nested refinement levels with minimum grid spacing (at the finest refinement level) $\Delta x_{\rm min}=122\ \rm m$ for the ALF2cc models, while for the SLycc2 models, whose radii are much smaller, we used a minimum resolution of $\Delta x_{\rm min}=85.6\ \rm m$. In both cases the radius of the star is resolved by $\sim 102$ grid points. For the BH-disk simulation we used eight refinement levels with $\Delta x_{\rm min}=0.034192 k^{3/2}\ \rm m$, where $k=P/\rho_0^\Gamma$ is the polytropic constant in the initial cold disk. The horizon radius is resolved by $\sim 41$ grid points. \paragraph*{{\bf Pure hydrodynamic evolutions} \textemdash } To probe the dynamical stability of models NS2 and ES2, we evolve them following the same procedure as in~\cite{Tsokaros:2019mlz}, where the stability properties of models NS1 and ES1 were reported. We find that these models remain in equilibrium for more than a hundred dynamical timescales ($\gtrsim 30$ rotation periods). Due to the large density close to the stellar surface, centrifugal forces push the outermost (lowest-density) layers slightly outwards, while the bulk of the star remains axisymmetric to a high degree until the end of our simulations (see Fig.~1 in \cite{Tsokaros:2019mlz}). We find no evidence of significant growing instabilities or outflows during this time.
\paragraph*{{\bf MHD evolutions} \textemdash } Magnetically-driven instabilities and winding inevitably change the differential rotation law in the bulk of the stars, and ultimately lead to the transition of the HMNS into another state, which may or may not be dynamically stable, depending on the specific characteristics of the initial configuration and the magnetic field. In Fig. \ref{fig:bfield}, the magnetized evolutions of the dynamically stable normal star NS2 and ergostar ES2 are shown in the top and middle rows, respectively. In both cases, after $30P_c$ the HMNSs are still differentially rotating, although not with the same rotation law as the initial configurations. To probe the stability of magnetized ergostars and their EM characteristics, we survey the HMNS models over different magnetic field configurations. Notice that the Alfv\'en time for magnetic growth in the HMNS (mainly by magnetic winding, followed by the MRI) is $\tau_{\text{\tiny A}}\sim 10\,\rho_{14}^{1/2}\,B_{15}^{-1}\,R_{10}\ \rm ms$ (see~Eq.~2 in \cite{Ruiz:2020via}). Here $\rho_{14}=\rho/10^{14}\rm g/cm^3$ is the characteristic density of the HMNS, $B_{15}=B/10^{15}\rm G$, and $R_{10}=R/10\,\rm km$, where $R$ is the stellar equatorial radius. The evolutions with the highest magnetic field strength, denoted by NS1-Bh and ES1-Bh, involve the normal HMNS NS1 and the ergostar ES1, which as discussed have similar physical properties (see Table~\ref{tab:idmodels}), and a poloidal magnetic field of $1.5\times 10^{16}\rm G$. We find that after $\sim \tau_{\text{\tiny A}}\sim 3P_c$ magnetic winding and the MRI change the rotation law of the HMNSs, driving the onset of stellar collapse. In the ES1-Bh case, the ergoregion expands and after $4\tau_{\text{\tiny A}}\sim 12P_c$ an apparent horizon appears inside it. A BH horizon in NS1-Bh forms at $\sim 5\tau_{\text{\tiny A}}\sim 15P_c$. Using the isolated horizon formalism~\cite{dkss03}, we estimate that the BH remnant in both cases has a mass of $M_{\rm BH}\simeq 0.95M$ and dimensionless spin $a/M_{\rm BH}\sim 0.93$. Here $M$ is the ADM mass of the corresponding HMNS (see Table~\ref{tab:idmodels}). In both cases the BH is surrounded by an accretion disk with $\sim 4.2\%$ of the initial HMNS rest mass. The Poynting luminosity ($L_{EM}\equiv -\int {T^r}_t\sqrt{-g}\,dS$), computed at different extraction radii $r_{\rm ext}\gtrsim 80 M$, is $L_{EM}\sim 10^{54}\ \rm erg/s$ for both cases. In the bottom panel of Fig.~\ref{fig:lum}, the luminosity of ES1-Bh is shown with a solid blue line; the black star symbol marks the BH formation time, which also marks the termination of our simulation. The fate of the BH-disk remnant has already been discussed in~\cite{Ruiz:2016rai}; as our initial magnetic field is near saturation, we do not expect additional enhancement following BH formation. However, as the matter above the BH poles is accreted onto the BH, the ratio $B^2/8\,\pi \rho_0$ in the funnel will increase up to values $\gtrsim 100$, and so the outflow can be accelerated to $\Gamma_L\gtrsim 100$, comfortably exceeding the $\Gamma_L\gtrsim 20$ required by sGRB models~\cite{Zou2009}. \begin{figure} \includegraphics[width=0.99\columnwidth]{luminosity_evol_2pan.pdf} \caption{Top panel: Outgoing EM Poynting luminosity for the three cases depicted in Fig.~\ref{fig:bfield}. Bottom panel: Outgoing EM Poynting luminosity for the ergostar ES1 and three different magnetic field strengths. The black star marks the BH formation time. Dashed horizontal lines correspond to the BZ estimate, while dotted horizontal lines (almost coincident with the dashed ones) correspond to the estimate in Ref.
\cite{Shibata_2011}.} \label{fig:lum} \end{figure} In the medium-magnetized case, ES1-Bm, where~$B=3\times 10^{15}\rm G$ ($\tau_{\text{\tiny A}}\sim 13P_c$), magnetic winding and the MRI slowly drive the star into a new quasistationary configuration, which remains stable for more than $30P_c$. We do not find evidence of any significant growing instabilities~\cite{Tsokaros:2019mlz}. During this time, magnetic winding in the bulk of the star induces a linear growth of the toroidal component of the magnetic field and of the corresponding magnetic pressure. Alfv\'en waves then propagate near the rotation axis, transporting electromagnetic energy~\cite{Shibata_2011}. This builds up magnetic pressure above the stellar poles until eventually the inflow is halted and driven into an outflow confined by the tightly wound field lines. The luminosity for this case is shown in the bottom panel of Fig. \ref{fig:lum} with a solid green line. With a solid red line we also show the luminosity of the lowest-magnetized case, ES1-Bl, whose initial magnetic field at the pole is $B= 4.5\times 10^{14}\rm G$. Similar to the evolution of ES1-Bm, when HMNS NS2 and ergostar ES2 are threaded by a poloidal magnetic field of $5.25\times 10^{15}\rm G$ (Alfv\'en time $\tau_{\text{\tiny A}}\sim 12P_c$), they evolve stably for more than $30P_c$ (top and middle rows of~Fig.~\ref{fig:bfield}). In both cases, we observe that an outflow is launched after $\sim 20P_c$, whether an ergoregion is present or not. As shown in the top panel of Fig. \ref{fig:lum}, the corresponding Poynting luminosities are roughly the same, $L_{EM}\sim 10^{53}\rm erg/s$. {\it The comparison between NS2 and ES2 suggests that the ergoregion neither facilitates nor inhibits the launching of a magnetically-driven outflow}. To assess whether the BZ mechanism is operating in our systems, we compare the luminosity with that of a BH-disk remnant that launches an incipient jet (bottom row in~Fig.~\ref{fig:bfield}). As seen in the top panel of Fig.~\ref{fig:lum}, the luminosity from the BH-disk matches the one coming from the stable HMNSs NS2 and ES2. In the same panel we show with a dashed black line the luminosity predicted by the BZ mechanism, $L_{BZ}\sim 10^{51} (a/M_{\rm BH})^2(M_{\rm BH}/4M_\odot)^2\,B^2_{15}\ \rm erg/s$ \cite{BZeffect,MembraneParadigm}, for the BH-disk, which is consistent with the numerically computed one. We note here that if one naively applies the same formula to the HMNSs NS2 and ES2, one gets roughly the same results, since the masses, dimensionless spins, and polar magnetic fields are the same as those of the BH-disk. The BZ estimates for the cases ES1-Bh, ES1-Bm, and ES1-Bl are shown with dashed lines in the bottom panel of Fig.~\ref{fig:lum}, which indicates agreement with the numerical values (solid lines). In other words, the luminosity is not an effective diagnostic for distinguishing outflows launched by a BH-disk from those launched by a NS. For example, a magnetized compact object in the mass gap will yield the same luminosity as the BZ formula, making its identification (BH-disk vs. HMNS) through this criterion impossible. Ref. \cite{Shibata_2011} concludes that HMNSs like our models emit EM radiation with a luminosity $L_{EM}\sim 10^{51}B^2_{15}R^3_{10} \Omega_4\ \rm erg/s$, where $\Omega_4=\Omega/10^4\ \rm rad/s$. This estimate is shown in both panels of Fig.~\ref{fig:lum} with dotted lines, and coincides with the BZ dashed lines to very high precision.
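To make this near-coincidence concrete, both estimates can be evaluated for illustrative fiducial values representative of the NS2/ES2 setups ($a/M_{\rm BH}=0.9$, $M_{\rm BH}\simeq 4M_\odot$, $B_{15}=5.25$, $R_{10}\simeq 0.9$, $\Omega_4\simeq 1.5$; these numbers are indicative only, not additional measurements):
\begin{align*}
L_{BZ}&\sim 10^{51}\,(0.9)^2\,(5.25)^2\ {\rm erg/s}\simeq 2\times 10^{52}\ {\rm erg/s},\\
L_{EM}&\sim 10^{51}\,(5.25)^2\,(0.9)^3\,(1.5)\ {\rm erg/s}\simeq 3\times 10^{52}\ {\rm erg/s},
\end{align*}
i.e. the two estimates agree to within a factor of order unity despite their different parameter dependences.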
This coincidence is curious, since the two formulae exhibit very different scalings with the parameters. Both of them are also close to the numerically computed luminosity. Another possible source of luminosity is the pulsar spin-down luminosity~\citep{GJ1969}, which according to \cite{Ruiz:2014zta} can be enhanced by as much as $\sim 35\%$ relative to its Minkowski value if general relativistic effects are taken into account. The numerical values shown in Fig. \ref{fig:lum} (as well as the BZ estimate and the estimate from Ref. \cite{Shibata_2011}) show that our luminosities are at least one order of magnitude larger than those found in~\cite{Ruiz:2014zta}. This is not surprising since: (1) The stars in the present analysis are differentially rotating, not uniformly rotating as in \cite{Ruiz:2014zta}. Thus field lines tied to matter on the surface cannot ``corotate'' with a rigidly rotating surface inside the light cylinder, as in a pulsar; instead, they get wound up by magnetic winding into the helical pattern we observe, which follows the exterior plasma flow. (2) Our magnetosphere is only marginally force-free, unlike the pulsars modeled in \cite{Ruiz:2014zta}, which are strongly force-free. Under truly force-free conditions, the exterior field topology, which is always attached to the plasma, would exhibit far less winding, and so would the flow. For these reasons it is difficult to make a direct comparison regarding the spin-down luminosity in our simulations; we will address this issue in future work. Following~\cite{prs15}, we measure the level of collimation of the outflow from the funnel opening angle, which can be determined by the $B^2/8\pi\rho_0 \simeq 10^{-2}$ contour. Based on this value, we estimate an opening angle of $\sim 25^\circ$ in our HMNS models. Such behavior has already been found in~\cite{Ruiz:2017due}, where a stable NS remnant was evolved for $\sim 200\ \rm ms$. Similar results have been reported in~\cite{Ciolfi:2020hgg}. This level of collimation, although robust, is not as tight as in the BH-disk case, where the opening angle is $\sim 15^\circ$. {\it The existence of the ergoregion does not seem to affect either the development of this funnel structure or its geometry}. In all cases, fluid elements inside the funnel have specific energy $E=-u_0-1 > 0$ and hence are unbound. The characteristic maximum value of the Lorentz factor is~$\Gamma_L \sim 2.5$ for cases NS2 and ES2, while~$\Gamma_L \sim 1.3$ in the BH-disk case. However, the force-free parameter $B^2/8\pi\rho_0$ inside the funnel in the latter case is a factor of $\gtrsim 10$ larger than in the HMNS cases (where $B^2/8\pi\rho_0\lesssim 10$). Since in steady state the maximum attainable~$\Gamma_L$ of axisymmetric jets equals the force-free parameter~\cite{B2_over_2RHO_yields_target_Lorentz_factor}, only in the BH-disk case can material inside the funnel be accelerated to~$\Gamma_L\gtrsim 100$, as required for sGRBs~\cite{Zou2009}. Since magnetic winding and the MRI will continue to transfer angular momentum to the surface and make these HMNSs more uniformly rotating, leading to catastrophic collapse as in case ES1-Bh, we do not expect further evolution to produce any significant changes \textit{until BH formation}, given what is shown in the right column of Fig.~\ref{fig:bfield}. As the magnetic field is near saturation, the only way for $B^2/8\pi\rho_0$ to grow is for the funnel to become baryon-free.
However, the outer layers of the star are a repository of matter that constantly supplies appreciable plasma inside the funnel. Another characteristic of the BZ mechanism is the value of~$\Omega_F/\Omega_H$. Here $\Omega_F=F_{t\theta}/F_{\theta\phi}$ is the angular frequency of the magnetic field lines and $\Omega_H=(a/m)/\left[2m\left(1+\sqrt{1-(a/m)^2}\right)\right]$ is the angular frequency of the BH horizon. For a Kerr BH with $a/m=0.9$ and a strongly force-free magnetosphere, this ratio increases from $\sim 0.49$ at the pole to $\sim0.53$ at the equator \cite{Komissarov2001}. Numerical simulations that resulted in successful jet formation \citep{2004ApJ...611..977M,prs15,Ruiz:2016rai,Ruiz:2019ezy} have found this ratio to be $\sim 0.1-0.6$. We find $\Omega_F/\Omega_H\in [0.2,0.5]$ for $\theta\in [0,\pi/2]$ in our present BH-disk simulation. Although this ratio is defined only for BHs, we nevertheless calculate an analogous ratio for the NSs, using the central or surface angular velocity of the star. We find $\Omega_F/\Omega_c \in [0.4,0.8]$, while if we normalize with the surface angular velocity $\Omega_s$ instead, the ratio $\Omega_F/\Omega_s$ becomes $\sim 1.5$ times larger. These results are approximately the same for both NS2 and ES2; therefore, the presence of the ergoregion does not seem to affect this ratio either. Our preliminary conclusion is that for NSs this ratio can be at least twice as large as the one found for BHs. \begin{figure} \includegraphics[width=1.\columnwidth]{ejecta.pdf} \caption{Rest-mass fraction of escaping mass (ejecta) for the cases in Table~\ref{tab:idmodels} with different magnetic field strengths. The star marks the BH formation time for case ES1-Bh.} \label{fig:esc} \end{figure} For all cases we compute the ejected material $M_{\rm esc} = \int\rho_0\alpha u^t \sqrt{\gamma}\,d^3x$ for $r>30M$, restricted to fluid elements with specific energy $E>0$ and positive radial velocity. We find that for cases NS1-Bh and ES1-Bh it is $\sim 0.2\%$ of the total rest-mass of the HMNS~(see Fig.~\ref{fig:esc}), and therefore it may give rise to an observable kilonova \cite{Metzger:2011bv}. Also, the ejecta coming from HMNSs NS1 and ES1 are approximately three times larger than those coming from HMNSs NS2 and ES2. The reason is that NS1 and ES1 have a larger density close to the surface than NS2 and ES2; therefore, when magnetic fields are present, more mass in the outer layers can be ejected. \textit{Conclusions.}\textemdash We surveyed HMNS models with and without ergoregions, seeded with magnetic fields of different initial strengths, to probe the impact of ergoregions on launching magnetically-driven outflows. We found that magnetized HMNSs launch a mildly relativistic outflow confined inside a tightly wound magnetic funnel whether or not an ergoregion is present. Our GRMHD simulations suggest that the properties of the outflow, such as the maximum Lorentz factor ($\Gamma_L\sim 2.5$), the force-free parameter ($B^2/8\pi\rho_0\lesssim 10$), and the magnetic collimation, are not affected by the ergoregion. Notice that, based on force-free evolutions of magnetic fields on a \textit{fixed, homogeneous} ergostar background, and on the similarities between the topology of the strongly force-free EM fields and induced currents of the ergostar and those of a BH-disk, Ref.~\cite{Ruiz:2012te} concluded that the BZ mechanism is likely to operate in ergostars.
As in~\cite{Ruiz:2012te}, we do find some similarities in the topology of the magnetic fields (i.e.~collimation) between our ergostar and BH spacetimes, although we find the same similarities when a magnetized normal HMNS, instead of an ergostar, is compared. The Poynting luminosity in the HMNS cases is comparable with that of the BH-disk remnant, in which the BZ mechanism is operating, as well as with the luminosity reported in Ref. \cite{Shibata_2011}. Hence the luminosity diagnostic cannot determine whether the BZ mechanism is operating or not. On the other hand, the ratio~$\Omega_F/\Omega_H$ turns out to be at least twice the one computed in the BH-disk case. These results complement our previous studies with supramassive remnants \cite{Ruiz:2017due} and suggest that in the hypermassive state it would be challenging for either normal stars or ergostars to be the origin of relativistic jets and the progenitors of sGRBs. Finally, we note that similar maximum Lorentz factors and funnel magnetizations have been reported in simulations that include neutrinos \cite{Mosta:2020hlh}, so it is not clear whether neutrinos can play a significant role in the formation of jets. Although threading an ergostar with a magnetic field is by itself inadequate to launch a bona fide BZ jet, there remain mechanisms that can tap the rotational energy of the star by virtue of its ergoregion. These include the Penrose mechanism \cite{Penrose:1969pc}. Noninteracting particles undergoing the Penrose process that can escape (e.g. various dark matter candidates) carry off energy and angular momentum. Particles that interact with the NS matter and are captured conserve (and redistribute) total angular momentum but can convert rotational kinetic energy into heat. Whether thermal emission from this heat may be detectable, the lifetime of the ergoregion and of any emission, and the fate of the ergostar must all await further analysis. \vspace{0.5cm} We thank K. Parfrey and J. Schnittman for useful discussions. This work was supported by NSF Grants No. PHY-1662211 and No. PHY-2006066, and NASA Grant No. 80NSSC17K0070 to the University of Illinois at Urbana-Champaign. This work made use of the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation Grant No. TG-MCA99S008. This research is also part of the Frontera computing project at the Texas Advanced Computing Center. Frontera is made possible by National Science Foundation award OAC-1818253. Resources supporting this work were also provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center. \bibliographystyle{apsrev4-1}
\section{Introduction} \label{sec:1} Variability present among different plant varieties (also called genotypes) is useful in their breeding programs. Here, the selection of diverse parent genotypes is important: the more diverse the parents, the higher the chances of developing new plant varieties with excellent qualities \citep{cite2}. A commonly used technique here is to study the genetic variability, which looks at the different genome sequences. However, this kind of analysis requires a large number of sequences, while very few are available \citep{Ingvarsson2011,reseq302} because genetic sequencing is computationally and monetarily expensive \citep{sequencing}. Variabilities in plant genotypes can also be studied using their phenotypic (physical) characteristics. This kind of analysis can be done relatively easily because a sufficiently large amount of data is available from different geographical areas. In the phenotypic context, which is our first focus, a few characteristics that play an important role are Days to 50\% Flowering, Days to Maturity, Plant Height, 100 Seed Weight, Seed Yield Per Plant, Number of Branches Per Plant, etc. Cluster analysis is an important tool to describe and summarize the variation present between different plant genotypes \citep{cite2}. Thus, clustering can be used to obtain diverse parents, which, as mentioned above, is of paramount importance. Obviously, after clustering, the genotypes present in the same cluster have similar characteristics, while those present in different clusters are diverse. Phenotypic data for the genotypes of different plants (e.g., Soybean, Wheat, Rice, Maize, etc.) usually have enough variation for accurate clustering. However, if this data is obtained for the genotypes of the same plant, then clustering becomes challenging due to the smaller variation in the data, which forms our second focus. Hierarchical Clustering (HC) is a traditional and standard method that is currently being used by plant biologists for grouping phenotypic data \citep{cite2,wheat,cluster-analysis-1}. However, this method has a few disadvantages. First, it does not provide the level of accuracy required for clustering similar genotypes \citep{hc}. Second, HC is based on building a hierarchical cluster tree (also called a dendrogram), which becomes cumbersome and impractical to visualize when the data is too large. To overcome these two disadvantages of HC, in this paper, we propose the use of the Spectral Clustering (SC) algorithm. SC is mathematically sound and is known to give some of the most accurate clustering results among the existing clustering algorithms \citep{sc}. For genome data, we have recently shown substantial accuracy improvements by using SC as well \citep{vqsc}. Furthermore, unlike HC, SC does not generate the intermediate hierarchical cluster tree. To the best of our knowledge, this algorithm has not been applied to phenotypic data in any of the previous works (see the Literature Review Section below). Both HC and SC are computationally expensive: they require substantial computational time when clustering large amounts of data \citep{sc,hc_complexity}. Hence, we use sampling to reduce this complexity. Probability-based sampling techniques have recently gained a lot of attention because of their high accuracy at reduced cost \citep{sampling}. Among these, Pivotal Sampling is most commonly used, and hence, we apply it to phenotypic data \citep{pivotal}.
As with SC, applying Pivotal Sampling to phenotypic data is also new. Recently, Vector Quantization (VQ) has given promising results for genetic data \citep{vqsc}. Hence, here we adapt VQ for phenotypic data as well. This also serves as a good standard against which we compare Pivotal Sampling. To summarize, in this paper, we develop a modified SC with Pivotal Sampling algorithm that is especially adapted for phenotypic data. The novelty of our work is in constructing the crucial similarity matrix for the clustering algorithm and in defining the probabilities for the sampling technique. Although our algorithm can be applied to any plant genotypes, we test it on around 2400 Soybean genotypes obtained from the Indian Institute of Soybean Research, Indore, India \citep{phenotypic-data}. In the experiments, we perform two sets of comparisons. {\it First}, our algorithm outperforms all the proposed competing clustering algorithms with sampling (i.e., modified SC with VQ, HC with Pivotal Sampling, and HC with VQ) in terms of accuracy. The computational complexities of all these algorithms are similar because of the involved sampling. {\it Second}, our modified SC with Pivotal Sampling outperforms HC, which, as mentioned earlier, is a standard in the plant studies domain, in two respects. In terms of accuracy, we are up to 45\% more accurate. In terms of complexity, our algorithm is more than an order of magnitude cheaper than HC. The rest of this paper is organized as follows. Section \ref{sec:2} provides a brief summary of the previous works on HC for phenotypic data.\footnote{Since none of the previous works have used sampling for phenotypic data, we could not review this aspect.} The standard algorithms for Pivotal Sampling and SC are discussed in Section \ref{sec:3}. Section \ref{sec4} describes the crucial adaptations of Pivotal Sampling and SC for phenotypic data. The validation metric, the experimental set-up, and the results are presented in Section \ref{sec:5}. Finally, Section \ref{sec:6} gives conclusions and future work. \section{Literature Review} \label{sec:2} In this section, we present some relevant previous studies on phenotypic data and highlight the novelty of our approach. Broadly, these studies can be classified into two categories. The first category consists of works that identify the correlation between different phenotypic characteristics (for example, a lower plant height may indicate a lower plant yield). The second category consists of studies that identify the genotypes having dissimilar properties for breeding programs. First, we present a few works belonging to the first category. Immanuel et al. \citep{rice1} in 2011 measured nine traits of 21 Rice genotypes. Grain Yield was kept as the primary characteristic, and its correlations with all the others were obtained. It was observed that several traits like Plant Height, Days to 50\% Flowering, Number of Tillers per Plant, Filled Grains per Panicle and Panicle Length were positively correlated with Grain Yield. Divya et al. \citep{rice2} in 2015 investigated the association between the Infected Leaf Area, Blast Disease Susceptibility, Number of Tillers and Grain Yield traits of two Rice genotypes. The authors concluded that Infected Leaf Area had a significant positive correlation with the leaf's Blast Susceptibility. Also, Number of Tillers exhibited the highest association with Grain Yield. Huang et al. \citep{leaf} in 2018 exploited leaf shapes and clustered 206 Soybean genotypes into three clusters using the $k$-means clustering algorithm.
These clusters contained genotypes having elliptical leaves, lanceolate leaves and round leaves. Then, the authors studied the variation present among the different phenotypic characteristics of these categories. They deduced that the cluster containing lanceolate leaves had the maximum average Plant Height, Number of Pods per Plant, Number of Branches per Plant, and 100-Seed Weight, while the other two clusters had lower values for these traits. Carpentieri-Pipolo et al. \citep{endobacteria} in 2019 investigated the effects of endophytic bacteria on $45$ phenotypic characteristics of a Soybean genotype. The authors clustered these bacteria into three clusters using the Unweighted Pair Group Method with Arithmetic Mean (UPGMA) and studied whether the bacteria had positive or negative activity on the phenotypic traits. It was found that five bacteria in one cluster had positive activities on almost $40$ characteristics, while fifteen bacteria in the remaining two clusters had positive activities on only around 25 characteristics. Next, we present works that belong to the second category. Sharma et al. \citep{wheat} in 2014 performed clustering of $24$ synthetic Wheat genotypes (lines) to identify High Heat Tolerant (HHT) lines among them. Cluster analysis was performed using HC, and the Wheat lines were grouped into three clusters. This study aimed to improve the heat tolerance of Wheat lines in a breeding program by identifying the most diverse Wheat lines. A similar study for assessing the drought tolerance of spring and winter bread Wheat using phenotypic data was conducted by Dodig et al. in 2010 \citep{wheat2}. Painkra et al. \citep{cite2} in 2018 performed clustering of $273$ Soybean genotypes to determine the diversity among them. Here, the authors clustered them into seven groups using HC. The most diverse genotypes were obtained from the clusters having the highest inter-cluster distance values. According to the authors, choosing the genotypes belonging to the distant clusters maximized heterosis\footnote{Heterosis refers to the phenomenon in which a hybrid plant exhibits superiority over its parents in terms of Plant Yield or any other characteristic.} in cross-breeding. Kahraman et al. \citep{cluster-analysis-1} in 2014 analyzed the field performances of 35 promising Common Bean genotypes. These genotypes were clustered into three groups using HC. Again, the goal of this work was to provide information for selecting the most diverse genotypes for breeding programs. Finally, we present a few works that belong to both categories. Stansluos et al. \citep{corn} in 2019 analyzed $22$ phenotypic traits for $11$ Sweet Corn genotypes (cultivars). The authors showed a positive and significant correlation of Yield of Marketable Ear (YME) with Ear Diameter (ED) and Number of Marketable Ear (NME), whereas YME was negatively correlated with Thousand Kernel Weight (TKW). Also, they grouped these cultivars into four clusters using HC to obtain the variation among them. Fried et al. \citep{root} in 2018 determined whether the root traits were related to other phenotypic traits for 49 Soybean genotypes. In this work, a Principal Component Analysis (PCA) biplot was used to separate the Soybean genotypes into different clusters. According to the authors, this research was critical for Soybean improvement programs since it helped select genotypes with improved root characteristics. Our work makes three contributions, listed below, that have not been addressed in any of the papers cited above.
\begin{enumerate}[noitemsep] \item With a focus on the second category, we perform grouping of several thousand genotypes, as compared to a few hundred in the papers cited above. \item Clustering becomes computationally expensive when the size of the data is very large. Hence, sampling is required to make the underlying algorithm scalable. As mentioned in the previous section, we adapt Pivotal Sampling, a probability-based technique, for phenotypic data. \item HC, the most commonly used clustering algorithm (along with some other sporadically used algorithms like $k$-means and UPGMA), does not provide the level of accuracy needed. Again, as earlier, we develop a variant of the SC algorithm, which is considered highly accurate, especially for phenotypic data. \end{enumerate} \section{Sampling and Clustering Algorithms} \label{sec:3} In this section, we briefly discuss the standard algorithms for Pivotal Sampling and SC in the two subsections below. \subsection{Pivotal Sampling} \label{sec:3.1} This is a well-developed sampling theory that handles complex data with unequal probabilities. The method is attractive because it can be easily implemented by a sequential procedure, i.e. by a single scan of the data \citep{pivotal2}. Thus, the complexity of this method is $\mathcal{O}(n)$, where $n$ is the population size. It is important to emphasize that the method is independent of the density of the data. Consider a finite population $U$ of size $n$ with each unit identified by a label $i = 1, 2, ..., n$. A sample $S$ is a subset of $U$ whose size is either random ($N(S)$) or fixed ($N$). Obtaining the inclusion probabilities of all the units in the population, denoted by $\pi_i$ with $i=1, 2, ..., n$, forms an important aspect of this unequal probability sampling technique. The pivotal method is based on a principle of contests between units \citep{sampling}. At each step of the method, two units compete to get selected (or rejected). Consider unit $i$ with probability $\pi_i$ and unit $j$ with probability $\pi_j$; then we have the two cases below. \begin{enumerate} \item \textbf{Selection step ($\pi_i+\pi_j\geq1$):} Here, one of the units is selected, while the other one gets the residual probability $\pi_i+\pi_j-1$ and competes with another unit at the next step. More precisely, if $(\pi_i,\pi_j)$ denotes the selection probabilities of the two units, then \begin{equation*} (\pi_i,\pi_j)=\left\{\begin{matrix} (1,\pi_i+\pi_j-1) \textnormal{ with probability } \frac{1-\pi_j}{2-\pi_i-\pi_j}\\ (\pi_i+\pi_j-1,1) \textnormal{ with probability } \frac{1-\pi_i}{2-\pi_i-\pi_j} \end{matrix}\right. \end{equation*} \item \textbf{Rejection step ($\pi_i+\pi_j<1$):} Here, one of the units is definitely rejected (i.e. not selected in the sample), while the other one gets the sum of the inclusion probabilities of both units and competes with another unit at the next step. More precisely, \begin{equation*} (\pi_i,\pi_j)=\left\{\begin{matrix} (0,\pi_i+\pi_j) & \textnormal{with probability } \frac{\pi_j}{\pi_i+\pi_j}\\ (\pi_i+\pi_j, 0) & \textnormal{with probability } \frac{\pi_i}{\pi_i+\pi_j} \end{matrix}\right. \end{equation*} \end{enumerate} These contests are repeated over the units in the population until we obtain the sample of size $N(S)$ or $N$. The worst case occurs when the last sample unit (i.e. the $N^{th}$ unit) is selected only in the final contest.
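To make the contest procedure concrete, the following minimal Python sketch (our illustration for exposition; the function name and structure are not taken from any existing package) performs sequential Pivotal Sampling given the inclusion probabilities computed beforehand.
\begin{verbatim}
import numpy as np

def pivotal_sample(pi, rng=None):
    """Sequential pivotal sampling.
    pi: inclusion probabilities with 0 < pi_i < 1 and sum(pi) = N.
    Returns a boolean mask marking the selected units."""
    rng = np.random.default_rng() if rng is None else rng
    pi = np.asarray(pi, dtype=float).copy()
    selected = np.zeros(pi.size, dtype=bool)
    i = 0  # unit currently carrying a residual probability
    for j in range(1, pi.size):
        s = pi[i] + pi[j]
        if s >= 1.0:
            # Selection step: one unit is selected, the other
            # carries the residual probability s - 1.
            if rng.random() < (1.0 - pi[j]) / (2.0 - s):
                selected[i] = True
                i = j
            else:
                selected[j] = True
            pi[i] = s - 1.0
        else:
            # Rejection step: one unit is rejected, the other
            # carries the combined probability s.
            if rng.random() < pi[j] / s:
                i = j
            pi[i] = s
    # With sum(pi) integral, the final residual is 0 or 1 in exact
    # arithmetic; a Bernoulli draw absorbs floating-point roundoff.
    if rng.random() < pi[i]:
        selected[i] = True
    return selected
\end{verbatim}
Since $\sum_i \pi_i = N$, this procedure returns exactly $N$ selected units (up to floating-point roundoff) in a single scan of the data, consistent with the $\mathcal{O}(n)$ complexity noted above.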
\subsection{Spectral Clustering} \label{sec:3.2} Clustering is one of the most widely used techniques for exploratory data analysis, with applications ranging from statistics, computer science, and biology to the social sciences and psychology. It is used to get a first impression of data by trying to identify groups having ``similar behavior'' among them. Compared to traditional algorithms such as $k$-means, SC has many fundamental advantages. Results obtained by SC are often more accurate than those of the traditional approaches. It is simple to execute and can be efficiently implemented using standard linear algebra methods. The algorithm consists of four steps, as below \citep{sc}. \begin{enumerate} \item The first step in the SC algorithm is the construction of a matrix called the similarity matrix. Building this matrix is the most important aspect of this algorithm; the better its quality, the better the clustering accuracy \citep{sc}. This matrix captures the local neighborhood relationships between the data points via similarity graphs and is usually built in three ways. The first such graph is an $\epsilon$-neighborhood graph, where all the vertices whose pairwise distances are smaller than $\epsilon$ are connected. The second is a $k$-nearest neighbor graph, where the goal is to connect vertex $v_i$ with vertex $v_j$ if $v_j$ is among the $k$-nearest neighbors of $v_i$. The third and final is the fully connected graph, where each vertex is connected with all the other vertices. Similarities are obtained only between the connected vertices. Thus, similarity matrices obtained from the first two graphs are usually sparse, while the fully connected graph yields a dense matrix. Let the $n$ vertices of a similarity graph be represented numerically by vectors $a_1, a_2, ..., a_n$, respectively. Here, each $a_i$ $\in \mathbb{R}^m$ is a column vector for $i=1, ..., n$. Also, let $a_i^l$ and $a_j^l$ denote the $l^{th}$ elements of vectors $a_i$ and $a_j$, respectively, with $l=1, ..., m$. There exist many distance measures to build the similarity matrix \citep{distance}. We describe some common ones below using the terminology introduced above. \begin{enumerate}[noitemsep] \item \textbf{City block distance:} \citep{distance} It is a special case of the Minkowski distance \begin{equation*} d_{ij}=\sqrt[p]{\sum_{l=1}^{m}|a_i^l-a_j^l|^p} \end{equation*} with $p=1$. \item \textbf{Euclidean distance:} \citep{distance} It is the ordinary straight-line distance between two points in Euclidean space, and is again a special case of the Minkowski distance, with $p=2$. Thus, it is given by \begin{equation*} d_{ij}=\sqrt{\sum_{l=1}^{m}(a_i^l-a_j^l)^2}. \end{equation*} \item \textbf{Squared Euclidean distance:} \citep{distance} It is the square of the Euclidean distance, and is given by \begin{equation*} d_{ij}=\sum_{l=1}^{m}(a_i^l-a_j^l)^2. \end{equation*} \item \textbf{Cosine distance:} \citep{distance} It measures the cosine of the angle between two non-zero vectors, and is given by \begin{equation*} d_{ij}=1-{a_i \cdot a_j \over \|a_i\| \|a_j\|}, \end{equation*} where $\|\cdot \|$ denotes the Euclidean norm of a vector.
\item \textbf{Correlation distance:} \citep{correlation} It captures the correlation between two non-zero vectors, and is given by \begin{equation*} d_{ij}=1-{(a_i-\bar{a}_i)^t(a_j-\bar{a}_j) \over \sqrt{(a_i-\bar{a}_i)^t(a_i-\bar{a}_i)}\sqrt{(a_j-\bar{a}_j)^t(a_j-\bar{a}_j)}}, \end{equation*} where $\bar{a}_i$ and $\bar{a}_j$ are the means of $a_i$ and $a_j$ multiplied with a vector of ones, respectively, and $t$ signifies the transpose operation. \item \textbf{Hamming distance:} \citep{hamming} It measures the fraction of positions at which the corresponding values of two vectors differ, and is given by \begin{equation*} d_{ij}={\#(a_i^l \neq a_j^l) \over m}. \end{equation*} \item \textbf{Jaccard distance:} \citep{jaccard} It again measures the fraction of positions at which the corresponding values of two vectors differ, excluding the positions where both vectors have zero values, and is given by \begin{equation*} d_{ij}={\#[(a_i^l \neq a_j^l)\cap ((a_i^l\neq 0)\cup (a_j^l\neq 0))] \over \#[(a_i^l\neq 0)\cup (a_j^l\neq 0)]}. \end{equation*} \end{enumerate} \item Next, a matrix called the Laplacian matrix is constructed. This matrix is either non-normalized or normalized. The non-normalized Laplacian matrix is defined as \begin{equation*} L=D-W, \end{equation*} where $W$ is the similarity matrix and $D$ is a diagonal matrix whose $i^{th}$ diagonal element is the sum of the entries in the $i^{th}$ row of $W$. The normalized Laplacian matrix is again of two types: the symmetric Laplacian ($L_{sym}$) and the random walk Laplacian ($L_{rw}$). Both matrices are closely related to each other and are defined as \begin{equation*} L_{sym}=D^{-1/2}LD^{-1/2}=I-D^{-1/2}WD^{-1/2}, \end{equation*} \begin{equation*} L_{rw}=D^{-1}L=I-D^{-1}W. \end{equation*} Henceforth, the non-normalized Laplacian matrix is referred to as the Type-1 Laplacian, $L_{sym}$ as the Type-2 Laplacian, and $L_{rw}$ as the Type-3 Laplacian. In the literature, it is suggested to use the normalized Laplacian matrix instead of the non-normalized one, and specifically the Type-3 Laplacian \citep{sc}. \item Once we have the Laplacian matrix, we obtain its first $k$ eigenvectors $u_1, ..., u_k$ (those corresponding to the $k$ smallest eigenvalues), where $k$ is the number of clusters. \item Finally, the rows of the matrix whose columns are these eigenvectors are clustered using the $k$-means clustering algorithm. \end{enumerate} \section{Implementing Pivotal Sampling and Modified Spectral Clustering for Phenotypic Data} \label{sec4} Here, we first present the application of Pivotal Sampling to obtain the samples from phenotypic data. Subsequently, we implement our modified SC algorithm on the same data. Consider that the phenotypic data of a plant consist of $n$ genotypes, with each genotype evaluated for $m$ different characteristics/ traits. As discussed in Section \ref{sec:3.1}, Pivotal Sampling requires that the inclusion probabilities $\pi_i$ ($i=1, ..., n$) of all the genotypes in the population $U$ be computed before a unit is considered for a contest. The set of characteristics associated with a genotype can be exploited in computing these probabilities. To select a sample of size $N$, where $N \ll n$, we obtain these probabilities as \citep{pivotal2} \begin{equation} \pi_i=N\frac{\varkappa_i}{\sum_{i\in U}\varkappa_i}, \label{eq:1} \end{equation} where $\varkappa_i$ can be a property associated with any one characteristic (or a combination of them) of the $i^{th}$ genotype. Obtaining $\pi_i$ in such a way also ensures that $\sum_{i=1}^{n}\pi_i=N$, i.e.
we get exactly $N$ selection steps and, in turn, exactly $N$ samples. In our implementation, we use the deviation property of the genotypes, which is discussed next. Since different characteristics have values in different ranges, we start by normalizing them as below \citep{normalize,normalize1}. \begin{equation*} (\mathcal{X}_j)_i={(x_j)_i-\textnormal{min}(x_j) \over \textnormal{max}(x_j)-\textnormal{min}(x_j)}. \end{equation*} Here, $(\mathcal{X}_j)_i$ and $(x_j)_i$ are the normalized value and the actual value of the $j^{th}$ characteristic for the $i^{th}$ genotype, respectively, with $j=1, ..., m$ and $i=1, ..., n$. Furthermore, max($x_j$) and min($x_j$) are the maximum and the minimum values of the $j^{th}$ characteristic among all the genotypes. Now, the deviation for the $i^{th}$ genotype is calculated using the above normalized values as \begin{equation*} dev_i=\sum_{j=1}^{m}\left[\textnormal{max}(\mathcal{X}_j)-(\mathcal{X}_j)_i\right]. \end{equation*} Here, $\textnormal{max}(\mathcal{X}_j)$ denotes the maximum normalized value of the $j^{th}$ characteristic among all the genotypes. Practically, a relatively large value of $dev_i$ indicates that the $i^{th}$ genotype is less important, and hence, its probability should be small. Thus, the inclusion probability of a genotype is calculated by taking $\varkappa_i = \frac{1}{dev_i}$ in Eq. (\ref{eq:1}), or \begin{equation*} \pi_i=N\frac{\frac{1}{dev_i}}{\sum_{i\in U}\frac{1}{dev_i}}. \end{equation*} Once these probabilities are obtained, we follow the two steps (selection and rejection) discussed in Section \ref{sec:3.1}. Next, we discuss the clustering of these $N$ genotypes into $k$ clusters. As in the standard SC algorithm discussed in Section \ref{sec:3.2}, the first step in our modified SC is to obtain the similarity matrix. As mentioned earlier, this is the most important aspect of the algorithm, since the better the quality of this matrix, the better the clustering accuracy. For this, we consider these $N$ genotypes as the vertices of a graph. Let vector $a_i$ contain the normalized values of all the $m$ characteristics for the $i^{th}$ genotype. Thus, we have $N$ such vectors corresponding to the $N$ genotypes selected using Pivotal Sampling, and each vector is of size $m$. In our implementation, we use a fully connected graph to build the similarity matrix, i.e. we obtain similarities among all the $N$ genotypes. We define the similarity between the vectors $a_i$ and $a_j$ (representing the genotypes $i$ and $j$, respectively) as the inverse of the distance between these vectors, obtained using the distance measures mentioned in Section \ref{sec:3.2}. This is intuitive because the smaller the distance between two genotypes, the larger the similarity between them, and vice versa. We denote this distance by $d_{ij}$, and build the similarity matrix of size $N \times N$ by obtaining the similarities among all the $N$ genotypes. The next step is to compute the Laplacian matrix, which, when obtained from the above similarity matrix, generates poor eigenvalues,\footnote{Zero/ close-to-zero and distinct eigenvalues are considered a good indicator of the connected components in a similarity matrix. Thus, eigenvalues are considered poor when they are not zero/ not close to zero, or indistinct \citep{sc}.} and in turn poor corresponding eigenvectors that are required for clustering.\footnote{For some distance matrices (like the Euclidean distance), the eigenvalue computation does not even converge.}
Thus, instead of simply taking the inverse of $d_{ij}$, we take the exponential of its negative, i.e. we define the similarity between the $i^{th}$ and the $j^{th}$ genotypes as $e^{-d_{ij}}$ \citep{NgSC,spsc}. This, besides fixing the poor eigenvalues/ eigenvectors problem, also helps perform better clustering of the given data. Further, we follow the remaining steps as discussed in Section \ref{sec:3.2}. Above, we discussed the clustering of the $N$ sampled genotypes into $k$ clusters. However, our goal is to cluster all $n$ genotypes and not just $N$. Hence, there is a need to reverse-map the remaining $n-N$ genotypes to these $k$ clusters. For this, we define the notion of average similarity, which between the non-clustered genotype $p_i$ and the cluster $C_l$ is given as \begin{equation*} \mathcal{AS}(C_l, p_i)=\frac{1}{\#(C_l)}\sum_{q\in C_l} e^{-d_{p_iq}}. \end{equation*} Here, $\#(C_l)$ denotes the number of genotypes present in $C_l$, and $q$ is a genotype originally clustered in $C_l$ by our modified SC algorithm with Pivotal Sampling. We obtain the average similarity of $p_i$ with all the $k$ clusters (i.e. with $C_l$ for $l=1, ..., k$), and associate $p_i$ with the cluster with which it has the maximum average similarity. Next, we perform the complexity analysis of our algorithm. Since Pivotal Sampling and SC form the bases of our algorithm, we discuss the complexities of these algorithms before ours. \begin{enumerate}[noitemsep] \item Pivotal Sampling ($n$: number of genotypes, $N$: sample size) \begin{enumerate}[noitemsep] \item Obtaining Probabilities: $\mathcal{O}(n)$ \item Obtaining Samples: $\mathcal{O}(n)$ \end{enumerate} \item SC ($n$, $m$: number of characteristics) \begin{enumerate}[noitemsep] \item Constructing Similarity Matrix: $\mathcal{O}(n^2m)$ \item Obtaining Laplacian Matrix: $\mathcal{O}(n^3)$ \end{enumerate} \item Our Algorithm ($n$, $N$, $m$) \begin{enumerate}[noitemsep] \item Obtaining Samples: $\mathcal{O}(n)$ \item Constructing Similarity Matrix: $\mathcal{O}(N^2m)$ \item Obtaining Laplacian Matrix: $\mathcal{O}(N^3)$ \item Reverse Mapping: $\mathcal{O}\big((n-N)N\big)$ \end{enumerate} Thus, the overall complexity of our algorithm is $\mathcal{O}(nN+N^3+N^2m)$. Here, we have kept three terms because any of them can dominate (recall that $n \gg N, m$). \end{enumerate} When we compare the complexity of our algorithm with that of HC, which is $\mathcal{O}(n^3)$, it is evident that we are more than an order of magnitude faster than HC. \section{Results} \label{sec:5} In this section, we first briefly discuss the data used for our experiments. Next, we check the goodness of our sampling technique by estimating a measure called the population total. Subsequently, we describe the clustering set-up, where the validation metric, the ideal number of clusters, the suitable distance measures for building similarity matrices, and the most useful Laplacian matrix are discussed. Finally, we present the results for our modified SC with Pivotal Sampling. Here, we compare our algorithm with (a) SC with VQ, HC with Pivotal Sampling, and HC with VQ, and (b) non-sampled HC. \subsection{Data Description} \label{sec:5.1} As mentioned in the Introduction, our techniques can be applied to any plant data. However, here we experiment on phenotypic data of Soybean genotypes. This data is taken from the Indian Institute of Soybean Research, Indore, India, and consists of $29$ different characteristics/ traits for $2376$ Soybean genotypes \citep{phenotypic-data}.
Among these, we consider the following eight characteristics that are most important for higher yield: Early Plant Vigor (EPV), Plant Height (PH), Number of Primary Branches (NPB), Lodging Score (LS), Number of Pods Per Plant (NPPP), 100 Seed Weight (SW), Seed Yield Per Plant (SYPP) and Days to Pod Initiation (DPI). Table \ref{data} provides a snapshot of this data for a few genotypes. \begin{table}[ht] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \textbf{Genotypes} & \textbf{EPV} & \textbf{PH} & \textbf{NPB} & \textbf{LS} & \textbf{NPPP} & \textbf{SW} & \textbf{SYPP} & \textbf{DPI} \\ \hline 1&Poor &54 &6.8 &Moderate &59.8 &6.5 &2.5 &65 \\ 2&Poor &67 &3.4 &Severe &33 &6.2 &3.9 &64 \\ 3&Poor &38.4 &2.8 &Slight &68 &6.9 &4.4 &61 \\ 4&Good &60.8 &4 &Moderate &34.6 &6.1 &3 &65 \\ \vdots&\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots &\vdots \\ 2376&Very Good&89.6 &5 &Severe &32.6 &7.3 &3.4 &62 \\ \hline \end{tabular} \caption{\label{data}Description of Phenotypic Data.} \end{table} \subsection{Sampling Discussion} \label{sec:5.2} To inspect the quality of our sampling techniques, we estimate a measure called the population total, which is the sum of the values of a particular characteristic over all the $n$ units (genotypes here) present in the population $U$. For example, if ``Plant Height (PH)'' is the characteristic of interest, then the population total is the sum of the PH values of all the $n$ genotypes. Mathematically, the exact (or actual) population total for a characteristic of interest $x_j$ is given as \begin{equation*} Y=\sum_{i\in U}(x_j)_i, \end{equation*} where, as earlier, $(x_j)_i$ is the value of the $j^{th}$ characteristic for the $i^{th}$ genotype and $U$ is the set of all genotypes. As evident from the above equation, this measure can only be applied to characteristics that contain numerical values.\footnote{Methods do exist to convert non-numeric data to numeric data for using this measure.} In this work, we use two different estimators to compute an approximation of the population total from the sampled data. The closer the value of an estimator to the actual value, the better the sampling. The first is the Horvitz-Thompson (HT) estimator (also called the $\pi$-estimator), which is defined as \citep{HT} \begin{equation*} Y_{HT}^{'} = Y_{\pi}^{'}=\sum_{i\in S}\frac{(x_j)_i}{\pi_i}, \end{equation*} where $\pi_i$ is the inclusion probability of the $i^{th}$ genotype as evaluated in Section \ref{sec4} and $S$ is the set of sampled genotypes. The other estimator that we use is the H\'ajek estimator. It is usually considered better than the HT estimator and is given as \citep{hajek} \begin{equation*} Y_{H\acute{a}jek}^{'} =n\frac{\sum_{i\in S}\frac{(x_j)_i}{\pi_i}}{\sum_{i\in S}\frac{1}{\pi_i}}, \end{equation*} where, as earlier, $n$ is the total number of genotypes. The actual population total and the values of the above two estimators for six characteristics (those that have numerical values), when using Pivotal Sampling and $500$ samples, are given in Table \ref{estimator} (see columns 3, 4, and 6, respectively). From this table, it is evident that the approximate values of the population total are very close to the corresponding actual values. Thus, Pivotal Sampling works well in an absolute sense. Here, we also compute the values of the two estimators when using VQ (see columns 5 and 7). We can notice from these results that VQ also works reasonably well, but Pivotal Sampling is better.
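Both estimators are straightforward to compute from a sample; the following minimal sketch (our illustration, with hypothetical variable names) mirrors the formulas above.
\begin{verbatim}
import numpy as np

def ht_estimator(x_sampled, pi_sampled):
    # Horvitz-Thompson: sampled values weighted by 1/pi_i.
    return np.sum(x_sampled / pi_sampled)

def hajek_estimator(x_sampled, pi_sampled, n):
    # Hajek: weighted mean over the sample, rescaled by the
    # population size n.
    return n * np.sum(x_sampled / pi_sampled) / np.sum(1.0 / pi_sampled)
\end{verbatim}
Here, \texttt{x\_sampled} holds the values of one characteristic (e.g., PH) for the $N$ sampled genotypes and \texttt{pi\_sampled} their inclusion probabilities; comparing either output against $Y$ for each trait reproduces the comparison reported in Table \ref{estimator}.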
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Sr.} & \textbf{Characteristics} & \textbf{Actual} & \textbf{Pivotal} & \textbf{VQ} & \textbf{Pivotal} & \textbf{VQ} \\
\textbf{No.} & & \textbf{Population} & \textbf{Sampling} & \textbf{(HT)} & \textbf{Sampling} &\textbf{(H\'ajek)} \\
& & \textbf{Total} & \textbf{(HT)} & & \textbf{(H\'ajek)} & \\ \hline
1& PH &121773.05 &122507.84 &123407.80 &123716.09 &113168.90 \\
2 &NPB &8576.56 &8585.28 &9669.29 &8669.95 &8867.05 \\
3 & NPPP &99712.72 &100193.53 &114465.66 &101181.70 &104968.67 \\
4& SW &20073.32 &19907.10 &20966.86 &20103.44 &19227.28 \\
5 &SYPP &10048.04 &10137.57 &10536.08 &10237.55 &9661.92 \\
6& DPI &136810 &135309.78 &149242.17 &136644.29 &136859.84 \\ \hline
\end{tabular}
\caption{\label{estimator}HT and H\'ajek estimator values for Pivotal Sampling and VQ as compared to the actual population total with $N=500$ as the sample size.}
\end{table}
\subsection{Clustering Setup}
\label{sec:5.3}
Here, {\it first}, we describe the criteria used to check the goodness of the generated clusters. There are various metrics available for the validation of clustering algorithms. These include Cluster Accuracy (CA), Normalized Mutual Information (NMI), Adjusted Rand Index (ARI), Compactness (CP), Separation (SP), Davies-Bouldin Index (DB), and Silhouette Value \citep{validation,silhouette}. Using the first three metrics requires prior knowledge of the cluster labels, which we do not have here. Hence, we cannot use these three for validation. The remaining techniques do not have this requirement and, hence, can be used for validation here. We use the Silhouette Value because of its popularity \citep{silhouette}.

The Silhouette Value is a measure of how similar an object is to its own cluster (intra-cluster similarity) compared with other clusters (inter-cluster similarity). For any cluster $C_l$ ($l = 1, ..., k; \textnormal{ say } l = 1$), let $a(i)$ be the average distance between the $i^{th}$ data point and all other points in the cluster $C_1$, and let $b(i)$ be the average distance between this $i^{th}$ data point in the cluster $C_1$ and all other points in the clusters $C_2, ..., C_k$. The Silhouette Value for the $i^{th}$ data point is defined as \citep{silhouette}
\begin{equation}
s(i) = {b(i)-a(i) \over \text{max}\{a(i), b(i)\}},
\label{eq:2}
\end{equation}
where $a(i)$ and $b(i)$ signify the intra-cluster and the inter-cluster similarities, respectively. The Silhouette Value lies between $-1$ and $1$,\footnote{This is because the denominator of Eq. (\ref{eq:2}) is always at least as large as the absolute value of its numerator.} and the average over all the data points is computed. A positive value (tending towards 1) indicates good clustering (compact and well-separated clusters), while a negative value (tending towards -1) indicates poor clustering.

{\it Second}, we determine the ideal number of clusters by using the eigenvalue gap heuristic \citep{sc,eigenheuristic}. If $\lambda_1, \lambda_2, ..., \lambda_n$ are the eigenvalues of the matrix used for clustering (e.g., the Laplacian matrix), then often the first few eigenvalues, say $k$ of them, have a considerable difference between consecutive values. That is, $|\lambda_i - \lambda_{i+1}| \not \approx 0$ for $i=1, ..., k-1$. After the $k^{th}$ eigenvalue, this difference is usually approximately zero. According to this heuristic, this $k$ gives a good estimate of the ideal number of clusters.
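One programmatic variant of this heuristic simply places $k$ at the largest consecutive gap among the smallest eigenvalues. We sketch it below for illustration only; in our experiments, $k$ is chosen by the visual inspection described next. The absolute value accommodates a possibly non-symmetric Laplacian.
\begin{verbatim}
import numpy as np

def eigengap_k(L, max_k=50):
    # Smallest eigenvalues of the Laplacian, in absolute value.
    vals = np.sort(np.abs(np.linalg.eigvals(L)))[:max_k]
    gaps = np.diff(vals)
    # Place k at the largest consecutive gap.
    return int(np.argmax(gaps)) + 1
\end{verbatim}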
For this experiment, without loss of generality, we build the similarity matrix using the Euclidean distance measure on the above-discussed phenotypic data. As mentioned earlier, it is recommended to use the Type-3 Laplacian matrix \citep{sc}. Hence, we use its eigenvalues for estimating $k$. Figure \ref{Fig1} shows the fifty smallest eigenvalues (in absolute value) of this Laplacian matrix. On the $x$-axis, we have the eigenvalue number, and on the $y$-axis, its corresponding value.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth,height=6cm]{euclidean.eps}
\caption{Fifty Smallest Eigenvalues of the Type-3 Laplacian Matrix Obtained from the Euclidean Similarity Matrix (for estimating the ideal number of clusters).}
\label{Fig1}
\end{figure}
From this figure, we can see that there is a considerable difference between the first ten consecutive eigenvalues. After the tenth eigenvalue, this difference is very small (tending to zero). Hence, based upon the earlier argument and this plot, we take $k$ as ten. To further corroborate this choice, we also experiment with $k$ equal to twenty and thirty. As expected, and as discussed in detail later in this section, the Silhouette Values for these numbers of clusters are substantially lower than those for ten clusters.

{\it Third}, and finally, we perform experiments to identify the suitable similarity measures to build the similarity matrix, and also to verify that, as recommended, the Type-3 Laplacian matrix is the best. Table \ref{suitable_similarity} below gives the Silhouette Values of our modified SC for all seven similarity measures and three Laplacians when clustering the earlier presented phenotypic data into $10$, $20$, and $30$ clusters.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\textbf{Sr.} & \textbf{Similarity} & \textbf{Number of} & \textbf{Type-1} & \textbf{Type-2} & \textbf{Type-3} \\
\textbf{No.} & \textbf{Measure} & \textbf{Clusters $(k)$} & \textbf{Laplacian} & \textbf{Laplacian} & \textbf{Laplacian} \\ \hline
1. &Euclidean &10 &0.0828 &-0.0273& \textbf{0.2422} \\
& &20 &0.0455 &-0.1096& \textbf{0.2069} \\
& &30 &0.0887 &-0.1536& \textbf{0.1783} \\ \hline
2. &Squared &10 &0.0815 &-0.0555& \textbf{0.3836} \\
&Euclidean &20 &-0.0315&-0.1809& \textbf{0.2612} \\
& &30 &0.0354 &-0.2367& \textbf{0.1538} \\ \hline
3. &City-block &10 &0.0687 &0.2375 &\textbf{0.2647} \\
& &20 &-0.0356& 0.1347& \textbf{0.2082} \\
& &30 &-0.0870& 0.0866& \textbf{0.1887} \\ \hline
4. &Cosine &10 &0.1737 &-0.1408& 0.0694 \\
& &20 &0.0359 &-0.1973& 0.0277 \\
& &30 &0.0245 &-0.2456& -0.0316 \\ \hline
5. &Correlation &10 &0.1926&-0.1259 &\textbf{0.3426} \\
& &20 &0.0970 &-0.2198& \textbf{0.2313} \\
& &30 &0.2383 &-0.2604& \textbf{0.1556} \\ \hline
6. &Hamming &10 &0.0643 &0.0706&0.0775 \\
& &20 &0.0683 &0.0311 &0.0382 \\
& &30 &0.0715 &0.0283 &0.0229 \\ \hline
7. &Jaccard &10 &0.0716 &0.0303 &0.0458 \\
& &20 &0.0446 &0.0276 &0.0236 \\
& &30 &0.0279 &0.0298 &0.0318 \\ \hline
\end{tabular}
\caption{\label{suitable_similarity}Silhouette Values for modified SC with seven similarity measures and three Laplacian matrices for $k=10, 20$, and $30$. Silhouette Values in bold represent good clustering.}
\end{table}
From this table, it is evident that the Silhouette Values for the Euclidean, Squared Euclidean, City-block and Correlation similarity measures with the Type-3 Laplacian matrix are the best. Hence, we use these four similarity measures and this Laplacian matrix. Also, as mentioned earlier, the Silhouette Values decrease for the twenty and thirty cluster sizes.
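A validation run of this kind can be sketched with scikit-learn, whose \texttt{silhouette\_score} computes the average of Eq. (\ref{eq:2}) under the standard convention that $b(i)$ is the smallest of the average distances to the other clusters. The sketch below is illustrative only: \texttt{SpectralClustering} merely stands in for our modified SC, which it is not.
\begin{verbatim}
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score

def silhouette_for_k(X, ks=(10, 20, 30), seed=0):
    out = {}
    for k in ks:
        labels = SpectralClustering(
            n_clusters=k, affinity='nearest_neighbors',
            random_state=seed).fit_predict(X)
        out[k] = silhouette_score(X, labels)  # average of Eq. (2)
    return out
\end{verbatim}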
\subsection{Results}
\label{sec:5.4}
Using the earlier presented dataset and the sampling-clustering setup, we compare our proposed algorithm (modified SC with Pivotal Sampling) with the existing variants in three ways. Initially, we compare with modified SC with VQ, HC\footnote{HC also requires building a similarity matrix.} with Pivotal Sampling, and HC with VQ for a sample size of $500$. Since the results for modified SC with VQ come out to be closest to those of our algorithm, next, for broader appeal, we compare these two algorithms for a sample size of $300$. Finally, we compare our algorithm with the current best in the literature for this kind of data (i.e. HC without sampling) for both the sample sizes of $500$ and $300$.

The results for the {\it initial} set of comparisons are given in Table \ref{sampling500}. Columns 2 and 3 give the similarity measures and the number of clusters chosen, respectively. Columns 4 and 5 give the Silhouette Values of modified SC with Pivotal Sampling and VQ, respectively, while columns 6 and 7 give the Silhouette Values of HC with Pivotal Sampling and VQ, respectively.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf{Sr.}& \textbf{Similarity} &\textbf{\# of} & \multicolumn{2}{|c|}{\textbf{modified SC}} & \multicolumn{2}{|c|}{\textbf{HC}} \\ \cline{4-7}
\textbf{No.} & \textbf{Measure} & \textbf{Clusters} & \textbf{Pivotal} & \textbf{VQ} & \textbf{Pivotal} & \textbf{VQ} \\
& & $(k)$ & \textbf{Sampling} & & \textbf{Sampling} & \\ \hline
1. &Euclidean &10 &{\bf 0.2152} &0.2061 &0.2105 &-0.1040\\
& &20 &{\bf 0.1905} &0.1448 &$0.2263^*$ &-0.1620\\
& &30 &{\bf 0.1741} &0.1021 &$0.1933^*$ &-0.2874\\ \hline
2. &Squared &10 &{\bf 0.3362} &0.2969 &0.2634 &-0.2096\\
&Euclidean &20 &{\bf 0.2469} &0.1522 &$0.3726^*$ &-0.5899\\
& &30 &{\bf 0.1658} &0.0440 &$0.2933^*$ &-0.6083\\ \hline
3. &City-block &10 &{\bf 0.2369} &0.2354 &0.1703 &-0.2278\\
& &20 &{\bf 0.2019} &0.1870 &0.1879 &-0.2398\\
& &30 &{\bf 0.1752} &0.1524 &$0.1988^*$ &-0.2868\\ \hline
4. &Correlation&10 &{\bf 0.3367} &0.2560 &0.2582 &-0.0060\\
& &20 &{\bf 0.2291} &0.0899 &0.0867 &-0.4120\\
& &30 &{\bf 0.1742} &-0.0349 &0.0998 &-0.7018\\ \hline
\end{tabular}
\caption{\label{sampling500}Silhouette Values for modified SC and HC with Pivotal Sampling and VQ for $N=500$. Silhouette Values in bold represent good clustering.}
\end{table}
When we compare our algorithm (values in the fourth column, highlighted in bold) with the other variants, it is evident that we are clearly better than modified SC with VQ and HC with VQ (values in the fifth and the seventh columns); our values are higher than those from these two algorithms. When we compare our algorithm with HC with Pivotal Sampling (values in the sixth column), we again perform better in many cases. However, in some cases, our algorithm performs worse than HC with Pivotal Sampling (highlighted with a *). Upon further analysis (discussed below), we realize that an inappropriate grouping of genotypes by HC with Pivotal Sampling results in this set of Silhouette Values being wrongly inflated.

To further assess the quality of our algorithm, we present the distribution of genotypes into the different clusters (after reverse-mapping) for modified SC with Pivotal Sampling and HC with Pivotal Sampling. Without loss of generality, this comparison is done using the Squared Euclidean similarity measure and cluster size thirty.
The results for our algorithm are given in Figure \ref{Fig2} and those for HC with Pivotal Sampling in Figure \ref{Fig3}. On the $x$-axis, we have the cluster number and, on the $y$-axis, the number of genotypes present in each cluster.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth,height=6cm]{SqEu-SC-30.eps}
\caption{Distribution of Genotypes (modified SC with Pivotal Sampling) for Squared Euclidean similarity measure and cluster size thirty.}
\label{Fig2}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth,height=6cm]{SqEu-HC-30.eps}
\caption{Distribution of Genotypes (HC with Pivotal Sampling) for Squared Euclidean similarity measure and cluster size thirty.}
\label{Fig3}
\end{figure}
From Figure \ref{Fig2}, we can see that our algorithm distributes all genotypes evenly among the different clusters. This matches the real-life segregation as well, since all genotypes belong to the same plant (Soybean in this case). This clustering has also been validated by the plant biologists at the Indian Institute of Soybean Research, Indore, India. The equivalent clustering results obtained by using HC with Pivotal Sampling (as given in Figure \ref{Fig3}), however, depict a very skewed distribution. Most genotypes are segregated into only a few clusters, while the remaining clusters contain only one or two genotypes. This is also the reason for the inflation of the Silhouette Values of HC with Pivotal Sampling in Table \ref{sampling500}: the intra-cluster distance $a(i)$ of a solitary genotype is zero, which forces its Silhouette Value to one (the maximum possible; see Eq. (\ref{eq:2})). Thus, our algorithm also outperforms HC with Pivotal Sampling, which was not very evident from Table \ref{sampling500} alone.

{\it Next}, as mentioned earlier, to further demonstrate the applicability of our work, we also present the results with a sample size of $300$. Since modified SC with VQ turns out to be our closest competitor, we compare our algorithm with this one only. This comparison is given in Table \ref{sampling300}, with its columns mapping to the respective columns of Table \ref{sampling500}. As evident from Table \ref{sampling300}, our modified SC with Pivotal Sampling substantially outperforms modified SC with VQ (see values in columns 4 and 5).
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Sr.}& \textbf{Similarity} &\textbf{\# of} & \multicolumn{2}{|c|}{\textbf{modified SC}} \\ \cline{4-5}
\textbf{No.} & \textbf{Measure} & \textbf{Clusters} & \textbf{Pivotal} & \textbf{VQ} \\
& & $(k)$ & \textbf{Sampling} & \\ \hline
1. &Euclidean &10 &0.2104 &0.1833 \\
& &20 &0.1968 &0.0955 \\
& &30 &0.1743 &0.0722 \\ \hline
2. &Squared &10 &0.3280 &0.2589 \\
&Euclidean &20 &0.2424 &0.1322 \\
& &30 &0.1613 &0.0044 \\ \hline
3. &City-block &10 &0.2392 &0.2157 \\
& &20 &0.1990 &0.1696 \\
& &30 &0.1752 &0.1373 \\ \hline
4. &Correlation&10 &0.3368 &0.2229 \\
& &20 &0.2312 &0.0336 \\
& &30 &0.1725 &-0.0788 \\ \hline
\end{tabular}
\caption{\label{sampling300}Silhouette Values for modified SC with Pivotal Sampling and VQ for $N=300$.}
\end{table}
As earlier, {\it finally}, we compare the results of our algorithm (modified SC with Pivotal Sampling) with the currently popular clustering algorithm in the plant studies domain (i.e. HC without sampling). For this set of experiments, without loss of generality, we use the cluster size of ten.
The results of this comparison are given in Table \ref{percentage}, where the first four columns are self-explanatory (based upon the data given in Tables \ref{sampling500} and \ref{sampling300} earlier). In the last column of this table, we also give the percentage improvement of our algorithm over HC. As evident from this table, our algorithm is up to 45\% more accurate than HC for both the sample sizes. As earlier, our algorithm also has the crucial added benefit of reduced computational complexity as compared to HC.
\begin{table}[ht]
\centering
\begin{tabular}{|c|c|c|c|c|}
\hline
\textbf{Sample} & \textbf{Similarity} & \textbf{modified SC with} & \textbf{HC} & \textbf{Percentage} \\
\textbf{Size} & \textbf{Measure} & \textbf{Pivotal Sampling} & & \textbf{Improvement} \\ \hline
& Euclidean & 0.2152 & 0.2173 & -0.97\% \\
$N=500$ &Squared Euclidean & 0.3362 &0.3257 & 3.22\% \\
&City-block & 0.2369 & 0.2135 & 10.96\% \\
&Correlation & 0.3367 & 0.2307 & 45.95\% \\ \hline
& Euclidean & 0.2104 & 0.2173 & -3.28\% \\
$N=300$ &Squared Euclidean & 0.3280 &0.3257 & 0.71\% \\
&City-block & 0.2392 & 0.2135 & 12.04\% \\
&Correlation & 0.3368 & 0.2307 & 45.99\% \\ \hline
\end{tabular}
\caption{\label{percentage}Silhouette Values of modified SC with Pivotal Sampling and HC for cluster size ten.}
\end{table}
\section{Conclusions and Future Work}
\label{sec:6}
We present the modified Spectral Clustering (SC) with Pivotal Sampling algorithm for clustering plant genotypes using their phenotypic data. We use SC for its accurate clustering and Pivotal Sampling for its effective sample selection, which, in turn, makes our algorithm scalable to large data. Since building the similarity matrix is crucial for the SC algorithm, we exhaustively adapt seven similarity measures to build such a matrix. We also present a novel way of assigning probabilities to different genotypes for Pivotal Sampling. We perform two sets of experiments on about 2400 Soybean genotypes that demonstrate the superiority of our algorithm. \textit{First}, when compared with the competitive clustering algorithms with sampling (SC with Vector Quantization (VQ), Hierarchical Clustering (HC) with Pivotal Sampling, and HC with VQ), the Silhouette Values obtained when using our algorithm are higher. \textit{Second}, our algorithm doubly outperforms the standard HC algorithm, in terms of both clustering accuracy and computational complexity. We are up to 45\% more accurate and an order of magnitude faster than HC.

Since the choice of the similarity matrix has a significant impact on the quality of clusters, in the future, we intend to adapt other ways of constructing this matrix such as Pearson $\chi^2$, Squared $\chi^2$, Bhattacharyya, Kullback-Leibler, etc. \citep{distance}. Furthermore, in place of Pivotal Sampling, we also plan to adapt other probabilistic sampling techniques like Cube Sampling, which possess complementary data analysis properties \citep{sampling}. As mentioned earlier, our algorithm is developed to work well for phenotypic data of all plants. Hence, in the future, we aim to test our algorithm on other plant genotypes as well (e.g., Wheat, Rice, etc.) \citep{wheat,rice-future}.
\section*{Acknowledgments}
The authors would like to thank Mr. Mohit Mohata, Mr. Ankit Gaur and Mr. Suryaveer Singh (IIT Indore, India) for their help in preliminary experiments, which they did as part of their undergraduate degree project. We would also like to sincerely thank Dr. Vangala Rajesh and Dr.
Sanjay Gupta (Indian Institute of Soybean Research, Indore, India) for their help in generating the experimental data.
\section*{Acknowledgments}
\addcontentsline{toc}{section}{Acknowledgments}
This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior — Brasil (CAPES) — Finance Code 001. RMM is partially supported by FAPESP grants 2018/03338-6 and 2018/13481-0. We would also like to thank Paulo Ricardo da Silva, Marco Antonio Teixeira and the referee for the helpful suggestions and advice.
\section{Affine Dynamics}%
\label{sec:affine}
Let $\mathcal{A} \subset \mathcal{D}^k$ be the set of all piecewise smooth vector fields $\mathbf{F}$ with a double discontinuity given by affine vector fields
\begin{equation}
\label{sec:affine:eq:system}
\begin{aligned}
\mathbf{F}_i(x,y,z) = ( & a_{i1}x + b_{i1}y + c_{i1}z + d_{i1}, \\
& a_{i2}x + b_{i2}y + c_{i2}z + d_{i2}, \\
& a_{i3}x + b_{i3}y + c_{i3}z + d_{i3}),
\end{aligned}
\end{equation}
\noindent where $a_{ij}, b_{ij}, c_{ij}, d_{ij} \in \mathbb{R}$ for all $i$ and $j$. According to \cref{sec:framework:blowup:thm:dynamics}, the dynamics over $\Sigma_x$ of such a field is associated, via the blow-up, with the following dynamics over the cylinder $C = \mathbb{R} \times S^1 = S_1 \cup \ldots \cup S_4$: over each stripe $S_i$ acts a slow-fast dynamics whose reduced dynamics is given by
\begin{equation}
\label{sec:affine:eq:reduced}
\left\{\begin{matrix*}[l]
\dot{x} = a_{i1}x + d_{i1}\\
0 = (a_{i3}x + d_{i3}) \cos{\theta} - (a_{i2}x + d_{i2}) \sin{\theta}\\
\end{matrix*}\right.,
\end{equation}
\noindent with radial slow dynamics $\dot{r} = (a_{i2}x + d_{i2}) \cos{\theta} + (a_{i3}x + d_{i3}) \sin{\theta}$; and layer dynamics given by
\begin{equation}
\label{sec:affine:eq:layer}
\left\{\begin{matrix*}[l]
x' = 0\\
\theta' = (a_{i3}x + d_{i3}) \cos{\theta} - (a_{i2}x + d_{i2}) \sin{\theta}
\end{matrix*}\right.,
\end{equation}
\noindent with radial fast dynamics $r' = 0$. Besides that, for \cref{sec:affine:eq:system}, we have $p_i = a_{i2}x + d_{i2}$ and $q_i = a_{i3}x + d_{i3}$, so that WFH is satisfied as long as
\begin{equation}
\label{sec:affine:eq:wfh}
a_{i2}x + d_{i2} \neq 0 \quad \text{or} \quad a_{i3}x + d_{i3} \neq 0,
\end{equation}
\noindent whereas, since $(p_i)_x = a_{i2}$ and $(q_i)_x = a_{i3}$, SFH is satisfied as long as
\begin{equation}
\label{sec:affine:eq:sfh}
\begin{aligned}
0 & \neq p_i(q_i)_x - q_i(p_i)_x = \\
& = (a_{i2}x + d_{i2})a_{i3} - (a_{i3}x + d_{i3})a_{i2} = \\
& = a_{i3}d_{i2} - a_{i2}d_{i3} \eqqcolon \gamma_i,
\end{aligned}
\end{equation}
\noindent which not only assures the fundamental hypothesis, but also avoids the already studied constant case, as we will see below. As in the constant case, our goal in this section is to fully describe the dynamics of \cref{sec:affine:eq:system} over the cylinder $C$ under the hypothesis \cref{sec:affine:eq:sfh}. In order to do so, we systematically analyse the slow-fast systems \eqref{sec:affine:eq:reduced}--\eqref{sec:affine:eq:layer} for the cases suggested by \cref{sec:affine:eq:wfh} and outlined in \cref{sec:affine:tab:cases}.
\begin{table}[ht]
\centering
\begin{tabular}{ccc}
\hline
& $a_{i2}x + d_{i2} \neq 0$ & $a_{i2}x + d_{i2} = 0$ \\ \hline
$a_{i2} \neq 0$ & A & B \\
$a_{i2} = 0$ & C & D \\ \hline
\end{tabular}%
\caption{Division of the dynamics of \eqref{sec:affine:eq:system} into study cases.}
\label{sec:affine:tab:cases}
\end{table}
Observe that case (B) actually complements case (A). Moreover, observe that in case (D) we have $a_{i2} = 0$ and $d_{i2} = 0$, which leads to the contradiction $\gamma_i = 0$.
Therefore, cases (A) and (B) complement each other and will be studied in \cref{sec:affine:sub:a2_nonzero}; case (C) will be studied in \cref{sec:affine:sub:a2_zero}. The resulting \cref{sec:affine:thm:dynamics} is stated and exemplified in \cref{sec:affine:sub:theorem}.
\input{tex/sec/affine/sub/a2_nonzero.tex}
\input{tex/sec/affine/sub/a2_zero.tex}
\input{tex/sec/affine/sub/theorem.tex}
\subsection{Case \texorpdfstring{$a_{i2} \neq 0$}{a2=/=0}}
\label{sec:affine:sub:a2_nonzero}
Let us start with case (A), i.e., assume that $a_{i2} \neq 0$ and $a_{i2}x + d_{i2} \neq 0$. In order to explicitly define $\mathcal{M}_i$, observe that whenever $\cos{\theta} \neq 0$ the second equation of \eqref{sec:affine:eq:reduced} gives us
\begin{equation*}
\begin{aligned}[l]
0 & = (a_{i3}x + d_{i3}) \cos{\theta} - (a_{i2}x + d_{i2}) \sin{\theta} \Leftrightarrow \\
& \Leftrightarrow \tan{\theta} = \frac{a_{i3}x + d_{i3}}{a_{i2}x + d_{i2}} \eqqcolon h(x) \Leftrightarrow \\
& \Leftrightarrow \theta = \arctan{\left(\frac{a_{i3}x + d_{i3}}{a_{i2}x + d_{i2}}\right)} + n\pi = \theta_i\left(x \right) + n\pi,
\end{aligned}%
\end{equation*}
\noindent where $n \in \mathbb{Z}$. Therefore, without loss of generality, the slow manifold can be written as $\mathcal{M}_i = H_{i} \cup H_{i}^{\pi}$, where
\begin{align*}
H_{i} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i(x) \right\} \text{ and} \\
H_{i}^{\pi} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i(x) + \pi \right\},
\end{align*}
\noindent which consist of two arctangent-normalized hyperbolas inside the cylinder $C = \mathbb{R} \times S^1$. In fact, since $a_{i2} \neq 0$, $h(x)$ is a hyperbola such that
\begin{equation*}
\frac{d}{dx}h(x) = \frac{d}{dx}\left[\frac{a_{i3}x + d_{i3}}{a_{i2}x + d_{i2}}\right] = \frac{a_{i3}d_{i2} - d_{i3}a_{i2}}{(a_{i2}x + d_{i2})^2} = \frac{\gamma_i}{(a_{i2}x + d_{i2})^2}
\end{equation*}
\noindent or, in other words, it is an increasing hyperbola if $\gamma_i > 0$ and a decreasing one if $\gamma_i < 0$\footnote{If $\gamma_i = 0$, then $h(x)$ is a constant function and, therefore, $H_{i}$ and $H_{i}^{\pi}$ are straight lines. In other words, the constant case is recovered.}. Besides that, observe that $h(x)$ has a vertical asymptote at
\begin{equation*}
a_{i2}x + d_{i2} = 0 \Leftrightarrow x = -\frac{d_{i2}}{a_{i2}} \eqqcolon \alpha_i
\end{equation*}
\noindent which satisfies
\begin{equation*}
\lim_{x \to \alpha_i^{\pm}} h(x) = \mp\infty \quad \text{and} \quad \lim_{x \to \alpha_i^{\pm}} h(x) = \pm\infty
\end{equation*}
\noindent if $\gamma_i > 0$ and $\gamma_i < 0$, respectively; and $h(x)$ has a horizontal asymptote at
\begin{equation*}
\lim_{x \to \pm\infty} h(x) = \lim_{x \to \pm\infty} \left(\frac{a_{i3}x + d_{i3}} {a_{i2}x + d_{i2}} \right) = \frac{a_{i3}}{a_{i2}}.
\end{equation*}
\noindent Translating the information above about the hyperbola $h(x)$ to the arctangent-normalized hyperbola $H_{i}$, we get that it
\begin{itemize}
\item is an increasing curve if $\gamma_i > 0$ and a decreasing one if $\gamma_i < 0$;
\item has a vertical asymptote at $x = \alpha_i$ which satisfies
\begin{equation*}
\lim_{x \to \alpha_i^{\pm}} \theta_{i}(x) = \mp\frac{\pi}{2} \quad \text{and} \quad \lim_{x \to \alpha_i^{\pm}} \theta_{i}(x) = \pm\frac{\pi}{2}
\end{equation*}
\noindent if $\gamma_i > 0$ and $\gamma_i < 0$, respectively;
\item has a horizontal asymptote at $\theta = \arctan{\left(\frac{a_{i3}}{a_{i2}} \right)} \eqqcolon \beta_i$.
\end{itemize}
\noindent More precisely, the hyperbola $H_{i}$ behaves as the red part of \cref{sec:affine:sub:a2_nonzero:subfig:hyperbole}. However, putting together the hyperbolas $H_{i}$ and $H_{i}^{\pi}$, we get that they actually behave as two arctangent-like curves, as represented in \cref{sec:affine:sub:a2_nonzero:subfig:cylinder}.
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.85\linewidth}
\def0.85\linewidth{\linewidth}
\import{tex/sec/affine/fig/}{aff_a2_nonzero_hyp.pdf_tex}
\caption{Hyperbola $H_{i}$.}
\label{sec:affine:sub:a2_nonzero:subfig:hyperbole}
\end{subfigure}
\quad
\begin{subfigure}[b]{0.85\linewidth}
\def0.85\linewidth{\linewidth}
\import{tex/sec/affine/fig/}{aff_a2_nonzero.pdf_tex}
\caption{Hyperbolas $H_{i}$ and $H_{i}^{\pi}$ together on the cylinder $C$ forming the arctangents $A_{i}$ and $A_{i}^{\pi}$.}
\label{sec:affine:sub:a2_nonzero:subfig:cylinder}
\end{subfigure}
\caption{Affine double discontinuity dynamics for $a_{i1} = 1$, $d_{i1} = -1$, $a_{i2} = 1$, $d_{i2} = 1$, $a_{i3} = 1$ and $d_{i3} = 0$. In this example we have $\alpha_i = -1$, $\beta_i = \frac{\pi}{4}$ and $\delta_i = 1$. Therefore, for example, $S_1$ has part of the hyperbola $H_{i}$ as a visible part of the slow manifold; whereas $S_2$ has only part of $A_{i}^{\pi}$ visible.}
\label{sec:affine:sub:a2_nonzero:fig:cylinder}
\end{figure}
These arctangent-like curves will be denoted by $A_{i}$ and $A_{i}^{\pi}$. Based on the analysis done before, we conclude that they are given by
\begin{align*}
A_{i} & = \left\{(x,\theta) \in [-\infty,\alpha_i] \times \left[0,2\pi\right];~ \theta = \theta_i(x) + \pi \right\} \cup \\
& \cup \left\{(x,\theta) \in [\alpha_i,+\infty] \times \left[0,2\pi\right];~ \theta = \theta_i(x) \right\}, \\
A_{i}^{\pi} & = \left\{(x,\theta) \in [-\infty,\alpha_i] \times \left[0,2\pi\right];~ \theta = \theta_i(x) \right\} \cup \\
& \cup \left\{(x,\theta) \in [\alpha_i,+\infty] \times \left[0,2\pi\right];~ \theta = \theta_i(x) + \pi \right\},
\end{align*}
\noindent and, therefore, on the one hand, $A_{i}$ is an arctangent-like curve with $\theta = \beta_i + \pi$ and $\theta = \beta_i$ as negative and positive\footnote{Where negative means $x \to -\infty$ and positive means $x \to +\infty$.} horizontal asymptotes, respectively; on the other hand, $A_{i}^{\pi}$ is an arctangent-like curve with $\theta = \beta_i$ and $\theta = \beta_i + \pi$ as negative and positive horizontal asymptotes, respectively.\footnote{In particular, when $a_{i3} = 0$ we have $\beta_i = 0$ and, therefore, the horizontal asymptotes are given by $\theta = 0$ and $\theta = \pi$, which are part of the stripes' boundary.} Moreover, because of the very definition of $\beta_i$, the positioning of the asymptotes inside the cylinder behaves similarly to that of the straight lines $L_{i}$ and $L_{i}^{\pi}$ in \cref{sec:constant}. This completes the qualitative analysis of the shape of the slow manifold and, from now on, we will write $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$.

Over both arctangents $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$, we have the one-dimensional dynamics given by the first equation of \eqref{sec:affine:eq:reduced}, i.e., $\dot{x} = a_{i1}x + d_{i1}$. Analyzing this equation, we observe that, if $a_{i1} \neq 0$, then there are hyperbolic critical points at
\begin{equation*}
x = -\frac{d_{i1}}{a_{i1}} \eqqcolon \delta_i,
\end{equation*}
\noindent these points being attractors if $a_{i1} < 0$ and repellers if $a_{i1} > 0$, as represented in \cref{sec:affine:sub:a2_nonzero:subfig:cylinder}.
Since we are under SFH, \cref{sec:framework:sub:blowup:cor:singularities} tells us that, in this case, these hyperbolic singularities are actually hyperbolic singularities of the whole stripe $S_i$. If $a_{i1} = 0$, then there is no critical point and the dynamics over $\mathcal{M}_i$ is exactly as in the constant case described in \cref{sec:constant}. This completes the qualitative analysis of the reduced dynamics.

Regarding the layer dynamics, we have the layer system \eqref{sec:affine:eq:layer}, which says that for each fixed value of $x \in \mathbb{R}$ we have a one-dimensional dynamics given by the second equation of \eqref{sec:affine:eq:layer}. In particular, assuming that $\cos{\theta} > 0$ and $a_{i2}x + d_{i2} > 0$, then
\begin{equation*}
\theta' > 0 \Leftrightarrow \theta < \theta_i(x),
\end{equation*}
\noindent since the arctangent function is strictly increasing. Likewise, and under the same conditions, we have that
\begin{equation*}
\theta' < 0 \Leftrightarrow \theta > \theta_i(x),
\end{equation*}
\noindent and, therefore, we conclude that for $a_{i2}x + d_{i2} > 0$ the piece of curve $\theta = \theta_i(x)$ is an attractor of the surrounding dynamics and, therefore, $\theta = \theta_i(x) + \pi$ is a repellor. Moreover, if $a_{i2} > 0$, then $a_{i2}x + d_{i2} > 0$ happens for $x > \alpha_i$; if $a_{i2} < 0$, then $a_{i2}x + d_{i2} > 0$ happens for $x < \alpha_i$. Completing this analysis and comparing with the definition of $A_{i}$ and $A_{i}^{\pi}$, we reach the results summarized in \cref{sec:affine:tab:fast} and represented by the green part of \cref{sec:affine:sub:a2_nonzero:fig:cylinder}. Moreover, at $\cos{\theta} = 0$ with $a_{i2} \neq 0$ and $a_{i2}x + d_{i2} \neq 0$, \eqref{sec:affine:eq:layer} gives us the layer systems
\begin{equation*}
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & - (a_{i2}x + d_{i2})
\end{matrix*}\right.
\quad \text{ and } \quad
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & a_{i2}x + d_{i2}
\end{matrix*}\right.
\end{equation*}
\noindent for $\theta = \frac{\pi}{2}$ and $\theta = \frac{3\pi}{2}$, respectively, whose dynamics is consistent with \cref{sec:affine:tab:fast}. This completes the qualitative analysis of the layer dynamics for case (A).
\begin{table}[ht]
\centering
\begin{tabular}{lll}
\hline
& $a_{i2} < 0$ & $a_{i2} > 0$ \\ \hline
$A_{i}$ & repellor & attractor \\
$A_{i}^{\pi}$ & attractor & repellor \\ \hline
\end{tabular}
\caption{Layer dynamics around the arctangents $A_{i}$ and $A_{i}^{\pi}$ that compose the slow manifold $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$.}
\label{sec:affine:tab:fast}
\end{table}
Now, let us consider case (B), which complements the case (A) studied above by defining the missing dynamics over $a_{i2}x + d_{i2} = 0$ ($\Leftrightarrow x = \alpha_i$) with $a_{i2} \neq 0$. In this case, the reduced system \eqref{sec:affine:eq:reduced} becomes
\begin{equation*}
\left\{\begin{matrix*}[l]
\dot{x} & = & a_{i1}x + d_{i1} \\
0 & = & (a_{i3}x + d_{i3}) \cos{\theta}
\end{matrix*}\right.,
\end{equation*}
\noindent whose slow manifold $\mathcal{M}_i$ is implicitly given by the equation $0 = (a_{i3}x + d_{i3}) \cos{\theta}$, which actually means $0 = \cos{\theta}$, since we are under SFH and, therefore, $a_{i3}x + d_{i3} \neq 0$. In other words, $\mathcal{M}_i = \left\{\left(\alpha_i, \frac{\pi}{2}\right), \left(\alpha_i, \frac{3\pi}{2}\right)\right\}$. Over these points acts the dynamics $\dot{x} = a_{i1}x + d_{i1}$, which is consistent with case (A).
Regarding the fast dynamics, we have the layer system
\begin{equation*}
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & (a_{i3}x + d_{i3}) \cos{\theta}
\end{matrix*}\right.
\quad \leadsto \quad
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & -\frac{\gamma_i}{a_{i2}} \cos{\theta}
\end{matrix*}\right.,
\end{equation*}
\noindent since $x = \alpha_i$, which can easily be verified to be consistent with the layer dynamics given by \cref{sec:affine:tab:fast} and, therefore, with case (A). Hence, we conclude that the whole case (B) is consistent with case (A). In other words, the dynamics over the asymptote $a_{i2}x + d_{i2} = 0$ agrees with the surrounding dynamics.
\subsection{Case \texorpdfstring{$a_{i2} = 0$}{a2=0}}
\label{sec:affine:sub:a2_zero}
For case (C), remember that we have $a_{i2} = 0$ and $a_{i2}x + d_{i2} \neq 0$, implying $d_{i2} \neq 0$. Therefore, everything at the beginning of \cref{sec:affine:sub:a2_nonzero} remains true. However, whenever $\cos{\theta} \neq 0$, the explicit expression for the slow manifold $\mathcal{M}_i$ is now
\begin{equation*}
\theta = \arctan{\left(\frac{a_{i3}x + d_{i3}}{d_{i2}}\right)} + n\pi = \theta_i(x) + n\pi,
\end{equation*}
\noindent where $n \in \mathbb{Z}$. Therefore, without loss of generality, the slow manifold can be written as $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$, where
\begin{align*}
A_{i} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i(x) \right\} \text{ and} \\
A_{i}^{\pi} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i(x) + \pi \right\},
\end{align*}
\noindent which consist of two arctangent-like curves inside the cylinder $C = \mathbb{R} \times S^1$, as shown by the red part of \cref{sec:affine:sub:a2_zero:fig:cylinder}. In fact, since $a_{i2} = 0$, $h(x)$ is a straight line and, therefore,
\begin{equation*}
\theta = \theta_i(x) = \arctan{\left( h(x) \right)}
\end{equation*}
\noindent is an arctangent curve. Besides that, we have
\begin{equation*}
\frac{d}{dx}h(x) = \frac{\gamma_i}{(a_{i2}x + d_{i2})^2} = \frac{\gamma_i}{d_{i2}^{2}}
\end{equation*}
\noindent and, therefore, $A_{i}$ and $A_{i}^{\pi}$ are increasing curves if $\gamma_i > 0$ and decreasing ones if $\gamma_i < 0$\footnote{Again, if $\gamma_{i} = 0$ ($\Leftrightarrow a_{i3} = 0$), then $h(x)$ is a constant function and, therefore, $A_{i}$ and $A_{i}^{\pi}$ are straight lines. In other words, the constant case is recovered.}. Moreover, since
\begin{align*}
\lim_{x \to \pm\infty} \theta_i(x) & = \lim_{x \to \pm\infty} \left[ \arctan{\left(\frac{a_{i3}x + d_{i3}}{d_{i2}} \right)}\right] = \\
& = \arctan{\left[\lim_{x \to \pm\infty} \left(\frac{a_{i3}x + d_{i3}} {d_{i2}} \right) \right]} = \\
& = \arctan{\left[\pm\sgn{\left(\frac{a_{i3}}{d_{i2}} \right)}\infty \right]} = \\
& = \pm\sgn{\left(\frac{a_{i3}}{d_{i2}} \right)}\frac{\pi}{2} = \pm\sgn{\left(\frac{\gamma_i}{d_{i2}^{2}} \right)}\frac{\pi}{2} = \\
& = \pm\sgn{\left(\gamma_i \right)}\frac{\pi}{2} \eqqcolon \sigma_{i\pm},
\end{align*}
\noindent then $A_{i}$ has $\sigma_{i-}$ and $\sigma_{i+}$ as negative and positive horizontal asymptotes, respectively; while $A_{i}^{\pi}$ has $\sigma_{i+}$ and $\sigma_{i-}$ as negative and positive horizontal asymptotes, respectively. This completes the qualitative analysis of the shape of the slow manifold.
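The quantities quoted in the caption of \cref{sec:affine:sub:a2_zero:fig:cylinder} can also be checked numerically. The following SymPy snippet is a verification sketch of ours, not part of the original analysis; it computes $\gamma_i$, $\sigma_{i\pm}$ and $\delta_i$ for the parameter values used in that figure.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)
# Parameters of the figure: a_i1=1, d_i1=-1, a_i2=0, d_i2=1, a_i3=1, d_i3=1
a1, d1, a2, d2, a3, d3 = 1, -1, 0, 1, 1, 1

theta_i = sp.atan((a3*x + d3) / d2)         # slow-manifold branch A_i
gamma = a3*d2 - a2*d3                       # = 1 > 0, so A_i is increasing
sigma_plus = sp.limit(theta_i, x, sp.oo)    # pi/2
sigma_minus = sp.limit(theta_i, x, -sp.oo)  # -pi/2
delta = sp.Rational(-d1, a1)                # zero of x' = a1*x + d1, i.e. x = 1
print(gamma, sigma_minus, sigma_plus, delta)
\end{verbatim}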
\begin{figure}[ht]
\centering
\def0.85\linewidth{0.85\linewidth}
\import{tex/sec/affine/fig/}{aff_a2_zero.pdf_tex}
\caption{Affine double discontinuity dynamics for $a_{i1} = 1$, $d_{i1} = -1$, $a_{i2} = 0$, $d_{i2} = 1$, $a_{i3} = 1$ and $d_{i3} = 1$. In this example we have $\delta_i = 1$ and $\sigma_{i\pm} = \pm\frac{\pi}{2}$.}
\label{sec:affine:sub:a2_zero:fig:cylinder}
\end{figure}
Over both arctangents $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$, we have the one-dimensional dynamics given by the first equation of \eqref{sec:affine:eq:reduced}, i.e., $\dot{x} = a_{i1}x + d_{i1}$, which behaves as described in \cref{sec:affine:sub:a2_nonzero}. This completes the qualitative analysis of the reduced dynamics. Regarding the layer dynamics, an analysis completely analogous to that of the previous cases allows us to conclude that it behaves as described in \cref{sec:affine:tab:fast}, including the case $\cos{\theta} = 0$, but exchanging $a_{i2}$ with $d_{i2}$.
\subsection{Theorem and Examples}
\label{sec:affine:sub:theorem}
Summarizing, we conclude that the dynamics over $\Sigma_x$ for affine fields behaves as described in the theorem below, whose proof consists in the analysis done above.
\begin{thm}[Affine Double Discontinuity Dynamics]%
\label{sec:affine:thm:dynamics}
Given $\mathbf{F} \in \mathcal{A}$ with affine components $\mathbf{F}_i$ given by \eqref{sec:affine:eq:system} and such that $\gamma_i \neq 0$, let $\mathbf{\tilde{F}} \in \tilde{\mathcal{A}}$ be the vector field induced by the blow-up $\phi_1(x,\theta,r) = (x, r\cos{\theta}, r\sin{\theta})$. Then, this blow-up associates the dynamics over $\Sigma_x$ with the following dynamics over the cylinder $C = \mathbb{R} \times S^1 = S_1 \cup \ldots \cup S_4$: over each stripe $S_i$ acts a slow-fast dynamics whose slow manifold is given by $\mathcal{M}_i = A_{i} \cup A_{i}^{\pi}$, where $A_{i}^{\pi}$ is a $\pi$-translation of $A_{i}$ in $\theta$ and
\begin{enumerate}
\item \label{sec:affine:thm:dynamics:a2_nonzero} in the case $a_{i2} \neq 0$,
\begin{align*}
A_{i} & = \left\{(x,\theta) \in [-\infty,\alpha_i] \times \left[0,2\pi\right];~ \theta = \theta_i(x) + \pi \right\} \cup \\
& \cup \left\{(x,\theta) \in [\alpha_i,+\infty] \times \left[0, 2\pi\right];~ \theta = \theta_i(x) \right\}
\end{align*}
\noindent with $\theta_i(x) = \arctan{\left(\frac{a_{i3}x + d_{i3}}{a_{i2}x + d_{i2}}\right)}$, which consists of an arctangent-like curve inside the cylinder $C$ with $\theta = \beta_i + \pi$ and $\theta = \beta_i$ as negative and positive horizontal asymptotes, respectively;
\item \label{sec:affine:thm:dynamics:a2_zero} in the case $a_{i2} = 0$,
\begin{equation*}
A_{i} = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i(x) \right\}
\end{equation*}
\noindent with $\theta_i(x) = \arctan{\left(\frac{a_{i3}x + d_{i3}}{d_{i2}}\right)}$, which consists of an arctangent-like curve inside the cylinder $C$ with $\theta = \sigma_{i-}$ and $\theta = \sigma_{i+}$ as negative and positive horizontal asymptotes, respectively.
\end{enumerate}
Both arctangents are increasing if $\gamma_i > 0$ and decreasing if $\gamma_i < 0$. Over them acts the reduced dynamics $\dot{x} = a_{i1}x + d_{i1}$ and, around them, acts the layer dynamics described in \cref{sec:affine:tab:fast}, but exchanging $a_{i2}$ with $d_{i2}$ if $a_{i2} = 0$.
Finally, the new parameters above are given by $\alpha_i = -\frac{d_{i2}}{a_{i2}}$, $\beta_i = \arctan{\left(\frac{a_{i3}}{a_{i2}} \right)}$, $\gamma_i = a_{i3}d_{i2} - d_{i3}a_{i2}$, $\delta_i = -\frac{d_{i1}}{a_{i1}}$ and $\sigma_{i\pm} = \pm\sgn{\left(\gamma_i \right)}\frac{\pi}{2}$. \qed
\end{thm}
\begin{exmp}
\label{sec:affine:exmp:slow_cycle}
Let $\mathbf{F} \in \mathcal{A}$ be given by affine vector fields such that
\begin{align*}
& \mathbf{F}_2 :
\begin{bmatrix}
a_{21} & d_{21} \\
a_{22} & d_{22} \\
a_{23} & d_{23}
\end{bmatrix} =
\begin{bmatrix*}[r]
1 & -2 \\
-1 & 1 \\
-1 & 0
\end{bmatrix*}, \quad
& \mathbf{F}_1 :
\begin{bmatrix}
a_{11} & d_{11} \\
a_{12} & d_{12} \\
a_{13} & d_{13}
\end{bmatrix} =
\begin{bmatrix*}[r]
-1 & 2 \\
-1 & 1 \\
1 & 0
\end{bmatrix*}, \\ \\
& \mathbf{F}_3 :
\begin{bmatrix}
a_{31} & d_{31} \\
a_{32} & d_{32} \\
a_{33} & d_{33}
\end{bmatrix} =
\begin{bmatrix*}[r]
1 & -2 \\
1 & 1 \\
-1 & 0
\end{bmatrix*}, \quad
& \mathbf{F}_4 :
\begin{bmatrix}
a_{41} & d_{41} \\
a_{42} & d_{42} \\
a_{43} & d_{43}
\end{bmatrix} =
\begin{bmatrix*}[r]
-1 & 2 \\
1 & 1 \\
1 & 0
\end{bmatrix*},
\end{align*}
\noindent with the parameters $b_{ij}$ and $c_{ij}$ arbitrary since, according to \cref{sec:affine:thm:dynamics}, they only affect the dynamics outside the cylinder. Using this theorem, we can also verify that, over the cylinder $C$ given by the blow-up of $\Sigma_x$, the system has a single \textbf{slow cycle}, as represented in \cref{sec:affine:sub:theorem:fig:affine_slow_cycle}.
\begin{figure}[ht]
\centering
\def0.85\linewidth{0.85\linewidth}
\import{tex/sec/affine/fig/}{affine_slow_cycle.pdf_tex}
\caption{Dynamics over $C$ generated by the field $\mathbf{F}$ studied at \cref{sec:affine:exmp:slow_cycle}.}
\label{sec:affine:sub:theorem:fig:affine_slow_cycle}
\end{figure}
For instance, according to \cref{sec:affine:thm:dynamics}, the field $\mathbf{F}_1$ induces a slow-fast system whose slow manifold $\mathcal{M}_1 = A_1 \cup A_1^{\pi}$ consists of arctangents with horizontal asymptotes
\begin{equation*}
\theta = \beta_1 = \arctan{\left( \frac{a_{13}}{a_{12}} \right)} = \arctan{(-1)} = -\frac{\pi}{4}
\end{equation*}
\noindent at $S_4$ and $\theta = \beta_1 + \pi = \frac{3\pi}{4}$ at $S_2$. Besides that, since
\begin{equation*}
\gamma_1 = a_{13}d_{12} - a_{12}d_{13} = 1,
\end{equation*}
\noindent these arctangents are increasing. Therefore, we conclude that $\mathcal{M}_1 \cap S_1 \subset A_1^{\pi}$ and that it transversally crosses $S_1$, as represented in the lowest stripe of \cref{sec:affine:sub:theorem:fig:affine_slow_cycle}, from $\mathbf{R}_1$ to $\mathbf{Q}_1$. Here, the point $\mathbf{Q}_1$ is given by
\begin{equation*}
\frac{\pi}{2} = \theta_1(x) = \arctan{\left(\frac{a_{13}x + d_{13}}{a_{12}x + d_{12}}\right)} = \arctan{\left(\frac{x}{-x+1} \right)},
\end{equation*}
\noindent which happens when $x \to 1^-$; and the point $\mathbf{R}_1$ is given by
\begin{equation*}
0 = \theta_1(x) = \arctan{\left(\frac{a_{13}x + d_{13}}{a_{12}x + d_{12}}\right)} = \arctan{\left(\frac{x}{-x+1} \right)},
\end{equation*}
\noindent which happens when $x \to 0^+$. Dynamically, the flow also goes from $\mathbf{R}_1$ to $\mathbf{Q}_1$, since over $\mathcal{M}_1 \cap S_1$ acts the reduced dynamics $\dot{x} = -x+2$, which has $x = 2$ as a stable singularity. Finally, since $a_{12} = -1 < 0$ and $\mathcal{M}_1 \cap S_1 \subset A_1^{\pi}$, $\mathcal{M}_1 \cap S_1$ attracts the surrounding layer dynamics, according to \cref{sec:affine:tab:fast}.
Therefore, we conclude that the dynamics generated by $\mathbf{F}_1$ over the stripe $S_1$ in fact behaves as represented in \cref{sec:affine:sub:theorem:fig:affine_slow_cycle}. The dynamics over the other stripes can be similarly verified to be as represented. \qed
\end{exmp}
\begin{cor}
Every $\mathbf{F} \in \mathcal{A}$ with $\gamma_i \neq 0$ can induce at most one slow cycle over the cylinder.
\end{cor}
\begin{proof}
Given a stripe $S_i$, according to \cref{sec:affine:thm:dynamics}, the arctangents that form the slow manifold $\mathcal{M}_i$ either have a horizontal asymptote inside $S_i$ or not. If a horizontal asymptote is inside $S_i$, then a slow cycle construction is impossible, even if the asymptote is at one of the borders of $S_i$, since $\mathcal{M}_i$ does not transversally cross both borders of $S_i$. However, if no horizontal asymptote is inside $S_i$, then a construction similar to that realized in \cref{sec:affine:exmp:slow_cycle} can occur. Finally, no more than one slow cycle can occur, since the arctangents are strictly monotone and, therefore, transversally cross $S_i$ at most once.
\end{proof}
\section{Constant Dynamics}%
\label{sec:constant}
Let $\mathcal{C} \subset \mathcal{D}^k$ be the set of all piecewise smooth vector fields $\mathbf{F}$ with a double discontinuity given by constant vector fields
\begin{equation}
\label{sec:constant:eq:system}
\mathbf{F}_i(x,y,z) = (d_{i1}, d_{i2}, d_{i3}),
\end{equation}
\noindent where $d_{ij} \in \mathbb{R}$ for all $i$ and $j$. According to \cref{sec:framework:blowup:thm:dynamics}, the dynamics over $\Sigma_x$ of such a field is associated, via the blow-up, with the following dynamics over the cylinder $C = \mathbb{R} \times S^1 = S_1 \cup \ldots \cup S_4$: over each stripe $S_i$ acts a slow-fast dynamics whose reduced dynamics is given by
\begin{equation}
\label{sec:constant:eq:reduced}
\left\{\begin{matrix*}[l]
\dot{x} = d_{i1}\\
0 = d_{i3} \cos{\theta} - d_{i2} \sin{\theta}\\
\end{matrix*}\right.,
\end{equation}
\noindent with radial slow dynamics $\dot{r} = d_{i2} \cos{\theta} + d_{i3} \sin{\theta}$; and layer dynamics given by
\begin{equation}
\label{sec:constant:eq:layer}
\left\{\begin{matrix*}[l]
x' = 0\\
\theta' = d_{i3} \cos{\theta} - d_{i2} \sin{\theta}
\end{matrix*}\right.,
\end{equation}
\noindent with radial fast dynamics $r' = 0$. Besides that, for \cref{sec:constant:eq:system}, we have $p_i = d_{i2}$ and $q_i = d_{i3}$, so that WFH is satisfied as long as
\begin{equation}
\label{sec:constant:eq:wfh}
d_{i2} \neq 0 \quad \text{or} \quad d_{i3} \neq 0,
\end{equation}
\noindent whereas SFH is \textbf{never} satisfied, since $(p_i)_x = (q_i)_x = 0$. Therefore, our goal in this section is to fully describe the dynamics of \cref{sec:constant:eq:system} over the cylinder $C$ under the hypothesis \cref{sec:constant:eq:wfh}. In order to do so, we systematically analyse the slow-fast systems \eqref{sec:constant:eq:reduced}--\eqref{sec:constant:eq:layer} for the two cases suggested by \cref{sec:constant:eq:wfh}. This analysis takes place in \cref{sec:constant:sub:d2_nonzero,sec:constant:sub:d2_zero}, resulting in \cref{sec:constant:thm:dynamics}, stated and exemplified in \cref{sec:constant:sub:theorem}.
\input{tex/sec/constant/sub/d2_nonzero.tex}
\input{tex/sec/constant/sub/d2_zero.tex}
\input{tex/sec/constant/sub/theorem.tex}
\subsection{Case \texorpdfstring{$d_{i2} \neq 0$}{d2=/=0}}
\label{sec:constant:sub:d2_nonzero}
In order to explicitly define the slow manifold $\mathcal{M}_i$, observe that whenever $\cos{\theta} \neq 0$ the second equation of \eqref{sec:constant:eq:reduced} gives us
\begin{equation*}
0 = d_{i3} \cos{\theta} - d_{i2} \sin{\theta} \Leftrightarrow \tan{\theta} = \frac{d_{i3}}{d_{i2}} \Leftrightarrow \theta = \arctan{\left(\frac{d_{i3}}{d_{i2}}\right)} + n\pi = \theta_i + n\pi,
\end{equation*}
\noindent where $n \in \mathbb{Z}$. Therefore, without loss of generality, the slow manifold can be written as $\mathcal{M}_i = L_{i} \cup L_{i}^{\pi}$, where
\begin{align*}
L_{i} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i \right\} \text{ and} \\
L_{i}^{\pi} & = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \theta_i + \pi \right\},
\end{align*}
\noindent which consist of two straight lines inside the cylinder $C = \mathbb{R} \times \left[0,2\pi\right]$, as shown by the red part of \cref{sec:constant:fig:d2_nonzero}. Observe that, since $\theta_i \in \left(-\frac{\pi}{2}, \frac{\pi}{2}\right)$ and $\theta_i + \pi \in \left(\frac{\pi}{2}, \frac{3\pi}{2}\right)$, either $L_{i} \subset S_1$ and $L_{i}^{\pi} \subset S_3$ or $L_{i} \subset S_4$ and $L_{i}^{\pi} \subset S_2$. In other words, these straight lines always lie in intercalated stripes. Therefore, a given stripe $S_i$ might or might not contain one of these straight lines, depending exclusively on the value of $\theta_i$.\footnote{In particular, when $d_{i3} = 0$ we have $\theta_i = 0$ and, therefore, the straight lines $L_{i}$ and $L_{i}^{\pi}$ are given by $\theta = 0$ and $\theta = \pi$, respectively, which are part of the stripes' boundary.} This completes the qualitative analysis of the shape of the slow manifold.
\begin{figure}[ht]
\centering
\def0.85\linewidth{0.85\linewidth}
\import{tex/sec/constant/fig/}{const_d2_nonzero.pdf_tex}
\caption{Constant double discontinuity dynamics for $d_{i1} = 1 > 0$, $d_{i2} = 0.7 > 0$ and $d_{i3} = 1 > 0$. In this example we have $\theta_i = \arctan{\frac{1}{0.7}} \approx 0.96$. Therefore, for example, $S_1$ has $\theta = \theta_i$ as an attracting visible part of the slow manifold; whereas $S_2$ has none.}
\label{sec:constant:fig:d2_nonzero}
\end{figure}
Over both straight lines $\mathcal{M}_i = L_{i} \cup L_{i}^{\pi}$, we have the one-dimensional dynamics given by the first equation of \eqref{sec:constant:eq:reduced}, i.e., $\dot{x} = d_{i1}$. Analyzing this equation, we observe that, considering the usual growth direction of the $x$-axis, the dynamics over $\mathcal{M}_i$ is increasing if $d_{i1} > 0$ and decreasing if $d_{i1} < 0$. This completes the qualitative analysis of the reduced dynamics. Regarding the layer dynamics, we have the layer system \eqref{sec:constant:eq:layer}, which says that for each fixed value of $x \in \mathbb{R}$ we have a one-dimensional dynamics given by the second equation of \eqref{sec:constant:eq:layer}. In particular, assuming that $\cos{\theta} > 0$ and $d_{i2} > 0$, then
\begin{equation*}
\theta' > 0 \Leftrightarrow d_{i3} \cos{\theta} - d_{i2} \sin{\theta} > 0 \Leftrightarrow \tan{\theta} < \frac{d_{i3}}{d_{i2}} \Leftrightarrow \theta < \arctan{\left(\frac{d_{i3}}{d_{i2}}\right)} = \theta_i,
\end{equation*}
\noindent since the arctangent function is strictly increasing.
Likewise, and under the same conditions, we have that
\begin{equation*}
\theta' < 0 \Leftrightarrow \theta > \arctan{\left(\frac{d_{i3}}{d_{i2}}\right)} = \theta_i
\end{equation*}
\noindent and, therefore, we conclude that for $d_{i2} > 0$ the straight line $L_{i}$ is an attractor of the surrounding layer dynamics and, therefore, $L_{i}^{\pi}$ is a repellor, as shown by the green part of \cref{sec:constant:fig:d2_nonzero}. An analogous study for $d_{i2} < 0$ allows us to reach the results summarized in \cref{sec:constant:tab:fast}.
\begin{table}[ht]
\centering
\begin{tabular}{lll}
\hline
& $d_{i2} < 0$ & $d_{i2} > 0$ \\ \hline
$L_{i}$ & repellor & attractor \\
$L_{i}^{\pi}$ & attractor & repellor \\ \hline
\end{tabular}
\caption{Layer dynamics around the straight lines $L_{i}$ and $L_{i}^{\pi}$ that compose the slow manifold $\mathcal{M}_i = L_{i} \cup L_{i}^{\pi}$.}
\label{sec:constant:tab:fast}
\end{table}
Finally, at $\cos{\theta} = 0$ with $d_{i2} \neq 0$, the reduced system \eqref{sec:constant:eq:reduced} tells us that $\mathcal{M}_i = \emptyset$ and, therefore, there is only the fast dynamics \eqref{sec:constant:eq:layer}, which reduces to
\begin{equation*}
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & - d_{i2}
\end{matrix*}\right.
\quad \text{ and } \quad
\left\{\begin{matrix*}[l]
x' & = & 0 \\
\theta' & = & d_{i2}
\end{matrix*}\right.
\end{equation*}
\noindent for $\theta = \frac{\pi}{2}$ and $\theta = \frac{3\pi}{2}$, respectively, whose dynamics is consistent with \cref{sec:constant:tab:fast}. This completes the qualitative analysis of the layer dynamics and, therefore, the qualitative analysis of this case. See \cref{sec:constant:exmp:no_singularities}.
\subsection{Case \texorpdfstring{$d_{i2} = 0$}{d2=0}}%
\label{sec:constant:sub:d2_zero}
Now, the reduced system \eqref{sec:constant:eq:reduced} can be written as
\begin{equation}
\label{sec:constant:sub:d2_zero:eq:reduced}
\left\{\begin{matrix*}[l]
\dot{x} & = & d_{i1} \\
0 & = & d_{i3} \cos{\theta}
\end{matrix*}\right.,
\end{equation}
\noindent whose slow manifold $\mathcal{M}_i$ is implicitly given by the equation $0 = d_{i3} \cos{\theta}$, which actually means $0 = \cos{\theta}$, since we are under WFH and, therefore, $d_{i3} \neq 0$. In other words, $\mathcal{M}_i = L_{i} \cup L_{i}^{\pi}$ with $L_{i}$ and $L_{i}^{\pi}$ being the straight lines given by $\theta = \frac{\pi}{2}$ and $\theta = \frac{3\pi}{2}$, respectively.\footnote{Here, again, the straight lines $L_{i}$ and $L_{i}^{\pi}$ are part of the boundary of the stripes.} The dynamics over and around $\mathcal{M}_i$ behaves exactly as in the case $d_{i2} \neq 0$, but exchanging $d_{i2}$ with $d_{i3}$ in \cref{sec:constant:tab:fast}.
\subsection{Theorem and Examples}%
\label{sec:constant:sub:theorem}
Summarizing, we conclude that the dynamics over $\Sigma_x$ for constant fields behaves as described in the theorem below, whose proof consists in the analysis done above in \cref{sec:constant:sub:d2_nonzero,sec:constant:sub:d2_zero}.
\begin{thm}[Constant Double Discontinuity Dynamics]
\label{sec:constant:thm:dynamics}
Given $\mathbf{F} \in \mathcal{C}$ with constant components $\mathbf{F}_i = (d_{i1},d_{i2},d_{i3})$ such that $d_{i2} \neq 0$ or $d_{i3} \neq 0$, let $\mathbf{\tilde{F}} \in \tilde{\mathcal{C}}$ be the vector field induced by the blow-up $\phi_1(x,\theta,r) = (x, r\cos{\theta}, r\sin{\theta})$.
Then, this blow-up associates the dynamics over $\Sigma_x$ with the following dynamics over the cylinder $C = \mathbb{R} \times S^1 = S_1 \cup \ldots \cup S_4$: over each stripe $S_i$ acts a slow-fast dynamics whose slow manifold is given by $\mathcal{M}_i = L_{i} \cup L_{i}^{\pi}$, where $L_{i}^{\pi}$ is a $\pi$-translation of $L_{i}$ in $\theta$ and
\begin{enumerate}
\item \label{sec:constant:thm:dynamics:d2_nonzero} in the case $d_{i2} \neq 0$,
\begin{equation*}
L_{i} = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \arctan{\left(\frac{d_{i3}}{d_{i2}}\right)} \right\};
\end{equation*}
\item \label{sec:constant:thm:dynamics:d2_zero_d3_nonzero} in the case $d_{i2} = 0$ and $d_{i3} \neq 0$,
\begin{equation*}
L_{i} = \left\{(x,\theta) \in \mathbb{R} \times \left[0,2\pi\right] ;~ \theta = \frac{\pi}{2} \right\};
\end{equation*}
\end{enumerate}
\noindent which, in both cases, consist of two straight lines inside the cylinder $C$, possibly invisible relative to $S_i$. Over these straight lines acts the reduced dynamics $\dot{x} = d_{i1}$ and, around them, acts the layer dynamics described in \cref{sec:constant:tab:fast}, but exchanging $d_{i2}$ with $d_{i3}$ if $d_{i2} = 0$.
\end{thm}
\begin{figure}[ht]
\centering
\begin{subfigure}[b]{0.45\linewidth}
\def0.85\linewidth{\linewidth}
\import{tex/sec/constant/fig/}{slice.pdf_tex}
\caption{Before the blow-up.}%
\label{sec:constant:exmp:no_singularities:subfig:slice}
\end{subfigure}
\qquad
\begin{subfigure}[b]{0.45\linewidth}
\def0.85\linewidth{\linewidth}
\import{tex/sec/constant/fig/}{slice_blowup.pdf_tex}
\caption{After the blow-up.}%
\label{sec:constant:exmp:no_singularities:subfig:slice_blowup}
\end{subfigure}
\caption{Slices of the system studied at \cref{sec:constant:exmp:no_singularities}.}
\label{sec:constant:exmp:no_singularities:fig:slices}
\end{figure}
\begin{exmp}
\label{sec:constant:exmp:no_singularities}
Let $\mathbf{F} \in \mathcal{C}$ be given by the constant vector fields
\begin{align*}
\mathbf{F}_2(x,y,z) & = (1,-1,-1), & \mathbf{F}_1(x,y,z) & = (1,-1,1), \\
\mathbf{F}_3(x,y,z) & = (1,1,-1), & \mathbf{F}_4(x,y,z) & = (1,1,1),
\end{align*}
\noindent that behave as represented in \cref{sec:constant:exmp:no_singularities:subfig:slice}. Using \cref{sec:constant:thm:dynamics}, we can verify that, over the cylinder $C$ given by the blow-up of $\Sigma_x$, this system behaves as expected, i.e., as represented in \cref{sec:constant:exmp:no_singularities:subfig:slice_blowup}. For instance, over the stripe $S_1 = \mathbb{R} \times \left[0,\sfrac{\pi}{2}\right]$ we have
\begin{equation*}
(d_{11},d_{12},d_{13}) = \mathbf{F}_1(x,y,z) = (1,-1,1),
\end{equation*}
\noindent which, according to \cref{sec:constant:thm:dynamics}, induces over $S_1$ a slow-fast system with $L_{1} \subset \mathcal{M}_1$ given by
\begin{equation*}
\theta = \theta_1 = \arctan{\left(\frac{d_{13}}{d_{12}}\right)} = \arctan{\left(\frac{1}{-1}\right)} = -\frac{\pi}{4},
\end{equation*}
\noindent and, therefore, the slow manifold $\mathcal{M}_1$ consists of the straight lines $L_{1} \subset S_4$ and $L_{1}^{\pi} \subset S_2$ given by $\theta = \theta_1 = - \frac{\pi}{4}$ and $\theta = \theta_1 + \pi = \frac{3\pi}{4}$, respectively. In particular, none of these lines are visible at $S_1$. Over these lines acts the reduced dynamics $\dot{x} = d_{11} = 1$. Finally, since $d_{12} = -1 < 0$, $L_{1}$ is a repellor and $L_{1}^{\pi}$ an attractor of the surrounding layer dynamics, according to \cref{sec:constant:tab:fast}.
Therefore, we conclude that the dynamics generated by $\mathbf{F}_1$ over the whole cylinder $C$ behaves as represented in \cref{sec:constant:exmp:no_singularities:fig:stripe}. In particular, the dynamics over the stripe $S_1$ behaves as represented in \cref{sec:constant:exmp:no_singularities:subfig:slice_blowup}. The dynamics over the other stripes can be similarly verified to be as represented. \qed
\begin{figure}[ht]
\centering
\def0.85\linewidth{0.85\linewidth}
\import{tex/sec/constant/fig/}{const_exmp_crossing.pdf_tex}
\caption{Dynamics over $C$ generated by the field $\mathbf{F}_1$ studied at \cref{sec:constant:exmp:no_singularities}. The dynamics over $S_1$ behaves as represented in \cref{sec:constant:exmp:no_singularities:subfig:slice_blowup}.}
\label{sec:constant:exmp:no_singularities:fig:stripe}
\end{figure}
\end{exmp}
\section{Framework}%
\label{sec:framework}
The first step consists of applying a polar blow-up at the origin of the slice represented in \cref{sec:framework:sub:blowup:subfig:slice} or, in other words, a \textbf{cylindrical blow-up} at $\Sigma_{x}$. More specifically, assuming that the components of $\mathbf{F} \in \mathcal{D}^k$ can be written as
\begin{equation*}
\mathbf{F}_i = (w_i, p_i, q_i),
\end{equation*}
\noindent we apply the blow-up $\phi_1: \mathbb{R} \times S^1 \times \mathbb{R}^+ \to \mathbb{R}^3$ given by
\begin{equation*}
\phi_1(x,\theta,r) = (x, r\cos{\theta}, r\sin{\theta}),
\end{equation*}
\noindent which induces a piecewise smooth vector field $\mathbf{\tilde{F}} = [(\phi_1)_*^{-1}\mathbf{F}] \circ \phi_1$ whose components are given by
\begin{equation*}
\mathbf{\tilde{F}}_i = \left(w_i, \frac{q_i \cos{\theta} - p_i \sin{\theta}}{r}, p_i \cos{\theta} + q_i \sin{\theta} \right),
\end{equation*}
\noindent where $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,r)$. We then define the set
\begin{equation*}
\tilde{\mathcal{D}}^k = \left\{\mathbf{\tilde{F}} = [(\phi_1)_*^{-1}\mathbf{F}] \circ \phi_1 ;~ \mathbf{F} \in \mathcal{D}^k \right\}
\end{equation*}
\noindent of all blow-up induced vector fields. An extremely important observation at this point is the fact that, according to~\cite[p.~498]{Llibre2015}, the induced vector field $\mathbf{\tilde{F}}$ has only \textbf{regular discontinuities}, i.e., classical Filippov theory, as presented in \cref{sec:preliminaries}, is sufficient for its study. More precisely, we now have a piecewise smooth vector field $\mathbf{\tilde{F}}$ given by the four smooth vector fields $\mathbf{\tilde{F}}_i$, which induces the four \textbf{slow-fast systems}
\begin{equation}
\label{sec:framework:sub:blowup:eq:slow-fast}
\left\{\begin{matrix*}[l]
\dot{x} = w_i\\
r \dot{\theta} = q_i \cos{\theta} - p_i \sin{\theta}\\
\dot{r} = p_i \cos{\theta} + q_i \sin{\theta}
\end{matrix*}\right.,
\end{equation}
\noindent where $\dot{\square} = \sfrac{d\square}{dt}$; $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,r)$; and $r$ is the time rescaling factor.
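The components of $\mathbf{\tilde{F}}_i$ above follow from solving the pointwise linear system $D\phi_1 \cdot v = \mathbf{F}_i$. The following SymPy snippet is a verification sketch of ours, not part of the original derivation; it treats $w_i$, $p_i$ and $q_i$ as fixed values at the point $\phi_1(x,\theta,r)$ and confirms the formula.
\begin{verbatim}
import sympy as sp

x, theta = sp.symbols('x theta', real=True)
r = sp.symbols('r', positive=True)
w, p, q = sp.symbols('w p q')  # F_i components at phi_1(x, theta, r)

phi = sp.Matrix([x, r*sp.cos(theta), r*sp.sin(theta)])  # the blow-up phi_1
J = phi.jacobian([x, theta, r])

# v solves (D phi_1) v = F_i, i.e. v = [(phi_1)_*^{-1} F_i] o phi_1
v = J.solve(sp.Matrix([w, p, q]))
expected = sp.Matrix([w,
                      (q*sp.cos(theta) - p*sp.sin(theta))/r,
                      p*sp.cos(theta) + q*sp.sin(theta)])
assert sp.simplify(v - expected) == sp.zeros(3, 1)
\end{verbatim}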
\subsection{Blow-up}% \label{sec:framework:sub:blowup} The first step consists in applying a polar blow-up at the origin of the slice represented in \cref{sec:framework:sub:blowup:subfig:slice} or, in other words, a \textbf{cylindrical blow-up} at $\Sigma_{x}$. See \cref{sec:framework:sub:blowup:subfig:slice_blowup}.
More specifically, assuming that the components of $\mathbf{F} \in \mathcal{D}^k$ can be written as \begin{equation*} \mathbf{F}_i = (w_i, p_i, q_i), \end{equation*} \noindent we apply the blow-up $\phi_1: \mathbb{R} \times S^1 \times \mathbb{R}^+ \to \mathbb{R}^3$ given by \begin{equation*} \phi_1(x,\theta,r) = (x, r\cos{\theta}, r\sin{\theta}), \end{equation*} \noindent which induces a piecewise smooth vector field $\mathbf{\tilde{F}} = [(\phi_1)_*^{-1}\mathbf{F}] \circ \phi_1$ whose components are given by \begin{equation*} \mathbf{\tilde{F}}_i = \left(w_i, \frac{q_i \cos{\theta} - p_i \sin{\theta}}{r}, p_i \cos{\theta} + q_i \sin{\theta} \right), \end{equation*} \noindent where $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,r)$. We then define the set \begin{equation*} \tilde{\mathcal{D}}^k = \left\{\mathbf{\tilde{F}} = [(\phi_1)_*^{-1}\mathbf{F}] \circ \phi_1 ;~ \mathbf{F} \in \mathcal{D}^k \right\} \end{equation*} \noindent of all blow-up induced vector fields. \begin{figure}[ht] \centering \begin{subfigure}[b]{0.3\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/framework/fig/}{slice.pdf_tex} \caption{Slice.}% \label{sec:framework:sub:blowup:subfig:slice} \end{subfigure} \quad \begin{subfigure}[b]{0.3\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/framework/fig/}{slice_blowup.pdf_tex} \caption{Blow-up.}% \label{sec:framework:sub:blowup:subfig:slice_blowup} \end{subfigure} \quad \begin{subfigure}[b]{0.3\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/framework/fig/}{slice_regul.pdf_tex} \caption{Regularization.}% \label{sec:framework:sub:blowup:subfig:slice_regul} \end{subfigure} \caption{Framework process.}% \label{sec:framework:sub:blowup:fig:framework_process} \end{figure} A key observation at this point is that, according to~\cite[p.~498]{Llibre2015}, the induced vector field $\mathbf{\tilde{F}}$ has only \textbf{regular discontinuities}, i.e., classical Filippov theory, as presented in \cref{sec:preliminaries}, is sufficient for its study. More precisely, we now have a piecewise smooth vector field $\mathbf{\tilde{F}}$ given by the four smooth vector fields $\mathbf{\tilde{F}}_i$, which induce the four \textbf{slow-fast systems} \begin{equation} \label{sec:framework:sub:blowup:eq:slow-fast} \left\{\begin{matrix*}[l] \dot{x} = w_i\\ r \dot{\theta} = q_i \cos{\theta} - p_i \sin{\theta}\\ \dot{r} = p_i \cos{\theta} + q_i \sin{\theta} \end{matrix*}\right., \end{equation} \noindent where $\dot{\square} = \sfrac{d\square}{dt}$; $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,r)$; and $r$ is the time rescaling factor. The study of the dynamics of~\eqref{sec:problem:eq:piecewise_field} has therefore been reduced to the study of the slow-fast systems \eqref{sec:framework:sub:blowup:eq:slow-fast}. In particular, the dynamics over $\Sigma_x$, previously undefined, can now be associated with \cref{sec:framework:sub:blowup:eq:slow-fast} at $r = 0$, which is given by the combination of the dynamics of the \textbf{reduced system} \begin{equation} \label{sec:framework:blowup:eq:slow_system} \left\{\begin{matrix*}[l] \dot{x} = w_i\\ 0 = q_i \cos{\theta} - p_i \sin{\theta}\\ \dot{r} = p_i \cos{\theta} + q_i \sin{\theta} \end{matrix*}\right. \end{equation} \noindent and the dynamics of the \textbf{layer system} \begin{equation} \label{sec:framework:blowup:eq:fast_system} \left\{\begin{matrix*}[l] x' = 0\\ \theta' = q_i \cos{\theta} - p_i \sin{\theta}\\ r' = 0 \end{matrix*}\right., \end{equation} \noindent where $\square' = \sfrac{d\square}{d\tau}$ with $t = r \tau$; and the components $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,0) = (x,0,0)$.
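For the reader's convenience, we note that the last two components of $\mathbf{\tilde{F}}_i$ can be recovered directly by differentiating the blow-up relations $y = r\cos{\theta}$ and $z = r\sin{\theta}$ along a trajectory of $\mathbf{F}_i$: writing $\dot{y} = p_i$ and $\dot{z} = q_i$, from $r^2 = y^2 + z^2$ and $\tan{\theta} = \sfrac{z}{y}$ we obtain \begin{equation*} \dot{r} = \frac{y \dot{y} + z \dot{z}}{r} = p_i \cos{\theta} + q_i \sin{\theta} \quad \text{and} \quad \dot{\theta} = \frac{y \dot{z} - z \dot{y}}{r^2} = \frac{q_i \cos{\theta} - p_i \sin{\theta}}{r}, \end{equation*} \noindent in agreement with \cref{sec:framework:sub:blowup:eq:slow-fast}.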
More geometrically, the dynamics over $\Sigma_{x}$ in \cref{sec:problem:eq:piecewise_field} can now be associated with the dynamics over the cylinder $C = \mathbb{R} \times S^1$ divided into the four infinite stripes \begin{align*} & S_2 = \mathbb{R} \times [\sfrac{\pi}{2}, \pi], & S_1 & = \mathbb{R} \times [0,\sfrac{\pi}{2}], \\ & S_3 = \mathbb{R} \times [\pi, \sfrac{3\pi}{2}], & S_4 & = \mathbb{R} \times [\sfrac{3\pi}{2}, 2\pi], \end{align*} \noindent as represented in \cref{sec:framework:sec:blowup:fig:stripes}, over which act the slow-fast systems given by \cref{sec:framework:blowup:eq:slow_system,sec:framework:blowup:eq:fast_system}. As we previously said, according to \cite[p.~498]{Llibre2015}, the four lines where these stripes intersect have at most regular discontinuities. Finally, the analysis of the dynamics over each stripe $S_i$ can then be carried out as presented in \cref{sec:preliminaries:sub:gsp:exmp:regul_process}. \begin{figure}[ht] \centering \def\svgwidth{0.95\linewidth} \import{tex/sec/framework/fig/}{stripes.pdf_tex} \caption{Green cylinder $C$ divided into the four stripes $S_i$. A scheme of the stripe $S_1$ is also put into evidence.} \label{sec:framework:sec:blowup:fig:stripes} \end{figure} In particular, the first two equations of system \eqref{sec:framework:blowup:eq:slow_system} are independent of $r$ and, therefore, the system can be decoupled into \begin{equation} \label{sec:framework:blowup:eq:slow_system_cyl} \left\{\begin{matrix*}[l] \dot{x} = w_i\\ 0 = q_i \cos{\theta} - p_i \sin{\theta}\\ \end{matrix*}\right., \end{equation} \noindent which gives the \textbf{reduced dynamics} over $S_i$; and \begin{equation} \label{sec:framework:blowup:eq:slow_system_rad} \dot{r} = p_i \cos{\theta} + q_i \sin{\theta}, \end{equation} \noindent which gives the respective \textbf{slow radial dynamics}; in other words, it indicates how the external dynamics communicates with the dynamics \eqref{sec:framework:blowup:eq:slow_system_cyl} over the cylinder: entering ($\dot{r} > 0$), leaving ($\dot{r} < 0$), or staying ($\dot{r} = 0$) at $S_i$. Analogously, the first two equations of system \eqref{sec:framework:blowup:eq:fast_system} are independent of $r$ and, therefore, the system can also be decoupled into \begin{equation} \label{sec:framework:blowup:eq:fast_system_cyl} \left\{\begin{matrix*}[l] x' = 0\\ \theta' = q_i \cos{\theta} - p_i \sin{\theta} \end{matrix*}\right., \end{equation} \noindent which gives the \textbf{layer dynamics} over $S_i$; and \begin{equation} \label{sec:framework:blowup:eq:fast_system_rad} r' = 0, \end{equation} \noindent which gives the respective \textbf{fast radial dynamics} over the cylinder. Summarizing, we conclude that the dynamics over $\Sigma_x$ behaves as described in the theorem below, whose proof consists of the analysis done above. \begin{thm}[Double Discontinuity Dynamics] \label{sec:framework:blowup:thm:dynamics} Given $\mathbf{F} \in \mathcal{D}^k$ with components $\mathbf{F}_i = (w_i, p_i, q_i)$, let $\mathbf{\tilde{F}} \in \tilde{\mathcal{D}}^k$ be the vector field induced by the blow-up $\phi_1(x,\theta,r) = (x, r\cos{\theta}, r\sin{\theta})$.
Then, this blow-up associates the dynamics over $\Sigma_x$ with the following dynamics over the cylinder $C = \mathbb{R} \times S^1 = S_1 \cup \ldots \cup S_4$: over each stripe $S_i$ acts a slow-fast dynamics whose reduced dynamics is given by \begin{equation} \label{sec:framework:blowup:thm:dynamics:eq:reduced} \left\{\begin{matrix*}[l] \dot{x} = w_i\\ 0 = q_i \cos{\theta} - p_i \sin{\theta}\\ \end{matrix*}\right., \end{equation} \noindent with slow radial dynamics $\dot{r} = p_i \cos{\theta} + q_i \sin{\theta}$; and layer dynamics given by \begin{equation} \label{sec:framework:blowup:thm:dynamics:eq:layer} \left\{\begin{matrix*}[l] x' = 0\\ \theta' = q_i \cos{\theta} - p_i \sin{\theta} \end{matrix*}\right., \end{equation} \noindent with fast radial dynamics $r' = 0$. Finally, in every equation above the functions $w_i$, $p_i$ and $q_i$ must be calculated at the point $\phi_1(x,\theta,0) = (x,0,0)$. \qed \end{thm} In order to perform a deeper analysis of the dynamics given by \cref{sec:framework:blowup:thm:dynamics} with GSP-Theory as described in \cref{sec:preliminaries:sub:gsp}, let $S_i$ be one of the cylinder's stripes and let \begin{equation*} \mathcal{M}_i = \left\{ (x, \theta) \in \mathbb{R} \times S^1 ;~ f_i(x, \theta, 0) = 0 \right\} \end{equation*} \noindent be its slow manifold, where $f_i(x, \theta, 0) = q_i \cos{\theta} - p_i \sin{\theta}$. Given $(x_0, \theta_0, 0) \in \mathcal{M}_i \times \left\{ 0 \right\}$, the Jacobian matrix of the complete layer system \cref{sec:framework:blowup:eq:fast_system} over this point is \begin{equation*} \mathbf{J}_{\text{fast}} = \begin{bmatrix} 0 & 0 & 0 \\ (f_i)_x & (f_i)_{\theta} & 0 \\ 0 & 0 & 0 \end{bmatrix}, \end{equation*} \noindent where $(f_i)_x$ and $(f_i)_{\theta}$ represent the partial derivatives calculated at $\left( x_0, \theta_0, 0 \right)$. The eigenvalues of this matrix are the elements of the set $\left\{0, 0, (f_i)_{\theta} \right\}$ and, therefore, $(x_0, \theta_0)$ is normally hyperbolic if, and only if, $(f_i)_{\theta} \neq 0$. However, we observe that, since we are over the slow manifold, $(f_i)_{\theta} = 0$ leads to the homogeneous linear system \begin{equation*} \left\{\begin{matrix*}[r] f_i = 0 \\ (f_i)_{\theta} = 0 \end{matrix*}\right. \quad \sim \quad \left\{\begin{matrix} q_i \cos{\theta} - p_i \sin{\theta} = 0 \\ q_i \sin{\theta} + p_i \cos{\theta} = 0 \end{matrix}\right. \quad \sim \quad \begin{bmatrix} \cos{\theta} & -\sin{\theta} \\ \sin{\theta} & \cos{\theta} \end{bmatrix} \begin{bmatrix} q_i \\ p_i \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \end{equation*} \noindent whose only solution is the trivial one, $p_i = q_i = 0$, since the trigonometric matrix above is invertible ($\text{det} \equiv 1$) for every $\theta \in S^1$; therefore, we conclude that $(f_i)_{\theta} \neq 0$ whenever \begin{equation} \tag{WFH} \label{sec:framework:sub:blowup:eq:weak_fund_hypo} p_i \neq 0 \quad \text{or} \quad q_i \neq 0, \end{equation} \noindent henceforth called the \textbf{weak fundamental hypothesis}, or WFH for short. We also observe that \begin{equation*} (f_i)_x = (q_i)_x \cos{\theta} - (p_i)_x \sin{\theta} \end{equation*} \noindent and, as above, supposing $(f_i)_x = 0$ over the slow manifold leads to the homogeneous linear system \begin{equation*} \left\{\begin{matrix*}[r] q_i \cos{\theta} - p_i \sin{\theta} = 0 \\ (q_i)_x \cos{\theta} - (p_i)_x \sin{\theta} = 0 \end{matrix*}\right.
\quad \sim \quad \begin{bmatrix*}[c] q_i & p_i \\ (q_i)_x & (p_i)_x \end{bmatrix*} \begin{bmatrix} \cos{\theta} \\ \sin{\theta} \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \end{bmatrix} \end{equation*} \noindent which admits only the absurd solution $\cos{\theta} = \sin{\theta} = 0$ when the matrix above is invertible. Hence, we can ensure $(f_i)_x \neq 0$ by imposing the invertibility of this matrix, i.e., \begin{equation} \tag{SFH} \label{sec:framework:sub:blowup:eq:strong_fund_hypo} 0 \neq \det{\begin{bmatrix*}[c] q_i & p_i \\ (q_i)_x & (p_i)_x \end{bmatrix*}} = q_i (p_i)_x - p_i (q_i)_x, \end{equation} \noindent which always implies the weak fundamental hypothesis and, therefore, will be called the \textbf{strong fundamental hypothesis}, or SFH for short. \begin{cor} \label{sec:framework:sub:blowup:cor:radial_dynamics} The radial dynamics can only be transversal ($\dot{r} \neq 0$) to the cylinder $C$ over the slow manifold $\mathcal{M}_i$. Moreover, under WFH, it is in fact transversal. \end{cor} \begin{proof} The first part of the statement is assured by \cref{sec:framework:blowup:thm:dynamics}. For the second part, just observe that $\dot{r} = - (f_i)_{\theta} \neq 0$ over $\mathcal{M}_i$ under WFH. \end{proof} \begin{cor} \label{sec:framework:sub:blowup:cor:explicit_slow_manifold} The slow manifold $\mathcal{M}_i$ is locally a graph $\left( x, \theta(x) \right)$ under WFH. However, if $\norm{(f_i)_{\theta}}$ admits a global positive minimum, then $\mathcal{M}_i$ is globally a graph $\left( x, \theta(x) \right)$. Either way, $\theta(x)$ is of class $C^k$. \end{cor} \begin{proof} The first part is assured by the usual Implicit Function Theorem applied to $f_i(x_0, \theta_0, 0) = 0$ over $\mathcal{M}_i$, since under WFH we have $\norm{(f_i)_{\theta}} > 0$. Analogously, the second part is assured by the Global Implicit Function Theorem found in \cite[p.~253]{Zhang2006}, which requires a stronger hypothesis. \end{proof} \begin{cor} \label{sec:framework:sub:blowup:cor:normal_hyperbolic} The slow manifold $\mathcal{M}_i$ is normally hyperbolic at every point that satisfies the WFH. \end{cor} \begin{proof} Just observe that the only non-trivial eigenvalue, $(f_i)_{\theta}$, is non-zero under WFH. \end{proof} \begin{cor} \label{sec:framework:sub:blowup:cor:singularities} The hyperbolic singularities of the reduced system \cref{sec:framework:blowup:thm:dynamics:eq:reduced} act as hyperbolic saddle or node singularities of $S_i$ under WFH. \end{cor} \begin{proof} Let $\mathbf{P} = \left( x_0, \theta_0 \right) \in \mathcal{M}_i$ be a hyperbolic singularity of the reduced system, i.e., $w_i(x_0, 0, 0) = 0$ with eigenvalue $\lambda_1 = (w_i)_{x}(x_0, 0, 0) \neq 0$. We have two possibilities: \begin{itemize} \item $\lambda_1 > 0 \Rightarrow (j^s, j^u) = (0,1)$; or \item $\lambda_1 < 0 \Rightarrow (j^s, j^u) = (1,0)$, \end{itemize} \noindent where $j^s$ and $j^u$ are the dimensions of the stable and unstable manifolds of $\mathbf{P}$ with respect to the reduced system, respectively. On the other hand, under WFH we also have the non-trivial eigenvalue $\lambda_2 = (f_i)_{\theta}(x_0, \theta_0, 0) \neq 0$ for the layer system and, therefore, the two possibilities: \begin{itemize} \item $\lambda_2 > 0 \Rightarrow (k^s, k^u) = (0,1)$; or \item $\lambda_2 < 0 \Rightarrow (k^s, k^u) = (1,0)$, \end{itemize} \noindent where $k^s$ and $k^u$ are the dimensions of the stable and unstable manifolds of $\mathbf{P}$ with respect to the layer system, respectively.
Hence, observing that $j = \dim{\mathbf{P}} = 0$ and recalling \cref{sec:preliminaries:sub:gsp:thm:fenichel}, any combination of the signs of $\lambda_1$ and $\lambda_2$ leads to the total sum of dimensions \begin{equation*} (j^s + k^s) + (j^u + k^u) = 2 = \dim{S_i}, \end{equation*} \noindent and, therefore, $\mathbf{P}$ acts as a hyperbolic singularity of $S_i$. Finally, the saddle-node duality comes from the fact that both non-trivial eigenvalues above are real. \end{proof} In other words, under WFH, the slow manifold $\mathcal{M}_i$ is at least locally a graph. More than that, it is the entry point of the external dynamics into the cylinder. Besides that, it is normally hyperbolic along its full extent, which assures not only persistence and well-behaved stability for its invariant compact parts, but also that $\mathcal{M}_i$ is always attracting or repelling the surrounding (layer) dynamics. All these nice properties come at the low cost of WFH. Therefore, it is no surprise that, for every system studied below, we require at least WFH, but we also always test for SFH, whose importance will become clear when studying affine systems. \subsection{Regularization}% \label{sec:framework:sub:regularization} The next step of the framework consists, as presented in \cref{sec:preliminaries:sub:regularization}, in the Sotomayor-Teixeira regularization of the half planes $\Sigma_{12}$, $\Sigma_{23}$, $\Sigma_{34}$ and $\Sigma_{14}$; see \cref{sec:framework:sub:blowup:fig:framework_process}. Since this part of the dynamics is well established, this step is neither always necessary nor, by itself, sufficient. However, for completeness, we present here the calculations for reference in future works involving the external (radial) dynamics. For $ij = 14$ or $ij = 23$, since the switching manifold $\Sigma_{ij}$ is locally given as a subset of the zero set of the map $(x,y,z) \mapsto z$, the regularization is given by \begin{equation*} \mathbf{F}_{ij}^{\varepsilon} = \frac{\mathbf{F}_i + \mathbf{F}_j}{2} + \varphi\left(\frac{z}{\varepsilon}\right) \frac{\mathbf{F}_i - \mathbf{F}_j}{2}, \end{equation*} \noindent where $\varphi:\mathbb{R} \to \mathbb{R}$ is a monotonic transition function. Now, assuming that $\mathbf{F}_{ij}^{\varepsilon} = \left(f_{ij}, g_{ij}, h_{ij}\right)$ and applying the blow-up $\phi_2:\mathbb{R}^2 \times [0,\pi] \times \mathbb{R}^+ \to \mathbb{R}^3 \times \mathbb{R}$ given by \begin{equation*} \phi_2(x, y, \psi, \eta) = (x, y, \eta \cos{\psi}, \eta \sin{\psi}), \end{equation*} \noindent we get a slow-fast system whose \textbf{slow dynamics} is given by \begin{equation} \label{sec:framework:sub:regularization:eq:slow} \left\{\begin{matrix*}[l] \dot{x} = f_{ij}\\ \dot{y} = g_{ij}\\ 0 = h_{ij} \sin{\psi} \end{matrix*}\right. \end{equation} \noindent and whose \textbf{fast dynamics} is given by \begin{equation} \label{sec:framework:sub:regularization:eq:fast} \left\{\begin{matrix*}[l] x' = 0\\ y' = 0\\ \psi' = -h_{ij} \sin{\psi} \end{matrix*}\right., \end{equation} \noindent where $f_{ij}$, $g_{ij}$ and $h_{ij}$ must be calculated at the point $\phi_2(x,y,\psi,0)$. As a result, according to~\cite{Teixeira2012} and as presented in \cref{sec:preliminaries:sub:gsp}, we get that the Filippov dynamics over $\Sigma_{ij}$ is equivalent to the slow-fast dynamics given by \cref{sec:framework:sub:regularization:eq:slow,sec:framework:sub:regularization:eq:fast}. Analogous results can be obtained for $ij = 12$ and $ij = 34$, as we now sketch.
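Indeed, for $ij = 12$ or $ij = 34$, the switching manifold $\Sigma_{ij}$ is locally given as a subset of the zero set of the map $(x,y,z) \mapsto y$ and, therefore, the regularization reads \begin{equation*} \mathbf{F}_{ij}^{\varepsilon} = \frac{\mathbf{F}_i + \mathbf{F}_j}{2} + \varphi\left(\frac{y}{\varepsilon}\right) \frac{\mathbf{F}_i - \mathbf{F}_j}{2}. \end{equation*} \noindent Applying the blow-up $(x, z, \psi, \eta) \mapsto (x, \eta \cos{\psi}, z, \eta \sin{\psi})$, the roles of $g_{ij}$ and $h_{ij}$ (and of $y$ and $z$) are exchanged in \cref{sec:framework:sub:regularization:eq:slow,sec:framework:sub:regularization:eq:fast}, yielding the slow and fast dynamics \begin{equation*} \left\{\begin{matrix*}[l] \dot{x} = f_{ij}\\ \dot{z} = h_{ij}\\ 0 = g_{ij} \sin{\psi} \end{matrix*}\right. \qquad \text{and} \qquad \left\{\begin{matrix*}[l] x' = 0\\ z' = 0\\ \psi' = -g_{ij} \sin{\psi} \end{matrix*}\right., \end{equation*} \noindent respectively, now with slow manifold given by $g_{ij} \sin{\psi} = 0$.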
Finally, the analysis of the dynamics over each half plane $\Sigma_{ij}$ can then be carried out as presented in \cref{sec:preliminaries:sub:gsp:exmp:regul_process}. \section{Preliminaries}% \label{sec:preliminaries} \subsection{Geometric Singular Perturbation Theory}% \label{sec:preliminaries:sub:gsp} Let $W \subset \mathbb{R}^{m+n}$ be an open set whose elements are represented by $(\mathbf{x},\mathbf{y})$. Let also $\mathbf{f}:W \times [0,1] \to \mathbb{R}^m$ and $\mathbf{g}:W \times [0,1] \to \mathbb{R}^n$ be vector fields of class $C^r$ with $r \ge 1$. Given $0 < \xi < 1$, consider the system of differential equations \begin{equation} \label{sec:preliminaries:sub:gsp:eq:fast} \left\{\begin{matrix*}[l] \mathbf{x}' = \mathbf{f}(\mathbf{x}, \mathbf{y}, \xi)\\ \mathbf{y}' = \xi \mathbf{g}(\mathbf{x}, \mathbf{y}, \xi) \end{matrix*}\right., \end{equation} \noindent where $\square' = \sfrac{d\square}{d\tau}$, $\mathbf{x} = \mathbf{x}(\tau)$ and $\mathbf{y} = \mathbf{y}(\tau)$. Applying to the previous system the time rescaling $t = \xi \tau$, we obtain the new system \begin{equation} \label{sec:preliminaries:sub:gsp:eq:slow} \left\{\begin{matrix*}[l] \xi \dot{\mathbf{x}} = \mathbf{f}(\mathbf{x}, \mathbf{y}, \xi)\\ \dot{\mathbf{y}} = \mathbf{g}(\mathbf{x}, \mathbf{y}, \xi) \end{matrix*}\right., \end{equation} \noindent where $\dot{\square} = \sfrac{d\square}{dt}$, $\mathbf{x} = \mathbf{x}(t)$ and $\mathbf{y} = \mathbf{y}(t)$. Since $0 < \xi < 1$, \cref{sec:preliminaries:sub:gsp:eq:fast} and \cref{sec:preliminaries:sub:gsp:eq:slow} have exactly the same phase portrait, except for the speed of the trajectories, which is greater for the first system and smaller for the second. Therefore, the following definition makes sense: \begin{defn}% \label{sec:preliminaries:sub:gsp:defn:slow-fast} We say that \cref{sec:preliminaries:sub:gsp:eq:fast} and \cref{sec:preliminaries:sub:gsp:eq:slow} form an $(m,n)$-\textbf{slow-fast system} with \textbf{fast system} given by \cref{sec:preliminaries:sub:gsp:eq:fast} and \textbf{slow system} given by \cref{sec:preliminaries:sub:gsp:eq:slow}. \end{defn} Taking $\xi \to 0$ in \cref{sec:preliminaries:sub:gsp:eq:fast}, we get the so-called \textbf{layer system} \begin{equation} \label{sec:preliminaries:sub:gsp:eq:layer} \left\{\begin{matrix*}[l] \mathbf{x}' = \mathbf{f}(\mathbf{x}, \mathbf{y}, 0)\\ \mathbf{y}' = \mathbf{0} \end{matrix*}\right., \end{equation} \noindent which has dimension $m$. Taking $\xi \to 0$ in \cref{sec:preliminaries:sub:gsp:eq:slow}, we get the so-called \textbf{reduced system} \begin{equation} \label{sec:preliminaries:sub:gsp:eq:reduced} \left\{\begin{matrix*}[l] \mathbf{0} = \mathbf{f}(\mathbf{x}, \mathbf{y}, 0)\\ \dot{\mathbf{y}} = \mathbf{g}(\mathbf{x}, \mathbf{y}, 0) \end{matrix*}\right., \end{equation} \noindent which has dimension $n$. Beyond that, we say that the set \begin{equation*} \mathcal{M} = \left\{(\mathbf{x},\mathbf{y}) \in W;~ \mathbf{f}(\mathbf{x}, \mathbf{y}, 0) = \mathbf{0}\right\} \end{equation*} \noindent is the \textbf{slow manifold}. Observe that, on the one hand, $\mathcal{M}$ represents the set of singularities of the layer system; on the other hand, $\mathcal{M}$ represents the manifold over which the dynamics of the reduced system takes place.
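For instance (this particular example is not part of the theory above and is included only for illustration), take $m = n = 1$, $\mathbf{f}(x,y,\xi) = y - x^2$ and $\mathbf{g}(x,y,\xi) = -1$. The slow manifold is the parabola \begin{equation*} \mathcal{M} = \left\{(x,y) \in \mathbb{R}^2 ;~ y = x^2 \right\}, \end{equation*} \noindent the layer system $x' = y - x^2$ has, for each fixed $y > 0$, an attracting equilibrium at $x = \sqrt{y}$ and a repelling one at $x = -\sqrt{y}$, since $\mathbf{f}_x = -2x$, and the reduced system $\dot{y} = -1$ slides along $\mathcal{M}$ towards the origin, where $\mathbf{f}_x$ vanishes and, in the terminology defined below, normal hyperbolicity is lost.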
The main idea of Geometric Singular Perturbation Theory, or GSP-Theory for short, established by Fenichel in~\cite{Fenichel1979}, consists of combining the dynamics of the limit systems (layer and reduced) to recover the dynamics of the initial system (slow-fast) with $\xi > 0$ small. In fact, considering $\xi$ as an additional variable of the fast system \cref{sec:preliminaries:sub:gsp:eq:fast}, we get the new system \begin{equation} \label{sec:preliminaries:sub:gsp:eq:fast_complete} \left\{\begin{matrix*}[l] \mathbf{x}' = \mathbf{f}(\mathbf{x}, \mathbf{y}, \xi)\\ \mathbf{y}' = \xi \mathbf{g}(\mathbf{x}, \mathbf{y}, \xi)\\ \xi' = 0 \end{matrix*}\right., \end{equation} \noindent whose Jacobian matrix at $\left( \mathbf{x}_0, \mathbf{y}_0, 0 \right) \in \mathcal{M} \times \left\{ 0 \right\}$ is \begin{equation} \label{sec:preliminaries:sub:gsp:eq:jacobian_layer} \mathbf{J}_{\text{fast}} = \begin{bmatrix} \mathbf{f}_{\mathbf{x}} & \mathbf{f}_{\mathbf{y}} & 0 \\ \mathbf{0} & \mathbf{0} & 0 \\ \mathbf{0} & \mathbf{0} & 0 \end{bmatrix}, \end{equation} \noindent where $\mathbf{f}_{\mathbf{x}}$ and $\mathbf{f}_{\mathbf{y}}$ represent the partial derivatives calculated at the point $\left( \mathbf{x}_0, \mathbf{y}_0, 0 \right)$. The matrix above has the trivial eigenvalue $\lambda = 0$ with algebraic multiplicity $n+1$. The remaining eigenvalues, called \textbf{non-trivial}, are divided into three categories: negative, zero or positive real parts; we denote the number of such eigenvalues by $k^s$, $k^c$ and $k^u$, respectively. \begin{defn} \label{sec:preliminaries:sub:gsp:defn:normal_hyperbolic} We say that $\left( \mathbf{x}_0, \mathbf{y}_0, 0 \right) \in \mathcal{M} \times \left\{ 0 \right\}$ is \textbf{normally hyperbolic} if every non-trivial eigenvalue of \cref{sec:preliminaries:sub:gsp:eq:jacobian_layer} has non-zero real part, i.e., $k^c = 0$. \end{defn} Fenichel, in~\cite{Fenichel1979}, proved that normal hyperbolicity ensures the persistence of invariant compact parts of the slow manifold under singular perturbation, i.e., the dynamical structure of such parts with $\xi = 0$ persists for $\xi > 0$ small. More precisely: \begin{thm}[Fenichel,~\cite{Fenichel1979}] \label{sec:preliminaries:sub:gsp:thm:fenichel} Let $\mathcal{N}$ be a normally hyperbolic compact invariant $j$-dimensional submanifold of $\mathcal{M}$. Suppose that the stable and unstable manifolds of $\mathcal{N}$, with respect to the reduced system, have dimensions $j+j^s$ and $j+j^u$, respectively. Then, there exists a 1-parameter family of invariant submanifolds $\left\{ \mathcal{N}_{\xi} ;~ \xi \sim 0 \right\}$ such that $\mathcal{N}_{0} = \mathcal{N}$ and $\mathcal{N}_{\xi}$ has stable and unstable manifolds with dimensions $j+j^s+k^s$ and $j+j^u+k^u$, respectively. \end{thm} The reverse idea of GSP-Theory can also be used to recover the non-smooth component of the Filippov dynamics, given by the piecewise vector field ($\varepsilon = 0$), from its regularization ($\varepsilon > 0$). In fact, let $\mathbf{F} = (\mathbf{F}_+,\mathbf{F}_-) \in \mathcal{R}^k(U,h)$ be a piecewise smooth vector field with switching manifold $\Sigma = h^{-1}(\{0\})$. Let also $\varphi:\mathbb{R} \to \mathbb{R}$ be a monotonic transition function and $\mathbf{F}_{\varepsilon}$ the $\varphi_{\varepsilon}$-regularization of $\mathbf{F}$. We need to transform $\mathbf{F}_{\varepsilon}$ into a slow-fast system.
In order to do so, observe that, as $0$ is a regular value of $h$, it follows from the Local Normal Form for Submersions that, without loss of generality, we can assume that $h(x_1,\ldots,x_n) = x_1$ in a neighborhood of a given point $\mathbf{x} \in \Sigma$. Therefore, if we write $\mathbf{F}_+ = (f_1^+,\ldots,f_n^+)$ and $\mathbf{F}_- = (f_1^-,\ldots,f_n^-)$, it follows that $\mathbf{F}_{\varepsilon}$ can be written as \begin{equation*} \dot{x}_i = \left[\frac{1+\varphi_{\varepsilon}(x_1)}{2}\right]f_i^+(x_1,\ldots,x_n) + \left[\frac{1-\varphi_{\varepsilon}(x_1)}{2}\right]f_i^-(x_1,\ldots,x_n), \end{equation*} \noindent where $i \in \{1, \ldots, n\}$. Now, applying to the system above the polar blow-up given by $x_1 = \xi \cos{\theta}$ and $\varepsilon = \xi \sin{\theta}$, where $\xi \ge 0$ and $\theta \in [0,\pi]$, we obtain a $(1,n-1)$-slow-fast system given by \begin{equation} \label{sec:preliminaries:sub:gsp:eq:regul_slow-fast} \left\{\begin{matrix*}[l] \xi \dot{\theta} = \alpha_1(\theta,x_2,\ldots,x_n,\xi)\\ \dot{x}_i = \alpha_i(\theta,x_2,\ldots,x_n,\xi) \end{matrix*}\right., \end{equation} \noindent where $i \in \{2,\ldots,n \}$. Observe that, for $\xi = 0$, we have $x_1 = 0$ and $\varepsilon = 0$, i.e., we are at the non-regularized system $\mathbf{F}$ over the manifold $\Sigma$. On the other hand, for $\xi > 0$ and $\theta \in (0,\pi)$, we have $-\xi < x_1 < \xi$ and $0 < \varepsilon \le \xi$, i.e., we are at the regularized system $\mathbf{F}_{\varepsilon}$ over the rectangle where it does not coincide with $\mathbf{F}$; see \cref{sec:preliminaries:sub:regularization:subfig:regularization}. The authors of~\cite{Teixeira2012} then proved the result below: \begin{prop}[Regular case,~\cite{Teixeira2012}]% \label{sec:preliminaries:sub:gsp:prop:regular} Consider the piecewise smooth vector field $\mathbf{F}$ and the slow-fast system \cref{sec:preliminaries:sub:gsp:eq:regul_slow-fast}. The sliding region $\Sigma_s$ is homeomorphic to the slow manifold given by \begin{equation*} \alpha_1(\theta,x_2,\ldots,x_n,0) = 0 \end{equation*} \noindent and the dynamics of the sliding vector field $\mathbf{F}_s$ over $\Sigma_s$ is topologically equivalent to that of the reduced system given by \begin{equation*} \left\{\begin{matrix*}[l] 0 = \alpha_1(\theta,x_2,\ldots,x_n,0)\\ \dot{x}_i = \alpha_i(\theta,x_2,\ldots,x_n,0) \end{matrix*}\right., \end{equation*} \noindent where $i \in \{2,\ldots,n \}$. \end{prop} \subsection{Piecewise Smooth Dynamics}% \label{sec:preliminaries:sub:piecewise} Let $U \subset \mathbb{R}^n$ be an open set and $h:U \to \mathbb{R}$ a continuously differentiable function such that $0 \in h(U)$ is a \textbf{regular value}, i.e., the derivative of $h$ at every point in $\Sigma = h^{-1}(\left\{0\right\})$ is a surjective map. In that case, the open set $U$ can be split into the two regions $\Sigma_{+} = \left\{\mathbf{x} \in U ;~ h(\mathbf{x}) \ge 0\right\}$ and $\Sigma_{-} = \left\{\mathbf{x} \in U ;~ h(\mathbf{x}) \le 0\right\}$, which intersect at the regular surface $\Sigma$.
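For instance, $h(x,y,z) = z$ has $0$ as a regular value, splitting $U = \mathbb{R}^3$ into the two half-spaces $\Sigma_{\pm}$ that meet at the plane $\Sigma = \left\{ z = 0 \right\}$. In contrast, for $h(x,y,z) = yz$, which appears in \cref{sec:problem}, we have \begin{equation*} \nabla h(x,y,z) = (0, z, y) \quad \Rightarrow \quad \nabla h(x,0,0) = \mathbf{0}, \end{equation*} \noindent so $0$ fails to be a regular value along the $x$-axis and the theory below does not apply there directly.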
Given the vector fields $\mathbf{F}_{\pm}:U \to \mathbb{R}^n$ of class $C^k(U)$ with $k \geq 1$, we say that $\mathbf{F}:U \to \mathbb{R}^n$ defined by \begin{equation} \label{sec:preliminaries:sub:piecewise:eq:vector_field} \mathbf{F}(\mathbf{x}) = \begin{cases} \mathbf{F}_+(\mathbf{x}), & \text{ if } \mathbf{x} \in \Sigma_{+}, \\ \mathbf{F}_-(\mathbf{x}), & \text{ if } \mathbf{x} \in \Sigma_{-}, \end{cases} \end{equation} \noindent is a \textbf{piecewise smooth (or discontinuous) vector field} with \textbf{switching (or discontinuity) manifold} $\Sigma$. The set of all vector fields $\mathbf{F}$ defined as above will be denoted by $\mathcal{R}^k(U,h) \equiv C^k(U) \times C^k(U)$ and equipped with the Whitney product topology. The expression \begin{equation} \label{sec:preliminaries:sub:piecewise:eq:dynamical_system} \dot{\mathbf{x}} = \mathbf{F}(\mathbf{x}), \end{equation} \noindent where $\dot{\square} = \sfrac{d\square}{dt}$, defines a \textbf{piecewise smooth (or discontinuous) dynamical system}, whose dynamics can be defined as follows. For points $\mathbf{x} \in U \setminus \Sigma$, it is natural to consider the usual local dynamics given by $\mathbf{F}_{\pm}$, i.e., the local trajectory given by the curve $\varphi_{\pm}(t,\mathbf{x})$ that satisfies the differential equation $\dot{\mathbf{x}} = \mathbf{F}_{\pm}(\mathbf{x})$. On the other hand, for points $\mathbf{x} \in \Sigma$, we consider the well-established Filippov convention described in~\cite{Filippov1988,Smirnov2002} to define its dynamics. Roughly, using the notation $\mathbf{F}_{\pm}h(\mathbf{x}) \coloneqq \nabla h(\mathbf{x}) \cdot \mathbf{F}_{\pm}(\mathbf{x})$ for Lie derivatives, the switching manifold $\Sigma$ is split as follows: \begin{itemize} \item \textbf{Crossing region}: $\Sigma_{cr} = \left\{\mathbf{x} \in \Sigma;~ \mathbf{F}_+h(\mathbf{x}) \mathbf{F}_-h(\mathbf{x}) > 0\right\}$. In this case, a trajectory which meets $\Sigma_{cr}$ crosses $\Sigma$. See \cref{sec:preliminaries:sub:piecewise:subfig:dynamics:crossing}. \item \textbf{Sliding region}: $\Sigma_{sl} = \left\{\mathbf{x} \in \Sigma;~ \mathbf{F}_+h(\mathbf{x}) > 0,~ \mathbf{F}_-h(\mathbf{x}) < 0\right\}$. In this case, any trajectory which meets $\Sigma_{sl}$ remains tangent to $\Sigma$ for positive time. See \cref{sec:preliminaries:sub:piecewise:subfig:dynamics:sliding}. \item \textbf{Escaping region}: $\Sigma_{es} = \left\{\mathbf{x} \in \Sigma;~ \mathbf{F}_+h(\mathbf{x}) < 0,~ \mathbf{F}_-h(\mathbf{x}) > 0\right\}$. In this case, any trajectory which meets $\Sigma_{es}$ remains tangent to $\Sigma$ for negative time. See \cref{sec:preliminaries:sub:piecewise:subfig:dynamics:sliding}. \end{itemize} By continuity, all regions above are open sets separated by \textbf{tangency points} $\mathbf{x} \in \Sigma$, where $\mathbf{F}_+h(\mathbf{x}) \mathbf{F}_-h(\mathbf{x}) = 0$. Dynamically, these points act like singularities. For $\mathbf{x} \in \Sigma_s \coloneqq \Sigma_{sl} \cup \Sigma_{es}$, the trajectory slides tangent to $\Sigma$ following the well-defined \textbf{sliding vector field} $\mathbf{F}_s:\Sigma_s \to T\Sigma_s$ given by \begin{equation} \mathbf{F}_s(\mathbf{x}) = \frac{\mathbf{F}_-h(\mathbf{x}) \mathbf{F}_+(\mathbf{x}) - \mathbf{F}_+h(\mathbf{x}) \mathbf{F}_-(\mathbf{x})} {\mathbf{F}_-h(\mathbf{x}) - \mathbf{F}_+h(\mathbf{x})}, \end{equation} \noindent which is the unique vector in the intersection $\text{Conv}(\{\mathbf{F}_+(\mathbf{x}),\mathbf{F}_-(\mathbf{x})\}) \cap T_{\mathbf{x}}\Sigma$, where $\text{Conv}(\cdot)$ represents the convex closure.
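A direct computation shows that $\mathbf{F}_s$ is indeed tangent to $\Sigma$: by linearity of the Lie derivative, \begin{equation*} \mathbf{F}_s h(\mathbf{x}) = \frac{\mathbf{F}_-h(\mathbf{x}) \, \mathbf{F}_+h(\mathbf{x}) - \mathbf{F}_+h(\mathbf{x}) \, \mathbf{F}_-h(\mathbf{x})}{\mathbf{F}_-h(\mathbf{x}) - \mathbf{F}_+h(\mathbf{x})} = 0, \end{equation*} \noindent where the denominator does not vanish over $\Sigma_s$, since there $\mathbf{F}_+h$ and $\mathbf{F}_-h$ have opposite signs.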
\begin{figure}[ht] \centering \begin{subfigure}[b]{0.4\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{crossing.pdf_tex} \caption{Crossing.}% \label{sec:preliminaries:sub:piecewise:subfig:dynamics:crossing} \end{subfigure} \qquad \begin{subfigure}[b]{0.4\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{sliding.pdf_tex} \caption{Sliding.}% \label{sec:preliminaries:sub:piecewise:subfig:dynamics:sliding} \end{subfigure} \caption{Local trajectories around the switching manifold $\Sigma$ with crossing at (a) and sliding at (b). The escaping case is similar to (b), but with $\varphi_{\pm}(t,\mathbf{x})$ pointing away from $\Sigma$.}% \label{sec:preliminaries:sub:piecewise:fig:dynamics} \end{figure} The study of the dynamics of piecewise smooth systems using the raw theory described above can be quite complicated. It is generally useful to apply the regularization process described in \cref{sec:preliminaries:sub:regularization,sec:preliminaries:sub:gsp}. \subsection{Regularization}% \label{sec:preliminaries:sub:regularization} Let $\mathbf{F} = (\mathbf{F}_+,\mathbf{F}_-) \in \mathcal{R}^k(U,h)$ be a piecewise smooth vector field as defined above. A Sotomayor-Teixeira regularization of $\mathbf{F}$, as described in~\cite{Sotomayor1996}, is a 1-parameter family of smooth vector fields $\mathbf{F}_{\varepsilon}$ that converges pointwise to $\mathbf{F}$ as $\varepsilon \to 0$. More precisely, for $\mathbf{x} \in U \setminus \Sigma$, observe that the field $\mathbf{F}$ can be written in the form \begin{equation} \label{sec:preliminaries:sub:regularization:eq:F_0} \mathbf{F}(\mathbf{x}) = \left[\frac{1 + \sgn(h(\mathbf{x}))}{2}\right] \mathbf{F}_+(\mathbf{x}) + \left[\frac{1 - \sgn(h(\mathbf{x}))}{2}\right] \mathbf{F}_-(\mathbf{x}), \end{equation} \noindent where $\sgn:\mathbb{R} \to \mathbb{R}$ is the \textbf{sign function} given by \begin{equation*} \sgn(x) = \begin{cases} -1, & \text{ if } x < 0, \\ 0, & \text{ if } x = 0, \\ 1, & \text{ if } x > 0, \end{cases} \end{equation*} \noindent which is a discontinuous function whose graph is represented in \cref{sec:preliminaries:sub:regularization:subfig:signal}. In order to approximate the piecewise smooth vector field $\mathbf{F}$ with a 1-parameter family of smooth vector fields, we approximate the sign function in \cref{sec:preliminaries:sub:regularization:eq:F_0} with a certain type of smooth function. More precisely: \begin{defn}% \label{sec:preliminaries:sub:regularization:defn:transition} We say that a smooth function $\varphi:\mathbb{R} \to \mathbb{R}$ is a \textbf{monotonic transition function} if \begin{equation*} \varphi(x) = \begin{cases} -1, & \text{ if } x \le -1, \\ 1, & \text{ if } x \ge 1, \end{cases} \end{equation*} \noindent and $\varphi'(x) > 0$ for $-1 < x < 1$. \end{defn} The graph of a typical transition function is represented in \cref{sec:preliminaries:sub:regularization:subfig:transition}. Observe that, if we define $\varphi_{\varepsilon}(x) = \varphi\left(\frac{x}{\varepsilon}\right)$, where $\varepsilon > 0$, then clearly $\varphi_{\varepsilon} \to \sgn$ pointwise when $\varepsilon \to 0$, as long as their domains are restricted to the set $\mathbb{R}\setminus\{0\}$.
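An explicit monotonic transition function can be built, for instance, from the classical flat function $\psi(s) = e^{-\sfrac{1}{s}}$ for $s > 0$ and $\psi(s) = 0$ for $s \le 0$ (this particular construction is not part of the theory above and is included only for illustration): \begin{equation*} \varphi(x) = \frac{\psi(1+x) - \psi(1-x)}{\psi(1+x) + \psi(1-x)}, \end{equation*} \noindent which is smooth, since the denominator never vanishes; satisfies $\varphi(x) = -1$ for $x \le -1$ and $\varphi(x) = 1$ for $x \ge 1$; and has $\varphi'(x) > 0$ for $-1 < x < 1$.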
In particular, if we define \begin{equation} \label{sec:preliminaries:sub:regularization:eq:F_epsilon} \mathbf{F}_{\varepsilon}(\mathbf{x}) = \left[\frac{1 + \varphi_{\varepsilon}(h(\mathbf{x}))}{2}\right] \mathbf{F}_+(\mathbf{x}) + \left[\frac{1 - \varphi_{\varepsilon}(h(\mathbf{x}))}{2}\right] \mathbf{F}_-(\mathbf{x}), \end{equation} \noindent then we get a 1-parameter family of vector fields $\mathbf{F}_{\varepsilon} \in C^r(U)$ such that $\mathbf{F}_{\varepsilon} \to \mathbf{F}$ pointwise when $\varepsilon \to 0$, as long as their domains are restricted to the set $U \setminus \Sigma$. \begin{defn}% \label{sec:preliminaries:sub:regularization:defn:regularization} Let $\varphi:\mathbb{R} \to \mathbb{R}$ be a monotonic transition function. We say that \cref{sec:preliminaries:sub:regularization:eq:F_epsilon} is a $\varphi_{\varepsilon}$-\textbf{regularization} of \cref{sec:preliminaries:sub:regularization:eq:F_0}. \end{defn} \begin{figure}[ht] \centering \begin{subfigure}[b]{0.45\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{signal.pdf_tex} \caption{Sign function.}% \label{sec:preliminaries:sub:regularization:subfig:signal} \end{subfigure} \qquad \begin{subfigure}[b]{0.45\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{transition.pdf_tex} \caption{Transition function.}% \label{sec:preliminaries:sub:regularization:subfig:transition} \end{subfigure} \bigskip \begin{subfigure}[b]{0.4\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{piecewise.pdf_tex} \caption{Piecewise field.}% \label{sec:preliminaries:sub:regularization:subfig:piecewise} \end{subfigure} \qquad \begin{subfigure}[b]{0.4\linewidth} \def\svgwidth{\linewidth} \import{tex/sec/preliminaries/fig/}{regularization.pdf_tex} \caption{Regularized field.}% \label{sec:preliminaries:sub:regularization:subfig:regularization} \end{subfigure} \caption{Grid representation of the Sotomayor-Teixeira regularization, with the sign function (a) associated with the piecewise smooth vector field (c) and the transition function (b) associated with the regularized vector field (d).}% \label{sec:preliminaries:sub:regularization:fig:regularization_grid} \end{figure} Observe that the regularization $\mathbf{F}_{\varepsilon}$ coincides with $\mathbf{F}$ outside the rectangle given by $-\varepsilon < h(\mathbf{x}) < \varepsilon$. In fact, \begin{equation*} \mathbf{F}_{\varepsilon}(\mathbf{x}) = \begin{cases} \mathbf{F}_+(\mathbf{x}), & \text{ if } h(\mathbf{x}) \ge \varepsilon, \\ \mathbf{F}_-(\mathbf{x}), & \text{ if } h(\mathbf{x}) \le -\varepsilon, \end{cases} \end{equation*} \noindent as represented in \cref{sec:preliminaries:sub:regularization:subfig:regularization}. In particular, it is clear that $\mathbf{F}_{\varepsilon}$ recovers the smooth component of the Filippov dynamics given by $\mathbf{F}$, i.e., that associated with the region $U\setminus\Sigma$, as long as we take $\varepsilon > 0$ small enough. As described in \cref{sec:preliminaries:sub:gsp}, $\mathbf{F}_{\varepsilon}$ also recovers the non-smooth component of the Filippov dynamics, i.e., that associated with the region $\Sigma$. \section{Statement of the Problem}% \label{sec:problem} One of the fundamental hypotheses in the theory described above is the fact that $0 \in \mathbb{R}$ is a \textbf{regular value} of the function $h:U \to \mathbb{R}$ and, therefore, the switching manifold $\Sigma = h^{-1}(\{0\})$ is a regular surface.
In that case, as we have seen, there exists at least one well-defined and established dynamics associated: the Filippov dynamics. A natural question to ask at this point is: can a Filippov-like dynamics be defined for the case when $0 \in \mathbb{R}$ is a \textbf{singular value} of the function $h:U \to \mathbb{R}$, i.e., when the switching manifold is not a regular surface? \begin{figure}[ht] \centering \def\svgwidth{0.6\linewidth} \import{tex/sec/problem/fig/}{switching_manifold.pdf_tex} \caption{Double discontinuity.}% \label{sec:problem:fig:switching_manifold} \end{figure} In this work, we would like to study the particular case known as the \textbf{double discontinuity}. This particular configuration of the switching manifold is the simplest of the four singular configurations (known as Gutierrez-Sotomayor or simple manifolds) that, according to~\cite{Gutierrez1982}, break the regularity condition in a dynamically stable manner. The double discontinuity is described in detail below. Let $\mathbf{F}_i: \mathbb{R}^3 \to \mathbb{R}^3$ be vector fields of class $C^k(\mathbb{R}^3)$ with $i \in \{1,2,3,4 \}$. The piecewise smooth vector field $\mathbf{F}: \mathbb{R}^3 \to \mathbb{R}^3$ given by \begin{equation} \label{sec:problem:eq:piecewise_field} \mathbf{F}(x,y,z) = \begin{cases} \mathbf{F}_1(x,y,z),~ & \text{if}~y \ge 0~\text{and}~z \ge 0 \\ \mathbf{F}_2(x,y,z),~ & \text{if}~y \le 0~\text{and}~z \ge 0 \\ \mathbf{F}_3(x,y,z),~ & \text{if}~y \le 0~\text{and}~z \le 0 \\ \mathbf{F}_4(x,y,z),~ & \text{if}~y \ge 0~\text{and}~z \le 0 \end{cases}, \end{equation} \noindent and denoted by $\mathbf{F} = \left(\mathbf{F}_1, \mathbf{F}_2, \mathbf{F}_3, \mathbf{F}_4\right)$ is said to have a \textbf{double discontinuity} as switching manifold; see \cref{sec:problem:fig:switching_manifold}. The set of all vector fields $\mathbf{F}$ defined as above will be denoted by \begin{equation*} \mathcal{D}^k \equiv C^k(\mathbb{R}^3) \times C^k(\mathbb{R}^3) \times C^k(\mathbb{R}^3) \times C^k(\mathbb{R}^3) \end{equation*} \noindent and equipped with the Whitney product topology. The double discontinuity, as defined above, consists of the planes $xy$ and $xz$, which intersect perpendicularly along the $x$-axis $\Sigma_x = \{(x,0,0) ;~ x \in \mathbb{R}\}$. For points in $\Sigma \setminus \Sigma_{x}$, the ordinary Filippov dynamics described in \cref{sec:preliminaries} can be locally applied. However, for points $(x,0,0) \in \Sigma_x$ that theory cannot be directly applied. In fact, $\Sigma = h^{-1}(\{0\})$, where $h:\mathbb{R}^3 \to \mathbb{R}$, given by $h(x,y,z) = yz$, has $0 \in \mathbb{R}$ as a singular value, since $Dh(x,0,0)$ is not a surjective map for $(x,0,0) \in \Sigma_{x}$. Therefore, we state the problem: given $\mathbf{F} \in \mathcal{D}^k$, can we define a Filippov-like dynamics over $\Sigma_{x}$? How does it generally behave there? In \cref{sec:framework} we provide a framework, based on~\cite{Buzzi2012,Llibre2015,Panazzolo2017,Teixeira2012}, to approach this problem. \subsection{Affine Dynamics}% \label{sec:stability:affine} Let $\mathbf{F} \in \mathcal{A}$ be a piecewise smooth vector field with a double discontinuity given by the affine vector fields \begin{equation} \label{sec:stability:sub:affine:eq:system} \begin{aligned} \mathbf{F}_i(x,y,z) = ( & a_{i1}x + b_{i1}y + c_{i1}z + d_{i1}, \\ & a_{i2}x + b_{i2}y + c_{i2}z + d_{i2}, \\ & a_{i3}x + b_{i3}y + c_{i3}z + d_{i3}), \end{aligned} \end{equation} \noindent with $\gamma_{i} \neq 0$.
Remember that, in this case, \cref{sec:affine:thm:dynamics} provides a full description of the dynamics of \cref{sec:stability:sub:affine:eq:system} and, therefore, as in the previous section, we would like to combine it with \cref{sec:stability:prop:conditions} to derive a semi-local structural stability theorem. In order to apply these results, given $\Sigma_{\theta_0} \in \tilde{\mathcal{I}}_C$, let $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ be the Filippov system induced by \cref{sec:stability:sub:affine:eq:system} in a rectangular compact $K \subset C_{+} \cup C_{-}$, where $C_{+}$ and $C_{-}$ are two consecutive stripes meeting at $\Sigma_{\theta_0}$, as represented in \cref{sec:stability:fig:compact}. According to \cref{sec:affine:thm:dynamics}, the following are the possible categories of dynamics for a stripe $S_i \in \left\{ C_{+}, C_{-}\right\}$, which we now analyse against conditions \cref{sec:stability:conditions:ms_hyperbolic} to \cref{sec:stability:conditions:sg_tangency} of \cref{sec:stability:prop:conditions} case by case in order to discover those that can possibly generate structurally stable systems, i.e., the \emph{candidates}: \begin{enumerate} \item $a_{i2} \neq 0$: \begin{enumerate} \item $a_{i3} \neq 0$: \\ The characterizing property of this case is the fact that $\beta_{i} \neq 0$ and, therefore, the horizontal asymptotes reside inside the stripes, possibly even inside $S_i$. As a consequence, there is always a visible part of the slow manifold inside $S_i$. Hence, if $a_{i1} = 0$ and $d_{i1} = 0$, then we have a continuum of singularities; if $a_{i1} = 0$ and $d_{i1} \neq 0$, then we have a bifurcation similar to that described in \cref{sec:stability:prop:bifurcation} when perturbing. Either way, \cref{sec:stability:conditions:ms_hyperbolic} is violated. However, if $a_{i1} \neq 0$, then \cref{sec:framework:sub:blowup:cor:singularities} assures the existence of at most one robust singularity $\mathbf{P}$, always hyperbolic; therefore, \cref{sec:stability:conditions:ms_hyperbolic} and \cref{sec:stability:conditions:ms_separatrix} hold, since there are obviously no periodic orbits inside $S_i$. As in the constant case, the $\alpha$- or $\omega$-limit nature of the slow manifold also assures \cref{sec:stability:conditions:ms_wandering}. For \cref{sec:stability:conditions:sg_zeros} and \cref{sec:stability:conditions:sg_tangency}, observe that the fast dynamics is always transversal and, therefore, we only need the additional condition $\mathbf{P} \not\in \Sigma_{\theta_0}$. Finally, as in the constant case, invoking results such as continuity theorems and the Thom Transversality Theorem, we easily conclude that the properties validated above are robust under perturbations inside $\mathcal{A}$. Therefore, this case is a \emph{candidate} if, and only if, $a_{i1} \neq 0$ and $\mathbf{P} \not\in \Sigma_{\theta_0}$. \item $a_{i3} = 0$: \\ The only difference between this case and the previous one is the fact that $\beta_{i} = 0$ and, therefore, the horizontal asymptotes are exactly at the borders $\theta = 0$ and $\theta = \pi$ of the stripes. However, since we are working inside a rectangular compact set $K$, the same arguments as in the previous case apply here. \end{enumerate} \item $a_{i2} = 0$: \\ Finally, the only difference between this case and the previous one ($a_{i2} \neq 0$ and $a_{i3} = 0$) is the fact that now the horizontal asymptotes are exactly at the borders $\theta = \sfrac{\pi}{2}$ and $\theta = \sfrac{3\pi}{2}$ of the stripes.
Therefore, the same arguments apply.
\end{enumerate}

The analysis of the remaining conditions \cref{sec:stability:conditions:sg_colinear}--\cref{sec:stability:conditions:cr_recurrent} requires the combined dynamics of the stripes $C_{+}$ and $C_{-}$. Therefore, in order to decide stability, we need to analyse all the combinations of candidates obtained above against these conditions. Generally, it is fairly easy to perform this analysis for a specific combination. However, translating these final conditions into parametric ones, although possible, would lead to a relatively large number\footnote{More specifically, \cref{sec:affine:thm:dynamics} gives us a normal form with 8 possible dynamics for each stripe. Combining them 2 by 2 (with repetition) leaves us with 36 combinations. Even if half of the combinations led to a repeated condition, we would still be left with 18 conditions!} of conditions that, worse, would carry little to no geometrical meaning. Hence, leaving these final conditions ``untranslated'' is the better approach and, therefore, the following theorem has been proved:

\begin{thm}[Affine Double Discontinuity Stability] \label{sec:stability:sub:affine:thm:conditions} Let $\mathbf{F} \in \mathcal{A}$ be given by \cref{sec:stability:sub:affine:eq:system} with $\gamma_{i} \neq 0$. Given $\Sigma_{\theta_0} \in \tilde{\mathcal{I}}_C$, let $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ be the Filippov system induced around $\Sigma_{\theta_0}$ and inside a rectangular compact $K \subset C_{+} \cup C_{-}$, where $C_{+}$ and $C_{-}$ are two consecutive stripes meeting at $\Sigma_{\theta_0}$. Then, $\mathbf{F}$ is $(\Sigma_{\theta_0}, K)$-semi-local structurally stable in $\mathcal{A}$ if, and only if, $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ satisfy
\begin{enumerate}
\item $a_{i1} \neq 0$ and $\mathbf{P} \not\in \Sigma_{\theta_0}$, where $\mathbf{P}$ is the only singularity of $\mathbf{X}_{\pm}$;
\item conditions \cref{sec:stability:conditions:sg_colinear}--\cref{sec:stability:conditions:cr_recurrent} of \cref{sec:stability:prop:conditions}.
\end{enumerate}
\end{thm}

\begin{exmp} \label{sec:stability:sub:affine:exmp:unstable} Let us examine an example of instability around the discontinuity manifold $\Sigma_{0} \in \tilde{\mathcal{I}}_C$ of the cylinder generated by affine vector fields. More precisely, take $\mathbf{F} \in \mathcal{A}$ with $\mathbf{F}_4$ and $\mathbf{F}_1$ affine vector fields given by \cref{sec:stability:sub:affine:eq:system} such that
\begin{equation*} \mathbf{F}_4 : \begin{bmatrix} a_{41} & d_{41} \\ a_{42} & d_{42} \\ a_{43} & d_{43} \end{bmatrix} = \begin{bmatrix*}[r] -1 & 1 \\ 0 & -1 \\ 1 & 0 \end{bmatrix*} \quad \text{and} \quad \mathbf{F}_1 : \begin{bmatrix} a_{11} & d_{11} \\ a_{12} & d_{12} \\ a_{13} & d_{13} \end{bmatrix} = \begin{bmatrix*}[r] 1 & -1 \\ 0 & 1 \\ 1 & 0 \end{bmatrix*}, \end{equation*}
\noindent whose dynamics over the stripes $S_4 \cup S_1$, represented in \cref{sec:stability:sub:affine:fig:unstable} below, can be determined as in \cref{sec:affine:exmp:slow_cycle} using \cref{sec:affine:thm:dynamics}.
\begin{figure}[ht] \centering \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{affine_x4.pdf_tex} \caption{$\mathbf{X}_4$} \label{sec:stability:sub:affine:subfig:affine_x4} \end{subfigure} \quad \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{affine_x1.pdf_tex} \caption{$\mathbf{X}_1$} \label{sec:stability:sub:affine:subfig:affine_x1} \end{subfigure} \bigskip \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{affine_x.pdf_tex} \caption{$\mathbf{X} = \left( \mathbf{X}_4, \mathbf{X}_1 \right)$} \label{sec:stability:sub:affine:subfig:affine_x} \end{subfigure} \quad \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{affine_rot.pdf_tex} \caption{$\mathbf{X} = \left( \mathbf{X}_4, \mathbf{X}_1 \right)$ with $r > 0$} \label{sec:stability:sub:affine:subfig:affine_rot} \end{subfigure} \caption{Dynamics over the stripes $S_4 \cup S_1$ generated by the fields studied in \cref{sec:stability:sub:affine:exmp:unstable}.} \label{sec:stability:sub:affine:fig:unstable} \end{figure}

Regarding $\mathbf{F}_4$, since $a_{42} = 0$ and $\gamma_4 = -1 < 0$, \cref{sec:affine:thm:dynamics} tells us that the slow manifold is a decreasing arctangent with horizontal asymptotes $\theta = -\sfrac{\pi}{2}$ and $\theta = \sfrac{\pi}{2}$, as represented in \cref{sub@sec:stability:sub:affine:subfig:affine_x4}. This manifold crosses the line $\theta = \theta_0 = 0$ at the $x \in \mathbb{R}$ such that
\begin{equation*} 0 = \theta_0 = \theta_4(x) = \arctan{\left(\frac{a_{43}x + d_{43}}{d_{42}}\right)} = \arctan{\left( -x \right)} \Leftrightarrow x = 0, \end{equation*}
\noindent i.e., at the point $\mathbf{Q}_4 = (0,0)$. Moreover, over the slow manifold acts the dynamics $\dot{x} = -x+1$, whose only singularity, at the point
\begin{equation*} \mathbf{P}_4 = \left( \delta_4, \theta_4(\delta_4) \right) = \left( 1, \arctan{\left( -1 \right)} \right) = \left( 1, -\frac{\pi}{4} \right), \end{equation*}
\noindent is stable, since $a_{41} < 0$. Furthermore, since $a_{42} = 0$ and $d_{42} < 0$, according to \cref{sec:affine:tab:fast} the slow manifold repels the surrounding layer dynamics. Therefore, recalling \cref{sec:framework:sub:blowup:cor:singularities}, we conclude that $\mathbf{P}_4$, as a singularity of $\mathbf{X}_4$, is a hyperbolic \emph{saddle}. On the other hand, regarding $\mathbf{F}_1$, since $a_{12} = 0$ and $\gamma_1 = 1 > 0$, \cref{sec:affine:thm:dynamics} tells us that the slow manifold is an increasing arctangent with horizontal asymptotes $\theta = -\sfrac{\pi}{2}$ and $\theta = \sfrac{\pi}{2}$, as represented in \cref{sec:stability:sub:affine:subfig:affine_x1}. This manifold crosses the line $\theta = \theta_0 = 0$ at the $x \in \mathbb{R}$ such that
\begin{equation*} 0 = \theta_0 = \theta_1(x) = \arctan{\left(\frac{a_{13}x + d_{13}}{d_{12}}\right)} = \arctan{\left( x \right)} \Leftrightarrow x = 0, \end{equation*}
\noindent i.e., also at the point $\mathbf{Q}_1 = (0,0)$. Moreover, over the slow manifold acts the dynamics $\dot{x} = x-1$, whose only singularity, at the point
\begin{equation*} \mathbf{P}_1 = \left( \delta_1, \theta_1(\delta_1) \right) = \left( 1, \arctan{\left( 1 \right)} \right) = \left( 1, \frac{\pi}{4} \right), \end{equation*}
\noindent is unstable, since $a_{11} > 0$.
Furthermore, since $a_{12} = 0$ and $d_{12} > 0$, according to \cref{sec:affine:tab:fast} the slow manifold attracts the surrounding layer dynamics. Therefore, recalling \cref{sec:framework:sub:blowup:cor:singularities}, we conclude that $\mathbf{P}_1$, as a singularity of $\mathbf{X}_1$, is also a hyperbolic \emph{saddle}. Hence, as represented in \cref{sub@sec:stability:sub:affine:subfig:affine_x}, since $\mathbf{Q}_4 = \mathbf{Q}_1$ with $\mathbf{P}_4$ and $\mathbf{P}_1$ hyperbolic saddles, the Filippov system $\mathbf{X} = (\mathbf{X}_4, \mathbf{X}_1)$ has a separatrix-connection and, therefore, violates condition \cref{sec:stability:conditions:cr_separatrix} of \cref{sec:stability:prop:conditions}, whatever rectangular compact $K$ is considered. In other words, according to \cref{sec:stability:sub:affine:thm:conditions}, this configuration is structurally unstable around the discontinuity manifold $\Sigma_0$. Finally, we observe that, as represented in \cref{sec:stability:sub:affine:subfig:affine_x}, there are actually two separatrix-connections between the saddles $\mathbf{P}_4$ and $\mathbf{P}_1$. These connections enclose a \emph{rotating region}, represented in \cref{sec:stability:sub:affine:subfig:affine_rot}. \qed \end{exmp}

\subsection{Constant Dynamics}% \label{sec:stability:constant}

Let $\mathbf{F} \in \mathcal{C}$ be a piecewise smooth vector field with a double discontinuity given by constant vector fields
\begin{equation} \label{sec:stability:sub:constant:eq:system} \mathbf{F}_i(x,y,z) = (d_{i1}, d_{i2}, d_{i3}), \end{equation}
\noindent with $d_{i2} \neq 0$ or $d_{i3} \neq 0$. Remember that, in this case, \cref{sec:constant:thm:dynamics} provides a full description of the dynamics of \cref{sec:stability:sub:constant:eq:system} and, therefore, we would like to combine it with \cref{sec:stability:prop:conditions} to derive a semi-local structural stability theorem. In order to apply these results, given $\Sigma_{\theta_0} \in \tilde{\mathcal{I}}_C$, let $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ be the Filippov system induced by \cref{sec:stability:sub:constant:eq:system} in a rectangular compact $K \subset C_{+} \cup C_{-}$, where $C_{+}$ and $C_{-}$ are two consecutive stripes meeting at $\Sigma_{\theta_0}$ as represented in \cref{sec:stability:fig:compact}. According to \cref{sec:constant:thm:dynamics}, the following are the possible categories of dynamics for a stripe $S_i \in \left\{ C_{+}, C_{-}\right\}$, which we now analyse against conditions \cref{sec:stability:conditions:ms_hyperbolic}--\cref{sec:stability:conditions:sg_tangency} of \cref{sec:stability:prop:conditions} case by case, in order to discover those that can possibly generate structurally stable systems, henceforth called \textbf{candidates}:
\begin{enumerate}
\item $d_{i2} \neq 0$:
\begin{enumerate}
\item $d_{i3} \neq 0$: \\ One, and only one, of the straight lines $L_{i}$ or $L_{i}^{\pi}$ is visible inside the stripe. Hence, if $d_{i1} = 0$, then we have a continuum of singularities, i.e., a violation of condition \cref{sec:stability:conditions:ms_hyperbolic}. However, if $d_{i1} \neq 0$, then no critical elements are present and, therefore, \cref{sec:stability:conditions:ms_hyperbolic} and \cref{sec:stability:conditions:ms_separatrix} hold. As for \cref{sec:stability:conditions:ms_wandering}, since the slow manifold acts as the $\alpha$- or $\omega$-limit of the surrounding dynamics, it also holds if $d_{i1} \neq 0$.
Furthermore, since over the borders of $S_i$ there is only transversal layer dynamics, \cref{sec:stability:conditions:sg_zeros} and \cref{sec:stability:conditions:sg_tangency} also hold. Finally, observe that, invoking theorems such as \emph{continuity theorems} and the \emph{Thom Transversality Theorem}, we easily conclude the robustness of the properties validated above under perturbations inside $\mathcal{C}$. Therefore, this case is a \emph{candidate} if, and only if, $d_{i1} \neq 0$.
\item $d_{i3} = 0$: \\ The only difference between this case and the previous one is that, now, one of the straight lines $L_{i}$ or $L_{i}^{\pi}$ lies over one of the borders of the stripe $S_i$ and, therefore, \cref{sec:stability:conditions:sg_tangency} is possibly violated, whatever $d_{i1}$. More specifically, if $L_{i}$ or $L_{i}^{\pi}$ coincides with $\Sigma_{\theta_0}$, then we have instability; otherwise, we have a \emph{candidate}.
\end{enumerate}
\item $d_{i2} = 0$ and $d_{i3} \neq 0$: \\ This case is similar to the previous one ($d_{i2} \neq 0$ and $d_{i3} = 0$): whatever $d_{i1}$, if $L_{i}$ or $L_{i}^{\pi}$ coincides with $\Sigma_{\theta_0}$, then we have instability; otherwise, we have a \emph{candidate}.
\end{enumerate}

The analysis of the remaining conditions \cref{sec:stability:conditions:sg_colinear}--\cref{sec:stability:conditions:cr_recurrent} requires the combined dynamics of the stripes $C_{+}$ and $C_{-}$. Therefore, in order to decide stability, we shall now analyse all the combinations of candidates obtained above, summarized in \cref{sec:stability:sub:constant:tab:candidates}, against these conditions.

\begin{table}[ht] \centering \begin{tabular}{ccc} \hline & $d_{i2} \neq 0$ & $d_{i2} = 0$ \\ \hline $d_{i3} \neq 0$ & $d_{i1} \neq 0$ & $\theta_i \neq \theta_0$ \\ $d_{i3} = 0$ & $\theta_i \neq \theta_0$ & unstable \\ \hline \end{tabular}% \caption{Conditions under which the stripe $S_i$ is a semi-local structural stability candidate.} \label{sec:stability:sub:constant:tab:candidates} \end{table}

Actually, most of the remaining conditions can be easily dropped. In fact, according to \cref{sec:constant:thm:dynamics}, none of the candidates have periodic orbits; moreover, because of the $\alpha$- and $\omega$-limit nature of $\mathcal{M}_i$, an orbit that enters $S_i$ never touches the same border again. Therefore, \cref{sec:stability:conditions:cr_periodic} always holds, since there are no periodic orbits. Likewise, there are no singularities, usual or otherwise, and hence no separatrix-connections or relations, i.e., \cref{sec:stability:conditions:cr_separatrix} always holds. Finally, as long as $d_{i1} \neq 0$, the Poincar\'{e}-Bendixson Theorem assures that no non-trivial recurrent orbits can occur inside $S_i$; moreover, again because of the $\alpha$- and $\omega$-limit nature of $\mathcal{M}_i$, neither can they occur through the switching manifold. Therefore, \cref{sec:stability:conditions:cr_recurrent} also always holds. At this point, the following theorem has been proved:

\begin{thm}[Constant Double Discontinuity Stability] \label{sec:stability:sub:constant:thm:conditions} Let $\mathbf{F} \in \mathcal{C}$ be given by $\mathbf{F}_i(x,y,z) = (d_{i1}, d_{i2}, d_{i3})$ with $d_{i2} \neq 0$ or $d_{i3} \neq 0$.
Given $\Sigma_{\theta_0} \in \tilde{\mathcal{I}}_C$, let $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ be the Filippov system induced around $\Sigma_{\theta_0}$ and inside a rectangular compact $K \subset C_{+} \cup C_{-}$, where $C_{+}$ and $C_{-}$ are two consecutive stripes meeting at $\Sigma_{\theta_0}$. Then, $\mathbf{F}$ is $(\Sigma_{\theta_0}, K)$-semi-local structurally stable in $\mathcal{C}$ if, and only if, $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ satisfy at least one of the conditions
\begin{enumerate}
\item $d_{i1}d_{i2}d_{i3} \neq 0$; or
\item $d_{i1} \neq 0$, $d_{i2}^2 + d_{i3}^2 \neq 0$ and $\theta_i \neq \theta_0$;
\end{enumerate}
\noindent and, additionally, $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ are non-colinear over $\Sigma_{\theta_0}$, except at finitely many points.
\end{thm}

\begin{exmp} \label{sec:stability:sub:constant:exmp:unstable} Let us examine an example of instability around the discontinuity manifold $\Sigma_{\frac{\pi}{2}} \in \tilde{\mathcal{I}}_C$ of the cylinder generated by constant vector fields. More precisely, take $\mathbf{F} \in \mathcal{C}$ with
\begin{equation*} \mathbf{F}_1(x,y,z) = (1,-1,1) \quad \text{and} \quad \mathbf{F}_2(x,y,z) = (-1,1,1), \end{equation*}
\noindent whose dynamics over the stripes $S_1 \cup S_2$, represented in \cref{sec:stability:sub:constant:fig:const_sep} below, can be determined as in \cref{sec:constant:exmp:no_singularities} using \cref{sec:constant:thm:dynamics}.

\begin{figure}[ht] \centering \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{const_x1.pdf_tex} \caption{$\mathbf{X}_1$} \label{sec:stability:sub:constant:subfig:const_x1} \end{subfigure} \quad \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{const_x2.pdf_tex} \caption{$\mathbf{X}_2$} \label{sec:stability:sub:constant:subfig:const_x2} \end{subfigure} \quad \begin{subfigure}[b]{0.47\linewidth} \def0.85\linewidth{\linewidth} \import{tex/sec/stability/fig/}{const_x.pdf_tex} \caption{$\mathbf{X} = \left( \mathbf{X}_1, \mathbf{X}_2 \right)$} \label{sec:stability:sub:constant:subfig:const_x} \end{subfigure} \caption{Dynamics over the stripes $S_1 \cup S_2$ generated by the fields studied in \cref{sec:stability:sub:constant:exmp:unstable}.} \label{sec:stability:sub:constant:fig:const_sep} \end{figure}

Since $d_{11}d_{12}d_{13} = -1 \neq 0$ and $d_{21}d_{22}d_{23} = -1 \neq 0$, the first part of \cref{sec:stability:sub:constant:thm:conditions} is satisfied. However, the induced dynamics $\mathbf{X}_1$ and $\mathbf{X}_2$ over the stripes $S_1$ and $S_2$, respectively, are colinear over their whole intersection, the discontinuity manifold $\Sigma_{\frac{\pi}{2}}$. In fact, as represented in \cref{sec:stability:sub:constant:subfig:const_x1}, for $\mathbf{F}_1$ the slow manifold consists of the straight lines given by $\theta = \theta_1 = -\frac{\pi}{4}$ and $\theta = \theta_1 + \pi = \frac{3\pi}{4}$; over them acts the increasing dynamics $\dot{x} = 1$. Moreover, the first line is a repeller and the second an attractor of the surrounding layer dynamics. On the other hand, as represented in \cref{sec:stability:sub:constant:subfig:const_x2}, for $\mathbf{F}_2$ the slow manifold consists of the straight lines given by $\theta = \theta_2 = \frac{\pi}{4}$ and $\theta = \theta_2 + \pi = \frac{5\pi}{4}$; over them acts the decreasing dynamics $\dot{x} = -1$. Moreover, the first line is an attractor and the second a repeller of the surrounding layer dynamics.
In other words, the only differences between their dynamics are a $\pi$-translation in $\theta$ and inverted stability. This symmetry assures the colinearity of $\mathbf{X}_1$ and $\mathbf{X}_2$ over $\Sigma_{\frac{\pi}{2}}$, as represented in \cref{sec:stability:sub:constant:subfig:const_x}. Hence, the final part of \cref{sec:stability:sub:constant:thm:conditions} is violated and, therefore, this configuration is structurally unstable around $\Sigma_{\frac{\pi}{2}}$, whatever rectangular compact $K$ is considered. Geometrically, the instability here comes from the fact that each point of colinearity is associated with a pseudo-singularity of the sliding vector field of the Filippov system $\mathbf{X} = \left(\mathbf{X}_1, \mathbf{X}_2 \right)$, and in our configuration we have a continuum of them. This whole continuum of pseudo-singularities can be easily destroyed by perturbing any of the associated vector fields. \qed \end{exmp}

\section{Structural Stability}% \label{sec:stability}

Let $\mathbf{F} \in \mathcal{D}^k$ be a piecewise smooth vector field with a double discontinuity given by affine vector fields \eqref{sec:affine:eq:system}. The theorems obtained in the previous sections fully describe the affine double discontinuity dynamics over the cylinder $C$ of the induced vector field $\mathbf{\tilde{F}} \in \tilde{\mathcal{D}}^k$. As an application, we would like to use this knowledge to study its structural stability. The first step in this process consists of defining a concept of structural stability for the systems under study. In order to do so, we mimic the classic definition for the regular case, $\mathcal{R}(U,h)$, found in \cite{Teixeira1990}:

\begin{defn} \label{sec:stability:defn:equivalence_R} Let $\mathbf{F}, \mathbf{G} \in \mathcal{R}^k(U,h)$ with $\Sigma = h^{-1}(0)$. We say that $\mathbf{F}$ and $\mathbf{G}$ are \textbf{topologically equivalent} and denote $\mathbf{F} \sim \mathbf{G}$ if, and only if, there exists a homeomorphism $\varphi: U \to U$ that keeps $\Sigma$ invariant and takes orbits of $\mathbf{F}$ into orbits of $\mathbf{G}$ preserving the orientation of time. From this definition the concept of structural stability in $\mathcal{R}^k(U,h)$ is naturally obtained.
\end{defn}

The previous definitions can be easily extended to $\mathcal{D}^k$. In fact, on the one hand, systems in $\mathcal{R}^k(U,h)$ have a single subset which should be kept invariant, $\Sigma = h^{-1}(0)$; on the other hand, systems in $\mathcal{D}^k$ have a set of subsets
\begin{equation*} \mathcal{I} = \left\{ \Sigma_{12}, \Sigma_{23}, \Sigma_{34}, \Sigma_{14}, \Sigma_{x} \right\} \end{equation*}
\noindent which should be kept invariant under a topological equivalence. Therefore, a direct substitution gives us the following definition:

\begin{defn} \label{sec:stability:defn:equivalence_D} Let $\mathbf{F}, \mathbf{G} \in \mathcal{D}^k$. We say that $\mathbf{F}$ and $\mathbf{G}$ are \textbf{topologically equivalent} and denote $\mathbf{F} \sim \mathbf{G}$ if, and only if, there exists a homeomorphism $\varphi: \mathbb{R}^3 \to \mathbb{R}^3$ that keeps every $I \in \mathcal{I}$ invariant and takes orbits of $\mathbf{F}$ into orbits of $\mathbf{G}$ preserving the orientation of time. From this definition the concept of structural stability in $\mathcal{D}^k$ is naturally obtained.
\end{defn}

For the blow-up induced vector fields, $\tilde{\mathcal{D}}^k$, the set of invariant subsets is given by
\begin{equation*} \tilde{\mathcal{I}} = \left\{ \tilde{\Sigma}_{12}, \tilde{\Sigma}_{23}, \tilde{\Sigma}_{34}, \tilde{\Sigma}_{14}, C \right\} \end{equation*}
\noindent and, therefore, we define:

\begin{defn} \label{sec:stability:defn:equivalence_D_ind} Let $\mathbf{\tilde{F}}, \mathbf{\tilde{G}} \in \tilde{\mathcal{D}}^k$. We say that $\mathbf{\tilde{F}}$ and $\mathbf{\tilde{G}}$ are \textbf{topologically equivalent} and denote $\mathbf{\tilde{F}} \sim \mathbf{\tilde{G}}$ if, and only if, there exists a homeomorphism $\tilde{\varphi}: \mathbb{R} \times S^1 \times \mathbb{R}^+ \to \mathbb{R} \times S^1 \times \mathbb{R}^+$ that keeps every $I \in \tilde{\mathcal{I}}$ invariant and takes orbits of $\mathbf{\tilde{F}}$ into orbits of $\mathbf{\tilde{G}}$ preserving the orientation of time. From this definition the concept of structural stability in $\tilde{\mathcal{D}}^k$ is naturally obtained.
\end{defn}

Now, let $\mathbf{\tilde{F}}, \mathbf{\tilde{G}} \in \tilde{\mathcal{D}}^k$ be topologically equivalent by a homeomorphism $\tilde{\varphi}$. In this case, the restrictions $\restr{\tilde{\varphi}}{I}$ with $I \in \tilde{\mathcal{I}}$ are also homeomorphisms taking orbits into orbits and preserving the orientation of time. In other words, the existence of these homeomorphisms is a necessary condition for the topological equivalence. More precisely:

\begin{prop} \label{sec:stability:prop:equivalence_invariants} If $\mathbf{\tilde{F}} \sim \mathbf{\tilde{G}}$, then $\restr{\mathbf{\tilde{F}}}{I} \sim \restr{\mathbf{\tilde{G}}}{I}$ for every $I \in \tilde{\mathcal{I}}$. \end{prop}

We are interested in the dynamics over the cylinder $C$. Therefore, given $\mathbf{F} \in \mathcal{D}^k$, we look for necessary and/or sufficient conditions for the structural stability of $\restr{\mathbf{\tilde{F}}}{C}$. Beyond the intrinsic interest, given \cref{sec:stability:prop:equivalence_invariants} above, such conditions shall also reveal relevant information on the structural stability of $\mathbf{\tilde{F}}$ and, therefore, on the structural stability of $\mathbf{F}$. In fact, from \cref{sec:stability:prop:equivalence_invariants} follows the result below.

\begin{cor} \label{sec:stability:cor:stability_invariants} If $\mathbf{\tilde{F}}$ is structurally stable, then $\restr{\mathbf{\tilde{F}}}{I}$ is structurally stable for every $I \in \tilde{\mathcal{I}}$. \end{cor}

\begin{proof} Given $I \in \tilde{\mathcal{I}}$, let $\tilde{\mathcal{W}} \subset \tilde{\mathcal{D}}^k$ be an open neighborhood of $\mathbf{\tilde{F}}$. Observe that
\begin{equation*} \restr{\tilde{\mathcal{W}}}{I} = \left\{ \restr{\mathbf{\tilde{H}}}{I} ;~ \mathbf{\tilde{H}} \in \tilde{\mathcal{W}} \right\} \end{equation*}
\noindent is an open neighborhood of $\restr{\mathbf{\tilde{F}}}{I}$. Therefore, if $\restr{\mathbf{\tilde{F}}}{I}$ were not structurally stable, there would exist $\restr{\mathbf{\tilde{G}}}{I} \in \restr{\tilde{\mathcal{W}}}{I}$ such that $\restr{\mathbf{\tilde{F}}}{I} \nsim \restr{\mathbf{\tilde{G}}}{I}$; from \cref{sec:stability:prop:equivalence_invariants} it would then follow that $\mathbf{\tilde{G}} \nsim \mathbf{\tilde{F}}$, implying that $\mathbf{\tilde{F}}$ would not be structurally stable. \end{proof}

Thus, from now on we will exclusively study conditions for the structural stability of $\restr{\mathbf{\tilde{F}}}{C}$.
In order to do so, remember that over $C$ acts a regular Filippov dynamics whose switching manifold is formed by the elements of
\begin{equation*} \tilde{\mathcal{I}}_C = \left\{ \Sigma_{0}, \Sigma_{\frac{\pi}{2}}, \Sigma_{\pi}, \Sigma_{\frac{3\pi}{2}} \right\}, \end{equation*}
\noindent where $\Sigma_{\theta} = \left\{ (x,\theta) ;~ x \in \mathbb{R} \right\}$. Therefore, without loss of generality with respect to the previous results, it is natural to adopt the following definitions of equivalence and stability for $C$:

\begin{defn} \label{sec:stability:defn:equivalence_cyl} Let $\mathbf{\tilde{F}}, \mathbf{\tilde{G}} \in \tilde{\mathcal{D}}^k$. We say that $\mathbf{\tilde{F}}$ and $\mathbf{\tilde{G}}$ are \textbf{$C$-topologically equivalent} and denote $\mathbf{\tilde{F}} \sim_{c} \mathbf{\tilde{G}}$ if, and only if, there exists a homeomorphism $\tilde{\varphi}: C \to C$ that keeps every $I \in \tilde{\mathcal{I}}_C$ invariant and takes orbits of $\restr{\mathbf{\tilde{F}}}{C}$ into orbits of $\restr{\mathbf{\tilde{G}}}{C}$ preserving the orientation of time. From this definition the concept of $C$-structural stability is naturally obtained.
\end{defn}

Although global and naturally derived from the regular case, $C$-structural stability as presented above is still a fairly complex property to prove; in fact, to the knowledge of the authors, characterizing it through simple conditions is an open problem and, therefore, it shall be treated in future works. However, many of the difficulties found in characterizing $C$-structural stability come from its global nature. In fact, conditions for a semi-local approach can be found in \cite{Broucke2001} and, in order to apply these results, a \emph{regular} and \emph{compact} Filippov section of the cylinder $C$ must be taken.

\begin{figure}[ht] \centering \def0.85\linewidth{0.85\linewidth} \import{tex/sec/stability/fig/}{compact.pdf_tex} \caption{Regular Filippov system $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ defined at a rectangular compact $K \subset C_{+} \cup C_{-}$ with switching manifold $\Sigma_{\theta_0}$.} \label{sec:stability:fig:compact} \end{figure}

More precisely, given $\mathbf{F} \in \mathcal{D}^k$ and two consecutive stripes $C_{+}$ and $C_{-}$ meeting at a straight line $\Sigma_{\theta_0} \in \tilde{\mathcal{I}}_C$, let $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ be the smooth vector fields induced over $C_{+} \cap K$ and $C_{-} \cap K$, respectively, as described in the previous sections, where $K \subset C_{+} \cup C_{-}$ is a rectangular compact; see \cref{sec:stability:fig:compact}. Observe that $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ is a \emph{regular} and \emph{compact} Filippov system with switching manifold $\Sigma_{\theta_0}$.
Then, a direct application of Theorem B in \cite[p.~5]{Broucke2001} and the Proposition in \cite[p.~122]{Palis1982} gives us the following result:

\begin{prop} \label{sec:stability:prop:conditions} Given $\mathbf{F} \in \mathcal{D}^k$, two consecutive stripes $C_{+}$ and $C_{-}$ and a rectangular compact $K \subset C_{+} \cup C_{-}$, the induced Filippov system $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ is structurally stable inside $K$ if, and only if, the following sets of conditions are satisfied:
\begin{enumerate}[label=(\Roman*)]
\item\label[condition]{sec:stability:conditions:ms} $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ are robustly\footnote{In other words, the property is stable under small perturbations.} Morse-Smale, i.e., they have:
\begin{enumerate}[label=(C.\arabic*)]
\item\label[condition]{sec:stability:conditions:ms_hyperbolic} finitely many critical elements\footnote{Singularities and periodic orbits.}, all hyperbolic;
\item\label[condition]{sec:stability:conditions:ms_separatrix} no saddle-connections;
\item\label[condition]{sec:stability:conditions:ms_wandering} only critical elements as non-wandering points;
\end{enumerate}
\item\label[condition]{sec:stability:conditions:sg} $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ robustly satisfy the following:
\begin{enumerate}[resume, label=(C.\arabic*)]
\item\label[condition]{sec:stability:conditions:sg_zeros} none of them vanishes at a point of $\Sigma_{\theta_0}$;
\item\label[condition]{sec:stability:conditions:sg_tangency} they are tangent to $\Sigma_{\theta_0}$ at only finitely many points, and never both tangent at the same point;
\item\label[condition]{sec:stability:conditions:sg_colinear} they are colinear at only finitely many points;
\end{enumerate}
\item\label[condition]{sec:stability:conditions:cr} $\mathbf{X}$ has:
\begin{enumerate}[resume, label=(C.\arabic*)]
\item\label[condition]{sec:stability:conditions:cr_periodic} only hyperbolic periodic orbits;
\item\label[condition]{sec:stability:conditions:cr_separatrix} no separatrix-connections or relations;
\item\label[condition]{sec:stability:conditions:cr_recurrent} only trivial recurrent orbits.
\end{enumerate}
\end{enumerate}
\end{prop}

Observe that \cref{sec:stability:conditions:ms} refers only to the usual dynamics of $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ over the smooth parts. On the other hand, \cref{sec:stability:conditions:sg} considers only the values of $\mathbf{X}_{+}$ and $\mathbf{X}_{-}$ over the switching manifold $\Sigma_{\theta_0}$. Finally, only \cref{sec:stability:conditions:cr} refers to the actual Filippov dynamics of $\mathbf{X}$. With that in mind, over the next, and final, sections we will apply \cref{sec:constant:thm:dynamics} and \cref{sec:affine:thm:dynamics} to analyse these conditions for the particular cases of constant and affine double discontinuities, respectively, and thereby derive semi-local structural stability theorems. More precisely:

\begin{defn} \label{sec:stability:defn:semilocal-stability} We say that $\mathbf{F} \in \mathcal{D}^k$ is \textbf{$(I, K)$-semi-local structurally stable} if, and only if, the induced Filippov system $\mathbf{X} = (\mathbf{X}_{-}, \mathbf{X}_{+})$ is structurally stable inside a rectangular compact $K \subset C_{+} \cup C_{-}$, where $C_{+}$ and $C_{-}$ are two consecutive stripes meeting at $I \in \tilde{\mathcal{I}}_C$.
\end{defn}

In fact, given the bifurcation described below, it is natural to study the constant and affine cases separately, since the former is always structurally unstable within the latter. More precisely:

\begin{prop} \label{sec:stability:prop:bifurcation} Every $\mathbf{F} \in \mathcal{C}$ is structurally unstable as an element of $\mathcal{A}$. \end{prop}

\begin{proof} Let $\mathbf{F} \in \mathcal{C} \subset \mathcal{D}^k$ be a piecewise smooth vector field with a double discontinuity given by constant vector fields
\begin{equation*} \mathbf{F}_i(x,y,z) = (d_{i1}, d_{i2}, d_{i3}), \end{equation*}
\noindent where $d_{ij} \in \mathbb{R}$ for all $i$ and $j$. If $d_{i1} = 0$, the slow dynamics consists entirely of singularities and instability is immediate; assume then, without loss of generality, that $d_{i1} > 0$. According to \cref{sec:constant:thm:dynamics}, over the slow manifold we have the dynamics $\dot{x} = d_{i1}$. Since $d_{i1} > 0$, this dynamics is strictly increasing and, in particular, has no singularities. However, considering $\mathbf{F}$ as an element of $\mathcal{A} \subset \mathcal{D}^k$ and perturbing $\mathbf{F}_i$ inside $\mathcal{A}$ so that $a_{i1} \neq 0$, we now have the dynamics $\dot{x} = a_{i1}x + d_{i1}$ over the slow manifold. As $d_{i1} > 0$ and $a_{i1} \neq 0$, this dynamics now has a single singularity at $x = \delta_{i}$ and, moreover, its stability on one side of the singularity is inverted when compared with the unperturbed dynamics. In other words, $\mathbf{F}$ as an element of $\mathcal{A}$ violates the robustness of condition \cref{sec:stability:conditions:ms_hyperbolic} of \cref{sec:stability:prop:conditions} and, therefore, is structurally unstable. \end{proof}

\input{tex/sec/stability/sub/constant.tex} \input{tex/sec/stability/sub/affine.tex}

\section{Conclusion}% \label{sec:conclusion}

In this paper, we presented results that solve the problems stated in \cref{sec:problem}: given $\mathbf{F}$ with a double discontinuity, can we define a Filippov-like dynamics over the intersection, $\Sigma_{x}$? How does it generally behave there? More specifically, we presented a transparent blow-up approach to this problem, resulting in~\cref{sec:framework:blowup:thm:dynamics}, which associates the previously unknown dynamics over $\Sigma_{x}$ with a discontinuous slow-fast dynamics over a cylinder. Many dynamical properties were also derived from this theorem for the general non-linear case, as long as the so-called \emph{weak fundamental hypothesis} is satisfied. Applying these results, in~\cref{sec:constant:thm:dynamics} and~\cref{sec:affine:thm:dynamics} we fully described the dynamics over this cylinder when $\mathbf{F}$ is given by constant or affine vector fields, respectively, realizing those general properties in concrete cases. With this knowledge at hand, \cite{Broucke2001,Gomide2020} inspired us to investigate the semi-local structural stability of the dynamics over the cylinder, resulting in~\cref{sec:stability:sub:constant:thm:conditions} and~\cref{sec:stability:sub:affine:thm:conditions}. The approach presented in this paper has thus proved to be quite effective in the study of the double discontinuity dynamics. In fact, it essentially transforms a singular switching manifold problem into a regular one, where many results are already known.
Regarding structural stability, since the semi-local approach, as in \cref{sec:stability:defn:semilocal-stability}, is a necessary condition for the global one, as in \cref{sec:stability:defn:equivalence_cyl}, it is reasonable to conjecture that a set of conditions for the global case would be given by \cref{sec:stability:prop:conditions}, taken from~\cite{Broucke2001}, plus an additional set of conditions similar to \cref{sec:stability:conditions:cr}, but involving the whole cylinder rather than just one of its discontinuities. Finally, concerning not only the dynamics over the cylinder but also the external one, a whole world of bifurcations and minimal sets can be studied. For instance, we have seen a slow-cycle in \cref{sec:affine:exmp:slow_cycle} and a rotating region in \cref{sec:stability:sub:affine:exmp:unstable}. How many cycles can we have over the cylinder, and what do they mean when we look back at the external dynamics? How many ``global'' cycles can we have touching the cylinder? The detailed dynamical description presented in this paper will be helpful not only for these particular tasks, but also in the unfolding of many other interesting bifurcations and chaotic behaviors.

\section{Introduction}% \label{sec:introduction}

The classical theory of Ordinary Differential Equations given by smooth vector fields has been, in a certain way, developed since the first appearance of Calculus itself. The machinery provided by this theory allows the study of phenomena all around the mathematical sciences: from classical Newtonian mechanics to modern Machine Learning~\cite{Weinan2017}. However, often for practical reasons, many of these phenomena are better approached with a non-smooth model: for instance, systems with mechanical impacts or friction, electronic switching, etc. Nonsmooth systems have thus been widely studied in the past decades, and one of the biggest steps was taken by Filippov in \cite{Filippov1988}, who provided a well-defined and reasonable dynamics for an important class of such systems: those whose non-smoothness resides on regular surfaces. Since then, many advances have been achieved for this class of systems concerning, for instance, generic bifurcations~\cite{Guardia2011}, regularization~\cite{Panazzolo2017,Sotomayor1996,Teixeira2012} and structural stability~\cite{Broucke2001,Gomide2020,Teixeira1990}, as well as countless works regarding minimal sets. Nevertheless, as previously said, the theory established by Filippov's convention has a fundamental hypothesis: a regular surface as switching manifold between the smooth parts of the system. More specifically, a surface $\Sigma = h^{-1}(\left\{0\right\})$ where $0$ is a regular value of a continuously differentiable function $h:U \to \mathbb{R}$ with $U \subset \mathbb{R}^n$ open. Many relevant phenomena, however, can be naturally approached with a model where $\Sigma$ is actually the preimage of a singular value. See, for instance, the examples and references in~\cite{Dieci2011}. An interesting class of nonsmooth systems with singular switching manifolds $\Sigma$, known as Gutierrez-Sotomayor and described in~\cite{Gutierrez1982}, is obtained when the regularity condition is broken in a dynamically stable manner.
More precisely, in order to avoid non-trivial recurrence on non-orientable manifolds, a restriction on $\Sigma$ is imposed so that its smooth part is either orientable or diffeomorphic to an open set of $\mathbb{P}^2$ (the projective plane), $\mathbb{K}^2$ (the Klein bottle) or $G^2 = \mathbb{T}^2 \# \mathbb{P}^2$ (the torus with a cross-cap). This restriction leads to four singular configurations, the simplest of which, known as the double discontinuity, is given by two planes in $\mathbb{R}^3$ intersecting along a straight line $\Sigma_{x}$. Outside this intersection, at $\Sigma \setminus \Sigma_{x}$, regular Filippov theory is applicable. However, at $\Sigma_{x}$, a priori, the dynamics is unknown. Over the last years, two main frameworks arose aiming to solve this problem. The first one, presented in~\cite{Jeffrey2014}, extends Filippov dynamics naturally through the so-called ``canopy'', a convex-like surface that intersects $\Sigma_{x}$ at finitely many points. Each of these intersections represents a possible sliding vector and, therefore, this solution predicts non-uniqueness of sliding. To deal with this lack of uniqueness, the author proposes the so-called ``dummy dynamics''. This idea has led to many interesting results, such as \cite{Jeffrey2018,Kaklamanos2019}. However, as stated in \cite[p.~1102]{Jeffrey2014}, a justification for the dummy dynamics remains an open problem. The second one, explored in this paper and first presented in~\cite{Buzzi2012}, extends Filippov dynamics to $\Sigma_{x}$ through the use of a blow-up and Geometric Singular Perturbation Theory. Although less conceptual than canopies, it is also a natural approach. In fact, as shown ahead, the non-uniqueness predicted by the canopy theory is not only recovered by this second approach but also explained and handled naturally. However, not only~\cite{Buzzi2012} but also the further works~\cite{Llibre2015,Panazzolo2017,Teixeira2012} lack a clear presentation and justification for the dynamics induced over $\Sigma_{x}$. Therefore, as one of the fundamental steps taken in this paper, after the preliminaries in~\cref{sec:preliminaries} and the detailed setting of the problem in~\cref{sec:problem}, the blow-up framework is clearly presented in \cref{sec:framework}, resulting in \cref{sec:framework:blowup:thm:dynamics} and several corollaries. Next, we use this framework to study double discontinuities given by constant and affine vector fields in \cref{sec:constant} and \cref{sec:affine}, resulting in \cref{sec:constant:thm:dynamics} and \cref{sec:affine:thm:dynamics}, respectively, which fully describe the dynamics induced over $\Sigma_{x}$ for those systems. Finally, in \cref{sec:stability}, we apply this dynamical knowledge to derive semi-local structural stability results (in the sense of \cite{Gomide2020}) about the induced dynamics over $\Sigma_{x}$, namely \cref{sec:stability:sub:constant:thm:conditions} and \cref{sec:stability:sub:affine:thm:conditions}.
\section{Introduction} \label{sec:intro}

Epidemics greatly affect Homo sapiens \cite{McN,O,Benedict}, as well as all other animals. Epidemic modeling goes back at least to Bernoulli, who modeled the spread of smallpox \cite{Ber,Ber-history}. The methods and concepts developed in this field have been applied to modeling recent human epidemics like HIV, H1N1 Swine Flu, Zika Virus, and COVID-19, and also to modeling non-biological processes like the spread of rumors or cultural fads, diffusion of innovations, computer viruses, etc. (see \cite{MT,Lud,CFL,Volovik} and references therein). One general lesson that we learn about epidemics is that they often operate close to a critical regime. Subcritical epidemics are actually the most numerous, but they quickly die out. Supercritical epidemics can kill a finite fraction of the population, thereby destroying the environment that the virus needs to thrive and multiply. Epidemics close to the critical regime are, in a sense, optimal in view of the never-ending competition between the hosts and the viruses. Furthermore, humans are now sufficiently advanced to devise and employ containment measures to suppress supercritical epidemics to critical.

Mathematical models of epidemics are over-simplified. In physics, for instance, we know that electrons are identical. In contrast, organisms are different, even twins are different. This heterogeneity is rarely taken into account, and it is far from clear how to model it in a reasonable way. In physics, we usually rely on binary and symmetric interactions. Both these features are questionable in the realm of epidemics. Other realistic features are also mostly ignored. However, the populations where the epidemics spread are usually very large, and the lore from statistical physics tells us that in large systems qualitative behaviors can be predicted even if one greatly simplifies the model. This is especially true in critical regimes.

The very concept of critical regimes comes from epidemic modeling. This concept clearly emerges from the well-known susceptible-infected-recovered (SIR) process \cite{McK,KMcK,May,Siam,Murray}, a toy model that mimics the spread of infection. According to the rules of the SIR process, infected individuals recover (become immune or die) at equal rates, and every infected individual transmits the disease to every susceptible individual with rate $R_0/N$, where $N$ is the population size. Thus each infected individual on average spreads the infection to $R_0$ individuals before recovery. Therefore the behavior of the SIR process greatly depends on whether the reproduction number $R_0$ is smaller or larger than the recovery rate, which we set to unity. When $R_0<1$, i.e., for subcritical SIR processes, outbreaks quickly end, namely just a few individuals catch the disease. For supercritical SIR processes ($R_0>1$), the outbreak may affect only a few individuals, e.g., starting from a single infected individual, the size of the outbreak is finite with probability $R_0^{-1}$. With complementary probability, $rN+O(\sqrt{N})$ individuals catch the disease before the outbreak dies out; the fraction $r=r(R_0)$ is implicitly determined by
\begin{equation} \label{rR} r+e^{-R_0 r} = 1 \end{equation}
Huge outbreaks killing finite fractions of the population continue to devastate animal species. They also used to decimate human societies \cite{McN,O,Benedict}, e.g., the Black Death killed about 50\% of the European population \cite{Benedict}.
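For concreteness, the nontrivial root of Eq.~\eqref{rR} is easily extracted numerically. The following is a minimal bisection sketch in Python (the function name and tolerances are ours, chosen only for illustration):
\begin{verbatim}
from math import exp

def outbreak_fraction(R0, iterations=200):
    """Nontrivial root r of r + exp(-R0*r) = 1, assuming R0 > 1.

    f(r) = r + exp(-R0*r) - 1 vanishes at the trivial root r = 0,
    is negative just above it (since f'(0) = 1 - R0 < 0), and is
    positive at r = 1, so bisection on (0, 1] converges to the
    nontrivial root."""
    f = lambda r: r + exp(-R0 * r) - 1.0
    lo, hi = 1e-9, 1.0
    for _ in range(iterations):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

print(outbreak_fraction(2.0))   # about 0.7968
\end{verbatim}
For instance, $R_0=2$ gives $r\approx 0.7968$: a supercritical outbreak eventually reaching roughly $80\%$ of the population.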
Preventive and containment measures such as quarantine, improved hygiene, etc., suppress supercritical infectious diseases and often drive them to a critical situation. This critical state is effectively self-organized. Indeed, suppressing the disease to the subcritical regime may be possible but costly and psychologically difficult to maintain when the number of newly infected starts to decrease exponentially. Therefore, if the outbreak is not quickly and completely eradicated, the containment measures are relaxed and the system may return to the supercritical stage; the disease again gets out of control, so the containment measures are tightened, driving the system back to the subcritical state. It would be interesting to devise a self-organized process of the spread of infection with dynamics similar to the critical SIR process. In this paper, however, we merely consider the critical SIR process with many initially infected individuals.

The SIR processes are often treated using a deterministic framework \cite{McK,KMcK,May,Siam,Murray}. This framework can be applied to the supercritical regime, where it gives, e.g., the simplest derivation of Eq.~\eqref{rR}. Stochastic effects are unavoidable, however, for the critical and subcritical SIR processes (see \cite{Bailey50,Bailey,AB,book}). When the population of susceptibles is finite, finite-size corrections become important, particularly for the critical SIR process \cite{rr,ML,bk,KS,Gordillo,Hofstad,bk12}.

In this paper, we study the critical SIR process and hence we employ stochastic methods. We consider finite populations. The population size is assumed to be large, $N\gg 1$. We focus on the situation when the initial number of infected individuals is also large: $k\gg 1$. Most epidemics start with a single infected individual. We want to describe the SIR process that has begun in the supercritical regime and exhibited exponential growth. Preventive and containment measures subsequently suppressed the reproduction number to unity. Ignoring the earlier regime yields the critical SIR process with a certain large number $k$ of initially infected individuals.

The same critical SIR process with a large number of initially infected individuals has been recently studied, using large-scale simulations and scaling arguments, by Radicchi and Bianconi \cite{Ginestra}. Our analysis relies on exact calculations and asymptotic methods. Our analytical and asymptotic predictions qualitatively agree with the simulation results \cite{Ginestra}, explain the major features, and perhaps suggest slightly different scaling fits which are a little simpler than the fits used in Ref.~\cite{Ginestra}. The chief reason for the subtle behaviors is the algebraic tails; the average size and duration of outbreaks are especially sensitive to these tails.

Considerable insight into the behavior of finite systems can be gained from the analysis of the infinite-population limit, where the critical SIR process is equivalent to the critical branching process with continuous time. This is a classical subject, but we are studying an unusual setting with an arbitrary number $k$ of initially infected individuals. We derive several exact results in this infinite-population limit. Our analysis of finite systems employs asymptotic and scaling methods.

The outline of this paper is as follows.
In Sec.~\ref{sec:CBP}, we consider the infinite-population limit, present exact results for the outbreak size distribution, and show that the exact results approach a simple scaling form in the most interesting situation, $k\gg 1$. We then analyze the critical SIR process in a population with $N\gg 1$ individuals. The size of an outbreak is a random quantity; its average and variance are studied using scaling and heuristic arguments (Sec.~\ref{sec:SIR}) and an asymptotically exact analysis (Sec.~\ref{sec:EA}). In Sec.~\ref{sec:SIR-time}, we investigate the duration of outbreaks. Several technical calculations are relegated to the Appendices~\ref{ap:det}--\ref{ap:time}.

\section{Infinite-Population Limit: Outbreak Size Distribution} \label{sec:CBP}

In the infinite-population limit, the SIR process reduces to the branching process \cite{feller,teh,athreya04,branch,vatutin}. Branching processes involve duplication and death. Symbolically,
\begin{displaymath} \xymatrix{P+P & \\ P \ar[u] \ar[r] & \emptyset} \end{displaymath}
For the critical branching process, the rates of duplication and death are equal. Branching processes have numerous applications, e.g., they mimic cell division and death; for adult organisms, the critical branching process is appropriate as the number of cells remains (approximately) constant.

We begin with the classical setting in which a single individual is initially infected. Let $A_n$ be the probability that exactly $n$ individuals catch the infection before the epidemic is over. With probability $\tfrac{1}{2}$, the initially infected individual joins the population of recovered before infecting anyone else, so $A_1=\tfrac{1}{2}$. Further, $A_2=\tfrac{1}{2}A_1^2$ since at the first step a new individual must get infected, and then both must recover without spreading the infection. Proceeding along these lines we arrive at the recurrence
\begin{equation} \label{An_rec} A_n = \frac{1}{2}\sum_{i+j=n}A_iA_j+\frac{1}{2}\,\delta_{n,1} \end{equation}
reflecting that the first infection event creates two independent infection processes \cite{teh}. A solution to \eqref{An_rec} is found by introducing the generating function
\begin{equation} \label{An_gen} \mathcal{A}(z) = \sum_{n\geq 1}A_n z^n \end{equation}
converting the recurrence \eqref{An_rec} into a quadratic equation
\begin{equation} \label{2A:eq} 2\mathcal{A} = \mathcal{A}^2 + z \end{equation}
whose solution reads
\begin{equation} \label{Az} \mathcal{A}(z) = 1 - \sqrt{1-z} \end{equation}
Expanding $\mathcal{A}(z)$ in powers of $z$ we find
\begin{equation} \label{An_sol} A_n = \frac{1}{\sqrt{4\pi}}\, \frac{\Gamma\left(n-\tfrac{1}{2}\right)}{\Gamma(n+1)} \simeq \frac{1}{\sqrt{4\pi}}\,n^{-3/2} \end{equation}
In particular, the probabilities $A_n$ are given by
\begin{equation*} \tfrac{1}{2},\tfrac{1}{8},\tfrac{1}{16},\tfrac{5}{128},\tfrac{7}{256},\tfrac{21}{1024},\tfrac{33}{2048},\tfrac{429}{32768},\tfrac{715}{65536},\tfrac{2431}{262144},\tfrac{4199}{524288} \end{equation*}
for $n=1,\dots,11$.

Generally, when the critical branching process begins with $k$ initially infected individuals, the infection processes originating from each individual are independent.
Hence the probability $A_n^{(k)}$ that exactly $n$ individuals catch the infection before the epidemic is over can be expressed via the probabilities $A_m \equiv A_m^{(1)}$ corresponding to the classical situation with one initially infected individual:
\begin{equation} \label{An-k} A_n^{(k)} = \sum_{i_1+\ldots+i_k=n}A_{i_1}\ldots A_{i_k} \end{equation}
The generating function
\begin{equation} \label{gen-k-def} \mathcal{A}^{(k)}(z) = \sum_{n\geq k}A_n^{(k)} z^n \end{equation}
is therefore
\begin{equation} \label{gen-k} \mathcal{A}^{(k)}(z) = [\mathcal{A}(z)]^k = \left\{1 - \sqrt{1-z}\right\}^k \end{equation}
This generating function encapsulates all $A_n^{(k)}$, but it is desirable to obtain a more explicit representation. Needless to say, $A_n^{(k)}=0$ when $n<k$. When $n\geq k$, one can express $A_n^{(k)}$ through the probabilities \eqref{An_sol}. Here are the first few explicit formulas:
\begin{equation*} \begin{split} &A_n^{(2)} = 2A_n\\ &A_n^{(3)} = 4A_n - A_{n-1}\\ &A_n^{(4)} =8A_n - 4A_{n-1}\\ &A_n^{(5)} =16A_n - 12A_{n-1} + A_{n-2}\\ &A_n^{(6)} =32A_n - 32A_{n-1} + 6A_{n-2}\\ &A_n^{(7)} =64A_n - 80A_{n-1} + 24A_{n-2}-A_{n-3}\\ &A_n^{(8)} =128A_n - 192A_{n-1} + 80A_{n-2}-8A_{n-3}\\ &A_n^{(9)} =256A_n - 448A_{n-1} + 240A_{n-2}-40A_{n-3}+ A_{n-4} \end{split} \end{equation*}
The general formula is
\begin{equation} \label{Ank-sum} A_n^{(k)}=\sum_{p=0}^{\lfloor \frac{k-1}{2}\rfloor} (-1)^p \binom{k-1-p}{p} 2^{k-1-2p}A_{n-p} \end{equation}
where $\lfloor x\rfloor$ denotes the largest integer $\leq x$.

\begin{figure} \centering \includegraphics[width=7.89cm]{An-k-3} \caption{Top to bottom: $\sqrt{k}\,A_n^{(k)}$ versus $n/k$ for $k=1, 16, 30$. The asymptotic behavior is $(4\pi)^{-1/2} (n/k)^{-3/2}$ in agreement with Eq.~\eqref{An-k-asymp}. } \label{Fig:An-k} \end{figure}

Plugging \eqref{An_sol} into the sum in Eq.~\eqref{Ank-sum}, one reduces the sum to a hypergeometric series \cite{Knuth}. Using identities involving hypergeometric series \cite{Knuth}, as well as identities involving the gamma function (particularly, the duplication formula), we have computed the sum in \eqref{Ank-sum} and arrived at a neat final formula
\begin{equation} \label{Ank-gamma} A_n^{(k)}=\frac{k}{2^{2n-k}}\,\frac{\Gamma(2n-k)}{ \Gamma(n+1-k)\Gamma(n+1)} \end{equation}
Note that substituting \eqref{An_sol} into the recurrence \eqref{An-k} one can directly compute
\begin{equation} \label{Akk} \begin{split} &k\geq 1: \quad A_k^{(k)} =\frac{1}{2^k} \\ &k\geq 2: \quad A_{k+1}^{(k)}=\frac{k}{2^{k+2}} \\ &k\geq 3: \quad A_{k+2}^{(k)}=\frac{k}{2^{k+3}}+ \frac{k(k-1)}{2^{k+5}} \end{split} \end{equation}
These results are recovered from the general solution \eqref{Ank-gamma}, thereby providing a consistency check. As another consistency check, we note that \eqref{Ank-gamma} agrees with the normalization
\begin{equation} \label{norm} \sum_{n\geq k} A_n^{(k)}=1 \end{equation}
The sequence $A_n^{(k)}$ has a single peak located at $n=k$ when $k\leq 4$, while for $k\geq 5$ the peak is at $n=\nu(k)>k$, see Fig.~\ref{Fig:An-k}. For $k\geq 5$, the sequence $A_n^{(k)}$ grows from $A_k^{(k)}=2^{-k}$ to the maximum
\begin{equation} \label{An-k-max} b(k):=\text{max}\{A_n^{(k)}|\, n\geq k\} \end{equation}
at $n=\nu(k)$, then decays and eventually approaches
\begin{equation} \label{An-k-asymp} A_n^{(k)}\simeq \frac{k}{\sqrt{4\pi}}\,n^{-3/2} \end{equation}
This asymptotic behavior is straightforwardly deduced from \eqref{Ank-gamma}.
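The closed form \eqref{Ank-gamma} is easily checked numerically against the convolution structure \eqref{An-k}, e.g., by verifying that $A^{(3)}$ is the convolution of $A^{(2)}$ with $A^{(1)}$. A minimal sketch in Python (the names are ours, for illustration only):
\begin{verbatim}
from math import gamma

def A(n, k):
    """Outbreak-size distribution A_n^{(k)} of the critical
    branching process, Eq. (Ank-gamma)."""
    if n < k:
        return 0.0
    return k * 2.0**(k - 2*n) * gamma(2*n - k) \
           / (gamma(n + 1 - k) * gamma(n + 1))

# Convolution check: A_n^{(3)} = sum_{i+j=n} A_i^{(2)} A_j^{(1)}
for n in range(3, 12):
    conv = sum(A(i, 2) * A(n - i, 1) for i in range(2, n))
    assert abs(A(n, 3) - conv) < 1e-12

print(A(1, 1), A(5, 5))   # 1/2 and 2**-5, matching A_1 and A_k^{(k)}
\end{verbatim}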
Using the general solution \eqref{Ank-gamma} together with the Stirling formula, one can establish the behaviors of $b(k)$ and $\nu(k)$ in the $k\to\infty$ limit. One gets
\begin{equation} \label{nu-a} \nu(k)\simeq \tfrac{1}{6}k^2, \qquad b(k)\simeq B k^{-2} \end{equation}
with
\begin{equation} \label{C:def} B=3e^{-3/2}\sqrt{\frac{6}{\pi}}= 0.92508197882\ldots \end{equation}
The general solution \eqref{Ank-gamma} approaches the scaling form
\begin{equation} \label{Ank-scaling} A_n^{(k)}=\frac{4}{k^2}\,\Phi(\mu), \quad \Phi(\mu)=\pi^{-1/2} \mu^{-3/2}\,e^{-1/\mu} \end{equation}
in the scaling limit
\begin{equation} \label{mu-scaling} n\to\infty, \quad k\to \infty, \quad \mu=\frac{4n}{k^2}= \text{finite} \end{equation}

\begin{figure} \centering \includegraphics[width=7.89cm]{Phi-mu} \caption{The scaled distribution $\Phi(\mu)$ versus the scaled outbreak size $\mu$ given by Eq.~\eqref{Ank-scaling}. } \label{Fig:Phi} \end{figure}

The scaled distribution $\Phi(\mu)$ has a single peak (Fig.~\ref{Fig:Phi}) and vanishes faster than any power of $\mu$ when $\mu\to 0$. We also note that from Eqs.~\eqref{Ank-scaling}--\eqref{mu-scaling} one easily recovers \eqref{nu-a}--\eqref{C:def}.

Due to the algebraic tail \eqref{An-k-asymp}, the moments defined by $\langle n^a\rangle = \sum_{n\geq k} n^a A_n^{(k)}$ diverge when $a\geq \frac{1}{2}$. To perform the summation in the $a<\frac{1}{2}$ range where the moments converge, it is convenient to consider modified moments with $n^a$ replaced by $\Gamma(n+1)/\Gamma(n+1-a)$. Such moments admit an analytical expression
\begin{equation} \label{moments-a-k} \sum_{n\geq k} \frac{\Gamma(n+1)}{\Gamma(n+1-a)}\, A_n^{(k)}=\frac{\Gamma(1-2a)}{\Gamma(1-a)}\, \frac{\Gamma(1+k)}{\Gamma(1-2a+k)} \end{equation}
The identities used in computing the sum in \eqref{moments-a-k} are standard; they can be found, e.g., in Ref.~\cite{Knuth}. When $k\gg 1$, the modified moments are asymptotically equal to the regular moments since $\frac{\Gamma(n+1)}{\Gamma(n+1-a)}\to n^a$ for $n\geq k\gg 1$, so Eq.~\eqref{moments-a-k} simplifies to
\begin{equation} \label{mom-a-k} \langle n^a\rangle=\sum_{n\geq k} n^a A_n^{(k)}\simeq \frac{\Gamma(1-2a)}{\Gamma(1-a)}\, k^{2a} \end{equation}
The moments $\langle n^a\rangle$ remain finite for finite populations, but diverge as $N\to\infty$ if $a\geq \frac{1}{2}$. This divergent leading behavior of the moments in finite populations is computed in the next section; in the simplest case of $k=1$, it is given by Eq.~\eqref{mom-a-k-N}.

\section{Outbreak Size Distribution: Scaling Analysis} \label{sec:SIR}

Consider the critical SIR process with $k$ initially infected individuals, but in a finite population. Denote by $N$ the size of the population and by $A_n(k; N)$ the probability that the size of the outbreak is $n$. In the infinite-population limit, $A_n(k; \infty)\equiv A_n^{(k)}$ is given by \eqref{Ank-gamma}; it approaches the scaling form \eqref{Ank-scaling}--\eqref{mu-scaling} when $k\gg 1$.

The probability distribution $A_n(N)\equiv A_n(1; N)$ for the critical SIR process starting with a single initially infected individual has been previously investigated in Refs.~\cite{bk,KS,Gordillo,Hofstad,bk12}. The probability distribution $A_n(N)$ acquires a scaling form
\begin{equation} \label{AAF} A_n(N)= A_n\, F(\nu) \end{equation}
in the scaling limit
\begin{equation} \label{scaling} n\to\infty, ~~N\to\infty, ~~\nu=\frac{n}{N^{2/3}}=\text{finite} \end{equation}
This scaling was proposed and numerically supported in \cite{bk}.
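This scaling is also straightforward to probe by direct simulation. A minimal Monte Carlo sketch in Python (our notation; the transition probabilities are those of the embedded jump chain of the critical rates, derived in Sec.~\ref{sec:EA} below):
\begin{verbatim}
import random

def outbreak_size(k, N):
    """One realization of the outbreak size for the critical SIR
    process: i infected, x = N - s individuals ever infected.
    The next event is an infection with probability (N-x)/(2N-x)
    and a recovery otherwise."""
    i, x = k, k
    while i > 0:
        if random.random() < (N - x) / (2*N - x):
            i, x = i + 1, x + 1   # infection
        else:
            i -= 1                # recovery
    return x

# The average outbreak size for k = 1 grows as C_1 N^(1/3):
N, runs = 10**4, 10**5
est = sum(outbreak_size(1, N) for _ in range(runs)) / runs
print(est, 1.4528 * N**(1/3))   # the two values should be comparable
\end{verbatim}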
Kessler and Shnerb \cite{KS} derived the scaling function $F(\nu)$. The moments $\langle n^a\rangle$, which diverge in the infinite-population limit when $a\geq \frac{1}{2}$, are now finite. In this range, the scaling \eqref{AAF}--\eqref{scaling} implies the following leading asymptotic behavior
\begin{equation} \label{mom-a-k-N} \langle n^a\rangle\simeq \begin{cases} (9\pi)^{-1/2}\ln N & a = \frac{1}{2}\\ C_a N^\frac{2a-1}{3} & a > \frac{1}{2} \end{cases} \end{equation}
with
\begin{equation} \label{Ca} C_a = \int_0^\infty \frac{d\nu}{\sqrt{4 \pi}}\,\nu^{a-3/2}F(\nu) \end{equation}
The two most important cumulants are the average $\mathbb{E}_k(N)=\langle n\rangle$ and the variance $\mathbb{V}_k(N)=\langle n^2\rangle- \langle n\rangle^2$. When $k=1$, these cumulants exhibit the following growth with the population size
\begin{subequations} \begin{equation} \label{EV:CC} \mathbb{E}_1(N) \simeq C_1 N^{1/3}, \qquad \mathbb{V}_1(N) \simeq C_2 N \end{equation}
The amplitudes can be (numerically) computed using the exact expression \cite{KS} for the scaling function to give
\begin{equation} \label{CC:12} C_1=1.4528\ldots, \quad C_2\approx 3.99 \end{equation} \end{subequations}

In the general case, the distribution $A_n(k,N)$ depends on three variables. The interesting range is
\begin{equation} \label{nNkN} n\sim N^{2/3}, \quad k\sim N^{1/3} \end{equation}
The first scaling in \eqref{nNkN} follows from \eqref{scaling}; the second is an outcome of \eqref{mu-scaling} and \eqref{scaling}. In the scaling region \eqref{nNkN}, the distribution $A_n(k,N)$ is expected to acquire a scaling form
\begin{equation} \label{ANG} A_n(k,N) = N^{-2/3}\, G(\kappa,\nu) \end{equation}
with $\nu=n/N^{2/3}$, see Eq.~\eqref{scaling}, and the scaled initial number of infected individuals $\kappa=k/N^{1/3}$. More precisely, \eqref{ANG} should hold in the scaling limit \eqref{scaling} and
\begin{equation} \label{kN-scaling} k\to\infty, ~~N\to\infty, ~~\kappa=\frac{k}{N^{1/3}}=\text{finite} \end{equation}
The normalization condition, $\sum_{n\geq k}A_n(k,N)=1$, gives $\int_0^\infty d\nu\, G(\kappa,\nu)=1$ and explains the pre-factor in \eqref{ANG}.

The two-variable distribution $G(\kappa,\nu)$ is unknown, so let us discuss the average outbreak size, a basic quantity with simpler scaling behavior. The average outbreak size $\mathbb{E}_k(N)$ depends on two variables and its conjectural scaling behavior is
\begin{equation} \label{av-scaling} \mathbb{E}_k(N) = N^{2/3}\Psi(\kappa) \end{equation}
The two scaling behaviors, \eqref{ANG} and \eqref{av-scaling}, are compatible if $\Psi(\kappa) = \int_0^\infty d\nu\, \nu G(\kappa,\nu)$. We now show that the scaled distribution $\Psi(\kappa)$ has simple extremal behaviors
\begin{equation} \label{kappa-extreme} \Psi(\kappa) = \begin{cases} C_1\kappa &\text{when}\quad \kappa\ll 1\\ \sqrt{2\kappa} &\text{when}\quad \kappa\gg 1 \end{cases} \end{equation}
with $C_1$ appearing in \eqref{EV:CC}--\eqref{CC:12}. To establish the small-$\kappa$ asymptotic, we note that in the $k\ll N^{1/3}$ regime the infectious processes generated by each initially infected individual are mutually independent. Thus
\begin{equation} \label{NkN-asymp} \mathbb{E}_k(N) \simeq C_1\, k\, N^{1/3} \qquad\text{when}\quad k\ll N^{1/3} \end{equation}
Comparing \eqref{av-scaling} and \eqref{NkN-asymp} we obtain $\Psi(\kappa)\simeq C_1\kappa$ as $\kappa\to 0$, as asserted in \eqref{kappa-extreme}. To establish the large-$\kappa$ behavior, we first mention that $\mathbb{E}_N(N) = N$.
This suggests that $\Psi(N^{2/3}) \sim N^{1/3}$, from which $\Psi(\kappa)\sim \sqrt{\kappa}$ as $\kappa\to \infty$. This argument is heuristic. In Sec.~\ref{sec:EA}, we derive the large $\kappa$ asymptotic asserted in \eqref{kappa-extreme}. In Appendix \ref{ap:det}, we present an elementary derivation based on the observation that the behavior in the $\kappa\to \infty$ limit is essentially deterministic. The deterministic analysis is significantly simpler than the exact approach, but it cannot be used to study fluctuations; using the exact approach we also compute the variance (Sec.~\ref{sec:EA}). Simulation results \cite{Ginestra} are in reasonably good agreement with the (conjectural) scaling behavior \eqref{kN-scaling}--\eqref{av-scaling}. The numerical data \cite{Ginestra} agree well with \eqref{kappa-extreme} in the small $\kappa$ limit: $\Psi(\kappa) = C_1\kappa$ with $C_1\approx 1.5$ in simulations, while the analytical prediction is $C_1=1.4528\ldots$. In the large $\kappa$ limit, numerical data in \cite{Ginestra} were fitted to $\sqrt{\kappa}$ up to a logarithmic correction. We now turn to the variance. For sufficiently small $k$, we rely again on the mutual independence of the $k$ infectious processes to deduce
\begin{equation} \label{Var-asymp} \mathbb{V}_k(N) \simeq C_2\, k\, N \qquad\text{when}\quad k\ll N^{1/3} \end{equation}
The scaling region is given by \eqref{kN-scaling}. The natural scaling behavior of the variance compatible with \eqref{Var-asymp} is
\begin{equation} \label{var-scaling} \mathbb{V}_k(N) = N^{4/3}\Psi_2(\kappa) \end{equation}
where $\Psi_2(\kappa)\simeq C_2\kappa$ when $\kappa\to 0$. The asymptotically exact behavior in the complementary $\kappa\to \infty$ limit is established in Sec.~\ref{sec:EA}. Summarizing,
\begin{equation} \label{var-extreme} \Psi_2(\kappa) \simeq \begin{cases} C_2\kappa &\text{when}\quad \kappa\ll 1\\ \sqrt{2/\kappa} &\text{when}\quad \kappa\gg 1 \end{cases} \end{equation}
Simulation results \cite{Ginestra} support an algebraic decay in the $\kappa\to \infty$ limit: $\Psi_2\sim \kappa^{-\gamma}$. The numerical uncertainty \cite{Ginestra} in the magnitude of the exponent $\gamma$ is rather significant, $\gamma=0.75\pm 0.15$.
\section{Outbreak Size Distribution: Exact Treatment} \label{sec:EA}
The critical SIR process admits an exact treatment. Denote by $s$, $i$ and $r$ the numbers of susceptible, infected and recovered individuals. The total population is constant:
\begin{equation} \label{all} s + i + r = N \end{equation}
Due to the constraint \eqref{all}, the state of the process can be described by any pair of the variables $s, i, r$. We choose $(i,x)$ with $x=N-s$. For the critical SIR process, in the interesting regime $s$ is close to $N$, viz. $N-s\ll N$, and hence $x$ is a more convenient variable than $s$. The constraint \eqref{all} shows that $x=i+r$, so $x\geq i$. Infection and recovery events are symbolically
\begin{subequations} \begin{align} \label{inf} & (i,x) \to (i+1,x+1) ~\quad \text{rate} ~~ i(N-x)/N\\ \label{rec} & (i,x) \to (i-1,x) \quad\qquad \text{rate} ~~ i \end{align} \end{subequations}
Denote by $t(i,x)$ the number of transitions from the state $(i,x)$ to termination. We are mostly interested in $t(k,k)$, i.e., starting with $k$ infected and no recovered individuals. The process terminates at some state $(0,n)$, where $n$ is the size of the outbreak. The rules \eqref{inf} and \eqref{rec} show that the quantity $i-2x$ decreases by 1 in each transition.
Thus starting at $(i,x)=(k,k)$ gives $i-2x=-k-T$ after $T$ transitions, and in particular
\begin{equation} \label{n:tkk} n=\frac{k+t(k,k)}{2} \end{equation}
The rates of the processes \eqref{inf} and \eqref{rec} imply that they occur with probabilities
\begin{equation} p_+(x)=\frac{N-x}{2N-x}\,, \quad p_-(x)=\frac{N}{2N-x} \end{equation}
The stochastic number of transitions $t(i,x)$ evolves according to the rules
\begin{equation} \label{tix:rules} t(i,x) = \begin{cases} 1+t(i+1,x+1) & \text{prob} ~p_+(x)\\ 1+t(i-1,x) & \text{prob} ~p_-(x) \end{cases} \end{equation}
\subsection{Average number of transitions}
Averaging \eqref{tix:rules} we find that $T_1(i,x)=\langle t(i,x)\rangle$ satisfies
\begin{equation} \label{T1:eq} T_1 = 1 + p_+T_1(i+1,x+1)+ p_-T_1(i-1,x) \end{equation}
To avoid cluttering the formulae, we write $p_\pm \equiv p_\pm(x)$ and $T_1\equiv T_1(i,x)$ when there is no confusion. The recurrence \eqref{T1:eq} should be solved subject to the boundary condition
\begin{equation} \label{BC:cat} T_1(0,x)=0 \end{equation}
The boundary-value problem \eqref{T1:eq}--\eqref{BC:cat} admits an exact solution
\begin{eqnarray} \label{T1:exact} T_1 &=& i +2(N-x)\nonumber\\ & - & \sum_{j=1}^{N-x} \left(\frac{N}{N+j}\right)^{i+N-x-j} B_{j}^{(N-x)}(N) \end{eqnarray}
with $B_{j}^{(p)}(N)$ determined recurrently from
\begin{equation} \begin{split} \label{Bjp} & B_{j}^{(p)}(N) = \frac{p}{p-j}\,B_{j}^{(p-1)}(N) \,, \quad j=1,\ldots,p-1\\ & B_{p}^{(p)}(N) = 2p - \sum_{j=1}^{p-1} \left(\frac{N}{N+j}\right)^{p-j} B_{j}^{(p)}(N) \end{split} \end{equation}
One can verify by direct substitution that Eq.~\eqref{T1:exact} with amplitudes determined from the recurrence relations \eqref{Bjp} satisfies \eqref{T1:eq}--\eqref{BC:cat}. In Appendix \ref{ap:exact}, we show that the ansatz \eqref{T1:exact} is rather natural. Specializing \eqref{T1:exact} to $i=x=k$ we obtain
\begin{eqnarray} \label{Tkk} T_1(k,k) &=& 2N -k \nonumber\\ & - & \sum_{j=1}^{N-k} \left(\frac{N}{N+j}\right)^{N-j} B_{j}^{(N-k)}(N) \end{eqnarray}
from which the average size of an outbreak is
\begin{equation} \label{av:outbreak} \mathbb{E}_k(N)=N-\frac{1}{2}\sum_{j=1}^{N-k} \left(\frac{N}{N+j}\right)^{N-j} B_{j}^{(N-k)}(N) \end{equation}
Thus, we have found an exact formula \eqref{av:outbreak} for the average size of the outbreak. We have not succeeded in extracting an asymptotic behavior of the sum in \eqref{av:outbreak}. There are two technical challenges. First, the amplitudes, which are in principle known from the recurrence relations \eqref{Bjp}, are unwieldy. Second, the sum involves $N-k$ terms. One can find compact exact results when $N-k=O(1)$, see Appendix \ref{ap:exact}. In the interesting range $k=O(N^{1/3})$, however, the number of terms in the sum is huge. Another line of attack on discrete problems relies on continuum methods. We assume that $T_1(i,x)$ is a smooth function of $i$ and $x$, and we expand $T_1(i+1,x+1)$ and $T_1(i-1,x)$ appearing in \eqref{T1:eq} up to the second order
\begin{equation} \label{T1:exp} \begin{split} T_1(i-1,x) &= T_1 - \partial_i T_1 + \tfrac{1}{2}\partial_i^2 T_1 \\ T_1(i+1,x+1) &= T_1 + \partial_i T_1 + \partial_x T_1 \\ &+ \tfrac{1}{2}\partial_i^2 T_1 + \tfrac{1}{2}\partial_x^2 T_1 + \partial_i \partial_x T_1 \end{split} \end{equation}
Here $\partial_i=\partial/\partial i$, $\partial_x=\partial/\partial x$, etc. denote partial derivatives.
Inserting \eqref{T1:exp} into \eqref{T1:eq} and keeping only dominant terms we obtain
\begin{equation} \label{t:ix} \partial_i^2 T_1 + \partial_x T_1 + 2 = \frac{x}{N}\,\partial_i T_1 \end{equation}
Suppose that the scaling in the interesting range is
\begin{equation} \label{abc} i\sim N^\alpha, \quad x\sim N^\beta, \quad T_1\sim N^\gamma \end{equation}
Plugging \eqref{abc} into \eqref{t:ix} we find that the terms in \eqref{t:ix} are comparable only when $\alpha=\frac{1}{3}$ and $\beta=\gamma=\frac{2}{3}$. Thus we re-scale the variables
\begin{equation} \label{IX} i = N^{1/3} I, \quad x = N^{2/3} X \end{equation}
and the average number of transitions
\begin{equation} \label{TIX:def} T_1(i,x) = N^{2/3} \mathcal{T}(I, X) \end{equation}
One can verify that terms not included in \eqref{t:ix} are subdominant. For instance, computing the second derivatives gives $\partial_i \partial_x T_1=O(N^{-1/3})$ and $\partial_x^2 T_1=O(N^{-2/3})$, so these derivatives can indeed be dropped. The transformation \eqref{IX}--\eqref{TIX:def} turns \eqref{t:ix} into a partial differential equation (PDE)
\begin{equation} \label{TIX} \frac{\partial^2 \mathcal{T}}{\partial I^2} + \frac{\partial \mathcal{T}}{\partial X} + 2= X\frac{\partial \mathcal{T}}{\partial I} \end{equation}
for the re-scaled transition time $\mathcal{T}(I, X)$. We must solve \eqref{TIX} in the quadrant $I\geq 0$ and $X\geq 0$. The boundary condition \eqref{BC:cat} yields
\begin{equation} \label{BC:cont} \mathcal{T}(0, X) = 0 \end{equation}
Solving the boundary-value problem \eqref{TIX}--\eqref{BC:cont} is an intriguing challenge that we leave for the future. Here we limit ourselves to the simpler problem of computing the asymptotic behavior of $T_1(k,k)$ when $k\gg N^{1/3}$. Within the framework \eqref{IX}--\eqref{BC:cont}, we should learn how to extract the large $I$ behavior of $\mathcal{T}(I, X)$. This can be done by noting that when $I\gg 1$, the diffusion term can be dropped from Eq.~\eqref{TIX}. Thus we arrive at the first order PDE
\begin{equation} \label{PDE} \frac{\partial \mathcal{T}}{\partial I} - \frac{1}{X}\,\frac{\partial \mathcal{T}}{\partial X} = \frac{2}{X} \end{equation}
Introducing new variables
\begin{equation} \label{uv} u = I + \tfrac{1}{2}X^2\,, \qquad v = I - \tfrac{1}{2}X^2 \end{equation}
we recast \eqref{PDE} into
\begin{equation} \label{PDE:uv} \frac{\partial \mathcal{T}}{\partial v} = \frac{1}{\sqrt{u-v}} \end{equation}
The solution is
\begin{equation*} \mathcal{T}=-2\sqrt{u-v}+f(u) \end{equation*}
with an arbitrary function $f(u)$. The boundary condition \eqref{BC:cont} gives $\mathcal{T} = 0$ when $v=-u$. This fixes $f(u)=2\sqrt{2u}$. Combining $\mathcal{T}= 2\sqrt{2u}-2\sqrt{u-v}$ and \eqref{uv} we obtain
\begin{equation} \label{far} \mathcal{T}(I, X) = 2\sqrt{2I+X^2}-2X \end{equation}
We want to determine $T_1(k,k) = N^{2/3} \mathcal{T}(\kappa, N^{-1/3}\kappa)$. Thus $I=\kappa\gg 1$ and $X=0$ as we always consider large populations, $N\gg 1$. More precisely, setting $X=0$ amounts to the tacit assumption $\kappa\ll N^{1/3}$. Summarizing, our asymptotic results are valid in the range
\begin{equation} \label{k-bounds} N^{1/3}\ll k \ll N^{2/3} \end{equation}
The upper and lower bounds are well separated when $N^{1/3}\gg 1$. This is satisfied for large populations, yet the convergence may be slow as the effective small parameter is $N^{-1/3}$. Thus $T_1(k,k) = N^{2/3} \mathcal{T}(\kappa, 0)$ when the bounds \eqref{k-bounds} are obeyed.
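As an aside, the exact result \eqref{Tkk} and the continuum asymptotic are easy to test against each other numerically. The Python sketch below (our own illustration, not part of the derivation) evaluates the recurrence \eqref{Bjp} in exact rational arithmetic, which is practical only for small $N$, and compares the resulting $T_1(k,k)$ with a direct Monte Carlo simulation of the rules \eqref{inf}--\eqref{rec} and with the continuum prediction $N^{2/3}\mathcal{T}(\kappa,0)$ from \eqref{far}; at such small $N$ the continuum value is only a rough guide.
\begin{verbatim}
from fractions import Fraction
import random

def T1_exact(k, N):
    # T_1(k,k) from Eqs. (Tkk) and (Bjp), exact rational arithmetic
    P = N - k
    B = [None] * (P + 1)              # B[j] holds B_j^{(p)} at the current p
    for p in range(1, P + 1):
        for j in range(1, p):         # B_j^{(p)} = p/(p-j) B_j^{(p-1)}
            B[j] = Fraction(p, p - j) * B[j]
        B[p] = 2 * p - sum(Fraction(N, N + j)**(p - j) * B[j]
                           for j in range(1, p))
    S = sum(Fraction(N, N + j)**(N - j) * B[j] for j in range(1, P + 1))
    return 2 * N - k - S

def T1_monte_carlo(k, N, trials=20000, seed=7):
    random.seed(seed)
    total = 0
    for _ in range(trials):
        i, x, t = k, k, 0
        while i > 0:                  # infection with prob p_+(x), else recovery
            if random.random() < (N - x) / (2 * N - x):
                i += 1; x += 1
            else:
                i -= 1
            t += 1
        total += t
    return total / trials

N, k = 30, 5
kappa = k / N**(1 / 3)
print(float(T1_exact(k, N)))               # exact
print(T1_monte_carlo(k, N))                # simulation
print(N**(2 / 3) * 2 * (2 * kappa)**0.5)   # continuum, Eq. (far) at X = 0
\end{verbatim}
A useful sanity check is the extreme case $k=N$: the recurrence gives $T_1(N,N)=N$, as it must, since all $N$ infected individuals recover in exactly $N$ transitions.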
Using Eq.~\eqref{far} we get $T_1(k,k) = \sqrt{8kN}$ which we insert into $\mathbb{E}_k(N)=[k+T_1(k,k)]/2$ obtained after averaging Eq.~\eqref{n:tkk}. Keeping only the leading term gives $\mathbb{E}_k(N) = \sqrt{2kN}$. This completes the derivation of the large $\kappa$ behavior announced in Eq.~\eqref{kappa-extreme}.
\subsection{Variance}
Taking the square of Eq.~\eqref{tix:rules} and averaging we find that $T_2(i,x)=\langle t^2(i,x)\rangle$ satisfies
\begin{eqnarray} \label{T2:eq} T_2 &=& 1 + p_+T_2(i+1,x+1)+ p_- T_2(i-1,x) \nonumber\\ &+& 2 p_+ T_1(i+1,x+1)+ 2p_- T_1(i-1,x) \end{eqnarray}
As before, we write $p_\pm \equiv p_\pm(x)$ and $T_2\equiv T_2(i,x)$ for brevity. We now subtract the square of Eq.~\eqref{T1:eq} from Eq.~\eqref{T2:eq} and find that the variance
\begin{equation} V(i,x) = \langle t^2(i,x)\rangle - \langle t(i,x)\rangle^2 \end{equation}
satisfies
\begin{eqnarray} \label{V:eq} V &=&p_+V(i+1,x+1)+ p_- V(i-1,x) \nonumber\\ &+& p_+ p_- [T_1(i+1,x+1) - T_1(i-1,x)]^2 \end{eqnarray}
Similarly to \eqref{T1:exp}, we expand the variance
\begin{equation} \label{V:exp} \begin{split} V(i-1,x) &= V - \partial_i V + \tfrac{1}{2}\partial_i^2 V \\ V(i+1,x+1) &= V + \partial_i V + \partial_x V \\ &+ \tfrac{1}{2}\partial_i^2 V + \tfrac{1}{2}\partial_x^2 V + \partial_i \partial_x V \end{split} \end{equation}
Plugging the expansions \eqref{T1:exp} and \eqref{V:exp} into \eqref{V:eq} and keeping only dominant terms we obtain
\begin{equation} \label{V:ix} \partial_i^2 V + \partial_x V -\frac{x}{N}\, \partial_i V + 2(\partial_i T_1)^2 = 0 \end{equation}
We use the same rescaled variables \eqref{IX} as before, and seek the variance in the scaling form
\begin{equation} \label{VIX:def} V(i,x) = N^{4/3} \mathcal{V}(I, X) \end{equation}
The transformation \eqref{IX} and \eqref{VIX:def} turns \eqref{V:ix} into
\begin{equation} \label{VIX} \frac{\partial^2 \mathcal{V}}{\partial I^2} + 2 \left(\frac{\partial \mathcal{T}}{\partial I}\right)^2= X\frac{\partial \mathcal{V}}{\partial I} - \frac{\partial \mathcal{V}}{\partial X} \end{equation}
When $I\gg 1$, we can again drop the diffusion term from \eqref{VIX}. We also use the asymptotic expression \eqref{far} for $\mathcal{T}(I,X)$ and arrive at the first order PDE
\begin{equation} \label{VIX:1} X\frac{\partial \mathcal{V}}{\partial I} - \frac{\partial \mathcal{V}}{\partial X} = \frac{8}{2I+X^2} \end{equation}
Using the variables \eqref{uv} we recast \eqref{VIX:1} into
\begin{equation} \label{PDE:V} \frac{\partial \mathcal{V}}{\partial v} = \frac{2}{u\sqrt{u-v}} \end{equation}
which is integrated to give $\mathcal{V}=4u^{-1}\left[\sqrt{2u}-\sqrt{u-v}\right]$, or
\begin{equation} \label{far:V} \mathcal{V}(I, X) = 8\frac{\sqrt{2I+X^2}-X}{2I+X^2} \end{equation}
We again set $I=\kappa$ and $X=0$ in \eqref{far:V} and arrive at the asymptotic $\mathcal{V}(\kappa, 0) = 4 \sqrt{2/\kappa}$ when $\kappa\gg 1$. Using Eq.~\eqref{n:tkk} we find $\mathbb{V}_k(N)=N^{4/3}\,\tfrac{1}{4}\mathcal{V}(\kappa, 0)$, which is indeed the large $\kappa$ behavior announced in Eq.~\eqref{var-extreme}.
\section{Duration of Outbreaks} \label{sec:SIR-time}
Some basic features of the duration of the outbreaks in the critical SIR process in a finite system can be extracted from the temporal behaviors in the infinite-population limit. We first recall these infinite-population results in the simplest case with one initially infected individual \cite{Bailey,bk12}.
The probability $P_i(t)$ to have $i$ infected individuals at time $t$ satisfies
\begin{subequations} \begin{align} \label{Pi:eq} \dot P_i & = (i-1)P_{i-1}-2iP_i+(i+1)P_{i+1}, \quad i\geq 1\\ \label{P0:eq} \dot P_0 & = P_1 \end{align} \end{subequations}
where the dot denotes the derivative with respect to time. The solution of the infinite set of equations \eqref{Pi:eq}--\eqref{P0:eq} subject to the initial condition $P_i(0)=\delta_{i,1}$ reads
\begin{subequations} \begin{align} \label{Pi:sol} P_i(t) &=(1+t)^{-2}\, \tau^{i-1}, \quad i\geq 1\\ \label{P0:sol} P_0(t)&= \tau \equiv \frac{t}{1+t} \end{align} \end{subequations}
This solution can be verified by direct substitution, or derived using, e.g., generating function techniques [see Appendix \ref{ap:time} for details]. The probability that the outbreak is still alive at time $t$ is
\begin{equation} \label{s} P(t)=\sum_{i\geq 1} P_i(t)=1-P_0(t)=\frac{1}{1+t} \end{equation}
The average number of infected individuals in outbreaks which are still alive at time $t$ is therefore
\begin{equation} \label{iav} \langle i\rangle=\frac{\sum iP_i(t)}{\sum P_i(t)}=1+t \end{equation}
A general solution of Eqs.~\eqref{Pi:eq}--\eqref{P0:eq} describing an infinite-population limit subject to an arbitrary number of initially infected individuals is somewhat cumbersome; it is presented in Appendix \ref{ap:time}. In a finite population, the infection process eventually comes to an end. To estimate this final time $t_\text{f}$ heuristically one uses \eqref{iav} to express \cite{bk} the final size of the outbreak through the final time: $n_\text{f} \sim\int^{t_\text{f}} dt\,\langle i\rangle \sim t_\text{f}^2$. The maximal outbreak size scales as $n_*\sim N^{2/3}$ (see e.g. \cite{bk,KS,bk12}) and hence the maximal duration is
\begin{equation} \label{TN3} t_* \sim n_*^{1/2} \sim N^{1/3} \end{equation}
The average duration of the outbreak is formally
\begin{equation} \label{time-av} \mathbb{E}[t] = \int_0^\infty dt\,t\left(-\frac{dP}{dt}\right) = \int_0^\infty dt\,t P_1(t) \end{equation}
Recalling that $P_1(t)=(1+t)^{-2}$ in the infinite-population limit (equivalently, for the critical branching process), we notice that the integral in \eqref{time-av} diverges. We should use, however, the finite upper limit given by \eqref{TN3}. This leads to an estimate
\begin{equation} \label{time-av-log} \mathbb{E}_1[t] \simeq \int_0^{\sqrt[3]{N}} dt\,\frac{t}{(1+t)^2} \simeq \frac{1}{3}\,\ln N \end{equation}
where the subscript indicates that the process begins with a single infected individual. The logarithmic growth of the average duration time was predicted by Ridler-Rowe many years ago \cite{rr}, albeit with an incorrect amplitude; the correct amplitude $1/3$, easy to appreciate \cite{caveat}, was argued and numerically supported in \cite{bk,bk12}. The above argument also suggests the more precise asymptotic
\begin{equation} \label{time-av-log-1} \mathbb{E}_1[t] = \tfrac{1}{3}\ln N + c_1 + o(1/N) \end{equation}
Since the logarithm is a slowly growing function, the sub-leading constant term $c_1$ significantly contributes to the average duration. The computation of the sub-leading term requires a much more comprehensive analysis than what we have used so far.
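Both the solution \eqref{Pi:sol}--\eqref{P0:sol} and the logarithmic estimate \eqref{time-av-log} are easy to confirm numerically. The Python sketch below (added purely as an illustration; the truncation level and times are arbitrary choices) integrates a truncated version of the master equation \eqref{Pi:eq}--\eqref{P0:eq} and evaluates the integral in \eqref{time-av-log}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad, solve_ivp

I_MAX = 400                     # truncate at i = I_MAX (ample for t <= 5)
idx = np.arange(1, I_MAX)

def rhs(t, P):
    # master equation, Eqs. (Pi:eq)-(P0:eq); P[i] approximates P_i(t)
    dP = np.empty_like(P)
    dP[0] = P[1]
    dP[idx] = (idx - 1)*P[idx - 1] - 2*idx*P[idx] + (idx + 1)*P[idx + 1]
    dP[I_MAX] = (I_MAX - 1)*P[I_MAX - 1] - 2*I_MAX*P[I_MAX]
    return dP

P0 = np.zeros(I_MAX + 1); P0[1] = 1.0        # P_i(0) = delta_{i,1}
sol = solve_ivp(rhs, (0, 5), P0, rtol=1e-9, atol=1e-12)
t = sol.t[-1]; tau = t / (1 + t)
exact = np.array([tau] + [(1 + t)**-2 * tau**(i - 1)
                          for i in range(1, I_MAX + 1)])
print(np.abs(sol.y[:, -1] - exact).max())    # small: truncation + ODE error

# estimate (time-av-log): the ratio to (1/3) ln N approaches 1 only slowly
for N in [1e6, 1e12, 1e18]:
    val, _ = quad(lambda s: s / (1 + s)**2, 0, N**(1/3), limit=200)
    print(N, val, np.log(N) / 3)
\end{verbatim}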
If the number of initially infected individuals is sufficiently small, $k\ll N^{1/3}$, one can generalize the prediction \eqref{time-av-log} for the average duration of an outbreak without using a complete solution of the infinite-population limit (Appendix \ref{ap:time}); it suffices to rely on the independence of the infection processes generated by each initially infected individual. The probability that the infection is over at time $t$ is $P_0^k$, with $P_0$ given by \eqref{P0:sol}. Thus $dP_0^k/dt=kP_0^{k-1}/(1+t)^2$ is the probability density that the infection is eradicated at time $t$, from which
\begin{equation} \label{time-av-log-k} \mathbb{E}_k[t] \simeq \int_0^{\sqrt[3]{N}} dt\,\frac{kt^k}{(1+t)^{k+1}} \simeq \frac{k}{3}\,\ln N \end{equation}
implying that the average duration of the outbreak exhibits a simple logarithmic scaling with amplitude proportional to the initial number of infected individuals. To guess the scaling form we notice that \eqref{time-av-log-k} can be re-written as $\mathbb{E}_k[t] \simeq k\, \ln(N^{1/3} k^{-1})$ with the same leading accuracy. Thus one plausible scaling form is
\begin{equation} \label{av-time-scaling} \mathbb{E}_k[t] = N^{1/3}\, \Theta(\kappa) \end{equation}
with $\Theta(\kappa)\simeq -\kappa \ln \kappa$ when $\kappa\to 0$. The variance can be probed similarly to the average, see \eqref{time-av-log}. One establishes the scaling law
\begin{equation} \label{time-var} \mathbb{V}_1[t] \sim \int_0^{\sqrt[3]{N}} dt\,\frac{t^2}{(1+t)^2} \sim N^{1/3} \end{equation}
but not an amplitude. Independence gives $\mathbb{V}_k[t] \sim kN^{1/3}$ when $k\ll N^{1/3}$, and hence the hypothetical scaling form of the variance is
\begin{equation} \label{var-time-scaling} \mathbb{V}_k[t] = N^{2/3}\, \Theta_2(\kappa) \end{equation}
with $\Theta_2(\kappa)\sim \kappa$ when $\kappa\to 0$. Simulations \cite{Ginestra} support the scaling law \eqref{var-time-scaling} and the linear small $\kappa$ behavior. Simulations also suggest \cite{Ginestra} that the scaling function $\Theta_2(\kappa)$ is inversely proportional to $\kappa$ in the large $\kappa$ limit. Thus
\begin{equation} \label{var-time-extreme} \Theta_2(\kappa)\sim \begin{cases} \kappa & \kappa\to 0\\ \kappa^{-1} & \kappa\to\infty \end{cases} \end{equation}
The scaling behavior of the average outbreak duration seems rather tricky. A scaling form whose pre-factor is a product of an algebraic and a logarithmic term in $N$ has been used in \cite{Ginestra} to fit simulation data. This scaling form is notably different from three other scaling forms that all have a standard pre-factor, purely algebraic in $N$ [cf. Eqs.~\eqref{av-scaling}, \eqref{var-scaling}, and \eqref{var-time-scaling}]. It may be worthwhile to try to fit numerical data for the average outbreak duration with the standard scaling form \eqref{av-time-scaling}, while allowing for the possibility of an anomalous small $\kappa$ behavior such as $\Theta(\kappa)\simeq -\kappa \ln \kappa$ argued above.
\section{Conclusions}
We have studied the critical SIR process starting with a large number of initially infected individuals, $k\gg 1$. Particularly interesting behaviors emerge when $k$ scales as a cubic root of the population size, $k\sim N^{1/3}$. The critical SIR process exhibits large fluctuations, so we have relied on the stochastic formulation. We have treated the problem using a combination of exact calculations and asymptotic methods.
We have focused on the size and duration of the outbreaks, more precisely on the average and variance of these quantities. The analysis of the size of outbreaks is rather detailed, Secs.~\ref{sec:SIR}--\ref{sec:EA}. Our analytical and asymptotic predictions qualitatively agree with simulation results \cite{Ginestra}. Whenever there is a slight discrepancy between simulation results and theoretical predictions, it seems that the data may be fitted using slightly different scaling forms, simpler than the fits used in Ref.~\cite{Ginestra}. The chief reason for the subtle behaviors is the algebraic tails; the average size and duration of outbreaks are especially sensitive to these tails. There are many remaining challenges. For instance, we have found the average size of outbreaks (Sec.~\ref{sec:EA}). The exact expression for the average size involves a sum of $N-k$ terms, with amplitudes determined recurrently but becoming increasingly cumbersome. We have not succeeded in extracting a clean asymptotic behavior of this sum. We have also shown that continuum methods in principle allow one to determine the scaling functions; one should just solve linear PDEs with non-constant coefficients. We have used continuum methods in extracting asymptotic behaviors of the scaling functions. The derivation of the exact average size in Sec.~\ref{sec:EA} can probably be generalized to establish the variance, and perhaps all cumulants (that is, to compute the cumulant generating function).
\vskip 1cm
\noindent {\bf Acknowledgments.} I want to thank Ginestra Bianconi and Sid Redner for collaboration on similar problems, G. Bianconi and F. Radicchi for sending a preliminary version of Ref.~\cite{Ginestra}, and the Boston University Network group for discussions.
\section{\label{sec1}Introduction} The discoveries of integer and fractional quantum Hall effects have initiated a revolution in the study of condensed matter that highlights the interplay between topology and physics \cite{Klitzing80, Tsui82, Laughlin1983}. In particular, fractional quantum Hall (FQH) states are characterized by topological order, which transcends Landau's paradigm of spontaneous symmetry-breaking \cite{Wen2004QFT}. Topologically ordered states host anyons, which are point-like quasiparticle-excitations that obey neither bosonic nor fermionic exchange statistics. Rather, anyons have fractional statistics, and depending on whether they have a single or multiple fusion channels, they are classified as Abelian or non-Abelian anyons, respectively. On practical grounds, there have been many research efforts investigating non-Abelian topological phases since certain types of non-Abelian anyons, such as the Fibonacci anyon and the Ising anyon, can support universal quantum computation \cite{Freedman2002QC, Freedman2003TQC, Kitaev2003TQC, Nayak2008TQCReview}. By braiding Fibonacci anyons, all possible unitary gates can be implemented with intrinsic fault-tolerance \cite{Simon2005Fibonacci,Simon2007Fibonacci}. By braiding Ising anyons, supplemented with a single-qubit phase gate and a two-qubit measurement gate, universal quantum computation can also be realized given rather mild error-correcting protocols \cite{Bravyi2006Ising, Freedman2006Ising}. There are also proposals of using Abelian anyons for quantum computations, with some but not all of the robustness provided by non-Abelian anyons \cite{Lloyd2002AbelianTQC, Pachos2006AbelianTQC, QCToricCode2007}. On fundamental grounds, FQH states are prototypical platforms exhibiting the bulk-boundary correspondence. The bulk of a FQH state is gapped and characterized by a 2+1D topological quantum field theory, in which gauge fluxes are attached to quasiparticles \cite{Witten1989TQFT, Zhang1989EFT, Wen1990EFT, Wen1990EFT2, Read1990EFT, Fradkin1991CS, WenZee1992CS}. The boundary theory is gapless and described by a closely related 1+1D conformal field theory (CFT), where the primary fields are associated with the quasiparticle types \cite{Wen1991edge, MR1991edge, Wen1992edge, ReadRezayi1999, Hansson2017CFTReview}. The simplest example is the Laughlin state at filling $\nu=1/k$, whose bulk is described by the $U(1)_k$ Chern-Simons theory and whose edge theory is the circle CFT (a free boson compactified on a circle) with radius $R=1/\sqrt{k}$ and chiral central charge $c=1$. The bulk-edge correspondence of a topological phase is particularly manifest in a coupled-wire construction \cite{CWC02, CWC14}. There are two central ingredients in a coupled-wire construction. First, each wire is a one-dimensional Luttinger liquid consisting of two decoupled chiral and anti-chiral gapless modes. These decoupled modes are selected by tuning the intra-wire back-scattering appropriately. Second, an array of wires interact together such that the chiral mode on one wire is coupled to the anti-chiral mode on the next wire, leading to a two-dimensional bulk that is completely gapped. In the end, a pair of gapless chiral modes remains, separated by the gapped bulk and localized at the boundary. This is conceptually similar to what happens in the non-trivial phase of the Su-Schrieffer-Heeger model \cite{SSH1980}, where inter-site couplings are engineered such that a physical electron can ``split" into two halves, each residing at a domain wall.
Following this spirit, the coupled-wire construction has been applied to study various Abelian and non-Abelian FQH states \cite{CWC02, CWC14, CWC17, Fuji2017CWC, CWC18, Yukihisa2019, CWC20}, quantum spin liquid states \cite{CSL2015, Debanjan2016Z2QSL, CSL2017}, as well as higher-dimensional topological phases \cite{3dCWC2016, Fuji2019coupled-layers}. As an exactly solvable model, the wire construction also allows one to study the motion of quasiparticles explicitly. Quasiparticles correspond to kink-excitations defined on a link between two wires, and they can move around by acting with local operators on individual wires. Thus, the theory of a single wire, which is a full non-chiral CFT, contains important information about the scattering pattern of quasiparticles. The way that chiral and anti-chiral sectors are sewn together defines the allowed physical operators that can act on a single wire, and subsequently defines the allowed operators that scatter quasiparticles in the wire model \citep{CWC14}. When the wire is described by a diagonal CFT, which is the case for a Laughlin state, a local operator is given by a diagonal combination of chiral and anti-chiral fields, implying that all quasiparticles can be scattered across a single wire, having essentially unconstrained motion in the bulk. The purpose of this work is to introduce a simple but non-trivial twist to the Laughlin state, where a single wire is described by a \textit{non-diagonal} circle CFT. The resulting Abelian FQH state, termed the non-diagonal state, has an interesting and constrained pattern of quasiparticle motion, which has significant physical consequences that we investigate below. In this paper, we propose a family of non-diagonal QH states using a coupled wire construction. The bosonic non-diagonal states arise at filling fraction $\nu=p/2q$, and the fermionic non-diagonal states arise at $\nu= p/(p+2q)$, with $p$ and $q$ being two coprime integers. For $p=1$, our construction produces the well-known Laughlin states, but there is interesting physics to be unveiled for $p>1$. Importantly, the non-diagonal QH states serve to highlight not only the interplay between topology and physics, but also the interplay between symmetry and topological order. As we will see, in the absence of either the $U(1)$ charge symmetry or the $\mathbb{Z}$ translation symmetry of the wire model, a non-diagonal state cannot be distinguished from a Laughlin state of charge-$pe$ particles, which is also known as a strongly-clustered state \cite{CWC17, CWC18}. Non-diagonal states thus share the same intrinsic topological order with strongly-clustered states. With charge conservation alone, they are both characterized by a $U(1)$ charge sector and possess a chiral Luttinger liquid on the boundary. However, in the presence of both charge symmetry and discrete translation symmetry, the non-diagonal states possess a distinct \textit{symmetry-enriched} topological (SET) order \cite{Ran2013SET, Hung2013SET, Lu2016SET, Cheng2016translationSET, Chen2017SET}. There is an additional symmetry-enriched neutral sector characterized by the quantum double model $\mathcal{D}(\mathbb{Z}_p)$, which has a $\mathbb{Z}_p$ topological order.
The $\mathbb{Z}_p$ topological order has previously been realized in spin/rotor models such as Kitaev's toric code \cite{Kitaev2003TQC, Kitaev2006exactly, Bombin2010AS} and Wen's plaquette model \cite{You2012plaquette, You2013plaquette}, and in this work, the non-diagonal QH state is introduced as a platform for realizing $\mathcal{D}(\mathbb{Z}_p)$ in an electronic setting. Similar to the lattice models with spins, the translation symmetry in the coupled wire model also plays the role of the $\mathbf{e}\leftrightarrow\mathbf{m}$ anyonic symmetry of the $\mathbb{Z}_p$ topological order. In fact, the constrained motion of quasiparticles in the non-diagonal states distinguishes quasiparticles excited on the even links from those on the odd links, and as we will see, excitations on even and odd links can be respectively associated to the $\mathbf{e}$-type and $\mathbf{m}$-type anyons in $\mathcal{D}(\mathbb{Z}_p)$. Translation by a wire then interchanges even and odd links, thus acting as an anyon-relabelling transformation. A similar mechanism has been featured in the work by Hong and Fu \cite{Hong2017TCIsurface}, where a fermionic $\mathbb{Z}_4$ topological order is realized on the surface of topological crystalline insulators. Consequently, a dislocation in our wire model, which corresponds to a sudden termination of a wire inside the bulk, acts as a two-fold twist-defect that interchanges $\mathbf{e}$ and $\mathbf{m}$ anyons \cite{Bombin2010AS}. We expect the proposed non-diagonal QH states to be a concrete test bed for the general theory of anyonic symmetry \cite{Teo2015twistliquid, Teo2016AS, Barkeshli2019AS}. Moreover, it may be interesting for future studies to explore the possibility of experimentally realizing the non-diagonal states in twisted-bilayer materials, as an array of quasi-one-dimensional subsystems could be engineered there, with the required translation symmetry built-in \cite{Kennes20201dflatbands}. The rest of the paper is organized as follows. In Sec. \ref{sec2}, we present a detailed study of the coupled-wire construction for the non-diagonal quantum Hall states. The existence of non-diagonal states is established in Sec. \ref{sec2.1}, followed by a discussion that explains the relation to non-diagonal CFTs in Sec. \ref{sec2.2}. In Secs. \ref{sec2.3}-\ref{sec2.4}, we analyze the scattering pattern of quasiparticles for both bosonic and fermionic non-diagonal states by explicitly constructing physical operators that move the quasiparticles around. The analysis allows us to appreciate the importance of charge conservation in constraining the motion of quasiparticles. We also provide an analogue of non-diagonal QH states in the context of weak topological superconductors, which exhibits a similarly constrained motion of vortex excitations that can be understood using the fractional Josephson effect. In Sec. \ref{sec3}, we study the non-diagonal states from the ``symmetry-enriched" perspective. We first establish a bulk neutral sector described by the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model with a calculation of braiding statistics (Sec. \ref{sec3.1}), and discuss several concrete examples of non-diagonal states (Sec. \ref{sec3.2}). We then clarify the importance of the $U(1)$ charge symmetry and the $\mathbb{Z}$ translation symmetry in giving rise to the $\mathcal{D}(\mathbb{Z}_p)$ neutral sector (Sec. \ref{sec3.3}), thus establishing the non-diagonal state as possessing an SET order distinct from the strongly-clustered Laughlin state. In Sec.
\ref{sec4}, we present a detailed study of the boundary theory. We focus on the edge running perpendicular to the direction of the wires, which is capable of realizing the translation symmetry. The corresponding effective Hamiltonian is found to consist of a chiral Luttinger liquid (Sec. \ref{sec4.1}) and a generalized $p$-state clock model (Sec. \ref{sec4.2}). Our discussion here corroborates and elaborates on some earlier works on critical parafermion chains \cite{criticalparafermionchain} and twist-defect chains \cite{twistdefectchain}. Translation symmetry in the bulk is realized as self-duality on the edge. The symmetric edge for $p=2,3$ is completely gapless, allowing for tunneling of a single electron into the edge, say from a Fermi liquid. Tunneling exponents are calculated in Sec. \ref{sec4.3}, which hopefully serve as an experimental probe for the non-diagonal states. Some complexities of the edge structure for $p\geq 4$ are addressed in Sec. \ref{sec4.4}, which supplement the results in earlier works \cite{twistdefectchain, criticalparafermionchain}. We conclude with an outlook in Sec. \ref{sec5}.
\section{\label{sec2}Wire model}
First let us describe a coupled wire construction for a sequence of $c=1$ Abelian FQH states, which bear an intimate relation to the non-diagonal series of circle CFTs. They will thus be referred to as the ``non-diagonal" states. Though our construction may look similar to earlier formulations of the coupled-wire model \citep{CWC02, CWC14}, it sets the stage for exploring a subtlety that has been overlooked. One important consequence lies in the pattern of quasiparticle scattering, which will be investigated explicitly in our wire model. Bulk quasiparticles in general have a constrained motion, which reflects a non-trivial interplay between charge conservation and translation symmetry in the non-diagonal states. The discussion in this section paves the way for identifying a hidden neutral sector as the $\mathbb{Z}_p$ toric code, as will be explained in Sec. \ref{sec3}. For the simplicity of exposition, the coupled wire construction is done for \textit{bosonic} electrons in this section. The fermionic non-diagonal states can be constructed in a similar manner, with the details presented in Appendix \ref{secappendixa}. While the relation to non-diagonal CFT is more transparent for the bosonic case, both bosonic and fermionic non-diagonal states share similar properties, which we discuss in Sec. \ref{sec2.4}.
\subsection{\label{sec2.1}Inter-wire coupling}
\begin{figure}[b!] \includegraphics[width=8cm,height=6.5cm ]{setup.png}\centering \caption{\small{(a) Schematic of the coupled wire model. (b) Pictorial representation of the inter-wire coupling. }} \label{setup} \end{figure}
We consider a two-dimensional system consisting of an array of $M$ one-dimensional wires of bosons, as depicted in Fig. \ref{setup}(a). A perpendicular magnetic field is applied, and the one-dimensional (along-wire) flux density is denoted as $b$. The low energy description of this system is provided by ``bosonizing the bosons", such that each wire is characterized by two slowly varying bosonic fields: the phase variable $\varphi(x)$ and the density variable $\theta(x)$ \citep{giamarchi2003, gogolin2004}. They form a conjugate pair, satisfying the following canonical commutation relation:
\begin{equation} [\partial_x\theta_j(x), \varphi_{j'}(x')] = i\pi\delta_{jj'}\delta(x-x'), \end{equation}
with $j,j'=1, 2, ..., M$ labeling the wires and $x,x'$ being the coordinate along the quantum wires.
The operator that annihilates a (bosonic) electron on the $j$-th wire is then expressed as
\begin{equation}\label{phi} \psi_j(x) \propto e^{i\varphi_j(x)}. \end{equation}
The operators associated to density fluctuations are expressed as
\begin{equation}\label{theta} \rho^n_j \propto e^{i2n(\pi \bar{\rho}x+\theta_j(x))}, \end{equation}
with $n\in \mathbb{Z}$ and $\bar{\rho}$ being the 1d average density. This describes an intra-wire backscattering at wavevector $k\sim 2n\pi \bar{\rho}$. A crucial ingredient of a wire construction is the inter-wire coupling, which involves tunneling of electrons across neighboring wires. Ultimately, it determines how the bulk is gapped to give rise to a quantum Hall state, as well as what kind of gapless edge is left at the boundary. For our purpose, the tunneling interaction is characterized by a pair of coprime integers $(p,q)$, which describes the inter-wire tunneling of $p$ electrons between two nearest wires, accompanied by intra-wire backscattering at wavevector $k\sim 2q\pi\bar{\rho}$. A pictorial representation of this tunneling interaction is provided in Fig. \ref{setup}(b). The inter-wire coupling term, defined for each link $\ell=j+1/2$ between wires $j$ and $j+1$, then takes the following expression:
\begin{equation}\label{tunnel1} \begin{split} \mathcal{V}^{(p,q)}_{\ell} &= (\psi_{j+1}^\dagger \psi_j e^{-ibx})^p \rho^q_{j+1}\rho^q_j + h.c.\\ &=e^{i(4\pi\bar{\rho}q-pb)x} e^{i\Theta_\ell}+h.c.\;, \end{split} \end{equation}
with the following link variable defined:
\begin{equation} \Theta_{\ell} = p(\varphi_{j}-\varphi_{j+1})+2q(\theta_{j}+\theta_{j+1}). \end{equation}
Notice that the Lorentz force provides an impulse to each tunneled electron, which is accounted for by the $e^{-ibx}$ factor attached above. The oscillatory factor in Eq. (\ref{tunnel1}) describes the net momentum of the tunneling operator $\mathcal{V}^{(p,q)}_{\ell}$. Demanding momentum conservation, we obtain the filling fraction for the bosonic FQH state under consideration,
\begin{equation}\label{filling} \nu \equiv \frac{2\pi \bar{\rho}}{b} = \frac{p}{2q}. \end{equation}
Next, we proceed to demonstrate how the inter-wire coupling gaps out the bulk, and then examine the gapless chiral modes left at the boundary. To that end, it is convenient to introduce a set of chiral bosonic fields,
\begin{subequations}\label{chiralboson} \begin{align} \phi^\text{R}_j&=p\varphi_j+2q\theta_j,\\ \phi^\text{L}_j&=p\varphi_j-2q\theta_j, \end{align} \end{subequations}
which can be checked to have the following commutation relation:
\begin{equation}\label{chiralcommutation} [\partial_x\phi^{\tilde{r}}_j(x),\phi^{\tilde{r}'}_{j'} (x')] = 4i\pi pq \tilde{r}\delta_{\tilde{r}\tilde{r}'}\delta_{jj'}\delta(x-x'), \end{equation}
where $\tilde{r},\tilde{r}'=\text{R}/\text{L}=+1/-1$. From the Luttinger liquid theory \citep{giamarchi2003, gogolin2004}, it is known that with an appropriate intra-wire back-scattering the Luttinger parameter can be adjusted such that the Hamiltonian of a single wire takes the following form:
\begin{equation}\label{singlewire} \mathcal{H}_j = \frac{u}{2\pi}[(\partial_x\phi^\text{R}_j)^2+(\partial_x\phi^\text{L}_j)^2]. \end{equation}
In this way, the designated chiral bosonic fields are decoupled in each wire. (Here $u$ is the speed of sound, which is not essential to our later discussions.)
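The commutation relation \eqref{chiralcommutation} follows from a short bookkeeping exercise on the coefficients of the basis commutators. A tiny Python sketch (ours, purely illustrative; it uses the convention, implicit in \eqref{chiralcommutation}, that $[\partial_x\varphi(x),\theta(x')]$ and $[\partial_x\theta(x),\varphi(x')]$ both equal $i\pi\delta(x-x')$ on the same wire):
\begin{verbatim}
def chiral_commutators(p, q):
    # phi^{R/L} = p*varphi +/- 2q*theta; R = +1, L = -1
    coeff = {+1: (p, 2 * q), -1: (p, -2 * q)}
    # basis: [dx varphi, theta] = [dx theta, varphi] = 1
    # in units of i*pi*delta(x - x'); the diagonal ones vanish
    out = {}
    for r, (a1, b1) in coeff.items():
        for rp, (a2, b2) in coeff.items():
            out[(r, rp)] = a1 * b2 + b1 * a2   # coefficient of i*pi*delta
    return out

print(chiral_commutators(2, 3))
# {(1, 1): 24, (1, -1): 0, (-1, 1): 0, (-1, -1): -24}
# i.e. [dx phi^r, phi^r'] = 4 i pi p q r delta_{r r'} delta(x-x'),
# reproducing Eq. (chiralcommutation) for p = 2, q = 3 (4pq = 24)
\end{verbatim}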
Subsequently, the complete Hamiltonian for the coupled wire model is
\begin{equation}\label{wireHamtot} \mathcal{H}_{\rm tot}=\sum_{j=1}^M \mathcal{H}_j + \sum_{j=1}^{M-1}t_{j+\frac{1}{2}}\cos\Theta_{j+\frac{1}{2}}. \end{equation}
The second term comes from the tunneling operator $\mathcal{V}^{(p,q)}_{\ell}$ for each link $\ell=j+1/2$, with $t_{\ell}$ characterizing the tunneling strength. The filling fraction is adjusted so that the oscillatory factor is canceled. Notice that in terms of the chiral bosonic fields, the link variable can be expressed as
\begin{equation}\label{link1} \Theta_{j+\frac{1}{2}} = \phi^\text{R}_j-\phi^\text{L}_{j+1}. \end{equation}
Therefore, while the chiral modes are decoupled within individual wires, the tunneling operators simply couple chiral modes of opposite chirality from neighboring wires, as illustrated in Fig. \ref{setup}(a). This simple picture motivates us to analyze the interacting Hamiltonian in Eq. (\ref{wireHamtot}) by the following decomposition:
\begin{equation} \mathcal{H}_{\rm tot} = \mathcal{H}_{\rm edge} + \mathcal{H}_{\rm bulk}, \end{equation}
with the contribution from the boundary being
\begin{equation}\label{edgeHam} \mathcal{H}_{\rm edge} = \frac{u}{2\pi}[(\partial_x \phi^\text{L}_1)^2+(\partial_x \phi^\text{R}_M)^2], \end{equation}
and the contribution from the bulk being
\begin{equation}\label{bulkHam} \begin{split} \mathcal{H}_{\rm bulk} = \sum_{j=1}^{M-1}\{ \frac{u}{4\pi}[(\partial_x \Phi_{j+\frac{1}{2}})^2+(\partial_x\Theta_{j+\frac{1}{2}})^2] \\ +\;t_{j+\frac{1}{2}}\cos\Theta_{j+\frac{1}{2}}\}. \end{split} \end{equation}
Here we have also introduced the conjugate link variable
\begin{equation}\label{link2} \Phi_{j+\frac{1}{2}} = \phi^\text{R}_j+\phi^\text{L}_{j+1} \end{equation}
which, together with the link variable defined earlier in Eq. (\ref{link1}), obeys the following commutation relation:
\begin{equation}\label{linkcommutation} [\partial_x\Theta_{\ell}(x) , \Phi_{\ell'} (x')] = 8i\pi p q \delta_{\ell \ell'} \delta(x-x'). \end{equation}
The bulk Hamiltonian $\mathcal{H}_{\rm bulk}$ can now be viewed as decoupled copies of sine-Gordon models, one for each link. In particular, when the interaction term $\cos\Theta_{\ell}$ flows to strong coupling, the link variables $\Theta_{\ell}$ in the bulk are all pinned at the bottom of the cosine potential, i.e., $\Theta_{\ell}=2\pi n\; (n\in \mathbb{Z})$, which leads to a gapped bulk. Such an exactly solvable limit of our interacting microscopic model can be attained when we include additional \textit{inter-wire scattering} of the form $(\partial_x\phi^\text{R}_{j})(\partial_x\phi^\text{L}_{j+1})$. The net effect of inter-wire scattering can be absorbed into the Luttinger parameter $K$, so the bulk Hamiltonian is modified to
\begin{equation}\label{bulkHam2} \begin{split} \mathcal{H}'_{\rm bulk} = \sum_{j=1}^{M-1}\{&\frac{u}{4\pi}[K(\partial_x \Phi_{j+\frac{1}{2}})^2+\frac{1}{K}(\partial_x\Theta_{j+\frac{1}{2}})^2]\\ &+\; t_{j+\frac{1}{2}}\cos\Theta_{j+\frac{1}{2}}\}. \end{split} \end{equation}
For $K< (pq)^{-1}$, the scaling dimension of the inter-wire tunneling is found to be $\Delta_t<2$ and thus the cosine potential is relevant. This gives an exactly solvable regime of our model, in which the bulk is known to be gapped. Up to this point, we have started from an interacting microscopic model and constructed a family of bosonic quantum Hall states at filling fraction $\nu=p/2q$.
For $p=1$, these states are simply the Abelian Laughlin states which have been discussed before \citep{CWC14}. Indeed, even for generic values of $(p,q)$, the corresponding state is still Abelian. Thus the reader may wonder whether we can learn anything exciting here by studying the generic case with $p>1$. The answer is, surprisingly, affirmative, and has to do with how quasiparticles are scattered (\textit{i.e.} their allowed motion) in the wire model. In the coupled wire construction, quasiparticles appear at link $\ell$ when $\Theta_{\ell}$ has a kink where it jumps by $2\pi n $ ($n\in \mathbb{Z}$) \citep{CWC02, CWC14}. Away from the kink, the system is still in its ground state as suggested by Eq. (\ref{bulkHam2}), and around the kink there is an accumulation of charge $ne/2q$. Following Eq. (\ref{chiralboson}) and Eq. (\ref{link1}), a charge-$e/2q$ quasiparticle residing at link $j+1/2$ can be created by the following operator:
\begin{equation}\label{QPcreation} (\Psi^{\text{R}/\text{L}}_{e/2q, j+\frac{1}{2}} (x))^\dagger= e^{-\frac{i}{2pq}\phi^{\text{R}/\text{L}}_{j/j+1}(x)}. \end{equation}
This is not a local operator, as anticipated because quasiparticles cannot be created locally. On the other hand, the scattering of quasiparticles is expected to be represented by local operators. When acting on a single wire, it is clear from Eqs. (\ref{phi}) and (\ref{theta}) that local operators should take the form $e^{i(r\varphi-2s\theta)}$ ($r,s\in \mathbb{Z}$). One can then check that the minimal quasiparticle with charge $e/2q$ can be scattered across a \textit{single} wire only if the scattering operator
\begin{equation}\label{scattop} \mathcal{O}_j= e^{\frac{i}{2pq}(\phi^\text{R}_j-\phi^\text{L}_j)} = e^{\frac{2i}{p}\theta_j} \end{equation}
is local, which is true only when $p=1$. When $p>1$, a minimal quasiparticle needs to hop across \textit{two} wires in order to obey locality and preserve its charge, as we will explain in more detail. The motion of quasiparticles is thus constrained in general, and as we will soon see, this is a defining feature of ``non-diagonal" states. We want to emphasize that the phase we have constructed here is different from the strongly-clustered phase studied before in Refs. \citep{CWC17,CWC18}. In the strongly-clustered phase, electrons are bound into charge-$pe$ clusters and form the Laughlin state at filling $\nu_{pe} = 1/2pq$ (equivalently, the electronic filling is $\nu=p/2q$), which also has quasiparticles with minimal charge $e/2q$. However, in that case, irrespective of what $p$ is, quasiparticles can always hop across a single wire. The constrained motion of quasiparticles also appears in fermionic non-diagonal states. Moreover, it also happens in a superconducting context. There, the contrast between the non-diagonal state and the strongly-clustered state can be understood in terms of the fractional Josephson effect, by considering an array of 1d topological/trivial superconductors, with quasiparticles replaced by quantum vortices. We will elaborate more on this analogy, but before that, we want to introduce an intimate connection between the constructed FQH state and what is known as the \textit{non-diagonal} conformal field theory. The aforementioned pattern of bulk quasiparticle scattering can be understood from the perspective of the full non-chiral boundary CFT, or equivalently, the theory of a single wire.
\subsection{\label{sec2.2} Single wire: non-diagonal CFT}
While the bulk is gapped, it is manifest in the coupled wire model that there are gapless chiral modes left at the boundary, namely at the very first wire $(j=1)$ and the very last wire $(j=M)$. We have the edge Hamiltonian $\mathcal{H}_{\rm edge}$ written out in Eq. (\ref{edgeHam}), which resembles the Hamiltonian of a single wire in Eq. (\ref{singlewire}), except that the two chiral edge modes are separated by a gapped bulk. From Eq. (\ref{chiralcommutation}), it is clear that each edge is described by a chiral Luttinger liquid with Luttinger parameter $K=2pq$. The edge theory is equivalently known as the $U(1)_{2pq}$ chiral CFT, and it also describes the edge of the strongly-clustered state at filling $\nu_{pe}=1/2pq$. Hence, from the perspective of a single chiral sector at the boundary, the non-diagonal state is no different from the strongly-clustered state. To distinguish them, it is important to combine the chiral and anti-chiral sectors, and study the resulting non-chiral theory. This full non-chiral boundary theory, which is equivalent to the theory of a single wire, determines how quasiparticles can move in the bulk and be scattered from one edge to another. Here the non-chiral theory describes a boson compactified on a circle, and first let us determine its \textit{radius of compactification}. The circle compactification can be easily inferred from the way the bosonic fields $\varphi$ and $\theta$ were first introduced in Eqs. (\ref{phi}) and (\ref{theta}). They are defined to have the following shift-symmetries that leave the physical operators invariant:
\begin{equation} \varphi \mapsto \varphi+2\pi, \;\;\;\theta \mapsto \theta +\pi. \end{equation}
Consequently, the circle CFT with the Hamiltonian,
\begin{equation} \mathcal{H} = \frac{u}{2\pi}[(\partial_x \varphi)^2+(\partial_x\theta)^2] \end{equation}
is considered to have radius $R=1$ (when looking at the compactification of $\varphi$), or radius $R=1/2$ (when looking at the compactification of $\theta$). These two descriptions of the radius are indeed equivalent, thanks to a duality between circle CFTs of radius $R$ and $1/2R$, which is known as $T$-duality \citep{Ginsparg88,BYB}. Substituting Eq. (\ref{chiralboson}) into Eq. (\ref{singlewire}), the Hamiltonian describing a single wire can be expressed as
\begin{equation} \mathcal{H}_j = \frac{\tilde{u}}{2\pi}[\frac{p}{2q}(\partial_x\varphi)^2+\frac{2q}{p}(\partial_x\theta)^2], \end{equation}
with the speed of sound rescaled to $\tilde{u} = 4pqu$. Comparing with the circle CFT at radius $R=1$, the circle CFT corresponding to our single wire now has the following radius:
\begin{equation} R=\sqrt{\frac{p}{2q}}. \end{equation}
In this paper, we focus on the situation where $p$ and $q$ are coprime integers, as otherwise there exist two smaller coprime integers giving rise to the same radius for the edge theory, as well as the same filling factor for the bulk. For $p=1$, the radius is $R=1/\sqrt{2q}$, which leads to the familiar circle CFT that is known to describe the gapless edges of the $\nu=1/2q$-filling Laughlin state, and as we will soon see, it is a \textit{diagonal} theory. Below, let us introduce the distinction between diagonal and non-diagonal CFTs for a boson compactified on a circle, by first studying their corresponding partition functions.
As discussed by DiFrancesco \textit{et al.} \citep{BYB}, the modular-invariant partition function for a compact boson of rational radius $R= \sqrt{p/2q}$ can be expressed as follows \citep{Tduality},
\begin{equation}\label{partition} Z(\sqrt{\frac{p}{2q}}) = \sum_{n=0}^{N-1}K_{ n}(\tau)\overline{K_{\omega n}(\tau)}, \end{equation}
with $\tau$ being the modular parameter and $K_n(\tau)$ the extended character, which can be expressed as
\begin{equation}\label{charac} K_n(\tau) = \frac{1}{\eta(\tau)}\sum_{m\in \mathbb{Z}}\lambda^{(Nm+n)^2/2N}, \end{equation}
where $\eta(\tau)$ is the Dedekind eta function and $\lambda=e^{2i\pi\tau}$. In the above we have defined $N=2pq$, which counts the number of chiral primary fields. Modular invariance requires the parameter $\omega$ in Eq. (\ref{partition}) to satisfy the following conditions:
\begin{subequations}\label{Bezout} \begin{align} qr_0-ps_0&=1,\\ qr_0+ps_0&= \omega \;\;\;\text{mod }N. \end{align} \end{subequations}
In the range $1\leq r_0 \leq p$, $1\leq s_0 \leq q-1$, B\'ezout's lemma in number theory guarantees the existence of a unique integer solution $(r_0,s_0)$ to the first equation \cite{Bezout2009}, which subsequently defines $\omega$ in the second equation. For $p=1$, we have the solution $(r_0,s_0) = (1,q-1)$, which leads to $\omega = -1\;\text{mod }N$. From Eq. (\ref{charac}), it can be seen that the extended character obeys $K_{n} = K_{-n}$, and hence for $p=1$ we have
\begin{equation} Z(\frac{1}{\sqrt{2q}}) = \sum_{n=0}^{N-1} \abs{K_{n}}^2. \end{equation}
This defines the diagonal theory, in which the extended characters from the chiral and anti-chiral sectors are combined in a symmetric manner. On the contrary, for coprime integers $p,q > 1$, the partition function in Eq. (\ref{partition}) \textit{cannot} be expressed in the diagonal form, and the corresponding theory is known as non-diagonal \citep{BYB}. This explains why we name the Abelian states constructed in Sec. \ref{sec2.1} the ``non-diagonal" states for $p,q >1$. It is worth making a contrast with the strongly-$p$-clustered state at filling $\nu=p/2q$ \citep{CWC17,CWC18}, which also has a minimal quasiparticle of charge $e/2q$; its edge theory, however, has compactification radius $R=1/\sqrt{2pq}$, and it is thus a ``diagonal" state. In the coupled wire model, one can see an important physical consequence of the distinction between diagonal and non-diagonal theories. Each extended chiral/anti-chiral character $K_{n}/\overline{K}_n$ is associated to a chiral/anti-chiral primary operator, $e^{\pm\frac{in}{2pq}\phi^{\text{R}/\text{L}}}$, while the expansion of the partition function in terms of extended characters, namely Eq. (\ref{partition}), suggests how a physical local operator can be constructed from a combination of chiral and anti-chiral sectors. For the $\nu=p/2q$ state constructed in the wire model, the theory of a single wire is non-diagonal for $p,q>1$, so the desired scattering operator $\mathcal{O}_j$ introduced in Eq. (\ref{scattop}), as a diagonal combination of chiral and anti-chiral primary operators, is \textit{not} an allowed local operator. Hence the minimal quasiparticle in a non-diagonal state cannot hop across just a single wire in the coupled wire model. But it can always hop across two, as we explain in the next subsection. This constraint leads to the distinction between two types of quasiparticles: one that lives on the even links and the other on the odd links. As we explain in Sec.
\ref{sec3}, they are associated to the $\mathbf{e}$-type and $\mathbf{m}$-type anyons in the $\mathbb{Z}_p$ toric code, respectively. The states with $p>1$ and $q=1$ require special attention, as they are also ``non-diagonal". Admittedly, from the perspective of CFT, they have diagonal partition functions just like the diagonal states (with $p=1$ and $q>1$). In fact, under $T$-duality, which interchanges $\varphi$ and $2\theta$, $p$ and $q$ are also interchanged, so this is expected. Nevertheless, $\varphi$ and $\theta$ have their respective physical meanings in the wire model. In particular, the electron operator $e^{i\varphi}$ carries charge-$e$ while the density operator $e^{i2\theta}$ is charge-neutral. As we see next, charge conservation (as a natural symmetry in a quantum Hall system) plays an important role in constraining the motion of quasiparticles. One thus should not use $T$-duality to disqualify the states with $p>1$ and $q=1$ as being ``non-diagonal". In fact, by analyzing the allowed quasiparticle scattering pattern, these states are constrained in just the same way as any other non-diagonal states.
\subsection{\label{sec2.3}Quasiparticle scattering}
Let us now discuss the allowed motion of bulk quasiparticles in more detail. Following the theory of Luttinger liquids and bosonization \citep{giamarchi2003, gogolin2004}, the smeared density on the $j$-th wire is
\begin{equation} \rho_j = \frac{1}{\pi} \partial_x\theta _j = \frac{1}{4\pi q}(\partial_x\phi^\text{R}_j - \partial_x \phi^\text{L}_j), \end{equation}
with the second equality following from the definition of the chiral bosonic fields in Eq. (\ref{chiralboson}). Instead of assigning electric charge to the wires, one can equally well assign charge to the links and consider the following density operator for link $\ell = j+1/2$:
\begin{equation}\label{linkdensity} \rho_{\ell} = \frac{1}{4\pi q} (\partial_x \phi^\text{R}_{j}-\partial_x \phi^\text{L}_{j+1}) = \frac{1}{4\pi q} \partial_x\Theta_\ell, \end{equation}
with the second equality following from the definition of the link variable $\Theta_\ell$ in Eq. (\ref{link1}). In our wire construction, a quantum Hall state is established when all link variables $\Theta_{\ell}$ are pinned at the bottom of the cosine potential (arising from the inter-wire coupling), where they take values in $2\pi n$ ($n\in \mathbb{Z}$). Therefore, a minimal non-trivial quasiparticle excitation located on link $\ell$ would correspond to a $2\pi$-kink in $\Theta_{\ell}$. The density operator in Eq. (\ref{linkdensity}) suggests that this quasiparticle carries charge $e/2q$. The creation/annihilation operator for such a quasiparticle can be constructed by requiring it to create a $2\pi$-kink in $\Theta$. We have already introduced these operators ahead of time in Eq. (\ref{QPcreation}), where they were used to illustrate the motivation of our study. Since we make heavy use of them in this subsection, they are copied here again:
\begin{equation}\label{QPannihilation} \Psi^{\text{R}/\text{L}}_{e/2q, \ell} = e^{\frac{i}{2pq}\phi^{\text{R}/\text{L}}_{j/j+1}} \end{equation}
This operator is non-local, as it cannot be expressed as a product of electronic operators. In the language of CFT, it is a chiral primary operator. On the other hand, $e^{i\phi^{\text{R}/\text{L}}}$ is a local operator, which acts as a simple current in the CFT description.
It can be interpreted as creating/annihilating a ``trivial" quasiparticle of charge $pe$ (this is like the charge-$e$ quasiparticle in the Laughlin state, which is identified with the vacuum). Therefore, the number of chiral primaries of the edge CFT, or equivalently, the number of distinct quasiparticles living on a specific link, equals $N=2pq$. The notations in Eq. (\ref{QPannihilation}) are adopted so as to make intuitive sense when combined with the picture of the wire model, see Fig. \ref{qpoperators}: a quasiparticle can be created in two equivalent ways, in one case by acting $(\Psi^\text{R}_{e/2q,\ell})^\dagger$ on the $j$-th wire and creating a quasiparticle to its \textit{right}; in the other case by acting $(\Psi^\text{L}_{e/2q,\ell})^\dagger$ on the $(j+1)$-th wire and creating a quasiparticle to its \textit{left}.
\begin{figure}[t!] \includegraphics[width=6cm,height=4.5cm]{qpoperators.png}\centering \caption{\small{Schematic illustration of quasiparticle operators. Quasiparticles in the coupled wire model are excitations living on the links between consecutive wires. For the $\nu=p/2q$ Abelian quantum Hall states, the minimal non-trivial quasiparticle has charge $e/2q$. Such a quasiparticle on link $j+1/2$ can be created either by $e^{\frac{-i}{2pq}\phi^{\text{R}}_{j}}$ or $e^{\frac{-i}{2pq}\phi^{\text{L}}_{j+1}}$, which are non-local operators. Local operators defined in Eq. (\ref{localop}) scatter quasiparticles, and thus allow them to move in the bulk. For instance, applying $\mathcal{O}^{\{r_0,s_0\}}_j$ would scatter a quasiparticle of charge $e/2q$ on link $j+1/2$ to a quasiparticle of charge $-\omega e/2q$ on link $j-1/2$, with $r_0, s_0$ and $\omega$ defined through Eq. (\ref{Bezout}).}} \label{qpoperators} \end{figure}
Next we construct local operators that act on a single wire, and interpret their effect as scattering/moving quasiparticles from one side of the wire to the other. This interpretation is also reflected in the language of CFT, where a physical operator is a product of a chiral and an anti-chiral vertex operator. The combination of the two determines the partition function, as discussed in Sec. \ref{sec2.2}. Let us begin with the following operator
\begin{equation}\label{localop} \mathcal{O}^{\{r,s\}}_j = e^{i(r\varphi_j-2s\theta_j)}, \end{equation}
which is known to be local when $r,s \in \mathbb{Z}$, as it can be constructed out of the electronic operators introduced in Eqs. (\ref{phi}) and (\ref{theta}). In this way, all local operators can be organized onto a lattice, each labeled by a point $(x,y)=(r,2s)$, as depicted in Fig. \ref{localoperator}. Making the change of variables to chiral bosonic fields, the above local operator becomes
\begin{equation}\label{localop2} \mathcal{O}^{\{r,s\}}_j = {\rm exp}\; \frac{i}{2pq}[(qr-ps)\phi^\text{R}_j+(qr+ps)\phi^\text{L}_j]. \end{equation}
This expression has two important implications. Firstly, on formal grounds, this is connected to the partition function for the CFT of a single wire. The way that chiral/anti-chiral \textit{vertex operators} are combined to form a physical operator, as shown in Eq. (\ref{localop2}), can be translated to the way chiral/anti-chiral \textit{characters} are combined to form the modular-invariant partition function, as shown in Eq. (\ref{partition}). Specifically, by expressing $n=qr-ps$ and using the definition of $\omega$ in Eq.
(\ref{Bezout}), we have $\omega n = qr+ps$, and hence the dictionary follows:
\begin{equation}
\mathcal{O}^{\{r,s\}} \leftrightarrow K_n \overline{K}_{\omega n}.
\end{equation}
Secondly, from Eq. (\ref{localop2}) one can read off the physical effect of $\mathcal{O}^{\{r,s\}}_j$, which is to scatter a quasiparticle of charge $e(qr-ps)/2q$ residing on link $j+1/2$ to another quasiparticle of charge $-e(qr+ps)/2q$ residing on link $j-1/2$. An example is depicted in Fig. \ref{qpoperators}. Notice that in the special case $r=\pm p$ and $s= \pm q$, the operator simply creates/annihilates a ``trivial" quasiparticle of charge $pe$. As we see next, these two implications suggest that we can learn about the scattering pattern of quasiparticles by examining the local operator $\mathcal{O}^{\{r,s\}}$, and since $\mathcal{O}^{\{r,s\}}$ is related to the partition function, the distinction between diagonal and non-diagonal CFTs is also reflected in the scattering of bulk quasiparticles.

\begin{figure}[t!] \includegraphics[width=8.5cm,height=4cm ]{localoperator_new.png}\centering \caption{\small{Diagrams of the allowed local scattering operators acting on a single wire for a bosonic system. Each lattice point corresponds to an operator $e^{i(x\varphi-y\theta)}$. Red dots label the points $(x,y) = (r,2s)$ with $r,s\in \mathbb{Z}$, which correspond to local operators. (a) For $p=1$ and $q=2$, which gives a diagonal Laughlin state at $\nu=1/4$. (b) For $p=2$ and $q=1$, which gives a non-diagonal bosonic state at $\nu=1$. The arrows connecting the origin to the points $(x,y)=(\pm p, 2q)$ represent the creation/annihilation operators for the trivial quasiparticle, and they bound the shaded region, which covers all $N=2pq$ distinct quasiparticle scattering operators. Only operators on the vertical axis are charge-neutral, while all others involve injection/removal of electrons from a single wire.}} \label{localoperator} \end{figure}

Alongside the constraint of locality, the constraint of charge conservation also plays an important role here. It is clear from Eq. (\ref{localop}) that the local scattering operator is \textit{charged} when $r\neq 0$, since it removes $r$ electrons from the $j$-th wire. Therefore, combining locality with charge conservation, only quasiparticles with charge $pe/2q$ (or its multiples) can be scattered across a single wire, which corresponds to the operators on the vertical axis in Fig. \ref{localoperator}. For the ``diagonal" Abelian states, for which $p=1$, all quasiparticles can be scattered across a single wire. This can also be seen from Fig. \ref{localoperator}(a), where all distinct non-trivial scattering operators lie on the vertical axis, and thus are charge-neutral. The Laughlin states, as well as the strongly-clustered states \citep{CWC17,CWC18}, belong to this category. On the other hand, for the ``non-diagonal" states with $p>1$, there exist quasiparticles (including the minimal quasiparticle with charge $e/2q$) that cannot be scattered across just a single wire, as the local operators that would scatter them require injection/removal of electrons. A representative situation is illustrated in Fig. \ref{localoperator}(b), which clearly shows the existence of charged scattering operators in the non-diagonal case. Having said that, by no means do we imply that quasiparticles with charge other than (multiples of) $pe/2q$ cannot move at all in the bulk of non-diagonal states.
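To make the constraint concrete, the following minimal Python sketch (purely illustrative, encoding only the charge-conservation rule derived above) tabulates which of the $N=2pq$ quasiparticle types can be scattered across a single wire by a charge-neutral local operator:
\begin{verbatim}
from math import gcd

def single_wire_hops(p, q):
    # Quasiparticle types of the bosonic nu = p/2q state are labeled by
    # n = qr - ps mod 2pq and carry charge n*e/2q.  A charge-neutral
    # local operator O^{r,s} has r = 0, i.e. n = -ps, so a hop across a
    # single wire is allowed only when p divides n.
    assert gcd(p, q) == 1, "p and q must be coprime"
    for n in range(2 * p * q):
        verdict = "allowed" if n % p == 0 else "forbidden"
        print("n = %2d (charge %d e/%d): %s" % (n, n, 2 * q, verdict))

single_wire_hops(1, 2)   # diagonal Laughlin state: every hop allowed
single_wire_hops(2, 1)   # non-diagonal state: odd n forbidden
\end{verbatim}
For $p=1$ every type is allowed, as in Fig. \ref{localoperator}(a), while for $p=2$, $q=1$ only the charge-$e$ quasiparticle (and the trivial one) can cross a single wire, as in Fig. \ref{localoperator}(b).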
Though such quasiparticles cannot hop across just a \textit{single} wire, they can indeed hop across \textit{two}. For instance, the following local operator:
\begin{equation}\label{scattertwowire}
\mathcal{O}^{\{-r,s\}}_{j-1}\mathcal{O}^{\{r,s\}}_j \propto {\rm exp}\;i[\frac{(qr-ps)}{2pq}(\phi^\text{R}_j-\phi^\text{L}_{j-1})]
\end{equation}
would scatter a quasiparticle with charge $e(qr-ps)/2q$ from link $j+1/2$ to link $j-3/2$. Note that in writing Eq. (\ref{scattertwowire}) we have used the fact that $\Theta_{j-1/2}=\phi^\text{R}_{j-1}-\phi^\text{L}_{j}$ is pinned at $2\pi n$ ($n\in \mathbb{Z}$) in the gapped bulk, so a numerical constant can be factored out, which explains the proportionality sign. This operator is charge-neutral: while $r$ electrons are removed from the $j$-th wire, $r$ electrons are injected into the $(j-1)$-th wire. The minimal quasiparticle can thus move across two wires via an inter-wire tunneling interaction that is both local and charge-preserving.

To summarize, non-diagonal quantum Hall states can be distinguished from the diagonal states in terms of the scattering pattern of bulk quasiparticles in the wire model. While all quasiparticles in the diagonal states can hop across a single wire, certain quasiparticles (including the minimal quasiparticle) in the non-diagonal states are only allowed to hop across two wires at a time. This then differentiates two types of quasiparticles: ones living on the \textit{even} links and others living on the \textit{odd} links. As we will show in Sec. \ref{sec3}, they can be associated to the $\mathbf{e}$-type and $\mathbf{m}$-type anyons in the $\mathbb{Z}_p$ toric code (also known as the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model), thus allowing us to assign a $\mathcal{D}(\mathbb{Z}_p)$ neutral sector to the non-diagonal states. After establishing this relation, we will also rigorously address the difference between non-diagonal states and strongly-clustered states, from the perspectives of intrinsic topological order and symmetry-enriched topological order.

\subsection{Fermionic states}\label{sec2.4}

In the above discussion we have been focusing on bosonic non-diagonal states. Here we explain that a similar constrained pattern of quasiparticle motion also arises in fermionic QH states. Moreover, in the analogous setting of a 2d weak topological superconductor, vortices exhibit a similar constrained motion that can be understood in terms of the fractional Josephson effect.

\subsubsection{Non-diagonal quantum Hall states}\label{sec2.4.1}

The coupled wire construction for the fermionic non-diagonal state is detailed in Appendix \ref{secappendixa}. The inter-wire coupling is essentially the same, but due to the non-local nature of fermions, which requires attaching a Jordan-Wigner string to the bosonized electron operator, the filling fraction is modified to
\begin{equation}
\nu = \frac{p}{p+2q}.
\end{equation}
In fact, most changes from the bosonic case to the fermionic case can be accounted for by substituting $2q \mapsto p+2q$. The annihilation operator for the minimal quasiparticle on link $\ell=j+1/2$ is
\begin{equation}
\Psi^{\text{R}/\text{L}}_{e/(p+2q), \ell} = e^{\frac{i}{p(p+2q)}\phi^{\text{R}/\text{L}}_{j/j+1}},
\end{equation}
where the quasiparticle carries charge $e/(p+2q)$.
Here $\phi^{\text{R}/\text{L}}$ is the chiral bosonic field of a circle CFT with compactification radius $R=\sqrt{p/(p+2q)}$, and $e^{\pm i\phi^{\text{R}/\text{L}}}$ creates/annihilates a charge-$pe$ trivial quasiparticle. A physical operator that scatters a quasiparticle across a single wire takes the following form:
\begin{equation}\label{fermioniclocalop2maintext}
\mathcal{O}^{\{r,s\}}_j = \text{exp} \frac{i}{p(p+2q)}[(qr-ps)\phi^\text{R}_j+(qr+ps+pr)\phi^\text{L}_j],
\end{equation}
with $r,s\in \mathbb{Z}$. To avoid confusion, let us be clear about our terminology: an operator is physical (and thus allowed) in the sense that it can be expressed in terms of electronic operators. For bosonic states, a physical operator is equivalently a local operator, as bosons are local objects. Thus, we have used the terms ``physical" and ``local" interchangeably in the earlier discussion. However, since fermions are non-local, a distinction should be made here.

\begin{figure}[t!] \includegraphics[width=8.5cm,height=4cm ]{flocaloperator.png}\centering \caption{\small{Diagrams of the allowed physical scattering operators acting on a single wire for a fermionic system. Each lattice point corresponds to an operator $e^{i(x\varphi-y\theta)}$. Red dots label the points $(x,y) = (r,r+2s)$ with $r,s\in \mathbb{Z}$, which correspond to physical operators. (a) For $p=1$ and $q=2$, which gives rise to the diagonal Laughlin state at $\nu=1/5$. (b) For $p=2$ and $q=1$, which gives rise to a non-diagonal fermionic state at $\nu=1/2$. The arrows connecting the origin to the points $(x,y)=(\pm p, p+2q)$ represent the creation/annihilation operators for the trivial quasiparticle, and they bound the shaded region, which covers all $N=p(p+2q)$ distinct quasiparticle scattering operators. Only operators on the vertical axis are charge-neutral, while all others involve injection/removal of electrons from a single wire. In particular, operators with odd $x$ (or $r$) also change the fermion-parity.}} \label{flocaloperator} \end{figure}

Physical operators in the fermionic state can be organized into a lattice as depicted in Fig. \ref{flocaloperator}, which is analogous to Fig. \ref{localoperator} for the bosonic state. Notice that the lattice here is in a checkerboard pattern because a physical operator is now attached to a Jordan-Wigner string. The action of $\mathcal{O}^{\{r,s\}}_j$ is to scatter a quasiparticle of charge $e(qr-ps)/(p+2q)$ across the $j$-th wire to become a quasiparticle of charge $-e(qr+ps+pr)/(p+2q)$. Analogous to the bosonic case, this operator violates charge conservation when $r\neq 0$, so the associated scattering process is forbidden in the presence of $U(1)$ charge symmetry. For the fermionic states with $p>1$, the minimal quasiparticle clearly cannot hop across just a single wire. It is instead constrained to hop across two wires at a time, which can be achieved by exchanging electrons between the two wires. Again, quasiparticles on the even links must be distinguished from those on the odd links, which we consider the defining feature of a non-diagonal quantum Hall state. As in the bosonic case, this allows us to associate the quasiparticles to anyons in the $\mathbb{Z}_p$ toric code. We describe the fermionic case in detail not only because it is physically more relevant, but also because there is a subtle difference between it and the bosonic case.
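As a companion to Fig. \ref{flocaloperator}, the following sketch (again purely illustrative) tabulates a window of physical operators for the fermionic state, marking which are charged and which flip the fermion parity; the latter distinction anticipates the discussion that follows:
\begin{verbatim}
from math import gcd

def fermionic_operators(p, q, rmax=2, smax=2):
    # Physical operators O^{r,s} of the fermionic nu = p/(p+2q) state:
    # O^{r,s} scatters a quasiparticle of charge e(qr-ps)/(p+2q) across
    # a single wire while removing r electrons from it.  The operator
    # is charged when r != 0 and flips the fermion parity when r is odd.
    assert gcd(p, q) == 1
    for r in range(-rmax, rmax + 1):
        for s in range(-smax, smax + 1):
            n = q * r - p * s
            charge = "neutral" if r == 0 else "charged"
            parity = "parity-flipping" if r % 2 else "parity-preserving"
            print("O^{%2d,%2d}: n = %3d, %s, %s" % (r, s, n, charge, parity))

fermionic_operators(2, 1)   # the nu = 1/2 state
\end{verbatim}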
For the bosonic non-diagonal states, we have emphasized the importance of charge conservation in constraining the motion of quasiparticles. However, for a fermionic system, one can also talk about the conservation of \textit{fermion parity}, which could play an additional role. Due to the non-locality of fermionic electrons, $\mathbb{Z}_2$ fermion-parity symmetry is more robust than the $U(1)$ charge symmetry. For fermionic states, the physical scattering operator $\mathcal{O}^{\{r,s\}}_j$ with $r\in 2\mathbb{Z}+1$ violates not only charge conservation but also fermion-parity conservation. Therefore, the constrained motion of \textit{some} quasiparticles in the fermionic state is more robustly protected by the fermion-parity symmetry. Having said that, in general the fermion-parity symmetry cannot completely replace the role of charge symmetry. We will elaborate on this issue when we discuss the symmetry-enrichment of bosonic and fermionic non-diagonal states in Sec. \ref{sec3}. Before moving on, it is instructive to take a digression and consider a setting different from the FQH state, where fermion-parity symmetry \textit{alone} can constrain the motion of low-energy excitations in the way we have just discussed.

\subsubsection{Fractional Josephson effect}\label{sec2.4.2}

Let us consider a wire model consisting of one-dimensional superconductors, each described by a single-channel quantum wire with attractive interactions. With superconductors, charge is no longer conserved while fermion-parity still is. Instead of coupling wires to form a quantum Hall state, a two-dimensional superconductor is formed by locking the pairing phases between neighboring wires. As discussed in Refs. \citep{CWC17, CWC18}, such a wire has two distinct phases. One is the ``strongly-paired" phase, where effectively all electrons are bound into Cooper pairs, in which case the wire is a 1d \textit{trivial} superconductor. The other is the ``weakly-paired" phase, described by the coexistence of unpaired electrons and Cooper pairs, in which case the wire is a 1d \textit{topological} superconductor; this phase has been shown to adiabatically connect to the Luttinger liquid phase. Here we are concerned with how superconducting vortices, which are analogs of the quasiparticles in the quantum Hall setting, can move around in the coupled wire model when the constituent wires are either trivial or topological superconductors. The minimal vortex carries flux $h/2e$, around which the pairing phase $\Theta_{\rm sc}$ is advanced by $2\pi$. When the wires are trivial, or in the ``strongly-paired" phase, the vortex has no issue tunneling across a single wire, because the wire contains only charge-$2e$ Cooper pairs, which are local with respect to the vortex. As the vortex tunnels across a trivial superconductor and induces a $2\pi$ phase slip, the wire simply returns to its original state, in accordance with the ordinary Josephson effect. Things are different when the wires are 1d topological superconductors, which, when coupled together, form the \textit{weak topological superconductor}. A possible material realization of this setup has been proposed for a thin slab of Sr$_2$RuO$_4$ with enhanced pairing instability for the quasi-1D band \cite{Raghu2010weakTSC, Hughes2014weakTSC}. In this case there are unpaired electrons in each wire, which are non-local with respect to the $h/2e$-vortex. Consequently, tunneling the vortex across the wire would lead to the fractional Josephson effect as illustrated in Fig. \ref{josephson}.
The tunneling process can be modeled by cutting the wire open at the place where the process happens, and since the wire is topological, each open end hosts a Majorana mode. The Majorana modes are coupled in the Josephson junction and together define a fermion parity for the weak link. As predicted by Kitaev \citep{KitaevChain}, a $2\pi$ phase slip leads to a switch of fermion parity in the ground state, and thus injection/removal of an electron is required in order to move a single vortex across the wire. The fermion-parity symmetry for a single wire thus forbids the minimal vortex from moving across it.

\begin{figure}[b!] \includegraphics[width=8cm,height=4.1cm ]{Josephson.png}\centering \caption{\small{Fractional Josephson effect in a weak topological superconductor. (a) shows a wire model for the weak topological superconductor, with individual wires being 1d topological superconductors. Between two wires lives a single vortex excitation, around which the pairing phase $\Theta_{\rm sc}$ is advanced by $2\pi$. When the vortex hops across the middle wire, its associated branch cut is also dragged across the wire to induce a $2\pi$ phase slip there. (b) illustrates the above process by modeling the place where the vortex crosses the wire as a Josephson junction. There the topological superconductor is cut open and hosts two Majorana modes that define a fermion parity in the circled region. (c) shows the evolution of energy levels of the coupled Majorana modes as a function of phase difference. A $2\pi$ phase slip leads to a change of fermion parity in the ground state.}} \label{josephson} \end{figure}

In this situation, there are two ways for vortices to move in the bulk. One way is for a \textit{double-vortex} to tunnel across a single wire, which leads to a total $4\pi$ phase slip that restores the fermion parity of the wire. Also, from the perspective of locality the $h/e$-vortex is local with respect to everything in the topological superconductor, and hence should be allowed to tunnel across. Alternatively, a single $h/2e$-vortex can tunnel across \textit{two} wires at a time, as this can be achieved by exchanging fermion parity between the two wires. These features are analogous to what we have advertised for the non-diagonal QH states, and later we will see that they give rise to an interpretation of the $h/2e$ vortices as anyons in the toric code. We will further comment on this similarity, as well as on an important difference from the non-diagonal states, in Sec. \ref{sec3.2}.

Aside from the scattering pattern of low-energy excitations, there is yet another revealing similarity with the original wire model for QH states that is worth mentioning. Just as 1d topological superconductors host Majorana end modes, the quantum wires used for constructing non-diagonal QH states host $\mathbb{Z}_p$ parafermion end modes. This is related to the inter-wire coupling discussed in Sec. \ref{sec2.1}, which preserves the particle number mod $p$ of each wire. Coupling together these modes that appear at the top/bottom edge would lead to an edge theory that is fundamentally different from the one describing the left/right side edge. We analyze this in detail in Sec. \ref{sec4}. Given discrete translation symmetry in the bulk, this leads to a gapless theory (at least for $p=2,3$) for the top/bottom edge that is even richer than the side-edge theory already addressed in Sec. \ref{sec2.2}.
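Since the $\mathbb{Z}_p$ clock algebra underlies both these parafermion modes (via the standard Fradkin--Kadanoff construction) and the edge analysis of Sec. \ref{sec4}, we record a minimal matrix realization for reference; this is a generic sketch, independent of the microscopic wire operators:
\begin{verbatim}
import numpy as np

p = 3                                    # any integer p >= 2
w = np.exp(2j * np.pi / p)               # primitive p-th root of unity

tau = np.diag([w**k for k in range(p)])  # clock operator (measures Z_p charge)
sigma = np.roll(np.eye(p), 1, axis=0)    # shift operator (raises Z_p charge)

# Defining relations: tau^p = sigma^p = 1 and tau sigma = w sigma tau
assert np.allclose(np.linalg.matrix_power(tau, p), np.eye(p))
assert np.allclose(np.linalg.matrix_power(sigma, p), np.eye(p))
assert np.allclose(tau @ sigma, w * sigma @ tau)
\end{verbatim}
For $p=2$ these reduce to the Pauli matrices $Z$ and $X$, consistent with the Majorana case.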
Let us first unveil more of the interesting bulk physics using the tools just developed, which will eventually guide us to a complete description of the edge of non-diagonal states.

\section{\label{sec3}Neutral sector as $\mathbb{Z}_p$ toric code}

In this section we first calculate the braiding statistics of quasiparticles in non-diagonal quantum Hall states, so as to reveal a ``hidden" $\mathbb{Z}_p$ topological order that can be attributed to the neutral sector. It will be explained later that this additional topological order originates from \textit{symmetry-enrichment} \cite{Ran2013SET, Hung2013SET, Lu2016SET, Cheng2016translationSET, Chen2017SET}, which ultimately distinguishes the non-diagonal states from the strongly-clustered states. For simplicity of exposition, the following discussion mostly refers to the bosonic states; we comment separately when the fermionic case warrants a distinction.

As demonstrated in the wire model for the $\nu=p/2q$ non-diagonal state, quasiparticles are created/annihilated by the vertex operators in Eqs. (\ref{QPcreation}) and (\ref{QPannihilation}), so the fusion algebra is simply Abelian. To characterize the topological order, we focus our attention on the braiding statistics. At first sight, the topological data appear to resemble those of the Laughlin states. Indeed, the non-trivial Abelian quasiparticle with minimal charge $e/2q$ also exists in the Laughlin state of charge-$pe$ bosons at filling $\nu=1/2pq$ (also known as the strongly-clustered state). Nevertheless, as we have noted before, quasiparticles in the non-diagonal states have constrained motion in the bulk, which differentiates excitations on the even links from those on the odd links. The result of braiding depends on whether two quasiparticles live on links of the same type or not, and as we will see in Sec. \ref{sec3.1}, the result can be understood in the context of the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model by associating quasiparticles on even/odd links to $\mathbf{e}/\mathbf{m}$-particles respectively. The $\mathcal{D}(\mathbb{Z}_p)$ quantum double model has a $\mathbb{Z}_p$ topological order, and is also known as the $\mathbb{Z}_p$-generalization of Kitaev's toric code (which has $p=2$) \cite{Teo2015twistliquid, Teo2016AS, Kitaev2006exactly, Kitaev2003TQC, Barkeshli2019AS}. It is important to notice that the distinction between even links and odd links originates from $U(1)$ charge symmetry. Specific examples of non-diagonal states are analyzed in Sec. \ref{sec3.2} to demonstrate how this affects the quasiparticle spectrum. Moreover, the $\mathbf{e}\leftrightarrow\mathbf{m}$ anyonic relabeling is related to the $\mathbb{Z}$ translation in the coupled wire model, which leads us to eventually identify the non-diagonal states, with a $\mathbb{Z}_p$ toric code in the neutral sector, as a $U(1)\times \mathbb{Z}$ symmetry-enriched topological (SET) order. Furthermore, the ``gauging" of anyonic symmetry can be physically realized in the wire model as the proliferation of dislocation defects, which are sudden terminations of wires in the bulk. We explain these in Sec. \ref{sec3.3}.

\subsection{Braiding statistics}\label{sec3.1}

The braiding statistics is encoded in the quasiparticle operators studied in Sec. \ref{sec2.3}. Let us begin with a quasiparticle of charge $n e/2q$ on link $\ell=j+1/2$. Here we denote $n=qr-ps$ for some $r, s \in \mathbb{Z}$. Following Eq.
(\ref{scattertwowire}), the local operator that transfers this quasiparticle from link $\ell$ to link $\ell-2\mathcal{N}$ can be written as
\begin{equation} \begin{split} &\prod_{\eta=0}^{\mathcal{N}-1}\mathcal{O}^{\{-r,s\}}_{j-1-2\eta}(x) \mathcal{O}^{\{r,s\}}_{j-2\eta}(x)\\ &= e^{i\frac{n}{2pq}[\phi^\text{R}_j (x) - \phi^\text{L}_{j-2\mathcal{N}+1}(x)]}\;\;\times\\ &\;\;\;\;\;\prod_{\mu=1}^{2\mathcal{N}-1} e^{i\frac{n}{2pq}\Theta_{\ell-\mu}(x)} \times \prod_{\eta = 0}^{\mathcal{N}-1}e^{-i\frac{r_0 n}{p}\Theta_{\ell-2\eta-1}(x)} \end{split} \end{equation}
with $r_0$ in the last term defined in Eq. (\ref{Bezout}). The first term with the chiral fields clearly generates the anticipated scattering process. Here we have made explicit the wire coordinate $x$, and retained the link variables $\Theta(x)$, which are essential for deducing the braiding phase. Notice that in the last equality the second term is contributed by every link between the first ($\ell$) and the last ($\ell-2\mathcal{N}$), while the third term is contributed only by the links whose parity is opposite to that of $\ell$.

In order to consider a closed loop for a braiding process one also needs to move a quasiparticle on link $\ell$ \textit{along} the wire direction, say from $x_1$ to $x_2$. This is accomplished by the following operator:
\begin{equation} \begin{split} \wp^{\text{R}/\text{L}}_{\ell}(x_1,x_2) &= {\rm exp}\;i\frac{n}{2pq}[\phi^{\text{R}/\text{L}}_{j/j+1}(x_1)-\phi^{\text{R}/\text{L}}_{j/j+1}(x_2)]\\ & = {\rm exp}\;i\frac{n}{2pq}\int_{x_2}^{x_1}dx \; \partial_x \phi^{\text{R}/\text{L}}_{j/j+1}(x)\;. \end{split} \end{equation}
This is indeed a local operator, as the last equality suggests that it can be expressed in terms of bare electron densities and currents. With these established, we can consider transferring a quasiparticle around a closed loop by local operators. To be specific, let us set up a coordinate $(\ell, x)$ for a quasiparticle at position $x$ on link $\ell$; we then consider the loop $\mathcal{C}: (\ell, x_1) \rightarrow (\ell-2\mathcal{N}, x_1) \rightarrow (\ell-2\mathcal{N}, x_2) \rightarrow (\ell, x_2) \rightarrow (\ell, x_1)$. An example is depicted in Fig. \ref{braiding}.

\begin{figure}[t!] \includegraphics[width=8cm,height=5.3cm ]{braiding.png}\centering \caption{\small{Schematic illustration of a braiding process between two quasiparticles, one living on the even links and another on the odd links.}} \label{braiding} \end{figure}

It is now clear that the following phase is picked up after completing the loop $\mathcal{C}$:
\begin{equation} \begin{split} &\prod_{\mu=1}^{2\mathcal{N}-1} e^{i\frac{n}{2pq}[\Theta_{\ell-\mu}(x_1)-\Theta_{\ell-\mu}(x_2)]}\\ & \times \prod_{\eta = 0}^{\mathcal{N}-1}e^{-i\frac{r_0 n}{p}[\Theta_{\ell-2\eta-1}(x_1)-\Theta_{\ell-2\eta-1}(x_2)]}. \end{split} \end{equation}
Recall that the bulk is gapped so that $\Theta$'s are pinned at integer multiples of $2\pi$, while quasiparticle excitations from the ground state correspond to $2\pi$-kinks. Thus the first term is contributed by every enclosed quasiparticle, while the second term is contributed only by the enclosed quasiparticles that live on the links with parity \textit{different} from that of $\ell$.
Hence the braiding phase for two quasiparticles $\mathbf{a}$ and $\mathbf{b}$, with charge $n_a e/2q$ and $n_b e/2q$ respectively, is encoded in the following matrix:
\begin{equation}\label{braidingmatrix}
M_{\bar{a}\bar{b}} = e^{2\pi i\frac{n_a n_b }{2pq}} \begin{pmatrix} 1 & e^{-2\pi i \frac{r_0 n_a n_b}{p}} \\ e^{-2\pi i \frac{r_0 n_a n_b}{p}} & 1 \end{pmatrix},
\end{equation}
with the matrix index $\bar{a}=1/2$ for the quasiparticle $\mathbf{a}$ living on the odd/even links. The braiding statistics for the fermionic state is obtained by substituting $2q$ with $p+2q$ in the above discussion (in both cases $r_0$ is defined by $qr_0-ps_0=1$).

The first factor gives the mutual statistics between quasiparticles of the same type, namely those living on links of the same parity. The same braiding statistics describes a strongly-clustered state, which is essentially a Laughlin state of charge-$pe$ particles at filling $\nu_{pe}=1/2pq$ (or $\nu_{pe}=1/p(p+2q)$ in the fermionic case). For $p=1$, this is the full story because even and odd links need not be distinguished. However, for $p>1$, which corresponds to a non-diagonal state, the topological order is richer. The second factor in Eq. (\ref{braidingmatrix}) is not an identity matrix for $p>1$, and since $r_0$ is by definition coprime to $p$, it is actually the braiding matrix for the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model. Below, we briefly review this well-known topological order.

\subsubsection{The $\mathcal{D}(\mathbb{Z}_p)$ quantum double model}

The $\mathcal{D}(\mathbb{Z}_p)$ quantum double model is a non-chiral Abelian topological order that can be realized in the deconfined phase of the $\mathbb{Z}_p$ discrete gauge theory in 2+1D. Alternatively, it can be characterized by a two-component Chern-Simons theory with the following Lagrangian \cite{WenZee1992CS}:
\begin{equation}
\mathcal{L} = \frac{\epsilon^{\mu\nu\lambda}}{4\pi}\vec{\alpha}^T_\mu K \partial_\nu \vec{\alpha}_{\lambda}+ \vec{\alpha}^T_\mu \vec{j}_\mu,
\end{equation}
where $\vec{\alpha}^T = (\alpha^1, \alpha^2)$ represents the internal $U(1)^2$ gauge field and $\vec{j}$ is the quasiparticle current. The $K$-matrix which encodes all the topological data is
\begin{equation}
K = p\sigma_x = \begin{pmatrix} \;\;0\; & \;\;p\;\; \\ \;\;p\;& \;\;0\;\; \end{pmatrix}.
\end{equation}
The chiral central charge of this phase vanishes, as the $K$-matrix has zero signature. A quasiparticle is labeled by a two-component vector $\vec{t}$ defined on the so-called anyon integral lattice $\Gamma^* = \mathbb{Z}^2$, while the sublattice $\Gamma = K\Gamma^*$ consists of local particles, which braid trivially with all quasiparticles and thus belong to the identity topological sector. Hence, distinct quasiparticles are defined on the quotient lattice $\Gamma^*/\Gamma$, which in this case has $p^2$ points. There are two types of minimal quasiparticles: the $\mathbf{e}$-particle with $\vec{t}^T = (1,0)$ and the $\mathbf{m}$-particle with $\vec{t}^T = (0,1)$. The $p^2$ distinct quasiparticles in the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model can thus be labeled by $\mathbf{e}^\alpha\mathbf{m}^\beta$, where $0\leq \alpha < p$ and $0\leq \beta < p$.

The complete topological information is specified by the fusion algebra and the braiding statistics. The fusion algebra is Abelian:
\begin{equation}\label{TCfusion}
\mathbf{e}^{\alpha_1}\mathbf{m}^{\beta_1} \times \mathbf{e}^{\alpha_2}\mathbf{m}^{\beta_2} = \mathbf{e}^{\alpha_1+\alpha_2}\mathbf{m}^{\beta_1+\beta_2}.
\end{equation}
Notice that $\mathbf{e}^{p}=\mathbf{m}^{p}=\mathds{1}$ is the trivial quasiparticle. The self and mutual statistics are encoded in the $\mathcal{T}$ and $\mathcal{S}$ matrices (the fusion algebra also follows from $\mathcal{S}$ through the Verlinde formula), and in the $K$-matrix formulation they are given by:
\begin{equation}\label{STmatrices}
\mathcal{T}_{\mathbf{a}\mathbf{b}}= \delta_{\mathbf{a}\mathbf{b}}e^{\pi i\vec{a}^TK^{-1}\vec{a}}, \;\; \mathscr{D}\mathcal{S}_{\mathbf{a}\mathbf{b}} = e^{2\pi i\vec{a}^TK^{-1}\vec{b}}.
\end{equation}
Here $\vec{a}, \vec{b} \in \Gamma^*/\Gamma$ are the vectors in the anyon lattice labeling the two quasiparticles $\mathbf{a}$ and $\mathbf{b}$, and $\mathscr{D}=\sqrt{\abs{{\rm det}K}}=p$ is the total quantum dimension. It follows that $\mathbf{e}$-particles and $\mathbf{m}$-particles are all self-bosons (they have trivial self-exchange statistics), while $\mathbf{e}^{\alpha}$ and $\mathbf{m}^\beta$ have a non-trivial braiding phase of $e^{2\pi i\frac{\alpha\beta}{p}}$. The case with $p=2$ has four anyons: $\mathds{1}, \mathbf{e}, \mathbf{m}$ and $\psi=\mathbf{em}$ (a composite fermion), which is exactly the topological order in Kitaev's toric code \cite{Kitaev2003TQC, Kitaev2006exactly}.

There is an important global symmetry in the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model, known as the $\mathbb{Z}_2$ $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry. More precisely, it is an anyon \textit{relabeling} symmetry that interchanges the $\mathbf{e}$-particles with the $\mathbf{m}$-particles, leaving the fusion rules and braiding statistics invariant. In the $K$-matrix formalism, the $\mathbb{Z}_2$ anyonic symmetry is implemented by acting with $\sigma_x$ on the anyon lattice $\Gamma^*$, or equivalently by transforming $K \rightarrow \sigma_x^T K \sigma_x =K$. As the $K$-matrix is unchanged, all topological information is left invariant. Anyonic symmetry is of physical importance because a non-Abelian phase can be obtained by gauging the anyonic symmetry in an Abelian phase \cite{Teo2015twistliquid, Teo2016AS, Barkeshli2019AS}.

\subsubsection{$\mathcal{D}(\mathbb{Z}_p)$ in non-diagonal QH states}

Returning to our original discussion, one realizes that the braiding statistics in Eq. (\ref{braidingmatrix}) can be understood by viewing the (bosonic) non-diagonal quantum Hall state as consisting of a $U(1)_{2pq}$ charge sector and a $\mathcal{D}(\mathbb{Z}_p)$ neutral sector. The net braiding phase is obtained by adding the phases from the charge sector and the neutral sector. A quasiparticle of charge $n_a e/2q$ excited on an even link can be labeled by $(n_a, \mathbf{e}^{-r_0 n_a})$, while a quasiparticle of charge $n_b e/2q$ excited on an odd link can be labeled by $(n_b, \mathbf{m}^{n_b})$. A generic quasiparticle, which can be viewed as a composite of quasiparticles from even and odd links, is then denoted as
\begin{equation}\label{quasiparticlelabel}
(n_a+n_b\;,\; \mathbf{e}^{-r_0 n_a}\mathbf{m}^{n_b}).
\end{equation}
The first component represents the electric charge (in units of $e/2q$), while the second component represents the neutral sector and obeys $\mathbf{e}^{p}=\mathbf{m}^{p}=\mathds{1}$. The interpretation of a $\mathbb{Z}_p$ toric code in the neutral sector also implies that the $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry is concretely realized in the wire model as the discrete translation symmetry by one wire.
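As a quick numerical consistency check (an illustrative sketch based only on the $K$-matrix formulas in Eq. (\ref{STmatrices})), one can verify that the relabeling $\mathbf{e}^\alpha\mathbf{m}^\beta \mapsto \mathbf{e}^\beta\mathbf{m}^\alpha$ indeed leaves both $\mathcal{S}$ and $\mathcal{T}$ invariant:
\begin{verbatim}
import numpy as np

p = 3                                   # shown for p = 3; any p >= 2 works
K = p * np.array([[0, 1], [1, 0]])
Kinv = np.linalg.inv(K)

# Anyons e^a m^b are labeled by vectors (a, b) on the quotient lattice.
anyons = [(a, b) for a in range(p) for b in range(p)]
S = np.array([[np.exp(2j * np.pi * (np.array(x) @ Kinv @ np.array(y)))
               for y in anyons] for x in anyons]) / p
T = np.diag([np.exp(1j * np.pi * (np.array(x) @ Kinv @ np.array(x)))
             for x in anyons])

# The e-m relabeling permutes (a, b) -> (b, a); S and T are invariant.
perm = [anyons.index((b, a)) for (a, b) in anyons]
P = np.eye(p * p)[perm]
assert np.allclose(P @ S @ P.T, S)
assert np.allclose(P @ T @ P.T, T)
\end{verbatim}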
More precisely, in our wire model the $\mathbb{Z}_2$ anyonic symmetry interchanges $\mathbf{e}^{-r_0} \leftrightarrow \mathbf{m}$, with $r_0$ defined in Eq. (\ref{Bezout}). Next, let us analyze some specific examples of non-diagonal states, which will familiarize us with the connection to the $\mathbb{Z}_p$ toric code just advertised. Moreover, they highlight the importance of symmetry considerations, particularly the $U(1)$ charge symmetry, for characterizing the topological order of non-diagonal states. There are also exceptional cases in which the fermion-parity symmetry can replace the role of charge symmetry.

\subsection{Examples}\label{sec3.2}

\subsubsection{Bosonic state}

We first study a representative example of bosonic non-diagonal states, with $p=2$ and $q=1$, which occurs at filling $\nu=1$. According to the discussion in Sec. \ref{sec2.3}, each link hosts four distinct quasiparticle excitations, which have charge $0$, $e/2$, $e$ and $3e/2$ respectively. Figure \ref{localoperator}(b) summarizes the possible scattering operators. In a system with charge conservation, only the charge-$e$ quasiparticle (and the trivial quasiparticle) can hop across a single wire, while the charge-$e/2$ and charge-$3e/2$ excitations cannot, so those on the even links are regarded as different from those on the odd links. A quasiparticle excitation composed of a charge-$e/2$ excitation on the even link and a charge-$e/2$ excitation on the odd link, hence with total charge $e$, is then distinct from a single charge-$e$ excitation on either the even or odd link. Using the labeling introduced in Eq. (\ref{quasiparticlelabel}), we have the following quasiparticle spectrum:
\begin{equation}\label{spectrum} \begin{alignedat}{4} &\text{charge }0&&:\;\;\;(0,\mathds{1}),\;&&&(0,\mathbf{em})\\ &\text{charge }e/2&&:\;\;\;(1, \mathbf{e}),&&&(1, \mathbf{m})\\ &\text{charge }e&&:\;\;\;(2, \mathds{1}),\;&&&(2, \mathbf{em})\\ &\text{charge }3e/2&&:\;\;\;(3, \mathbf{e}),&&&(3, \mathbf{m})\\ \end{alignedat} \end{equation}
The first component labels the $U(1)_4$ charge sector, and the second component labels the $\mathcal{D}(\mathbb{Z}_2)$ neutral sector. It is important to notice that, from the point of view of \textit{intrinsic} topological order, $(2,\mathbf{em})$ should really be treated as a trivial quasiparticle due to its trivial self- and mutual statistics, which in turn reduces the spectrum to only four distinct quasiparticles. Specifically, by fusing with $(2,\mathbf{em})$, $(1, \mathbf{m})$ would be identified with $(3,\mathbf{e})$, $(3, \mathbf{m})$ would be identified with $(1,\mathbf{e})$, and $(0, \mathbf{em})$ would be identified with $(2,\mathds{1})$. From this perspective, it seems unnecessary to assign a neutral sector, as $\mathbf{m}$-particles can be identified with $\mathbf{e}$-particles. The intrinsic topological order in this example is thus the same as the strongly-paired state, which only has the $U(1)_4$ charge sector. However, the importance of the neutral sector becomes clear from the \textit{symmetry-enriched} perspective. In particular, the constrained motion of quasiparticles in the wire model is tied to the $U(1)$ charge symmetry, which motivates us to study the non-diagonal states in the presence of charge conservation. This then requires us to distinguish quasiparticles with different electric charge, and forbids us from identifying $\mathbf{e}$-particles with $\mathbf{m}$-particles as above. A similar discussion applies to a generic bosonic non-diagonal state.
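For concreteness, the claimed triviality of $(2,\mathbf{em})$ can be checked numerically from the composite labels of Eq. (\ref{quasiparticlelabel}), with the charge-sector phase as in Eq. (\ref{braidingmatrix}) and the neutral-sector phase taken from Eq. (\ref{STmatrices}); the following sketch (illustrative only, assuming the standard $U(1)_{2pq}$ spin $n^2/4pq$) computes its topological spin and its mutual braiding with the spectrum in Eq. (\ref{spectrum}):
\begin{verbatim}
import numpy as np

p, q = 2, 1          # bosonic non-diagonal state at nu = 1
# Composite quasiparticle (n, a, b): charge n*e/2q, neutral label e^a m^b.

def braid(x, y):     # mutual braiding phase: charge sector x neutral sector
    (n1, a1, b1), (n2, a2, b2) = x, y
    return np.exp(2j * np.pi * n1 * n2 / (2 * p * q)) \
         * np.exp(2j * np.pi * (a1 * b2 + b1 * a2) / p)

def spin(x):         # topological spin (self-statistics)
    n, a, b = x
    return np.exp(1j * np.pi * n * n / (2 * p * q)) \
         * np.exp(2j * np.pi * a * b / p)

spectrum = [(0,0,0), (0,1,1), (1,1,0), (1,0,1),
            (2,0,0), (2,1,1), (3,1,0), (3,0,1)]
E = (2, 1, 1)        # the candidate trivial quasiparticle (2, em)
assert np.isclose(spin(E), 1)
assert all(np.isclose(braid(E, y), 1) for y in spectrum)
\end{verbatim}
All phases equal unity, confirming that $(2,\mathbf{em})$ braids trivially with the entire spectrum; only its electric charge distinguishes it.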
\subsubsection{Fermionic state}

Next we study a special example of fermionic non-diagonal states, with $p=2$ and $q=1$, which occurs at filling $\nu=1/2$. This state is presumably more relevant experimentally, and moreover, it is exceptional from the symmetry perspective. Unlike in the bosonic states, the $\mathcal{D}(\mathbb{Z}_2)$ neutral sector need not be protected by the $U(1)$ charge symmetry. Instead, the $\mathbb{Z}_2$ fermion-parity symmetry suffices to distinguish $\mathbf{e}$-particles on even links from the $\mathbf{m}$-particles on odd links. To see this, we list the quasiparticle spectrum:
\begin{equation}\label{fermionspectrum} \begin{alignedat}{4} &\text{charge }0&&:\;\;\;(0,\mathds{1}),\;&&&(0,\mathbf{em})\\ &\text{charge }e/4&&:\;\;\;(1, \mathbf{e}),&&&(1, \mathbf{m})\\ &\text{charge }e/2&&:\;\;\;(2, \mathds{1}),\;&&&(2, \mathbf{em})\\ &\text{charge }3e/4&&:\;\;\;(3, \mathbf{e}),&&&(3, \mathbf{m})\\ &\text{charge }e&&:\;\;\;(4,\mathds{1}),\;&&&(4,\mathbf{em})\\ &\text{charge }5e/4&&:\;\;\;(5, \mathbf{e}),&&&(5, \mathbf{m})\\ &\text{charge }3e/2&&:\;\;\;(6, \mathds{1}),\;&&&(6, \mathbf{em})\\ &\text{charge }7e/4&&:\;\;\;(7, \mathbf{e}),&&&(7, \mathbf{m}) \end{alignedat} \end{equation}
The first component labels the $U(1)_8$ charge sector, and the second component labels the $\mathcal{D}(\mathbb{Z}_2)$ neutral sector. These quasiparticles can move in the bulk via the physical operators summarized in Fig. \ref{flocaloperator}(b). Using the fermionic version of Eq. (\ref{braidingmatrix}), one can check that $(4,\mathbf{em})$ braids trivially with all quasiparticles. However, strictly speaking it does not belong to the identity sector as it carries topological spin $-1$. In fact, $(4,\mathbf{em})$ corresponds to the physical electron. For a fermionic topological order, the physical electron is usually included in the counting of topological excitations (this is known as fermion-parity grading). Thus the above spectrum is complete and irreducible. To put it another way, the fermion-parity symmetry ensures the distinction between $\mathbf{e}$-particles and $\mathbf{m}$-particles, as turning one into another would require a switch in fermion-parity.

It is easy to verify that the distinction between even and odd links is robust for all $p=2$ non-diagonal states (\textit{i.e.} $q$ can be an arbitrary odd integer). However, for $p>2$, fermion-parity is not enough to protect the $\mathcal{D}(\mathbb{Z}_p)$ neutral sector. For odd $p$, any $\mathbf{m}$-particle can be transformed into an $\mathbf{e}$-particle by adding/removing an \textit{even} number of electrons. For even $p>2$, $\mathbf{m}^{2\mathbb{Z}}$-particles can be transformed into $\mathbf{e}^{2\mathbb{Z}}$-particles without changing fermion-parity. Therefore, except for $p=2$, both fermionic and bosonic non-diagonal states generally rely on the $U(1)$ charge symmetry to protect the $\mathcal{D}(\mathbb{Z}_p)$ neutral sector.

\subsubsection{Weak topological superconductor}\label{sec3.2.3}

The third example is related to the digression taken in Sec. \ref{sec2.4.2}. There we have considered a coupled wire model of a 2d weak topological superconductor (TSC), in which the vortex excitations have a constrained motion similar to that of the quasiparticles of non-diagonal quantum Hall states. Recall that the conservation of fermion-parity dictates that the $h/2e$ vortex be tunneled across two wires at a time, so the vortex excited on an even link should be differentiated from the one on an odd link.
When the vortex is tunneled across two wires, there is an accompanying tunneling of an electron between the wires, so braiding an $h/2e$ vortex on an even link around another on an odd link requires an electron to be transferred around a $\pi$-flux. This results in a braiding phase of $e^{i\pi}$. Equivalently, one could understand this by viewing the $\pi$-flux on an even link as a composite of a $\pi$-flux on an odd link together with a single electron. Therefore, the $\pi$-fluxes on even/odd links can be viewed as the $\mathbf{e}/\mathbf{m}$-anyons in the $\mathbb{Z}_2$ toric code.

This situation is similar to the $p=2$ fermionic non-diagonal state, in that neither requires $U(1)$ charge symmetry to protect the neutral sector. However, the weak TSC differs from the $p=2$ fermionic non-diagonal state in another important aspect: the $\mathbb{Z}$ translation symmetry in the weak TSC is \textit{not essential} for the $\mathbb{Z}_2$ topological order. While the translation symmetry acts as the $\mathbf{e} \leftrightarrow \mathbf{m}$ anyonic symmetry for the toric code (which is a virtue of the wire model), with or without this symmetry the superconductor always has a topological order. This is indeed a well-known fact: a fully-gapped superconductor coupled to \textit{dynamical electromagnetism} has a $\mathbb{Z}_2$ topological order \cite{Hansson2004SCisTO, Bonderson2013quasiTO, Radzihovsky2017TOinSC}. On the other hand, as we are going to elaborate below, the presence of translation symmetry is actually \textit{essential} to the $\mathcal{D}(\mathbb{Z}_p)$ neutral sector of non-diagonal QH states. Next, we discuss the importance of charge symmetry and translation symmetry in a more systematic manner.

\subsection{Symmetry enrichment}\label{sec3.3}

We have now established that quasiparticles on the even/odd links can be associated to $\mathbf{e}$/$\mathbf{m}$-particles respectively. This leads us to interpret the non-diagonal states as having a $U(1)_{2pq}$ charge sector (for fermionic states it would be $U(1)_{p(p+2q)}$) and a $\mathcal{D}(\mathbb{Z}_p)$ neutral sector. This interpretation is useful as it consistently describes the fusion and braiding properties of the non-diagonal states. However, it is important to ask whether this interpretation is \textit{essential}. This is equivalent to asking whether the non-diagonal state is really different (and if so, under what circumstances) from a strongly-clustered state. From the perspective of a single wire that constitutes the wire model, as we have discussed in Sec. \ref{sec2.2}, these two states are respectively related to two distinct non-chiral CFTs, one at radius $R=\sqrt{p/2q}$ (\textit{i.e.} non-diagonal) and the other at $R=1/\sqrt{2pq}$ (\textit{i.e.} diagonal), which suggests that the answer is yes. To fully answer the question we need to address the role of two symmetries: charge conservation and translation symmetry. The former has been hinted at in the examples just analyzed, while the latter is related to the $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry. Here we discuss the symmetry issue from the bulk perspective, and in the next section we study the implications for the boundary.

\subsubsection{Charge conservation}

From the specific examples analyzed in Sec. \ref{sec3.2}, we have seen that charge conservation plays an important role in constraining the quasiparticle scattering pattern.
Here we provide a more general argument to establish the $U(1)$ charge symmetry as a necessary ingredient to protect the neutral sector. We focus on the bosonic states first. As discussed in Sec. \ref{sec2.3}, certain local scattering operators are charged for non-diagonal states ($p>1$), indicating that hopping the associated quasiparticle across a single wire would violate charge conservation. Instead, the charge-conserving process is for a quasiparticle to hop across two wires at a time, thus differentiating excitations on the even and odd links. In the absence of $U(1)$ charge symmetry, however, such a distinction would be meaningless. For the bosonic non-diagonal state at filling $\nu=p/2q$, a local electron operator $e^{-i\varphi_j}$ creates a $(2q, \mathbf{e}^{-r_0q}\mathbf{m}^q)$-excitation, which is trivial from the perspective of intrinsic topological order. By fusing with $(2q, \mathbf{e}^{-r_0q}\mathbf{m}^q)$-excitations, all $\mathbf{m}$-particles can be transformed to $\mathbf{e}$-particles, thus rendering the neutral sector label meaningless. Hence, \textit{without} charge conservation the non-diagonal state of electrons at filling $\nu=p/2q$ is topologically equivalent to the Laughlin state of $pe$-clusters at filling $\nu_{pe}=1/2pq$ (or the strongly-clustered state for short). In the presence of $U(1)$ charge symmetry, quasiparticles should be distinguished not only by their braiding statistics but also by their symmetry charge. This in turn distinguishes the $\mathbf{e}$-particles from the $\mathbf{m}$-particles in non-diagonal states. The only trivial quasiparticle, in the ``symmetry-enriched" sense, is $(0,\mathds{1})$. While it is also true for the strongly-clustered state that quasiparticles with different electric charge should be distinguished in the presence of $U(1)$ symmetry, there is no enriched neutral sector in that case.

The situation is similar for the fermionic states. Following Eq. (\ref{fermioniclocalop2maintext}), the physical operator $\mathcal{O}^{\{-2,0\}}_j$ creates the $(2(p+2q), \mathbf{e}^{-2r_0q}\mathbf{m}^{2(p+q)})$-excitation. This is equivalent to a pair of electrons, so the fermion-parity is preserved. Notice that when $p$ is odd, $2(p+q)$ is coprime to $p$ (given our assumption that $p$ and $q$ are coprime), thus fusing with an appropriate number of the $(2(p+2q), \mathbf{e}^{-2r_0q}\mathbf{m}^{2(p+q)})$-excitations can turn any $\mathbf{m}$-particle into an $\mathbf{e}$-particle. When $p$ is even, fusing with the $(2(p+2q), \mathbf{e}^{-2r_0q}\mathbf{m}^{2(p+q)})$-excitations would identify the $\mathbf{m}^{2\mathbb{Z}}$-particles with $\mathbf{e}^{2\mathbb{Z}}$-particles. Therefore, except for the $p=2$ states, a fermionic non-diagonal state also relies on the $U(1)$ charge symmetry to protect its $\mathcal{D}(\mathbb{Z}_p)$ neutral sector. Having said that, $U(1)$ charge symmetry is necessary but not sufficient for distinguishing the non-diagonal states from the strongly-clustered state.

\subsubsection{Translation and anyonic symmetry}

In the presence of $U(1)$ charge conservation, excitations on the even links are distinguished from those on the odd links. However, without the translation symmetry that transforms wire $j \mapsto j+1$ (which we denote as the $\mathbb{Z}$ translation), the non-diagonal state is actually \textit{adiabatically connected} to the strongly-clustered state. This can be seen if we dimerize the $2j$-th wire with the $(2j+1)$-th wire (for all $j\in\mathbb{Z}$) such that the inter-wire couplings in Eq.
(\ref{wireHamtot}), \textit{i.e.} $t_{2j+1/2}$'s, are pushed to infinity. This corresponds to setting the gap on the even links to infinity, so that all $\mathbf{e}$-particles become infinitely heavy and only $\mathbf{m}$-particles are left in the spectrum. In this way, the neutral sector becomes meaningless as the quasiparticle spectrum is the \textit{same} for both the non-diagonal state and the strongly-clustered state. In fact, they have the same topological ground state degeneracy on a torus: for the bosonic state, the degeneracy is $N=2pq$; for the fermionic state, the degeneracy is $N=p(p+2q)$. Thus, \textit{without} the $\mathbb{Z}$ translation symmetry, the non-diagonal state of electrons at filling $\nu=p/2q$ is topologically equivalent to the Laughlin state of $pe$-bosons at filling $\nu_{pe}=1/2pq$.

To sum up, the non-diagonal state is different from the strongly-clustered state precisely in that the former can be \textit{enriched} to a more exotic topological phase with an additional $\mathcal{D}(\mathbb{Z}_p)$ neutral sector, by the $U(1)\times \mathbb{Z}$ symmetry. It is important to notice that the $\mathbb{Z}$ translation symmetry in the wire model is related to the $\mathbb{Z}_2$ anyonic symmetry in the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model, because the translation by one wire transforms even links into odd links, thus exchanging $\mathbf{e} \leftrightarrow \mathbf{m}$. In this regard, the non-diagonal QH state is similar to Kitaev's toric code on the honeycomb lattice and Wen's plaquette model \cite{Kitaev2006exactly, You2012plaquette, You2013plaquette}, as well as the weak TSC discussed in Sec. \ref{sec3.2.3}. Nevertheless, the $\mathcal{D}(\mathbb{Z}_p)$ in a non-diagonal state is only realized with symmetry-enrichment, while the $\mathcal{D}(\mathbb{Z}_p)$ in Kitaev's and Wen's models (as well as the $\mathcal{D}(\mathbb{Z}_2)$ in the superconductor) is intrinsic.

We also want to comment on a subtle relation between the translation symmetry and the anyonic symmetry. Anyonic symmetry is usually viewed as an abstract \textit{relabeling} symmetry that permutes the anyon types in a way that leaves all fusion and braiding properties invariant \cite{Teo2015twistliquid, Teo2016AS,Barkeshli2019AS}. In such a definition, no explicit reference to the Hamiltonian is made, and thus the energetics of the anyons is not a concern. In our case, it is then more precise to relate the translation symmetry to the \textit{exact} anyonic symmetry. The \textit{exactness} lies in the energetics, which requires the anyonic excitations to have the same energy under the relabeling transformation. The presence of an exact anyonic symmetry in the non-diagonal states has an important physical consequence, as then the symmetry can be gauged. According to the general theory of anyonic symmetry, the gauged phase is non-Abelian in nature \cite{Teo2015twistliquid, Teo2016AS, Barkeshli2019AS}. In the coupled wire model there is an explicit description of such a gauging process, which is the proliferation of dislocation defects. A dislocation defect in the wire model is a sudden termination of a wire in the bulk, as illustrated in Fig. \ref{dislocation}. Braiding an $\mathbf{e}$-particle around a dislocation defect would \textit{relabel} the quasiparticle as an $\mathbf{m}$-particle, and vice versa. Hence a single dislocation defect is a gauge flux of the $\mathbb{Z}_2$ anyonic symmetry, which is also known as a twist defect \cite{Bombin2010AS}.
To be more precise, as double-dislocations are condensed, a single dislocation would be truly $\mathbb{Z}_2$ in nature. In the ``melting phase" of the coupled wire model, where single-dislocations are deconfined and double-dislocations proliferate, the $\mathbb{Z}_2$ anyonic symmetry in the neutral sector of the (Abelian) non-diagonal quantum Hall state is gauged, and the resulting phase would be isotropic and non-Abelian. We hope to better characterize this exotic quantum Hall state in future work.

\begin{figure}[t!] \includegraphics[width=7cm,height=4.8cm ]{dislocation.png}\centering \caption{\small{Illustration of a braiding process between a quasiparticle and a dislocation defect in the wire model, which turns a quasiparticle on the even link ($\mathbf{e}$-particle) into a quasiparticle on the odd link ($\mathbf{m}$-particle). A dislocation in the wire model thus acts as a twist defect for the anyonic symmetry in the $\mathcal{D}(\mathbb{Z}_p)$ quantum double model. }} \label{dislocation} \end{figure}

Now, equipped with this knowledge of the bulk, we revisit the edge and discuss a signature of the symmetry-enrichment in non-diagonal states. When the anyonic symmetry in the bulk is exact, that is, when the wire model has a discrete translation symmetry, an additional gapless boundary theory could emerge.

\section{\label{sec4}Theory of the symmetric edge}

We now study the boundary theory in more detail. The coupled wire model has two types of edges: one is the left/right side edge, which has been studied in Sec. \ref{sec2.2}; the other is the top/bottom edge, formed by coupling together the ends of wires. In this section, by studying the top/bottom edge, we first recover the chiral Luttinger liquid which has been shown to live on the left/right edge (Sec. \ref{sec4.1}). This is the chiral edge theory of the $U(1)_{2pq}$ charge sector. Without additional symmetry, it describes the \textit{only} gapless edge mode for the non-diagonal state, which is the same edge mode as for a strongly-clustered state. This reflects the fact that these states share the same intrinsic topological order, as explained in the previous section. However, with the $\mathbb{Z}$ translation symmetry in the wire model, a non-chiral gapless theory could emerge in the neutral sector, which describes the critical transition of a quantum $\mathbb{Z}_p$ clock model (Sec. \ref{sec4.2}). When both the charge and neutral sectors are gapless, a single electron can be tunneled into the symmetric edge from a metal. The associated tunneling exponent is predicted in Sec. \ref{sec4.3}, and may serve as a possible experimental signature of the non-diagonal states.

What we discover for the symmetry-enriched neutral sector corroborates earlier studies on critical parafermion chains \cite{criticalparafermionchain} and twist defect chains \cite{twistdefectchain}, which linked translation invariance to self-duality of the clock model. Indeed, the symmetric edge of a non-diagonal quantum Hall state provides an electronic platform to realize the physics discussed in these earlier works. Given the discussion in Sec. \ref{sec2.4.2}, one would expect the ends of wires to host parafermion zero modes, which are coupled by electron-tunneling to form a parafermion chain at the edge. Alternatively, the discussion in Sec.
\ref{sec3.3} suggests viewing the termination of a wire as a twist defect of the $\mathbb{Z}_p$ toric code that exchanges $\mathbf{e}$ and $\mathbf{m}$ particles, so the top/bottom edge can be equivalently viewed as a twist defect chain. While in Ref. \cite{twistdefectchain} the equivalence between the twist defect chain and the clock model is demonstrated using Wilson loop operators, in the following analysis we provide a more transparent derivation based on inter-wire electron-tunneling interactions at the edge. Importantly, we notice that a \textit{generalized} quantum clock model is actually realized at the edge, in contrast to the conventional clock model discussed previously. This complicates the situation for $p\geq 4$, and in Sec. \ref{sec4.4} we address the related subtleties.

For convenience, our discussions in Secs. \ref{sec4.1} and \ref{sec4.2} are based on the bosonic states. The results for the fermionic states are essentially the same, differing simply by the substitution $2q \mapsto p+2q$. In Sec. \ref{sec4.3}, where we discuss possible experimental signatures from tunneling electrons out of a Fermi liquid into the symmetric edge, we focus only on the fermionic states.

\subsection{Charge sector}\label{sec4.1}

The edge theory in the charge sector can be intuitively understood in a pictorial depiction of the coupled wire model as shown in Fig. \ref{edgecharge}. While the inter-wire couplings have gapped out the bulk by freezing the degrees of freedom therein, a chiral Luttinger liquid is left freely fluctuating near the termination of wires, where the inter-wire couplings diminish. Here we provide a more rigorous derivation of this Luttinger liquid edge mode, and the setup will also be useful for understanding the more non-trivial edge modes in the neutral sector.

\begin{figure}[b!] \includegraphics[width=8cm,height=5.5cm ]{edgecharge.png}\centering \caption{\small{Chiral Luttinger liquids at the top (T) and bottom (B) edges of the coupled wire model, labeled as $\chi^\text{T}$ and $\chi^\text{B}$ respectively. The grey shaded region represents the gapped bulk, obtained from inter-wire tunneling of charge-$pe$ clusters. The termination of each wire is modeled by a hard-wall boundary condition, such that the chiral and anti-chiral modes of each wire ($\phi^\text{R}$ and $\phi^\text{L}$) are reflected into each other. While $\phi^{\text{R}/\text{L}}$ on neighboring wires are locked together deep in the bulk, they are left to fluctuate near the boundary where the inter-wire couplings vanish, giving rise to the gapless charge mode $\chi^{\text{T}/\text{B}}$.}} \label{edgecharge} \end{figure}

To model the termination of a wire, we adopt the hard-wall boundary condition so that left-movers are reflected into right-movers at one end, and vice versa at the other. The finite-size Luttinger liquid is then characterized by the bosonized variables $\varphi(x)$ and $\theta(x)$ that satisfy
\begin{equation}\label{hardwallcommutation}
[\theta_j(x), \varphi_{j'}(x')]= i\pi \delta_{jj'} H(x-x'),
\end{equation}
where $H(x)$ is the Heaviside step function, together with the boundary conditions
\begin{equation}
\theta_j(0^-) = 0\;\;\text{and}\;\;\theta_j(L^+)=\pi N_j.
\end{equation}
Here $j,j'$ label the wires, which terminate at $x=0,L$, and $N_j$ is the electron number operator for wire $j$. Importantly, we have been careful in specifying the $x$-coordinates of the bosonic fields, in the above and in what follows, so as to ensure that commutation relations can always be evaluated unambiguously.
In our notation, the very end of the wire with a fixed boundary condition is located at $x=0^-\;(L^+)$, the inter-wire tunneling that fluctuates near the boundary happens at $x=0\;(L)$, and the inter-wire tunneling that is pinned in the bulk is thought of as happening at $x=0^+\;(L^-)$. This seemingly pedantic effort will prove crucial when we derive the chiral algebra for the top/bottom edge mode. In terms of the chiral modes introduced in Eq. (\ref{chiralboson}), we have
\begin{subequations} \begin{align} [\phi^{\text{R}/\text{L}}_j(x), \phi^{\text{R}/\text{L}}_{j'}(x')] &= \pm 2i\pi pq \delta_{jj'}\text{sgn}(x-x'),\\ [\phi^{\text{R}}_j(x), \phi^{\text{L}}_{j'}(x')] &= 2i\pi pq \delta_{jj'}, \end{align} \end{subequations}
together with the boundary conditions
\begin{subequations} \begin{align} &\phi^\text{R}_j(0^-) = \phi^\text{L}_j(0^-),\\ &\phi^\text{R}_j(L^+)= \phi^\text{L}_j(L^+)+4\pi qN_j. \end{align} \end{subequations}
Notice that there are generally discontinuities in these chiral modes at the edge ($x=0, L$) from one wire to the next, which are caused by the inter-wire tunneling term $\cos\Theta_{j+1/2}$. Indeed, $\phi^{\text{R}/\text{L}}(x)$ is the chiral mode of each single wire defined along the $x$-direction, so they are not quite the right variables for describing the top/bottom edge modes, which run along the $y$-direction. To identify the appropriate chiral edge modes at $x=0\; ({\rm top})$ and $x= L\;({\rm bottom})$, which should vary slowly from one wire to the next, let us examine again the bulk inter-wire coupling, but now slightly modified to $\cos\tilde{\Theta}_{\ell}(x)$, with the link variable
\begin{equation}
\tilde{\Theta}_{j+1/2}(x) \equiv \phi^\text{R}_j(x)-\phi^\text{L}_{j+1}(x)-2\pi q N_j.
\end{equation}
Compared with Eq. (\ref{link1}), the inter-wire coupling is defined with an extra $2\pi q N_j$ term. This modification is needed to ensure that $[\tilde{\Theta}_{\ell}(x),\tilde{\Theta}_{\ell'}(x')]=0$, given the commutation relation in Eq. (\ref{hardwallcommutation}), which is appropriate for a hard-wall boundary condition. As we have shown in Sec. \ref{sec2.1}, the inter-wire couplings then pin $\tilde{\Theta}_{\ell} \in 2\pi \mathbb{Z}$ everywhere in the bulk, and thus completely gap out the bulk. At the boundaries $(x=0,L)$, the inter-wire interaction diminishes so that $\tilde{\Theta}_{\ell}$ is allowed to fluctuate there. As we see next, this fluctuation gives rise to the chiral Luttinger liquid at the top/bottom edge. We now introduce the chiral edge mode living at the top/bottom ($x=0/L$) edge as follows,
\begin{subequations}\label{chargemode} \begin{align} \chi_j(0) &= \phi^\text{L}_j(0)-2\pi q \sum_{j \leq i} N_i -\sum_{j\leq i}\tilde{\Theta}_{i+1/2}(0^+),\\ \chi_j(L) &= \phi^{\text{L}}_j(L)+2\pi q \sum_{j \leq i} N_i -\sum_{j\leq i}\tilde{\Theta}_{i+1/2}(L^-), \end{align} \end{subequations}
where $\tilde{\Theta}_{\ell}(0^+)$ and $\tilde{\Theta}_{\ell}(L^-)$ correspond to the bulk link variables that are pinned. For link $\ell=j+1/2$, the link variables at the edge are then
\begin{subequations} \begin{align} \tilde{\Theta}_{\ell}(0) &= \chi_j(0)-\chi_{j+1}(0)+\tilde{\Theta}_{\ell}(0^+),\\ \tilde{\Theta}_{\ell}(L) &= \chi_j(L)-\chi_{j+1}(L)+\tilde{\Theta}_{\ell}(L^-), \end{align} \end{subequations}
which imply that $\chi_j(0/L)$ indeed varies slowly between neighboring wires.
The fluctuation of $\chi$ is controlled by the inter-wire tunneling near the boundary, which is proportional to \begin{equation} \cos \tilde{\Theta}_{\ell}(0/L) \sim (\chi_j(0/L)-\chi_{j+1}(0/L))^2. \end{equation} Note that the series expansion is legitimate because $\tilde{\Theta}_{\ell}(0^+)$ and $\tilde{\Theta}_{\ell}(L^-)$ are pinned to values in $2\pi\mathbb{Z}$. Taking the continuum limit in the $y$-direction, \textit{i.e.} $\chi_j(0) \mapsto \chi^\text{T}(y)$ and $\chi_j(L) \mapsto \chi^\text{B}(y)$, we obtain the effective Hamiltonian for the top/bottom edge, \begin{equation}\label{top/bottomHam} \mathcal{H}^{\text{T}/\text{B}}_{\rho}= \frac{u}{2\pi}(\partial_y\chi^{\text{T}/\text{B}})^2. \end{equation} Furthermore, one can readily check that \begin{subequations} \begin{align} [\chi_j(0),\chi_{j'}(0)] &= 2i\pi pq\;\text{sgn}(j-j'),\\ [\chi_j(L),\chi_{j'}(L)] &= -2i\pi pq\; \text{sgn}(j-j'), \end{align} \end{subequations} which imply the chiral algebra in the continuum limit, \begin{equation}\label{top/bottomchiralalgebra} [\chi^{\text{T}/\text{B}}(y),\chi^{\text{T}/\text{B}}(y')] = \pm\;2i\pi pq\;\text{sgn}(y-y'). \end{equation} Altogether, Eqs. (\ref{top/bottomHam}) and (\ref{top/bottomchiralalgebra}) suggest that the low-energy effective theory for the top/bottom edge of the $\nu=p/2q$ non-diagonal state is \textit{partly} described by a chiral Luttinger liquid with Luttinger parameter $K=2pq$. A similar result holds for the fermionic state at filling $\nu=p/(p+2q)$, with $K=p(p+2q)$. This is the edge mode guaranteed by the bulk topological order, and it coincides with the gapless mode on the left/right side edge described by $\phi^{\text{R}/\text{L}}$. The subscript $\rho$ in Eq. (\ref{top/bottomHam}) represents the charge sector; as we discuss next, the edge Hamiltonian can have other contributions, attributed to the neutral sector $(\sigma)$, which become particularly important in the presence of symmetry. \subsection{Neutral sector}\label{sec4.2} \subsubsection{Physical picture}\label{sec4.2.1} In the bulk of non-diagonal states, wires of Luttinger liquid are coupled together by inter-wire tunneling of $p$ electrons. As shown in Sec. \ref{sec2.1}, at electron filling $\nu=p/2q$ the bulk is completely gapped, so the $pe$-tunneling is the only interaction that matters in the bulk; as shown above, this leaves a gapless chiral Luttinger liquid fluctuating at the boundary. The $pe$-tunneling preserves the electron number mod $p$ in each wire. From now on this quantity is referred to as the ``number $p$-rity''. Note, however, that the number $p$-rity of each wire is generally \textit{not} conserved once boundary processes are included. By tunneling a single electron between the ends of two neighboring wires, e.g. via $e^{i(\varphi_{j}-\varphi_{j+1})}$, the number $p$-rity of each involved wire is shifted by $1$. Given that the charge sector at the boundary (associated with the $pe$-tunneling) is gapless, the inter-wire tunneling of a single electron can be important at the boundary. Thus, a complete description of the edge should take into account all possible fluctuations of the number $p$-rity of each wire. To gain physical insight, say for the top edge, we imagine dimerizing the array of wires by connecting wire $2j$ with wire $2j+1$ (for all $j \in \mathbb{Z}$) at the bottom edge, as depicted in Fig. \ref{edgeneutral}.
The $x=0$ end of wire $2j$ and the $x=0$ end of wire $2j+1$ then become two ends of the \textit{same} Luttinger liquid, and the electron tunneling between them, \textit{i.e.} $e^{i(\varphi_{2j}-\varphi_{2j+1})}$, conserves the number $p$-rity of this Luttinger liquid. We expect the inter-wire tunneling over link $2j+1/2$ to be related to a $p$-state clock operator $\tau_j$ that measures this number $p$-rity, while the inter-wire tunneling over the neighboring links ($2j-1/2$ and $2j+3/2$) is related to a shift operator $\sigma_j$ that changes this number $p$-rity. The effective Hamiltonian then describes a $p$-state clock model. \begin{figure}[t!] \includegraphics[width=8cm,height=5.5cm ]{edgeneutral.png}\centering \caption{\small{Quantum $\mathbb{Z}_p$ clock model at the top edge of the coupled wire model, which is obtained from inter-wire tunneling of a single electron. As an aid to thinking, we imagine that the array of wires is dimerized such that wire $2j$ and wire $2j+1$ are connected at the bottom edge, forming a single Luttinger liquid, to which we associate a number $p$-rity. Tunneling between wire $2j$ and $2j+1$ (dashed circle) preserves this number $p$-rity, while tunneling between wire $2j$ and $2j-1$ (dotted circle), as well as between wire $2j+1$ and $2j+2$, shifts the number $p$-rity. These edge couplings can be associated with the clock operators $\tau_j$ and shift operators $\sigma_j$ as shown in the main text. Choosing a different dimerization pattern is equivalent to a Kramers-Wannier duality transformation. Given the translation symmetry in the bulk of the wire model, the clock model at the edge is self-dual.}} \label{edgeneutral} \end{figure} The dimerization procedure just described is fictitious, but it provides an intuitive perspective for understanding the edge neutral sector. In particular, it naturally leads to the Kramers-Wannier duality in the clock model. Had we chosen the other dimerization pattern, which connects wire $2j$ with wire $2j-1$, we would have associated a dual clock operator $\nu_{j-1/2}$ to measure the number $p$-rity over link $2j-1/2$, and a dual shift operator $\mu_{j-1/2}$ to change this number $p$-rity. Importantly, when the $\mathbb{Z}$ translation symmetry is present in our coupled wire model, dimerization is actually \textit{forbidden}. The two ways of dimerization described above are thus put on the same footing, which suggests that the edge neutral sector is a \textit{self-dual} clock model, described by some gapless critical theory in the continuum limit---an expectation that holds for $p=2,3$ but, as we will see in Sec. \ref{sec4.4}, requires qualification for $p\geq 4$. Next, we supplement the above argument with a more rigorous derivation. We focus on the $x=0$ (top) edge, as the situation for the bottom edge is essentially the same. \subsubsection{Generalized $\mathbb{Z}_p$ clock chain}\label{sec4.2.2} Let us first consider the inter-wire tunneling of a single electron, \begin{equation} H_{1e} = -J_1 \sum_{j} \cos(\varphi_{j}-\varphi_{j+1}). \end{equation} The notation at the boundary is simplified, \textit{i.e.} $\varphi_j \equiv \varphi_j(0)$. We also assume translation symmetry here, so that the tunneling strength is the same for each link; later on, we will discuss the physical consequences with and without this symmetry. Using Eq. (\ref{chargemode}), we have \begin{equation}\label{singletunnelterm} \varphi_j-\varphi_{j+1} = \frac{1}{p}(\chi_j-\chi_{j+1})+\frac{2\pi q}{p}N_j+\frac{2\pi}{p}\widetilde{N}_{j+1/2}.
\end{equation} For later convenience, we have introduced the quasiparticle number operator $\widetilde{N}_{\ell} = \tilde{\Theta}_{\ell}(0^+)/2\pi$. By definition, $\widetilde{N}_{\ell}$ has integer eigenvalues, and it is shifted by $1$ whenever a minimal quasiparticle tunnels from one end of the link to the other. The first term in Eq. (\ref{singletunnelterm}), which involves $\chi_j-\chi_{j+1}$, simply contributes to the Luttinger liquid in the charge sector. The remaining terms represent additional contributions in the neutral sector that we are interested in. This motivates us to introduce the following operator, \begin{equation} \mathcal{W}_{j+1/2} = e^{i(\frac{2\pi q}{p}N_j+\frac{2\pi}{p}\widetilde{N}_{j+1/2}+\pi q)}. \end{equation} One can readily check that $\mathcal{W}_{\ell}^p=1$, which follows from the commutation relation \begin{equation} [N_j, \widetilde{N}_{k+1/2}] = \frac{ip}{2\pi}(\delta_{j,k}-\delta_{j,k+1}). \end{equation} Moreover, these operators satisfy the commutation algebra appropriate for a quantum $\mathbb{Z}_p$ clock model, \begin{subequations} \begin{align} [\mathcal{W}_{j+1/2}, \mathcal{W}_{k+1/2}] = 0,\; \text{for }\abs{j-k}>1, \\ \mathcal{W}_{j+1/2}\mathcal{W}_{j-1/2}=\omega\;\mathcal{W}_{j-1/2}\mathcal{W}_{j+1/2}, \end{align} \end{subequations} with $\omega=e^{2\pi i q/p}$ \cite{clockfermionic}. This reflects the physical intuition discussed earlier: the single-electron tunneling through each link should be associated with a $p$-state clock operator, while the tunneling through the neighboring link should be treated as the corresponding shift operator. We can make an explicit correspondence to the $\mathbb{Z}_p$ clock model by defining the clock variables as follows, \begin{subequations}\label{topclockvar} \begin{align} \mathcal{W}_{2j+1/2} &= \tau_j,\\ \mathcal{W}_{2j-1/2} &= \sigma_j\sigma^\dagger_{j-1}. \end{align} \end{subequations} They satisfy \begin{subequations}\label{topclockalg} \begin{align} &\tau_j^p = \sigma_j^p =1,\\ &\tau_j \sigma_j = \omega \sigma_j \tau_j, \end{align} \end{subequations} and $\tau_j$ commutes with $\sigma_{k}$ for $j\neq k$. Consequently, the Hamiltonian for the inter-wire tunneling of a single electron can be written as \begin{equation} H_{1e} = -J_1\sum_j (\tau_j+\sigma_j\sigma^\dagger_{j-1})+\text{H.c.}\;. \end{equation} Only the contribution in the neutral sector is considered here, as the charge sector has already been taken into account. More generally, one should consider all possible $ne$-tunneling processes, for $ 1\leq n < p$. The effective Hamiltonian in the neutral sector thus takes the following form, \begin{equation}\label{genclockHamiltonian} H_\sigma = -\sum_j \sum_{n=1}^{p-1} J_n[(\tau_j)^n +(\sigma_j\sigma^\dagger_{j-1})^n+\text{H.c.}]\;. \end{equation} Without loss of generality one may assume $J_n = J_{p-n}$, so the model has $\lfloor p/2 \rfloor$ parameters. The spin-spin coupling and the transverse-field coupling have the same strength due to the translation symmetry, which interchanges even and odd links. Notice that the translation symmetry in the bulk implies the Kramers-Wannier self-duality at the edge.
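Before proceeding, we note that the generalized clock algebra is easy to verify concretely. The short numerical sketch below is purely illustrative and not part of the derivation: assuming only Python with NumPy, and with helper names of our own choosing, it represents $\tau$ and $\sigma$ by the standard $p\times p$ clock and shift matrices and checks Eq. (\ref{topclockalg}) for an arbitrary coprime pair $(p,q)$.
\begin{verbatim}
# Illustrative check of the generalized Z_p clock algebra,
# tau sigma = omega sigma tau with omega = exp(2*pi*i*q/p).
import numpy as np

def clock_matrices(p, q):
    omega = np.exp(2j * np.pi * q / p)
    tau = np.diag(omega ** np.arange(p))     # clock operator (diagonal)
    sigma = np.roll(np.eye(p), 1, axis=0)    # shift operator |n> -> |n+1>
    return omega, tau, sigma

p, q = 5, 2                                  # any coprime pair works
omega, tau, sigma = clock_matrices(p, q)
assert np.allclose(np.linalg.matrix_power(tau, p), np.eye(p))
assert np.allclose(np.linalg.matrix_power(sigma, p), np.eye(p))
assert np.allclose(tau @ sigma, omega * sigma @ tau)
\end{verbatim}
The same matrices, promoted to a chain by tensor products, can be reused in the further numerical checks below.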
To make this self-duality explicit, we can introduce the dual clock variables through the Kramers-Wannier transformation, \begin{subequations}\label{topdualclock} \begin{align} \mu_{j-\frac{1}{2}} &= \prod_{j \leq i} \tau^\dagger_i,\\ \nu_{j-\frac{1}{2}} &= \sigma_{j} \sigma^\dagger_{j-1}, \end{align} \end{subequations} after which the Hamiltonian can be rewritten as \begin{equation} H_\sigma = -\sum_j \sum_{n=1}^{p-1} J_n[(\nu_{j-\frac{1}{2}})^n +(\mu_{j+\frac{1}{2}}\mu^\dagger_{j-\frac{1}{2}})^n+\text{H.c.}]\;. \end{equation} Hence the $p$-state clock model is self-dual, provided that the wire model has translation symmetry, which is equivalent to the bulk $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry according to our discussion in Sec. \ref{sec3.3}. Some words of caution are due here. We will refer to the symmetry-enriched neutral-sector Hamiltonian in Eq. (\ref{genclockHamiltonian}) as the self-dual \textit{generalized} $\mathbb{Z}_p$ clock model, which is to be contrasted with the conventional clock model for $p\geq 4$. In 2d classical statistical mechanics, the distinction between the generalized clock model and the conventional one (see \cite{Elitzur1979pureclock}) is discussed by Cardy in Ref. \cite{Cardy1980generalclock}. Here, for the 1d quantum chain, the differences are two-fold: firstly, the clock operators are defined to obey Eq. (\ref{topclockalg}) with $\omega=e^{2\pi i q/p}$, in contrast to $\omega=e^{2\pi i/p}$ in the conventional model; our model thus has distinct self-dualities for different $q$'s. Secondly, the clock operators here appear with various powers, \textit{i.e.} $(\tau_j)^n$ and $(\sigma_j\sigma^\dagger_{j-1})^n$ with $n$ ranging from $1$ to $p-1$, in contrast to the conventional model with just $n=1$ \cite{Ortiz2012pureclock,Ortiz2019pureclock}. Consequently, for $p\geq 4$ the generalized model is different from the conventional one, and one has to pay special attention to the more complicated phase diagram at self-duality \cite{Alcaraz1980generalclock, Dorey1996generalclock}. As we discuss in Sec. \ref{sec4.4}, the symmetry-enriched edge neutral sector can sometimes be gapped. For $p=2,3$, the generalized clock model is no different from the conventional one. For $p=2$ the neutral sector is described by an Ising-Majorana chain \cite{gogolin2004}, while for $p=3$ it is described by a three-state Potts chain \cite{mong2014parafermionic}. Self-duality then implies a critical transition characterized by some gapless continuum theory. As is well known in statistical mechanics, the corresponding gapless theories are the Ising CFT and the $\mathbb{Z}_3$ parafermion CFT respectively \cite{Ginsparg88, BYB, Fateev:1985mm, lecheminant2002criticality}. With both the charge and neutral sectors being gapless, a single electron can be tunneled into the symmetric edge. Such tunneling experiments may be used to probe the non-diagonal states. Our next task is to compute the edge tunneling exponents for non-diagonal states, especially for $p=2,3$, which have symmetry-protected gapless edges. \subsubsection{Edge operators}\label{sec4.2.3} To that end, it is useful to express the edge electron operator $\psi_j(0) \propto e^{i\varphi_j(0)}$ in terms of operators in the charge and neutral sectors explicitly.
To do so, let us define the lattice parafermion operator in the neutral sector by combining the order and disorder operators, \begin{subequations} \begin{align} &\beta_{2j} = \omega^{\frac{p-1}{2}} \mu^\dagger_{j-\frac{1}{2}}\sigma^\dagger_{j},\\ &\beta_{2j-1} = \mu^\dagger_{j-\frac{1}{2}}\sigma^\dagger_{j-1}, \end{align} \end{subequations} which satisfy $\beta_j^p=1$ and \begin{equation} \beta_j \beta_{k} = \omega^{\text{sgn}(j-k)}\beta_{k}\beta_j, \end{equation} where $\omega= e^{2i\pi q/p}$. Working through the definitions of the variables introduced in this section, one can verify that the edge electron operator can be expressed simply as follows, \begin{equation}\label{latticeedgeelectron} \psi_{j}(0) \propto \beta_j e^{\frac{i}{p}\chi_{j}(0)}. \end{equation} Therefore, in the continuum limit, the scaling dimension of the edge electron is \begin{equation}\label{scalingofelectron} \begin{split} \Delta_e = \Delta_{\beta}+\frac{K}{2p^2} = \Delta_{\beta}+\frac{1}{2\nu}. \end{split} \end{equation} Here, $\Delta_\beta$ is the scaling dimension of the (most relevant) continuum field corresponding to the lattice parafermion operator. The Luttinger parameter is $K=2pq$ for a bosonic state at filling $\nu=p/2q$, and $K=p(p+2q)$ for a fermionic state at $\nu=p/(p+2q)$. The above expression holds as long as the charge and neutral sectors decouple at low energy. As we will explain, this is indeed the case for $p=2,3$. The above discussion allows one to experimentally reveal the symmetry-enriched edge structure through the exponent governing electron tunneling from an ordinary metal into the symmetric edge; we will elaborate on this in the next subsection. Alternatively, one can consider inter-edge quasiparticle tunneling through a point contact, which makes use of operators that scatter a minimal quasiparticle from the top edge to the bottom edge. For instance, for the bosonic states, one can check that $[\widetilde{N}_{j+1/2}, (\phi^\text{R}_j(0)-\phi^\text{R}_j(L))/2pq]=i$; hence the following operator tunnels a minimal quasiparticle of charge $e/2q$ from the top edge to the bottom edge through link $\ell=j+1/2$, \begin{equation} \Pi^{e/2q}_{\ell}=e^{\frac{i}{2pq}(\phi^\text{R}_j(0)-\phi^\text{R}_j(L))}. \end{equation} Combining the above discussions for both the charge and neutral sectors, we can re-express the inter-edge tunneling operator as \begin{subequations}\label{interedgeqp} \begin{align} \Pi^{e/2q}_{ 2j+\frac{1}{2}} & \propto (\sigma^\text{T}_j)^{-r_0} (\sigma^\text{B}_j)^{r_0} e^{\frac{i}{2pq}(\chi_{2j}(0)-\chi_{2j}(L))}, \\ \Pi^{e/2q}_{2j-\frac{1}{2}} & \propto (\mu^{\text{T}}_{j-\frac{1}{2}})^{-r_0} (\mu^{\text{B}}_{j-\frac{1}{2}})^{r_0} e^{\frac{i}{2pq}(\chi_{2j}(0)-\chi_{2j}(L))}, \end{align} \end{subequations} where $r_0$ satisfies $qr_0 = 1 \text{ mod }p$. Here $\sigma^\text{T}$ and $\mu^\text{T}$ are the clock variables for the top edge defined in Eqs. (\ref{topclockvar}) and (\ref{topdualclock}), while $\sigma^\text{B}$ and $\mu^\text{B}$ are the clock variables for the bottom edge, which can be defined analogously. The above expression suggests that quasiparticles excited on the even links, which are known as the $\mathbf{e}$-particles in Sec. \ref{sec3}, are created at the top/bottom edge with the spin operator $\sigma^{\text{T}/\text{B}}$. On the other hand, quasiparticles excited on the odd links, which are known as the $\mathbf{m}$-particles, are created with the disorder operator $\mu^{\text{T}/\text{B}}$.
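Incidentally, the lattice parafermion algebra introduced at the beginning of this subsection can be verified in the same spirit as the clock algebra above. The following sketch (again an illustrative Python fragment with NumPy, not part of the derivation, with helper names of our own choosing) builds $\mu^\dagger_{j-1/2}$ as a string of $\tau$'s on a short open chain and checks $\beta_j^p=1$ together with the commutation phases, here for $p=3$ and $q=1$.
\begin{verbatim}
# Illustrative check of the lattice parafermion algebra on a short
# open chain: beta_j beta_k = omega^sgn(j-k) beta_k beta_j.
import numpy as np
from functools import reduce

p, q, L = 3, 1, 3
omega = np.exp(2j * np.pi * q / p)
tau1 = np.diag(omega ** np.arange(p))
sig1 = np.roll(np.eye(p), 1, axis=0)

def site_op(op, j):
    # embed a single-site operator at site j of the L-site chain
    ops = [np.eye(p)] * L
    ops[j] = op
    return reduce(np.kron, ops)

def mu_dag(j):
    # mu^dagger_{j-1/2} = product of tau_i over sites i >= j
    out = np.eye(p ** L, dtype=complex)
    for i in range(j, L):
        out = out @ site_op(tau1, i)
    return out

betas = {}
for j in range(1, L):
    betas[2*j - 1] = mu_dag(j) @ site_op(sig1.conj().T, j - 1)
    betas[2*j] = omega ** ((p - 1) / 2) * mu_dag(j) @ site_op(sig1.conj().T, j)

for a in betas:
    assert np.allclose(np.linalg.matrix_power(betas[a], p), np.eye(p ** L))
    for b in betas:
        if a != b:
            lhs = betas[a] @ betas[b]
            rhs = omega ** np.sign(a - b) * (betas[b] @ betas[a])
            assert np.allclose(lhs, rhs)
\end{verbatim}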
In the correspondence just described, we again see the equivalence between Kramers-Wannier duality at the edge and the $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry in the bulk \cite{Lichtman2020bulkedge}. In principle, one could use Eq. (\ref{interedgeqp}) to compute the tunneling exponent for inter-edge quasiparticle tunneling at a point contact and thus reveal the structure of the symmetric edge. Having said that, in making the constriction, translation symmetry on the edge may easily be broken, rendering the neutral sector gapped. A more practical way of probing the symmetric edge structure is by tunneling electrons into the edge from a Fermi liquid, which is what we focus on in the following. For experimental relevance, we only consider the fermionic non-diagonal states. \subsection{Tunneling from metal into the symmetric edge}\label{sec4.3} In the presence of translation symmetry, both the charge and neutral sectors of the top/bottom edge are gapless for non-diagonal states with $p=2,3$. A single electron can then be tunneled into the symmetric edge. For the left/right side edge, however, the neutral sector is gapped, and this edge is completely characterized by the chiral Luttinger liquid of charge-$pe$ clusters, which is gapless only to the tunneling of $p$ electrons. This anisotropy between the top/bottom and the left/right edges highlights the symmetry-enrichment aspect of the non-diagonal quantum Hall states. Experimentally, the edge structure of a quantum Hall state can be revealed by measuring tunneling exponents \cite{kane1996edge, chang2003CLLedge}. In the following, we are mainly interested in tunneling from an ordinary metal, a Fermi liquid, into the edge of the fermionic non-diagonal state at filling $\nu=p/(p+2q)$. Before analyzing the symmetric edge, let us contrast it with the situation where the translation symmetry is broken. In this case the clock model is no longer self-dual, so the neutral sector is generally gapped. The top/bottom edge is then identical to the left/right side edge; both are described only by a chiral Luttinger liquid with $K=p(p+2q)$. Notice that neither $e^{\frac{i}{p}\chi}$ nor $e^{\frac{i}{p}\phi}$ is a local operator, hence a single electron cannot be tunneled into these edges. The most relevant local operator at the edge is either $e^{i\phi}$ or $e^{i\chi}$, which corresponds to a charge-$pe$ cluster with scaling dimension $\Delta_{pe} = K/2$. The charge-$pe$ cluster in the Fermi liquid has scaling dimension $\delta_{pe}=p^2/2$. Thus, for the non-symmetric edge, the tunneling current $I$ has the following scaling \cite{chang2003CLLedge}, \begin{equation}\label{sideedgetunnel} I \sim V^{2(\Delta_{pe}+\delta_{pe})-1} = V^{p^2/\nu+p^2-1}, \end{equation} where $V$ is the bias voltage. The same tunneling exponent is obtained by tunneling from a metal into the strongly-clustered state (a Laughlin state of $pe$-clusters) at filling $\nu_{pe}=\nu/p^2=1/p(p+2q)$. This is expected given our discussion in Sec. \ref{sec3.3}: the non-diagonal state shares the same intrinsic topological order as a strongly-clustered state. On the other hand, the symmetric edge is gapless to a single electron, at least for $p=2,3$, and this can be used to reveal the signature of symmetry-enrichment in the non-diagonal state. The tunneling current from a metal into the symmetric edge has the following power-law behavior, \begin{equation}\label{tunnelingexponentsymmetricedge} I \sim V^{2(\Delta_e+\delta_e)-1}, \end{equation} where $\delta_e=1/2$ and $\Delta_e$ is given by Eq.
(\ref{scalingofelectron}), provided that the charge and neutral sectors decouple. Let us now analyze the specific cases in detail. \subsubsection{$p = 2$: Ising CFT} We first note that the $U(1)$ charge sector decouples from the Ising neutral sector at low energy. To couple the two sectors together, one would consider an operator $\widehat{O}_{cn} = \widehat{O}_c \widehat{O}_n$, where $\widehat{O}_c$ and $\widehat{O}_n$ are local operators in the charge and neutral sectors respectively. In the charge sector, the most relevant non-trivial operator is $\partial_y \chi$, with scaling dimension 1. In the neutral sector, the spin field $\sigma$ is not local. In fact, as we have seen in the last subsection, the spin operator $\sigma$ and the disorder operator $\mu$ correspond to the bulk anyons $\mathbf{e}$ and $\mathbf{m}$ respectively. As for the energy operator $\epsilon \sim \beta \bar{\beta}$ (with scaling dimension 1), while it is local, it dimerizes the Ising spin chain and violates the translation symmetry. Therefore, the dominant allowed coupling is $\widehat{O}_{cn}=(\partial_y\chi)\mathcal{T}$, with $\mathcal{T}$ being the stress-energy tensor in the Ising CFT. The total scaling dimension of this coupling is $3$; hence it is irrelevant, which implies the decoupling of the charge and neutral sectors. It then follows from Eq. (\ref{tunnelingexponentsymmetricedge}) that the edge tunneling current scales with the bias voltage as \begin{equation} I \sim V^{1/\nu+1}, \end{equation} for the fermionic non-diagonal state at filling $\nu=1/(q+1)$, with $q \in 2\mathbb{Z}+1$. Here we have used $\Delta_\beta=1/2$ for the Majorana field \cite{BYB}. \subsubsection{$p = 3$: $\mathbb{Z}_3$ parafermion CFT} The situation for $p=3$ is similar to $p=2$. For an operator $\widehat{O}_{cn} = \widehat{O}_c \widehat{O}_n$ coupling the charge and neutral sectors, $\widehat{O}_n$ again cannot be the spin or disorder operator, as these are associated with creating the non-local $\mathbf{e}$/$\mathbf{m}$ quasiparticles in the bulk. Also, the translation symmetry at the edge forbids $\widehat{O}_n$ from being the energy operator with dimension $4/5$. The most relevant allowed coupling is then given by $\widehat{O}_{cn}=(\partial_y\chi)\mathcal{T}$, where $\mathcal{T}$ is the stress tensor for the $\mathbb{Z}_3$ parafermion CFT. Again, with scaling dimension 3, this coupling is irrelevant at low energy. Hence, the $U(1)$ charge sector and the $\mathbb{Z}_3$ parafermion neutral sector are decoupled in the infrared on the symmetric edge. The $\mathbb{Z}_3$ parafermion case is a little more subtle than the Majorana fermion, as the continuum limit of the lattice parafermion operator is not just the parafermion primary field. As argued by Mong \textit{et al.} \cite{mong2014parafermionic}, aside from the parafermion field with dimension $2/3$, the lattice parafermion operator actually contains a more relevant primary field with scaling dimension $7/15$. Thus, we should use $\Delta_\beta=7/15$ for $p=3$. This leads to the following scaling relation between the tunneling current and the bias voltage, \begin{equation} I \sim V^{1/\nu+14/15}, \end{equation} for the fermionic non-diagonal state at filling $\nu=3/(2q+3)$, with $q \in 3\mathbb{Z}\pm 1$. \subsection{Complexities for $p \geq 4$: the generalized clock model}\label{sec4.4} Finally, let us comment on the edge structure of non-diagonal states with $p\geq 4$. Unlike the cases for $p<4$, translation symmetry (or self-duality) alone does not guarantee a gapless neutral sector.
Our following discussion supplements the results obtained in Ref. \cite{twistdefectchain}, where the twist-defect chain (as the edge of the $\mathbb{Z}_p$ toric code) was modeled as a conventional $\mathbb{Z}_p$ clock model. As explained in Sec. \ref{sec4.2}, the quantum clock chain realized at the edge of non-diagonal states (as well as of the $\mathbb{Z}_p$ toric code) is actually the \textit{generalized} clock model, which has a much richer phase diagram for $p\geq 4$, as we discuss below. \subsubsection{$p=4$: Ashkin-Teller model} For $p=4$, the symmetry-enriched (self-dual) neutral sector is described by the following Hamiltonian \begin{equation} \begin{split} H_{\sigma} = &-\sum_j \{J_1[\tau_j+\sigma_j\sigma^\dagger_{j-1}]\\ &+J_2[(\tau_j)^2+(\sigma_j\sigma^\dagger_{j-1})^2]+\text{H.c.}\}. \end{split} \end{equation} Without loss of generality, we can assume $q=1$ (the non-trivial effects of $q>1$ become important only for $p\geq 5$). What we have here is a one-dimensional quantum model equivalent to the highly anisotropic limit of the two-dimensional Ashkin-Teller model at self-duality \cite{AshkinTeller1943}. The corresponding phase diagram has been studied thoroughly in Ref. \cite{Kadanoff1981ATmodel}. When $J_2/J_1=0$, this model reduces to the ``conventional'' $\mathbb{Z}_4$ clock model, which is equivalent to two decoupled copies of the Ising model. At self-duality, the neutral sector is then gapless, characterized by the $\text{Ising}^2$ CFT, which is also known as the $U(1)/\mathbb{Z}_2$ orbifold CFT at radius $R_{orb}=1$ \cite{Ginsparg88, DVVV1989}. When $J_2=J_1$, the generalized clock model has an additional $S_4$ permutation symmetry, which makes it the four-state Potts model \cite{baxter1982exactly}. At self-duality, the neutral sector is again gapless, but this time characterized by the four-state Potts CFT, which is the $U(1)/\mathbb{Z}_2$ orbifold CFT at radius $R_{orb}=\sqrt{2}$ \cite{DVVV1989}. In fact, for $\abs{J_2/J_1}\leq 1$, there is a continuous line of criticality described by the orbifold CFT, which also includes the $\mathbb{Z}_4$ parafermion CFT \cite{Ginsparg88, Fateev:1985mm, DVVV1989}. Hence, in this region of parameter space, the $p=4$ non-diagonal state does have a gapless edge allowing for tunneling of a single electron, though the tunneling exponent is non-universal. Importantly, the self-dual Ashkin-Teller model is \textit{gapped} when $\abs{J_2/J_1}>1$, and this is an entirely allowed region of our parameter space. Intuitively, for $J_2 \gg J_1$, the generalized clock model is dominated by the $J_2$ terms, $(\tau_j)^2$ and $(\sigma_j\sigma^\dagger_{j-1})^2$, which favor the simultaneous condensation of $\tau^2$ and $\sigma^2$ (notice that these do commute for $p=4$). This results in a partially ordered phase where $\langle \sigma^2 \rangle=\pm 1$ (there is spontaneous symmetry breaking, as either $+1$ or $-1$ is chosen) and $\langle\sigma\rangle=0$. This phase is in fact separated from a fully ordered region with $\langle \sigma \rangle \neq 0$ and a fully disordered region with $\langle\sigma^2\rangle=\langle\sigma\rangle = 0$ by two Ising transitions. For $J_2<-J_1$, the system orders into an antiferromagnetic frozen phase, where $\langle \sigma^2 \rangle$ equals $1$ on one sublattice and $-1$ on the other. The phase diagram for the self-dual $\mathbb{Z}_4$ generalized clock chain is summarized in Fig. \ref{ATphase}.
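For readers who wish to probe this phase diagram concretely, the self-dual chain is straightforward to diagonalize on small systems. The sketch below is an illustrative Python fragment (assuming only NumPy; the chain length, couplings, and helper names are our own choices) that constructs the displayed $p=4$ Hamiltonian on a small periodic chain. We caution that finite-size gaps on such short rings only indicate trends, and near-degenerate low-lying states are expected in the (partially) ordered regimes.
\begin{verbatim}
# Illustrative exact diagonalization of the self-dual Z_4
# generalized clock chain on a small ring (q = 1).
import numpy as np
from functools import reduce

p, L = 4, 5
omega = np.exp(2j * np.pi / p)
tau1 = np.diag(omega ** np.arange(p))
sig1 = np.roll(np.eye(p), 1, axis=0)

def site_op(op, j):
    # embed a single-site operator at site j of the L-site ring
    ops = [np.eye(p)] * L
    ops[j] = op
    return reduce(np.kron, ops)

def hamiltonian(J1, J2):
    H = np.zeros((p**L, p**L), dtype=complex)
    for j in range(L):                       # periodic boundary
        bond = site_op(sig1, j) @ site_op(sig1.conj().T, (j - 1) % L)
        for n, Jn in ((1, J1), (2, J2)):
            term = np.linalg.matrix_power(site_op(tau1, j), n) \
                 + np.linalg.matrix_power(bond, n)
            H -= Jn * (term + term.conj().T)
    return H

for ratio in (0.0, 0.5, 1.5):                # values of J2/J1
    E = np.linalg.eigvalsh(hamiltonian(1.0, ratio))
    print(f"J2/J1 = {ratio}: lowest gap ~ {E[1] - E[0]:.3f}")
\end{verbatim}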
We thus conclude that, for the $p=4$ non-diagonal state, translation symmetry in the bulk (self-duality on the edge) \textit{does not} necessarily imply a gapless neutral sector on the edge. \begin{figure}[t!] \includegraphics[width=\columnwidth, height=2.8cm ]{ATphasediagram.png}\centering \caption{\small{Schematic phase diagram of the self-dual $\mathbb{Z}_4$ generalized clock model. Notice that there exist gapped phases even at self-duality, hence there is no guarantee that the symmetric edge of the $p=4$ non-diagonal state is gapless. A detailed discussion of the complete phase diagram can be found in Ref. \cite{Kadanoff1981ATmodel}.}} \label{ATphase} \end{figure} \subsubsection{$p\geq 5$} Similarly, for the non-diagonal states with $p\geq 5$, the neutral sector of the symmetric edge is also \textit{not} guaranteed to be gapless. This is most easily demonstrated by tuning all the parameters $J_n$ to the same value, in which case the generalized clock model becomes a $p$-state Potts model. It is well known that for $p\geq 5$ the self-dual Potts model sits at a \textit{first-order} phase transition, and is thus gapped \cite{baxter1982exactly}. In this situation, the symmetric edge would develop spontaneous dimerization appropriate for either the ordered or the disordered phase. As phase coexistence can occur at a first-order transition, one may anticipate seeing both the ordered and disordered phases on the edge. Parafermion zero modes would then reside at the domain walls that separate these two phases \cite{Fendley2012parafermionzeromode}. On the other hand, it is interesting to ask whether any gapless phase can exist at all on the symmetric edge. The answer turns out to depend on $q$ as well. For $q = \pm1\; (\text{mod } p)$, by setting all $J_n$'s to zero except for $J_1(=J_{p-1})$, the self-dual generalized $\mathbb{Z}_p$ model reduces to the conventional $\mathbb{Z}_p$ model at criticality, which is known to be in the gapless Berezinskii-Kosterlitz-Thouless (BKT) phase for $p\geq 5$ \cite{Dorey1996generalclock, Alcaraz1980generalclock, Ortiz2012pureclock, Ortiz2019pureclock}. However, such a gapless phase is not always allowed for a generic $q$, as can be seen by attempting (and failing) to construct a self-dual sine-Gordon representation for the BKT phase. Supposing such a sine-Gordon model exists, it is expected to take the following form, \begin{equation}\label{sineGordonHam} \begin{split} \mathcal{H}_{SG} = &\frac{u_\sigma}{2\pi} [\widetilde{q}^2(\partial_y \phi_{\textbf{e}})^2+(\partial_y \phi_{\textbf{m}})^2]\\ &+v_{\textbf{e}} \cos(p\phi_{\textbf{e}})+ v_{\textbf{m}}\cos(p\phi_{\textbf{m}})+ ...\;, \end{split} \end{equation} for some $\widetilde{q} = q\;(\text{mod }p)$. Here $\phi_{\textbf{e}}$ and $\phi_{\textbf{m}}$ are defined to satisfy \begin{equation}\label{sineGordoncommutation} [\phi_{\textbf{e}}(y), \phi_{\textbf{m}}(y')] =2i\pi p^{-1}H(y-y'), \end{equation} where $H(y)$ is the Heaviside step function. The $\cos(p\phi_{\textbf{m}})$ term then creates vortices for $\phi_{\textbf{e}}$ with a $2\pi$-compactification, and the $\cos(p\phi_{\textbf{e}})$ term provides a $p$-state anisotropy that leads to a clock model. The clock operators can be expressed in terms of the sine-Gordon variables as follows, \begin{equation} e^{-i\widetilde{q}\phi_{\textbf{e}}} \sim \sigma\;\;\text{and}\;\; e^{i\phi_{\textbf{m}}} \sim \mu. \end{equation} The appropriate clock algebra with $\omega=e^{2i\pi q/p}$ simply follows from the commutation relation in Eq.
(\ref{sineGordoncommutation}). The duality transformation in the generalized clock model, which interchanges $\sigma \leftrightarrow \mu$, is thus equivalent to the transformation $-\tilde{q}\phi_{\textbf{e}} \leftrightarrow \phi_{\textbf{m}}$ in the sine-Gordon model. This explains the kinetic terms in Eq. (\ref{sineGordonHam}), which are chosen to ensure self-duality. The duality would also require the $v_{\textbf{m}}\cos(p\tilde{q}\phi_{\textbf{e}})$ term to appear in the Hamiltonian, but for simplicity we have swept it under the ellipsis. Notice that the two $v_{\textbf{m}}$-terms have scaling dimension $\Delta_{\textbf{m}} = p\abs{\widetilde{q}}/2 > 2$, hence they are irrelevant at low energy. Now a crucial observation is that the presumed dual of $\cos(p\phi_{\textbf{e}})$ does not exist in general, because $\cos(p\phi_{\textbf{m}}/\widetilde{q})$ is not an allowed operator unless $\abs{\widetilde{q}}=1$. Without its dual, there is no term to compete with the $v_{\textbf{e}}$-term, and this would lead to gap-opening if $v_{\textbf{e}}$ flows to strong coupling. Since the scaling dimension of $\cos(p\phi_{\textbf{e}})$ is $\Delta_{\textbf{e}}=p/(2\abs{\widetilde{q}})$, we conclude that the gapless BKT phase (or equivalently a Luttinger liquid) is allowed only when $\abs{\widetilde{q}}<p/4$, with $\widetilde{q} = q\;(\text{mod }p)$. For example, the non-diagonal state with $p=5$ and $q=1$ can have a gapless neutral sector on the symmetric edge, while for $p=5$ and $q=2$ the neutral sector can only be gapped. Our discussion above is unlikely to be comprehensive for the symmetric edge theory of non-diagonal quantum Hall states with $p\geq 4$, and we look forward to future numerical studies that can fully characterize the phase diagram of the generalized clock chain, including the chiral model where the coupling strengths are made complex. Nevertheless, our discussion suffices to emphasize the distinction between the $p<4$ and $p\geq 4$ cases: while a gapless edge is guaranteed by translation symmetry (or self-duality) in the former case, it is not guaranteed in the latter, due to the possibility of a first-order transition; moreover, depending on the value of $q$, sometimes the only possibility is a gapped edge that spontaneously breaks the symmetry. \section{\label{sec5}Summary and outlook} In this paper, we have proposed a family of Abelian fractional quantum Hall states known as the non-diagonal states, which occur at filling fraction $\nu=p/2q$ in the bosonic case and $\nu=p/(p+2q)$ in the fermionic case, with $p$ and $q$ a pair of relatively prime integers. These states are constructed using a coupled-wire model, where a single wire of Luttinger liquid is described by a non-diagonal circle CFT, and the inter-wire couplings are the $pe$-tunnelings. The ``non-diagonal'' property dictates that a generic physical operator cannot be written as a diagonal combination of chiral and anti-chiral primary fields, which in turn strongly constrains the motion of quasiparticles in the wire construction. We find that, in the presence of $U(1)$ charge conservation and the $\mathbb{Z}$ translation symmetry of the wire model, the non-diagonal quantum Hall state possesses a non-trivial symmetry-enriched topological order. Without the translation symmetry, the non-diagonal state is identical to a strongly-clustered Laughlin state of charge-$pe$ particles, which has a $U(1)$ charge sector and a boundary characterized by the chiral Luttinger liquid.
In the presence of both charge and translation symmetries, the non-diagonal state also possesses an additional neutral sector characterized by the quantum double model $\mathcal{D}(\mathbb{Z}_p)$, which has a $\mathbb{Z}_p$ topological order. Similar to Kitaev's toric code \cite{Kitaev2006exactly, Bombin2010AS} and Wen's plaquette model \cite{You2012plaquette, You2013plaquette}, the translation symmetry in the wire model acts as the $\mathbf{e}$-$\mathbf{m}$ anyonic symmetry of the $\mathbb{Z}_p$ topological order. As a result, a dislocation in the wire model, which is a termination of a wire in the bulk, acts as a twist defect for the anyonic symmetry. The non-diagonal states thus provide an electronic quantum Hall setting for realizing and testing various ideas developed in the general theory of anyonic symmetry \cite{Teo2015twistliquid, Teo2016AS, Barkeshli2019AS}. An experimental arena for the realization of non-diagonal states may be found in twisted materials, where an array of quasi-one-dimensional subsystems emerges with built-in translation symmetry \cite{Kennes20201dflatbands}. We have also investigated in detail the edge structure of non-diagonal states. For the edges perpendicular to the direction of the wires, we have derived the corresponding low-energy effective Hamiltonian, which is found to consist of a chiral Luttinger liquid (for the $U(1)$ charge sector) and a generalized $p$-state quantum clock model (for the $\mathcal{D}(\mathbb{Z}_p)$ neutral sector). Translation symmetry in the bulk of the wire model then implies the self-duality of the clock model on the edge. For $p=2$ and $p=3$, the self-dual clock model sits at a gapless critical transition; hence these non-diagonal states possess a pair of edges that are completely gapless. This is referred to as the symmetric edge, whose charge and neutral sectors are both gapless, thus allowing a single electron to be tunneled into it. In contrast, for the boundary parallel to the direction of the wires, only the charge sector remains gapless, so only a cluster of $p$ electrons can be tunneled into it. Hence, the non-diagonal state is anisotropic, possessing two distinct pairs of edges, as a reflection of its symmetry-enrichment. As a potential experimental probe, we have predicted the tunneling exponent for tunneling electrons from a Fermi liquid into the symmetric edge. As for $p\geq 4$, the self-dual generalized clock model on the symmetric edge acquires a richer phase diagram, which allows the neutral sector to be gapped even in the presence of symmetry. This is because the symmetric edge can sit at a first-order transition, and thus be gapped by spontaneous symmetry breaking. It is of intellectual interest (and hopefully of practical interest in the future) to numerically study the phase diagram of the self-dual generalized $p$-state clock model in greater detail, as previous studies have instead focused on the conventional clock model. We leave this for future work. An important future direction for us to pursue is to better characterize the non-diagonal states with the translation symmetry, equivalently the anyonic symmetry, \textit{gauged}. According to the general theory of anyonic symmetry, gauging the anyonic symmetry of an Abelian topological phase gives rise to a non-Abelian phase \cite{Teo2015twistliquid, Teo2016AS, Barkeshli2019AS}.
In the coupled-wire construction, such a gauging process concretely corresponds to the \textit{melting} of the wire model, because a dislocation (as the termination of a wire) has been shown to correspond to a twist defect (\textit{i.e.}, a gauge flux of the anyonic symmetry). Therefore, by melting the wire model of the non-diagonal anisotropic quantum Hall state, an isotropic non-Abelian quantum Hall state can be realized. We hope to develop a comprehensive theory characterizing such a state in the future. \begin{acknowledgments} The authors thank Meng Cheng, Paul Fendley and Jeffrey C.Y. Teo for helpful discussions. P.M.T. also thanks Ken K.W. Ma for discussions in the early stage of this work. This work is supported in part by the Croucher Scholarship for Doctoral Study from the Croucher Foundation (P.M.T.) and a Simons Investigator grant from the Simons Foundation (C.L.K.). \end{acknowledgments}
\section{Introduction} Robotic arms are showing a continual expansion in use, with growth expected to continue into the foreseeable future \cite{marketreport}. While the use of robotic arms continues to grow, they do not have a standard form of communication \cite{saunderson2019robots}, and current methods are costly to implement from both a technical and financial perspective. These systems generally focus on communicating intent, that is, describing what the arm is about to perform, while less research has been done on the application of social and emotional communication. For collaborative processes, displaying emotion has repeatedly been shown to improve key collaboration metrics in robotics, such as the likelihood of humans following social norms in robotic interactions \cite{jost2019examining}, better engagement with disability \cite{10.1145/3369457.3370915}, and treating robots like an equal human collaborator \cite{desideri2019emotional}. It has even been argued that no real collaboration can take place without social and emotional display \cite{fischer2019collaborative}. We believe musical prosody, which uses non-linguistic audio phrases based on musical melodies, can enable effective interaction with human collaborators without requiring a change in core functionality. The ability of sound to convey information for robotic platforms beyond trivial indicators is often underutilized, despite the use of intentional sound to deliver information in almost every device we encounter day to day \cite{walker1996human}. It has also been shown that displaying emotion is key to creating believable agents that people enjoy collaborating with \cite{Mateas:1999:ORI:1805750.1805762}, and prosody is effective in displaying emotions for humans and robots \cite{crumpton2016survey}. While affective nonverbal behavior has been shown to affect HRI metrics like humans' emotional state, self-disclosure, and the perceived animacy of the robot \cite{rosenthal2018effects}, gestures are often studied \cite{beck2010towards} while non-linguistic forms of audio feedback remain under-explored \cite{macdorman2006subjective}. Prosody has the potential to allow the robot to communicate in a manner relatable to that of humans, yet still different enough from human speech to avoid the uncanny valley \cite{macdorman2006subjective}. Emotional musical prosody is therefore uniquely positioned to enable better robotic communication and collaboration, capturing the advantages of sonic interaction and emotion conveyance while avoiding the uncanny valley. In this paper we describe our approach to musical prosody using a custom dataset of musical phrases for robotic arm interaction. We evaluate these interactions first to confirm that there is no negative impact through potential distraction in collaboration with a robotic arm. We then measure how musical prosody compares to single-pitch audio and no-audio systems for trust, trust recovery, likeability, and the perception of intelligence and safety. \section{Background} \subsection{Robotic Arm Forms of Communication} Amongst research into methods for robotic arms to communicate and signal their intent, there is no standardized set of approaches \cite{cha2018survey}. In social robotics, communication is often derived from human behaviour, such as gestures and gaze; these, however, are not readily available to robotic arms \cite{rosen2019communicating}.
Additionally, when these forms of communication are added to arms, they require significant expense, such as extra visual displays like a face \cite{6839819}, or, in the case of added gestures, risk compromising the core functionality of the arm. In robotics research, forms of non-verbal communication can generally be split into four categories: kinesics, proxemics, haptics, and chronemics, none of which are easily applied to an existing robotic system \cite{saunderson2019robots}. While varying movement to show intent has shown successful results \cite{bodden2016evaluating}, changes to path planning and movement dynamics are often not feasible. Another effective method for arms to display their intent is through visualization of the robot's future trajectory, such as via a head-mounted display worn by the human \cite{ruffaldi2016third}; however, this requires a significant investment and is a potential distraction to the user. Emotion has more commonly been used as an input to robotic arms, such as facial emotion recognition to control and change the behaviour of robotic arms \cite{saverysurvey}. Likewise, Galvanic Skin Response emotional evaluation of humans has been used to impact a robot's control pattern \cite{takahashi2001human}. Nevertheless, robotic arm displays of emotion in work and collaboration, or in interaction beyond showing intent, are widely overlooked in the robotics literature. \subsection{Communication for Trust and Trust Recovery} Collaboration with robotic arms requires trust, without which they can be underutilized \cite{johndlee}. Trust is largely developed in the first phase of a relationship, both between humans and between humans and robots \cite{kim2009repair}, meaning first impressions are crucial for trust. Poor first impressions from audio and visual stimuli can also damage the ability to develop trust later on \cite{Schaefer2016}. In this work we focus on affective trust, which is developed through emotional bonds and personal relationships, not competence \cite{freedy2007measurement}. Affective trust makes relationships more resilient to mistakes by either party \cite{rousseau1998not}. The display of emotion is critical for affective trust and increases the willingness to collaborate \cite{gompei2018factors}. Music and prosody have been shown to be powerful media for conveying emotions \cite{sloboda1999music}. In music and robotics, emotion can be categorized in many ways, such as in a discrete categorical manner (happiness, sadness, fear, etc.) \cite{devillers2005challenges} or through continuous dimensions such as valence and arousal \cite{russell2009emotion}. Most recent efforts to generate and manipulate robotic emotions through prosody have focused on linguistic robotic communication \cite{crumpton2016survey}. \section{Methods} \subsection{Research Questions and Hypotheses} Our first research question focuses on understanding the role of musical prosody in trust for a robotic arm. \begin{itemize} \item [RQ1] \textit{How does emotional musical prosody alter trust and trust recovery from mistakes, compared to no audio and single-pitch audio?} \end{itemize} For this question, our hypothesis is that the overall trust at the end of the interaction will be significantly higher for musical prosody than for single-pitch audio, and higher for single-pitch audio than for no audio. Our next research question compares common HRI metrics between the systems: the perceived intelligence, perceived safety, and likeability of the robotic system.
\begin{itemize} \item [RQ2] \textit{How does emotional musical prosody alter perceived safety, perceived intelligence and likeability? } \end{itemize} Our third research question explores the relation between users' self-reported metrics, gathered through highly cited surveys, and their actual responses, collected through a performance-based task. We are interested in comparing whether the system that is trusted more in self-reports is actually utilized more in performance-based tasks. \begin{itemize} \item [RQ3] \textit{When a user indirectly self-reports higher levels of trust in a robot, does this in turn lead to higher utilization and trust in a robotic arm's suggestions?} \end{itemize} We hypothesize that users' self-reported trust ratings will correspond to their actual use and the trust level implied by their choices to follow the decisions of the robotic system. We also hypothesize that, by using musical prosody after mistakes, human collaborators will be more likely to trust the robotic arm's suggestions directly after a mistake. For the first two research questions, we believe that participants will develop an internal model of the robot as an interactive emotional collaborator in the prosody condition. This will lead to higher levels of trust and an improved perception of safety and intelligence. \subsection{Experimental Design} Our experiment requires participants to perform a pattern learning and prediction task collaboratively with a robot. This is followed by two commonly used surveys: Schaefer's survey for robotic trust \cite{schaefer2016measuring}, and the Godspeed measurement for Anthropomorphism, Animacy, Likeability, Perceived Intelligence, and the Perceived Safety of Robots \cite{bartneck2009measurement}. The study process followed 5 steps for each participant: \begin{enumerate} \item Consent form and introduction to online form \item Description of the pattern recognition task \item 20 Trial Pattern Recognition Tasks \item 80 Pattern Recognition Tasks, recorded for data \item Godspeed and Schaefer Trust Survey (order randomized per participant) \end{enumerate} The pattern learning method was originally created by van Dongen et al. to understand reliance on decisions and to develop a framework for testing different agents \cite{van2013framework}. Since then it has been re-purposed many times, including for comparing the dichotomy of human-human and human-automation trust \cite{de2012world}, as well as the use of audio by cognitive agents \cite{muralidharan2014effects}. In our version of the pattern recognition task, participants attempted to correctly predict the next number in a sequence. Participants were told beforehand that humans and the pattern recognition software being tested in the experiment tend to be about 70\% accurate on average, which has been shown to cause humans to alternate between relying on themselves and on a decision agent. No further information was provided to the participants about the sequence's structure. The sequence was made up of a repeated sub-sequence that was 5 numbers long, containing only 1, 2, or 3 (such as 3, 1, 1, 2, 3). To prevent participants from quickly identifying the pattern, 10\% of the numbers in the sequence were randomly altered. Participants first completed a training exercise to learn the interface, in which a sub-sequence was repeated 4 times (20 total numbers). Then participants were informed that a new sequence had been generated for the final task.
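For concreteness, the sequence construction and the robot's suggestion policy can be summarized in the following sketch. This is a hypothetical Python illustration of the procedure just described (the function names are ours, and we read ``randomly altered'' as re-drawing a different value), not the code used in the study.
\begin{verbatim}
# Hypothetical sketch of the task: a 5-number sub-sequence over
# {1,2,3}, repeated, with 10% of entries altered; the robot's
# suggestion is correct 70% of the time.
import random

def make_sequence(repetitions, noise=0.1, seed=None):
    rng = random.Random(seed)
    base = [rng.choice([1, 2, 3]) for _ in range(5)]
    seq = base * repetitions
    for i in range(len(seq)):
        if rng.random() < noise:
            seq[i] = rng.choice([n for n in (1, 2, 3) if n != seq[i]])
    return seq

def robot_suggestion(correct_answer, accuracy=0.7, rng=random):
    if rng.random() < accuracy:
        return correct_answer
    return rng.choice([n for n in (1, 2, 3) if n != correct_answer])

training = make_sequence(4)    # 20 numbers for the training phase
final = make_sequence(16)      # 80 numbers for the recorded phase
\end{verbatim}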
The final sequence was generated in the same way, using a new sub-sequence with 16 repetitions (80 total numbers). Before the user chose which number they believed came next in the sequence, the robot would suggest an answer, with the robot being correct 70\% of the time (see Figure \ref{fig:sequence}). This process mirrors the procedure from the original paper \cite{van2013framework}. The previous timestep's correct answer was displayed for the user at decision time, to help them better keep track of the pattern while the robot performed its movements. We required participants to submit their answer after the robot finished pointing to its prediction, which took between 2.5 and 4.5 seconds. This also forced participants to spend time considering their decision given the robot's recommendation. The robot would respond to the user's choice depending on the outcome and the version of the experiment, as described in the following section. \begin{figure} [t] \centering \includegraphics[width=5cm]{images/flowchart.png} \caption{Robot Arm Emotional Response} \label{fig:sequence} \end{figure} \subsection{Experimental Groups and Robot Reactions} \label{subsec:groups} Our study was designed as a between-group experiment, where participants were randomly allocated to one of three groups: a prosody audio group (prosody), a single-pitch audio group (notes), and a control with no audio (gesture). The robot always responded to a user's action with the emotion determined by the process shown in Figure \ref{fig:sequence}. In all three versions of the experiment, the robot responded with the emotional gestures described in Section \ref{sec:gestures}. In the prosody group, the robot additionally responded with a prosody-based audio sample, randomly selected each time from the five phrases matching the response emotion; these phrases were obtained using the process described in Section \ref{sec:dataset}. In the notes group, the robot instead additionally responded with an audio file playing a single note. Each emotion was randomly assigned one pitch from the MIDI pitches 62, 65, 69, and 72. This assignment remained consistent throughout the experiment to maintain a relation between the sounds and the outcome. For each pitch, five different audio files were available for selection, each with a different instrument timbre and length (varying from 2-5 seconds), to provide variety similar to that of the five different prosody phrases available for each emotion. Finally, in the gesture group, the gesture was performed in silence. \subsection{Participants} We recruited 46 participants through the online survey platform Prolific\footnote{https://www.prolific.co/}. The participants' ages ranged from 19 to 49, with a mean age of 25 and a standard deviation of 7. Participants were randomly sorted into one of the three categories: audio with emotional musical prosody (15 participants), single-pitch audio (16 participants), and no audio (15 participants). Each experiment took approximately 30 minutes to complete, and participants were paid \$4.75 USD. \subsection{Dataset} \label{sec:dataset} In past work we created a deep learning generation system for musical prosody \cite{savery_finding_2019,savery2019establishing}. For this paper and experiment, we chose to use our recently created dataset of a human singing emotional phrases, to avoid any potential noise added by a generative system.
The recorded dataset contains 4.22 hours of musical material recorded by Mary Carter\footnote{https://maryesthercarter.com/}, divided into 1-15 second phrases, each corresponding to one of the 20 different emotions in the Geneva Emotion Wheel \cite{sacharin2012geneva} shown in Figure \ref{fig:geneva}. \begin{figure} [t] \centering \includegraphics[width=7cm]{images/geneva.png} \caption{Geneva Emotion Wheel} \label{fig:geneva} \end{figure} As part of our evaluation of the dataset, we manually selected 5 phrases for each emotion that we felt best represented that emotion, and had participants select an emotion and intensity when listening to each provided phrase. Participants were recruited from Prolific and MTurk. For quality assurance, participants were randomly given attention-check questions throughout the experiment telling them to select a certain answer. Each participant received 6.5 of these questions on average, and responses with more than one incorrect attention question were discarded, leaving a total of 45 participants for data analysis. In order to prevent the survey from being overly long, questions were randomly allocated, with 12 participants on average evaluating each individual phrase. Answers of None or Other were ignored in the analysis, resulting in an average of 11.3 valid evaluations for each phrase. Our analysis of the phrases used the metrics defined by Coyne et al. \cite{coyne2020using}. We calculated the rated emotion's mean and variance in units of emotion (converted from degrees on the wheel), weighted by user-rated intensity. For the experiment, we used phrases for the four emotions joy, shame, sadness, and anger. These emotions were chosen as both best matching the outcomes in Fig. \ref{fig:sequence} and having gesture descriptions specified in \cite{walbott98}. 5 phrases for each emotion were chosen to add variety to the robot's responses, so as not to tire the user with the same sounds, while still allowing only high-quality phrases to be included. In selecting the phrases for each of the four emotions, phrases from the closest two other emotions on the wheel within the same quadrant were also considered for selection. The sets were therefore \{joy, pride, pleasure\}, \{shame, disappointment, regret\}, \{sadness, guilt, regret\}, and \{anger, hate, contempt\}. We selected 5 of the 15 potential phrases for each by limiting length to be between 4 and 10 seconds, restricting the variance to be less than 2, requiring the weighted mean emotion rating to fall within the correct quadrant of the wheel, and finally selecting the phrases with the smallest difference between the intended emotion and the mean rated emotion. \subsection{Interaction} Participants interacted with a virtual 3-D model of the robot in an application designed in Unity. Each time a participant was asked to answer a question, the robot acted as a decision agent, pointing to an answer that may be correct or incorrect. The user would then type their answer using their computer keyboard. There were three versions of the interaction application, varying the way the robot reacted to the user's answer, as described in Section \ref{subsec:groups}. An example image of the interface is shown in Figure \ref{fig:interface}.
\begin{figure} [h] \centering \includegraphics[width=7cm]{images/interface.png} \caption{Example image from the robot interaction application} \label{fig:interface} \end{figure} \subsection{Gestures} \label{sec:gestures} We created a gesture for each of the emotions joy, shame, sadness, and anger, as one way the robot reacted to user input. We designed the gestures by utilizing the table of emotion-specific nonverbal behaviors provided in \cite{walbott98}, which is based on work by Darwin, as well as their post-hoc overview of discriminative body movements and poses. These ideas have been used before in designing emotional robot gestures \cite{bretan2015emotionally}. Our joy gesture has the robot lift its arm up high, making three quick upward movements while alternating which side it faces. The shame gesture has the robot slowly bend down and away from the camera to one side. The sadness gesture has the robot slowly bend down while remaining centered with respect to the camera. The anger gesture has the robot first lean downwards and make two fast lateral movements, and then lean upwards to make two more fast lateral movements. Examples of poses encountered during each gesture are shown in Figure \ref{fig:gesturePoses}. \begin{figure} [h] \centering \includegraphics[width=7cm]{images/gesture_poses.png} \caption{Example poses passed through during emotional gestures} \label{fig:gesturePoses} \end{figure} \section{Results} \begin{figure} \centering \includegraphics[width=6.5cm]{images/Trust.png} \caption{Box plot of trust scores} \label{fig:trust} \end{figure} \begin{figure} \centering \includegraphics[width=6.5cm]{images/robotAgreementPlot.png} \caption{Box plot showing percentage of answers agreeing with the robot overall and after the robot made a mistake (means indicated by white squares)} \label{fig:agreeBox} \end{figure} \subsection{RQ1: Trust Recovery} We first calculated Cronbach's alpha for the metrics in the trust survey, which gave a high reliability of 0.92. We then calculated the overall trust score by inverting categories where appropriate and then computing the mean for each individual. The mean trust of each group was prosody 0.71, notes 0.57, and gesture 0.62 (see Figure \ref{fig:trust}). A one-way ANOVA gave a significant p-value, \textit{p}=0.041. Pairwise t-tests between the groups' trust ratings gave the results: notes-gesture \textit{p}=0.46, notes-prosody \textit{p}=0.025, and gesture-prosody \textit{p}=0.025. This supports our hypothesis that trust would be higher for the arm using prosody. We also evaluated trust based on participants' actual use of the system. The percentages of answers for which users agreed with the robot in each group are plotted in Figure \ref{fig:agreeBox}. We performed a one-way ANOVA to test whether there was a significant difference in this metric between groups, \textit{p}=0.68, which was not significant. To compare trust recovery after mistakes between groups, we analyzed the percentage of times each user agreed with the robot immediately after an instance of following the robot's incorrect suggestion. The results are plotted in Figure \ref{fig:agreeBox}. The one-way ANOVA test yielded \textit{p}=0.87, which was not significant.
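For reference, the RQ1 analysis pipeline amounts to the short sketch below. It is written in Python with NumPy and SciPy, and the response arrays are hypothetical placeholders rather than the collected study data.
\begin{verbatim}
# Illustrative sketch of the RQ1 analysis; all data are placeholders.
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    # items: (participants x survey_items) matrix of responses
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(46, 1))                 # shared trust factor
survey = latent + rng.normal(0.0, 0.5, (46, 14))  # 14 correlated items
print(f"Cronbach's alpha: {cronbach_alpha(survey):.2f}")

prosody = rng.normal(0.71, 0.10, 15)   # hypothetical per-person trust
notes   = rng.normal(0.57, 0.10, 16)
gesture = rng.normal(0.62, 0.10, 15)
F, pval = stats.f_oneway(prosody, notes, gesture)
print(f"one-way ANOVA: F={F:.2f}, p={pval:.3f}")
for name, a, b in [("notes-gesture", notes, gesture),
                   ("notes-prosody", notes, prosody),
                   ("gesture-prosody", gesture, prosody)]:
    print(f"{name}: p={stats.ttest_ind(a, b).pvalue:.3f}")
\end{verbatim}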
\subsection{RQ2: Safety, Intelligence and Likeability} \begin{figure} \centering \includegraphics[width=7cm]{images/Metrics.png} \caption{Box plot of HRI metrics} \label{fig:metrics} \end{figure} Cronbach's alpha for Anthropomorphism (0.85), Intelligence (0.89) and Likeability (0.92) showed high reliability values of 0.85 or above. Safety's coefficient was slightly lower at 0.75. Across the categories, results showed the highest median for each metric for the system using emotional prosody (see Figure \ref{fig:metrics}), while gestures consistently outperformed notes. We performed a one-way ANOVA on each category, and only Anthropomorphism was significant, \textit{p}=0.006. \subsection{RQ3: Trust Survey and Participant Choices} We calculated the Pearson correlation coefficient between the final trust scores and the percentage of answers on which users agreed with the robot. The result was \textit{r}=0.12, which indicates only a weak correlation between the two metrics. \subsection{User Comments} The comments provided by participants indicate that it was possible, in all groups, to perceive the emotions the robot was trying to convey. In the prosody group, one user said, `The arm seems quite emotional! When it's right it is quite happy, but when it is wrong it gets particularly sad.' In the notes group, a user said `When we got the right answer the robot seemed cheerful, as opposed to when we selected the wrong answer (based on the robot's recommendation) it seemed as if he was sorry for giving wrong suggestions. If I chose an option different than the robot's suggestion and its answer was correct, it seemed as if he gave the look of I told you the right answer!' And in the gesture group, one comment was `the emotions were very easily perceivable.' Two participants in the notes group had negative comments on the audio response, describing it as `horrible' and `annoying', while one participant in the prosody group said the `humming was annoying.' Several participants mentioned that the robot moved too slowly. Some comments mentioned having a hard time detecting any pattern in the sequence, while in others users discussed their strategies. \section{Discussion and Conclusion} This study was performed using virtual interactions with a robot and 46 participants. It would be useful to investigate this further with a larger sample size, and to have participants interact with a physical robot for comparison. Additionally, more variations of robot responses could be compared and analyzed beyond the three that we investigated. For example, prosodic audio of a human voice could be compared with that of musical instruments. Our results show that when the robot responded with musical prosody (alongside the gestures present for all groups), users reported higher trust metrics than when the robot responded with single-pitched notes or no audio. This supports the idea that musical prosody has a positive effect on humans' trust in a robot. Comparing the Godspeed metrics, it was unsurprising to find that the addition of human vocalizations increased the Anthropomorphism of the system. We had expected Likeability to be higher; while the result was not significant, it would still be worth investigating further with more subjects. The most surprising result was that the notes audio fell well below the median of gestures-only in every category. We believe this shows that while prosody can have positive outcomes, audio, when implemented ineffectively, can drastically reduce HRI metrics.
This is likely because the notes audio was not related to the emotion displayed by the gesture, beyond remaining consistent throughout the experiment. It would nevertheless be interesting to explore further types of audio responses. Users' ratings of trust in the survey did not strongly correlate with their actual behavior during the task, in terms of how often they agreed with the robot's suggestions. This is consistent with the fact that while users reported significantly higher trust for audio with musical prosody, no significant differences were found in their actual choices during the interactions. A similar conflict between these types of metrics was found in the original decision framework paper \cite{van2013framework}, where higher reported trust in the decision aid did not always result in higher percent agreement with the aid. Some potential explanations include cognitive biases and reliance heuristics. \section{Acknowledgements} This material is based upon work supported by the National Science Foundation under Grant No.~1925178. \bibliographystyle{IEEEtran} \subsection{Robotic Arm Forms of Communication} Among research into methods for robotic arms to communicate and signal their intent, there is no standardized set of approaches \cite{cha2018survey}. In social robotics, communication is often derived from human behaviour, such as gestures and gaze; however, these are not readily available to robotic arms \cite{rosen2019communicating}. Additionally, when these forms of communication are added to arms, they either require significant expense, such as an extra visual display acting as a face \cite{6839819}, or, in the case of added gestures, risk compromising the core functionality of the arm. In robotics research, forms of non-verbal communication can generally be split into four categories: kinesics, proxemics, haptics and chronemics, none of which is easily applied to an existing robotic system \cite{saunderson2019robots}. While varying movement to show intent has shown successful results \cite{bodden2016evaluating}, changes to path planning and movement dynamics are often not feasible. Another effective method for arms to display their intent is through visualization of the robot's future trajectory, for example via a head-mounted display worn by the human \cite{ruffaldi2016third}; however, this requires significant investment and is a potential distraction to the user. Emotion has more commonly been used as an input to robotic arms, for example through facial emotion recognition used to control and change the behaviour of robotic arms \cite{mei2016emotion,iengo2012attentional,ying2016emotion}. Likewise, galvanic skin response (GSR) measurements of human emotion have been used to adapt a robot's control pattern \cite{takahashi2001human}. Nevertheless, robotic arm displays of emotion in work and collaboration, or in interaction beyond showing intent, are widely overlooked in the robotics literature. \subsection{Communication for Trust and Trust Recovery} Collaboration with robotic arms requires trust, without which the arms can be underutilized \cite{johndlee}. Trust is largely developed in the first phase of a relationship, both between humans and between humans and robots \cite{kim2009repair,miles1995organizational}, meaning that first impressions are crucial for trust. First impressions from audio and visual stimuli can also damage the ability to develop trust later on \cite{Schaefer2016}.
In this work we focus on affective trust, which is developed through emotional bonds and personal relationships rather than competence \cite{freedy2007measurement}. Affective trust makes relationships more resilient to mistakes by either party \cite{rousseau1998not}. The display of emotion is critical for affective trust and increases the willingness to collaborate \cite{gompei2018factors}. Music and prosody have been shown to be powerful media for conveying emotions \cite{sloboda1999music}. In music and robotics, emotion can be categorized in many ways, such as in a discrete categorical manner (happiness, sadness, fear, etc.) \cite{devillers2005challenges}, or through continuous dimensions such as valence and arousal \cite{russell2009emotion}. While some recent efforts to generate and manipulate robotic emotions through prosody have focused on linguistic robotic communication \cite{crumpton2016survey,breazeal2002recognition}, only recently has work focused on musical prosody \cite{savery_finding_2019,savery2019establishing}. One of the main aims of the presented work concerns recovery of trust, which presupposes a loss of trust.
\section{Introduction and motivation} Spectral graph theory is an important branch of discrete mathematics that links graphs to linear algebra. Its applications are numerous. For instance, in Chemistry the H\"{u}ckel molecular orbital theory of conjugated $\pi$ systems \cite{streitwieser1961} is essentially an exercise in applied spectral graph theory \cite{trinajstic1992}. Unlike the adjacency matrix itself, the spectrum of the adjacency matrix is an invariant. Singular graphs, i.e.\ graphs that have a zero eigenvalue of the adjacency matrix, have been studied extensively. One subclass of singular graphs, known as nut graphs, is of particular interest. A \emph{nut graph} is a singular graph whose $0$-$1$ adjacency matrix has a one-dimensional kernel (nullity $\eta = 1$) and has a full corresponding eigenvector. Some properties of nut graphs are easily proved. For instance, nut graphs are connected, non-bipartite, and have no vertices of degree one \cite{ScirihaGutman-NutExt}. It is conventional to require that a nut graph has $n \geq 2$ vertices, although some authors consider $K_1$ as the trivial nut. There exist numerous construction rules for making larger nut graphs from smaller ones \cite{ScirihaGutman-NutExt}. In this contribution we consider signed graphs, i.e.\ graphs with edge weights drawn from $\{-1, 1\}$, and ask whether they can also be nut graphs. Although the problem may appear to be purely mathematical, we were driven to study it by the chemical interest arising from the study of specific properties of conjugated $\pi$ systems. For example, in electronic structure theory, nut graphs have distributed radical reactivity, since occupation by a single electron of the molecular orbital corresponding to the kernel eigenvector leads to spin density on all carbon centres \cite{feuerpaper}. In the context of theories of molecular conduction, nut graphs have a unique status as the strong omni-conductors of nullity $1$ \cite{PWF_JCP_2014}. M\"{o}bius carbon networks, representable by signed graphs, obey different electron counting rules from those of unweighted H\"{u}ckel networks \cite{Fowler02}. The notion of signed graphs goes back to at least the 1950s and is documented in two dynamic survey papers \cite{ZaslavDS9,ZaslavDS8}. The fundamentals of this theory and its standard notation were developed by Thomas Zaslavsky in 1982 in \cite{Zaslav} and in a series of other papers. For general discussion of signed graphs the reader is referred to recent papers \cite{BeCiKoWa2019,GhHaMaMa2020} which set out notation and basic properties. For nut graphs, several useful papers are available, for instance \cite{Basic2020,Sc1998,Sc2007,Sc2008}. Recently, the problem of the existence of regular nut graphs was posed, and solved for cubic and quartic graphs \cite{GPS}. Later, it was solved for all degrees $\rho \leq 11$ \cite{Jan}. In this paper we carry the problem of the existence of regular nut graphs over from ordinary graphs to signed graphs. We are able to characterise all cases ($\rho \le 11$) for which a $\rho$-regular nut graph of order $n$ (either signed or unsigned) exists. In addition, we describe a construction, based on work on unsigned nut graphs \cite{Jan,GPS}, which produces larger signed nut graphs from smaller ones.
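Before turning to definitions, we note that the defining property of a nut graph is easy to test computationally, and such tests underpin the searches reported below. The following minimal Python sketch (assuming \texttt{numpy}; \texttt{is\_nut} is a hypothetical helper name) checks that a given adjacency matrix has nullity exactly $1$ and that the kernel eigenvector is full; the floating-point tolerance makes this a heuristic check, and exact integer arithmetic would be required for a proof.
\begin{verbatim}
import numpy as np

def is_nut(A, tol=1e-9):
    # A: symmetric 0/1 (or signed) adjacency matrix.
    vals, vecs = np.linalg.eigh(np.asarray(A, dtype=float))
    kernel = np.flatnonzero(np.abs(vals) < tol)
    if kernel.size != 1:          # nullity must be exactly one
        return False
    return bool(np.all(np.abs(vecs[:, kernel[0]]) > tol))  # full eigenvector

# The 4-cycle is singular with nullity 2: a core graph, but not a nut graph.
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(is_nut(C4))  # False
\end{verbatim}
The same test applies verbatim to the signed adjacency matrices introduced in the next section, since it inspects only the matrix.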
\section{Signed graphs, nut graphs and signed nut graphs} \subsection{Signed graphs} A \emph{signed graph} $\Gamma = (G, \Sigma)$ is a graph $G=(V, E)$ with a distinguished subset of edges $\Sigma \subseteq E$ that we shall call \emph{negative edges}, or more informally M\"{o}bius edges. Equivalently, we may consider the signed graph $(G, \sigma)$ to be a graph endowed with a mapping $\sigma\colon E \rightarrow \{-1,+1\}$, where $\Sigma = \{e \in E \mid \sigma(e) = -1\}$. The adjacency matrix $A(\Gamma)$ of a signed graph is a symmetric matrix obtained from the adjacency matrix $A(G)$ of the \emph{underlying graph} $G$ by replacing $1$ by $-1$ for entries $a_{uv}$ where $u$ is connected with $v$ by a negative edge. Symmetries (automorphisms) of a signed graph $\Gamma = (G,\Sigma)$ are also symmetries of the underlying graph $G$. They preserve edge weights: $$ \Aut \Gamma = \{\alpha \in \Aut G \mid \forall e = uv \in E(G): e \in \Sigma \iff \alpha(e)= \alpha(u)\alpha(v) \in \Sigma \}. $$ Hence, the automorphism group $\Aut \Gamma$ of a signed graph $\Gamma$ is a subgroup of the automorphism group $\Aut G$ of the underlying graph $G$. \subsection{Singular graphs, core graphs and nut graphs} A graph that has zero as an eigenvalue is called a \emph{singular graph}, i.e.\ a graph is singular if and only if its adjacency matrix has a non-trivial kernel. The dimension of the kernel is the \emph{nullity}. An eigenvector $\mathbf{x}$ can be viewed as a weighting of vertices, i.e.\ a mapping $\mathbf{x}\colon V \to \mathbb{R}$. A vector $\mathbf{x}$ belongs to the kernel $\ker A$, denoted $\mathbf{x} \in \ker A$, if and only if for each vertex $v$ the sum of entries over the neighbourhood $N_G(v)$ equals~$0$: \begin{equation} \sum_{u \in N_G(v)} \mathbf{x}(u) = 0. \label{eq:locCon} \end{equation} The equation \eqref{eq:locCon} is called the \emph{local condition}. The support, $\supp \mathbf{x}$, of a kernel eigenvector $\mathbf{x} \in \ker A$ is the subset of $V$ at which $\mathbf{x}$ attains non-zero values: \begin{equation} \supp \mathbf{x} = \{v \in V \mid \mathbf{x}(v) \neq 0\}. \label{eq:suppdef} \end{equation} If $\supp \mathbf{x} = V$, we say that the vector $\mathbf{x}$ is \emph{full}. Define $\supp \ker A$ as follows: \begin{equation} \supp \ker A = \bigcup_{\mathbf{x} \in \ker A} \supp \mathbf{x}. \label{eq:suppkerdef} \end{equation} A singular graph $G$ is a \emph{core graph} if $\supp \ker A = V$. A core graph of nullity $1$ is called a \emph{nut graph}. Although the kernel of a core graph may have a basis containing no full vectors, there always exists a basis in which all vectors are full. \begin{proposition} \label{prop:1} Each core graph admits a kernel basis that contains only full vectors. \end{proposition} \begin{proof} Let $V(G) = \{1, \ldots, n\}$ and let $\mathbf{x}_1, \ldots, \mathbf{x}_\eta$ be an arbitrary kernel basis. Let $\iota$ be the smallest integer (i.e.\ vertex label) such that at least one of the entries $\mathbf{x}_1(\iota), \ldots, \mathbf{x}_\eta(\iota)$ is zero, and let $\mathbf{x}_\ell(\iota)$ be one of those entries. As $G$ is a core graph, at least one of the entries $\mathbf{x}_1(\iota), \ldots, \mathbf{x}_\eta(\iota)$ is non-zero; let $\mathbf{x}_k(\iota)$ denote the first such entry encountered. We can replace the vector $\mathbf{x}_\ell$ by $\mathbf{x}_\ell + \alpha \mathbf{x}_k$, where $\alpha > 0$.
If we pick $\alpha$ large enough, i.e.\ if $$\alpha > \max \left\{ \frac{|\mathbf{x}_\ell(i)|}{|\mathbf{x}_k(i)|} \mid i \neq \iota, \mathbf{x}_k(i)\neq 0 \right \},$$ then $\mathbf{x}_\ell(i) + \alpha \mathbf{x}_k(i)$ will be non-zero for every $i$ at which $\mathbf{x}_\ell$ or $\mathbf{x}_k$ is non-zero. No new zero entries were created in the replacement process and at least one zero was eliminated. We repeat this process until no more zeros remain. \end{proof} \begin{corollary} Each core graph admits a kernel basis that contains only full vectors with integer entries. \end{corollary} \begin{proof} An integer basis always exists for an integer eigenvalue. The replacement process described in the proof of Proposition~\ref{prop:1} will keep all entries integer if we choose an integer for the value of $\alpha$ at each step. \end{proof} \subsection{Switching equivalence of signed graphs} Let $\Gamma = (G,\Sigma)$ be a signed graph over $G= (V,E)$ and let $U \subseteq V(G)$ be a set of its vertices. A \emph{switching} at $U$ is an operation that transforms $\Gamma = (G,\Sigma)$ to a signed graph $\Gamma^U = (G, \Sigma \symdif \partial U)$ where $\symdif$ denotes symmetric difference of sets and \[ \partial U = \{uv \in E \mid u \in U, v \notin U\}. \] Note that $\partial U = \partial (V(G) \setminus U)$, $\Gamma^U = \Gamma^{V(G) \setminus U}$ and $(\Gamma^U)^U = \Gamma$. In fact, switching induces an equivalence relation among the signed graphs with the same underlying graph. Any graph $G$ can be regarded as a \emph{traditional} signed graph $\Gamma = (G, \emptyset)$. We extend this definition. Any signed graph is a \emph{traditional} signed graph if it is switching equivalent to $\Gamma = (G, \emptyset)$; otherwise it is called a \emph{proper} signed graph. Note that our `traditional' signed graphs are called \emph{balanced} and our `proper' signed graphs are called \emph{unbalanced} in \cite{Zaslav}; see also \cite{ZaslavDS9}. In the literature of nut graphs the term `balanced' is used with a different meaning \cite{feuerpaper}. As observed, for instance in \cite{BeCiKoWa2019}, switching has an obvious linear algebraic description. \begin{proposition}[\cite{BeCiKoWa2019}] \label{prop:switchProp} Let $A(\Gamma)$ be the adjacency matrix of a signed graph $\Gamma$ and $A(\Gamma^U)$ be the corresponding adjacency matrix of the signed graph switched at $U$. Let $S = \diag(s_1,s_2, \ldots, s_n)$ be the diagonal matrix with $s_i = -1$ if $v_i \in U$ and $s_i = 1$ elsewhere. Then \[ A(\Gamma^U) = S A(\Gamma) S. \] Since $S^T = S^{-1} = S$ we also have: \[ A(\Gamma) = S A(\Gamma^U) S. \] \end{proposition} \begin{proposition} \label{thm:scsizes} Let $G$ be a connected graph on $n$ vertices and $m$ edges. There are $2^m$ signed graphs over $G$, there are $2^{m-n+1}$ switching equivalence classes, and each class has $2^{n-1}$ signed graphs. \end{proposition} \begin{proof} The essential idea of using a spanning tree is present in the earlier literature, e.g.\ Lemma~3.1 in \cite{Zaslav}. Here we use it to find the cardinalities of the switching classes, which in turn are needed for the analysis of Algorithm~\ref{alg-1}. We divide our argument into four steps. \vspace{0.5\baselineskip} \noindent {\bf Step (a)}: For a given connected graph $G$ (and a spanning tree $T$) there are $2^m$ different signed graphs. Indeed, we may choose any subset $\Sigma$ of the edge set $E$ and make all edges in $\Sigma$ negative. Some of the signed graphs will have all edges of $T$ positive, while others will have some edges of $T$ negative.
\vspace{0.5\baselineskip} \noindent {\bf Step (b)}: Among the $2^m$ signed graphs over $G$ exactly $2^{m-n+1}$ will have all edges of $T$ positive. Indeed, while fixing $(n-1)$ edges of $T$ positive, any selection of the remaining $(m-n+1)$ non-tree edges determines $\Sigma$. Such a selection can be done in $2^{m-n+1}$ ways. \vspace{0.5\baselineskip} Hence, Step (a) gives the total number of signed graphs while Step (b) gives the number of signed graphs having all edges of $T$ positive. \vspace{0.5\baselineskip} \noindent {\bf Step (c)}: There are $2^{n-1}$ switchings available. Namely, any switching is determined by a pair $(U,V \setminus U)$, but $(U,V \setminus U)$ is the same switching as $(V \setminus U,U)$. Hence we have to divide $2^n$, the number of subsets of $V$, by $2$. Moreover, since $G$ is connected, distinct switchings applied to a fixed signed graph produce distinct signed graphs; thus, each switching equivalence class contains $2^{n-1}$ signed graphs. Dividing the total number of signed graphs $2^m$ by the cardinality of each switching class $2^{n-1}$ we obtain the number of different switching equivalence classes: $2^{m-n+1}$. \vspace{0.5\baselineskip} Every switching equivalence class of signed graphs over $G$ contains exactly one signed graph with all edges of $T$ positive. Recall that in any tree there is a unique path between any two vertices. Choose any vertex $w$ from $V$. Let $U$ be the set of vertices $v$ in $T$ that have an even number of negative edges on the unique $w-v$ path of $T$. Then $V \setminus U$ contains the vertices $v$ that have an odd number of negative edges on the path from $w$ to $v$ along $T$. The switching $(U,V \setminus U)$ will make $T$ all positive. Hence, each switching class contains at least one signed graph in which all edges of $T$ are positive. However, since the number of switching classes found in Step (c) is the same as the count in Step (b), namely $2^{m-n+1}$, we may deduce that each switching class contains exactly one signed graph with an all-positive $T$. \end{proof} \subsection{Signed singular graphs, signed core graphs and signed nut graphs} One may consider the kernel of the adjacency matrix of a \emph{signed} graph. Note that definitions \eqref{eq:suppdef} and \eqref{eq:suppkerdef} can be extended to signed graphs in a natural way. A signed graph is a \emph{signed singular graph} if it has zero as an eigenvalue. A signed graph is a \emph{signed core graph} if $\supp \ker A(\Gamma) = V(G)$. A signed graph is a \emph{signed nut graph} if its adjacency matrix $A(\Gamma)$ has nullity one and its kernel $\ker A(\Gamma)$ contains a full kernel eigenvector. A graph $G$ on $n$ vertices and $m$ edges gives rise to $2^m$ distinct signed graphs. If we are interested only in non-isomorphic signed graphs, this number may be reduced by taking into account the symmetries that preserve signs. However, there is also an equivalence relation among the signed graphs $\Gamma = (G, \sigma)$ over the same underlying graph, namely the switching equivalence described above, that is very convenient, as it preserves several important signed invariants and reduces the number of graphs to be considered; its interaction with singularity is described in the next subsection. \subsection{Switching equivalence and signed singular graphs} \noindent Proposition~\ref{prop:switchProp} has the following immediate consequence: \begin{proposition} Let $\Gamma$ be a signed graph and let $U \subseteq V(G)$. If $\Gamma$ is singular and if $\mathbf{x}$ is any of its kernel vectors, then $\Gamma^U$ is singular and the vector $\mathbf{x}^U$ defined as \[ \mathbf{x}^U(v) = \begin{cases} \phantom{-} \mathbf{x}(v), & \text{if } v \in V(G) \setminus U, \\ - \mathbf{x}(v), & \text{if } v \in U, \end{cases} \] is a kernel eigenvector for $\Gamma^U$.
\end{proposition} This proposition is helpful in the study of singular graphs. Namely, it follows that many properties of signed graphs concerning singularity hold for the whole switching equivalence class. \begin{corollary} Let $\Gamma$ and $\Gamma'$ be two switching equivalent signed graphs. The following holds: \begin{enumerate}[label=(\arabic*)] \item If one of the pair is singular, then the other is also singular. In addition, if both are singular, they have the same nullity. \item If one of the pair is a core graph, then the other is also a core graph. \item If one of the pair is a nut graph, then the other is also a nut graph. \end{enumerate} \end{corollary} In particular, this reduces the search for nut graphs to a search over distinct switching equivalence classes. The following fact may be useful. \begin{corollary} Every switching equivalence class of signed nut graphs has exactly one representative whose kernel eigenvector has all entries positive. \end{corollary} \begin{proof} Let $\Gamma$ be a signed nut graph and let $\mathbf{x}$ be its kernel eigenvector. Let \[ U = \{v \in V \mid \mathbf{x}(v) < 0 \}. \] The switching at $U$ gives rise to the switching-equivalent signed nut graph $\Gamma^U$ with an all-positive kernel eigenvector. \end{proof} The above corollary enables us to select for any signed nut graph $\Gamma = (G, \Sigma)$ a unique switching equivalent graph $\Gamma' = (G, \Sigma')$ such that the kernel eigenvector $\mathbf{x}'$ relative to $\Gamma'$ is given by $\mathbf{x}'(v) = |\mathbf{x}(v)|$. This canonical choice of switching can be viewed in the more general setting of signed graphs. Using the idea of the proof of Proposition~\ref{thm:scsizes} and a database of regular connected graphs of a given order \cite{McKay201494} we may search for signed nut graphs of that order. \renewcommand{\algorithmicrequire}{\textbf{Input:}} \renewcommand{\algorithmicensure}{\textbf{Output:}} \begin{algorithm} \caption{Given the class of graphs $\mathcal{G}_{n,\rho}$, i.e.\ the class of connected $\rho$-regular graphs of order $n$, find a signed nut graph in this class.} \label{alg-1} \begin{algorithmic}[1] \Require{$\mathcal{G}_{n,\rho}$, the class of all connected $\rho$-regular graphs of order $n$.} \Ensure{A signed nut graph in $\mathcal{G}_{n,\rho}$ (or report that there is none).} \ForAll{$G \in \mathcal{G}_{n,\rho}$} \State $T \gets$ spanning tree of $G$ \ForAll{$\Sigma \subseteq E(G) \setminus E(T)$} \State $\Gamma \gets (G, \Sigma)$ \If{$\Gamma$ is a signed nut graph} \State \textbf{return} $\Gamma$ \EndIf \EndFor \EndFor \State \textbf{report} there is no signed nut graph in class $\mathcal{G}_{n,\rho}$ \end{algorithmic} \end{algorithm} Let $F(n,\rho)$ be the number of connected $\rho$-regular graphs of order $n$. In the worst case the algorithm has to check $2^{m-n+1}$ signed structures on each of them. Since $2m = n\rho$, this implies a maximum of $F(n,\rho)2^{m-n+1}$ tests. \section{Results} Our contribution here is based on recent interest in the study of families of nut graphs. An efficient strategy for generating nut graphs of small order was published in 2018 \cite{CoolFowlGoed-2017} and the full collection of nut graphs found there for orders up to 20 was reported in the House of Graphs \cite{hog}. For arbitrary simple graphs, the list is complete for orders up to 12, and counts are given up to order 13. A list of regular nut graphs of degrees from 3 to 8 was deposited in the same place. This list covers orders up to 22 and is complete up to order 14.
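To make Algorithm~\ref{alg-1} concrete, the sketch below implements its inner loop in Python, assuming \texttt{numpy} and \texttt{networkx}, and reusing the nut test \texttt{is\_nut} from the earlier sketch (repeated here for self-containedness); \texttt{find\_signed\_nut} is a hypothetical helper name. As guaranteed by Proposition~\ref{thm:scsizes}, fixing a spanning tree all-positive and enumerating the $2^{m-n+1}$ sign patterns on the remaining edges visits one representative of each switching equivalence class.
\begin{verbatim}
import itertools
import networkx as nx
import numpy as np

def is_nut(A, tol=1e-9):
    # Nullity exactly one and a full kernel eigenvector (cf. the
    # earlier sketch); a heuristic floating-point test.
    vals, vecs = np.linalg.eigh(np.asarray(A, dtype=float))
    kernel = np.flatnonzero(np.abs(vals) < tol)
    return kernel.size == 1 and bool(np.all(np.abs(vecs[:, kernel[0]]) > tol))

def find_signed_nut(G):
    # Fix a spanning tree all-positive; enumerate sign patterns on the
    # m - n + 1 non-tree edges: one signed graph per switching class.
    tree = set(map(frozenset, nx.minimum_spanning_edges(G, data=False)))
    non_tree = [e for e in G.edges() if frozenset(e) not in tree]
    A0 = nx.to_numpy_array(G)
    for signs in itertools.product((1, -1), repeat=len(non_tree)):
        A = A0.copy()
        for (u, v), s in zip(non_tree, signs):
            A[u, v] = A[v, u] = s
        if is_nut(A):
            return A
    return None  # no signed nut graph over G

# For K_5 the loop inspects 2**(10-5+1) = 64 matrices; one of them is a
# signed nut graph (n = 5 = 4k + 1; see the theorem on complete graphs).
print(find_signed_nut(nx.complete_graph(5)))
\end{verbatim}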
More recently, the orders for which regular nut graphs of degree $\rho$ exist have been established for $\rho \in \{3,4,5,6,7,8,9,10,11\}$. In~\cite{GPS}, the set $N(\rho)$ was defined as the set consisting of all integers $n$ for which a $\rho$-regular nut graph of order $n$ exists. There it was shown that \begin{align*} N(1) & = N(2) = \emptyset, \\ N(3) & = \{12\} \cup \{2k \mid k\geq 9\}, \\ N(4) & = \{8,10,12\} \cup \{k \mid k\geq14\}. \end{align*} In \cite{Jan}, $N(\rho)$ was determined for every $\rho$, $5 \leq \rho \leq 11$. Combining these results, we obtain the following theorem. \begin{restatable}{theorem}{mainthm} \label{thm_orders} The following holds: \begin{enumerate} \item $N(1) = \emptyset$ \item $N(2) = \emptyset$ \item $N(3) = \{12\} \cup \{2k \mid k\geq 9\}$ \item $N(4) = \{8,10,12\} \cup \{k \mid k\geq14\}$ \item $N(5) = \{2k \mid k\geq 5\}$ \item $N(6) = \{k \mid k\geq 12\}$ \item $N(7) = \{2k \mid k\geq 6\}$ \item $N(8) = \{12\} \cup \{k \mid k\geq 14\}$ \item $N(9) = \{2k \mid k\geq 8\}$ \item $N(10) = \{k \mid k\geq 15\}$ \item $N(11) = \{2k \mid k\geq 8\}$ \end{enumerate} \end{restatable} Note that for each $\rho$, $3 \leq \rho \leq 11$, the set $N(\rho)$ misses only a finite number of integer values. The question we tackle here is: which of the missing numbers can be covered by regular signed nut graphs? The main result of this paper is embodied in Table~\ref{tab:pinky} and stated formally in Theorems \ref{thm_orders}, \ref{thm:main} and~\ref{thm:10}. \begin{table}[!htbp] \centering \renewcommand*{\arraystretch}{1.3} \begin{tabular}{|c|*{15}{|c}|} \hline \backslashbox{$\rho$}{$n$} & 5 & 6 & 7 & 8 & 9 & 10 & 11 & 12 & 13 & 14 & 15 & 16 & 17 & 18 & 19 \\ \hline\hline 3 & \cellcolor{gray!25!white} & {\scriptsize\XSolidBrush} & \cellcolor{gray!25!white} & {\scriptsize\XSolidBrush} & \cellcolor{gray!25!white} & {\scriptsize\XSolidBrush} & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} \\ \hline 4 & \maltcross & {\scriptsize\XSolidBrush} & \maltcross & \checkmark & \maltcross & \checkmark & \maltcross & \checkmark & \maltcross & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline 5 & \cellcolor{gray!25!white} & {\small$\nexists$} & \cellcolor{gray!25!white} & {\scriptsize\XSolidBrush} & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} \\ \hline 6 & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & {\small$\nexists$} & \maltcross & \maltcross & \maltcross & \maltcross & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline 7 & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & {\small$\nexists$} & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} \\ \hline 8 & \cellcolor{gray!25!white} &
\cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \maltcross & \maltcross & \maltcross & \checkmark & \maltcross & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline 9 & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & {\small$\nexists$} & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} \\ \hline 10 & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & {\small$\nexists$} & \maltcross & \maltcross & \maltcross & \checkmark & \checkmark & \checkmark & \checkmark & \checkmark \\ \hline 11 & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & \cellcolor{gray!25!white} & {\small$\nexists$} & \cellcolor{gray!25!white} & \maltcross & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} & \checkmark & \cellcolor{gray!25!white} \\ \hline \end{tabular} \caption{Existence of small regular signed nut graphs of order $n$ and degree $\rho$. Notation: \checkmark \ldots there exists a traditional signed nut graph; \protect\maltcross \ldots there exists a proper signed nut graph (but no traditional signed nut graph); {\small$\nexists$} \ldots there exists no signed nut graph (by Theorem~\protect\ref{thm:10}); {\scriptsize\XSolidBrush} \ldots there exists no signed nut graph (proof by exhaustion).} \label{tab:pinky} \end{table} \begin{theorem} \label{thm:main} Let $N_s(\rho)$ denote the set of orders $n$ for which no $\rho$-regular nut graph exists but a (proper) $\rho$-regular signed nut graph does. \begin{enumerate} \item $N_s(3) = \{14,16\}$ \item $N_s(4) = \{5,7,9,11,13\}$ \item $N_s(5) = \emptyset$ \item $N_s(6) = \{8,9,10,11\}$ \item $N_s(7) = \{10\}$ \item $N_s(8) = \{9,10,11,13\}$ \item $N_s(9) = \{12, 14\}$ \item $N_s(10) = \{12, 13, 14\}$ \item $N_s(11) = \{14\}$ \end{enumerate} \end{theorem} \begin{proof} In the proof we first used a computer search based on the data about regular nut graphs from the House of Graphs \cite{hog}. For some sets of parameters $(n,\rho)$, we were able to prove existence or non-existence by exhaustive search using the straightforward Algorithm~\ref{alg-1}. For other sets the search was infeasible, but in these cases an example was generated by a heuristic approach. In some cases this involved planting a small number of negative edges. In others, a negative Hamiltonian cycle was added to an ordinary nut graph. \end{proof} A further theorem extends the results of Theorem~\ref{thm:main} to infinity along the leading diagonal of the table. \begin{theorem} \label{thm:10} Let $\Gamma$ be a signed graph whose underlying graph is $K_n$. If $\Gamma$ is a signed nut graph then $n \equiv 1 \pmod 4$. Moreover, for each $n \equiv 1 \pmod 4$, there exists a signed nut graph with underlying graph $K_n$. \end{theorem} \begin{proof} Let $n = 4k + q$, where $0 \leq q \leq 3$. We divide the proof into three parts: (a) $q \in \{0, 2\}$, (b) $q = 3$, and (c) $q = 1$. Assume that $\Gamma$ is a signed nut graph. Let $\bf x$ be a full kernel eigenvector (existence is guaranteed by the definition; cf.\ Proposition \ref{prop:1}). By the corollary to that proposition we may assume that $\bf x$ is a non-zero integer vector.
If $\bf x$ has no odd coordinate, we may multiply $\bf x$ by an appropriate power of $\frac{1}{2}$ so that at least one coordinate becomes odd. We call vertex $s$ even if $x_s \equiv 0 \pmod 2$ and odd if $x_s \equiv 1 \pmod 2$. The local condition for a kernel eigenvector $\bf x$ is \begin{equation} \sum_{s\sim r}x_s \sigma(rs) = 0 \label{eq:locSigned} \end{equation} for each choice of a pivot vertex $r$, where $\sigma(rs)$ is the weight ($\pm1$) of the edge between $r$ and $s$. We know that at least one vertex, say $r$, must be odd. The local condition at $r$ implies that there is an even number of odd vertices around $r$ and hence, together with $r$, an odd number of odd vertices in total for a presumed $K_n$ nut graph. Case (a): $q \in \{0, 2\}$. This means that $n$ is even. Since the number of odd vertices is odd while $n$ is even, the signed graph $\Gamma$ must have an even vertex $t$. The local condition at $t$ implies that there is an even number of odd vertices around $t$, and hence an even number of odd vertices in total, a contradiction. This rules out the existence of signed complete nut graphs for $q \in \{0, 2\}$. Case (b): $q = 3$. In this case all entries $x_s$ are odd: by the parity count above, the local condition \eqref{eq:locSigned} at an even vertex forces an even number of odd vertices in total, while at an odd vertex it forces an odd number, so even and odd vertices cannot coexist; as an odd vertex exists, all vertices are odd. Let $m_+$ denote the number of edges in $\Gamma$ with positive sign. For each vertex $s$, let $\rho_+(s)$ and $\rho_-(s)$, respectively, denote the number of edges with positive sign and negative sign that are incident with $s$. Note that $\rho_+(s) + \rho_-(s) = n - 1$. Summing local conditions over all pivots $r$: $$ 0 = \sum_r \rho_+(r) x_r - \sum_{r}\rho_-(r) x_r $$ Hence, \begin{align*} 0 & = \sum_r \{ \rho_+(r) - \rho_-(r) \} x_r = \sum_r \{ 2 \rho_+(r) -n + 1 \} x_r \end{align*} and \begin{align} \sum_r \rho_+(r) x_r & = \frac{n - 1}{2} \sum_r x_r = \{2k + 1 \} \sum_r x_r. \label{eq:evenodd} \end{align} The RHS of \eqref{eq:evenodd} is an odd number since it is a product of two odd numbers ($\sum_r x_r$ is a sum of $n = 4k+3$ odd numbers and is therefore odd); hence $\sum_r \rho_+(r) x_r$ is odd. Since all $x_r$ are odd, $\sum_r \rho_+(r)$ is odd as well; therefore, the subgraph with positive edges only has an odd number of vertices with odd degree. By the Handshaking Lemma ($\sum_r \rho_+(r) = 2m_+$) this is impossible. Case (c): $q = 1$ is the only remaining possibility and the first part of the theorem follows, provided such nut graphs exist. Now we construct a signed nut graph for each $n$ of the form $n = 4k + 1$. A signed graph (call it $\Gamma$) is constructed from $K_{4k+1}$ as follows. Partition the vertex set of $K_{4k+1}$ into a single vertex $r = 0$ and $k$ subsets of $4$ vertices for $k$ copies of the path graph $P_4$. Change the signs of all edges internal to each $P_4$ to $-1$. To construct a kernel eigenvector $\bf x$ of $\Gamma$ for $\lambda = 0$, place $+1$ on vertex $0$, and then the entries $-1,+1,+1,-1$ on each $P_4$. We denote by $1,2,3,4$ the vertices of the first $P_4$, by $5,6,7,8$ the vertices of the second $P_4$, etc. Owing to the symmetry of $\Gamma$ (since automorphisms of $\Gamma$ preserve edge-weights) there are only three vertex types to be considered. \begin{enumerate} \item For $r = 0$ all weights $\sigma(rs) = 1$, and exactly half of the entries $x_s$, $s \neq 0$, are equal to $1$ while the other half are equal to $-1$. Hence, \eqref{eq:locSigned} holds in this case. \item The vertex $r$ may be an end-vertex of any of the $k$ paths $P_4$. The net contribution of the three remaining vertices of $P_4$ is $-1$, and all contributions of other paths cancel out. Taking into account the edge to vertex $0$, the weighted sum in \eqref{eq:locSigned} is indeed equal to $0$. \item The vertex $r$ may be an inner vertex of any of the $k$ paths $P_4$.
Again, the net contribution of the three remaining vertices of $P_4$ is $-1$, and by the same argument, the weighted sum in \eqref{eq:locSigned} is again equal to $0$. \end{enumerate} \par\noindent As $\bf x$ is a full kernel eigenvector, $\Gamma$ is a signed core graph. It remains to prove that $\Gamma$ is a signed nut graph, i.e.\ that it has nullity $\eta(\Gamma) = 1$. This is done by showing that the constructed full kernel eigenvector $\bf x$ is the only eigenvector for $\lambda = 0$ (up to a scalar multiple). First, note that all edges incident with vertex $0$ have weight $+1$. Hence for $r = 0$, \eqref{eq:locSigned} becomes: \begin{equation} \sum_{s \neq 0}x_s = 0 \end{equation} It follows that \begin{equation} \sum_{s=0}^{n-1}x_s = x_0 \end{equation} Now consider any path $P_4$ with vertices, say, $1,2,3,4$. Note that vertices $1$ and $4$ fall into one symmetry class while vertices $2$ and $3$ are in the other symmetry class. The local conditions are: \begin{align} 0 = x_0 - x_2 + \sum_{s\neq 0,1,2}x_s & = x_0 - x_1 - 2x_2 \\ 0 = x_0 - x_1 -x_3 + \sum_{s\neq 0,1,2,3}x_s & = x_0 - 2x_1 - x_2-2x_3 \\ 0 = x_0 - x_2 - x_4 + \sum_{s\neq 0,2,3,4}x_s & = x_0 - 2x_2 - x_3 -2x_4 \\ 0 = x_0 - x_3 + \sum_{s\neq 0,3,4}x_s & = x_0 - 2x_3 - x_4 \end{align} It is straightforward to show that $x_1$ to $x_4$ are related to $x_0$ as: \begin{equation} x_1 = x_4 = -x_0, \qquad x_2 = x_3 = x_0 \end{equation} Since this holds for all $k$ path graphs $P_4$, it follows that $\Gamma$ is indeed a signed nut graph. \end{proof} The spectrum of $\Gamma$ is easily described. The eigenvalues, with multiplicities, are: \begin{equation} (2(k-1) \pm \sqrt{4k(k-1) + 5})^1, (\pm \sqrt{5})^k, (\pm \sqrt{5}-2)^{k-1}, 0^1. \end{equation} For $k = 1$, this reduces to the five eigenvalues $\pm \sqrt{5}$ (each with multiplicity two) and $0$, i.e.\ to three distinct values. \section{A construction for proper signed nut graphs} Theorem~\ref{thm:main} has answered our initial question: if we consider ordinary graphs as special cases of signed graphs and restrict to $\rho \leq 11$, we need only perform a computer search for the existence of signed nut graphs for those values of $n$ for which no ordinary nut graph exists. However, if we wanted to search for \emph{proper} signed nut graphs with the intention of determining the orders for which a proper signed nut graph exists, then methods would be needed for generating larger signed nut graphs. There are several known constructions that take a nut graph and produce a larger nut graph. We will revisit one construction here and extend it to signed graphs. This is the so-called Fowler construction for enlarging unweighted nut graphs. Recall that $\Gamma$ is a proper signed nut graph if and only if it is not switching equivalent to an ordinary unweighted nut graph. Let $G$ be a graph and $v$ a vertex of degree $\rho$. Let $N(v) = \{u_1, u_2, \ldots, u_\rho \}$. Recall \cite{GPS,Jan} that the Fowler Construction, denoted $F(G, v)$, is a graph with \begin{align*} V(F(G, v)) = V(G) \sqcup \{ q_1, \ldots, q_\rho \} \sqcup \{ p_1, \ldots , p_\rho \} \end{align*} and \begin{align*} E(F(G, v)) = (E(G) \setminus \{ vu_i \mid 1 \leq i \leq \rho \}) & \cup \{ q_ip_j \mid 1 \leq i, j \leq \rho, i \neq j \} \\ & \cup \{ vq_i \mid 1 \leq i \leq \rho \} \cup \{ p_i u_i \mid 1 \leq i \leq \rho \}.
\end{align*} \begin{figure}[!htbp] \subcaptionbox{$\Gamma$\label{Fig-FowlerExta}}[.5\linewidth]{ \centering \begin{tikzpicture} \definecolor{mygreen}{RGB}{205, 238, 231} \tikzstyle{vertex}=[draw,circle,font=\scriptsize,minimum size=13pt,inner sep=1pt,fill=mygreen] \tikzstyle{edge}=[draw,thick] \coordinate (u1) at (-2, -1); \coordinate (u2) at (-0.7, -1); \coordinate (ud) at (2, -1); \path[edge,fill=yellow!20!white] (u1) .. controls ($ (u1) + (-60:0.5) $) and ($ (u2) + (-120:0.5) $) .. (u2) .. controls ($ (u2) + (-60:0.7) $) and ($ (ud) + (-120:0.7) $) .. (ud) .. controls ($ (ud) + (-60:3) $) and ($ (u1) + (-120:3) $) .. (u1); \node[vertex,fill=mygreen,label={[yshift=0pt,xshift=0pt]90:$a$}] (v) at (0, 0) {$v$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$b_1$}] (ux1) at (u1) {$u_1$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$b_2$}] (ux2) at (u2) {$u_2$}; \node[vertex,fill=mygreen,label={[xshift=-2pt,yshift=0pt]0:$b_\rho$}] (uxd) at (ud) {$u_\rho$}; \node at (0.5, -1) {$\ldots$}; \path[edge,color=red] (v) -- (ux1) node[midway,xshift=-23pt,yshift=0,color=black] {\footnotesize $\sigma(vu_1)$}; \path[edge,color=red] (v) -- (ux2) node[midway,xshift=16pt,yshift=0,color=black] {\footnotesize $\sigma(vu_2)$}; \path[edge] (v) -- (uxd) node[midway,xshift=20pt,yshift=0,color=black] {\footnotesize $\sigma(vu_\rho)$}; \end{tikzpicture} }% \subcaptionbox{$F(\Gamma, v)$\label{Fig-FowlerExtb}}[.5\linewidth]{ \centering \begin{tikzpicture} \definecolor{mygreen}{RGB}{205, 238, 231} \tikzstyle{vertex}=[draw,circle,font=\scriptsize,minimum size=13pt,inner sep=1pt,fill=mygreen] \tikzstyle{edge}=[draw,thick] \coordinate (u1) at (-2, -4); \coordinate (u2) at (-0.7, -4); \coordinate (ud) at (2, -4); \path[edge,fill=yellow!20!white] (u1) .. controls ($ (u1) + (-60:0.5) $) and ($ (u2) + (-120:0.5) $) .. (u2) .. controls ($ (u2) + (-60:0.7) $) and ($ (ud) + (-120:0.7) $) .. (ud) .. controls ($ (ud) + (-60:3) $) and ($ (u1) + (-120:3) $) .. 
(u1); \node[vertex,fill=mygreen,label={[yshift=-4pt,xshift=0pt]90:$(1-\rho)a$}] (v) at (0, 0) {$v$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:{\footnotesize $\sigma(vu_1)b_1$}}] (q1) at (-2, -1) {$q_1$}; \node[vertex,fill=mygreen,label={[xshift=-2pt,yshift=0pt]0:{\footnotesize $\sigma(vu_2)b_2$}}] (q2) at (-0.7, -1) {$q_2$}; \node[vertex,fill=mygreen,label={[xshift=-2pt,yshift=0pt]0:{\footnotesize $\sigma(vu_\rho)b_\rho$}}] (qd) at (2, -1) {$q_\rho$}; \node at (1.25, -1) {$\ldots$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$a$}] (p1) at (-2, -2.5) {$p_1$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$a$}] (p2) at (-0.7, -2.5) {$p_2$}; \node[vertex,fill=mygreen,label={[xshift=0pt,yshift=0pt]0:$a$}] (pd) at (2, -2.5) {$p_\rho$}; \node at (0.5, -2.5) {$\ldots$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$b_1$}] (ux1) at (-2, -4) {$u_1$}; \node[vertex,fill=mygreen,label={[xshift=2pt,yshift=0pt]180:$b_2$}] (ux2) at (-0.7, -4) {$u_2$}; \node[vertex,fill=mygreen,label={[xshift=-2pt,yshift=0pt]0:$b_\rho$}] (uxd) at (2, -4) {$u_\rho$}; \node at (0.5, -4) {$\ldots$}; \path[edge] (v) -- (q1); \path[edge] (v) -- (q2); \path[edge] (v) -- (qd); \path[edge] (p1) -- (q2); \path[edge] (p1) -- (qd); \path[edge] (p2) -- (q1); \path[edge] (p2) -- (qd); \path[edge] (pd) -- (q1); \path[edge] (pd) -- (q2); \path[edge,color=red] (p1) -- (ux1) node[midway,xshift=-15pt,yshift=0,color=black] {\footnotesize $\sigma(vu_1)$}; \path[edge,color=red] (p2) -- (ux2) node[midway,xshift=16pt,yshift=0,color=black] {\footnotesize $\sigma(vu_2)$}; \path[edge] (pd) -- (uxd) node[midway,xshift=16pt,yshift=0,color=black] {\footnotesize $\sigma(vu_\rho)$}; \end{tikzpicture} }% \caption{A construction for expansion of a signed nut graph $\Gamma$ about vertex $v$ of degree $\rho$, to give $F(\Gamma, v)$. The labelling of vertices in $\Gamma$ and $F(\Gamma, v)$ is shown within the circles that represent vertices. Shown beside each vertex is the corresponding entry of the unique kernel eigenvector of the respective graph. Panel (a) shows the neighbourhood of vertex $v$ in $\Gamma$. Edges from vertex $v$ to its neighbours have weights $\sigma(vu_i)$ which are either $+1$ or $-1$. In the figure, edges with weight $-1$ are indicated in red, as an illustration. Edges of the remainder of the graph, indicated by the shaded bubble, may take arbitrary signs. Panel (b) shows additional vertices and edges in $F(\Gamma, v)$. Vertices $q_i$ inherit their entries from $\Gamma$ as described in Equation \eqref{eq:feigenvector}. Edges $p_iu_i$ inherit their weights (signs) from $\Gamma$. All other explicit edges in Panel (b) have weights $+1$.} \label{Fig-FowlerExt} \end{figure} Here, we generalise this construction to signed graphs. \begin{definition} Let $\Gamma = (G, \sigma)$ be a signed graph and $v$ a vertex of $G$ that has degree $\rho$. Then $F(\Gamma, v) = (F(G, v), \sigma')$, where for $1 \leq i, j \leq \rho$, \begin{equation} \sigma'(e) = \begin{cases} 1 & \text{if } e = vq_i, \\ 1 & \text{if } e = q_ip_j, \\ \sigma(vu_i) & \text{if } e = p_iu_i, \\ \sigma(e) & \text{otherwise}, \end{cases} \end{equation} is \emph{the Fowler Construction for signed graphs}. \end{definition} \begin{lemma} Let $\Gamma = (G, \sigma)$ be a signed graph and $v$ a vertex of $G$ that has degree $\rho$ and let $\bf x$ be a kernel eigenvector for $\Gamma$. 
Then $\bf x'$, defined as \begin{equation} {\bf x'}(w) = \begin{cases} -(\rho - 1){\bf x}(v) & \textup{if } w = v, \\ \sigma(vu_i){\bf x}(u_i) & \textup{if } w = q_i, \\ {\bf x}(v) & \textup{if } w = p_i, \\ {\bf x}(w) & \textup{otherwise}, \end{cases} \label{eq:feigenvector} \end{equation} for $w \in V(F(\Gamma, v))$, is a kernel eigenvector for $F(\Gamma, v)$. \end{lemma} The local structures in the signed graphs $\Gamma$ and $F(\Gamma, v)$ are shown in Figure~\ref{Fig-FowlerExt}, which also indicates the local relationships between kernel eigenvectors in these graphs. \begin{lemma} \label{EquivLabeling} Let $\Gamma$ be a singular signed graph, and let $\mathbf{x}$ be a kernel eigenvector. Let $u,v \in V$ be any two non-adjacent vertices, having the same degree, say $\rho$, and sharing $\rho-1$ neighbours. Let $u'$ denote the neighbour of $u$ that is not a neighbour of $v$, and let $v'$ denote the neighbour of $v$ that is not a neighbour of $u$. If $\sigma(uw) = \sigma(wv)$ for all $w \in N(u) \setminus \{u'\}$, then $|\mathbf{x}(u')| = |\mathbf{x}(v')|$. Moreover, $\mathbf{x}(u') = \mathbf{x}(v')$ if and only if $\sigma(vv') = \sigma(uu')$. \end{lemma} \begin{figure}[!htbp] \centering \begin{tikzpicture} \definecolor{mygreen}{RGB}{205, 238, 231} \tikzstyle{vertex}=[draw,circle,font=\scriptsize,minimum size=13pt,inner sep=1pt,fill=mygreen] \tikzstyle{edge}=[draw,thick] \node[vertex] (w1) at (0, 0) {$w_2$}; \node[vertex] (w2) at (0, -0.7) {$w_3$}; \node[vertex] (w3) at (0, -1.4) {$w_4$}; \node[vertex] (w4) at (0, -2.1) {$w_5$}; \node[vertex] (wk) at (0, -3.5) {$w_{\rho}$}; \node[vertex] (u) at (-2, -1.4) {$u$}; \node[vertex] (up) at (-3.5, -1.4) {$u'$}; \node[vertex] (v) at (2, -1.4) {$v$}; \node[vertex] (vp) at (3.5, -1.4) {$v'$}; \path[edge,color=blue,dashed] (up) -- (u); \path[edge,color=blue,dashed] (vp) -- (v); \path[edge] (u) -- (w1) -- (v); \path[edge,color=red] (u) -- (w2) -- (v); \path[edge,color=red] (u) -- (w3) -- (v); \path[edge] (u) -- (w4) -- (v); \path[edge] (u) -- (wk) -- (v); \node at (0, -2.7) {$\vdots$}; \end{tikzpicture} \caption{The neighbourhood of vertices $u$ and $v$ from Lemma~\ref{EquivLabeling}. The red edges indicate a possible selection of edges with weight $-1$.} \label{fig:locLemma} \end{figure} \begin{proof} Let $N(u) \setminus \{u'\} = N(v) \setminus \{v'\} = \{w_2, \ldots, w_\rho \}$ (see Figure~\ref{fig:locLemma}). The respective local conditions at vertices $u$ and $v$ are \begin{align} \sigma(uu') \mathbf{x}(u') + \sum_{i=2}^{\rho} \sigma(uw_i) \mathbf{x}(w_i)& = 0, \label{eq:loc1} \\ \sigma(vv') \mathbf{x}(v') + \sum_{i=2}^{\rho} \sigma(w_iv)\mathbf{x}(w_i) & = 0. \label{eq:loc2} \end{align} Since $\sigma(uw_i) = \sigma(w_iv)$ for all $2 \leq i \leq \rho$, we get that \begin{equation} \sigma(uu') \mathbf{x}(u')= \sigma(vv') \mathbf{x}(v') , \label{eq:xeseql} \end{equation} by taking the difference of \eqref{eq:loc1} and \eqref{eq:loc2}. Clearly, $$ |\mathbf{x}(u')| = |\sigma(uu') \mathbf{x}(u') | = |\sigma(vv') \mathbf{x}(v')| = |\mathbf{x}(v')|. $$ If $\sigma(uu') = \sigma(vv')$, then \eqref{eq:xeseql} implies $\mathbf{x}(u') = \mathbf{x}(v')$. Similarly, if $\mathbf{x}(u') = \mathbf{x}(v')$, then \eqref{eq:xeseql} implies $\sigma(uu') = \sigma(vv')$. \end{proof} \begin{lemma} \label{lem:lemmaxxx} Let $\Gamma$ and $\Gamma'$ be signed graphs over the same base graph $G$, i.e.\ $\Gamma = (G, \sigma)$ and $\Gamma' = (G, \sigma')$. Let $v$ be a vertex of $G$. 
Then $\Gamma$ is switching equivalent to $\Gamma'$ if and only if $F(\Gamma, v)$ is switching equivalent to $F(\Gamma',v)$. \end{lemma} \begin{proof} Let $\Gamma$ and $\Gamma'$ be two signed graphs over graph $G$, say $\Gamma = (G,\Sigma)$ and $\Gamma' = (G,\Sigma')$. Let $v \in V(G)$ and let $F(\Gamma,v)$ and $F(\Gamma',v)$ be the corresponding Fowler constructions. Let $\Gamma$ and $\Gamma'$ be switching equivalent. This means that there exists $S \subset V(G)$ such that $\Gamma' = \Gamma^S$. We know that $\Gamma^S = \Gamma^{V(G) \setminus S}$. Without loss of generality we may assume that $v \notin S$. Let the vertex labelling of $F(\Gamma,v)$, $F(\Gamma',v)$ and $F(G,v)$ be the same as in Figure~\ref{Fig-FowlerExt}. In particular, this means that all vertices of $G$ belong also to $F(G,v)$. Since $v \notin S$ we have: $$F(\Gamma',v) = F(\Gamma^S,v) = F(\Gamma,v)^S.$$ Hence it follows: $$\Gamma \sim \Gamma' \Rightarrow F(\Gamma,v) \sim F(\Gamma',v).$$ To prove the converse, assume that $F(\Gamma,v) \sim F(\Gamma',v)$, where $\Gamma = (G, \Sigma)$ and $\Gamma' = (G,\Sigma')$; that is, there exists $S \subset V(F(G,v))$ with $F(\Gamma',v) = F(\Gamma,v)^S$, and without loss of generality $v \notin S$. Since all edges above $u_1,u_2, \ldots, u_\rho$ in Figure \ref{Fig-FowlerExt}(\subref{Fig-FowlerExtb}) are positive in both signed graphs, it is clear that $S \subset V(G)$ and the result follows. \end{proof} \begin{theorem} \label{thm:dathm} Let $\Gamma$ be a signed graph and $v$ any one of its vertices. Then the nullities of $\Gamma$ and $F(\Gamma, v)$ are equal, i.e.\ $\eta(\Gamma) = \eta(F(\Gamma, v))$. Moreover, $\ker \Gamma$ admits a full eigenvector if and only if $\ker F(\Gamma, v)$ admits a full eigenvector. \end{theorem} \begin{proof} Let $u_1,\ldots, u_{\rho}$ be the neighbours of vertex $v$ in $G$. Assume first that $\Gamma$ is singular and that $\mathbf{x}$ is a kernel eigenvector. Let $\mathbf{x}(w)$ denote the entry of $\mathbf{x}$ at vertex $w$. Let $a = \mathbf{x}(v)$ and let $b_i = \mathbf{x}(u_i)$. We now produce a vertex labelling $\mathbf{x}'$ of $F(\Gamma,v)$ as in \eqref{eq:feigenvector}. It follows that if $\mathbf{x}$ is a kernel eigenvector of $\Gamma$ then $\mathbf{x}'$ is a kernel eigenvector of $F(\Gamma,v)$. Thus $\eta( F(\Gamma, v)) \geq \eta(\Gamma)$. On the other hand, apply Lemma~\ref{EquivLabeling} to $F(\Gamma,v)$ and a kernel eigenvector $\mathbf{x}'$. First consider vertices $q_i$ and $q_j$ and their neighbourhoods. Lemma~\ref{EquivLabeling} implies that $\mathbf{x}'(p_i) = \mathbf{x}'(p_j)$. Hence $\mathbf{x}'$ is constant on the vertices $p_i$, say $\mathbf{x}'(p_i) = a$. Thus, from the local condition at any $q_i$, it follows that $\mathbf{x}'(v) = -(\rho-1)a$. The second application of the lemma is to vertices $v$ and $p_i$. It implies that for each $i$ the values at $q_i$ and $u_i$ satisfy $\mathbf{x}'(q_i) = \sigma(vu_i)\,\mathbf{x}'(u_i)$, since $\sigma'(vq_i) = 1$ while $\sigma'(p_iu_i) = \sigma(vu_i)$. Finally, let $\mathbf{x}(w) = \mathbf{x}'(w)$ for every $w \in V(G) \setminus \{v\}$ and let $\mathbf{x}(v) = a$. Hence, the existence of a kernel eigenvector $\mathbf{x}'$ on $F(\Gamma,v)$ implies the existence of a kernel eigenvector $\mathbf{x}$ on $\Gamma$. Thus $\eta( F(\Gamma, v)) \leq \eta(\Gamma)$. \end{proof} \begin{proof}[An alternative proof of $\eta( F(\Gamma, v)) \leq \eta(\Gamma)$] One may work out the rank of $\mathbf{A}(F(\Gamma, v))$ directly. Let $N_G(v)=\{u_1,\ldots,u_{\rho}\}$ and $V(G)\setminus N[v]=\{w_1,\ldots,w_{n-\rho-1}\}$, where $n=|V(G)|$.
The adjacency matrix $\mathbf{A}(\Gamma)$ can be partitioned into block matrices as follows: \begin{equation} \mathbf{A}(\Gamma) = \begin{blockarray}{cccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & \\ \begin{block}{(c|ccc|ccc)c} 0 & 0 & \ldots & 0 & \sigma(vu_1) & \ldots & \sigma(vu_{\rho}) & v\\ \cline{1-7} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf B$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf C$}} & w_1 \\ \vdots & & & & & & & \vdots \\ 0 & & & & & & & w_{n-\rho-1}\\ \cline{1-7} \sigma(vu_1) & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf C}^T$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf D$}} & u_1 \\ \vdots & & & & & & & \vdots \\ \sigma(vu_{\rho}) & & & & & & & u_{\rho}\\ \end{block} \end{blockarray} \end{equation} where submatrices $\bf B$, $\bf C$ and $\bf D$ encode the signed edges between the respective vertex-sets. The adjacency matrix of $\mathbf{A}(F(\Gamma,v))$ can similarly be partitioned as follows: \begin{equation} \begin{blockarray}{cccccccccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & q_1 & \ldots & q_{\rho} & p_1 & \ldots & p_{\rho}\\ \begin{block}{(c|ccc|ccc|ccc|ccc)c} 0 & 0 & \ldots & 0 & 0 & \ldots & 0 & 1 & \ldots & 1 & 0 & \ldots & 0 & v\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf B$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf C$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & w_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & w_{n-\rho-1}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf C}^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf D$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf K$}} & u_1 \\ \vdots & & & & & & & & & & & & & \vdots \\ 0 & & & & & & & & & & & & & u_{\rho}\\ \cline{1-13} 1 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf J-I$}} & q_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 1 & & & & & & & & & & & & & q_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf K$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf J-I$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & p_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & p_{\rho}\\ \end{block} \end{blockarray} \end{equation} where $\bf I$ is the identity matrix, $\bf J$ is the all-one matrix and ${\bf K} = \diag(\sigma(vu_{1}), \ldots, \sigma(vu_{\rho})) = \diag(\sigma'(p_1u_{1}), \ldots, \sigma'(p_{\rho}u_{\rho}))$ is a modified identity matrix in which weights of edges from the neighbourhood of the original vertex $v$ replace the unit entries. 
Elementary row and corresponding column operations that leave the rank unchanged are performed by replacing the rows and columns corresponding to $u_1,\ldots, u_{\rho}$ by $u_1+\sigma(vu_1) q_1,\ldots, u_{\rho}+\sigma(vu_\rho) q_{\rho}$, respectively (where by abuse of notation $u_i$ and $q_i$ stand for rows/columns corresponding to $u_i$ and $q_i$, respectively), to obtain the matrix \begin{equation} \makeatletter\setlength\BA@colsep{4pt}\makeatother \begin{blockarray}{cccccccccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & q_1 & \ldots & q_{\rho} & p_1 & \ldots & p_{\rho}\\ \begin{block}{(c|ccc|ccc|ccc|ccc)c} 0 & 0 & \ldots & 0 & \sigma(vu_1) & \ldots & \sigma(vu_{\rho}) & 1 & \ldots & 1 & 0 & \ldots & 0 & v\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf B$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf C$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & w_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & w_{n-\rho-1}\\ \cline{1-13} \sigma(vu_1) & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf C}^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf D$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf J'$}} & u_1 \\ \vdots & & & & & & & & & & & & & \vdots \\ \sigma(vu_\rho) & & & & & & & & & & & & & u_{\rho}\\ \cline{1-13} 1 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf J-I$}} & q_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 1 & & & & & & & & & & & & & q_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf (J')^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf J-I$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & p_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & p_{\rho}\\ \end{block} \end{blockarray} \label{mat:18} \end{equation} where ${\bf J'}$ is a $\rho \times \rho$ matrix defined as $({\bf J'})_{i,j} = \sigma(vu_i)$ and all other symbols have their previous meanings. We remark that the block $\bf J-I$ is of full rank. We now pre-multiply the blocked matrix for $F(\Gamma,v)$ by a non-singular matrix that is chosen to transform the first $\bf J-I$ block of \eqref{mat:18} to $\bf I$. 
The transformation matrix is \begin{equation} \begin{blockarray}{cccccccccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & q_1 & \ldots & q_{\rho} & p_1 & \ldots & p_{\rho}\\ \begin{block}{(c|ccc|ccc|ccc|ccc)c} 1 & 0 & \ldots & 0 & 0 & \ldots & 0 & 0 & \ldots & 0 & 0 & \ldots & 0 & v\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf I}_{n-\rho-1}$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & w_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & w_{n-\rho-1}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf I}_\rho$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & u_1 \\ \vdots & & & & & & & & & & & & & \vdots \\ 0 & & & & & & & & & & & & & u_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf (J_\rho-I_\rho)}\sp{-1}$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & q_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & q_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf I_\rho$}} & p_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & p_{\rho}\\ \end{block} \end{blockarray} \label{mat:19} \end{equation} The matrix $({\bf J_\rho-I_\rho})\sp{-1}$ has diagonal entries $ \frac{1}{\rho - 1} - 1$ and off-diagonal entries $\frac{1}{\rho - 1}$. Since the matrix \eqref{mat:19} is non-singular, the matrix resulting from the premultiplication has the same rank as before, and is \begin{equation} \makeatletter\setlength\BA@colsep{4pt}\makeatother \begin{blockarray}{cccccccccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & q_1 & \ldots & q_{\rho} & p_1 & \ldots & p_{\rho}\\ \begin{block}{(c|ccc|ccc|ccc|ccc)c} 0 & 0 & \ldots & 0 & \sigma(vu_1) & \ldots & \sigma(vu_{\rho}) & 1 & \ldots & 1 & 0 & \ldots & 0 & v\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf B$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf C$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & w_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & w_{n-\rho-1}\\ \cline{1-13} \sigma(vu_1) & \BAmulticolumn{3}{c|}{\multirow{3}{*}{${\bf C}^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf D$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf J'$}} & u_1 \\ \vdots & & & & & & & & & & & & & \vdots \\ \sigma(vu_\rho) & & & & & & & & & & & & & u_{\rho}\\ \cline{1-13} \frac{1}{\rho-1} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf I_\rho$}} & q_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ \frac{1}{\rho-1} & & & & & & & & & & & & & q_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf (J')^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf J_\rho-I_\rho$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & p_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & 
p_{\rho}\\ \end{block} \end{blockarray} \label{mat:20} \end{equation} Now we take linear combinations of rows to reduce the block ${\bf J}'$ to $\bf 0$. We replace each row ${\bf R}_k$ ($k = 1$ to $\rho$) of ${\bf J}'$ by the linear combination ${\bf R}_k - \sigma(vu_k) \sum_i {\bf r}_i$, where ${\bf r}_i$ is the $i$-th row of the identity block ${\bf I}_\rho$ (the row corresponding to $q_i$), and by definition $\sigma(vu_k)$ is the common value of all the entries in the $k^\text{th}$ row of the matrix ${\bf J}'$. This converts the matrix \eqref{mat:20} to \begin{equation} \makeatletter\setlength\BA@colsep{4pt}\makeatother \begin{blockarray}{cccccccccccccc} v & w_1 & \ldots & w_{n-\rho-1} & u_1 & \ldots & u_{\rho} & q_1 & \ldots & q_{\rho} & p_1 & \ldots & p_{\rho}\\ \begin{block}{(c|ccc|ccc|ccc|ccc)c} \color{red}0 & \color{red}0 & \color{red}\ldots & \color{red}0 & \color{red}\sigma(vu_1) & \color{red}\ldots & \color{red}\sigma(vu_{\rho}) & 1 & \ldots & 1 & 0 & \ldots & 0 & v\\ \cline{1-13} \color{red}0 & \BAmulticolumn{3}{c|}{ \multirow{3}{*}{\color{red}$\bf B$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{\color{red}$\bf C$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & w_1 \\ \color{red}\vdots & & & & & & & & & & & & &\vdots \\ \color{red}0 & & & & & & & & & & & & & w_{n-\rho-1}\\ \cline{1-13} \color{red}-\frac{\sigma(vu_1)}{\rho-1} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{\color{red}${\bf C}^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{\color{red}$\bf D$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & u_1 \\ \color{red}\vdots & & & & & & & & & & & & & \vdots \\ \color{red}-\frac{\sigma(vu_\rho)}{\rho-1} & & & & & & & & & & & & & u_{\rho}\\ \cline{1-13} \frac{1}{\rho-1} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf I_\rho$}} & q_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ \frac{1}{\rho-1} & & & & & & & & & & & & & q_{\rho}\\ \cline{1-13} 0 & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf 0$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf (J')^T$}} & \BAmulticolumn{3}{c|}{\multirow{3}{*}{$\bf J_\rho-I_\rho$}} & \BAmulticolumn{3}{c}{\multirow{3}{*}{$\bf 0$}} & p_1 \\ \vdots & & & & & & & & & & & & &\vdots \\ 0 & & & & & & & & & & & & & p_{\rho}\\ \end{block} \end{blockarray} \end{equation} Now consider the rank of the red block of this matrix. Before this last transformation, the red block was precisely ${\bf A}(\Gamma)$, which has nullity $\eta = 1$, as $\Gamma$ is a signed nut graph. After the transformation, the red block differs from ${\bf A}(\Gamma)$ only in the $v$-column, where each entry $\sigma(vu_i)$ is replaced by $-\sigma(vu_i)/(\rho-1)$. The block now has nullity $0$ or nullity $1$. (We know that the submatrix $\begin{bsmallmatrix*}[l] {\bf B} & {\bf C} \\ {\bf C}\sp{T} & {\bf D} \end{bsmallmatrix*}$ is non-singular, as deletion of any vertex (here $v$) from a nut graph reduces the nullity to zero \cite{SciCommEigDck09,ScMaxCorSzSing09}.) Hence, the rank of the red block is $\ge n-1$ and the inequality $$ \rk(\mathbf{A}(F(\Gamma,v))) \geq {\rk}(\mathbf{A}(\Gamma))+2\rho=n-1+2\rho $$ follows, since the last $2\rho$ rows are linearly independent of all the other rows. Thus, $\eta(F(\Gamma,v))\leq \eta(\Gamma)$, as before. \end{proof} \begin{corollary} Let $\Gamma = (G, \sigma)$ be a signed graph and $v \in V(G)$ any one of its vertices.
The following statements hold: \begin{enumerate}[label=(\arabic*)] \item $F(\Gamma, v)$ is a signed nut graph if and only if $\Gamma$ is a signed nut graph. \item $F(\Gamma, v)$ is a proper signed nut graph if and only if $\Gamma$ is a proper signed nut graph. \end{enumerate} \end{corollary} \begin{proof} Follows directly from Lemma~\ref{lem:lemmaxxx} and Theorem~\ref{thm:dathm}. Namely, if $\Gamma$ is proper then it is switching equivalent to the all-positive (traditional) signed nut graph $\Gamma' = (G,\emptyset)$. However, in this case $F(\Gamma',v) = F((G,\emptyset),v) = (F(G,v),\emptyset)$. Virtually the same argument can be used in the opposite direction. \end{proof} \section{Conclusion} Invocation of signed graphs as candidates for nut graphs allows extension of the orders at which a nut graph exists, and allows proof of all cases for regular nut graphs (signed and unsigned) with degree at most 11. As with unweighted nut graphs, signed nut graphs can be generated by a generic construction in which the order of a smaller signed nut graph increases from $n$ to $n + 2\rho$, where $\rho$ is the degree of the vertex chosen as the focus of this vertex-expansion construction. \section*{Acknowledgements} We thank Thomas Zaslavsky for helpful comments on an earlier draft of this paper. The work of Toma\v{z} Pisanski is supported in part by the Slovenian Research Agency (research program P1-0294 and research projects N1-0032, J1-9187, J1-1690, N1-0140 and J1-2481), and in part by H2020 Teaming InnoRenew CoE. The work of Nino Bašić is supported in part by the Slovenian Research Agency (research program P1-0294 and research projects J1-9187, J1-1691, N1-0140 and J1-2481). The work of Irene Sciriha and Patrick W. Fowler is supported by The University of Malta under the project Graph Spectra, Computer Network Design and Electrical Conductivity in Nano-Structures MATRP01-20. \nocite{*} \bibliographystyle{amcjoucc}
\section{Introduction} \label{intro} The Milky Way (MW) is an important laboratory for dark matter (DM) science. A robust prediction of cosmological simulations containing collisionless DM (and no baryons) is that halos are triaxial (with short/long axis ratio $\sim 0.6$ and intermediate/long axis ratio $\sim 0.8$), with axis ratios that are almost independent of radius \citep{dubinski_carlberg_91,jing_suto_00}. The dissipative collapse of cold baryonic gas and the formation of stellar disks alter halo shapes, making them oblate or nearly spherical within the inner one-third of the virial radius, but allowing them to remain triaxial at intermediate radii and prolate at large radii \citep{kazantzidis_etal_04_shapes, deb_etal_08, zemp_etal_12}. Cosmological simulations with Warm Dark Matter~\citep[WDM, sterile neutrinos,][]{bose_frenk_16_WDM} and Self Interacting Dark Matter (SIDM)~\citep{peter_13_SIDM} also predict triaxial DM halos, although there are small but quantifiable differences in the radial variation in axis ratios and the degree of triaxiality. Since triaxial DM halos form via hierarchical mergers they generally have angular momentum, primarily due to the relative orbital angular momentum of the progenitor halos involved in the merger, but also due to the internal streaming motions within the halos. Since DM halos are triaxial, this angular momentum can manifest either as streaming motions of individual particles or as tumbling (figure rotation) of the entire triaxial halo, or both. $\Lambda$CDM cosmological $N$-body simulations predict that $\sim$90\% of dark matter halos are significantly triaxial and have measurable figure rotation \citep{dubinski_92,bailin_steinmetz_04,bryan_cress_07}. The pattern speed ($\Omega_p$) of figure rotation for halos from dark matter-only simulations follows a log-normal distribution centered on $0.148h\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ with a width of 0.83\footnote{ $\Omega_p =0.148h\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ $=0.11\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}} = 6.3^{\circ}{\rm Gyr}^{-1}$ $=22.6\mu {\rm arcsec~yr}^{-1}$; $1\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}} \simeq 1\,{\rm rad\ Gyr}^{-1}$.}. \citet[][hereafter BS04]{bailin_steinmetz_04} find that the axis about which the figure rotates aligns fairly well with the halo minor axis in 85\% of the halos and with the major axis in the remaining 15\% of the halos. The study by \citet{bryan_cress_07} found that only a small fraction of halos (5/222) showed coherent rotation over 5~Gyr, but when rotation was measured over 1~Gyr most halos showed figure rotation with log-normally distributed pattern speeds, with median and width similar to those found by \citetalias{bailin_steinmetz_04}. Since rotation is induced by torques from companions, the duration of steady rotation is expected to depend on the interaction and merger history of a galaxy. \citetalias{bailin_steinmetz_04} also found that for CDM halos $\Omega_p$ is correlated with the cosmological halo spin parameter\footnote{The halo spin parameter $\lambda= J|E|^{1/2}G^{-1}M^{-5/2}$ where $J, |E|$ and $M$ are the angular momentum, total energy and total mass of the halo.} $\lambda$ \citep{peebles_69}, but is independent of halo mass. Valluri, Hofer et al. (in prep) have measured the pattern speed of figure rotation of DM within 100~kpc of the center of disk galaxies in the Illustris suite of simulations \citep{vogelsberger_etal_14}.
They find that most halos show steady (coherent) figure rotation with $\Omega_p \sim 0.15-0.6\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}} \sim 9^{\circ}-35^{\circ} \mathrm{Gyr}^{-1}$ over durations of $\sim 3-4$~Gyr. Steady figure rotation for DM halos with baryons was not necessarily expected. Unlike DM-only simulations, where the halos are strongly triaxial with nearly constant axis ratios as a function of radius, DM halos in simulations with baryons have radially varying shapes: oblate at small radii, triaxial at intermediate radii and prolate at large radii \citep{kazantzidis_etal_04_shapes, debattista_etal_08, zemp_etal_12}. In addition the presence of a dissipative baryonic component, which (in disk galaxies) is demonstrably rotating, is expected to absorb much of the angular momentum from hierarchical mergers. We are unaware of any works that measure the pattern speeds of figure rotation of DM halos in WDM or SIDM cosmological simulations (with or without baryons). However, since halos in these simulations are triaxial \citep{bose_frenk_16_WDM,peter_13_SIDM} and have halo spin parameters $\lambda$ comparable to their CDM counterparts, it is reasonable to expect them to also have figure rotation (although future studies are needed to measure the distribution of their pattern speeds). Although figure rotation was predicted over 15 years ago, few methods to measure it have been proposed, and it has never been measured. Figure rotation was suggested as a mechanism to explain the ``anomalous dust lanes'' in triaxial elliptical galaxies \citep{vanalbada_etal_82}. It was also suggested as a possible mechanism for driving spiral structure and warps in extremely extended HI disks. For instance the HI disk of NGC~2915 is 30 times larger than its optical disk and shows a strong bisymmetric spiral feature seen only in the HI. Since this galaxy has no nearby companions that could have triggered the spiral features \citep{bureau_etal_99, dubinski_chakrabarty_09, chakrabarty_dubinski_11}, it was proposed that the spiral was triggered by figure rotation of the DM halo. However, simulations show that in order for figure rotation to account for the observed features of NGC~2915, the DM halo would have to have a pattern speed $\Omega_p \sim 4-8$ \ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}, 25-50 times larger than the median value predicted by cosmological simulations \citep{bekki_freeman_02,masset_bureau_03}. These simulations also showed that producing a spiral feature requires the rotation axis of the halo to be significantly misaligned with the disk. These extreme requirements make it unlikely that halo figure rotation has triggered the spiral structure in the extended HI disk in NGC~2915. While it is still unclear how the extended spiral structure in the HI disk of NGC~2915 is generated, it cannot be the result of figure rotation of the dark matter halo. To our knowledge, no method for measuring extremely small figure rotation of a dark matter halo has ever been proposed. {\em In this work we propose the first plausible method for measuring figure rotation of the MW halo that can be tested with current and future {\it Gaia}\ data.} A definitive measurement of coherent figure rotation of the DM halo of the MW and/or other galaxies would be strong evidence of the particle nature of dark matter.
In alternative theories such as MOND \citep{MOND,milgrom_2019}, dark matter does not exist; rather, a modification of either gravity or Newton's second law at low acceleration scales mimics a dark component in galaxies. In MOND (and most other similar theories) it is only the baryons that produce the gravitational force. An unambiguous measurement of halo figure rotation would, therefore, be a validation of dark matter models. In a potential theory like MOND, a disk galaxy like the MW cannot produce a triaxial potential that rotates independently of the disk potential. While the MW does have a triaxial central bar of scale length $\sim 3-5$~kpc that comprises nearly two-thirds of the total stellar mass of the disk, its pattern speed, $\Omega_p \sim 40-50 \ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ \citep{bland_hawthorn_gerhard_16}, is about 300 times larger than the pattern speeds predicted for DM halos. In the past decade numerous coherent tidal streams have been detected in the Milky Way halo. Since tidal streams consist of a large number of stars on similar orbits they are excellent tracers of the gravitational potential of the Galaxy \citep{johnston_etal_99}. Since figure rotation induces an additional ``centrifugal potential'' \citep[hereafter BT]{BT08}, it alters both the trajectory of a satellite and the morphology and kinematics of its tidal stream. In principle, therefore, streams should be sensitive to figure rotation. The Sagittarius tidal stream (hereafter Sgr stream) \citep{mateo_etal_96, mateo_etal_98_sgrstream,majewski_etal_03,majewski_etal_04,carlin_etal_11} is the most prominent coherent stream known in the MW halo, and indeed in the local universe. It has been used by numerous authors to probe the MW potential and is considered a prototype of the dynamical tidally-induced evolution of satellites \citep[for a review see][]{law_majewski_16}. The effects of figure rotation are most easily seen in the morphology of orbits. However, the orbital periods of halo stars are so long that individual orbits and the effects of halo figure rotation on them are unobservable. Tidal streams are good proxies for the orbits of their progenitors and are frequently used to determine the properties of dark matter halos, such as their shapes \citep{johnston_etal_99,eyre_binney_09}. If the halo of the MW is triaxial and it does rotate, the Sgr stream, with a pericenter radius of 20~kpc and an apocenter radius of 100~kpc from the Galactic center \citep{belokurov_etal_14,hernitschek_etal_17} extending over $\sim 500^\circ$ on the sky, is likely to be an ideal probe of figure rotation. In this paper we explore how the rotation of a triaxial dark matter halo would alter the structure of a Sgr-like tidal stream. We do not attempt to constrain either the pattern speed of figure rotation or other parameters of the Galactic potential. We simply describe the nature of the pseudo forces (Coriolis and centrifugal) resulting from figure rotation about three different axes and illustrate how they would alter such a stream. At present there is no consensus on the shape of the MW's dark matter halo. Despite two decades of effort to model the spatial and velocity distributions of stars in the Sgr stream to determine the shape of the halo \citep{johnston_etal_99, helmi_04, johnston_etal_05,law_majewski_10, deg_widrow_12, carlin_etal_12,dierickx_loeb_17a,dierickx_loeb_17b,fardal_etal_19}, current results range from oblate to prolate to spherical and triaxial.
No model satisfactorily reproduces all the observed features of the Sgr stream, such as the large ratio of the trailing apocenter radius ($\sim 100$~kpc) to the leading apocenter distance ($\sim 50$~kpc), the relatively small angle of $95^{\circ}$ between the leading and trailing apocenters \citep{belokurov_etal_14}, and the ``bifurcations'' in the Sgr stream \citep{fellhauer_etal_06, penarrubia_etal_10, koposov_etal_12, slater_etal_13}. Although the model of \citet{law_majewski_10} does very well at describing pre-2014 data \citep[also see][]{deg_widrow_12}, it requires an oblate-triaxial halo with the disk perpendicular to the intermediate axis of the halo, an orientation that is violently unstable \citep{debattista_etal_13}. To match the angle between the leading and trailing apocenters, a halo with a shallow central radial density profile and an extended flat core, not predicted by cosmological simulations, is needed \citep{belokurov_etal_14,fardal_etal_19}. In addition it has been shown that the LMC may significantly perturb the stream \citep{vera-ciro_helmi_13, gomez_besla_15} and alter the DM distribution of the halo by producing a wake \citep{garavito-camargo_etal_19}. The effects of figure rotation on the orbit of a progenitor depend strongly on the shape of the DM halo. Since this is uncertain, we simply adopt four plausible models with disk, bulge, and halo mass distributions motivated by previous work. We explore a small range of pattern speeds and use an evolution time $t_{ev}$ for the Sgr stream of 4~Gyr. This is larger than $t_{ev} \sim 2.3-2.9$~Gyr preferred by other authors \citep{fardal_etal_19, vasiliev_belokurov_20}, but short enough that it is reasonable to assume that the halo has maintained a constant pattern speed over this timescale. Although some authors \citep{laporte_etal_18} claim, based on dynamical features in the Milky Way's stellar disk, that the Sgr stream has been evolving for at least 6~Gyr, we do not consider such a long timescale since it is unlikely that DM halos maintain coherent pattern speeds for such a long duration. In reality the progenitor of Sgr was probably at least $5\times 10^{10}$\mbox{$M_{\odot}$}\ and its initial infall probably started $\sim$10 Gyr ago from a much larger initial distance \citep{jiang_binney_00, gibbons_etal_17}. Our experiments with 1.5~Gyr~$< t_{ev} <$~8~Gyr show that, for the progenitor mass selected here, $t_{ev} \lesssim 3$~Gyr does not produce streams that are long enough to match the observations. We also found that $t_{ev} \gtrsim 6$~Gyr in the rotating potential results in tidal streams that are more strongly perturbed (less coherent) than streams in stationary potentials evolved over the same duration. We believe this is because orbits in rotating potentials are on average more likely to be chaotic than in stationary potentials \citep{deibel_etal_11}, and it is this increased chaoticity that results in less coherent streams \citep{price_whelan_etal_16}. For these reasons we limit our study to $t_{ev} = 4$~Gyr. The objectives of this paper are (a) to demonstrate that figure rotation of a (moderately) triaxial halo can alter the morphology and kinematics of a tidal stream in ways that are already measurable with existing data, and (b) to highlight the features that would be most likely to distinguish a rotating halo from a static one. In Section~\ref{sec:method} we describe the setup of our test-particle simulations.
In Section~\ref{sec:orbits} we describe some general principles governing the behavior of orbits and tidal streams in triaxial halos subjected to figure rotation. We also show that the magnitude of the Coriolis force on a Sgr-like stream is a significant fraction of the gravitational force even for a small pattern speed. In Section~\ref{sec:results} we present our simulations of Sgr-like streams and make some comparisons with observations. In Section~\ref{sec:conclusions} we summarize our results and discuss the implications of this work and future directions. \section{Simulations and Numerical Methods} \label{sec:method} We use the Gala package \citep{gala:JOSS, gala} to simulate tidal streams in Milky Way-like potentials. Streams are generated by simulating the orbital evolution of stars once they have been tidally stripped from the progenitor satellite. We use the ``particle-spray'' stream generation method \citep{fardal_etal_15}, which assumes that stars are lost from the L1 and L2 Lagrange points of the progenitor at a uniform rate \citep[e.g.,][]{Kupper_etal_12}. Once stars escape from the progenitor they experience the gravitational potential of both the progenitor and the Galaxy. In the current work we do not consider the effects of dynamical friction from the DM halo on the progenitor, even though the Sgr dwarf progenitor was probably massive enough to experience significant dynamical friction \citep[e.g.,][]{fardal_etal_19}. The ``particle-spray'' method produces stream stars drawn from an initial distribution function that depends on the potential of the progenitor; more massive satellites therefore produce dynamically hotter streams. While they are not as accurate as N-body simulations, the advantage of test-particle simulations is that they allow for a rapid exploration of the parameter space (e.g. Galactic potential parameters, halo shape, pattern speed and axis of figure rotation). We use a galactocentric coordinate system that is right-handed, with the $x$-axis coincident with the Galactic $X$-axis, the $y$-axis parallel to the direction of the velocity of the LSR (parallel to the Galactic $Y$-axis) and the $z$-axis perpendicular to the disk plane, with the sun located at $(-8.122, 0, 0.0208)$~kpc (the default Galactocentric parameters in Astropy v4.0). The Sgr dwarf progenitor has a mass of $6\times 10^8$\mbox{$M_{\odot}$}\, \citep{law_majewski_10}, which is lower than some recent estimates \citep[$10^{10}-10^{11}$\mbox{$M_{\odot}$},][]{laporte_etal_18}. The progenitor potential is modeled as a spherical Plummer model with a core radius of 0.65~kpc and does not include a separate dark matter component. Although this progenitor mass is lower than recent estimates ($5\times 10^{10}$\mbox{$M_{\odot}$}), the mass of the progenitor does not change with time in ``particle-spray'' models, and this choice produces streams with a somewhat closer visual appearance to the observed stream. For most of our models we use the present day phase-space coordinates for the Sgr dwarf obtained with {\it Gaia}\ DR2 \citep{gaiacollab_helmi_18,vasiliev_belokurov_20} ($x=17.2$~kpc, $y= 2.5$~kpc, $z=-6.4$~kpc, $v_x=237.9~\ensuremath{\,\mathrm{km\ s}^{-1}}, v_y= -24.3~\ensuremath{\,\mathrm{km\ s}^{-1}}, v_z=209.0~\ensuremath{\,\mathrm{km\ s}^{-1}}$). Starting at this position we evolved the orbit backwards in time for 4~Gyr to determine the initial position and velocity of the progenitor. All of our Milky Way models use a composite halo+disk+bulge potential.
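The steps just described can be sketched compactly with Gala. The listing below is a minimal illustration rather than our production setup: it assumes a recent Gala release in which the particle-spray method of \citet{fardal_etal_15} is exposed as \texttt{MockStreamGenerator} with \texttt{FardalStreamDF} (call signatures may differ between versions), and it uses Gala's built-in \texttt{MilkyWayPotential} as a stand-in for the composite potentials defined below.
\begin{verbatim}
import astropy.units as u
import gala.potential as gp
import gala.dynamics as gd
from gala.dynamics import mockstream as ms
from gala.units import galactic

# Present-day phase-space coordinates of the Sgr dwarf (values quoted above)
w_today = gd.PhaseSpacePosition(pos=[17.2, 2.5, -6.4] * u.kpc,
                                vel=[237.9, -24.3, 209.0] * u.km / u.s)

# Stand-in Galactic potential; the composite models described below
# would be substituted here
H = gp.Hamiltonian(gp.MilkyWayPotential())

# Rewind the progenitor orbit for 4 Gyr to obtain its initial condition
w0 = H.integrate_orbit(w_today, dt=-1 * u.Myr, n_steps=4000)[-1]

# Plummer sphere for the progenitor (mass and core radius quoted above)
prog_pot = gp.PlummerPotential(m=6e8 * u.Msun, b=0.65 * u.kpc,
                               units=galactic)

# Particle-spray stream generation (Fardal et al. 2015)
df = ms.FardalStreamDF()  # some Gala versions require a gala_modified flag
gen = ms.MockStreamGenerator(df, H, progenitor_potential=prog_pot)
stream, prog = gen.run(w0, prog_mass=6e8 * u.Msun,
                       dt=1 * u.Myr, n_steps=4000)
\end{verbatim}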
We use three different halo shapes with $a, b, c$ defined as the semi-axis lengths of the density (not potential) model along the Galactocentric $x,y,z$ axes respectively (in this work $a, b, c$ are not aligned with the long, intermediate and short axes of the halo, since these change from model to model). The first Galactic model considered is the one found by \citet{law_majewski_10} (referred to hereafter as the {\em LM10\,}\ model)\footnote{The \texttt{LM10Potential} in Gala.}. This model has a logarithmic halo potential with rotation velocity set such that the total circular velocity $v_c = 220\ensuremath{\,\mathrm{km\ s}^{-1}}$ at 8~kpc and semi-axis lengths (relative to the longest axis) of the {\em density profile} of $a=0.44, b=1, c=0.97$. The {\em LM10\,} model has a Miyamoto-Nagai disk \citep{miyamoto_nagai_75} with mass $M_{\rm disk} = 1\times10^{11}\mbox{$M_{\odot}$}$, radial scale length of $6.5$~kpc, and vertical scale length of $0.26$~kpc. It contains a spherical Hernquist bulge with mass $M_{\rm b}= 3.4\times 10^{10}\mbox{$M_{\odot}$}$ and radial scale length $0.7$~kpc, resulting in a somewhat deeper potential than in the other three models. The deeper potential results in simulations that are unable to produce a trailing stream with an apocenter radius of $\sim 100$~kpc as observed \citep{belokurov_etal_14,hernitschek_etal_17}. Therefore we only use it to illustrate that the effects of figure rotation in this deeper potential are similar to, but more extreme than, the effects in a shallower potential of a similar shape (the {\em LMm\,}\, model below). The other three models have triaxial dark matter halos with radial density profiles of NFW form \citep{NFW}. We use the formulation of \citet{lee_suto_03} to compute the potential from the density profile. The halo is defined by a circular velocity $v_c = 162\ensuremath{\,\mathrm{km\ s}^{-1}}$ and a scale radius $r_s = 28$~kpc. The latter is somewhat larger than predicted by the halo mass--concentration relationship for MW mass halos \citep{dutton_maccio_14}, but is consistent with estimates based on the local escape speed measurements from {\it Gaia}\ \citep{hattori_etal_18}. This value of $r_s$ is significantly smaller than the $r_s = 68$~kpc estimated by \citet{fardal_etal_19}. These three models use a Miyamoto-Nagai disk \citep{miyamoto_nagai_75} with $M_{\rm disk} = 6\times 10^{10}\mbox{$M_{\odot}$}$, a radial scale length of 3~kpc and vertical scale length 0.26~kpc. Instead of a box-peanut bulge/bar we use a spherical Hernquist bulge \citep{hernquist_90} with mass $M_{\rm b} = 6\times10^9$\mbox{$M_{\odot}$}\ and radial bulge scale length 0.7~kpc. These values were chosen since they produce streams that give a reasonably good match to many Sgr stream observations, including the larger apocenter radius of the trailing arm (although all our streams produce a slightly larger leading apocenter radius than observed, probably because we do not include dynamical friction). Although simulated triaxial halos in cosmological hydrodynamical simulations have radially varying halo shapes \citep{kazantzidis_etal_04_shapes, debattista_etal_08, zemp_etal_12}, our DM halos have their mass stratified on concentric ellipsoidal shells of fixed axis length ratios $a:b:c$. The three models with NFW DM halo density distributions have different shapes, as described below. The oblate-triaxial model (hereafter the {\em OT\,}\ model) has axis scale lengths $a=0.9, b=1, c=0.8$ (i.e. the long and short axes are along the galactocentric $y$ and $z$ axes respectively).
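For concreteness, a sketch of how a composite potential like the {\em OT\,}\ model might be assembled in Gala is shown below, using the disk, bulge and halo parameters quoted above (the {\em F19m\,}\ and {\em LMm\,}\ variants described next differ only in the halo axis ratios); the class names are those of recent Gala releases. The rotating-frame Hamiltonian used for the non-static runs is also shown.
\begin{verbatim}
import astropy.units as u
import gala.potential as gp
from gala.units import galactic

pot = gp.CCompositePotential()
pot['disk'] = gp.MiyamotoNagaiPotential(m=6e10 * u.Msun, a=3 * u.kpc,
                                        b=0.26 * u.kpc, units=galactic)
pot['bulge'] = gp.HernquistPotential(m=6e9 * u.Msun, c=0.7 * u.kpc,
                                     units=galactic)
# Triaxial NFW halo in the Lee & Suto (2003) formulation;
# a, b, c are the OT density axis ratios
pot['halo'] = gp.LeeSutoTriaxialNFWPotential(v_c=162 * u.km / u.s,
                                             r_s=28 * u.kpc,
                                             a=0.9, b=1.0, c=0.8,
                                             units=galactic)

# Static-halo Hamiltonian
H_static = gp.Hamiltonian(pot)

# Figure rotation about the z axis with pattern speed 0.3 km/s/kpc
frame = gp.ConstantRotatingFrame(Omega=[0, 0, 0.3] * u.km / u.s / u.kpc,
                                 units=galactic)
H_rot = gp.Hamiltonian(pot, frame=frame)
\end{verbatim}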
The prolate halo model \citep[hereafter the {\em F19m\,}\ model,][modified]{fardal_etal_19} is similar to the one found by \citet{fardal_etal_19}, with $a=0.95, b=1, c=1.106$ (for the density\footnote{\citet{fardal_etal_19} find that their best triaxial NFW potential model has potential axis lengths of 1: 1.1: 1.15. The density axis lengths we use give the same axis lengths for the derived potential.}) but with the NFW halo parameters ($v_c, r_s$) and the disk and bulge parameters as given in the paragraph above. The last model is the {\em LMm\,}\ model (``Law-Majewski modified''), which has the same axis scale lengths (for the density) as the {\em LM10\,}\ model $(a=0.44, b=1, c=0.97)$, but with halo, disk and bulge parameters as defined in the paragraph above. The streams were evolved in each of the four models above, both in static halos ($\Omega_p=0$) and in rotating halos with clockwise and anti-clockwise rotation about each of the three Galactocentric principal axes ($x,y,z$) with $|\Omega_p| = 0.1, 0.3, 0.6, 0.8, 1.0, 1.4\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$\footnote{Hereafter the pattern frequency will be denoted by a 3-vector $(\Omega_{p,x},\Omega_{p,y}, \Omega_{p,z})$}. After some initial exploration (not presented) we kept the potential parameters, the duration of evolution, and the initial mass and phase space coordinates of the Sgr dwarf fixed to the values stated above. Our simulations do not include dynamical friction and the mass of the halo/disk does not change with time. While previous authors \citep{law_majewski_10, deg_widrow_12, belokurov_etal_14, fardal_etal_19} have done far more exhaustive searches of parameter space with the goal of constraining the shape of the halo and other Galactic parameters, we defer such an exploration to the future. In Section~\ref{sec:results} we present a limited set of results to illustrate the broad {\it qualitative effects} of halo figure rotation on the properties of the stream. \section{Effects of figure rotation on orbits in triaxial halos} \label{sec:orbits} In this section we briefly describe how the orbit of a Sagittarius-like dwarf satellite and the tidal stream it produces are altered by steady figure rotation over a duration of 4~Gyr. The effects of figure rotation are most easily seen in the rotating frame of the halo. While, strictly speaking, the Sun should not be regarded as being in the rotating frame of the dark halo, at the solar position a pattern speed of $0.15-0.6\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ (typical of the figure rotation values explored) translates to a velocity of only $1-5\ensuremath{\,\mathrm{km\ s}^{-1}}$, which is smaller than the random velocities of stars in the solar neighborhood and smaller than the velocity of the sun relative to the LSR. Thus, we assume that the heliocentric view of the stream is (almost) identical to what it would be in the rotating frame of the dark halo (although it would be straightforward to make the transformation to a heliocentric frame if necessary). In a rotating potential, the energy of an orbit is not an integral of motion, but the Jacobi integral ($E_J$) is a conserved quantity: \begin{equation} E_J = {\frac{1}{2}}|\dot{\mathbf{x}}|^2+\Phi-{\frac{1}{2}}|{\mathbf \Omega_p} \times {\mathbf{x}}|^2, \end{equation} where $\mathbf{x}$ and $\mathbf{\dot{x}}$ are three dimensional spatial and velocity vectors, respectively.
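As a quick numerical consistency check, $E_J$ can be evaluated directly along an integrated orbit. The sketch below assumes the rotating-frame Hamiltonian \texttt{H\_rot}, the potential \texttt{pot} and the initial condition \texttt{w\_today} from the previous listings; it verifies that $E_J$ is conserved to integrator precision. Note that the velocities returned by a rotating-frame integration are rotating-frame velocities, as required by the expression above.
\begin{verbatim}
import numpy as np
import astropy.units as u

# Orbit in the rotating frame (w_today defined in the earlier listing)
orbit = H_rot.integrate_orbit(w_today, dt=1 * u.Myr, n_steps=4000)

x = orbit.pos.xyz.to_value(u.kpc)             # shape (3, N)
v = orbit.vel.d_xyz.to_value(u.km / u.s)      # shape (3, N)
Phi = pot.energy(orbit.pos.xyz).to_value((u.km / u.s) ** 2)

Omega = np.array([0.0, 0.0, 0.3])             # km/s/kpc, as above
Om_x = np.cross(Omega, x, axisa=0, axisb=0, axisc=0)  # Omega x X, in km/s

E_J = 0.5 * (v**2).sum(axis=0) + Phi - 0.5 * (Om_x**2).sum(axis=0)
# Fractional drift; should be tiny (set by the integrator tolerance)
print(np.ptp(E_J) / np.abs(E_J).mean())
\end{verbatim}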
The equation of motion in the rotating frame is given by the vector differential equation: \begin{eqnarray} \ddot{\mathbf{x}} = - {\mathbf{\nabla\Phi}} - 2\,{\mathbf \Omega_p} \times \dot{\mathbf{x}} - {\mathbf \Omega_p} \times \left({\mathbf \Omega_p} \times \mathbf{x}\right) \end{eqnarray} where $- 2\,{\mathbf \Omega_p} \times \dot{\mathbf{x}}$ is the Coriolis acceleration (hereafter ${\mathbf{a_{Co}}}$) and $-{\mathbf \Omega_p} \times \left({\mathbf \Omega_p} \times \mathbf{x}\right)$ is the centrifugal acceleration (hereafter ${\mathbf{a_{Cf}}}$), whose magnitude is $|\Omega_p|^2 R$, where $R$ is the distance from the rotation axis \citepalias[see \S~3.3.2, eq. 3.116][]{BT08}. In what follows we refer to the gravitational acceleration as $\mathbf{a_{\mathrm g}}$. Table~\ref{tab:f_pseudo} gives exact expressions for the Coriolis and centrifugal accelerations for rotation about each of the three principal axes. \begin{figure*} \begin{center} \includegraphics[angle=0,trim=10. 0. 70. 20., clip,width=0.515\textwidth]{OT_rotz+3_CoF_CF_Grav.png} \includegraphics[angle=0,trim=105. 0. 40. 20., clip,width=0.475\textwidth]{OT_roty+3_CoF_CF_Grav.png} \end{center} \caption{The ratios of the magnitudes $|$${\mathbf{a_{Cf}}}$/$\mathbf{a_{\mathrm g}}$$|$ and $|$${\mathbf{a_{Co}}}$/$\mathbf{a_{\mathrm g}}$$|$ along an orbit in a halo potential with pattern speeds $\Omega_p = (0,0,0.3)\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ (left) and $\Omega_p = (0,0.3,0)\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ (right) (in all plots that follow the labels give the pattern speed in units of \ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}). While the centrifugal acceleration (blue) is small everywhere ($<$2\% of the gravitational acceleration) and increases monotonically with radius, the Coriolis acceleration (which depends on orbital velocity) changes throughout the orbit and can be as high as $\sim$15\% of the gravitational acceleration. \label{fig:orb_forces} } \end{figure*} \begin{table*} \begin{centering} \caption{Pseudo forces for a polar Sgr-like orbit in the rotating frame} \begin{tabular}{ll|l|l} \hline & \multicolumn{1}{c|}{Rotation about $z$} & \multicolumn{1}{c|}{Rotation about $x$}& \multicolumn{1}{c}{Rotation about $y$} \\ \hline ${\mathbf{a_{Co}}}$ & = $\hat{i}(2\Omega_p v_y) - \hat{j}(2\Omega_p v_x)$ & = $\hat{j}(2\Omega_p v_z) - \hat{k}(2\Omega_p v_y)$ & = $\hat{i}(-2\Omega_p v_z) + \hat{k}(2\Omega_p v_x)$\\ & $\approx - \hat{j}(2\Omega_p v_x)$ \; $[\because |v_y|\approx 0]$ & $\approx \hat{j}(2\Omega_p v_z) $ \; $[\because |v_y|\approx 0]$ & \\ \hline ${\mathbf{a_{Cf}}}$& $= \Omega_p^2\sqrt{x^2+y^2}$& $= \Omega_p^2\sqrt{y^2+z^2}$ & $= \Omega_p^2\sqrt{x^2+z^2}$ \\ & $ \approx \Omega_p^2|x|$\; $[\because |y|\approx 0]$ & $ \approx \Omega_p^2|z|$\; $[\because |y|\approx 0]$ & \\ \hline \end{tabular} \label{tab:f_pseudo} \end{centering} \end{table*} Figure~\ref{fig:orb_forces} shows the ratios of the magnitudes of the Coriolis and centrifugal accelerations to the gravitational acceleration: $|$${\mathbf{a_{Co}}}$/$\mathbf{a_{\mathrm g}}$$|$ (red curve) and $|$${\mathbf{a_{Cf}}}$/$\mathbf{a_{\mathrm g}}$$|$ (blue curve) as a function of galactocentric radius along an orbit. The orbit shown was evolved in an {\it OT} model, with rotation vectors as indicated by the labels in each panel. For a Sgr-like stream that lies approximately in the $x-z$ plane, rotation about the $x$ axis produces pseudo forces of similar magnitude to rotation about the $z$ axis and is not shown. The figures show that ${\mathbf{a_{Cf}}}$\ (blue curves) is never more than $\sim 2$\% of $\mathbf{a_{\mathrm g}}$\, and changes monotonically with radius.
In contrast, the Coriolis acceleration changes continuously and non-monotonically along the entire orbit (since the velocity changes along the orbit) and can be as large as $\sim 15$\% of the gravitational acceleration along this Sgr-like orbit. Since the Coriolis acceleration is a significant fraction of the gravitational acceleration even for $|\Omega_p| = 0.3\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$, it should alter the orbit of a Sgr-like dwarf, and possibly also the Sgr tidal stream. \begin{figure*} \begin{center} \includegraphics[angle=0,trim=30. 0. 85. 0., clip, width=0.63\textwidth]{OT_rotz+3_xz_yz.png} \includegraphics[angle=0,trim=520. 0. 40. 0., clip, width=0.315\textwidth]{OT_rotz-3_xz_yz.png} \includegraphics[angle=0,trim=30. 0. 85. 0., clip, width=0.63\textwidth]{OT_roty+3_xz_xz.png} \includegraphics[angle=0,trim=520. 0. 40. 0., clip, width=0.315\textwidth]{OT_roty-3_xz_xz.png} \end{center} \caption{Galactocentric Cartesian frame projections of a Sgr-like tidal stream with points colored by the Coriolis acceleration $a_{Co}$ along different axes as indicated in the color bars; the magnitude and axis of the pattern speed are given in the legend. Solid (dashed) black lines show the past (future) orbit of the Sgr dwarf, whose current position is shown as an orange dot. The position of the sun is shown as a yellow star. Top row: the left and middle panels show two projections of the stream and orbit for positive rotation, while the right panel shows the $y-z$ projection for negative rotation of the same magnitude. The Coriolis acceleration along the $x$ axis is nearly zero everywhere. The middle and right panels show that the sign of rotation affects the tilt of the stream in the $y-z$ plane, with the southern leading arm tilted to negative (positive) $y$ values due to $a_{Co}$$(y) <0$ ($a_{Co}$$(y) >0$) for positive (negative) rotation. Bottom row: the left and middle panels show the $x-z$ projection of the orbit and stream with particles colored by the $x$ and $z$ components of the Coriolis force. The right panel shows the same projection for negative rotation of the same magnitude. \label{fig:stream_forces} } \end{figure*} In the rotating frame the pseudo forces (${\mathbf{a_{Cf}}}$\, and ${\mathbf{a_{Co}}}$) alter the trajectory of an object relative to the trajectory in a static (non-rotating) potential. We focus here on the effects on orbits in triaxial potentials, but note that orbits in other potentials (e.g. axisymmetric potentials -- oblate or prolate -- rotating about an axis other than the axis of symmetry) would also appear modified in the rotating frame relative to orbits in static potentials. The effects of figure rotation on the main families of orbits in triaxial potentials \mdash\, box orbits, short axis tubes, and inner and outer long axis tubes \mdash\, have been extensively described in previous works \citep{schwarzschild_82,heisler_etal_82,dezeeuw_merritt_83,udry_pfenniger_88, udry_91,deibel_etal_11,valluri_etal_16}. However, all of these studies have been restricted to the effects of rotation about the {\em short axis} of a triaxial potential. We briefly summarize results that are relevant to the present study. In triaxial potentials there are two families of tube-like orbits: short axis tubes and long-axis tubes, both of which are affected by rotation about the short axis.
\citet{binney_81} showed that figure rotation about the short-axis of a triaxial potential destabilizes loop orbits that circulate retrograde about the rotation axis close to the equatorial plane, making them unstable to perturbations perpendicular to that plane if they lie in an annular region (called the ``Binney instability strip''). This instability results from a resonant coupling that can cause orbits to become unstable to oscillations perpendicular to the equatorial plane. \citet{heisler_etal_82} showed that closed periodic orbits rotating about the long-axis of the halo are stable to figure rotation, but Coriolis forces tip them about the intermediate axis. Two stable periodic orbit families exist, which rotate clockwise and anti-clockwise about the long-axis. The Coriolis forces cause the orbits with positive angular momentum to be tipped clockwise about the intermediate axis and the orbits with negative angular momentum to be tipped anti-clockwise about the intermediate axis. Since these two periodic orbits ``parent'' the clockwise and anti-clockwise rotating long-axis tube orbit families, the non-periodic orbits are similarly tipped about the intermediate axis \citep{deibel_etal_11,valluri_etal_16}. \citet{schwarzschild_82} showed that when a triaxial potential is rotated about the short axis, the linear long-axis orbits acquire prograde rotation about the short axis as a result of the Coriolis force. Since this orbit is the ``parent'' of the box orbit family, many such orbits acquire small prograde rotation and also experience ``envelope doubling'' \citep{valluri_etal_16}, which causes some resonant (``boxlet'') and non-resonant box orbits to acquire a small net angular momentum in the rotating frame (such orbits are frequently called `x1' orbits in bars). In self-consistent models no tube orbits are found to circulate around the intermediate axis of a triaxial potential, since it has been shown that intermediate axis tubes are violently unstable \citep{heiligman_schwarzschild_79,adams_etal_07}. The Sgr stream is on a polar, tube-like orbit with net angular momentum about an axis approximately lying in the Galactic equatorial plane \citep{majewski_etal_03,law_majewski_16}. In the current best-fit models for the Sgr stream \citep{law_majewski_10, deg_widrow_12} the long-axis of the triaxial halo lies about 7$^{\circ}$ away from the Galactocentric $y$-axis. The short-axis of the triaxial halo lies roughly along the Galactocentric $x$-axis (along the sun-Galactic center line) and the intermediate axis is aligned with the Galactocentric $z$-axis, perpendicular to the disk plane. This would put the Sgr dwarf on a long-axis tube orbit -- a stable orbital configuration. However, since the Sgr stream has been evolving for less than 10 orbital periods, considerations of orbital stability under halo figure rotation are likely, at most, to affect the coherence of the stream \citep{price_whelan_etal_16}. \citetalias{bailin_steinmetz_04} find that 85\% of DM halos in cosmological simulations rotate about an axis that is within 25$^{\circ}$ of the minor axis of the halo, but \citet{bryan_cress_07} find that less than half of their halos rotate about the minor axis. These authors and Valluri, Hofer et al. (in prep) find that recent or ongoing interactions can induce figure rotation over a short duration. The MW is currently undergoing an interaction with the LMC, which, if massive enough, could itself have induced halo rotation.
As it is beyond the scope of this paper to determine the axis about which the LMC would induce figure rotation, for the sake of completeness we consider rotation about each of the three principal axes of the Galactic potential. Since the Sgr dwarf and its resultant tidal stream are on an almost planar orbit that lies approximately in the Galactocentric $x-z$ plane, we can simplify the discussion of the expected effects of figure rotation on the appearance of the Sgr stream by considering an orbit that lies {\em exactly} in the $x-z$ plane in the stationary potential. (For such an orbit, $y \approx v_y \approx 0$.) Table~\ref{tab:f_pseudo} gives explicit equations for the Coriolis and centrifugal accelerations (${\mathbf{a_{Co}}}$, ${\mathbf{a_{Cf}}}$) for rotation about each of the three principal axes and the approximate expressions for the accelerations on an orbit lying in the $x-z$ plane. The simulations, however, compute the exact orbits for the progenitor and stream particles assuming the current position of the Sgr dwarf from {\it Gaia}\ DR2 \citep{gaiacollab_helmi_18,vasiliev_belokurov_20}. From Table~\ref{tab:f_pseudo} we see that rotation about the $z$ or $x$ axes gives rise to ${\mathbf{a_{Co}}}$\ with non-zero components primarily along the $y$ axis, while rotation about the $y$ axis gives rise to both $x$ and $z$ components of ${\mathbf{a_{Co}}}$. The effect of the centrifugal acceleration ${\mathbf{a_{Cf}}}$\ (of magnitude $\Omega_p^2 R$, where $R$ is the distance from the rotation axis) is to push the stream away from the axis of rotation. Table~\ref{tab:f_pseudo} shows that rotation about the $z$ ($x$) axis causes the stream to be pushed outwards to larger $|x|$ ($|z|$), while rotation about the $y$ axis causes the entire stream to experience an outward radial centrifugal acceleration whose magnitude is linearly proportional to the distance from the Galactic center. Since the pattern speeds being considered in this paper are tiny ($\Omega_p = \pm0.3\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$), ${\mathbf{a_{Cf}}}$\, is much smaller than ${\mathbf{a_{Co}}}$. As will be shown later, these qualitative predictions for a strictly planar orbit are in general agreement with the behavior seen for the simulated streams, which are not confined to the $x-z$ plane. Figure~\ref{fig:stream_forces} shows a stream (dots) and orbit (lines) evolving in the {\em OT\,}\ model rotating about the $z$ axis (top row) and $y$ axis (bottom row). The middle and right panels in both rows show the effect of simply changing the sign of figure rotation. The points in the plots are colored by one component of the Coriolis acceleration (as indicated by the color bar label). Changing the sign of rotation switches the sign of the Coriolis acceleration, but does not appreciably affect its magnitude. The solid lines show the past orbit of the progenitor (going back 4~Gyr from $t=0$) and the dashed curve shows the future orbit (which is expected to lie close to the position of the leading tidal arm); the orange dot shows the current location of the Sgr dwarf and the yellow star shows the position of the sun. From Table~\ref{tab:f_pseudo} one sees that rotation about the $z$ (or $x$) axis results in significant Coriolis forces only along the $y$ axis. Therefore it primarily causes {\em tilting and warping} of the plane of the tidal stream (which is approximately perpendicular to the galactocentric $y$ axis).
The top row, middle panel of Figure~\ref{fig:stream_forces} shows that the southernmost tip of the stream experiences a negative $a_{Co}$$(y)$ and is therefore pushed to negative $y$ values, while changing the sign of figure rotation (top row, right panel) causes this part of the stream to experience a positive $a_{Co}$$(y)$ and hence it is pushed to positive $y$ values. Rotation about the $y$-axis (bottom row of Fig.~\ref{fig:stream_forces}) results in significant Coriolis forces in both the $x$ direction (left panel) and the $z$ direction (middle, right). These in-plane Coriolis forces have a strong effect on the shape of the {\em orbit} of the progenitor, particularly the angles between the lobes of the rosette and the widths of the lobes. In the bottom row, the middle (right) panel shows that a positive (negative) $a_{Co}$$(z)$ pushes the orbit to more positive (negative) $z$ values. Similarly, the positive (negative) $a_{Co}$$(x)$ pushes the orbit to more positive (negative) $x$ values. Since the stream approximately follows the orbit of the progenitor, we expect that if figure rotation causes a change in the shape of the orbit, it will also affect the morphology of the stream. \section{Results} \label{sec:results} We now present results of simulations designed to study the effects of figure rotation on a Sgr-like stream evolved for 4~Gyr in each of the four models described in Section~\ref{sec:method}. For completeness we discuss rotation about each of the three principal axes of the Galaxy. The primary effects of rotation about the $z$ or $x$ axes arise from the Coriolis force in the $y$ direction (see Table~\ref{tab:f_pseudo}), which causes the warping of the northernmost and southernmost tips of the stream, as previously shown in Figure~\ref{fig:stream_forces} (top row, middle and right panels). Similar effects are seen in all the models. Figure~\ref{fig:YZ_OT_F19_z} shows $yz$-projections of Sgr-like streams evolved in the {\em OT\,}\ model (top row), the {\em F19m\,}\ model (middle row) and the {\em LM10\,}\ model (bottom row). The middle column shows the stream in a static halo for each model, while the left and right columns show streams in halos with pattern speeds as indicated by ${\mathbf\Omega_p}$. Small dots show simulated stream stars in the leading (green) and trailing (mauve) arms of the simulated stream. For reference, the median positions of RR Lyrae stars of the Sgr stream from PanSTARRS \citep{hernitschek_etal_17} are shown for both the leading arm (dark green squares) and trailing arm (magenta circles). In all three models the stream in the static halo is slightly tilted relative to the $z$-axis. Although the effects of rotation are subtle, it is clear that clockwise rotation about $z$ (left column) in all three halos causes simulated stars in the leading (green) arm at negative $z$ values to be shifted towards positive $y$ values, while anti-clockwise rotation (right column) causes the same stars to be pushed towards negative $y$ values. The warping in the plane of the leading arm is a result of the $y$-component of the Coriolis force being greatest at the point where $v_x$ is largest. As expected from Table~\ref{tab:f_pseudo} (first column), the direction of the warping is reversed when the sense of rotation is reversed. In all the models we see that the planes of the leading and trailing arms become slightly misaligned (especially for clockwise rotation, see left column).
This is because most of the leading arm stars (except for those at the northern and southern tips) are moving along the $z$ axis and experience no Coriolis acceleration. In contrast, trailing arm stars (see Fig.~\ref{fig:Pol_LMm_rxyz}) are moving primarily along the $x$ axis and therefore experience a larger Coriolis acceleration along the $y$ axis. This is the primary cause of the misalignment of the planes which contain the leading and trailing arms. Thus it is clear from this figure that figure rotation can result in subtle, but predictable, changes to the morphology of a Sgr-like stream. \begin{figure*} \begin{center} \includegraphics[trim=10. 60. 80. 60., clip, angle=0,width=0.364\textwidth]{100-03YZcart.png} \includegraphics[trim=110. 60. 80. 60., clip, angle=0,width=0.30\textwidth]{100_statYZcart.png} \includegraphics[trim=110. 60. 80. 60., clip, angle=0,width=0.30\textwidth]{100+03YZcart.png} \includegraphics[trim=10. 60. 80. 60., clip, angle=0,width=0.367\textwidth]{102-03YZcart.png} \includegraphics[trim=110. 60. 80. 60., clip, angle=0,width=0.30\textwidth]{102_statYZcart.png} \includegraphics[trim=110. 60. 80. 60., clip, angle=0,width=0.30\textwidth]{102+03YZcart.png} \includegraphics[trim=10. 0. 80. 60., clip, angle=0, width=0.367\textwidth]{103-03YZcart.png} \includegraphics[trim=110. 0. 80. 60., clip, angle=0, width=0.30\textwidth]{103_statYZcart.png} \includegraphics[trim=110. 0. 80. 60., clip, angle=0, width=0.30\textwidth]{103+03YZcart.png} \end{center} \caption{Cartesian ($y-z$) projection of the stream (small green points: leading arm, small mauve points: trailing arm) for static models (middle column) or rotating models with pattern speed as indicated, for the {\em OT\,}, {\em F19m\,}\ and {\em LM10\,}\ models as indicated by the labels. Observed median positions of Sgr stream RR Lyrae stars from PanSTARRS \citep{hernitschek_etal_17} are shown for the leading arm (green squares) and trailing arm (magenta circles). \label{fig:YZ_OT_F19_z} } \end{figure*} We now examine the effect of figure rotation about each of the three principal axes in the {\em LMm\,}\ model. As mentioned previously, this model has the same halo shape as the {\em LM10\,}\ model, but the masses of the disk and halo are lower, resulting in a trailing arm that extends to a much larger Galactocentric radius, and therefore gives a better match to observations of BHB stars \citep{belokurov_etal_14} and RR Lyrae stars from PanSTARRS \citep{hernitschek_etal_17} at the trailing apocenter. \begin{figure*} \begin{center} \includegraphics[trim=10. 60. 80. 50., clip, angle=0,width=0.364\textwidth]{101-03YZcart.png} \hspace{0.24\textwidth} \includegraphics[trim=10. 60. 80. 50., clip, angle=0,width=0.37\textwidth]{101+03YZcart.png} \includegraphics[trim=10. 60. 80. 50., clip, angle=0,width=0.364\textwidth]{111-03YZcart.png} \hspace{0.24\textwidth} \includegraphics[trim=10. 60. 80. 50., clip, angle=0,width=0.37\textwidth]{111+03YZcart.png} \includegraphics[trim=10. 0. 80. 50., clip, angle=0,width=0.370\textwidth]{121-03YZcart.png} \includegraphics[trim=110. 0. 80. 50., clip, angle=0,width=0.305\textwidth]{101_statYZcart.png} \includegraphics[trim=110. 0. 80. 50., clip, angle=0,width=0.305\textwidth]{121+03YZcart.png} \end{center} \caption{Cartesian plot in the $y-z$ plane for streams in the {\em LMm\,}\ model rotated about each of the three principal axes: $z$-axis (top row), $x$-axis (middle row), $y$-axis (bottom row). Rotation axes and magnitudes as shown by the labels.
\label{fig:YZ_LMm_allax} } \end{figure*} Figure~\ref{fig:YZ_LMm_allax} shows $yz$-projections of the {\em LMm\,}\ model for clockwise (anti-clockwise) rotation about three different axes shown in the left (right) columns, with the static model shown in the middle of the bottom row. The top row shows that rotation about the $z$ axis causes dramatic warping and misalignment of the leading (green) arm and trailing (mauve) arm of the stream, with the direction of the tilting of the leading arm reversing when the sign of rotation flips. Rotation about the $x$ axis (middle row), which is perpendicular to the plane of the figure, causes fanning of the northernmost end of the leading arm of the stream, but less misalignment between the leading and trailing arm planes. Recall that the {\em LMm\,}\ model has its short axis along this ($x$) axis. The greatest asymmetry between clockwise and anti-clockwise rotation is seen in the bottom row, which shows rotation about the $y$ axis (the long-axis of the {\em LMm\,}\ and {\em LM10\,}\ halos). It is particularly striking that clockwise rotation (bottom row, left) causes the plane of the trailing arm to deviate very strongly from that of the leading arm. We speculate that this asymmetry arises because the orbital angular velocity of the Sgr dwarf and the stream is opposite to the sense of rotation, and hence the stream experiences an enhanced force perpendicular to the stream plane, similar to the ``Binney instability'', while for anti-clockwise halo rotation the orbit is stable. Once again we see that the primary signatures of the effect of the Coriolis force on a Sgr-like stream are to warp the stream and cause more significant misalignment (precession) between the planes containing the leading and trailing arms of the stream, regardless of the axis of rotation. Figure~\ref{fig:YZ_OT_F19_z} ({\em LM10\,}\ model, bottom row) and Figure~\ref{fig:YZ_LMm_allax} show that figure rotation has the strongest effect on streams in halos with the shape proposed by \citet{law_majewski_10}, independent of the other details of the gravitational potential. \citet{deg_widrow_12} confirmed that this halo shape was needed to account for the heliocentric velocities of stars in the leading arm. For this reason we focus on model {\em LMm\,}\ for the remainder of this paper. We do not show results for the {\em LM10\,}\ model since the deeper potential results in a trailing arm apocenter distance that is smaller than observed. Figure~\ref{fig:Pol_all_rxyz} shows the simulated Sgr streams in Figure~\ref{fig:YZ_OT_F19_z} in a polar plot with the Sgr great-circle coordinate \citep[$\Lambda_0$,][]{majewski_etal_03} in the angular direction and heliocentric distance in the radial direction. The dot-dashed line in each panel marks the orientation of the Galactic plane. Following \citet{majewski_etal_03}, the angular Sgr stream coordinate $\Lambda_0$ increases clockwise from $\Lambda_0=0$ (which marks the position of the Sgr dwarf) and is offset by 13$^\circ$ clockwise from the Galactic plane (shown by the dot-dashed line). Simulated stream stars (small dots) are colored by their Sgr coordinate $B_0$, which is the angle in degrees perpendicular to the Sgr great-circle plane defined by $B_0=0$. Coloring the stream by $B_0$ provides a 3D view of how figure rotation alters the warping and misalignment (or precession) in the planes containing the leading and trailing arms.
To further aid comparison with the observed Sgr stream, we also show observed median positions of RR Lyrae stars along the leading arm (green squares) and trailing arm (magenta circles) from PanSTARRS \citep{hernitschek_etal_17}. This polar plot is close to what would be observed if the stream were plotted in the $x-z$ plane. As this figure shows, all the simulated streams do a reasonably good job of matching parts of the observed stream, but none of them matches the RR Lyrae data points precisely. In particular, we see that the {\em LM10\,}\, model (bottom row) produces a trailing arm with too small an apocenter, although it does the best job of matching the leading arm. In Figure~\ref{fig:Pol_LMm_rxyz}, the simulated stream in the {\em LMm\,}\ model is shown for rotation about each of the three principal axes $z$, $x$, $y$ (from top to bottom) with pattern speeds (left to right) $\Omega_p = -0.3, 0, 0.3$\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}. The middle panel in the 2nd row shows 15,050 individual RR Lyrae stars observed with PanSTARRS from the catalog published by \citet{hernitschek_etal_17}. The observed and simulated stream stars are both shown on polar plots as in Figure~\ref{fig:Pol_all_rxyz}. The heliocentric distance of the apocenter of the leading arm in all the models is greater than the observed apocenter distance (marked by green squares), most likely because we do not include the effects of dynamical friction from the Milky Way's dark matter halo on the Sgr dwarf \citep{fardal_etal_19}. Nonetheless, the middle panel shows that, like the simulated streams, there are substantial gradients in $B_0$ across the observed stream stars, with the leading arm lying primarily at negative $B_0$ values and the trailing arm lying primarily at positive $B_0$, except at the trailing apocenter, which is at $B_0\lesssim0$. It is clear (from the colors of the points) that rotation about the $z$-axis (top row) causes the plane of the leading arm ($225^\circ <\Lambda_0 < 315^\circ$, marked by green squares) to be warped so that stars at the leading apocenter lie at $B_0>0$ for negative figure rotation (left column) and $B_0<0$ for positive figure rotation (right column). None of the simulated streams matches the observed angle between the leading and trailing apocenters. It has previously been shown that this angle is determined by the radial density profile of the dark matter halo \citep{belokurov_etal_14, fardal_etal_19} and that cored dark matter halos with larger scale lengths are needed to produce the observed angle of $\sim 95^{\circ}$ between the apocenters. We have kept $r_s=28$~kpc fixed for all of our models. This value is larger than expected from $\Lambda$CDM simulations for MW mass halos \citep{dutton_maccio_14, klypin_yepes_16} but smaller than the value of $r_s=68$~kpc determined by \citet{fardal_etal_19}. Therefore, none of our models gives the correct angle between the apocenters. It is, however, interesting to note from the bottom left panel that this angle can be altered by figure rotation. While the static model (bottom row, middle) fails to produce streams that match the observed positions of stars near the trailing apocenter ($135^\circ <\Lambda_0 < 225^\circ$, marked by magenta circles), rotation about the $y$ axis (bottom row) with $\Omega_p = -0.3$ (left column) results in a Coriolis force that pushes the southern arm of the stream northwards, resulting in a slightly closer match to the observed stream.
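This push is easy to verify directly: in a frame co-rotating with the halo figure, a star with velocity $\mathbf{v}$ experiences a Coriolis acceleration $-2\,\boldsymbol{\Omega}_p \times \mathbf{v}$. The short Python sketch below evaluates this cross product for two illustrative stream velocities; the numerical values are hypothetical, chosen only to mimic the geometry described above, and are not simulation output.
\begin{verbatim}
import numpy as np

def coriolis(omega_p, v):
    """Coriolis acceleration a_Co = -2 (Omega_p x v) in the frame
    co-rotating with the halo figure.
    omega_p : (3,) pattern-speed vector [km/s/kpc]
    v       : (3,) velocity [km/s]
    Only the direction matters for the argument in the text."""
    return -2.0 * np.cross(omega_p, v)

# Clockwise rotation about y, Omega_p = -0.3 km/s/kpc (as in the text).
omega = np.array([0.0, -0.3, 0.0])

# Illustrative velocities (NOT simulation output):
v_lead  = np.array([0.0, 0.0, 200.0])    # leading-arm star moving along +z
v_trail = np.array([-150.0, 0.0, 0.0])   # star south of trailing apocenter,
                                         # moving along -x
print(coriolis(omega, v_lead))    # [120. 0. 0.]: in-plane (x) deflection
print(coriolis(omega, v_trail))   # [0. 0. 90.]: lifted northwards (+z)
\end{verbatim}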
An examination of Figure~\ref{fig:stream_forces} (bottom row, middle) shows that near the trailing apocenter $a_{Co}(z)$ goes from negative to positive for $\Omega_p <0$, implying that the Coriolis force would lift the stars lying south of the trailing apocenter to more positive $z$ values, as we observe. Clockwise rotation about $y$ also alters the shape of the leading lobe, flattening it and bringing the apocenter to smaller distances. The colors of the trailing stream stars (bright red) in this panel show that stars at the trailing apocenter and beyond are pushed to negative $B_0$, which is not what is observed for the PanSTARRS RR Lyrae stars (middle panel, 2nd row). While this implies that rotation about the $y$-axis may not be adequate to change the angle between the apocenters, it certainly has a strong enough effect that it should be considered in future models, since it could allow for a halo with less extreme values of $r_s$ than found by \citet{fardal_etal_19}.
\begin{figure*}
\begin{center}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{100-03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{100_statLambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{100+03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{102-03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{102_statLambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{102+03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{103-03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{103_statLambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{103+03Lambda_Heldist_polar.png}
\end{center}
\vspace{-.4cm}
\caption{Polar plot for models {\em OT\,} (top row), {\em F19m\,} (middle), and {\em LM10\,} (bottom), with the polar angle showing the Sgr coordinate $\Lambda_0$ increasing clockwise from the current position of the Sgr dwarf ($\Lambda_0=0$), and the radial coordinate showing heliocentric distance in kpc. The stream is shown for clockwise rotation (left), the static model (middle), and anti-clockwise rotation (right) about the $z$ axis. Stream particles are colored by their angular distance $B_0$ [degrees] from the Sgr-stream great-circle plane. The Galactic plane is marked by a dot-dashed line. \label{fig:Pol_all_rxyz} }
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{101-03Lambda_Heldist_polar.png} \hspace{0.29\textwidth}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{101+03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{111-03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{Data_Sgr_Lambda_dist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{111+03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{121-03Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.29\textwidth]{121_statLambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.39\textwidth]{121+03Lambda_Heldist_polar.png}
\end{center}
\vspace{-.4cm}
\caption{Similar to Fig.~\ref{fig:Pol_all_rxyz} for the {\em LMm\,}\ model for rotation about each of the three principal axes: $z$ (top), $x$ (left and right of the 2nd row), and $y$ (left and right of the bottom row). The stream in the static {\em LMm\,}\ model is shown in the bottom middle panel. The central panel shows 15,050 individual observed RR Lyrae stars from \citet{hernitschek_etal_17}. \label{fig:Pol_LMm_rxyz} }
\end{figure*}
Figure~\ref{fig:Lam_pm} shows the total proper motion of the stream vs. $\Lambda_0$ for the observed Sgr stream from {\it Gaia}\ DR2 observations \citep{antoja_etal_20}\footnote{These authors only provide what they consider reliable Sgr stream data between $-150^\circ < \Lambda_0 <120^\circ$.} (second row, middle). As before, the color bar shows $B_0$ in degrees from the Sgr great-circle plane. The other panels in Figure~\ref{fig:Lam_pm} show the stream in the {\em LMm\,}\ model with rotation axes and pattern speeds as indicated by the labels. In this figure, $\Lambda_0$ is plotted in reverse to match the $\tilde{\Lambda}_0$ values plotted by \citet{antoja_etal_20} (which increase anticlockwise per standard convention). The color bar shows $B_0$ on the same scale for both the simulated Sgr stream and the observed stream. In each panel the five magenta triangles show the proper motions in five fields from \citet{sohn_etal_15, sohn_etal_16, fardal_etal_19} based on Hubble Space Telescope observations. Two facts are immediately clear. First, the overall sinusoidal shape of the observed total proper motions as a function of $\Lambda_0$ along the stream is broadly in agreement with all of the simulated streams (the simulated streams show both wraps of the stream, which are not shown for the observations). \citet{antoja_etal_20} found similar broad agreement between the observed proper motions for Sgr stream stars and the N-body simulation from \citet{law_majewski_10}. Second, the observed stream shows a substantial gradient in $B_0$ along the stream, especially in the range $-60 > \Lambda_0 > -120$, starting at fairly negative values of $B_0$ (red/oranges at $\Lambda_0 \sim -60$) and increasing to positive values of $B_0$ (blues at $\Lambda_0 \sim -120$). While this precise gradient in $B_0$ is not seen over this stretch of the stream in any of the simulations shown, it is clear that the static model (bottom row, middle panel) does not show such a gradient and is strictly at negative $B_0$. Rotation about the $z$ axis causes this part of the stream to lie at strictly positive $B_0$ values (top row, left) or at strictly negative $B_0$ values (top row, right). Clockwise rotation about the $x$-axis causes this part of the stream to show a gradient going from negative $B_0$ to positive $B_0$. Another feature of the observed stream, $B_0<0$ in the region $120 > \Lambda_0 >10$, is not seen in any of the models. In the static model and in most of the other models this portion of the stream has $B_0 \sim 0$ or slightly positive values (light blue). Only anti-clockwise rotation about the $z$ axis (top right) causes the portion of the stream in the region $120 > \Lambda_0 >70$ to shift to negative $B_0$, as seen in the observed stream. While none of the simulated streams shown produces both the amplitude and gradient of the deviation of the stream from the $B_0=0$ great-circle plane that is observed, rotation about each of the three principal axes changes the gradient in $B_0$ over some parts of the stream.
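The qualitative behavior described in this section can be explored with a simple test-particle integration in the frame co-rotating with the halo figure, where the equations of motion acquire Coriolis and centrifugal terms. The Python sketch below does this for a toy triaxial logarithmic potential; the potential, its parameters, and the initial conditions are illustrative stand-ins, not the OT/F19m/LM10/LMm models used here (our actual simulations use the gala package).
\begin{verbatim}
import numpy as np

def accel(r, vc=220.0, rc=10.0, qy=1.25, qz=0.8):
    # Toy triaxial logarithmic potential (units: km/s and kpc):
    # Phi = 0.5 vc^2 ln(rc^2 + x^2 + (y/qy)^2 + (z/qz)^2)
    x, y, z = r
    m2 = rc**2 + x**2 + (y / qy)**2 + (z / qz)**2
    return -vc**2 * np.array([x, y / qy**2, z / qz**2]) / m2

def deriv(s, omega_p):
    # Equations of motion in the frame co-rotating with the figure:
    # dv/dt = -grad(Phi) - 2 Omega x v - Omega x (Omega x r)
    r, v = s[:3], s[3:]
    a = (accel(r) - 2.0 * np.cross(omega_p, v)
         - np.cross(omega_p, np.cross(omega_p, r)))
    return np.concatenate([v, a])

def integrate(r0, v0, omega_p, dt=1e-4, n=40000):
    # Fixed-step RK4; one time unit = kpc/(km/s) ~ 0.978 Gyr,
    # so n * dt = 4 corresponds to ~3.9 Gyr of evolution.
    s = np.concatenate([r0, v0])
    orbit = np.empty((n, 6))
    for i in range(n):
        k1 = deriv(s, omega_p)
        k2 = deriv(s + 0.5 * dt * k1, omega_p)
        k3 = deriv(s + 0.5 * dt * k2, omega_p)
        k4 = deriv(s + dt * k3, omega_p)
        s = s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        orbit[i] = s
    return orbit

# Illustrative Sgr-like polar orbit; pattern speed -0.3 km/s/kpc about y:
orbit = integrate(np.array([25.0, 0.0, 0.0]), np.array([0.0, 30.0, 150.0]),
                  omega_p=np.array([0.0, -0.3, 0.0]))
\end{verbatim}
Comparing the orbital planes of runs with opposite signs of $\Omega_p$ reproduces, qualitatively, the warping and plane precession discussed above.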
\begin{figure*}
\begin{center}
\hspace{-1cm}
\includegraphics[trim=30. 88. 117. 90., clip, angle=0,width=0.33\textwidth]{101-03L_pm_tot.png} \hspace{0.275\textwidth}
\includegraphics[trim=30. 88. 10. 90., clip, angle=0,width=0.382\textwidth]{101+03L_pm_tot.png} \\
\hspace{-1cm}
\includegraphics[trim=30. 88. 117. 90., clip, angle=0,width=0.335\textwidth]{111-03L_pm_tot.png}
\includegraphics[trim=102. 103. 117. 90., clip, angle=0,width=0.308\textwidth]{Data_L_pm_tot.png}
\includegraphics[trim=105. 88. 10. 90., clip, angle=0,width=0.35\textwidth]{111+03L_pm_tot.png} \\
\hspace{-1cm}
\includegraphics[trim=30. 0. 117. 90., clip, angle=0,width=0.339\textwidth]{121-03L_pm_tot.png}
\includegraphics[trim=105. 0. 117. 90., clip, angle=0,width=0.304\textwidth]{121_statL_pm_tot.png}
\includegraphics[trim=105. 0. 10. 90., clip, angle=0,width=0.354\textwidth]{121+03L_pm_tot.png} \\
\caption{$\mu_{\rm tot}$ vs. $\Lambda_0$, with the color bar signifying $B_0$. All panels except the middle one show the {\em LMm\,}\ model with the rotation axis and frequency as indicated by the label. The middle column of the 2nd row shows the observed {\it Gaia}\ total proper motion as a function of $\Lambda_0$, from Table~E.1 of \citet{antoja_etal_20}. $\Lambda_0$ is plotted from positive to negative values to enable easier comparison with $\tilde{\Lambda}_\odot$ \citep{belokurov_etal_14}. The magenta triangles are HST proper motions \citep{sohn_etal_15, sohn_etal_16}. \label{fig:Lam_pm}}
\end{center}
\end{figure*}
In Figure~\ref{fig:Pol_LMm_mult_omega} we show streams in the {\em LMm\,}\ model for two larger pattern speeds and rotation about the $z$-axis (panels 1, 2 from left) and the $y$-axis (panels 3, 4). As the pattern speed increases, we see that the coherence of the stream decreases and the distortions to the stream increase dramatically. Notice that increasing $|\Omega_p|$ increases the angle between the apocenters of the leading and trailing arms in the {\em LMm\,}\ model. Our examination of the other models confirms that in most cases increasing the pattern speed causes a greater perturbation to the stream regardless of the axis of rotation. We see that even a pattern speed of $|\Omega_p|=0.6$ produces significantly larger distortions than $|\Omega_p|=0.3$ (see Fig.~\ref{fig:Pol_LMm_rxyz}). The pattern speeds in this figure are at the high end of the pattern speed distribution expected from cosmological simulations. Based on the work of \citet{bailin_steinmetz_04} we infer that less than 5\% of DM halos have such large pattern speeds. Nonetheless, this is further indication that the Sgr stream is a sensitive probe of the sign, axis, and pattern speed of figure rotation within the range of values predicted by cosmological simulations.
\begin{figure*}
\begin{center}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.22\textwidth]{101-06Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.22\textwidth]{101-08Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 130. 0., clip, angle=0,width=0.22\textwidth]{121-06Lambda_Heldist_polar.png}
\includegraphics[trim=40. 80. 0. 0., clip, angle=0,width=0.29\textwidth]{121-08Lambda_Heldist_polar.png}
\end{center}
\vspace{-.4cm}
\caption{Effect of changing the pattern speed in the {\em LMm\,}\ model for rotation about the $z$-axis (two leftmost panels) and the $y$-axis (two rightmost panels). Increasing $|\Omega_p|$ to 0.6\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}} or greater produces stronger distortions in the stream.
\label{fig:Pol_LMm_mult_omega} }
\end{figure*}
Our study has been restricted to rotation about the three principal axes of four specific halo models. Many other parameters (such as the mass of the Sgr dwarf progenitor and the evolution time) determine other properties of the stream but have been held fixed in this study. A future study that allows for rotation about an arbitrary axis and carries out a systematic search over other model parameters could result in streams that match the observed gradients in $B_0$ as well as other properties of the stream. We have shown that the morphology of the stream is sensitive to all properties of figure rotation of the halo: the rotation axis, the magnitude of the pattern speed, and the sign of rotation. Although several degeneracies might exist, a comprehensive study of the parameter space should be carried out to determine whether or not the MW halo exhibits figure rotation, as predicted by cosmological simulations.
\section{Conclusions and Discussion}
\label{sec:conclusions}
It is over 15 years since it was predicted that triaxial dark matter halos should have figure rotation \citep{bailin_steinmetz_04,bryan_cress_07}. The lead author of the current paper (Valluri, Hofer et al. in prep) has recently confirmed these early predictions, which were based on dark-matter-only simulations, with the cosmological hydrodynamical Illustris simulations \citep{vogelsberger_etal_14}. The predicted pattern speeds of simulated halos have a log-normal distribution with median $\Omega_p \sim 0.15h$\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}\ and a width of 0.83\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}. These pattern speeds are so small that they have long been considered both undetectable and unimportant. We show for the first time that if the dark matter halo of the Milky Way galaxy is triaxial as expected from cosmological simulations \citep{dubinski_94_shapes, jing_suto_02} and maintains a steady pattern speed over a duration of 3--4~Gyr (as predicted by cosmological simulations), the rotating gravitational potential will exert a torque on a Sagittarius-like tidal stream that will alter its morphology and kinematics in ways that should already be detectable with current data. We do not attempt to model the observed properties of the Sgr stream in this paper; rather, we show that rotation with a pattern speed even as small as $|\Omega_p|= 0.3$\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}\ about any of the three principal axes generates a Coriolis acceleration that varies between $\sim$ 2--15\% of the gravitational acceleration along the stream (see Fig.~\ref{fig:orb_forces}). The Coriolis forces in the direction perpendicular to the stream plane (for rotation about any axis) and in the stream plane (for rotation about the $y$ axis) result in detectable differences in the progenitor orbit and therefore in the morphology and kinematics of the tidal stream. Our main results are listed below:
\begin{itemize}
\item Our simulations suggest that figure rotation of the Milky Way's dark matter halo would warp the Sagittarius stream at its northernmost and southernmost Galactic positions and produce a misalignment (or relative precession) between the instantaneous orbital planes of the leading and trailing arms (see Figures~\ref{fig:YZ_OT_F19_z} and \ref{fig:YZ_LMm_allax}).
\item Figure rotation about the galactocentric $y$ axis produces significant Coriolis forces throughout the halo in the $x-z$ plane (Figure~\ref{fig:stream_forces}), which can change the angle between apocenters of Sagittarius-like orbits even at a fixed radial density profile of the halo. This in turn can result in a change in the angle between the apocenters of the leading and trailing tidal arms (see the bottom left panel of Figure~\ref{fig:Pol_LMm_rxyz}), which suggests that this angle encodes information both about the density profile of the halo \citep[e.g.,][]{belokurov_etal_14} and about the rotation of the halo.
\item The recently observed sinusoidal form of the total proper motion as a function of $\Lambda_0$ is qualitatively seen in the {\em LMm\,} models (as well as in other models). However, if the halo has the shape inferred by \citet{law_majewski_10}, the observed values and gradient in $B_0$ along the stream (especially for $120^\circ>\Lambda_0>20^\circ$) are not produced in either a static halo or any of the rotating models.
\item Based on our simulations and within the context of the Sagittarius stream, we find that the southernmost portion of the leading tidal arm will likely provide the strongest constraints on the pattern speed of the halo. This part of the stream is warped in significantly different ways depending on the sign (and magnitude) of figure rotation, almost independently of the type of potential used. Therefore, mapping the locations (especially $B_0$) of Sgr stream stars in this region could help to constrain the magnitude of figure rotation of the halo. Unfortunately, as can be seen in the middle panel of Figure~\ref{fig:Pol_LMm_rxyz}, the leading arm of the Sgr stream is difficult to trace after it passes through the disk plane.
\item Pattern speeds of $|\Omega_p| \gtrsim 0.6\ensuremath{\,\mathrm{{km\ s}^{-1}\ {kpc}^{-1}}}$ can cause severe distortions to the Sgr stream in the potentials we considered. Although fewer than 5\% of DM halos are expected to exhibit pattern speeds this large \citep{bailin_steinmetz_04}, it is clear that the observed coherence and morphology of the Sgr stream could be used to set a realistic upper limit on the pattern speed of figure rotation.
\end{itemize}
Although we have not shown results for the line-of-sight velocities of stream stars, we found that streams in the {\em LM10\,}, {\em LMm\,}\, and {\em F19m\,}\, models provide a reasonably good match to the line-of-sight velocities of the leading arm, consistent with \citet{law_majewski_10}, while these velocities in the {\em OT\,}\, model are much too negative (as found by previous authors). Line-of-sight velocities of stars in the trailing arm are well fitted by all models (even though none of our models reproduces the correct angle between apocenters). Our test particle simulations have shown that figure rotation of the halo has a negligible effect on the heliocentric distances of stream stars. An increase in the heliocentric distance of the stream could be produced by a strong centrifugal acceleration in a rotating halo, but since this is always much less than the Coriolis acceleration, the latter dominates the behavior of the stream. While it is not yet well established how halos acquire figure rotation in cosmological simulations, it was found in early studies \citep{bailin_steinmetz_04, bryan_cress_07} that it frequently arises after a close tidal interaction with massive galaxies or satellites.
The LMC is known to be on its first infall towards the Milky Way \citep{besla_07, besla_10} and is probably massive enough ($\sim 10^{11}\mbox{$M_{\odot}$}$) to have moved the center of the Milky Way such that the pair of galaxies is orbiting their common center of mass \citep{gomez_besla_15}. This motion of the center of the Milky Way would also affect the Sgr stream and is not simulated here. However, if the Milky Way's halo is triaxial as determined by previous models of the Sgr stream \citep{law_majewski_10,deg_widrow_12}, a massive satellite like the LMC is probably capable of inducing figure rotation in the MW halo. A more detailed study of figure rotation in cosmological simulations is needed to understand exactly how figure rotation is induced and what determines the axis of rotation and its magnitude and direction. An alternative way of generating an effective rotation of the halo (relative to our viewpoint in the disk) is if the disk is currently tilting relative to the Milky Way halo. As shown by \citet{debattista_etal_13}, a halo with the shape determined by \citet{law_majewski_10}, which has the intermediate axis of the halo perpendicular to the plane of the disk, would be violently unstable; the disk would tilt relative to the halo, evolving to an orientation in which the short axis of the halo (currently approximately along the galactocentric $x$ axis) is aligned with the rotation axis of the disk. If this is currently occurring in the Milky Way, it would result in rotation about the $y$ axis (which causes some of the most significant effects on the Sgr stream). \citet{debattista_etal_13} showed that this instability-induced tilting of the disk would occur fairly rapidly and estimated a rate of $\sim 20^\circ\,$Gyr$^{-1}$, which is comparable to the values we have considered. \citet{earp_etal_17} have shown that if the disk of the Milky Way is tilting, the angular speed of this tilting would be observable with {\it Gaia}. They recently showed, using state-of-the-art cosmological hydrodynamical simulations \citep{earp_etal_17, earp_etal_19}, that the minimum tilting rate of disks is high enough to be detectable with the astrometric precision of the {\it Gaia}\ reference frame \citep{perryman_etal_14}, which will have an end-of-mission accuracy better than 1\,$\mu$arcsec\,yr$^{-1}$ (0.28$^{\circ}\,$Gyr$^{-1}$). At the present time, too little is known about the circumstances giving rise to figure rotation in cosmological halos or the circumstances that would produce a halo with its intermediate axis in the unstable configuration where it is perpendicular to the disk. However, with the large number of cosmological hydrodynamical simulations now publicly available, both for $\Lambda$CDM and for other types of DM candidates (WDM, SIDM), these questions can be answered in the near future, leading to improved simulations of the Sgr stream. With the wealth of existing and upcoming data from large photometric and astrometric surveys ({\it Gaia}, WFIRST, LSST) and large spectroscopic surveys \citep[DESI, WEAVE, 4MOST,][]{DESI_2016a, DESI_2016b,WEAVE2014,4MOST2012}, it will soon be possible to construct much more accurate models for the Sgr stream, and potentially to constrain not only the halo shape and density profile but also, for the first time, its pattern speed and axis of figure rotation.
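As a quick consistency check on the quoted accuracy (assuming Julian years), 1\,$\mu$arcsec\,yr$^{-1}$ indeed corresponds to $\sim0.28^{\circ}\,$Gyr$^{-1}$, well below the estimated tilting rate:
\begin{verbatim}
ARCSEC_PER_DEG = 3600.0
one_muas_per_yr = 1e-6 / ARCSEC_PER_DEG * 1e9          # in deg/Gyr
print(f"1 muas/yr = {one_muas_per_yr:.3f} deg/Gyr")    # 0.278
print(f"margin for a 20 deg/Gyr tilt: {20.0 / one_muas_per_yr:.0f}x")  # ~72x
\end{verbatim}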
\acknowledgments MV thanks members of the stellar halos group at the University of Michigan for stimulating discussions and continued camaraderie, especially during the COVID-19 pandemic. MV is supported by NASA-ATP awards NNX15AK79G and 80NSSC20K0509 and a Catalyst Grant from the Michigan Institute for Computational Discovery and Engineering at the University of Michigan. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia}\ (\url{http://www.cosmos.esa.int/gaia}), processed by the {\it Gaia}\ Data Processing and Analysis Consortium (DPAC, \url{http://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement. This research has also made use of NASA's Astrophysics Data System Bibliographic Services and of the arXiv preprint server operated by Cornell University. \software{ Astropy \citep{astropy, astropy:2018}, gala \citep{gala:JOSS, gala}, IPython \citep{IPython}, matplotlib \citep{hunter07}, numpy \citep{vanderwalt11}, scipy \citep{jones01}.} \medskip \bibliographystyle{aasjournal}
\newcommand{\sectionsmall}[1]{\section{#1}\vspace{-2ex}}
\newcommand{\subsectionsmall}[1]{\subsection{#1}\vspace{-1ex}}
\DeclareMathOperator*{\argmin}{\arg\!\min}
\DeclareMathOperator*{\argmax}{\arg\!\max}
\newcommand{\norm}[1]{\left\lVert#1\right\rVert}
\newcommand{\normsmall}[1]{\| #1\|}
\newcommand{\br}[1]{\left\{#1\right\}}
\newcommand{\abs}[1]{| #1 |}
\newcommand{\eps}{\ensuremath{\varepsilon}}
\renewcommand{\epsilon}{\varepsilon}
\DeclarePairedDelimiter{\ceil}{\lceil}{\rceil}
\DeclarePairedDelimiter{\floor}{\lfloor}{\rfloor}
\newcommand{\inprod}[1]{\left\langle #1 \right\rangle}
\DeclareMathOperator*{\rank}{\text{rank}}
\DeclareMathOperator*{\sign}{\text{sign}}
\declaretheorem[name=Theorem]{theorem}
\declaretheorem[name=Corollary, numberlike=theorem]{corollary}
\declaretheorem[name=Lemma, numberlike=theorem]{lemma}
\declaretheorem[name=Proposition, numberlike=theorem]{proposition}
\declaretheorem[name=Definition]{definition}
\declaretheorem[name=Assumption]{assumption}
\declaretheorem[name=Remark, numberlike=theorem]{Remark}
\declaretheorem[name=Example, numberlike=theorem]{Example}
\declaretheorem[name=Observation, numberlike=theorem]{Observation}
\newtheorem{claim}[theorem]{Claim}
\DeclareMathOperator*{\argmina}{arg\,min}
\section{Introduction}\label{sec:introduction}
\IEEEPARstart{C}{oreset} is usually defined as a small weighted subset of the original input set that provably approximates the given loss (objective) function for every query in a given set of queries. Coresets are useful in machine learning applications as they offer significant efficiency improvements. Namely, traditional (possibly inefficient, but provably optimal) algorithms can be applied on coresets to obtain an approximation of the optimal solution on the full dataset using time and memory that are smaller by orders of magnitude. Moreover, existing heuristics that already run fast can be improved in terms of accuracy by running them many times on the coreset in the time it takes for a single run on the original (big) dataset. Finally, coresets can be maintained for distributed \& streaming data, where the stream is distributed in parallel from a server to $m$ machines (e.g., a cloud), and the goal is to maintain the optimal solution (or an approximation to it) for the whole input seen so far in the stream using small update time, memory, and communication to the server. In the past decade, coresets, under different formal definitions, were applied to many machine learning algorithms, e.g., logistic regression~\cite{huggins2016coresets,munteanu2018coresets}, SVM~\cite{har2007maximum,tsang2006generalized,tsang2005core,tsang2005very,tukan2020coresets}, clustering problems~\cite{feldman2011scalable,gu2012coreset,jubran2020sets,lucic2015strong, schmidt2019fair}, matrix approximation~\cite{feldman2013turning, maalouf2019fast,maalouf2020tight, sarlos2006improved,maalouf2021coresets}, $\ell_z$-regression~\cite{cohen2015lp, dasgupta2009sampling, sohler2011subspace}, decision trees~\cite{jubran2021coresets}, and others; see the surveys~\cite{feldman2020core,phillips2016coresets,jubran2019introduction}. Some attempts at using coresets in deep networks have recently been made. Apart from the standard use of coresets for reducing the amount of computation in training, e.g., by replacing the full data~\cite{mirzasoleiman-icml-2020} or a batch~\cite{batchSizeReduction} with a coreset, there are other applications that motivate the use of summarization methods in deep networks, e.g., model compression, continual learning, domain adaptation, federated learning, and neural architecture search. We discuss some of them below. \noindent\textbf{Model Compression.} Deep networks are highly over-parametrized, resulting in high memory requirements and slower inference. While many methods have been developed for reducing the size of a previously trained network with no (or small) accuracy loss~\cite{liu2019metapruning,li2019learning,chen2020storage,he2019filter,dong2017more,kang2020operation,ye2020good, ye2018rethinking,maalouf2021deep}, most of them relied on heuristics, which performed well on known benchmarks but diverged considerably from the behavior of the original network on specific subsets of the input distribution~\cite{brain_workshop}. A few previous works~\cite{MussayOBZF20,Mussai20a,baykal2018datadependent,Liebenwein2020Provable} tried to resolve this problem by deriving a coreset for a fully connected or a convolutional layer with a provable trade-off between the compression rate and the approximation error for any future input.
However, since these works construct a coreset for a layer, the full network compression is performed in a layer-by-layer fashion. \noindent\textbf{Limited Data Access.} Problems such as continual / incremental learning~\cite{LwF,iCarl,Few_shot_reminder,Prototype_reminder,Lopez-PazR17,BorsosM020}, domain adaptation~\cite{LiSWZG17,Asami}, and federated learning~\cite{goetz2020federated} do not have access to the full data (due to memory limitations or privacy issues), and only a small summary of it can be used. Coresets offer a natural solution for these problems. \noindent\textbf{NAS.} Another important application that could benefit from coresets is neural architecture search (NAS). Evaluating different architectures or a choice of parameters using a large dataset is extremely time-consuming. A small, representative summary of the training set could be used for a reliable approximation of the full training, while greatly speeding up the search. A few recent works~\cite{Condensation, GTN} (inspired by the work of~\cite{wang2018dataset}) tried to learn a small synthetic set that summarizes the full set for NAS. Previous attempts at summarizing a full training set with a small subset or a synthetic set showed the merit of using coresets in modern AI (e.g., for training deep networks). However, the summarization methods that they suggested were based on heuristics with no guarantees on the approximation error. Hence, it is not clear that existing heuristics for data summarization can scale up to real-life problems. On the other hand, theoretical coresets that provably quantify the trade-off between data reduction and information loss for an objective function of interest are mostly limited to simple, shallow models due to the challenges discussed below in Section~\ref{subsection:challanges}. From the theoretical perspective, it seems that we cannot have coresets for a reasonable neural network under the classic definition of the worst-case query (see, e.g., Theorem 6 of~\cite{MussayOBZF20}). In this paper we try to find a midway between these two paradigms. \subsection{Coreset challenges}\label{subsection:challanges} In many modern machine learning problems, obtaining non-trivial theoretical worst-case guarantees is usually impossible (due to the high complexity of the target model, e.g., deep networks, or because every point in the input set is important in the sense of high sensitivity~\cite{tukan2020coresets2}). Even for simple problems, it may take years to design a coreset and prove its correctness for a specific problem at hand. Another problem with the existing theoretical frameworks is the lack of generality. Even the most generic frameworks among them~\cite{feldman2011unified, langberg2010universal} replace the problem of computing a coreset for $n$ points with $n$ new optimization problems (known as sensitivity bounds), one for each of the $n$ input points. Solving these, however, might be harder than solving the original problem. Hence, different approximation techniques are usually tailored for each and every problem. \subsection{Our Contribution} The above observations suggest that there is a need for a more generic approach that can compute a coreset automatically for a given pair of dataset and loss function, and that can be applied to hard problems such as deep networks. It seems that this would require some relaxation in the standard coreset definition.
Would such a relaxed coreset, produced by a generic algorithm, yield empirical results comparable to the traditional coresets that have provable guarantees? We affirmatively answer this question by providing: \begin{enumerate}[(i)] \item A new definition of a coreset, which is a relaxation of the traditional definition of the strong coreset. \item AutoCL: a generic and simple algorithm that is designed to compute a coreset (under the new definition) for almost any given input dataset and loss function. \item Example applications with highly competitive empirical results for: (a) problems with known coreset construction algorithms, namely, linear regression and logistic regression, where the goal is to summarize the input training set, and (b) model compression, i.e., learning a coreset of all training parameters of a deep neural network at once (useful for model pruning). To our knowledge, this is the first algorithm that aims to compute a coreset for the network at once, rather than layer by layer or via similar divide-and-conquer methods. It is also the first approach that suggests representing the coreset itself as a small (trainable) network that keeps improving on each iteration. In this sense we suggest ``coresets for deep learning using deep learning''. \item Open code for reproducing our results~\cite{opencode}. We expect that it will become the baseline for producing ``empirical'' coresets for many problems in the future, mainly because it requires very little familiarity with the existing theoretical research on coresets. \end{enumerate} \section{Preliminaries} \textbf{Notations.} For a set $P$ of $n$ items, we use $\abs{P}$ to denote the number of items in $P$ (i.e., $\abs{P}=n$). For an event $B$, we use $\Pr(B)$ to denote the probability that event $B$ occurs, and for a random variable $x$ with a probability measure $\mu$, we use $\mathbb{E}_{\mu}(x)$ to denote its mean (expected value). Finally, for a loss function $\mathrm{loss}$ and an input set of variables $C$ (of any form), we use $\nabla \mathrm{loss}(C)$ to denote a standard gradient computation of $\mathrm{loss}$ with respect to the set of variables $C$, and $C- \alpha\nabla \mathrm{loss}(C)$ to denote a standard update of the variables $C$ using a gradient step, where $\alpha>0$ is the learning rate. The following (generic) definition of a query space encapsulates all the ingredients required to formally define an optimization problem. \begin{definition} [Query space; see Definition 4.2 in~\cite{braverman2016new}] \label{def:querySpace} Let $\mathbb{P}$ be a (possibly infinite) set called \emph{ground set}, $Q'$ be a (possibly infinite) set called \emph{query set}, and let $f:\mathbb{P}\times Q' \to [0,\infty)$ be a loss (or cost) function. Let $P\subseteq \mathbb{P}$ be a finite set called \emph{input set}, and let $w:P\to [0,\infty)$ be a \emph{weight function}. The tuple $(P,w,Q',f)$ is called a \emph{query space} over $\mathbb{P}$. \end{definition} Typically, in the training step (of a machine learning model), we solve the optimization problem, i.e., we aim at finding the solution $q^*$ that minimizes the sum of fitting errors $\sum_{p\in P} w(p) f(p,q)$ over every $q\in Q'$. \begin{definition} [Query cost] \label{def:onequerycost} Let $(P,w,Q',f)$ be a query space over $\mathbb{P}$. Then, for a query $q\in Q'$ we define the total cost of $q$ as $f(P,w,q) = \sum_{p\in P} w(p)f(p,q).$ \end{definition} In the next definition, we formally describe a (strong) coreset for a given optimization problem.
\begin{definition}[Traditional Coresets]\label{def:strongCoreset} For a query space $(P,w,Q',f)$ and an error parameter $\eps\in (0,1)$, an $\eps$-coreset is a pair $(C,u)$ such that $C\subseteq P$, $u:C\to \ensuremath{\mathbb{R}}$ is a weight function, and for every $q\in Q'$, $f(C,u,q)$ is a $1\pm\eps$ multiplicative approximation for $f(P,w,q)$, i.e., \begin{equation} \abs{f(P,w,q) - f(C,u,q)} \leq \eps f(P,w,q). \label{eq:traditional-coreset} \end{equation} \end{definition} \section{Method} In this section we first explain our approach in general, emphasizing its novelty, and then present our suggested framework in full detail. \subsection{Novel Framework} We propose a \emph{practical} and \emph{generic} framework for coreset construction for a wide family of problems via the following steps: \begin{enumerate}[(a)] \item \emph{Make the problem simpler by relaxing the definition of a coreset.}\label{framework:1} Namely, we propose a new $(\eps,\mu)$-coreset for the Average Loss (in Definition~\ref{def:aca}) that is a relaxation of the standard definition (in Definition~\ref{def:strongCoreset}) and is more suited to the learning formalism. \item \emph{Define coreset construction as a learning problem.} Here, the coreset (under the new definition in Definition~\ref{def:aca}) is the training variable. \label{framework:2} \item \emph{Find the coreset that optimizes the empirical risk over a training set of queries.} \label{framework:3} We assume that we are given a set of queries, chosen i.i.d. from an unknown distribution, and we find a coreset that approximates the average loss of the original input data over the training set of queries. \item \emph{Show that the optimized coreset generalizes to all members of the query set.} \label{framework:4} Namely, the expected loss on the coreset over all queries approximates the expected loss on the original input data. \end{enumerate} \subsection{$(\eps,\mu)$-Coreset for the Average Loss} We relax the definition of a coreset by observing that in data mining and machine learning problems we are usually interested in approximating the average loss over the whole set of queries rather than approximating the loss of a specific query. To this end, we define a distribution over the set of queries in Definition~\ref{def:prob-querySpace}, and then focus on approximating the expected loss in Definition~\ref{def:aca}. \begin{definition} [Measurable query space] \label{def:prob-querySpace} Let $(P,w,Q',f)$ be a query space over the ground set $\mathbb{P}$, and let $\mu$ be a probability measure on a probability space $(Q',2^{Q'})$. Then, the tuple $(P,w,Q',f,\mu)$ is called a measurable query space over $\mathbb{P}$. \end{definition} \begin{definition} [$(\eps,\mu)$-coreset for the Average Loss]\label{def:aca} Let $(P,w ,Q',f,\mu)$ be a measurable query space over $\mathbb{P}$. Let $\eps \in [0,\infty)$ be an error parameter, $C\subset \mathbb{P}$ be a set, and $u:C\to\ensuremath{\mathbb{R}}$ be a weight function such that: $$\abs{\mathbb{E}_\mu(f(P,w,q)) - \mathbb{E}_\mu (f(C,u,q))} \leq\eps,$$ i.e., the expected loss of the original set $P$ over the randomness of sampling a query $q$ from the distribution $\mu$ is approximated by the expected loss on $C$. Then, the pair $(C,u)$ is called an $(\eps,\mu)$-coreset for the measurable query space $(P,w,Q',f,\mu)$.
\end{definition}
While $(P,w)$ is trivially an $(\eps,\mu)$-coreset of $(P,w,Q',f,\mu)$, a coreset $(C,u)$ is efficient only if the cardinality of $C$ is significantly smaller than that of $P$, i.e., $|C|\ll|P|$, hopefully by orders of magnitude. \textbf{Remark:} Throughout the literature, the term ``coreset'' usually refers to a small weighted \textbf{subset} of the input set (data). However, in other works (and in ours), this requirement is relaxed~\cite{cohen2015dimensionality,phillips2016coresets}. In many applications this relaxation gives a significant benefit, as it supports a much larger family of instances as coreset candidates. \subsection{Coreset Learning}\label{sec:coreset-learning}
\begin{algorithm}[b] \small \caption{$\textsc{AutoCL}(P,w,Q,f,C_{size})$} \label{alg:main-new} \textbf{Input:} {A finite input set $P$ and its weight function $w:P\to \ensuremath{\mathbb{R}}$, a finite set of queries $Q$, a loss function $f:P \times Q\to [0,\infty)$, and an integer $C_{size}\geq1$.}
\begin{spacing}{1.1}
\begin{algorithmic}[1] \small
\STATE $C:=\br{c_i}_{i=1}^{C_{size}}$ is an arbitrary set of $C_{size}$ vectors in $\mathbb{P}$.\label{line:initc-new}\\
\STATE $u(c):=1/C_{size}$ for every $c\in C$.\label{line:initu-new}\\
\FOR{$i:= 1 \to epochs$} \label{lin:opt_start}
\STATE $f_C:= \frac{1}{k}\sum_{q\in Q}f(C,u,q)$ \label{line:cerror-new} \COMMENT{The average loss on $C$.} \\
\STATE $f_P:= \frac{1}{k}\sum_{q\in Q}f(P,w,q)$\label{line:perror-new} \COMMENT{The average loss on $P$.}
\STATE $\mathrm{loss}:= \abs{f_P - f_C} + \lambda \abs{\sum_{p\in P}w(p) - \sum_{p\in C}u(p)}$\label{line:apperror-new} \\ \COMMENT{The approximation error that we wish to minimize; $\lambda>0$ is a hyper-parameter to balance the two losses.}
\STATE $C:= C - \alpha \nabla \mathrm{loss}(C)$\label{line:updatec-new} \\ \COMMENT{Update $C$, where $\alpha>0$ is the learning rate.}\\
\STATE $u:= \max\{0,u - \alpha \nabla \mathrm{loss}(u)\}$ \label{line:updateu-new}\COMMENT{Update $u$.}
\ENDFOR \label{lin:opt_end}
\STATE \textbf{return} $(C,u)$
\end{algorithmic}
\end{spacing}
\end{algorithm}
We propose to learn a coreset (and its weights) as in Definition~\ref{def:aca} using gradient-based methods. We assume that we are given a set $P$, its weights $w$ such that $\sum_{p\in P} w(p) =1$,\footnote{We use this assumption to simplify the presentation. In practice, we can implement it by scaling the input weights to sum to $1$; formally, all that is needed is to scale the sample size of the queries according to the sum of weights.} and a set $Q$ of $|Q|=k$ queries sampled i.i.d. from $Q'$ (according to the measure $\mu$). First, we aim to compute an $(\eps,\mu)$-coreset $(C,u)$ of $(P,w)$ with respect to the finite set of queries $Q$. Formally speaking, $(C,u)$ should satisfy: \begin{align} \left|\sum_{q\in Q}\frac{1}{k}f(P,w,q)-\sum_{q\in Q}\frac{1}{k}f(C,u,q)\right| \leq \eps. \label{ALG:guarantee} \end{align} To do so, we can treat $Q$ as our training data and learn a coreset $(C,u)$ of $(P,w)$ with respect to the objective $f$ by minimizing the following loss: $$\left|{\frac{1}{k}\sum_{q\in Q}f(P,w,q)-\frac{1}{k}\sum_{q\in Q}f(C,u,q)}\right|.$$ This will guarantee that $(C,u)$ is an $(\eps,\mu)$-coreset for the measurable query space $(P,w,Q,f,\pazocal{U})$, where $\pazocal{U}:Q\to [0,1]$ is the uniform distribution over the finite set $Q$, i.e., $\pazocal{U}(q)=1/k=1/|Q|$ for every $q\in Q$.
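For concreteness, the following is a minimal PyTorch sketch of Algorithm~\ref{alg:main-new} for a toy query space in which points lie in $\ensuremath{\mathbb{R}}^d$, queries are candidate centers, and $f(p,q)=\norm{p-q}^2$. The toy cost, the use of Adam in place of plain gradient steps, and all sizes below are illustrative choices, not part of the algorithm itself:
\begin{verbatim}
import torch

def f(X, w, q):
    # Toy query cost: f(X, w, q) = sum_p w(p) * ||p - q||^2
    return (w * ((X - q) ** 2).sum(dim=1)).sum()

def auto_cl(P, w, Q, c_size, lam=1.0, lr=0.01, epochs=500):
    # Lines 1-2 of Algorithm 1: initialize C (here: a random subset
    # of P) and uniform weights u.
    C = P[torch.randperm(len(P))[:c_size]].clone().requires_grad_(True)
    u = torch.full((c_size,), 1.0 / c_size, requires_grad=True)
    opt = torch.optim.Adam([C, u], lr=lr)
    f_P = torch.stack([f(P, w, q) for q in Q]).mean().detach()  # fixed
    for _ in range(epochs):
        f_C = torch.stack([f(C, u, q) for q in Q]).mean()
        loss = (f_P - f_C).abs() + lam * (w.sum() - u.sum()).abs()
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            u.clamp_(min=0.0)   # u := max{0, u - alpha * grad}, Line 8
    return C.detach(), u.detach()

torch.manual_seed(0)
P = torch.randn(1000, 2)                  # input set
w = torch.full((1000,), 1.0 / 1000)       # weights summing to 1
Q = 2.0 * torch.randn(200, 2)             # i.i.d. training queries
C, u = auto_cl(P, w, Q, c_size=20)
\end{verbatim}
The in-place projection of $u$ onto the non-negative orthant mirrors Line~8 of the algorithm; any other projected first-order optimizer could be substituted.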
However, we wish the constraint in Eq.~\eqref{ALG:guarantee} to hold for the whole set of queries $Q'$, in order to obtain an $(\eps,\mu)$-coreset for our desired (original) measurable query space $(P,w,Q',f,\mu)$. To obtain a generalized solution (as we show in Section~\ref{sec:generalization}), we need to bound $\sup_{q\in Q'}{f(C,u,q)}$. To do so, we should guarantee that the sum of the coreset weights approximates the original sum of weights, i.e.: \begin{align} \abs{\sum_{p\in P}w(p) - \sum_{p\in C}u(p)} \leq \eps. \label{ALG:guarantee2} \end{align} The motivation behind enforcing Eq.~\eqref{ALG:guarantee2} is as follows. Recall that $\mathbb{P}$ is the ground set, i.e., $P,C \subset \mathbb{P}$. Let $M=\sup_{q\in Q',p\in \mathbb{P}}\abs{f(p,q)}$; then enforcing Eq.~\eqref{ALG:guarantee2} yields, for every $q\in Q'$, $$f(C,u,q) = \sum_{p\in C}u(p)f(p,q) \leq \left(\sum_{p \in P}w(p)+\eps\right)M= (1+\eps)M.$$ Hence, we ``force'' our coreset to have a bounded loss over the whole query space, $\sup_{q\in Q'}f(C,u,q) \leq (1+\eps)M$. Furthermore, this bound is proportional to the bound on the loss of the original input $P$, i.e., to $$\sup_{q\in Q'}f(P,w,q)\leq \sum_{p\in P} w(p) M =M,$$ and to the approximation error $\eps$.
To summarize, we learn an $(\eps,\mu)$-coreset $(C,u)$ of $(P,w)$ with respect to the objective $f$ given training data (a set of queries) $Q$. To enforce the conditions in Eqs.~\eqref{ALG:guarantee} and~\eqref{ALG:guarantee2} to hold with a small $\eps$, we minimize the following loss: \begin{equation} \begin{split} \mathrm{loss}(Q;C,u)&:=\left|\frac{1}{k}\sum_{q\in Q}f(P,w,q)-\frac{1}{k}\sum_{q\in Q}f(C,u,q)\right| \\ &\quad + \lambda \left|{\sum_{p\in P}w(p) - \sum_{p\in C}u(p)}\right|. \label{alg:loss} \end{split} \end{equation} Here, $\lambda>0$ is a hyper-parameter to balance the two losses. The algorithm for coreset learning is summarized in Algorithm~\ref{alg:main-new}. \subsection{Generalization} \label{sec:generalization} We start by stating the sufficient guarantees for the $(\eps,\mu)$-coreset (i.e., the sufficient guarantees to obtain a generalized solution): \begin{enumerate}[(i)] \item \label{firststep} With high probability, the expected loss of the set $P$ over all queries in $Q'$ (i.e., $\mathbb{E}_\mu(f(P,w,q))$) is approximated by the average loss of the same set $P$ over the sampled set $Q$ of $k$ queries, i.e., with high probability $$\abs{\frac{1}{k}\sum_{q\in Q} f(P,w,q) - \mathbb{E}_\mu (f(P,w,q))}\leq \eps.$$ \item \label{secondstep} The same should hold for $(C,u)$, i.e., with high probability $$\abs{\frac{1}{k}\sum_{q\in Q} f(C,u,q) - \mathbb{E}_\mu (f(C,u,q))}\leq \eps.$$ \end{enumerate} Then, by Eq.~\eqref{ALG:guarantee}, we have that $\frac{1}{k}\sum_{q\in Q} f(C,u,q)$ approximates $\frac{1}{k}\sum_{q\in Q} f(P,w,q)$; hence, combining~\ref{firststep} and~\ref{secondstep} with Eq.~\eqref{ALG:guarantee} yields that $\mathbb{E}_\mu (f(C,u,q))$ approximates $\mathbb{E}_\mu (f(P,w,q))$. To show that~\ref{firststep} holds, we rely on Hoeffding's inequality as follows. \begin{claim}[Mean of Losses] \label{hofff} Let $(P,w,Q',f,\mu)$ be a measurable query space such that $\sum_{p\in P}w(p)=1$, and let $M=\sup_{q\in Q'}\abs{f(P,w,q)}$. Let $\eps \in (0,\infty)$ be an approximation error, and let $\delta \in (0,1)$ be a probability of failure. Let $Q$ be a sample of $k\geq \frac{2M^2\ln(2/\delta)}{\eps^2}$ queries from $Q'$, chosen i.i.d., where each $q\in Q'$ is sampled with probability $\mu(q)$. Then, with probability at least $1-\delta$, $$\abs{\frac{1}{k}\sum_{q\in Q} f(P,w,q) - \mathbb{E}_\mu (f(P,w,q))} \leq \eps .$$ \end{claim} This claim states that, with high probability, the average loss of the set $P$ over the i.i.d. sampled set $Q$ of $k$ queries approximates the expected loss of the set $P$ over all queries in $Q'$ (i.e., $\mathbb{E}_\mu(f(P,w,q))$). However, the size $k$ of $Q$ should be large enough, proportional to the approximation error $\eps$, the probability of failure $\delta$, and the maximum loss over every $q\in Q'$, i.e., $\sup_{q\in Q'}{f(P,w,q)}$. Now, recall that $\mathbb{P}$ is the ground set, i.e., $P,C \subset \mathbb{P}$, and $M=\sup_{q\in Q',p\in \mathbb{P}}\abs{f(p,q)}$. As we formally show in Section~\ref{ProofofClaim1}, since $\eps$ and $\delta$ are fixed, and since $\sup_{q\in Q'}{f(P,w,q)} = \sup_{q\in Q'}{\sum_{p\in P}w(p)f(p,q)} \leq \sum_{p\in P}w(p)M =M$, all that is needed for Claim~\ref{hofff} to hold is to sample enough queries (based on Hoeffding's inequality). To show that~\ref{secondstep} holds, we can also use Hoeffding's inequality, but additionally we need to bound $\sup_{q\in Q'}f(C,u,q)$.
This was the reason for adding the constraint on the sum of weights, $\abs{\sum_{p\in P}w(p) - \sum_{p\in C}u(p)}\leq \eps$, which yields $\sup_{q\in Q'}f(C,u,q)\leq (1+\eps)M$. Formally, \begin{claim}\label{main:theroem} Let $(P,w,Q',f,\mu)$ be a measurable query space over $\mathbb{P}$, where $\sum_{p\in P}w(p)=1$, and let $M=\sup_{q\in Q',p\in \mathbb{P}}\abs{f(p,q)}$. Let $\eps\in (0,\infty)$ be an approximation error, $\delta \in (0,1)$ be a probability of failure, and let $C_{size} \geq 1$ be an integer. Let $Q$ be a sample of $k \geq \frac{2((1+\eps)M)^2\ln(2/\delta)}{\eps^2}$ queries from $Q'$, chosen i.i.d., where each $q\in Q'$ is sampled with probability $\mu(q)$. Let $(C,u)$ be the output of a call to $\textsc{AutoCL}(P,w,Q, f,C_{size})$; see Algorithm~\ref{alg:main-new}. If \begin{enumerate} \item $\abs{\sum_{p\in P}w(p) - \sum_{p\in C}u(p)}\leq \eps$, and\label{assump1} \item $\abs{\frac{1}{k}\sum_{q\in Q} f(P,w,q) - \frac{1}{k}\sum_{q\in Q} f(C,u,q)} \leq \eps$, \label{assump} \end{enumerate} then, with probability at least $1-\delta$, $$\abs{\mathbb{E}_\mu (f(P,w,q)) - \mathbb{E}_\mu (f(C,u,q))} < 3\eps.$$ \end{claim} \begin{proof} See proof in Section~\ref{ProofofClaim2} in the appendix. \end{proof} \subsection{Bridging the Gap Between Theory and Practice} We take one more step towards deriving effective, practical coresets and replace the loss in Eq.~\eqref{alg:loss} (and Line~\ref{line:apperror-new} in Algorithm~\ref{alg:main-new}) with a formulation that is more similar to the standard coreset definition, namely, $\mathrm{loss}(q;C,u)=\abs{1 - \frac{ f(C,u,q)}{f(P,w,q)}} + \lambda \abs{\sum_{p\in P}w(p) - \sum_{p\in C} u(p)}$, and we minimize this loss on average over the training set of queries $Q$; see Algorithm~\ref{alg:main} in the appendix. A solution obtained by Algorithm~\ref{alg:main} aims to minimize the average approximation error over every query $q$ in the sampled set $Q$, and is thus very similar to Definition~\ref{def:strongCoreset}, with the average replacing the worst case. This enables us to obtain a better coreset in practice, one that approximates the loss of every query $q$ (as the minimization is over the average approximation error across all queries, and not only over the difference between the average losses of the coreset and the original data). Our empirical evaluation in Section~\ref{sec:exp} verifies that the coreset obtained by running Algorithm~\ref{alg:main} generalizes to unseen queries, i.e., the average approximation error of the coreset over all queries is small compared to other coreset construction algorithms. Moreover, we show below that the solution obtained by Algorithm~\ref{alg:main} satisfies Definition~\ref{def:aca}. Let $(C^*,u^*)$ be a solution that minimizes the average loss in Algorithm~\ref{alg:main}. We can find a constant $\eps'>0$ such that \begin{align} \frac{1}{k}\sum_{q\in Q} \abs{1-\frac{f(C^*,u^*,q)}{f(P,w,q)}}\leq \eps'. \label{ALG:guarantee_new} \end{align} For the constant $\eps$ from Definition~\ref{def:aca}, let $M=\sup_{q\in Q}|f(P,w,q)|$ and set $\eps = \eps'M$. By simple derivations (see Section~\ref{proofeq5} in the appendix) we can show that \begin{equation} \label{eq:M} \abs{\frac{1}{k}\sum_{q\in Q}f(P,w,q)-\frac{1}{k}\sum_{q\in Q} f(C^*,u^*,q)} \leq \eps. \end{equation}
Hence, by Claim~\ref{main:theroem}, the solution obtained by Algorithm~\ref{alg:main} generalizes to the whole measurable query space $(P,w,Q',f,\mu)$ and thus satisfies the definition of an $(\eps,\mu)$-coreset, while simultaneously satisfying Eq.~\eqref{ALG:guarantee_new}, which is closely related to the original definition of coresets in Definition~\ref{def:strongCoreset}. \section{Experimental Results}\label{sec:exp} We proposed a unified framework for coreset construction that allows us to use the same algorithm for different problems. We demonstrate this on examples of training-set reduction for linear and logistic regression in Section~\ref{sec:exp_data_reduction}, and on examples of model-size reduction (a.k.a. model compression) of an MLP and a CNN in Section~\ref{sec:exp_model_compr}. We show that in both cases our unified framework yields results comparable to, or even better than, previous coresets that were specifically fitted to the problem at hand.
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth,height=0.2\textwidth]{images3/linear_weak_new.png}
\includegraphics[width=0.48\textwidth,height=0.2\textwidth]{images3/linear_strong_mean_mean_with_arrow_new.png}
\caption{\small Linear regression: a -- Approximation error for the optimal solution as a function of the coreset's size; b -- average approximation error on the unseen test data as a function of the coreset's size.}
\label{fig:linearresst}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=0.3\textwidth,height=0.2\textwidth]{images3/logistic_weak.png}
\includegraphics[width=0.48\textwidth,height=0.2\textwidth]{images3/logistic_strong_mean_mean.png}
\caption{\small Logistic regression: a -- Approximation error for the optimal solution as a function of the coreset's size; b -- average approximation error on the unseen test data as a function of the coreset's size.}
\label{fig:logisticresst}
\end{figure*}
\subsection{Training Data Coresets}\label{sec:exp_data_reduction} We demonstrate the practical strength of our coreset construction scheme in the context of data reduction for linear and logistic regression. \subsubsection{Setup} For linear regression, we ran our experiments on the 3D Road Networks dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/3D+Road+Network+(North+Jutland,+Denmark)}} (North Jutland, Denmark)~\cite{kaul2013building}, which contains 434,874 records. We used two attributes, ``Longitude'' [Double] and ``Latitude'' [Double], to predict the third attribute, ``Height in meters'' [Double]. We created a set of queries by sampling models from training trajectories of linear regression computed using the full dataset from 20 random starting points. We split the sampled models into training, validation, and test sets of sizes 20,000 ($|Q|$=20,000), 2,000, and 2,000, respectively. We computed weighted coresets of different sizes, from 50 to 140. For each coreset size, we invoked Algorithm~\ref{alg:main} with the Adam optimizer~\cite{kingma2014adam} for 10 epochs with a batch size of 25 and a learning rate of 0.01. The results were averaged across $10$ trials. In these experiments we used $\lambda = 1$. For logistic regression, we performed the experiments on the HTRU2\footnote{\url{https://archive.ics.uci.edu/ml/datasets/HTRU2}} dataset, comprising 17,898 radio-emission records of pulsar candidates, each represented by 8 features and a binary label~\cite{lyon2016fifty}.
For logistic regression, we performed the experiments on the HTRU dataset\footnote{\url{https://archive.ics.uci.edu/ml/datasets/HTRU2}}, comprising 17,898 candidate radio emissions of pulsars, each represented by 8 features and a binary label~\cite{lyon2016fifty}. We created a set of queries similarly to the linear regression case, and we sampled from this set training, validation and test sets of sizes 8,000, 1,600 and 800, respectively. The results were averaged across $5$ trials. To make the optimization simpler, we removed the weight-fitting term from the loss in Algorithm~\ref{alg:main} and assumed that all members of the coreset have the same weight $1/|C|$. We ran the optimization for $1000$ epochs with a batch size of $100$, using the Adam optimizer with a learning rate of $0.001$. Using this modification, we computed coresets of different sizes ranging from 100 to 500. The differences in hyper-parameters and coreset sizes between the logistic and linear regression experiments are due to the higher complexity of the logistic regression problem. First, computing a coreset for logistic regression is known to be a hard problem, for which (high) lower bounds on the coreset size exist~\cite{munteanu2018coresets}. The second (and probably less significant) reason is the dimension of the input data, which is higher in the logistic regression experiment. \subsubsection{Results} We refer to a weighted labeled input dataset by $(P,w,b)$, where $P$ is the dataset, and $b:P\to \ensuremath{\mathbb{R}}$ and $w:P\to[0,\infty)$ are the labeling function and the weight function, respectively; i.e., each point $p$ in $P$ is a sample in the dataset, $b(p)$ is its corresponding label and $w(p)$ is its weight. Similarly, we refer to the compressed labeled data set (coreset) by $(C,u,y)$. We report the results using two measures, as explained below. \begin{enumerate} \item \textbf{Approximation error for the optimal solution.} Let $q^*$ be the query that minimizes the corresponding objective loss function, e.g., in linear regression: $q^*\in \argmina_{q\in \ensuremath{\mathbb{R}}^{d}} f(P,w,b,q)$, where $f(P,w,b,q) = \sum_{p\in P}w(p)({p^Tq-b(p)})^2$. For each coreset $(C,u,y)$, we compute $q^*_{c}\in \argmina_{q\in \ensuremath{\mathbb{R}}^{d}} f(C,u,y,q)$, and then calculate the approximation error for the optimal solution as $Err_{opt}=\abs{1-\frac{f(P,w,b,q^*_{c})}{f(P,w,b,q^*)}}.$ \item \textbf{Average approximation error.} For every coreset $(C,u,y)$, we report the average-case approximation error over every query $q$ in the test set $Q_{test}$, i.e., $Err_{avg}=\frac{1}{|Q_{test}|}\sum_{q\in Q_{test} }\abs{1-\frac{f(C,u,y,q)}{f(P,w,b,q)}}.$ \end{enumerate} We compare our coresets for linear regression with uniform sampling and with the coreset from~\cite{maalouf2020tight}; $Err_{opt}$ of the three methods is shown in Figure~\ref{fig:linearresst}(a) and $Err_{avg}$ in Figure~\ref{fig:linearresst}(b). We compare our coreset for logistic regression with uniform sampling and with the coreset from~\cite{tukan2020coresets}; $Err_{opt}$ of the compared methods is shown in Figure~\ref{fig:logisticresst}(a) and $Err_{avg}$ in Figure~\ref{fig:logisticresst}(b). In both experiments, we observe that our learned coresets outperform both uniform sampling and the theoretical counterparts. Our method yields a very low average approximation error because it was explicitly trained to derive a coreset that minimizes the average approximation error on the training set of queries, and the learned coreset succeeds in generalizing to unseen queries. \subsection{Model Coreset for Structured Pruning}\label{sec:exp_model_compr} The goal of model compression is to reduce the run time and memory requirements during inference, with little or no accuracy loss compared to the original model.
Structured pruning reduces the size of a large trained deep network by reducing the width of the layers (pruning neurons in fully-connected layers and filters in convolutional layers). An alternative approach is sparsification, which zeros out unimportant parameters in a deep network. The main drawback of sparsification is that it leads to an irregular network structure, which requires special treatment of sparse representations and makes it hard to achieve actual computational savings. Structured pruning simply reduces the size of the tensors, which allows running the resulting network without any amendment. Due to this advantage of structured pruning over sparsification, we perform structured pruning of deep networks in our experiments. We assume that the target small architecture is given, and our task is to compute the parameters of the small architecture that best approximate the original large network. We view filters in a CNN or neurons in a fully connected network as items in the full set $P$, and the training data as the query set $Q$. We use the small architecture to define the coreset size in each layer, and we learn an equally weighted coreset $C$ (the small network) using Algorithm~\ref{alg:main} with $\lambda=0$. We report the experiments for structured pruning of a fully connected network in Section~\ref{subsec:neuralpruning} and for channel pruning in Section~\ref{subsec:chanpruning}. \subsubsection{Neuron Pruning}\label{subsec:neuralpruning} \textbf{Setup.} We used the LeNet-$300$-$100$ model with 266,610 parameters trained on MNIST~\cite{lecun1998gradient} as our baseline fully-connected model. It comprises two fully connected hidden layers with $300$ and $100$ neurons, respectively, each followed by a ReLU activation. After training with the Adam optimizer for $40$ epochs and a batch size of $64$, the baseline model achieved a test accuracy of $97.93\%$ and a test loss of $0.0917$. The target small architecture included $30$ neurons in the first layer and $100$ in the second, resulting in a compression ratio of $89.63\%$. We applied the training procedure in Algorithm~\ref{alg:main} to learn the weights of this network, using the Adam optimizer with $L_2$ regularization for $400$ epochs and a batch size of $500$. \noindent\textbf{Results.} The coreset (compressed) model achieved $97.97\%$ accuracy and a loss of $0.0911$ on the test data, i.e., an improvement in both metrics. Next, we compare our results to a pair of other coreset-based compression methods in Table~\ref{table:comparison-lenet}, and to non-coreset methods: Filter Thresholding (FT)~\cite{li2016pruning}, SoftNet~\cite{he2018soft}, and ThiNet~\cite{luo2017thinet}, as implemented in~\cite{Liebenwein2020Provable}. We observe that the learned coreset performs better than most compared methods and comparably to the algorithm derived from the theoretical coreset framework. Note that previous coreset methods~\cite{MussayOBZF20,Liebenwein2020Provable} are designed for a single layer, while our algorithm does not have this limitation and can be applied to compress all layers of the network in a single run. Moreover, applied to DNN compression, our framework can work on individual weights (sparsification), neurons (as shown above) and channels (as we show next).
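The following PyTorch-style snippet sketches the model-coreset setup used in these pruning experiments on LeNet-$300$-$100$-like shapes: the training inputs play the role of queries, the small architecture is fixed in advance, and its parameters are optimized so that its per-query loss matches that of the large network, in the spirit of Algorithm~\ref{alg:main} with $\lambda=0$. The choice of per-sample cross-entropy as the query loss $f$, and all identifiers below, are illustrative assumptions, not our exact implementation.

\begin{verbatim}
import torch
import torch.nn as nn

# Large trained baseline (the "full set" P) and small target architecture
# (the coreset C); the teacher's weights are assumed pre-trained and frozen.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(784, 300), nn.ReLU(),
                        nn.Linear(300, 100), nn.ReLU(), nn.Linear(100, 10))
student = nn.Sequential(nn.Flatten(), nn.Linear(784, 30), nn.ReLU(),
                        nn.Linear(30, 100), nn.ReLU(), nn.Linear(100, 10))
teacher.eval()

per_sample_loss = nn.CrossEntropyLoss(reduction='none')
opt = torch.optim.Adam(student.parameters(), weight_decay=1e-4)  # L2 term

def step(x, labels):
    # One optimization step: minimize |1 - f(C, q)/f(P, q)| averaged over the
    # batch of queries q (here, training inputs); lambda = 0, so there is no
    # weight-sum term and the coreset is equally weighted.
    with torch.no_grad():
        f_P = per_sample_loss(teacher(x), labels)
    f_C = per_sample_loss(student(x), labels)
    loss = (1 - f_C / f_P).abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage: iterate over the training set (e.g., MNIST batches of size 500).
x = torch.randn(500, 1, 28, 28)
labels = torch.randint(0, 10, (500,))
step(x, labels)
\end{verbatim}

Note that, unlike layer-wise coreset pruning, a single such loop trains all layers of the small network at once.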
\subsubsection{Channel Pruning}\label{subsec:chanpruning} \textbf{Setup.} We used the PyTorch implementation of the VGG-19 network\footnote{\href{https://github.com/Eric-mingjie/network-slimming/blob/master/models/vgg.py}{VGG-code-link}} for CIFAR-10 from~\cite{liu2017learning}, with about 20M parameters, as our baseline CNN model (see Table~\ref{table:vggarch} for more details). The baseline accuracy and loss in our experiments were $93.25\%$ and $0.3387$, respectively. The target architecture\footnote{\url{https://github.com/foolwood/pytorch-slimming}} of the small network (see Table~\ref{table:vggarch}) corresponds to a 70\% compression ratio and a reduction in the number of parameters by roughly 88\%. We ran Algorithm~\ref{alg:main}, using the small architecture to define the size of each layer, for $180$ epochs with a batch size of $500$, using the Adam optimizer and $L_2$ regularization. \noindent\textbf{Results.} Our compressed model improved on the baseline network, achieving $93.51\%$ accuracy and a loss of $0.32$. Table~\ref{table:comparison-vgg} compares the small-network accuracy of the learned coreset with the channel pruning coreset from~\cite{Mussai20a} and several non-coreset methods. While the results are comparable, our algorithm is much simpler and is not tailored to the problem at hand. The coreset reported in~\cite{Mussai20a} was constructed by applying a channel pruning coreset in a layer-by-layer fashion, while our learned coreset is computed in one shot for the entire network. Finally, we remind the reader that our framework is generic and could be applied to many other problems beyond compressing DNNs. \begin{table}[!h] \begin{center} \begin{tabular}{| c || c | c |} \hline Layer & {Width (original)} & {Width (compressed)} \\ \hline \hline 1 & 64 & 49 \\ \hline 2 & 64 & 64 \\ \hline 3 & 128 & 128\\ \hline 4 & 128 & 128\\ \hline 5 & 256 & 256\\ \hline 6 & 256 & 254\\ \hline 7 & 256 & 234\\ \hline 8 & 256 & 198\\ \hline 9 & 512 & 114 \\ \hline 10 & 512 & 41\\ \hline 11 & 512 & 24\\ \hline 12 & 512 & 11\\ \hline 13 & 512 & 14\\ \hline 14 & 512 & 13\\ \hline 15 & 512 & 19\\ \hline 16 & 512 & 104\\ \hline \end{tabular} \end{center} \caption{VGG-19 original and compressed architectures.} \label{table:vggarch} \end{table} \begin{table}[!h] \centering \begin{adjustbox}{width=1\columnwidth} \begin{tabular}{|l|ccc|} \hline Pruning Method & Baseline & Small Model& Compression \\ & Error(\%)&Error(\%)&Ratio \\\hline \hline FT\cite{li2016pruning} & 1.59 & +0.35 & 81.68\%\\ \hline SoftNet~\cite{he2018soft}& 1.59& +0.41& 81.69\% \\ \hline ThiNet~\cite{luo2017thinet}&1.59& +10.58& 75.01\% \\ \hline Sample-based& & &\\ Coreset~\cite{Liebenwein2020Provable}& 1.59&+0.41&84.32\% \\ \hline Pruning& & & \\ via Coresets~\cite{Mussai20a} &2.16 &-0.13& $\sim 90$\%\\ \hline Learned Coreset (ours)& 2.07 &-0.04 &89.63\%\\ \hline \end{tabular} \end{adjustbox} \caption{\small Neural Pruning of LeNet-300-100 for MNIST. The results of FT, SoftNet, ThiNet and Sample-Based Coreset are reported in~\cite{Liebenwein2020Provable}.
`+' and `-' correspond to increase and decrease in error, respectively.} \label{table:comparison-lenet} \end{table} \begin{table}[!h] \centering \begin{adjustbox}{width=1\columnwidth} \begin{tabular}{|l|ccc|} \hline Pruning Method & Baseline & Small Model& Compression \\ & Error(\%)&Error(\%)&Ratio \\\hline \hline Unstructured Pruning~\cite{han2015learning}& 6.5 & -0.02 & 80\%\\ \hline Structured Pruning~\cite{liu2017learning}& 6.33 & -0.13 & 70\% \\ \hline Pruning via Coresets~\cite{Mussai20a} &6.33 &-0.29 & 70\%\\ \hline Learned Coreset (ours) & 6.75&-0.26&70\%\\ \hline \end{tabular} \end{adjustbox} \caption{\small Channel Pruning of VGG-19 for CIFAR-10.} \label{table:comparison-vgg} \end{table} \section{Conclusions} We proposed a novel unified framework for coreset learning that is theoretically motivated and can address problems for which obtaining theoretical worst-case guarantees is impossible. Following this framework, we suggested a relaxation of the coreset definition from the worst-case to the average loss approximation. We proposed a learning algorithm that takes as input a sample set of queries and a loss function associated with the problem at hand, and outputs an average-loss coreset that is valid for the training set of queries and generalizes to unseen queries. We showed that if the sample set of queries is sufficiently large, then the average loss over the coreset closely approximates the average loss over the full set for the entire query space. We then showed empirically that our learned coresets generalize to unseen queries even for arbitrary sample sizes. Our experiments demonstrated that coresets learned by our new approach yield comparable or even better approximations of the optimal-solution loss and of the average loss over unseen queries than coresets with worst-case guarantees. Moreover, applied to the problem of deep network pruning, our method provides the first full-network coreset, with excellent performance. In future work, we will try to reduce the sampling bound and apply the proposed framework to derive new coresets. \bibliographystyle{plain}
\section{The minimal twist of eleven-dimensional supergravity} \label{s:dfn} In this section we define the central theory of study within the Batalin--Vilkovisky (BV) formalism. The theory will be defined on any eleven-dimensional manifold of the form $X \times L$, where $X$ is a Calabi--Yau five-fold and $L$ is a smooth oriented one-manifold. In \cite{SWspinor}, we showed that the free limit of the theory we consider here agrees with the free limit of the minimal twist of eleven-dimensional supergravity on $\CC^5 \times \RR$. The goal of this section is to introduce interactions at the level of the minimal twist. The main result is Theorem~\ref{thm:dfn}, where we show that the theory is consistent within the BV formalism. In the remainder of the paper we discuss further consistency with supersymmetry and string theory, but in this section we focus mostly on the theory as a partially holomorphic, partially topological, theory of gravity. Nevertheless, we do provide some preliminary justification for the relationship to physical supergravity, including a matching of indices in \S\ref{sec:locchar}. \subsection{Divergence-free vector fields} \subsubsection{} \label{sec:divfree} We set up some notations and conventions in the context of complex geometry. Let $V$ be a holomorphic vector bundle on a $d$-dimensional complex manifold $X$. If $j$ is an integer, we let $\Omega^{0,j}(X, V)$ denote the space of anti-holomorphic Dolbeault forms of type $(0,j)$ on~$X$ with values in $V$. The $\dbar$ operator $\dbar \colon \Omega^{0,j}(X, V)\to \Omega^{0,j+1}(X,V)$ defines the Dolbeault complex of $V$: \[ \Omega^{0,\bu}(X, V) = \left(\bigoplus_j \Omega^{0,j}(X, V)[-j] , \; \dbar\right) . \] This is a free (smooth) resolution of the sheaf of holomorphic sections of $V$. Suppose $X$ is a Calabi--Yau manifold with holomorphic volume form $\Omega$. The divergence $\div(\mu)$ of a holomorphic vector field $\mu$ is defined by the formula \[ \div (\mu) \, \Omega = L_\mu (\Omega) , \] where, on the right-hand side, we mean the Lie derivative of $\Omega$ with respect to $\mu$. For example, on $\CC^d$ with $\Omega = \d z_1 \wedge \cdots \wedge \d z_d$, one has $\div\left(\sum_i \mu^i \partial_{z_i}\right) = \sum_i \partial \mu^i / \partial z_i$, the usual holomorphic divergence. Let $\T_X$ denote the holomorphic tangent bundle and consider its Dolbeault complex $\Omega^{0,\bu}(X , \T_X)$ resolving the sheaf of holomorphic vector fields. The divergence operator extends to the Dolbeault complex to yield a map of cochain complexes \[ \div \colon \Omega^{0,\bu}(X , \T_X) \to \Omega^{0,\bu}(X) . \] The resulting complex of sheaves \beqn\label{eqn:cplx1} \begin{tikzcd} \ul{0} & \ul{1} \\ \Omega^{0,\bu}(X , \T_X) \ar[r, "\div"] & \Omega^{0,\bu}(X) , \end{tikzcd} \eeqn resolves the sheaf of holomorphic divergence-free vector fields $\Vect_0 (X)$. The anti-holomorphic Dolbeault degrees and the $\dbar$ operator are left implicit. There is a direct way to extend the Lie bracket of vector fields to the complex \eqref{eqn:cplx1}. Denote by $\mu$ an element of $\Omega^{0,\bu}(X , \T_X)$ and by $\nu$ an element of $\Omega^{0,\bu}(X)$ (for simplicity of notation, we will not expand the anti-holomorphic dependence). The Lie bracket defined by the formulas \begin{align*} [\mu, \mu'] & = L_\mu \mu' \\ [\mu, \nu] & = L_\mu \nu \end{align*} is compatible with $\div$ and endows \eqref{eqn:cplx1} with the structure of a sheaf of dg Lie algebras. We will refer to this sheaf by the symbol $\cL_0(X)$, or just $\cL_0$ if $X$ is understood. The sheaf $\cL_0$ has the structure of a {\em local} dg Lie algebra~\cite[\S 3.1.3]{CG2}.
This means that, as a graded sheaf, $\cL_0$ is given by the smooth sections of a graded vector bundle, and its differential and Lie bracket are given by differential and bidifferential operators, respectively. \parsec[sec:Linfty] Recall that an $L_\infty$ algebra is a $\ZZ$-graded vector space $\cL$ together with the datum of a square-zero, degree $+1$ derivation $\delta_\cL$ of the free commutative graded algebra $\Sym\left(\cL^\vee [-1] \right)$. The Chevalley--Eilenberg cochain complex is \[ \left(\Sym\left(\cL^\vee [-1] \right), \delta_\cL\right) . \] The Taylor components of $\delta_\cL$ define higher brackets $\{[-]_k\}_{k=1,2,\ldots}$ where $[-]_k \colon \cL^{\otimes k} \to \cL[2-k]$. The condition that the differential $\delta_\cL$ is square-zero is equivalent to the higher Jacobi relations. An $L_\infty$ morphism $\Phi: \cL \rightsquigarrow \cL'$ is the same datum as a map of commutative dg algebras \deq{ \Phi^*: \clie^\bu(\cL') \to \clie^\bu(\cL) } between their respective Lie algebra cochains. It follows from this that \emph{any} automorphism $\Phi$ of the free commutative algebra on $\cL^\vee[-1]$ defines a new model of the $L_\infty$ algebra $\cL$, for which the Chevalley--Eilenberg differential is obtained by conjugating $\delta_\cL$ by~$\Phi$, and where $\Phi$ itself defines the $L_\infty$ isomorphism. Recall that $\cL_0$ is the sheaf~\eqref{eqn:cplx1} resolving divergence-free vector fields, equipped with the dg Lie algebra structure constructed in the previous section. We consider the following automorphism of~$\Sym(\cL_0^\vee[-1])$, defined by its action on generators: \deq[eq:newbase]{ \Psi_\infty: \nu \mapsto 1 - e^{-\nu}, \quad \mu \mapsto e^{-\nu} \mu. } This map defines a new model of the $L_\infty$ algebra, with the same underlying graded vector space as \eqref{eqn:cplx1}, which we will call $\cL_\infty$.\footnote{We are being slightly abusive and using the symbols $\nu,\mu$ dually as coordinates, or operators, on the graded linear space $\cL[1]$.} The formulas for the automorphism above clearly arise from maps of vector bundles and hence endow $\cL_\infty$ with the structure of a local $L_\infty$ algebra, meaning all operations are given by polydifferential operators. The notation refers to the fact that this new model has nonvanishing $L_\infty$ brackets of every order. It is this new model that we will use to define the eleven-dimensional theory of twisted supergravity. We can describe the $L_\infty$ structure on our new model $\cL_\infty$ explicitly. Recall that we have two types of elements: $\mu \in \PV^{1,\bu}$ and $\nu \in \PV^{0,\bu}[-1]$. (Here, and in what follows, we will use the symbol $\PV^{i,\bu}$ for the Dolbeault resolution of \emph{holomorphic polyvector fields;} by definition, this is the complex $\Omega^{0,\bu}(X,\wedge^i \T_X)$.) The first few nonzero brackets are \begin{align*} [\mu]_1 & = \dbar \mu + \div \mu \\ [\mu_1,\mu_2]_2 & = \div (\mu_1 \wedge \mu_2) \\ [\nu, \mu_1,\mu_2]_3 & = \div(\nu \mu_1 \wedge \mu_2) . \end{align*} For $k \geq 2$, the general formulas for the $k$-ary brackets are \begin{align*} [\nu_1, \ldots, \nu_{k-2}, \mu_1,\mu_2]_{k} & = \div(\nu_1 \cdots \nu_{k-2}\, \mu_1 \wedge \mu_2) \\ [\nu_1,\ldots, \nu_{k-3}, \mu_1,\mu_2,\gamma]_k & = \nu_1 \cdots \nu_{k-3} (\mu_1 \wedge \mu_2) \vee \del \gamma \\ [\nu_1,\ldots,\nu_{k-2}, \mu, \gamma]_k & = \nu_1 \cdots \nu_{k-2}\, \mu \vee \del \gamma .
\end{align*} \subsection{Theories of BF type} \parsec Suppose that $\cL$ is an $L_\infty$ algebra with $L_\infty$ operations $\{[-]^\cL_k\}_{k=1,2,\ldots}$ and that $(\cA, \d_\cA)$ is a commutative dg algebra. The graded vector space $\cL \otimes \cA$ is equipped with the natural structure of an $L_\infty$ algebra with operations $\{[-]_k\}_{k=1,2,\ldots}$ defined by \begin{align*} [x \otimes a]_1 & = [x]^\cL_1 \otimes a + (-1)^{|x|} x \otimes \d_\cA a \\ [x_1 \otimes a_1, \ldots , x_k \otimes a_k]_k & = [x_1,\ldots,x_k]^\cL_k \otimes (a_1 \cdots a_k), \qquad k \geq 2 . \end{align*} We apply this construction, taking $\cL$ to be the sheaf resolving divergence-free holomorphic vector fields on a Calabi--Yau manifold $X$, equipped with either the strict dg Lie algebra structure $\cL_0(X)$ or its non-strict $L_\infty$ structure $\cL_\infty (X)$. The algebra $\cA$ will be the smooth de Rham complex $(\Omega^\bu(S) , \d_S)$, where $S$ is a smooth manifold. We thus obtain the structure of a dg Lie algebra on $\cL_0(X) \otimes \Omega^{\bu}(S)$ and of an $L_\infty$ algebra on $\cL_\infty(X) \otimes \Omega^\bu(S)$. These define equivalent local $L_\infty$ algebras on the product manifold~$X \times S$. \parsec[s:bf] Associated to any local $L_\infty$ algebra is a classical field theory in the BV formalism. Let $\cL$ be a local $L_\infty$ algebra on some manifold $M$; it is the sheaf of sections of some graded vector bundle $L$. For a section $A \in \cL$, introduce the `higher curvature map' defined by the formula \[ \mathsf{F}_A = [A]_1 + \frac12 [A,A]_2 + \frac{1}{3!} [A,A,A]_3 + \cdots . \] The fields of the associated BV theory are pairs \[ (A, B) \in \cL[1] \oplus \cL^{!}[-2] . \] Here $\cL^!$ denotes the sheaf of sections of the bundle $L^* \otimes {\rm Dens}$, where ${\rm Dens}$ is the bundle of densities. The shifted symplectic BV pairing is the obvious integration pairing between $A$ and $B$. The action functional reads $S_{\rm BF} = \int_M B \, \mathsf{F}_{A}$, which leads to the equations of motion $\mathsf{F}_{A} = 0$ and $\mathsf{D}_A B= 0$, where $\mathsf{D}_A$ is the higher covariant derivative along $A$. We refer to this as the ``BF theory'' associated to $\cL$. We thus obtain a theory in the BV formalism on the product manifold $X \times S$ associated to both local $L_\infty$ algebras $\cL_0(X) \otimes \Omega^{\bu}(S)$ and $\cL_\infty(X) \otimes \Omega^\bu(S)$. \parsec For concreteness, we spell out the fields of the theories we have constructed on $X \times S$. In both cases, the space of fields equipped with the linear BRST operator is \begin{equation} \label{eq:sympfields} \begin{tikzcd}[row sep = 1 ex] -n & -n + 1 & -1 & 0 \\ \hline \Omega^{0}(X;S) \ar[r, "\del"] & \Omega^{1}(X;S) & \PV^{1}(X; S) \ar[r, "\div"] & \PV^{0}(X; S). \end{tikzcd} \end{equation} We denote the fields $(\beta,\gamma,\mu,\nu)$ respectively. We are using the shorthand notation \begin{align*} \Omega^{i}(X;S) & = \Omega^{i , \bu;\bu}(X;S) \\ & = \oplus_{j,k} \Omega^{i,j}(X) \otimes \Omega^k(S) [-j-k] , \end{align*} which is equipped with the $\dbar + \d_S$ operator; similarly for $\PV^{i}(X;S)$. The natural pairing between $\PV^i(X;S)$ and~$\Omega^i(X;S)$ is of degree $-\dim_\C(X) -\dim_\R(S)$. As such, the $\Z$-grading indicated in~\eqref{eq:sympfields} equips the sheaf of fields with a degree $(-1)$ pairing, provided that we choose the shift to be given by \deq{ n = \dim_\C(X) + \dim_\R(S) - 1.
} The pairing is defined by the formula \[ \int^\Omega_{X \times S} \mu \vee \gamma + \int^\Omega_{X \times S} \nu \beta , \] where $\int^\Omega_{X \times S} \alpha = \int_{X \times S} \alpha \wedge \Omega$. We have constructed two equivalent descriptions of the BF theory which share the linear BRST complex \eqref{eq:sympfields}. Explicitly, the action functional for the BF theory associated to the local dg Lie algebra $\cL_0(X) \otimes \Omega^{\bu}(S)$ is \deq{ S_{BF,0} = \int^\Omega \bigg[\beta \wedge (\dbar + \d_S) \nu + \gamma \wedge (\dbar + \d_S) \mu + \beta \wedge \partial_\Omega \mu + \frac{1}{2} [\mu,\mu] \vee \gamma + [\mu,\nu] \beta \bigg] . } As in the Lie algebra structure of this strict model, notice that the Schouten--Nijenhuis bracket appears explicitly. The action functional of the BF theory associated to $\cL_\infty(X) \otimes \Omega^{\bu}(S)$ is non-polynomial. In fact, it is related to the BCOV action functional via dimensional reduction (see \S \ref{sec:dimred}). Explicitly, this action functional is \deq{ S_{BF,\infty} = \int^\Omega \bigg[ \beta \wedge (\dbar + \d_S) \nu + \gamma \wedge (\dbar + \d_S) \mu + \beta \wedge \partial_\Omega \mu + \frac12 \frac{1}{1-\nu} \mu^2 \vee \del \gamma \bigg] . } We demonstrated above that the two local $L_\infty$ algebras on which these BF theories are based are equivalent. As such, the BF theories are also equivalent; the map~\eqref{eq:newbase} extends uniquely to an automorphism of BV theories. Explicitly, the automorphism is \begin{equation}\label{eqn:auto1} \mu \mapsto e^{-\nu} \mu, \qquad \nu \mapsto 1-e^{-\nu} , \qquad \beta \mapsto (\beta - \mu \vee \gamma) e^{\nu},\qquad \gamma \mapsto e^{\nu} \gamma . \end{equation} \parsec[] In what follows, we specialize to the case where $X$ is a Calabi--Yau five-fold and $S$ is a one-dimensional smooth orientable manifold. In this case, with $n = 5 + 1 - 1 = 5$, the theories described in this section are $\ZZ$-graded in the BV formalism. Momentarily, we consider a new term in the action which will break this grading; as such, this integer shift will not play an essential role. \subsection{A deformation of BF theory} Let $X$ be a Calabi--Yau five-fold and $S$ be a smooth oriented one-dimensional real manifold. We will break the $\ZZ$-grading present in the BF theory discussed in the previous section to a $\ZZ/2$ grading. For reference, this means that the linear cochain complex of fields of the model now takes the following form: \begin{equation} \begin{tikzcd}[row sep = 1 ex] {\rm odd} & {\rm even} & {\rm odd} & {\rm even} \\ \hline \Omega^{0}(X;S)_\beta \ar[r, "\del"] & \Omega^{1}(X;S)_\gamma & \PV^{1}(X; S)_\mu \ar[r, "\div"] & \PV^{0}(X; S)_\nu. \end{tikzcd} \end{equation} \parsec To define our classical field theory on $X \times S$, we consider a deformation of the BF theory $S_{BF}$ (this refers to either the presentation $S_{BF,0}$ or $S_{BF,\infty}$). Such deformations are governed by the classical master equation: the parameterized family of actions \beqn\label{eqn:defaction} S_{BF} + g J \eeqn defines a consistent theory in the BV formalism if and only if \deq{ \{S_{BF} + g J, S_{BF} + g J \} = 0. } Since this must hold for all $g$, and since the undeformed action $S_{BF}$ is already a solution to the classical master equation, this reduces to the pair of conditions \deq[eq:2cond]{ \{S_{BF},J\} = \{J,J\} = 0. } The form of $J$ depends on which presentation we use for BF theory.
To begin, we will use the presentation $S_{BF, \infty}$ of BF theory, which uses the non-strict $L_\infty$ structure on divergence-free holomorphic vector fields. The deformation $J$ does not make reference to the Calabi--Yau structure explicitly, but it does involve the holomorphic de Rham operator $\del$ on $X$. The main result of this section is the following. \begin{thm} \label{thm:dfn} Let $X$ be a Calabi--Yau five-fold and $S$ a smooth one-dimensional manifold, and consider the BV theory $(\cE, S_{BF,\infty})$ on $X \times S$ defined above. The local functional \deq{ J = \frac16 \gamma \wedge \del \gamma \wedge \del \gamma , } where $\gamma \in \Omega^{1,\bu}(X;S)$, defines a deformation of~$(\cE,S_{BF,\infty})$ as a $\Z/2$-graded BV theory. \end{thm} \parsec[] Before proceeding to the proof, we remark on grading issues. In the original $\Z$-grading on the BF theory given in \eqref{eq:sympfields} with $n=5$, the component \[ \gamma^{1,i;j} \in \Omega^{1,i}(X) \otimes \Omega^j(S) \] sits in degree $-4+i+j$. Thus, we see that in the original $\Z$-grading on BF theory one has \deq{ \deg(J) = 6. } Thus $S_{BF} + J$ is not of homogeneous $\ZZ$-degree (although it is even). This is completely reasonable from the point of view of twisting supersymmetry in eleven dimensions. Indeed, the $R$-symmetry group is trivial, and there is no way to regrade the fields of the twisted theory using twisting data~\cite{CosHol,ESW}. Nevertheless, if we break to the obvious $\ZZ/2$ grading, the functional $S_{BF} + g J$ defines an even action functional. Unless otherwise stated, we will work with this $\ZZ/2$ grading for the remainder of this section. \parsec[] We proceed to show that $S_{BF,\infty} + g J$ solves the classical master equation. For notational simplicity we will omit the integral symbol $\int^\Omega$. \begin{proof} It is immediate from the form of the BV bracket that $\{J,J\} = 0$, since $J$ depends only on the $\gamma$ field. It remains to check that $\{S_{BF,\infty},J\} = 0$. For the quadratic term in the BF action, we note that \deq{ \{\beta \wedge \div\mu, J\} = \frac12 \del\beta \wedge \del \gamma \wedge \del \gamma = 0, } because total derivatives are equivalent to zero as local functionals. The contribution from the remaining BF action takes the form \[ \left\{ \frac12 \frac{1}{1-\nu} \del\gamma \vee \mu^2, \frac16 \gamma \wedge \del\gamma \wedge \del \gamma \right\} =\frac12 (\mu \vee \del \gamma) \wedge \del \gamma \wedge \del \gamma . \] This expression is zero for symmetry reasons. Recall that $\del\gamma$ is a two-form, and that the expression must be a totally symmetric local functional which is cubic in this two-form. We can ask whether such a contraction exists just at the level of $\lie{sl}(5)$ representation theory. Let $\ydiagram{1}$ denote the fundamental representation of~$\lie{sl}(5)$, which we identify with constant one-forms. Since the term must be a scalar, the cube $(\del\gamma)^3$ must contain a copy of the fundamental representation, since it is contracted with a vector field. Computing the decomposition of the symmetric cube of the two-form, we find \deq{ \Sym^3 \left( \ydiagram{1,1} \right) \cong \ydiagram{3,3} \oplus \ydiagram{2,2,1,1}\,. } (In fact, the absence of the relevant irreducible representation does not even depend on the parity of the field $\gamma$, since \deq{ \wedge^3\left( \ydiagram{1,1} \right) \cong \ydiagram{3,1,1,1} \oplus \ydiagram{2,2,2}\, ; } the fundamental representation has symmetry type $\ydiagram{2,1}$.)
\end{proof} \parsec[s:coupling] We make note of the dependence on the coupling constant $g$ in the definition of the deformed action $S_{BF,\infty} + g J$. When $g = 0$ we recover BF theory for the $L_\infty$ algebra $\cL_\infty(\CC^5) \otimes \Omega^\bu(\RR)$. For any $g \ne 0$ the theories are essentially equivalent in perturbation theory. Indeed, if $g \ne 0$ we can make the following field redefinition \[ \gamma \mapsto \sqrt{g} \gamma, \quad \beta \mapsto \sqrt{g} \beta \] to write the action as \[ \frac{1}{\sqrt{g}} \left(S_{BF,\infty} + J \right) . \] In perturbation theory, this has the effect of modifying the quantization parameter $\hbar$ to $\hbar / \sqrt{g}$. Thus, after modifying $\hbar$ and making the above field redefinition, the perturbative expansion of any such theory is equivalent to the one with $g = 1$. \parsec[s:altdfn] We remark on an alternative, equivalent, description of the deformed theory which involves the strict dg Lie algebra structure on divergence-free holomorphic vector fields. We can replace $S_{BF,\infty}$ by $S_{BF,0}$ by applying the field automorphism \eqref{eqn:auto1}. Doing this, we see that $J$ becomes \[ \til{J} = \frac16 e^\nu \gamma \wedge \del (e^\nu \gamma) \wedge \del(e^\nu \gamma) . \] Since this automorphism preserves the odd BV bracket, the actions $S_{BF,\infty} + g J$ and $S_{BF, 0} + g \til{J}$ are both solutions to the classical master equation, and are equivalent as~$\ZZ/2$-graded BV theories. \subsection{Equations of motion of the component fields} \label{s:components} Soon, we will provide a series of justifications for the assertion that the deformed theory $S_{BF, \infty} + g J$ is the minimal twist of eleven-dimensional supergravity on flat space $X \times S = \CC^5 \times \RR$, where $\CC^5$ is equipped with its flat Calabi--Yau form. For the moment, we briefly read off the equations of motion of the general theory on $X \times S$. Let $\Omega$ denote the Calabi--Yau form on $X$. We consider the action $S_{BF, \infty} + gJ$. The equation of motion obtained by varying $\beta$ is especially simple---in fact linear---since $\beta$ only appears in the action via a quadratic term. It is \beqn\label{eqn:eombeta} \dbar \nu + \d_S \nu + \div \mu = 0 . \eeqn Varying $\gamma$, we obtain the equation of motion \beqn\label{eqn:eomgamma} \dbar \mu + \d_S \mu + \frac12 \frac{1}{1-\nu} \div (\mu^2) + \frac12 (\del \gamma \wedge \del \gamma) \vee (g \Omega^{-1}) = 0 . \eeqn The last term represents the contraction of an element of $\Omega^{4,\bu}(X;S)$ with the nonvanishing section $\Omega^{-1} \in \PV^{5,\bu}(X;S)$ to yield an element of $\PV^{1,\bu}(X;S)$. If we vary $\mu$, we obtain \beqn\label{eqn:eommu} (\dbar + \d_S) \gamma + \del \beta + \frac{1}{1-\nu} (\mu \vee \del \gamma) = 0 . \eeqn Finally, if we vary $\nu$, we obtain \beqn\label{eqn:eomnu} (\dbar + \d_S) \beta + \frac12 \frac{1}{(1-\nu)^2} \mu^2 \vee \del \gamma = 0 . \eeqn The equations of motion must hold for arbitrary inhomogeneous superfields. We can get a better sense of the equations if we expand in components of these fields. The component fields of the eleven-dimensional theory on $X \times S$ have the following form: \begin{itemize} \item $\mu = \sum_{i,j} \mu^{i;j}$ is a superfield where \[ \mu^{i;j} \in \PV^{1,i}(X) \otimes \Omega^j(\RR) ,\quad i=0,\ldots, 5, \quad j=0,1. \] The component $\mu^{i;j}$ has parity $i+j+1 \pmod 2$. \item $\nu = \sum_{i,j} \nu^{i;j}$ is a superfield where \[ \nu^{i;j} \in \PV^{0,i}(X) \otimes \Omega^j(\RR) ,\quad i=0,\ldots, 5, \quad j=0,1.
\] The component $\nu^{i;j}$ has parity $i+j \pmod 2$. \item $\gamma = \sum_{i,j} \gamma^{i;j}$ is a superfield where \[ \gamma^{i;j} \in \Omega^{1,i}(X) \otimes \Omega^j(\RR) ,\quad i=0,\ldots, 5, \quad j=0,1. \] The component $\gamma^{i;j}$ has parity $i+j\pmod 2$. \item $\beta = \sum_{i,j} \beta^{i;j}$ is a superfield where \[ \beta^{i;j} \in \Omega^{0,i}(X) \otimes \Omega^j(\RR) ,\quad i=0,\ldots, 5, \quad j=0,1. \] The component $\beta^{i;j}$ has parity $i+j+1 \pmod 2$. \end{itemize} We look closely at the geometric meaning of \eqref{eqn:eombeta}. Let us make the simplifying assumption that all components of $\mu$ are divergence-free, and further that all fields are locally constant along $S$: that is, $\div \mu = 0$ and $\d_S \mu = \d_S \gamma = 0$. Then $\nu = 0$ is a solution to \eqref{eqn:eombeta}, and we can assume that all fields are functions, or zero-forms, along $S$. Then, there is a component of \eqref{eqn:eomgamma} which can be written as \beqn\label{eqn:eomgamma1} \dbar \mu^{1;0} + \frac12 [\mu^{1;0},\mu^{1;0}] + \left(\frac12 \del \gamma^{1;0} \wedge \del \gamma^{1;0} + \del \gamma^{2;0} \wedge \del \gamma^{0;0}\right) \vee (g \Omega^{-1}) = 0 , \eeqn where now $[-,-]$ stands for the Schouten bracket. To further simplify \eqref{eqn:eomgamma1}, we can look for solutions where $\gamma^{1;0}$, the component of $\gamma$ of Hodge type $(1,1)$ along $X$, vanishes. Then, up to the term involving \[ \alpha \define \del \gamma^{0;0}, \] we find precisely the integrability equation for the complex structure determined by the Beltrami differential $\mu^{1;0} \in \PV^{1,1} \otimes \Omega^0$. If $\dbar \alpha = 0$, the holomorphic two-form $\alpha \in \Omega^{2,hol}(X)$ defines a map of sheaves \[ \Omega^{2,hol}_X \xto{\wedge \alpha} \Omega^{4,hol}_X \cong_\Omega \cT^{hol}_X , \] where $\cT^{hol}_X$ denotes the sheaf of holomorphic vector fields and the last isomorphism uses the Calabi--Yau form $\Omega$ on $X$. The image of $\Omega^{2,hol}_X$ defines a subsheaf $\cF_{\alpha} \subset \cT^{hol}_X$. Since $\del \alpha = 0$, this subsheaf is automatically integrable and hence determines a foliation. Summarizing, we see that there is a field configuration where the Beltrami differential $\xi = \mu^{1;0} \in \Omega^{0,1}(X, \T_X)$ satisfies the modified integrability condition \[ \dbar \xi + \frac12 [\xi , \xi] = \alpha \vee \rho \] for some $\rho \in \PV^{2,2} (X)$. In other words, $\xi$ defines an integrable complex structure deformation along the leaf space associated to the foliation $\cF_\alpha$. We leave a more complete exploration of the moduli space of solutions of the equations of motion for future work. In~\cite{SWspinor}, the second two authors showed that the free limit of the minimal twist of eleven-dimensional supergravity agrees with the free limit of the eleven-dimensional theory that we have introduced here. Given this result, we can recognize many fields in the twisted theory as components of the physical fields of supergravity which remain after we twist. \begin{itemize} \item The components \begin{align*} \mu^{1;0} & = \mu^j_i(z,\zbar,t) \d \zbar_j \partial_{z_i} \\ \mu^{0;1} & = \mu^t_i (z,\zbar,t) \d t \partial_{z_i} \end{align*} of $\mu$ comprise the components of the metric which remain after the twist. The components \[ \mu^{0;0} = \mu_i (z,\zbar,t) \partial_{z_i} \] comprise the ghosts for infinitesimal (holomorphic) diffeomorphisms.
\item The three-form fields \begin{multline} \beta^{3;0} = \beta^{ijk} (z,\zbar,t) \d \zbar_i \d \zbar_j \d \zbar_k , \quad \beta^{2;1} = \beta^{ij}_t (z,\zbar,t) \d \zbar_i \d \zbar_j \d t \\ \gamma^{2;0} = \gamma^{ijk} (z,\zbar,t) \d z_i \d \zbar_j \d \zbar_k , \quad \gamma^{1;1} = \gamma^{ij}_t (z,\zbar,t) \d z_i \d \zbar_j \d t \end{multline} comprise the components of the supergravity $C$-field which remain after the twist. The two-form fields $\beta^{2;0}, \beta^{1;1}, \gamma^{1;0}, \gamma^{0;1}$, the one-form fields $\beta^{1;0}, \beta^{0;1}$, and the zero-form field $\beta^{0;0}$ comprise what remains of the ghost system (ghosts, ghosts for ghosts, etc.) for the supergravity $C$-field. \end{itemize} \subsection{Local character}\label{sec:locchar} We consider the eleven-dimensional theory on the manifold $\CC^5 \times \RR$, where $\CC^5$ is equipped with its standard Calabi--Yau structure. On this background, the theory is manifestly $SU(5)$-invariant. In this section, we compute the corresponding character of the local operators at the origin. The local character is only sensitive to the free limit of the theory. Furthermore, the linear BRST operator is an $SU(5)$-invariant deformation of the $(\dbar + \d_{\RR})$ operator. Therefore, to compute the character it suffices to compute the $SU(5)$-equivariant character of the $\dbar$ cohomology. The solutions to the $(\dbar + \d_{\RR})$-equations of motion simply say that all fields are holomorphic along $\CC^5$ and constant along $\RR$. Thus, the solutions can be identified with \begin{align*} \mu^{i}\partial_{z_i} & \in \Vect(\CC^5) \cong \bigoplus_i \cO(\C^5)\partial_{z_i},\quad \nu \in \cO (\C^5) \\ \beta & \in \cO (\C^5), \quad \gamma^{i} \d z_i \in \Omega^{1}(\CC^5) \cong \bigoplus_i \cO (\C^5)\d z_i , \end{align*} where $z_1,\ldots,z_5$ are holomorphic coordinates on $\CC^5$. Corresponding to each of the above, we have a tower of linear local operators labeled by $(m_j) = (m_1, m_2, m_3, m_4, m_5)\in \Z^5_{\geq 0}$; these are given by \begin{align*} \boldsymbol{\mu}^{i}_{(m_j)} &: \mu^{i}\mapsto \partial_{z_1}^{m_1}\partial_{z_2}^{m_2}\partial_{z_3}^{m_3}\partial_{z_4}^{m_4}\partial_{z_5}^{m_5}\mu^{i} (0) \\ \boldsymbol{\nu}_{(m_j)} &: \nu\mapsto \partial_{z_1}^{m_1}\partial_{z_2}^{m_2}\partial_{z_3}^{m_3}\partial_{z_4}^{m_4}\partial_{z_5}^{m_5}\nu (0) \\ \boldsymbol{\gamma}^{i}_{(m_j)} &: \gamma^{i}\mapsto \partial_{z_1}^{m_1}\partial_{z_2}^{m_2}\partial_{z_3}^{m_3}\partial_{z_4}^{m_4}\partial_{z_5}^{m_5}\gamma^{i} (0) \\ \boldsymbol{\beta}_{(m_j)} &: \beta\mapsto \partial_{z_1}^{m_1}\partial_{z_2}^{m_2}\partial_{z_3}^{m_3}\partial_{z_4}^{m_4}\partial_{z_5}^{m_5}\beta (0) \end{align*}
It is easiest to label the Cartan subgroup of $SU(5)$ by fugacities $q_1,\ldots, q_5$ subject to the constraint that $\prod_{i=1}^5 q_i = 1$. We first compute the single particle index. This is the $SU(5)$ character of the space of linear local operators. \begin{lem} The single particle index is \[ i(q_1,\ldots,q_5) = \frac{\sum_{i=1}^5 q_i}{\prod_{i=1}^5 (1-q_i)} + \frac{\sum_{i=1}^5 q_i^{-1}}{\prod_{i=1}^5 (1-q_i^{-1})} , \] where the fugacities satisfy the constraint $\prod_{i=1}^5 q_i = 1$. \end{lem} \begin{proof} The linear local operators $ \boldsymbol{\nu}_{(m_j)}$ and $\boldsymbol{\beta}_{(m_j)}$ are of the same $q$-weight but opposite parity. Thus, they do not contribute to the single particle index. The $q$-weight of the odd local operator $\boldsymbol{\mu}_{(m_j)}^i$ is \[ q_1^{m_1+1} \cdots q_i^{m_i} \cdots q_5^{m_5+1} . \] The $q$-weight of the even local operator $\boldsymbol{\gamma}_{(m_j)}^i$ is \[ q_1^{m_1} \cdots q_i^{m_i + 1} \cdots q_5^{m_5} .
\] Thus we find that the single particle index is given by the infinite series \beqn\label{infseriesindex} \sum_{i=1}^5\left ( \sum_{(m_i)\in \Z^5_{\geq 0}} q_1^{m_1} \cdots q_i^{m_i + 1} \cdots q_5^{m_5} - \sum_{(m_i)\in \Z^5_{\geq 0}} q_1^{m_1+1} \cdots q_i^{m_i} \cdots q_5^{m_5+1} \right) , \eeqn which sums to the expression \beqn\label{singleparticleindex} - \frac{\sum_{i=1}^5 q_1 \cdots \Hat{q_i} \cdots q_5}{\prod_{i=1}^5 (1-q_i)} + \frac{\sum_{i=1}^5 q_i}{\prod_{i=1}^5 (1-q_i)} . \eeqn This simplifies to the stated expression: the constraint gives $q_1 \cdots \Hat{q_i} \cdots q_5 = q_i^{-1}$, and $\prod_{i=1}^5(1-q_i^{-1}) = - \prod_{i=1}^5 (1-q_i)$ since the number of fugacities is odd and their product is one. \end{proof} This single particle index for our space of local operators agrees with the one computed in \cite{NekrasovInstanton}. To obtain the full index of local operators we apply the plethystic exponential ${\rm PE}[f(x)] = \exp\left(\sum_n \frac1n f(x^n)\right)$; for instance, ${\rm PE}[q] = 1/(1-q)$ while ${\rm PE}[-q] = 1-q$, so even generators contribute bosonic towers and odd generators contribute fermionic ones. \begin{prop}\label{prop:locchar} The character of local operators of the eleven-dimensional theory on $\CC^5 \times \RR$ is \[ \prod_{i=1}^{5} \prod_{(m_i)\in \Z^5_{\geq 0}} \frac{1-q_1^{m_1+1}\cdots q_i^{m_i}\cdots q_5^{m_5+1}}{1-q_1^{m_1}\cdots q_i^{m_i+1}\cdots q_5^{m_5}} . \] \end{prop} \begin{proof} Recall that the plethystic exponential takes sums to products and monomials to geometric series. Applying this to the infinite series \eqref{infseriesindex} returns the desired expression. \end{proof} \subsection{One-loop quantization} In \cite{GRWthf}, an existence result for one-loop quantizations of mixed topological-holomorphic theories was established. We apply this to the eleven-dimensional model at hand. The eleven-dimensional theory is a mixed topological-holomorphic theory. On flat space $\CC^5_z \times \RR_t$, this means that the theory is translation-invariant and that the following act homotopically trivially: \begin{itemize} \item the vector fields $\del_{\zbar_1}, \ldots, \del_{\zbar_{5}}$ corresponding to infinitesimal anti-holomorphic translations, \item the vector field $\partial_t$ corresponding to infinitesimal translations in the $\RR_t$ direction. \end{itemize} Recall that the action functional of the eleven-dimensional theory is $S_{BF, \infty} + g J$. Since the cubic and higher interactions only involve holomorphic derivatives, we obtain the following directly from the main result of \cite{GRWthf}. \begin{thm} There exists a gauge fixing condition for the eleven-dimensional theory on $\CC^5 \times \RR$ which renders its one-loop quantization finite and anomaly-free. \end{thm} When $g=0$, this result is actually exact, since there are no Feynman diagrams present past one-loop order in this case. When $g \ne 0$, on the other hand, this result does not immediately imply the existence of a gauge-invariant perturbative quantization to higher orders in $\hbar$: the presence of the functional $J = \frac16 \int \gamma \del \gamma \del \gamma$ allows one to construct Feynman graphs at arbitrary loop order. In \cite{CostelloM5}, Costello argues that, upon turning on an $\Omega$-background, the theory localizes to a five-dimensional theory on $\CC^2 \times \RR$. Via a cohomological argument, it is shown that this effective five-dimensional theory exhibits an essentially unique quantization in perturbation theory. We will return to the existence and uniqueness of a higher order quantization of the eleven-dimensional theory in future work.
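Before moving on to the dimensional reduction, we record a quick computer check of the bookkeeping in Proposition~\ref{prop:locchar}. The following SymPy snippet verifies, to a finite order in a single variable, that the plethystic exponential of the single particle index reproduces the product formula. Every fugacity is specialized as $q_i \mapsto t$ and the constraint $\prod_i q_i = 1$ is deliberately ignored on both sides, so the snippet tests only the summation and plethystic manipulations, not the constrained character itself; it is an illustration included for convenience, not part of the proof.

\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
K = 6  # truncation order in t

# Single particle index with every fugacity specialized as q_i -> t.
# A monomial q_1^{a_1}...q_5^{a_5} becomes t^{a_1+...+a_5}, and the number
# of (m_j) with m_1+...+m_5 = n is binomial(n+4, 4).
i_t = sum(5 * sp.binomial(n + 4, 4) * (t**(n + 1) - t**(n + 4))
          for n in range(K + 1))

# Plethystic exponential PE[f](t) = exp(sum_n f(t^n)/n), truncated at order K.
pe = sp.exp(sum(i_t.subs(t, t**n) / n for n in range(1, K + 1)))
pe = sp.series(pe, t, 0, K + 1).removeO()

# Product formula of the proposition, specialized and truncated the same way.
prod = sp.Integer(1)
for n in range(K + 1):
    e = 5 * sp.binomial(n + 4, 4)
    prod *= ((1 - t**(n + 4)) / (1 - t**(n + 1)))**e
prod = sp.series(prod, t, 0, K + 1).removeO()

assert sp.expand(pe - prod) == 0  # the two expressions agree up to O(t^K)
\end{verbatim}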
\section{Dimensional reduction and ten-dimensional supergravity} \label{sec:dimred} In this section we demonstrate that our proposal for the action of minimally twisted eleven-dimensional supergravity agrees with conjectural descriptions of twisted type IIA and type I supergravities due to Costello and Li. The original motivation for M-theory was as the strong coupling limit of type IIA string theory. Roughly, the radius of the M-theory circle plays the role of this coupling constant. Additionally, at low energies M-theory is expected to be approximated by eleven-dimensional supergravity, in the same way that the low energy limit of type IIA/IIB string theory is type IIA/IIB supergravity. Combining these two pictures, various checks have been made that the dimensional reduction of eleven-dimensional supergravity along the M-theory circle is type IIA supergravity. Motivated by the topological string, Costello and Li have laid out a series of conjectures for twists of type IIA/IIB supergravity \cite{CLsugra} and type I supergravity \cite{CLtypeI}. Their description was inspired by the open and closed $B$-model topological string on a Calabi--Yau manifold. The open sector is holomorphic Chern--Simons theory \cite{WittenOpen} and the closed sector is called Kodaira--Spencer theory \cite{BCOV}. There are a few different versions of Kodaira--Spencer theory, but the shared characteristic is that they are all `gravitational' in nature; they describe fluctuations of the Calabi--Yau structure. From this point of view, Kodaira--Spencer theory is at the heart of the formulation of the various flavors of twisted ten-dimensional supergravity. We begin by introducing certain variants of Kodaira--Spencer theory which will feature in the descriptions of twists of type IIA and type I supergravity. \subsection{Kodaira--Spencer theory} \label{s:BCOV} Let $X$ be a Calabi--Yau manifold; for now it can be of arbitrary complex dimension $d$. Define \deq{ \PV^{i,j}(X) = \Omega^{0,j}(X, \wedge^i \T_X). } We will consider the graded space $\PV^{\bu,\bu}(X) = \oplus_{i,j} \PV^{i,j}(X)[-i-j]$, where the piece of type $(i,j)$ sits in degree $i+j$. For each fixed $i$, letting $j$ vary, the $\dbar$ operator defines a cochain complex $\PV^{i,\bu}(X) = (\oplus_j\PV^{i,j}(X) [-j], \dbar)$ which resolves the sheaf of holomorphic polyvector fields of type $i$. The divergence operator extends to an operator of the form \[ \div \colon \PV^{i,\bu}(X) \to \PV^{i-1,\bu}(X) . \] Motivated by the states of the topological $B$-model, one defines the fields of Kodaira--Spencer gravity on $X$ to be the cochain complex \beqn\label{eqn:ks1} \left(\PV^{\bu,\bu} (X)[[u]] [2] \, , \, \dbar + u \div\right) . \eeqn Here, $u$ is a parameter of cohomological degree $+2$, which turns $\delta_{KS}^{(1)} = \dbar + u \div$ into an operator of homogeneous degree $+1$. We have also performed an overall cohomological shift by $2$, so that $u^k \PV^{i,j}$ sits in degree $i+j+2k-2$. More precisely, this is a model for the $S^1$-equivariant cohomology of the states of the $B$-model on a closed disk. We refer to \cite{CLtypeI, CLsugra} for detailed justification of this ansatz. \parsec[s:poisson] The original action for Kodaira--Spencer theory posited by \cite{BCOV} has a nonlocal kinetic term. In the BV formalism, this is codified by stipulating that the BV pairing is a degenerate odd Poisson tensor rather than an odd symplectic form.
The Poisson kernel is given by the expression \[ (\div\otimes 1)\delta_{\Delta \subset X\times X} \in \left[\PV^{\bu,\bu}(X)\right]^{\hotimes 2} ; \] see \cite[{\S 1.4}]{CLbcov1}. Here, we view the $\delta$-distribution as a polyvector field using the Calabi--Yau form. Notice that the shifted Poisson tensor does not involve the parameter $u$ at all. For this reason, only the duals of a small number of fields pair nontrivially under the resulting odd BV bracket. \parsec[s:ksaction] There is a natural local interaction which equips the complex \eqref{eqn:ks1} with the structure of a $\Z/2$-graded Poisson BV theory. Explicitly, it is given by \beqn I_{BCOV}(\Sigma) = {\rm Tr}_X \, \langle \exp \Sigma\rangle_0 = \sum_{n\geq 0} {\rm Tr}_X \, \langle\Sigma^{\otimes n}\rangle_0 , \eeqn where ${\rm Tr}_X \, \Phi = \int_X (\Phi \vee \Omega) \wedge \Omega$ and where $\langle - \rangle_0$ denotes the genus-zero Gromov--Witten invariant with marked points \beqn \langle u^{k_1}\mu_1 \otimes \cdots \otimes u^{k_m}\mu_m\rangle_0 := \left (\int _{\overline {\cM}_{0,m}} \psi_1^{k_1}\cdots \psi_m^{k_m}\right ) \mu_1\cdots \mu_m = \binom{m-3}{ k_1,\cdots, k_m} \mu_1\cdots \mu_m. \eeqn This interaction is extremely natural from the point of view of string field theory. Indeed, the B-model localizes to the space of constant maps into $X$, which factors as the product $\overline {\cM}_{0,m}\times X$. This is in keeping with finding an interaction that factors as an integral over $X$ times an integral over $\overline{\cM}_{0,m}$. In \cite{BCOV}, the authors show that the above interaction satisfies the classical master equation. Moreover, they show that the $L_\infty$ structure determined by the above action is equivalent to a natural dgla structure on the complex of fields, with Lie bracket given by the Schouten bracket. Explicitly, the equivalence is given by the transcendental automorphism \[ \Sigma \mapsto [u(\exp (\Sigma/u)-1)]_+ , \] where $[-]_+$ denotes projection onto positive powers of $u$. \parsec[s:minimalks] We pointed out in \S\ref{s:poisson} that the majority of fields pair to zero under the Poisson tensor. Physically, these correspond to closed string fields that do not propagate. In the supergravity approximation, the fields that survive are those closed string fields that propagate. In terms of our description of closed string field theory via Kodaira--Spencer theory, this motivates us to consider the smallest cochain complex containing those fields that have nonzero pairing under the Poisson tensor. This is referred to as minimal Kodaira--Spencer theory. The fields of minimal Kodaira--Spencer theory are given by the subcomplex of \eqref{eqn:ks1} \beqn \left (\bigoplus_{i+j\leq d -1}u^i\PV^{j,\bu}(X)[2], \dbar + u \div\right). \eeqn We observe that the original odd Poisson tensor lives in this subcomplex. There is a natural action functional given by restricting $I_{BCOV}$ to this space. \subsection{The $SU(4)$ twist of type IIA supergravity}\label{sec:SU(4)twist} We recall the description of the $SU(4)$ twist of type IIA supergravity conjectured in \cite{CLsugra}. In principle, there is also a minimal, $SU(5)$-invariant twist of type IIA supergravity, but so far no description, even conjectural, exists. We turn to this in \S \ref{s:su5IIA}. Let $X$ be a Calabi--Yau manifold of complex dimension four.
The $\ZZ/2$-graded complex of fields of minimal Kodaira--Spencer theory on $X$ takes the form \beqn \begin{tikzcd} - & + & - & + \\ \hline & & & {\PV^{0,\bu}} \\ & & {\PV^{1,\bu}} \arrow[r, "u\div"] & {u\PV^{0,\bu}} \\ & {\PV^{2,\bu}} \arrow[r, "u\div"] & {u\PV^{1,\bu}} \arrow[r, "u\div"] & {u^2\PV^{0,\bu}} \\ {\PV^{3,\bu}} \arrow[r, "u\div"] & {u\PV^{2,\bu}} \arrow[r, "u\div"] & {u^2\PV^{1,\bu}} \arrow[r, "u\div"] & {u^3\PV^{0,\bu}} \end{tikzcd}. \eeqn Denote this complex by $\cE_{mKS}(X)$. Here, $u^\ell\PV^{k,i}$ is placed in parity $k + i \mod 2$. The classical BCOV action $I_{BCOV}$ follows from the general formula we gave above. With this in hand, the conjecture of \cite{CLsugra} takes the following form. \begin{conj} The $SU(4)$-invariant twist of type IIA supergravity on $\RR^2\times \CC^4$ is the $\Z/2$-graded Poisson BV theory with fields \beqn\label{eqn:IIAfields} \alpha = \sum_n \alpha_n u^n \in \cE_{mKS}(\CC^4) \otimes \Omega^{\bu}(\RR^2). \eeqn The classical interaction takes the form \[I_{IIA} = \int_{\CC^4 \times \RR^2} \alpha_0^3 + \cdots\] \end{conj} We will need a more detailed description of the classical action. For the moment, let us introduce some notations for the fields of this IIA model. As always, we leave the internal Dolbeault degree implicit: \begin{multline} \eta \in \PV^{0,\bu}(\CC^4) \otimes \Omega^\bu (\RR^2), \quad \mu + u \nu \in \PV^{1, \bu}(\CC^4) \otimes \Omega^\bu (\RR^2) \oplus u \PV^{0,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2) \\ \Pi \in \PV^{2,\bu}(\CC^4) \otimes \Omega^\bu(\RR^2), \quad \sigma \in \PV^{3,\bu}(\CC^4) \otimes \Omega^\bu (\RR^2) . \end{multline} We will not need an explicit notation for the remaining descendant fields. With this notation in hand, we have the more precise form of the action appearing in the conjecture: \beqn\label{eqn:IIAaction} I_{IIA} = \frac12 {\rm Tr}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \mu^2 \wedge \Pi + {\rm Tr}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \eta \wedge \mu \wedge \sigma + \frac12 {\rm Tr}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \eta \wedge \Pi^2 + \cdots , \eeqn where the $\cdots$ denotes terms involving higher-order descendants. \subsection{Reduction to IIA supergravity} \label{s:su4red} We now turn back to our eleven-dimensional theory. The first goal is to compare the dimensional reduction of our eleven-dimensional theory on $\CC^5 \times \RR$ with the $SU(4)$-invariant twist of type IIA on $\R^{2}\times \CC^4$. Doing so will require a slight modification to the description of the $SU(4)$ twist of IIA supergravity recollected in \S \ref{sec:SU(4)twist}. \parsec[sec:IIApot] Recall that in the physical theory, the components of the $C$-field in eleven dimensions that are not supported along the M-theory circle become the components of the Ramond--Ramond 2-form of type IIA. However, as noted in \cite{CLsugra}, components of Ramond--Ramond fields do not appear as fields in Kodaira--Spencer theory; rather, it is components of their field strengths that appear. We recalled in \S \ref{s:components} that components of the $C$-field become components of $\gamma_{11d}$ in $\cE$. This suggests that we must modify our description of the twist of type IIA to include potentials for certain fields. The fundamental fields of the $SU(4)$ twist of IIA supergravity were given in \eqref{eqn:IIAfields}. We modify the space of fields by introducing potentials for both the $\Pi$ and $\sigma$ fields.
First, we introduce a field $\gamma \in \Omega^{1,\bu}(\CC^4) \otimes \Omega^\bu(\RR^2)$ (not to be confused, yet, with the $\gamma$ field in our eleven-dimensional theory) which satisfies $\Pi \vee \Omega = \del \gamma$, where $\Omega$ is the Calabi--Yau form on $\CC^4$. This condition does not uniquely fix $\gamma$. There is a new linear gauge symmetry determined by $\gamma \to \gamma + \del \beta$, where $\beta$ is a ghost that we must also introduce. Similarly, we introduce a field $\theta \in \Omega^{0,\bu}(\CC^4) \otimes \Omega^\bu(\RR^2)$ which satisfies $\sigma \vee \Omega = \del \theta$; there is no extra gauge symmetry present in this case.\footnote{Using the Calabi--Yau form, we have normalized the potential fields $\gamma, \beta,\theta$ to be written as differential forms instead of polyvector fields.} In diagrammatic detail, the potential theory we are considering has underlying cochain complex of fields \beqn\label{eqn:IIApot} \begin{tikzcd} - & + \\ \hline & {\PV^{0,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2) }_\eta \\ {\PV^{1,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2)}_\mu \arrow[r, "u\div"] & u{\PV^{0,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2)}_\nu \\ u^{-1}{\Omega^{0,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2)}_\beta \arrow[r, "u\del"] & {\Omega^{1,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2)}_\gamma \\ {\Omega^{0,\bu} (\CC^4) \otimes \Omega^\bu (\RR^2)}_\theta & \end{tikzcd} \eeqn The original space of fields of the twist of IIA supergravity on $\CC^4 \times \RR^2$ was equipped with an odd Poisson bivector which was degenerate. In other words, it did not define a theory in the conventional BV formalism. One of the key features of this new complex of fields, after we have taken these potentials, is that it carries an odd nondegenerate pairing, and hence has the structure of a theory in the conventional BV formalism. The pairing is $\Res_u \frac{\d u}{u} \int^\Omega_{\CC^4 \times \RR^2} \alpha \vee \alpha'$, where $\alpha, \alpha'$ are two general fields in this potential theory on $\CC^4 \times \RR^2$. Explicitly, in the description of the fields in \eqref{eqn:IIApot} the pairing is \[ \int^\Omega_{\CC^4 \times \RR^2} \eta \theta + \int^\Omega_{\CC^4 \times \RR^2} \mu \vee \gamma + \int^\Omega_{\CC^4 \times \RR^2} \nu \beta . \] This pairing is compatible with the odd Poisson bracket present in the original theory on $\CC^4 \times \RR^2$. The type IIA action completely determines the action of this theory with potentials. One simply takes \eqref{eqn:IIAaction} and replaces all appearances of $\Pi$ with $\del \gamma$ and all appearances of $\sigma$ with $\del \theta$ (contracted with $\Omega^{-1}$, per our normalization of the potentials). This yields the interaction of the potential theory \beqn\label{eqn:IIAactionpot} \til I_{IIA} = \frac12 \int^\Omega_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \mu^2 \vee \del \gamma + \int^\Omega_{\CC^4 \times \RR^2} \frac{1}{1-\nu} (\eta \wedge \mu) \vee \del \theta + \frac12 \int^\Omega_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \eta \wedge \del \gamma \wedge \del \gamma . \eeqn Notice that the terms involving higher descendants vanish, since these fields are set to zero in the potential theory. \parsec[-] We turn to the proof of the main result of this section: that the dimensional reduction of our eleven-dimensional theory agrees with the twist of IIA supergravity just introduced. We recall the notion of dimensional reduction along a holomorphic direction, following \cite{ESW}. Suppose that $V_\RR$ is a real vector space and denote by $V$ its complexification.
We consider a field theory defined on $M \times V$, which is holomorphic along $V$ (in particular, this means that the theory is translation invariant along $V$). We consider the dimensional reduction along the projection \beqn\label{eqn:dimred} M \times V \to M \times V_\RR \eeqn induced by ${\rm Re} \colon V \to V_\RR$. Most relevant for us is the case when $V = \CC$ and $M$ is $\CC^4 \times \RR$, but the explicit form of the theory along $M$ is not important at the moment. For illustrative purposes, let us first assume that $M$ is a point and that the space of fields is of the form $\Omega^{0,\bu}(V) \otimes W$ for $W$ some graded vector space. It is shown in \cite{ESW} that the dimensional reduction along $V \to V_\RR$ is equivalent to the theory whose fields are $\Omega^\bu(V_\RR) \otimes W$. In other words, the dimensional reduction of the holomorphic theory on $V$ is a topological theory on $V_\RR$. If we put $M$ back in, the result is similar. Suppose the original theory is of the form $\cE(M) \otimes \Omega^{0,\bu}(V) \otimes W$. Then, the dimensional reduction along \eqref{eqn:dimred} is the theory whose space of fields is $\cE(M) \otimes \Omega^\bu(V_\RR) \otimes W$. An explicit model for this reduction can be described as follows. Suppose $V \cong \CC^n$ and place the theory on $(\CC^\times)^{\times n} \subset \CC^n$. The dimensional reduction along $\CC^n \to \RR^n$ agrees with the compactification of the theory along $S^1 \times \cdots \times S^1$ where one throws away all nonzero winding modes around each circle. \begin{prop}\label{prop:dimred} The $SU(4)$ invariant twist of type IIA on $\CC^4 \times \RR^2$ is the dimensional reduction of the eleven-dimensional theory along \[ \CC^4 \times \CC \times \RR_t \to \CC^4 \times \RR_x \times \RR_t \cong \CC^4 \times \RR^2 . \] \end{prop} \begin{proof} Let us denote the holomorphic coordinate we are reducing along by $z_5 = x + {\rm i} y$. We first read off the dimensional reduction of each component field of the eleven-dimensional theory. Per the above discussion, this is obtained by taking all fields to be independent of $y$ and replacing $\d \zbar_5$ by $\d x$. To avoid confusing the notation for fields in ten and eleven dimensions, we use $\alpha_{11d}$ to denote an eleven-dimensional field. The reductions of the eleven-dimensional fields $\nu_{11d}, \beta_{11d}$ are easy to describe. Recall that \[ \nu_{11d} \in \PV^{0,\bu}(\CC^5) \otimes \Omega^\bu(\RR) . \] The reduction of this field is a ten-dimensional $\nu$ field \[ \nu (z_i,x,t) = \nu_{11d} (z_i, x, y=0, t) |_{\d \zbar_5 = \d x} . \] Similarly, the reduction of $\beta_{11d}$ is a ten-dimensional $\beta$ field \[ \beta (z_i,x,t) = \beta_{11d} (z_i, x, y=0, t) |_{\d \zbar_5 = \d x} . \] The reduction of the eleven-dimensional fields $\mu_{11d}$ and $\gamma_{11d}$ requires a bit of massaging. We break the $SU(5)$ symmetry to $SU(4)$ to write \[ \mu_{11d} = \mu^0_{11d} + \theta_{11d} \partial_{z_5} \] where \begin{align*} \mu^0_{11d} & \in \PV^{1,\bu}(\CC^4) \otimes \Omega^{0,\bu}(\CC_{z_5}) \otimes \Omega^\bu(\RR_t) \\ \theta_{11d} & \in \Omega^{0,\bu}(\CC^4) \otimes \Omega^{0,\bu}(\CC_{z_5}) \otimes \Omega^\bu(\RR_t) . \end{align*} The dimensional reduction of $\mu^0_{11d}$ is a ten-dimensional $\mu$ field \[ \mu(z_i,x,t) = \mu_{11d}^0 (z_i, x,y=0,t)|_{\d \zbar_5 = \d x} . \] The dimensional reduction of $\theta_{11d}$ is a $\theta$ field \[ \theta(z_i,x,t) = \theta_{11d} (z_i, x,y=0,t)|_{\d \zbar_5 = \d x} .
\] Finally, write the eleven-dimensional field $\gamma_{11d}$ as \[ \gamma_{11d} = \gamma_{11d}^0 + \eta_{11d} \d z_5 \] where \begin{align*} \gamma^0_{11d} & \in \Omega^{1,\bu}(\CC^4) \otimes \Omega^{0,\bu}(\CC_{z_5}) \otimes \Omega^\bu(\RR_t) \\ \eta_{11d} & \in \PV^{0,\bu}(\CC^4) \otimes \Omega^{0,\bu}(\CC_{z_5}) \otimes \Omega^\bu(\RR_t) . \end{align*} The dimensional reduction of $\gamma^0_{11d}$ is a ten-dimensional $\gamma$ field \[ \gamma(z_i,x,t) = \gamma_{11d}^0 (z_i, x,y=0,t)|_{\d \zbar_5 = \d x} . \] The dimensional reduction of $\eta_{11d}$ is an $\eta$ field \[ \eta(z_i,x,t) = \eta_{11d} (z_i, x,y=0,t)|_{\d \zbar_5 = \d x} . \] Next, we read off the dimensional reduction of the eleven-dimensional action. Let us first focus on the term present in BF theory, which is $\int^\Omega \frac{1}{1-\nu_{11d}} \mu_{11d}^2 \vee \del \gamma_{11d}$. Upon reduction, this becomes \beqn\label{eqn:bfred} \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \mu^2 \vee \del \gamma + \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} (\theta \wedge \mu) \vee \del \eta \eeqn Next, consider the cubic term in the eleven-dimensional action $J = \frac16 \int \gamma_{11d} \wedge \del \gamma_{11d} \wedge \del \gamma_{11d}$. Upon reduction, this becomes \beqn\label{eqn:jred} \int_{\CC^4 \times \RR^2} \eta \wedge \del \gamma \wedge \del \gamma . \eeqn The sum of the action functionals \eqref{eqn:bfred} and \eqref{eqn:jred} does not precisely agree with the IIA action $\til I_{IIA}$. To relate the two actions we must make the following field redefinition: \[ \til \theta = \frac{1}{1-\nu} \theta, \quad \til \eta = (1- \nu) \eta, \quad \til \beta = \beta + \frac{1}{1-\nu} \eta \wedge \theta . \] Notice that this change of coordinates is compatible with the odd symplectic pairing on the fields. Under this field redefinition the total dimensionally reduced action can be written as \begin{multline} \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \mu^2 \vee \del \gamma + \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} \frac{1}{1-\nu} \til\eta \wedge \del \gamma \wedge \del \gamma + \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} (\til\theta \wedge \mu) \vee \del \left(\frac{1}{1-\nu} \til\eta\right) \\ + \int_{\CC^4 \times \RR^2}^{\Omega_{\CC^4}} \frac{1}{1-\nu} (\til \eta \wedge \til \theta) \div \mu . \end{multline} The first line comes from plugging the new fields into the interactions \eqref{eqn:bfred} and \eqref{eqn:jred}. The second line comes from plugging the new fields into the kinetic term $\int \beta \div \mu$, which, because of the non-linear change of coordinates, now contributes to the interaction. We observe that the first two terms agree with the first and third terms in \eqref{eqn:IIAactionpot}. After integrating by parts, the remaining terms can be written as \[ - \int^{\Omega_{\CC^4}}_{\CC^4 \times \RR^2} \left(\frac{1}{1-\nu} \til\eta\right) \div (\til\theta \mu) + \int_{\CC^4 \times \RR^2}^{\Omega_{\CC^4}} \left(\frac{1}{1-\nu} \til \eta\right) \til \theta \div \mu . \] Applying the identity $\div (\til \theta \mu) = \til \theta \div \mu + \del (\til \theta) \vee \mu$ (in coordinates, $\div(\til\theta \mu) = \partial_{z_i}(\til\theta \mu^i) = \til\theta\, \partial_{z_i}\mu^i + (\partial_{z_i}\til\theta)\mu^i$), we see that this agrees exactly with the second term in \eqref{eqn:IIAactionpot}. \end{proof} \subsection{The twist of type I supergravity} We now turn to a different type of reduction of the eleven-dimensional theory, this time involving type I supergravity. We begin by briefly recalling the description of type I supergravity following \cite{CLtypeI}, which was motivated by the unoriented $B$-model.
In \cite{SWspinor}, the second and third authors verified the conjectural description of the space of fields recalled below using the pure spinor formalism. Unlike type IIA supergravity, type I supergravity admits only an $SU(5)$ invariant twist, which is holomorphic in the maximal number of dimensions. Concretely, the space of fields of the $SU(5)$ twist of type I supergravity is a subspace of minimal Kodaira--Spencer theory on $\CC^5$. The $\ZZ/2$ graded space of fields equipped with its linear BRST operator is \beqn\label{eqn:typeIcomplex} \begin{tikzcd} - & + & - & + \\[-1.7em] \hline {\PV^{1,\bu}}(\CC^5) \arrow[r, "u\div"] & {u\PV^{0,\bu}}(\CC^5) \\ {\PV^{3,\bu}} (\CC^5)\arrow[r, "u\div"] & {u\PV^{2,\bu}} (\CC^5)\arrow[r, "u\div"] & {u^2\PV^{1,\bu}}(\CC^5) \arrow[r, "u\div"] & {u^3\PV^{0,\bu}}(\CC^5) \end{tikzcd}. \eeqn Let us give a description of the classical action. Introduce notation for the fields of this type I model: \beqn\label{eqn:Ifields} \mu + u \nu \in \PV^{1, \bu}(\CC^5) \oplus u \PV^{0,\bu} (\CC^5), \quad \sigma \in \PV^{3,\bu}(\CC^5) . \eeqn We will not need an explicit notation for the remaining descendant fields. \begin{conj} The twist of type I supergravity on $\CC^5$ is the $\Z/2$-graded theory with fields $\mu+u\nu, \sigma$ as above and with classical action \beqn\label{eqn:typeIaction} I_{{\rm type\, I}} = {\rm Tr}_{\CC^5} \frac{1}{1-\nu} \mu^2 \vee \sigma + \cdots \eeqn where the $\cdots$ stands for terms involving the higher descendant fields. \end{conj} \parsec[s:typeIpot] As in the type IIA discussion, there is a slight modification of the type I model above which is most directly related to eleven-dimensional supergravity. This modification involves replacing the field $\sigma$ above by a potential $\til \gamma \in \Omega^{1,\bu}(\CC^5)$ which satisfies $\Omega \vee \sigma = \del \til \gamma$. This condition does not fix $\til \gamma$ uniquely; there is a gauge symmetry of the form $\til \gamma \to \til \gamma + \del \til \beta$. In detail, the potential theory we are considering has underlying cochain complex of fields \begin{equation} \label{eq:Ipot} \begin{tikzcd}[row sep = 1 ex] - & + & -\\ \hline \PV^{1,\bu}(\CC^5)_\mu \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu \\ & \Omega^{0,\bu}(\CC^5)_{\til\beta} \ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_{\til\gamma} . \end{tikzcd} \end{equation} This space of fields is equipped with an odd nondegenerate pairing. Like the eleven-dimensional theory, it is a classical BV theory in the $\ZZ/2$-graded sense. The type I action \eqref{eqn:typeIaction} completely determines the action of this theory with potentials. One simply takes the action and replaces all appearances of $\sigma$ with $\Omega^{-1} \vee \del \til\gamma$. This yields the interaction of the potential theory \beqn\label{eqn:Iactionpot} \til I_{\text{type I}} = \frac12 \int^\Omega_{\CC^5} \frac{1}{1-\nu} \mu^2 \vee \del \til\gamma . \eeqn Notice that the terms involving higher descendants vanish since these fields are set to zero in the potential theory. \subsection{Slab compactification}\label{s:Ired} We consider placing twisted eleven-dimensional supergravity on the manifold $\CC^5 \times [0,1]$. In order to do this, we must choose appropriate boundary conditions at $t=0$ and $t=1$. Our eleven-dimensional theory on such manifolds fits nicely into the formalism of \cite{BY,Eugene} in that it is topological in the direction transverse to the boundary.
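Before describing the phase space, it may help to see the pattern of the computation in a finite-dimensional toy model; the following sketch is purely illustrative and involves no features specific to supergravity. Let $W = \CC_q \oplus \CC_p$ be an even symplectic vector space with pairing $\langle q, p \rangle = 1$, and take the Lagrangian $L = \{p=0\}$ as the boundary condition at both ends of an interval. The derived self-intersection is computed by a Koszul resolution and yields
\[
L \overset{\LL}{\underset{W}{\times}} L \;\simeq\; \CC_q \oplus \Pi \CC ,
\]
where the second summand is an odd partner of the transverse direction; the result carries an odd nondegenerate pairing between the two summands, i.e., it is a BV space of fields. The computation below follows this pattern exactly, with the fields $\til\beta, \til\gamma$ of \eqref{eq:Ipot} appearing as the odd partners of the boundary condition $\gamma = \beta = 0$.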
The phase space of the theory at $t=0$ or $t=1$ is \begin{equation} \label{eq:lin1} \begin{tikzcd}[row sep = 1 ex] - & + \\ \hline \PV^{1,\bu}(\CC^5)_\mu \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu \\ \Omega^{0,\bu}(\CC^5)_\beta \ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_\gamma. \end{tikzcd} \end{equation} The wedge-and-integrate pairing between the top and bottom lines induces an {\em even} symplectic structure on the phase space. Denote this phase space by $\cE_{\del}$ for the moment. The phase space is equipped with the restriction of the linear BRST operator of the full eleven-dimensional theory. There is also a nonlinear BRST operator, just like in the bulk theory. The BV action induces an $L_\infty$ structure on the parity shift $\Pi\cE_{\del}$ whose cohomology is still a trivial central extension of $E(5,10)$. A boundary condition is given by a Lagrangian subspace of $\cE_{\del}$ with respect to this even symplectic structure. To make sense of the theory on $\CC^5 \times [0,1]$ we must choose two separate boundary conditions \[ \cM_{t=0} , \cM_{t=1} \subset \cE_{\del} . \] Moreover, these boundary conditions carry nonlinear BRST operators endowing their parity shifts $\Pi \cM_{t=0} , \Pi\cM_{t=1}$ with the structures of $L_\infty$ algebras. These $L_\infty$ structures must be compatible with the one on the phase space. In fact, in our context these boundary conditions are abstractly isomorphic. We will explain the explicit boundary conditions momentarily. An important thing to note is that the space of fields of the theory compactified on the slab is computed by the {\em derived} intersection of the two Lagrangians: \[ \cM_{t=0} \overset{\LL}{\underset{\cE_{\del}}{\times}}\cM_{t=1}. \] To compute this derived intersection we must suitably resolve the boundary conditions. \parsec[s:boundary] At $t=0$, the boundary condition of the eleven-dimensional theory is determined by declaring \[ \cM_{t=0}: \quad \gamma|_{t=0} = \beta|_{t=0} = 0 . \] We will place the theory on $\CC^5 \times [0,1]$ by imposing the same boundary condition at $t=1$: \[ \cM_{t=1}: \quad \gamma|_{t=1} = \beta|_{t=1} = 0 . \] \begin{prop} With these boundary conditions for the classical eleven-dimensional theory on $\CC^5 \times [0,1]$, the dimensional reduction along \[ \CC^5 \times [0,1] \to \CC^5 \] is equivalent to the twist of type I supergravity on $\CC^5$. \end{prop} \begin{proof} Notice that both $\cM_{t=0}$ and $\cM_{t=1}$ are abstractly isomorphic to the complex resolving divergence-free vector fields \begin{equation} \label{eq:lin2} \begin{tikzcd}[row sep = 1 ex] - & + \\ \hline \PV^{1,\bu}(\CC^5)_\mu \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu . \end{tikzcd} \end{equation} To compute the derived intersection between the two Lagrangians at $t=0$ and $t=1$ we resolve the Lagrangian morphism $\cM_{t=0} \hookrightarrow \cE_{\del}$. Consider the cochain complex $\til \cM_{t=0}$ defined by \begin{equation} \label{eq:lin3} \begin{tikzcd}[row sep = 1 ex] - & + & - \\ \hline \PV^{1,\bu}(\CC^5)_\mu \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu \\ \Omega^{0,\bu}(\CC^5)_\beta \ar[dr,dotted,"\id"]\ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_\gamma \ar[dr,dotted,"\id"] \\ & \Omega^{0,\bu}(\CC^5)_{\til\beta} \ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_{\til\gamma} . \end{tikzcd} \end{equation} Notice that as a graded vector space, this complex is of the form $\cE_{\del} \oplus (\Omega^{0,\bu} \oplus \Pi \Omega^{1,\bu})$. The $L_\infty$ structure on $\Pi \til \cM_{t=0}$ extends the one on $\cE_{\del}$ coming from the bulk BV action.
Notice that the obvious embedding $\cM_{t=0} \hookrightarrow \til \cM_{t=0}$ is a quasi-isomorphism, since the fields added in the bottom line of \eqref{eq:lin3} form the cone of the identity map and are therefore acyclic. The projection map $\til \cM_{t=0} \twoheadrightarrow \cE_{\del}$ factors the original Lagrangian inclusion as \[ \cM_{t=0} \hookrightarrow \til \cM_{t=0} \twoheadrightarrow \cE_{\del} . \] To compute the derived intersection of $\cM_{t=0}$ and $\cM_{t=1}$ we can compute the ordinary intersection of $\til \cM_{t=0}$ and $\cM_{t=1}$. Let $\mu_{t=1}$ and $\nu_{t=1}$ denote the fields present in the other boundary condition $\cM_{t=1}$. The intersection $\til \cM_{t=0} \times_{\cE_{\del}} \cM_{t=1}$ is computed by setting the fields $\beta, \gamma$ to zero and $\mu=\mu_{t=1}$, $\nu = \nu_{t=1}$. Thus, we are left with \begin{equation} \label{eq:lin4} \begin{tikzcd}[row sep = 1 ex] - & + & -\\ \hline \PV^{1,\bu}(\CC^5)_\mu \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu \\ & \Omega^{0,\bu}(\CC^5)_{\til\beta} \ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_{\til\gamma} \end{tikzcd} \end{equation} This is precisely the underlying cochain complex of fields for the type I model with potentials. The odd nondegenerate pairing on this complex agrees with the one on this particular potential theory for the twist of type I supergravity. The $L_\infty$ structure on the parity shift of this complex is compatible with the one induced from the BV action in \eqref{eqn:Iactionpot}. \end{proof} \subsection{The $SU(5)$ twist of type IIA supergravity} \label{s:su5IIA} Given that our eleven-dimensional theory correctly describes the $SU(5)$-invariant twist of supergravity on $\CC^5 \times \RR$, to obtain the $SU(5)$ twist of type IIA supergravity we should reduce along the topological $\RR$ direction. This results in an $SU(5)$-invariant, holomorphic theory on $\CC^5$. Let us briefly spell out the fields present in this dimensional reduction. The reduction is obtained by replacing $\Omega^\bu(\RR)$ with its translation invariant subalgebra $\CC[\ep] = \CC[\d t]$. Here, $\ep$ is an odd parameter playing the role of the translation invariant one-form $\d t \in \Omega^1(\RR)$. Equivalently, we are compactifying the theory along \[ \CC^5 \times S^1 \to \CC^5 . \] The field $\mu_{11d}$ is replaced by the field \[ \mu + \ep \mu' \in \Pi \PV^{1,\bu}(\CC^5) [\ep] . \] Notice that the lowest component of $\mu$ is odd (just like $\mu_{11d}$), but the lowest component of $\mu'$ is now even. Completely similarly, the remaining fields reduce as $\nu + \ep \nu'$, $\gamma + \ep \gamma'$, and $\beta + \ep \beta'$. In summary, the linear complex of fields of the dimensionally reduced theory on $\CC^5$ is \begin{equation} \label{eqn:IIAsu5} \begin{tikzcd}[row sep = 1 ex] {\rm odd} & {\rm even} & {\rm even} & {\rm odd} \\ \hline \PV^{1,\bu}(\CC^5)_{\mu} \ar[r, "\div"] & \PV^{0,\bu}(\CC^5)_\nu & & \\ & & \ep\Omega^{0,\bu}(\CC^5)_{\beta'} \ar[r, "\del"] & \ep\Omega^{1,\bu}(\CC^5)_{\gamma'} \\ & & \ep \PV^{1,\bu}(\CC^5)_{\mu'} \ar[r, "\div"] & \ep \PV^{0,\bu}(\CC^5)_{\nu'} \\ \Omega^{0,\bu}(\CC^5)_\beta \ar[r, "\del"] & \Omega^{1,\bu}(\CC^5)_\gamma & & \end{tikzcd} \end{equation} Here the columns record the parities of the primed and unprimed component fields. We can compute the dimensional reduction of the eleven-dimensional action $S_{BF,\infty} + J$ just as in the previous sections. We arrive at the action functional described below. \begin{conj} \label{conj:IIAsu5} The $SU(5)$ twist of type IIA supergravity on $\CC^5$ is equivalent to the theory whose linear BRST complex of fields is displayed in \eqref{eqn:IIAsu5}.
The full action functional is \begin{multline} \label{eqn:su5action} \int^\Omega_{\CC^5}\bigg(\beta' \wedge \dbar \nu + \beta \wedge \dbar \nu' + \gamma' \wedge \dbar \mu + \gamma \wedge \dbar \mu' + \beta' \wedge \div \mu + \beta \wedge \div \mu' \bigg) \\ + \int^\Omega_{\CC^5} \bigg( \frac12 \frac{1}{1-\nu} \mu^2 \vee \del \gamma' + \frac{1}{1-\nu} (\mu \wedge \mu') \vee \del \gamma + \frac12 \frac{\nu'}{(1-\nu)^2} \mu^2 \vee \del \gamma \bigg) \\ + \frac12 \int_{\CC^5} \gamma' \wedge \del \gamma \wedge \del \gamma . \end{multline} \end{conj} The first two lines in \eqref{eqn:su5action} arise from the reduction of the BF action $S_{BF,\infty}$. The final line arises from the reduction of $J = \frac16 \int \gamma_{11d} \del \gamma_{11d} \del \gamma_{11d}$. \parsec[]\label{s:orbifold} The slab compactification of the previous section was one way to implement the $S^1/(\ZZ/2)$ reduction of the eleven-dimensional theory. We offer another point of view on this $S^1/(\ZZ/2)$ reduction. To begin, there is a $\ZZ/2$ action on the eleven-dimensional theory on $\CC^5 \times S^1$ before compactifying, obtained as the tensor product of the following two $\ZZ/2$ actions. First, $\ZZ/2$ acts on $\Omega^\bu(S^1)$ by orientation-reversing diffeomorphisms. Second, we declare that the eigenvalue of the $\ZZ/2$ action on $\PV^{k,\bu}(\CC^5)$, for $k=0,1$, is $+1$ and the eigenvalue of the $\ZZ/2$ action on $\Omega^{k,\bu}(\CC^5)$ for $k=0,1$ is $-1$. This determines a $\ZZ/2$ action on the full space of fields of the eleven-dimensional theory. Upon $S^1$ compactification the $\ZZ/2$ action is easy to describe: $\mu,\nu$ both have eigenvalue $+1$, $\mu',\nu'$ both have eigenvalue $-1$, $\gamma,\beta$ both have eigenvalue $-1$, and $\gamma',\beta'$ both have eigenvalue $+1$. In particular, we see that the $\ZZ/2$ fixed points simply pick out the $\mu, \nu, \gamma', \beta'$ fields; this comprises the first two lines of \eqref{eqn:IIAsu5}. The fields match precisely with the fields in the twist of type I supergravity that we recalled in \S \ref{s:typeIpot} (under the relabeling $\gamma' \leftrightarrow \til \gamma$, $\beta' \leftrightarrow \til \beta$). Furthermore, the restriction of the action in the above conjecture agrees precisely with the action of this twisted type I model. \subsection{Compactification along a CY3}\label{s:CY3} In the first section we saw that the eleven-dimensional theory can be defined on any manifold that is a product of a Calabi--Yau five-fold with a smooth oriented one-manifold. In this section, we investigate an important compactification of the eleven-dimensional theory which involves the Calabi--Yau manifold $X \times \CC^2$ where $X$ is a simply connected compact Calabi--Yau three-fold. The compactification of the theory along the three-fold $X$ \[ X \times \CC^2 \times \RR \to \CC^2 \times \RR \] yields an effective five-dimensional theory which is holomorphic along $\CC^2$ and topological along $\RR$. Upon compactification, we will find a match with a description of the twist of five-dimensional minimally supersymmetric supergravity. \begin{prop} \label{prop:5dsugra} The compactification of the eleven-dimensional theory along a Calabi--Yau three-fold $X$ is equivalent to the twist of five-dimensional $\cN=1$ supergravity with $h^{1,1}(X)-1$ vector multiplets and $h^{1,2}(X) + 1$ hypermultiplets. \end{prop} \parsec[s:5dsugra] We give a conjectural description of the twist of five-dimensional $\cN=1$ supergravity.
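For a concrete illustration of the count in Proposition \ref{prop:5dsugra}, using only the standard Hodge numbers of the quintic: for $X$ a quintic three-fold one has $h^{1,1}(X) = 1$ and $h^{1,2}(X) = 101$, so the compactified theory should be the twist of five-dimensional supergravity coupled to no vector multiplets and $102$ hypermultiplets.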
Before twisting, a general five-dimensional $\cN=1$ supergravity contains a gravity multiplet coupled to some number of vector and hypermultiplets. The twist of the vector and hypermultiplets has been computed in \cite{ESW}, and we recall it below. The twist of the gravity multiplet is less clear. A thorough computation of the twist has yet to appear, though some checks have been established by Elliott and the last author in \cite{EWpoisson}. We give a description of the twist now, but leave a detailed computation from first principles to future work. The gravity multiplet, see \cite{CCDF} for instance, consists of a graviton $e$, a gravitino $\psi$, and a one-form gauge field $\cA_{grav}$. After twisting, the graviton and components of the gravitino decompose into two Dolbeault--de Rham valued fields \[ \alpha, \eta \in \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) , \] whose lowest components both carry odd parity. The one-form gauge field $\cA_{grav}$ and the remaining components of the gravitino decompose into two more Dolbeault--de Rham valued fields \[ A_{grav} , B_{grav} \in \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) , \] whose lowest components also both carry odd parity. \begin{conj} \label{conj:5dsugra} The twist of five-dimensional supergravity (with nonzero Chern--Simons term) with vector multiplets valued in a Lie algebra $\fg$ and hypermultiplets valued in a representation $V$ has BV fields \begin{itemize} \item $\alpha, A_{grav} \in \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR)$ with conjugate BV fields $\eta, B_{grav}$, \item $A \in \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \otimes \fg$ with conjugate BV field $B$, \item $\chi \in \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \otimes V$ with conjugate BV field $\psi$. \end{itemize} The action is \begin{multline} \label{eqn:5daction} \int^\Omega_{\CC^2 \times \RR} \left(\eta \dbar \alpha + B_{grav} \dbar A_{grav} + B \dbar A + \psi \dbar \chi \right) \\ + \int^\Omega_{\CC^2 \times \RR} \left( \frac12\eta \{\alpha, \alpha\} + B_{grav} \{\alpha, A_{grav}\}+ B \{\alpha, A\} + \psi \{\alpha, \chi \}\right) \\ + \frac16 \int_{\CC^2 \times \RR} B_{grav} \del B_{grav} \del B_{grav} . \end{multline} \end{conj} \parsec[-] With this description of the twist of five-dimensional supergravity, we turn to the proof of Proposition \ref{prop:5dsugra}. First, we set up some notation. Let $\Omega_X$ be the holomorphic volume form on $X$. To define the eleven-dimensional theory on $X \times \CC^2 \times \RR$ we use the Calabi--Yau form $\Omega_X \wedge \d z_1 \wedge \d z_2$, where $z_1, z_2$ are holomorphic coordinates on $\CC^2$. Let $\omega \in \Omega^{1,1}(X)$ be a fixed K\"ahler form on $X$. For any $k$, let $H^k(X, \Omega^k_X)_\perp$ denote the subspace of primitive classes. \begin{proof} Consider the eleven-dimensional field $\nu_{11d}$. Since $X$ is simply connected with strict $SU(3)$ holonomy, $h^{0,1}(X) = h^{0,2}(X) = 0$ and $H^3(X, \cO) \cong \CC \cdot \Bar{\Omega}_X$, so under the equivalence \begin{align*} \PV^{0,\bu}(X \times \CC^2) \otimes \Omega^\bu(\RR) & \simeq H^{\bu}(X, \cO) \otimes \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \\ & = \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \oplus \Pi \Bar{\Omega}_X \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \end{align*} the $\nu_{11d}$ field decomposes as \[ \nu_{11d} = \nu + \Bar{\Omega}_X \til \nu . \] Here $\Bar{\Omega}_X$ is the complex conjugate to the holomorphic volume form on $X$. Notice that the zero form component of $\til \nu$ is a field with odd parity. Next, consider the eleven-dimensional field $\mu_{11d}$.
Under the equivalence \begin{align*} \Pi \PV^{1,\bu}(X \times \CC^2) \otimes \Omega^\bu(\RR) & \simeq \Pi H^{\bu}(X, \cO) \otimes \PV^{1,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \\ & \oplus \Pi H^\bu(X, \T_X) \otimes \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \\ & = \Pi \PV^{1,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \oplus \Bar{\Omega}_X \PV^{1,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \\ & \oplus H^{1}(X, \T_X) \otimes \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \oplus \Pi H^{2}(X, \T_X) \otimes \PV^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \end{align*} the field $\mu_{11d}$ decomposes as \begin{align*} \mu_{11d} & = \mu + \Bar{\Omega}_X \til \mu \\ & + e^i \chi_i + f^a A_a + (\Omega_X^{-1} \vee \omega^2)A_{grav} . \end{align*} Here, $\{e^i\}_{i=1,\ldots, h^{2,1}}$ is a basis for $H^{1}(X, \T_X)$ and $\{f^a\}_{a=1,\ldots, h^{1,1}-1}$ is a basis for \[ H^2 (X, \Omega^2_X)_\perp \subset H^2(X, \Omega^2_X) \cong H^2(X, \T_X) . \] Notice that the zero form parts of $\til{\mu}$ and $\chi_i$ are even fields, while the zero form parts of $A_a$ and $A_{grav}$ are odd fields. The decomposition for the eleven-dimensional fields $\gamma_{11d}$ and $\beta_{11d}$ is similar. We record it here: \begin{align*} \beta_{11d} & = \beta + \Bar{\Omega}_X \til \beta \\ \gamma_{11d} & = \gamma + \Bar{\Omega}_X \til \gamma + e_i \psi^i + f_a B^a + \omega \wedge B_{grav} . \end{align*} Here, $\{e_i\}_{i=1,\ldots,h^{2,1}}$ is a basis for $H^2 (X, \Omega^1_X)$ dual to the basis $\{e^i\}$ under the Serre pairing. Also, $\{f_a\}_{a=1,\ldots,h^{1,1}-1}$ is a basis for $H^{1}(X, \Omega^1_X)_\perp$ dual to the basis $\{f^a\}$. To compare most directly to the description of the twist of five-dimensional $\cN=1$ supergravity we slightly modify the fields. Let $\del$ be the holomorphic de Rham differential along $\CC^2$. First, we introduce a potential for the fields $\mu$ and $\til \mu$. Let \[ \alpha, \chi \in \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \] be differential forms satisfying $\del \alpha = \mu \vee \Omega_{\CC^2}$ and $\del \chi = \til \mu \vee \Omega_{\CC^2}$. The fields $\nu, \til \nu$ are set to zero. Dually, we replace the fields $\gamma, \til \gamma$ by their `field strengths', suitably renormalized with respect to the volume form \[ \eta = (\d^2 z)^{-1} \vee \del \til \gamma , \quad \psi = (\d^2 z)^{-1} \vee \del \gamma \in \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) . \] The fields $\beta, \til \beta$ played the role of gauge symmetries implementing $\gamma \to \gamma + \del \beta$ and $\til \gamma \to \til \gamma + \del \til \beta$. Since we are replacing $\gamma, \til \gamma$ by their images under the operator $\del$, these gauge symmetries are set to zero. In summary, we are left with the following fields \[ \begin{array}{cccccccccc} \alpha,A_{grav} & \in & \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR), & \eta, B_{grav} & \in & \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) \\ \chi, \chi_i & \in & \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR), & \psi, \psi^i & \in & \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR), & i=1,\ldots, h^{2,1} \\ A_a & \in & \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR), & B^a & \in & \Pi \Omega^{0,\bu}(\CC^2) \otimes \Omega^\bu(\RR) , & a = 1, \ldots, h^{1,1}-1. \end{array} \] Let us plug these fields into the eleven-dimensional action. First, consider the BF term $\frac12 \int^{\Omega} \frac{1}{1-\nu_{11d}} \mu_{11d}^2 \vee \del \gamma_{11d}$.
With the field redefinitions above, this decomposes as \begin{multline}\label{eqn:5dsugra1} \int_{\CC^2 \times \RR}^\Omega \left( \frac12 \del \alpha \wedge \del \alpha \wedge \eta + \del \alpha \wedge \del A_{grav} \wedge B_{grav} \right) \\ + \int_{\CC^2 \times \RR}^\Omega \left( \del \alpha \wedge \del \chi \wedge \psi + \del \alpha \wedge \del \chi_i \wedge \psi^i + \del \alpha \wedge \del A_a \wedge B^a \right) . \end{multline} This term agrees with the second line in the five-dimensional action \eqref{eqn:5daction}. Finally, consider the term in the eleven-dimensional action $J(\gamma_{11d}) = \frac16 \int \gamma_{11d} \wedge \del \gamma_{11d} \wedge \del \gamma_{11d}$. This induces the five-dimensional Chern--Simons term \beqn\label{eqn:5dsugra2} \frac16 \int_{\CC^2 \times \RR} B_{grav} \del B_{grav} \del B_{grav} . \eeqn This completes the proof. \end{proof} \parsec[s:5dglobal] In \S \ref{sec:global} we computed the global symmetry algebra of the eleven-dimensional theory on $\CC^5 \times \RR$ and found a close relationship to the exceptional super Lie algebra $E(5,10)$. In this section we deduce the form of the global symmetry algebra of the five-dimensional compactified theory on $\CC^2 \times \RR$. Denote the full de Rham cohomology of $X$ by \[ H^\bu (X, \Omega^\bu) = \oplus_{i,j} H^i (X, \Omega^j_X) . \] This is a graded commutative algebra using the wedge product of differential forms. Next, consider the space of holomorphic functions $\cO(\CC^2)$ on $\CC^2$. The Poisson bracket $\{-,-\}$ associated to the standard holomorphic symplectic structure on $\CC^2$ endows $\cO(\CC^2)$ with the structure of a Lie algebra. In particular, we can tensor $\cO(\CC^2)$ with $H^\bu (X, \Omega^\bu)$ to obtain the structure of a graded Lie algebra on \[ H^\bu (X, \Omega^\bu) \otimes \cO(\CC^2) . \] Let $[\omega] \in H^1(X, \Omega^1_X)$ be the class of the K\"ahler form on $X$. The global symmetry algebra of the compactified theory along the Calabi--Yau three-fold $X$ is equivalent to a deformation of this graded Lie algebra. The deformation introduces the following Lie bracket \[ \big[ [\omega] \otimes f, [\omega] \otimes g\big] = [\omega^2] \otimes \{f,g\} \in H^{2}(X, \Omega^2_X) \otimes \cO(\CC^2) . \] \subsection*{A geometric description of the model} Our eleven-dimensional theory is defined more generally on the product of manifolds \[ X \times S \] where $X$ is a Calabi--Yau five-fold and $S$ is a smooth oriented real one-dimensional manifold. The theory is of a holomorphic gravitational flavor as it describes ``partial'' deformations of complex structures along $X$. We will explain what we mean by this momentarily. In our theory there is a field which encodes this partial deformation of complex structure on the Calabi--Yau manifold $X$. It is an even field \[ \mu^{1,1} \in \Omega^{0,1} (X , \T_X) \otimes C^\infty(S) \] where $\T_X$ denotes the holomorphic tangent bundle on $X$. Locally, $\mu^{1,1}$ can be decomposed as a Beltrami-like differential \[ \mu^{1,1} = \mu^i_j (z,\zbar, t) \d \zbar_i \frac{\partial}{\partial z_j} . \] Physically speaking, $\mu^{1,1}$ is a component of the metric which survives in the twisted theory; see~\S\ref{s:components}. Next, there are two fields \[ \gamma^{1,0} \in \Omega^{1,0} (X) \otimes C^\infty(S), \quad \gamma^{1,2} \in \Omega^{1,2}(X) \otimes C^\infty(S) . \] The field $\gamma^{1,2}$ is a component of the supergravity $3$-form $C$-field that survives in the twisted theory.
The field $\gamma^{1,0}$ can be interpreted as a component of the one-form which is a ghost-for-a-ghost of the $C$-field.\footnote{The $C$-field has a gauge symmetry of the form $\delta C = \d B$ where $B$ is a two-form. This ghost $B$ field has an additional gauge symmetry $\delta B = \d A$ for $A$ a one-form. The field $\gamma^{1,0}$ is a component of $A$.} There is a background where the equation of motion involving the fields $\mu^{1,1}$, $\gamma^{1,0}$, and~$\gamma^{1,2}$ reads \[ \dbar \mu^{1,1} + \frac12 [\mu^{1,1}, \mu^{1,1}] + \Omega^{-1} \vee \left(\del \gamma^{1,0} \wedge \del \gamma^{1,2} \right) = 0 . \] Additionally, there are the conditions that $\mu^{1,1}$ preserve the holomorphic volume form on $X$ and that all fields are locally constant along the topological direction $S$. Notice that this would be precisely the integrability equation for $\mu^{1,1}$ to determine a complex structure on $X$, were it not for the presence of the last term in this equation. If we work in a background where one of $\gamma^{1,0}$ or $\gamma^{1,2}$ is zero, then we see that $\mu^{1,1}$ is exactly a deformation of complex structure along $X$. In terms of the eleven-dimensional geometry, these field configurations describe deformations of the natural transverse holomorphic foliation (THF structure) on $X \times S$ which further preserve the holomorphic volume form along the leaves. We further unpack the equations of motion in more general backgrounds in \S \ref{s:components}, but leave a complete study for future work. \subsection*{Appearance of exceptional Lie superalgebras} The gauge symmetries of a field configuration in any theory form a Lie algebra. In the Batalin--Vilkovisky formalism, one combines fields of all ghost number into a single object which, together with the linear BRST operator, has the structure of a dg Lie algebra.\footnote{In some models one actually obtains an $L_\infty$ algebra.} From the point of view of deformation theory, this dg Lie algebra describes the formal moduli space of deformations of the particular field configuration. The simplest field configuration in the twisted theory is the flat background; this corresponds to considering our eleven-dimensional theory on $\CC^5 \times \RR$, where we equip $\CC^5$ with the flat holomorphic volume form. In this case, we find a striking relationship to a certain infinite-dimensional simple super Lie algebra studied by Kac and collaborators \cite{KacBible,KacE510}.\footnote{The twist of the eleven-dimensional gravity multiplet was computed in~\cite{SWspinor} by reducing it to a particular pure spinor model based on $\Gr(2,5)$; a possible relationship between this pure spinor model and $E(5,10)$ was first pointed out by Martin Cederwall in~\cite{martinSL5}.} \begin{thm} The global symmetry algebra of the eleven-dimensional theory on $\CC^5 \times \RR$ is equivalent to a central extension of the exceptional simple super Lie algebra $E(5,10)$. \end{thm} In particular, correlation functions involving observables of the eleven-dimensional theory will be constrained by the infinite-dimensional symmetry algebra $E(5,10)$. Given our main conjecture that the interacting eleven-dimensional BV theory on $\CC^5 \times \RR$ is the twist of supergravity, we obtain the following. \begin{conj} A central extension of the super Lie algebra $E(5,10)$ is a symmetry of supergravity on $\RR^{11}$ which preserves the background where the bosonic ghost takes value equal to a holomorphic supercharge $Q$. \end{conj}
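As a small sanity check on this conjecture, note that the even part of $E(5,10)$ already contains a familiar finite-dimensional symmetry: the traceless linear vector fields
\[
z_i \partial_{z_j} - \tfrac{1}{5} \delta_{ij}\, z_k \partial_{z_k} ,
\]
which are divergence-free, span a copy of $\mathfrak{sl}(5)$ inside $\Vect_0(\CC^5)$, matching the $SU(5)$ invariance of the holomorphic background.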
\end{conj} \subsection*{Relationship to other twists of supergravity} Motivated by the topological string, Costello and Li formulated conjectural descriptions of certain twists of 10-dimensional theories of supergravity. All of the descriptions center around the theory of Kodaira--Spencer gravity, otherwise known as BCOV theory. This theory was introduced in \cite{BCOV} as the closed-string field theory of the topological $B$-model on Calabi--Yau three-folds. It was further extended to all Calabi--Yau manifolds in \cite{CLbcov1}. It is a sort of holomorphic version of gravity which at the genus zero describes fluctuations of the complex structure of the Calabi--Yau manifold. We recall relevant aspects of Kodaira--Spencer gravity in \S\ref{s:BCOV}. Through the dimensional reduction of our eleven-dimensional theory we will find a match with Costello and Li's descriptions of twists of 10-dimensional supergravity in terms of BCOV theory. In Table \ref{diagram}, we provide a summary of the comparisons with twists of supergravity in lower dimensions we perform. The squiggly arrows denote further twists, and the solid arrows denote various kinds of reductions. \begin{table} \begin{center} \[ \begin{tikzcd}[row sep=large,column sep = 3ex] & \parbox{3cm}{\centering minimal twist of 11d} \arrow[ld, "\text{slab }" '] \arrow[d, "\text{topological }S^1" description] \arrow[rdd, "\text{CY3}" description, pos=.7, bend left = 5] \arrow[rr, rightsquigarrow] \arrow[rd, "\text{holomorphic }S^1"] & & \parbox{3cm}{\centering non-minimal twist of 11d} \arrow[d, "\text{topological }S^1" description] \\ \parbox{3cm}{\centering minimal twist of type I} & \parbox{3cm}{\centering minimal twist of type IIA}\arrow[l, "\text{orbifold}", start anchor={[xshift=1.5ex]west}, end anchor={[xshift=-1.5ex]east}]\arrow[r, rightsquigarrow, crossing over, end anchor = {[xshift=2ex]west}] & \parbox{3cm}{\centering SU(4) twist of type IIA} \arrow[r, rightsquigarrow, start anchor={[xshift=-2ex]east}, end anchor={[xshift=2ex]west}] & \parbox{3cm}{\centering SU(2) twist of type IIA} \\ & & \parbox{3cm}{\centering minimal twist of 5d $\cN=1$} & \end{tikzcd} \] \end{center} \caption{Relations between various twists} \label{diagram} \end{table} In \S\ref{s:su4red} we will show that the reduction along $\{0\} \times \RR \times \{0\} \subset \CC^4 \times \CC \times \RR$ is equivalent to the $SU(4)$ twist of type IIA supergravity. The topological string approach does not lead to a description of the minimal, or holomorphic, twist of type IIA supergravity. In \S \ref{s:su5IIA}, we describe the reduction along the line $\{0\} \times \RR \subset \CC^5 \times \RR$ to obtain a conjectural description of the holomorphic $SU(5)$ twist of type IIA supergravity. In \S \ref{s:Ired}, we show that the reduction along the interval $\{0\}\times [0,1]\subset \CC^5\times [0,1]$ with certain boundary conditions on the endpoints of the interval is equivalent to the twist of type I supergravity. We further observe in \S \ref{s:orbifold} that the twist of type I arises as the fixed points of a natural $\ZZ/2$ action on the minimal twist of IIA. Next, in \S \ref{s:CY3} we study the reduction along a Calabi-Yau 3 fold to obtain a conjectural description of the minimal twist of 5d $\cN=1$ supergravity. Finally, we show in \S \ref{s:nonmin} how the non-minimal twist of eleven-dimensional supergravity arises as a further twist of our holomorphic twist. 
\subsection*{Acknowledgements} We warmly thank I.~Brunner, M.~Cederwall, K.~Costello, R.~Eager, C.~Elliott, D.~Gaiotto, O.~Gwil\-liam, F.~Hahner, J.~Huerta, H.~Loosen, S.~Li, N.~Paquette, S.~R\"osch, J.~Walcher, P.~Yoo for conversation and inspiration of all kinds. I.S. and B.W. thank the University of Heidelberg, B.W. thanks the University of Edinburgh and the Mathematisches Forschungsinstitut Oberwolfach, and I.S. thanks the Krameterhof for hospitality while portions of this work were being completed. The work of S.R. is supported by the Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada, through the Department of Innovation, Science and Economic Development Canada, and by the Province of Ontario, through the Ministry of Colleges and Universities. The work of I.S. is supported by the Free State of Bavaria. The work of B.W. is supported by the University of Edinburgh. \section{Infinite-dimensional symmetry in flat backgrounds} \label{s:E(5,10)} \subsection{Global symmetry algebra} \label{sec:global} In any field theory, the cohomology classes of states of odd ghost number have the structure of a Lie algebra. More generally, after shifting the cohomological degree by one, the full cohomology of states with respect to the linear BRST operator is naturally a graded Lie algebra. If we forget the grading to a $\ZZ/2$ grading, then this global symmetry algebra has the structure of a super Lie algebra. In general, taking cohomology loses information. If the dg Lie (or $L_\infty$) algebra we start with is not formal, then there exist higher-order operations on the linearized BRST cohomology. We will refer to this $L_\infty$ algebra as the global symmetry algebra of the theory. In the previous section, before taking cohomology with respect to the linear BRST operator, we described the super $L_\infty$ structure on the parity shift of the eleven-dimensional fields. This is encoded by the full BV action of the eleven-dimensional theory. The cubic component of the full BV action induces the super Lie algebra structure present in the linearized BRST cohomology. Our main result is to relate the global symmetry algebra of the minimal twist of eleven-dimensional supergravity on $\CC^5 \times \RR$ to a certain infinite-dimensional exceptional super Lie algebra studied by Kac \cite{KacBible,KacE510} called $E(5,10)$. We recall the definition below. \begin{thm}\label{thm:global} Let $\Pi\cE(\CC^5 \times \RR)$ be the parity shift of the fields of eleven-dimensional supergravity on $\CC^5 \times \RR$ and denote by $\delta^{(1)}$ the linearized BRST operator. \begin{enumerate} \item As a super Lie algebra, the $\delta^{(1)}$-cohomology of $\Pi\cE(\CC^5 \times \RR)$ is isomorphic to the trivial one-dimensional central extension of the super Lie algebra $E(5,10)$. \item The global symmetry algebra is equivalent, as a super $L_\infty$ algebra, to the non-trivial central extension of $E(5,10)$ determined by the even cocycle defined in~\eqref{eqn:cocycle}. \end{enumerate} \end{thm} This result implies that the action functional $S_{BF, \infty} + J$ of the eleven-dimensional theory is invariant for the infinite-dimensional Lie algebra $E(5,10)$. \subsection{Linearized BRST cohomology} We compute the linearized BRST cohomology of eleven-dimensional supergravity.
Then we will describe the induced structure of a super Lie algebra present in the parity shift of the cohomology, proving part (1) of Theorem~\ref{thm:global}. \parsec[] First we recall the definition of the exceptional simple super Lie algebra $E(5,10)$. Recall that $\Vect_0 (\CC^5)$ is the Lie algebra of divergence-free holomorphic vector fields on $\CC^5$. Let $\Omega^{2}_{cl} (\CC^5)$ be the module of holomorphic $2$-forms that are closed for the holomorphic de Rham operator $\del$. The even part of the super Lie algebra $E(5,10)$ is the Lie algebra \[ E(5,10)_+ = \Vect_0(\CC^5) \] of divergence-free vector fields on $\CC^5$, whose elements we continue to denote by $\mu$. The odd piece is the module \[ E(5,10)_- = \Omega^{2}_{cl} (\CC^5), \] whose elements we denote by $\alpha$. Besides the natural module structure, there is an odd bracket \[ E(5,10)_- \otimes E(5,10)_- \to E(5,10)_+ . \] The bracket uses the isomorphism $\Omega^{-1} \vee (-) \colon \Omega^{4} \cong \Vect (\CC^5)$ induced by the standard Calabi--Yau form $\d^5 z$, and is defined by \beqn\label{eqn:e510} [\alpha, \alpha'] = \Omega^{-1} \vee (\alpha \wedge \alpha') . \eeqn Since both $\alpha, \alpha'$ are closed two-forms, the resulting vector field on the right hand side is divergence-free. In coordinates, if $f^{ij} \d z_i \wedge \d z_j$ and $g^{kl} \d z_k \wedge \d z_l$ are two closed two-forms, their bracket is the vector field $\ep_{ijklm} f^{ij}g^{kl} \partial_{z_m}$; for instance, $[\d z_1 \wedge \d z_2, \d z_3 \wedge \d z_4] = \partial_{z_5}$. To be precise, Kac studied a more algebraic version of the algebra we have just introduced, where holomorphic functions are replaced by holomorphic polynomials. As such, the simple super Lie algebra that appears in the classification in~\cite{KacBible} is a dense sub Lie algebra of what we call $E(5,10)$, consisting of those vector fields and two-forms that have polynomial coefficients. \parsec[] If $\cE$ is the space of fields of any theory in the BV or BRST formalism, the shift $\cL = \cE[-1]$ has the structure of a Lie, possibly $L_\infty$, algebra. In the $\ZZ/2$ graded world, the parity shifted object $\cL = \Pi \cE$ has the structure of a super $L_\infty$ algebra. In this section, we use the description of the eleven-dimensional theory as the deformation of the BF action $S_{BF,\infty}$ by the functional $J$ of Theorem \ref{thm:dfn}. We set the coupling $g = 1$. For any other nonzero value of $g$, we will obtain an isomorphic super $L_\infty$ algebra as explained above. We would also obtain equivalent results if we used the other model of the eleven-dimensional theory explained in~\S\ref{s:altdfn}. The full differential on the cochain complex of observables of the theory is given by the BV bracket with the BV action. For us, this~is \[ \delta = \{S_{BF,\infty} + J, -\} . \] The linear BRST operator (dual to the differential on the cochain complex of fields) comes only from the quadratic summands in $S_{BF,\infty}$, and is of the form \beqn\label{eqn:linearBRST} \delta^{(1)} = \dbar + \d_{\RR} + \div |_{\mu \to \nu} + \del |_{\beta \to \gamma} . \eeqn To compute the cohomology with respect to $\delta^{(1)}$ we can use a spectral sequence, first taking the cohomology with respect to $\dbar + \d_{\RR}$ and then with respect to $\div$.
By the $\dbar$ and de Rham Poincar\'e lemmas, the cohomology of the space of fields of the eleven-dimensional theory on $\CC^5 \times \RR$ with respect to $\dbar + \d_{\RR}$ results in the cochain complex \begin{equation} \label{eq:linred} \begin{tikzcd}[row sep = 1 ex] - & + \\ \hline \Vect(\CC^5) \ar[r, "\div"] & \cO(\CC^5) \\ \cO(\CC^5) \ar[r, "\del"] & \Omega^{1}(\CC^5). \end{tikzcd} \end{equation} Recall that $\Vect(\CC^5), \cO(\CC^5)$, and $\Omega^1(\CC^5)$ denote the space of holomorphic vector fields, functions, and one-forms, respectively. The cohomology with respect to the remaining linearized BRST operator consists of the space of triples $(\mu, [\gamma], b)$ where: \begin{itemize} \item $\mu$ is a divergence-free holomorphic vector field on $\CC^5$, which is constant along $\RR$ \[ \mu = \mu \otimes 1 \in \Pi \Vect_0(\CC^5) \otimes \Omega^0(\RR) . \] Note that $\mu$ is a ghost in the $\ZZ/2$ graded theory. \item $[\gamma]$ is an equivalence class of a holomorphic one-form modulo exact holomorphic one-forms along $\CC^5$, which is also constant along $\RR$ \[ [\gamma] = [\gamma] \otimes 1 \in \left(\Omega^{1}(\CC^5) / \del \cO(\CC^5) \right) \otimes \Omega^0(\RR) . \] \item A constant function $b \in \Pi \CC$ on $\CC^5 \times \RR$. This is a $\beta$-type field in the eleven-dimensional theory; note that any constant function is closed for the de Rham differential. This element is also a ghost in the $\ZZ/2$-graded theory. \end{itemize} \parsec[] After parity shifting, we have identified the solutions to the linear equations of motion with triples \[ (\mu, [\gamma], b) \in \Vect_0(\CC^5) \oplus \Pi \Omega^{1}(\CC^5) / \del \cO(\CC^5) \oplus \CC . \] The bracket induced by the cubic component of $S_{BF, \infty}$ in the classical BV action is the usual bracket on divergence-free vector fields together with the module structure on holomorphic one-forms by Lie derivative. Notice that the Lie derivative commutes with the $\del$ operator, so this action descends to equivalence classes as above. The elements $b$ are central. The final term in the BV action $J = \frac16\int \gamma \wedge \del \gamma \wedge \del \gamma$ induces the following Lie bracket on the solutions to the linearized equations of motion \beqn\label{eqn:eqb} \big[[\gamma], [\gamma'] \big] = \Omega^{-1} \vee (\del \gamma \wedge \del \gamma') \in \Vect_0(\CC^5) \eeqn where $\Omega^{-1}$ denotes the section of $\PV^{5,hol}(\CC^5)$ which is inverse to the Calabi--Yau form $\Omega$ on $\CC^5$. Notice that this bracket is well-defined, as it does not depend on the particular equivalence classes, and that the resulting vector field is automatically divergence-free. \parsec[] Having described the linearized BRST cohomology as a super vector space, we turn to the proof of Theorem \ref{thm:global}. \begin{proof}[Proof of Theorem \ref{thm:global}] For the first part, we write down an explicit map between the cohomology computed above and the algebra $E(5,10)$. The relationship of the $\mu$-elements in $E(5,10)$ and the eleven-dimensional theory is apparent. Next, we need to relate the equivalence classes $[\gamma]$ with the closed two-forms $\alpha$ in $E(5,10)$. On flat space, any closed holomorphic differential form of positive degree is exact (this is a holomorphic version of the Poincar\'e lemma). In other words, there is an isomorphism \[ \del \colon \Omega^1 (\CC^5) / \del \cO(\CC^5) \xto{\cong} \Omega^{2}_{cl}(\CC^5) \] induced by the holomorphic de Rham differential.
This gives the relationship between the equivalence class $[\gamma]$ in the eleven-dimensional theory and a closed two-form in $E(5,10)$ by $\alpha = \del \gamma$. It is clear from Equations \eqref{eqn:e510} and \eqref{eqn:eqb} that this assignment intertwines the Lie brackets in $E(5,10)$ and the twist of eleven-dimensional supergravity. This completes the proof of part (1). For part (2), we first produce the following homotopy data: \begin{equation} \begin{tikzcd} \arrow[loop left]{l}{K}(\Pi \cE , \delta^{(1)})\arrow[r, shift left, "q"] &(E(5,10) \oplus \CC_b \, , \, 0)\arrow[l, shift left, "i"] \: , \end{tikzcd} \end{equation} \begin{itemize} \item On the $\nu$'s we take $K$ to be any operator $K \colon \cO \to \Vect$ such that $\div K \nu = \nu$. On the $\gamma$'s we take $K$ to be an operator $K \colon \Omega^1 \to \Omega^0$ which, together with an auxiliary operator $\til{K} \colon \Omega^2_{cl} \to \Omega^1$, satisfies the homotopy relation \beqn\label{eqn:htpy1} \til{K} \del \gamma + \del K \gamma = \gamma . \eeqn The precise form of each of these operators will not be needed. The existence of such operators is guaranteed by the holomorphic Poincar\'e lemma. The operator $K$ annihilates the fields $\beta$ and $\mu$. \item The map $q$ is described as follows. First, $q(\mu) = \mu - K \div (\mu)$. Notice that $q(\mu)$ is automatically divergence-free. Next, $q(\gamma) = [\gamma]$, the equivalence class in $\Omega^1 / \del \cO$. If $\beta$ is a holomorphic function, then $q(\beta) = \beta (z=0)$. \item The map $i$ embeds $\mu$ and $b$ in the obvious way. On the equivalence class $[\gamma] \in \Omega^1 / \del \cO$ we define $i([\gamma]) = \gamma - \til{K} \del \gamma$. Notice that this is independent of the choice of representative $\gamma$. \end{itemize} It is straightforward to check that this comprises well-defined homotopy data; the only nontrivial thing to check is the relation $\id - i \circ q = \delta^{(1)} K - K \delta^{(1)}$. Plugging in the field $\gamma$ we see that we must check that \[ \gamma - \til{K} \del \gamma = \del K \gamma \] which is precisely \eqref{eqn:htpy1}. Given this homotopy data, we can compute the homotopy transferred $L_\infty$ structure on the linearized BRST cohomology. Since $\nu$ does not survive to cohomology and there are no nontrivial Lie brackets involving the field $\beta$, this transferred structure is easy to compute. There is a single diagram which contributes to the transferred structure; it is given by \begin{equation} \begin{tikzpicture} \begin{feynman} \vertex(a) at (-1,1) {$i(\mu)$}; \vertex(b) at (-1,0) {$i([\gamma])$}; \vertex(c) at (-1,-1) {$i(\mu')$}; \vertex(d) at (0,0.5); \vertex(e) at (1,0); \vertex(f) at (2,0) {$q$}; \diagram* {(a)--(d), (b)--(d), (d)--[edge label = $K$](e), (c)--(e), (f)--(e)}; \end{feynman} \end{tikzpicture} \end{equation} together with a similar diagram with the $\mu$ and $\mu'$ flipped. This diagram leads to a new ternary bracket on $E(5,10) \oplus \CC_b$ \[ \big[\mu,\mu',[\gamma]\big]_3 = \varphi(\mu,\mu',[\gamma]) \] where $\varphi \in \clie^\text{even} (E(5,10))$ is the even Lie algebra cocycle defined by the formula \beqn \begin{array}{rclr} \varphi \colon E(5,10) \times E(5,10) \times E(5,10) & \to & \CC_b \\ \varphi(\mu,\mu',\alpha) & = & \<\mu \wedge \mu', \alpha\>|_{z=0} . \label{eqn:cocycle} \end{array} \eeqn Since $b$ is central, this cocycle defines a central extension of $E(5,10)$.
\end{proof} \parsec[] We briefly remark on Lie algebra cohomology for super Lie algebras. The Lie algebra cohomology $\clie^{\bu,\bu}(\cL)$ of any super Lie algebra $\cL$ is graded by $\ZZ \times \ZZ/2$. The first grading is by the symmetric degree in the Chevalley--Eilenberg complex. The second grading is the internal parity of the super Lie algebra $\cL$. The Chevalley--Eilenberg differential is of degree $(1,+)$. The cocycle $\varphi$ has homogeneous bigrading $(3,-)$. In the above discussion we forgot the bigrading to a totalized $\ZZ/2$ grading where \begin{align*} \clie^\text{even} (\cL) & = \clie^{2\bu , +} (\cL) \oplus \clie^{2\bu+1, -}(\cL) \\ \clie^\text{odd} (\cL) & = \clie^{2\bu , -} (\cL) \oplus \clie^{2\bu+1, +}(\cL) . \end{align*} With this totalization, $\varphi$ is an even cocycle and hence determines a super $L_\infty$ central extension by the one-dimensional even vector space $\CC$. \section{The non-minimal twist}\label{s:nonmin} We have provided numerous consistency checks that the eleven-dimensional theory defined on a manifold with $SU(5)$ holonomy is a twist of supergravity. We have referred to this theory as ``minimal,'' since it renders the minimal number of translations homotopically trivial, or (slightly improperly) as ``holomorphic.'' In this section we characterize the unique further twist of eleven-dimensional supergravity on flat space, as seen through the lens of the holomorphic theory. This further twist is invariant for the group $G_2 \times SU(2)$ and is fully topological along seven directions, as opposed to just a single direction as in the minimal twist. This is easiest to see by decomposing the eleven-dimensional spinor as a representation of $\Spin(4) \times \Spin(7)$; from this perspective, a square-zero element is a rank-one element in the tensor product of a chiral spin representation of~$\Spin(4)$ and the spin representation of~$\Spin(7)$. Elements of the latter fall into two distinct orbits under the $\Spin(7)$ action, the minimal orbit---``Cartan pure spinors''---and the generic orbit~\cite{Igusa}. The stabilizer of an element of the generic orbit is~$G_2$, almost by definition. We will show that the non-minimal twist is equivalent to an interacting theory on $\CC^2 \times \RR^7$ that we call ``Poisson'' Chern--Simons theory, using a direct description of the further twist together with an indirect cohomological argument. This completes the confirmation of a conjecture in the literature~(\cite{CostelloM5}; see also~\cite{RY}); the result was checked at the level of the free theory in~\cite{EagerHahner} by computing the nonminimal twist of the eleven-dimensional multiplet directly at the component-field level. In the BV formalism, the theory is $\ZZ/2$ graded, with fields given by \[ A \in \Pi \Omega^{0,\bu}(\CC^2) \hotimes \Omega^\bu(\RR^7) , \] where $\Pi$, as always, denotes parity shift. The equations of motion are of the form \[ \dbar A + \d_{\RR^7} A + \partial_{z_1} A \wedge \partial_{z_2} A = 0 . \] The action functional depends on the holomorphic symplectic structure on $\CC^2$ through the Poisson bracket on the algebra of holomorphic functions. We give a precise definition below. The main result of this section is the following. \begin{thm} \label{thm:nonmin} The non-minimal twist of the eleven-dimensional theory is equivalent to Poisson Chern--Simons theory on \[ \CC^2 \times \RR^7 .
\] \end{thm} From the point of view of the untwisted theory, the non-minimal twist is defined by working in a background where the fermionic ghost in the physical theory is equal to a supertranslation of the form \[ Q + Q_{nm} \] where $Q$ is the supertranslation which defines the minimal twist; see \S \ref{sec:mintwist}. The minimal twist of supergravity is obtained by setting a fermionic ghost equal to $Q$. In the language of the minimal twist, the supercharge $Q_{nm}$ determines a square-zero element in the $Q$-cohomology of the original supersymmetry algebra (which we will denote by the same letter). The characterization of this cohomology in Proposition \ref{prop:susycoh} implies that $Q_{nm}$ is an element \[ Q_{nm} \in \wedge^2 \left(L^\vee\right) \] where $L \cong \CC^5$ is the defining $SU(5)$ representation. In other words, $Q_{nm}$ is a translation invariant holomorphic two-form on $\CC^5$. The condition that $[Q_{nm}, Q_{nm}] = 0$ simply says that $Q_{nm}\wedge Q_{nm} = 0$ as a translation invariant four-form on $\CC^5$. Up to a linear change of coordinates, all such two-forms are of the form $\d z_i \wedge \d z_j$ for some $i \neq j$, with $i,j=1,\ldots, 5$. From here on in this section we will rename coordinates by \[ \CC^5 \times \RR = \CC^2_{z_i} \times \CC^3_{w_a} \times \RR \] which is most natural from the point of view of the non-minimal twist. We will fix the non-minimal supercharge \[ Q_{nm} = \d z_1 \wedge \d z_2 . \] Notice that this choice of supercharge breaks the holonomy of the eleven-dimensional theory from $SU(5)$ to $SU(2) \times SU(3)$. \subsection{Index matching} \label{sec:indexcheck} As a first consistency check, we can compare deformation invariants attached to the holomorphic twist and the nonminimal twist. We will find that the local character of the latter agrees with a specialization of the local character computed in~\S\ref{sec:locchar}. \begin{prop} The local character of the nonminimal twist of eleven-dimensional supergravity on flat space is given by \[ \prod _{(n_1,n_2)\in \Z^2_{\geq 0}} \frac{1}{1-q^{-n_1+n_2}}. \] This agrees with the specialization of the local character computed in Proposition \ref{prop:locchar}. \end{prop} \begin{proof} The space of solutions to the linearized equations of motion is parametrized by a holomorphic function $A$ on $\C^2_{z_i}$. The corresponding linear local operators are labeled by $(n_1,n_2)\in \Z^2_{\geq 0}$ and are given by \[ \boldsymbol{A}_{(n_1,n_2)} : A \mapsto \partial_{z_1}^{n_1}\partial_{z_2}^{n_2} A (0). \] The character of the linear span of these is given by the geometric series \beqn\label{nonmin:singleparticle} \sum _{(n_1,n_2)\in \Z^2_{\geq 0}} q^{-n_1+n_2} \eeqn with plethystic exponential given by \[ \prod _{(n_1,n_2)\in \Z^2_{\geq 0}} \frac{1}{1-q^{-n_1+n_2}}. \] For the last part, it suffices to observe the specialization at the level of single particle indices. A natural choice of fugacities for $SU(2)\times SU(3)$ is given in terms of the fugacities $q_i$ for $SU(5)$ chosen in~\S\ref{sec:locchar} by requiring the additional constraints \[q_1q_2 = 1, \ \ \ q_3q_4q_5=1.\] After imposing the above constraints, the single particle index~\eqref{singleparticleindex} is \[ i(q) = \frac{1}{(1-q)(1-q^{-1})} \] where $q = q_1 = q_2^{-1}$. This is exactly the sum of the geometric series~\eqref{nonmin:singleparticle}.
\end{proof}

\subsection{The non-minimal global symmetry algebra}

We constructed an embedding of the $Q$-cohomology of the supersymmetry algebra into the fields of our eleven-dimensional theory on $\CC^5 \times \RR$. The further twist is obtained by working in a background where a certain field on $\CC^5 \times \RR$ takes nonzero value $Q_{nm}$. Explicitly, the element $Q_{nm} \in \wedge^2 \left(L^\vee\right)$ corresponds to the image under $\del$ of a $\gamma$-field of type $\Omega^{1,0}(\CC^5) \otimes \Omega^0(\RR)$. According to the embedding in \S \ref{s:residual} this is the $\gamma$-field
\beqn\label{eqn:gammanm}
\gamma_{nm} = \frac12 (z_1 \d z_2 - z_2 \d z_1) \in \Omega^{1,0}(\CC^5) \otimes \Omega^0(\RR) .
\eeqn
Notice that $\del \gamma_{nm} = \d z_1 \wedge \d z_2$ as desired.

\parsec[sec:nmsymmetry]
Before proceeding to the proof of the theorem above, we perform a simple calculation of the global symmetry algebra present in the $Q_{nm}$-twisted theory. Recall that up to a copy of constant functions, the global symmetry algebra of the holomorphic twist of the eleven-dimensional theory is the super Lie algebra $E(5,10)$. From this point of view, the global symmetry algebra of the $Q_{nm}$-twisted theory is given by deforming this super Lie algebra by the Maurer--Cartan element
\[
\d z_1 \wedge \d z_2 \in \Omega^{2}_{cl}(\CC^5) .
\]
We recall that the space of closed two-forms on $\CC^5$ is precisely the odd part of the super Lie algebra $E(5,10)$.

We compute the cohomology of $E(5,10)$ with respect to the differential which is bracketing with this Maurer--Cartan element. Recall that we are using the holomorphic coordinates $(z_1,z_2,w_1,w_2,w_3)$ on $\CC^5$. There are the following brackets in the super Lie algebra $E(5,10)$:
\begin{align*}
[f_i \partial_{z_i} , \d z_1 \wedge \d z_2] & = \del f_1 \wedge \d z_2 - \del f_2 \wedge \d z_1 \\
[g_a \partial_{w_a} , \d z_1 \wedge \d z_2] & = 0 \\
[h^{ab} \d w_a \wedge \d w_b , \d z_1 \wedge \d z_2 ] & = \ep_{abc} h^{ab} \partial_{w_c} ,
\end{align*}
where $f_i \partial_{z_i}$ and $g_a \partial_{w_a}$ are the components of a divergence-free vector field on $\CC^5$ and $h^{ab} \d w_a \wedge \d w_b$ is a closed two-form.

From these relations, we see that the following elements are in the kernel of $[\d z_1 \wedge \d z_2, -]$:
\begin{itemize}
\item $f(z_i, w_a) \, \d z_1\wedge \d z_2$ for $f$ a holomorphic function on $\CC^5$.
\item $f(z_i) \partial_{z_1} + g(z_i) \partial_{z_2}$ for holomorphic functions $f,g$ on $\CC_{z_1} \times \CC_{z_2}$ which satisfy
\[
\del_{z_1} f + \del_{z_2} g = 0 .
\]
In other words, this is a divergence-free vector field on $\CC_{z_1} \times \CC_{z_2}$. Indeed, for such $f,g$ the first bracket above reduces to
\[
[f \partial_{z_1} + g \partial_{z_2} , \d z_1 \wedge \d z_2] = (\del_{z_1} f + \del_{z_2} g) \, \d z_1 \wedge \d z_2 .
\]
\item $f_b(z_i, w_a) \partial_{w_b}$ for $f_b$ a holomorphic function on $\CC^5$ where $b=1,2,3$.
\end{itemize}
It is immediate to check that these elements exhaust the kernel. Further, any element of the first type is exact, and any element of the last type is exact as well: $f_b \partial_{w_b}$ is, up to a factor, the image of the closed two-form $\ep^{abc} f_a \, \d w_b \wedge \d w_c$. Thus, the cohomology is the (purely bosonic) Lie algebra of divergence-free vector fields on $\CC^2 = \CC_{z_1} \times \CC_{z_2}$
\[
H^\bu\big(E(5,10), [\d z_1 \wedge \d z_2, -] \big) \simeq \Vect_0(\CC^2) .
\]

We proved in Theorem \ref{thm:global} that the global symmetry algebra of the eleven-dimensional theory on $\CC^5 \times \RR$ is equivalent to a central extension $\Hat{E(5,10)}$ of the super Lie algebra~$E(5,10)$.
The Lie algebra of divergence-free vector fields on $\CC^2$ also admits a central extension:
\beqn\label{eqn:centralvect}
0 \to \CC \to \cO (\CC^2) \to \Vect_0 (\CC^2) \to 0
\eeqn
where $\cO(\CC^2)$ is equipped with the Poisson bracket with respect to the symplectic form~$\d z_1 \wedge \d z_2$. These two central extensions are compatible.

\begin{prop} \label{prop:ham}
Let $\Hat{E(5,10)}$ be the central extension of $E(5,10)$ which is equivalent to the global symmetry algebra of the eleven-dimensional theory on $\CC^5 \times \RR$. Then there is an isomorphism of Lie algebras
\[
H^\bu \big(\Hat{E(5,10)} , [\d z_1 \wedge \d z_2, -] \big) \simeq \cO(\CC^2) .
\]
\end{prop}

\begin{proof}
The only thing to check is that, in cohomology, the cocycle defining the central extension of $E(5,10)$ is the cocycle exhibiting $\cO(\CC^2)$ as the central extension of divergence-free vector fields. Recall that the formula \eqref{eqn:cocycle} for the cocycle is
\[
\varphi(\mu, \mu', \alpha) = \<\mu \wedge \mu', \alpha\>|_{z=0}.
\]
In cohomology, we obtain the cocycle for divergence-free vector fields by plugging in $\alpha = \d z_1 \wedge \d z_2$. This gives the cocycle on $\Vect_0(\CC^2)$
\[
(f_i \del_{z_i}, g_j \del_{z_j}) \mapsto (f_1 g_2 - f_2 g_1)(z_1=z_2=0) .
\]
This is the cocycle defining \eqref{eqn:centralvect}, as desired.
\end{proof}

This proposition implies that the global symmetry algebra of the non-minimal twist of eleven-dimensional supergravity is the Lie algebra $\cO(\CC^2)$. We will see that this is compatible with the calculation of the non-minimal twist of the full BV theory.

\subsection{The non-minimal twist of the eleven-dimensional theory}

Now, we turn to deducing the action functional of the non-minimal twist and hence the proof of Theorem \ref{thm:nonmin}. We will show that the eleven-dimensional theory on $\CC^5 \times \RR$, placed in the background where the $(1,0)$ component of $\gamma$ takes the value $\gamma_{nm}$ of \eqref{eqn:gammanm}, is equivalent to a theory with the purely Chern--Simons-like action functional that we referred to in the introduction to this section.

Poisson Chern--Simons theory is defined on any manifold of the form
\[
Z \times M
\]
where $Z$ is a hyper K\"ahler surface and $M$ is a smooth manifold of real dimension seven. The fundamental field of the theory is
\[
\alpha \in \Pi \Omega^{0,\bu}(Z) \; \Hat{\otimes} \; \Omega^\bu(M) .
\]
Just as for our original eleven-dimensional theory, this theory is only $\ZZ/2$ graded. The holomorphic symplectic form $\omega_Z^{2,0}$ on $Z$ induces a Poisson bracket defined on all Dolbeault forms $\Omega^{0,\bu}(Z)$ which we denote by $\{-,-\}_{pb}$. In local Darboux coordinates $(z_1,z_2)$, this bracket reads
\[
\{\alpha^I (z,\zbar) \d \zbar_I , \alpha'^J (z,\zbar) \d \zbar_J\}_{pb} = (\partial_{z_1} \alpha^I \, \partial_{z_2} \alpha'^J \pm \partial_{z_2} \alpha^I \, \partial_{z_1} \alpha'^J) \d \zbar_I \wedge \d \zbar_J ,
\]
with the sign determined by the parity of the arguments. The action functional of Poisson Chern--Simons theory is
\beqn\label{eqn:pcsaction}
\frac12 \int_{Z \times M} (\alpha \wedge \d\alpha) \wedge \omega^{2,0}_Z + \frac16 \int_{Z\times M} \alpha \wedge \{\alpha, \alpha\}_{pb} \wedge \omega^{2,0}_Z .
\eeqn
For simplicity, we will work only on flat space $\CC^5 \times \RR = \CC^2_z \times (\CC^3_w \times \RR)$, where we view $Z = \CC^2_z$ as a hyper K\"ahler manifold with its standard holomorphic symplectic form $\omega^{2,0} = \d^2 z$.
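For orientation: on $0$-form components the bracket $\{-,-\}_{pb}$ is just the usual holomorphic Poisson bracket determined by $\omega^{2,0} = \d z_1 \wedge \d z_2$. For instance, with one standard choice of sign conventions,
\[
\{f , g\}_{pb} = \partial_{z_1} f \, \partial_{z_2} g - \partial_{z_2} f \, \partial_{z_1} g , \qquad \{z_1 , z_2\}_{pb} = 1 ,
\]
for holomorphic functions $f,g$ on $Z$. In particular, the cubic term in \eqref{eqn:pcsaction} involves exactly one holomorphic derivative along each of $z_1$ and $z_2$, in agreement with the form of the equations of motion recorded at the beginning of this section.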
We will decompose the fields according to these coordinates.

\begin{proof}[Proof of Theorem \ref{thm:nonmin}]
Decompose the $\mu$-field as $\mu = \mu_z + \mu_w$ where
\begin{align*}
\mu_z &\in \PV^{1,\bu}(\CC^2_z) \otimes \PV^{0,\bu}(\CC^3_w) \otimes \Omega^\bu(\RR) \\
\mu_w & \in \PV^{0,\bu}(\CC^2_z) \otimes \PV^{1,\bu}(\CC^3_w) \otimes \Omega^\bu(\RR) ,
\end{align*}
and similarly $\gamma = \gamma_z + \gamma_w$. We will also use the notation $\del^z$ for the holomorphic de Rham differential along $\CC_z^2$ and similarly $\del^w$ for the holomorphic de Rham differential along $\CC^3_w$.

To twist, we expand near the background where the field $\gamma_z$ takes value $\gamma_{nm}$ as in \eqref{eqn:gammanm}. This will generate new kinetic and interaction terms. There are two types of interactions in the original theory. The first is
\begin{equation}\label{eqn:int1}
\frac12 \int_{\CC^2 \times \CC^3 \times \RR} \frac{1}{1-\nu} \left(\del \gamma \vee \mu^2 \right) \wedge (\d^2 z \wedge \d^3 w)
\end{equation}
and the second is
\begin{equation} \label{eqn:int2}
\frac16\int_{\CC^2 \times \CC^3 \times \RR} \gamma \partial \gamma \partial \gamma .
\end{equation}
Expanding \eqref{eqn:int1} around the background where $\gamma$ takes value $\gamma_{nm}$, we obtain
\begin{multline}
\int \frac{1}{1-\nu} \left(\frac12 \del^w \gamma_w \vee \mu_w^2 + \del^z \gamma_w \vee \mu_w \mu_z + \del^w \gamma_z \vee \mu_w\mu_z + \frac12 \del^z \gamma_z \vee \mu_z^2 \right) \wedge (\d^2 z \wedge \d^3 w) \\
+ \frac{1}{2} \int \frac{1}{1-\nu} \left(\d^2 z \vee \mu_z^2 \right) \wedge (\d^2 z \wedge \d^3 w) . \label{eqn:delta1}
\end{multline}
We similarly expand (\ref{eqn:int2}):
\beqn
\frac16 \int \left(\gamma_w \partial^z \gamma_w \partial^z \gamma_w +\gamma_w \partial^w \gamma_w \partial^z \gamma_z + \gamma_w \partial^w \gamma_z \partial^w \gamma_z \right) + \frac{1}{2} \int \left(\gamma_w \partial^w \gamma_w \right) \wedge \d^2 z . \label{eqn:delta2}
\eeqn

The new terms in the non-minimally twisted linearized BRST differential arise from the quadratic terms in the action in Equations \eqref{eqn:delta1} and \eqref{eqn:delta2}:
\begin{equation}\label{eqn:newterms}
\frac{1}{2} \int (\d^2 z \vee \mu_z^2) \wedge (\d^2 z \wedge \d^3 w) + \frac{1}{2} \int \left(\gamma_w \wedge \partial^w \gamma_w \right) \wedge \d^2 z .
\end{equation}
The non-minimally twisted linear BRST complex thus takes the form
\beqn\label{eqn:twisteddiagram}
\begin{tikzcd}
& \PV^{1,\bu}_Z \hotimes \PV^{0,\bu}_W \ar[dr, "\div^z"] \ar[dashed, rounded corners, to path={ -- ([yshift=-2ex]\tikztostart.west) |- ([xshift=-1.5ex]\tikztotarget.west) -- (\tikztotarget)}, dddddr]\\
& & \PV^{0,\bu}_Z \hotimes \PV^{0,\bu}_W \\
& \PV^{0,\bu}_Z \hotimes \PV^{1,\bu}_W \ar[ur, "\div^w"'] & \\
\;_{\cong} & & \Omega^{0,\bu}_Z \hotimes \Omega^{1,\bu}_W \ar[ul, dashed, bend left = 10, "\Omega^{-1}_W \partial^w"]\\
& \Omega^{0,\bu}_Z \hotimes \Omega^{0,\bu}_W \ar[ur, "\partial^w"] \ar[dr,"\partial^z"'] \\
& & \Omega^{1,\bu}_Z \hotimes \Omega^{0,\bu}_W
\end{tikzcd}
\eeqn
Here we write $Z = \CC^2_z$ and $W = \CC^3_w$ for notational simplicity. The dashed arrow along the outside of the diagram corresponds to the BV antibracket with the first term in (\ref{eqn:newterms}). It is given by the isomorphism
\[
\Omega^{1,\bu}_Z \hotimes \Omega^{0,\bu}_W \xto{\omega^{2,0}_Z \otimes \id} \PV^{1,\bu}_Z \hotimes \PV^{0,\bu}_W
\]
induced by the holomorphic symplectic form on $Z$. The other dashed arrow corresponds to the BV antibracket with the second term in (\ref{eqn:newterms}).
It is given by the composition
\[
\Omega^{0,\bu}_Z \hotimes \Omega^{1,\bu}_W \xto{\id \otimes \del^w} \Omega^{0,\bu}_Z \hotimes \Omega^{2,\bu}_W \xto{\id \otimes \Omega_W^{-1}} \PV^{0,\bu}_Z \hotimes \PV^{1,\bu}_W ,
\]
given by applying the holomorphic de Rham operator along $W$ followed by contracting with the inverse holomorphic volume form along $W$.

We replace this linear BRST complex, up to quasi-isomorphism, with a smaller BRST complex. Consider the complex
\beqn\label{eqn:zw}
\Omega^{0,\bu}_Z \hotimes \Omega^{\bu,\bu}_W \hotimes \Omega^\bu_L = \oplus_{k =0}^3 \Omega^{0,\bu}_Z \hotimes \Omega^{k,\bu}_W \hotimes \Omega^\bu_L
\eeqn
which is equipped with the differential $\dbar^z + \dbar^w + \del^w + \d_{\RR}$. Write $\alpha = \alpha^0 + \cdots + \alpha^3$ for a field in this complex, using the decomposition on the right hand side. The full Poisson Chern--Simons action $S_{pCS}$ equips this complex with the structure of a dg Lie algebra.

We define a non-linear map of BRST complexes from \eqref{eqn:zw} to the twisted theory \eqref{eqn:twisteddiagram} by the equations
\begin{multline}
\mu_z = (1-\til \alpha^3) (\del_{z_1} \wedge \del_{z_2}) \vee \del^z \alpha^0 , \quad \mu_w = (\del_{w_1} \wedge \del_{w_2} \wedge \del_{w_3}) \vee \alpha^2, \quad \nu = \til{\alpha}^3 \\
\beta = \alpha^0 , \quad \gamma_w = \alpha^1 , \quad \gamma_z = 0 . \label{eqn:g2map}
\end{multline}
In the above equation we have introduced the notation $\til{\alpha}^3 = \Omega_W^{-1} \vee \alpha^3$. Notice that the only non-linearity appears in the definition of $\mu_z$.

The restriction of the kinetic terms $\int \gamma (\dbar + \d_{\RR}) \mu + \beta (\dbar + \d_{\RR}) \nu$ along \eqref{eqn:g2map} is
\beqn\label{eqn:kin1}
\int \sum_{k=0}^3 \alpha^k (\dbar + \d_{\RR}) \alpha^{3-k} .
\eeqn
The restriction of the kinetic term $\int \beta \div \mu$ along \eqref{eqn:g2map} is
\beqn\label{eqn:kin2}
\int \alpha^0 \del^w \alpha^2 - \int \alpha^0 \del^z \alpha^0 \del^z \alpha^3 .
\eeqn
Finally, the restriction of the kinetic term $\frac12 \int \gamma \del^w \gamma$ in \eqref{eqn:delta2} along \eqref{eqn:g2map} is
\beqn\label{eqn:kin3}
\int \frac12 \alpha^1 \del^w \alpha^1 .
\eeqn
Together, the quadratic parts of \eqref{eqn:kin1}--\eqref{eqn:kin3} give the kinetic term in Poisson Chern--Simons theory.

The formulas above show that the linear terms in \eqref{eqn:g2map} define a map of linear BRST complexes. Applying the evident contracting homotopy, we see that this map is a quasi-isomorphism. We will show that the full non-linear map intertwines the action functionals up to cohomologically exact terms, and hence defines an equivalence of BV theories.

We substitute the values for the fields in \eqref{eqn:g2map} into the original eleven-dimensional action. Notice that any terms involving $\gamma_z$ can be discarded. Restricting \eqref{eqn:delta1} along this map, we obtain the action functional
\beqn\label{eqn:rest1}
\frac12 \int \frac{1}{1-\til\alpha^3} \del^w \alpha^1 (\til \alpha^2)^2 \d^2 z + \int \alpha^2 \del^z \alpha^0 \del^z \alpha^1 + \frac12 \int (1-\alpha^3) \del^z \alpha^0 \del^z \alpha^0 .
\eeqn
Here, $\til \alpha^2$ denotes the element of $\Omega^{0,\bu}_Z \hotimes \PV^{1,\bu}_W \hotimes \Omega^\bu_L$ corresponding to $\alpha^2$, determined by the Calabi--Yau form $\d^3 w$. Notice that the very last term is equivalent to the functional $-\frac12\int \alpha^3 \del^z \alpha^0 \del^z \alpha^0$ since the quadratic part is a total derivative.
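Indeed, since $(\del^z)^2 = 0$ we have, up to the signs dictated by the parity of $\alpha^0$,
\[
\del^z \big( \alpha^0 \, \del^z \alpha^0 \big) = \del^z \alpha^0 \, \del^z \alpha^0 ,
\]
so the quadratic part of the last term in \eqref{eqn:rest1} is a total derivative and integrates to zero.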
There is only one cubic term left in \eqref{eqn:delta2} when we substitute the fields according to \eqref{eqn:g2map}. It is
\beqn
\frac16 \int \alpha^1 \del^z \alpha^1 \del^z \alpha^1.
\eeqn
Combining all of these terms, we see that the total action restricted along the map \eqref{eqn:g2map} is
\beqn\label{eqn:pcs1}
S_{pCS} (\alpha) + \int \frac12 \frac{1}{1-\til\alpha^3} \del^w \alpha^1 (\til \alpha^2)^2 \d^2 z
\eeqn
where $S_{pCS}$ is the Poisson Chern--Simons action in \eqref{eqn:pcsaction}. We will show that the term not appearing in $S_{pCS}(\alpha)$ is cohomologically trivial.

Consider the odd local functional
\beqn\label{eqn:triv1}
\frac16 \int \frac{1}{1 - \til \alpha^3} \alpha^2 (\til \alpha^2)^2 .
\eeqn
Applying the linearized BRST operator (in the non-minimal twist) this becomes
\[
\frac12 \int \frac{1}{1 - \til \alpha^3} \del^w \alpha^1 (\til \alpha^2)^2 + \frac16 \int \frac{1}{1 - \til \alpha^3} \del^w (\alpha^2) \alpha^2 (\til \alpha^2)^2 .
\]
The first term in this expression agrees with the term in \eqref{eqn:pcs1} which is not in $S_{pCS}(\alpha)$. The latter term $(1 - \til \alpha^3)^{-1} \del^w (\alpha^2) \alpha^2 (\til \alpha^2)^2$ is of polynomial degree $\geq 4$. Since this term has trivial self BV bracket, it determines a cocycle for the dg Lie algebra underlying Poisson Chern--Simons theory. Since it is manifestly translation invariant, it arises via descent from a cocycle in the dg Lie algebra of $\infty$-jets of fields at $0 \in \CC^2 \times \RR^7$. This dg Lie algebra is quasi-isomorphic to the Lie algebra $\CC[[z_1,z_2]]$ equipped with the holomorphic Poisson bracket. (This is the formal power series version of the Lie algebra from Proposition~\ref{prop:ham}.)

There is a weight grading on this Lie algebra, given by declaring $|z_1^{n+1} z_2^{m+1}| = n+m$; in turn, this induces a grading on the Gelfand--Fuks Lie algebra cohomology. The weight of the Gelfand--Fuks class corresponding to our deformation is $+2$, since no derivatives of $z_1$ or $z_2$ appear. Results from~\cite{Fuks} show that there is no cohomology in this weight. As such, the cocycle must define a trivial deformation of the dg Lie algebra up to equivalence. This completes the proof that the non-minimal twist is equivalent to Poisson Chern--Simons theory.
\end{proof}

\section{Twisted supergravity on AdS space} \label{sec:ads}

So far, we have mostly given evidence for the eleven-dimensional theory as a twist of supergravity in a flat background. We now turn to twisted versions of AdS backgrounds of eleven-dimensional supergravity.

In M-theory, AdS backgrounds arise from backreacting some number $N$ of branes. For M2 branes, the backreacted geometry is ${\rm AdS}_4 \times S^7$. For M5 branes, the backreacted geometry is ${\rm AdS}_7 \times S^4$. According to the AdS/CFT correspondence, supergravity on such backgrounds should be dual to the relevant worldvolume theory in the large-$N$ limit.

In this section, we do not directly refer to the worldvolume theories of the holomorphically twisted M2 and M5 branes. Rather, we identify the fields sourcing the branes at the level of the twisted eleven-dimensional theory. In turn, we give a proposal for the twisted AdS background. We will show that the twist of the superconformal algebra is a global symmetry of this twisted background.

\subsection{Superconformal algebras}

The complex form of the algebra of isometries for supergravity in both the ${\rm AdS}_4$ and ${\rm AdS}_7$ backgrounds is $\lie{osp}(8|2)$ (though their real forms differ).
This agrees with the complex form of the 6d $\cN=(2,0)$ superconformal algebra and the 3d $\cN=8$ superconformal algebra. The bosonic part of this algebra is isomorphic to $\lie{so}(8) \oplus \lie{sp}(2) \cong \lie{so}(8) \oplus \lie{so}(5)$.

The minimal supercharge $Q$ acting on the eleven-dimensional supersymmetry algebra is an element of this superconformal algebra. The $Q$-cohomology of $\lie{osp}(8|2)$ is isomorphic to $\lie{osp}(6|1)$. (Twisted superconformal symmetry in six dimensions is studied in detail by the second two authors in~\cite{SWsuco2}.) This super Lie algebra will play the role of the isometries in the twisted AdS background.

\subsection{The ${\rm AdS}_4 \times S^7$ background}

In this section we introduce the analog of the ${\rm AdS}_4 \times S^7$ background in our conjectural description of the minimal twist of eleven-dimensional supergravity.

\parsec[]
Decompose the eleven-dimensional manifold $\CC^5 \times \RR$ as
\[
\CC^4_w\times \CC_z \times \RR .
\]
Analogous to before, the ${\rm AdS}_4 \times S^7$ background arises from backreacting M2 branes. Consider a stack of $N$ M2 branes wrapping $\R\times \C_z$. A natural interaction to consider is
\[
I_{M2}(\gamma) = N\int_{\R \times \C_z} \gamma + \cdots
\]
which is nonzero only on the component of $\gamma$ in $\Omega^1(\R)\otimes \Omega^{1,1}(\C^5)$. Unlike the case of M5 branes, the coupling does not involve choosing a primitive for a field strength---it is an electric coupling. We have only indicated the lowest order coupling; the $\cdots$ stand for corrections which are higher order in the fields of the eleven-dimensional theory and explicitly involve the fields of the worldvolume theory.

This coupling is justified by comparison with the physical theory and by dimensional reduction. Indeed, as discussed in~\S\ref{s:components}, the component of $\gamma$ which participates in the above coupling is a component of the $C$-field of eleven-dimensional supergravity. Thus, the proposal mirrors electric couplings of M2 branes in the physical theory, which simply involve integrating the $C$-field over the worldvolume of the brane.

Moreover, reducing on a circle transverse to the M2 brane yields the $SU(4)$ twist of type IIA supergravity on $\R^2\times \C_z\times \C^3$ with $N$ $D2$ branes wrapping $\R\times \C_z$. As is shown in \cite{CLsugra}, an electric coupling of D2 branes to the $SU(4)$ twist of type IIA supergravity is given by
\[
I_{D2}(\gamma) = N \int_{\R\times\C_z} \gamma + \cdots
\]
where $\gamma$ now denotes the 1-form field of the $SU(4)$ twist of type IIA supergravity. It is immediate that the pullback of $I_{M2}$ along the map in the proof of Proposition \ref{prop:dimred} recovers $I_{D2}$.

\parsec[sec:m2backreact]
The backreacted geometry will be given by a solution to the equations of motion upon deforming the eleven-dimensional action by the interaction $I_{M2}(\gamma)$. Varying the deformed action with respect to $\gamma$, we obtain the equation of motion
\beqn\label{eqn:ads4eom1}
\dbar \mu + \frac12 [\mu, \mu] + \partial\gamma\partial\gamma = N \Omega^{-1} \delta_{w=0} .
\eeqn
Here $[-,-]$ is the Schouten bracket. Varying $\beta$, we obtain the equation of motion
\beqn\label{eqn:adseom2}
\div \mu = 0 .
\eeqn

\begin{lem}
Let
\[
F_{M2} = \frac{6}{(2\pi i)^4} \frac{\sum_{a=1}^4 (-1)^{a-1} \wbar_a \d \wbar_1 \cdots \Hat{\d \wbar_a} \cdots \d \wbar_4}{\|w\|^{8}} \partial_z .
\]
Then the background where $\mu = N F_{M2}$ and $\gamma = 0$ satisfies the above equations of motion in the presence of a stack of $N$ M2 branes:
\begin{align*}
\dbar (N F_{M2}) + \frac12 [N F_{M2}, N F_{M2}] & = N \Omega^{-1} \delta_{w=0} \\
\div (N F_{M2}) & = 0 .
\end{align*}
Here we set all components of the field $\gamma$ equal to zero (as well as the fields $\nu,\beta$).
\end{lem}

\begin{proof}
Upon specializing $\gamma = 0$, the last term in the first equation above vanishes. The equation $\dbar F_{M2} = \Omega^{-1} \delta_{w=0}$ characterizes the Bochner--Martinelli kernel representing the residue class on $\CC^4 \, \setminus \, 0$. It is clear that $\div F_{M2} = 0$ and
\[
[F_{M2}, F_{M2}] = 0
\]
by simple type reasons.
\end{proof}

\parsec[]
To provide evidence for the claim that this is the twisted analog of the AdS geometry, we will show that the twist of the symmetries present in the physical theory is witnessed in the twisted theory in this background. We have recalled that the $Q$-cohomology of $\lie{osp}(8|2)$ is isomorphic to the super Lie algebra $\lie{osp}(6|1)$. We will define an embedding of $\lie{osp}(6|1)$ into the eleven-dimensional theory on $\CC^5 \times \RR \setminus \{w=0\}$ which corresponds to the twist of the 3d superconformal algebra. We first focus on the case where the flux $N=0$, for which the embedding can be extended to all of $\CC^5 \times \RR$.

\parsec[]
The bosonic part of $\lie{osp}(6|1)$ is the direct sum Lie algebra $\lie{sl}(4) \oplus \lie{sl}(2)$. The Lie algebra $\lie{sl}(2)$ represents (holomorphic) conformal transformations in $\CC_z$, which are inherited from the natural M\"obius group action on~$P^1(\C)$; the vector fields representing these transformations are not all divergence-free, and as such must be slightly adjusted. The Lie algebra $\lie{sl}(4)$ represents rotations along the plane $\CC^4_w$.
\begin{itemize}[leftmargin=\parindent]
\item The bosonic summand $\lie{sl}(2)$ is mapped to the vector fields:
\[
\frac{\del}{\del z} ,\quad z \frac{\del}{\del z} - \frac14 \sum_{a=1}^4 w_a \frac{\del}{\del w_a} , \quad z \left(z \frac{\del}{\del z} - \frac12 \sum_{a=1}^4 w_a \frac{\del}{\del w_a} \right) \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) .
\]
Notice that these vector fields are divergence-free and reduce to the usual holomorphic conformal transformations along $w=0$.
\item The bosonic summand $\lie{sl}(4)$ is mapped to four-dimensional rotations:
\[
\sum_{a,b=1}^4 B_{ab} w_a \frac{\del}{\del w_b} \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) , \quad (B_{ab}) \in \lie{sl}(4) .
\]
\end{itemize}
The odd part of the algebra $\lie{osp}(6|1)$ is $\wedge^2 W \otimes R$ where $W$ is the fundamental $\lie{sl}(4)$ representation and $R$ is the fundamental $\lie{sl}(2)$ representation. It is natural to split $R = \CC_{+1} \oplus \CC_{-1}$, so that the odd part decomposes as
\[
(\wedge^2 \CC^4)_{+1} \oplus (\wedge^2 \CC^4)_{-1} .
\]
\begin{itemize}[leftmargin=\parindent]
\item The fermionic summand $(\wedge^2 \CC^4)_{+1}$ consists of the supertranslations. It is mapped to the fields:
\[
\frac{1}{2} (w_a \d w_b - w_b \d w_a) \in \Omega^{1,0}(\CC^5) \otimes \Omega^0(\RR) , \quad a,b=1,2,3,4 .
\]
\item The fermionic summand $(\wedge^2 \CC^4)_{-1}$ consists of the remaining superconformal transformations. It is mapped to the fields:
\[
\frac{1}{2} z (w_a \d w_b - w_b \d w_a) \in \Omega^{1,0}(\CC^5) \otimes \Omega^0(\RR) , \quad a,b=1,2,3,4.
\]
\end{itemize}

\begin{lem}\label{lem:m2emb}
These assignments define an embedding of $\lie{osp}(6|1)$ into the linearized BRST cohomology of the fields of the eleven-dimensional theory on $\CC^5 \times \RR$. Equivalently, they define an embedding
\[
i_{M2} \colon \lie{osp}(6|1) \hookrightarrow E(5,10) .
\]
\end{lem}

\begin{proof}
The second assertion follows from Theorem \ref{thm:global}, which shows that, as a super Lie algebra, the linearized BRST cohomology of the global symmetry algebra of the eleven-dimensional theory on $\CC^5 \times \RR$ is the trivial central extension of $E(5,10)$.

Recall that the odd part of $E(5,10)$ is precisely the module of closed two-forms on $\CC^5$. To explicitly describe the embedding into $E(5,10)$ we simply apply the de Rham differential to the last two formulas above. Recall, we are using the holomorphic coordinates $(z,w_1,\ldots,w_4)$ on $\CC^5$ where $z$ is the holomorphic coordinate along the M2 brane.
\begin{itemize}[leftmargin=\parindent]
\item The fermionic summand $(\wedge^2 \CC^4)_{+1}$ embeds into closed two-forms as
\[
\d w_a \wedge \d w_b, \quad a,b=1,2,3,4.
\]
\item The fermionic summand $(\wedge^2 \CC^4)_{-1}$ embeds into closed two-forms as
\[
z \d w_a \wedge \d w_b + \frac12 \d z \wedge (w_a \d w_b - w_b \d w_a) , \quad a,b=1,2,3,4.
\]
\end{itemize}
\end{proof}

\parsec[]
Next, we turn on $N \ne 0$ units of nontrivial flux. Since not all of the fields we wrote down above commute with the flux $N F_{M2}$, they are not compatible with the total differential $\delta^{(1)} + [N F_{M2}, -]$ acting on the fields supported on $\CC^5 \times \RR \setminus \{w=0\}$. Nevertheless, we have the following.

\begin{prop} \label{prop:brads4}
There exist $N$-dependent corrections to the fields defining the embedding of $\lie{osp}(6|1)$ summarized above which are closed for the modified BRST differential $\delta^{(1)} + [N F_{M2},-]$. Furthermore, these order $N$ corrections define an embedding of $\lie{osp}(6|1)$ inside the cohomology of the fields of the eleven-dimensional theory on $\CC^5 \times \RR \setminus \{w=0\}$ with respect to the differential $\delta^{(1)} + [N F_{M2},-]$.
\end{prop}

\begin{proof}
Let $\cL(\CC^5 \times \RR \setminus \{w=0\})$ denote the super $L_\infty$ algebra obtained by parity shifting the fields of the eleven-dimensional theory. We make the identification
\[
(\CC^5 \times \RR) \setminus \{w=0\} \cong (\CC_w^4 \setminus 0) \times \CC_z \times \RR .
\]
Set $F = F_{M2}$ for notational convenience. Recall that we are viewing $F$ as an element of $\PV^{1,3}(\CC_w^4 \setminus 0) \otimes \Omega^{0,0}(\CC_z) \otimes \Omega^0(\RR)$. The operator $[F,-]$ acts on the fields according to two types of maps:
\begin{align*}
[F ,-] & \colon \PV^{i,\bu}(\CC^4_w \setminus 0) \otimes \PV^{j,\bu} (\CC_z) \otimes \Omega^\bu (\RR) \to \PV^{i,\bu+3}(\CC^4_w \setminus 0) \otimes \PV^{j,\bu} (\CC_z) \otimes \Omega^\bu (\RR) \\
[F,-] & \colon \Omega^{i,\bu}(\CC^4_w \setminus 0) \otimes \Omega^{j,\bu} (\CC_z) \otimes \Omega^\bu (\RR) \to \Omega^{i,\bu+3}(\CC^4_w \setminus 0) \otimes \Omega^{j,\bu} (\CC_z) \otimes \Omega^\bu (\RR).
\end{align*}
There is a spectral sequence converging to the cohomology of the fields with respect to the deformed differential $\delta^{(1)} + [N F, -]$ whose first page is the cohomology with respect to the original linearized BRST differential $\delta^{(1)}$. Recall that the linearized BRST differential decomposes as
\[
\delta^{(1)} = \dbar + \d_{\RR} + \div |_{\mu \to \nu} + \del |_{\beta \to \gamma} .
\]
To compute this page, we use an auxiliary spectral sequence which simply filters by the holomorphic form and polyvector field type. The first page of this auxiliary spectral sequence is simply given by the cohomology with respect to $\dbar + \d_{\RR}$. This cohomology is given by
\begin{equation} \label{eqn:ads4ss}
\begin{tikzcd}[row sep = 1 ex]
+ & - \\
\hline
H^\bu(\CC^4\setminus 0, \T) \otimes H^\bu(\CC, \cO) & H^\bu(\CC^4 \setminus 0, \cO) \otimes H^\bu(\CC, \cO) \\
H^\bu(\CC^4\setminus 0, \cO) \otimes H^\bu(\CC, \T) \\
H^\bu(\CC^4\setminus 0, \cO) \otimes H^\bu(\CC, \cO) & H^\bu(\CC^4\setminus 0, \cO) \otimes H^\bu(\CC, \Omega^1) \\
& H^\bu(\CC^4\setminus 0, \Omega^1) \otimes H^\bu(\CC, \cO)
\end{tikzcd}
\end{equation}
where $\T$ denotes the holomorphic tangent sheaf, $\Omega^1$ denotes the sheaf of holomorphic one-forms, and $\cO$ is the sheaf of holomorphic functions.

The cohomology of $\CC$ is concentrated in degree zero and there is a dense embedding
\[
\CC[z] \hookrightarrow H^\bu(\CC, \cF)
\]
for $\cF = \cO, \T$, or $\Omega^1$. For $\cF = \cO, \T$, or $\Omega^1$, the cohomology $H^\bu(\CC^4 \setminus 0, \cF)$ is concentrated in degrees $0$ and $3$. There are the following dense embeddings
\begin{align*}
\CC[w_1,\ldots, w_4] & \hookrightarrow H^0(\CC^4 \setminus 0, \cO) \\
\CC[w_1,\ldots, w_4] \{\partial_{w_i}\} & \hookrightarrow H^0(\CC^4 \setminus 0, \T) \\
\CC[w_1,\ldots, w_4] \{\d w_i\} & \hookrightarrow H^0(\CC^4 \setminus 0, \Omega^1)
\end{align*}
and
\begin{align*}
(w_1\cdots w_4)^{-1} \CC[w_1^{-1},\ldots, w_4^{-1}] & \hookrightarrow H^3(\CC^4 \setminus 0, \cO) \\
(w_1\cdots w_4)^{-1} \CC[w_1^{-1},\ldots, w_4^{-1}] \{\partial_{w_i}\} & \hookrightarrow H^3(\CC^4 \setminus 0, \T) \\
(w_1\cdots w_4)^{-1} \CC[w_1^{-1},\ldots, w_4^{-1}] \{\d w_i\} & \hookrightarrow H^3(\CC^4 \setminus 0, \Omega^1) .
\end{align*}
It follows that (up to completion) the cohomology
\[
H^\bu(\cL(\CC^5 \times \RR \setminus \{w=0\}) , \dbar)
\]
is the direct sum of $H^\bu(\cL(\CC^5 \times \RR), \dbar)$ with
\begin{equation} \label{eqn:ads4ss2}
\begin{tikzcd}[row sep = 1 ex]
- & + \\
\hline
H^3(\CC^4\setminus 0, \cO)[z] \{\partial_{w_i}\} \ar[r, dotted, "\div"] & H^3(\CC^4 \setminus 0, \cO) [z] \\
H^3(\CC^4\setminus 0, \cO) [z] \partial_z \ar[ur, dotted, "\div"'] \\
H^3(\CC^4\setminus 0, \cO) [z] \ar[r, dotted, "\del"] \ar[dr, dotted, "\del"'] & H^3(\CC^4\setminus 0, \cO)[z] \d z \\
& H^3(\CC^4\setminus 0, \Omega^1)[z] \{\d w_i\} .
\end{tikzcd}
\end{equation}
The remaining piece of the original BRST operator is drawn in dotted lines.

The first page of the spectral sequence converging to the cohomology with respect to $\delta^{(1)} + [N F, -]$ is given by the cohomology of the global symmetry algebra on $\CC^5 \times \RR$, which we computed in \S \ref{sec:global}, plus the cohomology of the above complex with respect to the dotted-line operators. In this description, the image of the flux $F$ at this page in the spectral sequence corresponds to the class
\[
[F] = (w_1 \cdots w_4)^{-1} \partial_z \in H^3(\CC^4\setminus 0, \cO) [z] \partial_z .
\]
The next page of the spectral sequence is given by computing the cohomology with respect to the operator $[N F,-]$. As observed above, this operator maps Dolbeault degree zero elements to Dolbeault degree three elements. For degree reasons, there are no further differentials and the spectral sequence collapses after the second page.
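Schematically, the differential on this page acts only through two-term complexes of the form
\[
H^0(\CC^4 \setminus 0, \cF) \otimes \CC[z] \xto{[N[F], -]} H^3(\CC^4 \setminus 0, \cF') \otimes \CC[z] ,
\]
with $\cF, \cF'$ among $\cO$, $\T$, and $\Omega^1$ (a heuristic summary; the precise sheaves involved can be read off from \eqref{eqn:ads4ss}). Any differential on a later page would have to raise Dolbeault degree by more than three, and so must vanish.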
The embedding of $\lie{osp}(6|1)$ we wrote down in Lemma \ref{lem:m2emb} lands in the kernel of the original BRST operator $\delta^{(1)}$. To see that this embedding can be lifted to the full cohomology we need to check that any element in the image of the original embedding is annihilated by $\big[ N [F] , - \big]$. This is a direct calculation. For instance, recall that an element in the image of the odd summand $(\wedge^2 \CC^4)_{-1}$ (which corresponds to a superconformal transformation) is of the form $z(w_a \d w_b - w_b \d w_a)$. We have
\[
\big[[F] , z(w_a \d w_b - w_b \d w_a) \big] = (w_1\cdots w_4)^{-1} (w_a \d w_b - w_b \d w_a) = 0
\]
since the class $(w_1\cdots w_4)^{-1}$ is in the kernel of the operator given by multiplication by $w_a$ for any $a = 1,\ldots, 4$.
\end{proof}

\subsection{The ${\rm AdS}_7 \times S^4$ background}

In this section we introduce the analog of the ${\rm AdS}_7 \times S^4$ background in our description of the minimal twist of eleven-dimensional supergravity. Decompose the eleven-dimensional spacetime as $\C^3_z\times \C^2_w\times \R$.

\parsec[sec:m5coupling]
Analogous to the physical theory, the ${\rm AdS}_7 \times S^4$ background in the holomorphic twist will arise by backreacting M5 branes. To this end, we begin by discussing how the eleven-dimensional theory couples to M5 branes.

Consider a stack of $N$ M5 branes wrapping
\[
\{w_1=w_2=t=0\} \subset \C^3_z\times \C^2_w\times \R.
\]
It is natural to consider the nonlocal interaction
\[
I_{M5} = N\int_{\C^3_z} \div^{-1}\mu \vee \Omega +\cdots .
\]
Note that this expression is only nonzero on the component of $\mu$ in $\PV^{1,3}$.

We argue that this coupling is consistent with expectations from the physical theory and from dimensional reduction. The twisted field $\mu^{1,3}$ is a component of the Hodge star of the $G$-flux in the physical theory (\S\ref{s:components}). In the physical theory, M5 branes magnetically couple to the $C$-field; the coupling involves choosing a primitive for the Hodge star of the $G$-flux and integrating it over the M5 worldvolume. Our twist contains no fields corresponding to components of such a primitive; hence such a magnetic coupling is reflected in the appearance of $\div^{-1}$.

\parsec[]
We obtain a deeper justification for this coupling through dimensional reduction to type IIA supergravity. Reducing on the circle along the directions the M5 branes wrap yields the $SU(4)$ invariant twist of type IIA supergravity on $\CC^4 \times \RR^2$ with $N$ $D4$ branes wrapping $\CC^2 \times \RR$. In \cite{CLsugra}, it is shown that the magnetic coupling of $D4$ branes to the $SU(4)$ twist of IIA is of the form
\[
N \int _{\C^2 \times \RR} \div^{-1} \mu \vee \Omega_{\C^4} + \cdots .
\]
Again, we have only explicitly indicated the first-order piece of the coupling.

\parsec[s:m5backreact]
The backreacted geometry will be given by a solution to the equations of motion upon deforming the eleven-dimensional action by the interaction $I_{M5}(\mu)$. Varying the potential $\div^{-1} \mu$, we obtain the following equation of motion involving the field $\gamma$:
\beqn\label{eqn:m5eom1}
(\dbar + \d_\RR) \del \gamma + \div \left(\frac{1}{1-\nu} \mu\right) \wedge \del \gamma = N \delta_{w_1=w_2=t=0} .
\eeqn
Notice that there is an extra derivative compared to the equation of motion arising from varying the field $\mu$. This equation only depends on $\gamma$ through its field strength $\del \gamma$.
Varying $\gamma$ we obtain the equation of motion
\beqn\label{eqn:m5eom2}
(\dbar + \d_\RR) \mu + \del \gamma \del \gamma = 0 .
\eeqn
Again, this only depends on $\gamma$ through its field strength $\del \gamma$.

\begin{lem} \label{lem:ads7flux}
Let
\[
F_{M5} = \frac{1}{(2\pi i)^3} \frac{\wbar_1 \d \wbar_2 \wedge \d t - \wbar_2 \d \wbar_1 \wedge \d t + t \d \wbar_1 \wedge \d \wbar_2}{(\|w\|^2 + t^2)^{5/2}} \wedge \d w_1 \wedge \d w_2 .
\]
Then the configuration $\del\gamma = N F_{M5}$, $\mu = 0$, $\nu = 0$ satisfies the equations of motion in the presence of a stack of $N$ M5 branes sourced by the term $N \delta_{w_1=w_2=t=0}$:
\begin{align*}
\dbar (NF_{M5}) + \d_{\RR} (NF_{M5}) & = N \delta_{w_1=w_2=t=0} \\
(NF_{M5}) \wedge (NF_{M5}) & = 0 .
\end{align*}
Here, we set all components of the field $\mu$ equal to zero (as well as the fields $\nu,\beta$).
\end{lem}

\begin{proof}
The first equation,
\[
\dbar F_{M5} + \d_{\RR} F_{M5} = \delta_{w_1=w_2=t=0},
\]
characterizes the kernel representing the residue class for a four-sphere in
\[
(\CC^2 \times \RR) \setminus 0 \simeq S^4 \times \RR .
\]
That is,
\[
\oint_{S^4} N F_{M5} = N
\]
for any four-sphere centered at $0 \in \CC^2 \times \RR$. The second equation $F_{M5} \wedge F_{M5} = 0$ follows by simple type reasons.
\end{proof}

\parsec[s:m5embedding]
To provide evidence for the claim that this is the twisted analog of the AdS geometry, we will again show that the twist of the symmetries present in the physical theory on ${\rm AdS}_7 \times S^4$ appears in the twisted theory on this background. We have recalled that the $Q$-cohomology of $\lie{osp}(8|2)$ is isomorphic to the super Lie algebra $\lie{osp}(6|1)$. We will define an embedding of $\lie{osp}(6|1)$ into the eleven-dimensional theory on $\CC^5 \times \RR \setminus \{w_1=w_2=t=0\}$ which corresponds to the twist of the 6d superconformal algebra.

We first focus on the case where the flux $N=0$. In this case the embedding can be extended to all of $\CC^5 \times \RR$. Recall that we have chosen coordinates of the form
\[
\CC^5 \times \RR = \CC_z^3 \times \CC_w^2 \times \RR_t
\]
with $z_i, i=1,2,3$ and $w_a, a=1,2$. The stack of M5 branes wraps $\{w_1=w_2=t=0\}$.

The embedding of the bosonic piece of $\lie{osp}(6|1)$ can be described as follows. Recall that the bosonic part of $\lie{osp}(6|1)$ is the direct sum Lie algebra
\[
\lie{sl}(4) \oplus \lie{sl}(2) ,
\]
which we write as $\lie{sl}(W) \oplus \lie{sl}(R)$ with $W$ and $R$ complex vector spaces of dimensions four and two, respectively. The roles of the $\lie{sl}(4)$ and $\lie{sl}(2)$ summands are interchanged compared to the case of the M2 brane. The Lie algebra $\lie{sl}(4)$ represents (holomorphic) conformal transformations along $\CC^3_z$, again coming from the natural action on $P^3(\C)$. Since not all such infinitesimal transformations are divergence-free, the precise vector fields must be adjusted. The Lie algebra $\lie{sl}(2)$ represents rotations in $\CC^2_w$; the vector fields representing these transformations are automatically divergence-free.

In more detail, the embedding of the bosonic piece can be given by the following explicit formulas.
\begin{itemize}[leftmargin=\parindent]
\item The bosonic abelian subalgebra $\CC^3 \subset \lie{sl}(4)$ of translations is mapped to the obvious vector fields
\[
\frac{\del}{\del z_i} \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) , \quad i=1,2,3.
\]
\item The bosonic subalgebra $\lie{sl}(3) \subset \lie{sl}(4)$ is mapped to the rotations
\[
A_{ij} z_i \frac{\del}{\del z_j} \in \PV^{1,0}(\CC^5)\otimes \Omega^0(\RR) , \quad (A_{ij}) \in \lie{sl}(3) .
\]
\item The bosonic subalgebra $\CC \subset \lie{sl}(4)$ corresponding to rescaling $\C^3$ is mapped to the element
\[
\sum_{i=1}^3 z_i \frac{\del}{\del z_i} - \frac32 \sum_{a=1}^2 w_a \frac{\del}{\del w_a} \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) .
\]
Notice that this vector field is divergence-free and restricts to the ordinary dilation (Euler vector field) along $w=0$.
\item The bosonic subalgebra of $\lie{sl}(4)$ describing special conformal transformations on $\CC^3$ is mapped to the elements
\[
z_j \left(\sum_{i=1}^3 z_i \frac{\del}{\del z_i} - 2 \sum_{a=1}^2 w_a \frac{\del}{\del w_a} \right) \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) .
\]
Notice that these vector fields are divergence-free and restrict to the ordinary special conformal transformations along $w=0$.
\item The bosonic summand $\lie{sl}(2)$ ($R$-symmetry) is mapped to the triple
\[
w_1 \frac{\del}{\del w_2}, w_2 \frac{\del}{\del w_1}, \frac12 \left(w_1 \frac{\del}{\del w_1} - w_2 \frac{\del}{\del w_2}\right) \in \PV^{1,0}(\CC^5) \otimes \Omega^0(\RR) .
\]
\end{itemize}
The odd part of the algebra $\lie{osp}(6|1)$ is $\wedge^2 W \otimes R$ where $W$ is the fundamental $\lie{sl}(4)$ representation and $R$ is the fundamental $\lie{sl}(2)$ representation. It is natural to split $W = L \oplus \CC$ with $L = \CC^3$ the fundamental $\lie{sl}(3) \subset \lie{sl}(4)$ representation. Then the odd part decomposes as
\[
L \otimes R \oplus \wedge^2 L \otimes R \cong \CC^3 \otimes \CC^2 \oplus \wedge^2 \CC^3 \otimes \CC^2 .
\]
\begin{itemize}
\item The summand $L \otimes R$ consists of the remaining 6d supertranslations. It is mapped to the fields
\[
z_i \d w_a \in \Omega^{1,0}(\CC^5) \otimes \Omega^0(\RR) ,\quad a=1,2, \quad i =1,2,3.
\]
\item The summand $\wedge^2 L \otimes R$ consists of the remaining 6d superconformal transformations. It is mapped to the fields
\[
\frac12 w_a (z_i \d z_j - z_j \d z_i) \in \Omega^{1,0}(\CC^5)\otimes \Omega^0(\RR) , \quad a = 1,2, \quad i,j = 1,2,3.
\]
\end{itemize}

\begin{lem}
These assignments define an embedding of $\lie{osp}(6|1)$ into the linearized BRST cohomology of the fields of the eleven-dimensional theory on $\CC^5 \times \RR$. Equivalently, they define an embedding
\[
i_{M5} \colon \lie{osp}(6|1) \hookrightarrow E(5,10) .
\]
\end{lem}

\begin{proof}
To explicitly describe the embedding into $E(5,10)$ we simply apply the de Rham differential to the last two formulas above. Recall, we are using the holomorphic coordinates $(z_1,z_2,z_3, w_1,w_2)$ on $\CC^5$ where the $z_i$ are the holomorphic coordinates along the M5 brane.
\begin{itemize}
\item The fermionic summand $L \otimes R$ embeds into closed two-forms as
\[
\d z_i \wedge \d w_a, \quad i=1,2,3, \quad a=1,2.
\]
\item The fermionic summand $\wedge^2 L \otimes R$ embeds into closed two-forms as
\[
w_a \d z_i \wedge \d z_j + \frac12 \d w_a \wedge (z_i \d z_j - z_j \d z_i) , \quad i,j=1,2,3, \quad a=1,2.
\]
\end{itemize}
\end{proof}

\parsec[]
Next, we turn on $N \ne 0$ units of nontrivial flux. Since not all of the fields we wrote down above commute with the flux $N F_{M5}$, they are not compatible with the total differential $\delta^{(1)} + [N F_{M5}, -]$ acting on the fields supported on $\CC^5 \times \RR \setminus \{w_1=w_2=t=0\}$. Nevertheless, we have the following.
\begin{prop} \label{prop:brads7}
There exist $N$-dependent corrections to the embedding $i_{M5}$ which are compatible with the modified BRST differential $\delta^{(1)} + [N F_{M5},-]$. Furthermore, these $N$-dependent corrections define an embedding of $\lie{osp}(6|1)$ inside the cohomology of the fields of the eleven-dimensional theory on $\CC^5 \times \RR \setminus \CC^3$ with respect to the differential $\delta^{(1)} + [N F_{M5},-]$.
\end{prop}

\parsec[s:thfcohomology]
The proof of the above proposition follows from another indirect cohomological argument. Before getting to the proof, we introduce the relevant cohomology.

The eleven-dimensional theory is built from fields which live in the tensor product of complexes
\[
\Omega^{0,\bu}(\CC^5) \otimes \Omega^\bu(\RR).
\]
More precisely, this is where the fields $\beta$ and~$\nu$ live. The $\mu$ and~$\gamma$ fields live in versions of this complex where we take Dolbeault forms with coefficients in the holomorphic tangent and cotangent bundles, respectively.

Another way to think about this complex is to first consider the full de Rham complex $\Omega^\bu(\CC^5 \times \RR)$, which includes both holomorphic and anti-holomorphic forms in the $\CC^5$ direction. The dg algebra of all de Rham forms on $\CC^5 \times \RR$ has an ideal generated by the holomorphic one-forms $\{\d z_i\}_{i=1,\ldots,5}$. There is an isomorphism of dg algebras
\[
\Omega^{0,\bu}(\CC^5) \otimes \Omega^\bu(\RR) \cong \Omega^\bu(\CC^5 \times \RR) \, / \, (\d z_1,\ldots, \d z_5) .
\]
The advantage of this presentation is that we can define a complex associated to more general manifolds that are not products of a complex manifold with a smooth manifold.\footnote{More generally, we are describing the cohomology of a manifold equipped with a topological holomorphic foliation.}

For the M5 brane, it was convenient to rename the holomorphic coordinates on $\CC^5$ to $z_1,z_2,z_3,w_1,w_2$. At the twisted level, the geometry arising from backreacting M5 branes is based on the manifold
\[
\CC^5 \times \RR \setminus \CC^3 \cong \CC_z^3 \times (\CC^2_w \times \RR \setminus 0) .
\]
The $\beta$ and~$\nu$ fields of the eleven-dimensional theory on this submanifold of $\CC^5 \times \RR$ live in the complex
\[
\Omega^\bu\bigg(\CC^5 \times \RR \setminus \CC^3\bigg) \, / \, (\d z_1,\d z_2,\d z_3, \d w_1, \d w_2) .
\]
The $\mu$ and~$\gamma$ fields live in similar complexes, where we introduce a (trivial) vector bundle on $\CC^5 \times \RR \setminus \CC^3$. Since the $\CC^3$ wraps $w_1=w_2=t=0$ we can apply a version of the K\"unneth formula to identify this complex with
\[
\Omega^{0,\bu}(\CC^3_z) \otimes \bigg( \Omega^\bu\left(\CC^2_w \times \RR \setminus 0 \right) \, / \, (\d w_1, \d w_2) \bigg).
\]
The cohomology of the Dolbeault complex of $\CC^3_z$ is easy to compute. The cohomology of the factor in parentheses is concentrated in degrees zero and two. In degree zero, there is a dense embedding
\[
\CC[w_1,w_2] \hookrightarrow H^0 \bigg( \Omega^\bu\left(\CC^2_w \times \RR \setminus 0 \right) \, / \, (\d w_1, \d w_2) \bigg) .
\]
In degree two, there is a dense embedding
\[
w_{1}^{-1} w_2^{-1} \CC[w_1^{-1},w_2^{-1}] \hookrightarrow H^2 \bigg( \Omega^\bu\left(\CC^2_w \times \RR \setminus 0 \right) \, / \, (\d w_1, \d w_2) \bigg).
\]
It will be useful to explain this last embedding in more detail. Consider the homogeneous element $w_1^{-1} w_2^{-1}$.
This represents the class of the Dolbeault-de Rham two-form
\[
\frac{\wbar_1 \d \wbar_2 \wedge \d t - \wbar_2 \d \wbar_1 \wedge \d t + t \d \wbar_1 \wedge \d \wbar_2}{(\|w\|^2 + t^2)^{5/2}} .
\]
Notice that, after wedging with the volume form $\d w_1 \wedge \d w_2$ (and up to the overall prefactor $(2\pi i)^{-3}$), this is the unit flux ($N=1$) introduced in Lemma \ref{lem:ads7flux}. The homogeneous element $w_1^{-n-1} w_2^{-m-1}$ represents the class of the holomorphic derivatives $\partial_{w_1}^n \partial_{w_2}^{m}$ applied to this two-form.

Observe that, when restricted to $\CC^5 \times \RR \setminus \CC^3$, the holomorphic tangent bundle along $\CC^5$ is still trivializable.

\parsec[]
We now turn to the proof of Proposition~\ref{prop:brads7}. We proceed completely analogously to the case of backreacted M2 branes as in the proof of Proposition \ref{prop:brads4}.

\begin{proof}[Proof of Proposition \ref{prop:brads7}]
Let $\cL (\CC^5 \times \RR \setminus \{w_1=w_2=t=0\})$ denote the super $L_\infty$ algebra obtained by parity shifting the fields of the eleven-dimensional theory on $\CC^5 \times \RR \setminus \{w_1=w_2=t=0\}$.

There is a spectral sequence which converges to the cohomology of the fields with respect to the deformed linear BRST differential $\delta^{(1)} + [N F_{M5},-]$ whose first page is the cohomology with respect to the original linearized BRST differential $\delta^{(1)}$. Recall that the linearized BRST differential decomposes as
\[
\delta^{(1)} = \dbar + \d_{\RR} + \div |_{\mu \to \nu} + \del |_{\beta \to \gamma} .
\]
To compute this page, we use an auxiliary spectral sequence which simply filters by the holomorphic form and polyvector field type. The first page of this auxiliary spectral sequence is simply given by the cohomology of the fields supported on
\[
\CC^5 \times \RR \setminus \{w_1=w_2=t=0\} \cong \CC_z^3 \times (\CC^2_w \times \RR \setminus 0)
\]
with respect to $\dbar + \d_{\RR}$. To compute this cohomology we follow the discussion in \S \ref{s:thfcohomology}. Just as in the case of the M2 brane, we see that the $\dbar + \d_{\RR}$ cohomology is (up to completions) the direct sum of the cohomology on flat space $H^\bu(\cL(\CC^5 \times \RR), \dbar)$ with
\begin{equation} \label{eqn:ads7ss2}
\begin{tikzcd}[row sep = 1 ex]
+ & - \\
\hline
w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}][z_1,z_2,z_3] \{\partial_{w_i}\} \ar[r, dotted, "\div"] & w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}] [z_1,z_2,z_3] \\
w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}] [z_1,z_2,z_3] \{\del_{z_i}\} \ar[ur, dotted, "\div"'] \\
w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}] [z_1,z_2,z_3] \ar[r, dotted, "\del"] \ar[dr, dotted, "\del"'] & w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}][z_1,z_2,z_3] \{\d z_i\} \\
& w_1^{-1} w_2^{-1} \CC[w_1^{-1}, w_2^{-1}][z_1,z_2,z_3] \{\d w_i\} .
\end{tikzcd}
\end{equation}
Recall that the flux $F$ was defined as the image under $\del$ of some $\gamma$-type field. Therefore, the class $[F]$ does not live inside this page of the spectral sequence, but the operator $[[F], -]$ does act on this page nevertheless. For instance, if $f^i(z,w) \d z_i$ is a one-form living in $H^0(\CC^5, \Omega^1) \otimes H^0(\RR)$, then
\[
[ [F] , f^i (z,w) \d z_i ] = \ep_{ijk} w_1^{-1} w_2^{-1} \partial_{z_j} f^i(z,w) \del_{z_k}
\]
which is an element in
\[
\CC[w_1^{-1}, w_2^{-1}][z_1,z_2,z_3] \{\del_{z_i}\} \subset H^0(\CC^3, \T) \otimes H^2 \big(\Omega^\bu(\CC^2 \times \RR \setminus 0) / (\d w_1 , \d w_2) \big) .
\]
The first page of the spectral sequence converging to the cohomology with respect to $\delta^{(1)} + [N F, -]$ is given by the cohomology of the global symmetry algebra on $\CC^5 \times \RR$, which we computed in \S \ref{sec:global}, plus the cohomology with respect to the dotted-line operators in~\eqref{eqn:ads7ss2}.

The next page of the spectral sequence is given by computing the cohomology with respect to the operator $[N F,-]$. This operator maps Dolbeault-de Rham degree zero elements to Dolbeault-de Rham degree two elements. For degree reasons, there are no further differentials and the spectral sequence collapses after the second page.

The embedding of $\lie{osp}(6|1)$ for $N=0$ lives in the kernel of the original BRST operator $\delta^{(1)}$. To see that this embedding can be lifted to the full cohomology we need to check that any element in the image of the original embedding is annihilated by $\big[ N [F] , - \big]$. This is a direct calculation. For instance, recall that an element in the image of the odd summand $\wedge^2 L \otimes R = \wedge^2 \CC^3 \otimes \CC^2$ (which corresponds to a superconformal transformation) is of the form $w_a (z_i \d z_j - z_j \d z_i)$, $a=1,2$, $i,j=1,2,3$. We have
\[
\big[[F] , w_a (z_i \d z_j - z_j \d z_i)\big] = 2 \ep_{ijk} (w_1^{-1} w_2^{-1}) \cdot w_a \del_{z_k} = 0
\]
since the class $w_1^{-1} w_2^{-1}$ is in the kernel of the operator given by multiplication by $w_a$ for $a=1,2$. Verifying that the remaining elements in the image of $i_{M5}$ are in the kernel of $\big[ [F], -\big]$ is similar. This completes the proof.
\end{proof}

\section{Residual supersymmetry} \label{sec:susy}

In this section we consider the minimal twist of eleven-dimensional supersymmetry explicitly. We compute the residual supersymmetry algebra given by taking the cohomology of the eleven-dimensional supersymmetry algebra with respect to the minimal twisting supercharge. In order for this to map to the gauge symmetries of the eleven-dimensional theory, it is necessary to consider an extension of the eleven-dimensional supersymmetry algebra corresponding to the M2 brane. We will see how this extension is compatible, upon twisting by the minimal supercharge, with the central extension of $E(5,10)$ we found as the global symmetry algebra in the previous section.

\subsection{Supersymmetry in eleven dimensions} \label{sec:11dsusy}

The (complexified) eleven-dimensional supertranslation algebra is a complex super Lie algebra of the form
\[
\ft_{11d} = V \oplus \Pi S
\]
where $S$ is the (unique) spin representation and $V \cong \CC^{11}$ is the complex vector representation of~$\lie{so}(11, \CC)$. The bracket is the unique surjective $\lie{so}(11,\CC)$-equivariant map from the symmetric square of~$S$ to~$V$; this decomposes into three irreducibles,
\beqn\label{eqn:decomp}
\Sym^2(S) \cong V \oplus \wedge^2 V \oplus \wedge^5 V.
\eeqn
(As a check of dimensions: $\dim \Sym^2(S) = 528 = 11 + 55 + 462$, since $\dim S = 32$.) Denote by $\Gamma_{\wedge^1}, \Gamma_{\wedge^2}, \Gamma_{\wedge^5}$ the projections onto each of the summands above. The bracket in $\ft_{11d}$ is defined using the first projection
\[
[\psi, \psi'] = \Gamma_{\wedge^1} (\psi, \psi') .
\]
The super Poincar\'{e} algebra is
\[
\lie{siso}_{11d} = \lie{so}(11 , \CC) \ltimes \ft_{11d} .
\]
The $R$-symmetry is trivial in eleven-dimensional supersymmetry.

\subsection{Extensions of the supersymmetry algebra} \label{sec:m2brane}

Extensions of the supersymmetry algebra correspond to the existence of extended objects, such as branes, in the supergravity theory.
In eleven-dimensional supersymmetry, there are two such extensions corresponding to the M2 brane and the M5 brane. We begin by describing a less standard dg Lie algebra model for the M2 brane algebra. In the next section we will explain the relationship to other descriptions in terms of $L_\infty$ algebras \cite{Basu_2005,Bagger_2007,FSS}.

Our model for the M2 brane algebra is a dg Lie algebra extension of the super Poincar\'e algebra $\lie{siso}_{11d}$. Introduce the cochain complex $\Omega^{\bu}(\RR^{11})$ of (complex valued) differential forms on $\RR^{11}$ equipped with the de Rham differential $\d$. The M2 brane algebra arises as an extension of $\lie{siso}_{11d}$ by the cochain complex $\Omega^\bu(\RR^{11})[2]$ and is defined by a cocycle
\[
c_{M2} \in \clie^{2,+} \left(\lie{siso}_{11d} \; ; \; \Omega^\bu (\RR^{11})[2]\right) .
\]
The formula is
\[c_{M2} (\psi, \psi') = \Gamma_{\wedge^2}(\psi, \psi') \in \Omega^2(\RR^{11})\]
where $\Gamma_{\wedge^2}$ is the projection onto $\wedge^2 V$, thought of as the space of constant coefficient two-forms, as in the decomposition \eqref{eqn:decomp}.

\begin{dfn}
The algebra $\m2$ is the $\ZZ \times \ZZ/2$-graded dg Lie algebra defined by the extension of $\lie{siso}_{11d}$ by the cocycle $c_{M2}$.
\end{dfn}

Here, we are using a bigrading by $\ZZ \times \ZZ/2$. The super Poincar\'e algebra is concentrated in zero integer grading and carries its natural $\ZZ/2$ grading as a super Lie algebra. The complex $\Omega^{\bu}(\RR^{11})[2]$ is concentrated in integer degrees $[-2,9]$ and has even parity. The bracket in $\m2$ is of bidegree $(0,+)$ and the differential is of bidegree $(1,+)$.

\subsection{The minimal twist} \label{sec:mintwist}

Fix a supercharge $Q \in S$ satisfying $Q^2 = 0$ that is in the lowest stratum of the nilpotence variety. Bracketing with such a supercharge has a six-dimensional image in the space of (complexified) translations on $\RR^{11}$, and $Q$ defines the minimal twist of eleven-dimensional supersymmetry \cite{SWspinor}. We characterize the cohomology of the algebra $\m2$ with respect to this supercharge.

The supercharge $Q$ defines a maximal isotropic subspace $L \subset V$. In turn, we will decompose the super Poincar\'e algebra into $\lie{sl}(L) = \lie{sl}(5)$ representations. First, the defining and spinor representations decompose as
\deq{
V = L \oplus L^\vee \oplus \CC_t, \qquad S = \wedge^\bu L.
}
In the expression for $S$, we are omitting factors of $\det(L)^{\frac12}$ for simplicity. Also, $\lie{so}(11, \CC)$ decomposes as
\[
\lie{sl}(L) \oplus \wedge^2 L \oplus \wedge^2 L^\vee \oplus L \oplus L^\vee \oplus \C .
\]
Furthermore, the spin representation can be identified with
\[
S = \wedge^\bu (L) = \CC \oplus L \oplus \wedge^2 L \oplus \wedge^3 L \oplus \wedge^4 L \oplus \wedge^5 L .
\]
The element $Q$ lives in the first summand. Let
\[
{\rm Stab}(Q) = \lie{sl}(L) \oplus \wedge^2 L^\vee \oplus L^\vee \subset \lie{so}(11,\CC)
\]
be the stabilizer of $Q$. This is a parabolic subalgebra whose Levi factor is $\lie{sl}(5)$.

\subsection{$Q$-cohomology of $\m2$} \label{sec:m2branetwist}

Any element $Q \in S$ satisfying $Q^2 = 0$ determines a deformation of the dg Lie algebra $\m2$. To deform $\d$ by $Q$ we must break the $\ZZ \times \ZZ/2$ bigrading. The supercharge $Q$ is odd and of cohomological degree zero. Recall, the original differential on $\m2$ is the de Rham differential $\d$, which acts only on the summand $\Omega^\bu(\R^{11})[2]$ and is even of cohomological degree $+1$. Thus, only the totalized $\ZZ/2$ grading makes the differential $\d + [Q,-]$ homogeneous: $\d$ has bidegree $(1,+)$ while $[Q,-]$ has bidegree $(0,-)$, and both become odd after totalization.
\begin{dfn}
The $Q$-twist $\m2^Q$ of $\m2$ is the super dg Lie algebra whose differential is $\d + [Q,-]$. The bracket is unchanged.
\end{dfn}

Let $Q$ be a minimal supercharge satisfying $Q^2 = 0$. We first determine $H^\bullet(\m2^Q)$ as a super vector space.

\begin{lem}
As a $\ZZ/2$ graded space, the cohomology of the $Q$-twist $\m2^Q$ is
\beqn\label{eqn:susycoh}
L \oplus {\rm Stab}(Q) \oplus \Pi \left(\wedge^2 L^\vee\right) \oplus \CC
\eeqn
whose elements we denote by $(v, m, \psi, c)$.
\end{lem}

\begin{proof}
The cohomology of the non-centrally extended algebra was computed in \cite{SWspinor}; we briefly recall the result. The element $Q$ only acts nontrivially on the summands $\wedge^4 L$ and $\wedge^5 L$ in $S$. The image of $\wedge^4 L \cong L^\vee$ trivializes the antiholomorphic translations while the image of $\wedge^5 L$ trivializes the time translation. So, of the translations, only the holomorphic ones, which live in $L$, survive. The map
\[
[Q,-] \colon \lie{so}(11,\CC) \to S
\]
is the projection onto $\wedge^0 L \oplus \wedge^1 L \oplus \wedge^2 L$. The kernel of $[Q,-]$ is the stabilizer~${\rm Stab}(Q)$. In summary, the space of odd translations which survive cohomology is $\wedge^3 L \cong \wedge^2 L^\vee$; two such elements bracket to a holomorphic translation by taking the wedge product to get an element of $\wedge^4 L^\vee \cong L$. This completes the calculation of the cohomology.
\end{proof}

The main result of this subsection is the following:

\begin{prop}\label{prop:susycoh}
The cohomology of the $Q$-twist $H^\bu(\m2^Q)$ has the following structures:
\begin{enumerate}
\item As a super Lie algebra, $H^\bu(\m2^Q)$ is the natural extension of ${\rm Stab}(Q)$ together with the bracket
\beqn\label{eqn:susy2bra}
[\psi, \psi']_2 = \psi \wedge \psi' \in \wedge^4 L^\vee \cong L_v .
\eeqn
\item $\m2^Q$ is not formal as a super dg Lie algebra. As a super $L_\infty$ algebra, the $Q$-twist is equivalent to \eqref{eqn:susycoh} with the $2$-brackets described in (1), where we additionally introduce the $3$-ary bracket
\beqn\label{eqn:susy3bra}
[v, v', \psi]_3 = 4 \<v \wedge v', \psi\> \in \CC_b .
\eeqn
\end{enumerate}
\end{prop}

It will be useful to list the formulas for the brackets in terms of coordinates. Let $\{z_i\}$ denote a basis for $L^\vee$, whose elements we also think of as linear coordinates on $\CC^5$. Let $\{\partial_{z_i}\}$ be the dual basis for $L$, whose elements we also think of as translation-invariant vector fields. The $2$-ary bracket above is
\[
[z_i \wedge z_j, z_k \wedge z_l]_2 = \ep_{ijklm} \partial_{z_m}
\]
and the $3$-ary bracket is
\[
[\partial_{z_i}, \partial_{z_j}, z_{k} \wedge z_{\ell}]_3 = 4 (\delta^i_k \delta^j_\ell - \delta^i_\ell \delta^j_k) .
\]

\parsec[]
One way to prove the proposition above is to apply homotopy transfer directly to $\m2^Q$, just as we did in the proof of Theorem \ref{thm:global} to deduce the form of the $3$-ary bracket. Instead, we will use the following dg Lie algebra model for $\m2^Q$ to prove Proposition~\ref{prop:susycoh}. This model also has the advantage of being more directly related to the eleven-dimensional supergravity theory.

\begin{lem} \label{lem:gmodel}
Let $\fg$ denote the following $\ZZ/2$ graded dg Lie algebra which as a cochain complex is
\[
H^\bu(\m2^Q) \oplus (L^\vee \xto{\id} \Pi L^\vee) .
\]
Denote the elements of the second summand by $(\lambda, \til\lambda)$.
The Lie structure extends the one on $H^\bu(\m2^Q)$ described in (1) of Proposition \ref{prop:susycoh} together with the brackets \begin{align*} [v,\lambda] & = \<v, \lambda\> \in \CC_b \\ [v,\psi] & = \<v, \psi\> \in \Pi L^\vee_{\Tilde{\lambda}}. \end{align*} There is an $L_\infty$ map \[ \fg \rightsquigarrow \m2^Q \] which is a quasi-isomorphism of cochain complexes. \end{lem} \begin{proof} We embed $\fg$ into $\m2^Q$ in the following way: ${\rm Stab}(Q)$ and $L$ sit inside in the evident way. The central element maps to $c \mapsto - 1 \in \Omega^0(\RR^{11})$. The summand $L^\vee_\lambda$ is mapped to the linear functions in $\Omega^0(\RR^{11})$ and $\Pi L^\vee_{\Tilde{\lambda}}$ is sent to the constant coefficient one-forms in $\Pi \Omega^1(\RR^{11})$. It remains to define where $\psi \in \wedge^2 L$ is mapped. Notice that, at least naively, $\psi \in \wedge^2 L$ is not $Q$-closed due to the presence of the central extension. To embed $\wedge^2 L$ we introduce the operator \[ H \colon \Omega^2 (\RR^{11}) \to \Omega^1(\RR^{11}), \] which sends a two-form $\alpha$ to the one-form $H \alpha$ defined by the formula $(H \alpha) (x) = \int_0^x \alpha$, where we integrate over a straight-line path from $0$ to $x$. Notice that if $\alpha$ is $\d$-closed then $\d (H \alpha) = \alpha$. It follows that any element $\psi \in \wedge^2 L \subset S$ can be lifted to a closed element at the cochain level in $\m2^Q$ by the formula \[ \Tilde{\psi} = \psi - H \Gamma_{\wedge^2} (Q, \psi) \in \Pi S \oplus \Pi \Omega^1 . \] Thus, sending $\psi \mapsto \Tilde{\psi}$ defines a cochain map $\fg \to \m2^Q$. The Lie bracket $[\Tilde{\psi}, \Tilde{\psi}']$ agrees with $[\psi, \psi']$. On the other hand, in $\m2^Q$ there is the Lie bracket \[ [v,\Tilde{\psi}] = - L_v (H \Gamma_{\wedge^2} (Q, \psi)) = -\<v, \Gamma_{\wedge^2}(Q, \psi)\> - \d \<v, H \Gamma_{\wedge^2}(Q, \psi)\> . \] The first term agrees with the bracket $[v, \psi]_{\fg}$ in $\fg$. The other term is exact in $\m2^Q$ and can hence be corrected by the following bilinear map \[ v \otimes \psi \mapsto \<v, H \Gamma_{\wedge^2} (Q,\psi) \> \in L^\vee_\lambda . \] Together with the cochain map described above, this bilinear term prescribes the desired $L_\infty$ map. \end{proof} \parsec[] We now proceed to the proof of Proposition \ref{prop:susycoh}. \begin{proof}[Proof of Proposition \ref{prop:susycoh}] Using the model $\fg$, the first part of Proposition \ref{prop:susycoh} follows immediately. We deduce the second part using homotopy transfer. Recall that we described the cohomology of $\m2^Q$ in \eqref{eqn:susycoh}. Let $\delta$ denote the differential on $\fg$, which simply maps $L^\vee$ to $\Pi L^\vee$ by the identity map. We produce the homotopy data \begin{equation} \begin{tikzcd} \arrow[loop left]{l}{K}(\fg , \delta)\arrow[r, shift left, "q"] &(H^\bu(\m2^Q) \, , \, 0)\arrow[l, shift left, "i"] \: , \end{tikzcd} \end{equation} as follows. \begin{itemize}[leftmargin=\parindent] \item The operator $K$ annihilates $H^\bu(\m2^Q)$ and is the identity map~$K \colon \Pi L^\vee_{\til \lambda} \to L^\vee_\lambda$. \item The map $q$ is the identity on $H^\bu(\m2^Q)$ and annihilates the summand~$L^\vee \to \Pi L^\vee$. \item The map $i$ embeds $H^\bu(\m2^Q)$ in the obvious way. \end{itemize} It is immediate to verify that this prescribes valid homotopy transfer data. There is only a single term in the $L_\infty$ structure generated by homotopy transfer.
It is determined by the following tree diagram \begin{equation} \begin{tikzpicture} \begin{feynman} \vertex(a) at (-1,1) {$i(v)$}; \vertex(b) at (-1,0) {$i(\psi)$}; \vertex(c) at (-1,-1) {$i(v')$}; \vertex(d) at (0,0.5); \vertex(e) at (1,0); \vertex(f) at (2,0) {$q$}; \diagram* {(a)--(d), (b)--(d), (d)--[edge label = $K$](e), (c)--(e), (f)--(e)}; \end{feynman} \end{tikzpicture} \end{equation} together with a similar diagram with $v$ and $v'$ reversed. It is an immediate calculation to show that these trees recover the formula in (2) of Proposition \ref{prop:susycoh}. \end{proof} \subsection{Embedding supersymmetry into the eleven-dimensional theory} \label{s:residual} Consider now the super $L_\infty$ algebra $\cL$ underlying the eleven-dimensional theory on $\CC^5 \times \RR$. \begin{prop} Endow the cohomology of $\m2^Q$ with the $L_\infty$ structure of Proposition \ref{prop:susycoh} and let $\cL(\CC^5 \times \RR)$ be the super $L_\infty$ algebra underlying eleven-dimensional supergravity on $\CC^5 \times \RR$. There is a map of super $L_\infty$ algebras \[ H^\bu(\m2^Q) \rightsquigarrow \cL (\CC^5 \times \RR) . \] In particular, the $Q$-twisted algebra $\m2^Q$ is a symmetry of the eleven-dimensional theory on $\CC^5 \times \RR$. \end{prop} \begin{proof} Recall that the cohomology of $\m2^Q$ takes the following form \beqn \begin{tikzcd} \ul{even} & \ul{odd} & \ul{even} \\ L^\vee & (\wedge^2 L^\vee)_2 & L \\ (\wedge^2 L^\vee)_1 & & \\ \lie{sl}(5) && \CC_b \end{tikzcd} \eeqn The left-hand column is ${\rm Stab}(Q)$. The subscripts are used to distinguish between the two copies of $\wedge^2 L^\vee$. The $L_\infty$ map from the dg Lie model $\fg$ to the fields of the twisted eleven-dimensional supergravity theory has a linear piece $\Phi^{(1)}$ and a quadratic piece $\Phi^{(2)}$. Define the linear map $\Phi^{(1)} \colon \fg \to \cL$ as follows: \begin{align*} L^\vee & \mapsto 0 \\ \wedge^2 L_1^\vee & \mapsto 0 \\ z_i \wedge z_j \in \wedge^2 L^\vee_2 & \mapsto \frac12 (z_i \d z_j - z_j \d z_i) \in \Omega^{1,0} (\CC^5) \hotimes \Omega^0 (\RR) \\ A_{ij} \in \lie{sl}(5) & \mapsto \sum_{ij} A_{ij} z_i \partial_{z_j} \in \PV^{1,0}(\CC^5) \hotimes \Omega^0(\RR) \\ \partial_{z_i} \in L & \mapsto \partial_{z_i} \in \PV^{1,0} (\CC^5) \hotimes \Omega^0 (\RR) \\ 1 \in \CC_b & \mapsto 1 \in \Omega^{0,0}(\CC^5) \hotimes \Omega^0 (\RR) . \end{align*} It is immediate to check that this is a map of cochain complexes, since all elements in the image of this map lie in the kernel of the linearized BRST operator~\eqref{eqn:linearBRST}. This map also preserves the bracket between odd elements in $\wedge^2 L_2^\vee$. In the cohomology of $\m2^Q$ we have the bracket \[ [z_i\wedge z_j , z_k \wedge z_l] = \ep_{ijklm} \partial_{z_m} \] which is precisely the bracket induced by the cubic term in the action $J = \frac16 \int \gamma \del \gamma \del \gamma$. This map does not preserve all of the brackets, however. Indeed, in the eleven-dimensional theory $\cL(\CC^5 \times \RR)$ there is the bracket \[ \left[\partial_{z_i}, z_j \d z_k - z_k \d z_j\right] = \delta^i_j \d z_k - \delta^i_k \d z_j \] arising from the cubic term in $\frac12 \int \frac{1}{1-\nu} \mu^2 \del \gamma$. To remedy the failure of $\Phi^{(1)}$ to preserve the brackets, we introduce the odd bilinear map $\Phi^{(2)} \colon \fg \otimes \fg \to \Pi \cL$ defined by \beqn\label{eqn:phi2} \Phi^{(2)} \left(\partial_{z_i} , z_j \wedge z_k\right) = \frac12 (\delta^i_j z_k - \delta^i_k z_j) . \eeqn Notice that the field on the right hand side is of type $\beta$.
The bilinear map $\Phi^{(2)}$ provides a homotopy trivialization of the failure of $\Phi^{(1)}$ to preserve the $2$-ary bracket: \[ [\Phi^{(1)} (\partial_{z_i}) , \Phi^{(1)}(z_j \wedge z_k)] = \del \Phi^{(2)}\left(\partial_{z_i} , z_j \wedge z_k\right). \] The left-hand side is $\frac12 (\delta_j^i \d z_k - \delta_k^i \d z_j)$, which is precisely the de Rham differential applied to \eqref{eqn:phi2}. To define an $L_\infty$ morphism, $\Phi^{(1)} + \Phi^{(2)}$ must satisfy additional higher relations. There is a single nontrivial cubic relation to verify: \begin{multline} \label{eqn:cubicrln} \Phi^{(1)}\left[\partial_{z_i}, \partial_{z_j}, z_k \wedge z_l\right]_3 = [\Phi^{(1)}(\partial_{z_i}), \Phi^{(1)}(\partial_{z_j}), \Phi^{(1)}(z_k \wedge z_l)]_3 \\ + [\Phi^{(1)}(\partial_{z_i}), \Phi^{(2)}(\partial_{z_j}, z_k \wedge z_l)] + [\Phi^{(1)}(\partial_{z_j}), \Phi^{(2)}(\partial_{z_i}, z_k \wedge z_l)] \end{multline} where $[-]_3$ on the left hand side is the $3$-ary bracket defined in Proposition \ref{prop:susycoh} and $[-]_3$ on the right hand side is the $3$-ary bracket defined by the quartic part of the action $\frac12 \int \frac{1}{1-\nu} \mu^2 \vee \del \gamma$. The two terms in the second line of \eqref{eqn:cubicrln} cancel for symmetry reasons and the quartic term in the BV action induces precisely the correct $3$-ary bracket. \end{proof} \parsec[] From the previous proposition we can readily compare the super $L_\infty$ algebra $H^\bu(\m2^Q)$ with the global symmetry algebra of our theory. \begin{cor} There is a map of super $L_\infty$ algebras \[ H^\bu(\m2^Q)\to \Hat{E(5,10)}, \] where $\Hat{E(5,10)}$ is a central extension of~$E(5,10)$ by the cocycle~\eqref{eqn:cocycle}. \end{cor} \begin{proof} Because the map of the previous proposition preserves differentials, it descends to a map in cohomology. We have already computed the cohomology of $\cL$ on $\CC^5 \times \RR$; it is the trivial one-dimensional central extension of $E (5,10)$. The Lie algebra structure present in the cohomology of $\m2^Q$ is described in part (1) of Proposition~\ref{prop:susycoh}. The map \[ L \oplus {\rm Stab}(Q) \oplus \Pi \left(\wedge^2 L^\vee \right) \oplus \CC_b \to E (5,10) \oplus \CC_{b'} \] is defined by very similar formulas as above \begin{align*} L^\vee & \mapsto 0 \\ \wedge^2 L^\vee_1 & \mapsto 0 \\ z_i \wedge z_j \in \wedge^2 L^\vee_2 & \mapsto \d z_i \wedge \d z_j \in \Omega^{2}_{cl} (\CC^5) \\ A_{ij} \in \lie{sl}(5) & \mapsto \sum_{ij} A_{ij} z_i \partial_{z_j} \in \Vect_0(\CC^5) \\ \partial_{z_i} \in L & \mapsto \partial_{z_i} \in \Vect_0(\CC^5) \\ b \in \CC_b & \mapsto b \in \CC_{b'} . \end{align*} The relationship between the transferred $L_\infty$ structures can be described as follows. Recall that the linear BRST cohomology of the parity shift of the fields of the eleven-dimensional theory is equivalent to the super $L_\infty$ algebra $\Hat{E(5,10)}$. Also, we described the $L_\infty$ structure present in the cohomology of $\m2^Q$ in part (2) of Proposition~\ref{prop:susycoh}. Each of these $L_\infty$ structures involved introducing a single new $3$-ary bracket, and these are easily seen to be compatible. \end{proof} \parsec[] We now compare with another description of the M2 brane algebra, given as a one-dimensional $L_\infty$ central extension of the super Poincar\'e algebra. Such central extensions were studied in~\cite{BHsusyII, SSS, FSS}, following~\cite{CDF}. In these references, the algebra $\m2$ is defined as an $L_\infty$ central extension of $\lie{siso}_{11d}$.
Recall that given two spinors $\psi, \psi' \in S$ we can form the constant coefficient two-form $\Gamma_{\wedge^2} (\psi, \psi')$. Using this two-form we can define the following quadrilinear expression \[ \mu_2 (\psi, \psi',v,v') = \<v \wedge v', \Gamma_{\wedge^2}(\psi, \psi')\> . \] This expression is symmetric in the spinors and antisymmetric in the vectors, and therefore defines an element in $\clie^4(\lie{siso}_{11d})$. This expression defines a nontrivial class in $H^4(\lie{siso}_{11d})$ and so defines a one-dimensional central extension of $\lie{siso}_{11d}$ as a Lie 3-algebra. Instead of working with a one-dimensional central extension by $\CC[2]$, we work with a central extension by the resolution $\Omega^\bullet(\RR^{11})[2]$ determined by the cocycle $c_{M2}$; see \S \ref{sec:m2brane}. There is a quasi-isomorphism $\clie^\bu(\lie{siso}_{11d})\to \clie^\bu(\lie{siso}_{11d}, \Omega^\bullet (\R^{11}))$ induced by the embedding of constant functions into the full de Rham complex. The cocycles $\mu_2$ and $c_{M2}$ are cohomologous via a two-step zig-zag in the double complex $\clie^\bu(\lie{siso}_{11d}, \Omega^\bullet (\R^{11}))$.
\section{Missing Proofs from Section \ref{sec:gaussian}}\label{sec:proof-gaussian} \subsection{Missing proofs from Section~\ref{sec:median-general}} \label{sec:proof-median-general} \begin{customtheorem}{\ref{thm:medianub}} Let $\mathcal{C}$ be a class of distributions over $\mathbb{R}$ with median in $[0,1]$. Define: $$a_F:=\sup_{F\in\mathcal{C}}\log{\textstyle|F^{-1}(\frac{1}{2}+\frac{\epsilon}{3})-F^{-1}(\frac{1}{2}-\frac{\epsilon}{3})|^{-1}}.$$ We have \[\textsc{PriceCplx}_{\mathcal{C},\theta}(\epsilon) \leq \tilde{O}(1/\epsilon^2)a_F\log a_F.\] \end{customtheorem} \begin{proof}[Proof of Theorem \ref{thm:medianub}] Algorithm \ref{alg:binary_ucb} searches for a price $p^*$ with $F(p^*)\in[\frac{1}{2}-\epsilon,\frac{1}{2}+\epsilon]$. The algorithm starts with the range $[0,1]$ containing the median and repeatedly prices at the midpoint $p$ of the current range (initially $p=\frac{1}{2}$). After $n$ queries, if the current acceptance rate of the price $p$ is $\bar{q}$, then the true quantile $q=Q(p)=1-F(p)$ lies in the confidence interval $[\bar{q}-\tilde{O}(n^{-1/2}),\bar{q}+\tilde{O}(n^{-1/2})]$ with high probability by the Chernoff bound. Suppose that $\frac{1}{2}$ falls outside the confidence interval after $n$ steps, say $\frac{1}{2}>\bar{q}+\tilde{O}(n^{-1/2})$; then with high probability $q=Q(p)<\frac{1}{2}$ and $|\frac{1}{2}-q|=\Omega(n^{-1/2})$, i.e., $F(p)>\frac{1}{2}$ and the median lies below $p$. The algorithm then proceeds with the new range $[0,\frac{1}{2}]$, and repeats the binary search process until it finds a price $p$ whose confidence bound $\tilde{O}(n^{-1/2})$ is smaller than $\epsilon$.\\ Consider any iteration with price $p$ and corresponding quantile $q=Q(p) := 1 - F(p)$. Assume that $q<\frac{1}{2}$; the case where $q>\frac{1}{2}$ is similar. Let $\alpha=6\sqrt{\log(t/\delta)/n}$. By the Chernoff bound, after $n$ pricing queries on $p$, the probability that the empirical acceptance rate $\bar{q}=\frac{k}{n}$ falls outside $[q-\alpha,q+\alpha]$ is \begin{eqnarray*} \Pr[\bar{q}\not\in[q-\alpha,q+\alpha]]\leq 2\exp\left(-\frac{1}{2}n\alpha^2\right)\leq\frac{2\delta^3}{t^3}<\frac{\delta}{2t^3} \end{eqnarray*} when $\delta\leq\frac{1}{2}$. Thus by a union bound, the probability that the true quantile satisfies $q\in[\bar{q}-\alpha,\bar{q}+\alpha]$ at all times is $>1-\sum_t\frac{\delta}{2t^3}>1-\delta$. Thus with probability $1-\delta$, the algorithm never falsely searches a range $[\ell,r]$ that does not contain $p^*$. When the algorithm does not perform a binary search step, we have $|\frac{1}{2}-\bar{q}|\leq 2\alpha$, while $|\bar{q}-q|\leq \alpha$ with probability $1-\delta$. Thus with probability $1-\delta$, $\epsilon_p=3\alpha\geq |\frac{1}{2}-q|$. Therefore $\epsilon_p$ is indeed an upper bound on $|\frac{1}{2}-q|$. Now we analyze the pricing complexity of the algorithm. When the algorithm prices at $p$ with $q=Q(p)$ such that $|q-\frac{1}{2}|\leq \frac{1}{3}\epsilon$, we show that the algorithm always returns with this price. Consider the following two cases. If the algorithm observes $|\frac{1}{2}-\bar{q}|>2\alpha$, then since $|\bar{q}-q|\leq\alpha$ with probability $1-\delta$, we have $|\frac{1}{2}-q|>\alpha$. Then $\epsilon_p=3\alpha<3|\frac{1}{2}-q|\leq \epsilon$, which means the algorithm returns the current price $p$. Otherwise the algorithm eventually observes $\epsilon_p\leq \epsilon$ and returns. Thus if the algorithm sets some price $p$ with $q=Q(p)$ satisfying $|q-\frac{1}{2}|\leq \frac{1}{3}\epsilon$, then the algorithm always returns with this price.
The algorithm therefore finds a price $p$ with $|Q(p)-\frac{1}{2}|\leq \frac{1}{3}\epsilon$ within $a_F=\log\frac{1}{F^{-1}(\frac{1}{2}+\frac{\epsilon}{3})-F^{-1}(\frac{1}{2}-\frac{\epsilon}{3})}$ rounds of binary search. Let $n$ be the number of queries in the last round of pricing. Then $t\leq a_Fn$, and $18\sqrt{\frac{\log(a_Fn/\delta)}{n}}<\epsilon$. Therefore $n=\tilde{O}(\frac{1}{\epsilon^2})\log\frac{a_F}{\delta}$, which means the query complexity of the algorithm is at most $a_F\cdot\tilde{O}(\frac{1}{\epsilon^2})\log\frac{a_F}{\delta}$. \end{proof} \subsection{Missing proofs from Section~\ref{sec:mean-normal}} \label{sec:proof-mean-normal} \begin{customcorollary}{\ref{cor:median-ub}} Let $\bar\mathcal{C}_{\bar\sigma} = \left\{ \mathcal{N}(\mu, \sigma) \text{ s.t. } \mu \in [0,1],\ \sigma\leq\bar\sigma\right\}$ be the class of normal distributions with variance at most $\bar\sigma^2$ with the loss $\L(\theta, \hat \theta) = \abs{\theta - \hat \theta}$. Then the pricing complexity of computing the mean is: $$\textsc{PriceCplx}_{\bar\mathcal{C}_{\bar\sigma},\mu}(\epsilon)= O(\bar\sigma^2/\epsilon^2).$$ \end{customcorollary} \begin{proof}[Proof of Corollary~\ref{cor:median-ub}] We use the following variant of the algorithm for Theorem~\ref{lm:norm}: \begin{itemize} \item Use Algorithm~\ref{alg:binary_ucb} with $\epsilon = \frac{1}{12}$ to find $p_1$ and $p_2$ with $Q(p_1) \in [1/6, 2/6]$ and $Q(p_2) \in [4/6, 5/6]$, \item Using $O(\bar \sigma^2/\epsilon^2)$ pricing queries, find estimates $\hat q_i$ such that $\abs{\hat q_i - Q_{\mu,\sigma}(p_i)} \leq \epsilon/\bar \sigma$. \item Solve the system of equations $\hat{q}_1 = Q_{\hat \mu, \hat \sigma}(p_1)$ and $\hat{q}_2 = Q_{\hat \mu, \hat \sigma}(p_2)$ to find estimates of $\hat \mu, \hat \sigma$. \end{itemize} By the same argument as in the proof of Theorem~\ref{lm:norm}, the equations above can be re-written in linear form: $$\hat \mu + \hat \sigma \cdot Q_{0,1}^{-1}(\hat q_1) = p_1 \qquad \hat \mu + \hat \sigma \cdot Q_{0,1}^{-1}(\hat q_2) = p_2$$ The actual mean and variance are solutions to the same equations with exact quantiles: $$ \mu + \sigma \cdot Q_{0,1}^{-1}( q_1) = p_1 \qquad \mu + \sigma \cdot Q_{0,1}^{-1}( q_2) = p_2$$ Comparing those equations we can observe the following (we omit the details of the calculations as they are quite standard): $$\abs{\mu - \hat \mu} \leq O\left( \sigma \abs{p_1 - p_2} \cdot \max_i \abs{Q_{0,1}^{-1}(\hat q_i) - q_i} \right) = O\left(\sigma \cdot \frac{\epsilon}{\bar \sigma}\right) = O(\epsilon)$$ This leads to an algorithm with $O(\bar \sigma^2 / \epsilon^2)$ pricing queries that only requires knowledge of a \emph{variance upper bound}. \end{proof} \section {Missing Proofs from Section \ref{sec:monopoly}} \label{sec:proof-monopoly} \begin{customlemma}{\ref{lem:repeatprice}} For any value distribution $F$ and value $p$, let $Q(p)=1-F(p)$ be the probability that a random sample from $F$ is at least $p$. Then if we make $m=\tilde{O}(\frac{1}{\epsilon^2}\log\frac{1}{\delta})$ pricing queries with price $p$, then with probability $>1-\delta$ the price is accepted $m(Q(p)\pm\epsilon)$ times. \end{customlemma} \begin{proof}[Proof of Lemma~\ref{lem:repeatprice}] Let $q^*=Q(p)$ be the probability that $v\sim F$ is at least $p$. For any $m$ queries of price $p$, suppose that there are $t$ queries with signal ``$v\geq p$''. By the Chernoff bound, \begin{eqnarray*} \Pr\left(t\not\in[m(q^*-\epsilon),m(q^*+\epsilon)]\right)\leq 2\exp\left(-\frac{\epsilon^2m}{2q^*}\right)\leq 2\exp\left(-\frac{\epsilon^2m}{2}\right).
\end{eqnarray*} When $m=2\frac{1}{\epsilon^2}\log\frac{2}{\delta}=\Theta(\frac{1}{\epsilon^2}\log\frac{1}{\delta})$, the right hand side of the above inequality is at most $\delta$. Thus after $m=\Theta(\frac{1}{\epsilon^2}\log\frac{1}{\delta})$ pricing queries on $p$, with probability $>1-\delta$ the number of queries with signal ``$v\geq p$'' is in $[m(q^*-\epsilon),m(q^*+\epsilon)]$. \end{proof} \subsection{Missing proofs from Section~\ref{sec:regular}} \label{sec:proof-regular} \begin{customlemma}{\ref{lem:myerson-average}}[Relative Flatness of Regular Distributions] Let $\textsc{Rev}$ be the revenue curve of a regular distribution. Consider four equidistant values $p_1<p_2<p_3<p_4=cp_1$ in $[0,1]$ and let $\textsc{Rev}_{\max}$ and $\textsc{Rev}_{\min}$ be the maximum and minimum value of the revenue curve at those four points. If $\textsc{Rev}_{\min} \geq \textsc{Rev}_{\max}-\epsilon$ for some $\epsilon>0$, then for any $p\in[p_1,p_4]$, $\textsc{Rev}(p)\leq \textsc{Rev}_{\max}+O(c\epsilon)$. \end{customlemma} \begin{proof}[Proof of Lemma~\ref{lem:myerson-average}] For each $i=1,2,3,4$, let $q_i=Q(p_i)$. Denote $q=Q(p)$. The four values are equidistant, i.e., $p_2-p_1=p_3-p_2=p_4-p_3$, with $p_1<p_4=cp_1$. If $\textsc{Rev}_{\max}<2\epsilon$, then for any $p\in[p_1,p_4]$, $\textsc{Rev}(p)=pQ(p)\leq p_4Q(p_1)=cp_1Q(p_1)\leq c\textsc{Rev}_{\max}=O(c\epsilon)$, thus $\textsc{Rev}(p)\leq\textsc{Rev}_{\max}+O(c\epsilon)$. Therefore we may assume $\textsc{Rev}_{\max}\geq 2\epsilon$, in which case $\textsc{Rev}(p_1),\textsc{Rev}(p_2),\textsc{Rev}(p_3),\textsc{Rev}(p_4)\geq \frac{1}{2}\textsc{Rev}_{\max}$. Thus $q_1=\frac{1}{p_1}\textsc{Rev}(p_1)\leq \frac{c}{p_4}\cdot2\textsc{Rev}(p_4)=2cq_4$; in other words, all $q\in[q_4,q_1]$ are within a factor of $2c$ of one another. Since $\textsc{Rev}$ is the revenue curve of a regular distribution, it is a concave function in quantile space. Consider the following two cases: \noindent\emph{Case 1:} $p\geq Q^{-1}(\frac{q_2+q_3}{2})$. We can express $q_2$ as a linear combination of $q_1$ and $q$ by $q_2=\frac{q_2-q}{q_1-q}q+\frac{q_1-q_2}{q_1-q}q_1$, thus \[\textsc{Rev}(p_2)\geq \frac{q_2-q}{q_1-q}\textsc{Rev}(p)+\frac{q_1-q_2}{q_1-q}\textsc{Rev}(p_1).\] Then \begin{eqnarray} \textsc{Rev}(p)&\leq&\frac{(q_1-q)\textsc{Rev}(p_2)-(q_1-q_2)\textsc{Rev}(p_1)}{q_2-q}\leq\frac{(q_1-q)(\textsc{Rev}(p_1)+\epsilon)-(q_1-q_2)\textsc{Rev}(p_1)}{q_2-q}\nonumber\\ &=&\textsc{Rev}(p_1)+\frac{q_1-q}{q_2-q}\epsilon=\textsc{Rev}(p_1)+\frac{q_1-q_2}{q_2-q}\epsilon+\epsilon\leq\textsc{Rev}(p_1)+\frac{q_1-q_2}{q_2-q_3}2\epsilon+\epsilon.\label{eqn:revp} \end{eqnarray} Here the last inequality is by $q\leq\frac{q_2+q_3}{2}$. If $\epsilon>\frac{1}{2}q_3(p_3-p_2)$, then \[\epsilon>\frac{1}{2}q_3(p_3-p_2)\geq \frac{1}{2}\cdot\frac{1}{2c}q\cdot\frac{1}{2}(p-p_2)\geq\frac{1}{8c}(qp-q_2p_2)=\frac{1}{8c}(\textsc{Rev}(p)-\textsc{Rev}(p_2)).\] Here the second inequality is by $q_3\geq\frac{1}{2c}q$, and $p-p_2\leq p_4-p_2=2(p_3-p_2)$; the third inequality is by $q\leq q_2$. Therefore $\textsc{Rev}(p)\leq\textsc{Rev}(p_2)+O(c\epsilon)\leq\textsc{Rev}_{\max}+O(c\epsilon)$ if $\epsilon>\frac{1}{2}q_3(p_3-p_2)$. Now we assume that $\epsilon\leq\frac{1}{2}q_3(p_3-p_2)$. Since $\textsc{Rev}(p_1)\leq \textsc{Rev}(p_2)+\epsilon$, we have $p_1q_1\leq p_2q_2+\epsilon$, so $q_1-q_2\leq\frac{q_1(p_2-p_1)+\epsilon}{p_1}$.
At the same time, since $\textsc{Rev}(p_2)\geq\textsc{Rev}(p_3)-\epsilon$, we have $p_2q_2\geq p_3q_3-\epsilon$, so $q_2-q_3\geq\frac{q_3(p_3-p_2)-\epsilon}{p_2}$. Thus \begin{eqnarray*} \frac{q_1-q_2}{q_2-q_3}&\leq& \frac{p_2(q_1(p_2-p_1)+\epsilon)}{p_1(q_3(p_3-p_2)-\epsilon)}\leq \frac{p_2(q_1(p_2-p_1)+\frac{1}{2}q_3(p_3-p_2))}{p_1(q_3(p_3-p_2)-\frac{1}{2}q_3(p_3-p_2))}\\ &\leq&\frac{2(q_1+\frac{1}{2}q_3)}{(q_3-\frac{1}{2}q_3)}=2+\frac{4q_1}{q_3}\leq 2+8c=O(c). \end{eqnarray*} Here the second inequality is by $\epsilon\leq\frac{1}{2}q_3(p_3-p_2)$; the third inequality is by $p_2\leq 2p_1$ and $p_3-p_2=p_2-p_1$; the last inequality is by $q_1\leq 2cq_3$. By \eqref{eqn:revp} we have $\textsc{Rev}(p)\leq \textsc{Rev}_{\max}+O(c\epsilon)$. \noindent\emph{Case 2:} $p< Q^{-1}(\frac{q_2+q_3}{2})$. We can express $q_3$ as a linear combination of $q$ and $q_4$ by $q_3=\frac{q-q_3}{q-q_4}q+\frac{q_3-q_4}{q-q_4}q_4$. Thus by concavity of $\textsc{Rev}$ in the quantile space, \[\textsc{Rev}(p_3)\geq \frac{q-q_3}{q-q_4}\textsc{Rev}(p)+\frac{q_3-q_4}{q-q_4}\textsc{Rev}(p_4).\] Then \begin{eqnarray} \textsc{Rev}(p)&\leq&\frac{(q-q_4)\textsc{Rev}(p_3)-(q_3-q_4)\textsc{Rev}(p_4)}{q-q_3}\leq\frac{(q-q_4)(\textsc{Rev}(p_4)+\epsilon)-(q_3-q_4)\textsc{Rev}(p_4)}{q-q_3}\nonumber\\ &=&\textsc{Rev}(p_4)+\frac{q-q_4}{q-q_3}\epsilon=\textsc{Rev}(p_4)+\frac{q_3-q_4}{q-q_3}\epsilon+\epsilon<\textsc{Rev}(p_4)+\frac{q_3-q_4}{q_2-q_3}2\epsilon+\epsilon.\label{eqn:revp-case2} \end{eqnarray} Here the last inequality is by $q>\frac{q_2+q_3}{2}$. If $\epsilon>\frac{1}{2}q_3(p_3-p_2)$, then as we have reasoned in Case 1, $\textsc{Rev}(p)\leq \textsc{Rev}_{\max}+O(c\epsilon)$. Now we assume that $\epsilon\leq\frac{1}{2}q_3(p_3-p_2)$. Since $\textsc{Rev}(p_3)\leq \textsc{Rev}(p_4)+\epsilon$, we have $p_3q_3\leq p_4q_4+\epsilon$, so $q_3-q_4\leq\frac{(p_4-p_3)q_4+\epsilon}{p_3}$. At the same time, as we have shown in Case 1, $q_2-q_3\geq\frac{q_3(p_3-p_2)-\epsilon}{p_2}$. Thus \begin{eqnarray*} \frac{q_3-q_4}{q_2-q_3}&\leq& \frac{p_2(q_4(p_4-p_3)+\epsilon)}{p_3(q_3(p_3-p_2)-\epsilon)}\leq \frac{p_2(q_4(p_4-p_3)+\frac{1}{2}q_3(p_3-p_2))}{p_3(q_3(p_3-p_2)-\frac{1}{2}q_3(p_3-p_2))}\\ &\leq&\frac{q_4+\frac{1}{2}q_3}{q_3-\frac{1}{2}q_3}=1+\frac{2q_4}{q_3}\leq 1+2=3. \end{eqnarray*} Here the second inequality is by $\epsilon\leq\frac{1}{2}q_3(p_3-p_2)$; the third inequality is by $p_2\leq p_3$ and $p_3-p_2=p_4-p_3$; the last inequality is by $q_4\leq q_3$. By \eqref{eqn:revp-case2} we have $\textsc{Rev}(p)\leq \textsc{Rev}_{\max}+O(c\epsilon)$. \end{proof} \subsection{Missing proofs from Section~\ref{sec:mhr}} \label{sec:proof-mhr} In this section, we prove the pricing query complexity of estimating the monopoly price of MHR distributions. \begin{customtheorem}{\ref{thm:mhrlb}} Let $\mathcal{C}_{\textsc{MHR}}$ be the class of Monotone Hazard Rate distributions supported in $[0,1]$ and let $\theta$ be the monopoly price. Then $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{MHR}},\theta}(\epsilon) = \Omega(1/\epsilon^2).$$ \end{customtheorem} Before proving the theorem, we start with a technical lemma bounding the KL-divergence of two Bernoulli variables. \begin{lemma}\label{lem:kl-bernoulli} For any $\epsilon<\frac{\sqrt{2}}{2}$ and two Bernoulli variables $X,Y$ with $\frac{1}{1+\epsilon}\leq \frac{\Pr[X=1]}{\Pr[Y=1]},\frac{\Pr[X=0]}{\Pr[Y=0]}\leq 1+\epsilon$, $D_{\textrm{KL}}(X\|Y)<\epsilon^2$. \end{lemma} \begin{proof}[Proof of Lemma~\ref{lem:kl-bernoulli}] Let $q=\Pr[X=1]$, $q'=\Pr[Y=1]$.
Without loss of generality assume $q\leq \frac{1}{2}$ (otherwise just set $q\leftarrow 1-q$ and $q'\leftarrow 1-q'$). Let $f(q,q')=D_{\textrm{KL}}(X\|Y)$. Then \begin{equation*} f(q,q')=D_{\textrm{KL}}(X\|Y)=q\ln\frac{q}{q'}+(1-q)\ln\frac{1-q}{1-q'}=q\ln q-q\ln q'+(1-q)\ln(1-q)-(1-q)\ln (1-q'). \end{equation*} Then \begin{equation*} \frac{\partial}{\partial q}f(q,q')=\ln q-\ln q'-\ln(1-q)+\ln(1-q')=\ln\frac{q}{q'}-\ln\frac{1-q}{1-q'} \end{equation*} has a unique zero at $q=q'$, which means that, as a function of $q$, $f(q,q')$ increases when $q>q'$ and decreases when $q<q'$. Therefore, to upper bound $f(q,q')$, it suffices to bound it for $q=(1+\epsilon)q'$ and $q=\frac{1}{1+\epsilon}q'$. \paragraph{Case 1.} When $q=(1+\epsilon)q'$, \begin{eqnarray*} f(q,q')&=&q\ln\frac{q}{q'}+(1-q)\ln\frac{1-q}{1-q'}\\ &=&q\ln(1+\epsilon)+(1-q)\ln(1-q)-(1-q)\ln\left(1-\frac{q}{1+\epsilon}\right). \end{eqnarray*} Taking the derivative of the right hand side with respect to $q$, we have \begin{eqnarray*} \frac{\partial}{\partial q}f\left(q,\frac{q}{1+\epsilon}\right)&=&\ln(1+\epsilon)-1-\ln(1-q)+\ln\left(1-\frac{q}{1+\epsilon}\right)+(1-q)\frac{1}{1+\epsilon}\frac{1}{1-\frac{q}{1+\epsilon}}\\ &=&-\ln\frac{1-q}{1+\epsilon-q}-1+\frac{1-q}{1+\epsilon-q}\geq0. \end{eqnarray*} The inequality follows from $-\ln x-1+x\geq0$ for any $x>0$, applied to $x=\frac{1-q}{1+\epsilon-q}$. Thus $f(q,\frac{q}{1+\epsilon})$ is maximized when $q=\frac{1}{2}$, and \begin{eqnarray*} f\left(q,\frac{q}{1+\epsilon}\right)\leq f\left(\frac{1}{2},\frac{1}{2+2\epsilon}\right) =\frac{1}{2}\ln(1+\epsilon)+\frac{1}{2}\ln\frac{\frac{1}{2}}{1-\frac{1}{2+2\epsilon}}=\frac{1}{2}\ln\frac{(1+\epsilon)^2}{1+2\epsilon}<\frac{1}{2}\frac{\epsilon^2}{1+2\epsilon}<\epsilon^2, \end{eqnarray*} here the second inequality is by $\ln(1+x)<x$ for any $x>0$. \paragraph{Case 2.} When $q=\frac{1}{1+\epsilon}q'$, \begin{eqnarray*} f(q,q')&=&q\ln\frac{q}{q'}+(1-q)\ln\frac{1-q}{1-q'}\\ &=&-q\ln(1+\epsilon)+(1-q)\ln(1-q)-(1-q)\ln\left(1-(1+\epsilon)q\right). \end{eqnarray*} Taking the derivative of the right hand side with respect to $q$, we have \begin{eqnarray*} \frac{\partial}{\partial q}f\left(q,(1+\epsilon)q\right)&=&-\ln(1+\epsilon)-1-\ln(1-q)+\ln\left(1-(1+\epsilon)q\right)+(1-q)(1+\epsilon)\frac{1}{1-(1+\epsilon)q}\\ &=&-\ln\frac{(1-q)(1+\epsilon)}{1-(1+\epsilon)q}-1+\frac{(1-q)(1+\epsilon)}{1-(1+\epsilon)q}\geq0. \end{eqnarray*} The inequality follows from $-\ln x-1+x\geq0$ for any $x\geq 0$. Thus $f\left(q,(1+\epsilon)q\right)$ is maximized when $q=\frac{1}{2}$, and \begin{eqnarray*} f\left(q,(1+\epsilon)q\right)\leq f\left(\frac{1}{2},\frac{1+\epsilon}{2}\right) =-\frac{1}{2}\ln(1+\epsilon)+\frac{1}{2}\ln\frac{\frac{1}{2}}{1-\frac{1}{2}(1+\epsilon)}=\frac{1}{2}\ln\frac{1}{1-\epsilon^2}<\frac{1}{2}\frac{\epsilon^2}{1-\epsilon^2}<\epsilon^2, \end{eqnarray*} here the second inequality is by $\ln(1+x)<x$ for any $x>0$, and the last inequality is by $\epsilon<\frac{\sqrt{2}}{2}$. \end{proof} The lemma is then used to give a lower bound on the pricing query complexity of distinguishing two distributions whose c.d.f. $F(p)$ and quantiles $Q(p) = 1-F(p)$ are both within $1\pm \epsilon$ of each other. \begin{customlemma}{\ref{lem:query-complexity}} For two value distributions $D$ and $D'$, if $\frac{1}{1+\epsilon}\leq\frac{Q_D(v)}{Q_{D'}(v)},\frac{F_D(v)}{F_{D'}(v)} \leq (1+\epsilon)$ for every $v\in[0,1]$, then $\Omega(\frac{1}{\epsilon^2})$ pricing queries are needed to distinguish the two distributions with probability $>1-\delta$.
\end{customlemma} The proof idea of Lemma~\ref{lem:query-complexity} is as follows. For two such distributions $D$ and $D'$, and any pricing algorithm $\mathcal{A}$, let $X_i$ and $X'_i$ be the bit the algorithm receives from the $i$-th pricing query under $D$ and $D'$ respectively. To show that $\mathcal{A}$ needs $m=\Omega(\epsilon^{-2})$ queries to distinguish the two distributions, it suffices to show that $(X_1,\cdots,X_m)$ and $(X'_1,\cdots,X'_m)$ can have statistical distance $\Omega(1)$ (and thus KL-divergence $\Omega(1)$) only if $m=\Omega(\epsilon^{-2})$. By Lemma~\ref{lem:kl-bernoulli} the gain in relative entropy from each query is at most $\epsilon^2$, thus $\Omega(\epsilon^{-2})$ queries are needed for the KL-divergence to become $\Omega(1)$. \begin{proof}[Proof of Lemma~\ref{lem:query-complexity}] Consider any algorithm $\mathcal{A}$ that adaptively sets a pricing query in each step. For every $i\geq 1$, let $X_{i}$ be a boolean random variable that denotes whether the $i$-th sampled value $v_i\sim D$ is at least the price $p_{i,D}$ at the $i$-th step in algorithm $\mathcal{A}$ for value distribution $D$. In other words, $X_i=\mathbf{1}[v_i\geq p_{i,D}]$. Similarly, define $X'_{i}$ to be a boolean random variable that denotes whether the $i$-th sampled value $v'_i\sim D'$ is at least the price $p_{i,D'}$ at the $i$-th step in algorithm $\mathcal{A}$ for value distribution $D'$. For each $n\geq 1$, denote $\mathbf{X}_{\leq n}=(X_1,X_2,\cdots,X_n)$ and $\mathbf{X}'_{\leq n}=(X'_1,X'_2,\cdots,X'_n)$. For any price $p\in[0,1]$, random values $v\sim D$ and $v'\sim D'$, and random variables $X=\mathbf{1}[v\geq p]$ and $Y=\mathbf{1}[v'\geq p]$, let $q=\Pr[X=1]$ and $q'=\Pr[Y=1]$. The Kullback–Leibler divergence of $X$ and $Y$ can be bounded by Lemma \ref{lem:kl-bernoulli}. Then for any fixed $(x_1,\cdots,x_{m-1})\in\{0,1\}^{m-1}$, conditioned on $(X_1,\cdots,X_{m-1})=(x_1,\cdots,x_{m-1})$, algorithm $\mathcal{A}$ sets a random price $p$; $X_{m}$ is a Bernoulli variable that equals $1$ with probability $q=\mathbb{E}_{p}[\Pr_{v\sim D}[v\geq p]]=\mathbb{E}_{p}[Q_D(p)]$, and $X'_{m}$ is a Bernoulli variable that equals $1$ with probability $q'=\mathbb{E}_{p}[\Pr_{v\sim D'}[v\geq p]]=\mathbb{E}_{p}[Q_{D'}(p)]$. By the condition of Lemma~\ref{lem:query-complexity}, $\frac{1}{1+\epsilon}<\frac{q}{q'},\frac{1-q}{1-q'}<1+\epsilon$. Thus by Lemma~\ref{lem:kl-bernoulli}, \begin{equation*} D_{\textrm{KL}}\bigg(X_m|_{\mathbf{X}_{\leq m-1}=(x_1,\cdots,x_{m-1})}\bigg\|X'_m|_{\mathbf{X}'_{\leq m-1}=(x_1,\cdots,x_{m-1})}\bigg)<\epsilon^2. \end{equation*} For any $n\geq 1$, let $\mathbf{x}_{\leq n}=(x_1,x_2,\cdots,x_n)$. Then by the chain rule of KL divergence, \begin{eqnarray*} D_{\textrm{KL}}(\mathbf{X}_{\leq m}\|\mathbf{X}'_{\leq m}) &=&D_{\textrm{KL}}(\mathbf{X}_{\leq m-1}\|\mathbf{X}'_{\leq m-1})+\mathbb{E}_{\mathbf{X}_{\leq m-1}}D_{\textrm{KL}}\bigg(X_m|_{\mathbf{X}_{\leq m-1}=\mathbf{x}_{\leq m-1}}\bigg\|X'_m|_{\mathbf{X}'_{\leq m-1}=\mathbf{x}_{\leq m-1}}\bigg)\\ &<&D_{\textrm{KL}}(\mathbf{X}_{\leq m-1}\|\mathbf{X}'_{\leq m-1})+\epsilon^2<D_{\textrm{KL}}(\mathbf{X}_{\leq m-2}\|\mathbf{X}'_{\leq m-2})+2\epsilon^2<\cdots<m\epsilon^2. \end{eqnarray*} Let $D_m$ and $D'_m$ be the distributions of $\mathbf{X}_{\leq m}$ and $\mathbf{X}'_{\leq m}$ respectively. The probability that algorithm $\mathcal{A}$ can identify the distribution with $m$ samples is at most $\frac{1+\delta(D_m,D'_m)}{2}$, where $\delta(A,B)=\frac{1}{2}\int |f_A(x)-f_B(x)|\,dx$ denotes the statistical distance of two distributions $A$ and $B$ with densities $f_A$ and $f_B$ respectively.
Thus to identify the distribution ($D$ or $D'$) with probability $1-\delta$, the number of samples $m$ needs to be large enough that $\delta(D_m,D'_m)=\Omega(1)$. By Pinsker's inequality, \begin{equation*} \delta(D_m,D'_m)\leq\sqrt{\frac{1}{2}D_{\textrm{KL}}(D_m\|D'_m)}<\sqrt{\frac{m}{2}\epsilon^2}. \end{equation*} Thus when $\delta(D_m,D'_m)=\Omega(1)$, we must have $\sqrt{\frac{m}{2}\epsilon^2}=\Omega(1)$, so $m=\Omega(\frac{1}{\epsilon^2})$ queries are necessary to distinguish $D$ and $D'$ with probability $>\frac{2}{3}$ for any (randomized adaptive) algorithm $\mathcal{A}$. \end{proof} Finally, we are ready to analyze the pricing query complexity of estimating the monopoly price for MHR distributions. \begin{proof} [Proof of Theorem~\ref{thm:mhrlb}] We consider the following MHR distribution $D$, modified from an example of Huang et al.~\cite{HMR18} (used there to establish the $\Omega(\epsilon^{-3/2})$ sample complexity lower bound). Let $\epsilon_0=16\epsilon$, and let $f_D$ be the density of $D$ as follows: \begin{equation*} f_D(v)=\begin{cases} 0.4 &\textrm{ if } 0\leq v< \frac{1}{2};\\ 1.6-3.2\sqrt{\epsilon_0}&\textrm{ if } \frac{1}{2}\leq v< \frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0};\\ 1.6+\frac{3.2\epsilon_0}{1-\sqrt{\epsilon_0}}&\textrm{ if } \frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0}\leq v\leq 1. \end{cases} \end{equation*} Let $f_{D'}(v)$ be the density of distribution $D'$ as follows: \begin{equation*} f_{D'}(v)=\begin{cases} 0.4 & \textrm{ if } 0\leq v< \frac{1}{2};\\ 1.6 & \textrm{ if } \frac{1}{2}\leq v\leq 1. \end{cases} \end{equation*} In other words, distribution $D'$ is uniform on $[0,\frac{1}{2}]$ and on $[\frac{1}{2},1]$, while distribution $D$ is a perturbation of $D'$ on $[\frac{1}{2},1]$: the density of $D'$ on $[\frac{1}{2},\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0}]$ is scaled down by a factor of $1-2\sqrt{\epsilon_0}$, and the density of $D'$ on $[\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0},1]$ is scaled up by a factor of $1+\frac{2\epsilon_0}{1-\sqrt{\epsilon_0}}$. Let $F_D$ and $F_{D'}$ be the cumulative distribution functions of $D$ and $D'$ respectively, and let $Q_D(v)=1-F_D(v)$, $Q_{D'}(v)=1-F_{D'}(v)$. Then for any $v\in[0,1]$, $Q_{D'}(v)\leq Q_{D}(v)\leq \left(1+\frac{2\epsilon_0}{1-\sqrt{\epsilon_0}}\right)Q_{D'}(v)$. In other words, for any pricing query over the two distributions, the probability that the buyer purchases the item differs by only a factor of $1+\frac{2\epsilon_0}{1-\sqrt{\epsilon_0}}<1+3\epsilon_0$ for small enough $\epsilon_0$. For the distribution $D'$, the optimal monopoly price is $p_{D'}^*=\frac{1}{2}$, with revenue $0.4$. Only prices $p\leq p_0=\frac{1}{2}+\sqrt{\epsilon}$ lead to revenue $\geq0.4-1.6\epsilon$. For the distribution $D$, the optimal monopoly price is $p_D^*=\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0}$, with revenue $0.4+0.4\epsilon_0+0.8\epsilon_0^{3/2}=0.4+6.4\epsilon+51.2\epsilon^{3/2}>p^*_{D'}Q_{D'}(p^*_{D'})+\epsilon$. For $p\leq p_0$, the revenue under $D$ is \begin{eqnarray*} pQ_D(p)&\leq& p_0Q_D(p_0)=(0.5+\sqrt{\epsilon})(0.8-1.6\sqrt{\epsilon}(1-2\sqrt{\epsilon_0})) =0.4-1.6\epsilon+1.6\sqrt{\epsilon\epsilon_0}+3.2\epsilon\sqrt{\epsilon_0}\\ &=&0.4+4.8\epsilon+12.8\epsilon^{3/2}<p^*_{D}Q_D(p^*_D)-\epsilon. \end{eqnarray*} Thus no price can get $\epsilon$-close to the optimal revenue for both distributions $D$ and $D'$ at the same time. Consider a distribution that is one of $D$ and $D'$. To learn the optimal monopoly price or the optimal revenue of the distribution, one must be able to distinguish the two distributions with pricing queries.
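As an aside, this construction is easy to check numerically. The following Python snippet (a sanity check only, run for an illustrative value of $\epsilon$; it plays no role in the proof) confirms that both densities integrate to one and reproduces the claimed optimal revenue of $D$.
\begin{verbatim}
import numpy as np

eps = 1e-4                      # illustrative value; any small eps works
eps0 = 16 * eps
s = np.sqrt(eps0)

# total probability mass of D and D' (piecewise-constant densities)
mass_D = (0.4 * 0.5
          + (1.6 - 3.2 * s) * (s / 2)
          + (1.6 + 3.2 * eps0 / (1 - s)) * (0.5 - s / 2))
mass_Dp = 0.4 * 0.5 + 1.6 * 0.5
assert abs(mass_D - 1) < 1e-12 and abs(mass_Dp - 1) < 1e-12

# revenue of D at its monopoly price p* = 1/2 + sqrt(eps0)/2
p_star = 0.5 + s / 2
Q_star = 1 - (0.2 + (1.6 - 3.2 * s) * (s / 2))   # Q_D(p*) = 1 - F_D(p*)
rev_star = p_star * Q_star
claimed = 0.4 + 0.4 * eps0 + 0.8 * eps0 ** 1.5   # 0.4 + 6.4 eps + 51.2 eps^{3/2}
assert abs(rev_star - claimed) < 1e-12
print(rev_star)
\end{verbatim}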
Observe that for any $v\leq\frac{1}{2}$, $F_D(v)=F_{D'}(v)$ and $Q_D(v)=Q_{D'}(v)$, thus $\frac{F_D(v)}{F_{D'}(v)}=\frac{Q_D(v)}{Q_{D'}(v)}=1$. For any $v\in(\frac{1}{2},1]$, \begin{equation*} 1\geq \frac{F_D(v)}{F_{D'}(v)}\geq\frac{F_D(\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0})}{F_{D'}(\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0})}=1-\frac{1.6\epsilon_0}{0.2+0.8\sqrt{\epsilon_0}}=\frac{1}{1+O(\epsilon)} \end{equation*} and \begin{equation*} 1\leq \frac{Q_D(v)}{Q_{D'}(v)}\leq\frac{Q_D(\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0})}{Q_{D'}(\frac{1}{2}+\frac{1}{2}\sqrt{\epsilon_0})}= 1+\frac{1.6\epsilon_0}{0.8-0.8\sqrt{\epsilon_0}}=1+O(\epsilon) \end{equation*} by $\epsilon_0=16\epsilon$. Thus for any $v\in[0,1]$, $\frac{1}{1+O(\epsilon)}\leq \frac{F_D(v)}{F_{D'}(v)},\frac{Q_D(v)}{Q_{D'}(v)}\leq 1+O(\epsilon)$. By Lemma~\ref{lem:query-complexity} the number of queries needed to distinguish $D$ and $D'$ is $\Omega(\frac{1}{\epsilon^2})$, which is also a lower bound on the pricing complexity of estimating the monopoly price for MHR distributions. \end{proof} \subsection{Missing proofs from Section~\ref{sec:general-distribution}}\label{sec:proof-general} \begin{customtheorem}{\ref{thm:general-distribution}} Let $\mathcal{C}_{\textsc{ALL}}$ be the class of all value distributions on $[0,1]$ and $\theta$ be the monopoly price. Then $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{ALL}},\theta}(\epsilon)= \tilde{\Theta}(1/\epsilon^3).$$ \end{customtheorem} \begin{proof}[Proof of Theorem~\ref{thm:general-distribution}] First we show that $\textsc{PriceCplx}_{\mathcal{C}_{\textsc{ALL}},\theta}(\epsilon)= \tilde{O}(1/\epsilon^3)$. Consider the following simple algorithm: query every multiple of $\epsilon$ in $[0,1]$ $\tilde{O}(\frac{1}{\epsilon^2})$ times, so that for every such price $p$, $Q(p)$ is estimated by some $\tilde{Q}(p)\in Q(p)\pm\epsilon$ (by Lemma~\ref{lem:repeatprice}). Notice that for the price $p^*$ that maximizes the revenue $\textsc{Rev}(p^*)=p^*Q(p^*)$, the revenue from price $p=\epsilon\lfloor\frac{p^*}{\epsilon}\rfloor$ is $\textsc{Rev}(p)=pQ(p)\geq (p^*-\epsilon)Q(p^*)\geq \textsc{Rev}(p^*)-\epsilon$. Since the algorithm estimates $Q(p)$ with additive error $\epsilon$, we have $\widetilde{\rev}(p)\geq \textsc{Rev}(p)-\epsilon\geq \textsc{Rev}(p^*)-2\epsilon$. This means that the multiple $\hat{p}$ of $\epsilon$ that maximizes $\hat{p}\tilde{Q}(\hat{p})$ estimates the optimal revenue with error $O(\epsilon)$. Now we show that $\Omega(\frac{1}{\epsilon^3})$ pricing queries are unavoidable. Consider the following $m+1$ distributions $F_0, F_1,\cdots,F_{m}$, where $m=\frac{1}{16\epsilon}$. Each distribution $F_i$ has support $\frac{1}{2}+4\epsilon$, $\frac{1}{2}+8\epsilon$, $\cdots$, $\frac{3}{4}-4\epsilon$, $\frac{3}{4}$, with the item-pricing revenue and quantiles satisfying \begin{equation*} \textsc{Rev}_i\left(\frac{1}{2}+4k\epsilon\right)=\left(\frac{1}{2}+4k\epsilon\right) \cdot Q_i\left(\frac{1}{2}+4k\epsilon\right)=\begin{cases} \frac{1}{4},&\textrm{ if }k\neq i;\\ \frac{1}{4}+\epsilon,&\textrm{ if }k=i. \end{cases} \end{equation*} In other words, for $i\geq 1$, each distribution $F_i$ has a unique revenue-maximizing price $\frac{1}{2}+4i\epsilon$ with revenue $\frac{1}{4}+\epsilon$, while all other prices lead to revenue $\frac{1}{4}$, which is $\epsilon$-far from the optimal revenue. $F_0$ is an equal-revenue distribution with revenue $\frac{1}{4}$ at every price $\frac{1}{2}+4k\epsilon$.
Thus the quantiles of the two distributions $F_0$ and $F_{i}$ differ at only one point: $Q_i(\frac{1}{2}+4i\epsilon)=Q_{0}(\frac{1}{2}+4i\epsilon)+\Theta(\epsilon)$. For any other price $p=\frac{1}{2}+4k\epsilon$ with $k\neq i$, $Q_i(p)=Q_0(p)$. Then for any $i\in[\frac{m}{3},\frac{2m}{3}]$, since $Q_i(\frac{1}{2}+4i\epsilon)=\Theta(1)$ and $F_i(\frac{1}{2}+4i\epsilon)=\Theta(1)$, we have $\frac{1}{1+O(\epsilon)}\leq \frac{Q_i(v)}{Q_0(v)},\frac{F_i(v)}{F_0(v)}\leq 1+O(\epsilon)$. By Lemma~\ref{lem:query-complexity}, to distinguish $F_i$ and $F_0$, $\Omega(\frac{1}{\epsilon^2})$ pricing queries on $\frac{1}{2}+4i\epsilon$ are needed. Consider a setting where the underlying value distribution is $F_i$ for $i$ uniformly selected from $[\frac{1}{3}m,\frac{2}{3}m]$ but unknown beforehand. Finding the optimal revenue and the corresponding price is then equivalent to identifying the underlying value distribution. To distinguish $F_0$ and $F_i$, at least $\Omega(\frac{1}{\epsilon^2})$ pricing queries are needed on price $\frac{1}{2}+4i\epsilon$. Thus to distinguish $F_0$ from all the other distributions, at least $\Omega(\frac{1}{\epsilon^2})$ pricing queries are needed on every price $\frac{1}{2}+4i\epsilon$ with $i\in[\frac{1}{3}m,\frac{2}{3}m]$, so the query complexity is $\Omega(\frac{1}{\epsilon^3})$. \end{proof} \section*{Acknowledgement} \bibliographystyle{plainnat} \section{Introduction} An important question in the intersection of economics and statistics is to estimate properties of a buyer's willingness-to-pay (a.k.a. value) from samples. Samples typically come from buyers' bids in a truthful auction. In various economic setups, though, collecting samples of the true value is quite hard or impossible. For example, in many settings an auction is not an option; often, posting prices is the only option, and the seller therefore has access only to a buyer's decision to buy or not at a given price. This is also true for auctions with very few buyers (not too uncommon in digital auctions as targeting becomes increasingly narrow and focused): if a buyer is the sole competitor, they may decide to bid just above the reserve price that is sent to them, or to not bid at all when their value is below the reserve price, in order to conceal information about their valuation. Likewise in non-truthful auctions, we don't have access to true value samples; we just have access to past bids. One thing we do have in all these scenarios is whether or not the true value exceeded a (reserve) price. Our goal in this paper is to compare the power of \emph{value samples} (namely, samples from the buyer's value distribution) to that of \emph{pricing queries} (namely, a price $p$ and a response to whether or not the buyer's value is at least $p$).\\ What can we do with a single value sample? Dhangwatnotai et al.~\cite{DRY15} say that we can do a lot: we can price the buyer using that sample to extract \emph{half of the expected optimal revenue} achievable with full knowledge of the regular value distribution. Similarly, a single value sample acts as an unbiased estimator of the mean of the distribution. Now consider a single pricing query. It is not possible to extract \emph{any non-zero fraction of the optimal revenue} from a single pricing query. Likewise there is nothing meaningful to estimate about the mean using a single pricing query. Both of these hold regardless of our choice of the price $p$. The contrast between the power of a single value sample and that of a single pricing query could not be more stark.
The question we investigate in this paper is what happens when we have a large number of pricing queries. How does the number of value samples required to get a $1-\epsilon$ approximation of the optimal revenue, or to estimate the mean within $\epsilon$, compare with the number of pricing queries required?\\ Let $\mathcal{C}$ be a class of distributions over $\mathbb{R}$ and $\theta:\mathcal{C} \rightarrow \mathbb{R}$ a parameter (such as the mean, the variance or the monopoly price). Let $\L:\mathbb{R}^2 \rightarrow \mathbb{R}$ be a loss function measuring the error of the parameter estimate\footnote{The loss can implicitly depend on the distribution $\mathcal{D} \in \mathcal{C}$.}. We will consider the following parameters and respective losses: \begin{itemize} \item \emph{Median:} $\theta(\mathcal{D}) = \text{median}(\mathcal{D})$ with loss $\L(\theta; \hat\theta) = \abs{F_\mathcal{D}(\theta) - F_\mathcal{D}(\hat \theta)}$, where $F_\mathcal{D}$ is the c.d.f. \item \emph{Mean:} $\theta(\mathcal{D}) = \mathbb{E}_{v \sim \mathcal{D}}[v]$ with loss $\L(\theta; \hat\theta) = \abs{\theta - \hat \theta}$. \item \emph{Monopoly price:} $\theta(\mathcal{D}) = \text{argmax}_p \textsc{Rev}_\mathcal{D}(p) := \mathbb{E}_{v \sim \mathcal{D}} [p \cdot \mathbf{1}\{v \geq p\}]$ with the loss that compares the revenue at the optimal and estimated point: $\L(\theta; \hat \theta) = \abs{\textsc{Rev}_\mathcal{D}(\theta) - \textsc{Rev}_\mathcal{D}(\hat \theta)}$. \end{itemize} In our last example, the parameter is a function rather than a real number: \begin{itemize} \item \emph{CDF:} $\theta(\mathcal{D}) = F_\mathcal{D}(\cdot)$ with loss $\L(F_\mathcal{D}; \hat F) = \textsf{LevyDistance}(F_\mathcal{D}; \hat F)$ which corresponds to the infimum over $\epsilon>0$ such that $F_\mathcal{D}(v-\epsilon)-\epsilon \leq \hat F(v) \leq F_\mathcal{D}(v+\epsilon)+\epsilon$ for all $v \in \mathbb{R}$. \end{itemize} A pricing estimator with $m$ queries consists of an algorithm that posts $m$ prices\footnote{We allow both values $v_t$ and prices $p_t$ to be negative. A negative value means that the buyer has a disutility for the good, while a negative price means that the buyer is compensated for acquiring the good. Negative values are allowed for convenience to obtain cleaner algorithms. The same results can be obtained for non-negative values by dealing with the corner cases around zero.} $p_1, p_2, \hdots, p_m \in \mathbb{R}$. For each price $p_t$ posted, the algorithm observes whether or not a buyer with a freshly drawn value buys at that price. In other words, we observe $s_t = \textsf{sign}(v_t - p_t) \in \{-1,+1\}$ for some $v_t \sim \mathcal{D}$. The prices can be computed adaptively. Finally, the algorithm outputs an estimate $\hat \theta (s_1, \hdots, s_m)$. An $(\epsilon,\delta)$-pricing-query-estimator is an algorithm such that: $$\P_{v_1, \hdots, v_m \sim \mathcal{D}} \left( \L(\theta(\mathcal{D}); \hat \theta (s_1, \hdots, s_m)) \leq \epsilon \right) \geq 1-\delta, \quad \forall \mathcal{D} \in \mathcal{C}$$ Given a class $\mathcal{C}$, parameter $\theta$, and loss $\L$, we define the \emph{pricing query complexity} $\textsc{PriceCplx}_{\mathcal{C},\theta}(\epsilon)$ to be the minimum $m$ such that there exists an $(\epsilon,1/2)$-pricing-query-estimator for $\mathcal{C}$ and $\L$. It is useful to compare with traditional estimators and sample complexity.
A traditional $(\epsilon,\delta)$-estimator is an algorithm that processes the samples $v_1, \hdots, v_m$ directly and produces an estimator $ \hat \theta (v_1, \hdots, v_m)$ such that: $$\P_{v_1, \hdots, v_m \sim \mathcal{D}} \left( \L(\theta(\mathcal{D}); \hat \theta (v_1, \hdots, v_m)) \leq \epsilon \right) \geq 1-\delta, \quad \forall \mathcal{D} \in \mathcal{C}$$ Similarly, we define the sample complexity $\textsc{SampleCplx}_{\mathcal{C},\theta}(\epsilon)$ as the minimum $m$ such that there exists an $(\epsilon,1/2)$-traditional-estimator for $\mathcal{C}$. Since any pricing estimator can be simulated given value samples, we have: \begin{equation} \textsc{PriceCplx}_{\mathcal{C},\theta}(\epsilon) \geq \textsc{SampleCplx}_{\mathcal{C},\theta}(\epsilon) \end{equation} The main question we ask is how big this gap is, i.e., how much harder it is to estimate a parameter from pricing queries as compared to samples. \paragraph{Practical Motivation and Relation to Bandits} In practice auctions are optimized by using a small fraction of traffic (say 1\%) to run experiments, while the remaining 99\% runs the deployed production auction. After a period (say a few days) we use what we learned in the experimental slice to deploy in the main slice. Typically one does not care about the revenue in the experimental slice, but we do care about making that slice as small as possible. This translates directly to the goal of pricing query complexity: how to learn the most with the fewest possible queries? The alternative to running a small experimental slice to collect data in order to optimize the main mechanism is to apply a bandit algorithm to the full traffic. This may be appealing in theory, but it is not viable in practice, where upper bounds on experimental traffic are a reality. For one, bandit algorithms would lead to a more unstable system for the bidders. Furthermore, only learning in a small (typically random) experimental slice minimizes incentives for bidders to strategize against the learning algorithm, since any given query will only be used to learn a price for the next day with a very small probability. This practical difference also explains how our objectives and guarantees differ from those of bandit learning for revenue optimization. While the bandit approach focuses on optimizing average revenue while learning, the sample/pricing complexity approach provides a guarantee on the loss of the final estimator with high probability, but doesn't care about the actual revenue obtained during the learning phase. This is very much in line with the literature on sample complexity. \paragraph{Our Results} We give matching upper and lower bounds (up to $\text{polylog}(1/\epsilon)$ factors) for the pricing complexity of various parameters and classes of distributions. Our results are summarized in Table~\ref{tab:results}. For each row of the table, we assume for the pricing complexity bounds that the parameter being estimated is in the range $[0,1]$. Since pricing queries only receive binary feedback, we need an initial range to be able to return any meaningful guarantee. It is interesting to compare our bounds with the sample complexity for the same regime. Interestingly, in many important cases, such as estimating the mean of a normal distribution or the monopoly price of a regular distribution, pricing queries don't pose any significant handicap when one looks at the asymptotic regime.
This is however not always the case: for computing the monopoly price of an MHR distribution, we establish a clear demarcation. We show a tight bound of $\widetilde{O}(1/\epsilon^2)$ for pricing complexity, which is asymptotically higher than the known bound of $\widetilde{O}(1/\epsilon^{3/2})$ for sample complexity by Huang et al \cite{HMR18}. We also show a gap for computing the monopoly price for general distributions: while $\Omega(1/\epsilon^3)$ pricing queries are required, $\widetilde{O}(1/\epsilon^2)$ value samples are sufficient. \begin{table}[h] \centering \begin{tabular}{ |c|c|c|c| } \hline Parameter & Distribution & Pricing Complexity & Sample Complexity \\ \hline Median & General & $\tilde{O}(1/\epsilon^2)$, $ \Omega(1/\epsilon^2)$ & $O(1/\epsilon^2)$, $\Omega(1/\epsilon^2)$ \\ Mean & Normal & ${O}(\sigma^2/\epsilon^2)$, $\Omega(\sigma^2/\epsilon^2)$ & $O(\sigma^2/\epsilon^2)$, $\Omega(\sigma^2/\epsilon^2)$ \\ CDF & General & $\tilde{O}(1/\epsilon^3)$, $\Omega(1/\epsilon^3)$ & $\tilde{O}(1/\epsilon^2)$, $\Omega(1/\epsilon^2)$ \\ CDF & Regular & $\tilde{O}(1/\epsilon^3)$, $\Omega(1/\epsilon^{2.5})$ & $\tilde{O}(1/\epsilon^2)$ \\ Monopoly Price & MHR & $\tilde{O}(1/\epsilon^2), \Omega(1/\epsilon^2)$ & $\tilde{O}(1/\epsilon^{1.5}), \Omega(1/\epsilon^{1.5})$ \\ Monopoly Price & Regular & $\tilde{O}(1/\epsilon^2)$, $\Omega(1/\epsilon^{2})$ & $\tilde{O}(1/\epsilon^{2})$ \\ Monopoly Price & General & $\tilde{O}(1/\epsilon^3)$, $\Omega(1/\epsilon^3)$ & $\tilde{O}(1/\epsilon^2)$, $\Omega(1/\epsilon^2)$ \\ \hline \end{tabular} \caption{Our upper and lower bounds for various estimators and comparison with sample complexity. In each case, the pricing complexity bounds assume that the parameter being estimated is in $[0,1]$. $\sigma$ is the standard deviation of the normal distribution.} \label{tab:results} \end{table} \paragraph{Techniques: Mean and Median} As a warm-up, we first study how to estimate the median of a distribution using pricing queries. The algorithm is a hybrid of binary search and the UCB algorithm. The algorithm both keeps a range $[\ell, r]$ that contains the median (as in binary search) and builds a confidence interval (like the UCB algorithm) to estimate the quantile of $p = (\ell + r)/2$. Whenever the confidence bound can safely separate the quantile of $p$ from $1/2$, a binary search step is performed. The algorithm requires very few queries to perform a binary search step when the current price is far from the median, but starts requiring more and more as we approach the median. The algorithm is agnostic to $\epsilon$ and works in the streaming model: it keeps an estimate of the median throughout the execution that gets better as it receives more pricing query opportunities. Surprisingly, its performance matches what one could obtain from value samples. We then use the algorithm to estimate the mean of a normal distribution. Remarkably, we obtain the same guarantee of $O(\sigma^2/\epsilon^2)$ (up to constant factors) as the optimal algorithm that has access to sample values. We obtain a guarantee of $\tilde{O}(\sigma^2/\epsilon^2)$ without needing to know $\sigma$ at all. To obtain a bound of $O(\sigma^2/\epsilon^2)$ the algorithm needs to know $\sigma$; if the algorithm knows only an upper bound $\bar{\sigma}$ on $\sigma$, we can obtain a guarantee of $O(\bar{\sigma}^2/\epsilon^2)$. \paragraph{Techniques: Monopoly Price} For the monopoly price our techniques are significantly more involved.
At the heart of the analysis is a new property of regular distributions called \emph{relative flatness}, which is of potential independent interest. It says that if the revenue curve has roughly the same value at $4$ different equidistant points, it cannot hide a point of very high revenue in between those four points. This is not just a consequence of a single-peaked curve, as such curves can easily hide a high revenue point between four points of equal revenue. The concavity of the revenue curve in the quantile space is well known for regular distributions. The novelty is in finding the right property in the \emph{value space} to be used. Using this property we design an algorithm that keeps a list of subintervals of $[0,1]$ to explore. Unlike binary search, we are not able to keep a single interval. Once the algorithm explores an interval it may discard it, but it may also break it into multiple subintervals and add them to the list. The relative flatness is put to use in efficiently discarding intervals that don't contain the monopoly price. We design a potential function to control the number of sub-intervals and to guarantee we explore a small number of them. The argument is delicate: even the choice of the next subinterval to process needs care; picking an arbitrary subinterval does not yield the desired bound. This algorithm obtains a bound of $\tilde{O}(1/\epsilon^2)$ for regular distributions. We then turn our attention to MHR distributions. For value samples, it is known that learning the monopoly price of an MHR distribution is easier than learning the monopoly price of a regular distribution. Surprisingly, this is no longer the case for pricing complexity. To prove this fact, we design two distributions such that no price is an $\epsilon$-approximation simultaneously for both distributions even though the distributions have very close c.d.f.s and quantiles. We then use an information-theoretic argument to show that the sequences of bits obtained by the pricing queries for the two distributions are indistinguishable unless $\Omega(1/\epsilon^2)$ queries are made. We do so by bounding the KL-divergence between the distributions of bits observed by running any algorithm on the two distributions. \paragraph{Why Not Learn the CDF of the Whole Distribution?} Known results on sample complexity for revenue maximization typically use a variant of the optimal reserve price of the empirical distribution. In the case of pricing queries, first, there is no clear notion of what the empirical distribution is. Second, even if one is tempted to learn the CDF of the entire distribution within distance $\epsilon$ in the Levy metric, we show that this is provably inferior (see the CDF rows of Table~\ref{tab:results}) to the tight bounds we get. Our technique of quickly zooming into the right region of the distribution using the notion of relative flatness is crucial for getting tight bounds. \paragraph{Related Work} The work that is closest to ours is that of Huang et al.~\cite{HMR18} on the sample complexity of single-buyer, single-item revenue maximization. Surprisingly, Huang et al.~\cite{HMR18} show that the number of samples required to estimate the monopoly price to obtain a $1-\epsilon$ approximation to revenue is even smaller than the number of samples required to estimate the optimal revenue. Our detailed comparison of pricing query complexity with the sample complexity of Huang et al.\ has already been presented; the pricing complexity often matches the already surprisingly small sample complexity.
The literature on sample complexity has seen a lot of activity in the past decade. In particular, in the single-parameter setting, starting with Elkind's work~\cite{Elkind07} on finite-support auctions, there has been a lot of progress. Cole and Roughgarden~\cite{CR14} study the sample complexity of revenue maximization in single-item settings with independent non-identical distributions, where a ``sample'' is an $n$-tuple when there are $n$ bidders in the auction. They bound the sample complexity by a polynomial in $n$ and $1/\epsilon$ when seeking a $1-\epsilon$ approximation to the optimal revenue. With this definition, in many settings the number of samples is not even a function of $n$: when one seeks a $1/4$ approximation, based on a generalization of Bulow and Klemperer's result~\cite{BK96} by Hartline and Roughgarden~\cite{HR09}, just a single sample will do. Fu et al.~\cite{fu2015randomization} show that the revenue guarantees can be improved with randomization. When the distributions of the $n$ bidders are i.i.d., Dhangwatnotai et al.~\cite{DRY15} showed that poly($1/\epsilon$) samples are enough to get a $1-\epsilon$ approximation; the same holds in the unlimited-supply (digital goods) setting, as it can be reduced to a single-agent setting. Devanur et al.~\cite{DHP16} consider the single-parameter setting where the buyer distributions are not in discrete categories but in a continuum, represented as a signal in the hands of the auctioneer. This line of literature for single-parameter settings culminates with the work of Guo et al.~\cite{guo2019settling}, which provides tight sample complexity bounds for buyers with regular, MHR, or bounded values. Hu et al.~\cite{hu2021targeting} further study a targeted sample complexity model where, for each buyer distribution, one is allowed to specify a quantile range of fixed size to sample from.

Sample complexity results in multi-parameter settings are comparatively fewer. Morgenstern and Roughgarden~\cite{MR16} and Syrgkanis~\cite{syrgkanis2017sample} study the sample complexity of learning simple auctions (item pricing, bundle pricing, etc.) in multi-parameter settings via the pseudo-dimension. Gonczarowski and Weinberg~\cite{gonczarowski2018sample} obtain a polynomial sample complexity for up-to-$\epsilon$-optimal multi-item auctions for additive buyers with independent item values. Brustle et al.~\cite{brustle2020multi} further extend the result to buyers with some specific types of correlated item values. The sample complexity of revenue-optimal item pricings was explored earlier by Balcan et al.~\cite{BBHM08}, and the sample complexity of welfare-optimal item pricings by Feldman et al.~\cite{FGL15} and Hsu et al.~\cite{HMRRV16}.

Finally, we also note that pricing query complexity is different from sample complexity in PAC learning with threshold concepts, i.e., the literature following the work of~\cite{valiant}, despite the resemblance in the definitions. In PAC learning, the same concept may be used as a test on different samples. This is not the case for pricing complexity: here we never explicitly observe any sample. This means that the techniques and bounds for PAC learning are closer to the usual sample complexity results than to pricing complexity, as is borne out by the lower bounds.

\section{Warmup: Estimating the Median and the Mean}\label{sec:gaussian}

Before going to the technically more involved Section~\ref{sec:monopoly}, we begin with the estimation of the median and the mean of a distribution as a warmup.
While the results in Section~\ref{sec:monopoly} are our main results, there are some nuances in this Section~\ref{sec:gaussian} as well, including the removal of the log factors in Theorem~\ref{lm:norm}.

\subsection{Median of a General Distribution} \label{sec:median-general}

We start with one of the most basic problems in statistics: estimating the median of a distribution. Let $\mathcal{C}$ be the class of all distributions with finite variance and median in $[0,1]$. When we estimate the median of a distribution $\mathcal{D}$ with cdf $F$, a natural loss function is $\L(\theta,\hat\theta)=|F(\theta)-F(\hat\theta)|$. We can then bound the pricing query complexity as follows: \begin{theorem}\label{thm:medianub} Let $\mathcal{C}$ be a class of distributions over $\mathbb{R}$ with median in $[0,1]$. Define: $$a_F:=\sup_{F\in\mathcal{C}}\log{\textstyle|F^{-1}(\frac{1}{2}+\frac{\epsilon}{3})-F^{-1}(\frac{1}{2}-\frac{\epsilon}{3})|^{-1}}.$$ We have \[\textsc{PriceCplx}_{\mathcal{C},\theta}(\epsilon) \leq \tilde{O}(1/\epsilon^2)\cdot a_F\log a_F.\] \end{theorem} Observe that the bound depends on the parameter $a_F$, which measures how concentrated the distribution is around the median. For example, if there is a point mass of order $\epsilon$ exactly at the median, then $a_F = \infty$. In that case, in order to obtain $\L(\theta,\hat\theta)\leq O(\epsilon)$ we would need to find the median \emph{exactly}, which is impossible using only pricing queries; hence the dependence on $a_F$.\\ The bound in Theorem \ref{thm:medianub} is achieved by a combination of binary search and the UCB algorithm, described in Algorithm \ref{alg:binary_ucb}. The algorithm starts with a range $[0,1]$ of potential medians and repeatedly prices at the middle point $p$ of the range. Since the algorithm obtains only binary feedback, it needs to keep sending pricing queries at this point until it has enough information to decide with high probability whether $Q(p) = 1- F(p) < 1/2$ or $Q(p) > 1/2$. To do that, we keep an estimate of the quantile $Q(p)$ based on the binary feedback, together with a confidence interval for this estimate. Once $1/2$ leaves this confidence interval, we know with high probability on which side of $p$ the median is, and we can do a binary search step. After the binary search step, we restart the UCB step. \begin{algorithm}[htb] \caption{Binary search algorithm that returns $p^*$ with $|Q(p^*)-\frac{1}{2}|\leq\epsilon$ with prob $1-\delta$ } \label{alg:binary_ucb} {Initialize $\ell =0$, $r=1$, $p=1/2$; $n=0$, $k=0$, $p^* =1/2$, $\epsilon^* = 1$\; \For {t=1,2,3... (each new arriving query)} { Increment number of samples $n = n+1$ \; Price at $p$, if sold, increment $k= k+1$ \; If $\epsilon_p = 18\sqrt{\log(t/\delta)/ n} < \epsilon^*$ update $p^* = p, \epsilon^* = \epsilon_p$ \; \If {$\abs{1/2 - k/n} > 12\sqrt{\log(t/\delta)/ n}$ (i.e. $1/2$ is outside the confidence bound)} { If $k/n < 1/2$, update $r = p$. Otherwise, $\ell=p$\; Update $k=n = 0$ (reset the counters) and set $p =(\ell+r)/2$\; } \textrm{Return} $p^*$ if $\epsilon^*\leq\epsilon$\; } } \end{algorithm} One important feature of our algorithm is that it uses no information about $\epsilon$ other than as a stopping condition. In fact, one can think of the algorithm in the \emph{streaming model}: it processes a stream of `pricing opportunities' and updates a constant-size set of parameters in each step.
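For concreteness, Algorithm~\ref{alg:binary_ucb} admits an almost line-by-line transcription; the following Python sketch (with the same constants, a pricing oracle \texttt{query} as in the simulator above, and the stopping rule from the last line of the pseudocode) is one possible rendering.

\begin{verbatim}
import math

def median_binary_ucb(query, eps, delta):
    # Hybrid binary-search/UCB median estimate: k/n estimates the
    # quantile Q(p) = 1 - F(p) of the current price p, and a binary
    # search step is taken once 1/2 exits the confidence interval.
    lo, hi, p = 0.0, 1.0, 0.5
    n = k = t = 0                   # queries and sales at current p
    p_star, eps_star = 0.5, 1.0     # best estimate so far, its radius
    while eps_star > eps:
        t += 1; n += 1; k += query(p)
        rad = math.sqrt(math.log(t / delta) / n)
        if 18 * rad < eps_star:     # record a tighter estimate
            p_star, eps_star = p, 18 * rad
        if abs(0.5 - k / n) > 12 * rad:
            if k / n < 0.5:         # sale rate < 1/2: median below p
                hi = p
            else:                   # sale rate > 1/2: median above p
                lo = p
            n = k = 0               # binary search step; reset counters
            p = (lo + hi) / 2
    return p_star
\end{verbatim}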
At each point in time, the algorithm keeps an estimate $p^*$ together with a confidence bound $\epsilon^*$, which keeps improving as the algorithm receives more data. The analysis of the pricing complexity is deferred to Section~\ref{sec:proof-median-general}.

\subsection{Mean of a Normal Distribution} \label{sec:mean-normal}

We now apply this idea to estimate the mean of a normal distribution using pricing queries. The idea can be extended to other families of parametric distributions, but we focus here on normals to allow a crisp comparison with sample complexity. Our main result is the following pricing complexity upper bound, which exactly matches the well-known sample complexity lower bound of $\Omega(\sigma^2/\epsilon^2)$: \begin{theorem}\label{lm:norm} Let $\mathcal{C}_{\sigma} = \left\{ \mathcal{N}(\mu, \sigma) \text{ s.t. } \mu \in [0,1]\right\}$ be the class of normal distributions with variance $\sigma^2$ with the loss $\L(\theta, \hat \theta) = \abs{\theta - \hat \theta}$. Then the pricing complexity of computing the mean is: $$\textsc{PriceCplx}_{\mathcal{C}_{\sigma},\mu}(\epsilon)= O(\sigma^2/\epsilon^2).$$ \end{theorem} \begin{proof} A direct application of Theorem \ref{thm:medianub} leads to an algorithm with $\tilde{O}(\sigma^2/\epsilon^2)$ pricing query complexity: to estimate a point $\hat \theta$ such that $\abs{\hat \theta - \mu} \leq \epsilon$ for a distribution with mean $\mu$, it is enough to find a point $\hat\theta$ such that $\abs{F(\hat\theta) - F(\mu)} \leq \epsilon/\sigma$, and for a normal distribution the parameter $a_F$ in Theorem \ref{thm:medianub} is $a_F = O(\log 1/(\epsilon \cdot \sigma))$. If $\sigma \geq \epsilon$, this directly yields a $\tilde{O}(\sigma^2/\epsilon^2)$ bound. If $\sigma < \epsilon$, on the other hand, all points $p \notin [\mu-\epsilon, \mu+\epsilon]$ are such that $\abs{Q(p) - 1/2} = \Omega(1)$, and hence the UCB step in Algorithm \ref{alg:binary_ucb} takes a constant number of queries at each such point. Hence within $O(\log (1/\epsilon))$ queries we price at a point that is $\epsilon$-close to the mean. This algorithm is robust in the sense that it does not require us to know the variance, which is only used in the analysis.\\ Now assume that the algorithm is allowed to use the exact value of the variance $\sigma^2$; we can then improve the guarantee from $\tilde{O}(\sigma^2/\epsilon^2)$ to $O(\sigma^2/\epsilon^2)$ via the following procedure: \begin{itemize} \item Use Algorithm \ref{alg:binary_ucb} with $\epsilon = 1/4$ to find a point $p$ with quantile $Q_{\mu, \sigma}(p) \in [1/4, 3/4]$. \item Using $O(\sigma^2/\epsilon^2)$ pricing queries, find an estimate $\hat q$ such that $\abs{\hat q - Q_{\mu,\sigma}(p)} \leq \epsilon/\sigma$. \item Solve the equation $\hat q = Q_{\hat \mu, \sigma}(p)$ to find an estimate $\hat \mu$ of the mean. \end{itemize} Note that since $Q_{\hat \mu, \sigma}(p) = Q_{0,1}(\frac{p-\hat\mu}{\sigma})$, where $Q_{0,1}$ is the quantile function of the standard Gaussian, the solution to the equation in the last step is: $$\hat{\mu} = p - \sigma \cdot Q_{0,1}^{-1}(\hat q).$$ Since the derivatives of $Q_{0,1}^{-1}$ are bounded on $[1/4, 3/4]$, an error of $\epsilon/\sigma$ in the estimate $\hat{q}$ leads to an error of $O(\epsilon/\sigma)$ in $Q_{0,1}^{-1}(\hat q)$; multiplying by $\sigma$, we get $\abs{\mu - \hat \mu} \leq O(\epsilon)$. \end{proof}
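The three-step procedure in the proof is equally short in code. The sketch below reuses \texttt{median\_binary\_ucb} from the previous sketch, takes the standard normal quantile from \texttt{scipy}, and uses an explicit Hoeffding constant of our own choosing for the number of repetitions $m$.

\begin{verbatim}
import math
from scipy.stats import norm

def estimate_normal_mean(query, sigma, eps, delta=0.01):
    # Step 1: find p whose quantile Q(p) lies in [1/4, 3/4].
    p = median_binary_ucb(query, 0.25, delta)
    # Step 2: estimate q = Q(p) to accuracy eps/sigma by repeated
    # pricing at p; Hoeffding gives m = O(sigma^2/eps^2) repetitions.
    m = math.ceil(2 * (sigma / eps) ** 2 * math.log(2 / delta))
    q_hat = sum(query(p) for _ in range(m)) / m
    # Step 3: solve q_hat = Q_{mu,sigma}(p).  Since Q_{0,1}^{-1}(q)
    # equals Phi^{-1}(1 - q), this is mu = p - sigma * Phi^{-1}(1 - q_hat).
    return p - sigma * norm.ppf(1 - q_hat)
\end{verbatim}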
We can relax the assumption that $\sigma$ is known exactly and assume that we only know an upper bound $\bar \sigma$; the proof is deferred to Section~\ref{sec:proof-mean-normal}. \begin{corollary}\label{cor:median-ub} Let $\bar\mathcal{C}_{\bar\sigma} = \left\{ \mathcal{N}(\mu, \sigma) \text{ s.t. } \mu \in [0,1],\ \sigma\leq\bar\sigma\right\}$ be the class of normal distributions with variance at most $\bar\sigma^2$ with the loss $\L(\theta, \hat \theta) = \abs{\theta - \hat \theta}$. Then the pricing complexity of computing the mean is: $$\textsc{PriceCplx}_{\bar\mathcal{C}_{\bar\sigma},\mu}(\epsilon)= O(\bar\sigma^2/\epsilon^2).$$ \end{corollary}

\section{Estimating the Monopoly Price} \label{sec:monopoly}

In this section, we focus on estimating the monopoly price (i.e., the revenue-optimal price, or the Myerson price) and measure our loss via the natural metric of the revenue gap between the true revenue and that obtained by our estimated price: $\L(\theta; \hat \theta) = \abs{\textsc{Rev}_\mathcal{D}(\theta) - \textsc{Rev}_\mathcal{D}(\hat \theta)}$, where $\textsc{Rev}_\mathcal{D}(p) := \mathbb{E}_{v \sim \mathcal{D}} [p \cdot \mathbf{1}\{v \geq p\}]$ and $\theta$ and $\hat \theta$ are the true and estimated monopoly prices. An important tool in this section is the following lemma, which shows that to estimate the quantile $Q(p)=1-F(p)$ of a price $p$ with additive error $\epsilon$, it suffices to use $\tilde{O}(\frac{1}{\epsilon^2})$ pricing queries at $p$. The proof is relatively standard and is deferred to Section~\ref{sec:proof-monopoly}. \begin{lemma}\label{lem:repeatprice} For any value distribution $F$ supported in $[0,1]$ and value $p$, there is an algorithm $\textsf{EQ}_F(p,\epsilon,\delta)$ that makes $m=\tilde{O}(\frac{1}{\epsilon^2}\log\frac{1}{\delta})$ pricing queries and with probability $>1-\delta$ returns an estimate $\hat{q}$ of the quantile $Q(p)=1-F(p)$ such that $\abs{\hat q - Q(p)} < \epsilon$. \end{lemma}

\subsection{Regular Distributions}\label{sec:regular}

First, we study the class of regular distributions $\mathcal{C}_{\textsc{REG}}$ supported on $[0,1]$. Regular distributions are the class of distributions for which the revenue curve in quantile space, $\hat R(q) = q \cdot F^{-1}(1-q)$, is concave. This in particular implies that the revenue curve in the space of values/prices, $\textsc{Rev}(p)=p(1-F(p))$, is single-peaked. This class includes many important distributions, such as the uniform, the exponential, and all log-concave distributions.\\ At the heart of our algorithm is the following lemma on regular distributions: \begin{lemma}[Relative Flatness of Regular Distributions]\label{lem:myerson-average} Let $\textsc{Rev}$ be the revenue curve of a regular distribution. Consider four equidistant\footnote{In the sense that $p_i = (p_1(4-i) + p_4(i-1))/3$ for $i=2,3$.} values $p_1<p_2<p_3<p_4=cp_1$ in $[0,1]$ and let $\textsc{Rev}_{\max}$ and $\textsc{Rev}_{\min}$ be the maximum and minimum values of the revenue curve at those four points. If $\textsc{Rev}_{\min} \geq \textsc{Rev}_{\max}-\epsilon$ for some $\epsilon>0$, then for any $p\in[p_1,p_4]$, $\textsc{Rev}(p)\leq \textsc{Rev}_{\max}+O(c\epsilon)$. \end{lemma} The geometric intuition is that if the curve is reasonably flat at four equidistant points, then it must be reasonably flat on most of the interval they span. Note that this is not generally true for an arbitrary single-peaked function, as the peak could easily be hiding between any two of those points. See Figure \ref{fig:flatness}.
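The estimation primitive of Lemma~\ref{lem:repeatprice}, which the search algorithm below invokes at every price it examines, is a plain Hoeffding average of repeated pricing queries; a minimal sketch (the helper names and the explicit constant are ours):

\begin{verbatim}
import math

def estimate_quantile(query, p, eps, delta):
    # EQ(p, eps, delta): average m binary outcomes of pricing at p;
    # Hoeffding gives |q_hat - Q(p)| <= eps with probability > 1 - delta.
    m = math.ceil(math.log(2 / delta) / (2 * eps ** 2))
    return sum(query(p) for _ in range(m)) / m

def estimate_revenue(query, p, eps, delta):
    # Revenue estimate rev~(p) = p * q_hat(p) used by the search below.
    return p * estimate_quantile(query, p, eps, delta)
\end{verbatim}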
The proof of the relative flatness lemma (deferred to Section~\ref{sec:proof-regular}) relies heavily on the fact that the revenue curve is concave when viewed in quantile space. From this point on, the only two facts we use about regular distributions are that the revenue curve is single-peaked and the relative flatness lemma. \begin{figure}[h] \centering \begin{tikzpicture}[scale=1] \draw (0,0) -- (5,0); \node[circle,fill,inner sep=1pt] at (1,.45) {}; \node[circle,fill,inner sep=1pt] at (2,.48) {}; \node[circle,fill,inner sep=1pt] at (3,.5) {}; \node[circle,fill,inner sep=1pt] at (4,.5) {}; \draw (1,-.1)--(1,.1); \draw (2,-.1)--(2,.1); \draw (3,-.1)--(3,.1); \draw (4,-.1)--(4,.1); \draw [line width=1pt, color=blue] plot [smooth] coordinates { (0,.2) (1,.45) (2,.48) (2.2,1) (2.5,2) (2.8,1) (3,.5) (4,.5) (5,.2)}; \end{tikzpicture} \caption{The relative flatness lemma shows that the revenue curve of a regular distribution cannot hide a very high revenue point in a region that appears flat when measured with respect to $4$ equidistant points. For example, the curve in the figure cannot be the revenue curve of a regular distribution in the value space.} \label{fig:flatness} \end{figure} With that we can prove our main result: \begin{theorem}\label{thm:regular} Let $\mathcal{C}_{\textsc{REG}}$ be the class of regular distributions supported in $[0,1]$ and let $\theta$ be the monopoly price. Then, $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{REG}},\theta}(\epsilon) = \tilde{O}(1/\epsilon^2).$$ \end{theorem} In the next section we will show that this is tight even for the subclass of MHR distributions.\\ We now describe the algorithm achieving this bound. The main data structure kept by the algorithm is a list $L$ of intervals of the revenue curve, with disjoint interiors, that still need to be explored. For each endpoint of those intervals, we keep a $\pm \epsilon$-estimate $\hat{q}(p)$ of its quantile $Q(p) = 1-F(p)$, obtained by the algorithm of Lemma \ref{lem:repeatprice} with success probability $\delta=\epsilon^2$, together with a revenue estimate: $$\widetilde{\rev}(p) = p \cdot \hat{q}(p).$$ We will show that the algorithm prices at only $\tilde{O}(1)$ different prices; thus, by a union bound, the overall success probability is at least $\frac{2}{3}$ for small $\epsilon$. We initialize the algorithm with $L = \{[0,1]\}$. While the list is non-empty, we process it by removing an interval and analyzing it. In the process of analyzing it, we query some of its points; depending on the outcome of those queries, we may add subintervals of it back to the list for further analysis. We stop whenever the list is empty. At that point, we return the queried point with the largest estimated revenue. We now describe how to process each interval. In every round, we remove the second leftmost interval $[\ell, r]$ from the list whenever the list has more than one interval; otherwise, we remove the only interval in the list. We process this interval by dividing it into $4$ pieces of the same length, querying the breakpoints $p_i = ((4-i) \ell + i r)/4$ for $i = 1,2,3$ for the approximate quantile, and constructing a revenue estimate $\widetilde{\rev}$ for each.
We define the following quantities for the interval: $$\textsc{Rev}_{\max} = \max_{p \in \{\ell, p_1, p_2, p_3, r\}} \textsc{Rev}(p), \qquad \widetilde{\rev}_{\max} = \max_{p \in \{\ell, p_1, p_2, p_3, r\}} \widetilde{\rev}(p).$$ Now we proceed according to one of seven cases:\\ \noindent\emph{Case 1:} If $r - \ell < \epsilon$, we discard the interval. In this case, we do not need to analyze the interval further, since no point $p\in[\ell,r]$ can generate much better revenue than just pricing at $\ell$: \[\textsc{Rev}(\ell)=\ell Q(\ell)\geq \ell Q(p)\geq (p-\epsilon)Q(p)\geq pQ(p)-\epsilon=\textsc{Rev}(p)-\epsilon.\] \noindent\emph{Case 2:} If $\min_{p \in \{\ell, p_1, p_2, p_3, r\}} \widetilde{\rev}(p) \geq \widetilde{\rev}_{\max} - 2\epsilon$, we add $[\ell,p_1]$ to the list. In this case, since $r\leq 4p_1$, the relative flatness lemma guarantees that no point in $[p_1, r]$ can be much better than the points already queried.\\ \noindent\emph{Case 3:} If $\min_{p \in \{p_1, p_2, p_3, r\}} \widetilde{\rev}(p) \geq \widetilde{\rev}_{\max} - 2\epsilon$ but $\widetilde{\rev}(\ell) < \widetilde{\rev}_{\max} - 2\epsilon$, we remove all intervals to the left of $\ell$ from the list and add $[\ell, p_1]$ and $[p_1,r]$. In this case, we know that $$\textsc{Rev}(\ell) \leq \widetilde{\rev}(\ell) + \epsilon < \widetilde{\rev}_{\max}-\epsilon \leq \textsc{Rev}_{\max},$$ and hence the revenue at $\ell$ is suboptimal and the optimal point $p^*$ is to the right of $\ell$. Here we only use the fact that the revenue function is single-peaked.\\ All the remaining cases follow from similar arguments based on the single-peakedness of the revenue function:\\ \noindent\emph{Case 4:} If $\min_{p \in \{\ell, p_1, p_2, p_3\}} \widetilde{\rev}(p) \geq \widetilde{\rev}_{\max} - 2\epsilon$ but $\widetilde{\rev}(r) < \widetilde{\rev}_{\max} - 2\epsilon$, we remove all intervals to the right of $r$ from the list and add $[\ell, p_3]$ and $[p_3,r]$. \\ \noindent\emph{Case 5:} If $\min_{p \in \{p_1, p_2, p_3\}} \widetilde{\rev}(p) \geq \widetilde{\rev}_{\max} - 2\epsilon$ but $\widetilde{\rev}(\ell), \widetilde{\rev}(r) < \widetilde{\rev}_{\max} - 2\epsilon$, we remove all intervals from the list and add back $[\ell, p_1]$, $[p_1, p_3]$ and $[p_3, r]$. \\ \noindent\emph{Case 6:} If $\widetilde{\rev}(\ell), \widetilde{\rev}(p_1)< \widetilde{\rev}_{\max} - 2\epsilon$, we remove all intervals to the left of $\ell$ from the list and add $[p_1,r]$ to the list. \\ \noindent\emph{Case 7:} If $\widetilde{\rev}(r), \widetilde{\rev}(p_3) <\widetilde{\rev}_{\max} - 2\epsilon$, we remove all intervals to the right of $r$ from the list and add $[\ell,p_3]$ to the list.\\ Observe that by single-peakedness these are the only possible cases. The algorithm always terminates, since it replaces intervals by intervals that are smaller by at least a constant factor, and once intervals become of size $\epsilon$ they are discarded. A compact sketch of the main loop is given below.
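The following Python sketch is a compact (and slightly simplified) rendering of the loop just described; the case numbering mirrors the text, overlapping cases are resolved in the order listed, and \texttt{estimate\_quantile} is the primitive sketched after Lemma~\ref{lem:repeatprice}.

\begin{verbatim}
def monopoly_price_regular(query, eps):
    cache = {}
    def rev(p):  # memoized estimate rev~(p), failure prob. eps^2 each
        if p not in cache:
            cache[p] = p * estimate_quantile(query, p, eps, eps ** 2)
        return cache[p]

    todo = [(0.0, 1.0)]
    while todo:
        todo.sort()
        l, r = todo.pop(1 if len(todo) > 1 else 0)   # second leftmost
        if r - l < eps:                                       # Case 1
            continue
        p = [l + i * (r - l) / 4 for i in range(5)]  # l, p1, p2, p3, r
        top = max(rev(x) for x in p)
        low = [rev(x) < top - 2 * eps for x in p]    # clearly worse?
        not_left  = [(a, b) for a, b in todo if b > l]
        not_right = [(a, b) for a, b in todo if a < r]
        if not any(low):                                      # Case 2
            todo.append((p[0], p[1]))
        elif not any(low[1:]):                                # Case 3
            todo = not_left + [(p[0], p[1]), (p[1], p[4])]
        elif not any(low[:4]):                                # Case 4
            todo = not_right + [(p[0], p[3]), (p[3], p[4])]
        elif not any(low[1:4]):                               # Case 5
            todo = [(p[0], p[1]), (p[1], p[3]), (p[3], p[4])]
        elif low[0] and low[1]:                               # Case 6
            todo = not_left + [(p[1], p[4])]
        elif low[4] and low[3]:                               # Case 7
            todo = not_right + [(p[0], p[3])]
    return max(cache, key=cache.get)   # best estimated revenue
\end{verbatim}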
What remains is to analyze the query complexity of the algorithm. \begin{proof}[Proof of Theorem \ref{thm:regular}] We argue that the algorithm described above performs at most $\tilde{O}(1/\epsilon^2)$ queries. To see this, we first claim that there are at most three intervals in the candidate list at any given time. We prove this claim by induction over the number of iterations. Suppose that after the $t$-th iteration there are $k\leq 3$ intervals in the list; we argue that after the $(t+1)$-th iteration there are still at most $3$ intervals. Observe that in Cases 1, 2, 6, and 7, the number of intervals does not increase. In Case 5, the number of intervals becomes exactly 3, no matter how many intervals were in the list. In Cases 3 and 4, the number of intervals increases by at most 1, so the number of intervals in the list stays at most $3$ if $k\leq 2$. The only remaining case is that the algorithm runs through Case 3 or 4 with the list containing $k=3$ intervals at the beginning of the iteration. Since the algorithm picks the second leftmost interval from the list, it picks the middle one of the three intervals (sorted according to the left boundary). Notice that in Case 3 the leftmost interval is also removed in this iteration, and in Case 4 the rightmost interval is also removed. Thus, after the algorithm runs through Case 3 or Case 4, no matter how many intervals were in the list at the beginning of the iteration, at most 3 intervals remain at its end. This finishes the proof of the claim. After each iteration, either an interval of length $d$ splits into multiple intervals of length at most $\frac{3}{4}d$ each, with some other intervals possibly removed, or an interval of length $d$ shrinks to a new interval of length at most $\frac{3}{4}d$ (recall that the five prices $\ell,p_1,p_2,p_3,r$ chosen by the algorithm are equidistant). Therefore, if we record the lengths of the three intervals as $d_1\geq d_2\geq d_3$, then either $d_1$ decreases to at most a $\frac{\sqrt{3}}{2}$ fraction of its value; or $d_1$ does not increase and $d_2$ decreases to at most a $\frac{\sqrt{3}}{2}$ fraction; or $d_1,d_2$ do not increase and $d_3$ decreases to at most a $\frac{3}{4}$ fraction. For $i=1,2,3$, let $a_{i}=0$ if $d_i=0$, and $a_{i}=\lceil\log_{2/\sqrt{3}}\frac{d_i}{\epsilon}\rceil$ if $d_i>0$. Then at each step, either $a_1$ decreases by at least 1; or $a_2$ decreases by at least 1 while $a_1$ does not increase; or $a_3$ decreases by at least 1 while $a_1$ and $a_2$ do not increase. Let $n=1+\lceil\log_{2/\sqrt{3}}\frac{1}{\epsilon}\rceil$, and define the potential function $\phi(d_1,d_2,d_3)=n^2a_1+na_2+a_3$. Since $a_i\leq n-1$, $\phi$ decreases by at least 1 at each step. As $\phi(d_1,d_2,d_3)=\phi(1,1,1)\leq n^3$ at the beginning of the algorithm, the total number of iterations is at most $n^3=O(\log^3\frac{1}{\epsilon})=\tilde{O}(1)$. Since the query complexity of each iteration is $\tilde{O}(\frac{1}{\epsilon^2})$, the total query complexity of the algorithm is $\tilde{O}(\frac{1}{\epsilon^2})$. \end{proof}

\subsection{MHR Distributions}\label{sec:mhr}

We complement the above result by providing a lower bound on the pricing complexity of estimating the monopoly price of MHR distributions. \begin{theorem}\label{thm:mhrlb} Let $\mathcal{C}_{\textsc{MHR}}$ be the class of Monotone Hazard Rate distributions supported in $[0,1]$ and let $\theta$ be the monopoly price. Then $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{MHR}},\theta}(\epsilon) = \Omega(1/\epsilon^2).$$ \end{theorem} To prove lower bounds on the pricing complexity of estimating the monopoly price, a key observation is that it is very hard to distinguish two distributions with almost identical cumulative distribution functions using pricing queries.
\begin{lemma}\label{lem:query-complexity} For two value distributions $D$ and $D'$, if $\frac{1}{1+\epsilon}\leq\frac{Q_D(v)}{Q_{D'}(v)},\frac{F_D(v)}{F_{D'}(v)} \leq 1+\epsilon$ for every $v\in[0,1]$, then $\Omega(\frac{1}{\epsilon^2})$ pricing queries are needed to distinguish the two distributions with probability $>1-\delta$. \end{lemma} The proof of the lemma is technically involved and is deferred to Section~\ref{sec:proof-mhr}. We first show that the two distributions have small KL-divergence, and then use Pinsker's inequality from information theory to bound the pricing complexity; a similar approach was used in the literature \cite{HMR18}. Thus, to prove the lower bound on the pricing complexity of estimating the monopoly price of MHR distributions, we only need to find two MHR distributions with close cumulative distribution functions satisfying the above lemma. Such distributions indeed exist, and we defer the construction to Section~\ref{sec:proof-mhr}.

\subsection{General Distributions}\label{sec:general-distribution}

We now provide the pricing complexity for the class of general distributions. \begin{theorem}\label{thm:general-distribution} Let $\mathcal{C}_{\textsc{ALL}}$ be the class of all value distributions on $[0,1]$ and let $\theta$ be the monopoly price. Then $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{ALL}},\theta}(\epsilon)= \tilde{\Theta}(1/\epsilon^3).$$ \end{theorem} The proof idea is as follows. On the one hand, we can price at all multiples of $\epsilon$ to get an estimated revenue for each such price: $\tilde{O}(\epsilon^{-2})$ pricing queries give $\epsilon$ accuracy at each price, so $\tilde{O}(\epsilon^{-3})$ queries guarantee a price whose revenue is within $O(\epsilon)$ of optimal. On the other hand, consider a distribution with the same revenue at all but one multiple of $\epsilon$, and suppose that pricing at $i\epsilon$ gives a higher revenue than any $j\epsilon$ with $j\neq i$. Any pricing algorithm needs to price at every multiple of $\epsilon$ to determine the exact value of $i$, and $\Omega(\epsilon^{-2})$ queries are needed for accuracy at each; thus $\Omega(\epsilon^{-3})$ queries are necessary for the general class of distributions. The detailed proof is deferred to Section~\ref{sec:proof-general}.

\subsection{Discussion on Learning the Entire Regular Distribution}

The algorithms with optimal sample complexity designed by Huang, Mansour and Roughgarden \cite{HMR18} work by using the samples to build an empirical distribution and then choosing the optimal reserve price with respect to the empirical distribution. We know by the DKW inequality that after $\tilde{O}(1/\epsilon^2)$ samples, the empirical distribution is $\epsilon$-close to the real distribution in Kolmogorov distance with high enough probability. Hence, for a bounded distribution, there is no asymptotic gap between the sample complexity of learning the monopoly price and that of learning the entire CDF. Is there a similar phenomenon for pricing queries? In Section~\ref{sec:regular}, we discussed how to \textit{directly} learn the monopoly price with $\epsilon$ error for regular distributions using $\tilde{O}(1/\epsilon^2)$ pricing queries. With this many pricing queries, can we also get a full picture of the distribution, that is, learn a distribution within a small distance of the true one? There are multiple metrics measuring the distance between two distributions.
Metrics measured by the pointwise difference of cumulative distribution functions, like the Kolmogorov distance\footnote{Two distributions with cumulative distribution functions $F$ and $G$ are within $\epsilon$ Kolmogorov distance if $F(x)-\epsilon\leq G(x)\leq F(x)+\epsilon$ for every $x\in\mathbb{R}$.}, are inappropriate when we only have pricing access to the distribution. For example, if a distribution is a deterministic value in $[0,1]$, pricing queries can never determine that specific value exactly. Instead, we measure the distance between two distributions by the Levy distance\footnote{Two distributions with cumulative distribution functions $F$ and $G$ are within $\epsilon$ Levy distance if $F(x-\epsilon)-\epsilon\leq G(x)\leq F(x+\epsilon)+\epsilon$ for every $x\in\mathbb{R}$.}. Observe that if two distributions $F$ and $G$ are within $\epsilon$ Levy distance, then for any price $p\in[0,1]$, $\textsc{Rev}_F(p-\epsilon)\geq \textsc{Rev}_G(p)-O(\epsilon)$. This means that if $p_F$ is the monopoly price of $F$, then $p_F-\epsilon$ is a good price for $G$, achieving near-optimal revenue (with $O(\epsilon)$ revenue loss). Thus, learning a distribution within a small Levy distance is a harder problem than learning the monopoly price. For a general distribution on $[0,1]$, the pricing query complexity of learning an approximate distribution is $\tilde{\Theta}(\epsilon^{-3})$, the same as the complexity of learning the monopoly price, as we can estimate the quantile of each multiple of $\epsilon$ using $\tilde{O}(\epsilon^{-2})$ pricing queries. What about regular distributions? Can we also find a regular distribution within $O(\epsilon)$ Levy distance using only $\tilde{O}(1/\epsilon^2)$ pricing queries? We answer this question negatively by showing that $\Omega(1/\epsilon^{2.5})$ pricing queries are required. This in particular shows that our zooming procedure based on the relative flatness lemma is necessary to obtain optimal pricing query complexity bounds. It also provides another scenario (besides learning the monopoly price of an MHR distribution) where there is an asymptotic gap between the pricing query complexity and the sample complexity (which is $O(1/\epsilon^2)$ by the DKW inequality). \begin{theorem}\label{thm:regular-levy-pricing} Let $\mathcal{C}_{\textsc{REG}}$ be the class of regular distributions supported in $[0,1]$ and let $\theta$ be the true distribution\footnote{The distance metric $|\theta-\theta'|$ of two distributions $\theta$ and $\theta'$ is measured by Levy distance.}. Then $$\textsc{PriceCplx}_{\mathcal{C}_{\textsc{REG}},\theta}(\epsilon) = \Omega(1/\epsilon^{2.5}).$$ \end{theorem} \begin{proof} Let $D^*$ be the uniform distribution on $[0,1]$; in other words, the density function $f$ of $D^*$ satisfies $f(v)=1$ for every $v\in[0,1]$. For any $x\in[0.1,0.9]$, let $D_x$ be the distribution with the following density function $f_x$: \begin{equation*} f_x(v) = \left\{\begin{array}{lr} 1, & \text{if } v\in[0,x)\cup(x+4\sqrt{\epsilon},1];\\ v-x+1, & \text{if } v\in[x,x+\sqrt{\epsilon}];\\ -v+x+1+2\sqrt{\epsilon}, & \text{if } v\in(x+\sqrt{\epsilon},x+3\sqrt{\epsilon});\\ v-x-4\sqrt{\epsilon}+1, & \text{if } v\in[x+3\sqrt{\epsilon},x+4\sqrt{\epsilon}].\\ \end{array}\right. \end{equation*} Intuitively, $D_x$ is obtained by perturbing the uniform distribution $D^*$ on a small interval $[x,x+4\sqrt{\epsilon}]$.
On that interval, $f_x(v)$ increases linearly from $1$ to $1+\sqrt{\epsilon}$, then decreases linearly to $1-\sqrt{\epsilon}$, and finally returns to $1$. Observe that the distribution $D_x$ is regular, since Myerson's virtual value $\phi_x(v)=v-\frac{1-F_x(v)}{f_x(v)}$ is increasing even where $f_x(v)$ decreases: the derivative of $\phi_x$ is $\phi_x'(v)=2+\frac{(1-F_x(v))f_x'(v)}{f_x^2(v)}$, and since the density of $D_x$ satisfies $1-\sqrt{\epsilon}\leq f_x(v)\leq 1+\sqrt{\epsilon}$ and $-1\leq f_x'(v)\leq 1$, we have $\phi_x'(v)>0$ when $\epsilon$ is small enough, which means that $\phi_x(v)$ is monotone increasing. Now we analyze the number of pricing queries needed to distinguish $D^*$ and $D_x$. The cumulative distribution function of $D_x$ is \begin{equation*} F_x(v) = \left\{\begin{array}{lr} v, & \text{if } v\in[0,x)\cup(x+4\sqrt{\epsilon},1];\\ v+\frac{(v-x)^2}{2}, & \text{if } v\in[x,x+\sqrt{\epsilon}];\\ v+\epsilon-\frac{(x+2\sqrt{\epsilon}-v)^2}{2}, & \text{if } v\in(x+\sqrt{\epsilon},x+2\sqrt{\epsilon});\\ v+\epsilon-\frac{(v-x-2\sqrt{\epsilon})^2}{2}, & \text{if } v\in[x+2\sqrt{\epsilon},x+3\sqrt{\epsilon});\\ v+\frac{(x+4\sqrt{\epsilon}-v)^2}{2}, & \text{if } v\in[x+3\sqrt{\epsilon},x+4\sqrt{\epsilon}].\\ \end{array}\right. \end{equation*} Observe that for any $v\in[x,x+4\sqrt{\epsilon}]$, $v\leq F_x(v)\leq v+\epsilon$, while for any $v\in[0,x)\cup(x+4\sqrt{\epsilon},1]$, $F_x(v)=v$. This means that the cdf $F_{x}$ of $D_x$ differs from the cdf $F^*$ of the uniform distribution $D^*$ by at most an additive $\epsilon$ on the value range $[x,x+4\sqrt{\epsilon}]$, and $F_{x}(v)=F^*(v)=v$ outside this range. Thus, for $x\in[0.1,0.9]$, $\frac{F_{x}(v)}{F^*(v)}=1+O(\epsilon)$ for every $v\in[0,1]$; similarly, for the quantile functions $Q_x(v)=1-F_x(v)$ and $Q^*(v)=1-F^*(v)$, $\frac{Q_{x}(v)}{Q^*(v)}=1-O(\epsilon)$ for every $v\in[0,1]$. Hence, by Lemma~\ref{lem:query-complexity}, $\Omega(\epsilon^{-2})$ pricing queries are necessary to distinguish $D_x$ from $D^*$; moreover, since $Q_{D_x}$ and $Q_{D^*}$ are identical outside of $[x,x+4\sqrt{\epsilon}]$, querying outside of $[x,x+4\sqrt{\epsilon}]$ gives no information, so these $\Omega(\epsilon^{-2})$ pricing queries must fall in $[x,x+4\sqrt{\epsilon}]$. Now consider the following $\Omega(\epsilon^{-0.5})$ distributions: $D^*$, $D_{0.1}$, $D_{0.1+4\sqrt{\epsilon}}$, $D_{0.1+8\sqrt{\epsilon}}$, $\cdots$, $D_{0.9-8\sqrt{\epsilon}}$, $D_{0.9-4\sqrt{\epsilon}}$. To distinguish all these distributions, $\Omega(\epsilon^{-2})$ queries are necessary in every interval $[x,x+4\sqrt{\epsilon}]$, and thus $\Omega(\epsilon^{-2.5})$ queries are necessary in total. Notice that any two of these distributions $D_x$ and $D^*$ are at least $\frac{\epsilon}{2}$-far apart in Levy distance, because $F_x(x+2\sqrt{\epsilon})=x+2\sqrt{\epsilon}+\epsilon=F^*(x+2\sqrt{\epsilon}+\epsilon/2)+\epsilon/2$. To learn a distribution within $\frac{\epsilon}{4}$ Levy distance of the true distribution, we need to be able to distinguish all those distributions. To summarize, even in the class of regular distributions, $\Omega(\epsilon^{-2.5})$ pricing queries are necessary for learning a distribution within $\frac{\epsilon}{4}$ Levy distance (and thus also within $\epsilon$ Levy distance). \end{proof}
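As a sanity check, the construction above can be verified numerically; the short sketch below (grid resolution and tolerances are ours) confirms that $F_x$ stays within an additive $\epsilon$ of the uniform cdf pointwise, while the Levy gap of $\epsilon/2$ appears at $v=x+2\sqrt{\epsilon}$.

\begin{verbatim}
import numpy as np

def F_x(v, x, eps):
    # CDF of the perturbed distribution D_x from the proof above.
    s = np.sqrt(eps)
    if v < x or v > x + 4 * s:
        return v
    if v <= x + s:
        return v + (v - x) ** 2 / 2
    if v < x + 2 * s:
        return v + eps - (x + 2 * s - v) ** 2 / 2
    if v < x + 3 * s:
        return v + eps - (v - x - 2 * s) ** 2 / 2
    return v + (x + 4 * s - v) ** 2 / 2

eps, x = 1e-4, 0.3
grid = np.linspace(0.0, 1.0, 100001)
gap = np.array([F_x(v, x, eps) - v for v in grid])
assert gap.min() >= -1e-12 and gap.max() <= eps + 1e-12
print(F_x(x + 2 * np.sqrt(eps), x, eps) - (x + 2 * np.sqrt(eps)))  # ~eps
\end{verbatim}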
\section{COVID-19 datasets}\label{sec:data} According to the aim of this paper, we used the screening data on daily new cases and on the total number of positive SARS-CoV-2 cases for three countries in different phases of an epidemic outbreak. For each country, we considered the data in a time window that starts about 15 days before the adoption of restrictive measures; the sources of the data are described in the Appendix. At the time of writing, the United States of America (US) was in the growing phase of the epidemic outbreak, with an increasing trend of new cases. In the US, each state could decide individually whether to adopt stay-at-home measures. Among the states that adopted restrictions, we registered different starting dates: the earliest was Puerto Rico (March 15, 2020), followed by California (March 19, 2020) and New York (March 20, 2020), where the highest increase of new cases was subsequently recorded. Each state could adopt a different and inhomogeneous panel of restrictive measures, and in some states, such as Arkansas, Iowa, Nebraska, and North Dakota, the local government never issued stay-at-home orders \citep{Lyu:2020}. Data were analyzed from March 4, 2020, to April 27, 2020, for a total of 55 days of observation. At the end of the considered period, we reported 820,514 current positive cases, 56,259 deaths, and a total of 988,197 confirmed cases nationwide. Italy was a country in the middle phase, where a first stabilization of the SARS-CoV-2 incidence was reported after the restrictive measures and, at the time of writing, we observed the beginning of a decreasing trend. The Italian Government adopted a national home lockdown restriction on March 9, 2020, for the whole population, followed by more severe measures on March 11, and ordered all nonessential businesses to close on March 22. Data were analyzed from February 23, 2020, to April 28, 2020, for a total of 66 daily observations. At the end of the considered period, we reported 105,813 current positive cases, 26,977 deaths, and a total of 199,414 confirmed cases. Finally, Iceland was a country in the ending phase, where, after a stabilization, the incidence of new cases was going down and the current epidemic outbreak was probably approaching its end. The Icelandic government adopted stricter measures to slow down the spread of SARS-CoV-2 on March 16, 2020, with an active searching strategy for new cases that led to oropharyngeal swabs being performed on about 10\% of the entire population. We considered data from February 29, 2020, to April 27, 2020, for a total of 59 days of observation. At the end of the considered period, we reported 158 current positive cases, 10 deaths, and a total of 1,792 confirmed cases. \section{Discussion}\label{sec:discussion} International health organizations recommend implementing public health and social measures to slow or stop the spread of SARS-CoV-2, reaching the full engagement of all members of society \citep{who2020}. Countries have adopted different public health and social measures depending on the local historical evolution of the SARS-CoV-2 pandemic and on their health system capacity. Our analysis considers data from three countries, the US, Italy and Iceland, which have, on the one hand, different geographic and demographic characteristics and, on the other, dissimilar approaches in terms of public health policies and restrictive measures for the ongoing epidemic.
Our proposal allows us to estimate an epidemiological SEIR model with a time-varying transmission rate $\beta(t)$, with the aim of assessing the timeline and the strength of the effects produced by the adopted restrictive measures. The removal rate $\gamma$ was estimated considering two different time series (daily new and current positive cases). We avoided relying on estimates of the clinical SARS-CoV-2 recovery rate and of the specific mortality rate calculated by others, so as to comment on what the analysed data themselves reveal about the removal rate estimated by our model. In the US, the transmission rate of SARS-CoV-2 was very high at the beginning of the considered temporal window, and its reduction appears later and slower in comparison with those observed in Italy and, even more so, in Iceland. This difference may be viewed as a result of the approach adopted by the US at the epidemic onset, based only on limited and non-homogeneous containment measures \citep{parodi2020}. The adoption of this strategy in the US was a decision of particular importance, since the COVID-19 onset began only a few days later than in Italy, where, on March 9, 2020, the Italian Government set Europe's first nationwide restriction on movement due to the incoming SARS-CoV-2 epidemic. The estimates of the Italian transmission rate confirm the control of the epidemic wave after approximately 20 days of home restrictions, but with a high mortality toll in comparison with the preceding Chinese epidemic \citep{rubino2020}. Social distancing and passive testing of symptomatic cases were the Italian strategy to contain the epidemic. Positive cases with few symptoms were confined in home isolation. However, there was a consistent number of asymptomatic subjects who remained undetected, contributing to the spread of the epidemic \citep{lavezzo2020, flaxman2020}. Iceland had the advantage that the epidemic outbreak started later than in Italy; we observed that the Icelandic transmission rate quickly moved to values close to 0 after only 15 days of restrictive measures \citep{gudbjartsson2020}. These results are mainly attributable to an active searching strategy for asymptomatic positive cases organized by the national health service, which led to about 6\% of the Icelandic population being tested by April 2, 2020. However, a free voluntary private screening program estimated that the fraction of infections undetected by the Icelandic health service ranged from 88.7\% to 93.6\% \citep{stock2020}. The comparison between the US, Italy, and Iceland was certainly affected by regional/state variation in the COVID-19 response. However, this administrative-level granularity plays a role in the diffusion of non-pharmacological health measures in the first phase of an epidemic outbreak. A faster and more aggressive response of each local administrative unit helps to contain the spread of the contagion and to achieve relative control of the disease \citep{Lancet:2020}. Despite fears of negative consequences for their economies, Italy and Iceland achieved contagion control in a relatively short period. With a rapid and structured stay-at-home order and assertive infection control measures, Iceland reduced the time required to flatten the curve.
The estimated values of $\gamma$ reflect both the locally adopted swab policy and the specific phase of the epidemic wave. The active monitoring in Iceland provides a reliable estimate of the removal duration, of about 12 days, in line with that measured in China (14 days, \citep{wang2020}). In Italy, the monitoring strategy implies that, after a first positive swab test, a control swab is repeated only after a period of home isolation, which lengthens the time needed to obtain the confirmation of healing. In the US, the low number of removed subjects in comparison with the high increase in incidence makes a reliable estimation of the removal rate challenging. The proposed model has become a standard approach to estimate the transmission rate in a dynamic context \citep{hong2020,petropoulos2020,piccolomini2020,wu2020,zhang2020,Godio:2020}. The model serves the dual purpose of real-time monitoring and of supplying possible evolution scenarios. The estimated fluctuations of $\beta(t)$ were driven by gradual changes in the behaviour of the population at risk as a consequence of the adopted restrictions. With respect to other specifications, our approach has the advantage of employing a basis of splines, which allows a high degree of flexibility in the estimation. We estimated the parameters of $\beta(t)$ and $\gamma$ through a composite likelihood, considering the information provided by both the occurrence of new SARS-CoV-2 cases and the current positive cases; in order to cope with the possible presence of heteroscedasticity and autocorrelation in the data, we estimated consistent standard errors combining a sandwich variance estimator and a HAC correction. Even though the model formulation has some ancestors, our proposal differs in two aspects from the aforementioned literature. First, we allow the data to indicate the shape of the function $\beta(t)$ using a semiparametric approach; this feature could help to identify the best health intervention policy in a country. Second, the HAC variance estimator corrects the underestimation of the variance of the estimator and therefore produces future scenarios with a more appropriate margin of uncertainty. There are several other limitations to our analysis. We used plausible biological SARS-CoV-2 parameters for the SEIR model based on updated numbers (i.e., $\sigma$), but these values may be refined as more comprehensive data become available. The predicted values of $\beta(t)$ are valid only in the absence of future changes to the restrictions, which is not likely to happen if intermittent social distancing measures are adopted \citep{ferguson2020}. Our results point out that the transmission rate in the US, Italy, and Iceland showed a decline after the introduction of restrictive measures. Despite this common trend, some differences in terms of timeline and impact are present. In particular, US experts argue that more helpful tools are needed in order to reach control of the epidemic wave \citep{parmet2020}. The adoption of restrictive measures results in flattened epidemic curves and thus in the distribution of the SARS-CoV-2 cases over a more extended period, with respect to an uncontrolled epidemic outbreak. In the absence of a specific vaccine, the high number of susceptibles and the relaxation of the measures taken represent a cause of future outbreaks.
\newpage \section*{Appendix} \subsection*{Data sources accessed on April 28, 2020.} \begin{description} \item[Iceland:] John Hopkins University (\texttt{github.com/datasets/covid-19}), \citet{csse2020}. \item[Italy:] Italian Civil Protection, (\texttt{github.com/pcm-dpc/COVID-19}), \citet{morettini2020}. \item[US:] John Hopkins University (\texttt{github.com/datasets/covid-19}), \citet{csse2020}. \end{description} \subsection*{Software} The statistical analysis was carried out using the R software \citep{R2019} and some of its packages: the minimization of the objective function was performed by means of a non-linear minimization process using the function \texttt{nlm}; the package \texttt{deSolve} was used to solve the ODE system, and the package \texttt{ggplot2} to enhance the quality of the figures. Results and figures can be reproduced using the companion code at \texttt{github.com/Paolin83/SARS-CoV-2\_SEIR\_TV\_model}. \section{Introduction}\label{sec:introduction} The use of epidemic models makes it possible to simulate disease transmission dynamics, to detect emerging outbreaks, and to assess public health interventions \citep{unkel2012,boily2007}. To describe the dynamics of epidemics, standard methods, such as the SIR model \citep{anderson1992}, divide the population into portions of subjects according to their relation with the epidemic vector. Here the focus is on the dynamics, such as the depletion of the susceptible portion in favour of the infected one, or the possible evolution of the rate of immunization. However, the standard SIR model, and other extensions such as the SEIR model, do not take into account the time-varying nature of epidemics, and several attempts have been made to overcome this limitation \citep{Dureau:2013,boatto2018,kucharski2020}. In particular, most extensions were proposed to adapt the SIR model to specific case studies \citep{liu2012,peng2020} or to include time-varying coefficients with the aim of estimating epidemic dynamics \citep{hooker2011,chavez2017,fang2020}. This paper considers a flexible extension of the SEIR model, which incorporates the temporal dynamics of the transmission rate parameter, one of the most critical indicators for epidemiologists and at the basis of the basic reproduction number $R_0$. Moreover, this method allows us to make considerations about both the trend and the prediction of the number of infected cases, in order to evaluate how possible influencing factors, such as the presence of a vaccine or the restriction measures taken by the central authorities, can affect an epidemic outbreak \citep{haas2020}. The proposed method is applied to the 2019-20 coronavirus pandemic, the COronaVIrus Disease 2019 (COVID-19), caused by the Severe Acute Respiratory Syndrome CoronaVirus 2, SARS-CoV-2 \citep{who2020}. The World Health Organization declared the outbreak to be a Public Health Emergency of International Concern on January 30, 2020, and recognized it as a pandemic on March 11, 2020 \citep{eurosurveillance2020}. It is worth mentioning that several studies in the literature concern SEIR models with different time-varying parameter specifications \citep{hong2020,petropoulos2020,piccolomini2020,wu2020,zhang2020}. Our proposal differs in adopting a statistical model consistent with counting data and a semiparametric (and therefore more flexible) specification of the time-varying parameters. We also pay particular attention to assessing the uncertainty of the estimates.
To study how country-based mitigation measures influence the course of the SARS-CoV-2 epidemic \citep{anderson2020will}, we looked at the ongoing epidemics in three countries (Italy, Iceland and the United States of America) where the adopted mitigation measures have been different \citep{remuzzi2020, gudbjartsson2020,dong2020}. The reference datasets are presented in Section \ref{sec:data}, while the proposed model and the statistical inference are illustrated in Section \ref{sec:methods} and Section \ref{sec:statinf}. In Section \ref{sec:results}, we collect our results for the different countries together with forecasts. We end the paper with a brief discussion in Section \ref{sec:discussion}. \section{SEIR model with time-varying coefficients}\label{sec:methods} \subsection{SEIR model} We start by introducing the SEIR model, one of the most used extensions of the standard SIR model, an Ordinary Differential Equation (ODE) based epidemiological model \citep{kermack1927}. Traditionally the SEIR model divides a population of hosts into four classes: Susceptible (S), Exposed (E), Infected (I) and Recovered (R). However, in our framework, the last class collects all subjects who move out of the (I) status, i.e., recovered and deceased; for this reason, hereafter we refer to (R) as the Removed status. The model describes how the different portions of the population change over time $t$. In the standard SEIR model, deaths are modelled as flows from the $S$, $E$, $I$, or $R$ compartment to outside, because natural deaths are normally not monitored. If $S$, $E$, $I$, and $R$ refer to the numbers of individuals in each compartment, then these ``state variables'' change according to the following system of differential equations: \begin{subequations}\label{eq:SEIR} \begin{align} \frac{d}{dt}S(t) &= \mu (N-S(t))-\beta \frac{S(t) I(t) }{N}\\ \frac{d}{dt}E(t) &= \beta \frac{S(t) I(t)}{N}-(\mu+\sigma) E(t) \\ \frac{d}{dt}I(t) &=\sigma E(t)- (\mu+\gamma)I(t) \\ \frac{d}{dt}R(t) &= \gamma\,I(t)-\mu\,R(t) \end{align} \end{subequations} In equations \eqref{eq:SEIR}, $N$ is the total population, $\mu$ is the mortality rate, $\beta$ is the transmission rate, $\sigma$ is the exposed-to-infectious rate, and $\gamma$ is the removal rate, which can be broadly assumed to be the sum $\gamma_R +\gamma_D$, where $\gamma_R$ and $\gamma_D$ are the recovery and the mortality rate, respectively. In general, the transmission rate $\beta$ is the number of people that a positive case infects each day; in our setting, $\beta$ is defined as $a b$, where $a$ is the contact rate, that is, the average number of contacts per person in a day, while $b$ is the probability of disease transmission in a single contact. However, $a$ and $b$ cannot be separately identified on the basis of the available information. The ratio ${S(t)}/{N}$ adjusts $\beta$ by taking into account that only contacts with susceptible individuals can produce new infections. The parameters $\sigma$ and $\gamma$ are strictly dependent on the specific disease causing the epidemic and on the fraction of susceptible population. The parameter $\sigma$ is set equal to $\eta^{-1}$, where $\eta$ is the incubation period, which may be longer for asymptomatic subjects; $\gamma$ is the removal rate calculated as $\gamma=\rho^{-1}$, where $\rho$ is the average duration of the disease in days.
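Although our implementation is in R (see the Appendix), system \eqref{eq:SEIR} is straightforward to integrate numerically; the following Python/\texttt{scipy} sketch is given for illustration only, with Italy-like initial values taken from the Results section and a placeholder constant $\beta$.

\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

def seir_rhs(state, t, N, mu, beta, sigma, gamma):
    # Right-hand side of the SEIR system with constant rates.
    S, E, I, R = state
    dS = mu * (N - S) - beta * S * I / N
    dE = beta * S * I / N - (mu + sigma) * E
    dI = sigma * E - (mu + gamma) * I
    dR = gamma * I - mu * R
    return [dS, dE, dI, dR]

N = 60.32e6                        # Italy (see the Results section)
E0, I0, R0 = 66 / 0.192, 155, 0    # E(0) = Z(1)/sigma, I(0) = Y(1)
y0 = [N - E0 - I0 - R0, E0, I0, R0]
t = np.arange(0, 66)               # 66 daily observations
mu = 1 / (365.25 * 83.2)           # 1 / lifespan
sol = odeint(seir_rhs, y0, t, args=(N, mu, 0.5, 0.192, 0.025))
S, E, I, R = sol.T                 # beta = 0.5 is a placeholder value
\end{verbatim}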
Moreover, unlike the full specification, we do not consider the effect of births in model \eqref{eq:SEIR}, and, therefore, $\sigma E (t)$ represents the number of new infections. Based on this parametrization we can define the reproduction number, $R_0$, as $$R_0=\frac{\beta \sigma}{(\gamma+\mu)(\sigma+\mu)}.$$ The index conveys the strength of contagion in an epidemic outbreak. When both $\sigma$ and $\gamma$ are much larger than $\mu$, $R_0$ can be approximated by ${\beta}/{\gamma}$. \subsection{Time-varying parameter specification} The standard SEIR model assumes that the parameters $\mu, \beta, \sigma$, and $\gamma$ are time-invariant. However, the characteristics of an epidemic suggest that these parameters can vary. In particular, the overall mortality rate $\mu$ may increase if the number of deaths in a population directly or indirectly attributable to the disease (e.g., through the insufficient capacity of health services) rises. The rate $\beta$ may also vary according to social distancing policies or even the isolation of infected people. We aim to evaluate whether and how the actions taken by governments, such as different degrees of travel restriction, social distancing or limitation of people's movement, affect the epidemic curve of the infectious population. Our working hypothesis is that if these actions have an effect, it is only on the transmission rate of the epidemic, $\beta$. For this reason, we propose to let this parameter vary over time, namely \begin{subequations}\label{eq:SEIRvar} \begin{align} \frac{d}{dt}S(t) &= \mu (N-S(t))-\beta(t) \frac{S(t) I(t) }{N}\\ \frac{d}{dt}E(t) &= \beta(t) \frac{S(t) I(t)}{N}-(\mu+\sigma) E(t) \\ \frac{d}{dt}I(t) &=\sigma E(t)- (\mu+\gamma)I(t) \\ \frac{d}{dt}R(t) &= \gamma\,I(t)-\mu\,R(t) \end{align} \end{subequations} Since the function $\beta(t)$ takes positive values, in the estimation step we consider the following log-linear specification \begin{equation}\label{eq:splines} \log(\beta(t))=\sum_{k=1}^K \psi_{k} N_k(t), \end{equation} where $N_k(t)$, $k=1,\ldots,K$, are $K$ natural cubic spline basis functions evaluated at $K-2$ equally spaced knots in addition to the boundary knots. The representation in \eqref{eq:splines} has the advantage that the estimation of $\beta(t)$ reduces to the estimation of the coefficients $\psi_k$. We refer to the next subsection for a short discussion about the number of knots and their positions. The time-dependent transmission rate $\beta(t)$ allows us to define a time-dependent version of $R_0$ (the basic reproduction number) as follows: $$R_0(t)=\frac{\beta (t) }{\gamma}.$$ This index allows us to evaluate the strength of contagion over a temporal window by comparing $\beta (t)$ with the removal rate $\gamma$. In order to constrain $\gamma$ between 0 and 1, in the estimation process we reparametrize $\gamma$ as $\gamma=\frac{\exp(\gamma^*)}{1+\exp(\gamma^*)}$. The system \eqref{eq:SEIRvar} is a system of nonlinear ODEs, which must be solved numerically. In this paper we use the ODE solver lsode \citep{Hindmarsh:1983} as implemented in the R package \texttt{deSolve}. If we suppose that $\mu$ and $\sigma$ are known parameters, the (numerical) solutions $S(t;\theta)$, $E(t;\theta)$, $I(t;\theta)$, and $R(t;\theta)$ depend on the (vector of) parameters $\theta=(\psi_1,\ldots,\psi_K,\gamma^*)$.
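A time-varying $\beta(t)$ slots into the same solver. The sketch below replaces the natural cubic spline basis of \eqref{eq:splines} with a piecewise-linear interpolation of $\log\beta$ between the knots, purely to keep the code short, and includes the logistic reparametrization of $\gamma$; the knot positions and coefficient values are illustrative.

\begin{verbatim}
import numpy as np

def make_beta(knots, psi):
    # log beta(t) interpolated through coefficients psi at the knots
    # (the paper uses a natural cubic spline basis instead).
    return lambda t: np.exp(np.interp(t, knots, psi))

def inv_logit(g_star):
    # gamma = exp(g*)/(1 + exp(g*)), constraining gamma to (0, 1).
    return np.exp(g_star) / (1 + np.exp(g_star))

def seir_tv_rhs(state, t, N, mu, beta, sigma, gamma):
    # As seir_rhs above, but with a time-dependent transmission rate.
    S, E, I, R = state
    b = beta(t)
    return [mu * (N - S) - b * S * I / N,
            b * S * I / N - (mu + sigma) * E,
            sigma * E - (mu + gamma) * I,
            gamma * I - mu * R]

# Example: log-beta declining from 0.75 to 0.05 over 66 days.
beta = make_beta([0, 22, 44, 66], np.log([0.75, 0.40, 0.10, 0.05]))
\end{verbatim}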
\section{Statistical inference}\label{sec:statinf} Several agencies around the world daily update and publish epidemic datasets that contain at least three time series: the total number of infected, the number of deaths, and the number of recovered (see Section \ref{sec:data} for more details). We derive from these time series the daily number of current positive cases $Y(t)$ and the daily number of new positive cases $Z(t)$, recorded at day $t$, $t=1,\ldots,T$. Usually, the time series are supposed to be realizations of a stochastic version of the compartmental models. The different versions can be broadly classified into continuous models and discrete models. In the first group fall the continuous-time Markov chains (CTMCs) and the stochastic differential equations (SDEs) \citep{allen:2008}. In the second group, a discrete-time approximation to the stochastic continuous-time model is considered \citep{Lekone:Finkenstadt}. There exists an extensive literature on calibrating the stochastic models against time series with different inferential approaches \citep{Finkesdadt:Grenfell:2000,Ionides:2006,hooker2011,andersson2012stochastic,Dureau:2013}. Instead, in this paper we follow the simpler idea that the solutions of system \eqref{eq:SEIRvar} are the expectations, at days $t=1,\ldots,T$, of as many counting random variables. More precisely, we model the observed counts $\{Y(t),Z(t)\}$ as \begin{subequations}\label{eq:poisson} \begin{align} Y(t) \sim &\operatorname{Poisson} (I(t;\theta))\\ Z(t) \sim &\operatorname{Poisson}(\sigma E(t;\theta)). \end{align} \end{subequations} Then the estimates of the parameter $\theta$ defined above are obtained by maximizing the independence log-likelihood \citep{Chandler:Bate:2007} \begin{eqnarray}\label{eq:cl} cl(\theta) &=& \sum_{t=1}^T Y(t)\log I(t;\theta) -I(t;\theta)+ Z(t)\log (\sigma E(t;\theta)) -\sigma E(t;\theta)\\ &=&\sum_{t=1}^T cl(\theta;t).\nonumber \end{eqnarray} Note that $cl(\theta)$ is not a `true' log-likelihood but an instance of a composite likelihood \citep{Lindsay:1988}, since it does not seem reasonable to assume that $Y(t)$ and $Z(t)$ are mutually and temporally independent. However, even though the model is not correctly specified, the maximum composite likelihood estimator, $\widehat{\theta}$, is still a consistent and asymptotically Gaussian estimator with asymptotic variance $V({\theta})$ under mild conditions \citep{Chandler:Bate:2007,Jacod:Sorensen:2018}. The variance $V(\theta)$ can be estimated by the sandwich estimator $ \hat{V}=\widehat{B}^{-1} \widehat{M}(\widehat{B}^{-1})^{\top} $. The `bread' matrix is given by $\widehat{B}={T^{-1}}\sum_{t=1}^T\nabla u(\hat\theta;t)$ with $u(\hat\theta;t)=\nabla cl(\hat\theta;t)$. In the presence of time-dependence, the `meat' matrix $\hat{M}$ is given by the heteroskedasticity and autocorrelation consistent (HAC) estimator $$ \hat{M}=T^{-1}\sum_{t=1}^T\sum_{s=1}^Tw_{|t-s|}\, u(\hat\theta;t)\, u(\hat\theta;s)^\top $$ where $w = (w_0 , \ldots , w_{T-1})$ is a vector of weights \citep{Andrews:1991}.
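In code, the objective \eqref{eq:cl} amounts to solving the ODE system at a candidate $\theta$ and summing the two Poisson log-density terms (constants omitted, as in \eqref{eq:cl}); a sketch, reusing the helpers from the previous sketch:

\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

def composite_loglik(theta, Y, Z, t, N, mu, sigma, knots):
    # theta = (psi_1, ..., psi_K, gamma*); the Poisson means are
    # I(t; theta) for Y(t) and sigma * E(t; theta) for Z(t).
    psi, gamma = theta[:-1], inv_logit(theta[-1])
    beta = make_beta(knots, psi)
    y0 = [N - Y[0] - Z[0] / sigma, Z[0] / sigma, Y[0], 0.0]
    sol = odeint(seir_tv_rhs, y0, t, args=(N, mu, beta, sigma, gamma))
    E, I = sol[:, 1], sol[:, 2]
    return np.sum(Y * np.log(I) - I + Z * np.log(sigma * E) - sigma * E)

# Maximization can be delegated to a general-purpose optimizer, e.g.
# scipy.optimize.minimize applied to the negative of this function
# (our analysis uses R's nlm instead; see the Appendix).
\end{verbatim}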
With the aim of forecasting the spread of the epidemic outside the observed period, the number and the positions of the knots in \eqref{eq:splines} play a crucial role. The higher the number of knots, the less smooth the function $\beta(t)$; a large number of knots, however, carries a risk of over-fitting the data. On the other hand, the trend of $\beta(t)$ outside the observation time interval is mainly determined by the basis functions corresponding to the boundary knots. We select the number of knots by maximizing the Composite Likelihood Information Criterion (CLIC) \citep{varin2005} $$ CLIC(\widehat{\theta})=cl(\hat\theta)+\operatorname{tr}\bigl( \widehat{B}^{-1} \widehat{M}\bigr). $$ The criterion has a strong analogy with the Akaike Information Criterion (AIC): $cl(\hat\theta)$ measures the goodness-of-fit similarly to the log-likelihood, and the penalty $\operatorname{tr}(\widehat{B}^{-1} \widehat{M})$ reduces to $-(K+1)$ if the model \eqref{eq:poisson} is correctly specified, i.e.\ if equation \eqref{eq:cl} is the `true' log-likelihood. We could locate the internal knots to reflect policy interventions. However, it is very difficult to hypothesize the immediate effects of these policies, and a simpler choice has been to place temporally equally spaced knots. As for the boundary knots, we chose to place them at the beginning of the period and one week after the last available observation, to obtain more stable estimates in the forecast period. \section{Results}\label{sec:results} The epidemic outbreak showed different patterns in the selected time window. The reported SARS-CoV-2 cases in the US were rapidly increasing, reaching a peak and a subsequent stabilization of the number of daily new cases, with an incidence of about 10 cases per 100,000 inhabitants. In Italy, after an initial growth which reached a peak incidence of about 10 cases per 100,000 people, similar to the US, a drop to 2.5 daily new SARS-CoV-2 cases per 100,000 was reported. In Iceland, the incidence of new SARS-CoV-2 cases showed a huge peak ($\approx$ 25 cases per 100,000 people), followed by a decreasing trend and finally a limited number of new cases on the last considered day (Figure \ref{fig:Figure1}). At the end of the temporal window, the prevalence of the disease was quite different among the three considered countries: we registered 250, 180, and 45 current SARS-CoV-2 cases per 100,000 people in the US, Italy, and Iceland, respectively. \begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{Figure1_rev2.pdf} \end{center} \caption{Daily new (solid black line) and current (dashed red line) cases of SARS-CoV-2 in (a) the US, (b) Italy and (c) Iceland.}\label{fig:Figure1} \end{figure} In the literature the incubation duration of SARS-CoV-2 was estimated as $\eta=5.2$ days \citep{wang2020} and therefore we set the specific parameter $\sigma=1/\eta=0.192$. The overall mortality rate $\mu$ was calculated as ${1}/{(\mbox{lifespan})}={1}/(365.25 \times\mbox{LE})$, where the Life Expectancy (LE) is 78.5 years in the US, 83.2 years in Italy and 82.2 years in Iceland. The total population ($N$) in 2020 was 329.23 (US), 60.32 (Italy) and 0.36 (Iceland) million inhabitants. The starting values $S(0)$, $E(0)$, $I(0)$ and $R(0)$ for the numerical resolution of the system \eqref{eq:SEIRvar} were set as follows: \begin{itemize} \item $I(0)=Y(1)$, i.e.\ the number of currently infected on the first day of the dataset (US: 142, Italy: 155, Iceland: 1); \item $R(0)$ equal to the number of currently recovered on the first day of the dataset (US: 7, Italy: 0, Iceland: 0); \item $E(0)= Z(1)/{\sigma}$, where $Z(1)$ is the number of new infected on the first day of the dataset (US: 68, Italy: 66, Iceland: 2); \item $S(0)=N-E(0)-I(0)-R(0)$. \end{itemize} We tried several values for the number of basis functions $K$, from 3 to 8, and found that $K=5$ and $K=3$ minimize the value of the CLIC for Italy and the US, and for Iceland, respectively.
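The CLIC values used for this selection can be computed as in the following sketch. It assumes that the per-day scores $u(\hat\theta;t)$ and Hessians $\nabla u(\hat\theta;t)$ have already been obtained, e.g.\ by numerical differentiation of the previous snippet, and it uses Bartlett weights, one common choice among the HAC weighting schemes discussed by \citet{Andrews:1991}.

\begin{verbatim}
import numpy as np

def sandwich_and_clic(cl_value, scores, hessians, bandwidth):
    # scores: (T, p) array of u(theta_hat; t); hessians: (T, p, p).
    T, p = scores.shape
    B = hessians.mean(axis=0)                  # 'bread' matrix
    w = np.maximum(0.0, 1.0 - np.arange(T) / bandwidth)  # Bartlett
    M = np.zeros((p, p))                       # HAC 'meat' matrix
    for t in range(T):
        for s in range(T):
            M += w[abs(t - s)] * np.outer(scores[t], scores[s])
    M /= T
    Binv = np.linalg.inv(B)
    V = Binv @ M @ Binv.T                      # sandwich variance
    clic = cl_value + np.trace(Binv @ M)       # CLIC criterion
    return V, clic
\end{verbatim}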
\begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{Figure2_rev2.pdf} \end{center} \caption{The estimates of the $\beta(t)$ curves and 95\% confidence bands (in grey) for the US, Italy and Iceland. The dashed line represents the 30-day predicted evolution.}\label{fig:Figure2} \end{figure} The estimate of $\beta(t)$ (see Figure $\ref{fig:Figure2}$) showed an overall decreasing pattern of the transmission rate across the selected countries. In particular, in the US the estimate of ${\beta(t)}$ reached a peak close to $0.8$ about ten days after the beginning of the epidemic outbreak, denoting an uncontrolled situation, moving to values of approximately $0.1$ after about $45$ days of the epidemic, with a predicted scenario of a slightly decreasing trend and a great amount of uncertainty. In Italy, the estimate of ${\beta(t)}$ moved from an initial value of $0.75$ to values close to $0.05$ after about 50 days of observation; the estimates were lower than those reported for the US. The estimate of $\beta(t)$ in Iceland showed a fast decreasing trend, from values slightly over $1.0$ to about 0 at day 40. The 30-day prediction for $\beta(t)$ is practically zero, denoting the end of the current phase of the epidemic. \begin{table} \centering \begin{tabular}{lcc} \hline Country & $\gamma$ & (95\% CI) \\ \hline US & 0.012 & [0.009--0.015]\\ Italy & 0.025 & [0.023--0.027]\\ Iceland & 0.080 & [0.063--0.101] \\ \hline \end{tabular} \caption{Estimated values of the $\gamma$ parameter and the corresponding 95\% confidence intervals.} \label{tab:table1} \end{table} The estimates of $\gamma$ ranged from low rates in the US and Italy, $0.012$ and $0.025$ respectively, to a higher rate in Iceland, $0.080$. The removal duration, i.e., the reciprocal of $\gamma$, was then estimated at 85.0 days (95\% CI: 65.5--110.5) for the US, at 40.2 days (95\% CI: 37.6--43.1) for Italy and at 12.5 days (95\% CI: 9.9--15.9) for Iceland. \begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{Figure3_rev2.pdf} \end{center} \caption{The expected number of Exposed, Infected and Recovered subjects for the US, Italy and Iceland, based on the model parameter estimates. The dotted points indicate the observed number of infected cases. }\label{fig:Figure3} \end{figure} \begin{figure}[!ht] \begin{center} \includegraphics[width=\linewidth]{Figure4_rev2.pdf} \end{center} \caption{The expected number of new infected cases for the US, Italy and Iceland, based on the model parameter estimates. The dotted points indicate the observed number of new infected cases. The dashed line indicates the last day observed. }\label{fig:Figure4} \end{figure} The model fit was deemed satisfactory (Figure $\ref{fig:Figure3}$ and Figure $\ref{fig:Figure4}$) both with respect to the number of new cases and to the current positive cases. Our major findings were the following. In the US, the current positive cases were going to increase, reaching a probable maximum beyond the 30-day forecast window. In Italy, the epidemic outbreak reached its maximum number of positive cases around April 20th, and the tendency was towards a slight decline. In Iceland, the peak of positive cases was registered on April 10th, followed by a rapidly decreasing phase and a low number of new cases in the last observed days; in this case, the SARS-CoV-2 epidemic was expected to be over approximately at the end of May.
\begin{figure}[!ht] \begin{center} \begin{tabular}{c} \includegraphics[width=\linewidth]{Figure5_rev2.pdf} \end{tabular} \end{center} \caption{Estimated $R_0(t)$ values for the US, Italy, and Iceland and the predicted evolution. The Y-axis is on the log-scale. The dashed lines indicate $R_0=1$ and the last day observed, respectively.}\label{fig:Figure5} \end{figure} The estimated trend of $R_0(t)$ appears quite different among the selected countries (see Figure \ref{fig:Figure5}). The value $R_0(t)=1$ was reached on different dates in Iceland (March 28th) and in Italy (April 16th), while in the US it was expected to be reached only at the end of May.
\section{Introduction} \noindent \citet{evm:Mironenko1984} introduced the notion of the reflecting function for the qualitative investigation of the ODE system \begin{equation} \label{evm:EQ1} \dot{x}=X(t,x),\quad t\in {\mathbb R}, x\in D\subset {\mathbb R}^{n} \end{equation} under the condition that $X(t,x)$ is a continuously differentiable function. This function is now known as the Mironenko reflecting function (MRF) and has been efficiently applied by many authors to such problems of the qualitative theory of ODEs as the existence and stability of periodic solutions \cite{evm:Mironenko1989, evm:Belskii2013, evm:Liu2014, evm:Maiorovskaya2009, evm:Musafirov2008, evm:ZhouJ2020}, the existence of solutions of boundary value problems \cite{evm:Mironenko1996, evm:Musafirov2002, evm:Varenikova2012}, the solution of the center-focus problem \cite{evm:Zhou2017}, the study of the global behavior of families of solutions of ODE systems \cite{evm:Mironenko2004book} and others \cite{evm:Mironenko2004book, evm:Belokurskii2013}. Moreover, it was proved that solutions of different ODE systems with the same MRF share many qualitative properties \cite{evm:Mironenko2004book, evm:Mironenko2009}. Therefore, the study of the qualitative properties of solutions for a whole class of systems with the same MRF can be reduced to the corresponding study of a single, simple (well-studied) system. In such cases non-autonomous systems \eqref{evm:EQ1} can be investigated on the basis of a corresponding autonomous system. In other words, an autonomous system can be perturbed into a non-autonomous system \eqref{evm:EQ1} by using special MRF-preserving perturbations, which are called \textit{admissible perturbations} (for example, admissible perturbations of the Lorenz-84 climate model were obtained by \citet{evm:Musafirov2019b}). In this paper the described approach is applied to the generalized Langford system \cite{evm:Yang2018}: \begin{equation} \label{evm:EQ2} \begin{array}{l} {\dot{x}=ax+by+xz,} \\ {\dot{y}=cx+dy+yz,} \\ {\dot{z}=ez-\left(x^{2} +y^{2} +z^{2} \right);\quad (x,y,z)\in {\mathbb R}^{3} ,} \end{array} \end{equation} where $a,b,c,d,e\in {\mathbb R}$ are parameters of the system. \citet{evm:Yang2018} analyzed the stability of equilibrium points, obtained an exact expression for a periodic orbit and some approximate expressions for limit cycles, investigated the nature of their stability, and proved the existence of two heteroclinic cycles and their coexistence with a periodic orbit. \citet{evm:Nikolov2021} considered the particular case of system \eqref{evm:EQ2} with $c=-b$, $d=a\ne 0$ and showed that system \eqref{evm:EQ2} in this case is equivalent to the nonlinear force-free Duffing oscillator $\ddot{x}+k\dot{x}+\omega x+x^{3} =0$, where $k=-(2a+e)$, $\omega =a(a+e)$. Such an equation is obtained, for example, when a steel cantilever oscillates in an inhomogeneous field of two permanent magnets \cite{evm:Moon1979}, for the oscillations of a mathematical pendulum at small deflection angles, for the vibrations of a mass on a spring with a nonlinear restoring force located on a flat horizontal surface, and also when describing the motion of a particle in a double-well potential, among other oscillation problems \cite{evm:kovacic2011}. In addition, \citet{evm:Nikolov2021} proved that in this particular case, under one of three additional conditions ($e=a$, $e=-a/2$ or $e=-2a$), the solutions of system \eqref{evm:EQ2} can be expressed in explicit analytical form by means of elementary and Jacobi elliptic functions.
For the particular case of system \eqref{evm:EQ2} when $a=d=-1/3$, $b=-1$, $c=1$, $e=2/3$, the presence of chaos in the system was proved, and a chaotic attractor was exhibited, by \citet{evm:Belozyorov2015}. The admissible perturbations of the non-generalized Langford system for $a=d=-2e-1$, $b=-1$, $c=1$ and for $a=d=e-1$, $b=-1$, $c=1$ were obtained by \citet{evm:Musafirov2016, evm:Musafirov2017}. Our main purpose here is to derive a non-autonomous generalization of system \eqref{evm:EQ2} and to detect qualitative properties of equilibrium points and periodic solutions of the derived system. The structure of our paper is as follows. In Section 2 we recall the definition of the MRF and basic facts for the construction of admissible perturbations of system \eqref{evm:EQ1}. In Section 3 we present the sets of admissible perturbations of system \eqref{evm:EQ2}. In Section 4 we prove the instability (in the sense of Lyapunov) of the equilibrium point $O(0,0,0)$ of the admissibly perturbed systems. Section 5 presents conditions under which the admissibly perturbed systems have periodic solutions, as well as conditions for the asymptotic stability (instability) of periodic solutions. In the last section, using numerical simulations, we show similar chaotic attractors of the generalized Langford system \eqref{evm:EQ2} and of an admissibly perturbed system. \section{Brief theory of the MRF} \noindent First of all, we give brief information on the theory of the MRF from \cite{evm:Mironenko2004book}. For system \eqref{evm:EQ1}, the MRF is defined as $F(t,x):=\varphi (-t;t,x)$, where $x=\varphi (t;t_{0} ,x_{0} )$ is the general solution in the Cauchy form of system \eqref{evm:EQ1}. Although the MRF is determined through the general solution of system \eqref{evm:EQ1}, it is sometimes possible to find an MRF even for non-integrable systems. A function $F(t,x)$ is an MRF of system \eqref{evm:EQ1} if and only if it is a solution of the PDE system $\frac{\partial F}{\partial t} +\frac{\partial F}{\partial x} X(t,x)+X(-t,F)=0$ with the initial condition $F(0,x)=x$. If the function $F(t,x)$ is continuously differentiable and satisfies the condition $F\left(-t,F(t,x)\right)\equiv F(0,x)\equiv x$, then it is the MRF of a set of systems. Moreover, all systems from this set have the same shift operator on any interval $(-\alpha ;{\kern 1pt} \alpha )$ \cite{evm:Krasnoselskii2007}. If system \eqref{evm:EQ1} is $2\omega $-periodic with respect to $t$, and $F(t,x)$ is its MRF, then $F(-\omega ,x)=\varphi (\omega ;-\omega ,x)$ is the mapping of the system over the period $[-\omega ,{\kern 1pt} \omega ]$ (the Poincar\'{e} map). Therefore, all $2\omega $-periodic (with respect to $t$) systems from the set with the same MRF have the same mapping over the period $[-\omega ,{\kern 1pt} \omega ]$. Let the $2\omega $-periodic (with respect to $t$) system \eqref{evm:EQ1} and the system \begin{equation} \label{evm:EQ3} \dot{x}=Y(t,x),\quad t\in {\mathbb R},\; x\in D\subset {\mathbb R}^{n} \end{equation} have the same MRF $F(t,x)$. If the solution $\varphi (t;-\omega ,x)$ of system \eqref{evm:EQ1} and the solution $\psi (t;-\omega ,x)$ of system \eqref{evm:EQ3} are extendable to $[-\omega ,\omega ]$, then the mapping over the period $[-\omega ,\omega ]$ for system \eqref{evm:EQ1} is $\varphi (\omega ;-\omega ,x)\equiv F(-\omega ,x)\equiv \psi (\omega ;-\omega ,x)$, although system \eqref{evm:EQ3} may be non-periodic. That is, it is possible to establish a one-to-one correspondence between the $2\omega $-periodic solutions of system \eqref{evm:EQ1} and the solutions of the two-point boundary value problem $y(-\omega )=y(\omega )$ for system \eqref{evm:EQ3}.
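The PDE criterion above is easy to check mechanically. As a toy illustration (not taken from the paper), the following Python/\texttt{sympy} sketch verifies that $F(t,x)=x e^{-2t}$ is the MRF of the scalar equation $\dot{x}=x$.

\begin{verbatim}
import sympy as sp

# Toy check of the MRF criterion F_t + F_x * X(t,x) + X(-t,F) = 0
# for the scalar equation x' = x (chosen because its general
# solution, and hence its MRF F(t,x) = x*exp(-2t), is known).
t, x = sp.symbols('t x')
X = lambda time, state: state        # right-hand side X(t, x) = x
F = x * sp.exp(-2 * t)               # candidate reflecting function

pde = sp.diff(F, t) + sp.diff(F, x) * X(t, x) + X(-t, F)
assert sp.simplify(pde) == 0         # F satisfies the MRF equation
assert F.subs(t, 0) == x             # initial condition F(0, x) = x
\end{verbatim}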
Thanks to \citet{evm:Mironenko2009}, it became possible to find out whether two different systems of ODEs have the same MRF (in which case the MRF itself may not be known). \begin{theorem}[\cite{evm:Mironenko2009}] Let the vector functions $\Delta _{i} (t,x)$ ($i=\overline{1,m}$, where $m\in {\mathbb N}$ or $m=\infty $) be solutions of the equation \begin{equation} \label{evm:EQ4} \frac{\partial \Delta }{\partial t} +\frac{\partial \Delta }{\partial x} X-\frac{\partial X}{\partial x} \Delta =0 \end{equation} and let $\alpha _{i} (t)$ be arbitrary scalar continuous odd functions. Then the MRF of every perturbed system of the form $\dot{x}=X(t,x)+\sum _{i=1}^{m}\alpha _{i} (t)\Delta _{i} (t,x) ,\quad t\in {\mathbb R},\; x\in D\subset {\mathbb R}^{n} $ is equal to the MRF of system \eqref{evm:EQ1}. \end{theorem} \section{Admissible perturbations} \noindent For system \eqref{evm:EQ2}, we looked for admissible perturbations of the form $\Delta \cdot \alpha (t)$, where \[\Delta =\left( {\sum _{i+j+k=0}^{n} q_{ijk} x^{i} y^{j} z^{k} },\quad {\sum _{i+j+k=0}^{n} r_{ijk} x^{i} y^{j} z^{k} },\quad {\sum _{i+j+k=0}^{n} s_{ijk} x^{i} y^{j} z^{k} } \right)^{{\rm T}} ,\] $q_{ijk} ,r_{ijk} ,s_{ijk} \in {\mathbb R}$, $i,j,k,n\in {\mathbb N}\cup \{ 0\} $, and $\alpha (t)$ is an arbitrary continuous scalar odd function. To do this, we looked for the values of the parameters $a,b,c,d,e$, $q_{ijk} ,r_{ijk} ,s_{ijk} $ for which relation \eqref{evm:EQ4} is valid, i.e.\ the relation $\frac{\partial \Delta }{\partial t} +\frac{\partial \Delta }{\partial (x,y,z)} X(t,x,y,z)-\frac{\partial X(t,x,y,z)}{\partial (x,y,z)} \Delta =0$, where $X(t,x,y,z)=\left( {ax+by+xz}, {cx+dy+yz}, {ez-x^{2} -y^{2} -z^{2} } \right)^{{\rm T}} $ is the right-hand side of the original unperturbed system \eqref{evm:EQ2}. As a result, we were able to obtain the following statement. \begin{theorem} Let $\alpha _{i} (t)$ ($i=\overline{1,5}$) be arbitrary scalar continuous odd functions.
Then \begin{romanlist} \item the MRF of system \eqref{evm:EQ2} coincides with the MRF of the system \begin{eqnarray} \dot{x} & = & \left(ax+by+xz\right)\left(1+\alpha _{1} (t)\right),\nonumber \\ \dot{y} & = & \left(cx+dy+yz\right)\left(1+\alpha _{1} (t)\right),\nonumber \\ \dot{z} & = & \left(ez-\left(x^{2} +y^{2} +z^{2} \right)\right)\left(1+\alpha _{1} (t)\right);\nonumber \end{eqnarray} \item for $c=-b$, $d=a$, the MRF of system \eqref{evm:EQ2} coincides with the MRF of the system \begin{eqnarray} \dot{x} & = & \left(ax+by+xz\right)\left(1+\alpha _{1} \left(t\right)\right)+x\left(a+z\right)\alpha _{2} \left(t\right)+y\alpha _{3} \left(t\right),\nonumber \\ \dot{y} & = & \left(-bx+ay+yz\right)\left(1+\alpha _{1} \left(t\right)\right)+y\left(a+z\right)\alpha _{2} \left(t\right)-x\alpha _{3} \left(t\right),\label{evm:EQ5} \\ \dot{z} & = & \left(ez-x^{2} -y^{2} -z^{2} \right)\left(1+\alpha _{1} \left(t\right)+\alpha _{2} \left(t\right)\right);\nonumber \end{eqnarray} \item for $c=-b$, $d=a$, $e=-2a$, the MRF of system \eqref{evm:EQ2} coincides with the MRF of the system \begin{eqnarray} \dot{x} & = & \left(ax+by+xz\right)\left(1+\alpha _{1} \left(t\right)\right)+x\left(a+z\right)\alpha _{2} \left(t\right)+y\alpha _{3} \left(t\right) \nonumber\\ & & -y\left(x^{2} +y^{2} \right)\left(4az+x^{2} +y^{2} +2z^{2} \right)\alpha _{4} \left(t\right), \nonumber\\ \dot{y} & = & \left(-bx+ay+yz\right)\left(1+\alpha _{1} \left(t\right)\right)+y\left(a+z\right)\alpha _{2} \left(t\right)-x\alpha _{3} \left(t\right) \label{evm:EQ6} \\ & & +x\left(x^{2} +y^{2} \right)\left(4az+x^{2} +y^{2} +2z^{2} \right)\alpha _{4} \left(t\right), \nonumber\\ \dot{z} & = & -\left(2az+x^{2} +y^{2} +z^{2} \right)\left(1+\alpha _{1} \left(t\right)+\alpha _{2} \left(t\right)\right); \nonumber \end{eqnarray} \item for $c=b=0$, $d=a$, $e=-2a$, the MRF of system \eqref{evm:EQ2} coincides with the MRF of the system \begin{eqnarray} \dot{x} & = & \left(ax+xz\right)\left(1+\alpha _{1} \left(t\right)\right)+y\alpha _{2} \left(t\right) \nonumber\\ & & +y\left(4az+x^{2} +y^{2} +2z^{2} \right)\left(x^{2} \alpha _{3} \left(t\right)+xy\alpha _{4} \left(t\right)+y^{2} \alpha _{5} \left(t\right)\right), \nonumber\\ \dot{y} & = & \left(ay+yz\right)\left(1+\alpha _{1} \left(t\right)\right)-x\alpha _{2} \left(t\right) \label{evm:EQ7} \\ & & -x\left(4az+x^{2} +y^{2} +2z^{2} \right)\left(x^{2} \alpha _{3} \left(t\right)+xy\alpha _{4} \left(t\right)+y^{2} \alpha _{5} \left(t\right)\right), \nonumber\\ \dot{z} & = & -\left(2az+x^{2} +y^{2} +z^{2} \right)\left(1+\alpha _{1} \left(t\right)\right).\nonumber \end{eqnarray} \end{romanlist} \end{theorem} \begin{proof} Let us prove the second assertion of the theorem. For $c=-b$, $d=a$, the right-hand side of system \eqref{evm:EQ2} is $X=\left( {ax+by+xz}, {-bx+ay+yz}, {ez-x^{2} -y^{2} -z^{2} } \right)^{{\rm T}}$ and its Jacobi matrix is \[\frac{\partial X(t,x,y,z)}{\partial (x,y,z)} = \left( \begin{array}{ccc} {a+z} & {b} & {x} \\ {-b} & {a+z} & {y} \\ {-2x} & {-2y} & {e-2z} \end{array} \right).\] Let us write out the vector factors of the $\alpha _{i} (t)$ from the right-hand side of system \eqref{evm:EQ5}: \(\Delta _{1} = \left( {ax+by+xz}, {-bx+ay+yz}, {ez-x^{2} -y^{2} -z^{2} } \right)^{{\rm T}}\), \(\Delta _{2} = \left( {x\left(a+z\right)}, {y\left(a+z\right)}, {ez-x^{2} -y^{2} -z^{2}}\right)^{{\rm T}}\), \(\Delta _{3} = \left( {y}, {-x}, {0} \right)^{{\rm T}}\). By successively checking identity \eqref{evm:EQ4} for each vector factor $\Delta _{i}$, we can verify that it holds.
Let us show this, for example, for $\Delta _{2} $. The Jacobi matrix is \[\frac{\partial \Delta _{2} }{\partial (x,y,z)} = \left( \begin{array}{ccc} {a+z} & {0} & {x} \\ {0} & {a+z} & {y} \\ {-2x} & {-2y} & {e-2z} \end{array} \right).\] Whence we obtain \begin{multline*} \frac{\partial \Delta _{2} }{\partial t} +\frac{\partial \Delta _{2} }{\partial (x,y,z)} X(t,x,y,z)-\frac{\partial X(t,x,y,z)}{\partial (x,y,z)} \Delta _{2} \\ \equiv \left( \begin{array}{c} {0} \\ {0} \\ {0} \end{array} \right)+\left( \begin{array}{ccc} {a+z} & {0} & {x} \\ {0} & {a+z} & {y} \\ {-2x} & {-2y} & {e-2z} \end{array} \right)\left( \begin{array}{c} {ax+by+xz} \\ {-bx+ay+yz} \\ {ez-x^{2} -y^{2} -z^{2} } \end{array} \right) \\ -\left( \begin{array}{ccc} {a+z} & {b} & {x} \\ {-b} & {a+z} & {y} \\ {-2x} & {-2y} & {e-2z} \end{array} \right)\left( \begin{array}{c} {x\left(a+z\right)} \\ {y\left(a+z\right)} \\ {ez-x^{2} -y^{2} -z^{2} } \end{array} \right)\equiv \left( \begin{array}{c} {0} \\ {0} \\ {0} \end{array} \right). \end{multline*} Then the second assertion of the theorem follows from Theorem 1. The remaining assertions of the theorem can be proved similarly. \end{proof}
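The hand computation above is easy to replicate with a computer algebra system. The following \texttt{sympy} sketch checks identity \eqref{evm:EQ4} for $\Delta_2$; since neither $\Delta_2$ nor $X$ depends on $t$, the time-derivative term vanishes.

\begin{verbatim}
import sympy as sp

# Symbolic check of identity (4) for Delta_2 from the proof above.
x, y, z, a, b, e = sp.symbols('x y z a b e')
v = sp.Matrix([x, y, z])
X = sp.Matrix([a*x + b*y + x*z,
               -b*x + a*y + y*z,
               e*z - x**2 - y**2 - z**2])
Delta2 = sp.Matrix([x*(a + z), y*(a + z), e*z - x**2 - y**2 - z**2])

lhs = Delta2.jacobian(v) * X - X.jacobian(v) * Delta2
assert lhs.expand() == sp.zeros(3, 1)   # identity (4) holds
\end{verbatim}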
When modeling real processes, the time $t\ge 0$ is usually considered; therefore the requirement that the functions $\alpha _{i} (t)$ be odd is not essential, since they can be extended continuously in an odd way to the negative time semi-axis (provided that $\alpha _{i} (0)=0$). Theorem 2 can be used to study the qualitative behavior of the solutions of admissibly perturbed systems. \section{Instability of equilibrium point} \noindent By Theorem 1 of \cite{evm:Yang2018}, for $e=0$, the equilibrium point $O(0,0,0)$ of system \eqref{evm:EQ2} is unstable. With this in mind, let us prove a similar statement for systems \eqref{evm:EQ5} -- \eqref{evm:EQ7}. \begin{theorem} Let $\alpha _{i} (t)$ ($i=\overline{1,5}$) be scalar continuous functions (not necessarily odd). \begin{romanlist} \item If $e=0$ and $\alpha _{1} (t)+\alpha _{2} (t)\ge l>-1$ $\forall t\ge 0$ ($l={\rm const}$), then the solution $x=y=z=0$ of system \eqref{evm:EQ5} is unstable (in the sense of Lyapunov). \item If $a=0$ and $\alpha _{1} (t)+\alpha _{2} (t)\ge l>-1$ $\forall t\ge 0$ ($l={\rm const}$), then the solution $x=y=z=0$ of system \eqref{evm:EQ6} is unstable (in the sense of Lyapunov). \item If $a=0$ and $\alpha _{1} (t)\ge l>-1$ $\forall t\ge 0$ ($l={\rm const}$), then the solution $x=y=z=0$ of system \eqref{evm:EQ7} is unstable (in the sense of Lyapunov). \end{romanlist} \end{theorem} \begin{proof} Consider the function $V(x,y,z)=-z^{3} $. In any neighborhood of the origin of ${\mathbb R}^{3} $, the function $V$ is bounded and there exists a region in which $V>0$. \begin{romanlist} \item For $e=0$, the derivative of the function $V$ along the trajectories of system \eqref{evm:EQ5} is $\dot{V}=3z^{2} \left(x^{2} +y^{2} +z^{2} \right)\left(1+\alpha _{1} (t)+\alpha _{2} (t)\right)$. Since $\alpha _{1} (t)+\alpha _{2} (t)\ge l>-1$ for all $t\ge 0$, we have $\dot{V}\ge 3z^{2} \left(x^{2} +y^{2} +z^{2} \right)\left(1+l\right)$ for all $t\ge 0$, where $l>-1$. Considering that $3z^{2} \left(x^{2} +y^{2} +z^{2} \right)>0$ $\forall (x,y,z)\ne (0,0,0)$, $\dot{V}$ is a positive definite function. Then, by Theorem 4.7.1 of \cite{evm:Liao2007} (taking into account Corollary 4.7.3 of \cite{evm:Liao2007} and its proof), the solution $x=y=z=0$ of system \eqref{evm:EQ5} is unstable. \item For $a=0$, the derivative of the function $V$ along the trajectories of system \eqref{evm:EQ6} is $\dot{V}=3z^{2} \left(x^{2} +y^{2} +z^{2} \right)\left(1+\alpha _{1} (t)+\alpha _{2} (t)\right)$. Repeating the reasoning from item (i), we find that the solution $x=y=z=0$ of system \eqref{evm:EQ6} is unstable. \item For $a=0$, the derivative of the function $V$ along the trajectories of system \eqref{evm:EQ7} is $\dot{V}=3z^{2} \left(x^{2} +y^{2} +z^{2} \right)\left(1+\alpha _{1} (t)\right)$. Since $\alpha _{1} (t)\ge l>-1$ for all $t\ge 0$, we have $\dot{V}\ge 3z^{2} \left(x^{2} +y^{2} +z^{2} \right)\left(1+l\right)$ for all $t\ge 0$, where $l>-1$. Further, repeating the reasoning from item (i), we find that the solution $x=y=z=0$ of system \eqref{evm:EQ7} is unstable. \end{romanlist} \end{proof} \section{Periodic solution} \noindent By Theorem 9 of \cite{evm:Yang2018}, for $d=a$, $c=-b\ne 0$ and $a(a+e)<0$, system \eqref{evm:EQ2} has a $2\pi /\left|b\right|$-periodic solution \begin{eqnarray} x(t) & = & \sqrt{-a(a+e)} \sin \left(bt\right), \nonumber\\ y(t) & = & \sqrt{-a(a+e)} \cos \left(bt\right),\label{evm:EQ8} \\ z(t) & = & -a \nonumber \end{eqnarray} corresponding to the cycle $x^{2} +y^{2} =-a(a+e)$, $z=-a$. Moreover, this solution is asymptotically stable for $2a+e<0$ and unstable for $2a+e>0$. Similar statements are valid for systems \eqref{evm:EQ5} and \eqref{evm:EQ6}. \begin{lemma} Let $\alpha _{i} (t)$ ($i=\overline{1,4}$) be scalar continuous functions (not necessarily odd). \begin{romanlist} \item If $a(a+e)<0$, then system \eqref{evm:EQ5} has a solution \begin{eqnarray} x(t) & = &\sqrt{-a\left(a+e\right)} \sin \left(bt+\int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\right),\nonumber \\ y(t) & = & \sqrt{-a\left(a+e\right)} \cos \left(bt+\int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\right),\label{evm:EQ9} \\ z(t) & = & -a\nonumber \end{eqnarray} corresponding to the cycle $x^{2} +y^{2} =-a(a+e)$, $z=-a$. \item System \eqref{evm:EQ6} has a solution \begin{eqnarray} x(t) & = & a\sin \left(bt+\int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s\right),\nonumber \\ y(t) & = & a\cos \left(bt+\int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s\right),\label{evm:EQ10} \\ z(t) & = & -a \nonumber \end{eqnarray} corresponding to the cycle $x^{2} +y^{2} =a^{2} $, $z=-a$. \end{romanlist} \end{lemma} \begin{proof} The assertions of the lemma are proved by direct substitution of \eqref{evm:EQ9} into system \eqref{evm:EQ5} and of \eqref{evm:EQ10} into system \eqref{evm:EQ6}; a symbolic version of the first substitution is sketched at the end of this section. \end{proof} \begin{theorem} Let $\alpha _{i} (t)$ ($i=\overline{1,4}$) be scalar twice continuously differentiable odd functions, let $b\ne 0$, and let the right-hand sides of systems \eqref{evm:EQ5} and \eqref{evm:EQ6} be $2\pi /\left|b\right|$-periodic with respect to time $t$. \begin{romanlist} \item If $a(a+e)<0$ and $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{-2\pi /\left|b\right|} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=2\pi k$, then solution \eqref{evm:EQ9} of system \eqref{evm:EQ5} is $2\pi /\left|b\right|$-periodic, and it is asymptotically stable for $2a+e<0$ and unstable for $2a+e>0$. \item If $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{-2\pi /\left|b\right|} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=2\pi k$, then solution \eqref{evm:EQ10} of system \eqref{evm:EQ6} is $2\pi /\left|b\right|$-periodic.
\end{romanlist} \end{theorem} \begin{proof} \begin{romanlist} \item It follows from Theorem 2 that the MRF of system \eqref{evm:EQ5} coincides with the MRF of system \eqref{evm:EQ2} for $c=-b$ and $d=a$. By Theorem 9 of \cite{evm:Yang2018}, for $d=a$, $c=-b\ne 0$ and $a(a+e)<0$, system \eqref{evm:EQ2} has a $2\pi /\left|b\right|$-periodic solution \eqref{evm:EQ8}, which is asymptotically stable for $2a+e<0$ and unstable for $2a+e>0$. By Lemma 1, system \eqref{evm:EQ5} has a solution \eqref{evm:EQ9}. Let $\Bar{\gamma}(t)=\left(x(t),y(t),z(t)\right)$ denote solution \eqref{evm:EQ8} of system \eqref{evm:EQ2} and $\bar{\chi }(t)=\left(x(t),y(t),z(t)\right)$ denote solution \eqref{evm:EQ9} of system \eqref{evm:EQ5}. If $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{-2\pi /\left|b\right|} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=2\pi k$, then $\bar{\chi }\left(-\pi /\left|b\right|\right)=\Bar{\gamma}\left(-\pi /\left|b\right|\right)$ and the statement of the theorem immediately follows from Theorem 5 of \cite{evm:MironenkoV2004}. \item It follows from Theorem 2 that the MRF of system \eqref{evm:EQ6} coincides with the MRF of system \eqref{evm:EQ2} for $c=-b$, $d=a$ and $e=-2a$. By Theorem 9 of \cite{evm:Yang2018}, for $d=a$, $c=-b\ne 0$ and $a(a+e)<0$, system \eqref{evm:EQ2} has a $2\pi /\left|b\right|$-periodic solution \eqref{evm:EQ8}. By Lemma 1, system \eqref{evm:EQ6} has a solution \eqref{evm:EQ10}. Let $\Bar{\gamma}(t)=\left(x(t),y(t),z(t)\right)$ denote solution \eqref{evm:EQ8} of system \eqref{evm:EQ2} and $\tilde{\chi }(t)=\left(x(t),y(t),z(t)\right)$ denote solution \eqref{evm:EQ10} of system \eqref{evm:EQ6}. If $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{-2\pi /\left|b\right|} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=2\pi k$, then $\tilde{\chi }\left(-\pi /\left|b\right|\right)=\Bar{\gamma}\left(-\pi /\left|b\right|\right)$ and the statement of the theorem immediately follows from Theorem 5 of \cite{evm:MironenkoV2004}. \end{romanlist} \end{proof} \begin{theorem} Let $\alpha _{i} (t)$ ($i=\overline{1,4}$) be scalar continuous functions (not necessarily odd) and let $b\ne 0$. \begin{romanlist} \item If the function $b\alpha _{1} (t)+\alpha _{3} (t)$ is $2\pi /\left|b\right|$-periodic, $a(a+e)<0$, and $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=0$, then solution \eqref{evm:EQ9} of system \eqref{evm:EQ5} is $2\pi /\left|b\right|$-periodic (the period is not necessarily minimal). \item If the function $b\alpha _{1} (t)+\alpha _{3} (t)+a^{4} \alpha _{4} (t)$ is $2\pi /\left|b\right|$-periodic and $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=0$, then solution \eqref{evm:EQ10} of system \eqref{evm:EQ6} is $2\pi /\left|b\right|$-periodic (the period is not necessarily minimal). \end{romanlist} \end{theorem} \begin{proof} To prove the first assertion, it suffices to prove that $\int\limits_{0}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv \int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s$. Taking into account that $\int\limits_{0}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv \int\limits_{0}^{t} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s+\int\limits_{t}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s$, it remains to prove that $\int\limits_{t}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv 0$.
Let us introduce the notation $A(t)=\int\limits_{t}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s$. Since $b\alpha _{1} (t)+\alpha _{3} (t)$ is a continuous function, by the properties of an integral with a variable upper limit, $A(t)$ is a differentiable function and $\dot{A}(t)\equiv b\alpha _{1} (t+2\pi /b)+\alpha _{3} (t+2\pi /b)-\left(b\alpha _{1} (t)+\alpha _{3} (t)\right)$. Since the function $b\alpha _{1} (t)+\alpha _{3} (t)$ is $2\pi /\left|b\right|$-periodic, it follows that $\dot{A}(t)\equiv 0$, that is, $A(t)\equiv {\rm const}$. In particular, $A(t)\equiv A(0)$, i.e. \begin{equation} \label{evm:EQ11} \int\limits_{t}^{t+2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv \int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s. \end{equation} By the hypothesis of the theorem, $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=0$, which completes the proof of the first statement. The second assertion of the theorem is proved similarly to the first. \end{proof} \begin{proposition} In the formulation of Theorem 5: \begin{romanlist} \item the condition $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=0$ can be replaced by the condition that the function $b\alpha _{1} (t)+\alpha _{3} (t)$ is odd; \item the condition $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=0$ can be replaced by the condition that the function $b\alpha _{1} (t)+\alpha _{3} (t)+a^{4} \alpha _{4} (t)$ is odd. \end{romanlist} \end{proposition} \begin{proof} It follows from identity \eqref{evm:EQ11} with $t=-2\pi /b$ that $\int\limits_{-2\pi /b}^{0} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv \int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s$. And since the function $b\alpha _{1} (t)+\alpha _{3} (t)$ is odd, we also have $-\int\limits_{-2\pi /b}^{0} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s\equiv \int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s$. Therefore, $\int\limits_{0}^{2\pi /b} \left(b\alpha _{1} (s)+\alpha _{3} (s)\right){\rm d}s=0$. The second statement is proved similarly to the first. \end{proof} \begin{theorem} Let $\alpha _{i} (t)$ ($i=\overline{1,4}$) be scalar continuous functions (not necessarily odd) and let $b=0$. \begin{romanlist} \item If the function $\alpha _{3} (t)$ is $\omega $-periodic, $a(a+e)<0$, and $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s=2\pi k$, then solution \eqref{evm:EQ9} of system \eqref{evm:EQ5} is $\omega $-periodic (the period is not necessarily minimal). \item If the function $\alpha _{3} (t)+a^{4} \alpha _{4} (t)$ is $\omega $-periodic and $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{\omega } \left(\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=2\pi k$, then solution \eqref{evm:EQ10} of system \eqref{evm:EQ6} is $\omega $-periodic (the period is not necessarily minimal).
\end{romanlist} \end{theorem} \begin{proof} For $b=0$, solution \eqref{evm:EQ9} of system \eqref{evm:EQ5} takes the form $x(t)=\sqrt{-a\left(a+e\right)} \sin \left(\int\limits_{0}^{t} \alpha _{3} (s){\rm d}s\right)$, $y(t)=\sqrt{-a\left(a+e\right)} \cos \left(\int\limits_{0}^{t} \alpha _{3} (s){\rm d}s\right)$, $z(t)=-a$, and to prove the first assertion of the theorem, it suffices to prove that $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{t+\omega } \alpha _{3} (s){\rm d}s\equiv \int\limits_{0}^{t} \alpha _{3} (s){\rm d}s+2\pi k$. Taking into account that $\int\limits_{0}^{t+\omega } \alpha _{3} (s){\rm d}s\equiv \int\limits_{0}^{t} \alpha _{3} (s){\rm d}s+\int\limits_{t}^{t+\omega } \alpha _{3} (s){\rm d}s$, it remains to prove that $\exists k\in {\mathbb Z}$ such that $\int\limits_{t}^{t+\omega } \alpha _{3} (s){\rm d}s\equiv 2\pi k$. Let us introduce the notation $B(t)=\int\limits_{t}^{t+\omega } \alpha _{3} (s){\rm d}s$. Since $\alpha _{3} (t)$ is a continuous function, by the properties of an integral with a variable upper limit, $B(t)$ is a differentiable function and $\dot{B}(t)\equiv \alpha _{3} (t+\omega )-\alpha _{3} (t)$. Since the function $\alpha _{3} (t)$ is $\omega $-periodic, it follows that $\dot{B}(t)\equiv 0$, that is, $B(t)\equiv {\rm const}$. In particular, $B(t)\equiv B(0)$, i.e. \begin{equation} \label{evm:EQ12} \int\limits_{t}^{t+\omega } \alpha _{3} (s){\rm d}s\equiv \int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s. \end{equation} It remains to note that, by the hypothesis of the theorem, $\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s=2\pi k$. The second assertion of the theorem is proved similarly to the first. \end{proof} \begin{proposition} In the formulation of Theorem 6: \begin{romanlist} \item the condition ``$\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s=2\pi k$'' can be replaced by the condition that the function $\alpha _{3} (t)$ is odd; \item the condition ``$\exists k\in {\mathbb Z}$ such that $\int\limits_{0}^{\omega } \left(\alpha _{3} (s)+a^{4} \alpha _{4} (s)\right){\rm d}s=2\pi k$'' can be replaced by the condition that the function $\alpha _{3} (t)+a^{4} \alpha _{4} (t)$ is odd. \end{romanlist} \end{proposition} \begin{proof} From identity \eqref{evm:EQ12} with $t=-\omega $ it follows that $\int\limits_{-\omega }^{0} \alpha _{3} (s){\rm d}s\equiv \int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s$. Since the function $\alpha _{3} (t)$ is odd, we also have $\int\limits_{-\omega }^{0} \alpha _{3} (s){\rm d}s=-\int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s$; hence $\int\limits_{0}^{\omega } \alpha _{3} (s){\rm d}s=0$, i.e.\ $k=0$. The second statement is proved similarly to the first. \end{proof}
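Before turning to chaos, we note that the direct substitution behind Lemma 1 can also be mechanized. The following \texttt{sympy} sketch verifies assertion (i), i.e.\ that \eqref{evm:EQ9} solves system \eqref{evm:EQ5} for arbitrary continuous $\alpha_1,\alpha_2,\alpha_3$; it relies on \texttt{sympy}'s \texttt{simplify} to close the trigonometric identities involved.

\begin{verbatim}
import sympy as sp

# Sketch mechanizing the direct substitution behind Lemma 1(i):
# solution (9) is plugged into system (5) with symbolic alpha_i.
t, s, a, b, e = sp.symbols('t s a b e')
a1 = sp.Function('alpha1'); a2 = sp.Function('alpha2')
a3 = sp.Function('alpha3')

theta = b*t + sp.Integral(b*a1(s) + a3(s), (s, 0, t))
r = sp.sqrt(-a*(a + e))
x, y, z = r*sp.sin(theta), r*sp.cos(theta), -a

rhs = [(a*x + b*y + x*z)*(1 + a1(t)) + x*(a + z)*a2(t) + y*a3(t),
       (-b*x + a*y + y*z)*(1 + a1(t)) + y*(a + z)*a2(t) - x*a3(t),
       (e*z - x**2 - y**2 - z**2)*(1 + a1(t) + a2(t))]
for u, f in zip((x, y, z), rhs):
    assert sp.simplify(sp.diff(u, t) - f) == 0
\end{verbatim}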
\section{Chaotic attractor} By Theorem 13 of \cite{evm:Yang2018}, for $c=-b$, $d=a$, $e=-2a$ and $ab\ne 0$, system \eqref{evm:EQ2} has two heteroclinic orbits connecting the equilibrium points $O(0,0,0)$ and $G(0,0,-2a)$, the eigenvalues of the Jacobi matrix at which are $\lambda _{1}^{O} =-2a$, $\lambda _{2,3}^{O} =a\pm b\sqrt{-1} $ and $\lambda _{1}^{G} =2a$, $\lambda _{2,3}^{G} =-a\pm b\sqrt{-1} $. Since $\lambda _{1}^{O} \lambda _{1}^{G} =-4a^{2} <0$ and $\Re\left(\lambda _{2}^{O} \right)\Re\left(\lambda _{2}^{G} \right)=-a^{2} <0$, the conditions of Shilnikov's Heteroclinic Theorem \cite{evm:ZhouT2006} are not satisfied. Despite this, one can expect the presence of chaos in system \eqref{evm:EQ2}, which was proved by \citet{evm:Belozyorov2015} (who also exhibited a chaotic attractor) for the particular case $a=d=-1/3$, $b=-1$, $c=1$, $e=2/3$. A numerical simulation (using Wolfram Mathematica) shows the presence (see Figs.~\ref{evm:FIG1}--\ref{evm:FIG2}) of similar chaotic attractors in systems \eqref{evm:EQ2} and \eqref{evm:EQ6} for $a=d=-3$, $b=-8$, $c=8$, $e=6$, $\alpha _{i} (t)=\sin \left(i t\right)$, $i=\overline{1,4}$. In this case, the largest Lyapunov exponent for system \eqref{evm:EQ2} is $\lambda _{\max } =0.0254794$, which confirms the chaotic nature of the attractor. To calculate the Lyapunov exponents, we used the command \(F\left[\{x\_,y\_,z\_\}\right]:=\left\{x z-3 x-8 y,8 x+y z-3 y,-x^2-y^2-z^2+6 z\right\};\ LCEsC[F, \{1/100, 2/100, 3\}, 0.05, 10000, 2, 0.01]\) from the LCE package for Wolfram Mathematica \cite{evm:Sandri1996}. Note that if the conditions of Theorem 4, 5 or 6 are satisfied for system \eqref{evm:EQ6}, then system \eqref{evm:EQ6} has a periodic solution; that is, system \eqref{evm:EQ6} demonstrates the coexistence of a periodic solution and a chaotic attractor. A sketch of a comparable open-source simulation is given after the figures below. \begin{figure}[h] \begin{center} \includegraphics[width=0.49\linewidth]{evm1a}\includegraphics[width=0.49\linewidth]{evm1b} \end{center} \caption{Phase portraits of chaotic attractors of systems \eqref{evm:EQ2} and \eqref{evm:EQ6} (left and right, respectively) for $a=d=-3$, $b=-8$, $c=8$, $e=6$, $\alpha _{i} (t)=\sin \left(i t\right)$, $i=\overline{1,4}$.} \label{evm:FIG1} \end{figure} \begin{sidewaysfigure} \begin{center} \includegraphics[width=0.33\linewidth]{evm2a}\includegraphics[width=0.33\linewidth]{evm3a} \includegraphics[width=0.33\linewidth]{evm4a}\\ \includegraphics[width=0.33\linewidth]{evm2b}\includegraphics[width=0.33\linewidth]{evm3b} \includegraphics[width=0.33\linewidth]{evm4b} \end{center} \caption{Projections onto the coordinate planes of chaotic attractors of systems \eqref{evm:EQ2} and \eqref{evm:EQ6} (top and bottom row, respectively) for $a=d=-3$, $b=-8$, $c=8$, $e=6$, $\alpha _{i} (t)=\sin \left(i t\right)$, $i=\overline{1,4}$.} \label{evm:FIG2} \end{sidewaysfigure}
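The sketch below integrates the perturbed system \eqref{evm:EQ6} in Python with \texttt{scipy} (the computations reported in this section were done in Wolfram Mathematica); the parameter values and the initial point match those of the LCE command above.

\begin{verbatim}
import numpy as np
from scipy.integrate import odeint

# System (6) with a = d = -3, b = -8, c = 8, e = 6 and
# alpha_i(t) = sin(i*t), i = 1, ..., 4.
a, b = -3.0, -8.0
al = lambda i, t: np.sin(i * t)

def system6(u, t):
    x, y, z = u
    q = 4*a*z + x**2 + y**2 + 2*z**2
    dx = ((a*x + b*y + x*z)*(1 + al(1, t)) + x*(a + z)*al(2, t)
          + y*al(3, t) - y*(x**2 + y**2)*q*al(4, t))
    dy = ((-b*x + a*y + y*z)*(1 + al(1, t)) + y*(a + z)*al(2, t)
          - x*al(3, t) + x*(x**2 + y**2)*q*al(4, t))
    dz = -(2*a*z + x**2 + y**2 + z**2)*(1 + al(1, t) + al(2, t))
    return [dx, dy, dz]

t = np.linspace(0.0, 200.0, 40001)
orbit = odeint(system6, [0.01, 0.02, 3.0], t)
# plotting orbit[:, 0] against orbit[:, 1] and orbit[:, 2]
# reproduces attractors like those of Figures 1 and 2
\end{verbatim}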
\section{Conclusion} \noindent A set of non-stationary systems of ordinary differential equations is obtained whose MRF coincides with the MRF of the autonomous generalized Langford system \eqref{evm:EQ2}. The common MRF of these systems implies the coincidence of some qualitative properties of the behavior of their solutions. This made it possible to use the results on the qualitative behavior of solutions of the well-studied generalized Langford system \cite{evm:Yang2018} to study non-stationary perturbed systems of a more complicated kind. For such systems (\eqref{evm:EQ5}, \eqref{evm:EQ6}, and \eqref{evm:EQ7}), conditions were obtained under which the equilibrium point is unstable (in the sense of Lyapunov). For systems \eqref{evm:EQ5} and \eqref{evm:EQ6}, conditions were obtained under which these systems have periodic solutions; in addition, for system \eqref{evm:EQ5}, conditions for the asymptotic stability (instability) of a periodic solution were obtained. The presence of similar chaotic attractors of systems \eqref{evm:EQ2} and \eqref{evm:EQ6} was shown using a numerical experiment. Moreover, the coexistence of a periodic solution and a chaotic attractor was shown for system \eqref{evm:EQ6}. \nonumsection{Acknowledgments} \noindent This research was supported by the Horizon2020-2017-RISE-777911 project. The authors are grateful to Politehnica University of Timisoara, Romania, for its hospitality, as well as to Professor Gheorghe Tigan for his support. \bibliographystyle{ws-ijbc}
\section{Introduction} Ascent sequences~\cite{BMCDK} are rich number sequences in that they uniquely encode four different combinatorial objects and thereby induce bijections between these objects. These objects are (2+2)-free posets; Fishburn permutations; upper-triangular matrices of non-negative integers having neither columns nor rows of only zeros; and Stoimenow matchings. Statistics on those objects have been shown to be related to natural statistics on the ascent sequences to which they correspond. In this paper we will define a new kind of sequence that we term a {\textit{weak ascent}} sequence and study the rich connections these sequences have to other combinatorial objects that are similar in spirit to those mentioned above. Given a sequence of integers $x=(x_1,\ldots,x_n)$, we say there is a {\it{weak ascent}} at position $i$ if $x_i \leq x_{i+1}$. We denote by $\mathrm{wasc}(x)$ the number of weak ascents in the sequence $x$. Throughout this paper we will use the notation $[a,b]$ for the set $\{a,a+1,a+2,\ldots,b\}$. \begin{definition}\label{was-defn} We call a sequence of integers $a=(a_1,\ldots,a_n)$ a {\textit{weak ascent sequence}} if $a_1=0$ and $a_{i+1} \in [0,1+\mathrm{wasc}(a_1,\ldots,a_i)]$ for all $i\in [1,n-1]$. Let $\mathrm{WAsc}_n$ be the set of weak ascent sequences of length $n$. \end{definition} In Table~\ref{allfourseqs} we list all weak ascent sequences of length at most four. \begin{table}[!ht] $$\def\arraystretch{1.2} \begin{array}{c@{\quad}l}\hline n & \mathrm{WAsc}_n \\ \hline \\[-3ex] 1 & (0) \\ 2 & (0,0), \, (0,1) \\ 3 & (0,0,0), \, (0,0,1),\, (0,0,2),\, (0,1,0),\, (0,1,1),\, (0,1,2) \\ 4 & (0,0,0,0),\, (0,0,0,1),\, (0,0,0,2),\, (0,0,0,3),\, (0,0,1,0),\, (0,0,1,1),\, (0,0,1,2), \\ & (0,0,1,3),\, (0,0,2,0),\, (0,0,2,1),\, (0,0,2,2),\, (0,0,2,3),\, (0,1,0,0),\, (0,1,0,1), \\ & (0,1,0,2),\, (0,1,1,0),\, (0,1,1,1),\, (0,1,1,2),\, (0,1,1,3),\, (0,1,2,0),\, (0,1,2,1), \\ & (0,1,2,2),\, (0,1,2,3) \\[1ex] \hline \end{array} $$ \caption{All weak ascent sequences of length at most 4.\label{allfourseqs}} \end{table} To contrast this with the original ascent sequences, recall that a sequence of integers $x=(x_1,\ldots,x_n)$ has an {\it{ascent}} at position $i$ if $x_i < x_{i+1}$. An \emph{ascent sequence} is a sequence of integers $a=(a_1,\ldots,a_n)$ with $a_1=0$ and $a_{i+1} \in [0,1+\mathrm{asc}(a_1,\ldots,a_i)]$ for all $i\in [1,n-1]$, where $\mathrm{asc}$ denotes the number of ascents in the sequence. Clearly, ascent sequences are weak ascent sequences. In this paper we will show how these weak ascent sequences uniquely encode each of the following objects: permutations avoiding a particular length-4 bivincular pattern (in Section~\ref{permssection}); upper-triangular binary matrices that satisfy a column-adjacency rule (in Section~\ref{matricessection}); factorial posets that contain no {\em weak} (3+1) subposet (in Section~\ref{posetssection}). We show in Section~\ref{inversionssection} how weak ascent sequences are related to a class of pattern-avoiding inversion sequences that has been a topic of recent research by Auli and Elizalde~\cite{AuliSergi1,AuliSergi2,elizalde}. In Section~\ref{inversionssection} we also consider the problem of enumerating these new sequences and give a closed-form expression for the number of weak ascent sequences having a prescribed length and number of weak ascents. The objects that we study in this paper are summarised in Figure~\ref{fig:summary}, along with the names of the bijections that we construct and prove between these objects. In that diagram we also include the section numbers where each of the bijections may be found.
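Definition~\ref{was-defn} translates directly into a brute-force generator. The following Python sketch (ours, for illustration only) reproduces the counts $1$, $2$, $6$ and $23$ behind Table~\ref{allfourseqs}.

\begin{verbatim}
def wasc(seq):
    # number of weak ascents: positions i with seq[i] <= seq[i+1]
    return sum(1 for i in range(len(seq) - 1) if seq[i] <= seq[i + 1])

def weak_ascent_sequences(n):
    # yield all weak ascent sequences of length n (Definition 1.1)
    def extend(seq):
        if len(seq) == n:
            yield tuple(seq)
            return
        for a in range(0, 2 + wasc(seq)):   # a in [0, 1 + wasc(seq)]
            yield from extend(seq + [a])
    yield from extend([0])

for n in range(1, 5):
    print(n, sum(1 for _ in weak_ascent_sequences(n)))
# prints the counts 1, 2, 6, 23 of Table 1
\end{verbatim}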
\begin{figure}[!h] \begin{tikzpicture}[scale=0.5, rounded corners = 2pt] \begin{scope} \draw (0,4) node [align=left] {$\mathrm{WAsc}_n$:\, weak ascent}; \draw (0,2.5) node [align=left] {sequences of length $n$}; \draw (0,1) node [align=left] {(Definition~\ref{was-defn})}; \draw (-5,0) rectangle (5,5); \end{scope} \begin{scope}[xshift = 90ex] \draw (0,4) node [align=left] {$\mathcal{W}_n$: weak Fishburn}; \draw (0,2.5) node [align=left] {permutations of length $n$}; \draw (0,1) node [align=left] {(Section~\ref{permssection})}; \draw (-5,0) rectangle (5,5); \end{scope} \begin{scope}[yshift = -55ex] \draw (0,4) node [align=left] {$\mathrm{WMat}_n$: upper-triangular}; \draw (0,2.5) node [align=left] {0/1-matrices given in}; \draw (0,1) node [align=left] {Definition~\ref{matrixdefn}}; \draw (-5,0) rectangle (5,5); \end{scope} \begin{scope}[xshift = 90ex, yshift = -60ex] \draw (0,6.05) node {$\mathrm{WPoset}_n$: weakly}; \draw (0,4.5) node {$(3+1)$-free factorial}; \draw (0,3.00) node {posets on $\{1,\ldots,n\}$}; \draw (0,1.5) node {(Section~\ref{posetssection})}; \draw (-5,0) rectangle (5,7); \end{scope} \draw[->] (0,-1) -- (0,-4); \draw (-0.75,-2.5) node {$\Omega$}; \draw (0.75,-2.6) node {\S3}; \draw[<-] (6,2.5) -- (10,2.5); \draw (8, 3.25) node {$\Gamma$}; \draw (8, 1.75) node {\S2}; \draw[->] (6,-6) -- (10,-6); \draw (8, -5.25) node {$\Psi$}; \draw (8, -6.75) node {\S\ref{posetssection}}; \draw[<-] (6,-9) -- (10,-9); \draw (8, -8.25) node {$\Phi$}; \draw (8, -9.75) node {\S\ref{posetssection}}; \end{tikzpicture} \caption{Diagrammatic summary of the sets and bijections of interest.} \label{fig:summary} \end{figure} \section{Weak Fishburn permutations} \label{permssection} Let $S_n$ be the set of permutations of the set $\{1,\ldots,n\}$. Given a pattern $P$, the convention in the pattern-avoidance literature is to denote by $S_n(P)$ the set of permutations in $S_n$ that do not contain the pattern $P$. The set of Fishburn permutations~\cite{BMCDK, GilWeiner}, $\mathcal{F}_n=S_n(F)$, consists of those permutations that avoid the bivincular pattern $$ F = \big(\,231,\,[0,3]\!\times\!\{1\}\cup \{1\}\!\times\![0,3]\,\big) = \pattern{scale=1}{3} {1/2, 2/3, 3/1} {0/1, 1/1, 2/1, 3/1, 1/0, 1/2, 1/3}, $$ here defined and depicted as a mesh pattern~\cite{anderspetter}. The shaded rows and columns indicate that, in an occurrence of such a pattern, there should be no other entries of the permutation in the shaded zones when the pattern is placed over the permutation. Bousquet-M\'elou et al.~\cite{BMCDK} gave a bijection between ascent sequences and Fishburn permutations. More precisely, ascent sequences encode the so-called active sites of the Fishburn permutations. We define the bivincular pattern $$ W = \big(\,3412,\,[0,4]\!\times\!\{2\}\cup \{1\}\!\times\![0,4]\,\big) = \pattern{scale=1}{4} {1/3, 2/4, 3/1, 4/2} {1/0, 1/1, 1/2, 1/3,1/4, 0/2, 1/2, 2/2, 3/2, 4/2} $$ and call $\mathcal{W}_n=S_n(W)$ the set of \emph{weak Fishburn permutations}. For the benefit of readers not familiar with bivincular or mesh patterns we also give an elementary definition of the \emph{weak Fishburn pattern} $W$: a permutation $\pi\in S_n$ contains $W$ if there are four indices $1\leq i<j<k<\ell \leq n$ such that $j= i+1$, $\pi_i = \pi_{\ell} +1$ and $\pi_k < \pi_{\ell} < \pi_i < \pi_j$. In this case we also say that $\pi_i\pi_j\pi_k\pi_{\ell}$ is an \emph{occurrence} of $W$ in $\pi$. If there are no occurrences of $W$ in $\pi$, then we say that $\pi$ \emph{avoids} $W$.
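The elementary definition translates directly into code. The following Python sketch (for illustration; not part of the original development) checks containment of $W$ and confirms that the number of $W$-avoiding permutations of length $n\le 4$ is $1, 2, 6, 23$, in agreement with Table~\ref{allfourseqs}.

\begin{verbatim}
from itertools import permutations

# Check the elementary definition of the pattern W above.
def contains_W(p):
    n = len(p)
    for i in range(n - 1):
        j = i + 1                      # the condition j = i + 1
        for k in range(j + 1, n):
            for l in range(k + 1, n):
                if p[i] == p[l] + 1 and p[k] < p[l] < p[i] < p[j]:
                    return True
    return False

for n in range(1, 5):
    print(n, sum(1 for p in permutations(range(1, n + 1))
                 if not contains_W(p)))
# prints 1, 2, 6, 23, matching the number of weak ascent sequences
\end{verbatim}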
If $\pi_i\pi_j\pi_k\pi_{\ell}$ is an occurrence of $W$ then $\pi_i\pi_j\pi_{\ell}$ is an occurrence of $F$. In other words, every Fishburn permutation is a weak Fishburn permutation and we have $\mathcal{F}_n\subseteq\mathcal{W}_n$. We can construct permutations in $\mathcal{W}_n$ inductively. Let $\pi$ be a permutation in $\mathcal{W}_{n}$ with $n>0$, and suppose that $\tau$ is obtained from $\pi$ by deleting the entry $n$. Then $\tau \in \mathcal{W}_{n-1}$. To see why this must be the case, let $\tau_i \tau_{i+1} \tau_k \tau_{\ell}$ be an occurrence of $W$ in $\tau$ but not in $\pi$. This can only happen if $\pi_{i+1}=n$. However, this implies that $\pi_i \pi_{i+1} \pi_{k+1}\pi_{\ell +1}$ is an occurrence of $W$ in $\pi$. Given $\tau\in \mathcal{W}_{n-1}$, let us call the sites where the new maximal value $n$ can be inserted in $\tau$ so as to produce an element of $\mathcal{W}_{n}$ \emph{active sites}. The site before $\tau_1$ and the site after $\tau_{n-1}$ are always active. Whether the site between the entries $\tau_i$ and $\tau_{i+1}$ is active depends on whether $\tau_i\leq 2$, or on whether there is no occurrence $(\tau_i,t,\tau_i -1)$ in $\tau$ with $t<\tau_i - 1$. This latter (non-existence) condition is somewhat hard to absorb, so let us disentangle it as follows. The site between entries $\tau_i$ and $\tau_{i+1}$ is active if \begin{itemize} \item $\tau_i \leq 2$, or \item $\tau_i -1 $ is to the left of $\tau_i$, or \item $\tau_i -1$ is to the right of $\tau_i$ and there is no value $t<\tau_i -1$ between $\tau_i$ and $\tau_i -1$. \end{itemize} With this notion of active sites, let us label the active sites, from left to right, with $0,1,2,\dots$. We will now introduce a map $\Gamma$ from $\mathcal{W}_{n}$ to $\mathrm{WAsc}_n$, the set of weak ascent sequences of length $n$, which we then show (in Theorem~\ref{th:mainbiject}) to be a bijection. The mapping is defined recursively. For $n=1$, we define $\Gamma(1)=(0)$. Next, let $n \ge 2$ and suppose that $\pi \in \mathcal{W}_{n}$ is obtained by inserting $n$ into the active site labeled $i$ of $\tau \in \mathcal{W}_{n-1}$. The sequence associated with $\pi$ is then $\Gamma(\pi)=(x_1, \dots, x_{n-1}, i)$, where $(x_1, \dots,x_{n-1})=\Gamma(\tau)$. \begin{example} The permutation $\pi = 6 2 7 5 4 1 3 8 $ corresponds to the sequence $x=(0,0,2,1,1,0,1,5)$. It is obtained through the following recursive insertion of new maximal values into active sites. The subscripts indicate the labels of the active sites. \begin{align*} _0 1 _1 &\,\xrightarrow{x_2=0}\, {_0} 2 {_1} 1 {_2} \\ &\,\xrightarrow{x_3=2}\, {_0} 2 {_1} 1 {_2} 3 {_3} \\ &\,\xrightarrow{x_4=1}\, {_0} 2 {_1} 4 \hspace*{0.425em} 1 {_2} 3 {_3} \\ &\,\xrightarrow{x_5=1}\, {_0} 2 {_1} 5 {_2} 4 \hspace*{0.425em} 1 {_3} 3 {_4} \\ &\,\xrightarrow{x_6=0}\, {_0} 6 \hspace*{0.425em} 2 {_1} 5 {_2} 4 \hspace*{0.425em} 1 {_3} 3 {_4} \\ &\,\xrightarrow{x_7=1}\, {_0} 6 \hspace*{0.425em} 2 {_1} 7 {_2} 5 {_3} 4 \hspace*{0.425em} 1 {_4} 3 {_5} \\ &\,\xrightarrow{x_8=5}\, \hspace*{0.425em} 6 \hspace*{0.425em} 2 \hspace*{0.425em} 7 \hspace*{0.425em} 5 \hspace*{0.425em} 4 \hspace*{0.425em} 1 \hspace*{0.425em} 3 \hspace*{0.425em} 8. \end{align*} In our terminology, we thus have $\Gamma(6, 2, 7, 5, 4, 1, 3, 8) = (0,0,2,1,1,0,1,5)$. \end{example}
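For concreteness, here is a Python sketch (ours, not from the original development) of the insertion encoding: \texttt{active\_sites} implements the three-case description above, \texttt{gamma\_inverse} rebuilds a permutation from a weak ascent sequence, and the final assertion replays the example.

\begin{verbatim}
def active_sites(tau):
    # indices s in 0..len(tau) where a new maximum may be inserted
    n = len(tau)
    pos = {v: i for i, v in enumerate(tau)}
    sites = [0]                          # leftmost site always active
    for i, v in enumerate(tau[:-1]):
        if v <= 2 or pos[v - 1] < i or all(
                tau[m] >= v - 1 for m in range(i + 1, pos[v - 1])):
            sites.append(i + 1)
    sites.append(n)                      # rightmost site always active
    return sites

def gamma_inverse(x):
    # rebuild the weak Fishburn permutation encoded by the sequence x
    perm = [1]
    for m, label in enumerate(x[1:], start=2):
        s = active_sites(perm)[label]
        perm = perm[:s] + [m] + perm[s:]
    return perm

assert gamma_inverse((0, 0, 2, 1, 1, 0, 1, 5)) == [6, 2, 7, 5, 4, 1, 3, 8]
\end{verbatim}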
\begin{theorem}\label{th:mainbiject} \label{pwa} The map $\Gamma$ is a bijection from $\mathcal{W}_{n}$ to $\mathrm{WAsc}_n$. \end{theorem} \begin{proof} The integer sequence $\Gamma(\pi)$ encodes the construction of the $W$-avoiding permutation $\pi$, so the map $\Gamma$ is injective. In order to prove that $\Gamma$ is bijective we must show that the image $\Gamma(\mathcal{W}_n)$ is the set $\mathrm{WAsc}_n$. Let $\mathrm{numact}(\pi)$ be the number of active sites in the permutation $\pi$. The recursive description of the map $\Gamma$ tells us that $x=(x_1, \dots,x_n)\in \Gamma(\mathcal{W}_n)$ if and only if \begin{equation}\label{description} x'=(x_1, \dots, x_{n-1})\in \Gamma(\mathcal{W}_{n-1}) \quad\text{and}\quad 0\le x_n \le \mathrm{numact}\bigl(\Gamma^{-1}( x')\bigr)-1. \end{equation} Recall that the leftmost active site is labeled 0 and the rightmost active site is labeled $\mathrm{numact}(\pi)-1$. We will now prove by induction (on $n$) that for all $\pi \in \mathcal{W}_n$, with associated sequence $\Gamma(\pi)=x=(x_1, \dots, x_n)$, one has \begin{equation}\label{properties} \mathrm{numact}(\pi)=2+\mathrm{wasc}(x)\quad\text{and}\quad \mathrm{lastact}(\pi)= x_n, \end{equation} where $\mathrm{lastact}(\pi)$ is the label of the site located just before the largest entry of $\pi$. This will convert the above description~\eqref{description} of $ \Gamma(\mathcal{W}_n)$ into the definition of weak ascent sequences, thereby concluding the proof. Let us focus on the properties~\eqref{properties}. It is straightforward to see that they hold for $n=1$. Next, let us assume they hold for $n-1$, where $n\ge 2$. Let $\pi \in \mathcal{W}_n$ be obtained by inserting $n$ into the active site labeled $i$ of $\tau \in \mathcal{W}_{n-1}$. Then $\Gamma(\pi)= x=(x_1, \dots, x_{n-1},i)$ where $\Gamma(\tau)= x'=(x_1, \dots, x_{n-1})$. Every entry of the permutation $\pi$ that is smaller than $n$ is followed in $\pi$ by an active site if and only if it was followed in $\tau$ by an active site. The leftmost site in $\pi$ also remains active. The label of the active site preceding $n$ in $\pi$ is $i=x_n$, and this proves the second property. In order to determine $\mathrm{numact}(\pi)$, we must see whether the site following $n$ is active in $\pi$. There are three cases to consider. Before doing this, recall that, by the induction hypothesis, we have $\mathrm{numact}(\tau)=2+\mathrm{wasc}(x')$ and $\mathrm{lastact}(\tau)=x_{n-1}$. {\it Case 1.} If $0\leq i< \mathrm{lastact}(\tau)=x_{n-1}$ then $\mathrm{wasc}(x)=\mathrm{wasc}(x')$, and the entry $n$ in $\pi$ is to the left of $n-1$ with at least one element in between. This in-between element is smaller than $n-1$, so the site after $n$ in $\pi$ cannot be active, since inserting there would lead to the creation of a $W$-pattern. The number of active sites thus remains unchanged and $\mathrm{numact}(\pi) = \mathrm{numact}(\tau)=2+\mathrm{wasc}(x')=2+\mathrm{wasc}(x)$. {\it Case 2.} If $i=\mathrm{lastact}(\tau)=x_{n-1}$ then $\mathrm{wasc}(x)=1+\mathrm{wasc}(x')$, and the entry $n$ in $\pi$ is immediately to the left of $n-1$; furthermore, there are no elements between $n$ and $n-1$. The site that follows $n$ is therefore active, and $\mathrm{numact}(\pi ) = 1+\mathrm{numact}(\tau)=3+\mathrm{wasc}(x')=2+\mathrm{wasc}(x)$. {\it Case 3.} If $i>\mathrm{lastact}(\tau)=x_{n-1}$ then $\mathrm{wasc}(x)=1+\mathrm{wasc}(x')$, and the entry $n$ in $\pi$ is to the right of $n-1$. The site that follows $n$ is therefore active, and $\mathrm{numact}(\pi ) = 1+\mathrm{numact}(\tau)=3+\mathrm{wasc}(x')=2+\mathrm{wasc}(x)$.
\end{proof} \section{A class of upper-triangular binary matrices} \label{matricessection} Dukes and Parviainen~\cite{DP} showed that the set of upper-triangular integer matrices whose entries sum to $n$ and which contain neither zero rows nor zero columns is in one-to-one correspondence with ascent sequences. A property of that correspondence is that the number of ascents in an ascent sequence equals the dimension of the corresponding matrix, while the depth of the first non-zero entry in the rightmost column corresponds to one plus the final entry of the ascent sequence. In this section we will present a similar construction for weak ascent sequences. This correspondence differs from that of \cite{DP} in that the matrix entries are binary and rows of zeros are allowed. The reason for this is that we would like the dimension of a matrix to match the number of weak ascents in the corresponding weak ascent sequence. Further to this, and in keeping with the spirit of \cite{DP}, we also wish to preserve the second property: ``the depth of the first non-zero entry in the rightmost column corresponds to the final entry of the weak ascent sequence''. Let us first define the class of matrices we are interested in and then present the correspondence between these matrices and weak ascent sequences. The notation $\dim(A)$ refers to the dimension of the matrix $A$. \begin{definition}\label{matrixdefn} Let $\mathrm{WMat}_n$ be the set of upper triangular square 0/1-matrices $A$ that satisfy the following properties: \begin{enumerate} \item[(a)] There are $n$ 1s in $A$. \item[(b)] There is at least one 1 in every column of $A$. \item[(c)] For every pair of adjacent columns, the topmost 1 in the left column is weakly above the bottommost 1 in the right column. \end{enumerate} \end{definition} All of the matrices in $\mathrm{WMat}_1,\ldots,\mathrm{WMat}_4$ are shown in Table~\ref{allfourmatrices}.
\begin{table}[!ht]
$$
\def\arraystretch{1.3}
\begin{array}{c@{\quad}l}
\hline
n & \mathrm{WMat}_n\\
\hline\\[-2.5ex]
1 & \Mat{1}\\[1ex]
2 & \Mat{1 & 1\\ 0 & 0}\!,\, \Mat{1 & 0\\ 0 & 1}\\[1.5ex]
3 & \Mat{1&1\\ 0&1}\!,\, \Mat{1&1&1\\ 0&0&0\\ 0&0&0}\!,\, \Mat{1&1&0\\ 0&0&1\\ 0&0&0}\!,\, \Mat{1&1&0\\ 0&0&0\\ 0&0&1}\!,\, \Mat{1&0&0\\ 0&1&1\\ 0&0&0}\!,\, \Mat{1&0&0\\ 0&1&0\\ 0&0&1}\\[2.6ex]
4 & \Mat{1&1&1\\ 0&0&1\\ 0&0&0}\!,\, \Mat{1&1&1\\ 0&0&0\\ 0&0&1}\!,\, \Mat{1&1&0\\ 0&0&1\\ 0&0&1}\!,\, \Mat{1&1&1\\ 0&1&0\\ 0&0&0}\!,\, \Mat{1&1&0\\ 0&1&1\\ 0&0&0}\!,\, \Mat{1&1&0\\ 0&1&0\\ 0&0&1}\!,\, \Mat{1&0&1\\ 0&1&1\\ 0&0&0}\!,\, \Mat{1&0&1\\ 0&1&0\\ 0&0&1}\!,\, \Mat{1&0&0\\ 0&1&1\\ 0&0&1}\!,\\[1.9ex]
& \Mat{1&1&1&1\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&0}\!,\, \Mat{1&1&1&0\\ 0&0&0&1\\ 0&0&0&0\\ 0&0&0&0}\!,\, \Mat{1&1&1&0\\ 0&0&0&0\\ 0&0&0&1\\ 0&0&0&0}\!,\, \Mat{1&1&1&0\\ 0&0&0&0\\ 0&0&0&0\\ 0&0&0&1}\!,\, \Mat{1&1&0&0\\ 0&0&1&1\\ 0&0&0&0\\ 0&0&0&0}\!,\, \Mat{1&1&0&0\\ 0&0&1&0\\ 0&0&0&1\\ 0&0&0&0}\!,\, \Mat{1&1&0&0\\ 0&0&1&0\\ 0&0&0&0\\ 0&0&0&1}\!,\\[2.4ex]
& \Mat{1&1&0&0\\ 0&0&0&0\\ 0&0&1&1\\ 0&0&0&0}\!,\, \Mat{1&1&0&0\\ 0&0&0&0\\ 0&0&1&0\\ 0&0&0&1}\!,\, \Mat{1&0&0&0\\ 0&1&1&1\\ 0&0&0&0\\ 0&0&0&0}\!,\, \Mat{1&0&0&0\\ 0&1&1&0\\ 0&0&0&1\\ 0&0&0&0}\!,\, \Mat{1&0&0&0\\ 0&1&1&0\\ 0&0&0&0\\ 0&0&0&1}\!,\, \Mat{1&0&0&0\\ 0&1&0&0\\ 0&0&1&1\\ 0&0&0&0}\!,\, \Mat{1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1}\\[2.4ex]
\hline
\end{array}
$$
\caption{Matrices in our class of interest.\label{allfourmatrices}}
\end{table}

Given a matrix $A\in\mathrm{WMat}$, let $\mathrm{topone}(A)$ be the index such that $A_{\mathrm{topone}(A),\dim(A)}=1$ is the topmost 1 in the rightmost column of $A$. Such a value always exists since, by Definition~\ref{matrixdefn}(b), there is at least one 1 in every column of $A$. Let us define
$$
\mathrm{reduce}(A) = \bigl(B,\,\mathrm{topone}(A)-1\bigr)
$$
where $B$ is a copy of $A$ except that $B_{\mathrm{topone}(A),\dim(A)}=0$ (this entry was 1 in $A$), and if this results in $B$ having a final column of all zeros then we delete that column and the bottommost row so that $B$ has dimension 1 less than $A$.

\begin{lemma} If $A \in \mathrm{WMat}_n$, with $n\geq 2$, and $\mathrm{reduce}(A)=\bigl(B,\mathrm{topone}(A)-1\bigr)$, then $B\in\mathrm{WMat}_{n-1}$. \end{lemma}

\begin{proof} Suppose $n$ and $A$ are as stated and $\mathrm{reduce}(A) = (B,\mathrm{topone}(A)-1)$. Let us first observe that the number of 1s in $B$ is one less than the number of 1s in $A$, and is $n-1$. This shows property (a) of Definition~\ref{matrixdefn} is satisfied.

Since $A\in\mathrm{WMat}_n$ there is at least one 1 in every column of $A$; let us consider what happens in the reduction from $A$ to $B$. If there was a single 1 in the rightmost column of $A$, then it is removed along with that column and bottom row so that there is at least one 1 in every column of $B$. Alternatively, if there was more than one 1 in the rightmost column of $A$, then changing the 1 at position $(\mathrm{topone}(A),\dim(A))$ to 0 still leaves at least one 1 in that column. This shows property (b) of Definition~\ref{matrixdefn} is satisfied.

Showing that property (c) in Definition~\ref{matrixdefn} is preserved is a little more delicate. Notice that, in terms of our reduction, we need only consider property (c) for the rightmost two columns. If there was only one 1 in the rightmost column of $A$, then that column will not appear in $B$, so property (c) certainly holds true in this case.
If there is more than one 1 in the rightmost column of $A$, then removing the topmost of these does not change the depth of the bottommost 1 in that column, so property (c) will still hold true. In both cases, property (c) still holds. Therefore $B \in \mathrm{WMat}_{n-1}$.
\end{proof}

Next we will define a matrix insertion operation that is complementary to the removal operation $\mathrm{reduce}$.

\begin{definition}\label{defmatadd} Given a matrix $A \in \mathrm{WMat}_n$ and an integer $i\in [0,\dim(A)]$, let us define $\mathrm{expand}(A,i)$ as follows.
\begin{enumerate}
\item[(a)] If $i<\mathrm{topone}(A)-1$ then let $\mathrm{expand}(A,i)$ be the matrix $A$ with $A_{i+1,\dim(A)}$ changed from 0 to 1.
\item[(b)] If $i\geq \mathrm{topone}(A)-1$ then let $\mathrm{expand}(A,i)$ be the matrix $A$ with a new column of 0s added to the right and a new row of 0s appended to the bottom. Then change the 0 at position $(i+1,\dim(A)+1)$ in $\mathrm{expand}(A,i)$ to 1.
\end{enumerate}
\end{definition}

To illustrate Definition~\ref{defmatadd} let
$$ A = \Mat{ \vrule width 0pt height 6pt 1&1&0&1&0&0 \\ 0&0&1&1&1&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&0&1 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1}\!. $$
Then\\[-4.5ex]
$$\hspace{-3ex} \mathrm{expand}(A,2) = \Mat{ \vrule width 0pt height 6pt 1&1&0&1&0&0 \\ 0&0&1&1&1&0 \\ 0&0&1&0&0&\mathbf{1} \\ 0&0&0&0&0&1 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1}\\ \,\text{ and }\; \mathrm{expand}(A,4) = \Mat{ \vrule width 0pt height 8pt 1&1&0&1&0&0&\mathbf{0} \\ 0&0&1&1&1&0&\mathbf{0} \\ 0&0&1&0&0&0&\mathbf{0} \\ 0&0&0&0&0&1&\mathbf{0} \\ 0&0&0&0&0&0&\mathbf{1} \\ 0&0&0&0&0&1&\mathbf{0} \\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}} \vrule width 0pt height 6pt \!. $$

\begin{lemma}\label{lemma2} Let $n \geq 2$ and $B \in \mathrm{WMat}_{n-1}$. Let $i \in [0,\dim(B)]$ and define $A=\mathrm{expand}(B,i)$. Then $A \in \mathrm{WMat}_{n}$ and $\mathrm{topone}(A)=i+1$. \end{lemma}

\begin{proof} Let $n$, $i$, and $B$ be as stated in the lemma. In Definition~\ref{defmatadd}, the two operations (a) and (b) increase the number of 1s in the matrix by 1, so the number of 1s in the matrix $A=\mathrm{expand}(B,i)$ will be $n$. This means matrix $A$ satisfies Definition~\ref{matrixdefn}(a). Similarly, if addition rule (a) is used then the number of 1s in each column of $A$ is at least as large as in $B$, so $A$ satisfies Definition~\ref{matrixdefn}(b). If addition rule (b) is used then the new column that appears has precisely one 1, so again $A$ satisfies Definition~\ref{matrixdefn}(b).

Let us now consider the cases $i<\mathrm{topone}(B)-1$ and $i\ge \mathrm{topone}(B)-1$ separately. If $i<\mathrm{topone}(B)-1$ then matrix $A$ is created by inserting a 1 into position $(i+1,\dim(B))$ of $B$, which is above the topmost 1 in that column. Consequently, in the new matrix $A$ one has $\mathrm{topone}(A) = i+1$. Furthermore, the positions of the topmost 1 in the second to last column and the bottommost 1 in the final column remain unchanged, so $A$ satisfies Definition~\ref{matrixdefn}(c).

If $i\ge \mathrm{topone}(B)-1$ then $A$ is created from $B$ by adding a new column and row, and inserting a 1 at position $(i+1,\dim(B)+1)$. Notice that the topmost 1 in column $\dim(B)$ is at position $(\mathrm{topone}(B),\dim(B))$. The bottommost 1 in the final column of $A$ is now at position $(i+1,\dim(B)+1)$. Since $\mathrm{topone}(B) -1 \leq i$ we have $\mathrm{topone}(B) \leq i+1$ and again $A$ satisfies Definition~\ref{matrixdefn}(c). \end{proof}

\begin{lemma} Let $B \in \mathrm{WMat}_n$ and let $i \in [0,\dim(B)]$.
Then
$$\mathrm{reduce}\bigl(\mathrm{expand}(B,i)\bigr) = (B,i) $$
and, if $n\geq 2$,
$$ \mathrm{expand}\bigl(\mathrm{reduce}(B)\bigr) = B. $$
\end{lemma}

\begin{proof} Let $A=\mathrm{expand}(B,i)$. From Lemma~\ref{lemma2} we have $\mathrm{topone}(A)=i+1$, and the removal operation when applied to $A$ will yield $(C,i)$ for some matrix $C$. We need to show $B=C$ for the two different cases of Definition~\ref{defmatadd}. Suppose that $i<\mathrm{topone}(B)-1$. Then $A$ is a copy of $B$ with a new 1 at position $(i+1,\dim(B))$, which becomes the topmost 1 in that column. The reduction operation applied to $A$ removes that topmost 1 in the rightmost column and the resulting matrix is $C=B$. A similar argument holds for the case $i \ge \mathrm{topone}(B)-1$. This establishes the first part of our lemma.

Suppose $B \in \mathrm{WMat}_n$ is a matrix that has only one $1$ entry in the last column. Then $\mathrm{reduce}(B)= (C, i)$, where $C$ is the matrix that we obtain by deleting the rightmost column and bottommost row of $B$, and $i = \mathrm{topone}(B)-1$. By property (c) of Definition~\ref{matrixdefn}, $\mathrm{topone}(C)\leq \mathrm{topone}(B)$. So $i\geq \mathrm{topone}(C)-1$, and by Definition~\ref{defmatadd}(b) we have that the matrix $A=\mathrm{expand}(C,i)$ is the matrix that we obtain by appending a column with a single $1$ in row $i+1$ to the right and an all-zeros row to the bottom of $C$. Hence $A=B$.

Suppose now that $B \in \mathrm{WMat}_n$ is a matrix with more than one $1$ in the rightmost column. Then $\mathrm{reduce}(B)= (C, i)$, where $C$ is the matrix that we obtain by changing the topmost $1$ in the rightmost column to $0$, and $i = \mathrm{topone}(B)-1$. Note that due to this we have $\mathrm{topone}(C)>\mathrm{topone}(B)$ and $i<\mathrm{topone}(C)-1$. By Definition~\ref{defmatadd}(a), $A=\mathrm{expand}(C,i)$ is the matrix that we obtain by changing the $0$ in the $i+1$th row of the rightmost column of $C$ to $1$, hence we have $A=B$. \end{proof}

Let us now define a mapping $\Omega$ from $\mathrm{WMat}_n$ to integer sequences of length $n$.

\begin{definition} For $n=1$ let $\Omega(\Mat{1}) = (0)$. Now let $n\geq 2$ and suppose that the removal operation, when applied to $A\in \mathrm{WMat}_n$, gives $\mathrm{reduce}(A)=(B,i)$. Then the sequence associated with $A$ is $\Omega(A) = (x_1,\ldots,x_{n-1},i)$ where $(x_1,\ldots,x_{n-1}) = \Omega(B)$. \end{definition}

\begin{theorem} The mapping $\Omega: \mathrm{WMat}_n\to\mathrm{WAsc}_n$ is a bijection. \end{theorem}

\begin{proof} Since the sequence $\Omega(A)$ encodes the construction of the matrix $A$, the mapping $\Omega$ is injective. We have to prove that the image of $\mathrm{WMat}_n$ is the set $\mathrm{WAsc}_n$. By definition, $x=(x_1,\ldots,x_n) \in \Omega(\mathrm{WMat}_n)$ if and only if
\begin{align}
x'=(x_1,\ldots,x_{n-1}) \in \Omega(\mathrm{WMat}_{n-1}) \quad\mbox{and}\quad x_n \in [0,\dim(\Omega^{-1}(x'))]. \label{condone}
\end{align}
We will prove by induction on $n$ that for all $A \in \mathrm{WMat}_n$, with associated sequence $\Omega(A) = x = (x_1,\ldots,x_n)$, one has
\begin{align}
\dim(A) = \mathrm{wasc}(x) \quad\mbox{and}\quad \mathrm{topone}(A) = x_n+1. \label{condtwo}
\end{align}
This will convert the description \eqref{condone} above into the definition of weak ascent sequences, thus concluding the proof. Let us examine the two statements of \eqref{condtwo} more closely. They hold for $n=1$. Assume they hold for $n-1$ with $n\geq 2$, and let $A=\mathrm{expand}(B,i)$ for $B \in \mathrm{WMat}_{n-1}$.
If $\Omega(B) = (x_1,\ldots,x_{n-1})$ then $\Omega(A) = (x_1,\ldots,x_{n-1},i)$. Lemma~\ref{lemma2} gives us that $\mathrm{topone}(A)=i+1$ and it follows that
$$\dim(A) = \begin{cases} \dim(B)=\mathrm{wasc}(x') = \mathrm{wasc}(x) & \mbox{ if }i<x_{n-1}\\ \dim(B)+1 = \mathrm{wasc}(x')+1=\mathrm{wasc}(x) & \mbox{ if }i\ge x_{n-1}. \end{cases} $$
The result follows from this. \end{proof}

\begin{example}\label{starone} Let us construct the matrix $A$ that corresponds to the weak ascent sequence $x=(0,0,2,1,1,0,1,5) \in \mathrm{WAsc}_8$; that is, $\Omega(A)=x$. To begin, we have $\Omega(\Mat{1})=(0)$. From this,
$$ \begin{array}{cccccccccl} \Mat{1} & \xrightarrow{x_2=0} & \Mat{1&1 \\ 0&0} & \xrightarrow{x_3=2} & \Mat{1&1&0 \\ 0&0&0 \\ 0&0&1} & \xrightarrow{x_4=1} & \Mat{1&1&0 \\ 0&0&1 \\ 0&0&1} & \xrightarrow{x_5=1} & \Mat{1&1&0&0 \\ 0&0&1&1 \\ 0&0&1&0 \\ 0&0&0&0} &\\[1em] &&&& & \xrightarrow{x_6=0} & \Mat{1&1&0&1 \\ 0&0&1&1 \\ 0&0&1&0 \\ 0&0&0&0} & \xrightarrow{x_7=1} & \Mat{1&1&0&1&0 \\ 0&0&1&1&1 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 }& \\[1.5em] &&&&&&& \xrightarrow{x_8=5} & \Mat{ \vrule width 0pt height 6pt 1&1&0&1&0&0 \\ 0&0&1&1&1&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1 \vrule width 0pt height 6pt }& \!\!=\; A \end{array} $$
\end{example}

It is straightforward to see how several simple statistics are translated through the bijection $\Omega$.

\begin{proposition} Let $w=(w_1,\ldots,w_n) \in \mathrm{WAsc}_n$ and suppose that $M \in \mathrm{WMat}_n$ is such that $\Omega(M)=w$. Then
\begin{itemize}
\item the number of zeros in $w$ equals the sum of the entries in the top row of $M$,
\item the number of weak ascents in $w$ equals the dimension of $M$,
\item the number of right-to-left maxima of the sequence $w$ equals the sum of the entries in the rightmost column of $M$, and
\item $w_n$, the last entry of $w$, is equal to $\mathrm{topone}(M)-1$.
\end{itemize}
\end{proposition}

\subsection{The image of the set of ascent sequences}
In this subsection we will characterise those matrices that correspond, via our bijection $\Omega$, to ascent sequences. First, we need some definitions. Let a $1$ entry in a $0/1$-matrix be called a \emph{weak} entry if (a) there are only $0$s below the $1$ entry in its column, and (b) there is a $1$ immediately to the left of it in the same row such that there are only $0$s above this $1$. More precisely,

\begin{definition}\label{def:weakentry} Given a matrix $A\in \mathrm{WMat}_n$ the entry $A_{k,\ell}=1$ is weak if
\begin{enumerate}
\item[(a)] $A_{j,\ell}=0$ for all $j>k$, and
\item[(b)] $A_{k,\ell-1}=1$ and $A_{i,\ell-1}=0$ for all $i<k$.
\end{enumerate}
\end{definition}

\begin{example} In matrix $C$ the entry at position $(2,4)$ is the only weak entry, while in the matrix $D$ the entries at positions $(1,2)$ and $(2,5)$ are the weak entries.
$$ C= \Mat{ \vrule width 0pt height 6pt 1&1&{\bf{0}}&1&0&0 \\ 0&1&{\bf{1}}&{\bf{1}}& 1&0 \\ 0&0&1&{\bf{0}}&0&0 \\ 0&0&0&{\bf{0}}&0&0 \\ 0&0&0&{\bf{0}}&0&0 \\ 0&0&0&{\bf{0}}&0&1 \vrule width 0pt height 6pt } \quad \quad D = \Mat{ \vrule width 0pt height 6pt {\bf{1}}&{\bf{1}}&0&{\bf{0}}&1&1 \\ 0&{\bf{0}}&1&{\bf{1}}&{\bf{1}}&0 \\ 0&{\bf{0}}&0&1&{\bf{0}}&1 \\ 0&{\bf{0}}&0&0&{\bf{0}}&0 \\ 0&{\bf{0}}&0&0&{\bf{0}}&0 \\ 0&{\bf{0}}&0&0&{\bf{0}}&0 \vrule width 0pt height 6pt}$$
\end{example}

Next we introduce the merge operation, which acts on an $n\times n$ matrix and results in an $(n-1)\times (n-1)$ matrix.

\begin{definition} \label{def:weakmerge} Given a matrix $A$ and an entry position $(k,\ell)$, we define $A'= \mbox{merge}(A, (k,\ell))$ as follows.
\begin{enumerate}
\item[(a)] Let $A'_{i,\ell-1} = A_{i,\ell-1}+A_{i,\ell}$ for $i<k$ (we add the entries in the two columns).
\item[(b)] Let $A'_{k,\ell-1}=1$.
\item[(c)] Delete the final row of $A'$. Delete the entries $A'_{i,\ell}$ for $i\leq \ell$ and $A'_{j,j}$ for $j>\ell$.
\end{enumerate}
\end{definition}

\begin{example}
$$C= \Mat{ \vrule width 0pt height 6pt 1&1&0&1&0&0 \\ 0&1&1&{\bf{1}}& 1&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1 \vrule width 0pt height 6pt }\quad\quad \implies\quad\quad \mbox{merge}(C, (2,4)) = \Mat{ \vrule width 0pt height 6pt 1&1&1&0&0 \\ 0&1&1& 1&0 \\ 0&0&1&0&0 \\ 0&0&0&0&0 \\ 0&0&0&0&0 \vrule width 0pt height 6pt }$$
\end{example}

We say that a matrix $A$ is \emph{mergeable at an entry} $A_{k,\ell}$ if the entries that we delete during the merging operation $A'= \mbox{merge}(A, (k,\ell))$ are all zeros.

\begin{example} Matrix $C$ is not mergeable at $C_{2,4}$, because of the $1$ in the bottom right corner, while $D$ is mergeable at both weak entries.
$$C= \Mat{ \vrule width 0pt height 6pt 1&1&0&1&0&0 \\ 0&1&1&{\bf{1}}& 1&0 \\ 0&0&1&{\bf{0}}&0&0 \\ 0&0&0&{\bf{0}}&0&0 \\ 0&0&0&0&{\bf{0}}&0 \\ 0&0&0&0&0&{\bf{1}} \vrule width 0pt height 6pt} \quad \quad D = \Mat{ \vrule width 0pt height 6pt 1&{\bf{1}}&0&0&1&1 \\ 0&{\bf{0}}&1&1&1&0 \\ 0&0&{\bf{0}}&1&0&1 \\ 0&0&0&{\bf{0}}&0&0 \\ 0&0&0&0&{\bf{0}}&0 \\ 0&0&0&0&0&{\bf{0}} \vrule width 0pt height 6pt} \quad\quad D = \Mat{ \vrule width 0pt height 6pt 1&1&0&0&1&1 \\ 0&0&1&1&{\bf{1}}&0 \\ 0&0&0&1&{\bf{0}}&1 \\ 0&0&0&0&{\bf{0}}&0 \\ 0&0&0&0&{\bf{0}}&0 \\ 0&0&0&0&0&{\bf{0}} \vrule width 0pt height 6pt}$$
\end{example}

\begin{lemma} Let $A\in \mathrm{WMat}_n$ and let $A_{k,\ell}$ be a weak entry. If $A$ is mergeable at $A_{k,\ell}$, then $A' = \mbox{merge}(A, (k,\ell))\in \mathrm{WMat}_{n-1}$. \end{lemma}

\begin{proof} We obtain $A'$ from the matrix $A$ by deleting only zeros and keeping all the $1$ entries (via the column addition of columns $\ell-1$ and $\ell$) except at the entry $A'_{k,\ell-1}$. The entry $A'_{k,\ell-1}$ is defined to be $1$. So we \emph{lose} only one $1$ entry, and the number of 1s in $A'$ is $n-1$, which verifies Definition~\ref{matrixdefn}(a). As there is at least one $1$ in every column of $A$, there is also at least one $1$ in every column of $A'$ (because we deleted only zero entries in the merging operation), and this implies Definition~\ref{matrixdefn}(b).

Property (c) of Definition~\ref{matrixdefn} only needs to be checked for the pair of columns $\ell-1$ and $\ell$ of $A'$, since the merge operation has, essentially, only changed these columns. We applied the merge operation at a weak entry, and have added the $\ell$th column to the $\ell-1$th column, so the topmost $1$ in the $\ell-1$th column of $A'$ is the same as the topmost $1$ in the $\ell$th column of $A$. The entries in the $\ell$th column of $A$ above the diagonal are added to the left, and then deleted. Hence the $\ell$th column of $A'$ is the same as the $\ell+1$th column of $A$. Since $A\in \mathrm{WMat}_n$, the bottommost $1$ in the $\ell+1$th column is weakly below the topmost $1$ in the $\ell$th column. These imply that in the matrix $A'$ the topmost $1$ in the $\ell-1$th column is weakly above the bottommost $1$ in the $\ell$th column, and this implies Definition~\ref{matrixdefn}(c). Since the three properties of Definition~\ref{matrixdefn} hold for $A'$ we have $A'\in \mathrm{WMat}_{n-1}$.
\end{proof}

Given a matrix $A\in \mathrm{WMat}_n$, let $e_1,\ldots, e_r$ be the sequence of weak entries in the order of their occurrence in the columns from left to right. Further, let $A^{(0)}=A$ and let $A^{(1)}$, $A^{(2)}$, $\ldots$, $A^{(r)}$ denote the matrices that we obtain by the operations $A^{(i)}=\mbox{merge}(A^{(i-1)}, e_i)$. If each $A^{(i-1)}$ is mergeable at $e_i$, we say that the matrix $A$ is \emph{mergeable}.

\begin{example} The sequence of operations is shown below for the mergeable matrix $D$.
$$ D = \Mat{ \vrule width 0pt height 6pt 1&{\bf{1}}&0&0&1&1 \\ 0&{\bf{0}}&1&1&1&0 \\ 0&0&{\bf{0}}&1&0&1 \\ 0&0&0&{\bf{0}}&0&0 \\ 0&0&0&0&{\bf{0}}&0 \\ 0&0&0&0&0&{\bf{0}} \vrule width 0pt height 6pt} \rightarrow D^{(1)} = \Mat{ \vrule width 0pt height 6pt 1&0&0&1&1 \\ 0&1&1&{\bf{1}}&0 \\ 0&0&1&{\bf{0}}&1 \\ 0&0&0&{\bf{0}}&0 \\ 0&0&0&0&{\bf{0}} \vrule width 0pt height 6pt} \rightarrow D^{(2)} = \Mat{ 1&0&1&1 \\ 0&1&1&0 \\ 0&0&1&1 \\ 0&0&0&0 } $$
\end{example}

\begin{theorem} $\Omega^{-1}(\mathrm{Asc}_n)$ is the set of mergeable matrices of $\mathrm{WMat}_n$. \end{theorem}

\begin{proof} Given a word $w=w_1\cdots w_n$, let an entry $w_i$ with $w_{i-1}=w_i$ be called a \emph{plateau}. Under the map $\Omega^{-1}$, a plateau in a weak ascent sequence $w$ corresponds to a weak entry in the matrix $\Omega^{-1}(w)$. Note that in an ascent sequence the appearance of a plateau $w_i$ restricts the possible values of $w_j$ for $j\geq i+1$, though it does not influence the possible values in a weak ascent sequence. Suppose $w_{n-1}$ is a plateau; then the corresponding restriction on matrices (under the bijection $\Omega$) is exactly the property that the matrix is mergeable at the $n-1$th entry (the last $1$ is not in the bottom right corner). Since mergeability of a matrix amounts to a successive application of this property, the theorem follows. \end{proof}

\section{A class of factorial posets}
\label{posetssection}
In this section we will define a mapping from the set of matrices $\mathrm{WMat}$ to a set of labeled posets and prove that this mapping is a bijection. First let us recall the definition of a factorial poset from~\cite{factorialposets}. To begin, a poset $P$ on the elements $\{1,\ldots,n\}$ is \emph{naturally labeled} if $i <_P j$ implies $i<j$. The poset whose Hasse diagram is depicted in Figure~\ref{figpos} is a naturally labeled poset.

\begin{definition}[\cite{factorialposets}] A naturally labeled poset $P$ on $[1,n]$ such that, for all $i,j,k \in [1,n]$, we have
$$i<j \mbox{ and } j <_P k \;\implies\; i <_P k $$
is called a \emph{factorial poset}.
\end{definition}

Factorial posets are (2+2)-free. This fact, and further properties of factorial posets, can be found in \cite{factorialposets}.

\begin{definition}[The mapping $\Psi$]\label{posdef} Let $A \in \mathrm{WMat}_n$. Form a matrix $B$ as follows. Make a copy of $A$. Beginning with the leftmost column, and proceeding within each column from bottom to top, replace the 1s that appear with the labels $1,2,\ldots,n$, in order. Further, define a partial order $(P,<)$ on $[1,n]$ as follows: $i <_P j$ if the index of the column that contains $i$ is strictly less than the index of the row that contains $j$. Let
$$P=\Psi(A) $$
be the resulting poset.
\end{definition}

Diagrammatically the relation in Definition~\ref{posdef} is equivalent to $i$ being north-west of $j$ in the matrix and the ``lower hook'' of $i$ and $j$ being strictly beneath the diagonal:\smallskip
\myonematrix
Note that, for any $s$, the set of entries contained in the first $s$ columns is the complete set $\{1,2,\ldots, s_k\}$ for some $s_k$.

\begin{example} Consider the matrix $A$ from Example~\ref{starone}. Form matrix $B$ by relabeling the 1s in the matrix according to this rule.
$$ \begin{array}{ccc} A = \begin{bmatrix} 1&1&0&1&0&0 \\ 0&0&1&1&1&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1 \end{bmatrix} &\mapsto & B = \begin{bmatrix} 1&2&0&6&0&0 \\ 0&0&4&5&7&0 \\ 0&0&3&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&8 \end{bmatrix} \end{array} $$
This gives the poset $(P,<)= \Psi(A)$ with the following relations:
\begin{itemize}
\item $1<_P 3,4,5,7,8$
\item $2 <_P 3,8$
\item $3,4,5,6,7 <_P 8$
\end{itemize}
The Hasse diagram of this poset is illustrated in Figure~\ref{figpos}.
\begin{figure} $$ \begin{tikzpicture}[xscale=0.7, yscale=0.9] \tikzstyle{every node} = [font=\footnotesize]; \tikzstyle{disc} = [ circle,fill=black,draw=black, minimum size=4pt, inner sep=0pt]; \path node [disc] (1) at (1, 1) {} node [disc] (2) at (3, 1) {} node [disc] (6) at (7, 1) {} node [disc] (3) at (0, 3) {} node [disc] (4) at (2, 3) {} node [disc] (5) at (4, 3) {} node [disc] (7) at (6, 3) {} node [disc] (8) at (6.5, 5) {}; \draw (1) node[left=2pt] {$1$} -- (3) node[left=2pt] {$3$} (1) -- (4) node[left=2pt] {$4$} (1) -- (5) node[above left] {$5$} (1) -- (7) node[above left] {$7$} (6) node[right=2pt] {$6$} -- (8) node[right=2pt] {$8$} (2) node[right=2pt] {$2$} -- (3) {} (3) -- (8) (4) -- (8) (5) -- (8) (7) -- (8) ; \end{tikzpicture} $$ \caption{A weakly $(3+1)$-free factorial poset.\label{figpos}} \end{figure}
\end{example}

The mapping $\Psi$ is a mapping from $\mathrm{WMat}_n$ to a set of labeled (2+2)-free posets on the set $[1,n]$, which we will now define. Let $P$ be a factorial poset on $[1,n]$. We say that $P$ contains a \emph{special 3+1} if there exist four distinct elements $i<j<j+1<k$ such that the poset $P$ restricted to $\{i,j,j+1,k\}$ induces the 3+1 poset with $i<_P j <_P k$:
$$ \begin{tikzpicture}[scale=0.7] \tikzstyle{disc} = [ circle,fill=black,draw=black, minimum size=4pt, inner sep=0pt]; \path node [disc] (i) at (0, 1) {} node [disc] (j) at (0, 2) {} node [disc] (k) at (0, 3) {} node [disc] (j+1) at (1, 1) {}; \draw (i) node[left=2pt] {$i$} -- (j) node[left=2pt] {$j$} (j) -- (k) node[left=2pt] {$k$} (j+1) node[right=2pt] {$j+1$}; \end{tikzpicture} $$
If $P$ does not contain a special 3+1 we say that $P$ is \emph{weakly $(3+1)$-free}. Let $\mathrm{WPoset}_n$ be the set of weakly $(3+1)$-free factorial posets on $[1,n]$.

\begin{theorem} Let $\Psi$ be as in Definition~\ref{posdef}. If $A \in \mathrm{WMat}_n$, then the poset $P=\Psi(A)$ is factorial and weakly $(3+1)$-free. That is, $P\in\mathrm{WPoset}_n$, so that
$$\Psi:\mathrm{WMat}_n \to \mathrm{WPoset}_n. $$
\end{theorem}

\begin{proof} Let $A \in \mathrm{WMat}_n$ and $P=\Psi(A)$. Given $i \in [1,n]$, we define the strict downset $D(i) = \{ j \in [1,n]: j <_P i\}$ of $i$. A defining characteristic of a (2+2)-free poset is that the collection $\{D(i):i\in[1,n]\}$ of strict downsets can be linearly ordered by inclusion. Similarly, a defining characteristic of a factorial poset is that each strict downset is of the form $[1,k]$ for some $k<n$.
From Definition~\ref{posdef}, it is clear why this must be the case for $P$: In constructing $P$ from the matrix $A$, the intermediate matrix $B$ contains all entries in $[1,n]$ exactly once. If $j$ appears in row $t$ of $B$, then the strict downset $D(j)$ will consist of all $i$'s that appear in columns 1 through $t-1$ (inclusive). All elements that appear in row $t$ of $B$ have the same strict downset. Furthermore, since the entries $1,2,\dots,n$ appear in $B$ in increasing order as the columns are read from left to right, the strict downset of every element must be $[1,k]$ for some $k<n$. Thus $P$ is factorial.

Let us now suppose that $P$ contains an induced subposet on the four elements $i<j<j+1<k$ that forms a special 3+1. In particular, $i <_P j <_P k$. Consider the matrix entries in $B$ that correspond to $i$, $j$, $j+1$ and $k$. Suppose that $i$ is at position $(i_1,i_2)$ in $B$, and that $j$ and $k$ are at positions $(j_1,j_2)$ and $(k_1,k_2)$, respectively. The hooks formed from $(i,j)$ and $(j,k)$ must be beneath the diagonal, so we must have $i_1\leq i_2< j_1 \leq j_2 < k_1 \leq k_2$. Consider now $\ell = j+1$, which is at position $(\ell_1,\ell_2)$ in $B$. This element must appear either (a) in the same column as $j$ and strictly above it, or (b) in the next column and in a row weakly below $j$. For case (a) this means $\ell_1 \leq \ell_2 = j_2$, from which we find that $\ell <_P k$, but this cannot happen since $\ell$ and $k$ are incomparable. For case (b) this means $j_1 \leq \ell_1 \leq \ell_2$, from which we find that $i<_P\ell$, but this cannot happen since $\ell$ and $i$ are incomparable. Therefore $P$ cannot contain a special 3+1. In other words, $P$ is weakly (3+1)-free and $\Psi:\mathrm{WMat}_n \to \mathrm{WPoset}_n$. \end{proof}

We next define a function $\Phi$ that maps posets in $\mathrm{WPoset}_n$ to matrices. We shall show that $\Phi$ is the inverse of $\Psi$.

\begin{definition}\label{bfromc} Given $P \in \mathrm{WPoset}_n$, suppose there are $k$ different strict downsets of the elements of $P$, and that these are $D_0 = \emptyset$, $D_1$, $\ldots$, $D_{k-1}$. By convention we also let $D_k = [1,n]$. Suppose that $L_i$ is the set of elements $p \in P$ such that $D(p) = D_i$; these are called \emph{level sets}. Let $C$ be the matrix with $C_{i,j} = L_{i-1} \cap (D_{j}\backslash D_{j-1})$ for all $i,j \in [1,k]$; see \cite{compositionmatrices} for details. Start with $B$ as a copy of $C$ and then repeat the following steps until every entry in $B$ is at most a singleton:
\begin{enumerate}
\item[A1.] Choose the first column, $i$ say, of $B$ that contains a set of size $>1$.
\item[A2.] With respect to the usual order on ${\mathbb N}$, let $\ell$ be the smallest entry in column $i$.
\item[A3.] Introduce a new empty column between columns $i-1$ and $i$ so that the old column $i$ is now column $i+1$.
\item[A4.] Move $\ell$ one column to the left (to the current column $i$) and set $j=1$.
\item[A5.] If $\ell+j$ is strictly above $\ell+j-1$, then move it one column to the left, increase $j$ by 1, and repeat A5. Otherwise, go to A6. (The outcome of step A5 will be that the non-empty singleton sets in column $i$, from bottom to top, are $\ell, \ell+1,\ldots, \ell+t$ for some $t \geq 0$.)
\item[A6.] Introduce a new row of empty sets between rows $i$ and $i+1$ of $B$. Matrix $B$ will have increased in dimension by 1.
\end{enumerate}
Finally, let $\Phi(P)$ be the result of replacing the singletons in $B$ with ones and the empty sets in $B$ with zeros.
\end{definition}

\begin{example} Let $P$ be the poset in Figure~\ref{figpos}. The strict downsets of $P$ are
$$D_0=\emptyset, D_1=\{1\}, D_2=\{1,2\}\text{ and } D_3 = \{1,2,3,4,5,6,7\}. $$
The level sets of $P$ are
$$L_0=\{1,2,6\}, L_1=\{4,5,7\}, L_2=\{3\}\text{ and } L_3=\{8\}. $$
This gives the matrix
$$C= \begin{bmatrix} \{1\} & \{2\} & \{6\} & \emptyset \\ & \emptyset & \{4,5,7\} & \emptyset \\ & & \{3\} & \emptyset \\ & & & \{8\} \end{bmatrix}. $$
Before continuing we would like to point out that the matrix obtained from $C$ by replacing each set $C_{i,j}$ by its cardinality $|C_{i,j}|$ corresponds, via \cite[\S 3.1]{compositionmatrices}, to the unlabeled version of the poset, which is (2+2)-free.

Continuing, by applying A1 we find that $i=3$ is the first column containing a set of size $>1$. The smallest number in this column is $3$. Furthermore, $4$ is above $3$ so we move $4$ to the 3rd column, but $5$ is in the same row as $4$, so we stop. The largest contiguous sequence starting from $3$ as we go up from it in that column (skipping rows if we wish) is therefore $\{3,4\}$. We next go to step A6 and insert a new row of empty sets below the row of $3$. Hence, the outcome of A6 will be
$$B= \begin{bmatrix} \{1\} & \{2\} & \emptyset & \{6\} & \emptyset \\ & \emptyset & \{4\} & \{5,7\} & \emptyset \\ & & \{3\} & \emptyset &\emptyset \\ & & & \emptyset &\emptyset \\ & & & & \{8\} \end{bmatrix}. $$
Since there is still a non-singleton set in this matrix we start over with A1 and find that $i=4$ is the first column containing a set of size $>1$. Going through the process, the outcome of A6 is
$$B= \begin{bmatrix} \{1\} & \{2\} & \emptyset & \{6\} & \emptyset & \emptyset \\ & \emptyset & \{4\} & \{5\} & \{7\} & \emptyset \\ & & \{3\} & \emptyset & \emptyset &\emptyset \\ & & & \emptyset &\emptyset & \emptyset \\ & & & &\emptyset & \emptyset \\ & & & & & \{8\} \end{bmatrix}. $$
Since there are no sets of size at least two in this matrix, we replace singletons with 1s and empty sets with 0s to find
$$\Phi(P) = \begin{bmatrix} 1&1&0&1&0&0 \\ 0&0&1&1&1&0 \\ 0&0&1&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&0 \\ 0&0&0&0&0&1 \end{bmatrix}. $$
\end{example}

\begin{theorem} \label{welldefinedptm} Let $\Phi$ be the mapping from Definition~\ref{bfromc}. If $P$ is a poset in $\mathrm{WPoset}_n$, then the matrix $\Phi(P)$ is in $\mathrm{WMat}_n$ so that
$$\Phi: \mathrm{WPoset}_n \to \mathrm{WMat}_n. $$
\end{theorem}

\begin{proof} Let $P \in\mathrm{WPoset}_n$. Since $P$ is a factorial poset, we know that the strict downsets of elements all have the contiguous form $[1,\ell]$ for some $\ell\in [0,n-1]$. Let us suppose that there are $k$ different strict downsets of the elements of $P$, and that these are $D_0 = \emptyset$, $D_1$, $\ldots$, $D_{k-1}$. Let us further suppose that $L_i$ is the set of elements $p \in P$ such that $D(p) = D_i$, the elements at {\it{level $i$}} of the poset. Next let $C$ be the matrix with
$$C_{i,j} = L_{i-1} \cap (D_{j}\backslash D_{j-1}) \text{ for all }i,j \in [1,k].$$
The matrix $C$ is upper triangular and is in fact a partition matrix. A partition matrix is an upper triangular matrix whose entries form a set partition of an underlying set, with the additional property that for all $1\leq a<b\leq n$, the column containing $b$ cannot be to the left of that containing $a$. See \cite{partition_and_composition_matrices} for further details on partition matrices.
Moreover, since $P$ is weakly (3+1)-free, the structure of the matrix $C$ is further restricted in the following sense:
\begin{description}
\item[Property 1] The matrix $C$ does not contain four entries $i$, $j$, $j+1$ and $k$ such that the hooks for the pairs $(i,j)$ and $(j,k)$ are both below the main diagonal, whereas the hooks between $j+1$ and each of $i,j,k$ are on or above the main diagonal.
\end{description}
As $C$ is a partition matrix, the top left entry must be non-empty and must contain the entry $1$. Similarly, the bottom right entry is non-empty and contains some entry, $v$ say. Note that it is not necessarily the largest entry (with respect to the order on ${\mathbb N}$). Consider now the entry $j$ in Property~1 above, and how the condition translates with respect to these ``extremal'' matrix entries $1$ and $v$ (in other words we are considering $i=1$ and $k=v$). Property~1 is equivalent to
\begin{description}
\item[Property 1'] The matrix $C$ does not contain an entry $j$ such that (a) $j$ is not in the top row and not in the rightmost column, and (b) $j+1$ is in the top right position of the matrix.
\end{description}
Again, since $C$ is a partition matrix, for all $w \in [1,n-1]$ the element $w+1$ must be in the same column as $w$, or in the one to its right. This observation, combined with Property 1', allows us to write the following equivalent statement:
\begin{description}
\item[Property 1''] If $C$ contains an entry $j+1$ as a top right entry, then $j$ must either be in the top row (in that same position or immediately to its left) or in the rightmost column of $C$ beneath $j+1$.
\end{description}
Next let us consider the matrix $B$ that is constructed from $C$ in Definition~\ref{bfromc}. When a column of $B$ is split into two (as per A3), a new empty row is added in step A6, which preserves the upper-triangular property. Also, there can be no empty columns in $B$ since the dissection step A4 ensures a set of size at least 2 is split into a singleton set (that will appear in the new left column) and the set difference (that will appear one place to its right). Since $C$ is upper triangular, the construction of $B$ ensures it is upper triangular.

Furthermore, by construction, it can never be the case that, on completion of all of A1--A6, the entry $a+1$ appears above $a$ in the column to its right. If it were, then rule A5 would not have been executed properly. So the matrix $B$ is such that the largest entry in every column, $a$ say (which is also the topmost entry of that column), is weakly above the smallest entry $a+1$ of the subsequent column (which is also the bottommost entry of that column). This condition absorbs Property~1'' when one considers the final pair of columns and the entries $j$ and $j+1$.

The replacement of all singleton sets with ones and empty sets with zeros results in a matrix $\Phi(P)$ with the following properties:
\begin{itemize}
\item $\Phi(P)$ contains $n$ ones and is upper triangular.
\item There are no columns consisting of all zeros, but there can be rows of all zeros.
\item The topmost 1 in every column is weakly above the bottommost 1 in the column to its right.
\end{itemize}
Therefore, we have $\Phi(P) \in \mathrm{WMat}_n$. \end{proof}

\begin{theorem} The mapping $\Psi:\mathrm{WMat}_n \to \mathrm{WPoset}_n$ is a bijection. \end{theorem}

\begin{proof} We start by showing that $\Psi$ is injective. Suppose that $A$ and $A'$ are two different matrices in $\mathrm{WMat}_n$.
As there are $n$ 1s in each of the matrices $A$ and $A'$, and they are different, there must be at least two positions in which they differ. Say that in $A$ there is a 1 in position $(x_1,y_1)$ but no 1 in position $(x_2,y_2)$, whereas in $A'$ there is no 1 in position $(x_1,y_1)$ but a 1 in position $(x_2,y_2)$. Consider next the intermediate matrices $B$ and $B'$ in the construction and the entries $B_{x_1,y_1} = a$ and $B'_{x_2,y_2} = b$. Depending on the relative values of $x_1,y_1,x_2,y_2$, the strict downsets and strict upsets of the elements $a$ and $b$ ensure that different posets are constructed via $\Psi$.

To prove that $\Psi$ is surjective, we will show that $\Psi(\Phi(P))=P$, thereby establishing $\Phi$ as the inverse of $\Psi$. Let $P \in \mathrm{WPoset}_n$ be a poset with $k$ different levels $L_0,\ldots,L_{k-1}$ and downsets $D_0,\ldots,D_{k-1}$. Let $M=\Phi(P)$; by Theorem~\ref{welldefinedptm}, $M$ satisfies Definition~\ref{matrixdefn}. Suppose that $Q=\Psi(\Phi(P)) = \Psi(M)$. As $M \in \mathrm{WMat}_n$ we know that $Q \in \mathrm{WPoset}_n$.

The poset $Q=\Psi(M)$ that we construct using Definition~\ref{posdef} is such that level $j$ of the poset $Q$ corresponds to the set of elements in the $j+1$th non-zero row of $M$ once the labelling of the 1s in $M$ that is described in Definition~\ref{posdef} has taken place. Given that the $j+1$th non-zero row of $M$ corresponded to the $j$th level of $P$, we have that $L_{j}(Q) = L_{j}(P)$ for all $j$. Let $i_1,\ldots,i_k$ denote the indices of the $k$ non-zero rows of $M$. The elements of $D_{j}(Q)$ are all of those matrix entries (with the labelling of Definition~\ref{posdef}) in columns strictly to the left of column $i_{j+1}$. As $M$ was constructed from $P$ using a process of separating out columns while creating empty rows, the $j$th downset $D_j(P)$ will coincide with $D_j(Q)$. \end{proof}

Just as we did in the previous section, we can see how statistics between these two sets are translated:

\begin{proposition} Suppose that $M \in \mathrm{WMat}_n$ and $P=\Psi(M)$. Then
\begin{itemize}
\item the sum of the top row of $M$ is the number of minimal elements in $P$,
\item the number of non-zero rows in $M$ equals the number of levels in the poset $P$,
\item the sum of the entries in the rightmost column of $M$ equals the number of maximal elements of $P$.
\end{itemize}
\end{proposition}

\section{Pattern-avoiding inversion sequences and enumeration}
\label{inversionssection}
The study of patterns in inversion sequences was recently considered by Corteel et al.~\cite{corteel} and continued in several papers \cite{Auliphd, AuliSergi1, AuliSergi2, elizalde, Lin, Mansour}. In a recent paper Auli and Elizalde~\cite{elizalde} focused on vincular patterns in inversion sequences. We recall some important definitions.

It is well known that a permutation $\pi = \pi_1\pi_2\cdots\pi_n$ of $[1,n]$ can be encoded by an integer sequence $e_1e_2\ldots e_n = (e_1,\ldots, e_n)$, where $e_i$ is the number of entries to the left of $\pi_i$ that are larger than $\pi_i$. It is easy to see that every sequence $e_1e_2\ldots e_n$ with the property that $e_i \in [0,i-1]$ for all $i$ corresponds uniquely to a permutation. Such a sequence is called an \emph{inversion sequence}.

A \emph{vincular pattern} is a sequence $p = p_1 p_2 \ldots p_r$, where some disjoint subsequences of two or more adjacent entries may be underlined, satisfying $p_i \in \{0,1,\ldots,r-1\}$ for each $i$, where any value $j > 0$ can only appear in $p$ if $j -1$ appears as well.
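Before continuing, the following short Python sketch (ours, purely illustrative; the function names are not from \cite{elizalde}) makes the encoding of permutations by inversion sequences concrete, and brute-force checks, for $n=4$, the number of inversion sequences avoiding $\underline{10}0$ under the avoidance definition given below; the value $23$ agrees with the enumeration quoted below.

\begin{verbatim}
from itertools import product

def inversion_sequence(perm):
    # e_i = number of entries to the left of perm[i] that exceed perm[i]
    return [sum(1 for x in perm[:i] if x > perm[i])
            for i in range(len(perm))]

def avoids_10_0(e):
    # An occurrence of the vincular pattern <10>0 is an adjacent
    # descent e[a] > e[a+1] together with a later letter equal
    # to e[a+1] (reduction 1,0,0); "avoids" means no occurrence.
    return not any(e[a] > e[a + 1] and e[a + 1] in e[a + 2:]
                   for a in range(len(e) - 1))

assert inversion_sequence([3, 1, 4, 2]) == [0, 1, 0, 2]

# All inversion sequences of length 4 satisfy e_i in [0, i-1]:
inv_seqs = list(product(*(range(i + 1) for i in range(4))))
assert sum(avoids_10_0(e) for e in inv_seqs) == 23
\end{verbatim}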
The \emph{reduction} of a word $w = w_1w_2 \ldots w_k$ is the word obtained by replacing all instances of the $i$th smallest entry of $w$ with $i-1$, for all $i$. An inversion sequence $e$ \emph{avoids} the vincular pattern $p$ if there is no subsequence $e_{i_1} e_{i_2}\ldots e_{i_r}$ of $e$ whose reduction is $p$, and such that $i_{s+1} = i_{s} +1$ whenever $p_{s}$ and $p_{s+1}$ are part of the same underlined subsequence. $\mathcal{I}_n(p)$ denotes the set of inversion sequences of length $n$ that avoid the vincular pattern $p$, and $I_n(p)$ denotes the size of this set.

Auli and Elizalde~\cite{elizalde} showed that $I_n(\underline{10}0)= I_n(\underline{10}1)$ and this number sequence (\cite[A336070]{OEIS}) begins
\[1 , 2 , 6 , 23 , 106 , 567 , 3440 , 23286 ,\ldots\]
The main result of this section is the following theorem.

\begin{theorem}\label{weak_ascent_elizalde} The number of length-$n$ weak ascent sequences is the same as the number of length-$n$ inversion sequences that avoid the vincular pattern $\underline{10}0$, and also the same as the number of length-$n$ inversion sequences that avoid the vincular pattern $\underline{10}1$. \end{theorem}

We prove Theorem \ref{weak_ascent_elizalde} by establishing a bijection between the sets $\mathrm{WAsc}_n$ and $\mathcal{I}_n(\underline{10}0)$. First, we recall a crucial property of the elements of $\mathcal{I}_n(\underline{10}0)$ from Auli and Elizalde~\cite{elizalde}. Let $\mathrm{desbot}(e)$ denote the set of descent bottoms of $e$, i.e.\ the set of $e_i$'s with $e_{i-1}>e_i$. For each $e\in \mathcal{I}_n(\underline{10}0)$, we have $e_n\in \{0,1,\ldots, n-1\}\setminus \mathrm{desbot}(\bar{e})$, where $\bar{e}$ denotes the inversion sequence that we obtain by deleting the last entry of $e$. Another important observation is that the descent bottoms of $e\in \mathcal{I}_n(\underline{10}0)$ are distinct. Namely, assume that $e_i$ and $e_j$, with $i<j$, are descent bottoms and $e_i=e_j$. Since $e_i$ is a descent bottom, $e_{i-1}>e_i$. Then the elements $e_{i-1}$, $e_{i}$, and $e_j$ would form the pattern $\underline{10}0$.

On the other hand, by definition, given a weak ascent sequence $w= w_1\cdots w_n$, we have $w_n\in \{0,1,\ldots, \mathrm{wasc}(\bar{w})+1\}$, and $\mathrm{wasc}(\bar{w})$ is equal to $n-2-|\mathrm{desbot}(\bar{w})|$, where $\bar{w}$ denotes the word $w_1\cdots w_{n-1}$.

\begin{proof}[Proof of Theorem \ref{weak_ascent_elizalde}] We define a map $\phi$ from the set $\mathcal{I}_n(\underline{10}0)$ to the set $\mathrm{WAsc}_n$. Let $e=e_1\ldots e_n$ be an inversion sequence from the set $\mathcal{I}_n(\underline{10}0)$. We map $e$ entry by entry so that $w=\phi(e)$ becomes a weak ascent sequence; at each step we map the set of possible values for $e_n$ to the set of possible values for $w_n$. For the initial values $i=1,2,3$ we set $w_i=e_i$. Let $n>3$. We know that $e_n$ is from the set $\{0,1,2,\ldots, n-1\}\setminus \mathrm{desbot}(\bar{e})$. We also know that $w_n$ is from the set $\{0,1,\ldots, n - 1- |\mathrm{desbot}(\bar{w})|\}$. Since the two sets are equinumerous,
\[|\{0,1,2,\ldots, n-1\}\setminus \mathrm{desbot}(\bar{e})|=|\{0,1,\ldots, n - 1 - |\mathrm{desbot}(\bar{w})|\}|,\]
it is easy to define a one-to-one correspondence between them that preserves the number of descent bottoms in all cases. So, if the possible values for the last entry in $e$ are $\ell_1 < \ell_2 < \ldots < \ell_k$ then the map $\phi$ restricted to the last entries maps $\ell_1\rightarrow 0$, $\ell_2\rightarrow 1$, etc. Hence, the last entry of $e$ will be mapped to $w_n = \phi(e_n)$.
In particular, if $e_n=\ell_i$, then $w_n = i-1$. \end{proof}

\begin{example} Let $e = 010213$.
\begin{itemize}
\item $e_1e_2e_3=w_1w_2w_3=010$.
\item The only descent bottom in the word $e_1e_2e_3=010$ is the second $0$, hence $e_4$ is from the set $\{\textcolor{blue}{1},\textcolor{blue}{2},\textcolor{blue}{3}\}$ ($0$ is not allowed). Similarly, the possible set for $w_4$ is $\{\textcolor{red}{0},\textcolor{red}{1},\textcolor{red}{2}\}$; here $3$ is forbidden. We map the two possible sets: $\textcolor{blue}{1}\rightarrow \textcolor{red}{0}$, $\textcolor{blue}{2}\rightarrow \textcolor{red}{1}$, $\textcolor{blue}{3}\rightarrow \textcolor{red}{2}$. Now we have $e_4 = 2$, so we have $w_4=1$ and hence $w_1w_2w_3w_4=0101$. Note that neither $e_4$ nor $w_4$ is a descent bottom.
\item The only descent bottom in the word $e_1e_2e_3e_4 = 0102$ is still the $0$, so the possible set for $e_5$ is $\{\textcolor{blue}{1},\textcolor{blue}{2},\textcolor{blue}{3},\textcolor{blue}{4}\}$. Similarly, the possible set for $w_5$ is $\{\textcolor{red}{0},\textcolor{red}{1},\textcolor{red}{2},\textcolor{red}{3}\}$; $4$ is not in the set. We map these two possible sets as before, and since $e_5 = 1$ we have $w_5=0$. Note that $w_5$ is a descent bottom, just as $e_5$ is.
\item There are two descent bottoms in the word $e_1e_2e_3e_4e_5 = 01021$, just as in the word $w_1w_2w_3w_4w_5=01010$. Hence, $e_6$ is from the set $\{\textcolor{blue}{2},\textcolor{blue}{3},\textcolor{blue}{4},\textcolor{blue}{5}\}$; $0$ and $1$ are deleted from the ground set. Similarly, the possible set for $w_6$ is $\{\textcolor{red}{0},\textcolor{red}{1},\textcolor{red}{2},\textcolor{red}{3}\}$, where $4$ and $5$ are deleted from the ground set. We map the two possible sets, and obtain that $e_6 = 3$ is mapped to $w_6=1$. Hence, we have $w = 010101$.
\end{itemize}
\end{example}

We give another set of inversion sequences that are also in one-to-one correspondence with weak ascent sequences. Let $\mathcal{I}_n(\mbox{posdt})$ denote the set of sequences of integers $w = w_1\cdots w_n$ with
$$w_i\in \{0,1,\ldots, i-1\}\setminus \{j:w_j>w_{j+1}\},$$
i.e., the set of length-$n$ inversion sequences where the positions of the descent tops are forbidden as entries. Another way to describe this is as the set of inversion sequences whose entries are only positions at which a weak ascent occurs. The sequence $0102$ is not in $\mathcal{I}_4(\mbox{posdt})$, because $w_2$ is a descent top, and hence the value $2$ is forbidden. All other length-4 inversion sequences are in $\mathcal{I}_4(\mbox{posdt})$.

While we do not give a formal proof of the following result, let us note that it follows by an argument similar to that given for Theorem~\ref{weak_ascent_elizalde} in conjunction with the bijection $\Lambda$ from partition matrices to inversion sequences that was given in \cite{partition_and_composition_matrices}.

\begin{proposition} The set $\mathcal{I}_n(\mbox{posdt})$ is equinumerous with the set $\mathrm{WAsc}_n$. \end{proposition}

Note that the inductive construction is very similar in each case: weak ascent sequences, inversion sequences avoiding the vincular pattern, and weak Fishburn permutations. In each case there is a set of possible values for the $j$th entry that is determined by the prefix of length $j-1$.

Auli and Elizalde~\cite{elizalde} use the method of generating trees to derive an expression for the generating function
\begin{align*}
A(z)=\sum_{n\geq 0} I_n(\underline{10}0)z^n =\sum_{n\geq 0}I_n(\underline{10}1)z^n.
\end{align*}

\begin{proposition}[{\cite[Proposition 3.12]{elizalde}}] We have that $A(z) =G(1,z)$, where $G(u,z)$ is defined recursively by
\begin{align*}
G(u,z) = u(1-u)+uG(u(1+z-uz),z).
\end{align*}
\end{proposition}

This expression and the bijection from the proof of Theorem~\ref{weak_ascent_elizalde} imply that, if we denote by $A_n$ the number of weak ascent sequences of length $n$, then $A_n = \sum_{k=0}^n a_{n,k}$, where $a_{n,k}$ is given by the following formula. The initial values are $a_{0,0} = 1$ and $a_{n,0} = a_{0,k} = 0$ for $n,k\geq 1$, and
\begin{align}\label{a_n_k}
a_{n,k} = \sum_{i=0}^n \sum_{j=0}^{k-1} (-1)^{j} \binom{k-j}{i}\binom{i}{j} a_{n-i,k-j-1}.
\end{align}

\begin{proposition} The number of weak ascent sequences of length $n$ having $k$ weak ascents is $a_{n,k}$. \end{proposition}

\begin{proof} Let $b_{n,k}$ denote the number of weak ascent sequences of length $n$ having $k$ weak ascents. A weak ascent is an ascent or a plateau, where by a plateau we mean an entry that equals the previous entry (an entry $\pi_i$ with $\pi_i=\pi_{i-1}$). Consider weak ascent sequences of length $n$ with $k$ weak ascents such that among the last $i$ entries there are only plateaux and descents. Let $j$ be the number of plateaux of the sequence among the last $i$ entries. One can construct such a weak ascent sequence by the following procedure. Take a weak ascent sequence of length $n-i$ with $k-j-1$ weak ascents. The entry at the $n-i+1$th position is then at most $k-j$; moreover, since the weak ascents among the last $i$ entries are all plateaux (no ascents), and the remaining positions are descents, all of the entries at positions $n-i+1, n-i+2,\ldots, n$ are at most $k-j$. First choose $i$ distinct entries from the available $k-j$ values, and list them in decreasing order. Now, to create the plateaux, choose $j$ positions (from the $i$) and set each chosen entry equal to the previous value. Clearly, what we get from this procedure is a weak ascent sequence of the desired kind. However, there are overlaps: we obtain each such sequence several times, so we apply the inclusion-exclusion principle. We obtain for $b_{n,k}$ the same formula as for $a_{n,k}$ in \eqref{a_n_k}. Thus, $b_{n,k} = a_{n,k}$. \end{proof}

Note that the numbers $a_{n,n}$ are the Catalan numbers; this is clear, since weak ascent sequences that have only ascents are in a trivial bijection with, for instance, Dyck paths.

\begin{remark} Since the sequence $A_n$ grows rapidly, faster than $n^{{n}/{2}}$, the series $\sum_{n=0}^{\infty}A_nz^n$ converges only for $z=0$. On the other hand, since $A_n \leq n!$, the exponential generating function $\sum_{n=0}^{\infty} A_n \frac{t^n}{n!}$ determines an analytic function on a certain domain. However, it may be difficult to represent this function in terms of classical functions; we did not manage to derive a nice closed formula for it. \end{remark}

\section{Concluding remarks}
Experimentation with restricted classes of weak ascent sequences has shown that there are relationships to other known number sequences. As an example, we offer the following simple Catalan result:

\begin{proposition} The number of weak ascent sequences $w=(w_1,\ldots,w_n)$ that are weakly increasing, i.e.\ $ w_{i}\leq w_{i+1}$ for all $i$, is given by the Catalan numbers.
\end{proposition}

\begin{proof} If a weak ascent sequence is weakly increasing then every adjacent pair of entries forms a weak ascent, so the defining condition places no further restriction on the entries. Hence the set of weakly increasing weak ascent sequences is the same as the set of nondecreasing sequences of integers $(a_1,\ldots,a_n)$ with $0\leq a_i\leq i-1$, which are known to be enumerated by the Catalan numbers. \end{proof}

A slightly different restriction gives rise to the following conjecture:

\begin{conjecture} The number of weak ascent sequences $w=(w_1,\ldots,w_n)$ that satisfy $w_{i+1} \geq w_i -1$ for all $i$ equals entry \cite[A279567]{OEIS} of the OEIS: ``Number of length-$n$ inversion sequences avoiding the patterns 100, 110, 120, and 210.''
\end{conjecture}

The paper \cite{McNamara} probed restrictions on ascent sequences and how such restrictions played out in the bijective correspondences. The above proposition and conjecture represent a first step in that direction for weak ascent sequences. Research into pattern avoidance in ascent sequences (see Duncan and Steingr\'{i}msson~\cite{Duncan}) proved to be a fruitful avenue that produced a wealth of enumerative identities and conjectures, some of which are still open. The asymptotics of generating functions for these has recently been investigated by Conway et al.~\cite{conway}. We posit that a similarly rich collection of results is to be discovered by exploring pattern avoidance for weak ascent sequences.

\section*{Acknowledgments}
The authors would like to thank Toshiki Matsusaka for his assistance with Equation~\eqref{a_n_k}.
\section{Introduction}
There is significant astrophysical and cosmological evidence showing that non-baryonic dark matter dominates the mass in the Universe \citep[e.g.][]{Einasto74,Rubin78,Trimble87,Wittman2000,Mandelbaum2013,Kwan2017}. Cosmological constraints have pinpointed that the dark matter mass density relative to critical is $\Omega_{\rm dm} \simeq 0.27$ today \citep[see][and references therein]{Planck2020}. For a comprehensive historical perspective on the observational and theoretical motivations for dark matter see \citet{BH18}.

One popular theory for dark matter suggests that it is made up of Weakly Interacting Massive Particles (WIMPs) \citep[see][for discussions]{Jungman96,dodelson2003modern,Feng05,Bertone05,mo2010galaxy}. In the standard WIMP scenario, where dark matter particles are their own antiparticles, WIMPs self-annihilate and recombine in equilibrium when the Universe is young, hot, and dense. As the Universe cools and expands, annihilation rates become too low to maintain equilibrium, and the co-moving particle abundance ``freezes out''. The resultant abundance is set directly by the interaction cross section during freeze-out, and this gives us a way to relate a macroscopic observable ($\Omega_{\rm dm} \simeq 0.27$) to microscopic properties of the particle. Specifically, if the thermally-averaged cross-section during freeze out is $\langle \sigma_A v \rangle \equiv \langle \sigma_A v \rangle_T \simeq 2.3 \times 10^{-26}$ cm$^3$ s$^{-1}$, then the thermal relic density is naturally of the right order of magnitude to match the observed abundance \citep[see][for a precise treatment]{Steigman12}.

The same annihilations that set the thermal abundance of WIMPs in the early Universe should be occurring again today in regions where the dark matter has become dense in dark matter halos. If those annihilations produce Standard Model particles, this provides a means for indirect dark matter detection. Specifically, an observed flux of Standard Model particles from such a location could provide evidence for dark matter. One region of particular interest is the Galactic Center \citep[][]{Bergstrom88}. Not only is the Galactic Center expected to be dense in dark matter, but its relative proximity to Earth has made it a subject of significant study for indirect messengers of annihilation, including cosmic rays and neutrinos \citep[see][for a review]{Gaskins2016}. If dark matter particles annihilate to quarks and charged leptons, this will ultimately produce photons with energies of order $\sim 10\%$ of the dark matter particle mass, making gamma-ray observations of particular interest for indirect searches for WIMP dark matter with $m_\chi \sim 100$ GeV \citep{daylan2016characterization}.

An observed excess in gamma-ray emission from the Galactic Center based on Fermi Large Area Telescope observations has sparked considerable interest as a potential indirect detection signal \citep{Hooper11,Abazajian12,martin2014fitting,Ajello16,Ackerman17}. The basic excess has been confirmed by multiple groups \citep[see][for a review]{murgia2020} and is consistent with expectations for a dark matter particle with mass $m_\chi \sim 10-100$ GeV annihilating with a velocity-averaged cross section that matches the thermal WIMP expectation. A different signal from the Andromeda galaxy halo is potentially consistent with this interpretation \citep{Karwin2020}.
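As a rough numerical illustration of the freeze-out argument above, one may use the standard order-of-magnitude relation $\Omega_\chi h^2 \approx 3\times 10^{-27}\,{\rm cm^3\,s^{-1}}/\langle \sigma_A v \rangle$ (a textbook approximation quoted here for orientation only, not a result of this paper); a short Python sketch (ours) recovers a relic density close to the observed value:

\begin{verbatim}
# Order-of-magnitude freeze-out estimate (standard textbook
# approximation, not a result of this paper):
#   Omega_chi * h^2 ~ 3e-27 cm^3 s^-1 / <sigma_A v>
sigma_v_thermal = 2.3e-26  # cm^3 s^-1, thermal relic cross section
h = 0.674                  # dimensionless Hubble parameter (assumed)

omega_h2 = 3.0e-27 / sigma_v_thermal
omega_dm = omega_h2 / h**2
print(f"Omega_dm ~ {omega_dm:.2f}")  # ~0.29, close to the observed 0.27
\end{verbatim}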
Although a dark matter origin of the Galactic Center Excess (GCE) is the most intriguing possibility, astrophysical sources, including gamma-ray emitting pulsars \citep[e.g.][]{Abazajian11,Bartels16} and supernova remnants \citep[e.g.][]{carlson2014cosmic}, are plausible alternatives. The case for an astrophysical interpretation has been strengthened over the last several years, with analyses showing that the morphology of the excess traces the flattened ``boxy'' stellar over-density of the Galactic bulge, rather than the more spherical distribution one would expect for a dark matter signal \citep{Macias18,Bartels18}. Based on this realization, \citet{Abazajian20} used templates for the galactic and nuclear stellar bulges to show that the Galactic Center exhibits no significant evidence for dark matter annihilation, and used this to place strong constraints on the s-wave cross section. In particular, in the case of a pure b-quark annihilation channel, assuming a range of dark matter profiles consistent with numerical simulations, \citet{Abazajian20} ruled out s-wave cross sections $ \gtrsim 0.015 \langle \sigma_A v \rangle_T$ for dark matter masses $m_\chi \sim 10$ GeV.

One way that this thermal limit could be avoided is if dark matter annihilation is velocity-dependent \citep{Robertson_Zentner09,Giacchino13,Choquette16,Petac18,Boddy18,johnson2019search,Arguelles19,Board21}. Specifically, in some models, symmetries forbid the s-wave contribution to the annihilation cross-section, and the leading contribution to dark matter annihilations could be p-wave ($\sigma v \propto (v/c)^2$) or even d-wave ($\sigma v \propto (v/c)^4$). In the Milky Way, typical dark matter velocities are usually thought to be $\sim 3 \times 10^{-4} c$ near the Galactic Center, while at thermal freeze out $v \sim 0.1 c$. This means that, for p-wave models, the cross section today is expected to be suppressed by a factor of $\sim 10^{-5}$ compared to its value during freeze-out.

For a fixed particle physics model with s-wave annihilation, the expected annihilation signal depends on the square of the dark matter density along the line of sight from the observer. This ``astrophysical J-factor'' is therefore critical to the interpretation of any indirect dark matter search \citep{Stoehr03,Diemand07,Springel08,Kamionkowski10,Grand21}. It is common for interpretive analyses to adopt analytic priors for the dark matter profile shape inferred from cosmological simulations and to normalize the profiles so that the local dark matter density near the Sun matches observationally-inferred values \citep{necib2019under}. For velocity-dependent models, the J-factor is generalized to include the local velocity distribution \citep{Boddy18}.

Recently, \citet{Board21} used several cosmological zoom hydrodynamical simulations to investigate the generalized J-factor for velocity-dependent models for Milky-Way-size galaxies. They found that J-factors were enhanced for hydrodynamic runs in the p- and d-wave cases, and that the relative velocity distribution of dark matter particles was well-described by a Maxwell-Boltzmann form. They also concluded that the J-factor in all models was strongly correlated with the local Solar dark matter density. In this paper, we perform a similar analysis to that in \citet{Board21}, utilizing 12 Milky-Way-mass zoom simulations, with $\sim 10$ times better mass resolution, done as part of the FIRE-2 collaboration \citep{Wetzel16,garrison2017not,hopkins2018fire,Garrison-Kimmel19,lazar2020dark}.
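To make the velocity-suppression estimate above explicit, the following trivial numerical check (ours, purely illustrative) evaluates the p- and d-wave suppression factors $(v/c)^{2n}$ for the typical Galactic Center and freeze-out speeds quoted in the text:

\begin{verbatim}
# Suppression of the annihilation cross section today relative to
# freeze-out, sigma*v ~ (v/c)^(2n), n=1 (p-wave) or n=2 (d-wave).
v_gc = 3e-4   # typical DM speed near the Galactic Center, units of c
v_fo = 0.1    # typical speed at thermal freeze-out, units of c

for name, n in [("p-wave", 1), ("d-wave", 2)]:
    suppression = (v_gc / v_fo) ** (2 * n)
    print(f"{name}: factor ~ {suppression:.0e}")
# p-wave: ~9e-06 (i.e. ~10^-5), d-wave: ~8e-11
\end{verbatim}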
These simulations represent some of the highest-resolution simulations of Milky-Way-mass halos with full galaxy formation physics yet run, and allow us to resolve the J-factor within $\sim 3$ degrees of the Galactic Center. For each halo, we have a dark-matter-only version, and this allows us to explore the differential effect of galaxy formation physics on J-factor predictions. One major difference from \citet{Board21} is that we re-normalize every halo so that the local dark matter density at the mock solar location is identical from run to run. This allows us to mimic what is done in indirect detection analyses and to explore how differences in the shape of the dark matter density and velocity profiles affect J-factor predictions from simulation to simulation.

The outline of this paper is as follows. In Section 2 we describe our simulations. In Section 3 we provide our nomenclature for the effective J-factor and describe our analysis. In Section 4 we present our results, and we conclude in Section 5.

\begin{table*} \centering \caption{(1) Simulation name. The suffix ``DMO'' stands for ``Dark Matter Only'' and refers to the same simulation run with no hydrodynamics or galaxy formation physics. (2) Factor $f$ by which dark matter particle masses have been multiplied ($m_{\rm dm} \rightarrow fm_{\rm dm}$) in order to normalize the dark matter density to $\rho(r = R_\odot) = 10^7$ $M_{\odot}$ ${\rm kpc}^{-3} = 0.38$ GeV cm$^{-3}$ at a mock solar location $R_\odot = 8.3$ kpc. (3) Stellar mass $M_\star$ of the central galaxy. (4) Virial mass (of the raw simulation, not including the $f$ factor) defined by \citet{ByranNorman1998}. The following quantities are derived after normalizing (by $f$) to the local dark matter density: (5)--(7) Cumulative s-wave J-factor within 3 degrees of the Galactic Center, within 10 degrees, and integrated over the whole sky. (8)--(10) The same for p-wave. (11)--(13) The same for d-wave.
} \label{tab:one}
\begin{tabular}{lccc @{\hspace{.1cm} \vline \hspace{.1cm} } ccc @{\hspace{.1cm} \vline \hspace{.1cm} } ccc @{\hspace{.1cm} \vline \hspace{.1cm} } ccc}
\hline
Simulation & $f$ & M$_\star$ & M$_{\rm vir}$ & J$_s$(< $3^{\circ}$) & J$_s$(< $10^{\circ}$) & J$_s^{\rm tot}$ & J$_p$(< $3^{\circ}$) & J$_p$(< $10^{\circ}$) & J$_p^{\rm tot}$ & J$_d$(< $3^{\circ}$) & J$_d$(< $10^{\circ}$) & J$_d^{\rm tot}$ \\
& & $10^{10}$ M$_\odot$ & $10^{12}$ M$_\odot$ & & $\hspace{-2 cm}$ ($10^{22}$ & \hspace{-2 cm} GeV$^2$ cm$^{-3}$) & & \hspace{-2 cm} ($10^{16}$ & \hspace{-2 cm} GeV$^2$ cm$^{-3}$) & & \hspace{-2 cm} ($10^{10}$ & \hspace{-2 cm} GeV$^2$ cm$^{-3}$) \\
\hline
\texttt{M12i} & 1.28 & 6.4 & 0.90 & 0.964 & 5.08 & 12.5 & 4.62 & 21.1 & 42.2 & 121 & 477 & 806 \\
\texttt{M12iDMO} & 1.59 & - & 1.3 & 0.295 & 1.32 & 5.21 & 0.133 & 0.881 & 5.34 & 0.390 & 3.52 & 31.2 \\ \\
\texttt{M12c} & 1.26 & 6.0 & 1.1 & 1.03 & 5.63 & 15.5 & 4.32 & 20.9 & 46.6 & 98.5 & 423 & 790 \\
\texttt{M12cDMO} & 1.83 & - & 1.3 & 1.26 & 4.62 & 12.3 & 0.545 & 2.93 & 11.3 & 1.42 & 10.9 & 59.6 \\ \\
\texttt{M12m} & 0.885 & 11 & 1.2 & 0.603 & 3.94 & 13.6 & 2.25 & 13.3 & 38.2 & 45.5 & 243.0 & 594 \\
\texttt{M12mDMO} & 1.42 & - & 1.4 & 1.05 & 3.90 & 9.41 & 0.490 & 2.42 & 8.10 & 1.31 & 8.60 & 40.3 \\ \\
\texttt{M12f} & 1.01 & 8.6 & 1.3 & 0.844 & 5.31 & 14.5 & 3.65 & 20.0 & 44.4 & 86.5 & 411 & 765 \\
\texttt{M12fDMO} & 1.82 & - & 1.6 & 0.620 & 2.38 & 6.67 & 0.303 & 1.61 & 7.02 & 0.890 & 6.46 & 44.2 \\ \\
\texttt{M12w} & 1.28 & 5.8 & 0.83 & 0.921 & 3.99 & 11.2 & 3.58 & 14.5 & 33.1 & 74.8 & 285 & 550 \\
\texttt{M12wDMO} & 1.68 & - & 1.1 & 0.436 & 1.72 & 5.66 & 0.207 & 1.18 & 5.68 & 0.610 & 4.78 & 32.5 \\ \\
\texttt{M12b} & 0.990 & 8.1 & 1.1 & 1.04 & 6.49 & 16.6 & 5.82 & 29.7 & 58.0 & 182 & 759 & 1201 \\
\texttt{M12bDMO} & 1.25 & - & 1.4 & 1.25 & 4.37 & 11.2 & 0.655 & 3.10 & 11.0 & 2.08 & 12.9 & 63.1 \\ \\
\texttt{Juliet} & 1.31 & 4.2 & 0.85 & 3.08 & 10.9 & 18.6 & 10.7 & 33.4 & 50.2 & 200 & 555 & 755 \\
\texttt{JulietDMO} & 1.59 & - & 1.0 & 2.10 & 6.12 & 12.6 & 1.07 & 4.43 & 11.5 & 3.26 & 18.6 & 60.3 \\ \\
\texttt{Romeo} & 0.99 & 7.4 & 1.0 & 4.08 & 14.4 & 25.2 & 10.2 & 35.7 & 60.5 & 136 & 472 & 780 \\
\texttt{RomeoDMO} & 1.26 & - & 1.2 & 1.74 & 5.73 & 12.2 & 0.881 & 3.83 & 11.3 & 2.56 & 14.7 & 61.2 \\ \\
\texttt{Thelma} & 1.17 & 7.9 & 1.1 & 0.304 & 2.18 & 8.86 & 1.22 & 8.12 & 26.7 & 26.9 & 188 & 449 \\
\texttt{ThelmaDMO} & 1.70 & - & 1.3 & 0.721 & 2.52 & 6.69 & 0.310 & 1.62 & 6.87 & 0.825 & 6.31 & 42.3 \\ \\
\texttt{Louise} & 1.42 & 2.9 & 0.85 & 0.811 & 4.39 & 11.3 & 2.00 & 10.0 & 23.6 & 26.6 & 124 & 267 \\
\texttt{LouiseDMO} & 1.41 & - & 1.0 & 1.01 & 3.77 & 10.4 & 0.557 & 2.85 & 10.3 & 0.825 & 6.30 & 42.3 \\ \\
\texttt{Romulus} & 1.00 & 10 & 1.53 & 7.30 & 18.4 & 28.6 & 26.8 & 63.7 & 92.2 & 529 & 1182 & 1610 \\
\texttt{RomulusDMO} & 1.01 & - & 1.9 & 1.15 & 4.40 & 11.9 & 0.540 & 2.94 & 11.5 & 1.52 & 11.5 & 64.2 \\ \\
\texttt{Remus} & 1.10 & 5.1 & 0.97 & 1.67 & 7.48 & 16.6 & 4.65 & 19.5 & 39.0 & 69.4 & 272 & 500 \\
\texttt{RemusDMO} & 1.18 & - & 1.3 & 1.57 & 5.45 & 13.3 & 0.814 & 3.87 & 12.5 & 2.53 & 16.0 & 68.9 \\ \\
\hline
\end{tabular}
\end{table*}

\section{Overview of Simulations}

Our analysis relies on cosmological zoom-in simulations performed as part of the Feedback In Realistic Environments (FIRE) project\footnote{\url{https://fire.northwestern.edu/}}, using the FIRE-2 feedback implementation \citep{Hopkins17} and the gravity plus hydrodynamics code {\small GIZMO} \citep{Hopkins15}.
FIRE-2 includes radiative heating and cooling for gas with temperatures ranging from $10$ K to $10^{10}$ K, an ionising background \citep{Faucher2009}, stellar feedback from OB stars, AGB mass-loss, type Ia and type II supernovae, photoelectric heating, and radiation pressure. Star formation occurs in gas that is locally self-gravitating, sufficiently dense ($> 1000$ cm$^{-3}$), Jeans unstable, and molecular (following \citealt{Krumholz_2011}). Locally, the star formation efficiency is set to $100\%$ per free-fall time, though the global efficiency of star formation within a giant molecular cloud (or across larger scales) is self-regulated by feedback to $\sim$1-10\% per free-fall time \citep{Orr_2018}.

In this work, we analyse 12 Milky-Way-mass galaxies (Table \ref{tab:one}). These zoom simulations are initialised following the approach outlined in \citet{Onorbe14} using the MUSIC code \citep{HA11}. Six of these galaxies were run as part of the Latte suite~\citep{Wetzel16,Garrison-Kimmel17,Garrison-Kimmel19,samuel2020profile,hopkins2018fire} and have names following the convention \texttt{M12*}. The other six, with names associated with famous duos, are set in paired configurations to mimic the Milky Way and M31 \citep{Garrison-Kimmel19,Garrison-Kimmel19_2}. Analysis has shown these are good candidates for comparison with the Milky Way \citep{sanderson2020synthetic}. Gas particles for the \texttt{M12*} runs have initial masses of $m_{\rm g,i} = 7070$ $M_{\odot}$. The ELVIS on FIRE simulations have roughly two times better mass resolution ($m_{\rm g,i} \simeq 3500 - 4000$ $M_{\odot}$). Gas softening lengths are fully adaptive down to $\simeq$0.5$-$1 pc. The dark matter particle masses are $m_{\rm dm} = 3.5 \times 10^4$ $M_{\odot}$ for the Latte simulations and $m_{\rm dm} \simeq 2 \times 10^4$ $M_{\odot}$ for the ELVIS runs. Star particle softening lengths are $\simeq$4 pc physical and the dark matter force softening is $\simeq$40 pc physical.

Lastly, each FIRE simulation has an analogous dark matter only (DMO) version. The individual dark matter particle masses in the DMO simulations are larger by a factor of $(1 - f_{\rm b})^{-1}$ in order to keep the total gravitating mass of the Universe the same, where $f_{\rm b} = \Omega_{\rm b}/\Omega_{\rm m}$ is the cosmic baryon fraction. The initial conditions are otherwise identical. DMO versions of each halo are referred to with the same name as the FIRE version with the added suffix ``DMO''.

As can be seen in Table \ref{tab:one}, the stellar masses of the main galaxy in each FIRE run (third column) are broadly in line with the Milky Way: $M_\star \approx (3 - 11) \times 10^{10}$ $M_{\odot}$. The virial masses \citep{ByranNorman1998} of all the halos in these simulations span a range generally in line with expectations for the Milky Way: $M_{\rm vir} \approx (0.9 - 1.8) \times 10^{12}$ $M_{\odot}$. In every case, the DMO version of each pair ends up with a higher virial mass; this is consistent with the expectation that the FIRE halos have lost a portion of their cosmic share of baryons as a result of feedback. As we discuss in the next section, in our primary analysis we re-normalize all halos (both FIRE and DMO runs) so that they have the same ``local'' dark matter density at the Solar location (by the factor $f$ listed in the table).
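As a concrete illustration of this normalization step, the following Python sketch computes the factor $f$ from a spherically-averaged density estimate at $R_\odot = 8.3$ kpc. The code is our own and purely illustrative (the function and the particle arrays are hypothetical, not part of the FIRE analysis pipeline):
\begin{verbatim}
import numpy as np

RHO_TARGET = 1.0e7   # target local DM density [Msun/kpc^3] (~0.38 GeV/cm^3)
R_SUN      = 8.3     # assumed solar galactocentric radius [kpc]

def normalization_factor(r, m_dm, dr=0.2):
    """Hypothetical helper: estimate the DM density in a thin
    spherical shell at R_SUN and return f = rho_target / rho_local.

    r    : galactocentric radii of DM particles [kpc]
    m_dm : particle masses [Msun]
    dr   : shell half-width [kpc]
    """
    shell = (r > R_SUN - dr) & (r < R_SUN + dr)
    vol = 4.0 / 3.0 * np.pi * ((R_SUN + dr)**3 - (R_SUN - dr)**3)
    rho_local = m_dm[shell].sum() / vol
    return RHO_TARGET / rho_local   # multiply m_dm by this factor
\end{verbatim}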
\begin{figure*} \includegraphics[width =0.99\columnwidth,trim = 0 0 0 0 ]{Figures/Dark_matter_density_all_halos_FIRE_un_norm.png} \hspace{0.1in} \includegraphics[width = 0.99 \columnwidth, trim = 0 0 0 0]{Figures/Velocity_dispersion_all_halos_new_un_norm.png} \\ \includegraphics[width =0.99\columnwidth,trim = 0 0 0 0 ]{Figures/Dark_matter_density_all_halos_FIRE.png} \hspace{0.1in} \includegraphics[width =0.99\columnwidth,trim = 0 0 0 0]{Figures/Velocity_dispersion_all_halos_new.png}
\caption{{\bf Upper Left:} Simulated dark matter halo density profiles for all DMO (dashed) and FIRE (solid) runs. {\bf Upper Right:} Three-dimensional dark matter velocity dispersion profiles for all DMO (dashed) and FIRE (solid) simulations. {\bf Lower Left:} Dark matter density profiles after re-normalizing. The mass per particle in each simulation has been multiplied by a factor $f$ (ranging from $f = 0.89 - 1.8$, see Table \ref{tab:one}) to give all halos the same density at a mock solar location: $\rho(r = R_\odot) = 10^7$ $M_{\odot}$ ${\rm kpc}^{-3}$ with $R_\odot = 8.3$ kpc (vertical dotted line). {\bf Lower Right:} Velocity dispersion profiles for each halo after re-scaling the particle velocities by a factor $\sqrt{f}$, which roughly accounts for the re-scaling of the mass/density profile. Note that after re-scaling, the DMO velocity dispersion profiles become similar for $r \lesssim R_\odot$. Even when normalized at the solar radius, there is almost an order of magnitude scatter in the inner ($\sim 400$ pc) density for the FIRE simulations, and they all have higher central dark matter velocities than would have been expected from DMO simulations.} \label{fig:profiles} \end{figure*}

\begin{figure*} \includegraphics[height =0.95 \columnwidth]{Figures/rho_ratio_all_halos.png} \hspace{.1in} \includegraphics[height=.95\columnwidth, trim = 0 0 0 0 ]{Figures/sigma_ratio_all_halos.png}
\caption{{\bf Left:} The ratio of each FIRE dark matter density profile shown in Figure \ref{fig:profiles} to its DMO counterpart. In some cases the FIRE runs are less dense than the DMO runs, though most halos become denser at small radius in response to galaxy formation. {\bf Right:} The ratio of each FIRE dark matter halo velocity dispersion profile shown in Figure \ref{fig:profiles} to its DMO counterpart. In every case, the process of galaxy formation heats the dark matter at small radius.} \label{fig:ratios} \end{figure*}

\subsection{Dark Matter Density and Velocity Dispersion Profiles}

\citet{lazar2020dark} provide an extensive discussion of the dark matter halo density profiles for the simulations we analyze here. Every system has more than $\sim 1000$ dark matter particles within the inner 400 pc and is converged outside of this radius according to the criteria discussed in \citet{Hopkins17}. Though some systems are even better converged, for simplicity we adopt the same convergence radius, $r_{\rm conv} = 400$ pc, for each halo and only present values that depend on quantities outside of this radius. For an adopted Solar location at radius $R_\odot = 8.3$ kpc, our $400$ pc convergence radius corresponds to an angle $\psi = 2.75^\circ$ in projection at the Galactic Center.

Figure \ref{fig:profiles} shows the spherically-averaged density and velocity dispersion profiles of the simulations in our sample. The upper two panels show raw simulation results, with differential density profiles on the left and velocity-dispersion profiles on the right.
The DMO simulations are in black and the FIRE runs are in blue. Note that the density profiles of the FIRE runs are systematically steeper for $r \gtrsim 3$ kpc than the DMO runs. This is consistent with the expectation that baryonic contraction makes halos more concentrated at this stellar-mass scale \citep[see][]{lazar2020dark}. At smaller radii ($r \lesssim 3$ kpc), however, the FIRE halos have a larger range of central densities; sometimes feedback produces a core-like profile and sometimes the halo remains fairly steep \citep[see][for an investigation into the origin of this variation]{mercado2021relationship}.

The FIRE halos also have systematically higher central dark matter velocity dispersions than the DMO runs. A similar result was reported by \citet{Robles19}, who studied the velocity dispersion profiles of cold dark matter halos using zoom-in dark matter simulations that included a slowly-grown Milky-Way disk potential. In these simulations, even without feedback, they found that the central velocity dispersion of the dark matter was much higher in runs with disk potentials compared to those without. \citet{Board21} also found that the central dark matter velocity dispersion was higher in simulations that included full galaxy formation physics (though with a different implementation than our own). Taken together, these results suggest that the dark matter velocity dispersion at the center of Milky-Way-mass halos should be significantly higher than would be expected from DMO simulations, irrespective of the galaxy formation model.

The pair of panels on the bottom of Figure \ref{fig:profiles} show the profiles after we have re-scaled them to the defaults we will use in the rest of the analysis. Our aim here is to normalize each run to have the same local dark matter density at the solar location $r = R_\odot$. We are motivated to do this because it is customary in indirect-detection analyses to normalize the assumed profile at $R_\odot$ and to marginalize over the local density range inferred by observations. While our halos are Milky-Way-like in virial mass and stellar mass, they are not precise replicas of the Milky Way. By re-normalizing at the solar location, our results become primarily about profile {\em shape} rather than normalization, and can be scaled appropriately as observational estimates of the local density become more precise. We assume $R_\odot = 8.3$ kpc (vertical dotted lines in the left panels) and set the density there to be $\rho(r = R_\odot) = 10^7$ $M_{\odot}$ ${\rm kpc}^{-3} = 0.38$ GeV cm$^{-3}$ \citep{Guo20}. We do this by scaling the particle masses in each simulation (in post-processing) by a factor $f$: $m_{\rm dm} \rightarrow f \, m_{\rm dm}$. The values of $f$ for each simulation are given in Table \ref{tab:one} and range from $f = 0.89$ to $f = 1.8$. We also re-scale the particle velocities in each simulation by a factor\footnote{This assumes $v \propto \sqrt{f M/r}$. We have checked that the dark matter velocity dispersion in our simulations does roughly scale with the local dark matter density as $\sqrt{f}$.} $\sqrt{f}$ in order to roughly account for the re-scaling of the total mass: $v \rightarrow \sqrt{f} v$. Note that after re-scaling, the DMO velocity dispersion profiles become similar for $r \lesssim R_\odot$, as expected. Even when normalized at the Solar radius, there is considerable scatter in the inner density, and the FIRE simulations display more variance than the DMO simulations.
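The rescaling itself is a simple post-processing operation. The sketch below (again our own illustrative code with hypothetical particle arrays, not taken from the simulation pipeline) applies the mass and velocity rescaling and measures the spherically-averaged dispersion profiles of the kind plotted in Figure \ref{fig:profiles}:
\begin{verbatim}
import numpy as np

def renormalize(m_dm, v_dm, f):
    """Rescale particle masses by f and velocities by sqrt(f),
    as described in the text (v ~ sqrt(f M / r) when all masses
    are multiplied by f)."""
    return f * m_dm, np.sqrt(f) * v_dm

def dispersion_profile(r, v, r_bins):
    """Spherically-averaged 3D velocity dispersion in radial bins.
    r: particle radii [kpc]; v: velocities, shape (N, 3) [km/s]."""
    sigma = np.full(len(r_bins) - 1, np.nan)
    for i in range(len(r_bins) - 1):
        sel = (r >= r_bins[i]) & (r < r_bins[i + 1])
        if sel.sum() > 1:
            # 3D dispersion: sum of the per-axis velocity variances
            sigma[i] = np.sqrt(np.var(v[sel], axis=0).sum())
    return sigma
\end{verbatim}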
The difference between DMO and FIRE runs is most systematic in the inner velocity dispersion (lower right panel of Figure \ref{fig:profiles}). While the normalized DMO simulations all have $\sigma \simeq 140 \, {\rm km \, s}^{-1}$ at $r = 400$ pc, the FIRE runs have $\sigma \simeq 350-550 \, {\rm km \, s}^{-1}$ at the same radius. While this scatter is interesting to note, a precise and detailed answer as to why it occurs will require more analysis and is a topic for future work. For the present paper, we simply note that this scatter exists and that it has a significant impact on the magnitude of the J-factor signal, which consequently varies significantly from halo to halo.

Figure \ref{fig:ratios} shows FIRE-to-DMO ratios for the normalized density profile (left) and velocity dispersion profile (right) of each halo pair. On the left we see that galaxy formation has generally made the halos less dense at large radii, corresponding to a steeper (contracted) density profile. The effect of galaxy formation on the inner density is quite varied, with some systems (e.g. \texttt{Romulus} and \texttt{Romeo}) displaying higher central densities in the FIRE runs, while others (e.g. \texttt{Thelma} and \texttt{M12m}) have lower densities. There is no clear trend with stellar mass or virial mass associated with these differences. \texttt{Thelma} and \texttt{Romeo}, for example, have very similar galaxy masses and virial masses, but galaxy formation seems to have had an opposite effect on their relative density profiles. This is likely an artifact of some of the important baryonic differences between halos at late times, which are studied in other papers: some have late-occurring minor mergers or strong stellar bars \citep[which tend to push DM outwards and lower central densities; e.g.][]{Sanderson2017,Debattista2019}, while others have strong torques or early multiple mergers which produce inflows, dense bulges, and more compact disks \citep[e.g.][]{SGK18,Ma17}. For the purposes of this paper, we simply note that this variation impacts the calculated J-factors and drives the halo-to-halo variance.

The right panel of Figure \ref{fig:ratios} shows again that the effect of galaxy formation on the dark matter velocity dispersion is systematic. In every case the FIRE runs are hotter, with $\sim 3-4$ times higher velocity dispersion than their DMO counterparts at $r = 400$ pc. In Appendix B we show that these halos become baryon dominated within 3-8 kpc of the center. As discussed next, this enhancement in central velocity dispersion has a systematic effect on the dark-matter annihilation J-factors for velocity-dependent cross sections.

\section{Astrophysical J-Factors}

\subsection{Definitions}

If a dark matter particle of mass $m_\chi$ is its own antiparticle with an annihilation cross section $\sigma_A$, the resulting differential particle flux produced by annihilation in a dark matter halo can be written as the integral along a line of sight $\ell$ from the observer (located at the solar location in our case) in a direction $\vec{\theta}$ in the plane of the sky over pairs of dark matter particles with velocities $\vec{v}_1$ and $\vec{v}_2$:
\begin{equation} \frac{d^2\Phi}{dE \, d\Omega} = \frac{1}{4\pi} \frac{d N}{d E} \int {\rm{d}} \ell ~{\rm{d}}^3v_1 \, {\rm{d}}^3 v_2 \frac{f(\vec{r}, \vec{v}_1)}{m_\chi} \frac{f(\vec{r}, \vec{v}_2)}{m_\chi} \, \frac{\sigma_A v_{\rm rel}}{2} \ .
\end{equation}
Here, $\vec{r} = \vec{r}(\ell,\vec{\theta})$ is the 3D position, which depends on the distance along the line of sight $\ell$ and the sky location $\vec{\theta}$. The dark matter velocity distribution $f(\vec{r}, \vec{v})$ is normalized such that $\int d^3v f(\vec{r}, \vec{v}) = \rho (\vec{r})$, where $\rho$ is the dark matter density at that location. The symbol $v_{\rm rel}=|\vec{v}_1 - \vec{v}_2|$ represents the relative velocity between pairs of dark matter particles. The quantity $m_\chi$ is the dark matter particle mass and $d N / d E$ is the particle energy spectrum ultimately produced by a single annihilation.

Following \citet{Boddy18}, we parameterize the velocity-dependence of the dark matter annihilation cross section as
\begin{equation} \sigma_A v_{\rm rel} = [\sigma v]_0 \, Q(v_{\rm rel}), \label{eq:sigv} \end{equation}
where $[\sigma v]_0$ is the overall amplitude and the function $Q(v)$ parameterizes the velocity dependence. For $s$-wave, $p$-wave, and $d$-wave annihilation, $Q(v) = 1$, $(v/c)^2$, and $(v/c)^4$, respectively. We can then rewrite the differential particle flux as
\begin{equation} \frac{d^2 \Phi}{d E \, {\rm{d}} \Omega} = \frac{[\sigma v]_0}{8\pi m_\chi^2} \frac{d N}{d E} \left[ \frac{d J_Q}{d \Omega} \right] \, . \label{eq:Jfactor} \end{equation}
Here, the term in brackets absorbs all of the astrophysics inputs and defines the astrophysical ``J-factor''
\begin{equation} \frac{dJ_Q}{d \Omega} (\vec{\theta}) = \int {\rm{d}} \ell \int {\rm{d}}^3 v_1 {f(\vec{r}, \vec{v}_1)} \int {\rm{d}}^3 v_2 {f(\vec{r}, \vec{v}_2)}\, Q( v_{\rm rel}) \, . \label{eq:Jfactor_def} \end{equation}
In principle, the $\ell$ integral above sums pairs along the line-of-sight from the observer ($\ell = 0$) to infinity. In practice, we are focusing on J-factors arising from an individual ``Milky Way'' halo, and truncate our integrals at the halo's edge (see below). It is often useful to quote the cumulative J-factor within a circular patch of sky of angular radius $\psi$ centered on the Galactic Center. In this case, the patch defined by $\psi$ subtends a solid angle $\Omega_\psi = 4 \pi \sin^2({\psi}/2)$ and we have:
\begin{equation} J_Q (< \psi) = \int_0^{\Omega_\psi} \frac{d J_Q}{d \Omega} (\vec{\theta}) \, {\rm{d}} \Omega \, . \label{eq:Jf} \end{equation}

\begin{figure*} \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/Hammer_dm_only_s-waveJuliet.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/Hammer_bary_only_full_fs-wavejuliet.png} \\ \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/Hammer_dm_only_p_wave_oldp-waveJuliet.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/Hammer_bary_p_wave_oldp-waveJuliet.png} \\ \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/hammer_dm_only_oldd-waveJuliet.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/hammer_bary_oldd-waveJuliet.png}
\caption{All-sky Hammer projections of J-factors ($d J/d \Omega$) for \texttt{JulietDMO} (left) and \texttt{Juliet} (right) in Galactic coordinates, as viewed from mock solar locations 8.3 kpc from the halo centers. Maps utilize bins of roughly 1.3 square degrees on the sky. From top to bottom, the rows assume s-wave, p-wave, and d-wave annihilation. The color map in each pair of panels is fixed for each type of assumed annihilation and is logarithmic in $d J/d \Omega$, as indicated by the bar along the top of each image.
Note that the FIRE runs (right) produce systematically rounder J-factor maps on the sky. The p-wave and d-wave maps are significantly brighter and more extended for the FIRE runs as well, owing to the effects of galaxy formation in enhancing the dark matter velocity dispersion in the center of each halo.} \label{fig:map_juliet} \end{figure*}

\begin{figure*} \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/Hammer_dm_only_fulls-wavem12c.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/Hammer_bary_only_full_fs-wavem12i.png} \\ \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/Hammer_dm_only_oldp-wavem12c.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/Hammer_bary_oldp-wavem12c.png} \\ \includegraphics[width=\columnwidth,trim = 130 0 50 90]{Figures/Hammer_dm_only_oldd-wavem12c.png} \includegraphics[width=\columnwidth, trim = 50 0 130 90]{Figures/hammer_bary_oldd-wavem12c.png}
\caption{Same as Figure \ref{fig:map_juliet}, but for \texttt{M12cDMO} (left) and \texttt{M12c} (right).} \label{fig:map_m12c} \end{figure*}

\subsection{Approach}

In what follows we aim to determine the astrophysical J-factors of each of our simulated halos for s-wave, p-wave, and d-wave annihilation. In doing so we approximate the dark matter distribution $f(\vec{r}, \vec{v})$ as a separable function:
\begin{equation} f(\vec{r}, \vec{v}) = \rho(\vec{r}) \, g_{\vec{r}}(\vec{v}), \end{equation}
with the dark matter density $\rho$ estimated using direct particle counts in the simulation. In this estimate, we everywhere rely on the nearest 32 local particles to determine the density.\footnote{As discussed in \citet{Board21}, the inclusion of subhalos that can be resolved at this resolution is not expected to increase global J-factors by more than $\sim 20\%$.} We assume that the velocity distribution $g_{\vec{r}}$ at location $\vec{r}$ can be approximated as a local Maxwellian, $g_r$; \citet{Board21} have shown that the distribution of velocities in their simulations is well approximated by this form. Assuming a local 3D velocity dispersion $\sigma = \sigma(r)$ we can write
\begin{equation} g_r(v) {\rm{d}}^3 v = \left(\frac{3}{2 \pi \sigma^2(r)}\right)^{3/2} \, \exp{\left[-\frac{3 v^2}{2 \sigma^2(r)}\right]} \, {\rm{d}}^3 v. \end{equation}
We measure $\sigma$ using direct particle counts at a given radius in the simulation. The Maxwellian assumption gives us $\int {\rm{d}}^3v \, g_r(v) = 1$, $\langle v^2 \rangle = \sigma^2$, and $\langle v^4 \rangle = (15/9) \, \sigma^4$.

For standard $s$-wave annihilation, $Q(v)=1$, and the effective J-factor (Eq. \ref{eq:Jfactor_def}) reduces to a simple integral over the density squared:
\begin{eqnarray} \frac{d J_{s}}{d \Omega} (\vec{\theta}) & = & \int {\rm{d}} \ell \, \rho^2(\vec{r}) \int {\rm{d}}^3 v_1 \, g_r(v_1) \int {\rm{d}}^3 v_2 \, g_r(v_2) \nonumber \\ & = & \int {\rm{d}} \ell \, \rho^2[r(\ell)]. \end{eqnarray}
For $p$-wave annihilation, $Q(v) = (v/c)^2$, and Eq. \ref{eq:Jfactor_def} becomes
\begin{eqnarray} \frac{d J_{p}}{d \Omega} (\vec{\theta}) & = & \int {\rm{d}} \ell \, \rho^2(\vec{r}) \int {\rm{d}}^3 v_1 \, g_r(v_1) \int {\rm{d}}^3 v_2 \, g_r(v_2) \, \frac{|\vec{v}_1 - \vec{v}_2|^2}{c^2} \nonumber \\ & = & \frac{2}{c^2} \int {\rm{d}} \ell \, \rho^2[r(\ell)] \, \sigma^2[r(\ell)]. \end{eqnarray}
In the second line we have used $\langle (\vec{v}_1 - \vec{v}_2)^2 \rangle = 2 \langle v^2 \rangle$, under the assumption that the cross-terms vanish via local spherical symmetry.
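As a quick numerical sanity check of this moment relation (an illustrative snippet of ours, not part of the published analysis), one can draw independent Maxwellian velocity pairs and confirm $\langle |\vec{v}_1 - \vec{v}_2|^2 \rangle = 2\sigma^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma = 350.0                      # 3D velocity dispersion [km/s]
sig1d = sigma / np.sqrt(3.0)       # per-axis dispersion of the Maxwellian

# Independent velocity pairs drawn from the local Maxwellian g_r
v1 = rng.normal(0.0, sig1d, size=(1_000_000, 3))
v2 = rng.normal(0.0, sig1d, size=(1_000_000, 3))

vrel2 = np.sum((v1 - v2) ** 2, axis=1)
print(vrel2.mean() / sigma**2)     # -> ~2.0, i.e. <v_rel^2> = 2 sigma^2
\end{verbatim}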
For $d$-wave annihilation, $Q(v) = (v/c)^4$, which implies
\begin{eqnarray} \frac{d J_{d}}{d \Omega} (\vec{\theta}) & = & \int {\rm{d}} \ell \, \rho^2(\vec{r}) \int {\rm{d}}^3 v_1 \, g_r(v_1) \int {\rm{d}}^3 v_2 \, g_r(v_2) \, \frac{|\vec{v}_1 - \vec{v}_2|^4}{c^4} \nonumber \\ & = & \frac{48}{9 c^4} \int {\rm{d}} \ell \, \rho^2[r(\ell)] \, \sigma^4[r(\ell)]. \end{eqnarray}
The co-factor $48/9$ comes from $\langle (\vec{v}_1 - \vec{v}_2)^4 \rangle = 2 \langle v^4 \rangle + 2 \langle v^2 \rangle^2$, again assuming that the cross terms vanish and using the Maxwellian moment $\langle v^4 \rangle = (15/9) \, \sigma^4$.

\subsection{Geometric Setup}

For each halo in our sample, we calculate J-factors as defined in Equation \ref{eq:Jfactor_def}, integrating from a mock Solar location (setting $\ell = 0$) to the edge of the halo, which we define in every case as a sphere of radius $r = 300$ kpc around the center of the halo. While the virial radii \citep{ByranNorman1998} of our halos range from $300 - 335$ kpc, we fix $300$ kpc as the halo boundary for consistency. Since most of the J-factor signal comes from the inner halo, changing the outer radius by $10\%$ has no noticeable effect on our results.

For the DMO runs, we assume that the Galactic Center corresponds to the halo center and fix the observer location to be at a distance of 8.3 kpc from the halo center along the x-axis of the simulation. For the FIRE runs, we position the observer in the galaxy disk plane at a radius of 8.3 kpc from the halo center. We define the disk plane to be perpendicular to the angular momentum vector of all the stars within 20 kpc of the central galaxy. We note that while we have placed the observer within the disk for physical consistency, in practice galaxy formation renders the dark matter halos quite spherical within $\sim 10$ kpc \citep{zhu2016baryonic}, so the location of the observer within the disk plane does not strongly affect the results.

\begin{figure*} \includegraphics[height =0.6\columnwidth, trim = 0 0 0 0 ]{Figures/DJ_Dos_waveallhalos.png} \hspace{.1in} \includegraphics[height =0.6\columnwidth, trim = 0 0 0 0 ]{Figures/DJ_Dop_waveallhalos.png}\hspace{.1in} \includegraphics[height =0.6\columnwidth, trim = 0 0 0 0 ]{Figures/dJ_dod_waveallhalos.png}
\caption{Differential J-factor profiles (Equation \ref{eq:Jfactor_def}) as a function of angle $\psi$ from the Galactic Center for s-wave, p-wave, and d-wave annihilation (from left to right). The FIRE simulation halos are shown as solid blue lines and their DMO counterparts are shown as dashed black lines. For the s-wave case (left), the central J-factor values are similar for the FIRE and DMO cases, though the FIRE profiles are flatter at small angle and more extended on the sky. The FIRE p-wave profiles (middle) are noticeably amplified compared to the DMO cases. Their shapes are also significantly different -- with a flatter inner profile and a sharper fall-off at angles beyond 10 degrees. The d-wave case (right panel) demonstrates the starkest difference between FIRE and DMO runs.
} \label{fig:djdo} \end{figure*}

\begin{figure*} \includegraphics[height =0.6\columnwidth,trim = 0 0 0 0]{Figures/RATIOS_OF_dJ_dos_waveallhalos.png} \hspace{.1in} \includegraphics[height =0.6\columnwidth, trim = 0 0 0 0]{Figures/RATIOS_OF_dJ_dop_waveallhalos.png} \hspace{.1in} \includegraphics[height =0.6\columnwidth, trim = 0 0 0 0]{Figures/RATIOS_OF_dJ_dod_waveallhalos.png}
\caption{Ratio of FIRE to DMO J-factors for each halo as a function of angle $\psi$ from the Galactic Center for s-wave, p-wave, and d-wave annihilation (from left to right). Each halo pair has a unique color, as indicated. For the s-wave case (left), the ratio is of order unity, ranging from a factor of $\sim 3$ above to a factor of $\sim 0.3$ below the DMO value at small angles. For p-wave, the FIRE runs have J-factors as much as $\sim 30$ times higher, though the amplification can be as small as a factor of $\sim 3$. In the case of d-wave, amplification factors as large as $\sim 100-400$ are seen at small angle.} \label{fig:djratio} \end{figure*}

\begin{figure*} \includegraphics[width=\columnwidth]{Figures/J_cumulatives_waveallhalos_DMO_s.png} \includegraphics[width=\columnwidth]{Figures/J_cumulatives_waveallhalos_FIREONLY_s.png} \\ \includegraphics[width=\columnwidth]{Figures/J_cumulativep_waveallhalos_DMO.png} \includegraphics[width=\columnwidth]{Figures/J_cumulativep_waveallhalos_FIREONLY.png} \\ \includegraphics[width=\columnwidth]{Figures/J_cumulatived_waveDMO_allhalos.png} \includegraphics[width=\columnwidth]{Figures/J_cumulatived_waveallhalos_FIREONLY.png}
\caption{Cumulative J-factor within angle $\psi$ of the Galactic Center (Eq. \ref{eq:Jf}) for DMO (left) and FIRE (right) runs, for s-wave (top), p-wave (middle), and d-wave (bottom) annihilation. For the s-wave case, the FIRE runs have J-factors of the same order of magnitude as the DMO runs, while for p-wave and d-wave the J-factors are considerably higher.} \label{fig:Jtot} \end{figure*}

\begin{figure*} \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_s_waves_dark_only.png} \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_s_waves.png} \\ \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_p_waves_dark_only.png} \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_p_waves.png} \\ \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_d_wave_dmo.png} \includegraphics[width=\columnwidth]{Figures/cross_section_comparison_d_waves.png}
\caption{Schematic illustration of how the cross section versus particle mass constraints from \citet{Abazajian20} (black solid lines, upper panels) would shift for s-wave (top), p-wave (middle), and d-wave (bottom) annihilation, based on the relative J-factors (blue) from each of our DMO (left) and FIRE (right) runs. The region above the lines is ruled out. The dotted lines show the $[\sigma v]_0$ required for thermal dark matter to match the observed abundance. The FIRE results suggest that current constraints for p-wave are much closer to the thermal cross section than would have been expected from DMO halos, potentially within a factor of $\sim 10$ for $10$ GeV WIMPs.} \label{fig:constraints} \end{figure*}

\section{Results}

Figures \ref{fig:map_juliet} and \ref{fig:map_m12c} graphically illustrate our results for two example halo pairs, \texttt{Juliet} and \texttt{M12c}, respectively.
We show all-sky Hammer projection maps of $dJ/d\Omega$ for s-wave (top), p-wave (middle), and d-wave (bottom) annihilation for the DMO (left) and FIRE (right) run of each halo. We view the Galactic Center (middle of each image) from mock solar locations as defined in the previous section. The color bars are mapped to J-factor amplitude as indicated at the top of each image. Note that every row has the same color mapping, so that the relative difference between DMO and FIRE runs can be seen clearly for each assumed velocity dependence. The binning in these maps is $1.3$ square degrees.

The first takeaway from these images is that the FIRE runs are significantly brighter (with amplified J-factors) than the DMO runs for the p-wave and d-wave cases. This is a direct result of the FIRE halos having enhanced dark matter velocity dispersions compared to the DMO halos (e.g., the right panel of Figure \ref{fig:ratios}). The FIRE maps are also more extended around the Galactic Center. The s-wave maps are not as different, given the modest differences in central densities for these particular halos (see the left panel of Figure \ref{fig:ratios}), though substructure is significantly reduced in the FIRE runs, as expected from the destructive effects of the central galaxy \citep{garrison2017not,Kelly19}. Note that in all cases, including s-wave, the FIRE maps are {\em rounder} on the sky -- this is a result of galaxy formation tending to sphericalize the dark matter distributions in halo centers compared to DMO runs \citep{Debattista08,bernal2016spherical,Chau19,Kelly19,Shen21,Sameie21}. The fact that we expect annihilation signals to be even rounder than in the DMO case should in principle make it easier to detect or exclude dark matter annihilation in the face of astrophysical backgrounds, which are expected to track more closely the shape of the Galaxy \citep[e.g.][]{Abazajian20}. We will not quantify this difference in on-sky shape here, as it will be the subject of future work; we instead focus on azimuthally-averaged results in what follows.

Figure \ref{fig:djdo} provides a summary of J-factor results for all of our FIRE halos (solid blue) and DMO halos (dashed black). Plotted are $dJ/d\Omega$ profiles (Equation \ref{eq:Jfactor_def}) as a function of angle $\psi$ with respect to the Galactic Center. Results for s-wave, p-wave, and d-wave annihilation are shown in separate panels, from left to right. As expected, the FIRE runs are generally amplified compared to the DMO runs, especially in the p-wave and d-wave cases. The shapes of the profiles are also significantly different in character. While the DMO runs show a trend for J-factor profiles to be more peaked at small angle for s-wave, and to become flatter and more extended on the sky as we progress to p-wave and d-wave, the FIRE profiles are more similar in shape. In all cases (s-wave, p-wave, and d-wave) the ``emission'' profile is fairly constant out to $\sim 5-10$ degrees in the FIRE runs, with a steep fall-off towards larger angles beyond that point.

Figure \ref{fig:djratio} shows, for each halo pair, the ratio of $dJ/d\Omega$ in FIRE to the DMO case as a function of angle from the Galactic Center, $\psi$. Each halo pair has a unique color, as indicated. For s-wave annihilation (left panel) we see that the FIRE runs sometimes produce higher J-factors (up to a factor of $\sim 6$) at small angle and sometimes give decreased J-factors (as small as $\sim 0.3$ of the DMO value).
Because the s-wave J-factor depends only on the density, this behavior tracks what is seen for the density profiles (Figure \ref{fig:ratios}). Sometimes feedback has produced a cored-out central density profile, leading to a lower central J-factor; sometimes baryonic contraction is more important, and this creates higher central densities and higher J-factors at small angle.

The middle and right panels of Figure \ref{fig:djratio} show that the J-factor ratios are higher, in all cases, for the FIRE runs for p-wave and d-wave annihilation. This is because the dark matter velocities are always enhanced (see Figure \ref{fig:profiles}) by factors of $2.5-4$, which is enough to boost the p-wave and d-wave annihilation with respect to the DMO runs even in the cases where the central density is slightly smaller. Typical amplification factors are $\sim 10$ for p-wave and $\sim 100$ for d-wave. In cases like \texttt{M12i} and \texttt{Romulus}, where the dark matter density is also higher in the FIRE runs, the p-wave and d-wave ratios can be very large (factors of $\sim 40-60$ and $\sim 400-500$, respectively).

Another way to see the difference between DMO and FIRE models is to compare the cumulative J-factors within angle $\psi$ of the Galactic Center. Figure \ref{fig:Jtot} shows this quantity for each DMO (left) and FIRE (right) run. Each halo pair has a unique color, as indicated. For s-wave annihilation (top left and right panels) we see that the FIRE runs generally do have larger integrated J-factors within 20 degrees of the Galactic Center, even in cases (like \texttt{Thelma}) that have somewhat smaller signals within 10 degrees. The middle and bottom panels of Figure \ref{fig:Jtot} show that the cumulative J-factor totals are higher, in all cases, for the FIRE runs for p-wave and d-wave annihilation, with the enhancement largest within $\sim 10$ degrees owing to the shape of the profile. In the next section we briefly explore the implications of our results for dark matter indirect detection.

\section{Implications} \label{sec:implications}

One of our primary results is that the DM velocities in our full galaxy formation runs are significantly higher than would be expected from DMO runs; this elevates the expected signal for a fixed cross section in p-wave and d-wave models (see, e.g., Figure \ref{fig:Jtot}). In what follows we aim to provide a schematic illustration of how our results may impact attempts to constrain dark matter models with thermal abundance cross sections, especially those with velocity dependence. We use the published results of \citet{Abazajian20} in this illustrative example, and adopt a simple scaling of their published limits to provide a first-order sketch of how our results may impact future attempts to constrain s-, p-, and d-wave annihilation.

As discussed in the introduction, the realization that the observed Galactic Center gamma-ray excess has a non-circular, boxy shape that traces the Galactic Bulge \citep{Macias18,Bartels18} has allowed \citet{Abazajian20} to rule out a number of thermal-abundance WIMP models with s-wave annihilation channels for $m_\chi \lesssim 500$ GeV. Such a result motivates the exploration of p-wave (and d-wave) models. There is a general expectation that velocity-suppressed p-wave and d-wave annihilation will be far from detectable in the Milky Way \citep[though see][]{johnson2019search}.
This is because typical DM velocities in the Galactic Center are usually thought to be $v \sim 100$ km s$^{-1}$ (based on DMO simulations), compared to values $\sim 10^{3}$ times higher during thermal freeze-out. While, from the point of view of a model builder, such a suppression is ``good'' because it evades direct-detection bounds, from the point of view of an observer or experimentalist, this level of suppression is a potential nightmare: how can we detect such a signal?

The thick black lines in the upper panels of Figure \ref{fig:constraints} reproduce the s-wave constraints published by \citet{Abazajian20}. The horizontal axis shows the WIMP mass and the vertical axis shows the velocity-averaged cross section. In our generalized language, the vertical axis specifically corresponds to the normalization $[\sigma v]_0$, defined by $\langle \sigma v \rangle = [\sigma v]_0 Q(v_{\rm rel})$ in Equation \ref{eq:sigv}, where $Q(v)=1$ for the s-wave case. Cross sections above the black line are excluded. In deriving this constraint, \citet{Abazajian20} assumed a $b\bar{b}$ annihilation channel and a plausible range of Milky Way dark matter profiles (their ``NFW'' case) as expected from DMO simulations. The dotted line shows the cross section required to produce the observed thermal abundance of dark matter \citep{Steigman12}.

The blue lines in Figure \ref{fig:constraints} provide schematic estimates for how the \citet{Abazajian20} limit would shift for s-wave (top), p-wave (middle), and d-wave (bottom) annihilation for halos that match our simulation results. Here we have made the simplistic assumption that the limit will scale in direct proportion to the integrated J-factor within $10^\circ$ of the Galactic Center. The range of NFW profiles considered in \citet{Abazajian20} have central densities quite similar to our own \texttt{M12wDMO} case, and we use this to set the reference J-factor for the constraint: $J_s(<10^\circ) \equiv J_{\rm ref} = 1.7 \times 10^{22}$ GeV$^2$ cm$^{-3}$. Note that this reference J$_s$-factor is at the lower range of those from our DMO halos (see Table \ref{tab:one}). This is mainly because, while we normalized each halo to a local solar dark matter density of $0.38$ GeV cm$^{-3}$, they assumed a median normalization of $0.28$ GeV cm$^{-3}$ at the solar radius. Some of our DMO halos also deviate somewhat from a strict NFW shape, effectively making them slightly denser at $\sim 1$ kpc than that shape would predict for a fixed solar normalization \citep[e.g.][]{lazar2020dark}.

Each blue line is a scaled version of the black line. Specifically, for each halo in our suite, we determine the ratio of the reference J-factor from \citet{Abazajian20} to the measured J-factor for the s-, p-, and d-wave cases, $J_{\rm ref}/J_q(<10^\circ)$ (for $q =$ s, p, and d), and multiply the Abazajian et al. limit by that ratio to estimate the implied limit. The top pair of panels shows how the implied limit scales for each of our DMO runs (left) and FIRE runs (right) in the s-wave case. As mentioned above, even in the DMO case, our halos tend to have larger J-factors than the halos used in \citet{Abazajian20} because of our chosen local-density normalization. The spread in lines comes about because of the halo-to-halo scatter. Interestingly, for the FIRE cases, which we regard as more realistic, the published limit lies above all of the scaled lines, though it is within $\sim 25\%$ of the upper envelope.
One way to interpret this is that the \citet{Abazajian20} limit is conservative in comparison to our expectations, but not exceedingly so, especially considering the sensitivity to the local dark matter density normalization. In this sense, our results are unlikely to affect current constraints on the s-wave cross section significantly.

The middle panels show the implied constraints for the case of p-wave annihilation, with DMO halos on the left and FIRE halos on the right. The dotted line shows the required cross section normalization for the thermal abundance in the p-wave annihilation case. The value is about $50$ times higher than the s-wave thermal abundance normalization, to make up for the fact that the total cross section is suppressed by a factor $Q = (v/c)^2$ with $v/c \sim 0.15$ during freeze-out. We work out the thermal abundance normalization explicitly for p-wave and d-wave dark matter in Appendix \ref{sec:appendix}. Note that the scaled limits in the DMO p-wave panel are more than two orders of magnitude above the thermal cross section, which would suggest that indirect detection of such a model is unlikely. The right middle panel of Figure \ref{fig:constraints} utilizes what we believe to be more realistic p-wave J-factors from our FIRE runs. The blue lines in this case suggest that a much more powerful constraint is possible for p-wave annihilation than would have been expected from DMO halos alone. In particular, we see that for low-mass WIMPs, we may be within a factor of $\sim 10-20$ of detecting a p-wave annihilation signal if the Milky Way resembles halos like \texttt{Romulus}, \texttt{Juliet}, and \texttt{Romeo}, which have among our highest p-wave J-factors.

Finally, the bottom panels show the implied constraints for the case of d-wave annihilation. The required cross section normalization for the thermal abundance (dotted line) in the d-wave annihilation case is $\sim 2000$ times higher than the s-wave thermal abundance normalization (Appendix \ref{sec:appendix}), though it is still orders of magnitude below any of the scaled constraints. Inferred constraints from the DMO halos (left) are five to six orders of magnitude above the thermal abundance normalization. For the FIRE cases, the situation is slightly better (roughly four orders of magnitude), though still far out of reach.

Of course, realistic constraints will require a careful analysis of Fermi-LAT Galactic Center observations, including templates for the stellar galactic and nuclear bulges, variations in the Galactic diffuse emission models, and a careful consideration of the shape of p-wave J-factor models of the kind shown in Figures \ref{fig:map_juliet} and \ref{fig:map_m12c}, which tend to be even less boxy in the hydrodynamic runs than would be expected in DMO runs. Based on the rough estimates presented here, such an analysis is certainly warranted.

\section{Conclusions}

We have explored how galaxy formation affects predictions for the astrophysical J-factors of Milky-Way-size dark matter halos. For a fixed particle physics model, astrophysical J-factors are directly proportional to the expected flux of Standard Model particles sourced by dark matter annihilation, and therefore provide a crucial input for dark-matter indirect detection searches in the Milky Way (see Eq. \ref{eq:Jfactor_def}).
In particular, we have used twelve FIRE zoom simulations of Milky-Way-type galaxies, along with dark-matter-only (DMO) versions of the same halos, and worked out the implications for both velocity-independent (s-wave) and velocity-dependent (p-wave and d-wave) annihilation cross sections.

One significant result is that the central dark matter velocity dispersion in the FIRE halos is {\em systematically} amplified by factors of $\sim 2.5-4$ compared to their DMO counterparts (right panel of Figure \ref{fig:ratios}). The effect of galaxy formation on the central dark matter density in the same halos is less systematic, sometimes increasing and sometimes decreasing the central density, with ratios ranging from $\sim 0.3-2.5$ (left panel of Figure \ref{fig:ratios}). For p-wave ($\propto (v/c)^2$) and d-wave ($\propto (v/c)^4$) models, our FIRE-derived J-factors are amplified by factors of $\sim 3-60$ and $\sim 10-500$, respectively, compared to the DMO runs (see Figure \ref{fig:djratio}). The FIRE halos generally produce J-factor profiles that are flatter (less peaked) towards the Galactic Center (see Figure \ref{fig:djdo}) and rounder on the sky (see Figures \ref{fig:map_juliet} and \ref{fig:map_m12c}). Note that these differences occur despite the fact that we have normalized all of our halos to have the same local (solar location) dark matter density. That is, these results are driven by differences in the {\em shape} of the underlying dark matter density and velocity dispersion profiles brought about by galaxy formation processes.

One basic implication of our results is that we expect p-wave and d-wave dark matter annihilation to produce more easily detectable signals than would have been expected from DMO halos. For example, while it is typical to suspect that p-wave annihilation ($\propto (v/c)^2$) is suppressed to undetectable levels\footnote{Though see \citet{johnson2019search}, who have investigated the role of the central black hole in altering the dark matter velocity dispersion.} in the Milky Way today (where $v \ll c$), we showed in Section \ref{sec:implications} that this may not be the case. With the amplified velocities we see in our FIRE runs, the detection of (or interesting constraints on) thermal-relic p-wave dark matter may not be too far out of reach. In particular, by scaling the s-wave constraints from Fermi-LAT derived by \citet{Abazajian20}, we showed that a similar analysis could bring p-wave constraints to within a factor of $\sim 10$ of the naive thermal cross section (see the right middle panel of Figure \ref{fig:constraints}). Future analyses that include detailed simulation-inspired priors on the shape of the annihilation signal in these models could potentially approach the thermal value.

Another result worth highlighting is that we see significant scatter in the density profiles (and associated s-wave J-factor profiles) of our FIRE runs. As we begin to explore joint constraints from multiple galaxies (e.g. M31 and the Milky Way), it will be important to allow for the possibility that even halos with similar masses can have a large scatter in J-factor normalization. Figure \ref{fig:djdo}, for example, shows that the variance in our FIRE s-wave J-factors is larger than one order of magnitude at 1 degree from the Galactic Center, despite the fact that our halos have been fixed to have the same local dark matter density at the solar location.
\section*{Acknowledgements}

We dedicate this paper to the memory of our dear friend and colleague José Antonio Florez Velázquez. We would like to thank Tyler Kelly for helpful suggestions and assistance in the analysis. We thank Kev Abazajian and Louis Strigari for useful discussions. DM, JSB, and FJM were supported by NSF grant AST-1910346. ZH is supported by a Gary A. McCue postdoctoral fellowship at UC Irvine. PH is supported by NSF Research Grants 1911233 \& 20009234, NSF CAREER grant 1455342, and NASA grants 80NSSC18K0562 and HST-AR-15800.001-A. Numerical calculations were run on the Caltech compute cluster ``Wheeler,'' allocations FTA-Hopkins/AST20016 supported by the NSF and TACC, and NASA HEC SMD-16-7592. AW received support from: NSF grants CAREER 2045928 and 2107772; NASA ATP grant 80NSSC20K0513; HST grants AR-15809 and GO-15902 from STScI; a Scialog Award from the Heising-Simons Foundation; and a Hellman Fellowship. MBK acknowledges support from NSF CAREER award AST-1752913, NSF grant AST-1910346, NASA grant NNX17AG29G, and HST-AR-15006, HST-AR-15809, HST-GO-15658, HST-GO-15901, HST-GO-15902, HST-AR-16159, and HST-GO-16226 from the Space Telescope Science Institute, which is operated by AURA, Inc., under NASA contract NAS5-26555.

\section*{Data Availability}

The data supporting the plots within this article are available on reasonable request to the corresponding author. A public version of the GIZMO code is available at \url{http://www.tapir.caltech.edu/~phopkins/Site/GIZMO.html}.

\bibliographystyle{mnras}
\section{Introduction}\label{sec:intro}

The success of deep neural networks (DNNs) across different domains has created the desire to apply them in safety-critical applications such as autonomous vehicles~\citep{kendall2019learning,lechner2020neural} and healthcare systems~\citep{shen2017deep}. The fundamental challenge for the deployment of DNNs in these domains is certifying their safety~\citep{amodei2016concrete}. Thus, formal safety verification of DNNs, both in isolation and in closed control loops~\citep{KatzBDJK17,HenzingerLZ21,GehrMDTCV18,TjengXT19,DuttaCS19}, has become an active research topic.

Bayesian neural networks (BNNs) are a family of neural networks that place distributions over their weights~\citep{neal2012bayesian}. This allows learning uncertainty in the data and in the network's prediction, while preserving the strong modelling capabilities of DNNs~\citep{MacKay92a}. In particular, BNNs can learn arbitrary data distributions from much simpler (e.g.~Gaussian) weight distributions. This makes BNNs very appealing for robotic and medical applications~\citep{mcallister2017concrete} where uncertainty is a central component of the data.

Despite the large body of literature on verifying the safety of DNNs, the formal safety verification of BNNs has received less attention. Notably, \citet{CardelliKLPPW19,WickerLPK20,MichelmoreWLCGK20} have proposed sampling-based techniques for obtaining probabilistic guarantees about BNNs. Although these approaches provide some insight into BNN safety, they suffer from two key limitations. First, sampling provides only bounds on the probability of the BNN's safety, which is insufficient for systems with critical safety implications. For instance, having an autonomous vehicle with a $99.9\%$ safety guarantee is still insufficient for deployment if millions of vehicles are deployed. Second, samples can only simulate the system for a finite time, making it impossible to reason about the system's safety over an unbounded time horizon.

\begin{wrapfigure}{R}{7cm} \centering \includegraphics[width=7cm]{standalone.pdf} \caption{BNNs are typically unsafe by default. Top figure: the posterior of a typical BNN has unbounded support, resulting in a non-zero probability of producing an unsafe action. Bottom figure: restricting the support of the weight distributions via rejection sampling ensures BNN safety.} \label{fig:intro} \end{wrapfigure}

In this work, we study the safety verification problem for BNN policies in safety-critical systems over the infinite time horizon. Formally, we consider discrete-time closed-loop systems defined by a dynamical system and a BNN policy. Given a set of initial states and a set of unsafe (or bad) states, the goal of the safety verification problem is to verify that no system execution starting in an initial state can reach an unsafe state. Unlike the existing literature, which considers the probability of safety, we verify {\em sure safety}, i.e.~safety of every execution of the system. In particular, we present a method for computing {\em safe weight sets} for which every system execution is safe as long as the BNN samples its weights from this set.

Our approach of restricting the support of the weight distribution is necessary because BNNs with Gaussian weight priors typically produce output posteriors with unbounded support. Consequently, there is a low but non-zero probability for the output variable to lie in an unsafe region, see Figure \ref{fig:intro}. This implies that BNNs are usually unsafe by default.
We therefore consider the more general problem of computing safe weight sets. Verifying that a weight set is safe allows re-calibrating the BNN policy by rejecting unsafe weight samples in order to guarantee safety. As most BNNs employ uni-modal weight priors, e.g.~Gaussians, we naturally adopt weight sets in the form of products of intervals centered at the means of the BNN's weight distributions.

To verify the safety of a weight set, we search for a safety certificate in the form of a {\em safe positive invariant} (also known as a {\em safe inductive invariant}). A safe positive invariant is a set of system states that contains all initial states, is closed under the system dynamics, and does not contain any unsafe state. The key advantage of using safe positive invariants is that their existence implies {\em infinite time horizon safety}. We parametrize safe positive invariant candidates by (deterministic) neural networks that classify system states for determining set inclusion. Moreover, we phrase the search for an invariant as a learning problem. A separate verifier module then checks whether a candidate is indeed a safe positive invariant by checking the required properties via constraint solving. In case the verifier finds a counterexample demonstrating that the candidate violates the safe positive invariant condition, we re-train the candidate on the found counterexample. We repeat this procedure until the verifier concludes that the candidate is a safe positive invariant, ensuring that the system is safe.

The safe weight set obtained by our method can be used for safe exploration reinforcement learning. In particular, generating rollouts during learning by sampling from the safe weight set allows an exploration of the environment while ensuring safety. Moreover, projecting the (mean) weights onto the safe weight set after each gradient update further ensures that the improved policy stays safe.

\textbf{Contributions} Our contributions can be summarized as follows: \begin{compactenum} \item We define a safety verification problem for BNN policies which overcomes the unbounded posterior issue by computing and verifying safe weight sets. The problem generalizes the sure safety verification of BNNs, and solving it allows re-calibrating BNN policies via rejection sampling to guarantee safety. \item We introduce a method for computing safe weight sets for BNN policies in the form of products of intervals around the means of the BNN's weights. To verify the safety of a weight set, our novel algorithm learns a safe positive invariant in the form of a deterministic neural network. \item We evaluate our methodology on a series of benchmark applications, including non-linear systems and non-Lyapunovian safety specifications. \end{compactenum}

\section{Related work}\label{sec:relatedwork}

\textbf{Verification of feed-forward NNs} Verification of robustness and safety properties of feed-forward DNNs has received much attention and remains an active research topic~\citep{KatzBDJK17,HenzingerLZ21,GehrMDTCV18,RuanHK18,BunelTTKM18,TjengXT19}. As the majority of verification techniques were designed for deterministic NNs, they cannot be readily applied to BNNs. The safety verification of feed-forward BNNs has been considered in \citep{CardelliKLPPW19} by using samples to obtain statistical guarantees on the safety probability. The work of~\citep{WickerLPK20} also presents a sampling-based approach; however, it provides certified lower bounds on the safety probability.
The approaches discussed above consider NNs in isolation: they can provide input-output guarantees on a NN, but they are unable to reason holistically about the safety of the system in which the NN operates. Verifying the safety of a NN interlinked with a system requires different techniques than standalone NN verification; we discuss such techniques in the rest of this section. \textbf{Finite time horizon safety of BNN policies} The work in~\citep{MichelmoreWLCGK20} extends the method of~\citep{CardelliKLPPW19} to verifying safety in closed-loop systems with BNN policies. However, similar to the standalone setting of~\citep{CardelliKLPPW19}, their method obtains only statistical guarantees on the safety probability, and only for the system's execution over a finite time horizon. \textbf{Safe RL} Safe reinforcement learning has been primarily studied in the form of constrained Markov decision processes (CMDPs)~\citep{altman1999constrained,Geibel06}. Compared to standard MDPs, an agent acting in a CMDP must keep an expected auxiliary cost, aggregated over an episode, below a given threshold. The CMDP framework has been the basis of several RL algorithms~\citep{uchibe2007constrained}, notably Constrained Policy Optimization (CPO)~\citep{achiam2017constrained}. While these algorithms perform well, the key limitation of CMDPs is that the constraint is only satisfied in expectation, which makes violations unlikely but nonetheless possible. Consequently, the CMDP framework is unsuited for systems where constraint violations are critical. \textbf{Lyapunov-based stability} Safety in the sense of ``stability'', i.e.~always returning to a set of ground states, can be proved by Lyapunov functions~\citep{BerkenkampTS017}. Lyapunov functions were originally introduced to study the stability of dynamical systems~\citep{khalil2002nonlinear}. Intuitively, a Lyapunov function assigns a non-negative value to each state and is required to decrease with respect to the system's dynamics at any state outside of the stable set. A Lyapunov-based method is proposed in~\citep{ChowNDG18} to ensure safety in CMDPs during training. Recently, the work of~\citep{ChangRG19} presented a method for learning a policy together with a neural network Lyapunov function which guarantees the stability of the policy. Similarly to our work, their learning procedure is counterexample-based. However, unlike~\citep{ChangRG19}, our work considers BNN policies and safety definitions that do not require returning to a set of ground states. \textbf{Barrier functions for dynamical systems} Barrier functions can be used to prove infinite time horizon safety of dynamical systems~\citep{PrajnaJ04,PrajnaJP07}. Recent works have considered learning neural network barrier functions~\citep{ZhaoZC020}, and a counterexample-based learning procedure is presented in~\citep{PeruffoAA21}. \textbf{Finite time horizon safety of NN policies} Safety verification of continuous-time closed-loop systems with deterministic NN policies has been considered in~\citep{IvanovWAPL19,gruenbacher2020lagrangian}, where it is reduced to reachability analysis in hybrid systems~\citep{ChenAS13}. The work of~\citep{DuttaCS19} presents a method which computes a polynomial approximation of the NN policy in order to allow an efficient approximation of the reachable state set. Both works consider finite time horizon systems.
Our safety certificate most closely resembles inductive invariants for safety analysis in programs~\citep{Floyd1967} and positive invariants for dynamical systems~\citep{blanchini2008set}. \section{Preliminaries and problem statement}\label{sec:prelims} We consider a discrete-time dynamical system \[ \mathbf{x}_{t+1} = f(\mathbf{x}_t,\mathbf{u}_t),\, \mathbf{x}_0\in\mathcal{X}_0. \] The dynamics are defined by the function $f:\mathcal{X}\times \mathcal{U}\rightarrow \mathcal{X}$, where $\mathcal{X}\subseteq \mathbb{R}^m$ is the state space, $\mathcal{U}\subseteq \mathbb{R}^n$ is the control action space, $\mathcal{X}_0\subseteq \mathcal{X}$ is the set of initial states and $t\in\mathbb{N}_{\geq 0}$ denotes the (discretized) time step. At each time step $t$, the action is defined by the (possibly probabilistic) positional policy $\pi:\mathcal{X}\rightarrow\mathcal{D}(\mathcal{U})$, which maps the current state $\mathbf{x}_t$ to a distribution $\pi(\mathbf{x}_t)\in\mathcal{D}(\mathcal{U})$ over the set of actions. Here, $\mathcal{D}(\mathcal{U})$ denotes the set of all probability distributions over $\mathcal{U}$. The next action is sampled according to $\mathbf{u}_t\sim \pi(\mathbf{x}_t)$, and together with the current state $\mathbf{x}_t$ it gives rise to the next state $\mathbf{x}_{t+1}$ according to the dynamics $f$. Thus, the dynamics $f$ together with the policy $\pi$ form a closed-loop system (or a feedback loop system). The aim of the policy is to maximize the expected cumulative reward (possibly discounted) from each starting state. Given a set of initial states $\mathcal{X}_0$, we say that a sequence of state-action pairs $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ is a trajectory if $\mathbf{x}_0\in \mathcal{X}_0$ and, for each $t\in\mathbb{N}_{\geq 0}$, we have $\mathbf{u}_t\in\mathsf{supp}(\pi(\mathbf{x}_t))$ and $\mathbf{x}_{t+1} = f(\mathbf{x}_t,\mathbf{u}_t)$. A neural network (NN) is a function $\pi:\mathbb{R}^m\rightarrow \mathbb{R}^n$ that consists of several sequentially composed layers, $\pi=l_k\circ\dots\circ l_1$. Each layer $l_i$ is parametrized by learned weight values of the appropriate dimensions and an activation function $a$, \[ l_i(\mathbf{x}) = a(\mathbf{W}_i\mathbf{x}+\mathbf{b}_i), \mathbf{W}_i\in \mathbb{R}^{n_i\times m_i}, \mathbf{b}_i\in \mathbb{R}^{n_i}. \] In this work, we consider ReLU activation functions $a(\mathbf{x})=\textrm{ReLU}(\mathbf{x})=\max\{\mathbf{x},\mathbf{0}\}$, although other piecewise-linear activations such as the leaky-ReLU~\citep{JarrettKRL09} and PReLU~\citep{HeZRS15} are applicable as well. When used as a policy, a (deterministic) NN formally maps each system state to a Dirac-delta distribution which picks a single action with probability $1$.
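To make the closed-loop semantics concrete, the following minimal sketch simulates a finite prefix of a trajectory for a hypothetical linear dynamics function and a two-layer ReLU policy with fixed, already-sampled weights; all numerical values are illustrative placeholders rather than parameters of our benchmarks.

\begin{verbatim}
import numpy as np

# Illustrative closed loop: linear dynamics f(x, u) = Ax + Bu and a
# two-layer policy with a ReLU hidden layer and a linear output layer.
A = np.array([[1.0, 0.05], [0.0, 1.0]])
B = np.array([[0.0], [0.5]])
W1, b1 = np.array([[1.0, 0.2], [-0.4, 1.0]]), np.zeros(2)
W2, b2 = np.array([[-0.8, -0.6]]), np.zeros(1)

def f(x, u):
    return A @ x + B @ u

def policy(x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def rollout(x0, steps=100):
    """Generate a finite prefix of a trajectory (x_t, u_t)_{t>=0}."""
    traj, x = [], x0
    for _ in range(steps):
        u = policy(x)
        traj.append((x, u))
        x = f(x, u)
    return traj

prefix = rollout(np.array([0.3, -0.1]))
\end{verbatim}

Re-sampling the weights at every step from a posterior distribution, instead of keeping them fixed, yields the BNN policy semantics discussed next.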
In Bayesian neural networks (BNNs), weights are random variables and their values are sampled, each according to some distribution. Each vector of sampled weights then gives rise to a (deterministic) neural network. Given a training set $\mathcal{D}$, in order to train the BNN we assume a prior distribution $p(\mathbf{w},\mathbf{b})$ over the weights. The learning then amounts to computing the posterior distribution $p(\mathbf{w},\mathbf{b}\mid \mathcal{D})$ via an application of Bayes' rule. As analytical inference of the posterior is in general infeasible due to the non-linearity introduced by the BNN architecture~\citep{MacKay92a}, practical training algorithms rely on approximate inference, e.g.~Hamiltonian Monte Carlo~\citep{neal2012bayesian}, variational inference~\citep{blundell2015weight} or dropout~\citep{GalG16}. When the policy in a dynamical system is a BNN, the policy maps each system state $\mathbf{x}_t$ to a probability distribution $\pi(\mathbf{x}_t)$ over the action space. Informally, this distribution is defined as follows. First, BNN weights $\mathbf{w}$, $\mathbf{b}$ are sampled according to the posterior BNN weight distribution, and the sampled weights give rise to a deterministic NN policy $\pi_{\mathbf{w},\mathbf{b}}$. The action of the system is then defined as $\mathbf{u}_t=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}_t)$. The formal definition of the distribution $\pi(\mathbf{x}_t)$ is straightforward and proceeds by considering the product measure of the distributions of all weights. \textbf{Problem statement} We now define the two safety problems that we consider in this work. The first problem considers feed-forward BNNs, and the second considers closed-loop systems with BNN policies. While our solution to the first problem will serve as a subprocedure in our solution to the second, we state it as a separate problem because we believe it to be of independent interest for the safety analysis of feed-forward BNNs. Let $\pi$ be a BNN. Suppose that the vector $(\mathbf{w},\mathbf{b})$ of BNN weights in $\pi$ has dimension $p+q$, where $p$ is the dimension of $\mathbf{w}$ and $q$ is the dimension of $\mathbf{b}$. For each $1\leq i\leq p$, let $\mu_i$ denote the mean of the random variable $w_i$; similarly, for each $1\leq i\leq q$, let $\mu_{p+i}$ denote the mean of the random variable $b_i$. Then, for each $\epsilon\in [0,\infty]$, we define the set $W^{\pi}_{\epsilon}$ of weight vectors via \[ W^{\pi}_{\epsilon} = \prod_{i=1}^{p+q}[\mu_i-\epsilon,\mu_i+\epsilon] \subseteq \mathbb{R}^{p+q}. \] We now proceed to defining our safety problem for feed-forward BNNs. Suppose that we are given a feed-forward BNN $\pi$, a set $\mathcal{X}_0\subseteq \mathbb{R}^m$ of input points and a set $\mathcal{X}_u\subseteq \mathbb{R}^n$ of unsafe (or bad) output points. For a concrete vector $(\mathbf{w},\mathbf{b})$ of weight values, let $\pi_{\mathbf{w},\mathbf{b}}$ be the (deterministic) NN defined by these weight values. We say that $\pi_{\mathbf{w},\mathbf{b}}$ is safe if for each $\mathbf{x}\in\mathcal{X}_0$ we have $\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x})\not\in\mathcal{X}_u$, i.e.~if evaluating $\pi_{\mathbf{w},\mathbf{b}}$ on the input points never leads to an unsafe output. \begin{adjustwidth}{1cm}{} \begin{problem}[Feed-forward BNNs]\label{problem1} Let $\pi$ be a feed-forward BNN, $\mathcal{X}_0\subseteq \mathbb{R}^m$ a set of input points and $\mathcal{X}_u\subseteq \mathbb{R}^n$ a set of unsafe output points. Let $\epsilon\in [0,\infty]$. Determine whether each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe. \end{problem} \end{adjustwidth}
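To illustrate the role of $W^{\pi}_{\epsilon}$ in Problem~\ref{problem1}, the following sketch draws posterior samples for a hypothetical single-layer BNN with a diagonal Gaussian posterior and rejects every sample that falls outside the interval product; each accepted sample induces one deterministic network $\pi_{\mathbf{w},\mathbf{b}}$ from the set being verified. The posterior parameters are assumptions made for the example.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical diagonal-Gaussian posterior over one weight matrix and one
# bias vector (stand-ins for the full vectors w and b of dimension p and q).
mu_W, sd_W = np.zeros((2, 3)), 0.1 * np.ones((2, 3))
mu_b, sd_b = np.zeros(2), 0.1 * np.ones(2)

def sample_in_W_eps(eps, max_tries=10_000):
    """Draw posterior samples until one lies in the interval product W_eps."""
    for _ in range(max_tries):
        W, b = rng.normal(mu_W, sd_W), rng.normal(mu_b, sd_b)
        if np.all(np.abs(W - mu_W) <= eps) and np.all(np.abs(b - mu_b) <= eps):
            return W, b   # induces one deterministic network pi_{w,b}
    raise RuntimeError("acceptance rate too low for this eps")

W, b = sample_in_W_eps(eps=0.2)
\end{verbatim}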
Next, we define our safety problem for closed-loop systems with BNN policies. Consider a closed-loop system defined by a dynamics function $f$, a BNN policy $\pi$ and an initial set of states $\mathcal{X}_0$, and let $\mathcal{X}_u\subseteq\mathcal{X}$ be a set of unsafe (or bad) states. We say that a trajectory $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ is safe if $\mathbf{x}_t\not\in \mathcal{X}_u$ for all $t\in\mathbb{N}_{\geq 0}$, i.e.~if it never reaches an unsafe state. Note that this definition requires safety of the trajectory over the infinite time horizon. Given $\epsilon\in[0,\infty]$, define the set $\mathsf{Traj}^{f,\pi}_{\epsilon}$ to be the set of all system trajectories in which each sampled weight vector belongs to $W^{\pi}_{\epsilon}$. \begin{adjustwidth}{1cm}{} \begin{problem}[Closed-loop systems with BNN policies]\label{problem2} Consider a closed-loop system defined by a dynamics function $f$, a BNN policy $\pi$ and a set of initial states $\mathcal{X}_0$. Let $\mathcal{X}_u$ be a set of unsafe states. Let $\epsilon\in [0,\infty]$. Determine whether each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe. \end{problem} \end{adjustwidth} Note that the question of whether the BNN policy $\pi$ is safe (i.e.~whether each trajectory of the system is safe) is the special case of the above problem with $\epsilon=\infty$. \section{Experiments}\label{sec:exp} We perform an experimental evaluation of our proposed method for learning positive invariant neural networks that prove infinite time horizon safety. Our evaluation consists of an ablation study in which we disable different core components of Algorithm~\ref{algorithm:ce} and measure the effect on the obtained safety bounds and on the algorithm's runtime. First, we run the algorithm without any re-training on the counterexamples. Second, we run Algorithm~\ref{algorithm:ce} with $D_{\textrm{spec}}$ initialized with samples from $\mathcal{X}_0$ and $\mathcal{X}_u$ only. Finally, we bootstrap the positive invariant network by initializing $D_{\textrm{spec}}$ with random samples from the state space, labeled with Monte-Carlo estimates of reaching the unsafe states. We consider environments with piecewise-linear dynamics functions and initial and unsafe state sets defined by linear constraints, so that the verification steps of our algorithm can be reduced to MILP-solving using Gurobi~\citep{gurobi}. Details of our evaluation can be found in the Supplementary Material. Code is publicly available\footnote{\url{https://github.com/mlech26l/bayesian_nn_safety}}. We conduct our evaluation on three benchmark environments that differ in terms of complexity and safety specification. For each benchmark-ablation pair we train two BNN policies: one with Bayesian weights from the second layer on (with $\mathcal{N}(0,0.1)$ prior) and one with Bayesian weights in all layers (with $\mathcal{N}(0,0.05)$ prior). Recall that, as shown for our BNN encoding in Section~\ref{sec:feedforward}, the encoding of the BNN input layer requires additional constraints and extra care, since we do not know the signs of the input neuron values. We therefore consider these two kinds of BNN policies in order to study how the encoding of the input layer affects the safe weight set computation. Our first benchmark is an unstable linear dynamical system of the form $\mathbf{x}_{t+1} = A\mathbf{x}_t + B\mathbf{u}_t$. A BNN policy stabilizes the system towards the point $(0,0)$. The set of unsafe states is defined as $\{\mathbf{x}\in\mathbb{R}^2\mid\|\mathbf{x}\|_{\infty}\geq 1.2\}$ and the set of initial states as $\{\mathbf{x}\in\mathbb{R}^2\mid \|\mathbf{x}\|_{\infty}\leq 0.6\}$. Our second benchmark is the inverted pendulum task, a classical non-linear control problem. The two state variables $a$ and $b$ represent the angle and angular velocity of a pendulum that must be balanced in an upright position.
The actions produced by the policy correspond to a torque applied to the anchor point. Our benchmark is a variant of the original problem in which the non-linearity in $f$ is expressed by piecewise-linear functions. The resulting system, even with a trained policy, is highly unstable, as shown in Figure~\ref{fig:pend}. The set of initial states corresponds to pendulum states in an almost upright position with small angular velocity, while the set of unsafe states represents the pendulum falling down. Figure~\ref{fig:pend} visualizes the system and the learned invariant's decision boundary for the inverted pendulum task. \begin{figure} \centering \subcaptionbox{Vector field of the system when controlled by the posterior's mean. Green/red arrows indicate empirical safe/unsafe estimates.}% [.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/vector_field.png}} \hfill \subcaptionbox{First guess of $g^{\mathsf{Inv}}$. The green area shows $g^{\mathsf{Inv}}>0$, the orange area $g^{\mathsf{Inv}}<0$. Red markers show the found counterexample.}% [.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/step0.png}} \hfill \subcaptionbox{Final $g^{\mathsf{Inv}}$ proving the safety of the system. Previous counterexamples marked in red.}% [.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/step2.png}} \caption{Invariant learning shown on the inverted pendulum benchmark.} \label{fig:pend} \end{figure} \begin{table}[] \centering \caption{Results of our benchmark evaluation. The epsilon values are multiples of the weights' standard deviation $\sigma$. We evaluated several epsilon values; the table shows the largest value that could be proven safe. A dash (``-'') indicates an unsuccessful invariant search. Runtimes are given in seconds.} \begin{tabular}{l|rr|rr|rr}\toprule \multirow{2}{*}{Environment} & \multicolumn{2}{c|}{No re-training} & \multicolumn{2}{c|}{Init $D_{\textrm{spec}}$ with $\mathcal{X}_0$ and $\mathcal{X}_u$} & \multicolumn{2}{c}{Bootstrapping $D_{\textrm{spec}}$} \\ & Verified & Runtime & Verified & Runtime & Verified & Runtime \\\cmidrule(r){1-7} Unstable LDS & - & 3 & $1.5\sigma$ & 569 & $2\sigma$ & 760 \\ Unstable LDS (all) & $0.2\sigma$ & 3 & $0.5\sigma$ & 6 & $0.5\sigma$ & 96 \\ Pendulum & - & 2 & $2\sigma$ & 220 & $2\sigma$ & 40 \\ Pendulum (all) & - & 2 & $0.2\sigma$ & 1729 & $1.5\sigma$ & 877 \\ Collision avoid. & - & 2 & - & - & $2\sigma$ & 154 \\ Collision avoid. (all) & - & 2 & - & - & $1.5\sigma$ & 225 \\\bottomrule \end{tabular} \label{tab:results} \end{table} While the previous two benchmarks concern stability specifications, our third benchmark evaluates our method on a non-Lyapunovian safety specification: a collision avoidance task in which the system does not stabilize to the same set of terminal states in every execution. The system is described by three variables. The first variable specifies the agent's own vertical position, while the other two variables specify the intruder's vertical and horizontal position. The objective is to avoid colliding with the intruder, who is moving toward the agent, by issuing lateral movement commands as the policy's actions. The initial states represent far-away intruders, and crashes with the intruder define the unsafe states. Table~\ref{tab:results} shows the results of our evaluation. Our results demonstrate that re-training on the counterexamples is the key component determining our algorithm's success.
In all cases except the linear dynamical system, the initial guess of the invariant candidate violates the invariant condition. Moreover, bootstrapping $D_{\textrm{spec}}$ with random points labeled by empirical estimates of reaching the unsafe states improves the search process significantly. \section{Main results} In this section we present our method for solving the safety problems defined in the previous section: Section~\ref{sec:feedforward} considers Problem~\ref{problem1} and Section~\ref{sec:loopcertificate} considers Problem~\ref{problem2}. Both problems concern safety verification with respect to a given value of $\epsilon\in[0,\infty]$, so in Section~\ref{sec:epscompute} we present our method for computing a value of $\epsilon$ for which our solutions to Problems~\ref{problem1} and~\ref{problem2} can verify safety. We then show in Section~\ref{sec:safeexploration} how our new methodology can be adapted to the safe exploration RL setting. \subsection{Safe weight sets for feed-forward BNNs}\label{sec:feedforward} Consider a feed-forward BNN $\pi$, a set $\mathcal{X}_0\subseteq\mathbb{R}^m$ of inputs and a set $\mathcal{X}_u\subseteq\mathbb{R}^n$ of unsafe outputs of the BNN. Fix $\epsilon\in[0,\infty]$. To solve Problem~\ref{problem1}, we show that the decision problem of whether each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe can be encoded as a system of constraints and reduced to constraint solving. Suppose that $\pi=l_k\circ\dots\circ l_1$ consists of $k$ layers, with ReLU activations applied in the hidden layers. For $0\leq i\leq k-1$, denote by $\mathbf{M}_i$ the matrix of means of the random weight matrix connecting the $i$-th to the $(i+1)$-th layer, where the $0$-th layer is the input layer and the $k$-th layer is the output layer, and similarly denote by $\mathbf{m}_i$ the vector of means of the corresponding random bias vector. The real variables of our system of constraints are as follows, each of the appropriate dimension: \begin{compactitem} \item $\mathbf{x}_0$ encodes the BNN inputs and $\mathbf{x}_k$ encodes the BNN outputs; \item $\mathbf{x}_1^{\textrm{in}}$, \dots, $\mathbf{x}_{k-1}^{\textrm{in}}$ encode the vectors of input values of the neurons in the hidden layers; \item $\mathbf{x}_1^{\textrm{out}}$, \dots, $\mathbf{x}_{k-1}^{\textrm{out}}$ encode the vectors of output values of the neurons in the hidden layers; \item $\mathbf{x}_{0,\textrm{pos}}$ and $\mathbf{x}_{0,\textrm{neg}}$ are auxiliary variable vectors of the same dimension as $\mathbf{x}_0$, used to distinguish between positive and negative NN inputs in $\mathbf{x}_0$. \end{compactitem} We use $\mathbf{1}$ to denote the vector/matrix with all entries equal to $1$, of the appropriate dimension defined by the formula in which it appears.
Our system of constraints is as follows: \begin{equation}\label{eq:inputoutput}\tag{Input-output conditions} \mathbf{x}_0 \in \mathcal{X}_0, \hspace{0.5cm} \mathbf{x}_k \in \mathcal{X}_u \end{equation} \begin{equation}\label{eq:relu}\tag{ReLU encoding} \mathbf{x}_i^{\textrm{out}} = \textrm{ReLU}(\mathbf{x}_i^{\textrm{in}}), \text{ for each } 1\leq i\leq k-1 \end{equation} \begin{equation}\label{eq:weightshidden}\tag{BNN hidden layers} \begin{split} &(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_{i+1}^{\textrm{in}}, \text{ for each } 1\leq i\leq k-1 \\ &\mathbf{x}_{i+1}^{\textrm{in}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}), \text{ for each } 1\leq i\leq k-1 \end{split} \end{equation} \begin{equation}\label{eq:weightsinput}\tag{BNN input layer} \begin{split} &\mathbf{x}_{0,\textrm{pos}} = \textrm{ReLU}(\mathbf{x}_0), \hspace{0.5cm} \mathbf{x}_{0,\textrm{neg}} = -\textrm{ReLU}(-\mathbf{x}_0) \\ &(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_1^{\textrm{in}} \\ &\mathbf{x}_1^{\textrm{in}} \leq (\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1}) \end{split} \end{equation} Denote by $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ the system of constraints defined above, where we identify $\mathbf{x}_k^{\textrm{in}}$ with the output vector $\mathbf{x}_k$. The proof of Theorem~\ref{thm:constraints} shows that it encodes that $\mathbf{x}_0\in\mathcal{X}_0$ is an input point for which the corresponding output point of $\pi$ is unsafe, i.e.~$\mathbf{x}_k=\pi(\mathbf{x}_0)\in\mathcal{X}_u$. The first constraint encodes the input and output conditions. The second constraint encodes the ReLU input-output relation for each hidden-layer neuron. The remaining constraints encode the relation between neuron values in successive layers of the BNN, as well as the fact that the sampled BNN weight vector lies in $W^{\pi}_{\epsilon}$. For hidden layers, we know that the output value of each neuron is nonnegative, i.e.~$\mathbf{x}_i^{\textrm{out}}\geq\mathbf{0}$ for the $i$-th hidden layer with $1\leq i\leq k-1$, and so \[ (\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}. \] Hence, the relation between the sampled weights and the neurons in the successive layer, together with the fact that the sampled weights are in $W^{\pi}_{\epsilon}$, is encoded by the two inequalities of the BNN hidden layers constraint. For the input layer, however, we do not know the signs of the input neuron values $\mathbf{x}_0$, so we introduce auxiliary variables $\mathbf{x}_{0,\textrm{pos}}=\textrm{ReLU}(\mathbf{x}_0)$ and $\mathbf{x}_{0,\textrm{neg}}=-\textrm{ReLU}(-\mathbf{x}_0)$, which satisfy $\mathbf{x}_0=\mathbf{x}_{0,\textrm{pos}}+\mathbf{x}_{0,\textrm{neg}}$ with $\mathbf{x}_{0,\textrm{pos}}\geq\mathbf{0}$ and $\mathbf{x}_{0,\textrm{neg}}\leq\mathbf{0}$. This allows encoding the weight relation between the input layer and the first hidden layer, together with the fact that the sampled weight vector is in $W^{\pi}_{\epsilon}$, by the two inequalities of the BNN input layer constraint. Theorem~\ref{thm:constraints} shows that Problem~\ref{problem1} is equivalent to solving the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Its proof can be found in the Supplementary Material. \begin{theorem}\label{thm:constraints} Let $\epsilon\in[0,\infty]$.
Then each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe if and only if the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is not satisfiable. \end{theorem} \textbf{Solving the constraints} Observe that $\epsilon$, $\mathbf{M}_i$ and $\mathbf{m}_i$, $0\leq i\leq k-1$, are constant values that are known at the time of constraint encoding. Thus, in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$, only the ReLU constraints and possibly the input-output conditions are non-linear. Depending on the form of $\mathcal{X}_0$ and $\mathcal{X}_u$ and on how we encode the ReLU constraints, we may solve the system $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ in several ways: \begin{compactenum} \item \textbf{MILP.} It is shown in~\citep{lomuscio2017approach,dutta2018output,TjengXT19} that the ReLU relation between two real variables can be encoded via mixed-integer linear programming (MILP) by introducing 0/1-integer variables that encode whether a given neuron is active or inactive. Hence, if $\mathcal{X}_0$ and $\mathcal{X}_u$ are given by linear constraints, we may solve $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ with a MILP solver (a sketch is given below, after this list). The ReLU encoding requires each neuron value to be bounded, which is ensured if $\mathcal{X}_0$ is a bounded set and $\epsilon<\infty$. \item \textbf{Reluplex.} In order to allow unbounded $\mathcal{X}_0$ and $\epsilon=\infty$, we may use algorithms based on the Reluplex calculus~\citep{KatzBDJK17,KatzHIJLLSTWZDK19} to solve $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Reluplex is an extension of the standard simplex algorithm for solving systems of linear constraints that is designed to also handle ReLU constraints. While Reluplex does not impose the boundedness condition, it is in general less scalable than MILP-solving. \item \textbf{NRA-SMT.} Alternatively, if $\mathcal{X}_0$ or $\mathcal{X}_u$ are given by non-linear constraints, we may solve the system using an NRA-SMT solver (non-linear real arithmetic satisfiability modulo theories), e.g.~dReal~\citep{gao2012delta}. To use an NRA-SMT solver, we can replace the 0/1-integer variables of the ReLU encoding with real variables that satisfy the constraint $x(x-1)=0$. While NRA-SMT is less scalable than MILP, it has been used in previous works on RL stability verification~\citep{ChangRG19}. \end{compactenum} \textbf{Safety via rejection sampling} As discussed in Section~\ref{sec:intro}, once the safety of the NNs in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ has been verified, we can ``re-calibrate'' the BNN to reject sampled weights which are not in $W^{\pi}_{\epsilon}$. Hence, rejection sampling gives rise to a safe BNN.
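Returning to the MILP option (1), the following sketch encodes $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ with Gurobi for a hypothetical two-layer network whose first layer is deterministic and whose output layer is Bayesian with interval weights, mirroring the ``Bayesian weights from the second layer on'' setting of Section~\ref{sec:exp}, which avoids the input-layer split. All weights and sets are illustrative placeholders, not parameters of our benchmarks.

\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

# Hypothetical two-layer policy: deterministic first layer (W1, b1) and a
# Bayesian output layer with posterior means (M, m); all values illustrative.
W1, b1 = np.array([[1.0, -0.5], [0.3, 0.8]]), np.array([0.1, -0.2])
M, m = np.array([[0.7, -1.2]]), np.array([0.05])
eps = 0.1

mdl = gp.Model("phi")
x0 = mdl.addVars(2, lb=-0.6, ub=0.6, name="x0")     # X_0 = [-0.6, 0.6]^2
# Finite bounds are needed for the internal big-M ReLU translation; they
# are valid here because X_0 is bounded and eps < infinity.
h_in = mdl.addVars(2, lb=-10.0, ub=10.0, name="h_in")
h_out = mdl.addVars(2, lb=0.0, ub=10.0, name="h_out")
y = mdl.addVars(1, lb=-GRB.INFINITY, name="y")

for j in range(2):
    mdl.addConstr(h_in[j] ==
                  gp.quicksum(W1[j, i] * x0[i] for i in range(2)) + b1[j])
    mdl.addGenConstrMax(h_out[j], [h_in[j]], constant=0.0)  # ReLU encoding

# BNN hidden layers constraint:
# (M - eps)h + (m - eps) <= y <= (M + eps)h + (m + eps), sound as h_out >= 0.
lo = gp.quicksum((M[0, j] - eps) * h_out[j] for j in range(2)) + (m[0] - eps)
hi = gp.quicksum((M[0, j] + eps) * h_out[j] for j in range(2)) + (m[0] + eps)
mdl.addConstr(lo <= y[0])
mdl.addConstr(y[0] <= hi)

mdl.addConstr(y[0] >= 1.2)                          # output condition x_k in X_u
mdl.optimize()
# Unsatisfiability certifies safety of every network with weights in W_eps.
print("safe" if mdl.Status == GRB.INFEASIBLE else "potential counterexample")
\end{verbatim}

Infeasibility of the model certifies safety of every network in the weight set, in line with Theorem~\ref{thm:constraints}.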
\subsection{Safe weight sets for closed-loop systems with BNN policies}\label{sec:loopcertificate} Now consider a closed-loop system with a dynamics function $f:\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{X}$ with $\mathcal{X}\subseteq\mathbb{R}^m$ and $\mathcal{U}\subseteq\mathbb{R}^n$, a BNN policy $\pi$, an initial state set $\mathcal{X}_0\subseteq\mathcal{X}$ and an unsafe state set $\mathcal{X}_u\subseteq\mathcal{X}$. Fix $\epsilon\in[0,\infty]$. In order to solve Problem~\ref{problem2} and verify safety of each trajectory contained in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, our method searches for a positive invariant-like safety certificate, which we define below. \textbf{Positive invariants for safety} A positive invariant in a dynamical system is a set of states that contains all initial states and is closed under the system dynamics. These conditions ensure that the states of all system trajectories are contained in the positive invariant. Hence, a positive invariant that does not contain any unsafe state certifies the safety of every trajectory over the infinite time horizon. In this work, however, we are not trying to prove safety of every trajectory, but only of those trajectories contained in $\mathsf{Traj}^{f,\pi}_{\epsilon}$. To that end, we define $W^{\pi}_{\epsilon}$-safe positive invariants. Intuitively, a $W^{\pi}_{\epsilon}$-safe positive invariant is required to contain all initial states, to be closed under the dynamics $f$ and the BNN policy $\pi$ whenever the sampled weight vector is in $W^{\pi}_{\epsilon}$, and not to contain any unsafe state. \begin{definition}[$W^{\pi}_{\epsilon}$-safe positive invariants]\label{def:inv} A set $\mathsf{Inv}\subseteq\mathcal{X}$ is said to be a {\em $W^{\pi}_{\epsilon}$-safe positive invariant} if $\mathcal{X}_0\subseteq\mathsf{Inv}$, for each $\mathbf{x}\in\mathsf{Inv}$ and $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ we have $f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))\in\mathsf{Inv}$, and $\mathsf{Inv}\cap\mathcal{X}_u=\emptyset$. \end{definition} Theorem~\ref{thm:posinv} shows that $W^{\pi}_{\epsilon}$-safe positive invariants can be used to verify safety of all trajectories in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ in Problem~\ref{problem2}. The proof is straightforward and is deferred to the Supplementary Material. \begin{theorem}\label{thm:posinv} If there exists a $W^{\pi}_{\epsilon}$-safe positive invariant, then each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe. \end{theorem} \begin{algorithm}[t] \caption{Learning algorithm for $W^{\pi}_{\epsilon}$-safe positive invariants} \label{algorithm:ce} \begin{algorithmic} \STATE \textbf{Input} Dynamics function $f$, BNN policy $\pi$, initial state set $\mathcal{X}_0$, unsafe state set $\mathcal{X}_u$, $\epsilon \in [0,\infty]$ \STATE $\tilde{\mathcal{X}}_0, \tilde{\mathcal{X}}_u \leftarrow $ random samples of $\mathcal{X}_0,\mathcal{X}_u$ \STATE $D_{\textrm{spec}} \leftarrow \tilde{\mathcal{X}}_u \times \{0\} \cup \tilde{\mathcal{X}}_0 \times \{1\}$, $D_{\textrm{ce}} \leftarrow \emptyset$ \STATE Optional (bootstrapping): $S_{\textrm{bootstrap}} \leftarrow $ sample finite trajectories with initial states sampled from $\mathcal{X}$ \STATE \qquad\qquad\qquad $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},0)\mid \exists s\in S_{\textrm{bootstrap}} \text{ that starts in } \mathbf{x}\in \mathcal{X} \text{ and reaches } \mathcal{X}_u \}$ \STATE \qquad\qquad\qquad\qquad\qquad $\cup\, \{(\mathbf{x},1)\mid \exists s\in S_{\textrm{bootstrap}} \text{ that starts in } \mathbf{x}\in \mathcal{X} \text{ and does not reach } \mathcal{X}_u \}$ \STATE Pre-train neural network $g^{\mathsf{Inv}}$ on datasets $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$ with loss function $\mathcal{L}$ \WHILE{timeout not reached} \IF{$\exists (\mathbf{x},\mathbf{x}',\mathbf{u},\mathbf{w},\mathbf{b})$ s.t. $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$, $g^{\mathsf{Inv}}(\mathbf{x}')<0$, $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$, $\mathbf{u}=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x})$, $\mathbf{x}'=f(\mathbf{x},\mathbf{u})$} \STATE $D_{\textrm{ce}} \leftarrow D_{\textrm{ce}} \cup \{(\mathbf{x},\mathbf{x}')\}$
\ELSIF{$\exists \mathbf{x}$ s.t. $\mathbf{x}\in\mathcal{X}_0$, $g^{\mathsf{Inv}}(\mathbf{x})< 0$} \STATE $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},1)\}$ \ELSIF{$\exists \mathbf{x}$ s.t. $\mathbf{x}\in\mathcal{X}_u$, $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$} \STATE $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},0)\}$ \ELSE \STATE \textbf{Return} Safe \ENDIF \STATE Train neural network $g^{\mathsf{Inv}}$ on datasets $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$ with loss function $\mathcal{L}$ \ENDWHILE \STATE \textbf{Return} Unknown (timeout reached) \end{algorithmic} \end{algorithm} \textbf{Learning positive invariants} We now present our learning algorithm for $W^{\pi}_{\epsilon}$-safe positive invariants. It learns a neural network $g^{\mathsf{Inv}}:\mathbb{R}^m\rightarrow\mathbb{R}$, and the candidate positive invariant is defined as the set $\mathsf{Inv} = \{\mathbf{x}\in\mathcal{X}\mid g^{\mathsf{Inv}}(\mathbf{x})\geq 0\}$. The pseudocode is given in Algorithm~\ref{algorithm:ce}. The algorithm first samples sets $\tilde{\mathcal{X}}_0$ from $\mathcal{X}_0$ and $\tilde{\mathcal{X}}_u$ from $\mathcal{X}_u$, initializes the specification set $D_{\textrm{spec}}$ to $\tilde{\mathcal{X}}_u \times \{0\} \cup \tilde{\mathcal{X}}_0 \times \{1\}$, and initializes the counterexample set $D_{\textrm{ce}}$ to the empty set. Optionally, the algorithm also bootstraps the positive invariant network by adding to $D_{\textrm{spec}}$ random samples from the state space $\mathcal{X}$ labeled with Monte-Carlo estimates of reaching the unsafe states. The rest of the algorithm consists of two modules composed into a loop: the {\em learner} and the {\em verifier}. In each loop iteration, the learner first learns a $W^{\pi}_{\epsilon}$-safe positive invariant candidate in the form of a neural network $g^{\mathsf{Inv}}$. This is done by minimizing the loss function $\mathcal{L}$ that depends on $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$: \begin{equation}\label{eq:totalloss} \mathcal{L}(g^{\mathsf{Inv}}) = \frac{1}{|D_{\textrm{spec}}|}\sum_{(\mathbf{x},y)\in D_{\textrm{spec}}}\mathcal{L}_{\textrm{cls}}\big(g^\mathsf{Inv}(\mathbf{x}),y\big) +\lambda \frac{1}{|D_{\textrm{ce}}|}\sum_{(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}}\mathcal{L}_{\textrm{ce}}\big(g^\mathsf{Inv}(\mathbf{x}),g^\mathsf{Inv}(\mathbf{x}')\big), \end{equation} where $\lambda$ is a tuning parameter and $\mathcal{L}_{\textrm{cls}}$ is a binary classification loss, e.g.~the 0/1-loss $\mathcal{L}_{\textrm{0/1}}(z,y) = \mathds{1}[\mathds{1}[z\geq0]\neq y]$ or its differentiable alternative, the logistic loss $\mathcal{L}_{\textrm{log}}(z,y)= z - z \cdot y + \log(1 + \exp(-z))$. The term $\mathcal{L}_{\textrm{ce}}$ is the counterexample loss, which we define via \begin{equation}\label{eq:loss} \mathcal{L}_{\textrm{ce}}(z,z') = \mathds{1}\big[z\geq 0\big]\mathds{1}\big[z'<0\big] \mathcal{L}_{\textrm{cls}}\big(z,0\big) \mathcal{L}_{\textrm{cls}}\big(z',1\big). \end{equation} Intuitively, the first sum in eq.~\eqref{eq:totalloss} forces $g^{\mathsf{Inv}}$ to be nonnegative on the initial states and negative on the unsafe states contained in $D_{\textrm{spec}}$, while the second term forces each counterexample in $D_{\textrm{ce}}$ not to violate the closedness of $\mathsf{Inv}$ under the system dynamics.
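For concreteness, a possible PyTorch rendering of the loss in eq.~\eqref{eq:totalloss} with the logistic surrogate and the counterexample term of eq.~\eqref{eq:loss} is sketched below; the tensor shapes and the treatment of the indicator gates as constants are our assumptions.

\begin{verbatim}
import torch
import torch.nn.functional as F

def invariant_loss(g, x_spec, y_spec, x_ce, x_ce_next, lam=1.0):
    # Classification term over D_spec with the logistic loss
    # L_log(z, y) = z - z*y + log(1 + exp(-z)).
    z = g(x_spec).squeeze(-1)
    cls = (z - z * y_spec + F.softplus(-z)).mean()
    if x_ce.shape[0] == 0:
        return cls
    # Counterexample term over D_ce: the product L_log(z0,0) * L_log(z1,1)
    # gated by the indicators 1[z0 >= 0] * 1[z1 < 0] (non-differentiable,
    # treated as constants), using L_log(z,0) = softplus(z) and
    # L_log(z,1) = softplus(-z).
    z0 = g(x_ce).squeeze(-1)
    z1 = g(x_ce_next).squeeze(-1)
    gate = ((z0 >= 0) & (z1 < 0)).float()
    ce = gate * F.softplus(z0) * F.softplus(-z1)
    return cls + lam * ce.mean()
\end{verbatim}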
Once $g^{\mathsf{Inv}}$ is learned, the verifier checks whether $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. To do this, the verifier checks the three defining properties of $W^{\pi}_{\epsilon}$-safe positive invariants: \begin{compactenum} \item {\em Closedness of $\mathsf{Inv}$ under the system dynamics.} The verifier checks whether there exist states $\mathbf{x}\in\mathsf{Inv}$, $\mathbf{x'}\not\in\mathsf{Inv}$ and a BNN weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ such that $f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))=\mathbf{x}'$. To do so, it introduces real variables $\mathbf{x},\mathbf{x}'\in\mathbb{R}^m$, $\mathbf{u}\in\mathbb{R}^n$ and $y,y'\in\mathbb{R}$, and solves: \begin{equation*} \begin{split} &\textrm{maximize } y - y' \textrm{ subject to} \\ &y\geq 0, y'<0, y=g^{\mathsf{Inv}}(\mathbf{x}), y'=g^{\mathsf{Inv}}(\mathbf{x'}) \\ &\mathbf{x'} = f(\mathbf{x},\mathbf{u}) \\ &\mathbf{u} \text{ is an output of } \pi \text{ on input } \mathbf{x} \textrm{ and weights } (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon} \end{split} \end{equation*} The conditions $y=g^{\mathsf{Inv}}(\mathbf{x})$ and $y'=g^{\mathsf{Inv}}(\mathbf{x'})$ are encoded by using existing techniques for encoding deterministic NNs as systems of MILP/Reluplex/NRA-SMT constraints. The condition in the third line is encoded by plugging the variable vectors $\mathbf{x}$ and $\mathbf{u}$ into the equation for $f$. Finally, for the condition in the fourth line we use our encoding from Section~\ref{sec:feedforward}, where we only need to omit the input-output conditions. The optimization objective is added in order to search for the ``worst'' counterexample; a sketch of this check is given below, after the list. We note that MILP~\citep{gurobi} and SMT~\citep{gao2012delta} solvers allow optimizing linear objectives, and the Reluplex algorithm~\citep{KatzBDJK17} has recently been extended to solving optimization problems as well~\citep{strong2020global}. If a counterexample $(\mathbf{x},\mathbf{x}')$ is found, it is added to $D_{\textrm{ce}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier proceeds to the second check. \item {\em Non-negativity on $\mathcal{X}_0$.} The verifier checks whether there exists $\mathbf{x}\in\mathcal{X}_0$ with $g^{\mathsf{Inv}}(\mathbf{x})< 0$. If such an $\mathbf{x}$ is found, $(\mathbf{x},1)$ is added to $D_{\textrm{spec}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier proceeds to the third check. \item {\em Negativity on $\mathcal{X}_u$.} The verifier checks whether there exists $\mathbf{x}\in\mathcal{X}_u$ with $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$. If such an $\mathbf{x}$ is found, $(\mathbf{x},0)$ is added to $D_{\textrm{spec}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier concludes that $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant, and so each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe. \end{compactenum}
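The following sketch illustrates the first verifier check for a hypothetical piecewise-linear system with randomly chosen placeholder weights, using the same Gurobi-based ReLU encoding as in Section~\ref{sec:feedforward}. For brevity, the policy is encoded with its mean weights only, whereas the full check would use the interval encoding of the BNN layers, and the strict inequality $y'<0$ is approximated by a small tolerance.

\begin{verbatim}
import numpy as np
import gurobipy as gp
from gurobipy import GRB

def add_relu_net(mdl, layers, x, bound=10.0):
    """Encode a feed-forward network: affine layers with ReLU activations
    in the hidden layers and a linear output layer."""
    h = x
    for idx, (W, b) in enumerate(layers):
        pre = mdl.addVars(W.shape[0], lb=-bound, ub=bound)
        for j in range(W.shape[0]):
            mdl.addConstr(pre[j] ==
                          gp.quicksum(W[j, i] * h[i] for i in range(len(h))) + b[j])
        if idx == len(layers) - 1:
            h = pre                                    # linear output layer
        else:
            post = mdl.addVars(W.shape[0], lb=0.0, ub=bound)
            for j in range(W.shape[0]):
                mdl.addGenConstrMax(post[j], [pre[j]], constant=0.0)
            h = post
    return h

rng = np.random.default_rng(0)
dims = [(4, 2), (1, 4)]
g_layers = [(rng.normal(size=d), rng.normal(size=d[0])) for d in dims]
pi_layers = [(rng.normal(size=d), rng.normal(size=d[0])) for d in dims]
A, B = np.array([[1.0, 0.05], [0.0, 1.0]]), np.array([[0.0], [0.5]])

mdl = gp.Model("check1")
x = mdl.addVars(2, lb=-1.2, ub=1.2)                    # state space X
u = add_relu_net(mdl, pi_layers, x)                    # mean-weight policy
xn = mdl.addVars(2, lb=-GRB.INFINITY)                  # successor state x'
for i in range(2):
    mdl.addConstr(xn[i] == gp.quicksum(A[i, j] * x[j] for j in range(2))
                  + gp.quicksum(B[i, j] * u[j] for j in range(1)))
y = add_relu_net(mdl, g_layers, x)[0]                  # y  = g_inv(x)
yn = add_relu_net(mdl, g_layers, xn)[0]                # y' = g_inv(x')
mdl.addConstr(y >= 0.0)
mdl.addConstr(yn <= -1e-6)           # strict y' < 0, up to a small tolerance
mdl.setObjective(y - yn, GRB.MAXIMIZE)                 # "worst" counterexample
mdl.optimize()
if mdl.Status == GRB.OPTIMAL:        # counterexample found, add it to D_ce
    ce = ([x[i].X for i in range(2)], [xn[i].X for i in range(2)])
\end{verbatim}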
Theorem~\ref{thm:loss} shows that neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ with the 0/1-classification loss. Theorem~\ref{thm:correctness} establishes the correctness of our algorithm. Proofs can be found in the Supplementary Material. \begin{theorem}\label{thm:loss} The loss function $\mathcal{L}$ is nonnegative for any neural network $g$, i.e.~$\mathcal{L}(g)\geq 0$. Moreover, if $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant and $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss, then $\mathcal{L}(g^{\mathsf{Inv}})=0$. Hence, neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ when $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss. \end{theorem} \begin{theorem}\label{thm:correctness} If the verifier in Algorithm~\ref{algorithm:ce} shows that the constraints in all three checks are unsatisfiable, then the computed $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. Hence, Algorithm~\ref{algorithm:ce} is correct. \end{theorem} \textbf{Safety via rejection sampling} As discussed in Section~\ref{sec:intro}, once the safety of all trajectories in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ has been verified, we can ``re-calibrate'' the BNN policy to reject sampled weights which are not in $W^{\pi}_{\epsilon}$. Hence, rejection sampling gives rise to a safe BNN policy. \subsection{Computation of safe weight sets and the value of $\epsilon$}\label{sec:epscompute} Problems~\ref{problem1} and~\ref{problem2} assume a given value of $\epsilon$ for which safety needs to be verified. In order to compute as large a safe weight set as possible, we start with a small value of $\epsilon$ and iteratively increase it until we reach a value that cannot be certified or until a timeout is reached. In each iteration, Algorithm~\ref{algorithm:ce} does not start from scratch but is initialized with the $g^{\mathsf{Inv}}$ and $D_{\textrm{spec}}$ from the previous successful iteration, i.e.~it attempts to enlarge the current safe weight set. This iterative process significantly speeds up the search compared to naively restarting the algorithm in every iteration. \subsection{Safe exploration reinforcement learning}\label{sec:safeexploration} Given a safe but non-optimal initial policy $\pi_{0}$, safe exploration reinforcement learning (SERL) concerns the problem of improving the expected return of $\pi_{0}$ while ensuring safety when collecting samples from the environment~\citep{uchibe2007constrained,achiam2017constrained,nakka2020chance}. Our method from Section~\ref{sec:loopcertificate} for computing safe weight sets can be adapted to this setting with minimal effort. In particular, the safety bound $\epsilon$ for the intervals centered at the weight means can be combined with rejection sampling to generate safe but randomized rollouts on the environment. Moreover, $\epsilon$ provides bounds on the gradient updates when optimizing the policy using deep Q-learning or policy gradient methods, i.e., we perform \emph{projected gradient descent}. We sketch an algorithm for SERL in the Supplementary Material; a minimal illustration of the two ingredients follows below.
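The sketch below assumes a diagonal Gaussian posterior with verified means $\mu_0$ and a verified radius $\epsilon$; the concrete RL algorithm supplying the gradients is left abstract, and all numerical values are placeholders.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Verified quantities from Section 3: posterior means mu0 and radius eps
# such that every weight vector in W_eps = prod_i [mu0_i - eps, mu0_i + eps]
# yields a safe policy (values here are placeholders).
mu0 = np.zeros(8)
sd = 0.1 * np.ones(8)
eps = 0.2

def sample_safe_weights(mu):
    """Rejection sampling: exploration rollouts only use weights in W_eps."""
    while True:
        w = rng.normal(mu, sd)
        if np.all(np.abs(w - mu0) <= eps):
            return w

def projected_gradient_step(mu, grad, lr=1e-2):
    """Gradient update on the mean weights, projected back onto W_eps so
    that the verified certificate keeps applying to the improved policy."""
    return np.clip(mu + lr * grad, mu0 - eps, mu0 + eps)
\end{verbatim}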
\section{Conclusion}\label{sec:conclusion} In this work we formulated the safety verification problem for BNN policies in infinite time horizon systems, which asks to compute safe BNN weight sets for which every system execution is safe as long as the BNN samples its weights from this set. Solving this problem allows re-calibrating the BNN policy to reject unsafe weight samples in order to guarantee system safety. We then introduced a methodology for computing safe weight sets in BNN policies in the form of products of intervals around the BNN weights' means, together with a method for verifying their safety by learning a positive invariant-like safety certificate. We believe that our results present an important first step towards guaranteeing the safety of BNNs deployed in safety-critical scenarios. While adopting products of intervals around the BNN's weight means is a natural choice given that BNN priors are typically unimodal distributions, this is still a somewhat restrictive shape for safe weight sets. An interesting direction for future work would thus be to study more general forms of safe weight sets that could be used for the re-calibration of BNN posteriors and their safety verification. Another interesting problem would be to design an approach for refuting a weight set as unsafe, which would complement our method, or to consider closed-loop systems with stochastic environment dynamics. Any verification method for neural networks, and even more so for neural networks in feedback loops, suffers from scalability limitations due to the underlying complexity class~\citep{KatzBDJK17,IvanovWAPL19}. Promising directions for improving the scalability of our approach by speeding up the constraint-solving step are gradient-based optimization techniques~\citep{HenriksenL20} and incorporating the constraint-solving step directly into the training procedure~\citep{ZhangCXGSLBH20}. Since the aim of AI safety is to ensure that systems do not behave in undesirable ways and that safety-violating events are avoided, we are not aware of any potential negative societal impacts of this work. \section*{Acknowledgments} This research was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award), ERC CoG 863818 (FoRM-SMArt), and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 665385. \bibliographystyle{plainnat}
\section{Proofs}

\begin{manualtheorem}{1}
Let $\epsilon\in[0,\infty]$. Then each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe if and only if the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is not satisfiable.
\end{manualtheorem}

\begin{proof}
We prove the equivalent claim that there exists a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe if and only if $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is satisfiable.

\medskip

First, suppose that there exists a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe; we show that $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is satisfiable. This direction of the proof is straightforward, since the values of the network's neurons on the unsafe input give rise to a solution of $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Indeed, by assumption there exists a vector of input neuron values $\mathbf{x}_0\in\mathcal{X}_0$ for which the corresponding vector of output neuron values $\mathbf{x}_l=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}_0)$ is unsafe, i.e.~$\mathbf{x}_l\in\mathcal{X}_u$. By defining $\mathbf{x}_i^{\textrm{in}}$, $\mathbf{x}_i^{\textrm{out}}$ to be the vectors of the corresponding input and output neuron values for the $i$-th hidden layer for each $1\leq i\leq l-1$, and by setting $\mathbf{x}_{0,\textrm{pos}}=\textrm{ReLU}(\mathbf{x}_0)$ and $\mathbf{x}_{0,\textrm{neg}}=-\textrm{ReLU}(-\mathbf{x}_0)$, we see that these variable values satisfy the Input-output conditions, the ReLU encoding conditions and the BNN input and hidden layer conditions; therefore, we obtain a solution to the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$.

\medskip

We now proceed to the more involved direction of this proof and show that any solution to the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ gives rise to weights $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe. Let $\mathbf{x}_0$, $\mathbf{x}_l$, $\mathbf{x}_{0,\textrm{pos}}$, $\mathbf{x}_{0,\textrm{neg}}$ and $\mathbf{x}_i^{\textrm{in}}$, $\mathbf{x}_i^{\textrm{out}}$ for $1\leq i\leq l-1$, be real vectors that satisfy the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Fix $1\leq i\leq l-1$.
From the BNN hidden layers constraint for layer $i$, we have
\begin{equation}\label{eq:1}
(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_{i+1}^{\textrm{in}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}).
\end{equation}
We show that there exist values $\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast}$ of the BNN weights between layers $i$ and $i+1$ such that each weight value is at most $\epsilon$ apart from its mean, and such that $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$. To formally show this, we define $W^{\pi}_{\epsilon}[i]$ to be the set of all weight vectors between layers $i$ and $i+1$ in which each weight value is distant from its mean by at most $\epsilon$ (hence, $W^{\pi}_{\epsilon}[i]$ is the projection of $W^{\pi}_{\epsilon}$ onto the dimensions that correspond to the weights between layers $i$ and $i+1$). We then consider the continuous function $h_i$ on $W^{\pi}_{\epsilon}[i]$ defined via
\begin{equation*}
h_i(\mathbf{W}_i,\mathbf{b}_i) = \mathbf{W}_i\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i.
\end{equation*}
Since $W^{\pi}_{\epsilon}[i]\subseteq\mathbb{R}^{m_i\times n_i}\times\mathbb{R}^{n_i}$ is a product of intervals and therefore a connected set w.r.t.~the Euclidean metric, and since $h_i$ is continuous, the image of $W^{\pi}_{\epsilon}[i]$ under $h_i$ is also connected. But note that
\[ h_i(\mathbf{M}_i-\epsilon\cdot\mathbf{1},\mathbf{m}_i-\epsilon\cdot\mathbf{1})=(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \]
and
\[ h_i(\mathbf{M}_i+\epsilon\cdot\mathbf{1},\mathbf{m}_i+\epsilon\cdot\mathbf{1})=(\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}), \]
with $(\mathbf{M}_i-\epsilon\cdot\mathbf{1},\mathbf{m}_i-\epsilon\cdot\mathbf{1}),(\mathbf{M}_i+\epsilon\cdot\mathbf{1},\mathbf{m}_i+\epsilon\cdot\mathbf{1})\in W^{\pi}_{\epsilon}[i]$. Moreover, each component of $h_i$ depends on a disjoint block of the weights (one row of $\mathbf{W}_i$ together with the corresponding entry of $\mathbf{b}_i$), so the image of the product of intervals $W^{\pi}_{\epsilon}[i]$ under $h_i$ is itself a product of intervals. Since, by eq.~\eqref{eq:1}, $\mathbf{x}_{i+1}^{\textrm{in}}$ lies componentwise between the two image points above, it is contained in this image. Thus, there exists $(\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast})\in W^{\pi}_{\epsilon}[i]$ with $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$, as desired.

\medskip

For the input and the first hidden layer, from the BNN input layer constraint we know that
\begin{equation*}
(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_1^{\textrm{in}} \leq (\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1}).
\end{equation*}
Again, define $W^{\pi}_{\epsilon}[0]$ to be the set of all weight vectors between the input and the first hidden layer in which each weight value is distant from its mean by at most $\epsilon$. Consider the continuous function $h_0$ on $W^{\pi}_{\epsilon}[0]$ defined via
\begin{equation*}
h_0(\mathbf{W}_0,\mathbf{b}_0) = \mathbf{W}_0\mathbf{x}_0+\mathbf{b}_0.
\end{equation*}
Let $\textrm{Msign}(\mathbf{x}_0)$ be a matrix of the same dimension as $\mathbf{M}_0$, with each column consisting of $1$'s if the corresponding component of $\mathbf{x}_0$ is nonnegative, and of $-1$'s if it is negative. Then note that
\[ h_0(\mathbf{M}_0-\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0-\epsilon\cdot\mathbf{1})=(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \]
and
\[ h_0(\mathbf{M}_0+\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0+\epsilon\cdot\mathbf{1})=(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1}). \]
Since $(\mathbf{M}_0-\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0-\epsilon\cdot\mathbf{1}),(\mathbf{M}_0+\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0+\epsilon\cdot\mathbf{1})\in W^{\pi}_{\epsilon}[0]$, an argument analogous to the image-connectedness argument above shows that there exist values $\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast}$ of the BNN weights such that $(\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast})\in W^{\pi}_{\epsilon}[0]$ and such that $\mathbf{x}_{1}^{\textrm{in}}=\mathbf{W}_0^{\ast}\mathbf{x}_0+\mathbf{b}_0^{\ast}$.

\medskip

But now, collecting $\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast}$ and $\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast}$ for $1\leq i\leq l-1$ gives rise to a BNN weight vector $(\mathbf{W}^{\ast},\mathbf{b}^{\ast})$ which is contained in $W^{\pi}_{\epsilon}$. Furthermore, combining what we showed above with the constraints in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$, we get that:
\begin{compactitem}
\item $\mathbf{x}_0\in \mathcal{X}_0$, $\mathbf{x}_l\in\mathcal{X}_u$, from the Input-output condition in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$;
\item $\mathbf{x}_i^{\textrm{out}} = \textrm{ReLU}(\mathbf{x}_i^{\textrm{in}})$ for each $1\leq i\leq l-1$, from the ReLU-encoding;
\item $\mathbf{x}_{1}^{\textrm{in}}=\mathbf{W}_0^{\ast}\mathbf{x}_0+\mathbf{b}_0^{\ast}$ and $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$ for each $1\leq i\leq l-1$, as shown above.
\end{compactitem}
Hence, $\mathbf{x}_l\in\mathcal{X}_u$ is the vector of neuron output values of $\pi_{\mathbf{W}^{\ast},\mathbf{b}^{\ast}}$ on the input neuron values $\mathbf{x}_0\in\mathcal{X}_0$, and since $(\mathbf{W}^{\ast},\mathbf{b}^{\ast})\in W^{\pi}_{\epsilon}$ we conclude that there exists a deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ which is not safe. This concludes the proof.
\end{proof}

\begin{manualtheorem}{2}
If there exists a $W^{\pi}_{\epsilon}$-safe positive invariant, then each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe.
\end{manualtheorem}

\begin{proof}
Let $\mathsf{Inv}$ be a $W^{\pi}_{\epsilon}$-safe positive invariant. Given a trajectory $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, we need to show that $\mathbf{x}_t\not\in\mathcal{X}_u$ for each $t\in\mathbb{N}_{\geq 0}$. Since $\mathsf{Inv}\cap\mathcal{X}_u=\emptyset$, it suffices to show that $\mathbf{x}_t\in\mathsf{Inv}$ for each $t\in\mathbb{N}_{\geq 0}$. We prove this by induction on $t$. The base case $\mathbf{x}_0\in\mathsf{Inv}$ follows since $\mathbf{x}_0\in\mathcal{X}_0\subseteq \mathsf{Inv}$.
As an inductive hypothesis, suppose now that $\mathbf{x}_t\in\mathsf{Inv}$ for some $t\in\mathbb{N}_{\geq 0}$. We need to show that $\mathbf{x}_{t+1}\in\mathsf{Inv}$. Since the trajectory is in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, we know that the BNN weight vector $(\mathbf{w}_t,\mathbf{b}_t)$ sampled at time-step $t$ belongs to $W^{\pi}_{\epsilon}$, i.e.~$(\mathbf{w}_t,\mathbf{b}_t)\in W^{\pi}_{\epsilon}$. Thus, since $\mathbf{x}_t\in\mathsf{Inv}$ by the induction hypothesis and since $\mathsf{Inv}$ is closed under the system dynamics when the sampled weight vector is in $W^{\pi}_{\epsilon}$, it follows that $\mathbf{x}_{t+1}=f(\mathbf{x}_t,\mathbf{u}_t)=f(\mathbf{x}_t,\pi_{\mathbf{w}_t,\mathbf{b}_t}(\mathbf{x}_t))\in \mathsf{Inv}$. This concludes the proof by induction.
\end{proof}

\begin{manualtheorem}{3}
The loss function $\mathcal{L}$ is nonnegative for any neural network $g$, i.e.~$\mathcal{L}(g)\geq 0$. Moreover, if $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant and $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss, then $\mathcal{L}(g^{\mathsf{Inv}})=0$. Hence, neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ when $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss.
\end{manualtheorem}

\begin{proof}
Recall that the loss function $\mathcal{L}$ for a neural network $g$ is defined via
\begin{equation}\label{eq:totallossupmat}
\mathcal{L}(g) = \frac{1}{|D_{\textrm{spec}}|}\sum_{(\mathbf{x},y)\in D_{\textrm{spec}}}^{}\mathcal{L}_{\textrm{cls}}\big(g(\mathbf{x}),y\big) +\lambda \frac{1}{|D_{\textrm{ce}}|}\sum_{(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}}^{}\mathcal{L}_{\textrm{ce}}\big(g(\mathbf{x}),g(\mathbf{x}')\big),
\end{equation}
where $\lambda$ is a tuning parameter and $\mathcal{L}_{\textrm{cls}}$ is a binary classification loss function, e.g.~the 0/1-loss $\mathcal{L}_{\textrm{0/1}}(z,y) = \mathds{1}[\mathds{1}[z\geq0]\neq y]$ or the logistic loss $\mathcal{L}_{\textrm{log}}(z,y)= z - z \cdot y + \log(1 + \exp(-z))$ as its differentiable alternative. The term $\mathcal{L}_{\textrm{ce}}$ is the counterexample loss, which we define via
\begin{equation}\label{eq:losssupmat}
\mathcal{L}_{\textrm{ce}}(z,z') = \mathds{1}\big[z>0\big]\mathds{1}\big[z'<0\big] \mathcal{L}_{\textrm{cls}}\big(z,0\big) \mathcal{L}_{\textrm{cls}}\big(z',1\big).
\end{equation}
The fact that $\mathcal{L}(g)\geq 0$ for each neural network $g$ follows immediately from the fact that the summands in the first sum in eq.~\eqref{eq:totallossupmat} are nonnegative loss terms, and the summands in the second sum are products of indicator functions and nonnegative loss terms, hence also nonnegative.

We now show that, if $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss, $\mathcal{L}(g^{\mathsf{Inv}})=0$ whenever $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant, which implies the global minimization claim in the theorem. This follows from the following two items:
\begin{compactitem}
\item For each $(\mathbf{x},y)\in D_{\textrm{spec}}$, we have $\mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}),y\big)=0$. Indeed, for $(\mathbf{x},y)$ to be added to $D_{\textrm{spec}}$ in Algorithm~1, we must have that either $\mathbf{x}\in\mathcal{X}_0$ and $y=1$, or that $\mathbf{x}\in\mathcal{X}_u$ and $y=0$. Thus, since $\mathsf{Inv}$ is assumed to be a $W^{\pi}_{\epsilon}$-safe positive invariant, $g^{\mathsf{Inv}}$ correctly classifies $(\mathbf{x},y)$ and the corresponding loss is $0$.
\item For each $(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}$ we have $\mathcal{L}_{\textrm{ce}}\big(g^{\mathsf{Inv}}(\mathbf{x}),g^{\mathsf{Inv}}(\mathbf{x}')\big)=0$. Indeed, since
\[ \mathcal{L}_{\textrm{ce}}(g^{\mathsf{Inv}}(\mathbf{x}),g^{\mathsf{Inv}}(\mathbf{x}')) = \mathds{1}\big[g^{\mathsf{Inv}}(\mathbf{x})>0\big]\mathds{1}\big[g^{\mathsf{Inv}}(\mathbf{x}')<0\big] \mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}),0\big) \mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}'),1\big), \]
for the loss to be non-zero we must have $g^{\mathsf{Inv}}(\mathbf{x})> 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$. But this is impossible, since $\mathsf{Inv}$ is assumed to be a $W^{\pi}_{\epsilon}$-safe positive invariant and $(\mathbf{x},\mathbf{x}')$ was added by Algorithm~1 as a counterexample to $D_{\textrm{ce}}$, meaning that $\mathbf{x}'$ can be reached from $\mathbf{x}$ by following the dynamics function and sampling a BNN weight vector in $W^{\pi}_{\epsilon}$. Therefore, by the closedness property of $W^{\pi}_{\epsilon}$-safe positive invariants when the sampled weight vector is in $W^{\pi}_{\epsilon}$, we cannot have both $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$. Hence, the loss must be $0$.
\end{compactitem}
\end{proof}

\begin{manualtheorem}{4}
If the verifier in Algorithm~1 shows that the constraints in all three checks are unsatisfiable, then the computed $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. Hence, Algorithm~1 is correct.
\end{manualtheorem}

\begin{proof}
The fact that the first check in Algorithm~1 correctly checks whether there exist $\mathbf{x},\mathbf{x}'\in\mathcal{X}$ and a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ such that $\mathbf{x}'=f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))$ with $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$ follows from the correctness of our encoding in Section~4.1, which was proved in Theorem~1. The fact that checks 2 and 3 correctly check whether for all $\mathbf{x}\in\mathcal{X}_0$ we have $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$, and whether for all $\mathbf{x}\in\mathcal{X}_u$ we have $g^{\mathsf{Inv}}(\mathbf{x})<0$, respectively, follows immediately from the conditions they encode. Therefore, the three checks together verify that (1)~$\mathsf{Inv}$ is closed under the system dynamics whenever the sampled weight vector is in $W^{\pi}_{\epsilon}$, (2)~$\mathsf{Inv}$ contains all initial states, and (3)~$\mathsf{Inv}$ contains no unsafe states. As these are the three defining properties of $W^{\pi}_{\epsilon}$-safe positive invariants, Algorithm~1 is correct and the theorem claim follows.
\end{proof}

\section{Safe exploration reinforcement learning algorithm}

Algorithm \ref{algorithm:serl} shows a sketch of how standard RL algorithms, such as policy gradient methods and deep Q-learning, can be adapted to a safe exploration setup by using the safe weight sets computed by our method.
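In plain Python, the loop of Algorithm~\ref{algorithm:serl} amounts to the following minimal sketch. The routines \texttt{find\_safe\_eps}, \texttt{collect\_rollouts} and \texttt{estimate\_gradient} are hypothetical placeholders (for the verifier, the environment interface and a standard DQN/policy-gradient estimator, respectively); this illustrates the control flow only and is not our actual implementation.

\begin{verbatim}
import numpy as np

def sample_safe_weights(mu, sigma, eps, rng):
    """Rejection-sample BNN weights until they land in the safe set,
    i.e. within [mu - eps, mu + eps] componentwise."""
    while True:
        w = rng.normal(mu, sigma)
        if np.all(np.abs(w - mu) <= eps):
            return w

def safe_exploration_rl(mu, sigma, alpha, n_iters, rng,
                        find_safe_eps, collect_rollouts, estimate_gradient):
    # The three helpers are supplied by the caller: the safety verifier,
    # the environment interface, and a DQN/policy-gradient estimator.
    for _ in range(n_iters):
        eps = find_safe_eps(mu)                   # safe radius for current means
        w = sample_safe_weights(mu, sigma, eps, rng)
        rollouts = collect_rollouts(w)            # safe exploration rollouts
        grad = estimate_gradient(mu, rollouts)
        mu_new = mu - alpha * grad                # gradient descent on the means
        mu = np.clip(mu_new, mu - eps, mu + eps)  # project back into safe set
    return mu
\end{verbatim}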
\begin{algorithm}[t]
\caption{Safe Exploration Reinforcement Learning}
\label{algorithm:serl}
\begin{algorithmic}
\STATE \textbf{Input} Initial policy $\pi_{0}$, learning rate $\alpha$, number of iterations $N$
\FOR{$i \in 1,\dots N$}
\STATE $\epsilon \leftarrow$ find safe $\epsilon$ for $\pi_{i-1}$
\STATE collect rollouts of $\pi_{i-1}$ with rejection sampling over $W^{\pi}_{\epsilon}$
\STATE compute $\nabla \mu_{i-1}$ for parameters $\mu_{i-1}$ of $\pi_{i-1}$ using DQN/policy gradient
\STATE $\mu_i \leftarrow \mu_{i-1} - \alpha \nabla \mu_{i-1}$ (gradient descent)
\STATE project $\mu_i$ back to the interval $[\mu_{i-1}-\epsilon,\mu_{i-1}+\epsilon]$
\ENDFOR
\STATE \textbf{Return} $\pi_{N}$
\end{algorithmic}
\end{algorithm}

\section{Experimental details}

In this section, we describe the details of our experimental evaluation setup. The code and pre-trained network parameters are attached in the supplementary materials. Each policy is a ReLU network consisting of three layers. The first layer represents the input variables, the second one is a hidden layer with 16 neurons, and the last layer represents the output variables. The sizes of the first and the last layer are task-dependent and are shown in Table \ref{tab:epx}. The $W^{\pi}_{\epsilon}$-safe positive invariant candidate network differs from the policy network in that its weights are deterministic, it has a different number of hidden units, and it has a single output dimension. In particular, the invariant networks for the linear dynamical system and the inverted pendulum have 12 hidden units, whereas the invariant network for the collision avoidance task has 32 neurons in its hidden layer. The policy networks are trained with a $\mathcal{N}(0,0.1)$ prior (Bayesian weights from the second layer on) and a $\mathcal{N}(0,0.05)$ prior (Bayesian weights in all layers), respectively. MILP solving was performed by Gurobi 9.03 on a virtual machine with 4 vCPUs and 32\,GB of memory.

\begin{table}[]
\centering
\begin{tabular}{c|ccc}\toprule
Experiment & Input dimension & Hidden size & Output dimension \\\midrule
Linear dynamical system & 2 & 16 & 1 \\
Inverted pendulum & 2 & 16 & 1 \\
Collision avoidance & 3 & 16 & 3\\\bottomrule
\end{tabular}
\caption{Number of dimensions of the policy network for the three experiments.}
\label{tab:epx}
\end{table}

\paragraph{Linear dynamical system} The state of the linear dynamical system consists of two variables $(x,y)$. The update function takes the current state $(x_t,y_t)$ with the current action $u_t$ and outputs the next state $(x_{t+1},y_{t+1})$ governed by the equations
\begin{align*}
y_{t+1} &= y_t + 0.2\cdot \textrm{clip}_{\pm 1}(u_t)\\
x_{t+1} &= x_t + 0.3 y_{t+1} + 0.05\cdot \textrm{clip}_{\pm 1}(u_t),
\end{align*}
where the function $\textrm{clip}_{\pm 1}$ is defined by
\begin{equation*}
\textrm{clip}_{\pm z}(x) = \begin{cases}-z & \text{ if } x\leq -z\\ z & \text{ if } x\geq z\\ x & \text{ otherwise. }\end{cases}
\end{equation*}
The set of unsafe states is defined as $\{(x,y)\in\mathbb{R}^2\mid|(x,y)|_{\infty}\geq 1.2\}$, and the set of initial states as $\{(x,y)\in\mathbb{R}^2\mid |(x,y)|_{\infty}\leq 0.6\}$.
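To make the environment definition concrete, the following minimal Python sketch implements the update function and the clipping operator, together with membership tests for the initial and unsafe sets (the function names are ours and purely illustrative):

\begin{verbatim}
def clip(v, z):
    """clip_{+-z}(v) as defined above."""
    return max(-z, min(z, v))

def lds_step(x, y, u):
    """One transition of the linear dynamical system."""
    y_next = y + 0.2 * clip(u, 1.0)
    x_next = x + 0.3 * y_next + 0.05 * clip(u, 1.0)
    return x_next, y_next

def is_initial(x, y):
    return max(abs(x), abs(y)) <= 0.6  # |(x, y)|_inf <= 0.6

def is_unsafe(x, y):
    return max(abs(x), abs(y)) >= 1.2  # |(x, y)|_inf >= 1.2
\end{verbatim}

\paragraph{Inverted pendulum} The state of the inverted pendulum consists of two variables $(\theta,\dot{\theta})$.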
The non-linear state transition is defined by
\begin{align*}
\dot{\theta}_{t+1} &= \textrm{clip}_{\pm 8}\Big(\dot{\theta}_t + \frac{-3g\cdot \textrm{angular}(\theta_t + \pi)}{2l} + \delta_t\cdot\frac{7.5\,\textrm{clip}_{\pm 1}(u_t)}{m\cdot l^2}\Big)\\
\theta_{t+1} &= \theta_t + \dot{\theta}_{t+1} \cdot \delta_t,
\end{align*}
where $g=9.81$, $l=1$, $\delta_t=0.05$ and $m=0.8$ are constants. The function $\textrm{angular}$ is defined using the piecewise linear composition
\begin{equation*}
\textrm{angular}(x) = \begin{cases} \textrm{angular}(x+2\pi) & \text{ if } x \leq \pi / 2\\ \textrm{angular}(x-2\pi) & \text{ if } x > 5 \pi / 2\\ \frac{x - 2\pi}{\pi / 2} & \text{ if } 3 \pi / 2 < x \leq 5 \pi / 2\\ 2 - \frac{x}{\pi / 2} & \text{ if } \pi / 2 < x \leq 3 \pi / 2. \end{cases}
\end{equation*}
The set of initial states is defined by $\{(\theta,\dot{\theta})\in\mathbb{R}^2\mid |\theta| \leq \pi/6 \text{ and } |\dot{\theta}|\leq0.2 \}$. The set of unsafe states is defined by $\{(\theta,\dot{\theta})\in\mathbb{R}^2\mid |\theta| \geq 0.9 \text{ or } |\dot{\theta}|\geq2 \}$.

\paragraph{Collision avoidance} The state of the collision avoidance environment consists of three variables $(p_x,a_x,a_y)$, representing the agent's vertical position as well as the vertical and horizontal position of an intruder. The intruder moves toward the agent, while the agent's vertical position must be controlled to avoid colliding with the intruder. The state transition is given by
\begin{align*}
p_{x,t+1} &= p_{x,t} + u_{t}\\
a_{x,t+1} &= a_{x,t}\\
a_{y,t+1} &= a_{y,t} - 1.
\end{align*}
Admissible actions are defined by $u_t \in \{-1,0,1\}$. The set of initial states is defined as $\{(p_x,a_x,a_y)\in\mathbb{Z}^3\mid |p_x|\leq 2 \text{ and } |a_x|\leq 2 \text{ and } a_y=5\}$. Likewise, the set of unsafe states is given by $\{(p_x,a_x,a_y)\in\mathbb{Z}^3\mid |p_x-a_x|\leq 1 \text{ and } a_y=5\}$.

\section{Conclusion}\label{sec:conclusion}
In this work we formulated the safety verification problem for BNN policies in infinite time horizon systems, which asks to compute safe BNN weight sets for which every system execution is safe as long as the BNN samples its weights from this set. Solving this problem allows re-calibrating the BNN policy to reject unsafe weight samples in order to guarantee system safety. We then introduced a methodology for computing safe weight sets in BNN policies in the form of products of intervals around the BNN weights' means, and a method for verifying their safety by learning a positive invariant-like safety certificate. We believe that our results present an important first step towards guaranteeing the safety of BNNs deployed in safety-critical scenarios. While adopting products of intervals around the BNN's weight means is a natural choice given that BNN priors are typically unimodal distributions, this is still a somewhat restrictive shape for safe weight sets. Thus, an interesting direction for future work would be to study more general forms of safe weight sets that could be used for re-calibration of BNN posteriors and their safety verification. Another interesting problem would be to design an approach for refuting a weight set as unsafe, which would complement our method, or to consider closed-loop systems with stochastic environment dynamics. Any verification method for neural networks, and even more so for neural networks in feedback loops, suffers from scalability limitations due to the underlying complexity class \cite{KatzBDJK17,IvanovWAPL19}.
Promising research directions for improving the scalability of our approach are gradient-based optimization techniques that speed up the constraint-solving step \cite{HenriksenL20} and incorporating the constraint-solving step directly into the training procedure \cite{ZhangCXGSLBH20}. Since the aim of AI safety is to ensure that systems do not behave in undesirable ways and that safety-violating events are avoided, we are not aware of any potential negative societal impacts.

\section{Experiments}\label{sec:exp}
We perform an experimental evaluation of our proposed method for learning positive invariant neural networks that prove infinite time horizon safety. Our evaluation consists of an ablation study where we disable different core components of Algorithm~\ref{algorithm:ce} and measure their effects on the obtained safety bounds and the algorithm's runtime. First, we run the algorithm without any re-training on the counterexamples. In the second step, we run Algorithm~\ref{algorithm:ce} by initializing $D_{\textrm{spec}}$ with samples from $\mathcal{X}_0$ and $\mathcal{X}_u$ only. Finally, we bootstrap the positive invariant network by initializing $D_{\textrm{spec}}$ with random samples from the state space labeled with Monte-Carlo estimates of reaching the unsafe states.

We consider environments with piecewise-linear dynamics functions and with initial and unsafe state sets given by linear constraints, so that the verification steps of our algorithm can be reduced to MILP-solving using Gurobi \cite{gurobi}. Details on our evaluation are in the Supplementary Material. Code is publicly available\footnote{\url{https://github.com/mlech26l/bayesian_nn_safety}}.

We conduct our evaluation on three benchmark environments that differ in terms of complexity and safety specifications. We train two BNN policies for each benchmark-ablation pair, one with Bayesian weights from the second layer on (with a $\mathcal{N}(0,0.1)$ prior) and one with Bayesian weights in all layers (with a $\mathcal{N}(0,0.05)$ prior). Recall that in our BNN encoding in Section~\ref{sec:feedforward}, we showed that the encoding of the BNN input layer requires additional constraints and extra care, since we do not know the signs of the input neuron values. Hence, we consider two BNN policies in our evaluation in order to study how the encoding of the input layer affects the safe weight set computation.

Our first benchmark represents an unstable linear dynamical system of the form $x_{t+1} = Ax_t + Bu_t$. A BNN policy stabilizes the system towards the point $(0,0)$. Consequently, the set of unsafe states is defined as $\{x\in\mathbb{R}^2\mid|x|_{\infty}\geq 1.2\}$, and the set of initial states as $\{x\in\mathbb{R}^2\mid |x|_{\infty}\leq 0.6\}$.

Our second benchmark is the inverted pendulum task, which is a classical non-linear control problem. The two state variables $\theta$ and $\dot{\theta}$ represent the angle and angular velocity of a pendulum that must be controlled in an upward direction. The actions produced by the policy correspond to a torque applied to the anchor point. Our benchmark concerns a variant of the original problem where the non-linearity in $f$ is expressed by piecewise linear functions. The resulting system, even with a trained policy, is highly unstable, as shown in Figure~\ref{fig:pend}. The set of initial states corresponds to pendulum states in an almost upright position and with small angular velocity. The set of unsafe states represents the pendulum falling down.
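As an illustration of how the non-linearity is made piecewise linear, the following Python sketch transcribes the $\textrm{angular}$ function and the pendulum update from the Supplementary Material (the function names are ours; this is a sketch, not our benchmark implementation):

\begin{verbatim}
import math

def angular(x):
    """Piecewise-linear approximation of sin(x), with period 2*pi."""
    while x <= math.pi / 2:        # shift x into the base interval
        x += 2 * math.pi
    while x > 5 * math.pi / 2:
        x -= 2 * math.pi
    if 3 * math.pi / 2 < x <= 5 * math.pi / 2:
        return (x - 2 * math.pi) / (math.pi / 2)
    return 2 - x / (math.pi / 2)   # case pi/2 < x <= 3*pi/2

def clip(v, z):
    return max(-z, min(z, v))

def pendulum_step(theta, theta_dot, u, g=9.81, l=1.0, dt=0.05, m=0.8):
    """One transition of the piecewise-linear inverted pendulum."""
    theta_dot_next = clip(
        theta_dot
        + (-3 * g * angular(theta + math.pi)) / (2 * l)
        + dt * 7.5 * clip(u, 1.0) / (m * l ** 2),
        8.0,
    )
    theta_next = theta + theta_dot_next * dt
    return theta_next, theta_dot_next
\end{verbatim}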
Figure~\ref{fig:pend} visualizes the system and the learned invariant's decision boundary for the inverted pendulum task.

\begin{figure}
\centering
\subcaptionbox{Vector field of the system when controlled by the posterior's mean. Green/red arrows indicate empirical safe/unsafe estimates.}%
[.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/vector_field.png}}
\hfill
\subcaptionbox{First guess of $g^{\mathsf{Inv}}$. Green area shows $g^{\mathsf{Inv}}>0$, orange area $g^{\mathsf{Inv}}<0$. Red markers show the counterexample found.}%
[.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/step0.png}}
\hfill
\subcaptionbox{Final $g^{\mathsf{Inv}}$ proving the safety of the system. Previous counterexamples marked in red.}%
[.31\linewidth]{\includegraphics[width=0.25\textwidth]{plots/step2.png}}
\caption{Invariant learning shown on the inverted pendulum benchmark.}
\label{fig:pend}
\end{figure}

\begin{table}[]
\centering
\caption{Results of our benchmark evaluation. The $\epsilon$ values are multiples of the weights' standard deviation $\sigma$. We evaluated several $\epsilon$ values, and the table shows the largest value that could be proven safe. A dash ``-'' indicates an unsuccessful invariant search. Runtimes are in seconds.}
\begin{tabular}{l|rr|rr|rr}\toprule
\multirow{2}{*}{Environment} & \multicolumn{2}{c|}{No re-training} & \multicolumn{2}{c|}{Init $D_{\textrm{spec}}$ with $\mathcal{X}_0$ and $\mathcal{X}_u$} & \multicolumn{2}{c}{Bootstrapping $D_{\textrm{spec}}$} \\
& Verified & Runtime & Verified & Runtime & Verified & Runtime \\\cmidrule(r){1-7}
Unstable LDS & - & 3 & $1.5\sigma$ & 569 & $2\sigma$ & 760 \\
Unstable LDS (all) & $0.2\sigma$ & 3 & $0.5\sigma$ & 6 & $0.5\sigma$ & 96 \\
Pendulum & - & 2 & $2\sigma$ & 220 & $2\sigma$ & 40 \\
Pendulum (all) & - & 2 & $0.2\sigma$ & 1729 & $1.5\sigma$ & 877 \\
Collision avoid. & - & 2 & - & - & $2\sigma$ & 154 \\
Collision avoid. (all) & - & 2 & - & - & $1.5\sigma$ & 225 \\\bottomrule
\end{tabular}
\label{tab:results}
\end{table}

While the previous two benchmarks concern stability specifications, we evaluate our method on a non-Lyapunovian safety specification in our third benchmark. In particular, our third benchmark is a collision avoidance task in which the system does not stabilize to the same set of terminal states in every execution. The system is described by three variables. The first variable specifies the agent's own vertical location, while the other two variables specify an intruder's vertical and horizontal position. The objective is to avoid colliding with the intruder, who moves toward the agent, by issuing lateral movement commands as the policy's actions. The initial states represent far-away intruders, and crashes with the intruder define the unsafe states.

Table \ref{tab:results} shows the results of our evaluation. Our results demonstrate that re-training with the counterexamples is the key component that determines our algorithm's success. In all cases except for the linear dynamical system, the initial guess of the invariant candidate violates the invariant condition. Moreover, bootstrapping $D_{\textrm{spec}}$ with random points labeled by empirical estimates of reaching the unsafe states improves the search process significantly.

\section{Introduction}\label{sec:intro}
The success of deep neural networks (DNNs) across different domains has created the desire to apply them in safety-critical applications such as autonomous vehicles~\citep{kendall2019learning,lechner2020neural} and healthcare systems~\citep{shen2017deep}.
The fundamental challenge for the deployment of DNNs in these domains is certifying their safety~\citep{amodei2016concrete}. Thus, formal safety verification of DNNs in isolation and in closed control loops~\citep{KatzBDJK17,HenzingerLZ21,GehrMDTCV18,TjengXT19,DuttaCS19} has become an active research topic.

Bayesian neural networks (BNNs) are a family of neural networks that place distributions over their weights~\citep{neal2012bayesian}. This allows learning uncertainty in the data and in the network's prediction, while preserving the strong modelling capabilities of DNNs~\citep{MacKay92a}. In particular, BNNs can learn arbitrary data distributions from much simpler (e.g.~Gaussian) weight distributions. This makes BNNs very appealing for robotic and medical applications~\citep{mcallister2017concrete} where uncertainty is a central component of the data.

Despite the large body of literature on verifying the safety of DNNs, the formal safety verification of BNNs has received less attention. Notably, \citep{CardelliKLPPW19,WickerLPK20,MichelmoreWLCGK20} have proposed sampling-based techniques for obtaining probabilistic guarantees about BNNs. Although these approaches provide some insight into BNN safety, they suffer from two key limitations. First, sampling provides only bounds on the probability of the BNN's safety, which is insufficient for systems with critical safety implications. For instance, a $99.9\%$ safety guarantee for an autonomous vehicle is still insufficient for deployment if millions of vehicles are on the road. Second, samples can only simulate the system for a finite time, making it impossible to reason about the system's safety over an unbounded time horizon.

\begin{wrapfigure}{R}{7cm}
\centering
\includegraphics[width=7cm]{standalone.pdf}
\caption{BNNs are typically unsafe by default. Top figure: The posterior of a typical BNN has unbounded support, resulting in a non-zero probability of producing an unsafe action. Bottom figure: Restricting the support of the weight distributions via rejection sampling ensures BNN safety.}
\label{fig:intro}
\end{wrapfigure}

In this work, we study the safety verification problem for BNN policies in safety-critical systems over the infinite time horizon. Formally, we consider discrete-time closed-loop systems defined by a dynamical system and a BNN policy. Given a set of initial states and a set of unsafe (or bad) states, the goal of the safety verification problem is to verify that no system execution starting in an initial state can reach an unsafe state. Unlike the existing literature, which considers the probability of safety, we verify {\em sure safety}, i.e.~safety of every execution of the system. In particular, we present a method for computing {\em safe weight sets} for which every system execution is safe as long as the BNN samples its weights from this set.

Restricting the support of the weight distribution is necessary, as BNNs with Gaussian weight priors typically produce output posteriors with unbounded support. Consequently, there is a low but non-zero probability for the output variable to lie in an unsafe region; see Figure~\ref{fig:intro}. This implies that BNNs are usually unsafe by default. We therefore consider the more general problem of computing safe weight sets. Verifying that a weight set is safe allows re-calibrating the BNN policy by rejecting unsafe weight samples in order to guarantee safety. As most BNNs employ uni-modal weight priors, e.g.
Gaussians, we naturally adopt weight sets in the form of products of intervals centered at the means of the BNN's weight distributions.

To verify the safety of a weight set, we search for a safety certificate in the form of a {\em safe positive invariant} (also known as a {\em safe inductive invariant}). A safe positive invariant is a set of system states that contains all initial states, is closed under the system dynamics and does not contain any unsafe state. The key advantage of using safe positive invariants is that their existence implies {\em infinite time horizon safety}. We parametrize safe positive invariant candidates by (deterministic) neural networks that classify system states for determining set inclusion. Moreover, we phrase the search for an invariant as a learning problem. A separate verifier module then checks whether a candidate is indeed a safe positive invariant by checking the required properties via constraint solving. In case the verifier finds a counterexample demonstrating that the candidate violates the safe positive invariant condition, we re-train the candidate on the found counterexample. We repeat this procedure until the verifier concludes that the candidate is a safe positive invariant, ensuring that the system is safe.

The safe weight set obtained by our method can be used for safe exploration reinforcement learning. In particular, generating rollouts during learning by sampling from the safe weight set allows an exploration of the environment while ensuring safety. Moreover, projecting the (mean) weights onto the safe weight set after each gradient update further ensures that the improved policy stays safe.

\textbf{Contributions} Our contributions can be summarized as follows:
\begin{compactenum}
\item We define a safety verification problem for BNN policies which overcomes the unbounded posterior issue by computing and verifying safe weight sets. The problem generalizes the sure safety verification of BNNs, and solving it allows re-calibrating BNN policies via rejection sampling to guarantee safety.
\item We introduce a method for computing safe weight sets in BNN policies in the form of products of intervals around the BNN weights' means. To verify the safety of a weight set, our novel algorithm learns a safe positive invariant in the form of a deterministic neural network.
\item We evaluate our methodology on a series of benchmark applications, including non-linear systems and non-Lyapunovian safety specifications.
\end{compactenum}

\section{Related work}\label{sec:relatedwork}

\textbf{Verification of feed-forward NN} Verification of robustness and safety properties of feed-forward DNNs has received much attention but remains an active research topic~\citep{KatzBDJK17,HenzingerLZ21,GehrMDTCV18,RuanHK18,BunelTTKM18,TjengXT19}. As the majority of verification techniques were designed for deterministic NNs, they cannot be readily applied to BNNs. The safety verification of feed-forward BNNs has been considered in \citep{CardelliKLPPW19} by using samples to obtain statistical guarantees on the safety probability. The work of~\citep{WickerLPK20} also presents a sampling-based approach; however, it provides certified lower bounds on the safety probability. The literature discussed above considers NNs in isolation; such methods can provide input-output guarantees for a NN but are unable to reason holistically about the safety of the system in which the NN is applied.
Verification methods that concern the safety of NNs interlinked with a system require different approaches from standalone NN verification; we discuss these in the rest of this section.

\textbf{Finite time horizon safety of BNN policies} The work in~\citep{MichelmoreWLCGK20} extends the method of~\citep{CardelliKLPPW19} to verifying safety in closed-loop systems with BNN policies. However, similar to the standalone setting of \cite{CardelliKLPPW19}, their method obtains only statistical guarantees on the safety probability, and only for the system's execution over a finite time horizon.

\textbf{Safe RL} Safe reinforcement learning has been primarily studied in the form of constrained Markov decision processes (CMDPs) \citep{altman1999constrained,Geibel06}. Compared to standard MDPs, an agent acting in a CMDP must satisfy an expected auxiliary cost term aggregated over an episode. The CMDP framework has been the basis of several RL algorithms~\citep{uchibe2007constrained}, notably Constrained Policy Optimization (CPO)~\citep{achiam2017constrained}. While these algorithms achieve good performance, the key limitation of CMDPs is that the constraint is satisfied only in expectation, which makes violations unlikely but nonetheless possible. Consequently, the CMDP framework is unsuited for systems where constraint violations are critical.

\textbf{Lyapunov-based stability} Safety in the context of ``stability'', i.e.~always returning to a ground state, can be proved by Lyapunov functions~\citep{BerkenkampTS017}. Lyapunov functions were originally introduced to study the stability of dynamical systems~\citep{khalil2002nonlinear}. Intuitively, a Lyapunov function assigns a non-negative value to each state, and is required to decrease with respect to the system's dynamics at any state outside of the stable set. A Lyapunov-based method is proposed in~\citep{ChowNDG18} to ensure safety in CMDPs during training. Recently, the work of~\citep{ChangRG19} presented a method for learning a policy as well as a neural network Lyapunov function which guarantees the stability of the policy. Similarly to our work, their learning procedure is counterexample-based. However, unlike \citep{ChangRG19}, our work considers BNN policies and safety definitions that do not require returning to a set of ground states.

\textbf{Barrier functions for dynamical systems} Barrier functions can be used to prove infinite time horizon safety in dynamical systems~\citep{PrajnaJ04,PrajnaJP07}. Recent works have considered learning neural network barrier functions~\citep{ZhaoZC020}, and a counterexample-based learning procedure is presented in~\cite{PeruffoAA21}.

\textbf{Finite time horizon safety of NN policies} Safety verification of continuous-time closed-loop systems with deterministic NN policies has been considered in~\citep{IvanovWAPL19,gruenbacher2020lagrangian}, which reduces safety verification to reachability analysis in hybrid systems~\citep{ChenAS13}. The work of~\citep{DuttaCS19} presents a method which computes a polynomial approximation of the NN policy to allow an efficient approximation of the reachable state set. Both works consider finite time horizon systems. Our safety certificate most closely resembles inductive invariants for safety analysis in programs~\citep{Floyd1967} and positive invariants for dynamical systems~\citep{blanchini2008set}.
\section*{Acknowledgments}

This research was supported in part by the Austrian Science Fund (FWF) under grant Z211-N23 (Wittgenstein Award), ERC CoG 863818 (FoRM-SMArt), and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie Grant Agreement No. 665385.

\bibliographystyle{plainnat}

\section{Main results}

In this section we present our method for solving the safety problems defined in the previous section, with Section~\ref{sec:feedforward} considering Problem~\ref{problem1} and Section~\ref{sec:loopcertificate} considering Problem~\ref{problem2}. Both problems consider safety verification with respect to a given value of $\epsilon\in[0,\infty]$, so in Section~\ref{sec:epscompute} we present our method for computing a value of $\epsilon$ for which our solutions to Problem~1 and Problem~2 may be used to verify safety. We then show in Section~\ref{sec:safeexploration} how our new methodology can be adapted to the safe exploration RL setting.

\subsection{Safe weight sets for feed-forward BNNs}\label{sec:feedforward}

Consider a feed-forward BNN $\pi$, a set $\mathcal{X}_0\subseteq\mathbb{R}^m$ of inputs and a set $\mathcal{X}_u\subseteq\mathbb{R}^n$ of unsafe outputs of the BNN. Fix $\epsilon\in[0,\infty]$. To solve Problem~\ref{problem1}, we show that the decision problem of whether each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe can be encoded as a system of constraints and reduced to constraint solving. Suppose that $\pi=l_1\circ\dots\circ l_l$ consists of $l$ layers, with each $l_i(\mathbf{x}) = \textrm{ReLU}(\mathbf{W}_i\mathbf{x}+\mathbf{b}_i)$. Denote by $\mathbf{M}_i$ the matrix of the same dimension as $\mathbf{W}_i$, with each entry equal to the mean of the corresponding random-variable weight in $\mathbf{W}_i$. Similarly, define the vector $\mathbf{m}_i$ of means of the random variables in $\mathbf{b}_i$. The real variables of our system of constraints are as follows, each of appropriate dimension:
\begin{compactitem}
\item $\mathbf{x}_0$ encodes the BNN inputs, $\mathbf{x}_l$ encodes the BNN outputs;
\item $\mathbf{x}_1^{\textrm{in}}$, \dots, $\mathbf{x}_{l-1}^{\textrm{in}}$ encode vectors of input values of each neuron in the hidden layers;
\item $\mathbf{x}_1^{\textrm{out}}$, \dots, $\mathbf{x}_{l-1}^{\textrm{out}}$ encode vectors of output values of each neuron in the hidden layers;
\item $\mathbf{x}_{0,\textrm{pos}}$ and $\mathbf{x}_{0,\textrm{neg}}$ are dummy variable vectors of the same dimension as $\mathbf{x}_0$, used to distinguish between positive and negative NN inputs in $\mathbf{x}_0$, respectively.
\end{compactitem}
We use $\mathbf{1}$ to denote the vector/matrix all of whose entries are equal to $1$, of appropriate dimensions defined by the formula in which it appears.
Our system of constraints is as follows:
\begin{equation}\label{eq:inputoutput}\tag{Input-output conditions}
\mathbf{x}_0 \in \mathcal{X}_0, \hspace{0.5cm} \mathbf{x}_l \in \mathcal{X}_u
\end{equation}
\begin{equation}\label{eq:relu}\tag{ReLU encoding}
\mathbf{x}_i^{\textrm{out}} = \textrm{ReLU}(\mathbf{x}_i^{\textrm{in}}), \text{ for each } 1\leq i\leq l-1
\end{equation}
\begin{equation}\label{eq:weights}\tag{BNN hidden layers}
\begin{split}
&(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_{i+1}^{\textrm{in}}, \text{ for each } 1\leq i\leq l-1 \\
&\mathbf{x}_{i+1}^{\textrm{in}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}), \text{ for each } 1\leq i\leq l-1
\end{split}
\end{equation}
\begin{equation}\label{eq:weightsinput}\tag{BNN input layer}
\begin{split}
&\mathbf{x}_{0,\textrm{pos}} = \textrm{ReLU}(\mathbf{x}_0), \hspace{0.5cm} \mathbf{x}_{0,\textrm{neg}} = -\textrm{ReLU}(-\mathbf{x}_0) \\
&(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_1^{\textrm{in}} \\
&\mathbf{x}_1^{\textrm{in}} \leq (\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1})
\end{split}
\end{equation}

Denote by $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ the system of constraints defined above. The proof of Theorem~\ref{thm:constraints} shows that it encodes that $\mathbf{x}_0\in\mathcal{X}_0$ is an input point for which the corresponding output point of $\pi$ is unsafe, i.e.~$\mathbf{x}_l=\pi(\mathbf{x}_0)\in\mathcal{X}_u$. The Input-output conditions encode that the input lies in $\mathcal{X}_0$ and that the output is unsafe. The ReLU encoding captures the input-output relation of each hidden-layer neuron. The remaining constraints encode the relation between neuron values in successive layers of the BNN, as well as the fact that the sampled BNN weight vector is in $W^{\pi}_{\epsilon}$. For hidden layers, we know that the output value of each neuron is nonnegative, i.e.~$\mathbf{x}_i^{\textrm{out}}\geq\mathbf{0}$ for the $i$-th hidden layer where $1\leq i\leq l-1$, and so
\[ (\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}. \]
Hence, the BNN weight relation with neurons in the successive layer, as well as the fact that the sampled weights are in $W^{\pi}_{\epsilon}$, is encoded by the two inequalities of the BNN hidden layers constraint. For the input layer, however, we do not know the signs of the input neuron values $\mathbf{x}_0$, and so we introduce dummy variables $\mathbf{x}_{0,\textrm{pos}}=\textrm{ReLU}(\mathbf{x}_0)$ and $\mathbf{x}_{0,\textrm{neg}}=-\textrm{ReLU}(-\mathbf{x}_0)$ in the first line of the BNN input layer constraint. This allows encoding the BNN weight relations between the input layer and the first hidden layer, as well as the fact that the sampled weight vector is in $W^{\pi}_{\epsilon}$, as in the last two inequalities of the BNN input layer constraint.

Theorem~\ref{thm:constraints} shows that Problem~\ref{problem1} is equivalent to solving the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Its proof can be found in the Supplementary Material.

\begin{theorem}\label{thm:constraints}
Let $\epsilon\in[0,\infty]$.
Then each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe if and only if the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is not satisfiable.
\end{theorem}

\textbf{Solving the constraints} Observe that $\epsilon$, $\mathbf{M}_i$ and $\mathbf{m}_i$, $0\leq i\leq l-1$, are constant values that are known at the time of constraint encoding. Thus, in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$, only the ReLU constraints and possibly the input-output conditions are not linear. Depending on the form of $\mathcal{X}_0$ and $\mathcal{X}_u$ and on how we encode the ReLU constraints, we may solve the system $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ in several ways:
\begin{compactenum}
\item \textbf{MILP.} It is shown in~\citep{lomuscio2017approach,dutta2018output,TjengXT19} that the ReLU relation between two real variables can be encoded via mixed-integer linear (MILP) constraints by introducing 0/1-integer variables that encode whether a given neuron is active or inactive (see the sketch after this list). Hence, if $\mathcal{X}_0$ and $\mathcal{X}_u$ are given by linear constraints, we may solve $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ with a MILP solver. The ReLU encoding requires each neuron value to be bounded, which is ensured if $\mathcal{X}_0$ is a bounded set and if $\epsilon<\infty$.
\item \textbf{Reluplex.} In order to allow unbounded $\mathcal{X}_0$ and $\epsilon=\infty$, we may use algorithms based on the Reluplex calculus~\citep{KatzBDJK17,KatzHIJLLSTWZDK19} to solve $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$. Reluplex is an extension of the standard simplex algorithm for solving systems of linear constraints that is designed to also handle ReLU constraints. While Reluplex does not impose the boundedness condition, it is in general less scalable than MILP solving.
\item \textbf{NRA-SMT.} Alternatively, if $\mathcal{X}_0$ or $\mathcal{X}_u$ are given by non-linear constraints, we may solve the system using an NRA-SMT solver (non-linear real arithmetic satisfiability modulo theories), e.g.~dReal~\citep{gao2012delta}. To use an NRA-SMT solver, we can replace the integer 0/1-variables of the ReLU encoding with real variables that satisfy the constraint $x(x-1)=0$. While NRA-SMT is less scalable than MILP, we note that it has been used in previous works on RL stability verification \cite{ChangRG19}.
\end{compactenum}

\textbf{Safety via rejection sampling} As discussed in Section~\ref{sec:intro}, once the safety of NNs in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ has been verified, we can ``re-calibrate'' the BNN to reject sampled weights which are not in $W^{\pi}_{\epsilon}$. Hence, rejection sampling gives rise to a safe BNN.
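To make the MILP option concrete, the following Python sketch encodes a single ReLU neuron and one interval-weight constraint with \texttt{gurobipy}. The big-M bound and the numeric means are illustrative assumptions rather than values from our implementation; a full encoding would add such constraints for every neuron and layer.

\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

M_BIG = 100.0    # assumed valid bound (requires bounded X_0 and eps < infinity)
mu_w, mu_b, eps = 0.3, -0.1, 0.05  # illustrative weight/bias means and radius

model = gp.Model("bnn-encoding")
x_in = model.addVar(lb=-M_BIG, ub=M_BIG, name="x_in")  # neuron input
x_out = model.addVar(lb=0.0, ub=M_BIG, name="x_out")   # ReLU output
a = model.addVar(vtype=GRB.BINARY, name="active")      # 1 iff neuron is active

# ReLU encoding x_out = ReLU(x_in) via big-M constraints:
model.addConstr(x_out >= x_in)
model.addConstr(x_out <= x_in + M_BIG * (1 - a))
model.addConstr(x_out <= M_BIG * a)

# Interval-weight (BNN hidden layer) constraint for the next-layer input z,
# valid because x_out >= 0:
z = model.addVar(lb=-GRB.INFINITY, name="z_next")
model.addConstr((mu_w - eps) * x_out + (mu_b - eps) <= z)
model.addConstr(z <= (mu_w + eps) * x_out + (mu_b + eps))

model.optimize()  # in the full encoding, satisfiability witnesses unsafety
\end{verbatim}

\subsection{Safe weight sets for closed-loop systems with BNN Policies}\label{sec:loopcertificate}

Now consider a closed-loop system with a dynamics function $f:\mathcal{X}\times\mathcal{U}\rightarrow\mathcal{X}$ with $\mathcal{X}\subseteq\mathbb{R}^m$ and $\mathcal{U}\subseteq\mathbb{R}^n$, a BNN policy $\pi$, an initial state set $\mathcal{X}_0\subseteq\mathcal{X}$ and an unsafe state set $\mathcal{X}_u\subseteq\mathcal{X}$. Fix $\epsilon\in[0,\infty]$. In order to solve Problem~\ref{problem2} and verify safety of each trajectory contained in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, our method searches for a positive invariant-like safety certificate that we define below.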
\textbf{Positive invariants for safety} A positive invariant in a dynamical system is a set of states which contains all initial states and which is closed under the system dynamics. These conditions ensure that the states of all system trajectories are contained in the positive invariant. Hence, a positive invariant which does not contain any unsafe states can be used to certify safety of every trajectory over an infinite time horizon. In this work, however, we are not trying to prove safety of every trajectory, but only of those trajectories contained in $\mathsf{Traj}^{f,\pi}_{\epsilon}$. To that end, we define $W^{\pi}_{\epsilon}$-safe positive invariants. Intuitively, a $W^{\pi}_{\epsilon}$-safe positive invariant is required to contain all initial states, to be closed under the dynamics $f$ and the BNN policy $\pi$ when the sampled weight vector is in $W^{\pi}_{\epsilon}$, and not to contain any unsafe state.

\begin{definition}[$W^{\pi}_{\epsilon}$-safe positive invariants]\label{def:inv}
A set $\mathsf{Inv}\subseteq\mathcal{X}$ is said to be a {\em $W^{\pi}_{\epsilon}$-safe positive invariant} if $\mathcal{X}_0\subseteq\mathsf{Inv}$, for each $\mathbf{x}\in\mathsf{Inv}$ and $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ we have that $f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))\in\mathsf{Inv}$, and $\mathsf{Inv}\cap\mathcal{X}_u=\emptyset$.
\end{definition}

Theorem~\ref{thm:posinv} shows that $W^{\pi}_{\epsilon}$-safe positive invariants can be used to verify safety of all trajectories in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ in Problem~\ref{problem2}. The proof is straightforward and is deferred to the Supplementary Material.

\begin{theorem}\label{thm:posinv}
If there exists a $W^{\pi}_{\epsilon}$-safe positive invariant, then each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe.
\end{theorem}

\begin{algorithm}[t]
\caption{Learning algorithm for $W^{\pi}_{\epsilon}$-safe positive invariants}
\label{algorithm:ce}
\begin{algorithmic}
\STATE \textbf{Input} Dynamics function $f$, BNN policy $\pi$, initial state set $\mathcal{X}_0$, unsafe state set $\mathcal{X}_u$, $\epsilon \in [0,\infty]$
\STATE $\tilde{\mathcal{X}_0}, \tilde{\mathcal{X}_u} \leftarrow $ random samples of $\mathcal{X}_0,\mathcal{X}_u$
\STATE $D_{\textrm{spec}} \leftarrow \tilde{\mathcal{X}_u} \times \{0\} \cup \tilde{\mathcal{X}_0} \times \{1\}$, $D_{\textrm{ce}} \leftarrow \{\}$
\STATE Optional (bootstrapping): $S_{\textrm{bootstrap}} \leftarrow $ sample finite trajectories with initial states sampled from $\mathcal{X}$
\STATE \qquad\qquad\qquad $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},0)\mid \exists s\in S_{\textrm{bootstrap}} \text{ that starts in } \mathbf{x}\in \mathcal{X} \text{ and reaches } \mathcal{X}_u \}$
\STATE \qquad\qquad\qquad\qquad\qquad $\cup\, \{(\mathbf{x},1)\mid \exists s\in S_{\textrm{bootstrap}} \text{ that starts in } \mathbf{x}\in \mathcal{X} \text{ and does not reach } \mathcal{X}_u \}$
\STATE Pre-train neural network $g^{\mathsf{Inv}}$ on datasets $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$ with loss function $\mathcal{L}$
\WHILE{timeout not reached}
\IF{$\exists (\mathbf{x},\mathbf{x}',\mathbf{u},\mathbf{w},\mathbf{b})$ s.t. $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$, $g^{\mathsf{Inv}}(\mathbf{x}')<0$, $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$, $\mathbf{u}=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x})$, $\mathbf{x}'=f(\mathbf{x},\mathbf{u})$}
\STATE $D_{\textrm{ce}} \leftarrow D_{\textrm{ce}} \cup \{(\mathbf{x},\mathbf{x}')\}$
\ELSIF{$\exists \mathbf{x}$ s.t.
$\mathbf{x}\in\mathcal{X}_0$, $g^{\mathsf{Inv}}(\mathbf{x})< 0$}
\STATE $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},1)\}$
\ELSIF{$\exists \mathbf{x}$ s.t. $\mathbf{x}\in\mathcal{X}_u$, $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$}
\STATE $D_{\textrm{spec}} \leftarrow D_{\textrm{spec}} \cup \{(\mathbf{x},0)\}$
\ELSE
\STATE \textbf{Return} Safe
\ENDIF
\STATE {Train neural network $g^{\mathsf{Inv}}$ on datasets $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$ with loss function $\mathcal{L}$}
\ENDWHILE
\STATE \textbf{Return} Unsafe
\end{algorithmic}
\end{algorithm}

\textbf{Learning positive invariants} We now present a learning algorithm for a $W^{\pi}_{\epsilon}$-safe positive invariant. It learns a neural network $g^{\mathsf{Inv}}:\mathbb{R}^m\rightarrow\mathbb{R}$, where the positive invariant is then defined as the set $\mathsf{Inv} = \{\mathbf{x}\in\mathcal{X}\mid g^{\mathsf{Inv}}(\mathbf{x})\geq 0\}$. The pseudocode is given in Algorithm~\ref{algorithm:ce}. The algorithm first samples $\tilde{\mathcal{X}_0}$ from $\mathcal{X}_0$ and $\tilde{\mathcal{X}_u}$ from $\mathcal{X}_u$, initializes the specification set $D_{\textrm{spec}}$ to $\tilde{\mathcal{X}_u} \times \{0\} \cup \tilde{\mathcal{X}_0} \times \{1\}$ and initializes the counterexample set $D_{\textrm{ce}}$ to the empty set. Optionally, the algorithm also bootstraps the positive invariant network by initializing $D_{\textrm{spec}}$ with random samples from the state space $\mathcal{X}$ labeled with Monte-Carlo estimates of reaching the unsafe states. The rest of the algorithm consists of two modules which are composed into a loop: the {\em learner} and the {\em verifier}. In each loop iteration, the learner first learns a $W^{\pi}_{\epsilon}$-safe positive invariant candidate, which takes the form of a neural network $g^{\mathsf{Inv}}$. This is done by minimizing the loss function $\mathcal{L}$ that depends on $D_{\textrm{spec}}$ and $D_{\textrm{ce}}$:
\begin{equation}\label{eq:totalloss}
\mathcal{L}(g^{\mathsf{Inv}}) = \frac{1}{|D_{\textrm{spec}}|}\sum_{(\mathbf{x},y)\in D_{\textrm{spec}}}^{}\mathcal{L}_{\textrm{cls}}\big(g^\mathsf{Inv}(\mathbf{x}),y\big) +\lambda \frac{1}{|D_{\textrm{ce}}|}\sum_{(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}}^{}\mathcal{L}_{\textrm{ce}}\big(g^\mathsf{Inv}(\mathbf{x}),g^\mathsf{Inv}(\mathbf{x}')\big),
\end{equation}
where $\lambda$ is a tuning parameter and $\mathcal{L}_{\textrm{cls}}$ is a binary classification loss function, e.g.~the 0/1-loss $\mathcal{L}_{\textrm{0/1}}(z,y) = \mathds{1}[\mathds{1}[z\geq0]\neq y]$ or the logistic loss $\mathcal{L}_{\textrm{log}}(z,y)= z - z \cdot y + \log(1 + \exp(-z))$ as its differentiable alternative. The term $\mathcal{L}_{\textrm{ce}}$ is the counterexample loss, which we define via
\begin{equation}\label{eq:loss}
\mathcal{L}_{\textrm{ce}}(z,z') = \mathds{1}\big[z>0\big]\mathds{1}\big[z'<0\big] \mathcal{L}_{\textrm{cls}}\big(z,0\big) \mathcal{L}_{\textrm{cls}}\big(z',1\big).
\end{equation}
Intuitively, the first sum in eq.~\eqref{eq:totalloss} forces $g^{\mathsf{Inv}}$ to be nonnegative at initial states and negative at unsafe states contained in $D_{\textrm{spec}}$, and the second term forces each counterexample in $D_{\textrm{ce}}$ not to destroy the closedness of $\mathsf{Inv}$ under the system dynamics.
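For concreteness, a differentiable instantiation of the loss in eq.~\eqref{eq:totalloss} can be sketched in a few lines of PyTorch. Here $\mathcal{L}_{\textrm{cls}}$ is the logistic loss, and the batching and shape conventions are our own assumptions rather than part of the method:

\begin{verbatim}
import torch

def logistic_loss(z, y):
    # L_log(z, y) = z - z*y + log(1 + exp(-z)), numerically stable form
    return z - z * y + torch.nn.functional.softplus(-z)

def invariant_loss(g, x_spec, y_spec, x_ce, x_ce_next, lam=1.0):
    """Classification term over D_spec plus counterexample term over D_ce."""
    z = g(x_spec).squeeze(-1)
    spec_term = logistic_loss(z, y_spec).mean()
    z1 = g(x_ce).squeeze(-1)
    z2 = g(x_ce_next).squeeze(-1)
    mask = (z1 > 0).float() * (z2 < 0).float()  # the two indicator factors
    ce_term = (mask
               * logistic_loss(z1, torch.zeros_like(z1))
               * logistic_loss(z2, torch.ones_like(z2))).mean()
    return spec_term + lam * ce_term
\end{verbatim}

Once $g^{\mathsf{Inv}}$ is learned, the verifier checks whether $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant.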
Once $g^{\mathsf{Inv}}$ is learned, the verifier checks whether $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. To do this, it needs to check the three defining properties of $W^{\pi}_{\epsilon}$-safe positive invariants:
\begin{compactenum}
\item {\em Closedness of $\mathsf{Inv}$ under system dynamics.} The verifier checks if there exist states $\mathbf{x}\in\mathsf{Inv}$, $\mathbf{x}'\not\in\mathsf{Inv}$ and a BNN weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ such that $f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))=\mathbf{x}'$. To do this, it introduces real variables $\mathbf{x},\mathbf{x}'\in\mathbb{R}^m$, $\mathbf{u}\in\mathbb{R}^n$ and $y,y'\in\mathbb{R}$, and solves:
\begin{equation*}
\begin{split}
&\textrm{maximize } y - y' \textrm{ subject to} \\
&y\geq 0, y'<0, y=g^{\mathsf{Inv}}(\mathbf{x}), y'=g^{\mathsf{Inv}}(\mathbf{x}') \\
&\mathbf{x}' = f(\mathbf{x},\mathbf{u}) \\
&\mathbf{u} \text{ is an output of } \pi \text{ on input } \mathbf{x} \textrm{ and weights } (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}
\end{split}
\end{equation*}
The conditions $y=g^{\mathsf{Inv}}(\mathbf{x})$ and $y'=g^{\mathsf{Inv}}(\mathbf{x}')$ are encoded by using existing techniques for encoding deterministic NNs as systems of MILP/Reluplex/NRA-SMT constraints. The condition in the third equation is encoded simply by plugging the variable vectors $\mathbf{x}$ and $\mathbf{u}$ into the equation for $f$. Finally, for the condition in the fourth equation we use our encoding from Section~\ref{sec:feedforward}, where we only need to omit the Input-output condition. The optimization objective is added in order to search for the ``worst'' counterexample. We note that MILP~\citep{gurobi} and SMT~\citep{gao2012delta} solvers allow optimizing linear objectives, and recently the Reluplex algorithm~\citep{KatzBDJK17} has also been extended to allow solving optimization problems~\citep{strong2020global}. If a counterexample $(\mathbf{x},\mathbf{x}')$ is found, it is added to $D_{\textrm{ce}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier proceeds to the second check.
\item {\em Non-negativity on $\mathcal{X}_0$.} The verifier checks if there exists $\mathbf{x}\in\mathcal{X}_0$ for which $g^{\mathsf{Inv}}(\mathbf{x})< 0$. If such $\mathbf{x}$ is found, $(\mathbf{x},1)$ is added to $D_{\textrm{spec}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier proceeds to the third check.
\item {\em Negativity on $\mathcal{X}_u$.} The verifier checks if there exists $\mathbf{x}\in\mathcal{X}_u$ with $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$. If such $\mathbf{x}$ is found, $(\mathbf{x},0)$ is added to $D_{\textrm{spec}}$ and the learner tries to learn a new candidate. If the system of constraints is unsatisfiable, the verifier concludes that $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant, and so each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe.
\end{compactenum}
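To give a flavor of the first check, the sketch below encodes a single BNN hidden layer with interval-bounded weights, together with a standard big-M encoding of the ReLU activation, using Gurobi's Python API; it assumes, as in the hidden-layer conditions of Section~\ref{sec:feedforward}, that the layer's input activations are nonnegative, and it assumes an a-priori bound \texttt{big\_m} on pre-activation values. The helper is a hedged illustration, not our exact encoding.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def add_interval_relu_layer(model, x_out, M, m, eps, big_m=1e3):
    """Encode x_next = ReLU(W x_out + b) with W in [M-eps, M+eps]
    and b in [m-eps, m+eps], assuming x_out >= 0 elementwise.
    M, m are arrays of posterior weight/bias means."""
    n_out, n_in = M.shape
    x_in = model.addVars(n_out, lb=-GRB.INFINITY, name="pre")
    x_next = model.addVars(n_out, lb=0.0, name="post")
    z = model.addVars(n_out, vtype=GRB.BINARY, name="on")
    for j in range(n_out):
        lo = gp.quicksum((M[j, k] - eps) * x_out[k]
                         for k in range(n_in)) + m[j] - eps
        hi = gp.quicksum((M[j, k] + eps) * x_out[k]
                         for k in range(n_in)) + m[j] + eps
        model.addConstr(x_in[j] >= lo)
        model.addConstr(x_in[j] <= hi)
        # big-M ReLU: z=1 -> x_next = x_in; z=0 -> x_next = 0, x_in <= 0
        model.addConstr(x_next[j] >= x_in[j])
        model.addConstr(x_next[j] <= x_in[j] + big_m * (1 - z[j]))
        model.addConstr(x_next[j] <= big_m * z[j])
    return x_next
\end{verbatim}
Chaining such layers and adding the dynamics $f$ together with the objective $y-y'$ then yields a program of the shape solved in the first check.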
Theorem~\ref{thm:loss} shows that neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ with the 0/1-classification loss. Theorem~\ref{thm:correctness} establishes the correctness of our algorithm. Proofs can be found in the Supplementary Material.

\begin{theorem}\label{thm:loss}
The loss function $\mathcal{L}$ is nonnegative for any neural network $g$, i.e.~$\mathcal{L}(g)\geq 0$. Moreover, if $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant and $\mathcal{L}_{\textrm{cls}}$ the 0/1-loss, then $\mathcal{L}(g^{\mathsf{Inv}})=0$. Hence, neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ when $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss.
\end{theorem}

\begin{theorem}\label{thm:correctness}
If the verifier in Algorithm~\ref{algorithm:ce} shows that the constraints in all three checks are unsatisfiable, then the computed $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. Hence, Algorithm~\ref{algorithm:ce} is correct.
\end{theorem}

\textbf{Safety via rejection sampling} As discussed in Section~\ref{sec:intro}, once the safety of all trajectories in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ has been verified, we can ``re-calibrate'' the BNN policy to reject sampled weights which are not in $W^{\pi}_{\epsilon}$. Hence, rejection sampling gives rise to a safe BNN policy.

\subsection{Computation of safe weight sets and the value of $\epsilon$}\label{sec:epscompute}

Problems~\ref{problem1} and~\ref{problem2} assume a given value of $\epsilon$ for which safety needs to be verified. In order to compute as large a safe weight set as possible, we start with a small value of $\epsilon$ and iteratively increase it until we reach a value that cannot be certified or until a timeout is reached. In each iteration, Algorithm~\ref{algorithm:ce} does not start from scratch but is initialized with the $g^{\mathsf{Inv}}$ and $D_{\textrm{spec}}$ from the previous successful iteration, i.e.~it attempts to enlarge the current safe weight set. This iterative process significantly speeds up the search compared to naively restarting the algorithm in every iteration.

\subsection{Safe exploration reinforcement learning}\label{sec:safeexploration}

Given a safe but non-optimal initial policy $\pi_{0}$, safe exploration reinforcement learning (SERL) concerns the problem of improving the expected return of $\pi_{0}$ while ensuring safety when collecting samples from the environment \citep{uchibe2007constrained,achiam2017constrained,nakka2020chance}. Our method from Section~\ref{sec:loopcertificate} for computing safe weight sets can be adapted to this setting with minimal effort. In particular, the safety bound $\epsilon$ for the intervals centered at the weight means can be used in combination with rejection sampling to generate safe but randomized rollouts on the environment. Moreover, $\epsilon$ provides bounds on the gradient updates when optimizing the policy using deep Q-learning or policy gradient methods, i.e., performing \emph{projected gradient descent}. We sketch an algorithm for SERL in the Supplementary Material.
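The following sketch illustrates both the iterative search for the largest certifiable $\epsilon$ and the rejection sampling that turns a verified bound into a safe BNN policy. The helper interfaces (\texttt{verify}, \texttt{posterior\_sample}) are assumptions introduced for illustration.
\begin{verbatim}
import numpy as np

def largest_safe_eps(verify, eps0=1e-3, growth=2.0, max_iters=20):
    """Grow eps until certification fails; `verify` is assumed to wrap
    Algorithm 1 and to reuse the previous invariant as initialization."""
    eps, best = eps0, 0.0
    for _ in range(max_iters):
        if not verify(eps):
            break
        best, eps = eps, eps * growth
    return best

def sample_safe_weights(posterior_sample, mu, eps, max_tries=10000):
    """Rejection sampling from the posterior restricted to W_eps."""
    for _ in range(max_tries):
        w = posterior_sample()  # one flat weight vector
        if np.all(np.abs(w - mu) <= eps):
            return w
    raise RuntimeError("acceptance rate too low for this eps")
\end{verbatim}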
\section{Preliminaries and problem statement}\label{sec:prelims}

We consider a discrete-time dynamical system
\[ \mathbf{x}_{t+1} = f(\mathbf{x}_t,\mathbf{u}_t),\, \mathbf{x}_0\in\mathcal{X}_0. \]
The dynamics are defined by the function $f:\mathcal{X}\times \mathcal{U}\rightarrow \mathcal{X}$, where $\mathcal{X}\subseteq \mathbb{R}^m$ is the state space, $\mathcal{U}\subseteq \mathbb{R}^n$ is the control action space, $\mathcal{X}_0\subseteq \mathcal{X}$ is the set of initial states, and $t\in\mathbb{N}_{\geq 0}$ denotes the discrete time step. At each time step $t$, the action is defined by the (possibly probabilistic) positional policy $\pi:\mathcal{X}\rightarrow\mathcal{D}(\mathcal{U})$, which maps the current state $\mathbf{x}_t$ to a distribution $\pi(\mathbf{x}_t)\in\mathcal{D}(\mathcal{U})$ over the set of actions. We use $\mathcal{D}(\mathcal{U})$ to denote the set of all probability distributions over $\mathcal{U}$. The next action is then sampled according to $\mathbf{u}_t\sim \pi(\mathbf{x}_t)$ and, together with the current state $\mathbf{x}_t$, gives rise to the next state $\mathbf{x}_{t+1}$ of the system according to the dynamics $f$. Thus, the dynamics $f$ together with the policy $\pi$ form a closed-loop system (or a feedback loop system). The aim of the policy is to maximize the expected cumulative reward (possibly discounted) from each starting state. Given a set of initial states $\mathcal{X}_0$ of the system, we say that a sequence of state-action pairs $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ is a trajectory if $\mathbf{x}_0\in \mathcal{X}_0$ and we have $\mathbf{u}_t\in\mathsf{supp}(\pi(\mathbf{x}_t))$ and $\mathbf{x}_{t+1} = f(\mathbf{x}_t,\mathbf{u}_t)$ for each $t\in\mathbb{N}_{\geq 0}$.
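As a complement to these definitions, the sketch below simulates one finite prefix of a closed-loop trajectory; since safety in this work concerns infinite time horizons, such simulation can only falsify safety, never prove it. The interfaces are illustrative.
\begin{verbatim}
def rollout(f, policy, x0, horizon=1000):
    """f(x, u) -> next state; policy(x) -> distribution with .sample()."""
    traj = [x0]
    x = x0
    for _ in range(horizon):
        u = policy(x).sample()  # u_t ~ pi(x_t)
        x = f(x, u)             # x_{t+1} = f(x_t, u_t)
        traj.append(x)
    return traj
\end{verbatim}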
A neural network (NN) is a function $\pi:\mathbb{R}^m\rightarrow \mathbb{R}^n$ that consists of several sequentially composed layers $\pi=l_1\circ\dots\circ l_k$. Formally, an NN policy maps each system state to a Dirac-delta distribution which picks a single action with probability $1$. Each layer $l_i$ is parametrized by learned weight values of the appropriate dimensions and an activation function $a$,
\[ l_i(\mathbf{x}) = a(\mathbf{W}_i\mathbf{x}+\mathbf{b}_i), \mathbf{W}_i\in \mathbb{R}^{n_i\times m_i}, \mathbf{b}_i\in \mathbb{R}^{n_i}. \]
In this work, we consider ReLU activation functions $a(\mathbf{x})=\textrm{ReLU}(\mathbf{x})=\max\{\mathbf{x},\mathbf{0}\}$, although other piecewise linear activations such as the leaky-ReLU \citep{JarrettKRL09} and PReLU \citep{HeZRS15} are applicable as well.

In Bayesian neural networks (BNNs), weights are random variables and their values are sampled, each according to some distribution. Each vector of sampled weights then gives rise to a (deterministic) neural network. Given a training set $\mathcal{D}$, in order to train the BNN we assume a prior distribution $p(\mathbf{w},\mathbf{b})$ over the weights. The learning then amounts to computing the posterior distribution $p(\mathbf{w},\mathbf{b}\mid \mathcal{D})$ via Bayes' rule. As analytical inference of the posterior is in general infeasible due to the non-linearity introduced by the BNN architecture~\citep{MacKay92a}, practical training algorithms rely on approximate inference, e.g.~Hamiltonian Monte Carlo~\citep{neal2012bayesian}, variational inference~\citep{blundell2015weight} or dropout~\citep{GalG16}.

When the policy in a dynamical system is a BNN, the policy maps each system state $\mathbf{x}_t$ to a probability distribution $\pi(\mathbf{x}_t)$ over the action space. Informally, this distribution is defined as follows. First, BNN weights $\mathbf{w}$, $\mathbf{b}$ are sampled according to the posterior BNN weight distribution, and the sampled weights give rise to a deterministic NN policy $\pi_{\mathbf{w},\mathbf{b}}$. The action of the system is then defined as $\mathbf{u}_t=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}_t)$. The formal definition of the distribution $\pi(\mathbf{x}_t)$ is straightforward and proceeds by considering the product measure of the distributions of all weights.

\textbf{Problem statement} We now define the two safety problems that we consider in this work. The first problem considers feed-forward BNNs, and the second considers closed-loop systems with BNN policies. While our solution to the first problem will be a subprocedure in our solution to the second, we state it as a separate problem because we believe it is also of independent interest for the safety analysis of feed-forward BNNs.

Let $\pi$ be a BNN. Suppose that the vector $(\mathbf{w},\mathbf{b})$ of BNN weights in $\pi$ has dimension $p+q$, where $p$ is the dimension of $\mathbf{w}$ and $q$ is the dimension of $\mathbf{b}$. For each $1\leq i\leq p$, let $\mu_i$ denote the mean of the random variable $w_i$. Similarly, for each $1\leq i\leq q$, let $\mu_{p+i}$ denote the mean of the random variable $b_i$. Then, for each $\epsilon\in [0,\infty]$, we define the set $W^{\pi}_{\epsilon}$ of weight vectors via
\[ W^{\pi}_{\epsilon} = \prod_{i=1}^{p+q}[\mu_i-\epsilon,\mu_i+\epsilon] \subseteq \mathbb{R}^{p+q}. \]
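To illustrate these definitions, the sketch below samples one weight vector around the posterior means (here from an illustrative factorized Gaussian posterior), checks its membership in the box $W^{\pi}_{\epsilon}$, and evaluates the resulting deterministic ReLU network; the absence of an output activation is likewise an assumption.
\begin{verbatim}
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, weights, biases):
    """Deterministic network pi_{w,b} for one sampled weight vector."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = relu(W @ x + b)
    return weights[-1] @ x + biases[-1]

def sample_and_evaluate(x, mean_W, mean_b, eps, sigma=0.1, rng=None):
    """One draw from pi(x), plus membership of the draw in W_eps."""
    rng = rng if rng is not None else np.random.default_rng()
    Ws = [m + sigma * rng.standard_normal(m.shape) for m in mean_W]
    bs = [m + sigma * rng.standard_normal(m.shape) for m in mean_b]
    in_box = (all(np.all(np.abs(W - m) <= eps)
                  for W, m in zip(Ws, mean_W))
              and all(np.all(np.abs(b - m) <= eps)
                      for b, m in zip(bs, mean_b)))
    return forward(x, Ws, bs), in_box
\end{verbatim}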
We now proceed to defining our safety problem for feed-forward BNNs. Suppose that we are given a feed-forward BNN $\pi$, a set $\mathcal{X}_0\subseteq \mathbb{R}^m$ of input points and a set $\mathcal{X}_u\subseteq \mathbb{R}^n$ of unsafe (or bad) output points. For a concrete vector $(\mathbf{w},\mathbf{b})$ of weight values, let $\pi_{\mathbf{w},\mathbf{b}}$ be the (deterministic) NN defined by these weight values. We say that $\pi_{\mathbf{w},\mathbf{b}}$ is safe if for each $\mathbf{x}\in\mathcal{X}_0$ we have $\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x})\not\in\mathcal{X}_u$, i.e.~if evaluating $\pi_{\mathbf{w},\mathbf{b}}$ on all input points does not lead to an unsafe output.

\begin{adjustwidth}{1cm}{}
\begin{problem}[Feed-forward BNNs]\label{problem1}
Let $\pi$ be a feed-forward BNN, $\mathcal{X}_0\subseteq \mathbb{R}^m$ a set of input points and $\mathcal{X}_u\subseteq \mathbb{R}^n$ a set of unsafe output points. Let $\epsilon\in [0,\infty]$. Determine whether each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe.
\end{problem}
\end{adjustwidth}

Next, we define our safety problem for closed-loop systems with BNN policies. Consider a closed-loop system defined by a dynamics function $f$, a BNN policy $\pi$ and an initial set of states $\mathcal{X}_0$. Let $\mathcal{X}_u\subseteq\mathcal{X}$ be a set of unsafe (or bad) states. We say that a trajectory $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ is safe if $\mathbf{x}_t\not\in \mathcal{X}_u$ for all $t\in\mathbb{N}_{\geq 0}$, i.e.~if it does not reach any unsafe state. Note that this definition implies safety of the trajectory over an infinite time horizon. Given $\epsilon\in[0,\infty]$, define the set $\mathsf{Traj}^{f,\pi}_{\epsilon}$ to be the set of all system trajectories in which each sampled weight vector belongs to $W^{\pi}_{\epsilon}$.

\begin{adjustwidth}{1cm}{}
\begin{problem}[Closed-loop systems with BNN policies]\label{problem2}
Consider a closed-loop system defined by a dynamics function $f$, a BNN policy $\pi$ and a set of initial states $\mathcal{X}_0$. Let $\mathcal{X}_u$ be a set of unsafe states. Let $\epsilon\in [0,\infty]$. Determine whether each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe.
\end{problem}
\end{adjustwidth}

Note that the question of whether the BNN policy $\pi$ is safe (i.e.~whether each trajectory of the system is safe) is a special case of the above problem corresponding to $\epsilon=\infty$.

\section{Proofs}

\begin{manualtheorem}{1}
Let $\epsilon\in[0,\infty]$. Then each deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ is safe if and only if the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is not satisfiable.
\end{manualtheorem}

\begin{proof}
We prove the equivalent claim that there exists a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe if and only if $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is satisfiable.
\medskip

First, suppose that there exists a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe; we want to show that $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ is satisfiable. This direction of the proof is straightforward, since the values of the network's neurons on the unsafe input give rise to a solution of $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$.
Indeed, by assumption there exists a vector of input neuron values $\mathbf{x}_0\in\mathcal{X}_0$ for which the corresponding vector of output neuron values $\mathbf{x}_l=\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}_0)$ is unsafe, i.e.~$\mathbf{x}_l\in\mathcal{X}_u$. By defining $\mathbf{x}_i^{\textrm{in}}$, $\mathbf{x}_i^{\textrm{out}}$ to be the vectors of the corresponding input and output neuron values for the $i$-th hidden layer for each $1\leq i\leq l-1$, and by setting $\mathbf{x}_{0,\textrm{pos}}=\textrm{ReLU}(\mathbf{x}_0)$ and $\mathbf{x}_{0,\textrm{neg}}=-\textrm{ReLU}(-\mathbf{x}_0)$, we easily see that these variable values satisfy the Input-output conditions, the ReLU encoding conditions and the BNN input and hidden layer conditions; therefore, we get a solution to the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$.
\medskip

We now proceed to the more involved direction of this proof and show that any solution to the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$ gives rise to weights $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ for which $\pi_{\mathbf{w},\mathbf{b}}$ is unsafe. Let $\mathbf{x}_0$, $\mathbf{x}_l$, $\mathbf{x}_{0,\textrm{pos}}$, $\mathbf{x}_{0,\textrm{neg}}$ and $\mathbf{x}_i^{\textrm{in}}$, $\mathbf{x}_i^{\textrm{out}}$ for $1\leq i\leq l-1$, be real vectors that satisfy the system of constraints $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$.

Fix $1\leq i\leq l-1$. From the BNN hidden layers constraint for layer $i$, we have
\begin{equation}\label{eq:1}
(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_{i+1}^{\textrm{in}} \leq (\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}).
\end{equation}
We show that there exist values $\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast}$ of BNN weights between layers $i$ and $i+1$ such that each weight value is at most $\epsilon$ apart from its mean, and such that $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$. To formally show this, we define $W^{\pi}_{\epsilon}[i]$ to be the set of all weight vectors between layers $i$ and $i+1$ such that each weight value is distant from its mean by at most $\epsilon$ (hence, $W^{\pi}_{\epsilon}[i]$ is a projection of $W^{\pi}_{\epsilon}$ onto the dimensions that correspond to the weights between layers $i$ and $i+1$). We then consider the continuous function $h_i$ on $W^{\pi}_{\epsilon}[i]$ defined via
\begin{equation*}
h_i(\mathbf{W}_i,\mathbf{b}_i) = \mathbf{W}_i\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i,
\end{equation*}
whose values are vectors with one component per neuron of layer $i+1$. Since $W^{\pi}_{\epsilon}[i]$ is a product of intervals, and since each component of $h_i$ is a continuous function of a disjoint subset of the weights (namely, the corresponding row of $\mathbf{W}_i$ and entry of $\mathbf{b}_i$), the image of $W^{\pi}_{\epsilon}[i]$ under $h_i$ is a product of intervals, i.e.~a box.
But note that
\[ h_i(\mathbf{M}_i-\epsilon\cdot\mathbf{1},\mathbf{m}_i-\epsilon\cdot\mathbf{1})=(\mathbf{M}_i-\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i-\epsilon\cdot\mathbf{1}) \]
and
\[ h_i(\mathbf{M}_i+\epsilon\cdot\mathbf{1},\mathbf{m}_i+\epsilon\cdot\mathbf{1})=(\mathbf{M}_i+\epsilon\cdot\mathbf{1})\mathbf{x}_i^{\textrm{out}}+(\mathbf{m}_i+\epsilon\cdot\mathbf{1}), \]
with $(\mathbf{M}_i-\epsilon\cdot\mathbf{1},\mathbf{m}_i-\epsilon\cdot\mathbf{1}),(\mathbf{M}_i+\epsilon\cdot\mathbf{1},\mathbf{m}_i+\epsilon\cdot\mathbf{1})\in W^{\pi}_{\epsilon}[i]$. Since the image is a box containing both of these points, it also contains $\mathbf{x}_{i+1}^{\textrm{in}}$, which lies between them componentwise by eq.~\eqref{eq:1}. Thus, there exists $(\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast})\in W^{\pi}_{\epsilon}[i]$ with $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$, as desired.
\medskip

For the input and the first hidden layer, from the BNN input layer constraint we know that
\begin{equation*}
(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \leq \mathbf{x}_1^{\textrm{in}} \leq (\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1}).
\end{equation*}
Again, define $W^{\pi}_{\epsilon}[0]$ to be the set of all weight vectors between the input and the first hidden layer such that each weight value is distant from its mean by at most $\epsilon$, and consider the continuous function $h_0$ on $W^{\pi}_{\epsilon}[0]$ defined via
\begin{equation*}
h_0(\mathbf{W}_0,\mathbf{b}_0) = \mathbf{W}_0\mathbf{x}_0+\mathbf{b}_0.
\end{equation*}
Let $\textrm{Msign}(\mathbf{x}_0)$ be a matrix of the same dimension as $\mathbf{M}_0$, with each column consisting of $1$'s if the corresponding component of $\mathbf{x}_0$ is nonnegative, and of $-1$'s if it is negative. Then note that
\[ h_0(\mathbf{M}_0-\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0-\epsilon\cdot\mathbf{1})=(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0-\epsilon\cdot\mathbf{1}) \]
and
\[ h_0(\mathbf{M}_0+\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0+\epsilon\cdot\mathbf{1})=(\mathbf{M}_0+\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{pos}}+(\mathbf{M}_0-\epsilon\cdot\mathbf{1})\mathbf{x}_{0,\textrm{neg}}+(\mathbf{m}_0+\epsilon\cdot\mathbf{1}). \]
Since $(\mathbf{M}_0-\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0-\epsilon\cdot\mathbf{1}),(\mathbf{M}_0+\epsilon\cdot\textrm{Msign}(\mathbf{x}_0),\mathbf{m}_0+\epsilon\cdot\mathbf{1})\in W^{\pi}_{\epsilon}[0]$, an argument analogous to the one above shows that there exist values $\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast}$ of BNN weights such that $(\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast})\in W^{\pi}_{\epsilon}[0]$ and $\mathbf{x}_{1}^{\textrm{in}}=\mathbf{W}_0^{\ast}\mathbf{x}_0+\mathbf{b}_0^{\ast}$.
\medskip

But now, collecting $\mathbf{W}_0^{\ast},\mathbf{b}_0^{\ast}$ and $\mathbf{W}_i^{\ast},\mathbf{b}_i^{\ast}$ for $1\leq i\leq l-1$ gives rise to a BNN weight vector $(\mathbf{W}^{\ast},\mathbf{b}^{\ast})$ which is contained in $W^{\pi}_{\epsilon}$.
Furthermore, combining what we showed above with the constraints in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$, we get that:
\begin{compactitem}
\item $\mathbf{x}_0\in \mathcal{X}_0$, $\mathbf{x}_l\in\mathcal{X}_u$, from the Input-output condition in $\Phi(\pi,\mathcal{X}_0,\mathcal{X}_u,\epsilon)$;
\item $\mathbf{x}_i^{\textrm{out}} = \textrm{ReLU}(\mathbf{x}_i^{\textrm{in}})$ for each $1\leq i\leq l-1$, from the ReLU-encoding;
\item $\mathbf{x}_{1}^{\textrm{in}}=\mathbf{W}_0^{\ast}\mathbf{x}_0+\mathbf{b}_0^{\ast}$ and $\mathbf{x}_{i+1}^{\textrm{in}}=\mathbf{W}_i^{\ast}\mathbf{x}_i^{\textrm{out}}+\mathbf{b}_i^{\ast}$ for each $1\leq i\leq l-1$, as shown above.
\end{compactitem}
Hence, $\mathbf{x}_l\in\mathcal{X}_u$ is the vector of neuron output values of $\pi_{\mathbf{W}^{\ast},\mathbf{b}^{\ast}}$ on the input neuron values $\mathbf{x}_0\in\mathcal{X}_0$, so, since $(\mathbf{W}^{\ast},\mathbf{b}^{\ast})\in W^{\pi}_{\epsilon}$, we conclude that there exists a deterministic NN in $\{\pi_{\mathbf{w},\mathbf{b}} \mid (\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}\}$ which is not safe. This concludes the proof.
\end{proof}

\begin{manualtheorem}{2}
If there exists a $W^{\pi}_{\epsilon}$-safe positive invariant, then each trajectory in $\mathsf{Traj}^{f,\pi}_{\epsilon}$ is safe.
\end{manualtheorem}

\begin{proof}
Let $\mathsf{Inv}$ be a $W^{\pi}_{\epsilon}$-safe positive invariant. Given a trajectory $(\mathbf{x}_t,\mathbf{u}_t)_{t=0}^{\infty}$ in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, we need to show that $\mathbf{x}_t\not\in\mathcal{X}_u$ for each $t\in\mathbb{N}_{\geq 0}$. Since $\mathsf{Inv}\cap\mathcal{X}_u=\emptyset$, it suffices to show that $\mathbf{x}_t\in\mathsf{Inv}$ for each $t\in\mathbb{N}_{\geq 0}$. We prove this by induction on $t$. The base case $\mathbf{x}_0\in\mathsf{Inv}$ follows since $\mathbf{x}_0\in\mathcal{X}_0\subseteq \mathsf{Inv}$. As the inductive hypothesis, suppose now that $\mathbf{x}_t\in\mathsf{Inv}$ for some $t\in\mathbb{N}_{\geq 0}$. We need to show that $\mathbf{x}_{t+1}\in\mathsf{Inv}$. Since the trajectory is in $\mathsf{Traj}^{f,\pi}_{\epsilon}$, we know that the BNN weight vector $(\mathbf{w}_t,\mathbf{b}_t)$ sampled at time step $t$ belongs to $W^{\pi}_{\epsilon}$, i.e.~$(\mathbf{w}_t,\mathbf{b}_t)\in W^{\pi}_{\epsilon}$. Thus, since $\mathbf{x}_t\in\mathsf{Inv}$ by the induction hypothesis and since $\mathsf{Inv}$ is closed under the system dynamics when the sampled weight vector is in $W^{\pi}_{\epsilon}$, it follows that $\mathbf{x}_{t+1}=f(\mathbf{x}_t,\mathbf{u}_t)=f(\mathbf{x}_t,\pi_{\mathbf{w}_t,\mathbf{b}_t}(\mathbf{x}_t))\in \mathsf{Inv}$. This concludes the proof by induction.
\end{proof}

\begin{manualtheorem}{3}
The loss function $\mathcal{L}$ is nonnegative for any neural network $g$, i.e.~$\mathcal{L}(g)\geq 0$. Moreover, if $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant and $\mathcal{L}_{\textrm{cls}}$ the 0/1-loss, then $\mathcal{L}(g^{\mathsf{Inv}})=0$. Hence, neural networks $g^{\mathsf{Inv}}$ for which $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant are global minimizers of the loss function $\mathcal{L}$ when $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss.
\end{manualtheorem}

\begin{proof}
Recall that the loss function $\mathcal{L}$ for a neural network $g$ is defined via
\begin{equation}\label{eq:totallossupmat}
\mathcal{L}(g) = \frac{1}{|D_{\textrm{spec}}|}\sum_{(\mathbf{x},y)\in D_{\textrm{spec}}}^{}\mathcal{L}_{\textrm{cls}}\big(g(\mathbf{x}),y\big) +\lambda \frac{1}{|D_{\textrm{ce}}|}\sum_{(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}}^{}\mathcal{L}_{\textrm{ce}}\big(g(\mathbf{x}),g(\mathbf{x}')\big),
\end{equation}
where $\lambda$ is a tuning parameter and $\mathcal{L}_{\textrm{cls}}$ a binary classification loss function, e.g.~the 0/1-loss $\mathcal{L}_{\textrm{0/1}}(z,y) = \mathds{1}[\mathds{1}[z\geq0]\neq y]$ or the logistic loss $\mathcal{L}_{\textrm{log}}(z,y)= z - z \cdot y + \log(1 + \exp(-z))$ as its differentiable alternative. The term $\mathcal{L}_{\textrm{ce}}$ is the counterexample loss which we define via
\begin{equation}\label{eq:losssupmat}
\mathcal{L}_{\textrm{ce}}(z,z') = \mathds{1}\big[z>0\big]\mathds{1}\big[z'<0\big] \mathcal{L}_{\textrm{cls}}\big(z,0\big) \mathcal{L}_{\textrm{cls}}\big(z',1\big).
\end{equation}
The fact that $\mathcal{L}(g)\geq 0$ for each neural network $g$ follows immediately from the fact that the summands in the first sum in eq.~\eqref{eq:totallossupmat} are loss functions and therefore nonnegative, and the summands in the second sum are products of indicator and nonnegative loss functions and therefore also nonnegative.

We now show that, if $\mathcal{L}_{\textrm{cls}}$ is the 0/1-loss, $\mathcal{L}(g^{\mathsf{Inv}})=0$ whenever $\mathsf{Inv}$ is a $W^{\pi}_{\epsilon}$-safe positive invariant, which implies the global minimization claim in the theorem. This follows from the following two items:
\begin{compactitem}
\item For each $(\mathbf{x},y)\in D_{\textrm{spec}}$, we have $\mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}),y\big)=0$. Indeed, for $(\mathbf{x},y)$ to be added to $D_{\textrm{spec}}$ in Algorithm~1, we must have that either $\mathbf{x}\in\mathcal{X}_0$ and $y=1$, or $\mathbf{x}\in\mathcal{X}_u$ and $y=0$. Thus, since $\mathsf{Inv}$ is assumed to be a $W^{\pi}_{\epsilon}$-safe positive invariant, $g^{\mathsf{Inv}}$ correctly classifies $(\mathbf{x},y)$ and the corresponding loss is $0$.
\item For each $(\mathbf{x},\mathbf{x}')\in D_{\textrm{ce}}$, we have $\mathcal{L}_{\textrm{ce}}\big(g^{\mathsf{Inv}}(\mathbf{x}),g^{\mathsf{Inv}}(\mathbf{x}')\big)=0$. Indeed, since
\[ \mathcal{L}_{\textrm{ce}}(g^{\mathsf{Inv}}(\mathbf{x}),g^{\mathsf{Inv}}(\mathbf{x}')) = \mathds{1}\big[g^{\mathsf{Inv}}(\mathbf{x})>0\big]\mathds{1}\big[g^{\mathsf{Inv}}(\mathbf{x}')<0\big] \mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}),0\big) \mathcal{L}_{\textrm{cls}}\big(g^{\mathsf{Inv}}(\mathbf{x}'),1\big), \]
for the loss to be non-zero we must have $g^{\mathsf{Inv}}(\mathbf{x})> 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$. But this is impossible, since $\mathsf{Inv}$ is assumed to be a $W^{\pi}_{\epsilon}$-safe positive invariant and $(\mathbf{x},\mathbf{x}')$ was added by Algorithm~1 as a counterexample to $D_{\textrm{ce}}$, meaning that $\mathbf{x}'$ can be reached from $\mathbf{x}$ by following the dynamics function and sampling a BNN weight vector in $W^{\pi}_{\epsilon}$. Therefore, by the closedness of $W^{\pi}_{\epsilon}$-safe positive invariants under the dynamics when the sampled weight vector is in $W^{\pi}_{\epsilon}$, we cannot have both $g^{\mathsf{Inv}}(\mathbf{x})> 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$. Hence, the loss must be $0$.
\end{compactitem}
\end{proof}

\begin{manualtheorem}{4}
If the verifier in Algorithm~1 shows that the constraints in all three checks are unsatisfiable, then the computed $\mathsf{Inv}$ is indeed a $W^{\pi}_{\epsilon}$-safe positive invariant. Hence, Algorithm~1 is correct.
\end{manualtheorem}

\begin{proof}
The fact that the first check in Algorithm~1 correctly checks whether there exist states $\mathbf{x},\mathbf{x}'\in\mathcal{X}$ and a weight vector $(\mathbf{w},\mathbf{b})\in W^{\pi}_{\epsilon}$ such that $\mathbf{x}'=f(\mathbf{x},\pi_{\mathbf{w},\mathbf{b}}(\mathbf{x}))$ with $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$ and $g^{\mathsf{Inv}}(\mathbf{x}')<0$ follows from the correctness of our encoding in Section~4.1, which was proved in Theorem~1. The fact that checks 2 and 3 correctly check whether for all $\mathbf{x}\in\mathcal{X}_0$ we have $g^{\mathsf{Inv}}(\mathbf{x})\geq 0$, and for all $\mathbf{x}\in\mathcal{X}_u$ we have $g^{\mathsf{Inv}}(\mathbf{x})<0$, respectively, follows immediately from the conditions they encode. Therefore, the three checks together verify that (1)~$\mathsf{Inv}$ is closed under the system dynamics whenever the sampled weight vector is in $W^{\pi}_{\epsilon}$, (2)~$\mathsf{Inv}$ contains all initial states, and (3)~$\mathsf{Inv}$ contains no unsafe states. As these are the three defining properties of $W^{\pi}_{\epsilon}$-safe positive invariants, Algorithm~1 is correct and the theorem claim follows.
\end{proof}

\section{Safe exploration reinforcement learning algorithm}

Algorithm \ref{algorithm:serl} sketches how standard RL algorithms, such as policy gradient methods and deep Q-learning, can be adapted to a safe exploration setup by using the safe weight sets computed by our method.

\begin{algorithm}[t]
\caption{Safe Exploration Reinforcement Learning}
\label{algorithm:serl}
\begin{algorithmic}
\STATE \textbf{Input} Initial policy $\pi_{0}$, learning rate $\alpha$, number of iterations $N$
\FOR{$i \in 1,\dots N$}
\STATE $\epsilon \leftarrow$ find safe $\epsilon$ for $\pi_{i-1}$
\STATE collect rollouts of $\pi_{i-1}$ with rejection sampling using $\epsilon$
\STATE compute $\nabla \mu_{i-1}$ for parameters $\mu_{i-1}$ of $\pi_{i-1}$ using DQN/policy gradient
\STATE $\mu_i \leftarrow \mu_{i-1} - \alpha \nabla \mu_{i-1}$ (gradient descent)
\STATE project $\mu_i$ back to the interval $[\mu_{i-1}-\epsilon,\mu_{i-1}+\epsilon]$
\ENDFOR
\STATE \textbf{Return} $\pi_{N}$
\end{algorithmic}
\end{algorithm}
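The core of each SERL iteration is the projected gradient step; a minimal PyTorch-style sketch of this step is given below (names are illustrative).
\begin{verbatim}
import torch

def projected_update(mu, grad, mu_prev, eps, lr=1e-3):
    """One update of the weight means, projected back onto the
    verified box [mu_prev - eps, mu_prev + eps]."""
    with torch.no_grad():
        mu_new = mu - lr * grad  # gradient descent step
        # projection: clamp each coordinate into the box
        mu_new = torch.min(torch.max(mu_new, mu_prev - eps),
                           mu_prev + eps)
    return mu_new
\end{verbatim}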
\section{Experimental details}

In this section, we describe the details of our experimental evaluation setup. The code and pre-trained network parameters are attached in the supplementary materials. Each policy is a ReLU network consisting of three layers. The first layer represents the input variables, the second is a hidden layer with 16 neurons, and the third contains the output variables. The sizes of the first and last layers are task-dependent and are shown in Table \ref{tab:epx}. The $W^{\pi}_{\epsilon}$-safe positive invariant candidate network differs from the policy network in that its weights are deterministic, it has a different number of hidden units, and it has a single output dimension. In particular, the invariant networks for the linear dynamical system and the inverted pendulum have 12 hidden units, whereas the invariant network for the collision avoidance task has 32 neurons in its hidden layer. The policy networks are trained with $\mathcal{N}(0,0.1)$ (from the second layer on) and $\mathcal{N}(0,0.05)$ (all weights) priors for the Bayesian weights, respectively. MILP solving was performed with Gurobi 9.03 on a virtual machine with 4 vCPUs and 32\,GB of memory.

\begin{table}[]
\centering
\begin{tabular}{c|ccc}\toprule
Experiment & Input dimension & Hidden size & Output dimension \\\midrule
Linear dynamical system & 2 & 16 & 1 \\
Inverted pendulum & 2 & 16 & 1 \\
Collision avoidance & 3 & 16 & 3\\\bottomrule
\end{tabular}
\caption{Number of dimensions of the policy network for the three experiments.}
\label{tab:epx}
\end{table}

\paragraph{Linear dynamical system} The state of the linear dynamical system consists of two variables $(x,y)$. The update function takes the current state $(x_t,y_t)$ together with the current action $u_t$ and outputs the next state $(x_{t+1},y_{t+1})$ governed by the equations
\begin{align*}
y_{t+1} &= y_t + 0.2\cdot \textrm{clip}_{\pm 1}(u_t)\\
x_{t+1} &= x_t + 0.3 y_{t+1} + 0.05\cdot \textrm{clip}_{\pm 1}(u_t),
\end{align*}
where the function $\textrm{clip}_{\pm z}$ is defined by
\begin{equation*}
\textrm{clip}_{\pm z}(x) = \begin{cases}-z & \text{ if } x\leq -z\\ z & \text{ if } x\geq z\\ x & \text{ otherwise. }\end{cases}
\end{equation*}
The set of unsafe states is defined as $\{(x,y)\in\mathbb{R}^2\mid|(x,y)|_{\infty}\geq 1.2\}$, and the set of initial states as $\{(x,y)\in\mathbb{R}^2\mid |(x,y)|_{\infty}\leq 0.6\}$.

\paragraph{Inverted pendulum} The state of the inverted pendulum consists of two variables $(\theta,\dot{\theta})$. The non-linear state transition is defined by
\begin{align*}
\dot{\theta}_{t+1} &= \textrm{clip}_{\pm 8}\big(\dot{\theta}_t + \frac{-3g\cdot \textrm{angular}(\theta_t + \pi)}{2l} + \delta_t\frac{7.5\cdot \textrm{clip}_{\pm 1}(u_t)}{m\cdot l^2}\big)\\
\theta_{t+1} &= \theta_t + \dot{\theta}_{t+1} \cdot \delta_t,
\end{align*}
where $g=9.81$, $l=1$, $\delta_t=0.05$, and $m=0.8$ are constants. The function $\textrm{angular}$ is defined using the piecewise linear composition
\begin{equation*}
\textrm{angular}(x) = \begin{cases} \textrm{angular}(x+2\pi) & \text{ if } x \leq \pi / 2\\ \textrm{angular}(x-2\pi) & \text{ if } x > 5 \pi / 2\\ \frac{x - 2\pi}{\pi / 2} & \text{ if } 3 \pi / 2 < x \leq 5 \pi / 2\\ 2 - \frac{x}{\pi / 2} & \text{ if } \pi / 2 < x \leq 3 \pi / 2. \end{cases}
\end{equation*}
The set of initial states is defined by $\{(\theta,\dot{\theta})\in\mathbb{R}^2\mid |\theta| \leq \pi/6 \text{ and } |\dot{\theta}|\leq0.2 \}$. The set of unsafe states is defined by $\{(\theta,\dot{\theta})\in\mathbb{R}^2\mid |\theta| \geq 0.9 \text{ or } |\dot{\theta}|\geq2 \}$.

\paragraph{Collision avoidance} The state of the collision avoidance environment consists of three variables $(p_x,a_x,a_y)$, representing the agent's vertical position and the vertical and horizontal positions of an intruder. The intruder moves toward the agent, while the agent's vertical position must be controlled to avoid colliding with the intruder. The state transition is given by
\begin{align*}
p_{x,t+1} &= p_{x,t} + u_{t}\\
a_{x,t+1} &= a_{x,t}\\
a_{y,t+1} &= a_{y,t} - 1.
\end{align*}
Admissible actions are defined by $u_t \in \{-1,0,1\}$. The set of initial states is defined as $\{(p_x,a_x,a_y)\in\mathbb{Z}^3\mid |p_x|\leq 2 \text{ and } |a_x|\leq 2 \text{ and } a_y=5\}$. Likewise, the set of unsafe states is given by $\{(p_x,a_x,a_y)\in\mathbb{Z}^3\mid |p_x-a_x|\leq 1 \text{ and } a_y=5\}$.
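For concreteness, the benchmark dynamics above translate directly into code; the following sketch implements the clipping function and one step of the linear dynamical system, together with its safety predicate (the function names are illustrative).
\begin{verbatim}
import numpy as np

def clip_pm(x, z):
    """clip_{+-z}(x) as defined above."""
    return np.minimum(np.maximum(x, -z), z)

def linear_system_step(x, y, u):
    """One step of the linear dynamical system benchmark."""
    y_next = y + 0.2 * clip_pm(u, 1.0)
    x_next = x + 0.3 * y_next + 0.05 * clip_pm(u, 1.0)
    return x_next, y_next

def is_unsafe(x, y):
    """Unsafe iff the max-norm of the state is at least 1.2."""
    return max(abs(x), abs(y)) >= 1.2
\end{verbatim}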
\section{Introduction} \label{sec:intro}

Dedicated exoplanet-finding surveys, such as the NASA \textit{Kepler} mission \citep{borucki10}, have revolutionized our understanding of mature planetary system populations, but the formation and evolutionary processes that lead to their properties are still not well understood. The detailed study of exoplanets on an individual basis is usually hindered by the close proximity of the exoplanet to its bright stellar host. The atmospheres of transiting exoplanets can be characterized via transmission spectroscopy \citep[e.g.,][]{deming13,kreidberg14,wakeford17}, but this limits the planetary systems that can be studied to those with semi-major axes $<$1 au. Systems with planets on wider orbits have been observed via spectroscopy behind adaptive optics (AO; \citealt{patience10,barman11a,konopacky13,haffert19,petrus20}), but these observations are difficult and expensive.

Direct-imaging surveys of nearby star-forming regions have found an interesting population of wide-orbit ($>$100 au), planetary-mass companions ($<$20 $M_{\mathrm{Jup}}$; hereafter PMCs), such as 1RXS J160929.1--210524 b (8 $M_{\rm Jup}$, 330 au; \citealt{lafreniere08b}), GSC 06214--00210 B (14 $M_{\rm Jup}$, 330 au; \citealt{ireland11}), HD 106906 b (11 $M_{\rm Jup}$, 650 au; \citealt{bailey14}), and USco1621 B (15 $M_{\rm Jup}$, 2880 au; \citealt{chinchilla20}). These systems have far more favorable separations and contrasts for the detailed study of gas giant atmospheres at larger semi-major axes. In addition, at young ages these systems also offer a unique view of moon-forming circumplanetary disks.

Most observations of directly-imaged exoplanets and planetary-mass companions have been made in the near-infrared (1--3 $\mu$m). Self-luminous exoplanets and planetary-mass objects emit substantial amounts of energy in the mid-infrared, yet very few systems have been studied redward of the $L$-band (3 $\mu$m). From the ground, mid-infrared observations of exoplanets and PMCs are technically challenging, while from space, many systems were discovered after the cryogenic mission of the \textit{Spitzer Space Telescope} \citep{werner04} ended in 2009, and/or fall near or inside its diffraction limit. Extending the wavelength coverage of these objects into the mid-infrared to better fit spectral energy distributions (SEDs) will lead to more precise estimates of their physical properties and further constrain models of substellar and exoplanet atmospheres \citep[e.g.,][]{leggett08,bonnefoy10,bonnefoy14}. Utilizing the mid-infrared observations of these systems that do exist is crucial for planning additional follow-up observations with next-generation facilities like the \textit{James Webb Space Telescope} (\textit{JWST}).

PMCs also frequently harbor disks, mostly identified through accretion signatures (e.g., line emission in H$\alpha$, Pa$\beta$, Br$\gamma$), red near-infrared colors, or mid-infrared excesses. \cite{bowler17} found that $46\%\pm14\%$ of young ($<$15 Myr) substellar ($<$20 $M_{\rm Jup}$) companions with existing moderate-resolution spectroscopy had detectable Pa$\beta$ emission. This high disk frequency is comparable to that observed around isolated young substellar objects \citep{luhman10,esplin17,luhman20}, but it is not clear whether wide-orbit companion disks and isolated circum(sub)stellar disks have similar accretion rates, disk compositions, and grain size distributions.
Observations of PMC disks at radio wavelengths have produced only upper limits, which suggests that the dust in PMC disks might actually be more compact and optically thick \citep[e.g.,][]{bowler15,macgregor17,wolff17,wu17b}. If so, wide-orbit PMC disks are much better suited for identification and characterization in the mid-infrared.

In \cite{martinez19} (hereafter Paper I), we presented an automated point-spread function (PSF) subtraction pipeline to leverage the \textit{Spitzer} archive in the search for wide-orbit planetary-mass companions and to identify excesses from circum(sub)stellar disks. Here, we apply our infrastructure to the remaining sample of known wide-orbit PMCs. In Sections \ref{spitzer_obs} and \ref{data_analysis}, we describe our sample and PSF-fitting framework. We present the results of our image analysis and pipeline performance in Section \ref{sec:results}. Finally, in Section \ref{discussion}, we consider the mid-infrared photometry of the wide companions in our sample in the context of other young low-mass stars and brown dwarfs, and discuss the global disk frequency of PMCs.

\section{Sample and \textit{Spitzer} Observations} \label{spitzer_obs}

In Paper I, the sample of wide-companion systems was chosen to test the feasibility of recovery via PSF subtraction over a broad range of separations and contrast ratios. Here, we constructed a new sample to include other low-mass companions with potentially planetary masses that plausibly fit within those detection limits. We then identified systems with archival \textit{Spitzer}/Infrared Array Camera \citep[IRAC;][]{fazio04} observations from its cryogenic mission. Six of the companions have not previously been resolved with \textit{Spitzer}/IRAC, while three have had IRAC photometry reported in the literature. Seven of the systems belong to the young star-forming regions or stellar associations of Taurus, Carina, Chameleon, Upper Scorpius, and $\rho$ Ophiuchus, while two are young field objects. We target the young field objects because their smaller distances provide good sensitivity to both mass and projected separation.

IRAC operated with four filters in the mid-infrared: 3.6, 4.5, 5.8, and 8.0 $\mu$m. Each IRAC detector has 256$\times$256 pixels with a pixel scale of $1\farcs22$. We work with IRAC's cryogenic-phase corrected basic data (CBCD) and uncertainty (CBUNC) files. All data were reduced with the \textit{Spitzer} Science Center software pipeline version S18.25.0. We used the high-precision astrometry measurements of the companions from previous high-contrast AO observations as priors in our Markov Chain Monte Carlo (MCMC) fits (see Section \ref{data_analysis}). The combined sample primary properties are given in Table \ref{prim_tab} and system properties in Table \ref{comp_tab}. The specific details about the \textit{Spitzer}/IRAC programs and data products are listed in Table \ref{files_tab}.

\begin{deluxetable*}{llrrrccccc}[t!]
\tablecaption{Primary Properties of Directly-Imaged Substellar Companion Systems} \tablehead{\colhead{2MASS} & \colhead{Other Name} & \colhead{$K_s$\tablenotemark{a}} & \colhead{$W1$\tablenotemark{b}} & \colhead{$W2$\tablenotemark{b}} & \colhead{SpT} & \colhead{$A_V$} & \colhead{Distance} & \colhead{Age} & \colhead{Ref.} \\ & & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & & \colhead{(mag)} & \colhead{(pc)} & \colhead{(Myr)} & } \startdata J04294155+2632582 & DH Tau & 8.18 & 7.65 & 7.12 & M1 & 1.4 & $133.3\pm0.4$ & $2\pm1$ & 1\\ J04414565+2301580 & 2M0441 Aab & 9.85 & 9.70 & 9.47 & M4.3; M7 & 0.2 & $122.9^{+1.1}_{-0.9}$ & $2\pm1$ & 2--4\\ J06191291--5803156 & AB Pic & 6.98 & 7.28 & 6.91 & K1 & 0.27 & $50.13\pm0.03$ & $45\pm4$ & 5, 6\\ J11062877--7737331 & CHXR 73 & 10.70 & 10.52 & 10.21 & M3 & 6.5\tablenotemark{c} & $190.0^{+3.3}_{-3.0}$ & $2\pm1$ & 7\\ J13164653+0925269 & GJ 504 & 4.03 & 4.20 & 5.30 & G0 & 0.0 & $17.57\pm0.04$ & $100-6500$& 8\\ J16093030--2104589 & 1RXS J1609 & 8.92 & 8.79 & 8.78 & M0 & 0.9 & $137.8^{+0.3}_{-0.4}$ & $11\pm2$ & 9\\ J16271951--2441403 & SR 12 AB & 8.41 & 8.29 & 8.16 & K4; M2.5 & 1.8\tablenotemark{c} & $112.5^{+5.8}_{-5.3}$ & $3\pm2$ & 10, 11\\ J16311501--2432436 & ROXs 42B & 8.67 & 8.48 & 8.37 & M0 & 2.4 & $145.4^{+0.5}_{-0.7}$ & $3\pm2$ & 12, 13\\ J21185820+2613500 & HD 203030 & 6.65 & 6.98 & 6.70 & K0 & 0.03 & $39.23^{+0.03}_{-0.04}$& $130-400$ & 14, 15 \enddata \tablenotetext{a}{2MASS Point Source Catalog \citep{cutri03}} \tablenotetext{b}{CatWISE Source Catalog \citep{marocco20}. The $W1$ and $W2$ values reported for AB Pic and GJ 504 are lower than in AllWISE \citep{cutri14} likely because of saturation.} \tablenotetext{c}{$A_V$ converted from $A_J$.} \tablecomments{\textit{Gaia} EDR3 parallactic distances are used from \cite{bailer-jones21} except for SR 12 AB, where we use its \textit{Gaia} DR2 parallactic distance from \cite{bailer-jones18}.} \tablerefs{(1) \cite{itoh05}; (2) \cite{todorov10}; (3) \cite{todorov14}; (4) \cite{bowler15}; (5) \cite{chauvin05}; (6) \cite{bonnefoy10}; (7) \cite{luhman06c}; (8) \cite{kuzuhara13}; (9) \cite{lafreniere08b}; (10) \cite{kuzuhara11}; (11) \cite{bowler14}; (12) \cite{kraus14}; (13) \cite{currie14}; (14) \cite{metchev06}; (15) \cite{miles-paez17}} \label{prim_tab} \end{deluxetable*} \begin{deluxetable*}{llccccc} \tablecaption{Properties of Directly-Imaged Substellar Companions} \tablehead{\colhead{2MASS} & \colhead{Other Name} & \colhead{Separation} & \colhead{Position Angle} & \colhead{Filter} & \colhead{$\Delta m$} & \colhead{Ref.} \\ \colhead{(Primary)}& \colhead{(Companion)} & \colhead{(arcsec)} & \colhead{(deg)} & & \colhead{(mag)} & } \startdata J04294155+2632582 & DH Tau B & $2.31\pm0.02$ & $138.5\pm0.1$ & $K^{\prime}$ & 5.92 & 1--4 \\ J04414565+2301580 & 2M0441 Bab & $12.325\pm0.007$ & $238.0\pm0.1$ & $K_s$ & 3.31 & 5--7 \\ J06191291--5803156 & AB Pic b & $5.453\pm0.025$ & $175.25\pm0.34$ & $K_s$ & 7.16 & 8, 9 \\ J11062877--7737331 & CHXR 73 b & $1.30\pm0.03$ & $234.9\pm1.0$ & $K_s$ & 4.70 & 10 \\ J13164653+0925269 & GJ 504 B & $2.483\pm0.015$ & $326.46\pm0.36$ & $L^{\prime}$ & 12.90 & 11, 12\\ J16093030--2104589 & 1RXS J1609 b & $2.219\pm0.002$ & $27.7\pm0.1$ & $K_s$ & 7.25 & 13, 14\\ J16271951--2441403 & SR 12 C & $8.673\pm0.153$ & $166\pm2$\tablenotemark{a}& $K_s$ & 6.16 & 15, 16\\ J16311501--2432436 & ROXs 42B b & $1.172\pm0.002$ & $270.09\pm0.17$ & $K_s$ & 6.34 & 17--19\\ J21185820+2613500 & HD 203030 B & $11.923\pm0.021$ & $108.76\pm0.12$ & $K_s$ & 9.56 & 20, 21 \enddata 
\tablenotetext{a}{P.A.~estimated from Fig.~1 of \cite{kuzuhara11}.} \tablecomments{The listed uncertainties were used as input errors on the separation and P.A.~priors.} \tablerefs{(1) \cite{itoh05}; (2) \cite{bonnefoy14}; (3) \cite{zhou14}; (4) \cite{kraus14}; (5) \cite{todorov14}; (6) \cite{bowler15}; (7) \cite{kraus09b}; (8) \cite{chauvin05}; (9) \cite{bonnefoy14}; (10) \cite{luhman06c}; (11) \cite{kuzuhara13}; (12) \cite{skemer16}; (13) \cite{lafreniere08b}; (14) \cite{wu15}; (15) \cite{kuzuhara11}; (16) \cite{bowler14}; (17) \cite{kraus14}; (18) \cite{currie14}; (19) \cite{bowler14}; (20) \cite{metchev06}; (21) \cite{miles-paez17}} \label{comp_tab} \end{deluxetable*} \begin{deluxetable*}{lcccccclrl} \tablecaption{\textit{Spitzer}/IRAC Observations} \tablehead{\colhead{2MASS} & \multicolumn{4}{c}{No.\ of Frames} & \colhead{$T_{\mathrm{exp}}$} & \colhead{AOR} & \colhead{Date} & \colhead{PID} & \colhead{PI} \\ \cline{2-5} & Ch 1 & Ch 2 & Ch 3 & Ch 4 & \colhead{(s)} & & \colhead{(UT)} & & } \startdata J04294155+2632582 & 3 & 3 & 3 & 3 & 0.4/10.4 & 3963392 & 2004 Mar 7 & 37 & G. Fazio \\ & 1 & 1 & 1 & 1 & 0.4/10.4 & 11232256 & 2005 Feb 23 & 3584 & D. Padgett \\ & 1 & 1 & 1 & 1 & 0.4/10.4 & 11236096 & 2005 Feb 24 & 3584 & D. Padgett \\ J04414565+2301580 & 0/5 & 1/5 & 0/5 & 1/5 & 1.0/26.8 & 18364160 & 2007 Mar 28 & 30540 & J. Houck \\ J06191291--5803156 & 9 & 9 & 9 & 9 & 0.4/10.4 & 15174656 & 2005 Sep 18 & 20795 & P. Lowrance \\ J11062877--7737331 & 2 & 2 & 2 & 2 & 0.4/10.4 & 3960320 & 2004 Jun 10 & 37 & G. Fazio \\ J13164653+0925269 & 1/5 & 0/5 & 1/5 & 0/5 & 1.0/26.8 & 3921920 & 2004 Jan 9 & 34 & G. Fazio \\ & & 1/5 & & 1/5 & 1.0/26.8 & 18010368 & 2006 Jul 9 & 30298 & K. Luhman \\ J16093030--2104589 & 1/5 & & 1/5 & & 0.4/10.4 & 15844608 & 2005 Aug 24 & 20103 & L. Hillenbrand\\ & & 9 & & 9 & 1.2 & 13872384 & 2006 Mar 26 & 20069 & J. Carpenter \\ J16271951--2441403 & 2 & 3 & 2 & 3 & 0.4/10.4 & 3652096 & 2004 Mar 7 & 6 & G. Fazio \\ & 2 & 2 & 2 & 2 & 0.4/10.4 & 5771008 & 2004 Mar 28 & 177 & N. Evans \\ J16311501--2432436 & 2/2 & 2/2 & 2/2 & 2/2 & 0.4/10.4 & 5752320 & 2004 Mar 28 & 177 & N. Evans \\ & 2 & 2 & 2 & 2 & 10.4 & 5756928 & 2004 Mar 29 & 177 & N. Evans \\ J21185820+2613500 & 16/16 & 16/16 & 16/16 & 16/16 & 1.0/26.8 & 23036416 & 2008 Jun 19 & 40489 & S. Metchev \\ & 16/16 & 16/16 & 16/16 & 16/16 & 1.0/26.8 & 23796480 & 2007 Nov 15 & 40489 & S. Metchev \enddata \label{files_tab} \end{deluxetable*} \section{Data Analysis} \label{data_analysis} Previous analyses of \textit{Spitzer}/IRAC images have searched for wide-orbit PMC systems by taking advantage of IRAC's well-behaved PSF wings at $\gg\lambda/D$ \citep[e.g.,][]{janson15,durkan16,baron18}. Our framework is optimized for probing the IRAC PSF at 1--5 $\lambda/D$, where companion identification is difficult because the PSF is undersampled at the native $1\farcs22$ pixel scale. Classical PSF-modeling techniques, such as ``locally optimized combination of images" (LOCI; \citealt{lafreniere07}) or principal component analysis, require more pixels to adequately model the primary star PSF. We use the framework described in Paper I to model the point spread functions of the system components in the IRAC images. To summarize, we use the point response function (PRF, or effective PSF; \citealt{hoffman05}) developed by the \textit{Spitzer} Science team to generate model PSFs at any position on the IRAC detector. We then fit a two-source PSF model in each image, performing an MCMC analysis using a Metropolis-Hastings algorithm with Gibbs sampling.
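To make the sampling scheme concrete, the following minimal sketch (in Python) illustrates a Metropolis-Hastings-within-Gibbs update for a two-source PSF model. It is an illustration under stated assumptions rather than our production pipeline: \texttt{evaluate\_prf} is a hypothetical stand-in for the IRAC PRF interpolation, the separation is taken in pixels, and the Gaussian priors on separation and position angle (Table \ref{comp_tab}) are omitted for brevity.
\begin{verbatim}
import numpy as np

def log_likelihood(params, image, unc, evaluate_prf):
    # Seven parameters: primary centroid (x, y), background b, primary
    # peak n, separation rho (pixels), position angle pa (deg), and
    # contrast dmag; evaluate_prf(x, y) is a hypothetical PRF helper.
    x, y, b, n, rho, pa, dmag = params
    dx = rho * np.sin(np.radians(pa))
    dy = rho * np.cos(np.radians(pa))
    model = (b + n * evaluate_prf(x, y)
             + n * 10**(-0.4 * dmag) * evaluate_prf(x + dx, y + dy))
    return -0.5 * np.nansum(((image - model) / unc)**2)

def mh_within_gibbs(image, unc, evaluate_prf, p0, widths, steps, rng):
    # Update one parameter at a time with a Metropolis accept/reject.
    p = np.asarray(p0, dtype=float)
    logl = log_likelihood(p, image, unc, evaluate_prf)
    chain = np.empty((steps, p.size))
    for i in range(steps):
        for j in range(p.size):
            trial = p.copy()
            trial[j] += rng.normal(0.0, widths[j])
            logl_trial = log_likelihood(trial, image, unc, evaluate_prf)
            if np.log(rng.uniform()) < logl_trial - logl:
                p, logl = trial, logl_trial
        chain[i] = p
    return chain
\end{verbatim}
In practice, one would run several such chains, discard the burn-in, and compute marginalized parameter values from what remains (our actual chain configuration is described below).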
The PSF model is described by seven parameters: $x$-pixel coordinate of the primary centroid ($x$), $y$-pixel coordinate of the primary centroid ($y$), image background ($b$), peak pixel value of the primary ($n$), projected separation ($\rho$), position angle (PA), and contrast ($\Delta m$). In addition, image pixel values greater than 90\% of the saturation limit were masked. We adopt priors on separation and position angle from past high-resolution imaging results, listed in Table \ref{comp_tab}. The MCMC analysis is conducted in two stages to determine image-specific parameters ($x$, $y$, $b$) separately from system-specific parameters ($n$, $\rho$, PA, $\Delta m$). We ran four MCMC chains with 140,000 steps each, discarding the first 10\% of each chain as ``burn-in". The weighted average median ($x$, $y$)-centroid, $\rho$, PA, and $\Delta m$ generated by the MCMC fit are used to create individual PSF models of each system component, from which aperture photometry is measured using a $10\arcsec$ radius. The zero-points of IRAC Channels 1--4 are $280.9\pm4.1$, $179.7\pm2.6$, $115.0\pm1.7$, and $64.9\pm0.9$ Jy, respectively. Some members of the sample have nearby neighbors with flux that could influence the results of the pipeline fit. The neighbors of DH Tau, 2MASS J$04414565$+$2301580$ and 2MASS J$04414489$+$2301513$ (hereafter 2M0441 A and 2M0441 B), and CHXR 73 are within $15\arcsec$ of the primary centroid and unsaturated. We use the same PSF model described above to fit and subtract each neighbor within each individual IRAC image prior to it being put through the pipeline. SR 12 is $\sim$25$\arcsec$ away from a bright and saturated young stellar object, 2MASS J16272146--2441430 (YLW 13B). Although this object is well outside of the pipeline fitting region, the wings of its flux can still affect the PSF-fitting results. For this system, we use the high-dynamic-range PSF from \cite{marengo06} to model this bright neighbor and subtract off its contaminating flux (Figure \ref{fig:SR12n}). After the MCMC runs, stacked residual images are created by combining individual residual images after the primary PSF has been subtracted, placing each on a final grid with a pixel scale five times smaller than the original IRAC pixel scale of $1\farcs22$, shifting to a common origin, and rotating so that north is up and east is left. PSF subtraction occurs on the original data, not on mosaicked or subsampled images, because of the complicated nature of the IRAC PSF and because subsampling the images prior to PSF fitting would introduce covariance between adjacent pixels. We perform aperture photometry on these subsampled stacked residual images to determine detection limits around each primary. We use apertures with radii equal to the FWHM in each channel ($1\farcs66$, $1\farcs72$, $1\farcs88$, $1\farcs98$). The FWHMs are larger than the IRAC pixel scale ($1\farcs22$), thus all covariant pixels contribute to the measured aperture flux. To evaluate the sensitivity of our PSF-fitting framework to substellar companions in the IRAC images of our sample, we performed aperture photometry on the stacked images before and after PSF subtraction. We measured the flux inside 100 randomly drawn apertures of radius 1 FWHM at $\mathrm{FWHM}/4$ ($0\farcs42$, $0\farcs43$, $0\farcs47$, $0\farcs50$) intervals radially outward from the primary star. The mean and standard deviation of these fluxes are used to determine the limiting flux, which is then converted into \textit{Spitzer}/IRAC magnitudes to obtain 4-$\sigma$ limits.
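This limiting-flux calculation reduces to a short piece of arithmetic. Below is a minimal sketch, assuming a PSF-subtracted stacked residual image calibrated in Jy per pixel; the helper \texttt{aperture\_flux} and all variable names are illustrative rather than pipeline code, and the channel zero points are the values quoted above.
\begin{verbatim}
import numpy as np

ZERO_POINT_JY = {1: 280.9, 2: 179.7, 3: 115.0, 4: 64.9}

def aperture_flux(image, x0, y0, radius):
    # Simple circular-aperture sum; illustrative only.
    yy, xx = np.indices(image.shape)
    mask = (xx - x0)**2 + (yy - y0)**2 <= radius**2
    return np.nansum(image[mask])

def limit_curve(residual, xc, yc, fwhm_pix, r_max, channel,
                n_ap=100, rng=None):
    # 4-sigma limiting magnitude vs. radius, from 100 random apertures
    # of radius 1 FWHM, stepped radially outward by FWHM/4.
    rng = rng or np.random.default_rng()
    radii = np.arange(fwhm_pix, r_max, fwhm_pix / 4.0)
    mags = []
    for r in radii:
        theta = rng.uniform(0.0, 2.0 * np.pi, n_ap)
        f = [aperture_flux(residual, xc + r * np.cos(t),
                           yc + r * np.sin(t), fwhm_pix) for t in theta]
        f_lim = np.mean(f) + 4.0 * np.std(f)          # 4-sigma limit (Jy)
        mags.append(-2.5 * np.log10(f_lim / ZERO_POINT_JY[channel]))
    return radii, np.array(mags)
\end{verbatim}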
With $\sim$36, 34, 28, and 25 independent apertures in a search radius of $10\arcsec$ around a primary star, the probability of any single aperture yielding a spurious $>$4-$\sigma$ signal is 0.003\%. We then convert our detection limit curves into mass detection limits using the BT-Settl evolutionary models of \cite{allard12} at the reported literature ages and \textit{Gaia} parallactic distances of our sample systems. For a given target, the companion height above (or below) the 4-$\sigma$ detection limit at that radius can be used to infer the systematic uncertainty due to residual primary-PSF structure in our PSF-fitting photometry. For example, if the photometry measured for a companion is equal to the 4-$\sigma$ limit, its systematic flux uncertainty would be 25\%, or $\sim$0.24 mag, while a 5-$\sigma$ detection would have a systematic flux uncertainty of 20\%, or $\sim$0.20 mag. For all contrast and photometry measurements hereafter, we list this systematic uncertainty in addition to the statistical uncertainty from our MCMC fits. \begin{figure*} \centering \includegraphics[trim={0.97in 0.35in 0.75in 0.35in},clip,width=\textwidth]{SR_12_n_c9.png} \caption{Individual IRAC images of SR 12 before (top row) and after (bottom row) YLW 13B, a nearby young stellar object, is removed. High-dynamic-range PSFs were used to model the bright PSF wings of YLW 13B, which were then subtracted off to minimize contamination when determining the system parameters of SR 12.} \label{fig:SR12n} \end{figure*} \section{Results} \label{sec:results} \subsection{Detections} Our reprocessing of the IRAC images yielded detections in one or more filters for eight of the nine substellar companions in our sample. The one system whose companion was not detected, GJ 504, had the brightest primary ($K_s$=4.03 mag) and largest expected contrast ($>$12 mag), but we are still able to place a robust upper limit. We present the final system parameters as determined by our pipeline in Table \ref{doublefit_tab}. The contrasts reported are marginalized values of the parameters as measured by our MCMC fits, and we list both statistical and systematic uncertainties. All further analysis performed with this output adds these uncertainties in quadrature. The projected separations and position angles reflect the input priors from previous adaptive optics imaging, such that the information in the \textit{Spitzer}/IRAC images is devoted entirely to measuring companion contrast. IRAC magnitudes for the primary stars and substellar companions are calculated from the PSF models, assuming the median MCMC fit parameters, and are included in Table \ref{tab:phot}.
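The systematic term defined in Section \ref{data_analysis} and the quadrature combination amount to a short calculation, sketched below with illustrative values (flux units are arbitrary):
\begin{verbatim}
import numpy as np

def systematic_mag_err(f_companion, f_4sigma_limit):
    # The 1-sigma residual noise is a quarter of the 4-sigma limit;
    # express that noise as a fraction of the companion flux, in mag.
    frac = (f_4sigma_limit / 4.0) / f_companion
    return 2.5 * np.log10(1.0 + frac)

def total_mag_err(stat_err, sys_err):
    return np.hypot(stat_err, sys_err)   # quadrature sum

print(systematic_mag_err(1.00, 1.0))  # at the 4-sigma limit: ~0.24 mag
print(systematic_mag_err(1.25, 1.0))  # a 5-sigma detection: ~0.20 mag
\end{verbatim}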
\begin{deluxetable*}{lccccccc} \tabletypesize{\footnotesize} \tablecaption{Best-Fit System Properties of Detected Companions} \tablehead{\colhead{2MASS} & \colhead{Other Name} & \colhead{Separation} & \colhead{Position Angle} & \colhead{$\Delta [3.6]$} & \colhead{$\Delta [4.5]$} & \colhead{$\Delta [5.8]$} & \colhead{$\Delta [8.0]$}\\ \colhead{(Primary)}& \colhead{(Companion)} & \colhead{(arcsec)} & \colhead{(deg)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)} & \colhead{(mag)}} \startdata J04294155+2632582 & DH Tau B & $2.22\pm0.18$ & $137.3\pm1.5$ & $5.74\pm0.24\pm0.56$ & $5.30\pm0.17\pm0.42$ & $4.79\pm0.16\pm0.31$ & $4.38\pm0.15\pm0.19$ \\ J04414565+2301580 & 2M0441 Bab & $12.35\pm0.01$& $237.4\pm0.1$ & $2.67\pm0.01\pm0.01$ & $2.43\pm0.01\pm0.01$ & $2.16\pm0.01\pm0.01$ & $1.78\pm0.01\pm0.01$ \\ J06191291--5803156 & AB Pic b & $5.52\pm0.09$ & $175.4\pm0.3$ & $6.34\pm0.06\pm0.08$ & $5.98\pm0.07\pm0.09$ & $5.65\pm0.26\pm0.09$ & $5.58\pm0.16\pm0.20$ \\ J11062877--7737331 & CHXR 73 b & $1.24\pm0.03$ & $228.5\pm3.8$ & $3.63\pm0.04\pm0.12$ & $3.79\pm0.06\pm0.04$ & ... & $2.92\pm0.12\pm0.08$ \\ J16093030--2104589 & 1RXS J1609 b & $2.14\pm0.11$ & $27.0\pm0.6$ & ... & $6.04\pm0.15\pm0.17$ & ... & ... \\ J16271951--2441403 & SR 12 c & $8.62\pm0.05$ & $164.8\pm0.6$ & $5.83\pm0.07\pm0.08$ & $5.08\pm0.03\pm0.06$ & $5.05\pm0.07\pm0.13$ & $4.16\pm0.03\pm0.12$ \\ J16311501--2432436 & ROXs 42B b & $1.17\pm0.03$ & $263.6\pm4.8$ & ... & $5.08\pm0.56\pm0.24$ & ... & $4.55\pm0.11\pm0.10$ \\ J21185820+2613500 & HD 203030 b & $12.025\pm0.004$ & $108.69\pm0.02$ & $8.27\pm0.01\pm0.63$ & $7.88\pm0.01\pm0.27$ & $6.97\pm0.02\pm0.20$ & $6.84\pm0.02\pm0.21$ \enddata \tablecomments{If an entry in $\Delta m$ is missing, the companion was not detected in that channel.} \label{doublefit_tab} \end{deluxetable*} In Figure \ref{fig:abpic}, we present example pipeline results for an individual system, AB Pic. We show stacked images of the original data and the final system model, as well as stacked residual images after the PSF models are subtracted. After subtracting the primary star PSF, a statistically significant positive residual is seen at the expected position of AB Pic b. This residual disappears after subtracting the best-fit system PSFs, indicating that it is a robust detection across all IRAC filters. Not all companions were detected in every IRAC channel. Generally, companions were not detected or had less constrained photometry in Channel 3 (5.8 $\mu$m), suggesting a possible PSF mismatch between templates and data in that channel. CHXR 73 b was detected in Channels 1, 2, and 4. ROXs 42B b was detected in Channels 2 and 4. 1RXS J160929.1--210524 b (hereafter 1RXS J1609 b) was detected only in Channel 2. These three objects had the smallest projected separations ($1\farcs2-2\farcs2$) of the sample. ROXs 42B b and 1RXS J1609 b also had the largest $K_s$-band contrasts, which could explain the difficulty of detection in the other IRAC channels. Our measured photometry in Channel 4 (8.0 $\mu$m) of ROXs 42B b suggests it may have a long-wavelength excess, making its detection easier. In Figure \ref{fig:comp_examples}, we show stacked residual images after the primary PSF has been subtracted, highlighting the companion detection in either Channel 2 or Channel 4 for these systems, as well as DH Tau.
\begin{figure*} \centering \includegraphics[trim=1.5cm 2cm 1.15cm 2cm,clip,width=\textwidth]{AB_Pic_d_04_c9_dn.png} \caption{Stacked images of AB Pic across all four IRAC channels (rows) after it has gone through the PSF-fitting pipeline. All fits were conducted within the CBCD images at the native plate scale, but to convey the full data set, the images here were generated by combining individual frames after they had been re-scaled to $0\farcs24$/pixel ($\sim$5$\times$ smaller than the original IRAC pixel scale), shifted to a common origin, and rotated so that north is up and east is left. Columns 1 and 2 show the original IRAC data of AB Pic and the median two-source PSF model, respectively, displayed with a logarithmic color scale (leftmost color bar). Column 3 shows the residuals left behind after only the primary PSF model is subtracted from the data. Column 4 shows the residuals left behind after the two-source PSF model is subtracted from the data. Both Columns 3 and 4 are displayed with a linear color scale (rightmost color bar) and 3- and 5-$\sigma$ contours overlaid with solid and dotted lines, respectively. The standard deviation of the pixel values is displayed in the lower left-hand corner of Column 4 in units of DN/s. After subtracting the primary star PSF, a statistically significant positive residual is seen at the expected position of AB Pic b. This residual disappears after subtracting the best-fit system PSFs, indicating that it is a robust detection across all IRAC filters.} \label{fig:abpic} \end{figure*} \begin{figure*} \centering \includegraphics[trim=1.0cm 2cm 0.5cm 1.75cm,clip,width=\textwidth]{comp_example_dn.png} \caption{Stacked images of DH Tau, CHXR 73, 1RXS J1609, and ROXs 42B, the other four systems besides AB Pic (shown in Figure \ref{fig:abpic}) with companions that are newly resolved in this work. Columns 1 and 2 show the original IRAC data and the median two-source PSF model, respectively, displayed with a logarithmic color scale (leftmost color bar). Column 3 shows the residuals left behind after only the primary PSF model is subtracted from the data. Column 3 is displayed with a linear color scale (rightmost color bar) with 3- and 5-$\sigma$ contours overlaid with solid and dotted lines, respectively. For each panel, north is up and east is left.} \label{fig:comp_examples} \end{figure*} In Figure \ref{fig:cmd}, we present a color-magnitude diagram of $M_{[3.6]}$ vs.~[3.6]--[8.0] color for the nine primaries and seven companions that were detected in those filters. We also show the intrinsic photospheric mid-infrared color-magnitude sequences from the BT-Settl models of \cite{allard12} for 1, 10, 100, and 500 Myr. Typically, an object with a [3.6]--[8.0] color significantly redder than the intrinsic photospheric isochrone on this diagram is interpreted as having excess emission due to a disk. Based on this criterion, two primaries and seven companions appear red and may harbor circum(sub)stellar disks, but we will explore whether a more nuanced disk criterion is needed in Section \ref{discussion}. We detect the photospheres of AB Pic b, CHXR 73 b, and HD 203030 b, while the companions with significant [3.6]--[8.0] color excesses are DH Tau B, SR 12 c, and ROXs 42B b. Although 1RXS J1609 b was not detected in Channels 1, 3, or 4, an SED fit of literature photometry and our Channel 2 measurement indicate we detected its photosphere (see Section \ref{sec:sedfits}).
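Quantified, this criterion is a one-line significance test. The sketch below uses illustrative values only, not measurements from this work, and assumes independent color uncertainties:
\begin{verbatim}
import numpy as np

def excess_significance(color_obs, err_obs, color_phot, err_phot=0.0):
    # How many sigma redder than the photospheric isochrone color?
    return (color_obs - color_phot) / np.hypot(err_obs, err_phot)

# e.g., [3.6]-[8.0] = 1.5 +/- 0.3 mag vs. a 0.3 mag photospheric color
print(excess_significance(1.5, 0.3, 0.3))   # 4.0-sigma excess
\end{verbatim}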
2M0441 AB is actually a quadruple system consisting of two bound low-mass binaries (\citealt{todorov10,todorov14,bowler15}, and references therein). Mid-infrared excess has been identified for both pairs \citep{luhman10,adame11,bulger14}, indicating that at least one component of each binary harbors a circum(sub)stellar disk. We readily confirm this excess with our pipeline in the \textit{Spitzer} images, but determining the mid-infrared flux contributions from the individual components of 2M0441 B is beyond the scope of this paper. \begin{figure} \centering \includegraphics[trim=1.5cm 0.5cm 0.25cm 0cm,clip,width=0.45\textwidth]{bts_iso_i1vsi1_i4_v2-eps-converted-to.pdf} \caption{Color-magnitude diagram for our sample detected in both Channels 1 (3.6 $\mu$m) and 4 (8.0 $\mu$m). For comparison, we include young $>$M7 brown dwarf members of the Taurus (triangles; \citealt{esplin17}) and Upper Scorpius (squares; \citealt{luhman20}) star-forming regions. Orange symbols represent disk-free members, while red symbols denote disk-bearing members. We also include field brown dwarfs from \citet{dupuy12}, indicated as asterisks. $M_{[3.6]}$ was determined from \textit{Gaia} EDR3 parallactic measurements \citep{bailer-jones21} of each system primary. The primary components are indicated as filled stars while substellar companions are indicated as filled circles. Also displayed are the intrinsic photospheric [3.6]--[8.0] colors from the BT-Settl models of \cite{allard12} at 1, 10, 100, and 500 Myr (dashed lines). Not shown are the companions to GJ 504 and 1RXS J1609, which were not detected in either Channel 1 or 4. ROXs 42B b was not detected in Channel 1 by our pipeline but is shown here as an $L^{\prime}$-band detection from \cite{kraus14}. The companions of our sample appear to be significantly redder than the BT-Settl isochrones, in line with previous comparisons between young and old free-floating brown dwarfs \citep{dupuy12,liu16}. The mid-infrared colors of the companions are similar to disk-bearing free-floating brown dwarfs in young star-forming regions.} \label{fig:cmd} \end{figure} \begin{figure*} \centering \includegraphics[angle=90,trim=1.25cm 2.0cm 1.15cm 1.75cm,clip,width=\textwidth]{dlimits_4sig_i4_all_v2-eps-converted-to.pdf} \caption{Contrast limits determined from the stacked IRAC Channel 4 images of our sample. The dashed blue line indicates the contrast curves prior to PSF subtraction as a function of separation from the primary in arcseconds. The solid blue line indicates the corresponding contrast curve after the median two-source PSF model has been subtracted. The primary magnitude as measured by the pipeline is listed at the top of each left-hand y-axis. The contrast curves are presented in terms of apparent magnitudes as well as mass, calculated using the \textit{Gaia} EDR3 parallax distance estimates of \cite{bailer-jones21}, literature age determinations, and BT-Settl isochrones \citep{allard12}.} \label{fig:dl} \end{figure*} \begin{figure*} \centering \includegraphics[width=\textwidth]{SED_SWCII_v2-eps-converted-to.pdf} \caption{Spectral energy distributions of our sample. We fit system components with solar metallicity BT-Settl model atmospheres \citep{allard12} and fit $E(B-V)$ individually as a free parameter. SED-fitting results ($T_{\rm eff}$/$E(B-V)$) for each component of our wide companion systems are plotted near the reddened best-fit model.
Literature photometry for the systems is plotted as blue squares, while \textit{Spitzer}/IRAC photometry from this work is indicated as red stars. Synthetic photometry of the best-fit model is also plotted as blue circles for the literature filters or as red circles for the IRAC channels. Specific filters and photometry used in our SED fits for the entire sample are listed in Table \ref{tab:phot}. SED-fitting results for the entire sample, including the $\chi^2_{\nu}$ of each fit and the $[8.0]_{mod}-[8.0]_{obs}$ magnitude excess, are listed in Table \ref{tab:sedfit}.} \label{fig:sed} \end{figure*} \subsection{Detection Limits and Mass Sensitivity} Our PSF-fitting results yield sensitive upper limits on the brightness of the companions that we did not detect, as well as on the presence of additional companions in these systems. We present the contrast and mass limits reached in the PSF-subtracted images as a function of radial separation in Tables \ref{tab:contrast_limits} and \ref{tab:mass_limits}, and show the Channel 4 detection limits in Figure \ref{fig:dl} (see Section \ref{data_analysis} for details of the detection limit calculation). Our detection limit curves show that, prior to PSF subtraction, detectable companion contrasts plateau beyond $8\arcsec$, corresponding to projected physical separations of $\sim$150 au for the closest sample member (17.6 pc; GJ 504) and $\sim$1600 au for the farthest (190.0 pc; CHXR 73). Within $8\arcsec$, detectable contrasts improve by as much as 7 mag in Channel 1 and 5.5 mag in Channel 4. While we do not detect the companion of GJ 504 in the images, we can place a limit on its $L^{\prime}-[8.0]$ color using previous AO imaging results in the $L^{\prime}$ band from \cite{skemer16} and our work. We find $L^{\prime}-[8.0]<8.43$ mag. \subsection{SED Fits} \label{sec:sedfits} Optical and near-infrared photometry from the literature can be used with our new \textit{Spitzer}/IRAC mid-infrared photometry to analyze the SEDs of our sample systems. We fit system components with solar metallicity BT-Settl model atmospheres \citep{allard12} spanning effective temperatures between 1000 and 7000 K ($\Delta T_{\mathrm{eff}}=100$ K), fixed at either $\log g=3.5$ (2M0441 B, CHXR 73, and ROXs 42B b) or $\log g=4.0$, which is appropriate for young dwarfs according to BT-Settl evolutionary models. We convolve the model atmospheric spectra with filter transmission profiles to generate synthetic photometric measurements and find the $\chi^2$-minimizing scale factor between the model photometry and the observed photometry for each object. We also fit $E(B-V)$ as a free parameter using the extinction curve of \cite{fitzpatrick99} in steps of 0.01 mag. In Figure \ref{fig:sed}, we show the SED fits for the systems in our sample. We show our new \textit{Spitzer}/IRAC photometry as red stars and the literature photometry used in our fits as blue squares. We also include the best-fit BT-Settl model for the primary and companion in gray. In Table \ref{tab:phot}, we list all the photometry used in each SED fit for each system, in addition to our measured photometry from the pipeline parameters. In Table \ref{tab:sedfit}, we summarize the properties found from our fits. We discuss the SED fits of specific systems in Section \ref{system_notes}, and interpret potential companion mid-infrared excesses further in Section \ref{discussion}.
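As a concrete illustration of this fitting step, the sketch below grids over $T_{\mathrm{eff}}$ and $E(B-V)$ and solves for the $\chi^2$-minimizing flux scale factor analytically at each grid point. The helpers \texttt{synth\_flux} (BT-Settl synthetic photometry) and \texttt{ext\_mags} (the \cite{fitzpatrick99} extinction curve evaluated in the observed filters) are hypothetical stand-ins, not library calls:
\begin{verbatim}
import numpy as np

def fit_sed(obs_flux, obs_err, teff_grid, ebv_grid, synth_flux, ext_mags):
    # Grid search over (Teff, E(B-V)); the scale factor minimizing
    # chi^2 has a closed form for each grid point.
    best = (np.inf, None, None)
    w = 1.0 / np.asarray(obs_err)**2
    for teff in teff_grid:
        f_model = synth_flux(teff)
        for ebv in ebv_grid:
            f_red = f_model * 10**(-0.4 * ext_mags(ebv))
            scale = np.sum(w * obs_flux * f_red) / np.sum(w * f_red**2)
            chi2 = np.sum(w * (obs_flux - scale * f_red)**2)
            if chi2 < best[0]:
                best = (chi2, teff, ebv)
    return best  # (chi^2, best Teff, best E(B-V))
\end{verbatim}
In our actual fits, the grid steps are $\Delta T_{\mathrm{eff}}=100$ K and $\Delta E(B-V)=0.01$ mag, as described above.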
\subsection{Notes on Two Individual Systems} \label{system_notes} Our reprocessing of the IRAC images yielded detections of all nine primaries and eight substellar companions, five of which had not previously been resolved in the IRAC filters. We describe two systems in more detail in the following sections. \subsubsection{DH Tau} DH Tau is a protoplanetary disk host with spectral type M1 \citep{herbig77,watson09}. It is an actively accreting classical T Tauri star with previously detected mid-infrared excess \citep{valenti93,meyer97,luhman06c,luhman10}. DH Tau hosts a substellar companion at projected separation $\rho=2\farcs3$ ($\sim$310 au at its \textit{Gaia} distance of $\sim$135 pc; \citealt{itoh05,bailer-jones21}). The companion mass was initially estimated to be $\sim$30--50 $M_{\mathrm{Jup}}$, but comparison of its bolometric luminosity to newer evolutionary models revealed a lower mass of $\sim$11 $M_{\mathrm{Jup}}$ \citep{luhman06c,kraus14,bowler16}, closer to the planet--brown dwarf boundary. Hydrogen emission lines and a UV continuum excess indicate active accretion onto DH Tau B \citep{zhou14,bonnefoy14}, but emission from the circum(sub)stellar disk has not been detected \citep{wu20}. We resolve DH Tau B and measure its mid-infrared photometry in all IRAC channels (see Figure \ref{fig:comp_examples} for the Channel 2 detection). As we described in Section \ref{sec:sedfits} and show in Figure \ref{fig:sed}, we use \textit{Hubble Space Telescope} (\textit{HST}) optical \citep{zhou14}, Two Micron All Sky Survey (2MASS) near-infrared \citep{cutri03}, and the \textit{Spitzer}/IRAC 3.6 $\mu$m photometry measured in this work to analyze the SEDs of DH Tau A and B. The best-fitting model for DH Tau A is $T_{\mathrm{eff}}=2600\pm100$ K and $E(B-V)=1.27\pm0.06$ mag, and for DH Tau B, the best-fitting model is $T_{\mathrm{eff}}=1800\pm50$ K and $E(B-V)=0.00\pm0.03$ mag. The 8 $\mu$m photometry for DH Tau B disagrees with the best-fitting model at 3.8-$\sigma$. We find the mid-infrared color of DH Tau B to be [3.6]--[8.0]$=2.19\pm0.66$ mag, which is discrepant with the \cite{luhman10} empirical color of an M9 dwarf atmosphere at the 2.7-$\sigma$ level. Using the dereddened $K$-band photometry from \citet{itoh05} and our Channel 4 detection, DH Tau B's infrared color is $K$--[8.0]$=2.93\pm0.24$ mag, which is discrepant from an M9 atmosphere at the 8.0-$\sigma$ level. This red color indicates a clear mid-infrared excess consistent with the presence of a circum(sub)stellar disk. We use our 3.6 $\mu$m photometric measurement, DH Tau's \textit{Gaia} parallactic distance \citep[133.3 pc;][]{bailer-jones21}, and Taurus's adopted age of $\tau\sim2$ Myr to estimate the mass of DH Tau B to be $M=17\pm6$ $M_\mathrm{Jup}$, consistent with previous mass determinations \citep[$M=18\pm4$ $M_{\rm Jup}$;][]{kraus14}. \subsubsection{AB Pic} AB Pic is a K2 star originally considered a member of the Tucana-Horologium association ($\tau\sim40$ Myr; \citealt{song03,kraus14b,bell15}). \cite{torres08} later re-assessed AB Pic to be a member of Carina, another young moving group (YMG) with age $\tau\sim30$ Myr \citep[e.g.,][]{bell15,miret-roig18}. Recently, \cite{booth2021} revised the age of Carina downward to 13 Myr. \cite{chauvin05} observed AB Pic to host a planetary-mass companion, AB Pic b, at a projected separation of $5\farcs5$, or 275 au at its 50 pc distance. Near-infrared spectroscopic observations measure the spectral type of the wide companion to be L0--L1 \citep{bonnefoy14}.
Near-infrared spectroscopy of the companion has not detected emission line accretion indicators \citep{bonnefoy14}, nor has the companion been detected at wavelengths longer than the $L^\prime$ band \citep{rameau13,perez19}. We detect AB Pic b in all four IRAC channels, finding its mid-infrared color to be $[3.6]-[8.0]=0.81\pm0.27$ mag. AB Pic b is consistent with younger L0 photospheres to within 1-$\sigma$ \citep{luhman10}. Using near-infrared photometric measurements from \cite{chauvin05} and the mid-infrared photometry found in this work, we analyze the SED of AB Pic b in the same manner as described above for DH Tau B (see Figure \ref{fig:sed}), allowing $E(B-V)$ to range up to 2.0 mag. The best-fitting models for AB Pic A and b have $T_{\mathrm{eff}}=6000\pm100$ K and $T_{\mathrm{eff}}=2100\pm100$ K, but with discrepant $E(B-V)$ of $0.30\pm0.02$ mag and $1.70\pm0.19$ mag, respectively. The amount of reddening for the primary is consistent with previous measurements \citep[$0.27\pm0.02$ mag;][]{vanBelle09}. The significantly higher value found for AB Pic b could indicate the presence of an as-yet-undetected circum(sub)stellar disk, though the observations allow a substantial range of possible values. Our observed 8 $\mu$m flux disagrees with the model photosphere at the 0.7-$\sigma$ level. Fixing the companion reddening to agree with the primary at $E(B-V)=0.30$ mag results in a cooler best-fitting model with $T_{\mathrm{eff}}=1600$ K. For this model photosphere, the observed 8 $\mu$m flux from our work disagrees only at the 0.3-$\sigma$ level. We therefore conclude that there is not yet compelling evidence that AB Pic b hosts a disk or shows a mid-infrared excess. Using our 3.6 $\mu$m photometric measurement, the \textit{Gaia} parallactic distance \citep[50.1 pc;][]{bailer-jones21}, and the revised age of Carina ($\tau\sim13$ Myr), we estimate the mass of AB Pic b to be $M=11\pm1$ $M_{\rm Jup}$. This new mass estimate is $\sim$2 $M_{\rm Jup}$ lower than if an age of 30 Myr were assumed for AB Pic and places the companion firmly below the deuterium-burning limit. \subsection{Companions with Previous IRAC Photometric Measurements} \subsubsection{2M0441 AB} \cite{esplin14} report IRAC photometry of $m_{[4.5]}=9.48\pm0.02$ mag, $m_{[5.8]}=9.37\pm0.03$ mag, and $m_{[8.0]}=9.22\pm0.03$ mag for 2M0441 A in Channels 2--4 (Channel 1 is saturated). For 2M0441 B, they report $m_{[3.6]}=12.26\pm0.02$ mag, $m_{[4.5]}=11.88\pm0.02$ mag, $m_{[5.8]}=11.53\pm0.03$ mag, and $m_{[8.0]}=11.00\pm0.03$ mag. Our pipeline IRAC photometry agrees with these prior measurements within the error bars, and through our PSF-fitting procedure that masks saturated pixels, we are able to recover a Channel 1 magnitude for the primary that is consistent with its overall SED shape. \subsubsection{HD 203030} \cite{miles-paez17} observed HD 203030 with \textit{Spitzer}/IRAC (Program ID 40489) at two distinct epochs to utilize roll subtraction to obtain photometry of HD 203030 b that was minimally contaminated by the bright halo of its host star. No photometry is reported for HD 203030 itself, but they report IRAC photometry of $m_{[3.6]}=14.99\pm0.02$ mag, $m_{[4.5]}=14.73\pm0.02$ mag, $m_{[5.8]}=14.39\pm0.05$ mag, and $m_{[8.0]}=14.15\pm0.04$ mag for HD 203030 b.
We initially fit the first-epoch long-exposure images of HD 203030 and found that our Channel 1 pipeline photometry of $m_{[3.6]}=15.01\pm0.40$ mag agreed with their measurement, but we measured higher fluxes in all other channels ($m_{[4.5]}=14.63\pm0.26$ mag, $m_{[5.8]}=13.84\pm0.20$ mag, and $m_{[8.0]}=13.44\pm0.21$ mag). No ground-based photometry at wavelengths greater than 3 $\mu$m has been reported in the literature. To explore this discrepancy further, we use our pipeline infrastructure to fit all of the IRAC images available simultaneously for each exposure time, as well as the individual epochs separately. In the 1 s exposures, the primary is not saturated, and we measure its photometry to be $m_{[3.6]}=6.64\pm0.02$ mag, $m_{[4.5]}=6.68\pm0.02$ mag, $m_{[5.8]}=6.63\pm0.02$ mag, and $m_{[8.0]}=6.63\pm0.02$ mag across the IRAC channels for the first epoch, and $m_{[3.6]}=6.66\pm0.02$ mag, $m_{[4.5]}=6.68\pm0.02$ mag, $m_{[5.8]}=6.64\pm0.02$ mag, and $m_{[8.0]}=6.63\pm0.02$ mag for the second. This IRAC photometry is consistent with the $W1$ (3.4 $\mu$m), $W2$ (4.6 $\mu$m), and $W3$ (12.0 $\mu$m) mid-infrared photometry reported in the AllWISE catalog \citep{cutri14} ($W1=6.66\pm0.07$, $W2=6.63\pm0.02$, $W3=6.63\pm0.02$). In the 26.8 s exposures, our pipeline measures $m_{[3.6]}=6.73\pm0.02$ mag, $m_{[4.5]}=6.76\pm0.02$ mag, $m_{[5.8]}=6.88\pm0.02$ mag, and $m_{[8.0]}=6.61\pm0.02$ mag across the IRAC channels for the first epoch, and $m_{[3.6]}=6.80\pm0.02$ mag, $m_{[4.5]}=6.79\pm0.02$ mag, $m_{[5.8]}=6.69\pm0.02$ mag, and $m_{[8.0]}=6.61\pm0.02$ mag for the second, suggesting a systematic uncertainty of 0.1--0.2 mag for the brightness of the primary when its PSF core is saturated. This variation is still smaller than the $\sim$0.5--0.7 mag discrepancy between our photometric measurements for the companion and those of \cite{miles-paez17}, indicating that if there is an issue in our analysis, it arises when fitting the companion, after the primary PSF has been subtracted. The intrinsic photospheric colors of late-M and L dwarfs are typically determined by combining photometric observations with parallax measurements \citep[e.g.,][]{patten06,luhman10,filippazzo15,faherty16}. An L7.5 field dwarf should have $K_s-[3.6]=1.13$ mag, $[3.6]-[4.5]=-0.02$ mag, $[4.5]-[5.8]=0.32$ mag, and $[5.8]-[8.0]=0.23$ mag in the IRAC channels, according to \cite{dupuy12}. Both $[3.6]-[4.5]$ colors measured for HD 203030 b, by \cite{miles-paez17} and in this work, are $\sim$0.3--0.4 mag redder than expected, although consistent with younger planetary-mass objects \citep[e.g.,][]{filippazzo15,faherty16,liu16}. In the other channels, the colors measured by \cite{miles-paez17} agree with a field L7.5 photosphere, while our photometry continues to be significantly redder. \cite{miles-paez17} measure $[3.6]-[8.0]=0.84\pm0.04$ mag, which is $\sim$0.3 mag redder than expected but still within the upper envelope of the rms scatter of the \cite{dupuy12} sample. We measure $[3.6]-[8.0]=1.57\pm0.45$ mag. This color excess could potentially indicate the presence of a circum(sub)stellar disk if confirmed, but given the disagreement with \cite{miles-paez17}, such an interpretation should be treated with caution. To test whether this color excess might emerge from our reduction procedures, in Figure \ref{fig:hd203030} we show the stacked image output for our pipeline fits of the first epoch of long-exposure IRAC images.
A significant positive residual is present at the expected location of HD 203030 b when only the primary PSF is subtracted, and no significant structure remains at that location when both primary and companion PSFs are subtracted, though in Channels 3 and 4, there appears to be a slight over-subtraction. The maximum pixel value at the location of HD 203030 b prior to PSF subtraction is 0.1359 and 0.2507 DN/s in Channels 3 and 4, respectively. After PSF subtraction, that pixel value is $-0.0058$ and $-0.0237$ DN/s, or 4\% and 9\% of the initial pixel values. If the over-subtraction were uniform for all pixels in an aperture, this would result in a maximum flux overestimation of 0.05 mag and 0.10 mag in Channels 3 and 4, still not enough to explain the differences between our IRAC photometry and that of \cite{miles-paez17}. The assumption of uniform over-subtraction across an aperture is unrealistic, though, and we proceed with an empirical approach to better estimate our uncertainties. The number of resolution elements in the stacked residual images is about 1220, 780, 470, and 250 for IRAC Channels 1, 2, 3, and 4, respectively. Thus, we would expect 3, 2, 1, and $<$1 spurious 3-$\sigma$ outliers in the residual images for each channel. For HD 203030, it appears there are more outliers than expected from noise, but many are aligned with the PSF structures most visible for the brightest primaries of our sample. We note that in Channel 2, the large lower right-hand residual is a background object ($\pi$=$0.56\pm0.12$ mas; \citealt{gaia18}). The HD 203030 b residual is present in all channels, bolstering confidence in this detection, which does not fall upon PSF structure. To estimate a systematic uncertainty from PSF modeling, we consider the rms of flux values measured within apertures of 1 FWHM radius at the radial separation of the companion around HD 203030 when calculating the detection limit in the PSF-subtracted images. In Channel 3, we find the rms to be 3.26 DN/s, which is 19.9\% of the flux measured for HD 203030 b. Similarly, in Channel 4, we find the rms of flux values to be 9.25 DN/s, or 24.0\% of the measured flux of the companion. We therefore conclude that there is no clear evidence of systematic errors in our PSF fit: the required overluminosity would be a 4--5-$\sigma$ effect in each of the [5.8] and [8.0] filters, which sample the IRAC PSF in different ways. However, it is unlikely this discrepancy can be resolved without further observations to independently determine the companion's mid-infrared brightness. \subsubsection{SR 12} Observations of SR 12 were part of the \textit{Spitzer} c2d Legacy survey \citep{evans09}, which imaged five nearby molecular clouds with the IRAC and MIPS instruments. Various studies \citep[e.g.,][]{cieza07,cieza09,gutermuth09,gunther14,esplin20} have reported IRAC photometry for SR 12 AB from these data, ranging from 8.16--8.27 mag at 3.6 $\mu$m, 8.16--8.25 mag at 4.5 $\mu$m, 7.99--8.12 mag at 5.8 $\mu$m, and 8.03--8.12 mag at 8.0 $\mu$m, with typical uncertainties between 0.02 and 0.06 mag. Our pipeline photometry for SR 12 AB agrees with these previous measurements within the uncertainties in Channels 1, 3, and 4, though our measurement is $\sim$0.06 mag brighter in Channel 2. The c2d IRAC photometry for SR 12 c is $m_{[3.6]}=13.65\pm0.08$ mag, $m_{[4.5]}=13.60\pm0.03$ mag, $m_{[5.8]}=13.20\pm0.28$ mag, and $m_{[8.0]}=12.50\pm0.37$ mag \citep{cieza07,alves10,gunther14}.
Our Channel 1 and 2 photometry are significantly discrepant from these values ($m_{[3.6]}=13.99\pm0.11$ mag, $m_{[4.5]}=13.18\pm0.07$ mag), and we are also able to constrain SR 12 c's Channel 3 and 4 photometry to $m_{[5.8]}=13.09\pm0.15$ mag and $m_{[8.0]}=12.17\pm0.13$ mag. SR 12 c has a spectral type of M9--L0 \citep[e.g.,][]{kuzuhara11,bowler14,santamaria18}, and thus its photosphere should have a $K_s$--[3.6] color of $\sim$0.6--0.7 mag based on empirical measurements of late-M dwarfs \citep[e.g.,][]{patten06,luhman10}. \cite{kuzuhara11} reported ground-based photometry of $K_s=14.57\pm0.03$ mag with their discovery of SR 12 c. Combining this $K_s$-band measurement with our IRAC Channel 1 photometry gives $K_s-[3.6]=0.58\pm0.11$ mag, consistent with a detection of an M9 photosphere. We also measure $[3.6]-[8.0]=1.82\pm0.17$ mag, which indicates the companion harbors a disk. \cite{santamaria18} identified numerous emission line accretion tracers in the spectrum of SR 12 c, confirming this disk. The 4.5 $\mu$m photometry we measure for the companion is the most discrepant from previous studies. No \textit{WISE} photometry has been reported for SR 12 c either, likely due to its crowded environment. Since our photometry of SR 12 AB agrees with previously reported values, any systematic errors would likely come from the PSF subtraction. We again consider the rms of flux values measured using apertures with radius equal to 1 FWHM at the radial separation of the companion around SR 12 AB when calculating the detection limit in the PSF-subtracted images. We find the rms at 4.5 $\mu$m to be 1.75 DN/s, which is 6.4\% of the flux measured for SR 12 c. Similarly, at 5.8 $\mu$m, we find the rms of flux values to be 4.63 DN/s, or 14.2\% of the measured flux of the companion. We conclude that any large-scale deviation from an M9 photosphere may be tracing the SED of the disk harbored by SR 12 c. As we mentioned in Section \ref{data_analysis}, SR 12 is $25\arcsec$ away from YLW 13B, a bright and saturated young stellar object. The discrepancies between the c2d companion photometry and our photometry could be a result of the c2d pipeline's handling of bright neighbors, especially in Channel 1. Conversely, if there is a disk excess in the IRAC bands, then it seems likely that we either overestimate the brightness at 4.5 $\mu$m or underestimate the brightness at 5.8 $\mu$m. \begin{figure*} \centering \includegraphics[trim=1.5cm 2cm 1.15cm 2cm,clip,width=\textwidth]{HD_203030_d_e1_268_msk_c9v2_dn.png} \caption{Stacked images of HD 203030 across all four IRAC channels (rows) after its first-epoch images have gone through the PSF-fitting pipeline. The images were generated in the same fashion as for AB Pic in Figure \ref{fig:abpic}. Columns 1 and 2 show the original IRAC data of HD 203030 and the median two-source PSF model, respectively, displayed with a logarithmic color scale (leftmost color bar). Column 3 shows the residuals left behind after only the primary PSF model is subtracted from the data. Column 4 shows the residuals left behind after the two-source PSF model is subtracted from the data. The standard deviation of the pixel values outside a $\sim$6$\arcsec$ radius from the primary centroid is displayed in the lower left-hand corner of Column 4 in units of DN/s. Both Columns 3 and 4 are displayed in a linear color scale (rightmost color bar) with the area within $\sim$6$\arcsec$ of the primary centroid masked, and 3- and 5-$\sigma$ contours overlaid with solid and dotted lines, respectively.
After subtracting the primary star PSF, a statistically significant positive residual is seen at the expected position of HD 203030 b. This residual disappears after subtracting the best-fit system PSFs, and no significant structure is left behind at the location of the companion.} \label{fig:hd203030} \end{figure*} \begin{figure*} \centering \includegraphics[trim=1.0cm 0.25cm 0.25cm 0.0cm,clip,width=1.0\textwidth]{swcii_vs_d12_v4-eps-converted-to.pdf} \caption{$M_{[3.6]}$ through $M_{[8.0]}$ vs.~spectral type for the $\leq$L2 wide-orbit PMC samples of this work (filled circles) and Paper I (purple upside-down triangles), in addition to the young Taurus and Upper Sco brown dwarfs (depicted as in Figure \ref{fig:cmd}). Similar to ROXs 42B b, 1RXS J1609 b was not detected in Channel 1 but is shown in the upper left-hand panel as an $L^{\prime}$-band detection from \citet{kraus14}. We also indicate the expected field polynomial sequence of \cite{dupuy12} (solid line; dark gray) and the young ultracool dwarf polynomial sequence from \cite{faherty16} (dashed line; light gray). The young objects sit above the field sequence, but the dynamic range in magnitude space is not high enough to distinguish between disk-bearing and disk-free members.} \label{fig:absmag_vs_sptype} \end{figure*} \begin{figure} \centering \includegraphics[trim=1.0cm 0.25cm 0.0cm 0.0cm,clip,width=0.5\textwidth]{i1i4_vs_sptype_v3-eps-converted-to.pdf} \caption{[3.6]--[8.0] color as a function of spectral type for our sample. We include the wide-orbit PMC sample from Paper I and the young Taurus and Upper Sco brown dwarfs, depicted as in Figures \ref{fig:cmd} and \ref{fig:absmag_vs_sptype}. Also included are the field and young moving group (YMG) member polynomial sequences of \cite{dupuy12} (solid line; dark gray) and \cite{faherty16} (dashed line; light gray). Color space provides enough dynamic range for the disk-bearing objects to clearly sit above the disk-free objects.} \label{fig:i1i4col_vs_sptype} \end{figure} \begin{figure*} \centering \includegraphics[trim=0.90cm 0.25cm 0.50cm 0.5cm,clip,width=0.95\textwidth]{i1i4_vs_ksi1_and_ksi4_vs_ksi1_v1-eps-converted-to.pdf} \caption{$K_s$--[8.0] vs.~$K_s$--[3.6] color (left panel) and [3.6]--[8.0] vs.~$K_s$--[3.6] color (right panel) for our sample companions, the Paper I wide-orbit PMC sample, and the young Taurus and Upper Sco brown dwarfs, depicted as in Figures \ref{fig:cmd}, \ref{fig:absmag_vs_sptype} and \ref{fig:i1i4col_vs_sptype}. We show both of these color spaces to take advantage of better constrained ground-based $K$-band contrasts for some companions that were marginally detected by our pipeline at [3.6]. We also include the expected color-color sequences of 5 Myr BT-Settl and AMES-Dusty isochrones. The disk-bearing objects are clear outliers in these particular color-color spaces, providing a criterion by which DH Tau B, 2M0441 B, ROXs 42B b, and SR 12 c appear to host disks.} \label{fig:i1i4col_vs_ksi1_v1} \end{figure*} \section{Discussion} \label{discussion} Free-floating young brown dwarfs are observed to follow color-magnitude sequences that are distinct from those of older brown dwarfs \citep[e.g.,][]{allers10,liu16,faherty16}. In the near-infrared, young brown dwarfs are redder in $J-K$ color, suggesting enhanced dust abundances \citep[e.g.,][]{woitke04,barman11a} or lower surface gravities \citep[e.g.,][]{burrows97,kirkpatrick06,looper08}.
Determining whether wide-orbit PMCs also follow the trends previously established for free-floating brown dwarfs into the mid-infrared could point to formation pathway commonalities; deviations would imply differing formation processes, and redder colors could indicate the presence of circum(sub)stellar disks. \subsection{Absolute Magnitude Trends with Spectral Type} A star forms with a large radius and subsequently contracts during its pre-main-sequence phase, so the luminosities of young stars and substellar objects should differ observably from those of the field. These objects would also appear brighter in the \textit{Spitzer}/IRAC bands, especially the later spectral types and objects that harbor disks. Figure \ref{fig:absmag_vs_sptype} shows absolute magnitude--spectral type diagrams plotting $M_{[3.6]}$ through $M_{[8.0]}$ versus spectral type for the detected wide-orbit companions in our sample (filled circles) as well as others from Paper I (FU Tau B, FW Tau C, SCH J0359 B, and USco 1610 B) and ROXs 12 B, GQ Lup B, and GSC 6214 B (purple upside-down triangles). We complement these data with late-M to early-L brown dwarfs from the Taurus and Upper Sco star-forming regions \citep{esplin17,luhman20}. The absolute magnitudes for the individual PMCs were calculated from either the \textit{Gaia} EDR3 or DR2 parallactic measurements \citep{bailer-jones21,bailer-jones18}, or, if not available, from the adopted distance to the star-forming region. Individual association members are color-coded red if they are thought to harbor a disk from measured mid-infrared excess, or orange if they are thought to be disk-free. We also indicate the expected field polynomial sequence as determined by \cite{dupuy12} (solid line; dark gray) as well as the young ($\tau<1$ Gyr, $\tau\sim5$--150 Myr; \citealt{faherty16}, \citealt{liu16}) ultracool dwarf polynomial sequence from \cite{faherty16} (dashed line; light gray). In general, brown dwarfs with spectral types $<$M8 are 1--2 magnitudes brighter than the YMG polynomial sequence, while substantial overlap begins between the YMG sequence and brown dwarfs with spectral types $>$M8. This overluminosity above the field sequence is expected, as the young objects have not yet contracted to their final radii. DH Tau B, 2M0441 B, CHXR 73 b, ROXs 42B b, and FU Tau B are consistently above the YMG polynomial sequence. These wide companions orbit host stars that belong to the youngest regions ($\tau\sim1$--3 Myr; Taurus, Chamaeleon, Ophiuchus). AB Pic b is the only PMC in our sample that is consistently below the YMG sequence. The high scatter within these sequences suggests that magnitudes alone do not provide a sensitive view of which objects are outliers. \subsection{Color Trends of Wide-orbit Companions in the Mid-Infrared} The colors of wide-orbit PMCs provide a more nuanced view of their non-photospheric behavior. The colors of our sample are expected to be close to zero given their range in spectral types; thus, objects with non-zero colors are potentially interesting. In Figure \ref{fig:i1i4col_vs_sptype}, we show [3.6]--[8.0] color as a function of spectral type for the same systems as described above for Figure \ref{fig:absmag_vs_sptype}. We again indicate the Taurus (triangles) and Upper Sco (squares) members as disk-bearing (red) or disk-free (orange), and include the field and YMG member polynomial sequences of \cite{dupuy12} (solid line; dark gray) and \cite{faherty16} (dashed line; light gray).
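Both figure comparisons reduce to the same two operations, sketched below; the polynomial coefficients shown are placeholders for illustration, not the published \cite{dupuy12} or \cite{faherty16} values.
\begin{verbatim}
import numpy as np

def absolute_mag(m_app, distance_pc):
    # Apparent-to-absolute conversion with a parallax-based distance.
    return m_app - 5.0 * np.log10(distance_pc / 10.0)

def offset_from_sequence(value_obs, spt_numeric, coeffs):
    # Difference from a polynomial sequence evaluated at the numeric
    # spectral type; positive offsets mean redder (for colors) or
    # fainter (for magnitudes) than the sequence.
    return value_obs - np.polyval(coeffs, spt_numeric)

# e.g., m[3.6] = 13.0 mag at d = 140 pc gives M[3.6] ~ 7.27 mag
print(absolute_mag(13.0, 140.0))
\end{verbatim}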
DH Tau B, 2M0441 B, AB Pic b, CHXR 73 b, SR 12 c, and ROXs 42B b are significantly redder than the field polynomial sequence. The young ($\tau\sim2$--10 Myr) Taurus and Upper Sco disk-hosting and disk-free members also readily differentiate themselves in the [3.6]--[8.0] color space. Interestingly, the disk-bearing members fall right in line with the continuation of the YMG ($\sim$20--120 Myr) dwarf sequence. The detected PMCs of this sample are also consistent with the YMG sequence, except for DH Tau B, which is already known to show active accretion. 2M0441 B, SR 12 c, and ROXs 42B b are above the average YMG polynomial sequence color for their spectral types, which could be due to the youth of these systems or the presence of circum(sub)stellar disks. Some YMG members may themselves harbor circum(sub)stellar disks. \subsection{Identifying Disk-Hosting PMCs in Color-Color Space} \label{sec:pmc_disks} Identifying disk hosts in color-color space removes reliance on spectral type measurements, which can be highly uncertain. In Figure \ref{fig:i1i4col_vs_ksi1_v1}, we show $K_s$--[8.0] vs.~$K_s$--[3.6] color (left panel) and [3.6]--[8.0] vs.~$K_s$--[3.6] color (right panel) for our PMC sample and the young Taurus and Upper Sco brown dwarfs, depicted as in Figures \ref{fig:absmag_vs_sptype} and \ref{fig:i1i4col_vs_sptype}. We also include the expected color-color sequences of 5 Myr BT-Settl and AMES-Dusty isochrones as a theoretical comparison. We show both of these color spaces to take advantage of better constrained ground-based $K$-band contrasts for some companions that were marginally detected by our pipeline at [3.6]. Five of the wide-orbit PMCs in this work (DH Tau B, 2M0441 B, ROXs 42B b, SR 12 c, and HD 203030 b), along with two from Paper I (FU Tau B and SCH J0359 B), have colors consistent with young disk-bearing brown dwarfs. However, HD 203030 b has the latest spectral type of our sample (L7.5), so its position in this parameter space is likely explained by differences in late-L atmospheric characteristics rather than by the presence of a circum(sub)stellar disk. Only one object in this combined sample, the more massive USco 1610 B ($M\sim70$ $M_{\mathrm{Jup}}$), falls among the disk-free young brown dwarfs. AB Pic b and CHXR 73 b fall outside of the disk-free locus, along with a few late-type disk-free members, but their locations are consistent with predictions from the AMES-Dusty models \citep{chabrier00b,allard01}. FW Tau C sits furthest from both the disk-hosting and disk-free objects, but given the ongoing debate over whether it is a more massive object hosting an edge-on disk \citep{wu17a}, it might be expected to have anomalous colors. \subsection{Disk Fraction of Wide-Orbit PMCs} \label{sec:disk_fraction} Determining the presence of circumstellar disks around young star-forming region members has been a useful tool to infer the dominant formation pathway of substellar objects, as well as their planet-forming capabilities. Similarly, identifying and characterizing the disks harbored by PMCs offers a direct avenue to study planet assembly and evolution, as well as potential satellite formation. Mass-dependent disk evolution has been observed for stars and brown dwarfs in young star-forming regions or associations through the measurement of disk fractions. \cite{luhman10} found the disk fraction for solar-type stars in Taurus ($\tau\sim2$ Myr) to be $\sim$75\% and the disk fraction for lower-mass stars (0.01--0.3 $M_{\odot}$) to be $\sim$45\%.
For the older Upper Sco OB association ($\tau\sim10$ Myr), \cite{carpenter06} find that $<$1\% of stars more massive than K0 have circumstellar disks, while the disk fraction for K0--M5 stars is 19\%. Substantial disk fractions persisting for stars $<$1 $M_{\odot}$ and substellar objects indicate that disk dispersal is less efficient at lower masses and that planet formation timescales are longer. We can now begin to quantify whether these disk frequency trends continue for wide-orbit PMCs. For instance, \cite{bowler17} found that $46\%\pm14\%$ of young ($<$15 Myr) substellar ($<$20 $M_{\rm Jup}$) companions have detectable Pa$\beta$ emission, indicating that accretion disks are very common around wide-orbit PMCs. Here, we incorporate our findings into previous disk fraction determinations and explore their global frequency. Combining the nine PMC systems from this work with three from Paper I, ten belong to star-forming regions or associations with $\tau<15$ Myr: DH Tau, SCH J0359, FU Tau, FW Tau, 2M0441, AB Pic, CHXR 73, ROXs 42B, 1RXS J1609, and SR 12. Since 2M0441 is a quadruple system comprising two close binary pairs, it should be considered separately and is not incorporated into our disk fraction calculation. Of the remaining nine companions, six have disk-like mid-infrared excesses determined from this work, suggesting a disk frequency of $67\%\pm16\%$ for PMCs with $\tau<15$ Myr. The two older PMC systems in our sample, GJ 504 and HD 203030, host companions that do not have disk-like mid-infrared excesses. Previous PMC disk fraction determinations from \cite{bowler17} and \cite{bryan20} required emission line accretion signatures or UV continuum excess detections to designate a companion as a disk host, potentially underestimating the occurrence rate because of the variability of these signatures or the overall faintness of the disks. Here we combine our PMC sample disk determinations with their findings, updating ROXs 42B b and SR 12 c as disk-bearing, giving a disk fraction of $56\%\pm12\%$. This confirms that circum(sub)stellar disks around PMCs are very common at young ages. Even within our $<$15 Myr age bin, hints of PMC disk evolution may be emerging, since two of the three companions with no mid-infrared excess from this work had system ages above 5 Myr. Increasing the sample of $>$5 Myr PMC systems with and without circum(sub)stellar disks will ultimately confirm whether the rate at which they host disks follows that observed for star-forming region members. \section{Summary} We have used our MCMC-based PSF-fitting formalism to reanalyze \textit{Spitzer}/IRAC images of nine stars known to host faint planetary-mass companions, examining higher-contrast systems and closer separations than our previous work to measure the mid-infrared photometry of the companions. We report new IRAC photometry for all nine primaries in our sample and eight of the companions, five of which had not been resolved in IRAC images before. For one of the newly resolved companions, AB Pic b, we use our photometry and the updated system age of 13 Myr \citep{booth2021} to estimate its mass at $M=11\pm1$ $M_\mathrm{Jup}$, placing the companion firmly below the deuterium-burning limit. We also measure an 8.0 $\mu$m excess for ROXs 42B b, a companion previously not thought to harbor a disk due to its lack of observed emission line accretion signatures.
We also confirm mid-infrared excesses from the previously suggested disks around DH Tau B, 2M0441 B, and SR 12 c, and detect likely photospheric emission from four companions that do not show evidence of disks (AB Pic b, CHXR 73 b, 1RXS J1609 b, HD 203030 b). We find for our sample from Paper I and this work that $67\%\pm16\%$ of young ($<$15 Myr) wide-orbit PMCs harbor disks. Combined with past detections of disk indicators toward wide-orbit PMCs, we find a global young disk fraction of $56\%\pm12\%$, signifying that both accreting and non-accreting PMC disks are very common. The increasing likelihood that the disks surrounding wide-orbit PMCs are compact and optically thick, and thus easier to study in the mid-infrared \citep{wu17b}, highlights the importance of leveraging \textit{Spitzer} to motivate future observations of PMC systems in the \textit{JWST} era. \begin{acknowledgments} We thank the referee for providing a helpful review that improved the clarity of this paper. R.A.M.~acknowledges support from the Donald D.~Harrington Fellowship and the NASA Earth \& Space Science Fellowship. This work is based on observations made with the \textit{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology under a contract with NASA. This publication makes use of data products from the Two Micron All Sky Survey, which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation. This publication makes use of data products from the \textit{Wide-field Infrared Survey Explorer}, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory/California Institute of Technology, funded by the National Aeronautics and Space Administration. The Pan-STARRS1 Surveys (PS1) and the PS1 public science archive have been made possible through contributions by the Institute for Astronomy, the University of Hawaii, the Pan-STARRS Project Office, the Max-Planck Society and its participating institutes, the Max-Planck Institute for Astronomy, Heidelberg and the Max-Planck Institute for Extraterrestrial Physics, Garching, The Johns Hopkins University, Durham University, the University of Edinburgh, the Queen's University Belfast, the Harvard-Smithsonian Center for Astrophysics, the Las Cumbres Observatory Global Telescope Network Incorporated, the National Central University of Taiwan, the Space Telescope Science Institute, the National Aeronautics and Space Administration under grant No.~NNX08AR22G issued through the Planetary Science Division of the NASA Science Mission Directorate, the National Science Foundation grant No.~AST-1238877, the University of Maryland, Eotvos Lorand University (ELTE), the Los Alamos National Laboratory, and the Gordon and Betty Moore Foundation. This work is also based on observations made with the NASA/ESA \textit{Hubble Space Telescope}, and obtained from the Hubble Legacy Archive, which is a collaboration between the Space Telescope Science Institute (STScI/NASA), the Space Telescope European Coordinating Facility (ST-ECF/ESA) and the Canadian Astronomy Data Centre (CADC/NRC/CSA). \end{acknowledgments} \clearpage \bibliographystyle{aasjournal}
\section{Introduction} \label{sec:intro} The astrophysical sites that give rise to the synthesis of heavy nuclei via the rapid capture of neutrons onto lighter seed nuclei (the $r$-process; \citealt{Burbidge+57,cameron_nuclear_1957}) remain a topic of active debate (see \citealt{Horowitz+19,Cowan+21,siegel_21} for recent reviews). Several lines of evidence, ranging from measurements of radioactive isotopes on the sea floor (e.g.,~\citealt{Wallner+15,Hotokezaka+15}) to the abundances of metal-poor stars formed in the smallest dwarf galaxies (e.g.,~\citealt{Ji+16,Tsujimoto+17}), suggest that the dominant site of the $r$-process is much rarer than ordinary core collapse supernovae (SNe), both in the early history of our Galaxy and today. The most promising contenders are the mergers of neutron star binaries (e.g.,~\citealt{Lattimer&Schramm74,Symbalisty&Schramm82}) or rare channels of core collapse SNe, such as those which give birth to a rapidly spinning magnetar \citep{Thompson+04,Metzger+07,Winteler+12,Nishimura+15} or a hyper-accreting black hole (``collapsar''; e.g.,~\citealt{Surman+08,siegel_collapsars_2019}; see also \citealt{Grichener&Soker19}). Perhaps not coincidentally, the same two types of events$-$neutron star mergers and collapsars$-$are the leading models for the central engines of gamma-ray bursts (GRB) of the short- and long-duration classes, respectively (e.g., \citealt{Woosley&Bloom06,berger_short-duration_2014}). The radioactive decay of $r$-process elements in the ejecta of a neutron star merger powers a short-lived optical/infrared transient known as a kilonova \citep{Li&Paczynski98,Metzger+10,Barnes&Kasen13}. However, the large quantity of $r$-process ejecta, $\gtrsim 0.02-0.06M_{\odot}$, inferred from the kilonova accompanying GW170817, as well as the relatively low inferred outflow velocity $\sim 0.1$ c of the bulk of this material (e.g., \citealt{cowperthwaite2017electromagnetic, drout2017light, Villar+17}), do not agree with predictions from numerical relativity for the mass ejected during the early dynamical phase of the merger (see \citealt{Metzger19,Siegel19,Margutti&Chornock20,Nakar20} for reviews). Instead, the dominant ejecta in GW170817, and likely in the majority of neutron star mergers, are delayed outflows from the accretion disk which forms around the black hole (BH) or neutron star remnant (e.g., \citealt{Metzger+08,Fernandez&Metzger13,Just+15,siegel_three-dimensional_2017,siegel_three-dimensional_2018,Fujibayashi+18}). General relativistic magnetohydrodynamical (GRMHD) simulations of the long-term evolution of the post-merger disk find that up to $\sim 30\%$ of its original mass is unbound in outflows with average velocities $\sim 0.1$ c \citep{siegel_three-dimensional_2017,Fujibayashi+18,Fernandez+19,Christie+19,fujibayashi_mass_2020}, broadly consistent with the kilonova observed from GW170817. As emphasized by \citet*{siegel_collapsars_2019}, accretion disk outflows similar to those generated in neutron star mergers also occur in collapsars (see also \citealt{MacFadyen&Woosley99,Janiuk+04,Surman+06,Miller+20,just_neutrino_2021}). Unlike in the merger case, the collapsing stellar material feeding the disk is composed of roughly equal numbers of protons and neutrons (electron fraction $Y_e \simeq 0.5$).
However, for mass accretion rates above a critical threshold value ($\sim 10^{-3}-10^{-1}M_{\odot}$ s$^{-1}$, depending on the effective viscosity and BH mass; \citealt{chen_neutrino-cooled_2007,metzger_conditions_2008}), the inner regions of the disk are electron degenerate and act to ``self-neutronize'' via electron captures on protons (e.g.,~\citealt{Beloborodov03}), thus maintaining a low electron fraction $Y_e \approx 0.1$ in a regulated process \citep{siegel_three-dimensional_2017}. As a result, the collapsar disk outflows, which feed on this neutron-rich reservoir, can themselves possess a sufficiently high neutron concentration to enable an $r$-process throughout much of the epoch over which the GRB jet is being powered. However, the details of the synthesized composition$-$particularly the partitioning between light and heavy $r$-process elements$-$are sensitive to the impact of neutrino absorption processes on the electron fraction of the outflowing material (\citealt{Surman+06,Miller+20,Li&Siegel21}). In comparison to neutron star mergers, collapsars hold several complementary advantages as $r$-process sources \citep{siegel_collapsars_2019}. First, because they arise promptly from very massive stars and are empirically found to occur in small dwarf galaxies at low metallicity (e.g.,~\citealt{Fruchter+06}), collapsars naturally explain the $r$-process enrichment in ultra-faint dwarf galaxies such as Reticulum II (e.g., \citealt{Ji+16}) and in metal-poor stars in the Galactic halo (e.g., \citealt{Brauer+21}). Furthermore, if the gamma-ray luminosity of GRBs scales with the BH accretion rate in the same way in mergers as in collapsars, then from the relative rate and gamma-ray fluence distributions of long- versus short-duration GRBs, one is led to conclude that the total mass accreted through collapsar disks over cosmic time (and hence the integrated amount of disk wind ejecta) could exceed that in neutron star mergers \citep{siegel_collapsars_2019}. Arguments based on the chemical evolution history of the early Milky Way have been made both in favor of rare SNe/collapsars \citep{Cote+19,siegel_collapsars_2019,vandeVoort+20,Yamazaki+21,Brauer+21} and of mergers \citep{Shen+15,Duggan+18,Macias&RamirezRuiz19,Bartos&Marka19,Holmbeck+19,Tarumi+21} as sources of the early $r$-process. While the kilonova from GW170817 provided ample evidence that neutron star mergers can execute an $r$-process, the same signature has not yet been seen from the SNe observed in coincidence with long GRBs. However, this fact is not necessarily constraining yet, insofar as $r$-process material is easier to hide in the collapsar case. In particular, a prompt and powerful supernova explosion may be required to explain the large masses of $^{56}$Ni inferred from GRB supernova light curves (e.g.,~\citealt{Cano16,barnes_grb_2018}; however, see \citealt{zenati_nuclear_2020}). By contrast, the $r$-process-generating disk outflows occur over longer times, up to tens of seconds or more after collapse, commensurate with the observed duration of the long GRB. Unless efficiently mixed to the highest velocities, the $r$-process elements (and any associated photometric or spectroscopic signatures) are therefore buried behind several solar masses of ``ordinary'' supernova ejecta (dominated by $\alpha$-elements such as oxygen).
Nevertheless, if present in the inner ejecta layers, $r$-process elements could manifest as a late-time infrared signal \citep{siegel_collapsars_2019} arising from the high opacity of heavy $r$-process nuclei \citep{Kasen+13,Tanaka&Hotokezaka13}. This signal is challenging to detect given the typically large distances to GRB SNe and the late times required (at which point the emission is faint). The detection prospects will improve with the advent of the {\it James Webb Space Telescope} ({\it JWST}), particularly if the nebular spectra of lanthanide-rich material also peak in the infrared \citep{Hotokezaka+21}. Collapsars of the type observed so far as SNe may also represent only a subset of accretion-powered core collapse events. The progenitor stars which give rise to long GRBs are typically believed to possess ZAMS masses $\lesssim 40M_{\odot}$, with helium cores at death of $\lesssim 10M_{\odot}$ (e.g., \citealt{Woosley&Heger06}). Upon collapse of their iron cores, these stars first pass through a rapidly rotating proto-neutron star phase \citep{Dessart+08}, in which a millisecond magnetar is formed \citep{Thompson+93,Raynaud+20}. The strong and collimated outflow from such a magnetar during the first seconds after its birth \citep{Thompson+04,Metzger+07}, before it accretes sufficient matter to collapse into a BH, may play an important role in shock heating, in unbinding much of the outer layers of the star, and in generating the required large $^{56}$Ni masses (e.g., \citealt{Shankar+21}). On the other hand, the long GRBs with detected SNe number only around a dozen, and the majority of these are associated with the volumetrically more common but physically distinct ``low luminosity'' class of GRBs (e.g., \citealt{Liang+07}). It thus remains unclear whether the more energetic, classical long GRBs always occur in coincidence with $^{56}$Ni-powered SNe. Indeed, luminous SNe have been ruled out for a few nominally long GRBs (e.g.,~\citealt{Fynbo+06,Gehrels+06}), though the nature of these events (e.g., whether they are actually short GRBs masquerading as collapsars) remains unclear (e.g., \citealt{Zhang+07}). Within this context, we consider in this paper the fate of initially much more massive stars, those with ZAMS masses $M_{\rm ZAMS} \gtrsim 260M_{\odot}$, which are predicted to develop helium cores above the pair-instability (PI) gap ($\gtrsim 130M_{\odot}$) by the time of core collapse (e.g., \citealt{woosley:02}, \citealt{woosley:17}, \citealt{renzo:20csm}, \citealt{farmer:20}, \citealt{woosley:21}). If the initial mass function (IMF) is an indication, such stars are potentially much rarer than the ordinarily considered collapsar progenitors with $M_{\rm ZAMS} \lesssim 40M_{\odot}$. On the other hand, if such stars are rapidly spinning \citep[e.g.,][]{marchant:19, marchant:20}---possibly because of continuous gas accretion throughout their lives \citep[e.g.][]{Jermyn+21, dittmann:21}---and if they form collapsar-like disks upon collapse in proportion to their (much higher) helium core masses, their resulting yield of $r$-process ejecta in disk winds could be substantially greater. Another key difference is that a prompt explosion (e.g., as attributed to a proto-magnetar above, or to fallback accretion; \citealt{powell:21}) is more challenging to obtain for these very massive stars.
This is because (1) the nominal timescale for BH formation is much shorter, $\lesssim 0.1$ s, owing to the large masses (exceeding~$2M_{\odot}$) and high compactness of their iron cores \citep[e.g.,][]{renzo:20csm}; and (2) their gravitational binding energies of $\gtrsim 10^{53}$ erg, compared to $\lesssim 10^{52}$ erg for lower-mass helium cores, exceed the rotational energies of even maximally spinning neutron stars. As a result of the assuredly failed initial explosion of post-PI cores, these systems are unlikely to eject a large quantity of prompt, shock-synthesized $^{56}$Ni and unprocessed stellar material (however, see \citealt{Fujibayashi+21}). Instead, the bulk of the ejecta will arise over longer timescales from disk outflows, which, scaling up from low-mass collapsars, could amount to $\gtrsim 10M_{\odot}$ of $r$-process and Fe-group elements (including $^{56}$Ni). In contrast to the usual picture of GRB SNe, the type of collapse transient above the PI gap that we envision is in some ways more akin to a scaled-up neutron star merger. At the risk of committing etymological heresy, we therefore refer to these massive collapsar transients as ``super-kilonovae'' (superKNe). As we shall discuss, if superKNe exist, their long durations and red colors may render them identifiable through either follow-up infrared observations of long GRBs (e.g., with {\it JWST}) or blind searches in surveys with the {\it Vera Rubin Observatory} (\citealt{Tyson+02}) or the {\it Nancy Grace Roman Space Telescope} ({\it Roman}; \citealt{Spergel+15}). The gravitational wave observatory LIGO/Virgo detected a binary BH merger, GW190521{}, for which both binary components, with masses of $\sim\!85M_{\odot}$ and $\sim\!66M_{\odot}$, respectively \citep{Abbott+20_190521}, were inside the nominal PI mass gap.\footnote{However, see \cite{fishbach:20, Nitz&Capano21}, who interpret GW190521{}~as a merger between one BH below the PI gap and one above.} Tentative evidence suggests a high effective spin of the progenitor binary, albeit with the spin axis misaligned from the orbital angular momentum axis (however, see \citealt{Mandel&Fragos20,Nitz&Capano21}). These unusual properties have motivated a number of theoretical studies proposing new ways to populate the PI mass gap, such as through dynamical stellar mergers \citep[e.g.][]{dicarlo:19, dicarlo:20, renzo:20merger}, hierarchical black hole mergers in dense environments (e.g., \citealt{Antonini&Rasio16,Yang+19,Tagawa+21,Gerosa&Fishbach21}), modified stellar physics at low metallicity \citep[e.g.][]{farrell:21, vink:21}, or external gas accretion (e.g., \citealt{Safarzadeh&Haiman20}). As we shall describe, if both BHs acquired their comparatively low masses and high spins as a result of inefficient disk accretion, superKN events of the type envisioned here provide a novel single-star channel for filling the PI mass gap ``from above''. This paper is organized as follows. Sec.~\ref{sec:outflow_model} presents a semi-analytic model for the collapse of rotating massive stars, their accretion disks and disk wind ejecta, and the resulting heavy element nucleosynthesis, which builds on earlier work in \citet{siegel_collapsars_2019}. Calibrating the model such that collapsars generate BH accretion events consistent with the observed properties of long GRB jets, we then apply the model to more massive $\gtrsim 130M_{\odot}$ progenitors above the PI mass gap.
Using our results for the disk wind ejecta, in Sec.~\ref{sec:light_curve_models} we calculate the light curves and spectra of their superKN emission by means of Monte Carlo radiative transfer simulations. Sec.~\ref{sec:discovery} explores the prospects for discovering superKNe in future optical/infrared surveys or in follow-up observations of long GRBs. Sec.~\ref{sec:implications} discusses several implications of our findings, including gravitational-wave emission from self-gravitating phases of the collapsar disk evolution; the astrophysical origin of GW190521{}; and the luminous radio and optical emission that results from the superKN ejecta interacting with surrounding gas. Sec.~\ref{sec:conclusions} summarizes our results. \section{Disk Outflow Model} \label{sec:outflow_model} \subsection{Stellar models} \label{sec:stellar_models} To model the pre-collapse structure of the superKN progenitors, we employ the \texttt{MESA} stellar evolution models of \cite{renzo:20csm}, publicly available at \url{https://zenodo.org/record/3406357}. The simulations start from naked helium cores of metallicity $Z=0.001$, which are then self-consistently evolved from helium core ignition, through possible (pulsational) pair-instability (PPI) evolution, to the onset of core collapse (defined as when the radial in-fall velocity exceeds $1000\,\mathrm{km\ s^{-1}}$). We label the input stellar models according to their initial helium core mass, e.g., model \texttt{200.25} corresponds to $M_{\rm He,init} = 200.25M_{\odot}$, and focus on models ``above'' the PI gap, which do not experience pair-instability-driven pulses. The models are computed using a 22-isotope nuclear reaction network, which is sufficient to capture the bulk of the energy generation throughout the stellar evolution, but cannot accurately capture the weak interactions in the innermost core \citep[e.g.,][]{farmer:16}. However, the deepest layers of the core promptly fall into the newly formed BH (see Sec.~\ref{sec:fallback}) and hence do not contribute to the accretion disk and its outflows. These models were evolved without rotation, which we instead impose artificially at the point of core collapse (see Sec.~\ref{sec:fallback}). The main effect of rotation during the pre-core-collapse evolution is mixing at the core-envelope interface, which leads to more massive helium cores for a given initial mass. In the extreme case of chemically homogeneous evolution \citep{maeder:00}, the entire star may become a helium core. This will impact how many stars develop core masses reaching into the PI/pulsational PI regime or beyond, and thus the predicted population statistics. However, because \cite{renzo:20csm} only simulate the helium core, this does not affect our present study. Rotation can also enhance the wind mass-loss rate \citep[e.g.,][]{langer:98} and increase the radius of the outer layers at the rotational equator by up to 50\%, effects that are neglected in the progenitors we use. Finally, by adding centrifugal support to the core, rotation can modestly increase the PI/pulsational PI mass range \citep[e.g.,][]{glatzel:85}.\footnote{For example, using a setup similar to \cite{renzo:20csm}, \cite{marchant:20} study the impact of an initial rotation frequency $\omega/\omega_\mathrm{crit}=0.9$, where $\omega_\mathrm{crit} \equiv \sqrt{(1-L_{\star}/L_\mathrm{Edd})GM_{\star}/R_{\star}^3}$ and $L_{\star}/L_{\rm Edd}$ is the stellar luminosity in units of the Eddington luminosity.
They found a $\sim{}4\%$ ($\sim{}15\%$) increase in the maximum BH mass below the PI mass gap assuming angular momentum is transported by a Spruit--Tayler dynamo (assuming no angular momentum transport). The stronger angular momentum coupling found by \citet{fuller:19} would likely result in an even more modest effect.} For sufficiently large initial core masses $M_\mathrm{He, init}\gtrsim 200\,M_\odot$, the final mass at collapse would nominally produce a BH above the PI mass gap (neglecting subsequent mass loss in accretion disk outflows, as explored in the present study). This arises because the gravitational energy released by the PI-driven collapse acts to photo-disintegrate nuclei produced in the thermonuclear explosion, instead of generating outward bulk motion \citep[e.g.,][]{bond:84}. Since these models do not experience pulses of mass loss, their pre-collapse total mass is determined by the assumed wind mass-loss prescription: the minimum final helium core mass above the PI mass gap for our \texttt{MESA} models is $M_\mathrm{He, fin}\gtrsim 125\,M_\odot$. \cite{renzo:20csm} estimated the corresponding final BH mass (again, neglecting post-collapse disk outflows) as the total baryonic mass with binding energy $<10^{48}$ erg, which effectively corresponds to the total final mass to within a few $0.1\,M_\odot$ \citep[e.g.,][]{farmer:19, renzo:20csm}. A key ingredient in modeling fallback accretion is the radial density profile of the star at collapse (Sec.~\ref{sec:fallback}). Despite their large masses, helium stars above the PI gap remain compact throughout their lives, never expanding as a result of PI pulses. Their typical radii, $R_{\star} \approx 10\,R_\odot$, are similar to those of the normally considered Wolf-Rayet progenitors of GRBs (e.g., \citealt{Woosley&Heger06}; Fig.~\ref{fig:stellar_rotation_profile}). If stars in this mass range reach core collapse with their hydrogen envelope intact (e.g., for sufficiently low metallicity, as in population III stars), their radii could be considerably larger; however, no red supergiants of this mass have yet been observed. We also employ the $15\,M_\odot$ and $20\,M_\odot$ single (hydrogen-rich) star models from \cite{heger_presupernova_2000} to test our collapsar model on more canonical long-GRB progenitors (see Appendix \ref{app:collapsars} for a discussion of results). These are computed starting from a surface equatorial velocity of $200\,\mathrm{km\ s^{-1}}$ at ZAMS and assume that mean molecular weight gradients do not impede rotational mixing ($f_\mu=0$, ``weak molecular weight barriers''). They are labeled \texttt{E15} and \texttt{E20}, respectively, and are publicly available at \href{https://2sn.org/stellarevolution/rotation/}{https://2sn.org/stellarevolution/rotation/}. \subsection{Collapsar Model} \label{sec:fallback} The masses and composition of the superKN ejecta are computed by modeling the collapse of a progenitor star. Depending on the stellar angular momentum profile, the collapse and fallback of envelope material leads to the formation of an accretion disk, which gives rise to massive neutron-rich disk outflows \citep{siegel_collapsars_2019,Miller+20,just_neutrino_2021}. Although the rotation profiles of massive stars at the time of core collapse, in particular of those above the PI mass gap considered here, are highly uncertain (e.g., \citealt{heger_presupernova_2000,ma_angular_2019, marchant:20}), the specific angular momentum $j_z$ generally increases with stellar radius.
In-falling stellar material thus circularizes at increasingly larger radii from the BH with time. We endow the stellar models, with mass $M_\star$ and radius $R_\star = R_{\rm He,fin}$ at the time of core collapse (Sec.~\ref{sec:stellar_models}), with an angular momentum profile that assumes rigid rotation on spherical shells, with angular velocity $\Omega(r,\theta)=\Omega(r)$. This results in \begin{equation} j_z(r,\theta) = j(r) \sin^2(\theta), \end{equation} where $r,\theta$ are the radial and polar angle coordinates, respectively. We adopt a general parametrized angular momentum profile of the form \begin{equation} j(r) = \left\{ \begin{array}{cc} f_{\rm K} j_{\rm K}(r)\left(\frac{r}{r_{\rm b}}\right)^{p}, & r < r_{\rm b} \\ f_{\rm K} j_{\rm K}(r), & r_{\rm b} \le r \le R_\star \end{array} \right., \label{eq:j_profile} \end{equation} where $r_{\rm b}$, $p$, and $f_{\rm K}$ are free parameters. This corresponds to a low-density `envelope', composed primarily of helium in the models considered here, rotating at a fraction $f_{\rm K} < 1$ of the local Keplerian angular momentum $j_{\rm K} = \sqrt{G M_{\rm enc}(r)r}$, where $M_{\rm enc}(r)$ is the mass enclosed interior to radius $r$, and an inner `core', in which rotation is suppressed by a power law with index $p$ relative to the fraction of local break-up rotation adopted for the envelope. Although the parameter values ($r_{\rm b}$, $p$, $f_{\rm K}$) are uncertain, as we discuss below, they can be ``calibrated'' to produce timescales and energetics of disk accretion consistent with the observed properties of long GRB jets. Figure \ref{fig:stellar_rotation_profile} illustrates the parametrized rotation profile for model \texttt{250.25}. \begin{figure} \centering \includegraphics[width=0.99\linewidth]{M_r_Heger.pdf} \includegraphics[width=0.99\linewidth]{M_r_Renzo.pdf} \includegraphics[width=0.99\linewidth]{j_profile_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \caption{Properties of stellar models at the onset of collapse, showing the enclosed mass as a function of stellar radius (top: models \texttt{E15} and \texttt{E20} of \citealt{heger_presupernova_2000}; center: models \texttt{200.0} and \texttt{250.25} of \citealt{renzo:20csm}), and an example of the imposed specific angular momentum profile for model \texttt{250.25} with $p=4.5$, $r_{\rm b}=1.5\times 10^{9}$\,cm, and $f_{\rm K}=0.3$ (cf.~Eq.~\ref{eq:j_profile}) compared to the corresponding Keplerian profile (green solid line; bottom). The light (dark) shaded region in the top panel represents the hydrogen envelopes of the \texttt{E20} (\texttt{E15}) models. Such envelopes are absent in the models of \citet{renzo:20csm}.} \label{fig:stellar_rotation_profile} \end{figure} Assuming an axisymmetric rotating star, we discretize the progenitor stellar model into $(n_r, n_\theta)$ mass elements, logarithmically spaced in stellar radius $r$ and uniformly spaced in $\cos\theta$. The angular resolution is chosen sufficiently high (typically $n_\theta = 1001$) that the accuracy in numerically computing global quantities by integration (total mass, total fall-back mass, etc.) is dominated by the finite radial resolution of the stellar progenitor models. Defining $t=0$ as the onset of core collapse, a given stellar layer at radius $r$ will start to collapse toward the center after the sound travel time $t_{\rm s}(r)=\int_0^r c_s^{-1}(r'){\rm d}r'$ from the center to $r$.
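For concreteness, the parametrized profile of Eq.~\eqref{eq:j_profile} and the angular discretization described above are straightforward to express in code. The following Python sketch is illustrative only (it is not the integrator used for this work), and all variable names are our own:

\begin{verbatim}
import numpy as np

G = 6.674e-8  # gravitational constant [cgs]

def j_profile(r, M_enc, r_b, p, f_K):
    """Specific angular momentum j(r) of Eq. (eq:j_profile) [cgs].

    r     : radii [cm];  M_enc : enclosed mass at each r [g]
    """
    j_kep = np.sqrt(G * M_enc * r)      # local Keplerian value
    j_env = f_K * j_kep                 # 'envelope', r_b <= r <= R_star
    return np.where(r < r_b, j_env * (r / r_b)**p, j_env)

# Rigid rotation on spherical shells: j_z(r, theta) = j(r) sin^2(theta),
# evaluated on a grid uniform in cos(theta) (typically n_theta = 1001):
costheta = np.linspace(-1.0, 1.0, 1001)
sin2theta = 1.0 - costheta**2
\end{verbatim}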
Owing to its finite angular momentum, a collapsing fluid element does not fall radially but follows an eccentric trajectory, circularizing in the equatorial plane at time (cf.~\citealt{kumar_mass_2008}) \begin{eqnarray} t_{\rm circ}(r,\theta) &=& t_{\rm s}(r)\label{eq:t_circ}\\ &+& \frac{(1+e)^{-3/2}}{\Omega_{\rm K}(r)}\left[\cos^{-1}(-e) + e(1-e^2)^{1/2}\right]\nonumber \end{eqnarray} and radius \begin{equation} r_{\rm circ}(r,\theta) = \left(\frac{8}{3\pi}\right)^2 r (1-e), \label{eq:r_circ} \end{equation} where $e(r,\theta) = 1- [\Omega^2(r)/\Omega^2_{\rm K}(r)]\sin^2\theta$ is the eccentricity of the trajectory and $\Omega_{\rm K} = (G M_{\rm enc}(r) /r^3)^{1/2}$ the Keplerian angular velocity. The innermost parts of the stellar core may not possess sufficient angular momentum to circularize into an accretion disk and, instead, directly collapse into a BH. We define the initial BH as a `seed BH' formed by the innermost stellar layers up to radius $r_{\bullet,0}$ with enclosed mass $M_{\bullet,0}=0.5\,M_\odot$, a safe assumption for all stellar models considered here. This seed BH has dimensionless spin parameter \begin{equation} a_{\bullet,0} = \frac{c J_{\rm enc}(r_{\bullet,0})}{GM^2_{\rm enc}(r_{\bullet,0})}, \end{equation} where $J_{\rm enc}(r)$ is the enclosed angular momentum, $G$ is the gravitational constant, and $c$ is the speed of light. The corresponding innermost stable circular orbit (ISCO) is given by \citep{bardeen_rotating_1972} \begin{eqnarray} r_\mathrm{ISCO}(M_{\bullet},a_{\bullet}) &=& \frac{GM_{\bullet}}{c^2} \label{eq:r_isco}\\ & \times &\left\{ 3 + z_2 - \left[ (3-z_1)(3+z_1+2z_2)\right]^{1/2}\right\}, \nonumber \end{eqnarray} where \begin{eqnarray} z_1 &=& 1 + (1-a_{\bullet}^2)^{1/3}\left[ (1+a_{\bullet})^{1/3} + (1-a_{\bullet})^{1/3}\right], \label{eq:z1}\\ z_2 &=& (3a_{\bullet}^2 + z_1^2)^{1/2}. \label{eq:z2} \end{eqnarray} Upon initial BH formation, we follow the collapse of the outer stellar layers according to Eqs.~\eqref{eq:t_circ} and \eqref{eq:r_circ} and distinguish between mass elements that circularize outside the BH to form a disk ($r_{\rm circ}(r(t),\theta) > r_{\rm ISCO} (t)$), giving rise to a `disk feeding rate' $\dot{m}_{\rm fb, disk}$, and those that directly fall into the BH without accreting through a disk ($r_{\rm circ}(r(t),\theta) \le r_{\rm ISCO} (t)$), giving rise to a direct fallback rate onto the BH $\dot{m}_{\rm fb, \bullet}$. Here, $r(t)$ refers to the radius of a stellar element at polar angle $\theta$ which circularizes at time $t$ in the equatorial plane. We denote the associated rates of angular momentum supplied to the disk and the BH by $\dot{J}_{\rm fb, disk}$ and $\dot{J}_{\rm fb, \bullet}$, respectively. We follow the evolution of the BH, disk, and ejecta properties by solving the following equations: \begin{eqnarray} \frac{{\rm d}M_{\rm \bullet}}{{\rm d} t} &=& \dot{m}_{\rm fb, \bullet} + \dot{m}_{\rm acc}, \label{eq:evol_eqn_1}\\ \frac{{\rm d}J_{\rm \bullet}}{{\rm d} t} &=& \dot{J}_{\rm fb, \bullet} + \dot{m}_{\rm acc} j_{\rm ISCO},\\ \frac{{\rm d}M_{\rm disk}}{{\rm d} t} &=& \dot{m}_{\rm fb, disk} - \dot{m}_{\rm acc} - \dot{m}_{\rm wind}, \\ \frac{{\rm d}J_{\rm disk}}{{\rm d} t} &=& \dot{J}_{\rm fb, disk} - \dot{m}_{\rm acc} j_{\rm ISCO} - \dot{J}_{\rm wind},\\ \frac{{\rm d}M_{\rm ejecta}}{{\rm d} t} &=& \dot{m}_{\rm wind}, \\ \frac{{\rm d}J_{\rm ejecta}}{{\rm d} t} &=& \dot{J}_{\rm wind}.
\label{eq:evol_eqn_6} \end{eqnarray} Here, \begin{eqnarray} j_{\rm ISCO} &=& (G M_{\rm \bullet}r_{\rm ISCO})^{1/2} \times\\ && \mskip-40mu \frac{r^2_{\mathrm{ISCO}}- a_{\rm \bullet}r_g(r_{\mathrm{ISCO}}r_g/2)^{1/2}+a_{\rm \bullet}^2 r_g^2/4}{r_{\mathrm{ISCO}}\left[r^2_{\mathrm{ISCO}}-3r_{\mathrm{ISCO}} r_g/2+ a_{\rm \bullet} r_g(r_{\mathrm{ISCO}}r_g/2)^{1/2}\right]^{1/2}} \nonumber \end{eqnarray} is the specific angular momentum of a fluid element at the ISCO of the BH with mass $M_{\rm \bullet}$, spin $a_{\rm \bullet}= c J_{\rm \bullet}/G M_{\rm \bullet}^2$, and gravitational radius $r_g = 2G M_{\rm \bullet}/c^2$ \citep{bardeen_rotating_1972}. Mass is accreted onto the BH at a rate \begin{equation} \dot{m}_{\rm acc} = f_{\rm acc} \frac{M_{\rm disk}}{t_{\rm visc}}, \label{eq:accretion_rate} \end{equation} where \begin{equation} t_{\rm visc} = \alpha^{-1} \Omega_{\rm K, disk}^{-1} h_{\rm z, disk}^{-2} \label{eq:t_visc} \end{equation} is the viscous timescale of the disk, with $\alpha$ the standard dimensionless disk viscosity \citep{shakura_black_1973}, \begin{equation} \Omega_{\rm K, disk}(t) = (G M_{\rm \bullet}/r_{\rm disk}^{3})^{1/2} \label{eq:Omega_disk} \end{equation} the Keplerian angular velocity of the disk, and $h_{\rm z, disk}$ its scale height in units of the disk radius (we take $h_{\rm z, disk}\approx 0.5$ as a fiducial value). The disk radius $r_{\rm disk}(t)$ is defined by the current disk mass and angular momentum, \begin{equation} j_{\rm disk} \equiv (G M_{\rm \bullet} r_{\rm disk})^{1/2} = \frac{J_{\rm disk}}{M_{\rm disk}}. \label{eq:r_disk} \end{equation} The disk accretion flow gives rise to powerful outflows with mass loss at a rate \begin{equation} \dot{m}_{\rm wind} = (1-f_{\rm acc}) \frac{M_{\rm disk}}{t_{\rm visc}}, \label{eq:wind_outflow_rate} \end{equation} and an associated angular momentum loss rate \begin{equation} \dot{J}_{\rm wind} = \dot{m}_{\rm wind} j_{\rm disk}. \label{eq:angular_momentum_wind} \end{equation} Neutrinos cool the disk effectively above the critical ``ignition'' accretion rate for weak interactions \citep{chen_neutrino-cooled_2007,metzger_conditions_2008,siegel_collapsars_2019,de_igniting_2020}, which is approximately given by (see Appendix \ref{sec:ignition}) \begin{equation} \dot{M}_{\rm ign}\approx 2\times 10^{-3} M_\odot {\rm s}^{-1} \left(\frac{\alpha}{0.02}\right)^{5/3} \left(\frac{M_{\rm \bullet}}{3 M_\odot}\right)^{4/3}. \label{eq:Mdot_ign} \end{equation} Motivated by the findings of GRMHD simulations of neutrino-cooled accretion flows \citep{siegel_three-dimensional_2018,Fernandez+19,siegel_collapsars_2019,de_igniting_2020}, we assume that for high accretion rates $>\!\dot{M}_{\rm ign}$ a fraction $1 - f_{\rm acc} \approx 0.3$ of the disk mass is unbound in outflows. This fraction is assumed to increase to $\approx 0.6$ below $\dot{M}_{\rm ign}$, under the assumption that inefficient cooling will result in excess heating and outflow production (e.g., \citealt{Blandford&Begelman99,de_igniting_2020}). Similarly, we assume that enhanced outflow production also occurs at very high accretion rates, for which neutrinos become effectively trapped in the optically thick accretion disk and are advected into the BH before radiating. This threshold ``trapping'' accretion rate is given by (see Appendix \ref{sec:trapping}) \begin{equation} \dot{M}_{\nu, {\rm trap}}\approx 1 M_\odot {\rm s}^{-1} \left(\frac{\alpha}{0.02}\right)^{1/3}\left(\frac{M_{\rm \bullet}}{3M_{\odot}}\right)^{4/3}.
\label{eq:Mdot_trap} \end{equation} Insofar as $\dot{M}_{\nu, {\rm trap}}$ scales with the (growing) BH mass in the same way as $\dot{M}_{\rm ign}$, we find that this trapped regime is of little practical importance in our models. In summary, the accretion efficiency is given by \begin{equation} f_{\rm acc} = \left\{ \begin{array}{cc} 0.4, & \dot{M}_{\nu,{\rm trap}} \le \frac{M_{\rm disk}}{t_{\rm visc}} \\ 0.7, & \dot{M}_{\rm ign} < \frac{M_{\rm disk}}{t_{\rm visc}} < \dot{M}_{\nu,{\rm trap}} \\ 0.4, & \frac{M_{\rm disk}}{t_{\rm visc}} \le \dot{M}_{\rm ign} \end{array}\right. . \label{eq:accretion_fraction} \end{equation} Eqs.~\eqref{eq:evol_eqn_1}--\eqref{eq:evol_eqn_6} allow a calculation of the total ejecta mass $M_{\rm ejecta}$ obtained from a particular collapsar model. We evolve this set of coupled differential equations numerically until all stellar progenitor material has collapsed and has either been accreted onto the BH or been ejected in outflows. Note that these equations explicitly conserve mass and angular momentum. Time steps are equidistant in $\log t$, with the resolution chosen sufficiently fine that (i) the accuracy of the total fallback mass is dominated by the radial resolution of the provided stellar model (see Appendix \ref{app:convergence}) and (ii) conservation of total mass and angular momentum in Eqs.~\eqref{eq:evol_eqn_1}--\eqref{eq:evol_eqn_6} is achieved to better than $10^{-14}$ relative accuracy for all model runs. Once a disk forms around the BH and its accretion rate $\dot{m}_{\rm acc}$ exceeds $10^{-4}\,M_\odot\,\text{s}^{-1}$, we assume that a relativistic jet emerges, powerful enough to drill through the remaining outer layers in the polar region. This threshold is motivated by typical GRB luminosities $L_\gamma\sim\!2\times 10^{50}\,\text{erg}\,\text{s}^{-1}$ \citep{goldstein_estimating_2016}, which, if accretion powered, require an accretion rate of at least $\dot{M}\sim L_\gamma / c^2 \sim 1.1\times 10^{-4}\,M_\odot\,\text{s}^{-1}$. If this threshold is surpassed, we ignore any remaining material in the polar regions $\theta < \theta_{\rm jet}$ and $\theta > 180^\circ - \theta_{\rm jet}$ for the subsequent fallback process. This material has little effect on the total quantity of material accreted through the disk, as it predominantly falls into the BH directly due to the low angular momentum in these regions. However, it has a slight indirect effect on nucleosynthesis by modifying the BH mass (see below). As a fiducial value, we take $\theta_{\rm jet}=30^\circ$. We further justify the existence of such a successful jet {\it a posteriori} by the fact that our models sustain the regime $\dot{M}>10^{-4}\,M_\odot\,\text{s}^{-1}$ favorable for powering typical observed long GRBs for durations that include the time necessary for the jet to drill through the stellar envelope (see Sec.~\ref{sec:fallback_results}). The fallback process may in some cases give rise to massive, gravitationally unstable accretion disks. In this limit, the disk mass becomes comparable to the BH mass, and our assumption of a Kerr metric would no longer be justified.
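To make the structure of the model concrete, the following Python fragment sketches a single explicit update of Eqs.~\eqref{eq:evol_eqn_1}--\eqref{eq:evol_eqn_6}. It is a schematic illustration only, not our actual integrator (which uses logarithmic time steps and the full Kerr expressions for $r_{\rm ISCO}$ and $j_{\rm ISCO}$); here $j_{\rm ISCO}$ is supplied as an input, and all names are invented for the example:

\begin{verbatim}
import numpy as np

G = 6.674e-8     # [cgs]
MSUN = 1.989e33  # [g]

def mdot_ign(alpha, M_bh):   # Eq. (eq:Mdot_ign) [g/s]
    return 2e-3 * MSUN * (alpha / 0.02)**(5/3) * (M_bh / (3*MSUN))**(4/3)

def mdot_trap(alpha, M_bh):  # Eq. (eq:Mdot_trap) [g/s]
    return 1.0 * MSUN * (alpha / 0.02)**(1/3) * (M_bh / (3*MSUN))**(4/3)

def f_accretion(mdot, alpha, M_bh):
    """Piecewise accretion efficiency, Eq. (eq:accretion_fraction)."""
    if mdot_ign(alpha, M_bh) < mdot < mdot_trap(alpha, M_bh):
        return 0.7
    return 0.4

def step(state, fb, j_isco, dt, alpha=0.05, h_z=0.5):
    """One explicit update of Eqs. (eq:evol_eqn_1)-(eq:evol_eqn_6).

    state : dict with M_bh, J_bh, M_disk, J_disk, M_ej  [cgs]
    fb    : dict with instantaneous feeding rates mdot_fb_bh,
            jdot_fb_bh, mdot_fb_disk, jdot_fb_disk
    """
    j_disk = state["J_disk"] / state["M_disk"]
    r_disk = j_disk**2 / (G * state["M_bh"])          # Eq. (eq:r_disk)
    omega_K = np.sqrt(G * state["M_bh"] / r_disk**3)  # Eq. (eq:Omega_disk)
    t_visc = 1.0 / (alpha * omega_K * h_z**2)         # Eq. (eq:t_visc)

    mdot = state["M_disk"] / t_visc
    f_acc = f_accretion(mdot, alpha, state["M_bh"])
    mdot_acc, mdot_wind = f_acc * mdot, (1.0 - f_acc) * mdot

    state["M_bh"]   += (fb["mdot_fb_bh"] + mdot_acc) * dt
    state["J_bh"]   += (fb["jdot_fb_bh"] + mdot_acc * j_isco) * dt
    state["M_disk"] += (fb["mdot_fb_disk"] - mdot_acc - mdot_wind) * dt
    state["J_disk"] += (fb["jdot_fb_disk"] - mdot_acc * j_isco
                        - mdot_wind * j_disk) * dt
    state["M_ej"]   += mdot_wind * dt
    return state
\end{verbatim}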
Returning to the gravitationally unstable regime, we estimate the instability region by monitoring the ratio of the disk's self-gravity to the gravitational acceleration due to the BH potential \citep{paczynski_model_1978,gammie_nonlinear_2001}, \begin{equation} Q^{-1} \equiv \frac{2 \pi \Sigma r_{\rm disk}^2}{M_{\rm \bullet}h_{z,{\rm disk}}} \simeq \frac{2}{h_{z,{\rm disk}}} \frac{M_{\rm disk}}{M_{\rm \bullet}} > 1, \label{eq:gravitational_instability} \end{equation} where $\Sigma$ is the disk's surface density. If $Q<1$, we remove excess disk mass by enhancing accretion and wind production so as to restore $Q=1$. This is motivated by the fact that gravitationally unstable disks tend to self-regulate by increased angular momentum transport via gravitationally driven turbulence, thereby increasing the accretion rate and reducing the disk mass until $Q>1$ (e.g., \citealt{gammie_nonlinear_2001}). The composition of the disk wind ejecta at a given time depends most sensitively on the instantaneous accretion rate \citep{siegel_collapsars_2019}. Following \citet{siegel_collapsars_2019}, \citet{Miller+20}, and \citet{Li&Siegel21}, we define the following accretion regimes: \begin{equation} \frac{M_{\rm disk}}{t_{\rm visc}} = \left\{ \begin{array}{cc} > \dot{M}_{\nu,{\rm r-p}} & \text{weak $r$-process} \\ \in [2\dot{M}_{\rm ign}, \dot{M}_{\nu,{\rm r-p}}] & \text{strong $r$-process} \\ \in [\dot{M}_{\rm ign}, 2\dot{M}_{\rm ign}] & \text{weak $r$-process} \\ < \dot{M}_{\rm ign} & \text{no $r$-process,} \\ & ^{56}\text{Ni production} \end{array}\right. . \label{eq:accretion_regimes} \end{equation} Here, $\dot{M}_{\nu,{\rm r-p}}$ represents a threshold between the production of lanthanides and of first-to-second peak $r$-process elements only, accounting for the fact that increased neutrino irradiation at high accretion rates tends to raise the electron fraction above the value of $\approx\!0.25$ below which lanthanide production is possible (e.g., \citealt{lippuner_r-process_2015}). We assume this threshold scales with the accretion rate above which the inner disk becomes optically thick to neutrinos, which we estimate as (see Appendix \ref{sec:trapping}) \begin{equation} \dot{M}_{\nu, {\rm r-p}}\approx 0.1 M_\odot {\rm s}^{-1} \left(\frac{\alpha}{0.02}\right) \left(\frac{M_{\rm \bullet}}{3M_{\odot}}\right)^{4/3}. \label{eq:Mdot_opaque} \end{equation} This expression has been normalized using numerical results by \citet{siegel_collapsars_2019} and \citet{Miller+20} for $M_{\rm \bullet}\approx 3 M_\odot$. Additionally including the effects of neutrino fast flavor conversions may increase $\dot{M}_{\nu,{\rm r-p}}$ significantly \citep{Li&Siegel21}, possibly up to $\approx\!1M_\odot$ s$^{-1}$ or higher for such light BHs. We therefore treat the normalization as a free parameter and explore different scenarios in which the value is scaled up by a factor of ten. Below the ignition rate $\dot{M}_{\rm ign}$, $r$-process production ceases abruptly, and nucleosynthesis in the outflows, with roughly equal numbers of neutrons and protons ($Y_e\simeq 0.5$), proceeds only up to iron-peak elements \citep{siegel_collapsars_2019}. A large fraction of the outflowing material in this epoch remains, however, as $^{4}$He instead of forming heavier isotopes. This is due to the slow rate of the triple-$\alpha$ reaction needed to create seed nuclei when $Y_{e} \approx 0.5$, relative to the much faster neutron-catalyzed reaction $^{4}$He($\alpha n,\gamma)^{9}$Be($\alpha,n)^{12}$C that operates when $Y_e \ll 0.5$ \citep{woosley_alpha-process_1992}.
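The regime boundaries of Eq.~\eqref{eq:accretion_regimes}, together with the normalizations of Eqs.~\eqref{eq:Mdot_ign} and \eqref{eq:Mdot_opaque}, reduce to a simple classifier. A minimal, self-contained sketch (the function names are ours, and boost stands in for the free normalization discussed above):

\begin{verbatim}
MSUN = 1.989e33  # [g]

def mdot_ign(alpha, M_bh):
    """Ignition rate, Eq. (eq:Mdot_ign) [g/s]."""
    return 2e-3 * MSUN * (alpha / 0.02)**(5/3) * (M_bh / (3*MSUN))**(4/3)

def mdot_rp(alpha, M_bh, boost=1.0):
    """Lanthanide threshold, Eq. (eq:Mdot_opaque) [g/s]; 'boost'
    rescales the normalization (e.g., boost=10 to mimic neutrino
    fast flavor conversions, treated as a free parameter)."""
    return boost * 0.1 * MSUN * (alpha / 0.02) * (M_bh / (3*MSUN))**(4/3)

def nucleosynthesis_regime(mdot, alpha, M_bh, boost=1.0):
    """Wind composition regime, Eq. (eq:accretion_regimes)."""
    ign = mdot_ign(alpha, M_bh)
    rp = mdot_rp(alpha, M_bh, boost)
    if mdot > rp:
        return "weak r-process"
    if mdot >= 2.0 * ign:
        return "strong r-process"
    if mdot >= ign:
        return "weak r-process"
    return "no r-process; 56Ni production"
\end{verbatim}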
To estimate the yield of $^{56}$Ni in such $Y_e\simeq 0.5$ disk outflows, we employ a simple model similar to that of \citet{siegel_collapsars_2019} (see Appendix \ref{sec:Ni56_production}). A requisite for the synthesis of $^{56}$Ni in disk outflows is that nuclei from stellar fallback material are dissociated into individual nucleons upon entering the inner part of the accretion disk. At late times during the accretion process, the disk densities and temperatures may not be high enough to ensure full dissociation. We estimate the transition time $t_{\rm diss}$ to this state by evaluating the conditions under which only 50\% of $\alpha$-particles are dissociated in the disk (see Appendix \ref{sec:Ni56_production}). For $t>t_{\rm diss}$ we ignore any potential further nucleosynthesis in disk outflows. \subsection{Collapsar Model Results} \label{sec:fallback_results} We start in Sec.~\ref{sec:results_basic_model} by walking through the evolution of the collapse and mass-ejection process for a representative model corresponding to a star above the nominal PI mass gap. Appendix~\ref{app:collapsars} presents the results of our model when applied to ``ordinary'' low-mass collapsars (with BH masses below the PI mass gap), demonstrating that for the fiducial range of parameters considered in this work, we obtain properties in agreement with observed GRBs and previously predicted $r$-process ejecta. Using the same parameters (now ``calibrated'' to reproduce the properties of ordinary collapsars), we present in Sec.~\ref{sec:results_massgap_collapsars} a parameter exploration of ejecta masses and nuclear compositions for massive collapsars above the PI mass gap. \subsubsection{Basic Model Evolution} \label{sec:results_basic_model} \begin{figure} \centering \includegraphics[width=0.99\linewidth]{M_fb_ejecta_alpha=0.05_p_exp=2.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \includegraphics[width=0.99\linewidth]{J_plots_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \includegraphics[width=0.99\linewidth]{M_plots_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \caption{Collapse evolution for a representative stellar model \texttt{250.25} with typical rotation parameters $p=4.5$, $f_{\rm K}=0.3$, and $r_{\rm b}=1.5\times 10^{9}$\,cm. Top: fallback rates $\dot{M}_{\rm fb}$ onto the BH (direct; blue), onto an accretion disk (yellow), and total (green), as a function of the total cumulative collapsed mass $M_{\rm fb}$. Dotted lines indicate the corresponding evolution when ignoring the effect of a jet. Center and bottom: evolution of angular momenta (center) and masses (bottom) as determined by Eqs.~\eqref{eq:evol_eqn_1}--\eqref{eq:evol_eqn_6}.} \label{fig:Renzo_evolution} \end{figure} \begin{figure}[htbp] \centering \includegraphics[width=0.99\linewidth]{Mwind_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \includegraphics[width=0.99\linewidth]{XNi_time_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf} \caption{Top: accretion rate at which ejecta is being produced as a function of cumulative ejecta mass for model \texttt{250.25} with $p=4.5$, $f_{\rm K}=0.3$, and $r_{\rm b}=1.5\times 10^{9}$\,cm. The nucleosynthesis regimes according to Eq.~\eqref{eq:accretion_regimes} are color-coded. Bottom: corresponding mass fraction of $^{56}$Ni synthesized in disk outflows and of $^{4}$He in the accretion disk. The vertical dashed line refers to the time $t_{\rm diss}$ at which only 50\% of $\alpha$-particles are dissociated in the disk.
For $t>t_{\rm diss}$ we ignore further $^{56}$Ni production in the outflows.} \label{fig:Renzo_accretion_rate} \end{figure} Figure \ref{fig:Renzo_evolution} illustrates the collapse evolution of model \texttt{250.25} with representative rotation parameters of $p=4.5$, $f_{\rm K}=0.3$, and $r_{\rm b}=1.5\times 10^{9}$\,cm. Upon seed BH formation, the BH grows rapidly in mass and spin through low-angular-momentum material at small radii that falls directly into the seed BH before circularizing. Only after a few seconds, once $\approx\!10\,M_\odot$ of the inner layers have been accreted, does material first circularize outside the BH horizon to form an accretion disk (cf.~Fig.~\ref{fig:Renzo_evolution}, top and bottom panels), initiating accretion onto the BH through a disk in addition to direct infall. Direct fallback onto the BH subsides after the accretion of about $20\,M_\odot$ in this model (cf.~Fig.~\ref{fig:Renzo_evolution}, top panel), once a significant fraction of the low-angular-momentum material residing in the polar regions of the progenitor model has fallen into the BH. Further BH growth then proceeds almost entirely through disk accretion. This initial direct fallback episode partially clears the polar regions for a relativistic jet to propagate through the outer stellar layers, eventually break out of the star, and generate a long gamma-ray burst. Around the same time, a significant fallback rate onto the disk sets in (cf.~Fig.~\ref{fig:Renzo_evolution}, top panel), establishing a massive $\sim\!15\,M_\odot$ accretion disk on a timescale of a few seconds (cf.~Fig.~\ref{fig:Renzo_evolution}, bottom panel). The disk accretion rate onto the BH, $\dot{m}_{\rm acc}$, quickly exceeds $\dot{M}_{\rm ign}$, and we assume that a relativistic jet forms. This removes the remaining low-angular-momentum material in the polar regions and thus suppresses direct fallback onto the BH, which becomes negligible compared to disk fallback (cf.~Fig.~\ref{fig:Renzo_evolution}, top panel). The top panel of Fig.~\ref{fig:Renzo_evolution} also shows that ignoring the effect of such a jet would lead to subdominant, extended direct fallback of residual low-angular-momentum material in the polar regions onto the BH. While this does not have a direct impact on disk accretion, it has minor indirect consequences for nucleosynthesis in the disk winds due to its effect on the BH mass (cf.~Eq.~\eqref{eq:accretion_regimes}). For somewhat larger values of $r_{\rm b}$, the situation changes and direct fallback onto the BH may extend to late times even in the presence of a jet, due to the overall lower angular momentum budget of the progenitor star outside the polar cone with opening angle $\theta_{\rm jet}$. For more extreme scenarios, fallback onto the disk may become close to non-existent. As soon as the disk forms, most angular momentum in this model resides in the disk rather than the BH (cf.~Fig.~\ref{fig:Renzo_evolution}, center panel). The majority of this angular momentum is blown off in the ejecta, while a subdominant amount is transferred to the BH as disk matter gradually accretes through the ISCO onto the BH. For significantly larger values of $r_b$ this trend reverses, and most angular momentum is transferred to the BH rather than the ejecta, as less material accretes through a disk. The top panel of Fig.~\ref{fig:Renzo_accretion_rate} shows the history of ejecta production in the model discussed above.
Shown is the instantaneous accretion state $M_{\rm disk}/t_{\rm visc}$ of the disk as a function of the cumulative ejected wind mass, together with the nucleosynthesis regimes defined in Eq.~\eqref{eq:accretion_regimes}. This evolution shows a `sweep' through most nucleosynthesis regimes, typical of the models considered here. The nucleosynthesis regimes change during the evolution as a result of the BH mass growth, and these changes can be more dramatic in some cases than illustrated here. Outflows are first created in the regime of a main $r$-process with lanthanide production, during which the bulk of the wind ejecta is produced. The remaining $\approx10\,M_\odot$ of ejecta originate in a regime that mostly ejects $\alpha$-particles and $\sim\!0.1\,M_\odot$ of $^{56}$Ni. The bottom panel of Fig.~\ref{fig:Renzo_accretion_rate} illustrates $^{56}$Ni production in this regime. Shown are the mass fraction of $^{56}$Ni produced in disk outflows according to Eq.~\eqref{eq:Yseed} as well as the mass fraction of $\alpha$-particles in the accretion disk according to Eq.~\eqref{eq:Saha}. The vertical dashed line indicates the dissociation time $t_{\rm diss}$, after which $<50\%$ of $\alpha$-particles are dissociated into individual nucleons in the accretion disk (Sec.~\ref{sec:fallback}, Appendix \ref{sec:Ni56_production}). As a conservative estimate, for $t>t_{\rm diss}$ we ignore any further production of $^{56}$Ni according to Eq.~\eqref{eq:Yseed}, as the required free nucleons become unavailable. However, this represents only a slight correction in most cases, as by far the dominant amount of $^{56}$Ni is typically produced before $t=t_{\rm diss}$. \subsubsection{Parameter Study of Massive Collapsars} \label{sec:results_massgap_collapsars} Before systematically applying our model across the parameter space of massive collapsars, we first apply it to `ordinary' collapsars of stars well below the PI mass gap, the results of which we describe in Appendix \ref{app:collapsars}. We use the progenitor models of \citet{heger_presupernova_2000} as representative of typical stellar progenitors of canonical long GRBs \citep{MacFadyen&Woosley99}. Our results for the nucleosynthesis yields of the disk outflows as a function of the parameters $\{f_{\rm K}, r_{\rm b}\}$ which enter the progenitor angular momentum profile (Fig.~\ref{fig:Heger_ejecta}) broadly agree with those previously presented in \citet{siegel_collapsars_2019}, though some quantitative differences arise due to our more detailed treatment of different regimes of BH accretion (see Appendix \ref{app:collapsars} for a discussion). Our low-mass collapsar models also exhibit BH accretion timescales and energetics of putative jet activity in agreement with long GRB observations. We can therefore claim a rough ``calibration'' of our model across the adopted parameter space of progenitor angular momentum properties, allowing for more confidence when extrapolating to the regime of more massive collapsars described below.
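Schematically, the parameter survey presented next amounts to looping the collapse calculation over a grid in the rotation parameters. In the Python sketch below, run_collapse is a stand-in name for the integrator of Sec.~\ref{sec:fallback} (stubbed here so that the fragment executes); it is not an interface of our actual code, and the grid ranges are illustrative:

\begin{verbatim}
import numpy as np

def run_collapse(model, p, f_K, r_b, alpha):
    """Stub standing in for the integrator of Eqs.
    (eq:evol_eqn_1)-(eq:evol_eqn_6); returns final BH and ejecta
    masses (zeros here, so that the sketch is executable)."""
    return {"M_bh_final": 0.0, "M_rp_heavy": 0.0,
            "M_rp_light": 0.0, "M_Ni": 0.0}

f_K_grid = np.linspace(0.1, 0.9, 17)    # envelope Keplerian fraction
r_b_grid = np.logspace(8.5, 10.0, 16)   # break radius [cm]

results = {}
for f_K in f_K_grid:
    for r_b in r_b_grid:
        out = run_collapse("250.25", p=4.5, f_K=f_K, r_b=r_b,
                           alpha=0.05)
        results[(f_K, r_b)] = (out["M_bh_final"], out["M_rp_heavy"],
                               out["M_rp_light"], out["M_Ni"])
\end{verbatim}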
\begin{figure} \centering \includegraphics[width=0.98\linewidth]{final_BH_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{rp_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{lrp_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{Ni_mass_alpha=0.05_p_exp_4.5.pdf} \caption{BH masses and disk wind ejecta properties across the parameter space of progenitor rotational profiles (envelope Keplerian fraction $f_{\rm K}$ and break radius $r_{\rm b}$; see Fig.~\ref{fig:stellar_rotation_profile}, bottom panel) for model \texttt{250.25}. Shown are the final BH mass (top), the total ejected mass in heavy ($A>136$) $r$-process elements including lanthanides (center top), light ($A<136$) $r$-process material (center bottom), and $^{56}$Ni (bottom). Red contours indicate the inferred primary mass of GW190521{}, together with its 90\% confidence limits ($85^{+21}_{-14}\,M_\odot$; \citealt{Abbott+20_190521}). Cyan contours delineate final BH masses of 60\,$M_\odot$ and 130\,$M_\odot$, which approximately correspond to the lower and upper end of the PI mass gap.} \label{fig:Renzo_ejecta} \end{figure} \begin{figure} \centering \includegraphics[width=0.98\linewidth]{lan_frac_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{rp_frac_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{lrp_frac_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{Ni_frac_alpha=0.05_p_exp_4.5.pdf} \caption{Scan of the parameter space for model \texttt{250.25}. We show the mass fractions of lanthanides (top), total $r$-process material (center top), light $r$-process elements ($A<136$; center bottom), and $^{56}$Ni (bottom) in the ejecta, assuming full mixing of ejecta components. Red contours indicate the inferred primary mass of GW190521{}, together with its 90\% confidence limits. Cyan contours delineate final BH masses of 60\,$M_\odot$ and 130\,$M_\odot$, which approximately correspond to the lower and upper end of the PI mass gap.} \label{fig:Renzo_mass_fractions} \end{figure} Figs.~\ref{fig:Renzo_ejecta} and \ref{fig:Renzo_GRB_accretion} summarize our results for the ejecta and GRB properties of model \texttt{250.25}, a representative example of a stellar model above the PI mass gap, in the parameter space $\{f_{\rm K}, r_{\rm b}\}$. The top panel of Fig.~\ref{fig:Renzo_ejecta} shows that, even for a progenitor mass $M_{\star} =150\,M_\odot$ at the onset of collapse (that is, well above the PI mass gap), the final BH remnant can populate the entire mass gap between $\sim\!55\,M_\odot-130\,M_\odot$ (for typical parameter values), depending on the rotation profile at the onset of collapse. Labeled contours indicate the inferred primary mass of GW190521{}, together with its 90\% confidence limits. We focus on this region of the parameter space in what follows, insofar as superKNe generated from such events probe BHs formed in the PI mass gap. As in the case of the low-mass collapsars (Appendix~\ref{app:collapsars}), our results are not sensitive to the precise value of the power-law coefficient $p$, which we thus ignore in what follows. We find ubiquitous $r$-process production throughout the parameter space, ranging between $\sim 0.1-2.3\,M_\odot$ of heavy ($A>136$) $r$-process material including lanthanides and $\sim 1-29\,M_\odot$ of light ($A<136$) $r$-process elements. Additionally, between $\sim 0.05-1\,M_\odot$ of $^{56}$Ni is synthesized in the ejecta.
Interestingly, the region of highest $r$-process production is well aligned with intermediate final BH masses in a range similar to the GW190521{} confidence region (Sec.~\ref{sec:GW190521}). For large $r_{\rm b}$ the outer stellar layers possess too little angular momentum to form massive accretion disks that give rise to copious $r$-process ejecta, as most material falls directly into the BH. On the other hand, for small values of $r_{\rm b}$ and high values of $f_{\rm K}$, massive disks form; however, the high angular momentum leads to large disk radii $r_{\rm disk}$ and long associated viscous timescales, such that the accretion rate drops below the required thresholds for $r$-process production for most of the accretion process. This occurs despite the presence of gravitationally driven spiral modes in this regime, which tend to increase the accretion rate (Sec.~\ref{sec:fallback}). Most $r$-process material (both light and heavy) is synthesized for small values of both $r_{\rm b}$ and $f_{\rm K}$, which represent the optimal compromise between a high angular momentum budget and sufficient compactness of the accretion disk. We discuss the possible contribution of massive collapsars to the long GRB population in Sec.~\ref{sec:GRB}. For use in our subsequent light curve models (Sec.~\ref{sec:light_curve_models}), we decompose the ejecta content of the collapsar models further into mass fractions of several constituents of interest. Assuming full mixing of all ejecta content (see also Sec.~\ref{sec:analytic}), we calculate the mass fraction $X_{\rm La}$ of lanthanides (atomic mass number $136\lesssim A\lesssim 176$) from the amount of main $r$-process material, assuming the solar $r$-process abundance pattern \citep{arnould_r-process_2007}, motivated by the results of \citet{siegel_collapsars_2019}. The mass fraction $X_{\rm lrp}$ of light $r$-process elements combines the light $r$-process material with the fraction of the main $r$-process ejecta at $A<136$ when applying the solar $r$-process abundance pattern. Finally, we also compute the mass fraction $X_{\rm Ni}$ of $^{56}$Ni. Results are depicted in Fig.~\ref{fig:Renzo_mass_fractions}. For concreteness, we select several models along iso-mass contours for the final BH mass within the GW190521{}\ confidence region and report the corresponding ejecta parameters in Tab.~\ref{tab:ejecta_parameters}.
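This decomposition is simple bookkeeping once the solar pattern fixes how the main $r$-process material splits across $A$. In the Python sketch below, the split fractions f_La and f_lt136 are assumed placeholder values for illustration (the actual split follows \citealt{arnould_r-process_2007}), and the function name is ours:

\begin{verbatim}
def ejecta_mass_fractions(M_ej, M_main_rp, M_light_rp, M_Ni,
                          f_La=0.3, f_lt136=0.3):
    """Decompose fully mixed ejecta into (X_La, X_lrp, X_Ni, X_He).

    f_La    : lanthanide (136 <~ A <~ 176) share of the main
              r-process material under the solar pattern
              (placeholder value).
    f_lt136 : A < 136 share of the main r-process material under
              the same pattern (placeholder value).
    """
    X_La  = f_La * M_main_rp / M_ej
    X_lrp = (M_light_rp + f_lt136 * M_main_rp) / M_ej
    X_Ni  = M_Ni / M_ej
    X_rp  = (M_main_rp + M_light_rp) / M_ej
    X_He  = 1.0 - X_rp - X_Ni   # placeholder for the non-radioactive rest
    return X_La, X_lrp, X_Ni, X_He
\end{verbatim}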
\begin{table*} \centering \caption{Ejecta parameters for select mass gap collapsar models with $p=4.5$ along contours of constant final BH mass (cf.~Figs.~\ref{fig:Renzo_ejecta} and \ref{fig:Renzo_mass_fractions}).} \label{tab:ejecta_parameters} \begin{tabular}{ccccccccc} \hline model & $M_{\bullet}$ & $M_{\rm ej}$ & $r_{\rm b}$ & $f_{\rm K}$ & $X_{\rm La}$ & $X_{\rm lrp}$ & $X_{\rm Ni}$ & $X_{\rm Ni}/X_{\rm rp}$\\ \hline & ($M_\odot$) & ($M_\odot$) & ($10^9$ cm) & & & & & \\ \hline \hline \texttt{250.25} & 106 & 27.24 & 1.94 & 0.25 & 0.020 & 0.59 & 0.0011 & 0.0018\\ & 106 & 29.67 & 2.35 & 0.35 & 0.014 & 0.47 & 0.0020 & 0.0041\\ & 106 & 30.51 & 2.67 & 0.45 & 0.008 & 0.36 & 0.0028 & 0.0075\\ & 106 & 31.53 & 3.04 & 0.60 & 0.004 & 0.21 & 0.0039 & 0.0179\\ & 85 & 45.57 & 1.20 & 0.35 & 0.012 & 0.47 & 0.0040 & 0.0079\\ & 85 & 46.25 & 1.69 & 0.45 & 0.007 & 0.32 & 0.0059 & 0.0175\\ & 85 & 47.20 & 1.96 & 0.55 & 0.005 & 0.22 & 0.0070 & 0.0303\\ & 85 & 47.48 & 2.06 & 0.60 & 0.004 & 0.19 & 0.0072 & 0.0359\\ & 71 & 58.06 & 1.19 & 0.50 & 0.004 & 0.19 & 0.0120 & 0.0601\\ & 71 & 58.22 & 1.37 & 0.55 & 0.003 & 0.16 & 0.0094 & 0.0562\\ & 71 & 58.00 & 1.50 & 0.60 & 0.003 & 0.13 & 0.0058 & 0.0417\\ & 71 & 59.78 & 1.53 & 0.65 & 0.002 & 0.11 & 0.0121 & 0.1035\\ \texttt{200.25} & 106 & 11.41 & 2.14 & 0.25 & 0.011 & 0.42 & 0.0013 & 0.0029\\ & 106 & 13.87 & 2.59 & 0.35 & 0.008 & 0.34 & 0.0019 & 0.0054\\ & 85 & 28.77 & 1.32 & 0.35 & 0.016 & 0.53 & 0.0026 & 0.0046\\ & 85 & 29.74 & 1.59 & 0.45 & 0.010 & 0.39 & 0.0040 & 0.0096\\ & 71 & 40.93 & 1.07 & 0.50 & 0.007 & 0.29 & 0.0079 & 0.0254\\ & 71 & 40.64 & 1.21 & 0.55 & 0.006 & 0.25 & 0.0083 & 0.0308\\ \hline \end{tabular} \end{table*} \section{Super-Kilonova Emission} \label{sec:light_curve_models} As the disk outflows expand away from the BH, the ejecta shell they form eventually gives rise to optical/infrared emission powered by radioactive decay (the ``superKN''). \subsection{Analytic Estimates} \label{sec:analytic} We begin with analytic estimates of the superKN properties. The total ejecta mass $M_{\rm ej}$ comprises up to three main components: (1) radioactive $r$-process nuclei, with mass fraction $X_{\rm rp}$; (2) radioactive $^{56}$Ni, $X_{\rm Ni}$; and (3) non-radioactive $^{4}$He, $X_{\rm He} = 1 - X_{\rm rp} - X_{\rm Ni}$ (also a placeholder for other non-radioactive elements). Typical values for our fiducial models (Sec.~\ref{sec:fallback_results}) are $M_{\rm ej} \sim 10-60 M_{\odot}$, $X_{\rm rp} \sim 0.1-0.5$, and $X_{\rm Ni} \sim 10^{-3}-10^{-2}$ ($M_{\rm Ni} \sim 10^{-2}-0.5M_{\odot}$). As described in the previous section, the total $r$-process mass fraction can be further subdivided into that of light $r$-process nuclei, $X_{\rm lrp}$, and that of lanthanides, $X_{\rm La}$. For simplicity, throughout this section we assume the ejecta are mixed homogeneously into a single, approximately spherical shell. Physically, such mixing could result from hydrodynamic instabilities that develop between different components of the radially and temporally varying disk winds and/or from their interaction with the GRB jet (e.g., \citealt{Gottlieb+21}). \begin{figure} \centering \includegraphics[width=0.98\linewidth]{lpk_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{teffpk_alpha=0.05_p_exp_4.5.pdf} \includegraphics[width=0.98\linewidth]{tpk_alpha=0.05_p_exp_4.5.pdf} \caption{Analytic light curve estimates across the parameter space for model \texttt{250.25}. Shown are the peak luminosity (top), peak effective temperature (center), and peak timescale (bottom).
Red contours indicate the inferred primary mass of GW190521{}, together with its 90\% confidence limits. Cyan contours delineate final BH masses of 60\,$M_\odot$ and 130\,$M_\odot$, which approximately correspond to the lower and upper end of the PI mass gap.}
\label{fig:Renzo_KN}
\end{figure}
The light curve will peak roughly when the expansion timescale equals the photon diffusion timescale (e.g., \citealt{arnett_type_1982}),
\begin{eqnarray}
t_{\rm pk} &\approx& \left(\frac{M_{\rm ej}\kappa}{4\pi v_{\rm ej} c}\right)^{1/2} \label{eq:tpk} \\
&\approx& 108\,{\rm d}\left(\frac{M_{\rm ej}}{50M_{\odot}}\right)^{1/2}\left(\frac{v_{\rm ej}}{0.1c}\right)^{-1/2}\left(\frac{\kappa}{1\,{\rm cm^{2}\,g^{-1}}}\right)^{1/2}, \nonumber
\end{eqnarray}
where $v_{\rm ej}$ is the average ejecta velocity. The effective gray opacity $\kappa$ varies in kilonovae from $\lesssim 1$ cm$^{2}$ g$^{-1}$ for ejecta dominated by light $r$-process species, to $\sim 10-30$ cm$^{2}$ g$^{-1}$ for ejecta containing a sizable quantity of lanthanide atoms and ions (e.g., \citealt{Kasen+13,Tanaka+20}). However, $\kappa$ will be smaller than these estimates in the superKN case due to the large mass fraction of light elements, $X_{\rm He} \sim 0.5-0.9$, which contribute negligibly to the opacity. For our analytic estimates below, we linearly interpolate $\kappa$ between 0.03 cm$^{2}$ g$^{-1}$ (at $X_{\rm La}=10^{-4}$) and 3 cm$^{2}$ g$^{-1}$ (at $X_{\rm La}\ge 0.2$), which we find results in reasonable agreement with the detailed radiation transport calculations presented in Sec.~\ref{sec:Supernova_radiation_transport_results}. The peak luminosity and effective temperature can also be estimated using analytic formulae (e.g., \citealt{Metzger19}),
\begin{eqnarray}
L_{\rm pk} &\approx& 4\times 10^{41}\,{\rm erg\,s^{-1}}\left(\frac{X_{\rm rp}}{0.2}\right) \label{eq:Lpk} \\
&&\times\left(\frac{M_{\rm ej}}{50M_{\odot}}\right)^{0.35}\left(\frac{v_{\rm ej}}{0.1c}\right)^{0.65}\left(\frac{\kappa}{{\rm cm^{2}\,g^{-1}}}\right)^{-0.65}, \nonumber
\end{eqnarray}
\begin{eqnarray}
T_{\rm eff,pk} &\approx& 900\,{\rm K}\left(\frac{X_{\rm rp}}{0.2}\right)^{0.25} \\
&&\times\left(\frac{M_{\rm ej}}{50M_{\odot}}\right)^{-0.16}\left(\frac{v_{\rm ej}}{0.1c}\right)^{0.41}\left(\frac{\kappa}{{\rm cm^{2}\,g^{-1}}}\right)^{-0.41}, \nonumber
\end{eqnarray}
where we have used the radioactive heating rate of $r$-process nuclei from \citet{Metzger+10} with an assumed thermalization efficiency of 50\%. Near peak light, $t \sim t_{\rm pk} \sim 100$ d, the specific radioactive heating rate of $^{56}$Ni is $\sim 10-30$ times higher than that of $r$-process elements (e.g., \citealt{siegel_collapsars_2019}). Given values $X_{\rm Ni}/X_{\rm rp} \sim 0.01-0.05$ for most of our disk outflow models, $L_{\rm pk}$ is moderately underestimated by Eq.~(\ref{eq:Lpk}), which neglects $^{56}$Ni heating. Fig.~\ref{fig:Renzo_KN} shows the predicted peak timescale, luminosity, and effective temperature of the superKN emission in the parameter space $\{f_{\rm K}, r_{\rm b}\}$ for the fiducial model \texttt{250.25}. For the parameters that generate remnant BHs with masses in the PI gap, we predict peak luminosities $L_{\rm pk} \sim 10^{42}$ erg s$^{-1}$ and characteristic durations of months. Though similar in duration to other types of SNe, superKNe are characterized by significantly cooler emission ($T_{\rm eff} \approx 1000$ K), as confirmed by the radiative transfer calculations presented in the next section.
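
For reference, the peak estimates above are straightforward to evaluate numerically. The Python sketch below implements the three scalings together with the adopted opacity prescription; we take the interpolation to be linear in $X_{\rm La}$ (one natural reading of the prescription above) and clip $\kappa$ to the stated endpoints.

\begin{verbatim}
import numpy as np

def kappa_gray(x_la):
    """Effective gray opacity [cm^2/g]: 0.03 at X_La = 1e-4, rising
    (here linearly in X_La) to 3 at X_La >= 0.2."""
    kap = 0.03 + (3.0 - 0.03) * (x_la - 1e-4) / (0.2 - 1e-4)
    return np.clip(kap, 0.03, 3.0)

def peak_properties(m_ej_msun, v_ej_c, x_rp, x_la):
    """Analytic superKN peak estimates: t_pk [d], L_pk [erg/s], T_eff [K]."""
    kap = kappa_gray(x_la)
    m, v = m_ej_msun / 50.0, v_ej_c / 0.1
    t_pk = 108.0 * m**0.5 * v**-0.5 * kap**0.5
    l_pk = 4e41 * (x_rp / 0.2) * m**0.35 * v**0.65 * kap**-0.65
    t_eff = 900.0 * (x_rp / 0.2)**0.25 * m**-0.16 * v**0.41 * kap**-0.41
    return t_pk, l_pk, t_eff

# A lanthanide-rich fiducial case (kappa = 3 cm^2/g):
print(peak_properties(50.0, 0.1, 0.2, 0.2))  # ~ (187 d, 2e41 erg/s, 570 K)
\end{verbatim}
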
\subsection{SuperKN Light Curves and Spectra}\label{sec:Supernova_radiation_transport_results}
\begin{table*}
\centering
\caption{SuperKN Light Curve Models and Survey Detection Rates}
\label{tab:LCmodels}
\begin{tabular}{cccccccc}
\hline
\multirow{2}{*}{Model} & $M_{\rm ej}$ & $v_{\rm ej}$ & {$M_{\rm Ni}$} & {$M_{\rm lrp}$} & {$X_{\rm La}$} & $R_{\rm Rubin}^{(a)}$ & $R_{\rm Roman}^{(b)}$ \\[1 mm]
 & ($M_\odot$) & ($c$) & ($M_\odot$) & ($M_{\odot}$) & ($10^{-3}$) & (yr$^{-1}$) & (yr$^{-1}$) \\
\hline \hline
A & 8.6 & 0.1 & 0.019 & 0.83 & 1.4 & 0.01 & 0.02 \\
B & 31.0 & 0.1 & 0.012 & 8.28 & 17.0 & 0.03 & 0.4 \\
C & 35.6 & 0.1 & 0.087 & 23.2 & 4.0 & 0.1 & 2 \\
D & 50.0 & 0.1 & 0.53 & 9.59 & 0.53 & 0.1 & 4 \\
E & 60.0 & 0.1 & 0.0 & 5.6 & 0.17 & 0.2 & 0.01 \\
\hline
\end{tabular}
\\$^{(a)},^{(b)}$Detection rates per year by {\it Rubin Observatory} and {\it Roman}, respectively, for an assumed $z = 0$ superKN rate of 10 Gpc$^{-3}$ yr$^{-1}$ (see Sec.~\ref{sec:survey} for details).
\end{table*}
\subsubsection{Model Selection and Parameters}
To elaborate on the estimates of Sec.~\ref{sec:analytic}, we carried out detailed radiation transport simulations for five ejecta models whose properties (\ensuremath{M_{\rm ej}}, \ensuremath{M_{\rm Ni}}, \ensuremath{M_{\rm lrp}}, and $M_{\rm La}$) span the space defined by the subset of simulations that produced BHs within the mass gap ($60 \lesssim M_{\bullet}/\ensuremath{M_\odot} \lesssim 130$), i.e., models that fall between the two cyan contours of Fig.~\ref{fig:Renzo_mass_fractions}. (See also \citealt{woosley:17, farmer:19, renzo:20csm, farmer:20, costa:21, mehta:21}.) The parameters of the mass gap models are largely confined to a plane in \ensuremath{M_{\rm ej}}-\ensuremath{M_{\rm Ni}}-\ensuremath{M_{\rm lrp}}-$M_{\rm La}$ space, making it straightforward to select a handful of characteristic parameters from the full set. We used the KMeans routine of \texttt{sklearn} \citep{scikit-learn} to divide our models into four clusters, and adopted the positions of the cluster centers as four representative super-kilonova models. However, a small fraction of the mass-gap models occupy a distinct region of the parameter space, having large \ensuremath{M_{\rm ej}} but little to no nucleosynthetic products heavier than He. Since these models were not captured by our clusters, we added a fifth model to explore the edge case of a high-mass, nickel-free outflow. Our five models are listed in Tab.~\ref{tab:LCmodels}. For the models of Tab.~\ref{tab:LCmodels}, we performed one-dimensional radiation transport calculations with the Monte Carlo radiation transport code \texttt{Sedona} \citep{kasen_time-dependent_2006,kasen.ea.inprep_sedona.paper}. We adopted for each model a density profile such that the mass external to the velocity coordinate $v$ follows a power law,
\begin{equation}
M_{>v} \propto \left(\frac{v}{v_{\rm min}}\right)^{-\alpha}, \:\: v \geq v_{\rm min}.
\end{equation}
Above, the minimum ejecta velocity $v_{\rm min}$ is determined by the characteristic velocity $v_{\rm ej} = (2E_{\rm kin}/M_{\rm ej})^{1/2}$ (with $E_{\rm kin}$ the ejecta kinetic energy) and the choice of power-law index $\alpha$,
\begin{equation}
v_{\rm min} = \left(\frac{\alpha-2}{\alpha}\right)^{1/2}v_{\rm ej}.
\end{equation}
We take $\alpha=2.5$ and $v_{\rm ej} = 0.1c$ for all models, consistent with predictions of accretion disk outflow velocities \citep[e.g.][]{fernandez_outflows_2015,siegel_collapsars_2019}.
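
As an illustration of this model-selection step, the sketch below applies the same \texttt{sklearn} KMeans procedure to a synthetic stand-in for our model grid (the \texttt{models} array is a placeholder, not our actual data) and evaluates $v_{\rm min}$ for the adopted density profile.

\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Placeholder for the grid of mass-gap models; columns are
# (M_ej, M_Ni, M_lrp, M_La) in Msun -- synthetic numbers for illustration.
models = np.column_stack([
    rng.uniform(10.0, 60.0, 200),   # M_ej
    rng.uniform(0.0, 0.5, 200),     # M_Ni
    rng.uniform(0.5, 25.0, 200),    # M_lrp
    rng.uniform(0.0, 1.0, 200),     # M_La
])

# Four clusters; the cluster centers serve as representative models.
km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(models)
representatives = km.cluster_centers_

def v_min(v_ej, alpha=2.5):
    """Minimum velocity of the profile M(>v) ~ (v/v_min)^-alpha, given the
    characteristic velocity v_ej = (2 E_kin / M_ej)^(1/2)."""
    return np.sqrt((alpha - 2.0) / alpha) * v_ej

print(v_min(0.1))  # ~ 0.045 (units of c) for alpha = 2.5
\end{verbatim}
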
The opacity of the outflowing gas, and therefore the nature of the transients' electromagnetic emission, is sensitive to the abundance pattern in the ejecta. Specifically, lanthanides and actinides, and to a lesser extent elements in the \emph{d}-block of the periodic table, contribute a high opacity, while the opacities of \emph{s}- and \emph{p}-block elements are significantly lower \citep{Kasen+13,Tanaka+20}. In this work, we predict the synthesis of helium, \iso{Ni}{56}, and light and heavy \emph{r}-process{} material, but do not carry out detailed nucleosynthesis calculations, e.g., by post-processing fluid trajectories. The composition of each model is then solely a function of its \ensuremath{M_{\rm ej}}, \ensuremath{M_{\rm Ni}}, \ensuremath{M_{\rm lrp}}, and \ensuremath{X_{\rm La}}. We assume that heavy ($A>136$) \emph{r}-process{} material is 41\% lanthanides and actinides by mass, equal to the solar value of $M_{\rm La}/M_{A>136}$. The remainder is split between \emph{d}-block and \emph{s}/\emph{p}-block elements (54\% and 5\% by mass, respectively). For light \emph{r}-process{} material, $\ensuremath{X_{\rm La}}=0$; we assume it comprises 95\% \emph{d}-block and 5\% \emph{s}-/\emph{p}-block elements by mass. The composition adopted for our radiation transport models is limited by both our imperfect knowledge of the details of nucleosynthesis and incomplete atomic data of the sort necessary to calculate photon opacities in the ejecta. The lanthanide and actinide mass ($M_{\rm La}$) is divided among the lanthanide elements following the solar pattern, with one adjustment: because the required atomic data are not available for atomic number $Z=71$, we redistribute the solar mass fraction of $Z=71$ to $Z=70$. Atomic data are also unavailable for most of the \emph{d}-block elements produced by the \emph{r}-process{} (whether heavy or light). We thus distribute the \emph{d}-block mass evenly among elements with $Z=21-28$ (excluding $Z=23$ for lack of data), artificially increasing the mass numbers to $A \sim 90$ to avoid overestimating the ion number density. All \emph{r}-process{} \emph{s}- and \emph{p}-block material is modeled by the low-opacity filler Ca ($Z=20$). $^{4}$He and \iso{Ni}{56} (as well as its daughter products \iso{Co}{56} and \iso{Fe}{56}) are straightforward to incorporate into the composition. Our radiation transport simulations include radioactivity from both the \iso{Ni}{56} decay chain and from the \emph{r}-process. We explicitly track energy loss by $\gamma$-rays from \iso{Ni}{56} and \iso{Co}{56}, and assume that positrons from \iso{Co}{56} decay thermalize immediately upon production. We model \emph{r}-process{} radioactivity using the results of \citet{lippuner_r-process_2015} for an outflow with $(Y_{\rm e}, s_{\rm B}, \tau_{\rm exp})=(0.13, 32\,k_{\rm B}, 0.84\;\text{ms})$, with $s_{\rm B}$ the initial entropy per baryon and $\tau_{\rm exp}$ the expansion timescale. To account for thermalization, we adjust the absolute radioactive heating rate following the analytic prescription of \citet{barnes_radioactivity_2016}.
\subsubsection{Radiation Transport Results}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{lcbol_sncomp.pdf}
\caption{The bolometric light curves of the models in Tab.~\ref{tab:LCmodels}, compared to prototypical SNe 2011fe (Type Ia), 2002ap (Type Ic-bl), 2013ab (Type IIp), and 2018zd (electron-capture). The superKN light curves are dimmer than SNe Ia, but at some epochs can approximate the light curves of SNe Type Ic-bl and Type IIp.
}
\label{fig:sn_bollc}
\end{figure}
The bolometric light curves of models A through E are presented in Fig.~\ref{fig:sn_bollc}. For comparison, we also show the light curves of typical SNe of various subtypes: the Type Ia SN 2011fe \citep{Tsvetkov.ea_2013.CoSka_2011fe.obs}, the Type Ic-bl SN 2002ap \citep{Tomita.ea_2006.ApJ_sn2002ap.lcs}, the Type IIp SN 2013ab \citep{Bose.ea_2015MNRAS_2013ab.iip}, and the electron-capture SN 2018zd \citep{Hiramatsu.ea_2021.Nature_ECsn2018dz}. The superKN light curves exhibit considerable diversity, which is not surprising given the large ranges of ejecta and radioactive masses these systems may produce. As would be expected from simple Arnett-style \citep{arnett_type_1982} arguments, higher masses are generally associated with longer light-curve durations. This can be seen in the progression from model A to model D. However, as model E demonstrates, the shape of the light curve also depends on the presence of \iso{Ni}{56} in the ejecta. While the mass of \emph{r}-process{} material burned in superKN outflows greatly exceeds that of \iso{Ni}{56}, the energy generated by the \iso{Ni}{56} decay chain, per unit mass, exceeds that of \emph{r}-process{} decay by orders of magnitude (e.g., \citealt{Metzger+10, siegel_collapsars_2019}). When \iso{Ni}{56} is present, it can be the main source of radiation energy for the transient. As a result of the long half-life of the \iso{Ni}{56} daughter \iso{Co}{56} ($\tau_{1/2}^{\rm Co} \approx 77$ days), the energy generation rate of \iso{Ni}{56}-producing systems declines only slowly around the time the light curves reach their maxima. The effect is a more extended light curve (see \citealt{Khatami.Kasen_2019ApJ_lc.relations} and \citealt{Barnes.ea_2021.ApJ_nuc.kne} for more detailed discussions). Model E, which produces no \iso{Ni}{56}, has a relatively short (${\sim}$month) duration, despite its high mass ($\ensuremath{M_{\rm ej}} = 60\,\ensuremath{M_\odot}$), owing to the steep decline of the \emph{r}-process{} radioactivity that is its only source of energy. The qualitative difference between models that burn even small amounts of \iso{Ni}{56} and models that burn none points to the importance of a careful treatment of nucleosynthesis in disk outflows. As is apparent from Fig.~\ref{fig:sn_bollc}, the diversity of superKN light curves allows them to mimic other types of SNe. While superKNe do not produce sufficient \iso{Ni}{56} to approach the luminosity of SNe Ia, they can, at various epochs, mimic the bolometric light curves of SNe Ic-bl and SNe IIp, as well as of electron-capture SNe. However, the high opacity of the \emph{r}-process-enriched ejecta pushes the superKN emission to redder wavelengths than observed for other classes of SNe. This is illustrated in Fig.~\ref{fig:sn_pkspec}, which shows the normalized spectra of models A through E at bolometric peak. Unlike in other types of SNe, most of the superKN flux emerges at near- and even mid-infrared wavelengths. This is likely due to a combination of lower radioactive heating per unit ejecta mass and the high opacity from \emph{r}-process{} elements (particularly lanthanides and actinides), which, together with the high \ensuremath{M_{\rm ej}}, work in concert to increase the optical depth across the ejecta and push the photosphere out to the exterior, where temperatures are cooler. A second distinguishing feature of superKNe is their broad absorption features.
These are a product of our assumed ejecta velocities ($v_{\rm ej} = 0.1c$), which are higher than those inferred for all supernovae other than the hyper-energetic SNe Ic-bl. While SNe Ic-bl produce spectra with similarly broad absorption features, these features are found at much bluer wavelengths (4000\,\AA\ $\lesssim \lambda \lesssim 8000$\,\AA). Thus, despite their bolometric similarities, superKNe are spectroscopically unique among SNe. The peak photospheric temperatures of superKNe ($\sim 1000$ K) are also close to those required for solid condensation, suggesting the possibility of dust formation in the ejecta (e.g., \citealt{Takami+14,Gall+17}). Insofar as the optical/NIR opacity of ${\sim}\mu$m-sized dust is roughly comparable to that of lanthanide-enriched ejecta, dust formation would not qualitatively impact the appearance of the transient. However, this does imply a potential degeneracy between the photometric signatures of superKNe and other dust-enshrouded explosions unrelated to $r$-process production, including stellar mergers (e.g., \citealt{Kasliwal+17}). This degeneracy with dusty transients can generally be broken by the predicted broad spectral features of superKNe ($v_{\rm ej} \sim 0.1c$).
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{model_pkFlam.pdf}
\caption{The flux per unit wavelength at bolometric peak for each of the five models defined in Tab.~\ref{tab:LCmodels}. All spectra have broad absorption features consistent with a high-velocity outflow, and a low-temperature, pseudo-blackbody spectrum, consistent with a high-opacity composition. These spectra distinguish superKNe from other classes of SNe, which are much bluer, and from other dust-enshrouded explosions, in which broad absorption features are absent.}
\label{fig:sn_pkspec}
\end{figure}
\section{Discovery Prospects}
\label{sec:discovery}
In this section we explore the discovery prospects of superKNe with future optical/infrared transient surveys and via late-time infrared follow-up observations of energetic long GRBs. We then discuss how superKN emission could be enhanced by circumstellar interaction for collapsars embedded in AGN disks.
\subsection{Volumetric Rates}
\label{sec:rates}
We begin by estimating the volumetric rate of superKNe. One approach is to scale from the observed rates of ordinary collapsars. The local (redshift $z \simeq 0$) volumetric rate of classical long GRBs is $\approx 0.6-2$ Gpc$^{-3}$ yr$^{-1}$ \citep{wanderman_luminosity_2010}, which for an assumed gamma-ray beaming fraction $f_{\rm b} = 0.006$ \citep{goldstein_estimating_2016} corresponds to a total collapsar rate of $\approx 100-300$ Gpc$^{-3}$ yr$^{-1}$. If ordinary collapsars originate from stars of initial mass $M_{\rm ZAMS} \gtrsim 40 M_{\odot}$, then the more massive stars with $M_{\rm ZAMS} \gtrsim 250M_{\odot}$, which generate helium core masses above the PI mass gap ($M_\mathrm{BH}\gtrsim 130M_{\odot}$), will be less common by at least a factor $\sim (40/250)^{\alpha-1} \sim 0.1-0.3$ for an initial mass function (IMF) $dN_{\star}/dM_{\star} \propto M_{\star}^{-\alpha}$, where we consider values of the power-law index between $\alpha = 2.35$ for a Salpeter IMF and a shallower value $\alpha \approx 1.8$ \citep{Schneider+18b}.
This optimistically assumes (i) that stars this massive exist \citep[e.g.,][]{dekoter:97,crowther:16}, and (ii) that these can form helium cores with $M_\mathrm{He}\simeq M_\mathrm{ZAMS}$, for instance because of rotational mixing \citep[e.g.,][]{maeder:00, marchant:16, demink:16} or continuous accretion of gas \citep[e.g.,][]{Jermyn+21, dittmann:21}. Various processes act to remove mass from a massive star during its evolution, and generally the more massive the star, the larger its mass-loss rate. Some of these mechanisms (e.g., continuum-driven stellar winds and eruptive mass-loss phenomena; see also \citealt{renzo:20merger}) might operate even at low metallicity. With the above estimate and caveats, we obtain an optimistic maximum local rate of superKNe from massive collapsars of $\sim 10-100$ Gpc$^{-3}$ yr$^{-1}$. On the other hand, the long GRB rate increases with redshift in rough proportion to the cosmic star-formation rate (SFR $\propto (1+z)^{3.4}$ for $z \lesssim 1$; e.g., \citealt{Yuksel+08}), and hence the maximum rate of superKNe is larger at redshift $z \gtrsim 1$ by a factor $\sim 10$ than at $z \simeq 0$, corresponding to a maximum superKN rate of $\sim 100-1000$ Gpc$^{-3}$ yr$^{-1}$ at $z \gtrsim 1$. The superKN rate question can also be approached from another perspective: what is the minimum birth rate of BHs in the PI mass gap needed to explain GW190521{}-like merger events (Sec.~\ref{sec:GW190521}) via the massive collapsar channel? The rate of GW190521{}-like mergers at $z \simeq 0$ was estimated by LIGO/Virgo to be $\sim 0.5-1$ Gpc$^{-3}$ yr$^{-1}$ \citep{Abbott+20_190521}. This rate is smaller than the maximum superKN rate estimated above, consistent with only a small fraction of BHs formed through this channel ending up in tight binaries that merge due to gravitational waves at $z \approx 0$.
\subsection{Discovery with Optical/Infrared Surveys}
\label{sec:survey}
We now evaluate the prospects for discovering superKNe with impending wide-field optical/infrared surveys. First, we explore the expected observable rates within the Legacy Survey of Space and Time (LSST) conducted with the {\it Vera Rubin Observatory}. LSST is currently set to commence in early 2024 and will explore the southern sky at optical wavelengths to a $5\sigma$ stacked nightly visit depth of $\sim24$ mag. We inject the \texttt{Sedona} light curves of the models described in Tab.~\ref{tab:LCmodels} into the publicly available LSST operations simulator, OpSim \citep{delgado2016lsst}. We use the most recent baseline scheduler ({\tt baseline v1.7}) to calculate LSST pointings, limiting magnitudes, and expected sky noise across a full simulated 10-year survey in $ugrizY$ bands. We additionally apply dust reddening following the dust maps of \citet{schlegel1998maps}. For each model, we inject a superKN randomly 300 times within the full LSST simulation (including both the wide-fast-deep survey and deep-drilling fields) in redshift bins of width 0.01. We find that superKNe discovered with LSST are confined to the local universe, with $z<0.1$. Assuming that the superKN rate traces star formation with a local rate of 10 Gpc$^{-3}$ yr$^{-1}$, we expect LSST to discover $\sim 0 - 0.2$ superKNe annually, resulting in up to $2$ events over its nominal 10-year duration. We note that detection with LSST is most likely for models with larger ejecta masses (i.e., Models B and E).
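
As a rough cross-check on these simulated yields, a simple volume-limited estimate (Euclidean volume, no cadence, depth, or reddening losses; all numbers illustrative) bounds the detection rate from above; the full OpSim injections fold in those losses and therefore yield fewer events.

\begin{verbatim}
import numpy as np

def max_detections_per_year(rate_gpc3_yr, d_max_gpc, f_sky):
    """Upper bound on the annual detection rate: volumetric rate times
    the volume within the horizon times the surveyed sky fraction."""
    v_max = (4.0 / 3.0) * np.pi * d_max_gpc**3   # Gpc^3
    return rate_gpc3_yr * v_max * f_sky

# Illustrative: 10 / Gpc^3 / yr, a z ~ 0.1 horizon (~0.45 Gpc comoving),
# and roughly half the sky for an LSST-like survey.
print(max_detections_per_year(10.0, 0.45, 0.5))  # ~ 2 per year (upper bound)
\end{verbatim}
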
Given the expected red colors of the superKN emission (Fig.~\ref{fig:sn_pkspec}), we additionally explore the possibility of discovering superKNe with the \textit{Nancy Grace Roman Space Telescope}, expected to launch in the mid-2020s. Although its design is not yet fully defined, \textit{Roman} is expected to conduct a $\sim$5-year, 10 deg$^2$ SN survey, primarily targeted at Type Ia SNe for cosmological distance measurements. We assume a survey cadence of 30 days and a stacked single-visit $5\sigma$ depth of 27 mag, corresponding to roughly an hour of integration time (in the F158 band). We inject the same set of models using the \textit{Roman} F062, F158, and F184 filters, corresponding to central wavelengths of 0.62, 1.58, and 1.83 $\mu$m, respectively. We assume observations are taken in each filter at the same epoch, and consider superKNe with three or more $3\sigma$ detections to be detectable. Assuming the \textit{Roman} wide-field survey footprint is chosen to minimize galactic dust, we do not account for any galactic reddening. We find that \textit{Roman} is most sensitive to models with the largest lanthanide fractions. Assuming that the superKN rate traces star formation with a local rate of 10 Gpc$^{-3}$ yr$^{-1}$, we expect that a 5-year \textit{Roman} survey as described would find roughly 1--20 superKNe, with the lanthanide-rich Model B most favored. These superKNe would be observable out to a redshift of $z\sim0.9$. We note that longer cadences significantly decrease the number of superKN detections possible with \textit{Roman}, at least within the 3-detection discovery criterion we have adopted.
\subsection{Energetic Long GRBs Accompanied by SuperKNe}
\label{sec:GRB}
SuperKNe could also be detected following a subset of long GRBs. Figure \ref{fig:Renzo_GRB_accretion} summarizes the GRB properties for our fiducial massive collapsar model \texttt{250.25}. We find accretion timescales comparable to those of ordinary collapsars from lower-mass progenitor stars (Appendix \ref{app:collapsars}). These mass gap collapsars are therefore candidates for contributing to the observed population of long GRBs, except that they may be a factor of $\sim\!10$ more luminous and energetic than typical GRBs if the gamma-ray luminosity tracks the mass accreted by the BH. Furthermore, if the fraction of massive stars above the PI mass gap which form or evolve into collapsar progenitors is greater at lower metallicity, this could imprint itself on the redshift evolution of the long GRB luminosity function (for which there is claimed evidence; \citealt{Petrosian+15,Sun+15,Pescalli+16}). In the local universe, long GRBs with detected supernovae are accompanied by luminous, hyper-energetic Type Ic SNe with broad lines (Ic-bl; e.g., \citealt{Woosley&Bloom06, japelj:18, modjaz:20}). The superKN transients we predict from the birth of more massive BHs have peak luminosities comparable to or moderately lower than those of ordinary collapsar SNe (e.g., \citealt{Cano16}), but are significantly redder (Figs.~\ref{fig:sn_bollc}, \ref{fig:sn_pkspec}). Luminous optical SNe have been ruled out for a few nominally long-duration GRBs (\citealt{Fynbo+06,Gehrels+06}). One of these events, GRB 060614, was found to exhibit a red excess which \citet{jin2015light} interpreted as a kilonova. However, the luminosity and timescale of the excess could also be consistent with superKN emission from a massive collapsar of the type described here.
We encourage future deep infrared follow-up observations of energetic long GRBs with {\it Roman} or {\it JWST} on timescales of weeks to months after the burst to search for infrared superKN emission.
\subsection{SuperKNe Embedded in AGN Disks}
\label{sec:AGN}
The optical emission from superKNe could be significantly enhanced by circumstellar interaction if they are embedded in a gas-rich environment. \citet{Graham+20} reported a candidate optical-wavelength counterpart to GW190521{} in the form of a flare from an active galactic nucleus (AGN). The flare reached a peak luminosity $L_{\rm pk} \sim 10^{45}$ erg s$^{-1}$ in excess of the nominal level of AGN emission and lasted for a timescale $t_{\rm pk} \sim 50$ days, over which it radiated a total energy of $E_{\rm rad} \sim 10^{51}$ erg. \citet{shibata_alternative_2021} propose a scenario in which GW190521{} arises from a massive stellar core collapse generating a single BH and a massive accretion disk of $\gtrsim 10-50M_{\odot}$, rather than from a binary BH merger. Although our results in Secs.~\ref{sec:GW} and~\ref{sec:GW190521} challenge this interpretation, our present work shows that a prediction of this scenario is a superKN counterpart with $M_{\rm ej} \sim 3-20M_{\odot}$ and $v_{\rm ej} \sim 0.1$ c. Though the predicted peak timescale, $t_{\rm pk} \sim 50$ days (Eq.~\ref{eq:tpk}), of the superKN emission roughly agrees with that observed by \citet{Graham+20}, the luminosity powered by radioactivity, $\sim 10^{42}$ erg s$^{-1}$ (Fig.~\ref{fig:sn_bollc}), falls short of the observations by several orders of magnitude. This problem could be alleviated if the collapsing star is embedded in a dense gaseous AGN disk (e.g., \citealt{Jermyn+21, dittmann:21}). If the density of the AGN disk at the star's location is sufficiently high, $\rho\gtrsim10^{-15}\,\mathrm{g\ cm^{-3}}$, runaway accretion of mass might help build up very massive and rapidly rotating helium cores. The mass accretion might be interrupted as the AGN turns off (on a few-Myr timescale), and, depending on the balance between mass-loss processes and the previous accretion phase, one might obtain a superKN progenitor. At its collapse, the shock-mediated collision between the superKN ejecta and the surrounding disk material could power a more luminous optical signal than radioactive decay alone, akin to interaction-powered super-luminous SNe (e.g., \citealt{Smith+07}). Given the large kinetic energy of the superKN ejecta, $E_{\rm kin} \sim M_{\rm ej}v_{\rm ej}^{2}/2 \sim 1-5\times 10^{53}$ erg, the \citet{Graham+20} transient could be powered by tapping only $\sim 1\%$ of $E_{\rm kin}$ via shock deceleration. Insofar as such luminous shocks are radiative and momentum-conserving, the swept-up gaseous mass in the AGN disk required to dissipate $E_{\rm rad} \sim 10^{51}$ erg is only $M_{\rm sw} \sim (E_{\rm rad}/E_{\rm kin})M_{\rm ej} \sim 0.1-1M_{\odot}$. Treating the swept-up material as approximately spherical and expanding at $\sim v_{\rm ej}$, the optical diffusion time through $M_{\rm sw}$ is (cf.~Eq.~\ref{eq:tpk})
\begin{equation}
t_{\rm pk,min} \approx 5\,{\rm d}\left(\frac{M_{\rm sw}}{0.3M_{\odot}}\right)^\frac{1}{2}\mskip-5mu\left(\frac{v_{\rm ej}}{0.1c}\right)^{-\frac{1}{2}}\mskip-5mu\left(\frac{\kappa}{0.3\,{\rm cm^{2}\,g^{-1}}}\right)^\frac{1}{2},
\end{equation}
where $\kappa$ is now normalized to a value more appropriate to AGN disk material.
Given that $t_{\rm pk,min}$ is significantly shorter than the observed $\sim 50$ d rise time of the \citet{Graham+20} counterpart, the rise of the putative counterpart would instead need to be limited by photon diffusion through the unshocked external AGN disk material (e.g.,~\citealt{Graham+20,Perna+21}). A bigger challenge for this scenario is the much closer source distance predicted for GW190521{} if it resulted from a self-gravitating collapsar disk instead of a binary BH merger (redshift $z \lesssim 0.05$; Sec.~\ref{sec:GW}), compared to that of the AGN identified by \citet{Graham+20} at redshift $z = 0.438$.
\section{Other Observable Implications}
\label{sec:implications}
\subsection{Luminous Slow Radio Transients}
\label{sec:radio}
In addition to their prompt optical/IR signal, superKNe produce synchrotron radio emission as the ejecta decelerate by driving a shock into the circumburst medium (e.g., \citealt{nakar_detectable_2011,Metzger&Bower14}). This emission can be particularly luminous because the kinetic energy of the superKN ejecta, $E_{\rm kin} \approx 1-5\times 10^{53}$ erg, can be one to two orders of magnitude higher than that of ordinary collapsar SNe. The radio transient rises on the timescale required for the ejecta to sweep up a mass comparable to their own,
\begin{equation}
t_{\rm radio} \approx 200\,{\rm yr}\left(\frac{E_{\rm kin}}{5\times 10^{53}\,\rm erg}\right)^{\frac{1}{3}}\mskip-5mu \left(\frac{v_{\rm ej}}{0.1c}\right)^{-\frac{5}{3}}\mskip-5mu\left(\frac{n}{\rm 1\,cm^{-3}}\right)^{-\frac{1}{3}},
\end{equation}
where $n$ is the particle density of the external medium. The peak luminosity at a frequency $\nu = 1$ GHz can be estimated as (e.g., \citealt{nakar_detectable_2011})
\begin{eqnarray}
\nu L_{\nu} &\approx& 5\times 10^{39}\,{\rm erg\,s^{-1}}\left(\frac{E_{\rm kin}}{5\times 10^{53}\,\rm erg}\right)\left(\frac{v_{\rm ej}}{0.1c}\right)^{2.3}\nonumber \\
&& \times \left(\frac{n}{\rm 1\,cm^{-3}}\right)^{0.83}\left(\frac{\epsilon_e}{0.1}\right)^{1.3}\left(\frac{\epsilon_{B}}{0.01}\right),
\end{eqnarray}
where the fractions of the shock energy placed into relativistic electrons ($\epsilon_{e}$) and magnetic fields ($\epsilon_{B}$) are normalized to characteristic values, and we have assumed a power-law index $p = 2.3$ for the energy distribution of the shock-accelerated electrons, $dN/dE \propto E^{-p}$. For characteristic circumstellar densities $n \sim 0.1-10$ cm$^{-3}$, the peak radio luminosity is comparable to that of rare energetic transients, such as those from binary neutron star mergers that generate stable magnetar remnants (e.g., \citealt{Metzger&Bower14,Schroeder+20}). However, the predicted timescale of the radio evolution, decades to centuries, is much longer in the superKN case due to the large ejecta mass. This slow evolution makes it challenging to uniquely associate the radio source with a known GRB or gravitational wave event, or to even identify it as a transient in radio time-domain surveys (e.g., \citealt{Metzger+15}). We note that luminous radio point sources are in fact common in the types of dwarf galaxies which host collapsars (e.g., \citealt{Eftekhari+20}). \citet{Ofek17} places an upper limit of $\mathcal{N} \lesssim 5\times 10^{4}$ Gpc$^{-3}$ on the local volumetric density of persistent radio sources of luminosity $\gtrsim 3\times 10^{38}$ erg s$^{-1}$ in dwarf galaxies.
Assuming the superKN radio emission remains above this luminosity threshold for a time $t_{\rm det} \sim 10t_{\rm radio} \sim 10^{3}$ yr, this constrains the local rate of superKNe to be $\lesssim 10-100$ Gpc$^{-3}$ yr$^{-1}$, consistent with the estimates given in Sec.~\ref{sec:rates}.
\subsection{Gravitational Wave Emission}
\label{sec:GW}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{t_start_spiral_wave_alpha=0.05_p_exp_4.5.pdf}
\includegraphics[width=0.98\linewidth]{dt_spiral_wave_alpha=0.05_p_exp_4.5.pdf}
\caption{Time after core collapse at which gravitational instabilities in the collapsar disk first set in (top panel) and the duration over which gravitational instabilities are continuously excited during the fallback process (bottom panel), shown in the space of $\{f_{\rm K},r_{\rm b}\}$ for the fiducial progenitor model \texttt{250.25}. White space indicates models that do not experience gravitational instabilities during fallback accretion. Red contours indicate the inferred primary mass of GW190521{} [$M_\odot$], together with its 90\% confidence limits. Cyan contours delineate final BH masses of 60\,$M_\odot$ and 130\,$M_\odot$, which approximately correspond to the lower and upper end of the PI mass gap.}
\label{fig:gravitational_instabilities}
\end{figure}
The accretion disks formed in superKN collapsars may become susceptible to gravitational instabilities if their disk mass approaches an order-unity fraction of the BH mass during the fallback process (Sec.~\ref{sec:fallback}). As shown in Fig.~\ref{fig:gravitational_instabilities}, only progenitor cores with high angular momentum (small $r_{\rm b}$ and/or high $f_{\rm K}$) lead to fallback accretion that results in gravitational instabilities. Low-angular-momentum cores instead form heavier BHs with relatively smaller disk masses. The onset time of the instability, typically a few seconds (Fig.~\ref{fig:gravitational_instabilities}) for all superKN progenitor models investigated here, is determined by the progenitor structure, its rotation profile, and the free-fall timescale. Once triggered, subsequent fallback material collapsing onto the disk continues to excite these instabilities in the collapsar disk for a timescale of seconds to hundreds of seconds (Fig.~\ref{fig:gravitational_instabilities}), until viscous draining of the disk becomes fast compared to the free-fall timescale of the remaining outer layers of the progenitor star (roughly $\sim\!10$\,s for our fiducial model in Fig.~\ref{fig:Renzo_evolution}). The onset of the instability manifests itself as the exponential growth of a non-axisymmetric one-armed ($m=1$) density mode in the disk with a growth time on the order of the orbital period of the disk, typically followed by exponential growth of an $m=2$ mode (e.g., \citealt{kiuchi_nonaxisymmetric_2011,shibata_alternative_2021,wessel_gravitational_2021}). These non-axisymmetric density perturbations give rise to gravitational-wave emission with dominant frequencies at the orbital frequency and at twice the orbital frequency, respectively (e.g., \citealt{wessel_gravitational_2021}). As long as further fallback keeps the disk in the instability regime defined by Eq.~\eqref{eq:gravitational_instability}, we assume that the dominant gravitational-wave frequencies of these modes are determined by the evolving angular frequency $\Omega_{\rm K, disk}$ of the disk (Eq.~\ref{eq:Omega_disk}) with radius $r_{\rm disk}(t)$ (Sec.~\ref{sec:fallback}).
Since $r_{\rm disk}(t)$ monotonically increases with time as the black hole grows and material with larger specific angular momentum enters the disk, the gravitational-wave frequency decreases, sweeping down at a rate and with an amplitude that depend on the density and angular momentum structure of the progenitor star's envelope. The gravitational-wave signal thus exhibits a ``sad-trombone'' pattern in the time-frequency spectrogram, as opposed to the ``chirp'' signal generally associated with gravitational waves from compact binary mergers. Examples of the frequency evolution of the disk for different mass models and for high and low specific angular momentum of the progenitor envelope are shown in Fig.~\ref{fig:gravitational_wave_frequency_evolution}. Over a large range of the parameter space and progenitor models explored here, superKN collapsars are strong emitters of quasi-monochromatic gravitational waves of duration $\sim\!1-100$\,s with a decreasing frequency trend (between $\sim\!0.1-40$\,Hz for the $l=m=2$ mode and $\sim\!\text{few}\times 10^{-2}-25$\,Hz for the $l=2$, $m=1$ mode) characteristic of their progenitor stellar structure (see Figs.~\ref{fig:gravitational_instabilities} and \ref{fig:gravitational_wave_frequency} for a representative example). If detected, such gravitational-wave signals could reveal information about the rotation profiles of, and angular momentum transport in, evolved massive stars. The ``sad-trombone'' feature, traced simultaneously by typically two dominant modes separated in frequency space by the instantaneous characteristic disk rotation frequency, may prove useful in searching for and detecting such sources with gravitational-wave detectors.
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{GW_low_Omega_evolution.pdf}
\includegraphics[width=0.98\linewidth]{GW_high_Omega_evolution.pdf}
\caption{Disk frequency evolution (Eq.~\ref{eq:Omega_disk}) for three progenitor models (\texttt{250.25}, \texttt{220.25}, \texttt{200.25}) with rotation parameters $p=4.5$, $r_{\rm b}= 1.5\times10^{9}$\,cm, and overall small ($f_{\rm K} = 0.3$; top) or large ($f_{\rm K}=0.6$; bottom) Keplerian angular momentum parameter. Plotted is twice the orbital angular frequency, which corresponds to the gravitational-wave frequency of the $m=2$ density mode of the gravitationally unstable disk. The frequency evolution is largely controlled by $f_{\rm K}$, with all models reflecting the ``sad-trombone'' nature of the gravitational-wave signal.}
\label{fig:gravitational_wave_frequency_evolution}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{h_comb_alpha=0.05_p_exp=4.5_j_shell=0.3j_kep_R_shell=1.5e+09.pdf}
\caption{Plus and cross polarization strain amplitudes of the $l=m=2$ mode of gravitational waves resulting from the gravitationally unstable collapsar disk of the fiducial model shown in Fig.~\ref{fig:Renzo_evolution} with $p=4.5$, $f_{\rm K}=0.3$, and $r_{\rm b}=1.5\times 10^{9}$\,cm, assuming a face-on orientation of the accretion disk ($\iota=0$). The emission starts a few seconds after the onset of collapse and persists for several seconds, until viscous draining of the disk dominates fallback accretion and the disk becomes stable again around 9\,s after the onset of collapse.}
\label{fig:gravitational_wave_strain}
\end{figure}
We calculate the strain of the emitted gravitational waves as described in Appendix \ref{app:GW_emission}.
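
A minimal sketch of how the mode frequencies follow from the disk evolution: for a Keplerian disk, the gravitational-wave frequencies of the $m=1$ and $m=2$ modes are one and two times the orbital frequency set by $\Omega_{\rm K,disk}$ (Eq.~\ref{eq:Omega_disk}); the BH mass and disk radius below are illustrative.

\begin{verbatim}
import numpy as np

G = 6.674e-8      # gravitational constant [cgs]
MSUN = 1.989e33   # solar mass [g]

def gw_mode_frequencies(m_bh_msun, r_disk_cm):
    """GW frequencies [Hz] of the m=1 and m=2 density modes, taken as one
    and two times the Keplerian orbital frequency at the disk radius."""
    omega_k = np.sqrt(G * m_bh_msun * MSUN / r_disk_cm**3)
    f_orb = omega_k / (2.0 * np.pi)
    return f_orb, 2.0 * f_orb

# Illustrative: a 100 Msun BH with a compact disk at 1e8 cm gives
# (~18, ~37) Hz; as r_disk(t) grows, both frequencies sweep downward.
print(gw_mode_frequencies(100.0, 1.0e8))
\end{verbatim}
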
Figures \ref{fig:gravitational_wave_strain}--\ref{fig:gravitational_wave_frequency_sensitivity_diff_rotation} present results for the gravitational-wave emission, evaluated for a typical distance of 200\,Mpc, at which superKN events are expected to occur once every $\sim\!3$ years for our fiducial local superKN rate of 10 Gpc$^{-3}$ yr$^{-1}$ (Sec.~\ref{sec:rates}). Figure~\ref{fig:gravitational_wave_strain} shows the time evolution of the plus and cross polarization strain calculated for the fiducial progenitor model (Fig.~\ref{fig:Renzo_evolution}), assuming a face-on orientation of the collapsar disk ($\iota=0$). The maximum characteristic strain $h_c$ (typically $h_c\sim 10^{-24}-10^{-22}$) and the frequency range of the gravitational-wave emission vary considerably across the $\{f_{\rm K},r_{\rm b}\}$ parameter space (Figs.~\ref{fig:gravitational_wave_frequency} and \ref{fig:gravitational_wave_strain_contour}, Appendix \ref{app:GW_emission}). SuperKN collapsars are multi-band gravitational-wave sources. Figures \ref{fig:gravitational_wave_frequency_sensitivity} and \ref{fig:gravitational_wave_frequency_sensitivity_diff_rotation} compare the gravitational-wave signal in frequency space to the sensitivity of Advanced LIGO (aLIGO), {\it Cosmic Explorer} (CE), the {\it Einstein Telescope} (ET), the {\it DECi-hertz Interferometer Gravitational wave Observatory} (DECIGO), and the {\it Big Bang Observer} (BBO). Gravitational-wave emission typically starts at a few tens of Hz in the frequency band of aLIGO, CE, and ET, and subsequently drifts into the decihertz regime of DECIGO and BBO as the disk expands. The relative strain amplitude in these two different bands encodes information about the total mass and mass profile of the progenitors (Fig.~\ref{fig:gravitational_wave_frequency_sensitivity}). Lighter progenitors typically give rise to louder gravitational-wave signals over a narrower frequency band for the same rotation profile.
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{GW_sensitivity_different_masses.pdf}
\caption{Amplitude spectral density (ASD) of gravitational-wave emission from the collapsar disk, shown for three progenitor models (\texttt{250.25}, \texttt{220.25}, \texttt{200.25}) and stellar rotation parameters $p=4.5$, $f_{\rm K}=0.3$, $r_{\rm b}=1.5\times 10^{9}$\,cm at an assumed source distance of 200\,Mpc. The shaded region for each curve shows the unphysical frequency regime above the maximum disk frequency as plotted in Fig.~\ref{fig:gravitational_wave_frequency_evolution}, which is ignored in the SNR calculations. Shown for comparison are the measured or predicted noise curves for aLIGO, CE, ET, DECIGO, and BBO, with sensitivity curve data from \url{https://dcc.ligo.org/LIGO-T1500293/public} and \citet{DECIGO_ASD}.}
\label{fig:gravitational_wave_frequency_sensitivity}
\end{figure}
The overall magnitude of the amplitude spectral density is largely determined by the progenitor angular momentum, as illustrated in Fig.~\ref{fig:gravitational_wave_frequency_sensitivity_diff_rotation}. In the limit of high angular momentum (large values of the parameter $f_{\rm K}$) for fixed $r_{\rm b}$, the instability and gravitational-wave emission are triggered earlier than for smaller values of $f_{\rm K}$ (cf.~Fig.~\ref{fig:gravitational_instabilities}). This is because matter deposition into the disk at early times is enhanced (rather than direct fallback onto the black hole). Under these conditions, the gravitational-wave signal is relatively weak due to the small disk and BH mass.
Owing to enhanced viscosity and enhanced accretion during the instability epoch, disks that become unstable early on tend to stay relatively light; the gravitational-wave signal thus remains relatively weak throughout the fallback process. As a result, these signals tend to peak late, and thus in the decihertz regime, where they may only be detectable at Mpc distances. A non-detection in the high-frequency band may thus be indicative of the angular momentum budget of the progenitor star. In the opposite limit of low angular momentum (small values of the parameter $f_{\rm K}$ and large $r_{\rm b}$), the accretion disk may never become susceptible to the instability, and gravitational-wave emission may be negligible (cf.~Fig.~\ref{fig:gravitational_instabilities}). Hence, there exists an intermediate regime of progenitor angular momentum (intermediate values of $f_{\rm K}$) in which the gravitational wave strain becomes maximal. For the given parameters of our fiducial progenitor model, this optimum is reached for $f_{\rm K}\approx 0.2$, which is also reflected by the detection horizons (Figs.~\ref{fig:gravitational_wave_detection_horizon}, \ref{fig:gravitational_wave_detection_horizon_2}). We calculate a detection horizon for these events assuming an optimal matched filter and a signal-to-noise ratio (SNR) of 8 (see Appendix \ref{app:GW_emission} for details). We find detection horizons of $\sim$5\,Mpc (aLIGO), $\sim$300\,Mpc (ET), $\sim$250\,Mpc (CE), and $\sim$425\,Mpc (DECIGO) for our fiducial model \texttt{250.25} with $f_{\rm K}=0.3$ and $r_{\rm b}=1.5\times 10^9$\,cm. A parameter space study of the detection horizons is presented in Fig.~\ref{fig:gravitational_wave_detection_horizon}, showing that third-generation detectors (ET, CE) as well as DECIGO are able to detect gravitational waves from superKN collapsars at distances of typically a few hundred Mpc up to a few Gpc. Detection horizons for aLIGO are typically limited to $\lesssim 100$\,Mpc (Fig.~\ref{fig:gravitational_wave_detection_horizon_2}, Appendix~\ref{app:GW_emission}). BBO will be particularly sensitive to the lowest-frequency sources with low angular momentum in the progenitor `core' (medium to large values of $r_{\rm b}$) and will typically reach several hundred Mpc to several Gpc (Fig.~\ref{fig:gravitational_wave_detection_horizon_2}, Appendix~\ref{app:GW_emission}).
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{CE_detection_alpha=0.05_p_exp_4.5.pdf}
\includegraphics[width=0.98\linewidth]{ET_detection_alpha=0.05_p_exp_4.5.pdf}
\includegraphics[width=0.98\linewidth]{DECIGO_detection_alpha=0.05_p_exp_4.5.pdf}
\caption{Detection horizons of gravitational waves from our fiducial progenitor model \texttt{250.25} with $p=4.5$ (cf.~Fig.~\ref{fig:Renzo_evolution}), shown across the $\{f_{\rm K},r_{\rm b}\}$ parameter space for Cosmic Explorer (top), the Einstein Telescope (center), and DECIGO (bottom), assuming optimal matched filtering and a signal-to-noise ratio of 8. For progenitors with medium to low rotation, these detectors may be able to detect gravitational waves from superKN collapsars at distances of typically a few hundred Mpc up to a few Gpc. These estimates are based on the corresponding physical frequency regime as indicated in Fig.~\ref{fig:gravitational_wave_frequency_evolution}. The sharp decrease in detection horizon for CE at $\log r_{\rm b}\gtrsim 9.4$ is due to low-frequency emission below 10\,Hz (below CE's sensitivity band).
Contours delineate final BH masses as in previous figures.}
\label{fig:gravitational_wave_detection_horizon}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.98\linewidth]{GW_sensitivity_different_rotation.pdf}
\caption{Same as Fig.~\ref{fig:gravitational_wave_frequency_sensitivity}, but for the progenitor model \texttt{250.25} with $p=4.5$, $r_{\rm b}= 1.5\times10^{9}$\,cm and different values of the Keplerian fraction, $f_{\rm K} = 0.2, 0.4$, and $0.6$. Models with relatively low to medium angular momentum (here $f_{\rm K}\approx 0.2$) generate a stronger signal in all detectors (with a detection horizon of $\sim$400 Mpc for aLIGO at an SNR of 8) than cases with higher angular momentum (large values of $f_{\rm K}$). The $f_{\rm K} = 0.6$ model would only be detectable by BBO, out to a distance of 100 Mpc at an SNR of 8 (cf.~Fig.~\ref{fig:gravitational_wave_detection_horizon_2}).}
\label{fig:gravitational_wave_frequency_sensitivity_diff_rotation}
\end{figure}
\subsection{GW190521}
\label{sec:GW190521}
Our work has several potential implications for the gravitational wave event GW190521{} \citep{Abbott+20_190521}. First, as already discussed, in the standard interpretation of GW190521{} as a binary BH coalescence, mass loss associated with the birth of one or both of the constituent BHs can place them in the nominal PI mass gap ``from above'', even if they would have been above the PI gap had all of the star's mass been accreted at the time of core collapse (Fig.~\ref{fig:Renzo_ejecta}, top panel). Generating a BH with a mass consistent with the more massive member of GW190521{}, $\sim 88 M_{\odot}$ \citep{Abbott+20_190521}, from a star with a helium core nominally above the gap would require the ejection of $\gtrsim 50M_{\odot}$ of material (most of it $r$-process enriched; Sec.~\ref{sec:fallback}). In a direct sense, superKNe probe one channel for forming BHs in the PI mass gap. Our scenario requires a fast-rotating pre-collapse star and predicts that the magnitude of the BH spin would be nearly maximal ($a_{\rm BH,fin} \sim 1$; Fig.~\ref{fig:Renzo_GRB_accretion}, bottom panel). Although a low orbit-aligned spin $\chi_{\rm eff} \lesssim 0.35$ (90\% confidence) was measured for GW190521{}, there is some evidence for a large spin component in the binary plane \citep{Abbott+20_190521}. However, assuming that the progenitor stars can retain large rotation rates \citep[see, however,][]{spruit:02,fuller:19} and that the progenitor of GW190521{}~formed from an isolated stellar binary through common envelope evolution \citep[e.g.,][]{belczynski:16Nat}, stable mass transfer \citep[e.g.,][]{vandenheuvel:17, vanson:21}, or chemically homogeneous evolution driven by tidal interactions \citep[e.g.,][]{maeder:00, demink:16, marchant:16}, one would expect the stellar angular momentum vector, and hence that of the BHs formed from the collapse, to be aligned with the orbital angular momentum \citep{mandel:16b}. In the case of rapidly rotating progenitors, we speculate that misaligned spins could arise from a kick imparted to the BH by mass loss in the disk winds. Our calculations in Sec.~\ref{sec:GW} indicate that the disks formed can become self-gravitating and hence will be subject to bar-mode-like instabilities, generating non-axisymmetric spiral density waves. The latter could impart a non-axisymmetric component to the wind mass loss, which would endow the BH with an effective kick.
To significantly misalign the spins without breaking the binary, the natal kick must be comparable to the pre-collapse orbital velocity of the system, $v_{\rm kick} \sim 300$ km s$^{-1}$ (e.g., \citealt{kalogera:96, callister:20}). Given the characteristic wind ejecta speed $v_{\rm ej} \sim 0.1$ c, momentum conservation implies that an asymmetry in the disk mass-loss rate or velocity at the level of $\gtrsim v_{\rm kick}/v_{\rm ej} \sim 10^{-2}$ would be sufficient to impart significant spin-orbit misalignment. Although self-gravitating instabilities result in non-axisymmetric disk density fluctuations at the level $\delta \rho/\rho \gtrsim 0.1$, quantifying the extent to which these impart non-axisymmetric mass loss will require additional GRMHD simulations of the disk outflows in the regime of massive, self-gravitating disks. In an alternative approach, \citet{shibata_alternative_2021} interpret GW190521{} as gravitational waves from a non-axisymmetric instability, similar to those discussed in Sec.~\ref{sec:GW}, in a massive BH--disk system thought to originate from the collapse of a massive star. In contrast to the BH--torus systems of fixed mass numerically evolved by \citet{shibata_alternative_2021}, in an astrophysical setting mass is continuously fed to the disk at a rate $\dot{m}_{\rm fb, disk}$ due to fallback of the progenitor envelope (Sec.~\ref{sec:fallback}). If the accretion disk approaches the gravitationally unstable regime, its mass evolution $M_{\rm disk}(t)$ is dominated by the addition of fallback material through $\dot{m}_{\rm fb, disk}$. The associated timescale over which the fallback rate changes is $\tau_{\dot{m}_{\rm fb, disk}}\propto t^\alpha$, where $\alpha\approx 1$ (\citealt{siegel_collapsars_2019} and Sec.~\ref{sec:fallback}). This is reflected by the fact that the disks for the progenitor models considered here become gravitationally unstable on a timescale of a few seconds after BH formation, and this instability phase then lasts for a few seconds to tens of seconds (Fig.~\ref{fig:gravitational_instabilities}, Sec.~\ref{sec:GW}). Generating a gravitational-wave signal of only a few cycles and duration $t_{\rm GW}\sim 0.1$\,s, as required by GW190521{} \citep{Abbott+20_190521}, with comparatively negligible amplitude thereafter thus requires a fallback rate of order the total mass of the BH--disk system divided by the duration of the signal, $\dot{m}_{\rm fb, disk} > (M_\bullet + M_{\rm disk})/t_{\rm GW}\sim 650-1000\,M_\odot\,\text{s}^{-1}$, for the configurations considered by \citet{shibata_alternative_2021}. While fallback rates of up to $\sim\!30-40\,M_\odot\,\text{s}^{-1}$ may realistically be reached (cf., e.g., Fig.~\ref{fig:Renzo_evolution}), fallback rates that are larger by one to two orders of magnitude seem implausible even for the most compact progenitor models possible (Sec.~\ref{sec:stellar_models}). These fallback rates also set limits on the compactness, and thus on the gravitational-wave frequency, of possible BH--disk systems; the frequency of gravitational-wave emission may therefore be in tension with GW190521{} as well. While a frequency around the $\sim\!60$\,Hz of GW190521{} may not be impossible per se, our unstable BH--disk systems are typically not compact enough to reach such high frequencies even for an $l=m=2$ mode, and instead strongly prefer maximum frequencies below $20-30$\,Hz (Fig.~\ref{fig:gravitational_wave_frequency_evolution}).
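
As a quick arithmetic check of the fallback-rate requirement quoted above, with total BH--disk masses of $65-100\,M_\odot$ implied by the quoted range and $t_{\rm GW}\sim 0.1$\,s:

\begin{verbatim}
t_gw = 0.1  # s; approximate duration of the GW190521 signal
for m_tot in (65.0, 100.0):  # Msun; implied total BH + disk mass
    print(f"M_tot = {m_tot:5.1f} Msun -> mdot_fb > {m_tot / t_gw:6.0f} Msun/s")
# -> 650 and 1000 Msun/s, well above the realistically achievable
#    ~30-40 Msun/s fallback rates
\end{verbatim}
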
Another consequence of the weaker gravitational wave signals that we predict from massive collapsars is much closer detection horizons, which for Advanced LIGO at O3 sensitivity amount to $\lesssim 100$ Mpc (Fig.~\ref{fig:gravitational_wave_detection_horizon_2}, Appendix \ref{app:GW_emission}); it is unlikely that a superKN transient from GW190521{} would have gone undetected by existing wide-field optical surveys at such close distances.
\subsection{Implications for Galactic $r$-Process Enrichment}
Although the massive collapsars studied here are probably less common by a factor $\gtrsim 10-30$ than the bulk of ordinary collapsars (Sec.~\ref{sec:rates}), they can in principle generate $\sim 10$ times more $r$-process ejecta mass for a similar progenitor angular momentum structure. The contribution of superKNe to total $r$-process production in the Universe could therefore be non-negligible relative to that of ordinary collapsars. SuperKNe are probably too rare to explain the occurrence of individual $r$-process pollution events in small dwarf galaxies (e.g., \citealt{Ji+16}), but this does not exclude them from contributing to larger stellar systems. The total mass of $r$-process elements in the Milky Way is only $\sim 10^{4}M_{\odot}$, so given an $r$-process yield of $\gtrsim 10M_{\odot}$ per superKN, the number of contributing events must be $\lesssim 1000$ and probably $\lesssim 100$ (accounting for the dominant contribution likely coming from other channels such as lower-mass collapsars and neutron star mergers). Depending on the efficiency of gas mixing and retention in the environments of superKNe, subsequent generations of star formation could produce a modest number of extremely $r$-process-enriched stars. The dilution mass of the interstellar medium into which the superKN ejecta is mixed can be estimated as (e.g., \citealt{Macias&RamirezRuiz19})
\begin{equation}
M_{\rm dil} \approx 2\times 10^{7}M_{\odot}\left(\frac{E_{\rm kin}}{5\times 10^{53}\,\rm erg}\right)^{0.97}\left(\frac{n}{\rm cm^{-3}}\right)^{-0.062},
\end{equation}
where we have assumed a sound speed of $10$ km s$^{-1}$ for the interstellar medium. The value of $M_{\rm dil}$ for superKNe is larger than for ordinary SNe ($E_{\rm kin} \sim 10^{51}$ erg) or ordinary collapsars ($E_{\rm kin} \sim 10^{52}$ erg) by a factor of $\gtrsim 10-100$. The total amount of $r$-process material generated by a superKN is larger than for ordinary collapsars by a similar factor $\sim 10$, while the production of $\sim 0.1-0.5M_{\odot}$ of $^{56}$Ni, and hence $^{56}$Fe (Fig.~\ref{fig:Renzo_mass_fractions}), is similar to ordinary collapsars. If a superKN were to occur in otherwise pristine material at very low metallicity (perhaps a questionable idealization given that SNe tend to be spatially and temporally clustered), the next generation of stars formed from this material could possess a metallicity as low as [Fe/H] $\sim -5$ and a europium abundance as high as [Eu/Fe] $\sim 5$, much higher than the current record holder \citep{Reichert+21}. This abundance combination would also contrast strongly with the chemical signatures of PI SNe (e.g., \citealt{woosley:02,Aoki+14}), for which a large quantity of iron-group elements but no $r$-process elements are produced.
\section{Conclusions}
\label{sec:conclusions}
We have explored the collapse of rotating very massive ($\gtrsim 130M_{\odot}$) helium stars and predicted their nucleosynthetic, electromagnetic, and gravitational-wave signatures. Our conclusions can be summarized as follows.
\begin{itemize}
\item Building on \citet*{siegel_collapsars_2019}, we present a semi-analytic model for the BH accretion disk in collapsars and its associated outflows, which predicts the quantity and composition of the disk wind ejecta, as well as the final BH mass and spin, given an assumed angular momentum structure of the progenitor star. The accretion regimes are calibrated based on the results of numerical GRMHD simulations and analytic scaling relations (Appendix \ref{sec:accretion_rates}). Although the radial angular momentum structure of the progenitor star at collapse is theoretically uncertain, our approach allows us to cover a wide portion of the physically allowed parameter space. Applied to ``ordinary'' low-mass collapsars, the model predicts accretion luminosities and durations broadly consistent with long GRB observations.
\item Our main application is to massive collapsars, originating from progenitor stars with final helium core masses $\sim 125-150M_{\odot}$, which avoid pair instability SNe and nominally (in the case of zero mass ejection) would create BHs above the PI mass gap. Analogous to lower-mass collapsars, as the fallback accretion rate declines in time, the composition of the disk outflows systematically evolves from heavier to lighter elements (Fig.~\ref{fig:Renzo_accretion_rate}). Across a wide parameter space of progenitor rotational properties, we find total wind ejecta masses $\sim 10-50M_{\odot}$, of which $\sim 10-60\%$ is composed of $r$-process nuclei, including a sizable quantity of lanthanide elements associated with heavy $r$-process production. The remaining ejecta is primarily unprocessed material (assumed to be $^{4}$He in our models) and a modest quantity, $\sim 0.1-1M_{\odot}$, of $^{56}$Ni, formed during the brief hot, proton-rich phases of the disk evolution.
\item The radioactive decay of $r$-process nuclei and $^{56}$Ni in the ejecta of massive collapsars powers a months-long transient with a peak luminosity $\sim 10^{42}$ erg s$^{-1}$ (Fig.~\ref{fig:sn_bollc}), which we refer to as a ``superKN''. The spectral energy distribution of superKNe near maximum light peaks at several microns due to the large opacity of the lanthanide elements (Fig.~\ref{fig:sn_pkspec}), similar to lanthanide-rich kilonovae from neutron star mergers. Although the bolometric light curves of superKNe are broadly similar to those of common types of core-collapse SNe, their combination of extremely red colors and high-velocity spectral features ($v_{\rm ej} \sim 0.1$ c) should render superKNe distinguishable from other transient classes. Our radiative transfer calculations have assumed a homogeneous ejecta structure; if the ejecta instead exhibit significant radial stratification, particularly a low lanthanide abundance in the highest-velocity outermost layers, then the early light curve could be substantially brighter and bluer than our baseline predictions.
\item Even for progenitor stars well above the PI mass gap at collapse, the final BH remnant can populate the entire mass gap between $\sim 55-130M_{\odot}$ due to disk wind mass loss (e.g., Fig.~\ref{fig:Renzo_ejecta}; Tab.~\ref{tab:ejecta_parameters}). SuperKNe therefore probe one mechanism for populating the PI mass gap ``from above''. The BHs formed through this channel are predicted to be rapidly spinning due to the large accretion of angular momentum, with final Kerr parameter $a_{\rm BH,fin} \sim 1$.
\item One avenue to discover superKNe is via wide-field optical/infrared surveys. A 5-year survey with the {\it Roman Space Telescope}, similar to that planned for Type Ia SNe, could potentially detect $\sim 1-20$ superKNe out to redshift $z \simeq 1$, for an assumed $z = 0$ superKN rate of $\sim 10$ Gpc$^{-3}$ yr$^{-1}$. SuperKNe could also be discovered by LSST, but the detection rate would be lower because the predicted emission peaks at redder wavelengths than those covered by the LSST bands. Measurements of, or limits on, the occurrence rate of superKNe would constrain the birth rate of PI mass-gap BHs via this channel (for comparison, the local rate of GW190521{}-like mergers is $\sim 1$ Gpc$^{-3}$ yr$^{-1}$; \citealt{Abbott+20_190521}). SuperKNe may also be detectable following (particularly energetic) GRBs with {\it JWST} after the GRB afterglow has faded.

\item The large kinetic energy of the superKN ejecta, $\gtrsim 10^{53}$ erg, results in a bright, long-lived synchrotron radio transient as the ejecta decelerates via shock interaction with the circumstellar medium (Sec.~\ref{sec:radio}). However, the slow evolution of the radio emission for typical circumstellar densities will render these sources challenging to identify as radio transients (they may instead appear as luminous persistent sources in star-forming dwarf galaxies, for example; \citealt{Eftekhari+20}). If superKNe occur inside gaseous AGN disks, shock interaction with the dense disk material could substantially enhance the optical luminosity of the event relative to that powered by radioactivity alone. This offers a speculative explanation for the claimed optical counterpart of GW190521{}~\citep{Graham+20}, provided it represents a gravitational wave burst from a core-collapse event \citep{shibata_alternative_2021} rather than a black hole merger (however, see Sec.~\ref{sec:GW190521}).

\item The massive accretion disks of massive collapsars can become gravitationally unstable, generating gravitational wave emission through non-axisymmetric density fluctuations. The predicted duration of the gravitational waves is several seconds or longer (calling into question the core-collapse origin for GW190521{} proposed by \citealt{shibata_alternative_2021}; Sec.~\ref{sec:GW190521}), while the frequency range overlaps the sensitivity windows of ground-based (e.g., LIGO/CE/ET) and space-based intermediate-frequency (e.g., DECIGO, BBO) gravitational-wave detectors. Unlike the gravitational wave signals of compact binary mergers, which increase in frequency and amplitude with time (``chirp''), the gravitational wave signals of collapsar disks {\it decrease} in frequency as the disk radius grows (``sad trombone''; see the illustrative sketch following this list). Our simple estimates suggest that gravitational waves from massive collapsar disks are detectable by CE/ET/DECIGO to distances of up to several hundred Mpc (Figs.~\ref{fig:gravitational_wave_frequency_sensitivity}, \ref{fig:gravitational_wave_detection_horizon}, \ref{fig:gravitational_wave_frequency_sensitivity_diff_rotation}), interior to which the event rate could be as high as once every few years.

\item SuperKNe are unlikely to dominate the total production of $r$-process elements in the Universe, compared to neutron star mergers or ordinary low-mass collapsars, because their progenitors are disfavored by the stellar initial mass function. However, their extremely $r$-process-rich but iron-poor ejecta could in principle seed the creation of a small fraction of stars with abundance ratios more extreme than those of currently known metal-poor $r$-process-enhanced stars (e.g., [Eu/Fe] $\sim 5$).
\end{itemize}
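As a rough illustration of the ``sad-trombone'' frequency evolution noted above, the following sketch evaluates the quadrupole ($m=2$) frequency of a perturbation co-rotating at the local Keplerian angular frequency of the disk, $f_{\rm GW} = m\Omega_{\rm K}/2\pi$ with $\Omega_{\rm K} = (GM_{\rm BH}/r^{3})^{1/2}$. This is a kinematic illustration only, not the detailed model of Sec.~\ref{sec:GW}; the BH mass ($100M_{\odot}$) and the range of disk radii are placeholder values.
\begin{verbatim}
import numpy as np

# physical constants [cgs]
G    = 6.674e-8
c    = 2.998e10
Msun = 1.989e33

M_BH = 100.0 * Msun   # illustrative PI mass-gap BH

def f_gw(r_cm, m=2):
    """GW frequency of an m-armed mode co-rotating at the
    Keplerian angular frequency: f_GW = m * Omega_K / (2 pi)."""
    omega_K = np.sqrt(G * M_BH / r_cm**3)
    return m * omega_K / (2.0 * np.pi)

# disk radius growing from tens to thousands of gravitational radii
r_g = G * M_BH / c**2
for r in [10.0, 100.0, 1e3, 1e4]:
    print(f"r = {r:8.0f} r_g : f_GW ~ {f_gw(r * r_g):10.3g} Hz")
# frequency decreases as the disk spreads: the 'sad trombone'
\end{verbatim}
As the disk radius grows from tens to thousands of gravitational radii, the frequency sweeps downward by orders of magnitude, from the $\sim 10$ Hz band of ground-based detectors toward the deci- and milli-Hz bands targeted by DECIGO/BBO.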
\section*{Acknowledgements}

DMS and AA acknowledge discussions with R.~Essick. This research was enabled in part by support provided by SciNet (www.scinethpc.ca) and Compute Canada (www.computecanada.ca). DMS acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), funding reference number RGPIN-2019-04684. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development Canada and by the Province of Ontario through the Ministry of Colleges and Universities. AA acknowledges support through a MITACS Globalink Graduate Fellowship. JB, BDM, and MR acknowledge support from the National Science Foundation (grant AST-2002577). VAV acknowledges support from the Simons Foundation through a Simons Junior Fellowship (\#718240) during the early phases of this project.